---PAGE_BREAK---
Table D.5: Numerical values for the transverse target double-spin asymmetries measured on protons in bins of $Q^2$, $x_{Bj}$ and $p_T^2$. The systematic uncertainties are obtained using the values given in Table 7.8; a scale uncertainty of 6.2% accounts for the uncertainties in the determination of the target polarisation, the beam polarisation and the target dilution factor for proton data.

| bin | $A_{LT,p}^{\cos(\phi-\phi_S)} \pm \sigma_{stat} \pm \sigma_{sys}$ | $A_{LT,p}^{\cos(2\phi-\phi_S)} \pm \sigma_{stat} \pm \sigma_{sys}$ | $A_{LT,p}^{\cos\phi_S} \pm \sigma_{stat} \pm \sigma_{sys}$ |
|---|---|---|---|
| $x_{Bj}$ bin | | | |
| 0.003 – 0.02 | 0.03 ± 0.05 ± 0.03 | 0.02 ± 0.07 ± 0.03 | -0.05 ± 0.06 ± 0.04 |
| 0.02 – 0.03 | 0.02 ± 0.11 ± 0.06 | 0.10 ± 0.17 ± 0.07 | -0.08 ± 0.15 ± 0.09 |
| 0.03 – 0.05 | 0.10 ± 0.16 ± 0.09 | 0.24 ± 0.22 ± 0.10 | -0.31 ± 0.20 ± 0.12 |
| 0.05 – 0.35 | 0.36 ± 0.29 ± 0.16 | 0.18 ± 0.40 ± 0.17 | 0.02 ± 0.36 ± 0.21 |
| $Q^2$ bin $(\mathrm{GeV}/c)^2$ | | | |
| 1.0 – 1.2 | 0.13 ± 0.09 ± 0.05 | 0.01 ± 0.13 ± 0.06 | 0.01 ± 0.12 ± 0.07 |
| 1.2 – 1.6 | 0.12 ± 0.08 ± 0.06 | 0.09 ± 0.13 ± 0.06 | -0.34 ± 0.12 ± 0.07 |
| 1.6 – 2.4 | -0.06 ± 0.10 ± 0.06 | 0.09 ± 0.15 ± 0.06 | 0.04 ± 0.13 ± 0.08 |
| 2.4 – 10.0 | 0.13 ± 0.14 ± 0.08 | 0.20 ± 0.21 ± 0.09 | -0.08 ± 0.19 ± 0.11 |
| $p_T^2$ bin $(\mathrm{GeV}/c)^2$ | | | |
| 0.05 – 0.10 | 0.09 ± 0.08 ± 0.04 | 0.06 ± 0.12 ± 0.05 | -0.03 ± 0.11 ± 0.06 |
| 0.10 – 0.15 | 0.03 ± 0.11 ± 0.06 | -0.01 ± 0.16 ± 0.07 | -0.22 ± 0.15 ± 0.09 |
| 0.15 – 0.25 | 0.02 ± 0.11 ± 0.06 | 0.14 ± 0.16 ± 0.07 | -0.08 ± 0.15 ± 0.09 |
| 0.25 – 0.35 | 0.04 ± 0.16 ± 0.09 | 0.08 ± 0.23 ± 0.10 | -0.15 ± 0.22 ± 0.13 |
| 0.35 – 0.50 | 0.37 ± 0.20 ± 0.11 | 0.28 ± 0.28 ± 0.12 | -0.21 ± 0.26 ± 0.15 |
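Each point in Table D.5 carries a statistical and a systematic uncertainty, and the caption quotes an additional 6.2% relative scale uncertainty. A minimal sketch of combining such contributions in quadrature (the helper name is hypothetical and quadrature addition of the scale term is a common convention, not necessarily the COMPASS prescription):

```python
import math

def total_uncertainty(value, stat, sys, scale=0.062):
    """Combine a statistical, a systematic and a relative scale
    uncertainty in quadrature (hypothetical helper, illustrative
    convention only)."""
    return math.sqrt(stat**2 + sys**2 + (scale * value)**2)

# First x_Bj bin of A_LT^cos(phi - phi_S) from Table D.5: 0.03 +- 0.05 +- 0.03
sigma_tot = total_uncertainty(0.03, 0.05, 0.03)  # about 0.058
```

For the small asymmetry values in this table the scale term is negligible compared with the statistical uncertainty, which is why it is usually quoted separately rather than folded into each point.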
---PAGE_BREAK---

D.5 Correlation Matrix

Figure D.13: Correlation matrix for the 2DLH fit, shown for the second (left) and third (right) bins in $x_{Bj}$ for the 2007&2010 sample.

Figure D.14: Correlation matrix for the 2DLH fit, shown for the fourth bin in $x_{Bj}$ (left) and the first bin in $Q^2$ (right) for the 2007&2010 sample.
---PAGE_BREAK---

Figure D.15: Correlation matrix for the 2DLH fit, shown for the second (left) and third (right) bins in $Q^2$ for the 2007&2010 sample.

Figure D.16: Correlation matrix for the 2DLH fit, shown for the fourth bin in $Q^2$ (left) and the first bin in $p_T^2$ (right) for the 2007&2010 sample.
---PAGE_BREAK---

Figure D.17: Correlation matrix for the 2DLH fit, shown for the second (left) and third (right) bins in $p_T^2$ for the 2007&2010 sample.

Figure D.18: Correlation matrix for the 2DLH fit, shown for the fourth (left) and fifth (right) bins in $p_T^2$ for the 2007&2010 sample.
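The matrices in Figures D.13 to D.18 display the correlations between the parameters of the 2DLH fit. As a reminder of the construction, a minimal pure-Python sketch (function name hypothetical, not part of the analysis code) of normalising a fit covariance matrix to a correlation matrix, $\rho_{ij} = C_{ij}/\sqrt{C_{ii}\,C_{jj}}$:

```python
import math

def correlation_from_covariance(cov):
    """Normalise a covariance matrix (list of lists) to a correlation
    matrix, rho_ij = C_ij / sqrt(C_ii * C_jj). Hypothetical helper for
    illustration only."""
    d = [math.sqrt(cov[i][i]) for i in range(len(cov))]
    return [[cov[i][j] / (d[i] * d[j]) for j in range(len(cov))]
            for i in range(len(cov))]

corr = correlation_from_covariance([[4.0, 1.0], [1.0, 1.0]])
# diagonal entries are 1 by construction; here the off-diagonal is 0.5
```

The diagonal is unity by construction, so only the off-diagonal entries of the plotted matrices carry information about how strongly the fitted asymmetries are coupled.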
+---PAGE_BREAK---
+
+Acknowledgements
+
+This thesis was carried out at the Physikalisches Institut of the Albert-Ludwigs-Universität Freiburg. I would like to take this opportunity to thank all those who contributed to the success of this work.
+
+Special thanks go to Prof. Dr. Horst Fischer, who proposed the interesting topic and took on the supervision of this thesis, for his tireless support and many valuable suggestions. Throughout the entire preparation of this dissertation he always had an open ear for me and my work.
+
+I thank Prof. Dr. Königsmann for the kind welcome into his working group and for the opportunity to carry out this dissertation within the COMPASS collaboration.
+
+I thank Dr. Heiner Wollny for the good teamwork, many valuable suggestions and fruitful discussions.
+
+I thank Philipp Jörg, Dr. Wolf-Dieter Nowak, Christopher Regali, Sebastian Schopferer, Stefan Sirtl, Tobias Szameitat and Johannes ter Wolbeek for carefully proofreading this thesis.
+
+I thank the entire working group for the pleasant working atmosphere, the help with software and analysis questions, and the instructive discussions and suggestions.
+
+I thank my parents for standing behind me and supporting me throughout this whole time.
\ No newline at end of file
diff --git a/samples/texts_merged/2889479.md b/samples/texts_merged/2889479.md
new file mode 100644
index 0000000000000000000000000000000000000000..29dc756d126b395e89c44a5cf3372271be0a06a1
--- /dev/null
+++ b/samples/texts_merged/2889479.md
@@ -0,0 +1,1031 @@
+
+---PAGE_BREAK---
+
+# Adaptive procedure for Fourier estimators: application to deconvolution and decompounding
+
+Céline Duval
+
+MAP5 UMR CNRS 8145
+e-mail: celine.duval@parisdescartes.fr
+
+and
+
+Johanna Kappus
+
+Institut für Mathematik, Universität Rostock
+e-mail: johanna.kappus@uni-rostock.de
+
+**Abstract:** The purpose of this paper is twofold. First, we introduce a new adaptive procedure to select the optimal – up to a logarithmic factor – cutoff parameter for Fourier density estimators. Two inverse problems are considered: deconvolution and decompounding. Deconvolution is a typical inverse problem, for which our procedure is numerically simple and stable; a comparison is performed with penalized techniques. Moreover, the procedure and the proof of the oracle bounds do not rely on any knowledge of the noise term. Second, for decompounding, i.e. non-parametric estimation of the jump density of a compound Poisson process from the observation of $n$ increments at timestep $\Delta$, we build a unified adaptive estimator which is optimal – up to a logarithmic factor – regardless of the behavior of $\Delta$.
+
+**MSC 2010 subject classifications:** Primary 62C12, 62C20; secondary 62G07.
+
+**Keywords and phrases:** Adaptive density estimation, deconvolution, decompounding, model selection.
+
+Received April 2019.
+
+## 1. Introduction and motivations
+
+### 1.1. *Adaptive procedure*
+
+In the literature on non-parametric statistics a lot of space is dedicated to adaptive procedures. Adaptivity may be understood as minimax-adaptivity, i.e. optimal rates of convergence are attained simultaneously over a collection of classes of densities, such as Sobolev balls. Adaptivity may also refer to proving non-asymptotic oracle bounds, i.e. having a procedure that mimics, up to a constant, the estimator that minimizes a given loss function. It is this last notion of adaptivity that we adopt here.
+---PAGE_BREAK---
+
+Hereafter, we propose an approach that is relevant for inverse problems when the estimator relies on Fourier techniques. This method is inspired by the one introduced in Duval and Kappus [20] and is generalized to the deconvolution and decompounding inverse problems. We present this procedure below in a general context, even though in this article we study oracle bounds for two specific inverse problems: deconvolution and decompounding.
+
+**Heuristic of the adaptive procedure**
+
+Notations. We first introduce some notations which are used throughout the rest of the text. Given a random variable Z, $\varphi_Z(u) = \mathbb{E}[e^{iuZ}]$ denotes the characteristic function of Z. For $f \in L^1(\mathbb{R})$, $\mathcal{F}f(u) = \int e^{iux}f(x)dx$ is understood to be the Fourier transform of f. Moreover, we denote by $\|\cdot\|$ the L$^2$-norm of functions, $\|f\|^2 := \int |f(x)|^2dx$. Given some function $f \in L^1(\mathbb{R}) \cap L^2(\mathbb{R})$, we denote by $f_m$ the uniquely defined function with Fourier transform $\mathcal{F}f_m = (\mathcal{F}f)\mathbf{1}_{[-m,m]}$.
+
+*General statistical setting.* Consider $n$ i.i.d. realizations $Y_j$, $1 \le j \le n$, of a random variable $Y$ with Lebesgue density $f_Y$. Suppose $Y$ is related to a variable $X$ with Lebesgue density $f$ through a known transformation $\mathbf{T}$ relating their characteristic functions: $\varphi_Y = \mathbf{T}(\varphi_X)$, where $\mathbf{T}$ admits a continuous inverse. To estimate the density $f$ of $X$ from the $(Y_j)$, we build an estimator $\hat{\varphi}_{X,n}$ of $\varphi_X$ as follows
+
+$$
+\hat{\varphi}_{X,n}(u) = \mathbf{T}^{-1}(\hat{\varphi}_{Y,n})(u) \quad \text{where } \hat{\varphi}_{Y,n}(u) := \frac{1}{n} \sum_{j=1}^{n} e^{iuY_j}, u \in \mathbb{R}.
+$$
+
+Cutting off in the spectral domain and applying Fourier inversion gives an estimator of $f$
+
+$$
+\hat{f}_m(x) = \frac{1}{2\pi} \int_{-m}^{m} e^{-iux} \hat{\varphi}_{X,n}(u) du, \quad \forall m > 0, x \in \mathbb{R}.
+$$
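In the simplest, direct case ($\mathbf{T}$ is the identity, so $\hat{\varphi}_{X,n} = \hat{\varphi}_{Y,n}$), the estimator $\hat{f}_m$ can be sketched in a few lines. This is an illustrative implementation, not from the paper; the grid size and the $N(2,1)$ test sample are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def ecf(sample, u):
    """Empirical characteristic function of `sample` evaluated on the grid u."""
    return np.exp(1j * np.outer(u, sample)).mean(axis=1)

def f_hat(x, sample, m, n_grid=512):
    """Spectral estimator: Fourier inversion of the empirical characteristic
    function, cut off at frequency m (direct case, T = identity)."""
    u = np.linspace(-m, m, n_grid)
    du = u[1] - u[0]
    # Riemann sum for (1/2pi) * int_{-m}^{m} exp(-i u x) phi_hat(u) du
    return (np.exp(-1j * np.outer(x, u)) @ ecf(sample, u)).real * du / (2 * np.pi)

sample = rng.normal(2.0, 1.0, size=1000)   # illustrative N(2,1) data
x = np.linspace(-2.0, 6.0, 201)
est = f_hat(x, sample, m=3.0)
```

For a Gaussian sample the truncation bias at $m = 3$ is already negligible, so the curve `est` is close to the $N(2,1)$ density.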
+
+Its performance is measured with an $L^2$-loss; the choice of the cutoff parameter $m$ is crucial. The optimal cutoff $m^*$ which minimizes the $L^2$-risk is such that
+
+$$
+\mathbb{E}[\|\hat{f}_{m^*} - f\|^2] = \inf_{m \ge 0} \left\{ \frac{1}{2\pi} \int_{[-m,m]^c} |\varphi_X(u)|^2 du + \frac{1}{2\pi} \int_{-m}^{m} \mathbb{E}[|\hat{\varphi}_{X,n}(u) - \varphi_X(u)|^2] du \right\}. \quad (1.1)
+$$
+
+This optimal value $m^*$ usually depends on the unknown regularity of $f$ and is hence not feasible. An adaptive optimal procedure consists in selecting a random cutoff $\hat{m}_n$, calculated from the observations, for which the $L^2$-risk is close to the one of $\hat{f}_{m^*}$, meaning that one can establish an oracle bound
+
+$$
+\mathbb{E}[\|\hat{f}_{\hat{m}_n} - f\|^2] \le C \inf_{m \ge 0} \left\{ \frac{1}{2\pi} \int_{[-m,m]^c} |\varphi_X(u)|^2 du + \frac{1}{2\pi} \int_{-m}^{m} \mathbb{E}[|\hat{\varphi}_{X,n}(u) - \varphi_X(u)|^2] du \right\} + r_n,
+$$
+
+for a positive constant $C$ and a negligible remainder $r_n$. Then, $\hat{f}_{\hat{m}_n}$ is called an adaptive rate-optimal estimator of $f$.
+---PAGE_BREAK---
+
+Heuristic of the adaptive procedure. Suppose there exists a function $F_{\varphi_Y}$, possibly depending on $\mathbf{T}$ and $\varphi_Y$, such that for some positive constant $C$, it holds
+
+$$ \mathbb{E}[|\hat{\varphi}_{X,n}(u) - \varphi_X(u)|^2] \le C|F_{\varphi_Y}(u)|^2 \mathbb{E}[|\hat{\varphi}_{Y,n}(u) - \varphi_Y(u)|^2], $$
+
+which is the case e.g. if $\mathbf{T}^{-1}$ is Fréchet differentiable: the quantity $F_{\varphi_Y} = (\mathbf{T}^{-1})'(\varphi_Y)$ is explicit in the deconvolution case. Then, it follows
+
+$$ \mathbb{E}[\|\hat{f}_m - f\|^2] \le \frac{1}{2\pi} \int_{[-m,m]^c} |\varphi_X(u)|^2 du + \frac{C}{n} \int_{-m}^{m} |F_{\varphi_Y}(u)|^2 du, \quad m \ge 0. \quad (1.2) $$
+
+The second term in (1.2) is a majorant of the integrated variance of the estimator. If the upper bound (1.2) is optimal, meaning that it has the same order as (1.1), asymptotically we get $m^* \asymp \bar{m}_n$, where $\bar{m}_n$ is such that the bias-variance compromise in the right hand side of (1.2) is realized. To compute $\bar{m}_n$, we differentiate in $m$ the right hand side of (1.2) giving that $\bar{m}_n$ satisfies
+
+$$ |\varphi_X(\bar{m}_n)|^2 = \frac{C}{n} |F_{\varphi_Y}(\bar{m}_n)|^2 \iff |\mathbf{T}^{-1}(\varphi_Y)(\bar{m}_n)|^2 = \frac{C}{n} |F_{\varphi_Y}(\bar{m}_n)|^2. \quad (1.3) $$
+
+Clearly, (1.3) has an empirical version, we select $\hat{m}_n$ accordingly, let
+
+$$ \hat{m}_n \in \left\{ m \in [0, n] \middle| \left| \frac{\mathbf{T}^{-1}(\hat{\varphi}_{Y,n})(m)}{F_{\hat{\varphi}_{Y,n}}(m)} \right| = \frac{\kappa}{\sqrt{n}} \right\}, \quad (1.4) $$
+
+for some $\kappa > 0$, possibly depending on $n$. As the solution of (1.4) may not be unique, we consider one element of this set, such as its maximum.
+
+**Relation to other works** There exist numerous techniques for adaptivity; we mention some of them together with a non-exhaustive list of references. Loosely speaking, there exist three main approaches: thresholding techniques for wavelet density estimators (see e.g. [16, 17, 36]), penalized estimators (see e.g. [3, 1, 31, 29]) and pairwise comparison of estimators, such as the Goldenshluger and Lepskii procedure (see e.g. [23, 24, 27]). These techniques have been developed for different inverse problems and in anisotropic multidimensional settings.
+
+All the aforementioned methods rely on the choice of a parameter to be calibrated by the practitioner, such as $\kappa$ in (1.4). The numerical performance of the selected estimator is sensitive to this choice and many studies have been devoted to the calibration of this parameter (see e.g. Baudry et al. [2] and Lacour et al. [27]). An advantage of the procedure presented here is that, in the cases considered, for all the values of $\kappa$ such that the oracle bound is valid, the corresponding estimator is reasonable.
+
+Many adaptive procedures, such as penalization methods, minimize an empirical version of the upper bound (1.2), while the spirit of (1.4) consists in finding the zeroes of an empirical version of the derivative in $m$ of the upper bound (1.2). Roughly speaking, the difference between our procedure and a penalization procedure is the same as the difference between Z-estimators and M-estimators.
+---PAGE_BREAK---
+
+**Adaptation to the deconvolution problem** We consider the deconvolution problem as it is a prototypical inverse problem that has been extensively studied in the literature (see the references in Section 2). Moreover, it is a building block of many inverse problems. One observes $n$ i.i.d. realizations of $Y_i = X_i+\varepsilon_i$, where $X_i$ and $\varepsilon_i$ are independent, the density of $\varepsilon_1$ is known and the density $f$ of $X_1$ is the quantity of interest. Optimal rates of convergence depend on the asymptotic decay of the characteristic function $\varphi_\varepsilon$ of the noise $\varepsilon$; one usually distinguishes *ordinary smooth cases* – when $\varphi_\varepsilon$ decays polynomially to 0 – and *super smooth cases* – when $\varphi_\varepsilon$ decays exponentially to 0.
+
+For this first inverse problem our procedure presents many advantages. From a theoretical point of view, the procedure and the proof establishing oracle inequalities are the same in both the ordinary smooth and super smooth cases, whereas usual adaptive procedures study these two cases separately. Moreover, the proof of the oracle bound is rather elegant and relies on a fine cutting of the quadratic risk: the most powerful result involved is a Hoeffding concentration inequality. Usually, the tools used to establish oracle inequalities rely on more demanding concentration results. From a numerical point of view, our procedure is simple and, for all the possible choices of the hyperparameter $\kappa$ in (1.4) predicted by the theory, the associated estimators are relevant. We conduct an extensive simulation study which illustrates the stability of the procedure. We compare our results with the penalization procedure described in Comte and Lacour [12], which is known to be fast and efficient in deconvolution contexts.
+
+### 1.2. A unified estimator for decompounding
+
+Consider a compound Poisson process $Z$,
+
+$$Z_t = \sum_{j=1}^{N_t} X_j, \quad t \ge 0,$$
+
+where $N$ is a Poisson process with intensity $\lambda$, independent of the i.i.d. random variables $(X_j)_{j \in \mathbb{N}}$ with common density $f$. In the decompounding problem, one discretely observes one trajectory of $Z$ at sampling rate $\Delta > 0$ over $[0, T]$: $(Z_{i\Delta}, i = 1, \dots, n)$, where $n = \lfloor T/\Delta \rfloor$. The aim is to estimate $f$ from these observations. This model is central in many applied fields, e.g. statistical physics, biology, financial time series or mathematical insurance, as it is well adapted to the study of phenomena where independent random events occur at random times. For instance, in insurance ruin theory these events can model the claims that insurance companies have to pay to the subscribers; this is the Cramér-Lundberg model (see Embrechts et al. [21]).
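The observation scheme above can be reproduced with a short simulation; the intensity, sampling step and jump law below are illustrative choices, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

def compound_poisson_increments(n, delta, lam, jump_sampler):
    """Draw n i.i.d. increments of a compound Poisson process observed at
    sampling rate delta: each increment is a Poisson(lam*delta)-sized sum
    of i.i.d. jumps drawn by jump_sampler."""
    counts = rng.poisson(lam * delta, size=n)
    return np.array([jump_sampler(k).sum() for k in counts])

# Illustrative choices: lambda = 2, delta = 0.5, standard normal jumps, so
# E[Z_delta] = lam * delta * E[X] = 0 and Var[Z_delta] = lam * delta * E[X^2] = 1.
z = compound_poisson_increments(10000, 0.5, 2.0, lambda k: rng.normal(size=k))
```

The i.i.d. structure of the increments is what makes the Fourier approach of Section 1.1 applicable here.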
+
+In the literature, the cases $\Delta \to 0$ (high-frequency observations) and $\Delta$ fixed (low-frequency observations, often $\Delta = 1$) have received a lot of attention (see the references given in Section 3) and are usually considered separately. Here we propose a unified strategy which is valid regardless of the behavior of $\Delta := \Delta_n \to \Delta_0 \in [0, \infty)$. The dependency in $\Delta_0$ of the upper bound is made
+---PAGE_BREAK---
+
+explicit and shows a deterioration as $\Delta_0$ increases. These results complement the knowledge on decompounding. Moreover, the estimator remains consistent in cases where $\Delta$ grows to infinity slowly. This latter result is not straightforward: if the jump density is centered and has unit variance, it holds that $Z_{\Delta} = \sqrt{\lambda\Delta}\,\zeta_{\Delta}$, where $\zeta_{\Delta}$ converges in distribution to $\mathcal{N}(0, 1)$ as $\Delta \to \infty$. Therefore, one would expect that in these regimes non-parametric estimation of $f$ is impossible, as each increment is close, in law, to a parametric Gaussian variable. When $\Delta$ goes too rapidly to infinity, namely as a power of $n\Delta_n$, Duval [19] shows that consistent non-parametric estimation of $f$ is impossible, regardless of the choice of the loss function. Having a consistent estimator of $f$ when $\Delta$ gets large is interesting: usually a Gaussian approximation is made to simplify computations, at the expense of losing the specificities of the jump density (see e.g. Cont and de Larrard [15]).
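This Gaussian limit is easy to check numerically. The sketch below is an illustrative simulation (all parameter values are arbitrary): with centered, unit-variance jumps, the rescaled increments $Z_\Delta/\sqrt{\lambda\Delta}$ are already close to standard normal for moderately large $\Delta$.

```python
import numpy as np

rng = np.random.default_rng(2)

# Compound Poisson increments with centered, unit-variance (standard normal)
# jumps: Z_delta / sqrt(lam * delta) should be close to N(0, 1) for large delta.
lam, delta, n = 1.0, 50.0, 20000
counts = rng.poisson(lam * delta, size=n)
z = np.array([rng.normal(size=k).sum() for k in counts])
zeta = z / np.sqrt(lam * delta)          # rescaled increments

# Fraction of rescaled increments within one standard deviation;
# for an exact N(0,1) this would be about 0.683.
frac = np.mean(np.abs(zeta) < 1.0)
```

The excess kurtosis of $\zeta_\Delta$ decays like $1/(\lambda\Delta)$, which quantifies how quickly the non-parametric information about $f$ is washed out.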
+
+Finally, we show that our adaptive procedure can be extended to this case and leads to an adaptive and rate-optimal estimator of the jump density $f$, up to a logarithmic loss, for all sampling rates such that $\Delta_n < \frac{1}{4}\log(n\Delta_n)$ as $n \to \infty$; this condition is fulfilled for fixed or vanishing $\Delta$.
+
+**Organisation of the article** In Section 2 we establish and prove an oracle inequality based on our procedure for the deconvolution problem. We illustrate its performance numerically and compare it with the penalized adaptive optimal estimator given in [12]. Section 3 is dedicated to the decompounding problem. Finally, Section 4 gathers the proofs of the results of Section 3.
+
+## 2. Deconvolution
+
+### 2.1. Statistical setting
+
+Suppose that $X_1, \dots, X_n$ are i.i.d. with density $f$ and are accessible through the noisy observations
+
+$$Y_j = X_j + \varepsilon_j, \quad j = 1, \dots, n.$$
+
+Assume that the $(\varepsilon_j)$ are i.i.d., independent of the $(X_j)$ and such that $\forall u \in \mathbb{R}, \varphi_\varepsilon(u) \neq 0$. Suppose that the distribution of $\varepsilon_1$ is known. This last assumption can be relaxed: the procedure allows a straightforward generalization to the case where the distribution of $\varepsilon_1$ can be estimated from an additional sample, see Neumann [32]. Then, the mapping $\mathbf{T}$ defined in Section 1.1 is given by $\mathbf{T}: \varphi \mapsto \varphi\varphi_\varepsilon$, which is a continuous mapping with inverse $\mathbf{T}^{-1}: \varphi \mapsto \varphi/\varphi_\varepsilon$, and $(\mathbf{T}^{-1})'$ is equal to $1/\varphi_\varepsilon$.
+
+A deconvolution estimator of the characteristic function $\varphi_X$ of $X$ is given by
+
+$$\frac{\hat{\varphi}_{Y,n}(u)}{\varphi_\varepsilon(u)}, \quad \text{with} \quad \hat{\varphi}_{Y,n}(u) := \frac{1}{n} \sum_{j=1}^{n} e^{iuY_j}, \quad u \in \mathbb{R},$$
+
+denoting the empirical characteristic function. Since $\varphi_X$ is a characteristic function, its absolute value is bounded by 1 and the estimator can hence be improved
+---PAGE_BREAK---
+
+by using the definition
+
+$$
+\hat{\varphi}_{X,n}(u) := \frac{\hat{\varphi}_{Y,n}(u)}{\varphi_{\varepsilon}(u)} \frac{1}{\max\{1, |\frac{\hat{\varphi}_{Y,n}(u)}{\varphi_{\varepsilon}(u)}|\}} , \quad u \in \mathbb{R}. \qquad (2.1)
+$$
+
+Cutting off in the spectral domain and applying Fourier inversion gives the estimator
+
+$$
+\hat{f}_m(x) = \frac{1}{2\pi} \int_{-m}^{m} e^{-iux} \hat{\varphi}_{X,n}(u) du, \quad x \in \mathbb{R}. \tag{2.2}
+$$
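Estimators (2.1)–(2.2) are straightforward to implement when $\varphi_\varepsilon$ is known in closed form. Below, an illustrative sketch with Laplace(0,1) noise, whose characteristic function is $1/(1+u^2)$; the sample sizes, grids and the $N(2,1)$ target are arbitrary choices, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 5000
x_hidden = rng.normal(2.0, 1.0, size=n)        # unobserved X ~ N(2,1)
y = x_hidden + rng.laplace(0.0, 1.0, size=n)   # observed Y = X + eps

def phi_eps(u):
    """Known characteristic function of the Laplace(0,1) noise."""
    return 1.0 / (1.0 + u**2)

def f_hat_deconv(x, y, m, n_grid=512):
    """Deconvolution estimator (2.2) built from the normalized ratio (2.1)."""
    u = np.linspace(-m, m, n_grid)
    phi_y = np.exp(1j * np.outer(u, y)).mean(axis=1)   # empirical cf of Y
    ratio = phi_y / phi_eps(u)
    ratio = ratio / np.maximum(1.0, np.abs(ratio))     # clip modulus at 1, as in (2.1)
    du = u[1] - u[0]
    return (np.exp(-1j * np.outer(x, u)) @ ratio).real * du / (2 * np.pi)

x = np.linspace(-2.0, 6.0, 201)
est = f_hat_deconv(x, y, m=2.0)
```

Dividing by $\varphi_\varepsilon$ amplifies the sampling noise at high frequencies, which is exactly why the choice of the cutoff $m$ matters.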
+
+This estimator and adaptation techniques have been extensively studied in the literature, including in more general settings than above. Optimal rates of convergence and adaptive procedures are well known in dimension $d=1$ (see e.g. [8, 37, 22, 5, 6, 7, 34, 14] for L$^2$-loss functions or [30] for the L$^\infty$-loss). Results have also been established for multivariate anisotropic densities (see e.g. [13] for L$^2$-loss functions or [35, 28] for L$^p$-loss functions, $p \in [1, \infty]$).
+
+### 2.2. Risk bounds and adaptive bandwidth selection
+
+The following risk bound is well known in the literature on deconvolution.
+
+$$
+\mathbb{E}[\|\hat{f}_m - f\|^2] \le \frac{1}{2\pi} \left( \int_{[-m,m]^c} |\varphi_X(u)|^2 du + \frac{1}{n} \int_{-m}^{m} \frac{du}{|\varphi_\epsilon(u)|^2} \right). \quad (2.3)
+$$
+
+This upper bound is the sum of a bias term that decreases with *m* and a variance term, increasing with *m*. We select $\overline{m}_n$, the optimal cutoff parameter, such that the upper bound (2.3) is minimal
+
+$$
+\overline{m}_n \in \operatorname*{argmin}_{m \ge 0} \left\{ \int_{[-m,m]^c} |\varphi_X(u)|^2 du + \frac{1}{n} \int_{-m}^{m} \frac{du}{|\varphi_\epsilon(u)|^2} \right\}.
+$$
+
+Differentiating the right hand side with respect to $m$, we find that the following holds for the optimal cutoff parameter:
+
+$$
+|\varphi_X(\bar{m}_n)|^2 = \frac{1}{n|\varphi_\epsilon(\bar{m}_n)|^2} \iff |\varphi_X(\bar{m}_n)\varphi_\epsilon(\bar{m}_n)| = |\varphi_Y(\bar{m}_n)| = \frac{1}{\sqrt{n}}. \quad (2.4)
+$$
+
+This equality has an empirical version and we select $\hat{m}_n$ accordingly. In order to ensure adaptivity, the following heuristic consideration is helpful. When the characteristic function is replaced by its empirical version, the standard deviation is of the order $n^{-1/2}$. Consequently, estimating $\varphi_Y$ by $\hat{\varphi}_{Y,n}$ makes sense for $|\varphi_Y| \ge n^{-1/2}$. If $|\varphi_Y| < n^{-1/2}$, the noise is dominant, so the estimator can be set to zero. This suggests redefining the estimator of $\varphi_Y$ as follows:
+
+$$
+\tilde{\varphi}_{Y,n}(u) = \hat{\varphi}_{Y,n}(u) \mathbf{1}_{\{|\hat{\varphi}_{Y,n}(u)| \ge \kappa_n n^{-1/2}\}}, \quad u \in \mathbb{R}, \qquad (2.5)
+$$
+---PAGE_BREAK---
+
+with the threshold value $\kappa_n := (1 + \kappa\sqrt{\log n})$. The constant $\kappa > 0$ is specified below and the additional log term is added to ensure good concentration properties on the event considered. Then, the estimator of $f$ given in (2.2) is modified using (2.5) instead of (2.1). We obtain
+
+$$ \tilde{\varphi}_{X,n}(u) := \frac{\tilde{\varphi}_{Y,n}(u)}{\varphi_{\varepsilon}(u)} \frac{1}{\max\{1, |\frac{\tilde{\varphi}_{Y,n}(u)}{\varphi_{\varepsilon}(u)}|\}} , \quad u \in \mathbb{R} $$
+
+and
+
+$$ \tilde{f}_m(x) = \frac{1}{2\pi} \int_{-m}^{m} e^{-iux} \tilde{\varphi}_{X,n}(u) du, \quad x \in \mathbb{R}. $$
+
+We define the empirical cutoff parameter $\hat{m}_n$ as follows. Since $\hat{\varphi}_{Y,n}$ may show an oscillatory behavior and the solution of (2.4) may not be unique, we consider
+
+$$ \hat{m}_n = \max \left\{ m > 0 : |\hat{\varphi}_{Y,n}(m)| = \kappa_n n^{-1/2} \right\} \wedge n^{\alpha}, \qquad (2.6) $$
+
+for some $\alpha \in (0, 1]$. It is worth emphasizing that the calculation of $\hat{m}_n$ relies only on the empirical characteristic function $\hat{\varphi}_{Y,n}$, which can be computed from the direct observations, and does not require the evaluation of penalty terms depending on the (possibly unknown) $\varphi_\varepsilon$. Moreover, this procedure is the same in the super smooth case as in the ordinary smooth case. If we set $\alpha = 1$ in (2.6), $\hat{m}_n$ is capped at the value $n$, which is natural since for $m \ge n$ the variance term in (2.3) no longer tends to 0. It is possible to set $0 < \alpha < 1$ if one has additional knowledge on the regularity of $f$ (see the discussion below).
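In practice, the selection rule (2.6) reduces to scanning a frequency grid for the last crossing of the threshold $\kappa_n n^{-1/2}$. The sketch below is one possible implementation; the grid resolution, scan range and the test distribution are illustrative choices, not prescribed by the paper.

```python
import numpy as np

rng = np.random.default_rng(4)

def select_cutoff(y, kappa=2.5, alpha=1.0, grid_step=0.05, scan_max=50.0):
    """Empirical cutoff in the spirit of (2.6): the largest grid frequency m
    at which |empirical cf of Y| still reaches kappa_n / sqrt(n),
    capped at n**alpha (here also at a practical scan range)."""
    n = len(y)
    thresh = (1.0 + kappa * np.sqrt(np.log(n))) / np.sqrt(n)
    grid = np.arange(grid_step, min(n**alpha, scan_max), grid_step)
    ecf = np.abs(np.exp(1j * np.outer(grid, y)).mean(axis=1))
    above = np.nonzero(ecf >= thresh)[0]
    return grid[above[-1]] if above.size else grid_step

# Illustrative data: Y = X + eps with X ~ N(0,1) and Laplace(0,1) noise.
y = rng.normal(0.0, 1.0, size=2000) + rng.laplace(0.0, 1.0, size=2000)
m_hat = select_cutoff(y)
```

Note that only $\hat{\varphi}_{Y,n}$ enters the rule; $\varphi_\varepsilon$ is never evaluated.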
+
+**Theorem 2.1.** Let $\hat{m}_n$ be defined as in (2.6), with $\kappa > \sqrt{2}$ and $\alpha \in (0, 1]$. Then, there exist a positive constant $C_1$ depending only on the choice of $\kappa$ and a universal positive constant $C_2$ such that
+
+$$ \mathbb{E}[\|f - \tilde{f}_{\hat{m}_n}\|^2] \le C_1 \inf_{m \in [0, n^\alpha]} \left\{ \int_{[-m,m]^c} |\varphi_X(u)|^2 du + \frac{\log n}{n} \int_{-m}^{m} \frac{du}{|\varphi_\varepsilon(u)|^2} \right\} + C_2 n^{\alpha - \frac{\kappa^2}{2}}. $$
+
+*Proof of Theorem 2.1. Step 1: An upper bound for $\check{f}_m$.*
+Let $m > 0$. We first establish an upper bound for the estimator $\check{f}_m$ of $f$, defined as $\tilde{f}_m$ but without the normalization, i.e. with Fourier transform $\mathcal{F}\check{f}_m = (\tilde{\varphi}_{Y,n}/\varphi_{\varepsilon})\mathbf{1}_{[-m,m]}$, where $\tilde{\varphi}_{Y,n}$ is given by (2.5). The Parseval equality and the definition (2.5) of $\tilde{\varphi}_{Y,n}$ give
+
+$$ \mathbb{E}[\|\check{f}_m - f\|^2] \le \|f_m - f\|^2 + \frac{1}{2\pi} \int_{-m}^{m} \frac{\mathbb{E}[|\tilde{\varphi}_{Y,n}(u) - \varphi_Y(u)|^2]}{|\varphi_\varepsilon(u)|^2} du, $$
+
+where $\mathbb{E}[|\tilde{\varphi}_{Y,n}(u) - \varphi_Y(u)|^2]$ equals
+
+$$ \mathbb{E}[|\hat{\varphi}_{Y,n}(u) - \varphi_Y(u)|^2 1_{|\hat{\varphi}_{Y,n}(u)| \ge \frac{\kappa_n}{\sqrt{n}}}] + |\varphi_Y(u)|^2 \mathbb{P}\left(|\hat{\varphi}_{Y,n}(u)| < \frac{\kappa_n}{\sqrt{n}}\right). $$
+---PAGE_BREAK---
+
+The first term in the right hand side is bounded by $1/n$. Recalling that $\kappa_n = 1 + \kappa\sqrt{\log n}$, we decompose the second term according to the set $\mathcal{A} = \{u : |\varphi_Y(u)| < \frac{1+2\kappa\sqrt{\log n}}{\sqrt{n}}\}$; this leads to
+
+$$
+\begin{align*}
+|\varphi_Y(u)|^2 \mathbb{P}(|\hat{\varphi}_{Y,n}(u)| < \frac{\kappa_n}{\sqrt{n}}) &\le \frac{(1+2\kappa\sqrt{\log n})^2}{n} + \mathbb{P}(|\hat{\varphi}_{Y,n}(u) - \varphi_Y(u)| \ge \frac{\kappa\sqrt{\log n}}{\sqrt{n}}) \\
+&\le \frac{(1+2\kappa\sqrt{\log n})^2}{n} + n^{-\frac{\kappa^2}{2}} \le \frac{1+(1+2\kappa\sqrt{\log n})^2}{n},
+\end{align*}
+$$
+
+where we used the Hoeffding inequality and $\kappa > \sqrt{2}$. Finally, gathering all the above inequalities, together with the fact that the normalization in the definition of $\tilde{\varphi}_{X,n}$ can only decrease the risk (since $|\varphi_X| \le 1$), we get the following upper bound for $\tilde{f}_m$, $0 < m \le n$,
+
+$$
+\mathbb{E}[||\tilde{f}_m - f||^2] \le ||f_m - f||^2 + \frac{2 + (1 + 2\kappa\sqrt{\log n})^2}{2\pi n} \int_{-m}^{m} \frac{du}{|\varphi_\epsilon(u)|^2}. \quad (2.7)
+$$
+
+**Step 2: Adaptation.** It holds using the Parseval equality, that
+
+$$
+\begin{align*}
+\mathbb{E}[||\tilde{f}_{\hat{m}_n} - f||^2] &= \mathbb{E}\left[\int_{u \in [-\hat{m}_n, \hat{m}_n]^c} |\varphi_X(u)|^2 du\right] + \mathbb{E}\left[\int_{u \in [-\hat{m}_n, \hat{m}_n]} |\tilde{\varphi}_{X,n}(u) - \varphi_X(u)|^2 du\right] \\
+&:= T_1 + T_2.
+\end{align*}
+$$
+
+Let $0 < m \le n$ be fixed. First, consider the event $\mathcal{E} = \{\hat{m}_n < m\}$; on this event we have the straightforward upper bound $T_2 \le \mathbb{E}[\int_{[-m,m]} |\tilde{\varphi}_{X,n}(u) - \varphi_X(u)|^2 du]$, and we recover a usual variance term. Now we control the surplus in the bias of the estimator $\tilde{f}_{\hat{m}_n}$, decomposing $T_1$ as
+
+$$
+T_1 = \int_{[-m,m]^c} |\varphi_X(u)|^2 du + \mathbb{E} \left[ \int_{|u| \in [\hat{m}_n, m]} |\varphi_X(u)|^2 du \right].
+$$
+
+It is the sum of a usual bias term and an additional term controlled using the inequality
+
+$$
+|\varphi_X|^2 \leq 2 \frac{|\hat{\varphi}_{Y,n}|^2}{|\varphi_\varepsilon|^2} + 2 \frac{|\varphi_Y - \hat{\varphi}_{Y,n}|^2}{|\varphi_\varepsilon|^2}.
+$$
+
+Along with the definition of $\hat{m}_n$, it gives
+
+$$
+\begin{align*}
+\mathbb{E}[\mathbf{1}_{\mathcal{E}} \int_{|u| \in [\hat{m}_n, m]} |\varphi_X(u)|^2 du] &\le 2\mathbb{E}[\mathbf{1}_{\mathcal{E}} \int_{|u| \in [\hat{m}_n, m]} \frac{\kappa_n^2 n^{-1}}{|\varphi_\varepsilon(u)|^2} du] \\
+&\quad + \int_{-m}^{m} \frac{\mathbb{E}[|\hat{\varphi}_{Y,n}(u) - \varphi_Y(u)|^2]}{|\varphi_\varepsilon(u)|^2} du \\
+&\le \int_{-m}^{m} \frac{2(\kappa_n^2 + 1)n^{-1}}{|\varphi_\varepsilon(u)|^2} du.
+\end{align*}
+$$
+
+Then, the definition of $\kappa_n$ and (2.7) imply that, on the event $\mathcal{E}$, for a positive constant $C$ depending only on the choice of $\kappa$,
+
+$$
+\mathbb{E}[||\tilde{f}_{\hat{m}_n} - f||^2 \mathbf{1}_{\mathcal{E}}] \le ||f_m - f||^2 + C \frac{\log n}{n} \int_{-m}^{m} \frac{du}{|\varphi_{\epsilon}(u)|^2}.
+$$
+---PAGE_BREAK---
+
+Second, on the complement set $\mathcal{E}^c$, we immediately have $T_1 \le \int_{[-m,m]^c} |\varphi_X(u)|^2 du$. It remains to control the surplus in the variance of $\tilde{f}_{\hat{m}_n}$ using the decomposition
+
+$$T_2 = \mathbb{E} \left[ \int_{u \in [-m, m]} |\tilde{\varphi}_{X,n}(u) - \varphi_X(u)|^2 du \right] + \mathbb{E} \left[ \int_{|u| \in [m, \hat{m}_n]} |\tilde{\varphi}_{X,n}(u) - \varphi_X(u)|^2 du \right].$$
+
+By the definition of $\hat{m}_n$, it holds
+
+$$\begin{aligned}
+\mathbb{E} \left[ \int_{|u| \in [m, \hat{m}_n]} |\tilde{\varphi}_{X,n}(u) - \varphi_X(u)|^2 \mathbf{1}_{\{|\varphi_Y(u)| > n^{-1/2}\}} du \, \mathbf{1}_{\mathcal{E}^c} \right] & \\
+& \leq \int_{|u| \in [m, n^\alpha]} \frac{\mathbb{E}[|\tilde{\varphi}_{Y,n}(u) - \varphi_Y(u)|^2]}{|\varphi_\varepsilon(u)|^2} \mathbf{1}_{\{|\varphi_Y(u)| > n^{-1/2}\}} du.
+\end{aligned}$$
+
+On the set $\{u : |\varphi_Y(u)| > n^{-1/2}\}$, we derive that
+
+$$\begin{aligned}
+\mathbb{E}[|\tilde{\varphi}_{Y,n}(u) - \varphi_Y(u)|^2] &\leq |\varphi_Y(u)|^2 + \mathbb{E}[|\hat{\varphi}_{Y,n}(u) - \varphi_Y(u)|^2] \\
+&\leq |\varphi_Y(u)|^2 + \frac{1}{n} \leq 2|\varphi_Y(u)|^2.
+\end{aligned}$$
+
+Consequently, we get
+
+$$\mathbb{E} \left[ \int_{|u| \in [m, \hat{m}_n]} |\tilde{\varphi}_{X,n}(u) - \varphi_X(u)|^2 \mathbf{1}_{\{|\varphi_Y(u)| > n^{-1/2}\}} du \, \mathbf{1}_{\mathcal{E}^c} \right] \le 2 \int_{[-m, m]^c} |\varphi_X(u)|^2 du.$$
+
+Next, using that $|\tilde{\varphi}_{X,n}(u)| \le 1$, we derive that
+
+$$\begin{aligned}
+& \mathbb{E} \left[ \int_{|u| \in [m, \hat{m}_n]} |\tilde{\varphi}_{X,n}(u) - \varphi_X(u)|^2 \mathbf{1}_{\{|\varphi_Y(u)| \le n^{-1/2}\}} du \, \mathbf{1}_{\mathcal{E}^c} \right] \\
+&\leq \int_{|u| \in [m, n^\alpha]} |\varphi_X(u)|^2 du + 4 \int_{|u| \in [m, n^\alpha]} \mathbb{P}(|\hat{\varphi}_{Y,n}(u)| \geq \kappa_n n^{-1/2}) \mathbf{1}_{\{|\varphi_Y(u)| \leq n^{-1/2}\}} du \\
+&\leq \int_{|u| \in [m, n^\alpha]} |\varphi_X(u)|^2 du + 4 \int_{|u| \in [m, n^\alpha]} \mathbb{P}(|\hat{\varphi}_{Y,n}(u) - \varphi_Y(u)| > \kappa(\log n/n)^{1/2}) du \\
+&\leq \int_{[-m, m]^c} |\varphi_X(u)|^2 du + 8n^{\alpha-\kappa^2/2}.
+\end{aligned}$$
+
+The last inequality is a direct consequence of the Hoeffding inequality. Putting the above together, we have shown that, for universal positive constants $C_1$ and $C_3$ and a constant $C_2$ depending only on $\kappa$, for all $m \ge 0$,
+
+$$\mathbb{E}[||\tilde{f}_{\hat{m}_n} - f||^2] \le C_1 ||f - f_m||^2 + C_2 \frac{\log n}{n} \int_{-m}^{m} \frac{du}{|\varphi_\epsilon(u)|^2} + C_3 n^{\alpha - \kappa^2/2}.$$
+
+Taking the infimum over *m* completes the proof. □
+---PAGE_BREAK---
+
+**Discussion** Theorem 2.1 is non-asymptotic and ensures that the estimator $\tilde{f}_{\hat{m}_n}$ automatically reaches the bias-variance compromise, up to a logarithmic factor and the multiplicative constant $C_1$.
+
+*Comments on the adaptive procedure.* Similarly to the grouped-data setting (see [20]), the computation of the adaptive cutoff (2.6) involves only the set $\{\lvert \hat{\varphi}_{Y,n}(u) \rvert = (1 + \kappa\sqrt{\log n})/\sqrt{n}\}$. Therefore, $\hat{m}_n$ depends only on the empirical characteristic function of the direct observations $Y$, and neither on the characteristic function of the errors $\varphi_\varepsilon$ nor on $\varphi_X$; the adaptive estimator and the proof of the oracle bound are the same in the super smooth case and the ordinary smooth case. Generalizing the result to the case where the distribution of the error is unknown but where we have e.g. independent i.i.d. realizations $(\varepsilon_1, \dots, \varepsilon_N)$ of $\varepsilon$ should therefore be straightforward.
+
+*Comments on the proof.* The proof of Theorem 2.1 is self-contained and relies on fine cuttings of the quadratic risk. The most involved tool used is a Hoeffding inequality, whereas usual techniques involve stronger results such as Talagrand inequalities. The interest is that the proof should be robust to small changes in the model.
+
+Note that, compared to [20], the proof relies on more direct arguments. Moreover, it permits deriving a stronger result, namely an oracle-type inequality, whereas in [20] we ensured that *given some regularity class* the optimal rate is achieved on this class.
+
+*Choice of the hyperparameters in (2.6).* Regarding the choice of $\alpha$ and $\kappa$ in (2.6), it is always possible to take $\alpha = 1$. Note that the case $\alpha > 1$ is not interesting as, even in the direct problem $\varepsilon = 0$ a.s., if $m > n$ the variance term in (2.3) no longer tends to 0. Taking $\alpha < 1$ is possible only if one has additional information on the target density $f$. For instance, suppose one knows that $f$ is in a Sobolev class of regularity $\beta$, for some $\beta \ge \beta_0 > 0$,
+
+$$ f \in \mathcal{S}(\beta, L) := \left\{ f \in \mathbf{F}, \int_{\mathbb{R}} (1 + |u|)^{2\beta} |\mathcal{F}f(u)|^2 du \le L \right\} \quad (2.8) $$
+
+where $\mathbf{F}$ is the set of densities with respect to the Lebesgue measure. Then, it holds that $\|f - f_m\|^2 \asymp m^{-2\beta}$ and straightforward computations lead to $m^* \lesssim (n/\log n)^{\frac{1}{2\beta+1}}$ (regardless of the asymptotic decay of $\varphi_\varepsilon$). Then, one may restrict the interval for $\hat{m}_n$ to $[0, n^\alpha]$ with $1 > \alpha > \frac{1}{2\beta_0+1}$. Second, the choice of $\kappa$ must be such that $n^{\alpha-\kappa^2/2}$ is negligible; the choice $\kappa > 2$ always works. The following numerical study illustrates that the procedure is stable in the choice of $\kappa$.
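For completeness, a sketch of the computation behind this bound, using only the Sobolev bias $\|f - f_m\|^2 \asymp m^{-2\beta}$ and the fact that $|\varphi_\varepsilon| \le 1$:

$$ \frac{\log n}{n}\int_{-m}^{m}\frac{du}{|\varphi_\varepsilon(u)|^{2}} \;\ge\; \frac{2m\log n}{n} \qquad \text{since } |\varphi_\varepsilon| \le 1. $$

Balancing the bias against this lower bound on the variance term, $m^{-2\beta} \asymp m\log n/n$, gives $m \asymp (n/\log n)^{\frac{1}{2\beta+1}}$, so the optimal cutoff can never exceed this order, whatever the decay of $\varphi_\varepsilon$.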
+
+### 2.3. Numerical results
+
+**Stability of the procedure** To illustrate the performance of the method and the influence of the parameter $\kappa$, we proceed as follows. Fix $\alpha = 1$; therefore
+---PAGE_BREAK---
+
+FIG 1. **Direct problem:** Computations by $M = 1000$ Monte Carlo iterations of the L$^2$-risks (y axis) for different values of $\kappa \le (\sqrt{n}-1)\sqrt{\log(n)^{-1}} = 11.6$ (x axis). Estimation of $f$ from $n = 1000$ i.i.d. direct realizations for different distributions: Uniform $U[1,3]$ (plain line), Gaussian $N(2,1)$ (dots), Cauchy (stars), Gamma $\Gamma(2,1)$ (dotted line) and the mixture $0.7N(4,1)+0.3\Gamma(2, \frac{1}{2})$ (triangles).
+
+we do not assume any regularity on the considered examples. For different densities $f$, namely, Uniform $U[1,3]$, Gaussian $N(2,1)$, Cauchy, Gamma $\Gamma(2,1)$ and the mixture $0.7N(4,1)+0.3\Gamma(2, \frac{1}{2})$, and for different values of $\kappa$ we compute the adaptive L$^2$ risks from $M = 1000$ Monte Carlo iterations. The results are displayed on Figures 1, 2 and 3. We consider three different settings:
+
+• The direct density estimation problem (Figure 1): we observe i.i.d. realizations of $f$. It is a particular deconvolution problem where $\varepsilon = 0$ a.s.
+
+• Deconvolution problem with ordinary smooth noise (Figure 2): the error $\varepsilon$ is Gamma $\Gamma(2,1)$ i.e. $|\varphi_{\varepsilon}|$ decays as $|u|^{-2}$ asymptotically.
+
+• Deconvolution problem with super smooth noise (Figure 3): the error $\varepsilon$ is Cauchy i.e. $|\varphi_{\varepsilon}|$ decays as $e^{-|u|}$ asymptotically.
+
+On Figures 1, 2 and 3 we observe that the adaptive risks are small and that the procedure is stable in the choice of $\kappa$. In these three cases, the value of $\kappa$ should not be chosen too large, but the performances are similar over a wide range of values. In practice, the value of $n$ is fixed and there is a natural upper bound for $\kappa$: it is useless to increase $\kappa$ once $(1+\kappa \sqrt{\log n})n^{-1/2} \ge 1$, as the selection rule (2.6) is then constantly equal to $n^\alpha$. Moreover, we expect that if $(1+\kappa \sqrt{\log n})n^{-1/2}$ gets too large, e.g. larger than $1/2$, the performance of the adaptive estimator should deteriorate. This practical consideration encourages choosing $\kappa$ smaller than $(\sqrt{n}-1)\sqrt{\log(n)^{-1}}$. In Figures 1, 2 and 3 it appears that for all meaningful values of $\kappa$, e.g. smaller than $\frac{1}{2}(\sqrt{n}-1)\sqrt{\log(n)^{-1}}$, the performances of the adaptive estimator are similar.
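The practical upper bound on $\kappa$ is elementary to compute; a minimal sketch (ours), reproducing the numerical values quoted in the captions of Figures 1-3:

```python
import math

def kappa_max(n):
    """kappa beyond which (1 + kappa * sqrt(log n)) / sqrt(n) >= 1, i.e. the
    threshold saturates and the selection rule returns n**alpha."""
    return (math.sqrt(n) - 1.0) / math.sqrt(math.log(n))

print(kappa_max(1000))    # ≈ 11.65 for n = 1000 (cf. the caption of Figure 1)
print(kappa_max(10_000))  # ≈ 32.62 for n = 10000 (cf. Figures 2 and 3)
```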
+
+**Comparison with a penalization procedure** We compare the performances of our procedure for $\kappa = 8$, with a penalization procedure and with an oracle. For the penalization procedure, we follow Comte and Lacour [12] and consider
+---PAGE_BREAK---
+
+FIG 2. *Deconvolution problem (ordinary smooth case):* Computations by $M = 1000$ Monte Carlo iterations of the L$^2$-risks (y axis) for different values of $\kappa \le (\sqrt{n}-1)\sqrt{\log(n)^{-1}} = 32.6$ (x axis). Estimation of $f$ from $n = 10000$ i.i.d. realizations of $X + \varepsilon$ where $\varepsilon$ has distribution $\Gamma(2, 1)$ (i.e. $\varphi_{\varepsilon}(u) = (1-iu)^{-2}$) and for different distributions for $X$: Uniform $U[1, 3]$ (plain line), Gaussian $N(2, 1)$ (dots), Cauchy (stars), Gamma $\Gamma(2, 1)$ (dotted line) and the mixture $0.7N(4, 1) + 0.3\Gamma(2, \frac{1}{2})$ (triangles).
+
+the adaptive estimator $\hat{f}_{\tilde{m}_n}$ which is the estimator defined in (2.2) where
+
+$$
+\tilde{m}_n = \underset{m \in [0, M_n]}{\operatorname{argmin}} \{-\|\hat{f}_m\|^2 + \operatorname{pen}(m)\}, \quad \operatorname{pen}(m) = K \left( \frac{\Delta(m)}{\log(m+1)} \right)^2 \frac{\Delta(m)}{n},
+$$
+
+where $M_n > 0$, $K > 0$ and $\Delta(m) = \frac{1}{2\pi} \int_{-m}^{m} |\varphi_\varepsilon(u)|^{-2} du$, which is known in our setting. The parameter $M_n$ is chosen as the maximal integer $m$ such that $1 \le \frac{\Delta(m)}{n} \le 2$. The parameter $K$ is calibrated by preliminary simulation experiments. For calibration strategies (dimension jump and slope heuristics), the reader is referred to Baudry et al. [2]. Here, we test a grid of values of $K$ from the empirical error point of view to make a relevant choice; the tests are conducted on a set of densities different from the ones considered hereafter, to avoid overfitting. After these preliminary experiments, $K$ is chosen equal to 2, which is the same value as the one considered in Comte and Lacour [12]. The standard errors are given in parentheses. The running times of the penalization procedure and of our procedure are similar. However, one should take into account that a preliminary calibration step seems unnecessary in our case. In deconvolution problems, the theoretically optimal $K$ can in some cases be far from the practically optimal $K$ and may vary with the sample size, explaining the necessity of this calibration step (see e.g. Kappus and Mabon [26], where the practically optimal value of $K$ was much smaller than the value predicted by the theory).
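In this setting $\Delta(m)$ is explicit: for $\Gamma(2,1)$ noise, $\varphi_\varepsilon(u) = (1-iu)^{-2}$ gives $|\varphi_\varepsilon(u)|^{-2} = (1+u^2)^2$, hence $\Delta(m) = \frac{1}{\pi}\left(m + \frac{2m^3}{3} + \frac{m^5}{5}\right)$. A quick numerical check of this closed form (our sketch, not the authors' code):

```python
import math

def delta_closed(m):
    """Delta(m) = (1/2π) ∫_{-m}^{m} (1 + u²)² du for Γ(2,1) noise,
    integrated in closed form."""
    return (m + 2.0 * m ** 3 / 3.0 + m ** 5 / 5.0) / math.pi

def delta_numeric(m, npts=100_001):
    """Midpoint-rule evaluation of the same integral, as a sanity check."""
    h = 2.0 * m / npts
    total = sum((1.0 + (-m + (k + 0.5) * h) ** 2) ** 2 for k in range(npts))
    return total * h / (2.0 * math.pi)

print(delta_closed(1.5), delta_numeric(1.5))  # both ≈ 1.677
```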
+
+Second, an oracle "estimator" $\hat{f}_{m^*}$ is computed, which is the estimator defined in (2.2) where $m^*$ corresponds to the following oracle bandwidth
+
+$$
+m^* = \underset{m>0}{\operatorname{argmin}} \mathbb{E}[\|f - \hat{f}_m\|^2].
+$$
+
+This oracle can be explicitly evaluated when $f$ is known. We denote these different risks by $R$ for the risk of our procedure, $R_{pen}$ for the penalized estimator
+---PAGE_BREAK---
+
+FIG 3. *Deconvolution problem (super smooth case):* Computations by $M = 1000$ Monte Carlo iterations of the L²-risks (y axis) for different values of $\kappa \le (\sqrt{n}-1)\sqrt{\log(n)^{-1}} = 32.6$ (x axis). Estimation of $f$ from $n = 10000$ i.i.d. realizations of $X + \varepsilon$ where $\varepsilon$ has Cauchy distribution (i.e. $\varphi_{\varepsilon}(u) = e^{-|u|}$) and for different distributions for $X$: Uniform $U[1,3]$ (plain line), Gaussian $N(2,1)$ (dots), Cauchy (stars), Gamma $\Gamma(2,1)$ (dotted line) and the mixture $0.7N(4,1)+0.3\Gamma(2, \frac{1}{2})$ (triangles). For the uniform distribution, the risks were stable around the value 0.5; they are not displayed so as not to spoil the readability of the other curves.
+
+TABLE 1
+Comparison of the different adaptive estimators for the Gamma distribution.
+
+$X \sim \Gamma(2, 1)$:
+
+| $f_\varepsilon$ | $n$ | $R$ | $\hat{m}$ | $R_{pen}$ | $\tilde{m}$ | $R_{or}$ | $m^*$ |
+|---|---|---|---|---|---|---|---|
+| $\Gamma(2, 1)$ | 500 | 4.31 × 10⁻² (0.20) | 1.05 (0.07) | 1.97 × 10⁻² (0.01) | 0.80 (0.05) | 0.74 × 10⁻² (0.36 × 10⁻²) | 0.66 (0.14) |
+| | 1000 | 1.74 × 10⁻² (0.13) | 0.98 (0.04) | 1.70 × 10⁻² (0.03) | 0.94 (0.03) | 0.59 × 10⁻² (0.28 × 10⁻²) | 0.72 (0.14) |
+| | 5000 | 0.40 × 10⁻² (0.06) | 0.85 (0.01) | 1.30 × 10⁻² (0.01) | 1.32 (0.05) | 0.31 × 10⁻² (0.13 × 10⁻²) | 0.91 (0.15) |
+| C | 500 | 5.27 × 10⁻² (0.23) | 0.90 (0.07) | 1.21 × 10⁻² (0.69 × 10⁻²) | 0.56 (0.04) | 0.92 × 10⁻² (0.48 × 10⁻²) | 0.61 (0.12) |
+| | 1000 | 1.84 × 10⁻² (0.13) | 0.84 (0.04) | 0.98 × 10⁻² (0.61 × 10⁻²) | 0.70 (0.03) | 0.70 × 10⁻² (0.34 × 10⁻²) | 0.67 (0.13) |
+| | 5000 | 0.51 × 10⁻² (0.07) | 0.70 (0.01) | 0.71 × 10⁻² (0.01 × 10⁻²) | 1.10 (0.02) | 0.39 × 10⁻² (0.17 × 10⁻²) | 0.82 (0.13) |
+
+and $R_{or}$ for the oracle procedure. All these risks are computed over 1000 Monte Carlo iterations. The results are gathered in Table 1 for the Gamma density, Table 2 for the mixture and Table 3 for the Cauchy density, where C stands for the Cauchy distribution. In each case both an ordinary smooth and a super smooth error are considered.
+
+**Comparison of the different methods.** Tables 1, 2 and 3 show that all the procedures behave as expected; the L²-risks decrease with *n* and are smaller in the case of an ordinary smooth deconvolution problem than in the case of a super smooth one. The estimator with the smallest risk is the oracle, and the penalized risks are most of the time smaller than those of our procedure, which is consistent with the fact that our procedure has a logarithmic loss and is asymptotic. More precisely, for small values of *n* our procedure does not perform as well as the penalized method, but for larger values of *n* it is competitive. We can exhibit particular cases where our procedure is more stable in the choice of the hyperparameter than the penalized procedure, even on large sample sizes
+---PAGE_BREAK---
+
+TABLE 2
+Comparison of the different adaptive estimators on a mixture.
+
+$X \sim 0.7N(4, 1) + 0.3\Gamma(4, \frac{1}{2})$:
+
+| $f_\varepsilon$ | $n$ | $R$ | $\hat{m}$ | $R_{pen}$ | $\tilde{m}$ | $R_{or}$ | $m^*$ |
+|---|---|---|---|---|---|---|---|
+| $\Gamma(2, 1)$ | 500 | 1.77 × 10⁻² (0.13) | 0.92 (0.05) | 0.78 × 10⁻² (0.65 × 10⁻²) | 0.78 (0.03) | 0.33 × 10⁻² (0.17 × 10⁻²) | 0.59 (0.16) |
+| | 1000 | 0.74 × 10⁻² (0.08) | 0.89 (0.03) | 0.73 × 10⁻² (0.64 × 10⁻²) | 0.87 (0.03) | 0.26 × 10⁻² (0.15 × 10⁻²) | 0.67 (0.14) |
+| | 5000 | 0.13 × 10⁻² (0.03) | 0.79 (0.01) | 0.38 × 10⁻² (0.30 × 10⁻²) | 1.10 (0.01) | 0.01 × 10⁻² (0.06 × 10⁻³) | 0.82 (0.11) |
+| C | 500 | 2.78 × 10⁻² (0.15) | 0.82 (0.05) | 0.73 × 10⁻² (0.50 × 10⁻²) | 0.55 (0.05) | 0.43 × 10⁻² (0.20 × 10⁻²) | 0.50 (0.14) |
+| | 1000 | 1.02 × 10⁻² (0.10) | 0.77 (0.03) | 0.65 × 10⁻² (0.52 × 10⁻²) | 0.67 (0.03) | 0.34 × 10⁻² (0.16 × 10⁻²) | 0.58 (0.15) |
+| | 5000 | 0.21 × 10⁻² (0.04) | 0.66 (0.01) | 0.59 × 10⁻² (0.48 × 10⁻²) | 0.94 (0.02) | 0.16 × 10⁻² (0.10 × 10⁻²) | 0.74 (0.11) |
+
+TABLE 3
+Comparison of the different adaptive estimators for the Cauchy distribution.
+
+$X \sim$ Cauchy:
+
+| $f_\varepsilon$ | $n$ | $R$ | $\hat{m}$ | $R_{pen}$ | $\tilde{m}$ | $R_{or}$ | $m^*$ |
+|---|---|---|---|---|---|---|---|
+| $\Gamma(2, 1)$ | 500 | 2.69 × 10⁻² (0.16) | 0.59 (0.07) | 1.00 × 10⁻² (0.67 × 10⁻²) | 0.68 (0.03) | 0.67 × 10⁻² (0.29 × 10⁻²) | 0.62 (0.10) |
+| | 1000 | 1.00 × 10⁻² (0.09) | 0.84 (0.04) | 0.97 × 10⁻² (0.70 × 10⁻²) | 0.82 (0.05) | 0.49 × 10⁻² (0.21 × 10⁻²) | 0.67 (0.10) |
+| | 5000 | 0.27 × 10⁻² (0.05) | 0.69 (0.01) | | | | |
+| C | 500 | 2.50 × 10⁻² (0.16) | | | | | |
+| | 1000 | 1.00 × 10⁻² | | | | | |
+| | 5000 | | | | | | |
+(see Figure 4 for example). This is due to the fact that the penalty constant $K$ that is suitable for small values of $n$ is different from the one suitable for larger values of $n$. In practice a logarithmic term in $n$ is added in the penalty term; it is theoretically unnecessary and entails a logarithmic loss, but improves the numerical results. We therefore also consider a second penalized procedure where $K = 2$ is replaced with $\tilde{K} \log(n)^{2.5}$, $\tilde{K} = 0.3$, the multiplying $\log(n)^{2.5}$ factor being suggested in Comte et al. [14]. This second penalized procedure performs well for all values of $n$ and, when $n$ gets large, it has performances similar to our procedure (see Table 4). For our procedure, changing $\kappa$ for smaller values of $n$ does not improve the results.
+
+### 3. Decompounding
+
+#### **3.1. Statistical setting**
+
+Let $Z$ be a compound Poisson process with intensity $\lambda > 0$ and jump density $f$, i.e.
+
+$$Z_t := \sum_{j=1}^{N_t} X_j, \quad t \geq 0$$
+---PAGE_BREAK---
+
+FIG 4. Comparison for different values of K and $\kappa$ the penalized estimator (green) and our adaptive estimator (blue). Estimation of $f \sim (0.3G(3, \frac{1}{2}) + 0.7G(4, 1))$ (bold black) from $n = 10000$ observations.
+
+where $N$ is a homogeneous Poisson process with intensity $\lambda$, independent of the i.i.d. variables $(X_j)$ with common density $f$. One trajectory of $Z$ is observed at sampling rate $\Delta$ over $[0, T]$, $T = n\Delta$, $n \in \mathbb{N}$. Non-parametric estimation of $f$, or of its Lévy density $\lambda f$, has been the subject of many papers, among others [4, 9, 11, 18, 38] and [25] for the multidimensional setting.
+
+We observe $Z$ at the time points $j\Delta$, $j = 1, \dots, n$, for $\Delta > 0$, and denote the $j$-th increment by $Y_{j\Delta} = Z_{j\Delta} - Z_{(j-1)\Delta}$. We aim at estimating $f$ from the increments $(Y_{j\Delta}, j = 1, \dots, n)$. Denote by $\varphi$ the characteristic function of $X_1$ and by $\varphi_{\Delta}$ the characteristic function of $Z_{\Delta} = Y_{\Delta}$. The Lévy–Khintchine formula relates them as follows
+
+$$ \varphi_{\Delta}(u) = \exp(\Delta\lambda(\varphi(u) - 1)), \quad u \in \mathbb{R}. $$
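This relation is easy to check by simulation; a small sketch (ours, not from the paper) compares the empirical characteristic function of simulated increments with the Lévy–Khintchine formula, taking $\lambda = \Delta = 1$ and standard Gaussian jumps (so $\varphi(u) = e^{-u^2/2}$):

```python
import cmath
import math
import random

def cpp_increment(lam, delta, jump, rng):
    """One increment Y_Δ = Σ_{j=1}^{N} X_j of a compound Poisson process:
    N is sampled by accumulating exponential waiting times."""
    n, t = 0, rng.expovariate(lam)
    while t <= delta:
        n += 1
        t += rng.expovariate(lam)
    return sum(jump(rng) for _ in range(n))

rng = random.Random(0)
lam = delta = 1.0
u = 0.7
ys = [cpp_increment(lam, delta, lambda r: r.gauss(0.0, 1.0), rng) for _ in range(100_000)]
emp = sum(cmath.exp(1j * u * y) for y in ys) / len(ys)
theo = cmath.exp(delta * lam * (math.exp(-u * u / 2.0) - 1.0))  # φ_Δ(u) = exp(Δλ(φ(u) − 1))
print(abs(emp - theo))  # small, of order n^{-1/2}
```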
+---PAGE_BREAK---
+
+TABLE 4
+Risks and selected cutoff of the penalized procedure with an additional logarithmic term in
+the penalty. Notation $\mathcal{M}$ stands for the mixture $0.7\mathcal{N}(4, 1) + 0.3\Gamma(4, \frac{1}{2})$.
+
+| $f_\varepsilon$ | $n$ | $R_{pen}$ (G) | $\tilde{m}_{pen}$ (G) | $R_{pen}$ ($\mathcal{M}$) | $\tilde{m}_{pen}$ ($\mathcal{M}$) | $R_{pen}$ (C) | $\tilde{m}_{pen}$ (C) |
+|---|---|---|---|---|---|---|---|
+| $\Gamma(2, 1)$ | 500 | 1.17 × 10⁻² (0.81 × 10⁻²) | 0.72 (0.03) | 0.63 × 10⁻² (0.51 × 10⁻²) | 0.69 (0.02) | 0.83 × 10⁻² (0.46 × 10⁻²) | 0.62 (0.005) |
+| | 1000 | 0.94 × 10⁻² (0.63 × 10⁻²) | 0.82 (0.04) | 0.40 × 10⁻² (0.30 × 10⁻²) | 0.75 (0.02) | 0.59 × 10⁻² (0.32 × 10⁻²) | 0.69 (0.03) |
+| | 5000 | 0.62 × 10⁻² (0.62 × 10⁻²) | 1.08 (0.01) | 0.20 × 10⁻² (0.16 × 10⁻²) | 0.94 (0.02) | 0.31 × 10⁻² (0.19 × 10⁻²) | 0.91 (0.02) |
+| C | 500 | 1.34 × 10⁻² (0.43 × 10⁻²) | 0.45 (0.01) | 0.60 × 10⁻² (0.25 × 10⁻²) | 0.45 (0.01) | 1.31 × 10⁻² (0.24 × 10⁻²) | 0.41 (0.02) |
+| | 1000 | 0.91 × 10⁻² (0.43 × 10⁻²) | 0.59 (0.02) | 0.74 × 10⁻² (0.51 × 10⁻²) | 0.51 (0.003) | 0.94 × 10⁻² (0.14 × 10⁻²) | 0.45 (0.002) |
+| | 5000 | 0.56 × 10⁻² (0.33 × 10⁻²) | 0.81 (0.05) | 0.21 × 10⁻² (0.15 × 10⁻²) | 0.75 (0.001) | 0.35 × 10⁻² (0.11 × 10⁻²) | 0.65 (0.02) |
+
+Then, the mapping $\mathbf{T}$ defined in Section 1.1 is given by $\mathbf{T}: \varphi \mapsto \varphi_{\Delta}$. As $Z$ is a compound Poisson process, $|\varphi_{\Delta}(u)| = e^{\lambda\Delta(\operatorname{Re}\varphi(u)-1)}$ is bounded from below by $e^{-2\lambda\Delta}$, which remains bounded away from 0 as long as $\Delta < \infty$. Moreover, if $\mathbb{E}[|X_1|] < \infty$, $\varphi$ is differentiable and we can then define the distinguished logarithm of $\varphi_{\Delta}$ (see Lemma 1 in [20])
+
+$$ \varphi(u) = 1 + \frac{\operatorname{Log}(\varphi_{\Delta}(u))}{\lambda\Delta}, \quad \text{where } \operatorname{Log}(\varphi_{\Delta}(u)) = \int_{0}^{u} \frac{\varphi'_{\Delta}(z)}{\varphi_{\Delta}(z)} dz, u \in \mathbb{R}. \quad (3.1) $$
+
+For simplicity, we assume that the intensity $\lambda$ is known: $\lambda = 1$. Following (3.1), an estimator of $\varphi$ is hence given by
+
+$$ \hat{\varphi}_n(u) = 1 + \frac{1}{\Delta} \operatorname{Log}(\hat{\varphi}_{\Delta,n}(u)), \quad u \in \mathbb{R} \qquad (3.2) $$
+
+with
+
+$$ \operatorname{Log}(\hat{\varphi}_{\Delta,n}(u)) := \int_{0}^{u} \frac{\hat{\varphi}'_{\Delta,n}(z)}{\hat{\varphi}_{\Delta,n}(z)} dz, \quad \text{where} \quad \hat{\varphi}_{\Delta,n}^{(k)}(z) = \frac{1}{n} \sum_{j=1}^{n} (iY_{j\Delta})^k e^{izY_{j\Delta}}, \quad (3.3) $$
+
+$k \in \{0, 1\}$. The quantity $\operatorname{Log}(\hat{\varphi}_{\Delta,n})$ may not be well defined: even though $\varphi_{\Delta}$ never vanishes, its estimator $\hat{\varphi}_{\Delta,n}$ may. Usually, to prevent this issue a local threshold is used and $(\hat{\varphi}_{\Delta,n}(z))^{-1}$ is replaced with $(\hat{\varphi}_{\Delta,n}(z))^{-1}\mathbf{1}_{\{|\hat{\varphi}_{\Delta,n}(z)|>r_n\}}$, for some vanishing sequence $r_n$ (see e.g. Neumann and Reiß [33]). Here we do not use a local threshold inside the integral in (3.3); instead, we threshold the resulting estimator itself. For that, consider
+
+$$ \tilde{\varphi}_n(u) := \hat{\varphi}_n(u) \mathbf{1}_{|\hat{\varphi}_n(u)| \le 4}, \quad u \in \mathbb{R}, \qquad (3.4) $$
+
+where $\hat{\varphi}_n$ is given by (3.2). The choice of a threshold equal to 4 is technical (see the proof of Theorem 3.1). Cutting off in the spectral domain and applying a
+---PAGE_BREAK---
+
+Fourier inversion provides the estimator of $f$
+
+$$ \hat{f}_{m,\Delta}(x) = \frac{1}{2\pi} \int_{-m}^{m} e^{-iux} \tilde{\varphi}_n(u) du, \quad x \in \mathbb{R}. \qquad (3.5) $$
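As a sanity check of this strategy (our sketch, with the exact $\varphi_\Delta$ standing in for its empirical counterpart (3.3)), one can verify that integrating $\varphi'_\Delta/\varphi_\Delta$ recovers the jump characteristic function through (3.1); here $\lambda = \Delta = 1$ and the jumps are standard Gaussian, so $\varphi_\Delta = \exp(\varphi - 1)$ with $\varphi(u) = e^{-u^2/2}$:

```python
import cmath
import math

def distinguished_log(phi, dphi, u, npts=2000):
    """Log φ_Δ(u) = ∫_0^u φ'_Δ(z)/φ_Δ(z) dz, evaluated by the midpoint rule."""
    h = u / npts
    return sum(dphi((k + 0.5) * h) / phi((k + 0.5) * h) for k in range(npts)) * h

phi_jump = lambda u: math.exp(-u * u / 2.0)      # φ, characteristic function of X_1
phi_D = lambda u: cmath.exp(phi_jump(u) - 1.0)   # φ_Δ for λ = Δ = 1
dphi_D = lambda u: -u * phi_jump(u) * phi_D(u)   # φ'_Δ

u = 2.0
recovered = 1.0 + distinguished_log(phi_D, dphi_D, u)  # (3.1) with λΔ = 1
print(abs(recovered - phi_jump(u)))  # ≈ 0: the distinguished logarithm inverts T
```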
+
+## 3.2. Adaptive upper bound
+
+### 3.2.1. Upper bound and discussion on the rate
+
+**Theorem 3.1.** Assume that $\mathbb{E}[X_1^2] < \infty$, $\Delta < \frac{1}{4}\log(n\Delta)$ and $n\Delta \to \infty$ as $n \to \infty$. Then, for any $m \ge 0$ it holds
+
+$$ \mathbb{E}[\|\hat{f}_{m,\Delta} - f\|^2] \le \|f_m - f\|^2 + \frac{2}{n\Delta} \int_{-m}^{m} \frac{du}{|\varphi_\Delta(u)|^2} + 2 \cdot 5^2 \, \mathbb{E}[X_1^2] \frac{m}{n\Delta} + 2^{3.5} \cdot 5^2 \, \frac{m^2}{(n\Delta)^2}. $$
+
+The constraint $\Delta < \frac{1}{4}\log(n\Delta)$ is fulfilled for any bounded $\Delta$ as $n\Delta \to \infty$. Moreover, it allows $\Delta := \Delta_n$ to tend to 0 as well as to infinity, provided the latter does not happen too fast. This last point is interesting. An estimator that is simultaneously optimal when $\Delta$ is fixed or vanishing, and consistent (optimal up to a logarithmic loss) when $\Delta$ tends to infinity, has scarcely been investigated, nor has the estimation problem when the sampling rate goes to infinity. To the knowledge of the authors, the only similar result was released shortly after ours in Coca [10], where adaptive optimal non-parametric estimation in $L_p$, $p \ge 1$, of the Lévy density (which is related to the jump density) is studied. Both results are complementary; our estimator is adapted to $L_2$ and has the advantage that its definition (both adaptive and non-adaptive) is simpler, leading to more succinct proofs, and we provide a numerical study. In the remainder of this paragraph, we discuss the different rates of convergence implied by Theorem 3.1 according to the behavior of $\Delta$.
+
+**Discussion on the rates** The upper bound derived in Theorem 3.1 is the sum of four terms: a bias, two variance terms $V \asymp \frac{m e^{4\Delta}}{n\Delta}$ (using that $|\varphi_\Delta(u)| \ge e^{-2\Delta}$) and $V' \asymp \frac{m}{n\Delta}$, which is always smaller than or of the same order as $V$, and a remainder. Assume that $f$ lies in the Sobolev ball $\mathcal{S}(\beta, L)$ (see (2.8)). Then, the bias $\|f - f_m\|^2$ has asymptotic order $m^{-2\beta}$ and we may derive the following rates of convergence.
+
+* **Microscopic and mesoscopic regimes.** Let $\Delta = \Delta_n$ be such that $\Delta_n \to \Delta_0 \in [0, \infty)$ and $n\Delta_n \to \infty$. Then, the bias-variance compromise leads to the choice $m^* = (e^{-4\Delta_0} n\Delta_n)^{\frac{1}{2\beta+1}}$ and to the rate of convergence $(e^{-4\Delta_0} n\Delta_n)^{-\frac{2\beta}{2\beta+1}}$, which matches the optimal rate of convergence when $\Delta_0$ is fixed or tending to 0. Indeed, the rate is in $T^{-\frac{2\beta}{2\beta+1}}$, with $T = n\Delta_n$ denoting the time horizon; it is clearly rate optimal as it corresponds to the optimal rate of convergence for estimating the jump density of a compound Poisson process from continuous observations ($\Delta = 0$). The
+---PAGE_BREAK---
+
+constant $e^{-4\Delta_0}$ appearing in the rate depends exponentially on $\Delta_0$, which asymptotically has little effect but in practice deteriorates the numerical performances.
+
+* **Macroscopic regime.** Let $\Delta = \Delta_n \to \infty$ be such that $\Delta_n < \frac{1}{4} \log(n\Delta_n)$. The variance term $V$ tends to 0, so that the estimator is consistent. Heuristically, if $\Delta$ goes to infinity the central limit theorem states that $Y_\Delta$ is close in law to a parametric Gaussian variable; e.g. if $f$ is centered and with unit variance it holds that $\sqrt{\Delta}^{-1} Y_\Delta \xrightarrow[\Delta\to\infty]{d} \mathcal{N}(0,1)$. Consequently, the fact that $f$ can be consistently estimated is non trivial. Duval [19] establishes that if $\Delta = O((n\Delta)^{\delta})$ for some $\delta \in (0,1)$, i.e. when $\Delta_n$ goes rapidly to infinity, there exists no consistent non-parametric estimator of $f$. The fact that estimation is impossible when $\Delta$ goes too rapidly to infinity was established through an asymptotic equivalence result: in this case it is always possible to build two different compound Poisson processes for which the statistical experiments generated by their increments are asymptotically equivalent. Therefore, the result of Theorem 3.1 is new in that context. We may distinguish two additional regimes:
+
+1. Slow macroscopic regime. If $\Delta_n = o(\log(n\Delta_n))$, the choice $m^* = (e^{-4\Delta_n} n\Delta_n)^{\frac{1}{2\beta+1}}$ leads to the rate of convergence $(e^{-4\Delta_n} n\Delta_n)^{-\frac{2\beta}{2\beta+1}}$. There is no lower bound in the literature ensuring that this rate is optimal. However, if $\Delta$ goes slowly to infinity, for example if $\Delta_n = \log(\log(n\Delta_n))$, then the rate is $((\log(n\Delta_n))^{-4} n\Delta_n)^{-\frac{2\beta}{2\beta+1}}$, which is rate optimal up to a logarithmic loss that may itself not be optimal.
+
+2. Intermediate macroscopic regime. Let $\Delta_n = \delta \log(n\Delta_n)$, $0 < \delta < 1/4$; then $m^* = (n\Delta_n)^{\frac{1-4\delta}{2\beta+1}}$, leading to the rate $(n\Delta_n)^{-\frac{2\beta(1-4\delta)}{2\beta+1}}$. This rate deteriorates as $\delta$ increases. The limit $\delta = 1/4$ imposed by Theorem 3.1 may not be optimal; no lower bound adapted to this case exists in the literature.
+
+The interest of the macroscopic regime is mainly theoretical: in practice, if $\Delta$ is a large constant, one should consider a huge number $n$ of observations to make $e^{-4\Delta}n\Delta$ large. However, this regime enlightens the role of the sampling rate $\Delta$ in the non-parametric estimation of the jump density. Using [19], consistent non-parametric estimation of the jump density is impossible if there exists $\delta > 0$ such that $\Delta_n = O((n\Delta_n)^{\delta})$; the remaining questions are what happens in between and whether the logarithmic loss in the upper bound that appears when $\Delta_n \to \infty$ is avoidable. The constant $1/4$ in the bound $\Delta_n < \frac{1}{4} \log(n\Delta_n)$ of Theorem 3.1 can probably be improved.
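The rate exponents of the different regimes can be tabulated directly; a small sketch (ours, with an illustrative function name) of the exponent $r$ in the intermediate macroscopic rate $(n\Delta_n)^{-r}$, $r = 2\beta(1-4\delta)/(2\beta+1)$:

```python
def rate_exponent(beta, delta_coeff):
    """Exponent r in the rate (nΔ)^{-r} for Δ_n = δ log(nΔ_n), 0 <= δ < 1/4;
    δ = 0 recovers the mesoscopic exponent 2β/(2β + 1)."""
    return 2.0 * beta * (1.0 - 4.0 * delta_coeff) / (2.0 * beta + 1.0)

for d in (0.0, 0.1, 0.2, 0.24):
    print(f"delta = {d}: exponent = {rate_exponent(1.0, d):.3f}")
```

The exponent degrades linearly in $\delta$ and vanishes at the boundary $\delta = 1/4$.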
+
+### 3.2.2. *Adaptive choice of the cutoff parameter*
+
+We consider the optimal cutoff $\bar{m}_n$ given by
+
+$$
+\bar{m}_n \in \underset{m \ge 0}{\operatorname{arginf}} \left\{ \|f_m - f\|^2 + \frac{2}{n\Delta} \int_{-m}^{m} \frac{du}{|\varphi_{\Delta}(u)|^2} + 2 \cdot 5^2 \mathbb{E}[X_1^2] \frac{m}{n\Delta} + 2^{3.5} \cdot 5^2 \frac{m^2}{(n\Delta)^2} \right\}.
+$$
+---PAGE_BREAK---
+
+Following the previous strategy, the upper bound given by Theorem 3.1 is optimal, at least for $\Delta_n \to \Delta_0 \in [0, \infty)$. The leading variance term is in $\frac{me^{4\Delta}}{n\Delta}$; differentiating the upper bound in $m$, we find that the optimal cutoff $m^* \asymp \bar{m}_n$ is such that:
+
+$$|\varphi(\overline{m}_n)|^2 = \frac{e^{4\Delta}}{n\Delta},$$
+
+which has an empirical version; we select $\hat{m}_n$ accordingly. As in the deconvolution setting, we modify the estimator $\tilde{\varphi}_n$ in (3.4): it is set to 0 when the estimator of $|\varphi|$ is smaller than a threshold of order $1/\sqrt{n\Delta}$, meaning that the noise is dominant. Define
+
+$$\bar{\varphi}_n(u) := \tilde{\varphi}_n(u) \mathbf{1}_{\{|\tilde{\varphi}_n(u)| \ge \kappa_{n,\Delta} / \sqrt{n\Delta}\}}, \quad u \in \mathbb{R} \quad (3.6)$$
+
+where $\kappa_{n,\Delta} := e^{2\Delta} + \kappa\sqrt{\log(n\Delta)}$, $\kappa > 0$, and the new estimator of $f$
+
+$$\bar{f}_{m,\Delta}(x) = \frac{1}{2\pi} \int_{-m}^{m} e^{-iux} \bar{\varphi}_n(u) du, \quad x \in \mathbb{R}.$$
+
+Finally, we introduce the empirical cutoff, for some $\alpha \in (0, 1]$ and $\kappa > 0$
+
+$$\hat{m}_n = \max \left\{ m \ge 0 : |\bar{\varphi}_n(m)| = \frac{\kappa_{n,\Delta}}{\sqrt{n\Delta}} \right\} \wedge (n\Delta)^{\alpha}.$$
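A schematic version of this selection rule (ours, using the exact $|\varphi|$ of a Gaussian jump density as a stand-in for the truncated estimator $\bar{\varphi}_n$) reads:

```python
import math

def select_cutoff(phi_abs, n, delta, kappa=8.0, alpha=1.0, step=0.01):
    """Largest m on a grid where |φ̄_n(m)| still reaches the threshold
    κ_{n,Δ}/sqrt(nΔ), capped at (nΔ)**alpha (the rule defining m̂_n)."""
    thresh = (math.exp(2.0 * delta) + kappa * math.sqrt(math.log(n * delta))) / math.sqrt(n * delta)
    m_cap = (n * delta) ** alpha
    m_hat, m = 0.0, 0.0
    while m <= m_cap:
        if phi_abs(m) >= thresh:
            m_hat = m
        m += step
    return m_hat

# stand-in: standard Gaussian jumps, |φ(m)| = exp(−m²/2)
m_hat = select_cutoff(lambda m: math.exp(-m * m / 2.0), n=5000, delta=1.0)
print(m_hat)  # ≈ 1.29, where exp(−m²/2) crosses the threshold
```

With a decreasing $|\varphi|$ the rule reduces to the last crossing of the threshold; the cap $(n\Delta)^{\alpha}$ only binds for slowly decaying characteristic functions.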
+
+**Theorem 3.2.** Assume that $\mathbb{E}[X_1^4] < \infty$, $\kappa > \frac{e^{2\Delta}}{\Delta}$, $\Delta < \frac{1}{4}\log(n\Delta)$ and $n\Delta \to \infty$ as $n \to \infty$. Then, for a positive constant $C_1$, depending on $\kappa$, $\mathbb{E}[X_1^2]$ and $\mathbb{E}[X_1^4]$, and $C_2$ a constant depending on $\mathbb{E}[X_1^2]$ and $\mathbb{E}[X_1^4]$, it holds
+
+$$\begin{align*}
+& \mathbb{E}[\|\bar{f}_{\hat{m}_n, \Delta} - f\|^2] \\
+& \le C_1 \inf_{m \in [0, (n\Delta)^{\alpha}]} \left\{
+ \begin{aligned}[t]
+ & \|f_m - f\|^2 + \frac{\log(n\Delta)m}{n\Delta} + \frac{1}{n\Delta} \int_{-m}^{m} \frac{du}{|\varphi_{\Delta}(u)|^2} + \frac{m^2}{(n\Delta)^2} \\
+ & + C_2 \left( \frac{1}{n\Delta} + (n\Delta)^{\alpha - \kappa^2 \Delta^2 e^{-4\Delta}} \right).
+ \end{aligned}
+\right\}
+\end{align*}$$
+
+If $\kappa > \frac{\sqrt{2}e^{2\Delta}}{\Delta}$ the last additional term is negligible, regardless of the value of $\alpha \le 1$, and Theorem 3.2 ensures that the adaptive estimator $\bar{f}_{\hat{m}_n, \Delta}$ satisfies the same upper bound as in Theorem 3.1. Therefore, it is adaptive and rate optimal, up to a logarithmic term and the multiplicative constant $C_1$, in the microscopic and mesoscopic regimes defined above. In the macroscopic regimes, where $\Delta := \Delta_n \to \infty$ with $\Delta_n < \frac{1}{4}\log(n\Delta_n)$ as $n \to \infty$, the estimator is consistent. Note that to establish the adaptive upper bound we imposed the stronger assumption that $\mathbb{E}[X_1^4] < \infty$. In the following numerical study, we recover that the procedure is stable in the choice of $\kappa$.
+
+**3.3. Numerical results**
+
+As for the deconvolution problem, we illustrate the performance of this adaptive
+estimator for different densities $f$. We consider the same densities as for the
+---PAGE_BREAK---
+
+FIG 5. **Decompounding:** Computations by $M = 1000$ Monte Carlo iterations of the L$^2$-risks (y axis) for different values of $\kappa \le \frac{1}{2}(\sqrt{n}-1)\sqrt{\log(n)^{-1}} = 11.9$ (x axis). Estimation of $f$ from $n = 5000$ ($T = 5000$ and $\Delta = 1$) increments of a compound Poisson process with intensity $\lambda = 1$ and jump density $f$: Uniform $U[1,3]$ (plain line), Gaussian $N(2,1)$ (dots), Gamma $\Gamma(2,1)$ (dotted line) and the mixture $0.7\mathcal{N}(4,1)+0.3\Gamma(2, \frac{1}{2})$ (triangles).
+
+deconvolution problem, the Cauchy density excepted, as it is not covered by our procedure: it has infinite moments. We compute the adaptive L²-risks of our procedure over 1000 Monte Carlo iterations for various values of $\kappa$. We consider $n = 5000$ and the sampling interval $\Delta = 1$. The results are represented in Figure 5; we observe that the risks are small and stable regardless of the value of $\kappa$ and of the density considered.
+
+**4. Proofs of Section 3**
+
+**4.1. Proof of Theorem 3.1**
+
+4.1.1. Preliminaries
+
+We establish two technical lemmas used in the proof of Theorem 3.1.
+
+**Lemma 4.1.** Let $m > 0$ and $\zeta > 0$ and define the event
+
+$$
+\Omega_{\zeta, \Delta}(m) := \left\{ \forall u \in [-m, m], \left| \hat{\varphi}_{\Delta, n}(u) - \varphi_{\Delta}(u) \right| \le \zeta \sqrt{\frac{\log(n\Delta)}{n\Delta}} \right\}.
+$$
+
+1. If $\mathbb{E}[X_1^2]$ is finite, then, the following holds for $\eta > 0$ and any $\zeta > \sqrt{\Delta(1+2\eta)}$,
+$$
+\mathbb{P}(\Omega_{\zeta,\Delta}(m)^c) \le \frac{\mathbb{E}[X_1^2]}{n\Delta} + 4 \frac{m}{(n\Delta)^{\eta}}.
+$$
+
+2. If $\mathbb{E}[X_1^4]$ is finite, then, the following holds for $\eta > 0$ and any $\zeta > \sqrt{\Delta(1+2\eta)}$,
+$$
+\mathbb{P}(\Omega_{\zeta,\Delta}(m)^c) \le \frac{C}{(n\Delta)^2} + 4 \frac{m}{(n\Delta)^{\eta}},
+$$
+
+where C depends on $\mathbb{E}[X_1^4]$ and $\mathbb{E}[X_1^2]$.
+---PAGE_BREAK---
+
+*Proof of Lemma 4.1.* Consider the events
+
+$$A(c) := \left\{ \left| \frac{1}{n} \sum_{j=1}^{n} |Y_{j\Delta}| - \mathbb{E}[|Y_{\Delta}|] \right| \le c \right\}$$
+
+$$B_{h,\tau}(m) := \left\{ \forall |k| \le \left\lfloor \frac{m}{h} \right\rfloor, |\hat{\varphi}_{\Delta,n}(kh) - \varphi_{\Delta}(kh)| \le \tau \sqrt{\frac{\log(n\Delta)}{n\Delta}} \right\}$$
+
+for some positive constants $c, h$ and $\tau$ to be determined. First, using that $x \to e^{iux}$ is 1-Lipschitz and that $\mathbb{E}[|Y_\Delta|] \le \Delta\mathbb{E}[|X_1|]$ we get on the event $A(c)$
+
+$$|\hat{\varphi}_{\Delta,n}(u) - \hat{\varphi}_{\Delta,n}(u+h)| \mathbf{1}_{A(c)} \leq h(\Delta\mathbb{E}[|X_1|] + c), \quad \forall u \in \mathbb{R}, h > 0. \quad (4.1)$$
+
+If $\mathbb{E}[X_1^2]$ is finite the Markov inequality and the bound $\mathbb{V}[|Y_\Delta|] \le \mathbb{V}[Y_\Delta] = \Delta\mathbb{E}[X_1^2]$ lead to
+
+$$\mathbb{P}(A(c)^c) \le \frac{\Delta \mathbb{E}[X_1^2]}{c^2 n}. \quad (4.2)$$
+
+If $\mathbb{E}[X_1^4]$ is finite (4.2) can be improved using that
+
+$$\mathbb{E}\left[\left(\sum_{j=1}^{n} (|Y_{j\Delta}| - \mathbb{E}[|Y_{\Delta}|])\right)^4\right] \le n\Delta^2\mathbb{E}[X_1^4] + 3n(n-1)\Delta^2\mathbb{E}[X_1^2]^2,$$
+
+leading to
+
+$$\mathbb{P}(A(c)^c) \le C \frac{\Delta^2}{c^4 n^2}, \quad (4.3)$$
+
+where $C$ is a constant depending on $\mathbb{E}[X_1^4]$ and $\mathbb{E}[X_1^2]$. Second, we have that
+
+$$
+\begin{align*}
+\mathbb{P}(B_{h,\tau}(m)^c) &\le \mathbb{P}\left(\exists |k| \le \left\lfloor \frac{m}{h} \right\rfloor, |\hat{\varphi}_{\Delta,n}(kh) - \varphi_{\Delta}(kh)| > \tau \sqrt{\frac{\log(n\Delta)}{n\Delta}}\right) \\
+&\le \sum_{k=-\left\lfloor m/h \right\rfloor}^{\left\lfloor m/h \right\rfloor} \mathbb{P}\left(|\hat{\varphi}_{\Delta,n}(kh) - \varphi_{\Delta}(kh)| > \tau \sqrt{\frac{\log(n\Delta)}{n\Delta}}\right) \\
+&\le \sum_{k=-\left\lfloor m/h \right\rfloor}^{\left\lfloor m/h \right\rfloor} 2 \exp\left(-\frac{\tau^2 \log(n\Delta)}{2\Delta}\right) \le 4 \left\lceil \frac{m}{h} \right\rceil (n\Delta)^{-\tau^2/(2\Delta)}
+\end{align*}
+$$
+
+where the exponential bound is obtained by applying the Hoeffding inequality. Let $|u| \le m$; there exists $k$ such that $u \in [kh - \frac{h}{2}, kh + \frac{h}{2}]$ and we can write that
+
+$$
+\begin{aligned}
+& \mathbf{1}_{A(c) \cap B_{h, \tau}(m)} |\hat{\varphi}_{\Delta, n}(u) - \varphi_{\Delta}(u)| \\
+& \leq \mathbf{1}_{A(c) \cap B_{h, \tau}(m)} (|\hat{\varphi}_{\Delta, n}(u) - \hat{\varphi}_{\Delta, n}(kh)| + |\hat{\varphi}_{\Delta, n}(kh) - \varphi_{\Delta}(kh)| + |\varphi_{\Delta}(kh) - \varphi_{\Delta}(u)|).
+\end{aligned}
+$$
+
+Using (4.1), the definition of $B_{h,\tau}(m)$ and the fact that $x \to e^{iux}$ is 1-Lipschitz leads to
+
+$$
+\mathbf{1}_{A(c) \cap B_{h,\tau}(m)} \sup_{u \in [-m, m]} |\hat{\varphi}_{\Delta,n}(u) - \varphi_{\Delta}(u)| \le 2h\Delta E[|X_1|] + hc + \tau \sqrt{\frac{\log(n\Delta)}{n\Delta}}. \quad (4.4)
+$$
+---PAGE_BREAK---
+
+Taking $c = \Delta$, $h = o\left(\sqrt{\log(n\Delta)/(n\Delta)}\right)$ such that $h > 1/\sqrt{n\Delta}$, and $\zeta > \tau$, (4.4) shows that $A(c) \cap B_{h,\tau}(m) \subset \Omega_{\zeta,\Delta}(m)$. Moreover, it follows from $h > 1/\sqrt{n\Delta}$, (4.2) and the bound on $\mathbb{P}(B_{h,\tau}(m)^c)$ that, for all $\eta > 0$
+
+$$
+\begin{align*}
+\mathbb{P}(\Omega_{\zeta, \Delta}^c(m)) &\le \mathbb{P}(A^c(\Delta)) + \mathbb{P}(B_{h, \tau}^c(m)) \\
+&\le \frac{\mathbb{E}[X_1^2]}{n\Delta} + 4\left\lceil \frac{m}{h} \right\rceil (n\Delta)^{-\frac{\tau^2}{2\Delta}} \\
+&\le \frac{\mathbb{E}[X_1^2]}{n\Delta} + 4m(n\Delta)^{\frac{\Delta-\tau^2}{2\Delta}}.
+\end{align*}
+$$
+
+Finally, choosing $\tau^2 = \Delta(1 + 2\eta)$ leads to the result. The second inequality follows from similar arguments, using (4.3) instead of (4.2). $\square$
+
+**Lemma 4.2.** Let $\gamma > 0$, define
+
+$$
+M_{n,\Delta}^{(\gamma)} := \inf \{ m \ge 0 : |\varphi_{\Delta}(m)| = \gamma \sqrt{\log(n\Delta)/(n\Delta)} \},
+$$
+
+with the convention $\inf \emptyset = +\infty$. Take $\gamma > \zeta > 0$; then, we have
+
+$$
+\mathbf{1}_{|u| \le M_{n,\Delta}^{\gamma} \wedge m, \Omega_{\zeta,\Delta}(m)} \left| \text{Log}(\hat{\varphi}_{\Delta,n}(u)) - \text{Log}(\varphi_{\Delta}(u)) \right| \le \frac{\gamma}{\zeta} \log \left( \frac{\gamma}{\gamma - \zeta} \right) \frac{|\hat{\varphi}_{\Delta,n}(u) - \varphi_{\Delta}(u)|}{|\varphi_{\Delta}(u)|}.
+$$
+
+*Proof of Lemma 4.2.* First note that for $|u| \le M_{n,\Delta}^\gamma$, the ratio $\frac{\varphi'_{\Delta}}{\varphi_{\Delta}}$ is well defined. Moreover, on the event $\Omega_{\zeta,\Delta}(m)$ we have that
+
+$$
+|\hat{\varphi}_{\Delta,n}(u)| \geq |\varphi_{\Delta}(u)| - |\hat{\varphi}_{\Delta,n}(u) - \varphi_{\Delta}(u)| \geq (\gamma - \zeta)\sqrt{\frac{\log(n\Delta)}{n\Delta}} > 0, \quad \forall |u| \leq m.
+$$
+
+Then, the quantity $\frac{\hat{\varphi}'_{\Delta,n}}{\hat{\varphi}_{\Delta,n}}$ is also well defined if $\gamma > \zeta$. For $v \in \mathbb{R}$, notice that
+
+$$
+\frac{\hat{\varphi}'_{\Delta,n}(v)}{\hat{\varphi}_{\Delta,n}(v)} - \frac{\varphi'_{\Delta}(v)}{\varphi_{\Delta}(v)} = \frac{\left(-\frac{(\hat{\varphi}_{\Delta,n}(v)-\varphi_{\Delta}(v))}{\varphi_{\Delta}(v)}\right)'}{\left(1-\frac{(\hat{\varphi}_{\Delta,n}(v)-\varphi_{\Delta}(v))}{\varphi_{\Delta}(v)}\right)}. \quad (4.5)
+$$
+
+On the event $\Omega_{\zeta,\Delta}(m)$, it holds $\forall u \in [-m \wedge M_{n,\Delta}^{\gamma}, m \wedge M_{n,\Delta}^{\gamma}]$
+
+$$
+|\hat{\varphi}_{\Delta,n}(u) - \varphi_{\Delta}(u)| \leq \zeta \sqrt{\log(n\Delta)} (n\Delta)^{-1/2} \leq \frac{\zeta}{\gamma} |\varphi_{\Delta}(u)|, \quad (4.6)
+$$
+
+where $\gamma > \zeta$. Then, a Neumann series expansion, with (4.5) and (4.6) gives for
+$|v| \le m \wedge M_{n,\Delta}^\gamma$,
+
+$$
+\frac{\hat{\varphi}'_{\Delta,n}(v)}{\hat{\varphi}_{\Delta,n}(v)} - \frac{\varphi'_{\Delta}(v)}{\varphi_{\Delta}(v)} = \sum_{\ell=0}^{\infty} (-1)^{\ell} \left( \frac{\hat{\varphi}_{\Delta,n}(v) - \varphi_{\Delta}(v)}{\varphi_{\Delta}(v)} \right)' \left( \frac{\hat{\varphi}_{\Delta,n}(v) - \varphi_{\Delta}(v)}{\varphi_{\Delta}(v)} \right)^{\ell},
+$$
+
+where
+
+$$
+\left( \frac{\hat{\varphi}_{\Delta,n}(v) - \varphi_{\Delta}(v)}{\varphi_{\Delta}(v)} \right)' \left( \frac{\hat{\varphi}_{\Delta,n}(v) - \varphi_{\Delta}(v)}{\varphi_{\Delta}(v)} \right)^{\ell} = \frac{1}{\ell+1} \left[ \left( \frac{\hat{\varphi}_{\Delta,n}(v) - \varphi_{\Delta}(v)}{\varphi_{\Delta}(v)} \right)^{\ell+1} \right]'.
+$$
+---PAGE_BREAK---
+
+Using $\hat{\varphi}_{\Delta,n}(0) - \varphi_{\Delta}(0) = 0$ and (4.6), we get
+
+$$
+\begin{align}
+& \mathbf{1}_{|u| \le m \wedge M_{n,\Delta}^{\gamma}, \Omega_{\zeta,\Delta}(m)} \left| \int_0^u \left( \frac{\hat{\varphi}'_{\Delta,n}(v)}{\hat{\varphi}_{\Delta,n}(v)} - \frac{\varphi'_{\Delta}(v)}{\varphi_{\Delta}(v)} \right) dv \right| \nonumber \\
+& \le \sum_{\ell=0}^{\infty} \frac{1}{\ell+1} \frac{|\hat{\varphi}_{\Delta,n}(u) - \varphi_{\Delta}(u)|^{\ell+1}}{|\varphi_{\Delta}(u)|^{\ell+1}} \le \frac{|\hat{\varphi}_{\Delta,n}(u) - \varphi_{\Delta}(u)|}{|\varphi_{\Delta}(u)|} \sum_{\ell=0}^{\infty} \frac{(\zeta/\gamma)^{\ell}}{\ell+1} \nonumber \\
+& = \frac{\gamma}{\zeta} \log \left( \frac{\gamma}{\gamma-\zeta} \right) \frac{|\hat{\varphi}_{\Delta,n}(u) - \varphi_{\Delta}(u)|}{|\varphi_{\Delta}(u)|}, \tag{4.7}
+\end{align}
+$$
+
+which completes the proof.
+□
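The constant in (4.7) uses the elementary series $\sum_{\ell \ge 0} x^{\ell}/(\ell+1) = -\log(1-x)/x$ evaluated at $x = \zeta/\gamma$. A quick numerical sketch of this identity (the pairs $(\zeta, \gamma)$ below are arbitrary illustrations):

```python
import math

def partial_sum(zeta, gamma, terms=200):
    # Truncation of sum_{l >= 0} (zeta/gamma)^l / (l + 1), the series bounded in (4.7).
    x = zeta / gamma
    return sum(x**l / (l + 1) for l in range(terms))

def closed_form(zeta, gamma):
    # (gamma/zeta) * log(gamma / (gamma - zeta)), the constant appearing in Lemma 4.2.
    return (gamma / zeta) * math.log(gamma / (gamma - zeta))

for zeta, gamma in [(1.0, 2.0), (0.3, 1.0), (2.0, 5.0)]:
    assert abs(partial_sum(zeta, gamma) - closed_form(zeta, gamma)) < 1e-12
```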
+
+### 4.1.2. Proof of Theorem 3.1
+
+We have the decomposition
+
+$$
+\|\hat{f}_{m, \Delta} - f\|^2 = \|f_m - f\|^2 + \|\hat{f}_{m, \Delta} - f_m\|^2 = \|f_m - f\|^2 + \frac{1}{2\pi} \int_{-m}^{m} |\tilde{\varphi}_n(u) - \varphi(u)|^2 du.
+$$
+
+Let $\gamma > \zeta$; we decompose the second term using the events $\{m \le M_{n,\Delta}^\gamma\}$ and $\Omega_{\zeta,\Delta}(m)$ of Lemma 4.2,
+
+$$
+\begin{align*}
+\int_{-m}^{m} |\tilde{\varphi}_n(u) - \varphi(u)|^2 du &= \int_{-m \wedge M_{n,\Delta}^{\gamma}}^{m \wedge M_{n,\Delta}^{\gamma}} \mathbf{1}_{\Omega_{\zeta,\Delta}(m)} |\tilde{\varphi}_n(u) - \varphi(u)|^2 du \\
+&\quad + \mathbf{1}_{m > M_{n,\Delta}^{\gamma}, \Omega_{\zeta,\Delta}(m)} \int_{|u| \in [M_{n,\Delta}^{\gamma}, m]} |\tilde{\varphi}_n(u) - \varphi(u)|^2 du \\
+&\quad + \mathbf{1}_{\Omega_{\zeta,\Delta}(m)^c} \int_{-m}^{m} |\tilde{\varphi}_n(u) - \varphi(u)|^2 du \\
+&:= T_{1,n} + T_{2,n} + T_{3,n}.
+\end{align*}
+$$
+
+Fix $\gamma_\Delta = \frac{2\zeta}{1\wedge\Delta} > \zeta$. On the event $\{|u| \le m \wedge M_{n,\Delta}^{\gamma_\Delta}, \Omega_{\zeta,\Delta}(m)\}$, Lemma 4.2 and equations (3.1), (3.2) and (4.7), along with (4.6), imply
+
+$$
+|\hat{\varphi}_n(u)| \le 1 + \frac{|\text{Log}(\hat{\varphi}_{\Delta,n}(u)) - \text{Log}(\varphi_\Delta(u))| + |\text{Log}(\varphi_\Delta(u))|}{\Delta} \le 3 + \frac{1}{\Delta} \log \left( \frac{\gamma_\Delta}{\gamma_\Delta-\zeta} \right) \le 4,
+$$
+
+consequently $\tilde{\varphi}_n(u) = \hat{\varphi}_n(u)$. Then, we get from Lemma 4.2 and the definition of $\gamma_\Delta$, that
+
+$$
+\begin{align*}
+\mathbb{E}[T_{1,n}] &= \frac{1}{\Delta^2} \int_{-m \wedge M_{n,\Delta}^{\gamma_\Delta}}^{m \wedge M_{n,\Delta}^{\gamma_\Delta}} \mathbb{E}\left[\mathbf{1}_{\Omega_{\zeta,\Delta}(m)} \left| \text{Log}(\hat{\varphi}_{\Delta,n}(u)) - \text{Log}(\varphi_\Delta(u))\right|^2\right] du \\
+&\le \frac{1}{\Delta^2} \int_{-m}^{m} \frac{\mathbb{E}\left[|\hat{\varphi}_{\Delta,n}(u) - \varphi_\Delta(u)|^2\right]}{|\varphi_\Delta(u)|^2} du.
+\end{align*}
+$$
+---PAGE_BREAK---
+
+Direct computations together with the Lévy-Khintchine formula lead to
+
+$$
+\mathbb{E}[|\hat{\varphi}_{\Delta,n}(u) - \varphi_{\Delta}(u)|^2] = \frac{1 - |\varphi_{\Delta}(u)|^2}{n} \le \frac{(2\Delta|\text{Re}(\varphi(u)) - 1|) \wedge 1}{n} \le \frac{2\Delta \wedge 1}{n}.
+$$
+
+We derive that
+
+$$
+\mathbb{E}[T_{1,n}] \leq \frac{2\Delta \wedge 1}{n\Delta^2} \int_{-m}^{m} \frac{du}{|\varphi_{\Delta}(u)|^2} \leq \frac{2}{n\Delta} \int_{-m}^{m} \frac{du}{|\varphi_{\Delta}(u)|^2}.
+$$
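The identity $\mathbb{E}[|\hat{\varphi}_{\Delta,n}(u) - \varphi_{\Delta}(u)|^2] = (1-|\varphi_{\Delta}(u)|^2)/n$ used above is just the variance of an empirical mean of the unimodular variables $e^{iuX_j}$, and is easy to check by simulation. The sketch below assumes, purely for illustration, a compound Poisson model with unit jump intensity and standard Gaussian jumps:

```python
import numpy as np

def empirical_cf_mse(u, delta, n, reps, seed=0):
    # Monte Carlo check of E|phi_hat_{Delta,n}(u) - phi_Delta(u)|^2 = (1 - |phi_Delta(u)|^2)/n
    # for a compound Poisson process with unit intensity and N(0,1) jumps (illustrative choices).
    rng = np.random.default_rng(seed)
    # phi_Delta(u) = exp(Delta * (phi_jump(u) - 1)) with phi_jump(u) = exp(-u^2/2); real here.
    phi_delta = float(np.exp(delta * (np.exp(-u**2 / 2.0) - 1.0)))
    counts = rng.poisson(delta, size=(reps, n))
    # A sum of N i.i.d. N(0,1) jumps is N(0, N): one scaled Gaussian draw per increment.
    x = rng.standard_normal((reps, n)) * np.sqrt(counts)
    phi_hat = np.exp(1j * u * x).mean(axis=1)
    mse = float(np.mean(np.abs(phi_hat - phi_delta) ** 2))
    theory = (1.0 - phi_delta**2) / n
    return mse, theory

mse, theory = empirical_cf_mse(u=1.0, delta=0.5, n=2000, reps=300)
assert abs(mse - theory) < 0.3 * theory
```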
+
+Next, fix $\zeta > \sqrt{5\Delta}$; Lemma 4.1 with $\eta = 2$ gives
+
+$$
+\mathbb{E}[T_{3,n}] \le 2 \cdot 5^2\, m \left( \frac{\mathbb{E}[X_1^2]}{n\Delta} + 4 \frac{m}{(n\Delta)^2} \right).
+$$
+
+Moreover, using $|\varphi_\Delta(u)| \ge e^{-2\Delta}$ for all $u \in \mathbb{R}$, together with the constraint
+$\Delta \le \delta \log(n\Delta)$, $\delta < \frac{1}{4}$, we get
+
+$$
+|\varphi_{\Delta}(u)| \ge (n\Delta)^{-2\delta} > \gamma_{\Delta}\sqrt{\log(n\Delta)/(n\Delta)}, \quad \forall u \in \mathbb{R}.
+$$
+
+Finally, $M_{n, \Delta}^{\gamma_{\Delta}} = +\infty$, $\forall \zeta > 0$ and $T_{2,n} = 0$ almost surely. Gathering all terms completes the proof. $\square$
+
+### 4.2. Proof of Theorem 3.2
+
+Preliminary
+
+**Lemma 4.3.** *Assume that $\mathbb{E}[X_1^4] < \infty$. Let $\eta > 0$, $\alpha \in (0,1]$ and $c(\Delta) = \kappa\Delta e^{-2\Delta}$. Then, for some positive constant $C$ depending only on $\mathbb{E}[X_1^2]$ and $\mathbb{E}[X_1^4]$, it holds for $\hat{\varphi}_n$ defined in (3.2) that, for all $u \in [-(n\Delta)^{\alpha}, (n\Delta)^{\alpha}]$,*
+
+$$
+\mathbb{P}\left(|\hat{\varphi}_n(u) - \varphi(u)| \geq \kappa \sqrt{\frac{\log(n\Delta)}{n\Delta}}\right) \leq 2(n\Delta)^{-c(\Delta)^2} + \frac{C}{(n\Delta)^2} + 4(n\Delta)^{\alpha-\eta}.
+$$
+
+*Proof.* We use Lemmas 4.1 and 4.2 with $\gamma_\Delta = \frac{2\zeta}{1\wedge\Delta}$ and $\zeta > \sqrt{\Delta(1+2\eta)}$, $\eta > 0$. First, it holds that
+
+$$
+\begin{align*}
+& \mathbb{P}\left(|\hat{\varphi}_n(u) - \varphi(u)| \geq \kappa \sqrt{\frac{\log(n\Delta)}{n\Delta}}\right) \\
+&= \mathbb{P}\left(|\text{Log}(\hat{\varphi}_{\Delta,n}(u)) - \text{Log}(\varphi_{\Delta}(u))| \geq \kappa\Delta \sqrt{\frac{\log(n\Delta)}{n\Delta}}\right) \\
+&\leq \mathbb{P}\left(|\hat{\varphi}_{\Delta,n}(u) - \varphi_{\Delta}(u)| \geq |\varphi_{\Delta}(u)|\,\kappa\Delta \sqrt{\frac{\log(n\Delta)}{n\Delta}}\right) + \mathbb{P}(\Omega_{\zeta,\Delta}^{c}((n\Delta)^{\alpha})).
+\end{align*}
+$$
+
+Then, we derive from the Hoeffding inequality and Lemma 4.1 that
+
+$$
+P(|\hat{\varphi}_n(u) - \varphi(u)| \geq \kappa \sqrt{\frac{\log(n\Delta)}{n\Delta}}) \leq 2(n\Delta)^{-c(\Delta)^2} + \frac{C}{(n\Delta)^2} + 4(n\Delta)^{\alpha-\eta},
+$$
+
+where $C$ depends on $\mathbb{E}[X_1^2]$ and $\mathbb{E}[X_1^4]$. $\square$
+---PAGE_BREAK---
+
+*Proof of Theorem 3.2*
+
+Step 1: An upper bound for $\bar{f}_{m,\Delta}$. Let $0 < m \le (n\Delta)^{\alpha}$; the Parseval equality and (3.6) lead to
+
+$$ \mathbb{E}[\|\bar{f}_{m,\Delta} - f\|^2] \leq \|f_m - f\|^2 + \frac{1}{2\pi} \int_{-m}^{m} \mathbb{E}[|\bar{\varphi}_n(u) - \varphi(u)|^2] du, $$
+
+where
+
+$$ \begin{aligned} & \mathbb{E}[|\bar{\varphi}_n(u) - \varphi(u)|^2] \\ &= \mathbb{E}[|\tilde{\varphi}_n(u) - \varphi(u)|^2 \mathbf{1}_{|\tilde{\varphi}_n(u)| \ge \frac{\kappa_{n,\Delta}}{\sqrt{n\Delta}}}] + |\varphi(u)|^2 \mathbb{P}(|\tilde{\varphi}_n(u)| < \frac{\kappa_{n,\Delta}}{\sqrt{n\Delta}}). \end{aligned} $$
+
+The first term in the right hand side is bounded using Theorem 3.1. For the second term, recall that $\kappa_{n,\Delta} = e^{2\Delta} + \kappa\sqrt{\log(n\Delta)}$ and decompose it on the set $\mathcal{A} = \{u : |\varphi(u)| < \frac{e^{2\Delta}+2\kappa\sqrt{\log(n\Delta)}}{\sqrt{n\Delta}}\}$; this leads to
+
+$$ \begin{align*} & |\varphi(u)|^2 \mathbb{P}\left(|\tilde{\varphi}_n(u)| < \frac{\kappa_{n,\Delta}}{\sqrt{n\Delta}}\right) \\ & \le \frac{(e^{2\Delta} + 2\kappa\sqrt{\log(n\Delta)})^2}{n\Delta} + \mathbb{P}\left(|\tilde{\varphi}_n(u) - \varphi(u)| \ge \frac{\kappa\sqrt{\log n\Delta}}{\sqrt{n\Delta}}\right) \\ & \le \frac{1 + (1 + 2\kappa\sqrt{\log(n\Delta)})^2}{n\Delta} + \mathbb{P}\left(|\hat{\varphi}_n(u) - \varphi(u)| \ge \frac{\kappa\sqrt{\log n\Delta}}{\sqrt{n\Delta}}\right) \\ & \le C e^{4\Delta} \frac{\log(n\Delta)}{n\Delta} \end{align*} $$
+
+from Lemma 4.3 with $\eta > 2$, $\kappa > e^{2\Delta}/\Delta$ and where $C$ depends on $\kappa$, $\mathbb{E}[X_1^2]$ and $\mathbb{E}[X_1^4]$. Finally, gathering all the above inequalities, we get the following upper bound for $\bar{f}_{m,\Delta}$, $0 < m \le (n\Delta)^{\alpha}$,
+
+$$ \mathbb{E}[\|\bar{f}_{m,\Delta} - f\|^2] \le \|f_m - f\|^2 + Ce^{4\Delta} \frac{\log(n\Delta)}{n\Delta} m + \frac{2}{n\Delta} \int_{-m}^{m} \frac{du}{|\varphi_\Delta(u)|^2} + \frac{2^3 \cdot 5^2\, m^2}{(n\Delta)^2}, \quad (4.8) $$
+
+where $C$ depends on $\kappa$, $\mathbb{E}[X_1^2]$ and $\mathbb{E}[X_1^4]$.
+
+In the sequel $C$ denotes a constant depending on $\kappa$, $\mathbb{E}[X_1^2]$ and $\mathbb{E}[X_1^4]$ whose value may change from line to line.
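To see the bias-variance tradeoff in (4.8) concretely, one can evaluate its dominant terms as a function of the cutoff $m$ for an illustrative model. The standard Gaussian jump density, the unit jump intensity, and the dropped constants are assumptions of this sketch, not part of the theorem:

```python
import math
import numpy as np

def risk_bound(m, n_delta, delta):
    # Bias term for a standard Gaussian jump density f:
    # ||f_m - f||^2 = (1/2pi) * int_{|u|>m} e^{-u^2} du = erfc(m) / (2 sqrt(pi)).
    bias = math.erfc(m) / (2.0 * math.sqrt(math.pi))
    # Main variance term of (4.8): (2/(n Delta)) * int_{-m}^{m} |phi_Delta(u)|^{-2} du,
    # with |phi_Delta(u)|^2 = exp(-2 Delta (1 - e^{-u^2/2})) for unit jump intensity.
    u = np.linspace(-m, m, 2001)
    du = u[1] - u[0]
    var = (2.0 / n_delta) * np.sum(np.exp(2.0 * delta * (1.0 - np.exp(-u**2 / 2.0)))) * du
    return bias + var

grid = np.arange(0.2, 8.0, 0.05)
risks = [risk_bound(m, n_delta=1e4, delta=0.5) for m in grid]
m_star = float(grid[int(np.argmin(risks))])
# The bound is U-shaped in m: an interior cutoff beats both extremes.
assert min(risks) < risks[0] and min(risks) < risks[-1]
assert 1.0 < m_star < 5.0
```

The minimizing cutoff moves right as $n\Delta$ grows, matching the usual spectral-cutoff picture.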
+
+Step 2: Adaptation. Let $0 < m \le (n\Delta)^{\alpha}$ be fixed. Consider the event $\mathcal{E} = \{\hat{m}_n < m\}$; on this event we control the surplus in the bias of the estimator $\tilde{f}_{\hat{m}_n}$. Using the inequality $|\varphi|^2 \le 2|\tilde{\varphi}_n|^2 + 2|\varphi - \tilde{\varphi}_n|^2$, along with the definition of $\hat{m}_n$ and (4.8), gives
+
+$$ \mathbb{E}\left[\mathbf{1}_{\mathcal{E}} \int_{|u| \in [\hat{m}_n, m]} |\varphi(u)|^2 du\right] \le 2\mathbb{E}\left[\mathbf{1}_{\mathcal{E}} \int_{|u| \in [\hat{m}_n, m]} \frac{\kappa_{n,\Delta}^2}{n\Delta} du\right] + 2 \int_{-m}^{m} \mathbb{E}[|\tilde{\varphi}_n(u) - \varphi(u)|^2] du $$
+---PAGE_BREAK---
+
+$$
+\begin{align*}
+& \le 4 \frac{\kappa_{n, \Delta}^2 m}{n \Delta} + \frac{4}{n \Delta} \int_{-m}^{m} \frac{du}{|\varphi_{\Delta}(u)|^2} \\
+& \qquad + 2^2 \cdot 5^2\, \mathbb{E}[X_1^2] \frac{m}{n \Delta} + 2^4 \cdot 5^2\, \frac{m^2}{(n \Delta)^2} + C e^{4 \Delta} \frac{\log(n \Delta)}{n \Delta} m.
+\end{align*}
+$$
+
+Recalling that $\kappa_{n,\Delta} = e^{2\Delta} + \kappa\sqrt{\log(n\Delta)}$, together with (4.8), this immediately
+implies that, on the event $\mathcal{E}$,
+
+$$
+\mathbb{E}[\|\bar{f}_{\hat{m}_n, \Delta} - f\|^2 \mathbf{1}_{\mathcal{E}}] \le \|f_m - f\|^2 + C e^{4\Delta} \frac{\log(n\Delta)m}{n\Delta} + \frac{5}{n\Delta} \int_{-m}^{m} \frac{du}{|\varphi_\Delta(u)|^2} + \frac{2^5 \cdot 5^2\, m^2}{(n\Delta)^2}.
+$$
+
+Second, consider the complement set $\mathcal{E}^c$, on which we control the surplus in the
+variance of $\tilde{f}_{\hat{m}_n}$. By the definition of $\hat{m}_n$, it holds
+
+$$
+\mathbb{E}\left[\int_{|u| \in [m, \hat{m}_n]} |\bar{\varphi}_n(u) - \varphi(u)|^2 du \mathbf{1}_{\mathcal{E}^c}\right] \leq \int_{|u| \in [m, (n\Delta)^{\alpha}]} \mathbb{E}[|\bar{\varphi}_n(u) - \varphi(u)|^2] du.
+$$
+
+Let $\eta > 2$, such that $\alpha - \eta < -1$, and $\zeta > \sqrt{\Delta(1+2\eta)}$ and $\gamma_{\Delta}$ as in the proof of Theorem 3.1 (leading to $M_{n,\Delta}^{\gamma_{\Delta}} = +\infty$). Then, Lemmas 4.1 (decomposing on $\Omega_{\zeta,\Delta}((n\Delta)^{\alpha})$) and 4.2 lead to
+
+$$
+\begin{align*}
+\mathbb{E}[|\bar{\varphi}_n(u) - \varphi(u)|^2] &\le |\varphi(u)|^2 + \mathbb{E}[|\hat{\varphi}_n(u) - \varphi(u)|^2] \\
+&\le |\varphi(u)|^2 + \frac{2}{n\Delta|\varphi_\Delta(u)|^2} + \frac{\mathbb{E}[X_1^2]}{n\Delta} + \frac{4}{n\Delta}.
+\end{align*}
+$$
+
+First, on the event $\{|\varphi(u)| > e^{2\Delta}/\sqrt{n\Delta}\}$, we obtain
+
+$$
+\mathbb{E}[|\bar{\varphi}_n(u) - \varphi(u)|^2] \leq |\varphi(u)|^2 (6 + \mathbb{E}[X_1^2]).
+$$
+
+Consequently, define $C_0 := 6 + \mathbb{E}[X_1^2]$, then,
+
+$$
+\mathbb{E} \left[ \int_{|u| \in [m, \hat{m}_n]} |\bar{\varphi}_n(u) - \varphi(u)|^2 \mathbf{1}_{\{|\varphi(u)| > e^{2\Delta}/\sqrt{n\Delta}\}} du \mathbf{1}_{\mathcal{E}^c} \right] \le C_0 \int_{[-m, m]^c} |\varphi(u)|^2 du.
+$$
+
+Next, using that $|\bar{\varphi}_n(u)| \le 4$ and the definition of $\hat{m}_n$, we derive that
+
+$$
+\begin{align*}
+& \mathbb{E} \left[ \int_{|u| \in [m, \hat{m}_n]} |\bar{\varphi}_n(u) - \varphi(u)|^2 \mathbf{1}_{\{|\varphi(u)| \le e^{2\Delta}/\sqrt{n\Delta}\}} du \mathbf{1}_{\mathcal{E}^c} \right] \\
+&\le \int_{|u| \in [m, (n\Delta)^{\alpha}]} |\varphi(u)|^2 du + 5^2 \int_{|u| \in [m, (n\Delta)^{\alpha}]} \mathbb{P}(|\hat{\varphi}_n(u)| \geq \kappa_{n,\Delta}/\sqrt{n\Delta}) \mathbf{1}_{\{|\varphi(u)| \leq e^{2\Delta}/\sqrt{n\Delta}\}} du \\
+&\le \int_{|u| \in [m, (n\Delta)^{\alpha}]} |\varphi(u)|^2 du + 5^2 \int_{|u| \in [m, (n\Delta)^{\alpha}]} \mathbb{P}(|\hat{\varphi}_n(u) - \varphi(u)| > \kappa \sqrt{\log(n\Delta)/(n\Delta)}) du.
+\end{align*}
+$$
+---PAGE_BREAK---
+
+Finally, we bound the last term using Lemma 4.3 with $\eta > 3$ (so that $2\alpha - \eta < -1$) and $\zeta > \sqrt{\Delta(1 + 2\eta)}$; it follows that
+
+$$ \int_{|u| \in [m, (n\Delta)^{\alpha}]} \mathbb{P}(|\hat{\varphi}_n(u) - \varphi(u)| > \kappa \sqrt{\frac{\log(n\Delta)}{n\Delta}}) du \le 4(n\Delta)^{\alpha-c(\Delta)^2} + \frac{C'}{n\Delta}, $$
+
+where $C'$ depends on $\mathbb{E}[X_1^2]$ and $\mathbb{E}[X_1^4]$. Putting the above together, we have shown that for a positive constant $C_1$ depending on $\kappa$, $\mathbb{E}[X_1^2]$ and $\mathbb{E}[X_1^4]$, and a constant $C_2$ depending on $\mathbb{E}[X_1^2]$ and $\mathbb{E}[X_1^4]$,
+
+$$ \begin{aligned} \mathbb{E}[\|\bar{f}_{\hat{m}_n, \Delta} - f\|^2] \le C_1 (\|f_m - f\|^2 &+ \frac{\log(n\Delta)m}{n\Delta} + \frac{1}{n\Delta} \int_{-m}^{m} \frac{du}{|\varphi_\Delta(u)|^2} + \frac{m^2}{(n\Delta)^2}) \\ &+ C_2 (\frac{1}{n\Delta} + (n\Delta)^{\alpha-c(\Delta)^2}). \end{aligned} $$
+
+Taking the infimum in $m$ completes the proof. $\square$
+
+## References
+
+[1] A. Barron, L. Birgé, and P. Massart. Risk bounds for model selection via penalization. *Probability theory and related fields*, 113(3):301–413, 1999. MR1679028
+
+[2] J.-P. Baudry, C. Maugis, and B. Michel. Slope heuristics: overview and implementation. *Statistics and Computing*, 22(2):455–470, 2012. MR2865029
+
+[3] L. Birgé, P. Massart, et al. Minimum contrast estimators on sieves: exponential bounds and rates of convergence. *Bernoulli*, 4(3):329–375, 1998. MR1653272
+
+[4] B. Buchmann and R. Grübel. Decompounding: an estimation problem for Poisson random sums. *Ann. Statist.*, 31(4):1054–1074, 2003. MR2001642
+
+[5] C. Butucea. Deconvolution of supersmooth densities with smooth noise. *Canadian Journal of Statistics*, 32(2):181–192, 2004. MR2064400
+
+[6] C. Butucea and A. B. Tsybakov. Sharp optimality in density deconvolution with dominating bias. I. *Teor. Veroyatn. Primen.*, 52(1):111–128, 2007. MR2354572
+
+[7] C. Butucea and A. B. Tsybakov. Sharp optimality in density deconvolution with dominating bias. II. *Teor. Veroyatn. Primen.*, 52(2):336–349, 2007. MR2742504
+
+[8] R. J. Carroll and P. Hall. Optimal rates of convergence for deconvolving a density. *Journal of the American Statistical Association*, 83(404):1184–1186, 1988. MR0997599
+
+[9] A. J. Coca. Efficient nonparametric inference for discretely observed compound Poisson processes. *Probability Theory and Related Fields*, 170(1-2):475–523, 2018. MR3748329
+
+[10] A. J. Coca. Adaptive nonparametric estimation for compound Poisson processes robust to the discrete-observation scheme. arXiv:1803.09849 [math.ST].
+---PAGE_BREAK---
+
+[11] F. Comte, C. Duval, and V. Genon-Catalot. Nonparametric density estimation in compound Poisson processes using convolution power estimators. *Metrika*, 77(1):163–183, 2014. MR3152023
+
+[12] F. Comte and C. Lacour. Data-driven density estimation in the presence of additive noise with unknown distribution. *Journal of the Royal Statistical Society: Series B (Statistical Methodology)*, 73(4):601–627, 2011. MR2853732
+
+[13] F. Comte and C. Lacour. Anisotropic adaptive kernel deconvolution. In *Annales de l'Institut Henri Poincaré, Probabilités et Statistiques*, volume 49, pages 569–609. Institut Henri Poincaré, 2013. MR3088382
+
+[14] F. Comte, Y. Rozenholc, and M.-L. Taupin. Finite sample penalization in adaptive density deconvolution. *J. Stat. Comput. Simul.*, 77(11-12):977–1000, 2007. MR2416478
+
+[15] R. Cont and A. De Larrard. Price dynamics in a Markovian limit order market. *SIAM Journal on Financial Mathematics*, 4(1):1–25, 2013. MR3032934
+
+[16] D. L. Donoho and I. M. Johnstone. Ideal spatial adaptation by wavelet shrinkage. *Biometrika*, pages 425–455, 1994. MR1311089
+
+[17] D. L. Donoho, I. M. Johnstone, G. Kerkyacharian, and D. Picard. Wavelet shrinkage: asymptopia? *Journal of the Royal Statistical Society. Series B (Methodological)*, pages 301–369, 1995. MR1323344
+
+[18] C. Duval. Density estimation for compound Poisson processes from discrete data. *Stochastic Process. Appl.*, 123(11):3963–3986, 2013. MR3091096
+
+[19] C. Duval. When is it no longer possible to estimate a compound poisson process? *Electronic Journal of Statistics*, 8(1):274–301, 2014. MR3189556
+
+[20] C. Duval and J. Kappus. Nonparametric adaptive estimation for grouped data. *Journal of Statistical Planning and Inference*, 182:12–28, 2017. MR3574505
+
+[21] P. Embrechts, C. Klüppelberg, and T. Mikosch. *Modelling extremal events: for insurance and finance*, volume 33. Springer Science & Business Media, 2013. MR1458613
+
+[22] J. Fan. On the optimal rates of convergence for nonparametric deconvolution problems. *The Annals of Statistics*, pages 1257–1272, 1991. MR1126324
+
+[23] A. Goldenshluger and O. Lepski. Bandwidth selection in kernel density estimation: oracle inequalities and adaptive minimax optimality. *The Annals of Statistics*, pages 1608–1632, 2011. MR2850214
+
+[24] A. Goldenshluger and O. Lepski. On adaptive minimax density estimation on $\mathbb{R}^d$. *Probability Theory and Related Fields*, 159(3-4):479–543, 2014. MR3230001
+
+[25] S. Gugushvili, F. Van der Meulen, and P. Spreij. Nonparametric Bayesian inference for multidimensional compound Poisson processes. *Modern Stochastics: Theory and Applications*, 2(1):1–15, Mar 2015. MR3356921
+
+[26] J. Kappus and G. Mabon. Adaptive density estimation in deconvolution problems with unknown error distribution. *Electronic Journal of Statistics*, 8(2):2879–2904, 2014. MR3299125
+
+[27] C. Lacour, P. Massart, and V. Rivoirard. Estimator selection: a new method with applications to kernel density estimation. *Sankhya A*, 79(2):298–335,
+---PAGE_BREAK---
+
+2017. MR3707423
+
+[28] O. V. Lepski and T. Willer. Oracle inequalities and adaptive estimation in the convolution structure density model. *Ann. Statist.*, 47(1):233–287, 2019. MR3909933
+
+[29] M. Lerasle. Optimal model selection in density estimation. In *Annales de l'Institut Henri Poincaré, Probabilités et Statistiques*, volume 48, pages 884–908. Institut Henri Poincaré, 2012. MR2976568
+
+[30] K. Lounici and R. Nickl. Uniform risk bounds and confidence bands in wavelet deconvolution. *Annals of Statistics*, 39:201–231, 2011. MR2797844
+
+[31] P. Massart. *Concentration inequalities and model selection*, volume 6. Springer, 2007. MR2319879
+
+[32] M. H. Neumann. On the effect of estimating the error density in nonparametric deconvolution. *J. Nonparametr. Statist.*, 7(4):307–330, 1997. MR1460203
+
+[33] M. H. Neumann and M. Reiß. Nonparametric estimation for Lévy processes from low-frequency observations. *Bernoulli*, 15(1):223–248, 2009. MR2546805
+
+[34] M. Pensky, B. Vidakovic, et al. Adaptive wavelet estimator for nonparametric density deconvolution. *The Annals of Statistics*, 27(6):2033–2053, 1999. MR1765627
+
+[35] G. Rebelles. Structural adaptive deconvolution under $l_p$-losses. *Mathematical Methods of Statistics*, 25(1):26–53, 2016. MR3480609
+
+[36] P. Reynaud-Bouret, V. Rivoirard, and C. Tuleau-Malot. Adaptive density estimation: a curse of support? *Journal of Statistical Planning and Inference*, 141(1):115–139, 2011. MR2719482
+
+[37] L. A. Stefanski. Rates of convergence of some estimators in a class of deconvolution problems. *Statistics & Probability Letters*, 9(3):229–235, 1990. MR1045189
+
+[38] B. van Es, S. Gugushvili, and P. Spreij. A kernel type nonparametric density estimator for decompounding. *Bernoulli*, 13(3):672–694, 2007. MR2348746
\ No newline at end of file
diff --git a/samples/texts_merged/290491.md b/samples/texts_merged/290491.md
new file mode 100644
index 0000000000000000000000000000000000000000..9ce1379eb2c79b154cb1cfb039c0a33ba592fe2b
--- /dev/null
+++ b/samples/texts_merged/290491.md
@@ -0,0 +1,336 @@
+
+---PAGE_BREAK---
+
+# Optimization of Impedance Plane Reducing Coupling between Antennas
+
+Yong S. Joe¹, Jean-François D. Essiben², Eric R. Hedin¹, Jacquie Thérèse N. Bisse³, Jacques Matanga⁴
+
+¹Center for Computational Nanosciences, Department of Physics and Astronomy, Ball State University, Muncie, USA; ²Department of Electrical Engineering, Advanced Teachers' Training College for Technical Education, University of Douala, Douala, Cameroon; ³Department of Electrical Engineering and Computer Science, University Institute of Technology, University of Douala, Douala, Cameroon; ⁴Industrial Engineering Faculty, University of Douala, Douala, Cameroon.
+Email: ysjoe@bsu.edu
+
+Received September 2nd, 2010; revised November 5th, 2010; accepted November 19th, 2010.
+
+## ABSTRACT
+
+This paper provides a solution for the design optimization of two-dimensional impedance structures for a given electromagnetic field distribution. These structures must provide electromagnetic compatibility between antennas located on a plane. The optimization problem is solved for a given attenuation of the complete field. Since the design optimization gives a complex law of impedance distribution with a large real part, we employ the method of pointwise synthesis for the optimization of the structure. We also consider the design optimization case where the structure has zero impedance on its leading and trailing edges. The method of moments is used to solve the integral equations and the numerical solution is presented. The calculated impedance distribution provides the required level of antenna decoupling. The designs are based on the concept of soft and hard surfaces in electromagnetics.
+
+**Keywords:** Coupling, Design Optimization, Pointwise Synthesis
+
+## 1. Introduction
+
+In recent years, there has been growing interest in artificial electromagnetic materials, such as electromagnetic band-gap (EBG) structures. An EBG material is a periodic structure in which electromagnetic states are not allowed at certain frequency bands (bandgaps) [1]. The EBG structure has also been applied in antenna design to suppress surface waves and to improve the radiation performance of the antenna [2-9]. The majority of EBG structures used in the microstrip patch antenna application are of a single period [2].
+
+In practice, it is often required to provide significant decoupling between the receiving and transmitting antennas located on a common surface at a small distance from each other. One of the most widespread ways of reducing coupling between antennas is the use of a periodic structure [10-17]. The essence of this method is that, under certain conditions, such a structure “pushes away” the field from its surface, thereby reducing the amount of energy which enters the receiving antenna.
+
+Well-known papers [10-17] are usually devoted to studying the decoupling efficiency between antennas located on different surfaces, using various electrodynamic structures (interference coverings and corrugated structures). In these papers, the influence of different combinations of structural parameters on the level of antenna decoupling is considered; i.e., only the task of analysis is solved. However, effective design of decoupling structures requires the solution of the design optimization problem. Probably the only example of known research where an effort is made to solve the design optimization problem of decoupling devices is in Reference [18]. In this book, the law of distribution of purely reactive impedance was obtained, providing faster reduction of the field along a geodesic line which connects the antennas, compared with an ideally conducting surface. However, the authors of the book limited their research to only the specific case of purely reactive impedance. Furthermore, the results in Reference [18] do not present the distribution formulas of the synthesized impedance.
+
+In this paper, we propose a solution to the design optimization problem of impedance surfaces with the goal of creating effective decoupling structures. In particular, we investigate the degree of electromagnetic field (EMF) attenuation along the structure, the degree of reduction of
+---PAGE_BREAK---
+
+the complete field level across the impedance part of the structure, and the decoupling level between the antennas. In addition, we determine the degree of influence of the resistive part of the impedance on the rate of the field attenuation along the impedance structure and the influence of the initial and final parts of the impedance structure on the level of the complete field.
+
+The paper is organized as follows: In Section 2, we consider a solution to the design optimization problem of complex passive surface impedance using the law of electromagnetic field distribution. A solution to the problem using the pointwise synthesis method is given in Section 3, and numerical results are discussed in Section 4.
+
+## 2. Optimizing the Impedance Plane
+
+### 2.1. Statement of the Design Optimization Problem
+
+In this section, we consider a solution to the design optimization of a complex passive (Re(Z) ≥ 0) surface impedance Z for a given EMF distribution, and we study the achieved spatial decoupling of antenna devices located on the same plane. In practice, the problem of design optimization is solved in the absence of the second antenna, meaning the electromagnetic field decreases as a function of increasing distance from the source [18]. Furthermore, a receiving antenna is placed at the points with minimum intensity of the “interfering” field. This is the reason why we only solve the design optimization problem for a single transmitting antenna. The optimization problem can also be explained by considering that in order to provide electromagnetic compatibility between antennas by means of a surface impedance Z, it is necessary to minimize the functional $K_c = P_{rec}/P_{t}$, where $P_{rec}$ and $P_t$ are the powers of the signals at the exit of the receiving and the transmitting antennas, respectively. The decoupling coefficient, K, is inversely related to the value of $K_c$, and is defined as $K=-10\log(K_c)$ [19]. Minimization of the functional is usually performed by means of non-linear programming and requires the specification of an initial choice for the variation of the impedance distribution, Z. This choice is of great significance because it will influence the number of necessary iteration steps for the minimization of $K_c$ in order to reach global and local minima. The closer the initial choice is to the optimized design, the fewer the number of iteration steps. However, to widen the applicability of the results, we choose an initial solution which is nearly arbitrary (given the constraints imposed on the antenna), and work from there towards the optimized design.
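The decoupling coefficient defined here is simply the received-to-transmitted power ratio expressed in decibels; a one-line numerical illustration (the power values below are invented):

```python
import math

def decoupling_db(p_rec, p_tr):
    # K = -10 log10(K_c) with K_c = P_rec / P_tr; a larger K means better decoupling.
    return -10.0 * math.log10(p_rec / p_tr)

# If the receiving antenna picks up 1e-5 of the transmitted power, K = 50 dB.
assert abs(decoupling_db(1e-5, 1.0) - 50.0) < 1e-9
```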
+
+To begin the process of design optimization, we first consider a solution to the two-dimensional design problem for the arrangement shown in **Figure 1**. On the plane, S, at the point with coordinates (x = 0, y = 0) let an antenna be located which is in the shape of an infinite thread of in-phase magnetic current, directed along z-axis. The opening of a narrow parallel-plate waveguide (**Figure 1**) can serve as a physical model of such a radiator. This kind of source creates an electromagnetic field in the upper space with a magnetic field vector of intensity:
+
+$$H_z^i(x) = - \frac{kI_0^m}{4W} H_0^{(2)}(k|x|), \quad (1)$$
+
+where $k = 2\pi/\lambda$ is the wave number; $\lambda$ is the wavelength; $H_0^{(2)}$ is the zeroth-order Hankel function of the second kind; $I_0^m$ is the current amplitude; and W = 120π (ohms) is the characteristic resistance of free space. On the surface S, the Shchukin-Leontovich boundary impedance conditions are fulfilled:
+
+$$E_x = ZH_z . \qquad (2)$$
+
+It is necessary to determine the dependence of the passive impedance $Z(x)$ ($\text{Re}(Z) \ge 0$) for a given variation of the magnetic field, $H_z(x)$ on the surface S. Once $Z(x)$ is obtained, the complete field in the upper space is found, and then the degree of decoupling between antennas can be obtained.
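Equation (1) can be evaluated directly; the sketch below (using SciPy's `hankel2`, with an arbitrary current amplitude and wavelength) also checks the familiar cylindrical-wave far-field decay $|H_z^i| \propto 1/\sqrt{|x|}$:

```python
import numpy as np
from scipy.special import hankel2

def incident_field(x, k, i0m=1.0, w=120.0 * np.pi):
    # Equation (1): H_z^i(x) = -(k I0^m / (4W)) H_0^{(2)}(k|x|).
    return -(k * i0m / (4.0 * w)) * hankel2(0, k * np.abs(x))

k = 2.0 * np.pi  # wave number for lambda = 1 (illustrative choice)
h1 = abs(incident_field(10.0, k))
h2 = abs(incident_field(40.0, k))
# Far-field amplitude scales as 1/sqrt(|x|): a factor 4 in distance halves |H_z^i|.
assert abs(h1 / h2 - 2.0) < 0.02
```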
+
+### 2.2. Solution of the Design Optimization Problem
+
+To solve the problem, we use the Lorentz lemma for the upper space in **Figure 1**. As a result, we obtain the Fredholm integral equation of the second kind relative to the complete field $H_z(x)$ on the surface S:
+
+$$H_z(x) = 2H_z^i(x) + \int_{-\infty}^{\infty} Z(x')H_z(x')H_{z1}^m(x,x')dx' \quad (3)$$
+
+where $H_{z1}^m(x,x') = -\frac{k}{4W}H_0^{(2)}(k|x-x'|)$ and
+
+$H_z^i(x) = H_0^{(2)}(k|x|)$ is the incident field radiated by the antenna.
+
+**Figure 1.** A narrow parallel-plate waveguide with impedance.
+---PAGE_BREAK---
+
+Now, we consider the complete magnetic field on the section $x \in [-L, L]$ for the entire upper space. From Equation (3) we obtain a Fredholm integral equation of the first kind relative to $E_x(x)$:
+
+$$
+H_z(x) = 2H_0^{(2)}(k|x|) - \frac{k}{2W} \int_{-L}^{L} E_x(x') H_0^{(2)}(k|x-x'|) dx' \quad (4)
+$$
+
+The solution of Equation (4) for $E_x(x)$ on the finite interval $x \in [-L, L]$, given the complete magnetic field $H_z$, can be obtained numerically, for example by the method of Krylov-Bogolyubov [18]. The required impedance distribution is then found from the boundary conditions of Equation (2), $Z(x) = E_x(x)/H_z(x)$. In this case, no limitations are imposed in advance on the feasibility of the required impedance, which can therefore be checked in the process of calculation. In this way, we define a category of passive impedance decoupling structures.
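The procedure just described (discretize Equation (4) on $[-L, L]$ for a prescribed field $H_z$, solve for $E_x$, then form $Z = E_x/H_z$) can be sketched with a simple pulse-basis, point-matching scheme. The attenuated target field, the small-argument treatment of the singular Hankel kernel on the diagonal, and all parameter values below are illustrative assumptions, not the authors' exact Krylov-Bogolyubov implementation:

```python
import numpy as np
from scipy.special import hankel2

def synthesize_impedance(L=1.0, wavelength=1.0, n_cells=64, alpha_over_k=0.5):
    k = 2.0 * np.pi / wavelength
    w = 120.0 * np.pi                         # characteristic resistance of free space
    h = 2.0 * L / n_cells
    x = -L + (np.arange(n_cells) + 0.5) * h   # cell midpoints (x = 0 avoided for even n_cells)
    # Prescribed complete field: incident field times an exponential decay exp(-alpha|x|).
    hz_inc = hankel2(0, k * np.abs(x))
    hz = 2.0 * hz_inc * np.exp(-alpha_over_k * k * np.abs(x))
    # Moment matrix for (k/2W) * int E_x(x') H_0^{(2)}(k|x - x'|) dx' (pulse basis).
    dist = np.abs(x[:, None] - x[None, :])
    m = hankel2(0, k * np.where(dist > 0, dist, 1.0)) * h
    # Diagonal cells: integrate the small-argument form H_0^{(2)}(z) ~ 1 - (2i/pi)(ln(z/2) + gamma_E)
    # over one cell, giving h * (1 - (2i/pi)(ln(k h / 4) + gamma_E - 1)).
    np.fill_diagonal(m, h * (1.0 - 2j / np.pi * (np.log(k * h / 4.0) + np.euler_gamma - 1.0)))
    rhs = 2.0 * hz_inc - hz                   # Equation (4) rearranged for E_x
    ex = np.linalg.solve((k / (2.0 * w)) * m, rhs)
    return x, ex / hz                         # Z(x) = E_x(x) / H_z(x), Equation (2)

x, z = synthesize_impedance()
assert np.all(np.isfinite(z.real)) and np.all(np.isfinite(z.imag))
```

The accuracy of this sketch is governed by `n_cells` and by the diagonal quadrature rule; first-kind equations of this type are ill-conditioned, so finer grids may need regularization.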
+
+## 3. Pointwise Synthesis
+
+The problem of the design optimization of the complex passive surface impedance was solved in the previous section. The resulting structure gives a complex dependence of the impedance, where its real part is large and positive. The realization of such an impedance with a large real (resistive) part is a complicated task on a planar surface. However, there is another way of achieving a large value of impedance: in practice it can be realized by a corrugated structure with the depth of the corrugations approximately equal to $(2m+1)\lambda/4$, $(m=0, 1, 2, \dots)$, even though such a structure is narrowband.
+
+In this section, we consider the variation of antenna decoupling with the help of a purely reactive structure, for which the real part of the synthesized dependence of the complex impedance is simply set equal to zero. We assume a weak dependence of the tangential components $E_x$ and $H_z$ on the impedance in Equation (1) and express the boundary impedance conditions, Equation (2), in complex form:
+
+$$
+(\mathrm{Re}(Z)+i\,\mathrm{Im}(Z))(\mathrm{Re}(H_z)+i\,\mathrm{Im}(H_z))=\mathrm{Re}(E_x)+i\,\mathrm{Im}(E_x) \tag{5}
+$$
+
+Then, the solution of the system of equations
+
+$$
+\begin{cases}
+\operatorname{Re}(Z)\operatorname{Re}(H_z) - \operatorname{Im}(Z)\operatorname{Im}(H_z) = \operatorname{Re}(E_x) \\
+\operatorname{Re}(Z)\operatorname{Im}(H_z) + \operatorname{Im}(Z)\operatorname{Re}(H_z) = \operatorname{Im}(E_x)
+\end{cases}
+\qquad (6)
+$$
+
+can be obtained with the help of the method of linear programming [20-23]. Here, if the real part of the impedance in Equation (6) is set equal to zero (i.e., the resistive part $\operatorname{Re}(Z)=0$), we obtain the minimum deviation of the solution of the system of equations in Equation (6) at each point of the surface of the decoupling structure (such a synthesis is called “pointwise” [20]). This minimum-deviation point can be obtained using the exchange method of Stiefel [24]:
+
+$$
+\mathrm{Im}(Z') = \frac{\mathrm{Im}(H_z)\mathrm{Im}(E_x) - \mathrm{Re}(H_z)\mathrm{Re}(E_x)}{2\mathrm{Re}(H_z)\mathrm{Im}(H_z)} \quad (7)
+$$
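+
+With $\operatorname{Re}(Z)=0$, the two equations of the system in Equation (6) give two generally different reactance values, $\operatorname{Im}(Z) = -\operatorname{Re}(E_x)/\operatorname{Im}(H_z)$ and $\operatorname{Im}(Z) = \operatorname{Im}(E_x)/\operatorname{Re}(H_z)$; Equation (7) is precisely their average, i.e., the point equidistant from the two intercepts on the $\operatorname{Im}(Z)$ axis. A minimal numerical check of this identity (the field sample below is hypothetical, not taken from the paper's data):
+
+```python
+def pointwise_reactance(Ex, Hz):
+    """Compromise reactance of Eq. (7) for Re(Z) = 0: the midpoint of the
+    two Im(Z) intercepts implied by the system of equations (6)."""
+    z1 = -Ex.real / Hz.imag   # first equation of (6) with Re(Z) = 0
+    z2 = Ex.imag / Hz.real    # second equation of (6) with Re(Z) = 0
+    return 0.5 * (z1 + z2)    # equidistant point on the Im(Z) axis
+
+# Hypothetical tangential-field samples at one point of the surface
+Ex, Hz = 0.3 - 0.8j, 1.2 + 0.4j
+
+# The midpoint coincides with the closed form of Eq. (7)
+closed_form = (Hz.imag * Ex.imag - Hz.real * Ex.real) / (2 * Hz.real * Hz.imag)
+assert abs(pointwise_reactance(Ex, Hz) - closed_form) < 1e-12
+```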
+
+In order to look at the behavior of the overall impedance,
+which is the essence of the exchange method, the
+relationship between the imaginary and real parts of the
+impedance is explicitly plotted in **Figure 2**. The solution
+in Equation (7) is the point equidistant from the points
+where the straight lines given by the system in Equation (6)
+(with $\operatorname{Re}(Z)=0$) cross the ordinate axis ($\operatorname{Im}(Z)$).
+
+## 4. Results and Discussion
+
+* We specify the complete magnetic field on the impedance part of the structure in the following form:
+
+$$
+H_z(x) = 2H_z^i(x)e^{-\alpha|x|}, \quad (8)
+$$
+
+where $\alpha$ is the coefficient of attenuation ($\text{Re}(\alpha) \ge 0$).
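+
+As a quick sanity check on the attenuation demanded by Equation (8): with $k = 2\pi/\lambda$, the envelope $e^{-\alpha|x|}$ at the end of a structure of length $L = \lambda$ corresponds to the following extra attenuation levels (a back-of-the-envelope sketch using the same $\alpha$ values as in **Figure 3**):
+
+```python
+import math
+
+lam = 1.0                  # wavelength (arbitrary units)
+k = 2 * math.pi / lam      # free-space wavenumber
+L = lam                    # structure length, as in Figure 3
+
+def envelope_dB(alpha, x):
+    """Extra attenuation (in dB) of the prescribed envelope of Eq. (8)."""
+    return 20 * math.log10(math.exp(-alpha * abs(x)))
+
+print(envelope_dB(0.50 * k, L))   # about -27.3 dB
+print(envelope_dB(0.75 * k, L))   # about -40.9 dB
+```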
+
+The synthesized impedance must provide sharper EMF
+attenuation along the structure, as compared with an ideal
+conducting plane. The attenuation factor is defined by the
+value of the coefficient $\alpha$. As an example, **Figure 3**
+shows the dependence of the synthesized impedance
+
+Figure 2. Illustration for the exchange method.
+
+**Figure 3. Impedance variations: Re(Z) (solid and dashed curves) and Im(Z) (dotted and dash-dotted curves) for the decay coefficients α = 0.5k and α = 0.75k.**
+---PAGE_BREAK---
+
+[Re(Z) and Im(Z)] for two different coefficients of
+attenuation, where the length of the impedance structure
+is L = λ. It is seen that the real parts of the impedance
+are positive [Re(Z) for α = 0.5k (solid) and
+α = 0.75k (dashed)] and the imaginary parts are negative
+(capacitive) [Im(Z) for α = 0.5k (dotted) and
+α = 0.75k (dash-dotted)]. We note here that the impedance
+increases monotonically and its magnitude is larger
+for α = 0.75k.
+
+Next, we study the degree of the influence of this
+impedance on the coupling of antennas. **Figure 4(a)** shows
+the dependence of $H_z(x)$ on the synthesized impedance
+(**Figure 3**), normalized relative to the field $H_{z0}(x)$
+above an ideal conducting plane, for $\alpha = 0.5k$ (solid
+curve) and $\alpha = 0.75k$ (dashed curve). As we can see, a
+greater attenuation of the field is accompanied by a
+steeper slope of the impedance alteration, which coincides
+with the results in Ref. [18]. The main difference
+consists in the fact that the impedance obtained in this
+paper has not only a reactive component but also a
+resistive component, Re(Z) ≥ 0. In order to measure the
+degree of influence of the resistive part Re(Z) ≥ 0 of
+the impedance, Z(x), on the field attenuation along the
+impedance structure, we show in **Figure 4(b)** the dependence of $H_z(x)$ for the synthesized impedance ($\alpha = k$), normalized relative to the field $H_{z0}(x)$ above an ideal conducting plane, with the active component, Re($Z$) ≥ 0 (solid curve), and without it, Re($Z$) = 0 (dashed curve). The calculations show that the presence of the resistive part of the impedance not only does not worsen the level of decoupling between antennas, as stated in Ref. [13], but in fact increases it by about an additional 5 dB. These results are probably caused by the different dependence of the impedance obtained in this paper compared with what was analyzed in Ref. [13] (that dependence is called uniform). The results of the design optimization show that greater attenuation of the field is reached with a higher rate of impedance growth (generally of its capacitive part, **Figure 3**). However, the rate of impedance growth (slope of the curves in **Figure 4(b)**) cannot be arbitrarily large because of practical limitations on the precision of the production of the structures. Therefore, one way to increase the rate of impedance alteration (and hence the decoupling) is to substitute a periodic variation for the monotonically growing impedance.
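+
+The substitution just mentioned can be sketched as follows: a monotonic reactance profile over the length $L$ is compressed by an integer factor and repeated, so the slope within each period grows by that factor while the overall impedance range stays fixed (the linear ramp here is a hypothetical stand-in for a synthesized profile, not the paper's actual curve):
+
+```python
+import numpy as np
+
+def periodize(profile, n_periods):
+    """Compress an impedance profile by n_periods and tile it, so the
+    slope within each period is n_periods times the original slope."""
+    m = len(profile)
+    idx = (np.arange(m) * n_periods) % m   # sweep the profile n_periods times
+    return profile[idx]
+
+x = np.linspace(0.0, 1.0, 300)             # x / L along the structure
+im_z = -10.0 * x                           # hypothetical capacitive ramp Im(Z)
+im_z_periodic = periodize(im_z, 3)         # three periods over the same length
+```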
+
+**Figure 5** shows the dependence of the initial synthesized
+impedance with $\alpha=k$, for the structure with
+length $L=\lambda$ ($\operatorname{Re}(Z)$, red solid curve and $\operatorname{Im}(Z)$,
+blue dotted curve) and compressed by a factor of three
+($\operatorname{Re}(Z)$, black dashed curve and $\operatorname{Im}(Z)$, purple
+dash-dotted curve), i.e., with the rate of impedance
+alteration three times greater. **Figure 6** shows the
+dependence of the field, $H_z(x)$, normalized relative to the field
+$H_{z0}(x)$ above an ideal conducting plane, for the initial
+impedance (synthesized in **Figure 5**, solid and dotted
+curves) and the periodic impedance (see **Figure 5**, dashed
+and dash-dotted curves). Here, the dash-dotted curve
+shows the case of a homogeneous purely reactive
+
+Figure 4. (a) Behavior of the field attenuation for the synthesized impedance: $\alpha = 0.5k$ (solid curve) and $\alpha = 0.75k$ (dashed curve). (b) Behavior of the field attenuation for the synthesized impedance ($\alpha = k$) with the active component, Re($Z$) $\ge 0$ (solid curve), and without it, Re($Z$) = 0 (dashed curve).
+
+Figure 5. Variation of the initial synthesized impedance with $\alpha=k$, for the structure with length $L=\lambda$ ($\mathrm{Re}(Z)$, solid curve and $\mathrm{Im}(Z)$, dotted curve) and compressed by a factor of 3, ($\mathrm{Re}(Z)$, dashed curve, and $\mathrm{Im}(Z)$, dash-dotted curve); i.e., the slope of the impedance variation is 3 times larger.
+---PAGE_BREAK---
+
+Figure 6. Solid curve: variation of the field, $H_z(x)$, normalized relative to the field, $H_{z0}(x)$, above an ideal conducting plane for the initial impedance (synthesized in Figure 5, solid and dotted curves). Dashed curve: periodic impedance (see Figure 5, dashed and dash-dotted curves). Here, the dash-dotted curve shows the case of a homogeneous purely reactive impedance, $Z = -10i$.
+
+impedance $Z = -10i$. As we can see, the level of decoupling does not grow monotonically; at the end of each period, the coupling coefficient grows sharply near the minimum of the impedance, which is characteristic of the propagation of radio waves above non-uniform spreading surfaces [18]. Nevertheless, comparison of the curves in **Figure 6** shows that the use of several periods of impedance alteration brings an additional gain of 10-40 dB in the degree of antenna decoupling, as compared with the monotonic variation (**Figure 5**, solid and dotted curves). Clearly, the level of the field behind the impedance decoupling structure is defined primarily by the variation of the impedance. Moreover, it is evident that the main role is played here by the rate of impedance increase in the immediate proximity of the antenna. The results presented in **Figure 6** show that, from this point of view, the best results are obtained with a constant but large capacitive impedance. In this case, however, the field along the structure decreases inversely with distance to the 3/2 power. Most importantly, the constant impedance gives the best results outside the impedance structure, i.e., for $x \ge L$, where the placement of the receiving antenna is assumed.
+
+* We next consider the design optimization of the structure for a given field with the following form:
+
+$$H_z(x) = 2\Omega H_z^i(x) |H_0^{(2)}(k|x|)|^2 \quad (9)$$
+
+where $\Omega < 1$ is the coefficient defining the degree of reduction of the overall field intensity on the impedance part of the structure.
+
+**Figure 7** shows the variation of the synthesized impedance ($\text{Re}(Z)$, solid curve and $\text{Im}(Z)$, dashed curve) for the structure with the parameters: $L = \lambda$ and $\Omega = 10^{-4}$. The behavior of the impedance shows that the
+
+Figure 7. Variation of the synthesized impedance ($\text{Re}(Z)$, solid curve and $\text{Im}(Z)$, dashed curve). The parameters for the calculation are $L = \lambda$ and $\Omega = 10^{-4}$.
+
+main load in the reduction of the field along the structure is carried by the parts at the beginning and at the end of the structure, and this does not depend on the length of the structure itself. For example, the dependence of the synthesized impedance distribution for the structure with length $L = 3\lambda$ is shown in **Figure 8**. As the length of the structure increases, the characteristics of the impedance remain the same as in **Figure 7**. The only difference is that **Figure 7** is a version of **Figure 8** compressed by a factor of 3 along the length of the structure, with a minor deviation of the impedance values.
+
+The real and imaginary values of the impedance are influenced only by the degree of the field reduction on the structure. To demonstrate this fact, we show in **Figure 9** the variation of the impedance on the structure with the following parameters: $L = \lambda$ and $\Omega = 10^{-3}$. When the degree of the field reduction changes from $\Omega = 10^{-4}$ (**Figure 7**) to $\Omega = 10^{-3}$ (**Figure 9**), the real and imaginary values of the impedance are reduced by factors of approximately 5 and 2.6, respectively.
+
+In order to see the dependence of the field attenuation on the synthesized impedance, we plot $|H_z(x)/H_{z0}(x)|$ as a function of $x/\lambda$ in **Figure 10**. The solid curve
+
+Figure 8. Variation of the synthesized impedance ($\text{Re}(Z)$, solid curve and $\text{Im}(Z)$, dashed curve) with the length $L = 3\lambda$.
+---PAGE_BREAK---
+
+Figure 9. Variation of the synthesized impedance (Re(Z), solid curve and Im(Z), dashed curve) with the following parameters: L = λ and Ω = 10⁻³.
+
+Figure 10. Behavior of the field attenuation relative to Figure 7 (solid curve) and with a purely reactive impedance, Re(Z) = 0, (dashed curve).
+
+corresponds to the impedance distribution (both Re(Z) and Im(Z), shown in Figure 7) and the dashed curve to the purely reactive impedance (Re(Z) = 0). This result indicates that the prevailing role in the reduction of the level of the complete field behind the impedance structure is played by the reactive part of the impedance. The presence of an active component leads to an additional decrease of the field behind the structure, making it 3-5 dB smaller.
+
+We notice from the behavior of the curves in Figures 7-9 that the main load falls on the initial and final parts of the structure (the so-called “take off” and “landing” grounds of the structure [25]). To confirm this, we design a structure with zero impedance on its initial and final parts, shown in Figure 11, where the parameters L = λ and Ω = 10⁻⁴ are used. Figure 12 shows the dependence of |Hz(x)/Hz0(x)| on the impedance distribution for a structure with zero impedance on its initial part (at the distances L₁ = 0 and L₂ = 0.1λ, solid curve) and with zero impedance on its final part (at the distances L₁ = 0.9λ and L₂ = L = λ, dashed curve). For comparison, the dash-dotted curve corresponds to the pure impedance structure. The calculations show that the main role in providing decoupling between antennas belongs to the initial part (the “take off” stripe), directly
+
+Figure 11. Synthesis of structures with zero impedance on their initial and final sections. The parameters for the calculation are L = λ and Ω = 10⁻⁴.
+
+Figure 12. Behavior of the field attenuation for the variation of the synthesized impedance structures with zero impedance on its initial section (at the distances L₁ = 0λ and L₂ = 0.1λ, solid curve) and with zero impedance on its final part (at the distances L₁ = 0.9λ and L₂ = L (L = 1λ), dashed curve). For comparison, the dash-dotted curve corresponds with the pure impedance structure.
+
+touching the opening of the transmitting antenna. The impact of the final part in providing decoupling is much less. In addition, the “take off” stripe defines not only the level of the complete field above the impedance structure but also behind it, where the placement of the receiving antenna is assumed. At the same time, the presence of the ideal conducting part, even if it is very small (0.1λ), leads to sharp growth in the value of the complex impedance of the structure. This, in turn, makes its practical realization more difficult. This statement applies also for any length of the structure. We note that increasing the length of the impedance structure causes a significant reduction of the field only on the structure itself, but this reduction is much smaller behind it.
+
+Finally, we consider the method of pointwise synthesis. The distributions of the reactive impedance and the variation of |Hz(x)/Hz0(x)| for the designed structure are shown in Figures 13 and 14, respectively, with the same parameters as in Figure 7, for the case where the active part of the impedance is taken equal to zero (solid curves) and for the structure with the reactance calculated by Equation (7)
+---PAGE_BREAK---
+
+Figure 13. Variation of the reactive impedance relative to the initial impedance (solid curve) and the optimized impedance (dashed curve) in accordance with Equation (7). The parameters are the same as in Figure 7.
+
+Figure 14. Behavior of the field attenuation for the structure in which, after the solution of the synthesis problem, the active part of the impedance is set equal to zero (solid curves), and for the structure with the reactance calculated by Equation (7) (dashed curves). The parameters are the same as in Figure 7.
+
+(dashed curves). The reactance optimized in accordance with Equation (7) (Figure 13, dashed curve) differs completely from the initial calculation (Figure 13, solid curve), because in it, a capacitive impedance has emerged alongside the inductive impedance. Figure 14 shows the behavior of the complete field on the impedance part of the structure in more detail. The alternating character of the impedance distribution suggests that a structure which realizes it (for example, a corrugated one) will turn out to be more broadband. Common to all calculations was the large negative reactance near the
+
+opening of the transmitting antenna.
+
+The reactance given in Equation (7) has a dependence similar to the function $\cot(x)$. A similar form of the impedance (reactance) distribution was obtained in Ref. [26], where the authors considered the design optimization of an impedance plane transforming the cylindrical wavefront of a linear source into the front of an inhomogeneous plane wave reflected in a given direction. This similarity encourages the use of those results in the interest of decoupling antennas.
+
+## 5. Conclusions
+
+In summary, we have solved the problems of the optimization of the design of a complex passive surface impedance for a given electromagnetic field distribution, as well as by means of pointwise synthesis, with the purpose of creating optimized decoupling structures. We presented different variations of the field along the impedance structure. We have shown that a greater attenuation of the field is accompanied by a greater degree of impedance variation and that the use of several periods of impedance variation brings an additional gain in the amount of antenna decoupling. We also calculated the degree of influence of the resistive part of the impedance on the rate of the field attenuation along the impedance structure. Finally, we note that the most efficient way of reducing mutual coupling between antennas located on the same plane is the placement of an impedance structure with a large value of reactive impedance near the transmitting antenna. However, the interaction between the antennas was not taken into account in the design optimization. It is evident that the presence of the receiving aperture antenna will lead to an undesirable reduction of antenna decoupling due to the inhomogeneity created by its aperture. Future research will investigate this question.
+
+The derived impedance variation can be used as an independent solution of the problem of providing electromagnetic compatibility, as well as the first step in further optimization of the structure with the help of non-linear programming methods.
+
+## 6. Acknowledgements
+
+One of the authors (J.-F. D. Essiben) wishes to thank Professor Yu. V. Yukhanov from the Taganrog Institute of Technology at the Southern Federal University in Russia for helpful discussions.
+
+## REFERENCES
+
+[1] G. Goussetis, A. P. Feresidis and J. C. G. Apostolopoulos, “Periodically Loaded 1-D Metallodielectric Electromagnetic Bandgap Structures for Miniaturization and Bandwidth Enhancement,” *IEE Proceedings of Microwave Antennas Propagation*, Vol. 151, No. 6, 2004, pp. 481-484. doi:10.1049/ip-map:20040814
+
+[2] C. C. Chiau, X. Chen and C. Parini, “Multiperiod EBG Structure for Wide Stopband Circuits,” *IEE Proceedings of Microwave Antennas Propagation*, Vol. 150, No. 6, 2003, pp. 489-492. doi:10.1049/ip-map:20031087
+
+[3] N. C. Karmakar and M. N. Mollah, “Potential Applications of PBG Engineered Structures in Microwave Engineering: Part I,” *Microwave Journal*, Vol. 47, No. 7, 2004, pp. 22-44.
+
+[4] B. I. Rumsey, Z. Popovic and M. P. May, “Surface-Wave Guiding Using Periodic Structures,” *IEEE APS-International Symposium Digest*, Salt Lake City, 17-21 July 2000, pp. 342-345.
+
+[5] D. Sievenpiper and E. Yablonovitch, “Eliminating Surface Currents with Metallodielectric Photonic Crystals,” *IEEE International Microwave Symposium Digest*, Baltimore, 7-12 June 1998, pp. 663-666.
+
+[6] D. Sievenpiper, L. Zhang, R. F. J. Broas, N. G. Alexopoulos and E. Yablonovitch, “High-Impedance Electromagnetic Surfaces with a Forbidden Frequency Band,” *IEEE Transactions on Microwave Theory and Techniques*, Vol. 47, No. 11, 1999, pp. 2059-2074. doi:10.1109/22.798001
+
+[7] K. C. Chen, C. K. C. Tzuang, Y. Qian and T. Itoh, “Leaky Properties of Microstrip above a Perforated Ground Plane,” *IEEE International Microwave Symposium Digest*, Anaheim, 13-19 June 1999, pp. 69-72.
+
+[8] A. Freni, C. Mias and R. L. Ferrari, “Hybrid Finite-Element Analysis of Electromagnetic Plane Wave Scattering from Axially Periodic Cylindrical Structures,” *IEEE Transactions on Antennas and Propagation*, Vol. 46, No. 12, 1998, pp. 1859-1866. doi:10.1109/8.743824
+
+[9] P. S. Kildal, A. A. Kishk and A. Tengs, “Reduction of Forward Scattering from Cylindrical Objects Using Hard Surfaces,” *IEEE Transactions on Antennas and Propagation*, Vol. 38, No. 10, 1990, pp. 1537-1544. doi:10.1109/8.59765
+
+[10] S. Benenson and A. I. Kurkchan, “Decoupling of Antennas by Means of Periodic Structures,” *Radiotechnics and Electronics*, Vol. 37, No. 12, 1995, pp. 77-89.
+
+[11] K. K. Belostotskaya, M. A. Vasilyev and V. M. Legkov, “Spatial Decoupling between Antennas on Big Size Solids,” *Radiotechnics*, Vol. 41, No. 10, 1986, pp. 77-79.
+
+[12] V. N. Lavrushev and Y. E. Sedelnikov, “Construction of Antennas Taking into Account Decoupling Requirements,” *Transactions of Higher Education Institutions, Radio Electronics*, Vol. 23, No. 2, 1980, pp. 31-38.
+
+[13] V. V. Martsafey and I. G. Shvayko, “Influence of Corrugated Structures on Interaction of Near-Omnidirectional Antennas,” *Transactions of Higher Education Institutions, Radio Electronics*, Vol. 24, No. 5, 1981, pp. 18-22.
+
+
+
+[14] A. V. Kashin and V. I. Solovyov, “Research of Small-Sized on Interaction of Near-Omnidirectional Antennas, Located on a Circular Cylindrical Surface,” *Transactions of Higher Education Institutions, Radio Electronics*, Vol. 25, No. 2, 1982, pp. 78-80.
+
+[15] Y. L. Lomukhin, S. D. Badmayev and N. B. Chimindorz-hiev, “Decoupling of Antennas by the Edge of Conducting Semi-Plane,” *Radiotechnics*, Vol. 40, No. 8, 1985, pp. 47-50.
+
+[16] V. V. Martsafey and M. A. Solodovnikov, “Synthesis of Near-Omnidirectional Antenna with an Increased Electromagnetic Compatibility,” *Transactions of Higher Education Institutions, Radio Physics*, Vol. 23, No. 10, 1980, pp. 1250-1255.
+
+[17] A. G. Kurkchan, “Coupling between Antennas in the Presence of Corrugated Structures,” *Radiotechnics and Electronics*, Vol. 22, No. 7, 1977, pp. 1362-1373.
+
+[18] O. N. Tereshin, V. M. Sedov and A. F. Chaplin, “Synthesis of Antennas on Decelerating Structures,” Communication Press, Moscow, 1980.
+
+[19] Y. S. Joe, J.-F. D. Essiben and E. M. Cooney, “Radiation Characteristics of Waveguide Antennas Located on the Same Impedance Plane,” *Journal of Physics D: Applied Physics*, Vol. 41, No. 12, 2008, pp. (125503)1-11.
+
+[20] V. G. Sharvarko, “Pointwise Synthesis in the Inverse Task of Scattering for an Impedance Cylinder,” *Scattering of Electromagnetic Waves*, Taganrog, Vol. 44, No. 1, 1975, pp. 71-96.
+
+[21] B. M. Petrov and V. G. Sharvarko, “Inverse Problem of Diffraction for an Impedance Cylinder,” *Transactions of Higher Education Institutions, Radio Electronics*, Vol. 18, No. 12, 1975, pp. 90-93.
+
+[22] V. G. Sharvarko, “About the Realized Diagrams in Inverse Problem of Scattering for an Impedance Cylinder,” *Scattering of Electromagnetic Waves*, Taganrog, Vol. 44, No. 1, 1975, pp. 87-96.
+
+[23] B. M. Petrov and V. G. Sharvarko, “Approximated Solutions of Inverse Problem of Scattering for a Circular Impedance Cylinder,” *Scattering of Electromagnetic Waves*, Taganrog, Vol. 41, No. 1, 1976, pp. 11-24.
+
+[24] E. L. Stiefel, “Über diskrete und lineare Tschebyscheff-Approximationen,” *Numerische Mathematik*, Vol. 1, 1959, pp. 1-28. doi:10.1007/BF01386369
+
+[25] G. P. Grudinskaya, “Propagation of Radio Waves,” Higher School Press, Moscow, 1975.
+
+[26] A. Y. Yukhanov, “Two-Dimensional Task of Impedance Plane Synthesis,” *Radio Engineering Circuits, Signals and Devices*, Taganrog, Vol. 45, 1998, pp. 92-95.
+
+Copyright © 2011 SciRes.
+
+WET
\ No newline at end of file
diff --git a/samples/texts_merged/2932683.md b/samples/texts_merged/2932683.md
new file mode 100644
index 0000000000000000000000000000000000000000..d943f4a78ef62e21532adc6149e1c1b069231a27
--- /dev/null
+++ b/samples/texts_merged/2932683.md
@@ -0,0 +1,498 @@
+
+---PAGE_BREAK---
+
+10-2012
+
+# Image reconstruction from compressive samples via a max-product EM algorithm
+
+Zhao Song
+Iowa State University
+
+Aleksandar Dogandžić
+Iowa State University, ald@iastate.edu
+
+---PAGE_BREAK---
+
+# Image reconstruction from compressive samples via a max-product EM algorithm
+
+**Abstract**
+
+We propose a Bayesian expectation-maximization (EM) algorithm for reconstructing structured approximately sparse signals via belief propagation. The measurements follow an underdetermined linear model where the regression-coefficient vector is the sum of an unknown approximately sparse signal and a zero-mean white Gaussian noise with an unknown variance. The signal is composed of large- and small-magnitude components identified by binary state variables whose probabilistic dependence structure is described by a hidden Markov tree (HMT). Gaussian priors are assigned to the signal coefficients given their state variables and the Jeffreys' noninformative prior is assigned to the noise variance. Our signal reconstruction scheme is based on an EM iteration that aims at maximizing the posterior distribution of the signal and its state variables given the noise variance. We employ a max-product algorithm to implement the maximization (M) step of our EM iteration. The noise variance is a regularization parameter that controls signal sparsity. We select the noise variance so that the corresponding estimated signal and state variables (obtained upon convergence of the EM iteration) have the largest marginal posterior distribution. Our numerical examples show that the proposed algorithm achieves better reconstruction performance compared with the state-of-the-art methods.
+
+**Keywords**
+
+Belief propagation, expectation maximization (EM) algorithm, hidden Markov tree (HMT), image reconstruction, max-product algorithm, structured sparsity, sparse signal reconstruction
+
+**Disciplines**
+
+Applied Statistics | Electrical and Computer Engineering | Signal Processing
+
+**Comments**
+
+This article is from *Proceedings of SPIE 8499* (2012): 849908, doi:10.1117/12.930862. Posted with permission.
+
+**Rights**
+
+Copyright 2012 Society of Photo-Optical Instrumentation Engineers. One print or electronic copy may be made for personal use only. Systematic reproduction and distribution, duplication of any material in this paper for a fee or for commercial purposes, or modification of the content of the paper are prohibited.
+---PAGE_BREAK---
+
+# Image reconstruction from compressive samples via a max-product EM algorithm
+
+Zhao Song and Aleksandar Dogandžić
+
+ECpE Department, Iowa State University, 3119 Coover Hall, Ames, IA, 50011
+
+## ABSTRACT
+
+We propose a Bayesian expectation-maximization (EM) algorithm for reconstructing structured approximately sparse signals via belief propagation. The measurements follow an underdetermined linear model where the regression-coefficient vector is the sum of an unknown approximately sparse signal and a zero-mean white Gaussian noise with an unknown variance. The signal is composed of large- and small-magnitude components identified by binary state variables whose probabilistic dependence structure is described by a hidden Markov tree (HMT). Gaussian priors are assigned to the signal coefficients given their state variables and the Jeffreys' noninformative prior is assigned to the noise variance. Our signal reconstruction scheme is based on an EM iteration that aims at maximizing the posterior distribution of the signal and its state variables given the noise variance. We employ a max-product algorithm to implement the maximization (M) step of our EM iteration. The noise variance is a regularization parameter that controls signal sparsity. We select the noise variance so that the corresponding estimated signal and state variables (obtained upon convergence of the EM iteration) have the largest marginal posterior distribution. Our numerical examples show that the proposed algorithm achieves better reconstruction performance compared with the state-of-the-art methods.
+
+**Keywords:** Belief propagation, expectation maximization (EM) algorithm, hidden Markov tree (HMT), image reconstruction, max-product algorithm, structured sparsity, sparse signal reconstruction.
+
+## 1. INTRODUCTION
+
+The advent of compressive sampling (compressed sensing) in the past few years has sparked research activity in sparse signal reconstruction, whose main goal is to estimate the *sparsest* $p \times 1$ signal coefficient vector $s$ from the $N \times 1$ measurement vector $y$ satisfying the following underdetermined system of linear equations: $y = Hs$, where $H$ is an $N \times p$ *sensing matrix* and $N \le p$.
+
+A tree dependency structure is exhibited by the wavelet coefficients of many natural images.¹⁻⁵ A probabilistic Markov tree structure has been employed to model the statistical dependency between the state variables of wavelet coefficients.³ An approximate *belief propagation* algorithm was first applied to compressive sampling in the work by Baron et al.,⁶ where it was employed for Bayesian signal reconstruction with sparse Rademacher sensing matrices. Donoho et al.⁷ simplified the sum-product algorithm by approximating the messages with a Gaussian distribution specified by two scalar parameters, leading to their *approximate message passing (AMP)* algorithm. Following the AMP framework, Schniter⁸ proposed a *turbo-AMP* structured sparse signal recovery method based on loopy belief propagation and turbo equalization and applied it to reconstruct one-dimensional signals; Som and Schniter⁵ applied the turbo-AMP approach to reconstruct compressible images. However, the above references do not employ the exact form of the messages and also have the following limitations: Baron et al.⁶ rely on sparsity of the sensing matrix, the methods by Baron et al.⁶ and Donoho et al.⁷ apply to unstructured signals only, and the turbo-AMP approach⁵,⁸ needs the columns of the sensing matrix to be normalized; see [5, eq. (22)] and [8, Sec. IV.A].
+
+In this paper, we combine the hierarchical measurement model by Figueiredo and Nowak⁹ with a Markov tree prior on the binary state variables that identify the large- and small-magnitude signal coefficients and develop a Bayesian maximum *a posteriori* (MAP) expectation-maximization (EM) signal reconstruction scheme that aims at maximizing the posterior distribution of the signal and its state variables given the noise variance, where
+
+E-mail: {zhaosong,ald}@iastate.edu.
+---PAGE_BREAK---
+
+the maximization (M) step employs a max-product belief propagation algorithm. Unlike the previous work, we *do not* approximate the message form in our belief propagation scheme. Unlike the turbo-AMP scheme,⁵,⁸ our reconstruction scheme *does not* require the columns of the sensing matrix to be normalized. Since there are no loops in the graphical model behind our M-step objective function, the M step of our EM algorithm is exact. We have proposed a similar EM algorithm for a random signal model with purely sparse deterministic signal component and a noninformative prior on this component.¹⁰ The noise variance is a regularization parameter that controls signal sparsity and is selected so that the estimated signal and state variables have the largest marginal posterior distribution.
+
+In Section 2, we introduce our measurement and prior models. Section 3 describes the proposed EM algorithm, where the M step implementation via the max-product algorithm is presented in Section 3.1. The selection of the regularization noise variance parameter is discussed in Section 4. Numerical simulations in Section 5 compare reconstruction performances of the proposed and existing methods.
+
+We introduce the notation: $I_n$ and $\mathbf{0}_{n \times 1}$ denote the identity matrix of size $n$ and the $n \times 1$ vector of zeros, respectively; “$T$” and $\|\cdot\|_p$ are the transpose and $\ell_p$ norm, respectively; $\mathcal{N}(\mathbf{x}; \mu, \Sigma)$ denotes the probability distribution function (pdf) of a multivariate Gaussian random vector $\mathbf{x}$ with mean $\mu$ and covariance matrix $\Sigma$; $\text{Inv-}\chi^2(\sigma^2; \nu, \sigma_0^2)$ denotes the pdf of a scaled inverse chi-square distribution with $\nu$ degrees of freedom and a scale parameter $\sigma_0^2$, see [11, p. 50 and App. A]; $|\mathcal{T}|$ is the cardinality of the set $\mathcal{T}$; $v(\cdot)$ is an invertible operator that transforms the matrix element indices into vector element indices*. Finally, $\rho_H$ denotes the largest singular value of a matrix $H$, also known as the spectral norm of $H$, and “$\odot$” denotes the Hadamard (elementwise) product.
+
+## 2. MEASUREMENT AND PRIOR MODELS
+
+We model an $N \times 1$ real-valued measurement vector $\mathbf{y}$ using the standard additive Gaussian noise measurement model:²,⁵
+
+$$p_{\mathbf{y}|\boldsymbol{\theta}}(\mathbf{y}|\boldsymbol{\theta}) = \mathcal{N}(\mathbf{y}; H \mathbf{s}, \sigma^2 I_N) \quad (1)$$
+
+where $H$ is an $N \times p$ real-valued sensing matrix with rank($H$) = $N$ satisfying (without loss of generality)
+
+$$\rho_H = 1 \quad (2)$$
+
+$\mathbf{s} = [s_1, s_2, \dots, s_p]^T$ is an unknown $p \times 1$ real-valued signal coefficient vector, and $\sigma^2$ is the unknown noise variance. The set of the unknown parameters is
+
+$$\boldsymbol{\theta} = (\mathbf{s}, \sigma^2) \quad (3)$$
+
+with parameter space
+
+$$\Theta = \mathbb{R}^p \times [0, +\infty). \quad (4)$$
+
+We adopt the Jeffreys' noninformative prior for the variance component $\sigma^2$:
+
+$$p_{\sigma^2}(\sigma^2) \propto (\sigma^2)^{-1}. \quad (5)$$
+
+Define the vector of binary state variables $\mathbf{q} = [q_1, q_2, \dots, q_p]^T \in \{0, 1\}^p$ that determine if the magnitudes of the signal components $s_i$, $i = 1, 2, \dots, p$ are small ($q_i = 0$) or large ($q_i = 1$). Assume that $s_i$ are conditionally independent given $q_i$ and assign the following prior pdf to the signal coefficients:
+
+$$p_{s|\mathbf{q}, \sigma^2}(\mathbf{s}|\mathbf{q}, \sigma^2) = \prod_{i=1}^{p} [\mathcal{N}(s_i; 0, \gamma^2 \sigma^2)]^{q_i} [\mathcal{N}(s_i; 0, \epsilon^2 \sigma^2)]^{1-q_i} \quad (6a)$$
+
+where $\gamma^2$ and $\epsilon^2$ are known positive constants and, typically, $\gamma^2 \gg \epsilon^2$. Hence, the large- and small-magnitude signal coefficients $s_i$ corresponding to $q_i = 1$ and $q_i = 0$ are modeled as zero-mean Gaussian random variables
+
+*This operator is based on the MATLAB wavelet decomposition function `wavedec2` with Haar wavelet and has also been used by He and Carin⁴ and Som and Schniter.⁵
+---PAGE_BREAK---
+
+with variances $\gamma^2 \sigma^2$ and $\epsilon^2 \sigma^2$, respectively. Consequently, $\gamma^2$ and $\epsilon^2$ are relative variances (to the noise variance $\sigma^2$) of the large- and small-magnitude signal coefficients. Equivalently,
+
+$$p_{\mathbf{s}|\mathbf{q}, \sigma^2}(\mathbf{s} | \mathbf{q}, \sigma^2) = \mathcal{N}(\mathbf{s}; \mathbf{0}_{p \times 1}, \sigma^2 D(\mathbf{q})) \quad (6b)$$
+
+where
+
+$$D(\boldsymbol{q}) = \operatorname{diag}\{(\gamma^2)^{q_1} (\epsilon^2)^{1-q_1}, (\gamma^2)^{q_2} (\epsilon^2)^{1-q_2}, \dots, (\gamma^2)^{q_p} (\epsilon^2)^{1-q_p}\}. \quad (6c)$$
+
+We now introduce the Markov tree prior probability mass function (pmf) on the state variables $q_i$.³,⁵ To make this probability model easier to understand, we introduce two-dimensional signal element indices ($i_1, i_2$). Recall that the conversion operator $v(\cdot)$ is invertible; hence, there is a one-to-one correspondence between the corresponding one- and two-dimensional signal element indices. A parent wavelet coefficient with a two-dimensional position index ($i_1, i_2$) has four children in the finer wavelet decomposition level with two-dimensional indices ($2i_1 - 1, 2i_2 - 1$), ($2i_1 - 1, 2i_2$), ($2i_1, 2i_2 - 1$) and ($2i_1, 2i_2$), see Fig. 1(a). The parent-child dependency assumption implies that, if a parent coefficient in a certain wavelet decomposition level has small (large) magnitude, then its children coefficients in the next finer wavelet decomposition level tend to have small (large) magnitude as well. Denote by $\rho$ and $c$ the numbers of rows and columns of the image, and by $L$ the number of wavelet decomposition levels (tree depth).
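As a small illustration of the quadtree indexing above, the following Python sketch (hypothetical helper names, not part of the algorithm itself) maps a parent coefficient's two-dimensional index to its four children and back:

```python
def children(i1, i2):
    """Four children of the wavelet coefficient at 1-based index (i1, i2),
    located in the next finer decomposition level."""
    return [(2 * i1 - 1, 2 * i2 - 1), (2 * i1 - 1, 2 * i2),
            (2 * i1, 2 * i2 - 1), (2 * i1, 2 * i2)]

def parent(i1, i2):
    """Inverse map: index of the parent of coefficient (i1, i2)."""
    return ((i1 + 1) // 2, (i2 + 1) // 2)
```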
+
+We set the prior pmf $p_q(\mathbf{q})$ as follows. In the first wavelet decomposition level ($l=1$), assign
+
+$$p_{q_i}(1) = \Pr\{q_i = 1\} = \begin{cases} 1, & i \in \mathcal{A} \\ P_{\text{root}}, & i \in \mathcal{T}_{\text{root}} \end{cases} \quad (7a)$$
+
+where
+
+$$\mathcal{A} = v(\{1, 2, \dots, \rho/2^L\} \times \{1, 2, \dots, c/2^L\}) \quad (7b)$$
+
+$$\mathcal{T}_{\text{root}} = v([\{1, 2, \dots, \rho/2^{L-1}\} \times \{1, 2, \dots, c/2^{L-1}\}]) \setminus \mathcal{A} \quad (7c)$$
+
+are the sets of indices of the approximation and root node coefficients and $P_{\text{root}} \in (0,1)$ is a known constant denoting the prior probability that a root node signal coefficient has large magnitude, see Fig. 1(b). In the levels $l=2,3,\dots,L$, assign
+
+$$p_{q_i | q_{\pi(i)}}(1 | q_{\pi(i)}) = \begin{cases} P_H, & q_{\pi(i)} = 1 \\ P_L, & q_{\pi(i)} = 0 \end{cases} \quad (7d)$$
+
+where $\pi(i)$ denotes the index of the parent of node $i$. Here, $P_H \in (0,1)$ and $P_L \in (0,1)$ are known constants denoting the probabilities that the signal coefficient $s_i$ is large if the corresponding parent signal coefficient is large or small, respectively.
+
+Our wavelet tree structure consists of $|\mathcal{T}_{\text{root}}|$ trees and spans all signal wavelet coefficients except the approximation coefficients; hence, the set of indices of the wavelet coefficients within the trees is
+
+$$\mathcal{T} = v([\{1, 2, \dots, \rho\} \times \{1, 2, \dots, c\}]) \setminus \mathcal{A} \quad (7e)$$
+
+Define also the set of leaf variable node indices within the tree structure as
+
+$$\mathcal{T}_{\text{leaf}} = v([\{1, 2, \dots, \rho\} \times \{1, 2, \dots, c\}] \setminus [\{1, 2, \dots, \rho/2\} \times \{1, 2, \dots, c/2\}]) \quad (7f)$$
+
+see Fig. 1(b). More complex models are possible; see e.g., He and Carin⁴ and Som and Schniter,⁵ which, however, need at least 10 hyperparameters to specify the prior for the same wavelet tree and did not report large-scale examples. Here, we only need five tuning parameters $P_{\text{root}}$, $P_H$, $P_L$, $\gamma^2$, and $\epsilon^2$, each with a clear meaning. A fairly crude choice of these parameters is sufficient for achieving good reconstruction performance, see Section 5.
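To make the prior (7a)-(7d) concrete, here is a small Python sketch (an illustration with a hypothetical function name, not part of the reconstruction algorithm) that draws the states of one quadtree by ancestral sampling, level by level:

```python
import random

def sample_quadtree(levels, P_root, P_H, P_L):
    """Draw binary states q for a single wavelet quadtree by ancestral
    sampling: level l holds 4**l states, and each child's state depends
    only on its parent's state, as in (7a) and (7d)."""
    tree = [[1 if random.random() < P_root else 0]]   # root node, (7a)
    for _ in range(1, levels):
        next_level = []
        for q_parent in tree[-1]:
            p1 = P_H if q_parent == 1 else P_L        # Pr{q_i = 1 | q_parent}, (7d)
            next_level.extend(int(random.random() < p1) for _ in range(4))
        tree.append(next_level)
    return tree
```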
+
+The logarithm of the prior pmf $p_q(\mathbf{q})$ is
+
+$$\ln p_q(\mathbf{q}) = \text{const} + \Big[\sum_{i \in \mathcal{A}} \ln \mathbb{1}(q_i = 1)\Big] + \Big[\sum_{i \in \mathcal{T}_{\text{root}}} q_i \ln P_{\text{root}} + (1 - q_i) \ln(1 - P_{\text{root}})\Big] \\ + \Big[\sum_{i \in \mathcal{T} \setminus \mathcal{T}_{\text{root}}} q_i q_{\pi(i)} \ln P_H + (1 - q_i) q_{\pi(i)} \ln(1 - P_H) + q_i (1 - q_{\pi(i)}) \ln P_L + (1 - q_i) (1 - q_{\pi(i)}) \ln(1 - P_L)\Big] \quad (7g)$$
+
+where const denotes the terms that are not functions of $\mathbf{q}$.
+---PAGE_BREAK---
+
+Figure 1. (a) Wavelet quadtree structure with $L = 3$ levels and (b) types of wavelet decomposition coefficients.
+
+## 2.1 Bayesian Inference
+
+Define the vectors of state variables and signal coefficients
+
+$$ \xi = [\xi_1^T \ \xi_2^T \ \dots \ \xi_p^T]^T, \quad \xi_i = [q_i, s_i]^T. \tag{8} $$
+
+The joint posterior distribution of $\xi$ and $\sigma^2$ is
+
+$$ p_{\xi, \sigma^2 | y}(\xi, \sigma^2 | y) \propto p_{y|\theta}(y|\theta) p_{s|q, \sigma^2}(s|q, \sigma^2) p_q(q) p_{\sigma^2}(\sigma^2) \\ \propto (\sigma^2)^{-(p+N+2)/2} \exp[-0.5 \|y - Hs\|_2^2/\sigma^2 - 0.5 s^T D^{-1}(q)s/\sigma^2] (\epsilon^2/\gamma^2)^{0.5 \sum_{i=1}^p q_i} p_q(q) \tag{9} $$
+
+which implies
+
+$$ p_{\sigma^2|\xi,y}(\sigma^2|\xi,y) = \text{Inv-}\chi^2\left(\sigma^2; \ p + N, \ \frac{\|\mathbf{y}-H\mathbf{s}\|_2^2 + \mathbf{s}^T D^{-1}(\mathbf{q})\mathbf{s}}{p + N}\right) \tag{10a} $$
+
+$$ p_{\xi|y}(\xi|y) = \frac{p_{\xi, \sigma^2|y}(\xi, \sigma^2|y)}{p_{\sigma^2|\xi,y}(\sigma^2|\xi,y)} \propto p_q(q) (\epsilon^2/\gamma^2)^{0.5 \sum_{i=1}^{p} q_i} / \left[ \frac{\|\mathbf{y}-\mathbf{H}\mathbf{s}\|_2^2 + \mathbf{s}^T D^{-1}(\mathbf{q})\mathbf{s}}{p+N} \right]^{(p+N)/2} \tag{10b} $$
+
+and
+
+$$ p_{\xi | \sigma^2, y}(\xi | \sigma^2, y) \propto \exp[-0.5 \|y - Hs\|_2^2 / \sigma^2 - 0.5 s^T D^{-1}(q)s / \sigma^2] (\epsilon^2 / \gamma^2)^{0.5 \sum_{i=1}^{p} q_i} p_q(q). \tag{10c} $$
+
+We wish to maximize (10b) with respect to $\xi$, but cannot perform this task directly. Consequently, we adopt the following indirect approach: we first develop an EM algorithm for maximizing $p_{\xi|\sigma^2,y}(\xi|\sigma^2,y)$ in (10c) for a given $\sigma^2$ (Section 3) and then propose a grid search scheme for selecting the best regularization parameter $\sigma^2$ so that the estimated signal and state variables have the largest marginal posterior distribution (10b) (Section 4).
+
+## 3. AN EM ALGORITHM FOR MAXIMIZING $p_{\xi|\sigma^2,y}(\xi|\sigma^2,y)$
+
+Motivated by [9, Sec. V.A], we introduce the following hierarchical two-stage model:
+
+$$ p_{y|z,\sigma^2}(\mathbf{y}|\mathbf{z},\sigma^2) = \mathcal{N}(\mathbf{y}; H\mathbf{z}, \sigma^2(I_N - HH^\mathrm{T})) \tag{11a} $$
+
+$$ p_{z|s}(\mathbf{z}|s) = \mathcal{N}(z; s, \sigma^2 I_p) \tag{11b} $$
+---PAGE_BREAK---
+
+where $\mathbf{z}$ is a $p \times 1$ vector of missing data. Observe that the assumption (2) guarantees that the covariance matrix $\sigma^2(I_N - H H^T)$ in (11a) is positive semidefinite.
+
+Our EM algorithm for maximizing $p_{\xi|\sigma^2,y}(\xi|\sigma^2, y)$ in (10c) consists of iterating between the following expectation (E) and maximization (M) steps:
+
+$$ \text{E step:} \quad z^{(j)} = [z_1^{(j)}, z_2^{(j)}, \dots, z_p^{(j)}]^T = s^{(j)} + H^T (y - H s^{(j)}) \qquad (12a) $$
+
+and
+
+$$ \text{M step:} \quad \xi^{(j+1)} = \arg\max_{\xi} \left\{ -0.5 [\|z^{(j)} - s\|_2^2 + s^T D^{-1}(q)s] / \sigma^2 + \ln[p_q(q)] + 0.5 \ln(\epsilon^2/\gamma^2) \sum_{i=1}^{p} q_i \right\} \quad (12b) $$
+
+where $j$ denotes the iteration index. To simplify the notation, we omit the dependence of the iterates $\xi^{(j)}$ on $\sigma^2$ in this section. Denote by $\xi^{(+\infty)}$ and $s^{(+\infty)}$ the estimates of $\xi$ and $s$ obtained upon convergence of the above EM iteration.
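For intuition, the E step (12a) is a simple Landweber-type correction of the current signal iterate. A minimal numerical sketch (with a randomly generated $H$ scaled so that the normalization (2) holds):

```python
import numpy as np

rng = np.random.default_rng(0)
N, p = 8, 16
H = rng.standard_normal((N, p))
H /= np.linalg.norm(H, 2)        # scale so that the spectral norm rho_H = 1, cf. (2)
y = rng.standard_normal(N)

s = np.zeros(p)                  # current signal iterate s^(j)
z = s + H.T @ (y - H @ s)        # E step (12a): missing-data estimate z^(j)
```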
+
+Note that the M step in (12b) is equivalent to finding the mode of the following distribution:
+
+$$ p_{\xi|\sigma^2,z}(\xi|\sigma^2, z^{(j)}) \propto p_{\xi_A|\sigma^2,z}(\xi_A|\sigma^2, z^{(j)}) p_{\xi_\tau|\sigma^2,z}(\xi_\tau|\sigma^2, z^{(j)}) \quad (13) $$
+
+where $\xi_A$ and $\xi_\tau$ consist of $\xi_i$, $i \in A$ and $\xi_i$, $i \in T$, respectively, and
+
+$$ p_{\xi_A | \sigma^2, z}(\xi_A | \sigma^2, z) \propto \left\{ \prod_{i \in A} \mathcal{N}(z_i; s_i, \sigma^2) \mathcal{N}(s_i; 0, \gamma^2 \sigma^2) \mathbb{I}(q_i = 1) \right\} \quad (14a) $$
+
+$$ p_{\xi_{\tau} | \sigma^2, z}(\xi_{\tau} | \sigma^2, z) \propto \left\{ \prod_{i \in T} \mathcal{N}(z_i; s_i, \sigma^2) [\mathcal{N}(s_i; 0, \gamma^2 \sigma^2)]^{q_i} [\mathcal{N}(s_i; 0, \epsilon^2 \sigma^2)]^{1-q_i} \right\} p_{q_{\tau}}(q_{\tau}). \quad (14b) $$
+
+Here, (14a) follows from (7a) and (14b) corresponds to the hidden Markov tree (HMT) probabilistic model that contains no loops. Fig. 2 depicts an HMT that is a part of the probabilistic model (14b). Maximizing $p_{\xi_A|\sigma^2,z}(\xi_A|\sigma^2, z^{(j)})$ in (14a) with respect to $\xi_i$, $i \in A$ yields the M step for the approximation signal coefficients:
+
+$$ \xi_i^{(j+1)} = \left[1, \frac{\gamma^2 z_i^{(j)}}{1+\gamma^2}\right]^T, \quad i \in A \qquad (15) $$
+
+where we have used the fact that
+
+$$ \arg\max_{s_i} \mathcal{N}(z_i; s_i, \sigma^2) \mathcal{N}(s_i; 0, \tau^2) = \tau^2 z_i / (\sigma^2 + \tau^2). \quad (16) $$
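The scalar fact (16) underlying the M step can be checked numerically by a grid search over $s_i$ (a sanity-check sketch with arbitrary values of $z_i$, $\sigma^2$, and $\tau^2$):

```python
import numpy as np

def gauss(x, mean, var):
    """Univariate Gaussian pdf."""
    return np.exp(-0.5 * (x - mean) ** 2 / var) / np.sqrt(2 * np.pi * var)

z, sigma2, tau2 = 1.7, 0.5, 2.0
s_grid = np.linspace(-5.0, 5.0, 200001)
objective = gauss(z, s_grid, sigma2) * gauss(s_grid, 0.0, tau2)

s_hat_grid = s_grid[np.argmax(objective)]
s_hat_closed = tau2 * z / (sigma2 + tau2)   # closed form (16)
```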
+
+In the following section, we apply the max-product belief propagation algorithm$^{12-14}$ to each tree in our wavelet tree structure, with the goal of finding the mode of $p_{\xi_\tau|\sigma^2,z}(\xi_\tau|\sigma^2, z)$; then, our M step in (12b) for the nodes $i \in T$ reduces to applying this algorithm to find the mode of $p_{\xi_\tau|\sigma^2,z}(\xi_\tau|\sigma^2, z^{(j)})$.
+
+## 3.1 Maximizing $p_{\xi_\tau|\sigma^2,z}(\xi_\tau|\sigma^2, z)$
+
+We represent the HMT probabilistic model for $p_{\xi_\tau|\sigma^2,z}(\xi_\tau|\sigma^2, z)$ via potential functions as [see (14b)]
+
+$$ p_{\xi_{\tau} | \sigma^2, z}(\xi_{\tau} | \sigma^2, z) \propto \left[ \prod_{i \in T \setminus T_{\text{root}}} \psi_i(\xi_i) \psi_{i,\pi(i)}(q_i, q_{\pi(i)}) \right] \left[ \prod_{i \in T_{\text{root}}} \psi_i(\xi_i) \right] \quad (17) $$
+
+where
+
+$$ \psi_i(\xi_i) = \begin{cases} N(z_i; s_i, \sigma^2) [N(s_i; 0, \gamma^2 \sigma^2)]^{q_i} [N(s_i; 0, \epsilon^2 \sigma^2)]^{1-q_i}, & i \in T \setminus T_{\text{root}} \\ N(z_i; s_i, \sigma^2) [P_{\text{root}} N(s_i; 0, \gamma^2 \sigma^2)]^{q_i} [(1-P_{\text{root}}) N(s_i; 0, \epsilon^2 \sigma^2)]^{1-q_i}, & i \in T_{\text{root}} \end{cases} \quad (18a) $$
+
+and, for $i \in T \setminus T_{\text{root}}$,
+
+$$ \psi_{i,\pi(i)}(q_i, q_{\pi(i)}) = [P_H^{q_i} (1-P_H)^{1-q_i}]^{q_{\pi(i)}} [P_L^{q_i} (1-P_L)^{1-q_i}]^{1-q_{\pi(i)}}. \quad (18b) $$
+
+Our algorithm for maximizing (17) consists of computing and passing upward and downward messages and calculating and maximizing beliefs.
+---PAGE_BREAK---
+
+Figure 2. A hidden Markov tree, part of the probabilistic model (14b).
+
+### 3.1.1 Computing and Passing Upward Messages
+
+We propagate the upward messages from the lowest decomposition level (i.e., the leaves) towards the root of the tree. Fig. 3(a) depicts the computation of the upward message from variable node $\xi_i$ to its parent node $\xi_{\pi(i)}$, where a child of $\xi_i$ is a variable node $\xi_k$ with index $k \in \text{ch}(i)$, the index set of the children of $i$: for $i = v(i_1, i_2)$, $\text{ch}(i) = \{v(2i_1 - 1, 2i_2 - 1), v(2i_1 - 1, 2i_2), v(2i_1, 2i_2 - 1), v(2i_1, 2i_2)\}$. Here, we use a circle and an edge with an arrow to denote a variable node and a message, respectively. The upward messages have the following general form:¹³
+
+$$m_{i \to \pi(i)}(q_{\pi(i)}) = \alpha \max_{\xi_i} \left\{ \psi_i(\xi_i) \psi_{i,\pi(i)}(q_i, q_{\pi(i)}) \prod_{k \in \text{ch}(i)} m_{k \to i}(q_i) \right\} \quad (19)$$
+
+where $\alpha > 0$ denotes a normalizing constant used for computational stability.¹³ For nodes that have no children (corresponding to the level $L$, i.e., $i \in \mathcal{T}_{\text{leaf}}$), we set the multiplicative term $\prod_{k \in \text{ch}(i)} m_{k \to i}(q_i)$ in (19) to one.
+
+The only two candidates for $\xi_i$ in the maximization of (19) are $[0, \hat{s}_i(0)]^T$ and $[1, \hat{s}_i(1)]^T$, see (16) and (18), where
+
+$$\hat{s}_i(0) = \frac{\epsilon^2}{1 + \epsilon^2} z_i, \quad \hat{s}_i(1) = \frac{\gamma^2}{1 + \gamma^2} z_i. \quad (20)$$
+
+Substituting these candidates into (19) and normalizing the messages yields
+
+$$m_{i \to \pi(i)}(q_{\pi(i)}) = [\mu_i^u(0)]^{1-q_{\pi(i)}} [\mu_i^u(1)]^{q_{\pi(i)}} \quad (21a)$$
+---PAGE_BREAK---
+
+Figure 3. Computing and passing (a) upward and (b) downward messages.
+
+where $[\mu_i^u(0), \mu_i^u(1)]^T = \mu_i^u$,
+
+$$ \mu_i^u = \frac{\left[ \max\{\nu_{0,i}^u \odot \eta_i^u\}, \ \max\{\nu_{1,i}^u \odot \eta_i^u\} \right]^T}{\max\{\nu_{0,i}^u \odot \eta_i^u\} + \max\{\nu_{1,i}^u \odot \eta_i^u\}} = \frac{\left[ \exp\big(\ln \max\{\nu_{0,i}^u \odot \eta_i^u\} - \ln \max\{\nu_{1,i}^u \odot \eta_i^u\}\big), \ 1 \right]^T}{1 + \exp\big(\ln \max\{\nu_{0,i}^u \odot \eta_i^u\} - \ln \max\{\nu_{1,i}^u \odot \eta_i^u\}\big)} \quad (21b) $$
+
+$$ \nu_{0,i}^{u} = [1 - P_L, \ P_L]^T \odot \phi(z_i) \qquad (21c) $$
+
+$$ \nu_{1,i}^{u} = [1 - P_H, \ P_H]^T \odot \phi(z_i) \qquad (21d) $$
+
+$$ \eta_i^u = \begin{cases} \odot_{k \in \text{ch}(i)} \mu_k^u, & i \in T \setminus T_{\text{leaf}} \\ [1, 1]^T, & i \in T_{\text{leaf}} \end{cases} \qquad (21e) $$
+
+$$ \phi(z) = [\exp\{-0.5 z^2 / (\sigma^2 + \sigma^2 \epsilon^2)\}/\epsilon, \exp\{-0.5 z^2 / (\sigma^2 + \sigma^2 \gamma^2)\}/\gamma]^T \quad (21f) $$
+
+and $\epsilon = \sqrt{\epsilon^2} > 0$ and $\gamma = \sqrt{\gamma^2} > 0$. To derive (21a), we have used the following fact [see (16)]:
+
+$$ \max_{s_i} \mathcal{N}(z_i; s_i, \sigma^2) \mathcal{N}(s_i; 0, \tau^2) = \frac{1}{\sqrt{2\pi\sigma^2}\sqrt{2\pi\tau^2}} \exp[-0.5 z_i^2 / (\sigma^2 + \tau^2)]. \quad (22) $$
+
+A numerically stable implementation of (21b) that we employ is illustrated in the second expression in (21b). Similarly, the elementwise products in (21c)-(21e) are implemented as exponentiated sums of logarithms of the product terms.
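The log-domain trick in the second expression of (21b) avoids underflow when both unnormalized message weights are tiny. A minimal sketch (hypothetical helper name, assuming the two weights are supplied as logarithms):

```python
import numpy as np

def normalize_message(log_w0, log_w1):
    """Return [w0, w1] / (w0 + w1) computed stably from log-weights,
    as in the second form of (21b); assumes log_w0 - log_w1 does not
    overflow exp()."""
    r = np.exp(log_w0 - log_w1)           # ratio w0 / w1
    return np.array([r, 1.0]) / (1.0 + r)
```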
+
+### 3.1.2 Computing and Passing Downward Messages
+
+Upon obtaining all the upward messages, we now compute the downward messages and propagate them from the root towards the lowest level (i.e., the leaves). Fig. 3(b) depicts the computation of the downward message from the parent $\xi_{\pi(i)}$ to the variable node $\xi_i$, which involves upward messages to $\xi_{\pi(i)}$ from its other children, i.e. the siblings of $\xi_i$, marked as $\xi_k$, $k \in \text{sib}(i)$. This downward message also requires the message sent to $\xi_{\pi(i)}$ from its parent node, which is the grandparent of $\xi_i$, denoted by $\xi_{\text{gp}(i)}$. The downward messages have the following general form:¹³
+
+$$ m_{\pi(i) \to i}(q_i) = \alpha \max_{\xi_{\pi(i)}} \left\{ \psi_{\pi(i)}(\xi_{\pi(i)}) \psi_{i,\pi(i)}(q_i, q_{\pi(i)}) m_{\text{gp}(i) \to \pi(i)}(q_{\pi(i)}) \prod_{k \in \text{sib}(i)} m_{k \to \pi(i)}(q_{\pi(i)}) \right\} \quad (23) $$
+---PAGE_BREAK---
+
+where $\alpha > 0$ denotes a normalizing constant used for computational stability. For the variable nodes $i$ in the second decomposition level that have no grandparents (i.e., $\pi(i) \in \mathcal{T}_{\text{root}}$), we set the multiplicative term $m_{\text{gp}(i) \to \pi(i)}(q_{\pi(i)})$ in (23) to one.
+
+The only two candidates for $\xi_{\pi(i)}$ in the maximization of (23) are $[0, \hat{s}_{\pi(i)}(0)]^T$ and $[1, \hat{s}_{\pi(i)}(1)]^T$, see (16), (18), and (20). Consequently,
+
+$$m_{\pi(i) \to i}(q_i) = [\mu_i^d(0)]^{1-q_i} [\mu_i^d(1)]^{q_i} \quad (24a)$$
+
+for $\pi(i) \in \mathcal{T} \setminus \mathcal{T}_{\text{leaf}}$, where $[\mu_i^d(0), \mu_i^d(1)]^T = \boldsymbol{\mu}_i^d$ and
+
+$$\boldsymbol{\mu}_i^d = \frac{\left[\max\{\boldsymbol{\nu}_{0,i}^d \odot \boldsymbol{\eta}_i^d\}, \ \max\{\boldsymbol{\nu}_{1,i}^d \odot \boldsymbol{\eta}_i^d\}\right]^T}{\max\{\boldsymbol{\nu}_{0,i}^d \odot \boldsymbol{\eta}_i^d\} + \max\{\boldsymbol{\nu}_{1,i}^d \odot \boldsymbol{\eta}_i^d\}} = \frac{\left[\exp\big(\ln \max\{\boldsymbol{\nu}_{0,i}^d \odot \boldsymbol{\eta}_i^d\} - \ln \max\{\boldsymbol{\nu}_{1,i}^d \odot \boldsymbol{\eta}_i^d\}\big), \ 1\right]^T}{1 + \exp\big(\ln \max\{\boldsymbol{\nu}_{0,i}^d \odot \boldsymbol{\eta}_i^d\} - \ln \max\{\boldsymbol{\nu}_{1,i}^d \odot \boldsymbol{\eta}_i^d\}\big)} \quad (24b)$$
+
+$$\boldsymbol{\nu}_{0,i}^d = [1 - P_L, \ 1 - P_H]^T \odot \phi(z_{\pi(i)}) \odot [\bigodot_{k \in \text{sib}(i)} \boldsymbol{\mu}_k^u] \quad (24c)$$
+
+$$\boldsymbol{\nu}_{1,i}^d = [P_L, \ P_H]^T \odot \phi(z_{\pi(i)}) \odot [\bigodot_{k \in \text{sib}(i)} \boldsymbol{\mu}_k^u] \quad (24d)$$
+
+$$\boldsymbol{\eta}_i^d = \begin{cases} [1 - P_{\text{root}}, \ P_{\text{root}}]^T, & \pi(i) \in \mathcal{T}_{\text{root}} \\ \boldsymbol{\mu}_{\pi(i)}^d, & \pi(i) \in (\mathcal{T} \setminus \mathcal{T}_{\text{root}}) \setminus \mathcal{T}_{\text{leaf}}. \end{cases} \quad (24e)$$
+
+We have used (22) to derive (24a).
+
+The above upward and downward messages have discrete representations, which is practically important and is a consequence of the fact that we use a Gaussian prior on the signal coefficients, see (6). Indeed, in contrast with the existing message passing algorithms for compressive sampling,⁵⁻⁸ our max-product scheme employs exact messages.
+
+### 3.1.3 Maximizing Beliefs
+
+Upon computing and passing all the upward and downward messages, we maximize the beliefs, which have the following general form:¹³†
+
+$$b(\boldsymbol{\xi}_i) = \alpha \psi_i(\boldsymbol{\xi}_i) m_{\pi(i) \to i}(q_i) \prod_{k \in \text{ch}(i)} m_{k \to i}(q_i) \quad (25)$$
+
+for each $i \in T$, where $\alpha > 0$ is a normalizing constant. We then use these beliefs to obtain the mode
+
+$$\hat{\boldsymbol{\xi}}_{\mathcal{T}} = \arg\max_{\boldsymbol{\xi}_{\mathcal{T}}} p_{\boldsymbol{\xi}_{\mathcal{T}}|\sigma^2,z}(\boldsymbol{\xi}_{\mathcal{T}}|\sigma^2, z) \quad (26)$$
+
+where the elements of $\hat{\boldsymbol{\xi}}_{\mathcal{T}}$ are [see (20)]
+
+$$\hat{\boldsymbol{\xi}}_i = [\hat{q}_i, \hat{s}_i(\hat{q}_i)]^T = \arg\max_{\boldsymbol{\xi}_i} b(\boldsymbol{\xi}_i) = \begin{cases} [1, \hat{s}_i(1)]^T, & \beta_i(1) \ge \beta_i(0) \\ [0, \hat{s}_i(0)]^T, & \text{otherwise} \end{cases}, \quad i \in \mathcal{T} \quad (27a)$$
+
+and
+
+$$\beta_i = [\beta_i(0), \beta_i(1)]^T = \begin{cases} [1 - P_{\text{root}}, P_{\text{root}}]^T \odot \phi(z_i) \odot \boldsymbol{\eta}_i^u, & i \in T_{\text{root}} \\ \phi(z_i) \odot \boldsymbol{\mu}_i^d \odot \boldsymbol{\eta}_i^u, & i \in T \setminus T_{\text{root}}. \end{cases} \quad (27b)$$
+
+We have used (16) and (22) to solve the maximization in (27a) and derive (27a)-(27b).
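Once the beliefs (27b) are available, the per-node maximization (27a) is a two-way comparison followed by the matching shrinkage (20). A small sketch (hypothetical helper name):

```python
def map_node_estimate(beta, z_i, eps2, gamma2):
    """Pick the state with the larger belief, as in (27a), then return
    the matching shrinkage estimate from (20)."""
    q_hat = 1 if beta[1] >= beta[0] else 0
    tau2 = gamma2 if q_hat == 1 else eps2
    s_hat = tau2 * z_i / (1.0 + tau2)     # \hat{s}_i(q_hat) from (20)
    return q_hat, s_hat
```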
+
+†In (25), we set $m_{\pi(i)\to i}(q_i) = 1$ if $i \in T_{\text{root}}$ and $\prod_{k \in \text{ch}(i)} m_{k\to i}(q_i) = 1$ if $i \in T_{\text{leaf}}$.
+---PAGE_BREAK---
+
+### 4. SELECTING $\sigma^2$
+
+We apply our EM algorithm in Section 3 using a range of values of the regularization parameter $\sigma^2$. We traverse the grid of $K$ values of $\sigma^2$ sequentially and use the signal estimate from the previous grid point to initialize the signal estimation at the current grid point; in particular, we move from a larger $\sigma^2$ (say $\sigma_{old}^2$) to the next smaller $\sigma_{new}^2$ ($< \sigma_{old}^2$) and use $s^{(+\infty)}(\sigma_{old}^2)$ (obtained upon convergence of the EM iteration in Section 3 for $\sigma^2 = \sigma_{old}^2$) to initialize the EM iteration at $\sigma_{new}^2$. The largest $\sigma^2$ and the initial signal estimate at this grid point are selected as
+
+$$ \sigma_{\text{MAX}}^2 = \|y\|_2^2/(p+N+2), \quad s^{(0)}(\sigma_{\text{MAX}}^2) = \mathbf{0}_{p \times 1} \qquad (28a) $$
+
+and the consecutive grid points $\sigma_{new}^2$ and $\sigma_{old}^2$ satisfy
+
+$$ \sigma_{\text{new}}^2 = \sigma_{\text{old}}^2/d \qquad (28b) $$
+
+where $d > 1$ is a selected constant.
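The resulting grid (28a)-(28b) is geometric, largest variance first. A short sketch of its construction (hypothetical function name):

```python
def sigma2_grid(y_norm2, p, N, K=12, d=2.0):
    """K candidate regularization variances, largest first: the first
    grid point is (28a) and consecutive points satisfy (28b)."""
    grid = [y_norm2 / (p + N + 2)]        # sigma^2_MAX, (28a)
    for _ in range(K - 1):
        grid.append(grid[-1] / d)         # sigma^2_new = sigma^2_old / d, (28b)
    return grid
```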
+
+Finally, we select the $\sigma^2$ from the above grid of candidates that yields the largest marginal posterior distribution (10b). Denote this $\sigma^2$ by $\sigma_*^2$; then, we select the final estimates of $\xi$ and $s$ as $\xi^{(+\infty)}(\sigma_*^2)$ and $s^{(+\infty)}(\sigma_*^2)$, respectively.
+
+### 5. NUMERICAL EXAMPLES
+
+We compare the reconstruction performances of the following methods:
+
+• our proposed max-product EM algorithm in Section 3 with the variance parameter $\sigma^2$ selected as described in Section 4 (labeled MP-EM) using $K = 12$ grid points with $d = 2$ and the tuning parameters chosen as
+
+$$ \gamma^2 = 1000, \quad \epsilon^2 = 0.1, \quad P_{\text{root}} = P_H = 0.2, \quad P_L = 10^{-5}; \qquad (29) $$
+
+• the turbo-AMP approach⁵ with a MATLAB implementation at http://www.ece.osu.edu/~schniter/turboAMPimaging and the tuning parameters chosen as the default values in this implementation;
+
+• the fixed-point continuation active set algorithm¹⁵ (labeled FPC$_{AS}$) that aims at minimizing the Lagrangian cost function
+
+$$ 0.5 \|y - Hs\|_2^2 + \tau \|s\|_1 \qquad (30a) $$
+
+with the regularization parameter $\tau$ computed as
+
+$$ \tau = 10^a \|H^T y\|_\infty \qquad (30b) $$
+
+where $a$ is a tuning parameter chosen to achieve good reconstruction performance;
+
+• the Barzilai-Borwein version of the gradient-projection for sparse reconstruction method with debiasing in [16, Sec. III.B] (labeled GPSR) with the convergence threshold $tolP = 10^{-5}$ and tuning parameter $a$ in (30b) chosen to achieve good reconstruction performance;
+
+• the double overrelaxation (DORE) thresholding method in [17, Sec. III] initialized by the zero sparse signal estimate:
+
+$$ s^{(0)} = \mathbf{0}_{p \times 1}; \qquad (31) $$
+
+• the normalized iterative hard thresholding (NIHT) scheme¹⁸ initialized by the zero $s^{(0)}$ in (31);
+
+• the model-based iterative hard thresholding (MB-IHT) algorithm¹¹ using a greedy tree approximation,¹⁹ initialized by the zero $s^{(0)}$ in (31).
+---PAGE_BREAK---
+
+For the MP-EM, DORE, NIHT, and MB-IHT iterations, we use the following convergence criterion:
+
+$$
+\| \mathbf{s}^{(j+1)} - \mathbf{s}^{(j)} \|_{2}^{2} / p < 10^{-14}. \qquad (32)
+$$
+
+For GPSR and FPC$_{AS}$, we tuned the regularization parameter $\tau$ manually by varying $a$ over the set $\{-1, -2, -3, -4, -5, -6, -7, -8, -9\}$: the best reconstruction performances are achieved for $a = -3$.
+
+The sensing matrix *H* has the following structure:
+
+$$
+H = \frac{1}{\rho_{\Phi}} \Phi \Psi \tag{33}
+$$
+
+where $\Phi$ is the $N \times p$ sampling matrix and $\Psi$ is the $p \times p$ orthogonal inverse Haar wavelet transform matrix (satisfying $\Psi\Psi^T = I_p$) with
+
+$$
+L = 4
+\tag{34}
+$$
+
+wavelet decomposition levels. Note that *H* in (33) satisfies (2). In our examples presented here, the sampling matrices $\Phi$ are random Gaussian (see Section 5.1) or structurally random²⁰ (see Section 5.2).
+
+## 5.1 Image Reconstruction Using Gaussian I.I.D. Sensing Matrices
+
+We reconstruct the 128 × 128 ‘Cameraman’ image from compressive samples generated using random sampling matrices $\Phi$ with independent, identically distributed (i.i.d.) standard normal elements. Our performance metric is the normalized mean square error (MSE) of a signal coefficient vector estimate $\tilde{s}$:
+
+$$
+\text{MSE}\{\tilde{s}\} = \frac{\text{E}_{\Phi}[\|\tilde{s} - s\|_2^2]}{\|s\|_2^2} \qquad (35)
+$$
+
+computed using 10 Monte Carlo trials.
+
+We set the sparsity level $r$ to $2000\, N/p$ for NIHT and DORE and to $2500\, N/p$ for MB-IHT, tuned for good MSE performance.
+
+Recall that the turbo-AMP approach needs columns of the sensing matrix to be normalized, see [5, eq. (22)].
+When applying the turbo-AMP method, we scale the sensing matrix as follows:
+
+$$
+H_{\text{scale}} = (1/\sqrt{N}) \Phi \Psi \tag{36}
+$$
+
+so that it has approximately normalized columns; this approximation becomes more accurate as we increase $N$. With measurements $\mathbf{y}$ and scaled sensing matrix $H_{\text{scale}}$, turbo-AMP returns the scaled signal estimate $s_{\text{scale}}$, and we compute the final turbo-AMP signal estimate as $(\rho_\Phi/\sqrt{N})\, s_{\text{scale}}$, whose performance is evaluated using (35).
+
+Fig. 4 shows the MSE performances of different algorithms as functions of the normalized number of measurements (subsampling factor) $N/p$. MP-EM achieves the best MSE when $N/p \le 0.35$. The MSEs of GPSR and FPC$_{AS}$ are close to each other and smaller than those of DORE, NIHT, and MB-IHT for all $N/p$; the MSE of MP-EM is 1.8 to 2.5 times smaller than those of GPSR and FPC$_{AS}$, see Fig. 4.
+
+The relatively poor performance of MB-IHT is likely due to the inflexibility of the greedy tree approximation and *deterministic tree structure* that it employs‡; a relatively poor performance of MB-COSAMP (which employs the same deterministic tree structure) has also been reported in [5, Sec. IV.B].
+
+For $N/p \le 0.35$, turbo-AMP performs similarly to DORE, NIHT, and MB-IHT, but it outperforms all other methods for $N/p > 0.35$. Such a good performance of turbo-AMP at large $N$ is likely facilitated by the fact that the norms of the columns of (36) become closer to unity as $N$ grows.
+
+‡This deterministic tree structure model requires that, for binary states equal to one (identifying large signal coefficients), the binary states of all their ancestors must be one as well.
+---PAGE_BREAK---
+
+Figure 4. MSEs as functions of the subsampling factor $N/p$.
+
+## 5.2 Image Reconstruction Using a Structurally Random Sampling Matrix
+
+We now reconstruct the standard 256×256 ‘Lena’ and ‘Cameraman’ images from structurally random compressive samples, which implies that the sensing matrix $H$ has orthonormal rows (i.e., $HH^T = I_N$)²⁰ and, consequently, $\rho_\Phi = \rho_H = 1$. Our performance metric is the peak signal-to-noise ratio (PSNR) of an estimated signal $\tilde{s}$:
+
+$$ \text{PSNR (dB)} = 10 \log_{10} \left\{ \frac{\left[ (\Psi s)_{\text{MAX}} - (\Psi s)_{\text{MIN}} \right]^2}{\|\tilde{s} - s\|_2^2 / p} \right\}. \quad (37) $$
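Written out, (37) is the standard PSNR with the peak taken as the dynamic range of the true image. A small sketch (hypothetical function name):

```python
import numpy as np

def psnr_db(s_hat, s, peak_range):
    """PSNR as in (37); peak_range plays the role of the image dynamic
    range (Psi s)_MAX - (Psi s)_MIN."""
    mse = np.sum((s_hat - s) ** 2) / s.size
    return 10.0 * np.log10(peak_range ** 2 / mse)
```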
+
+We set the sparsity level $r$ to $12500\, N/p$ for NIHT and DORE and to $15000\, N/p$ for MB-IHT, tuned for good PSNR performance.
+
+When applying the turbo-AMP method, we scale the sensing matrix as follows:
+
+$$ H_{\text{scale}} = (\sqrt{p/N}) \Phi \Psi \quad (38) $$
+
+With measurements $\mathbf{y}$ and scaled sensing matrix $H_{\text{scale}}$, turbo-AMP returns the scaled signal estimate $s_{\text{scale}}$, and we compute the final turbo-AMP signal estimate as $(\sqrt{p/N}) s_{\text{scale}}$, whose performance is evaluated using (37). Our empirical experience shows that the sensing matrix scaling in (38) improves the reconstruction performance of the turbo-AMP algorithm in this example.
+
+Fig. 5(a) shows the PSNRs achieved by various methods when reconstructing the 256 × 256 ‘Lena’ image. For $N/p < 0.4$, the proposed MP-EM method outperforms all other methods, where the performance improvement compared with the closest competitor varies between 1.3 dB and 2 dB. For $N/p = 0.4$, turbo-AMP outperforms all other methods.
+
+Fig. 5(b) shows the PSNRs achieved by various methods when reconstructing the 256 × 256 ‘Cameraman’ image. For $N/p < 0.4$, the proposed MP-EM method outperforms all other methods by at least 2.2 dB. For $N/p = 0.4$, turbo-AMP outperforms all other methods; however, it performs quite poorly for $N/p < 0.35$.
+
+Fig. 6 shows the reconstructed 256 × 256 ‘Cameraman’ images by different methods for $N/p = 0.375$. The MP-EM algorithm achieves better reconstructed image quality compared with the other methods.
+
+## 6. CONCLUDING REMARKS
+
+We presented a Bayesian EM algorithm for image reconstruction from compressive samples using a Markov tree prior for the image wavelet coefficients. We employed the max-product belief propagation algorithm to implement the M step of the proposed EM iteration. Compared with the existing message passing algorithms
+---PAGE_BREAK---
+
+Figure 5. PSNRs as functions of the subsampling factor *N*/*p* for the 256 × 256 (a) ‘Lena’ and (b) ‘Cameraman’ images.
+
+in the compressive sampling area, our method does not approximate the message form. The simulation results show that our algorithm often outperforms existing algorithms for standard test images and sampling operators.
+
+The performance of our MP-EM scheme can be improved by employing multiple initializations. In particular, it is of interest to examine initialization of each grid point by the turbo-AMP estimate (in addition to the initialization in Section 4), which would likely improve the performance of MP-EM at large *N*/*p*. Our future work will also include the convergence analysis of the MP-EM algorithm and incorporating other measurement models.
+
+## REFERENCES
+
+[1] Baraniuk, R., Cevher, V., Duarte, M. F., and Hegde, C., "Model-based compressive sensing," *IEEE Trans. Inf. Theory* **56**, 1982–2001 (Apr. 2010).
+
+[2] Cevher, V., Indyk, P., Carin, L., and Baraniuk, R., "Sparse signal recovery and acquisition with graphical models," *IEEE Signal Process. Mag.* **27**, 92–103 (Nov. 2010).
+
+[3] Crouse, M. S., Nowak, R. D., and Baraniuk, R. G., "Wavelet-based statistical signal processing using hidden Markov models," *IEEE Trans. Signal Process.* **46**, 886–902 (Apr. 1998).
+
+[4] He, L. and Carin, L., "Exploiting structure in wavelet-based Bayesian compressive sensing," *IEEE Trans. Signal Process.* **57**, 3488–3497 (Sept. 2009).
+
+[5] Som, S. and Schniter, P., "Compressive imaging using approximate message passing and a Markov-tree prior," *IEEE Trans. Signal Process.* **60**, 3439–3448 (2012).
+
+[6] Baron, D., Sarvotham, S., and Baraniuk, R. G., "Bayesian compressive sensing via belief propagation," *IEEE Trans. Signal Process.* **58**, 269–280 (Jan. 2010).
+
+[7] Donoho, D., Maleki, A., and Montanari, A., "Message-passing algorithms for compressed sensing," *Proc. Nat. Acad. Sci.* **106**(45), 18914–18919 (2009).
+
+[8] Schniter, P., "Turbo reconstruction of structured sparse signals," in [*Proc. Conf. Inform. Sci. Syst.*], 1–6 (2010).
+
+[9] Figueiredo, M. A. T. and Nowak, R. D., "An EM algorithm for wavelet-based image restoration," *IEEE Trans. Image Process.* **12**, 906–916 (2003).
+
+[10] Song, Z. and Dogandžić, A., "A Bayesian max-product EM algorithm for reconstructing structured sparse signals," in [*Proc. Conf. Inform. Sci. Syst.*], (2012).
+
+[11] Gelman, A., Carlin, J. B., Stern, H. S., and Rubin, D. B., [*Bayesian Data Analysis*], Chapman & Hall, second ed. (2004).
+
+[12] Koller, D. and Friedman, N., [*Probabilistic Graphical Models*], MIT Press, Cambridge, MA (2009).
+---PAGE_BREAK---
+
+Figure 6. The ‘Cameraman’ image reconstructed by various methods for N/p = 0.375.
+---PAGE_BREAK---
+
+[13] Weiss, Y. and Freeman, W., "On the optimality of solutions of the max-product belief-propagation algorithm in arbitrary graphs," *IEEE Trans. Inf. Theory* **47**(2), 736–744 (2001).
+
+[14] Pearl, J., [*Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference*], Morgan Kaufmann, San Mateo, CA (1988).
+
+[15] Wen, Z., Yin, W., Goldfarb, D., and Zhang, Y., "A fast algorithm for sparse reconstruction based on shrinkage, subspace optimization, and continuation," *SIAM J. Sci. Comput.* **32**(4), 1832–1857 (2010).
+
+[16] Figueiredo, M., Nowak, R., and Wright, S., "Gradient projection for sparse reconstruction: Application to compressed sensing and other inverse problems," *IEEE J. Sel. Topics Signal Process.* **1**(4), 586–597 (2007).
+
+[17] Qiu, K. and Dogandžić, A., "Sparse signal reconstruction via ECME hard thresholding," *IEEE Trans. Signal Process.* **60** (2012).
+
+[18] Blumensath, T. and Davies, M. E., "Normalized iterative hard thresholding: Guaranteed stability and performance," *IEEE J. Sel. Topics Signal Process.* **4**(2), 298–309 (2010).
+
+[19] Baraniuk, R. G., "Optimal tree approximation with wavelets," in [*Proc. SPIE Wavelet Applicat. Signal Image Processing VII*], 196–207 (1999).
+
+[20] Do, T., Gan, L., Nguyen, N., and Tran, T., "Fast and efficient compressive sensing using structurally random matrices," *IEEE Trans. Signal Process.* **60**(1), 139–154 (2012).
\ No newline at end of file
diff --git a/samples/texts_merged/2947864.md b/samples/texts_merged/2947864.md
new file mode 100644
index 0000000000000000000000000000000000000000..1d9cef5bc1a08d7cf40687147916a9af0ff05fa1
--- /dev/null
+++ b/samples/texts_merged/2947864.md
@@ -0,0 +1,547 @@
+
+---PAGE_BREAK---
+
+# A CCA2 Secure Variant of the McEliece Cryptosystem
+
+Nico Döttling, Rafael Dowsley, Jörn Müller-Quade and Anderson C. A. Nascimento
+
+**Abstract**—The McEliece public-key encryption scheme has become an interesting alternative to cryptosystems based on number-theoretical problems. Differently from RSA and ElGamal, the McEliece PKC is not known to be broken by a quantum computer. Moreover, even though the McEliece PKC has a relatively big key size, encryption and decryption operations are rather efficient. In spite of all the recent results in coding-theory-based cryptosystems, to date there are no constructions secure against chosen ciphertext attacks in the standard model – the *de facto* security notion for public-key cryptosystems.
+
+In this work, we show the first construction of a McEliece based public-key cryptosystem secure against chosen ciphertext attacks in the standard model. Our construction is inspired by a recently proposed technique by Rosen and Segev.
+
+**Index Terms**—Public-key encryption, CCA2 security, McEliece assumptions, standard model
+
+## I. INTRODUCTION
+
+Indistinguishability of messages under adaptive chosen ciphertext attacks is one of the strongest known notions of security for public-key encryption schemes (PKE). Many computational assumptions have been used in the literature for obtaining cryptosystems meeting such a strong security notion. Given one-way trapdoor permutations, we know how to obtain CCA2 security from any semantically secure public-key cryptosystem [27], [34], [23]. Efficient constructions are also known based on number-theoretic assumptions [9] or on identity based encryption schemes [6]. Obtaining a CCA2 secure cryptosystem (even an inefficient one) based on the McEliece assumptions in the standard model has been an open problem in this area for quite a while. We note, however, that secure schemes in the random oracle model have been proposed in [19].
+
+Recently, Rosen and Segev proposed an elegant and simple new computational assumption for obtaining CCA2 secure PKEs: *correlated products* [33]. They provided constructions of correlated products based on the existence of certain *lossy*
+
+Rafael Dowsley is with the Department of Computer Science and Engineering, University of California at San Diego (UCSD), 9500 Gilman Drive, La Jolla, California 92093, USA. Email: rdowsley@cs.ucsd.edu. This work was partially done while the author was with the Department of Electrical Engineering, University of Brasilia. Supported in part by NSF grant CCF-0915675 and by a Focht-Powell fellowship.
+
+Nico Döttling and Jörn Müller-Quade are with the Institute of Cryptography and Security, Karlsruhe Institute of Technology, Am Fasanengarten 5, 76128 Karlsruhe, Germany. E-mail: {ndoett,muellerq}@ira.uka.de
+
+Anderson C. A. Nascimento is with the Department of Electrical Engineering, University of Brasilia. Campus Universitário Darcy Ribeiro, Brasília, CEP: 70910-900, Brazil. E-mail: andclay@ene.unb.br.
+
+A preliminary version of this work, enciphering just a single message rather than many possibly correlated ones, appeared in the proceedings of CT-RSA 2009 [11].
+
+trapdoor functions [29] which in turn can be based on the decisional Diffie-Hellman problem and on Paillier's decisional residuosity problem [29].
+
+In this paper, we show that ideas similar to those of Rosen and Segev can be applied for obtaining an efficient construction of a CCA2 secure PKE built upon the McEliece assumption. Inspired by the definition of correlated products [33], we define a new kind of PKE called k-repetition CPA secure cryptosystem and provide an adaptation of the construction proposed in [33] to this new scenario. Such cryptosystems can be constructed from very weak (one-way CPA secure) PKEs and randomized encoding functions. In contrast, Rosen and Segev give a more general, however less efficient, construction of correlated secure trapdoor functions from lossy trapdoor functions. We show directly that a randomized version of the McEliece cryptosystem [28] is k-repetition CPA secure and obtain a CCA2 secure scheme in the standard model. The resulting cryptosystem encrypts many bits as opposed to the single-bit PKE obtained in [33]. We expand the public and secret-keys and the ciphertext by a factor of k when compared to the original McEliece PKE.
+
+In a concurrent and independent work [16], Goldwasser and Vaikuntanathan proposed a new CCA2 secure public-key encryption scheme based on lattices using the construction by Rosen and Segev. Their scheme assumed that the problem of learning with errors (LWE) is hard [32].
+
+A direct construction of correlated products based on McEliece and Niederreiter PKEs has been obtained by Persichetti [30] in a subsequent work.
+
+## II. PRELIMINARIES
+
+### A. Notation
+
+If $x$ is a string, then $|x|$ denotes its length, while $|S|$ represents the cardinality of a set $S$. If $n \in \mathbb{N}$ then $1^n$ denotes the string of $n$ ones. $s \leftarrow S$ denotes the operation of choosing an element $s$ of a set $S$ uniformly at random. $w \leftarrow A(x, y, \dots)$ represents the act of running the algorithm $A$ with inputs $x, y, \dots$ and producing output $w$. We write $w \leftarrow A^O(x, y, \dots)$ for representing an algorithm $A$ having access to an oracle $O$. We denote by $\text{Pr}[E]$ the probability that the event $E$ occurs. If $a$ and $b$ are two strings of bits or two matrices, we denote by $a|b$ their concatenation. The transpose of a matrix $M$ is $M^T$. If $a$ and $b$ are two strings of bits, we denote by $\langle a, b \rangle$ their dot product modulo 2 and by $a \oplus b$ their bitwise XOR. $U_n$ is an oracle that returns a uniformly random element of $\{0, 1\}^n$.
+
+We use the notion of randomized encoding-function for functions $E$ that take an input $m$ and random coins $s$ and
+---PAGE_BREAK---
+
+output a randomized representation $E(m; s)$ from which $m$ can be recovered using a decoding-function $D$. We will use such randomized encoding-functions to make messages entropic or unguessable.
+
+### B. Public-Key Encryption Schemes
+
+A Public-Key Encryption Scheme (PKE) is defined as follows:
+
+*Definition 1:* (Public-Key Encryption). A public-key encryption scheme is a triplet of algorithms (Gen, Enc, Dec) such that:
+
+* Gen is a probabilistic polynomial-time key generation algorithm which takes as input a security parameter $1^n$ and outputs a public-key pk and a secret-key sk. The public-key specifies the message space $\mathcal{M}$ and the ciphertext space $\mathcal{C}$.
+
+* Enc is a (possibly) probabilistic polynomial-time encryption algorithm which receives as input a public-key pk, a message $m \in \mathcal{M}$ and random coins r, and outputs a ciphertext $c \in \mathcal{C}$. We write Enc(pk, m; r) to indicate explicitly that the random coins r are used and Enc(pk, m) if fresh random coins are used.
+
+* Dec is a deterministic polynomial-time decryption algorithm which takes as input a secret-key sk and a ciphertext c, and outputs either a message $m \in \mathcal{M}$ or an error symbol $\perp$.
+
+* (Completeness) For any pair of public and secret-keys generated by Gen and any message $m \in \mathcal{M}$ it holds that $\text{Dec}(sk, \text{Enc}(pk, m; r)) = m$ with overwhelming probability over the randomness used by Gen and the random coins r used by Enc.
+
+A basic security notion for public-key encryption schemes is One-Wayness under chosen-plaintext attacks (OW-CPA). This notion states that every PPT-adversary $\mathcal{A}$, given a public-key pk and a ciphertext c of a uniformly chosen message $m \in \mathcal{M}$, has only negligible probability of recovering the message m (the probability is taken over the random coins used to generate the public and secret-keys, the choice of m and the coins of $\mathcal{A}$).
+
+Below we define the standard security notions for public-key encryption schemes, namely, indistinguishability against chosen-plaintext attacks (IND-CPA) [15] and against adaptive chosen-ciphertext attacks (IND-CCA2) [31]. Our game definition follows the approach of [17].
+
+*Definition 2:* (IND-CPA security). To a two-stage adversary $\mathcal{A} = (\mathcal{A}_1, \mathcal{A}_2)$ against PKE we associate the following experiment.
+
+$$
+\begin{aligned}
+\operatorname{Exp}_{\mathrm{PKE}, \mathcal{A}}^{\mathrm{cpa}}(n): \\
+(\mathrm{pk}, \mathrm{sk}) &\leftarrow \operatorname{Gen}(1^{n}) \\
+(\mathrm{m}^{0}, \mathrm{m}^{1}, \mathrm{state}) &\leftarrow \mathcal{A}_{1}(\mathrm{pk}) \text { s.t. } |\mathrm{m}^{0}|=|\mathrm{m}^{1}| \\
+b &\leftarrow\{0,1\} \\
+c^{*} &\leftarrow \operatorname{Enc}(\mathrm{pk}, \mathrm{m}^{b}) \\
+b' &\leftarrow \mathcal{A}_{2}(c^{*}, \mathrm{state}) \\
+\text { If } b=b' &\text { return 1, else return 0. }
+\end{aligned}
+ $$
+
+We define the advantage of $\mathcal{A}$ in the experiment as
+
+$$ \operatorname{Adv}_{\mathrm{PKE}, \mathcal{A}}^{\mathrm{cpa}}(n)=\left|\operatorname{Pr}\left[\operatorname{Exp}_{\mathrm{PKE}, \mathcal{A}}^{\mathrm{cpa}}(n)=1\right]-\frac{1}{2}\right| $$
+
+We say that PKE is indistinguishable against chosen-plaintext attacks (IND-CPA) if for all probabilistic polynomial-time (PPT) adversaries $\mathcal{A} = (\mathcal{A}_1, \mathcal{A}_2)$ the advantage of $\mathcal{A}$ in the above experiment is a negligible function of $n$.
+
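As a concrete illustration of the game above, the following Python sketch (helper names are ours, not the paper's) runs the IND-CPA experiment against a deliberately broken "encryption" that returns the plaintext unchanged, so the distinguisher wins in every run:

```python
import secrets

# Sketch of the IND-CPA experiment; gen/enc and the adversary are parameters.
def ind_cpa_experiment(gen, enc, adversary):
    pk, sk = gen()                              # (pk, sk) <- Gen(1^n)
    m0, m1, state = adversary.choose(pk)        # adversary picks m^0, m^1
    b = secrets.randbits(1)                     # b <- {0, 1}
    c = enc(pk, m1 if b else m0)                # c* <- Enc(pk, m^b)
    return int(adversary.guess(c, state) == b)  # 1 iff b' = b

# A deliberately insecure "encryption" that leaks the plaintext.
identity_gen = lambda: (None, None)
identity_enc = lambda pk, m: m

class Distinguisher:
    def choose(self, pk):
        return 0, 1, None   # challenge messages m^0 = 0, m^1 = 1
    def guess(self, c, state):
        return c            # here the ciphertext IS the bit b

# The adversary wins every run, i.e. it has the maximal advantage of 1/2.
wins = sum(ind_cpa_experiment(identity_gen, identity_enc, Distinguisher())
           for _ in range(100))
```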
+*Definition 3:* (IND-CCA2 security). To a two-stage adversary $\mathcal{A} = (\mathcal{A}_1, \mathcal{A}_2)$ against PKE we associate the following experiment.
+
+$$
+\begin{array}{l}
+\operatorname{Exp}_{\mathrm{PKE}, \mathcal{A}}^{\mathrm{cca2}}(n): \\
+(\mathrm{pk}, \mathrm{sk}) \leftarrow \operatorname{Gen}(1^n) \\
+(\mathrm{m}^0, \mathrm{m}^1, \mathrm{state}) \leftarrow \mathcal{A}_1^{\mathrm{Dec}(\mathrm{sk}, \cdot)}(\mathrm{pk}) \text { s.t. } |\mathrm{m}^0| = |\mathrm{m}^1| \\
+b \leftarrow \{0, 1\} \\
+c^* \leftarrow \operatorname{Enc}(\mathrm{pk}, \mathrm{m}^b) \\
+b' \leftarrow \mathcal{A}_2^{\mathrm{Dec}(\mathrm{sk}, \cdot)}(c^*, \mathrm{state}) \\
+\text { If } b=b' \text { return 1, else return 0. }
+\end{array}
+ $$
+
+The adversary $\mathcal{A}_2$ is not allowed to query $\text{Dec}(sk, \cdot)$ with $c^*$. We define the advantage of $\mathcal{A}$ in the experiment as
+
+$$ \operatorname{Adv}_{\mathrm{PKE}, \mathcal{A}}^{\mathrm{cca2}}(n)=\left|\operatorname{Pr}\left[\operatorname{Exp}_{\mathrm{PKE}, \mathcal{A}}^{\mathrm{cca2}}(n)=1\right]-\frac{1}{2}\right| $$
+
+We say that PKE is indistinguishable against adaptive chosen-ciphertext attacks (IND-CCA2) if for all probabilistic polynomial-time (PPT) adversaries $\mathcal{A} = (\mathcal{A}_1, \mathcal{A}_2)$ that make a polynomial number of oracle queries the advantage of $\mathcal{A}$ in the experiment is a negligible function of $n$.
+
+### C. McEliece Cryptosystem
+
+In this section we define the basic McEliece cryptosystem [25], following [36] and [28]. Let $\mathcal{F}_{n,t}$ be a family of binary linear error-correcting codes given by two parameters $n$ and $t$. Each code $C \in \mathcal{F}_{n,t}$ has code length $n$ and minimum distance greater than $2t$. We further assume that there exists an efficient probabilistic algorithm Generate$_{n,t}$ that samples a code $C \in \mathcal{F}_{n,t}$, represented by a generator-matrix $G_C$ of dimensions $l \times n$, together with an efficient decoding procedure Decode$_C$ that can correct up to $t$ errors.
+
+The McEliece PKE consists of a triplet of probabilistic algorithms ($\text{Gen}_{\text{McE}}, \text{Enc}_{\text{McE}}, \text{Dec}_{\text{McE}}$) such that:
+
+* The probabilistic polynomial-time key generation algorithm $\text{Gen}_{\text{McE}}$ computes $(G_C, \text{Decode}_C) \leftarrow \text{Generate}_{n,t}()$, sets $\text{pk} = G_C$ and $\text{sk} = \text{Decode}_C$, and outputs $(\text{pk}, \text{sk})$.
+
+* The probabilistic polynomial-time encryption algorithm $\text{Enc}_{\text{McE}}$ takes the public-key $\text{pk} = G_C$ and a plaintext $m \in \mathbb{F}_2^l$ as input and outputs a ciphertext $c = mG_C \oplus e$, where $e \in \{0, 1\}^n$ is a random vector of Hamming-weight $t$.
+
+* The deterministic polynomial-time decryption algorithm $\text{Dec}_{\text{McE}}$ takes the secret-key $\text{sk} = \text{Decode}_C$ and a ciphertext $c \in \mathbb{F}_2^n$, computes $m = \text{Decode}_C(c)$ and outputs $m$.
+
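To make the mechanics concrete, here is a toy Python sketch of McEliece-style encryption, with the [7,4] Hamming code (which corrects $t = 1$ error) standing in for a Goppa code; the parameters are illustrative only and offer no security whatsoever:

```python
import numpy as np

# Systematic generator matrix G of the [7,4] Hamming code (l = 4, n = 7, t = 1).
G = np.array([[1,0,0,0,0,1,1],
              [0,1,0,0,1,0,1],
              [0,0,1,0,1,1,0],
              [0,0,0,1,1,1,1]], dtype=np.uint8)

# Parity-check matrix H; the syndrome of a single-bit error equals a column of H.
H = np.array([[0,1,1,1,1,0,0],
              [1,0,1,1,0,1,0],
              [1,1,0,1,0,0,1]], dtype=np.uint8)

def encrypt(m, error_pos):
    c = m @ G % 2                    # codeword m*G over F_2
    e = np.zeros(7, dtype=np.uint8)
    e[error_pos] = 1                 # weight-t error vector (t = 1 here)
    return c ^ e                     # ciphertext c = m*G XOR e

def decrypt(c):
    s = H @ c % 2                    # syndrome of the noisy word
    if s.any():                      # locate and flip the single erroneous bit
        col = np.where((H.T == s).all(axis=1))[0][0]
        c = c.copy()
        c[col] ^= 1
    return c[:4]                     # G is systematic: first 4 bits are the message

m = np.array([1, 0, 1, 1], dtype=np.uint8)
assert (decrypt(encrypt(m, error_pos=5)) == m).all()
```

In the real scheme the legitimate receiver's advantage is the secret efficient decoder Decode$_C$; here syndrome decoding of the public Hamming code is trivial for everyone, which is exactly why a toy code like this gives no security.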
+This basic variant of the McEliece cryptosystem is OW-CPA secure (for a proof see [36], Proposition 3.1), given that the matrices $G_C$ generated by $\text{Generate}_{n,t}$ are pseudorandom
+---PAGE_BREAK---
+
+(Assumption 4 below) and that decoding random linear codes is hard when the noise vector has Hamming weight $t$.
+
+There exist several optimizations for the basic scheme, mainly improving the size of the public-key. Biswas and Sendrier [5] show that the public generator-matrix $\mathbf{G}$ can be reduced to row echelon form, reducing the size of the public-key from $l \cdot n$ to $l \cdot (n-l)$ bits. However, we cannot adopt this optimization into our scheme of Section IV¹, as it implies a simple attack compromising IND-CPA security² (whereas [5] prove OW-CPA security).
+
+In this work we use a slightly modified version of the basic McEliece PKE scheme. Instead of sampling an error vector $e$ uniformly from the set of vectors with Hamming-weight $t$, we generate $e$ by choosing each of its bits according to the Bernoulli distribution $B_{\theta}$ with parameter $\theta = \frac{t}{n} - \epsilon$ for some $\epsilon > 0$. A simple argument based on the Chernoff bound shows that the resulting error vector stays within the error-correction capability of the code except with probability negligible in $n$. The reason for using this error-distribution is that one of our proofs utilizes the fact that the concatenation $e_1|e_2$ of two Bernoulli-distributed vectors $e_1$ and $e_2$ is again Bernoulli-distributed; in contrast, $e_1|e_2$ is clearly not a uniformly chosen vector of Hamming-weight $2t$ when $e_1$ and $e_2$ are each uniformly chosen with Hamming-weight $t$. Using the Bernoulli error-distribution, we base the security of our scheme on the pseudorandomness of the McEliece matrices $\mathbf{G}$ and the pseudorandomness of the learning parity with noise (LPN) problem (see below).
+
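The claim about the error weight is easy to check empirically. The sketch below uses illustrative parameters of our choosing ($n = 1024$, $t = 50$, $\epsilon = 0.03$, not taken from the paper) to sample Bernoulli error vectors and confirm their weight stays within the decoding radius:

```python
import numpy as np

rng = np.random.default_rng(0)
n, t, eps = 1024, 50, 0.03       # illustrative parameters, not from the paper
theta = t / n - eps              # Bernoulli parameter below the decoding radius t/n

# Sample error vectors bit-by-bit from B_theta and record their Hamming weights.
weights = [int(rng.binomial(1, theta, size=n).sum()) for _ in range(1000)]

# The expected weight n*theta is well below t; by a Chernoff bound, exceeding t
# happens only with probability negligible in n.
max_weight = max(weights)

# Concatenating two B_theta vectors again gives a B_theta vector of length 2n,
# unlike concatenating two uniform weight-t vectors (whose weight is exactly 2t).
e1 = rng.binomial(1, theta, size=n)
e2 = rng.binomial(1, theta, size=n)
e = np.concatenate([e1, e2])     # distributed as B_theta over 2n bits
```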
+### D. McEliece Assumptions and Attacks
+
+In this subsection, we discuss the hardness assumptions for the McEliece cryptosystem. Let $\mathcal{F}_{n,t}$ be a family of codes together with a generation-algorithm Generate$_{n,t}$ as above, and let $\mathbf{G}_C$ be the corresponding generator-matrices. An adversary can attack the McEliece cryptosystem in two ways: either he can try to discover the underlying structure, which would allow him to decode efficiently, or he can try to run a generic decoding algorithm. This high-level intuition that there are two different ways of attacking the cryptosystem can be formalized [36]. Accordingly, the security of the cryptosystem is based on two security assumptions.
+
+The first assumption states that for certain families $\mathcal{F}_{n,t}$, the distribution of generator-matrices $\mathbf{G}_C$ output by Generate$_{n,t}$ is pseudorandom. Let $l$ be the dimension of the codes in $\mathcal{F}_{n,t}$.
+
+**Assumption 4:** Let $\mathbf{G}_C$ be distributed by $(\mathbf{G}_C, \text{Decode}_C) \leftarrow \text{Generate}_{n,t}()$ and let $\mathbf{R}$ be distributed by $\mathbf{R} \leftarrow \mathcal{U}(\mathbb{F}_2^{l \times n})$. For every PPT algorithm $\mathcal{A}$ it holds that
+
+$$|\Pr[\mathcal{A}(\mathbf{G}_C) = 1] - \Pr[\mathcal{A}(\mathbf{R}) = 1]| < \text{negl}(n).$$
+
+In the classical instantiation of the McEliece cryptosystem, $\mathcal{F}_{n,t}$ is chosen to be the family of irreducible binary Goppa-codes of length $n = 2^m$ and dimension $l = n - tm$. For this instantiation, an efficient distinguisher was built for the case of high-rate codes [12], [13] (i.e., codes whose rate is very close to 1). But for codes that do not have a high rate, no generalization of this distinguisher is known, and the best known attacks [8], [24] are based on the support splitting algorithm [35] and have exponential runtime. Therefore, one should be careful when choosing the parameters of the Goppa-codes, but for encryption schemes it is possible to use codes that do not have a high rate.
+
+The second security assumption is the difficulty of the decoding problem (a classical problem in coding theory), or equivalently, the difficulty of the learning parity with noise (LPN) problem (a classical problem in learning theory). The best known algorithms for decoding a random linear code are based on the information set decoding technique [21], [22], [37]. Over the years, there have been improvements in the running time [7], [3], [14], [4], [26], [1], but the best algorithms still run in exponential time.
+
+Below we give the definition of LPN problem following the description of [28].
+
+*Definition 5:* (LPN search problem). Let $s$ be a random binary string of length $l$. We consider the Bernoulli distribution $B_\theta$ with parameter $\theta \in (0, \frac{1}{2})$. Let $Q_{s,\theta}$ be the following distribution:
+
+$$\{(a, \langle s, a \rangle \oplus e)|a \leftarrow \{0, 1\}^l, e \leftarrow B_\theta\}$$
+
+For an adversary $\mathcal{A}$ trying to discover the random string $s$, we define its advantage as:
+
+$$\mathrm{Adv}_{\mathrm{LPN}_\theta,\mathcal{A}}(l) = \Pr[\mathcal{A}^{Q_{s,\theta}} = s \mid s \leftarrow \{0, 1\}^l]$$
+
+The LPN$_\theta$ problem with parameter $\theta$ is hard if the advantage of all PPT adversaries $\mathcal{A}$ that make a polynomial number of oracle queries is negligible.
+
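A sketch of the oracle $Q_{s,\theta}$ at toy sizes (parameters ours, chosen only for illustration): each query returns a random $a$ together with $\langle s, a \rangle$ flipped by Bernoulli noise, so labels agree with the noiseless inner product roughly a $1-\theta$ fraction of the time:

```python
import numpy as np

rng = np.random.default_rng(42)
l, theta = 16, 0.1                 # toy parameters; a real instance uses large l
s = rng.integers(0, 2, size=l, dtype=np.uint8)   # the secret string

def lpn_sample():
    """One draw from Q_{s,theta}: (a, <s,a> XOR e) with e ~ B_theta."""
    a = rng.integers(0, 2, size=l, dtype=np.uint8)
    e = int(rng.random() < theta)  # Bernoulli noise bit
    return a, (int(a @ s) + e) % 2

# Without the noise, Gaussian elimination on l independent samples would
# recover s; the noise is what makes the search problem hard.
samples = [lpn_sample() for _ in range(2000)]
agreement = np.mean([b == int(a @ s) % 2 for a, b in samples])  # close to 1 - theta
```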
+Katz and Shin [18] introduce a distinguishing variant of the LPN problem, which is more useful in the context of encryption schemes.
+
+*Definition 6:* (LPNDP, LPN distinguishing problem). Let $s$ be a random binary string of length $l$, and let $Q_{s,\theta}$ be as in Definition 5. Let $\mathcal{A}$ be a PPT-adversary. The distinguishing-advantage of $\mathcal{A}$ between $Q_{s,\theta}$ and the uniform distribution $U_{l+1}$ is defined as
+
+$$\mathrm{Adv}_{\mathrm{LPNDP}, \mathcal{A}}(l) = |\Pr[\mathcal{A}^{Q_{s,\theta}} = 1 \mid s \leftarrow \{0, 1\}^l] - \Pr[\mathcal{A}^{U_{l+1}} = 1]|$$
+
+The LPNDP$_\theta$ problem with parameter $\theta$ is hard if the advantage of all PPT adversaries $\mathcal{A}$ is negligible.
+
+Further, [18] show that the LPN distinguishing problem is as hard as the LPN search problem with similar parameters.
+
+*Lemma 1:* ([18]) Say there exists an algorithm $\mathcal{A}$ making $q$ oracle queries, running in time $t$, and such that
+
+$$\mathrm{Adv}_{\mathrm{LPNDP},\theta,\mathcal{A}}(l) \geq \delta$$
+
+Then there exists an adversary $\mathcal{A}'$ making $q' = O(q\delta^{-2}\log l)$ oracle queries, running in time $t' = O(tl\delta^{-2}\log l)$, and such that
+
+$$\mathrm{Adv}_{\mathrm{LPN}\theta,\mathcal{A}'}(l) \geq \frac{\delta}{4}$$
+
+The reader should be aware that in the current state of the art, the average-case hardness of these two assumptions, as
+
+¹Neither is it possible for the scheme of [28], on which our k-repetition McEliece scheme is based.
+
+²The scheme of [28] encrypts by computing $c = (m|s) \cdot \mathbf{G} \oplus e$. If $\mathbf{G}$ is in row-echelon form, $m \oplus e'$ is a prefix of $c$, where $e'$ is a prefix of $e$. Thus an IND-CPA adversary can distinguish between the encryptions of two plaintexts $m_0$ and $m_1$ by checking whether the prefix of $c^*$ is closer to $m_0$ or $m_1$.
+---PAGE_BREAK---
+
+| (m,t) | plaintext size | ciphertext size | security (key) |
+|---|---|---|---|
+| (10,50) | 524 | 1024 | 491 |
+| (11,32) | 1696 | 2048 | 344 |
+| (12,40) | 3616 | 4096 | 471 |
+
+Fig. 1. A table of McEliece key parameters and security estimates taken from [36].
+
+well as all other assumptions used in public-key cryptography, cannot be reduced to the worst-case hardness of an NP-hard problem³ (and even if that were the case, we still do not know whether $P \neq NP$). The confidence in the hardness of solving all these problems in the average case (which is what cryptography really needs) comes from the lack of efficient solutions despite the efforts of the scientific community over the years. More studies are, of course, necessary in order to better assess the difficulty of such problems. We should highlight that, compared to cryptosystems based on number-theoretical assumptions such as the hardness of factoring or of computing the discrete log, cryptosystems based on coding and lattice assumptions have the advantage that no efficient quantum algorithm breaking the assumptions is known. One should also be careful when implementing the McEliece cryptosystem so as to avoid side-channel attacks [38].
+
+### E. Signature Schemes
+
+Now we define signature schemes (SS) and the security notion called one-time strong unforgeability.
+
+**Definition 7:** (Signature Scheme). A signature scheme is a triplet of algorithms (Gen, Sign, Ver) such that:
+
+* Gen is a probabilistic polynomial-time key generation algorithm which takes as input a security parameter $1^n$ and outputs a verification key vk and a signing key dsk. The verification key specifies the message space $\mathcal{M}$ and the signature space $\mathcal{S}$.
+
+* Sign is a (possibly) probabilistic polynomial-time signing algorithm which receives as input a signing key dsk and a message $m \in \mathcal{M}$, and outputs a signature $\sigma \in \mathcal{S}$.
+
+* Ver is a deterministic polynomial-time verification algorithm which takes as input a verification key vk, a message $m \in \mathcal{M}$ and a signature $\sigma \in \mathcal{S}$, and outputs a bit indicating whether $\sigma$ is a valid signature for m or not (i.e., the algorithm outputs 1 if it is a valid signature and outputs 0 otherwise).
+
+* (Completeness) For any pair of signing and verification keys generated by Gen and any message $m \in \mathcal{M}$ it holds that $Ver(vk, m, Sign(dsk, m)) = 1$ with overwhelming probability over the randomness used by Gen and Sign.
+
+**Definition 8:** (One-Time Strong Unforgeability). To a two-stage adversary $\mathcal{A} = (\mathcal{A}_1, \mathcal{A}_2)$ against SS we associate the following experiment.
+
+³Quite remarkably, some lattice problems enjoy average-case to worst-case reductions, but these are not for problems known to be NP-hard.
+
+$$
+\begin{array}{l}
+\text{Exp}_{\text{SS},\mathcal{A}}^{\text{otsu}}(n): \\
+(vk, dsk) \leftarrow \text{Gen}(1^n) \\
+(m, \text{state}) \leftarrow \mathcal{A}_1(\text{vk}) \\
+\sigma \leftarrow \text{Sign}(\text{dsk}, m) \\
+(m^*, \sigma^*) \leftarrow \mathcal{A}_2(m, \sigma, \text{state}) \\
+\text{If } Ver(vk, m^*, \sigma^*) = 1 \text{ and } (m^*, \sigma^*) \neq (m, \sigma) \text{ return} \\
+1, \text{else return } 0
+\end{array}
+ $$
+
+We say that a signature scheme SS is one-time strongly unforgeable if for all probabilistic polynomial-time (PPT) adversaries $\mathcal{A} = (\mathcal{A}_1, \mathcal{A}_2)$ the probability that $\text{Exp}_{\text{SS},\mathcal{A}}^{\text{otsu}}(n)$ outputs 1 is a negligible function of $n$. One-way functions are sufficient to construct existentially unforgeable one-time signature schemes [20], [27].
+
+## III. k-REPETITION PKE
+
+### A. Definitions
+
+We now define a *k*-repetition Public-Key Encryption scheme.
+
+**Definition 9:** (*k*-repetition Public-Key Encryption). For a PKE (Gen, Enc, Dec) and a randomized encoding-function E with a decoding-function D, we define the *k*-repetition public-key encryption scheme ($\text{PKE}_k$) as the triplet of algorithms ($\text{Gen}_k$, $\text{Enc}_k$, $\text{Dec}_k$) such that:
+
+* $\text{Gen}_k$ is a probabilistic polynomial-time key generation algorithm which takes as input a security parameter $1^n$ and calls PKE's key generation algorithm $k$ times obtaining the public-keys $(pk_1, ..., pk_k)$ and the secret-keys $(sk_1, ..., sk_k)$. $\text{Gen}_k$ sets the public-key as $pk = (pk_1, ..., pk_k)$ and the secret-key as $sk = (sk_1, ..., sk_k)$.
+
+* $\text{Enc}_k$ is a probabilistic polynomial-time encryption algorithm which receives as input a public-key $pk = (pk_1, ..., pk_k)$, a message $m \in \mathcal{M}$ and coins $s$ and $r_1, ..., r_k$, and outputs a ciphertext $c = (c_1, ..., c_k) = (\text{Enc}(pk_1, E(m; s); r_1), ..., \text{Enc}(pk_k, E(m; s); r_k))$.
+
+* $\text{Dec}_k$ is a deterministic polynomial-time decryption algorithm which takes as input a secret-key $sk = (sk_1, ..., sk_k)$ and a ciphertext $c = (c_1, ..., c_k)$. It outputs a message $m$ if $D(\text{Dec}(sk_1, c_1)), ..., D(\text{Dec}(sk_k, c_k))$ are all equal to some $m \in \mathcal{M}$. Otherwise, it outputs an error symbol $\perp$.
+
+* (Completeness) For any $k$ pairs of public and secret-keys generated by $\text{Gen}_k$ and any message $m \in \mathcal{M}$ it holds that $\text{Dec}_k(\text{sk}, \text{Enc}_k(pk, m)) = m$ with overwhelming probability over the random coins used by $\text{Gen}_k$ and $\text{Enc}_k$.
+
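The following toy Python sketch shows the shape of $\text{Gen}_k$/$\text{Enc}_k$/$\text{Dec}_k$. The single-key "PKE" is a placeholder XOR cipher of our invention (of course not secure, and not the paper's scheme), and the randomized encoding is $E(m; s) = (s, m \oplus s)$ with decoder $D(s, t) = s \oplus t$:

```python
import secrets

NBITS, K = 16, 3

# Placeholder single-key "PKE": XOR with the key. Insecure; it stands in for
# any OW-CPA PKE purely to show the shape of the k-repetition construction.
def gen():
    key = secrets.randbits(NBITS)
    return key, key                    # (pk, sk); identical in this toy stand-in

def enc(pk, x):
    return x ^ pk

def dec(sk, c):
    return c ^ sk

# Randomized encoding E(m; s) = (s, m XOR s) and its decoder D.
def E(m, s):
    return (s, m ^ s)

def D(pair):
    s, t = pair
    return s ^ t

def gen_k():
    return [gen() for _ in range(K)]   # k independent key pairs

def enc_k(pks, m):
    s = secrets.randbits(NBITS)        # fresh coins s for the encoding
    x = E(m, s)                        # the SAME encoding goes under every key
    return [(enc(pk, x[0]), enc(pk, x[1])) for pk in pks]

def dec_k(sks, c):
    ms = [D((dec(sk, c0), dec(sk, c1))) for sk, (c0, c1) in zip(sks, c)]
    return ms[0] if all(m == ms[0] for m in ms) else None   # None plays the role of ⊥

keypairs = gen_k()
pks, sks = [p for p, _ in keypairs], [s for _, s in keypairs]
assert dec_k(sks, enc_k(pks, 0b1011001110001101)) == 0b1011001110001101
```

Note how $\text{Dec}_k$ outputs $\perp$ (here `None`) unless all $k$ components decode to the same plaintext; this consistency check is what the CCA2 construction in the next subsection relies on.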
+We also define security properties that the *k*-repetition Public-Key Encryption scheme used in the next sections should meet.
+
+**Definition 10:** (Security under uniform *k*-repetition of encryption schemes). We say that $\text{PKE}_k$ (built from an encryption scheme PKE) is secure under uniform *k*-repetition if $\text{PKE}_k$ is IND-CPA secure.
+
+**Definition 11:** (Verification under uniform *k*-repetition of encryption schemes). We say that $\text{PKE}_k$ is verifiable under uniform *k*-repetition if there exists an efficient deterministic algorithm Verify such that given a ciphertext $c \in C$, the public-key $pk = (pk_1, ..., pk_k)$ and any $sk_i$ for $i \in \{1, ..., k\}$, it
+---PAGE_BREAK---
+
+holds that if $\text{Verify}(c, pk, sk_i) = 1$ then $\text{Dec}_k(sk, c) = m$ for some $m \neq \perp$ (i.e., $c$ decrypts to a valid plaintext).
+
+Notice that for the scheme $\text{PKE}_k$ to be verifiable, the underlying scheme PKE cannot be IND-CPA secure, as the verification algorithm of $\text{PKE}_k$ implies an efficient IND-CPA adversary against PKE. Thus, we may only require that PKE is OW-CPA secure.
+
+### B. IND-CCA2 Security from Verifiable IND-CPA Secure k-Repetition PKE
+
+In this subsection we construct the IND-CCA2 secure public-key encryption scheme ($\text{PKE}_{\text{cca2}}$) and prove its security. We assume the existence of a one-time strongly unforgeable signature scheme SS = (Gen, Sign, Ver) and of a $\text{PKE}_k$ that is secure and verifiable under uniform *k*-repetition.
+
+We use the following notation for derived keys: For a public-key $pk = (pk_1^0, pk_1^1, \dots, pk_k^0, pk_k^1)$ and a $k$-bit string vk we write $pk^{\text{vk}} = (pk_1^{\text{vk}_1}, \dots, pk_k^{\text{vk}_k})$. We will use the same notation for secret-keys sk.
+
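The derived-key notation is just a selection rule: the bits of vk pick one key out of each pair. The hypothetical helper below (names ours, not the paper's) makes this explicit:

```python
# pk_pairs: list of k pairs (pk_i^0, pk_i^1); vk_bits: the k bits of vk.
def select_keys(pk_pairs, vk_bits):
    # Bit vk_i of the verification key selects pk_i^{vk_i} from the i-th pair.
    return [pair[bit] for pair, bit in zip(pk_pairs, vk_bits)]

pk = [("pk1_0", "pk1_1"), ("pk2_0", "pk2_1"), ("pk3_0", "pk3_1")]
pk_vk = select_keys(pk, [1, 0, 1])   # vk = 101 selects pk1^1, pk2^0, pk3^1
```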
+* **Key Generation:** $\text{Gen}_{\text{cca2}}$ is a probabilistic polynomial-time key generation algorithm which takes as input a security parameter $1^n$. $\text{Gen}_{\text{cca2}}$ calls PKE's key generation algorithm $2k$ times to obtain public-keys $pk_1^0, pk_1^1, \dots, pk_k^0, pk_k^1$ and secret-keys $sk_1^0, sk_1^1, \dots, sk_k^0, sk_k^1$. It sets $pk = (pk_1^0, pk_1^1, \dots, pk_k^0, pk_k^1)$, $sk = (sk_1^0, sk_1^1, \dots, sk_k^0, sk_k^1)$ and outputs $(pk, sk)$.
+
+* **Encryption:** $\text{Enc}_{\text{cca2}}$ is a probabilistic polynomial-time encryption algorithm which receives as input the public-key $pk = (pk_1^0, pk_1^1, \dots, pk_k^0, pk_k^1)$ and a message $m \in \mathcal{M}$ and proceeds as follows:
+
+ 1) Executes the key generation algorithm of the signature scheme obtaining a signing key dsk and a verification key vk.
+
+ 2) Computes $c' = \text{Enc}_k(\text{pk}^{\text{vk}}, m; r)$ where $r$ are random coins.
+
+ 3) Computes the signature $\sigma = \text{Sign}(dsk, c')$.
+
+ 4) Outputs the ciphertext $c = (c', vk, \sigma)$.
+
+* **Decryption:** $\text{Dec}_{\text{cca2}}$ is a deterministic polynomial-time decryption algorithm which takes as input a secret-key $sk = (sk_1^0, sk_1^1, \dots, sk_k^0, sk_k^1)$ and a ciphertext $c = (c', vk, \sigma)$ and proceeds as follows:
+
+ 1) If $\text{Ver}(vk, c', \sigma) = 0$, it outputs $\perp$ and halts.
+
+ 2) It computes and outputs $m = \text{Dec}_k(\text{sk}^{\text{vk}}, c')$.
+
+Note that if $c'$ is an invalid ciphertext (i.e., not all $c'_i$ decrypt to the same plaintext), then $\text{Dec}_{\text{cca2}}$ outputs $\perp$ because $\text{Dec}_k$ outputs $\perp$.
+
+As in [33], we can apply a universal one-way hash function to the verification keys (as in [10]) and use $k = n^\epsilon$ for a constant $0 < \epsilon < 1$. Note that the hash function in question need not be modeled as a random oracle. For ease of presentation, we do not apply this method in our scheme description.
+
+**Theorem 1:** Given that SS is a one-time strongly unforgeable signature scheme and that $\text{PKE}_k$ is IND-CPA secure and verifiable under uniform *k*-repetition, the public-key encryption scheme $\text{PKE}_{\text{cca2}}$ is IND-CCA2 secure.
+
+*Proof:* In this proof, we closely follow [33]. Denote by $\mathcal{A}$ the IND-CCA2 adversary. Consider the following sequence of games.
+
+* **Game 1** This is the IND-CCA2 game.
+
+* **Game 2** Same as game 1, except that the signature-keys $(vk^*, dsk^*)$ that are used for the challenge-ciphertext $c^*$ are generated before the interaction with $\mathcal{A}$ starts. Further, game 2 always outputs $\perp$ if $\mathcal{A}$ sends a decryption query $c = (c', vk, \sigma)$ with $vk = vk^*$.
+
+We will now establish the remaining steps in two lemmata.
+
+**Lemma 2:** It holds that $\text{view}_{\text{Game1}}(\mathcal{A}) \approx_c \text{view}_{\text{Game2}}(\mathcal{A})$, given that $(\text{Gen}, \text{Sign}, \text{Ver})$ is a one-time strongly unforgeable signature scheme.
+
+*Proof:* Given that $\mathcal{A}$ does not send a valid decryption query $c = (c', vk, \sigma)$ with $vk = vk^*$ and $c \neq c^*$, $\mathcal{A}$'s views in game 1 and game 2 are identical. Thus, in order to distinguish game 1 and game 2 $\mathcal{A}$ must send a valid decryption query $c = (c', vk, \sigma)$ with $vk = vk^*$ and $c \neq c^*$. We will use $\mathcal{A}$ to construct an adversary $\mathcal{B}$ against the one-time strong unforgeability of the signature scheme $(\text{Gen}, \text{Sign}, \text{Ver})$. $\mathcal{B}$ basically simulates the interaction of game 2 with $\mathcal{A}$, however, instead of generating $vk^*$ itself, it uses the $vk^*$ obtained from the one-time strong unforgeability experiment. Furthermore, $\mathcal{B}$ generates the signature $\sigma$ for the challenge-ciphertext $c^*$ by using its signing oracle provided by the one-time strong unforgeability game. Whenever $\mathcal{A}$ sends a valid decryption query $c = (c', vk, \sigma)$ with $vk = vk^*$ and $c \neq c^*$, $\mathcal{B}$ terminates and outputs $(c', \sigma)$. Obviously, $\mathcal{A}$'s output is identically distributed in Game 2 and $\mathcal{B}$'s simulation. Therefore, if $\mathcal{A}$ distinguishes between game 1 and game 2 with non-negligible advantage $\epsilon$, then $\mathcal{B}$'s probability of forging a signature is also $\epsilon$, thus breaking the one-time strong unforgeability of $(\text{Gen}, \text{Sign}, \text{Ver})$. $\blacksquare$
+
+**Lemma 3:** It holds that $\text{Adv}_{\text{Game2}}(\mathcal{A})$ is negligible in the security parameter, given that PKE$_k$ is verifiable and IND-CPA secure under uniform k-repetition.
+
+*Proof:* Assume that $\text{Adv}_{\text{Game2}}(\mathcal{A}) \ge \epsilon$ for some non-negligible $\epsilon$. We will now construct an IND-CPA adversary $\mathcal{B}$ that breaks the IND-CPA security of PKE$_k$ with advantage $\epsilon$. Instead of generating $pk$ like game 2, $\mathcal{B}$ proceeds as follows. Let $pk^* = (pk_1^*, \dots, pk_k^*)$ be the public-key provided to $\mathcal{B}$ by the IND-CPA experiment. $\mathcal{B}$ first generates a pair of keys for the signature scheme $(vk^*, dsk^*) \leftarrow \text{Gen}(1^n)$. Then the public-key $pk$ is formed by setting $pk^{vk^*} = pk^*$. All remaining components $pk_i^j$ of $pk$ are generated by $(pk_i^j, sk_i^j) \leftarrow \text{Gen}(1^n)$, for which $\mathcal{B}$ stores the corresponding $sk_i^j$. Clearly, the $pk$ generated by $\mathcal{B}$ is identically distributed to the $pk$ generated by game 2, as the Gen-algorithm of PKE$_k$ generates the components of $pk$ independently. Now, whenever $\mathcal{A}$ sends a decryption query $c = (c', vk, \sigma)$ with $vk \neq vk^*$ (decryption queries with $vk = vk^*$ are not answered by game 2), $\mathcal{B}$ picks an index $i$ with $vk_i \neq vk_i^*$ and checks whether $\text{Verify}(c', pk, sk_i^{vk_i}) = 1$; if not, it outputs $\perp$. Otherwise it computes $m = \text{D}(\text{Dec}(sk_i^{vk_i}, c'_i))$. Verifiability guarantees that $\text{Dec}_k(sk^{vk}, c') = m$, i.e. the output $m$ is identically distributed as in game 2. When $\mathcal{A}$ sends the challenge-messages $m_0, m_1$, $\mathcal{B}$ forwards
+---PAGE_BREAK---
+
+$m_0, m_1$ to the IND-CPA experiment and receives a challenge-ciphertext $c^{*\prime}$. $\mathcal{B}$ then computes $\sigma = \text{Sign}(dsk^*, c^{*\prime})$ and sends $c^* = (c^{*\prime}, vk^*, \sigma)$ to $\mathcal{A}$. This $c^*$ is identically distributed as in game 2. Once $\mathcal{A}$ produces output, $\mathcal{B}$ outputs whatever $\mathcal{A}$ outputs. Putting it all together, $\mathcal{A}$'s views are identically distributed in game 2 and in the simulation of $\mathcal{B}$. Therefore it holds that $\text{Adv}_{\text{IND-CPA}}(\mathcal{B}) = \text{Adv}_{\text{Game2}}(\mathcal{A}) \ge \epsilon$. Thus $\mathcal{B}$ breaks the IND-CPA security of PKE$_k$ with non-negligible advantage $\epsilon$, contradicting the assumption. $\blacksquare$
+
+Plugging Lemma 2 and Lemma 3 together immediately establishes that any PPT IND-CCA2 adversary $\mathcal{A}$ has at most negligible advantage in winning the IND-CCA2 experiment for the scheme PKE$_{\text{cca2}}$.
+
+#### IV. A VERIFIABLE k-REPETITION MCELIECE SCHEME
+
+In this section, we will instantiate a verifiable $k$-repetition encryption scheme $\text{PKE}_{\text{McE},k} = (\text{Gen}_{\text{McE},k}, \text{Enc}_{\text{McE},k}, \text{Dec}_{\text{McE},k})$ based on the McEliece cryptosystem.
+
+In [28] it was proved that the cryptosystem obtained by changing the encryption algorithm of the McEliece cryptosystem to encrypt $s|m$ (where $s$ is random padding) instead of just the message $m$ (the so-called Randomized McEliece cryptosystem) is IND-CPA secure if $|s|$ is chosen sufficiently large for the LPNDP to be hard (e.g. linear in the security parameter $n$). We will therefore use the randomized encoding-function $E(m;s) = s|m$ (with $|s| \in \Omega(n)$) in our verifiable $k$-repetition McEliece scheme. As the basis scheme PKE for our verifiable $k$-repetition McEliece scheme we use the OW-CPA secure textbook McEliece with a Bernoulli error-distribution.
+
+The verification algorithm $\text{Verify}_{\text{McE}}(c, pk, sk_i)$ works as follows. Given a secret-key $sk_i$ from the secret-key vector sk, it first decrypts the $i$-th component of $c$ by $x = \text{Dec}_{\text{McE}}(sk_i, c_i)$. Then, for all $j = 1, \dots, k$, it checks whether the vectors $c_j \oplus xG_j$ have a Hamming-weight smaller than $t$, where $G_j$ is the generator-matrix given in $pk_j$. If so, $\text{Verify}_{\text{McE}}$ outputs 1, otherwise 0. Clearly, if $\text{Verify}_{\text{McE}}$ accepts, then all ciphertexts $c_j$ are close enough to the respective codewords $xG_j$, i.e. invoking $\text{Dec}_{\text{McE}}(sk_j, c_j)$ would also output $x$. Therefore, we have that $\text{Verify}_{\text{McE}}(c, pk, sk_i) = 1$, if and only if $\text{Dec}_{\text{McE},k}(sk, c) = m$ for some $m \in \mathcal{M}$.
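As a concrete illustration, the core check of $\text{Verify}_{\text{McE}}$ can be sketched in a few lines. This is a toy sketch over GF(2) with illustrative names and list-based 0/1 vectors, not an implementation from the paper; in particular, the real scheme operates on Goppa-code generator matrices of cryptographic size.

```python
def verify_mce(c, G_list, x, t):
    """Toy sketch of Verify_McE: accept (return 1) iff every ciphertext
    component c[j] lies within Hamming distance t of the codeword x*G_j
    over GF(2). Here x plays the role of Dec_McE(sk_i, c_i)."""
    for c_j, G_j in zip(c, G_list):
        # codeword = x * G_j over GF(2): column-wise inner products mod 2
        codeword = [sum(x_r * row[col] for x_r, row in zip(x, G_j)) % 2
                    for col in range(len(c_j))]
        weight = sum(a ^ b for a, b in zip(c_j, codeword))  # wt(c_j XOR x*G_j)
        if weight >= t:
            return 0  # this component is too far from the codeword x*G_j
    return 1
```

The threshold comparison mirrors the paper's "Hamming-weight smaller than $t$" condition; a real implementation would obtain $x$ by decrypting one component with a known secret key.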
+
+#### A. Security of the k-repetition Randomized McEliece
+
+We now prove that the modified Randomized McEliece is IND-CPA secure under $k$-repetition.
+
+By the completeness of each instance, the probability that in one instance $i \in \{1, \dots, k\}$ a correctly generated ciphertext is incorrectly decoded is negligible. Since $k$ is polynomial, it follows by the union bound that the probability that a correctly generated ciphertext of $\text{PKE}_{\text{McE},k}$ is incorrectly decoded is also negligible. So $\text{PKE}_{\text{McE},k}$ meets the completeness requirement.
+
+Denote by $\mathbf{R}_1, \dots, \mathbf{R}_k$ random matrices of size $l \times n$, by $\mathbf{G}_1, \dots, \mathbf{G}_k$ the public-key matrices of the McEliece cryptosystem and by $\mathbf{e}_1, \dots, \mathbf{e}_k$ the error vectors. Define $l_1 = |s|$ and $l_2 = |m|$. Let $\mathbf{R}_{i,1}$ and $\mathbf{R}_{i,2}$ be the $l_1 \times n$ and $l_2 \times n$
+
+sub-matrices of $\mathbf{R}_i$ such that $\mathbf{R}_i^T = \mathbf{R}_{i,1}^T |\mathbf{R}_{i,2}^T$. Define $\mathbf{G}_{i,1}$ and $\mathbf{G}_{i,2}$ similarly.
+
+**Lemma 4:** The scheme $\text{PKE}_{\text{McE},k}$ is IND-CPA secure, given that both the McEliece assumption and the LPNDP assumption hold.
+
+*Proof:* Let $\mathcal{A}$ be an IND-CPA adversary against $\text{PKE}_{\text{McE},k}$. Consider the following three games.
+
+* **Game 1** This is the IND-CPA game.
+
+* **Game 2** Same as game 1, except that the components $pk_i$ of the public-key pk are computed by $pk_i = (\mathbf{R}_i, t, \mathcal{M}, \mathcal{C})$ instead of $pk_i = (\mathbf{G}_i, t, \mathcal{M}, \mathcal{C})$, where $\mathbf{R}_i$ is a randomly chosen matrix of the same size as $\mathbf{G}_i$.
+
+* **Game 3** Same as game 2, except that the components $c_i$ of the challenge-ciphertext $c^*$ are not computed by $c_i = (s|m)\mathbf{R}_i \oplus e_i$ but rather chosen uniformly at random.
+
+Indistinguishability of game 1 and game 2 follows by a simple hybrid-argument using the McEliece assumption; we omit it for the sake of brevity. The indistinguishability of game 2 and game 3 can be established as follows. First observe that $c_i = (s|m)\mathbf{R}_i \oplus e_i = (s\mathbf{R}_{i,1} \oplus e_i) \oplus m\mathbf{R}_{i,2}$ for $i=1, \dots, k$. Setting $\mathbf{R}_1 = \mathbf{R}_{1,1}|\dots|\mathbf{R}_{k,1}$, $\mathbf{R}_2 = \mathbf{R}_{1,2}|\dots|\mathbf{R}_{k,2}$ and $e = e_1|\dots|e_k$, we can write $c^* = (s\mathbf{R}_1 \oplus e) \oplus m\mathbf{R}_2$. Now, the LPNDP assumption allows us to substitute $s\mathbf{R}_1 \oplus e$ with a uniformly distributed vector $u$, as $s$ and $\mathbf{R}_1$ are uniformly distributed and $e$ is Bernoulli distributed. Therefore $c^* = u \oplus m\mathbf{R}_2$ is also uniformly distributed. Thus we have reached game 3. $\mathcal{A}$'s advantage in game 3 is obviously 0, as the challenge-ciphertext $c^*$ is statistically independent of the challenge bit $b$. This concludes the proof. $\blacksquare$
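The block-matrix identity used in this step, $(s|m)\mathbf{R}_i \oplus e_i = (s\mathbf{R}_{i,1} \oplus e_i) \oplus m\mathbf{R}_{i,2}$, can be sanity-checked numerically. The following pure-Python snippet (toy sizes and names of our choosing) verifies it over GF(2), splitting $\mathbf{R}$ into the row blocks acting on $s$ and on $m$:

```python
import random
random.seed(0)

def mat_vec(v, M):
    """v * M over GF(2); M is a list of rows, v a list of bits."""
    cols = len(M[0])
    return [sum(vr * row[c] for vr, row in zip(v, M)) % 2 for c in range(cols)]

l1, l2, n = 3, 2, 8          # toy sizes for |s|, |m| and the code length
R = [[random.randrange(2) for _ in range(n)] for _ in range(l1 + l2)]
R1, R2 = R[:l1], R[l1:]      # row blocks multiplied by s and by m
s = [random.randrange(2) for _ in range(l1)]
m = [random.randrange(2) for _ in range(l2)]
e = [random.randrange(2) for _ in range(n)]

lhs = [a ^ b for a, b in zip(mat_vec(s + m, R), e)]      # (s|m)R XOR e
sR1_e = [a ^ b for a, b in zip(mat_vec(s, R1), e)]       # sR1 XOR e
rhs = [a ^ b for a, b in zip(sR1_e, mat_vec(m, R2))]     # ... XOR mR2
assert lhs == rhs
```

Since XOR is addition mod 2, the identity is purely algebraic and holds for any choice of $s$, $m$, $e$ and $\mathbf{R}$.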
+
+#### V. GENERALIZED SCHEME
+
+As in [33], it is possible to generalize the scheme to encrypt correlated messages instead of encrypting the same message $m$ $k$ times. In this section, we show that a similar approach is possible for our scheme, yielding an IND-CCA2 secure McEliece variant that has asymptotically the same ciphertext expansion as the efficient IND-CPA scheme of [19]. We now present a generalized version of our encryption scheme using a correlated plaintext space.
+
+##### A. Definitions
+
+*Definition 12:* ($\tau$-Correlated Messages) We call a tuple of messages $(m_1, \dots, m_k)$ $\tau$-correlated for some constant $0 < \gamma < 1$ and $\tau = (1-\gamma)k$, if given any $\tau$ messages of the tuple it is possible to efficiently recover all the messages. We denote the space of such message tuples by $\mathcal{M}_{\text{Cor}}$.
+
+Basically, $\tau$-correlated messages can be erasure-corrected. Now we define a correlated public-key encryption scheme.
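As a toy illustration of this erasure-correction view (our own example, not the paper's): with the repetition "code" in which all $k$ slots carry the same message, any single known slot recovers the whole tuple, so such tuples are $1$-correlated; in general any code correcting $k-\tau$ erasures yields a $\tau$-correlated space.

```python
def recover_repetition(tuple_with_erasures):
    """Erasure-correct a k-tuple from the repetition code: every
    non-erased slot (None marks an erasure) carries the same message,
    so one known slot determines the whole tuple."""
    known = [x for x in tuple_with_erasures if x is not None]
    if not known or any(x != known[0] for x in known):
        return None  # nothing known, or inconsistent slots: unrecoverable
    return [known[0]] * len(tuple_with_erasures)

# tau = 1 here: a single known slot suffices
assert recover_repetition([None, "m", None]) == ["m", "m", "m"]
```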
+
+*Definition 13:* (Correlated Public-Key Encryption). For a PKE ($\text{Gen}, \text{Enc}, \text{Dec}$) and a randomized encoding-function E that maps from the plaintext-space $\mathcal{M}$ to the correlated plaintext-space $\mathcal{M}_{\text{Cor}}$ (with corresponding decoding-function D), we define the correlated public-key encryption scheme $(\text{PKE}_{\text{Cor}})$ as the triplet of algorithms $(\text{Gen}_{\text{Cor}}, \text{Enc}_{\text{Cor}}, \text{Dec}_{\text{Cor}})$ such that:
+---PAGE_BREAK---
+
+* GenCor is a probabilistic polynomial-time key generation algorithm which takes as input a security parameter $1^n$ and calls PKE's key generation algorithm $k$ times obtaining the public-keys ($pk_1, \dots, pk_k$) and the secret-keys ($sk_1, \dots, sk_k$). GenCor sets the public-key as $pk = (pk_1, \dots, pk_k)$ and the secret-key as $sk = (sk_1, \dots, sk_k)$.
+
+* EncCor is a probabilistic polynomial-time encryption algorithm which receives as input a public-key $pk = (pk_1, \dots, pk_k)$ and a message $m \in M$. The algorithm computes $\tilde{m} = (\tilde{m}_1, \dots, \tilde{m}_k) = E(m; s)$ (with fresh random coins $s$) and outputs the ciphertext $c = (c_1, \dots, c_k) = (\text{Enc}(pk_1, \tilde{m}_1), \dots, \text{Enc}(pk_k, \tilde{m}_k))$.
+
+* DecCor is a deterministic polynomial-time decryption algorithm which takes as input a secret-key $sk = (sk_1, \dots, sk_k)$ and a ciphertext $c = (c_1, \dots, c_k)$. It first computes the tuple $\tilde{m} = (\tilde{m}_1, \dots, \tilde{m}_k) = (\text{Dec}(sk_1, c_1), \dots, \text{Dec}(sk_k, c_k))$. If $\tilde{m} \in M_{\text{Cor}}$, it outputs $m = D(\tilde{m})$; otherwise it outputs an error symbol $\perp$.
+
+* (Completeness) For any $k$ pairs of public and secret-keys generated by GenCor and any message $m = (m_1, \dots, m_k) \in M_{\text{Cor}}$ it holds that DecCor(sk, EncCor(pk, m)) = m with overwhelming probability over the randomness used by GenCor and EncCor.
+
+We also define security properties that the Correlated Public-Key Encryption scheme used in the next sections should meet.
+
+**Definition 14:** (Security of Correlated Public-Key Encryption). We say that $PKE_{\text{Cor}}$ (built from an encryption scheme $PKE$) is secure if $PKE_{\text{Cor}}$ is IND-CPA secure.
+
+*Definition 15:* ($\tau$-Verification). We say that $PKE_{\text{Cor}}$ is $\tau$-verifiable if there exists an efficient deterministic algorithm Verify such that, given a ciphertext $c \in C$, the public-key $pk = (pk_1, \dots, pk_k)$ and any $\tau$ distinct secret-keys $sk_T = (sk_{t_1}, \dots, sk_{t_\tau})$ (with $T = \{t_1, \dots, t_\tau\}$), it holds that if Verify(c, pk, T, sk$_T$) = 1 then DecCor(sk, c) = m for some $m \neq \perp$ (i.e. $c$ decrypts to a valid plaintext).
+
+## B. IND-CCA2 Security from IND-CPA Secure Correlated PKE
+
+We now describe the IND-CCA2 secure public-key encryption scheme ($PKE'_{cca2}$) built using the correlated PKE and prove its security. We assume the existence of a correlated PKE, $PKE_{\text{Cor}}$, that is secure and $\tau$-verifiable. We also use an error correcting code ECC: $\Sigma^l \to \Sigma^k$ with minimum distance $\tau$ and polynomial-time encoding. Finally, we assume the existence of a one-time strongly unforgeable signature scheme SS = (Gen, Sign, Ver) in which the verification keys are elements of $\Sigma^l$ (this assumption is made only for simplicity; any signature scheme can be used if there is an injective mapping from the set of verification keys to $\Sigma^l$).
+
+We will use the following notation: For a codeword $d = (d_1, \dots, d_k) \in \text{ECC}$, set $pk^d = (pk_1^{d_1}, \dots, pk_k^{d_k})$. Analogously for sk.
+
+* Key Generation: $Gen'_{cca2}$ is a probabilistic polynomial-time key generation algorithm which takes as input a security parameter $1^n$. $Gen'_{cca2}$ proceeds as follows. It calls PKE's key generation algorithm $k|\Sigma|$ times, obtaining the public-keys $(pk_1^1, \dots, pk_1^{|Σ|}, \dots, pk_k^1, \dots, pk_k^{|Σ|})$
+
+and the secret-keys $(sk_1^1, \dots, sk_1^{|Σ|}, \dots, sk_k^1, \dots, sk_k^{|Σ|})$. Outputs $pk = (pk_1^1, \dots, pk_1^{|Σ|}, \dots, pk_k^1, \dots, pk_k^{|Σ|})$ and $sk = (sk_1^1, \dots, sk_1^{|Σ|}, \dots, sk_k^1, \dots, sk_k^{|Σ|})$.
+
+* Encryption: $Enc'_{cca2}$ is a probabilistic polynomial-time encryption algorithm which receives as input the public-key $pk = (pk_1^1, \dots, pk_1^{|Σ|}, \dots, pk_k^1, \dots, pk_k^{|Σ|})$ and a message $m = (m_1, \dots, m_k) \in M$ and proceeds as follows:
+
+ 1) Executes the key generation algorithm of the signature scheme SS, obtaining a signing key dsk and a verification key vk. Computes $d = \text{ECC}(vk)$. Let $d_i$ denote the $i$-th element of $d$.
+
+ 2) Computes $c' = \text{Enc}_{\text{Cor}}(pk^d, m)$.
+
+ 3) Computes the signature $\sigma = \text{Sign}(dsk, c')$.
+
+ 4) Outputs the ciphertext $c = (c', vk, \sigma)$.
+
+* Decryption: $Dec'_{cca2}$ is a deterministic polynomial-time decryption algorithm which takes as input a secret-key $sk = (sk_1^1, \dots, sk_1^{|Σ|}, \dots, sk_k^1, \dots, sk_k^{|Σ|})$ and a ciphertext $c = (c', vk, \sigma)$ and proceeds as follows:
+
+ 1) If $\text{Ver}(vk, c', \sigma) = 0$, it outputs $\perp$ and halts. Otherwise, it performs the following steps.
+
+ 2) Compute $d = \text{ECC}(vk)$.
+
+ 3) Compute $m = \text{Dec}_{\text{Cor}}(sk^d, c')$ and output $m$.
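The way the codeword $d = \text{ECC}(vk)$ selects the sub-vector $pk^d = (pk_1^{d_1}, \dots, pk_k^{d_k})$ (and likewise $sk^d$) can be sketched as follows; the key table and all names here are hypothetical placeholders, meant only to make the indexing concrete.

```python
def select_keys(keys, d):
    """Pick the sub-vector pk^d from the full key table:
    keys[i][j] is the key for position i and alphabet symbol j,
    d is the codeword ECC(vk) with one symbol per position."""
    return [keys[i][d_i] for i, d_i in enumerate(d)]

# toy table: 3 positions, alphabet {0, 1}; d = ECC(vk) assumed given
keys = [["pk1_0", "pk1_1"], ["pk2_0", "pk2_1"], ["pk3_0", "pk3_1"]]
assert select_keys(keys, [1, 0, 1]) == ["pk1_1", "pk2_0", "pk3_1"]
```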
+
+**Theorem 2:** Given that SS is a one-time strongly unforgeable signature scheme and that $PKE_{\text{Cor}}$ is secure and $\tau$-verifiable, the public-key encryption scheme $PKE'_{cca2}$ is IND-CCA2 secure.
+
+**Proof:** The proof is almost identical to the proof of Theorem 1. Denote by $\mathcal{A}$ the IND-CCA2 adversary. Consider the following two games.
+
+* **Game 1** This is the IND-CCA2 game.
+
+* **Game 2** Same as game 1, except that the signature-keys $(vk^*, dsk^*)$ that are used for the challenge-ciphertext $c^*$ are generated before the interaction with $\mathcal{A}$ starts. Further, game 2 terminates and outputs $\perp$ if $\mathcal{A}$ sends a decryption query $c = (c', vk, \sigma)$ with $vk = vk^*$.
+
+Again, we will split the proof of Theorem 2 in two lemmata.
+
+**Lemma 5:** From $\mathcal{A}$'s view, game 1 and game 2 are computationally indistinguishable, given that SS is a one-time strongly unforgeable signature scheme.
+
+We omit the proof, since it is identical to the proof of Lemma 2.
+
+**Lemma 6:** It holds that $\text{Adv}_{\text{Game2}}(\mathcal{A})$ is negligible in the security parameter, given that $PKE_{\text{Cor}}$ is a $\tau$-verifiable, IND-CPA secure correlated public-key encryption scheme.
+
+**Proof:** We proceed as in the proof of Lemma 3. Assume that $\text{Adv}_{\text{Game2}}(\mathcal{A}) \ge \epsilon$ for some non-negligible $\epsilon$. We will now construct an IND-CPA adversary $\mathcal{B}$ against $PKE_{\text{Cor}}$ that breaks the IND-CPA security of $PKE_{\text{Cor}}$ with advantage $\epsilon$. Again, instead of generating $pk$ like game 2, $\mathcal{B}$ will construct $pk$ using the public-key $pk^*$ provided by the IND-CPA experiment. Let $d^* = \text{ECC}(vk^*)$. $\mathcal{B}$ sets $pk^{d^*} = pk^*$. All remaining components $pk_i^j$ of $pk$ are generated by $(pk_i^j, sk_i^j) \leftarrow \text{Gen}(1^n)$, for which $\mathcal{B}$ stores the corresponding $sk_i^j$. Obviously, the $pk$ generated by $\mathcal{B}$ is identically distributed to the $pk$ generated by game 2, as in both cases all components $pk_i^j$ are generated independently by the key-generation algorithm Gen of PKE.
+---PAGE_BREAK---
+
+Whenever $\mathcal{A}$ sends a decryption query with $vk \neq vk^*$, $\mathcal{B}$ does the following. Let $d = \text{ECC}(vk)$ and $d^* = \text{ECC}(vk^*)$. Since the two codewords $d$ and $d^*$ are distinct and the code ECC has minimum distance $\tau$, there exists a set of $\tau$ indices $T \subseteq \{1, \dots, k\}$ such that $d_t \neq d_t^*$ for all $t \in T$. Thus, the public-keys $pk_t^{d_t}$, for $t \in T$, were generated by $\mathcal{B}$, and it therefore knows the corresponding secret-keys $sk_t^{d_t}$. $\mathcal{B}$ checks whether $\text{Verify}(c', pk^d, T, sk_T^d) = 1$ holds, i.e. whether $c'$ is a valid ciphertext for PKE$_{\text{Cor}}$ under the public-key $pk^d$. If so, $\mathcal{B}$ decrypts $\tilde{m}_T = (\tilde{m}_t \mid t \in T) = (\text{Dec}(sk_t^{d_t}, c'_t) \mid t \in T)$. Since the plaintext-space $\mathcal{M}_{\text{Cor}}$ is $\tau$-correlated, $\mathcal{B}$ can efficiently recover the whole message $\tilde{m}$ from the $\tau$-submessage $\tilde{m}_T$. Finally, $\mathcal{B}$ decodes $m = \text{D}(\tilde{m})$ to recover the message $m$ and outputs $m$ to $\mathcal{A}$. Observe that the verifiability-property of PKE$_{\text{Cor}}$ holds regardless of the subset $T$ used to verify. Thus, from $\mathcal{A}$'s view the decryption-oracle behaves identically in game 2 and in $\mathcal{B}$'s simulation.
+
+Finally, when $\mathcal{A}$ sends its challenge messages $m_0$ and $m_1$, $\mathcal{B}$ forwards $m_0$ and $m_1$ to the IND-CPA experiment for PKE$_{\text{Cor}}$ and receives a challenge-ciphertext $c^{*\prime}$. $\mathcal{B}$ then computes $\sigma = \text{Sign}(dsk^*, c^{*\prime})$ and outputs the challenge-ciphertext $c^* = (c^{*\prime}, vk^*, \sigma)$ to $\mathcal{A}$. When $\mathcal{A}$ generates an output, $\mathcal{B}$ outputs whatever $\mathcal{A}$ outputs.
+
+Putting it all together, $\mathcal{A}$'s views are identically distributed in game 2 and $\mathcal{B}$'s simulation. Therefore, it holds that $\text{Adv}_{\text{IND-CPA}}(\mathcal{B}) = \text{Adv}_{\text{Game2}}(\mathcal{A}) \ge \epsilon$. Thus, $\mathcal{B}$ breaks the IND-CPA security of PKE$_{\text{Cor}}$ with non-negligible advantage $\epsilon$, contradicting the assumption. ■
+
+Plugging Lemma 5 and Lemma 6 together establishes that any PPT IND-CCA2 adversary $\mathcal{A}$ has at most negligible advantage in winning the IND-CCA2 experiment for the scheme PKE'$_{\text{cca2}}$. ■
+
+### C. Verifiable Correlated PKE based on the McEliece Scheme
+
+We can use a modified version of the scheme presented in Section IV to instantiate a $\tau$-correlated verifiable IND-CPA secure McEliece scheme $\text{PKE}_{\text{McE,Cor}}$. A corresponding IND-CCA2 secure scheme is immediately implied by the construction in Section V-B. As plaintext-space $\mathcal{M}_{\text{Cor}}$ for $\text{PKE}_{\text{McE,Cor}}$, we choose the set of all tuples $(s|y_1, \dots, s|y_k)$, where $s$ is an $n$-bit string and $(y_1, \dots, y_k)$ is a codeword of a code $\mathbb{C}$ that can efficiently correct $k - \tau$ erasures. Clearly, $\mathcal{M}_{\text{Cor}}$ is $\tau$-correlated. Let $\mathcal{E}_{\mathbb{C}}$ be the encoding-function of $\mathbb{C}$ and $\mathcal{D}_{\mathbb{C}}$ the decoding-function of $\mathbb{C}$. The randomized encoding-function $\mathcal{E}_{\text{McE,Cor}}$ used by $\text{PKE}_{\text{McE,Cor}}$ proceeds as follows. Given a message $m$ and random coins $s$, it first computes $(y_1, \dots, y_k) = \mathcal{E}_{\mathbb{C}}(m)$ and outputs $(s|y_1, \dots, s|y_k)$. The decoding-function $\mathcal{D}_{\text{McE,Cor}}$ takes a tuple $(s|y_1, \dots, s|y_k)$ and outputs $\mathcal{D}_{\mathbb{C}}(y_1, \dots, y_k)$. As in the scheme of Section IV, the underlying OW-CPA secure encryption-scheme PKE is textbook-McEliece.
+
+The $\tau$-correlatedness of $\text{PKE}_{\text{McE,Cor}}$ follows directly by the construction of $\mathcal{M}_{\text{Cor}}$, $\mathcal{E}_{\text{McE,Cor}}$ and $\mathcal{D}_{\text{McE,Cor}}$. It remains to show verifiability and IND-CPA security of the scheme. The Verify$_{\text{McE}}$-algorithm takes a ciphertext $c = (c_1, \dots, c_k)$, a public-key pk and a partial secret-key sk$_T$ (for a $\tau$-sized index-set T), and proceeds as follows. First, it decrypts the
+
+components of $c$ at the indices of $T$, i.e. it computes $x_t = \text{Dec}_{\text{McE}}(sk_t, c_t)$ for $t \in T$. Then, it checks whether all $x_t$ are of the form $x_t = s|y_t$ for the same string $s$. If not, it stops and outputs 0. Next, it constructs a vector $\tilde{y}$ of length $k$ with $\tilde{y}_i = y_i$ for $i \in T$ and $\tilde{y}_i = \perp$ (erasure) for $i \notin T$. Verify then runs the erasure-correction algorithm of $\mathbb{C}$ on $\tilde{y}$. If the erasure-correction fails, it stops and outputs 0. Otherwise, let $y = (y_1, \dots, y_k)$ be the corrected vector returned by the erasure-correction. Then, Verify sets $x = (s|y_1, \dots, s|y_k)$. Let $\mathbf{G}_1, \dots, \mathbf{G}_k$ be the generator-matrices given in $pk_1, \dots, pk_k$. Finally, Verify checks whether all the vectors $c_j \oplus x_j\mathbf{G}_j$, for $j = 1, \dots, k$, have Hamming-weight smaller than $t$. If so, it outputs 1, otherwise 0. Clearly, if Verify$_{\text{McE}}$ outputs 1, then the ciphertext-components $c_j$ of $c$ are valid McEliece encryptions.
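The middle steps of this Verify algorithm (the common-prefix check on $s$, assembling $\tilde{y}$ with erasures, and erasure correction) can be sketched as follows. The erasure corrector is passed in as a parameter, and all names, including the toy repetition-code corrector used for testing, are our own illustration rather than the paper's instantiation.

```python
def assemble_and_correct(x_T, k, erasure_correct):
    """Middle steps of Verify for the correlated McEliece scheme:
    x_T maps each index t in T to the decrypted pair (s, y_t).
    Returns the candidate tuple (s|y_1, ..., s|y_k) as pairs, or None."""
    prefixes = {s for s, _ in x_T.values()}
    if len(prefixes) != 1:
        return None                    # decrypted components disagree on s
    y_tilde = [x_T[i][1] if i in x_T else None for i in range(k)]
    y = erasure_correct(y_tilde)       # None signals failed correction
    if y is None:
        return None
    s = prefixes.pop()
    return [(s, y_i) for y_i in y]

def rep_correct(y_tilde):
    """Toy erasure corrector for the repetition code (all y_i equal)."""
    known = [y for y in y_tilde if y is not None]
    return [known[0]] * len(y_tilde) if known else None

assert assemble_and_correct({0: ("s", "y")}, 3, rep_correct) == [("s", "y")] * 3
```

After this step, the real algorithm would re-encode each recovered component and apply the Hamming-weight test against the ciphertext components.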
+
+The IND-CPA security is proven analogously to Lemma 4. First, the McEliece generator-matrices $\mathbf{G}_i$ are replaced by random matrices $\mathbf{R}_i$; then, using the LPNDP assumption, vectors of the form $s\mathbf{R}_i \oplus e_i$ are replaced by uniformly random vectors $u_i$. As before, after this transformation the adversarial advantage is 0.
+
+## VI. ACKNOWLEDGMENTS
+
+We would like to thank Edoardo Persichetti for comments on the definition of k-repetition PKE in an earlier version of this work. We would like to thank the anonymous referees who provided us with valuable feedback that greatly improved the quality of this paper.
+
+## REFERENCES
+
+[1] A. Becker, A. Joux, A. May, A. Meurer. Decoding Random Binary Linear Codes in $2^{n/20}$: How $1+1=0$ Improves Information Set Decoding. EUROCRYPT 2012. pp. 520–536.
+
+[2] E.R. Berlekamp, R.J. McEliece, H.C.A van Tilborg. On the Inherent Intractability of Certain Coding Problems. IEEE Trans. Inf. Theory. Vol. 24, pp. 384–386. 1978.
+
+[3] D. J. Bernstein, T. Lange, C. Peters. Attacking and Defending the McEliece Cryptosystem. PQCrypto 2008. pp. 31–46.
+
+[4] D. J. Bernstein, T. Lange, C. Peters. Smaller Decoding Exponents: Ball-Collision Decoding. CRYPTO 2011. pp. 743–760. 2011.
+
+[5] B. Biswas, N. Sendrier. McEliece Cryptosystem Implementation: Theory and Practice. PQCrypto. pp. 47–62. 2008.
+
+[6] R. Canetti, S. Halevi, J. Katz. Chosen-Ciphertext Security from Identity-Based Encryption. EUROCRYPT 2004. pp. 207–222.
+
+[7] A. Canteaut, F. Chabaud. A New Algorithm for Finding Minimum-weight Words in a Linear Code: Application to Primitive Narrow-sense BCH Codes of Length 511. IEEE Trans. Inf. Theory. Vol. 44(1), pp. 367–378. 1998.
+
+[8] N. Courtois, M. Finiasz, N. Sendrier. How to Achieve a McEliece Digital Signature Scheme. ASIACRYPT 2001. pp. 157–174.
+
+[9] R. Cramer, V. Shoup. A Practical Public Key Cryptosystem Provably Secure Against Adaptive Chosen Ciphertext Attack. CRYPTO 1998. pp. 13–25.
+
+[10] D. Dolev, C. Dwork, M. Naor. Non-malleable Cryptography. SIAM J. Comput. Vol 30(2), pp. 391–437. 2000.
+
+[11] R. Dowsley, J. Müller-Quade, A. C. A. Nascimento. A CCA2 Secure Public Key Encryption Scheme Based on the McEliece Assumptions in the Standard Model. CT-RSA 2009. pp. 240–251.
+
+[12] J.-C. Faugère, V. Gauthier, A. Otmani, L. Perret, J.-P. Tillich. A distinguisher for High Rate McEliece Cryptosystems. Information Theory Workshop (ITW), 2011 IEEE. pp. 282–286, 2011
+
+[13] J.-C. Faugère, A. Otmani, L. Perret, J.-P. Tillich. Algebraic Cryptanalysis of McEliece Variants with Compact Keys. EUROCRYPT 2010. pp. 279–298. 2010.
+
+[14] M. Finiasz and N. Sendrier. Security Bounds for the Design of Code-based Cryptosystems. Asiacrypt 2009, LNCS 5912, pp. 88–105.
+---PAGE_BREAK---
+
+[15] S. Goldwasser, S. Micali. Probabilistic Encryption. *J. Comput. Syst. Sci.* Vol 28(2), pp. 270–299. 1984.
+
+[16] S. Goldwasser, V. Vaikuntanathan. Correlation-secure Trapdoor Functions from Lattices. Manuscript, 2008.
+
+[17] D. Hofheinz, E. Kiltz. Secure Hybrid Encryption from Weakened Key Encapsulation. *CRYPTO 2007*. pp. 553–571.
+
+[18] J. Katz, J. S. Shin: Parallel and Concurrent Security of the HB and HB+ Protocols. *EUROCRYPT 2006*. pp. 73–87.
+
+[19] K. Kobara and H. Imai. Semantically Secure McEliece Public-Key Cryptosystems Conversions for McEliece PKC, LNCS 1992, Springer, 2001.
+
+[20] L. Lamport. Constructing Digital Signatures from One-Way Functions, *SRI Intl. CSL-98*. Oct. 1979.
+
+[21] P. J. Lee and E. F. Brickell. An Observation on the Security of McEliece's Public-key Cryptosystem. *EUROCRYPT 1988*, pp. 275–280. 1988.
+
+[22] J. S. Leon. A Probabilistic Algorithm for Computing Minimum Weights of Large Error-correcting Codes. *IEEE Transactions on Information Theory*, 34(5):1354–1359, 1988.
+
+[23] Y. Lindell. A Simpler Construction of CCA2-Secure Public-Key Encryption under General Assumptions. *EUROCRYPT 2003*. pp. 241–254.
+
+[24] P. Loidreau, N. Sendrier. Weak keys in McEliece Public-key Cryptosystem. *IEEE Transactions on Information Theory*. pp. 1207–1212. 2001.
+
+[25] R.J. McEliece: A Public-Key Cryptosystem Based on Algebraic Coding Theory. *Deep Space Network Progress Report*, 1978.
+
+[26] A. May, A. Meurer, E. Thomae. Decoding Random Linear Codes in $\mathcal{O}(2^{0.054n})$. *ASIACRYPT 2011*. pp. 107–124.
+
+[27] M. Naor and M. Yung. Universal One-Way Hash Functions and their Cryptographic Applications. *21st STOC*. pp. 33–43. 1989.
+
+[28] R. Nojima, H. Imai, K. Kobara, K. Morozov, Semantic Security for the McEliece Cryptosystem without Random Oracles. *International Workshop on Coding and Cryptography (WCC) 2007*. pp. 257–268. Journal version in *Designs, Codes and Cryptography*. Vol. 49, No. 1-3, pp. 289–305. 2008.
+
+[29] C. Peikert, B. Waters. Lossy Trapdoor Functions and Their Applications. *STOC 2008*. pp. 187–196.
+
+[30] E. Persichetti, Personal Communication.
+
+[31] C. Rackoff, D. R. Simon: Non-Interactive Zero-Knowledge Proof of Knowledge and Chosen Ciphertext Attack. *CRYPTO 1991*. pp. 433–444.
+
+[32] O. Regev. On Lattices, Learning with Errors, Random Linear Codes, and Cryptography. *STOC 2005*. pp. 84–93.
+
+[33] A. Rosen, G. Segev. Chosen-Ciphertext Security via Correlated Products. *TCC 2009*. pp. 419–436.
+
+[34] A. Sahai. Non-Malleable Non-Interactive Zero Knowledge and Adaptive Chosen- Ciphertext Security. In *40th FOCS*. pp. 543–553.
+
+[35] N. Sendrier. Finding the Permutation Between Equivalent Linear Codes: The Support Splitting Algorithm. *IEEE Trans. Inf. Theory*. Vol. 46(4), pp.1193–1203. 2000.
+
+[36] N. Sendrier. On the Use of Structured Codes in Code Based Cryptography. *Coding Theory and Cryptography III, The Royal Flemish Academy of Belgium for Science and the Arts*. 2010.
+
+[37] J. Stern. A Method for Finding Codewords of Small Weight. *3rd International Colloquium on Coding Theory and Applications*, pp. 106–113, 1989.
+
+[38] F. Strenzke, E. Tews, H. G. Molter, R. Overbeck, A. Shoufan. Side Channels in the McEliece PKC. *PQCrypto 2008*, pp. 216-229.
+
+---PAGE_BREAK---
+
+# The Naproche Project
+Controlled Natural Language Proof Checking
+of Mathematical Texts
+
+POSTPRINT
+
+Marcos Cramer, Bernhard Fisseni, Peter Koepke, Daniel Kühlwein, Bernhard Schröder, and Jip Veldman
+
+University of Bonn and University of Duisburg-Essen
+{cramer, koepke, kuehlwei, veldman}@math.uni-bonn.de,
+{bernhard.fisseni, bernhard.schroeder}@uni-due.de
+http://www.naproche.net
+
+**Abstract.** This paper discusses the semi-formal language of mathematics and presents the Naproche CNL, a controlled natural language for mathematical authoring. Proof Representation Structures, an adaptation of Discourse Representation Structures, are used to represent the semantics of texts written in the Naproche CNL. We discuss how the Naproche CNL can be used in formal mathematics, and present our prototypical Naproche system, a computer program for parsing texts in the Naproche CNL and checking the proofs in them for logical correctness.
+
+**Keywords:** CNL, mathematical language, formal mathematics, proof checking.
+
+## 1 Introduction
+
+The Naproche project¹ (NAtural language PROof CHEcking) studies the semi-formal language of mathematics (SFLM) as used in mathematical journals and textbooks from the perspectives of linguistics, logic and mathematics. A central goal of Naproche is to develop and implement a controlled natural language (CNL) for mathematical texts which can be transformed automatically into equivalent formulae of first-order logic using methods of computational linguistics.
+
+One possible application of this mathematical CNL is to make formal mathematics more readable to the average mathematician; this application is fundamental to most other applications and therefore is currently the focus of Naproche development. In order to show the feasibility of this goal, we develop the prototypical Naproche system, which can automatically check texts written in the Naproche CNL for logical correctness. We test this system by reformulating parts of mathematical textbooks and the basics of mathematical theories in the Naproche CNL and automatically checking the resulting texts.
+
+¹ Naproche is a joint initiative of PETER KOEPKE (Mathematics, University of Bonn) and BERNHARD SCHRÖDER (Linguistics, University of Duisburg-Essen). The Naproche system is technically supported by GREGOR BÜCHEL from the University of Applied Sciences in Cologne.
+---PAGE_BREAK---
+
+Once Naproche has sufficiently broad coverage, we also plan to use it as a tool that can help undergraduate students learn how to write formally correct proofs and thus get used to (a subset of) SFLM. This possible application of Naproche also makes it clearer why we place such emphasis on the naturalness of our CNL.
+
+We first discuss the relevant features of SFLM and then discuss the Naproche CNL, Proof Representation Structures and the translation to first order logic. Finally, we describe how proof checking in Naproche can be of use to the field of formal mathematics and discuss further development of Naproche.
+
+## 2 The Semi-Formal Language of Mathematics
+
+As an example of the semi-formal language of mathematics (SFLM), we cite a proof for the theorem "$\sqrt{2}$ is irrational" from HARDY-WRIGHT's introduction to number theory [8].
+
+If $\sqrt{2}$ is rational, then the equation $a^2 = 2b^2$ is soluble in integers *a*, *b* with $(a, b) = 1$. Hence $a^2$ is even, and therefore *a* is even. If $a = 2c$, then $4c^2 = 2b^2$, $2c^2 = b^2$, and *b* is also even, contrary to the hypothesis that $(a, b) = 1$.
+
+SFLM incorporates the syntax and semantics of the general natural language, so that it takes over its complexity and some of its ambiguities. However, SFLM texts are distinguished from common language texts by several characteristics:
+
+- They combine natural language expressions with mathematical symbols and formulae, which can syntactically function as noun phrases or sub-propositions.
+
+- Constructions which are hard to disambiguate are generally avoided.
+
+- Mathematical symbols can be used for disambiguation, e.g. by use of variables instead of anaphoric pronouns.
+
+- Assumptions can be introduced and retracted. For example, the proof cited above is a proof by contradiction: At the beginning, it is assumed that $\sqrt{2}$ is rational. The claims that follow are understood to be relativised to this assumption. Finally the assumption leads to a contradiction, and is retracted to conclude that $\sqrt{2}$ is irrational.
+
+- Mathematical texts are highly structured. At a global level, they are commonly divided into building blocks like definitions, lemmas, theorems and proofs. Inside a proof, assumptions can be nested into other assumptions, so that the scopes of assumptions define a hierarchical proof structure.
+
+- Definitions add new symbols and expressions to the vocabulary and fix their meaning.
+
+- Proof steps are commonly justified by referring to results in other texts, or previous passages in the same text. So there are intertextual and intratextual references.
+
+- Furthermore, SFLM texts often contain commentaries and hints which guide the reader through the process of the proof, e.g. by indicating the method of proof ("by contradiction", "by induction") or giving analogies.
+---PAGE_BREAK---
+
+## 3 The Naproche CNL
+
+The Naproche CNL is currently a small but functional subset of SFLM which includes some of the above-mentioned characteristics of SFLM. We first discuss the syntax of the Naproche CNL and then present a CNL version of the example proof from Section 2. A more detailed specification of the CNL syntax can be found in [5].
+
+In the Naproche CNL we distinguish between *macrostructure* (general text structure) and *microstructure* (grammatical structure in a sentence).
+
+### 3.1 Macrostructure
+
+A Naproche text is structured by structure markers: *Axiom*, *Theorem*, *Lemma*, *Proof*, *Qed* and *Case*. For example, a theorem is presented after the structure marker *Theorem*, and its proof follows between the structure markers *Proof* and *Qed*. In a proof by case distinction, each case starts with the word *Case*, and the end of a case distinction can be marked by a sentence starting with *in both cases* or *in all cases*.
+
+Assumptions are always introduced by an assumption trigger (e.g. *let*, *consider*, *assume that*). A sentence starting with *thus* always retracts the most recently introduced assumption that hasn't yet been retracted. When finishing a proof with a *qed*, all assumptions introduced in the proof get retracted.
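As a toy illustration of this trigger-and-retraction behaviour (the function and trigger list below are our own sketch, not part of the Naproche system), open assumptions can be tracked on a stack:

```python
# Toy sketch of the assumption handling described above: an assumption
# trigger opens a scope, "thus" retracts the most recently introduced
# assumption that is still open, and "qed" retracts all remaining ones.
# (Illustrative only; not the Naproche system's actual code.)

ASSUMPTION_TRIGGERS = ("let", "consider", "assume that")

def process(sentences):
    stack = []    # open assumptions, innermost last
    events = []   # what happened to each sentence
    for s in sentences:
        low = s.lower()
        if low.startswith(ASSUMPTION_TRIGGERS):
            stack.append(s)
            events.append(("open", s))
        elif low.startswith("thus"):
            # retract the most recent assumption not yet retracted
            events.append(("retract", stack.pop()))
        elif low.startswith("qed"):
            while stack:  # a qed retracts every remaining assumption
                events.append(("retract", stack.pop()))
        else:
            events.append(("claim", s))
    return events
```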
+
+Definitions can be used to define the meanings of constants, function symbols, relation symbols, verbs, adjectives and nouns. They can be marked by the words *definition* or *define*.
+
+Here is an extract from our reformulation of EDMUND LANDAU'S *Grundlagen der Analysis* [12], with all structure markers marked in bold font:
+
+**Axiom 3:** For every $x, x' \neq 1$.
+**Axiom 4:** If $x' = y'$, then $x = y$.
+**Theorem 1:** If $x \neq y$ then $x' \neq y'$.
+**Proof:** Assume that $x \neq y$ and $x' = y'$. Then by axiom 4, $x = y$. **Qed.**
+**Theorem 2:** For all $x, x' \neq x$.
+**Proof:** By axiom 3, $1' \neq 1$. Suppose $x' \neq x$. Then by theorem 1, $(x')' \neq x'$. Thus by induction, for all $x, x' \neq x$. **Qed.**
+**Definition 1:** Define + recursively:
+$$x + 1 = x', \qquad x + y' = (x + y)'$$
+
+Additionally we present an example of a proof by case distinction in the Naproche CNL:
+
+**Theorem:** No square number is prime.
+
+**Proof:** Let $n$ be a square number.
+
+**Case 1:** $n = 0$ or $n = 1$. Then $n$ is not prime.
+
+**Case 2:** $n \neq 0$ and $n \neq 1$. Then there is an $m$ such that $n = m^2$. Then $m \neq 1$ and $m \neq n$. $m$ divides $n$, i.e. $n$ is not prime.
+
+So in both cases $n$ is not prime. **Qed.**
+---PAGE_BREAK---
+
+### 3.2 Microstructure
+
+Mathematical terms and formulae can be combined with natural language expressions to form statements, definitions and assumptions: The use of nouns, adjectives, verbs, natural language connectives (e.g. *and*, *or*, i.e., *if*, *iff*) and natural language quantification (e.g. *for all*, *there is*, *every*, *some*, *no*) works as in natural English, only with some syntactical limitations similar to those in Attempto Controlled English [7]. Sentences in a proof can start with words like *then*, *hence*, *therefore* etc. Mathematical terms can function as noun phrases, and mathematical formulae can function as sentences or sub-clauses. The language of mathematical formulae and terms that can be used in the Naproche CNL is a first order language with function symbols, including some syntactic sugar that is common in the mathematical formulae and terms found in SFLM.
+
+### 3.3 Coverage
+
+The Naproche CNL in its present state does not cover all constructs of SFLM, but most texts can be rewritten in the Naproche CNL in such a way that they resemble the original text and still read like SFLM.
+
+In our biggest test so far, we translated the first chapter of LANDAU'S *Grundlagen der Analysis* [12] (17 pages) into the Naproche CNL. The resulting text can be seen online [4], and stays close to the original. Another example of a reformulation in the Naproche CNL can be seen in the following subsection.
+
+We are continuously extending the language to allow more and more SFLM constructs.
+
+### 3.4 $\sqrt{2}$ in Naproche CNL
+
+The irrationality proof can be reformulated in the Naproche CNL as follows:
+
+**Theorem:** $\sqrt{2}$ is irrational.
+
+**Proof:**
+
+Assume that $\sqrt{2}$ is rational. Then there are integers $a, b$ such that $a^2 = 2 \cdot b^2$ and $\gcd(a, b) = 1$. Hence $a^2$ is even, and therefore $a$ is even. So there is an integer $c$ such that $a = 2 \cdot c$. Then $4 \cdot c^2 = 2 \cdot b^2$, $2 \cdot c^2 = b^2$, and $b$ is even. Contradiction. QED.
+
+Naproche uses LaTeX input, so the actual input would be the LaTeX code used to typeset the above text. We are also working on the alternative of using the mathematical WYSIWYG editor TeXmacs [15].
+
+## 4 Proof Representation Structures
+
+Texts written in the Naproche CNL are translated into *Proof Representation Structures* or PRSs (see [10], [11]). PRSs are Discourse Representation Structures (DRSs, [9]), which are enriched in such a way as to represent the distinguishing characteristics of SFLM discussed in Section 2.
+---PAGE_BREAK---
+
+A PRS has five constituents: An identification number, a list of discourse referents, a list of mathematical referents, a list of textual referents and an ordered list of conditions². Similar to DRSs, we can display PRSs as “boxes” (Figure 1).
+
+Fig. 1. A PRS with identification number *i*, discourse referents $d_1, ..., d_n$, mathematical referents $m_1, ..., m_k$, conditions $c_1, ..., c_t$ and textual referents $r_1, ..., r_p$
+
+Mathematical referents are the terms and formulae which appear in the text. As in DRSs, discourse referents are used to identify objects in the domain of the discourse. However, the domain contains two kinds of objects: mathematical objects like numbers or sets, and the symbols and formulae which are used to refer to or make claims about mathematical objects. Discourse referents can identify objects of either kind.
+
+PRSs have identification numbers, so that we can refer to them from other points in the discourse. The textual referents indicate the intratextual and intertextual references.
+
+Just as in the case of DRSs, PRSs and PRS conditions are defined recursively: Let $A, B, B_1, ..., B_n$ be PRSs, $d, d_1, ..., d_n$ discourse referents and $m$ a mathematical referent. Then
+
+- for any *n*-ary predicate *p* (e.g. expressed by adjectives and noun phrases in predicative use and verbs in SFLM), *p*(*d*₁, ..., *d*ₙ) is a condition.
+
+- *holds*(*d*) is a condition representing the claim that the formula referenced by *d* is true.
+
+- math_id(*d*, *m*) is a condition which binds a discourse referent to a mathematical referent (a formula or a term).
+
+- $A$ is a condition.
+
+- $\neg A$ is a condition, representing a negation.
+
+- $A \Rightarrow B$ is a condition, representing an assumption ($A$) and the set of claims made inside the scope of this assumption ($B$).
+
+- $A \Leftrightarrow B$ is a condition, representing a logical equivalence.
+
+- $A \Leftarrow B$ is a condition, representing a reversed conditional (i.e. of the form "... if ...").
+
+- $A \lor B$ is a condition, representing an inclusive disjunction.
+
+- $><(B_1, ..., B_n)$ is a condition, representing an exclusive disjunction, i.e. the claim that precisely one of $B_1, ..., B_n$ holds.
+
+² The order of the conditions in a PRS reflects the argument structure of a proof and is relevant to the PRS semantics. This was in part inspired by ASHER's SDRT [1].
+---PAGE_BREAK---
+
+- $<>(B_1, ..., B_n)$ is a condition, representing the claim that at most one of $B_1, ..., B_n$ holds.
+
+- *A* := *B* is a condition, representing a definition of a predicate.
+
+- *f*: *A* → *B* is a condition, representing a definition of a function or constant (where *A* fixes the scope of the function and *B* specifies its defining equality or property).
+
+- *contradiction* is a condition, representing a contradiction.
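The five constituents of a PRS can be pictured with a small data-structure sketch; the class and field names, and the tagged-tuple encoding of conditions, are invented for this illustration and are not Naproche's internals:

```python
# Minimal sketch of a PRS: identification number, discourse referents,
# mathematical referents, textual referents, and an ORDERED condition
# list (the order matters for the PRS semantics, see footnote 2).
from dataclasses import dataclass, field

@dataclass
class PRS:
    ident: int
    drefs: list = field(default_factory=list)  # discourse referents
    mrefs: list = field(default_factory=list)  # mathematical referents (terms/formulae)
    trefs: list = field(default_factory=list)  # textual referents, e.g. "axiom(4)"
    conds: list = field(default_factory=list)  # ordered conditions
```

A condition like *holds(d)* would then be a tuple `("holds", d)`, and a complex condition such as $A \Rightarrow B$ a tuple `("imp", A, B)` holding two `PRS` instances.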
+
+### 4.1 PRS Construction
+
+The algorithm creating PRSs from CNL proceeds sequentially [10]:
+
+- It starts with the empty PRS.
+
+- Structure markers open special structure PRSs.
+
+- An assumption triggers a condition of the form *A* ⇒ *B*, where *A* contains the assumption and *B* contains the representation of all claims made inside the scope of that assumption.
+
+- Representations of single sentences are produced in a way similar to a standard threading construction algorithm (see [2]).
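A toy version of this sequential construction, reduced to assumption handling (the dictionary encoding of PRSs is assumed purely for illustration, not taken from [10]):

```python
# Sketch: an assumption opens an implication condition ("imp", A, B); the
# left PRS A holds the assumption itself, and every claim made before the
# assumption is retracted is collected in the right PRS B.

def build_prs(sentences):
    prs = {"conds": []}            # the (initially empty) top-level PRS
    stack = [prs]                  # innermost open PRS last
    for s in sentences:
        low = s.lower()
        if low.startswith("assume"):
            a = {"conds": [s]}     # assumption goes into the left PRS
            b = {"conds": []}      # scoped claims go into the right PRS
            stack[-1]["conds"].append(("imp", a, b))
            stack.append(b)
        elif low.startswith("thus"):
            stack.pop()            # retract: close the innermost scope
            stack[-1]["conds"].append(s)
        else:
            stack[-1]["conds"].append(s)
    return prs
```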
+
+We clarify both the PRS construction algorithm and the functions of the various kinds of PRS conditions by presenting examples which show how the most important PRS conditions are constructed from input text:
+
+### Negation conditions
+
+Negation conditions work just as in Discourse Representation Theory:
+
+$$ \begin{array}{c} n \text{ is not prime} \\ \downarrow \\ \neg\,\fbox{$n$ is prime} \end{array} $$
+
+The downward arrow in this and the following examples roughly corresponds to one step of the PRS construction algorithm. The "n is prime" in this PRS would still be processed further.
+
+### Implication conditions
+
+PRS conditions of the form *A* ⇒ *B* are called *implication conditions*. One function of them is to represent conditionals of the form If...then:
+
+If $x' = y'$, then $x = y$.
+
+$$ \begin{array}{c} \downarrow \\ \fbox{$x' = y'$} \;\Rightarrow\; \fbox{$x = y$} \end{array} $$
+---PAGE_BREAK---
+
+When an assumption is introduced in a proof, and then something is derived
+from that assumption, this is also represented by an implication condition: The
+assumption is placed in the left PRS of the condition, and all proof steps until
+the assumption gets retracted are placed in the right PRS. Axioms are treated
+like assumptions that never get retracted.
+
+The following example shows how different linguistic constructions can yield implication conditions, and how these can be nested inside one another:
+
+Axiom 4: If $x' = y'$, then $x = y$.
+Assume that $x' = y'$. Then by axiom 4, $x = y$.
+
+$$
+\begin{array}{|c|}
+\hline
+\text{axiom(4)} \\
+\text{If } x' = y' \text{, then } x = y. \\
+\hline
+\end{array}
+\;\Rightarrow\;
+\begin{array}{|c|}
+\hline
+\text{Assume that } x' = y'. \\
+\text{Then by axiom 4, } x = y. \\
+\hline
+\end{array}
+$$
+
+$$
+\downarrow
+$$
+
+$$
+\begin{array}{|c|}
+\hline
+\text{axiom(4)} \\
+\fbox{$x' = y'$} \;\Rightarrow\; \fbox{$x = y$} \\
+\hline
+\end{array}
+\;\Rightarrow\;
+\begin{array}{|c|}
+\hline
+\fbox{$x' = y'$} \;\Rightarrow\;
+\begin{array}{|c|}
+\hline
+\text{axiom(4)} \\
+x = y \\
+\hline
+\end{array} \\
+\hline
+\end{array}
+$$
+
+Note that the *axiom(4)* entry in the second PRS is not a condition, but a textual referent to a premise (see 5.3).
+
+### Predicate conditions
+
+Predicate conditions work just like in DRT:
+
+There is an even prime.
+
+$$
+\begin{array}{|c|}
+\hline
+1 \\
+\hline
+\mathit{even}(1) \\
+\mathit{prime}(1) \\
+\hline
+\end{array}
+$$
+
+Note that we use natural numbers (1, 2, 3, ... ) as discourse referents.
+
+### math_id conditions
+
+As in DRT, all objects that are talked about are represented by discourse referents. Additionally, the mathematical terms (e.g. the variables) used in the text
+---PAGE_BREAK---
+
+are listed in the PRS as mathematical referents. It is therefore necessary to indicate which discourse referent refers to the same object as a certain mathematical referent. For this we use math_id conditions:
+
+There are integers *m*, *n* such that *m* divides *n*.
+
+$$
+\begin{array}{|c|c|}
+\hline
+1, 2 & m, n \\
+\hline
+\multicolumn{2}{|l|}{\mathit{integer}(1)} \\
+\multicolumn{2}{|l|}{\mathit{math\_id}(1, m)} \\
+\multicolumn{2}{|l|}{\mathit{integer}(2)} \\
+\multicolumn{2}{|l|}{\mathit{math\_id}(2, n)} \\
+\multicolumn{2}{|l|}{\mathit{divide}(1, 2)} \\
+\hline
+\end{array}
+$$
+
+### holds conditions
+
+When a mathematical formula is used, this gets represented by a *holds* condition, which indicates the truth of the formula:
+
+Then $m \neq 1$ and $m \neq n$.
+
+$$
+\begin{array}{|c|c|}
+\hline
+1, 2 & m \neq 1, \; m \neq n \\
+\hline
+\multicolumn{2}{|l|}{\mathit{math\_id}(1, m \neq 1)} \\
+\multicolumn{2}{|l|}{\mathit{holds}(1)} \\
+\multicolumn{2}{|l|}{\mathit{math\_id}(2, m \neq n)} \\
+\multicolumn{2}{|l|}{\mathit{holds}(2)} \\
+\hline
+\end{array}
+$$
+
+### Definition conditions
+
+There are two special kinds of conditions for definitions. Here we concentrate on just one of these two kinds; conditions of this kind are of the form $A := B$ and represent definitions by biconditionals:
+
+Define *m* to be square iff there is an integer *n* such that *m* = *n*².
+
+
+
+$$
+\begin{array}{|c|c|}
+\hline
+1 & m \\
+\hline
+\multicolumn{2}{|l|}{\mathit{math\_id}(1, m)} \\
+\multicolumn{2}{|l|}{\mathit{square}(1)} \\
+\hline
+\end{array}
+\;:=\;
+\begin{array}{|c|c|}
+\hline
+2, 3 & n, \; m = n^2 \\
+\hline
+\multicolumn{2}{|l|}{\mathit{math\_id}(2, n)} \\
+\multicolumn{2}{|l|}{\mathit{math\_id}(3, m = n^2)} \\
+\multicolumn{2}{|l|}{\mathit{holds}(3)} \\
+\hline
+\end{array}
+$$
+---PAGE_BREAK---
+
+### Bare PRSs as conditions
+
+Contrary to the case of DRSs, a bare PRS can be a direct condition of a PRS. This allows us to represent in a PRS the structure of a text divided into building blocks (definitions, lemmas, theorems, proofs) by structure markers.
+
+**Theorem 1:** If $x \neq y$ then $x' \neq y'$.
+
+**Proof:** Assume that $x \neq y$ and $x' = y'$. Then by axiom 4, $x = y$. **Qed.**
+
+### $\sqrt{2}$ is irrational
+
+The PRS constructed from the example proof "$\sqrt{2}$ is irrational." is shown in Figure 2.
+
+### 4.2 Accessibility in PRSs
+
+Unlike for DRSs, accessibility for PRSs is not just a relation between PRSs, but also a relation between PRS conditions, and between PRSs and PRS conditions. As in the case of DRSs, accessibility can be defined via a subordination relation:
+
+Let $\Theta_1$ and $\Theta_2$ be PRSs or PRS conditions.³ $\Theta_1$ directly subordinates $\Theta_2$ if
+
+* $\Theta_1$ is a PRS, and $\Theta_2$ is the first condition of $\Theta_1$, or
+* $\Theta_1$ and $\Theta_2$ are PRS conditions of a PRS B, and $\Theta_1$ appears before $\Theta_2$ in the list of conditions of B, or
+* $\Theta_1$ is a PRS condition of the form $\neg\Theta_2$, or
+
+³ $\Theta_1$ and $\Theta_2$ should be understood to be occurrences of PRSs or PRS conditions. For example, if a PRS has as its condition list $(c_1, c_2, c_1)$, then $c_2$ subordinates the second occurrence of $c_1$, but not its first occurrence. So strictly speaking it is necessary to speak about occurrences ("this occurrence of $c_2$ subordinates the second occurrence of $c_1$").
+---PAGE_BREAK---
+
+Fig. 2. The PRS corresponding to proof of "$\sqrt{2}$ is irrational.", depicted without PRS identifiers and textual references.
+---PAGE_BREAK---
+
+* $\Theta_1$ is a PRS condition of the form $\Theta_2 \Rightarrow B$, $\Theta_2 \Leftarrow B$, $\Theta_2 \Leftrightarrow B$, $\Theta_2 \lor B$, $B \lor \Theta_2$, $><(\Theta_2, B_1, \dots, B_n)$, $<>(\Theta_2, B_1, \dots, B_n)$, $\Theta_2 := B$ or $f: \Theta_2 \rightarrow B$, or
+* $\Theta_1 \Rightarrow \Theta_2$, $\Theta_1 \Leftarrow \Theta_2$, $\Theta_1 \Leftrightarrow \Theta_2$, $\Theta_1 := \Theta_2$ or $f: \Theta_1 \rightarrow \Theta_2$ is a PRS condition.
+
+The relation "$\Theta_1$ is accessible from $\Theta_2$" is the transitive closure of the relation "$\Theta_1$ directly subordinates $\Theta_2$".
+
+A discourse referent *d* or a mathematical referent *m* is said to be accessible from $\Theta$ if there is a PRS $B$ that is accessible from $\Theta$ and that contains *d* or *m* in its list of discourse or mathematical referents.
+
+In the PRS of the $\sqrt{2}$ example (Figure 2), we can derive from the above definitions that the discourse referent 2 is accessible from all the PRSs and PRS conditions inside the PRS that is to the right of the "⇒". The discourse referent 7 and the mathematical referent $a^2$ are only accessible in the PRSs and conditions that are graphically below the line where they are introduced. On the other hand the discourse referent 1 is only accessible in the PRS in which it gets introduced, as this PRS does not subordinate any other PRSs.
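Because accessibility is defined as a transitive closure, it can be computed naively from an explicit edge list of the direct-subordination relation; the following sketch is illustrative only (node labels are arbitrary):

```python
# Compute the transitive closure of a "directly subordinates" edge set.
# An edge (a, b) means: a directly subordinates b; referents introduced
# in a are then accessible from b and from everything b subordinates.
from itertools import product

def transitive_closure(edges):
    closure = set(edges)
    changed = True
    while changed:                 # iterate until no new pair is added
        changed = False
        for (a, b), (c, d) in product(list(closure), list(closure)):
            if b == c and (a, d) not in closure:
                closure.add((a, d))
                changed = True
    return closure
```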
+
+### 4.3 Motivations for the Accessibility Constraints
+
+For those complex PRS conditions that are analogous to standard DRS conditions (i.e. $\neg A, A \Rightarrow B$ and $A \lor B$), the accessibility constraints are equivalent to the standard accessibility constraints in DRT: In all three cases the discourse referents introduced in *A* and *B* are not accessible for the later discourse; in *A* $\Rightarrow B$ the discourse referents from *A* are accessible in *B*, but not so for $A \lor B$.
+
+The fact that discourse referents introduced in one of the argument PRSs of a complex PRS condition should not be accessible after that condition naturally extends to those complex conditions that are specific to PRSs ($A \Leftarrow B$, $A \Leftrightarrow B$, $><(A_1, \dots, A_n)$, $<>(A_1, \dots, A_n)$, $A := B$, $f: A \rightarrow B$).
+
+For example, after the definition "Define *m* to be square iff there is an integer *n* such that $m = n^2$," neither *n* nor *m* should be accessible. The corresponding PRS condition has the form $A := B$ and can be seen in Figure 3. $A$ introduces a discourse referent for *m* which is also used in $B$ (since *m* is used to the right of the "iff"). This example also shows that the discourse referents introduced by $A$ should still be accessible in $B$, as they are according to our above definition of accessibility.
+
+Fig. 3. The PRS condition corresponding to the sentence "Define *m* to be square iff there is an integer *n* such that $m = n^2$"
+
+$$
+\begin{array}{c|c}
+1 & m \\
+\hline
+\mathit{math\_id}(1, m) \\
+\mathit{square}(1)
+\end{array}
+\quad :=
+\begin{array}{|c|l}
+\hline
+2, 3 & n, m = n^2 \\
+\hline
+\mathit{math\_id}(2, n) & \\
+\mathit{math\_id}(3, m = n^2) & \\
+\mathit{holds}(3) & \\
+\hline
+\end{array}
+$$
+---PAGE_BREAK---
+
+Similarly, there are natural examples that show that also in conditions of the form $A \Leftrightarrow B$, $A \Leftarrow B$ and $f: A \to B$ the discourse referents of A should be accessible in B:
+
+- "A natural number *n* is even if and only if $n \mid 1$ is odd."
+
+- "A natural number is even if it is not odd."⁴
+
+- "For *x* a positive real, define $\sqrt{x}$ to be the unique positive real *y* such that
+ *y*² = *x*."
+
+The condition $><(A_1, ..., A_n)$ expresses an exclusive disjunction, so it is natural that accessibility is blocked between the disjuncts just as in the case of the inclusive disjunction condition $A \lor B$. The similar condition $<>(A_1, ..., A_n)$ expresses the weaker statement that at most one of the "disjuncts" holds, but it behaves linguistically very similarly to the statement that precisely one of them holds, so the same accessibility constraints apply.
+
+### 4.4 PRS Semantics
+
+Similarly to the case of the semantics of first order formulae, defining the semantics of PRSs means fixing the criteria under which a PRS is satisfied in a certain structure. A *structure* is a set together with relations and functions defined on the elements of the set.
+
+The difference from the semantics of first-order logic is that PRS semantics is defined dynamically⁵: Each PRS condition changes the *context* for the subsequent PRS conditions. A *context* is defined to be a triple consisting of a structure and two kinds of variable assignments⁶, one for discourse referents and one for mathematical referents.
+
+- Any definition condition extends the structure in this context triple, so that it contains the newly defined symbol.
+
+- Any PRS condition that is a simple PRS extends the two assignments in the triple, so that they are also defined on the discourse referents and mathematical referents introduced in that PRS.
+
+Any other kind of PRS condition doesn't change the context, but must be satisfied in the context in which it is interpreted.
+
+For a PRS to be satisfied in a certain structure $\mathfrak{A}$, it must be possible to sequentially modify the context $[\mathfrak{A}, \emptyset, \emptyset]$ by each condition in the PRS.
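This threading of contexts can be sketched by modelling each condition as a function from contexts to contexts, with `None` signalling that the condition is not satisfied; the encoding below is purely illustrative, not the definition in [6]:

```python
# A context is a triple (structure, dref_assignment, mref_assignment).
# Conditions are modelled as callables: definition conditions would
# extend the structure, simple-PRS conditions extend the assignments,
# and all other conditions return the context unchanged if satisfied
# in it, else None.

def interpret(conds, structure):
    context = (structure, {}, {})    # start from [structure, empty, empty]
    for cond in conds:
        context = cond(context)      # each condition modifies the context
        if context is None:
            return None              # some condition was not satisfied
    return context
```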
+
+⁴ Concerning this “reversed conditional” as well as the biconditional above, we should mention that we do not take these examples to be linguistically analogous to donkey sentences. Indeed, these cases should probably be seen as cases of generic interpretation of “a”, whereas the “a” in the antecedent of donkey sentences is normally not considered to be a case of generic interpretation.
+
+⁵ This dynamic definition of PRS semantics is partly based on the dynamic definition of DRS semantics found in [2].
+
+⁶ A *variable assignment* is a function that assigns elements of the domain of the structure to variables, in our case to discourse referents or mathematical referents.
+---PAGE_BREAK---
+
+The details of the PRS semantics can be found in [6]. Note that textual referents are not included in the PRS semantics. They are used in the Logic module of the Naproche system (see 5.3).
+
+### 4.5 Translating PRSs into First-Order Logic
+
+PRSs, like DRSs, have a canonical translation into first-order logic. This translation is performed in a way similar to the DRS to first-order translation described in the book by BLACKBURN and BOS [2]. For example, in both DRS and PRS translations to first-order logic, the introduction of discourse referents is translated using existential quantifiers, unless the discourse referents are introduced in a DRS/PRS $A$ of a condition of the form $A \Rightarrow B$, in which case a universal quantifier is used in the translation. There are two main differences between the PRS and DRS translations to first-order logic that need discussion:
+
+- math_id conditions are not translated into sub-formulae of the first-order translation. A math_id condition which binds a discourse referent *n* to a term *t* triggers a substitution of *n* by *t* in the translation of subsequent conditions. A math_id condition which binds a discourse referent *n* to a formula *φ* causes a subsequent *holds(n)* condition to be translated to *φ*.
+
+- Definition conditions are also not translated into sub-formulae of the translation. Instead, a condition of the form *A* := *B* triggers a substitution of the relation symbol it defines by the first-order translation of *B* in subsequent conditions.
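The substitution behaviour of math_id conditions can be mimicked in a toy translator; the tag names and tuple encoding below are invented for this sketch and are not the actual Naproche representation:

```python
# Toy translation of a flat condition list to a first-order formula.
# A referent bound to a TERM is substituted into later predicate
# conditions; a referent bound to a FORMULA makes a later holds(n)
# condition translate to that formula.

def translate(conds):
    terms, formulas, out = {}, {}, []
    for c in conds:
        tag = c[0]
        if tag == "math_id_term":        # ("math_id_term", d, t)
            terms[c[1]] = c[2]
        elif tag == "math_id_formula":   # ("math_id_formula", d, phi)
            formulas[c[1]] = c[2]
        elif tag == "holds":             # ("holds", d) translates to phi
            out.append(formulas[c[1]])
        elif tag == "pred":              # ("pred", p, d1, ...) -> p(t1, ...)
            p, args = c[1], [terms.get(d, d) for d in c[2:]]
            out.append(f"{p}({', '.join(map(str, args))})")
    return " & ".join(out)
```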
+
+Cramer [6] presents a rigorous definition of this translation as well as a proof that this translation is sound, i.e. that a PRS is satisfied in a structure if and only if its translation into first-order logic is satisfied in that structure.
+
+## 5 Formal Mathematics in Naproche CNL
+
+Currently our primary goal is to use the Naproche CNL in the field of formal mathematics.
+
+### 5.1 Formal Mathematics
+
+Mathematicians usually publish *informal proofs*, i.e. proofs for which there are no formal criteria of correctness. Informal proofs contrast with *formal proofs*, in which only syntactical rules can be applied and hence every logical step is explicit.
+
+Formal mathematics aims at carrying out all of mathematics in a completely formal way, i.e. formalising all mathematical statements in strictly formal languages and carrying out all proofs in formal proof calculi.
+
+However, formal proofs tend to be lengthy and difficult to read. This problem can be mitigated by the use of formal mathematics systems: these are computer systems which facilitate the formalisation of mathematical proofs. In prominent
+---PAGE_BREAK---
+
+examples of formal mathematics systems like *Mizar* [13] and *Coq* [3], this is achieved by allowing the user to use a more readable formal language and to leave out some of the simpler steps of a derivation, which can be filled in by a computer. However, even proofs written in these systems are still not very readable for an average mathematician, mainly because the syntax is more similar to a programming language than to SFLM and because they contain much information that human readers consider redundant.
+
+One possible application of the Naproche CNL is to use it in an even more natural system for formal mathematics. As a prototypical version of such a system, we have developed the *Naproche system*.
+
+### 5.2 The Naproche System
+
+The Naproche system can parse texts written in the Naproche CNL, build the appropriate PRSs, and check them for correctness using automated theorem provers. Apart from several small examples in group theory and set theory, we reformulated the first chapter of EDMUND LANDAU'S *Grundlagen der Analysis* [12] in the Naproche CNL, and automatically checked it for correctness using the Naproche system.
+
+The Naproche system consists of three main modules: The User Interface, the Linguistics module and the Logic module. Figure 4 shows an overview of the architecture of the Naproche system.
+
+Fig. 4. The architecture of the Naproche system
+---PAGE_BREAK---
+
+The User Interface is the standard communication gateway between the user and the Naproche system. The input text is entered using the User Interface, and the other modules report back to the User Interface.
+
+The user enters the input text, written in the Naproche CNL, on the interface, which transforms the input into a Prolog list and hands it over to the Linguistics module. Currently, our User Interface is web-based and can be found on the Naproche website, www.naproche.net. We are also working on a plug-in for the WYSIWYG editor TeXmacs [15].
+
+The Linguistics module creates a PRS from the CNL text. The text is parsed using a Prolog-DCG parser that builds up the PRS during the parsing process. The created PRS can either be given to the Logic module, or reported back to the User Interface. With the help of automated theorem provers, the Logic module checks PRSs for logical correctness. The result of the check is sent to the User Interface.
+
+### 5.3 Checking PRSs
+
+The checking algorithm keeps a list of first-order formulae it considers to be true, called the *list of premises*, which gets continuously updated during the checking process.
+
+To check a PRS, the algorithm considers the conditions of the PRS. The conditions are checked sequentially and each condition is checked under the currently active premises. According to the kind of condition, the Naproche system creates obligations which have to be discharged by an automated theorem prover⁷. For example, a *holds* condition is accepted under the premises θ if the automated theorem prover can derive the formula which is bound to the discourse referent of the *holds* condition from θ.
+
+The more premises are available, the harder it gets for the automated theorem prover to discharge an obligation. Therefore we implemented a premise selection algorithm that tries to select only the relevant premises for an obligation. This is where the textual referents of a PRS are used. If a PRS contains a textual referent, the premises corresponding to that referent are taken to be relevant for the corresponding obligation.
+
+If all obligations of a condition can be discharged, then the condition is accepted. After a condition is accepted, the active premises get updated, and the next condition gets checked. A PRS is accepted by Naproche if all its conditions are accepted consecutively.
+
+If the PRS created from a Naproche CNL text is accepted by our checking algorithm, one can conclude that the text is correct (under the assumption, of course, that the automated theorem prover only produces correct proofs).
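The checking loop described in this section can be sketched with a stand-in `prover` (any callable deciding whether a formula follows from the current premises; the real system discharges obligations with ATPs via SystemOnTPTP):

```python
# Sequentially check conditions against a growing list of premises.
# "assume" conditions extend the premises without creating an obligation;
# "claim" conditions create an obligation the prover must discharge, and
# once accepted they are added to the premises themselves.

def check_conditions(conditions, prover):
    premises = []
    for kind, formula in conditions:
        if kind == "assume":
            premises.append(formula)
        elif kind == "claim":
            if not prover(premises, formula):
                return False             # obligation not discharged
            premises.append(formula)
    return True                          # every condition was accepted
```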
+
+### 5.4 Speed and Scalability
+
+One important issue for every algorithm is speed and scalability.
+
+⁷ We access automated theorem provers, currently Otter and E Prover, through GEOFF SUTCLIFFE's SystemOnTPTP [14].
+---PAGE_BREAK---
+
+Single sentences are parsed in milliseconds; a text of 30 sentences takes about one second. Our biggest example, the Landau text, consists of 326 sentences. The corresponding PRS is created in 11 seconds.
+
+The obligation creation algorithm needs less than a second for small texts (less than 30 sentences), and 4 seconds for the Landau PRS.
+
+The time needed to discharge an obligation depends heavily on the automated theorem prover used, the available premises and the actual query. Using the E Prover, version 1.0, all obligations created from the Landau text can be discharged in 80 seconds.
+
+All tests were carried out on a 2 GHz Intel Pentium M with 2 GB RAM.
+
+## 6 Conclusion
+
+We discussed the particular properties of the Semi-Formal Language of Mathematics and presented the Naproche CNL as a controlled, and thus automatically tractable, subset of SFLM. We defined an extension of Discourse Representation Structures, called Proof Representation Structures, which is adapted so as to represent the relevant content of texts written in the Naproche CNL. Some theoretical properties of Proof Representation Structures were discussed in this paper, but a more exhaustive treatment can be found in the works we referred to.
+
+Since we currently focus on applications in the field of formal mathematics, our implementation, the Naproche system, is geared to the needs of this application. The system translates Naproche CNL texts into Proof Representation Structures, which are then checked for logical correctness; if they are correct, this amounts to the original text being correct.
+
+We believe that the naturalness of the Naproche CNL could make formal mathematics more feasible for the average mathematician.
+
+## 7 Related and Future Work
+
+CLAUS ZINN developed a system for processing SFLM texts, which he calls *informal mathematical discourse* [17]. He used an extended version of DRSs, which he also called Proof Representation Structures, to represent the meaning of a text. The PRS definitions of Naproche and ZINN are different, however.
+
+We are collaborating with the VeriMathDoc project [16], which is developing a mathematical assistant system that naturally supports mathematicians in the development, authoring and publication of mathematical work. We are working towards common formats for exchanging mathematical proofs, e.g. for corpora of mathematical texts which help to assess to what extent Naproche can and should be extended to optimally suit writers and readers of SFLM and the purpose of automated verification.
+
+We shall extend the Naproche controlled natural language to include a richer mathematical formula language and more argumentative mathematical phrases and constructs. Once the Naproche CNL is sufficiently extensive and powerful, we shall reformulate and check proofs from various areas of mathematics in a way "understandable" to men and machines.
+---PAGE_BREAK---
+
+## References
+
+1. Asher, N.: Reference to Abstract Objects in Discourse (1993)
+2. Blackburn, P., Bos, J.: Working with Discourse Representation Theory: An Advanced Course in Computational Linguistics (2003)
+3. Coq Development Team: The Coq Proof Assistant Reference Manual: Version v8.1 (July 2007), http://coq.inria.fr
+4. Carl, M., Cramer, M., Kühlwein, D.: Landau in Naproche, ch. 1, http://www.naproche.net/downloads/2009/landauChapter1.pdf
+5. Cramer, M.: The Controlled Natural Language of Naproche in a nutshell, http://www.naproche.net/wiki/doku.php?id=dokumentation:language
+6. Cramer, M.: Mathematisch-logische Aspekte von Beweisrepräsentationsstrukturen, Master's thesis, University of Bonn (2008), http://naproche.net/downloads.shtml
+7. Fuchs, N.E., Höfler, S., Kaljurand, K., Rinaldi, F., Schneider, G.: Attempto Controlled English: A Knowledge Representation Language Readable by Humans and Machines
+8. Hardy, G.H., Wright, E.M.: An Introduction to the Theory of Numbers, 4th edn. (1960)
+9. Kamp, H., Reyle, U.: From Discourse to Logic: Introduction to Model-theoretic Semantics of Natural Language. Kluwer Academic Publishers, Dordrecht (1993)
+10. Kolev, N.: Generating Proof Representation Structures for the Project Naproche, Magister thesis, University of Bonn (2008), http://naproche.net/downloads.shtml
+11. Kühlwein, D.: A calculus for Proof Representation Structures, Diploma thesis, University of Bonn (2008), http://naproche.net/downloads.shtml
+12. Landau, E.: Grundlagen der Analysis, 3rd edn. (1960)
+13. Matuszewski, R., Rudnicki, P.: Mizar: the first 30 years. Mechanized Mathematics and Its Applications 4(2005) (2005)
+14. Sutcliffe, G.: System Description: SystemOnTPTP. In: CADE, pp. 406–410 (2000)
+15. Texmacs Editor website: http://www.texmacs.org/
+16. VeriMathDoc website: http://www.ags.uni-sb.de/~afiedler/verimathdoc/
+17. Zinn, C.: Understanding Informal Mathematical Discourse, PhD thesis at the University of Erlangen (2004), http://citeseer.ist.psu.edu/233023.html
\ No newline at end of file
diff --git a/samples/texts_merged/3110207.md b/samples/texts_merged/3110207.md
new file mode 100644
index 0000000000000000000000000000000000000000..a2ccfb166dc576bedbad872ad07981a76b653413
--- /dev/null
+++ b/samples/texts_merged/3110207.md
@@ -0,0 +1,2677 @@
+
+---PAGE_BREAK---
+
+# Contents
+
+1 **Introduction and overview** 2
+
+1.1 Acknowledgements 2
+
+1.2 PALSfit3 2
+
+2 **About PALSfit** 4
+
+2.1 General fitting criterion 4
+
+2.2 The POSITRONFIT model 5
+
+2.3 The log-normal extension 7
+
+2.4 Source correction 8
+
+2.5 The RESOLUTIONFIT model 8
+
+3 **Input and output** 9
+
+3.1 PALSfit input 9
+
+3.2 POSITRONFIT control file 9
+
+3.3 RESOLUTIONFIT control file 13
+
+3.4 PALSfit output 14
+
+3.5 POSITRONFIT main output 15
+
+3.6 RESOLUTIONFIT main output 17
+
+3.7 Channel ranges 18
+
+4 **Experience with PALSfit** 19
+
+4.1 POSITRONFIT experience 19
+
+4.2 RESOLUTIONFIT experience 20
+
+5 **Appendix A: Fit and statistics** 23
+
+5.1 Unconstrained NLLS data fitting 23
+
+5.2 Constraints 25
+
+5.3 Statistical analysis 25
+
+5.4 Marquardt minimization 31
+
+5.5 Separable least-squares technique 33
+
+5.6 Various mathematics, statistics, and numerics 34
+
+6 **Appendix B: Model details** 40
+
+6.1 POSITRONFIT 41
+
+6.2 RESOLUTIONFIT 44
+
+7 **Appendix C: Log-normal details** 45
+
+7.1 Log-normal POSITRONFIT model formulas 45
+
+7.2 Fixing one of the log-normal parameters $\tau_m$ or $\sigma$ 46
+
+7.3 Jacobian matrix for output presentation 47
+
+7.4 Numerical evaluation of log-normal integrals 48
+
+7.5 Discarding log-normal lifetime components 49
+---PAGE_BREAK---
+
+8 **Appendix D: Quality check** 49
+
+8.1 Simulation of lifetime spectra 49
+
+8.2 Verifying POSITRONFIT by statistical analysis 50
+
+8.3 Comparison of PALSfit3 with LT10 53
+
+9 **Appendix E: Exclusion of channels** 55
+
+References 57
+
+# 1 Introduction and overview
+
+An important aspect of doing experiments by positron annihilation lifetime spectroscopy (PALS) is to be able to reliably analyse the measured spectra in order to extract physically meaningful parameters. A number of computer programs have been developed over many years by various authors for this purpose. Most of these programs have used various methods of least-squares fitting of a model function to the experimental data [1–15], while others carry out a direct deconvolution of the measured spectra using different criteria for obtaining the optimal solution [16–23]. At our laboratory we have concentrated on developing programs for least-squares fitting of positron lifetime spectra.
+
+## 1.1 Acknowledgements
+
+The development of PALSfit programs, including the present report, has been supported by the Technical University of Denmark (AIT Department and Department of Energy Conversion and Storage). Also the input and inspiring questions from colleagues world-wide have been much appreciated. Thanks are due to Dr. Tetsuya Hirade for allowing us to apply his PALGEN program for further development of spectrum simulation tools, and to Dr. Anne Margrethe Larsen who resolved several issues about the LATEX typesetting system. Niels Jørgen Pedersen (deceased) was involved in the early phases of this project.
+
+## 1.2 PALSfit3
+
+PALSfit Version 3, or PALSfit3 for short, is our most recent software of this kind. It is based on the well tested PATFIT and PALSfit (Version 1 and 2) software [8, 12, 24], which have been used extensively by the positron annihilation community. Now PALSfit3 allows each lifetime component to be fitted not only with a simple decaying exponential, but also with a broadened decaying exponential function. The reason for introducing broadening of lifetimes arises from the fact that if all lifetime components are assumed to be simple decaying exponentials, it is very difficult, if not impossible, to reliably separate groups of components whose lifetime values are close. This is due to the fact that when fitting a sum of simple decaying exponentials, a very strong correlation exists between such lifetimes. Hence, instead of assuming several individual lifetime components in the analysis, we allow in PALSfit3 that many components with close lifetimes may be joined into just one component which in principle is a decaying exponential, but with a lifetime that can have a distribution of values (this distribution has been assumed to be a so-called log-normal distribution). In addition, a number of new graphics displays are provided to ease the selection of some input parameters and to display results of spectrum analyses.
+
+The two cornerstones in PALSfit are the following least-squares fitting modules:
+---PAGE_BREAK---
+
+* POSITRONFIT extracts lifetimes and intensities from lifetime spectra.
+
+* RESOLUTIONFIT determines the lifetime spectrometer time resolution function to be used in POSITRONFIT analyses.
+
+Correspondingly PALSfit may run in either of two modes, producing a POSITRONFIT analysis or a RESOLUTIONFIT analysis, respectively.
+
+Common for both modules is that a model function will be fitted to a measured spectrum. This model function consists of a function representing the physics of the positron decay which is convoluted with the experimental time resolution function, plus a constant background. The 'physics function' consists of a sum of decaying exponentials each of which may be broadened by convolution with a log-normal distribution (only in POSITRONFIT). The time resolution function is described by a sum of Gaussians which may be displaced with respect to each other.
+
+Various types of constraints may be imposed on the fitting parameters.
+
+A correction for the contribution to a measured lifetime spectrum from positrons annihilating outside the sample can be made during the POSITRONFIT analysis.
+
+The front page of the present report shows an example of a POSITRONFIT Spectrum setup window. Various additional tabs at the bottom can be selected to enter or change input data (Resolution function, Background and Area and Lifetimes and Corrections) or display results of analyses (Graphics, Text output and Multispectrum plot).
+
+In RESOLUTIONFIT, parameters determining the shape of the resolution function can be fitted, normally by analysing lifetime spectra which contain mainly one component. The extracted resolution function may then be used in POSITRONFIT to analyse more complicated spectra. PALSfit3 can easily feed the resolution function determined by RESOLUTIONFIT into POSITRONFIT. In the latter program the shape of the resolution function is fixed.
+
+In the following, Chapter 2 presents a brief overview of the PALSfit3 model. In Chapter 3 follows a detailed description of the PALSfit3 input and output, while Chapter 4 conveys some experiences we and others have gained with PALSfit3 and its predecessors.
+
+Appendices A–C contain the mathematical and statistical details which constitute the foundation for the programs. Appendix D deals with quality checks of the programs, carried out by statistical analyses of the results from fitting a long series of simulated spectra as well as by comparing results obtained by both PALSfit3 and LT programs (Giebel and Kansy [15]). Appendix E discusses “Exclusion of channels”, a new feature in PALSfit3 described by Jens V Olsen.
+
+PALSfit3 is available from the website www.palsfit.dk.
+
+A contemporary edition of the PATFIT package, roughly equivalent to PALSfit3 without its GUI, is available too. It contains command-driven versions of POSITRONFIT and RESOLUTIONFIT and might be useful for batch processing under Windows or in a Linux environment where it is also available.
+---PAGE_BREAK---
+
+Fig. 1 An example of a window in the PALSfit program, which shows a spectrum to be analysed, some of the input parameters for the analysis as well as icons, buttons, menus and tabs that are used to define the analysis and to display the results in numerical or graphical form.
+
+# 2 About PALSfit
+
+## 2.1 General fitting criterion
+
+Common for POSITRONFIT and RESOLUTIONFIT is that they fit a parameterized model function to a distribution (a “spectrum”) of experimental data values $y_i$. In the actual case these are count numbers which are recorded in “channels”. We use the least-squares criterion, i.e. we seek values of the *k* model parameters $b_1, \dots, b_k$ that minimize
+
+$$ \phi = \sum_{i=1}^{n} w_i (y_i - f_i(b_1, \dots, b_k))^2 \quad (1) $$
+
+where *n* is the number of data values, $f_i(b_1, \dots, b_k)$ the model prediction corresponding to data value no. *i*, and $w_i$ a fixed weight attached to *i*; in this work we use “statistical weighting”,
+
+$$ w_i = \frac{1}{s_i^2} \qquad (2) $$
+
+where $s_i^2$ is the variance of $y_i$. As some of the parameters enter our models nonlinearly, we must use an iterative fitting technique. In PALSfit we use separable least-squares methods to obtain the parameter estimates. Details of the solution methods and the statistical inferences are given in Appendix A. As a result of the calculations, a number of fitting parameters are estimated that characterize the fitted model function and hence the measured spectrum (e.g. lifetimes and intensities). A number of different constraints may be imposed on the fitting parameters. The two most important types of constraints are that 1) a parameter can be fixed to a certain value,
+---PAGE_BREAK---
+
+and 2) a linear combination of lifetime intensities is put equal to zero (this latter constraint can be used to fix the ratio of intensities).
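The fitting criterion (1) with statistical weighting (2) is easy to mirror in code. Below is a minimal Python sketch (ours, not part of PALSfit; the function name is an assumption) that evaluates $\phi$ for counting data, where the variance of a count is estimated by the count itself (Poisson statistics):

```python
import numpy as np

def phi(y, f):
    """Weighted sum of squared residuals, eq. (1), with statistical
    weighting, eq. (2): for Poisson counting data the variance of the
    count y_i is estimated by y_i itself, so w_i = 1/y_i."""
    y = np.asarray(y, dtype=float)
    f = np.asarray(f, dtype=float)
    w = 1.0 / np.maximum(y, 1.0)  # guard against empty channels
    return float(np.sum(w * (y - f) ** 2))
```

For a perfect fit $\phi = 0$; a residual of 10 counts in a channel holding 100 counts contributes $10^2/100 = 1$ to $\phi$.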
+
+## 2.2 The POSITRONFIT model
+
+Let us first consider the “simple” POSITRONFIT model, i.e. without any broadening of components. In this case the model function is a sum of decaying exponentials convoluted with the resolution function of the lifetime spectrometer, plus a constant background. Each exponential corresponds to a single lifetime component. Let $t$ be the time, $k_0$ the number of lifetime components, $a_j$ the decay function for component $j$, $R$ the time-resolution function, and $B$ the background. The resulting expression is given in full detail in Appendix B, Section 6.1; here we state the model in an annotated form using the symbol $*$ for convolution:
+
+$$f(t) = \sum_{j=1}^{k_0} (a_j * R)(t) + B \quad (3)$$
+
+where
+
+$$a_j(\tau) = \begin{cases} A_j \exp(-(\tau - T_0)/\tau_j), & \tau > T_0 \\ 0, & \tau < T_0 \end{cases} \quad (4)$$
+
+In (4) $\tau_j$ is the mean lifetime of the jth component, and $A_j$ is a pre-exponential factor. The integral
+
+$$\int_{T_0}^{\infty} A_j \exp(-(\tau - T_0)/\tau_j) d\tau = A_j \tau_j \quad (5)$$
+
+is called the *area* or the *absolute intensity* of the component. If not for the resolution function $R$, $t=T_0$ would be the onset time for the decaying exponentials, hence $T_0$ is called “time-zero”. We assume, furthermore, that $R$ is given by a weighted sum of $k_g$ Gaussians which may be displaced with respect to each other:
+
+$$R(\tau) = \sum_{p=1}^{k_g} \omega_p G_p(\tau) \quad (6)$$
+
+where
+
+$$G_p(\tau) = \frac{1}{\sqrt{2\pi}s_p} \exp\left(-\frac{(\tau - \Delta_p)^2}{2s_p^2}\right) \quad (7)$$
+
+and
+
+$$\sum_{p=1}^{k_g} \omega_p = 1 \quad (8)$$
+
+The Gaussian (7) is centered around the shift $\Delta_p$. Its standard deviation $s_p$ is related to its Full Width at Half Maximum by
+
+$$\text{FWHM}_p = 2\sqrt{2\ln 2} s_p \approx 2.3548 s_p \quad (9)$$
+
+We also see that
+
+$$\int_{-\infty}^{\infty} R(\tau) d\tau = 1 \quad (10)$$
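Equations (6)–(10) translate directly into code. The following Python sketch (function and parameter names are ours, not PALSfit's) evaluates $R(t)$ and can be used to check the normalization (10) numerically:

```python
import numpy as np

SIGMA_PER_FWHM = 1.0 / (2.0 * np.sqrt(2.0 * np.log(2.0)))  # inverse of eq. (9)

def resolution(t, fwhms, omegas, deltas):
    """R(t) of eq. (6): a weighted sum of Gaussians (7), each of width
    fwhms[p] (converted to a standard deviation via eq. (9)) and centred
    at the shift deltas[p]. The weights omegas must sum to 1, eq. (8)."""
    t = np.asarray(t, dtype=float)
    r = np.zeros_like(t)
    for fwhm, w, d in zip(fwhms, omegas, deltas):
        s = fwhm * SIGMA_PER_FWHM
        r += w * np.exp(-((t - d) ** 2) / (2.0 * s**2)) / (np.sqrt(2.0 * np.pi) * s)
    return r
```

Summing $R$ over a fine time grid and multiplying by the grid spacing should reproduce the unit area of (10) to high accuracy.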
+
+Regarding the time scale, the choices of $t=0$ and the time unit are arbitrary. Considering the actual physical experiment, the positron annihilation lifetimes $\tau_j$ are often measured in ns. In this *physical time* representation all the other temporal model parameters in (3–10), i.e. $t$, $\tau$, $T_0$, $\Delta_p$, and $s_p$, would be in ns too, and it is natural to set $T_0=0$.
+
+However, for some of the quantities it is more convenient to use a time scale directly related to the spectrum recording system. Let the spectrum be recorded in $n_{ch}$ channels, numbered $i_{ch}=1,2,\dots,n_{ch}$. Each channel represents a time slot whose common width will be used as the time unit. We let channel No. $i_{ch}$ begin at $t=i_{ch}-1$ and end at $t=i_{ch}$. This implies that $t=0$
+---PAGE_BREAK---
+
+corresponds to the beginning of the first channel. In this channel scale the “time-zero” $T_0$ will usually take some positive fractional value, say $T_0 = 120.36$. The time $t$ defined in this way is called the *channel time*.
+
+Our way of defining the channel time is by no means standard. Others may prefer to let $t=0$ fall in the middle of a channel, or may choose to number the channels 0, 1, 2, ... In earlier versions of our software $t=0$ corresponded to the left end of a fictive channel 0. Of course such differences have no influence on the analysis itself, except for the nominal value of $T_0$. The scales for physical time and channel time are assumed to be connected by a fixed parameter $C$:
+
+$$1 \text{ channel} = C \text{ ns} \tag{11}$$
+
+The curve given by (3) is continuous, but since the spectra are recorded in channels of a multichannel analyser or similar, this curve shall for proper comparison be transformed into a histogram by integration over intervals each being one channel wide.
+
+If all the $n_{ch}$ channels in the spectrum are used in the least-squares analysis by (1), we have $n = n_{ch}$ and we simply identify the channel number $i_{ch}$ with the data value number $i$ from (1). Thus we substitute for $f_i$ in (1) the channel average of the model count,
+
+$$f_i = \int_{i-1}^{i} f(t) \, dt, \quad i = 1, \dots, n \tag{12}$$
+
+with $f(t)$ given by (3), so that (12) is fitted to the measured spectrum. However, often only a subset of the channels are used in the analysis. If this subset starts in channel $i_{ch}^{\min}$ and ends in $i_{ch}^{\max}$ (inclusive), where
+
+$$1 \le i_{\text{ch}}^{\min} \le i_{\text{ch}}^{\max} \le n_{\text{ch}} \tag{13}$$
+
+we should generalize (12) to
+
+$$f_i = \int_{i_{\text{ch}}^{\min}+i-2}^{i_{\text{ch}}^{\min}+i-1} f(t) \, dt, \quad i = 1, \dots, n \tag{14}$$
+
+where now
+
+$$n = i_{\text{ch}}^{\max} - i_{\text{ch}}^{\min} + 1 \tag{15}$$
+
+In any case, we obtain as the result a model for the least-squares analysis of the form
+
+$$f_i = \sum_{j=1}^{k_0} F_{ij} + B \tag{16}$$
+
+where $F_{ij}$ is the contribution from lifetime component $j$ in spectrum channel $i_{ch}^{\min} + i - 1$. (We relegate the full write-up of $F_{ij}$ to Appendix B, Section 6.1.) We recall that $f_i$ in (12), (14), and (16) corresponds to $f_i(b_1, \dots, b_k)$ in Section 2.1, formula (1).
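For a single unbroadened component and with the resolution smearing ignored, the channel integral (12) can be done in closed form, since the integral of $A\exp(-(t-T_0)/\tau)$ telescopes across channel edges. A Python sketch (ours; a simplified illustration, not the PALSfit model function):

```python
import numpy as np

def channel_counts(n, tau, area, t0, bkg):
    """Channel contents f_i (eq. 12) for one unbroadened lifetime
    component without resolution smearing: the integral of
    A*exp(-(t - t0)/tau) + B over each unit-wide channel, evaluated
    analytically. `area` = A*tau is the absolute intensity (eq. 5);
    all times are in channel units."""
    edges = np.arange(n + 1, dtype=float)  # channel boundaries 0..n
    # cumulative decay area from t0 up to each edge (zero before t0)
    cum = area * (1.0 - np.exp(-np.maximum(edges - t0, 0.0) / tau))
    return np.diff(cum) + bkg
```

Channels entirely before $T_0$ contain only background, and the summed counts recover the component area $A\tau$ up to the truncated tail beyond the last channel.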
+
+The fitting parameters in POSITRONFIT are the *lifetimes* ($\tau_j$), the *relative intensities* defined as
+
+$$I_j = \frac{A_j \tau_j}{\sum_{k=1}^{k_0} A_k \tau_k} \tag{17}$$
+
+the *time-zero* ($T_0$), and the *background* ($B$). Each of these parameters may be fixed to a chosen value. In another type of constraint you may put one or more linear combinations of intensities equal to zero in the fitting, i.e.
+
+$$\sum_{j=1}^{k_0} h_{lj} I_j = 0 \tag{18}$$
+
+These constraints can be used to fix ratios of intensities. Finally, it is possible to fix the total area of the spectrum in the fitting,
+
+$$\sum_{j=1}^{k_0} A_j \tau_j + \text{background area} = \text{constant} \tag{19}$$
+---PAGE_BREAK---
+
+This may be a useful option if, for example, the peak region of the measured spectrum is not included in the analysis.
+
+The necessary mathematical processing of the POSITRONFIT model for the least-squares analysis is outlined in Appendix B, Section 6.1.
+
+## 2.3 The log-normal extension
+
+Until now we have assumed that each lifetime component consists of a decaying exponential function (convoluted with a resolution function) with a single decay rate (equal to the inverse of the lifetime $\tau_*$). We may say that its probability density function (pdf) is a delta function,
+
+$$f(\tau) = \delta(\tau - \tau_*) \quad (20)$$
+
+Sometimes it is more realistic to assume that a lifetime may have some continuous distribution. We shall here consider the case where one (or more) of the lifetimes obeys the *log-normal* distribution which is implemented in PALSfit3.
+
+Let us recapitulate the general properties of the log-normal distribution. We say that the stochastic variable $\tau$ has a log-normal distribution if $\ln \tau$ has a normal distribution,
+
+$$\ln \tau \sim N(\ln \tau_*; \sigma_*^2) \quad (21)$$
+
+for some positive $\tau_*$ and $\sigma_*$. Thus the mean and variance of $\ln \tau$ are $\ln \tau_*$ and $\sigma_*^2$, respectively. To find $F$, the cumulative distribution function (CDF) of $\tau$, we note that
+
+$$F(x) = P\{\tau < x\} = P\left\{\frac{\ln \tau - \ln \tau_*}{\sigma_*} < \frac{\ln x - \ln \tau_*}{\sigma_*}\right\} \quad (22)$$
+
+In terms of the CDF for $N(0,1)$,
+
+$$\Phi(x) = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{x} e^{-\frac{1}{2}t^2} dt = \frac{1}{2} \left(1 + \operatorname{erf}\left(\frac{x}{\sqrt{2}}\right)\right) \quad (23)$$
+
+eq. (22) can be written
+
+$$F(x) = \Phi\left(\frac{\ln x - \ln \tau_*}{\sigma_*}\right) \quad (24)$$
+
+Taking the derivative, this gives the pdf of $\tau$:
+
+$$f(\tau) = \frac{1}{\tau \sigma_* \sqrt{2\pi}} \exp\left(-\frac{1}{2\sigma_*^2} (\ln \tau - \ln \tau_*)^2\right) \quad (25)$$
+
+with support $(0, \infty)$. In the limit $\sigma_* \to 0$ (25) tends to (20). We see from (24) that $\tau_*$ equals the median value for the log-normal distribution; $\sigma_*$ is dimensionless and is sometimes called the scale or the shape parameter. The mean and variance are
+
+$$E[\tau] = \tau_m = \tau_* \exp(\frac{1}{2}\sigma_*^2) \quad (26)$$
+
+$$\text{Var}[\tau] = \sigma^2 = \tau_*^2 \exp(\sigma_*^2)(\exp(\sigma_*^2) - 1) \quad (27)$$
+
+Hence the standard deviation is
+
+$$\sigma = \tau_* \exp(\frac{1}{2}\sigma_*^2) \sqrt{\exp(\sigma_*^2) - 1} \quad (28)$$
+
+and the coefficient of variation is
+
+$$\frac{\sqrt{\mathrm{Var}[\tau]}}{E[\tau]} = \frac{\sigma}{\tau_m} = \sqrt{\exp(\sigma_*^2) - 1} \quad (29)$$
+
+We see that when the parameter $\sigma_*$ is small, it is approximately equal to the coefficient of variation. By solving the equation
+
+$$\frac{d}{d\tau} \ln(f(\tau)) = 0 \quad (30)$$
+---PAGE_BREAK---
+
+for $\tau$, where $f(\tau)$ is given by (25), we find that the most probable $\tau$-value (mode) for the log-normal distribution is
+
+$$ \tau_{\max} = \tau_* \exp(-\sigma_*^2) \quad (31) $$
+
+The log-normal distribution is completely specified by either of the parameter sets $(\tau_*, \sigma_*)$ or $(\tau_m, \sigma)$. The relation between the two sets was given in (26–27); the inverse transformation reads
+
+$$ \tau_* = \frac{\tau_m^2}{\sqrt{\sigma^2 + \tau_m^2}} \quad (32) $$
+
+$$ \sigma_*^2 = \ln \left( 1 + \frac{\sigma^2}{\tau_m^2} \right) \quad (33) $$
+
+The Full Width at Half Maximum is given by
+
+$$ \text{FWHM} = 2\tau_* \exp(-\sigma_*^2) \sinh(\sqrt{2\ln 2}\sigma_*) \quad (34) $$
+
+or, equivalently
+
+$$ \text{FWHM} = 2\tau_m^4 (\sigma^2 + \tau_m^2)^{-3/2} \sinh \left[ \sqrt{2 \ln 2 \cdot \ln \left( 1 + \frac{\sigma^2}{\tau_m^2} \right)} \right] \quad (35) $$
+
+In POSITRONFIT we use the representation $(\tau_*, \sigma_*)$ for internal calculations, while $(\tau_m, \sigma)$ is used for input and output. More details related to POSITRONFIT and the log-normal distribution are given in Appendix C.
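The conversions between the internal set $(\tau_*, \sigma_*)$ and the input/output set $(\tau_m, \sigma)$ are simple to mirror in code. A Python sketch (function names are ours) implementing eqs. (26), (28), (32)–(34); the two parameter sets are exact inverses of each other, so a round trip must reproduce the inputs:

```python
import numpy as np

def to_internal(tau_m, sigma):
    """(tau_m, sigma) -> (tau_*, sigma_*), eqs. (32)-(33)."""
    tau_star = tau_m**2 / np.sqrt(sigma**2 + tau_m**2)
    sigma_star = np.sqrt(np.log(1.0 + (sigma / tau_m) ** 2))
    return tau_star, sigma_star

def to_output(tau_star, sigma_star):
    """(tau_*, sigma_*) -> (tau_m, sigma), eqs. (26) and (28)."""
    tau_m = tau_star * np.exp(0.5 * sigma_star**2)
    sigma = tau_m * np.sqrt(np.exp(sigma_star**2) - 1.0)
    return tau_m, sigma

def lognormal_fwhm(tau_star, sigma_star):
    """Full width at half maximum of the log-normal pdf, eq. (34)."""
    return (2.0 * tau_star * np.exp(-sigma_star**2)
            * np.sinh(np.sqrt(2.0 * np.log(2.0)) * sigma_star))
```

Expressing the same FWHM through $(\tau_m, \sigma)$ via (32)–(33) gives a useful cross-check of the algebra.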
+
+## 2.4 Source correction
+
+Normally in an experiment a fraction $\alpha$ of the positrons will not annihilate in the sample, but for example in the source or at surfaces. In POSITRONFIT it is possible to make a correction for this (“source correction”). First, the raw spectrum data are fitted in a first iteration cycle. Then, the spectrum for the source correction is subtracted from the raw spectrum. The corrected spectrum is then fitted in a second iteration cycle. In this second cycle it is optional to choose another number of lifetime components as well as type and number of constraints than were used in the first iteration cycle. The source correction spectrum $f_i^s$ itself is composed of $k_s$ lifetime components and expressed in analogy with (16) (with $B=0$) as follows:
+
+$$ f_i^s = \sum_{j=1}^{k_s} F_{ij}^s \quad (36) $$
+
+If $\tau_j^s$ and $A_j^s$ are the lifetime and pre-exponential factor, respectively, of source-correction component $j$, then
+
+$$ \sum_{j=1}^{k_s} A_j^s \tau_j^s = \alpha \sum_{j=1}^{k_0} A_j \tau_j \quad (37) $$
+
+Log-normal broadening of the source-correction lifetime components is accepted.
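The area balance (37) fixes the absolute scale of the source components. A tiny sketch (ours; names and the percent conventions follow the control-file description later in this report) that splits the source area among the $k_s$ components according to their relative intensities:

```python
def source_component_areas(rel_intensities_pct, alpha_pct, sample_area):
    """Absolute areas A_j^s * tau_j^s of the source-correction components.
    By eq. (37) their sum equals alpha (in percent) times the total
    sample area sum(A_j * tau_j); each component takes a share given by
    its relative intensity I_j^s (in percent)."""
    total_source = (alpha_pct / 100.0) * sample_area
    return [total_source * i / 100.0 for i in rel_intensities_pct]
```

For example, with $\alpha = 10\%$ and two source components of 60% and 40% relative intensity, the source spectrum carries one tenth of the sample area, split 60/40.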
+
+## 2.5 The RESOLUTIONFIT model
+
+The RESOLUTIONFIT model function is basically the same as for POSITRONFIT, Eqs. (3–16). A few additional formulas relevant to RESOLUTIONFIT are given in Appendix B, Section 6.2. The purpose of RESOLUTIONFIT is to extract the shape of the resolution function. The widths and shifts (Eqs. (9) and (7)) of the Gaussians in the resolution function are therefore included as fitting parameters. In order not to have too many fitting parameters, the intensities of the Gaussians are fixed parameters. For the same reason it is normally advisable to determine resolution functions by fitting only simple lifetime spectra, i.e. spectra containing only one major lifetime component. The extracted resolution function may then be used in POSITRONFIT to analyse
+---PAGE_BREAK---
+
+more complicated spectra. Along the same line, RESOLUTIONFIT does not include as many
+features as does POSITRONFIT, e.g. there is no source correction and there are no constraints
+possible on time-zero or on the area. Moreover, log-normal broadening of lifetime components is
+not available. The background can be free or fixed, just like in POSITRONFIT.
+
+Hence, the fitting parameters in RESOLUTIONFIT are the lifetimes ($\tau_j$), their relative intensities ($I_j$), the background ($B$), the time-zero ($T_0$), and the widths and shifts of the Gaussians in the resolution function. Each of these parameters, except $T_0$, may be constrained to a fixed value and, as in POSITRONFIT, linear combinations of lifetime intensities may be constrained to zero in the fitting. At least one shift must be fixed.
+
+# 3 Input and output
+
+PALSfit requires — together with the spectrum to be analysed — a set of input data, e.g. some characteristic parameters of the lifetime spectrometer, guesses of the parameters to be fitted, and possible constraints on these parameters. For one analysis of the spectrum, these data shall be organised in a block structured *dataset* which is saved in a so-called *control file*. In order to carry out several analyses of the same spectrum or of different spectra, several datasets may be stacked in the same control file.
+
+The most direct way to generate datasets and control files is to run PALSfit and edit the input
+data by using the PALSfit menus. Nevertheless there might be situations where an inspection
+or an external editing of the content of a control file is required. For example, as mentioned
+in Chapter 1, it may be useful in certain situations (batch processing) to run the command-
+driven PATFIT programs POSITRONFIT and RESOLUTIONFIT directly. In that case you will
+also need to know the structure of the input files. Note that PATFIT and PALSfit are input-
+compatible.
+
+In any case, the knowledge of the structure of the control files may give the user a good overview
+of the capabilities of PALSfit3. Therefore, in the following we shall describe the contents of the
+control files for POSITRONFIT and RESOLUTIONFIT in some detail.
+
+Each dataset in a control file is partitioned into a number of data blocks, corresponding roughly to the menus in PALSfit. Each block is initiated by a block header. For example, the first block header reads
+
+POSITRONFIT DATA BLOCK 1: OUTPUT OPTIONS
+
+in the case of POSITRONFIT, and similarly for RESOLUTIONFIT.
+
+## 3.1 PALSfit input
+
+As mentioned above, PALSfit3 can interactively generate and/or edit the control file for either POSITRONFIT or RESOLUTIONFIT. Previously generated control files can be used as default input values. A number of checks on the consistency of the generated control data are built into PALSfit3. PALSfit3 is largely self-explanatory regarding input editing.
+
+## 3.2 POSITRONFIT control file
+
+A sample PALSfit3 control file for POSITRONFIT with a single dataset is shown below:
+---PAGE_BREAK---
+
+Block 1 contains output options:
+
+Apart from the block header there is only one record. It normally contains 4 integer keys¹ in its first 4 positions. Each key is either 0 or 1. The value 1 causes some output action to be taken, whereas 0 omits this action. The actions of the 4 keys are:
+
+1. Write input echo to result file
+
+2. Write each iteration output to result file
+
+3. Write residual plot to result file
+
+4. Write correlation matrix to result file
+
+Regardless of these keys, POSITRONFIT always produces the *Main Output*, cf. Section 3.4.
+
+¹In fact, the record may contain an additional key which is not an output option but an indicator of the so-called log-normal fineness, see Appendix C, Section 7.4.
+---PAGE_BREAK---
+
+**Block 2 contains the spectrum:**
+
+The first record (after the block header) contains the integer NCH, which is the total number of channels in the spectrum.
+
+The next record contains a description of precisely how the spectrum values are “formatted” in the file, expressed as a so-called FORMAT in the programming language FORTRAN [25].
+
+After this, two text records follow. In the first a name of a spectrum file is given. (Even when INSPEC = 1 (see below) this name should be present, but is in that case not used by the program.) In the other record an identification label of the spectrum is given.
+
+The next record contains the integer INSPEC taking a value of either 0 or 1. INSPEC = 1 means that the spectrum is an intrinsic part of the present control file. In this case the next record should be a text line with an identification label for the spectrum. The subsequent records are supposed to hold the NCH spectrum values. On the other hand, INSPEC = 0 means that the spectrum is expected to reside in an external spectrum file with the file name entered above. The program opens this file (which may contain several spectra) and scans it for a record whose start matches the identification label. After a successful match, the matching (text) line and the spectrum itself are read from the subsequent records in exactly the same way as in the case INSPEC = 1.
+
+**Block 3 contains information related to the measuring system:**
+
+The first 2 records (after the block header) contain 2 channel numbers ICHA1 and ICHA2. These numbers are lower and upper bounds for the definition of a total area range.
+
+The next 2 records also contain 2 channel numbers, ICHMIN and ICHMAX. These define in the same way the channel range which is used in the least-squares analysis.
+
+The next record contains the channel width *C* measured in ns, cf. (11) in Section 2.2.
+
+The last 2 records in the block deal with $T_0$ (the time-zero channel number, which may be fractional). First comes a constraint flag, either G or F: G stands for guessed (i.e. free) $T_0$, F stands for fixed $T_0$. The other record contains the initial (guessed or fixed) value of $T_0$.
+
+Rules for proper channel specifications are given in Section 3.7.
+
+**Block 4 contains input for definition of the resolution function:**
+
+The first record (after the block header) contains the number $k_g$ of Gaussian components in the resolution function.
+
+Each of the next 3 records contains $k_g$ numbers. In the first record we have the full widths at half maximum of the Gaussians (in ns), $\text{FWHM}_j$, $j = 1, ..., k_g$, in the second their relative intensities (in percent) $\omega_j$, $j = 1, ..., k_g$, and in the third their peak displacements (in ns) $\Delta_j$, $j = 1, ..., k_g$.
+
+**Block 5 contains data for lifetime components and intensity constraints:**
+
+The first record (after the block header) contains the number $k_0$ of lifetime components assumed in the model.
+---PAGE_BREAK---
+
+Each of the next 4 records contains $k_0$ data. In the first we have the constraint flags (G = guessed, F = fixed) for the lifetimes (mean values in case of log-normal broadening). The second record contains the initial values (guessed or fixed) for each of the $k_0$ lifetimes. In the standard case of no log-normal broadening of the lifetime components, the 3rd record should contain the flag F (= fixed) $k_0$ times, while the 4th should contain $k_0$ zeroes. In the case of log-normal broadening, the 3rd record should contain the flags F or G for the broadenings (= standard deviations), while the 4th should contain the initial values (guessed or fixed) for each of the $k_0$ broadenings.
+
+The next record after this contains an integer $m$ denoting the number and type of intensity constraints. $|m|$ is equal to the number of constraints; $m$ itself may be positive, negative, or zero. If $m=0$ there is no further input data in the block. If $m > 0$, $m$ of the relative intensities are fixed. In this case the next data item is a pair of records with the numbers $j_l, l = 1, \dots, m$ and $I_{j_l}, l = 1, \dots, m$; here $j_l$ is the term number (the succession agreeing with the lifetimes on the previous record) associated with constraint number $l$, and $I_{j_l}$ is the corresponding fixed relative intensity (in percent). If $m < 0$, $|m|$ linear combinations of the intensities are equal to zero. In this case $|m|$ records follow, each containing the $k_0$ coefficients $h_{lj}, j = 1, \dots, k_0$ to the intensities for one of the linear combinations, cf. equation (18) in Section 2.2.
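As an example of the $m < 0$ case, fixing the ratio $I_2 = r\,I_3$ for $k_0 = 3$ corresponds to one record with the coefficients $(0, 1, -r)$. A one-line helper (ours, purely illustrative) building such a record:

```python
def ratio_constraint(k0, j1, j2, r):
    """Coefficients h_j of eq. (18) enforcing I_{j1} - r * I_{j2} = 0,
    i.e. fixing the intensity ratio I_{j1}/I_{j2} = r
    (1-based component indices)."""
    h = [0.0] * k0
    h[j1 - 1] = 1.0
    h[j2 - 1] = -r
    return h
```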
+
+**Block 6 contains data related to the background:**
+
+The first record (after the block header) contains an integer indicator KB, assuming one of the values 0, 1, or 2. KB = 0 means a free background (to be fitted); in this case no more data follows in this block. If KB = 1 the background is fixed to the spectrum average from channel ICHBG1 to channel ICHBG2. These two channel numbers follow on the next 2 records. If KB = 2, the background is fixed to an input value which is entered on the next record.
+
+**Block 7 contains input for constraining the total area:**
+
+The first record (after the block header) holds an integer indicator KAR, assuming one of the values 0, 1, or 2. KAR = 0 means no area constraint; in this case no more data follows in this block. If KAR > 0, the area between two specified channel limits ICHBEG and ICHEND will be fixed, and these channel numbers follow on the next two records. If KAR = 1, the area is fixed to that of the measured spectrum, and no more input is needed. If KAR = 2 the area is fixed to an input value which is entered on the next record.
+
+**Block 8 contains source correction data:**
+
+The first record (after the block header) contains an integer $k_s$ denoting the number of components in the source correction spectrum. $k_s = 0$ means no source correction, in which case the present block contains no more data.
+
+If $k_s > 0$, the next 3 records contain the lifetimes $\tau_j^s$, the lifetime broadenings $\sigma_j^s$ and the relative intensities (in percent) $I_j^s, j = 1, \dots, k_s$ for the source correction terms.
+
+On the next record is the number $\alpha$ which is the percentage of positrons that annihilate in the source, cf. equation (37) in Section 2.4.
+
+Then follows a record with an integer ISEC. When ISEC = 0, the new iteration cycle after the source correction starts from parameter guesses equal to the converged values from the first (correction-free) cycle. ISEC = 1 indicates that the second cycle starts from new input data; these 2nd-cycle input data are entered in exactly the same way as the 1st-cycle data in Block 5. ISEC = 2 works as ISEC = 1, but with the additional possibility of changing the status of T₀; in this case two more records follow, the first containing the constraint flag (G = guessed, F = fixed) for T₀ and the second the value of T₀.

+
+With the end of Block 8 the entire POSITRONFIT dataset is completed. However, as previously mentioned, PALSfit3 accepts multiple datasets in the same POSITRONFIT control file.
+
+## 3.3 RESOLUTIONFIT control file
+
+A sample PALSfit3 control file for RESOLUTIONFIT with a single dataset is shown below:
+
+```
+RESOLUTIONFIT DATA BLOCK 1: OUTPUT OPTIONS
+0000
+RESOLUTIONFIT DATA BLOCK 2: SPECTRUM
+1023
+(10i7)
+.\Test_spectra-Metal-rep.DAT
+51108 Cu-ann
+0
+RESOLUTIONFIT DATA BLOCK 3: CHANNEL RANGES. TIME SCALE. TIME-ZERO.
+3
+1023
+175
+1000
+0.0158
+194.0000
+RESOLUTIONFIT DATA BLOCK 4: RESOLUTION FUNCTION
+3
+GGG
+0.2500 0.2000 0.3500
+70.0000 20.0000 10.0000
+FGG
+0.0000 -0.0400 -0.0250
+RESOLUTIONFIT DATA BLOCK 5: LIFETIMES AND INTENSITY CONSTRAINTS
+3
+FFG
+0.1100 0.1800 0.4000
+0
+RESOLUTIONFIT DATA BLOCK 6: BACKGROUND CONSTRAINTS
+0
+```
+
+**Block 1 contains output options:**
+
+It is identical to the corresponding block in the POSITRONFIT control file (but of course the name RESOLUTIONFIT must appear in the block header).
+
+**Block 2 contains the spectrum:**
+
+It is identical to the corresponding block in the POSITRONFIT control file.
+
+**Block 3 contains information related to the measuring system:**
+---PAGE_BREAK---
+
+It is identical to the corresponding block in the POSITRONFIT control file, except for the status of $T_0$. In RESOLUTIONFIT the last record always contains a guessed value of $T_0$, so there is no preceding G or F flag.
+
+Rules for proper channel specifications are given in Section 3.7.
+
+**Block 4 contains input for definition and initialization of the resolution function:**
+
+The first record (after the block header) contains the number $k_g$ of Gaussian components in the resolution function. Each of the next two records contains $k_g$ values. The first holds the constraint flags (G=guessed, F=fixed) for the Gaussian widths. The second contains the initial values (guessed or fixed) of the full widths at half maximum of the Gaussians (in ns), $FWHM_j^{ini}$, $j = 1,..., k_g$. The next record contains the $k_g$ Gaussian component intensities in percent, $\omega_j$, $j = 1,..., k_g$. The last two records in the block again contain $k_g$ values each: first the constraint flags (G=guessed, F=fixed) for the Gaussian shifts (notice that not all the shifts can be free), and then the initial (guessed or fixed) peak displacements (in ns), $\Delta_j^{ini}$, $j = 1,..., k_g$.
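As a sketch of the model this block defines (not PALSfit3's internal code), the resolution function is a weighted sum of normalised Gaussians with the given widths, intensities and peak displacements:

```python
import math

def resolution(t, fwhm, omega, delta):
    """Sum-of-Gaussians resolution function built from the Block 4 input:
    fwhm[j] are full widths at half maximum (ns), omega[j] the component
    intensities in percent, delta[j] the peak displacements (ns)."""
    value = 0.0
    for f, w, d in zip(fwhm, omega, delta):
        sigma = f / (2.0 * math.sqrt(2.0 * math.log(2.0)))  # FWHM -> std dev
        value += (w / 100.0) * math.exp(-0.5 * ((t - d) / sigma) ** 2) \
                 / (sigma * math.sqrt(2.0 * math.pi))
    return value
```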
+
+**Block 5 contains data for the lifetime components in the lifetime spectrum as well as constraints on their relative intensities:**
+
+It is similar to the corresponding block in the POSITRONFIT control file, but without the two records about the log-normal input.
+
+**Block 6 contains data related to the background:**
+
+It is identical to the corresponding block in the POSITRONFIT control file.
+
+This completes the RESOLUTIONFIT dataset. Multiple datasets can be handled in the same way as for POSITRONFIT.
+
+## 3.4 PALSfit output
+
+After a successful POSITRONFIT or RESOLUTIONFIT analysis, PALSfit presents the results in several output files as well as in graphical displays. The most important of these files is the Analysis report (result file), the content of which is displayed automatically after the analysis comes to an end. It can also be viewed by choosing the “Analysis Report” tab. It has the following contents:
+
+a) An edited result section, which is the *Main Output* for the analysis. It contains the final estimates of the fitting parameters and their standard deviations. In addition, all the guessed input parameters as well as information on constraints are quoted. Furthermore, three statistical numbers, “chi-square”, “reduced chi-square”, and “significance of imperfect model” are shown. They inform about the agreement between the measured spectrum and the model function (Appendix A, Section 5.3). A few key numbers are displayed for quick reference, giving the number of components and the various types of constraints; they are identified by letters or abbreviations.
+---PAGE_BREAK---
+
+b) An input echo (optional). This is a raw copy of all the input data contained in the dataset.
+
+c) Fitting parameters after each iteration (optional). The parameters shown are internal; after convergence they may need a transformation prior to presentation in the Main Output.
+
+d) An estimated correlation matrix for the parameters (optional). This matrix and its interpretation is discussed in Appendix A, Section 5.3.
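The three statistical numbers quoted in a) follow from textbook definitions. The sketch below reproduces them using a classical normal approximation for the chi-square distribution; it is an illustration, not PALSfit's internal code:

```python
import math

def fit_statistics(chisq, dof):
    """The three statistical numbers of the Main Output, from their
    standard definitions (see Appendix A, Section 5.3 for details)."""
    reduced = chisq / dof                 # expectation value 1 for a perfect fit
    reduced_sd = math.sqrt(2.0 / dof)     # std deviation of reduced chi-square
    # "Significance of imperfect model": probability that chi-square would be
    # smaller than the observed value if the model were correct, here via the
    # approximation sqrt(2*chisq) - sqrt(2*dof - 1) ~ N(0, 1).
    z = math.sqrt(2.0 * chisq) - math.sqrt(2.0 * dof - 1.0)
    significance = 50.0 * (1.0 + math.erf(z / math.sqrt(2.0)))   # percent
    return reduced, reduced_sd, significance
```

For the example outputs in Sections 3.5 and 3.6 this reproduces, to the quoted precision, 1.034 ± 0.034 with significance 83.81 % and 1.034 ± 0.050 with significance 75.56 %, respectively.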
+
+As indicated above, the outputs b)–d) are optional, while the Main Output is always produced. In addition to the Main Output, two 'graphics files' (*.pfg and *.pft from POSITRONFIT, *.rfg and *.rft from RESOLUTIONFIT) are produced (and may optionally be saved). They contain the data necessary for generating plots of measured and fitted spectra. These plots are displayed by choosing the tab "Graphics" or the tab "Multispectrum plot".
+
+## 3.5 POSITRONFIT main output
+
+In the following we give an example of the Main Output part of a POSITRONFIT analysis report produced by PALSfit3, with a brief explanation of its contents (for details about the input possibilities consult Section 3.2):
+
+```
+PALSfit - Version 3.115 28-apr-2017 - Licensed to Morten Eldrup
+Input file: C:\PALSfit3\PALSfit3_test-data2017\PVAc-test-rep.pfc
+POSITRONFIT. Version 3.115 Job time 10:29:49.49 03-MAY-17
+**********************************************************************
+37200 PVAC, T=414K
+in file: .\Test_Spectr-Polymer-Rep.DAT
+**********************************************************************
+Dataset 1 LT LN LX SX IX BG TZ AR GA
+3 0 0 0 0 1 0 0 3
+Time scale ns/channel : 0.026800
+Area range starts in ch 5 and ends in ch 2000
+Fit range starts in ch 275 and ends in ch 2000
+Resolution FWHM (ns) : 0.2984 0.2546 0.2396
+Function Intensities (%) : 12.0000 13.0000 75.0000
+Shifts (ns) : -0.1038 0.0802 0.0000
+---------------- Initial Parameters ------------------
+Time-zero (ch.no): 285.0000G
+Lifetimes (ns) : 0.2000G 0.5000G 3.2000G
+Background fixed to mean from ch 1400 to ch 2000 = 551.1963
+------ Results before source correction ------
+Convergence obtained after 5 iterations
+Chi-square = 1768.66 with 1719 degrees of freedom
+Lifetimes (ns) : 0.2191 0.4635 3.2092
+Intensities (%) : 22.4624 50.7573 26.7803
+Time-zero Channel time : 285.0458
+Total area from fit : 3.93911E+06 from table : 3.94045E+06
+------------------ Source Correction ------------------
+Lifetimes (ns) : 0.3803 2.0000
+Intensities (%) : 86.9972 13.0028
+Total (%) : 9.1957
+---------- Initial 2nd cycle Parameters ----------
+
+Lifetimes (ns) : 0.1500G 0.4000G 3.2000G
+Sigma (ns) : 0.2000G
+Lin comb coeff. : -3.0000 0.0000 1.0000
+
+########## Final Results ##########
+Dataset 1 LT LN LX SX IX BG TZ AR GA
+3 1 0 0 -1 1 0 0 3
+Convergence obtained after 8 additional iterations
+```
+---PAGE_BREAK---
+
+```
+Chi-square = 1776.83 with 1719 degrees of freedom
+Reduced chi-square = chi-square/dof = 1.034 with std deviation 0.034
+Significance of imperfect model = 83.81 %
+
+Lifetimes (ns) : 0.1667 0.4265 3.2757
+Std deviations : 0.0055 0.0015 0.0142
+
+Sigma (ns) : 0.1102
+Std deviations : 0.0063
+
+Intensities(%) LC: 9.3713 62.5150 28.1138
+Std deviations : 0.0420 0.1679 0.1259
+
+Mean lifetime : 1.2032
+Std deviation : 0.0026
+
+Background counts/channel : 551.1963
+Std deviations : mean
+
+Time-zero channel time : 285.0628
+Std deviations : 0.0149
+
+Total area from fit : 3.67870E+06 from table : 3.67939E+06
+
+# P o s i t r o n F i t
+```
+
+This output was obtained by running PALSfit3 with the dataset in Section 3.2. It does not represent a typical analysis of a spectrum, but rather illustrates a number of program features.
+
+After a heading which contains the spectrum headline the key numbers are displayed in the upper right hand corner. LT indicates the number of lifetime components (k₀), LN of these are log-normally distributed, LX is the number of fixed lifetimes, SX the number of fixed lifetime broadenings, IX the number and type of intensity constraints (a positive number for fixed intensities, a negative number for linear combinations of intensities, i.e. the number m, Section 3.2, Block 5), BG the type of background constraint (KB, Section 3.2, Block 6), TZ whether time-zero is free or fixed (0 = free, 1 = fixed), AR the type of area constraint (KAR, Section 3.2, Block 7), and GA the number of Gaussians used to describe the time resolution function (k_g). The rest of the upper part of the output reproduces various input parameters, such as those for the resolution function (the shape of which is fixed), and the initial values (G for guessed and F for fixed) of the fitting parameters for the first iteration cycle.
+
+The next part ("Results before source correction") contains the outcome of the first iteration cycle. If convergence could not be obtained, a message will be given and the iteration procedure discontinued, but still the obtained results are presented. Then follows information about the goodness of the fit (Appendix A, Section 5.3).
+
+The next part ("Source correction" and "Initial 2nd Cycle Parameters") shows the parameters of the chosen source correction, which accounts for those positrons that annihilate outside the sample. It is followed by optional initial values of the fitting parameters for the second iteration cycle.
+
+The "Final Results" part contains the number of iterations in the final cycle, followed by three lines with information about the goodness of the fit (Appendix A, Section 5.3). Then follows a survey of the final estimates of the fitted (and fixed) parameters and their standard deviations. The "LC" in the intensity line indicates that we have intensity constraints of the linear-combination type (cf. the negative IX in the upper right hand corner). The "total area from fit" is calculated as $\sum_j A_j \tau_j$ plus the background inside the "area range" specified in the beginning of the Main Output. The "total area from table" is the total number of counts in the (source corrected) measured spectrum inside the "area range".
+---PAGE_BREAK---
+
+## 3.6 RESOLUTIONFIT main output
+
+Below you find an example of the Main Output from a RESOLUTIONFIT analysis by PALSfit, with a brief explanation of its contents (for more details about the input possibilities consult Section 3.3):
+
+```
+PALSfit - Version 3.104 8-feb-2017 - Licensed to Morten Eldrup
+Input file: C:\PALSfit3\PALSfit3_test-data2017\Cu-ann.rfc
+Dataset 1
+
+RESOLUTIONFIT Version 3.104 Job time 13:07:16.81 08-MAR-17
+
+51108 Cu-ann
+
+in file: .\Test_spectra-Metal-rep.DAT
+
+Dataset 1
+
+Time scale ns/channel : 0.015800
+Area range starts in ch 3 and ends in ch 1023
+Fit range starts in ch 175 and ends in ch 1000
+
+Initial FWHM (ns) : 0.2500G 0.2000G 0.3500G
+Resolution Intensities (%) : 70.0000 20.0000 10.0000
+Function Shifts (ns) : 0.0000F -0.0400G -0.0250G
+
+Other init. Time-zero(chtime): 194.0000
+Parameters Lifetimes (ns) : 0.1100F 0.1800F 0.4000G
+
+ Final results
+
+Convergence obtained after 6 iterations
+Chi-square = 842.59 with 815 degrees of freedom
+Reduced chi-square = Chi-square/dof = 1.034 with std deviation 0.050
+Significance of imperfect model = 75.56 %
+
+Resolution function:
+
+GA WX SX
+3 0 1
+
+FWHM (ns) : 0.2504 0.1921 0.2969
+Std deviations : 0.0026 0.0048 0.0385
+
+Intensities (%) : 70.0000 20.0000 10.0000
+
+Shifts (ns) : 0.0000 -0.0438 -0.0607
+Std deviations : fixed 0.0093 0.0412
+
+Lifetime components:
+
+LT LX IX
+3 2 0
+
+Lifetimes (ns) : 0.1100 0.1800 0.4594
+Std deviations : fixed fixed 0.0160
+
+Intensities (%) : 82.2079 14.8386 2.9535
+Std deviations : 0.7790 0.9797 0.2206
+
+Background:
+
+B
+0
+
+Counts/channel : 150.6070
+Std deviation : 0.4937
+
+Time-zero Channel time : 193.5894
+Std deviations : 0.3573
+
+Total area From fit : 2.81744E+06 From table : 2.82156E+06
+
+Shape parameters for resolution curve (ns):
+
+N 2 5 10 30 100 300 1000
+FW at 1/N 0.2435 0.3756 0.4532 0.5584 0.6598 0.7444 0.8301
+MIDP at 1/N 0.0031 0.0057 0.0063 0.0054 0.0022 -0.021 -0.073
+
+Peak position of resolution curve: Channel # 192.3999
+
+Resolutionfit
+```
+---PAGE_BREAK---
+
+This output was obtained by running `PALSfit3` with the dataset listed in Section 3.3.
+
+After a heading which includes the spectrum headline, the upper part of the output reproduces various input parameters in a way that is very similar to the POSITRONFIT output. The important difference is that in RESOLUTIONFIT all the FWHMs, and all the shifts except one, may be fitting parameters. If the background is fixed to a mean value between certain channel limits, these limits as well as the resulting background value are displayed.
+
+In the "Final Results" part, the number of iterations used to obtain convergence is given first. The next three lines contain information about how good the fit is, similar to the main output from POSITRONFIT (for definition of the terms see Appendix A, Section 5.3).
+
+Next follows the estimated values of the fitted (and fixed) parameters and their standard deviations (for fixed parameters *fixed* is written instead of the standard deviation). This part is divided into three, one giving the parameters for the resolution curve, one with the lifetimes and their intensities, and one showing the background. Each part has one or three key numbers displayed in the upper right hand corner. For the resolution function the *GA* indicates the number of Gaussians ($k_g$), *WX* the number of fixed widths, and *SX* the number of fixed shifts. For the lifetime components the *LT* indicates the number of these ($k_0$), *LX* the number of fixed lifetimes, and *IX* the number and type of intensity constraints. As in POSITRONFIT, a positive value of *IX* means fixed intensities, while a negative value indicates constraints on linear combinations of intensities, the absolute value giving the number of constraints. Finally the background output follows, where *B* indicates the type of background constraint (KB, Section 3.3, Block 6), and after the estimated time-zero the "total area from fit" and "total area from table" are given, both calculated as in POSITRONFIT.
+
+For easy comparison of the extracted resolution curve with other such curves, a table of the full width of this curve at different fractions of its peak value is displayed, as well as of the midpoints of the curve compared to the peak position. The latter number clearly shows possible asymmetries in the resolution curve. Also the channel number of the position of the peak (maximum value) of the resolution curve is given.
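For a sampled resolution curve, the "FW at 1/N" entries can be computed roughly as follows (an illustrative helper, not PALSfit3's internal routine):

```python
def full_width_at(x, y, frac):
    """Full width of a sampled, single-peaked curve at the level frac times
    its peak value (e.g. frac = 1/2 gives the FWHM), using linear
    interpolation between the samples on either side of the maximum."""
    peak = max(y)
    level = frac * peak
    i_max = y.index(peak)

    def crossing(indices):
        prev = i_max
        for i in indices:
            if y[i] < level:
                t = (level - y[i]) / (y[prev] - y[i])  # linear interpolation
                return x[i] + t * (x[prev] - x[i])
            prev = i
        return x[prev]                                 # level never crossed

    left = crossing(range(i_max - 1, -1, -1))
    right = crossing(range(i_max + 1, len(y)))
    return right - left
```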
+
+## 3.7 Channel ranges
+
+As we saw in the description of the control files for POSITRONFIT (P) and RESOLUTIONFIT (R) in Sections 3.2 and 3.3, the data blocks contain a number of integers defining various kinds of *channel ranges*. Each range is an interval [*M*, *N*] and is thus equal to the set of integers *i* satisfying *M* ≤ *i* ≤ *N*. Using the same acronyms for the channel limits as before, we can collect all the channel ranges in the following table:
+
+| Name | Definition | Program | Symbol | Lines in PALSfit3 graphics |
+|---|---|---|---|---|
+| Total range | [1,NCH] | P R | T | None |
+| Area range | [ICHA1,ICHA2] | P R | A | Red |
+| Fit range | [ICHMIN,ICHMAX] | P R | F | Green |
+| Background range | [ICHBG1,ICHBG2] | P R | B | Blue |
+| Fixed-area range | [ICHBEG,ICHEND] | P | Af | Brown |
+
+The first three ranges are always present, while the existence of the two latter depends on optional constraints. There are certain restrictions on the ranges *T*, *F*, *A*, *B*, *Af* (when present): all must be nonempty, *A* must be a subset of *T*, and each of the remaining ranges must be a subset of *A*. Thus in formal terms:
+
+$$ \emptyset \subset A \subseteq T \qquad (38) $$
+
+$$ \emptyset \subset F \subseteq A \qquad (39) $$
+
+$$ \emptyset \subset B \subseteq A \qquad (40) $$
+
+$$ \emptyset \subset A_f \subseteq A \qquad (41) $$
+---PAGE_BREAK---
+
+These restrictions are exhaustive. If a restriction is violated, PALSfit3 makes a suitable modification of the range in order to remedy the problem and tries to continue. (PATFIT, on the other hand, refuses to perform an analysis in such a case.)
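A consistency check of restrictions (38)–(41) can be sketched as follows (a hypothetical helper; each range is given as a pair of channel limits (m, n)):

```python
def violated_ranges(T, A, F=None, B=None, Af=None):
    """Return the names of the ranges violating restrictions (38)-(41).
    Each range is a pair (m, n) meaning the channel interval [m, n]."""
    def ok(inner, outer):
        # nonempty and contained in the outer range
        return inner[0] <= inner[1] and outer[0] <= inner[0] and inner[1] <= outer[1]

    bad = []
    if not ok(A, T):
        bad.append("A")
    for name, rng in (("F", F), ("B", B), ("Af", Af)):
        if rng is not None and not ok(rng, A):
            bad.append(name)
    return bad
```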
+
+# 4 Experience with PALSfit
+
+In this section we shall give a short account of some of the experiences we (and others) have had with PALSfit3 and its predecessor versions, in particular the program components POSITRON-FIT and RESOLUTIONFIT. In general, these fitting programs have proved to be very reliable and easy to use. Further discussion can be found in [8, 24], from which we shall quote frequently in the remaining part of the present Chapter.
+
+The aim of fitting measured spectra will normally be to extract as much information as possible from the spectra. This often entails that one tries to resolve as many lifetime components as possible. However, this has to be done with great care. Because of the correlations between the fitting parameters, and between the fitting parameters and other input parameters, the final estimates of the parameters may be very sensitive to small uncertainties in the input parameters. Therefore, in general, extreme caution should be exercised in the interpretation of the fitted parameters. This is further discussed in e.g. [26–34]. In this connection, an advantage of the software is the possibility of various types of constraints which makes it possible to select meaningful numbers and types of fitting parameters.
+
+## 4.1 POSITRONFIT experience
+
+The experience gained with POSITRONFIT over a number of years shows that in metallic systems with lifetimes in the range 0.1 – 0.5 ns it is possible to obtain information about at most three lifetime components in unconstrained analyses [35–37], while in some insulators where positronium is formed, up to four components may be extracted in unconstrained analyses, see e.g. [38–41]. (This does not mean, of course, that the spectra cannot be composed of more components than these numbers. This problem is briefly discussed in, e.g., [26, 33]. Various other aspects of the analysis of positron lifetime spectra are discussed in, for example, [42–45].) In this connection it is very useful to be able to change the number of components from the first to the second iteration cycle. In this way, the spectrum can be fitted with two different numbers of components within the same analysis (it is also advantageous to use this feature when a source correction removes, e.g., a long-lived lifetime component from the raw spectrum).
+
+In our experience POSITRONFIT always produces the same estimates of the fitted parameters after convergence, irrespective of the initial guesses (except in some extreme cases). However, others have informed us that for spectra containing very many counts (of the order of $10^7$) one may obtain different results, depending upon the initial guesses of the fitting parameters, i.e. local minima exist in the $\chi^2$ as function of the fitting parameters; these minima are often quite shallow. When this happens, POSITRONFIT as well as most other least-squares fitting codes are in trouble, because they just find some local minimum. From a single fitting you cannot know whether the absolute minimum in the parameter space has been found. The problem of “global minimization” is much harder to solve, but even if we could locate the deepest minimum we would have no guarantee that this would give the “best” parameter values from a physical point of view. In such cases it may be necessary to make several analyses of each spectrum with different initial parameter guesses or measure more than one spectrum under the same conditions, until enough experience has been gained about the analysis behaviour for a certain type of spectra.
+---PAGE_BREAK---
+
+The important, particular feature of PALSfit3 is that each lifetime component may be broadened by a log-normal distribution, whose width is characterized by its standard deviation $\sigma$. These widths may be fixed or act as fitting parameters. During the fitting procedure it may happen that a $\sigma$ becomes increasingly smaller (i.e. approaching zero) thus indicating that the lifetime component is essentially a simple exponential decay. In such cases POSITRONFIT neglects the broadening and continues the fitting procedure with a simple decaying exponential instead (Appendix C, Section 7.5). If such an event has taken place during the fitting, it will be indicated on the top of the Main output.
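One plausible reading of such a broadening is sketched below, with the lifetimes log-normally distributed around a median tau with logarithmic width s. The exact parameterisation used by PALSfit3 is given in Appendix C, so this is an illustration of the idea only:

```python
import math

def broadened_decay(t, tau, s, n=4001):
    """Decay term averaged over a log-normal distribution of lifetimes with
    median tau and logarithmic width s, by brute-force numerical averaging.
    For s -> 0 the simple exponential exp(-t/tau) is recovered, mirroring
    POSITRONFIT's fallback when a fitted sigma approaches zero."""
    if s < 1e-9:
        return math.exp(-t / tau)
    total = norm = 0.0
    for k in range(n):                       # u ~ N(0, 1) on a grid over [-6, 6]
        u = -6.0 + 12.0 * k / (n - 1)
        w = math.exp(-0.5 * u * u)
        total += w * math.exp(-t / (tau * math.exp(s * u)))
        norm += w
    return total / norm
```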
+
+## 4.2 RESOLUTIONFIT experience
+
+When using a software component as RESOLUTIONFIT an important question of course is whether it is possible in practice to separate the resolution function reliably from the lifetime components. Our experience and that of others [6, 7, 9, 33, 41] suggest that this separation is possible, although in general great care is necessary to obtain well-defined results [8, 33]. The reason for this is the same as mentioned above, viz. that more than one minimum for $\chi^2$ may exist.
+
+From a practical point of view the question arises as to whether there is too strong a correlation between some of the parameters defining the resolution function and the lifetime parameters, in particular when three (or more) Gaussians are used to describe the resolution function. As in the example used in this report (Sections 3.3 and 3.6), we have often measured annealed copper in order to deduce the resolution function. Even with different settings of the lifetime spectrometer, the copper lifetime normally comes out of a RESOLUTIONFIT analysis within a few ps (statistical scatter) of 110 ps (in agreement with results of others, e.g. [46]). Thus, the lifetime is well defined and separable from the resolution function, even though many parameters are free to vary in the fitting procedure. However, because of the many parameters used to describe the resolution function, one frequently experiences that two (or more) different sets of resolution-function parameters may be obtained from the same spectrum in different analyses, if different initial guesses are applied. The lifetimes and intensities come out essentially the same in the different analyses, the fits are almost equally good, and a comparison of the widths at the various heights of the resolution curves obtained in the analyses shows that they are essentially identical. Thus, in spite of the many fitting parameters (i.e. so many that the same resolution curve may be described by more than one set of parameters), it still seems possible to separate the lifetimes and resolution function reliably, at least when the lifetime spectrum contains a short-lived component of about 80–90 % intensity or more.
+
+On the other hand, one cannot be sure that the lifetimes can always be separated easily from the resolution function. If, for example, the initial guesses for the fitting parameters are far from the correct parameters, the result of the fitting may be that, for instance, the fitted resolution function is strongly asymmetrical thereby describing in part the slope of the spectrum which arises from the shorter lifetime component. This latter component will then come out with a shorter lifetime than the correct one. Such cases — where the resolution function parameters will be strongly correlated to the main lifetime — will be more likely the shorter the lifetime is and the broader the resolution function is.
+
+In principle, it is impossible from the analysis alone to decide whether lifetimes and resolution function are properly separated. However, in practice it will normally be feasible. If the main lifetime and the resolution curve parameters are strongly correlated, it is an indication that they are not properly separated. This correlation may be seen by looking for the changes in the lifetimes or resolution function when a small change is made in one of the resolution function parameters (intensity or one of the fitting parameters using a constraint). Other indications that the lifetimes and resolution function are not properly separated will be that the resulting lifetime
+---PAGE_BREAK---
+
+deviates appreciably from established values for the particular material or that the half width of the resolution function deviates clearly from the width measured directly with, e.g., a Co-60 source. If the lifetime and the resolution function cannot be separated without large uncertainties on both, one may have to constrain the lifetime to an average or otherwise determined value. Thus, it will always be possible to extract a resolution function from a suitably chosen lifetime spectrum.
+
+A separate question is whether a sum of Gaussians can give a proper representation of the “true” lifetime spectrometer resolution curve, or if some other functional form, e.g., a Gaussian convoluted with two exponentials [6, 9], is better. Of course, it will depend on the detailed shape of the spectrometer resolution curve, but practical experience seems to show that the two descriptions give only small differences in the extracted shape of the curve [9, 33], and the better the resolution is, the less does a small difference influence the extracted lifetime parameters [33]. The sum-of-Gaussians used in PALSfit was chosen because such a sum in principle can represent any shape.
+
+Once a resolution function has been determined from one lifetime measurement, another problem arises: Can this function be used directly for another set of measurements? This problem is not directly related to the software, but we shall discuss it briefly here. The accuracy of the determined resolution function will of course depend on the validity of the basic assumption about the measured lifetime spectrum from which it is extracted. This assumption is that the spectrum consists of a known number of lifetime components (e.g. essentially only one as discussed above) in the form of decaying exponentials convoluted with the resolution function. However, this “ideal” spectrum may be distorted in various ways in a real measurement. For example, instead of one lifetime, the sample may give rise to two almost equal lifetimes which cannot be separated. This will, of course, influence the resulting resolution function. So will source or surface components which cannot be clearly separated from the main component. Another disturbance of the spectrum may be caused by gamma-quanta which are scattered from one detector to the other in the lifetime spectrometer. Such scattered photons may give rise to quite large distortions of a lifetime spectrum. How large they are will depend on energy window settings and source-sample-detector arrangement of the lifetime spectrometer [33, 47, 48]. (Apart from the distortions, these spectrometer characteristics will, of course, also influence the width and shape of the correct resolution function.) In digital lifetime spectrometers it seems possible to discriminate more efficiently against some of these undesired distortions of measured spectra [49–51].
+
+Finally, by means of an example let us briefly outline the way we try to obtain the most accurate resolution function for a set of measurements. Let us say that we do a series of measurements under similar conditions (e.g. an annealing sequence for a defect-containing metal sample). In between we measure an annealed reference sample of the same metal, with — as far as possible — the same source and in the same physical arrangement, and thereby determine the resolution curve. This is done for example on January 2, 7, 12, etc. to keep track of possible small changes due to electronic drift. We then make reasonable interpolations between these resolution curves and use the interpolated values in the analysis of the lifetime spectra for the defect containing samples. Sometimes it is not feasible to always measure the annealed sample in exactly the same physical arrangement as the defect containing sample (for example if the annealing sequence takes place in a cryostat). Then we determine resolution curves from measurements on the annealed sample inside and outside the cryostat (the results may be slightly different) before and after the annealing sequence. The possible time variation (due to electronic drift) of the resolution function is then determined from measurements on the annealed sample outside the cryostat. The same variation is finally applied to the resolution curve valid for measurements inside the cryostat.
+
+As we often use many parameters to describe a resolution function these parameters may appear with rather large scatter. To obtain well-defined variations with time it is often useful in a second analysis of the annealed metal spectra to constrain one or two of the parameters to some average values. With this procedure we believe that we come as close as possible to a reliable resolution
+---PAGE_BREAK---
+
+function. We are reluctant to determine the resolution function directly from the spectra for the defected metal sample, as we feel that the lack of knowledge of the exact number of lifetime components makes the determination too uncertain.
+
+Let us finally point to one more useful result of an ordinary RESOLUTIONFIT analysis apart from the extraction of the resolution curve, viz. the determination of the “source correction”. If the sample gives rise to only one lifetime component, any remaining components must be due to positrons annihilating outside the sample and are therefore normally considered as a source correction. In the RESOLUTIONFIT Main Output (Section 3.6) the 0.110 ns is the annealed-Cu lifetime, while the 0.18 ns, 14.8386 % component is the estimated lifetime and intensity for the positrons annihilating in the 0.5 mg/cm² nickel foil surrounding the source material. The 0.4594 ns, 2.9535 % component, which is determined by the analysis, is believed to arise from positrons annihilating in the NaCl source material and on surfaces. This component may differ between sources and samples (due to different backscattering). We consider the latter two components as corrections to the measured spectra in any subsequent POSITRONFIT analyses (when the same source and similar sample material have been used for all measurements).
+---PAGE_BREAK---
+
+# 5 Appendix A: Fit and statistics
+
+The first three sections of Appendix A contain general information about nonlinear least-squares (NLLS) methods and their statistical interpretations with relevance for PALSfit, but without going into details with the specific models involved; these are discussed in Chapter 2 and also in Appendices B and C.
+
+The remaining sections are of a more technical nature. Section 5.4 presents essential principles of NLLS solution methods. Section 5.5 documents the separable least-squares technique which is of utmost importance for the efficiency and robustness of PALSfit, and Section 5.6 contains various mathematical and numerical details.
+
+## 5.1 Unconstrained NLLS data fitting
+
+We shall first present an overview of the unconstrained nonlinear least-squares (NLLS) method for data fitting.
+
+In the classical setup it is assumed that some general model is given,
+
+$$y = f(x; b_1, b_2, \dots, b_k) = f(x; \mathbf{b}) \quad (42)$$
+
+where $x$ and $y$ are the independent and dependent variable, respectively, and $\mathbf{b} = (b_j)$ is a parameter vector with $k$ components which may enter linearly or nonlinearly in (42), and so we may talk about linear and nonlinear parameters $b_j$. Further, a set of $n$ data points $(x_i, y_i)$ ($i = 1, \dots, n$) is given, $x_i$ being the independent and $y_i$ the dependent variable; we shall here introduce the data vector $\mathbf{y} = (y_i)$, also called the *spectrum*. Such a spectrum is usually the result of an experiment. We assume $n \ge k$. According to the least squares principle we should determine $\mathbf{b} \in \mathbb{R}^k$ such that
+
+$$\phi(\mathbf{b}) = \sum_{i=1}^{n} w_i (y_i - f(x_i; \mathbf{b}))^2 \quad (43)$$
+
+is minimized. The $w_i$ are the weights of the data; until further notice they are just arbitrary fixed positive numbers. (In many applications weights are omitted which corresponds to equal weighting, $w_i = 1$.)
+
+When setting up equation (43) it was assumed that the $x_i$ were fixed points corresponding to the independent variable $x$ in (42). In practice, however, we do not always have this situation. For example, if $x$ represents time, and the equipment records certain events in fixed time intervals ($t_{i-1}, t_i$) called *channels*, it would be natural to compare $y_i$ with an average of the model function in (42) over ($t_{i-1}, t_i$). Hence it is appropriate to replace (43) by
+
+$$\phi(\mathbf{b}) = \sum_{i=1}^{n} w_i (y_i - f_i(\mathbf{b}))^2 \quad (44)$$
+
+In general all we need is a “recipe” $f_i(\mathbf{b})$ to compute the model values to be compared with the data values $y_i$. The reformulation (44) is just a generalisation of the pointwise formulation (43), which has $f_i(\mathbf{b}) = f(x_i; \mathbf{b})$. This has no influence on the least squares analysis to be described presently. In the following we shall assume that the functions $f_i$ are sufficiently smooth in the argument $\mathbf{b}$.
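
As a concrete illustration of such a “recipe”, the channel average of a one-exponential decay model can be computed in closed form. The sketch below (channel edges and decay rate are invented for illustration) compares the exact channel average with the midpoint value of the model:

```python
import numpy as np

# Channel-averaged "recipe" f_i(b) of (44): instead of the point values
# f(x_i; b) of (43), integrate the model over each channel (t_{i-1}, t_i)
# and divide by the channel width (illustrative one-exponential model).
def f_point(t, lam):
    return np.exp(-lam * t)

edges = np.linspace(0.0, 5.0, 51)   # channel boundaries t_0, ..., t_n
lam = 2.0
dt = np.diff(edges)                  # channel widths

# exact average of exp(-lam*t) over a channel:
# (e^{-lam t_{i-1}} - e^{-lam t_i}) / (lam * dt)
f_avg = (f_point(edges[:-1], lam) - f_point(edges[1:], lam)) / (lam * dt)

# the average differs from the midpoint value, but only slightly
# when the channels are narrow compared with 1/lam
f_mid = f_point(0.5 * (edges[:-1] + edges[1:]), lam)
assert np.allclose(f_avg, f_mid, rtol=1e-2)
```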
+
+By introducing the matrix $\mathbf{W} = \text{diag}(w_i)$ and the $n$-vector $\mathbf{f}(\mathbf{b}) = (f_i(\mathbf{b}))$ we can express (44) in vector notation as follows:
+
+$$\phi(\mathbf{b}) = \|\mathbf{W}^{1/2}(\mathbf{y} - \mathbf{f}(\mathbf{b}))\|^2 \quad (45)$$
+
+Here $\|\cdot\|$ denotes the usual Euclidean norm. The corresponding minimization problem reads
+
+$$\phi_{\min} = \min_{\mathbf{b} \in \mathbb{R}^k} \{\|\mathbf{W}^{1/2}(\mathbf{y} - \mathbf{f}(\mathbf{b}))\|^2\} \quad (46)$$
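
In code, the objective (45)–(46) is simply a weighted sum of squared residuals. The sketch below evaluates it for a hypothetical two-exponential model; the model, data and numbers are invented for illustration and are not part of PALSfit:

```python
import numpy as np

# Weighted sum of squares (43)/(45) for an illustrative two-exponential
# model; b = (A1, lam1, A2, lam2) mixes linear amplitudes and nonlinear rates.
def model(x, b):
    return b[0] * np.exp(-b[1] * x) + b[2] * np.exp(-b[3] * x)

def phi(b, x, y, w):
    r = y - model(x, b)        # residual vector y - f(b)
    return np.sum(w * r**2)    # phi(b) = || W^{1/2} (y - f(b)) ||^2

x = np.linspace(0.0, 5.0, 100)
b_true = np.array([100.0, 2.0, 20.0, 0.5])
y = model(x, b_true)           # noiseless synthetic "spectrum"
w = np.ones_like(y)            # equal weighting, w_i = 1

# phi vanishes at the true parameters and is positive elsewhere
assert phi(b_true, x, y, w) == 0.0
assert phi(b_true * 1.1, x, y, w) > 0.0
```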
+
+A solution **b** to (46) satisfies the gradient equation
+
+$$
+\nabla \phi(\mathbf{b}) = \mathbf{0} \tag{47}
+$$
+
+which is equivalent to the *k* scalar equations
+
+$$
+\frac{\partial \phi(\mathbf{b})}{\partial b_j} = 0, \quad j = 1, \dots, k \tag{48}
+$$
+
+By (44) and (48) we obtain
+
+$$
+\sum_{i=1}^{n} w_i (y_i - f_i(\mathbf{b})) p_{ij} = 0, \quad j = 1, \dots, k \quad (49)
+$$
+
+where
+
+$$
+p_{ij} = \frac{\partial f_i(\mathbf{b})}{\partial b_j} \qquad (50)
+$$
+
+It is practical to collect the derivatives (50) in the $n \times k$ matrix
+
+$$
+\mathbf{P} = (p_{ij}) \tag{51}
+$$
+
+The equations (49) are called the normal equations for the problem. They are in general nonlinear and must be solved iteratively. Solution methods will be discussed in Sections 5.4 and 5.5. Only for linear or linearized models the normal equations are linear.
+
+It is instructive to consider the linear case in some detail. Here (42) takes the form
+
+$$
+y = \sum_{j=1}^{k} g_j(x) b_j \tag{52}
+$$
+
+The *x*-dependence in $g_j(x)$ is arbitrary and may very well be nonlinear; what matters is that the fitting parameters $b_j$ enter linearly in the model. The derivatives $p_{ij} = g_j(x_i)$ are independent of $\mathbf{b}$, and (43) or (44) can be written
+
+$$
+\phi(\mathbf{b}) = \sum_{i=1}^{n} w_i \left(y_i - \sum_{j=1}^{k} p_{ij} b_j\right)^2 \quad (53)
+$$
+
+The normal equations take the classical form
+
+$$
+\sum_{j'=1}^{k} \left( \sum_{i=1}^{n} w_i p_{ij} p_{ij'} \right) b_{j'} = \sum_{i=1}^{n} w_i y_i p_{ij}, \quad j = 1, \dots, k \quad (54)
+$$
+
+We have $\mathbf{f}(\mathbf{b}) = \mathbf{P}\mathbf{b}$, and the equations (45–46) can be written
+
+$$
+\phi(\mathbf{b}) = \| \mathbf{W}^{1/2} (\mathbf{y} - \mathbf{P}\mathbf{b}) \|^{2} \quad (55)
+$$
+
+$$
+\phi_{\min} = \min_{\mathbf{b} \in \mathbb{R}^k} \{\| \mathbf{W}^{1/2} (\mathbf{y} - \mathbf{P}\mathbf{b}) \|^{2}\} \quad (56)
+$$
+
+The problem (56) is solved by (54) which can be written
+
+$$
+\mathbf{P}^{\mathrm{T}} \mathbf{W} \mathbf{P} \mathbf{b} = \mathbf{P}^{\mathrm{T}} \mathbf{W} \mathbf{y}
+\quad (57)
+$$
+
+where T stands for transpose. For unweighted data we have $\mathbf{W} = \mathbf{I}_n$ ($\mathbf{I}_n$ is the unit matrix of order $n$), and so
+
+$$
+\mathbf{P}^{\mathrm{T}} \mathbf{P} \mathbf{b} = \mathbf{P}^{\mathrm{T}} \mathbf{y}
+\quad (58)
+$$
+
+Assuming that the coefficient matrix $\mathbf{P}^\mathrm{T}\mathbf{W}\mathbf{P}$ in (57) is nonsingular, it must be positive definite too. The same applies to $\mathbf{P}^\mathrm{T}\mathbf{P}$ in (58). The case described in (52–58) represents a general *linear regression model*. It is a fundamental building block in NLLS procedures and their statistical analysis.
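
For the linear case, the normal equations (57) can be solved directly. The sketch below (an illustrative quadratic model with arbitrary positive weights, all data invented) checks the solution against the known parameters:

```python
import numpy as np

# Weighted linear regression via the normal equations (57),
# P^T W P b = P^T W y, for the model y = b1 + b2*x + b3*x^2,
# i.e. g_1(x) = 1, g_2(x) = x, g_3(x) = x^2 in (52).
rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 50)
P = np.column_stack([np.ones_like(x), x, x**2])   # p_ij = g_j(x_i)
b_true = np.array([1.0, -2.0, 3.0])
y = P @ b_true                                    # noiseless data
w = rng.uniform(0.5, 2.0, size=x.size)            # arbitrary positive weights
W = np.diag(w)

b = np.linalg.solve(P.T @ W @ P, P.T @ W @ y)     # equation (57)
assert np.allclose(b, b_true)
```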
+
+Returning to the nonlinear case we shall ignore the complications from possible non-uniqueness when solving the normal equations (49). Here we just assume that a usable solution **b** can be found.
+
+## 5.2 Constraints
+
+It is important to be able to impose constraints on the free variation of the model parameters. In principle a constraint could be an equality, $h(\mathbf{b}) = 0$, as well as an inequality $h(\mathbf{b}) \ge 0$, where $h(\mathbf{b})$ is an arbitrary function of the parameter vector.
+
+Although inequality constraints could sometimes be useful, we forgo them in this work because they would lead to quadratic programming problems, and thereby complicate our models considerably. In our algorithm there is, however, a built-in sign check on some of the nonlinear parameters (e.g. annihilation rates). Should an iteration step make such a parameter negative, a new iterate is determined by halving the correction vector from the old one. As a rule, many such “sign excursions” indicate an inadequate model parameterization; in practice the sign excursions are often a simple and robust way of removing redundant parameters. On the other hand, no sign checks are made on the linear parameters.
+
+Incorporation of general equality constraints would be possible in the framework of our least-squares method, and such constraints are indeed used in Appendix C, Section 7.2. Otherwise, apart from trivial single-parameter constraints, $b_j = c$, linear constraints on the linear parameters are sufficient for our purpose and, as we shall see, involve straightforward generalizations of the unconstrained setup discussed previously.
+
+In Section 5.5 we shall describe the separable least-squares technique used in PALSfit. The effect of this method is to define subproblems in which the minimization takes place in the space of the linear parameters only. Hence the incorporation of constraints can just as well be discussed in terms of the linear model (52) where $\phi(\mathbf{b})$ is given by (53). In other words, in the constraints analysis we replace $k$ by the number $p$ of linear parameters in the model and consider an all-linear model where $\mathbf{b}$ is replaced by the “linear” parameter vector $\alpha \in \mathbb{R}^p$.
+
+Thus we assume that $m$ independent and consistent linear constraints on the $p$ components of $\alpha$ are given ($m \le p$):
+
+$$h_{l1}\alpha_1 + \dots + h_{lp}\alpha_p = \gamma_l, \quad l = 1, \dots, m \qquad (59)$$
+
+In vector form (59) reads
+
+$$\mathbf{H}\boldsymbol{\alpha} = \boldsymbol{\gamma} \qquad (60)$$
+
+where $\mathbf{H} = (h_{lj})$ is an $m \times p$ matrix and $\boldsymbol{\gamma} = (\gamma_l)$ is an $m$-vector. Both $\mathbf{H}$ and the augmented matrix $(\mathbf{H}, \boldsymbol{\gamma})$ are of rank $m$.
+
+A number of technical questions about how the constraints (59) or (60) influence the NLLS procedure will be discussed in Section 5.6.
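
One standard way to incorporate the constraints (60) is elimination: write $\boldsymbol{\alpha} = \boldsymbol{\alpha}_0 + \mathbf{N}\mathbf{z}$, where $\boldsymbol{\alpha}_0$ is any particular solution and the columns of $\mathbf{N}$ span the null space of $\mathbf{H}$, so that $\mathbf{z}$ varies freely. The sketch below illustrates this idea with a hypothetical “intensities sum to one” constraint; it is only a sketch of the general principle, not the specific scheme of Section 5.6:

```python
import numpy as np
from scipy.linalg import null_space

# Handle the linear constraints (60), H a = gamma, by elimination:
# a = a0 + N z with N a null-space basis of H; z is unconstrained.
p, m = 4, 1
H = np.array([[1.0, 1.0, 1.0, 1.0]])   # hypothetical: sum of intensities = 1
gamma = np.array([1.0])

a0, *_ = np.linalg.lstsq(H, gamma, rcond=None)   # particular solution
N = null_space(H)                                # p x (p - m) basis, H N = 0

# any choice of z reproduces the constraint exactly
z = np.array([0.3, -0.2, 0.7])
a = a0 + N @ z
assert np.allclose(H @ a, gamma)
```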
+
+## 5.3 Statistical analysis
+
+In this section we address the question of the statistical scatter in the parameters and $\phi_{\min}$ that can be expected in NLLS parameter estimation. In particular we are interested in the standard deviations of the parameters and in their correlation coefficients.
+
+### Covariance matrix of the parameters
+
+Suppose the spectrum ($y_i$) contains experimental values subject to statistical fluctuations, while the weights ($w_i$) are fixed. Ideally we should imagine an infinite ensemble of similar spectra $\mathbf{y} = (y_i)$ be given. Let us first consider the unconstrained case. Through solution of the normal equations (49) each spectrum $\mathbf{y}$ gives rise to a parameter estimate $\mathbf{b} = \mathbf{b}(\mathbf{y})$. Hence also $\mathbf{b}$ becomes a random (vector) variable with a certain joint distribution.
+
+We shall use the symbol $E[\cdot]$ for expected value (ensemble mean) and $\text{Var}[\cdot]$ for variance. We
+
+introduce the “ensemble-mean spectrum”
+
+$$
+\eta = (\eta_i) = \mathrm{E}[\boldsymbol{y}] \tag{61}
+$$
+
+and the corresponding hypothetic estimate
+
+$$
+b_0 = (b_{j0}) = b(\eta) \tag{62}
+$$
+
+Thus $\mathbf{b}_0$ is the solution of (49) corresponding to the particular spectrum $(\eta_i)$. Now, given an arbitrary spectrum $(y_i)$, let the corresponding parameter vector be $\mathbf{b} = (b_j)$. If we assume that $\mathbf{b} - \mathbf{b}_0 = \Delta\mathbf{b} = (\Delta b_j)$ is so small that our model is locally linear in $\mathbf{b}$ around $\mathbf{b}_0$, we have to a first-order approximation
+
+$$
+f_i(\mathbf{b}) = f_i(\mathbf{b}_0) + \sum_{j=1}^{k} p_{ij} \Delta b_j \quad (63)
+$$
+
+where $p_{ij}$ are the derivatives (50) evaluated at $\mathbf{b}_0$. We insert (63) into the normal equations (49) and obtain a linear equation system of order $k$ with $\Delta b_j$ as unknowns. In vector notation this system reads
+
+$$
+\mathbf{P}^{\mathrm{T}} \mathbf{W} \mathbf{P} \mathbf{\Delta} \mathbf{b} = \mathbf{P}^{\mathrm{T}} \mathbf{W} \mathbf{\Delta} \mathbf{y} \qquad (64)
+$$
+
+where $\Delta y = y - f(b_0)$ has the components
+
+$$
+\Delta y_i = y_i - f_i(b_0), \quad i = 1, \dots, n \tag{65}
+$$
+
+We note the similarity with the linear case (57). The system (64) has the solution
+
+$$
+\Delta b = K \Delta y \tag{66}
+$$
+
+where
+
+$$
+K = (\mathbf{P}^\mathrm{T}\mathbf{W}\mathbf{P})^{-1}\mathbf{P}^\mathrm{T}\mathbf{W} \quad (67)
+$$
+
+The covariance matrix of a vector variable $\mathbf{v}$ will here be denoted $\Sigma(\mathbf{v})$. (Other names are dispersion matrix and variance-covariance matrix; its diagonal contains the component variances.) It is well-known that if two vectors $\mathbf{v}$ and $\mathbf{w}$ are related by a linear transformation
+
+$$
+\boldsymbol{w} = \mathbf{A}\boldsymbol{v} \tag{68}
+$$
+
+then
+
+$$
+\Sigma(w) = A\Sigma(v)A^T \tag{69}
+$$
+
+Our primary goal is to estimate the covariance matrix
+
+$$
+\Sigma(\mathbf{b}) = (\sigma_{jj'}) = \begin{pmatrix} \sigma_{11} & \cdots & \sigma_{1k} \\ \vdots & \ddots & \vdots \\ \sigma_{k1} & \cdots & \sigma_{kk} \end{pmatrix} \tag{70}
+$$
+
+for the parameter vector $\mathbf{b}$ obtained from the NLLS fit. The standard deviations of the estimated parameters are extracted from its diagonal as $\sigma_j = \sqrt{\sigma_{jj}}$, while the off-diagonal entries contain the covariances. In the usual way we construct the correlation matrix
+
+$$
+\mathbf{R} = \begin{pmatrix} 1 & \cdots & \rho_{1k} \\ \vdots & \ddots & \vdots \\ \rho_{k1} & \cdots & 1 \end{pmatrix} \tag{71}
+$$
+
+by the formula
+
+$$
+\rho_{jj'} = \sigma_{jj'}/(\sigma_j \sigma_{j'}) \tag{72}
+$$
+
+Equation (66) shows that $\Delta\mathbf{b}$ is related to $\Delta\mathbf{y}$ by a (locally) linear transformation, and so we obtain from (69) the approximate result
+
+$$
+\Sigma(\mathbf{b}) = \mathbf{K}\Sigma(\mathbf{y})\mathbf{K}^{\mathrm{T}} \quad (73)
+$$
+
+We now assume that the measurements $y_i$ are independent. Let
+
+$$
+\operatorname{Var}[y_i] = s_i^2, \quad i = 1, \dots, n \tag{74}
+$$
+
+such that $s_i$ is the standard deviation of $y_i$. Then $\Sigma(\mathbf{y}) = \text{diag}(s_i^2)$. We also assume that the variances $s_i^2$ ($i = 1, \dots, n$) are known, or at least that estimates are available. With this knowledge it is appropriate to use the statistical weighting introduced in (2) in Section 2.1. We can show that this leads to a simple form of $\Sigma(\mathbf{b})$. By using (73) and observing that (2) implies
+
+$$ \mathbf{W}\mathbf{\Sigma}(\mathbf{y}) = \mathbf{I}_n \qquad (75) $$
+
+we obtain after reduction the formula
+
+$$ \mathbf{\Sigma}(\mathbf{b}) = (\mathbf{P}^\mathrm{T}\mathbf{W}\mathbf{P})^{-1} \qquad (76) $$
+
+which holds at least approximately. It is exact in the linear regression case (57).
+
+If we assume a normal distribution of $y_i$, then the parameter estimates too will be (approximately) normally distributed and their joint distribution is completely determined by the covariance matrix $(\sigma_{jj'})$. The natural statistical interpretation of $(\sigma_{jj'})$ or $(\sigma_j)$, $(\rho_{jj'})$ is an estimate of the covariance structure of the computed parameters in a series of repetitions of the spectrum recording under identical physical conditions.
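
Formulas (70)–(76) translate directly into code. The sketch below forms $\Sigma(\mathbf{b})$, the standard deviations $\sigma_j$ and the correlation matrix $\mathbf{R}$ for a toy straight-line model with known, equal standard deviations; all numbers are invented for illustration:

```python
import numpy as np

# Covariance matrix (76) and correlation matrix (71)-(72) for a weighted
# linear fit with statistical weighting w_i = 1 / s_i^2.
x = np.linspace(0.0, 1.0, 30)
P = np.column_stack([np.ones_like(x), x])   # straight-line model
s = 0.1 * np.ones_like(x)                   # known standard deviations s_i
W = np.diag(1.0 / s**2)                     # statistical weighting

cov = np.linalg.inv(P.T @ W @ P)            # Sigma(b), equation (76)
sigma = np.sqrt(np.diag(cov))               # standard deviations sigma_j
R = cov / np.outer(sigma, sigma)            # correlations rho_jj', eq. (72)

assert np.allclose(np.diag(R), 1.0)         # unit diagonal, as in (71)
assert np.all(np.abs(R) <= 1.0 + 1e-12)
```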
+
+### Distribution of $\phi_{\min}$
+
+Still under the assumption of a locally linear model and of statistical weighting as described, we shall next study the distribution of $\phi_{\min}$ in (46). Here we make the additional assumptions that we have an ideal model, i.e.
+
+$$ f_i(\mathbf{b}_0) = \eta_i \qquad (77) $$
+
+and that each measurement $y_i$ has a Gaussian distribution,
+
+$$ y_i \sim \mathcal{N}(\eta_i, s_i^2) \qquad (78) $$
+
+Then by (65) and (77)
+
+$$ \Delta y_i \sim \mathcal{N}(0, s_i^2) \qquad (79) $$
+
+Moreover $\mathbf{b}$ will be approximately jointly Gaussian. Let $\mathbf{b} = (b_j)$ be the solution of the normal equations (49). Then we obtain the following approximate expression for $\phi_{\min}$ from the linear expansion (63), which is valid for small $\Delta\mathbf{b} = (b_j - b_{j0})$:
+
+$$ \phi_{\min} = \sum_{i=1}^{n} w_i (y_i - \eta_i - \sum_{j=1}^{k} p_{ij} (b_j - b_{j0}))^2 \qquad (80) $$
+
+This can also be written
+
+$$ \phi_{\min} = \| \mathbf{W}^{1/2} (\Delta \mathbf{y} - \mathbf{P} \Delta \mathbf{b}) \|^{2} = (\Delta \mathbf{y} - \mathbf{P} \Delta \mathbf{b})^{\mathrm{T}} \mathbf{W} (\Delta \mathbf{y} - \mathbf{P} \Delta \mathbf{b}) \qquad (81) $$
+
+with $\Delta\mathbf{y} = (y_i - \eta_i)$ and $\mathbf{P}$ given by (50–51). By (66–67), $\phi_{\min}$ becomes a quadratic form in $\Delta\mathbf{y}$:
+
+$$ \phi_{\min} = \Delta\mathbf{y}^{\mathrm{T}}\mathbf{B}\Delta\mathbf{y} \qquad (82) $$
+
+where $\mathbf{B}$ is found to be
+
+$$ \mathbf{B} = \mathbf{W} - \mathbf{W}\mathbf{P}(\mathbf{P}^{\mathrm{T}}\mathbf{W}\mathbf{P})^{-1}\mathbf{P}^{\mathrm{T}}\mathbf{W} \qquad (83) $$
+
+Defining $u_i = \Delta y_i / s_i$, we see from (79) that $u_i$ becomes a standardized normal variable,
+
+$$ u_i \sim \mathcal{N}(0, 1) \qquad (84) $$
+
+Since the statistical weighting (2) was assumed, we have
+
+$$ \Delta\mathbf{y} = \mathbf{W}^{-1/2}\mathbf{u} \qquad (85) $$
+
+Then $\phi_{\min}$ can be expressed as a quadratic form in $\mathbf{u} = (u_i)$:
+
+$$ \phi_{\min} = \mathbf{u}^{\mathrm{T}}\mathbf{C}\mathbf{u} \qquad (86) $$
+
+with
+
+$$ \mathbf{C} = \mathbf{W}^{-1/2}\mathbf{B}\mathbf{W}^{-1/2} = \mathbf{I}_n - \mathbf{M} \qquad (87) $$
+
+where
+
+$$
+\mathbf{M} = \mathbf{W}^{\frac{1}{2}} \mathbf{P} (\mathbf{P}^{\mathrm{T}} \mathbf{W} \mathbf{P})^{-1} \mathbf{P}^{\mathrm{T}} \mathbf{W}^{\frac{1}{2}} \quad (88)
+$$
+
+Considering first the unconstrained case, the matrix $\mathbf{M}$ has full rank $k$. All its $k$ nonzero eigenvalues are unity, as is easily verified by premultiplying $\mathbf{M}\mathbf{x} = \lambda\mathbf{x}$ by $\mathbf{P}^{\mathrm{T}}\mathbf{W}^{1/2}$. Thus $\mathbf{C}$ is of rank $n - k$ with $n - k$ unity eigenvalues, and so there is an orthogonal substitution $\mathbf{u} = \mathbf{Q}\mathbf{z}$ which transforms $\phi_{\min}$ into a sum of $f = n - k$ squares:
+
+$$
+\phi_{\min} = \sum_{i=1}^{n-k} z_i^2
+\qquad
+(89)
+$$
+
+where the $z_i$ are independent, and each $z_i \sim \mathcal{N}(0, 1)$. This means that $\phi_{\min}$ has a $\chi^2$-distribution with $f = n-k$ degrees of freedom,
+
+$$
+\phi_{\min} \sim \chi^2(f) \tag{90}
+$$
+
+For this reason $\phi_{\min}$ is often called $\chi^2$. Thus
+
+$$
+\chi^2 = \phi_{\min} = \min_{b \in \mathbb{R}^k} \sum_{i=1}^{n} \left( \frac{y_i - f_i(b)}{s_i} \right)^2
+\quad (91)
+$$
+
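The distribution result (90)–(91) can be checked by simulation: repeatedly generating Gaussian spectra around an ideal linear model and fitting with statistical weighting should give $\phi_{\min}$ values averaging to $f = n - k$. A small Monte Carlo sketch with an invented toy model and a fixed seed:

```python
import numpy as np

# Monte Carlo check of (90): for a linear model with Gaussian noise and
# statistical weighting, phi_min is chi^2-distributed with f = n - k
# degrees of freedom, so its ensemble mean is close to n - k.
rng = np.random.default_rng(1)
n, k = 200, 3
x = np.linspace(0.0, 1.0, n)
P = np.column_stack([np.ones_like(x), x, x**2])
eta = P @ np.array([1.0, 2.0, -1.0])         # ideal model, equation (77)
s = 0.05                                     # common standard deviation

phis = []
for _ in range(500):
    y = eta + rng.normal(0.0, s, size=n)                # one "spectrum"
    b, *_ = np.linalg.lstsq(P / s, y / s, rcond=None)   # weighted fit
    phis.append(np.sum(((y - P @ b) / s) ** 2))         # phi_min, eq. (91)

# E[chi^2] = f = n - k; the sample mean should lie close to it
assert abs(np.mean(phis) - (n - k)) < 5.0
```
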
+### Influence of linear constraints on the statistics
+
+The results derived for $\Sigma(\mathbf{b})$ and $\phi_{\min}$ are independent of the applied fitting technique. But we have assumed an unconstrained variation of all components of the $k$-vector $\mathbf{b}$. If there are $m$ independent linear constraints on the parameters, then the analysis still holds good for a “basic subset” of $k_{\text{free}}$ independent parameter components, where
+
+$$
+k_{\text{free}} = k - m \tag{92}
+$$
+
+In this case we obtain $\Sigma(\mathbf{b})$ by incorporating the linear constraints (59) or (60) and expressing the remaining components (deterministically) in terms of the free ones. Consequently, (90) is still valid, but now we have
+
+$$
+f = n - k_{\text{free}}
+\quad (93)
+$$
+
+Details of these operations are found in Section 5.6.
+
+### Covariance matrix for any vector expressed in terms of $\mathbf{b}$
+
+Suppose now that the solution vector $\mathbf{b} \in \mathbb{R}^k$ is used to calculate a new vector $\mathbf{b}' = \mathbf{b}'(\mathbf{b}) \in \mathbb{R}^{k'}$. Then the covariance matrix of $\mathbf{b}'$ is approximately
+
+$$
+\Sigma(\mathbf{b}') = \mathbf{J}\Sigma(\mathbf{b})\mathbf{J}^\mathrm{T} \quad (94)
+$$
+
+where
+
+$$
+\mathbf{J} = \frac{\mathrm{d}\mathbf{b}'}{\mathrm{d}\mathbf{b}} \tag{95}
+$$
+
+is the $k' \times k$ Jacobian matrix of the transformation. This follows from the differential relation
+
+$$
+\mathrm{d}b'_j = \frac{\partial b'_j}{\partial b_1} \mathrm{d}b_1 + \cdots + \frac{\partial b'_j}{\partial b_k} \mathrm{d}b_k, \quad j = 1, \ldots, k' \quad (96)
+$$
+
+together with (69). In PALSfit we use rather simple parameter transformations when passing from $\mathbf{b}$ to $\mathbf{b}'$, or no transformation at all. Examples are lifetimes $\tau_j$ in ns instead of annihilation rates $\lambda_j$ in channels$^{-1}$, and widths in FWHM instead of in standard deviations. These give rise to trivial diagonal elements in $\mathbf{J}$. On the other hand, the presentation of relative intensities $I_j$ instead of absolute intensities $J_j$ induces a diagonal block in the upper-left corner of $\mathbf{J}$ with the $(j,j')$-entry $I_j(\delta_{jj'} - I_j)/J_j$, $\delta_{jj'}$ being the Kronecker delta.
+
+### Standard deviation of a derived parameter
+
+The covariance matrix (70) may be used to estimate the standard deviation of a single new parameter that is a function of the primary parameters produced by PALSfit. The standard deviation $\sigma_z$ of such a parameter, $z$, is (to a first-order approximation) given by:
+
+$$ \sigma_z^2 = \sum_{\mu=1}^{k} \sum_{\nu=1}^{k} \frac{\partial z}{\partial b_{\mu}} \frac{\partial z}{\partial b_{\nu}} \sigma_{\mu\nu} = \sum_{\mu=1}^{k} \left( \frac{\partial z}{\partial b_{\mu}} \right)^2 \sigma_{\mu}^2 + 2 \sum_{\mu=1}^{k-1} \sum_{\nu=\mu+1}^{k} \frac{\partial z}{\partial b_{\mu}} \frac{\partial z}{\partial b_{\nu}} \rho_{\mu\nu} \sigma_{\mu} \sigma_{\nu} \quad (97) $$
+
+This result follows by the transformation rule (94). PALSfit3 uses it to estimate the standard deviation of the mean lifetime
+
+$$ \tau_m = \sum_{j=1}^{k_0} I_j \tau_j \qquad (98) $$
+
+in POSITRONFIT, where $I_j$ was defined in (17). Another possible application would be to let $z$ be a trapping rate.
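
The propagation formula (97) amounts to the quadratic form $\nabla z^{\mathrm{T}} \, \Sigma(\mathbf{b}) \, \nabla z$. The sketch below applies it to the mean lifetime (98) for a hypothetical two-component fit; the lifetimes, intensities and covariance matrix are all invented for illustration:

```python
import numpy as np

# First-order error propagation (97) for the mean lifetime (98),
# tau_m = sum_j I_j tau_j, with two components.
# Parameter vector b = (tau_1, tau_2, I_1, I_2); all numbers hypothetical.
tau = np.array([0.180, 0.420])   # lifetimes (ns)
I = np.array([0.70, 0.30])       # relative intensities

# gradient of z = tau_m: d(tau_m)/d(tau_j) = I_j, d(tau_m)/d(I_j) = tau_j
grad = np.concatenate([I, tau])

# hypothetical covariance matrix Sigma(b) from a fit
sig = np.array([0.002, 0.010, 0.01, 0.01])   # standard deviations
R = np.eye(4)
R[0, 2] = R[2, 0] = -0.5                     # one assumed correlation
cov = R * np.outer(sig, sig)

tau_m = I @ tau
var_tau_m = grad @ cov @ grad                # equation (97)
sigma_tau_m = np.sqrt(var_tau_m)
assert var_tau_m > 0.0
```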
+
+### Forcing a single parameter to be shifted
+
+There is another property of the correlation matrix which might be useful. Suppose an analysis of a given spectrum results in an estimated parameter vector **b** = ($b_j$), $j = 1, ..., k$. One may ask: What happens to the remaining components if one of them, say $b_1$, is forced to be shifted a small amount $\Delta b_1$, and the analysis is repeated with the same spectrum? It can be shown that the other components will be shifted according to the formula
+
+$$ \Delta b_j = (\sigma_j / \sigma_1) \rho_{1j} \Delta b_1, \quad j = 2, \dots, k \qquad (99) $$
+
+The formula (99) refers to a single spectrum and is therefore deterministic. In principle its validity is restricted to small shifts because of the nonlinearity of our models. In our experience the formula is applicable up to at least $\Delta b_1 \approx 2\text{–}3 \times \sigma_1$ for well-defined fitting problems with small $\sigma_j$. For fits with large $\sigma_j$ it seems to be valid only up to $\Delta b_1 \approx 0.1\text{–}0.2 \times \sigma_1$, and in certain pathological cases it fails completely; such failures may be ascribed to imperfect models or strong nonlinearities. We shall here give a proof of (99). In the following we consider $\phi(\mathbf{b})$ with fixed spectrum $(y_i)$. We fix $\Delta b_1$ and seek the conditional minimum when the other parameters vary. We shall use the approximate formula (113) (proved in Section 5.4) with $\mathbf{d} = \Delta \mathbf{b}$:
+
+$$ \phi(\mathbf{b} + \Delta \mathbf{b}) \approx \phi(\mathbf{b}) + \nabla \phi(\mathbf{b}) \cdot \Delta \mathbf{b} + \Delta \mathbf{b}^T \mathbf{P}^T \mathbf{W} \mathbf{P} \Delta \mathbf{b} \qquad (100) $$
+
+We introduce the vector $z$ with components $\Delta b_2, ..., \Delta b_k$. The gradient term in (100) can be written
+
+$$ \nabla \phi(\mathbf{b}) \cdot \Delta \mathbf{b} = \frac{\partial \phi}{\partial b_1} \Delta b_1 + \nabla_z \phi(\mathbf{b}) \cdot z \qquad (101) $$
+
+where $\nabla_z \phi(\mathbf{b})$ must be zero, because $\mathbf{b}$ already minimizes $\phi$ with respect to $b_2, \dots, b_k$. Making the partition
+
+$$ \mathbf{P}^T \mathbf{W} \mathbf{P} = \begin{pmatrix} a_{11} & d^\mathrm{T} \\ d & C \end{pmatrix} \qquad (102) $$
+
+(100) can then be written
+
+$$ \phi(\mathbf{b} + \Delta\mathbf{b}) = \phi(\mathbf{b}) + \frac{\partial\phi}{\partial b_1}\Delta b_1 + a_{11}(\Delta b_1)^2 + 2\Delta b_1 z^\mathrm{T}\mathbf{d} + z^\mathrm{T}\mathbf{C}z \qquad (103) $$
+
+To minimize (103) we take the derivative with respect to $z$. After equating the result to zero we deduce that
+
+$$ z = -\Delta b_1 C^{-1} d \qquad (104) $$
+
+Next we make the similar partitioning
+
+$$ \Sigma(\mathbf{b}) = \begin{pmatrix} \sigma_{11} & s^\mathrm{T} \\ s & \Gamma \end{pmatrix} \qquad (105) $$
+
+We assume statistical weighting which implies the identity $\mathbf{P}^T\mathbf{W}\mathbf{P}\Sigma(\mathbf{b}) = \mathbf{I}_k$, cf. (76). From this we infer that
+
+$$ C^{-1}d = -\frac{1}{\sigma_{11}}s \qquad (106) $$
+
+Inserting this in (104) yields
+
+$$z = \frac{\Delta b_1}{\sigma_{11}} s \quad (107)$$
+
+which proves formula (99) since $s$ has components $\sigma_{21}, \dots, \sigma_{k1}$.
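
For a weighted linear model the formula (99) is exact, which makes it easy to verify numerically: fit, then refit with $b_1$ fixed at a shifted value, and compare the resulting shifts with (99). A sketch with invented data:

```python
import numpy as np

# Numerical check of (99) on a weighted linear model, where the formula
# is exact: shift b_1 by Delta, refit the remaining parameters, and
# compare their shifts with (sigma_j / sigma_1) * rho_1j * Delta.
rng = np.random.default_rng(2)
n = 40
x = np.linspace(0.0, 1.0, n)
P = np.column_stack([np.ones_like(x), x, x**2])
s = 0.1
y = P @ np.array([1.0, -1.0, 0.5]) + rng.normal(0.0, s, n)
W = np.eye(n) / s**2                           # statistical weighting

A = P.T @ W @ P
b = np.linalg.solve(A, P.T @ W @ y)            # unconstrained fit
cov = np.linalg.inv(A)                         # Sigma(b), equation (76)

delta1 = 0.05                                  # forced shift of b_1
# refit b_2, b_3 with b_1 fixed at b_1 + delta1
y_red = y - P[:, 0] * (b[0] + delta1)
b_red = np.linalg.solve(P[:, 1:].T @ W @ P[:, 1:], P[:, 1:].T @ W @ y_red)

# (99) in covariance form: Delta b_j = (sigma_j1 / sigma_11) * Delta b_1
predicted = b[1:] + (cov[1:, 0] / cov[0, 0]) * delta1
assert np.allclose(b_red, predicted)
```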
+
+### The $\chi^2$ test for goodness of fit
+
+We saw that $\chi^2 = \phi_{\min}$ under certain assumptions has a $\chi^2$-distribution with $f = n - k_{\text{free}} = n - (k-m)$ degrees of freedom, where $m$ is the number of constraints and $k_{\text{free}} = k - m$ is the effective number of free parameters in the estimation. The mean and variance of $\phi_{\min} = \chi^2$ are
+
+$$E[\chi^2] = f \quad (108)$$
+
+and
+
+$$\operatorname{Var}[\chi^2] = 2f \quad (109)$$
+
+From the $\chi^2$ statistics one can derive a “goodness-of-fit” significance test for the validity of the asserted ideal model, cf. (77). In such a $\chi^2$-test we compute the probability $P\{\chi^2 < \chi_{\text{obs}}^2\}$ that a $\chi^2$-distributed variable with $f$ degrees of freedom will not exceed the observed value $\chi_{\text{obs}}^2$. A value close to 100% indicates systematic deviation from the assumed model, and we use the phrase “significance of imperfect model” for this probability. We also compute the quantity
+
+$$V = \chi^2/f, \quad (110)$$
+
+with mean
+
+$$E[V] = 1 \quad (111)$$
+
+and variance
+
+$$\operatorname{Var}[V] = 2/f \quad (112)$$
+
+$V$ is sometimes called the “reduced chi-square” or the “variance of the fit”; with a good fit this quantity should be close to unity.
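
Both quantities are easy to compute once $f$ and the observed $\chi^2$ are known. The sketch below uses SciPy's chi-square distribution with hypothetical numbers for $f$ and $\chi^2_{\text{obs}}$:

```python
import numpy as np
from scipy.stats import chi2

# Goodness-of-fit quantities for a hypothetical fit result.
f = 980                  # f = n - k_free (hypothetical)
chi2_obs = 1063.0        # observed phi_min = chi^2 (hypothetical)

V = chi2_obs / f                 # reduced chi-square, equation (110)
p = chi2.cdf(chi2_obs, f)        # P{chi^2 < chi2_obs}:
                                 # "significance of imperfect model"

# With a good fit, V should lie near 1 within a few times sqrt(2/f)
assert abs(V - 1.0) < 5 * np.sqrt(2.0 / f)
assert 0.0 <= p <= 1.0
```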
+
+### Statistical assumptions in the NLLS analysis
+
+We conclude this section with some comments on the underlying assumptions in the statistical NLLS analysis which were:
+
+1. Small fluctuations of each data value $y_i$, i.e. $\operatorname{Var}[y_i]$ small.
+
+2. Our model is only weakly nonlinear in the parameter vector **b**.
+
+3. An ideal model which means that (77) holds.
+
+4. The data values $y_i$ are independent.
+
+5. Each $y_i$ has a Gaussian distribution.
+
+6. “Statistical weighting” (2).
+
+7. The population variances $\operatorname{Var}[y_i]$ are known in advance.
+
+Assumptions 1, 2 and 3 should be considered together; for example, violations of 1 and 3 may each invalidate the linear approximation (63).
+
+Assumption 1 is a fair approximation in PALSfit applications, where it should be understood in a relative sense; it holds provided the counts $y_i$ are not too small.
+
+Assumption 3 expresses that our model “explains” the observed data perfectly, apart from the inevitable statistical noise. This hypothesis was subject to a chi-square goodness-of-fit test as explained.
+
+Assumption 4 is natural in many applications; in practice, however, some measurements may show a certain correlation between neighbouring data values.
+
+Assumption 5 is needed only for the analysis of the goodness-of-fit. Many distributions encountered in practice do not deviate much from the normal distribution and thus admit an approximately correct analysis. In particular, this is true for Poisson counting statistics, again provided the counts are large enough.
+
+Regarding Assumption 6, statistical weighting is a convenient means to equalize the impact from the individual observations $y_i$ on the fit. To accomplish it we shall need (estimates of) the variances $\text{Var}[y_i]$ (see also Assumption 7).
+
+Regarding Assumption 7, the theoretical values of the population variances $\text{Var}[y_i]$ are sometimes unavailable and need to be estimated. In some applications the variances are only known up to a constant of proportionality. Using statistical weighting nevertheless would not affect the outcome of the NLLS parameter estimation itself, but the chi-square analysis would not be possible in the usual way, due to the lack of normalization.
+
+## 5.4 Marquardt minimization
+
+As mentioned in Section 5.1, the normal equations (49) are in general nonlinear and must be solved iteratively. We now describe such an iterative method, based on Marquardt's principle, which combines two classical unconstrained minimization procedures; constraints will be taken care of as described in Sections 5.2 and 5.6.
+
+Basically, we use Newton's iterative method (other names are the Gauss-Newton or the Taylor series method), which we shall presently explain. However, first we shall prove the following expansion formula which is approximately correct provided **d** is small, the fit is good, and the model locally linear:
+
+$$ \phi(\mathbf{b} + \mathbf{d}) \approx \phi(\mathbf{b}) + \nabla\phi(\mathbf{b}) \cdot \mathbf{d} + \mathbf{d}^T \mathbf{P}^T \mathbf{W} \mathbf{P} \mathbf{d} \quad (113) $$
+
+with the usual meaning of **W** and **P**. We use Taylor's quadratic limit formula
+
+$$ \phi(\mathbf{b} + \mathbf{d}) = \phi(\mathbf{b}) + \nabla\phi(\mathbf{b}) \cdot \mathbf{d} + \frac{1}{2}\mathbf{d}^T\mathbf{S}\mathbf{d} + o(\|\mathbf{d}\|^2) \quad (114) $$
+
+where $\mathbf{S} = \{s_{jj'}\}$ is the Hessian matrix of $\phi(\mathbf{b})$. From the expression (44) we find
+
+$$ s_{jj'} = \frac{\partial^2 \phi(\mathbf{b})}{\partial b_j \partial b_{j'}} = 2 \sum_{i=1}^{n} w_i \left( \frac{\partial f_i}{\partial b_j} \frac{\partial f_i}{\partial b_{j'}} - (y_i - f_i) \frac{\partial^2 f_i}{\partial b_j \partial b_{j'}} \right) \quad (115) $$
+
+with $f_i = f_i(\mathbf{b})$. We shall neglect the term $\sum_{i=1}^n w_i(y_i - f_i)\partial^2 f_i/\partial b_j \partial b_{j'}$ in (115). The reason for doing so is that we expect some cancellation to take place in the summation process, because the residuals $y_i - f_i$ are supposed to fluctuate around zero when the fit is good. We have also assumed that the second derivatives $\partial^2 f_i/\partial b_j \partial b_{j'}$, which express the nonlinearity of the model, are not too large. Hence, approximately
+
+$$ \mathbf{S} = 2\mathbf{P}^T\mathbf{W}\mathbf{P} \quad (116) $$
+
+Inserting this in (114) we establish (113). Returning to Newton's method, let **b** be a guessed or previously iterated parameter vector. Newton's correction step **d** now solves the local minimization problem
+
+$$ \min_{\mathbf{d} \in \mathbb{R}^k} \{\phi(\mathbf{b} + \mathbf{d})\} \quad (117) $$
+
+or
+
+$$ \nabla\phi(\mathbf{b} + \mathbf{d}) = \mathbf{0} \quad (118) $$
+
+for fixed **b**. We approximate $\phi(\mathbf{b} + \mathbf{d})$ by (113). For brevity we shall write
+
+$$ \mathbf{A} = \mathbf{P}^T\mathbf{W}\mathbf{P} \quad (119) $$
+
+Assuming that $\mathbf{P}$ has full rank, $\mathbf{A}$ will be positive definite. We shall need the formula
+
+$$ \nabla(\nabla\phi(\mathbf{b}) \cdot \mathbf{d}) \approx 2\mathbf{A}\mathbf{d} \quad (120) $$
+
+which is proved by applying the same approximation as in (115). Then we obtain
+
+$$ \nabla\phi(\mathbf{b}) + 2\mathbf{A}\mathbf{d} + \mathcal{O}(\|\mathbf{d}\|^2) = \mathbf{0} \quad (121) $$
+
+Thus the Newton step $\mathbf{d}$ is computed from the normal equation system
+
+$$ \mathbf{A} \mathbf{d} = \mathbf{g} \quad (122) $$
+
+where
+
+$$ \mathbf{g} = -\frac{1}{2} \nabla \phi(\mathbf{b}) \quad (123) $$
+
+According to (44) the components of $\mathbf{g}$ are
+
+$$ g_j = \sum_{i=1}^{n} w_i (y_i - f_i(\mathbf{b})) p_{ij}, \quad j = 1, \dots, k \quad (124) $$
+
+(We see that the normal equations (49) are satisfied when $\mathbf{g} = \mathbf{0}$.) Subsequently $\mathbf{b}+\mathbf{d}$ replaces $\mathbf{b}$ as the new iterate, and the iterations continue. With the pure Newton method we cannot guarantee that the new $\phi = \phi(\mathbf{b}+\mathbf{d})$ is smaller than the old one. Indeed the procedure often tends to diverge due to strong nonlinearities, in particular when the initial guess is bad. To ensure a decrease in $\phi$ we modify (122) as follows,
+
+$$ (\mathbf{A} + \lambda\mathbf{D})\mathbf{d} = \mathbf{g} \quad (125) $$
+
+where $\mathbf{D}$ is a diagonal matrix with the same diagonal as the positive definite matrix $\mathbf{A}$. This is Marquardt's equation [52]. Written out in full, we see from (119) and (124) that it takes the form:
+
+$$ (\mathbf{P}^{\mathrm{T}}\mathbf{W}\mathbf{P} + \lambda\mathbf{D})\mathbf{d} = \mathbf{P}^{\mathrm{T}}\mathbf{W}(\mathbf{y} - \mathbf{f}(\mathbf{b})) \quad (126) $$
+
+where $\mathbf{y} = (y_i)$ and $\mathbf{f}(\mathbf{b}) = (f_i(\mathbf{b}))$. $\lambda$ is a parameter that is at our disposal. It provides interpolation between Newton's method and a gradient-like method. The former is obtained by setting $\lambda = 0$, cf. (122). On the other hand, when $\lambda \to \infty$ we obtain a solution vector proportional to $\mathbf{D}^{-1}\mathbf{g}$. According to (123) $\mathbf{g}$ is proportional to the negative gradient vector $-\nabla\phi$, so $\mathbf{D}^{-1}\mathbf{g}$ becomes a scaled version of $-\nabla\phi$ and shares with this the property that $\phi$ (assumed > $\phi_{\min}$) certainly decreases initially along the correction vector, although it need not have the steepest descent direction. We can now roughly sketch Marquardt's procedure. The equation to be solved at iteration number $r$ reads
+
+$$ (\mathbf{A}^{(r)} + \lambda^{(r)}\mathbf{D}^{(r)})\mathbf{d}^{(r)} = \mathbf{g}^{(r)} \quad (127) $$
+
+From its solution $\mathbf{d}^{(r)}$ we calculate
+
+$$ \mathbf{b}^{(r+1)} = \mathbf{b}^{(r)} + \mathbf{d}^{(r)} \quad (128) $$
+
+and a new $\phi$-value, $\phi^{(r+1)}$. Now it is essential that $\lambda^{(r)}$ is so chosen that
+
+$$ \phi^{(r+1)} \leq \phi^{(r)} \quad (129) $$
+
+If we are not already at the minimum, it is always possible to satisfy (129) by selecting a sufficiently large $\lambda^{(r)}$, and so we avoid the divergence problems encountered in Newton's method. However, $\lambda^{(r)}$ should not be chosen unnecessarily large, because we would then get a small correction vector of gradient-like type, which gives slow convergence. In the later iterations, when convergence is approached, $\lambda$ should be small. We then approach Newton's method, which has a fast (quadratic) rate of convergence near the minimum. The procedure has converged when $\phi^{(r)}$ and $\mathbf{b}^{(r)}$ are stationary with increasing $r$. The algorithm is sometimes called the Levenberg-Marquardt method (LM), since Levenberg [53] had already in 1944 put forward essential parts of the ideas taken up by Marquardt in 1963 [52].
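As a concrete illustration of (127)–(129), here is a minimal sketch of one Marquardt outer step, under simplifying assumptions: $\mathbf{W} = \mathbf{I}$, and all model, data, and helper names are invented for the example.

```python
import numpy as np

def marquardt_step(phi, jac, res, b, lam, nu=10.0):
    """One outer step of (127)-(129): enlarge lambda until phi decreases.
    Illustrative only; W = I, and D shares the diagonal of A = P^T P."""
    P = jac(b)                       # p_ij = df_i/db_j
    A = P.T @ P                      # Gauss-Newton matrix, cf. (126)
    g = P.T @ res(b)                 # right-hand side, cf. (124)
    D = np.diag(np.diag(A))
    phi_old = phi(b)
    while lam < 1e12:                # safeguard against an endless inner loop
        d = np.linalg.solve(A + lam * D, g)
        if phi(b + d) <= phi_old:    # acceptance condition (129)
            return b + d, max(lam / nu, 1e-12)
        lam *= nu                    # shorter, more gradient-like step
    return b, lam

# toy problem: fit f_i(b) = exp(-b0 t_i) + b1 to exact synthetic data
t = np.linspace(0.0, 4.0, 50)
y = np.exp(-1.3 * t) + 0.2
res = lambda b: y - (np.exp(-b[0] * t) + b[1])
phi = lambda b: float(res(b) @ res(b))
jac = lambda b: np.column_stack([-t * np.exp(-b[0] * t), np.ones_like(t)])

b, lam = np.array([0.5, 0.0]), 1e-3
for _ in range(50):
    b, lam = marquardt_step(phi, jac, res, b, lam)
```

Each outer step either accepts $\mathbf{b}+\mathbf{d}$ and relaxes $\lambda$, or inflates $\lambda$ and retries, which is exactly the interpolation between Newton-like and gradient-like behaviour described above.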
+
+Over the years LM has undergone a number of refinements that have added robustness to it. In earlier versions of PALSfit we used LM as in [52]. But pure LM puts no bounds on the step vector $d(\lambda)$. Newer LM implementations use a "trust-region" enhancement and replace the unrestricted minimization of $\phi$ by the quadratic programming problem
+
+$$ \min_{d(\lambda)} \left\{ \phi : \| \mathbf{D}^{1/2} d(\lambda) \| \le \Delta \right\} \qquad (130) $$
+
+where $\mathbf{D}$ is the matrix entering (125). The effect of (130) is to restrict the size of $d = d(\lambda)$. The bound $\Delta$ is adjusted each time a major iteration step begins and is decreased if the solution of (130) does not provide a suitable correction. We have adopted this idea for use in PALSfit from the work of Moré [54], as implemented in the software package MINPACK-1 [55] for unconstrained least-squares minimization. A subroutine from this package, LMPAR, performs minor iterations by finding a value of $\lambda$ that solves (130) approximately. The optimal $\lambda$ is saved for use as an initial estimate in the next major step. Details of this technique are found in Moré [54].
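Moré's MINPACK implementation is still in everyday use; for instance, SciPy's `least_squares` with `method='lm'` dispatches to the MINPACK Levenberg-Marquardt code (`lmder`/`lmdif`). A hedged illustration on an invented exponential model:

```python
import numpy as np
from scipy.optimize import least_squares

t = np.linspace(0.0, 4.0, 50)
y = np.exp(-1.3 * t) + 0.2     # exact synthetic data for a toy model

def residual(b):
    return y - (np.exp(-b[0] * t) + b[1])

# method='lm' uses MINPACK's trust-region Levenberg-Marquardt, i.e. the
# lambda/Delta strategy of More sketched around (130); the internal step
# bound Delta is not exposed directly.
sol = least_squares(residual, x0=[0.5, 0.0], method='lm')
```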
+
+## 5.5 Separable least-squares technique
+
+An efficiency gain in solving the NLLS problem (46) can be obtained when some of the parameters $b_j$ enter the model $f(\mathbf{b}) = (f_i(\mathbf{b}))$ linearly. Indeed this is the case in PALSfit. The least-squares problem is then called separable (or semilinear). Separable procedures have been studied by several authors [3,56–60] and have proved successful in this work and in many other applications as well. They provide a decomposition of the space spanned by the model parameters. Thus we can rearrange the parameters to comply with the partitioning of
+
+$$ \mathbf{b} = \begin{pmatrix} \boldsymbol{\alpha} \\ \boldsymbol{\beta} \end{pmatrix} \qquad (131) $$
+
+into a “linear” $p$-vector $\alpha = (\alpha_j)$ and a “nonlinear” $q$-vector $\beta = (\beta_j)$, where $p+q=k$. The model functions $f_i(\mathbf{b})$ can then be written
+
+$$ f_i(\mathbf{b}) = \sum_{j=1}^{p} \alpha_j f_{ij}(\beta) = (\mathbf{F}(\beta)\alpha)_i, \quad i = 1, \dots, n \qquad (132) $$
+
+where $\mathbf{F} = \mathbf{F}(\beta)$ is an $n \times p$ matrix with elements $f_{ij} = f_{ij}(\beta)$. With these definitions $\mathbf{f}(\mathbf{b})$ can be written
+
+$$ \mathbf{f}(\mathbf{b}) = \mathbf{F}(\beta)\alpha \qquad (133) $$
+
+In separable NLLS we consider the linear subproblems of (46) where $\beta$ is fixed and $\alpha$ varies:
+
+$$ \min_{\alpha \in \mathbb{R}^p} \{\|\mathbf{W}^{1/2}(\mathbf{y} - \mathbf{F}(\beta)\alpha)\|^2 : \beta \text{ fixed}\} \qquad (134) $$
+
+Considering first the unconstrained case, the standard linear least-squares analysis tells us that $\alpha = \alpha(\beta)$ is the solution of the $p$th-order normal equation system
+
+$$ \mathbf{F}^T\mathbf{W}\mathbf{F}\alpha = \mathbf{F}^T\mathbf{W}\mathbf{y} \qquad (135) $$
+
+cf. the linear regression case (52–58). Turning to the determination of the nonlinear part $\beta$ of the parameter vector $\mathbf{b}$, we realize that an iterative method is needed. In fact, there will be an outer loop, where each step provides a correction vector $\mathbf{d}$ to $\beta$, and an inner procedure which invokes a linear minimization (134–135) each time a new trial value of $\beta$ is chosen. We can formulate the nonlinear outer minimization as follows:
+
+$$ \min_{\beta \in \mathbb{R}^q} \{\|\mathbf{W}^{1/2}(\mathbf{y} - \mathbf{F}(\beta)\alpha(\beta))\|^2\} \qquad (136) $$
+
+We solve (136) by a modified Marquardt procedure as explained in Section 5.4, where $\mathbf{b}$ should be replaced by $\beta$. Indeed, equation (126) takes the form
+
+$$ (\mathbf{P}^T\mathbf{W}\mathbf{P} + \lambda\mathbf{D})\mathbf{d} = \mathbf{P}^T\mathbf{W}(\mathbf{y} - \mathbf{F}(\beta)\alpha(\beta)) \qquad (137) $$
+
+where $\mathbf{P}$ is now the $n \times q$ matrix with elements
+
+$$p_{ij} = \frac{\partial f_i}{\partial \beta_j} \qquad (138)$$
+
+and $\mathbf{D}$ is a diagonal matrix with the same diagonal as $\mathbf{P}^\mathrm{T}\mathbf{W}\mathbf{P}$.
+
+A crucial point in the separable procedure is the evaluation of (138), which can be accomplished by considering the vector $\mathbf{f} = \mathbf{f}(\mathbf{b})$ in (133). Note that $\mathbf{f}$ depends on $\boldsymbol{\beta}$ directly through $\mathbf{F}$ and indirectly through $\boldsymbol{\alpha} = \boldsymbol{\alpha}(\boldsymbol{\beta})$; hence
+
+$$\frac{\partial \mathbf{f}}{\partial \beta_j} = \frac{\partial \mathbf{F}}{\partial \beta_j} \boldsymbol{\alpha} + \mathbf{F} \frac{\partial \boldsymbol{\alpha}}{\partial \beta_j} \qquad (139)$$
+
+To find $\partial\boldsymbol{\alpha}/\partial\beta_j$ we differentiate both sides of (135). This leads to
+
+$$\mathbf{F}^{\mathrm{T}}\mathbf{W}\mathbf{F} \frac{\partial \boldsymbol{\alpha}}{\partial \beta_j} = \frac{\partial \mathbf{F}^{\mathrm{T}}}{\partial \beta_j} \mathbf{W} (\mathbf{y} - \mathbf{F}\boldsymbol{\alpha}) - \mathbf{F}^{\mathrm{T}}\mathbf{W} \frac{\partial \mathbf{F}}{\partial \beta_j} \boldsymbol{\alpha}, \quad j = 1, \dots, q \qquad (140)$$
+
+For an ideal model the term in (140) containing the residual vector $\mathbf{y} - \mathbf{F}\boldsymbol{\alpha}$ is negligible when the minimum is approached, but is important when the current iterate is far from convergence.
+
+Now we can give a summary of the complete strategy for the unconstrained separable minimization of $\phi$: Start the outer iterations from a guessed value of $\boldsymbol{\beta}$, and select suitable initial values for $\lambda$ and the Moré bound $\Delta$. For each outer iteration, solve the linear subproblem (134–135) for $\boldsymbol{\alpha}$ and calculate $\phi$. Compute the Jacobian elements $\partial\mathbf{f}/\partial\beta_j$ from (140) and (139), and form $\mathbf{P}$ and $\mathbf{D}$. Then enter an inner procedure and find near-optimal values of $\lambda$ and the correction vector to $\boldsymbol{\beta}$, $\mathbf{d} = \mathbf{d}(\lambda)$, using Marquardt's method with Moré's modification. Update the bound $\Delta$, replace $\boldsymbol{\beta}$ by $\boldsymbol{\beta} + \mathbf{d}$, and resume the outer iteration loop. The procedure is finished when convergence is obtained.
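The strategy above can be condensed into a few lines of code. This is a sketch under simplifying assumptions: $\mathbf{W} = \mathbf{I}$, a finite-difference Jacobian in place of (139)–(140), and a plain damped Gauss-Newton outer step instead of the full Marquardt/Moré machinery; all names and data are invented for the example.

```python
import numpy as np

# Separable toy model (132)-(133): f(b) = F(beta) @ alpha with two
# exponential columns; data generated exactly from alpha=(2,1), beta=(0.7,2.5).
t = np.linspace(0.0, 5.0, 80)
y = 2.0 * np.exp(-0.7 * t) + 1.0 * np.exp(-2.5 * t)

def F(beta):
    return np.column_stack([np.exp(-bj * t) for bj in beta])

def solve_alpha(beta):
    # inner linear subproblem (134)-(135), via lstsq rather than
    # explicitly formed normal equations
    return np.linalg.lstsq(F(beta), y, rcond=None)[0]

def r(beta):
    return y - F(beta) @ solve_alpha(beta)   # residual of (136)

beta = np.array([0.5, 3.0])                  # outer starting guess
for _ in range(40):
    r0, h = r(beta), 1e-6
    J = np.column_stack([(r(beta + h * e) - r0) / h
                         for e in np.eye(beta.size)])
    step = np.linalg.lstsq(J, -r0, rcond=None)[0]    # Gauss-Newton step
    while (np.linalg.norm(r(beta + step)) > np.linalg.norm(r0)
           and np.linalg.norm(step) > 1e-14):
        step *= 0.5                          # crude damping
    beta = beta + step
alpha = solve_alpha(beta)
```

The outer loop only ever sees the $q$ nonlinear parameters; the $p$ linear ones are recomputed on demand, which is the efficiency gain referred to above.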
+
+When implementing our separable algorithm, there is a practical difficulty in handling $\partial\mathbf{f}/\partial\beta_j$ in (139). For each data value we must evaluate a $p \times q$ matrix of scalar derivatives which means altogether $n \times p \times q$ values. To reduce the memory demand we use a packed (“sparse-matrix”) scheme for storing only the nonzero derivatives.
+
+Linear constraints on linear model parameters, as they occur in PALSfit, are readily integrated in the separable NLLS procedure, cf. Sections 5.2 and 5.6.
+
+## 5.6 Various mathematics, statistics, and numerics
+
+In this section a number of technical details are collected. They all have relevance to previously discussed subjects.
+
+### Estimation of background and weight smoothing
+
+Considering Assumptions 6 and 7 in Section 5.3 we face the problem that we do not know $\sigma_i^2 = \text{Var}[y_i]$ when setting up the statistical weighting (2). Each count $y_i$ is distributed in a Poisson distribution with a certain mean value $\eta_i$,
+
+$$y_i \in \mathbf{P}(\eta_i) \qquad (141)$$
+
+implying that
+
+$$E[y_i] = \eta_i \qquad (142)$$
+
+$$\operatorname{Var}[y_i] = \eta_i \qquad (143)$$
+
+The ideal weighting
+
+$$w_i = \frac{1}{\eta_i} \qquad (144)$$
+
+would provide approximately central (i.e. unbiased) least-squares estimates of the model parameters. Since the $\eta_i$ are unknown we must use some kind of approximation. We may simply take
+
+$$w_i = \frac{1}{y_i} \tag{145}$$
+
+or rather use the modified formula
+
+$$w_i = \frac{1}{\max(y_i, 1)} \tag{146}$$
+
+because it is possible to record $y_i = 0$. However, in the following we shall assume that the probability of this is negligible,
+
+$$P\{y_i = 0\} \approx 0 \tag{147}$$
+
+The “raw” weights (145) will normally fluctuate, and we shall now show that this may induce a bias on the estimate of the background. In a fitting model like (3) with a free background $B$, it is often the case that the major contribution to (1) comes from channels where the impact of the other fitting parameters is negligible compared to $B$. This implies that $B$ is virtually independent of the other parameters, so that obtaining a least-squares estimate $B^*$ of $B$ is essentially a one-parameter problem,
+
+$$B^* = \operatorname{argmin}_{B \in \mathbb{R}} \sum_{i=1}^{n} w_i (y_i - B)^2 \tag{148}$$
+
+where *n* should be properly adjusted. Assuming this situation, we find the unique solution
+
+$$B^* = \frac{\sum w_i y_i}{\sum w_i} \tag{149}$$
+
+In the “theoretical” case (144) we evaluate $B^* = B_0^*$ by inserting (144) in (149). Because $B$ is the only parameter, we have $\eta_i = B$ and then
+
+$$B_0^* = \frac{\sum y_i}{n} = \langle y_i \rangle \tag{150}$$
+
+where $\langle \cdot \rangle$ stands for averaging over channels. $B_0^*$ is indeed central as
+
+$$E[B_0^*] = E[y_i] = \eta_i = B \tag{151}$$
+
+On the other hand, in the “real” case (145), we evaluate $B^* = B_1^*$ by inserting (145) in (149), and then it turns out that $B_1^*$ is not central. In fact we can show that $B_1^*$ underestimates the background roughly by 1,
+
+$$E[B_1^*] \approx B - 1 \tag{152}$$
+
+under the additional assumptions that *n* as well as the counts $y_i$ and the background $B$ are reasonably large. We then obtain
+
+$$B_1^* = \frac{n}{\sum \frac{1}{y_i}} = \frac{1}{\left\langle \frac{1}{y_i} \right\rangle} \tag{153}$$
+
+and
+
+$$E[B_1^*] \approx \frac{1}{E\left[\frac{1}{y_i}\right]} \tag{154}$$
+
+From the Poisson distribution (141) we have the probability
+
+$$P\{y_i = k\} = \frac{B^k}{k!} e^{-B} \tag{155}$$
+
+and, taking (147) into account, this gives
+
+$$E\left[\frac{1}{y_i}\right] \approx \sum_{k=1}^{\infty} \frac{1}{k} \frac{B^k}{k!} e^{-B} \tag{156}$$
+
+This sum can be evaluated with the result
+
+$$E\left[\frac{1}{y_i}\right] \approx e^{-B}(Ei(B) - \gamma - \log B) \tag{157}$$
+
+where $Ei$ is the exponential integral defined by
+
+$$ \mathrm{Ei}(x) = - \int_{-x}^{\infty} \frac{e^{-t}}{t} dt \quad (\text{Cauchy principal value when } x > 0) \qquad (158) $$
+
+and $\gamma$ is Euler's constant. Since $B$ was assumed fairly large, we can use the asymptotic expansion
+
+$$ \mathrm{Ei}(B) \sim \frac{e^B}{B} \left\{ 1 + \sum_{r=1}^{\infty} \frac{r!}{B^r} \right\}, \quad B \to +\infty \qquad (159) $$
+
+We shall use the first-order approximation, and discarding $\gamma$ and $\log B$ in (157) we then obtain
+
+$$ E\left[\frac{1}{y_i}\right] \approx \frac{1}{B}\left(1+\frac{1}{B}\right) \qquad (160) $$
+
+Finally (154) gives the approximate result (152) that the bias is $-1$. Monte Carlo simulations of the Poisson process confirm (152).
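The simulation is easy to reproduce; a hedged sketch (channel count, background level, and number of trials invented for the demonstration):

```python
import numpy as np

# Monte Carlo check of (150) and (152): with raw weights w_i = 1/y_i the
# background estimator B1* = 1/<1/y_i> of (153) sits about one count below
# the true B, while B0* = <y_i> of (150) is unbiased.
rng = np.random.default_rng(1)
B, n, trials = 100.0, 500, 400
b0, b1 = [], []
for _ in range(trials):
    y = rng.poisson(B, size=n)         # P(y_i = 0) is negligible, cf. (147)
    b0.append(y.mean())                # B0*, (150)
    b1.append(1.0 / np.mean(1.0 / y))  # B1*, (153)
print(np.mean(b0) - B, np.mean(b1) - B)   # roughly 0 and -1
```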
+
+An approximate removal of the background bias can be accomplished by *weight smoothing*, which can be done in several ways. Earlier PALSfit versions carried out an extra iteration cycle after the first cycle had converged. (This should be done anyway in case of source correction in POSITRONFIT.) Between the two iteration cycles the weight fluctuations were removed by replacing (145) by the weights
+
+$$ w_i = \frac{1}{y_i^f} \qquad (161) $$
+
+where $y_i^f$ is the model-predicted $y_i$-estimate at the end of the first cycle. This procedure did in fact remove the bias, but it also had some drawbacks. When the fit is not perfect, the smoothing not only removes the fluctuations but also distorts the overall shape of the weight function, which results in an unreliable statistical analysis.
+
+Later PALSfit versions use instead a heuristic *non-parametric* (i.e. model-independent) weight smoothing, applied once and for all at the beginning of the analysis. Guided by log-plots of various typical spectra $y(t)$, we make a continuous piecewise linear fit of $\log_{10} y(t)$, in which the knot abscissas and knot values are the fitting parameters. Our algorithm uses separable least-squares optimization, where the inner minimization determines the ordinates for given knot abscissas, while the outer minimization furnishes the optimal positions of the knot abscissas. The latter is controlled by an “amoeba type” procedure (Nelder and Mead, in Numerical Recipes [61]). We obtain good results by dividing the spectrum into at most 4 parts and using the piecewise linear algorithm on each of these. There will be fewer than 4 parts when, for example, the peak position is outside the interval from $i_{ch}^{\min}$ to $i_{ch}^{\max}$, cf. (13). The number of segments used in each part is normally 4, but fewer when there are few channels in a part. Common fit values at contiguous parts are obtained by a simple averaging, which ensures overall continuity of the fit. When the minimum count within some part exceeds a critical value, say 1000, we abandon the smoothing there, since the spectrum within that part may already be considered smooth. From the computed fit values we finally obtain the smoothed weights $w_i$ by taking the antilogarithm and the reciprocal.
+
+This way of estimating the weights has very little influence on the values of the fitted parameters. Typically the parameters change by an amount which is an order of magnitude smaller than the parameter standard deviations, or less. The main effect is that the program becomes more robust against some extreme choices of input parameters to the analysis. Moreover the reliability of the statistical analysis for imperfect fits is improved.
+
+### Implementation of linear constraints
+
+We consider the constrained linear least-squares problem (cf. (59–60)),
+
+$$ \phi_{\min} = \min_{\alpha \in \mathbb{R}^p} \{\phi(\alpha) = \|W^{1/2}(y - F\alpha)\|^2 : H\alpha = \gamma\} \qquad (162) $$
+
+This problem is a constrained version of the separable NLLS subproblem in (134), where an
+optimal linear parameter vector $\alpha \in \mathbb{R}^p$ was computed for a given nonlinear parameter vector
+$\beta \in \mathbb{R}^q$. Thus the derivative matrix $F = (\partial f_i / \partial \alpha_j)$ is of size $n \times p$. One way of handling this
+constrained problem would be to use Lagrange multipliers. This method was used in early versions
+of POSITRONFIT. In that approach, the unconstrained normal equation system (cf. (57)),
+
+$$
+\mathbf{F}^{\mathrm{T}} \mathbf{W} \mathbf{F} \boldsymbol{\alpha}=\mathbf{F}^{\mathrm{T}} \mathbf{W} \boldsymbol{y}
+\quad(163)
+$$
+
+was extended to a block matrix system
+
+$$
+\begin{pmatrix} \mathbf{F}^{\mathrm{T}} \mathbf{W} \mathbf{F} & \mathbf{H}^{\mathrm{T}} \\ \mathbf{H} & \mathbf{0} \end{pmatrix} \begin{pmatrix} \boldsymbol{\alpha} \\ \boldsymbol{\mu} \end{pmatrix} = \begin{pmatrix} \mathbf{F}^{\mathrm{T}} \mathbf{W} \mathbf{y} \\ \boldsymbol{\gamma} \end{pmatrix}
+\quad (164)
+$$
+
+where the vector $2\mu$ contains the Lagrange multipliers. Although (164) is simple enough, there
+are some drawbacks in this procedure. The coefficient matrix in (164) is not positive definite as
+is $\mathbf{F}^\mathrm{T}\mathbf{W}\mathbf{F}$. This follows by considering the quadratic form $Q$ associated with the left-hand side of (164),
+
+$$
+Q = (\mathbf{F}\alpha)^{\mathrm{T}} \mathbf{W} \mathbf{F}\alpha + 2\mu \cdot \gamma
+\quad (165)
+$$
+
+which may become negative. As a result the numerical stability of the calculations might be
+degraded. We also note that the constraints increase the size of the “effective normal equation
+system” from $p \times p$ to $(p+m) \times (p+m)$. Below we describe an elimination method which is now
+in use in PALSfit. It offers better stability, reduced computer time, and reduced storage demand.
+Since the rank of $\mathbf{H}$ is $m$, we can construct a nonsingular matrix by picking $m$ independent
+columns from $\mathbf{H}$. A suitable permutation will move these columns to the $m$ first positions. This
+can be expressed in terms of a $p$th-order permutation matrix $\mathbf{\Pi}_p$ by
+
+$$
+\mathbf{H}\mathbf{\Pi}_p = \mathbf{H}' = \begin{pmatrix} \mathbf{B} & \mathbf{N} \end{pmatrix} \tag{166}
+$$
+
+In the language of linear programming we call $\mathbf{B}$ (an $m \times m$ matrix) a “basis matrix” for $\mathbf{H}$, whereas the columns
+in $\mathbf{N}$ are called “nonbasic”. Because $\mathbf{\Pi}_p$ is orthogonal, $\mathbf{\Pi}_p^{\mathrm{T}}\mathbf{\Pi}_p = \mathbf{I}_p$, (60) can be written
+
+$$
+\mathbf{H}'\boldsymbol{\alpha}' = \boldsymbol{\gamma}
+\quad (167)
+$$
+
+with
+
+$$
+\alpha' = \Pi_p^T \alpha
+\quad (168)
+$$
+
+The homogeneous equation $\mathbf{H}'\boldsymbol{\alpha}' = \mathbf{0}$ corresponding to (167) has the complete solution
+
+$$
+\boldsymbol{\alpha}' = \mathbf{Y}'\boldsymbol{t}, \quad t \in \mathbb{R}^{p-m}
+\quad (169)
+$$
+
+where
+
+$$
+\mathbf{Y}' = \begin{pmatrix} -\mathbf{B}^{-1}\mathbf{N} \\ \mathbf{I}_{p-m} \end{pmatrix}
+\quad (170)
+$$
+
+A particular solution to (167) is
+
+$$
+\alpha' = \alpha'_{0} = \begin{pmatrix} \mathbf{B}^{-1}\gamma \\ \mathbf{0} \end{pmatrix}
+\quad (171)
+$$
+
+Hence the complete solution of (167) is
+
+$$
+\alpha' = \alpha'_{0} + Y't, \quad t \in \mathbb{R}^{p-m}
+\quad (172)
+$$
+
+Finally from (168) we get the complete solution of (60):
+
+$$
+\alpha = \alpha_0 + Yt, \quad t \in \mathbb{R}^{p-m}
+\quad (173)
+$$
+
+where
+
+$$
+\alpha_0 = \Pi_p \alpha'_0
+\quad (174)
+$$
+
+and
+
+$$
+\mathbf{Y} = \Pi_p \mathbf{Y}' \tag{175}
+$$
+
+It is practical to partition $\mathbf{\Pi}_p$ into column sections, with $\mathbf{\Pi}_1$ comprising the first $m$ columns and $\mathbf{\Pi}_2$ the remaining $p-m$ columns:
+
+$$
+\mathbf{\Pi}_p = \begin{pmatrix} \mathbf{\Pi}_1 & \mathbf{\Pi}_2 \end{pmatrix} \quad (176)
+$$
+
+Then (174) becomes
+
+$$
+\alpha_0 = \Pi_1 B^{-1} \gamma \quad (177)
+$$
+
+To express (175) we note that
+
+$$
+N = H\Pi_2
+\quad
+(178)
+$$
+
+and so
+
+$$
+\mathbf{Y} = (\mathbf{I}_p - \mathbf{\Pi}_1 \mathbf{B}^{-1} \mathbf{H}) \mathbf{\Pi}_2 \quad (179)
+$$
+
+Moreover
+
+$$
+\mathbf{B} = \mathbf{H}\mathbf{\Pi}_{1} \qquad (180)
+$$
+
+Then, by (166) and (175) we have
+
+$$
+\mathbf{H}\mathbf{Y} = \mathbf{H}'\mathbf{\Pi}_{p}^{\mathrm{T}}\mathbf{\Pi}_{p}\mathbf{Y}' = \mathbf{H}'\mathbf{Y}' = \mathbf{0} \quad (181)
+$$
+
+and the columns of $\mathbf{Y}$ form a basis of the null space (kernel) of $\mathbf{H}$. Using (173) we can reformulate
+the constrained $p$-dimensional problem (162) as an unconstrained $(p-m)$-dimensional problem:
+
+$$
+\phi_{\min} = \min_{t \in \mathbb{R}^{p-m}} \{\| \mathbf{W}^{1/2}(\mathbf{y} - \mathbf{F}\alpha_0 - \mathbf{F}\mathbf{Y}t) \|^{2}\} \quad (182)
+$$
+
+We see that this can be derived from (162) by removing the constraints, substituting $p-m$ for $p$, $\mathbf{y}-\mathbf{F}\boldsymbol{\alpha}_0$ for $\mathbf{y}$, $\mathbf{FY}$ for $\mathbf{F}$, and $t$ for $\boldsymbol{\alpha}$. Thus we can immediately write down the normal equation system for (182) by making the corresponding substitutions in (163):
+
+$$
+(\mathbf{F}\mathbf{Y})^\mathrm{T}\mathbf{W}(\mathbf{F}\mathbf{Y})\mathbf{t} = (\mathbf{F}\mathbf{Y})^\mathrm{T}\mathbf{W}(\mathbf{y} - \mathbf{F}\boldsymbol{\alpha}_0) \quad (183)
+$$
+
+Turning to our separable setup, this equation replaces (135). Equation (139) becomes
+
+$$
+\frac{\partial f}{\partial \beta_j} = \frac{\partial F}{\partial \beta_j} \alpha + FY \frac{\partial t}{\partial \beta_j} \quad (184)
+$$
+
+while the counterpart to (140) is obtained by differentiating (183) and using (173). The result is
+
+$$
+(\mathbf{F}\mathbf{Y})^{\mathrm{T}}\mathbf{W}(\mathbf{F}\mathbf{Y}) \frac{\partial t}{\partial \beta_j} = \frac{\partial (\mathbf{F}\mathbf{Y})^{\mathrm{T}}}{\partial \beta_j} \mathbf{W}(\mathbf{y}-\mathbf{F}\boldsymbol{\alpha}) - (\mathbf{F}\mathbf{Y})^{\mathrm{T}}\mathbf{W} \frac{\partial \mathbf{F}}{\partial \beta_j} \boldsymbol{\alpha} \quad (185)
+$$
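The elimination machinery (166)–(183) is short to write down in code. A toy sketch with $\mathbf{W} = \mathbf{I}$, where for simplicity the first $m$ columns of $\mathbf{H}$ are assumed to form the nonsingular basis $\mathbf{B}$ (so $\mathbf{\Pi}_p$ is the identity), cross-checked against the Lagrange block system (164); all data are invented:

```python
import numpy as np

rng = np.random.default_rng(2)
n, p, m = 30, 4, 1
F = rng.normal(size=(n, p))
y = rng.normal(size=n)
H = np.array([[1.0, 1.0, 1.0, 1.0]])   # one constraint: components sum to 1
gamma = np.array([1.0])

Bm, N = H[:, :m], H[:, m:]             # basis / nonbasic split, cf. (166)
Binv = np.linalg.inv(Bm)
Y = np.vstack([-Binv @ N, np.eye(p - m)])              # (170); H @ Y = 0
a0 = np.concatenate([Binv @ gamma, np.zeros(p - m)])   # (171)

FY = F @ Y
t = np.linalg.solve(FY.T @ FY, FY.T @ (y - F @ a0))    # (183), W = I
a = a0 + Y @ t                                         # (173)

# cross-check against the Lagrange block system (164)
K = np.block([[F.T @ F, H.T], [H, np.zeros((m, m))]])
a_lag = np.linalg.solve(K, np.concatenate([F.T @ y, gamma]))[:p]
```

The elimination route works with a $(p-m)\times(p-m)$ system and stays positive definite, which is exactly the stability and size advantage claimed above.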
+
+Next we shall derive an expression for the covariance matrix $\Sigma(\mathbf{b})$ of the total parameter vector $\mathbf{b} = (\boldsymbol{\alpha}, \boldsymbol{\beta})$ when the constraints (59) or (60) are included. Realizing that $\Sigma(\mathbf{b})$ is independent of the actual fitting method, we can estimate it by perturbing the solution vector $\mathbf{b}$ at the end of iterations. From the normal equation system (183) we deduce in analogy with (76) that
+
+$$
+\Sigma(t) = \{(F\mathbf{Y})^T W(F\mathbf{Y})\}^{-1} \quad (186)
+$$
+
+Let $\mathbf{P}$ denote the $n \times k$ matrix containing the derivatives $\partial f_i / \partial b_j$ with respect to all the $p+q$
+components of $\mathbf{b} = (\boldsymbol{\alpha}, \boldsymbol{\beta})$. We may then make the partition
+
+$$
+\mathbf{P} = \begin{pmatrix} \mathbf{F} & \mathbf{P}_\beta \end{pmatrix} \tag{187}
+$$
+
+We note that
+
+$$
+\begin{pmatrix} \mathbf{F}\mathbf{Y} & \mathbf{P}_{\beta} \end{pmatrix} = \mathbf{P}\mathbf{Z} \qquad (188)
+$$
+
+where **Z** is given by
+
+$$
+\mathbf{Z} = \begin{pmatrix} \mathbf{Y} & \mathbf{0} \\ \mathbf{0} & \mathbf{I}_q \end{pmatrix} \tag{189}
+$$
+
+This means that (186) can be extended from $t$ to $(t, \beta)$ as follows:
+
+$$ \Sigma(t, \beta) = \{(\mathbf{P}\mathbf{Z})^{\mathrm{T}}\mathbf{W}(\mathbf{P}\mathbf{Z})\}^{-1} \quad (190) $$
+
+Furthermore, since
+
+$$ \begin{pmatrix} \alpha \\ \beta \end{pmatrix} = \begin{pmatrix} \alpha_0 \\ 0 \end{pmatrix} + \mathbf{Z} \begin{pmatrix} t \\ \beta \end{pmatrix} \quad (191) $$
+
+we obtain the result
+
+$$ \Sigma(\mathbf{b}) = \mathbf{Z} \{( \mathbf{P}\mathbf{Z})^{\mathrm{T}} \mathbf{W}(\mathbf{P}\mathbf{Z}) \}^{-1} \mathbf{Z}^{\mathrm{T}} \quad (192) $$
+
+### Scaling in separable NLLS
+
+The numerical solution of many of the linear-algebraic and optimization subproblems in our algorithm is accomplished by software from the standard packages LINPACK [62] and MINPACK-1 [55]. To accommodate application of this software we found it convenient to rescale the NLLS problem formulation. We shall recast the original minimization problem (46) to
+
+$$ \phi_{\min} = \min_{\mathbf{b} \in \mathbb{R}^k} \{ \| \mathbf{r}(\mathbf{b}) \|_2^2 \} \quad (193) $$
+
+where $\mathbf{r}(\mathbf{b})$ is a (scaled) *residual vector* with components
+
+$$ r_i = w_i^{1/2} (y_i - f_i(\mathbf{b})) \quad (194) $$
+
+This induces a number of vector and matrix transformations containing the matrix scaling factor $\mathbf{W}^{1/2}$:
+
+$$ z = \mathbf{W}^{1/2} y \quad (195) $$
+
+$$ e = \mathbf{W}^{1/2} f \quad (196) $$
+
+$$ \mathbf{E} = \mathbf{W}^{1/2} \mathbf{F} \quad (197) $$
+
+$$ \mathbf{G} = \mathbf{W}^{1/2} \mathbf{P} \quad (198) $$
+
+Then the counterparts of (133–137) become:
+
+$$ e(\mathbf{b}) = \mathbf{E}(\beta)\alpha \quad (199) $$
+
+$$ \min_{\alpha \in \mathbb{R}^p} \{\|\mathbf{z} - \mathbf{E}(\beta)\alpha\|^2 : \beta \text{ fixed}\} \quad (200) $$
+
+$$ \mathbf{E}^{\mathrm{T}}\mathbf{E}\alpha = \mathbf{E}^{\mathrm{T}}z \quad (201) $$
+
+$$ \min_{\beta \in \mathbb{R}^q} \{\phi(\beta) = \|z - \mathbf{E}(\beta)\alpha(\beta)\|^2\} \quad (202) $$
+
+$$ (\mathbf{G}^{\mathrm{T}}\mathbf{G} + \lambda\mathbf{D})d = \mathbf{G}^{\mathrm{T}}(z - \mathbf{E}(\beta)\alpha) \quad (203) $$
+
+Moreover (139–140) are replaced by
+
+$$ \frac{\partial e}{\partial \beta_j} = \frac{\partial \mathbf{E}}{\partial \beta_j} \alpha + \mathbf{E} \frac{\partial \alpha}{\partial \beta_j} \quad (204) $$
+
+$$ \mathbf{E}^{\mathrm{T}}\mathbf{E}\frac{\partial \alpha}{\partial \beta_j} = \frac{\partial \mathbf{E}^{\mathrm{T}}}{\partial \beta_j}(z - \mathbf{E}\alpha) - \mathbf{E}^{\mathrm{T}}\frac{\partial \mathbf{E}}{\partial \beta_j}\alpha \quad (205) $$
+
+while the corresponding equations (184–185) with constraints are replaced by
+
+$$ \frac{\partial e}{\partial \beta_j} = \frac{\partial \mathbf{E}}{\partial \beta_j} \alpha + \mathbf{EY} \frac{\partial t}{\partial \beta_j} \quad (206) $$
+
+$$ (\mathbf{EY})^{\mathrm{T}}\mathbf{EY} \frac{\partial t}{\partial \beta_j} = \frac{\partial (\mathbf{EY})^{\mathrm{T}}}{\partial \beta_j}(z - \mathbf{E}\alpha) - (\mathbf{EY})^{\mathrm{T}}\frac{\partial \mathbf{E}}{\partial \beta_j}\alpha \quad (207) $$
+
+We see that the effect of these transformations is to “hide” the weights $w_i$ entirely.
+
+### QR decomposition
+
+A direct solution of normal equations, even by Cholesky decomposition, may present numerical
+difficulties inherent in the ill-conditioning of the positive-definite coefficient matrix, say $\mathbf{E}^{\mathrm{T}}\mathbf{E}$
+in (201), which is used here only as an illustrative example. Instead we use a procedure based on
+the so-called QR decomposition of the $n \times p$ matrix $\mathbf{E}$, viz.
+
+$$
+\mathbf{E} = \mathbf{Q}\mathbf{R} \tag{208}
+$$
+
+where $\mathbf{Q}$ is an $n \times p$ matrix with orthonormal columns and $\mathbf{R}$ is a $p \times p$ upper triangular matrix (see, e.g., Chapter 9 in [62]). Using (208) the system (201) is reformulated to $\mathbf{R}\alpha = \mathbf{Q}^\mathrm{T}\mathbf{z}$, which can be easily solved by back-substitution. A similar procedure can be used when solving (205) for $\partial\alpha/\partial\beta_j$, with $\mathbf{R}$ being saved after the solution of (201).
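A hedged numerical illustration of the QR route (208) versus the explicitly formed normal equations (201), on random toy data:

```python
import numpy as np

rng = np.random.default_rng(3)
E = rng.normal(size=(50, 3))
z = rng.normal(size=50)

Q, R = np.linalg.qr(E)               # reduced QR: Q is 50x3, R is 3x3
a_qr = np.linalg.solve(R, Q.T @ z)   # R is upper triangular, so this is
                                     # back-substitution in disguise
a_ne = np.linalg.solve(E.T @ E, E.T @ z)   # normal equations (201)
```

For a well-conditioned $\mathbf{E}$ the two agree; the point of QR is that it avoids squaring the condition number of $\mathbf{E}$.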
+
+The numerical computation of the covariance matrix $\Sigma(\mathbf{b})$ in (192) can also be done by the QR technique. In a general context we may ignore the scaling and constraint matrix factors and just assume that the covariance matrix is obtained by inverting the matrix $\mathbf{P}^\mathrm{T}\mathbf{P}$, where $\mathbf{P}$ is some $n \times p$ matrix. As in (208) we make a QR decomposition
+
+$$
+\mathbf{P} = \mathbf{Q}\mathbf{R} \tag{209}
+$$
+
+which leads to
+
+$$
+(\mathbf{P}^{\mathrm{T}}\mathbf{P})^{-1} = \mathbf{R}^{-1}(\mathbf{R}^{-1})^{\mathrm{T}} \quad (210)
+$$
+
+In some ill-conditioned problems the diagonal of $\mathbf{R}$ may contain very small elements, which would render the evaluation of (210) completely erratic. There exists a variant of the QR decomposition with column scaling and pivoting that admits a judicious discarding rule for insignificant elements in the $\mathbf{R}$-diagonal [55, 62]. Following this idea, we shall replace (209) with
+
+$$
+\mathbf{P} \Lambda \Pi = \mathbf{Q} \mathbf{R} \tag{211}
+$$
+
+where $\Lambda$ is a diagonal scaling matrix, $\Pi$ a permutation matrix, and the diagonal elements of $\mathbf{R}$ are in non-increasing order of magnitude. The entries in $\Lambda$ are chosen as the inverse Euclidean norms of the column vectors of $\mathbf{P}$ and might be called “uncoupled standard deviations”. Instead of (210) we obtain
+
+$$
+(\mathbf{P}^{\mathrm{T}}\mathbf{P})^{-1} = \Lambda\boldsymbol{\Pi}\mathbf{R}^{-1}(\Lambda\boldsymbol{\Pi}\mathbf{R}^{-1})^{\mathrm{T}} \quad (212)
+$$
+
+which is verified by solving (211) for $\mathbf{P}$, then calculating $\mathbf{P}^\mathrm{T}$ and $\mathbf{P}^\mathrm{T}\mathbf{P}$, and finally $(\mathbf{P}^\mathrm{T}\mathbf{P})^{-1}$. The expression (212) is only used for the “significant” parameters, which correspond to the upper part of $\mathbf{R}$. The variances of the “insignificant” parameters are estimated by their uncoupled standard deviations, while the covariance calculation for such parameters is abandoned.
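A sketch of the scaled, pivoted factorization (211) and the covariance identity (212), using SciPy's pivoted QR; the discarding rule for tiny $\mathbf{R}$-diagonal elements is omitted, and the toy matrix is well conditioned:

```python
import numpy as np
from scipy.linalg import qr

rng = np.random.default_rng(4)
P = rng.normal(size=(40, 3))

Lam = np.diag(1.0 / np.linalg.norm(P, axis=0))    # inverse column norms
Q, R, piv = qr(P @ Lam, mode='economic', pivoting=True)
Pi = np.eye(P.shape[1])[:, piv]    # permutation matrix: (P Lam) Pi = Q R
T = Lam @ Pi @ np.linalg.inv(R)
cov = T @ T.T                      # right-hand side of (212)
```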
+
+### Presentation of the correlation matrix
+
+The correlation matrix $\mathbf{R}$ in (71–72) will be evaluated (optionally) in both POSITRONFIT and RESOLUTIONFIT. The internal evaluation follows the parameter order of the partitioning (131), i.e. first the linear, then the nonlinear parameters. However, from the user's point of view, the nonlinear parameters are probably the more important. Hence we post-process $\mathbf{R}$ by applying a permutation $\pi$ to both the rows and the columns, where
+
+$$
+\pi = \begin{pmatrix}
+1 & \cdots & p & p+1 & \cdots & p+q \\
+q+1 & \cdots & q+p & 1 & \cdots & q
+\end{pmatrix}
+\quad (213)
+$$
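In code, the post-processing is a single fancy-indexing operation (a sketch; $p$ and $q$ invented):

```python
import numpy as np

# Apply the permutation pi of (213) to rows and columns of a correlation
# matrix: the q nonlinear parameters move in front of the p linear ones.
p, q = 2, 3
R = np.arange((p + q) ** 2, dtype=float).reshape(p + q, p + q)
perm = np.r_[p:p + q, 0:p]          # new order: nonlinear first, then linear
R_user = R[np.ix_(perm, perm)]
```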
+
+# 6 Appendix B: Model details
+
+In Sections 2.2 and 2.5 we gave a short presentation of the theoretical models used in POSITRONFIT
+and RESOLUTIONFIT. Below we shall fill the gap between the rather brief description given
+there of the underlying mathematical models and the least-squares theory in Appendix A.
+
+## 6.1 POSITRONFIT
+
+Writing formula (3) as
+
+$$f(t) = \sum_{j=1}^{k_0} \sum_{p=1}^{k_g} \omega_p (a_j * G_p)(t) + B \quad (214)$$
+
+we must evaluate the convolution integral
+
+$$ (a_j * G_p)(t) = \int_{-\infty}^{\infty} a_j(v)G_p(t-v)dv \quad (215) $$
+
+where $a_j$ and $G_p$ were defined in (4) and (7), respectively. Henceforward, we prefer to describe the decay of a lifetime component in terms of the annihilation rate
+
+$$ \lambda_j = 1/\tau_j \quad (216) $$
+
+instead of the lifetime $\tau_j$ itself. It can be shown that
+
+$$ (a_j * G_p)(t) = \frac{1}{2} A_j \phi(t - T_0 - \Delta_p, \lambda_j, s_p) \quad (217) $$
+
+Here
+
+$$ \phi(u, \lambda, s) = \exp\left(-\lambda u + \frac{1}{2}\lambda^2 s^2\right) \operatorname{erfc}\left(\frac{\lambda s^2 - u}{\sqrt{2}s}\right) \quad (218) $$
+
+where erfc stands for the complementary error function
+
+$$ \operatorname{erfc}(x) = 1 - \operatorname{erf}(x) \quad (219) $$
+
+and erf in turn is defined by
+
+$$ \operatorname{erf}(x) = \frac{2}{\sqrt{\pi}} \int_{0}^{x} \exp(-t^2) dt \quad (220) $$
+
+Inserting (217) in (214) we get
+
+$$ f(t) = \frac{1}{2} \sum_{j=1}^{k_0} A_j \sum_{p=1}^{k_g} \omega_p \phi(t - T_0 - \Delta_p, \lambda_j, s_p) + B \quad (221) $$
+
+Next, we compute the integrated model-predicted count $f_i$ defined by equation (12) in Section 2.2. We use that, up to a constant,
+
+$$ \int \phi(u, \lambda, s) du = -\frac{1}{\lambda} \psi(u, \lambda, s) \quad (222) $$
+
+where
+
+$$ \psi(u, \lambda, s) = \phi(u, \lambda, s) + \operatorname{erfc}\left(\frac{u}{\sqrt{2}s}\right) \quad (223) $$
+
+The functions $\phi$ and $\psi$ are building blocks in the POSITRONFIT model. For the predicted channel counts we obtain
+
+$$ f_i = \sum_{j=1}^{k_0} F_{ij} + B = \sum_{j=1}^{k_0} \alpha_j f_{ij} + B \quad (224) $$
+
+where
+
+$$ \alpha_j = \frac{1}{2} A_j / \lambda_j = \frac{1}{2} A_j \tau_j \quad (225) $$
+
+is half the absolute intensity of lifetime component $j$,
+
+$$ f_{ij} = \sum_{p=1}^{k_g} \omega_p \{\psi(t_{i-1,p}, \lambda_j, s_p) - \psi(t_{ip}, \lambda_j, s_p)\} \quad (226) $$
+
+and where we use the shorthand notation
+
+$$ t_{ip} = i_{ch}^{\min} + i - 1 - T_0 - \Delta_p \quad (227) $$
+
+By now we have arrived at the model expression $f_i = f_i(\mathbf{b})$ entering the least-squares formulation of the fitting problem given in Appendix A. We also see that (224) is separable as required; the parameter vector $\mathbf{b}$ splits into a “linear” parameter $\alpha$ and a “nonlinear” one $\beta$ given by
+
+$$ \alpha = (\alpha_1, \dots, \alpha_{k_0}, B) \tag{228} $$
+
+and
+
+$$ \beta = (\lambda_1, \dots, \lambda_{k_0}, T_0) \tag{229} $$
+
+Thus the separable fitting theory of Section 5.5 applies. To perform the computations outlined there, we must evaluate the derivatives of $f_{ij}$ in (226) with respect to $\lambda_j$ and $T_0$; this job is facilitated by the following two formulas:
+
+$$ \frac{\partial \psi}{\partial u} = -\lambda \phi(u, \lambda, s) \tag{230} $$
+
+and
+
+$$ \frac{\partial \psi}{\partial \lambda} = (\lambda s^2 - u)\phi(u, \lambda, s) - \sqrt{\frac{2}{\pi}} s \exp\left(-\frac{u^2}{2s^2}\right) \tag{231} $$
+
+(230) shows that $\psi$ is a decreasing function of $u$. From (226–227) and (230–231) we obtain
+
+$$ \frac{\partial f_{ij}}{\partial T_0} = \sum_{p=1}^{k_g} \omega_p f_{ijp}^T \tag{232} $$
+
+$$ \frac{\partial f_{ij}}{\partial \lambda_j} = \sum_{p=1}^{k_g} \omega_p f_{ijp}^\lambda \tag{233} $$
+
+where
+
+$$ f_{ijp}^T = -\lambda_j \delta \phi_{ijp} \tag{234} $$
+
+$$ f_{ijp}^\lambda = \phi_{i-1,jp} - (\lambda_j s_p^2 - t_{ip}) \delta \phi_{ijp} + s_p \sqrt{2/\pi} \delta_{ip}^{\text{exp}} \tag{235} $$
+
+Moreover
+
+$$ \phi_{ijp} = \phi(t_{ip}, \lambda_j, s_p) \tag{236} $$
+
+$$ \delta\phi_{ijp} = \phi_{ijp} - \phi_{i-1,jp} \tag{237} $$
+
+$$ \delta_{ip}^{\text{exp}} = \exp\left(-\frac{t_{ip}^2}{2s_p^2}\right) - \exp\left(-\frac{t_{i-1,p}^2}{2s_p^2}\right) \tag{238} $$
+
+Considering now the practical computation of $\phi$ by (218), we reparametrize the arguments by the substitution
+
+$$ (x, y) = \left( \frac{\lambda s^2 - u}{\sqrt{2}s}, \frac{u}{\sqrt{2}s} \right) \tag{239} $$
+
+This gives
+
+$$ \phi = \exp(x^2 - y^2) \operatorname{erfc}(x) \tag{240} $$
+
+where
+
+$$ x + y > 0 \tag{241} $$
+
+If $x \le 0$ it is safe to use (240) since in that case $x^2 - y^2 < 0$ due to (241). But when $x \gg 0$ a numerical problem may occur. Let us express $\operatorname{erfc}(x)$ in terms of a confluent hypergeometric function. Using the Whittaker notation [63] we get
+
+$$ \operatorname{erfc}(x) = \frac{1}{\sqrt{\pi}} x^{-\frac{1}{2}} \exp\left(-\frac{1}{2}x^2\right) W_{-\frac{1}{4}, \frac{1}{4}}(x^2) \tag{242} $$
+
+which has the asymptotic expression
+
+$$ \operatorname{erfc}(x) \approx \frac{1}{\sqrt{\pi}\, x} \left(1 + O(x^{-2})\right) e^{-x^2}, \quad x \to +\infty \tag{243} $$
+---PAGE_BREAK---
+
+Now suppose that also $x^2 - y^2 \gg 0$. Then (243) shows that $\phi$ itself is small; nevertheless, the first factor of (240) is large and may cause an overflow in the computer. At the same time, the second factor is very small and likely to underflow. A remedy is to replace (240) by
+
+$$ \phi = \exp(-y^2) \operatorname{erfcx}(x) \quad (244) $$
+
+when $x > 0$, where erfcx stands for the *scaled complementary error function*
+
+$$ \operatorname{erfcx}(x) = \exp(x^2) \operatorname{erfc}(x) \quad (245) $$
+
+It is not hard to develop robust and accurate numerical approximations for this slowly varying function. Since
+
+$$ \operatorname{erfcx}(x) = \frac{1}{\sqrt{\pi}} x^{-\frac{1}{2}} \exp\left(\frac{1}{2}x^2\right) W_{-\frac{1}{4}, \frac{1}{4}}(x^2) \quad (246) $$
+
+we have the integral representation [63],
+
+$$ \operatorname{erfcx}(x) = \frac{1}{\sqrt{\pi}} \int_{0}^{\infty} (x^2 + t)^{-\frac{1}{2}} e^{-t} dt \quad (247) $$
+
+which decreases monotonically for increasing $x > 0$. The function $\operatorname{erfcx}(x)$ behaves asymptotically as $1/(\sqrt{\pi}x)$ owing to (243).
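A minimal Python sketch of such an approximation (a naive two-regime scheme, not the production algorithm of the program): for moderate $x$ the defining product (245) is used directly, while for large $x$, where $\exp(x^2)$ would overflow, the leading terms of the asymptotic series behind (243) take over.

```python
import math

def erfcx(x):
    """Scaled complementary error function (245), sketched for x >= 0 only."""
    if x < 25.0:
        # exp(x^2) is still representable in double precision here
        return math.exp(x * x) * math.erfc(x)
    # leading terms of the asymptotic expansion, cf. (243)
    x2 = x * x
    return (1.0 - 0.5 / x2 + 0.75 / (x2 * x2)) / (math.sqrt(math.pi) * x)
```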
+
+We may include the special case $s_p = 0$ (no instrumental smearing) by letting $s \to 0$ in (218), (223), (230), (231). Using the limit formulas
+
+$$ \lim_{s \to 0} \exp\left(-\frac{u^2}{2s^2}\right) = 0 \quad (248) $$
+
+and
+
+$$ \lim_{s \to 0} \operatorname{erfc}\left(\frac{u}{\sqrt{2}s}\right) = \begin{cases} 2 & u < 0 \\ 0 & u > 0 \end{cases} \quad (249) $$
+
+where (248) holds good for almost all $u$, we may collect the necessary limit functions in the following scheme:
+
+| Function | $u < 0$ | $u > 0$ |
+|---|---|---|
+| $\phi$ | 0 | $2e^{-\lambda u}$ |
+| $\psi$ | 2 | $2e^{-\lambda u}$ |
+| $\partial\psi/\partial u$ | 0 | $-2\lambda e^{-\lambda u}$ |
+| $\partial\psi/\partial\lambda$ | 0 | $-2u\, e^{-\lambda u}$ |
+
+(250)
+
+We also want to be able to handle the special case of a fixed zero lifetime $\tau = 0$, i.e. $\lambda = 1/\tau = \infty$. This requires that we compute the limiting forms of (218) and (230) as $\lambda \to \infty$. We find:
+
+$$ \lim_{\lambda \to +\infty} \phi(u, \lambda, s) = 0 \quad (251) $$
+
+$$ \lim_{\lambda \to +\infty} \left[ \frac{\partial \psi}{\partial u}(u, \lambda, s) = -\lambda \phi(u, \lambda, s) \right] = -\frac{1}{s} \sqrt{\frac{2}{\pi}} \exp\left(-\frac{u^2}{2s^2}\right) \quad (252) $$
+
+which implies that (232) becomes
+
+$$ \lim_{\lambda \to +\infty} \frac{\partial f_{ij}}{\partial T_0} = -\sqrt{\frac{2}{\pi}} \sum_{p=1}^{k_g} \frac{\omega_p}{s_p} \delta_{ip}^{\exp} \quad (253) $$
+
+In Section 2.2 we mentioned the types of constraints which could be imposed on the parameters in POSITRONFIT. Those constraints that fix one of the “primary” fitting parameters listed in (228) and (229) are realized by deleting the corresponding components from $\alpha$ or $\beta$. This may apply to $B$, $\lambda_j$, and $T_0$. Constraints of the type “fixed relative intensity” are not of this simple type because the relative intensities $\alpha_j / (\sum \alpha_{j'})$ are not primary parameters. But obviously such constraints are expressible as linear constraints on the linear parameters $\alpha_j$, i.e. relations of the form
+
+$$ \sum h_{ij} \alpha_j = \gamma_i \quad (254) $$
+
+where $h_{ij}$ are known coefficients, cf. (59). The same holds good for constraints of the type “a linear combination of the relative intensities = 0”, as well as the total-area constraint (19).
+---PAGE_BREAK---
+
+## 6.2 RESOLUTIONFIT
+
+Although the basic model in RESOLUTIONFIT is the same as in POSITRONFIT, there are certain differences regarding which parameters enter as fitting parameters since the standard deviations (widths) $s_p$ and the shifts $\Delta t_p$ are fitting parameters in RESOLUTIONFIT. Hence (229) should be replaced by
+
+$$ \beta = (\lambda_1, \dots, \lambda_{k_0}, T_0, s_1, \dots, s_{k_g}, \Delta_1, \dots, \Delta_{k_g}) \quad (255) $$
+
+In Section 2.5 we mentioned the types of constraints which could be imposed on the parameters in RESOLUTIONFIT. Some of the parameters of (255) can be fixed and in that case should be deleted from $\beta$. This may apply to $\lambda_j$, $s_p$, and $\Delta t_p$. Notice that $T_0$ in RESOLUTIONFIT is always a free parameter. As a consequence, we must require that at least one of the shifts $\Delta t_p$ be fixed.
+
+In addition to (230–238) we shall need the following formulas:
+
+$$ \frac{\partial \psi}{\partial s} = \lambda^2 s \phi(u, \lambda, s) - \sqrt{\frac{2}{\pi}} \lambda \exp\left(-\frac{u^2}{2s^2}\right) \quad (256) $$
+
+$$ \frac{\partial f_{ij}}{\partial \Delta_p} = -\lambda_j \omega_p \delta \phi_{ijp} \quad (257) $$
+
+$$ \frac{\partial f_{ij}}{\partial s_p} = \omega_p \{-\lambda_j^2 s_p \delta\phi_{ijp} + \lambda_j \sqrt{2/\pi} \delta_{ip}^{\text{exp}}\} \quad (258) $$
+
+RESOLUTIONFIT also requires some extra limit formulas for $\lambda = \infty$. Applying (252) to (257) we obtain
+
+$$ \lim_{\lambda \to +\infty} \frac{\partial f_{ij}}{\partial \Delta_p} = -\sqrt{\frac{2}{\pi}} \frac{\omega_p}{s_p} \delta_{ip}^{\text{exp}} \quad (259) $$
+
+The limit of (256) is found to be
+
+$$ \lim_{\lambda \to +\infty} \left[ \frac{\partial \psi}{\partial s}(u, \lambda, s) = \lambda^2 s \phi(u, \lambda, s) - \sqrt{\frac{2}{\pi}} \lambda \exp\left(-\frac{u^2}{2s^2}\right) \right] = \frac{u}{s^2} \sqrt{\frac{2}{\pi}} \exp\left(-\frac{u^2}{2s^2}\right) \quad (260) $$
+
+and applying this to (258) we get
+
+$$ \lim_{\lambda \to +\infty} \frac{\partial f_{ij}}{\partial s_p} = \sqrt{\frac{2}{\pi}} \frac{\omega_p}{s_p^2} \left\{ t_{i-1,p} \exp\left(-\frac{t_{i-1,p}^2}{2s_p^2}\right) - t_{ip} \exp\left(-\frac{t_{ip}^2}{2s_p^2}\right) \right\} \quad (261) $$
+
+The limit formulas (251), (252), (260) can be verified by using (239–240) and (243).
+
+In RESOLUTIONFIT we compute shape parameters for the fitted resolution curve. To explain our method, we consider a general composite Gaussian resolution function,
+
+$$ f(t) = \sum_{j=1}^{k} \alpha_j G(t, \sigma_j, \Delta_j) \quad (262) $$
+
+where $k \in \mathbb{N}$, $\alpha_j > 0$, $t \in \mathbb{R}$, $\sigma_j > 0$, $\Delta_j \in \mathbb{R}$, and $\sum_{j=1}^k \alpha_j = 1$. Moreover,
+
+$$ G(t, \sigma, \Delta) = \frac{1}{\sqrt{2\pi}\sigma} \exp\left(-\frac{(t-\Delta)^2}{2\sigma^2}\right) \quad (263) $$
+
+is a normalized Gaussian satisfying
+
+$$ \int_{-\infty}^{\infty} G(t, \sigma, \Delta) dt = 1 \quad (264) $$
+
+We assume that (262) is a unimodal function. Then the peak position $t_p$ is uniquely determined by solving the equation $f'(t) = 0$ with respect to $t$. Seeking a numerical solution we shall use the Newton-Raphson method,
+
+$$ t := t - \frac{f'(t)}{f''(t)} \quad (265) $$
+---PAGE_BREAK---
+
+We have
+
+$$f'(t) = \sum_{j=1}^{k} \alpha_j G'(t, \sigma_j, \Delta_j), \quad f''(t) = \sum_{j=1}^{k} \alpha_j G''(t, \sigma_j, \Delta_j) \qquad (266)$$
+
+where
+
+$$G'(t, \sigma, \Delta) = -\frac{t-\Delta}{\sqrt{2\pi}\sigma^3} \exp\left(-\frac{(t-\Delta)^2}{2\sigma^2}\right), \quad G''(t, \sigma, \Delta) = \frac{(t-\Delta)^2-\sigma^2}{\sqrt{2\pi}\sigma^5} \exp\left(-\frac{(t-\Delta)^2}{2\sigma^2}\right) \quad (267)$$
+
+An initial guess of $t$ to start the NR procedure may be provided from a pre-tabulation of $f(t)$ in (262), using a suitable fineness of the $t$-entries. Such a table may also present a numerical verification that (262) is indeed unimodal. We also want to calculate a table of full-width-at-1/$n$-max values for a series $n_1, n_2, \dots, n_m$ of $n$-values. We take $m=7$ and use the values in the following table:
+
+| $i$ | 1 | 2 | 3 | 4 | 5 | 6 | 7 |
+|---|---|---|---|---|---|---|---|
+| $n_i$ | 2 | 5 | 10 | 30 | 100 | 300 | 1000 |
+
+Each of the equations
+
+$$f(t) = \frac{1}{n_i} f(t_p), \quad i = 1, \dots, m \qquad (268)$$
+
+will each be satisfied by two values of $t$, such that we have
+
+$$f(t_p - \tau_i^-) = f(t_p + \tau_i^+) = \frac{1}{n_i} f(t_p), \quad i = 1, \dots, m \qquad (269)$$
+
+where $\tau_i^-$ and $\tau_i^+$ are both positive. The corresponding full-width and midpoint are
+
+$$FW_i = \tau_i^+ + \tau_i^-, \quad \mu_i = t_p + \frac{1}{2}(\tau_i^+ - \tau_i^-) \qquad (270)$$
+
+It is problematic to use the Newton-Raphson method directly to obtain a numerical solution of (268). The reason is that the inflexion points of the resolution curve (262) may trap the NR procedure. We may still use NR, but we shall work with the logarithmic counterpart of (268),
+
+$$\psi(t) = \ln f(t) - \ln f(t_p) + \ln n_i = 0, \quad i = 1, \dots, m \qquad (271)$$
+
+In analogy with (265) the NR rule here becomes
+
+$$t := t - \frac{\psi(t)}{\psi'(t)} \text{ where } \psi'(t) = \frac{f'(t)}{f(t)} \qquad (272)$$
+
+Again, our pre-tabulation of $f(t)$ will be useful in providing initial $t$-values for the NR iterations.
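The peak search can be sketched in Python (function and parameter names are illustrative), implementing the Newton-Raphson rule (265) with the derivatives (266)–(267):

```python
import math

SQ2PI = math.sqrt(2.0 * math.pi)

def dG(t, sig, d):
    """G' from (267)."""
    return -(t - d) / (SQ2PI * sig**3) * math.exp(-(t - d)**2 / (2.0 * sig * sig))

def d2G(t, sig, d):
    """G'' from (267)."""
    return ((t - d)**2 - sig * sig) / (SQ2PI * sig**5) * math.exp(-(t - d)**2 / (2.0 * sig * sig))

def peak_position(alphas, sigmas, deltas, t0):
    """Newton-Raphson iteration (265) for the mode of the unimodal composite (262)."""
    t = t0
    for _ in range(100):
        f1 = sum(a * dG(t, s, d) for a, s, d in zip(alphas, sigmas, deltas))
        f2 = sum(a * d2G(t, s, d) for a, s, d in zip(alphas, sigmas, deltas))
        step = f1 / f2
        t -= step
        if abs(step) < 1e-14:
            break
    return t
```

As in the text, the starting value $t_0$ must come from a pre-tabulation close enough to the peak for the iteration to converge.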
+
+# 7 Appendix C: Log-normal details
+
+## 7.1 Log-normal POSITRONFIT model formulas
+
+We shall here discuss the PALSfit3 extension of the POSITRONFIT model to cope with the situation where some of the lifetimes obey log-normal distributions. Such an extension requires a modification of the model description in Section 6.1.
+---PAGE_BREAK---
+
+When implementing the log-normal extension in POSITRONFIT we shall still work with annihilation rates rather than lifetimes in the internal model formulation. A stochastic annihilation rate $\lambda$ is related to the corresponding stochastic lifetime $\tau$ by
+
+$$ \lambda = \frac{1}{\tau}, \quad \ln \lambda = -\ln \tau \qquad (273) $$
+
+cf. (216). If $\tau$ has a log-normal distribution defined by (21-25), it follows from (273) that
+
+$$ \ln \lambda \sim \mathcal{N}(\ln \lambda_*, \sigma_*^2) \qquad (274) $$
+
+where
+
+$$ \lambda_* = \frac{1}{\tau_*} \qquad (275) $$
+
+and hence $\lambda$ is log-normally distributed with the same $\sigma_*$ parameter as $\tau$. Then the pdf for $\lambda$ becomes
+
+$$ f(\lambda) = f(\lambda; \lambda_*, \sigma_*) = \frac{1}{\lambda \sigma_* \sqrt{2\pi}} \exp \left( -\frac{1}{2\sigma_*^2} (\ln \lambda - \ln \lambda_*)^2 \right) \qquad (276) $$
+
+while the CDF is
+
+$$ F(\lambda) = \Phi\left(\frac{\ln \lambda - \ln \lambda_*}{\sigma_*}\right) \qquad (277) $$
+
+Extension of the classical POSITRONFIT model (Appendix B, Section 6.1) to the log-normal case requires evaluation of the log-normal probability-weighted integral of (223),
+
+$$ I = \int_{0}^{\infty} f(\lambda, \lambda_{*}, \sigma_{*}) \psi(u, \lambda, s) d\lambda \qquad (278) $$
+
+We shall also need the derivatives $\partial I / \partial u$, $\partial I / \partial \lambda_*$, $\partial I / \partial \sigma_*$. From (230) we obtain
+
+$$ \frac{\partial I}{\partial u} = - \int_{0}^{\infty} f(\lambda; \lambda_{*}, \sigma_{*}) \lambda \phi(u, \lambda, s) d\lambda \qquad (279) $$
+
+In the following we use the abbreviation
+
+$$ \mu(\lambda) = \ln \lambda - \ln \lambda_{*} = \ln(\lambda/\lambda_{*}) \qquad (280) $$
+
+To find $\partial I / \partial \lambda_*$ we must compute $\partial f(\lambda; \lambda_*, \sigma_*) / \partial \lambda_*$. The result is:
+
+$$ \frac{\partial}{\partial \lambda_{*}} f(\lambda; \lambda_{*}, \sigma_{*}) = f(\lambda; \lambda_{*}, \sigma_{*}) \frac{\mu(\lambda)}{\lambda_{*} \sigma_{*}^{2}} \qquad (281) $$
+
+Thus
+
+$$ \frac{\partial I}{\partial \lambda_*} = \int_0^\infty f(\lambda; \lambda_*, \sigma_*) \frac{\mu(\lambda)}{\lambda_* \sigma_*^2} d\lambda \qquad (282) $$
+
+Similarly for $\partial I / \partial \sigma_*$:
+
+$$ \frac{\partial}{\partial \sigma_{*}} f(\lambda; \lambda_{*}, \sigma_{*}) = -f(\lambda; \lambda_{*}, \sigma_{*}) \frac{\sigma_{*}^{2} - \mu(\lambda)^{2}}{\sigma_{*}^{3}} \qquad (283) $$
+
+$$ \frac{\partial I}{\partial \sigma_*} = -\int_0^\infty f(\lambda; \lambda_*, \sigma_*) \frac{\sigma_*^2 - \mu(\lambda)^2}{\sigma_*^3} \psi(u, \lambda, s) d\lambda \qquad (284) $$
+
+## 7.2 Fixing one of the log-normal parameters $\tau_m$ or $\sigma$
+
+In our least squares fitting procedures we want to be able to fix either the mean $\tau_m$ of the log-normal distribution or its standard deviation $\sigma$, or both. The cases ($\tau_m$ fixed, $\sigma$ free) and ($\tau_m$ free, $\sigma$ fixed) require special attention. Considering first $\tau_m$ fixed to $\tau_{m0}$ and $\sigma$ free, this implies that the intrinsic parameters $\tau_*$ (or $\lambda_* = 1/\tau_*$) and $\sigma_*$ cannot vary freely but are bounded by the constraint
+
+$$ \frac{1}{\lambda_*} \exp\left(\frac{1}{2}\sigma_*^2\right) = \tau_{m0} \qquad (285) $$
+---PAGE_BREAK---
+
+Equation (285) defines the function
+
+$$
+\lambda_{*}(\sigma_{*}) = \frac{1}{\tau_{m0}} \exp(\frac{1}{2}\sigma_{*}^{2}) \qquad (286)
+$$
+
+We may eliminate $\lambda_*$ from the pdf $f(\lambda; \lambda_*, \sigma_*)$ by considering $\sigma_*$ as the only free parameter. This gives the new pdf
+
+$$
+f(\lambda; \lambda_*(\sigma_*), \sigma_*) = f_1(\lambda; \sigma_*) \tag{287}
+$$
+
+If we interpret $f(\lambda; \lambda_*, \sigma_*)$ in (278) and (279) according to (287), these formulas still hold good.
+To obtain $\partial I / \partial \sigma_*$ we compute $\partial f_1(\lambda; \sigma_*) / \partial \sigma_*$. We have
+
+$$
+\frac{\partial f_1(\lambda; \sigma_*)}{\partial \sigma_*} = \frac{\partial f(\lambda; \lambda_*, \sigma_*)}{\partial \lambda_*} \frac{\mathrm{d}\lambda_*}{\mathrm{d}\sigma_*} + \frac{\partial f(\lambda; \lambda_*, \sigma_*)}{\partial \sigma_*} \qquad (288)
+$$
+
+where, by (286),
+
+$$
+\frac{d\lambda_*}{d\sigma_*} = \lambda_*\sigma_* \tag{289}
+$$
+
+By (281) and (283) we find that (284) should be replaced by
+
+$$
+\frac{\partial I}{\partial \sigma_{*}} = \int_{0}^{\infty} f(\lambda; \lambda_{*}, \sigma_{*}) \frac{\mu(\lambda)\sigma_{*}^{2} - \sigma_{*}^{2} + \mu(\lambda)^{2}}{\sigma_{*}^{3}} \psi(u, \lambda, s)d\lambda \quad (290)
+$$
+
+where $\mu(\lambda)$ was defined in (280). When the iteration on $\sigma_*$ has converged in the least-squares iteration process, we may improve the fit by a final recalculation of $\lambda_*$ by (286) before using (28) to evaluate $\sigma$. We proceed similarly when keeping $\tau_m$ free and fixing $\sigma$ to $\sigma_0$. Here we have the constraint
+
+$$
+\frac{1}{\lambda_{*}} \exp\left(\frac{1}{2}\sigma_{*}^{2}\right) \sqrt{\exp(\sigma_{*}^{2}) - 1} = \sigma_{0} \qquad (291)
+$$
+
+which defines the function
+
+$$
+\sigma_*(\lambda_*) = \sqrt{\ln\left(\frac{1}{2}\left(1 + \sqrt{1 + 4\lambda_*^2\sigma_0^2}\right)\right)} \quad (292)
+$$
+
+We obtain
+
+$$
+f(\lambda; \lambda_*, \sigma_*(\lambda_*)) = f_2(\lambda; \lambda_*) \tag{293}
+$$
+
+$$
+\frac{\partial f_2(\lambda; \lambda_*)}{\partial \lambda_*} = \frac{\partial f(\lambda; \lambda_*, \sigma_*)}{\partial \lambda_*} + \frac{\partial f(\lambda; \lambda_*, \sigma_*)}{\partial \sigma_*} \frac{\mathrm{d}\sigma_*}{\mathrm{d}\lambda_*} \quad (294)
+$$
+
+and then, with the abbreviation
+
+$$
+\kappa = 1 - \exp(-\sigma_*^2) \tag{295}
+$$
+
+we get
+
+$$
+\frac{d\sigma_*}{d\lambda_*} = \frac{\kappa}{1+\kappa} \frac{1}{\lambda_* \sigma_*} \qquad (296)
+$$
+
+$$
+\frac{\partial I}{\partial \lambda_{*}} = - \int_{0}^{\infty} f(\lambda; \lambda_{*}, \sigma_{*}) \frac{\kappa \sigma_{*}^{2} - (1 + \kappa) \sigma_{*}^{2} \mu(\lambda) - \kappa \mu(\lambda)^{2}}{(1 + \kappa) \lambda_{*} \sigma_{*}^{4}} \psi(u, \lambda, s) d\lambda \quad (297)
+$$
+
+In this case we should use (292) to make a final recalculation of $\sigma_*$ before using (26) to evaluate $\tau_m$.
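As a numerical cross-check (a Python sketch with illustrative values), (292) can be verified to invert the constraint (291), and the derivative (296) can be compared against a finite difference:

```python
import math

def sigma_star(lam_s, sigma0):
    """sigma_*(lambda_*) from eq. (292)."""
    return math.sqrt(math.log(0.5 * (1.0 + math.sqrt(1.0 + 4.0 * lam_s**2 * sigma0**2))))

def constraint_lhs(lam_s, sig_s):
    """Left-hand side of the constraint (291)."""
    return math.exp(0.5 * sig_s**2) * math.sqrt(math.exp(sig_s**2) - 1.0) / lam_s

def dsigma_dlam(lam_s, sig_s):
    """Derivative (296), with kappa from (295)."""
    kappa = 1.0 - math.exp(-sig_s**2)
    return kappa / ((1.0 + kappa) * lam_s * sig_s)
```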
+
+## 7.3 Jacobian matrix for output presentation
+
+We shall also need the Jacobian matrix
+
+$$
+\frac{\partial (\tau_m, \sigma)}{\partial (\tau_*, \sigma_*)} = \begin{pmatrix} \frac{\partial \tau_m}{\partial \tau_*} & \frac{\partial \tau_m}{\partial \sigma_*} \\ \frac{\partial \sigma}{\partial \tau_*} & \frac{\partial \sigma}{\partial \sigma_*} \end{pmatrix} \qquad (298)
+$$
+---PAGE_BREAK---
+
+which is used for covariance estimation for the output presentation. From (26) and (28) we obtain
+
+$$ \frac{\partial \tau_m}{\partial \tau_*} = \exp\left(\frac{1}{2}\sigma_*^2\right) \qquad (299) $$
+
+$$ \frac{\partial \tau_m}{\partial \sigma_*} = \tau_* \sigma_* \exp\left(\frac{1}{2}\sigma_*^2\right) \qquad (300) $$
+
+$$ \frac{\partial \sigma}{\partial \tau_*} = \exp\left(\frac{1}{2}\sigma_*^2\right) \sqrt{\exp(\sigma_*^2) - 1} \qquad (301) $$
+
+$$ \frac{\partial \sigma}{\partial \sigma_{*}} = \frac{\tau_{*} \sigma_{*} \exp\left(\frac{1}{2} \sigma_{*}^{2}\right) (2 \exp(\sigma_{*}^{2}) - 1)}{\sqrt{\exp(\sigma_{*}^{2}) - 1}} \qquad (302) $$
+
+From (302) we derive
+
+$$ \lim_{\sigma_* \to 0} \frac{\partial \sigma}{\partial \sigma_*} = \tau_* \qquad (303) $$
+
+In the special cases where either $\tau_m$ or $\sigma$ is fixed, (299–302) hold good for the free parameter.
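The entries (299)–(302) can be checked against finite differences of the mean and standard deviation of the log-normal lifetime; the closed forms below follow from the constraints (285) and (291) with $\tau_* = 1/\lambda_*$. A Python sketch:

```python
import math

def tau_m(tau_s, sig_s):
    """Mean of the log-normal lifetime, cf. the constraint (285)."""
    return tau_s * math.exp(0.5 * sig_s**2)

def sigma_of(tau_s, sig_s):
    """Standard deviation of the log-normal lifetime, cf. the constraint (291)."""
    return tau_s * math.exp(0.5 * sig_s**2) * math.sqrt(math.exp(sig_s**2) - 1.0)

def jacobian(tau_s, sig_s):
    """Jacobian matrix (298) with entries (299)-(302)."""
    e = math.exp(0.5 * sig_s**2)
    r = math.sqrt(math.exp(sig_s**2) - 1.0)
    return [[e, tau_s * sig_s * e],
            [e * r, tau_s * sig_s * e * (2.0 * math.exp(sig_s**2) - 1.0) / r]]
```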
+
+## 7.4 Numerical evaluation of log-normal integrals
+
+All the integrals (278), (279), (282), (284), (290), (297) must be evaluated numerically. In a broader context, let us consider the evaluation of integrals of the form
+
+$$ J = \int_{0}^{\infty} f(x)g(x)dx \qquad (304) $$
+
+where *f* is a general pdf satisfying
+
+$$ \int_{0}^{\infty} f(x) dx = 1, \quad f(x) > 0 \text{ in } (0, \infty) \qquad (305) $$
+
+and *g* is an arbitrary function. The integral (304) may be thought of as a probability-weighted mean of *g*, and so we shall be inspired by the Monte Carlo method when devising a numerical integration scheme for it. We write
+
+$$ J = \int_{x=0}^{\infty} g(x) dF(x) \qquad (306) $$
+
+where *F* is the CDF corresponding to *f*. Then we substitute
+
+$$ F(x) = \xi, \quad x = F^{-1}(\xi), \quad \xi \in (0, 1) \qquad (307) $$
+
+(cf. description of the PALGEN program in Appendix D, Section 8.1) and obtain
+
+$$ J = \int_{0}^{1} g(F^{-1}(\xi)) d\xi \qquad (308) $$
+
+We may now apply a quadrature rule on (308) i.e. some approximation formula of the type
+
+$$ J \approx \sum_{\nu=1}^{N} w_{\nu} g(F^{-1}(\xi_{\nu})) \qquad (309) $$
+
+where *N* is the number of quadrature nodes, $w_\nu$ are positive weights, and $\xi_\nu$ the node abscissas. Moreover
+
+$$ \sum_{\nu=1}^{N} w_{\nu} = 1 \qquad (310) $$
+
+and
+
+$$ 0 < \xi_1 < \dots < \xi_N < 1 \qquad (311) $$
+
+We could use the simple rectangular rule
+
+$$ w_{\nu} = \frac{1}{N}, \quad \xi_{\nu} = \frac{2\nu - 1}{2N} \qquad (312) $$
+---PAGE_BREAK---
+
+but prefer the more efficient Gauss-Legendre (GL) method [61]. Returning to the specific pdf (276), we see from (308–309) that we shall need the inverse function $F^{-1}$ of the function $F$ in (277). This means that we must solve the equation
+
+$$F(x) = \Phi\left(\frac{\ln x - \ln \lambda_{*}}{\sigma_{*}}\right) = \xi \quad (313)$$
+
+for $x$. The result is
+
+$$x = F^{-1}(\xi) = \lambda_{*} \exp(\sigma_{*} \Phi^{-1}(\xi)) \quad (314)$$
+
+Thus, finding the inverse of $F$ is reduced to computing the inverse $\Phi^{-1}$ of the CDF $\Phi$ for $N(0,1)$. Standard software is available for this purpose. We can then write
+
+$$J = \int_{0}^{1} g(\lambda_{*} \exp(\sigma_{*} \Phi^{-1}(\xi))) d\xi \quad (315)$$
+
+The corresponding numerical approximation is
+
+$$J \approx \sum_{\nu=1}^{N} w_{\nu} g(\lambda_{*} \exp(\sigma_{*} \Phi^{-1}(\xi_{\nu}))) \quad (316)$$
+
+It is this formula that is used in the numerical approximation of all the log-normal integrals in Sections 7.1 and 7.2. The number of quadrature nodes $N$ is called the *log-normal fineness* of our approximation. In PALSfit3 the fineness defaults to $N = 32$ which provides a reasonable compromise between speed and accuracy, but other values might be entered by the user, either automatically through PALSfit3 or by external means. Indeed, this could be done by inserting a value of $N$ in the output option record for POSITRONFIT, cf. Section 3.2 Block 1. The number may be placed anywhere in position 5–80; omission or zero implies the default value.
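A minimal Python sketch of (316), using the standard-library `statistics.NormalDist` for $\Phi^{-1}$; for brevity it uses the rectangular nodes (312) rather than the Gauss-Legendre rule preferred in the program:

```python
import math
from statistics import NormalDist

_PHI_INV = NormalDist().inv_cdf   # quantile function of N(0, 1)

def lognormal_integral(g, lam_star, sig_star, N=32):
    """Approximate J of (315) with the rectangular rule (312) on the nodes (314)."""
    total = 0.0
    for nu in range(1, N + 1):
        xi = (2 * nu - 1) / (2.0 * N)                        # node abscissa, eq. (312)
        lam = lam_star * math.exp(sig_star * _PHI_INV(xi))   # lambda = F^{-1}(xi), eq. (314)
        total += g(lam)
    return total / N                                          # equal weights 1/N
```

With $g(\lambda) = \lambda$ and a fine enough rule, the result approaches the log-normal mean $\lambda_* e^{\sigma_*^2/2}$.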
+
+## 7.5 Discarding log-normal lifetime components
+
+During the least squares fitting procedure it may sometimes happen that $\sigma_*$ (or, equivalently, $\sigma$) tends to zero as the iterations proceed. In such a case PALSfit3 automatically discards the broadened component and replaces it by a “classic” component with a simple decaying exponential and then resumes the iterations from there. In other words PALSfit3 may detect situations where log-normal broadenings do not contribute to the fitting of the observations.
+
+# 8 Appendix D: Quality check
+
+## 8.1 Simulation of lifetime spectra
+
+In 1997 Hirade² developed a simulation program PALGEN, written in the BASIC language. PALGEN uses Monte Carlo in its simplest version, the so-called *direct simulation*. This is close to the real-world physical setup and admits an independent assessment of the capability of POSITRONFIT to recover correct lifetime values. Cheung et al. [64], also in 1997, describe a simulation tool which is equivalent to PALGEN; they used their program to study the merits of the POSITRONFIT software as an analysis tool. At DTU Risø Campus we later created a FORTRAN-based version of PALGEN.
+
+The main sampling principle in PALGEN is to split the recorded annihilation time $t$ as follows:
+
+$$t = T_0 + t_{\text{anni}} + t_{\text{gauss}} + t_{\text{shift}} \quad (317)$$
+
+²Personal communication, Tetsuya Hirade, then at Department of Materials Science, Japan Atomic Energy Research Institute.
+---PAGE_BREAK---
+
+Here $T_0$ is the given time-zero, $t_{\text{anni}}$ the true annihilation time, $t_{\text{gauss}}$ the instrumental smear component, and $t_{\text{shift}}$ a deterministic shift value associated with the latter. We begin with the sampling of $t_{\text{anni}}$.
+
+First a random number determines the actual lifetime component from the given intensities. Then, assuming first a fixed lifetime $\tau$, we have the exponential probability density function (pdf) for $x = t_{\text{anni}}$,
+
+$$f(x) = \frac{1}{\tau} \exp\left(-\frac{x}{\tau}\right) \qquad (318)$$
+
+This corresponds to the cumulative distribution function (CDF)
+
+$$F(x) = 1 - \exp\left(-\frac{x}{\tau}\right) \qquad (319)$$
+
+To sample $x = t_{\text{anni}}$ we use the classical Monte Carlo formula,
+
+$$x = F^{-1}(\xi) \qquad (320)$$
+
+which is the inverse form of
+
+$$F(x) = \xi \qquad (321)$$
+
+where $\xi \in (0, 1)$ is a uniform random number. By replacing $\xi$ with $1 - \xi$ we then obtain the sampling formula
+
+$$t_{\text{anni}} = -\tau \ln \xi \qquad (322)$$
+
+Next we perform the sampling of $t_{\text{gauss}}$. Assuming a composite Gaussian distribution, we first pick one of the Gaussian components. Let the standard deviation of this be $\sigma$. Then we use the well-known polar method of Box and Muller [65] to generate a standardized normal variate $\eta$ with mean 0 and variance 1, and obtain
+
+$$t_{\text{gauss}} = \eta \sigma \qquad (323)$$
+
+Finally, the selected Gaussian component determines $t_{\text{shift}}$. We have now obtained a new sample value of $t$ as given by (317). Using a binning procedure this is converted to a count in a certain channel of the simulation spectrum to be generated. The sampling of the background proceeds independently of the remaining spectrum and is accomplished by simple uniform multinomial sampling with bins = channels.
+
+In Section 2.3 we discussed a log-normal extension of the POSITRONFIT model, in which the lifetime was assumed to be a stochastic variable $\tau$ distributed with the pdf given in (25). To cover this situation, only a minor addition to PALGEN was needed. In order to sample $\tau$ we must know $\tau_*$ and $\sigma_*$. Then we can use (21) to generate $\ln \tau$ by the Box-Muller method. Having sampled $\tau$ we continue as before from (318) and onwards.
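The sampling steps above can be sketched in Python (names and driver values are illustrative, not PALGEN's actual code):

```python
import math
import random

def box_muller_polar(rng):
    """Standard normal variate by the polar method of Box and Muller."""
    while True:
        v1 = 2.0 * rng.random() - 1.0
        v2 = 2.0 * rng.random() - 1.0
        r2 = v1 * v1 + v2 * v2
        if 0.0 < r2 < 1.0:
            return v1 * math.sqrt(-2.0 * math.log(r2) / r2)

def pick(rng, values, weights):
    """Select a component from normalized intensities."""
    u, acc = rng.random(), 0.0
    for v, w in zip(values, weights):
        acc += w
        if u < acc:
            return v
    return values[-1]

def sample_t(rng, T0, taus, intensities, sig_g, omega_g, shift_g):
    """One annihilation time t = T0 + t_anni + t_gauss + t_shift, eq. (317)."""
    tau = pick(rng, taus, intensities)
    t_anni = -tau * math.log(1.0 - rng.random())        # eq. (322)
    j = pick(rng, range(len(sig_g)), omega_g)
    t_gauss = box_muller_polar(rng) * sig_g[j]          # eq. (323)
    return T0 + t_anni + t_gauss + shift_g[j]
```

Binning the returned values into channels then builds up the simulated spectrum.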
+
+## 8.2 Verifying POSITRONFIT by statistical analysis
+
+We have developed an ad-hoc tool POSCHECK which is a batch of software components intended to run consecutively under Windows, cf. the following bat file:
+
+```
+rem Check POSITRONFIT by using simulated spectra
+PALGEN
+rem PALGEN reads generate.inp, produces spectrum.dat and generate.log
+FILECOPY
+rem FILECOPY reads template.pfc and generate.inp, produces generate.pfc
+rem template.pfc describes how the POSITRONFIT analysis should be done
+POSITRONFIT generate.pfc positron.out
+rem run POSITRONFIT with input generate.pfc and output positron.out
+PANALYSE
+rem PANALYSE makes statistical analysis of POSITRONFIT output
+rem PANALYSE reads generate.inp and positron.out, produces psummary.txt
+if errorlevel 1 goto quit
+:quit
+```
+---PAGE_BREAK---
+
+We restrict our scope to verification of POSITRONFIT without source correction. POSCHECK has 3 main components:
+
+- Simulation program PALGEN as described in Section 8.1
+
+- POSITRONFIT
+
+- Statistical analyser PANALYSE
+
+From given input data, PALGEN produces a random set of, say, $n$ individual counting spectra, collected in a single spectrum file. Each spectrum is then analysed by POSITRONFIT. The result is a (multidimensional) sample of $n$ individual outputs containing parameter estimates and estimated standard deviations. After this, the auxiliary program PANALYSE collects all these sample estimates and makes a simple statistical overview ("tally") analysis. In the POSITRONFIT input we use off-central parameter guesses. We give here an illustrative example of applying POSCHECK where we use the following input data:
+
+- Total number of channels = 512, fit range = [35,512]
+
+- Background = 680, total area = $9 \cdot 10^6$ (without Bg); free lifetimes, Bg, and total area
+
+- $C = 0.0773$ ns/ch, cf. (11) in Section 2.2
+
+- Instrumental resolution function with 1 gaussian, FWHM = 0.42 ns
+
+- 2 lifetime components with $\tau = 0.30$ ns and $\tau = 2.00$ ns, respectively
+
+- No log-normal smearing of lifetime components
+
+- Lifetime intensities: 60% and 40%, respectively
+
+- $T_0 = 136$
+
+- Sample size $n = 100$
+
+We show below (part of) the output produced by PANALYSE:
+
+ANALYSIS REPORT OBTAINED FROM 100 SPECTRA
+INITIAL RANDOM NUMBER: 54711441
+
+TALLY STATISTICS FOR REDUCED CHI-SQUARE
+
+SAMPLE MEAN (SM): 1.014088
+SAMPLE DEVI: 0.063280
+MEAN PREDICTED DEVI 0.065094
+SEM: 0.006328
+U=(SM-TARGET)/SEM: 2.226363
+MEAN SIGNIFICANCE % 57.246279
+
+TALLY STATISTICS FOR LIFETIMES
+
+TARGET: 0.300000 2.000000
+SAMPLE MEAN (SM): 0.299974 1.999719
+SAMPLE DEVI: 0.000448 0.001898
+MEAN PREDICTED DEVI 0.000400 0.002000
+SEM: 0.000045 0.000190
+U=(SM-TARGET)/SEM: -0.579734 -1.480639
+
+TALLY STATISTICS FOR INTENSITIES
+
+TARGET: 60.000000 40.000000
+SAMPLE MEAN (SM): 59.999830 40.000170
+SAMPLE DEVI: 0.046572 0.046572
+MEAN PREDICTED DEVI 0.042172 0.042172
+SEM: 0.004657 0.004657
+U=(SM-TARGET)/SEM: -0.036503 0.036503
+
+TALLY STATISTICS FOR BACKGROUND
+
+TARGET: 680.000000
+---PAGE_BREAK---
+
+SAMPLE MEAN (SM): 680.081674
+SAMPLE DEVI: 1.002812
+MEAN PREDICTED DEVI 1.500642
+SEM: 0.100281
+U=(SM-TARGET)/SEM: 0.814450
+
+TALLY STATISTICS FOR T0
+TARGET: 136.000000
+SAMPLE MEAN (SM): 136.000038
+SAMPLE DEVI: 0.002181
+MEAN PREDICTED DEVI 0.002300
+SEM: 0.000218
+U=(SM-TARGET)/SEM: 0.174251
+
+CORRELATION MATRIX AVERAGED OVER 100 PREDICTIONS
+
+| | INTEN | INTEN | BACKG | LIFET | LIFET | TZERO |
+|---|---|---|---|---|---|---|
+| INTEN | 1.000 | -1.000 | -0.112 | 0.666 | 0.706 | -0.249 |
+| INTEN | -1.000 | 1.000 | 0.112 | -0.666 | -0.706 | 0.249 |
+| BACKG | -0.112 | 0.112 | 1.000 | -0.107 | -0.273 | 0.056 |
+| LIFET | 0.666 | -0.666 | -0.107 | 1.000 | 0.540 | -0.682 |
+| LIFET | 0.706 | -0.706 | -0.273 | 0.540 | 1.000 | -0.233 |
+| TZERO | -0.249 | 0.249 | 0.056 | -0.682 | -0.233 | 1.000 |
+
+PEARSON CORRELATION MATRIX BY 100 PARAMETER ESTIMATES
+
+| | INTEN | INTEN | BACKG | LIFET | LIFET | TZERO |
+|---|---|---|---|---|---|---|
+| INTEN | 1.000 | -1.000 | -0.052 | 0.742 | 0.686 | -0.273 |
+| INTEN | -1.000 | 1.000 | 0.052 | -0.742 | -0.686 | 0.273 |
+| BACKG | -0.052 | 0.052 | 1.000 | -0.123 | -0.322 | 0.100 |
+| LIFET | 0.742 | -0.742 | -0.123 | 1.000 | 0.575 | -0.688 |
+| LIFET | 0.686 | -0.686 | -0.322 | 0.575 | 1.000 | -0.266 |
+| TZERO | -0.273 | 0.273 | 0.100 | -0.688 | -0.266 | 1.000 |
+
+To explain the meaning of the tally statistics, note that the **TARGET** values are input values to **PALGEN** (except for the reduced $\chi^2$ where TARGET = 1). The **SAMPLE MEAN (SM)** and **SAMPLE DEVI** are for any parameter *x* given by the usual formulas for sample mean and sample standard deviation:
+
+$$ \text{SM} = \bar{x} = \frac{1}{n} \sum_{i=1}^{n} x_i \qquad (324) $$
+
+$$ \text{SAMPLE DEVI} = s = \sqrt{\frac{\sum_{i=1}^{n} (x_i - \bar{x})^2}{n-1}} \qquad (325) $$
+
+The MEAN PREDICTED DEVI is the average over the *n* predictions of the standard deviation made by POSITRONFIT. SEM is the Standard Error of the Mean:
+
+$$ \text{SEM} = (\text{SAMPLE DEVI}) / \sqrt{n} \qquad (326) $$
+
+The quantity (SM-TARGET)/SEM is called the *u*-value, and when *n* is not too small, we expect this to be distributed approximately in a standardized normal distribution, $u \sim N(0, 1)$.
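The tally quantities (324)–(326) and the $u$-value are straightforward to compute; a Python sketch:

```python
import math

def tally(sample, target):
    """Return SM, SAMPLE DEVI, SEM and the u-value, cf. (324)-(326)."""
    n = len(sample)
    sm = sum(sample) / n                                           # (324)
    devi = math.sqrt(sum((x - sm)**2 for x in sample) / (n - 1))   # (325)
    sem = devi / math.sqrt(n)                                      # (326)
    return sm, devi, sem, (sm - target) / sem
```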
+
+Our results show good agreement between TARGET and SAMPLE MEAN. Moreover, except for the Bg, there is a fair agreement between SAMPLE DEVI and MEAN PREDICTED DEVI. For the *u*-values we see that |*u*| is of the order of magnitude 1, which should be expected.
+
+We have used POSCHECK successfully in many other kinds of problems, including some with log-normal lifetime broadenings.
+
+If we increase the sample size *n* to very large values, there is a tendency to get |*u*|-values substantially larger than 1. This is because round-off and imperfect numerical algorithms begin to dominate over the statistical errors.
+---PAGE_BREAK---
+
+Concerning the correlation matrix we have made another kind of comparison between predicted values and sample values. The first of the two matrices, $M_1$, is formed by a simple averaging over the $n$ matrices predicted by POSITRONFIT. In the second one, $M_2$, the elements are estimated from the sample itself. Indeed, each element is computed as a Pearson Product Moment Correlation (PPMC). The general formula for this correlation coefficient between $x$ and $y$ is:
+
+$$r = \frac{n(\sum xy) - (\sum x)(\sum y)}{\sqrt{[n \sum x^2 - (\sum x)^2][n \sum y^2 - (\sum y)^2]}} \quad (327)$$
+
+In our example we see that there is a rough equivalence between $M_1$ and $M_2$, $M_1 \approx M_2$, again apart from the Bg entries. The bad news is that the convergence of $M_2 = M_2(n)$ as $n \to \infty$ is extremely slow. The good news is that we have the opposite situation for $M_1$, where the convergence $M_1(n) \to M_1(\infty)$ is fast. Already $M_1(1)$ (single sample) is a reasonable approximation to $M_1(\infty)$ (infinite sample).
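A direct Python transcription of (327) (a sketch; the function name is illustrative):

```python
import math

def pearson_r(xs, ys):
    """Pearson product-moment correlation coefficient, eq. (327)."""
    n = len(xs)
    sx, sy = sum(xs), sum(ys)
    sxy = sum(x * y for x, y in zip(xs, ys))
    sxx = sum(x * x for x in xs)
    syy = sum(y * y for y in ys)
    return (n * sxy - sx * sy) / math.sqrt(
        (n * sxx - sx * sx) * (n * syy - sy * sy))
```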
+
+## 8.3 Comparison of PALSfit3 with LT10
+
+The model function we assume in PALSfit3 to provide a realistic broadening (standard deviation) of each of the decaying exponentials is a log-normal distribution. This is the same as in the LT10 program by Giebel and Kansy [15]. We have therefore compared PALSfit3 with LT10 by simulating a series of spectra with PALGEN, analysing them with both programs, and subjecting the results of each to a statistical analysis like the one described in Section 8.2.
+
+The input data to the simulation were the following:
+
+| Total number of channels | = 2000 |
+| Area without background | = 4 · 10⁶ |
+| Background | = 800 |
+| Time-zero | = 259 |
+| Time per channel | = 0.015 ns |
+
+**Resolution Function:**
+
+| Number of Gaussians | = 2 |
+| FWHM of Gaussians | = 0.25, 0.35 (ns) |
+| Intensities of Gaussians | = 80, 20 (%) |
+| Shifts of Gaussians | = 0, 0.075 (ns) |
+
+**Lifetime components:**
+
+| Number of lifetimes | = 3 |
+| Lifetimes | = 0.15, 0.4, 1.8 (ns) |
+| Log-normal broadenings | = 0, 0.1, 0.4 (ns) |
+| Lifetime intensities | = 15, 40, 45 (%) |
+
+| Number of simulated spectra: | 20 |
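The two-Gaussian resolution function specified above can be evaluated directly. The following sketch (illustrative only, not PALGEN's internal code) builds it as an intensity-weighted sum of shifted, normalised Gaussians, converting FWHM to sigma via $\mathrm{FWHM} = 2\sqrt{2\ln 2}\,\sigma$:

```python
import math

FWHM_TO_SIGMA = 1.0 / (2.0 * math.sqrt(2.0 * math.log(2.0)))

def resolution(t, fwhms, intensities, shifts):
    """Intensity-weighted sum of normalised Gaussians (times in ns,
    intensities in per cent); the total area is 1."""
    total = 0.0
    for fwhm, inten, shift in zip(fwhms, intensities, shifts):
        sigma = fwhm * FWHM_TO_SIGMA
        total += (inten / 100.0) / (sigma * math.sqrt(2.0 * math.pi)) \
                 * math.exp(-0.5 * ((t - shift) / sigma) ** 2)
    return total

# the resolution function used for the simulated spectra above:
value_at_zero = resolution(0.0, [0.25, 0.35], [80.0, 20.0], [0.0, 0.075])
```

With the parameters above, the function integrates to unity over the time axis, as a resolution function should.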
+
+The input data to the analysis of the simulated spectra were the following:
+
+- Start- and stop-channel for the analyses: 240–1994 (PALSfit), 241–1995 (LT10); the channel numbering deviates by 1 between the two programs.
+
+- Time-zero, FIXED at 259 (PALSfit), 260 (LT10).
+---PAGE_BREAK---
+
+- Time per channel = 0.015 ns
+
+- *Resolution Function:*
+
+- Number of Gaussians = 2
+
+- FWHM of Gaussians = 0.25, 0.35 (ns)
+
+- Intensities of Gaussians = 80, 20 (%)
+
+- Shifts of Gaussians = 0, 0.075 (ns)
+
+- *Lifetime components:*
+
+- Number of lifetimes = 3
+
+- Lifetimes = 0.15, 0.4, 1.8, all GUESSED
+
+- Log-normal broadenings = 0 FIXED, 0.1 FIXED, 0.4 GUESSED
+
+The resolution function parameters were fixed in both programs. The reason is that the two programs treat these parameters differently: in LT10 the resolution function parameters may be fitted, while time-zero ($T_0$) is a fixed input parameter; in PALSfit (POSITRONFIT), on the other hand, the shape of the resolution function (also described by a sum of Gaussians) is fixed, while $T_0$ may be fitted. To make the fitting conditions identical despite this difference, $T_0$, the FWHMs and the shifts of the Gaussians were all chosen as fixed parameters in the analyses with both programs.
+
+Results of statistical analyses of output from the two programs are shown below:
+
+| Comparison for | PALSfit3 | | | LT10 | | |
+|---|---|---|---|---|---|---|
+| **Chi-square** | | | | | | |
+| Sample mean | 0.9974 | | | 0.9973 | | |
+| Sample std | 0.0272 | | | 0.0260 | | |
+| Mean predicted std | 0.0338 | | | | | |
+| **Lifetimes (ns)** | | | | | | |
+| Sample mean | 0.1491 | 0.3979 | 1.7970 | 0.1491 | 0.3981 | |
+| Sample std | 0.0065 | 0.0166 | 0.0156 | 0.0064 | 0.0167 | |
+| Mean predicted std | 0.0059 | 0.0138 | 0.0121 | 0.0049 | 0.0267 | |
+| **Broadening (ns)** | | | | | | |
+| Sample mean | | | 0.4021 | | | 0.4002 |
+| Sample std | | | 0.0403 | | | 0.0514 |
+| Mean predicted std | | | 0.0386 | | | 0.0481 |
+| **Intensities (%)** | | | | | | |
+| Sample mean | 14.7237 | 40.1539 | 45.1225 | 14.7331 | 40.1547 | |
+| Sample std | 1.8242 | 1.0487 | 0.8729 | 1.7958 | 1.0024 | |
+| Mean predicted std | 1.5698 | 1.0922 | 0.6467 | 0.0000 | 8.8982 | |
+| **Background** | | | | | | |
+| Sample mean | 800.2811 | | | 800.3022 | | |
+| Sample std | 0.9523 | | | 1.2596 | | |
+| Mean predicted std | 1.0414 | | | 0.7784 | |
+
+*Statistics for 20 PALGEN-generated spectra, analysed with PALSfit3 (Vers. 3.113) and LT10 (Vers. 10.2.2.2). "Sample mean" is the average of the results from the 20 spectra used in the test.*
+---PAGE_BREAK---
+
+"Sample std" is the sample standard deviation which measures the scatter of the 20 results. "Mean predicted std" is the average of the standard deviations for the fitted parameters as predicted by the programs.
+
+In general, there is good agreement between the “Sample mean” values obtained by the two programs for all the fitted parameters as well as for the Chi-square. Similarly, the “Sample std” values also agree well between the two programs, i.e. the scatter of the results is similar.
+
+However, when it comes to the standard deviations of the fitted parameters as estimated by the programs, there are quite large differences. PALSfit3 seems to predict the standard deviations of the fitted parameters fairly well (“Mean predicted std” agrees fairly well with “Sample std” for all parameters), while LT10 produces larger deviations for many of the parameters (in particular for the intensities, whose predicted standard deviations show large scatter).
+
+# 9 Appendix E: Exclusion of channels
+
+Exclusion of channels is a new feature in PALSfit3 designed for positron lifetime spectra with “bumps”: in some spectra (in particular those measured with positron beams), artifacts appear, mainly due to scattered positrons, in the form of bumps, see Fig. 2. (Thanks to David Keeble and Werner Egger for the inspiration for this part [66].)
+
+Fig. 2: Example of a PALS-spectrum with “bumps”.
+
+In order to extract the basic lifetime components from such spectra by a PALSfit analysis, these artifacts should be excluded from the fit. It is therefore convenient to be able to remove parts of the spectrum from the fitting process, and recent versions of PALSfit3 are able to do so.
+---PAGE_BREAK---
+
+Fig. 3: The spectrum from Fig. 2 with the total fitting range (indicated by green lines) and two exclusion ranges (indicated by cyan lines). Only the black parts of the spectrum are used by the fitting procedure.
+
+**Mathematical implementation:**
+
+The mathematical implementation of exclusion ranges simply consists of setting the least-squares fitting weights $w_i$ (Eq. (1) in Chapter 2) to zero within these channel ranges.
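Schematically (a sketch of the idea, not the actual POSITRONFIT code; the usual Poisson weight $w_i = 1/y_i$ is assumed outside the exclusion ranges):

```python
def chi_square(counts, model, exclusion_ranges):
    """Weighted least-squares sum with w_i = 0 inside exclusion ranges.
    exclusion_ranges is a list of inclusive (lo, hi) channel pairs."""
    def excluded(i):
        return any(lo <= i <= hi for lo, hi in exclusion_ranges)
    chi2 = 0.0
    for i, (y, f) in enumerate(zip(counts, model)):
        w = 0.0 if excluded(i) else 1.0 / y    # zero weight => channel ignored
        chi2 += w * (y - f) ** 2
    return chi2

# channels 2-3 carry a "bump"; excluding them removes their contribution:
counts = [100.0, 100.0, 500.0, 500.0, 100.0]
model  = [100.0, 100.0, 100.0, 100.0, 100.0]
print(chi_square(counts, model, [(2, 3)]))   # → 0.0
```

A channel with zero weight contributes nothing to the sum, so the minimiser is entirely unaffected by the data inside the excluded ranges.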
+
+**How to set up the exclusion ranges in PALSfit3:**
+
+In the tab "Spectrum", click "Change" to show the "Spectrum setup" window.
+
+When the actual spectrum has been selected and the fitting ranges defined (the green lines), press the Ctrl key on the keyboard and, while keeping it pressed, position the cursor at the beginning of the area you want to exclude. Press the left mouse button (still holding Ctrl), move the cursor to the end of the area you want to exclude, and release the mouse button. Finally, release the Ctrl key. The exclusion range is now indicated by two cyan-colored lines pointing away from it. In the same way you may specify up to three exclusion ranges.
+
+Fig. 4: The Spectrum setup window with one exclusion range
+---PAGE_BREAK---
+
+If a second range partly overlaps a previous one, the two ranges will collapse into one, covering the total range.
+
+In order to remove an exclusion range, click on the relevant “Drop” button.
+
+Numerical values of the limits of exclusion ranges are shown at the bottom of the “Spectrum setup” window. In order to change one of the limits of an exclusion range, click on the proper field (e.g. in Fig. 4, click on the “974” field if you want to increase/decrease the upper limit of the exclusion range) and use the + or - key on the keyboard to make the change.
+
+If an exclusion range attempts to cross one of the fitting limits (green lines), it will be dropped.
+
+The selection of ranges in Figs. 3 and 4 is only meant as an illustration and does not necessarily represent cases of realistic analyses.
+
+References
+
+[1] S. J. Tao, "Methods of data reduction in analyzing positron annihilation lifetime spectra," *IEEE Transactions on Nuclear Science*, vol. NS15, pp. 175-187, 1968.
+
+[2] P. C. Lichtenberger, J. R. Stevens, and T. D. Newton, "Analysis of counting distributions with a complex character," *Canadian Journal of Physics*, vol. 50, pp. 345-351, 1972.
+
+[3] P. Kirkegaard and M. Eldrup, "POSITRONFIT: A versatile program for analysing positron lifetime spectra," *Computer Physics Communications*, vol. 3, pp. 240-255, 1972.
+
+[4] P. Kirkegaard and M. Eldrup, "POSITRONFIT EXTENDED: A new version of a program for analysing positron lifetime spectra," *Computer Physics Communications*, vol. 7, pp. 401-409, 1974.
+
+[5] J. K. Stevenson, "Graphical method for the analysis of multicomponent exponential decay data," *International Journal of Applied Radiation and Isotopes*, vol. 28, pp. 900-918, 1977.
+
+[6] W. K. Warburton, "DBLCON: A version of POSITRONFIT with non-gaussian prompt for analysing positron lifetime spectra," *Computer Physics Communications*, vol. 13, pp. 371-379, 1978.
+
+[7] C. J. Virtue, R. J. Douglas, and B. T. A. McKee, "Interactive POSITRONFIT: A new version of a program for analysing positron lifetime spectra," *Computer Physics Communications*, vol. 15, pp. 97-105, 1978.
+
+[8] P. Kirkegaard, M. Eldrup, O. E. Mogensen, and N. J. Pedersen, "Program system for analysing positron lifetime spectra and angular correlation curves," *Computer Physics Communications*, vol. 23, pp. 307-335, 1981.
+
+[9] W. Puff, "PFPOSFIT: A new version of a program for analysing positron lifetime spectra with non-gaussian prompt curve," *Computer Physics Communications*, vol. 30, pp. 359-368, 1983.
+
+[10] G. H. Dai, J. Fu, and Q. S. Liu, "A program for the interactive analysis of positron lifetime spectra on personal computers with the aid of screen graphics," *Applied Physics A*, vol. 53, pp. 303-309, 1991.
+
+[11] J. Kansy, "Microcomputer program for analysis of positron annihilation lifetime spectra," *Nuclear Instruments and Methods in Physics Research A*, vol. 374, pp. 235-244, 1996.
+
+[12] J. V. Olsen, P. Kirkegaard, N. J. Pedersen, and M. Eldrup, "PALSfit: A new program for the evaluation of positron lifetime spectra," *Phys. Stat. Sol. (c)*, vol. 4, pp. 4004-4006, 2007.
+---PAGE_BREAK---
+
+[13] A. Karbowski, J. J. Fisz, G. P. Karwasz, J. Kansy, and R. S. Brusa, "Genetic algorithms for positron lifetime data," *Acta Physica Polonica A*, vol. 113, pp. 1365-1372, 2008.
+
+[14] C. Pascual-Izarra, A. W. Dong, S. J. Pas, A. J. Hill, B. J. Boyd, and C. J. Drummond, "Advanced fitting algorithms for analyzing positron annihilation lifetime spectra," *Nuclear Instruments and Methods in Physics Research A*, vol. 603, pp. 456-466, 2009.
+
+[15] D. Giebel and J. Kansy, "A new version of LT program for positron lifetime spectra analysis," *Materials Science Forum*, vol. 666, pp. 138-141, 2011.
+
+[16] D. M. Schrader and S. G. Usmar, "Nonexponential decay: The problems of deconvolution and lifetime extraction," in *Positron annihilation studies of fluids* (S. C. Sharma, ed.), pp. 215-238, World Scientific, 1988.
+
+[17] Y. Zhu and R. B. Gregory, "Analysis of positron annihilation lifetime data presented as a sum of convoluted exponentials with the program SPLMOD," *Nuclear Instruments and Methods in Physics Research A*, vol. 284, pp. 443-451, 1989.
+
+[18] R. B. Gregory and Y. Zhu, "Analysis of positron annihilation lifetime data by numerical Laplace inversion with the program CONTIN," *Nuclear Instruments and Methods in Physics Research A*, vol. 290, pp. 172-182, 1990.
+
+[19] R. B. Gregory, "Analysis of positron annihilation lifetime data by numerical Laplace inversion: Corrections for source terms and zero-time shift errors," *Nuclear Instruments and Methods in Physics Research A*, vol. 302, pp. 496-507, 1991.
+
+[20] A. Shukla and M. Peter, "Maximum entropy analysis of positron annihilation lifetime spectra," *Materials Science Forum*, vol. 105-110, pp. 1981-1984, 1992.
+
+[21] L. Hoffmann, A. Shukla, M. Peter, B. Barbiellini, and A. A. Manuel, "Linear and non-linear approaches to solve the inverse problem: applications to positron annihilation experiments," *Nuclear Instruments and Methods in Physics Research A*, vol. 335, pp. 276-287, 1993.
+
+[22] A. Shukla, M. Peter, and L. Hoffmann, "Analysis of positron lifetime spectra using quantified maximum entropy and a general linear filter," *Nuclear Instruments and Methods in Physics Research A*, vol. 335, pp. 310-317, 1993.
+
+[23] A. H. Deng, B. K. Panda, S. Fung, C. D. Beling, and D. M. Schrader, "Positron lifetime analysis using the matrix inverse Laplace transformation method," *Nuclear Instruments and Methods in Physics Research B*, vol. 140, pp. 439-448, 1998.
+
+[24] P. Kirkegaard, N. J. Pedersen, and M. Eldrup, "PATFIT-88: a data-processing system for positron annihilation spectra on mainframe and personal computers," Tech. Rep. Risoe-M; No.2740, Technical University of Denmark, DTU Risoe Campus, DK-4000 Roskilde, Denmark, February 1989. Available as pdf-file from www.palsfit.dk.
+
+[25] M. Metcalf and J. Reid, FORTRAN 90/95 explained. Oxford: Oxford University Press, 1996.
+
+[26] M. Eldrup, O. Mogensen, and G. Trumpy, "Positron lifetimes in pure and doped ice and in water," *J. Chem. Phys.*, vol. 57, pp. 495-504, 1972.
+
+[27] M. Eldrup, "Positron lifetimes in water and ice, and in frozen aqueous solutions," Tech. Rep. Risoe-R; No.254, Technical University of Denmark, DTU Risoe Campus, DK-4000 Roskilde, Denmark, 1971. PhD Thesis; available in pdf from www.palsfit.dk.
+
+[28] M. Eldrup, Y. M. Huang, and B. T. A. McKee, "Estimates of uncertainties in analysis of positron lifetime spectra for metals," *Appl. Phys.*, vol. 15, pp. 65-71, 1978.
+
+[29] W. Puff, "The influence of several parameters on the lifetimes and intensities of positron lifetime spectra of metals," *Appl. Phys.*, vol. 18, pp. 165-168, 1979.
+---PAGE_BREAK---
+
+[30] H. Sormann, P. Kindl, and W. Puff, "Investigations on the reliability of a multi-component analysis of positron lifetime spectra, using a new method of producing computer-simulated test spectra," *Nucl. Instr. and Meth.*, vol. 206, pp. 203-209, 1983.
+
+[31] H. Sormann, P. Kindl, and W. Puff, "Reliability tests of the multi-component analysis of positron lifetime spectra," in *Positron Annihilation* (P. C. Jain, R. M. Singru, and K. P. Gopinathan, eds.), pp. 848-850, World Scientific, 1985.
+
+[32] H. Sormann, P. Kindl, and G. Reiter, "Numerical evaluation of disturbed positron lifetime spectra," in *Proc. 8th Int. Conf. on Positron Annihilation*, (Gent, Belgium), pp. 645-647, 1988. World Scientific 1989.
+
+[33] M. Eldrup, I. K. MacKenzie, B. T. A. McKee, and D. Segers, "Open discussion on experimental techniques and data analysis (for bulk systems)," in *Proc. 8th Int. Conf. on Positron Annihilation*, (Gent, Belgium), pp. 216-226, 1988. World Scientific 1989.
+
+[34] M. Franz, T. Hehenkamp, J.-E. Kluin, and J. D. Gervey, "Computer simulation of positron-lifetime spectroscopy on thermally generated vacancies in copper and comparison with experimental results," *Phys. Rev. B*, vol. 48, pp. 3507-3510, 1993.
+
+[35] M. Eldrup, O. E. Mogensen, and J. H. Evans, "A positron annihilation study of the annealing of electron irradiated molybdenum," *J. Phys. F: Metal Phys.*, vol. 6, pp. 499-521, 1976.
+
+[36] H. E. Hansen, R. Talja, H. Rajainmäki, H. K. Nielsen, B. Nielsen, and R. M. Nieminen, "Positron studies of hydrogen-defect interactions in proton irradiated molybdenum," *Appl. Phys.*, vol. A36, pp. 81-92, 1985.
+
+[37] K. O. Jensen, M. Eldrup, N. J. Pedersen, and J. H. Evans, "Annealing behaviour of copper and nickel containing high concentrations of krypton studied by positron annihilation and other techniques," *J. Phys. F: Metal Phys.*, vol. 18, pp. 1703-1724, 1988.
+
+[38] F. M. Jacobsen, M. Eldrup, and O. E. Mogensen, "The temperature dependence of the two positronium bubble states in liquid SF₆," *Chem. Phys.*, vol. 50, pp. 393-403, 1980.
+
+[39] D. Lightbody, J. N. Sherwood, and M. Eldrup, "Temperature and phase dependence of positron lifetimes in solid cyclohexane," *Chem. Phys.*, vol. 93, pp. 475-484, 1985.
+
+[40] F. M. Jacobsen, O. E. Mogensen, and N. J. Pedersen, "High resolution positron lifetime measurements in non-polar liquids," in *Proc. 8th Int. Conf. on Positron Annihilation*, (Gent, Belgium), pp. 651-653, 1988. World Scientific 1989.
+
+[41] P. Kindl, W. Puff, and H. Sormann, "A free four-term analysis of positron lifetime spectra of γ-irradiated teflon," *Phys. Stat. Sol. (a)*, vol. 58, pp. 489-494, 1980.
+
+[42] M. Eldrup, B. Singh, S. Zinkle, T. Byun, and K. Farrell, "Dose dependence of defect accumulation in neutron irradiated copper and iron," *J. Nucl. Mater.*, vol. 307-311, pp. 912-917, 2002.
+
+[43] B. Somieski, T. E. M. Staab, and R. Krause-Rehberg, "The data treatment influence on the spectra decomposition in positron lifetime spectroscopy part 1," *Nucl. Instr. and Meth.* A, vol. 381, pp. 128-140, 1996.
+
+[44] T. E. M. Staab, B. Somieski, and R. Krause-Rehberg, "The data treatment influence on the spectra decomposition in positron lifetime spectroscopy part 2," *Nucl. Instr. and Meth.* A, vol. 381, pp. 141-151, 1996.
+
+[45] G. Dlubek, "Do MELT and CONTIN programs accurately reveal the lifetime distribution in polymers," *Nucl. Instr. and Meth.* B, vol. 142, pp. 191-202, 1998.
+
+[46] W. Lühr-Tanck, H. Bosse, T. Kurschat, M. Ederhof, A. Sager, and T. Hehenkamp, "Positron lifetime and Doppler-broadening measurements in noble metals and their alloys," *Appl. Phys.*, vol. A44, pp. 209-211, 1987.
+---PAGE_BREAK---
+
+[47] L. Dorikens-Vanpraet, D. Segers, and M. Dorikens, "The influence of geometry on the resolution of a positron annihilation lifetime spectrometer," *Appl. Phys.*, vol. 23, pp. 149-152, 1980.
+
+[48] S. Dannefaer, "On the effect of backscattering of γ-quanta and statistics in positron-annihilated lifetime measurements," *Appl. Phys.*, vol. A26, pp. 255-259, 1981.
+
+[49] H. Saito, Y. Nagashima, T. Kuvihara, and T. Hyodo, "A new positron lifetime spectrometer using a fast digital oscilloscope and BaF₂ scintillators," *Nucl. Instr. and Meth. in Phys. Res.* A, vol. 487, pp. 612-617, 2002.
+
+[50] J. Nissilä, K. Rytsölä, R. Aavikko, A. Laakso, K. Saarinen, and P. Hautojärvi, "Performance analysis of a digital positron lifetime spectrometer," *Nucl. Instr. and Meth. in Phys. Res.* A, vol. 538, pp. 778-789, 2005.
+
+[51] F. Becvar, J. Cizek, I. Prochazka, and J. Janotova, "The asset of ultra-fast digitizers for positron-lifetime spectroscopy," *Nucl. Instr. and Meth. in Phys. Res.* A, vol. 539, pp. 372-385, 2005.
+
+[52] D. W. Marquardt, "An algorithm for least-squares estimation of nonlinear parameters," *J. Soc. Ind. Appl. Math.*, vol. 11, pp. 431-441, 1963.
+
+[53] K. Levenberg, "A method for the solution of certain nonlinear problems in least squares," *Quart. Appl. Math.*, vol. 2, pp. 164-168, 1944.
+
+[54] J. J. Moré, "The Levenberg-Marquardt algorithm: Implementation and theory," in *Lecture Notes in Mathematics*, 630, Proceedings, Biennial Conference, Dundee 1977 (A. Dold, B. Eckmann, and G. A. Watson, eds.), pp. 105-116, Springer-Verlag, 1978.
+
+[55] J. J. Moré, B. S. Garbow, and K. E. Hillstrom, "User guide for MINPACK-1," Tech. Rep. ANL-80-74, Argonne National Laboratory, Applied Mathematics Division, 9700 South Cass Avenue, Argonne, Illinois 60439, 1980.
+
+[56] P. Kirkegaard, "A FORTRAN IV version of the sum-of-exponential least-squares code EXPOSUM," Tech. Rep. Risoe-M; No.1279, Technical University of Denmark, DTU Risoe Campus, DK-4000 Roskilde, Denmark, September 1970. Available in pdf from www.palsfit.dk.
+
+[57] P. Kirkegaard, "Some aspects of the general least-squares problem for data fitting," Tech. Rep. Risoe-M; No.1399, Technical University of Denmark, DTU Risoe Campus, DK-4000 Roskilde, Denmark, August 1971. Available in pdf from www.palsfit.dk.
+
+[58] P. Kirkegaard and M. Eldrup, "The least-squares fitting programme POSITRONFIT: principles and formulas," Tech. Rep. Risoe-M; No.1400, Technical University of Denmark, DTU Risoe Campus, DK-4000 Roskilde, Denmark, September 1971. Available in pdf from www.palsfit.dk.
+
+[59] L. Kaufman, "A variable projection method for solving separable nonlinear least squares problems," *BIT*, vol. 15, pp. 49-57, 1975.
+
+[60] L. Kaufman and V. Pereyra, "A method for separable nonlinear least squares problems with separable nonlinear equality constraints," *SIAM J. Num. An.*, vol. 15, pp. 12-20, 1978.
+
+[61] W. H. Press, S. A. Teukolsky, W. T. Vetterling, and B. P. Flannery, *Numerical Recipes in Fortran 90*. Cambridge: Cambridge University Press, 1996.
+
+[62] J. J. Dongarra, C. B. Moler, J. R. Bunch, and G. W. Stewart, *LINPACK Users' Guide*. Philadelphia: SIAM, 1979.
+
+[63] E. T. Whittaker and G. N. Watson, *A Course of Modern Analysis*. Cambridge: University Press, 4th ed., 1952.
+---PAGE_BREAK---
+
+[64] S. H. Cheung, C. D. Beling, S. H. Y. Fung, and P. K. MacKeown, "Investigation into the use of POSITRONFIT in the recovery of lifetime parameters using Monte-Carlo simulated lifetime spectra," *Materials Science Forum*, vol. 255-257, pp. 738-740, 1997.
+
+[65] G. Box and M. Muller, "A note on the generation of random normal deviates," *Annals Math. Statistics*, vol. 29, pp. 610-611, 1958.
+
+[66] D. J. Keeble, J. D. Major, L. Ravelli, W. Egger, and K. Durose, "Vacancy defects in CdTe thin films," *Phys. Rev. B*, vol. 84, p. 174122, 2011.
\ No newline at end of file
diff --git a/samples/texts_merged/3114781.md b/samples/texts_merged/3114781.md
new file mode 100644
index 0000000000000000000000000000000000000000..9d72c6e4133fcf766001cbeaec895c29c146b485
--- /dev/null
+++ b/samples/texts_merged/3114781.md
@@ -0,0 +1,41 @@
+
+---PAGE_BREAK---
+
+Spring 2013: PhD Analysis Preliminary Exam
+
+Instructions:
+
+1. All problems are worth 10 points. Explain your answers clearly. Un-clear answers will not receive credit. State results and theorems you are using.
+
+2. Use separate sheets for the solution of each problem.
+
+**Problem 1:** Consider the Hilbert space $\mathcal{H} = L^2([0, 1])$ with inner product $\langle f, h \rangle = \int_0^1 f(x)h(x) dx$. Let
+
+$$V = \{f \in L^2([0, 1]) : \int_0^1 xf(x)dx = 0\} \subset \mathcal{H}$$
+
+and $g(x) \equiv 1$. Find the closest element to $g$ in $V$. Justify your answer.
+
+**Problem 2:** Let $(B, \|\cdot\|)$ be a Banach space. Recall that the spectrum of a bounded linear operator $A \in L(B)$ is defined as
+
+$$\sigma(A) = \{\lambda \in \mathbb{C} : \lambda I - A \text{ is not invertible}\}.$$
+
+Consider a sequence of bounded linear operators $A_n \in L(B)$ which converges in norm to a bounded linear operator $A \in L(B)$. Assume that all spectra are the same, i.e. $\sigma_0 := \sigma(A_1) = \sigma(A_2) = \dots$. Show that $\sigma_0 \subset \sigma(A)$.
+
+**Problem 3:** Consider the function
+
+$$f(x) = \begin{cases} 2\sin(x) + 3, & x > 0 \\ -2\sin(x) + c, & x \le 0 \end{cases} .$$
+
+Find its distributional derivative $\frac{df}{dx}$ and $\frac{d^2 f}{dx^2}$. For which values of $c$ will $f \in W_{\text{loc}}^{1,p}(\mathbb{R})$? Justify your answer.
+
+**Problem 4:** Prove that
+
+$$\lim_{\epsilon \to 0^+} \int_0^\infty \frac{\epsilon}{\epsilon^2 + x} \sin(1/x) dx = 0.$$
+---PAGE_BREAK---
+
+**Problem 5:** Prove that the image of the space $C^k(\mathbb{T})$ of $k$ times continuously differentiable functions on the unit circle under the Fourier transform is contained in the set of sequences satisfying $|c_n| = o(|n|^{-k})$ and contains the set of sequences satisfying $|c_n| = o(|n|^{-k-1-\epsilon})$, $\epsilon > 0$. (Recall: $f(n) = o(g(n))$ as $n \to \infty$ means that for every $\delta > 0$ there exists an $N$ such that $|f(n)| < \delta|g(n)|$ for all $n > N$.)
+
+**Problem 6:** Let $I$ be the interval $(0, 1)$ and $q \ge p \ge 1$. Show that there exists a constant $C = C(p, q, I)$ such that
+
+$$ ||u||_{L^q(I)} \le C ||u||_{W^{1,p}(I)} $$
+
+for all $u \in W_0^{1,p}(I)$. (Hint: First show that $||u||_{L^\infty(I)} \le C ||u||_{W^{1,p}(I)}$ for all $u \in W_0^{1,p}(I)$.)
\ No newline at end of file
diff --git a/samples/texts_merged/324098.md b/samples/texts_merged/324098.md
new file mode 100644
index 0000000000000000000000000000000000000000..86ad3e1f51c99eaff60ba3a45f067c07cadebf0c
--- /dev/null
+++ b/samples/texts_merged/324098.md
@@ -0,0 +1,315 @@
+
+---PAGE_BREAK---
+
+On the Expressivity of Inconsistency Measures (Extended Abstract)*
+
+Matthias Thimm
+
+University of Koblenz-Landau
+Germany
+
+Abstract
+
+We survey recent approaches to inconsistency measurement in propositional logic and provide a comparative analysis in terms of their expressivity. For that, we introduce four different expressivity characteristics that quantitatively assess the number of different knowledge bases that a measure can distinguish. Our approach aims at complementing ongoing discussions on rationality postulates for inconsistency measures by considering expressivity as a desirable property. We evaluate a large selection of measures on the proposed characteristics and conclude that the distance-based measure $I_{\text{dalal}}^{\Sigma}$ from [Grant and Hunter, 2013] has maximal expressivity along all considered characteristics.
+
+1 Introduction
+
+Inconsistency measurement is about the quantitative assessment of the severity of inconsistencies in knowledge bases. Consider the following two knowledge bases $\mathcal{K}_1$ and $\mathcal{K}_2$ formalised in propositional logic:
+
+$$ \mathcal{K}_1 = \{a, b \lor c, \neg a \land \neg b, d\} \quad \mathcal{K}_2 = \{a, \neg a, b, \neg b\} $$
+
+Both knowledge bases are classically inconsistent as for $\mathcal{K}_1$ we have $\{a, \neg a \land \neg b\} \models \bot$ and for $\mathcal{K}_2$ we have, e. g., $\{a, \neg a\} \models \bot$. These inconsistencies render the whole knowledge bases useless for reasoning if one wants to use classical reasoning techniques. In order to make the knowledge bases useful again, one can either rely on non-monotonic/para-consistent reasoning techniques [Makinson, 2005; Priest, 1979] or one revises the knowledge bases appropriately to make them consistent [Hansson, 2001]. Looking at the knowledge bases $\mathcal{K}_1$ and $\mathcal{K}_2$ one can observe that the severity of their inconsistency is different. In $\mathcal{K}_1$, only two out of four formulas ($a$ and $\neg a \land \neg b$) are “participating” in making $\mathcal{K}_1$ inconsistent while for $\mathcal{K}_2$ all formulas contribute to its inconsistency. Furthermore, for $\mathcal{K}_1$ only two propositions ($a$ and $b$) are conflicting and using e. g. paraconsistent reasoning one could still infer meaningful statements about $c$ and $d$. For $\mathcal{K}_2$
+
+no such statement can be made. This leads to the assessment that $\mathcal{K}_2$ should be regarded more inconsistent than $\mathcal{K}_1$.
+
+Inconsistency measures can be used to analyse inconsistencies and to provide insights on how to repair them. An inconsistency measure $\mathcal{I}$ is a function on knowledge bases, such that the larger the value $\mathcal{I}(\mathcal{K})$ the more severe the inconsistency in $\mathcal{K}$. A lot of different approaches of inconsistency measures have been proposed, mostly for classical propositional logic [Hunter and Konieczny, 2004; 2008; 2010; Ma et al., 2009; Mu et al., 2011; Xiao and Ma, 2012; Grant and Hunter, 2011; 2013; McAreavey et al., 2014; Jabbour et al., 2014], but also for classical first-order logic [Grant and Hunter, 2008], description logics [Ma et al., 2007; Zhou et al., 2009], default logics [Doder et al., 2010], and probabilistic and other weighted logics [Ma et al., 2012; Thimm, 2013; Potyka, 2014]. Due to this plethora of inconsistency measures it is hard to determine which measure to use for an application and which measure is meaningful. Rationality postulates have been proposed that address the issue of assessing the quality of a measure—see e. g. [Hunter and Konieczny, 2006; Mu et al., 2011]—but many of these properties have been criticised to address only a specific point of view, see [Besnard, 2014] for a recent discussion on this topic.
+
+In this paper, we take a different perspective on the evaluation of inconsistency measures by considering a quantitative analysis of their expressivity, that is, we study how many different (inconsistent) knowledge bases can be distinguished by a given inconsistency measure. By the term expressivity we here refer to the property of a semantical concept—here, an inconsistency measure—and its capability to distinguish syntactical constructs—here, knowledge bases—, similarly as it has been done for the analysis of expressivity of semantics for other logical languages, see e. g. skepticism relations for formal argumentation [Baroni and Giacomin, 2008]. Our analysis is meant to complement the study on rationality postulates and is, of course, not meaningful on its own as the compliance of measures with the basic intuitions behind inconsistency measures can only be assessed by rationality postulates. However, we introduce expressivity of inconsistency measures as an additional method to evaluate their quality. In particular, we propose four different expressivity characteristics that quantify the relation between the number of different values of an inconsistency measure wrt.
+
+*This paper is an extended abstract of an article in the Artificial Intelligence Journal [Thimm, 2016a].
+---PAGE_BREAK---
+
+different notions of the size of the knowledge base, such as number of formulas or number of propositions. We conduct a thorough comparative analysis of different inconsistency measures from the literature [Hunter and Konieczny, 2008; 2010; Grant and Hunter, 2011; Knight, 2002; Thimm, 2016b; Grant and Hunter, 2013; Mu *et al.*, 2011; Jabbour and Raddaoui, 2013; Xiao and Ma, 2012; Doder *et al.*, 2010] and classify these measures in a hierarchy of expressivity. In our study, we made several interesting observations, such as the relation between the measure $\mathcal{I}_{\text{MI}}$ [Grant and Hunter, 2011] and Sperner families [Sperner, 1928], and between the measure $\mathcal{I}_{\text{MI}^c}$ [Grant and Hunter, 2011] and profiles of Boolean functions. One of our results is that the distance-based measure $\mathcal{I}_{\text{dalal}}^{\Sigma}$ from [Grant and Hunter, 2013] has maximal expressivity along all considered characteristics.
+
+We give necessary preliminaries in Section 2. In Section 3 we present four different expressivity characteristics and evaluate the considered inconsistency measures wrt. these characteristics. We conclude in Section 4. All inconsistency measures discussed in this paper have been implemented and an online interface to try out these measures is available¹.
+
+**2 Preliminaries**
+
+Let At be some fixed propositional signature, i.e., a (possibly infinite) set of propositions, and let $\mathcal{L}(\text{At})$ be the corresponding propositional language constructed using the usual connectives $\wedge$ (and), $\vee$ (or), and $\neg$ (negation).
+
+**Definition 1.** A knowledge base $K$ is a finite set of formulas $K \subseteq \mathcal{L}(\text{At})$. Let $\mathbb{K}$ be the set of all knowledge bases.
+
+If $X$ is a formula or a set of formulas we write $\text{At}(X)$ to denote the set of propositions appearing in $X$. Semantics for a propositional language is given by *interpretations*; an *interpretation* $\omega$ on At is a function $\omega : \text{At} \to \{\text{true}, \text{false}\}$. Let $\Omega(\text{At})$ denote the set of all interpretations for At. An interpretation $\omega$ satisfies (or is a *model* of) a proposition $a \in \text{At}$, denoted by $\omega \models a$, if and only if $\omega(a) = \text{true}$. The satisfaction relation $\models$ is extended to formulas in the usual way.
+
+For $\Phi \subseteq \mathcal{L}(\mathrm{At})$ we also define $\omega \models \Phi$ if and only if $\omega \models \phi$ for every $\phi \in \Phi$. Define furthermore the set of models $\text{Mod}(X) = \{\omega \in \Omega(\mathrm{At}) \mid \omega \models X\}$ for every formula or set of formulas $X$. If $\text{Mod}(X) = \emptyset$ we also write $X \models \bot$ and say that $X$ is inconsistent.
+
+Let $\mathbb{R}_{\ge 0}^{\infty}$ be the set of non-negative real values includ-
+ing $\infty$. Inconsistency measures are functions $\mathcal{I} : \mathbb{K} \rightarrow
+\mathbb{R}_{\ge 0}^{\infty}$ that aim at assessing the severity of the inconsistency
+in a knowledge base $\mathcal{K}$, cf. [Grant and Hunter, 2011].
+The basic idea is that the larger the inconsistency in $\mathcal{K}$
+the larger the value $\mathcal{I}(\mathcal{K})$ and $\mathcal{I}(\mathcal{K}) = 0$ if and only if
+$\mathcal{K}$ is consistent. However, inconsistency is a concept that
+is not easily quantified and there have been numerous
+proposals for inconsistency measures so far, in particular
+for classical propositional logic, see e. g. [Besnard, 2014;
+McAreavey et al., 2014; Jabbour et al., 2014; Hunter et al.,
+2014] for some recent works. We selected 15 inconsistency
+measures from the literature in order to conduct our analy-
+sis on expressivity, taken from [Hunter and Konieczny, 2008;
+2010; Grant and Hunter, 2011; Knight, 2002; Thimm, 2016b;
+Grant and Hunter, 2013; Mu et al., 2011; Xiao and Ma, 2012;
+Doder et al., 2010]. We only give the formal definitions of
+two of those, see [Thimm, 2016a] for the remaining defini-
+tions.
+
+The drastic measure $\mathcal{I}_d$ is usually considered as a baseline approach for inconsistency measurement.
+
+**Definition 2.** The drastic inconsistency measure $\mathcal{I}_d : \mathbb{K} \to \mathbb{R}_{\ge 0}^\infty$ is defined as
+
+$$
+\mathcal{I}_d(\mathcal{K}) = \begin{cases} 1 & \text{if } \mathcal{K} \models \bot \\ 0 & \text{otherwise} \end{cases}
+$$
+
+for $\mathcal{K} \in \mathbb{K}$.
+
+A more fine-grained approach can be devised by taking minimal inconsistent subsets into account. A set $M \subseteq K$ is called a *minimal inconsistent subset* (MI) of $K$ if $M \models \bot$ and there is no $M' \subset M$ with $M' \models \bot$. Let MI($K$) be the set of all MIs of $K$.
+
+**Definition 3.** The MI-inconsistency measure $\mathcal{I}_{\text{MI}}: \mathbb{K} \to \mathbb{R}_{\ge 0}^{\infty}$ is defined as
+
+$$
+\mathcal{I}_{\text{MI}}(\mathcal{K}) = |\text{MI}(\mathcal{K})|
+$$
+
+for $\mathcal{K} \in \mathbb{K}$.
+
+**Example 4.** Consider the knowledge bases $\mathcal{K}_1$ and $\mathcal{K}_2$ from the introduction:
+
+$$
+\begin{align*}
+\mathcal{K}_1 &= \{\boldsymbol{a}, \boldsymbol{b} \lor c, \neg\boldsymbol{a} \land \neg\boldsymbol{b}, d\} \\
+\mathcal{K}_2 &= \{\boldsymbol{a}, \neg\boldsymbol{a}, \boldsymbol{b}, \neg\boldsymbol{b}\}
+\end{align*}
+$$
+
+Here we have
+
+$$
+\begin{align*}
+\mathrm{MI}(\mathcal{K}_1) &= \{\{\boldsymbol{a}, \neg\boldsymbol{a} \land \neg\boldsymbol{b}\}\} \\
+\mathrm{MI}(\mathcal{K}_2) &= \{\{\boldsymbol{a}, \neg\boldsymbol{a}\}, \{\boldsymbol{b}, \neg\boldsymbol{b}\}\}
+\end{align*}
+$$
+
+Therefore we obtain $\mathcal{I}_{\text{MI}}(\mathcal{K}_1) = 1$ and $\mathcal{I}_{\text{MI}}(\mathcal{K}_2) = 2$.
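The example can be checked mechanically. Below is a brute-force sketch (the nested-tuple formula encoding and all function names are our own, purely for illustration) that enumerates subsets to find the minimal inconsistent subsets and evaluates $\mathcal{I}_d$ and $\mathcal{I}_{\text{MI}}$ on $\mathcal{K}_1$ and $\mathcal{K}_2$:

```python
from itertools import chain, combinations, product

# Formulas as nested tuples: ("var", "a"), ("not", f), ("and", f, g), ("or", f, g).
def holds(f, w):
    op = f[0]
    if op == "var":
        return w[f[1]]
    if op == "not":
        return not holds(f[1], w)
    if op == "and":
        return holds(f[1], w) and holds(f[2], w)
    return holds(f[1], w) or holds(f[2], w)   # "or"

def atoms(f):
    return {f[1]} if f[0] == "var" else set().union(*(atoms(g) for g in f[1:]))

def consistent(kb):
    # satisfiable iff some assignment of the mentioned atoms satisfies every formula
    ats = sorted(set().union(set(), *(atoms(f) for f in kb)))
    return any(all(holds(f, dict(zip(ats, vals))) for f in kb)
               for vals in product([True, False], repeat=len(ats)))

def mi_sets(kb):
    # minimal inconsistent subsets: inconsistent, with every proper subset consistent
    subs = chain.from_iterable(combinations(kb, r) for r in range(1, len(kb) + 1))
    return [set(s) for s in subs
            if not consistent(s) and all(consistent(set(s) - {x}) for x in s)]

I_d = lambda kb: 1 if not consistent(kb) else 0   # drastic measure (Definition 2)
I_MI = lambda kb: len(mi_sets(kb))                # MI measure (Definition 3)

a, b, c, d = (("var", v) for v in "abcd")
K1 = [a, ("or", b, c), ("and", ("not", a), ("not", b)), d]
K2 = [a, ("not", a), b, ("not", b)]
```

Enumerating all subsets is exponential in $|K|$ and only feasible for toy examples, but it reproduces the values derived in Example 4.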
+
+**3 Expressivity Characteristics**
+
+In the literature, inconsistency measures are usually analyti-
+cally evaluated on a set of rationality postulates.² Some basic
+example postulates given in [Hunter and Konieczny, 2006]
+are the following (let $\mathcal{I}$ be any inconsistency measure):
+
+Consistency $\mathcal{I}(\mathcal{K}) = 0$ if and only if $\mathcal{K}$ is consistent
+
+Monotony if $\mathcal{K} \subseteq \mathcal{K}'$ then $\mathcal{I}(\mathcal{K}) \leq \mathcal{I}(\mathcal{K}')$
+
+Independence for all $\alpha \in K$, if $\alpha \notin M$ for every $M \in MI(K)$ then $\mathcal{I}(K) = \mathcal{I}(K \setminus \{\alpha\})$
+
+Satisfaction of the property *consistency* ensures that all con-
+sistent knowledge bases receive a minimal inconsistency
+value and every inconsistent knowledge base receives a posi-
+tive inconsistency value (we already implicitly required satis-
+faction of this postulate in the definition of an inconsistency
+
+¹http://tweetyproject.org/w/incmes/
+
+²Some few works also consider empirical evaluation on com-
+putational performance and accuracy of algorithms approximat-
+ing existing inconsistency measures, see e.g. [Ma *et al.*, 2009;
+McAreavey *et al.*, 2014; Thimm, 2016b]
+---PAGE_BREAK---
+
+measure). The postulate *monotony* states that the value of in-
+consistency can only increase when adding new information.
+*Independence* states that removing “harmless” formulas from
+a knowledge base does not change the value of inconsistency.
+Besides these three postulates a series of other postulates have
+been proposed in the literature, see [Thimm, 2016c] for a re-
+cent survey. However, some of these postulates are disputed
+as each of them usually covers only a single aspect of incon-
+sistency, such as *independence*, which focuses on the role of
+minimal inconsistent subsets. An excellent discussion on the
+rationality of various postulates for inconsistency measures
+can be found in [Besnard, 2014]. Besides Besnard, several
+other authors have also criticised the rationality of individ-
+ual postulates—discussions can be found in almost all papers
+cited before—and so there is some disagreement on which
+postulates are meaningful and which are not. On the one
+hand this calls for more work on rationality postulates and,
+on the other hand, it suggests investigating additional
+means for comparison. In the following, we propose a novel
+quantitative approach to evaluate and compare inconsistency
+measures that aims at complementing the existing approach
+of rationality postulates.
+
+The drastic inconsistency measure $\mathcal{I}_d$ is usually considered a very naive baseline approach for inconsistency measurement. Surprisingly, this measure already satisfies many rationality postulates such as *consistency*, *monotony*, and *independence* (the proofs are straightforward). What sets it apart from other, more sophisticated inconsistency measures is that it cannot differentiate between different inconsistent knowledge bases. Such differentiation, however, is exactly what inconsistency measures are supposed to provide. While the *qualitative* behaviour of inconsistency measures has been discussed in depth using rationality postulates, their *quantitative* properties in terms of *expressivity* have been almost neglected so far.³ By the expressivity of an inconsistency measure we mean the number of different values the measure can attain. We investigate the expressivity of inconsistency measures along four different dimensions of subclasses of knowledge bases.
+
+**Definition 5.** Let $\phi$ be a formula. The length $l(\phi)$ of $\phi$ is recursively defined as
+
+$$ l(\phi) = \begin{cases} 1 & \text{if } \phi \in \mathrm{At} \\ 1 + l(\phi') & \text{if } \phi = \neg \phi' \\ 1 + l(\phi_1) + l(\phi_2) & \text{if } \phi = \phi_1 \wedge \phi_2 \\ 1 + l(\phi_1) + l(\phi_2) & \text{if } \phi = \phi_1 \vee \phi_2 \end{cases} $$
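Definition 5 translates directly into a recursive function; a small sketch (the nested-tuple encoding `("var", name)` / `("not", f)` / `("and", f, g)` / `("or", f, g)` and the names are our own):

```python
# l(phi) from Definition 5 on formulas encoded as nested tuples:
# ("var", name), ("not", f), ("and", f, g), ("or", f, g)
def length(phi):
    if phi[0] == "var":
        return 1                                   # a proposition has length 1
    if phi[0] == "not":
        return 1 + length(phi[1])                  # negation adds 1
    return 1 + length(phi[1]) + length(phi[2])     # conjunction/disjunction add 1

# l(not a and not b) = 1 + (1 + 1) + (1 + 1) = 5
phi = ("and", ("not", ("var", "a")), ("not", ("var", "b")))
```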
+
+**Definition 6.** Define the following subclasses of the set of all knowledge bases $\mathbb{K}$:
+
+$$
+\begin{align*}
+\mathbb{K}^v(n) &= \{\mathcal{K} \in \mathbb{K} \mid |\mathrm{At}(\mathcal{K})| \le n\} \\
+\mathbb{K}^f(n) &= \{\mathcal{K} \in \mathbb{K} \mid |\mathcal{K}| \le n\} \\
+\mathbb{K}^l(n) &= \{\mathcal{K} \in \mathbb{K} \mid \forall \phi \in \mathcal{K}: l(\phi) \le n\} \\
+\mathbb{K}^p(n) &= \{\mathcal{K} \in \mathbb{K} \mid \forall \phi \in \mathcal{K}: |\mathrm{At}(\phi)| \le n\}
+\end{align*}
+$$
+
+³Some few rationality postulates such as Attenuation [Mu et al., 2011] are addressing this issue only in some very limited form and from a particular point of view.
+
+In other words, $\mathbb{K}^v(n)$ is the set of all knowledge bases that mention at most $n$ different propositions, $\mathbb{K}^f(n)$ is the set of all knowledge bases that contain at most $n$ formulas, $\mathbb{K}^l(n)$ is the set of all knowledge bases that contain only formulas of length at most $n$, and $\mathbb{K}^p(n)$ is the set of all knowledge bases that contain only formulas that mention at most $n$ different propositions each. The motivation for considering these particular subclasses of knowledge bases is that each of them captures a different aspect of the size of a knowledge base. As a syntactical object, a knowledge base is a set of formulas, and both the number of formulas (considered by the class $\mathbb{K}^f(n)$) and the length of each formula ($\mathbb{K}^l(n)$) are the essential parameters that define its size. From a semantical point of view, the number of propositions appearing in each formula ($\mathbb{K}^p(n)$) and in the complete knowledge base ($\mathbb{K}^v(n)$) define the scope of the knowledge. Larger numbers for both of them also indicate larger scope and thus greater size. Inconsistency measures should reflect the size of the knowledge base in their expressivity. For example, the number of possible inconsistency values of a particular measure should not decrease when moving from a set $\mathbb{K}^v(n)$ to a set $\mathbb{K}^v(n')$ with $n' > n$, as knowledge bases over $n'$ propositions should provide a larger variety in terms of inconsistency than knowledge bases over $n$ propositions. Indeed, this property holds for all considered measures as $\mathbb{K}^v(n) \subseteq \mathbb{K}^v(n')$ (the same holds for all classes above).
+
+The aim of our expressivity analysis is to investigate the number of different values that a specific inconsistency measure can attain on different subclasses of knowledge bases. We formalise this idea using *expressivity characteristics* as follows.
+
+**Definition 7.** Let $\mathcal{I}$ be an inconsistency measure and $n > 0$. Let $\alpha \in \{v, f, l, p\}$. The $\alpha$-characteristic $C^\alpha(\mathcal{I}, n)$ of $\mathcal{I}$ wrt. $n$ is defined as
+
+$$ C^\alpha(\mathcal{I}, n) = |\{\mathcal{I}(\mathcal{K}) | \mathcal{K} \in \mathbb{K}^\alpha(n)\}| $$
+
+In other words, $C^\alpha(\mathcal{I}, n)$ is the number of different inconsistency values $\mathcal{I}$ assigns to knowledge bases from $\mathbb{K}^\alpha(n)$. Note that these characteristics are not always the same as the maximal value of an inconsistency measure on a specific set of knowledge bases, even if the codomain of the measure is the natural numbers. Indeed, it can be the case that intermediate values cannot be attained.
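To make these characteristics concrete: restricting attention to knowledge bases that contain only literals (encoded below as signed integers, a toy fragment of our own choosing), the MIs are exactly the complementary pairs, and enumerating all such bases with at most $n$ formulas yields a lower bound on $C^f(\mathcal{I}_{\text{MI}}, n)$ (a lower bound only, since general formulas can realise more MI patterns):

```python
from itertools import combinations

# Toy fragment: formulas are integer literals (k stands for proposition k,
# -k for its negation), so the minimal inconsistent subsets are exactly
# the complementary pairs {k, -k} contained in the knowledge base.
def I_MI_literals(kb):
    return sum(1 for x in kb if x > 0 and -x in kb)

def char_f_lower_bound(n, num_props=4):
    # enumerate every literal-only knowledge base with at most n formulas
    # and count the distinct I_MI values attained
    lits = [l for k in range(1, num_props + 1) for l in (k, -k)]
    values = set()
    for r in range(n + 1):
        for kb in combinations(lits, r):
            values.add(I_MI_literals(set(kb)))
    return len(values)
```

With at most 4 literal formulas one can pack at most two complementary pairs, so only the values 0, 1, 2 occur on this fragment.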
+
+We now come to the main contribution of [Thimm, 2016a], which is a thorough study of the considered inconsistency measures in terms of our four proposed expressivity characteristics.
+
+**Theorem 8.** The $\alpha$-characteristics $C^\alpha(\mathcal{I}, n)$ ($\alpha \in \{f, v, l, p\}$) for the inconsistency measures $\mathcal{I}_d$, $\mathcal{I}_{\text{MI}}$, $\mathcal{I}_{\text{MI}^C}$, $\mathcal{I}_{\eta}$, $\mathcal{I}_c$, $\mathcal{I}_{LP_m}$, $\mathcal{I}_{mc}$, $\mathcal{I}_p$, $\mathcal{I}_{hs}$, $\mathcal{I}_{dalal}^\Sigma$, $\mathcal{I}_{dalal}^{\max}$, $\mathcal{I}_{dalal}^{\text{hit}}$, $\mathcal{I}_{D_f}$, $\mathcal{I}_{mv}$, and $\mathcal{I}_{nc}$ are as shown in Table 1.
+
+The complete proof of the above theorem can be found in [Thimm, 2016a].
+
+Table 1 shows that the measure $\mathcal{I}_{dalal}^\Sigma$ has maximal expressivity wrt. all four expressivity characteristics (among the considered inconsistency measures) and, as expected, the drastic inconsistency measure $\mathcal{I}_d$ is the least expressive one. One can also observe that for many measures the values of
+---PAGE_BREAK---
+
+| $\mathcal{I}$ | $C^v(\mathcal{I},n)$ | $C^f(\mathcal{I},n)$ | $C^l(\mathcal{I},n)$ | $C^p(\mathcal{I},n)$ |
+|---|---|---|---|---|
+| $\mathcal{I}_d$ | $2$ | $2$ | $2$* | $2$ |
+| $\mathcal{I}_{\text{MI}}$ | $\infty$ | $\binom{n}{\lfloor n/2 \rfloor} + 1$ | $\infty$* | $\infty$ |
+| $\mathcal{I}_{\text{MI}^C}$ | $\infty$ | $\le \Psi(n)$‡ | $\infty$* | $\infty$ |
+| $\mathcal{I}_{\eta}$ | $\Phi(2^n)$† | $\le \Phi\big(\binom{n}{\lfloor n/2 \rfloor}\big)$† | $\infty$** | $\infty$* |
+| $\mathcal{I}_c$ | $n + 1$ | $\infty$ | $\infty$* | $\infty$ |
+| $\mathcal{I}_{LP_m}$ | $\Phi(n)$ | $\infty$ | $\infty$* | $\infty$ |
+| $\mathcal{I}_{mc}$ | $\infty$ | $\binom{n}{\lfloor n/2 \rfloor}$** | $\infty$* | $\infty$ |
+| $\mathcal{I}_p$ | $\infty$ | $n + 1$ | $\infty$* | $\infty$ |
+| $\mathcal{I}_{hs}$ | $2^n + 1$ | $n + 1$ | $\infty$** | $\infty$* |
+| $\mathcal{I}_{dalal}^{\Sigma}$ | $\infty$ | $\infty$* | $\infty$* | $\infty$ |
+| $\mathcal{I}_{dalal}^{\max}$ | $n + 2$ | $\infty$* | $[(n + 7)/3]$** | $n + 2$ |
+| $\mathcal{I}_{dalal}^{\text{hit}}$ | $\infty$ | $n + 1$ | $\infty$* | $\infty$ |
+| $\mathcal{I}_{D_f}$ | $\infty$ | $\le \Psi(n)$‡ | $\infty$* | $\infty$ |
+| $\mathcal{I}_{mv}$ | $n + 1$ | $\infty$* | $\infty$* | $\infty$ |
+| $\mathcal{I}_{nc}$ | $\infty$ | $n + 1$ | $\infty$* | $\infty$ |
+
+Table 1: Characteristics of inconsistency measures ($n \ge 1$)
+
+*only for $n > 1$
+
+**only for $n > 3$
+
+†Φ(x) is the number of fractions in the Farey series of order x and can be defined as $\Phi(x) = |\{k/l | l = 1, \dots, x, k = 0, \dots, l\}|$, see e. g. http://oeis.org/A005728
+
+‡Ψ(n) is the number of profiles of monotone Boolean functions of n variables, see e. g. http://oeis.org/A220880
+
+$C^v(I, n)$ and $C^f(I, n)$ are complementary, i. e., if a measure has a high value in $C^f$ it has a small value in $C^v$ (consider e. g. $I_c$ and $I_p$). This is due to the fact that many measures capture only a specific aspect of inconsistency and usually belong either to the MI-based family of inconsistency measures—which focuses on using minimal inconsistent subsets for measuring—or the variable-based family—which focuses on conflicting propositions—cf. [Hunter and Konieczny, 2008]. Therefore, they are constrained in their expressivity if one of these dimensions is limited. For example, if the number of formulas in a knowledge base is restricted, so is the number of minimal inconsistent subsets.
+
+**Remark 9.** Note that [Thimm, 2016a] also considered the measure $I_{P_m}$ [Jabbour and Raddaoui, 2013] and reported it to have maximal expressivity. However, the original publication [Jabbour and Raddaoui, 2013] falsely claimed that $I_{P_m}$ satisfies the consistency postulate, which is usually deemed a necessary requirement for inconsistency measures. In fact, $I_{P_m}$ does not comply with this basic property: e. g., for the inconsistent $K_{P_m} = \{a, \neg(a \land a)\}$ we have $I_{P_m}(K_{P_m}) = 0$, cf. Definition 2, Proposition 2, and Section 3 in [Jabbour and Raddaoui, 2013]. Therefore, we omit $I_{P_m}$ in this paper.
+
+## 4 Summary and Conclusion
+
+We conducted a focused but extensive comparative analysis of inconsistency measures from the recent literature in terms of their expressivity. To this end, we introduced four different expressivity characteristics and analytically evaluated the considered measures wrt. these characteristics. Our findings also revealed some interesting relationships of inconsistency measures to, e. g., set theory and
+
+monotone Boolean functions, see [Thimm, 2016a] for a discussion. Finally, the measure $I_{dalal}^\Sigma$ [Grant and Hunter, 2013] has been proven to be maximally expressive wrt. all our characteristics.
+
+Expressivity characteristics provide a novel evaluation method for assessing the quality of inconsistency measures. It has to be noted, however, that high expressivity alone is not a sufficient criterion for this. It is straightforward to construct measures that exhibit maximal expressivity along all discussed dimensions but fail to comply with the intuitions one expects from inconsistency measures. The use of rationality postulates—such as those presented and discussed in [Hunter and Konieczny, 2006; Mu et al., 2011; Besnard, 2014]—must still serve as the first-level evaluation criterion. If measures satisfy the same (or a similar) set of rationality postulates, expressivity can be used to make further quality assessments.
+
+To the best of our knowledge, our work is the most extensive comparative analysis of inconsistency measures so far. All inconsistency measures discussed in this paper have been implemented and an online interface to try out these measures is available⁴.
+
+⁴http://tweetyproject.org/w/incmes/
+---PAGE_BREAK---
+
+## References
+
+[Baroni and Giacomin, 2008] P. Baroni and M. Giacomin. A systematic classification of argumentation frameworks where semantics agree. In *Proc. of the 2nd Int. Conf. on Computational Models of Argument (COMMA'08)*, pages 37–48, 2008.
+
+[Besnard, 2014] P. Besnard. Revisiting postulates for inconsistency measures. In *Logics in Artificial Intelligence*, pages 383–396. Springer, 2014.
+
+[Doder et al., 2010] D. Doder, M. Raskovic, Z. Markovic, and Z. Ognjanovic. Measures of inconsistency and defaults. *Int J Approx Reason*, 51:832–845, 2010.
+
+[Grant and Hunter, 2008] J. Grant and A. Hunter. Analysing inconsistent first-order knowledgebases. *Artif Intell*, 172(8-9):1064–1093, 2008.
+
+[Grant and Hunter, 2011] J. Grant and A. Hunter. Measuring consistency gain and information loss in stepwise inconsistency resolution. In *Proc. of the 11th European Conf. on Symbolic and Quantitative Approaches to Reasoning with Uncertainty (ECSQARU'11)*, pages 362–373. Springer, 2011.
+
+[Grant and Hunter, 2013] J. Grant and A. Hunter. Distance-based measures of inconsistency. In *Proc. of the 12th European Conf. on Symbolic and Quantitative Approaches to Reasoning with Uncertainty (ECSQARU'13)*, pages 230–241. Springer, 2013.
+
+[Hansson, 2001] S. O. Hansson. *A Textbook of Belief Dynamics*. Kluwer Academic Publishers, 2001.
+
+[Hunter and Konieczny, 2004] A. Hunter and S. Konieczny. Approaches to measuring inconsistent information. In *Inconsistency Tolerance*, pages 189–234. Springer, 2004.
+
+[Hunter and Konieczny, 2006] A. Hunter and S. Konieczny. Shapley inconsistency values. In *Proc. of the 10th Int. Conf. on Knowledge Representation (KR'06)*, pages 249–259. AAAI Press, 2006.
+
+[Hunter and Konieczny, 2008] A. Hunter and S. Konieczny. Measuring inconsistency through minimal inconsistent sets. In *Proc. of the 11th Int. Conf. on Principles of Knowledge Representation and Reasoning (KR'2008)*, pages 358–366. AAAI Press, 2008.
+
+[Hunter and Konieczny, 2010] A. Hunter and S. Konieczny. On the measure of conflicts: Shapley inconsistency values. *Artif Intell*, 174(14):1007–1026, 2010.
+
+[Hunter et al., 2014] A. Hunter, S. Parsons, and M. Wooldridge. Measuring inconsistency in multi-agent systems. *Künst Intell*, 28:169–178, 2014.
+
+[Jabbour and Raddaoui, 2013] S. Jabbour and B. Raddaoui. Measuring inconsistency through minimal proofs. In *Proceedings of the 12th European Conference on Symbolic and Quantitative Approaches to Reasoning with Uncertainty*, pages 290–301. Springer, 2013.
+
+[Jabbour et al., 2014] S. Jabbour, Y. Ma, B. Raddaoui, and L. Sais. Prime implicates based inconsistency characterization. In *Proceedings of the 21st European Conference on Artificial Intelligence (ECAI'14)*, pages 1037–1038, 2014.
+
+[Knight, 2002] K. M. Knight. *A Theory of Inconsistency*. PhD thesis, University Of Manchester, 2002.
+
+[Ma et al., 2007] Y. Ma, G. Qi, P. Hitzler, and Z. Lin. Measuring inconsistency for description logics based on paraconsistent semantics. In *Proceedings of the 9th European Conference on Symbolic and Quantitative Approaches to Reasoning with Uncertainty*, pages 30–41. Springer, 2007.
+
+[Ma et al., 2009] Y. Ma, G. Qi, G. Xiao, P. Hitzler, and Z. Lin. An anytime algorithm for computing inconsistency measurement. In *Knowledge Science, Engineering and Management*, pages 29–40. Springer, 2009.
+
+[Ma et al., 2012] J. Ma, W. Liu, and P. Miller. A characteristic function approach to inconsistency measures for knowledge bases. In *Proc. of the 6th Int. Conf. on Scalable Uncertainty Management (SUM'12)*, 2012.
+
+[Makinson, 2005] D. Makinson. *Bridges from Classical to Nonmonotonic Logic*. College Publications, 2005.
+
+[McAreavey et al., 2014] K. McAreavey, W. Liu, and P. Miller. Computational approaches to finding and measuring inconsistency in arbitrary knowledge bases. *Int J Approx Reason*, 55:1659–1693, 2014.
+
+[Mu et al., 2011] K. Mu, W. Liu, Z. Jin, and D. Bell. A Syntax-based Approach to Measuring the Degree of Inconsistency for Belief Bases. *Int J Approx Reason*, 52(7):978–999, 2011.
+
+[Potyka, 2014] N. Potyka. Linear programs for measuring inconsistency in probabilistic logics. In *Proc. of the 14th Int. Conf. on Principles of Knowledge Representation and Reasoning*, 2014.
+
+[Priest, 1979] G. Priest. Logic of Paradox. *J Philos Logic*, 8:219–241, 1979.
+
+[Sperner, 1928] E. Sperner. Ein Satz über Untermengen einer endlichen Menge (in German). *Mathematische Zeitschrift*, 27(1):544–548, 1928.
+
+[Thimm, 2013] M. Thimm. Inconsistency measures for probabilistic logics. *Artif Intell*, 197:1–24, 2013.
+
+[Thimm, 2016a] M. Thimm. On the expressivity of inconsistency measures. *Artif Intell*, 234:120–151, 2016.
+
+[Thimm, 2016b] M. Thimm. Stream-based inconsistency measurement. *Int J Approx Reason*, 68:68–87, 2016.
+
+[Thimm, 2016c] M. Thimm. On the compliance of rationality postulates for inconsistency measures: A more or less complete picture. *Künst Intell*, 2016.
+
+[Xiao and Ma, 2012] G. Xiao and Y. Ma. Inconsistency measurement based on variables in minimal unsatisfiable subsets. In *Proceedings of the 20th European Conference on Artificial Intelligence (ECAI'12)*, 2012.
+
+[Zhou et al., 2009] L. Zhou, H. Huang, G. Qi, Y. Ma, Z. Huang, and Y. Qu. Measuring inconsistency in dl-lite ontologies. In *Proceedings of the 2009 IEEE/WIC/ACM International Joint Conference on Web Intelligence and Intelligent Agent Technology - Volume 01*, pages 349–356.
+IEEE Computer Society, 2009.
\ No newline at end of file
diff --git a/samples/texts_merged/3246292.md b/samples/texts_merged/3246292.md
new file mode 100644
index 0000000000000000000000000000000000000000..5e67783691e881c4a8d7f842e67be11ef0b857c4
--- /dev/null
+++ b/samples/texts_merged/3246292.md
@@ -0,0 +1,579 @@
+
+---PAGE_BREAK---
+
+A New Numerical Solution for System of Linear
+Equations
+
+Iman Shojaei*¹ and Hossein Rahami†²
+
+¹Engineering Optimization Research Group, College of Engineering, University of Tehran
+
+²School of Engineering Science, College of Engineering, University of Tehran, Tehran, Iran
+
+**ABSTRACT**
+
+In this paper we have developed a numerical method for solving systems of linear equations by taking advantage of the properties of repetitive tridiagonal matrices. A system of linear equations is usually obtained in the final step of many science and engineering problems, such as problems involving partial differential equations. In the proposed algorithm, the problem is first solved for repetitive tridiagonal matrices (i.e., systems of linear equations with such coefficient matrices) and a closed-form relationship is obtained.
+
+**Keyword:** linear system of equations, partial differential equations, numerical solution, efficient analysis, repetitive tridiagonal matrices, iterative methods.
+
+AMS subject Classification: 65F10
+
+**ARTICLE INFO**
+
+Article history:
+
+Research Paper
+
+Received 11, January 2021
+
+Received in revised form 14,
+April 2021
+
+Accepted 10 May 2021
+
+Available online 01, June 2021
+
+**ABSTRACT** (continued)
+
+This relationship is then used for solving a general matrix through converting the matrix into a repetitive tridiagonal matrix and a remaining matrix that is moved to the right-hand side of the equation. Therefore, the problem is converted into a repetitive tridiagonal matrix problem where we have a vector of unknowns on the right-hand side (in addition
+
+*shojaei.iman@gmail.com
+
+†Corresponding author: H. Rahami. Email: hrahami@ut.ac.ir
+---PAGE_BREAK---
+
+to the left-hand side) of the equation. The problem is solved iteratively by first using an initial guess to define the vector on the right-hand side of the equation and then solving the problem using the closed-form relationship for repetitive tridiagonal matrices. The new solution is then substituted into the right-hand side of the equation and the tridiagonal problem is solved again. This process is carried out iteratively until convergence is achieved. The computational complexity of the method is investigated and its efficiency is shown through several examples. As indicated in the examples, one of the advantages of the proposed method is its high rate of convergence in problems where the given matrix includes large off-diagonal entries. In such problems, methods like Jacobi, Gauss-Seidel, and Successive Over-Relaxation will either have a low rate of convergence or be unable to converge.
+
+## 2 Introduction
+
+The final step of many engineering problems is solving a system of linear equations. In general, such systems can be solved using direct or iterative methods. Direct methods, such as Gaussian elimination or LU decomposition, are usually avoided for problems with large and/or sparse matrices, as they are computationally expensive and require large data storage and high-speed computation. Therefore, several iterative methods [1, 2, 3, 5, 6, 7, 8, 9, 10, 11, 15, 18, 20, 21, 22, 24, 28] have been developed, which can in general be classified into two groups: stationary and non-stationary methods.
+Since the goal is to solve the equation $Ax = b$, one can rewrite the equation as
+
+$$x = (I - A)x + b$$
+
+to form the iterative formula $x^{(k)} = (I - A)x^{(k-1)} + b$. In general, this equation can be written in the form $x^{(k)} = Bx^{(k-1)} + c$ where matrix $B$ and vector $c$ may be updated in each iteration. In stationary methods, matrix $B$ and vector $c$ remain fixed throughout the iterations. The Jacobi and Gauss-Seidel methods are well-known examples of stationary methods that have been widely used in the literature to solve systems of linear equations. The Jacobi method is straightforward, with a simple interpretation and implementation; its drawback, however, is a relatively low rate of convergence. The Gauss-Seidel method is similar to the Jacobi method, but uses the updated values within each iteration, leading to faster convergence [4, 16, 25, 26]. To further improve the rate of convergence, the Successive Over-Relaxation (SOR) algorithm has been suggested, obtained by adding a relaxation parameter $\omega$ to the Gauss-Seidel method.
+The general form of SOR algorithm is
+
+$$x_i^{(k)} = x_i^{(k-1)} + \omega \frac{R_i^{(k-1)}}{a_{i,i}}, \quad i = 1:n$$
+
+$$R_i^{(k-1)} = b_i - \sum_{j=1}^{i-1} a_{i,j} x_j^{(k)} - \sum_{j=i}^{n} a_{i,j} x_j^{(k-1)}$$
+---PAGE_BREAK---
+
+which simplifies to the Gauss-Seidel method for $\omega = 1$. For $0 < \omega < 1$ (so-called under-relaxation), the method can prevent divergence of the solution and damps potential oscillations during the iterations. For $1 < \omega < 2$ (so-called over-relaxation), the method increases the rate of convergence. For $\omega \ge 2$, the iteration diverges. Likewise, Ehrlich (1981) introduced a method, called Ad Hoc SOR, wherein a new approach for updating the variables is used to improve the rate of convergence [12].
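The SOR update above can be sketched in a few lines of pure Python (dense matrices as lists of lists; the parameter defaults and names are our own choices, and $\omega = 1$ recovers Gauss-Seidel):

```python
def sor(A, b, omega=1.2, tol=1e-12, max_iter=10_000):
    # Successive Over-Relaxation: x_i += omega * R_i / a_ii, where R_i is the
    # row residual computed with the newest values for j < i (updated in place)
    # and the previous values for j >= i.
    n = len(b)
    x = [0.0] * n
    for _ in range(max_iter):
        diff = 0.0
        for i in range(n):
            r = b[i] - sum(A[i][j] * x[j] for j in range(n))   # residual R_i
            dx = omega * r / A[i][i]
            x[i] += dx
            diff = max(diff, abs(dx))
        if diff < tol:
            break
    return x

# small symmetric positive definite example: 4x + y = 1, x + 3y = 2
x = sor([[4.0, 1.0], [1.0, 3.0]], [1.0, 2.0], omega=1.2)
```

Because `x` is updated in place, the single residual sum automatically mixes new values ($j < i$) with old ones ($j \ge i$), matching the formula above.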
+
+In non-stationary methods, the computations involve information that changes in each iteration of the solution [13, 14, 23]. Specifically, the coefficients of the algorithm are updated in each iteration with the goal of reducing the norm of the error. The Conjugate Gradient method is a well-known example. A comparison of the performance of stationary versus non-stationary methods for solving singular problems can be found in [16].
+
+Solving a system of linear equations using the methods above is challenging when the coefficient matrix is a Stieltjes matrix, i.e., a symmetric positive definite matrix with non-positive off-diagonal entries and positive diagonal entries. For such matrices, the AGMG and CG-AMG multigrid algorithms have been suggested. Todini and Pilati (1987) used a global Gradient algorithm for the hydraulic analysis of pipe networks [25]. Also, Webster (1998) used an efficient multigrid method to analyze fluid flow in pipe networks [27].
+
+In this paper we propose a numerical method for solving systems of linear equations by taking advantage of the properties of repetitive tridiagonal matrices. The method is computationally efficient, $O(n^2)$, and its rate of convergence is high for problems involving matrices with large off-diagonal entries, where methods like Jacobi, Gauss-Seidel, and SOR will either have a low rate of convergence or be unable to converge. There are several methods in the literature for solving a system of linear equations; specifically, we evaluate the performance of our algorithm against three established ones: the Jacobi, Gauss-Seidel, and SOR methods. Several practical examples, including partial differential equation (PDE) problems from mathematical finance, are solved using the proposed method.
+
+## 3 Jacobi method for solving the system of linear equations $Au = c$
+
+Consider matrix **A** below
+
+$$ A = \begin{bmatrix} a_{11} & a_{12} & \dots & a_{1n} \\ a_{21} & a_{22} & \dots & a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{n1} & a_{n2} & \dots & a_{nn} \end{bmatrix} \quad (1) $$
+---PAGE_BREAK---
+
+Matrix **A** is decomposed into a diagonal matrix **D** and the remainder **R**
+
+$$
+\mathbf{A} = \begin{bmatrix}
+a_{11} & & & \\
+& a_{22} & & \\
+& & \ddots & \\
+& & & a_{nn}
+\end{bmatrix} + \begin{bmatrix}
+0 & a_{12} & \cdots & a_{1n} \\
+a_{21} & 0 & \cdots & a_{2n} \\
+\vdots & \vdots & \ddots & \vdots \\
+a_{n1} & a_{n2} & \cdots & 0
+\end{bmatrix} = \mathbf{D} + \mathbf{R}. \quad (2)
+$$
+
+We will have
+
+$$(\mathbf{D} + \mathbf{R})\mathbf{u} = \mathbf{c} \qquad (3)$$
+
+and
+
+$$\mathbf{Du} = \mathbf{c} - \mathbf{Ru} \tag{4}$$
+
+Setting an initial approximation $\mathbf{u}^0 = \mathbf{D}^{-1}\mathbf{c}$, we obtain the next approximation
+
+$$\mathbf{u}^1 = \mathbf{D}^{-1}(\mathbf{c} - \mathbf{R}\mathbf{u}^0) \quad (5)$$
+
+Repeating the procedure leads to an iterative algorithm from which the solution is obtained
+
+$$\mathbf{u}^{n+1} = \mathbf{D}^{-1}(\mathbf{c} - \mathbf{R}\mathbf{u}^n) \quad \text{and} \quad n = 0, 1, 2, \dots \qquad (6)$$
+
+The Jacobi method (like any stationary iterative method) converges if the spectral radius of the iteration matrix $\mathbf{D}^{-1}\mathbf{R}$ is less than 1. One sufficient condition for convergence is that matrix $\mathbf{A}$ is strictly diagonally dominant. Here we presented the Jacobi algorithm as the simplest iterative method for systems of linear equations; readers are referred to the relevant literature for the details and formulation of other iterative methods such as Gauss-Seidel and SOR.
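Eq. (6) in code, as a minimal dense pure-Python sketch (tolerance and names are our own):

```python
def jacobi(A, c, tol=1e-12, max_iter=10_000):
    # u^{k+1} = D^{-1}(c - R u^k) with D = diag(A) and R = A - D (Eq. 6)
    n = len(c)
    u = [c[i] / A[i][i] for i in range(n)]        # initial guess u^0 = D^{-1} c
    for _ in range(max_iter):
        u_new = [(c[i] - sum(A[i][j] * u[j] for j in range(n) if j != i)) / A[i][i]
                 for i in range(n)]
        if max(abs(u_new[i] - u[i]) for i in range(n)) < tol:
            return u_new
        u = u_new
    return u

# diagonally dominant example: 4x + y = 1, x + 3y = 2
u = jacobi([[4.0, 1.0], [1.0, 3.0]], [1.0, 2.0])
```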
+
+## 4 Closed-Form Solution of $Mx = f$ when $M$ is a tridiagonal matrix
+
+Eigenvalues and eigenvectors of a tridiagonal matrix, of dimension $N-1$, of the form
+
+$$
+M = \begin{bmatrix}
+b & c & & & 0 \\
+a & b & c & & \\
+ & \ddots & \ddots & \ddots & \\
+ & & a & b & c \\
+0 & & & a & b
+\end{bmatrix}
+\qquad (7)
+$$
+
+are calculated [19, 30] using
+
+$$
+\lambda_n = b + 2\sqrt{ac} \cos \frac{n\pi}{N} \quad \text{for} \quad n = 1, 2, \ldots, N-1
+$$
+
+$$v_j^n = \left( \frac{a}{c} \right)^{j/2} \sin \frac{n j \pi}{N} \quad \text{for} \quad n = 1, 2, \ldots, N-1 \qquad (8)$$
+
+$$v^n = [v_1^n, v_2^n, \ldots, v_{N-1}^n]^t.$$
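A quick numerical check of these eigenpairs for an asymmetric example ($a \ne c$, assuming $ac > 0$), using the exponent $j/2$ on $a/c$:

```python
import math

def tridiag(a, b, c, m):
    # m x m tridiagonal Toeplitz matrix: sub-diagonal a, diagonal b, super-diagonal c
    return [[b if i == j else a if i == j + 1 else c if i == j - 1 else 0.0
             for j in range(m)] for i in range(m)]

def eigenpair(a, b, c, N, n):
    # Eq. (8); requires ac > 0 so that sqrt(ac) is real
    lam = b + 2.0 * math.sqrt(a * c) * math.cos(n * math.pi / N)
    v = [(a / c) ** (j / 2.0) * math.sin(n * j * math.pi / N) for j in range(1, N)]
    return lam, v

# verify M v = lambda v for a != c
a, b, c, N = 2.0, 5.0, 3.0, 6            # M has dimension N - 1 = 5
M = tridiag(a, b, c, N - 1)
lam, v = eigenpair(a, b, c, N, n=2)
Mv = [sum(Mij * vj for Mij, vj in zip(row, v)) for row in M]
err = max(abs(Mv[j] - lam * v[j]) for j in range(N - 1))
```

The residual `err` is at the level of floating-point round-off, confirming the eigenpair formulas.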
+---PAGE_BREAK---
+
+And if $a = c$, we will have
+
+$$ \lambda_n = b + 2a \cos \frac{n\pi}{N} \quad \text{for} \quad n = 1, 2, \dots, N-1 $$
+
+$$ v_j^n = \sin \frac{nj\pi}{N} \quad \text{for} \quad n = 1, 2, \dots, N-1 \qquad (9) $$
+
+$$ v^n = [v_1^n, v_2^n, \dots, v_{N-1}^n]^t. $$
+
+Since the matrix is symmetric (of dimension $N - 1$), its eigenvectors ($v^1, v^2, \dots, v^{N-1}$) provide an orthogonal basis for $\mathbb{R}^{N-1}$. Therefore, we can expand **x** and **f** in terms of the ($v^1, v^2, \dots, v^{N-1}$) basis:
+
+$$ \mathbf{x} = \sum_{i=1}^{N-1} x_i \mathbf{v}^i \quad \text{and} \quad \mathbf{f} = \sum_{i=1}^{N-1} f_i \mathbf{v}^i \qquad (10) $$
+
+where $f_i$s are known (i.e., can readily be computed)
+
+$$ f_i = (\mathbf{v}^i)^t \mathbf{f} \qquad (11) $$
+
+and $x_i$s are our unknowns that should be computed through substitution in $\mathbf{M}\mathbf{x} = \mathbf{f}$
+
+$$ \mathbf{M} \sum_{i=1}^{N-1} x_i \mathbf{v}^i = \sum_{i=1}^{N-1} f_i \mathbf{v}^i \qquad (12) $$
+
+where we can write
+
+$$ \mathbf{M} \sum_{i=1}^{N-1} x_i \mathbf{v}^i = \sum_{i=1}^{N-1} x_i \mathbf{M} \mathbf{v}^i = \sum_{i=1}^{N-1} x_i \lambda_i \mathbf{v}^i \qquad (13) $$
+
+We can now re-express Eq. (12) as
+
+$$ \sum_{i=1}^{N-1} \lambda_i x_i \mathbf{v}^i = \sum_{i=1}^{N-1} f_i \mathbf{v}^i \qquad (14) $$
+
+Because $\mathbf{v}^i$s are linearly independent (they form a basis), we will have
+
+$$ \lambda_i x_i = f_i \;\rightarrow\; x_i = \frac{f_i}{\lambda_i}, \quad i = 1, 2, \dots, N-1. \qquad (15) $$
+
+Therefore,
+
+$$ \mathbf{x} = \sum_{i=1}^{N-1} \frac{f_i}{\lambda_i} \mathbf{v}^i = \sum_{i=1}^{N-1} \mathbf{v}^i \frac{f_i}{\lambda_i} = \sum_{i=1}^{N-1} \frac{\mathbf{v}^i (\mathbf{v}^i)^t}{\lambda_i} \mathbf{f} \qquad (16) $$
+
+Finally,
+
+$$ \mathbf{x} = \sum_{i=1}^{N-1} \frac{\left[ \sin \frac{i\pi}{N}, \sin \frac{2i\pi}{N}, \dots, \sin \frac{(N-1)i\pi}{N} \right]^t \left[ \sin \frac{i\pi}{N}, \sin \frac{2i\pi}{N}, \dots, \sin \frac{(N-1)i\pi}{N} \right]}{b + 2a \cos \frac{i\pi}{N}} \mathbf{f} \qquad (17) $$
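Eq. (17) can be exercised numerically. One caveat: the eigenvectors $\mathbf{v}^i$ above are orthogonal but not unit length ($\|\mathbf{v}^i\|^2 = N/2$), so when they are used unnormalized each term of the sum must carry an extra factor $2/N$. The sketch below (NumPy; the values of $a$, $b$, $N$ are arbitrary) includes that factor:

```python
import numpy as np

N = 8                 # system dimension is N - 1 = 7
a, b = -1.0, 4.0      # symmetric tridiagonal (a = c)
M = (np.diag(np.full(N - 1, b))
     + np.diag(np.full(N - 2, a), 1)
     + np.diag(np.full(N - 2, a), -1))
f = np.arange(1.0, N)  # arbitrary right-hand side

j = np.arange(1, N)
x = np.zeros(N - 1)
for i in range(1, N):
    v = np.sin(i * j * np.pi / N)            # eigenvector from Eq. (9)
    lam = b + 2 * a * np.cos(i * np.pi / N)  # eigenvalue from Eq. (9)
    x += (v @ f) / (lam * N / 2) * v         # Eq. (17) with the 2/N factor

print(np.allclose(M @ x, f))  # True
```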
+---PAGE_BREAK---
+
+# 5 A numerical method for the solution of Au = c when A is an arbitrary matrix
+
+Here we take advantage of the closed-form solution for tridiagonal matrices to develop an efficient numerical method for a linear system of equations **A**u = **c** with an arbitrary **A**. Consider the matrix **A** below:
+
+$$
+\mathbf{A} = \begin{bmatrix}
+a_{11} & a_{12} & \dots & a_{1n} \\
+a_{21} & a_{22} & \dots & a_{2n} \\
+\vdots & \vdots & \ddots & \vdots \\
+a_{n1} & a_{n2} & \dots & a_{nn}
+\end{bmatrix} \tag{18}
+$$
+
+Let us decompose matrix $\mathbf{A}$ into a tridiagonal matrix, $\mathbf{A}_1$, and the remaining terms, $\mathbf{A}_2$. The entries $b$ and $a$ of the tridiagonal matrix $\mathbf{A}_1$ are chosen to be, respectively, the average of the diagonal entries (i.e., $a_{11}, \dots, a_{nn}$) and the average of the entries on the first super- and sub-diagonals (i.e., $a_{12}, \dots, a_{n-1,n}$ and $a_{21}, \dots, a_{n,n-1}$).
+
+$$
+\mathbf{A} = \begin{bmatrix}
+b & a & & \\
+a & b & \ddots & \\
+ & \ddots & \ddots & a \\
+ & & a & b
+\end{bmatrix} + \begin{bmatrix}
+a_{11} - b & a_{12} - a & \cdots & a_{1n} \\
+a_{21} - a & a_{22} - b & \cdots & a_{2n} \\
+\vdots & \vdots & \ddots & a_{n-1,n} - a \\
+a_{n1} & \cdots & a_{n,n-1} - a & a_{nn} - b
+\end{bmatrix} = \mathbf{A}_1 + \mathbf{A}_2. \quad (19)
+$$
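The averaging rule can be written as a small helper (a NumPy sketch; the function name `split_tridiagonal` and the 3×3 test matrix are illustrative, not from the text):

```python
import numpy as np

def split_tridiagonal(A):
    """Decompose A = A1 + A2, where A1 is the repetitive tridiagonal matrix
    whose diagonal entry b is the mean of diag(A) and whose off-diagonal
    entry a is the mean of the first super- and sub-diagonals of A."""
    n = A.shape[0]
    b = np.mean(np.diag(A))
    a = np.mean(np.concatenate([np.diag(A, 1), np.diag(A, -1)]))
    A1 = (np.diag(np.full(n, b))
          + np.diag(np.full(n - 1, a), 1)
          + np.diag(np.full(n - 1, a), -1))
    return A1, A - A1

A = np.array([[4.0, 1.0, 7.0],
              [2.0, 5.0, 8.0],
              [9.0, 3.0, 6.0]])
A1, A2 = split_tridiagonal(A)
print(A1[0, 0], A1[0, 1])  # 5.0 3.5  (b = mean(4,5,6), a = mean(1,8,2,3))
print(np.allclose(A1 + A2, A))  # True
```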
+
+We will have
+
+$$(\mathbf{A}_1 + \mathbf{A}_2)\mathbf{u} = \mathbf{c} \qquad (20)$$
+
+and
+
+$$
+\mathbf{A}_1 \mathbf{u} = \mathbf{c} - \mathbf{A}_2 \mathbf{u}. \tag{21}
+$$
+
+Replacing $\mathbf{A}_2$ by $\mathbf{A} - \mathbf{A}_1$
+
+$$
+\mathbf{A}_1 \mathbf{u} = \mathbf{c} - (\mathbf{A} - \mathbf{A}_1) \mathbf{u} \quad (22)
+$$
+
+Let us initialize **u** using the following approximation:
+
+$$
+\mathbf{A}_{1}\mathbf{u}^{0} = \mathbf{c} \tag{23}
+$$
+
+Since $\mathbf{A}_1$ is tridiagonal, according to the previous section we will have
+
+$$
+\mathbf{u}^0 = \sum_{i=1}^{N-1} \frac{\left[ \sin \frac{i\pi}{N}, \sin \frac{2i\pi}{N}, \dots, \sin \frac{(N-1)i\pi}{N} \right]^t \left[ \sin \frac{i\pi}{N}, \sin \frac{2i\pi}{N}, \dots, \sin \frac{(N-1)i\pi}{N} \right]}{b + 2a \cos \frac{i\pi}{N}} c \quad (24)
+$$
+
+Now, we can improve the results through:
+
+$$
+\mathbf{A}_1 \mathbf{u}^1 = \mathbf{c} - (\mathbf{A} - \mathbf{A}_1) \mathbf{u}^0 \qquad (25)
+$$
+
+In general, we can use the following iterative equation to solve the problem:
+
+$$
+\mathbf{A}_1 \mathbf{u}^{(n+1)} = \mathbf{c} - (\mathbf{A} - \mathbf{A}_1) \mathbf{u}^{(n)} \qquad (26)
+$$
+---PAGE_BREAK---
+
+Since $\mathbf{A}_1$ is tridiagonal, we can write the equation as follows
+
+$$ \mathbf{u}^{(n+1)} = \sum_{i=1}^{N-1} \frac{[\sin \frac{i\pi}{N}, \sin \frac{2i\pi}{N}, \dots, \sin \frac{(N-1)i\pi}{N}]^t [\sin \frac{i\pi}{N}, \sin \frac{2i\pi}{N}, \dots, \sin \frac{(N-1)i\pi}{N}]}{b + 2a \cos \frac{i\pi}{N}} [\mathbf{c} - (\mathbf{A}-\mathbf{A}_1)\mathbf{u}^{(n)}] \quad (27) $$
+
+As such, an iterative formula for solving a general linear system of equations is obtained through properties of tridiagonal matrices. The method converges when the spectral radius of the iteration matrix, $\mathbf{A}_1^{-1}(\mathbf{A}-\mathbf{A}_1)$, is less than 1. We did not formally derive sufficient conditions for convergence of the proposed method; however, several examples show that in many cases where the Jacobi, Gauss-Seidel, and SOR methods do not converge, the proposed method does.
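The spectral-radius criterion can be checked numerically. The sketch below (NumPy; the test matrix is hypothetical, chosen with off-diagonal entries larger than the diagonal) compares the spectral radius of the proposed iteration matrix with that of the Jacobi iteration matrix:

```python
import numpy as np

n = 20
# Hypothetical test matrix: nearly repetitive tridiagonal with off-diagonal
# entries (2.0) twice the diagonal (1.0), plus a small dense perturbation.
T = (np.diag(np.full(n, 1.0))
     + np.diag(np.full(n - 1, 2.0), 1)
     + np.diag(np.full(n - 1, 2.0), -1))
A = T + 0.001 * np.sin(np.arange(n * n, dtype=float)).reshape(n, n)

# Proposed method: A1 is the averaged tridiagonal part of A.
b = np.mean(np.diag(A))
a = np.mean(np.concatenate([np.diag(A, 1), np.diag(A, -1)]))
A1 = (np.diag(np.full(n, b))
      + np.diag(np.full(n - 1, a), 1)
      + np.diag(np.full(n - 1, a), -1))
rho_proposed = np.max(np.abs(np.linalg.eigvals(np.linalg.solve(A1, A - A1))))

# Jacobi iteration matrix: D^{-1}(D - A).
D = np.diag(np.diag(A))
rho_jacobi = np.max(np.abs(np.linalg.eigvals(np.linalg.solve(D, D - A))))

print(rho_proposed < 1.0 < rho_jacobi)  # True
```

Here the proposed iteration matrix has a small spectral radius because the matrix is nearly repetitive tridiagonal, while the Jacobi spectral radius exceeds 1 because the off-diagonals dominate the diagonal.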
+
+## 6 Computational complexity of the method
+
+For one iteration of the method, the dominant computational complexity is $O(n^2)$:
+
+* $(\mathbf{A} - \mathbf{A}_1)\mathbf{u}^n = \mathbf{M}\mathbf{V}$: multiplication of a matrix ($\mathbf{M}$) and a vector ($\mathbf{V}$), of complexity $O(n^2)$.
+
+For the summation in Eq. (27) we will have
+
+* $[\sin \frac{i\pi}{N}, \sin \frac{2i\pi}{N}, \dots, \sin \frac{(N-1)i\pi}{N}] [c - (\mathbf{A} - \mathbf{A}_1)\mathbf{u}^n] = \mathbf{P}^t \mathbf{V}$: multiplication of a row vector ($\mathbf{P}^t$) and a column vector ($\mathbf{V}$), of complexity $O(n)$. The outcome here is a scalar ($S$).
+
+* $[\sin \frac{i\pi}{N}, \sin \frac{2i\pi}{N}, \dots, \sin \frac{(N-1)i\pi}{N}]^t S = \mathbf{P} S$: multiplication of a column vector ($\mathbf{P}$) and a scalar ($S$), of complexity $O(n)$.
+
+* The two steps above are repeated $n$ times (once per term of the sum), leading to a computational complexity of $O(n^2)$.
+
+Therefore, the dominant computational complexity of the method for one iteration is $O(n^2)$. If $m$ iterations are required to achieve a desired accuracy, the overall complexity of the method is $O(mn^2)$. However, since the number of iterations required for convergence is typically much smaller than the dimension of matrix $\mathbf{A}$ (i.e., $m \ll n$), the complexity of the method is effectively $O(n^2)$.
+
+## 7 Examples
+
+**Example 1.** Consider the following linear system of equations:
+
+$$ \mathbf{A} = \begin{bmatrix} 10 & -1 & 2 & 0 \\ -1 & 11 & -1 & 3 \\ 2 & -1 & 10 & -1 \\ 0 & 3 & -1 & 8 \end{bmatrix}, \quad \mathbf{x} = \begin{bmatrix} x_1 \\ x_2 \\ x_3 \\ x_4 \end{bmatrix}, \quad \mathbf{b} = \begin{bmatrix} 6 \\ 25 \\ -11 \\ 15 \end{bmatrix} $$
+---PAGE_BREAK---
+
+The system of equations was solved using matrix inversion (i.e., the exact solution), the proposed method, and three iterative methods (Table 1). Matrix $\mathbf{A}_1$ in the proposed method and matrix $\mathbf{D}$ in Jacobi method are as follows:
+
+$$ \mathbf{A}_1 = \begin{bmatrix} 9.75 & -1 & 0 & 0 \\ -1 & 9.75 & -1 & 0 \\ 0 & -1 & 9.75 & -1 \\ 0 & 0 & -1 & 9.75 \end{bmatrix} \qquad \mathbf{D} = \begin{bmatrix} 10 & 0 & 0 & 0 \\ 0 & 11 & 0 & 0 \\ 0 & 0 & 10 & 0 \\ 0 & 0 & 0 & 8 \end{bmatrix} $$
+
+Table 1: Comparison between the exact solution, the proposed method, and three iterative methods for solving a matrix equation of 4 unknowns.
+
+| $x = A^{-1}b$ | 5 iterations | 5 iterations | 5 iterations | 5 iterations |
+|---|---|---|---|---|
+| Exact | Proposed | Jacobi | Gauss-Seidel | SOR (ω = 1.2) |
+| 1.0000 | 1.0003 | 0.9981 | 1.0001 | 1.0021 |
+| 2.0000 | 2.0008 | 2.0023 | 2.0000 | 1.9993 |
+| -1.0000 | -0.9995 | -1.0019 | -1.0000 | -0.9999 |
+| 1.0000 | 1.0008 | 1.0035 | 0.9999 | 1.0009 |
+| ‖Exact - Iterative‖ | 0.0012 | 0.0050 | 0.0001 | 0.0023 |
+
+It is observed that the convergence rate of the proposed method is higher than that of the Jacobi and SOR methods but slightly lower than that of the Gauss-Seidel method.
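For reference, the iteration of Eq. (27) applied to this example can be sketched as follows (a NumPy sketch; for simplicity the tridiagonal systems are solved with a direct solver rather than the eigen-expansion, which does not change the iterates):

```python
import numpy as np

# Coefficient matrix, right-hand side, and the A1 quoted in the text.
A = np.array([[10.0, -1.0, 2.0, 0.0],
              [-1.0, 11.0, -1.0, 3.0],
              [2.0, -1.0, 10.0, -1.0],
              [0.0, 3.0, -1.0, 8.0]])
b_vec = np.array([6.0, 25.0, -11.0, 15.0])
A1 = (np.diag(np.full(4, 9.75))
      + np.diag(np.full(3, -1.0), 1)
      + np.diag(np.full(3, -1.0), -1))

u = np.linalg.solve(A1, b_vec)  # u^0 from Eq. (23)
for _ in range(5):              # five iterations of Eq. (27)
    u = np.linalg.solve(A1, b_vec - (A - A1) @ u)

exact = np.linalg.solve(A, b_vec)  # exact solution [1, 2, -1, 1]
print(np.round(u, 3))
print(np.linalg.norm(exact - u) < 0.1)  # True
```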
+
+**Example 2. Black-Scholes equation**
+
+The Black-Scholes equation is a partial differential equation in mathematical finance that describes the price of an option over time:
+
+$$ \frac{\partial V}{\partial t} + \frac{1}{2}\sigma^2 s^2 \frac{\partial^2 V}{\partial s^2} + rs\frac{\partial V}{\partial s} - rV = 0, $$
+
+or in a more general form we can write:
+
+$$ \frac{\partial V}{\partial t} + a(s,t) \frac{\partial^2 V}{\partial s^2} + b(s,t) \frac{\partial V}{\partial s} + c(s,t)V = 0. $$
+
+Using the Crank-Nicolson finite difference method we will have:
+
+$$ \frac{V_{n,j+1} - V_{n,j}}{\Delta t} + \frac{a_{n,j+1}}{2} \left( \frac{V_{n+1,j+1} - 2V_{n,j+1} + V_{n-1,j+1}}{\Delta s^2} \right) + \frac{a_{n,j}}{2} \left( \frac{V_{n+1,j} - 2V_{n,j} + V_{n-1,j}}{\Delta s^2} \right) + \frac{b_{n,j+1}}{2} \left( \frac{V_{n+1,j+1} - V_{n-1,j+1}}{2\Delta s} \right) + \frac{b_{n,j}}{2} \left( \frac{V_{n+1,j} - V_{n-1,j}}{2\Delta s} \right) + \frac{1}{2} c_{n,j+1} V_{n,j+1} + \frac{1}{2} c_{n,j} V_{n,j} = 0. $$
+
+We can rewrite the equation as:
+
+$$ A_{n,j+1}V_{n-1,j+1}+(1+B_{n,j+1})V_{n,j+1}+C_{n,j+1}V_{n+1,j+1} = -A_{n,j}V_{n-1,j}+(1-B_{n,j})V_{n,j}-C_{n,j}V_{n+1,j}, $$
+---PAGE_BREAK---
+
+where
+
+$$
+\begin{align*}
+A_{n,j} &= \frac{1}{2}v_1 a_{n,j} - \frac{1}{4}v_2 b_{n,j} \\
+B_{n,j} &= -v_1 a_{n,j} + \frac{1}{2}\Delta t c_{n,j}, \\
+C_{n,j} &= \frac{1}{2}v_1 a_{n,j} + \frac{1}{4}v_2 b_{n,j}, \\
+v_1 &= \frac{\Delta t}{\Delta s^2} \quad \text{and} \quad v_2 = \frac{\Delta t}{\Delta s}.
+\end{align*}
+$$
+
+These equations hold for $n = 1, 2, \dots, N-1$. Boundary conditions provide two additional equations. In a matrix form and for boundary conditions of $V(0, t) = 0$ and $V(s_{\max}, t) = s_{\max} - Ee^{-r(T-t)}$, we will have
+
+$$
+\begin{align*}
+V_{0,j} &= 0, \\
+V_{N,j} &= N \Delta s - E e^{-r(j)\Delta t}, \\
+V_{0,j+1} &= 0, \\
+V_{N,j+1} &= N \Delta s - E e^{-r(j+1)\Delta t}.
+\end{align*}
+$$
+
+$$
+M_{j+1}^{L} V_{j+1} + r_{j+1}^{L} = M_{j}^{R} V_{j} + r_{j}^{R}
+$$
+
+$$
+\mathbf{M}_{j+1}^{\mathbf{L}} \mathbf{V}_{j+1} + \mathbf{r}_{j+1}^{\mathbf{L}} =
+\begin{bmatrix}
+1+B_{1,j+1} & C_{1,j+1} & & & & \\
+A_{2,j+1} & 1+B_{2,j+1} & C_{2,j+1} & & & \\
+& A_{3,j+1} & \cdots & \cdots & & \\
+& & \ddots & 1+B_{N-2,j+1} & C_{N-2,j+1} & \\
+& & & A_{N-1,j+1} & 1+B_{N-1,j+1} &
+\end{bmatrix}
+\begin{bmatrix}
+V_{1,j+1} \\ V_{2,j+1} \\ \vdots \\ V_{N-2,j+1} \\ V_{N-1,j+1}
+\end{bmatrix}
++
+\begin{bmatrix}
+A_{1,j+1}V_{0,j+1} \\ 0 \\ \vdots \\ 0 \\ C_{N-1,j+1}V_{N,j+1}
+\end{bmatrix}
+$$
+
+$$
+\mathbf{M}_j^{\mathbf{R}} \mathbf{V}_j + \mathbf{r}_j^{\mathbf{R}} =
+\begin{bmatrix}
+1-B_{1,j} & -C_{1,j} & & & \\
+-A_{2,j} & 1-B_{2,j} & -C_{2,j} & & \\
+& -A_{3,j} & \cdots & \cdots & \\
+& & \ddots & 1-B_{N-2,j} & -C_{N-2,j} \\
+& & & -A_{N-1,j} & 1-B_{N-1,j}
+\end{bmatrix}
+\begin{bmatrix}
+V_{1,j} \\ V_{2,j} \\ \vdots \\ V_{N-2,j} \\ V_{N-1,j}
+\end{bmatrix}
++
+\begin{bmatrix}
+-A_{1,j}V_{0,j} \\ 0 \\ \vdots \\ 0 \\ -C_{N-1,j}V_{N,j}
+\end{bmatrix}
+$$
+
+$$
+M_{j+1}^{L} V_{j+1} = M_{j}^{R} V_{j} + r_{j}^{R} - r_{j+1}^{L}
+$$
+
+The matrix and vectors on the right hand side of the equation are known so we can write
+the equation as
+
+$$
+M_{j+1}^L V_{j+1} = b
+$$
+
+Now, using the known matrix $\mathbf{M}_{j+1}^{\mathrm{L}}$ and vector $\mathbf{b}$, we can find vector $\mathbf{V}_{j+1}$. The obtained vector is then used on the right-hand side of the equation to find the vector $\mathbf{V}$ of the subsequent time step.
+
+Consider the following parameters for the problem:
+
+$\sigma = 0.25$ Volatility of the stock
+
+$r = 0.2$ Interest rate
+
+$S_{max}$ = 20 Maximum stock price
+
+$S_{min}$ = 0 Minimum stock price
+---PAGE_BREAK---
+
+$T = 1$ Maturity of the contract
+
+$E = 10$ Exercise price of the underlying
+
+$M = 1600$ Number of time points
+
+$N = 160$ Number of stock price points
+
+$\Delta t = \frac{T}{M} = 0.000625$ Time step
+
+$\Delta s = \frac{S_{max} - S_{min}}{N} = 0.125$ Price step
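With these parameters, the grid quantities and the coefficient arrays $A_{n,j}$, $B_{n,j}$, $C_{n,j}$ defined earlier can be set up as follows (a NumPy sketch; for the Black-Scholes case $a(s) = \frac{1}{2}\sigma^2 s^2$, $b(s) = rs$, $c(s) = -r$, so the coefficients are time-independent):

```python
import numpy as np

sigma, r = 0.25, 0.2
s_max, s_min = 20.0, 0.0
T, E = 1.0, 10.0
M_steps, N = 1600, 160

dt = T / M_steps            # time step: 0.000625
ds = (s_max - s_min) / N    # price step: 0.125
v1, v2 = dt / ds**2, dt / ds

s = s_min + ds * np.arange(1, N)   # interior grid points s_1 .. s_{N-1}
a = 0.5 * sigma**2 * s**2          # a(s)
bb = r * s                         # b(s)
c = -r * np.ones_like(s)           # c(s)

A = 0.5 * v1 * a - 0.25 * v2 * bb  # A_{n,j}
B = -v1 * a + 0.5 * dt * c         # B_{n,j}
C = 0.5 * v1 * a + 0.25 * v2 * bb  # C_{n,j}

print(dt, ds)  # 0.000625 0.125
```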
+
+Starting with $j = 0$, we will have
+
+$$ \mathbf{M}_{1}^{\mathrm{L}} \mathbf{V}_{1} = \begin{bmatrix} 1.0000 & 0.0000 & & & & \\ 0.0000 & 1.0000 & 0.0000 & & & \\ & 0.0000 & \ddots & \ddots & & \\ & & \ddots & 0.5186 & 0.2358 & \\ & & & 0.2487 & 0.5125 & 0.2389 \\ & & & & 0.2519 & 0.5063 \end{bmatrix}_{159 \times 159} \mathbf{V}_{1} = \mathbf{b} = \begin{bmatrix} 0 \\ 0 \\ \vdots \\ 9.75 \\ 7.39 \end{bmatrix}_{159 \times 1} $$
+
+The linear system of equations was solved using the three iterative methods, the proposed method, and matrix inversion (i.e., the exact solution). Matrix $\mathbf{A}_1$ in the proposed method and matrix $D$ in Jacobi method are as follows:
+
+$$ \mathbf{A}_1 = \begin{bmatrix} 0.8339 & 0.0828 & & & \\ 0.0828 & 0.8339 & 0.0828 & & \\ & 0.0828 & \ddots & \ddots & \\ & & \ddots & 0.8339 & 0.0828 \\ & & & 0.0828 & 0.8339 \end{bmatrix}_{159 \times 159} $$
+
+$$ \mathbf{D} = \begin{bmatrix} 1.0000 & & & & \\ & \ddots & & & \\ & & 0.5186 & & \\ & & & 0.5124 & \\ & & & & 0.5063 \end{bmatrix}_{159 \times 159} $$
+
+Vector $\mathbf{V}_1$ of dimension $159 \times 1$ was obtained as (Table 2):
+
+It can be observed that the difference between the exact and proposed solutions (the norm of the difference between the two vectors) is smaller than 0.1 after 8 iterations (Table 2, Fig. 1). The proposed method outperformed the Jacobi and Gauss-Seidel methods and performed comparably to the SOR method. To achieve almost the same accuracy using the Jacobi and Gauss-Seidel methods, we needed 34 and 10 iterations, respectively (Table 2, Fig. 1). In this problem $\omega = 1.2$ was the optimal value of $\omega$ in the SOR algorithm. It is notable that although the SOR algorithm matched the performance of the proposed method, adjusting $\omega$ to achieve the best performance requires trial and error, leading to several additional iterations.
+
+Now with $\mathbf{V}_1$ in hand, we can continue the solution to calculate $\mathbf{V}_2$ in the next time step. The procedure is repeated until all unknowns are found ($j = 0, 1, \dots, M-1$, with $M = 1600$). This means that to solve the problem using the Jacobi and Gauss-Seidel methods, we will need
+---PAGE_BREAK---
+
+Figure 1: The proposed and SOR methods were very close to the exact solution after 8 iterations (norm of error ~ 0.1). The Gauss-Seidel method reached the same accuracy after 10 iterations, whereas the Jacobi method needed ~ 34 iterations to become comparable with the exact solution. The results indicated in the figure are for 8 iterations in all methods.
+
+Table 2: Comparison between the exact solution, the proposed method, and three iterative methods for solving a matrix equation of 159 unknowns. After 8 iterations the proposed method outperformed the Jacobi and Gauss-Seidel methods and performed comparably to the SOR method. The Jacobi and Gauss-Seidel methods reached the desired accuracy after 34 and 10 iterations, respectively. In this problem $\omega = 1.2$ was the optimal value of $\omega$ in the SOR algorithm. Although the SOR algorithm matched the performance of the proposed method, adjusting $\omega$ required trial and error, leading to several additional iterations.
+
+| $V_1 = (M_1^L)^{-1}b$ | 8 iterations | 8 iterations | 8 iterations | 8 iterations | 34 iterations | 10 iterations |
+|---|---|---|---|---|---|---|
+| Exact | Proposed | Jacobi | Gauss-Seidel | SOR (ω = 1.2) | Jacobi | Gauss-Seidel |
+| 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0000 |
+| 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0000 |
+| ⋮ | ⋮ | ⋮ | ⋮ | ⋮ | ⋮ | ⋮ |
+| 0.3762 | 0.3762 | 0.3762 | 0.3762 | 0.3762 | 0.3762 | 0.3762 |
+| 0.5012 | 0.5012 | 0.5012 | 0.5012 | 0.5012 | 0.5012 | 0.5012 |
+| 0.6262 | 0.6262 | 0.6262 | 0.6262 | 0.6262 | 0.6262 | 0.6262 |
+| ⋮ | ⋮ | ⋮ | ⋮ | ⋮ | ⋮ | ⋮ |
+| 9.5474 | 9.4974 | 7.0627 | 9.4623 | 9.5196 | 9.5040 | 9.5051 |
+| 9.8749 | 9.9234 | 7.9059 | 9.9373 | 9.8926 | 9.8395 | 9.9056 |
+| 9.6928 | 9.6620 | 8.5132 | 9.6618 | 9.6851 | 9.6717 | 9.6776 |
+| ‖Exact - Iterative‖ | 0.0914 | 5.8903 | 0.1794 | 0.1082 | 0.0938 | 0.0986 |
+
+approximately $34 \times 1600 = 54{,}400$ and $10 \times 1600 = 16{,}000$ iterations, whereas to solve the problem with
+---PAGE_BREAK---
+
+the same accuracy using the proposed method we will need approximately $8 \times 1600 = 12{,}800$ iterations.
+
+**Example 3.** Another advantage of the proposed method is its power in solving problems with large off-diagonal entries. In such cases the Jacobi, Gauss-Seidel, and SOR methods are either slow to converge or fail to converge.
+
+Consider a coefficient matrix $\mathbf{M}$ with relatively large off-diagonal entries above and below the main diagonal, and the same vector $\mathbf{b}$ as in the previous example:
+
+$$ \mathbf{M} = \begin{bmatrix}
+2.0500 & 0.5000 & & & & \\
+0.5001 & 2.0500 & 0.5000 & & & \\
+ & 0.5002 & \ddots & \ddots & & \\
+ & & \ddots & 1.5686 & 0.7358 & \\
+ & & & 0.7487 & 1.5625 & 0.7389 \\
+ & & & & 0.7519 & 1.5563
+\end{bmatrix}_{159 \times 159} \quad \text{and} \quad \mathbf{b} = \begin{bmatrix}
+0 \\
+0 \\
+\vdots \\
+9.75 \\
+7.39
+\end{bmatrix}_{159 \times 1}
+$$
+
+Figure 2: In a problem with relatively large off-diagonal entries, the norm of error for the proposed solution was ~ 0.02 after 3 iterations (the exact and proposed solutions are not visually distinguishable in the figure). The Jacobi, Gauss-Seidel, and SOR methods needed 55, 14, and 11 iterations, respectively, to achieve the same accuracy. The results indicated in the figure are for 3 iterations in all methods.
+
+It can be observed that the convergence rate of the Jacobi, Gauss-Seidel, and SOR methods is lower than that of the proposed method (Table 3, Fig. 2). Now, let us further increase the value
+---PAGE_BREAK---
+
+Table 3: Comparison between the exact solution, the proposed method, and three iterative methods for a problem with relatively large off-diagonal entries. The proposed method reached the error of ~0.02 after 3 iterations whereas Jacobi, Gauss-Seidel, and SOR methods needed 55, 14, and 11 iterations respectively to achieve the same accuracy. In this problem $\omega = 1.2$ was the optimal value of $\omega$ in SOR algorithm.
+
+| $V_1 = M^{-1}b$ | 3 iterations | 3 iterations | 3 iterations | 3 iterations | 55 iterations | 14 iterations | 11 iterations |
+|---|---|---|---|---|---|---|---|
+| Exact | Proposed | Jacobi | Gauss-Seidel | SOR (ω = 1.2) | Jacobi | Gauss-Seidel | SOR (ω = 1.2) |
+| 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0000 |
+| 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0000 |
+| ⋮ | ⋮ | ⋮ | ⋮ | ⋮ | ⋮ | ⋮ | ⋮ |
+| 0.7379 | 0.7379 | 0.8159 | 0.7505 | 0.8013 | 0.7379 | 0.7379 | 0.7380 |
+| 0.7789 | 0.7789 | 0.8634 | 0.7922 | 0.8457 | 0.7789 | 0.7789 | 0.7790 |
+| 0.8199 | 0.8199 | 0.9112 | 0.8340 | 0.8902 | 0.8199 | 0.8199 | 0.8200 |
+| ⋮ | ⋮ | ⋮ | ⋮ | ⋮ | ⋮ | ⋮ | ⋮ |
+| 3.1328 | 3.1215 | 4.9600 | 2.9120 | 2.9274 | 3.1402 | 3.1329 | 3.1279 |
+| 3.2304 | 3.2404 | 4.7282 | 3.4229 | 3.3536 | 3.2359 | 3.2304 | 3.2335 |
+| 3.1907 | 3.1847 | 3.9699 | 3.0977 | 3.1399 | 3.1936 | 3.1907 | 3.1893 |
+| ‖Exact - Iterative‖ | 0.0217 | 7.4424 | 0.5266 | 1.6869 | | | |
+
+of off-diagonal entries:
+
+$$ \mathbf{M} = \begin{bmatrix}
+2.0500 & 0.5000 & & & & \\
+0.5001 & 2.0500 & 0.5000 & & & \\
+ & 0.5002 & \ddots & \ddots & & \\
+ & & \ddots & 1.5686 & 0.7358 & \\
+ & & & 0.7487 & 1.5625 & 0.7389 \\
+ & & & & 0.7519 & 1.5563
+\end{bmatrix}_{159 \times 159}
+$$
+
+It can be observed that while the error of the proposed method after 15 iterations is less than 0.06, the other three methods do not converge. In the SOR method, all other values of $0 < \omega < 2$ also lead to divergence of the solution.
+
+# 8 Discussion and Conclusions
+
+There are several methods to solve a system of linear equations. In general, these methods can be classified as direct and iterative methods. Given the computational demand of direct methods, iterative methods are more often used in practice. One of the simplest iterative methods used to solve a system of linear equations is the Jacobi method. This method, however, has a slow rate of convergence and may also diverge in many problems whose coefficient matrices are not diagonally dominant. Therefore, several algorithms (Gauss-Seidel and SOR amongst others) have been developed to improve the efficiency of the Jacobi method. These methods, however, have their own advantages and disadvantages, so that efficient new methods are still needed for many challenging problems.
+
+In this paper a new iterative method for system of linear equations was presented. We first developed a closed-form solution for repetitive tridiagonal matrices. Then, for a given
+---PAGE_BREAK---
+
+Table 4: Comparison between the exact solution, the proposed method, and three iterative methods for a problem with dominant off-diagonal entries. The error for the proposed solution is ~ 0.05 after 15 iterations, while the other iterative methods did not converge. In the SOR method, all other values of $0 < \omega < 2$ also led to divergence of the solution.
+
+| $V = M^{-1}b$ | 15 iterations | 15 iterations | 15 iterations | 15 iterations |
+|---|---|---|---|---|
+| Exact | Proposed | Jacobi | Gauss-Seidel | SOR (ω = 1.2) |
+| 0.9948 | 1.0002 | 0 | 0 | 0 |
+| -0.5098 | -0.5117 | 0 | 0 | 0 |
+| -0.7335 | -0.7383 | 0 | 0 | 0 |
+| ⋮ | ⋮ | ⋮ | ⋮ | ⋮ |
+| 0.8424 | 0.8380 | 0 | 0 | 0 |
+| -0.7934 | -0.7886 | 0 | 0 | 0 |
+| -0.4439 | -0.4413 | 0 | 0 | 0 |
+| ⋮ | ⋮ | ⋮ | ⋮ | ⋮ |
+| 1.6268 | 1.6300 | 1.2608e+12 | 2.2481e+54 | 3.8421e+62 |
+| 1.7270 | 1.7315 | 9.0905e+11 | -7.2086e+54 | -1.4778e+63 |
+| 0.0330 | 0.0266 | 4.8016e+11 | 1.9694e+55 | 4.8308e+63 |
+| ‖Exact - Iterative‖ | | | | |
+
+arbitrary matrix, it was decomposed into a tridiagonal matrix and the remainder. The remainder was moved to the right-hand side of the equation so that the vector of unknowns appeared on both sides of the equation. As such, a typical iterative formulation was obtained, where the solution is reached through iterations over the closed-form solution of tridiagonal matrices. One advantage of the proposed method is its high rate of convergence without additional computational cost (the computational complexity of the method is $O(n^2)$). Furthermore, as opposed to many iterative algorithms developed to improve the convergence rate of the Jacobi method, the proposed method does not need any additional parameters to be adjusted. For instance, in the SOR algorithm there exists a parameter whose adjustment requires trial and error, which adds cost to the problem. As indicated in several examples, the proposed algorithm is efficient in tackling problems involving matrices with large off-diagonal entries, whereas methods like Jacobi, Gauss-Seidel, and SOR either have a slow rate of convergence or are unable to converge in such problems. This feature of the algorithm arises because the off-diagonal entries are taken directly into account when forming the tridiagonal matrix.
+
+## References
+
+[1] Abolpour Mofrad, A., Sadeghi, M.R., Panario, D., Solving sparse linear systems of equations over finite fields using bit-flipping algorithm, Linear Algebra and its Applications, 439(7) (2013), 1815-1824.
+---PAGE_BREAK---
+
+[2] Ahamed, A.K.Ch., and Magoulès, F., Efficient implementation of Jacobi iterative method for large sparse linear systems on graphic processing units, The Journal of Supercomputing, 73(8) (2017) 3411-3432.
+
+[3] Alyahya, H., Mehmood, R., and Katib, I., Parallel Iterative Solution of Large Sparse Linear Equation Systems on the Intel MIC Architecture, Smart Infrastructure and Applications, (2019) 377-407.
+
+[4] Barrett, R., Berry, M., Chan, T.F., Demmel, J., Donato, J., Dongarra, J., Eijkhout, V., Pozo, R., Romine, C., and Van der Vorst, H., Templates for the Solution of Linear Systems: Building Blocks for Iterative Methods, 2nd ed. Philadelphia, PA: SIAM, (1994).
+
+[5] Bhrawy, A.H., An efficient Jacobi pseudospectral approximation for nonlinear complex generalized Zakharov system, Appl. Math. Comput., 247 (2014) 30-46.
+
+[6] Bhrawy, A.H., and Zaky, M.A., A method based on the Jacobi tau approximation for solving multi-term time-space fractional partial differential equations, J. Comput. Phys., 281 (2015) 876-895.
+
+[7] Bohacek, J., Kharicha, A., Ludwig, A., Wu, M., Holzmann, T., and Karimi-Sibakib, E., A GPU solver for symmetric positive-definite matrices vs. traditional codes, Computers & Mathematics with Applications, 78(9) (2019) 2933-2943.
+
+[8] Dias da Cunha, R., and Hopkins, T., The Parallel Iterative Methods (PIM) package for the solution of systems of linear equations on parallel computers, Applied Numerical Mathematics, 19(1-2) (1995) 33-50.
+
+[9] Doha, E.H., Bhrawy, A.H., and Ezz-Eldien, S.S., A new Jacobi operational matrix: An application for solving fractional differential equations, Applied Mathematical Modelling, 36 (2012) 4931-4943.
+
+[10] Doha, E.H., Bhrawy, A.H., and Ezz-Eldien, S.S., Efficient Chebyshev spectral methods for solving multi-term fractional orders differential equations, Applied Mathematical Modelling, 35(12) (2011) 5662-5672.
+
+[11] Doha, E.H., Bhrawy, A.H., and Hafez, R.M., A Jacobi-Jacobi dual-Petrov-Galerkin method for third- and fifth-order differential equations, Math. Comput. Modell., 53 (2011) 1820-1832.
+
+[12] Ehrlich, L.W., An Ad-Hoc SOR method, J. Comput. Phys., 44(1) (1981) 31-45.
+
+[13] Golub, G., and O'Leary, D., Some history of the conjugate gradient and Lanczos methods, SIAM Rev., 31 (1989) 50-102.
+
+[14] Golub, G., and Van Loan, C., Matrix Computations, second edition, The Johns Hopkins University Press, Baltimore, (1989).
+---PAGE_BREAK---
+
+[15] Guan, J., Kalhoro, Z.A., and Chandio, A.A., Tridiagonal iterative method for linear systems, University of Sindh Journal of Information and Communication Technology, (USJICT), 1(1) (2017).
+
+[16] Hadjidimos, A., Optimum stationary and nonstationary iterative methods for the solution of singular linear systems, Numerische Mathematik, 51(5) (1987) 517-530.
+
+[17] Hageman, L., and Young, D., Applied Iterative Methods, New York: Academic Press, (1981).
+
+[18] Hezari, D., Edalatpour, V., and Khojasteh Salkuyeh D., Preconditioned GSOR iterative method for a class of complex symmetric system of linear equations, Numerical Linear Algebra with Applications, 22(4) (2015) 761-776.
+
+[19] Kaveh, A., Rahami, H., and Shojaei, I., Swift Analysis of Civil Engineering Structures Using Graph Theory Methods, Springer Nature, (2020)
+
+[20] Kebede, T., Second Degree Refinement Jacobi Iteration Method for Solving System of Linear Equation, International Journal of Computing Science and applied mathematics, 3(1) (2017) 5-10.
+
+[21] Pan, V.Y., and Qian, G., Solving linear systems of equations with randomization, augmentation and aggregation, Linear Algebra and its Applications, 437(12) (2012) 2851-2876.
+
+[22] Pratapa, Ph.P., Suryanarayana, Ph., and Pask, J.E., Anderson acceleration of the Jacobi iterative method: An efficient alternative to Krylov methods for large, sparse linear systems, Journal of Computational Physics, 306(1) (2016) 43-54.
+
+[23] Santos, N.R., and Evans, D.J., Non-stationary iterative solution of positive definite linear systems, International Journal of Computer Mathematics, (2007) 289-304.
+
+[24] Tian, Zh., Tian, M., Zhang, Y., and Wen, P., An iteration method for solving the linear system Ax=b, Computers & Mathematics with Applications, 75(8) (2018) 2710-2722.
+
+[25] Todini, E., and Pilati, S., A Gradient Algorithm for the Analysis of Pipe Networks, in International Conference on Computer Applications for Water Supply and Distribution. Leicester, UK., (1987).
+
+[26] Varga, R., Matrix Iterative Analysis. Englewood Cliffs, NJ: Prentice-Hall, (1962).
+
+[27] Webster, R., Efficient Algebraic Multigrid Solvers with Elementary Restriction and Prolongation, International Journal for Numerical Methods in Fluids, 28 (1998) 317-336.
+---PAGE_BREAK---
+
+[28] Xiao-yong, Z., and Junlin, L., Convergence analysis of Jacobi pseudo-spectral method for the Volterra delay integro-differential equations, Appl. Math. info. Sci., 9 (2015) 135-145.
+
+[29] Young, D., Iterative Solutions of Large Linear Systems, New York: Academic Press, (1971).
+
+[30] Yueh, W.Ch., Eigenvalues of Several Tridiagonal Matrices, Applied Mathematics E-Notes, 5 (2005) 66-74.
\ No newline at end of file
diff --git a/samples/texts_merged/3308727.md b/samples/texts_merged/3308727.md
new file mode 100644
index 0000000000000000000000000000000000000000..0250a072371b38fc1d4a5c99e3cb43cdc2eda439
--- /dev/null
+++ b/samples/texts_merged/3308727.md
@@ -0,0 +1,474 @@
+
+---PAGE_BREAK---
+
+**Yutian Chen**
+
+Department of Engineering, University of Cambridge, Cambridge CB2 1PZ, UK
+
+**Zoubin Ghahramani**
+
+Department of Engineering, University of Cambridge, Cambridge CB2 1PZ, UK
+Alan Turing Institute, 96 Euston Road, London NW1 2DB, UK
+
+YUTIAN.CHEN@ENG.CAM.AC.UK
+
+ZOUBIN@ENG.CAM.AC.UK
+
+**Abstract**
+
+Drawing a sample from a discrete distribution is one of the building components for Monte Carlo methods. Like other sampling algorithms, discrete sampling suffers from the high computational burden in large-scale inference problems. We study the problem of sampling a discrete random variable with a high degree of dependency that is typical in large-scale Bayesian inference and graphical models, and propose an efficient approximate solution with a subsampling approach. We make a novel connection between the discrete sampling and Multi-Armed Bandits problems with a finite reward population and provide three algorithms with theoretical guarantees. Empirical evaluations show the robustness and efficiency of the approximate algorithms in both synthetic and real-world large-scale problems.
+
+1. Introduction
+
+Sampling a random variable from a discrete (conditional) distribution is one of the core operations in Monte Carlo methods. It is a ubiquitous and often necessary component for inference algorithms such as Gibbs sampling and particle filtering. Applying discrete sampling to large-scale problems has been a challenging task, as for other Monte Carlo algorithms, due to the high computational burden. Various approaches have been proposed to address different dimensions of “large scale”. For example, distributed algorithms have been used to sample a model with a large number of discrete variables (Newman et al., 2009; Bratières et al., 2010; Wu et al., 2011), and smart transition kernels have been described for Markov chain Monte Carlo (MCMC) algorithms to efficiently sample a single variable with a large or even infinite state space (Li et al., 2014; Kalli et al., 2011). This paper is focused on another dimension of “large scale”, where the variable to sample has a large degree of statistical dependency.
+
+Consider a random variable with a finite domain $X \in \mathcal{X}$ and a distribution in the following form
+
+$$p(X = x) \propto \tilde{p}(X = x), \text{ with } \tilde{p}(X = x) = f_0(x) \prod_{n=1}^{N} f_n(x), \quad (1)$$
+
+where $f_n$ can be any function of $x$. Such distributions occur frequently in machine learning problems. For example, in Bayesian inference for a model with parameter $X$ and $N$ i.i.d. observations $\mathcal{D} = \{\mathbf{y}_n\}_{n=1}^N$, the posterior distribution of $X$ depends on all the observations when sufficient statistics are not available. The unnormalized posterior distribution can be written as $\tilde{p}(X|\mathcal{D}) = p(X) \prod_{i=1}^N p(\mathbf{y}_i|X)$. In undirected graphical model inference problems where a node $X_i$ appears in $N$ potential functions, the distribution of $X_i$ depends on the values of all of the $N$ functions. The unnormalized conditional distribution is $\tilde{p}(X_i|\mathbf{x}_{-i}) = \prod_{n=1}^N \phi_n(X_i, \mathbf{x}_{-i})$, where $\mathbf{x}_{-i}$ denotes the values of all the other nodes in the graph and $\phi_n$ denotes a potential function that includes $X_i$ in its scope. In this paper we study how to sample a discrete random variable $X$ in a manner that is scalable in $N$.
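To make the cost concrete: once $\tilde{p}$ is evaluated, exact sampling of a small discrete $X$ is cheap, but evaluating $\tilde{p}$ itself requires touching all $N$ factors. A NumPy sketch (the Gaussian log-factors here are synthetic stand-ins for likelihood or potential terms, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: |X| = 3 states, N = 50 factors f_n(x); working in
# log space turns the product in Eq. (1) into a sum over all N factors.
log_f = rng.normal(scale=0.05, size=(50, 3))   # log f_n(x), n = 1..N
log_p_tilde = log_f.sum(axis=0)                # O(N) cost per evaluation

# Exact sampling: normalize (stably, by subtracting the max) and draw
# from the resulting categorical distribution.
p = np.exp(log_p_tilde - log_p_tilde.max())
p /= p.sum()
samples = rng.choice(3, size=20000, p=p)

freq = np.bincount(samples, minlength=3) / 20000
print(np.max(np.abs(freq - p)) < 0.02)  # True
```

The $O(N)$ sum over factors is exactly the cost that the subsampling approach studied in this paper aims to reduce.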
+
+A common approach to addressing the big-data problem is divide-and-conquer, which uses parallel or distributed computing resources to process data in parallel and then synchronizes the results periodically, or merely once at the end (Scott et al., 2013; Medlar et al., 2013; Xu et al., 2014).
+
+An orthogonal approach has been studied for the Metropolis-Hastings (MH) algorithm in a general state space by running a sampler with subsampled data. This approach can be combined easily with the distributed computing idea for even better scalability (e.g. Ahn et al., 2015).
+
+Maclaurin & Adams (2015) introduced an MH algorithm
+---PAGE_BREAK---
+
+in an augmented state space that could achieve higher efficiency than the standard MH by processing only a subset of active data every iteration while still preserving the correct stationary distribution. But the introduction of auxiliary variables might also slow down the overall mixing rate in the augmented space.
+
+Highly scalable approximate MH algorithms have also been proposed within the subsampling approach. The stochastic gradient Langevin dynamics (SGLD) algorithm (Welling & Teh, 2011) and its extensions (Ahn et al., 2012; Chen et al., 2014; Ding et al., 2014) introduced efficient proposal distributions based on subsampled data. Approximate algorithms induce bias in the stationary distribution of the Markov chain, but given a fixed amount of runtime they can reduce the expected error of the Monte Carlo estimate through a proper trade-off between variance and bias, by mixing faster with respect to runtime. This is particularly important for large-scale learning problems, where runtime is one of the limiting factors for generalization performance (Bottou & Bousquet, 2008). However, the stochastic gradient MCMC approach usually skips the rejection step in order to obtain sublinear time complexity, and the induced bias is very hard to estimate or control.
+
+Another line of research on approximate subsampled MH algorithms does not ignore the rejection step but instead controls the error with an approximate rejection step based on a subset of data (Korattikara et al., 2014; Bardenet et al., 2014). The bias can thus be better controlled (Mitrophanov, 2005; Pillai & Smith, 2014). The idea has also been extended to slice sampling (DuBois et al., 2014) and Gibbs sampling for binary variables (Korattikara et al., 2014).
+
+In this paper we follow the last line of research and propose a novel approximate sampling algorithm to improve the scalability of sampling discrete distributions. We first reformulate the problem in Eq. 1 as a Multi-Armed Bandit (MAB) problem with a finite reward population via the Gumbel-Max trick (Papandreou & Yuille, 2011), and then propose three algorithms with theoretical guarantees on the approximation error and an upper bound of $N|\mathcal{X}|$ on the sample size. This is, to our knowledge, the first attempt to address the discrete sampling problem with a large number of dependencies, and our work will likely contribute to a more complete library of scalable MCMC algorithms. Moreover, the racing algorithm in Sec. 3.3 provides a unified framework for subsampling-based discrete sampling, MH (Korattikara et al., 2014; Bardenet et al., 2014) and slice sampling (DuBois et al., 2014) algorithms, as discussed in Sec. 4. We also show in the experiments that our algorithm can be combined straightforwardly with stochastic gradient MCMC to achieve both high efficiency and controlled bias. Lastly, the proposed algorithms are also of interest in their own right for MAB problems in this particular setting.
+
+We first review an alternative way of drawing discrete variables and build a connection with MABs in Sec. 2, then propose three algorithms in Sec. 3. We discuss related work in Sec. 4 and evaluate the proposed algorithms on both synthetic data and real-world problems of Bayesian inference and graphical model inference in Sec. 5. In particular, we show how the proposed sampler can be combined conveniently, as a building component, with another subsampling-based sampler for a hierarchical Bayesian model. Sec. 6 concludes the paper with a discussion.
+
+## 2. Approximate Discrete Sampling
+
+### 2.1. Discrete Sampling as an Optimization Problem
+
+The common procedure to sample $X$ from a discrete domain $\mathcal{X} = \{1, 2, \dots, D\}$ is to first normalize $\tilde{p}(X)$ and compute the CDF $F(X = x) = \sum_{i=1}^{x} p(X = i)$. Then draw a uniform random variable $u \sim \text{Uniform}(0, 1]$, and find $x$ that satisfies $F(x - 1) < u \le F(x)$. This procedure requires computing the sum of all the unnormalized probabilities. For $\tilde{p}$ in the form of Eq. 1 this is $O(ND)$.
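The standard procedure can be sketched in a few lines of NumPy; the function name and toy distribution below are ours, not from the paper. Note that every entry of $\tilde{p}$ must be evaluated before a single draw is possible, which is where the $O(ND)$ cost comes from.

```python
import numpy as np

def sample_by_cdf_inversion(log_p_tilde, rng):
    """Draw one sample from p(X) proportional to exp(log_p_tilde).

    Every unnormalized probability must be computed up front; when each
    log p~(x) is a sum over N factor terms, this costs O(ND) per draw.
    """
    log_p = log_p_tilde - np.max(log_p_tilde)   # stabilize before exponentiating
    p = np.exp(log_p)
    p /= p.sum()                                # normalize
    cdf = np.cumsum(p)                          # F(x)
    u = rng.uniform(0.0, 1.0)
    return int(np.searchsorted(cdf, u))         # smallest x with F(x) >= u

rng = np.random.default_rng(0)
log_p_tilde = np.log(np.array([0.1, 0.2, 0.7]))
draws = [sample_by_cdf_inversion(log_p_tilde, rng) for _ in range(20000)]
```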
+
+An alternative procedure is to first draw $D$ i.i.d. samples from the standard Gumbel distribution¹ $\varepsilon_i \sim \text{Gumbel}(0, 1)$, and then solve the following optimization problem:
+
+$$x = \underset{i \in \mathcal{X}}{\operatorname{argmax}} \log \tilde{p}(i) + \varepsilon_i. \qquad (2)$$
+
+It is shown in Kuzmin & Warmuth (2005) that $x$ follows the distribution $p(X)$. With this method, after drawing random variables that do not depend on $\tilde{p}$, we turn a random sampling problem into an optimization problem. While the computational complexity of drawing an *exact* sample is the same, an *approximate* algorithm may save computations by avoiding accurate evaluation of $\tilde{p}(X = x)$ when $x$ is considered unlikely to be the maximum, as discussed next.
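A minimal sketch of the Gumbel-Max procedure in Eq. 2 (the function name is ours); the agreement of its empirical frequencies with $p(X)$ is easy to check:

```python
import numpy as np

def gumbel_max_sample(log_p_tilde, rng):
    """Eq. (2): x = argmax_i log p~(i) + eps_i, with eps_i ~ Gumbel(0, 1)."""
    u = rng.uniform(size=log_p_tilde.shape)
    eps = -np.log(-np.log(u))        # standard Gumbel via inverse CDF (footnote 1)
    return int(np.argmax(log_p_tilde + eps))

rng = np.random.default_rng(1)
log_p_tilde = np.log(np.array([0.1, 0.2, 0.7]))  # the normalizer is irrelevant
counts = np.bincount(
    [gumbel_max_sample(log_p_tilde, rng) for _ in range(20000)], minlength=3)
freqs = counts / 20000
```

An exact draw still touches every entry of $\log \tilde{p}$, but an approximate maximizer can stop refining $\log \tilde{p}(x)$ for arms that are clearly not the argmax, which is what the bandit view exploits.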
+
+### 2.2. Approximate Discrete Sampling as a Multi-Armed Bandit Problem
+
+In a Multi-Armed Bandit (MAB) problem, the $i$'th bandit is a slot machine with an arm, which when pulled generates an i.i.d. reward $l_i$ from a distribution associated with that arm with an unknown mean $\mu_i$. The optimal arm identification problem for MABs (Bechhofer, 1958; Paulson, 1964) in the fixed confidence setting is to find the arm with the highest mean reward with a confidence $1 - \delta$ using as few pulls as possible.
+
+¹The Gumbel distribution models the distribution of the maximum of extreme values. If $Z \sim \text{Exp}(1)$, then $-\log(Z) \sim \text{Gumbel}(0, 1)$; hence $\varepsilon$ can be drawn as $-\log(-\log(u))$ with $u \sim U[0, 1]$.
+
+---PAGE_BREAK---
+
+Under the assumption of Eq. 1, the solution of Eq. 2 can be expressed as
+
+$$
+\begin{align*}
+x &= \underset{i \in \mathcal{X}}{\operatorname{argmax}} \sum_{n=1}^{N} \log f_n(i) + \log f_0(i) + \varepsilon_i \\
+&= \underset{i \in \mathcal{X}}{\operatorname{argmax}} \sum_{n=1}^{N} \underbrace{\left( \log f_n(i) + \frac{1}{N} (\log f_0(i) + \varepsilon_i) \right)}_{\stackrel{\text{def}}{=} l_{i,n}} \\
+&= \underset{i \in \mathcal{X}}{\operatorname{argmax}} \frac{1}{N} \sum_{n=1}^{N} l_{i,n} = \underset{i \in \mathcal{X}}{\operatorname{argmax}} \mathbb{E}_{l_i \sim \text{Uniform}(\mathcal{L}_i)} [l_i] \\
+&\stackrel{\text{def}}{=} \underset{i \in \mathcal{X}}{\operatorname{argmax}} \mu_i
+\end{align*}
+\tag{3}
+$$
+
+where $\mathcal{L}_i \stackrel{\text{def}}{=} \{l_{i,1}, l_{i,2}, \dots, l_{i,N}\}$. After drawing $D$ Gumbel variables $\varepsilon_i$, we turn the discrete sampling problem into the optimal arm identification problem in MABs where the reward $l_i$ is uniformly sampled from a finite population $\mathcal{L}_i$. An approximate algorithm that solves the problem with a fixed confidence may avoid drawing all the rewards from an obviously sub-optimal arm and save computations. We show the induced bias in the sample distribution as follows with the proof in Appx. A.1.
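The decomposition in Eq. 3 is exact, which is easy to verify numerically; the toy factors below are ours:

```python
import numpy as np

rng = np.random.default_rng(2)
D, N = 5, 100                                   # illustrative sizes
log_f = rng.normal(scale=0.1, size=(N, D))      # stand-ins for log f_n(i)
log_f0 = rng.normal(size=D)                     # stand-in for log f_0(i)
eps = -np.log(-np.log(rng.uniform(size=D)))     # one Gumbel draw per arm

# Direct Gumbel-Max solution of Eq. (2)
x_direct = int(np.argmax(log_f.sum(axis=0) + log_f0 + eps))

# Bandit view of Eq. (3): rewards l_{i,n} = log f_n(i) + (log f_0(i) + eps_i)/N,
# and the sample x is the arm with the highest mean reward mu_i
rewards = log_f + (log_f0 + eps) / N            # shape (N, D), column i = arm i
mu = rewards.mean(axis=0)
x_bandit = int(np.argmax(mu))
```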
+
+**Proposition 1.** If an algorithm solves (2) exactly with probability at least $1 - \delta$ for any value of $\varepsilon$, then the total variation distance between the sample distribution $\hat{p}$ and the true distribution is bounded as
+
+$$
+\|\hat{p}(X) - p(X)\|_{\text{TV}} \leq \delta \quad (4)
+$$
+
+When the sampler is applied as a transition kernel in the MCMC framework, we can immediately apply the theory of Mitrophanov (2005); Pillai & Smith (2014) to show that the approximate Markov chain satisfies uniform ergodicity under regularity conditions, and analyses of the convergence rate are readily available under various assumptions. The discrete sampling problem of this paper thus reduces to finding a good MAB algorithm for Eq. 2 in our problem setting.
+
+## 3. Algorithms for MABs with a Finite Population and Fixed Confidence
+
+The key difference between our problem and regular MABs is that our rewards are generated from a finite population, whereas regular MABs assume i.i.d. rewards. Because one can obtain the exact mean of arm $i$ by sampling all $N$ values $l_{i,n}$ without replacement, a good algorithm should pull each arm at most $N$ times regardless of the mean gap between arms. We introduce three algorithms in this section whose sample complexity is upper bounded by $O(ND)$ in the worst case and which can be very efficient when the mean gap is large.
+
+### 3.1. Notations
+
+The iterations of an algorithm are indexed by $t$. We denote the entire index set by $[N] = \{1, 2, \dots, N\}$, the set of reward indices sampled from arm $i$ up to the $t$'th iteration by $\mathcal{N}_i^{(t)} \subseteq [N]$, and the corresponding number of sampled rewards by $T_i^{(t)}$. We define the estimated mean of the $i$'th arm as $\hat{\mu}_i^{(t)} \stackrel{\text{def}}{=} \frac{1}{|\mathcal{N}_i^{(t)}|} \sum_{n \in \mathcal{N}_i^{(t)}} l_{i,n}$, the natural (biased) variance estimate as $(\hat{\sigma}_i^{(t)})^2 \stackrel{\text{def}}{=} \frac{1}{|\mathcal{N}_i^{(t)}|} \sum_{n \in \mathcal{N}_i^{(t)}} (l_{i,n} - \hat{\mu}_i^{(t)})^2$, the variance estimate of the mean gap between two arms as $(\hat{\sigma}_{i,j}^{(t)})^2 \stackrel{\text{def}}{=} \frac{1}{|\mathcal{N}_i^{(t)}|} \sum_{n \in \mathcal{N}_i^{(t)}} ((l_{i,n} - l_{j,n}) - (\hat{\mu}_i^{(t)} - \hat{\mu}_j^{(t)}))^2$ (defined only when $\mathcal{N}_i^{(t)} = \mathcal{N}_j^{(t)}$), and the range of the rewards as $C_i \stackrel{\text{def}}{=} \max_{n,n'}\{l_{i,n} - l_{i,n'}\}$. Subscripts and superscripts may be dropped for notational simplicity when the meaning is clear from the context.
+
+### 3.2. Adapted lil'UCB
+
+We first study one of the state-of-the-art algorithms for the fixed-confidence optimal arm identification problem and adapt it to the finite population setting. The lil'UCB algorithm (Jamieson et al., 2014) maintains for every arm an upper confidence bound (UCB) on $\mu_i$ that is inspired by the law of the iterated logarithm (LIL). At each iteration, it draws a single sample from the arm with the highest bound and updates that arm's bound. The algorithm terminates when some arm has been sampled much more often than all the other arms. We refer readers to Fig. 1 of Jamieson et al. (2014) for details. The time complexity for $t$ iterations is $\mathcal{O}(\log(D)t)$. Jamieson et al. (2014) showed that lil'UCB achieves the optimal sample complexity up to constants.
+
+However, lil'UCB requires i.i.d. rewards for each arm $i$, that is, rewards sampled with replacement from $\mathcal{L}_i$. The total number of samples $t$ is therefore unbounded and could be $\gg ND$ when the means are close to each other. We adapt lil'UCB to our problem with the following modifications:
+
+1. Sample $l_{i,n}$ without replacement for each arm, but keep different arms independent.
+
+2. When $T_i^{(t)} = N$ for some arm $i$, the estimate $\hat{\mu}_i^{(t)}$ becomes exact, so set its UCB to $\hat{\mu}_i^{(t)}$.
+
+3. The algorithm terminates either with the original stopping criterion or when the arm with the highest upper bound has an exact mean estimate, whichever comes first.
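A sketch may clarify the three modifications. The confidence width below is a generic placeholder rather than the exact LIL bound of Jamieson et al. (2014), and the stopping constant is illustrative:

```python
import numpy as np

def adapted_lil_ucb(rewards, delta=0.05, seed=0):
    """Illustrative adapted lil'UCB on a D x N table of rewards l_{i,n}.

    Finite-population modifications: (1) each arm is sampled without
    replacement, (2) a fully sampled arm gets a zero-width bound,
    (3) stop when the highest-UCB arm has an exact mean.
    """
    rng = np.random.default_rng(seed)
    D, N = rewards.shape
    order = [rng.permutation(N) for _ in range(D)]  # per-arm sampling order
    T = np.zeros(D, dtype=int)                      # pulls per arm
    sums = np.zeros(D)                              # running reward sums

    def ucb(i):
        mu_hat = sums[i] / T[i]
        if T[i] == N:                               # (2) exact mean, no slack
            return mu_hat
        width = np.sqrt(2 * np.log(np.log(max(T[i], 3)) / delta) / T[i])
        return mu_hat + width                       # placeholder, not the LIL bound

    for i in range(D):                              # initialize: pull each arm once
        sums[i] += rewards[i, order[i][0]]
        T[i] = 1

    while True:
        i = max(range(D), key=ucb)
        if T[i] == N:                               # (3) exact leader: stop
            return i, int(T.sum())
        sums[i] += rewards[i, order[i][T[i]]]       # (1) next unseen reward
        T[i] += 1
        if T[i] >= 1 + 9 * (T.sum() - T[i]):        # illustrative stopping rule
            return i, int(T.sum())

rng = np.random.default_rng(3)
table = rng.normal(scale=0.1, size=(4, 200))
table[2] += 2.0                                     # arm 2 is clearly the best
best, pulls = adapted_lil_ucb(table)
```

Because every `T[i]` is capped at `N`, the total number of pulls never exceeds $ND$, which is the property Prop. 2 formalizes.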
+
+The adapted algorithm satisfies all the theoretical guarantees in Thm. 2 of Jamieson et al. (2014) with additional properties as shown in the following proposition with proof in Appx. A.2.
+---PAGE_BREAK---
+
+**Proposition 2.** Theorem 2 of Jamieson et al. (2014) holds for the adapted lil'UCB algorithm. Moreover, $T_i^{(t)} \le N, \forall i, t$. Therefore, when the algorithm terminates, $t = \sum_{i \in \mathcal{X}} T_i^{(t)} \le ND$.
+
+Notice that Thm. 2 of Jamieson et al. (2014) shows that $t$ scales roughly as $O(1/\Delta^2)$ with $\Delta$ being the mean gap and therefore $t \ll ND$ when the gap is large.
+
+### 3.3. Racing Algorithm for a Finite Population
+
+When rewards are sampled without replacement, the negative correlation between rewards would generally improve the convergence of $\hat{\mu}_i$. Unfortunately, the bound in lil'UCB ignores the negative correlation when $T_i^{(t)} < N$ even with the adaptations. We introduce a new family of racing algorithms (Maron & Moore, 1994) that takes advantage of the finite population setting as shown in Alg. 1. The choice of the uncertainty bound function $G$ differentiates specific algorithms and two examples will be discussed in the following sections.
+
+Alg. 1 maintains a candidate set $\mathcal{D}$, initialized with all arms. At iteration $t$, a shared mini-batch of $m^{(t)}$ indices is drawn without replacement for all surviving arms in $\mathcal{D}$. The uncertainty bound $G$ is then used to eliminate sub-optimal arms with a given confidence. The algorithm stops when only one arm remains. We require of $m^{(t)}$ that the total number of sampled indices $T^{(t^*)} = \sum_{t=1}^{t^*} m^{(t)}$ equal $N$ at the last iteration $t^*$. In particular, we take a doubling schedule $T^{(t)} = 2T^{(t-1)}$ (so $t^* = \lceil \log_2 \frac{N}{m^{(1)}} \rceil + 1$) and leave $m^{(1)}$ as a free parameter. We also require $G(\cdot, T, \cdot, \cdot) = 0$ whenever $T = N$, so that Alg. 1 always stops within $t^*$ iterations. The computational complexity for $t$ iterations is $O(DT^{(t)})$ with the marginal estimate $\hat{\sigma}_i$ and $O(D^2T^{(t)})$ with the pairwise estimate $\hat{\sigma}_{i,j}$. The former version is more efficient than the latter when $\mathcal{D}$ is large, at the price of a looser bound.
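For instance, the doubling schedule can be computed as below (the helper name is ours); with $N = 1000$ and $m^{(1)} = 50$ it gives $t^* = \lceil \log_2(1000/50) \rceil + 1 = 6$ mini-batches whose sizes sum to $N$:

```python
def doubling_schedule(N, m1):
    """Mini-batch sizes m^(t) under T^(t) = 2 T^(t-1), truncated so that
    the cumulative size T^(t*) equals N at the last iteration t*."""
    sizes, total = [], 0
    while total < N:
        target = m1 if not sizes else min(2 * total, N)
        sizes.append(target - total)
        total = target
    return sizes

schedule = doubling_schedule(N=1000, m1=50)  # cumulative sizes 50, 100, ..., 1000
```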
+
+**Proposition 3.** If $G$ satisfies
+
+$$ \mathcal{E} \stackrel{\text{def}}{=} P(\exists t < t^*, \hat{\mu}^{(t)} - \mu > G(\delta, T^{(t)}, \hat{\sigma}^{(t)}, C)) \le \delta, \quad (5) $$
+
+for any $\delta \in (0, 1)$, then with probability at least $1 - \delta$ Alg. 1 returns the optimal arm using at most $ND$ samples.
+
+The proof is provided in Appx. A.3. Unlike adapted lil'UCB, Racing draws a shared set of sample indices among all the arms and could provide a tighter bound with pairwise variance estimates $\hat{\sigma}_{i,j}$ when there is positive correlation, a typical case in Bayesian inference problems.
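A compact sketch of the racing loop of Alg. 1 may help; the bound `G_toy` below is a crude stand-in for the uncertainty functions of the following subsections, and all names are ours:

```python
import numpy as np

def racing(rewards, schedule, G, delta):
    """Sketch of Alg. 1 with shared mini-batches and marginal variance estimates.

    rewards: D x N table of l_{i,n}; schedule: mini-batch sizes summing to N;
    G(delta, T, sigma_hat): uncertainty bound, must return 0 when T == N.
    """
    D, N = rewards.shape
    perm = np.random.default_rng(0).permutation(N)  # shared index order for all arms
    alive = list(range(D))
    T = 0
    for m in schedule:
        T += m
        seen = perm[:T]                             # indices drawn so far
        mu = {i: rewards[i, seen].mean() for i in alive}
        sig = {i: rewards[i, seen].std() for i in alive}
        x = max(alive, key=mu.get)                  # current best arm
        alive = [i for i in alive                   # eliminate large-gap arms
                 if i == x or mu[x] - mu[i]
                 <= G(delta / D, T, sig[x]) + G(delta / D, T, sig[i])]
        if len(alive) == 1:
            break
    return alive[0]

def G_toy(delta, T, sigma, N=400):                  # Hoeffding-style stand-in
    return 0.0 if T >= N else sigma * np.sqrt(2 * np.log(1 / delta) / T)

rng = np.random.default_rng(4)
table = rng.normal(scale=0.1, size=(6, 400))
table[3] += 1.0                                     # arm 3 is the true argmax
winner = racing(table, [50, 50, 100, 200], G_toy, delta=0.05)
```

Because `G` vanishes once $T = N$, the final iteration compares exact means and the loop is guaranteed to terminate with a single arm.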
+
+**Algorithm 1** Racing Algorithm with a Finite Reward Population
+
+**input** Number of arms $D$, population size $N$, mini-batch sizes $\{m^{(t)}\}_{t=1}^{t^*}$, confidence level $1 - \delta$, uncertainty bound function $G(\delta, T, \hat{\sigma}, C)$, range of rewards $C_i$ (optional).
+
+$t \leftarrow 0$, $T \leftarrow 0$, $\mathcal{D} \leftarrow \{1, 2, \dots, D\}$, $\mathcal{N} \leftarrow \emptyset$
+
+**while** $|\mathcal{D}| > 1$ **do**
+
+$\quad t \leftarrow t + 1$
+
+$\quad$ Sample without replacement $m^{(t)}$ indices $\mathcal{M} \subseteq [N] \setminus \mathcal{N}$, and set $\mathcal{N} \leftarrow \mathcal{N} \cup \mathcal{M}$, $T \leftarrow T + m^{(t)}$
+
+$\quad$ Compute $l_{i,n}, \forall i \in \mathcal{D}, n \in \mathcal{M}$, and update $\hat{\mu}_i$ and $\hat{\sigma}_i$ (or $\hat{\sigma}_{i,j}$), $\forall i \in \mathcal{D}$
+
+$\quad$ Find the current best arm $x \leftarrow \operatorname{argmax}_{i \in \mathcal{D}} \hat{\mu}_i$
+
+$\quad$ Eliminate arms whose estimated gap is large: $\mathcal{D} \leftarrow \mathcal{D} \setminus \{i : \hat{\mu}_x - \hat{\mu}_i > G(\frac{\delta}{D}, T, \hat{\sigma}_x, C_x) + G(\frac{\delta}{D}, T, \hat{\sigma}_i, C_i)\}$ (or $\mathcal{D} \leftarrow \mathcal{D} \setminus \{i : \hat{\mu}_x - \hat{\mu}_i > G(\frac{\delta}{D-1}, T, \hat{\sigma}_{x,i}, C_x + C_i)\}$)
+
+**end while**
+
+**output** $\mathcal{D}$
+
+#### 3.3.1. RACING WITH SERFLING CONCENTRATION BOUNDS FOR $G$
+
+Serfling (1974) studied concentration inequalities for sampling without replacement and obtained an improved Hoeffding bound. Bardenet & Maillard (2013) extended this work and provided an empirical Bernstein-Serfling bound that was later used in the subsampling-based MH algorithm of Bardenet et al. (2014): for any $\delta \in (0, 1]$ and any $n \le N$, with probability $1 - \delta$, it holds that
+
+$$
+\begin{align}
+\hat{\mu}_n - \mu &\le \hat{\sigma}_n \sqrt{\frac{2\rho_n \log(5/\delta)}{n}} + \frac{\kappa C \log(5/\delta)}{n} \notag \\
+&\stackrel{\text{def}}{=} B_{\text{EBS}}(\delta, n, \hat{\sigma}_n, C) \tag{6}
+\end{align}
+$$
+
+where $\kappa = \frac{7}{3} + \frac{3}{\sqrt{2}}$ and
+$$
+\rho_n =
+\begin{cases}
+1 - \pi_{n-1} & \text{if } n \le N/2 \\
+(1 - \pi_n)\left(1 + \frac{1}{n}\right) & \text{if } n > N/2
+\end{cases}
+$$
+with $\pi_n \stackrel{\text{def}}{=} \frac{n}{N}$.
+The factor $\rho_n$, which is absent from regular empirical Bernstein bounds, reduces the bound significantly when $n$ is close to $N$. We set $m^{(1)} = 2$ in Alg. 1 to provide a valid $\hat{\sigma}^{(t)}$ for every $t$, and set the uncertainty bound $G$ using the empirical Bernstein-Serfling (EBS) bound as
+
+$$ G_{\text{EBS}}(\delta, T, \hat{\sigma}, C) = B_{\text{EBS}}\left(\frac{\delta}{t^*-1}, T, \hat{\sigma}, C\right) \quad (7) $$
+
+It is trivial to prove that $G_{\text{EBS}}$ satisfies the condition in Eq. 5 using a union bound over $t < t^*$.
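In code, the bound and the resulting $G_{\text{EBS}}$ look as follows (a direct transcription; the explicit zero at $T = N$ enforces the stopping requirement on $G$):

```python
import math

def rho(n, N):
    """Serfling correction factor for sampling without replacement."""
    if n <= N / 2:
        return 1.0 - (n - 1) / N
    return (1.0 - n / N) * (1.0 + 1.0 / n)

def B_EBS(delta, n, sigma_hat, C, N):
    """Empirical Bernstein-Serfling bound of Bardenet & Maillard (2013)."""
    kappa = 7.0 / 3.0 + 3.0 / math.sqrt(2.0)
    log_term = math.log(5.0 / delta)
    return (sigma_hat * math.sqrt(2.0 * rho(n, N) * log_term / n)
            + kappa * C * log_term / n)

def G_EBS(delta, T, sigma_hat, C, N, t_star):
    """Eq. (7): union bound splitting delta over the t* - 1 early iterations."""
    if T >= N:
        return 0.0                      # the mean is exact once T = N
    return B_EBS(delta / (t_star - 1), T, sigma_hat, C, N)
```

The bound shrinks roughly like $1/\sqrt{n}$ at first and then collapses as $n \to N$ through $\rho_n$.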
+
+#### 3.3.2. RACING WITH A NORMAL ASSUMPTION FOR $G$
+
+The concentration bounds above often yield a conservative strategy because they hold for arbitrary bounded reward distributions. When the number of drawn samples is large, the central limit theorem suggests that $\hat{\mu}^{(t)}$ approximately follows a Gaussian distribution. Korattikara et al. (2014) made such an assumption and obtained a tighter bound. We first provide an immediate corollary of Prop. 2 in Appx. A of Korattikara et al. (2014).
+
+**Corollary 4.** Let $\hat{\mu}_{\text{unit}}^{(t)}$, $t = 1, 2, \dots, t^*$ be the estimated means using sampling without replacement from any finite population with mean $\mu$ and unit variance. The joint normal random variables $\tilde{\mu}^{(t)}$ that match the mean and covariance matrix with $\hat{\mu}_{\text{unit}}^{(t)}$ follow a Gaussian random walk process as
+
+$$p_{\mu}(\tilde{\mu}^{(t)} | \tilde{\mu}^{(1)}, \dots, \tilde{\mu}^{(t-1)}) = \mathcal{N}(m_t(\tilde{\mu}^{(t-1)}), S_t) \quad (8)$$
+
+where $m_t = \mu + A_t(\tilde{\mu}^{(t-1)} - \mu)$, $S_t = \frac{B_t}{T^{(t)}} \left(1 - \frac{T^{(t)}-1}{N-1}\right)$,
+$A_t = \frac{\pi_{t-1}(1-\pi_t)}{\pi_t(1-\pi_{t-1})}$, $B_t = \frac{\pi_t - \pi_{t-1}}{\pi_t(1-\pi_{t-1})}$ with $\pi_t$ short for $\pi_{T^{(t)}}$.
+
+**Remark 5.** The marginal distribution $p(\tilde{\mu}^{(t)}) = \mathcal{N}\left(\mu, \frac{1}{T^{(t)}}\left(1 - \frac{T^{(t)}-1}{N-1}\right)\right)$ where the variance approaches 0 when $T^{(t)} \to N$.
+
+**Assumption 6.** When $T^{(t)} \gg 1$, $\forall t$, we assume $\hat{\sigma}^{(t)} \approx \sigma$ and the central limit theorem suggests that the joint distribution of $\hat{\mu}^{(t)}/\hat{\sigma}^{(t)}$ can be approximated by the joint distribution of $\tilde{\mu}^{(t)}$.
+
+With the normal assumption, we choose the uncertainty bound $G$ in the following form
+
+$$G_{\text{Normal}}(\delta, T, \hat{\sigma}) = \frac{\hat{\sigma}}{\sqrt{T}} \left(1 - \frac{T-1}{N-1}\right)^{1/2} B_{\text{Normal}} \quad (9)$$
+
+Intuitively, we use a constant confidence level, $\Phi(B_{\text{Normal}})$, for all marginal distributions of $\hat{\mu}^{(t)}$ over $t$, where $\Phi(\cdot)$ is the CDF of the standard normal. To choose the constant $B_{\text{Normal}}$, we plug $G_{\text{Normal}}$ into the condition for $G$ in Eq. 5 and use the normal distribution (8) to solve the univariate equation $\mathcal{E}(B) = \delta$. This way of computing $G$ gives a tighter bound than applying the union bound across $t$, as in the previous section, because it takes into account the correlation of the mean estimates across iterations. Appx. B provides a lookup table and a plot of $B_{\text{Normal}}(\delta) = \mathcal{E}^{-1}(\delta)$. Note that $B_{\text{Normal}}$ only needs to be computed once, and we can obtain it for any $\delta$ by either interpolating the table or computing it numerically with code to be shared (runtime < 1 second). For the first mini-batch size $m^{(1)}$, a value of 50 performs robustly in all our experiments.
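A sketch of Eq. 9 follows, with a caller-supplied stand-in for the calibrated constant, since the random-walk calibration of $B_{\text{Normal}}$ lives in Appx. B:

```python
import math

def G_normal(B_normal, T, sigma_hat, N):
    """Eq. (9): normal-assumption bound with a finite-population correction."""
    if T >= N:
        return 0.0                              # the mean is exact once T = N
    fpc = 1.0 - (T - 1.0) / (N - 1.0)           # finite-population correction
    return sigma_hat / math.sqrt(T) * math.sqrt(fpc) * B_normal

# Stand-in constant: a plain Gaussian tail quantile, NOT the random-walk
# calibration E^{-1}(delta) that the paper actually solves for.
def B_stub(delta):
    return math.sqrt(2.0 * math.log(1.0 / delta))
```

The finite-population factor drives the bound to zero as $T \to N$, mirroring Remark 5.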
+
+We provide the sample complexity below with the proof in Appx. A.4. Particularly, $T^*(\Delta) \to DN$ as $\Delta \to 0$, and $T^*(\Delta) = Dm^{(1)}$ when $\Delta \ge 2B_{\text{Normal}}(\delta/D')\sqrt{(N/m^{(1)} - 1)/(N-1)}$.
+
+**Proposition 7.** Let $x^*$ be the best arm and $\Delta$ be the minimal normalized gap of means from other arms, defined as $\min_{i \neq x^*} \frac{\mu_{x^*} - \mu_i}{\sigma_{x^*} + \sigma_i}$ when using marginal variance estimate $\hat{\sigma}_i$ and $\min_{i \neq x^*} \frac{\mu_{x^*} - \mu_i}{\sigma_{x^*, i}}$ when using pairwise variance estimate $\hat{\sigma}_{x,i}$. If Assump. 6 holds, with a probability at least
+
+$1 - \delta$ Racing-Normal draws no more rewards than
+
+$$T^*(\Delta) = D \left[ \frac{N}{(N-1) \frac{\Delta^2}{4B_{\text{Normal}}^2 (\delta/D')} + 1} \right]_{m^{(1)}} \quad (10)$$
+
+where $[n]_m \stackrel{\text{def}}{=} m2^{\lceil \log_2 (n/m) \rceil} \wedge N \ge n, \forall n \le N$, and $D' \stackrel{\text{def}}{=} D$ when using $\hat{\sigma}_i$ and $D' \stackrel{\text{def}}{=} D - 1$ when using $\hat{\sigma}_{x,i}$.
+
+### 3.4. Variance Reduction for Random Rewards with Control Variates
+
+The difficulty of a MAB problem depends heavily on the ratio of the mean gap to the reward noise, $\Delta$. To improve this signal-to-noise ratio, we exploit the control variates technique (Wilson, 1984) to reduce the reward variance. Consider a variable $h_{i,n}$ whose expectation $\mathbb{E}_{n\sim[N]}[h_{i,n}]$ can be computed efficiently. The residual reward $l_{i,n} - h_{i,n} + \mathbb{E}_n[h_{i,n}]$ has the same mean as $l_{i,n}$, and its variance is reduced when $h_{i,n} \approx l_{i,n}$. In the Bayesian inference experiment, where the factor $f_n(X=i) = p(y_n|X=i)$, we adopt an approach similar to Wang et al. (2013) and take the Taylor expansion of $l_{i,n}$ around a reference point $\hat{y}$ as
+
+$$l_{i,n} \approx l_i(\hat{y}) + g_i^T(y_n - \hat{y}) + \frac{1}{2}(y_n - \hat{y})^T H_i (y_n - \hat{y}) \stackrel{\text{def}}{=} h_{i,n} \quad (11)$$
+
+where $g_i$ and $H_i$ are the gradient and Hessian matrix of $\log p(y|i)$ respectively evaluated at $\hat{y}$. $\mathbb{E}[h_{i,n}]$ can be computed analytically with the first two moments of $y_n$. A typical choice of $\hat{y}$ is $\mathbb{E}[y]$.
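A one-dimensional sketch with a toy log-likelihood (all quantities below are illustrative, not from the paper's experiments): the residual $l_n - h_n + \mathbb{E}[h_n]$ keeps the mean of $l_n$ exactly when $\mathbb{E}[h_n]$ is computed from the moments of $y$, while its variance drops because $h_n$ tracks $l_n$ near $\hat{y}$.

```python
import numpy as np

rng = np.random.default_rng(5)
N = 5000
y = rng.normal(loc=2.0, scale=0.3, size=N)       # toy observations

# toy log-likelihood l_n = log p(y_n | i) for one fixed state i
l = -np.log1p((y - 1.5) ** 2)

# second-order Taylor control variate h_n around y_hat (Eq. 11)
y_hat = 2.0
u0 = y_hat - 1.5
l_ref = -np.log1p(u0 ** 2)                       # value at y_hat
g = -2 * u0 / (1 + u0 ** 2)                      # gradient at y_hat
H = -2 * (1 - u0 ** 2) / (1 + u0 ** 2) ** 2      # Hessian at y_hat
h = l_ref + g * (y - y_hat) + 0.5 * H * (y - y_hat) ** 2

# E[h] in closed form from the first two moments of y
E_h = l_ref + g * (y.mean() - y_hat) + 0.5 * H * np.mean((y - y_hat) ** 2)

residual = l - h + E_h                           # same mean, much lower variance
```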
+
+The control variate method is mostly useful for Racing-Normal. For algorithms that depend on a reward bound $C$, obtaining a tight bound on $l_{i,n} - h_{i,n}$ requires a more restrictive condition on $C$, as in Bardenet et al. (2015), and may yield an even more conservative strategy in general.
+
+## 4. Related Work
+
+The Gumbel-Max trick has been exploited in Kuzmin & Warmuth (2005); Papandreou & Yuille (2011); Maddison et al. (2014) for different problems. The closest work is Maddison et al. (2014), where the trick is extended to draw continuous random variables with a Gumbel process, reminiscent of adaptive rejection sampling.
+
+Our work is closely related to the optimal arm identification problem for MABs with a fixed confidence. This is, to our knowledge, the first work to consider MABs with a finite population. The proposed algorithms, tailored to this setting, could be of interest beyond the discrete sampling problem. The normal assumption in Sec. 3.3.2 is similar to UCB-Normal in Auer et al. (2002), but the latter assumes a normal distribution for individual rewards and performs poorly when that assumption does not hold.
+---PAGE_BREAK---
+
+The bounds in Sec. 3.3 are based on the subsampling-based MH algorithms of Bardenet et al. (2014); Korattikara et al. (2014). The proposed algorithm extends those ideas from MH to discrete sampling. In fact, letting $x$ and $x'$ be the current and proposed values in an MH iteration, Racing-EBS and Racing-Normal reduce to the algorithms of Bardenet et al. (2014) and Korattikara et al. (2014) respectively if we set
+
+$$
+\begin{aligned}
+\mathcal{X} &= \{x, x'\}, & f_0(1) &= u p(x)q(x'|x), \\
+f_0(2) &= p(x')q(x|x'), & f_n(x) &= p(\mathbf{y}_n|x)
+\end{aligned}
+\quad (12) $$
+
+where $p(x)$ is the prior distribution, $u \sim \text{Uniform}[0, 1]$ and $q(\cdot|\cdot)$ is the proposal distribution. The difference with Bardenet et al. (2014) is that we distribute the error $\delta$ evenly across $t$ in Eq. 7 while Bardenet et al. (2014) set $\delta_t = (p-1)/(p(T^{(t)})^p)\delta$ with $p$ a free parameter. The differences with Korattikara et al. (2014) are that we take a doubling schedule for $m^{(t)}$ and replace the t-test with the normal assumption. We find that our algorithms are more efficient and robust than both original algorithms in practice. Moreover, the binary Gibbs sampling in Appx. F of Korattikara et al. (2014) is also a special case of Racing-Normal with $D=2$. Therefore, Alg. 1 provides a unifying approach to a family of subsampling-based samplers.
+
+The variance reduction technique is similar to the proxies of Bardenet et al. (2015), but our control variate is a function in the data space, while their proxy is a function in the parameter space. We do not assume that the posterior distribution is approximately Gaussian, and our algorithm works with multi-modal distributions.
+
+It is important not to confuse the focus of our algorithm for the big *N* problem in Eq. 1 with other algorithms that address sampling for a large state space (big *D*) or similarly a high-dimensional vector of discrete variables (exponentially large *D*). The combination of these two approaches for problems with both big *N* and big *D* is possible but beyond the scope of this paper.
+
+## 5. Experiments
+
+Since this is the first work to discuss efficient discrete sampling for problem (1), we compare adapted lil'UCB, Racing-EBS and Racing-Normal against the exact sampler only. In the real-data experiments we report results for Racing-Normal only, as the speed gains of the other two are marginal.
+
+### 5.1. Synthetic Data
+
+We construct a distribution with $D = 10$ by sampling $N = 10^5$ rewards $l_{i,n}$ for each state from one of three distributions: $\mathcal{N}(0, 1)$, $\text{Uniform}[0, 1]$, or $\text{LogNormal}(0, 2)$. We normalize $l_{i,n}$ to give a fixed distribution $p(X)$ (Fig. 1(a)) and a reward variance $\sigma^2$ that controls the difficulty. The normal distribution is the ideal setting for Racing-Normal, and the uniform distribution is favorable for adapted lil'UCB and Racing-EBS as the reward bound is close to $\sigma$. The LogNormal distribution, whose excess kurtosis is $\approx 4000$, is difficult for all of them due to its heavy tail. We use a tight bound $C = \max\{l_{i,n} - l_{i,n'}\}$ for Racing-EBS. We set the scale parameter of adapted lil'UCB to $C/2$ and the other parameters with the heuristic setting in Jamieson et al. (2014). Racing uses the pairwise variance estimate.
+
+Fig. 1(b)-(d) show the empirical error of best arm identification, obtained by drawing $10^4$ samples of $X$ for each setting and varying the target error bound $\delta \in [10^{-3}, 0.1]$. The bound appears very loose for lil'UCB and Racing-EBS but is sharp for Racing-Normal when the noise is large (Fig. 1(b)) and $\delta \ll 1$. This is consistent with the direct comparison of the uncertainty bounds in Fig. 1(e). Consequently, given the same error tolerance $\delta$, Racing-Normal requires far fewer rewards than the other, more conservative strategies in all settings except when $\sigma = 10^{-5}$ and $l_{i,n} \sim \text{Uniform}[0, 1]$, as shown in Fig. 1(f)-(h). We verify these observations with further experiments in Appx. C.1 with $D \in \{2, 100\}$ and the marginal estimate $\hat{\sigma}_i$.
+
+Surprisingly, Racing-Normal performs robustly regardless of the reward distribution with a first mini-batch size of $m^{(1)} = 50$, while Bardenet et al. (2014) showed that the algorithm with the same normal assumption in Korattikara et al. (2014) failed on LogNormal rewards even with $m^{(1)} = 500$. The dramatic improvement in robustness is mainly due to our doubling scheme, under which the central limit theorem applies quickly as $m^{(t)}$ increases exponentially. We do not claim that this single trick solves the problem completely: in theory there still exist extremely heavy-tailed reward distributions for which the normal assumption does not hold and the algorithm will fail to meet the confidence level. In practice, we did not observe this pathological case in any of our experiments.
+
+### 5.2. Bayesian ARCH Model Selection
+
+We evaluate Racing-Normal in a Bayesian model selection problem for auto-regressive conditional heteroskedasticity (ARCH) models. The discrete sampler is integrated into the Markov chain as a building component to sample the hierarchical model. Specifically, we consider a mixture of ARCH models for the return $r_t$ of a stock price series with Student-t innovations, each component with a different order $q$:
+
+$$
+\begin{align*}
+r_t &= \sigma_t z_t, & z_t &\stackrel{iid}{\sim} t_\nu(0, 1), & \sigma_t^2 &= \alpha_0 + \sum_{i=1}^q \alpha_i r_{t-i}^2, \\
+q &\sim \text{Discrete}(\boldsymbol{\pi}), & \alpha_i, \nu &\stackrel{iid}{\sim} \text{Gamma}(1, 1)
+\end{align*} $$
+
+where $\boldsymbol{\pi} = \{\pi_q : q \in \mathbb{Q}\}$ is the prior distribution over the candidate models in the set $\mathbb{Q}$.
+---PAGE_BREAK---
+
+Figure 1. Synthetic data. ((b),(c),(d)) Estimated error with 95% confidence intervals. Plots are omitted when no error occurred. ((f),(g),(h)) Proportion of sampled rewards. $l_{i,n}$ is sampled from the Normal (×), Uniform (○) and LogNormal (□) distributions. The plots of Racing-Normal overlap in ((f),(g),(h)).
+
+The random variables to infer include the discrete model choice $q$ and the continuous parameters $\{\alpha_i\}_{i=0}^q$ and $\nu$. We adopt the augmented MCMC algorithm of Carlin & Chib (1995) to avoid transdimensional moves. We apply subsampling-based scalable algorithms to sample all variables with subsampled observations $\{r_t\}$: Racing-Normal Gibbs for $q$, and stochastic gradient Langevin dynamics (SGLD) (Welling & Teh, 2011) corrected with Racing-Normal MH (Sec. 4) for $\alpha_i$ and $\nu$. We use adjusted priors $\tilde{\pi}_q$, as suggested by Carlin & Chib (1995), to obtain sufficient mixing between all models, and tune them with adaptive MCMC. The adjusted posterior $\tilde{p}(q|\mathbf{r}) \propto \tilde{\pi}_q p(\mathbf{r}|q)$ is then close to uniform, and the ratio $\pi_q/\tilde{\pi}_q$ provides an estimate of the true unnormalized posterior $p(q|\mathbf{r})$. Control variates are also applied to reduce variance. Details of the sampling algorithm are provided in Appx. C.2.
+
+We apply the model to one year of 5-minute Shanghai Stock Exchange composite index data, consisting of about 13,000 points (Fig. 2(a)), with $\mathbb{Q} = \{5, 10, 15, 20, 25, 30\}$. We set $m^{(1)} = 50$ and $\delta = 0.05$. The control variate method reduces the reward variance by 2 to 3 orders of magnitude. Fig. 2(b) shows the estimated log-posterior of $q$, obtained by normalizing $\pi_q/\tilde{\pi}_q$ in the adaptive MCMC, as a function of the number of likelihood evaluations (proportional to runtime). The subsampling-based sampler (Sub) converges about three times faster. We then fix $\tilde{\pi}_q$ to obtain a fixed stationary distribution and run MCMC for $10^5$ iterations to compare Sub with the exact sampler. The empirical error rates for Racing-Normal Gibbs and MH are about $4 \times 10^{-4}$ and $2 \times 10^{-3}$ respectively. Fig. 2(c) shows the estimated adjusted posterior over 5 runs, and Fig. 2(d) compares the auto-correlation of the samples of $q$. Sub obtains over twice the effective sample size without noticeable bias after the burn-in period.
+
+### 5.3. Author Coreference
+
+We then study performance on a large-scale graphical model inference problem. The author coreference problem for a database of scientific paper citations is to cluster the mentions of authors into real persons. Singh et al. (2012) addressed this problem with a conditional random field model with pairwise factors. The joint and conditional distributions are, respectively,
+
+$$
+p_{\theta}(\mathbf{y}|\mathbf{x}) \propto \exp\left(\sum_{y_i=y_j, i \neq j, \forall i, j} f_{\theta}(x_i, x_j)\right),
+$$
+
+$$
+p_{\theta}(Y_i = y \,|\, Y_{-i}, \mathbf{x}) \propto \exp \left( \sum_{j \in C_y} f_{\theta}(x_i, x_j) \right), \quad C_y = \{j: y_j = y, j \neq i\}
+$$
+
+where $\mathbf{x} = \{x_i\}_{i=1}^N$ is the set of observed author mentions and $y_i \in \mathbb{N}^+$ is the cluster index of the $i$'th mention. The factor $f_\theta(x_i, x_j)$ measures the similarity between two mentions based on author names, coauthors, paper title, etc., parameterized by $\theta$. In the conditional distribution, $y_i$ can take the value of any non-empty cluster or of a new, empty cluster. When a cluster $C_y$ contains many mentions, as is typical for common author names, the number of factors to be evaluated, $N_y = |C_y|$, is large. We consider the MAP inference problem with fixed $\theta$ using annealed Gibbs sampling (Finkel et al., 2005). We apply Racing-Normal to sample $Y_i$ by subsampling $C_y$ for each
+---PAGE_BREAK---
+
+Figure 2. Bayesian ARCH Model Selection. Solid: exact, dashed: approximate using Sub Gibbs + SGLD with Sub MH.
+
+Figure 3. Author Coreference. Bigger $B^3$ F-1 score is better.
+
+candidate value $y$. An important difference of this problem from Eq. 1 is that $N_y \neq N_{y'}$ for $y \neq y'$ and that $N_y$ has a heavy-tailed distribution. We let the mini-batch size depend on $N_y$, with details provided in Appx. C.3.
+
+We run the experiment on the union of an unlabeled DBLP dataset of BibTex entries with about 5M authors and a Rexa corpus of about 11K author mentions with 3160 entries labeled. We monitor the clustering performance on the labeled subset with the $B^3$ F-1 score (Bagga & Baldwin, 1998). We use $\delta = 0.05$ and the empirical error rate is about 0.046. The number of candidate values $D$ varies in $2 \sim 215$ and $N_y$ varies in $1 \sim 1829$ upon convergence. Fig. 3(a) shows the F-1 score as a function of the number of factor evaluations with 7 random runs for each algorithm. Sub Gibbs converges about three times faster than exact Gibbs. Fig. 3(b) shows F-1 as a function of iterations, with almost identical behavior for both algorithms, which suggests negligible bias in Sub Gibbs. The ratio of factors evaluated by Sub Gibbs to exact Gibbs indicates about a fivefold speed-up near convergence. The initial speed-up is small because every cluster is initialized with a single mention, i.e. $N_y = 1$.
+
+## 6. Discussion
+
+We considered the discrete sampling problem with a high degree of dependency and proposed three approximate algorithms, with theoretical guarantees, under the framework of MABs. The Racing algorithm provides a unifying approach to various subsampling-based Monte Carlo algorithms and also improves the robustness of the original MH algorithm in Korattikara et al. (2014). This is also the first work to discuss MABs under the setting of a finite reward population.
+
+
+
+Empirical evaluations show that Racing-Normal achieves a robust speed-up, the highest among all competitors. Whilst adaptive lil'UCB shows inferior empirical performance to Racing-Normal, it has a better sample complexity w.r.t. the number of arms $D$. A future direction is to combine the bound of Racing-Normal with other MAB algorithms, including lil'UCB, for better scalability in $D$. Another important problem is how to relax the assumptions of Racing-Normal without sacrificing performance.
+
+It would also be an interesting direction to extend our work to draw continuous random variables efficiently with the Gumbel process (Maddison et al., 2014). In a continuous state space there are infinitely many “arms”, and a naive application of our algorithm would lead to an infinitely large error bound. This problem can be alleviated with algorithms for contextual MAB problems.
+---PAGE_BREAK---
+
+## Acknowledgements
+
+We thank Matt Hoffman for helpful discussions on the connection of our work to the MAB problems. We also thank all the reviewers for their constructive comments. We acknowledge funding from the Alan Turing Institute, Google, Microsoft Research and EPSRC Grant EP/I036575/1.
+
+## References
+
+Ahn, Sungjin, Korattikara, Anoop, and Welling, Max. Bayesian posterior sampling via stochastic gradient Fisher scoring. In *Proceedings of the 29th International Conference on Machine Learning (ICML-12)*, pp. 1591–1598, 2012.
+
+Ahn, Sungjin, Korattikara, Anoop, Liu, Nathan, Rajan, Suju, and Welling, Max. Large-scale distributed Bayesian matrix factorization using stochastic gradient MCMC. In *Proceedings of the 21st ACM SIGKDD International Conference on Knowledge Discovery and Data Mining*, pp. 9–18. ACM, 2015.
+
+Auer, Peter, Cesa-Bianchi, Nicolo, and Fischer, Paul. Finite-time analysis of the multiarmed bandit problem. *Machine learning*, 47(2-3):235–256, 2002.
+
+Bagga, Amit and Baldwin, Breck. Algorithms for scoring coreference chains. In *LREC workshop on linguistics coreference*, volume 1, pp. 563–566, 1998.
+
+Bardenet, Rémi and Maillard, Odalric-Ambrym. Concentration inequalities for sampling without replacement. *arXiv preprint arXiv:1309.4029*, 2013.
+
+Bardenet, Rémi, Doucet, Arnaud, and Holmes, Chris. Towards scaling up Markov chain Monte Carlo: an adaptive subsampling approach. In *Proceedings of The 31st International Conference on Machine Learning*, pp. 405–413, 2014.
+
+Bardenet, Rémi, Doucet, Arnaud, and Holmes, Chris. On Markov chain Monte Carlo methods for tall data. *arXiv preprint arXiv:1505.02827*, 2015.
+
+Bechhofer, Robert E. A sequential multiple-decision procedure for selecting the best one of several normal populations with a common unknown variance, and its use with various experimental designs. *Biometrics*, 14(3): 408–429, 1958.
+
+Bottou, L. and Bousquet, O. The tradeoffs of large scale learning. In *NIPS*, volume 20, pp. 161–168, 2008.
+
+Bratières, S., van Gael, J., Vlachos, A., and Ghahramani, Z. Scaling the iHMM: Parallelization versus hadoop. In *Computer and Information Technology (CIT), 2010 IEEE 10th International Conference on*, pp. 1235–1240, June 2010.
+
+Carlin, B.P. and Chib, S. Bayesian model choice via Markov chain Monte Carlo. *Journal of the Royal Statistical Society, Series B*, 57:473–484, 1995.
+
+Chen, Tianqi, Fox, Emily, and Guestrin, Carlos. Stochastic gradient Hamiltonian Monte Carlo. In *Proceedings of the 31st International Conference on Machine Learning (ICML-14)*, pp. 1683–1691, 2014.
+
+Ding, Nan, Fang, Youhan, Babbush, Ryan, Chen, Changyou, Skeel, Robert D, and Neven, Hartmut. Bayesian sampling using stochastic gradient thermostats. In *Advances in neural information processing systems*, pp. 3203–3211, 2014.
+
+DuBois, Christopher, Korattikara, Anoop, Welling, Max, and Smyth, Padhraic. Approximate slice sampling for Bayesian posterior inference. In *Proceedings of AIS-TATS*, pp. 185–193, 2014.
+
+Finkel, Jenny Rose, Grenager, Trond, and Manning, Christopher. Incorporating non-local information into information extraction systems by Gibbs sampling. In *Proceedings of the 43rd ACL*, pp. 363–370, 2005.
+
+Hoeffding, Wassily. Probability inequalities for sums of bounded random variables. *Journal of the American statistical association*, 58(301):13–30, 1963.
+
+Jamieson, Kevin, Malloy, Matthew, Nowak, Robert, and Bubeck, Sébastien. lil'UCB: An optimal exploration algorithm for multi-armed bandits. In *Proceedings of The 27th COLT*, pp. 423–439, 2014.
+
+Kalli, Maria, Griffin, Jim E., and Walker, Stephen G. Slice sampling mixture models. *Statistics and computing*, 21 (1):93–105, 2011.
+
+Korattikara, Anoop, Chen, Yutian, and Welling, Max. Austerity in MCMC land: Cutting the Metropolis-Hastings budget. In *Proceedings of the 31st International Conference on Machine Learning*, ICML 2014, Beijing, China, 21-26 June 2014, pp. 181–189, 2014.
+
+Kuzmin, Dima and Warmuth, Manfred K. Optimum follow the leader algorithm. In *Proceedings of the 18th annual conference on Learning Theory*, pp. 684–686. Springer-Verlag, 2005.
+
+Li, Aaron, Ahmed, Amr, Ravi, Sujith, and Smola, Alex. Reducing the sampling complexity of topic models. In *Proceedings of the ACM Conference on Knowledge Discovery and Data Mining (KDD)*, 2014.
+
+Maclaurin, Dougal and Adams, Ryan Prescott. Firefly Monte Carlo: Exact MCMC with subsets of data. In *Twenty-Fourth International Joint Conference on Artificial Intelligence*, 2015.
+---PAGE_BREAK---
+
+Maddison, Chris J., Tarlow, Daniel, and Minka, Tom. A* sampling. In Ghahramani, Z., Welling, M., Cortes, C., Lawrence, N.D., and Weinberger, K.Q. (eds.), *NIPS*, pp. 3086–3094, 2014.
+
+Maron, Oded and Moore, Andrew W. Hoeffding races: Accelerating model selection search for classification and function approximation. In *Advances in Neural Information Processing Systems* 6, pp. 59–66. Morgan-Kaufmann, 1994.
+
+Medlar, Alan, Głowacka, Dorota, Stanescu, Horia, Bryson, Kevin, and Kleta, Robert. Swiftlink: parallel mcmc linkage analysis using multicore cpu and gpu. *Bioinformatics*, 29(4):413–419, 2013.
+
+Mitrophanov, A. Yu. Sensitivity and convergence of uniformly ergodic Markov chains. *Journal of Applied Probability*, pp. 1003–1014, 2005.
+
+Newman, David, Asuncion, Arthur, Smyth, Padhraic, and Welling, Max. Distributed algorithms for topic models. *The Journal of Machine Learning Research*, 10:1801–1828, 2009.
+
+Papandreou, G. and Yuille, A. Perturb-and-MAP random fields: Using discrete optimization to learn and sample from energy models. In *Proceedings of ICCV*, pp. 193–200, Barcelona, Spain, November 2011.
+
+Paulson, Edward. A sequential procedure for selecting the population with the largest mean from k normal populations. *The Annals of Mathematical Statistics*, pp. 174–180, 1964.
+
+Pillai, Natesh S and Smith, Aaron. Ergodicity of approximate MCMC chains with applications to large data sets. *arXiv preprint arXiv:1405.0182*, 2014.
+
+Scott, Steven L, Blocker, Alexander W, Bonassi, Fernando V, Chipman, H, George, E, and McCulloch, R. Bayes and big data: The consensus monte carlo algorithm. *EFaBBayes 250 conference*, 16, 2013.
+
+Serfling, R. J. Probability inequalities for the sum in sampling without replacement. *Ann. Statist.*, 2(1):39–48, 01 1974.
+
+Singh, Sameer, Wick, Michael, and McCallum, Andrew. Monte Carlo MCMC: efficient inference by approximate sampling. In *Proceedings of EMNLP-CoNLL 2012*, pp. 1104–1113, 2012.
+
+Wang, Chong, Chen, Xi, Smola, Alex J, and Xing, Eric P. Variance reduction for stochastic gradient optimization. In *Advances in Neural Information Processing Systems*, pp. 181–189, 2013.
+
+Welling, Max and Teh, Yee W. Bayesian learning via stochastic gradient Langevin dynamics. In *Proceedings of ICML 2011*, pp. 681–688, 2011.
+
+Wilson, James R. Variance reduction techniques for digital simulation. *American Journal of Mathematical and Management Sciences*, 4(3-4):277–312, 1984.
+
+Wu, Yao, Yan, Qiang, Bickson, Danny, Low, Yucheng, and Yang, Qing. Efficient multicore collaborative filtering. In *ACM KDD CUP workshop*, 2011.
+
+Xu, M., Teh, Y. W., Zhu, J., and Zhang, B. Distributed context-aware bayesian posterior sampling via expectation propagation. In *Advances in Neural Information Processing Systems*, 2014.
\ No newline at end of file
diff --git a/samples/texts_merged/3316628.md b/samples/texts_merged/3316628.md
new file mode 100644
index 0000000000000000000000000000000000000000..346a77e472cd17458c416955d4d413926d0b4bbf
--- /dev/null
+++ b/samples/texts_merged/3316628.md
@@ -0,0 +1,787 @@
+
+---PAGE_BREAK---
+
+When are Timed Automata Weakly Timed Bisimilar to
+Time Petri Nets?
+
+Béatrice Bérard, Franck Cassez, Serge Haddad, Didier Lime, Olivier Henri Roux
+
+► To cite this version:
+
+Béatrice Bérard, Franck Cassez, Serge Haddad, Didier Lime, Olivier Henri Roux. When are Timed Automata Weakly Timed Bisimilar to Time Petri Nets?. Theoretical Computer Science, Elsevier, 2008, 403 (2–3), pp.202–220. inria-00363024
+
+HAL Id: inria-00363024
+
+https://hal.inria.fr/inria-00363024
+
+Submitted on 20 Feb 2009
+
+**HAL** is a multi-disciplinary open access archive for the deposit and dissemination of scientific research documents, whether they are published or not. The documents may come from teaching and research institutions in France or abroad, or from public or private research centers.
+
+---PAGE_BREAK---
+
+When are Timed Automata weakly timed
+bisimilar to Time Petri Nets?
+
+B. Bérard¹, F. Cassez², S. Haddad³, D. Lime², O.H. Roux²
+
+¹ LAMSADE, Paris, France
+E-mail: Beatrice.Berard@lamsade.dauphine.fr
+
+² IRCCyN, Nantes, France
+E-mail: {Franck.Cassez | Didier.Lime | Olivier-h.Roux} @irccyn.ec-nantes.fr
+
+³ LSV, Cachan, France
+E-mail: Serge.Haddad@lsv.ens-cachan.fr
+
+**Abstract.** In this paper⁴, we compare Timed Automata (TA) and Time Petri Nets (TPN) with respect to weak timed bisimilarity. It is already known that the class of bounded TPNs is strictly included in the class of TA. It is thus natural to try and identify the subclass $\mathcal{T}\mathcal{A}^{wtb}$ of TA equivalent to some TPN for the weak timed bisimulation relation. We give a characterization of this subclass and we show that the membership problem and the reachability problem for $\mathcal{T}\mathcal{A}^{wtb}$ are PSPACE-complete. Furthermore we show that for a TA in $\mathcal{T}\mathcal{A}^{wtb}$ with integer constants, an equivalent TPN can be built with integer bounds but with a size exponential w.r.t. the original model. Surprisingly, using rational bounds yields a TPN whose size is linear.
+
+**Keywords:** Time Petri Nets, Timed Automata, Weak Timed Bisimilarity.
+
+# 1 Introduction
+
+**Time in Petri nets.** Adding explicit time to classical models of dynamic systems was first done in the seventies for Petri nets [24,27], with the aim of verifying quantitative properties of systems. Since then, various timed models based on Petri nets were proposed. Among them, the two most prominent ones are Time Petri Nets (TPN) [24,10] and Timed Petri Nets (TdPN) [27,2]. In TPNs, a time interval is associated with each transition and a transition can fire if its enabling duration belongs to its interval. Since time elapsing must not disable transitions, TPNs naturally model urgency requirements. Efficient verification methods have been designed for bounded TPNs (e.g. [11]) and implemented in several tools [19,11]. Roughly speaking, in TdPNs a time interval is associated with each arc: a token can be consumed along an input arc if its age belongs to the corresponding interval, and the initial age of a token produced along an output arc is non-deterministically selected inside the corresponding interval. Contrary to
+
+⁴ A preliminary version of this work has been published in [8].
+---PAGE_BREAK---
+
+TPNs, there is no urgency mechanism and tokens may become useless due to time elapsing. However the lazy behaviour of such nets has led to the design of verification methods for unbounded nets [1].
+
+**Time in finite automata.** Timed Automata (TA), introduced in the seminal paper [4] have yielded a significant breakthrough in the theory of modelling and analysis of timed systems. The most commonly used variant of TA, called Safety TA has been defined in [21] and this is the one we study here. A TA is a finite automaton equipped with a set of clocks which evolve synchronously with time. Elementary constraints on clock values restrict the sojourn in a location and the firing of transitions. In addition, transitions may involve some clock reset. Extensions of this model have been subsequently proposed (e.g. diagonal or linear constraints, silent transitions, non deterministic updates, etc. [13]). In these models, verification is based on a finite partition of clock values and is supported by various tools [28,25].
+
+**Expressiveness of timed models.** Due to the diversity of time mechanisms involved in these models, their relative expressiveness is a natural issue. More precisely, given two different formalisms, some questions must be considered:
+
+- Are they equally expressive? Otherwise is one formalism more powerful than the other?
+
+- Given a model of the first kind can we decide whether it is equivalent to a model of the second kind (*membership problem*)? In this case, can we build such an equivalent model?
+
+Two standard equivalence criteria are the family of timed languages generated and the weak timed bisimilarity of the models. For instance, in the framework of TdPNs w.r.t. the language criterion, it has been shown that read arcs add expressive power and that TdPNs and TA are incomparable [14]. In the framework of TA w.r.t. the language criterion, it has been shown that silent transitions add expressive power [9] and (very recently) that the corresponding membership problem is undecidable [15].
+
+**Comparison of TA and TPNs.** In [20], the authors compare Timed State Machines (TSM, a restricted version of TA) and TPNs, giving a translation from TSM to TPN that preserves timed languages. In [7], we have designed a more general translation between TA and TPNs with better complexity. In [6], we studied the effect of different semantics for TPNs on expressiveness w.r.t. weak timed bisimilarity. Here, we are interested in comparing the expressive power of TA and TPN for weak timed bisimilarity. Recall that there are unbounded TPNs for which no bisimilar TA exists. This is a direct consequence of the following observation: the untimed language of a TA is regular which is not necessarily the case for TPNs. It was proved in [17] that bounded TPNs form a strict subclass of the class of timed automata, in the sense that for each bounded TPN $N$, there exists a TA which is weakly timed bisimilar to $N$ but the converse is false. A similar result can be found in [22], where it is obtained by a completely different approach. However given a TA, deciding whether there exists a bisimilar TPN remained an open question.
+---PAGE_BREAK---
+
+**Our Contribution.** In this work, we give a characterization of the maximal subclass $\mathcal{T}\mathcal{A}^{wtb}$ of timed automata which admit a weakly timed bisimilar TPN. This condition is not intuitive and relates to the topological properties of the so-called region automaton associated with a TA. To prove that the condition is necessary, we introduce the notion of *uniform bisimilarity*, which is stronger than weak timed bisimilarity. Conversely, when the condition holds for a TA, we provide two effective constructions of bisimilar TPNs: the first one, with rational constants, has a size linear w.r.t. the TA, while the other one, which uses only integer constants, has an exponential size. From this characterization, we deduce that the membership problem (given a TA, decide whether there exists a TPN bisimilar to it) is *PSPACE*-complete. Finally we also prove that the reachability problem is *PSPACE*-complete.
+
+**Outline of the paper.** In section 2, we recall the definitions for Timed Transition Systems which are used to describe the semantics of TPNs and TA, and for timed bisimilarity relations. We also describe Timed Automata (TA) and the associated notions of zones and regions. Section 3 presents Time Petri Nets (TPNs) and explains the characterization of TA bisimilar to TPNs. The following sections are devoted to the proof for this characterization: section 4 defines the notion of uniform bisimulation and gives the proof that the condition is necessary, while sections 5 and 6 show that the condition is sufficient by exhibiting two constructions which differ by their complexity. We give complexity results in section 7 and we conclude in section 8.
+
+## 2 Timed Transition Systems and Timed Automata
+
+**Notations.** Let $\Sigma$ be a finite alphabet, $\Sigma^*$ (resp. $\Sigma^\omega$) the set of finite (resp. infinite) words of $\Sigma$ and $\Sigma^\infty = \Sigma^* \cup \Sigma^\omega$. We also use $\Sigma_e = \Sigma \cup \{\varepsilon\}$ with $\varepsilon$ (the empty word) not in $\Sigma$.
+
+The sets $\mathbb{N}$, $\mathbb{Q}_{\ge 0}$ and $\mathbb{R}_{\ge 0}$ are respectively the sets of natural, non-negative rational and non-negative real numbers. We denote by $\mathbf{0}$ the tuple $v \in \mathbb{N}^n$ such that $v(k) = 0$ for all $1 \le k \le n$. For a natural number $K$ and a tuple $v \in \mathbb{R}_{\ge 0}^n$, we write $v \le K$ if $v(k) \le K$ for all $1 \le k \le n$. For a non-negative real number $z$, we denote by $\lfloor z \rfloor$ the integral part of $z$ (the greatest natural number less than or equal to $z$) and, similarly, by $\lceil z \rceil$ the smallest natural number greater than or equal to $z$. We also denote by $\mathrm{fract}(z)$ the fractional part of $z$. Let $g > 0$ in $\mathbb{N}$; we write $\mathbb{N}_g = \{\frac{i}{g} \mid i \in \mathbb{N}\}$. A tuple $v \in \mathbb{Q}^n$ belongs to the g-grid if $v(k) \in \mathbb{N}_g$ for all $1 \le k \le n$.
+
+An interval $I$ of $\mathbb{R}_{\ge 0}$ is a $\mathbb{Q}_{\ge 0}$-interval iff its left endpoint belongs to $\mathbb{Q}_{\ge 0}$ and its right endpoint belongs to $\mathbb{Q}_{\ge 0} \cup \{\infty\}$. We set $I^\downarrow = \{x \mid x \le y$ for some $y \in I\}$, the downward closure of $I$ and $I^\uparrow = \{x \mid x \ge y$ for some $y \in I\}$, the upward closure of $I$. We denote by $\mathcal{I}(\mathbb{Q}_{\ge 0})$ the set of $\mathbb{Q}_{\ge 0}$-intervals of $\mathbb{R}_{\ge 0}$.
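The notations above can be made concrete with a small Python sketch (the names `fract` and `in_g_grid` are ours, not the paper's); exact rationals avoid floating-point artifacts when testing g-grid membership.

```python
from fractions import Fraction
from math import floor, ceil

def fract(z):
    # fract(z) = z - floor(z), the fractional part of a non-negative real
    return z - floor(z)

def in_g_grid(v, g):
    # True iff every component of v lies in N_g = {i/g : i in N}
    return all((Fraction(c) * g).denominator == 1 for c in v)
```

For instance, `fract(2.75)` is `0.75`, and `(1/2, 3/2)` lies in the 2-grid while `(1/3,)` does not.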
+---PAGE_BREAK---
+
+## 2.1 Timed Transition Systems and Equivalence Relations
+
+Timed transition systems describe systems which combine discrete and continuous evolutions. They are used to define and compare the semantics of time Petri nets and timed automata.
+
+A Timed Transition System (TTS) is a transition system $S = (Q, q_0, \rightarrow)$, where $Q$ is the set of configurations, $q_0 \in Q$ is the initial configuration and the relation $\rightarrow$ consists of either delay moves $q \xrightarrow{d} q'$, with $d \in \mathbb{R}_{\ge 0}$, or discrete moves $q \xrightarrow{a} q'$, with $a \in \Sigma_{\epsilon}$. Moreover, we require standard properties for the relation $\rightarrow$:
+
+* **Time-Determinism:** if $q \xrightarrow{d} q'$ and $q \xrightarrow{d} q''$ with $d \in \mathbb{R}_{\ge 0}$, then $q' = q''$
+* **0-delay:** $q \xrightarrow{0} q$
+* **Additivity:** if $q \xrightarrow{d} q'$ and $q' \xrightarrow{d'} q''$ with $d, d' \in \mathbb{R}_{\ge 0}$, then $q \xrightarrow{d+d'} q''$
+* **Continuity:** if $q \xrightarrow{d} q'$, then for every $d'$ and $d''$ in $\mathbb{R}_{\ge 0}$ such that $d = d' + d''$, there exists $q''$ such that $q \xrightarrow{d'} q'' \xrightarrow{d''} q'$.
+
+These properties have been formally introduced in the framework of Algebra of Timed Processes [23] but are also satisfied in the TTS studied here. With these properties, a run of $S$ can be defined as a finite or infinite sequence of moves $\rho = q_0 \xrightarrow{d_0} q'_0 \xrightarrow{a_0} q_1 \xrightarrow{d_1} q'_1 \xrightarrow{a_1} \cdots q_n \xrightarrow{d_n} q'_n \dots$ where discrete actions alternate with durations. For a finite run, we also write $q \xrightarrow{d_0 a_0 \dots d_n} q'$. The word Untimed($\rho$) in $\Sigma^{\infty}$ is obtained by the concatenation $a_0 a_1 \dots$ of labels in $\Sigma_{\epsilon}$ (so empty labels disappear), and Duration($\rho$) = $\sum_{i=0}^{|\rho|} d_i$, where $|\rho|$ is the length of $\rho$.
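A minimal Python sketch of Untimed($\rho$) and Duration($\rho$), assuming a run is encoded as a list of `("delay", d)` and `("act", a)` moves with `None` standing for the silent label $\varepsilon$ (this encoding is ours):

```python
def untimed(run):
    """Untimed(rho): the concatenation of discrete labels; silent moves
    (label None, i.e. epsilon) disappear."""
    return [a for kind, a in run if kind == "act" and a is not None]

def duration(run):
    """Duration(rho): the sum of all delay moves."""
    return sum(d for kind, d in run if kind == "delay")
```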
+
+From a TTS, we define the relation $\Rightarrow \subseteq Q \times (\Sigma \cup \mathbb{R}_{\ge 0}) \times Q$ for $a \in \Sigma$ and $d \in \mathbb{R}_{\ge 0}$ by:
+
+- $q \stackrel{d}{\Rightarrow} q'$ iff $\exists \rho = q \xrightarrow{w} q'$ with Untimed($\rho$) $= \varepsilon$ and Duration($\rho$) $= d$,
+
+- $q \stackrel{a}{\Rightarrow} q'$ iff $\exists \rho = q \xrightarrow{w} q'$ with Untimed($\rho$) $= a$ and Duration($\rho$) $= 0$.
+
+**Definition 1 (Weak Timed Bisimilarity).** Let $S_1 = (Q_1, q_0^1, \rightarrow_1)$ and $S_2 = (Q_2, q_0^2, \rightarrow_2)$ be two TTS and let $\approx$ be a binary relation over $Q_1 \times Q_2$. We write $q \approx q'$ for $(q, q') \in \approx$. The relation $\approx$ is a weak timed bisimulation between $S_1$ and $S_2$ iff $q_0^1 \approx q_0^2$ and for all $a \in \Sigma \cup \mathbb{R}_{\ge 0}$:
+
+- if $q_1 \xrightarrow{a}_1 q'_1$ and $q_1 \approx q_2$, then there exists $q'_2$ such that $q_2 \stackrel{a}{\Rightarrow}_2 q'_2$ and $q'_1 \approx q'_2$;
+
+- conversely, if $q_2 \xrightarrow{a}_2 q'_2$ and $q_1 \approx q_2$, then there exists $q'_1$ such that $q_1 \stackrel{a}{\Rightarrow}_1 q'_1$ and $q'_1 \approx q'_2$.
+
+Two TTS $S_1$ and $S_2$ are weakly timed bisimilar, written $S_1 \approx_{\omega} S_2$, if there exists a weak timed bisimulation relation between them.
+
+Strong timed bisimilarity would require similar properties for transitions labeled by $a \in \Sigma \cup \mathbb{R}_{\ge 0}$, but with $\xrightarrow{a}$ instead of $\Rightarrow$. Thus it forbids the possibility of simulating a move by a sequence. On the other hand, weak timed bisimilarity is more precise than language equivalence and it is well-known to be central among equivalence relations between timed systems. In the rest of the paper, we abbreviate weak timed bisimilarity by bisimilarity and we explicitly name other equivalences when needed.
+---PAGE_BREAK---
+
+## 2.2 Timed Automata
+
+First defined in [4], the model of timed automata (TA) associates with a finite automaton a set of non-negative real-valued variables called *clocks*. Let $X$ be a finite set of *clocks*. A constraint over $X$ is a conjunction of atomic formulas of the form $x \bowtie h$ for $x \in X$, $h \in \mathbb{N}$ and $\bowtie \in \{<, \leq, \geq, >\}$, and we write $\mathcal{C}(X)$ for the set of constraints.
+
+**Definition 2 (Timed Automaton).** A Timed Automaton $\mathcal{A}$ over alphabet $\Sigma_{\varepsilon}$ is a tuple $(L, \ell_0, X, \Sigma_{\varepsilon}, E, \text{Inv})$ where
+
+- $L$ is a finite set of locations with $\ell_0 \in L$ the initial location,
+
+- $X$ is a finite set of clocks,
+
+- $E \subseteq L \times \mathcal{C}(X) \times \Sigma_{\varepsilon} \times 2^X \times L$ is a finite set of edges and
+
+- $\text{Inv} \in \mathcal{C}(X)^L$ assigns an invariant to each location.
+
+An edge $e = (\ell, \gamma, a, R, \ell') \in E$, also written $\ell \xrightarrow{\gamma, a, R} \ell'$, represents a transition from location $\ell$ to location $\ell'$ with guard $\gamma$ and reset set $R \subseteq X$. We restrict the invariants to conjunctions of terms of the form $x \bowtie h$ for $x \in X$, $h \in \mathbb{N}$ and $\bowtie \in \{<, \leq\}$.
+
+We consider *injectively-labelled automata* i.e., where two different edges have different labels (and no label is $\epsilon$). Figure 1 shows two injectively-labelled timed automata that will be used to illustrate the constructions throughout the paper.
+
+Fig. 1. Two timed automata
+
+A valuation $v$ is a mapping in $\mathbb{R}_{\ge 0}^X$. The effect of reset operations and time elapsing on valuations are described as follows. For $R \subseteq X$, the valuation $v[R \mapsto 0]$ maps each variable in $R$ to the value 0 and agrees with $v$ over $X \setminus R$. For valuation $v$ and $d \in \mathbb{R}_{\ge 0}$, valuation $v+d$ is defined by: $\forall x \in X, (v+d)(x) = v(x)+d$. Constraints of $\mathcal{C}(X)$ are interpreted over valuations: we write $v\models\gamma$ when the constraint $\gamma$ is satisfied by $v$. The set of valuations satisfying a constraint $\gamma$ is denoted by $[\gamma]$.
+---PAGE_BREAK---
+
+**Definition 3 (Semantics of TA).** The semantics of a timed automaton $\mathcal{A} = (L, \ell_0, X, \Sigma_\varepsilon, E, \text{Inv})$ is a timed transition system $S_\mathcal{A} = (Q, q_0, \rightarrow)$ where $Q = L \times (\mathbb{R}_{\ge 0})^X$, $q_0 = (\ell_0, \mathbf{0})$ and $\rightarrow$ is defined by:
+
+- either a delay move $(\ell, v) \xrightarrow{d} (\ell, v+d)$ iff $v+d \models \text{Inv}(\ell)$,
+
+- or a discrete move $(\ell, v) \xrightarrow{e} (\ell', v')$ iff there exists some $e = (\ell, \gamma, a, R, \ell')$ in $E$ such that $v \models \gamma$, $v' = v[R \mapsto 0]$ and $v' \models \text{Inv}(\ell')$.
+
+In order to ensure in a syntactic way that a move associated with a discrete transition $e$ always leads to a configuration $(\ell', v')$ such that $v' \models \text{Inv}(\ell')$, we may modify the initial timed automaton in the following way: any atomic constraint related to a clock $x$ occurring in the invariant of $\ell'$ is added to the guard of each transition arriving in $\ell'$ which does not reset $x$. This transformation does not change the resulting transition system but makes some proofs simpler, and in the sequel we always assume that the timed automaton has this property.
+
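A minimal executable reading of Definition 3, under a hypothetical encoding of our own: a configuration is `(location, valuation)`, guards and invariants are lists of `(clock, op, constant)` atoms, and an edge is the tuple `(source, guard, label, resets, target)`.

```python
OPS = {"<": lambda a, b: a < b, "<=": lambda a, b: a <= b,
       ">": lambda a, b: a > b, ">=": lambda a, b: a >= b}

def sat(v, constraint):
    """v |= constraint, where constraint is a conjunction of atoms."""
    return all(OPS[op](v[x], h) for x, op, h in constraint)

def delay_move(config, d, inv):
    """(l, v) --d--> (l, v+d) iff v+d satisfies Inv(l); else None."""
    loc, v = config
    v2 = {x: t + d for x, t in v.items()}
    return (loc, v2) if sat(v2, inv.get(loc, [])) else None

def discrete_move(config, edge, inv):
    """(l, v) --e--> (l', v[R -> 0]) iff the guard and Inv(l') hold; else None."""
    loc, v = config
    src, guard, a, resets, dst = edge
    if src != loc or not sat(v, guard):
        return None
    v2 = {x: (0.0 if x in resets else t) for x, t in v.items()}
    return (dst, v2) if sat(v2, inv.get(dst, [])) else None
```

Returning `None` for disabled moves mirrors the fact that the invariant blocks time elapsing and the guard blocks firing.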
+Since our results are mainly based on the region automaton and some of its variants, we now recall the material needed.
+
+## 2.3 Elementary zones of a timed automaton
+
+**Notations.** A zone is a subset of $(\mathbb{R}_{\ge 0})^X$ defined by a conjunction of atomic clock constraints of the form $x \bowtie h$ or $x - y \bowtie h$, where $x$ and $y$ are clocks in $X$, $h \in \mathbb{N}$ and $\bowtie \in \{<, \leq, \geq, >\}$. In particular, if $\gamma$ is a clock constraint in $\mathcal{C}(X)$, then the set $[\gamma]$ is a zone. Constraints of the form $x - y \bowtie h$ are usually called *diagonal constraints*. The future of a zone $Z$ is defined by $\mathrm{fut}(Z) = \{v + d \mid v \in Z, d \in \mathbb{R}_{\ge 0}\}$. A zone $Z$ satisfies a constraint $\gamma \in \mathcal{C}(X)$, written $Z \models \gamma$, if all valuations in $Z$ satisfy $\gamma$. For $k \in \mathbb{N}$, a *k-zone* is a zone for which no atomic clock constraint in its definition involves a constant greater than $k$.
+
+**Elementary zones.** Recall [4] that, if $m$ is the maximal constant appearing in atomic formulas $x \bowtie h$ of $\mathcal{A}$, an equivalence relation with finite index can be defined on clock valuations, leading to a partition of $(\mathbb{R}_{\ge 0})^X$, with the following property: two equivalent valuations have the same behaviour under progress of time and reset operations, with respect to the constraints. In the original definition, an element of the partition is specified by a pair $(\{I_x\}_{x \in X}, \prec)$ where $I_x$ is an interval in the set $\{\{0\}, ]0, 1[, \{1\}, \dots, \{m\}, ]m, +\infty[\}$ and $\prec$ is a relation on the subset of clocks $x$ such that $I_x \neq ]m, +\infty[$, defined by $x \prec y \Leftrightarrow \mathrm{fract}(x) \le \mathrm{fract}(y)$.
+
+Note that the property related to the behaviour of the systems holds for any partition which refines the above partition. In this paper, we define a family of refining partitions $\mathcal{P}_{K,g}$, for $K \ge m+1$ and $g \in \mathbb{N} \setminus \{0\}$, whose elements are called *elementary zones*. The constant $g$ is called the *granularity* and $\mathcal{P}_{K,g}$ is called a *g-grid*. For such a partition the interval $I_x$ belongs to the set $\{\{0\}, ]0, \frac{1}{g}[, \{\frac{1}{g}\}, ]\frac{1}{g}, \frac{2}{g}[, \dots, \{K-\frac{1}{g}\}, ]K-\frac{1}{g}, K[, [K, +\infty[\}$ (instead of keeping $\{K\}$ separately). As before, we also specify the ordering on the fractional parts w.r.t.
+---PAGE_BREAK---
+
+$g$ for all clocks $x$ (i.e. the value $\delta \in [0, \frac{1}{g}[$ such that $\exists i, x = \frac{i}{g} + \delta$) with valuation less than $K$. We include the case $K = +\infty$.
+
+When $K$ is finite, reachability can be decided by the usual method applied on these partitions; however we consider $[K, \infty[$ (rather than the usual $]K, \infty[$) as the last interval, for topological reasons explained later. The partition with $K = +\infty$ is used here as a tool for our expressiveness results.
+
+**Fig. 2.** Partition of $(\mathbb{R}^+)^2$ with granularity $g = 1$ and $K = 3$
+
+Figures 2 and 3 represent different partitions for the set of two clocks $X = \{x, y\}$. The horizontal and vertical lines correspond to the possible intervals $I_x$ and $I_y$ whereas the diagonal lines specify the ordering of fractional parts.
+
+Figure 2 is associated with $g = 1$ and $K = 3$. For this example, elementary zones $Z_1$ and $Z_2$ are described as follows: $Z_1$: $(2 < x < 3) \land (1 < y < 2) \land (0 < \mathrm{fract}(y) < \mathrm{fract}(x))$ or equivalently, $(2 < x < 3) \land (1 < y < 2) \land (x - y > 1)$; and $Z_2$: $(x \ge 3) \land (1 < y < 2)$. Figure 3 (left) is associated with $g = 2$ and $K = 2$ while Figure 3 (right) is associated with $g = 2$ and $K = +\infty$. The dotted lines of this figure mean that the pattern is infinitely repeated.
+
+**Fig. 3.** Partitions of $(\mathbb{R}^+)^2$ with granularity $g = 2$, for $K = 2$ and $K = +\infty$
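The partitions just illustrated can be computed by a signature function: two valuations lie in the same elementary zone iff they get the same grid intervals and the same ordering of fractional parts. The encoding below is our own sketch (with the last interval $[K, +\infty[$ collapsed to a single tag), not a construction from the paper.

```python
from fractions import Fraction

def region_signature(v, K, g):
    """Signature of valuation v (clock -> value) in the partition P_{K,g}:
    per clock, its 1/g-grid interval (capped at [K, +oo[), plus the rank
    ordering of fractional parts of the clocks below K."""
    intervals, fracs = {}, {}
    for x, val in v.items():
        val = Fraction(val)
        if val >= K:
            intervals[x] = "inf"                 # last interval [K, +oo[
            continue
        s = val * g                              # position in the 1/g grid
        i = s.numerator // s.denominator         # floor
        if s == i:
            intervals[x] = ("point", i)          # v(x) = i/g exactly
            fracs[x] = Fraction(0)
        else:
            intervals[x] = ("open", i)           # i/g < v(x) < (i+1)/g
            fracs[x] = s - i
    ranks = sorted(set(fracs.values()))          # equal fracs share a rank
    order = {x: ranks.index(f) for x, f in fracs.items()}
    return intervals, order
```

With $g = 1$, $K = 3$, both $(x, y) = (2.5, 1.2)$ and $(2.7, 1.3)$ fall in $Z_1$, while $(2.3, 1.7)$ lies on the other side of the diagonal and gets a different signature.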
+---PAGE_BREAK---
+
+If $Z$ and $Z'$ are elementary zones, $Z'$ is a time successor of $Z$, written $Z \leq Z'$,
+if for each valuation $v \in Z$, there is some $d \in \mathbb{R}_{\geq 0}$ such that $v+d \in Z'$. For each
+elementary zone $Z$, there is at most one elementary zone $Z'$ such that (i) $Z'$ is a time
+successor of $Z$, (ii) $Z \neq Z'$ and (iii) there is no time successor $Z''$ different from
+$Z$ and $Z'$ such that $Z \leq Z'' \leq Z'$. When it exists, this elementary zone is called
+the immediate time successor of $Z$ and denoted by $\mathrm{succ}(Z)$. From the definition
+of elementary zones as equivalence classes consistent with reset operations, the
+reset can be extended to elementary zones, so that $Z[R \mapsto 0]$ is the elementary
+zone containing any $v[R \mapsto 0]$, for $v \in Z$.
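+The immediate time successor can be computed directly on such descriptions. The following Python sketch is our own toy encoding (not code from the paper), assuming $K = +\infty$ so that every clock stays relevant:

```python
# A zone is a triple (ints, classes, closed): ints[x] is the integer
# multiple of 1/g at or just below clock x, classes lists the equivalence
# classes of clocks by increasing fractional part, and closed tells whether
# the zone is time-closed (its first class sits exactly on the grid).

def succ(ints, classes, closed):
    if closed:
        # letting a little time elapse: same minimal vector, same ordering
        # of fractional parts, but the zone becomes time-open
        return ints, classes, False
    # time-open zone: the clocks with maximal fractional part reach the
    # next grid point and become the new first class (fractional part 0)
    ints = {x: n + (1 if x in classes[-1] else 0) for x, n in ints.items()}
    return ints, [classes[-1]] + classes[:-1], True

# With g = 1: start from the time-closed zone x = 1 and 0 < y < 1.
z = ({'x': 1, 'y': 0}, [['x'], ['y']], True)
z = succ(*z)   # time-open: 1 < x < 2, 0 < y < 1, frac(x) < frac(y)
z = succ(*z)   # time-closed again: y has reached 1, x stays in ]1, 2[
print(z)       # ({'x': 1, 'y': 1}, [['y'], ['x']], True)
```

+Note how the alternation between time-closed and time-open descriptions mentioned later (for $K = +\infty$) shows up in the `closed` flag.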
+
+Standard topological notions on $(\mathbb{R}_{\ge 0})^n$ apply to zones. Moreover, due to
+the particular form of the constraints, the topological closure of any elementary
+zone has a minimal element.
+
+For a timed automaton $\mathcal{A}$, a constant $K$ and a granularity $g$, we call region
+a pair $(\ell, Z)$, where $\ell$ is a location of $\mathcal{A}$ and $Z$ an elementary zone of $(\mathbb{R}_{\ge 0})^X$.
+We now give a description for regions, which distinguishes time-closed and time-
+open descriptions. It is equivalent to the original one but more convenient for
+our proofs and it fits both cases, whether $K$ is finite or infinite.
+
+**Definition 4 (Region description w.r.t. constant K and granularity g).**
+
+A time-closed description of a region *r* is given by:
+
+- $\ell_r$ the location of r,
+
+- $min_r \in \mathbb{N}_g^X$ with $\forall x, min_r(x) \le K$, the minimal vector of the topological closure of r,
+
+- $\text{Act}X_r = \{x \in X \mid min_r(x) < K\}$ the subset of relevant clocks,
+
+- the number $size_r$ of different fractional parts for the values of relevant clocks
+in the $\mathbb{N}_g^{\text{Act}X_r}$ grid, with $1 \le size_r \le \text{Max}(|\text{Act}X_r|, 1)$ and the onto mapping
+$\text{ord}_r : X \to \{1, \dots, size_r\}$ giving the ordering of the fractional parts.
+
+By convention, $\forall x \in X \setminus \text{Act}X_r$, $\text{ord}_r(x) = 1$.
+
+Then $r = \{(\ell_r, min_r + \boldsymbol{\delta}) \mid \boldsymbol{\delta} \in \mathbb{R}_{\ge 0}^X \text{ and } \forall x, y \in \text{Act}X_r,\ [\text{ord}_r(x) = 1 \Leftrightarrow \boldsymbol{\delta}(x) = 0] \wedge \boldsymbol{\delta}(x) < 1/g \wedge [\text{ord}_r(x) < \text{ord}_r(y) \Leftrightarrow \boldsymbol{\delta}(x) < \boldsymbol{\delta}(y)]\}$
+
+A time-open description of a region *r* is defined with the same attributes (and conditions) as the time-closed one with:
+
+$$
+r = \left\{ (\ell_r, min_r + \boldsymbol{\delta} + \boldsymbol{d}) \;\middle|\; \exists d \in \mathbb{R}_{>0},\ \forall x \in ActX_r,\ \boldsymbol{d}(x) = d \wedge \boldsymbol{\delta}(x) + d < 1/g,\ \text{and } \forall x \notin ActX_r,\ \boldsymbol{d}(x) = 0 \right\}.
+$$
+
+The set $[X]_r$ is the set of equivalence classes of clocks w.r.t. their fractional parts,
+i.e. $x$ and $y$ are equivalent iff $\mathrm{ord}_r(x) = \mathrm{ord}_r(y)$.
+
+Thus, a time-open region corresponds to an immediate time successor of a time-
+closed region and the decomposition of a valuation in a time-open region as
+$min_r + \mathbf{\delta} + d$ is unique if $min_r + \mathbf{\delta}$ belongs to the previous time-closed region.
+This property is used in the sequel.
+
+Also remark that $min_r \notin r$ except if there is a single class of clocks relative
+to *r* (for instance if the corresponding zone is a singleton). Of course, when
+*K* = +∞, the part about *relevant* clocks, for which the value is less than *K*,
+---PAGE_BREAK---
+
+can be omitted (since $ActX_r = X$). The hypothesis $K = +\infty$ makes some proofs simpler, because the extremal case where a clock value is greater than $K$ is avoided, and it can be lifted afterwards. Furthermore, when $K$ is finite, some regions admit both time-open and time-closed descriptions (for instance a region associated with zone $Z_2$ in fig. 2), whereas when $K = +\infty$, a region admits a single description, so that time elapsing leads to an alternation of time-open regions (where time can elapse) and time-closed ones (where no time can elapse). Moreover, in this case, the representation of a valuation in a time-open region can be written $min_r + \delta + d$ with $d$ a (scalar) duration.
+
+## 2.4 Region automaton and class automaton of a TA.
+
+For a timed automaton $\mathcal{A}$, a constant $K$ and a granularity $g$, the region automaton $R(\mathcal{A})_{g,K}$ is a finite automaton, the states of which are regions. These regions are built inductively from the initial one $(\ell_0, \mathbf{0})$ by the following transitions over the set of labels $\{succ\} \cup \Sigma_\varepsilon$:
+
+$$
+\begin{aligned}
+(\ell, Z) & \xrightarrow{succ} (\ell, succ(Z)) && \text{if } succ(Z) \models \mathrm{Inv}(\ell), \text{ and} \\
+(\ell, Z) & \xrightarrow{a} (\ell', Z') && \text{if there is a transition } (\ell, \gamma, a, R, \ell') \in E \text{ such that } Z \models \gamma \\
+& && \text{and } Z' = Z[R \mapsto 0], \text{ with } Z' \models \mathrm{Inv}(\ell').
+\end{aligned}
+$$
+
+A region $r = (\ell, Z)$ is said to be maximal in $R(\mathcal{A})_{g,K}$ with respect to $\ell$ if no succ-transition is possible from $r$. In the sequel, the topological properties of $r$ are implicitly derived from those of $Z$. We write $\bar{r}$ for the topological closure of $r$, and recall that $min_r$ denotes the minimal vector of $\bar{r}$.
+
+We also consider another automaton, called the *class automaton*, in which the states, called *classes*, are of the form $(\ell, Z)$, where $Z$ is a zone. In this case, the automaton is built from the initial class $(\ell_0, fut(\mathbf{0}) \cap Inv(\ell_0))$ by the following transitions:
+
+$$
+\begin{aligned}
+(\ell, Z_1) \xrightarrow{a} (\ell', Z_2) \quad & \text{if there exists } (\ell, \gamma, a, R, \ell') \in E \text{ such that } Z_1 \cap [\gamma] \neq \emptyset, \text{ and} \\
+& Z_2 = fut((Z_1 \cap [\gamma])[R \mapsto 0]) \cap Inv(\ell').
+\end{aligned}
+$$
+
+While this construction does not always yield a finite number of classes, it is now well known [12] that for a timed automaton without diagonal constraints (as in our case), replacing each zone $Z$ from the previous construction by its $K$-approximation ($Approx_K(Z)$, the smallest $K$-zone containing $Z$) yields a finite automaton such that any configuration belonging to a zone is strongly timed bisimilar to a reachable configuration (see also the discussion of the next paragraph). This method is usually implemented in tools like UPPAAL [25] or KRONOS [28] for on-the-fly verification, with each zone represented by a Difference Bounded Matrix [18].
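+As an illustration of the zone operations underlying the class automaton, here is a minimal Difference Bounded Matrix sketch in Python. It is an illustrative toy, not the DBM machinery of UPPAAL or KRONOS: only non-strict integer bounds are kept, and the strict/non-strict distinction and the $K$-approximation are omitted:

```python
import math

# D[i][j] bounds x_i - x_j <= D[i][j], with x_0 = 0 as reference clock.
INF = math.inf

def canonical(D):
    # Floyd-Warshall tightening: shortest paths yield the canonical form
    n = len(D)
    for k in range(n):
        for i in range(n):
            for j in range(n):
                D[i][j] = min(D[i][j], D[i][k] + D[k][j])
    return D

def future(D):
    # time elapsing (fut): drop the upper bounds x_i - x_0 <= c
    for i in range(1, len(D)):
        D[i][0] = INF
    return D

def reset(D, x):
    # x := 0: x - x_0 and x_0 - x inherit the bounds of the reference clock
    for j in range(len(D)):
        D[x][j] = D[0][j]
        D[j][x] = D[j][0]
    return D

# zone 0 <= x <= 2, 0 <= y <= 1, with clock indices 1 -> x, 2 -> y
D = canonical([[0, 0, 0],
               [2, 0, INF],
               [1, INF, 0]])   # tightening derives x - y <= 2, y - x <= 1
future(D)                      # x, y now unbounded above
reset(D, 2)                    # y := 0
D = canonical(D)
print(D[2][1], D[1][2])        # 0 inf : y - x <= 0, x - y unbounded
```

+Note that after `future`, resetting $y$ makes the diagonal bound on $x - y$ vanish, since $x$ is no longer bounded above.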
+
+## 2.5 Reachability
+
+For a region $r$ of $R(\mathcal{A})_{g,K}$, not all configurations of $r$ are reachable. Nevertheless, by induction on the reachability relation, the following property can be shown: for any region $r$ of $R(\mathcal{A})_{g,K}$, there is a region $reach(r)$ with respect to the $g$-grid and constant $K = \infty$ such that
+---PAGE_BREAK---
+
+- (i) $reach(r) \subset r$,
+
+- (ii) each configuration of $reach(r)$ is reachable, and
+
+- (iii) if $reach(r)$ admits a time-open description then so does $r$; otherwise $r$ admits a time-closed description.
+
+As a consequence, we have: $\forall x \in ActX_r, min_{reach(r)}(x) = min_r(x)$ and $\forall x \in X \setminus ActX_r, min_{reach(r)}(x) \ge K$, and $ord_r$ restricted to $ActX_r$ is identical to $ord_{reach(r)}$.
+
+Consider now the relation $\mathcal{R}$ defined by $(\ell, v)\mathcal{R} (\ell, v')$ iff $\forall x \in X, v'(x) = v(x) \lor (v(x) \ge K \land v'(x) \ge K)$. It is a strong timed bisimulation relation. From the previous observations, we note that each configuration of a region is strongly timed bisimilar to a reachable configuration of this region. Thus speaking about reachability of regions is a slight abuse of notation. Note that the same property holds for the classes of the class automaton.
+
+# 3 Characterizing TA bisimilar to Time Petri Nets
+
+## 3.1 Time Petri Nets
+
+Introduced in [24], Time Petri Nets (TPNs) extend Petri nets by associating a (topologically) closed time interval with each transition (see figure 4). The meaning of this interval is detailed after the definition.
+
+**Fig. 4.** A labelled time Petri net
+
+**Definition 5 (Labeled Time Petri Net).** A Labeled Time Petri Net $\mathcal{N}$ over $\Sigma_\epsilon$ is a tuple $(P, T, \Sigma_\epsilon, \bullet(\cdot), (\cdot)^\bullet, M_0, \Lambda, I)$ where:
+
+- P is a finite set of places,
+
+- T is a finite set of transitions with $P \cap T = \emptyset$,
+
+- $\bullet(\cdot) \in (\mathbb{N}^P)^T$ is the backward incidence mapping,
+
+- $(\cdot)^\bullet \in (\mathbb{N}^P)^T$ is the forward incidence mapping,
+
+- $M_0 \in \mathbb{N}^P$ is the initial marking,
+
+- $\Lambda: T \to \Sigma_\epsilon$ is the labeling function,
+
+- $I: T \to \mathcal{I}(\mathbb{Q}_{\ge 0})$ associates with each transition a closed firing interval.
+---PAGE_BREAK---
+
+A TPN $\mathcal{N}$ is a $g$-TPN if for all $t \in T$, the interval $I(t)$ has its bounds in $\mathbb{N}_g$.
+We also use $\bullet t$ (resp. $t^\bullet$) to denote the set of places $\bullet t = \{p \in P | \bullet t(p) > 0\}$
+(resp. $t^\bullet = \{p \in P | t^\bullet(p) > 0\}$) as is common in the literature. The intended
+meaning of these notations will be clear from the context.
+
+As usual, a marking *M* is a mapping in $\mathbb{N}^P$, with $M(p)$ the number of tokens
+in place *p*. A transition *t* is enabled in a marking *M* iff $M \geq \bullet t$. We denote by
+*En*(*M*) the set of enabled transitions in *M*. A configuration of a TPN is a pair
+(*M*, *v*) where *M* is a marking and the valuation *v* is a mapping in $(\mathbb{R}_{\ge 0})^{En(M)}$,
+which describes the values of clocks implicitly associated with transitions en-
+abled in *M*. Roughly speaking, such a clock measures the time elapsed since the
+enabling of the corresponding transition as explained more precisely below.
+
+An enabled transition *t* can be fired if $\nu(t)$ belongs to the interval $I(t)$. The result of this firing is as usual the new marking $M' = M - \bullet t + t^\bullet$. Moreover, some valuations are reset and we say that the corresponding transitions are newly enabled. Different semantics are possible for this operation. In this paper, we choose persistent atomic semantics, which is different from the classical semantics [10,5], but equivalent when the net is bounded [6]. The predicate describing when a transition $t'$ is newly enabled by the firing of transition *t* is defined by:
+
+$$ \uparrow \text{enabled}(t', M, t) = t' \in \text{En}(M - \bullet t + t^{\bullet}) \wedge (t' \notin \text{En}(M)). $$
+
+Thus, firing a transition is considered as an atomic step and the transition currently fired behaves like the other transitions ($\nu(t)$ need not be reset when $t$ is fired). In classical semantics, when firing a transition, disabling of the other transitions is checked after the consumption of tokens from input places and before the production of tokens to output places. Furthermore the clock associated with the fired transition is always reset. Observe that our results also hold for the classical semantics but we have chosen this one as it leads to more elegant constructions in sections 5 and 6. More precisely, the sufficient condition is obtained by construction of a bounded TPN and it has been proven that for bounded nets, both semantics are equivalent [6]. The necessary condition is based on lemmas 1 and 2 which do not depend on the triggering of clock reset in TPNs.
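+The predicate above translates directly into code. In the following Python sketch, markings and incidence mappings are dictionaries, and the two-transition net is our own toy example (not the net of figure 4 itself). Both calls return `False`, illustrating that under the persistent atomic semantics neither the fired transition nor a transition that stays enabled throughout the atomic step gets its clock reset:

```python
def enabled(M, pre_t):
    # t is enabled in M iff M >= pre(t) componentwise
    return all(M.get(p, 0) >= n for p, n in pre_t.items())

def newly_enabled(t_prime, M, t, pre, post):
    # t' is newly enabled by the (atomic) firing of t iff it is enabled in
    # M - pre(t) + post(t) but was not enabled in M
    M2 = {p: M[p] - pre[t].get(p, 0) + post[t].get(p, 0) for p in M}
    return enabled(M2, pre[t_prime]) and not enabled(M, pre[t_prime])

pre  = {'t1': {'p1': 1, 'p2': 1}, 't2': {'p2': 1}}
post = {'t1': {'p2': 1},          't2': {}}
M = {'p1': 2, 'p2': 1}
print(newly_enabled('t1', M, 't1', pre, post))  # False: t1 keeps its clock
print(newly_enabled('t2', M, 't1', pre, post))  # False: t2 stays enabled
```

+Under the classical semantics, by contrast, the clock of the fired transition `t1` would be reset unconditionally.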
+
+The set $\mathbf{ADM}(\mathcal{N})$ of admissible configurations consists of the pairs $(M, v)$
+such that $v(t) \in I(t)^{\downarrow}$ for each transition $t \in En(M)$. Thus time can progress
+in a marking only until it reaches the first right endpoint of the intervals for all
+enabled transitions. For $d \in \mathbb{R}_{\ge 0}$, the valuation $v+d$ is defined by $(v+d)(t) =$
+$v(t)+d$ for each $t \in En(M)$.
+
+**Definition 6 (Semantics of TPN).** The semantics of a TPN $\mathcal{N} = (P, T, \Sigma_\epsilon, \bullet(\cdot), (\cdot)^\bullet, M_0, \Lambda, I)$ is a TTS $S_\mathcal{N} = (Q, q_0, \rightarrow)$ where $Q = \mathbf{ADM}(\mathcal{N})$, $q_0 = (M_0, \mathbf{0})$ and $\rightarrow$ is defined by:
+
+- either a delay move $(M, v) \xrightarrow{d} (M, v+d)$
+ iff $\forall t \in En(M), v(t)+d \in I(t)^{\downarrow},$
+---PAGE_BREAK---
+
+- or a discrete move $(M, \nu) \xrightarrow{\Lambda(t)} (M - \bullet t + t^\bullet, \nu')$ where $\forall t' \in En(M - \bullet t + t^\bullet)$,
+ $\nu'(t') = 0$ if $\uparrow enabled(t', M, t)$ and $\nu'(t') = \nu(t')$ otherwise,
+ iff $t \in En(M)$ and $\nu(t) \in I(t)$.
+
+We simply write $(M, \nu) \xrightarrow{w}$ to emphasise that a sequence of transitions $w$ can be fired. If $Duration(w) = 0$, we say that $w$ is an *instantaneous firing sequence*. A net is said to be $k$-bounded if for each reachable configuration $(M, \nu)$ and for each place $p$, $M(p) \le k$.
+
+Writing configurations as $((M(p_1), M(p_2)), (\nu(t_1), \nu(t_2)))$, so that the initial configuration is $((2, 1), (0, 0))$, the sequence $((2, 1), (0, 0)) \xrightarrow{2} ((2, 1), (2, 2)) \xrightarrow{t_1} ((1, 1), (2, 2)) \xrightarrow{t_1} ((0, 1), (-, 2))$ is a firing sequence of the net of figure 4. Note that, due to the semantics, the first firing of $t_1$ does not reset its clock even if the token in $p_2$ is consumed and produced again. Note also that time cannot progress beyond time 2 as long as $t_1$ remains fireable. Afterwards time can progress for at most 2 t.u., at which point $t_2$ must fire (it could also fire before). This temporal feature is sometimes called an *urgent* behaviour, in contrast to a *lazy* behaviour.
+
+**Fig. 5.** An execution in a time Petri net
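+The execution above can be replayed mechanically. In the following Python sketch, the incidence mappings and the intervals $I(t_1) = [0, 2]$ and $I(t_2) = [0, 4]$ are assumptions reconstructed from the behaviour described for the net of figure 4 (they are not read off the figure itself), under the persistent atomic semantics:

```python
# Assumed net structure and intervals (reconstructed, see lead-in above).
pre  = {'t1': {'p1': 1, 'p2': 1}, 't2': {'p2': 1}}
post = {'t1': {'p2': 1},          't2': {}}
I    = {'t1': (0, 2),             't2': (0, 4)}

def enabled(M, t):
    return all(M.get(p, 0) >= n for p, n in pre[t].items())

def delay(M, v, d):
    # time may elapse only while every enabled clock stays in I(t)^down
    assert all(v[t] + d <= I[t][1] for t in v)
    return M, {t: v[t] + d for t in v}

def fire(M, v, t):
    assert enabled(M, t) and I[t][0] <= v[t] <= I[t][1]
    M2 = {p: M[p] - pre[t].get(p, 0) + post[t].get(p, 0) for p in M}
    # persistent atomic semantics: only newly enabled transitions are reset,
    # and the fired transition keeps its clock if it stays enabled
    v2 = {u: (v[u] if u in v else 0) for u in pre if enabled(M2, u)}
    return M2, v2

M, v = {'p1': 2, 'p2': 1}, {'t1': 0, 't2': 0}
M, v = delay(M, v, 2)    # ((2, 1), (2, 2))
M, v = fire(M, v, 't1')  # ((1, 1), (2, 2)): t1's clock is not reset
M, v = fire(M, v, 't1')  # ((0, 1), (-, 2)): t1 now disabled
M, v = delay(M, v, 2)    # t2 must fire at the latest now
M, v = fire(M, v, 't2')
print(M, v)              # {'p1': 0, 'p2': 0} {}
```

+The assertion in `delay` enforces the urgency described above: after the second firing of $t_1$, at most 2 more time units may elapse before $t_2$ fires.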
+
+## 3.2 A characterization of TA bisimilar to TPNs
+
+We aim at characterizing TA bisimilar to some TPN whatever the labelling of the TA. In fact, checking this property is equivalent to testing whether an associated injectively-labelled TA is bisimilar to some TPN. This is why we only consider injectively-labelled TA in the sequel.
+
+TA include both lazy mechanisms, since time elapsing can falsify the guard of an edge, and urgent mechanisms, since invariants may forbid time elapsing in some configurations. Hence a characterization of TA bisimilar to some TPN seems closely related to the absence of laziness features. In this paragraph, we first present two intuitive conditions for a TA to be bisimilar to a TPN which do not characterize this property, before introducing the exact characterization.
+
+The first condition is obtained by observing that a transition (and more generally an instantaneous firing sequence) of a TPN cannot be disabled by time elapsing. Consequently, our first attempt to characterize this property is: "A TA is bisimilar to some TPN if time elapsing cannot falsify the guard of an edge
+---PAGE_BREAK---
+
+starting from the current location". By the above observation, this condition is necessary. However, it is not sufficient: the automaton $\mathcal{A}_1$ of figure 1 does not admit a bisimilar TPN although it fulfills this condition, as can be checked on its region automaton represented in figure 6. The three parts of this figure (from left to right) represent respectively the reachable regions associated with locations $\ell_0, \ell_1$ and $\ell_2$. Thin arrows represent time elapsing (thus staying in the same location) while thick (labelled) arrows represent discrete transitions from some $(\ell, Z)$ to another region $(\ell', Z')$. For instance, $(\ell_0, 0 < x = y < 1) \xrightarrow{a} (\ell_1, 0 < x < 1 \land y = 0)$.
+
+**Fig. 6.** The region automaton of $\mathcal{A}_1$ ($K=2, g=1$)
+
+The previous observation about the behaviour of a TPN may be sharpened, yielding the following lemma, which is useful for subsequent developments.
+
+**Lemma 1.** Let $(M, \nu)$ and $(M, \nu + \delta)$ be two admissible configurations of a g-TPN with $\nu, \delta \in \mathbb{R}_{\ge 0}^{En(M)}$. Let $w$ be an instantaneous firing sequence, then:
+
+(i) $(M, \nu) \xrightarrow{w} \text{implies } (M, \nu + \delta) \xrightarrow{w}$
+
+(ii) If $\nu \in \mathbb{N}_g^{En(M)}$ and $\delta \in [0, 1/g]^{En(M)}$ then $(M, \nu + \delta) \xrightarrow{w}$ implies $(M, \nu) \xrightarrow{w}$
+
+*Proof.* There are two kinds of transition firings in $w$: those corresponding to the firing of a transition (say $t$) still enabled from the beginning of the firing sequence, and those corresponding to a newly enabled transition (say $t'$).
+
+**Proof of (i)** Since $t$ is firable from $(M, \nu)$, $\nu(t) \in I(t) \subset I(t)^\uparrow$, so $\nu(t) + \delta(t) \ge \nu(t)$ also belongs to $I(t)^\uparrow$. Since $t \in En(M)$ and $(M, \nu + \delta)$ is admissible, $\nu(t) + \delta(t) \in I(t)^\downarrow$. Thus $\nu(t) + \delta(t) \in I(t)$ and $t$ is also firable from $(M, \nu + \delta)$. Since $t'$ is newly enabled, $0 \in I(t')$ and $t'$ is also firable when it occurs starting from $(M, \nu + \delta)$.
+
+**Proof of (ii)** The case of newly enabled transitions in $w$ is handled as before. Now let $t$ be firable in $(M, \nu + \delta)$. Since $t \in En(M)$ and $(M, \nu)$ is admissible, $\nu(t) \in I(t)^\downarrow$. Since $\nu(t)+\delta(t) \in I(t)^\uparrow$ (denoting by $eft(t)$ the minimum of $I(t)^\uparrow$), we have $eft(t) \le \nu(t)+\delta(t)$; but $eft(t)$ belongs to the $g$-grid, thus $eft(t) \le \nu(t)$, i.e. $\nu(t) \in I(t)^\uparrow$. So $t$ is firable from $(M, \nu)$. $\square$
+---PAGE_BREAK---
+
+Taking into account this lemma, one can elaborate a slightly modified version of the first condition: “A TA is bisimilar to some TPN if given any two reachable configurations $(l, v)$ and $(l, v')$ with $v' \ge v$ and an edge $e$, then $(l, v) \xrightarrow{e}$ implies $(l, v') \xrightarrow{e}$”. For instance, $\mathcal{A}_1$ does not fulfill this condition (take $(l_1, (1,0))$, $(l_1, (1,1))$ and the edge labelled by $c$ in figure 6). Unfortunately this condition is not necessary. The TA $\mathcal{A}_0$ of figure 1 does not fulfill this condition whereas there exists a TPN bisimilar to it (see figure 13 in section 5). Indeed, take $(l_1, (1,0))$, $(l_1, (1,1))$ and the edge labelled by $c$, and see its region automaton presented in figure 7 (with the same conventions as in figure 6).
+
+**Fig. 7.** The region automaton of $\mathcal{A}_0$ ($K=2, g=1$)
+
+The difference between the two automata w.r.t. the property to be checked is the existence of the reachable region ($l_1, x = 1 \land 0 < y < 1$) whose topological closure includes the configuration $(l_1, (1,0))$. In fact, the exact characterization takes into account topological considerations as stated by the following theorem which also contains effectiveness and complexity results.
+
+**Theorem 1.** Let $\mathcal{A}$ be an injectively-labelled timed automaton and $R(\mathcal{A})_{1,K}$ its region automaton with a constant $K$ strictly greater than any constant occurring in the automaton. Then $\mathcal{A}$ is weakly timed bisimilar to a time Petri net iff for each region $r$ of $R(\mathcal{A})_{1,K}$ and for each edge $e$ of $\mathcal{A}$,
+
+(a) Every region $r'$ such that $r' \cap \bar{r} \neq \emptyset$ is reachable
+
+(b) $\forall (\ell_r, v) \in r$, if $(\ell_r, v) \xrightarrow{e}$ then $(\ell_r, \min_r) \xrightarrow{e}$,
+
+(c) $\forall (\ell_r, v) \in \bar{r}$, if $(\ell_r, \min_r) \xrightarrow{e}$ then $(\ell_r, v) \xrightarrow{e}$.
+
+Furthermore, if these conditions are satisfied then we can build a 1-bounded 2-TPN bisimilar to $\mathcal{A}$ whose size is linear w.r.t. the size of $\mathcal{A}$ and a 1-bounded 1-TPN bisimilar to $\mathcal{A}$ whose size is exponential w.r.t. the size of $\mathcal{A}$.
+
+We denote by $\mathcal{T}\mathcal{A}^{wtb}$ the corresponding subclass of timed automata.
+
+Thus $\mathcal{T}\mathcal{A}^{wtb}$ is the maximal subclass of TA that are weakly timed bisimilar to a TPN. In [7], we have proposed a “syntactical” (proper) subclass of $\mathcal{T}\mathcal{A}^{wtb}$ which avoids checking this characterization.
+---PAGE_BREAK---
+
+The characterization of Theorem 1 is closely related to the topological closure
+of reachable regions: it states that any region intersecting the topological closure
+of a reachable region is also reachable and that a discrete step either from a
+region or from the minimal vector of its topological closure is possible in the
+whole topological closure. Consider again the two TA $\mathcal{A}_0$ and $\mathcal{A}_1$ in Figure 1.
+The automaton $\mathcal{A}_0$ admits a bisimilar TPN whereas $\mathcal{A}_1$ does not. Indeed, the
+region $r = (\ell_1, x = 1 \land 0 < y < 1)$ is reachable. The guard of edge $c$ is true in
+$min_r = (\ell_1, (1, 0))$ whereas it is false in $r$.
+
+The next sections are devoted to the proof of Theorem 1.
+
+# 4 Proof of necessary condition for Theorem 1
+
+## 4.1 From bisimulation to uniform bisimulation
+
+As a first step, we prove that when a g-TPN and a TA are bisimilar, this relation
+can in fact be strengthened into what we call *uniform bisimulation*. Lemma 2 is
+the central point for the proof of necessity. It shows that bisimulation implies
+uniform bisimulation for the g-grid with $K = \infty$. Roughly speaking, uniform
+bisimulation means that a unique mechanism is used for every configuration of
+the topological closure of the region to obtain a bisimilar configuration of the
+net.
+
+**Lemma 2 (From bisimulation to uniform bisimulation).** Let $\mathcal{A}$ be a timed automaton bisimilar to some g-TPN $\mathcal{N}$ via some relation $\mathcal{R}$ and let $R(\mathcal{A})_{g,\infty}$ be a region automaton of $\mathcal{A}$. Then:
+
+- if a region $r$ belongs to $R(\mathcal{A})_{g,\infty}$ then any region included in $\bar{r}$ also belongs to $R(\mathcal{A})_{g,\infty}$;
+
+- for each region $r$, there exist a configuration of the net $(M_r, \nu_r)$ with $\nu_r \in \mathbb{N}_g^{\mathrm{En}(M_r)}$ and a mapping $\phi_r : \mathrm{En}(M_r) \to [X]_r$ such that:
+
+ * If $r$ is time-closed, then for each $\delta \in \mathbb{R}_{\ge 0}^X$ such that $(\ell_r, \min_r + \delta) \in \bar{r}$, $(\ell_r, \min_r + \delta) \in \mathcal{R}(M_r, \nu_r + \mathrm{proj}_r(\delta))$,
+
+ * If $r$ is time-open, then for each $\delta \in \mathbb{R}_{\ge 0}^X$, $d \in \mathbb{R}_{\ge 0}$ such that $(\ell_r, \min_r + \delta + d) \in \bar{r}$, $(\ell_r, \min_r + \delta + d) \in \mathcal{R}(M_r, \nu_r + \mathrm{proj}_r(\delta) + d)$,
+ where $\mathrm{proj}_r(\delta)(t) = \delta(\phi_r(t))$.
+
+*Proof.* First note that the choice of a particular clock $x$ in the class $\phi_r(t)$ is irrelevant when considering the value $\delta(x)$. Thus the definition of $\mathrm{proj}_r$ is sound. The proof is an induction on the transition relation in the region automaton. The basis case is straightforward with $(\ell_0, \mathbf{0})$ and $(M_0, \mathbf{0})$. The induction part relies on lemma 1, with 4 cases according to the source or target region and to the nature of the step: 1. a time step from a time-closed region, 2. a time step from a time-open region, 3. a discrete step into a time-closed region, and 4. a discrete step into a time-open region.
+
+**1. A time step from a time-closed region** (see figure 8(1)). Let $r$ be a time-closed region in $R(\mathcal{A})_{g,\infty}$ and let us denote $r' = \mathrm{succ}(r)$ the immediate
+---PAGE_BREAK---
+
+time successor of $r$. Let $(\ell_r, \min_r + \delta_0)$ be some element of $r$, so that $(\ell_r, \min_r + \delta_0) \xrightarrow{d}$ for some $d > 0$. Thus (by induction hypothesis) in $\mathcal{N}$ there is a step sequence $(M_r, \nu_r + \text{proj}_r(\delta_0)) \xrightarrow{d_0 t_1 \cdots t_n d_n}$ with all transitions labelled by $\epsilon$ and $\sum d_k = d$. Let $d_k$ be the first non-zero elapsing of time. By application of lemma 1-(ii), the firing sequence $t_1 \dots t_k$ is fireable from $(M_r, \nu_r)$.
+
+Let us choose $(M_{r'}, \nu_{r'})$ the configuration reached by this sequence. By application of lemma 1-(i), this firing sequence is also fireable from any $(M_r, \nu_r + \text{proj}_r(\delta))$ bisimilar to $(\ell_r, \min_r + \delta) \in \bar{r}$ and it leads to $(M_{r'}, \nu_{r'} + \text{proj}_{r'}(\delta))$ (still bisimilar to $(\ell_r, \min_r + \delta)$), where $\phi_{r'}$ (resp. $\nu_{r'}$) is equal to $\phi_r$ (resp. $\nu_r$) for transitions always enabled during the firing sequence, and $\phi_{r'}$ (resp. $\nu_{r'}$) is obtained by associating the class of index 1 (resp. the value 0) with the transitions newly enabled. Since $(M_{r'}, \nu_{r'})$ lets time elapse and since $\mathcal{N}$ is a g-TPN, we note that $\forall t \in En(M_{r'})$, $\nu_{r'}(t)+1/g \in I(t)^{\downarrow}$. Now let $(\ell_r, \min_r + \delta+d) \in \bar{r}'$; one has $\forall x \in X$, $\delta(x)+d \le 1/g$. Thus $\forall t \in En(M_{r'}), \text{proj}_{r'}(\delta)(t)+d \le 1/g$, which implies $(M_{r'}, \nu_{r'} + \text{proj}_{r'}(\delta)) \xrightarrow{d} (M_{r'}, \nu_{r'} + \text{proj}_{r'}(\delta) + d)$, this last configuration being necessarily bisimilar to $(\ell_r, \min_r + \delta + d)$.
+
+**2. A time step from a time-open region** (see figure 8(2)). Let $r$ be a time-open region and let us denote $r' = succ(r)$. Note that $\bar{r}' \subset \bar{r}$. Let us define $X_r^{max}$ as the class $[x]_r$ with maximal index. We remark that $\min_{r'} = \min_r + \delta_0$ where $\delta_0(x) = 1/g$ if $x \in X_r^{max}$ and $\delta_0(x) = 0$ otherwise. We choose $(M_{r'}, \nu_{r'}) = (M_r, \nu_r + \text{proj}_r(\delta_0))$. Let $t \in En(M_r)$ and $x \in \phi_r(t)$; then $\phi_{r'}(t) = [x]_{r'}$ (letting time elapse does not split the classes). So $\text{proj}_r$ and $\text{proj}_{r'}$ are identical.
+
+Now let $(l_{r'}, \min_{r'} + \delta) \in \bar{r}'$. $(l_{r'}, \min_{r'} + \delta) = (\ell_r, \min_r + \delta_0 + \delta)$.
+
+Then let $d = \delta(x)$ for $x$ belonging to the class of index 1 in $[X_r]$. We have $(\ell_r, \min_r + \delta_0 + \delta) = (\ell_r, \min_r + \delta' + d)$ where $\delta'(x) = 1/g - d$ if $x \in X_r^{max}$ and $\delta'(x) = \delta(x) - d$ otherwise. $(\ell_r, \min_r + \delta' + d)$ is bisimilar to $(M_r, \nu_r + \text{proj}_r(\delta') + d) = (M_r, \nu_r + \text{proj}_r(\delta' + d)) = (M_r, \nu_r + \text{proj}_r(\delta_0 + \delta)) = (M_r, \nu_r + \text{proj}_r(\delta_0) + \text{proj}_r(\delta)) = (M_{r'}, \nu_{r'} + \text{proj}_{r'}(\delta))$.
+
+For this step, we have not used the characteristics of time Petri nets.
+
+**3. A discrete step into a time-closed region** (see figure 8(3)).
+
+**Case a.** We first consider the case where $r$ is a time-closed region.
+
+Let $(\ell_r, \min_r + \delta_0)$ be some element of $r$. Suppose that $(\ell_r, \min_r + \delta_0) \xrightarrow{e} (\ell', v' + \delta'_0)$ with $\forall x \in R(e), v'(x) = \delta'_0(x) = 0, \forall x \notin R(e), v'(x) = \min_r(x) \wedge \delta'_0(x) = \delta_0(x)$. Then in $\mathcal{N}$ there is an instantaneous firing sequence $(M_r, \nu_r + \text{proj}_r(\delta_0)) \xrightarrow{w}$ labelled by $e$. Due to lemma 1, this firing sequence is also fireable from any $(M_r, \nu_r + \text{proj}_r(\delta))$ bisimilar to $(\ell_r, \min_r + \delta) \in \bar{r}$. By bisimilarity, $(\ell_r, \min_r + \delta) \xrightarrow{e}$ for any $(\ell_r, \min_r + \delta) \in \bar{r}$. Let $r'$ be the region including $(\ell', v' + \delta'_0)$, then any configuration of $\bar{r}'$ is reachable by this discrete step. Note that $\ell_{r'} = l'$ and $\min_{r'} = v'$.
+
+From $(M_r, \nu_r + \text{proj}_r(\delta))$, the sequence $w$ leads to some $(M', v')$ bisimilar to $(\ell_{r'}, \min_{r'} + \delta')$. We now show how to define $M_{r'}$, $\nu_{r'}$ and $\phi_{r'}$. First $M_{r'} = M'$. Second, $\nu_{r'}(t) = \nu_r(t)$ for transitions $t$ always enabled during the firing sequence and $\nu_{r'} = 0$ otherwise. At last, $\phi_{r'}$ is obtained from $\phi_r$.
+---PAGE_BREAK---
+
+Fig. 8. The different cases of the proof
+
+as follows. Let $t$ be a transition newly enabled during the firing sequence; then $\phi_{r'}(t)$ is the class of index 1. Let $t$ be a transition always enabled during the firing sequence. There are two cases to consider for $\phi_{r'}(t)$: if there is an $x \in \phi_r(t)$ which is not reset, then $\phi_{r'}(t) = [x]_{r'}$; otherwise $\phi_{r'}(t)$ is the class of maximal index which precedes $\phi_r(t)$ and contains a clock not reset, or else the class of index 1. The two last assignments are sound since, whatever the position of the fractional value of the implicit clock associated with $t$ before the firing of $w$, between the fractional value of the clocks of the chosen class and the fractional value of a clock of $\phi_r(t)$, the firing sequence $w$ leads to bisimilar configurations (as they are bisimilar to the same configuration of the automaton).
+
+**Case b.** The case where $r$ is a time-open region is handled in a similar way. Let $(\ell_r, \min_r + \delta_0 + d_0)$ be some element of $r$. Suppose that $(\ell_r, \min_r + \delta_0 + d_0) \xrightarrow{e} (\ell', v' + \delta'_0)$ with $\forall x \in R(e), v'(x) = \delta'_0(x) = 0$ and $\forall x \notin R(e), v'(x) = \min_r(x) \wedge \delta'_0(x) = \delta_0(x) + d_0$. Then in $\mathcal{N}$ there is an instantaneous firing sequence $(M_r, \nu_r + \text{proj}_r(\delta_0) + d_0) \xrightarrow{w}$ labelled by $e$. Due to lemma 1, this firing sequence is also fireable from any $(M_r, \nu_r + \text{proj}_r(\delta) + d)$ bisimilar to $(\ell_r, \min_r + \delta + d) \in \bar{r}$. By bisimilarity, $(\ell_r, \min_r + \delta + d) \xrightarrow{e}$ for any $(\ell_r, \min_r + \delta + d) \in \bar{r}$. Let $r'$ be the region including $(\ell', v' + \delta'_0)$; then any configuration of $\bar{r}'$ is reachable by this discrete step. Note that $\ell_{r'} = \ell'$ and $\min_{r'} = v'$.
+
+From $(M_r, \nu_r + \text{proj}_r(\delta) + d)$, the sequence $w$ leads to some $(M', v')$ bisimilar to $(\ell_{r'}, \min_{r'} + \delta')$. We now show how to define $M_{r'}$, $\nu_{r'}$ and $\phi_{r'}$. First $M_{r'} = M'$. Second, $\nu_{r'}(t) = \nu_r(t)$ for transitions $t$ always enabled during the firing sequence and $\nu_{r'}(t) = 0$ otherwise. At last, $\phi_{r'}$ is obtained from $\phi_r$ as follows. Let $t$ be a transition newly enabled during the firing sequence; then $\phi_{r'}(t)$ is the class of index 1. Let $t$ be a transition always enabled during the firing sequence. There are two cases to consider
+---PAGE_BREAK---
+
+for $\phi_{r'}(t)$: if there is an $x \in \phi_r(t)$ which is not reset, then $\phi_{r'}(t) = [x]_{r'}$; otherwise $\phi_{r'}(t)$ is the class of maximal index which precedes $\phi_r(t)$ and contains a clock not reset, or else the class of index 1. The two last assignments are sound since, whatever the position of the fractional value of the implicit clock associated with $t$ before the firing of $w$, between the fractional value of the clocks of the chosen class and the fractional value of a clock of $\phi_r(t)$, the firing sequence $w$ leads to bisimilar configurations (as they are bisimilar to the same configuration of the automaton).
+
+**4. A discrete step into a time-open region** (see figure 8(4)). In order to reach a time-open region by a discrete step, the corresponding transition must start from a time-open region and must not reset any clock. Let $(\ell_r, min_r + \delta + d) \in r$ and $(\ell_r, min_r + \delta + d) \xrightarrow{e} (\ell', min_r + \delta + d)$; here we have used the hypothesis that no clock is reset. Then there is a firing sequence $(M_r, \nu_r + \text{proj}_r(\delta) + d) \xrightarrow{w}$ labelled by $e$. Due to lemma 1, $(M_r, \nu_r + \text{proj}_r(\delta)) \xrightarrow{w}$, and $(\ell_r, min_r + \delta)$ is bisimilar to $(M_r, \nu_r + \text{proj}_r(\delta))$. Thus $(\ell_r, min_r + \delta) \xrightarrow{e} (\ell', min_r + \delta) \xrightarrow{d} (\ell', min_r + \delta + d)$. Hence this region can be reached via a discrete step into a time-closed region followed by a time step, so we do not need to examine this case. $\square$
+
+## 4.2 Proof of Necessity
+
+The fact that conditions (a), (b) and (c) of Theorem 1 hold for $R(\mathcal{A})_{g,\infty}$ is now straightforward:
+
+(a) This assertion is included in the inductive assertions.
+
+(b) Let $r$ be a region and let $(\ell_r, min_r + \delta) \in r$ be a configuration with $\delta \in [0, 1/g]^X$. Then there exist $(M, v)$ with $v \in \mathbb{N}_g^{En(M)}$ bisimilar to $(\ell_r, min_r)$, and $(M, v + \delta')$ with $\delta' \in [0, 1/g]^{En(M)}$ bisimilar to $(\ell_r, min_r + \delta)$. Suppose that $(\ell_r, min_r + \delta) \xrightarrow{e}$; then $(M, v + \delta') \xrightarrow{w}$ with $w$ an instantaneous firing sequence and label($w$) = $e$. Now by lemma 1-(ii), $(M, v) \xrightarrow{w}$, thus $(\ell_r, min_r) \xrightarrow{e}$.
+
+(c) Let $r$ be a region and $(\ell_r, min_r + \delta) \in \bar{r}$ with $\delta \in [0, 1/g]^X$. Then there exist $(M, v)$ bisimilar to $(\ell_r, min_r)$, and $(M, v + \delta')$ with $\delta' \in [0, 1/g]^{En(M)}$ bisimilar to $(\ell_r, min_r + \delta)$. Suppose that $(\ell_r, min_r) \xrightarrow{e}$; then $(M, v) \xrightarrow{w}$ with $w$ an instantaneous firing sequence and label($w$) = $e$. By lemma 1-(i), we have $(M, v + \delta') \xrightarrow{w}$, thus $(\ell_r, min_r + \delta) \xrightarrow{e}$.
+
+In order to complete the proof, we successively show that if the conditions are satisfied in $R(\mathcal{A})_{g,\infty}$ for some $g$, then they also hold for $R(\mathcal{A})_{1,\infty}$ (lemma 3), and finally that they are satisfied in the standard region automaton $R(\mathcal{A})_{1,K}$ for a sufficiently large finite constant $K$ (lemma 4). Recall (section 2.2) that any atomic constraint related to a clock $x$ occurring in the invariant of a location is added to the guard of each incoming transition which does not reset $x$.
+
+**Lemma 3 (about the conditions and the grid).** Let $\mathcal{A}$ be a timed automaton and $g > 0$ in $\mathbb{N}$. If conditions (a),(b),(c) are satisfied in the region automaton $R(\mathcal{A})_{g,\infty}$, then they are satisfied in $R(\mathcal{A})_{1,\infty}$.
+
+*Proof.* From the definition of regions, a region $r$ of $R(\mathcal{A})_{1,\infty}$ is a finite union of regions of $R(\mathcal{A})_{g,\infty}$ (say $r = \bigcup_{i=1..k} r_i$). Thus $\bar{r} = \bigcup_{i=1..k} \bar{r}_i$, which proves the implication for (a).
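For instance, with a single clock $x$ and $g = 2$ (grid step $1/2$), the open unit region of $R(\mathcal{A})_{1,\infty}$ decomposes as

$$
\{0 < x < 1\} \;=\; \{0 < x < \tfrac{1}{2}\} \;\cup\; \{x = \tfrac{1}{2}\} \;\cup\; \{\tfrac{1}{2} < x < 1\},
$$

and taking closures on both sides yields $[0, 1]$ on the left and $[0, \tfrac{1}{2}] \cup \{\tfrac{1}{2}\} \cup [\tfrac{1}{2}, 1]$ on the right, which agree.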
+
+Assume that (b) is satisfied by $R(\mathcal{A})_{g,\infty}$. Let $(\ell_r, \min_r + \delta + d) \in r$ where $r$ is a region of $R(\mathcal{A})_{1,\infty}$, and assume $(\ell_r, \min_r + \delta + d) \xrightarrow{e}$. We define $\delta'$ by $\delta'(x) = \delta(x)/g$. Then, since $\mathcal{A}$ has integer constraints, $(\ell_r, \min_r + \delta' + d/g) \xrightarrow{e}$. Moreover this configuration belongs to $r$, hence to a region $r'$ of $R(\mathcal{A})_{g,\infty}$ whose minimal vector is $\min_r$. Then applying (b), we obtain $(\ell_r, \min_r) \xrightarrow{e}$.
+
+Assume that (c) is satisfied by $R(\mathcal{A})_{g,\infty}$. Let $(\ell_r, v) \in \bar{r}$ where $r$ is a region of $R(\mathcal{A})_{1,\infty}$ and assume $(\ell_r, \min_r) \xrightarrow{e}$. Then there is an increasing path among the minimum vectors of regions of $R(\mathcal{A})_{g,\infty}$ all included in $\bar{r}$. This path is such that any two consecutive elements belong to the closure of some region; it starts at $(\ell_r, \min_r)$ and finishes at $(\ell_r, \min_{r_*})$ such that $(\ell_r, v) \in \bar{r_*}$ (with $r_*$ a region of $R(\mathcal{A})_{g,\infty}$). Thus applying iteratively (c) yields $(\ell_r, v) \xrightarrow{e}$. $\square$
+
+**Lemma 4 (about the conditions and the constant K).** Let $\mathcal{A}$ be a timed automaton. If conditions (a),(b),(c) are satisfied in $R(\mathcal{A})_{1,\infty}$, then they hold in the region automaton $R(\mathcal{A})_{1,K}$, for some finite constant $K$.
+
+*Proof.* Let $r$ be a region in $R(\mathcal{A})_{1,K}$ where $K$ is greater than the maximal constant in $\mathcal{A}$, and let reach($r$) be the associated region of $R(\mathcal{A})_{1,\infty}$. Note that $\ell_{reach(r)} = \ell_r$, that $\forall x \in ActX_r$, $\min_{reach(r)}(x) = \min_r(x)$, and that $\forall x \in X$, $\min_{reach(r)}(x) \geq \min_r(x)$. Suppose that reach($r$) is time-closed (resp. time-open); then $r$ admits a time-closed (resp. time-open) description where the ord$_r$ and ord$_{reach(r)}$ mappings are identical for clocks in $ActX_r$. Thus $\forall (\ell_r, v) \in r, \exists (\ell_r, v') \in reach(r)$ such that $\forall x \in ActX_r, v'(x) = v(x)$.
+
+Now take a convergent sequence $\lim_{i \to \infty} (\ell_r, v_i) = (\ell_r, v)$ with $(\ell_r, v_i) \in r$, so that $(\ell_r, v) \in \bar{r}$. Then the corresponding sequence $\{(\ell_r, v'_i)\}$, being bounded, admits an accumulation point $(\ell_r, v') \in \overline{reach(r)}$. It is routine to show that $(\ell_r, v)$ and $(\ell_r, v')$ belong to the same region in $R(\mathcal{A})_{1,K}$. This proves that condition (a) for $R(\mathcal{A})_{1,\infty}$ implies condition (a) for $R(\mathcal{A})_{1,K}$.
+
+Assume that (b) is satisfied by $R(\mathcal{A})_{1,\infty}$. Let $(\ell_r, v) \in r$ where $r$ is a reachable region of $R(\mathcal{A})_{1,K}$, and assume $(\ell_r, v) \xrightarrow{e}$. Let reach($r$) be the associated reachable region (as explained in section 2.5) of $R(\mathcal{A})_{1,\infty}$; then $\exists (\ell_r, v') \in reach(r)$ strongly time bisimilar to $(\ell_r, v)$, thus $(\ell_r, v') \xrightarrow{e}$. Using condition (b), $(\ell_r, \min_{reach(r)}) \xrightarrow{e}$. Since $(\ell_r, \min_{reach(r)})$ is strongly time bisimilar to $(\ell_r, \min_r)$, we have $(\ell_r, \min_r) \xrightarrow{e}$.
+
+Assume that (c) is satisfied by $R(\mathcal{A})_{1,\infty}$ and consider $(\ell_r, v) \in \bar{r}$ where $r$ is a region of $R(\mathcal{A})_{1,K}$ and $(\ell_r, \min_r) \xrightarrow{e}$. Again let reach($r$) be the associated reachable region of $R(\mathcal{A})_{1,\infty}$; then $\exists (\ell_r, v') \in reach(r)$ strongly time bisimilar to $(\ell_r, v)$. Since $(\ell_r, \min_{reach(r)})$ is strongly time bisimilar to $(\ell_r, \min_r)$, $(\ell_r, \min_{reach(r)}) \xrightarrow{e}$. Thus using condition (c), $(\ell_r, v') \xrightarrow{e}$. By bisimilarity, we obtain $(\ell_r, v) \xrightarrow{e}$. $\square$
+
+# 5 Sufficient condition: first construction
+
+Starting from a timed automaton $\mathcal{A}$ satisfying the conditions of Theorem 1, we build a 2-TPN bisimilar to $\mathcal{A}$. We describe the construction and give the proof of correctness.
+
+For the figures corresponding to all constructions, all edges are weighted by 1. Omitted labels for transitions stand for $\varepsilon$. A firing interval $[0, 0]$ is indicated by a blackened transition and intervals $[0, \infty[$ are omitted. A double arrow between a place $p$ and a transition $t$ indicates that $p$ is both an input and an output place for $t$.
+
+## 5.1 Construction
+
+First remark that any constraint of the form $x < c$ occurring in an invariant of $\mathcal{A}$ may be safely omitted. Indeed, if such a constraint forbade the progress of time in some configuration, the associated region would be a maximal time-open region $r$. Due to condition (a), $\bar{r}$ is reachable, but since $r$ is time-open, $\bar{r} \cap \text{succ}(r) \neq \emptyset$, so that $\text{succ}(r)$ is reachable, which contradicts the maximality of $r$.
+
+**Fig. 9.** Subnets for $x < h$ (with $h > 0$) and $x \le h$ in guards
+
+**Clock constraints.** The atomic constraints associated with a clock $x$ are arbitrarily numbered from 1 to $n(x)$, where $n(x)$ is the number of such constraints. When $x \le h$ occurs both in at least one transition and in at least one invariant, we consider it as two different constraints. Then we add places $Rinit$ and $(Rnext_i^x)_{i \le n(x)+1}$ for the reset operations. We build a subnet for each atomic constraint $x \bowtie h$ occurring in a transition of the TA, and one for each constraint $x \le h$ occurring in an invariant. Figure 9 shows the subnets corresponding to $x < h$ (with $h > 0$) on the left and $x \le h$ on the right, while Figure 10 shows the subnets for $x > h$ on the left and $x \ge h$ (with $h > 0$) on the right, in the case where the constraint has number $i$. Figure 11 shows the subnet for invariant $x \le h$.
+
+Since constant $\frac{1}{2}$ appears in interval bounds, the resulting TPN is a 2-TPN.
+
+Fig. 10. Subnets for $x > h$ and $x \ge h$ (with $h > 0$) in guards
+
+Fig. 11. Subnet for $x \le h$ in an invariant
+
+**Locations and edges.** With each location $\ell$ of the automaton, we associate an eponymous place $\ell$. The place $\ell$ is initially marked iff the location $\ell$ is the initial one. The invariant $Inv(\ell)$ is tested with the subnets corresponding to its atomic constraints:
+
+- if condition $x \le h$ (with $h > 0$) occurs in the invariant, then we add a transition $stop_{\ell}^{x \le h}$, with both $\ell$ and $TReach_{x \le h}$ as input and output places, and interval $[0, 0]$,
+
+- if condition $x \le 0$ occurs in the invariant, we simply add a transition $stop_{\ell}$ with $\ell$ as only input and output place, and also interval $[0, 0]$.
+
+To simulate an edge $e = (\ell, \gamma, a, R, \ell')$, we must test the atomic constraints from $\gamma = \gamma_1 \wedge \dots \wedge \gamma_{m(e)}$, using the places corresponding to true in the associated subnets, and reset successively all the clocks in $R = \{x_1, \dots, x_{k(e)}\}$ by instantaneous transitions. This is done by the subnet in Figure 12, which must be connected to some subnets like those of Figure 9, 10 or 11.
+
+**Fig. 12.** The subnet for edge $e = (\ell, \gamma = \gamma_1 \wedge \dots \wedge \gamma_{m(e)}, a, R = \{x_1, \dots, x_{k(e)}\}, \ell')$
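The token game of this edge subnet can be sketched in a few lines (a rough illustration only; the place names such as `Rinit_x` are ours, and all timing aspects are ignored):

```python
# A minimal sketch of the edge subnet of figure 12: firing e tests the places
# encoding gamma_1 .. gamma_m, removes the token from l, starts the successive
# instantaneous resets of the clocks of R, and finally marks l'.
def fire_edge(marking, src, dst, guard_places, reset_clocks):
    # guard test: every atomic constraint must currently be "true"
    if marking.get(src, 0) == 0 or any(marking.get(p, 0) == 0 for p in guard_places):
        return None                      # fire_e is not enabled
    m = dict(marking)
    m[src] = 0
    for x in reset_clocks:               # successive instantaneous resets
        m[f"Rinit_{x}"] = 1              # hand a token to the reset subnet of x
    m[dst] = 1
    return m

m0 = {"l0": 1, "T_y<=0": 1}
m1 = fire_edge(m0, "l0", "l1", ["T_y<=0"], ["y"])
assert m1["l1"] == 1 and m1["l0"] == 0 and m1["Rinit_y"] == 1
assert fire_edge({"l0": 1}, "l0", "l1", ["T_y<=0"], []) is None
```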
+
+This construction is illustrated in figure 13 for the timed automaton $\mathcal{A}_0$ from figure 1, with some simplifications related to this particular timed automaton. More precisely, places $FReach_{x \le 1}$ and $F_{x \ge 1}$ (resp. $TReach_{x \le 1}$ and $T_{x \ge 1}$) have been merged. Moreover, since $x$ is never reset in $\mathcal{A}_0$, the corresponding parts in the TPN are omitted.
+
+First note that the subnet associated with the constraint $y \le 0$ switches the condition to false (marking $F_{y \le 0}$) when the implicit value of $y$ maintained in the net reaches $1/2$. This translation thus seems less constrained than the original condition, so we explain how we prove that it is nevertheless sound. Let $r$ be the region corresponding to the current configuration $(\ell, v)$ of the automaton simulated by the net. If the net is able to simulate a discrete step of the automaton, we prove that in the configuration $(\ell, min_r)$ of the automaton, this step is also possible. Thus by condition (c), the step is also possible from $(\ell, v)$. Conversely, if a discrete step is possible for $(\ell, v)$ in the automaton, we show that this step can also be simulated in the net, using both conditions (b) and (c) and the following fact: $\forall x \in X, \exists(\ell_r, v'), (\ell_r, v'') \in \bar{r}$ such that $v'(x) = \lfloor v(x) \rfloor$ and $v''(x) = \lceil v(x) \rceil$. The subnet associated with the atomic constraint $x \le 1$ occurring in the invariant of $\ell_0$ leads to transition $inv_{\ell_0}$ (not modifying the marking) which is fireable as soon as the simulated value of $x$ reaches 1 and the place $\ell_0$ is marked. Thus time cannot progress unless the location is left.
+
+Fig. 13. A 2-TPN bisimilar to $\mathcal{A}_0$
+
+## 5.2 Correctness proof
+
+We decompose the reachable configurations (and markings) into intermediate ones (some $W_e^i$ is marked) and permanent ones (some $\ell$ is marked). An easy induction shows that in permanent configurations $(M, \nu)$ the enabled timed transitions relative to a clock are “synchronized”: $\nu(\text{change}_c) = \nu(\text{change}_{c'}) = \nu(\text{reach}_{c''})$ as soon as $c, c', c''$ relate to the same clock $x$. We define $\nu(x)$ as this common value if at least one such transition is enabled, and otherwise $\nu(x) = K(x)$, where $K(x)$ is the maximal value relative to clock $x$ occurring in the net $\mathcal{N}$. Furthermore, from any intermediate configuration $(M, \nu)$, the behaviour of the net is quasi-deterministic until it reaches a permanent configuration: there are only instantaneous firing sequences (i.e. no time step) and the finite maximal ones lead to permanent configurations. These permanent configurations (say $(M_{next}, \nu_{next})$) have the same marked place $\ell$ and the same values $\nu_{next}(x)$. They may only differ depending on whether some transitions related to a condition switch have been fired.
+
+It is also obvious that once some fire$_e$ is fired, the construction ensures the existence of a “resetting” sequence which reinitializes the subnets associated to the clocks to be reset.
+
+*Bisimulation relation.* We now define the relation $\mathcal{R}$ between reachable configurations of the automaton $\mathcal{A}$ and the net $\mathcal{N}$. Let us define $(\ell, v)\,\mathcal{R}\,(M, \nu)$ iff:
+
+- either $M$ is a permanent marking, the place $\ell$ is marked, and if $\nu(x) < K(x)$ then $v(x) = \nu(x)$, else $v(x) \ge K(x)$;
+
+- or $M$ is an intermediate marking leading to some permanent $(M_{next}, \nu_{next})$ and $(\ell, v)\,\mathcal{R}\,(M_{next}, \nu_{next})$. This definition is sound due to the common features of the different $(M_{next}, \nu_{next})$.
+
+It remains to prove that $\mathcal{R}$ is a bisimulation, which is done in the next lemma.
+
+**Lemma 5.** The relation $\mathcal{R}$ defined above is a weak timed bisimulation.
+
+*Proof.* We first consider moves from $\mathcal{A}$.
+
+**Case 1:** $(\ell, v) \xrightarrow{e} (\ell', v')$.
+
+We prove that $(M, \nu) \xrightarrow{\sigma}$ with $\sigma$ labelled by $e$. First, $\sigma$ begins with $\sigma'$, which consists in firing all the fireable transitions change$_c$, leading to some $(M', \nu')$ (with $(\ell, v)\,\mathcal{R}\,(M', \nu')$). We now prove that $(M', \nu') \xrightarrow{fire_e}$.
+
+By definition of $\mathcal{R}$, the place $\ell$ is marked. Let $c$ be a condition occurring in the guard of $e$.
+
+If $c = [x \ge a]$ then $v(x) \ge a$ which implies $\nu(x) \ge a$ and that $T_{x \ge a}$ is marked (possibly with the help of $\sigma'$).
+
+If $c = [x > a]$ then let $r$ be the region to which $(\ell, v)$ belongs; $min_r(x) = \lfloor v(x) \rfloor$. Using condition (b), $(\ell, min_r) \xrightarrow{e}$, so that $min_r(x) > a$ and hence $min_r(x) \ge a + 1$. Thus $v(x) \ge min_r(x) \ge a + 1$, which implies $\nu(x) \ge a + 1$ and that $T_{x>a}$ is marked (possibly with the help of $\sigma'$).
+
+If $c = [x \le a]$ then $v(x) \le a$ which implies $\nu(x) \le a$ and that $T_{x \le a}$ is marked (remember that change$_{x \le a}$ fires when $\nu(x) = a + 1/2$).
+
+If $c = [x < a]$ then let $r$ be the region to which $(\ell, v)$ belongs. Then there exists $(\ell, v_1) \in \bar{r}$ with $v_1(x) = \lceil v(x) \rceil$. Using condition (b) and then (c), $(\ell, v_1) \xrightarrow{e}$. Thus $v(x) \le v_1(x) \le a - 1$, which implies $\nu(x) \le a - 1$ and that $T_{x<a}$ is marked (possibly with the help of $\sigma'$).
+
+**Case 2:** $(M, \nu) \xrightarrow{fire_e}$ ($M$ is then a permanent marking). Let $r$ be the region to which $(\ell, v)$ belongs. We will show that $(\ell, min_r) \xrightarrow{e}$. Then by condition (c), we will obtain that $(\ell, v) \xrightarrow{e}$.
+
+Let c be a condition occuring in the guard of e.
+
+If $c = [x \ge a]$ then $T_{x \ge a}$ is marked, which implies that $\nu(x) \ge a$ and then $v(x) \ge a$, thus $min_r(x) = \lfloor v(x) \rfloor \ge a$.
+
+If $c = [x > a]$ then $T_{x>a}$ is marked, which implies that $\nu(x) \ge a + 1$ and then $v(x) \ge a + 1$, thus $min_r(x) = \lfloor v(x) \rfloor \ge a + 1 > a$.
+
+If $c = [x \le a]$ then $T_{x \le a}$ is marked, which implies that $\nu(x) \le a + 1/2$ and then $v(x) \le a + 1/2$, thus $min_r(x) = \lfloor v(x) \rfloor \le a$.
+
+If $c = [x < a]$ then $T_{x<a}$ is marked, which implies that $\nu(x) \le a - 1/2$ and then $v(x) \le a - 1/2$, thus $min_r(x) = \lfloor v(x) \rfloor \le a - 1 < a$.
+
+**Case 3:** $(M, \nu) \xrightarrow{d} (M, \nu + d)$ ($M$ is then a permanent marking with some place $\ell$ marked). Since time may elapse, no constraint $x \le 0$ occurs in $Inv(\ell)$: otherwise $stop_\ell$ must be fired and time may not elapse. Similarly, since $stop_\ell^{x \le a}$ is only possibly fireable from $(M, \nu + d)$, it follows that $\nu(x) + d \le a$, thus $v(x) + d \le a$.
+
+Consequently $(\ell, v) \xrightarrow{d} (\ell, v+d)$ and obviously $(\ell, v+d)\,\mathcal{R}\,(M, \nu+d)$. $\square$
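The arithmetic transfer facts used throughout this case analysis (for an integer bound $a$; the strict cases additionally invoke conditions (b) and (c) as above) can be checked mechanically, e.g. in this small Python sketch:

```python
import math

# For an integer bound a, guard satisfaction at a valuation v transfers to the
# grid points floor(v) and ceil(v); these are the facts used in Lemma 5.
def transfer_facts(v, a):
    if v >= a:
        assert math.floor(v) >= a        # case x >= a: holds at the floor
    if v <= a:
        assert math.ceil(v) <= a         # case x <= a: holds at the ceiling
    if v > a:
        assert math.ceil(v) >= a + 1     # case x > a: the ceiling exceeds a
    if v < a:
        assert math.floor(v) <= a - 1    # case x < a: the floor stays below a

for a in range(4):
    for v in (0.0, 0.3, 0.5, 1.0, 1.7, 2.0, 2.5, 3.0):
        transfer_facts(v, a)
```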
+
+# 6 Sufficient condition: second construction
+
+When the conditions on the injectively-labelled timed automaton $\mathcal{A}$ are satisfied,
+we now build a 1-TPN $\mathcal{N}$ which is weakly timed bisimilar to $\mathcal{A}$.
+
+## 6.1 Construction
+
+The construction of the TPN contains a partial replication of both the region automaton of $\mathcal{A}$ and its class automaton. Recall that we consider $K = m + 1$, where $m$ is the maximal constant for $\mathcal{A}$. There is first a subnet for each clock $x$, in which only the integral parts of $x$ appear in the places (but with a fractional part that can reach 1).
+
+Fig. 14. Subnet for clock $x$
+
+Then we add one place $C$ for each class $C = (\ell, Z)$ of the class automaton, with the initial class marked. Now let $e = (\ell, g, a, R, \ell')$ be a transition of $\mathcal{A}$. For each pair $(v, v')$ of clock valuations in $\mathbb{N}^X$, with $v, v' \le \vec{K}$, we build a subnet (see figure 15) which simulates the transition $(\ell, v) \xrightarrow{e} (\ell', v')$, where $v'(x) = 0$ if $x \in R$ and $v'(x) = v(x)$ otherwise. Let $C_1 = (\ell, Z_1), \dots, C_k = (\ell, Z_k)$ be the subset of classes such that, for $1 \le i \le k$, $\exists v'' \in Z_i$ with $\forall x \in X, (v''(x) = v(x)) \vee (v''(x) \ge K \wedge v(x) = K)$. Otherwise stated, there is a configuration in every $Z_i$ which is strongly time bisimilar to $v$. Let $C'_1, \dots, C'_k$ be the classes obtained by applying transition $e$ to $C_1, \dots, C_k$ respectively. We have a transition with label $e$ for each $C_i$ (with $k=2$ in figure 15), all with interval $[0, +\infty[$. Note that all reset operations for clocks in $R$ are executed successively with instantaneous transitions. Moreover, the upper part of the net ensures that the invariant conditions of location $\ell$ are satisfied (this part has been omitted for $\ell'$ in the figure).
+
+**Fig. 15.** Simulation of a transition
+
+For instance, for the automaton $\mathcal{A}_0$ from Figure 1, we have four classes: $C_0 = \{l_0, 0 \le x = y \le 1\}$, $C_1 = \{l_1, 0 \le x = y \le 1\}$, $C_2 = \{l_1, x = 1 \wedge y = 0\}$ and $C_3 = \{l_2, 0 \le y = x - 1\}$. The subnet in Figure 16 corresponds to transition $c$ at point $(l_1, (1, 0))$ and class $C_2$.
+
+Consider the following run in $\mathcal{A}_0$: $(l_0, (0, 0)) \xrightarrow{a} (l_1, (0, 0)) \xrightarrow{1} (l_1, (1, 1))$. The simulation of this run by $\mathcal{N}$ may lead to the following configuration: $l_1$, $h_0^x$, $h_0^y$ and $C_1$ are marked and $t_0^x$ and $t_0^y$ have been enabled for 1 t.u. Suppose that the transition $t_0^x$ is fired, marking the place $h_1^x$; then without the input place $C_2$ the transition labelled $c$ could be erroneously fired. Since $C_2$ is unmarked, this firing is disabled.
+
+Fig. 16. Subnet of transition $c$
+
+## 6.2 Correctness proof
+
+As in the previous proof, we say that a configuration (and the corresponding marking) $(M, \nu)$ of the TPN is permanent if $M(\ell) = 1$ for some $\ell$. Otherwise, it is an intermediate configuration (and marking), where $M(reset_e^x) = 1$ for some $x$ and $e$ (exactly one of each), meaning that some reset operations are in progress. Here again, a permanent configuration is reached instantaneously from such an intermediate configuration, with only firing sequences completing the reset operations for transition $e$ (possibly interleaved with firings of some transitions $t_c^x$).
+
+Furthermore, for a configuration $(M, \nu)$, there is exactly one nonempty place $h_c^x$ for each clock $x$. Writing $c_x$ for the constant such that $M(h_{c_x}^x) = 1$, we have either $c_x = K$ or $0 \le \nu(t_{c_x}^x) \le 1$, where $\nu(t_c^x)$ is the time elapsed since the arrival of the token in the place $h_{c_x}^x$. This means that the value of clock $x$ is either $v(x) \ge K$ or $v(x) = c_x + \nu(t_{c_x}^x)$, with $\lfloor v(x) \rfloor$ equal to either $c_x$ or $c_x + 1$. In the latter case, transition $t_{c_x}^x$ can be fired instantaneously, leading to the configuration $(M', \nu')$ with one token in place $h_{c_x+1}^x$ and either $c_x + 1 = K$ or $\nu'(t_{c_x+1}^x) = 0$. We can thus reach a configuration whose associated vector $c = (c_x)_{x \in X}$ is maximal, i.e. for every $x$, either $c_x = K$ or $\nu(t_{c_x}^x) < 1$.
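The token dynamics of the per-clock subnet can be sketched as follows (an illustrative model only; we assume a firing interval $[1, 1]$ on the transitions $t_c^x$, which matches the constraint $0 \le \nu(t_{c_x}^x) \le 1$ above, and the class name is ours):

```python
# Sketch of the per-clock subnet of figure 14: a token in place h_c, aged by
# "age" time units, encodes the clock value c + age; t_c fires exactly at age 1,
# moving the token to h_{c+1}, up to the ceiling constant K.
class ClockSubnet:
    def __init__(self, K):
        self.K = K      # ceiling constant K = m + 1
        self.c = 0      # index of the marked place h_c
        self.age = 0.0  # nu(t_c): time since the token reached h_c

    def value(self):
        # simulated clock value; once h_K is marked only "x >= K" is recorded
        return self.K if self.c == self.K else self.c + self.age

    def elapse(self, d):
        if self.c < self.K:
            assert self.age + d <= 1, "t_c must fire before its age exceeds 1"
            self.age += d

    def fire(self):
        # instantaneous firing of t_c, possible only at age exactly 1
        assert self.c < self.K and self.age == 1
        self.c, self.age = self.c + 1, 0.0

x = ClockSubnet(K=2)
x.elapse(0.5); assert x.value() == 0.5
x.elapse(0.5); assert x.value() == 1.0    # the integral part can now advance
x.fire();      assert (x.c, x.value()) == (1, 1.0)
x.elapse(1.0); x.fire()
assert x.c == x.K and x.value() == 2      # from now on the net only knows x >= K
```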
+
+*Bisimulation relation.* The relation $\mathcal{R}$ is defined as the set of pairs $((M, \nu), (\ell, v))$ such that:
+
+- either $(M, \nu)$ is a permanent configuration with $M(\ell) = 1$, the relation between $v$ and $\nu$ is the one described above, and there exists exactly one class $C = (\ell, Z)$ such that $M(C) = 1$ and $v \in Z$;
+
+- or $(M, \nu)$ is an intermediate configuration leading to some permanent configuration $(M', \nu')$ such that $((M', \nu'), (\ell, v)) \in \mathcal{R}$.
+
+We achieve the proof with an auxiliary lemma and the fact that $\mathcal{R}$ is a weak timed bisimulation.
+
+The following lemma, which relates regions and classes, shows how the class automaton will be used to control the firing of a transition when the minimal point $c$ is not in the same region as $v$.
+
+**Lemma 6.** Let $\mathcal{A}$ be an automaton satisfying the conditions of theorem 1, let $C = (\ell, Z)$ be a class of the class automaton and $(\ell, v) \in C$. Let $(\ell, v) \in r$ where $r$ is a region w.r.t. the choice $K = \infty$ (which means that there is an infinite number of regions). Then $\forall(\ell, v') \in \bar{r}, (\ell, v') \in C$. In particular, $(\ell, \lfloor v \rfloor) \in C$ (note that $\lfloor v \rfloor = \min_r$).
+
+*Proof.* The proof is by induction on the reachability relation between regions. The case of a discrete step follows from conditions (b) and (c) of theorem 1. The case of a time step follows from the choice $K = \infty$, which implies that, given a region $r$, every element of $\textit{succ}(r)$ is reached by a time step from an element of $\bar{r}$. Recall that classes are closed under time elapsing. $\square$
+
+**Lemma 7.** *The relation $\mathcal{R}$ defined above is a weak timed bisimulation.*
+
+*Proof.* Assume that $(M, \nu)\mathcal{R}(\ell, v)$ and consider a move in $\mathcal{A}$.
+
+**Case 1:** $(\ell, v) \xrightarrow{d} (\ell, v+d)$ (with $d \neq 0$). Let us write $v' = v+d$. In this case, we must consider different subcases, according to the regions that can be reached by elapsing time. We consider only moves in which at most one new region is reached; the general case is a combination of such elementary moves. First note that since $v'$ can be reached, no transition related to an invariant condition in $\mathcal{N}$ is enabled before $d$. Moreover, if $(M, \nu)$ is an intermediate configuration, we first apply the sequence described above and reach the equivalent configuration $(M_1, \nu_1)$. Also in this case, since classes are unchanged by elapsing time, if we prove that a delay move is possible from $(M_1, \nu_1)$, we immediately obtain that the class is the same in the resulting configuration. Thus, the resulting configuration will be equivalent to $(\ell, v+d)$.
+
+• If $v$ belongs to a time-open region, the case where $v'$ belongs to the same time-open region is easy, it simply corresponds to a delay transition from $(M_1, \nu_1)$ in $\mathcal{N}$, each clock being in some $h_c^x$ and staying inside (no token move), with $(M_1, \nu_1 + d)$ equivalent to $(\ell, v+d)$.
+If $v'$ has reached an integer value, we consider a clock $x$ with greatest integral part, so that $v'(x) = \lfloor v(x) \rfloor + 1 = v(x) + d$ with $v(y) + d \le \lfloor v(y) \rfloor + 1$ for every other clock. In this case also, we obtain a delay move in $\mathcal{N}$ from $(M_1, \nu_1)$.
+
+• If there are some clocks $x$ for which $v(x)$ has an integer value, then elapsing time leads to the successor region, which is time-open. From $(M_1, \nu_1)$, it is possible to reach with instantaneous transitions a configuration $(M_2, \nu_2)$ where for all clocks with integer values, $M_2(h_c^x) = 1$ with its associated $c$ maximal, and $(M_2, \nu_2)$ still equivalent to $(\ell, v)$. Now from $(M_2, \nu_2)$, a delay move can be applied so that $(M, \nu) \xrightarrow{*} (M_1, \nu_1) \xrightarrow{*} (M_2, \nu_2) \xrightarrow{d} (M_2, \nu_2 + d)$, with $(M_2, \nu_2 + d)\mathcal{R}(\ell, v+d)$.
+
+**Case 2:** If $(\ell, v) \xrightarrow{e} (\ell', v')$ for some $e = (\ell, g, a, R, \ell')$ then condition (b) implies that a transition $(\ell, \lfloor v \rfloor) \xrightarrow{e} (\ell', \lfloor v' \rfloor)$ is also possible in $\mathcal{A}$. Here again we may have to apply from $(M, \nu)$ a sequence of instantaneous transitions, leading to $(M_1, \nu_1)$ where place $\ell$ is marked, and from there we can reach an equivalent configuration $(M_2, \nu_2)$ with its associated $c = (c_x)_{x \in X}$ maximal. Let $C = (\ell, Z)$ be the class for which $M(C) = 1$, with $v \in Z$. From lemma 6, $(\ell, \lfloor v \rfloor)$ also belongs to $C$, and $\forall x \in X, \lfloor v \rfloor(x) = c_x \lor (\lfloor v \rfloor(x) \ge K \land c_x = K)$, so that the transition $e$ (corresponding to this vector and this class) can be fired in $\mathcal{N}$, immediately followed by the corresponding reset sequence, leading to $(M', \nu')$. Since exactly one class $C'$ is marked after $e$, we have $(M', \nu')\,\mathcal{R}\,(\ell', v')$ by the definition of $\mathcal{R}$.
+
+For the converse, we consider a move in $\mathcal{N}$.
+
+**Case 3:** $(M, \nu) \xrightarrow{d} (M, \nu + d)$ (with $d \neq 0$). Then, neither reset transitions nor transitions of the form $t_c^x$ can be fired in $\mathcal{N}$. Thus, the places $h_c^x$ which contain a token are such that $\nu(t_c^x) < 1$ and $\nu(t_c^x) + d \le 1$. For the state $(\ell, v)$, we have $M(\ell) = 1$ and $v(x) = c + \nu(t_c^x)$. The move $(\ell, v) \xrightarrow{d} (\ell, v + d)$ is possible in $\mathcal{A}$ since $(\ell, v + d)$ belongs either to the region of $(\ell, v)$ or to its time successor which is reachable by condition (a). Therefore $(\ell, v) \xrightarrow{d} (\ell, v + d)$ in $\mathcal{A}$ with $(M, \nu + d)\mathcal{R}(\ell, v + d)$.
+
+**Case 4:** $(M, \nu) \xrightarrow{t} (M', \nu')$. For any transition $t$ of $\mathcal{N}$ which is not associated with some transition $e = (\ell, g, a, R, \ell')$ in $\mathcal{A}$, no time can elapse, so there is no need for a move in $\mathcal{A}$ because $(M', \nu')$ is still equivalent to $(\ell, v)$. Suppose now that $t$ is associated with an edge $e$; we have $M(\ell) = 1$ and $M(C) = 1$ for some class $C = (\ell, Z)$ with $v \in Z$. Since $t$ is fireable, considering the valuation $c = (c_x)_{x \in X}$ associated with $(M, \nu)$, the construction implies that $\exists v'' \in Z$ s.t. $\forall x \in X, v''(x) = c_x \lor (v''(x) \ge K \land c_x = K)$, which implies that the segment $[v'', v] \subseteq Z$, by the convexity of $Z$, with $0 \le v(x) - v''(x) = v(x) - c_x \le 1$ for each $x$ s.t. $c_x < K$. Thus, $[(\ell, v''), (\ell, v)]$ is contained in the topological closure $\bar{r}$ of some reachable region $r$ such that $\min_r = c$ and $\ell = \ell_r$. Since $(\ell, c) \xrightarrow{e} (\ell', c')$ is possible in $\mathcal{A}$, condition (c) implies that a move $(\ell, v) \xrightarrow{e} (\ell', v')$ is also possible in $\mathcal{A}$. From the definition, $(M', \nu')\,\mathcal{R}\,(\ell', v')$. $\square$
+
+# 7 Complexity results
+
+This characterization leads to the following complexity results.
+
+**Proposition 1.** Given an injectively-labelled timed automaton $\mathcal{A}$, deciding whether there is a TPN weakly timed bisimilar to $\mathcal{A}$ is PSPACE-complete. The reachability problem for the class $\mathcal{T}\mathcal{A}^{wtb}$ is also PSPACE-complete.
+
+*Proof.* The reachability problem for regions is in PSPACE. In order to check whether condition (a) is false, we nondeterministically pick a region $r$ and a region $r'$ which intersects $\bar{r}$, and check whether $r$ is reachable and $r'$ is not reachable. In order to check whether condition (b) is false, we nondeterministically pick a region $r$ and an edge $e$, and check whether $r$ is reachable, $e$ is fireable from $r$ and $e$ is not fireable from $(\ell_r, min_r)$. In order to check whether condition (c) is false, we nondeterministically pick a region $r$, a region $r'$ which intersects $\bar{r}$ and an edge $e$, and check whether $r$ is reachable, $e$ is not fireable from $r$ or $r'$, and $e$ is fireable from $(\ell_r, min_r)$. By Savitch's theorem, we obtain a deterministic algorithm in PSPACE.
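Over an explicitly materialized region abstraction, the three checks amount to plain set computations (a brute-force Python illustration only; the dictionaries below are our encoding, and the actual decision procedure avoids building the graph in order to stay in polynomial space):

```python
# Brute-force check of conditions (a), (b), (c) on an explicit region graph.
def check_conditions(reachable, closure, fireable_from, fireable_from_min):
    # reachable: set of regions; closure[r]: regions intersecting the closure of r
    # fireable_from[r]: edges fireable from every configuration of r
    # fireable_from_min[r]: edges fireable from (l_r, min_r)
    for r in reachable:
        for r2 in closure[r]:
            if r2 not in reachable:
                return False                     # condition (a) fails
        for e in fireable_from[r]:
            if e not in fireable_from_min[r]:
                return False                     # condition (b) fails
        for e in fireable_from_min[r]:
            if any(e not in fireable_from[r2] for r2 in closure[r]):
                return False                     # condition (c) fails
    return True

# A toy two-region abstraction where the closure of r1 touches r0:
reachable = {"r0", "r1"}
closure = {"r0": {"r0"}, "r1": {"r0", "r1"}}
fireable = {"r0": {"e"}, "r1": {"e"}}
fireable_min = {"r0": {"e"}, "r1": {"e"}}
assert check_conditions(reachable, closure, fireable, fireable_min)
fireable_min["r1"] = set()                       # break condition (b)
assert not check_conditions(reachable, closure, fireable, fireable_min)
```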
+
+In order to show PSPACE-hardness, we adapt the construction given in [3] (appendix D), which reduces the acceptance problem for linear bounded Turing machines (LBTM) to the reachability problem for TA with restricted guards. In order to be self-contained, we develop the complete proof.
+
+The Turing machine $M = \langle Q, \Sigma, q_0, q_F, Tr \rangle$ is defined by $Q$ its finite set of states, $\Sigma = \{a, b\}$ its alphabet, $q_0$ (resp. $q_F$) its initial (resp. final) state and $Tr$ the transitions of the machine. Each transition $\theta = (q, \alpha, \alpha', \delta, q')$ is defined by its current state $q$, the character to be read $\alpha$, the character to be written $\alpha'$, the move to be performed $\delta \in \{L, R\}$ and the next state $q'$. Let $w_0$ be a word of length $n$. We first build a TA $A_{M,w_0}$, bisimilar to a TPN, which reaches location end iff $w_0$ is accepted by $M$.
+
+The set of clocks of $A_{M,w_0}$ is $\{x_i\}_{0\le i \le n}$. Clock $x_0$ rules the behaviour of the simulation by letting exactly 1 time unit elapse before performing instantaneously the simulation of a machine transition. For $i \ge 1$, at the time of transition execution, the value of clock $x_i$ is related to the content of the $i^{\text{th}}$ cell: $x_i = 1$ iff the cell contains an $a$ and $x_i \ge 2$ iff the cell contains a $b$.
+
+The set of locations of $A_{M,w_0}$ is:
+
+$$ \{(q,i) \mid q \in Q, 1 \le i \le n\} \cup \{init, end\} \cup \{(i,\theta,l) \mid 1 \le i \le n, \theta \in Tr, 1 \le l \le n\} $$
+
+When the machine is in state $q$ reading the $i^{\text{th}}$ cell, the location of the automaton is $(q, i)$. When the machine changes its configuration to state $q'$ reading the $i'^{\text{th}}$ cell by transition $\theta = (q, \alpha, \alpha', \delta, q')$, the automaton will first let 1 time unit elapse and then will successively visit $(i, \theta, 1), \dots, (i, \theta, n)$, $(q', i')$ in zero time. Thus the invariant associated with every $(q, i)$ is $x_0 \le 1$ and the one associated with every $(i, \theta, l)$ is $x_0 \le 0$.
+
+Let us describe the automaton transitions related to such a transition (we do not give the labels as the TA is injectively-labelled). We define $s(i, \delta)$ by: if $\delta = L$ then $s(i, \delta) = i - 1$ else $s(i, \delta) = i + 1$.
+
+- $(q,i) \xrightarrow{g,\{x_0\}} (i,\theta,1)$ with $g = (x_0 \ge 1) \wedge (x_i \le 1)$ (resp. $g = (x_0 \ge 1) \wedge (x_i \ge 2)$) if $\alpha = a$ (resp. $\alpha = b$),
+
+- $(i, \theta, i) \xrightarrow{true,\,r} (i, \theta, i+1)$ if $i < n$, and $(n, \theta, n) \xrightarrow{true,\,r} (q', s(n, \delta))$, with $r = \{x_i\}$ (resp. $r = \emptyset$) if $\alpha' = a$ (resp. $\alpha' = b$),
+
+- $(i, \theta, l) \xrightarrow{x_l \le 1,\,\{x_l\}} (i, \theta, l+1)$ if $l < n \wedge i \ne l$, and $(i, \theta, n) \xrightarrow{x_n \le 1,\,\{x_n\}} (q', s(i, \delta))$ if $i \ne n$ (in this case $\delta$ must be $L$),
+
+- $(i, \theta, l) \xrightarrow{x_l \ge 2,\,\emptyset} (i, \theta, l+1)$ if $l < n \wedge i \ne l$, and $(i, \theta, n) \xrightarrow{x_n \ge 2,\,\emptyset} (q', s(i, \delta))$ if $i \ne n$.
+
+The first step of the simulation consists in checking whether the $i^{\text{th}}$ cell contains $\alpha$, whereas the other steps consist in resetting the clocks corresponding to a cell containing an $a$. In this way, such clocks will have value 1 at the next stage of the simulation whereas the other ones will have a value at least 2. In the last step, the new location $(q', s(i, \delta))$ is reached.
+
+It remains to "initialize" the clocks according to $w_0$. This is performed through the transition $init \xrightarrow{x_0 \ge 1, \{x_0\} \cup r_{w_0}} (q_0, 1)$ with $r_{w_0}$ being the positions of *a* in $w_0$. Once again, the invariant associated with $init$ is $x_0 \le 1$. At last, we add transitions in order to reach end: $(q_F, i) \xrightarrow{\text{true}, \{x_0\}} \text{end}$.
+
+From the details of the construction, it is clear that the size of $A_{M,w_0}$ is polynomial w.r.t. *n*. Furthermore it satisfies the conditions (a), (b) and (c). This is mainly due to the fact that in a configuration with $x_0 \in \{0,1\}$ all the other clocks have integral values.
+
+We build another TA $A'_{M,w_0}$ by adding an edge $end \xrightarrow{x_0=0,\emptyset} end$.
+
+If the LBTM $\mathcal{M}$ does not accept the word $w_0$, then the state *end* is not reachable and $A'_{M,w_0}$, which behaves as $A_{M,w_0}$, satisfies the conditions (a), (b), (c).
+
+If the LBTM $\mathcal{M}$ accepts the word $w_0$, then the state *end* is reachable and $A'_{M,w_0}$ does not satisfy the condition (c) (the additional edge is fireable when entering *end* but not after letting the time elapse). The fact that the reachability problem for the class $\mathcal{T}\mathcal{A}^{wtb}$ is PSPACE-complete was proved implicitly within the proof above. $\square$
+
+# 8 Conclusion
+
+In this paper, we considered the subclass $\mathcal{T}\mathcal{A}^{wtb}$ of injectively-labelled TA such that a timed automaton $\mathcal{A}$ is in $\mathcal{T}\mathcal{A}^{wtb}$ if and only if there is a TPN $\mathcal{N}$ weakly timed bisimilar to $\mathcal{A}$. We obtained a characterization of this class, based on the region automaton associated with $\mathcal{A}$. To prove that our condition is necessary, we introduced the notion of uniform bisimulation between TA and TPNs. For the sufficiency, we proposed two constructions. From this characterization, we have proved that for the class $\mathcal{T}\mathcal{A}^{wtb}$, the membership problem and the reachability problem are PSPACE-complete. Of course, checking this condition is therefore expensive but it should be noted that the syntactic subclass of TA proposed in [7] fulfills the condition. Hence there is a simpler (even if coarser) way to check a sufficient condition avoiding the complexity. Furthermore translation of a TA into a TPN provides an alternative method for verification with tools like TINA [11] or ROMEO [19] in case UPPAAL [25] or KRONOS [28] are inefficient. Besides the techniques introduced in this paper give some insight for use of the region automaton in order to obtain expressivity results.
+
+# References
+
+1. P. A. Abdulla, P. Mahata and R. Mayr. Decidability of Zenoness, Syntactic Boundedness and Token-Liveness for Dense-Timed Petri Nets. *Foundations of Software Technology and Theoretical Computer Science*, 24th International Conference, Chennai, India, LNCS volume 3328, pages 58-70, December 2004.
+---PAGE_BREAK---
+
+2. P. A. Abdulla and A. Nylén. Timed Petri Nets and BQOs. *International Conference on Application and Theory of Petri Nets*, Newcastle, U.K., LNCS volume 2075, pages 53-70, June 2001.
+
+3. L. Aceto and F. Laroussinie. Is Your Model Checker on Time? On the Complexity of Model Checking for Timed Modal Logics. *Journal of Logic and Algebraic Programming*, volume 52-53, pages 7-51. Elsevier Science Publishers, August 2002.
+
+4. R. Alur and D. Dill. A theory of timed automata. *Theoretical Computer Science* B, 126:183-235, 1994.
+
+5. T. Aura and J. Lilius. A causal semantics for time Petri nets. *Theoretical Computer Science*, 243(1-2):409-447, 2000.
+
+6. B. Bérard, F. Cassez, S. Haddad, D. Lime and O.H. Roux. Comparison of Different Semantics for Time Petri Nets. *Automated Technology for Verification and Analysis: Third International Symposium (ATVA 2005) Taipei, Taiwan, LNCS* volume 3707 pages 293-307, October 2005.
+
+7. B. Bérard, F. Cassez, S. Haddad, D. Lime and O.H. Roux. Comparison of the Expressiveness of Timed Automata and Time Petri Nets. *Third International Conference on Formal Modelling and Analysis of Timed Systems (FORMATS'05)*, Uppsala, Sweden, LNCS volume 3829, pages 211-225, September 2005.
+
+8. B. Bérard, F. Cassez, S. Haddad, D. Lime and O.H. Roux. When are timed automata weakly timed bisimilar to time Petri nets? *25th Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2005)*, Hyderabad, India, LNCS volume 3821, pages 273-284, December 2005.
+
+9. B. Bérard, V. Diekert, P. Gastin and A. Petit. Characterization of the expressive power of silent transitions in timed automata. *Fundamenta Informaticae*, 36(2-3):145-182, 1998.
+
+10. B. Berthomieu and M. Diaz. Modeling and verification of time dependent systems using time Petri nets. *IEEE Transactions on Software Engineering*, 17(3):259-273, March 1991.
+
+11. B. Berthomieu and F. Vernadat. Time Petri Nets Analysis with TINA. *Third International Conference on the Quantitative Evaluation of Systems (QEST 2006)*, Riverside, California, USA, IEEE Computer Society Press, pages 123-124, September 2006.
+
+12. P. Bouyer. Forward Analysis of Updatable Timed Automata. *Formal Methods in System Design*, 24(3):281-320, May 2004.
+
+13. P. Bouyer, C. Dufourd, E. Fleury and A. Petit. Updatable Timed Automata. *Theoretical Computer Science*, 321(2-3):291-345, Elsevier Science Publishers, August 2004.
+
+14. P. Bouyer, S. Haddad and P.-A. Reynier. Timed Petri Nets and Timed Automata: On the Discriminating Power of Zeno Sequences. *33rd International Colloquium on Automata, Languages and Programming (ICALP'06)*, Venice, Italy, (LNCS) volume 4052, pages 420-431, July 2006.
+
+15. P. Bouyer, S. Haddad and P.-A. Reynier. Undecidability Results for Timed Automata with Silent Transitions. *Research Report LSV-07-12*, http://www.lsv.ens-cachan.fr/Publis/RAPPORTS_LSV/PDF/rr-lsv-2007-12.pdf, February 2007.
+
+16. G. Bucci and A. Fedeli and L. Sassoli and E. Vicario. Timed State Space Analysis of Real-Time Preemptive Systems. *IEEE Trans. Software Eng.* 30(2): 97-111, 2004.
+
+17. F. Cassez and O. H. Roux. Structural Translation of Time Petri Nets into Timed Automata. In Michael Huth, editor, *Workshop on Automated Verification of Critical Systems (AVoCS'04) London, UK*, Electronic Notes in Computer Science. Elsevier, volume 128, issue 6, pages 23-40, August 2004.
+---PAGE_BREAK---
+
+18. D. L. Dill. Timing assumptions and verification of finite-state concurrent systems. In Proc. Workshop on Automatic Verification Methods for Finite State Systems, Grenoble, LNCS volume 407 pages 197-212, 1989.
+
+19. G. Gardey, D. Lime, M. Magnin and O. H. Roux, ROMÉO: A Tool for Analyzing Time Petri Nets. In *Proc. of Computer Aided Verification, 17th International Conference, CAV 2005, Edinburgh, Scotland*, volume 3576 of LNCS, pages 418-423, 2005.
+
+20. S. Haar, F. Simonot-Lion, L. Kaiser, and J. Toussaint. Equivalence of Timed State Machines and safe Time Petri Nets. In *Proceedings of 6th International Workshop on Discrete Event Systems (WODES 2002)*, Zaragoza, Spain, pages 119-126, 2002.
+
+21. T.A. Henzinger, X. Nicollin, J. Sifakis and S. Yovine. Symbolic model checking for real-time systems. *Information and Computation*, 111(2):193-244, 1994.
+
+22. D. Lime and O. H. Roux. State class timed automaton of a time Petri net. In *Proceedings of the 10th International Workshop on Petri Nets and Performance Models (PNPM 2003)*, Urbana-Champaign, Illinois, USA, IEEE Computer Society, pages 124-133, September 2003.
+
+23. X. Nicollin and J. Sifakis. An Overview and Synthesis on Timed Process Algebras. In *CAV'91, Aalborg, invited talk, LNCS 575*, pages 376-398, July 1991.
+
+24. P. M. Merlin. A study of the recoverability of computing systems. *PhD thesis*, University of California, Irvine, CA, 1974.
+
+25. P. Pettersson and K. G. Larsen. UPPAAL2k. *Bulletin of the European Association for Theoretical Computer Science*, 70:40-44, February 2000.
+
+26. L. Popova-Zeugmann and M. Heiner and I. Koch. Time Petri Nets for Modelling and Analysis of Biochemical Networks. *Fundamenta Informaticae* (FI), 67(2005), pp 149-162, IOS-Press, Amsterdam.
+
+27. C. Ramchandani. Analysis of asynchronous concurrent systems by timed Petri nets. *PhD thesis*, Massachusetts Institute of Technology, Cambridge, MA, 1974.
+
+28. S. Yovine. Kronos: A Verification Tool for real-Time Systems. *Journal of Software Tools for Technology Transfer*, 1(1/2):123-133, October 1997.
\ No newline at end of file
diff --git a/samples/texts_merged/3336595.md b/samples/texts_merged/3336595.md
new file mode 100644
index 0000000000000000000000000000000000000000..ab7329badb698e4603f393a04ff95f48e2ac5c2e
--- /dev/null
+++ b/samples/texts_merged/3336595.md
@@ -0,0 +1,482 @@
+
+---PAGE_BREAK---
+
+Learning Koopman Operator under Dissipativity
+Constraints
+
+Keita Hara *, Masaki Inoue *, Noboru Sebe **
+
+* Department of Applied Physics and Physico-Informatics, Keio University,
+3-14-1 Hiyoshi, Kohoku-ku, Yokohama, Kanagawa, Japan.
+
+** Department of Artificial Intelligence, Kyushu Institute of Technology,
+680-4 Kawazu, Iizuka, Fukuoka, Japan.
+
+**Abstract:** This paper addresses a learning problem for nonlinear dynamical systems that incorporates a specified dissipativity property. The nonlinear systems are described by the Koopman operator, which is a linear operator defined on an infinite-dimensional lifted state space. The problem of learning the Koopman operator under a specified quadratic dissipativity constraint is formulated and addressed. The learning problem belongs to the class of non-convex optimization problems due to its nonlinear constraints and is numerically intractable. By applying the change-of-variable technique and the convex overbounding approximation, the problem is reduced to sequential convex optimization and is solved in a numerically efficient manner. Finally, a numerical simulation is given, which demonstrates the high modeling accuracy achieved by the proposed approach while enforcing the specified dissipativity.
+
+**Keywords:** Learning, Dissipativity, Koopman Operator, Linear Matrix Inequality
+
+# 1. INTRODUCTION
+
+Artificial neural networks have undergone renewed development recently and have brought high modeling accuracy to data-driven modeling of complex nonlinear systems. In this development, multi-layered hierarchical models play a central role, and their efficient learning methods receive considerable attention in various areas (see, e.g., the works by De la Rosa and Yu (2016), Jin et al. (2016)). For example, De la Rosa and Yu (2016) presents an identification method for nonlinear dynamical systems by using deep learning techniques. Jin et al. (2016) proposes a deep reconstruction model (DRM) to analyze the characteristics of nonlinear systems.
+
+The drawback of such learning methods, in particular for researchers or engineers, is the gap between the constructed model and *a priori* information on the physical system. Although the model may fit a given data-set and emulate the system behavior accurately, it does not necessarily possess some practically essential properties of the system. If we know *a priori* information on such properties, it is natural to try to incorporate it into the model. For *linear* dynamical systems, a variety of learning methods that incorporate *a priori* information have been studied well. For example, the subspace identification method has been combined with *a priori* information including stability by Lacy and Bernstein (2003), eigenvalue location by Okada and Sugie (1996); Miller and De Callafon (2013), steady-state properties by Alenany et al. (2011); Yoshimura et al. (2019), moments by Inoue (2019), positive-realness by Goethals et al. (2003); Hoagg et al. (2004), and more general frequency-domain properties by Abe et al. (2016). However, to the best of the authors' knowledge, for *nonlinear* dynamical systems, learning methods with *a priori* information have not been studied well.
+
+This paper addresses a learning problem for nonlinear dynamical systems that incorporates *a priori* information on dissipativity, which was proposed by Willems (1972) and developed, e.g., by Hill and Moylan (1976, 1977). To this end, the *nonlinear* system is described with the *Koopman operator*, which is a *linear* operator defined on an infinite-dimensional lifted state space and has been applied to the analysis of nonlinear dynamical systems; see, e.g., the works by Koopman (1931), Williams et al. (2015), Korda and Mezić (2018). Then, the learning problem is reduced to the data-driven finite-dimensional approximation of the Koopman operator subject to the dissipativity constraint.
+
+The approximation is formulated as the minimization of a convex cost function, which measures the consistency of the model and data, subject to a nonlinear matrix inequality, which represents the dissipation inequality. The formulated problem, which is called Problem 1, is in a class of the non-convex optimization and is numerically intractable. Therefore, we aim at approximating Problem 1 to derive a numerically efficient algorithm that is composed of the solutions to the following two problems. 1) By applying the change of variable technique to the nonlinear matrix inequality, we derive a linear matrix inequality (LMI) constraint. In addition, the approximation of the cost function reduces Problem 1 to a convex optimization, which is called Problem 2 and provides the feasible solution to Problem 1. 2) The convex overbounding approximation method proposed by Sebe (2018) is applied to the nonlinear matrix inequality of Problem 1 to derive its inner approximation. Then, the derived convex optimization problem, which is called Problem 3, is sequentially solved by starting from the initial guess obtained in Problem 2. It is guaranteed that the overall algorithm generates a less conservative solution than the solution obtained in Problem 2.
+
+The remaining parts of this paper are organized as follows. In Section 2, we review theory of the Koopman operator and dissipativity. In Section 3, the problem of learning the Koopman operator with the dissipativity-constraint is formulated, and the learning algorithm is proposed. In Section 4, a numerical
+---PAGE_BREAK---
+
+simulation is performed, in which the learning problem of a dissipative nonlinear system is addressed and the effectiveness of the algorithm is presented. Section 5 concludes the works in this paper.
+
+## 2. PRELIMINARIES
+
+### 2.1 Koopman Operator Theory
+
+In this section, we review *Koopman operator theory* to show its application to control systems based on the work by Korda and Mezić (2018).
+
+Koopman Operator
+We consider a discrete-time nonlinear system described by
+
+$$ \begin{cases} x(k+1) = f(x(k), u(k)), \\ y(k) = g(x(k)), \end{cases} \quad (1) $$
+
+where $k$ is the discrete time, $u \in \mathbb{R}^m$ is the input, $x \in \mathbb{R}^n$ is the state, $y \in \mathbb{R}^l$ is the output, and $f(x, u): \mathbb{R}^{n+m} \to \mathbb{R}^n$ and $g(x): \mathbb{R}^n \to \mathbb{R}^l$ are the nonlinear functions. Let $z$ denote the extended state
+
+$$ z := \begin{bmatrix} x \\ u \end{bmatrix} \in \mathbb{R}^{n+m}. $$
+
+Further let $\mathcal{F}$ be the nonlinear operator defined by
+
+$$ \mathcal{F}(z) := \begin{bmatrix} f(x, u) \\ S(u) \end{bmatrix}, $$
+
+where $S$ is the time-shift operator defined as
+
+$$ S(u(k)) := u(k + 1). $$
+
+Then, the time evolution of $z$ is described by
+
+$$ z(k + 1) = \mathcal{F}(z(k)). \quad (2) $$
+
+Now, we let $\phi_{\text{inf}}(z)$ denote the infinite-dimensional lifting function described by
+
+$$ \phi_{\text{inf}}(z) = \begin{bmatrix} \phi_1(z) \\ \phi_2(z) \\ \vdots \end{bmatrix}. \quad (3) $$
+
+Here, we introduce the Koopman operator $\mathcal{K}$ as
+
+$$ \mathcal{K}(\phi_{\text{inf}}(z)) := \phi_{\text{inf}}(\mathcal{F}(z)). $$
+
+Then, the time evolution of $\phi_{\text{inf}}(z)$ is described by
+
+$$ \phi_{\text{inf}}(z(k+1)) = \mathcal{K}(\phi_{\text{inf}}(z(k))). \quad (4) $$
+
+Note that the Koopman operator is a linear operator defined on the infinite-dimensional state space, while expressing nonlinear dynamical systems (see e.g., the work by Korda and Mezić (2018)).
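As a minimal illustration of the definition $\mathcal{K}(\phi_{\text{inf}}(z)) = \phi_{\text{inf}}(\mathcal{F}(z))$, consider the scalar dynamics $x(k+1) = \lambda x(k)$ (an example of ours, not from the paper): the lifting $[x, x^2]^T$ spans a Koopman-invariant subspace, on which the operator acts exactly as the matrix $\mathrm{diag}(\lambda, \lambda^2)$.

```python
import numpy as np

lam = 0.5
f = lambda x: lam * x                  # dynamics x(k+1) = lam * x(k)
phi = lambda x: np.array([x, x ** 2])  # finite lifting [x, x^2]
K = np.diag([lam, lam ** 2])           # Koopman matrix, exact on this subspace

x = 2.0
# K(phi(z)) coincides with phi(F(z)): the operator acts linearly on observables
assert np.allclose(K @ phi(x), phi(f(x)))
```

For general nonlinear $f$, no finite lifting is exact, which motivates the finite-dimensional approximation discussed next.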
+
+**Approximation of Koopman Operator**
+Since the Koopman operator is infinite-dimensional, it is difficult to handle in numerical calculations. In this subsection, we give a finite-dimensional approximation of the Koopman operator. To this end, we define the $N_\phi$-dimensional lifting function $\phi(z): \mathbb{R}^{n+m} \to \mathbb{R}^{N_\phi}$ as
+
+$$ \phi(z) = \begin{bmatrix} \phi_1(z) \\ \vdots \\ \phi_{N_\phi}(z) \end{bmatrix} \in \mathbb{R}^{N_\phi}. $$
+
+Furthermore, we let $\mathcal{A} \in \mathbb{R}^{N_\phi \times N_\phi}$ be a finite-dimensional matrix that approximates the Koopman operator $\mathcal{K}$, i.e., the error
+
+$$ \| \mathcal{A}\phi(z) - \phi(\mathcal{F}(z)) \| \quad (5) $$
+
+is sufficiently small. With this $\mathcal{A}$, we have the expression
+
+$$ \phi(z(k+1)) \approx \mathcal{A}\phi(z(k)), \quad (6) $$
+
+which approximately describes the behavior of $\phi_{\text{inf}}(z)$, defined by (4).
+
+In this paper, we propose a method for the data-driven approximation of $\mathcal{K}$, i.e., a method for learning $\mathcal{A}$ from data. In the method, we aim at constructing a model (6) that is compatible with controller design. For controller design and implementation, it is convenient that the model be linear in the input $u$. To this end, we further specialize the class of the lifting function $\phi(z)$ to the following form
+
+$$ \phi(z) = \begin{bmatrix} \psi(x) \\ u \end{bmatrix} \in \mathbb{R}^{N+m}, $$
+
+where $\psi(x): \mathbb{R}^n \to \mathbb{R}^N$ is the $N$-dimensional lifting function given by
+
+$$ \psi(x) = \begin{bmatrix} \psi_1(x) \\ \vdots \\ \psi_N(x) \end{bmatrix} \in \mathbb{R}^N, $$
+
+and $N+m=N_\phi$ holds. Let the matrix $\mathcal{A}$ be partitioned as
+
+$$ \mathcal{A} = \begin{bmatrix} A & B \\ * & * \end{bmatrix} \in \mathbb{R}^{(N+m) \times (N+m)}, $$
+
+where $A \in \mathbb{R}^{N\times N}$ and $B \in \mathbb{R}^{N\times m}$. Then, it follows from (6) that the following expression of the time-evolution of $\psi(x)$ holds.
+
+$$ \psi(x(k+1)) \approx A\psi(x(k)) + Bu(k). $$
+
+In addition, we give the approximation of the output equation in (1) by
+
+$$ y(k) \approx C\psi(x(k)), $$
+
+where $C \in \mathbb{R}^{l\times N}$. For simplicity of notation, we let $\psi(k) = \psi(x(k))$. Then, we obtain the state-space model defined on the functional space as
+
+$$ \left\{ \begin{aligned} \psi(k+1) &= A\psi(k) + Bu(k), \\ y(k) &= C\psi(k). \end{aligned} \right. \quad (7) $$
+
+The model (7) approximately describes the nonlinear input-output behavior generated by (1). In this paper, the model (7) is called the “Koopman model”. The aim of this paper is to propose the learning method of the system matrices $(A, B, C)$ based on some data-sets.
+
+**Learning Koopman Operator**
+For simplicity of notation, we define the following data-matrices based on the sequences of the input, output, and state of the system (1).
+
+$$ U_k := [u(k) \ u(k+1) \ \dots \ u(k+M-1)] \in \mathbb{R}^{m \times M}, $$
+
+$$ Y_k := [y(k) \ y(k+1) \ \dots \ y(k+M-1)] \in \mathbb{R}^{l \times M}, $$
+
+$$ \Psi_k := [\psi(k) \ \psi(k+1) \ \dots \ \psi(k+M-1)] \in \mathbb{R}^{N \times M}, $$
+
+$$ \Psi_{k+1} := [\psi(k+1) \ \psi(k+2) \ \dots \ \psi(k+M)] \in \mathbb{R}^{N \times M}. $$
+
+It should be noted that $\Psi_k$ and $\Psi_{k+1}$ are constructed by using the measured data on the state, $\{x(k), ..., x(k+M)\}$. In this paper, it is assumed that the data-set $(U_k, Y_k, \Psi_k, \Psi_{k+1})$ is given and available for learning the Koopman operator that expresses (1). The problem of learning is formulated as follows.
+
+Given the data-matrices $(U_k, Y_k, \Psi_k, \Psi_{k+1})$, solve the optimization problem:
+
+$$ \min_{A,B,C} J_1(A, B) + J_2(C), \quad (8) $$
+---PAGE_BREAK---
+
+where $J_1(A, B)$ and $J_2(C)$ are given by
+
+$$
+\begin{aligned}
+J_1(A, B) &:= \left\| \Psi_{k+1} - [A \ B] \begin{bmatrix} \Psi_k \\ U_k \end{bmatrix} \right\|_F^2, && (9) \\
+J_2(C) &:= \| Y_k - C \Psi_k \|_F^2. && (10)
+\end{aligned}
+ $$
+
+The solution to the optimization problem (8) provides the system matrices $(A^\dagger, B^\dagger, C^\dagger)$ of the Koopman model (7). It is assumed that $\left[\Psi_k^\top U_k^\top\right]^\top$ is of full row rank, which is a natural assumption when rich data is available for learning. Then, the learned matrices $(A^\dagger, B^\dagger, C^\dagger)$ are uniquely determined by any given data.
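Since problem (8) decouples, $(A^\dagger, B^\dagger)$ and $C^\dagger$ can be computed as two independent linear least-squares solutions. A sketch using a small hypothetical lifted-space system as the data generator (the numerical matrices below are ours, purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
N, m, M = 3, 1, 50
# hypothetical ground-truth system in the lifted space, used only to make data
A_true = np.array([[0.8, 0.1, 0.0], [0.0, 0.5, 0.2], [0.0, 0.0, 0.3]])
B_true = np.array([[1.0], [0.5], [0.2]])
C_true = np.array([[1.0, 0.0, 1.0]])

U = rng.standard_normal((m, M))
Psi = np.zeros((N, M + 1))
Psi[:, 0] = rng.standard_normal(N)
for k in range(M):
    Psi[:, k + 1] = A_true @ Psi[:, k] + B_true @ U[:, k]
Y = C_true @ Psi[:, :M]
Psi_k, Psi_k1 = Psi[:, :M], Psi[:, 1:]

Z = np.vstack([Psi_k, U])            # [Psi_k^T U_k^T]^T, full row rank
AB = Psi_k1 @ np.linalg.pinv(Z)      # minimizes J_1 in (9)
C_hat = Y @ np.linalg.pinv(Psi_k)    # minimizes J_2 in (10)
A_hat, B_hat = AB[:, :N], AB[:, N:]
```

With noiseless data and the full-row-rank condition on $[\Psi_k^\top\ U_k^\top]^\top$, the least-squares minimizer is unique and recovers the generating matrices exactly.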
+
+## 2.2 Dissipativity
+
+In this subsection, we review dissipativity of dynamical systems. Dissipativity is a property characterizing dynamical systems and plays an important role in system analysis, in particular, the analysis of feedback or more general interconnections of dynamical systems (see, e.g., the pioneering work by Willems (1972) and developments, e.g., by Hill and Moylan (1976, 1977)). Dissipativity is defined for the input-output system (1) as follows.
+
+*Definition 1.* Given a scalar function $s(u, y) : \mathbb{R}^{m+l} \to \mathbb{R}$, the system (1) is said to be dissipative for $s(u, y)$ if there is a non-negative function $V(x) : \mathbb{R}^n \to \mathbb{R}_+$ such that the inequality
+
+$$ V(x(k)) - V(x(0)) \leq \sum_{\tau=0}^{k} s(u(\tau), y(\tau)), \quad \forall k \geq 0 \quad (11) $$
+
+holds.
+
+The functions $s(u, y)$ and $V(x)$ and the inequality (11) are called the supply rate, storage function, and dissipation inequality, respectively.
+
+A characterization of dissipative linear dynamical systems is given. We specialize the supply rate $s(u, y)$ in the quadratic form as
+
+$$ s(u, y) = - \begin{bmatrix} y \\ u \end{bmatrix}^T \Xi \begin{bmatrix} y \\ u \end{bmatrix}, \quad (12) $$
+
+where $\Xi$ is the real symmetric matrix of
+
+$$ \Xi = \begin{bmatrix} \Xi_{11} & \Xi_{12} \\ \Xi_{12}^T & \Xi_{22} \end{bmatrix}. \quad (13) $$
+
+Even with this specialization of the supply rate, dissipativity covers some important properties of dynamical systems. For example, dissipativity with respect to $(\Xi_{11}, \Xi_{12}, \Xi_{22}) = (0, -1, 0)$ represents the passivity of dynamical systems, and that with respect to $(\Xi_{11}, \Xi_{12}, \Xi_{22}) = (1, 0, -\gamma)$ for some positive constant $\gamma$ represents a bounded $L_2$ gain.
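For instance, evaluating the quadratic supply rate (12) for the passivity choice $(\Xi_{11}, \Xi_{12}, \Xi_{22}) = (0, -1, 0)$ recovers $s(u, y) = 2uy$ in the scalar case; a quick numeric check (the values are our own illustration):

```python
import numpy as np

def supply_rate(u, y, Xi):
    """Quadratic supply rate (12): s(u, y) = -[y; u]^T Xi [y; u]."""
    v = np.concatenate([np.atleast_1d(y), np.atleast_1d(u)])
    return -float(v @ Xi @ v)

Xi_passivity = np.array([[0.0, -1.0], [-1.0, 0.0]])  # (Xi11, Xi12, Xi22) = (0, -1, 0)
assert supply_rate(u=3.0, y=2.0, Xi=Xi_passivity) == 2.0 * 3.0 * 2.0

Xi_gain = np.array([[1.0, 0.0], [0.0, -2.0]])        # (1, 0, -gamma), gamma = 2
# bounded-gain supply rate: gamma*|u|^2 - |y|^2
assert supply_rate(u=1.0, y=1.0, Xi=Xi_gain) == 1.0
```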
+
+The dissipativity of linear input-output systems, e.g., described by (7), is characterized by the following lemma.
+
+*Lemma 1.* The following statements (i) and (ii) are equivalent (see the book by Brogliato et al. (2007)).
+
+(i) The Koopman model (7) is dissipative for the supply rate $s(u, y)$ of (12).
+
+(ii) There exists a symmetric matrix $P$ such that the following inequalities hold.
+
+$$ P > 0 \quad (14) $$
+
+$$
+\begin{bmatrix} A & B \\ I & 0 \end{bmatrix}^T \begin{bmatrix} P & 0 \\ 0 & -P \end{bmatrix} \begin{bmatrix} A & B \\ I & 0 \end{bmatrix} + \begin{bmatrix} C & 0 \\ 0 & I \end{bmatrix}^T \Xi \begin{bmatrix} C & 0 \\ 0 & I \end{bmatrix} < 0. \quad (15)
+ $$
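Statement (ii) of Lemma 1 can be checked numerically for a given candidate $P$. Below, a hand-picked scalar model and certificate (all numbers are our own illustration, not from the paper) are verified against (14) and (15) for the bounded-gain supply rate $(\Xi_{11}, \Xi_{12}, \Xi_{22}) = (1, 0, -\gamma)$:

```python
import numpy as np

A, B, C, P, gamma = 0.5, 1.0, 0.5, 1.0, 2.0
Xi = np.array([[1.0, 0.0], [0.0, -gamma]])   # supply rate matrix (13)

AB = np.array([[A, B], [1.0, 0.0]])          # [A B; I 0]
Pblk = np.array([[P, 0.0], [0.0, -P]])       # diag(P, -P)
CI = np.array([[C, 0.0], [0.0, 1.0]])        # [C 0; 0 I]
LHS = AB.T @ Pblk @ AB + CI.T @ Xi @ CI      # left-hand side of (15)

assert P > 0                                 # (14)
assert np.all(np.linalg.eigvalsh(LHS) < 0)   # (15): the model is dissipative
```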
+
+## 3. LEARNING KOOPMAN OPERATOR WITH DISSIPATIVITY-CONSTRAINTS
+
+In this section, we propose a learning method for nonlinear dynamical systems that incorporates a priori information on the system dissipativity. We assume that the supply rate $s(u, y)$ characterizing the system dissipativity is given in advance and available for learning. Then, we aim at incorporating the dissipativity information into the Koopman model (7).
+
+### 3.1 Problem Setting
+
+We aim at constructing the Koopman model (7) that satisfies the dissipation inequality (11) based on some data-sets. This learning problem of the system matrices $(A, B, C)$ of (7) is reduced to the problem of (8) subject to the dissipativity constraints (14) and (15). The problem is mathematically formulated as follows.
+
+*Problem 1.* Given the real symmetric matrix $\Xi$ and the data-matrices $(U_k, Y_k, \Psi_k, \Psi_{k+1})$, solve the optimization problem:
+
+$$
+\begin{array}{ll}
+\displaystyle \min_{P,A,B,C} & J_1(A, B) + J_2(C) \\
+\text{sub to} & (14), (15).
+\end{array}
+ $$
+
+Suppose that the optimal solution to Problem 1 is given by $(P^*, A^*, B^*, C^*)$. Then the Koopman model (7) with $(A^*, B^*, C^*)$ is dissipative for the supply rate $s(u, y)$ of (12).
+
+Note that the dissipativity constraint, described by (14) and (15), is non-convex in the decision variables $(P, A, B, C)$, so Problem 1 is not numerically tractable. In the next subsection, we approximately reduce the problem to a convex one.
+
+### 3.2 Convex Approximation of Problem 1
+
+On the basis of the variable transformation technique by Hoagg et al. (2004); Abe et al. (2016), the nonlinear inequality (15) is reduced to a linear matrix inequality. We expand (15) as
+
+$$
+\begin{bmatrix} P - C^T \Xi_{11} C & -C^T \Xi_{12} \\ -\Xi_{12}^T C & -\Xi_{22} \end{bmatrix} - [A \ B]^T P \, [A \ B] > 0. \quad (16)
+ $$
+
+Then, we apply the variable transformation to $(P, A, B)$. We let
+
+$$ R = PA, \quad S = PB \quad (17) $$
+
+to reduce (16) to the inequality
+
+$$
+\begin{bmatrix} P - C^T \Xi_{11} C & -C^T \Xi_{12} & R^T \\ -\Xi_{12}^T C & -\Xi_{22} & S^T \\ R & S & P \end{bmatrix} > 0. \quad (18)
+ $$
+
+Now, suppose that $C$ is given, e.g., by just minimizing $J_2(C)$ based on the data-set $(Y_k, \Psi_k)$. Then, the inequality (18) is linear in the matrices $(P, R, S)$, which is numerically tractable.
+
+There is a drawback in the variable transformation (17): the cost function $J_1(A, B)$ of (9) becomes non-convex in the transformed variables $(P, R, S)$, which is numerically intractable. To overcome the drawback and to numerically obtain a feasible solution to Problem 1, we approximately transform $J_1(A, B)$ into a convex function. To this end, we introduce $W = P$, as a weighting matrix, into $J_1(A, B)$ to define a new cost function as follows.
+---PAGE_BREAK---
+
+$$
+\begin{align}
+J_{1,W}(P, R, S) &= \left\| W \left( \Psi_{k+1} - [A \, B] \begin{bmatrix} \Psi_k \\ U_k \end{bmatrix} \right) \right\|_F^2 \\
+&= \left\| P \Psi_{k+1} - [R \, S] \begin{bmatrix} \Psi_k \\ U_k \end{bmatrix} \right\|_F^2 . \tag{19}
+\end{align}
+$$
+
+The function $J_{1,W}(P, R, S)$ of (19) is convex in the matrices $(P, R, S)$. The minimization problem of $J_{1,W}(P, R, S)$ under the inequalities (14) and (18) is in the class of the convex optimization. The optimization problem is summarized as follows.
+
+**Problem 2.** Given the system matrix *C*, the real symmetric matrix Ξ, and the data-matrices (*U**k*, *Ψ**k*, *Ψ**k*+1), solve the optimization problem:
+
+$$
+\begin{array}{ll}
+\displaystyle \min_{P,R,S} & J_{1,W}(P,R,S) \\
+\text{sub to} & (14), (18).
+\end{array}
+$$
+
+Suppose that Problem 2 is feasible and that the solution ($\hat{P}$, $\hat{R}$, $\hat{S}$) is given. Then, we obtain the system matrices as
+
+$$
+\hat{A} = \hat{P}^{-1}\hat{R}, \quad \hat{B} = \hat{P}^{-1}\hat{S}. \tag{20}
+$$
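With a hand-picked scalar example (ours, purely for illustration), the change of variables (17), the LMI (18), and the recovery (20) can be verified directly with NumPy:

```python
import numpy as np

# hand-picked scalar model (A, B, C) and certificate P, all illustrative
P = np.array([[1.0]]); A = np.array([[0.5]]); B = np.array([[1.0]]); C = np.array([[0.5]])
Xi11, Xi12, Xi22 = np.array([[1.0]]), np.array([[0.0]]), np.array([[-2.0]])

R, S = P @ A, P @ B                           # change of variables (17)
M18 = np.block([
    [P - C.T @ Xi11 @ C, -C.T @ Xi12, R.T],
    [-Xi12.T @ C,        -Xi22,       S.T],
    [R,                  S,           P  ],
])                                            # matrix of the LMI (18)

assert np.all(np.linalg.eigvalsh(M18) > 0)    # (18) holds, and P > 0 gives (14)
A_hat = np.linalg.solve(P, R)                 # recovery (20): A = P^{-1} R
B_hat = np.linalg.solve(P, S)
assert np.allclose(A_hat, A) and np.allclose(B_hat, B)
```

In an actual instance of Problem 2, $(P, R, S)$ would be decision variables of a semidefinite program rather than fixed numbers; the check above only illustrates the algebra of (17), (18), and (20).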
+
+We have the following proposition for the constructed model based on the solution to Problem 2.
+
+*Proposition 1.* Suppose that Problem 2 is feasible and that system matrices are given by (20). Then, the quadruplet ($\hat{P}$, $\hat{A}$, $\hat{B}$, $C$) is the feasible solution to Problem 1. In other words, the Koopman model (7) with ($\hat{A}$, $\hat{B}$, $C$) is dissipative for the supply rate $s(u, y)$ of (12).
+
+As implied in the proposition, the solution to Problem 2 is the feasible solution, but may not be the optimal solution to Problem 1. In general, the solution ($\hat{P}$, $\hat{A}$, $\hat{B}$, $C$) is conservative for Problem 1. In the following subsection, we aim at finding a better approximation of Problem 1.
+
+### 3.3 Sequential Convex Approximation of Problem 1
+
+In this subsection, we give an efficient solution method for Problem 1 based on the overbounding method proposed by Sebe (2018). In the overbounding method, inner approximations of nonlinear matrix inequalities are sequentially constructed. This sequential method gradually reduces the conservativeness of the solution to Problem 2.
+
+Suppose that Problem 2 is feasible and that the feasible solution to Problem 1, denoted by ($\hat{P}$, $\hat{A}$, $\hat{B}$), is constructed. Then, we try to update the initial guess ($\hat{P}$, $\hat{A}$, $\hat{B}$) to reduce the conservativeness, i.e., to further reduce $J_1(A, B)$. First, we transform the decision variables of Problem 1, denoted by $(P, A, B)$, into $(\Delta P, \Delta A, \Delta B)$ as follows.
+
+$$
+P = \hat{P} + \Delta P, \quad A = \hat{A} + \Delta A, \quad B = \hat{B} + \Delta B.
+$$
+
+Further, we let $G$ and $H$ be additional decision variables. With those $G$ and $H$, we define the inequality condition described by
+
+$$
+\mathrm{He}\left( \begin{bmatrix} Q(\Delta P, \Delta A, \Delta B) & \begin{bmatrix} 0 \\ \Delta P - G \end{bmatrix} \\ \begin{bmatrix} -H[\Delta A \ \Delta B] & G \end{bmatrix} & -H \end{bmatrix} \right) < 0, \quad (21)
+$$
+
+where $Q(\Delta P, \Delta A, \Delta B)$ is given by (22) and $F(\Delta P)$ is
+
+$$
+F(\Delta P) = \begin{bmatrix} \hat{P} + \Delta P - C^T \Xi_{11} C & -C^T \Xi_{12} \\ -\Xi_{12}^T C & -\Xi_{22} \end{bmatrix}.
+$$
+
+Table 1. Notation of optimal solutions
+
+| Problem | Solution |
+| --- | --- |
+| Unconstrained (8) | $(A^\dagger, B^\dagger, C^\dagger)$ |
+| Problem 1 | $(P^*, A^*, B^*, C^*)$ |
+| Problem 2 | $(\hat{P}, \hat{R}, \hat{S}) \to (\hat{P}, \hat{A}, \hat{B})$ |
+| Problem 3 | $(\Delta\bar{P}, \Delta\bar{A}, \Delta\bar{B}) \to (\bar{P}, \bar{A}, \bar{B})$ |
+
+We show that (21) is a sufficient condition for (15) as stated in the following proposition.
+
+*Proposition 2.* Suppose that (21) holds. Then, with $(P, A, B) = (\hat{P} + \Delta P, \hat{A} + \Delta A, \hat{B} + \Delta B)$, (15) holds.
+
+The proof follows Proposition 2 in the work by Sebe (2018) and is omitted in this paper. Furthermore, it should be noted that (21) is linear in $(\Delta P, \Delta A, \Delta B, G)$. This implies that for any fixed $H$, (21) is in the form of LMIs and is numerically tractable.
+
+Recall $J_1(A, B)$ of (9) to obtain the expression
+
+$$
+J_1(\hat{A} + \Delta A, \hat{B} + \Delta B) = \left\| \Psi_{k+1} - [\hat{A} + \Delta A \ \ \hat{B} + \Delta B] \begin{bmatrix} \Psi_k \\ U_k \end{bmatrix} \right\|_F^2 . \quad (23)
+$$
+
+Then, the problem of finding $(\Delta P, \Delta A, \Delta B, G)$ that minimizes $J_1(\hat{A} + \Delta A, \hat{B} + \Delta B)$ under the constraint (21), starting from the initial guess $(\hat{P}, \hat{A}, \hat{B})$, is stated as follows.
+
+**Problem 3.** Given the system matrix *C*, the real symmetric matrix Ξ, the data-matrices ($U_k$, $\Psi_k$, $\Psi_{k+1}$), the feasible solution to Problem 1, denoted by ($\hat{P}$, $\hat{A}$, $\hat{B}$), and the real matrix *H*, solve the optimization problem:
+
+$$
+\begin{array}{ll}
+\displaystyle \min_{\Delta P, \Delta A, \Delta B, G} & J_1(\hat{A} + \Delta A, \hat{B} + \Delta B) \\
+\text{sub to} & (21), \quad \hat{P} + \Delta P > 0.
+\end{array}
+$$
+
+With the optimal solution $(\Delta\bar{P}, \Delta\bar{A}, \Delta\bar{B})$ to Problem 3, we obtain the matrices
+
+$$
+\bar{P} = \hat{P} + \Delta\bar{P}, \quad \bar{A} = \hat{A} + \Delta\bar{A}, \quad \bar{B} = \hat{B} + \Delta\bar{B}.
+$$
+
+The notation for the optimal solutions to Problems 1–3 is summarized in Table 1.
+
+Note that the solution ($\bar{P}$, $\bar{A}$, $\bar{B}$) is a less conservative solution to Problem 1 than the initial guess ($\hat{P}$, $\hat{A}$, $\hat{B}$) for any real matrix $H$ satisfying
+
+$$
+H + H^T > 0. \tag{24}
+$$
+
+This fact is mathematically stated in the following proposition.
+
+*Proposition 3.* Suppose that Problem 2 is feasible. Then, for any real matrix *H* satisfying the condition (24), Problem 3 is feasible. In addition, if ($\hat{P}$, $\hat{A}$, $\hat{B}$) is not the solution to Problem 1, the strict inequality
+
+$$
+J_1(\bar{A}, \bar{B}) < J_1(\hat{A}, \hat{B}) \qquad (25)
+$$
+
+holds.
+
+The proof is omitted in this paper.
+
+On the basis of the fact stated in Proposition 3, a sequential algorithm for solving Problem 1 is proposed. Suppose that Problem 3 with $(\hat{P}, \hat{A}, \hat{B}, H) = (\bar{P}_i, \bar{A}_i, \bar{B}_i, H_i)$ has the optimal solution $(\Delta\bar{P}_i, \Delta\bar{A}_i, \Delta\bar{B}_i, \bar{G}_i)$. Consider the updating law
+
+$$
+\begin{align}
+& (\bar{P}_{i+1}, \bar{A}_{i+1}, \bar{B}_{i+1}, H_{i+1}) \\
+& \quad \leftarrow (\bar{P}_i + \Delta\bar{P}_i, \bar{A}_i + \Delta\bar{A}_i, \bar{B}_i + \Delta\bar{B}_i, \bar{G}_i).
+\end{align}
+\tag{26}
+$$
+---PAGE_BREAK---
+
+$$Q(\Delta P, \Delta A, \Delta B) = -\frac{1}{2} \begin{bmatrix} -F(\Delta P) & 0 \\ 0 & \hat{P} + \Delta P \end{bmatrix} + \begin{bmatrix} 0 & 0 \\ -\hat{P}\hat{A} & -\hat{P}\hat{B} \end{bmatrix} + \begin{bmatrix} 0 & 0 \\ -\Delta P\hat{A} & -\Delta P\hat{B} \end{bmatrix} + \begin{bmatrix} 0 & 0 \\ -\hat{P}\Delta A & -\hat{P}\Delta B \end{bmatrix}. \quad (22)$$
+
+Then, by sequentially solving Problem 3 with the updated $(\hat{P}, \hat{A}, \hat{B}, H) = (\bar{P}_{i+1}, \bar{A}_{i+1}, \bar{B}_{i+1}, H_{i+1})$, we obtain the solution $(\Delta \bar{P}_{i+1}, \Delta \bar{A}_{i+1}, \Delta \bar{B}_{i+1}, \bar{G}_{i+1})$, which generates a less conservative solution to Problem 1.
+
+The sequential solution method is summarized in the following Algorithm.
+
+**Algorithm**
+
+1. Find the solution $(\hat{P}, \hat{R}, \hat{S}) = (\hat{P}_0, \hat{R}_0, \hat{S}_0)$ to Problem 2 and obtain $\bar{P}_0 = \hat{P}_0$, $\bar{A}_0 = \hat{P}_0^{-1}\hat{R}_0$, and $\bar{B}_0 = \hat{P}_0^{-1}\hat{S}_0$. In addition, let $H_0 = I$ and $i = 0$.
+
+2. Given $(\hat{P}, \hat{A}, \hat{B}, H) = (\bar{P}_i, \bar{A}_i, \bar{B}_i, H_i)$, find the solution $(\Delta \bar{P}_i, \Delta \bar{A}_i, \Delta \bar{B}_i, \bar{G}_i)$ to Problem 3.
+
+3. Apply (26) to obtain $(\bar{P}_{i+1}, \bar{A}_{i+1}, \bar{B}_{i+1}, H_{i+1})$.
+
+4. If $|J_1(\bar{A}_{i+1}, \bar{B}_{i+1}) - J_1(\bar{A}_i, \bar{B}_i)| < \epsilon$ for a positive constant $\epsilon$, then terminate the algorithm.
+
+5. Set $i \leftarrow i + 1$ and go to 2.
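The control flow of the Algorithm can be sketched as follows. This is a structural sketch only, not the authors' implementation: `solve_problem3` is a hypothetical placeholder for the LMI step of Problem 3 (in practice a call to an SDP solver), and `J1` stands for the cost of (9)/(23).

```python
def run_algorithm(solve_problem3, J1, P0, A0, B0, H0=1.0,
                  eps=1e-8, max_iter=100):
    """Sketch of the sequential Algorithm: iterate Problem 3 and the
    updating law (26) until the decrease of J1 falls below eps."""
    P, A, B, H = P0, A0, B0, H0
    J_prev = J1(A, B)
    for _ in range(max_iter):
        # Step 2: solve Problem 3 at the current iterate (placeholder).
        dP, dA, dB, G = solve_problem3(P, A, B, H)
        # Step 3: updating law (26).
        P, A, B, H = P + dP, A + dA, B + dB, G
        J_cur = J1(A, B)
        # Step 4: termination test.
        if abs(J_cur - J_prev) < eps:
            break
        J_prev = J_cur  # Step 5: continue with i <- i + 1.
    return P, A, B
```

By Proposition 3 each pass through step 2 cannot increase $J_1$, so the loop terminates for any tolerance $\epsilon > 0$.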
+
+## 4. NUMERICAL EXPERIMENT
+
+In this section, we demonstrate the procedure of learning a nonlinear dynamical system by applying the proposed algorithm. Consider a continuous-time nonlinear dynamical system described by
+
+$$
+\begin{cases}
+\dot{x}_1(t) = x_2(t), \\
+\dot{x}_2(t) = -2x_2(t) + x_1(t)\cos(x_1(t)+x_2(t))+u(t), \\
+y(t) = x_2(t).
+\end{cases}
+\quad (27)
+$$
+
+It is known that the system is dissipative with respect to $(\Xi_{11}, \Xi_{12}, \Xi_{22}) = (0, -1, 0)$, i.e., the system is passive (see e.g., the work by Zakeri and Antsaklis (2019)). In this experiment, we aim to accurately learn the dynamical system in the form of the Koopman model (7), while incorporating the passivity property.
+
+In the experimental setup, the time series of $x(t)$, $u(t)$, and $y(t)$ are sampled from the system (27) with a sampling interval of 0.01, and are denoted by $\{x(k)\}$, $\{u(k)\}$, and $\{y(k)\}$, respectively. The input series $\{u(k)\}$ used for learning is drawn from the uniform distribution on $[-1, 1]$. Then, the state and output series $\{x(k)\}$ and $\{y(k)\}$ are measured synchronously. In total, 5000 samples are obtained.
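The data-collection step can be sketched as below. The forward-Euler integrator, the seed, and the function name are our own assumptions; the paper does not specify how (27) is discretized.

```python
import numpy as np

def collect_data(dt=0.01, n_samples=5000, seed=0):
    """Simulate system (27) under uniformly random inputs and record
    the sampled series {x(k)}, {u(k)}, {y(k)}."""
    rng = np.random.default_rng(seed)
    x = np.zeros(2)
    X, U, Y = [], [], []
    for _ in range(n_samples):
        u = rng.uniform(-1.0, 1.0)
        X.append(x.copy())
        U.append(u)
        Y.append(x[1])  # output y = x2
        # forward-Euler step of (27); a finer integrator could be substituted
        dx1 = x[1]
        dx2 = -2.0 * x[1] + x[0] * np.cos(x[0] + x[1]) + u
        x = x + dt * np.array([dx1, dx2])
    return np.array(X), np.array(U), np.array(Y)
```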
+
+We apply the Algorithm to the data $\{x(k)\}$, $\{u(k)\}$, and $\{y(k)\}$. To this end, first, we set $(\Xi_{11}, \Xi_{12}, \Xi_{22}) = (0, -1, -0.2)$ and define the corresponding dissipativity constraint, which inherits the *a priori* information on the passivity in a relaxed form. Furthermore, let the lifting function $\psi(x(k))$ be composed of the state $x(k) = [x_1(k), x_2(k)]^\mathrm{T}$ and thin plate spline radial basis functions $\psi_i(x(k))$, $i \in \{1, 2, \dots, 8\}$, where $\psi_i(x(k))$ is given by
+
+$$ \psi_i(x(k)) = \|x(k) - r_i\|_2^2 \ln \|x(k) - r_i\|_2, $$
+
+Fig. 1. State trajectory $x_1(k)$.
+
+Fig. 2. State trajectory $x_2(k)$.
+
+and the values of $r_i$ are selected randomly from the uniform distribution on the unit box. Then, the lifting function $\psi(x(k))$ is described by
+
+$$ \psi(x(k)) = [x(k) \quad \psi_1(x(k)) \quad \dots \quad \psi_8(x(k))]^\mathrm{T} \in \mathbb{R}^{2+8}. \quad (28) $$
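A minimal sketch of evaluating the lifting function (28). The guard for the $0 \cdot \ln 0$ case at $x(k) = r_i$ and the variable names are our own conventions:

```python
import numpy as np

def lift(x, centers):
    """psi(x) = [x, psi_1(x), ..., psi_8(x)] with thin plate spline
    radial basis functions psi_i(x) = ||x - r_i||^2 ln ||x - r_i||."""
    d = np.linalg.norm(centers - x, axis=1)
    # convention: psi_i = 0 when x coincides with a center r_i
    tps = np.where(d > 0.0, d**2 * np.log(np.where(d > 0.0, d, 1.0)), 0.0)
    return np.concatenate([x, tps])

rng = np.random.default_rng(0)
centers = rng.uniform(0.0, 1.0, size=(8, 2))  # r_i drawn on the unit box
```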
+
+By applying the Algorithm, we constructed the dissipativity-constrained Koopman model, which approximates the nonlinear system (27) and is called Model 1. In addition, we constructed two different models from the same time series data: the unconstrained Koopman model (Model 2), which is simply constructed by solving (8), and the dissipativity-constrained linear model (Model 3), which is based on $\psi(x(k)) = x(k)$ and is constructed by applying the learning method proposed by Abe et al. (2016).
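The unconstrained fit for Model 2 reduces to an ordinary least-squares problem over the stacked data matrices, mirroring the cost (23). A minimal sketch with our own function name:

```python
import numpy as np

def edmd_fit(Psi_k, Psi_kp1, U_k):
    """Unconstrained Koopman fit: minimize
    ||Psi_{k+1} - [A B] [Psi_k; U_k]||_F over A and B."""
    Z = np.vstack([Psi_k, U_k])                       # stacked regressor
    AB, *_ = np.linalg.lstsq(Z.T, Psi_kp1.T, rcond=None)
    AB = AB.T
    n = Psi_k.shape[0]
    return AB[:, :n], AB[:, n:]                       # split into A and B
```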
+
+First, to show the model accuracy, the three models are compared with the true nonlinear system (27). The response to the sine-wave input is illustrated in Figs. 1 and 2, which show the state trajectories of the models. In the figures, the black solid, blue solid, red dashed, and pink dotted lines represent the state of the true system, Model 1 (proposed model), Model 2, and Model 3, respectively. We see from Fig. 1 that Models 1 and 2, i.e., the Koopman models, accurately express the nonlinear behavior generated by (27), while Model 3, i.e., the linear model, does not. The lifting function with the basis (28) contributes to improving the expressive ability of the model.
+---PAGE_BREAK---
+
+Fig. 3. Comparison of Nyquist plot
+
+Next, to show the validity of the dissipativity constraint, we define the transfer function for the Koopman model. Letting $G(e^{j\omega T}) = C(e^{j\omega T}I - A)^{-1}B$, we reduce the dissipativity constraint, characterized by $(\Xi_{11}, \Xi_{12}, \Xi_{22})=(0,-1,-0.2)$, to
+
+$$ \mathrm{Re}[G(e^{j\omega T})] \geq -0.1, \quad \forall \omega \in \mathbb{R}. $$
+
+Then, the Nyquist plot of $G(e^{j\omega T})$ for the three models is illustrated in Fig. 3. As illustrated in the figure, Models 1 and 3 satisfy the dissipativity constraint, while Model 2 violates it. This shows that the dissipativity constraint imposed on the learning problems is valid.
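The passivity-index check above can be approximated numerically by gridding the frequency axis; the grid density, range, and function name below are our choices:

```python
import numpy as np

def min_real_part(A, B, C, T=0.01, n_freq=400):
    """Grid approximation of min over omega of Re[G(e^{j omega T})],
    where G(z) = C (zI - A)^{-1} B."""
    thetas = np.linspace(1e-3, np.pi, n_freq)  # theta = omega * T
    I = np.eye(A.shape[0])
    vals = []
    for th in thetas:
        z = np.exp(1j * th)
        G = C @ np.linalg.solve(z * I - A, B)
        vals.append(G.real.item())
    return min(vals)
```

A model passes the relaxed constraint when the returned minimum is at least $-0.1$.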
+
+## 5. CONCLUSION
+
+This paper addressed the problem of learning nonlinear dynamical systems while incorporating *a priori* information on quadratic dissipativity. The problem was reduced to the data-driven approximation of the Koopman operator under the dissipativity constraint, which was called Problem 1 in this paper. Then, a solution method for the problem was given and summarized in the Algorithm. There are two main contributions of this paper. 1) One is the numerically efficient algorithm, which sequentially solves LMIs. 2) The other is the performance analysis of the algorithm, stated in Proposition 3. The analysis guarantees that the Koopman model constructed by the algorithm fits given data more accurately than the model given by any initial guess.
+
+## ACKNOWLEDGEMENTS
+
+This work was supported by Grant-in-Aid for Young Scientists (B), No. 17K14704 from JSPS.
+
+## REFERENCES
+
+- Abe, Y., Inoue, M., and Adachi, S. (2016). Subspace identification method incorporated with a priori information characterized in frequency domain. In *2016 European Control Conference (ECC)*, 1377–1382. IEEE.
+- Alenany, A., Shang, H., Soliman, M., and Ziedan, I. (2011). Improved subspace identification with prior information using constrained least squares. *IET Control Theory & Applications*, 5(13), 1568–1576.
+- Brogliato, B., Lozano, R., Maschke, B., and Egeland, O. (2007). *Dissipative Systems Analysis and Control: Theory and Applications*. Springer, 2nd edition.
+- De la Rosa, E. and Yu, W. (2016). Randomized algorithms for nonlinear system identification with deep learning modification. *Information Sciences*, 364, 197–212.
+
+- Goethals, I., Van Gestel, T., Suykens, J., Van Dooren, P., and De Moor, B. (2003). Identification of positive real models in subspace identification by using regularization. *IEEE Transactions on Automatic Control*, 48(10), 1843–1847.
+- Hill, D. and Moylan, P. (1976). The stability of nonlinear dissipative systems. *IEEE Transactions on Automatic Control*, 21(5), 708–711.
+- Hill, D.J. and Moylan, P.J. (1977). Stability results for nonlinear feedback systems. *Automatica*, 13(4), 377–382.
+- Hoagg, J.B., Lacy, S.L., Erwin, R.S., and Bernstein, D.S. (2004). First-order-hold sampling of positive real systems and subspace identification of positive real models. In *Proceedings of the 2004 American control conference*, volume 1, 861–866. IEEE.
+- Inoue, M. (2019). Subspace identification with moment matching. *Automatica*, 99, 22–32.
+- Jin, X., Shao, J., Zhang, X., An, W., and Malekian, R. (2016). Modeling of nonlinear system based on deep learning framework. *Nonlinear Dynamics*, 84(3), 1327–1340.
+- Kevrekidis, I., Rowley, C., and Williams, M. (2015). A kernel-based method for data-driven Koopman spectral analysis. *Journal of Computational Dynamics*, 2(2), 247–265.
+- Koopman, B.O. (1931). Hamiltonian systems and transformation in Hilbert space. *Proceedings of the National Academy of Sciences of the United States of America*, 17(5), 315–318.
+- Korda, M. and Mezić, I. (2018). Linear predictors for nonlinear dynamical systems: Koopman operator meets model predictive control. *Automatica*, 93, 149–160.
+- Lacy, S.L. and Bernstein, D.S. (2003). Subspace identification with guaranteed stability using constrained optimization. *IEEE Transactions on Automatic Control*, 48(7), 1259–1263.
+- Miller, D.N. and De Callafon, R.A. (2013). Subspace identification with eigenvalue constraints. *Automatica*, 49(8), 2468–2473.
+- Okada, M. and Sugie, T. (1996). Subspace system identification considering both noise attenuation and use of prior knowledge. In *Proceedings of 35th IEEE Conference on Decision and Control*, volume 4, 3662–3667. IEEE.
+- Sebe, N. (2018). Sequential convex overbounding approximation method for bilinear matrix inequality problems. *IFAC-PapersOnLine*, 51(25), 102–109.
+- Willems, J.C. (1972). Dissipative dynamical systems. Part I: General theory. *Archive for Rational Mechanics and Analysis*, 45(5), 321–351. doi:10.1007/BF00276493.
+- Williams, M.O., Kevrekidis, I.G., and Rowley, C.W. (2015). A data-driven approximation of the Koopman operator: Extending dynamic mode decomposition. *Journal of Nonlinear Science*, 25(6), 1307–1346.
+- Yoshimura, S., Matsubayashi, A., and Inoue, M. (2019). System identification method inheriting steady-state characteristics of existing model. *International Journal of Control*, 92(11), 2701–2711.
+- Zakeri, H. and Antsaklis, P.J. (2019). Passivity and passivity indices of nonlinear systems under operational limitations using approximations. *International Journal of Control*, (just-accepted), 1–20.
\ No newline at end of file
diff --git a/samples/texts_merged/3337707.md b/samples/texts_merged/3337707.md
new file mode 100644
index 0000000000000000000000000000000000000000..43286b302f0efe30f73290e559b0ed6a9a44343f
--- /dev/null
+++ b/samples/texts_merged/3337707.md
@@ -0,0 +1,181 @@
+
+---PAGE_BREAK---
+
+QUESTION 1
+
+In the diagram above, $\angle 1$ and $\angle 5$ are supplementary and $\angle 2 = \angle 6$. If $\angle 1 = 34^\circ$ and $\angle 2 = 55^\circ$, find $\angle 3 + \angle 4 + \angle 5 + \angle 6$.
+---PAGE_BREAK---
+
+QUESTION 2
+
+A = The sum of the degrees of the interior angles of a regular pentagon
+
+B = The measure of an exterior angle of a regular 36-gon
+
+C = The measure of the smallest interior angle of a quadrilateral with angles $(3x)^\circ, 60^\circ, (2x+12)^\circ$, and $(200-x)^\circ$.
+
+D = The measure of an interior angle of a regular dodecagon (12 sided polygon) (in degrees)
+
+Find $\frac{A}{B} - C + D$
+---PAGE_BREAK---
+
+QUESTION 3
+
+$A$ = The area of a regular hexagon with side length 2
+
+$B$ = The area of an isosceles trapezoid with median 9 and height 4
+
+$C$ = The area of a rhombus with diagonals 2 and 7
+
+$D$ = The area of a circle with radius $\frac{2}{\sqrt{\pi}}$
+
+Find $A + B - C - D$.
+---PAGE_BREAK---
+
+QUESTION 4
+
+The value of each statement is given in parentheses on the left. Find the sum of the values of all true statements.
+
+(2) : The contrapositive of the statement 'If I dance then I sing' is 'If I do not dance then I do not sing'.
+
+$(-3)$ : Skew lines never intersect.
+
+(5) : The total surface area of a cone is equal to $\pi rl$ where r is the radius and l is the slant height of the cone.
+
+$(-2)$ : A regular hexagon has six lines of symmetry.
+
+(1) : If the diagonals of a parallelogram bisect, then it is also a rhombus.
+
+(4) : The decahedron is one of the five platonic solids
+---PAGE_BREAK---
+
+QUESTION 5
+
+Use the following diagram to answer parts A and B
+
+A = The measure of the length of BC if AB = 5 and CD = 6.
+
+B = The measure in degrees of arc $BD$ if arc $AB = 80°$ and $\angle C = 20°$.
+
+An isosceles trapezoid is drawn with base lengths 6 and 10 and height 4. The trapezoid is cut at its median and forms two distinct regions. Let
+
+C = The length of the slant height of the original trapezoid before it is cut.
+
+D = The ratio of the areas of smaller region to the larger region of the cut trapezoid.
+
+Find: $\frac{BD}{AC}$
+---PAGE_BREAK---
+
+QUESTION 6
+
+A = The greatest possible distance between two vertices (corners) of a cube with side length 6.
+
+B = The perimeter of a rhombus with diagonals of lengths 10 and 24.
+
+C = The least possible value of the perimeter of a right triangle with two sides of lengths 4 and 8.
+
+D = The length of AC of regular hexagon ABCDEF with side length 6.
+
+Find $(A+B) - (C+D)$
+---PAGE_BREAK---
+
+QUESTION 7
+
+Annie has a circular dartboard with radius 6. A regular hexagon is inscribed in the circle and then a smaller circle is inscribed within this hexagon.
+
+$A$ = The area of the inscribed hexagon.
+
+$B$ = The ratio of the area of the inner circle to the area of the inscribed hexagon.
+
+$C$ = The area of one of the six regions that is bounded by the hexagon and the outer circle.
+
+Find $AB + C$.
+---PAGE_BREAK---
+
+QUESTION 8
+
+An equilateral triangle $\triangle XYZ$ has an inscribed circle with radius 1.
+
+$A$ = The ratio of the area of the inscribed circle to the area of $\triangle XYZ$.
+
+$B$ = The perimeter of the triangle formed by connecting the midpoints of sides $XY$, $YZ$, and $XZ$.
+
+Andrew draws a smaller circle that is tangent to the original inscribed circle and two sides of the triangle as shown below.
+
+$C$ = The length of the radius of the smaller circle.
+
+$D$ = The ratio of the area of the smaller circle to the original inscribed circle.
+
+Find $\frac{ABCD}{\pi}$
+---PAGE_BREAK---
+
+QUESTION 9
+
+Steve draws a triangle with vertices (1, -1), (3, 3), and (6, -3) on a coordinate plane. Meanwhile, Jeremy draws a pentagon with vertices (2, 1), (4, 1), (6, 3), (4, 5) and (1, 4) on the same coordinate plane.
+
+A = The area of the intersection of the triangle and the pentagon.
+
+B = The area of the union of the triangle and the pentagon.
+
+C = The slope of the median drawn from point (6, -3) on Steve's triangle.
+
+D = The sum of the abscissas of the new pentagon formed if Jeremy flips the original pentagon about the y-axis.
+
+Find A + B + C + D.
+---PAGE_BREAK---
+
+QUESTION 10
+
+The following cube has a side length of 6. X is the midpoint of DH, and Y is the midpoint of CG.
+
+If the cube is sliced by a plane from AB to XY, then let
+
+$A$ = The sum of the total surface areas of the two resulting parts.
+
+$B$ = The positive difference of the volumes of the two resulting parts.
+
+$C$ = The length of diagonal $BX$.
+
+Find $A + B + C$.
+---PAGE_BREAK---
+
+QUESTION 11
+
+Pappus' Theorem states that the volume of the solid created by revolving a 2-D shape about a given line is $V = A(D_C)$ where V is the volume of revolution, A is the area of the 2-D shape, and $D_C$ is the distance that the centroid, or center, travels during the revolution.
+
+Suppose the circle $x^2 + y^2 = 16$ is revolved 360° around the line $y = -6$. Then, let
+
+$A$ = The shortest distance between the circle and the line.
+
+$B$ = The volume of the torus (doughnut) formed by the revolved circle.
+
+Suppose the triangle with vertices (1, 0), (3, 3), and (5, 0) is revolved 360° about the line $x = 0$. Then, let
+
+$C$ = The area of the triangle.
+
+$D$ = The volume of the solid formed when the triangle is revolved about the line.
+
+Find $A + \frac{B+D}{C\pi}$.
+---PAGE_BREAK---
+
+QUESTION 12
+
+An isosceles triangle has legs of length 10 and a base of length 12. A point P is on the base of the triangle such that it is a distance of 5 from one endpoint of the base and 7 from the other. Find the sum of the distances from P to each of the legs of the triangle.
+---PAGE_BREAK---
+
+QUESTION 13
+
+Right triangle MNO has hypotenuse MN. Let P be the point on MN such that OP is perpendicular to MN and let Q be the point on ON such that PQ is perpendicular to ON. If OQ = 4 and QN = 9, find the area of triangle MNO.
+---PAGE_BREAK---
+
+QUESTION 14
+
+Let
+
+$$A = \sin(45^\circ)$$
+
+$$B = \tan(30^\circ)$$
+
+$$C = \cos(60^\circ)$$
+
+Find $ABC$.
\ No newline at end of file
diff --git a/samples/texts_merged/3339200.md b/samples/texts_merged/3339200.md
new file mode 100644
index 0000000000000000000000000000000000000000..74d826846d80389344985fecb1331abf1aa97fd3
--- /dev/null
+++ b/samples/texts_merged/3339200.md
@@ -0,0 +1,22 @@
+
+---PAGE_BREAK---
+
+For each of the following graphs:
+
+a) Label the vertex.
+
+b) Draw and label the axis of symmetry.
+
+c) Label the x-intercept(s).
+
+d) Label the y-intercept.
+
+e) State the minimum/maximum value.
+
+1) $y = \frac{1}{2}x^2 - 2x + 5$
+
+2) $y = -2x^2 - 8x - 5$
+
+3) $y = -x^2 + 2x - 4$
+
+4) $y = 2x^2 - 4x$
\ No newline at end of file
diff --git a/samples/texts_merged/3352155.md b/samples/texts_merged/3352155.md
new file mode 100644
index 0000000000000000000000000000000000000000..edd0c7be2ffd7a4457c56df5b2d37305ad3a6d9b
--- /dev/null
+++ b/samples/texts_merged/3352155.md
@@ -0,0 +1,601 @@
+
+---PAGE_BREAK---
+
+ON ENTROPY AND HAUSDORFF DIMENSION OF
+MEASURES DEFINED THROUGH A
+NON-HOMOGENEOUS MARKOV PROCESS
+
+BY
+
+ATHANASIOS BATAKIS (Orléans)
+
+**Abstract.** We study the Hausdorff dimension of measures whose weight distribution satisfies a non-homogeneous Markov property. We prove, in particular, that the Hausdorff dimensions of measures of this kind coincide with their lower Rényi dimensions (entropy). Moreover, we show that the packing dimensions equal the upper Rényi dimensions. As an application we obtain a continuity property of the Hausdorff dimension of the measures, viewed as a function of the distributed weights under the $\ell^\infty$ norm.
+
+**1. Introduction.** Let us consider the dyadic tree (even though all the results in this paper can be easily generalised to any $\ell$-adic structure, $\ell \in \mathbb{N}$), let $K$ be its limit (Cantor) set and denote by $(\mathcal{F}_n)_{n \in \mathbb{N}}$ the associated filtration with the usual 0-1 encoding.
+
+We are interested in Borel measures $\mu$ on $K$ constructed in the following way: Take a sequence $(p_n, q_n)_{n \in \mathbb{N}}$ of couples of real numbers satisfying $0 \le p_n, q_n \le 1$. Let $I = I_{\varepsilon_1 \dots \varepsilon_n}$ be a cylinder of the $n$th generation, $J = I_{\varepsilon_{n+1}}$ a cylinder of the first generation and $IJ = I_{\varepsilon_1 \dots \varepsilon_n \varepsilon_{n+1}}$ the subcylinder of $I$ of the $(n+1)$th generation, where $\varepsilon_1, \dots, \varepsilon_n, \varepsilon_{n+1} \in \{0, 1\}$. The mass distribution of $\mu|_I$ will be as follows: $\mu(I_0) = p_0$, $\mu(I_1) = 1 - p_0$ and
+
+$$ (1) \quad \frac{\mu(IJ)}{\mu(I)} = \begin{cases} p_n \mathbf{1}_{\{\varepsilon_{n+1}=0\}} + (1-p_n)\mathbf{1}_{\{\varepsilon_{n+1}=1\}} & \text{if } \varepsilon_n=0, \\ q_n \mathbf{1}_{\{\varepsilon_{n+1}=0\}} + (1-q_n)\mathbf{1}_{\{\varepsilon_{n+1}=1\}} & \text{if } \varepsilon_n=1, \end{cases} $$
+
+where the extreme case $\mu(I) = 0$ (and hence $\mu(IJ) = 0$) is treated in the same way by convention.
+
+We use the notation $\dim_\mathcal{H}$ for the Hausdorff dimension and $\dim_P$ for the packing dimension.
+
+**Definition 1.1.** If $\mu$ is a measure on $K$, we will denote by $h_*(\mu)$ its lower entropy:
+
+$$ h_*(\mu) = \liminf_{n \to \infty} \frac{-1}{n} \sum_{I \in \mathcal{F}_n} \log \mu(I) \cdot \mu(I), $$
+
+2000 Mathematics Subject Classification: 28A78, 28A80, 60J60.
+
+**Key words and phrases:** Hausdorff and packing dimensions, entropy, non-homogeneous Markov processes.
+---PAGE_BREAK---
+
+by $h^*(\mu)$ its upper entropy:
+
+$$h^*(\mu) = \limsup_{n \to \infty} \frac{-1}{n} \sum_{I \in \mathcal{F}_n} \log \mu(I) \cdot \mu(I),$$
+
+by $\dim_*(\mu)$ its lower Hausdorff dimension:
+
+$$\dim_*(\mu) = \inf\{\dim_\mathcal{H} E : E \subset \mathbb{K} \text{ and } \mu(E) > 0\},$$
+
+and by $\dim^*(\mu)$ its upper Hausdorff dimension:
+
+$$\dim^*(\mu) = \inf\{\dim_\mathcal{H} E : E \subset \mathbb{K} \text{ and } \mu(\mathbb{K} \setminus E) = 0\}.$$
+
+In the same way we define the lower packing dimension of $\mu$:
+
+$$\mathrm{Dim}_*(\mu) = \inf\{\dim_P E : E \subset \mathbb{K} \text{ and } \mu(E) > 0\},$$
+
+and the *upper packing dimension* of $\mu$:
+
+$$\mathrm{Dim}^*(\mu) = \inf\{\dim_P E : E \subset \mathbb{K} \text{ and } \mu(\mathbb{K} \setminus E) = 0\}.$$
+
+One can show (see [Bat02], [BH02]) that
+
+$$\dim_*(\mu) \le h_*(\mu) \le h^*(\mu) \le \mathrm{Dim}^*(\mu),$$
+
+and there are examples of these inequalities being strict, even when the measure $\mu$ is rather “regular”.
+
+It is also well known (cf. [Fal97], [Bil65], [Mat95], [Fan94], [You82], [Rén70] and [Heu98]) that
+
+$$\dim_*(\mu) = \operatorname*{ess\,inf}_\mu \liminf_{n \to \infty} \frac{\log \mu(I_n(x))}{-n \log 2}$$
+
+and
+
+$$\dim^*(\mu) = \operatorname*{ess\,sup}_\mu \liminf_{n \to \infty} \frac{\log \mu(I_n(x))}{-n \log 2},$$
+
+where $I_n(x)$ is the dyadic cylinder of the $n$th generation containing $x$, and the essential infimum and essential supremum are taken with respect to $\mu$, over $x \in \mathbb{K}$.
+
+Whenever $\mu$ is a shift-invariant and ergodic measure, it is well known
+that all limits exist and
+
+$$\lim_{n \to \infty} \frac{\log \mu(I_n(x))}{-n \log 2} = h_*(\mu) = h^*(\mu),$$
+
+which is the *Breiman–Shannon–McMillan formula*. This is also valid in several random settings (see for instance [Nas87], [Kah87], [KP76] and [Heu03]) and for products of Bernoulli measures (cf. [Bil65]).
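For instance (a standard worked special case of ours, not taken from the text): if $p_n = q_n = p$ for every $n$, the transition law (1) forgets the previous digit and $\mu$ is the Bernoulli product measure; the strong law of large numbers applied to the digits of $x$ gives, for $\mu$-almost every $x$,

```latex
\lim_{n \to \infty} \frac{\log \mu(I_n(x))}{-n \log 2}
  \;=\; \frac{-\bigl(p \log p + (1-p) \log (1-p)\bigr)}{\log 2},
```

so the limit exists and the Hausdorff and packing dimensions of $\mu$ all coincide with this common value, in agreement with Theorem 1.2.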
+
+In the case of measures defined by (1) we can use tools developed in
+[Bat96] and [Bat00] to prove they are *exact*, i.e. that $\dim_*(\mu) = \dim^*(\mu)$ or
+---PAGE_BREAK---
+
+equivalently that
+
+$$ \liminf_{n \to \infty} \frac{\log \mu(I_n(x))}{-n \log 2} = \dim_*(\mu) \quad \text{for } \mu\text{-almost all } x \in \mathbb{K}. $$
+
+This is, for instance, the case of harmonic measure on homogeneous Cantor sets and on limit sets of a large class of iterated function systems, like the ones considered in the articles mentioned above. Nevertheless, some kind of shift-invariance is needed in replacement of the Markov condition proposed in this work. In Theorem 1.2 we prove that $\dim_*(\mu) = \dim^*(\mu)$.
+
+In general, there is no trivial inequality between $h_*(\mu)$ and $\dim^*(\mu)$. Furthermore, it is easy to construct measures $\mu$ satisfying (1) such that $h_*(\mu) \neq h^*(\mu)$, which shows that the sequence of functions $\frac{\log \mu(I_n(x))}{-n \log 2}$ does not necessarily converge (in any space).
+
+The proof of Theorem 1.2 implies that there is a sequence $(c_n)_{n \in \mathbb{N}}$ of real numbers such that
+
+$$ \lim_{n \to \infty} \left[ \frac{\log \mu(I_n(x))}{-n \log 2} - c_n \right] = 0, $$
+
+where
+
+$$ c_n = \frac{-1}{n \log 2} \sum_{I \in \mathcal{F}_n} \log(\mu(I))\mu(I). $$
+
+This can be seen as a Breiman-Shannon-McMillan type theorem generalised to measures defined through non-homogeneous Markov chains.
+
+Note that the tools of [KP76] and [Kah87] can be applied to give the same results for “almost every” measure $\mu$ satisfying (1). Other results in this sense involving colouring of graphs are proposed in [Nas87].
+
+A. Bisbas and C. Karanikas [BK94] have already partially proved the conclusions of Theorem 1.2 under some assumptions on the sequences $(p_n, q_n)_{n \in \mathbb{N}}$. In particular they proved the theorem when the sequences $(p_n, q_n)_{n \in \mathbb{N}}$ are uniformly bounded away from 0 and 1, which is the case of a perturbation of a homogeneous Markov chain. We thank A. Bisbas for informing us about that article.
+
+**THEOREM 1.2.** If $\mu$ satisfies (1) then
+
+$$ \dim_*(\mu) = \dim^*(\mu) = h_*(\mu) \quad \text{and} \quad \mathrm{Dim}_*(\mu) = \mathrm{Dim}^*(\mu) = h^*(\mu). $$
+
+Using the same type of argument we also obtain the following continuity result.
+
+**THEOREM 1.3.** Let $\mu$ and $\mu'$ be measures defined by (1) and the respective sequences $(p_n, q_n)_{n \in \mathbb{N}}$ and $(p'_n, q'_n)_{n \in \mathbb{N}}$. Then $|\dim_*(\mu) - \dim_*(\mu')|$ and $|\mathrm{Dim}_*(\mu) - \mathrm{Dim}_*(\mu')|$ go to 0 as $\|(p_n, q_n)_{n \in \mathbb{N}} - (p'_n, q'_n)_{n \in \mathbb{N}}\|_\infty$ tends to 0.
+---PAGE_BREAK---
+
+**2. Lemmas and preliminary results.** Let us introduce some notation: for $p \in [0, 1]$ we define
+
+$$h(p) = p \log p + (1-p) \log(1-p)$$
+
+and if $I = I_{\varepsilon_1 \dots \varepsilon_{n-1}} \in \mathcal{F}_{n-1}$, we also set
+
+$$\gamma(I, n) = \sum_{i=0,1} \log \left( \frac{\mu(II_i)}{\mu(I)} \right) \frac{\mu(II_i)}{\mu(I)}.$$
+
+Note that $\gamma(I,n) = E_I(X_n)$ in the notation of [Chu01, Section 9.1, p. 295]. We also remark that for $n \in \mathbb{N}$ and $I \in \mathcal{F}_{n-1}$, $\gamma(I,n)$ is equal to $h(p_n)$ if $\varepsilon_{n-1} = 0$ and to $h(q_n)$ if $\varepsilon_{n-1} = 1$ and therefore $|\gamma(I,n)| \le \log 2$.
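The bound $|\gamma(I, n)| \le \log 2$ used above follows from a one-line estimate on $h$ (our remark): $h$ is convex on $[0,1]$, vanishes at the endpoints, and is minimal at $p = \tfrac12$, so

```latex
-\log 2 \;=\; h\bigl(\tfrac{1}{2}\bigr) \;\le\; h(p) \;\le\; 0
\qquad \text{for all } p \in [0,1],
```

and $\gamma(I, n)$ equals $h(p_n)$ or $h(q_n)$, hence lies in $[-\log 2, 0]$.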
+
+Let us start with the following easy lemma.
+
+LEMMA 2.1. For all $n, k \in \mathbb{N}$ and all $I \in \mathcal{F}_{n-1}$,
+
+$$
+\begin{align*}
+& (2) \quad \sum_{K \in \mathcal{F}_k} \log \left( \frac{\mu(IK)}{\mu(I)} \right) \frac{\mu(IK)}{\mu(I)} \\
+& \qquad = \gamma(I, n) + \sum_{i=0,1} \frac{\mu(II_i)}{\mu(I)} \sum_{K \in \mathcal{F}_{k-1}} \log \left( \frac{\mu(II_i K)}{\mu(II_i)} \right) \frac{\mu(II_i K)}{\mu(II_i)}.
+\end{align*}
+$$
+
+where $I_0$ and $I_1$ are the two cylinders of the first generation. Furthermore, if we set
+
+$$a_n^k(I) = \sum_{K \in \mathcal{F}_{k-1}} \log \left( \frac{\mu(II_0 K)}{\mu(II_0)} \right) \frac{\mu(II_0 K)}{\mu(II_0)},$$
+
+$$b_n^k(I) = \sum_{K \in \mathcal{F}_{k-1}} \log \left( \frac{\mu(II_1 K)}{\mu(II_1)} \right) \frac{\mu(II_1 K)}{\mu(II_1)},$$
+
+then $a_n^k(I) = a_n^k(I')$ and $b_n^k(I) = b_n^k(I')$ for all $I, I' \in \mathcal{F}_n$.
+
+*Proof.* We have
+
+$$
+\begin{align*}
+& (3) \quad \sum_{K \in \mathcal{F}_k} \log \left( \frac{\mu(IK)}{\mu(I)} \right) \frac{\mu(IK)}{\mu(I)} \\
+&= \sum_{i=0,1} \sum_{K \in \mathcal{F}_{k-1}} \log \left( \frac{\mu(II_i K)}{\mu(I)} \right) \frac{\mu(II_i K)}{\mu(I)} \\
+&= \sum_{i=0,1} \sum_{K \in \mathcal{F}_{k-1}} \log \left( \frac{\mu(II_i K)}{\mu(II_i)} \right) \frac{\mu(II_i K)}{\mu(I)} + \sum_{i=0,1} \log \left( \frac{\mu(II_i)}{\mu(I)} \right) \frac{\mu(II_i)}{\mu(I)}.
+\end{align*}
+$$
+
+Since we have set
+
+$$\gamma(I, n) = \sum_{i=0,1} \log\left(\frac{\mu(II_i)}{\mu(I)}\right) \frac{\mu(II_i)}{\mu(I)},$$
+---PAGE_BREAK---
+
+the equalities (3) give
+
+$$
+\sum_{K \in \mathcal{F}_k} \log \left( \frac{\mu(IK)}{\mu(I)} \right) \frac{\mu(IK)}{\mu(I)} \\
+= \gamma(I, n) + \sum_{i=0,1} \frac{\mu(II_i)}{\mu(I)} \sum_{K \in \mathcal{F}_{k-1}} \log \left( \frac{\mu(II_i K)}{\mu(II_i)} \right) \frac{\mu(II_i K)}{\mu(II_i)}.
+$$
+
+It is immediate that $0 \le -\gamma(I,n) \le \log 2$. By the construction of the measure $\mu$, the quantities $a_n^k(I)$ and $b_n^k(I)$ do not depend on the cylinder $I$ but only on the cylinder's generation $n$, and this ends the proof. ■
+
+**REMARK 2.2.** Since the quantities $a_n^k(I)$ and $b_n^k(I)$ depend only on the generation of $I$ and on $k$, we can write $a_n^k = a_n^k(I)$ and $b_n^k = b_n^k(I)$ for $I \in \mathcal{F}_n$. We also set $\Delta_n^k = |a_n^k - b_n^k|/k$.
+
+The following lemma is easy to prove but helps to clarify the proof.
+
+LEMMA 2.3. Take $\epsilon > 0$. There exists $\zeta > 0$ such that for all $p, q \in [0, 1]$ we have either $|h(p) - h(q)| \le \epsilon/2$ or $|p - q| < 1 - \zeta$. For all $k > k_0 = [4(\log 2)/\zeta\epsilon]$ and all $\alpha > \epsilon/2$,
+
+$$
+\frac{|h(p) - h(q)|}{k} + |p - q| \left(1 - \frac{1}{k}\right) \alpha < \left(1 - \frac{1}{2k}\right) \alpha,
+$$
+
+and hence, for all $\alpha > 0$,
+
+$$
+\frac{|h(p) - h(q)|}{k} + |p - q| \left(1 - \frac{1}{k}\right) \alpha < \min \left\{ \varepsilon, \left(1 - \frac{1}{2k}\right) \alpha \right\}.
+$$
+
+The proof is elementary and therefore omitted. In the following we will
+denote by $k_0$ the positive integer defined in the previous lemma.
+
+PROPOSITION 2.4. Let $I, I'$ be two cylinders of the nth generation. Then
+
+$$
+\frac{1}{k} \left| \sum_{K \in \mathcal{F}_k} \log\left(\frac{\mu(IK)}{\mu(I)}\right) \frac{\mu(IK)}{\mu(I)} - \sum_{K \in \mathcal{F}_k} \log\left(\frac{\mu(I'K)}{\mu(I')}\right) \frac{\mu(I'K)}{\mu(I')} \right| < \eta(k)
+$$
+
+where $\eta$ is a positive function, not depending on $n$, such that $\eta(k)$ goes to 0
+as $k$ tends to $\infty$.
+
+*Proof.* Take any two cylinders $I = I_{\varepsilon_1 \dots \varepsilon_n}$, $I' = I_{\varepsilon'_1 \dots \varepsilon'_n}$ of the $n$th generation. If $\varepsilon_n = \varepsilon'_n$ then by the definition of the measure $\mu$ we get
+
+$$
+\frac{1}{k} \left| \sum_{K \in \mathcal{F}_k} \log\left(\frac{\mu(IK)}{\mu(I)}\right) \frac{\mu(IK)}{\mu(I)} - \sum_{K \in \mathcal{F}_k} \log\left(\frac{\mu(I'K)}{\mu(I')}\right) \frac{\mu(I'K)}{\mu(I')} \right| = 0.
+$$
+---PAGE_BREAK---
+
+If $\epsilon_n \neq \epsilon'_n$, using Lemma 2.1 and the notation therein we obtain
+
+$$
+\begin{align*}
+(4) \quad \Delta_{n-1}^{k+1} &= \left| \frac{1}{k+1} \sum_{K \in \mathcal{F}_{k+1}} \log \left( \frac{\mu(IK)}{\mu(I)} \right) \frac{\mu(IK)}{\mu(I)} \right. \\
+&\qquad \left. - \frac{1}{k+1} \sum_{K \in \mathcal{F}_{k+1}} \log \left( \frac{\mu(I'K)}{\mu(I')} \right) \frac{\mu(I'K)}{\mu(I')} \right| \\
+&= \left| \frac{\gamma(I, n) - \gamma(I', n)}{k+1} + \frac{1}{k+1} \frac{\mu(II_0)}{\mu(I)} \sum_{K \in \mathcal{F}_k} \log \left( \frac{\mu(II_0 K)}{\mu(II_0)} \right) \frac{\mu(II_0 K)}{\mu(II_0)} \right. \\
+&\qquad + \frac{1}{k+1} \frac{\mu(II_1)}{\mu(I)} \sum_{K \in \mathcal{F}_k} \log \left( \frac{\mu(II_1 K)}{\mu(II_1)} \right) \frac{\mu(II_1 K)}{\mu(II_1)} \\
+&\qquad - \frac{1}{k+1} \frac{\mu(I'I_0)}{\mu(I')} \sum_{K \in \mathcal{F}_k} \log \left( \frac{\mu(I'I_0 K)}{\mu(I'I_0)} \right) \frac{\mu(I'I_0 K)}{\mu(I'I_0)} \\
+&\qquad - \frac{1}{k+1} \frac{\mu(I'I_1)}{\mu(I')} \sum_{K \in \mathcal{F}_k} \log \left( \frac{\mu(I'I_1 K)}{\mu(I'I_1)} \right) \frac{\mu(I'I_1 K)}{\mu(I'I_1)} \\
+&= \left| \frac{h(p_n) - h(q_n)}{k+1} + \frac{1}{k+1} \left( \left( \frac{\mu(II_0)}{\mu(I)} - \frac{\mu(I'I_0)}{\mu(I')} \right) a_n^k + \left( \frac{\mu(II_1)}{\mu(I)} - \frac{\mu(I'I_1)}{\mu(I')} \right) b_n^k \right) \right| \\
+&\leq \frac{|h(p_n) - h(q_n)|}{k+1} + \left| \frac{1}{k+1} \left( \frac{\mu(II_0)}{\mu(I)} - \frac{\mu(I'I_0)}{\mu(I')} \right) (a_n^k - b_n^k) \right|.
+\end{align*}
+$$
+
+We can rewrite (4) in the following way:
+
+$$
+\frac{|a_{n-1}^{k+1} - b_{n-1}^{k+1}|}{k+1} \leq \frac{|h(p_n) - h(q_n)|}{k+1} + |p_n - q_n| \frac{|a_n^k - b_n^k|}{k} \left(1 - \frac{1}{k+1}\right)
+$$
+
+and thus,
+
+$$
+(5) \quad \Delta_{n-1}^{k+1} \leq \frac{|h(p_n) - h(q_n)|}{k+1} + |p_n - q_n| \left(1 - \frac{1}{k+1}\right) \Delta_n^k.
+$$
+
+Take $\epsilon > 0$. By Lemma 2.3, for $k \ge k_0$ we have
+
+$$
+(6) \quad \Delta_{n-1}^{k+1} \leq \min \left\{ \varepsilon, \left( 1 - \frac{1}{2(k+1)} \right) \Delta_n^k \right\}.
+$$
+
+We use a recursion argument to finish the proof of the lemma. First observe
+that if for some $\ell \in \{1, \dots, k-k_0\}$ we have
+
+$$
+(7) \qquad \Delta_{n+\ell}^{k-\ell} < \varepsilon
+$$
+
+then we will also have
+---PAGE_BREAK---
+
+$$ \Delta_{n+\ell-1}^{k-\ell+1} < \min \left\{ \varepsilon, \left(1 - \frac{1}{2(k+1)}\right) \Delta_{n+\ell}^{k-\ell} \right\} \le \varepsilon $$
+
+by (6), and therefore $\Delta_n^k < \varepsilon$.
+
+On the other hand, if (7) does not hold for any $\ell \in \{1, \dots, k-k_0\}$ then by (6) we get
+
+$$ \Delta_{n+\ell-1}^{k-\ell+1} \le \left(1 - \frac{1}{2(k-\ell+1)}\right) \Delta_{n+\ell}^{k-\ell} $$
+
+and finally
+
+$$ (8) \qquad \Delta_n^k \le \prod_{\ell=k_0}^{k+1} \left(1 - \frac{1}{2(\ell+1)}\right) \log 2, $$
+
+which becomes strictly smaller than $\varepsilon$ if $k$ is large enough, and the proof is complete. ■
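+
+The decay of the product in (8) can be made explicit by a standard estimate (we sketch it for completeness): using $1 - t \leq e^{-t}$ and comparing the harmonic sum with an integral,
+
+$$ \prod_{\ell=k_0}^{k+1} \left(1 - \frac{1}{2(\ell+1)}\right) \leq \exp\left(-\frac{1}{2} \sum_{\ell=k_0}^{k+1} \frac{1}{\ell+1}\right) \leq \exp\left(-\frac{1}{2} \log \frac{k+3}{k_0+1}\right) = \sqrt{\frac{k_0+1}{k+3}}, $$
+
+which tends to $0$ as $k \to \infty$ while $k_0$ stays fixed.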
+
+We will also use the following two theorems of [BH02] that we include without proof for the convenience of the reader (a direct proof—without using these theorems—is possible but much longer).
+
+**THEOREM 2.5 ([BH02]).** Let $m$ be a probability measure on $[0, 1)^D$ equipped with the filtration of $\ell$-adic cubes, $\ell \in \mathbb{N}$. Then
+
+$$ \dim_*(m) \le h_*(m). $$
+
+Moreover, the following properties are equivalent:
+
+1. $\dim_*(m) = h_*(m)$.
+
+2. $\dim_*(m) = \dim^*(m) = h_*(m)$.
+
+3. There exists a subsequence $(n_k)_{k \in \mathbb{N}}$ such that for $m$-almost every $x \in [0, 1)^D$,
+
+$$ \lim_{k \to \infty} \frac{\log m(I_{n_k}(x))}{-n_k \log \ell} = \dim_*(m). $$
+
+**THEOREM 2.6 ([BH02]).** We also have
+
+$$ h^*(m) \le \mathrm{Dim}^*(m), $$
+
+and the following properties are equivalent:
+
+1. $\dim^*(m) = h^*(m)$.
+
+2. $\dim_*(m) = \dim^*(m) = h^*(m)$.
+
+3. There exists a subsequence $(n_k)_{k \in \mathbb{N}}$ such that for $m$-almost every $x \in [0, 1)^D$,
+
+$$ \lim_{k \to \infty} \frac{\log m(I_{n_k}(x))}{-n_k \log \ell} = \dim^*(m). $$
+
+**3. Proofs of the theorems.** To prove Theorem 1.2 we will use the following strong law of large numbers (cf. [HH80]).
+---PAGE_BREAK---
+
+**THEOREM 3.1 (Law of Large Numbers).** Let $(X_n)_{n \in \mathbb{N}}$ be a sequence of real random variables uniformly bounded in $\mathcal{L}^2$ on a probability space $(\mathcal{X}, \mathcal{B}, P)$ and let $(\mathcal{F}_n)_{n \in \mathbb{N}}$ be an increasing sequence of $\sigma$-subalgebras of $\mathcal{B}$ such that $X_n$ is measurable with respect to $\mathcal{F}_n$ for all $n \in \mathbb{N}$. Then
+
+$$ (9) \quad \lim_{n \to \infty} \frac{1}{n} \sum_{k=1}^{n} (X_k - \mathbb{E}(X_k | \mathcal{F}_{k-1})) = 0 \quad \textit{P-almost surely.} $$
+
+We point out that the assumptions on the random variables are not
+optimal but the result will be sufficient for our goal. The space here is $\mathbb{K}$, the
+filtration will be the dyadic one, and $\mu$ will take the place of the probability
+measure $P$.
+
+*Proof of Theorem 1.2.* Consider the random variables $X_n, n \in \mathbb{N}$, defined on $\mathbb{K}$ by
+
+$$ X_n(x) = \log \frac{\mu(I_n(x))}{\mu(I_{n-1}(x))}, $$
+
+where, for $x \in \mathbb{K}$, we have denoted by $I_n(x)$ the unique element of $\mathcal{F}_n$
+containing $x$. Theorem 3.1 implies that for all positive $p$,
+
+$$ (10) \quad \lim_{n \to \infty} \frac{1}{n+1} \sum_{j=1}^{n} \left( \frac{1}{p} \sum_{k=1}^{p} [X_{jp+k} - \mathbb{E}(X_{jp+k} | \mathcal{F}_{jp})] \right) = 0 \quad \mu\text{-almost surely.} $$
+
+On the other hand, on each $I \in \mathcal{F}_n$ we have
+
+$$ (11) \quad \frac{1}{p} \sum_{k=1}^{p} \mathbb{E}(X_{np+k} | \mathcal{F}_{np}) = \frac{1}{p} \sum_{K \in \mathcal{F}_p} \log \left( \frac{\mu(IK)}{\mu(I)} \right) \frac{\mu(IK)}{\mu(I)}. $$
+
+By Proposition 2.4, for every $\varepsilon > 0$ there exists $p \in \mathbb{N}$ such that for all $n \in \mathbb{N}$ and all $I$ in $\mathcal{F}_{np}$,
+
+$$ (12) \quad \left| \frac{1}{p} \sum_{K \in \mathcal{F}_p} \log \left( \frac{\mu(IK)}{\mu(I)} \right) \frac{\mu(IK)}{\mu(I)} - c_n \right| < \varepsilon, $$
+
+where $c_n = p^{-1}\mathbb{E}\{\sum_{K\in\mathcal{F}_p} \log(\mu(IK)/\mu(I))\}$ is a constant depending only on $n$ and on the chosen $p$ but not on the cylinder $I$ of $\mathcal{F}_n$.
+
+It is also easy to see that the variables $(X_n)_{n \in \mathbb{N}}$ are uniformly bounded in $\mathcal{L}^2(\mu)$. We deduce, using (10) and (11), that for every $\varepsilon > 0$ there exists $p \in \mathbb{N}$ and a sequence $(c_n)_{n \in \mathbb{N}}$ of real numbers such that
+
+$$
+\begin{align*}
+(13) \qquad & -\varepsilon < \liminf_{n\to\infty} \frac{1}{n+1} \sum_{j=1}^{n} \left( \frac{1}{p} \sum_{k=1}^{p} X_{jp+k} - c_j \right) \\
+& \leq \limsup_{n\to\infty} \frac{1}{n+1} \sum_{j=1}^{n} \left( \frac{1}{p} \sum_{k=1}^{p} X_{jp+k} - c_j \right) < \varepsilon
+\end{align*}
+$$
+---PAGE_BREAK---
+
+$\mu$-almost everywhere on $\mathbb{K}$. This implies that
+
+$$
+\begin{align*}
+(14) \quad & \liminf_{n \to \infty} \frac{-1}{n+1} \sum_{j=1}^{n} c_j - \varepsilon < \liminf_{n \to \infty} \frac{-1}{p} \frac{1}{n+1} \sum_{j=1}^{n} \sum_{k=1}^{p} X_{jp+k} \\
+& < \liminf_{n \to \infty} \frac{-1}{n+1} \sum_{j=1}^{n} c_j + \varepsilon
+\end{align*}
+$$
+
+and
+
+$$
+\begin{align*}
+(15) \quad & \limsup_{n \to \infty} \frac{-1}{n+1} \sum_{j=1}^{n} c_j - \varepsilon < \limsup_{n \to \infty} \frac{-1}{p} \frac{1}{n+1} \sum_{j=1}^{n} \sum_{k=1}^{p} X_{jp+k} \\
+& < \limsup_{n \to \infty} \frac{-1}{n+1} \sum_{j=1}^{n} c_j + \varepsilon
+\end{align*}
+$$
+
+$\mu$-almost everywhere on $\mathbb{K}$. If we set
+
+$$
+\underline{c} = \liminf_{n \to \infty} \frac{-1}{(n+1) \log 2} \sum_{j=1}^{n} c_j, \quad \bar{c} = \limsup_{n \to \infty} \frac{-1}{(n+1) \log 2} \sum_{j=1}^{n} c_j,
+$$
+
+we deduce from (14) and (15) that $\dim_*(\mu) = \underline{c}$ and $\mathrm{Dim}_*(\mu) = \bar{c}$.
+
+Furthermore, the inequalities (13) imply that for every positive $\epsilon$ there is a strictly increasing sequence $(n_l)_{l \in \mathbb{N}}$ of natural numbers satisfying
+
+$$
+\begin{align*}
+-\epsilon < & \liminf_{l \to \infty} \frac{-1}{n_l + 1} \sum_{j=1}^{n_l} \left( \frac{1}{p} \sum_{k=1}^{p} X_{jp+k} \right) - \underline{c} \log 2 \\
+& \leq \limsup_{l \to \infty} \frac{-1}{n_l + 1} \sum_{j=1}^{n_l} \left( \frac{1}{p} \sum_{k=1}^{p} X_{jp+k} \right) - \underline{c} \log 2 < \epsilon
+\end{align*}
+$$
+
+for $\mu$-almost all $x \in \mathbb{K}$. One easily proves (using, for instance, Cantor's diagonal argument) that there exists a strictly increasing sequence $(n_l)_{l \in \mathbb{N}}$ of natural numbers such that
+
+$$
+\lim_{l \to \infty} \frac{-1}{n_l \log 2} \log \mu(I_{n_l}(x)) = \dim_*(\mu) \quad \text{for } \mu\text{-almost all } x \in \mathbb{K}.
+$$
+
+Similarly, there exists a strictly increasing sequence $(\hat{n}_l)_{l \in \mathbb{N}}$ of natural numbers such that
+
+$$
+\lim_{l \to \infty} \frac{-1}{\hat{n}_l \log 2} \log \mu(I_{\hat{n}_l}(x)) = \mathrm{Dim}_*(\mu) \quad \text{for } \mu\text{-almost all } x \in \mathbb{K}.
+$$
+
+We use Theorems 2.5 and 2.6 to finish the proof. $\blacksquare$
+
+To prove Theorem 1.3 we will use Proposition 2.4 and Theorem 3.1.
+
+*Proof of Theorem 1.3.* Take $\varepsilon > 0$ and let $(p_n, q_n)_{n \in \mathbb{N}}$ and $(p'_n, q'_n)_{n \in \mathbb{N}}$ be two sequences of weights satisfying $0 < p_n, q_n, p'_n, q'_n < 1$ for all $n \in \mathbb{N}$ and
+
+$$
+\|(p_n, q_n)_{n \in \mathbb{N}} - (p'_n, q'_n)_{n \in \mathbb{N}}\|_{\infty} < \zeta.
+$$
+---PAGE_BREAK---
+
+We denote by $\mu$ and $\mu'$ the measures corresponding to these two sequences of weights. We will show that
+
+$$|\dim_*(\mu) - \dim_*(\mu')| < \varepsilon,$$
+
+if $\zeta$ is small enough.
+
+It follows from Proposition 2.4 that there exist a natural number $p$ large enough and two sequences $(c_n)_{n \in \mathbb{N}}$, $(c'_n)_{n \in \mathbb{N}}$ of real numbers such that
+
+$$\left| \frac{1}{p} \sum_{K \in \mathcal{F}_p} \log \left( \frac{\mu(IK)}{\mu(I)} \right) \frac{\mu(IK)}{\mu(I)} - c_n \right| < \frac{\varepsilon}{4}$$
+
+and
+
+$$\left| \frac{1}{p} \sum_{K \in \mathcal{F}_p} \log \left( \frac{\mu'(IK)}{\mu'(I)} \right) \frac{\mu'(IK)}{\mu'(I)} - c'_n \right| < \frac{\varepsilon}{4}$$
+
+for all cylinders $I \in \mathcal{F}_{np}$ and all $n \in \mathbb{N}$. Since $p$ is a fixed finite number it suffices to take $\zeta$ small in order to have
+
+$$\left| \frac{1}{p} \sum_{K \in \mathcal{F}_p} \log\left(\frac{\mu(IK)}{\mu(I)}\right) \frac{\mu(IK)}{\mu(I)} - \frac{1}{p} \sum_{K \in \mathcal{F}_p} \log\left(\frac{\mu'(IK)}{\mu'(I)}\right) \frac{\mu'(IK)}{\mu'(I)} \right| < \frac{\varepsilon}{2}$$
+
+for all $I \in \mathcal{F}_{np}$ and all $n \in \mathbb{N}$. Hence,
+
+$$-\varepsilon < \liminf_{n \to \infty} \frac{1}{n+1} \sum_{j=1}^{n} |c_j - c'_j| \leq \limsup_{n \to \infty} \frac{1}{n+1} \sum_{j=1}^{n} |c_j - c'_j| < \varepsilon.$$
+
+Now we deduce from (14) and (15) that $|\dim_*(\mu) - \dim_*(\mu')| < \varepsilon$ and $|\text{Dim}_*(\mu) - \text{Dim}_*(\mu')| < \varepsilon$, which completes the proof. $\blacksquare$
+
+The hypothesis on the Markovian structure of the measures $\mu$ and $\mu'$ cannot be omitted, as we show in the following section.
+
+**4. A counterexample.** For every $\varepsilon > 0$ we construct two dyadic doubling measures $\mu$ and $\nu$ on $\mathbb{K}$ such that if
+
+$$X_n(x) = \log \frac{\mu(I_n(x))}{\mu(I_{n-1}(x))}, \quad Y_n(x) = \log \frac{\nu(I_n(x))}{\nu(I_{n-1}(x))}, \quad n \in \mathbb{N},$$
+
+then
+
+$$ (16) \qquad \sup_{n \in \mathbb{N}} \|X_n - Y_n\|_{L^\infty} < \varepsilon $$
+
+and, nevertheless, $|\dim_*(\mu) - \dim_*(\nu)| > 1/4$. A first example was proposed to us by Professor Alano Ancona; the proof provided here is of a similar nature.
+
+The construction is carried out in two stages. We fix two Bernoulli measures satisfying (16) and use a recursive procedure to modify them so that the resulting dimensions become very different.
+---PAGE_BREAK---
+
+For $I \in \mathcal{F}_n$ we denote by $\hat{I}$ the unique cylinder of the $(n-1)$th generation $\mathcal{F}_{n-1}$ containing $I$. Relation (16) can now be reformulated as
+
+$$ (17) \quad \left| \frac{\mu(I)/\mu(\hat{I})}{\nu(I)/\nu(\hat{I})} - 1 \right| < \varepsilon \quad \text{for all cylinders } I \text{ of } \bigcup_{n \in \mathbb{N}} \mathcal{F}_n. $$
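+
+Let us indicate, for the reader's convenience, why (16) and (17) express the same condition, up to replacing $\varepsilon$ by a comparable quantity. For $I = I_n(x)$ we have
+
+$$ X_n(x) - Y_n(x) = \log \left( \frac{\mu(I)/\mu(\hat{I})}{\nu(I)/\nu(\hat{I})} \right), $$
+
+so $\sup_n \|X_n - Y_n\|_{L^\infty} < \varepsilon$ means exactly that the ratio above lies in $(e^{-\varepsilon}, e^{\varepsilon})$ for all cylinders, and $e^{\pm\varepsilon} - 1 = \pm\varepsilon + O(\varepsilon^2)$ as $\varepsilon \to 0$.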
+
+*The starting point.* Take $\varepsilon > 0$ and let $\lambda_0$ be the Lebesgue (uniform) measure (of dimension 1) on $\mathbb{K}$.
+
+Consider the Bernoulli measure $\varrho_0$ of weight $1/2 - \varepsilon$, i.e. such that for $I \in \mathcal{F}_n, n \in \mathbb{N}$,
+
+$$ (18) \quad \varrho_0(II_0) = (1/2 - \varepsilon)\varrho_0(I), \quad \varrho_0(II_1) = (1/2 + \varepsilon)\varrho_0(I). $$
+
+Put $\mu_0 = \lambda_0$ and $\nu_0 = \varrho_0$. By construction the measures $\lambda_0$ and $\varrho_0$ satisfy condition (16), are exact and doubling on the dyadics. Moreover, we have
+
+$$ \dim \varrho_0 = \frac{h_*(\varrho_0)}{\log 2} = -\frac{1/2 - \varepsilon}{\log 2} \log\left(\frac{1}{2} - \varepsilon\right) - \frac{1/2 + \varepsilon}{\log 2} \log\left(\frac{1}{2} + \varepsilon\right). $$
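+
+For small $\varepsilon$ this dimension deficit can be quantified by a routine Taylor expansion of the entropy around the symmetric weight $1/2$:
+
+$$ \dim \varrho_0 = 1 - \frac{2\varepsilon^2}{\log 2} + O(\varepsilon^4), $$
+
+since $-p \log p - (1-p) \log(1-p) = \log 2 - 2\varepsilon^2 + O(\varepsilon^4)$ for $p = 1/2 - \varepsilon$. In particular $\dim \varrho_0 < 1$ for every $\varepsilon \in (0, 1/2)$.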
+
+It is clear that $\lambda_0$ and $\varrho_0$ are mutually singular. Furthermore, by the Shannon–McMillan formula (cf. for instance [Zin97]),
+
+$$ \lim_{n \to \infty} \frac{\log \varrho_0(I_n(x))}{-n} = h_*(\varrho_0) \quad \varrho_0\text{-almost everywhere on } \mathbb{K}. $$
+
+Hence, we can find $n_1 \in \mathbb{N}$ and a partition $\{F_0, F_1\}$ of $\mathcal{F}_{n_1}$ such that:
+
+• $F_0 \cup F_1 = \mathcal{F}_{n_1}$,
+
+• $\left| \dfrac{\log \varrho_0(I)}{n_1} + h_*(\varrho_0) \right| < \varepsilon$ for all $I \in F_1$,
+
+• $\left| \dfrac{\log \lambda_0(I)}{n_1} + \log 2 \right| < \varepsilon$ for all $I \in F_0$,
+
+• $\sum_{I \in F_1} \varrho_0(I) > 1 - \varepsilon$,
+
+• $\sum_{I \in F_0} \lambda_0(I) > 1 - \varepsilon$.
+
+Let us also define the Bernoulli measures $\varrho_1$ and $\lambda_1$ on $\mathbb{K}$ by
+
+$$ (19) \quad \begin{aligned} \varrho_1(I_0) &= \delta, & \varrho_1(I_1) &= 1 - \delta, \\ \lambda_1(I_0) &= \delta(1-\varepsilon), & \lambda_1(I_1) &= 1 - \delta(1-\varepsilon), \end{aligned} $$
+
+where $\delta > 0$ will be fixed later.
+
+*Going on with the construction.* For $I_{i_1...i_n} \subset I \in F_1$ we put
+
+$$ (20) \quad \begin{aligned} \mu_1(I_{i_1...i_n}) &= \mu_0(I_{i_1...i_{n_1}})\lambda_1(I_{i_{n_1+1}...i_n}), \\ \nu_1(I_{i_1...i_n}) &= \nu_0(I_{i_1...i_{n_1}})\varrho_1(I_{i_{n_1+1}...i_n}), \end{aligned} $$
+---PAGE_BREAK---
+
+and for $I_{i_1...i_n} \subset I \in F_0$,
+
+$$ (21) \qquad \mu_1(I_{i_1...i_n}) = \mu_0(I_{i_1...i_n}), \quad \nu_1(I_{i_1...i_n}) = \nu_0(I_{i_1...i_n}). $$
+
+We remark that for $I = I_{i_1...i_n}$ with $n \le n_1$ we have $\mu_1(I) = \mu_0(I)$ and $\nu_1(I) = \nu_0(I)$.
+
+The restrictions of the measures $\mu_1$ and $\nu_1$ to the cylinders of $\mathcal{F}_{n_1} = F_0 \cup F_1$ are Bernoulli measures of different dimensions, so they are mutually singular. Therefore, we can find $n_2 \in \mathbb{N}$ and a partition $\{F_{00}, F_{01}, F_{10}, F_{11}\}$ of $\mathcal{F}_{n_2}$ such that
+
+* $I \in F_{j0} \cup F_{j1}$ if and only if there is $J \in F_j$ such that $I \subset J$, $j \in \{0, 1\}$,
+
+$$ \left| \frac{\log \mu_1(I)}{n_2} + \log 2 \right| < \varepsilon^2 \text{ for all } I \in F_{00}, $$
+
+$$ \left| \frac{\log \nu_1(I)}{n_2} + h_*(\varrho_1) \right| < \varepsilon^2 \text{ for all } I \in F_{11}, $$
+
+* $\sum_{\substack{J \in F_{00} \\ J \subset I}} \mu_1(J) > (1-\varepsilon^2)\mu_1(I)$ and $\sum_{\substack{J \in F_{01} \\ J \subset I}} \nu_1(J) > (1-\varepsilon^2)\nu_1(I)$ for $I \in F_0$,
+
+* $\sum_{\substack{J \in F_{10} \\ J \subset I}} \mu_1(J) > (1-\varepsilon^2)\mu_1(I)$ and $\sum_{\substack{J \in F_{11} \\ J \subset I}} \nu_1(J) > (1-\varepsilon^2)\nu_1(I)$ for $I \in F_1$.
+
+If $I \in F_{00} \cup F_{10}$ and $J \in \bigcup_{n \in \mathbb{N}} \mathcal{F}_n$, we put
+
+$$ \mu_2(IJ) = \mu_1(I)\lambda_0(J), \quad \nu_2(IJ) = \nu_1(I)\varrho_0(J). $$
+
+If $I \in F_{01} \cup F_{11}$ and $J \in \bigcup_{n \in \mathbb{N}} \mathcal{F}_n$ we put
+
+$$ \mu_2(IJ) = \mu_1(I)\lambda_1(J), \quad \nu_2(IJ) = \nu_1(I)\varrho_1(J). $$
+
+Finally, for $I \in \mathcal{F}_n$ with $n \le n_2$, we keep the same mass distribution $\mu_2(I) = \mu_1(I)$ and $\nu_2(I) = \nu_1(I)$.
+
+Suppose the measures $\mu_k$, $\nu_k$ and the partition $\{F_{i_1...i_k} : i_1, ..., i_k \in \{0, 1\}\}$ of $\mathcal{F}_{n_k}$ are constructed. As in the first two stages, the restrictions of the measures $\mu_k$ and $\nu_k$ to each cylinder of $\mathcal{F}_{n_k}$ are Bernoulli measures: either $\lambda_0$ and $\varrho_0$, or $\lambda_1$ and $\varrho_1$, respectively.
+
+The measures $\mu_k$ and $\nu_k$ are mutually singular. Hence, there is $n_{k+1} > n_k$ and a partition $\{F_{i_1...i_{k+1}} : i_1, ..., i_{k+1} \in \{0, 1\}\}$ of $\mathcal{F}_{n_{k+1}}$ satisfying
+
+* for any $i_1, ..., i_k \in \{0, 1\}$, $I \in F_{i_1...i_k 0} \cup F_{i_1...i_k 1}$ if and only if there is $J \in F_{i_1...i_k}$ such that $I \subset J$,
+
+$$ \left| \frac{\log \mu_k(I)}{n_{k+1}} + \log 2 \right| < \varepsilon^{k+1} \text{ for all } I \in F_{i_1...i_{k-1} 00}, $$
+
+$$ \left| \frac{\log \nu_k(I)}{n_{k+1}} + h_*(\varrho_1) \right| < \varepsilon^{k+1} \text{ for all } I \in F_{i_1...i_{k-1} 11}, $$
+---PAGE_BREAK---
+
+$$
+\begin{align*}
+\sum_{\substack{J \in F_{i_1 \dots i_{k-1} 0 0} \\ J \subset I}} \mu_k(J) &> (1 - \varepsilon^{k+1}) \mu_k(I) \text{ and} \\
+\sum_{\substack{J \in F_{i_1 \dots i_{k-1} 0 1} \\ J \subset I}} \nu_k(J) &> (1 - \varepsilon^{k+1}) \nu_k(I) \text{ for all cylinders } I \in F_{i_1 \dots i_{k-1} 0},
+\end{align*}
+$$
+
+$$
+\bullet \quad \sum_{\substack{J \in F_{i_1 \dots i_{k-1} 1 0} \\ J \subset I}} \mu_k(J) > (1 - \varepsilon^{k+1})\mu_k(I) \text{ and}
+$$
+
+$$
+\sum_{\substack{J \in F_{i_1 \dots i_{k-1} 1 1} \\ J \subset I}} \nu_k(J) > (1 - \varepsilon^{k+1}) \nu_k(I) \text{ for all cylinders } I \in F_{i_1 \dots i_{k-1} 1}.
+$$
+
+If $I \in F_{i_1 \dots i_k 0}$, $i_1, \dots, i_k \in \{0, 1\}$, then for all $J \in \bigcup_{n \in \mathbb{N}} \mathcal{F}_n$ we put
+$$
+\mu_{k+1}(IJ) = \mu_k(I)\lambda_0(J), \quad \nu_{k+1}(IJ) = \nu_k(I)\varrho_0(J).
+$$
+
+If $I \in F_{i_1 \dots i_k 1}$, $i_1, \dots, i_k \in \{0, 1\}$, then for all $J \in \bigcup_{n \in \mathbb{N}} \mathcal{F}_n$ we put
+$$
+\mu_{k+1}(IJ) = \mu_k(I)\lambda_1(J), \quad \nu_{k+1}(IJ) = \nu_k(I)\varrho_1(J).
+$$
+
+*Properties of the constructed measures.* It is clear that the sequences $(\mu_n)_{n \in \mathbb{N}}$ and $(\nu_n)_{n \in \mathbb{N}}$ converge to two probability measures $\mu$ and $\nu$, respectively. By construction, $\mu$ and $\nu$ are doubling on the dyadics, exact, and satisfy (16).
+
+On the other hand, clearly $\dim_*(\mu) = 1$, and it is not difficult to see that $\dim_*(\nu) \le 1/2$ if $\delta$ is small enough, since
+
+$$
+\liminf_{n \to \infty} \frac{-\log \nu(I_n(x))}{n \log 2} = \frac{h_*(\varrho_1)}{\log 2} \quad \nu\text{-almost everywhere.}
+$$
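+
+Indeed, $h_*(\varrho_1)$ is just the entropy of the Bernoulli weights $(\delta, 1-\delta)$:
+
+$$ h_*(\varrho_1) = -\delta \log \delta - (1-\delta) \log(1-\delta) \xrightarrow[\delta \to 0]{} 0, $$
+
+so it suffices to choose $\delta$ small enough that $h_*(\varrho_1) \leq \tfrac{1}{2} \log 2$.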
+
+Moreover, the measures $\mu$ and $\nu$ satisfy the conclusion of Theorem 1.2. The counterexample is complete.
+
+**Acknowledgments.** The author would like to thank the referee for having carefully read this work and for very helpful remarks and comments on it.
+
+I would like to dedicate this paper to the memory of Martine Babillot,
+who encouraged and helped me throughout the writing. Martine, we miss
+you.
+
+REFERENCES
+
+[Bat96] A. Batakis, *Harmonic measure of some Cantor type sets*, Ann. Acad. Sci. Fenn. 21 (1996), 255–270.
+
+[Bat00] —, *A continuity property of the dimension of harmonic measure of Cantor sets under perturbations*, Ann. Inst. H. Poincaré Probab. Statist. 36 (2000), 87–107.
+---PAGE_BREAK---
+
+[Bat02] A. Batakis, *Dimensions of measures and remarks on relations between them*, preprint, 2002.
+
+[BH02] A. Batakis and Y. Heurteaux, *On relations between entropy and Hausdorff dimension of measures*, Asian J. Math. 6 (2002), 399–408.
+
+[Bil65] P. Billingsley, *Ergodic Theory and Information*, Wiley, 1965.
+
+[BK94] A. Bisbas and C. Karanikas, *Dimension and entropy of a non-ergodic Markovian process and its relation to Rademacher Riesz products*, Monatsh. Math. 118 (1994), 21–32.
+
+[Chu01] K. L. Chung, *A Course in Probability Theory*, 3rd ed., Academic Press, 2001.
+
+[Fal97] K. Falconer, *Techniques in Fractal Geometry*, Wiley, New York, 1997.
+
+[Fan94] A. H. Fan, *Sur la dimension des mesures*, Studia Math. 111 (1994), 1–17.
+
+[Heu98] Y. Heurteaux, *Estimations de la dimension inférieure et de la dimension supérieure des mesures*, Ann. Inst. H. Poincaré Probab. Statist. 34 (1998), 309–338.
+
+[Heu03] —, *Weierstrass functions with random phases*, Trans. Amer. Math. Soc. 355 (2003), 3065–3077.
+
+[HH80] P. Hall and C. C. Heyde, *Martingale Theory and its Applications*, Academic Press, New York, 1980.
+
+[Kah87] J.-P. Kahane, *Multiplications aléatoires et dimensions de Hausdorff*, Ann. Inst. H. Poincaré 23 (1987), 289–296.
+
+[KP76] J.-P. Kahane and J. Peyrière, *Sur certaines martingales de Benoit Mandelbrot*, Adv. Math. 22 (1976), 131–145.
+
+[Mat95] P. Mattila, *Geometric Measure Theory*, Cambridge Univ. Press, 1995.
+
+[Nas87] F. Ben Nasr, *Mesures aléatoires de Mandelbrot associées à des substitutions*, C. R. Acad. Sci. Paris 304 (1987), 255–258.
+
+[Pey77] J. Peyrière, *Calculs de dimensions de Hausdorff*, Duke Math. J. 44 (1977), 591–601.
+
+[Rén70] A. Rényi, *Probability Theory*, North-Holland, Amsterdam, 1970.
+
+[Wu98] J. M. Wu, *Doubling measures with different bases*, Colloq. Math. 76 (1998), 49–55.
+
+[You82] L. Young, *Dimension, entropy and Lyapounov exponents*, Ergodic Theory Dynam. Systems 2 (1982), 109–124.
+
+[Zin97] M. Zinsmeister, *Formalisme thermodynamique et systèmes dynamiques holomorphes*, Panoramas et synthèses 4, Soc. Math. France, 1997.
+
+MAPMO
+Université d'Orléans
+BP 6759
+45067 Orléans Cedex 2, France
+E-mail: athanasios.batakis@univ-orleans.fr
+
+Received 22 April 2003;
+revised 10 August 2005
+
+(4334)
\ No newline at end of file
diff --git a/samples/texts_merged/337490.md b/samples/texts_merged/337490.md
new file mode 100644
index 0000000000000000000000000000000000000000..f78ddee8b272c3016f4610bd60801225f19508ff
--- /dev/null
+++ b/samples/texts_merged/337490.md
@@ -0,0 +1,1607 @@
+
+---PAGE_BREAK---
+
+# An open microscopic model of heat conduction: evolution and non-equilibrium stationary states
+
+Tomasz Komorowski, Stefano Olla, Marielle Simon
+
+## To cite this version:
+
+Tomasz Komorowski, Stefano Olla, Marielle Simon. An open microscopic model of heat conduction: evolution and non-equilibrium stationary states. Communications in Mathematical Sciences, International Press, 2020, 18 (3), pp.751-780. 10.4310/CMS.2020.v18.n3.a8. hal-02081200v2
+
+HAL Id: hal-02081200
+
+https://hal.inria.fr/hal-02081200v2
+
+Submitted on 9 Sep 2019
+
+---PAGE_BREAK---
+
+AN OPEN MICROSCOPIC MODEL OF HEAT CONDUCTION:
+EVOLUTION AND NON-EQUILIBRIUM STATIONARY STATES
+
+TOMASZ KOMOROWSKI, STEFANO OLLA, AND MARIELLE SIMON
+
+**ABSTRACT.** We consider a one-dimensional chain of coupled oscillators in contact at both ends with heat baths at different temperatures, and subject to an external force at one end. The Hamiltonian dynamics in the bulk is perturbed by random exchanges of neighbouring momenta, such that the energy is locally conserved. We prove that in the stationary state the energy and volume stretch profiles converge, in the large scale limit, to the solutions of a diffusive system with Dirichlet boundary conditions. As a consequence, the macroscopic stationary temperature profile attains a maximum inside the chain, higher than the temperatures of the thermostats, and uphill diffusion (an energy current flowing against the temperature gradient) becomes possible. Finally, we also derive the non-stationary macroscopic coupled diffusive equations followed by the energy and volume stretch profiles.
+
+# 1. INTRODUCTION
+
+Non-equilibrium transport in one dimension is an interesting phenomenon, and in many models numerical simulations can be performed easily. Most of the attention has focused on the study of non-equilibrium stationary states (NESS), where the system is subject to exterior heat baths at different temperatures and to other external forces, so that the invariant measure is not the equilibrium Gibbs measure.
+
+The most interesting models are those with various conserved quantities (energy, momentum, volume stretch...) whose transport is coupled. The densities of these quantities may evolve at different time scales, particularly when the spatial dimension of the system equals one. For example, in the Fermi-Pasta-Ulam (FPU) chain, volume stretch, mechanical energy and momentum all evolve in the hyperbolic time scale. Their evolution is governed by the Euler equations (see [8]) while the thermal energy is expected to evolve at a superdiffusive time scale, with
+
+2010 Mathematics Subject Classification. 82C70, 60K35.
+
+Key words and phrases. Hydrodynamic limit, heat diffusion, non-equilibrium stationary states, uphill heat diffusion.
+
+This work was partially supported by the grant 346300 for IMPAN from the Simons Foundation and the matching 2015-2019 Polish MNiSW fund, and by the ANR-15-CE40-0020-01 grant LSD. T.K. acknowledges the support of the National Science Centre: NCN grant 2016/23/B/ST1/00492. M. S. thanks Labex CEMPI (ANR-11-LABX-0007-01) and the project EDNHS ANR-14-CE25-0011 of the French National Research Agency (ANR) for their support. This project has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovative programme (grant agreement n°715734).
+---PAGE_BREAK---
+
+an autonomous evolution described by a fractional heat equation. This has been predicted [22], confirmed by many numerical experiments on the NESS [19, 18] and proved analytically for harmonic chains with random exchanges of momenta that conserve energy, momentum and volume stretch, see [12].
+
+In contrast to the situation described above, the present paper deals with a system for which conserved quantities evolve macroscopically in the same *diffusive* time scale, and their macroscopic evolution is governed by a system of *coupled* diffusive equations. One example is given by the chain of coupled rotors, whose dynamics conserves the energy and the angular momentum. In [10] the NESS of this chain is studied numerically, when Langevin thermostats are applied at both ends, while a constant force is applied to one end and the position of the rotor on the opposite side is kept fixed. While heat flows from the thermostats, work is performed by the torque, increasing the mechanical energy, which is then transformed into thermal energy by the dynamics of the rotors. The stationary temperature profiles observed numerically in [10] present a maximum inside the chain higher than the temperature of both thermostats. Furthermore, a negative linear response for the energy flux has been observed for certain values of the external parameters. This phenomenon is referred to in the literature as an *uphill diffusion*, see [17] or [7] and references therein. These numerical results have been confirmed in [11], as well as an instability of the system when thermostats are at zero temperature.
+
+The present work aims at describing a similar phenomenon for the NESS, but for a different model. In particular, we are able to show rigorously that the maximum of the temperature profile occurs inside the system. To our knowledge, this is the first theoretical result that rigorously establishes the phenomena of heating inside the system and of uphill diffusion.
+
+More specifically, we consider a chain of unpinned harmonic oscillators whose dynamics is perturbed by a random mechanism that conserves the energy and the volume stretch: any two nearest neighbour particles exchange their momenta randomly, in such a way that the total kinetic energy is conserved. Two Langevin thermostats are attached at the opposite ends of the chain and a constant force $\bar{\tau}_+$ acts on the last particle of the chain. This system has only two conserved quantities: total energy and volume. Since the random mechanism does not conserve the total momentum, the macroscopic behaviour of these two quantities is diffusive, and the non-stationary hydrodynamic limit with periodic boundary conditions (no thermostats or exterior force present) has been proved in [16].
+
+The action of this constant force puts the system out of equilibrium, even when the temperatures of the thermostats are equal. As in the rotor chain described above, the exterior force performs positive work on the system, that increases the mechanical energy (concentrated on low frequency modes). The random mechanism, which consists in the kinetic energy exchange between neighbouring atoms, see definition (2.8) and the following explanations, transforms the mechanical energy into the thermal one (uniformly distributed in all frequencies, when the
+---PAGE_BREAK---
+
+system is in a local equilibrium), which is eventually dissipated by the thermostats. This transfer of mechanical into thermal energy happens in the bulk of the system and is already completely predicted by the solution of the macroscopic diffusive system of equations obtained in the hydrodynamic limit [16], see also [1] for a similar model without boundary conditions.
+
+In the present article we study the NESS of this dynamics. We prove, see Theorem 3.3 below, that the energy and the volume stretch profiles converge to the stationary solution of the diffusive system, with the boundary conditions imposed by the thermostats and the external tension. It turns out that these stationary equations can be solved explicitly and the stretch profile is linear between 0 and $\bar{\tau}_+$, while the thermal energy (temperature) profile is a concave parabola with the boundary conditions coinciding with the temperatures of the thermostats. The curvature of the parabola is proportional to $\bar{\tau}_+^2$, i.e. the increase of the bulk temperature is not a linear response term. In the case $\bar{\tau}_+ = 0$, the NESS was studied in [4], where the temperature profile is proved to be linear: more details are available in [2, 3]. This heating inside the system phenomenon is similar to the ohmic loss, due to the diffusion of electricity in a resistive system (see e.g. [6]).
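+
+To fix ideas, the qualitative picture just described admits a simple illustrative parametrisation (the positive constant $c$ below stands for a combination of the model's transport coefficients and is our notation here, not a quantity defined in the text):
+
+$$ \bar{r}(u) = u\,\bar{\tau}_+, \qquad T(u) = (1-u)T_- + uT_+ + \frac{c}{2}\,\bar{\tau}_+^2\, u(1-u), \qquad u \in [0, 1], $$
+
+i.e. a stretch profile linear between $0$ and $\bar{\tau}_+$, and a concave parabola with boundary values $T_-$ and $T_+$ whose curvature is proportional to $\bar{\tau}_+^2$. In this parametrisation the maximum of $T$ lies strictly inside $(0,1)$ and exceeds both $T_\pm$ as soon as $c\,\bar{\tau}_+^2/2 > |T_+ - T_-|$.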
+
+The NESS for our model also provides a simple example of uphill energy diffusion: if the force $\bar{\tau}_+$ is large enough and is applied on the side where the coldest thermostat acts, the sign of the energy current can coincide with that of the temperature gradient, see Theorem 3.2 below. This is not surprising once one realises that the macroscopic behaviour is governed by a system of two coupled diffusive equations. On the other hand, the model does not work as a stationary refrigerator, i.e. a system in which heat flows into the coldest thermostat.
+
+Our results suggest that there is a universal behaviour of the temperature profiles in the NESS when there are at least two conserved quantities. This should be tested on a system with three conserved quantities that evolves in the diffusive scale, such as e.g. a non-acoustic harmonic chain with a random exchange of momentum as considered in [15], where the non-stationary hydrodynamic limit is proven. An attempt to describe more generally the systems for which the phenomena of an uphill energy diffusion and heating inside the system occur is made in [21].
+
+Let us add a comment on the proofs of our main results. In proving Theorem 3.3 (on the asymptotics of the energy and stretch profiles) we need an additional hypothesis concerning the strength $\gamma > 0$ of the noisy part of the dynamics, see (2.3) and (2.4). More precisely, we suppose that $\gamma = 1$. This assumption is of a purely technical nature and allows us to discard the term corresponding to the equipartition of the mechanical and kinetic energy in the decomposition (4.8) of the microscopic energy profile of the chain (the term $H_n^m$). We conjecture that this term vanishes in the stationary macroscopic limit, which would make the conclusion of the theorem valid for any $\gamma > 0$, but we are unable to prove this fact at the moment. We do not need this hypothesis in our proof of Theorem 3.2 (on the uphill diffusion phenomenon).
+---PAGE_BREAK---
+
+In Appendix A we give a proof for the non-stationary macroscopic evolution of the energy and the volume stretch profiles in the diffusive space-time scaling. As for the NESS, the proof is rigorous only for $\gamma = 1$, for similar reasons. The corresponding result with periodic boundary conditions was contained in [1].
+
+The rest of the paper is organized as follows: in Section 2 we define the microscopic model under investigation and give the expected macroscopic system of equations, showing the phenomenon of uphill diffusion. In Section 3 we state the main results of the paper, namely the convergence of the non-equilibrium stationary profiles of elongation, current and energy. In order to prove them, we need precise computations on the averages and second order moments taken with respect to the NESS. Section 4 provides elements of the proofs and preliminary computations on the averages, while Section 5 provides all the remaining technical lemmas, concerning the second order moments.
+
+## 2. MICROSCOPIC DYNAMICS AND MACROSCOPIC BEHAVIOUR
+
+2.1. **Open chain of oscillators.** Let $I_n := \{1, \dots, n\}$, $\bar{I}_n := I_n \cup \{0\}$ and $\mathbb{I} := [0, 1]$. The configuration space $\Omega_n := \mathbb{R}^{|\bar{I}_n|} \times \mathbb{R}^{|\bar{I}_n|}$ consists of all sequences $(\mathbf{q}, \mathbf{p}) := \{q_x, p_x\}_{x \in \bar{I}_n}$, where $p_x \in \mathbb{R}$ stands for the momentum of the oscillator at site $x$, and $q_x \in \mathbb{R}$ represents its position. The interaction between two particles $x$ and $x+1$ is described by the quadratic potential energy $V(q_x - q_{x+1}) := \frac{1}{2}(q_x - q_{x+1})^2$ of a harmonic spring linking the particles. At the boundaries the system is connected to two Langevin heat baths at temperatures $T_-$ and $T_+$. Furthermore, a force (tension) $\tau_+$, possibly varying slowly in time at the scale $t/n^2$, acts on the right boundary. Note that the system is *unpinned*, i.e. there is no external potential binding the particles. Consequently, the absolute positions $q_x$ do not have a precise meaning, and the dynamics depends only on the interparticle elongations $r_x := q_x - q_{x-1}$, $x \in I_n$. The configurations can then be described by
+
+$$ (\mathbf{r}, \mathbf{p}) = (r_1, \dots, r_n, p_0, \dots, p_n) \in \Omega_n. \quad (2.1) $$
+
+The total energy of the system is defined by the Hamiltonian:
+
+$$ \mathcal{H}_n(\mathbf{r}, \mathbf{p}) := \sum_{x \in I_n} \varepsilon_x + \frac{p_0^2}{2}, \quad (2.2) $$
+
+with
+
+$$ \varepsilon_x := \frac{p_x^2}{2} + \frac{r_x^2}{2}, \quad x \in I_n. $$
+
+We investigate this system in the diffusive time scale (when the ratio of the microscopic vs macroscopic time is $n^2$), therefore the equations of the microscopic
+---PAGE_BREAK---
+
+FIGURE 1. Oscillator chains with heat baths and one boundary force.
+
+dynamics are given in the bulk by
+
+$$
+\begin{align}
+dr_x(t) &= n^2 (p_x(t) - p_{x-1}(t)) dt, \quad x \in \mathbb{I}_n \tag{2.3} \\
+dp_x(t) &= n^2 (r_{x+1}(t) - r_x(t)) dt - \gamma n^2 p_x(t) dt \nonumber \\
+&\qquad + n \sqrt{\gamma} (p_{x-1}(t) dw_{x-1,x}(t) - p_{x+1}(t) dw_{x,x+1}(t)), \quad x \in \{1, \dots, n-1\} \tag{2.4}
+\end{align}
+$$
+
+and at the boundaries:
+
+$$
+\begin{equation}
+\begin{aligned}
+dp_0(t) &= n^2 r_1(t)dt - \frac{n^2}{2}(\gamma + \tilde{\gamma})p_0(t)dt - n\sqrt{\gamma}p_1(t)dw_{0,1}(t) + n\sqrt{\tilde{\gamma}T_-}d\tilde{w}_0(t) && (2.5) \\
+dp_n(t) &= -n^2 r_n(t)dt + n^2 \bar{\tau}_+(t)dt - \frac{n^2}{2}(\gamma + \tilde{\gamma})p_n(t)dt \\
+&\qquad + n\sqrt{\gamma}p_{n-1}(t)dw_{n-1,n}(t) + n\sqrt{\tilde{\gamma}T_+}d\tilde{w}_n(t)
+\end{aligned}
+\tag{2.6}
+\end{equation}
+$$
+
+where $w_{x,x+1}(t)$, $x \in \{0, \dots, n-1\}$, $\tilde{w}_0(t)$ and $\tilde{w}_n(t)$ are independent, standard one-dimensional Wiener processes, and $\gamma > 0$ (resp. $\tilde{\gamma} > 0$) regulates the intensity of the random perturbation (resp. the Langevin thermostats). See Figure 1 for a representation of the chain. Note that the purely Hamiltonian dynamics is perturbed by a stochastic noise which exchanges kinetic energy between neighbouring atoms and with the boundary thermostats.
+
+We let
+
+$$
+(\mathbf{r}_n(t), \mathbf{p}_n(t)) := (r_1(t), \dots, r_n(t), p_0(t), \dots, p_n(t)), \quad t \ge 0 \quad (2.7)
+$$
+
+be the $\Omega_n$-valued process whose dynamics is determined by the equations (2.3)–(2.6). Its generator is given by
+
+$$
+L := n^2 \left( A + \frac{\gamma}{2}S + \frac{\tilde{\gamma}}{2}\tilde{S} \right), \quad (2.8)
+$$
+---PAGE_BREAK---
+
+where
+
+$$
+\begin{align*}
+A &:= \sum_{x=1}^{n} (p_x - p_{x-1}) \partial_{r_x} + \sum_{x=1}^{n-1} (r_{x+1} - r_x) \partial_{p_x} + r_1 \partial_{p_0} + (\bar{\tau}_+(t) - r_n) \partial_{p_n} \\
+Sf &:= \sum_{x=0}^{n-1} \mathcal{X}_x \circ \mathcal{X}_x(f),
+\end{align*}
+$$
+
+where $\mathcal{X}_x$ is the momentum exchange operator defined as
+
+$$
+\mathcal{X}_x := p_{x+1} \partial_{p_x} - p_x \partial_{p_{x+1}},
+$$
+
+and moreover the generator of the Langevin heat baths at the boundaries is given
+by
+
+$$
+\tilde{S} := T_- \partial_{p_0}^2 - p_0 \partial_{p_0} + T_+ \partial_{p_n}^2 - p_n \partial_{p_n}.
+$$
+
+From the microscopic energy conservation law there exist microscopic energy
+currents $j_{x,x+1}$ which satisfy
+
+$$
+n^{-2} L \mathcal{E}_x = j_{x-1,x} - j_{x,x+1}, \quad \text{for any } x \in \bar{\mathbb{I}}_n \qquad (2.9)
+$$
+
+and are given by
+
+$$
+j_{x,x+1} := -p_x r_{x+1} + \frac{\gamma}{2}(p_x^2 - p_{x+1}^2), \quad \text{if } x \in \{0, \dots, n-1\}, \quad (2.10)
+$$
+
+while at the boundaries
+
+$$
+j_{-1,0} := \frac{\tilde{\gamma}}{2} (T_{-} - p_{0}^{2}), \quad j_{n,n+1} := -\frac{\tilde{\gamma}}{2} (T_{+} - p_{n}^{2}) - \bar{\tau}_{+}(t)p_{n}. \quad (2.11)
+$$
+
+2.2. **Macroscopic equations.** Suppose that $r(t, u)$, $e(t, u)$, $(t, u) \in \mathbb{R}_{+} \times \mathbb{I}$, are the macroscopic profiles of *elongation* and *energy* of the macroscopic system, obtained in the diffusive scaling limit. The profiles $r(t, \cdot)$, $e(t, \cdot)$ are the expected limits, as $n$ gets large, of
+
+$$
+\frac{1}{n} \sum_{x \in \mathbb{I}_n} r_x(t) \delta_{x/n}(\cdot), \quad \text{and} \quad \frac{1}{n} \sum_{x \in \bar{\mathbb{I}}_n} \mathcal{E}_x(t) \delta_{x/n}(\cdot),
+$$
+
+where $\delta_u(\cdot)$ is the Dirac delta function at point $u$. These convergences are expected to hold in the weak formulation sense: more details will be given in Appendix A. If both convergences do hold at time $t = 0$ to some given profiles $r_0(u)$ and $\mathcal{E}_0(u)$, then we expect that they satisfy the following system of equations¹,
+
+$$
+\partial_t r(t, u) = \gamma^{-1} \partial_{uu}^2 r(t, u) \tag{2.12}
+$$
+
+$$
+\partial_t e(t, u) = \frac{1}{2} \partial_{uu}^2 \left\{ (\gamma^{-1} + \gamma) e(t, u) + \frac{1}{2} (\gamma^{-1} - \gamma) r^2(t, u) \right\}, \quad (t, u) \in \mathbb{R}_+ \times \mathbb{I}, \quad (2.13)
+$$
+
+1See also Theorems 3.7 and 3.8 of [16] for a similar model which gives a similar coupled diffusive system for every value of γ.
+---PAGE_BREAK---
+
+with the boundary conditions
+
+$$
+\begin{align*}
+r(t, 0) &= 0, & r(t, 1) &= \bar{\tau}_+(t), \\
+e(t, 0) &= T_-, & e(t, 1) &= T_+ + \frac{(\bar{\tau}_+(t))^2}{2}
+\end{align*}
+$$
+
+and with the initial condition
+
+$$
+r(0, u) = r_0(u), \quad e(0, u) = \mathcal{E}_0(u).
+$$
+
+In Appendix A we will give the proof arguments for a derivation of these macroscopic equations, which are rigorous for $\gamma = 1$, and conditional on a form of local equilibrium result for $\gamma \neq 1$, stated in (A.46).
+
+Define now $e_{\text{mech}}(t,u) := \frac{1}{2}r^2(t,u)$ and $e_{\text{th}}(t,u) := e(t,u) - e_{\text{mech}}(t,u)$ as respectively the *mechanical* and *thermal* components of the macroscopic energy. From (2.12) and (2.13) we conclude that
+
+$$
+\partial_t e_{\text{mech}}(t, u) = \gamma^{-1} \left( \partial_{uu}^2 e_{\text{mech}}(t, u) - (\partial_u r(t, u))^2 \right), \quad (t, u) \in \mathbb{R}_+ \times \mathbb{I}
+$$
+
+with
+
+$$
+e_{\text{mech}}(t, 0) = 0, \quad e_{\text{mech}}(t, 1) = \frac{(\bar{\tau}_{+}(t))^{2}}{2}, \quad e_{\text{mech}}(0, u) = \frac{r_{0}^{2}(u)}{2}, \quad (t, u) \in \mathbb{R}_{+} \times \mathbb{I}
+$$
+
+and
+
+$$
+\partial_t e_{\text{th}}(t, u) = \frac{1}{2}(\gamma^{-1} + \gamma)\partial_{uu}^2 e_{\text{th}}(t, u) + \gamma^{-1}(\partial_u r(t, u))^2, \quad (t, u) \in \mathbb{R}_+ \times \mathbb{I}, \quad (2.14)
+$$
+
+with
+
+$$
+e_{\text{th}}(t, 0) = T_{-}, \quad e_{\text{th}}(t, 1) = T_{+}, \quad t > 0.
+$$
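The split of the energy equation rests on the pointwise identity $r\,\partial_{uu}^2 r = \partial_{uu}^2(r^2/2) - (\partial_u r)^2$, which holds for any smooth profile. A quick finite-difference sanity check on an arbitrary test profile (not taken from the model):

```python
import math

h = 1e-4
r = lambda u: math.sin(2 * u) + 0.3 * u   # arbitrary smooth test profile

def d1(f, u):  # central first difference
    return (f(u + h) - f(u - h)) / (2 * h)

def d2(f, u):  # central second difference
    return (f(u + h) - 2 * f(u) + f(u - h)) / h**2

for u in [0.1, 0.3, 0.5, 0.7, 0.9]:
    lhs = r(u) * d2(r, u)
    rhs = d2(lambda v: 0.5 * r(v)**2, u) - d1(r, u)**2
    assert abs(lhs - rhs) < 1e-4, (u, lhs, rhs)
print("pointwise identity verified")
```

This is exactly the computation behind the source term $\gamma^{-1}(\partial_u r)^2$ appearing in the thermal energy equation (2.14).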
+
+**2.3. Stationary non-equilibrium states.** From now on we assume $\bar{\tau}_+(t) \equiv \bar{\tau}_+$ to be constant in time.
+
+When $\bar{\tau}_+ = 0$ and $T_- = T_+ = T$, the system is *in equilibrium* and the stationary probability distribution is given explicitly by the homogeneous Gibbs measure
+
+$$
+\nu_T(\mathrm{d}\mathbf{r}, \mathrm{d}\mathbf{p}) := g_T(\mathbf{r}, \mathbf{p}) \mathrm{d}p_0 \prod_{x \in \mathbb{I}_n} \mathrm{d}p_x \mathrm{d}r_x,
+$$
+
+where
+
+$$
+g_T(\mathbf{r}, \mathbf{p}) := \frac{e^{-p_0^2/2T}}{\sqrt{2\pi T}} \prod_{x \in I_n} \frac{e^{-\epsilon_x/T}}{2\pi T}. \tag{2.15}
+$$
+
+If $\bar{\tau}_+ \neq 0$, or $T_- \neq T_+$, the stationary measure exists and is unique, but it is not given explicitly. More precisely, we know that there exists a unique stationary probability distribution $\mu_{ss}$ on $\Omega_n$ (cf. (2.1)) for the microscopic dynamics described by the equations (2.3)–(2.6). As a consequence $\langle LF\rangle_{ss} = 0$ for any function $F$ in the domain of the operator $L$, given by (2.8). Hereafter, we denote
+
+$$
+\langle F \rangle_{ss} := \int_{\Omega_n} F d\mu_{ss}.
+$$
+
+The proof of the existence and uniqueness of a stationary state follows from the
+same argument as the one used in [4, Appendix A] for $\bar{\tau}_+ = 0$. The fact that in
+---PAGE_BREAK---
+
+our case $\bar{\tau}_+$ does not vanish requires only minor modifications. In addition, one
+can show, see bound (A.1) in [4], that for a fixed $n$ we have $\langle \mathcal{H}_n \rangle_{ss} < +\infty$ (cf.
+(2.2)).
+
+The corresponding stationary profiles, denoted respectively by $r_{ss}(u)$ and $e_{th,ss}(u)$,
+will solve the stationary version of equations (2.12) and (2.14), i.e.:
+
+$$
+r_{ss}(u) = \bar{\tau}_{+} u \tag{2.16}
+$$
+
+and
+
+$$
+(\gamma^{-1} + \gamma) \partial_{uu}^2 e_{\text{th,ss}}(u) + 2\gamma^{-1} \bar{\tau}_{+}^2 = 0,
+$$
+
+with the boundary conditions
+
+$$
+e_{\text{th,ss}}(0) = T_{-}, \quad e_{\text{th,ss}}(1) = T_{+}.
+$$
+
+In other words
+
+$$
+e_{\text{th,ss}}(u) = \frac{\bar{\tau}_{+}^2}{1 + \gamma^2} u(1 - u) + (T_{+} - T_{-})u + T_{-}. \quad (2.17)
+$$
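One can verify directly that (2.17) solves the stationary boundary problem above: its second derivative is the constant $-2\bar{\tau}_+^2/(1+\gamma^2)$. A numerical sketch, with purely illustrative parameter values:

```python
gamma, tau, Tm, Tp = 0.7, 1.5, 2.0, 1.0   # illustrative parameters

def e_th(u):   # candidate stationary profile (2.17)
    return tau**2 / (1 + gamma**2) * u * (1 - u) + (Tp - Tm) * u + Tm

# boundary conditions e(0) = T_-, e(1) = T_+
assert abs(e_th(0.0) - Tm) < 1e-12 and abs(e_th(1.0) - Tp) < 1e-12

# stationary equation: (1/gamma + gamma) e'' + 2 tau^2 / gamma = 0
h = 1e-3
for u in (0.2, 0.5, 0.8):
    e2 = (e_th(u + h) - 2 * e_th(u) + e_th(u - h)) / h**2
    assert abs((1 / gamma + gamma) * e2 + 2 * tau**2 / gamma) < 1e-6
print("profile (2.17) solves the stationary boundary problem")
```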
+
+Taking the average with respect to the stationary state in (2.9), we get the *stationary microscopic energy current*
+
+$$
+\langle j_{x,x+1} \rangle_{ss} := \bar{j}_s, \quad \text{for any } x \in \{-1, \dots, n\}. \tag{2.18}
+$$
+
+The macroscopic stationary energy current is defined as the limit of $n\bar{j}_s$, as $n \to +\infty$. It equals, see Theorem 3.2 below,
+
+$$
+J_{ss} = -\frac{1}{2}(\gamma^{-1} + \gamma)(T_+ - T_-) - \frac{\bar{\tau}_+^2}{2\gamma}.
+$$
+
+Observe that the energy current can flow against the temperature gradient if $T_- > T_+$ and $|\bar{\tau}_+|$ is large enough (*uphill diffusion*). Assuming $T_+ \ge T_-$ the maximum stationary temperature $e_{\text{th,ss}}^{\max}$ is reached at
+
+$$
+u_{\max} = \left( \frac{1}{2} + \frac{(1+\gamma^2)(T_+ - T_-)}{2\bar{\tau}_+^2} \right) \wedge 1
+$$
+
+which implies that, if the condition $2(1+\gamma^2)(T_+-T_-) \le \bar{\tau}_+^2$ is satisfied, then the
+maximum temperature of the chain is attained inside, since $u_{\max} < 1$ (see Figure 2), and it equals
+
+$$
+e_{\text{th,ss}}^{\max} = \frac{T_+ - T_-}{2} + T_- + \frac{\bar{\tau}_+^2}{4(1 + \gamma^2)} + \frac{(1+\gamma^2)(T_+ - T_-)^2}{4\bar{\tau}_+^2} \geq T_+.
+$$
+
+Note that this does not depend on the sign of $\bar{\tau}_+$.
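Both effects are easy to exhibit numerically: for $T_- > T_+$ and a strong boundary tension the current $J_{ss}$ is negative although the left reservoir is hotter, and the hottest point of $e_{\text{th,ss}}$ lies strictly inside the chain. The parameter values below are illustrative only:

```python
gamma, tau, Tm, Tp = 1.0, 3.0, 2.0, 1.0   # T_- > T_+, strong boundary tension

# macroscopic stationary current
J_ss = -0.5 * (1 / gamma + gamma) * (Tp - Tm) - tau**2 / (2 * gamma)
assert Tm > Tp and J_ss < 0   # energy flows uphill, towards the hotter bath

def e_th(u):   # stationary thermal profile (2.17)
    return tau**2 / (1 + gamma**2) * u * (1 - u) + (Tp - Tm) * u + Tm

# locate the hottest point on a fine grid
us = [i / 10000 for i in range(10001)]
u_max = max(us, key=e_th)
assert 0 < u_max < 1                 # maximum lies strictly inside the chain
assert e_th(u_max) > max(Tm, Tp)     # hotter than both reservoirs
print("J_ss =", round(J_ss, 3), " u_max =", round(u_max, 3))
```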
+
+This phenomenon was observed by dynamical numerical simulations in [10] for
+the stationary states of the rotor model. It has attracted quite some interest from
+physicists, see [17] for a review. The present article is devoted to the proof of
+such a phenomenon, when $\gamma = 1$. This restriction is technical and will be further
+explained in Section 4.2. According to our knowledge it is the first rigorous proof
+of this fact in the existing literature.
+---PAGE_BREAK---
+
+FIGURE 2. Temperature profile when $T_+ - T_- < 2\bar{\tau}_+^2$.
+
+### 3. MAIN RESULTS
+
+Let us start with the following:
+
+**Theorem 3.1** (Stationary elongation profile). *The following uniform convergence holds:*
+
+$$ \sup_{u \in \mathbb{I}} |\langle r_{[nu]} \rangle_{ss} - r_{ss}(u)| \xrightarrow{n \to \infty} 0, $$
+
+where $r_{ss}(u) := \bar{\tau}_+ u$. In particular, for any continuous test function $G : \mathbb{I} \to \mathbb{R}$,
+
+$$ \frac{1}{n} \sum_{x \in \mathbb{I}_n} G\left(\frac{x}{n}\right) \langle r_x \rangle_{ss} \xrightarrow{n \to \infty} \int_{\mathbb{I}} G(u) r_{ss}(u) \, du. $$
+
+*Proof.* The averages under the stationary state $\langle r_x \rangle_{ss}$ and $\langle p_x \rangle_{ss}$ are computable explicitly, see Proposition 4.1 in the next section. It turns out that $\langle p_x \rangle_{ss}$ is constant for all $x \in \bar{\mathbb{I}}_n$ and equals $\bar{p}_s := \bar{\tau}_+ / (\gamma n + \tilde{\gamma})$ (see (4.1)). From (4.2) we also have
+
+$$ n(\langle r_{x+1} \rangle_{ss} - \langle r_x \rangle_{ss}) = n\gamma\bar{p}_s \xrightarrow{n \to \infty} \bar{\tau}_+, \quad \text{for } x \in \{1, \dots, n-1\} $$
+
+and
+
+$$ \langle r_1 \rangle_{ss} \xrightarrow{n \to \infty} 0, \qquad \langle r_n \rangle_{ss} \xrightarrow{n \to \infty} \bar{\tau}_+. $$
+
+Finally, (4.2) directly implies the conclusion of the theorem. $\square$
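Because (4.1) and (4.2) are explicit, the uniform convergence can also be illustrated directly: the supremum deviation from $r_{ss}(u) = \bar{\tau}_+ u$ decays like $1/n$. A sketch with illustrative parameter values:

```python
gamma, gtil, tau = 0.8, 1.2, 2.0   # illustrative parameters

def sup_deviation(n):
    p_bar = tau / (gamma * n + gtil)   # average momentum (4.1)
    dev = 0.0
    for x in range(1, n + 1):
        r_avg = 0.5 * p_bar * (gtil - gamma + 2 * gamma * x)   # (4.2)
        dev = max(dev, abs(r_avg - tau * x / n))
    return dev

d100, d1000 = sup_deviation(100), sup_deviation(1000)
assert d1000 < 0.01 and d1000 < d100 / 5   # roughly O(1/n) decay
print("sup deviation: n=100 ->", round(d100, 5), ", n=1000 ->", round(d1000, 6))
```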
+
+Concerning the *stationary energy flow* and the validity of the Fourier law we show the following result on the macroscopic stationary energy current.
+
+**Theorem 3.2** (Stationary energy current and Fourier law).
+
+$$ n\bar{j}_s \xrightarrow{n \to \infty} -\frac{1}{2}(\gamma^{-1} + \gamma)(T_+ - T_-) - \frac{\bar{\tau}_+^2}{2\gamma}. \quad (3.1) $$
+
+Note that Theorems 3.1 and 3.2 are valid for any $\gamma > 0$. We now state our last main result about the stationary energy profile, which we are able to prove only
+---PAGE_BREAK---
+
+for $\gamma = 1$. Before stating it we introduce the stationary microscopic mechanical and thermal energy *per particle* as follows
+
+$$
+\begin{aligned}
+\mathcal{E}_x^{\text{mech}} &:= \frac{1}{2} \langle r_x^2 \rangle_{ss} \\
+\mathcal{E}_x^{\text{th}} &:= \mathcal{E}_x - \mathcal{E}_x^{\text{mech}} = \frac{1}{2} p_x^2 + \frac{1}{2} (r_x^2 - \langle r_x^2 \rangle_{ss}), \quad x \in \mathbb{I}_n.
+\end{aligned}
+ $$
+
+**Theorem 3.3** (Stationary energy profile). *Assume that $\gamma = 1$. For any continuous test function $G: \mathbb{I} \to \mathbb{R}$ we have*
+
+$$ \frac{1}{n} \sum_{x \in \mathbb{I}_n} G\left(\frac{x}{n}\right) \mathcal{E}_x^{\text{mech}} \underset{n \to \infty}{\longrightarrow} \int_{\mathbb{I}} G(u) \frac{1}{2} r_{ss}^2(u) \, du, \qquad (3.2) $$
+
+$$ \frac{1}{n} \sum_{x \in \mathbb{I}_n} G\left(\frac{x}{n}\right) \langle \mathcal{E}_x^{\text{th}} \rangle_{\text{ss}} \xrightarrow{n \to \infty} \int_{\mathbb{I}} G(u) e_{\text{th,ss}}(u) \, du, \qquad (3.3) $$
+
+$$ \frac{1}{n} \sum_{x \in \mathbb{I}_n} G\left(\frac{x}{n}\right) \langle \mathcal{E}_x \rangle_{\text{ss}} \underset{n \to \infty}{\longrightarrow} \int_{\mathbb{I}} G(u) \left( e_{\text{th,ss}}(u) + \frac{1}{2} r_{\text{ss}}^2(u) \right) du, \qquad (3.4) $$
+
+where
+
+$$ r_{ss}(u) = \bar{\tau}_{+} u, $$
+
+$$ e_{\text{th,ss}}(u) = \frac{\bar{\tau}_{+}^2}{2} u(1-u) + (T_{+} - T_{-})u + T_{-}. $$
+
+The remaining part of the paper deals with the proofs of Theorems 3.2 and 3.3.
+
+#### 4. THE STATIONARY STATE
+
+Let us start with explicit computations for the average momenta and elongations with respect to the NESS.
+
+##### 4.1. Elongation and momenta averages.
+
+**Proposition 4.1.** The average stationary momenta are equal to
+
+$$ \langle p_x \rangle_{ss} = \bar{p}_s := \frac{\bar{\tau}_+}{\gamma n + \tilde{\gamma}}, \quad \text{for any } x \in \bar{\mathbb{I}}_n. \qquad (4.1) $$
+
+The average stationary elongations are equal to
+
+$$ \langle r_x \rangle_{ss} = \frac{\bar{p}_s}{2} (\tilde{\gamma} - \gamma + 2\gamma x) = \frac{\bar{\tau}_+(2\gamma x + \tilde{\gamma} - \gamma)}{2(\gamma n + \tilde{\gamma})}, \quad \text{for any } x \in \bar{\mathbb{I}}_n. \qquad (4.2) $$
+
+*Proof.* We start with some useful relations that hold for the stationary state:
+
+(1) since $\langle Lr_x \rangle_{ss} = 0$, applying (2.8), we conclude
+
+$$ \langle p_x \rangle_{ss} = \langle p_{x-1} \rangle_{ss} = \bar{p}_s, \quad \text{for any } x \in \mathbb{I}_n; $$
+---PAGE_BREAK---
+
+(2) from $\langle Lp_x \rangle_{ss} = 0$ we get
+
+$$
+\begin{align*}
+& \langle r_{x+1} \rangle_{ss} - \langle r_x \rangle_{ss} = \gamma \bar{p}_s, && \text{for any } x \in \{1, \dots, n-1\} \\
+& \langle r_1 \rangle_{ss} = \frac{1}{2} (\gamma + \tilde{\gamma}) \bar{p}_s, \\
+& \langle r_n \rangle_{ss} = -\frac{1}{2} (\gamma + \tilde{\gamma}) \bar{p}_s + \bar{\tau}_+.
+\end{align*}
+$$
+
+These equations determine the average stationary momentum and elongation as given in formulas (4.1) and (4.2). $\square$
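The closed forms (4.1) and (4.2) can be checked against the three stationarity relations above for any concrete choice of parameters (the values below are illustrative):

```python
gamma, gtil, tau, n = 0.7, 1.3, 2.0, 50   # illustrative parameters

p_bar = tau / (gamma * n + gtil)   # (4.1)
r = {x: 0.5 * p_bar * (gtil - gamma + 2 * gamma * x) for x in range(1, n + 1)}  # (4.2)

# bulk balance: <r_{x+1}>_ss - <r_x>_ss = gamma * p_bar
for x in range(1, n):
    assert abs(r[x + 1] - r[x] - gamma * p_bar) < 1e-12
# boundary balances
assert abs(r[1] - 0.5 * (gamma + gtil) * p_bar) < 1e-12
assert abs(r[n] - (tau - 0.5 * (gamma + gtil) * p_bar)) < 1e-12
print("averages (4.1)-(4.2) satisfy all stationarity relations")
```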
+
+**4.2. Elements of the proofs of Theorems 3.2 and 3.3.** One of the main characteristics of this model is the existence of an explicit fluctuation-dissipation relation, which makes it possible to write the stationary current $\bar{j}_s$ as a discrete gradient of a local function, as stated in the following:
+
+**Proposition 4.2** (Decomposition of the stationary current). We can write $\bar{j}_s$ as a discrete gradient, namely
+
+$$ \bar{j}_s = \nabla \phi(x) := \phi(x+1) - \phi(x), \quad x \in \{1, \dots, n-1\}, \qquad (4.3) $$
+
+with
+
+$$ \phi(x) := -\frac{1}{2\gamma} (\langle r_x^2 \rangle_{ss} + \langle p_{x-1}p_x \rangle_{ss}) - \frac{\gamma}{4} (\langle p_x^2 \rangle_{ss} + \langle p_{x-1}^2 \rangle_{ss}), \quad x \in \mathbb{I}_n. \quad (4.4) $$
+
+**Remark 4.3.** Thanks to (4.3), the function $\phi(x)$ is harmonic, i.e.:
+
+$$ \Delta\phi(x) := \phi(x+1) + \phi(x-1) - 2\phi(x) = 0, \quad \text{for any } x \in \{2, \dots, n-1\}. \quad (4.5) $$
+
+*Proof of Proposition 4.2.* By a direct calculation one can easily check that the energy currents $j_{x,x+1}$ (defined in (2.10)) satisfy the following fluctuation-dissipation relation:
+
+$$ j_{x,x+1} = n^{-2}Lg_x - \frac{1}{2\gamma}\nabla(r_x^2 + p_{x-1}p_x) - \frac{\gamma}{4}\nabla(p_{x-1}^2 + p_x^2), \quad (4.6) $$
+
+for any $x \in \{1, ..., n-1\}$, with
+
+$$ g_x := -\frac{1}{4}p_x^2 + \frac{1}{2\gamma}p_x(r_x + r_{x+1}). $$
+
+Therefore, (4.3) is obtained by taking the average in (4.6) with respect to the stationary state. $\square$
+
+We can now sketch the proof of Theorem 3.3: straightforward computations, using the definition (4.4) of $\phi$, yield
+
+$$
+\begin{align}
+\langle \varepsilon_x \rangle_{ss} &= \frac{1}{2} (\langle p_x^2 \rangle_{ss} + \langle r_x^2 \rangle_{ss})
+= -\frac{2\gamma}{1+\gamma^2} \phi(x) + \frac{\gamma^2}{2(1+\gamma^2)} (\langle p_x^2 \rangle_{ss} - \langle p_{x-1}^2 \rangle_{ss}) \nonumber \\
+&\quad -\frac{1}{1+\gamma^2} \langle p_x p_{x-1} \rangle_{ss} + \frac{1-\gamma^2}{2(1+\gamma^2)} (\langle p_x^2 \rangle_{ss} - \langle r_x^2 \rangle_{ss}). \tag{4.7}
+\end{align}
+$$
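Identity (4.7) is purely algebraic: it holds whatever values the four moments take. One can confirm this by substituting random numbers for $\langle p_x^2 \rangle_{ss}$, $\langle p_{x-1}^2 \rangle_{ss}$, $\langle p_{x-1} p_x \rangle_{ss}$ and $\langle r_x^2 \rangle_{ss}$:

```python
import random

random.seed(0)
for _ in range(100):
    gamma = random.uniform(0.1, 3.0)
    # arbitrary stand-ins for <p_x^2>, <p_{x-1}^2>, <p_{x-1} p_x>, <r_x^2>
    P, Pm, C, R = (random.uniform(-2, 2) for _ in range(4))

    phi = -(R + C) / (2 * gamma) - gamma * (P + Pm) / 4   # (4.4)
    lhs = 0.5 * (P + R)                                   # <eps_x>_ss
    rhs = (-2 * gamma / (1 + gamma**2) * phi
           + gamma**2 / (2 * (1 + gamma**2)) * (P - Pm)
           - C / (1 + gamma**2)
           + (1 - gamma**2) / (2 * (1 + gamma**2)) * (P - R))
    assert abs(lhs - rhs) < 1e-12, (gamma, lhs, rhs)
print("identity (4.7) holds for arbitrary moment values")
```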
+---PAGE_BREAK---
+
+Therefore, the microscopic energy profile can be decomposed as the sum of four
+terms:
+
+$$
+\mathcal{H}_n(G) := \frac{1}{n} \sum_{x \in \mathbb{I}_n} G\left(\frac{x}{n}\right) \langle \mathcal{E}_x \rangle_{ss} = \mathcal{H}_n^\phi(G) + \mathcal{H}_n^\nabla(G) + \mathcal{H}_n^{\text{corr}}(G) + \mathcal{H}_n^{\text{m}}(G), \quad (4.8)
+$$
+
+where
+
+$$
+\begin{align*}
+\mathcal{H}_n^\phi(G) &:= -\frac{2\gamma}{1+\gamma^2} \frac{1}{n} \sum_{x \in \mathbb{I}_n} G\left(\frac{x}{n}\right) \phi(x), \\
+\mathcal{H}_n^\nabla(G) &:= \frac{\gamma^2}{2(1+\gamma^2)} \frac{1}{n} \sum_{x \in \mathbb{I}_n} G\left(\frac{x}{n}\right) (\langle p_x^2 \rangle_{ss} - \langle p_{x-1}^2 \rangle_{ss}), \\
+\mathcal{H}_n^{\text{corr}}(G) &:= -\frac{1}{1+\gamma^2} \frac{1}{n} \sum_{x \in \mathbb{I}_n} G\left(\frac{x}{n}\right) \langle p_{x-1} p_x \rangle_{ss}, \\
+\mathcal{H}_n^{\text{m}}(G) &:= \frac{1-\gamma^2}{2(1+\gamma^2)} \frac{1}{n} \sum_{x \in \mathbb{I}_n} G\left(\frac{x}{n}\right) (\langle p_x^2 \rangle_{ss} - \langle r_x^2 \rangle_{ss}).
+\end{align*}
+$$
+
+Note that, if $\gamma = 1$, then $\mathcal{H}_n^m = 0$. If $\gamma \neq 1$, we conjecture that this last term vanishes as $n \to \infty$, but we are not able to prove it at the moment. The limits of the other three terms will be obtained in the next section and are summarized in the following proposition:
+
+**Proposition 4.4.** For any continuous test function $G : I \to \mathbb{R}$,
+
+$$
+\mathcal{H}_{n}^{\phi}(G) \xrightarrow{n \to \infty} \int_{I} G(u) \left( \frac{\bar{\tau}_{+}^{2}}{1+\gamma^{2}} u + (T_{+} - T_{-})u + T_{-} \right) du, \quad (4.9)
+$$
+
+$$
+\mathcal{H}_n^\nabla(G) \xrightarrow{n \to \infty} 0, \tag{4.10}
+$$
+
+$$
+\mathcal{H}_n^{\text{corr}}(G) \xrightarrow{n \to \infty} 0. \qquad (4.11)
+$$
+
+The complete proof of the proposition will be given in Section 5.4. Let us first comment on the ideas used in the argument. The limit (4.9) will be concluded using the fact that $\phi$ is harmonic, (4.10) is a consequence of the presence of a discrete gradient $\langle p_x^2 \rangle_{ss} - \langle p_{x-1}^2 \rangle_{ss}$ inside the sum, and (4.11) will be shown thanks to the second order bounds, which are obtained in the next section.
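The mechanism behind (4.10) can be illustrated in isolation: by summation by parts, $\frac{1}{n}\sum_x G(x/n)(a_x - a_{x-1})$ is of order $1/n$ for any bounded sequence $(a_x)$ and smooth $G$. A toy example with an arbitrary bounded sequence (not the actual moments):

```python
import math

def gradient_sum(n):
    a = [math.sin(3.0 * x) % 1.0 for x in range(n + 1)]   # arbitrary bounded sequence
    G = lambda u: math.cos(math.pi * u)                    # smooth test function
    return sum(G(x / n) * (a[x] - a[x - 1]) for x in range(1, n + 1)) / n

s100, s1000 = abs(gradient_sum(100)), abs(gradient_sum(1000))
# summation by parts bounds |gradient_sum(n)| by (2 + TV(G)) / n = 4 / n here
assert s100 < 0.06 and s1000 < 0.006
print("n=100:", round(s100, 5), " n=1000:", round(s1000, 6))
```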
+
+*Proof of Theorem 3.3.* With the help of Proposition 4.4 the proof of Theorem 3.3 becomes straightforward. Assume that $\gamma = 1$. From the decomposition (4.7) and Proposition 4.4, we get
+
+$$
+\begin{align*}
+& \frac{1}{n} \sum_{x \in \mathbb{I}_n} G\left(\frac{x}{n}\right) \langle \mathcal{E}_x \rangle_{ss} \\
+&\xrightarrow[n \to \infty]{}
+\int_I G(u) \left( \frac{\bar{\tau}_+^2}{2} u + (T_+ - T_-)u + T_- \right) du \\
+&= \int_I G(u) \left( \frac{\bar{\tau}_+^2}{2} u(1-u) + (T_+ - T_-)u + T_- + \frac{(\bar{\tau}_+ u)^2}{2} \right) du.
+\end{align*}
+$$
+
+Recalling (2.16) and (2.17) we conclude that the right hand side equals
+
+$$
+\int_I G(u) \left( e_{\text{th,ss}}(u) + \frac{r_{\text{ss}}^2(u)}{2} \right) du.
+$$
+---PAGE_BREAK---
+
+Thus (3.4) follows. From Theorem 3.1 we immediately conclude (3.2). The
+convergence of the stationary microscopic thermal energy profile in (3.3) is an
+immediate consequence of these two statements.
+
+5. MOMENT BOUNDS UNDER THE STATIONARY STATE
+
+In this section we present a complete proof of Proposition 4.4 (see Section 5.4)
+and we show Theorem 3.2 (see Proposition 5.9). Before presenting the proof,
+we need a few technical estimates on the entropy production (Section 5.1) and
+second order moments (Section 5.2 and Section 5.3). In the whole section we do
+not assume $\gamma = 1$, since our results hold for any $\gamma > 0$.
+
+**5.1. Entropy production of the stationary state.** Recall definition (2.15).
+For a given $T$ we will use
+
+$$
+\nu_T(\mathrm{d}\mathbf{r}, \mathrm{d}\mathbf{p}) = g_T(\mathbf{r}, \mathbf{p}) \mathrm{d}p_0 \prod_{x \in I_n} \mathrm{d}p_x \mathrm{d}r_x,
+$$
+
+as a reference measure, and denote its respective expectation by $\langle\cdot\rangle_T$.
+
+Stationarity of $\mu_{ss}$ under the microscopic dynamics implies that $L^*\mu_{ss} = 0$ (in the sense of distributions). The operator $L^*$ is hypoelliptic, thus by [9, Theorem 1.1, p. 149], the measure $\mu_{ss}$ has a smooth density $f_s$ with respect to $\nu_{T_+}$, i.e.
+
+$$
+\langle F \rangle_{ss} = \langle F f_s \rangle_{T_+} = \int F f_s \, d\nu_{T_+}.
+$$
+
+**Proposition 5.1** (Entropy production). *Denote $h := g_{T_-}/g_{T_+}$. The following formula holds*
+
+$$
+\begin{align}
+& \gamma \sum_{x=0}^{n-1} \mathcal{D}_x(f_s) + \tilde{\gamma} T_- \left\langle \frac{\left(\partial_{p_0}(f_s/h)\right)^2}{(f_s/h)} \right\rangle_{T_-} + \tilde{\gamma} T_+ \left\langle \frac{\left(\partial_{p_n}f_s\right)^2}{f_s} \right\rangle_{T_+} \nonumber \\
+& \qquad = \frac{\bar{\tau}_+^2}{T_+(\gamma n + \tilde{\gamma})} + \tilde{\gamma} \left(\frac{1}{T_+} - \frac{1}{T_-}\right) (T_- - \langle p_0^2 \rangle_{ss}), \tag{5.1}
+\end{align}
+$$
+
+where
+
+$$
+\mathcal{D}_x(f_s) := \left\langle \frac{(\mathcal{X}_x f_s)^2}{f_s} \right\rangle_{T_+}.
+$$
+
+*Proof.* Integration by parts yields
+
+$$
+-\langle g \tilde{S} f \rangle_{T_+} = T_+ \langle \partial_{p_n} f \, \partial_{p_n} g \rangle_{T_+} + T_- \langle \partial_{p_0} f \, \partial_{p_0} g \rangle_{T_+} + T_- \left( \frac{1}{T_-} - \frac{1}{T_+} \right) \langle g \, p_0 \, \partial_{p_0} f \rangle_{T_+}
+$$
+
+for any $f, g \in C_0^\infty(\Omega_n)$, where $C_0^\infty(\Omega_n)$ is the space of compactly supported smooth
+functions. As
+
+$$
+A(\mathcal{H}_n(\mathbf{r}, \mathbf{p})) = \bar{\tau}_+ p_n \quad \text{and} \quad \chi_x\left(\sum_{y=0}^{n} p_y^2\right) = 0, \quad x \in \{0, \dots, n-1\}
+$$
+---PAGE_BREAK---
+
+we conclude
+
+$$
+\begin{align*}
+- \langle g A f \rangle_{T_+} &= \langle f A g \rangle_{T_+} + \bar{\tau}_+ \langle p_n f g \rangle_{T_+}, \\
+- \langle g \mathcal{X}_x^2 f \rangle_{T_+} &= \langle \mathcal{X}_x f \mathcal{X}_x g \rangle_{T_+},
+\end{align*}
+$$
+
+for any $x = 0, \dots, n-1$, and any $f, g \in C_0^\infty(\Omega_n)$. We take the average of $-n^{-2}L(\log f_s)$ with respect to the stationary state $\mu_{ss}$. Taking into account the above identities we obtain
+
+$$
+\begin{align*}
+0 = -n^{-2} \langle L \log f_s \rangle_{ss} &= -n^{-2} \langle f_s L \log f_s \rangle_{T_+} \\
+&= \gamma \sum_{x=0}^{n-1} \mathcal{D}_x (f_s) + \tilde{\gamma} T_+ \left\langle \frac{(\partial_{p_n} f_s)^2}{f_s} \right\rangle_{T_+} - \bar{\tau}_+ \langle \partial_{p_n} f_s \rangle_{T_+} - \tilde{\gamma} \left\langle (T_- \partial_{p_0}^2 - p_0 \partial_{p_0}) \log f_s \right\rangle_{ss}.
+\end{align*}
+$$
+
+From the definition $h = g_{T_-}/g_{T_+}$, the last term can be rewritten in the form:
+
+$$
+\begin{align*}
+- \left\langle (T_- \partial_{p_0}^2 - p_0 \partial_{p_0}) \log f_s \right\rangle_{ss} &= - \int \frac{f_s}{h} (T_- \partial_{p_0}^2 - p_0 \partial_{p_0}) \left( \log \left( \frac{f_s}{h} \right) \right) g_{T_-} dp_0 \prod_{x=1}^{n} dp_x dr_x \\
+&\quad - \int f_s (T_- \partial_{p_0}^2 - p_0 \partial_{p_0}) (\log h) g_{T_+} dp_0 \prod_{x=1}^{n} dp_x dr_x \\
+&= T_- \left\langle \frac{(\partial_{p_0}(f_s/h))^2}{(f_s/h)} \right\rangle_{T_-} + \left( \frac{1}{T_-} - \frac{1}{T_+} \right) (T_- - \langle p_0^2 \rangle_{ss}).
+\end{align*}
+$$
+
+Moreover, by integration by parts and (4.1), we obtain
+
+$$
+\langle \partial_{p_n} f_s \rangle_{T_+} = T_+^{-1} \langle p_n f_s \rangle_{T_+} = \frac{\bar{p}_s}{T_+} = \frac{\bar{\tau}_+}{T_+(\gamma n + \tilde{\gamma})}
+$$
+
+and (5.1) follows. Since $f_s$ need not be compactly supported, the above calculation is somewhat formal. A rigorous argument (using variational principles) can be found in [5, Section 3]. $\square$
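The integration-by-parts step used at the start of the proof can be sanity-checked in one dimension: for equal temperatures $T_- = T_+ = T$ it reduces to the Gaussian identity $-\langle g(T\partial_p^2 - p\partial_p)f\rangle_T = T\langle \partial_p f \, \partial_p g\rangle_T$. A quadrature sketch with arbitrary test functions:

```python
import math

T = 1.3
rho = lambda p: math.exp(-p**2 / (2 * T)) / math.sqrt(2 * math.pi * T)
f = lambda p: math.exp(-p**2)                    # sample decaying test functions
g = lambda p: math.cos(p) * math.exp(-p**2 / 2)

def d1(func, p, eps=1e-5):
    return (func(p + eps) - func(p - eps)) / (2 * eps)

def d2(func, p, eps=1e-4):
    return (func(p + eps) - 2 * func(p) + func(p - eps)) / eps**2

def quad(func, L=10.0, n=20000):  # midpoint rule on [-L, L]
    h = 2 * L / n
    return sum(func(-L + (i + 0.5) * h) for i in range(n)) * h

lhs = -quad(lambda p: g(p) * (T * d2(f, p) - p * d1(f, p)) * rho(p))
rhs = T * quad(lambda p: d1(f, p) * d1(g, p) * rho(p))
assert abs(lhs - rhs) < 1e-4, (lhs, rhs)
print("Gaussian integration by parts verified")
```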
+
+**5.2. Bounds on second moments.** In the present section we obtain bounds on the covariance functions of momenta and positions with respect to the stationary state. In particular, we estimate the magnitude of the average current $\bar{j}_s$, see (2.18), and investigate the behaviour of $\phi(1)$ and $\phi(n)$ as $n \to \infty$, see (4.4).
+
+Let us first state a rough estimate on the second moments at the boundaries,
+which is going to be refined further.
+
+**Proposition 5.2** (Second moments at the boundaries: Part I). *The following equality holds*
+
+$$
+\langle p_0^2 \rangle_{ss} + \langle p_n^2 \rangle_{ss} = T_+ + T_- + \frac{2\bar{\tau}_+\bar{p}_s}{\tilde{\gamma}} \quad \text{for all } n \ge 1. \qquad (5.2)
+$$
+
+Moreover, there exists a constant $C = C(\gamma, \tilde{\gamma}, \bar{\tau}_{+}, T_{+}, T_{-}) > 0$, such that
+
+$$
+\langle r_1^2 \rangle_{ss} + \langle r_n^2 \rangle_{ss} \le C \quad \text{for all } n \ge 1. \tag{5.3}
+$$
+---PAGE_BREAK---
+
+**Remark 5.3.** By convention, the constants appearing in the statements below depend only on the parameters indicated in parentheses in the statement of the proposition.
+
+*Proof of Proposition 5.2.* The first identity (5.2) is an easy consequence of (2.11) and (2.18), which yields
+
+$$ \bar{j}_s = \frac{\tilde{\gamma}}{2} (T_- - \langle p_0^2 \rangle_{ss}), \quad (5.4) $$
+
+$$ \bar{j}_s = -\bar{\tau}_+ \bar{p}_s - \frac{\tilde{\gamma}}{2} (T_+ - \langle p_n^2 \rangle_{ss}). \quad (5.5) $$
+
+Identity (5.2) is obtained by equating the right-hand sides of the above equalities. To show estimate (5.3) note that
+
+$$ n^{-2}L(p_0 r_1) = (p_1 - p_0)p_0 + r_1^2 - \frac{1}{2}(\tilde{\gamma} + \gamma)p_0 r_1 \quad (5.6) $$
+
+$$ n^{-2}L(p_n r_n) = p_n(p_n - p_{n-1}) + (\bar{\tau}_+ - r_n)r_n - \frac{1}{2}(\tilde{\gamma} + \gamma)p_n r_n. \quad (5.7) $$
+
+After taking the average with respect to the stationary state in (5.6), we conclude
+
+$$ \langle r_1^2 \rangle_{ss} = \langle p_0^2 \rangle_{ss} - \langle p_1 p_0 \rangle_{ss} + \frac{1}{2}(\tilde{\gamma} + \gamma)\langle p_0 r_1 \rangle_{ss}. \quad (5.8) $$
+
+Recalling the definition of the current (2.10) and then invoking (5.4), we get
+
+$$
+\begin{aligned}
+\langle r_1^2 \rangle_{ss} &= \langle p_0^2 \rangle_{ss} - \langle p_1 p_0 \rangle_{ss} - \frac{1}{2}(\tilde{\gamma} + \gamma)(\langle j_{0,1} \rangle_{ss} + \frac{\gamma}{2}(\langle p_1^2 \rangle_{ss} - \langle p_0^2 \rangle_{ss})) \\
+&= \langle p_0^2 \rangle_{ss} - \langle p_1 p_0 \rangle_{ss} - \frac{\tilde{\gamma}}{4}(\tilde{\gamma} + \gamma)(T_- - \langle p_0^2 \rangle_{ss}) - \frac{\gamma}{4}(\tilde{\gamma} + \gamma)(\langle p_1^2 \rangle_{ss} - \langle p_0^2 \rangle_{ss}).
+\end{aligned}
+$$
+
+Using Young's inequality
+
+$$ |\langle p_1 p_0 \rangle_{ss}| \leq \frac{A}{2} \langle p_1^2 \rangle_{ss} + \frac{1}{2A} \langle p_0^2 \rangle_{ss}, $$
+
+with $A = \frac{\gamma}{2}(\gamma + \tilde{\gamma})$, we get
+
+$$ \langle r_1^2 \rangle_{ss} \leq \left( \frac{1}{\gamma(\gamma + \tilde{\gamma})} + 1 + \frac{1}{4}(\gamma + \tilde{\gamma})^2 \right) \langle p_0^2 \rangle_{ss}. \quad (5.9) $$
+
+From (5.2) we conclude that $\langle r_1^2 \rangle_{ss}$ is bounded.
+
+To estimate $\langle r_n^2 \rangle_{ss}$, note that from (5.7) we write
+
+$$ \langle r_n^2 \rangle_{ss} = \langle p_n^2 \rangle_{ss} - \langle p_n p_{n-1} \rangle_{ss} + \bar{\tau}_+ \langle r_n \rangle_{ss} - \frac{1}{2} (\tilde{\gamma} + \gamma) \langle p_n r_n \rangle_{ss}. \quad (5.10) $$
+
+We use again Young's inequality
+
+$$ |\langle p_n p_{n-1} \rangle_{ss}| \leq \frac{A}{2} \langle p_n^2 \rangle_{ss} + \frac{1}{2A} \langle p_{n-1}^2 \rangle_{ss}, $$
+---PAGE_BREAK---
+
+with $A = 1/(2\gamma)$ and we get
+
+$$ \langle r_n^2 \rangle_{ss} \le \left(1 + \frac{1}{4\gamma}\right) \langle p_n^2 \rangle_{ss} + \gamma \langle p_{n-1}^2 \rangle_{ss} + \bar{\tau}_+ \langle r_n \rangle_{ss} - \frac{1}{2} (\tilde{\gamma} + \gamma) \langle p_n r_n \rangle_{ss}. \quad (5.11) $$
+
+To replace $\langle p_{n-1}^2 \rangle_{ss}$, note that
+
+$$ n^{-2}L(p_n^2) = 2(\bar{\tau}_+ - r_n)p_n + \gamma(p_{n-1}^2 - p_n^2) + \tilde{\gamma}(T_+ - p_n^2). \quad (5.12) $$
+
+Taking the average with respect to the stationary state, we obtain:
+
+$$ \gamma \langle p_{n-1}^2 \rangle_{ss} = 2 \langle r_n p_n \rangle_{ss} - 2 \bar{\tau}_+ \langle p_n \rangle_{ss} + (\gamma + \tilde{\gamma}) \langle p_n^2 \rangle_{ss} - \tilde{\gamma} T_+, \quad (5.13) $$
+
+which, in (5.11), gives
+
+$$ \langle r_n^2 \rangle_{ss} \le \left(1 + \frac{1}{4\gamma} + \gamma + \tilde{\gamma}\right) \langle p_n^2 \rangle_{ss} + \bar{\tau}_+ \left(\langle r_n \rangle_{ss} - 2\langle p_n \rangle_{ss}\right) + \frac{1}{2}(4 - \tilde{\gamma} - \gamma)\langle p_n r_n \rangle_{ss} - \tilde{\gamma}T_+. $$
+
+Using again Young's inequality
+
+$$ |\langle p_n r_n \rangle_{ss}| \leqslant \frac{A}{2} \langle p_n^2 \rangle_{ss} + \frac{1}{2A} \langle r_n^2 \rangle_{ss} $$
+
+with $A = \frac{1}{2}|4 - \gamma - \tilde{\gamma}|$, we finally arrive at
+
+$$ \frac{1}{2} \langle r_n^2 \rangle_{ss} \leqslant \left(1 + \frac{1}{4\gamma} + \gamma + \tilde{\gamma} + \frac{1}{4}(4 - \gamma - \tilde{\gamma})^2\right) \langle p_n^2 \rangle_{ss} + \bar{\tau}_+ (\langle r_n \rangle_{ss} - 2\langle p_n \rangle_{ss}) - \tilde{\gamma} T_+. \quad (5.14) $$
+
+We now invoke (5.2), (4.1) and (4.2) to conclude the bound on $\langle r_n^2 \rangle_{ss}$, which combined with the already obtained bound on $\langle r_1^2 \rangle_{ss}$ yields (5.3). $\square$
+
+**Corollary 5.4** (Second moments at the boundaries: Part II). There exists $C = C(\gamma, \tilde{\gamma}, \bar{\tau}_+, T_+, T_-) > 0$, such that
+
+$$ \langle p_1^2 \rangle_{ss} + \langle p_{n-1}^2 \rangle_{ss} \le C \quad \text{for all } n \ge 1. \quad (5.15) $$
+
+*Proof.* To bound $\langle p_{n-1}^2 \rangle_{ss}$ we use formula (5.13). From the elementary inequality $|\langle r_n p_n \rangle_{ss}| \le \frac{1}{2}(\langle p_n^2 \rangle_{ss} + \langle r_n^2 \rangle_{ss})$ and Proposition 5.2, we easily conclude that $\langle p_{n-1}^2 \rangle_{ss}$ is bounded.
+
+The bound for $\langle p_1^2 \rangle_{ss}$ is obtained similarly. First, note that
+
+$$ n^{-2} L(p_0^2) = 2r_1p_0 + \gamma(p_1^2 - p_0^2) + \tilde{\gamma}(T_- - p_0^2). \quad (5.16) $$
+
+Taking the average with respect to the stationary state, using the inequality $|\langle r_1 p_0 \rangle_{ss}| \le \frac{1}{2}(\langle r_1^2 \rangle_{ss} + \langle p_0^2 \rangle_{ss})$, and invoking Proposition 5.2, we conclude the desired bound on $\langle p_1^2 \rangle_{ss}$. Thus (5.15) follows. $\square$
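+
+For the reader's convenience we spell out the averaged identity used in this step. Taking the stationary average of (5.16) gives
+
+$$ 0 = 2\langle r_1 p_0 \rangle_{ss} + \gamma(\langle p_1^2 \rangle_{ss} - \langle p_0^2 \rangle_{ss}) + \tilde{\gamma}(T_- - \langle p_0^2 \rangle_{ss}), \quad \text{that is,} \quad \gamma \langle p_1^2 \rangle_{ss} = -2\langle r_1 p_0 \rangle_{ss} + (\gamma + \tilde{\gamma}) \langle p_0^2 \rangle_{ss} - \tilde{\gamma} T_-, $$
+
+so the claimed bound on $\langle p_1^2 \rangle_{ss}$ indeed follows from $|\langle r_1 p_0 \rangle_{ss}| \le \frac{1}{2}(\langle r_1^2 \rangle_{ss} + \langle p_0^2 \rangle_{ss})$ and Proposition 5.2.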
+
+In the next proposition we provide a bound on the energy current under the stationary state, which will be further refined in Proposition 5.9.
+
+**Proposition 5.5** (The stationary current: Part I). There exists a constant $C = C(\gamma, \tilde{\gamma}, \bar{\tau}_+, T_+, T_-) > 0$, such that the stationary current satisfies
+
+$$ |\bar{j}_s| \leqslant \frac{C}{n} \quad \text{for all } n \geqslant 1. \quad (5.17) $$
+---PAGE_BREAK---
+
+*Proof.* We sum the identity (4.3) from $x=1$ to $n-1$ and apply (4.4) to express $\phi(n)$ and $\phi(1)$. In this way we get
+
+$$
+\begin{align}
+(n-1)\bar{j}_s &= \phi(n) - \phi(1) \\
+&= \left\langle -\frac{p_{n-1}p_n + r_n^2}{2\gamma} - \frac{\gamma(p_{n-1}^2 + p_n^2)}{4} \right\rangle_{ss} + \left\langle \frac{p_1p_0 + r_1^2}{2\gamma} + \frac{\gamma(p_1^2 + p_0^2)}{4} \right\rangle_{ss}. \tag{5.18}
+\end{align}
+$$
+
+Therefore, (5.17) follows from the elementary inequalities
+
+$$ |\langle p_{x-1}p_x \rangle_{ss}| \le \frac{1}{2} (\langle p_x^2 \rangle_{ss} + \langle p_{x-1}^2 \rangle_{ss}) \quad \text{for } x=n \text{ and } x=1 $$
+
+together with the bounds obtained in Proposition 5.2 and Corollary 5.4. $\square$
+
+Proposition 5.5 permits us to obtain a better estimate on the entropy production.
+Namely, combining (5.1), (5.4) and (5.17), we conclude the following.
+
+**Corollary 5.6.** There exists $C = C(\gamma, \tilde{\gamma}, \bar{\tau}_+, T_+, T_-) > 0$, such that
+
+$$ \gamma \sum_{x=0}^{n-1} D_x(f_s) + \tilde{\gamma}T_- \left\langle \frac{(\partial_{p_0}(f_s/h))^2}{(f_s/h)} \right\rangle_{T_-} + \tilde{\gamma}T_+ \left\langle \frac{(\partial_{p_n}(f_s))^2}{f_s} \right\rangle_{T_+} \le \frac{C}{n}, \quad n \ge 1. \quad (5.19) $$
+
+Thanks to Proposition 5.5 we are now able to estimate the covariances of momenta and stretches at the boundaries as follows:
+
+**Proposition 5.7** (Second moment at the boundaries: Part III). *There exists*
+$C = C(\gamma, \tilde{\gamma}, \bar{\tau}_+, T_+, T_-) > 0$, *such that, at the left boundary point*
+
+$$ |\langle p_0 p_1 \rangle_{ss}| + |\langle r_1 p_1 \rangle_{ss}| + |\langle r_1 p_0 \rangle_{ss}| \le \frac{C}{\sqrt{n}}, \quad n \ge 1, \qquad (5.20) $$
+
+and at the right boundary point
+
+$$ | \langle p_n p_{n-1} \rangle_{ss} | + | \langle r_n p_n \rangle_{ss} | + | \langle r_n p_{n-1} \rangle_{ss} | \le \frac{C}{\sqrt{n}}, \quad n \ge 1. \qquad (5.21) $$
+
+*Proof.* Integration by parts yields
+
+$$ \langle p_0 p_1 \rangle_{ss} = \langle \langle p_0 p_1 (f_s/h) \rangle \rangle_{T_-} = T_- \langle \langle p_1 \partial_{p_0} (f_s/h) \rangle \rangle_{T_-}. $$
+
+To estimate the right hand side, we use the entropy production bound (5.19) and
+the estimate (5.15) on $\langle p_1^2 \rangle_{ss}$. As a result we get
+
+$$ |\langle p_0 p_1 \rangle_{ss}| = T_- |\langle\langle p_1 \partial_{p_0} (f_s/h) \rangle\rangle_{T_-}| \le T_- \langle p_1^2 \rangle_{ss}^{1/2} \left\langle\!\!\left\langle \frac{(\partial_{p_0}(f_s/h))^2}{f_s/h} \right\rangle\!\!\right\rangle_{T_-}^{1/2} \le \frac{C}{\sqrt{n}}. $$
+
+Similarly,
+
+$$ |\langle p_n p_{n-1} \rangle_{ss}| = T_+ |\langle\langle p_{n-1} \partial_{p_n} f_s \rangle\rangle_{T_+}| \le T_+ \langle p_{n-1}^2 \rangle_{ss}^{1/2} \left\langle\!\!\left\langle \frac{(\partial_{p_n} f_s)^2}{f_s} \right\rangle\!\!\right\rangle_{T_+}^{1/2} \le \frac{C}{\sqrt{n}}. $$
+
+Finally, note that, for any $x \in \mathbb{I}_n$,
+
+$$ n^{-2}L(r_x^2) = 2(p_x - p_{x-1})r_x. \qquad (5.22) $$
+---PAGE_BREAK---
+
+Therefore, upon averaging with respect to the NESS, we get
+
+$$
+\langle p_x r_x \rangle_{ss} = \langle p_{x-1} r_x \rangle_{ss}, \quad x \in \mathbb{I}_n. \tag{5.23}
+$$
+
+In particular, applying (5.23) for $x=1$ and $x=n$, we see that the only
+quantities that remain to be estimated are $|\langle r_1 p_0 \rangle_{ss}|$ and $|\langle r_n p_n \rangle_{ss}|$. This is done using
+the entropy production bound (5.19) in the same manner as before, namely:
+
+$$
+|\langle r_1 p_0 \rangle_{ss}| = T_- | \langle \! \langle r_1 \partial_{p_0} (f_s/h) \rangle \! \rangle_{T_-} | \le T_- \langle r_1^2 \rangle_{ss}^{1/2} \left\langle \frac{(\partial_{p_0} (f_s/h))^2}{(f_s/h)} \right\rangle_{T_-}^{1/2} \le \frac{C}{\sqrt{n}},
+$$
+
+from (5.3) and (5.1). We leave the last estimate for the reader.
+□
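+
+The last estimate follows the same pattern; for completeness, a sketch:
+
+$$ |\langle r_n p_n \rangle_{ss}| = T_+ |\langle\!\langle r_n \partial_{p_n} f_s \rangle\!\rangle_{T_+}| \le T_+ \langle r_n^2 \rangle_{ss}^{1/2} \left\langle\!\!\left\langle \frac{(\partial_{p_n} f_s)^2}{f_s} \right\rangle\!\!\right\rangle_{T_+}^{1/2} \le \frac{C}{\sqrt{n}}, $$
+
+by the second moment bound (5.3) and the entropy production bound (5.19).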
+
+We now have all the ingredients necessary to prove convergence of the second
+moments at the boundaries:
+
+**Corollary 5.8** (Second moments at the boundaries: Part IV). *The following*
+*limits hold: at the left boundary point,*
+
+$$
+\langle p_x^2 \rangle_{ss} \xrightarrow[n \to \infty]{} T_- \quad \text{for } x \in \{0,1\}, \tag{5.24}
+$$
+
+$$
+\langle r_1^2 \rangle_{ss} \xrightarrow[n \to \infty]{} T_-, \qquad (5.25)
+$$
+
+$$
+\langle r_1 r_2 \rangle_{ss} \xrightarrow[n \to \infty]{} 0, \qquad (5.26)
+$$
+
+and at the right boundary point,
+
+$$
+\langle p_x^2 \rangle_{ss} \xrightarrow[n \to \infty]{} T_+ \quad \text{for } x \in \{n-1, n\}, \tag{5.27}
+$$
+
+$$
+\langle r_n^2 \rangle_{ss} \xrightarrow[n \to \infty]{} T_+ + \bar{\tau}_+^2, \qquad (5.28)
+$$
+
+$$
+\langle r_{n-1} r_n \rangle_{ss} \xrightarrow[n \to \infty]{} \bar{\tau}_+^2. \tag{5.29}
+$$
+
+*Proofs of (5.24) and (5.27).* From (5.4) and Proposition 5.5 we get $\langle p_0^2 \rangle_{ss} \to T_-$.
+Thanks to (5.16) and (5.20) we deduce $\langle p_1^2 \rangle_{ss} \to T_-$, which in turn proves (5.24).
+A similar argument proves (5.27). Indeed, from (5.5) and Proposition 5.5, we get
+$\langle p_n^2 \rangle_{ss} \to T_+$ and from (5.13) and (5.21), we deduce $\langle p_{n-1}^2 \rangle_{ss} \to T_+$.
+
+*Proofs of (5.25) and (5.28).* The limit (5.25) follows directly from (5.8) and Proposition 5.7. From (4.2) we conclude that $\langle r_n \rangle_{ss} \to \bar{\tau}_+$. Using then (5.10) together with Proposition 5.7 we conclude (5.28).
+
+*Proofs of (5.26) and (5.29).* Note that
+
+$$
+n^{-2} L(r_1 p_1) = (p_1 - p_0)p_1 + (r_2 - r_1)r_1 - \gamma r_1 p_1 \quad (5.30)
+$$
+
+$$
+n^{-2}L(r_n p_{n-1}) = (p_n - p_{n-1})p_{n-1} + (r_n - r_{n-1})r_n - \gamma r_n p_{n-1}. \quad (5.31)
+$$
+
+Taking the average with respect to the stationary state, and using Proposition 5.7 together with (5.24) proved above, we get
+
+$$
+\langle r_1^2 \rangle_{ss} - \langle r_1 r_2 \rangle_{ss} \xrightarrow[n \to \infty]{} T_- \qquad (5.32)
+$$
+
+and
+
+$$
+\langle r_n^2 \rangle_{ss} - \langle r_{n-1} r_n \rangle_{ss} \xrightarrow[n \to \infty]{} T_+. \quad (5.33)
+$$
+---PAGE_BREAK---
+
+Using the already proved limits (5.25) and (5.28) we conclude (5.26) and (5.29). $\square$
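+
+In detail, the stationary average of (5.30) reads
+
+$$ 0 = \langle p_1^2 \rangle_{ss} - \langle p_0 p_1 \rangle_{ss} + \langle r_1 r_2 \rangle_{ss} - \langle r_1^2 \rangle_{ss} - \gamma \langle r_1 p_1 \rangle_{ss}, $$
+
+and, since $\langle p_0 p_1 \rangle_{ss}$ and $\langle r_1 p_1 \rangle_{ss}$ vanish in the limit by Proposition 5.7 while $\langle p_1^2 \rangle_{ss} \to T_-$ by (5.24), this yields (5.32); the limit (5.33) is obtained from (5.31) in the same way.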
+
+**Proposition 5.9** (The stationary current: Part II). *The following limits hold:*
+
+$$ \phi(1) \xrightarrow{n \to \infty} -\frac{1}{2}(\gamma^{-1} + \gamma)T_{-} \quad (5.34) $$
+
+$$ \phi(n) \xrightarrow{n \to \infty} -\frac{1}{2}(\gamma^{-1} + \gamma)T_{+} - \frac{\bar{\tau}_{+}^2}{2\gamma}. \qquad (5.35) $$
+
+*In consequence, (3.1) holds and Theorem 3.2 is proved.*
+
+*Proof.* Limits in (5.34) and (5.35) follow from formula (4.4), Proposition 5.7 and the limits computed in Corollary 5.8. The limit (3.1) is a consequence of (5.34), (5.35) and formula (5.18). $\square$
+
+**5.3. Energy bounds.** We now provide bounds on the total energy under the stationary state:
+
+**Proposition 5.10** (Energy bounds). *There exists $C = C(\gamma, \tilde{\gamma}, \bar{\tau}_+, T_+, T_-) > 0$, such that*
+
+$$ \frac{1}{n} \sum_{x=1}^{n} \langle p_x^2 \rangle_{ss} \le C \quad \text{and} \quad \frac{1}{n} \sum_{x=1}^{n} \langle r_x^2 \rangle_{ss} \le C, \quad n \ge 1. \qquad (5.36) $$
+
+*Proof.* From the current decomposition given by (4.3), we easily get that
+
+$$ \phi(x) = (x-1)\bar{j}_s + \phi(1), \quad \text{for any } x \in \mathbb{I}_n. $$
+
+Summing over $x$, this gives
+
+$$ \frac{1}{n} \sum_{x=1}^{n} \phi(x) = \frac{1}{n} \sum_{x=2}^{n} (x-1)\bar{j}_s + \phi(1) = \frac{n(n-1)}{2n} \bar{j}_s + \phi(1). $$
+
+Therefore, recalling (5.34) and (3.1), we get
+
+$$ \frac{1}{n} \sum_{x=1}^{n} \phi(x) \xrightarrow{n \to \infty} -\frac{1}{4}(\gamma^{-1} + \gamma)(T_{+} + T_{-}) - \frac{\bar{\tau}_{+}^2}{4\gamma}. \quad (5.37) $$
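+
+This limit can also be seen directly: by (5.18) we have $(n-1)\bar{j}_s = \phi(n) - \phi(1)$, so that
+
+$$ \frac{n-1}{2}\bar{j}_s + \phi(1) = \frac{\phi(n) + \phi(1)}{2} \xrightarrow[n \to \infty]{} -\frac{1}{4}(\gamma^{-1} + \gamma)(T_+ + T_-) - \frac{\bar{\tau}_+^2}{4\gamma}, $$
+
+by the endpoint limits of Proposition 5.9.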
+
+From (4.4), we have
+
+$$ \frac{1}{n} \sum_{x=1}^{n} \phi(x) = -\frac{1}{2\gamma n} \sum_{x=1}^{n} \langle r_x^2 \rangle_{ss} - \frac{1}{2\gamma n} \sum_{x=1}^{n} \langle p_x p_{x-1} \rangle_{ss} - \frac{\gamma}{2n} \sum_{x=1}^{n} \langle p_x^2 \rangle_{ss} + \frac{\gamma}{4n} (\langle p_n^2 \rangle_{ss} - \langle p_0^2 \rangle_{ss}). \quad (5.38) $$
+
+To compute the limit of the second sum in the right hand side of (5.38), we first write:
+
+$$ n^{-2} L(p_{x-1}p_x) = (r_{x+1} - r_x)p_{x-1} + (r_x - r_{x-1})p_x - 2\gamma p_x p_{x-1}, \quad x \in \{2, \dots, n-1\}. \quad (5.39) $$
+
+Thus, taking the average with respect to the stationary state and subsequently using (5.23), we obtain
+
+$$
+\begin{aligned}
+2\gamma\langle p_x p_{x-1} \rangle_{ss} &= \langle p_x r_x \rangle_{ss} + \langle p_{x-1} r_{x+1} \rangle_{ss} - \langle p_x r_{x-1} \rangle_{ss} - \langle p_{x-1} r_x \rangle_{ss} \\
+&= \langle p_{x-1} r_{x+1} \rangle_{ss} - \langle p_x r_{x-1} \rangle_{ss}.
+\end{aligned}
+\quad (5.40) $$
+---PAGE_BREAK---
+
+On the other hand
+
+$$
+n^{-2}L(r_x r_{x+1}) = (p_x - p_{x-1})r_{x+1} + (p_{x+1} - p_x)r_x, \quad x \in \{1, \dots, n-1\}.
+$$
+
+Hence, taking the average and using again (5.23), we get
+
+$$
+\begin{align*}
+0 &= \langle p_{x+1}r_x \rangle_{ss} + \langle p_x r_{x+1} \rangle_{ss} - \langle p_x r_x \rangle_{ss} - \langle p_{x-1}r_{x+1} \rangle_{ss} \\
+&= \langle p_{x+1}r_x \rangle_{ss} + \langle p_{x+1}r_{x+1} \rangle_{ss} - \langle p_x r_x \rangle_{ss} - \langle p_{x-1}r_{x+1} \rangle_{ss},
+\end{align*}
+$$
+
+which yields
+
+$$
+\langle p_{x+1} r_{x+1} \rangle_{ss} - \langle p_x r_x \rangle_{ss} = \langle p_{x-1} r_{x+1} \rangle_{ss} - \langle p_{x+1} r_x \rangle_{ss}
+$$
+
+for any $x \in \{2, \dots, n-1\}$. Combining with (5.40) we get
+
+$$
+2\gamma\langle p_x p_{x-1} \rangle_{ss} = \langle p_{x+1}r_{x+1} \rangle_{ss} - \langle p_x r_x \rangle_{ss} + \langle p_{x+1}r_x \rangle_{ss} - \langle p_x r_{x-1} \rangle_{ss}, \quad (5.41)
+$$
+
+for any $x \in \{2, \dots, n-1\}$. Summing over $x$, one gets:
+
+$$
+\sum_{x=2}^{n-1} \langle p_x p_{x-1} \rangle_{ss} = \frac{1}{2\gamma} (\langle p_n r_n \rangle_{ss} - \langle p_2 r_2 \rangle_{ss} + \langle p_n r_{n-1} \rangle_{ss} - \langle p_2 r_1 \rangle_{ss}). \quad (5.42)
+$$
+
+To compute the limit as $n \to \infty$, we need to estimate the covariances appearing on
+the right hand side. The covariance $\langle p_n r_n \rangle_{ss}$ can be estimated thanks to
+Proposition 5.7. We still need bounds on the covariances $\langle p_2 r_2 \rangle_{ss}$, $\langle p_n r_{n-1} \rangle_{ss}$ and
+$\langle p_2 r_1 \rangle_{ss}$. To deal with them, write
+
+$$
+n^{-2} L(p_0 p_1) = (r_2 - r_1) p_0 + r_1 p_1 - \frac{1}{2} (3\gamma + \tilde{\gamma}) p_0 p_1
+\quad (5.43)
+$$
+
+$$
+n^{-2} L(r_1 r_2) = (p_1 - p_0) r_2 + (p_2 - p_1) r_1
+\qquad (5.44)
+$$
+
+$$
+n^{-2} L(p_{n-1}p_n) = (\bar{\tau}_+ - r_n)p_{n-1} + (r_n - r_{n-1})p_n - \frac{1}{2}(3\gamma + \tilde{\gamma})p_{n-1}p_n. \quad (5.45)
+$$
+
+Taking averages with respect to the stationary state and summing (5.43) and
+(5.44) side by side gives (using $\langle p_2 r_2 \rangle_{ss} = \langle p_1 r_2 \rangle_{ss}$ from (5.23))
+
+$$
+\langle p_2 r_2 \rangle_{ss} + \langle p_2 r_1 \rangle_{ss} = \langle p_0 r_1 \rangle_{ss} + \frac{1}{2}(3\gamma + \tilde{\gamma})\langle p_0 p_1 \rangle_{ss} \underset{n\to\infty}{\longrightarrow} 0, \quad (5.46)
+$$
+
+from Proposition 5.7. Moreover, (5.45) gives (using $\langle p_n r_n \rangle_{ss} = \langle p_{n-1} r_n \rangle_{ss}$)
+
+$$
+\langle r_{n-1}p_n \rangle_{ss} = \bar{\tau}_+ \langle p_{n-1} \rangle_{ss} - \frac{1}{2}(3\gamma + \tilde{\gamma})\langle p_{n-1}p_n \rangle_{ss} \xrightarrow{n\to\infty} 0, \quad (5.47)
+$$
+
+from (4.1) and Proposition 5.7. Therefore, we have proved that (5.42) vanishes
+as $n \to \infty$. In fact, due to the estimates obtained in Proposition 5.7 we have even
+proved that there exists a constant $C = C(\gamma, \tilde{\gamma}, \bar{\tau}_{+}, T_{+}, T_{-}) > 0$, such that
+
+$$
+\left| \sum_{x=1}^{n} \langle p_x p_{x-1} \rangle_{ss} \right| \leqslant \frac{C}{\sqrt{n}}, \quad n \geqslant 1. \qquad (5.48)
+$$
+---PAGE_BREAK---
+
+From (5.38) it follows that
+
+$$
+\begin{align*}
+& \frac{1}{2\gamma n} \sum_{x=1}^{n} \langle r_x^2 \rangle_{ss} + \frac{\gamma}{2n} \sum_{x=1}^{n} \langle p_x^2 \rangle_{ss} \\
+&= -\frac{1}{2\gamma n} \sum_{x=1}^{n} \langle p_x p_{x-1} \rangle_{ss} - \frac{1}{n} \sum_{x=1}^{n} \phi(x) + \frac{\gamma}{4n} (\langle p_n^2 \rangle_{ss} - \langle p_0^2 \rangle_{ss}) \\
+& \xrightarrow[n \to \infty]{} \frac{1}{4} (\gamma^{-1} + \gamma)(T_+ + T_-) + \frac{\bar{\tau}_+^2}{4\gamma},
+\end{align*}
+$$
+
+due to (5.37) and (5.48). This in particular implies estimate (5.36). $\square$
+
+Thanks to the energy bounds, we are finally able to prove one further conver-
+gence, which will be essential in establishing Proposition 4.4.
+
+**Proposition 5.11.** For any continuous test function $G : \mathbb{I} \to \mathbb{R}$ we have
+
+$$
+\frac{1}{n} \sum_{x=1}^{n-1} G\left(\frac{x}{n}\right) \langle p_x p_{x+1} \rangle_{ss} \xrightarrow{n \to \infty} 0. \qquad (5.49)
+$$
+
+*Proof.* Assume first that $G \in C^1(\mathbb{I})$. For brevity we denote $G_x := G(x/n)$ for any $x \in \mathbb{I}_n$ and $\psi(x) := \langle p_{x+1}r_{x+1} \rangle_{ss} + \langle p_{x+1}r_x \rangle_{ss}$. Then (5.41) says that
+
+$$
+\langle p_x p_{x+1} \rangle_{ss} = \frac{1}{2\gamma} (\psi(x+1) - \psi(x)), \quad \text{for any } x \in \{1, \dots, n-2\}.
+$$
+
+Therefore, by an application of the summation by parts formula, we get
+
+$$
+\begin{align}
+\frac{1}{n} \sum_{x=1}^{n-1} G_x \langle p_x p_{x+1} \rangle_{ss} &= \frac{1}{2\gamma n^2} \sum_{x=2}^{n-2} n(G_{x-1} - G_x)\psi(x) \tag{5.50}\\
+&\quad + \frac{1}{n} \langle p_n p_{n-1} \rangle_{ss} G_{n-1} + \frac{1}{2\gamma n} (\langle p_n r_n \rangle_{ss} + \langle p_n r_{n-1} \rangle_{ss}) \tag{5.51}\\
+&\quad - \frac{1}{2\gamma n} (\langle p_2 r_2 \rangle_{ss} + \langle p_2 r_1 \rangle_{ss}) G_1. \tag{5.52}
+\end{align}
+$$
+
+The boundary terms (5.51) and (5.52) vanish, as $n \to \infty$, thanks to (5.21), (5.46) and (5.47). To deal with the sum in the right hand side of (5.50) note that, since $G \in C^1(\mathbb{I})$, we have
+
+$$
+\sup_{x \in \{2, \dots, n-2\}} n |G_{x-1} - G_x| \le \|G'\|_\infty. \quad (5.53)
+$$
+
+Since
+
+$$
+|\psi(x)| \le 2(2\langle p_{x+1}^2 \rangle_{ss} + \langle r_x^2 \rangle_{ss} + \langle r_{x+1}^2 \rangle_{ss}), \quad x \in \{1, \dots, n-1\}
+$$
+
+we conclude that
+
+$$
+\frac{1}{2\gamma n^2} \left| \sum_{x=2}^{n-2} n(G_{x-1}-G_x)\psi(x) \right| \le \frac{C}{n} \left\{ \frac{1}{n} \sum_{x=1}^{n} (\langle p_x^2 \rangle_{ss} + \langle r_x^2 \rangle_{ss}) \right\}, \quad (5.54)
+$$
+
+which vanishes, as $n \to +\infty$, thanks to the energy bound (5.36). This proves (5.49)
+for any test function $G \in C^1(\mathbb{I})$. The result can be extended to all continuous
+functions by the standard density argument and the energy bound (5.36). $\square$
+
+**5.4.** *Proof of Proposition 4.4.* We now have at our disposal all components needed to prove Proposition 4.4 and thus conclude the proof of Theorem 3.3. There are three convergences to prove:
+---PAGE_BREAK---
+
+*Proof of (4.9).* From Proposition 4.2 (in particular (4.5)), the function $\phi$ is linear and thus completely determined by its values at the endpoints. More precisely,
+
+$$ \phi(x) = \frac{\phi(n) - \phi(1)}{n-1} x + \frac{n\phi(1) - \phi(n)}{n-1}, \quad \text{for any } x \in \mathbb{I}_n. $$
+
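+As a quick check, the right hand side of this formula interpolates the endpoint values:
+
+$$ \frac{\phi(n) - \phi(1)}{n-1} \cdot 1 + \frac{n\phi(1) - \phi(n)}{n-1} = \frac{(n-1)\phi(1)}{n-1} = \phi(1), \qquad \frac{\phi(n) - \phi(1)}{n-1} \cdot n + \frac{n\phi(1) - \phi(n)}{n-1} = \frac{(n-1)\phi(n)}{n-1} = \phi(n). $$
+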
+Since, from Proposition 5.9, the values $\phi(1)$ and $\phi(n)$ are of order 1 as $n \to \infty$, we see that
+
+$$ \phi(x) \approx (\phi(n) - \phi(1)) \frac{x}{n} + \phi(1), \quad \text{as } n \to \infty, $$
+
+and therefore we easily obtain
+
+$$ \frac{1}{n} \sum_{x \in \mathbb{I}_n} G\left(\frac{x}{n}\right) \phi(x) \underset{n \to \infty}{\longrightarrow} \int_{\mathbb{I}} G(u) \left\{ -\frac{\bar{\tau}_+^2}{2\gamma} u - \frac{1}{2}(\gamma^{-1} + \gamma)[(T_+ - T_-)u + T_-] \right\} du, $$
+
+which proves (4.9).
+
+*Proof of (4.10).* Concerning $\mathcal{H}_n^\nabla(G)$ we use a summation by parts formula (with the notation $G_x = G(x/n)$), which leads to:
+
+$$ \mathcal{H}_n^\nabla(G) = \frac{\gamma^2}{2(1+\gamma^2)} \left( \frac{G_n}{n} \langle p_n^2 \rangle_{ss} - \frac{G_1}{n} \langle p_0^2 \rangle_{ss} + \frac{1}{n^2} \sum_{x=1}^{n-1} n(G_x - G_{x+1}) \langle p_x^2 \rangle_{ss} \right). $$
+
+The boundary terms in the right hand side vanish, as $n \to +\infty$, since $\langle p_n^2 \rangle_{ss}$ and $\langle p_0^2 \rangle_{ss}$ are bounded, due to (5.2). To deal with the limit of the last sum in the right hand side, we can repeat the argument made in (5.53)-(5.54), which shows that the expression vanishes. Thus (4.10) holds.
+
+*Proof of (4.11).* This is a consequence of (5.49).
+
+## APPENDIX A. NON-STATIONARY BEHAVIOUR
+
+In this section we explain how to derive (2.12) and (2.13): while the derivation of (2.12) is rigorous, in order to obtain (2.13) we need to assume a form of local equilibrium that allows for the local equipartition of kinetic and potential energy, see (A.46) below. In the stationary setting this term corresponds to $\mathcal{H}_n^m(G)$ in (4.8) and, similarly, does not appear in the case $\gamma = 1$. Unfortunately, quite analogously to the stationary situation, the relative entropy method does not allow us to treat the case $\gamma \neq 1$. Throughout the present section we allow $\bar{\tau}_+(t)$ to be a $C^1$ function.
+
+### A.1. Preliminaries.
+In the present section we establish non-stationary asymptotics corresponding to Corollary 5.8. They will be useful in proving the hydrodynamic limit in Section A.2. Since $\nu_{T_+}$ is not stationary, except for the corresponding equilibrium boundary conditions, the relative entropy $H_n(t)$ defined as
+
+$$ H_n(t) := \int f_n(t) \log f_n(t) d\nu_{T_+} $$
+
+is not strictly decreasing in time, where hereafter $f_n(t)$ is the density of the $\Omega_n$-valued random variable $(\mathbf{r}_n(t), \mathbf{p}_n(t))$ (recall (2.7)), with respect to $\nu_{T_+}$. However,
+---PAGE_BREAK---
+
+the effect of the boundary condition can be controlled and one can obtain a linear
+in $n$ bound at any time $t$, i.e. (see the proof of Proposition 4.1 in [20] for the details
+of the argument)
+
+$$
+H_n(t) \le C(t)n, \quad n \ge 1, t \ge 0. \tag{A.1}
+$$
+
+Both here and throughout the remainder of the paper $C(t)$ shall denote a generic constant, always independent of $n$ and locally bounded in $t$.
+
+Furthermore, one obtains the bounds on the Dirichlet form controlling the entropy production, similar to (5.19),
+
+$$
+\int_0^t ds \left[ \gamma \sum_{x=0}^{n-1} \mathcal{D}_x (f_n(s)) + \tilde{\gamma} T_- \left\langle \frac{(\partial_{p_0} f_n(s)/h)^2}{f_n(s)/h} \right\rangle_{T_-} + \tilde{\gamma} T_+ \left\langle \frac{(\partial_{p_n} f_n(s))^2}{f_n(s)} \right\rangle_{T_+} \right] \\
+\le \frac{C(t)}{n}, \quad (\text{A.2})
+$$
+
+where $h = g_{T_-}/g_{T_+}$ and $n \ge 1$, $t \ge 0$; see Proposition 4.1 and Appendix D of [20] for the proof.
+
+Below we list some consequences of the above bounds on the entropy and
+Dirichlet forms.
+
+**Lemma A.1.** The following equalities hold:
+
+$$
+\lim_{n \to \infty} \mathbb{E} \left[ \int_0^t (p_n^2(s) - T_+) ds \right] = 0, \quad \lim_{n \to \infty} \mathbb{E} \left[ \int_0^t (p_0^2(s) - T_-) ds \right] = 0 \quad (\text{A.3})
+$$
+
+and
+
+$$
+\lim_{n \to \infty} \mathbb{E} \left[ \int_0^t j_{n-1,n}(s) ds \right] = 0, \quad \lim_{n \to \infty} \mathbb{E} \left[ \int_0^t j_{0,1}(s) ds \right] = 0. \quad (\text{A.4})
+$$
+
+*Proof.* Note that
+
+$$
+\begin{align*}
+\mathbb{E}\left[\int_0^t (p_n^2(s) - T_+) ds\right]
+&= \int_0^t ds \int (p_n^2 - T_+) f_n(s) d\nu_{T_+} \\
+&= T_+ \int_0^t ds \int p_n \partial_{p_n} f_n(s) d\nu_{T_+}.
+\end{align*}
+$$
+
+Thus, by the Cauchy-Schwarz inequality
+
+$$
+\left| \mathbb{E} \left[ \int_0^t (p_n^2(s) - T_+) ds \right] \right|
+\le T_+ \left( \int_0^t ds \int p_n^2 f_n(s) d\nu_{T_+} \right)^{1/2} \left( \int_0^t ds \int \frac{(\partial_{p_n} f_n(s))^2}{f_n(s)} d\nu_{T_+} \right)^{1/2} . \quad (\text{A.5})
+$$
+
+By the entropy inequality, see e.g. [14, p. 338], we can write (recall (2.2))
+
+$$
+\int \mathcal{H}_n(\mathbf{r}, \mathbf{p}) f_n(s, \mathbf{r}, \mathbf{p}) \nu_{T_+} (\mathrm{d}\mathbf{r}, \mathrm{d}\mathbf{p}) \\
+\leq \frac{1}{\alpha} \left\{ \log \left[ \int \exp\{\alpha \mathcal{H}_n(\mathbf{r}, \mathbf{p})\} \nu_{T_+} (\mathrm{d}\mathbf{r}, \mathrm{d}\mathbf{p}) \right] + H_n(s) \right\}
+$$
+---PAGE_BREAK---
+
+for any $\alpha > 0$. By (A.1), for any $t > 0$ and a sufficiently small $\alpha > 0$, there exists $C > 0$ such that
+
+$$ \sup_{s \in [0, t]} \int \mathcal{H}_n(\mathbf{r}, \mathbf{p}) f_n(s, \mathbf{r}, \mathbf{p}) \nu_{T_+} (\mathrm{d}\mathbf{r}, \mathrm{d}\mathbf{p}) \le Cn, \quad n \ge 1. \qquad (\text{A.6}) $$
+
+Consequently, by (A.2) and (A.5), there exists $C > 0$ such that
+
+$$ \left| \mathbb{E} \left[ \int_0^t (p_n^2(s) - T_+) ds \right] \right| \le C, \quad n \ge 1. $$
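+
+This bound follows by inserting the previous estimates into (A.5): since $p_n^2/2 \le \mathcal{H}_n$, the bound (A.6) gives $\int_0^t ds \int p_n^2 f_n(s) d\nu_{T_+} \le C(t) n$, while (A.2) controls the second factor, so that
+
+$$ T_+ \left( \int_0^t ds \int p_n^2 f_n(s) d\nu_{T_+} \right)^{1/2} \left( \int_0^t ds \int \frac{(\partial_{p_n} f_n(s))^2}{f_n(s)} d\nu_{T_+} \right)^{1/2} \le T_+ (C(t) n)^{1/2} \left( \frac{C(t)}{\tilde{\gamma} T_+ n} \right)^{1/2} \le C. $$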
+
+Hence, in particular we obtain
+
+$$ \int_0^t ds \int p_n^2 f_n(s) d\nu_{T_+} \le C. \qquad (\text{A.7}) $$
+
+Using this estimate in (A.5) together with (A.2) we conclude that for any $t \ge 0$
+there exists $C > 0$ for which
+
+$$ \left| \mathbb{E} \left[ \int_0^t (p_n^2(s) - T_+) ds \right] \right| \le \frac{C}{\sqrt{n}}, \quad n \ge 1. \qquad (\text{A.8}) $$
+
+Hence the first equality of (A.3) follows. The proof of the second equality of (A.3)
+is similar.
+
+Concerning (A.4): from the energy conservation it follows that
+
+$$ n^{-2} L \mathcal{E}_n = j_{n-1,n} + \frac{\tilde{\gamma}}{2}(T_+ - p_n^2) + \bar{\tau}_+(t)p_n. $$
+
+To deal with the term $\bar{\tau}_+(t)p_n$, note that
+
+$$
+\begin{aligned}
+\left| \int_0^t \bar{\tau}_+(s) \mathbb{E}[p_n(s)] ds \right| &\le \| \bar{\tau}_+ \|_\infty \int_0^t \left| \int p_n f_n(s) d\nu_{T_+} \right| ds \\
+&= \| \bar{\tau}_+ \|_\infty T_+ \int_0^t \left| \int \partial_{p_n} f_n(s) d\nu_{T_+} \right| ds \\
+&\le \| \bar{\tau}_+ \|_\infty T_+ \int_0^t \left( \int f_n(s) d\nu_{T_+} \right)^{1/2} \left( \int \frac{(\partial_{p_n} f_n(s))^2}{f_n(s)} d\nu_{T_+} \right)^{1/2} ds \\
+&\le \| \bar{\tau}_+ \|_\infty T_+ \frac{C}{\sqrt{n}} \xrightarrow{n \to \infty} 0,
+\end{aligned}
+\qquad (\text{A.9})
+$$
+
+by virtue of (A.2).
+
+The first equality of (A.4) is then a direct consequence of (A.3). The same
+argument works for $j_{0,1}$. $\square$
+
+From the energy conservation, the following is immediate:
+
+**Corollary A.2.** The currents, defined in (2.10) and (2.11), satisfy
+
+$$ \lim_{n \to \infty} \mathbb{E} \left[ \int_0^t j_{x,x+1}(s) ds \right] = 0, \quad x = -1, \dots, n, \quad t \ge 0. \qquad (\text{A.10}) $$
+
+Concerning the potential energy at the boundary points we have the following
+bound.
+---PAGE_BREAK---
+
+**Lemma A.3.** There exists a constant $C < \infty$ such that
+
+$$ \mathbb{E} \left[ \int_0^t (r_1^2(s) + r_n^2(s)) \, ds \right] \le C, \quad n \ge 1. \qquad (\text{A.11}) $$
+
+*Proof.* Using (5.7) we get
+
+$$ \begin{aligned} n^{-2}\mathbb{E}[p_n(t)r_n(t) - p_n(0)r_n(0)] = \int_0^t \mathbb{E} \Big[ & p_n(s)(p_n(s) - p_{n-1}(s)) + (\bar{\tau}_+(s) - r_n(s))r_n(s) \\ & - \frac{1}{2}(\tilde{\gamma} + \gamma)p_n(s)r_n(s) \Big] ds. \end{aligned} \quad (\text{A.12}) $$
+
+The term on the left hand side vanishes, as $n \to +\infty$, due to estimate (A.6). We can
+then repeat the same arguments as we used to obtain (5.14) and conclude
+that there exists $C > 0$ such that
+
+$$ \mathbb{E} \left[ \int_0^t r_n^2(s) ds \right] \le C \left\{ \mathbb{E} \left[ \int_0^t p_n^2(s) ds \right] + 1 \right\}, \quad n \ge 1. \qquad (\text{A.13}) $$
+
+Estimate (A.7) can be used to obtain the desired bound for $\mathbb{E}[\int_0^t r_n^2(s) ds]$. An
+analogous estimate on $\mathbb{E}[\int_0^t r_1^2(s) ds]$ follows from the same argument, using (5.6)
+and the second equality in (A.3) instead. $\square$
+
+**Lemma A.4.** The following convergences hold: at the left boundary point:
+
+$$ \mathbb{E} \left[ \int_{0}^{t} p_{0}(s) r_{1}(s) ds \right] \underset{n \to \infty}{\longrightarrow} 0 \qquad (\text{A.14}) $$
+
+$$ \mathbb{E} \left[ \int_0^t (p_1^2(s) - p_0^2(s)) \, ds \right] \xrightarrow{n \to \infty} 0 \qquad (\text{A.15}) $$
+
+$$ \mathbb{E} \left[ \int_{0}^{t} r_{1}(s) r_{2}(s) \, ds \right] \xrightarrow{n \to \infty} 0 \qquad (\text{A.16}) $$
+
+and at the right boundary point:
+
+$$ \mathbb{E} \left[ \int_{0}^{t} p_{n-1}(s) r_{n}(s) ds \right] \xrightarrow{n \to \infty} 0 \qquad (\text{A.17}) $$
+
+$$ \mathbb{E} \left[ \int_0^t (p_{n-1}^2(s) - p_n^2(s)) ds \right] \xrightarrow{n \to \infty} 0, \qquad (\text{A.18}) $$
+
+$$ \mathbb{E} \left[ \int_0^t (r_{n-1}(s)r_n(s) - \bar{\tau}_+^2(s))ds \right] \xrightarrow{n\to\infty} 0. \qquad (\text{A.19}) $$
+
+*Proof.* Since $n^{-2}Lr_n^2 = 2r_n p_n - 2r_n p_{n-1}$, using (A.6) we conclude that (A.17) holds,
+provided that we can prove
+
+$$ \mathbb{E} \left[ \int_{0}^{t} r_{n}(s) p_{n}(s) ds \right] \xrightarrow{n \to \infty} 0. \qquad (\text{A.20}) $$
+---PAGE_BREAK---
+
+The latter is a consequence of the following estimate, cf. (A.2),
+
+$$
+\begin{align*}
+\left|\mathbb{E}\left[\int_0^t r_n(s)p_n(s)ds\right]\right| &= \left|\int_0^t ds \int r_n \partial_{p_n} f_n(s) d\nu_{T_+}\right| \\
+&\le \left(\int_0^t ds \int r_n^2 f_n(s) d\nu_{T_+}\right)^{1/2} \left(\int_0^t ds \int \frac{(\partial_{p_n} f_n(s))^2}{f_n(s)} d\nu_{T_+}\right)^{1/2} \\
+&\le \left(\int_0^t ds \int r_n^2 f_n(s) d\nu_{T_+}\right)^{1/2} \frac{C}{\sqrt{n}} \xrightarrow{n \to \infty} 0.
+\end{align*}
+$$
+
+The proof of (A.14) is similar.
+
+To show (A.18) note that (see (5.12))
+
+$$
+\begin{align*}
+\gamma \int_0^t \mathbb{E}[p_{n-1}^2(s) - p_n^2(s)] ds &= 2 \int_0^t \bar{\tau}_+(s) \mathbb{E}[p_n(s)] ds - 2 \int_0^t \mathbb{E}[r_n(s)p_n(s)] ds \\
+&\quad + \tilde{\gamma} \int_0^t \mathbb{E}[T_+ - p_n^2(s)] ds - \frac{1}{n^2} \mathbb{E}[p_n^2(t) - p_n^2(0)].
+\end{align*}
+$$
+
+The second, third and fourth terms in the right hand side vanish due to (A.20),
+Lemma A.1 and (A.6), respectively. The first term has been already treated in
+(A.9). An analogous argument, starting from (5.16) allows us to prove (A.15).
+
+Besides, we have
+
+$$
+\begin{align*}
+& \mathbb{E}\left[\int_0^t p_{n-1}(s)p_n(s)ds\right] \\
+&= \mathbb{E}\left[\int_0^t p_{n-1}(s)(p_n(s)-T_+)ds\right] + T_+\mathbb{E}\left[\int_0^t p_{n-1}(s)ds\right] \xrightarrow[n\to\infty]{} 0. \quad (\text{A.21})
+\end{align*}
+$$
+
+The above convergence is proved as follows: the first term in the right hand side
+can be estimated by the Cauchy-Schwarz inequality. Then we can use the bound
+
+$$
+\mathbb{E} \left[ \int_{0}^{t} p_{n-1}^{2}(s) ds \right] \le C, \quad n \ge 1
+$$
+
+(it follows from the already proved (A.18)) and Lemma A.1 to prove that it
+vanishes, as $n \to +\infty$. To show that the second term vanishes we can use estimates
+analogous to (A.9). The analogous convergence at the left boundary follows essentially the same lines.
+
+By (5.7) we can now write
+
+$$
+\begin{align*}
+\mathbb{E}\left[\int_0^t (r_n^2(s) - T_+ - \bar{\tau}_+^2(s)) ds\right]
+&= \int_0^t \bar{\tau}_+(s) \mathbb{E}[r_n(s) - \bar{\tau}_+(s)] ds \\
+&\quad + \int_0^t \mathbb{E}[p_n^2(s) - T_+] ds - \int_0^t \mathbb{E}[p_n(s)p_{n-1}(s)] ds \\
+&\quad - \frac{1}{2}(\tilde{\gamma} + \gamma) \int_0^t \mathbb{E}[p_n(s)r_n(s)] ds \\
+&\quad - \frac{1}{n^2} \mathbb{E}[p_n(t)r_n(t) - p_n(0)r_n(0)]. \tag{A.22}
+\end{align*}
+$$
+---PAGE_BREAK---
+
+By the previous results, the last four terms in the right hand side vanish, as $n \to +\infty$. Concerning the first term we use
+
+$$ n^{-2} L p_n(s) = \bar{\tau}_+(s) - r_n(s) - \frac{1}{2}(\tilde{\gamma} + \gamma)p_n(s) \quad (A.23) $$
+
+to conclude that
+
+$$
+\begin{aligned}
+\int_0^t \bar{\tau}_+(s) \mathbb{E}[r_n(s) - \bar{\tau}_+(s)] ds &= -\frac{1}{2}(\tilde{\gamma} + \gamma) \int_0^t \bar{\tau}_+(s) \mathbb{E}[p_n(s)] ds \\
+&\quad - \frac{1}{n^2} \int_0^t \bar{\tau}_+(s) \frac{d}{ds} \mathbb{E}[p_n(s)] ds.
+\end{aligned}
+\quad (A.24) $$
+
+The first term vanishes, as $n \to +\infty$, by (A.9). By integration by parts the second term equals
+
+$$ \frac{1}{n^2} \int_0^t \bar{\tau}'_+(s) \mathbb{E}[p_n(s)] ds - \frac{1}{n^2} (\bar{\tau}_+(t) \mathbb{E}[p_n(t)] - \bar{\tau}_+(0) \mathbb{E}[p_n(0)]) $$
+
+which also vanishes, thanks to (A.9) and (A.6). Therefore,
+
+$$ \mathbb{E}\left[\int_{0}^{t}(r_{n}^{2}(s)-T_{+}-\bar{\tau}_{+}^{2}(s))ds\right] \rightarrow 0. $$
+
+To see (A.19) it suffices to show that
+
+$$ \mathbb{E}\left[\int_{0}^{t}(r_{n}^{2}(s)-r_{n}(s)r_{n-1}(s)-T_{+})ds\right] \rightarrow 0. $$
+
+For that purpose we invoke (5.31), which permits us to write
+
+$$
+\begin{aligned}
+\mathbb{E}\left[ \int_0^t \left( (r_{n-1}(s) - r_n(s))r_n(s) + T_+ \right) ds \right] &= \mathbb{E}\left[ \int_0^t (p_n^2(s) - p_{n-1}^2(s)) ds \right] \\
+&\quad + \mathbb{E}\left[ \int_0^t p_n(s)p_{n-1}(s) ds \right] + \mathbb{E}\left[ \int_0^t (T_+ - p_n^2(s)) ds \right] \\
+&\quad - \gamma \mathbb{E}\left[ \int_0^t r_n(s)p_{n-1}(s) ds \right] - \frac{1}{n^2} \mathbb{E}\left[ r_n(t)p_{n-1}(t) - r_n(0)p_{n-1}(0) \right].
+\end{aligned}
+$$
+
+Each term in the right hand side of the above equality vanishes, as $n \to +\infty$, by virtue of the already proved estimates. With a similar procedure we obtain (A.16). $\square$
+
+## A.2. Hydrodynamic limit.
+Let us now turn to equation (2.12), which can be formulated in a weak form as:
+
+$$
+\begin{aligned}
+& \int_0^1 du G(u)(r(t,u) - r(0,u)) \\
+= & \frac{1}{\gamma} \int_0^t ds \int_0^1 du G''(u)r(s,u) - \frac{1}{\gamma} G'(1) \int_0^t \bar{\tau}_+(s)ds,
+\end{aligned}
+\quad (A.25) $$
+
+for any test function $G \in C_{0,1}^2(\mathbb{I})$ – the class of $C^2$ functions on $[0,1]$ such that $G(0)=G(1)=0$. Existence and uniqueness of such weak solutions in an appropriate space of integrable functions are standard. By the microscopic evolution
+---PAGE_BREAK---
+
+equations (2.3) we have (recall that $G_x = G(x/n)$)
+
+$$
+\begin{align}
+\mathbb{E}\left[\frac{1}{n} \sum_{x=1}^{n} G_x (r_x(t) - r_x(0))\right] &= \mathbb{E}\left[\int_0^t ds \ n \sum_{x=1}^{n} G_x (p_x(s) - p_{x-1}(s))\right] \nonumber \\
+&= \mathbb{E}\left[\int_0^t ds \left\{-\sum_{x=1}^{n-1} (\nabla_n G)_x p_x(s) - nG_1 p_0(s)\right\}\right], \tag{A.26}
+\end{align}
+$$
+
+where $(\nabla_n G)_x := n(G_{x+1} - G_x)$. Using (2.4) we can write (A.26) as
+
+$$
+\begin{align}
+& \mathbb{E} \left[ - \int_0^t ds \left\{ \sum_{x=1}^{n-1} \frac{1}{\gamma} (\nabla_n G)_x (r_{x+1}(s) - r_x(s)) + \frac{1}{2(\gamma + \tilde{\gamma})} n G_1 r_1(s) \right\} \right] \tag{A.27} \\
+& \quad + \mathbb{E} \left[ \frac{1}{\gamma n^2} \sum_{x=1}^{n-1} (\nabla_n G)_x (p_x(t) - p_x(0)) + \frac{1}{2(\gamma + \tilde{\gamma})} \frac{G_1}{n} (p_0(t) - p_0(0)) \right]. \tag{A.28}
+\end{align}
+$$
+
+Since $G$ is smooth, $nG_1 \to G'(0)$ and $(\nabla_n G)_x$ converges uniformly to $G'$, as $n \to +\infty$. Using this and (A.6), one shows that the expression (A.28) converges to 0. The only significant term is therefore the first one, (A.27). Summing by parts, using the notation
+
+$$
+(\Delta_n G)_x := n^2 (G_{x+1} + G_{x-1} - 2G_x)
+$$
+
+and recalling that $G(0) = 0$, it can be rewritten as
+
+$$
+\begin{equation}
+\begin{aligned}
+& \mathbb{E} \left[ \int_0^t \frac{1}{\gamma} \left\{ \frac{1}{n} \sum_{x=2}^{n-1} (\Delta_n G)_x r_x(s) - (\nabla_n G)_{n-1} r_n(s) \right\} ds \right] \\
+& \qquad - \left( \frac{1}{2(\gamma + \tilde{\gamma})} (\nabla_n G)_0 - \frac{1}{\gamma} (\nabla_n G)_1 \right) \mathbb{E} \left[ \int_0^t r_1(s) ds \right].
+\end{aligned}
+\tag{A.29}
+\end{equation}
+$$
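The summation-by-parts step leading to (A.29) rests on the discrete identity $\sum_{x=1}^{n-1} (\nabla_n G)_x (r_{x+1} - r_x) = -\frac{1}{n}\sum_{x=2}^{n-1} (\Delta_n G)_x r_x + (\nabla_n G)_{n-1} r_n - (\nabla_n G)_1 r_1$, which can be checked numerically; a quick sketch, independent of the model:

```python
import numpy as np

# Numerical check of the discrete summation-by-parts identity used to pass
# from (A.27) to (A.29):
#   sum_{x=1}^{n-1} (grad_n G)_x (r_{x+1} - r_x)
#     = -(1/n) sum_{x=2}^{n-1} (lap_n G)_x r_x
#       + (grad_n G)_{n-1} r_n - (grad_n G)_1 r_1,
# with (grad_n G)_x = n(G_{x+1} - G_x), (lap_n G)_x = n^2(G_{x+1} + G_{x-1} - 2G_x).

n = 50
rng = np.random.default_rng(0)
G = np.sin(np.pi * np.arange(n + 1) / n)     # any test function with G_0 = G_n = 0
r = rng.normal(size=n + 1)                   # r_1..r_n stored at indices 1..n

grad = n * (G[1:] - G[:-1])                  # grad[x] = (grad_n G)_x, x = 0..n-1
lap = n**2 * (G[2:] + G[:-2] - 2 * G[1:-1])  # lap[x-1] = (lap_n G)_x, x = 1..n-1

lhs = sum(grad[x] * (r[x + 1] - r[x]) for x in range(1, n))
rhs = (-sum(lap[x - 1] * r[x] for x in range(2, n)) / n
       + grad[n - 1] * r[n] - grad[1] * r[1])
assert abs(lhs - rhs) < 1e-8
```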
+
+It is easy to see, using (2.5), that
+
+$$
+\begin{align*}
+\lim_{n \to +\infty} \mathbb{E} \left[ \int_0^t r_1(s) ds \right] &= \frac{\gamma + \tilde{\gamma}}{2} \lim_{n \to +\infty} \int_0^t ds \int p_1 f_n(s) d\nu_{T_+} \\
+&= \frac{\gamma + \tilde{\gamma}}{2T_+} \lim_{n \to +\infty} \int_0^t ds \int \partial_{p_1} f_n(s) d\nu_{T_+} = 0,
+\end{align*}
+$$
+
+by (A.2). Using (A.23) we obtain also
+
+$$
+\lim_{n \to +\infty} \mathbb{E} \left[ \int_{0}^{t} r_{n}(s) ds \right] = \int_{0}^{t} \bar{\tau}_{+}(s) ds.
+$$
+
+Therefore, we can rewrite (A.29) as
+
+$$
+\int_0^t ds \left\{ \frac{1}{\gamma} \int_0^1 G''(u) \bar{r}^{(n)}(s, u) du - G'(1) \bar{\tau}_+(s) \right\} + o_n(t), \quad (A.30)
+$$
+
+where
+
+$$
+\bar{r}^{(n)}(t, u) := \mathbb{E}[r_x(t)] \quad \text{for } u \in [x/n, (x+1)/n), \quad n \ge 1, \quad (A.31)
+$$
+
+and $\lim_{n \to +\infty} \sup_{s \in [0,t]} o_n(s) = 0$. Thanks to (A.6) we know that there exists $R > 0$ such that
+
+$$
+\sup_{n \ge 1} \sup_{s \in [0,t]} \| \bar{r}^{(n)}(s, \cdot) \|_{L^2(\mathbb{I})} =: R < +\infty. \quad (A.32)
+$$
+---PAGE_BREAK---
+
+The above means that for each $s \in [0, t]$ the sequence $\{\bar{r}^{(n)}(s, \cdot)\}_{n \ge 1}$ is contained in $\bar{B}_R$ – the closed ball of radius $R > 0$ in $L^2(\mathbb{I})$, centered at 0. The ball is compact in $L_w^2(\mathbb{I})$ – the space of square integrable functions on $\mathbb{I}$ equipped with the weak $L^2$ topology. The topology restricted to $\bar{B}_R$ is metrizable. From the above argument it follows in particular that for each $t > 0$ the sequence $\{\bar{r}^{(n)}(\cdot)\}$ is equicontinuous in $C([0, t], \bar{B}_R)$. Thus, according to the Arzelà–Ascoli theorem, see e.g. [13, p. 234], it is sequentially pre-compact in the space $C([0, t], L_w^2(\mathbb{I}))$ for any $t > 0$. Consequently, any limit point of the sequence satisfies (A.25).
+
+Concerning equation (2.13) with the respective boundary condition, its weak formulation is as follows: for any $G \in L^1([0, +\infty); C_{0,1}^2(\mathbb{I}))$ we have
+
+$$
+\begin{align*}
+0 ={}& \int_0^1 G(0, u)e(0, u) \, du \\
+ & + \int_0^{+\infty} ds \int_0^1 \left( \partial_s G(s, u) + \frac{1}{2}(\gamma^{-1} + \gamma) \partial_u^2 G(s, u) \right) e(s, u) \, du \\
+ & + \frac{1}{4}(\gamma^{-1} - \gamma) \int_0^{+\infty} ds \int_0^1 \partial_u^2 G(s, u) r^2(s, u) \, du \\
+ & - \int_0^{+\infty} \left( \partial_u G(s, 1) \left[ (\gamma^{-1} + \gamma)T_+ + \frac{1}{4}(\gamma^{-1} - \gamma) \bar{\tau}_+(s)^2 \right] - \partial_u G(s, 0)T_- \right) ds. \tag{A.33}
+\end{align*}
+$$
+
+Given non-negative initial data $e(0, \cdot) \in L^1(\mathbb{I})$ and the macroscopic stretch $r(\cdot)$ (determined via (A.25)), one can easily show that the respective weak formulation of the boundary value problem for a linear heat equation, resulting from (A.33), admits a unique measure-valued solution.
+
+The averaged thermal energy function is defined as
+
+$$
+\bar{\mathcal{E}}_n(t, u) := \mathbb{E}[\mathcal{E}_x(t)], \quad u \in [\frac{x}{n}, \frac{x+1}{n}), \quad x = 0, \dots, n-1.
+$$
+
+It is easy to see, thanks to (A.6), that $(\bar{\mathcal{E}}_n(t))_{n \ge 1}$ is bounded in the dual to the separable Banach space $L^1([0, +\infty); C_{0,1}^2(\mathbb{I}))$. Thus it is ⋆-weakly compact. In what follows we identify its limit $e(t)$ by showing that it satisfies (A.33). To achieve this goal we are going to use the microscopic energy currents given in (2.10).
+
+Consider now a smooth test function $G \in C_0^\infty([0, +\infty) \times \mathbb{I})$ such that $G(s, 0) = G(s, 1) = 0$, $s \ge 0$. Then, from (2.9), we get
+
+$$
+\begin{equation}
+\begin{split}
+-\frac{1}{n}\mathbb{E}\left[\sum_{x=0}^{n} G_x(0)\mathcal{E}_x(0)\right] &= -\frac{1}{n}\mathbb{E}\left[\sum_{x=1}^{n-1} G_x(0)\mathcal{E}_x(0)\right] \\
+&= -\frac{1}{n}\int_0^t ds \mathbb{E}\left[\sum_{x=0}^{n-1} \partial_s G_x(s)\mathcal{E}_x(s)\right] \\
+&\quad + \int_0^t ds \mathbb{E}\left[\sum_{x=1}^{n-2} (\nabla_n G)_x(s) j_{x,x+1}(s) - nG_{n-1}(s)j_{n-1,n}(s) + nG_1(s)j_{0,1}(s)\right].
+\end{split}
+\tag{A.34}
+\end{equation}
+$$
+
+Here $G_x(s) := G(s, x/n)$ and we use a likewise notation for $\nabla_n G_x(s)$, $\Delta_n G_x(s)$.
+
+Concerning (A.34): by (A.4), the expectations of its last two terms are negligible. In order to treat the first term of (A.34), we use the fluctuation-dissipation
+---PAGE_BREAK---
+
+relation (4.6), i.e.
+
+$$
+j_{x,x+1} = n^{-2} L g_x + \nabla V_x, \tag{A.35}
+$$
+
+with
+
+$$
+V_x = -\frac{1}{2\gamma}r_x^2 - \frac{\gamma}{4}(p_x^2 + p_{x-1}^2) - \frac{1}{2\gamma}p_x p_{x-1}.
+$$
+
+Using this relation we can rewrite the last term of (A.34) as
+
+$$
+\int_0^t ds \mathbb{E} \left[ \frac{1}{n} \sum_{x=2}^{n-2} (\Delta_n G)_x(s) V_x(s) - (\nabla_n G)_{n-2}(s) V_{n-1}(s) + (\nabla_n G)_1(s) V_1(s) \right] \quad (\text{A.36})
+$$
+
+plus expressions involving the average fluctuating term
+
+$$
+\begin{align*}
+& \int_0^t ds \mathbb{E} \left[ \frac{1}{n^2} \sum_{x=1}^{n-2} (\nabla_n G)_x(s) L g_x(s) \right] \\
+&= \mathbb{E} \left[ \frac{1}{n^2} \sum_{x=1}^{n-2} \left( (\nabla_n G)_x(t) g_x(t) - (\nabla_n G)_x(0) g_x(0) \right) \right] \\
+&\quad - \int_0^t ds \mathbb{E} \left[ \frac{1}{n^2} \sum_{x=1}^{n-2} (\nabla_n \partial_s G)_x(s) g_x(s) \right]
+\end{align*}
+$$
+
+which turns out to be small, as $n \to +\infty$, thanks to the energy bound (A.6). By Lemmas A.1 and A.4 we have:
+
+$$
+\lim_{n \to \infty} \mathbb{E} \left[ \int_0^t ds (\nabla_n G)_1(s) V_1(s) \right] = - \int_0^t ds \partial_u G(s, 0) \frac{1}{2} (\gamma^{-1} + \gamma) T_-, \quad (\text{A.37})
+$$
+
+which takes care of the left boundary condition. Concerning the right one we
+have
+
+$$
+\mathbb{E}\left[\int_{0}^{t} ds (\nabla_{n}G)_{n-2}(s)V_{n-1}(s)\right] = -\mathbb{E}\left[\int_{0}^{t} ds (\nabla_{n}G)_{n-2}(s)\nabla V_{n-1}(s)\right] \quad (\text{A.38})
+$$
+
+$$
++ \mathbb{E}\left[\int_{0}^{t} ds (\nabla_{n}G)_{n-2}(s)V_{n}(s)\right]. \quad (\text{A.39})
+$$
+
+Using again the results of Lemmas A.1 and A.4 we conclude that the limit of the
+second term (A.39) equals
+
+$$
+-\int_0^t ds \partial_u G(s, 1) \left( \frac{1}{2}(\gamma^{-1} + \gamma)T_+ + \frac{1}{2\gamma}\bar{\tau}_+^2(s) \right).
+$$
+
+On the other hand, using (A.35), the term (A.38) equals
+
+$$
+\frac{1}{n^2} \mathbb{E} \left[ \int_0^t ds (\nabla_n G)_{n-2}(s) L g_{n-1}(s) \right] + \int_0^t ds (\nabla_n G)_{n-2}(s) \mathbb{E}[j_{n-1,n}(s)]. \quad (\text{A.40})
+$$
+
+From (A.10) we conclude that the second term vanishes as $n \to +\infty$. By integration by parts, the first term equals
+
+$$
+\frac{1}{n^2} \mathbb{E} [(\nabla_n G)_{n-2}(t) g_{n-1}(t) - (\nabla_n G)_{n-2}(0) g_{n-1}(0)] \\
+\qquad - \frac{1}{n^2} \mathbb{E} [\int_0^t ds (\nabla_n \partial_s G)_{n-2}(s) g_{n-1}(s)], \quad (\text{A.41})
+$$
+---PAGE_BREAK---
+
+which vanishes, thanks to (A.6). Summarizing we have shown that
+
+$$
+\begin{align*}
+& \lim_{n \to +\infty} \mathbb{E} \left[ \int_0^t ds (\nabla_n G)_{n-2}(s) V_{n-1}(s) \right] \\
+&= - \int_0^t ds \partial_u G(s, 1) \left[ \frac{1}{2} (\gamma^{-1} + \gamma) T_+ + \frac{1}{2\gamma} \bar{\tau}_+^2(s) \right].
+\end{align*}
+$$
+
+Now, for the bulk, it follows from (5.22) and (5.39) that
+
+$$
+n^{-2}Lh_x = \nabla W_x - 2\gamma p_x p_{x-1}, \quad x = 2, \dots, n-1, \tag{A.42}
+$$
+
+with
+
+$h_x := p_x p_{x-1} - \frac{r_x^2}{2}, \quad W_x := r_{x-1} p_x.$
+
+Therefore by (A.2) and an argument similar to the one used above we conclude
+that
+
+$$
+\lim_{n \to \infty} \int_0^t ds \mathbb{E} \left[ \frac{1}{n} \sum_{x=2}^{n-2} (\Delta_n G)_x(s) p_x(s) p_{x-1}(s) \right] = 0. \quad (\text{A.43})
+$$
+
+In the case $\gamma = 1$ we can rewrite
+
+$$
+V_x = -\mathcal{E}_x + \frac{1}{4}(p_x^2 - p_{x-1}^2) - \frac{1}{2}p_x p_{x-1} \quad (\text{A.44})
+$$
+
+so that it is easy to see that (A.36) is equivalent to
+
+$$
+-\int_0^t ds \mathbb{E} \left[ \frac{1}{n} \sum_{x=2}^{n-2} (\Delta_n G)_x(s) \mathcal{E}_x(s) \right] + o_n(1), \quad (\text{A.45})
+$$
+
+closing the energy conservation equation and concluding the proof.
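The $\gamma = 1$ identity (A.44) is elementary algebra and can be verified symbolically; the sketch below assumes the harmonic-chain convention $\mathcal{E}_x = (p_x^2 + r_x^2)/2$ for the site energy:

```python
import sympy as sp

# Check of the gamma = 1 rewriting (A.44): with the site energy taken as
# E_x = (p_x**2 + r_x**2)/2 (an assumption on the paper's convention),
# V_x = -r_x**2/2 - (p_x**2 + p_prev**2)/4 - p_x*p_prev/2 at gamma = 1
# should equal -E_x + (p_x**2 - p_prev**2)/4 - p_x*p_prev/2.
p, q, r = sp.symbols('p_x p_prev r_x')

V_at_1 = -r**2/2 - (p**2 + q**2)/4 - p*q/2   # V_x from (A.35) at gamma = 1
E = (p**2 + r**2)/2
rhs = -E + (p**2 - q**2)/4 - p*q/2           # right-hand side of (A.44)

assert sp.simplify(V_at_1 - rhs) == 0
```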
+
+Finally, for $\gamma \neq 1$ we expect the following term to vanish as $n \to +\infty$:
+
+$$
+\int_0^t ds \mathbb{E} \left[ \frac{1}{n} \sum_{x=2}^{n-2} (\Delta_n G)_x (p_x^2(s) - (r_x(s) - \mathbb{E}[r_x(s)])^2) \right], \quad (\text{A.46})
+$$
+
+as can be guessed from local equilibrium considerations. Unfortunately, in order to prove this limit one needs higher moment bounds that are not available from relative entropy considerations. A prospective line of work would be to proceed as in the periodic case [16], by studying the evolution of the Wigner distribution of the thermal energy in Fourier coordinates.
+
+REFERENCES
+
+[1] C. Bernardin, Hydrodynamics for a system of harmonic oscillators perturbed by a conservative noise, *Stochastic Processes and their Applications*, **117**: 487–513, 2007.
+
+[2] C. Bernardin, Heat conduction model: stationary non-equilibrium properties, *Physical Review E* **78**, Issue 2, id. 021134, 2008.
+
+[3] C. Bernardin, V. Kannan, J.L. Lebowitz, J. Lukkarinen, Harmonic Systems with Bulk Noises, *Journal of Statistical Physics* **146**: 800-831, 2012.
+
+[4] C. Bernardin, S. Olla, Fourier Law for a Microscopic Model of Heat Conduction, *J. Stat. Phys.*, **121**: 271–289, 2005.
+
+[5] C. Bernardin, S. Olla, Transport Properties of a Chain of Anharmonic Oscillators with Random Flip of Velocities, *J. Stat. Phys.*, **145**: 1224–1255, 2011.
+---PAGE_BREAK---
+
+[6] A. Bradji, R. Herbin, Discretization of the coupled heat and electrical diffusion problems by the finite element and the finite volume methods, *IMA Journal of Numerical Analysis, Oxford University Press (OUP)*, 28 (3), 469–495, 2008.
+
+[7] M. Colangeli, A. De Masi, E. Presutti, Microscopic models for uphill diffusion, *Journal of Physics A: Mathematical and Theoretical*, Volume 50, Number 43, 2017.
+
+[8] N. Even, S. Olla, Hydrodynamic Limit for a Hamiltonian System with Boundary Conditions and Conservative Noise, *Arch. Ration. Mech. Anal.* 213, 561–585, 2014.
+
+[9] L. Hörmander, Hypoelliptic second order differential equations, *Acta Math.* 119, 147–171, 1967.
+
+[10] A. Iacobucci, F. Legoll, S. Olla, G. Stoltz, Negative thermal conductivity of chains of rotors with mechanical forcing, *Phys. Rev. E*, 84, 061108, 2011.
+
+[11] S. Iubini, S. Lepri, R. Livi, A. Politi, Boundary induced instabilities in coupled oscillators, *Phys. Rev. Lett.* 112, 134101, 2014.
+
+[12] M. Jara, T. Komorowski, S. Olla, Superdiffusion of Energy in a system of harmonic oscillators with noise, *Commun. Math. Phys.* 339: 407–425, 2015.
+
+[13] Kelley, J. L. (1991), *General topology*, Springer-Verlag, ISBN 978-0-387-90125-1.
+
+[14] C. Kipnis and C. Landim, *Scaling Limits of Interacting Particle Systems*, Springer-Verlag: Berlin, 1999.
+
+[15] T. Komorowski, S. Olla, Diffusive propagation of energy in a non-acoustic chain, *Arch. Ration. Mech. Anal.* 223, N.1, 95–139, 2017.
+
+[16] T. Komorowski, S. Olla, M. Simon, Macroscopic evolution of mechanical and thermal energy in a harmonic chain with random flip of velocities, *Kinetic and Related Models*, AIMS, 11 (3): 615–645, 2018.
+
+[17] R. Krishna, Uphill diffusion in multicomponent mixtures, *Chem. Soc. Rev.*, 44, 2812–2836, 2015.
+
+[18] S. Lepri, R. Livi, A. Politi, Thermal Conduction in classical low-dimensional lattices, *Phys. Rep.* 377, 1–80, 2003.
+
+[19] S. Lepri, R. Livi, A. Politi, Heat conduction in chains of nonlinear oscillators, *Phys. Rev. Lett.* 78, 1896, 1997.
+
+[20] V. Letizia, S. Olla, Non-equilibrium isothermal transformations in a temperature gradient from a microscopic dynamics, *Annals of Probability*, 45, 6A (2017), 3987–4018. doi:10.1214/16-AOP1156.
+
+[21] S. Olla, Role of conserved quantities in Fourier's law for diffusive mechanical systems, preprint available at https://arxiv.org/abs/1905.07762. To appear in *Comptes Rendus Physiques*, doi: 10.1016/j.crhy.2019.08.001 (2019).
+
+[22] H. Spohn, Nonlinear Fluctuating Hydrodynamics for Anharmonic Chains, *Journal of Statistical Physics* 154, 2013.
+
+TOMASZ KOMOROWSKI: INSTITUTE OF MATHEMATICS, POLISH ACADEMY OF SCIENCES,
+WARSAW, POLAND.
+
+Email address: komorow@hektor.umcs.lublin.pl
+
+STEFANO OLLA: CNRS, CEREMADE, UNIVERSITÉ PARIS-DAUPHINE, PSL RESEARCH
+UNIVERSITY, 75016 PARIS, FRANCE
+
+Email address: olla@ceremade.dauphine.fr
+
+MARIELLE SIMON: INRIA, UNIV. LILLE, CNRS, UMR 8524, LABORATOIRE PAUL PAINLEVÉ, F-59000 LILLE, FRANCE
+
+Email address: marielle.simon@inria.fr
\ No newline at end of file
diff --git a/samples/texts_merged/3422347.md b/samples/texts_merged/3422347.md
new file mode 100644
index 0000000000000000000000000000000000000000..529745247716d88412d32b3908dabefd1e38d5cb
--- /dev/null
+++ b/samples/texts_merged/3422347.md
@@ -0,0 +1,714 @@
+
+---PAGE_BREAK---
+
+# Chaos in Dirac Electron Optics: Emergence of a Relativistic Quantum Chimera
+
+Hong-Ya Xu,¹ Guang-Lei Wang,¹ Liang Huang,² and Ying-Cheng Lai¹,*
+
+
+
+¹School of Electrical, Computer, and Energy Engineering, Arizona State University, Tempe, Arizona 85287-5706, USA
+
+²School of Physical Science and Technology, and Key Laboratory for Magnetism and Magnetic Materials of MOE,
+Lanzhou University, Lanzhou, Gansu 730000, China
+
+(Received 2 January 2018; published 23 March 2018)
+
+We uncover a remarkable quantum scattering phenomenon in two-dimensional Dirac material systems where the manifestations of both classically integrable and chaotic dynamics emerge simultaneously and are electrically controllable. The distinct relativistic quantum fingerprints associated with different electron spin states are due to a physical mechanism analogous to a chiroptical effect in the presence of degeneracy breaking. The phenomenon mimics a chimera state in classical complex dynamical systems but here in a relativistic quantum setting—henceforth the term “Dirac quantum chimera,” associated with which are physical phenomena with potentially significant applications such as enhancement of spin polarization, unusual coexisting quasibound states for distinct spin configurations, and spin selective caustics. Experimental observations of these phenomena are possible through, e.g., optical realizations of ballistic Dirac fermion systems.
+
+DOI: 10.1103/PhysRevLett.120.124101
+
+The tremendous development of two-dimensional (2D) Dirac materials such as graphene, silicene, and germanene [1–5], in which the low-energy excitations follow the relativistic energy-momentum relation and obey the Dirac equation, has led to the emergence of a new area of research: Dirac electron optics [6–33]. Theoretically, it was articulated early [7] that Klein tunneling and the unique gapless conical dispersion relation can be exploited to turn a simple *p*-n junction into a highly transparent focusing lens with a gate-controlled negative refractive index, producing a Veselago lens for the chiral Dirac fermions in graphene. The negative refraction of Dirac fermions obeys Snell's law in optics, and the angularly resolved transmittances, in analogy with the Fresnel coefficients in optics, have recently been confirmed experimentally [20,26]. Other works include various Klein-tunneling junction based electronic counterparts of optical phenomena such as Fabry-Pérot resonances [8,13], cloaking [11,14], waveguides [12,19], the Goos-Hänchen effect [9], the Talbot effect [22], beam splitting and collimation [21,28,29], and even the Dirac fermion microscope [33]. A Dirac material based electrostatic potential junction with a closed interface can be electrically tuned to guide electron waves, acting as an unusual quantum electron-optics element whose effective refractive index can be modulated, in which phenomena such as gate controlled caustics [6], electron Mie scattering [15,23–25], and whispering gallery modes [17,18,30,31] can arise. In addition, unconventional electron optical elements have been demonstrated, such as valley resolved waveguides [34,35] and beam splitters [27], electronic birefringent superlenses [16], and spin (current) lenses [10,32]. Research on Dirac electron
+
+optics offers the possibility to control Dirac electron flows in a similar way as for light.
+
+In this Letter, we address the role of chaos in Dirac electron optics. In nonrelativistic quantum mechanics, the interplay between chaos and quantum optics has been studied in microcavity lasers [36–39] and deformed dielectric microcavities with non-Hermitian physics and wave chaos [40]. With the development of Dirac electron optics [6–33], the relativistic electronic counterparts of deformed optical dielectric cavities or resonators have become accessible. For massless Dirac fermions in ballistic graphene, the interplay between classical dynamics and electrostatic confinement has been studied [41–44] with the finding that integrable dynamics lead to sharp transport resonances due to the emergence of bound states while chaos typically removes the resonances. In these works, the uncharged degree of freedom such as electron spin, which is fundamental to relativistic quantum systems, was not treated.
+
+Our focus is on the interplay between ray-path defined classical dynamics and spin in Dirac electron optical systems. To be concrete, we introduce an electrical gate potential defined junction with a ring geometry, in analogy to a dielectric annular cavity. Classically, this system generates integrable and mixed dynamics with the chaotic fraction of the phase space depending on the ring eccentricity and the effective refractive index configuration, where the index can be electrically tuned to negative values to enable Klein tunneling. Inside the gated region, the electron spin degeneracy is lifted through an exchange field from induced ferromagnetism, leading to a class of spin-resolved, electrically tunable quantum systems of
+---PAGE_BREAK---
+
+FIG. 1. Scattering system and classical ray dynamics. (a) Annular shaped scattering region with eccentricity $\xi = \bar{O}O'$, (b) a cross-sectional view, (c),(d) chaotic and integrable ray dynamics on the Poincaré surface of section defined by the Birkhoff coordinates $(\theta, \sin\beta)$ for spin-up and -down particles, respectively, where $\theta$ denotes the polar angle of a ray's intersection point with the cavity boundary and $\beta$ is the angle of incidence with respect to the boundary normal. The quantity $\sin\beta$ is proportional to the angular momentum and the critical lines for total internal reflection are given by $\sin\beta_c = \pm 1/n_s$.
+
+electron optics with massless Dirac fermions (by mimicking the photon polarization resolved photonic cavities made from synthesized chiral metamaterials). We develop an analytic wave function matching solution scheme and uncover a striking quantum scattering phenomenon: manifestations of classically integrable and chaotic dynamics coexist simultaneously in the system at the same parameter setting, which mimics a chimera state in classical complex dynamical systems [45–52]. The basic underlying physics is the well-defined, spin-resolved, gate-controllable refractive index that dominantly controls the ballistic motion of short-wavelength Dirac electrons across the junction interface, in which the ray tracing of reflection and refraction associated with particles belonging to different spin states generates distinct classical dynamics inside the junction or scatterer. Especially, with a proper gate potential, the spin-dependent refractive index profile can be controlled to generate regular ray dynamics for one spin state but generically irregular behavior with chaos for the other. A number of highly unusual physical phenomena arise, such as enhanced spin polarization with chaos, simultaneous quasiscarred and whispering gallery type of resonances, and spin-selective lensing with a stark near-field separation between the local density of states (DOS) for spin-up and spin-down particles.
+
+Low energy excitations in 2D Dirac materials are described by the Dirac-Weyl Hamiltonian $H_0 = v_F \boldsymbol{\sigma} \cdot \mathbf{p}$, where $v_F$ is the Fermi velocity, $\mathbf{p} = (p_x, p_y)$ is the momentum measured from a given Dirac point, and $\boldsymbol{\sigma} = (\sigma_x, \sigma_y)$ are Pauli matrices for sublattice pseudospin. In the presence of a gate potential and an exchange field due to the locally induced ferromagnetism inside the whole gated region, the effective Hamiltonian is $H = v_F s_0 \otimes \boldsymbol{\sigma} \cdot \mathbf{p} + s_0 \otimes \sigma_0 \mathcal{V}_{\text{gate}}(\mathbf{r}) - s_z \otimes \sigma_0 M(\mathbf{r})$, where the Pauli matrix $s_z$ acts on the real electron spin space, $s_0$ and $\sigma_0$ both are identity matrices, $\mathcal{V}_{\text{gate}}(\mathbf{r})$ and $M(\mathbf{r})$ are the electrostatic and exchange potential, respectively. Because of the pseudospin-momentum locking (i.e., $\boldsymbol{\sigma} \cdot \mathbf{p}$), a nonuniform potential couples the two pseudospinor components, but the electron spin components are not coupled with each other. The exchange field breaks the twofold spin degeneracy. Since $[s_z \otimes \sigma_0, H] = 0$, the Hamiltonian can be simplified as $H_s = H_0 + \mathcal{V}_{\text{gate}}(\mathbf{r}) - sM(\mathbf{r})$ with $s = \pm$ denoting the electron spin quantum number. Because of $M$, the Dirac-type Hamiltonian $H_s$ can give rise to spin dependent physical processes.
+
+For the ring configuration in Fig. 1(a) and assuming the potentials are smooth on the scale of the lattice spacing but sharp in comparison with the conducting carriers' wavelength, in the polar coordinates $\mathbf{r} = (r, \theta)$, we have $\mathcal{V}_{\text{gate}}(\mathbf{r}) = \hbar v_F \nu_1 \Theta(R_1 - r) \Theta(|\mathbf{r} - \xi| - R_2) + \hbar v_F \nu_2 \Theta(R_2 - |\mathbf{r} - \xi|)$, and $M(\mathbf{r}) = \hbar v_F \mu \Theta(R_1 - r)$, where $\Theta$ is the Heaviside step function, $R_2$ is the radius of the small disk gated region of strength $\hbar v_F (\nu_2 - \nu_1)$ placed inside a larger disk of radius $R_1$ ($> R_2$) and strength $\hbar v_F \nu_1$, the displacement vector between
+
+the disk centers is $\xi = (\xi, 0)$, and the exchange potential has the strength $\hbar v_F \mu$ over the whole gated region. The two circular boundaries divide the domain into three distinct regions: I: $r > R_1$; II: $r < R_1$ and $|\mathbf{r} - \xi| > R_2$; III: $|\mathbf{r} - \xi| < R_2$. For given particle energy $E = \hbar v_F c$, the momenta in the respective regions are $k_s^{\mathrm{I}} = |c|$, $k_s^{\mathrm{II}} = |c - \nu_1 + s\mu|$, and $k_s^{\mathrm{III}} = |c - \nu_2 + s\mu|$. Within the gated region, the exchange potential splits the Dirac cone into two in the vertical direction in the energy domain while the electrostatic potential simply shifts the cone, leading to a spin-resolved, gate-controllable annular junction for massless Dirac electrons.
+
+In the short wavelength limit, locally the curved junction interface appears straight for the electrons, so the gated regions and the surroundings can be treated as optical media. The unusual feature here is that the refractive indices are spin dependent: $n_s^{II,III} = (c + s\mu - \nu_{1,2})/c$, similar to light entering and through a polarization resolved photonic crystal [53,54]. Given the values of $c$ and $\mu$, depending on the values of $\nu_{1,2}$, the refractive indices for the two spin states can be quite distinct with opposite signs. The system is thus analogous to a chiral photonic material based cavity, which represents a novel class of Dirac electron optics systems.
+
+The classical behaviors of Dirac-like particles in the short wavelength limit can be assessed using the optical analogy, as done previously for circularly curved p-n junctions [6,33], where the classical trajectories are defined via the principle of least time. Because of the spin dependent and piecewise constant nature of the index profile, the resulting stationary ray paths for the Dirac
+---PAGE_BREAK---
+
+electrons are spin-resolved and consist of straight line segments. At a junction interface, there is ray splitting governed by the spin-resolved Snell’s law. On a Poincaré surface of the section, the classical dynamics are described by a spin-resolved map $F_s$ relating the dynamical variables $\theta$ and $\beta$ (Fig. 1) between two successive collisions with the interface: $(\theta_i, \sin \beta_i) \mapsto (\theta_{i+1}, \sin \beta_{i+1})$. The ray-splitting picture is adequate for uncovering the relativistic quantum fingerprints of distinct classical dynamics.
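The ray-splitting step at an interface can be sketched in a few lines; this is an illustrative reduction (function and variable names are ours, not the paper's) implementing only the Snell refraction versus total-internal-reflection decision, including the sign flip of the refracted angle for a negative index:

```python
import math

# One ray-splitting step at a junction interface: given the angle of
# incidence beta (w.r.t. the boundary normal) on the side with index n_in,
# spin-resolved Snell's law n_in*sin(beta) = n_out*sin(beta') either refracts
# the ray or, beyond the critical angle |sin(beta)| > |n_out/n_in|, reflects
# it totally.  A negative n_out (Klein-tunneling regime) flips the sign of
# the refracted angle automatically.

def snell_step(beta, n_in, n_out):
    s = n_in * math.sin(beta) / n_out
    if abs(s) > 1.0:
        return ('reflected', -beta)       # total internal reflection
    return ('refracted', math.asin(s))

# Fig. 1 example: spin-up sees n = 3 / n = 1 across the inner interface
# (critical angle sin(beta_c) = 1/3), spin-down sees n = 1 / n = -1:
kind_up, b_up = snell_step(0.5, 3.0, 1.0)
kind_dn, b_dn = snell_step(0.5, 1.0, -1.0)
```

Iterating such steps together with the free flight between boundaries yields the spin-resolved bounce map $F_s$ whose Poincaré sections are shown in Figs. 1(c) and 1(d).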
+
+Spin-resolved ray trajectories inside the junction lead to the simultaneous coexistence of distinct classical dynamics. For example, for the parameter setting $\nu_2 = -\nu_1 = \epsilon = \mu$, i.e., $n_s^{\mathrm{II}} = 2+s$ and $n_s^{\mathrm{III}} = s$, for spin-up particles ($s = +$), the junction is an eccentric annular electron cavity characterized by the refractive indices $n_+^{\mathrm{II}} = 3$ and $n_+^{\mathrm{III}} = 1$, as exemplified in Fig. 1(b) for $\xi = 0.3$. However, for spin-down particles ($s = -$), the junction appears as an off-centered negatively refracted circular cavity with $n_-^{\mathrm{II}} = 1$ and $n_-^{\mathrm{III}} = -1$. Figures 1(c) and 1(d) show the corresponding ray dynamics on the Poincaré surface of section for spin-up and -down particles, respectively, where the former exhibit chaos while the dynamics associated with the latter are integrable with angular momentum being the second constant of motion.
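The index values quoted above follow directly from $n_s^{\mathrm{II,III}} = (c + s\mu - \nu_{1,2})/c$; a quick check (variable names ours):

```python
# Spin-resolved refractive indices n_s = (c + s*mu - nu)/c for the parameter
# choice nu2 = -nu1 = c = mu used in the text: region II gives 2 + s and
# region III gives s, i.e. (3, 1) for spin-up and (1, -1) for spin-down.

def n_region(c, mu, nu, s):
    return (c + s * mu - nu) / c

c = mu = 1.0
nu1, nu2 = -c, c
for s in (+1, -1):
    assert n_region(c, mu, nu1, s) == 2 + s   # region II
    assert n_region(c, mu, nu2, s) == s       # region III
```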
+
+For a spin unpolarized incident beam, the simultaneous occurrence of integrable and chaotic classical dynamics means the coexistence of distinct quantum manifestations, leading to the emergence of a Dirac quantum chimera. To establish this, we carry out a detailed analysis of the scattering matrices for spin-dependent, relativistic quantum scattering and transport through the junction. Using insights from analyzing optical dielectric cavities [55,56] and nonrelativistic quantum billiard systems [57,58], we develop an analytic wave function matching scheme at the junction interfaces (See Supplemental Material [59] which includes Refs. [24,30,60–64]) to solve the Dirac-Weyl equation to obtain the scattering matrix $S$ as a function of the energy $E$ as well as the spin polarization $s$ for given system parameters $R_2/R_1$, $\xi$, $\nu_{1,2}$, and $\mu$. The Wigner-Smith time delay [60,61] is defined from the $S$ matrix as $\tau = -i\hbar\mathrm{Tr}[S^\dagger(\partial S/\partial E)]$, which is proportional to the DOS of the cavity. Large positive values of $\tau$ signify resonances associated with the quasibound states [65]. Physically, a sharper resonance corresponds to a longer trapping lifetime and scattering time delay. Previous works on wave or quantum chaotic scattering [66–85] established that classical chaos can smooth out (broaden) the sharp resonances and reduce the time delay markedly while integrable dynamics can lead to stable, long-lived bound states (or trapping modes).
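The Wigner-Smith formula can be illustrated on a toy single-channel resonance; this is not the junction's S matrix, just a minimal sketch showing how a sharp resonance translates into a large time delay:

```python
import numpy as np

# Wigner-Smith delay tau = -i * hbar * Tr[S^dag dS/dE] (hbar = 1 here), for
# a toy single-channel Breit-Wigner S matrix
#   S(E) = (E - E0 - i*G/2)/(E - E0 + i*G/2),
# with dS/dE taken by a central finite difference.  A narrow resonance
# (small G) gives a large delay, peaking at 4/G at E = E0.

def S(E, E0=0.0, G=0.1):
    return (E - E0 - 1j * G / 2) / (E - E0 + 1j * G / 2)

def time_delay(E, h=1e-6):
    dS = (S(E + h) - S(E - h)) / (2 * h)
    return float((-1j * np.conj(S(E)) * dS).real)

tau_peak = time_delay(0.0)   # on resonance: ~ 4/G = 40
tau_off = time_delay(1.0)    # far off resonance: small
assert abs(tau_peak - 40.0) < 1e-3 and tau_off < 1.0
```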
+
+We present concrete evidence for Dirac quantum chimera. Figure 2(a) shows, for $R_2/R_1 = 0.6$, $\mu = -\nu_1 = 5$, and $\nu_2 = 45$, the dimensionless time delay (on a logarithmic scale) versus the eccentricity $\xi$ and energy $E$ (in units of $\hbar v_F/R_1$). Figure 2(b) shows the maximum time delay
+
+FIG. 2. A Dirac quantum chimera. (a) Top: Contour map of dimensionless Wigner-Smith time delay (on a logarithmic scale) versus energy $E$ and eccentricity $\xi$ for spin-down (left) and -up (right) cases, where the bright yellow color indicates larger values. Middle and bottom panels: time delay and total cross section averaged over all directions of the incident waves versus $E$, respectively, for $\xi = 0.3$. (b) Dependence of the maximum time delay on $\xi$ (red: spin-up; blue: spin-down). (c) Energy averaged spin polarization versus $\xi$.
+
+[within the given energy range in Fig. 2(a)] versus $\xi$ for spin-up (red) and spin-down (blue) particles. There are drastic changes in the time delay as the energy is varied, which are characteristic of well-isolated, narrow resonances and imply the existence of relatively long-lived confined modes. There is a key difference between the resonances associated with the spin-up and -down states: the former depend on the eccentricity parameter $\xi$ and are greatly suppressed for $\xi > 0.2$, while the latter are independent of $\xi$. For example, the middle panel of Fig. 2(a) shows that, for a severely deformed structure ($\xi = 0.3$), there are sharp resonances with high peak values of the time delay for the spin-down state, but none for the spin-up state. The suppression of resonances associated with the spin-up state is consistent with the behavior of the total cross section $\bar{\sigma}_t$ (averaged over the directions of the incident wave) given in terms of the $S$-matrix elements by $\bar{\sigma}_t = (2k)^{-1} \sum_{m,l=-\infty}^{\infty} |S_{ml} - \delta_{ml}|^2$, as shown in the bottom panel of Fig. 2(a). Because the classical dynamics for massless fermions in the spin-up and -down states are chaotic and integrable, respectively [cf., Figs. 1(c), 1(d)], there is simultaneous occurrence of two characteristically different quantum scattering behaviors for a spin unpolarized beam: one without and another with sharp resonances. This striking contrast signifies a Dirac quantum chimera.
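The direction-averaged total cross section is a plain quadratic functional of the S-matrix elements; a minimal numerical sketch, with a random unitary standing in for the physical (truncated) S matrix:

```python
import numpy as np

# Direction-averaged total cross section from the S matrix,
#   sigma_t = (2k)^{-1} * sum_{m,l} |S_ml - delta_ml|^2,
# truncated to a finite number of partial waves.  For S = identity
# (no scattering) the cross section vanishes.

rng = np.random.default_rng(1)
M = 7                                    # number of retained partial waves
Q, _ = np.linalg.qr(rng.normal(size=(M, M)) + 1j * rng.normal(size=(M, M)))

def total_cross_section(S, k):
    return np.sum(np.abs(S - np.eye(len(S))) ** 2) / (2 * k)

assert total_cross_section(np.eye(M), k=1.0) == 0.0
sigma = total_cross_section(Q, k=1.0)    # a generic unitary scatters
assert sigma > 0.0
```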
+
+Are there unexpected, counterintuitive physical phenomena associated with a Dirac quantum chimera? Yes, there are. Here we present two and point out their applied values.
+
+The first is spin polarization enhancement, which has potential applications to Dirac material based spintronics. A general way to define spin polarization is through the spin conductivities $G^{\uparrow}$ and $G^{\downarrow}$ as $P_z = (G^\downarrow - G^\uparrow)/(G^\downarrow + G^\uparrow)$. Imagine a system consisting of a set of sparse, randomly distributed, identical junction-type annular scatterers,
+---PAGE_BREAK---
+
+and assume that the scatterer concentration is sufficiently low ($n_c \ll 1/R_1^2$) so that multiple scattering events can be neglected. In this case, the spin conductivities can be related to the transport cross section as $G^{↓(↑)}/G_0 = k/(n_c\sigma_{tr}^{↓(↑)})$, where $G_0$ is the conductance quantum and $\sigma_{tr}^{↓(↑)}$ can be calculated from the S matrix. For a spin unpolarized incident beam along the x axis with equal spin-up and -down populations, we calculate the average spin polarization over a reasonable Fermi energy range as a function of the eccentricity $\xi$, as shown in Fig. 2(c). For $\xi > 0.2$ so classical chaos is relatively well developed and a Dirac quantum chimera emerges, there is robust enhancement of spin polarization. From the standpoint of classical dynamics, the scattering angle is much more widely distributed for spin-up particles (due to chaos) as compared with the angle distribution for spin-down particles with integrable dynamics, leading to a larger effective resistance for spin-up particles. From an applied perspective, the enhancement of spin polarization brought about by a Dirac quantum chimera can be exploited for developing spin rheostats or filters, where one of the spin resistances, e.g., $R^\uparrow \propto 1/G^\uparrow$, can be effectively modulated through tuning the deformation parameter $\xi$ so as to induce classically
+
+FIG. 3. Spin polarized scarred and regular whispering-gallery-mode resonances as a result of the Dirac quantum chimera. (a),(c) Real space probability densities (on a logarithmic scale) of the representative quasibound states for spin-up and spin-down Dirac electrons, respectively. For the spin-up particles, the spinor wave solution is scarred by an unstable periodic ray trajectory obeying Snell's law, as indicated by the red-dashed path with highlighted pentagram markers. The spin-down Dirac electrons are associated with a whispering gallery ray path due to the continuous total internal reflections denoted by the blue dotted segments. (b),(d) The corresponding phase-space representations with regions below the critical black dashed lines satisfying the total internal reflection at the boundary. The distinct quasibound modes are from simultaneous resonances under the same system parameters, leading to a relativistic quantum chimera. Further signatures of the chimera state can be seen in the plot of the total cross section versus the particle energy for different spin states (e) and a net spin distribution with a dramatic spin-resolved separation in the real space confined inside the cavity (f).
+
+chaotic motion for one type of polarization but integrable dynamics for another.
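
As a numerical illustration of the conductance relation $G^{\downarrow(\uparrow)}/G_0 = k/(n_c\sigma_{tr}^{\downarrow(\uparrow)})$ discussed above, here is a minimal sketch of how spin-resolved transport cross sections translate into a conductance polarization. All numbers are hypothetical toy values, not results from the text:

```python
# Toy illustration of G/G0 = k/(n_c * sigma_tr) per spin channel; all values hypothetical.
k = 1.0          # Fermi wavevector (units of 1/R_1)
n_c = 1e-3       # dilute scatterer concentration (units of 1/R_1^2)
sigma_up = 3.2   # transport cross section, spin-up (chaotic dynamics, broader scattering)
sigma_dn = 1.1   # transport cross section, spin-down (integrable dynamics)

G_up = k / (n_c * sigma_up)        # spin-up conductance in units of G0
G_dn = k / (n_c * sigma_dn)        # spin-down conductance in units of G0
P = (G_dn - G_up) / (G_dn + G_up)  # conductance spin polarization

# Algebraically, P reduces to (sigma_up - sigma_dn)/(sigma_up + sigma_dn)
assert abs(P - (sigma_up - sigma_dn) / (sigma_up + sigma_dn)) < 1e-12
```

The larger transport cross section of the chaotic (spin-up) channel directly suppresses its conductance, which is the mechanism behind the enhanced polarization.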
+
+The second phenomenon is resonance and lensing associated with a Dirac quantum chimera. Figures 3(a)-3(f) show, for $\xi = 0.27$ (in units of $R_1$), $R_2/R_1 = 0.6$, $\nu_2 = 4\nu_1 = -4\mu = 24.16$ (in units of $1/R_1$) and $E = 6.04$ (in units of $\hbar v_F/R_1$), a resonant (quasibound) state in which the spatially separated, spin-resolved local DOS is confined inside the cavity. The spin-up state is concentrated about a particular unstable periodic orbit lacking rotational symmetry [Figs. 3(a) and 3(b)] and exhibits a scarring pattern with a relatively short lifetime characterized by a wider resonance profile, as shown in Fig. 3(e). Spin-down particles are trapped inside the inner disk by a regular, long-lived whispering gallery mode associated with the integrable dynamics [Figs. 3(c) and 3(d)]. The Dirac quantum chimera thus manifests itself as the simultaneous occurrence of a scarred quasibound state and a whispering gallery mode excited by an incident wave with equal populations of spin-up and spin-down particles, as shown in Fig. 3(f), a color-coded spatial distribution of the difference between the local DOS for spin-up and spin-down particles.
+
+FIG. 4. Spin-selective caustic lens and skew scattering associated with a Dirac quantum chimera. (a) Caustic patterns resulting from the scattering of a spin-unpolarized planar incident wave traveling along the positive x axis ($\theta' = 0$) with a relatively short wavelength, i.e., $kR_1 = 70 \gg 1$, and (c) from scattering of the wave propagating along the direction that makes an angle $\theta' = \pi/4$ with the x axis. (b),(d) The corresponding spatially resolved near-field net spin distributions measured by the difference $|\psi_+|^2 - |\psi_-|^2$, respectively. (e) The resulting far-field behavior characterized by the angular distributions of the spin-dependent differential cross sections, with symmetric profiles for $\theta' = 0$ (left inset) and a spin-selective asymmetric one for $\theta' = \pi/4$ (right inset), where both insets plot the eighth root of $\sigma_{\text{diff}}^{\uparrow(\downarrow)}$ in order to soften the drastic contrast in magnitude for better visualization. Parameters are $\xi = 0.27$, $R_2/R_1 = 0.6$, $\nu_2 = \mu = -\nu_1 = 70$, and $E = 70$.
+---PAGE_BREAK---
+
+In the sufficiently short wavelength regime where the ray picture becomes accurate, a spin-resolved lensing behavior arises due to the simultaneous occurrence of two distinct quantum states associated with the chimera state. The cavity can be regarded as an effective electronic Veselago lens with a robust caustic function for spin-down particles, while spin-up particles simply encounter a conventional lens of irregular shape. In particular, for a spin-unpolarized planar incident wave, a spin-selective caustic behavior arises, as shown in Figs. 4(a)-4(d) through the color-coded near-field patterns. There is a pronounced lensing caustic of the cusp type for the spin-down state, while a qualitatively distinct lensing pattern occurs for the spin-up state. A consistent far-field angular distribution of the differential cross section is shown in Fig. 4(e): the lensing effect gives rise to well-collimated, spin-dependent far-field scattering whose angle-resolved profile is confined to a narrow range. Despite the lack of robust lensing, the spin-up particles in general undergo asymmetric scattering, which can lead to spin-polarized transverse transport in addition to longitudinal spin filtering.
+
+To summarize, we uncover a Dirac quantum chimera, a type of relativistic quantum scattering state characterized by the coexistence of two distinct types of behavior that are manifestations of classical chaotic and integrable dynamics, respectively. The physical origin of the chimera state is the optical behavior of massless Dirac fermions with both spin and pseudospin degrees of freedom, which together define a spin-resolved Snell's law governing the chiral particles' ballistic motion. The phenomenon is predicted analytically based on quantum scattering from a gate-defined annular junction structure. The chimera has striking physical consequences such as spin-polarization enhancement, unusual quantum resonances, and spin-selective lensing, which are potentially exploitable for developing 2D Dirac material-based electronic and spintronic devices.
+
+We would like to acknowledge support from the Vannevar Bush Faculty Fellowship program sponsored by the Basic Research Office of the Assistant Secretary of Defense for Research and Engineering and funded by the Office of Naval Research through Grant No. N00014-16-1-2828. L. H. is also supported by NNSF of China under Grant No. 11775101.
+
+*Ying-Cheng.Lai@asu.edu
+
+---PAGE_BREAK---
+
+Supplementary Information for
+
+# Chaos in Dirac electron optics: Emergence of a relativistic quantum chimera
+
+Hong-Ya Xu, Guang-Lei Wang, Liang Huang, and Ying-Cheng Lai
+
+Corresponding author: Y.-C. Lai (Ying-Cheng.Lai@asu.edu)
+
+## CONTENTS
+
+| Section | Page |
+| --- | --- |
+| I. Basics | 1 |
+| II. Multichannel elastic scattering theory for two-dimensional massless Dirac fermions - S-matrix approach | 2 |
+| III. S-matrix for eccentric annular shaped (ring) scatterer | 6 |
+| IV. Calculation of wavefunctions | 7 |
+| V. Ideal centric case: analytic results | 8 |
+| VI. Validation of the S-matrix approach | 10 |
+| VI.A. Symmetry constraints | 10 |
+| VI.B. The case of ξ → 0 | 12 |
+| VII. Full data set for the plot Fig. 2(c) in the main text | 12 |
+| VIII. Feasibility of experimental implementation | 13 |
+| References | 14 |
+
+## I. BASICS
+
+The starting point of our analysis is the effective low-energy Hamiltonian of graphene or graphene-like systems with Dirac cones:
+
+$$H = v_F s_0 \otimes \boldsymbol{\sigma} \cdot \mathbf{p} + s_0 \otimes \boldsymbol{\sigma}_0 \mathcal{V}_{\text{gate}}(\mathbf{r}) - s_z \otimes \boldsymbol{\sigma}_0 \mathcal{M}(\mathbf{r}), \quad (\text{S1.1})$$
+
+where the identity matrix $s_0$ and the Pauli matrix $s_z$ act on the real electron spin space while the Pauli matrices $\boldsymbol{\sigma} = (\boldsymbol{\sigma}_x, \boldsymbol{\sigma}_y)$ and the identity matrix $\boldsymbol{\sigma}_0$ define the sublattice pseudospin. The first term in Eq. (S1.1) characterizes the pristine Dirac cone band dispersion with a four-fold degeneracy at a Dirac point: two for the sublattice pseudospin and two for the real electron spin. Since $[s_z \otimes \boldsymbol{\sigma}_0, H] = 0$, it is equivalent to two copies of Dirac-like Hamiltonian indexed by the spin quantum number $s = \pm$:
+
+$$H_s = H_0 + \mathcal{V}_{\text{gate}}(\mathbf{r}) - s\mathcal{M}(\mathbf{r}), \quad (\text{S1.2})$$
+---PAGE_BREAK---
+
+where $H_0 = v_F \sigma \cdot p$ is effectively the fundamental Dirac-Weyl Hamiltonian describing the two-dimensional free-space massless Dirac fermions. The Hamiltonian $H_s$ acts on two-component pseudospinor waves for the massless Dirac quasiparticles belonging to the real spin state $s$ in graphene or similar materials. The last two terms in Eq. (S1.1) represent the applied gate and exchange potential, respectively.
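
As a quick numerical check of this block decomposition, the following minimal sketch builds the four-band Hamiltonian of Eq. (S1.1) with uniform toy values for $\mathcal{V}_{\text{gate}}$ and $\mathcal{M}$ (momentum components treated as numbers; all parameter values are hypothetical):

```python
import numpy as np

# Identity/Pauli matrices for real spin (s) and sublattice pseudospin (sigma)
s0, sz = np.eye(2), np.diag([1.0, -1.0])
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])

vF, px, py, Vg, M = 1.0, 0.3, -0.7, 0.2, 0.5  # toy uniform parameters

# Eq. (S1.1) with constant potentials: H = vF s0 (x) sigma.p + Vg s0 (x) 1 - M sz (x) 1
H = (vF * np.kron(s0, px * sx + py * sy)
     + Vg * np.kron(s0, np.eye(2))
     - M * np.kron(sz, np.eye(2)))

def H_s(s):
    # Eq. (S1.2): the block for real-spin quantum number s = +/- 1
    return vF * (px * sx + py * sy) + (Vg - s * M) * np.eye(2)

assert np.allclose(H[:2, :2], H_s(+1))   # spin-up block
assert np.allclose(H[2:, 2:], H_s(-1))   # spin-down block
assert np.allclose(H[:2, 2:], 0) and np.allclose(H[2:, :2], 0)  # no spin mixing
```

Because $[s_z \otimes \sigma_0, H] = 0$, the off-diagonal spin blocks vanish identically, as the assertions confirm.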
+
+In the main text, the calculations are for the scattering of such quasiparticles from the step potential that can lead to spin-resolved ray-path defined classical dynamics in the short wavelength limit. The scattering process is of the relativistic type for massless Dirac fermions. In the following Secs. II-V, we develop an S-matrix based scheme to solve the relativistic quantum scattering problem, which is validated computationally in Sec. VI. In Sec. VII, we provide a detailed demonstration of the phenomenon of enhanced spin polarization as shown in Fig. 2(c) in the main text.
+
+## II. MULTICHANNEL ELASTIC SCATTERING THEORY FOR TWO-DIMENSIONAL MASSLESS DIRAC FERMIONS - S-MATRIX APPROACH
+
+The main theoretical tool that we employ to investigate the role of chaos in Dirac electron optics is the formalism of stationary quantum scattering for two-dimensional massless Dirac fermions, where the scatterer has an irregular shape and a finite range. The scattering process is assumed to be elastic. The fundamental quantity of interest is the scattering (S-) matrix, from which all physically relevant quantities characterizing the scattering process can be deduced.
+
+In free space, the system is governed by the stationary Dirac-Weyl equation
+
+$$H_0\chi = \hbar v_F \sigma \cdot k\chi = E\chi, \quad (S2.3)$$
+
+for which the plane-wave solution for energy $E = \alpha\hbar v_F k$ is given by
+
+$$\chi_k(r) = \frac{1}{\sqrt{2}} \begin{pmatrix} 1 \\ \alpha e^{i\theta_k} \end{pmatrix} e^{ik \cdot r}, \quad (S2.4)$$
+
+where $k = \sqrt{k_x^2 + k_y^2}$, $\alpha = \operatorname{sgn}(E)$, and $\theta_k = \arctan(k_y/k_x)$. For $E > 0$ the propagation direction is parallel to the wavevector $\mathbf{k}$; for $E < 0$ the two directions are antiparallel. In the polar coordinates $\mathbf{r} = r(\cos\theta, \sin\theta)$, the corresponding spinor cylindrical waves with given angular momentum and energy are
+
+$${}^k h_m(r, \theta) = \begin{pmatrix} Z_m(kr) \\ i\alpha Z_{m+1}(kr) e^{i\theta} \end{pmatrix} e^{im\theta}, \quad (\text{S2.5})$$
+
+where $Z_m$ is the $m$-th order Bessel or Hankel function of the physically relevant kind. In particular, under the time convention $e^{-iEt/\hbar}$ and for positive energy $E > 0$, we have
+
+$${}^k h_m^{(-)} = \begin{pmatrix} H_m^{(2)}(kr) \\ iH_{m+1}^{(2)}(kr)e^{i\theta} \end{pmatrix} e^{im\theta}, \quad (\text{S2.6a})$$
+
+as the cylindrical wave basis of the spinor waves of the incoming type, and
+
+$${}^k h_m^{(+)} = \begin{pmatrix} H_m^{(1)}(kr) \\ iH_{m+1}^{(1)}(kr)e^{i\theta} \end{pmatrix} e^{im\theta}, \quad (\text{S2.6b})$$
+---PAGE_BREAK---
+
+as the outgoing type, where $H_m^{(1)}$ and $H_m^{(2)}$ denote the Hankel functions of the first and second kind, respectively.
+
+For the scattering problem illustrated in Fig. 1 in the main text, the stationary wavefunction outside the scatterer generally can be decomposed into two parts - incoming and outgoing waves:
+
+$$ \Psi = \Psi_{in} + \Psi_{out}. \quad (S2.7) $$
+
+In the spinor cylindrical wave basis for massless Dirac fermions with positive energy, the incoming wave can be written as
+
+$$ \Psi_{in} = \sum_{m} a_{m} {}^{k}h_{m}^{(-)}, \quad (S2.8) $$
+
+and the outgoing wave can be expressed as
+
+$$ \Psi_{out} = \sum_{m} a_{m} \sum_{m'} S_{mm'} {}^{k}h_{m'}^{(+)}, \quad (S2.9) $$
+
+where the coefficients $a_m$ are determined to yield a desired kind of incoming test wave, $S_{mm'}$ denotes the transition amplitude for an incoming cylindrical wave ${}^k h_m^{(-)}$ scattered into an outgoing one ${}^k h_{m'}^{(+)}$. This defines the S-matrix with $m$ and $m'$ covering all possible angular momentum channels. We thus have
+
+$$ \begin{aligned} \Psi(r, \theta) &= \sum_m a_m \left[ \begin{pmatrix} H_m^{(2)}(kr) \\ iH_{m+1}^{(2)}(kr)e^{i\theta} \end{pmatrix} e^{im\theta} + \sum_{m'} S_{mm'} \begin{pmatrix} H_{m'}^{(1)}(kr) \\ iH_{m'+1}^{(1)}(kr)e^{i\theta} \end{pmatrix} e^{im'\theta} \right], \\ &= \sum_m 2a_m \begin{pmatrix} J_m(kr) \\ iJ_{m+1}(kr)e^{i\theta} \end{pmatrix} e^{im\theta} + \sum_m a_m \sum_{m'} (S_{mm'} - \delta_{mm'}) \begin{pmatrix} H_{m'}^{(1)}(kr) \\ iH_{m'+1}^{(1)}(kr)e^{i\theta} \end{pmatrix} e^{im'\theta}. \end{aligned} \quad (S2.10) $$
+
+To be concrete, we assume the incident wave to be a plane wave given by
+
+$$ \chi_{k_{in}}(r, \theta) = \frac{1}{\sqrt{2}} \begin{pmatrix} 1 \\ e^{i\theta_{k_{in}}} \end{pmatrix} e^{ik_{in} \cdot r} = \frac{1}{\sqrt{2}} \begin{pmatrix} 1 \\ e^{i\theta'} \end{pmatrix} e^{ikr \cos(\theta - \theta')}, $$
+
+for massless Dirac fermions with positive energy $E = \hbar v_F k$ and incident wavevector $k_{in} = k(\cos\theta', \sin\theta')$ that makes an angle $\theta'$ with the x axis. This defines the incident propagating direction as shown in Fig. 1 in the main text. We have
+
+$$ \chi_{k_{in}} = \sum_m \frac{i^m e^{-im\theta'}}{\sqrt{2}} \begin{pmatrix} J_m(kr) \\ iJ_{m+1}(kr)e^{i\theta} \end{pmatrix} e^{im\theta}, \quad (S2.11) $$
+
+where the Jacobi-Anger expansion $e^{iz\cos\theta} = \sum_m i^m J_m(z)e^{im\theta}$ has been used. Given the coefficients
+
+$$ a_m = a_m(\theta') = \frac{i^m e^{-im\theta'}}{2\sqrt{2}}, \quad (S2.12) $$
+
+and with the definition $T_{mm'} = S_{mm'} - \delta_{mm'}$, we get
+
+$$ \Psi(r, \theta) = \chi_{k_{in}} + \sum_m a_m \sum_{m'} T_{mm'} \begin{pmatrix} H_{m'}^{(1)}(kr) \\ iH_{m'+1}^{(1)}(kr)e^{i\theta} \end{pmatrix} e^{im'\theta}. \quad (S2.13) $$
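
The Jacobi-Anger expansion invoked above can be verified numerically with a truncated sum; a minimal sketch assuming SciPy is available (the test point $z, \theta$ is arbitrary):

```python
import numpy as np
from scipy.special import jv  # Bessel function J_m of integer order

z, theta = 1.7, 0.6                       # arbitrary test point
lhs = np.exp(1j * z * np.cos(theta))      # e^{i z cos(theta)}
m = np.arange(-40, 41)                    # truncation: |J_m(z)| decays rapidly for |m| >> z
rhs = np.sum((1j) ** m * jv(m, z) * np.exp(1j * m * theta))
assert abs(lhs - rhs) < 1e-12             # sum_m i^m J_m(z) e^{i m theta}
```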
+---PAGE_BREAK---
+
+Far away from the scatterer center, i.e. $kr \gg 1$, the asymptotic wavefunction can be written as
+
+$$ \lim_{kr \gg 1} \Psi = \chi_{k_{in}} + \frac{f(\theta, \theta')}{\sqrt{-ir}} \begin{pmatrix} 1 \\ e^{i\theta} \end{pmatrix} e^{ikr}, \quad (\text{S2.14}) $$
+
+where *f* is the scattering amplitude for two-dimensional massless Dirac fermions, which is related to the differential cross section through
+
+$$ \frac{d\sigma}{d\theta} = \sigma(\theta, \theta') = |f(\theta, \theta')|^2, \quad (S2.15a) $$
+
+the total cross section through
+
+$$ \sigma_t(\theta') = \oint d\theta |f(\theta, \theta')|^2, \quad (S2.15b) $$
+
+the transport cross section through
+
+$$ \sigma_{tr}(\theta') = \oint d\theta (1 - \cos\theta) |f(\theta, \theta')|^2, \quad (S2.15c) $$
+
+and the skew cross section through
+
+$$ \sigma_{sk}(\theta') = \oint d\theta \sin\theta |f(\theta, \theta')|^2. \quad (S2.15d) $$
+
+It follows from Eqs. (S2.13) and (S2.14) that
+
+$$ \frac{f(\theta, \theta')}{\sqrt{-ir}} \begin{pmatrix} 1 \\ e^{i\theta} \end{pmatrix} e^{ikr} = \lim_{kr \gg 1} \sum_m a_m \sum_{m'} T_{mm'} \begin{pmatrix} H_{m'}^{(1)}(kr) \\ iH_{m'+1}^{(1)}(kr)e^{i\theta} \end{pmatrix} e^{im'\theta}. $$
+
+Finally, we obtain
+
+$$ f(\theta, \theta') = i \sqrt{\frac{2}{\pi k}} \sum_{m'} \sum_m a_m(\theta') T_{mm'}(-i)^{m'} e^{im'\theta}. \quad (S2.16) $$
+
+Defining
+
+$$ f_l(\theta') = \sum_m a_m(\theta') T_{ml}(-i)^l = \sum_m a_m(\theta') (S_{ml} - \delta_{ml}) (-i)^l, \quad (S2.17) $$
+
+we rewrite the scattering amplitude as
+
+$$ f(\theta, \theta') = i \sqrt{\frac{2}{\pi k}} \sum_l f_l(\theta') e^{il\theta}, $$
+
+which, when substituted into Eqs. (S2.15a)-(S2.15d), leads to convenient summation forms of the various cross sections in terms of $f_l(\theta')$ (and eventually the scattering matrix elements $S_{ml}$) as
+
+$$ \sigma(\theta, \theta') = \frac{2}{\pi k} \left| \sum_l f_l(\theta') e^{il\theta} \right|^2 = \frac{2}{\pi k} \sum_{l,l'} \sum_{m,m'} a_m a_{m'}^*(S_{ml} - \delta_{ml})(S_{m'l'}^* - \delta_{m'l'})(-i)^{(l-l')} e^{i(l-l')\theta}, \quad (\text{S2.18a}) $$
+
+$$ \sigma_t(\theta') = \frac{4}{k} \sum_l |f_l(\theta')|^2 = \frac{4}{k} \sum_{m,m'} a_m(TT^\dagger)_{mm'} a_{m'}^*, \quad (S2.18b) $$
+---PAGE_BREAK---
+
+$$ \sigma_{tr}(\theta') = \sigma_t(\theta') - \frac{4}{k} \sum_l \Re[f_l f_{l+1}^*] = \sigma_t(\theta') - \frac{4}{k} \sum_{m,m'} \Re[i a_m (T \hat{T}^\dagger)_{mm'} a_{m'}^*], \quad (\text{S2.18c}) $$
+
+and
+
+$$ \sigma_{sk}(\theta') = \frac{4}{k} \sum_l \Im[f_l f_{l+1}^*] = \frac{4}{k} \sum_{m,m'} \Im[i a_m (T \hat{T}^\dagger)_{mm'} a_{m'}^*], \quad (\text{S2.18d}) $$
+
+where $(\hat{T}^\dagger)_{lm'} \equiv (T^\dagger)_{l+1,m'} = T_{m',l+1}^*$. All the scattering cross sections are functions of $\theta'$ that defines the direction of the incident wave with respect to the x axis. Averaging over all the incident directions $(\theta')$, we obtain the cross sections that are independent of the angle $\theta'$ as
+
+$$ \bar{\sigma}_t = \frac{1}{2\pi} \oint d\theta' \sigma_t(\theta') = \frac{4}{k} \sum_{m,m'} \frac{1}{2\pi} \oint d\theta' a_m(\theta')(TT^\dagger)_{mm'} a_{m'}^*(\theta') = \frac{1}{2k} \sum_{m,l} |T_{ml}|^2, \quad (\text{S2.19a}) $$
+
+$$ \bar{\sigma}_{tr} = \bar{\sigma}_t - \frac{4}{k} \sum_{m,m'} \Re \left[ \frac{i}{2\pi} \oint d\theta' a_m(\theta') (T \hat{T}^\dagger)_{mm'} a_{m'}^*(\theta') \right] = \frac{1}{2k} \sum_{m,l} \left\{ |T_{ml}|^2 - \Re[iT_{ml}T_{m,l+1}^*] \right\}, \quad (\text{S2.19b}) $$
+
+and
+
+$$ \bar{\sigma}_{sk} = \frac{1}{2k} \sum_l \sum_{m,m'} \Im [iT_{ml}(\hat{T}^\dagger)_{lm'}\delta_{mm'}] = \frac{1}{2k} \sum_{m,l} \Im [iT_{ml}T_{m,l+1}^*]. \quad (\text{S2.19c}) $$
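
The partial-wave reductions in Eqs. (S2.18b)-(S2.18d) can be cross-checked against direct angular integration of $|f(\theta)|^2$; a minimal sketch with randomly chosen toy amplitudes $f_l$ (not physical scattering data):

```python
import numpy as np

rng = np.random.default_rng(0)
k = 2.0
ls = np.arange(-5, 6)
fl = rng.normal(size=ls.size) + 1j * rng.normal(size=ls.size)  # toy partial-wave amplitudes

# direct angular integration, with f = i sqrt(2/(pi k)) sum_l f_l e^{i l theta}
n = 4096
theta = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
f = 1j * np.sqrt(2.0 / (np.pi * k)) * np.exp(1j * np.outer(theta, ls)) @ fl
dsig = np.abs(f) ** 2
dth = 2.0 * np.pi / n
sig_t = dsig.sum() * dth                              # total cross section
sig_tr = ((1.0 - np.cos(theta)) * dsig).sum() * dth   # transport cross section
sig_sk = (np.sin(theta) * dsig).sum() * dth           # skew cross section

# partial-wave sums, Eqs. (S2.18b)-(S2.18d)
cross = fl[:-1] * np.conj(fl[1:])                     # f_l f_{l+1}^*
assert abs(sig_t - 4.0 / k * np.sum(np.abs(fl) ** 2)) < 1e-8
assert abs(sig_tr - (sig_t - 4.0 / k * np.sum(cross.real))) < 1e-8
assert abs(sig_sk - 4.0 / k * np.sum(cross.imag)) < 1e-8
```

Since $|f(\theta)|^2$ is a finite trigonometric polynomial, the uniform-grid quadrature here is exact up to rounding.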
+
+From the definition
+
+$$ T_{ml} = S_{ml} - \delta_{ml}, \quad (\text{i.e. } T = S - I), $$
+
+we can calculate the characteristic cross sections once the scattering ($S$)-matrix is obtained.
+
+In addition to the cross sections, associated with the $S$-matrix, another quantity of interest is the Wigner-Smith delay time [1, 2] defined as
+
+$$ \tau(E) = -i\hbar\operatorname{Tr}\left[S^\dagger \frac{\partial S}{\partial E}\right], \quad (\text{S2.20}) $$
+
+which characterizes the temporal aspects of the scattering process. The delay time is related to the density of states [3] through $\rho(E) = \tau(E)/(2\pi\hbar)$.
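
For a single-channel toy S-matrix $S(E) = e^{2i\delta(E)}$, Eq. (S2.20) reduces to $\tau = 2\hbar\, d\delta/dE$, which can be checked by finite differences. This is a hedged sketch; the phase shift $\delta(E) = \arctan E$ is an arbitrary illustrative choice, not taken from the text:

```python
import numpy as np

hbar = 1.0

def S(E):
    # toy one-channel S-matrix, S = exp(2 i delta(E)) with delta(E) = arctan(E)
    return np.array([[np.exp(2j * np.arctan(E))]])

E, dE = 0.5, 1e-6
dS = (S(E + dE) - S(E - dE)) / (2 * dE)                  # central-difference dS/dE
tau = (-1j * hbar * np.trace(S(E).conj().T @ dS)).real   # Eq. (S2.20)
exact = 2 * hbar / (1 + E**2)                            # 2*hbar*d(delta)/dE
assert abs(tau - exact) < 1e-6
```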
+
+By definition, the transport cross section most appropriately characterizes the transport property, which determines the transport relaxation time $\tau_{tr}$ according to Fermi's golden rule, with its reciprocal given by
+
+$$ \frac{1}{\tau_{tr}} = n_c v_F \sigma_{tr}, \quad (\text{S2.21}) $$
+
+where $n_c$ is the concentration of identical scatterers that are assumed to be sufficiently dilute so that multiple scattering effects can be neglected. If the system dimension is larger than the mean-free path $\mathcal{L} = v_F \tau_{tr}$, from the semiclassical Boltzmann transport theory, we obtain the conductivity of the system as
+
+$$ \frac{G}{G_0} = k_F v_F \tau_{tr} = \frac{k}{n_c \sigma_{tr}}, \quad (\text{S2.22}) $$
+
+where $G_0 = 2e^2/h$ is the conductance quantum.
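
Equations (S2.21) and (S2.22) chain together directly; a minimal numerical sketch with hypothetical values (not material parameters from the text):

```python
v_F = 1.0e6        # m/s, Fermi velocity (graphene-like)
k = 5.0e7          # 1/m, hypothetical Fermi wavevector
n_c = 1.0e12       # 1/m^2, hypothetical scatterer concentration
sigma_tr = 2.0e-8  # m, hypothetical transport cross section (a length in 2D)

tau_tr = 1.0 / (n_c * v_F * sigma_tr)  # transport relaxation time, Eq. (S2.21)
mfp = v_F * tau_tr                     # mean free path L = v_F * tau_tr
G_over_G0 = k / (n_c * sigma_tr)       # semiclassical conductivity, Eq. (S2.22)

# k * v_F * tau_tr and k/(n_c * sigma_tr) are the same quantity
assert abs(G_over_G0 - k * v_F * tau_tr) < 1e-9 * G_over_G0
```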
+---PAGE_BREAK---
+
+## III. S-MATRIX FOR ECCENTRIC ANNULAR SHAPED (RING) SCATTERER
+
+We perform an explicit calculation of the S-matrix for the scatterer of annular shape defined by two disks of different radii $R_1$ and $R_2 < R_1$, with a finite relative displacement $\xi$ of the disk centers, as shown in Fig. 1(a) in the main text. For convenience, we adopt the convention that the unprimed coordinates have their origin at the center $O$ of the larger disk while the primed ones have their origin at the small-disk center $O'$. Applying the standard S-matrix formalism, we obtain the wavefunction outside the eccentric annular scatterer, i.e., $|\mathbf{r}| > R_1$, in the unprimed polar coordinates $\mathbf{r} = (r, \theta)$ as
+
+$$ \Psi^I(\mathbf{r}) = \sum_{m=-\infty}^{\infty} a_m^0 \left[ {}^{k_0}h_m^{(2)} + \sum_{m'=-\infty}^{\infty} S_{mm'}\,{}^{k_0}h_{m'}^{(1)} \right], \quad (\text{S3.23}) $$
+
+where $S_{mm'}$ denotes the S-matrix elements in terms of the two given channels indexed by $m$ and $m'$, respectively, and the coefficients $a_m^0$ are chosen to yield a desired kind of incident test wave.
+
+Let ${}^{k_0}\underline{h}_m^{(2)} = a_m^0\,{}^{k_0}h_m^{(2)}$ and $\underline{S}_{mm'} = a_m^0 S_{mm'}$, so that
+
+$$ \Psi^I(\mathbf{r}) = \sum_{m=-\infty}^{\infty} \left[ {}^{k_0}\underline{h}_m^{(2)} + \sum_{m'=-\infty}^{\infty} \underline{S}_{mm'}\,{}^{k_0}h_{m'}^{(1)} \right]. \quad (\text{S3.24}) $$
+
+The wavefunction in the annular region ($|\mathbf{r}'| > R_2$ and $|\mathbf{r}| < R_1$) can be expressed in the unprimed coordinates as
+
+$$ \Psi^{II}(\mathbf{r}) = \sum_{m=-\infty}^{\infty} \sum_{l=-\infty}^{\infty} {}^m a_l^1 \left[ {}^{k_1}h_l^{(2)} + \sum_{l'=-\infty}^{\infty} S_{ll'}^{od}\,{}^{k_1}h_{l'}^{(1)} \right], \quad (\text{S3.25}) $$
+
+where the resulting matrix $S^{od} = [S_{ll'}^{od}]$ characterizes the scattering from the off-centered small inner disk and is non-diagonal. Making use of the addition property of the Bessel functions, we obtain the following relation
+
+$$ S^{od} = U^{-1} S^{cd} U, \quad (S3.26) $$
+
+where the transformation matrices $U = [U_{l\mu}] = [J_{\mu-l}(k_1\xi)]$ and $U^{-1} = [U_{ml}^{-1}] = [J_{m-l}(k_1\xi)]$ are responsible for the eccentric displacement/deformation, and $S^{cd} = [S_l^{cd}\delta_{ll'}]$ is the diagonal scattering matrix for the centered inner disk scatterer in the primed coordinates with its elements $S_l^{cd}$ given by
+
+$$ S_l^{cd} = - \frac{\alpha_1 H_{l+1}^{(2)}(k_1 R_2) J_l(k_2 R_2) - \alpha_2 H_l^{(2)}(k_1 R_2) J_{l+1}(k_2 R_2)}{\alpha_1 H_{l+1}^{(1)}(k_1 R_2) J_l(k_2 R_2) - \alpha_2 H_l^{(1)}(k_1 R_2) J_{l+1}(k_2 R_2)}. \quad (S3.27) $$
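
The translation matrices $U$ and $U^{-1}$ in Eq. (S3.26) are inverse to each other by virtue of the Bessel-function identity $\sum_j J_{l-j}(x) J_{l'-j}(x) = \delta_{ll'}$ [the same identity as Eq. (S4.34b) below]. A short sketch can confirm this numerically (SciPy assumed; the value of $x = k_1\xi$ is hypothetical, and the truncation is chosen so that neglected orders are negligible):

```python
import numpy as np
from scipy.special import jv  # Bessel function J_m

x = 0.8                      # stands in for k_1 * xi (hypothetical value)
j = np.arange(-60, 61)       # truncated sum over intermediate channels
for l, lp in [(0, 0), (2, 2), (0, 3), (-1, 4)]:
    s = np.sum(jv(l - j, x) * jv(lp - j, x))
    # sum_j J_{l-j}(x) J_{l'-j}(x) should equal delta_{l l'}
    assert abs(s - (1.0 if l == lp else 0.0)) < 1e-10
```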
+
+The S-matrix of the whole scatterer can thus be determined by the matching conditions at the outer boundary $|\mathbf{r}| = R_1$. In Eqs. (S3.23) and (S3.25), ${}^{k_{0,1}}h_m^{(1,2)}$ denote the basic spinor waves consisting of the expanding basis indexed by the angular momentum in the polar coordinates and are explicitly given in Eq. (S5.44a). In particular, for a given incident spinor wave with angular momentum $m$, wavefunction matching for each angular momentum value $j$ yields
+
+$$ a_m^0 H_m^{(2)}(k_0 R_1) \delta_{mj} + a_m^0 S_{mj} H_j^{(1)}(k_0 R_1) = {}^m a_j^1 H_j^{(2)}(k_1 R_1) + \sum_l {}^m a_l^1 S_{lj}^{od} H_j^{(1)}(k_1 R_1), \quad (\text{S3.28a}) $$
+---PAGE_BREAK---
+
+$$i\alpha_0 \left[a_m^0 H_{m+1}^{(2)}(k_0 R_1) \delta_{mj} + a_m^0 S_{mj} H_{j+1}^{(1)}(k_0 R_1)\right] = i\alpha_1 \left[{}^m a_j^1 H_{j+1}^{(2)}(k_1 R_1) + \sum_l {}^m a_l^1 S_{lj}^{od} H_{j+1}^{(1)}(k_1 R_1)\right]. \quad (\text{S3.28b})$$
+
+Defining matrices
+
+$$X^{(1,2)} = [H_m^{(1,2)}(k_0 R_1)\delta_{mj}], \quad Y^{(1,2)} = [H_{m+1}^{(1,2)}(k_0 R_1)\delta_{mj}], \qquad (S3.29a)$$
+
+and
+
+$$x^{(1,2)} = [H_m^{(1,2)}(k_1 R_1)\delta_{mj}], \quad y^{(1,2)} = [H_{m+1}^{(1,2)}(k_1 R_1)\delta_{mj}], \qquad (S3.29b)$$
+
+we can rewrite the above equations in the following compact form
+
+$$A^0 X^{(2)} + A^0 S X^{(1)} = A x^{(2)} + A S^{od} x^{(1)}, \qquad (\text{S3.30a})$$
+
+$$\alpha_0 [A^0 Y^{(2)} + A^0 S Y^{(1)}] = \alpha_1 [A y^{(2)} + A S^{od} y^{(1)}], \qquad (\text{S3.30b})$$
+
+with the coefficient matrices $A^0 = [a_m^0 \delta_{mj}]$ and $A = [{}^m a_l^1]$. Solving the above equations, we arrive at
+
+$$S = - \frac{Y^{(2)} - \alpha_0 \alpha_1 X^{(2)} T}{Y^{(1)} - \alpha_0 \alpha_1 X^{(1)} T}, \qquad (S3.31)$$
+
+where $T = F^{-1}G$ with the conventions $F = x^{(2)} + S^{od}x^{(1)}$, $G = y^{(2)} + S^{od}y^{(1)}$ and band indices $\alpha_{0,1} = \pm 1$. Substituting the $S$-matrix given in Eq. (S3.31) into Eq. (S3.30a), we obtain the matrix $A$ consisting of the expansion coefficients ${}^m a_l^1$ in the annular region as
+
+$$A = \frac{A^0 X^{(2)} + A^0 S X^{(1)}}{x^{(2)} + S^{od} x^{(1)}}. \qquad (\text{S3.32})$$
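
The algebra leading to Eqs. (S3.31) and (S3.32) can be verified with random (unphysical) matrices standing in for the boundary data; the sketch below checks that the resulting $S$ and $A$ satisfy the matching conditions (S3.30a) and (S3.30b). All matrices are toy stand-ins, not Hankel-function values:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 6
diag = lambda: np.diag(rng.normal(size=N) + 1j * rng.normal(size=N))
X2, X1, Y2, Y1 = diag(), diag(), diag(), diag()  # stand-ins for outer-boundary data at k_0
x2, x1, y2, y1 = diag(), diag(), diag(), diag()  # stand-ins for outer-boundary data at k_1
Sod = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))  # toy S^{od}
a0, a1 = 1.0, -1.0                               # band indices alpha_0, alpha_1

F = x2 + Sod @ x1
G = y2 + Sod @ y1
T = np.linalg.solve(F, G)                        # T = F^{-1} G
# Eq. (S3.31), read as a right division
S = -(Y2 - a0 * a1 * X2 @ T) @ np.linalg.inv(Y1 - a0 * a1 * X1 @ T)

A0 = diag()
A = (A0 @ (X2 + S @ X1)) @ np.linalg.inv(F)      # Eq. (S3.32), read as a right division

# matching conditions, Eqs. (S3.30a) and (S3.30b)
assert np.allclose(A0 @ X2 + A0 @ S @ X1, A @ x2 + A @ Sod @ x1)
assert np.allclose(a0 * (A0 @ Y2 + A0 @ S @ Y1), a1 * (A @ y2 + A @ Sod @ y1))
```

The matrix "fractions" in Eqs. (S3.31) and (S3.32) are interpreted here as right divisions, which is the ordering under which both matching conditions hold.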
+
+## IV. CALCULATION OF WAVEFUNCTIONS
+
+Inside the inner disk region, i.e., $|r'| < R_2$, the wavefunction in the primed polar coordinates $r' = (r', \theta')$ (with origin at the small disk center $O'$) is
+
+$$\tilde{\Psi}^{III}(r', \theta') = \sum_m \sum_l {}^m\tilde{b}_l \begin{pmatrix} J_l(k_2 r') \\ i\alpha_2 J_{l+1}(k_2 r') e^{i\theta'} \end{pmatrix} e^{il\theta'} . \qquad (\text{S4.33})$$
+
+The expansion coefficients ${}^m\tilde{b}_l$ can be determined by the matching condition at the inner boundary $r' = |\mathbf{r} - \boldsymbol{\xi}| = R_2$ between $\tilde{\Psi}^{III}(r', \theta')$ and $\Psi^{II}(r, \theta)$. To do so, it is convenient to reformulate the wavefunction inside the annular region in the primed coordinates. Using the relations
+
+$$S_{ll'}^{od} = \sum_j J_{l-j} S_{jj}^{cd} J_{l'-j}, \qquad (S4.34a)$$
+
+$$\delta_{ll'} = \sum_j J_{l-j} J_{l'-j}, \qquad (S4.34b)$$
+---PAGE_BREAK---
+
+and assuming $l' = j+n$, we have
+
+$$
+\begin{align*}
+{}^{k_1}h_l^{(2)} + \sum_{l'=-\infty}^{\infty} S_{ll'}^{od}\,{}^{k_1}h_{l'}^{(1)} &\equiv \sum_{l'=-\infty}^{\infty} \left[ \delta_{ll'}\,{}^{k_1}h_{l'}^{(2)} + \sum_j J_{l-j} S_{jj}^{cd} J_{l'-j}\,{}^{k_1}h_{l'}^{(1)} \right], \\
+&= \sum_{l'} \sum_j J_{l-j} \left[ J_{l'-j}\,{}^{k_1}h_{l'}^{(2)} + S_{jj}^{cd} J_{l'-j}\,{}^{k_1}h_{l'}^{(1)} \right], \tag{S4.35} \\
+&= \sum_j J_{l-j} \left[ \sum_n J_n\,{}^{k_1}h_{j+n}^{(2)} + S_{jj}^{cd} \sum_n J_n\,{}^{k_1}h_{j+n}^{(1)} \right].
+\end{align*}
+$$
+
+Making use of Graf's addition theorem [4] for the Bessel functions $Z_j \in \{J_j, H_j^{(1,2)}\}$:
+
+$$Z_j(kr')e^{ij\theta'} = \sum_n J_n(k\xi)Z_{j+n}(kr)e^{i(j+n)\theta},$$
+
+we can rewrite Eq. (S4.35) in the primed coordinates as
+
+$$k_1 h_l^{(2)} + \sum_{l'=-\infty}^{\infty} S_{ll'}^{od} k_1 h_{l'}^{(1)} = \sum_j J_{l-j} \left[ k_1 \tilde{h}_j^{(2)} + S_{jj}^{cd} k_1 \tilde{h}_j^{(1)} \right], \quad (S4.36)$$
+
+where
+
+$$k_1 \tilde{h}_j^{(1,2)} = \begin{pmatrix} H_j^{(1,2)}(k_1 r') \\ i \alpha_1 H_{j+1}^{(1,2)}(k_1 r') e^{i \theta'} \end{pmatrix} e^{ij\theta'}. \quad (S4.37)$$
+
+Substituting this expression into Eq. (S3.25), we obtain the wavefunction for the annular region in the primed coordinates as
+
+$$\tilde{\Psi}^{II}(r', \theta') = \sum_m \sum_l \sum_j {}^m a_l J_{l-j} \left[ k_1 \tilde{h}_j^{(2)} + S_{jj}^{\mathrm{cd}} k_1 \tilde{h}_j^{(1)} \right] = \sum_m \sum_l {}^m \tilde{a}_l \left[ k_1 \tilde{h}_l^{(2)} + S_{ll}^{\mathrm{cd}} k_1 \tilde{h}_l^{(1)} \right], \quad (S4.38)$$
+
+where
+
+$${}^m\tilde{a}_l = \sum_j {}^m a_j J_{j-l}(k_1\xi). \qquad (S4.39)$$
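Graf's addition theorem used above lends itself to a quick numerical check. The sketch below (our construction: the shift $\xi$ is taken along the positive $x$ axis, and scipy supplies the Bessel functions) verifies the translation identity for $Z_j = J_j$:

```python
import numpy as np
from scipy.special import jv

k, xi, j = 2.0, 0.3, 3            # wavenumber, center offset, order
r, theta = 1.7, 0.9               # field point in unprimed polar coordinates

# primed coordinates: origin shifted by xi along the x axis
x, y = r * np.cos(theta) - xi, r * np.sin(theta)
rp, thetap = np.hypot(x, y), np.arctan2(y, x)

# left-hand side: J_j(k r') e^{i j theta'}
lhs = jv(j, k * rp) * np.exp(1j * j * thetap)

# right-hand side: sum_n J_n(k xi) J_{j+n}(k r) e^{i (j+n) theta}
rhs = sum(jv(n, k * xi) * jv(j + n, k * r) * np.exp(1j * (j + n) * theta)
          for n in range(-40, 41))

assert abs(lhs - rhs) < 1e-10
```

For $Z_j = H_j^{(1,2)}$ the same expansion holds, but it converges only for $\xi < r$.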
+
+Imposing the continuity of the wavefunction at $r' = R_2$, we get
+
+$${}^m\tilde{b}_l = {}^m\tilde{a}_l \frac{H_l^{(2)}(k_1 R_2) + S_{ll}^{cd} H_l^{(1)}(k_1 R_2)}{J_l(k_2 R_2)}. \quad (S4.40)$$
+
+With the expansion coefficients ${}^m a_l$, ${}^m \tilde{a}_l$, and ${}^m \tilde{b}_l$ determined via Eqs. (S3.32, S4.39, S4.40), and the scattering matrices $S, S^{od}, S^{cd}$ obtained via Eqs. (S3.31, S3.26, S3.27), respectively, we can calculate the wavefunctions in the relevant regions, which together give the full wavefunction in the entire space.
+
+# V. IDEAL CENTRIC CASE: ANALYTIC RESULTS
+
+For the centric case, i.e., $\xi = 0$, we can obtain the analytic solutions of the scattering problem via the standard technique of partial wave decomposition. In particular, due to the circular rotational
+---PAGE_BREAK---
+
+symmetry and hence conservation of the total angular momentum, the partial waves outside the annular scatterer ($r > R_1$) can be written as
+
+$$ \psi_m^I = {}^{k_0}h_m^{(2)} + S_m\, {}^{k_0}h_m^{(1)}. \quad (S5.41) $$
+
+Inside the annular region $R_2 < r < R_1$, the waves are
+
+$$ \psi_m^{II} = A_m \left[ {}^{k_1}h_m^{(2)} + S_m^{cd}\, {}^{k_1}h_m^{(1)} \right] \quad (S5.42) $$
+
+and
+
+$$ \psi_m^{III} = B_m {}^{k_2}\chi_m, \quad (S5.43) $$
+
+in the inner disk region $r < R_2$, where $[k_0, k_1, k_2] = [|E_0|, |E_0 - V_1|, |E_0 - V_2|]/\hbar v$,
+
+$$ {}^{k_{0,1}}h_m^{(1,2)} = \begin{pmatrix} H_m^{(1,2)}(k_{0,1}r) \\ i\alpha_{0,1}H_{m+1}^{(1,2)}(k_{0,1}r)e^{i\theta} \end{pmatrix} e^{im\theta}, \quad (S5.44a) $$
+
+and
+
+$$ \chi_m = \begin{pmatrix} J_m(k_2 r) \\ i \alpha_2 J_{m+1}(k_2 r) e^{i\theta} \end{pmatrix} e^{im\theta}, \quad (S5.44b) $$
+
+with $\alpha_{0,1,2} = \pm 1$ being the band indices defined as the signs of $E_0$ and $(E_0 - V_{1,2})$, respectively, and $m = 0, \pm 1, \pm 2, \dots$ denoting the orbital angular momentum. The scattering amplitudes $S_m, S_m^{cd}$ and the expansion coefficients $A_m, B_m$ can be determined from the boundary conditions $\psi_m^I(R_1, \theta) = \psi_m^{II}(R_1, \theta)$ and $\psi_m^{II}(R_2, \theta) = \psi_m^{III}(R_2, \theta)$, leading to the following linear matrix equation
+
+$$ \begin{pmatrix} H_m^{(2)}(k_1 R_2) & -J_m(k_2 R_2) & H_m^{(1)}(k_1 R_2) & 0 \\ \alpha_1 H_{m+1}^{(2)}(k_1 R_2) & -\alpha_2 J_{m+1}(k_2 R_2) & \alpha_1 H_{m+1}^{(1)}(k_1 R_2) & 0 \\ H_m^{(2)}(k_1 R_1) & 0 & H_m^{(1)}(k_1 R_1) & -H_m^{(1)}(k_0 R_1) \\ \alpha_1 H_{m+1}^{(2)}(k_1 R_1) & 0 & \alpha_1 H_{m+1}^{(1)}(k_1 R_1) & -\alpha_0 H_{m+1}^{(1)}(k_0 R_1) \end{pmatrix} \begin{pmatrix} A_m \\ B_m \\ C_m \\ S_m \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \\ H_m^{(2)}(k_0 R_1) \\ \alpha_0 H_{m+1}^{(2)}(k_0 R_1) \end{pmatrix}, \quad (S5.45) $$
+
+where $C_m = A_m S_m^{cd}$. From standard quantum scattering theory, $S_m$ is an element of the *S*-matrix for the concentric circular scatterer, which is diagonal in the basis of angular-momentum states *m*. Solving Eq. (S5.45), we obtain the coefficients as
+
+$$ A_m = \frac{H_m^{(2)}(k_0 R_1) + H_m^{(1)}(k_0 R_1) S_m}{H_m^{(2)}(k_1 R_1) + H_m^{(1)}(k_1 R_1) S_m^{cd}}, \quad B_m = A_m \frac{H_m^{(2)}(k_1 R_2) + H_m^{(1)}(k_1 R_2) S_m^{cd}}{J_m(k_2 R_2)}, \quad (S5.46a) $$
+
+while the scattering amplitudes for the whole scatterer are given by
+
+$$ S_m = - \frac{\alpha_0 x_m H_{m+1}^{(2)}(k_0 R_1) - \alpha_1 y_m H_m^{(2)}(k_0 R_1)}{\alpha_0 x_m H_{m+1}^{(1)}(k_0 R_1) - \alpha_1 y_m H_m^{(1)}(k_0 R_1)}, \quad (S5.46b) $$
+
+where $x_m = H_m^{(2)}(k_1 R_1) + H_m^{(1)}(k_1 R_1) S_m^{cd}$ and $y_m = H_{m+1}^{(2)}(k_1 R_1) + H_{m+1}^{(1)}(k_1 R_1) S_m^{cd}$ with $S_m^{cd}$ given by Eq. (S3.27).
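Eq. (S5.45) and the closed forms in Eq. (S5.46) can be cross-checked by solving the 4×4 linear system directly; the sketch below uses illustrative parameter values of our choosing with $\hbar v = 1$:

```python
import numpy as np
from scipy.special import hankel1, hankel2, jv

# illustrative parameters (our choice): energy, potentials, radii; hbar*v = 1
E0, V1, V2, R1, R2, m = 2.0, -1.0, 0.5, 1.0, 0.6, 1
k0, k1, k2 = abs(E0), abs(E0 - V1), abs(E0 - V2)
a0, a1, a2 = np.sign(E0), np.sign(E0 - V1), np.sign(E0 - V2)

H1, H2 = hankel1, hankel2
# matrix of Eq. (S5.45) acting on (A_m, B_m, C_m, S_m)
M = np.array([
    [H2(m, k1*R2),      -jv(m, k2*R2),      H1(m, k1*R2),      0],
    [a1*H2(m+1, k1*R2), -a2*jv(m+1, k2*R2), a1*H1(m+1, k1*R2), 0],
    [H2(m, k1*R1),      0,                  H1(m, k1*R1),      -H1(m, k0*R1)],
    [a1*H2(m+1, k1*R1), 0,                  a1*H1(m+1, k1*R1), -a0*H1(m+1, k0*R1)],
], dtype=complex)
rhs = np.array([0, 0, H2(m, k0*R1), a0*H2(m+1, k0*R1)], dtype=complex)
Am, Bm, Cm, Sm = np.linalg.solve(M, rhs)

# inner-boundary amplitude S_m^{cd} (from rows 1-2, using C_m = A_m S_m^{cd})
Scd = -((a1*jv(m, k2*R2)*H2(m+1, k1*R2) - a2*jv(m+1, k2*R2)*H2(m, k1*R2)) /
        (a1*jv(m, k2*R2)*H1(m+1, k1*R2) - a2*jv(m+1, k2*R2)*H1(m, k1*R2)))

# closed form of Eq. (S5.46b)
xm = H2(m, k1*R1) + H1(m, k1*R1)*Scd
ym = H2(m+1, k1*R1) + H1(m+1, k1*R1)*Scd
Sm_closed = -((a0*xm*H2(m+1, k0*R1) - a1*ym*H2(m, k0*R1)) /
              (a0*xm*H1(m+1, k0*R1) - a1*ym*H1(m, k0*R1)))

assert np.isclose(Sm, Sm_closed)    # direct solve agrees with Eq. (S5.46b)
assert np.isclose(Cm, Am*Scd)       # definition C_m = A_m S_m^{cd}
assert np.isclose(abs(Sm), 1.0)     # single-channel unitarity
```

The last assertion checks single-channel unitarity, $|S_m| = 1$, which follows from $H_m^{(2)}(x) = [H_m^{(1)}(x)]^*$ for real arguments.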
+---PAGE_BREAK---
+
+FIG. S1. Validation of the *S*-matrix approach through the closed-form analytic constraints imposed by the symmetry of the system. (a) Plot of the diagonal elements $S_{-(l+1),-(l+1)}$ versus $S_{ll}$, where the thick black line is the theoretical prediction of Eq. (S6.49), (b) real and imaginary parts of $S_{ll}$, (c) false color-coded map of the magnitudes of the full scattering matrix elements with a proper cut-off at $l = \pm 102$ (the scale bar shows the fourth root of magnitudes $|S_{ll'}|$); (d) the skew cross section $\sigma_{sk}$ (purple line) and the total cross section $\sigma_t$ (light blue curve) as a function of the energy for incident waves propagating parallel to the symmetry axis, where the vanishing skew (asymmetric) scattering, i.e., $\sigma_{sk} \equiv 0$, is consistent with the prediction of Eq. (S6.51). Parameters adopted for (a)-(c) are $E = 70$, $R_2/R_1 = 0.6$, $\xi = 0.3$, $v_1 = -140$, and $v_2 = 0$. For (d), the parameters are $R_2/R_1 = 0.6$, $\xi = 0.3$, $v_1 = -10$ and $v_2 = 40$.
+
+# VI. VALIDATION OF THE *S*-MATRIX APPROACH
+
+## A. Symmetry constraints
+
+In spite of the lack of circular rotational symmetry, the system possesses a mirror (parity) symmetry, which imposes certain constraints on the *S*-matrix and leads to the vanishing of skew (asymmetric) scattering provided that the incident wave propagates along the axis of the symmetry. In particular, for the configuration shown in Fig. 1(a) in the main text, for spinor scattering we can explicitly write the representation of the parity symmetry operation as $\mathcal{P}_x = \sigma_x \mathcal{R}_y$, where $\mathcal{R}_y$ is the reflection operator acting in the physical (position) space with respect to the x axis via the operations $x \to x$ ($k_x \to k_x$) and $y \to -y$ ($k_y \to -k_y$, $\theta \to -\theta$). As such, the system is invariant under parity, stipulating the relation $\mathcal{P}_x H \mathcal{P}_x^{-1} = H$ so that $\mathcal{P}_x \Psi$ is still a state of the system with the same energy. Under the operation of $\mathcal{P}_x$, the spinor cylindrical wave ${}^{k}h_m^{(1,2)}$ of given orbital angular
+---PAGE_BREAK---
+
+momentum $m$ (corresponding to total angular momentum $L = m + 1/2$) can be transformed as
+
+$$
+\begin{align}
+\mathcal{P}_{x}\, {}^{k} h_{m}^{(1,2)} &= i \sigma_{x} \mathcal{R}_{y} \left( \begin{array}{c} H_{m}^{(1,2)}(kr) \\ i \alpha H_{m+1}^{(1,2)}(kr) e^{i\theta} \end{array} \right) e^{im\theta} = (-)^{m+1} i \alpha \left( \begin{array}{c} H_{-(m+1)}^{(1,2)}(kr) \\ i \alpha H_{-m}^{(1,2)}(kr) e^{i\theta} \end{array} \right) e^{-i(m+1)\theta}, \tag{S6.47} \\
+&= (-)^{m+1} i \alpha\, {}^{k} h_{-(m+1)}^{(1,2)}. \notag
+\end{align}
+$$
+
+Applying this relation to the resulting state $\Psi^{(0)}$ given in Eq. (S3.23), we obtain
+
+$$
+\begin{align}
+\mathcal{P}_x \Psi^{(0)} &= \sum_m \mathcal{P}_x a_m^0 \mathcal{P}_x^{-1} \mathcal{P}_x \Psi_m = \sum_{m=-\infty}^{\infty} \mathcal{P}_x a_m^0 \mathcal{P}_x^{-1} \mathcal{P}_x \left[ {}^{k_0} h_m^{(2)} + \sum_{m'=-\infty}^{\infty} S_{mm'}\, {}^{k_0} h_{m'}^{(1)} \right], \nonumber \\
+&= \sum_{m=-\infty}^{\infty} \bar{a}_m^0 (-)^{m+1} i \alpha_0 \left[ {}^{k_0} h_{-(m+1)}^{(2)} + \sum_{m'=-\infty}^{\infty} \mathcal{P}_x S_{mm'} \mathcal{P}_x^{-1} (-)^{m'-m}\, {}^{k_0} h_{-(m'+1)}^{(1)} \right], \tag{S6.48} \\
+&\equiv \sum_n c_n^0 \Psi_n = \sum_{n=-\infty}^{\infty} c_n^0 \left[ {}^{k_0} h_n^{(2)} + \sum_{n'=-\infty}^{\infty} S_{nn'}\, {}^{k_0} h_{n'}^{(1)} \right], \nonumber
+\end{align}
+$$
+
+with the identifications $n \equiv -(m+1)$, $n' \equiv -(m'+1)$, and $c_n^0 \equiv \bar{a}_m^0 (-)^{m+1} i\alpha_0 = \mathcal{P}_x a_m^0 \mathcal{P}_x^{-1} (-)^{m+1} i\alpha_0$.
+We thus have
+
+$$
+S_{nn'} \equiv S_{-(m+1),-(m'+1)} = \mathcal{P}_x S_{mm'} \mathcal{P}_x^{-1} (-)^{m'-m} = (-)^{m'-m} S_{mm'} . \quad (S6.49)
+$$
+
+In particular, for $m=m'$, i.e., the diagonal elements, we have $S_{mm} = S_{-(m+1),-(m+1)}$. Under these constraints and using the definition of $f_l(\theta')$ given in Eq. (S2.17), we have
+
+$$
+\begin{align*}
+f_l(\theta') &= \sum_m a_m(\theta')(S_{ml}-\delta_{ml})(-i)^l, \\
+&= \sum_m \frac{i^m e^{-im\theta'}}{2\sqrt{2}} [(-)^{m-l}S_{-(m+1),-(l+1)} - \delta_{-(m+1),-(l+1)}] (-i)^{2l+1} (-i)^{-(l+1)}, \tag{S6.50} \\
+&= e^{i\theta'} \sum_m \frac{i^{-(m+1)}e^{-i(m+1)\theta'}}{2\sqrt{2}} [S_{-(m+1),-(l+1)} - \delta_{-(m+1),-(l+1)}] (-i)^{-(l+1)}, \\
+&= e^{i\theta'} \sum_{m'} a_{m'}(-\theta')(S_{m',-(l+1)} - \delta_{m',-(l+1)}) (-i)^{-(l+1)} = e^{i\theta'} f_{-(l+1)}(-\theta'). &
+\end{align*}
+$$
+
+For $\theta' = 0$ ($\pi$), i.e., when the incident wave propagates parallel (anti-parallel) to the axis of the mirror symmetry, we obtain $f_l = \pm f_{-(l+1)}$, based on which we can rewrite the skew cross section in Eq. (S2.18d) as
+
+$$
+\begin{align}
+\sigma_{sk}|_{\theta'=0(\pi)} &= \frac{4}{k} \sum_l \Im[f_l f_{l+1}^*] = \frac{4}{k} \Im \left\{ |f_0|^2 + \sum_{l=0}^{\infty} [f_l f_{l+1}^* + f_{-(l+2)} f_{-(l+1)}^*] \right\}, \tag{S6.51} \\
+&= \frac{4}{k} \Im \left[ |f_0|^2 + \sum_{l=0}^{\infty} 2\Re(f_l f_{l+1}^*) \right] = 0. \notag
+\end{align}
+$$
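The cancellation mechanism behind Eq. (S6.51) can be seen in a short numerical sketch: for arbitrary complex amplitudes subject only to the mirror constraint $f_{-(l+1)} = f_l$ (the $\theta' = 0$ case), the skew sum vanishes up to floating-point rounding:

```python
import numpy as np

rng = np.random.default_rng(0)
L = 50
# arbitrary complex amplitudes for l = 0, ..., L-1
f_pos = rng.normal(size=L) + 1j * rng.normal(size=L)

# mirror (parity) constraint for theta' = 0: f_{-(l+1)} = f_l
f = {l: f_pos[l] for l in range(L)}
f.update({-(l + 1): f_pos[l] for l in range(L)})

# skew sum  sum_l Im[f_l f_{l+1}^*]  over all available channels
sk = sum((f[l] * np.conj(f[l + 1])).imag for l in range(-L, L - 1))
assert abs(sk) < 1e-9
```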
+
+These basic symmetry-induced, exact constraints given by the closed forms in Eqs. (S6.49) and (S6.51) can serve as benchmarks for validating the *S*-matrix approach. Note that, while theoretically the dimension of the *S*-matrix is infinite, in practice a finite truncation is needed for a given
+---PAGE_BREAK---
+
+FIG. S2. Validation of the *S*-matrix approach by convergence to the integrable case. Shown is the agreement between the theoretically predicted cross-section values [black curve, calculated from Eq. (S5.46b)] and the numerical results as $\xi$ approaches zero, for which the classical dynamics are integrable. Parameters are $R_2/R_1 = 0.6$, $v_1 = -10$, and $v_2 = 40$.
+
+energy *E* since channels with higher angular momenta *l* ≫ *ER/ħv* cannot be excited effectively and thus have negligible contribution to the scattering process. Representative results are shown in Fig. S1. We obtain a good agreement between the theoretical prediction and the simulation results from properly truncated *S*-matrices.
+
+## B. The case of $\xi \to 0$
+
+Numerically, it is straightforward to validate the *S*-matrix approach indirectly by evaluating the convergence of the value of the cross section to the theoretical value for the limiting case of $\xi \to 0$ at which the classical dynamics are integrable. As shown in Fig. S2, a good agreement is achieved for $\xi < 0.01$.
+
+# VII. FULL DATA SET FOR THE PLOT FIG. 2(C) IN THE MAIN TEXT
+
+Figure S3(a) shows the spin polarization versus $\xi$ and the Fermi energy *E*, where the deep sky-blue regions in the energy domain, which indicate higher values of spin polarization, become extended as $\xi$ is increased beyond 0.2. Figure S3(b) shows the average spin conductivities versus $\xi$, where the conductivity for the spin-up population is a decreasing function of $\xi$ but that for
+---PAGE_BREAK---
+
+the spin-down state is essentially constant. Thus, on average the spin-up particles undergo significantly stronger backward scattering as compared with the spin-down particles, generating a severe spin imbalance (e.g., for $\xi = 0.3$) and, consequently, significantly enhanced spin polarization. To appreciate the role played by the deformation in generating a strong Dirac quantum chimera state, we calculate the average differential cross section $\Delta\sigma_{\mathrm{diff}} \equiv (E_2 - E_1)^{-1} \int_{E_1}^{E_2} (\sigma_{\mathrm{diff}}^{\uparrow} - \sigma_{\mathrm{diff}}^{\downarrow})\, dE$ versus the backward scattering angle $\theta$ for two cases: $\xi = 0$ and $\xi = 0.3$, as shown in Fig. S3(c). A schematic illustration of the generation of spin polarization is shown in Fig. S3(d).
+
+FIG. S3. Spin polarization enhancement as a result of the Dirac quantum chimera. (a) Color-coded map of spin polarization $P_z$ as a function of energy $E$ and eccentricity $\xi$, (b) spin conductivities averaged over a given Fermi energy range versus $\xi$, where the red curve is vertically shifted by an arbitrary amount for better visualization, (c) illustration of a chaos-rendered spin rheostat tuned by $\xi$, and (d) a schematic illustration of the generation of spin polarization.
+
+VIII. FEASIBILITY OF EXPERIMENTAL IMPLEMENTATION
+
+In general, the emergence of a Dirac chimera relies on the optical-like behavior of Dirac electrons and Dirac cone splitting, which can be realized in current experimental systems of graphene. In particular, given the graphene lattice constant and typical values of the Fermi wavelength (e.g., $\lambda_F \sim 20$ nm), a Dirac description of the step potential requires the length scale characterizing the junction sharpness to be $d \sim 1$ nm, which has been recently achieved experimentally for a circular
+
+junction geometry [5]. The size of the junction can be tuned to the micrometer scale ($\gg \lambda_F$), validating the short-wavelength approximation [6]. Furthermore, the experimentally achievable strength of the exchange potential for graphene is strong enough to enable Dirac cone splitting at room temperature [7], providing a basis for experimentally observing the predicted Dirac quantum chimera.
+
+[1] E. P. Wigner, Phys. Rev. **98**, 145 (1955).
+
+[2] F. T. Smith, Phys. Rev. **118**, 349 (1960).
+
+[3] H. Schomerus, M. Marciani, and C. W. J. Beenakker, Phys. Rev. Lett. **114**, 166803 (2015).
+
+[4] D. Zwillinger, *Table of Integrals, Series, and Products* (Elsevier Science, 2014).
+
+[5] C. Gutierrez, L. Brown, C.-J. Kim, J. Park, and A. N. Pasupathy, Nat. Phys. **12**, 1069 (2016).
+
+[6] Y. Jiang, J. Mao, D. Moldovan, M. R. Masir, G. Li, K. Watanabe, T. Taniguchi, F. M. Peeters, and E. Y. Andrei, Nat. Nanotech. **12**, 1045 (2017).
+
+[7] P. Wei, S.-W. Lee, F. Lemaitre, L. Pinel, D. Cutaia, W.-J. Cha, F. Katmis, Y. Zhu, D. Heiman, J. Hone, and J. S. M. C.-T. Chen, Nat. Mater. **15**, 711 (2016).
+
+---PAGE_BREAK---
+
+# Variational Principles and Solitary Wave Solutions of Generalized Nonlinear Schrödinger Equation in the Ocean
+
+Meng-Zhu Liu¹, Xiao-Qun Cao¹,², Xiao-Qian Zhu¹,², Bai-Nian Liu¹,², Ke-Cheng Peng¹
+
+¹ College of Meteorology and Oceanography, National University of Defense Technology, Changsha 410073, China
+Email: liumengzhu19@nudt.edu.cn (M.L.), zhu_xiaoqian@nudt.edu.cn (X.Z.), bnliu@nudt.edu.cn (B.L.), pkch15@lzu.edu.cn (K.P.)
+² College of Computer, National University of Defense Technology, Changsha 410073, China
+
+Received February 20 2021; Revised March 15 2021; Accepted for publication March 18 2021.
+Corresponding author: Xiao-Qun Cao (caoxiaoqun@nudt.edu.cn)
+© 2021 Published by Shahid Chamran University of Ahvaz
+
+**Abstract.** Internal solitary waves are very common physical phenomena in the ocean, which play an important role in the transport of marine matter, momentum and energy. Because the generalized nonlinear Schrödinger equation can well explain the effects of nonlinearity and dispersion in the ocean, it is more suitable for describing the deep-sea internal wave propagation and evolution than other mathematical models. At first, by designing skillfully the trial-Lagrange functional, different kinds of variational principles are successfully established for a generalized nonlinear Schrödinger equation by the semi-inverse method. Then, the constructed variational principles are proved correct by minimizing the functionals with the calculus of variations. Furthermore, some kinds of internal solitary wave solutions are obtained and demonstrated by semi-inverse variational principle for the generalized nonlinear Schrödinger equation.
+
+**Keywords:** Generalized nonlinear Schrödinger equation; semi-inverse method; variational principle; internal solitary waves.
+
+## 1. Introduction
+
+Internal solitary waves [1-3] are a kind of physical motion that occurs frequently in the interior of fluids, and they happen almost everywhere in the world ocean. The study of internal waves in the ocean is of great significance to the theoretical research of ocean science, the utilization of marine resources, the avoidance of marine disasters, as well as marine military and engineering applications. Internal solitary waves play an important role in ocean dynamics, affecting the transport of marine matter, momentum, and energy. At present, the well-known KdV equation is only suitable for describing the propagation of small-amplitude internal waves in shallow water [4-8], but there will be intolerable errors when it is used to model large-amplitude internal waves in the deep sea. For deep-sea internal waves, the Benjamin-Ono equation was constructed by Benjamin [9] and Ono [10], while the intermediate long wave (ILW) equation was obtained by Kubota et al. [11]. Choi and Camassa [12] obtained the fully nonlinear evolution equation of the internal wave at the two-layer interface. The derived equation can be reduced to the ILW equation when it is weakly nonlinear and propagates along one direction, and can be reduced to the Benjamin-Ono equation in infinite water depth. Song et al. [13] established the nonlinear Schrödinger (NLS) equation under two-layer stratification, trying to develop a more accurate equation of the ocean internal wave characteristics in a specific environment. Solving nonlinear partial differential equations (PDEs) with integer or fractional orders is always an attractive and hot topic for many researchers in different scientific fields, because of their excellent ability to model nonlinear phenomena [14-18].
Numerous mathematical techniques have been developed to explore approximate and exact solutions, of which variational methods have been very effective and successful, such as the Ritz technique [19-20], the variational iteration method [21-24], and the variational approximation method [25-28]. When contrasted with other methods, variational ones show some outstanding advantages. In this paper, a generalized nonlinear Schrödinger (GNLS) equation for modelling ocean internal waves is studied by the semi-inverse method, which was first proposed in 1997 by Dr. Ji-Huan He [29], a famous Chinese mathematician. At first, by skillfully designing the trial-Lagrange functional, different forms of variational principles are successfully established for the generalized nonlinear Schrödinger equation based on the semi-inverse method and variational theory. Then, different kinds of internal solitary wave solutions are obtained by the semi-inverse variational principle for the GNLS equation. Furthermore, some different solitary wave solutions with the same trial-Lagrange functional form for the GNLS equation are demonstrated.
+
+## 2. Variational principles for a GNLS equation
+
+For inviscid fluids, ignoring the influence of Coriolis force, if the fluid is selected as a two-layer structure, a generalized nonlinear Schrödinger equation for deep-sea internal waves can be derived from the continuity equation and Bernoulli equation. It can be used to describe the propagation of internal solitary waves in the ocean:
+---PAGE_BREAK---
+
+$$
+-iA_t + \alpha A_{xx} + i\alpha_1 A_{xxx} + \beta |A|^2 A = 0 \quad (1)
+$$
+
+where $x$ and $t$ represent the spatial and temporal variables, respectively. In eq. (1), $A$ represents the complex amplitude field of the internal solitary wave and $i=\sqrt{-1}$. $\alpha$ and $\alpha_1$ are the dispersion and the high-order dispersion coefficients, respectively, and $\beta$ is the nonlinear coefficient. All these coefficients are related to the local ocean depth, layer structure, the density of the seawater, etc., which are physical parameters impacting the amplitude of the internal solitary waves. In eq. (1), the first term is the evolution term, and the second one is the group velocity dispersion term. The third term is the high-order dispersion term, and the fourth one is the nonlinear term. After substituting $A(x,t) = q_1(x,t) + iq_2(x,t)$ and $|A|^2 = q_1^2 + q_2^2$ into eq. (1), where $q_1$ and $q_2$ are real-valued functions of $t$ and $x$, we obtain the following coupled partial differential equations for $q_1$ and $q_2$ in real space
+
+$$
+-\frac{\partial q_1}{\partial t} + \alpha \frac{\partial^2 q_2}{\partial x^2} + \alpha_1 \frac{\partial^3 q_1}{\partial x^3} + \beta (q_1^2 + q_2^2) q_2 = 0 \quad (2)
+$$
+
+$$
+\frac{\partial q_2}{\partial t} + \alpha \frac{\partial^2 q_1}{\partial x^2} - \alpha_1 \frac{\partial^3 q_2}{\partial x^3} + \beta (q_1^2 + q_2^2) q_1 = 0 \quad (3)
+$$
+
+The target is to search for variational formulations whose stationary conditions satisfy eq. (2) and eq. (3) simultaneously. With the help of He's semi-inverse method [30-31], a trial-functional is constructed in the following form
+
+$$
+J(q_1, q_2) = \int_{t_1}^{t_2} dt \int_{x_1}^{x_2} L dx = \int_{t_1}^{t_2} dt \int_{x_1}^{x_2} \left\{ q_1 \frac{\partial q_2}{\partial t} - \frac{\alpha}{2} \left( \left( \frac{\partial q_1}{\partial x} \right)^2 + \left( \frac{\partial q_2}{\partial x} \right)^2 \right) - \alpha_1 q_1 \frac{\partial^3 q_2}{\partial x^3} + F(q_1, q_2) \right\} dx \quad (4)
+$$
+
+where F is an unknown function of $q_1$, $q_2$ and their derivatives. There are various alternative approaches to the construction of the trial-functional; illustrative examples can be found in Refs. [38-40], and a detailed discussion about how to construct a suitable trial-functional is given in Ref. [32]. The main merit of the above trial-functional lies in the fact that the stationary conditions with respect to $q_2$ and $q_1$ result in eq. (2) and eq. (3), respectively.
+
+Now, calculating the variational derivatives of the functional in eq. (4) with respect to $q_2$ and $q_1$, we obtain the following Euler equations:
+
+$$
+-\frac{\partial q_1}{\partial t} + \alpha \frac{\partial^2 q_2}{\partial x^2} + \alpha_1 \frac{\partial^3 q_1}{\partial x^3} + \frac{\delta F}{\delta q_2} = 0 \quad (5)
+$$
+
+$$
+\frac{\partial q_2}{\partial t} + \alpha \frac{\partial^2 q_1}{\partial x^2} - \alpha_1 \frac{\partial^3 q_2}{\partial x^3} + \frac{\delta F}{\delta q_1} = 0 \quad (6)
+$$
+
+where $\delta F/\delta q_i$ is called He's variational derivative with respect to $q_i$, defined as [32]
+
+$$
+\frac{\delta F}{\delta q_i} = \frac{\partial F}{\partial q_i} - \frac{\partial}{\partial x}\left(\frac{\partial F}{\partial q_{i,x}}\right) - \frac{\partial}{\partial t}\left(\frac{\partial F}{\partial q_{i,t}}\right) + \frac{\partial^2}{\partial x^2}\left(\frac{\partial F}{\partial q_{i,xx}}\right) + \cdots
+$$
+
+We search for such an F so that eq. (5) turns into eq. (2), and eq. (6) becomes eq. (3) separately. Accordingly, we set
+
+$$
+\frac{\delta F}{\delta q_1} = \beta(q_1^2 + q_2^2)q_1 \quad (7)
+$$
+
+$$
+\frac{\delta F}{\delta q_2} = \beta(q_1^2 + q_2^2)q_2 \quad (8)
+$$
+
+from which the unknown F can be determined as follows
+
+$$
+F = \frac{\beta}{4}(q_{1}^{2} + q_{2}^{2})^{2} \quad (9)
+$$
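Because the chosen $F$ contains no derivatives of $q_1$ and $q_2$, its variational derivatives reduce to ordinary partial derivatives; a quick symbolic sketch with sympy confirms the two conditions above:

```python
import sympy as sp

q1, q2, beta = sp.symbols('q1 q2 beta')
F = beta / 4 * (q1**2 + q2**2)**2

# F contains no derivatives of q1, q2, so delta F/delta q_i = dF/dq_i
assert sp.simplify(sp.diff(F, q1) - beta * (q1**2 + q2**2) * q1) == 0
assert sp.simplify(sp.diff(F, q2) - beta * (q1**2 + q2**2) * q2) == 0
```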
+
+After embedding eq. (9) into eq. (4), the variational principle in real space is established for the generalized nonlinear Schrödinger equation (1), as follows
+
+$$
+J(q_1, q_2) = \int_{t_1}^{t_2} dt \int_{x_1}^{x_2} [q_1 \frac{\partial q_2}{\partial t} - \frac{\alpha}{2} (\left(\frac{\partial q_1}{\partial x}\right)^2 + (\frac{\partial q_2}{\partial x})^2) - \alpha_1 q_1 \frac{\partial^3 q_2}{\partial x^3} + \frac{\beta}{4}(q_1^2 + q_2^2)^2] dx \quad (10)
+$$
+
+Similarly, another variational principle can be obtained as
+
+$$
+J(q_1, q_2) = \int_{t_1}^{t_2} dt \int_{x_1}^{x_2} [q_1 \frac{\partial q_2}{\partial t} + \frac{\alpha}{2}(q_1 \frac{\partial^2 q_1}{\partial x^2} + q_2 \frac{\partial^2 q_2}{\partial x^2}) - \alpha_1 q_1 \frac{\partial^3 q_2}{\partial x^3} + \frac{\beta}{4}(q_1^2 + q_2^2)^2] dx \quad (11)
+$$
+
+If the trial-Lagrange functional is preset to be in the following form
+
+$$
+J(q_1, q_2) = \int_{t_1}^{t_2} dt \int_{x_1}^{x_2} L dx = \int_{t_1}^{t_2} dt \int_{x_1}^{x_2} \left[ -q_2 \frac{\partial q_1}{\partial t} - \frac{\alpha}{2} \left( \left(\frac{\partial q_1}{\partial x}\right)^2 + \left(\frac{\partial q_2}{\partial x}\right)^2 \right) + \alpha_1 q_2 \frac{\partial^3 q_1}{\partial x^3} + F(q_1, q_2) \right] dx \quad (12)
+$$
+
+using the variational theories, two diverse variational principles in different formulations can be constructed as
+---PAGE_BREAK---
+
+$$
+J(q_1, q_2) = \int_{t_1}^{t_2} dt \int_{x_1}^{x_2} \left\{ -q_2 \frac{\partial q_1}{\partial t} - \frac{\alpha}{2} \left[ \left( \frac{\partial q_1}{\partial x} \right)^2 + \left( \frac{\partial q_2}{\partial x} \right)^2 \right] + \alpha_1 q_2 \frac{\partial^3 q_1}{\partial x^3} + \frac{\beta}{4} (q_1^2 + q_2^2)^2 \right\} dx \quad (13)
+$$
+
+and
+
+$$
+J(q_1, q_2) = \int_{t_1}^{t_2} dt \int_{x_1}^{x_2} \left[ -q_2 \frac{\partial q_1}{\partial t} + \frac{\alpha}{2}\left(q_1 \frac{\partial^2 q_1}{\partial x^2} + q_2 \frac{\partial^2 q_2}{\partial x^2}\right) + \alpha_1 q_2 \frac{\partial^3 q_1}{\partial x^3} + \frac{\beta}{4}(q_1^2 + q_2^2)^2 \right] dx \quad (14)
+$$
+
+**Proof.** Making any one of the functionals of the variational principles eq. (10), eq. (11), eq. (13), and eq. (14) stationary with respect to the independent functions $q_1$ and $q_2$ separately, the following Euler-Lagrange equations can be obtained:
+
+$$
+\delta q_1 : \quad \frac{\partial q_2}{\partial t} + \alpha \frac{\partial^2 q_1}{\partial x^2} - \alpha_1 \frac{\partial^3 q_2}{\partial x^3} + \beta (q_1^2 + q_2^2) q_1 = 0 \quad (15)
+$$
+
+$$
+\delta q_2 : -\frac{\partial q_1}{\partial t} + \alpha \frac{\partial^2 q_2}{\partial x^2} + \alpha_1 \frac{\partial^3 q_1}{\partial x^3} + \beta (q_1^2 + q_2^2) q_2 = 0 \quad (16)
+$$
+
+in which $\delta q_1$ and $\delta q_2$ are the first-order variations of $q_1$ and $q_2$. Obviously, equations (15)-(16) are totally equivalent to the field equations eq. (3) and eq. (2), respectively. Thus, the four different variational principles eq. (10)-(11) and eq. (13)-(14) are proved correct.
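The proof can also be checked mechanically; the sketch below applies sympy's `euler_equations` to our encoding of the Lagrangian density of eq. (10) and compares the result with the two field equations (the sign-agnostic comparison allows for either overall sign convention):

```python
import sympy as sp
from sympy.calculus.euler import euler_equations

x, t = sp.symbols('x t')
al, al1, b = sp.symbols('alpha alpha_1 beta')
q1, q2 = sp.Function('q1')(x, t), sp.Function('q2')(x, t)

# Lagrangian density of eq. (10)
L = (q1 * q2.diff(t)
     - al / 2 * (q1.diff(x)**2 + q2.diff(x)**2)
     - al1 * q1 * q2.diff(x, 3)
     + b / 4 * (q1**2 + q2**2)**2)

eq_q1, eq_q2 = euler_equations(L, [q1, q2], [x, t])

# expected Euler-Lagrange equations: the two field equations
target1 = q2.diff(t) + al * q1.diff(x, 2) - al1 * q2.diff(x, 3) + b * (q1**2 + q2**2) * q1
target2 = -q1.diff(t) + al * q2.diff(x, 2) + al1 * q1.diff(x, 3) + b * (q1**2 + q2**2) * q2

assert sp.simplify(eq_q1.lhs - target1) == 0 or sp.simplify(eq_q1.lhs + target1) == 0
assert sp.simplify(eq_q2.lhs - target2) == 0 or sp.simplify(eq_q2.lhs + target2) == 0
```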
+
+## 3. Solitary wave solutions for the GNLS equation
+
+Various techniques of integration have been recently developed to integrate nonlinear PDEs, such as the Lie symmetry approach, the variational iteration method, the homotopy analysis method, the ansatz method, the exponential function method and many others, besides the well-known and powerful techniques of integration that have been known for a fairly long time. In this article, one such modern method of integrability will be employed to integrate the GNLS equation (1) in the ocean: He's semi-inverse variational principle (HVP), which has become very popular since its first appearance in 1997 [25]. In this method, the given PDEs are transformed into ODEs by a traveling wave transformation, and the variational formulas corresponding to the ordinary differential equations are established in the framework of the variational method with the help of the semi-inverse technique [33-36]. The solitary wave solution of the given equation is constructed by substituting an assumed solution into the variational formula and finding its stationary point. The fractal variational principle is the latest development of the semi-inverse method [36-40], which can greatly widen our sight and enrich our knowledge of solitary wave theory and nonlinear vibration theory.
+
+Subsequently, this approach will be applied to carry out the integration of the generalized nonlinear Schrödinger equation eq. (1).
+
+The starting point is the solitary wave ansatz that is given by
+
+$$
+A(x,t) = f(\xi)e^{i(mx-nt)} \tag{17}
+$$
+
+where the travelling wave transform is:
+
+$$
+\xi = x - Et \qquad (18)
+$$
+
+and both *m* and *n* are constants. *f* is an undetermined real function, and *E* is the wave velocity. Substituting the solitary wave ansatz eq. (17) into eq. (1) and decomposing into real and imaginary parts yields the following pair of relations, respectively
+
+$$
+\alpha_1 f''' + (E + 2\alpha m - 3\alpha_1 m^2)f' = 0 \qquad (19)
+$$
+
+$$
+(\alpha - 3\alpha_{1}m)f'' - (n + \alpha m^{2} - \alpha_{1}m^{3})f + \beta f^{3} = 0 \qquad (20)
+$$
+
+In the above two equations, $f' = df/d\xi$, $f'' = d^2f/d\xi^2$ and $f''' = d^3f/d\xi^3$. By using the semi-inverse method [41-62], the variational formulation of eq. (20) can be obtained:
+
+$$
+J = \int_{0}^{\infty} \left[ \frac{1}{2} (\alpha - 3\alpha_{1}m)(f')^{2} + \frac{1}{2}(n + \alpha m^{2} - \alpha_{1}m^{3})f^{2} - \frac{1}{4}\beta f^{4} \right] d\xi \qquad (21)
+$$
+
+Now, *f* is assumed to have the following form
+
+$$
+f = p\,\mathrm{sech}(q\xi), \quad \xi = x - Et \qquad (22)
+$$
+
+where *p* and *q* are unknown parameters to be determined.
+
+In order to determine the two parameters of the function *f*, eq. (22) is inserted into eq. (21), and after some manipulations, we get:
+
+$$
+J = \int_{0}^{\infty} \frac{1}{2} (\alpha - 3\alpha_{1}m) p^{2} q^{2} \tanh^{2}(q\xi) \mathrm{sech}^{2}(q\xi) d\xi + \int_{0}^{\infty} \frac{1}{2} (n + \alpha m^{2} - \alpha_{1}m^{3}) p^{2} \mathrm{sech}^{2}(q\xi) d\xi - \int_{0}^{\infty} \frac{1}{4} \beta p^{4} \mathrm{sech}^{4}(q\xi) d\xi \\
+= \frac{(\alpha - 3\alpha_{1}m)}{6} p^{2}q + \frac{(n + \alpha m^{2} - \alpha_{1}m^{3})p^{2}}{2q} - \frac{\beta p^{4}}{6q} \qquad (23)
+$$
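The three definite integrals used in this reduction ($\int_0^\infty \tanh^2 \mathrm{sech}^2$, $\int_0^\infty \mathrm{sech}^2$, $\int_0^\infty \mathrm{sech}^4$) are standard and can be verified by numerical quadrature; a sketch with an arbitrary $q > 0$ of our choosing:

```python
import numpy as np
from scipy.integrate import quad

q = 1.3                                   # arbitrary positive parameter
sech = lambda u: 1.0 / np.cosh(u)

# integrals appearing in the reduction of J for f = p sech(q xi)
I1 = quad(lambda s: np.tanh(q*s)**2 * sech(q*s)**2, 0, np.inf)[0]   # 1/(3q)
I2 = quad(lambda s: sech(q*s)**2, 0, np.inf)[0]                      # 1/q
I3 = quad(lambda s: sech(q*s)**4, 0, np.inf)[0]                      # 2/(3q)

assert np.allclose([I1, I2, I3], [1/(3*q), 1/q, 2/(3*q)])
```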
+
+In order to find the stationary point of $J$ with respect to $p$ and $q$, we set the derivatives of the above functional with respect to the two unknown parameters to zero, and the following equations can be obtained:
+---PAGE_BREAK---
+
+$$
+\begin{align}
+\frac{\partial J}{\partial p} &= \frac{\alpha - 3\alpha_1 m}{3} pq + \frac{(n + \alpha m^2 - \alpha_1 m^3)}{q} p - \frac{2\beta}{3q} p^3 = 0 \\
+\frac{\partial J}{\partial q} &= \frac{\alpha - 3\alpha_1 m}{6} p^2 - \frac{(n + \alpha m^2 - \alpha_1 m^3)}{2q^2} p^2 + \frac{\beta}{6q^2} p^4 = 0
+\end{align}
+$$
+
+(24)
+
+The above two equations can be transformed into:
+
+$$
+\begin{align*}
+(\alpha - 3\alpha_1 m)q^2 + 3(n + \alpha m^2 - \alpha_1 m^3) - 2\beta p^2 &= 0 \\
+(\alpha - 3\alpha_1 m)q^2 - 3(n + \alpha m^2 - \alpha_1 m^3) + \beta p^2 &= 0
+\end{align*}
+$$
+
+(25)
+
+After solving the above algebraic equations, we can get:
+
+$$
+\begin{align*}
+p &= \pm \sqrt{\frac{2(n + \alpha m^2 - \alpha_1 m^3)}{\beta}} \\
+q &= \pm \sqrt{\frac{(n + \alpha m^2 - \alpha_1 m^3)}{\alpha - 3\alpha_1 m}}
+\end{align*}
+$$
+
+(26)
+
+provided
+
+$$
+\begin{gathered}
+(n + \alpha m^2 - \alpha_1 m^3) \cdot \beta > 0 \\
+(n + \alpha m^2 - \alpha_1 m^3) \cdot (\alpha - 3\alpha_1 m) > 0
+\end{gathered}
+$$
+
+Finally, the solitary wave solutions to eq. (1) are obtained:
+
+$$
+A(x,t) = \pm \sqrt{\frac{2(n+\alpha m^2 - \alpha_1 m^3)}{\beta}} \operatorname{sech}\left[\pm \sqrt{\frac{(n+\alpha m^2 - \alpha_1 m^3)}{\alpha - 3\alpha_1 m}} \xi\right] e^{i(mx-nt)} \quad (27)
+$$
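For this particular ansatz the variational result is in fact exact: one can verify symbolically that $f = p\,\mathrm{sech}(q\xi)$ with $p$ and $q$ from eq. (26) satisfies $(\alpha - 3\alpha_1 m)f'' - (n + \alpha m^2 - \alpha_1 m^3)f + \beta f^3 = 0$. A sketch with illustrative coefficients (our choice, obeying both sign conditions):

```python
import sympy as sp

xi = sp.symbols('xi')
# illustrative coefficients (our choice) satisfying both sign conditions
al, al1, b, m, n = sp.Integer(1), sp.Rational(1, 10), sp.Integer(2), sp.Integer(1), sp.Integer(1)

K = al - 3 * al1 * m                 # alpha - 3 alpha_1 m      (= 7/10 > 0)
P = n + al * m**2 - al1 * m**3       # n + alpha m^2 - alpha_1 m^3  (= 19/10 > 0)
p = sp.sqrt(2 * P / b)               # amplitude from eq. (26)
q = sp.sqrt(P / K)                   # inverse width from eq. (26)

f = p * sp.sech(q * xi)
residual = K * f.diff(xi, 2) - P * f + b * f**3
assert sp.simplify(residual.rewrite(sp.exp)) == 0
```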
+
+Fig. 1. The shape of the solitary wave solution given by eq. (27)
+
+Fig. 2. The shape of the solitary wave solution given by eq. (27) at different time (when t = 0, t = 0.2, t = 0.4, t = 0.6, t = 0.8, t = 1)
+---PAGE_BREAK---
+
+Fig. 2. Continued.
+
+From the exact solution formula eq. (27), it can be concluded that the high-order dispersion term $\alpha_1$ and the nonlinear term $\beta$ both have a great influence on internal waves, which cannot be ignored. Obviously, by giving different values to the parameters $\alpha$, $\beta$, $\alpha_1$, $m$, $n$ and $E$, we will get different solitary wave solutions. If the parameters are set as $\alpha = 0.2$, $\alpha_1 = 0.2$, $\beta = 0.2$, $m = 2$, $n = 2$, and $E = 2$, with $x$ between -3 and 3 and $t$ between 0 and 1, the solitary wave solution can be plotted as in figure 1. From figure 1 and figure 2, it is easy to see that the wave solution is highly localized in space and has the characteristics of a soliton.
+
+Similarly, in order to get new solutions, we can choose a different form of solution functional as
+
+$$f = p \operatorname{sech}^2(q\xi), \quad \xi = x - Et \qquad (28)$$
+
+The calculation procedure is similar to the one above, with $p$ and $q$ again being undetermined parameters. Inserting eq. (28) into eq. (21) yields the following two-parameter functional:
+
+$$
+\begin{aligned}
+J &= \int_{0}^{\infty} 2(\alpha - 3\alpha_{1}m)p^{2}q^{2}\tanh^{2}(q\xi)\operatorname{sech}^{4}(q\xi)d\xi + \int_{0}^{\infty} \frac{1}{2}(n + \alpha m^{2} - \alpha_{1}m^{3})p^{2}\operatorname{sech}^{4}(q\xi)d\xi - \int_{0}^{\infty} \frac{1}{4}\beta p^{4}\operatorname{sech}^{8}(q\xi)d\xi \\
+&= \frac{4}{15}(\alpha - 3\alpha_{1}m)p^{2}q + \frac{(n + \alpha m^{2} - \alpha_{1}m^{3})p^{2}}{3q} - \frac{4\beta p^{4}}{35q}
+\end{aligned}
+\qquad (29)
+$$
+
+In order to get the stagnation point of $J$ on $p$ and $q$, we set up the following equations:
+
+$$
+\begin{aligned}
+\frac{\partial J}{\partial p} &= \frac{8}{15}(\alpha - 3\alpha_1 m) pq + \frac{2(n + \alpha m^2 - \alpha_1 m^3)}{3q} p - \frac{16\beta}{35q} p^3 \\
+\frac{\partial J}{\partial q} &= \frac{4}{15}(\alpha - 3\alpha_1 m) p^2 - \frac{(n + \alpha m^2 - \alpha_1 m^3)}{3q^2} p^2 + \frac{4\beta}{35q^2} p^4
+\end{aligned}
+\qquad (30)
+$$
+
+Or simplify to get:
+
+$$
+\begin{aligned}
+& 28(\alpha - 3\alpha_1 m)q^2 - 35(n + \alpha m^2 - \alpha_1 m^3) + 12\beta p^2 = 0 \\
+& 28(\alpha - 3\alpha_1 m)q^2 + 35(n + \alpha m^2 - \alpha_1 m^3) - 24\beta p^2 = 0
+\end{aligned}
+\qquad (31)
+$$
+
+Solving the above algebraic equations, we obtain:
+
+$$
+\begin{aligned}
+p &= \pm \sqrt{\frac{35(n + \alpha m^2 - \alpha_1 m^3)}{18\beta}} \\
+q &= \pm \sqrt{\frac{5(n + \alpha m^2 - \alpha_1 m^3)}{12(\alpha - 3\alpha_1 m)}}
+\end{aligned}
+\qquad (32)
+$$
+---PAGE_BREAK---
+
+provided
+
+$$ (n + \alpha m^2 - \alpha_1 m^3) \cdot \beta > 0 $$
+
+$$ (n + \alpha m^2 - \alpha_1 m^3) \cdot (\alpha - 3\alpha_1 m) > 0 $$
+
+and the result is:
+
+$$ A(x,t) = \pm \frac{1}{3} \sqrt{\frac{35(n + \alpha m^2 - \alpha_1 m^3)}{2\beta}} \operatorname{sech}^2 \left[ \pm \frac{1}{2} \sqrt{\frac{5(n + \alpha m^2 - \alpha_1 m^3)}{3(\alpha - 3\alpha_1 m)}} \xi \right] e^{i(mx-nt)} \quad (33) $$
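+
+The stagnation-point property can also be verified numerically: differentiating the functional $J$ of eq. (29) by central finite differences at the point given by eq. (32) should yield values close to zero. The sketch below uses illustrative parameters with $\alpha_1 = 0$ so that both radicands are positive:

```python
import math

# Illustrative parameters (alpha1 = 0 so both radicands in eq. (32) are positive)
alpha, alpha1, beta, m, n = 0.2, 0.0, 0.2, 2.0, 2.0
A1 = alpha - 3 * alpha1 * m
B1 = n + alpha * m**2 - alpha1 * m**3

def J(p, q):
    """Trial functional of eq. (29)."""
    return (4/15) * A1 * p**2 * q + B1 * p**2 / (3 * q) - 4 * beta * p**4 / (35 * q)

p = math.sqrt(35 * B1 / (18 * beta))   # eq. (32), + sign
q = math.sqrt(5 * B1 / (12 * A1))

h = 1e-6  # central differences approximate the derivatives in eq. (30)
dJdp = (J(p + h, q) - J(p - h, q)) / (2 * h)
dJdq = (J(p, q + h) - J(p, q - h)) / (2 * h)
print(dJdp, dJdq)  # both ~0: (p, q) is a stagnation point of J
```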
+
+Fig. 3. The shape of the solitary wave solution given by eq. (33)
+
+Fig. 4. The shape of the solitary wave solution given by eq. (33) at different time (when $t = 0, t = 0.2, t = 0.4, t = 0.6, t = 0.8, t = 1$)
+---PAGE_BREAK---
+
+Fig. 4. Continued.
+
+From the exact solution formula eq. (33), it can be concluded that the high-order dispersion term $α_1$ and the nonlinear term $β$ both have a great influence on internal waves and cannot be ignored. Obviously, by assigning different values to the parameters $α$, $β$, $α_1$, $m$, $n$ and $E$, we obtain different solitary wave solutions. If the parameters are set as $α = 0.2$, $α_1 = 0.2$, $β = 0.2$, $m = 2$, $n = 2$, and $E = 2$, with $x$ ranging from -3 to 3 and $t$ from 0 to 1, the solitary wave solution can be plotted as in figure 3. Figures 3 and 4 show that the amplitude of the wave solution is highly localized in space, which is characteristic of a soliton.
+
+And we can also choose
+
+$$f = p\sqrt{\operatorname{sech}(q\xi)}, \quad \xi = x - Et \tag{34}$$
+
+$$f' = -\frac{1}{2}pq \tanh(q\xi)\sqrt{\operatorname{sech}(q\xi)} \tag{35}$$
+
+Inserting eq. (34) and eq. (35) into eq. (19) gives:
+
+$$
+\begin{aligned}
+J &= \int_{0}^{\infty} \frac{1}{8} (\alpha - 3\alpha_1 m) p^2 q^2 \tanh^2(q\xi) \operatorname{sech}(q\xi) + \frac{1}{2} (n + \alpha m^2 - \alpha_1 m^3) p^2 \operatorname{sech}(q\xi) - \frac{1}{4} \beta p^4 \operatorname{sech}^2(q\xi) d\xi \\
+&= \frac{(\alpha - 3\alpha_1 m) \pi p^2 q}{32} + \frac{(n + \alpha m^2 - \alpha_1 m^3) \pi p^2}{4q} - \frac{\beta p^4}{4q}
+\end{aligned}
+\tag{36} $$
+
+In order to get the stagnation point of J on p and q, we set up the following equations:
+
+$$
+\begin{aligned}
+\frac{\partial J}{\partial p} &= \frac{(\alpha - 3\alpha_1 m) \pi pq}{16} + \frac{(n + \alpha m^2 - \alpha_1 m^3) \pi p}{2q} - \frac{\beta p^3}{q} \\
+\frac{\partial J}{\partial q} &= \frac{(\alpha - 3\alpha_1 m) \pi p^2}{32} - \frac{(n + \alpha m^2 - \alpha_1 m^3) \pi p^2}{4q^2} + \frac{\beta p^4}{4q^2}
+\end{aligned}
+\tag{37} $$
+
+Or simplify to get:
+
+$$
+\begin{aligned}
+(\alpha - 3\alpha_1 m) \pi q^2 &+ 8\pi(n + \alpha m^2 - \alpha_1 m^3) - 16\beta p^2 = 0 \\
+(\alpha - 3\alpha_1 m) \pi q^2 &- 8\pi(n + \alpha m^2 - \alpha_1 m^3) + 8\beta p^2 = 0
+\end{aligned}
+\tag{38} $$
+
+Solving the above algebraic equations, we obtain:
+
+$$p = \pm \sqrt{\frac{2(n + \alpha m^2 - \alpha_1 m^3)\pi}{3\beta}}$$
+
+$$q = \pm \sqrt{\frac{8(n + \alpha m^2 - \alpha_1 m^3)}{3(\alpha - 3\alpha_1 m)}} \tag{39}$$
+
+provided
+
+$$(n + \alpha m^2 - \alpha_1 m^3) \cdot \beta > 0$$
+
+$$(n + \alpha m^2 - \alpha_1 m^3) \cdot (\alpha - 3\alpha_1 m) > 0$$
+
+Finally, the solitary wave solution to eq. (1) is obtained:
+
+$$A(x,t) = \pm \sqrt{\frac{2(n + \alpha m^2 - \alpha_1 m^3)\pi}{3\beta}} \cdot \sqrt{\operatorname{sech}\left(\pm \sqrt{\frac{8(n + \alpha m^2 - \alpha_1 m^3)}{3(\alpha - 3\alpha_1 m)}}\, \xi\right)} \, e^{i(mx-nt)} \tag{40}$$
+---PAGE_BREAK---
+
+Fig. 5. The shape of the solitary wave solution given by eq. (40)
+
+Fig. 6. The shape of the solitary wave solution given by eq.(40) at different time
+(when t = 0, t = 0.2, t = 0.4, t = 0.6, t = 0.8, t = 1)
+---PAGE_BREAK---
+
+From the exact solution formula eq. (40), it can be concluded that the high-order dispersion term $α_1$ and the nonlinear term $β$ both have a great influence on internal waves and cannot be ignored. Obviously, by assigning different values to the parameters $α$, $β$, $α_1$, $m$, $n$ and $E$, we obtain different solitary wave solutions. If the parameters are set as $α = 0.2$, $α_1 = 0.2$, $β = 0.2$, $m = 2$, $n = 2$, and $E = 2$, with $x$ ranging from -3 to 3 and $t$ from 0 to 1, the solitary wave solution can be plotted as in figure 5. Figures 5 and 6 show that the amplitude of the wave solution is highly localized in space, which is characteristic of a soliton.
+
+# 4. Conclusion
+
+The generalized nonlinear Schrödinger equation is widely applied in mathematics and physics. It is closely related to many nonlinear problems in theoretical physics, such as nonlinear optics and ion acoustic waves in plasma. In particular, it is well suited to describing deep-sea internal wave propagation. In this paper, different kinds of variational principles have been successfully constructed for a generalized nonlinear Schrödinger equation by the semi-inverse method, through the skillful design of trial Lagrangian functionals. The constructed variational principles were then proved correct by minimizing the functionals with the calculus of variations. Subsequently, different solution structures for solitary waves were obtained by the semi-inverse variational principle for the GNLS equation. From the figures of the solutions, it is observed that, on the one hand, the amplitude of the solitary wave solution is highly localized in space, which displays the characteristics of a soliton; on the other hand, the shape of the solitary wave solution varies greatly over time. From the exact solution formulas, it can be concluded that the high-order dispersion term and the nonlinear term both have a great influence on internal wave solutions of the GNLS equation, and they cannot be ignored.
+
+# Author Contributions
+
+All authors have made important contributions to this paper. The details are as follows. Conceptualization, M.-Z. L. and X.-Q. C.; methodology, X.-Q. C. and M.-Z. L.; validation, K.-C. P.; writing—original draft preparation, M.-Z. L.; writing—review and editing, X.-Q. C., X.-Q. Z., and B.-N. L. The manuscript was written through the contribution of all authors. All authors discussed the results, reviewed, and approved the final version of the manuscript.
+
+# Acknowledgments
+
+The authors thank the reviewers for their useful comments, which led to the improvement of the content of the paper.
+
+# Conflict of Interest
+
+The authors declared no potential conflicts of interest with respect to the research, authorship, and publication of this article.
+
+# Funding
+
+This work is supported by the National Key R&D Program of China (Grant No. 2018YFC1506704) and the National Natural Science Foundation of China (Grant Nos. 42005003 and 41475094).
+
+# References
+
+[1] Jiang, Z.H., Huang, S.X., You, X. B., Ocean internal waves interpreted as oscillation travelling waves in consideration of ocean dissipation, Chinese Physics B, 23, 2014, 52-59.
+[2] Lee, C.Y., Beardsley, R.C., The generation of long nonlinear internal waves in a weakly stratified shear flow, Journal of Geophysical Research, 79, 1974, 453-462.
+[3] Wang, Z., Zhu Y.K., Theory, modelling and computation of nonlinear ocean internal waves, Chinese Journal of Theoretical and Applied Mechanics, 51, 2019, 1589-1604.
+[4] Karunakar, P., Chakraverty, S., Effect of Coriolis constant on Geophysical Korteweg-de Vries equation, Journal of Ocean Engineering and Science, 4, 2019, 113-121.
+[5] Kaya, D., Explicit and Numerical Solutions of Some Fifth-order KdV Equation by Decomposition Method, Applied Mathematics and Computation, 144, 2003, 353-363.
+[6] Wazwaz, A. M., A Study on Compaction-Like Solutions for the Modified KdV and Fifth Order KdV-Like Equations, Applied Mathematics and Computation, 147, 2004, 439-447.
+[7] Wazwaz, A.M., Helal, M.A., Variants of the Generalized Fifth-Order KdV Equation with Compact and Noncompact Structures, Chaos Solitons and Fractals, 21, 2004, 579-589.
+[8] Li, J., Gu X.F., Yu T., Sun Y. Simulation investigation on the internal wave via the analytical solution of Korteweg-de Vries equation, Marine Science Bulletin, 30, 2011, 23-28.
+[9] Benjamin, B.T., Internal waves of permanent form in fluids of great depth, Journal of Fluid Mechanics, 29(03), 1967, 559-592.
+[10] Hiroaki, O., Algebraic Solitary Waves in Stratified Fluids, Journal of the Physical Society of Japan, 39, 1975, 1082-1091.
+[11] Kubota, T., Ko, D., Dobbs, L., Propagation of weakly nonlinear internal waves in a stratified fluid of finite depth, AIAA Journal of Hydraulics, 12, 1978, 157-165.
+[12] Choi, W., Camassa, R., Fully nonlinear internal waves in a two-fluid system, Journal of Fluid Mechanics, 396, 1999, 1-36.
+[13] Song, S.Y., Wang, J., Meng, J.M., Wang, J.B., Hu, P. X., Nonlinear Schrödinger equation for internal waves in deep sea, Acta Physica Sinica, 59(02), 2010, 1123-1129.
+[14] Liu, S.K., Fu, Z.T., Expansion method about the Jacobi elliptic function and its applications to nonlinear wave equations, Acta Physica Sinica, 50, 2001, 2068-2073.
+[15] He, J.H., Exp-function method for fractional differential equations, International Journal Nonlinear Science and Numerical Simulation, 14, 2013, 363-366.
+[16] He, J.H., On the fractal variational principle for the Telegraph equation, Fractals, 29, 2021, https://doi.org/10.1142/S0218348X21500225.
+[17] Wu, Y., Variational approach to higher-order water-wave equations, Chaos Solitons and Fractals, 32, 2007, 195-203.
+[18] Gazzola, F., Wang, Y., Pavani, R., Variational formulation of the Melan equation, Mathematical Methods in the Applied Sciences, 41, 2018, 943-951.
+[19] He, J.H., Liu, F.J., Local Fractional Variational Iteration Method for Fractal Heat Transfer in Silk Cocoon Hierarchy, Nonlinear Science Letters A, 2013, 4, 15-20.
+[20] He, J.H., Ji, F.Y., Taylor Series Solution for Lane-Emden Equation, Journal of Mathematical Chemistry, 57(8), 2019, 1932-1934.
+[21] He, C.H., Shen, Y., Ji, F.Y., He, J.H., Taylor series solution for fractal Bratu-type equation arising in electrospinning process, Fractals, 28(1), 2020, 2050011.
+[22] He, J.H., Taylor series solution for a third order boundary value problem arising in architectural engineering, Ain Shams Engineering Journal, 11(4), 2020, 1411-1414.
+[23] Kaup, D.J., Variational solutions for the discrete nonlinear Schrödinger equation, Mathematics and Computers in Simulation, 69, 2005, 322-333.
+---PAGE_BREAK---
+
+[24] Putri, N.Z., Asfa, A.R., Fitri, A., Bakri, I., Syafwan, M., Variational approximations for intersite soliton in a cubic-quintic discrete nonlinear Schrödinger equation, *Journal of Physics, Conference Series*, 2019, 1317, 012-015.
+
+[25] He, J.H., Variational principles for some nonlinear partial differential equations with variable coefficients, *Chaos Solitons and Fractals*, 19, 2004, 847-851.
+
+[26] He, J.H., A modified Li-He's variational principle for plasma, *International Journal Numerical Methods for Heat & Fluid Flow*, 2019, doi:10.1108/HFF-06-2019-0523.
+
+[27] He, J.H., Generalized equilibrium equations for shell derived from a generalized variational principle, *Applied Mathematics Letters*, 64, 2017, 94-100.
+
+[28] He, J.H., Sun, C., A variational principle for a thin film equation, *Journal of Mathematical Chemistry*, 57, 2019, 2075-2081.
+
+[29] He, J.H., Semi-Inverse Method of Establishing Generalized Variational Principles for Fluid Mechanics With Emphasis on Turbomachinery Aerodynamics, *International Journal of Turbo & Jet-Engines*, 14, 1997, 23-28.
+
+[30] He, J.H., Generalized Variational Principles for Buckling Analysis of Circular Cylinders, *Acta Mechanica*, 231, 2020, 899-906.
+
+[31] He, J.H., A fractal variational theory for one-dimensional compressible flow in a microgravity space, *Fractals*, 2019, https://doi.org/10.1142/S0218348X20500243.
+
+[32] Yue, S., He, J.H., Variational principle for a generalized KdV equation in a fractal space, *Fractals*, 28(4), 2020, 2050069.
+
+[33] Biswas, A., Zhou, Q., Ullah, M.Z., Triki, H., Moshokoa, S.P., Belic, M., Optical soliton perturbation with anti-cubic nonlinearity by semi-inverse variational principle, *Optik*, 143, 2017, 131-134.
+
+[34] Cao, X.Q., Guo Y.N., Hou, S.C., Zhang, C.Z., Peng, K.C., Variational Principles for Two Kinds of Coupled Nonlinear Equations in Shallow Water, *Symmetry*, 12, 2020, 850.
+
+[35] Cao, X.Q., Generalized variational principles for Boussinesq equation systems, *Acta Physica Sinica*, 60, 2011, 105-113.
+
+[36] Kohl, R.W., Biswas, A., Zhou, Q., Ekici, M. Alzahrani, A.K., Belic, M.R., Optical soliton perturbation with polynomial and triple-power laws of refractive index by semi-inverse variational principle, *Chaos, Solitons & Fractals*, 135, 2020, 109765.
+
+[37] He, J.H., El-Dib, Y.O., Periodic property of the time-fractional Kundu-Mukherjee-Naskar equation, *Results in Physics*, 19, 2020, 103345.
+
+[38] He, J.H., Variational principle and periodic solution of the Kundu-Mukherjee-Naskar equation, *Results in Physics*, 17, 103031.
+
+[39] He, J.H., Kou, S.J., He, C.H., Zhang, Z.W., Khaled, A. Gepreel., Fractal oscillation and its frequency-amplitude property, *Fractals*, 2021, DOI: 10.1142/S0218348X2150105X.
+
+[40] He, C.H., Liu, C., He, J.H., Shirazi, A.H., Sedighi, H.M., Passive Atmospheric water harvesting utilizing an ancient Chinese ink slab and its possible applications in modern architecture, *Facta Universitatis Series-Mechanical Engineering*, 2021, doi: 10.22190/FUME201203001H.
+
+ORCID iD
+
+Meng-Zhu Liu https://orcid.org/0000-0001-6699-3496
+Xiao-Qun Cao https://orcid.org/0000-0002-6135-0712
+Xiao-Qian Zhu https://orcid.org/0000-0002-5769-9292
+Bai-Nian Liu https://orcid.org/0000-0002-2742-1265
+Ke-Cheng Peng https://orcid.org/0000-0002-3800-271X
+
+© 2021 Shahid Chamran University of Ahvaz, Ahvaz, Iran. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution-NonCommercial 4.0 International (CC BY-NC 4.0 license) (http://creativecommons.org/licenses/by-nc/4.0/).
+
+**How to cite this article:** Liu M.-Z. et al., Variational Principles and Solitary Wave Solutions of Generalized Nonlinear Schrödinger Equation in the Ocean, *J. Appl. Comput. Mech.*, 7(3), 2021, 1639–1648. https://doi.org/10.22055/JACM.2021.36690.2890
+
+Publisher's Note Shahid Chamran University of Ahvaz remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
\ No newline at end of file
diff --git a/samples/texts_merged/351103.md b/samples/texts_merged/351103.md
new file mode 100644
index 0000000000000000000000000000000000000000..884363172f115eb76017a13d8d2bdcef3b98a852
--- /dev/null
+++ b/samples/texts_merged/351103.md
@@ -0,0 +1,825 @@
+
+---PAGE_BREAK---
+
+# Compositional Verification of Concurrent Systems by Combining Bisimulations
+
+Frédéric Lang, Radu Mateescu, Franco Mazzanti
+
+► To cite this version:
+
+Frédéric Lang, Radu Mateescu, Franco Mazzanti. Compositional Verification of Concurrent Systems by Combining Bisimulations. FM 2019 - 23rd International Conference on Formal Methods, Oct 2019, Porto, Portugal. pp.196-213, 10.1007/978-3-030-30942-8_13 . hal-02295459
+
+HAL Id: hal-02295459
+
+https://hal.inria.fr/hal-02295459
+
+Submitted on 24 Sep 2019
+
+**HAL** is a multi-disciplinary open access archive for the deposit and dissemination of scientific research documents, whether they are published or not. The documents may come from teaching and research institutions in France or abroad, or from public or private research centers.
+
+---PAGE_BREAK---
+
+# Compositional Verification of Concurrent
+Systems by Combining Bisimulations
+
+Frédéric Lang¹, Radu Mateescu¹, and Franco Mazzanti²
+
+¹ Univ. Grenoble Alpes, Inria, CNRS, Grenoble INP**, LIG, 38000 Grenoble, France
+
+² ISTI-CNR, Pisa, Italy
+
+**Abstract.** One approach to verify a property expressed as a modal $\mu$-calculus formula on a system with several concurrent processes is to build the underlying state space compositionally (i.e., by minimizing and re-composing the state spaces of individual processes, keeping visible only the relevant actions occurring in the formula), and check the formula on the resulting state space. It was shown previously that, when checking the formulas of the $L_{\mu}^{\text{dsbr}}$ fragment of $\mu$-calculus (consisting of weak modalities only), individual processes can be minimized modulo divergence-preserving branching (divbranching) bisimulation. In this paper, we refine this approach to handle formulas containing both strong and weak modalities, so as to enable a combined use of strong or divbranching bisimulation minimization on concurrent processes depending whether they contain or not the actions occurring in the strong modalities of the formula. We extend $L_{\mu}^{\text{dsbr}}$ with strong modalities and show that the combined minimization approach preserves the truth value of formulas of the extended fragment. We implemented this approach on top of the CADP verification toolbox and demonstrated how it improves the capabilities of compositional verification on realistic examples of concurrent systems.
+
+## 1 Introduction
+
+We consider the problem of verifying a temporal logic property on a concurrent system $P_1 || ... || P_n$ consisting of $n$ processes composed in parallel. We work in the action-based setting, the property being specified as a formula $\varphi$ of the modal $\mu$-calculus ($L_\mu$) [18] and the processes $P_i$ being described in a language with process algebraic flavour. A well-known problem is the state-space explosion that happens when the system state space exceeds the available computer memory.
+
+Compositional verification is a set of techniques and tools that have proven efficient to palliate state-space explosion in many situations [11]. These techniques may be either independent of the property, i.e., focus only on the construction of the system state space, such as compositional state space construction [22, 29, 32, 31, 14, 33, 19]. Alternatively, they may depend on the property, e.g., verification of the property on the full system is decomposed in the verification of properties on (expectedly smaller) sub-systems, such as in compositional reachability analysis [36, 4], assume-guarantee reasoning [28], or partial model checking [1].
+
+** Institute of Engineering Univ. Grenoble Alpes
+---PAGE_BREAK---
+
+Nevertheless, the frontier between property-independent and property-dependent techniques is loose. In compositional state space construction, to be able to reduce the system size, a set of actions is selected and a suitable equivalence relation (e.g., strong bisimulation, branching bisimulation, or divergence-preserving branching bisimulation — divbranching for short) is chosen, restricting the set of properties preserved after hiding the selected actions and reducing the system w.r.t. the selected relation. Therefore, there is still a dependency between the state space construction and the set of properties that can be verified. Given a formula $\varphi$ of $L_{\mu}$ to be verified on the system, Mateescu & Wijs [24] have pushed this idea and shown how to extract a maximal hiding set of actions and an equivalence relation (either strong or divbranching bisimulation) automatically from $\varphi$, thus inviting the compositional state space construction technique to the table of property-dependent reductions. To select the equivalence relation from the formula, they have identified an $L_{\mu}$ fragment named $L_{\mu}^{dsbr}$, which is adequate with divbranching bisimulation [24]. This fragment consists of $L_{\mu}$ restricted to weak modalities, which match actions preceded by arbitrary sequences of hidden actions, as opposed to traditional strong modalities $\langle\alpha\rangle \varphi_0$ and $[\alpha]\varphi_0$, which match only a single action satisfying $\alpha$. If $\varphi$ belongs to $L_{\mu}^{dsbr}$, then the system can be reduced for divbranching bisimulation; otherwise, it can be reduced for strong bisimulation, the weakest equivalence relation preserving full $L_{\mu}$.
+
+In this paper, we revisit and refine this approach to accommodate $L_{\mu}$ formulas containing both strong and weak modalities. To do so, we define a logic named $L_{\mu}^{strong}(A_s)$, which extends $L_{\mu}^{dsbr}$ with strong modalities matching only the actions belonging to a given set $A_s$ of strong actions. The set $A_s$ induces a partition of the processes $P_1 || ... || P_n$ into those containing at least one strong action, and those that do not. We show that a formula $\varphi$ of $L_{\mu}^{strong}(A_s)$ is still preserved if the processes containing strong actions are reduced modulo strong bisimulation and the other ones modulo divbranching bisimulation. We also provide guidelines for extracting the set $A_s$ from particular $L_{\mu}$ formulas encoding the operators of widely-used temporal logics, such as CTL [5], ACTL [26], PDL [9], and PDL-$\Delta$ [30]. This combined use of bisimulations to reduce different parts of the same system makes possible a fine-tuning of the compositional state space construction by going smoothly from strong bisimulation (when all modalities are strong) to divbranching bisimulation (when $A_s$ is empty, as in the previous approach based on $L_{\mu}^{dsbr}$). We implemented this approach on top of the CADP verification toolbox [12], and demonstrated how it improves the capabilities of compositional verification on two realistic case studies, namely the TFTP plane-ground communication protocol specified in [13] and the parallel CTL benchmark of the RERS'2018 challenge.
+
+The paper is organized as follows. Section 2 recalls some definitions. Section 3 defines $L_{\mu}^{strong}(A_s)$ and proves the main result of its adequacy with the combined use of strong and divbranching bisimulations. Section 4 presents the experimental results obtained on the two case studies. Finally, Section 5 contains concluding remarks and directions of future work. Formal proofs and code of case studies are available at http://doi.org/10.5281/zenodo.2634148.
+---PAGE_BREAK---
+
+# 2 Background
+
+## 2.1 LTS compositions and reductions
+
+We consider systems whose behavioural semantics can be represented using an LTS (Labelled Transition System).
+
+**Definition 1 (LTS).** Let $\mathcal{A}$ denote an infinite set of actions, including the invisible action $\tau$, which denotes internal behaviour. All other actions are called visible. An LTS is a tuple $(\Sigma, A, \rightarrow, p_{init})$, where $\Sigma$ is a set of states, $A \subseteq \mathcal{A}$ is a set of actions, $\rightarrow \subseteq \Sigma \times A \times \Sigma$ is the (labelled) transition relation, and $p_{init} \in \Sigma$ is the initial state. We write $p \xrightarrow{a} p'$ if $(p, a, p') \in \rightarrow$ and $p \xrightarrow{\tau^*} p'$ if there is a (possibly empty) sequence of $\tau$-transitions from $p$ to $p'$, i.e., states $p_0, \dots, p_n$ ($n \ge 0$) such that $p = p_0$, $p' = p_n$, and $p_i \xrightarrow{\tau} p_{i+1}$ for $i = 0, \dots, n-1$.
+
+LTS can be composed in parallel and their actions can be abstracted away using the parallel composition and hiding operators defined below. Prior to hiding, an action mapping operator is also introduced for the generality of the approach.
+
+**Definition 2 (Parallel composition of LTS).** Let $P = (\Sigma_P, A_P, \rightarrow_P, p_{init})$, $Q = (\Sigma_Q, A_Q, \rightarrow_Q, q_{init})$, and $A_{sync} \subseteq \mathcal{A} \setminus \{\tau\}$. The parallel composition of $P$ and $Q$ with synchronization on $A_{sync}$, "P |[A_{sync}]| Q", is defined as $(\Sigma_P \times \Sigma_Q, A_P \cup A_Q, \rightarrow, (p_{init}, q_{init}))$, where $(p, q) \xrightarrow{a} (p', q')$ if and only if either (1) $p \xrightarrow{a} p'$, $q' = q$, and $a \notin A_{sync}$, or (2) $p' = p, q \xrightarrow{a} q'$, and $a \notin A_{sync}$, or (3) $p \xrightarrow{a} p', q \xrightarrow{a} q'$, and $a \in A_{sync}$.
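+
+Definition 2 translates directly into an executable sketch. The code below (illustrative helper names; this is not CADP code) builds the reachable part of the product LTS, representing an LTS as a tuple (states, actions, transitions, initial state):

```python
def parallel(P, Q, sync):
    """Synchronous product of two LTS per Definition 2 (illustrative sketch).
    An LTS is a tuple (states, actions, transitions, initial), with
    transitions a set of (source, action, target) triples."""
    _, AP, TP, p0 = P
    _, AQ, TQ, q0 = Q
    states, trans, todo = {(p0, q0)}, set(), [(p0, q0)]
    while todo:
        p, q = todo.pop()
        moves = []
        for (s, a, s2) in TP:
            if s == p and a not in sync:
                moves.append((a, (s2, q)))            # rule (1): P moves alone
        for (s, a, s2) in TQ:
            if s == q and a not in sync:
                moves.append((a, (p, s2)))            # rule (2): Q moves alone
        for (s, a, s2) in TP:
            if s == p and a in sync:
                for (t, b, t2) in TQ:
                    if t == q and b == a:
                        moves.append((a, (s2, t2)))   # rule (3): synchronization
        for (a, nxt) in moves:
            trans.add(((p, q), a, nxt))
            if nxt not in states:
                states.add(nxt)
                todo.append(nxt)
    return (states, AP | AQ, trans, (p0, q0))

# P loops on a.b; Q offers a single a; they synchronize on "a"
P = ({0, 1}, {"a", "b"}, {(0, "a", 1), (1, "b", 0)}, 0)
Q = ({0, 1}, {"a"}, {(0, "a", 1)}, 0)
R = parallel(P, Q, {"a"})
print(sorted(R[0]))  # reachable product states; (0, 1) is a deadlock
```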
+
+**Definition 3 (Action mapping).** Let $P = (\Sigma_P, A_P, \rightarrow_P, p_{init})$ and a total function $\rho : A_P \to 2^\mathcal{A}$. We write $\rho(A_P)$ for the image of $\rho$, defined by $\cup_{a \in A_P} \rho(a)$. We write $\rho(P)$ for the action mapping $\rho$ applied to $P$, defined as the LTS $(\Sigma_P, \rho(A_P), \rightarrow'_P, p_{init})$ where $\rightarrow'_P = \{(p, a', p') \mid p \xrightarrow{a}_P p' \land a' \in \rho(a)\}$. An action mapping $\rho$ is admissible if $\tau \in A_P \implies \rho(\tau) = \{\tau\}$.
+
+Action mapping enables a single action $a$ to be mapped onto the empty set of actions, onto a single action $a'$, or onto several actions $a'_0, \dots, a'_{n+1}$ ($n \ge 0$). In the first case, every transition labelled by $a$ is removed. In the second case, $a$ is renamed into $a'$. In the third case, every transition labelled by $a$ is replaced by $n + 2$ transitions with the same source and target states, labelled by $a'_0, \dots, a'_{n+1}$. Action hiding is a special case of admissible action mapping.
+
+**Definition 4 (Action hiding).** Let $P = (\Sigma_P, A_P, \rightarrow_P, p_{init})$ and $A \subseteq \mathcal{A} \setminus \{\tau\}$. We write "hide *A* in *P*" for the LTS $\rho(P)$, where $\rho$ is the admissible action mapping defined by $(\forall a \in A_P \cap A) \rho(a) = \{\tau\}$ and $(\forall a \in A_P \setminus A) \rho(a) = \{a\}$.
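+
+Hiding, as a special case of admissible action mapping, is straightforward to sketch in the same tuple representation (again illustrative code, with the invisible action written as the string "tau"):

```python
def hide(P, A):
    """'hide A in P' (Definition 4): an admissible action mapping that
    relabels every action of A to the invisible action, written "tau" here."""
    S, AP, T, init = P
    rho = lambda a: "tau" if a in A else a
    return (S, {rho(a) for a in AP}, {(p, rho(a), p2) for (p, a, p2) in T}, init)

P = ({0, 1, 2}, {"get", "put"}, {(0, "get", 1), (1, "put", 2)}, 0)
H = hide(P, {"get"})
print(H[2])  # the "get" transition becomes invisible
```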
+
+Parallel composition and admissible action mapping subsume all abstraction and composition operators encodable as networks of LTS [20, 11, 7], such as the parallel composition, hiding, renaming, and cut (or restriction) operators of CCS [25], CSP [2], mCRL [15], LOTOS [16], E-LOTOS [17], and LNT [3], as well as synchronization vectors³. In the sequel, we write $P_1 || \dots || P_n$ for any
+
+³ For instance, the composition of $P$ and $Q$ where action $a$ of $P$ synchronizes with either $b$ or $c$ of $Q$, can be written as $\rho(P) |[b,c]| Q$, where $\rho$ maps $a$ onto $\{b,c\}$.
+---PAGE_BREAK---
+
+expression composing $P_1, \dots, P_n$ using these operators. Given any partition of $P_1, \dots, P_n$ into arbitrary subsets $\mathcal{P}_1$ and $\mathcal{P}_2$, it is always possible to rewrite $P_1 || \dots || P_n$ in the form $(||_{P_i \in \mathcal{P}_1} P_i) \mathbin{||} (||_{P_j \in \mathcal{P}_2} P_j)$, even for non-associative parallel composition operators (e.g., $|[\dots]|$), using appropriate action mappings⁴.
+
+LTS can be compared and reduced with respect to well-known bisimulation relations. In this paper, we consider strong bisimulation [27] and divbranching bisimulation, which itself derives from branching bisimulation [34, 35].
+
+**Definition 5 (Bisimulations).** A strong bisimulation is a symmetric relation $R \subseteq \Sigma \times \Sigma$ such that if $(p_1, p_2) \in R$ then: for all $p_1 \xrightarrow{a} p'_1$, there exists $p'_2$ such that $p_2 \xrightarrow{a} p'_2$ and $(p'_1, p'_2) \in R$. A branching bisimulation is a symmetric relation $R \subseteq \Sigma \times \Sigma$ such that if $(p_1, p_2) \in R$ then: for all $p_1 \xrightarrow{a} p'_1$, either $a = \tau$ and $(p'_1, p_2) \in R$, or there exists a sequence $p_2 \xrightarrow{\tau^*} p'_2 \xrightarrow{a} p''_2$ such that $(p_1, p'_2) \in R$ and $(p'_1, p''_2) \in R$. A divergence-preserving branching bisimulation (divbranching bisimulation for short) is a branching bisimulation $R$ such that if $(p_1^0, p_2^0) \in R$ and there is an infinite sequence $p_1^0 \xrightarrow{\tau} p_1^1 \xrightarrow{\tau} p_1^2 \xrightarrow{\tau} \dots$ with $(p_1^i, p_2^i) \in R$ for all $i \ge 0$, then there is an infinite sequence $p_2^0 \xrightarrow{\tau} p_2^1 \xrightarrow{\tau} p_2^2 \xrightarrow{\tau} \dots$ such that $(p_1^i, p_2^j) \in R$ for all $i, j \ge 0$. Two states $p_1$ and $p_2$ are strongly (resp. branching, divbranching) bisimilar, written $p_1 \sim p_2$ (resp. $p_1 \sim_{br} p_2$, $p_1 \sim_{dsbr} p_2$), if there exists a strong (resp. branching, divbranching) bisimulation $R$ such that $(p_1, p_2) \in R$. Two LTS $P_1$ and $P_2$ are strongly (resp. branching, divbranching) bisimilar, written $P_1 \sim P_2$ (resp. $P_1 \sim_{br} P_2$, $P_1 \sim_{dsbr} P_2$), if their initial states are strongly (resp. branching, divbranching) bisimilar.
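+
+As an illustration of Definition 5, strong bisimilarity can be computed on small LTS by naive partition refinement: repeatedly split blocks of states according to their sets of (action, target block) pairs until the partition stabilizes. This sketch is for illustration only and is unrelated to the algorithms implemented in CADP:

```python
def strong_bisim_classes(S, T):
    """Coarsest strong bisimulation on states S of an LTS with transitions T
    (a set of (source, action, target) triples), by naive partition refinement."""
    block = {s: 0 for s in S}          # start from the trivial partition
    while True:
        # signature of a state: its outgoing (action, target block) pairs
        sig = {s: frozenset((a, block[t]) for (p, a, t) in T if p == s) for s in S}
        ids, newblock = {}, {}
        for s in S:
            key = (block[s], sig[s])   # split each block by signature
            ids.setdefault(key, len(ids))
            newblock[s] = ids[key]
        if newblock == block:          # partition stable: done
            return block
        block = newblock

def strongly_bisimilar(P, Q):
    """P ~ Q iff their initial states are equated on the disjoint union of P and Q."""
    SP, _, TP, p0 = P
    SQ, _, TQ, q0 = Q
    S = {("P", s) for s in SP} | {("Q", s) for s in SQ}
    T = {(("P", p), a, ("P", t)) for (p, a, t) in TP} | \
        {(("Q", p), a, ("Q", t)) for (p, a, t) in TQ}
    cls = strong_bisim_classes(S, T)
    return cls[("P", p0)] == cls[("Q", q0)]

# a.b.0, with and without a duplicated intermediate state: strongly bisimilar
P = ({0, 1, 2}, {"a", "b"}, {(0, "a", 1), (1, "b", 2)}, 0)
Q = ({0, 1, 2, 3}, {"a", "b"}, {(0, "a", 1), (0, "a", 2), (1, "b", 3), (2, "b", 3)}, 0)
print(strongly_bisimilar(P, Q))    # True
# a.(b + c) versus a.b + a.c: not strongly bisimilar
R1 = ({0, 1, 2, 3}, {"a", "b", "c"}, {(0, "a", 1), (1, "b", 2), (1, "c", 3)}, 0)
R2 = ({0, 1, 2, 3, 4}, {"a", "b", "c"}, {(0, "a", 1), (0, "a", 2), (1, "b", 3), (2, "c", 4)}, 0)
print(strongly_bisimilar(R1, R2))  # False
```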
+
+Strong, branching, and divbranching bisimulations are congruences for parallel composition and admissible action mapping. This allows reductions to be applied at any intermediate step during the state space construction, thus potentially reducing the overall cost of reduction. However, since processes may constrain each other by synchronization, composing LTS two by two following the algebraic structure of the composition expression and applying reduction after each composition can be orders of magnitude less efficient than other strategies in terms of the largest intermediate LTS. Finding an optimal strategy is difficult. One generally relies on heuristics to select a subset of LTS to compose at each step of the compositional reduction. In this paper, we will use the *smart reduction* heuristic [6, 11], which is implemented within the SVL [10] tool of CADP [12]. This heuristic tries to find an efficient composition order by analysing the synchronization and hiding structure of the composition expression.
+
+## 2.2 Temporal logics
+
+**Definition 6 (Modal μ-calculus [18]).** The modal μ-calculus ($L_μ$) is built from action formulas $\alpha$ and state formulas $\varphi$, whose syntax and semantics w.r.t.
+
+⁴ For instance, $P_1 \,|[a]|\, (P_2 \,|[\,]|\, P_3)$ is equivalent to $\rho_0((\rho_1(P_1) \,|[a_1]|\, \rho_2(P_2)) \,|[a_2]|\, \rho_3(P_3))$ —observe the different associativity— where $\rho_1$ maps $a$ onto $\{a_1, a_2\}$, $\rho_2$ renames $a$ into $a_1$, $\rho_3$ renames $a$ into $a_2$, and $\rho_0$ renames $a_1$ and $a_2$ into $a$.
+---PAGE_BREAK---
+
+an LTS $P = (\Sigma, A, \rightarrow, p_{init})$ are defined as follows:
+
+$$
+\begin{array}{l@{\hspace{2em}}c@{\hspace{2em}}l}
+\alpha & ::= & a \\
+ & | & \mathbf{false} \\
+ & | & \alpha_1 \lor \alpha_2 \\
+ & | & \neg \alpha_0 \\
+\varphi & ::= & \mathbf{false} \\
+ & | & \varphi_1 \lor \varphi_2 \\
+ & | & \neg \varphi_0 \\
+ & | & \langle \alpha \rangle \varphi_0 \\
+ & | & X \\
+ & | & \mu X.\varphi_0
+\end{array}
+\qquad
+\begin{array}{l}
+[a]_A = \{a\} \\
+[\mathbf{false}]_A = \emptyset \\
+[\alpha_1 \lor \alpha_2]_A = [\alpha_1]_A \cup [\alpha_2]_A \\
+[\neg \alpha_0]_A = A \setminus [\alpha_0]_A \\
+[\mathbf{false}]_P \delta = \emptyset \\
+[\varphi_1 \lor \varphi_2]_P \delta = [\varphi_1]_P \delta \cup [\varphi_2]_P \delta \\
+[\neg \varphi_0]_P \delta = \Sigma \setminus [\varphi_0]_P \delta \\
+[\langle \alpha \rangle \varphi_0]_P \delta = \{s \in \Sigma \mid \exists a \in [\alpha]_A, s' \in [\varphi_0]_P \delta.\ s \xrightarrow{a} s'\} \\
+[X]_P \delta = \delta(X) \\
+[\mu X.\varphi_0]_P \delta = \bigcup_{k \ge 0} \Phi_{0,P,\delta}^k(\emptyset)
+\end{array}
+$$
+
+where $X \in \mathcal{X}$ are propositional variables denoting sets of states, $\delta : \mathcal{X} \to 2^\Sigma$ is a context mapping propositional variables to sets of states, $[]$ is the empty context, $\delta[U/X]$ is the context identical to $\delta$ except for variable $X$, which is mapped to the state set $U$, the functional $\Phi_{0,P,\delta}: 2^\Sigma \to 2^\Sigma$ associated to the formula $\mu X.\varphi_0$ is defined as $\Phi_{0,P,\delta}(U) = [\varphi_0]_P\,\delta[U/X]$, and $\Phi^k$ denotes $k$-fold application. We write $P \models \varphi$ (read $P$ satisfies $\varphi$) for $p_{init} \in [\varphi]_P\,[]$.
+
+Action formulas $\alpha$ are built from actions and boolean operators. State formulas $\varphi$ are built from boolean operators, the possibility modality $\langle\alpha\rangle \varphi_0$ denoting the states with an outgoing transition labeled by an action satisfying $\alpha$ and leading to a state satisfying $\varphi_0$, and the minimal fixed point operator $\mu X.\varphi_0$ denoting the least solution of the equation $X = \varphi_0$ interpreted over $2^\Sigma$.
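As a concrete illustration of the fixed point semantics (our sketch, not from the paper), the least fixed point $\mu X.\varphi_0 \lor \langle \mathrm{true}\rangle X$, which denotes the states from which a state satisfying $\varphi_0$ is reachable, can be computed by Kleene iteration from the empty set, exactly as in the semantic clause $\bigcup_{k \ge 0} \Phi^k(\emptyset)$ above.

```python
# Kleene iteration for mu X. target \/ <true> X on a finite explicit LTS:
# iterate the functional Phi from the empty set until stabilisation.

def diamond(transitions, U):
    """states with some transition into U, i.e. the <true> modality."""
    return {p for (p, a, q) in transitions if q in U}

def mu_reach(states, transitions, target):
    U = set()
    while True:
        nxt = target | diamond(transitions, U)  # Phi(U)
        if nxt == U:
            return U
        U = nxt

# states 0 and 1 can reach state 2; state 3 cannot
reach = mu_reach({0, 1, 2, 3}, {(0, "a", 1), (1, "b", 2), (3, "c", 3)}, {2})
```

Monotonicity of $\Phi$ guarantees that this iteration converges, which is precisely why syntactic monotonicity is required of state formulas.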
+
+The usual derived operators are defined as follows: boolean connectors $\mathbf{true} = \neg\mathbf{false}$ and $\varphi_1 \land \varphi_2 = \neg(\neg\varphi_1 \lor \neg\varphi_2)$; necessity modality $[\alpha]\varphi_0 = \neg\langle\alpha\rangle\neg\varphi_0$; and maximal fixed point operator $\nu X.\varphi_0 = \neg\mu X.\neg\varphi_0[\neg X/X]$, where $\varphi_0[\neg X/X]$ is the syntactic substitution of $X$ by $\neg X$ in $\varphi_0$. Syntactically, $\langle\rangle$ and $[]$ have the highest precedence, followed by $\land$, then $\lor$, and finally $\mu$ and $\nu$. To have a well-defined semantics, state formulas must be syntactically monotonic [18], i.e., in every subformula $\mu X.\varphi_0$, all occurrences of $X$ in $\varphi_0$ fall in the scope of an even number of negations. Thus, negations can be eliminated by downward propagation.
+
+Although $L_\mu$ subsumes most action-based logics, its operators are rather low-level and lead to complex formulas. In practice, temporal logics or extensions of $L_\mu$ with higher-level operators are used, avoiding (or at least reducing) the use of fixed point operators and modalities. We review informally some of these logics (whose operators can be translated to $L_\mu$), which will be useful in the sequel.
+
+Propositional Dynamic Logic with Looping The logic PDL-Δ [30] introduces the modalities $\langle\beta\rangle\varphi_0$ and $\langle\beta\rangle@$, where $\beta$ is a regular formula defined as follows:
+
+$$
+\beta ::= \varphi? \mid \alpha \mid \beta_1 \cdot \beta_2 \mid \beta_1 | \beta_2 \mid \beta_0^*
+$$
+
+Regular formulas $\beta$ denote sets of transition sequences in an LTS: the testing operator $\varphi?$ denotes all zero-step sequences consisting of states satisfying $\varphi$; $\alpha$
+---PAGE_BREAK---
+
+denotes all one-step sequences consisting of a transition labeled by an action satisfying $\alpha$; the concatenation $\beta_1 \cdot \beta_2$, choice $\beta_1 | \beta_2$, and transitive-reflexive closure $\beta_0^*$ operators have their usual semantics transposed to transition sequences.
+
+The regular diamond modality $\langle\beta\rangle \varphi_0$ denotes the states with an outgoing transition sequence satisfying $\beta$ and leading to a state satisfying $\varphi_0$. The infinite looping operator $\langle\beta\rangle @ $ denotes the states having an outgoing transition sequence consisting of an infinite concatenation of subsequences satisfying $\beta$.
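As an illustration (our sketch, with action formulas represented as predicates), the regular diamond modality for the common shape $\langle \alpha_1^* \cdot \alpha_2 \rangle \varphi$ can be evaluated by a backward least fixed point: first take the states satisfying the one-step $\langle \alpha_2 \rangle \varphi$, then close backwards under $\alpha_1$-transitions.

```python
# Evaluate <alpha1* . alpha2> phi on an explicit LTS: the states from which a
# (possibly empty) sequence of alpha1-transitions leads to an alpha2-transition
# into a state satisfying phi (illustration only).

def pdl_diamond(transitions, alpha1, alpha2, phi_states):
    # base case: one-step <alpha2> phi
    U = {p for (p, a, q) in transitions if alpha2(a) and q in phi_states}
    # close backwards under alpha1-steps (least fixed point)
    changed = True
    while changed:
        changed = False
        for (p, a, q) in transitions:
            if alpha1(a) and q in U and p not in U:
                U.add(p)
                changed = True
    return U

# <tau* . a> true on 0 -tau-> 1 -a-> 2 holds in states 0 and 1
T = {(0, "tau", 1), (1, "a", 2)}
sat = pdl_diamond(T, lambda a: a == "tau", lambda a: a == "a", {0, 1, 2})
```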
+
+Action Computation Tree Logic The logic ACTL\X (ACTL without the next operator) [26] introduces four temporal operators, whose semantics in terms of $L_\mu$ formulas can be found in [8, 24]; the action formulas $\alpha_1, \alpha_2$ below are interpreted over visible actions:
+
+$$E(\varphi_1\,_{\alpha_1}U\,\varphi_2) \qquad E(\varphi_1\,_{\alpha_1}U_{\alpha_2}\,\varphi_2) \qquad A(\varphi_1\,_{\alpha_1}U\,\varphi_2) \qquad A(\varphi_1\,_{\alpha_1}U_{\alpha_2}\,\varphi_2)$$
+
+A transition sequence satisfies the path formula $\varphi_1\,_{\alpha_1}U_{\alpha_2}\,\varphi_2$ if it contains a visible transition whose action satisfies $\alpha_2$ and whose target state satisfies $\varphi_2$, whereas at any moment before this transition, $\varphi_1$ holds and all visible actions satisfy $\alpha_1$. A sequence satisfies $\varphi_1\,_{\alpha_1}U\,\varphi_2$ if it contains a state satisfying $\varphi_2$ and at any moment before, $\varphi_1$ holds and all visible actions satisfy $\alpha_1$. A state satisfies $E(\varphi_1\,_{\alpha_1}U_{\alpha_2}\,\varphi_2)$ (resp. $E(\varphi_1\,_{\alpha_1}U\,\varphi_2)$) if it has an outgoing sequence satisfying $\varphi_1\,_{\alpha_1}U_{\alpha_2}\,\varphi_2$ (resp. $\varphi_1\,_{\alpha_1}U\,\varphi_2$). It satisfies $A(\varphi_1\,_{\alpha_1}U_{\alpha_2}\,\varphi_2)$ (resp. $A(\varphi_1\,_{\alpha_1}U\,\varphi_2)$) if all its outgoing sequences satisfy the corresponding path formula. The following abbreviations are often used:
+
+$$EF_{\alpha}(\varphi_0) = E(\mathrm{true}\,_{\mathrm{true}}U_{\alpha}\,\varphi_0) \qquad AG_{\alpha}(\varphi_0) = \neg EF_{\neg\alpha}(\mathrm{true}) \land \neg E(\mathrm{true}\,_{\mathrm{true}}U\,\neg\varphi_0)$$
+
+A state satisfies $EF_\alpha(\varphi_0)$ if it has an outgoing sequence leading to a transition whose action satisfies $\alpha$ and target state satisfies $\varphi_0$. A state satisfies $AG_\alpha(\varphi_0)$ if none of its outgoing sequences leads to a transition labeled by an action not satisfying $\alpha$ or to a state not satisfying $\varphi_0$.
+
+Computation Tree Logic The logic CTL [5] contains the following operators:
+
+$$E(\varphi_1 U \varphi_2), A(\varphi_1 U \varphi_2), E(\varphi_1 W \varphi_2), A(\varphi_1 W \varphi_2), EF(\varphi_0), AG(\varphi_0), AF(\varphi_0), EG(\varphi_0)$$
+
+A state satisfies $E(\varphi_1 U \varphi_2)$ (resp. $A(\varphi_1 U \varphi_2)$) if some of (resp. all) its outgoing sequences lead to states satisfying $\varphi_2$ after passing only through states satisfying $\varphi_1$. It satisfies $E(\varphi_1 W \varphi_2)$ (resp. $A(\varphi_1 W \varphi_2)$) if some of (resp. all) its outgoing sequences either contain only states satisfying $\varphi_1$, or lead to states satisfying $\varphi_2$ after passing only through states satisfying $\varphi_1$. A state satisfies $EF(\varphi_0)$ (resp. $AF(\varphi_0)$) if some of (resp. all) its outgoing sequences lead to states satisfying $\varphi_0$. A state satisfies $EG(\varphi_0)$ (resp. $AG(\varphi_0)$) if some of (resp. all) its outgoing sequences contain only states satisfying $\varphi_0$.
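For instance (our sketch, not from the paper), $E(\varphi_1\,U\,\varphi_2)$ corresponds to the least fixed point $\mu X.\varphi_2 \lor (\varphi_1 \land \langle \mathrm{true}\rangle X)$ and can be computed by backward iteration over the transitions:

```python
# E(phi1 U phi2) on an explicit LTS: least fixed point of
# X = phi2 \/ (phi1 /\ <true> X), computed by backward iteration.

def ctl_eu(transitions, phi1, phi2):
    U = set(phi2)  # states already satisfying phi2
    changed = True
    while changed:
        changed = False
        for (p, a, q) in transitions:
            if p in phi1 and q in U and p not in U:
                U.add(p)
                changed = True
    return U

# 0 and 1 satisfy phi1 and lead to 2 (satisfying phi2); 3 satisfies neither
T = {(0, "a", 1), (1, "a", 2), (3, "a", 0)}
sat = ctl_eu(T, phi1={0, 1}, phi2={2})
```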
+
+## 2.3 Compositional property-dependent LTS reductions
+
+Given a formula $\varphi \in L_\mu$ and a composition of processes $P_1 || \dots || P_n$, [24] shows two results that allow an LTS equivalent to $P_1 || \dots || P_n$ to be reduced compositionally, while preserving the truth value of $\varphi$. The first result is a procedure,
+---PAGE_BREAK---
+
+called maximal hiding, which extracts systematically from $\varphi$ a set of actions $H(\varphi)$ that are not discriminated by any action formula occurring in $\varphi$. It is shown that $P_1 || \dots || P_n \models \varphi$ if and only if **hide** $H(\varphi)$ **in** $(P_1 || \dots || P_n) \models \varphi$. The second result is the identification of a fragment of $L_\mu$, called $L_\mu^{\text{dsbr}}$, which is strictly more expressive than $\mu\text{ACTL}\backslash X$⁵ and adequate with divbranching bisimulation. This fragment is defined as follows.
+
+**Definition 7 (Modal μ-calculus fragment $L_{\mu}^{\text{dsbr}}$ [24]).** By convention, we use the symbols $\alpha_{\tau}$ and $\alpha_{a}$ to denote action formulas such that $\tau \in [\alpha_{\tau}]_{A}$ and $\tau \notin [\alpha_{a}]_{A}$. The fragment $L_{\mu}^{\text{dsbr}}$ of $L_{\mu}$ is defined as the set of formulas that are semantically equivalent to some formula of the following language:
+
+$$
+\begin{array}{lcll}
+\varphi & ::= & \mathbf{false} \mid \varphi_1 \lor \varphi_2 \mid \neg \varphi_0 \mid X \mid \mu X.\varphi_0 & \\
+ & \mid & \langle (\varphi_1?.\alpha_{\tau})^* \rangle \varphi_2 \mid \langle (\varphi_1?.\alpha_{\tau})^*.\varphi_1?.\alpha_a \rangle \varphi_2 & \textit{(weak modalities)} \\
+ & \mid & \langle \varphi_1?.\alpha_{\tau} \rangle @ & \textit{(weak infinite looping)}
+\end{array}
+$$
+
+Depending on the $L_\mu$ fragment $\varphi$ belongs to, it is thus possible to determine whether the system can or cannot be reduced for divbranching bisimulation.
+
+## 3 Combining Bisimulations Compositionally
+
+The above approach is a *mono-bisimulation* approach: either the formula is in $L_{\mu}^{\text{dsbr}}$ and then the system is entirely reduced for divbranching bisimulation, or it is not and then the system is entirely reduced for strong bisimulation. We show in this section that, even if the formula is not in $L_{\mu}^{\text{dsbr}}$, it may still be possible to reduce some processes among the parallel processes $P_1, ..., P_n$ for divbranching instead of strong bisimulation. This approach relies on the fact that, in general, an arbitrary temporal logic formula $\varphi$ may be rewritten in a form that contains both weak modalities, as those present in $L_{\mu}^{\text{dsbr}}$, and non-weak modalities of $L_{\mu}$ (called *strong modalities* in this context).
+
+To do so, we characterize a family of fragments of $L_\mu$, each of which is written $L_\mu^{\text{strong}}(A_s)$, where $A_s$ is the set of actions that can be matched by strong modalities. We then prove that if $\varphi$ belongs to $L_\mu^{\text{strong}}(A_s)$ and some process $P_i$ does not contain any action from the set $A_s$, then $P_i$ can be reduced for divbranching bisimulation. Throughout this section, we assume that the concurrent system $P_1 || ... || P_n$ is fixed, and we write $A$ for the set of actions occurring in the system.
+
+⁵ $\mu$ACTL\X denotes ACTL\X extended with fixed point operators. The authors of [24] claim that $L_\mu^{\text{dsbr}}$ is as expressive as $\mu$ACTL\X, but they omit that the $\langle \varphi_1?.\alpha_{\tau} \rangle @$ weak infinite looping modality cannot be expressed in $\mu$ACTL\X.
+---PAGE_BREAK---
+
+## 3.1 The $L_{\mu}^{strong}(A_s)$ fragments of $L_{\mu}$
+
+**Definition 8** ($L_{\mu}^{strong}(A_s)$). Let $A_s \subseteq A$ be a fixed set of actions, called strong actions, and let $\alpha_s$ denote any action formula such that $[\alpha_s]_A \subseteq A_s$, called a strong action formula. The fragment $L_{\mu}^{strong}(A_s)$ of $L_{\mu}$ is defined as the set of formulas that are semantically equivalent to some formula of the following language:
+
+$$ \varphi ::= \mathbf{false} \mid \varphi_1 \lor \varphi_2 \mid \neg \varphi_0 \mid \langle \alpha_s \rangle \varphi_0 \mid X \mid \mu X.\varphi_0 \\ | \quad \langle (\varphi_1?.\alpha_{\tau})^* \rangle \varphi_2 \mid \langle ((\varphi_1?.\alpha_{\tau})^*.\varphi_1?.\alpha_a) \rangle \varphi_2 \mid \langle \varphi_1?.\alpha_{\tau} \rangle @ $$
+
+We call $\langle\alpha_s\rangle \varphi_0$ a strong modality. $L_{\mu}^{strong}(A_s)$ is the fragment of $L_{\mu}$ consisting of formulas expressible in a form where strong modalities match only actions in $A_s$. Its formal relationship with $L_{\mu}^{dsbr}$ and $L_{\mu}$ is given in Theorem 1.
+
+**Theorem 1.** *The following three propositions hold trivially: $L_{\mu}^{strong}(\emptyset) = L_{\mu}^{dsbr}$, $L_{\mu}^{strong}(A) = L_{\mu}$, and if $A_s \subset A'_s$ then $L_{\mu}^{strong}(A_s) \subset L_{\mu}^{strong}(A'_s)$.*
+
+Given $\varphi \in L_{\mu}$, there exists a (not necessarily unique, see Theorem 3 page 10) minimal set $A_s$ such that $\varphi \in L_{\mu}^{strong}(A_s)$. Obviously, $L_{\mu}^{strong}(A_s)$ is not adequate with divbranching bisimulation when $A_s$ is not empty, as illustrated by the following example.
+
+*Example 1.* Consider the LTS $P$, $P'$, $Q$, and $Q'$ depicted in Figure 1. $P'$ (resp. $Q'$) denotes the minimal LTS equivalent to $P$ (resp. $Q$) for divbranching bisimulation. The formula $\varphi = [(\mathrm{true}?.\mathrm{true})^*.\mathrm{true}?.a_1]\,[a_2]\,\mathbf{false}$ of $L_{\mu}^{\text{strong}}(\{a_2\})$ (which is equivalent to the PDL formula $[\mathrm{true}^*.a_1.a_2]\,\mathbf{false}$) expresses that the system does not contain two successive transitions labelled by $a_1$ and $a_2$ respectively. $\varphi$ does not belong to $L_{\mu}^{\text{dsbr}}$. Indeed, $P\,|[a_1]|\,Q$ satisfies $\varphi$ because $a_1$ is necessarily followed by a $\tau$ transition, but $P'\,|[a_1]|\,Q'$ (which is isomorphic to $Q'$) does not.
+
+**Fig. 1.** LTS used in Examples 1 and 2
+---PAGE_BREAK---
+
+## 3.2 Applying divbranching bisimulation to selected components
+
+The following theorem states the main result of this paper, namely that every component process containing no strong action can be replaced by any divbranching equivalent process, without affecting the truth value of the formula⁶.
+
+**Theorem 2.** Let $P = (\Sigma_P, A_P, \rightarrow_P, p_{init})$, $Q = (\Sigma_Q, A_Q, \rightarrow_Q, q_{init})$, $Q' = (\Sigma_{Q'}, A_{Q'}, \rightarrow_{Q'}, q'_{init})$, $A_{sync} \subseteq \mathcal{A}$, and $\varphi \in L_{\mu}^{\text{strong}}(A_s)$. If $A_Q \cap A_s = \emptyset$ and $Q \sim_{dsbr} Q'$, then $P|[A_{sync}]|Q \models \varphi$ if and only if $P|[A_{sync}]|Q' \models \varphi$.
+
+*Proof.* The proof is similar to the one in [24], which shows that divbranching bisimulation preserves the properties of $L_{\mu}^{\text{dsbr}}$, but the reasoning concerns product states and additionally handles the case of strong modalities. □
+
+Note that $\tau$ may belong to $A_s$. If so, no $P_i$ containing $\tau$ can be reduced for divbranching bisimulation. Conversely, processes that contain no strong action do not contain $\tau$; reducing them for divbranching bisimulation is thus allowed, but coincides with strong bisimulation reduction. In the end, all $\tau$-transitions of the system are preserved, as required for preserving the truth value of formulas whose strong modalities match $\tau$.
+
+*Example 2.* In Example 1, $P$ does not contain $a_2$, the only strong action of the system. Thus, $\varphi$ can be checked on $P'|[a_1]|Q$ (which is isomorphic to $Q$ and has only 3 states) instead of $P|[a_1]|Q$ (6 states), while preserving its truth value.
+
+Theorem 2 is consistent with Andersen's *partial model checking* [1] and the mono-bisimulation approach [24]. Given $P||Q$ such that the strong actions of $\varphi$ occur only in $P$, one can observe that the quotient $\varphi//P$ (defined in [1, 21]) belongs to $L_{\mu}^{\text{dsbr}}$, because quotienting removes all strong modalities, leaving only weak modalities in the quotiented formula. It follows that $Q$, on which $\varphi//P$ has to be checked, can be reduced for divbranching bisimulation. This observation could serve as an alternative proof of Theorem 2.
+
+## 3.3 Identifying strong actions in derived operators
+
+In the general case, identifying a minimal set of strong actions is not easy, if feasible at all. One cannot reasonably assume that formulas are written in the rather obscure $L_{\mu}^{\text{strong}}(A_s)$ syntax (see Ex. 1), nor that the remaining strong modalities cannot be turned into weak ones. Instead, users will continue to use "syntactic sugar" extensions of $L_{\mu}$, with operators of, e.g., CTL, ACTL, PDL, or PDL-$\Delta$. In Lemma 1, we provide patterns that can be used to prove that a formula written using one of those operators belongs to a particular instance of $L_{\mu}^{\text{strong}}(A_s)$.
+
+**Lemma 1.** Let $\varphi_0, \varphi_1, \varphi_2 \in L_{\mu}^{\text{strong}}(A_s)$, $\tau \in [\alpha_{\tau}]_A$, $\tau \notin [\alpha_a]_A$, $[\alpha_s]_A \subseteq A_s$, and $\alpha_0, \alpha_1, \alpha_2$ be any action formulas. The following formulas belong to $L_{\mu}^{\text{strong}}(A_s)$ (the list may be not exhaustive):
+
+⁶ Theorem 2 generalizes easily to more general compositions $P||Q$ (with admissible action mappings) if $Q$ does not contain any action that maps onto a strong action.
+---PAGE_BREAK---
+
+1. *Modal μ-calculus:*
+
+$$ \langle \alpha_s \rangle \varphi_0 \quad [\alpha_s] \varphi_0 \quad \neg \varphi_0 \quad \varphi_1 \lor \varphi_2 \quad \varphi_1 \land \varphi_2 \quad \varphi_1 \Rightarrow \varphi_2 $$
+
+2. *Propositional Dynamic Logic:*
+
+$$ \langle\alpha_{\tau}^{*}\rangle \varphi_{0} \quad [\alpha_{\tau}^{*}] \varphi_{0} \quad \langle\alpha_{\tau}^{*} \cdot \alpha_{a}\rangle \varphi_{0} \quad [\alpha_{\tau}^{*} \cdot \alpha_{a}] \varphi_{0} \quad \langle\alpha_{\tau}\rangle @ \quad [\alpha_{\tau}]\,\dashv $$
+
+3. *Action Computation Tree Logic:*
+
+$$
+\begin{array}{l@{\hspace{3em}}l@{\hspace{3em}}l}
+A(\varphi_1\,_{\alpha_1}U\,\varphi_2) & A(\varphi_1\,_{\alpha_1}U_{\alpha_2}\,\varphi_2) & AG_{\alpha_0}(\varphi_0) \\
+E(\varphi_1\,_{\alpha_1}U\,\varphi_2) & E(\varphi_1\,_{\alpha_1}U_{\alpha_2}\,\varphi_2) & EF_{\alpha_0}(\varphi_0)
+\end{array}
+$$
+
+4. *Computation Tree Logic:*
+
+$$
+\begin{array}{llll}
+A(\varphi_1\,U\,\varphi_2) & A(\varphi_1\,W\,\varphi_2) & AG(\varphi_0) & AF(\varphi_0) \\
+E(\varphi_1\,U\,\varphi_2) & E(\varphi_1\,W\,\varphi_2) & EF(\varphi_0) & EG(\varphi_0) \\
+A([\alpha_a] \varphi_1\,U\,\varphi_2) & A([\alpha_a] \varphi_1\,W\,\varphi_2) & AG([\alpha_a] \varphi_0) & EF(\langle\alpha_a\rangle \varphi_0) \\
+AG(\varphi_1 \lor [\alpha_a] \varphi_2) & EF(\varphi_1 \land \langle\alpha_a\rangle \varphi_2) & & \\
+\end{array}
+$$
+
+*Example 3.* Let $a_1, a_2$, and $a_3$ be visible actions and recall that $A$ denotes the set of all actions of the system. Lemma 1 allows the following to be shown (this is left as an exercise):
+
+$$
+\begin{align*}
+& \langle \mathrm{true}^* . a_1 . (\neg a_2)^* . a_3 \rangle\, \mathrm{true} \in L_{\mu}^{\text{strong}}(\emptyset) && [\mathrm{true}]\, \mathrm{false} \in L_{\mu}^{\text{strong}}(A) \\
+& A(\langle a_1 \rangle\, \mathrm{true}\ _{\neg a_2}U_{a_3}\ \mathrm{true}) \in L_{\mu}^{\text{strong}}(\{a_1\}) && AG([a_1]\, \mathrm{false}) \in L_{\mu}^{\text{strong}}(\emptyset) \\
+& E(\mathrm{true}\ _{\mathrm{true}}U_{\tau}\ \mathrm{true}) \in L_{\mu}^{\text{strong}}(A) && \langle a_1^* . a_2 \rangle\, \mathrm{true} \in L_{\mu}^{\text{strong}}(\{a_1, a_2\}) \\
+& E(\mathrm{true}\ _{\mathrm{true}}U_{\tau}\ \mathrm{true}) \in L_{\mu}^{\text{strong}}(\{\tau\}) && [a_1 . a_2]\, \mathrm{false} \in L_{\mu}^{\text{strong}}(\{a_1, a_2\}) \\
+& EF(\langle a_1 \rangle\, \mathrm{true} \land \langle a_2 \rangle\, \mathrm{true}) \in L_{\mu}^{\text{strong}}(\{a_1\}) && EF(\langle a_1 \rangle \langle a_2 \rangle\, \mathrm{true}) \in L_{\mu}^{\text{strong}}(\{a_2\}) \\
+& EF(\langle a_1 \rangle\, \mathrm{true} \land \langle a_2 \rangle\, \mathrm{true}) \in L_{\mu}^{\text{strong}}(\{a_2\}) && EF(\langle\neg a_1\rangle\, \mathrm{true}) \in L_{\mu}^{\text{strong}}(A \setminus \{a_1\})
+\end{align*}
+$$
+
+**Theorem 3.** There is not a unique minimal set $A_s$ such that $\varphi \in L_\mu^{strong}(A_s)$.
+
+*Proof.* $EF(\langle a_1\rangle\, \mathrm{true} \land \langle a_2\rangle\, \mathrm{true}) \in L_\mu^{\text{strong}}(\{a_1\}) \cap L_\mu^{\text{strong}}(\{a_2\})$, because it is semantically equivalent to both $EF(\langle (\langle a_1\rangle\, \mathrm{true}?.\mathrm{true})^*.\langle a_1\rangle\, \mathrm{true}?.a_2\rangle\, \mathrm{true})$ and $EF(\langle (\langle a_2\rangle\, \mathrm{true}?.\mathrm{true})^*.\langle a_2\rangle\, \mathrm{true}?.a_1\rangle\, \mathrm{true})$. Yet, it is not in $L_\mu^{\text{strong}}(\emptyset)$, as it does not have the same truth value on the divbranching equivalent LTS $P$ and $P'$ below:
+
+Thus, {$a_1$} and {$a_2$} are non-unique minimal sets of strong actions. $\square$
+
+## 4 Applications
+
+We consider two examples to illustrate our new verification approach combining strong and divbranching bisimulation and show how it can reduce both time and memory usage when associated to the smart reduction heuristic. In both examples, the aim is to perform a set of verification tasks, each consisting in checking a formula $\varphi$ on a system of parallel processes $P_1 || ... || P_n$. Since our approach can only improve the verification of formulas containing both strong and weak modalities, we consider only the pairs of formulas and systems such
+---PAGE_BREAK---
+
+that the formula is part of $L_{\mu}^{strong}(A_s)$ for some minimal $A_s$ that is not empty and that is strictly included in the set of visible actions of the system$^7$. For each verification task, we compare the largest LTS size, the verification time, and the memory peak obtained using the following two approaches:
+
+**Mono-bisimulation approach:** $\varphi$ is verified on **hide H**($\varphi$) **in** ($P_1 || \dots || P_n$) (where $H(\varphi)$ is the maximal hiding set mentioned in Sect. 2.3) reduced compositionally for strong bisimulation (since $\varphi$ is not in $L_{\mu}^{dsbr}$) using the smart reduction heuristic.
+
+**Refined approach combining bisimulations:** The set $\{P_1, \dots, P_n\}$ is partitioned into two groups $\mathcal{P}_s$ and $\mathcal{P}_w$ such that $P_i \in \mathcal{P}_s$ if it contains actions in $A_s$ and $P_i \in \mathcal{P}_w$ otherwise, so that $P_1 || \dots || P_n$ can be rewritten in the equivalent form $(||_{P_i \in \mathcal{P}_s} P_i) \,||\, (||_{P_j \in \mathcal{P}_w} P_j)$. The set $A_I$ of actions on which at least one process of $\mathcal{P}_s$ and one process of $\mathcal{P}_w$ synchronize (inter-group synchronization) is then identified. Using the smart reduction heuristic, **hide** $H(\varphi) \setminus A_I$ **in** $||_{P_i \in \mathcal{P}_s} P_i$ (the processes containing strong actions) is reduced compositionally for strong bisimulation, leading to a first LTS $P_s$, and **hide** $H(\varphi) \setminus A_I$ **in** $||_{P_j \in \mathcal{P}_w} P_j$ (the processes containing no strong action) is reduced compositionally for divbranching bisimulation, leading to a second LTS $P_w$. Finally, $\varphi$ is verified on **hide** $H(\varphi) \cap A_I$ **in** $(P_s \,|[A_I]|\, P_w)$ reduced for strong bisimulation.
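The grouping step can be sketched as follows (our illustration, not SVL syntax; the process names and alphabets are hypothetical, and $A_I$ is approximated here as the actions shared by the alphabets of the two groups, whereas in practice it is derived from the composition expression):

```python
# Partition processes into a strong group (alphabet intersects the strong
# actions A_s) and a weak group, and compute the inter-group set A_I (sketch).

def partition(process_alphabets, strong_actions):
    strong_group, weak_group = [], []
    for name, alphabet in process_alphabets.items():
        (strong_group if alphabet & strong_actions else weak_group).append(name)
    def union(group):
        return set().union(*(process_alphabets[p] for p in group)) if group else set()
    return strong_group, weak_group, union(strong_group) & union(weak_group)

# Hypothetical system: only P1 contains the strong action a2; P1 shares
# action b with the weak group, so A_I = {"b"}.
alphabets = {"P1": {"a2", "b"}, "P2": {"b", "c"}, "P3": {"c", "tau"}}
strong_grp, weak_grp, A_I = partition(alphabets, {"a2"})
```

The strong group would then be reduced for strong bisimulation and the weak group for divbranching bisimulation, as described above.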
+
+All experiments were done on a 3GHz/12GB RAM/8-core Intel Xeon computer running Linux, using the specification languages and 32-bit versions of tools provided in the CADP toolbox [12] version 2019-a "Pisa".
+
+## 4.1 Trivial File Transfer Protocol
+
+The TFTP (*Trivial File Transfer Protocol*) case-study$^8$ addresses the verification of an avionic communication protocol between a plane and the ground [13]. It comprises two instances (A and B) of a process named TFTP, connected through a FIFO buffer. Since the state space is very large in the general case, the authors defined five scenarios named A, B, C, D, and E, depending on whether each instance may write and/or read a file. The system corresponding to each scenario is a parallel composition of eight processes. The requirements consist of 29 properties parameterized by the identity of a TFTP instance, defined in MCL [23] (an implementation of the alternation-free modal $\mu$-calculus including PDL-$\Delta$ modalities and macro definitions enabling the construction of libraries of operators), 24 of which belong to $L_{\mu}^{dsbr}$. The remaining five, namely properties 08, 09, 14, 16, and 17, contain both weak and strong modalities. The shape of these properties is described in Table 1, where we do not provide the details of the action formulas, but instead denote them by letters $a_1, a_2, \dots$, where $\tau \notin [[a_i]]_A$
+
+$^7$ Otherwise, our approach coincides with the mono-bisimulation approach of [24]. In all the examples addressed in this section, there is always a unique minimal set $A_s$, whose identification is made easy using Lemma 1.
+
+$^8$ Specification available at ftp://ftp.inrialpes.fr/pub/vasy/demos/demo_05
+---PAGE_BREAK---
+
+| Nr. | Property |
+|---|---|
+| 08 | [true* · a1 · a2] false |
+| 09 | [true* · a1 · a2 · ((a3 · (¬a4)* · a5) \| (a6 · (¬a7)* · a8))] false |
+| 14 | [true* · a1 · a2 · (¬a3)* · a4 · a5] false |
+| 16 | [(¬a1)* · a2 · (¬a3)* · a4] <((¬a5)* · a6 · a7) · ((¬a5)* · a6 · a7)> true |
+| 17 | Same shape as property Nr. 16 |
+
+**Table 1.** TFTP properties (strong action formulas are highlighted)
+
+for all *i*. Strong action formulas are highlighted, and one easily shows, using Lemma 1-2, that the others are weak.
+
+We consider 31 among a potential of 50 verification tasks (five properties, five scenarios, and two instances) as some properties are not relevant to every TFTP instance and scenario (e.g., in a scenario where one TFTP instance only receives messages, checking a property concerning a message emission is irrelevant). All 31 verification tasks return **true** and the strong actions occur in only three (although not the same three) out of the eight parallel processes.
+
+**Fig. 2.** Experimental results of the TFTP case-study
+
+Figure 2 shows that the refined approach always reduces LTS size (for both intermediate and final LTS), memory and time following similar curves, up to a factor 7 (the vertical axis is on a logarithmic scale). Time does not include LTS generation of the component processes from their LNT specification, which takes only a few seconds and is common to both approaches. In these experiments, time is dominated by the last step of generation and minimization, whereas memory usage is dominated by minimization.
+---PAGE_BREAK---
+
+| Nr. | Property | Result |
+|---|---|---|
+| 101#21 | AG([A21] [A23] [A4] true false) | false |
+| 101#22 | AG([A3] AF(<A2> true)) | false |
+| 101#23 | AG(<A20> true → <A20> A([A23] false W <A8> true)) | true |
+| 102#21 | EF(AG([A5] false)) | true |
+| 102#22 | EG([A35] E([A23] false U <A35> true)) | false |
+| 102#23 | AG([A22] A([A8] false U <A22> true)) | false |
+| 103#21 | AG([A11] A([A2] false W <A6> true) → [A11] A([A5] false W <A6> true)) | true |
+| 103#22 | EG([A14] false ∧ (<A18> true → [A18] EG([A21] false ∧ EF(<A19> true)))) = EG([A14] false ∧ [A18] EG([A21] false ∧ EF(<A19> true))) | true |
+| 103#23 | AG(<A34> true → [A34] A([A68] false W <A59> true)) = AG([A34] A([A68] false W <A59> true)) | false |
+
+**Table 2.** RERS 2018 properties (strong action formulas are highlighted)
+
+## 4.2 Parallel benchmark of the RERS 2018 challenge
+
The RERS (Rigorous Examination of Reactive Systems)⁹ challenge is an international competition on a benchmark of verification tasks. Since 2018 (8th edition), the challenge features a set of parallel problems, where systems are compositions of synchronizing LTSs and properties are expressed in CTL with action modalities. This section illustrates the benefits of our approach on these problems.
+
The benchmark comprises three specifications of concurrent systems, numbered 101, 102, and 103, each accompanied by three properties to be checked, numbered *p*#21, *p*#22, and *p*#23, where *p* is the system number. Thus, nine verification tasks have to be solved. The properties are presented in Table 2, where the strong action formulas are highlighted. One easily shows that all other action formulas are weak using Lemmas 1-1 and 1-4. However, for 103\#22 and 103\#23, the identity $(\langle\alpha\rangle \mathrm{true} \Rightarrow [\alpha]\,\varphi) = ([\alpha]\,\mathrm{false} \lor [\alpha]\,\varphi) = [\alpha]\,\varphi$ (because $[\alpha]\,\mathrm{false} \Rightarrow [\alpha]\,\varphi$ for all $\varphi$) was applied to obtain the simplified formulas occurring after the = sign in the table. For 103\#23, this simplification allowed us to prove that A34 is not a strong action, unlike what appears at first sight.
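Spelled out step by step, using only the standard propositional and modal dualities, the simplification reads:

```latex
\begin{align*}
(\langle\alpha\rangle \mathrm{true} \Rightarrow [\alpha]\,\varphi)
  &= \neg\langle\alpha\rangle \mathrm{true} \vee [\alpha]\,\varphi
     && \text{by } \varphi_1 \Rightarrow \varphi_2 = \neg\varphi_1 \vee \varphi_2 \\
  &= [\alpha]\,\neg\mathrm{true} \vee [\alpha]\,\varphi
     && \text{by } \neg\langle\alpha\rangle\varphi_0 = [\alpha]\,\neg\varphi_0 \\
  &= [\alpha]\,\mathrm{false} \vee [\alpha]\,\varphi
     && \text{by } \neg\mathrm{true} = \mathrm{false} \\
  &= [\alpha]\,\varphi
     && \text{since } [\alpha]\,\mathrm{false} \Rightarrow [\alpha]\,\varphi
\end{align*}
```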
+
Table 3 gives, for each of the nine verification tasks, the number \#act of actions in the system, the number \#hide of actions in the maximal hiding set, the number \#sact of strong actions, the number \#proc of parallel processes, the number \#sproc of processes in the strong group, the number \#sync of inter-group actions, and the best reduction relation among strong bisimulation, divbranching bisimulation, or a combination of both. We observe that:
+
- The set of weak actions of 101\#21 is empty due to the presence of the “true” strong action formula, whereas the set of strong actions of 102\#21 is empty, i.e., the property belongs to $L_{\mu}^{dsbr}$. In both cases, our approach coincides with the mono-bisimulation approach. The verification of 101\#21 (reduced for strong bisimulation) takes 75 seconds, with a memory peak of 11 MB and a largest LTS of 83,964 states and 374,809 transitions. The verification of 102\#21 (reduced for divbranching bisimulation) takes 261 seconds, with a memory peak of 22 MB and a largest LTS of 243 states and 975 transitions.

- 101\#22, 101\#23, 102\#22, 102\#23, 103\#21, 103\#22, and 103\#23 contain both weak and strong actions. They are used to evaluate our approach.

⁹ http://rers-challenge.org

| Task | #act | #hide | #sact | #proc | #sproc | #sync | relation |
|---|---|---|---|---|---|---|---|
| 101#21 | 24 | 21 | 24 | 9 | 9 | - | strong |
| 101#22 | 24 | 22 | 1 | 9 | 4 | 11 | combination |
| 101#23 | 24 | 21 | 2 | 9 | 3 | 9 | combination |
| 102#21 | 28 | 27 | 0 | 20 | 0 | - | divbranching |
| 102#22 | 28 | 26 | 2 | 20 | 10 | 14 | combination |
| 102#23 | 28 | 26 | 1 | 20 | 4 | 12 | combination |
| 103#21 | 70 | 66 | 2 | 34 | 8 | 12 | combination |
| 103#22 | 70 | 66 | 3 | 34 | 6 | 18 | combination |
| 103#23 | 70 | 67 | 1 | 34 | 7 | 10 | combination |

**Table 3.** Some numbers about the RERS 2018 parallel benchmark
+
+Table 4 compares the performance of verifying the latter seven verification tasks using the approaches described above. LTS sizes are given in kilostates, memory in megabytes, and time in seconds. Tasks using more than 3 GB of memory were aborted. We see that our approach reduces both time and memory usage and allows all problems of the challenge to be solved, whereas using strong bisimulation alone fails in five out of those seven tasks.
+
| Task | strong: largest (Kstates) | strong: final (Kstates) | strong: verif. (MB) | strong: time (sec.) | comb.: largest (Kstates) | comb.: final (Kstates) | comb.: verif. (MB) | comb.: time (sec.) |
|---|---|---|---|---|---|---|---|---|
| 101#22 | 84 | 77 | 10 | 77 | 1.4 | 1.4 | 10 | 72 |
| 101#23 | 84 | 77 | 11 | 80 | 0.5 | 0.5 | 8 | 73 |
| 102#22 | - | - | - | - | 611 | 585 | 57 | 295 |
| 102#23 | - | - | - | - | 17 | 9.8 | 22 | 260 |
| 103#21 | - | - | - | - | 734 | 313 | 101 | 604 |
| 103#22 | - | - | - | - | 14,143 | 14,141 | 1575 | 2533 |
| 103#23 | - | - | - | - | 122 | 122 | 35 | 566 |
+
+**Table 4.** Experimental results of the RERS 2018 parallel benchmark
+
The negligible reductions in time and memory usage observed for tasks 101#22 and 101#23 are due to the fact that both are dominated by the algorithm in charge of selecting a subset of processes to be composed and reduced (implemented in smart reduction). The complexity of this algorithm does not depend on the state space size, but on the number of actions and parallel processes, which is almost the same using both approaches. When considering larger examples, memory usage gets dominated by minimization. In particular, for tasks 102\#22, 102\#23, 103\#21, and 103\#23 (and likely also 103\#22), memory usage is reduced by several orders of magnitude.
+
Note that some of these tasks can be verified more efficiently using non-compositional approaches, such as on-the-fly model checking, when a proof or counter-example can be found well before the full state space has been explored. The main drawback of maximal hiding is that the generated counter-examples are expressed only in terms of the actions visible in the formula, which abstracts away many intermediate transitions. However, this is the price to pay for being able to verify most of the tasks, such as 103\#21, for which on-the-fly verification aborts due to memory exhaustion.
+
+## 5 Conclusion and Future Work
+
+In this paper, we proposed a compositional verification approach that extends the state of the art [24] and consists of three steps: First, so-called strong actions are identified, corresponding to those actions of the system that the formula cannot match using weak modalities in the sense of the $L_μ$ fragment $L_μ^{dsbr}$ adequate with divbranching bisimulation. These actions are used to partition the parallel processes into those containing strong actions and the others. Second, maximal hiding and compositional reduction are used to minimize the composition of processes not containing strong actions for divbranching bisimulation, and the other processes for strong bisimulation. Finally, the property is verified on the reduced system.
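As a minimal executable sketch of the first step (with hypothetical process names and alphabets; the actual implementation in CADP proceeds differently), the partition of parallel processes induced by a set of strong actions can be computed as follows:

```python
# Sketch of the partitioning step: given the strong actions A_s of the
# formula, split the parallel processes into the "strong" group (to be
# reduced for strong bisimulation) and the "weak" group (to be maximally
# hidden and reduced for divbranching bisimulation).

def partition(process_alphabets, strong_actions):
    """process_alphabets: dict mapping process name -> set of its actions."""
    strong_group = {p for p, alphabet in process_alphabets.items()
                    if alphabet & strong_actions}
    weak_group = set(process_alphabets) - strong_group
    return strong_group, weak_group

# Hypothetical system: only P1 and P3 contain the strong action "a".
alphabets = {"P1": {"a", "b"}, "P2": {"b", "c"}, "P3": {"a", "d"}, "P4": {"e"}}
strong, weak = partition(alphabets, {"a"})
# strong == {"P1", "P3"}, weak == {"P2", "P4"}
```

A process lands in the strong group as soon as its alphabet intersects $A_s$, which mirrors the partition criterion described above.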
+
The originality of this approach is to combine strong and divbranching bisimulation, as opposed to the mono-bisimulation approach of [24]. We proved it correct by characterizing a family of fragments of the logic $L_μ$, called $L_μ^{strong}(A_s)$, parameterized by the set $A_s$ of strong actions. We also showed under which conditions action-based branching-time temporal logic formulas containing well-known operators from the logics CTL, ACTL, PDL, and PDL-$Δ$ are part of $L_μ^{strong}(A_s)$ when $A_s$ is fixed. In the future, it might be worth investigating whether more operators can be considered, e.g., from the linear-time logic LTL.
+
+This approach may significantly improve the verification performance for systems containing both processes with and without strong actions, as illustrated by two case-studies. In particular, it allowed the whole parallel CTL benchmark of the RERS 2018 challenge to be solved on a standard computer.
+
Manually identifying (close to minimal) sets of strong actions for arbitrary formulas is cumbersome and error-prone. We shall investigate ways to compute such sets automatically. As illustrated by verification task 103\#23 of RERS 2018, the problem is not purely syntactic: considering non-trivial semantic equivalences may prove useful to eliminate actions that appear strong at first sight. Yet, we trust that the presented approach has the potential to be implemented in automated software tools, such as those available in the CADP toolbox.
+
+References
+
+1. Henrik Reif Andersen. Partial Model Checking. In *Proceedings of the 10th Annual IEEE Symposium on Logic in Computer Science LICS (San Diego, California, USA)*, pages 398–407. IEEE Computer Society Press, June 1995.
+
+2. Stephen D. Brookes, C. A. R. Hoare, and A. W. Roscoe. A Theory of Communicating Sequential Processes. *Journal of the ACM*, 31(3):560–599, July 1984.
+
+3. David Champelovier, Xavier Clerc, Hubert Garavel, Yves Guerte, Christine McKinty, Vincent Powazny, Frédéric Lang, Wendelin Serwe, and Gideon Smeding. Reference Manual of the LNT to LOTOS Translator (Version 6.7). INRIA, Grenoble, France, July 2017.
+
+4. S. C. Cheung and J. Kramer. Enhancing Compositional Reachability Analysis with Context Constraints. In *Proceedings of the 1st ACM SIGSOFT International Symposium on the Foundations of Software Engineering (Los Angeles, CA, USA)*, pages 115–125. ACM Press, December 1993.
+
+5. E. M. Clarke, E. A. Emerson, and A. P. Sistla. Automatic Verification of Finite-State Concurrent Systems using Temporal Logic Specifications. *ACM Transactions on Programming Languages and Systems*, 8(2):244–263, April 1986.
+
+6. Pepijn Crouzen and Frédéric Lang. Smart Reduction. In Dimitra Giannakopoulou and Fernando Orejas, editors, *Proceedings of Fundamental Approaches to Software Engineering (FASE'11), Saarbrücken, Germany*, volume 6603 of *Lecture Notes in Computer Science*, pages 111–126. Springer, March 2011.
+
+7. Sander de Putter, Anton Wijs, and Frédéric Lang. Compositional Model Checking is Lively — Extended Version, 2018. Submitted to Science of Computer Programming.
+
+8. Alessandro Fantechi, Stefania Gnesi, and Gioia Ristori. From ACTL to $\mu$-calculus (extended abstract). In *Proceedings of the Workshop on Theory and Practice in Verification*. ERCIM, 1992.
+
+9. Michael J. Fischer and Richard E. Ladner. Propositional Dynamic Logic of Regular Programs. *Journal of Computer and System Sciences*, 18(2):194–211, September 1979.
+
+10. Hubert Garavel and Frédéric Lang. SVL: a Scripting Language for Compositional Verification. In Myungchul Kim, Byoungmoon Chin, Sungwon Kang, and Danhyung Lee, editors, *Proceedings of the 21st IFIP WG 6.1 International Conference on Formal Techniques for Networked and Distributed Systems (FORTE'01), Cheju Island, Korea*, pages 377–392. Kluwer Academic Publishers, August 2001. Full version available as INRIA Research Report RR-4223.
+
+11. Hubert Garavel, Frédéric Lang, and Radu Mateescu. Compositional Verification of Asynchronous Concurrent Systems Using CADP. *Acta Informatica*, 52(4):337–392, April 2015.
+
+12. Hubert Garavel, Frédéric Lang, Radu Mateescu, and Wendelin Serwe. CADP 2011: A Toolbox for the Construction and Analysis of Distributed Processes. Springer International Journal on Software Tools for Technology Transfer (STTT), 15(2):89–107, April 2013.
+
+13. Hubert Garavel and Damien Thivolle. Verification of GALS Systems by Combining Synchronous Languages and Process Calculi. In Corina Pasareanu, editor, *Proceedings of the 16th International SPIN Workshop on Model Checking of Software (SPIN'09), Grenoble, France*, volume 5578 of *Lecture Notes in Computer Science*, pages 241–260. Springer, June 2009.
+
+14. Susanne Graf and Bernhard Steffen. Compositional Minimization of Finite State Systems. In Edmund M. Clarke and Robert P. Kurshan, editors, *Proceedings of the 2nd Workshop on Computer-Aided Verification (CAV'90)*, Rutgers, New Jersey, USA, volume 531 of *Lecture Notes in Computer Science*, pages 186–196. Springer, June 1990.
+
+15. Jan Friso Groote and Alban Ponse. The Syntax and Semantics of μCRL. CS-R 9076, Centrum voor Wiskunde en Informatica, Amsterdam, 1990.
+
+16. ISO/IEC. LOTOS – A Formal Description Technique Based on the Temporal Ordering of Observational Behaviour. International Standard 8807, International Organization for Standardization – Information Processing Systems – Open Systems Interconnection, Geneva, September 1989.
+
+17. ISO/IEC. Enhancements to LOTOS (E-LOTOS). International Standard 15437:2001, International Organization for Standardization – Information Technology, Geneva, September 2001.
+
+18. D. Kozen. Results on the Propositional μ-calculus. *Theoretical Computer Science*, 27:333–354, 1983.
+
+19. Jean-Pierre Krimm and Laurent Mounier. Compositional State Space Generation from LOTOS Programs. In Ed Brinksma, editor, *Proceedings of the 3rd International Workshop on Tools and Algorithms for the Construction and Analysis of Systems (TACAS'97), University of Twente, Enschede, The Netherlands*, volume 1217 of *Lecture Notes in Computer Science*. Springer, April 1997. Extended version with proofs available as Research Report VERIMAG RR97-01.
+
+20. Frédéric Lang. EXP OPEN 2.0: A Flexible Tool Integrating Partial Order, Compositional, and On-the-fly Verification Methods. In Judi Romijn, Graeme Smith, and Jaco van de Pol, editors, *Proceedings of the 5th International Conference on Integrated Formal Methods (IFM'05), Eindhoven, The Netherlands*, volume 3771 of *Lecture Notes in Computer Science*, pages 70–88. Springer, November 2005. Full version available as INRIA Research Report RR-5673.
+
+21. Frédéric Lang and Radu Mateescu. Partial Model Checking using Networks of Labelled Transition Systems and Boolean Equation Systems. *Logical Methods in Computer Science*, 9(4):1–32, October 2013.
+
+22. J. Malhotra, S. A. Smolka, A. Giacalone, and R. Shapiro. A Tool for Hierarchical Design and Simulation of Concurrent Systems. In *Proceedings of the BCS-FACS Workshop on Specification and Verification of Concurrent Systems, Stirling, Scotland, UK*, pages 140–152. British Computer Society, July 1988.
+
+23. Radu Mateescu and Damien Thivolle. A Model Checking Language for Concurrent Value-Passing Systems. In Jorge Cuellar, Tom Maibaum, and Kaisa Sere, editors, *Proceedings of the 15th International Symposium on Formal Methods (FM'08), Turku, Finland*, volume 5014 of *Lecture Notes in Computer Science*, pages 148–164. Springer, May 2008.
+
+24. Radu Mateescu and Anton Wijs. Property-Dependent Reductions Adequate with Divergence-Sensitive Branching Bisimilarity. *Science of Computer Programming*, 96(3):354–376, 2014.
+
+25. Robin Milner. *Communication and Concurrency*. Prentice-Hall, 1989.
+
26. R. De Nicola and F. W. Vaandrager. Action versus State Based Logics for Transition Systems. In *Semantics of Concurrency*, volume 469 of *Lecture Notes in Computer Science*, pages 407–419. Springer, 1990.
+
+27. David Park. Concurrency and Automata on Infinite Sequences. In Peter Deussen, editor, *Theoretical Computer Science*, volume 104 of *Lecture Notes in Computer Science*, pages 167–183. Springer, March 1981.
+
+28. Amir Pnueli. In Transition from Global to Modular Temporal Reasoning about Programs. *Logic and Models of Concurrent Systems*, 13:123–144, 1984.
+
+29. Krishan K. Sabnani, Aleta M. Lapone, and M. Ümit Uyar. An Algorithmic Procedure for Checking Safety Properties of Protocols. *IEEE Transactions on Communications*, 37(9):940–948, September 1989.
+
30. R. Streett. Propositional Dynamic Logic of Looping and Converse. *Information and Control*, 54:121–141, 1982.
+
+31. Kuo-Chung Tai and Pramod V. Koppol. An Incremental Approach to Reachability Analysis of Distributed Programs. In *Proceedings of the 7th International Workshop on Software Specification and Design, Los Angeles, CA, USA*, pages 141–150, Piscataway, NJ, December 1993. IEEE Press.
+
+32. Kuo-Chung Tai and Pramod V. Koppol. Hierarchy-Based Incremental Reachability Analysis of Communication Protocols. In *Proceedings of the IEEE International Conference on Network Protocols, San Francisco, CA, USA*, pages 318–325, Piscataway, NJ, October 1993. IEEE Press.
+
+33. Antti Valmari. Compositional State Space Generation. In Grzegorz Rozenberg, editor, *Advances in Petri Nets 1993 – Papers from the 12th International Conference on Applications and Theory of Petri Nets (ICATPN’91)*, Gjern, Denmark, volume 674 of *Lecture Notes in Computer Science*, pages 427–457. Springer, 1993.
+
+34. R. J. van Glabbeek and W. Peter Weijland. Branching-Time and Abstraction in Bisimulation Semantics (extended abstract). CS R8911, Centrum voor Wiskunde en Informatica, Amsterdam, 1989. Also in proc. IFIP 11th World Computer Congress, San Francisco, 1989.
+
+35. Rob J. van Glabbeek and W. Peter Weijland. Branching Time and Abstraction in Bisimulation Semantics. *Journal of the ACM*, 43(3):555–600, 1996.
+
+36. Wei Jen Yeh and Michal Young. Compositional Reachability Analysis Using Process Algebra. In *Proceedings of the ACM SIGSOFT Symposium on Testing, Analysis, and Verification (SIGSOFT’91), Victoria, British Columbia, Canada*, pages 49–59. ACM Press, October 1991.
+
+# Appendix
+
+This appendix is organized as follows. Section A contains the proof of Theorem 2 (page 9). Section B contains the proof of Lemma 1 (page 9). Section C contains the detailed performance data collected from the TFTP experiment presented in Section 4.1.
+
+## Note to referees
+
An archive was created at http://doi.org/10.5281/zenodo.2634149. It contains the experimental material (TFTP and RERS case-studies) and a PDF of this appendix. If the paper is accepted, the URL of the archive will be given in the paper, so that readers can access this companion material, and this appendix will be corrected according to the reviewers' comments (if any).
+
+## A Proof of Theorem 2 (page 9)
+
+To show Theorem 2, we first need the following Lemma, which simply lifts a standard property of branching and divbranching bisimulations up to the level of product states:
+
+**Lemma 2.** In $P|[A_{sync}]|Q$, if $(p_0, q_0) \xrightarrow{a} (p_1, q_1)$ and $q'_0 \sim_{dsbr} q_0$ then there exists a state $q'_1$ such that $q'_1 \sim_{dsbr} q_1$ and either:
+
- $a = \tau$, $p_0 = p_1$, $q_0 \sim_{dsbr} q_1$, and $q'_1 = q'_0$, or

- there exist $q''_0, \dots, q''_n$ ($n \ge 0$) such that $q''_0 = q'_0$, for all $i \in 0..n-1$, $q''_{i+1} \sim_{dsbr} q'_0$, $(p_0, q''_i) \xrightarrow{\tau} (p_0, q''_{i+1})$, and $(p_0, q''_n) \xrightarrow{a} (p_1, q'_1)$.

Graphically, this means that one of the diagrams below holds, where solid lines denote universal quantification whereas dotted lines denote existential quantification:

$$\begin{array}{ccc}
(p_0, q_0) & \xrightarrow{a=\tau} & (p_1, q_1) \\
\sim_{dsbr} & & \sim_{dsbr} \\
(p_0, q'_0) & = & (p_1, q'_1)
\end{array}$$

$$\text{— or —}$$

$$\begin{array}{ccccc}
(p_0, q_0) & & \xrightarrow{a} & & (p_1, q_1) \\
\sim_{dsbr} & & & & \sim_{dsbr} \\
(p_0, q''_0) & \xrightarrow{\tau} \cdots \xrightarrow{\tau} & (p_0, q''_n) & \xrightarrow{a} & (p_1, q'_1)
\end{array}$$
+
+*Proof.* From the definition of $P|[A_{sync}]|Q$ and the fact that $(p_0, q_0) \xrightarrow{a} (p_1, q_1)$,
+there are three possible cases:
+
+1. $a \in A_{sync}$, $p_0 \xrightarrow{a} p_1$, and $q_0 \xrightarrow{a} q_1$, or
+
+2. $a \notin A_{sync}$, $p_0 \xrightarrow{a} p_1$, and $q_1 = q_0$, or
+
+3. $a \notin A_{sync}$, $p_1 = p_0$, and $q_0 \xrightarrow{a} q_1$
+
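The three cases can be made concrete with a small executable sketch of the product construction (hypothetical state and action names; transition relations represented as sets of triples, not CADP's actual data structures):

```python
# Sketch of the three transition cases of P |[A_sync]| Q used in the proof:
# synchronized move (case 1), P moves alone (case 2), Q moves alone (case 3).

def product_transitions(states_p, states_q, trans_p, trans_q, sync):
    result = set()
    for (p0, a, p1) in trans_p:
        if a in sync:
            # case 1: a in A_sync, P and Q move together
            result |= {((p0, q0), a, (p1, q1))
                       for (q0, b, q1) in trans_q if b == a}
        else:
            # case 2: a not in A_sync, P moves alone
            result |= {((p0, q), a, (p1, q)) for q in states_q}
    for (q0, a, q1) in trans_q:
        if a not in sync:
            # case 3: a not in A_sync, Q moves alone
            result |= {((p, q0), a, (p, q1)) for p in states_p}
    return result

# Tiny example: one synchronized action "s" and two local actions "x", "y".
tp = {("p0", "s", "p1"), ("p0", "x", "p1")}
tq = {("q0", "s", "q1"), ("q0", "y", "q1")}
prod = product_transitions({"p0", "p1"}, {"q0", "q1"}, tp, tq, {"s"})
# prod contains one synchronized transition and four interleaved ones
```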
+Instead of considering those three cases, we merge cases 1 and 3 (where $q_0 \xrightarrow{a} q_1$, whatever $p_0$ does), which have a similar proof, into a single case. Thus we consider the following two cases, the first one corresponding to case 2 of the above enumeration, and the second one corresponding to cases 1 and 3:
+
+- $q_1 = q_0$, $a \notin A_{sync}$, and $p_0 \xrightarrow{a} p_1$. Take $q'_1 = q'_0$. We have $q'_1 \sim_{dsbr} q_1$ because $q'_1 = q'_0$, $q'_0 \sim_{dsbr} q_0$, and $q_0 = q_1$. The second item of the lemma (bottom diagram) is verified with $n=0$, $q''_0 = q''_n = q'_0$, and $(p_0, q''_0) = (p_0, q''_n) = (p_0, q'_0) \xrightarrow{a} (p_1, q'_0) = (p_1, q'_1)$.
+
+- $q_0 \xrightarrow{a} q_1$ (and either $a \notin A_{sync}$ and $p_1 = p_0$, or $a \in A_{sync}$ and $p_0 \xrightarrow{a} p_1$). Then since $q'_0 \sim_{dsbr} q_0$, we have by definition of $\sim_{dsbr}$:
+
+ * Either $a = \tau$ and $q_0 \sim_{dsbr} q_1$. Since $\tau \notin A_{sync}$, we have $p_0 = p_1$. Then, we can take $q'_1 = q'_0$. Indeed, $q'_1 \sim_{dsbr} q_1$ because $q'_1 = q'_0$, $q'_0 \sim_{dsbr} q_0$, and $q_0 \sim_{dsbr} q_1$. The first item of the lemma (top diagram) is verified.
+
+ * Or there exists $q''_0, ..., q''_n$ ($n \ge 0$) such that $q''_0 = q'_0$, for all $i \in 0..n-1$, $q''_{i+1} \sim_{dsbr} q'_0$, $q''_i \xrightarrow{\tau} q''_{i+1}$, and $q''_n \xrightarrow{a} q'_1$. In this case, we also have $(p_0, q''_i) \xrightarrow{\tau} (p_0, q''_{i+1})$ and $(p_0, q''_n) \xrightarrow{a} (p_1, q'_1)$, i.e., the second item of the lemma (bottom diagram) is verified. $\square$
+
+We draw the reader's attention to the fact that the top diagram of Lemma 2 concerns only the case $(p_0, q_0) \xrightarrow{\tau} (p_0, q_1)$ deriving from a transition $q_0 \xrightarrow{\tau} q_1$ where $q_0 \sim_{dsbr} q_1$. The bottom diagram concerns all other cases, including $(p_0, q_0) \xrightarrow{\tau} (p_1, q_0)$ deriving from $p_0 \xrightarrow{\tau} p_1$ where $p_0 \sim_{dsbr} p_1$. We now show the main theorem:
+
+**Theorem 2** Let $P = (\Sigma_P, A_P, \to_P, p_{init})$, $Q = (\Sigma_Q, A_Q, \to_Q, q_{init})$, $Q' = (\Sigma_{Q'}, A_{Q'}, \to_{Q'}, q'_{init})$, $A_{sync} \subseteq \mathcal{A}$, and $\varphi \in L_{\mu}^{\text{strong}}(A_s)$. If $A_Q \cap A_s = \emptyset$ and $Q \sim_{dsbr} Q'$, then $P|[A_{sync}]|Q \models \varphi$ if and only if $P|[A_{sync}]|Q' \models \varphi$.
+
*Proof.* We prove that, given $p \in \Sigma_P$, $q \in \Sigma_Q$, and $q' \in \Sigma_{Q'}$ such that $q \sim_{dsbr} q'$, $(p, q)$ satisfies $\varphi$ if and only if $(p, q')$ satisfies $\varphi$. We show this claim by structural induction on the formula $\varphi$. Since we work on finite LTS, every formula containing a fixed point operator $\mu$ can be expanded into an equivalent formula by unfolding the fixed point a bounded number of times. This way, we do not have to consider fixed points and propositional variables in this proof, and contexts $\delta$ mapping propositional variables to sets of states are always empty in the semantics of formulas. For conciseness, we write $[\varphi]$ instead of $[\varphi]_{P|[A_{sync}]|Q}$ or $[\varphi]_{P|[A_{sync}]|Q'}$, respectively, as the LTS ($P|[A_{sync}]|Q$ or $P|[A_{sync}]|Q'$) to which the semantics applies is clear from the context. Note that since the claim is symmetric, it is sufficient to prove one implication only. We thus assume that $(p, q) \in [\varphi]$ and prove that $(p, q') \in [\varphi]$.
+
+- Case $\varphi$ = **false**. It is obvious that both $(p, q) \notin [\varphi]$ and $(p, q') \notin [\varphi]$.
+
+- Case $\varphi = \varphi_1 \lor \varphi_2$. By definition, either $(p,q) \in [\varphi_1]$ or $(p,q) \in [\varphi_2]$. If $(p,q) \in [\varphi_1]$ (resp. $[\varphi_2]$) then $(p,q') \in [\varphi_1]$ (resp. $[\varphi_2]$) by the induction hypothesis. Therefore, $(p,q') \in [\varphi_1 \lor \varphi_2]$.
+
+- Case $\varphi = \neg\varphi_0$. By definition, $(p,q) \notin [\varphi_0]$. By the induction hypothesis, $(p,q') \notin [\varphi_0]$ and thus $(p,q') \in [\neg\varphi_0]$.
+
+- Case $\varphi = \langle\alpha_s\rangle \varphi_0$. By definition, there exists $a \in [\alpha_s]_A$ such that $(p,q) \xrightarrow{a} (p_1,q_1)$ and $(p_1,q_1) \in [\varphi_0]$.
+Since $[\alpha_s]_A \subseteq A_s$, $A_Q \cap A_s = \emptyset$, and $a \in [\alpha_s]_A$, we have $a \notin A_Q$ (and hence, $a \notin A_{sync}$). Therefore, $q = q_1$, $p \xrightarrow{a} p_1$, and $(p,q') \xrightarrow{a} (p_1,q')$.
+Since $q' \sim_{dsbr} q$ and $(p_1,q) \in [\varphi_0]$, we have by the induction hypothesis $(p_1,q') \in [\varphi_0]$. As a consequence, $(p,q') \in [\langle\alpha_s\rangle \varphi_0]$.
+
- Case $\varphi = \langle(\varphi_1?.\alpha_\tau)^*\rangle\, \varphi_2$. By definition, there exist $m \ge 0$, $p_i \in \Sigma_P$, $q_i \in \Sigma_Q$, and $a_i \in [\alpha_\tau]_A$ ($i \in 0..m$), such that $(p,q) = (p_0,q_0)$, $(p_m,q_m) \in [\varphi_2]$, and for all $0 \le i < m$, $(p_i,q_i) \xrightarrow{a_i} (p_{i+1},q_{i+1})$ and $(p_i,q_i) \in [\varphi_1]$. We show that the same holds from state $(p,q')$, i.e., there exists a sequence of transitions matching $\alpha_\tau$ and passing through states satisfying $\varphi_1$, until reaching a state satisfying $\varphi_2$. By the induction hypothesis, $(p,q') = (p_0,q'_0) \in [\varphi_1]$. Consider one transition $(p_i,q_i) \xrightarrow{a_i} (p_{i+1},q_{i+1})$. We know that $(p_i,q_i) \in [\varphi_1]$. Assume a state $q'_i$ such that $q'_i \sim_{dsbr} q_i$. Substituting $a_i$ for $a$, $(p_i,q_i)$ for $(p_0,q_0)$, and $(p_{i+1},q_{i+1})$ for $(p_1,q_1)$ in the two diagrams of Lemma 2, it appears clearly that there exists a state $q'_{i+1}$ such that $q'_{i+1} \sim_{dsbr} q_{i+1}$. Moreover, by the induction hypothesis, every state divbranching equivalent to $(p_i,q_i)$ in the bottom sequence of the diagrams is in $[\varphi_1]$. Finally, note that $\tau \in [\alpha_\tau]_A$ by definition, so that each of the $\tau$-transitions in the bottom sequence of the second diagram matches $\alpha_\tau$. Moreover, $(p_m,q_m) \in [\varphi_2]$ and $q'_m \sim_{dsbr} q_m$, so by the induction hypothesis, we also have $(p_m,q'_m) \in [\varphi_2]$.
+
- Case $\varphi = \langle(\varphi_1?.\alpha_\tau)^* .\varphi_1?.\alpha_a\rangle\, \varphi_2$. Since $\langle(\varphi_1?.\alpha_\tau)^* .\varphi_1?.\alpha_a\rangle\, \varphi_2$ is equivalent to $\langle(\varphi_1?.\alpha_\tau)^*\rangle\, \langle(\varphi_1?.\tau)^* .\varphi_1?.\alpha_a\rangle\, \varphi_2$ and since we have already considered the case $\varphi = \langle(\varphi_1?.\alpha_\tau)^*\rangle\, \varphi_2$ above, we only have to consider the case $\varphi = \langle(\varphi_1?.\tau)^* .\varphi_1?.\alpha_a\rangle\, \varphi_2$, i.e., $\alpha_\tau = \tau$. By definition, there exist $m \ge 0$, $p_i \in \Sigma_P$, $q_i \in \Sigma_Q$ ($i \in 0..m+1$) and $a \in [\alpha_a]_A$, such that $(p,q) = (p_0,q_0)$, $(p_m,q_m) \xrightarrow{a} (p_{m+1},q_{m+1})$, $(p_{m+1},q_{m+1}) \in [\varphi_2]$, and for all $0 \le i < m$, $(p_i,q_i) \xrightarrow{\tau} (p_{i+1},q_{i+1})$ and $(p_i,q_i) \in [\varphi_1]$. The reasoning is similar to the previous case. It is important to note that the transition $(p_m,q_m) \xrightarrow{a} (p_{m+1},q_{m+1})$ necessarily has a counterpart in the built sequence, since $a \in [\alpha_a]_A$ and $\tau \notin [\alpha_a]_A$. Therefore this transition cannot be an inert transition and the first case of Lemma 2 does not apply here.
+
- Case $\varphi = \langle\varphi_1?.\alpha_\tau\rangle\, @$. By definition, there exists an infinite sequence of states $(p_0,q_0) \xrightarrow{a_0} (p_1,q_1) \xrightarrow{a_1} \dots (p_i,q_i) \xrightarrow{a_i} \dots$ such that $p_0 = p$, $q_0 = q$, and for all $i \ge 0$, $(p_i,q_i) \in [\varphi_1]$ and $a_i \in [\alpha_\tau]_A$. The reasoning is similar to the previous case. It is important to note that an infinite sequence of $\tau$-transitions (when $a_i = \tau$ for all $i$) cannot collapse into an empty sequence, as guaranteed by divbranching bisimulation. $\square$
+
## B Proof of Lemma 1 (page 9)
+
+**Lemma 1-1 (Modal μ-calculus)** Assume $\varphi_i \in L_{\mu}^{strong}(A_s)$ ($i = 0,1,2$) and $[\alpha_s]_A \subseteq A_s$. Then the following hold:
+
+1. $\langle \alpha_s \rangle \varphi_0 \in L_{\mu}^{strong}(A_s)$
+
+2. $[\alpha_s] \varphi_0 \in L_{\mu}^{strong}(A_s)$
+
+3. $\neg \varphi_0 \in L_{\mu}^{strong}(A_s)$
+
+4. $\varphi_1 \lor \varphi_2 \in L_{\mu}^{strong}(A_s)$
+
+5. $\varphi_1 \land \varphi_2 \in L_{\mu}^{strong}(A_s)$
+
+6. $\varphi_1 \Rightarrow \varphi_2 \in L_{\mu}^{strong}(A_s)$
+
+*Proof.* Immediate from the definition of $L_{\mu}^{strong}(A_s)$ (see Def. 8) and the well-known identities $\varphi_1 \wedge \varphi_2 = \neg(\neg\varphi_1 \vee \neg\varphi_2)$ and $\varphi_1 \Rightarrow \varphi_2 = \neg\varphi_1 \vee \varphi_2$. $\square$
+
**Lemma 1-2 (Propositional Dynamic Logic)** Assume that $\varphi_0 \in L_{\mu}^{strong}(A_s)$, $\tau \in [\alpha_{\tau}]_A$, and $\tau \notin [\alpha_a]_A$. Then the following hold:
+
+1. $\langle \alpha_{\tau}^* \cdot \alpha_a \rangle \varphi_0 \in L_{\mu}^{strong}(A_s)$
+
+2. $[\alpha_{\tau}^* \cdot \alpha_a] \varphi_0 \in L_{\mu}^{strong}(A_s)$
+
+3. $\langle \alpha_{\tau}^* \rangle \varphi_0 \in L_{\mu}^{strong}(A_s)$
+
+4. $[\alpha_{\tau}^*] \varphi_0 \in L_{\mu}^{strong}(A_s)$
+
+5. $\langle \alpha_{\tau} \rangle @ \in L_{\mu}^{strong}(A_s)$
+
+6. $[\alpha_{\tau}] \dashv \in L_{\mu}^{strong}(A_s)$
+
*Proof.* It was shown in [24] that these weak PDL-Δ modalities can be encoded in $L_{\mu}^{dsbr}$ when $\varphi_0 \in L_{\mu}^{dsbr}$. The proposed encoding also holds when $\varphi_0 \in L_{\mu}^{strong}(A_s)$. As this encoding does not add more strong modalities than those already present in $\varphi_0$, these properties belong to $L_{\mu}^{strong}(A_s)$. □
+
+**Lemma 1-3 (Action Computation Tree Logic)** Assume that $\varphi_i \in L_{\mu}^{strong}(A_s)$ ($i = 1,2$). Then the following hold:
+
1. $\mathrm{E}(\varphi_1\ {}_{\alpha_1}\mathrm{U}\ \varphi_2) \in L_{\mu}^{strong}(A_s)$

2. $\mathrm{E}(\varphi_1\ {}_{\alpha_1}\mathrm{U}_{\alpha_2}\ \varphi_2) \in L_{\mu}^{strong}(A_s)$

3. $\mathrm{A}(\varphi_1\ {}_{\alpha_1}\mathrm{U}\ \varphi_2) \in L_{\mu}^{strong}(A_s)$

4. $\mathrm{A}(\varphi_1\ {}_{\alpha_1}\mathrm{U}_{\alpha_2}\ \varphi_2) \in L_{\mu}^{strong}(A_s)$
+
+5. $AG_{\alpha_0}(\varphi_0) \in L_{\mu}^{strong}(A_s)$
+
+6. $\mathrm{EF}_{\alpha_0}(\varphi_0) \in L_{\mu}^{\mathrm{strong}}(A_s)$
+
*Proof.* It was shown in [24] that the operators of ACTL$\backslash$X can be encoded using the weak modalities of $L_{\mu}^{\mathrm{dsbr}}$. Therefore, the only strong modalities that remain in these formulas are those occurring in $\varphi_1$ and $\varphi_2$, which by definition match only labels in $A_s$. Hence, these formulas belong to $L_{\mu}^{\mathrm{strong}}(A_s)$. $\square$
+
+In order to prove Lemma 1-4, we need the following lemma:
+
**Lemma 3.** $\mathrm{E}(\varphi_1\ {}_{\alpha_1}\mathrm{U}_{\alpha_2}\ \varphi_2) = \mathrm{E}(\varphi_1\ {}_{\alpha_1}\mathrm{U}\ (\varphi_1 \wedge \langle\alpha_2\rangle \varphi_2))$ if $\tau \notin [\alpha_2]_A$.
+
+*Proof.* This lemma is easily proven by showing that the modal $\mu$-calculus definitions of the left-hand-side and right-hand-side formulas are equivalent. First note that since $\tau \notin [\alpha_2]_A$, we have $[\alpha_2]_A = [\alpha_2]_A \setminus \{\tau\} = [\alpha_2 \wedge \neg\tau]_A$ and thus $\langle\alpha_2\rangle \varphi = \langle\alpha_2 \wedge \neg\tau\rangle \varphi$ for any formula $\varphi$. We then have the following:
+
$$
\begin{align*}
& \mathrm{E}(\varphi_1\ {}_{\alpha_1}\mathrm{U}\ (\varphi_1 \wedge \langle\alpha_2\rangle \varphi_2)) \\
&= \mu X.(\varphi_1 \wedge \langle\alpha_2\rangle \varphi_2) \vee (\varphi_1 \wedge \langle\alpha_1 \vee \tau\rangle X) && \text{by definition of } \mathrm{E}(\_\ {}_{\_}\mathrm{U}\ \_) \\
&= \mu X.\varphi_1 \wedge (\langle\alpha_2\rangle \varphi_2 \vee \langle\alpha_1 \vee \tau\rangle X) && \text{by factorization of } \varphi_1 \\
&= \mu X.\varphi_1 \wedge (\langle\alpha_2 \wedge \neg\tau\rangle \varphi_2 \vee \langle\alpha_1 \vee \tau\rangle X) && \text{from } \langle\alpha_2\rangle \varphi_2 = \langle\alpha_2 \wedge \neg\tau\rangle \varphi_2 \\
&= \mathrm{E}(\varphi_1\ {}_{\alpha_1}\mathrm{U}_{\alpha_2}\ \varphi_2) && \text{by definition of } \mathrm{E}(\_\ {}_{\_}\mathrm{U}_{\_}\ \_)
\end{align*}
$$
+
+$\square$
+
+**Lemma 1-4 (Computation Tree Logic)** Assume that $\varphi_i \in L_{\mu}^{\mathrm{strong}}(A_s)$ ($i = 0, 1, 2$) and $\tau \notin [\alpha_a]_A$. Then the following hold:
+
+1. $\mathrm{E}(\varphi_1 \cup \varphi_2) \in L_{\mu}^{\mathrm{strong}}(A_s)$
+
+2. $\mathrm{A}(\varphi_1 \cup \varphi_2) \in L_{\mu}^{\mathrm{strong}}(A_s)$
+
+3. $\mathrm{AG}(\varphi_0) \in L_{\mu}^{\mathrm{strong}}(A_s)$
+
+4. $\mathrm{EF}(\varphi_0) \in L_{\mu}^{\mathrm{strong}}(A_s)$
+
+5. $\mathrm{AF}(\varphi_0) \in L_{\mu}^{\mathrm{strong}}(A_s)$
+
+6. $\mathrm{EG}(\varphi_0) \in L_{\mu}^{\mathrm{strong}}(A_s)$
+
+7. $\mathrm{E}(\varphi_1 W \varphi_2) \in L_{\mu}^{\mathrm{strong}}(A_s)$
+
+8. $\mathrm{A}(\varphi_1 W \varphi_2) \in L_{\mu}^{\mathrm{strong}}(A_s)$
+
9. $\mathrm{A}([\alpha_a] \varphi_1 \,\mathsf{U}\, \varphi_2) \in L_{\mu}^{\mathrm{strong}}(A_s)$
+
+10. $\mathrm{A}([\alpha_a] \varphi_1 W \varphi_2) \in L_{\mu}^{\mathrm{strong}}(A_s)$
+
+11. $\mathrm{AG}([\alpha_a] \varphi_0) \in L_{\mu}^{\mathrm{strong}}(A_s)$
+
+12. $\mathrm{EF}(\langle\alpha_a\rangle \varphi_0) \in L_{\mu}^{\mathrm{strong}}(A_s)$
+
+13. $\mathrm{AG}(\varphi_1 \vee [\alpha_a] \varphi_2) \in L_{\mu}^{\mathrm{strong}}(A_s)$
+
+14. $\mathrm{EF}(\varphi_1 \wedge \langle\alpha_a\rangle \varphi_2) \in L_{\mu}^{\mathrm{strong}}(A_s)$
+---PAGE_BREAK---
+
*Proof.* CTL operators are defined on LTSs in terms of ACTL$\backslash$X operators as follows:
+
+$$
+\begin{align*}
\mathrm{E}(\varphi_1 \,\mathsf{U}\, \varphi_2) &= \mathrm{E}(\varphi_1 \; {}_{\mathbf{true}}\mathsf{U} \; \varphi_2) && (\text{Ct1}) \\
\mathrm{A}(\varphi_1 \,\mathsf{U}\, \varphi_2) &= \mathrm{A}(\varphi_1 \; {}_{\mathbf{true}}\mathsf{U} \; \varphi_2) && (\text{Ct2}) \\
\mathrm{EF}(\varphi_0) &= \mathrm{E}(\mathbf{true} \; {}_{\mathbf{true}}\mathsf{U} \; \varphi_0) && (\text{Ct3}) \\
\mathrm{AG}(\varphi_0) &= \neg \mathrm{E}(\mathbf{true} \; {}_{\mathbf{true}}\mathsf{U} \; \neg \varphi_0) && (\text{Ct4}) \\
\mathrm{AF}(\varphi_0) &= \mathrm{A}(\mathbf{true} \; {}_{\mathbf{true}}\mathsf{U} \; \varphi_0) && (\text{Ct5}) \\
\mathrm{EG}(\varphi_0) &= \neg \mathrm{A}(\mathbf{true} \; {}_{\mathbf{true}}\mathsf{U} \; \neg \varphi_0) && (\text{Ct6}) \\
\mathrm{E}(\varphi_1 \,\mathsf{W}\, \varphi_2) &= \mathrm{E}(\varphi_1 \,\mathsf{U}\, \varphi_2) \vee \mathrm{EG}(\varphi_1) && (\text{Ct7}) \\
\mathrm{A}(\varphi_1 \,\mathsf{W}\, \varphi_2) &= \neg \mathrm{E}(\neg \varphi_2 \,\mathsf{U}\, (\neg \varphi_1 \wedge \neg \varphi_2)) && (\text{Ct8})
+\end{align*}
+$$
+
In addition, there is the following identity, which is easily proven by showing that the modal $\mu$-calculus definitions of the left-hand-side and right-hand-side formulas are equivalent:
+
+$$
\mathrm{A}(\varphi_1 \,\mathsf{U}\, \varphi_2) = \neg \mathrm{E}(\neg \varphi_2 \,\mathsf{W}\, (\neg \varphi_1 \wedge \neg \varphi_2)) \quad (\text{Ct9})
+$$
+
We also use the following standard identity of the modal $\mu$-calculus:
+
+$$
+\neg[\alpha] \varphi_0 = \langle\alpha\rangle \neg\varphi_0 \quad (\text{Mcl1})
+$$
+
+Items 1 to 8 are a direct consequence of Lemma 1-3 and the fact that these
+operators can be translated to ACTL$\backslash$X, as shown by equations (Ct1) to (Ct8).
+The proof of Items 9 to 14 follows:
+
9. $\mathrm{A}([\alpha_a] \varphi_1 \,\mathsf{U}\, \varphi_2)$
   $= \neg \mathrm{E}(\neg \varphi_2 \,\mathsf{W}\, (\neg [\alpha_a] \varphi_1 \wedge \neg \varphi_2))$ By (Ct9)
   $= \neg (\mathrm{E}(\neg \varphi_2 \,\mathsf{U}\, (\neg [\alpha_a] \varphi_1 \wedge \neg \varphi_2)) \vee \mathrm{EG}(\neg \varphi_2))$ By (Ct7)
   $= \neg (\mathrm{E}(\neg \varphi_2 \,\mathsf{U}\, (\langle\alpha_a\rangle \neg \varphi_1 \wedge \neg \varphi_2)) \vee \mathrm{EG}(\neg \varphi_2))$ By (Mcl1)
   $= \neg (\mathrm{E}(\neg \varphi_2 \; {}_{\mathbf{true}}\mathsf{U} \; (\langle\alpha_a\rangle \neg \varphi_1 \wedge \neg \varphi_2)) \vee \mathrm{EG}(\neg \varphi_2))$ By (Ct1)
   $= \neg (\mathrm{E}(\neg \varphi_2 \; {}_{\mathbf{true}}\mathsf{U}_{\alpha_a} \; \neg \varphi_1) \vee \mathrm{EG}(\neg \varphi_2))$ By Lemma 3 and $\tau \notin [\alpha_a]_A$

which is in $L_{\mu}^{\mathrm{strong}}(A_s)$ following Item 6 and Lemmas 1-1 and 1-3.
+
10. $\mathrm{A}([\alpha_a] \varphi_1 \,\mathsf{W}\, \varphi_2)$
    $= \neg \mathrm{E}(\neg \varphi_2 \,\mathsf{U}\, (\neg [\alpha_a] \varphi_1 \wedge \neg \varphi_2))$ By (Ct8)
    $= \neg \mathrm{E}(\neg \varphi_2 \,\mathsf{U}\, (\langle\alpha_a\rangle \neg \varphi_1 \wedge \neg \varphi_2))$ By (Mcl1)
    $= \neg \mathrm{E}(\neg \varphi_2 \; {}_{\mathbf{true}}\mathsf{U} \; (\langle\alpha_a\rangle \neg \varphi_1 \wedge \neg \varphi_2))$ By (Ct1)
    $= \neg \mathrm{E}(\neg \varphi_2 \; {}_{\mathbf{true}}\mathsf{U}_{\alpha_a} \; \neg \varphi_1)$ By Lemma 3 and $\tau \notin [\alpha_a]_A$
+
+which is in $L_{\mu}^{strong}(A_s)$ following Lemmas 1-1 and 1-3.
+
11. $\mathrm{AG}([\alpha_a] \varphi_0)$
    $= \neg \mathrm{E}(\mathbf{true} \; {}_{\mathbf{true}}\mathsf{U} \; \neg [\alpha_a] \varphi_0)$ By (Ct4)
    $= \neg \mathrm{E}(\mathbf{true} \; {}_{\mathbf{true}}\mathsf{U} \; \langle\alpha_a\rangle \neg \varphi_0)$ By (Mcl1)
    $= \neg \mathrm{E}(\mathbf{true} \; {}_{\mathbf{true}}\mathsf{U}_{\alpha_a} \; \neg \varphi_0)$ By Lemma 3 and $\tau \notin [\alpha_a]_A$

which is in $L_{\mu}^{\mathrm{strong}}(A_s)$ following Lemmas 1-1 and 1-3. This is also a consequence of $\mathrm{AG}(\varphi_1 \vee [\alpha_a] \varphi_2) \in L_{\mu}^{\mathrm{strong}}(A_s)$ (shown below), replacing $\varphi_1$ by false and $\varphi_2$ by $\varphi_0$.
+---PAGE_BREAK---
+
12. $\mathrm{EF}(\langle\alpha_a\rangle\varphi_0)$
    $= \mathrm{E}(\mathbf{true} \; {}_{\mathbf{true}}\mathsf{U} \; \langle\alpha_a\rangle \varphi_0)$ By (Ct3)
    $= \mathrm{E}(\mathbf{true} \; {}_{\mathbf{true}}\mathsf{U}_{\alpha_a} \; \varphi_0)$ By Lemma 3 and $\tau \notin [\alpha_a]_A$

which is in $L_{\mu}^{\mathrm{strong}}(A_s)$ following Lemmas 1-1 and 1-3. This is also a consequence of $\mathrm{EF}(\varphi_1 \wedge \langle\alpha_a\rangle \varphi_2) \in L_{\mu}^{\mathrm{strong}}(A_s)$ (shown below), replacing $\varphi_1$ by true and $\varphi_2$ by $\varphi_0$.
+
+13. $\mathrm{AG}(\varphi_1 \lor [\alpha_a]\varphi_2) = \neg \mathrm{EF}(\neg \varphi_1 \wedge \langle\alpha_a\rangle \neg \varphi_2)$, which is shown to belong to $L_{\mu}^{\mathrm{strong}}(A_s)$ below.
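The first equality in Item 13 is a routine dualization; spelled out, it follows from (Ct3), (Ct4), and (Mcl1), together with De Morgan's law:

$$
\begin{align*}
\mathrm{AG}(\varphi_1 \vee [\alpha_a]\varphi_2)
&= \neg \mathrm{EF}(\neg(\varphi_1 \vee [\alpha_a]\varphi_2)) && \text{by (Ct3) and (Ct4)} \\
&= \neg \mathrm{EF}(\neg\varphi_1 \wedge \neg[\alpha_a]\varphi_2) && \text{by De Morgan's law} \\
&= \neg \mathrm{EF}(\neg\varphi_1 \wedge \langle\alpha_a\rangle\neg\varphi_2) && \text{by (Mcl1)}
\end{align*}
$$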
+
14. $\mathrm{EF}(\varphi_1 \wedge \langle\alpha_a\rangle \varphi_2)$ is semantically equivalent to $\mathrm{EF}(\langle(\varphi_1?.\mathbf{true})^*.\varphi_1?.\alpha_a\rangle \varphi_2)$, which belongs to $L_{\mu}^{\mathrm{strong}}(A_s)$. Indeed, if $\mathrm{EF}(\varphi_1 \wedge \langle\alpha_a\rangle \varphi_2)$ is true, then there is a reachable state satisfying $\varphi_1 \wedge \langle\alpha_a\rangle \varphi_2$, by definition of $\mathrm{EF}$. This state also satisfies $\langle(\varphi_1?.\mathbf{true})^*.\varphi_1?.\alpha_a\rangle \varphi_2$, after an empty sequence of steps matching $(\varphi_1?.\mathbf{true})^*$. On the other hand, if there is a reachable state satisfying $\langle(\varphi_1?.\mathbf{true})^*.\varphi_1?.\alpha_a\rangle \varphi_2$, then there is a reachable state satisfying both $\varphi_1$ and $\langle\alpha_a\rangle \varphi_2$, by definition of the weak modality, i.e., the LTS satisfies $\mathrm{EF}(\varphi_1 \wedge \langle\alpha_a\rangle \varphi_2)$.
+$\square$
+
## C Detailed performance data of the TFTP case-study
+
Table 5 presents the detailed performance data obtained on the TFTP case-study, which served as input to generate the curves of Figure 2 (page 12). Parameters *P*, *S*, and *I* correspond respectively to the property number, the scenario, and the TFTP instance. LTS sizes are given in kilostates, memory in megabytes, and time in seconds. Time does not include LTS generation of the component processes from their LNT specification, which is quite fast (a few seconds) and shared by both the mono-bisimulation approach and the combined bisimulations approach. The column “Ratio Strong/Combined” shows the gain obtained by applying the combined bisimulations approach rather than the mono-bisimulation approach with respect to largest LTS size, memory peak, and time.
+---PAGE_BREAK---
+
| P | S | I | Strong largest | Strong final | Strong MB | Strong sec. | Comb. largest | Comb. final | Comb. MB | Comb. sec. | Ratio largest | Ratio final | Ratio MB | Ratio sec. |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 08 | A | A | 1783 | 1197 | 136 | 66 | 746 | 503 | 60 | 32 | 2.4 | 2.4 | 2.3 | 2.1 |
| 14 | A | A | 1783 | 1495 | 138 | 83 | 770 | 639 | 62 | 31 | 2.3 | 2.3 | 2.2 | 2.7 |
| 08 | A | B | 1783 | 1295 | 143 | 91 | 347 | 281 | 30 | 17 | 5.1 | 4.6 | 4.8 | 5.4 |
| 17 | A | B | 1783 | 1191 | 138 | 76 | 1000 | 694 | 79 | 44 | 1.8 | 1.7 | 1.7 | 1.7 |
| 08 | B | A | 792 | 636 | 62 | 38 | 347 | 289 | 30 | 18 | 2.3 | 2.2 | 2.1 | 2.1 |
| 08 | B | B | 792 | 707 | 62 | 40 | 174 | 130 | 17 | 13 | 4.6 | 5.4 | 3.6 | 3.1 |
| 14 | B | B | 792 | 720 | 65 | 47 | 174 | 134 | 17 | 14 | 4.6 | 5.4 | 3.8 | 3.4 |
| 16 | B | B | 792 | 613 | 63 | 35 | 456 | 356 | 38 | 22 | 1.7 | 1.7 | 1.7 | 1.6 |
| 08 | C | A | 30926 | 16924 | 2393 | 2274 | 8448 | 4471 | 660 | * | – | – | – | – |
| 09 | C | A | 30965 | 16935 | 2540 | 2305 | 7940 | 7516 | 636 | 666 | 3.9 | 2.3 | 4.0 | 3.5 |
| 14 | C | A | 30965 | 22772 | 2417 | 3033 | 8679 | 6141 | 688 | 706 | 3.6 | 3.7 | 3.5 | 4.3 |
| 17 | C | A | 30992 | 16935 | 2527 | 1873 | 19269 | 10833 | 1502 | 1164 | 1.6 | 1.6 | 1.7 | 1.6 |
| 08 | C | B | 30926 | 16853 | 2391 | 2257 | 7995 | 4176 | 617 | 417 | 3.9 | 4.0 | 3.9 | 5.4 |
| 09 | C | B | 30953 | 16998 | 2538 | 2271 | 7326 | 6931 | 589 | 502 | 4.2 | 2.5 | 4.3 | 4.5 |
| 14 | C | B | 30953 | 22710 | 2410 | 2635 | 8146 | 5683 | 637 | 601 | 3.8 | 4.0 | 3.8 | 4.4 |
| 17 | C | B | 30992 | 16998 | 2529 | 1913 | 18033 | 10173 | 1406 | 1155 | 1.7 | 1.7 | 1.8 | 1.7 |
| 08 | D | A | 36447 | 19136 | 2934 | 2703 | 9655 | 5824 | 783 | 623 | 3.8 | 3.3 | 3.7 | 4.3 |
| 09 | D | A | 36447 | 18731 | 3152 | 2903 | 9636 | 8607 | 808 | 865 | 3.8 | 2.1 | 3.9 | 3.4 |
| 17 | D | A | 36447 | 18731 | 2971 | 2262 | 22203 | 11926 | 1793 | 1545 | – | – | – | – |
| 08 | D | B | 36447 | 19062 | 2933 | 2787 | 7405 | 3498 | 568 | 464 | 4.9 | 5.4 | 5.2 | 6.0 |
| 09 | D | B | 36478 | 19243 | 3110 | 2650 | 5495 | 5153 | 443 | 412 | 6.6 | 3.7 | 7.0 | 6.4 |
| 14 | D | B | 36478 | 22237 | 3002 | 3264 | 7533 | 3986 | 588 | 589 | 4.8 | 5.6 | 5.1 | 5.5 |
| 16 | D | B | 36478 | 19243 | 3161 | 2303 | 15576 | 7830 | 1204 | 1232 | 2.3 | 2.5 | 2.6 | 1.9 |
| 08 | E | A | 17035 | 10646 | 1315 | 1046 | 4970 | 3257 | 388 | 287 | 3.4 | 3.3 | 3.4 | 3.6 |
| 09 | E | A | 17035 | 10375 | 1397 | 1312 | 4856 | 4620 | 392 | 310 | 3.5 | 2.2 | 3.6 | 4.2 |
| 14 | E | A | 17035 | 12759 | 1327 | 1346 | 5081 | 3983 | 400 | 384 | 3.4 | 3.2 | 3.3 | 3.5 |
| 16 | E | A | 17035 | 10376 | 1399 | 897 | 10723 | 6643 | 832 | 736 | – | – | – | – |
| 08 | E | B | – | – | – | – | – | – | – | – | – | – | – | – |
+
+**Table 5.** Results of the TFTP case-study
+
+---PAGE_BREAK---
+
+# Distributed, Physics-Based Control of Swarms of Vehicles
+
+William M. Spears
+Diana F. Spears
+Department of Computer Science
+
+Jerry C. Hamann
+Rodney Heil
+Department of Electrical and Computer Engineering
+College of Engineering
+University of Wyoming
+Laramie, WY 82071
+wspears@cs.uwyo.edu
+
+## Abstract
+
+We introduce a framework, called “physicomimetics,” that provides distributed control of large collections of mobile physical agents in sensor networks. The agents sense and react to virtual forces, which are motivated by natural physics laws. Thus, physicomimetics is founded upon solid scientific principles. Furthermore, this framework provides an effective basis for self-organization, fault-tolerance, and self-repair. Three primary factors distinguish our framework from others that are related: an emphasis on minimality (e.g., cost effectiveness of large numbers of agents implies a need for expendable platforms with few sensors), ease of implementation, and run-time efficiency. Examples are shown of how this framework has been applied to construct various regular geometric lattice configurations (distributed sensing grids), as well as dynamic behavior for perimeter defense and surveillance. Analyses are provided that facilitate system understanding and predictability, including both qualitative and quantitative analyses of potential energy and a system phase transition. Physicomimetics has been implemented both in simulation and on a team of seven mobile robots. Specifications of the robotic embodiment are presented in the paper.
+
+Keywords: swarm robotics, physicomimetics, self-organization, fault-tolerance, predictability, formations.
+---PAGE_BREAK---
+
## 1. Introduction
+
+The focus of our research is to design and build rapidly deployable, scalable, adaptive, cost-effective, and robust networks of autonomous distributed vehicles. This combines sensing, computation and networking with mobility, thereby enabling deployment, self-organization, and reconfiguration of the multi-agent collective. Our objective is to provide a scientific, yet practical, approach to the design and analysis of aggregate sensor systems.
+
+The general purpose for deploying tens to hundreds of such agents can be summarized as “volumetric control.” Volumetric control means monitoring, detecting, tracking, reporting, and responding to environmental conditions within a specified physical region. This is done in a distributed manner by deploying numerous vehicles, each carrying one or more sensors, to collect, aggregate, and fuse distributed data into a tactical assessment. The result is enhanced situational awareness and the potential for rapid and appropriate response. Our objective is to design fully automated, coordinated, multi-agent sensor systems.
+
+The team vehicles could vary widely in type, as well as size, e.g., from nanobots or micro-electromechanical systems (MEMS) to micro-air vehicles (MAVs) and micro-satellites. An agent's sensors perceive the world, including other agents, and an agent's effectors make changes to that agent and/or the world, including other agents. It is assumed that agents can only sense and affect nearby agents; thus, a key challenge has been to design “local” control rules. Not only do we want the desired global behavior to emerge from the local interaction between agents (self-organization), but we also require fault-tolerance, that is, the global behavior degrades very gradually if individual agents are damaged. Self-repair is also desirable, in the event of damage. Self-organization, fault-tolerance, and self-repair are precisely those principles exhibited by natural physical systems. Thus, many answers to the problems of distributed control can be found in the natural laws of physics.
+
+This paper presents a framework, called “physicomimetics” or “artificial physics” (AP), for distributed control. We use the term “artificial” (or virtual) because although we are motivated by natural physical forces, we are not restricted to them [45]. Although the forces are virtual, agents act as if they were real. Thus the agent's sensors
+---PAGE_BREAK---
+
+must see enough to allow it to compute the force to which it is reacting. The agent's effectors must allow it to respond to this perceived force.
+
+We see two potential advantages to this approach. First, in the real physical world, collections of small entities yield surprisingly complex behavior from very simple interactions between the entities. Thus there is a precedent for believing that complex control is achievable through simple local interactions. This is required for very small agents, since their sensors and effectors will necessarily be primitive. Second, since the approach is largely independent of the size and number of agents, the results scale well to larger agents and larger sets of agents.
+
+Three primary emphases distinguish the AP framework from others that are related: minimality, ease of implementation, and run-time efficiency. First, AP formations are achieved with a minimal set of sensors and sensor information. The rationale for this emphasis is that it will: (1) reduce overall vehicle cost, (2) enable physical embodiment with small agents, and (3) increase vehicle stealthiness if sensing is active. Second, the paper presents theoretical results that translate directly into practical advice on how to set system parameters for desired swarm performance. This makes the robotic implementation straightforward. Third, AP is designed to be computationally efficient. Therefore, we avoid physics-based multi-agent algorithms such as [32], which compute potential fields and then transform to forces at run-time. Instead, AP computes forces only at run-time.
+
+The paper is organized as follows. First, we present the general AP framework, which is currently based on Newtonian physics, but is extendible to other types of physics. Then, a sequence of examples shows how the framework has been applied to construct a variety of both static and dynamic multi-agent formations and behaviors. This includes regular geometric lattices for distributed sensing, as well as dynamic behaviors for surveillance and perimeter defense. Fault-tolerance and self-repair are addressed in the context of these applications. Theoretical analyses are provided that facilitate deeper system understanding and predictability, including qualitative and quantitative analyses of a system phase transition and of system potential energy. Then, details are provided regarding the physical implementation of AP on a team of seven robots with minimal sensing capabilities. We conclude with discussions of related and future work.
+---PAGE_BREAK---
+
+## 2. The Physicomimetics Framework
+
The basic AP framework is elegantly simple. In essence, virtual physics forces drive a multi-agent system to a desired configuration or state. The desired configuration (state) is one that minimizes overall system potential energy. In effect, the system acts as a molecular dynamics ($\vec{F} = m\ddot{\vec{x}}$) simulation.
+
+At an abstract level, AP treats agents as physical particles. This enables the framework to be embodied in vehicles ranging in size from nanobots to satellites. Particles exist in two or three dimensions and are point-masses. Each particle $i$ has position $\vec{x}$ and velocity $\vec{v}$. We use a discrete-time approximation to the continuous behavior of the system, with time-step $\Delta t$. At each time step, the position of each particle undergoes a perturbation $\Delta \vec{x}$. The perturbation depends on the current velocity, i.e., $\Delta \vec{x} = \vec{v}\Delta t$. The velocity of each particle at each time step also changes by $\Delta \vec{v}$. The change in velocity is controlled by the force on the particle, i.e., $\Delta \vec{v} = \vec{F}\Delta t/m$, where $m$ is the mass of that particle and $\vec{F}$ is the force on that particle.¹ A frictional force is included, for self-stabilization. This is modeled as a viscous friction term, i.e., the product of a viscosity coefficient and the agent's velocity (independently modeled in the same fashion by Howard et al. [23]).
+
+We require that AP map easily to physical hardware, and our model reflects this design philosophy. Particle mass allows our simulated robots to have momentum. Robots need not have the same mass. The frictional force allows us to model actual friction, whether it is unavoidable or deliberate. With full friction, the robots come to a complete stop between sensor readings and with no friction the robots continue to move as they sense. The time step $\Delta t$ reflects the amount of time the robots need to perform their sensor readings. If the time step is small the robots get readings frequently, whereas if the time step is large readings are obtained infrequently. We have also included a parameter $F_{max}$, which restricts the maximum force felt by a particle. This provides a necessary restriction on the acceleration a robot can achieve. Also, a parameter $V_{max}$ restricts the velocity of the particles, which is very important for modeling real robots.
+
¹$F$ and $v$ denote the magnitudes of the vectors $\vec{F}$ and $\vec{v}$.
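To make the update rule concrete, the sketch below advances a single 2-D particle by one time step. Vectors are plain tuples; the viscosity coefficient `mu` and all default parameter values are illustrative assumptions, not settings prescribed by the framework.

```python
import math

def step(pos, vel, force, m=1.0, dt=1.0, mu=0.1, F_max=1.0, V_max=1.0):
    """One discrete-time update: dx = v*dt and dv = F*dt/m, with the
    force magnitude capped at F_max, the speed capped at V_max, and a
    viscous friction term -mu*v added to the force."""
    fx, fy = force
    fmag = math.hypot(fx, fy)
    if fmag > F_max:  # cap the felt force
        fx, fy = fx * F_max / fmag, fy * F_max / fmag
    fx -= mu * vel[0]  # viscous friction opposes the current velocity
    fy -= mu * vel[1]
    vx, vy = vel[0] + fx * dt / m, vel[1] + fy * dt / m
    vmag = math.hypot(vx, vy)
    if vmag > V_max:  # cap the speed
        vx, vy = vx * V_max / vmag, vy * V_max / vmag
    return (pos[0] + vx * dt, pos[1] + vy * dt), (vx, vy)
```

In the spirit of the framework, this step would be applied to every particle at each $\Delta t$, and lower-level software would then drive the physical robot toward the returned position (its way point).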
+---PAGE_BREAK---
+
+Although our framework does not require them, our design philosophy reflects further real-world constraints.
+
+The first is that AP be as distributed as possible and the second is that we require as little information as possible.
+
+To this end, we assume that sensors are minimal in information content and that the sensors (passive and active) are extremely local in nature. Occasionally we have to rely on very small amounts of global information and/or control, but this is done as infrequently as possible. If real-world systems have a richer suite of information, we can take advantage of it, but we do not rely on that information for the system to function.
+
+Due to the particle-like nature of our simulation, one important aspect of the real world is not modeled, namely, collisions of robots with other robots or objects in the environment. This was a deliberate design decision, since we wanted AP to be as platform independent as possible. Once a physical platform is selected, that aspect of the simulation must be modeled separately, and a lower-level algorithm is responsible for collision avoidance. For example, with small physical robots, gentle collisions can be tolerated and dealt with by using simple bumper sensors and routines. However, with MAVs, collisions must be avoided. The AP framework can avoid collisions through strong repulsive forces, but if additional guarantees are required then they must be modeled separately.
+
+Also, we do not model the behavioral dynamics of the actual robot. Although our robots can stop and turn on a dime, other platforms, such as MAVs, may not have this capability. AP is an algorithm that determines “way points” for the physical platforms. Lower-level software is necessary to control the movement of the robots toward their desired locations.
+
+Given a set of initial conditions and some desired global behavior, it is necessary to define what sensors, effectors, and local force laws are required for the desired behavior to emerge. This is explored, in the next few sections, for a variety of static and dynamic multi-agent configurations. Our implementation with robots is discussed in Section 7.
+---PAGE_BREAK---
+
## 3. Hexagonal Lattices
+
+The example considered in this section was originally inspired by an application that required a swarm of MAVs to form a hexagonal lattice, thus creating a distributed sensing grid [25]. Such lattices create a virtual antenna or synthetic aperture radar to improve radar image resolution.
+
### 3.1 Designing Hexagonal Lattices
+
+Since MAVs (or other small agents such as nanobots) will have simple sensors and primitive CPUs, our goal was to provide the simplest control rules requiring minimal sensors and effectors. At first blush, creating hexagons appears to be somewhat complicated, requiring sensors that can calculate distance, the number of neighbors, their angles, etc. However, only distance and bearing information is required. To understand this, recall an old high-school geometry lesson in which six circles of radius $R$ can be drawn on the perimeter of a central circle of radius $R$. Figure 1 illustrates this construction. If the particles (shown as small circular spots) are deposited at the intersections of the circles, they form a hexagon with a particle in the middle.
+
+**Figure 1.** Six circles can be drawn on the perimeter of a central circle, forming a hexagon at the intersection of the circles.
+
+The construction indicates that hexagons can be created via overlapping circles of radius $R$. To map this into a force law, imagine that each particle repels other particles that are closer than $R$, while attracting particles that are further than $R$ in distance. Thus each particle has a circular “potential well” around itself at radius $R$ – and neighboring particles will be separated by distance $R$. The intersection of these potential wells is a form of constructive interference that creates “nodes” of low potential energy where the particles are likely to reside.
+---PAGE_BREAK---
+
+The nodes are the small circular spots in the previous figure. Thus the particles serve to create the very potential energy surface to which they are responding. Note that the potential energy surface is never actually computed by the robots. Robots only compute local force vectors. Potential energy is only computed for visualization or mathematical analysis.
+
+With this in mind we defined a force law $F = Gm_i m_j / r^p$, where $F \le F_{max}$ is the magnitude of the force between two particles i and j, and r is the distance between the two particles. The variable p is a user-defined power, which ranges from -5.0 to 5.0. When $p = 0.0$ the force law is constant for all distances. Unless stated otherwise, we assume $p = 2.0$ and $F_{max} = 1$ in this paper. Also, $m_i = 1.0$ for all particles. The “gravitational constant” G is set at initialization. The force is repulsive if $r < R$ and attractive if $r > R$. Each particle has one sensor that can detect the distance and bearing to nearby particles. The one effector enables movement with velocity $v \le V_{max}$. To ensure that the force laws are local, we allow particles to sense only their nearest neighbors. In a perfect hexagon, nearest neighbors are $R$ away, and next nearest neighbors are $\sqrt{3}R$ away. Hence, particles have a visual range of only $1.5R$.
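This force law can be written down directly. The sketch below uses the defaults quoted in the text ($R = 50$, $G = 1,200$, $p = 2$, $F_{max} = 1$); returning a signed scalar (negative for repulsion) is an illustrative convention of ours, not part of the framework.

```python
def pairwise_force(r, G=1200.0, p=2.0, R=50.0, F_max=1.0, m_i=1.0, m_j=1.0):
    """Signed magnitude of the virtual force between two particles at
    distance r: negative = repulsive (r < R), positive = attractive
    (R < r <= 1.5R), and zero beyond the 1.5R visual range."""
    if r <= 0.0 or r > 1.5 * R:
        return 0.0
    if r == R:
        return 0.0  # equilibrium separation
    F = min(F_max, G * m_i * m_j / r ** p)  # magnitude, capped at F_max
    return F if r > R else -F
```

Each agent would sum these pairwise contributions (as vectors along the bearing to each neighbor) to obtain the total force it reacts to.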
+
Figure 2 shows the magnitude of the force $F$, when $R = 50$, $G = 1,200$, $p = 2$, and $F_{max} = 1$ (the system defaults). There are three discontinuities in the force law. The first occurs where the force law transitions from $F_{max}$ to $F = Gm_im_j/r^p$. The second occurs when the force law switches from repulsive to attractive at $R$. The third occurs at $1.5R$ ($= 75$), when the force goes to 0.
+
The initial conditions are also inspired by the MAV application. The MAVs are released from a canister dropped from a plane, then they propel outward (due to repulsive forces) until the desired geometric configuration is achieved. A two-dimensional Gaussian random variable (variance $\sigma^2$) initializes the positions of all particles. Their velocities are initialized to 0.0, although the framework does not require this. An example initial configuration for $N = 200$ particles is shown in Figure 3 (left). The 200 particles move for 1,000 time steps, using this very simple force law (see Figure 3, right). For $R = 50$, $G = 1,200$ provides good results. These values remain fixed throughout this paper unless stated otherwise.
+---PAGE_BREAK---
+
+Figure 2. The force law, when $R = 50$, $G = 1,200$, $p = 2$ and $F_{max} = 1$. The force has a maximum magnitude of 1 and a magnitude of 0 at $1.5R = 75$. The force is repulsive when the distance is less than 50 and attractive when the distance is between 50 and 75.
+
+Figure 3. Initially, the particles are assumed to be in a tight cluster $t = 0$ (left). Then particles repel and after 1,000 time steps form a good hexagonal lattice (right).
+
+There are a number of important observations to make about Figure 3 (right). First, a reasonably well-defined hexagonal lattice has been formed from the interaction of simple local force laws that involve only the detection of distance and bearing to nearby neighbors. The hexagonal lattice is not perfect – there is a flaw near the center of the structure. Also, the perimeter is not a hexagon, although this is not surprising, given the lack of global constraints. However, many hexagons are clearly embedded in the structure and the overall structure is quite hexagonal. Second, each node in the structure can have multiple particles (“a cluster”). Clustering is an emergent property that provides robustness, because the disappearance (failure) of individual particles from a cluster will have minimal effect. Clustering depends on the value of $G$, which we explore later in this section.
+
+The formation shown in Figure 3 (right) is stable, and does not change to any significant degree as $t$ increases past 1,000. The dynamics of the system ($t < 1,000$) is fascinating to watch, yet is hard to simply convey in a paper.
+---PAGE_BREAK---
+
+As opposed to displaying numerous snapshots, we instead measure well-defined characteristics of the system at every time step. These characteristics provide useful insights into the system dynamics.
+
### 3.2 Evaluating Lattice Quality
+
+The first characteristic we considered was orientation error, in the sense that the orientation of the lattice should be the same everywhere. To measure this characteristic, choose any pair of particles separated by $2R$. We use $2R$ instead of $R$ to smooth out local noise, since we care about global error. Specifically, two particles are separated by $2R$ if $1.98R < r < 2.02R$. These two particles form a line segment. Then choose any other pair of particles separated by $2R$, forming another line segment. Measure the angle between the two line segments. For a hexagonal lattice, this angle should be close to some multiple of $60^\circ$. The error is the absolute value of the difference between the angle and the closest multiple of 60. The maximum error is $30^\circ$ and the minimum is $0^\circ$. To evaluate lattice quality, we averaged the error over all distinct pairs of particle pairs.
+
+Since error ranges from $0^\circ$ to $30^\circ$, the average error at $t=0$ is $15^\circ$. Then the error decreases – the rate at which the decrease occurs indicates how quickly the system is stabilizing. For this system, the error decreases smoothly until $t = 200$, resulting in a final error of $5.6^\circ$ over the whole structure (averaged over 40 independent runs, with $\sigma = 3.6^\circ$). Further improvements can be achieved by gradually reducing friction as time progresses.
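The error computation itself is short. In the sketch below, segments are given as endpoint pairs, and the preceding selection step (finding particle pairs separated by $2R$) is assumed already done; this is an illustrative reading of the metric, not the authors' code.

```python
import itertools
import math

def orientation_error(segments):
    """Average orientation error, in degrees, over all distinct pairs of
    line segments. In a hexagonal lattice the angle between two segments
    should be a multiple of 60 degrees; the error is the distance to the
    nearest multiple, so each pair contributes a value in [0, 30]."""
    def angle(seg):
        (x1, y1), (x2, y2) = seg
        return math.degrees(math.atan2(y2 - y1, x2 - x1))

    errors = []
    for s1, s2 in itertools.combinations(segments, 2):
        a = abs(angle(s1) - angle(s2)) % 60.0
        errors.append(min(a, 60.0 - a))  # distance to nearest multiple of 60
    return sum(errors) / len(errors)
```

Since segments are undirected, reducing the angle difference modulo 60° also absorbs the 180° ambiguity in segment direction.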
+
### 3.3 Observing a Lattice Phase Transition
+
+The second characteristic we considered was the size of clusters at the lattice nodes. For each particle $i$ we counted the number of particles that were close to $i$ ($0 < r < 0.2R$). We always included the particle $i$ itself, so the minimum cluster size is 1.0. This was averaged over all particles and displayed for every time step (Figure 4, left). At $t=0$ particles are close together, yielding a high clustering. The particles then separate, due to repulsion, so that by $t=6$ the particles are apart. However, after $t=6$ clusters re-emerge, with a final cluster size of roughly 2.5. The re-emergence of clusters serves to lower the total potential energy of the system, and the size
+---PAGE_BREAK---
+
+of the re-emerged clusters depends on $G$, $R$, and the geometry of the system. We summarize here one interesting experiment with $G$. We continued the previous experiment, evolving the system until $t = 2,500$. However, after $t = 1,000$, $G$ is lowered by 0.5 at every time step. The results are shown in Figure 4 (right).
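The cluster-size measure just described is, concretely (a naive $O(n^2)$ sketch with positions as $(x, y)$ tuples; our own illustrative rendering):

```python
import math

def avg_cluster_size(positions, R=50.0):
    """Average cluster size: for each particle, count the particles
    within 0.2R of it (the particle itself included, so the minimum
    value is 1.0), then average the counts over all particles."""
    total = sum(
        1
        for xi, yi in positions
        for xj, yj in positions
        if math.hypot(xi - xj, yi - yj) < 0.2 * R
    )
    return total / len(positions)
```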
+
Figure 4. Initially, the particles have many nearby neighbors. Then the particles repel and separate at $t = 6$. As the hexagonal lattice forms, some particles form small clusters at the nodes of low potential energy $t > 10$ (left). After 1,000 time steps $G$ is decreased linearly. At $t = 2,200$ ($G = 600$), the small clusters suddenly separate again, showing a phase transition (right). Note that the y-axis scale is different in the two graphs.
+
+We expected the average cluster size to linearly decrease with $G$, but in fact the behavior was much more interesting. The cluster size remained relatively constant, until $t = 2,000$ ($G = 700$). At this point the cluster size dramatically dropped until $t = 2,200$ ($G = 600$), where the particles are separated (cluster size is one). This is very similar to a phase transition in natural physics, e.g., from a solid to a liquid.
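The $(t, G)$ pairs quoted above follow from the linear schedule; a tiny helper (the parameter names are ours) makes the arithmetic explicit:

```python
def G_schedule(t, G0=1200.0, t0=1000, rate=0.5):
    """G is held at G0 up to time step t0, then lowered by `rate` per
    time step, as in the experiment described above."""
    return G0 if t <= t0 else G0 - rate * (t - t0)
```

So the drop in cluster size begins around $G = 700$ (at $t = 2,000$) and the particles are fully separated by $G = 600$ (at $t = 2,200$).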
+
### 3.4 Analysis of the Phase Transition
+
+A primary objective of our research is to model and analyze multi-agent behavior, thus enabling predictions and, if needed, corrections. Complex, evolving multi-agent systems are notoriously difficult to predict. It is quite disconcerting when they exhibit anomalous behaviors for which there is no explanation. Because AP is built upon fundamental physics principles, it can be modeled and analyzed using traditional physics techniques, thus leading to explanatory physics-based laws.
+---PAGE_BREAK---
+
+Consider the case of the observed phase transition just described. We first conducted a *qualitative* analysis of this transition. In particular, we explored mathematical visualizations of the virtual potential energy (PE) fields. By definition, $V = -\int_s \vec{F} \cdot d\vec{s}$, where V is the traditional variable used for PE. This line (path) integral is a measure of the work done by a *virtual particle* to get to some position in the force field.
+
+A line integral may be used to calculate PE if the force is (or is approximately) conservative because in that case the work to get from point *a* to point *b* is independent of the path taken. The force field is conservative if its curl (a measure of rotation of the vector force field near a point) is zero. Due to the radial symmetry of our force law, the curl is zero everywhere, and the PE field is meaningful.
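For a purely radial field such as the one used here, this is immediate. Writing $\vec{F} = F(r)\,\hat{r}$ in two dimensions, a direct computation (a sketch of the standard argument) gives

$$
F_x = F(r)\,\frac{x}{r}, \qquad F_y = F(r)\,\frac{y}{r}
\quad\Longrightarrow\quad
\frac{\partial F_y}{\partial x} \;=\; \frac{\partial F_x}{\partial y} \;=\; \frac{xy\,\bigl(r\,F'(r) - F(r)\bigr)}{r^3},
$$

so the planar curl $\partial F_y/\partial x - \partial F_x/\partial y$ vanishes wherever $F$ is differentiable, and the PE field is well defined away from the isolated discontinuities of the force law.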
+
+Figure 5 (left) illustrates the PE field for a system of seven particles that have stabilized into a hexagonal formation ($G = 1,200$). Lighter shading represents high positive PE, while black represents low (zero or negative) PE. It is important to realize that the PE field shown does *not* illustrate the stability of the seven particles, because it is not from the point of view of any of those seven particles. Indeed, it is possible to construct such field diagrams, and they show that all seven particles are very stable. However, Figure 5 (left) illustrates what would happen if a new eighth virtual particle were brought into the system. Thus, the PE field is from the point of view of that virtual particle, and the PE at each $(x, y)$ position is computed from the point of view of a virtual particle at that position. Positive PE indicates that work is required to push the virtual particle to that position. Negative PE indicates that work is required to push the virtual particle *away* from that position. A virtual particle placed in this field moves from regions of high PE to low PE. For example, consider the central particle, which is surrounded by a region of low PE. A virtual particle that is close to the central particle will stay near that center. Thus the central particle is in a PE well that can attract another particle. This is not surprising, since we showed earlier that a $G$ of 1,200 results in clustering.
+
+Now lower $G$ to 800. Figure 5 (center) illustrates the PE field. While the central PE well is not as large as it was previously, it can still attract another particle. Finally, lower $G$ to 600 (Figure 5, right). The central PE well no longer exists. The central particle is now surrounded by regions of lower PE – thus a virtual particle near the
+---PAGE_BREAK---
+
+**Figure 5.** The PE field when $G = 1,200$ (left), $G = 800$ (middle), and $G = 600$ (right), for a system of seven particles. The PE field at each *x*, *y* position is computed from the point of view of a virtual particle at that position. Bright areas represent areas of positive potential energy, while black represents negative potential energy. A potential well surrounds the central particle for $G \ge 800$ but that well disappears when $G = 600$.
+
+central particle will move away from that particle. A phase transition has occurred and clustering ceases.
+
+Visualization of the PE field has led to an understanding of *why* the behavior of the dynamical system exhibits a phase transition. Large forces result in deep potential wells, allowing particles to form very stable sensing grids, with multiple particles clustering at nodes of low PE. In this situation the formation acts like a solid. The phase transition occurs when the potential wells disappear. At this point, the forces that promote cluster fragmentation are stronger than the forces that promote cluster cohesion. In this situation the formation acts like a liquid.
+
+We have now set the stage for a quantitative analysis of the phase transition. In particular, we want to calculate the value of *G* (in terms of other parameter settings) where the phase transition will occur. Based on the qualitative analysis, we derived a standard *balance of forces* law to predict the phase transition. This quantitative law states that the phase transition will occur when the *cohesion force*, which keeps a particle within a cluster, equals the *fragmentation force*, which repels the particle from the cluster [19]. To specify this law, it is necessary to derive the expressions for these forces.
+
+Figure 5 (right) indicates that a particle placed near the central particle will escape along trajectories that avoid
+---PAGE_BREAK---
+
+the perimeter particles. This has been confirmed via observation of the simulation. We depict these escape paths in Figure 6. In this figure, there are two particles at the center of the formation, and one particle each at the perimeter nodes. Label one of the two particles in the center as "A." Due to symmetry, without loss of generality we can focus on any of the escape paths for particle A. Let us examine the escape paths along the horizontal axis. Particle A can be expelled along this axis by the other central particle, which exerts a repulsive force of $F_{max}$ (because $r$ is small). Therefore, the fragmentation force upon particle A is $F_{max}$.
+
+**Figure 6.** If two particles are at the center of a hexagon formation, one particle can escape along any of the six paths directed between the outer particles.
+
+Next, we derive an expression for the cohesion force on A. Particle A is held near the center by the perimeter particles. Without loss of generality we again focus on the horizontal axis. Consider the force exerted by the four perimeter particles closest to the horizontal axis, on particle A. If A moves slightly to the right (or left), two particles will pull A back to the center (attraction), while two particles will push A back to the center (repulsion). All four particles contribute to the cohesion of the central cluster. For each particle, the magnitude of this force is $G/R^p$. The projection of this force on the horizontal axis is $\sqrt{3}/2$ times the magnitude of this force – because the angle between the chosen perimeter particles and the horizontal axis is 30°. Since there are four perimeter particles exerting this force (the remaining two have a force of 0 after projection), we multiply this amount by four to get a total cohesion force of $2\sqrt{3}G/R^p$.
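
The 30° projection arithmetic can be checked directly; a quick sanity check of our own (not code from the paper), using illustrative parameter values:

```python
import math

G, R, p = 1200.0, 50.0, 2.0          # illustrative values
per_particle = G / R ** p            # force magnitude from one perimeter particle
axis_component = per_particle * math.cos(math.radians(30))  # cos 30 deg = sqrt(3)/2
cohesion = 4 * axis_component        # four perimeter particles contribute

# Matches the closed form 2*sqrt(3)*G/R^p derived in the text:
assert math.isclose(cohesion, 2 * math.sqrt(3) * G / R ** p)
```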
+
+When the cohesion force is greater than the fragmentation force, the central cluster remains intact. When the fragmentation force is greater, the central cluster separates. The phase transition occurs when the two forces are in balance: $F_{max} = 2\sqrt{3}G/R^p$. Thus the phase transition will occur when $G = F_{max}R^p/2\sqrt{3}$. We denote this
+---PAGE_BREAK---
+
+value of *G* as $G_t$. Thus our phase transition law is:
+
+$$G_t = \frac{F_{max} R^p}{2\sqrt{3}}$$
+
+We tested this law for varying values of *R*, $F_{max}$, and *p*. The results are shown in Table 1, averaged over 10 independent runs, with *N* = 200. The system evolved until equilibrium with a high value of *G*. Then *G* was gradually lowered. Cluster size was monitored, and we noted the value of *G* when the average cluster size dropped below 1.5. The observed values are very close to those that are predicted (within 6%), despite the enormous range in the magnitude of predicted values (approximately four orders of magnitude). The variance among runs is low, with the normalized standard deviation being less than 5.7%.
+
+**Table 1.** The predicted/observed values of $G_t$ for different values of $R$, $p$, and $F_{max}$. The three columns under $F_{max}$ have $p=2$. The three columns under $p$ have $F_{max}=1$. The predicted values are very close to those that are observed.
+
+| $R$ | $F_{max} = 0.5$ | $F_{max} = 1.0$ | $F_{max} = 2.0$ | $p = 1.5$ | $p = 2.5$ | $p = 3.0$ |
|---|---|---|---|---|---|---|
| 25 | 90/87 | 180/173 | 361/342 | 36/35 | 902/874 | 4,510/4,480 |
| 50 | 361/355 | 722/687 | 1,440/1,430 | 102/96 | 5,100/5,010 | 36,100/35,700 |
| 100 | 1,440/1,410 | 2,890/2,840 | 5,780/5,630 | 289/277 | 28,900/28,800 | 289,000/291,000 |
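
As a quick arithmetic check, the law can be evaluated directly against a few of the predicted entries in Table 1 (the function name is ours):

```python
import math

def g_t_hex(R, F_max, p):
    """Hexagonal phase-transition prediction: G_t = F_max * R**p / (2*sqrt(3))."""
    return F_max * R ** p / (2 * math.sqrt(3))

# A few predicted entries from Table 1:
assert abs(g_t_hex(25, 0.5, 2) - 90) < 1      # ~90.2
assert abs(g_t_hex(50, 1.0, 2) - 722) < 1     # ~721.7
assert abs(g_t_hex(25, 1.0, 3) - 4510) < 1    # ~4510.3
```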
+
+These results indicate that we have a very good predictor of $G_t$, which incorporates the most important system parameters $p$, $R$, and $F_{max}$. Notice that $N$ (the number of particles) does not appear in our law. The phase transition behavior is largely unaffected by $N$, due to the local nature of the force law.
+
+There are several uses for this equation. Not only can we predict the value of $G_t$ at which the phase transition will occur, but we can also use $G_t$ to help design our system. For example, a value of $G \approx 0.9G_t$ will yield the best unclustered formations, while a value of $G \approx 1.8G_t$ will yield the best clustered formations. The reason for this is explored in the next section.
+---PAGE_BREAK---
+
+## 3.5 Conservation of Energy and the Role of Potential Energy
+
+Because the force is conservative, AP should obey conservation of energy. Furthermore, as we shall see, the initial PE of the starting configuration yields important information concerning the dynamics of the system.
+
+First, we measured the PE of the system at every time step, using the path integral shown earlier. This is the amount of work required to push each particle into position, one after another, for the current configuration of particles. Because the force is conservative, the order in which the particles are chosen is not relevant. Then we also measured the kinetic energy (KE) of the particles ($mv^2/2$). We modeled friction as heat energy. If there is no friction, the heat energy component is zero.
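
The KE bookkeeping is simple; a minimal sketch of our own, where KE lost to friction at each step would be added to a running heat total so that PE + KE + heat stays constant:

```python
def kinetic_energy(masses, velocities):
    """Total kinetic energy: sum of m * v**2 / 2 over all particles."""
    return sum(m * v * v / 2.0 for m, v in zip(masses, velocities))

# Two particles: 1*2^2/2 + 2*1^2/2 = 2 + 1
assert kinetic_energy([1.0, 2.0], [2.0, 1.0]) == 3.0
```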
+
+**Figure 7.** Conservation of energy, showing how the total energy remains constant, although the amount of different forms of energy changes over time. In the beginning, all energy is potential energy. This is transformed to kinetic energy when the particles move, and finally to heat as the particles stabilize due to friction.
+
+Figure 7 illustrates the energy dynamics of AP. As expected, the total energy remains constant over time. The system starts with only PE. Note that the graph illustrates one of the foundational principles of AP, namely, that the system lowers PE until a minimum is reached. This reflects the stability of the final aggregate system, requiring work to move the system away from desired configurations (thus increasing PE).
+
+As the system evolves, PE is converted into KE and heat, and the particles exhibit maximum motion (see Figure 7). Finally, the particles slow, and only heat remains. Note that PE is negative after a certain point. This
+---PAGE_BREAK---
+
+Figure 8. The amount of potential energy of the initial configuration of the hexagonal lattice system is maximized when $G_V = 1,300$ and $G_V = 65,000$, for a 200 particle system, when $p = 2$ (left) and $p = 3$ (right). The arrows show the values of $G_V$ and $G_{max}$, where $G_{max}$ is the maximum setting of $G$.
+
+illustrates stability of individual particles (as well as the collective) – it would require work to push individual particles out of these configurations. Hence this graph shows that the system is resilient to moderate amounts of force acting to disrupt it, once stable configurations are achieved. This issue will be addressed in the next section.
+
+The initial configuration ($t = 0$) PE also predicts important properties of the final evolved system, namely how well it evolves and the size of the formation. Higher initial PE means that more work can be done by the system – and the creation of bigger formations requires more work. Higher initial PE is also correlated with better formations, because higher PE leads to greater initial linear momentum of the particles. As with simulated annealing, this momentum can help overcome problems with local optima.
+
+For example, consider Figure 8 (left), which shows the PE of the initial configuration of the 200 particle system, when $p = 2$, for different values of $G$. In the graphs, $G_V$ is the value of $G$ at which PE is maximized, and $G_{max}$ is the largest useful setting of $G$. Interestingly, PE is maximized at the range of values of $G$ (1,200 – 1,400) that have been found empirically to yield the best structures. To test this hypothesis, we recalculated PE for the system when $p = 3$. The results are shown in Figure 8 (right). Again, maximum PE is achieved for a $G$ value that is very close to those that yield the best structures.
+
+As with the phase transition analysis, the goal is to derive a general expression for $G_V$. We first need to calculate
+---PAGE_BREAK---
+
+Figure 9. The force law, when $G = 4,000$ (left) and $G = 5,625$ (right), representing the second and third situations. The force has a maximum magnitude of 1 and a magnitude of 0 at $1.5R = 75$. The force is repulsive when the distance is less than 50 and attractive when the distance is between 50 and 75.
+
+the potential energy, $V$. We begin by calculating the PE of a two particle system.
+
+It is necessary to consider three different situations, depending on the radial extent to which $F_{max}$ dominates the force law $F = G/r^p$. Recall that agents use $F_{max}$ when $F \ge F_{max}$. This occurs when $G/r^p \ge F_{max}$ or, equivalently, when $r \le (G/F_{max})^{1/p} \equiv R'$. The first situation occurs when $F_{max}$ is used only at close distances, i.e., when $0 \le R' \le R$ (see Figure 2). The second situation occurs when $R \le R' \le 1.5R$. The third situation occurs when $R' > 1.5R$. In the third situation the force law has a constant magnitude of $F_{max}$, and $V$ remains constant with increasing $G$ (see Figure 9, left and right).
+
+Let us now compute the PE for the first situation, which requires the calculation of three separate integrals. The first represents the attractive force felt by one particle as it approaches the other, from a distance of $1.5R$ to $R$. The second is the repulsive force of $F = G/r^p$ when $r < R$ and $F < F_{max}$. The third represents the repulsive force of $F_{max}$ when $0 \le r \le R'$. Then:
+
+$$V = - \int_{R}^{1.5R} \frac{G}{r^p} dr + \int_{R'}^{R} \frac{G}{r^p} dr + \int_{0}^{R'} F_{max} dr$$
+
+The first term is negative because the force is attractive, whereas the latter two terms are positive because the force
+---PAGE_BREAK---
+
+is repulsive. We assume that $p \neq 1.0$, since AP is not run with that setting. Solving and substituting for $R'$ yields:
+
+$$V = \frac{\left[2R^{1-p} - (1.5R)^{1-p}\right]G}{(1-p)} - \frac{pG^{1/p}}{(1-p)F_{max}^{(1-p)/p}}$$
+
+The second situation is similar. The computation of PE is:
+
+$$V = - \int_{R'}^{1.5R} \frac{G}{r^p} dr - \int_{R}^{R'} F_{max} dr + \int_{0}^{R} F_{max} dr$$
+
+Solving and substituting for $R'$ yields:
+
+$$V = \frac{\left[\left(\frac{G}{F_{max}}\right)^{(1-p)/p} - \left(1.5R\right)^{1-p}\right]G}{(1-p)} + F_{max}\left[2R - \left(G/F_{max}\right)^{1/p}\right]$$
+
+Finally, the third situation is:
+
+$$V = - \int_{R}^{1.5R} F_{max} dr + \int_{0}^{R} F_{max} dr$$
+
+Solving yields:
+
+$$V = \frac{F_{max}R}{2}$$
+
+The first situation occurs with low $G$, when $G \le F_{max}R^p$. The second situation occurs with higher values of $G$, when $F_{max}R^p \le G \le F_{max}(1.5R)^p$. The third situation occurs when $G \ge F_{max}(1.5R)^p$. In the third situation the PE of the system remains constant as $G$ increases even further. Thus the maximum useful setting of $G$ is $G_{max} = F_{max}(1.5R)^p$. We can see this in Figure 8 (whose curves represent all three situations) for values of $G_{max} = 5,625$ and $G_{max} = 421,875$ respectively. Above these values of $G_{max}$, PE stays constant.
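
The three situations fit together into one piecewise function of $G$. The sketch below (names ours; standard settings $F_{max} = 1$, $R = 50$) writes the second-situation term $F_{max}[2R - R']$ with a positive sign, the choice that makes $V$ continuous at both situation boundaries:

```python
import math

R, F_MAX = 50.0, 1.0

def pe_two_particle(G, p=2.0):
    """Two-particle PE V(G), piecewise over the three situations in the text."""
    R_prime = (G / F_MAX) ** (1.0 / p)              # R' = (G / F_max)^(1/p)
    if R_prime <= R:                                # situation 1: 0 <= R' <= R
        return ((2 * R ** (1 - p) - (1.5 * R) ** (1 - p)) * G / (1 - p)
                - p * G ** (1.0 / p) / ((1 - p) * F_MAX ** ((1 - p) / p)))
    if R_prime <= 1.5 * R:                          # situation 2: R <= R' <= 1.5R
        return (((G / F_MAX) ** ((1 - p) / p) - (1.5 * R) ** (1 - p)) * G / (1 - p)
                + F_MAX * (2 * R - R_prime))
    return F_MAX * R / 2                            # situation 3: constant plateau

b1 = F_MAX * R ** 2                 # boundary between situations 1 and 2 (G = F_max R^p)
b2 = F_MAX * (1.5 * R) ** 2         # G_max = F_max (1.5R)^p

# V is continuous across both boundaries and flat above G_max:
assert math.isclose(pe_two_particle(b1 - 1e-6), pe_two_particle(b1 + 1e-6), rel_tol=1e-6)
assert math.isclose(pe_two_particle(b2 - 1e-6), pe_two_particle(b2 + 1e-6), rel_tol=1e-6)
assert pe_two_particle(2 * b2) == F_MAX * R / 2
```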
+
+It is now simple to generalize the computations for $V$ to $N$ particles, denoted as $V_N$. Regardless of the situation, we can build the $N$ particle system one particle at a time, in any order (because forces are conservative), resulting
+---PAGE_BREAK---
+
+in an expression for the total initial PE:
+
+$$V_N = \sum_{i=0}^{N-1} iV = \frac{VN(N-1)}{2}$$
+
+where *V* is defined above for the two particle system.
+
+Now that we have a general expression for the potential energy, $V_N$, we need to find the value of *G* that maximizes $V_N$. First, we need to determine whether the maximum occurs in the first or second situation. The slope of the PE equation for the second situation is strictly negative; thus the maximum must occur in the first situation. To find the maximum, we take the derivative of $V_N$ for the first situation with respect to $G$, set it to zero, and solve for $G$. The resulting maximum is:
+
+$$G_V = F_{max}R^p[2 - 1.5^{1-p}]^{p/(1-p)}$$
+
+The value of $G_V$ does not depend on the number of particles, which is a nice result. This simple formula is surprisingly predictive of the dynamics of a 200 particle system. For example, when $F_{max} = 1$, $R = 50$, and $p = 2$, $G_V = 1,406$, which is only about 7% higher than the value shown in Figure 8 (left). Similarly, when $p = 3$, $G_V = 64,429$, which is very close to the value shown in Figure 8 (right). The differences in values arise because our simulation has initial conditions specified by a 2D Gaussian random variable with a small variance $\sigma^2$, whereas our analysis assumes $\sigma^2 = 0$. Despite this difference, the equation for $G_V$ works quite well.
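
A direct evaluation of this expression (the function name is ours) reproduces the quoted values:

```python
def g_v_hex(R, F_max, p):
    """G_V = F_max * R**p * (2 - 1.5**(1-p)) ** (p / (1-p))."""
    return F_max * R ** p * (2 - 1.5 ** (1 - p)) ** (p / (1 - p))

assert round(g_v_hex(50, 1, 2)) == 1406     # quoted as 1,406
assert round(g_v_hex(50, 1, 3)) == 64429    # quoted as 64,429
```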
+
+In Section 3.4, empirical observations suggested that the best clustered formations occur when $G \approx 1.8G_t$. This is equivalent to stating that $G_V/G_t \approx 1.8$, because maximum *V* is correlated with the best formations. Using our prior expressions for $G_V$ and $G_t$, the ratio is:
+
+$$G_V/G_t = 2\sqrt{3}[2 - 1.5^{1-p}]^{p/(1-p)}$$
+---PAGE_BREAK---
+
+The ratio depends only on *p* and the sensor range, but not on $F_{max}$ or *R*. For *p* = 2 and *p* = 3 we get ratios of 1.9 and 1.77 respectively, which agree nicely with empirical observations.
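
The ratio can be checked numerically (our code; evaluating the closed form gives roughly 1.95 and 1.79, in line with the quoted 1.9 and 1.77):

```python
import math

def g_ratio_hex(p):
    """G_V / G_t = 2*sqrt(3) * (2 - 1.5**(1-p)) ** (p / (1-p)) for hexagonal lattices."""
    return 2 * math.sqrt(3) * (2 - 1.5 ** (1 - p)) ** (p / (1 - p))

assert abs(g_ratio_hex(2) - 1.9) < 0.06
assert abs(g_ratio_hex(3) - 1.77) < 0.02
```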
+
+Our final observation is that as *G* is increased beyond the optimal point $G_V$, PE decreases, yielding less energy to build large formations. As an example, for *p* = 2, with *N* = 200, we found empirically that when *G* = 1,200, the number of clusters in the final formation was 29. When *G* was doubled to 2,400 the number of clusters halved to 16. Finally, when *G* was doubled again to 4,800 the number of clusters was 7. In this situation $G_{max} = 5,625$. When *G* is set to the maximum, a minimal structure consisting of four clusters in a diamond formation is created. This result appears to hold in general, regardless of system parameters.
+
+In summary, we have built a picture of how to set the value of *G*, given other system parameters. For unclustered behavior, set *G* to be slightly lower than the phase transition point $G_t$. For the best clustered behavior with the largest formations, set *G* to $G_V$ (which is greater than $G_t$). For the smallest formations with maximal clustering, set *G* to $G_{max}$. We are currently attempting to predict the number of clusters given *G*.
+
+## 3.6 Robustness
+
+If $G \approx G_V$, and the system has reached equilibrium, then it is very robust with respect to the disappearance of numerous particles. Since lattice nodes of low PE are created via the intersection of many circular PE wells, the removal of particles from a node decreases the PE well depth of neighboring nodes but usually does not alter the lattice structure. The lattice is also preserved because non-neighboring nodes are unaffected by the particle removal – since they are out of sensing range. However, if enough particles disappear from a node, the balance of forces at neighboring nodes can change enough to cause particles in those neighbor nodes to move. In particular, if the cluster fragmentation force exceeds the cohesion force at a neighbor node, then one or more particles will be ejected from that cluster. Nevertheless, an ejected particle will move to another node of low PE. If this node was previously empty, then the movement just described will partially repair the lattice. In summary, the lattice structure degrades slowly, except for possible fragmentation into disjoint sets in very rare situations, or when a
+---PAGE_BREAK---
+
+very large percentage of particles are removed. Figure 10 shows this clearly. Beginning with 99 particles, 10 particles are removed, then another 20, and finally another 20. Removed particles are randomly chosen from the interior and perimeter of the lattice. The lattice is reduced in overall size, but its overall structure and integrity remain intact. The lines in this figure represent the force bonds between particles, and are useful for visualization.
+
+**Figure 10.** Beginning with 99 particles (top left), 10 particles are randomly removed (top right), then another 20 (bottom left), and finally another 20 (bottom right). The overall structure and integrity remain intact, demonstrating robustness.
+
+The concept of PE also provides a natural mechanism for self-repair of formations if they are disturbed. The disturbances increase PE, and the system attempts to correct itself by lowering PE again. To test the efficacy of this approach we added a simulated blast (e.g., an explosion that causes a gust of wind) to our simulation. Weak gusts, which cause bends in the formation, are easily repaired with AP. More severe disturbances, that distort the shape of the perimeter, require monitoring, checking, and steering techniques [18].
+---PAGE_BREAK---
+
+# 4. Square Lattices
+
+Given the success in creating hexagonal lattices, we investigated other regular structures. The square lattice is an obvious choice, since it also tiles a 2D plane.
+
+## 4.1 Designing Square Lattices
+
+The success of the hexagonal lattice hinged upon the fact that nearest neighbors are $R$ in distance. This is not true for squares, since if the distance between particles along an edge is $R$, the distance along the diagonal is $\sqrt{2}R$. Particles have no way of knowing whether their relationship to neighbors is along an edge or along a diagonal.
+
+Once again it appears that we need to know angles or the number of neighbors to solve this difficulty. However, a much simpler approach will do the trick. Suppose each particle is given another attribute, called “spin”. Half of the particles are initialized to be spin “up”, whereas the other half are spin “down”.²
+
+Figure 11. Square lattices can be formed by using particles of two “spins”. Unlike spins are $R$ apart while like spins are $\sqrt{2}R$ apart.
+
+Consider the square depicted in Figure 11. Particles that are spin up are open circles, while particles that are spin down are filled circles. Particles of unlike spin are distance $R$ from each other, whereas particles of like spin are distance $\sqrt{2}R$ from each other. This “coloring” of particles extends to square lattices, with alternating spins along the edges of squares, and same spins along the diagonals.
+
+Figure 11 indicates that square lattices can be created if particles sense not only distance and bearing to neighbors, but also their spin. Thus sensors must detect one more bit of information, spin. We use the same force law as before: $F = Gm_i m_j / r^p$. However, $r$ is renormalized to $r/\sqrt{2}$ if two particles have the same spin. Once again the force is repulsive if $r < R$ and attractive if $r > R$. The one effector enables movement with velocity $v \le V_{max}$.
+
+²Spin is merely a particle label and has no relation to the rotational spin used in navigation templates [43].
+---PAGE_BREAK---
+
+**Figure 12.** Using the same initial conditions as for the hexagonal lattice, 200 particles form a square lattice by $t = 4,000$, but global flaws exist.
+
+To ensure that the force law is local, particles cannot see other particles that are further than $cR$, where $c = 1.3$ if particles have like spin and 1.7 otherwise.
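
The complete square-lattice interaction rule (distance renormalization for like spins, plus the two cutoffs) can be written compactly. A sketch under our own naming, with one assumption flagged: we apply the cutoff to the *renormalized* distance, so that like-spin neighbors at actual distance $\sqrt{2}R$ sit at the zero-force point and remain visible.

```python
import math

R, F_MAX = 50.0, 1.0

def square_force(r, spin_i, spin_j, G, p=2.0):
    """Signed force between two particles under the square-lattice rules:
    like spins renormalize r -> r/sqrt(2); cutoff c*R with c = 1.3 (like) / 1.7 (unlike).
    Positive = repulsive, negative = attractive, zero beyond the cutoff."""
    like = (spin_i == spin_j)
    r_eff = r / math.sqrt(2) if like else r     # renormalized distance
    c = 1.3 if like else 1.7
    if r_eff >= c * R:                          # outside sensor range
        return 0.0
    mag = min(G / r_eff ** p, F_MAX)
    return mag if r_eff < R else -mag           # repulsive inside R, attractive outside

# Unlike spins equilibrate at R, like spins at sqrt(2)*R:
assert square_force(0.99 * R, 0, 1, G=1200.0) > 0                  # repulsive inside R
assert square_force(1.01 * R, 0, 1, G=1200.0) < 0                  # attractive outside R
assert square_force(1.01 * math.sqrt(2) * R, 1, 1, G=1200.0) < 0   # like-spin diagonal
assert square_force(2.0 * R, 0, 1, G=1200.0) == 0.0                # beyond 1.7R cutoff
```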
+
+The initial conditions are the same as those for the hexagonal lattice. The 200 particles move for 4,000 time steps (the system is somewhat slower to stabilize than the hexagon), using this very simple force law. The final result is shown in Figure 12. Again, we measure orientation error by choosing pairs of particle pairs separated by $2R$. By insisting that each particle pair has like spins, we ensure that pairs are aligned with the rows and columns of the lattice. In this case the angle between the two line segments should be close to some multiple of 90°. The error is the absolute value of the difference between the angle and the closest multiple of 90. The maximum error is 45° while the minimum is 0°. Averaged over 40 independent runs, the final error was about $12.8°$, with $\sigma = 6.7°$.
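
The error metric is the distance to the nearest multiple of 90°; a short helper (name ours) makes it concrete:

```python
def orientation_error_deg(angle_deg):
    """Absolute distance from angle_deg to the nearest multiple of 90 degrees.
    Ranges from 0 (perfectly aligned) to the maximum of 45."""
    a = angle_deg % 90.0
    return min(a, 90.0 - a)

assert orientation_error_deg(90.0) == 0.0
assert orientation_error_deg(137.0) == 43.0
assert orientation_error_deg(-45.0) == 45.0
```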
+
+The results are clearly suboptimal. Locally, the particles have formed square lattices. This can be observed by noting that the spins alternate along the edges of squares, whereas spins are the same along diagonals. Once again each “node” in the lattice can have multiple particles, providing robustness (the average cluster size is roughly 1.75). However, large global flaws split the structure into separate square lattices. Thus, although the local force laws work reasonably well, they (not surprisingly) do not rule out difficulties at the global level. The question is whether global repair is needed or whether local repairs will suffice.
+---PAGE_BREAK---
+
+**Figure 13.** Using “spin-flip” local repair, the 200 particles form a better square lattice at $t = 4,000$. Global flaws are almost absent although some local flaws still exist.
+
+## 4.2 Local Self-Repair of Square Lattices
+
+As with other physical systems, noise can help remove global flaws in structures. Furthermore, systems should also self-repair at the local level. For example, if all particles at a particular lattice node are destroyed, a local hole opens in the lattice. Our goal is to provide a simple mechanism that repairs both local and global faults. To achieve this goal we focused again on the concept of spin. Figure 12 indicates that clusters are almost always made up of particles of like spin. There is an aversion to having clusters of unlike spins.
+
+Spins are set at initialization. What would happen, though, if one particle in a cluster of like spins changes spin? It could fly away from that cluster to another cluster with the same spin as it now has. It could also land at an empty node which, although empty, is still an area of very low PE. In essence, clusters represent nodes with excess capacity, and that excess can fix problems in the structure as they arise. Our hypothesis is that this increased flow of particles (noise) can repair both local and global flaws in the square lattice.
+
+Testing this hypothesis only required one change to the code. Again, particles are initialized with a given spin. However, if a particle has a very close neighbor ($r < 1.0$), the particle may flip its spin with a small probability. Particles have one additional effector – they can change their own spin. This does not create structural holes, since a particle only leaves a cluster if there is excess capacity in that cluster.
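
The repair rule amounts to a few lines. A sketch (names and the 0.01 default probability are ours; the paper specifies only "a small probability"):

```python
import random

def maybe_flip_spin(spin, nearest_dist, rng, flip_prob=0.01):
    """Flip a particle's spin (0 <-> 1) with small probability when it has a
    very close neighbor (r < 1.0), i.e., when it sits in a multi-particle cluster."""
    if nearest_dist < 1.0 and rng.random() < flip_prob:
        return 1 - spin
    return spin

rng = random.Random(0)
assert maybe_flip_spin(1, 5.0, rng, flip_prob=1.0) == 1   # isolated: never flips
assert maybe_flip_spin(1, 0.5, rng, flip_prob=1.0) == 0   # crowded, prob 1: flips
```
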
+---PAGE_BREAK---
+
+Once again, the 200 particles moved for 4,000 time steps, using the same force law, coupled with this simple spin-flip repair mechanism. The initial conditions were the same as those in the previous section. The results are shown in Figure 13. The previously shown global flaws are no longer in evidence, although a minor portion of the lattice is still misaligned. Many of the flaws that remain are local and are the result of the still-operating spin-flip repair mechanism, which continues to occasionally send particles from cluster to cluster. Observation of the evolving system shows that holes are continually filled, as particles leave their cluster and head toward open areas of low PE.
+
+An exact Wilcoxon rank-sum test indicates that the mean error *with* spin-flip repair (4.9°, σ = 6.0°) is statistically significantly less (*p* < 0.001) than the mean error *without* spin-flip repair (12.8°, σ = 6.7°).
+
+## 4.3 Phase Transition Analysis
+
+Square lattices also display a phase transition as *G* decreases. The derivation of a quantitative law for square lattices is a straightforward analogue of the analysis for hexagonal lattices. The one difference is that in a square lattice, one of the two particles in the central cluster is expelled along a path to one of the perimeter particles, rather than between them (see Figure 14).
+
+**Figure 14.** If two particles are at the center of a square formation, one particle can escape along any of the eight paths directed towards the outer particles.
+
+In Figure 14, there are two particles in the center of the formation, and one particle each at the perimeter nodes. Label one of the two particles in the center as "A." Using the same reasoning as before the fragmentation force upon particle A is $F_{max}$. Particle A is held near the center by the perimeter particles. Using the geometry of the situation as we did with hexagons, the total cohesion force on A is $(2\sqrt{2} + 2)G/R^p$ [19]. The phase transition
+---PAGE_BREAK---
+
+will occur when $G = F_{max}R^p/(2\sqrt{2} + 2)$. The phase transition law for square lattices is:
+
+$$G_t = \frac{F_{max} R^p}{2\sqrt{2} + 2}$$
+
+**Table 2.** The predicted/observed values of $G_t$ for different values of $R$, $p$, and $F_{max}$. The three columns under $F_{max}$ have $p=2$. The three columns under $p$ have $F_{max}=1$. The predicted values are very close to those that are observed.
+
+| $R$ | $F_{max} = 0.5$ | $F_{max} = 1.0$ | $F_{max} = 2.0$ | $p = 1.5$ | $p = 2.5$ | $p = 3.0$ |
|---|---|---|---|---|---|---|
| 25 | 65/69 | 130/136 | 259/278 | 26/26 | 647/651 | 3,236/3,312 |
| 50 | 259/272 | 519/530 | 1,036/1,066 | 73/74 | 3,662/3,730 | 25,891/26,850 |
| 100 | 1,036/1,112 | 2,071/2,138 | 4,143/4,405 | 207/206 | 20,713/21,375 | 207,125/211,350 |
+
+We tested this law for varying values of $R$, $F_{max}$, and $p$. The results are shown in Table 2, averaged over 10 independent runs, with $N = 200$. The observed values are very close to those that are predicted (within 7%), and the normalized standard deviation is less than 6.2%.
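
Evaluating the square-lattice law against a few predicted entries of Table 2 (the function name is ours):

```python
import math

def g_t_square(R, F_max, p):
    """Square-lattice phase-transition prediction: G_t = F_max * R**p / (2*sqrt(2) + 2)."""
    return F_max * R ** p / (2 * math.sqrt(2) + 2)

assert abs(g_t_square(25, 0.5, 2) - 65) < 1      # ~64.7
assert abs(g_t_square(50, 1.0, 1.5) - 73) < 1    # ~73.2
assert abs(g_t_square(25, 1.0, 3) - 3236) < 1    # ~3236.0
```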
+
+## 4.4 Potential Energy Analysis
+
+We can also compute the PE of the initial configuration. This computation is slightly more difficult than before because there are two “species” of particles (spin up and spin down), with different inter-species and intra-species sensor ranges. The computation is performed in three stages. First assemble all spin up particles together in a cluster. Then assemble all spin down particles in a cluster. Finally, join these two clusters together. We consider only the first situation ($0 \le R' \equiv (G/F_{max})^{1/p} \le R$), since this is where the maximum PE occurs.
+
+First, compute the PE of the initial configuration of two spin up particles. When particles of like spin interact, $r$ is renormalized by $\sqrt{2}$, and their sensor range is $1.3R$. Thus:
+
+$$V = - \int_{\sqrt{2}R}^{1.3\sqrt{2}R} \frac{G}{(r/\sqrt{2})^p} dr + \int_{\sqrt{2}R'}^{\sqrt{2}R} \frac{G}{(r/\sqrt{2})^p} dr + \int_{0}^{\sqrt{2}R'} F_{max} dr$$
+---PAGE_BREAK---
+
+Solving and substituting for $R'$ yields:
+
+$$V = \sqrt{2} \left[ \frac{(2R^{1-p} - (1.3R)^{1-p})G}{(1-p)} - \frac{pG^{1/p}}{(1-p)F_{max}^{(1-p)/p}} \right]$$
+
+The computation for $V$ is very similar to that for the hexagonal lattice, differing only by a constant factor of $\sqrt{2}$ and the sensor range. We now generalize to $N$ spin up particles:
+
+$$V_N = \frac{VN(N-1)}{2}$$
+
+The computation for spin down particles is identical. We now combine the two clusters of $N$ spin up and $N$ spin down particles:
+
+$$V_{N+N} = V_N + V_N - \int_R^{1.7R} \frac{GN^2}{r^p} dr + \int_{R'}^R \frac{GN^2}{r^p} dr + \int_0^{R'} F_{max} N^2 dr$$
+
+Solving and substituting for $R'$ yields:
+
+$$V_{N+N} = V(N-1)N + N^2 \left[ \frac{(2R^{1-p} - (1.7R)^{1-p})G}{(1-p)} - \frac{pG^{1/p}}{(1-p)F_{max}^{(1-p)/p}} \right]$$
+
+To determine the value of $G$ for which PE is maximized, we take the derivative of $V_{N+N}$ with respect to $G$, set it to zero, and solve for $G$:
+
+$$G_V = F_{max} R^p \left[ \frac{\sqrt{2}(N-1)[2 - 1.3^{1-p}] + N[2 - 1.7^{1-p}]}{\sqrt{2}(N-1) + N} \right]^{p/(1-p)}$$
+
+Note that in this case $G_V$ depends on the number of particles $N$. This occurs because of the weighted average of different inter-species and intra-species sensor ranges. However, because this difference is not large, the dependency on $N$ is also not large. For example, with $R = 50$, $F_{max} = 1$, and $p = 2$, then $G_V = 1,466$ if there
+---PAGE_BREAK---
+
+Figure 15. The amount of potential energy of the initial configuration of the square lattice system is maximized when $G_V = 1,466$ and $G_V = 67,330$, for a 200 particle system, when $p = 2$ (left) and $p = 3$ (right). The arrows show the values of $G_V$ and $G_{max}$, where $G_{max}$ is the maximum setting of $G$.
+
+are 200 particles. With only 20 particles $G_V = 1,456$. Similarly, when $p = 3$, $G_V = 67,330$ and $G_V = 66,960$ respectively, for 200 and 20 particle systems (Figure 15).
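
The weak $N$-dependence is easy to confirm numerically (function name ours; standard settings $F_{max} = 1$, $R = 50$):

```python
import math

def g_v_square(N, R=50.0, F_max=1.0, p=2.0):
    """Square-lattice G_V with the weighted like/unlike sensor ranges (1.3R and 1.7R)."""
    s2 = math.sqrt(2)
    w = ((s2 * (N - 1) * (2 - 1.3 ** (1 - p)) + N * (2 - 1.7 ** (1 - p)))
         / (s2 * (N - 1) + N))
    return F_max * R ** p * w ** (p / (1 - p))

assert round(g_v_square(200)) == 1466    # quoted value for a 200 particle system
# The dependence on N is weak: well under 1% change from N = 20 to N = 200.
assert abs(g_v_square(20) - g_v_square(200)) / g_v_square(200) < 0.01
```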
+
+As we did with the hexagonal lattices, we can also compute the value of $G_{max}$, which is the highest value of $G$ that will have any effect on the system. For square lattices we get:
+
+$$G_{max} = F_{max}(1.7R)^p$$
+
+For our standard settings, when $p = 2$, $G_{max} = 7,225$, and when $p = 3$, $G_{max} = 614,125$ (Figure 15).
+
+## 5. Perfect Lattices and Transformations
+
+Transformations are easily achieved in AP. For the original hexagonal and square lattices, transformations are accomplished by ignoring or not ignoring spins. Figure 16 illustrates a transformation from a square lattice to a hexagonal lattice, to another square lattice, and to a final hexagonal lattice. There is no guarantee that the same lattice will appear under successive transformations.
+
+By adding other attributes [45], perfect hexagonal and square lattices (and their transformations) are also easily achieved. Each particle carries two different sets of attributes, one for hexagonal lattices and one for square lattices.
+---PAGE_BREAK---
+
+**Figure 16.** Agents can transform smoothly between hexagonal and square lattices. A square lattice (top left) transforms to a hexagonal lattice (top right), back to a square lattice (bottom left) and finally back to a hexagonal lattice (bottom right).
+
+When the system switches from one pair to the other, it transforms. Figure 17 illustrates a transformation from a perfect square lattice to a perfect hexagonal lattice (given the number of particles). During the transformation the square lattice structure is thoroughly destroyed, showing almost no structure at all. Despite this catastrophic disturbance the hexagonal structure eventually emerges, illustrating extreme robustness. The reverse transformation works equally well.
+
+## 6. Other Formations in Two and Three Dimensions
+
+Our simulation tool generalizes easily to three dimensions, which is necessary for the MAV task. Figure 18 shows 499 simulated MAVs in three separate planes of hexagonal lattices. Both top-down and side views are shown. Cubic lattices are also relatively easy to form, as are regular structures with triangular facets (such as triangular pyramids or regular icosahedrons). We have observed, however, as with natural crystals, that it is often easier to build these structures particle by particle, as opposed to building them in “batch.”
+
+**Figure 17. Agents can also transform from perfect square lattices to perfect hexagonal lattices (shown) and back (not shown).**
+
+The previous sections have described formations that have been planned in advance. However, our simulation tool provides the opportunity to change force law parameters in arbitrary and unusual ways. The results are often surprising, yielding unanticipated structures, especially in two dimensions. Figure 19 shows two unusual but potentially useful structures. We have found that some structures can be assembled easily by fixing the relevant parameters at the beginning. Others, however, are easier to create dynamically via transformation, as parameters are slowly changed. This also raises the possibility of searching the space of force laws (e.g., with genetic algorithms [22, 17, 44]) to create desired behavior.
+
+## 7. Dynamic Behaviors: Obstacle Avoidance
+
+Previous sections of this paper have focused on the creation of formations, but for most applications formations will have to move (often toward some goal). Also, for ground platforms, obstacles pose a serious challenge to the movement of the formation. To address this, we have extended our simulation to include goals (both stationary and moving) as well as obstacles. Larger obstacles are created from multiple, point-sized obstacles; this enables flexible creation of obstacles of arbitrary size and shape. While the obstacles are stationary, the goal can be moved by the mouse as the simulation is running.
+
+Figure 18. Three planes of MAVs in hexagonal lattices, shown from top-down and side views. There are 499 particles in this simulation.
+
+Figure 19. Unusual two dimensional formations can be achieved by changing the system parameters. These particular formations could be useful for perimeter defense applications.
+
+As a generalization to our standard paradigm, goals are attractive, whereas obstacles are repulsive (similar to potential field approaches, e.g., [26]). The goal can be sensed at a far distance of $6R$, while obstacles are sensed at a very short distance of $0.25R$. If the goal and obstacle forces are constant, we achieve good results.
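The goal and obstacle contributions just described can be sketched as follows; the force magnitudes and the helper names are our own assumptions, chosen only for illustration:

```python
import math

R = 50.0       # desired inter-particle separation (the paper's standard setting)
F_GOAL = 0.3   # constant goal-force magnitude (hypothetical value)
F_OBST = 1.0   # constant obstacle-force magnitude (hypothetical value)

def _unit(dx, dy):
    d = math.hypot(dx, dy)
    return (dx / d, dy / d) if d > 0 else (0.0, 0.0)

def goal_obstacle_force(pos, goal, obstacles):
    """Net virtual force on one particle from the goal and point-sized obstacles.

    As in the text: the goal attracts within a sensing range of 6R, obstacles
    repel within 0.25R, and both forces have constant magnitude.
    """
    fx = fy = 0.0
    if math.hypot(goal[0] - pos[0], goal[1] - pos[1]) <= 6 * R:
        ux, uy = _unit(goal[0] - pos[0], goal[1] - pos[1])
        fx, fy = fx + F_GOAL * ux, fy + F_GOAL * uy
    for ox, oy in obstacles:
        if 0 < math.hypot(pos[0] - ox, pos[1] - oy) <= 0.25 * R:
            ux, uy = _unit(pos[0] - ox, pos[1] - oy)  # points away from the obstacle
            fx, fy = fx + F_OBST * ux, fy + F_OBST * uy
    return fx, fy
```

The force on each particle is then combined with the inter-particle lattice forces before the particle moves.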
+
+Using this generalized paradigm we ran two sets of experiments. In the first, $G < G_t$, to remove clustering and to have fluid-like particle flow. The particles flow naturally around the obstacles, and do not retain any particular formation. In the second, $G \approx G_V > G_t$, to enhance the creation of clusters and rigid formations. Over numerous runs, three types of behavior are observed. First, the formation avoids obstacles via a sequence of rotations and counter-rotations of the whole collective. If this cannot be accomplished, the formation deforms by stretching force bonds between particles, diverging around obstacles directly in their path, and then converging again into a cohesive formation after the obstacle is passed. In the third situation, force bonds are broken and the formation fragments around an obstacle and then re-coalesces. One danger with this third situation is that particles can be permanently separated from the main formation.
+
+In the previous experiments the goal was stationary. Preliminary results indicate that slowly moving goals are successfully tracked without difficulty.³ The low $G$ (liquid) version generally performs better on this task. One can easily imagine a situation where the formation lowers $G$ to move around obstacles, and then raises $G$ to achieve better formations after the obstacles have been avoided.
+
+It is important to emphasize that this simulation models obstacle avoidance at an abstract level, and any application to platforms with complex dynamics (such as MAVs) will require additional modeling.
+
+## 8. Dynamic Behaviors: Surveillance and Perimeter Defense
+
+We have also explored surveillance and perimeter defense tasks. By using an analogy with the motion of gas molecules in a container, AP is successful on both of these tasks. The algorithm for surveillance is simple and elegant – agents repel each other, and are also repelled by the perimeter boundary. Friction is negligible. The surveillance task is shown in Figure 20 (left). Particles start at the center and move toward the perimeter, due to repulsion. They act like a gas in a container. If particles are destroyed, the remaining particles still search the enclosed area, but with less virtual “pressure.” Likewise, the addition of particles is also treated gracefully, increasing the pressure in the system. The two small squares inside the perimeter represent intruders. Particles are attracted towards intruders, but since they also repel each other, the number of particles that can cover an intruder is limited.
+
+³“Slowly” is relative to $\Delta t$. In the real world, faster motion implies smaller values of $\Delta t$ (i.e., sensing must occur more often).
+
+**Figure 20.** Simulated agents perform surveillance (left) and perimeter defense (right). For surveillance, agents are repelled by each other and by walls, while they are attracted by objects of interest. For perimeter defense agents repel each other and are drawn to the corridor between the two walls (the inner wall is porous and excess capacity is stored in the interior area). Friction is zero, because constant movement is required.
+
+Perimeter defense is hardly more complex (see Figure 20, right). Once again, particles start from the center and repel each other. The inner and outer squares form a corridor to be monitored by the particles. The inner square is porous to the particles. Both the inner and outer walls are attractive, and particles are drawn to the space between them. One intruder is shown, which is attractive. Notice that some particles remain in the central area. This represents a situation of over-capacity – there are too many particles for the corridor. If particles in the corridor die, particles in the central area move to the corridor to replace them. This is a nice demonstration of the robustness of the system. An interesting phase transition of this system depends on the value of *G*. When *G* is high, particles fill the corridor uniformly, providing excellent on-the-spot coverage. When *G* is low, particles move toward the corners of the corridor, providing excellent line-of-sight coverage. Depending on whether the physical robots are better at motion or sensing, the *G* parameter can be tuned appropriately. Analysis of these dynamic systems will
+center around the kinetic theory of gases, as has been initiated by Jantz et al. [24].
+
+## 9. Application to a Team of Mobile Robots
+
+The current focus of this project is the physical embodiment of AP on a team of robots. Our choice of robots and sensors expresses a preference for minimal expense and expendable platforms.
+
+For our initial experiments we have used inexpensive kits from KIPR (the KISS Institute for Practical Robotics). These kits come with a variety of sensors and effectors, and two micro-computers – the RCX and the Handy Board. Due to its generality and ease of programming, we are currently using the Handy Board. The Handy Board has a Motorola HC11 processor with 32K of static RAM, a two-line LCD screen, and the capacity to drive several DC motors and servos. It also has ports for a variety of digital and analog sensors.
+
+Our robotic platform has two independent drive trains and two casters, allowing the platform to turn on a dime and move forward and backward. Slot sensors are incorporated into the drive trains to function as shaft encoders, yielding reasonably precise measures of the angle turned by the robot and the distance moved. The transmissions are geared down 25:1 to minimize slippage with the floor surface.
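For concreteness, converting shaft-encoder counts into the distance moved and angle turned can be sketched as below; the tick counts, wheel dimensions, and function name are hypothetical, since the paper does not give these specifics:

```python
import math

# Hypothetical platform constants (not from the paper)
TICKS_PER_WHEEL_REV = 400   # slot-sensor ticks per wheel revolution (after 25:1 gearing)
WHEEL_DIAMETER = 2.0        # inches
WHEEL_BASE = 6.0            # inches between the two drive wheels

def odometry(left_ticks, right_ticks):
    """Standard differential-drive odometry: encoder ticks -> (distance, angle)."""
    per_tick = math.pi * WHEEL_DIAMETER / TICKS_PER_WHEEL_REV  # inches per tick
    left = left_ticks * per_tick
    right = right_ticks * per_tick
    distance = (left + right) / 2.0       # forward motion of the platform midpoint
    angle = (right - left) / WHEEL_BASE   # radians turned (counter-clockwise)
    return distance, angle
```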
+
+The “head” of the robot is a sensor platform used to detect other robots in the vicinity. For distance information we use Sharp GP2D12 IR sensors. This sensor provides fairly accurate distance readings (10% error over a range of 6 to 50 inches). The readings are relatively insensitive to the material sensed, unless the material is highly reflective. However, the angle of orientation of the sensed object does have significant effects, especially if the object is reflective. As a consequence, each “head” is a cardboard (non-reflective) cylinder, allowing for accurate readings by the IR sensors.
+
+The head is mounted horizontally on a servo motor. With 180° of servo motion, and two Sharp sensors mounted on opposite sides, the head provides a simple “vision” system with a 360° view. After a 360° scan, object detection is performed. A first derivative filter detects object boundaries, even under conditions of partial occlusion. Width filters are used to ignore narrow and wide objects (chair legs and walls). This algorithm detects nearby robots, producing a “robot” list that gives the bearing and distance to each neighboring robot (Figure 21).
+
+Figure 21. Example of robot detection where there are five nearby robots, one partially occluded by two others. The 360° scan produces a graph of distance values (top). The first derivative filter looks for large positive or negative values of the derivative, which yield object boundaries (middle). Regions between boundaries are potential objects. Objects that are too wide or are really empty space are filtered, producing an object list (bottom). The narrow false object to the right in the object list is also filtered.
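The boundary-and-width filtering just described can be sketched as follows; all thresholds are hypothetical, and the actual robot code is not reproduced here:

```python
def detect_objects(scan, jump=15.0, min_width=3, max_width=40, max_range=60.0):
    """Segment a 360-degree distance scan into candidate objects.

    A first-derivative filter marks a boundary wherever consecutive readings
    jump by more than `jump`; regions between boundaries are kept only if
    their width (in samples) is plausible and they are not empty space
    (readings at maximum sensor range). All thresholds are hypothetical.
    """
    boundaries = [0]
    for i in range(1, len(scan)):
        if abs(scan[i] - scan[i - 1]) > jump:   # first-derivative filter
            boundaries.append(i)
    boundaries.append(len(scan))

    objects = []
    for a, b in zip(boundaries, boundaries[1:]):
        seg = scan[a:b]
        if not seg:
            continue
        mean = sum(seg) / len(seg)
        if min_width <= b - a <= max_width and mean < max_range:
            objects.append((a, b, mean))        # (start index, end index, distance)
    return objects
```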
+
+Once sensing and object detection is complete, the AP algorithm computes the virtual force felt by that robot. In response, the robot turns and moves to some position. This “cycle” of sensing, computation and motion continues until we shut down the robots or they run out of power. Figure 22 shows the AP code. It takes a robot neighbor list as input, and outputs the vector of motion in terms of a turn and distance to move.
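The force computation in this cycle might look like the following sketch. The split force law (repulsive inside $R$, attractive outside, magnitude $G/r^p$ clipped at $F_{max}$) follows the paper's general framework, but the unit masses, step scaling, and all names here are our assumptions:

```python
import math

R = 23.0      # desired robot separation in inches (from the experiments)
G = 270.0     # gravitational constant used for the hexagon experiment
F_MAX = 1.0
P = 2

def ap_step(neighbors):
    """One AP cycle: list of (distance, bearing) pairs in the robot's own
    frame -> (turn angle, distance to move).

    Assumes unit masses and the split Newtonian law F = G / r^p clipped at
    F_MAX: repulsive when a neighbor is closer than R, attractive when it
    is farther away.
    """
    fx = fy = 0.0
    for r, bearing in neighbors:
        if r <= 0:
            continue
        f = min(G / r ** P, F_MAX)
        sign = 1.0 if r > R else -1.0   # attract if too far, repel if too close
        fx += sign * f * math.cos(bearing)
        fy += sign * f * math.sin(bearing)
    turn = math.atan2(fy, fx)           # heading of the net virtual force
    move = math.hypot(fx, fy)           # one step proportional to |F|
    return turn, move
```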
+
+For our experiments, we built seven robots. Each robot ran the same piece of software. The objective of the first experiment was to form a hexagon. The desired distance *R* between robots was 23 inches. Using the theory, we chose a *G* of 270 (*p* = 2 and *F*max = 1). The beginning configuration was random. The results were very consistent, producing a hexagon ten times in a row and taking approximately seven cycles on average. Our scan algorithm takes about 22 seconds per cycle in this first implementation; however, a new localization technology we are developing will be much faster. For all runs the robots were separated by 20.5 to 26 inches in the final formation, an error only slightly larger than the sensor error.
+
+Figure 22. The main AP code, which takes as input a robot neighbor list (with distance and bearing information) and outputs a vector of motion.
+
+The objective of the second experiment was to form a hexagon and then move in formation to a goal. For this experiment, we placed four photo-diode light sensors on each robot, one per side. These produced an additional force vector, moving the robots towards a light source (a window). The reflection of the window visible on the floor is not the light source, and it is not noticed by the robots. The results, shown in Figure 23, were consistent over ten runs, achieving an accuracy comparable to the formation experiment above. The robots moved about one foot in 13 cycles of the AP algorithm.
+
+**Figure 23.** Seven robots self-organize into a hexagonal formation, which then moves towards a light source. Pictures were taken at the initial conditions, at two minutes, fifteen minutes, and thirty minutes.
+
+## 10. Summary and Related Work
+
+This paper has introduced a framework for distributed control of swarms of vehicles in sensor networks, based on laws of artificial physics (AP). The motivation for this approach is that natural laws of physics satisfy the requirements of distributed control, namely, self-organization, fault-tolerance, and self-repair.
+
+The results have been quite encouraging. We illustrated how AP can self-organize hexagonal and square lattices. The concept of spin-flip from natural physics was shown to be a useful repair mechanism for square lattices, if
+no global information is available. Structures in three dimensions are easily achieved, as well as transformations between structures. We have also presented preliminary results with dynamic multi-agent behaviors such as goal tracking, obstacle avoidance, surveillance, and perimeter defense. Finally, we have shown our first embodiment of AP on a team of seven mobile robots.
+
+This paper also presents a novel physics-based analysis of AP, focusing on potential energy and force balance equations. This analysis provides a predictive technique for setting important parameters in the system, enabling the user to create unclustered formations, large clustered formations, and minimal clustered formations. The unclustered formations act like liquids, whereas the clustered formations act like solids. This analysis combines the geometry of the formations with important parameters of the system, namely, G, R, p, $F_{max}$, and sensor range. The parameter N was also included, but it is of little relevance for our most important results. This is a nice feature, since one motivation for AP was scalability to large numbers of agents. Including other relevant parameters such as $\Delta t$, $V_{max}$ and friction requires a more dynamic analysis – we are currently focusing on “kinetic theory.”
+
+In conclusion, AP, although simple and elegant, has shown itself to be adept in a wide range of sensor network applications. AP demonstrates self-organization, fault-tolerance, self-repair, and effectiveness despite minimal sensing capabilities. It is readily amenable to theoretical analysis, enabling predictions of the behavior of the multi-agent swarm, and it is straightforward to implement on a team of robots. Because it does not compute full potential fields, it is also computationally efficient.
+
+Our discussion of related work will first focus on swarms and then on their theoretical analyses. Early work on swarm robotics focused on central controllers. For example, Carlson et al. [7] investigated techniques for controlling swarms of micro-electromechanical agents with a global controller that imposes an external potential field that is sensed by the agents. Recently, there has been movement away from global controllers, due to the brittleness of such an approach. AP is a distributed, rather than global, control framework for swarm management, although global control can be incorporated, if desired [18].
+
+Most of the swarm literature can be subdivided into *swarm intelligence*, *behavior-based*, *rule-based*, *control-theoretic* and *physics-based* techniques. Swarm intelligence techniques are ethologically motivated and have had excellent success with foraging, task allocation, and division of labor problems [5, 20]. In Beni et al. [4, 3], a swarm distribution is determined via a system of linear difference equations with periodic boundary conditions. Behavior-based approaches [6, 30, 35, 15, 2, 41] are also very popular. They derive vector information in a fashion similar to AP. Furthermore, particular behaviors such as “aggregation” and “dispersion” are similar to the attractive and repulsive forces in AP. Both behavior-based and rule-based (e.g., [40, 51]) systems have proved quite successful in demonstrating a variety of behaviors in a heuristic manner. Behavior-based and rule-based techniques do not make use of potential fields or forces. Instead, they deal directly with velocity vectors and heuristics for changing those vectors (although the term “potential field” is often used in the behavior-based literature, it generally refers to a field that differs from the strict Newtonian physics definition). Control-theoretic approaches have also been applied effectively (e.g., [1, 11]). Our approach does not assume leaders and followers, as in [10, 13, 9].
+
+One of the earliest physics-based techniques is the *potential fields* (PF) approach (e.g., [26]). Most of the PF literature deals with a small number of robots (typically just one) that navigate through a field of obstacles to get to a target location. The environment, rather than the agents, exerts forces. Obstacles exert repulsive forces while goals exert attractive forces. Recently, Howard et al. [23] and Vail and Veloso [49] extended PF to include inter-agent repulsive forces – for the purpose of achieving coverage. Although this work was developed independently of AP, it affirms the feasibility of a physics force-based approach. Another physics-based method is the “Engineered Collective” work by Duncan at the University of New Mexico and Robinett at Sandia National Laboratory. Their technique has been applied to search-and-rescue and other related tasks [39]. Kraus et al. [32] convert a generic, goal-oriented problem into a PE problem, and then automatically derive the forces needed to solve the problem. The *social potential fields* [37] framework is highly related to AP. Reif and Wang [37] rely on a force-law simulation that is similar to our own, allowing different forces between different agents. Their emphasis is on synthesizing desired formations by designing graphs that have a unique PE embedding. We plan to merge this approach with ours.
+
+Other physics-based approaches of relevance include research in flocking and other biologically motivated behavior. Reynolds models the physics of each agent and uses behavior-based rules to control its motion. Central to his work is “velocity matching”, wherein each agent attempts to match the average velocity of its neighbors. The primary emphasis is on flocking [38]. Tu and Terzopoulos provide a sophisticated model of the physics of fish, which are controlled by behavior-based rules. The emphasis is on “schooling” and velocity matching [48]. Vicsek provides a point particle approach, but uses velocity matching (with random fluctuations) and emphasizes biological behavior [50, 8, 21]. His work on “escape panic” utilizes an $\vec{F} = m\vec{a}$ model, but includes velocity matching [21]. Toner and Tu provide a point particle model, with sophisticated theory, but again emphasize velocity matching and flocking behavior [47]. These models are quite different from ours, since we impose no velocity matching condition. Also, their models do not obey standard conservation laws. Furthermore, we utilize minimal sensory information, whereas velocity matching requires the computation of relative velocity differences between neighbors, which is more complex than our model. Finally, our motivation is to control vehicles for distributed sensing tasks. We are especially interested in regular geometric formations. For moving formations, our goal is to provide precise control of the formation, rather than “life-like” behavior.
+
+One can also divide the related literature by the form of theoretical analysis, both in terms of the goal of the analysis and the method. There are generally two goals: stability and convergence/correctness. Under stability is the work by Schoenwald et al. [39], Fierro et al. [14], Olfati-Saber and Murray [33], Liu et al. [29], and Lerman [28]. The first three apply Lyapunov methods. Liu et al. use a geometric/topological approach, and Lerman uses differential equations to model system dynamics. Under convergence/correctness is the work by Suzuki [46], Parker [34], and Liu et al. [29]. Methods here include geometry, topology and graph theory. Other goals of theoretical analyses include time complexity [32], synthesis [37], prediction of movement cohesion [29], coalition size [28], number of instigators to switch strategies [31], and collision frequency [24].
+
+Methods of analysis are also similarly diverse. We focus only on physics-based analyses of physics-based
+swarm robotics systems. We know of four methods. The first are the Lyapunov analyses by Schoenwald et al. [39], Fierro et al. [14], and Olfati-Saber and Murray [33]. The second is the kinetic gas theory by Jantz et al. [24]. The third is the minimum energy analysis by Reif and Wang [37]. The fourth develops macro-level equations describing flocking as a fluid-like movement [47].
+
+To the best of our knowledge, the only analyses mentioned above that can set system parameters are those of Lerman [28], Numaoka [31], and Toner and Tu [47]. The first two are analyses of behavior-based systems, while the latter is of a “velocity matching” particle system. The ability to set system parameters based on theory has enormous practical value in terms of ease of implementation. The research presented in this paper provides practical laws for setting system parameters to achieve the desired behavior with actual robots.
+
+## 11. Future Work
+
+Currently, we are improving our mechanism for robot localization. This work is an extension of Navarro-Serment et al. [27], using a combination of RF and acoustic pulses to perform trilateration. This will distinguish robots from obstacles in a straightforward fashion, and will be much faster than our current “scan” technique.
+
+From a theoretical standpoint, we plan to formally analyze all important aspects of AP systems. This analysis will be more dynamic (e.g., kinetic theory) than the analysis presented here. We also intend to expand the repertoire of formations, both static and dynamic. For example, initial progress has been made on developing static and dynamic linear formations. Many other formations are possible within the AP framework. Using evolutionary algorithms to create desired force laws is one intriguing possibility that we are currently investigating.
+
+We also plan to address the topic of optimality, if needed. It is well understood that PF approaches can yield sub-optimal solutions. Since AP is similar to PF, similar problems arise with AP. Our experience thus far indicates that this is not a crucial concern, especially for the tasks that we have examined. However, if optimality is required we can apply new results from control theory to design force laws that guarantee optimality [33, 11].
+
+Finally, future work will focus on transitioning to real-world applications. For example, to transition to MAVs,
+more attention will be given to the interaction between the platforms and the environment. We are currently modeling four different environmental interactions: (1) goals, (2) obstacles, (3) the effects of wind, and (4) the friction of the medium. However, a richer suite is required for accurate models, such as signal propagation loss, occlusions of MAVs by obstacles or other MAVs, and weather effects. Also, if “velocity matching” is eventually required, information about “facing” and the relative speed of neighbors must be observed via sophisticated sensors or obtained via communication within a common coordinate system. Analysis will be more difficult under these situations. The speed of MAVs does not appear to be a major issue – higher speed implies a need for smaller $\Delta t$ (i.e., faster sensors). AP is able to indicate how fast sensors must be, and to set limits on the strength of obstacle and goal forces to maintain a formation.
+
+We consider AP to be one level of a more complex control architecture. The lowest level controls the actual movement of the platforms. AP is at the next higher level, providing “way points” for the robots, as well as providing simple repair mechanisms. Our goal is to put as much behavior as possible into this level, in order to provide the ability to generate laws governing important parameters. However, the current AP paradigm will not solve more complex tasks, involving planning, learning, repair from more catastrophic events, and global information. For example, certain arrangements of obstacles (such as cul-de-sacs) will require the addition of memory and planning. Hence, even higher levels will be required [42]. Learning is especially interesting to us, and we would like to add it to AP. Learning is advantageous in the context of behavior-based [12, 16] and rule-based [40, 36] systems, but its value has not been explored in the context of a physics-based system. Planning is also important, to provide forward reasoning, as opposed to reactive responses.
+
+## 12. Acknowledgements
+
+We are grateful to Adam Sciambi for developing our sophisticated Java simulator, and to Paul Hansen for building our robots. Thanks to Alan Schultz for loaning us some of his robot kits from the Naval Research Laboratory. We also thank Dimitri Zarzhitski, Suranga Hettiarachchi, and Vaibhav Mutha for their participation and contributions to this project. Vaibhav designed the balance of forces for obstacle avoidance and goal-seeking.
+
+Finally, we thank Derek Green, Nathan Horter, Christoph Jechlitschek, Nathan D. Jensen, Nadya Kuzmina, Glenn McLellan, Markus Petters, Brendon Reed, and Peter Weissbrod of the Fall 2002 AI class for their contributions to the initial robot designs and experiments.
+
+## References
+
+[1] R. Alur, J. Esposito, M. Kim, V. Kumar, and I. Lee. Formal modeling and analysis of hybrid systems: A case study in multi-robot coordination. *Lecture Notes in Computer Science*, 1708:212–232, 1999.
+
+[2] T. Balch and R. Arkin. Behavior-based formation control for multi-robot teams. *IEEE Transactions on Robotics and Automation*, 14(6):1–15, 1998.
+
+[3] G. Beni and S. Hackwood. Stationary waves in cyclic swarms. *Intelligent Control*, pages 234–242, 1992.
+
+[4] G. Beni and J. Wang. Swarm intelligence. In *Proceedings of the Seventh Annual Meeting of the Robotics Society of Japan*, pages 425–428, Tokyo, Japan, 1989.
+
+[5] E. Bonabeau, M. Dorigo, and G. Theraulaz. *Swarm Intelligence: From Natural to Artificial Systems*. Oxford University Press, Santa Fe Institute Studies in the Sciences of Complexity, Oxford, NY, 1999.
+
+[6] D. Brogan and J. Hodgins. Group behaviors for systems with significant dynamics. *Autonomous Robots*, 4:137–153, 1997.
+
+[7] B. Carlson, V. Gupta, and T. Hogg. Controlling agents in smart matter with global constraints. In Eugene C. Freuder, editor, *AAAI-97 Workshop on Constraints and Agents - Technical Report WS-97-05*, 1997.
+
+[8] A. Czirok, M. Vicsek, and T. Vicsek. Collective motion of organisms in three dimensions. *Physica A*, 264:299–304, 1999.
+
+[9] J. Desai, J. Ostrowski, and V. Kumar. Modeling and control of formations of nonholonomic mobile robots. *IEEE Transactions on Robotics and Automation*, 17(6):905–908, 2001.
+
+[10] J. Desai, J. Ostrowski, and V. Kumar. Controlling formations of multiple mobile robots. In *IEEE International Conference on Robotics and Automation*, pages 2864–2869, Belgium, 1998.
+
+[11] J. Fax and R. Murray. Information flow and cooperative control of vehicle formations. In *IFAC World Congress*, Barcelona, Spain, 2002.
+
+[12] F. Fernandez and L. Parker. Learning in large cooperative multi-robot domains. *International Journal of Robotics and Automation*, 16(4):217–226, 2002.
+
+[13] R. Fierro, C. Belta, J. Desai, and V. Kumar. On controlling aircraft formations. In *IEEE Conference on Decision and Control*, volume 2, pages 1065–1070, Orlando, Florida, 2001.
+
+[14] R. Fierro, P. Song, A. Das, and V. Kumar. Cooperative control of robot formations. In R. Murphey and P. Pardalos, editors, *Cooperative Control and Optimization*, volume 66, pages 73–93, Hingham, MA, 2002. Kluwer Academic Press.
+
+[15] J. Fredslund and M. Matarić. A general algorithm for robot formations using local sensing and minimal communication. *IEEE Transactions on Robotics and Automation*, 18(5):837–846, 2002.
+
+[16] D. Goldberg and M. Matarić. Learning multiple models for reward maximization. In *Seventeenth International Conference on Machine Learning*, pages 319–326, Stanford, CA, 2000.
+
+[17] David E. Goldberg. Genetic Algorithms in Search, Optimization, and Machine Learning. Addison-Wesley, 1989.
+
+[18] D. Gordon, W. Spears, O. Sokolsky, and I. Lee. Distributed spatial control, global monitoring and steering of mobile physical agents. In *IEEE International Conference on Information, Intelligence, and Systems*, pages 681–688, Washington, DC, 1999.
+
+[19] D. Gordon-Spears and W. Spears. Analysis of a phase transition in a physics-based multiagent system. In M. Hinchey, J. Rash, W. Truszkowski, C. Rouff, and D. Gordon-Spears, editors, *Lecture Notes in Computer Science*, volume 2699, pages 193–207, Greenbelt, MD, 2003. Springer-Verlag.
+
+[20] A. Hayes, A. Martinoli, and R. Goodman. Distributed odor source localization. *IEEE Sensors Journal*, 2(3):260–271, 2002.
+
+[21] D. Helbing, I. Farkas, and T. Vicsek. Simulating dynamical features of escape panic. *Nature*, 407:487–490, 2000.
+
+[22] J. Holland. *Adaptation in natural and artificial systems*. University of Michigan Press, Michigan, USA, 1975.
+
+[23] A. Howard, M. Matarić, and G. Sukhatme. Mobile sensor network deployment using potential fields: A distributed, scalable solution to the area coverage problem. In *Sixth International Symposium on Distributed Autonomous Robotics Systems*, pages 299–308, Fukuoka, Japan, 2002. ACM.
+
+[24] S. Jantz, K. Doty, J. Bagnell, and I. Zapata. Kinetics of robotics: The development of universal metrics in robotic swarms. In *Florida Conference on Recent Advances in Robotics*, Miami, Florida, 1997.
+
+[25] J. Kellogg, C. Bovais, R. Foch, H. McFarlane, C. Sullivan, J. Dahlburg, J. Gardner, R. Ramamurti, D. Gordon-Spears, R. Hartley, B. Kamgar-Parsi, F. Pipitone, W. Spears, A. Sciambi, and D. Srull. The NRL micro tactical expendable (MITE) air vehicle. *The Aeronautical Journal*, 106(1062):431–441, 2002.
+
+[26] O. Khatib. Real-time obstacle avoidance for manipulators and mobile robots. *International Journal of Robotics Research*, 5(1):90–98, 1986.
+
+[27] L. L. Navarro-Serment, C. Paredis, and P. Khosla. A beacon system for the localization of distributed robotic teams. In *International Conference on Field and Service Robots*, pages 232–237, Pittsburgh, PA, 1999.
+
+[28] K. Lerman and A. Galstyan. A general methodology for mathematical analysis of multi-agent systems. Technical Report ISI-TR-529, USC Information Sciences, 2001.
+
+[29] Y. Liu, K. Passino, and M. Polycarpou. Stability analysis of m-dimensional asynchronous swarms with a fixed communication topology. *IEEE Transactions on Automatic Control*, 48:76–95, 2003.
+
+[30] M. Matarić. Designing and understanding adaptive group behavior. Technical report, CS Dept, Brandeis Univ., 1995.
+
+[31] C. Numaoka. Phase transitions in instigated collective decision making. *Adaptive Behavior*, 3(2):185–222, 1995.
+
+[32] S. Kraus O. Shehory and O. Yadgar. Emergent cooperative goal-satisfaction in large-scale automated-agent systems. *Artificial Intelligence*, 110:1–55, 1999.
+
+[33] R. Olfati-Saber and R. Murray. Distributed cooperative control of multiple vehicle formations using structural potential functions. In *IFAC World Congress*, Barcelona, Spain, 2002.
+
+[34] L. Parker. Alliance: An architecture for fault tolerant multi-robot cooperation. *IEEE Transactions on Robotics and Automation*, 14(2):220–240, 1998.
+
+[35] L. Parker. Toward the automated synthesis of cooperative mobile robot teams. In *SPIE Mobile Robots XIII*, volume 3525, pages 82–93, Boston, MA, 1998.
+
+[36] M. Potter, L. Meeden, and A. Schultz. Heterogeneity in the coevolved behaviors of mobile robots: The emergence of specialists. In *Seventh International Conference on Artificial Intelligence*, pages 1337–1343, Seattle, Washington, 2001. Morgan Kaufmann.
+---PAGE_BREAK---
+
+[37] J. Reif and H. Wang. Social potential fields: A distributed behavioral control for autonomous robots. In *Robotics and Autonomous Systems*, volume 27 (3), pages 171–194, 1999.
+
+[38] C. Reynolds. Flocks, herds, and schools: A distributed behavioral model. In *Proceedings of SIGGRAPH’87*, volume 21(4), pages 25–34, New York, NY, 1987. ACM Computer Graphics.
+
+[39] D. Schoenwald, J. Feddema, and F. Oppel. Decentralized control of a collective of autonomous robotic vehicles. In *American Control Conference*, pages 2087–2092, Arlington, VA, 2001.
+
+[40] A. Schultz, J. Grefenstette, and W. Adams. Roboshepherd: Learning a complex behavior. In *Robotics and Manufacturing: Recent Trends in Research and Applications*, volume 6, pages 763–768, New York, 1996. ASME Press.
+
+[41] A. Schultz and L. Parker, editors. *Multi-Robot Systems: From Swarms to Intelligent Automata*. Kluwer, Hingham, MA, 2002.
+
+[42] R. Simmons, T. Smith, M. Dias, D. Goldberg, D. Hershberger, A. Stentz, and R. Zlot. A layered architecture for coordination of mobile robots. In A. Schultz and L. Parker, editors, *Multi-Agent Robot Systems: From Swarms to Intelligent Automata*, Hingham, MA, 2002. Kluwer.
+
+[43] M. Slack. *Situationally Driven Local Navigation for Mobile Robots*. PhD thesis, Virginia Polytechnic, 1990.
+
+[44] W. Spears, K. De Jong, T. Bäck, D. Fogel, and H. de Garis. An overview of evolutionary computation. In *European Conference on Machine Learning*, volume 667, pages 442–459, Austria, 1993. Springer Verlag.
+
+[45] W. Spears and D. Gordon. Using artificial physics to control agents. In *IEEE International Conference on Information, Intelligence, and Systems*, pages 281–288, Washington, DC, 1999.
+
+[46] I. Suzuki and M. Yamashita. Distributed anonymous mobile robots: Formation of geometric patterns. *SIAM Journal of Computation*, 28(4):1347–1363, 1999.
+---PAGE_BREAK---
+
+[47] J. Toner and Y. Tu. Flocks, herds, and schools: A quantitative theory of flocking. *Physical Review E*, 58(4):4828–4858, 1998.
+
+[48] X. Tu and D. Terzopoulos. Artificial fishes: Physics, locomotion, perception, behavior. In *Proceedings of SIGGRAPH'94*, pages 43–50, Orlando, Florida, 1994. ACM Computer Graphics.
+
+[49] D. Vail and M. Veloso. Multi-robot dynamic role assignment and coordination through shared potential fields. In A. Schultz, L. Parker, and F. Schneider, editors, *Multi-Robot Systems*, pages 87–98, Hingham, MA, 2003. Kluwer.
+
+[50] T. Vicsek, A. Czirok, E. Ben-Jacob, I. Cohen, and O. Shocher. Novel type of phase transition in a system of self-driven particles. *Physics Review Letters*, 75(6):1226–1229, 1995.
+
+[51] A. Wu, A. Schultz, and A. Agah. Evolving control for distributed micro air vehicles. In *IEEE Conference on Computational Intelligence in Robotics and Automation*, pages 174–179, Belgium, 1999.
\ No newline at end of file
diff --git a/samples/texts_merged/3658573.md b/samples/texts_merged/3658573.md
new file mode 100644
index 0000000000000000000000000000000000000000..ea9c96704da148fa8ee936c7f14f5052676f9aa8
--- /dev/null
+++ b/samples/texts_merged/3658573.md
@@ -0,0 +1,1180 @@
+
+---PAGE_BREAK---
+
+Interpreting a 750 GeV Diphoton Resonance
+
+Rick S. Gupta,ª Sebastian Jäger,ª,b Yevgeny Kats,ª Gilad Perez,ª Emmanuel Stamouª
+
+ªDepartment of Particle Physics and Astrophysics, Weizmann Institute of Science,
+Rehovot 7610001, Israel
+
+ᵇDepartment of Physics and Astronomy, University of Sussex,
+Brighton, BN1 9QH, UK
+
+E-mail: rsgupta@weizmann.ac.il, S.Jaeger@sussex.ac.uk,
+yevgeny.kats@weizmann.ac.il, gilad.perez@weizmann.ac.il,
+emmanuel.stamou@weizmann.ac.il
+
+**ABSTRACT:** We discuss the implications of the significant excesses in the diphoton final state observed by the LHC experiments ATLAS and CMS around a diphoton invariant mass of 750 GeV. The interpretation of the excess as a spin-zero s-channel resonance implies model-independent lower bounds on both its branching ratio and its coupling to photons, which stringently constrain dynamical models. We consider both the case where the excess is described by a narrow and a broad resonance. We also obtain model-independent constraints on the allowed couplings and branching fractions to final states other than diphotons, by including the interplay with 8 TeV searches. These results can guide attempts to construct viable dynamical models of the resonance. Turning to specific models, our findings suggest that the anomaly cannot be accounted for by the presence of only an additional singlet or doublet spin-zero field and the Standard Model degrees of freedom; this includes all two-Higgs-doublet models. Likewise, heavy scalars in the MSSM cannot explain the excess if stability of the electroweak vacuum is required, at least in a leading-order analysis. If we assume that the resonance is broad we find that it is challenging to find a weakly coupled explanation. However, we provide an existence proof in the form of a model with vectorlike quarks with large electric charge that is perturbative up to the 100 TeV scale. For the narrow-resonance case a similar model can be perturbative up to high scales also with smaller charges. We also find that, in their simplest form, dilaton models cannot explain the size of the excess. Some implications for flavor physics are briefly discussed.
+---PAGE_BREAK---
+
+# Contents
+
+- 1 Introduction
+- 2 Model-independent constraints
+  - 2.1 Implications of the excess alone
+  - 2.2 Interplay with previous LHC searches
+- 3 Models
+  - 3.1 SM-singlet scalar
+    - 3.1.1 Renormalizable model
+    - 3.1.2 Boosting $c_\gamma$, $c_g$ with new vectorlike fermions
+    - 3.1.3 A pseudoscalar
+    - 3.1.4 The dilaton
+    - 3.1.5 Production by quarks
+  - 3.2 Excluding the general pure 2HDM
+  - 3.3 The fate of the MSSM
+    - 3.3.1 Constraints from vacuum stability
+    - 3.3.2 Conservative bounds on sfermion contributions
+    - 3.3.3 Contributions from other particles and verdict
+    - 3.3.4 Production from quarks?
+    - 3.3.5 Cautionary note
+- 4 Summary and Outlook
+
+## 1 Introduction
+
+Very recently, both the ATLAS and the CMS collaborations at CERN have reported mutually consistent “bumps” in the diphoton invariant mass distribution around 750 GeV [1, 2]. Based on 3.2 and 2.6 fb⁻¹ of the 13 TeV LHC data, the corresponding deviations from the background-only hypothesis have a local significance of 3.9σ and 2.6σ in ATLAS and CMS, respectively. The bumps are best described by a relative width Γ/M ≈ 6% in ATLAS [1] but a sub-resolution width in CMS [2]. However, this discrepancy is not statistically significant, and we will generally present results as a function of the unknown width. The resonant excesses are suggestive of the decay of a new particle beyond the Standard Model (BSM). The kinematic properties of the events in the excess region are reported not to show significant deviations compared with events in the sidebands. This disfavors significant contributions to the production from decays of yet heavier particles or from associated production, and motivates focusing on the case of single production of a 750 GeV resonance.
+---PAGE_BREAK---
+
+The purpose of the present paper is to characterise this theoretically unexpected result and discuss its implications for some leading paradigms for physics beyond the Standard Model. It is divided into two main parts, the first of which comprises a model-independent framework that aims to equip the reader with handy formulas for interpreting both the signal and the most important resonance search constraints from existing LHC searches in the context of BSM models. We derive a number of bounds, including model-independent lower bounds on the branching ratio and partial width into photons of the hypothetical new particle. The second part investigates concrete scenarios, including the possibility of interpreting the resonance as the dilaton in a theory with spontaneous breaking of scale invariance or as a heavy Higgs scalar of a two-Higgs-doublet model (2HDM). We find the properties of the observed excess to be quite constraining. In particular, a leading-order analysis suggests that the interpretation as an *s*-channel resonance, if confirmed, cannot be accommodated within the Minimal Supersymmetric Standard Model (MSSM) even under the most conservative assumptions about the MSSM parameters and the true width of the resonance; this conclusion holds if we require the absence of charge- and colour-breaking minima.
+
+## 2 Model-independent constraints
+
+We start by discussing what can be inferred about the new hypothetical particle from data alone. We will first describe the implications of the observed properties of the diphoton bumps, and then examine the constraints from the absence of significant excesses in resonance searches in other final states that could be sensitive to other decay modes of the same particle.
+
+### 2.1 Implications of the excess alone
+
+Both ATLAS and CMS observe excesses in a diphoton invariant mass region near 750 GeV [1, 2]. For the purposes of this work, we will generally assume the signal contribution to be $N = 20$ events for $\mathcal{L} = 5.8 \text{ fb}^{-1}$ integrated luminosity (adding up ATLAS and CMS), but will make clear the scaling of our findings with $N$ wherever feasible. We will assume a signal efficiency (including acceptance) of $\epsilon = 50\%$, even though, in general, this does have some dependence on both the experiment and the details of the signal model.
+
+The most straightforward signal interpretation is resonant *s*-channel production of a new unstable particle. The observed signal strength corresponds to a 13 TeV inclusive cross section to diphotons of
+
+$$ \sigma_{13} \times \text{BR}_{\gamma\gamma} \approx 6.9 \text{ fb} \times \left(\frac{N}{20}\right) \left(\frac{50\%}{\epsilon}\right) \left(\frac{5.8 \text{ fb}^{-1}}{\mathcal{L}_{13}}\right). \qquad (2.1) $$
+
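As a quick numerical sketch, eq. (2.1) at the nominal inputs is simply the event count divided by efficiency times luminosity (the values below are the ones assumed in the text, not new inputs):

```python
# Eq. (2.1) at nominal inputs: N = 20 events, eps = 50%, L13 = 5.8 fb^-1.
N, eps, L13 = 20, 0.50, 5.8
sigma_br = N / (eps * L13)  # inclusive sigma_13 x BR_gammagamma, in fb
print(round(sigma_br, 1))   # 6.9
```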
+The diphoton final state precludes a spin-one interpretation due to the Landau-Yang theorem [3, 4], and we will henceforth assume spin zero. We take the mass to be $M = 750$ GeV; small variations have no significant impact on our findings. The shape of the excess in ATLAS may indicate a width of about $\Gamma = 45$ GeV [1]. However, we will also
+---PAGE_BREAK---
+
+contemplate the case of smaller width below, and discuss how our main findings depend on this.
+
+A minimal dynamical input is necessary to interpret the result and incorporate 8 TeV LHC constraints. The width-to-mass ratio is small enough to justify a narrow-width approximation to the level of accuracy we aim at here. In the narrow-width limit, resonant scattering amplitudes factorize into production and decay vertices, which we parameterize by terms in a “Lagrangian” for the resonance $\mathcal{S}$,
+
+$$ \begin{aligned} \mathcal{L} = & -\frac{1}{16\pi^2}\frac{1}{4}\frac{c_\gamma}{M}SF^{\mu\nu}F_{\mu\nu} - \frac{1}{16\pi^2}\frac{1}{4}\frac{c_g}{M}SG^{\mu\nu,a}G_{\mu\nu}^a \\ & -\frac{1}{16\pi^2}\frac{1}{2}\frac{c_W}{M}SW^{-\mu\nu}W_{\mu\nu}^{+} - \frac{1}{16\pi^2}\frac{1}{4}\frac{c_Z}{M}SZ^{\mu\nu}Z_{\mu\nu} - \frac{1}{16\pi^2}\frac{1}{4}\frac{c_{Z\gamma}}{M}SZ^{\mu\nu}F_{\mu\nu} \\ & - \hat{c}_W m_W SW^{+\mu}W_{\mu}^{-} - \frac{1}{2}\hat{c}_Z m_Z SZ^{\mu}Z_{\mu} - \sum_f c_f S\bar{f}f. \end{aligned} \quad (2.2) $$
+
+In this parametrization, $M$ is the mass of the resonance $\mathcal{S}$. We emphasize that each term denotes a particular production and/or decay vertex and that the parameterization $\mathcal{L}$ does not make any assumptions about hierarchies of scales.¹ If $\mathcal{S}$ is a pseudoscalar, $\hat{c}_W = \hat{c}_Z = 0$, while all the other couplings lead to the same results as we present in this section for the scalar upon the replacements $S\bar{f}f \to iS\bar{f}\gamma^5 f$, $X^{\mu\nu}X'_{\mu\nu} \to X^{\mu\nu}\tilde{X}'_{\mu\nu}$, where $X^{(l)\mu\nu} = F^{l\mu\nu}$, $G^{\mu\nu,a}$, $W^{\pm\mu\nu}$, $Z^{\mu\nu}$ (up to minor differences in the phase-space factors from table 1 below).
+
+The total decay width of $\mathcal{S}$ imposes one constraint on the couplings,
+
+$$ \frac{\Gamma}{M} = \sum_i \frac{\Gamma_i}{M} = \sum_i n_i |c_i|^2 \approx 0.06, \quad (2.3) $$
+
+where the (dimensionless) coefficients $n_i$ are listed in table 1 for the modes considered in the present analysis. In particular, eq. (2.3) directly implies upper bounds on the magnitude of each $c_i$, since observations imply that the width cannot exceed the ATLAS-preferred value of 45 GeV by more than a factor of about two.
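The numerical entries of table 1 below follow directly from the phase-space expressions quoted there; a minimal sketch (the boson masses $m_W \approx 80.4$ GeV, $m_Z \approx 91.2$ GeV are assumed PDG-like inputs, not quoted in the text):

```python
import math

M, mW, mZ = 750.0, 80.4, 91.2   # GeV; assumed W/Z masses
loop = (4 * math.pi)**5         # from the 1/(16 pi^2)-normalized couplings in eq. (2.2)
rW, rZ = (mW / M)**2, (mZ / M)**2

n = {
    "aa":   1 / (16 * loop),    # gamma gamma
    "gg":   1 / (2 * loop),
    "qq":   3 / (8 * math.pi),  # per massless quark flavor
    "WhWh": math.sqrt(1 - 4*rW) * (1 - 4*rW + 12*rW**2) / (64 * math.pi * rW),
    "ZhZh": math.sqrt(1 - 4*rZ) * (1 - 4*rZ + 12*rZ**2) / (128 * math.pi * rZ),
    "WW":   math.sqrt(1 - 4*rW) * (1 - 4*rW + 6*rW**2) / (8 * loop),
    "ZZ":   math.sqrt(1 - 4*rZ) * (1 - 4*rZ + 6*rZ**2) / (16 * loop),
    "Za":   (1 - rZ)**3 / (32 * loop),
}
for mode, val in n.items():
    print(f"{mode:5s} {val:.3g}")
```

The printed values reproduce the $n_i$ column of table 1 to better than a percent.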
+
+It is possible and convenient to represent the observed signal in terms of the branching ratios to the production mode and to $\gamma\gamma$. If a single production mode, $p$, dominates, the number of signal events, $N$, in the 13 TeV analyses fixes the product
+
+$$ \text{BR}_{\gamma\gamma} \times \text{BR}_p = n_p \frac{M}{\Gamma} \frac{N}{\epsilon x_S^{13,p} \mathcal{L}_{13}} = \kappa_p \times \left(\frac{N}{20}\right) \left(\frac{45 \text{ GeV}}{\Gamma}\right) \left(\frac{5.8 \text{ fb}^{-1}}{\mathcal{L}_{13}}\right), \quad (2.4) $$
+
+where, for the production modes mediated by the various couplings from eq. (2.2),
+
+$$ \kappa_p \approx \{2.5,\ 5.5,\ 8.9,\ 96,\ 140,\ 310,\ 1600,\ 16000,\ 2400,\ 21000,\ 1400,\ 170\} \times 10^{-5} \quad (2.5) $$
+
+for $p = gg,\ u\bar{u},\ d\bar{d},\ s\bar{s},\ c\bar{c},\ b\bar{b},\ WW,\ \widetilde{WW},\ ZZ,\ \widetilde{ZZ},\ Z\gamma,\ \gamma\gamma$, respectively.
+
+We used the leading-order $\sqrt{s} = 13$ TeV production cross sections for $M = 750$ GeV,
+
+$$ \sigma_{13} = |c_p|^2 x_S^{13,p}, \quad (2.6) $$
+
+¹In particular, the “couplings” $c_i$ are on-shell form factors that generally include contributions from light particles and CP-even phases due to unitarity cuts. Contributions from particles with mass $\gg M$ can be matched to a local effective Lagrangian similar to eq. (2.2). We discuss examples in section 3.
+---PAGE_BREAK---
+
+| mode | width coefficient $n_i$ | $n_i$ (numerical) |
+|---|---|---|
+| $\gamma\gamma$ | $\frac{1}{16(4\pi)^5}$ | $1.99 \times 10^{-7}$ |
+| $gg$ | $\frac{1}{2(4\pi)^5}$ | $1.60 \times 10^{-6}$ |
+| $q_i\bar{q}_i$ | $\frac{3}{8\pi}$ | $0.119$ |
+| $\widetilde{WW}$ | $\frac{1}{64\pi}\sqrt{1-\frac{4m_W^2}{M^2}}\,\frac{M^2}{m_W^2}\left(1-\frac{4m_W^2}{M^2}+\frac{12m_W^4}{M^4}\right)$ | $0.404$ |
+| $\widetilde{ZZ}$ | $\frac{1}{128\pi}\sqrt{1-\frac{4m_Z^2}{M^2}}\,\frac{M^2}{m_Z^2}\left(1-\frac{4m_Z^2}{M^2}+\frac{12m_Z^4}{M^4}\right)$ | $0.154$ |
+| $WW$ | $\frac{1}{8(4\pi)^5}\sqrt{1-\frac{4m_W^2}{M^2}}\left(1-\frac{4m_W^2}{M^2}+\frac{6m_W^4}{M^4}\right)$ | $3.72 \times 10^{-7}$ |
+| $ZZ$ | $\frac{1}{16(4\pi)^5}\sqrt{1-\frac{4m_Z^2}{M^2}}\left(1-\frac{4m_Z^2}{M^2}+\frac{6m_Z^4}{M^4}\right)$ | $1.82 \times 10^{-7}$ |
+| $Z\gamma$ | $\frac{1}{32(4\pi)^5}\left(1-\frac{m_Z^2}{M^2}\right)^3$ | $9.54 \times 10^{-8}$ |
+
+**Table 1.** Width coefficients $n_i$, as defined in eq. (2.3), for the decay modes mediated by the couplings in eq. (2.2).
+
+where $x_S^{13,p}$ are listed in table 2.² A direct consequence of eq. (2.4) is a lower bound on the branching ratio into photons,
+
+$$
+\mathrm{BR}_{\gamma\gamma} > \kappa_p \left(\frac{N}{20}\right) \left(\frac{45 \text{ GeV}}{\Gamma}\right) \left(\frac{5.8 \text{ fb}^{-1}}{\mathcal{L}_{13}}\right). \quad (2.7)
+$$
+
+Note that this bound becomes more stringent if the width of the resonance is reduced.
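Eqs. (2.4), (2.5) and (2.7) are easy to verify numerically; a short sketch for three representative production modes, taking $n_p$ from table 1 and $x_S^{13,p}$ from table 2 (the pb → fb conversion is explicit):

```python
import math

# kappa_p = n_p (M/Gamma) N / (eps x_S^{13,p} L13), eq. (2.4)
M, Gamma = 750.0, 45.0
N, eps, L13 = 20.0, 0.5, 5.8
inputs = {  # (n_p from table 1, x_S^{13,p} from table 2 converted from pb to fb)
    "gg": (1.60e-6, 7.5e-3 * 1e3),
    "uu": (3 / (8 * math.pi), 250.0 * 1e3),
    "bb": (3 / (8 * math.pi), 4.4 * 1e3),
}
kappa = {p: n * (M / Gamma) * N / (eps * x * L13) for p, (n, x) in inputs.items()}
# Since BR_p <= 1, each kappa_p is also the BR_gammagamma lower bound of eq. (2.7).
print({p: f"{k:.2g}" for p, k in kappa.items()})
```

The result reproduces the corresponding $\kappa_p$ entries of eq. (2.5), namely $\{2.5, 5.5, 310\}\times 10^{-5}$.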
+Alternatively, the excess events fix the product of couplings
+
+$$
+|c_{\gamma} c_{p}| = \sqrt{n_{\gamma}^{-1} \frac{\Gamma}{M} \frac{N}{\epsilon x_{S}^{13,p} \mathcal{L}_{13}}} = \rho_{p} \times \sqrt{\left(\frac{N}{20}\right) \left(\frac{\Gamma}{45 \text{ GeV}}\right) \left(\frac{5.8 \text{ fb}^{-1}}{\mathcal{L}_{13}}\right)}, \quad (2.8)
+$$
+
+where
+
+$$
+\rho_p \approx \{530, 2.9, 3.7, 12, 15, 22, 28000, 84, 49000, 160, 52000, 12000\}. \quad (2.9)
+$$
+
+Importantly, increasing the production couplings, $c_p$, increases also the decay rates to the production modes. Since these compete with the $\gamma\gamma$ decay, $c_\gamma$ cannot be arbitrarily small. The smallest possible $|c_\gamma|$ corresponds to the situation where the total width is dominated by the production mode (which in particular implies $\Gamma_{\gamma\gamma} \ll \Gamma_p$, for production modes other than $\gamma\gamma$). Since the dependence on $|c_p|^2$ cancels between the production cross
+
+²Results for VBF production, here and below, involve the use of the *SWW* and *SZZ* vertices in eq. (2.2) implemented in MadGraph [5] using FeynRules [6]. This is correct in either of the following two situations: (i) the origin of the vertices is local physics, originating in scales ≫ M, such as in the dilaton case in section 3.1.4; in such a case the vertices can be interpreted as a unitary-gauge Lagrangian couplings and be used off shell; or (ii) the production process is dominated by nearly on-shell W, Z bosons (the same prerequisite under which the equivalent-boson approximation [7, 8] is justified).
+---PAGE_BREAK---
+
+| $\sqrt{s}$ [TeV] | [pb] | $gg$ | $u\bar{u}$ | $d\bar{d}$ | $s\bar{s}$ | $c\bar{c}$ | $b\bar{b}$ |
+|---|---|---|---|---|---|---|---|
+| 13 | $x_S^{13,p}$ | $7.5\cdot 10^{-3}$ | 250 | 150 | 14 | 9.8 | 4.4 |
+| 8 | $x_S^{8,p}$ | $1.7\cdot 10^{-3}$ | 95 | 57 | 3.7 | 2.3 | 0.96 |
+| 13/8 | $r_p$ | 4.4 | 2.6 | 2.7 | 3.9 | 4.2 | 4.6 |
+
+| $\sqrt{s}$ [TeV] | [pb] | $WW$ | $\widetilde{WW}$ | $ZZ$ | $\widetilde{ZZ}$ | $Z\gamma$ | $\gamma\gamma$ |
+|---|---|---|---|---|---|---|---|
+| 13 | $x_S^{13,p}$ | $2.7\cdot 10^{-6}$ | 0.30 | $8.7\cdot 10^{-7}$ | $8.3\cdot 10^{-2}$ | $7.8\cdot 10^{-7}$ | $1.4\cdot 10^{-5}$ |
+| 8 | $x_S^{8,p}$ | $6.5\cdot 10^{-7}$ | $6.9\cdot 10^{-2}$ | $2.1\cdot 10^{-7}$ | $1.9\cdot 10^{-2}$ | $2.1\cdot 10^{-7}$ | $4.7\cdot 10^{-6}$ |
+| 13/8 | $r_p$ | 4.1 | 4.3 | 4.2 | 4.2 | 3.7 | 2.9 |
+
+**Table 2.** Leading-order production cross sections for a resonance with $M = 750$ GeV for couplings $c_p = 1$, at the 13 TeV and 8 TeV LHC, and their ratio, $r_p$. We have used the leading-order PDF set NN23LO1 [9] for the predictions of production via $gg, q\bar{q}, WW, \tilde{W}\tilde{W}, ZZ$ and $\tilde{Z}\tilde{Z}$. For $Z\gamma$ initiated production we use the CT14QED PDF set [10] with photon PDF, while for $\gamma\gamma$ fusion we use the results of ref. [11], which also discusses the validity of $\gamma\gamma$ fusion results obtained with various PDF sets. For the $gg$ and $q\bar{q}$ modes, the process is $pp \to S$. For $WW, \tilde{W}\tilde{W}, ZZ$ and $\tilde{Z}\tilde{Z}$, both VBF ($pp \to S + jj$) and associated production ($pp \to S + W/Z$) contribute. The latter is small for $\tilde{W}\tilde{W}$ and $\tilde{Z}\tilde{Z}$ (approximately 1% of the inclusive value), but is significant for production via the field-strength $WW$ and $ZZ$ operators (approximately 20% and 30% for $WW$ and $ZZ$, respectively, at 13 TeV; see also ref. [12]). Finally, for production via $Z\gamma$, the processes $pp \to Sjj, pp \to SZ$, and $pp \to S\gamma$ contribute with relative weights 94%, 2.6%, and 3.6%, respectively.
+
+section and the diphoton branching fraction in this limit, this bound is independent of $\Gamma$. We hence have the following model-independent lower bounds on $c_\gamma$:
+
+$$|c_{\gamma}| > \sqrt{\frac{n_p}{n_{\gamma}} \frac{N}{\epsilon x_S^{13,p} \mathcal{L}_{13}}} = \{2.7, 4.1, 5.2, 17, 21, 31, 70, 220, 85, 250, 65, 23\} \times \sqrt{\left(\frac{N}{20}\right) \left(\frac{5.8 \text{ fb}^{-1}}{\mathcal{L}_{13}}\right)}. \quad (2.10)$$
+
+If, as often happens in concrete models, a single production mode dominates, eq. (2.10) can be used directly to identify how large $c_\gamma$ needs to be. For production via $\gamma\gamma$ fusion, saturating the lower bound in addition fixes the width to be about 75 MeV. In the case where several initial states contribute, a conservative lower bound is given by
+
+$$|c_{\gamma}| > \sqrt{\frac{n_g}{n_{\gamma}} \frac{N}{\epsilon x_S^{13,g} \mathcal{L}_{13}}} = 2.7 \times \sqrt{\left(\frac{N}{20}\right) \left(\frac{5.8 \text{ fb}^{-1}}{\mathcal{L}_{13}}\right)}. \quad (2.11)$$
+
+Importantly, eqs. (2.8) and (2.9) imply that photon fusion accounts for the entire excess once $|c_\gamma| \sim 110$, or less if the width is below 45 GeV. It then follows from eq. (2.10) that production via the couplings $\hat{c}_W$ and $\hat{c}_Z$ can never be an important production mechanism, so we disregard this possibility henceforth. (See also the discussion in ref. [13].)
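The coefficients in eqs. (2.8)–(2.10) can be checked with a few lines of arithmetic; a sketch for the $gg$ and $u\bar{u}$ modes ($n_p$ from table 1, $x_S^{13,p}$ from table 2 in fb):

```python
import math

M, Gamma = 750.0, 45.0
N, eps, L13 = 20.0, 0.5, 5.8
n_gamma = 1.99e-7                                      # diphoton coefficient, table 1
modes = {"gg": (1.60e-6, 7.5), "uu": (3 / (8 * math.pi), 2.5e5)}  # (n_p, x13 in fb)

rho, c_gamma_min = {}, {}
for p, (n_p, x13) in modes.items():
    signal = N / (eps * x13 * L13)                     # = |c_p|^2 BR_gammagamma
    rho[p] = math.sqrt((Gamma / M) * signal / n_gamma)     # eq. (2.8): |c_gamma c_p|
    c_gamma_min[p] = math.sqrt((n_p / n_gamma) * signal)   # eq. (2.10)

print(rho)          # rho_gg ~ 530, rho_uu ~ 2.9
print(c_gamma_min)  # |c_gamma| > ~2.7 (gg), ~4.1 (uu)
```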
+
+In figures 1 and 2, we plot the general relation between $|c_\gamma|$ and $|c_p|$ for the case of $N = 20$ excess events, switching on one production channel at a time. The mass and total
+---PAGE_BREAK---
+
+**Figure 1.** The black line corresponds to $N = 20$ signal events in the diphoton analyses for $M = 750$ GeV and $\Gamma = 45$ GeV when the resonance is produced from $gg$. Blue dashed lines are contours of fixed branching ratio to modes other than $\gamma\gamma$ or $gg$. The red-shaded area above the thick horizontal line is excluded by dijet resonance searches [14] due to decays to $gg$ alone. The shaded gray region corresponds to values of $c_g$, $c_\gamma$ that produce a width larger than 45 GeV. The vertical dashed red line indicates the $c_\gamma$ value for which photon fusion alone would account for all signal events even for $\Gamma = 45$ GeV, thus ruling out the region of larger $c_\gamma$ values.
+
+width are fixed at 750 and 45 GeV, respectively. The partial widths to diphotons, $\Gamma_{\gamma\gamma}$, and to the production mode, $\Gamma_p$, are assumed to be supplemented by decays to other possible final states, $\Gamma_{\text{other}}$, to make up the total width:
+
+$$ \Gamma_{\text{other}} \equiv \Gamma - \Gamma_{\gamma\gamma} - \Gamma_p. \qquad (2.12) $$
+
+Contours of fixed $BR_{\text{other}} \equiv \Gamma_{\text{other}}/\Gamma$ are shown in dashed blue. From the left panels of the figures it is evident that for a given $BR_{\text{other}}$ there exist two solutions, one with small and another with large $c_\gamma$. However, this second solution is generally incompatible with the upper limit $|c_\gamma| \lesssim 110$, unless $BR_{\text{other}}$ is close to 100%. The gray-shaded regions correspond to values of $c_p$ and $c_\gamma$ for which the total width is larger than 45 GeV. Horizontal red lines and the corresponding red-shaded regions indicate the parameter space excluded by 8 TeV dijet searches. We discuss them in the next subsection.
+
+## 2.2 Interplay with previous LHC searches
+
+Important additional information about the properties of the new particle can be obtained based on the non-observation of any of its decays in Run 1 of the LHC, in particular in the $20 \text{ fb}^{-1}$ of data collected at $\sqrt{s} = 8$ TeV.
+---PAGE_BREAK---
+
+**Figure 2.** Black lines correspond to $N = 20$ signal events in the diphoton analyses for $M = 750$ GeV and $\Gamma = 45$ GeV. Different dashing styles indicate the various production modes, $u\bar{u}$, $d\bar{d}$, $s\bar{s}$, $c\bar{c}$, and $b\bar{b}$. Blue dashed lines are contours of fixed branching ratio to modes other than $\gamma\gamma$ or the production mode. The red-shaded areas above the various horizontal lines, with dashing styles corresponding to the production modes, are excluded by dijet resonance searches [14] due to decays to the production mode alone. The vertical dashed red line indicates the $c_\gamma$ value for which photon fusion alone would account for all signal events even for $\Gamma = 45$ GeV, thus ruling out the region of larger $c_\gamma$ values.
+
+We first consider limits from the diphoton resonance searches. The most relevant limit for the broad resonance hypothesis preferred by the ATLAS excess, $\Gamma/M \approx 6\%$, is the CMS 95% CL limit
+
+$$ \sigma_8 \times \text{BR}_{\gamma\gamma} \lesssim 2.5 \text{ fb}, \qquad (2.13) $$
+
+which was derived for scalar resonances with $\Gamma/M = 10\%$ [15]. For a narrow resonance, which might be preferred by the CMS data, the same search sets the limit
+
+$$ \sigma_8 \times \text{BR}_{\gamma\gamma} \lesssim 1.3 \text{ fb}. \qquad (2.14) $$
+
+Somewhat weaker limits, of 2.5 and 2.0 fb, were obtained by ATLAS [16] and CMS [17], respectively, for RS gravitons with $k/\overline{M}_{\text{Pl}}=0.1$, which are also narrow.
+
+The compatibility of the observed excesses with the 8 TeV diphoton searches depends primarily on the parton luminosity ratio, $r_p$, listed in table 2,³ since the selection efficiencies
+
+³More precisely, $r_p$ is the ratio of cross sections. For VBF and associated production, it cannot be approximated by the parton luminosity ratio at $\sqrt{\hat{s}} = 750$ GeV (as was done in some of the recent papers that claimed $r_{\text{VBF}} \approx 2.5$), since in most events the partonic $\sqrt{\hat{s}}$ is significantly higher than 750 GeV because of the two forward jets or the additional electroweak boson.
+---PAGE_BREAK---
+
+
+| decay mode $i$ → | $gg$ | $q\bar{q}$ | $t\bar{t}$ | $WW$ | $ZZ$ | $hh$ | $Zh$ | $\tau\tau$ | $Z\gamma$ | $ee+\mu\mu$ |
+|---|---|---|---|---|---|---|---|---|---|---|
+| $(\sigma_8 \times \text{BR}_i)^{\text{max}}$ [fb] | 4000 [14] | 1800 [14] | 500 [18] | 60 [19] | 60 [20] | 50 [21] | 17 [22] | 12 [23] | 8 [24] | 2.4 [25] |
+
+$(\text{BR}_i/\text{BR}_{\gamma\gamma})^{\text{max}}$:
+
+| production $p$ | $gg$ | $q\bar{q}$ | $t\bar{t}$ | $WW$ | $ZZ$ | $hh$ | $Zh$ | $\tau\tau$ | $Z\gamma$ | $ee+\mu\mu$ |
+|---|---|---|---|---|---|---|---|---|---|---|
+| $gg$ | 2600 | 1200 | 320 | 38 | 38 | 32 | 11 | 7.7 | 5.1 | 1.5 |
+| $u\bar{u}$ | 1500 | 690 | 190 | 23 | 23 | 19 | 6.5 | 4.6 | 3.1 | 0.92 |
+| $d\bar{d}$ | 1600 | 700 | 200 | 23 | 23 | 20 | 6.7 | 4.7 | 3.1 | 0.94 |
+| $s\bar{s}$ | 2300 | 1000 | 280 | 34 | 34 | 28 | 9.6 | 6.8 | 4.5 | 1.4 |
+| $c\bar{c}$ | 2400 | 1100 | 300 | 36 | 36 | 30 | 10 | 7.3 | 4.8 | 1.5 |
+| $b\bar{b}$ | 2700 | 1200 | 340 | 40 | 40 | 34 | 11 | 8.1 | 5.4 | 1.6 |
+| $WW$ | 2400 | 1100 | 300 | 35 | 35 | 30 | 10 | 7.1 | 4.7 | 1.4 |
+| $ZZ$ | 2400 | 1100 | 310 | 37 | 37 | 31 | 10 | 7.3 | 4.9 | 1.5 |
+| $Z\gamma$ | 2200 | 980 | 270 | 33 | 33 | 27 | 9.2 | 6.5 | 4.3 | 1.3 |
+| $\gamma\gamma$ | 1700 | 760 | 210 | 25 | 25 | 21 | 7.1 | 5.0 | 3.4 | 1.0 |
+
+**Table 3.** Top: Bounds on 750 GeV resonances from 8 TeV LHC searches. Bottom: Derived bounds on ratios of branching fractions for different production channel assumptions. For $gg$ production, bounds on the branching fraction to $q\bar{q}$ are even tighter than indicated, since decays to $gg$ will necessarily also be present and enter the dijet searches. The same applies to branching fractions to $gg$ when the production is from quarks.
+
+of the searches are similar. The ATLAS+CMS excess, eq. (2.1), translates to
+
+$$
+\sigma_8 \times \text{BR}_{\gamma\gamma} = \frac{\sigma_{13} \times \text{BR}_{\gamma\gamma}}{r_p} \approx \left(\frac{N}{20}\right) \times \{1.6,\ 2.6,\ 2.6,\ 1.8,\ 1.7,\ 1.5,\ 1.7,\ 1.6,\ 1.8,\ 2.4\} \text{ fb} \tag{2.15}
+$$
+
+for $p = gg,\ u\bar{u},\ d\bar{d},\ s\bar{s},\ c\bar{c},\ b\bar{b},\ WW,\ ZZ,\ Z\gamma,\ \gamma\gamma$, respectively.
+
+We see that $N = 20$ excess events at 13 TeV are borderline compatible with the 8 TeV analyses, especially if the resonance is broad. The *gg*, heavy-quark and electroweak-boson production modes are somewhat favoured in this respect because their cross sections increase more rapidly with $\sqrt{s}$.
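Eq. (2.15) is simply the 13 TeV signal divided by the cross-section ratio; a sketch for four representative modes, with $r_p$ taken from table 2:

```python
# Eq. (2.15): implied 8 TeV diphoton cross section = (sigma_13 x BR) / r_p
sigma13_br = 6.9                                      # fb, eq. (2.1)
r_p = {"gg": 4.4, "uu": 2.6, "bb": 4.6, "aa": 2.9}    # table 2
sigma8_br = {p: sigma13_br / r for p, r in r_p.items()}
print(sigma8_br)  # gg ~1.6 fb, uu ~2.65 fb, bb ~1.5 fb, gammagamma ~2.4 fb
```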
+
+The ATLAS and CMS collaborations performed searches for resonant signals in many other final states as well. In table 3 we list the various two-body final states relevant to a neutral color-singlet spin-0 particle, and the corresponding 95% C.L. exclusion limits, $(\sigma_8 \times \text{BR}_i)^\text{max}$, from the 8 TeV searches. Searches for dijet resonances that employ b tagging, and would have enhanced sensitivity to b$\bar{b}$ final states, only address resonances heavier than 1 TeV [26, 27], but the limits from $q\bar{q}$ searches still apply to b$\bar{b}$. The recent 13 TeV dijet searches [28, 29] do not cover the mass range around 750 GeV at all, due to triggering limitations. We also note that the limits quoted in table 3 are approximate. In general, they do have some dependence on the width of the resonance, its spin, etc.
+
+Table 3 also lists the resulting constraints on the ratios of branching fractions of the particle, for different production channel assumptions. They are computed as
+
+$$
+\left( \frac{\mathrm{BR}_i}{\mathrm{BR}_{\gamma\gamma}} \right)^{\mathrm{max}} = r_p \frac{(\sigma_8 \times \mathrm{BR}_i)^{\mathrm{max}}}{\sigma_{13} \times \mathrm{BR}_{\gamma\gamma}}, \quad (2.16)
+$$
+---PAGE_BREAK---
+
+where we use eq. (2.1) and the cross section ratios $r_p$ from table 2.
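As a numerical sketch of eq. (2.16), the $gg$-production row of table 3 follows from the 8 TeV limits and $r_{gg} = 4.4$:

```python
# Eq. (2.16): (BR_i/BR_gammagamma)^max = r_p (sigma_8 x BR_i)^max / (sigma_13 x BR_gammagamma)
sigma13_br = 6.9                                    # fb, eq. (2.1)
r_gg = 4.4                                          # cross-section ratio, table 2
limits = {"tt": 500.0, "ZZ": 60.0, "ee+mumu": 2.4}  # (sigma_8 x BR_i)^max in fb, table 3
br_ratio = {i: r_gg * lim / sigma13_br for i, lim in limits.items()}
print(br_ratio)  # ~320, ~38, ~1.5, matching the gg row of table 3
```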
+
+There is always a constraint from decays to dijets or dibosons since we take the resonance to couple to either $gg$, $q\bar{q}$, or the electroweak gauge bosons, for production. Also, the production cross section needs to be relatively large to accommodate the excess without too large $c_\gamma$, so limits on dijet or diboson resonances may restrict part of the parameter space of a concrete realisation. For the case in which a single production channel dominates, we obtain upper limits on $BR_p$ and $c_p$ by saturating the corresponding dijet or diboson bounds:
+
+$$
+\text{BR}_p < \sqrt{n_p\, \frac{M}{\Gamma}\, \frac{(\sigma_8 \times \text{BR}_p)^{\text{max}}}{x_S^{8,p}}} \approx \{25,\ 19,\ 25,\ 99,\ 120,\ 190,\ 76,\ 94,\ 25,\ 4.2\}\,\% \times \left(\frac{45 \text{ GeV}}{\Gamma}\right)^{1/2},
+$$
+
+$$
+|c_p| < \{97,\ 0.31,\ 0.35,\ 0.70,\ 0.79,\ 0.99,\ 350,\ 560,\ 390,\ 110\} \times \left(\frac{\Gamma}{45 \text{ GeV}}\right)^{1/4}, \qquad (2.17)
+$$
+
+for $p = gg,\ u\bar{u},\ d\bar{d},\ s\bar{s},\ c\bar{c},\ b\bar{b},\ WW,\ ZZ,\ Z\gamma,\ \gamma\gamma$, respectively.
+
+The dijet-excluded regions in the $c_p-c_\gamma$ planes of figures 1 and 2 are the red-shaded areas.
+
+By combining eq. (2.17) with eqs. (2.4) and (2.8), we obtain a second lower bound on $BR_{\gamma\gamma}$ and $c_\gamma$,
+
+$$
+\text{BR}_{\gamma\gamma} > \{0.98,\ 2.8,\ 3.5,\ 9.7,\ 11,\ 16,\ 210,\ 260,\ 570,\ 400\} \times 10^{-4} \times \left(\frac{N}{20}\right) \left(\frac{5.8 \text{ fb}^{-1}}{\mathcal{L}_{13}}\right) \left(\frac{45 \text{ GeV}}{\Gamma}\right)^{1/2},
+$$
+
+$$
+|c_\gamma| > \{5.4,\ 9.2,\ 10,\ 17,\ 18,\ 22,\ 80,\ 88,\ 130,\ 110\} \times \sqrt{\left(\frac{N}{20}\right) \left(\frac{5.8 \text{ fb}^{-1}}{\mathcal{L}_{13}}\right)}\ \left(\frac{\Gamma}{45 \text{ GeV}}\right)^{1/4}, \qquad (2.18)
+$$
+
+for $p = gg,\ u\bar{u},\ d\bar{d},\ s\bar{s},\ c\bar{c},\ b\bar{b},\ WW,\ ZZ,\ Z\gamma,\ \gamma\gamma$, respectively.
+
+Depending on the width and the production mechanism, these bounds can be stronger or weaker than those in eqs. (2.7) and (2.10). Some comments are in order regarding the case of photon fusion dominance. In this case, eqs. (2.17) and (2.18) fix, for nominal width and signal strength, $|c_\gamma| \approx 110$. This is because here we impose an upper bound of 2.5 fb for the 8 TeV diphoton signal, which essentially agrees with the predicted 8 TeV signal, for nominal width and number of excess events. For the same reason, this value agrees with the one previously obtained based on saturating the 13 TeV signal with a single diphoton coupling.
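The chain of bounds leading from the dijet limit to eqs. (2.17) and (2.18) can be sketched numerically for $gg$ production (inputs from tables 1–3; $\kappa_{gg}$ and $\rho_{gg}$ are the rounded values quoted in eqs. (2.5) and (2.9)):

```python
import math

M, Gamma = 750.0, 45.0
n_gg = 1.60e-6              # table 1
x8_gg = 1.7e-3 * 1e3        # x_S^{8,gg} in fb (table 2)
dijet_max = 4000.0          # (sigma_8 x BR_gg)^max in fb (table 3)

br_gg_max = math.sqrt(n_gg * (M / Gamma) * dijet_max / x8_gg)  # eq. (2.17), ~25%
c_gg_max = math.sqrt(br_gg_max * (Gamma / M) / n_gg)           # eq. (2.17), ~97

# Combining with the signal requirement, eqs. (2.4) and (2.8):
kappa_gg, rho_gg = 2.5e-5, 530
br_aa_min = kappa_gg / br_gg_max       # eq. (2.18), ~1.0e-4
c_gamma_min = rho_gg / c_gg_max        # eq. (2.18), ~5.4
print(br_gg_max, c_gg_max, br_aa_min, c_gamma_min)
```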
+
+Figures 3 and 4 show, for $\Gamma = 45$ GeV and 1 GeV, respectively, the required branching fraction $BR_{\text{other}}$ to modes other than the production mode and $\gamma\gamma$ as a function of the branching fraction of the production mode, $BR_p$. The black lines correspond to $N=20$ signal events in the 13 TeV diphoton analyses. These plots highlight the importance of $BR_{\text{other}}$, which in most of the viable parameter space is the dominant branching fraction if the width $\Gamma$ is large. In blue lines, it is also shown to what extent $BR_{\text{other}}$ can be attributed to various decays into Standard Model particles, in view of the 8 TeV LHC bounds on such decays. For example, if apart from the decays to the production mode and $\gamma\gamma$ the resonance can decay only to $t\bar{t}$, the region above the corresponding blue line is excluded. The solid
+
+Figure 3. The required branching fraction into modes other than the production mode and $\gamma\gamma$, $BR_{other}$, as a function of the production mode branching fraction, for $N = 20$ and $\Gamma = 45$ GeV. Different plots correspond to different production mechanisms. Red regions are excluded by 8 TeV dijet resonance searches. Thin lines described in the legend show the maximal branching fractions allowed by 8 TeV searches into final states from table 3. The label “all” refers to the bound on the sum of all the final states from the table. For mixed dijet final states $(gg+q\bar{q})$, we show bands extending between curves obtained using the $gg$ and the $q\bar{q}$ dijet constraint.
+
+Figure 4. The required branching fraction into modes other than the production mode and $\gamma\gamma$, $BR_{other}$, as a function of the production mode branching fraction, for $N = 20$ and $\Gamma = 1$ GeV. Different plots correspond to different production mechanisms. Red regions are excluded by 8 TeV dijet resonance searches. Thin lines described in the legend show the maximal branching fractions allowed by 8 TeV searches into final states from table 3. The label “all” refers to the bound on the sum of all the final states from the table. For mixed dijet final states $(gg+q\bar{q})$, we show bands extending between curves obtained using the $gg$ and the $q\bar{q}$ dijet constraint.
+
blue lines labeled “all” correspond to saturating all the two-body final states listed in table 3, with the band interpolating between lines that use the $gg$ and the $q\bar{q}$ dijet bounds. The band is needed since the maximal possible $BR_{other}$ is generally achieved for a mixture of $gg$ and $q\bar{q}$ decays. Indeed, for a fixed $BR_{gg}$, one can add decays to quarks. For a fixed $BR_{q\bar{q}}$ (for a given flavor $q$), one can add decays to either gluons or other quark flavors, but gluons are preferable since they are less constrained. It is reasonable to expect the bound on such mixed final states to lie somewhere within the band. The same discussion applies to the $b\bar{b}$ band in the fixed $BR_{gg}$ case.
+
+We see that when the diphoton signal is achieved by a large coupling to gluons/quarks and a small coupling to photons (right-hand side of the plots), it may be difficult to obtain $\Gamma = 45$ GeV with decays to SM particles alone (if we neglect the possibility of large branching fractions to $\nu\bar{\nu}$ or multibody final states). On the other hand, in the case of a small coupling to gluons/quarks and a large coupling to photons (left side of the plots) there is no such limitation.
+
+## 3 Models
+
+We now turn to discuss concrete models. First, in section 3.1, we discuss interpretations of the resonance as a scalar that is a singlet under the SM gauge group. Next, in section 3.2, we consider the possibility of an $SU(2)_L$ doublet. Finally, in section 3.3 we study whether the resonance can be a heavy Higgs of the MSSM.
+
+### 3.1 SM-singlet scalar
+
+The possible interactions of a real singlet scalar with the SM fields, up to dimension-five terms, are
+
+$$ \begin{aligned} \mathcal{L}_{\text{singlet}} = & (\mu\Phi + \kappa_{H1}\Phi^2)H^\dagger H \\ & + \frac{\Phi}{f} \left( \kappa_g \frac{\alpha_s}{8\pi} G_{\mu\nu} G^{\mu\nu} + \kappa_Y \frac{\alpha_1}{8\pi} B_{\mu\nu} B^{\mu\nu} + \kappa_W \frac{\alpha_2}{8\pi} W_{\mu\nu} W^{\mu\nu} + \kappa_{H2} |D_\mu H|^2 + \kappa_{H3} |H|^4 \right) \\ & - \frac{\Phi}{f} \left( Y_{ij}^d H \overline{Q}_i d_j + Y_{ij}^u \tilde{H} \overline{Q}_i u_j + Y_{ij}^e H \overline{L}_i e_j + \text{h.c.} \right). \end{aligned} \quad (3.1) $$
+
We first discuss the renormalizable scenario in which only the terms on the first line are present. Next we consider a still-renormalizable model in which the diphoton and digluon couplings $c_g$ and $c_\gamma$ are generated by additional vectorlike fermions. We also analyze the pseudoscalar case, where the possible interactions differ in several important ways from eq. (3.1), as we will discuss. We then turn to scenarios where the nonrenormalizable couplings on the second and third lines are present, generated by physics above the scale $M$ and resulting in “local” contributions to the couplings $c_g$ and $c_\gamma$. We consider the dilaton scenario of ref. [30] (except that the dilaton is in addition to the Higgs), in which the $\kappa_{g,Y,W}$ are related to the $\beta$ functions in the low-energy effective theory. Finally, we discuss the possibility of production of the resonance by quarks due to the presence of the couplings in the last line of eq. (3.1).
+
+### 3.1.1 Renormalizable model
+
+We consider the case with only the renormalizable couplings in eq. (3.1). The $\mu$ term induces mixing of $\Phi$ with the SM-like Higgs field, and we obtain two mass eigenstates, $S$ and the observed 125 GeV Higgs $h$. This results in $S$ having tree-level couplings proportional to those of the SM Higgs but suppressed by a universal factor,
+
+$$g_i^S = s_\alpha g_i^{h_{SM}}, \quad (3.2)$$
+
where $s_\alpha \equiv \sin\alpha$, $\alpha$ being the mixing angle. This mixing also modifies the couplings of the observed 125 GeV Higgs boson with respect to what they would be in the SM: they are scaled by $\cos\alpha$ relative to their SM values. We must thus have $s_\alpha \lesssim 0.2$ to ensure that these modifications are compatible with Higgs coupling and, in particular, electroweak precision measurements [31]. The coupling to the light quarks is then negligible, so the production must be gluonic. At one-loop level, SM particles generate an effective $c_g$ and $c_\gamma$. To get the largest possible $c_g$ and $c_\gamma$ we take $s_\alpha = 0.2$ and obtain, using the expressions for the top- and $W$-loop contributions in ref. [32],
+
+$$|c_g| = 1.6, \quad |c_\gamma| = 0.09. \qquad (3.3)$$
+
+If we assume a 45 GeV width, we need $|c_g c_\gamma| \approx 530$ to accommodate the excess (cf. eq. (2.8)), so these numbers are far too small. Even if we allow for a smaller width, they still do not satisfy the bound $|c_\gamma| \gtrsim 2.7$ from eq. (2.10).
+
+Clearly we need large contributions from BSM states to $c_\gamma$, and either $c_g$ or the couplings $c_f$ to quarks, in eq. (2.2), to explain the size of the excess.
+
+### 3.1.2 Boosting $c_\gamma$, $c_g$ with new vectorlike fermions
+
+To investigate whether new colored and charged particles can generate large enough $c_{g,\gamma}$, we consider the minimal case of an additional vectorlike fermion, a triplet under QCD with electric charge $Q_f$, that couples to $\Phi$ as
+
+$$\mathcal{L} = -y_Q \Phi \bar{Q} Q - m_Q \bar{Q} Q. \qquad (3.4)$$
+
+The fermion loop generates $c_{g,\gamma}$. Any mixing of $\Phi$ with the Higgs doublet would dilute the vectorlike loop contributions (which would be generally larger than the SM loop contributions) to the diphoton and digluon couplings of the mass eigenstate $S$. Thus, we assume that the mixing, which is in any case constrained to be small, is negligible and the mass eigenstate is $S = \Phi$.
+
+The fermion $Q$ contributes [32]
+
+$$c_g = g_s^2 y_Q \tilde{A}_{1/2}(\tau_Q), \qquad (3.5)$$
+
+$$c_\gamma = 2 N_c Q_f^2 e^2 y_Q \tilde{A}_{1/2}(\tau_Q), \qquad (3.6)$$
+
+where $N_c = 3$ is the number of color states, $\tau_Q = M^2/(4m_Q^2)$ and
+
+$$\tilde{A}_{1/2}(\tau) = 2\tau^{1/2} A_{1/2}(\tau) = 4\tau^{-3/2}[\tau + (\tau - 1)f(\tau)], \qquad (3.7)$$
+
**Figure 5.** For a SM-singlet scalar (left) or pseudoscalar (right), contributions of a vectorlike color-triplet fermion with mass $m_Q$, charge $Q_f$ and Yukawa coupling $y_Q$ to the photonic, $c_γ$, and gluonic, $c_g$, couplings, scaled by $4/y_Q$. Black lines are contours of the scale $Λ$ at which the theory becomes strongly coupled if the value of $y_Q$ at the scale $M$ is fixed by requiring the correct signal size ($N=20$), assuming $\Gamma = 45$ GeV. The diagonal solid red lines stand for different values of $Q_f$; on each line, the corresponding scale of the Landau pole for the hypercharge interaction, $Λ'$, is shown. The horizontal blue dashed lines refer to different values of $m_Q$. We shade the regions where $y_Q^2(M)$ is within 20% of $[y_Q^{FP}(M)]^2$ (see eq. (3.12)). The dashed black line is the contour where $y_Q(M) = y_Q^{FP}(M)$.
+
+where
+
+$$ f(\tau) = \begin{cases} \arcsin^2 \sqrt{\tau} & \tau \le 1 \\ -\frac{1}{4} \left[ \ln \frac{1 + \sqrt{1 - \tau^{-1}}}{1 - \sqrt{1 - \tau^{-1}}} - i\pi \right]^2 & \tau > 1. \end{cases} \qquad (3.8) $$
+
+For $m_Q < M/2$ we obtain the constraint $y_Q \lesssim 0.7$ by requiring $\Gamma(S \to Q\bar{Q}) \lesssim 45$ GeV. This would not allow generating sufficiently large values of $c_{g,\gamma}$. We thus take $m_Q > M/2$. In figure 5 (left), we show the resulting $c_{g,\gamma}$ for a range of $m_Q$ and $Q_f$. The values of $c_{g,\gamma}$ for $y_Q = 4$ can be directly read off from the plot, and one can easily find them for other $y_Q$ values by keeping in mind that $c_{g,\gamma}$ scale linearly with $y_Q$.
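
As a concrete illustration of eqs. (3.5)–(3.8), the sketch below evaluates $c_g$ and $c_\gamma$ at one example point. The choices $m_Q = 1$ TeV, $Q_f = 2$, $y_Q = 4$ and the rough TeV-scale inputs $\alpha_s \approx 0.09$, $\alpha \approx 1/123$ are our assumptions, not values from the text.

```python
import cmath, math

M = 750.0  # resonance mass in GeV

def f_loop(tau):
    """Loop function of eq. (3.8), valid both below and above threshold."""
    if tau <= 1:
        return math.asin(math.sqrt(tau)) ** 2
    x = math.sqrt(1 - 1 / tau)
    return -0.25 * (cmath.log((1 + x) / (1 - x)) - 1j * math.pi) ** 2

def A_half(tau):
    """Scalar loop function A-tilde_{1/2} of eq. (3.7)."""
    return 4 * tau ** -1.5 * (tau + (tau - 1) * f_loop(tau))

def c_g_c_gamma(m_Q, Q_f, y_Q, alpha_s=0.09, alpha_em=0.0081):
    """c_g and c_gamma from one vectorlike color triplet, eqs. (3.5)-(3.6)."""
    tau = M ** 2 / (4 * m_Q ** 2)   # tau_Q as defined below eq. (3.6)
    gs2 = 4 * math.pi * alpha_s     # g_s^2
    e2 = 4 * math.pi * alpha_em     # e^2
    A = A_half(tau)
    return gs2 * y_Q * A, 2 * 3 * Q_f ** 2 * e2 * y_Q * A   # N_c = 3

cg, cgam = c_g_c_gamma(m_Q=1000.0, Q_f=2.0, y_Q=4.0)
```

Both couplings scale linearly with $y_Q$, as noted above, so other values can be read off by simple rescaling.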
+
The same fermions will generically also generate couplings to $W^+W^-$, $ZZ$ and $Z\gamma$. While a detailed study of the various possibilities is beyond the scope of this paper, we note that the bounds from table 3 are easily satisfied if, for example, the fermions are $SU(2)_L$ singlets. This is because they only contribute to the $\kappa_Y$ coupling from eq. (3.1), but not to $\kappa_W$, so one has $BR_{Z\gamma}/BR_{\gamma\gamma} = 2\tan^2\theta_W \approx 0.6$, $BR_{ZZ}/BR_{\gamma\gamma} = \tan^4\theta_W \approx 0.1$, and no contribution to $W^+W^-$. Since the Yukawa couplings $y_Q$ needed to reproduce the diphoton signal are relatively large, it is important to check to what extent the theory remains perturbative in the UV. We first consider the case in which we assume a 45 GeV width for the resonance. In some
+
regions of the parameter space, this implies a low cut-off for the theory at the scale at which $y_Q$ becomes strongly coupled. For $n_f$ color-triplet, $SU(2)_L$-singlet vectorlike fermions, the RGEs are given by⁴
+
+$$ \frac{dy_Q}{d \ln \mu} = \frac{y_Q}{16\pi^2} ((3 + 6n_f)y_Q^2 - 6Q_f^2 g'^2 - 8g_s^2), \quad (3.9) $$
+
$$ \frac{dg'}{d \ln \mu} = \frac{g'^3}{16\pi^2} \left( \frac{41}{6} + 4n_f Q_f^2 \right), \qquad (3.10) $$
+
+$$ \frac{dg_s}{d \ln \mu} = \frac{g_s^3}{16\pi^2} \left( -7 + \frac{2}{3} n_f \right) \qquad (3.11) $$
+
(see, e.g., ref. [34]); as said above, we will only consider the minimal case $n_f = 1$. We show in figure 5 (left) contours of the scale $\Lambda$ at which the theory becomes strongly coupled, assuming $y_Q$ to be just large enough at each point to explain the excess. We take as the strong-coupling scale $\Lambda$ the scale where either $\sqrt{N_c}y_Q$ or (only in some part of the region marked $\Lambda > 100$ TeV) $\sqrt{N_c}Q_f g'$ becomes $\mathcal{O}(4\pi)$ ($N_c = 3$).
+
For the theory to remain weakly coupled above a scale of roughly 10 TeV, a rather large value of the electric charge $Q_f$ is required, roughly above 3, for most of the shown parameter space. For a large charge, the negative contribution to the running proportional to $y_Q Q_f^2 g'^2$ in eq. (3.9) can actually push the cut-off up to 100 TeV, as shown in the top-right part of figure 5 (left). The RGE of $y_Q$ has a perturbative quasi-fixed point which, in the one-loop approximation of eq. (3.9), is given by
+
+$$ y_Q^{\text{FP}} = \frac{1}{3} \sqrt{8g_s^2 + 6g'^2 Q_f^2} \approx 1.15 \sqrt{1 + 0.066 Q_f^2}. \qquad (3.12) $$
+
Therefore, for an IR value (at the diphoton resonance mass scale) satisfying $y_Q < y_Q^{\text{FP}}$, the cutoff of the theory will likely be given by the Landau pole of the hypercharge interaction, which at high energies is driven by the rather large charge of the vectorlike quarks. On the other hand, for $y_Q > y_Q^{\text{FP}}$, the Yukawa coupling typically grows with energy, and the cutoff of the theory is set by the Landau pole of $y_Q$. We also note that, generically, for UV boundary conditions that satisfy $y_Q(\Lambda_{\text{UV}}) > y_Q^{\text{FP}}(\Lambda_{\text{UV}})$, we expect to have $y_Q^{\text{IR}} \sim y_Q^{\text{FP}}$. Since the one-loop $\beta$ function for $y_Q$ is small in this region, the impact of two-loop contributions may be nonnegligible. To indicate this, the region in which $y_Q^2(M)$ is within ±20% of $[y_Q^{\text{FP}}(M)]^2$ is shaded.
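
The running described above can be sketched with a simple one-loop integration of eqs. (3.9)–(3.11) for $n_f = 1$. The IR inputs $g_s \approx 1.06$, $g' \approx 0.36$ at $\mu = M$ and the crude Euler stepping are our rough choices, so the resulting scales are indicative only.

```python
import math

def landau_scale(yQ, Qf, M=750.0, gs=1.06, gp=0.36, steps_per_efold=1000):
    """Integrate the one-loop RGEs (3.9)-(3.11) with n_f = 1 upward in energy
    and return the scale where sqrt(3)*y_Q or sqrt(3)*Qf*g' reaches 4*pi."""
    dt = 1.0 / steps_per_efold
    t = 0.0
    while t < math.log(1e16 / M):  # give up beyond ~1e16 GeV
        if math.sqrt(3) * yQ >= 4 * math.pi or math.sqrt(3) * Qf * gp >= 4 * math.pi:
            return M * math.exp(t)
        beta_y = yQ / (16 * math.pi ** 2) * (9 * yQ ** 2 - 6 * Qf ** 2 * gp ** 2 - 8 * gs ** 2)
        beta_gp = gp ** 3 / (16 * math.pi ** 2) * (41.0 / 6.0 + 4 * Qf ** 2)
        beta_gs = gs ** 3 / (16 * math.pi ** 2) * (-7 + 2.0 / 3.0)
        yQ += beta_y * dt
        gp += beta_gp * dt
        gs += beta_gs * dt
        t += dt
    return M * math.exp(t)

# A large IR Yukawa (well above the quasi-fixed point) hits strong coupling
# within roughly a decade of the resonance mass:
Lam = landau_scale(yQ=3.0, Qf=1.0)
```

A smaller starting Yukawa postpones the pole, in line with the contours of figure 5.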
+
We note that after rescaling $y_Q$ appropriately to explain the signal, the partial width to photons and gluons never exceeds 5 GeV in the region $Q_f > 2/3$, so significant decays to other final states are needed to explain the 45 GeV width. The dijet constraint $|c_g| \lesssim 97$ (see eq. (2.17)) is satisfied in the above region assuming that there are no dijet final states other than $gg$.
+
We now consider the case in which the resonance is narrow and the width is dominated by decays to $gg$. We then only need $|c_\gamma| \approx 2.7$, according to eq. (2.10). In figure 6 (left) we
+
+⁴Note that the couplings $\mu_3 S^3$ and $\lambda_S S^4$ do not alter these RG equations. For a more comprehensive analysis that considers the running of the quartic coupling $\lambda_S$, see for instance ref. [33] where it is shown that if the value of $\lambda_S$ at the low scale is appropriately chosen, it can remain perturbative up to the Planck scale.
+
**Figure 6.** For a narrow-width SM-singlet scalar (left) or pseudoscalar (right), the scale of breakdown of perturbativity as a function of the color-triplet vectorlike fermion mass $m_Q$, assuming $y_Q$ at the scale $M$ is set to the value required to explain the excess. For $m_Q \lesssim 490$ GeV ($m_Q \lesssim 720$ GeV) with $Q_f = 4/3$ ($Q_f = 5/3$) for the scalar, or $m_Q \lesssim 700$ GeV ($m_Q \lesssim 1050$ GeV) with $Q_f = 4/3$ ($Q_f = 5/3$) for the pseudoscalar, $y_Q(M) \lesssim y_Q^{\text{FP}}$ in eq. (3.12) and the $\beta$ function is negative at the scale $M$, so the theory is perturbative.
+
+show the scale of breakdown of perturbativity assuming the required $y_Q$ to obtain $|c_\gamma| = 2.7$ from the loop of a vectorlike color-triplet fermion, as a function of its mass, $m_Q$. We see that the theory might be perturbative up to high scales even for much smaller electric charges.
+
+### 3.1.3 A pseudoscalar
+
We now consider the case of a pseudoscalar. Unlike the scalar, it cannot mix with the SM Higgs, and some couplings, such as those to longitudinal $W$'s and $Z$'s in eq. (3.1), are not allowed. The possible interaction terms are
+
+$$
+\begin{aligned}
+\mathcal{L} = & -\frac{1}{16\pi^2} \frac{c_B}{4M} S B^{\mu\nu} \tilde{B}_{\mu\nu} - \frac{1}{16\pi^2} \frac{c_W}{4M} S W^{a\mu\nu} \tilde{W}_{\mu\nu}^a - \frac{1}{16\pi^2} \frac{c_g}{4M} S G^{\mu\nu,a} \tilde{G}_{\mu\nu}^{a} \\
+& - \frac{S}{f} (iY_{ij}^d H \overline{Q}_i d_j + iY_{ij}^u \tilde{H} \overline{Q}_i u_j + iY_{ij}^e H \overline{L}_i e_j + \text{h.c.}) - y_Q S \bar{Q} i \gamma_5 Q \\
+\supset & -\frac{1}{16\pi^2} \frac{c_\gamma}{4M} S F^{\mu\nu} \tilde{F}_{\mu\nu} - \frac{1}{16\pi^2} \frac{c_g}{4M} S G^{\mu\nu,a} \tilde{G}_{\mu\nu}^{a} - y_Q S \bar{Q} i \gamma_5 Q,
+\end{aligned}
+\quad (3.13)
+$$
+
+where we have also included a coupling to a vectorlike quark $Q$. A pseudoscalar can appear in composite models as a pseudo-Nambu-Goldstone boson (PNGB) with sizeable couplings to photons and gluons because of anomalies [35], but here we will consider the possibility where $c_g$ and $c_\gamma$ are generated only from loops of the fermion $Q$. These loop contributions are given by [32]
+
+$$ c_g = 2g_s^2 y_Q \tilde{A}_{1/2}^{PS}(\tau_Q), \quad (3.14) $$
+
+$$ c_\gamma = 4N_c Q_f^2 e^2 y_Q \tilde{A}_{1/2}^{PS}(\tau_Q), \quad (3.15) $$
+
+where $\tau_Q = M^2/(4m_Q^2)$ and
+
+$$ \tilde{A}_{1/2}^{PS}(\tau) = 2\tau^{1/2} A_{1/2}^{PS}(\tau) = 2\tau^{-1/2} f(\tau), \quad (3.16) $$
+
+with $f(\tau)$ defined in eq. (3.8).
+
+Following the same procedure as in section 3.1.2, we obtain the results shown in the right plots of figures 5 and 6. They are qualitatively similar to the scalar case, but the theory can be perturbative up to somewhat higher scales for given $Q_f$ and $m_Q$.
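
The pseudoscalar's advantage can be made explicit numerically: combining the relative factor of 2 between eqs. (3.14)–(3.15) and eqs. (3.5)–(3.6) with the ratio of loop functions, the enhancement at equal $y_Q$ exceeds unity for all $m_Q > M/2$ and is largest near threshold. A small self-contained sketch (below-threshold branch, $\tau \le 1$, only):

```python
import math

def f_loop(tau):
    """Loop function of eq. (3.8) for tau <= 1 (m_Q above M/2)."""
    return math.asin(math.sqrt(tau)) ** 2

def A_scalar(tau):
    """A-tilde_{1/2} of eq. (3.7)."""
    return 4 * tau ** -1.5 * (tau + (tau - 1) * f_loop(tau))

def A_pseudo(tau):
    """A-tilde^PS_{1/2} of eq. (3.16)."""
    return 2 * tau ** -0.5 * f_loop(tau)

def coupling_ratio(tau):
    """Pseudoscalar-to-scalar ratio of c_g (or c_gamma) at equal y_Q,
    including the relative factor 2 between eqs. (3.14) and (3.5)."""
    return 2 * A_pseudo(tau) / A_scalar(tau)

r_near = coupling_ratio(0.9)  # m_Q slightly above M/2: largest enhancement
r_far = coupling_ratio(0.1)   # heavy m_Q: ratio approaches 3/2
```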
+
+Again, in figure 5 (right) we shade the region in which $y_Q^2(M)$ is within 20% of $[y_Q^{\text{FP}}(M)]^2$ of eq. (3.12). This is the region where the cut-off can be high but at the same time our one-loop computation may be less reliable. The dashed black line is the contour where $y_Q(M) = y_Q^{\text{FP}}(M)$.
+
+We thus find that for a $\Gamma \approx 45$ GeV singlet resonance (in both the scalar and pseudoscalar cases) the size of the excess suggests strongly coupled physics at a few TeV unless there are additional new particles around or below the mass of the resonance with large electric charge. For the narrow width case we still need to require additional charged states but the theory can be perturbative with a smaller electric charge. The hints of strongly coupled physics motivate us to examine in more detail a popular strongly coupled scalar candidate, the dilaton.
+
+### 3.1.4 The dilaton
+
+We consider a generalization of the dilaton scenario of ref. [30], taking the full SM, including the Higgs doublet, to be part of a conformal sector (see also [36]). The dilaton is the PNGB of the spontaneously broken scale invariance. The couplings of the dilaton in the electroweak broken phase are given by
+
+$$ \begin{aligned} \mathcal{L}^{\text{dil}} = & \frac{\Phi}{f} \left( (\partial_{\mu} h_{\text{SM}})^2 + 2 \left( m_W^2 W^+ W^- + \frac{m_Z^2}{2} Z^2 \right) \right. \\ & \left. - \sum_f m_f \bar{f} f + \frac{\kappa_g \alpha_s}{8\pi} G_{\mu\nu} G^{\mu\nu} + \frac{\kappa_\gamma \alpha}{8\pi} F_{\mu\nu} F^{\mu\nu} \right), \end{aligned} \quad (3.17) $$
+
where the first three terms arise from the operator $2\Phi|D_\mu H|^2/f$. Note that the dilaton also couples to the $W^\pm$ and $Z$ field strengths, but these operators have loop-suppressed Wilson coefficients, so their contribution is subdominant compared to that of $2\Phi|D_\mu H|^2/f$. Furthermore, there will be a mixing term with the SM Higgs, arising from the potential term $\Phi H^\dagger H$ and possibly also from kinetic mixing, so that we finally obtain two mass eigenstates, $S$ and the observed 125 GeV Higgs $h$, where
+
+$$ S = s_\alpha h_{\text{SM}} + c_\alpha \Phi, \quad (3.18) $$
+
+and $s_\alpha = \mathcal{O}(v/f)$ and $c_\alpha = 1$ up to $\mathcal{O}(v^2/f^2)$.⁵ We thus have for the couplings to the
+massive vector bosons and fermions
+
+$$
+g_{V,f}^{S} = \xi g_{V,f}^{h_{SM}}, \quad \text{with} \quad \xi = s_{\alpha} + \frac{v}{f} c_{\alpha}. \tag{3.19}
+$$
+
+For the dilaton, the couplings $\kappa_{g,\gamma}$ are completely determined using the low-energy theorems and scale invariance [30]. The dilaton coupling to gluons is
+
+$$
\frac{\Phi}{f} \frac{\alpha_s}{8\pi} \left( \sum_{\text{heavy}} b_0^i \right) G_{\mu\nu} G^{\mu\nu}, \qquad (3.20)
+$$
+
+where $b_0^i$ is the contribution of the field $i$ to the QCD $\beta$ function,
+
+$$
+\beta_i = \frac{b_0^i g_s^3}{16\pi^2}, \qquad (3.21)
+$$
+
+and the sum is over all particles heavier than the scale $f$. Scale invariance implies
+
+$$
+\sum_{\text{heavy}} b_0^i + \sum_{\text{light}} b_0^i = 0, \qquad (3.22)
+$$
+
+so that we finally obtain
+
+$$
+\kappa_g = - \sum_{\text{light}} b_0^i = 7. \tag{3.23}
+$$
+
+Similarly one obtains [30]
+
+$$
+\kappa_{\gamma} = -11/3 . \tag{3.24}
+$$
+
Note that if we do not include all the SM fields in the conformal sector but keep some of them elementary (e.g., ref. [38]), we cannot use the above arguments to fix $\kappa_{\gamma,g}$, which then become model dependent.
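
The values in eqs. (3.23) and (3.24) follow from standard one-loop $\beta$-function bookkeeping over the light SM fields. The sketch below checks this; the $-7$ for the $W$ boson is the familiar heavy-$W$ matching coefficient (the same one that appears in the $h \to \gamma\gamma$ low-energy theorem), which we take as an input.

```python
from fractions import Fraction as F

# QCD: b_0 = -11 + (2/3) n_q for n_q light quark flavors; all six SM quarks
# count as light relative to f ~ M.
b0_qcd_light = F(-11) + F(2, 3) * 6
kappa_g = -b0_qcd_light  # eq. (3.23)

# QED: each Dirac fermion adds (4/3) N_c Q^2; the W boson contributes -7
# in the heavy-mass matching (assumed standard input).
quarks = 3 * (F(3) * F(4, 9)) + 3 * (F(3) * F(1, 9))  # 3 up-type + 3 down-type
leptons = 3 * F(1)
b0_qed_light = F(4, 3) * (quarks + leptons) + F(-7)
kappa_gamma = -b0_qed_light  # eq. (3.24)
```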
+
+The requirement $f \gtrsim M$ implies
+
+$$
+|c_g| \lesssim 21, \quad |c_\gamma| \lesssim 0.7, \tag{3.25}
+$$
+
+where we have assumed $s_\alpha \sim v/f$ in estimating the small contribution from mixing. For
+$f \approx M$, the total width, dominated by decays to $W^+W^-$, $ZZ$, $hh$ and $t\bar{t}$, is $\Gamma \approx 30$ GeV.
+For this width, eq. (2.8) requires $|c_g c_\gamma| \approx 430$ to explain the excess. This cannot be
+obtained with the numbers in eq. (3.25). We also note that VBF production is negligible,
+considering the requirement in eq. (2.10).
+
+Thus, we need additional large contributions to the QCD and QED $\beta$ functions below
+the scale $f$,
+
+$$
+\Delta c_g = \frac{2g_s^2 \Delta b_{QCD} M}{f}, \qquad (3.26)
+$$
+
+$$
+\Delta c_{\gamma} = \frac{2e^2 \Delta b_{QED} M}{f}. \qquad (3.27)
+$$
+
+⁵Note that in the presence of kinetic mixing the transformation from $(\Phi, h_{\mathrm{SM}})$ to the mass eigenstates is not orthogonal, and thus $s_\alpha$ and $c_\alpha$ cannot be expressed as a sine and cosine of a mixing angle (see for instance pp. 56-57 in ref. [37]).
+
+For $n_f$ additional vectorlike colour-triplet, SU(2)$_L$-singlet fermions we have
+
+$$ \Delta b_{\text{QED}} = \frac{4N_c n_f}{3} Q_f^2, \qquad (3.28) $$
+
+$$ \Delta b_{\text{QCD}} = \frac{2}{3} n_f, \qquad (3.29) $$
+
+where $Q_f$ is the electric charge of the fermion. Clearly to enhance $c_g$ and $c_\gamma$ to the extent that $|c_g c_\gamma| \approx 430$, we need either a very large charge $Q_f$ or a very large number of flavors $n_f$ of additional fermions below the TeV scale. This scenario thus appears contrived and we do not investigate it further.
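
The qualifier "very large" can be made quantitative. Inserting eqs. (3.26)–(3.29) into the requirement $|c_g c_\gamma| \approx 430$ quoted above and taking $f \approx M$ gives, schematically (the rough TeV-scale inputs $g_s^2 \approx 1.1$, $e^2 \approx 0.1$ are our assumptions):

```python
import math

gs2, e2 = 1.1, 0.1   # rough values of g_s^2 and e^2 near the TeV scale (assumed)
target = 430.0       # |c_g c_gamma| required for the ~30 GeV width, eq. (2.8)

def required_charge(n_f, M_over_f=1.0):
    """Charge Q_f needed so that Delta c_g * Delta c_gamma = target,
    using eqs. (3.26)-(3.29) with N_c = 3."""
    # Delta c_g = 2 gs2 (2/3) n_f (M/f);  Delta c_gamma = 2 e2 * 4 n_f Qf^2 (M/f)
    prefactor = (2 * gs2 * (2.0 / 3.0) * n_f * M_over_f) * (2 * e2 * 4 * n_f * M_over_f)
    return math.sqrt(target / prefactor)

Qf_needed = required_charge(n_f=1)  # O(10-20) electric charge for a single flavor
```

For a single extra flavor the required charge comes out near 20, while pushing it down to order one requires roughly ten or more flavors, which is the sense in which the scenario is contrived.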
+
+### 3.1.5 Production by quarks
+
+Finally, we discuss the possibility of production of $\mathcal{S}$ from quarks via the dimension-five operators in eq. (3.1). Thus we consider the Lagrangian terms⁶
+
+$$ \mathcal{L} \supset -\frac{S}{f} (Y_{ij}^d H \overline{Q}_i d_j + Y_{ij}^u \tilde{H} \overline{Q}_i u_j + h.c.) + \kappa_Y \frac{\alpha_1 S}{8\pi f} B_{\mu\nu} B^{\mu\nu}. \qquad (3.30) $$
+
+We want to find a conservative bound on the maximal energy scale up to which the EFT in eq. (3.30) could be consistent while being completely agnostic about the UV theory. We will consider scenarios in which $\mathcal{S}$ couples primarily to a single quark flavor $f$ and set the corresponding $Y_{ij}^{u,d} \equiv Y_f$, as well as $\kappa_Y$, to their (conservative) perturbativity bounds as follows:
+
+$$ \frac{Y_f}{f} \rightarrow \frac{16\pi^2/\sqrt{N_c}}{\Lambda}, \qquad \kappa_Y \frac{\alpha_1}{8\pi f} \rightarrow \frac{4\pi}{\Lambda}, \qquad (3.31) $$
+
+so that $\Lambda$ can be identified with the maximum scale up to which the theory could be predictive.⁷ The couplings $c_f$ and $c_\gamma$ in eq. (2.2) can be expressed in terms of $\Lambda$ as follows,
+
$$ c_f = \frac{16\pi^2 v}{\sqrt{2 N_c}\, \Lambda}, \qquad c_\gamma = 256\pi^3 \cos^2 \theta_W \frac{M}{\Lambda}. \qquad (3.32) $$
+
+One can find an absolute lower bound on $c_f$ by requiring that the production cross section of $\mathcal{S}$ is at least 6.9 fb in accordance with eq. (2.1). For an $f\bar{f}$ initial state with $f = \{u,d,s,c,b\}$ this gives the bounds
+
+$$ \begin{aligned} |c_u| &\gtrsim 0.005 \Rightarrow \Lambda \lesssim 3200 \text{ TeV}, \\ |c_d| &\gtrsim 0.007 \Rightarrow \Lambda \lesssim 2300 \text{ TeV}, \\ |c_s| &\gtrsim 0.022 \Rightarrow \Lambda \lesssim 720 \text{ TeV}, \\ |c_c| &\gtrsim 0.026 \Rightarrow \Lambda \lesssim 610 \text{ TeV}, \\ |c_b| &\gtrsim 0.040 \Rightarrow \Lambda \lesssim 400 \text{ TeV}, \end{aligned} \qquad (3.33) $$
+
+⁶Generating the coupling to photons via the $W_{\mu\nu}^a W^{a\mu\nu}$ operator instead of $B_{\mu\nu} B^{\mu\nu}$ would require a lower cut-off.
+
+⁷We can think of the following crude picture of how such a large diphoton coupling could be realised. Let us add a vector-like fermion $Q$ as considered in section 3.1.2, but with mass $m_Q \sim \Lambda$, charge $Q_f \sim 4\pi/g_1$ and Yukawa coupling to $\mathcal{S}$ of $y_Q \sim 4\pi$. This would generate the diphoton coupling in (3.1) with $f \to \Lambda$ and $\kappa_Y$ of order $(4\pi)^2/\alpha_1$, which indeed corresponds to the $4\pi$ in (3.31). This particular scenario does not require any exotic states below $\Lambda$, however generates a very large step in the hypercharge $\beta$ function, which leads to a UV Landau pole for $g_1$, and hence for the entire Standard Model, at or close to the scale $\Lambda$.
+
+respectively. From eq. (2.10), the coupling to photons, for an $f\bar{f}$ initial state, needs to
+satisfy
+
+$$
+\begin{align*}
+|c_{\gamma}| \gtrsim 4.1 &\Rightarrow \Lambda \lesssim 1100 \text{ TeV}, \\
+|c_{\gamma}| \gtrsim 5.2 &\Rightarrow \Lambda \lesssim 880 \text{ TeV}, \\
+|c_{\gamma}| \gtrsim 17 &\Rightarrow \Lambda \lesssim 270 \text{ TeV}, \tag{3.34} \\
+|c_{\gamma}| \gtrsim 20 &\Rightarrow \Lambda \lesssim 230 \text{ TeV}, \\
+|c_{\gamma}| \gtrsim 30 &\Rightarrow \Lambda \lesssim 150 \text{ TeV},
+\end{align*}
+$$
+
+for $f = \{u, d, s, c, b\}$ respectively, thus giving somewhat stronger bounds than eq. (3.33).
+The lower bound on $|c_\gamma|$ above can be saturated only in the narrow width case with the
+additional requirement that $c_f$ is a few times higher than the corresponding bound in
+eq. (3.33) so that the width is dominated by the decays to the production mode (see the
+discussion below eq. (2.10)). This would require that $\Lambda$ is a few times lower than the bound
+in eq. (3.33) which roughly coincides with the values in eq. (3.34).
+
+If we assume a 45 GeV width for the resonance, eq. (2.8) must be satisfied, i.e. we
+must have
+
+$$
+\begin{align*}
+|c_{\gamma}c_u| &\approx 2.9 \Rightarrow \Lambda \approx 160 \text{ TeV}, \\
+|c_{\gamma}c_d| &\approx 3.7 \Rightarrow \Lambda \approx 140 \text{ TeV}, \\
+|c_{\gamma}c_s| &\approx 12 \Rightarrow \Lambda \approx 80 \text{ TeV}, \tag{3.35} \\
+|c_{\gamma}c_c| &\approx 15 \Rightarrow \Lambda \approx 70 \text{ TeV}, \\
+|c_{\gamma}c_b| &\approx 22 \Rightarrow \Lambda \approx 60 \text{ TeV},
+\end{align*}
+$$
+
+for an $f\bar{f}$ initial state with $f = \{u,d,s,c,b\}$, respectively, where to obtain the values for
+the cut-off we have used eq. (3.32).
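
The cutoff values quoted in eq. (3.33) follow from inverting the first relation in eq. (3.32); a quick numerical check (with $v = 246$ GeV, $N_c = 3$):

```python
import math

v, Nc = 246.0, 3.0  # Higgs VEV in GeV; number of colors

def cutoff_TeV(c_f):
    """Invert c_f = 16 pi^2 v / (sqrt(2 Nc) Lambda), eq. (3.32), for Lambda."""
    return 16 * math.pi ** 2 * v / (math.sqrt(2 * Nc) * c_f) / 1000.0

# Reproduces (to the quoted rounding) the values in eq. (3.33):
lam_u = cutoff_TeV(0.005)  # ~3200 TeV for the u-quark bound
lam_b = cutoff_TeV(0.040)  # ~400 TeV for the b-quark bound
```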
+
+Note that, in the quark mass basis, the off-diagonal elements of the Y matrices generate
terms like $c_{ij}S\bar{f}_i f_j$ with $i \neq j$. Tree-level FCNC constraints (see, e.g., ref. [39]) restrict
+these off-diagonal $c_{ij}$ to be $\lesssim O(10^{-4})$ for couplings involving the first two generations
+and $\lesssim O(10^{-3})$ for couplings involving the b quark, thus much smaller than the values
+of the diagonal couplings in eq. (3.33). This scenario would thus be interesting from a
+flavor model-building point of view as one must find a way to suppress the off-diagonal
+couplings with respect to the diagonal ones. For instance, notice that if S is a complex
+scalar, the coupling $c_{ij}S\bar{f}_i f_j$ has an accidental flavour symmetry that forbids additional
+flavor violation, i.e. S can be formally viewed as a flavon field that carries an i-j flavor
+charge and thus cannot mediate $\Delta F = 2$ flavor violation. In such a case, any flavor
+violation induced by this coupling is proportional to powers of the $c_\gamma$ coupling and/or the
+SM Yukawas that do not respect this accidental symmetry. The $\Delta F = 2$ flavor violation
+induced by the coupling $c_{ij}$ would thus be suppressed by loop factors and/or SM Yukawas.
+
Let us now assume that a mechanism for alignment exists, thus eliminating any tree-level FCNC. In this case flavor violation can arise only at higher loop order. If the production is dominated either by up- or down-type $S$ couplings, we can assume that only one of $Y^u$ or $Y^d$ is non-zero and that it is aligned with the quark mass basis. For instance let us consider
+
the case where, in the down mass basis, the production is dominated by a single coupling of $S$, e.g. $Y_d = \text{diag}(y_d, 0, 0)$. In such a case flavor violation has to involve the CKM matrix, $V_{\text{CKM}}$. Spurionically, the flavor violating bilinear coupling between two quark doublets is given by $V_{\text{CKM}}^{\dagger} Y_d^{\dagger} Y_d V_{\text{CKM}}$. This spurion needs to be squared in order to generate the most dangerous $\Delta F = 2$ contributions, in this case to $D - \bar{D}$ mixing processes. As $Y_d$ is the coefficient of a dimension-five operator in the unbroken phase, each coupling is accompanied by an $S$ field and thus the term $Y_d^{\dagger} Y_d$ is generated only at one loop by integrating out $S$. This holds similarly for the CKM insertions, which can only arise from internal $W$ lines. Consequently, the leading contribution to $\Delta F = 2$ (involving quark doublets) would be suppressed by a three-loop factor and is thus negligibly small. There are possible two-loop contributions (mixing doublet and singlet quarks) that are, however, suppressed by the light-quark masses and are thus even smaller. Finally, an even stronger (and phenomenologically not necessary) protection is obtained by assuming alignment and $U(2)$ universality in the form of $Y_d = \text{diag}(y_d, y_d, 0)$ or $Y_u = \text{diag}(y_u, y_u, 0)$. In such a case the contributions arise solely via the mixing with the third generation.
+
+## 3.2 Excluding the general pure 2HDM
+
+In this part we discuss the possibility of explaining the excess within the framework of the general two-Higgs-doublet model (2HDM), assuming no additional states beyond the additional doublet.
+
+It is useful to describe the theory in the so-called Higgs basis [40], where only one of the two doublets, which corresponds to the SM Higgs, acquires a VEV. The SM-like Higgs doublet, $H_a$, has a VEV equal to 246 GeV and a CP-even component with exactly SM-like couplings, whereas the other doublet, $H_b$, which contains the heavy CP-even and CP-odd states, as well as the charged states, has a vanishing VEV. The coupling
+
+$$-\lambda_V(H_b^\dagger H_a)(H_a^\dagger H_a) + \text{h.c.} \tag{3.36}$$
+
+causes a misalignment between the Higgs basis and the CP-even mass basis [41] that is of order $\lambda_V v^2/M^2$. If $\lambda_V \lesssim \mathcal{O}(1)$ we are in the so-called decoupling limit and can think of the ratio $\epsilon \equiv \lambda_V v^2/M^2$ as our formal expansion parameter (see ref. [40] and references therein for relevant discussions). The above interaction term leads to couplings of the heavy CP-even scalar, $H^0$, and the pseudoscalar, $A^0$, to the electroweak gauge bosons, $VV$. At the same time, it causes deviations from SM values in the $h^0VV$ couplings, $h^0$ being the lighter CP-even state. The value of $\lambda_V$ is thus constrained by electroweak precision measurements. Using the expressions in ref. [42] we find the constraint
+
+$$|\lambda_V| \lesssim 3, \tag{3.37}$$
+
+which shows that we are in fact in the decoupling limit as $\epsilon \lesssim 0.3$.
+
One interesting consequence of the fact that $v^2/M^2 \ll 1$ is that the mass splitting between the neutral CP-even state, $H^0$, and the odd one, $A^0$, which is due to the coupling
+
+$$-\frac{\lambda_5}{2}(H_a^\dagger H_b)^2 + \text{h.c.}, \tag{3.38}$$
+
+is generically small,
+
+$$
+\delta m = |m_{H^0} - m_{A^0}| \sim \frac{|\lambda_5| v^2}{2M} \sim 40 \text{ GeV} \quad (3.39)
+$$
+
+for $\lambda_5 = 1$. As $\delta m$ is compatible with the width of the excess, one may contemplate
+the possibility that the observed signal actually arises due to the presence of these two
+neighbouring states.
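
The estimate in eq. (3.39) is a one-line numerical check (with $v = 246$ GeV, $M = 750$ GeV):

```python
v, M = 246.0, 750.0  # GeV
lam5 = 1.0
delta_m = lam5 * v ** 2 / (2 * M)  # eq. (3.39): roughly 40 GeV for lambda_5 = 1
```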
+
+We will now show that the general pure 2HDM cannot account for the observed excess.
+We note that in the Higgs basis the couplings of the heavy states to the light quarks can
+differ from those of the SM Higgs, as was exploited in ref. [43]; this is because $H_b$ acquires
+no VEV and thus its couplings to the SM fermions do not contribute to their masses. In
+particular, the couplings of $H_b$ to the light quarks might be as large as allowed by the
+model-independent constraints in figure 2. Thus, we consider production through either
+quark-antiquark or gluon fusion. In addition, as the signal might be accounted for by the
+presence of either $H^0$ or $A^0$, or both, we should consider the production and decay of each
+of these. We emphasise that to be conservative we do not require the width to be equal
+to 45 GeV as the excess could be explained by two narrower states separated in mass by
+a few tens of GeVs, which would be consistent with the reported diphoton spectrum. We
+denote by $N_H$ and $N_A$ the number of events from the production and decay of $H^0$ and $A^0$,
+respectively. In the CP limit we can assume no interference between these two production
+modes.
+
+**Gluon-gluon production** Assuming that the masses of $A^0$ and $H^0$ are less than 45 GeV apart, both states would contribute to the excess. For the total width of the resonance to not exceed 45 GeV, eq. (2.3), it is necessary that
+
+$$
+Y^2 \equiv \sum_f \beta_f c_f^2 \lesssim 0.5, \qquad (3.40)
+$$
+
+where $c_f$ is the coupling of $H^0$ and $A^0$ to the SM fermions,
+
+$$
+-c_f \bar{f}_L (H^0 + i A^0) f_R + h.c. \tag{3.41}
+$$
+
+and $\beta_f = (1 - 4m_f^2/M^2)^{1/2}$, with $m_f$ being the fermion mass. Taking into account the steep decrease of the fermion loop functions $\tilde{A}_{1/2}$ and $\tilde{A}_{1/2}^{PS}$, defined respectively in eqs. (3.7) and (3.16), with decreasing quark mass, we find that for a fixed partial width to fermions (and thus fixed $Y^2$), the fermionic loop contributions to $c_\gamma$ and $c_g$ are maximized for $c_t/c_{f'} \gg 1$, where $c_t$ is the coupling to the top and $c_{f'}$ are couplings to fermions other than the top.
+
+It is possible to bound the contributions from $A^0$ because, unlike for $H^0$, its couplings
+to the photons and gluons are only due to fermion loops. The total number of events from
+pseudoscalar decays can be expressed using eq. (2.4) as
+
+$$
+N_A = 8.0 \times 10^5 \times \mathrm{BR}(A^0 \rightarrow gg) \times \mathrm{BR}(A^0 \rightarrow \gamma\gamma) \times \frac{\Gamma(A^0)}{45 \text{ GeV}} . \quad (3.42)
+$$
+
+Using the inequalities
+
+$$
+\mathrm{BR}(A^0 \to gg) < \frac{\Gamma(A^0 \to gg)}{\Gamma(A^0 \to f\bar{f})}, \quad \mathrm{BR}(A^0 \to \gamma\gamma) < \frac{\Gamma(A^0 \to \gamma\gamma)}{\Gamma(A^0 \to f\bar{f})} \quad (3.43)
+$$
+
+along with the condition $\Gamma(A^0) \lesssim 45$ GeV we then obtain
+
+$$N_A \lesssim 8.0 \times 10^5 \times \frac{\Gamma(A^0 \to gg) \times \Gamma(A^0 \to \gamma\gamma)}{\Gamma(A^0 \to f\bar{f})^2}. \quad (3.44)$$
+
+The partial widths are given by
+
+$$\Gamma(A^0 \to gg) = \frac{\alpha_s^2 M}{32\pi^3} \left| \sum_f c_f \tilde{A}_{1/2}^{PS}(\tau_f) \right|^2, \qquad (3.45)$$
+
+$$\Gamma(A^0 \to \gamma\gamma) = \frac{\alpha^2 M}{64\pi^3} \left| \sum_f c_f N_c Q_f^2 \tilde{A}_{1/2}^{PS}(\tau_f) \right|^2, \quad (3.46)$$
+
+where $\tau_f = M^2/(4m_f^2)$ and $\tilde{A}_{1/2}^{PS}$ is defined in eq. (3.16). Taking $c_t \gg c_{f'}$, as explained above, one can now evaluate the upper bound in eq. (3.44),
+
+$$N_A \lesssim 0.02, \qquad (3.47)$$
+
+where we have used $\Gamma(A^0 \to t\bar{t}) = \frac{3}{8\pi}\sqrt{1-4m_t^2/M^2}Mc_t^2 \approx 0.11 Mc_t^2$. We thus conclude that the pseudoscalar contributions are negligibly small in this case.
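
The numerical coefficient in the $t\bar{t}$ width quoted above can be checked directly (a sketch; $m_t \approx 173$ GeV is an assumption of this estimate):

```python
import math

# Gamma(A0 -> t tbar) = 3/(8 pi) * sqrt(1 - 4 m_t^2/M^2) * M * c_t^2
m_t, M = 173.0, 750.0   # GeV (assumed top mass and resonance mass)
coeff = 3 / (8 * math.pi) * math.sqrt(1 - 4 * m_t**2 / M**2)
print(round(coeff, 2))  # 0.11, the coefficient used in the text
```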
+
+We must then attribute all 20 signal events to $H^0$ decays,
+
+$$20 = N_H < 1.8 \times 10^4 \text{ GeV}^{-1} \times \frac{\Gamma(H^0 \to gg)}{\Gamma(H^0 \to f\bar{f})} \times \Gamma(H^0 \to \gamma\gamma), \quad (3.48)$$
+
+where we have used eq. (2.4) and $\Gamma(H^0 \to f\bar{f}) < \Gamma(H^0)$. Now, as above, we take $c_t \gg c_{f'}$ in
+
+$$\Gamma(H^0 \to gg) = \frac{\alpha_s^2 M}{128\pi^3} \left| \sum_f c_f \tilde{A}_{1/2}(\tau_f) \right|^2 \quad (3.49)$$
+
+to maximise the ratio $\Gamma(H^0 \to gg)/\Gamma(H^0 \to f\bar{f})$, which becomes independent of $c_f$. Using $\Gamma(H^0 \to \gamma\gamma) = 1.99 \times 10^{-7} M|c_\gamma|^2$ from table 1 we then obtain the requirement
+
+$$|c_\gamma| \gtrsim 66. \qquad (3.50)$$
+
+As we will soon show, such large values of $|c_\gamma|$ are impossible to obtain in a pure 2HDM.
+
+**Quark-antiquark production** As argued above, in general the heavy states can have sizeable couplings to the first two generations. Ignoring possible severe constraints from flavor physics we find that the weakest bound is from production due to $u\bar{u}$. Again we bound the pseudoscalar contribution first. Using eq. (2.4) we have
+
+$$N_A = 8.1 \times 10^3 \text{ GeV}^{-1} \times \frac{\Gamma(A^0 \to u\bar{u}) \Gamma(A^0 \to \gamma\gamma)}{\Gamma(A^0)}. \quad (3.51)$$
+
+As the up-quark loop contributes negligibly to $\Gamma(A^0 \to \gamma\gamma)$ compared with the top-quark loop, the above expression is proportional to $c_u^2 c_t^2 / Y^2$ assuming that all the other fermionic
+
+couplings are zero. Keeping the bound from eq. (3.40) in mind, this is maximised for $c_u^2 = 0.25$ and $c_t^2 = 0.28$. For these values the pseudoscalar contribution yields less than one event. Thus $H^0$ must account for all 20 events. From eq. (2.10) we have the requirement
+
+$$|c_\gamma| \gtrsim 4.1. \qquad (3.52)$$
+
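The maximisation over $c_u^2$ and $c_t^2$ quoted above can be verified (a sketch; $m_t = 173$ GeV is assumed, and only the $u$ and $t$ couplings are taken non-zero):

```python
import math

# Maximise c_u^2 c_t^2 / Y^2 with Y^2 = beta_u c_u^2 + beta_t c_t^2 <= 0.5.
beta_t = math.sqrt(1 - 4 * 173.0**2 / 750.0**2)  # ~ 0.887
beta_u = 1.0                                      # up-quark mass negligible

# The objective is linear under a common rescaling of the c^2, so the maximum
# lies on the boundary Y^2 = 0.5, where c_u^2 c_t^2 is maximised by equal
# shares of Y^2 between the two terms:
cu2 = 0.5 / (2 * beta_u)
ct2 = 0.5 / (2 * beta_t)
print(round(cu2, 2), round(ct2, 2))  # 0.25 0.28, the values quoted in the text
```
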
+Let us now discuss whether $|c_\gamma|$ as large as that required by eqs. (3.50) and (3.52) can be obtained by loops of charged particles for the $H^0$ in the pure 2HDM. In addition to fermionic loops, the couplings of $H^0$ to photons receive contributions from loops of $W^\pm$ and $H^\pm$. In the Higgs basis the two couplings that can parametrize these contributions are $\lambda_V$, bounded in eq. (3.37), and
+
+$$-\lambda_{H^+} (H_a^\dagger H_b) (H_b^\dagger H_b) + \text{h.c.}, \qquad (3.53)$$
+
+where the term proportional to $\lambda_{H^+}$ ($\lambda_V$) results in a coupling of the $H^0$ to the charged Higgs ($W$). We take the maximal value of $\lambda_V$ allowed by electroweak precision constraints, $\lambda_V \approx 3$, as already mentioned above. There is no analogous restriction on the value of $\lambda_{H^+}$. To check whether it is possible to satisfy the requirement in eq. (3.50), or at least the one in eq. (3.52), we have added up the loop contributions from the top quark, the W and the charged Higgs (see, e.g., ref. [32]) allowing for maximal constructive interference. To maximise $|c_\gamma|$, we take the charged-Higgs mass to be as small as $M/2$, which can, for instance, be obtained with a large value of $\lambda_5$. For $O(1)$ values of $\lambda_{H^+}$, the contribution of the charged-Higgs loop is very small compared to the dominant contribution from the top loop as it is suppressed by $m_W^2/m_{H^+}^2$. We get, for $\lambda_{H^+} = 1$, $|c_\gamma| \sim 1.8$. We find that to satisfy even the bound $|c_\gamma| \gtrsim 4.1$ in eq. (3.52) requires very large values of $\lambda_{H^+}$, above $16\pi^2/3$. For such large values of $\lambda_{H^+}$, a naive estimate tells us that the loop contributions are a third of the tree-level ones, so perturbativity is questionable. Such large values of $\lambda_{H^+}$ and $\lambda_5$ are also ruled out if we require their contribution to the running of $\lambda_V$ between the scales $M$ and $m_Z$ to be smaller than the electroweak precision bound (which applies to $\lambda_V(m_Z)$), that is if we require $\Delta\lambda_V \lesssim 3$ (see ref. [42] for the RGE). This rules out both gluon and quark initiated production as the bounds in eqs. (3.50) and (3.52) are impossible to satisfy.
+
+Thus, we have verified that the general 2HDM, without any additional states, cannot account for the observed anomaly.
+
+## 3.3 The fate of the MSSM
+
+We now turn to the Minimal Supersymmetric Standard Model (MSSM). As in the 2HDM, which in its type-II form is contained in the MSSM as a subsector, the only candidate particles for the resonance in the MSSM are $H^0$ and $A^0$.⁸ The most plausible production
+
+⁸We consider the R-parity-conserving MSSM, otherwise in principle one could consider sneutrino candidates, which can be similarly constrained. A resonant $\gamma\gamma$ signal can also arise within the MSSM from the annihilation of a squark-antisquark near-threshold QCD bound state, most famously the stoponium [44]. However, based on expressions from [45], the stoponium has $|c_\gamma| \simeq \sqrt{(2^{21}\pi^5/3^6)\,\bar{\alpha}_s^3\alpha^2} \approx 0.4$, while eq. (2.10) requires $|c_\gamma| \gtrsim 2.7$ even for the most favorable (but also quite generic for stoponium) scenario where
+---PAGE_BREAK---
+
+mechanism is gluon fusion, due to the smallness of the $H_d$ doublet's Yukawa couplings to light quarks and the fact that we are deep in the decoupling regime, $M_{H^0} \gg m_Z$.
+
+As we have seen above, the 2HDM fails by a large margin to accommodate the data. However, in the MSSM there are extra contributions to the $H^0gg$ couplings from sfermions and to the $H^0\gamma\gamma$ couplings from sfermions and charginos, in addition to those already present in the 2HDM. The $A^0gg$ and $A^0\gamma\gamma$ vertices receive no sfermion contributions at one loop as a consequence of CP symmetry, though they do receive contributions from charginos.⁹ Considering first $H^0$ as a candidate, dimensional analysis gives, for the contribution of the two stops, for $M_{SUSY} = 1$ TeV,
+
+$$c_g \sim 2g_s^2 \times \frac{v M_{H^0}}{M_{SUSY}^2} \sim 0.5 \qquad (3.54)$$
+
+and
+
+$$c_\gamma \sim 2N_c e^2 \times \frac{v M_{H^0}}{M_{SUSY}^2} \sim 0.1. \qquad (3.55)$$
+
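These order-of-magnitude estimates can be reproduced numerically (a sketch; $\alpha_s \approx 0.09$ near 750 GeV, $\alpha \approx 1/137$ and $v \approx 246$ GeV are assumptions of this check):

```python
import math

alpha_s, alpha = 0.09, 1 / 137.0        # assumed couplings at the TeV scale
v, M_H0, M_SUSY, N_c = 246.0, 750.0, 1000.0, 3

c_g = 2 * (4 * math.pi * alpha_s) * v * M_H0 / M_SUSY**2          # eq. (3.54)
c_gamma = 2 * N_c * (4 * math.pi * alpha) * v * M_H0 / M_SUSY**2  # eq. (3.55)
print(round(c_g, 2), round(c_gamma, 2))  # ~ 0.42 and ~ 0.10
```
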
+Even allowing for similar contributions from other sparticles, this suggests that, generically, $|c_g c_\gamma| < 1$, which is nearly three orders of magnitude below what is required according to eq. (2.8). However, we must also contemplate that the true resonance width could be smaller than the “nominal” 45 GeV. The decay width of $H^0$ is dominated by tree-level decays into top and bottom quarks, and is essentially determined in the MSSM as a function of $\tan\beta$, with a minimum of about 2 GeV at $\tan\beta \approx 6$. Hence, eq. (2.8) can be recast as
+
+$$\frac{|c_\gamma c_g|}{\sqrt{\Gamma(\tan\beta)/(45 \text{ GeV})}} = \rho_g \approx 530. \qquad (3.56)$$
+
+The question is how large the left-hand side may be. First, a small numerator could be partly compensated for by a factor of up to five due to the denominator. Second, an MSSM spectrum could also be quite non-degenerate, with hierarchies like $m_{\tilde{t}_1} \ll M_{H^0}, \mu \ll m_{\tilde{t}_2}$; this is in fact favoured by the observed Higgs mass. In particular, large $\mu$ and/or A-terms and a light stop can lead to a parametric enhancement $\sim \{\mu, A_t\}/m_{\tilde{t}_1}$ relative to the naive estimates above. Third, there could also be important contributions from sbottoms and staus, as well as charginos, which brings in a large subset of the MSSM parameters. A conclusion about the fate of the MSSM requires a quantitative treatment, but a brute-force parameter scan is not really feasible and in any case beyond the scope of this work. Instead, the purpose of the rest of this section is to obtain simple yet conservative bounds on all one-loop contributions over the entire MSSM parameter space.
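
The quoted $\sim 2$ GeV minimum of the $H^0$ width near $\tan\beta \approx 6$ follows from the tree-level $t\bar{t}$ and $b\bar{b}$ partial widths (a sketch assuming the $v \approx 174$ GeV convention of footnote 10 and a pole mass $m_b \approx 4.18$ GeV):

```python
import math

# Tree-level Gamma(H0 -> t tbar) + Gamma(H0 -> b bbar) versus tan(beta).
v, M, m_t, m_b = 174.0, 750.0, 173.0, 4.18   # GeV (assumed inputs)
beta_t3 = (1 - 4 * m_t**2 / M**2) ** 1.5     # scalar (p-wave) phase space

def width(tan_beta):
    c_t2 = m_t**2 / (2 * v**2) / tan_beta**2   # |c_t|^2, c_t ~ m_t cot(beta)/(sqrt(2) v)
    c_b2 = m_b**2 / (2 * v**2) * tan_beta**2   # |c_b|^2, c_b ~ m_b tan(beta)/(sqrt(2) v)
    return 3 * M / (8 * math.pi) * (c_t2 * beta_t3 + c_b2)

best = min(range(1, 51), key=width)
print(best, round(width(best), 1))  # minimum of ~ 2 GeV near tan(beta) = 6
```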
+
+the width is dominated by decays to the production mode, $\Gamma_{gg} \simeq (16/81)\bar{\alpha}_s^3\alpha_s^2 M \approx 0.0033$ GeV. One might also consider the gluinonium, whose binding is much stronger, though annihilation to $\gamma\gamma$ is loop-suppressed [45, 46]. However, pair production of $M/2 \approx 375$ GeV gluinos would have been almost certainly noticed by now.
+
+⁹As in the rest of this work, we assume *CP* conservation. Without this assumption, the gluonic and photonic couplings of some superposition of the two heavier mass eigenstates $H_2$ and $H_3$ will receive sparticle loop contributions, so apart from a division of the diphoton signal between $H_2$ and $H_3$ resonant contributions, we do not expect qualitative changes to our conclusions.
+---PAGE_BREAK---
+
+First, we will impose $1 \le \tan \beta \le 50$. The reason is that in the decoupling limit the $H^0 t\bar{t}$ and $H^0 b\bar{b}$ couplings are $\sqrt{2}m_t/(v\tan\beta)$ and $\sqrt{2}m_b/(v\cot\beta)$, respectively,$^{10}$ which, outside the stated $\tan\beta$ range, implies a decay width that significantly exceeds the width allowed by observations, cf. section 2.1. (Independently, such large couplings would lead to a Landau pole in $y_t$ or $y_b$, and/or strong coupling at low scales. Our lower limit on $\tan\beta$ also has very strong support from the observed Higgs mass of 125 GeV, which we will not separately impose.) The key assumption will be the absence of charge- and colour-breaking minima of the scalar potential. This could in principle be relaxed to only require metastability over cosmological timescales; we leave this aside for future work. As we will see, this assumption is sufficient to exclude the MSSM if the resonance interpretation is confirmed.
+
+### 3.3.1 Constraints from vacuum stability
+
+An essential role in our argument is played by the upper bounds on the $\mu$ parameter and the soft trilinear terms that follow from requiring the absence of charge- and colour-breaking minima of the MSSM scalar potential. The derivation of these bounds is well known [47–54] and involves suitable directions of the MSSM scalar field space. We employ five such directions
+
+$$ T_L = T_R = H_u^0, \quad B_L = B_R = H_d^0, \quad T_L = T_R = H_d^0, \quad B_L = B_R = H_u^0, \quad \tau_L = \tau_R = H_u^0 \tag{3.57} $$
+
+(with all other scalar fields held at zero), of which the first two are D-flat. The five bounds derived from these directions can be formulated in terms of the stop, sbottom and stau masses, as:
+
+$$ |A_t| \le \sqrt{3} \sqrt{m_{t_1}^2 + m_{t_2}^2 - 2m_t^2 + \frac{M_{H^0}^2}{2}(1+c_{2\beta}) - \frac{m_Z^2}{2}(1-c_{2\beta})(1+c_{2\beta})^2}, \quad (3.58) $$
+
+$$ |A_b| \le \sqrt{3} \sqrt{m_{b_1}^2 + m_{b_2}^2 - 2m_b^2 + \frac{M_{H^0}^2}{2}(1-c_{2\beta}) - \frac{m_Z^2}{2}(1-c_{2\beta})^2(1+c_{2\beta})}, \quad (3.59) $$
+
+$$ |\mu| \le \sqrt{1 + \frac{m_Z^2}{m_t^2} \sin^2 \beta} \times \sqrt{m_{t_1}^2 + m_{t_2}^2 - 2m_t^2 + \frac{M_{H^0}^2}{2}(1-c_{2\beta}) - \frac{m_Z^2}{2}(1+c_{2\beta}-c_{2\beta}^2+c_{2\beta}^3)}, \quad (3.60) $$
+
+$$ m_b |\mu| \tan \beta \le m_t \sqrt{\frac{\tan^2 \beta}{R^2} + \frac{m_Z^2}{m_t^2} \sin^2 \beta} \times \sqrt{m_{b_1}^2 + m_{b_2}^2 - 2m_b^2 + \frac{M_{H^0}^2}{2}(1+c_{2\beta}) - \frac{m_Z^2}{2}(1-c_{2\beta}-c_{2\beta}^2-c_{2\beta}^3)}, \quad (3.61) $$
+
+$^{10}$For this subsection, we use a convention $v \approx 174$ GeV.
+---PAGE_BREAK---
+
+$$
+\begin{equation}
+\begin{split}
+m_{\tau} |\mu| \tan \beta \le m_t & \sqrt{\frac{\tan^2 \beta}{R_{\tau}^2} + \frac{m_Z^2}{m_t^2} \sin^2 \beta} \\
+& \times \sqrt{m_{\tilde{\tau}_1}^2 + m_{\tilde{\tau}_2}^2 - 2m_{\tau}^2 + \frac{M_{H^0}^2}{2}(1+c_{2\beta}) - \frac{m_Z^2}{2}(1-c_{2\beta}-c_{2\beta}^2-c_{2\beta}^3)},
+\end{split}
+\tag{3.62}
+\end{equation}
+$$
+
+where $R \equiv m_t/m_b \sim 50$, $R_\tau \equiv m_t/m_\tau \sim 100$, and $c_{2\beta} \equiv \cos(2\beta)$. Eq. (3.59) also has an
+analogue for $A_\tau$, obtained by substituting $b \to \tau$.
+
+In these expressions we have kept the exact dependence on $\beta$, but neglected small
+terms of order $m_Z^4/M_{H^0}^4$ (i.e. we have taken the decoupling limit). Also, we have employed
+tree-level mass relations; this can easily be undone (for example, $m_b|\mu| \tan\beta \to y_b|\mu|$ on the
+left-hand side of eq. (3.61), and $m_t\sqrt{\tan^2\beta/R^2+(m_Z^2/m_t^2)\sin^2\beta} \to y_t\sqrt{y_b^2+(g^2+g'^2)/2}$ on
+its right-hand side).
+
+We can combine the bounds into bounding functions $\Phi_t$, $\Phi_b$, and $\Phi_\tau$ of the sfermion masses and $\beta$ only. Firstly,
+
+$$
+\Phi_t = \begin{cases} 0, & m_{t_1}^2 + m_{t_2}^2 - 2m_t^2 + \frac{M_{H^0}^2}{2}(1+c_{2\beta}) - \frac{m_Z^2}{2}(1-c_{2\beta})(1+c_{2\beta})^2 < 0, \\ \sqrt{3}\sqrt{m_{t_1}^2 + m_{t_2}^2 - 2m_t^2 + \frac{M_{H^0}^2}{2}(1-c_{2\beta}) - \frac{m_Z^2}{2}(1+c_{2\beta}-c_{2\beta}^2+c_{2\beta}^3)}, & \text{otherwise.} \end{cases} \tag{3.63}
+$$
+
+If the condition for $\Phi_t = 0$ is satisfied, there is no way to satisfy the $A_t$ constraint; setting the bounding function to zero in this case will serve to effectively discard those unphysical points below. Otherwise, $\Phi_t$ simultaneously bounds both $|A_t|$ and $|\mu|$. A similar function that simultaneously bounds $|A_b|$ and $\frac{m_b}{m_t}|\mu|\tan\beta$ is provided by
+
+$$
+\Phi_b = \sqrt{3} \sqrt{m_{\tilde{b}_1}^2 + m_{\tilde{b}_2}^2 - 2m_b^2 + \frac{M_{H^0}^2}{2}(1-c_{2\beta})}, \quad (3.64)
+$$
+
+and an identical function $\Phi_\tau$ follows from this by substituting $b \to \tau$.
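
A direct transcription of the bounding function $\Phi_t$ of eq. (3.63) (a sketch; the mass values in the usage line are illustrative inputs, not fits):

```python
import math

M_H0, m_Z, m_t = 750.0, 91.19, 173.0   # GeV (inputs assumed for this sketch)

def phi_t(mt1, mt2, tan_beta):
    """Bounding function Phi_t of eq. (3.63); 0 flags an unphysical point."""
    c2b = math.cos(2 * math.atan(tan_beta))
    common = mt1**2 + mt2**2 - 2 * m_t**2
    gate = (common + M_H0**2 / 2 * (1 + c2b)
            - m_Z**2 / 2 * (1 - c2b) * (1 + c2b)**2)
    if gate < 0:           # the A_t constraint cannot be satisfied here
        return 0.0
    arg = (common + M_H0**2 / 2 * (1 - c2b)
           - m_Z**2 / 2 * (1 + c2b - c2b**2 + c2b**3))
    return math.sqrt(3) * math.sqrt(arg)

# e.g. TeV-scale stops at moderate tan(beta) give a multi-TeV bound on |A_t|:
print(round(phi_t(1000.0, 1000.0, 5.0)))    # upper bound in GeV
```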
+
+### 3.3.2 Conservative bounds on sfermion contributions
+
+In the notation of ref. [55], the sfermion contributions to $c_g$ are given by
+
+$$
+\left| \sum_f c_g^{(\tilde{f})} \right| = \frac{g M_{H^0}}{2 M_W} g_s^2 |A_{SUSY,\tilde{f}}^{H^0}|. \qquad (3.65)
+$$
+
+If the contribution of a sfermion to $c_g$ is known, the corresponding contribution to $c_\gamma$ is
+given by
+
+$$
+c_{\gamma}^{(\tilde{f})} = 2(e^2/g_s^2) N_c^{(f)} Q_f^2 c_g^{(\tilde{f})}. \tag{3.66}
+$$
+
+Explicitly,
+
+$$
+A_{\mathrm{SUSY}}^{H^0} = \sum_{\tilde{f}=\tilde{t},\tilde{b},\tilde{\tau}} A_{\mathrm{SUSY},\tilde{f}}^{H^0}, \qquad A_{\mathrm{SUSY},\tilde{f}}^{H^0} = 4 \sum_{i=1,2} \frac{g_{\tilde{f}_i \tilde{f}_i}^{H^0}}{M_{H^0}^2} h(\tau_i^{\tilde{f}}), \quad (3.67)
+$$
+
+where $\tau_i^{\tilde{f}} = M_{H^0}^2/(4m_{\tilde{f}_i}^2)$ and
+
+$$h(\tau) = \tau A_0^{H^0}(\tau) = \begin{cases} \frac{\arcsin^2(\sqrt{\tau})}{\tau} - 1 & \tau \le 1, \\ -\frac{1}{4\tau} \left( \ln \frac{1+\sqrt{1-\frac{1}{\tau}}}{1-\sqrt{1-\frac{1}{\tau}}} - i\pi \right)^2 - 1 & \tau > 1. \end{cases} \quad (3.68)$$
+
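The properties of $h$ used below ($h(1) = (\pi/2)^2 - 1 \approx 1.47$ bounding $|h|$, and $h \to 0$ for small $\tau$) can be checked numerically (a sketch that transcribes eq. (3.68) as printed):

```python
import numpy as np

def h(tau):
    """Loop function h(tau) = tau * A_0(tau) of eq. (3.68)."""
    if tau <= 1:
        return np.arcsin(np.sqrt(tau)) ** 2 / tau - 1
    x = np.sqrt(1 - 1 / tau)
    return -(np.log((1 + x) / (1 - x)) - 1j * np.pi) ** 2 / (4 * tau) - 1

h_max = (np.pi / 2) ** 2 - 1          # h(1) ~ 1.47, quoted in the text
print(round(h_max, 2))                # 1.47

# scan both branches: |h| never exceeds h(1), and h vanishes for small tau
taus = list(np.linspace(1e-4, 1.0, 500)) + list(np.linspace(1.001, 100.0, 2000))
assert all(abs(h(t)) <= h_max + 1e-6 for t in taus)
assert abs(h(1e-4)) < 1e-3
```
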
+Consider first the stops. In the decoupling limit, their couplings to $H^0$ are
+
+$$g_{\tilde{t}_i \tilde{t}_i}^{H^0} = -\cot \beta m_t^2 + x_i \sin(2\beta) m_Z^2 \pm m_t \frac{\sin(2\theta_{\tilde{t}})}{2} (\mu + A_t \cot \beta), \quad (3.69)$$
+
+where $\theta_{\tilde{t}}$ is the stop mixing angle, and the coefficients $x_i$ depend on $\theta_{\tilde{t}}$ and $\beta$ and are always less than one in magnitude. Using that $h \to 0$ for $\tau \to 0, \infty$ and $|h| \le h(1) \approx 1.47$, one easily shows that the first two terms lead to maximal contributions to $A_{SUSY,\tilde{f}}^{H^0}$ that are bounded (in magnitude) by 2.74 cot $\beta$ and 0.03, respectively. (Similar terms for the sbottom and stau cases will be negligible.) The third term in the coupling leads to
+
+$$A_{SUSY,\tilde{t}}^{H^0} = \frac{\sin(2\theta_{\tilde{t}})}{2} \frac{4m_t(A_t \cot \beta + \mu)}{M_{H^0}^2} \times (h(\tau_1) - h(\tau_2)). \quad (3.70)$$
+
+Employing now the bounding function $\Phi_t$ from the previous subsection, it is not difficult to show that
+
+$$\left| m_t \frac{\sin(2\theta_{\tilde{t}})}{2} (A_t \cot \beta + \mu) \right| \le m_t \min \left( \frac{1}{2} \Phi_t (1 + \cot \beta), m_t \frac{\Phi_t^2}{m_{t_2}^2 - m_{t_1}^2} \right) = \frac{M_{H^0}^2}{4} B(\tau_1, \tau_2; M_{H^0}). \quad (3.71)$$
+
+The first argument of the min function follows from $|\sin(2\theta_{\tilde{t}})| \le 1$; the second makes use of the explicit formula for the stop mixing angle. We then have
+
+$$|A_{SUSY,\tilde{t}}^{H^0}| \le B(\tau_1, \tau_2) |h(\tau_1) - h(\tau_2)|. \quad (3.72)$$
+
+The right-hand side is bounded and the physical parameter space is the compact region $0 \le \tau_i \le \tau_i^{\max}$, where $\tau_i^{\max} = M_{H^0}^2/(4(m_{\tilde{t}_i}^{\min})^2)$ depends on the experimental lower bound on the lighter stop mass. Straightforward numerical techniques establish that
+
+$$|A_{SUSY,\tilde{t}}^{H^0}| \le 3.37, \quad (3.73)$$
+
+where the maximum is obtained at $\tan\beta = 1$, when one stop is at threshold ($m = M_H/2$) and the other is relatively light. Below, we numerically obtain and use the bounds as a function of $\tan\beta$. (We allow stop masses as light as 100 GeV in the scan, to dispel any doubts related to, for instance, compressed spectra in which light stops might have escaped detection at the LHC.) The extremal point is generally ruled out by the observed Higgs mass $m_h = 125$ GeV, and very unlikely to be consistent with LHC searches, but we are being conservative.
+---PAGE_BREAK---
+
+Analogous steps lead to bounds on the sbottom and stau contributions. In this case, terms proportional to $m_{b,\tau}^2$ and $m_Z^2$ in the Higgs-sfermion couplings lead to completely negligible effects. For the remainder, we require a bound
+
+$$
+\begin{aligned}
+\left| m_b \frac{\sin(2\theta_b)}{2} (\mu - A_b \tan \beta) \right| &\le m_t \min \left( \frac{1}{2} \Phi_b \left[ \cot \beta + \frac{\tan \beta}{R} \right] , \\
+&\qquad m_t \frac{\Phi_b^2}{m_{b_2}^2 - m_{b_1}^2} \left[ \cot \beta + \frac{\tan \beta}{R} \right] \left[ 1 + \frac{1}{R} \right] \right) \\
+&= \frac{M_{H^0}^2}{4} B_b(\tau_1, \tau_2; M_{H^0}) .
+\end{aligned}
+\tag{3.74}
+$$
+
+The resulting bound is most effective in the intermediate $\tan\beta$ region, counteracting the small denominator of eq. (3.56) in that region. The bound on the stau contribution, as a function of $\tan\beta$ and the slepton masses, is identical to the sbottom one, except for a missing colour factor (overcompensated in the photonic coupling by a ninefold larger squared electric charge).
+
+### 3.3.3 Contributions from other particles and verdict
+
+The contributions from top, bottom, W, and charged-Higgs loops have already been discussed in the 2HDM section. In the decoupling limit, where $M_{H^+} \approx M_{H^0}$, they are essentially functions of $\tan\beta$ only and easily incorporated. Regarding charginos, their effect is equivalent to the contribution of two vectorlike, colourless particles; such contributions have also been discussed above. We only need to bound the fermion loop function by its global maximum and make no use of the relation of the chargino and Higgs mixing angles to MSSM parameters in order to obtain the bound $|c_\gamma^{\chi^+}| \le 0.45$ (for any $\tan\beta$). Assuming now the extreme scenario where all contributions to $c_\gamma$ and $c_g$ simultaneously saturate their bounds and are in phase with one another, we obtain a (very) conservative upper bound on the left-hand side of eq. (3.56). This is displayed in figure 7. We observe that this bound still misses the data by more than a factor of two, even at the point of closest approach at $\tan\beta \sim 5$. It is fairly clear that the bound could be made stronger by, for example, employing more properties of the function $h$ or formulating a higher-dimensional extremization problem (closer to a full scan of the MSSM parameter space). It is also clear that the pseudoscalar $A^0$ fares worse than $H^0$ as a resonance candidate: the chargino contribution to its coupling to photons is similarly constrained as in the $H^0$ case, while sfermion contributions to both the photonic and gluonic couplings are absent, giving a much tighter bound on the left-hand side of eq. (3.56) in this case.
+
+### 3.3.4 Production from quarks?
+
+So far we only considered the production from gluons. A similar leading-order analysis for quark-antiquark initial states again leads to a negative conclusion. The bounds just established translate to an upper bound $|c_\gamma| < 5.3$, attained (for $\tan\beta \ge 1$) at $\tan\beta = 1$. This can be combined with the model-independent analysis of section 2. First, the constraint in eq. (2.10) rules out initial states other than $u\bar{u}$ or $d\bar{d}$. Eq. (2.18) then implies
+---PAGE_BREAK---
+
+**Figure 7.** Comparison of the upper bound on the left-hand side of eq. (3.56) to the signal suggested by the diphoton excesses, as a function of $\tan\beta$. The red horizontal line corresponds to the signal, and the blue dots represent our conservative upper bound.
+
+$(\Gamma/(45\,\text{GeV}))^{1/4} < 0.5$, which together with eq. (2.17) implies $|c_u| < 0.15$ for $u\bar{u}$ initial state ($|c_d| < 0.18$ for $d\bar{d}$ initial state). (The couplings $c_u$ and $c_d$ denote the Yukawa couplings of the scalar mass eigenstate $H^0$, as defined in section 2. For finite $\tan\beta$ this state is a superposition of the neutral components of $H_u$ and $H_d$, and yet another superposition of the doublets in the “Higgs basis” of the preceding subsections.) At the same time, the signal constraint together with the expression for the width-to-mass ratio (eqs. (2.8) and (2.3), respectively) imply
+
+$$ |c_f| > \frac{2.9(3.7)}{|c_\gamma|} \sqrt{(n_{\gamma\gamma}|c_\gamma|^2 + n_t|c_t|^2 + n_b|c_b|^2)} \frac{750 \text{ GeV}}{45 \text{ GeV}}. \quad (3.75) $$
+
+Using the tree-level relations $|c_t| = \frac{m_t}{\sqrt{2}v} \cot\beta$, $|c_b| = \frac{m_b}{\sqrt{2}v} \tan\beta$, we find this to be in conflict with the upper bound unless $3 < \tan\beta < 15$ for $u\bar{u}$ initial state ($4 < \tan\beta < 14$ for $d\bar{d}$ initial state), in which case $|c_u| > 0.10$ (or $|c_d| > 0.13$). Employing again the tree-level relations, these $\tan\beta$ ranges correspond to an up-quark mass above 100 GeV (down-quark mass above a few GeV), both in gross contradiction with observation.
+
+However, higher-order corrections in the MSSM could potentially affect our conclusions. Although it is hard to see how they could give $\mathcal{O}(1)$ or larger corrections to the $H^0gg$ or $H^0\gamma\gamma$ vertices, loop corrections can contribute $\mathcal{O}(1)$ fractions of the down-type quark masses, through an induced coupling to the doublet $H_u$.¹¹ In this case, $c_b$ entering eq. (3.75) is no longer determined by $m_b$ and $\tan\beta$, and so for $\tan\beta \to \infty$ one would have only a very weak bound $|c_u| > 0.005$ ($|c_d| > 0.007$) due to the partial width into diphotons. While a complete investigation goes beyond the methodology and scope of this paper, we can put some relevant restrictions on such a scenario.
+
+¹¹We thank Martin Gorbahn for stressing this to us.
+---PAGE_BREAK---
+
+The fact that $\tau\tau$ resonance searches do not show an excess results in an upper bound on the tree-level $\tau$ mass, giving
+
+$$m_{\tau}^{\text{tree}} < \frac{7.4}{\tan \beta} \text{GeV}. \qquad (3.76)$$
+
+This follows directly from the upper bound on the ratio $BR_{\tau\tau}/BR_{\gamma\gamma} = (n_{\tau}/n_{\gamma})(|c_{\tau}|^{2}/|c_{\gamma}|^{2})$ (cf. eq. (2.16) and table 3), using $|c_{\gamma}| < 5.3$, giving $|c_{\tau}| < 0.026$. (This might be relaxed to about 0.03 for a mix of gg and $q\bar{q}$ production.) This implies that either $\tan\beta < 10$ or the dominant fraction of the $\tau$ mass would have to come from one-loop contributions.
+
+Such one-loop contributions have been considered in the literature (see, e.g., ref. [56]) and are due to neutralino-stau and chargino-sneutrino loops, with the latter suppressed by the small $|y_{\tau}| = \sqrt{2}|c_{\tau}|/\sin\beta < 0.06$. Discarding them, the remaining neutralino-stau contributions are proportional to the left-hand side of eq. (3.62) times a combination of coupling constants, times a loop function.¹² For $\tan\beta > 8.3$, the $M_{H^0}$ and $m_Z$ dependence of the stability bound of eq. (3.62) can be conservatively dropped. The one-loop contribution of a given neutralino to the $\tau$ mass is then bounded by the dimensionless combination
+
+$$\sqrt{m_{\tilde{\tau}_1}^2 + m_{\tilde{\tau}_2}^2}\, m_{\chi^0} I(m_{\tilde{\tau}_1}^2, m_{\tilde{\tau}_2}^2, m_{\chi^0}^2)$$
+
+(with $I$ defined in [56]), which is globally bounded in magnitude by one, times a factor independent of sparticle masses. Summing the latter over neutralinos and maximizing over mixing angles, we find that $\Delta m_{\tau}^{1\text{-loop}} < 0.2$ GeV for $\tan\beta > 8.3$. Therefore, if such a scenario can work at all, it necessarily implies small $\tan\beta$. We leave a detailed investigation for future work.
+
+### 3.3.5 Cautionary note
+
+We stress that our conclusions here are specific to the MSSM, and attest to the high predictivity of the model. If the MSSM cannot survive in regions of metastability (where charge and colour-breaking minima exist but are not tunneled to over cosmological timescales), or be saved by higher-order corrections, more complicated supersymmetric models may still accommodate the excess, although the techniques described here may be useful in scrutinizing them. Another logical possibility of saving the MSSM would be production through the decay of heavier particles (say, stops, which could themselves be produced from gluino and squark decays). As mentioned in the beginning, the experimental data do not seem to support such a mechanism.
+
+## 4 Summary and Outlook
+
+This work deals with the core phenomenology of the diphoton excess observed by the LHC experiments ATLAS and CMS around 750 GeV diphoton invariant mass. We have considered both the case of a narrow resonance and that of a broad one. We
+
+¹²If nonholomorphic soft terms are allowed, the left-hand side of eq. (3.62) is modified but remains proportional to the relevant $\tilde{\tau}\tilde{\tau}H_u^0$ coupling, such that the coupling remains bounded by the right-hand side.
+---PAGE_BREAK---
+
+obtained model-independent constraints on the allowed couplings and branching fractions to various final states, including the interplay with other existing bounds. Our findings suggest that the anomaly cannot be accounted for by the presence of a single additional singlet or doublet spin-zero field and the Standard Model degrees of freedom; this includes all two-Higgs-doublet models. We also found that, at least in a leading-order analysis, the whole parameter space of the MSSM fails to explain the excess if one requires the absence of charge- and colour-breaking minima. If we assume that the resonance is broad, we find that it is challenging to find a weakly coupled explanation. However, we provide an existence proof in the form of a model with vectorlike quarks with large electric charge. For the narrow resonance case, a similar model can be perturbative up to high scales also with smaller charges. We have also considered dilaton models where the full SM including the Higgs doublet is part of the conformal sector. We find that these models cannot explain the size of the excess unless we add new fields below the TeV scale to give large extra contributions to the QED and QCD beta functions. As already mentioned, in all the scenarios studied here we find that new particles below the TeV scale need to be present in addition to the resonance. They must have couplings to the scalar itself, to photons, perhaps to gluons, and possibly also carry flavor information. Further study of their LHC phenomenology would be interesting to pursue. Finally, models in which the new resonance has significant couplings to the light quarks motivate thinking about the link between flavor physics and the physics of the resonance.
+
+**Note:** Other early-response studies of the various possible implications of the excess, that appeared approximately simultaneously with ours, are refs. [57-73]. Also, after the submission, an earlier study of diphoton resonances [74] was pointed out to us.
+
+## Acknowledgments
+
+GP is supported by the BSF, ISF, and ERC-2013-CoG grant (TOPCHARM # 614794). SJ thanks GP and the Weizmann Institute for hospitality, including the period during which this paper was conceived. SJ acknowledges partial support from the UK STFC under Grant Agreement ST/L000504/1, and from the IPPP through an associateship. SJ acknowledges the NExT Institute.
+
+## References
+
+[1] ATLAS Collaboration, "Search for resonances decaying to photon pairs in $3.2 \text{ fb}^{-1}$ of pp collisions at $\sqrt{s} = 13$ TeV with the ATLAS detector," Tech. Rep. ATLAS-CONF-2015-081, CERN, Geneva, 2015. http://cds.cern.ch/record/2114853.
+
+[2] CMS Collaboration, "Search for new physics in high mass diphoton events in proton-proton collisions at 13 TeV," Tech. Rep. CMS-PAS-EXO-15-004, CERN, Geneva, 2015. http://cds.cern.ch/record/2114808.
+
+[3] L. D. Landau, "On the angular momentum of a system of two photons," Dokl. Akad. Nauk Ser. Fiz. 60 (1948) 207.
+
+[4] C.-N. Yang, "Selection Rules for the Dematerialization of a Particle Into Two Photons," Phys. Rev. 77 (1950) 242.
+---PAGE_BREAK---
+
+[5] J. Alwall, R. Frederix, S. Frixione, V. Hirschi, F. Maltoni, O. Mattelaer, H.-S. Shao, T. Stelzer, P. Torrielli, and M. Zaro, "The automated computation of tree-level and next-to-leading order differential cross sections, and their matching to parton shower simulations," JHEP 1407 (2014) 079, arXiv:1405.0301 [hep-ph].
+
+[6] A. Alloul, N. D. Christensen, C. Degrande, C. Duhr, and B. Fuks, "FeynRules 2.0 - A complete toolbox for tree-level phenomenology," Comput. Phys. Commun. 185 (2014) 2250, arXiv:1310.1921 [hep-ph].
+
+[7] S. Dawson, "The Effective W Approximation," Nucl. Phys. B249 (1985) 42.
+
+[8] G. L. Kane, W. W. Repko, and W. B. Rolnick, "The Effective $W^±$, $Z^0$ Approximation for High-Energy Collisions," Phys. Lett. B148 (1984) 367.
+
+[9] R. D. Ball et al., "Parton distributions with LHC data," Nucl. Phys. B867 (2013) 244, arXiv:1207.1303 [hep-ph].
+
+[10] C. Schmidt, J. Pumplin, D. Stump, and C. P. Yuan, "CT14QED parton distribution functions from isolated photon production in deep inelastic scattering," Phys. Rev. D93 no. 11, (2016) 114015, arXiv:1509.02905 [hep-ph].
+
+[11] L. A. Harland-Lang, V. A. Khoze, and M. G. Ryskin, "The production of a diphoton resonance via photon-photon fusion," JHEP 03 (2016) 182, arXiv:1601.07187 [hep-ph].
+
+[12] W. Altmannshofer, J. Galloway, S. Gori, A. L. Kagan, A. Martin, and J. Zupan, "On the 750 GeV di-photon excess," arXiv:1512.07616 [hep-ph].
+
+[13] S. Fichet, G. von Gersdorff, and C. Royon, "Scattering Light by Light at 750 GeV at the LHC," arXiv:1512.05751 [hep-ph].
+
+[14] CMS Collaboration, "Search for Resonances Decaying to Dijet Final States at $\sqrt{s}$ = 8 TeV with Scouting Data," Tech. Rep. CMS-PAS-EXO-14-005, CERN, Geneva, 2015. http://cds.cern.ch/record/2063491.
+
+[15] CMS Collaboration, V. Khachatryan et al., "Search for diphoton resonances in the mass range from 150 to 850 GeV in pp collisions at $\sqrt{s}$ = 8 TeV," Phys. Lett. B750 (2015) 494, arXiv:1506.02301 [hep-ex].
+
+[16] ATLAS Collaboration, G. Aad et al., "Search for high-mass diphoton resonances in pp collisions at $\sqrt{s}$ = 8 TeV with the ATLAS detector," Phys. Rev. D92 (2015) 032004, arXiv:1504.05511 [hep-ex].
+
+[17] CMS Collaboration, "Search for High-Mass Diphoton Resonances in pp Collisions at $\sqrt{s}$ = 8 TeV with the CMS Detector," Tech. Rep. CMS-PAS-EXO-12-045, CERN, Geneva, 2015. http://cds.cern.ch/record/2017806.
+
+[18] CMS Collaboration, V. Khachatryan et al., "Search for Resonant $t\bar{t}$ Production in Proton-Proton Collisions at $\sqrt{s}$ = 8 TeV," arXiv:1506.03062 [hep-ex].
+
+[19] ATLAS Collaboration, G. Aad et al., "Search for a high-mass Higgs boson decaying to a W boson pair in pp collisions at $\sqrt{s}$ = 8 TeV with the ATLAS detector," arXiv:1509.00389 [hep-ex].
+
+[20] CMS Collaboration, "Search for a standard model like Higgs boson in the $H \to ZZ \to \ell^+\ell^- q\bar{q}$ decay channel at $\sqrt{s}$ = 8 TeV," Tech. Rep. CMS-PAS-HIG-14-007, CERN, Geneva, 2015. https://cds.cern.ch/record/2001558.
+
+[21] CMS Collaboration, "Search for di-Higgs resonances decaying to 4 bottom quarks," Tech.
+---PAGE_BREAK---
+
+Rep. CMS-PAS-HIG-14-013, CERN, Geneva, 2014. http://cds.cern.ch/record/1748425.
+
+[22] ATLAS Collaboration, G. Aad et al., "Search for a new resonance decaying to a W or Z boson and a Higgs boson in the $ℓℓ/ℓν/νν + b\bar{b}$ final states with the ATLAS detector," Eur. Phys. J. C75 (2015) 263, arXiv:1503.08089 [hep-ex].
+
+[23] ATLAS Collaboration, G. Aad et al., "Search for neutral Higgs bosons of the minimal supersymmetric standard model in pp collisions at $\sqrt{s} = 8$ TeV with the ATLAS detector," JHEP 11 (2014) 056, arXiv:1409.6064 [hep-ex].
+
+[24] ATLAS Collaboration, G. Aad et al., "Search for new resonances in $Wγ$ and $Zγ$ final states in pp collisions at $\sqrt{s} = 8$ TeV with the ATLAS detector," Phys. Lett. B738 (2014) 428, arXiv:1407.8150 [hep-ex].
+
+[25] ATLAS Collaboration, G. Aad et al., "Search for high-mass dilepton resonances in pp collisions at $\sqrt{s} = 8$ TeV with the ATLAS detector," Phys. Rev. D90 (2014) 052005, arXiv:1405.4123 [hep-ex].
+
+[26] CMS Collaboration, S. Chatrchyan et al., "Search for narrow resonances and quantum black holes in inclusive and b-tagged dijet mass spectra from pp collisions at $\sqrt{s} = 7$ TeV," JHEP 01 (2013) 013, arXiv:1210.2387 [hep-ex].
+
+[27] CMS Collaboration, "Search for Heavy Resonances Decaying into $b\bar{b}$ and $bg$ Final States in pp Collisions at $\sqrt{s} = 8$ TeV," Tech. Rep. CMS-PAS-EXO-12-023, CERN, Geneva, 2013. http://cds.cern.ch/record/1542405.
+
+[28] ATLAS Collaboration, "Search for New Phenomena in Dijet Mass and Angular Distributions from pp Collisions at $\sqrt{s} = 13$ TeV with the ATLAS Detector," arXiv:1512.01530 [hep-ex].
+
+[29] CMS Collaboration, V. Khachatryan et al., "Search for narrow resonances decaying to dijets in proton-proton collisions at $\sqrt{s} = 13$ TeV," arXiv:1512.01224 [hep-ex].
+
+[30] W. D. Goldberger, B. Grinstein, and W. Skiba, "Distinguishing the Higgs boson from the dilaton at the Large Hadron Collider," Phys. Rev. Lett. 100 (2008) 111802, arXiv:0708.1463 [hep-ph].
+
+[31] T. Robens and T. Stefaniak, "Status of the Higgs Singlet Extension of the Standard Model after LHC Run 1," Eur. Phys. J. C75 (2015) 104, arXiv:1501.02234 [hep-ph].
+
+[32] M. Spira, A. Djouadi, D. Graudenz, and P. M. Zerwas, "Higgs boson production at the LHC," Nucl. Phys. B453 (1995) 17, arXiv:hep-ph/9504378.
+
+[33] M. Son and A. Urbano, "A new scalar resonance at 750 GeV: Towards a proof of concept in favor of strongly interacting theories," arXiv:1512.08307 [hep-ph].
+
+[34] M.-L. Xiao and J.-H. Yu, "Stabilizing electroweak vacuum in a vectorlike fermion model," Phys. Rev. D90 (2014) 014007, arXiv:1404.0681 [hep-ph].
+
+[35] B. Gripaios, A. Pomarol, F. Riva, and J. Serra, "Beyond the Minimal Composite Higgs Model," JHEP 04 (2009) 070, arXiv:0902.1483 [hep-ph].
+
+[36] A. Efrati, E. Kuflik, S. Nussinov, Y. Soreq, and T. Volansky, "Constraining the Higgs-Dilaton with LHC and Dark Matter Searches," Phys. Rev. D91 (2015) 055034, arXiv:1410.2225 [hep-ph].
+
+[37] J. D. Wells, "Lectures on Higgs Boson Physics in the Standard Model and Beyond," in *39th British Universities Summer School in Theoretical Elementary Particle Physics (BUSSTEPP 2009), Liverpool, United Kingdom, August 24–September 4, 2009.* [arXiv:0909.4541 [hep-ph]](https://arxiv.org/abs/0909.4541).
+
+---PAGE_BREAK---
+
+[38] B. Bellazzini, C. Csaki, J. Hubisz, J. Serra, and J. Terning, “A Higgslike Dilaton,” *Eur. Phys. J.* **C73** (2013) 2333, [arXiv:1209.3299 [hep-ph]](https://arxiv.org/abs/1209.3299).
+
+[39] R. S. Gupta and J. D. Wells, “Next Generation Higgs Bosons: Theory, Constraints and Discovery Prospects at the Large Hadron Collider,” *Phys. Rev.* **D81** (2010) 055012, [arXiv:0912.0267 [hep-ph]](https://arxiv.org/abs/0912.0267).
+
+[40] J. F. Gunion and H. E. Haber, “The CP conserving two Higgs doublet model: The Approach to the decoupling limit,” *Phys. Rev.* **D67** (2003) 075019, [arXiv:hep-ph/0207010](https://arxiv.org/abs/hep-ph/0207010).
+
+[41] R. S. Gupta, M. Montull, and F. Riva, “SUSY Faces its Higgs Couplings,” *JHEP* **04** (2013) 132, [arXiv:1212.5240 [hep-ph]](https://arxiv.org/abs/1212.5240).
+
+[42] G. C. Branco, P. M. Ferreira, L. Lavoura, M. N. Rebelo, M. Sher, and J. P. Silva, “Theory and phenomenology of two-Higgs-doublet models,” *Phys. Rept.* **516** (2012) 1, [arXiv:1106.0034 [hep-ph]](https://arxiv.org/abs/1106.0034).
+
+[43] D. Ghosh, R. S. Gupta, and G. Perez, “Is the Higgs Mechanism of Fermion Mass Generation a Fact? A Yukawa-less First-Two-Generation Model,” [arXiv:1508.01501 [hep-ph]](https://arxiv.org/abs/1508.01501).
+
+[44] M. Drees and M. M. Nojiri, “Proposed new signal for scalar top-squark bound-state production,” *Phys. Rev. Lett.* **72** (1994) 2324, [arXiv:hep-ph/9310209](https://arxiv.org/abs/hep-ph/9310209).
+
+[45] D. Kahawala and Y. Kats, “Distinguishing spins at the LHC using bound state signals,” *JHEP* **09** (2011) 099, [arXiv:1103.3503 [hep-ph]](https://arxiv.org/abs/1103.3503).
+
+[46] M. R. Kauth, J. H. Kuhn, P. Marquard, and M. Steinhauser, “Gluinonia: Energy Levels, Production and Decay,” *Nucl. Phys.* **B831** (2010) 285, [arXiv:0910.2612 [hep-ph]](https://arxiv.org/abs/0910.2612).
+
+[47] J. M. Frere, D. R. T. Jones, and S. Raby, “Fermion Masses and Induction of the Weak Scale by Supergravity,” *Nucl. Phys.* **B222** (1983) 11.
+
+[48] J. P. Derendinger and C. A. Savoy, “Quantum Effects and SU(2) x U(1) Breaking in Supergravity Gauge Theories,” *Nucl. Phys.* **B237** (1984) 307.
+
+[49] J. A. Casas and S. Dimopoulos, “Stability bounds on flavor violating trilinear soft terms in the MSSM,” *Phys. Lett.* **B387** (1996) 107, [arXiv:hep-ph/9606237](https://arxiv.org/abs/hep-ph/9606237).
+
+[50] R. Rattazzi and U. Sarid, “The Unified minimal supersymmetric model with large Yukawa couplings,” *Phys. Rev.* **D53** (1996) 1553, [arXiv:hep-ph/9505428](https://arxiv.org/abs/hep-ph/9505428).
+
+[51] J. Hisano and S. Sugiyama, “Charge-breaking constraints on left-right mixing of stau’s,” *Phys. Lett.* **B696** (2011) 92, [arXiv:1011.0260 [hep-ph]](https://arxiv.org/abs/1011.0260). [Erratum: *Phys. Lett.* B719 (2013) 472.]
+
+[52] W. Altmannshofer, M. Carena, N. R. Shah, and F. Yu, “Indirect Probes of the MSSM after the Higgs Discovery,” *JHEP* **01** (2013) 160, [arXiv:1211.1976 [hep-ph]](https://arxiv.org/abs/1211.1976).
+
+[53] M. Carena, S. Gori, I. Low, N. R. Shah, and C. E. M. Wagner, “Vacuum Stability and Higgs Diphoton Decays in the MSSM,” *JHEP* **02** (2013) 114, [arXiv:1211.6136 [hep-ph]](https://arxiv.org/abs/1211.6136).
+
+[54] W. Altmannshofer, C. Frugiuele, and R. Harnik, “Fermion Hierarchy from Sfermion Anarchy,” *JHEP* **12** (2014) 180, [arXiv:1409.2522 [hep-ph]](https://arxiv.org/abs/1409.2522).
+
+[55] A. Djouadi, “The Anatomy of electro-weak symmetry breaking. II. The Higgs bosons in the minimal supersymmetric model,” *Phys. Rept.* **459** (2008) 1, [arXiv:hep-ph/0503173](https://arxiv.org/abs/hep-ph/0503173).
+---PAGE_BREAK---
+
+[56] F. Borzumati, G. R. Farrar, N. Polonsky, and S. D. Thomas, “Soft Yukawa couplings in supersymmetric theories,” Nucl. Phys. B555 (1999) 53, [arXiv:hep-ph/9902443].
+
+[57] K. Harigaya and Y. Nomura, “Composite Models for the 750 GeV Diphoton Excess,” arXiv:1512.04850 [hep-ph].
+
+[58] Y. Mambrini, G. Arcadi, and A. Djouadi, “The LHC diphoton resonance and dark matter,” arXiv:1512.04913 [hep-ph].
+
+[59] M. Backovic, A. Mariotti, and D. Redigolo, “Di-photon excess illuminates Dark Matter,” arXiv:1512.04917 [hep-ph].
+
+[60] A. Angelescu, A. Djouadi, and G. Moreau, “Scenarii for interpretations of the LHC diphoton excess: two Higgs doublets and vector-like quarks and leptons,” arXiv:1512.04921 [hep-ph].
+
+[61] Y. Nakai, R. Sato, and K. Tobioka, “Footprints of New Strong Dynamics via Anomaly,” arXiv:1512.04924 [hep-ph].
+
+[62] S. Knapen, T. Melia, M. Papucci, and K. Zurek, “Rays of light from the LHC,” arXiv:1512.04928 [hep-ph].
+
+[63] D. Buttazzo, A. Greljo, and D. Marzocca, “Knocking on New Physics’ door with a Scalar Resonance,” arXiv:1512.04929 [hep-ph].
+
+[64] A. Pilaftsis, “Diphoton Signatures from Heavy Axion Decays at LHC,” arXiv:1512.04931 [hep-ph].
+
+[65] R. Franceschini, G. F. Giudice, J. F. Kamenik, M. McCullough, A. Pomarol, R. Rattazzi, M. Redi, F. Riva, A. Strumia, and R. Torre, “What is the $\gamma\gamma$ resonance at 750 GeV?,” arXiv:1512.04933 [hep-ph].
+
+[66] S. Di Chiara, L. Marzola, and M. Raidal, “First interpretation of the 750 GeV di-photon resonance at the LHC,” arXiv:1512.04939 [hep-ph].
+
+[67] T. Higaki, K. S. Jeong, N. Kitajima, and F. Takahashi, “The QCD Axion from Aligned Axions and Diphoton Excess,” arXiv:1512.05295 [hep-ph].
+
+[68] S. D. McDermott, P. Meade, and H. Ramani, “Singlet Scalar Resonances and the Diphoton Excess,” arXiv:1512.05326 [hep-ph].
+
+[69] J. Ellis, S. A. R. Ellis, J. Quevillon, V. Sanz, and T. You, “On the Interpretation of a Possible ~ 750 GeV Particle Decaying into $\gamma\gamma$,” arXiv:1512.05327 [hep-ph].
+
+[70] M. Low, A. Tesi, and L.-T. Wang, “A pseudoscalar decaying to photon pairs in the early LHC run 2 data,” arXiv:1512.05328 [hep-ph].
+
+[71] B. Bellazzini, R. Franceschini, F. Sala, and J. Serra, “Goldstones in Diphotons,” arXiv:1512.05330 [hep-ph].
+
+[72] C. Petersson and R. Torre, “The 750 GeV diphoton excess from the goldstino superpartner,” arXiv:1512.05333 [hep-ph].
+
+[73] E. Molinaro, F. Sannino, and N. Vignaroli, “Strong dynamics or axion origin of the diphoton excess,” arXiv:1512.05334 [hep-ph].
+
+[74] J. Jaeckel, M. Jankowiak, and M. Spannowsky, “LHC probes the hidden sector,” Phys. Dark Univ. 2 (2013) 111, [arXiv:1212.3620 [hep-ph]].
\ No newline at end of file
diff --git a/samples/texts_merged/3707129.md b/samples/texts_merged/3707129.md
new file mode 100644
index 0000000000000000000000000000000000000000..ea1d67cf7cabdfc279e705fee986e33ae94093e4
--- /dev/null
+++ b/samples/texts_merged/3707129.md
@@ -0,0 +1,142 @@
+
+---PAGE_BREAK---
+
+**CHEMISTRY-11**
+
+Name:
+
+Class:
+
+ID:
+
+Date: / /
+
+Time Allowed: 40 Min.
+
+Marks Total: 25 Marks Obtained:
+
+Maximum Marks: 09
+
+(OBJECTIVE TYPE)
+
+Time Allowed: 10 Min.
+
+NOTE: Tick The Correct Option:
+
+1. The cathode reaction in the electrolysis of dil. $H_2SO_4$ with Pt electrodes is:
+
+(a) Reduction
+
+(b) Oxidation
+
+(c) Both oxidation and reduction
+
+(d) Neither oxidation nor reduction
+
+2. Oxidation number of chromium in $Cr_2O_7^{2-}$ is:
+
+(a) +3
+
+(b) +4
+
+(c) +5
+
+(d) +6
+
+3. The oxidation number of Cl in $HClO_4$ is:
+
+(a) +2
+
+(b) +3
+
+(c) +5
+
+(d) +7
+
+4. A non-spontaneous redox reaction takes place in:
+
+(a) Electrolytic cell
+
+(b) Galvanic cell
+
+(c) Voltaic cell
+
+(d) Both 'b' & 'c'
+
+5. Which oxidation number can't be shown by oxygen?
+
+(a) -1
+
+(b) -2
+
+(c) +2
+
+(d) None of these
+
+6. In balancing redox equations by the ion-electron method in basic medium, H is balanced by:
+
+(a) $H^+$
+
+(b) $OH^-$
+
+(c) $H_2O$
+
+(d) None
+
+7. During the electrolysis of aqueous $CuSO_4$ solution, ________ is liberated at the anode.
+
+(a) Cu
+
+(b) $SO_2$
+
+(c) $H_2$
+
+(d) $O_2$
+
+8. During electrolytic purification of copper, anode is made up of:
+
+(a) Pure copper
+
+(b) Impure copper
+
+(c) Graphite
+
+(d) Platinum
+
+9. In a galvanic cell, the anode is ______ charged.
+
+(a) Positively
+
+(b) Negatively
+
+(c) Neutrally
+
+(d) Both 'a' & 'b'
+
+Maximum Marks: 16
+
+(SUBJECTIVE TYPE)
+
+Time Allowed: 30 Min.
+
+SECTION-I
+
+Q.2: Give brief answers to the following questions: (12)
+
+i. Define electrochemistry.
+
+ii. Calculate the oxidation number of underlined element. a) $H_3PO_3$ b) $Ca(ClO_3)_2$.
+
+iii. Differentiate between oxidation and reduction.
+
+iv. Explain the difference between ionization and electrolysis.
+
+v. How is anodized aluminum prepared?
+
+vi. What is a salt bridge? What are its functions?
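For reference, the oxidation-number items above (questions 2, 3 and Q.2 part ii) all reduce to the same charge-balance arithmetic. A small Python sketch of that calculation (an illustration, not part of the exam paper):

```python
def oxidation_number(total_charge, known, n_unknown=1):
    """Oxidation state of the unknown element by charge balance:
    the weighted oxidation states of all atoms sum to the total charge.

    known: list of (oxidation_state, atom_count) for the other elements;
    n_unknown: how many atoms of the unknown element the formula contains."""
    return (total_charge - sum(s * n for s, n in known)) / n_unknown

# Cr in Cr2O7^2-:  2*Cr + 7*(-2) = -2  ->  Cr = +6
# P  in H3PO3:     3*(+1) + P + 3*(-2) = 0  ->  P = +3
# Cl in Ca(ClO3)2: +2 + 2*Cl + 6*(-2) = 0   ->  Cl = +5
```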
+
+SECTION-II
+
+NOTE: Attempt All Questions: (04)
+
+Q.3: Define electrolysis and explain the electrolysis of a very dilute solution of $NaNO_3$.
\ No newline at end of file
diff --git a/samples/texts_merged/3863109.md b/samples/texts_merged/3863109.md
new file mode 100644
index 0000000000000000000000000000000000000000..543b73d239ec3669ca8edc40876611673af904d9
--- /dev/null
+++ b/samples/texts_merged/3863109.md
@@ -0,0 +1,592 @@
+
+---PAGE_BREAK---
+
+A barcode shape descriptor for curve point cloud data
+
+Anne Collinsa,*,1, Afra Zomorodianb,2, Gunnar Carlssona,1, Leonidas J. Guibasb,2
+
+aDepartment of Mathematics, Stanford University, 450 Serra Mall, Bldg. 380, Stanford, CA 94305-2125, USA
+
+bDepartment of Computer Science, Stanford University, Stanford, CA 94305, USA
+
+**Abstract**
+
+In this paper, we present a complete computational pipeline for extracting a compact shape descriptor for curve point cloud data (PCD). Our shape descriptor, called a *barcode*, is based on a blend of techniques from differential geometry and algebraic topology. We also provide a metric over the space of barcodes, enabling fast comparison of PCDs for shape recognition and clustering. To demonstrate the feasibility of our approach, we implement our pipeline and provide experimental evidence in shape classification and parametrization.
+
+© 2004 Elsevier Ltd. All rights reserved.
+
+*Keywords:* Barcode; Descriptor; Persistence; Tangent complex; Point cloud data; Curves
+
+# 1. Introduction
+
+In this paper, we present a complete computational pipeline for extracting a compact shape descriptor for curve point cloud data (PCD). Our shape descriptor, called a *barcode*, is based on a blend of techniques from differential geometry and algebraic topology. We also provide a metric over the space of barcodes, enabling fast comparison of PCDs for shape recognition and clustering. To demonstrate the feasibility of our approach, we implement our pipeline and provide experimental evidence in shape classification and parametrization.
+
+*Corresponding author.
+
+*E-mail addresses:* collins@math.stanford.edu (A. Collins), afra@cs.stanford.edu (A. Zomorodian), gunnar@math.stanford.edu (G. Carlsson), guibas@cs.stanford.edu (L.J. Guibas).
+
+¹Research supported, in part, by NSF under grant DMS 0101364.
+
+²Research supported, in part, by NSF/DARPA under grant CARGO 0138456 and by NSF under grant ITR 0086013.
+
+# 1.1. Prior work
+
+Shape analysis is a well-studied problem in many areas of computer science, such as vision, graphics, and pattern recognition. Researchers in vision first introduced the idea of using compact representations of shapes, or *shape descriptors*, for two-dimensional data or images. They derived descriptors using diverse methods, such as topological invariants, moment invariants, morphological methods for skeletons or medial axes, and elliptic Fourier parameterizations [1–3]. More recently, the availability of large sets of digitized three-dimensional shapes has generated interest in 3D descriptors [4,5], with techniques such as shape distributions [6] and multi-resolution Reeb graphs [7]. Ideally, a shape descriptor should be invariant to rigid transformations and coordinatize the shape space in a meaningful way.
+
+The idea of using *point cloud data* or *PCD* as a display primitive was introduced early [8], but did not become popular until the recent emergence of massive datasets. PCDs are now utilized in rendering [9,10], shape representation [11,12], and modeling [13,14], among
+---PAGE_BREAK---
+
+other uses. Furthermore, PCDs are often the only
+possible primitive for exploring shapes in higher
+dimensions [15–17].
+
+## 1.2. *Our work*
+
+In a previous paper, we initiated a study of shape
+description via the application of persistent homology to
+tangential constructions [18]. We proposed a robust
+method that combines the differentiating power of
+geometry with the classifying power of topology. We
+also showed the viability of our method through explicit
+calculations for one- and two-dimensional mathematical
+objects (curves and surfaces). In this paper, we shift our
+focus from theory to practice, illustrating the feasibility
+of our method in the PCD domain. We focus on curves
+in order to explore the issues that arise in the application
+of our techniques. We must emphasize, however, that we
+view curves as one-dimensional manifolds, and insist
+that all our solutions extend to n-dimensional manifolds.
+Therefore, we avoid heuristics based on abusing
+characteristics of curve PCDs and search for general
+techniques that will be suitable in all dimensions. We
+briefly discuss computing our structures for two-dimensional
+manifolds, or surfaces, in Section 5.5.
+
+## 1.3. Overview
+
+The rest of the paper is organized as follows. In
+Section 2 we review the theoretical background for our
+shape descriptor. We believe that an intuitive understanding
+of this material is sufficient for appreciating the
+results of this paper. As such, this section is brief in its
+treatment, and we refer the interested reader to our
+previous paper for a detailed description [18]. Section 3
+contains the algorithms for computing barcodes for
+PCDs sampled from closed smooth curves. We also
+describe the computation of the metric over the space of
+barcodes. We apply our techniques to families of
+
+algebraic curves in Section 4 to demonstrate their
+effectiveness. In Section 5, we extend our system to
+general PCDs that may include non-manifold points,
+singularities, boundary points, or noise. We then
+illustrate the power of our methods through applications
+to shape classification and parametrization in Section 6.
+
+## 2. Background
+
+In this section, we review the theoretical background
+necessary for our work. To make the discussion
+accessible to the non-specialist, our exposition will have
+an intuitive flavor.
+
+### 2.1. *Filtered simplicial complex*
+
+Let $S$ be a set of points. A $k$-simplex is a subset of $S$ of size $k+1$ [19]. A simplex may be realized geometrically as the convex hull of $k+1$ affinely independent points in $\mathbb{R}^d$, $d \ge k$. A realization gives us the familiar low-dimensional $k$-simplices: *vertices*, *edges*, and *triangles*. A simplicial complex is a set $K$ of simplices on $S$ such that if $\sigma \in K$ then $\tau \subset \sigma$ implies $\tau \in K$. A subcomplex of $K$ is a simplicial complex $L \subseteq K$. A filtration of a complex $K$ is a nested sequence of complexes $\emptyset = K^0 \subseteq K^1 \subseteq \dots \subseteq K^m = K$. We call $K$ a filtered complex and show a small example in Fig. 1.
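To make the subcomplex condition concrete, the following small Python sketch (our illustration, not from the paper) checks that a list of (entry value, simplex) pairs is a valid filtration, i.e. every proper face of a simplex enters no later than the simplex itself:

```python
from itertools import combinations

def proper_faces(simplex):
    """All nonempty proper faces of a simplex given as a frozenset."""
    s = tuple(simplex)
    return [frozenset(c) for k in range(1, len(s))
            for c in combinations(s, k)]

def is_filtration(pairs):
    """pairs: list of (value, simplex) with simplex a frozenset of vertices.
    Valid iff, processed in order of (value, dimension), every proper face
    of each simplex is already present."""
    seen = set()
    for _, s in sorted(pairs, key=lambda p: (p[0], len(p[1]))):
        if any(f not in seen for f in proper_faces(s)):
            return False
        seen.add(s)
    return True
```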
+
+### 2.2. *Persistent homology*
+
+Suppose we are given a shape $X$ that is embedded in $\mathbb{R}^3$. Homology is an algebraic invariant that counts the topological attributes of this shape in terms of its Betti numbers $\beta_i$ [19]. Specifically, $\beta_0$ counts the number of components of $X$. $\beta_1$ is the rank of a basis for the tunnels through $X$. These tunnels may be viewed as forming a graph with cycles [20]. $\beta_2$ counts the number of voids in $X$, or spaces that are enclosed by the shape. In this
+
+Fig. 1. A filtered complex with newly added simplices highlighted. We show the persistent interval set in each dimension below the filtration. Each persistent interval shown is the lifetime of a topological attribute, created and destroyed by the simplices at the low and high endpoints, respectively.
+---PAGE_BREAK---
+
+manner, homology gives a finite compact description of
+the connectivity of the shape. Since homology is an
+invariant, we may represent our shape combinatorially
+with a simplicial complex that has the same connectivity
+to get the same result.
+
+Suppose now that we are also given a process for
+constructing our shape from scratch. Such a growth
+process gives an evolving shape that undergoes topological
+changes: new components appear and connect to
+the old ones, tunnels are created and closed off, and
+voids are enclosed and filled in. *Persistent homology* is an
+algebraic invariant that identifies the birth and death of
+each topological attribute in this evolution [21,22]. Each
+attribute has a lifetime during which it contributes to
+some Betti number. We deem important those attributes
+with longer lifetimes, as they *persist* in being features of
+the shape. We may represent this lifetime as an interval,
+as shown in Fig. 1 for our small example. A feature, such
+as the first component in any filtration, may live forever
+and therefore have a half-infinite interval as its lifetime.
+Persistent homology describes the connectivity of our
+evolving shape via a multiset of intervals in each
+dimension. If we represent our shape with a simplicial
+complex, we may also represent its growth with a filtered
+complex.
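The evolution just described can be made concrete for $\beta_0$: merging components with a union-find structure as edges enter the filtration yields the interval multiset directly. The following minimal Python sketch (our illustration, not code from the paper) assumes each vertex and edge carries the filtration value at which it appears, and applies the standard elder rule, under which the younger of two merging components dies:

```python
def beta0_barcode(vertices, edges):
    """Persistence intervals for beta_0 of a filtered graph.

    vertices: dict {v: birth value}; edges: list of (value, u, v).
    Returns (birth, death) pairs; death None marks a half-infinite interval."""
    parent = {v: v for v in vertices}
    birth = dict(vertices)
    intervals = []

    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]   # path halving
            v = parent[v]
        return v

    for t, u, v in sorted(edges):
        ru, rv = find(u), find(v)
        if ru == rv:
            continue                        # edge closes a cycle; no merge
        if birth[ru] > birth[rv]:           # elder rule: younger component dies
            ru, rv = rv, ru
        intervals.append((birth[rv], t))
        parent[rv] = ru
    roots = {find(v) for v in parent}       # survivors live forever
    intervals.extend((birth[r], None) for r in roots)
    return intervals
```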
+
+## 2.3. Filtered tangent complex
+
+We examine the geometry of our shape by looking at
+the tangents at each point of the shape. Although our
+approach extends to any dimension, we restrict our
+definitions to curves as they are the focus of this paper
+and simplify the description.
+
+Let $X$ be a curve in $\mathbb{R}^2$. We define $T^0(X) \subseteq X \times \mathbb{S}^1$ to be the set of the tangents at all points of $X$. That is,
+
+$$ T^0(X) = \left\{ (x, \zeta) \left| \lim_{t \to 0} \frac{d(x + t\zeta, X)}{t} = 0 \right. \right\}. $$
+
+A point $(x, \zeta)$ in $T^0(X)$ represents a tangent vector at a
+point $x \in X$ in the direction $\zeta \in \mathbb{S}^1$. The *tangent complex*
+of $X$ is the closure of $T^0$, $T(X) = \overline{T^0(X)} \subseteq \mathbb{R}^2 \times \mathbb{S}^1$.
+$T(X)$ is equipped with a projection $\pi: T(X) \to X$ that
+projects a point $(x, \zeta) \in T(X)$ in the tangent complex
+onto its *basepoint* $x \in X$, and $\pi^{-1}(x) \subseteq T(X)$ is the *fiber*
+*at* $x$.
+
+We may filter the tangent complex using the curvature
+at each point. We let $T_\kappa^0(X)$ be the set of points $(x, \zeta) \in
+T^0(X)$ where the curvature $\kappa(x)$ at $x$ is less than $\kappa$, and
+define $T_\kappa(X)$ to be the closure of $T_\kappa^0(X)$ in $\mathbb{R}^2 \times \mathbb{S}^1$. We call
+the $\kappa$-parametrized family of spaces $\{T_\kappa(X)\}_{\kappa \ge 0}$ the
+*filtered tangent complex*, denoted by $T^\text{filt}(X)$.
+
+## 2.4. Barcodes
+
+We get a compact descriptor by applying persistent
+homology to the filtered tangent complex of our shape.
+
+That is, the descriptor examines the connectivity of not
+the shape itself, but that of a derived space that is
+enriched with geometric information about the shape.
+We define a *barcode* to be the resulting set of persistence
+intervals for $T^{\text{filt}}(X)$ in each dimension. For curves, the
+only interesting barcode is usually the $\beta_0$-barcode which
+describes the lifetimes of the components in the growing
+tangent complex. We also define a quasi-metric, a metric
+that has $\infty$ as a possible value, over the collection of all
+barcodes. Our metric enables us to utilize barcodes as
+shape descriptors, as we can compare shapes by
+measuring the difference between their barcodes.
+
+Let $I, J$ be any two intervals in a barcode. We define their dissimilarity $\delta(I, J)$ to be the length of their symmetric difference: $\delta(I, J) = |I \cup J - I \cap J|$. Note that $\delta(I, J)$ may be infinite. Given a pair of barcodes $B_1$ and $B_2$, a matching is a set $M(B_1, B_2) \subseteq B_1 \times B_2 = \{(I, J) | I \in B_1 \text{ and } J \in B_2\}$, so that any interval in $B_1$ or $B_2$ occurs in at most one pair $(I, J)$. Let $M_1, M_2$ be the intervals from $B_1, B_2$, respectively, that are matched in $M$, and let $N$ be the non-matched intervals $N = (B_1 - M_1) \cup (B_2 - M_2)$. Given a matching $M$ for $B_1$ and $B_2$, we define the distance of $B_1$ and $B_2$ relative to $M$ to be the sum
+
+$$ \mathcal{D}_M(B_1, B_2) = \sum_{(I,J) \in M} \delta(I, J) + \sum_{L \in N} |L|. \quad (1) $$
+
+We now look for the best possible matching to define the
+quasi-metric:
+
+$$ \mathcal{D}(B_1, B_2) = \min_M \mathcal{D}_M(B_1, B_2). $$
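As an illustration (ours, not the authors' implementation), the quasi-metric can be evaluated by brute force for small barcodes of finite intervals. Padding the shorter barcode with zero-length dummy intervals makes every matching full, and matching an interval $J$ to a dummy costs $|J|$, i.e. the penalty for leaving $J$ unmatched; since $\delta(I, J) \le |I| + |J|$, the minimum over full matchings equals the minimum in the definition:

```python
from itertools import permutations

def delta(I, J):
    """Length of the symmetric difference of two finite intervals."""
    (a, b), (c, d) = I, J
    overlap = max(0.0, min(b, d) - max(a, c))
    return (b - a) + (d - c) - 2.0 * overlap

def barcode_distance(B1, B2):
    """Minimum of Eq. (1) over all matchings, by brute force.

    Exponential in the barcode size; intended only for small barcodes."""
    if len(B1) > len(B2):
        B1, B2 = B2, B1
    padded = list(B1) + [(0.0, 0.0)] * (len(B2) - len(B1))
    return min(sum(delta(I, J) for I, J in zip(perm, B2))
               for perm in permutations(padded))
```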
+
+# 3. Computing barcodes
+
+In this section, we present a complete pipeline for computing barcodes for a PCD. Throughout this section, we assume that our PCD P contains samples from a smooth closed curve X ⊂ R². Before we compute the barcode, we need to construct the tangent complex. Since we only have samples from the original space, we can only approximate the tangent complex. We begin by computing a new PCD, π⁻¹(P) ⊂ T(X), that samples the tangent complex for our shape. To capture its homology, we first approximate the underlying space and then compute a simplicial complex that represents its connectivity. We filter this complex by estimating the curvature at each point of π⁻¹(P). We conclude this section by describing the barcode computation and giving an algorithm for computing the metric on the barcode space.
+
+## 3.1. Fibers
+
+Suppose we are given a PCD P, as shown in Fig. 2(a).
+We wish to compute the fiber at each point to generate a
+---PAGE_BREAK---
+
+Fig. 2. Given a noisy PCD P (a), we compute the fibers $\pi^{-1}(P)$ (b) by fitting lines locally. In the volume, the z-axis corresponds to tangent angle, so the top and the bottom of the volume are glued. We center $\epsilon$-balls with $\epsilon = 0.05$ at the fiber points to get a space (c) that approximates the tangent complex $T(X)$. The fibers and union of balls are colored according to curvature using the hot colormap shown. The curvature estimates appear noisy because of the small variation. We capture the topology of the union of balls using the simplicial complex (d). We show the $\alpha$-complex $\mathcal{A}_{\epsilon}$ for the union of balls in the figure (with $\alpha = \epsilon$) as it has a nice geometric realization. In practice, we utilize the Rips complex. Applying persistent homology, we get the $\beta_0$-barcode (e).
+
+new PCD $\pi^{-1}(P)$ that samples the tangent complex $T(X)$. Naturally, we must estimate the tangent directions, as we do not have the underlying shape $X$ from which $P$ was sampled. We do so by approximating the tangent line to the curve $X$ at point $p \in P$ via a total least squares fit that minimizes the sum of the squares of the perpendicular distances of the line to the point's nearest neighbors. Let $S$ be the $k$ nearest neighbors to $p$, and let $x_0 = \frac{1}{k} \sum_{i=1}^k x_i$ be the average of the points in $S$. We assume that the best line passes through $x_0$. In general, the hyperplane $P(n, x_0)$ in $\mathbb{R}^n$ which is normal to $n$ and passes through the point $x_0$ has equation $(x-x_0) \cdot n = 0$. The perpendicular distance from any point $x_i \in S$ to this hyperplane is $|(x_i - x_0) \cdot n|$, provided that $|n| = 1$. Let $M$ be the matrix whose $i$th row is $(x_i - x_0)^T$. Then $Mn$ is the vector of perpendicular distances from points in $S$ to the hyperplane $P(n, x_0)$, and the total least squares (TLS) problem is to minimize $|Mn|^2$. The eigenvector corresponding to the smallest eigenvalue of the covariance matrix $M^T M$ is the normal to the hyperplane $P(n, x_0)$ that best approximates the neighbor set $S$. Therefore, for a point $p$ in two dimensions, the
+
+fiber $\pi^{-1}(p)$ contains the eigenvector corresponding to
+the larger eigenvalue, as well as the vector pointing in
+the reverse direction.
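The TLS fit above amounts to an eigendecomposition of the covariance matrix. A minimal NumPy sketch (hypothetical helper name, assuming a 2-D neighbor set) is:

```python
import numpy as np

def tangent_fiber(points):
    """TLS tangent at p from its k nearest neighbors (2-D case).

    points: (k, 2) array-like of neighbors.  Returns the unit tangent
    direction and its reverse, i.e. the two points of the fiber over p."""
    X = np.asarray(points, dtype=float)
    x0 = X.mean(axis=0)              # best-fit line passes through the mean
    M = X - x0                       # rows are (x_i - x0)^T
    # eigh returns eigenvalues in ascending order; the eigenvector of
    # M^T M with the largest eigenvalue spans the tangent direction,
    # while the smallest gives the normal minimizing |M n|^2
    w, V = np.linalg.eigh(M.T @ M)
    zeta = V[:, np.argmax(w)]
    return zeta, -zeta
```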
+
+We note that it is better to use TLS here than ordinary least squares (OLS), as the optimal line found by the former method is independent of the parametrization of the points. Also, when the underlying curve is not smooth, we may use TLS to identify points near the singularities by observing when the eigenvalues are close to each other.
+
+Choosing a correct neighborhood set is a fundamental issue in PCD computation and relates to the correct recovery of the lost topology and embedding of the underlying shape. The neighbor set $S$ may contain either the $k$ nearest neighbors to $p$, or all points within a disc of radius $\epsilon$. The appropriate value of $k$ or $\epsilon$ depends on local sampling density, local feature size, and noise, and may vary from point to point. It is standard practice to set these parameters empirically [14,15,17], although recent work on automatic estimation of neighborhood sizes seems promising [23]. In our current software, we estimate $k$ for each data set independently. We hope to incorporate automatic estimation into our software in the near future.
+
+## 3.2. Approximated T(X)
+
+We now have a sampling $\pi^{-1}(P)$ of the tangent complex $T(X)$, as shown in Fig. 2(b) for our example. This set is discrete and has no interesting topology. The usual approach is to center an $\epsilon$-ball $B_\epsilon(p) = \{x | d(p, x) \le \epsilon\}$, a ball of radius $\epsilon$, at each point of $\pi^{-1}(P)$. This approach is based on the assumption that the underlying space is a manifold, or locally flat. Our approximation to $T(X)$ is the union of $\epsilon$-balls around the fiber points:
+
+$$T(X) \approx \bigcup_{p \in \pi^{-1}(P)} B_{\epsilon}(p).$$
+
+Two issues arise, however: first, we need a metric $d$ on $\mathbb{R}^2 \times \mathbb{S}^1$ so that we can define what an $\epsilon$-ball is, and second, we need to determine an appropriate value for $\epsilon$.
+
+We define a Euclidean-like metric generally on $\mathbb{R}^n \times \mathbb{S}^{n-1}$ as $ds^2 = dx^2 + \omega^2 d\zeta^2$. That is, the squared distance between the tangent vectors $\tau = (x, \zeta)$ and $\tau' = (x', \zeta')$ is given by
+
+$$d^2(\tau, \tau') = \sum_{i=1}^{n} (x_i - x_i')^2 + \omega^2 \sum_{i=1}^{n} (\zeta_i - \zeta_i')^2,$$
+
+where $\omega$ is a scaling factor. Here, the distance between the two directions $\zeta, \zeta' \in \mathbb{S}^{n-1}$ is the chord length as opposed to the arc length. The first measure approximates the second quite well when the distances are small, and is also much faster to compute. The choice of the scaling factor $\omega$ in our metric depends on the nature of the PCD and our goals in computing the tangent complex. A large value of $\omega$ will spread the points of $\pi^{-1}(P)$ out in the angular directions. This is useful for segmenting an object composed of flat pieces, such as the letter 'V'. However, too much separation can lead to errors for smooth curves with high-curvature regions, such as an eccentric ellipse. In such regions, the tangent direction changes rapidly between neighboring basepoints, yielding points that are further apart in $\pi^{-1}(P)$. In these cases, a smaller value of $\omega$ maintains the connectivity of $X$, while still separating the directions enough to compute the barcodes for $T(X)$. Setting $\omega = 0$ projects the fibers $\pi^{-1}(P)$ back to their basepoints $P$.
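For concreteness, the scaled distance is a one-liner; the sketch below (our own naming) takes tangent vectors as (basepoint, unit direction) pairs:

```python
import math

def tangent_distance(tau, tau_prime, omega):
    """Distance on R^n x S^(n-1): Euclidean between basepoints, plus the
    omega-scaled chord length between unit tangent directions."""
    (x, z), (xp, zp) = tau, tau_prime
    dx2 = sum((a - b) ** 2 for a, b in zip(x, xp))
    dz2 = sum((a - b) ** 2 for a, b in zip(z, zp))  # squared chord length
    return math.sqrt(dx2 + omega ** 2 * dz2)
```

With $\omega = 0$ this collapses to the distance between basepoints, matching the projection of the fibers back to $P$.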
+
+There is, of course, no perfect choice for $\epsilon$, as it depends not only on the factors described in the previous section, but also on the value of the scale factor $\omega$ in the metric. We need to choose $\epsilon$ to be at least large enough so that the basepoints are properly connected when $\omega = 0$. When $\omega$ is small, then the starting $\epsilon$ is usually sufficient. When $\omega$ is large, the union of $\epsilon$-balls is less connected, which may be precisely what we want, such as for the letter 'V'. We have devised a rule of thumb for setting $\epsilon$. Recall that curvature is defined to be $\kappa = \frac{d\phi}{ds}$, where $\phi$ is the tangent angle and $s$ is arc-length along $X$. Then, two points that are $\Delta x$ apart in a region with curvature $\kappa$ have tangent angles roughly $\Delta\phi \approx \kappa\Delta x$ apart. Since the chord length $\Delta\zeta$ approximates the arc length $\Delta\phi$ on $\mathbb{S}^1$ for small values, the squared distance between neighboring points in $\pi^{-1}(P)$ is approximately $\Delta x^2(1 + (\omega\kappa)^2)$. So,
+
+$$ \varepsilon \approx \frac{\sqrt{\Delta x^2 (1 + (\omega\kappa)^2)}}{2}. \qquad (2) $$
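As a sketch, Eq. (2) translates directly into a hypothetical helper (the name is ours):

```python
import math

def suggest_epsilon(dx, omega, kappa_max):
    """Rule-of-thumb ball radius from Eq. (2): half the estimated distance
    between neighboring fiber points at the worst-case curvature."""
    return math.sqrt(dx ** 2 * (1 + (omega * kappa_max) ** 2)) / 2
```

For example, with sample spacing 0.02, $\omega = 0.1$, and maximum curvature 50 (the ellipse family of Section 4.1), it suggests $\epsilon \approx 0.05$.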
+
+### 3.3. Complex
+
+We now have an approximation to the tangent complex as a union of balls. To compute its topology efficiently, we require a combinatorial representation of this union as a simplicial complex. This simplicial complex $T(P)$ must have the same connectivity as the union of balls or, ideally, the same homotopy type.
+
+A commonly used complex in algebraic topology is the *Čech complex*. For a set of $m$ points $M$, the Čech complex looks at the intersection pattern of the union of $\epsilon$-balls:
+
+$$ \mathcal{C}_{\varepsilon}(M) = \left\{ \text{conv } T \mid T \subseteq M, \bigcap_{t \in T} B_{\varepsilon}(t) \neq \emptyset \right\}. $$
+
+The Čech complex is homotopy equivalent to the union of balls. Unfortunately, it is also expensive to compute, as we need to examine all subsets of the point set for potentially $\sum_{k=1}^{m} \binom{m}{k} = 2^{m} - 1$ simplices. Furthermore, the complex may have high-dimensional simplices even for low-dimensional point sets. If four balls have a common intersection in two dimensions, the Čech complex for the point set will include a three-dimensional simplex.
+
+A common approximation to the Čech complex is the *Rips complex* [24]. Intuitively, this complex only looks at the intersection pattern between pairs of balls, and adds higher simplices whenever all of their lower sub-simplices are present:
+
+$$ \mathcal{R}_{\varepsilon}(M) = \{\text{conv } T \mid T \subseteq M,\ d(s,t) \le \varepsilon \text{ for all } s, t \in T\}. $$
+
+Note that $\mathcal{C}_{\varepsilon/2}(M) \subseteq \mathcal{R}_{\varepsilon}(M)$ for all $\varepsilon$, and that the Rips complex may have different connectivity than the union of balls. The Rips complex is also large, requiring $O(\binom{m}{k})$ time to compute its $k$-simplices. However, it is easier to compute than the Čech complex and is often used in practice.
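At the level of vertices and edges, building the Rips complex is just a pairwise distance threshold. A brute-force sketch (our naming; a spatial index would avoid the quadratic scan):

```python
def rips_graph(points, eps, dist):
    """1-skeleton of the Rips complex at scale eps: every point is a vertex,
    and two points are joined by an edge when their distance is at most eps."""
    n = len(points)
    edges = [(i, j) for i in range(n) for j in range(i + 1, n)
             if dist(points[i], points[j]) <= eps]
    return list(range(n)), edges
```

Higher Rips simplices, when needed, are exactly the cliques of this graph.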
+
+Since we are computing $\beta_0$-barcodes in this paper, we only require the vertices and edges in $T(P)$. At this level, the Čech and Rips complexes are identical. For higher-dimensional PCDs such as points from surfaces, however, we will need triangles and, at times, tetrahedra to compute the barcodes. We are therefore examining methods for computing small complexes that represent the union of balls. A potential approach utilizes $\alpha$-complexes $\mathcal{A}_\alpha$, subcomplexes of the *Delaunay complex*, the dual to the Voronoi diagram of the points [25,26]. These complexes are small and geometrically realizable, and their highest-dimensional simplices have the same dimension as the embedding space. We may view our metric on $\mathbb{R}^n \times \mathbb{S}^{n-1}$ as Euclidean by first scaling the tangents on $\mathbb{S}^{n-1}$ to lie on a sphere of radius $\omega$. Then, we may compute $\alpha$-complexes easily, provided we connect the complex correctly in the tangent dimension across the top/bottom boundary. Fig. 2 displays renderings of our space with the correct scaling as well as an $\alpha$-complex with $\alpha = \varepsilon$. A fundamental problem with this approach, however, is that we need to filter $\alpha$-complexes by curvature. Currently, we do not know whether this is possible. An alternative and attractive method is to compute the *witness complex* [27]. This complex utilizes a subsample of landmark points to compute small complexes that approximate the topology of the underlying ball set.
+
+### 3.4. Filtered tangent complex
+
+We now have a combinatorial representation of the tangent complex. We next need to filter the tangent complex using the curvature at the basepoint. Recall that the *curvature* at a point $x \in X$ in direction $\zeta$ is $\kappa(x, \zeta) = 1/\rho(x, \zeta)$, where $\rho$ is the radius of the osculating circle to $X$ at $x$ in direction $\zeta$. We need to estimate this curvature at each point of $\pi^{-1}(P)$, in order to construct the filtration on $T(P)$ required to compute barcodes. We then assign to each simplex the maximum of the curvatures at its vertices.
+
+Rather than estimating the osculating circle, we estimate the osculating parabola, as this estimation is computationally more efficient. Two curves $y = f(x)$ and $y = g(x)$ in the plane have second-order contact at $x_0$ iff $f(x_0) = g(x_0)$, $f'(x_0) = g'(x_0)$ and $f''(x_0) = g''(x_0)$. So, if $X$ admits a circle of second-order contact, then it also admits a parabola of second-order contact. Consider the coordinate frame centered at $x \in X$ with vertical axis normal to $X$. Suppose the curvature at $x$ is $\kappa = 1/\rho$, that is, the osculating circle has equation $x^2 + (y - \rho)^2 = \rho^2$. This circle has derivatives $y' = 0$ and $y'' = 1/\rho$ at $x$. Integrating, we find that the parabola which has second-order contact with this circle, and hence with $X$, has equation $y = x^2/(2\rho)$, as shown in Fig. 3.
+
+We again approximate the shape locally using a set of neighborhood points for each point in $P$. To find the best-fit parabola, we do not utilize the TLS approach as in Section 3.1, as the equations that minimize the perpendicular distance to a parabola are rather unpleasant. Instead, we use OLS which minimizes the vertical distance to the parabola. Naturally, the resulting parabola depends upon the coordinate frame in which the points are expressed. Fortunately, we have already determined the appropriate frame to use in computing the fibers in Section 3.1. Once we compute the fiber at $p$, we move the nearest neighbors $S$ to a coordinate frame with vertical axis the TLS best-fit normal direction. We set the origin to be $x_0$ in this coordinate frame (the average of the points in $S$) although we do not insist that the vertex of the parabola lies precisely there. We then fit a vertical parabola $f(x) = c_0 + c_1x + c_2x^2$ as follows. Suppose the collection $S$ of $k$ neighbor points is $S = \{(x_1, y_1), \dots, (x_k, y_k)\}$. Let
+
+$$A = \begin{pmatrix} 1 & x_1 & x_1^2 \\ 1 & x_2 & x_2^2 \\ \vdots & \vdots & \vdots \\ 1 & x_k & x_k^2 \end{pmatrix},$$
+
+Fig. 3. The osculating circle and parabola to $X$ (dashed) at $x$. The circle has center $(0, \rho)$, the parabola has focus $(0, \rho/2)$. The curvature of $X$ at $x$ is $\kappa = 1/\rho$.
+
+$$C = (c_0, c_1, c_2)^T,$$
+
+$$Y = (y_1, \dots, y_k)^T.$$
+
+If all points of $S$ lie on $f$, then $AC = Y$; thus $\eta = AC - Y$ is the vector of errors that measures the distance of $f$ from $S$. We wish to find the vector $C$ that minimizes $|\eta|$. Setting the derivatives of $|\eta|^2$ to zero with respect to $\{c_i\}$, we solve for $C$ to get
+
+$$C = (A^T A)^{-1} A^T Y.$$
+
+The curvature of the parabola $f(x) = c_0 + c_1x + c_2x^2$ at its vertex is $2c_2$, and this is our curvature estimate $\kappa$ at $p$. We use this curvature to obtain a filtration of the simplicial complex $T(P)$ that we computed in the last section. This filtered complex approximates $T^{\text{filt}}(X)$, the filtered tangent complex described in Section 2.3. For our example, Fig. 2(b) and 2(c) show the fibers $\pi^{-1}(P)$ and union of $\epsilon$-balls colored according to curvature, using the *hot* colormap.
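A minimal sketch of this curvature estimate (our own naming; we use a standard least-squares solver rather than forming $(A^T A)^{-1}$ explicitly, which is numerically safer), taking neighbors already expressed in the local TLS frame:

```python
import numpy as np

def curvature_ols(S_local):
    """Estimate curvature from neighbors in the local frame (tangent
    horizontal, normal vertical) by fitting f(x) = c0 + c1 x + c2 x^2;
    the parabola's curvature at its vertex is 2*c2."""
    x, y = S_local[:, 0], S_local[:, 1]
    A = np.column_stack([np.ones_like(x), x, x * x])
    C, *_ = np.linalg.lstsq(A, y, rcond=None)   # least-squares C = (c0, c1, c2)
    return abs(2.0 * C[2])
```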
+
+### 3.5. Metric space of barcodes
+
+We now have computed a filtered simplicial complex that approximates $T^{\text{filt}}$ for our PCD. We next compute the $\beta_0$ barcodes using an implementation of the persistence algorithm [22]. Fig. 2(e) shows the resulting $\beta_0$-barcode for our sample PCD. As expected, the barcode contains two long intervals, corresponding to the two persistent components of the tangent complex that represent the two tangent directions at each point of a circle. The noise in our PCD is reflected in small intervals in the barcode, which we can discard easily.
+
+To compute the metric, we modify the algorithm that we gave in a previous paper [18] so that it is numerically robust. Given two barcodes $B_1, B_2$, our algorithm computes the metric in three stages. In the first stage, we simply compare the number of half-infinite intervals in the two barcodes and return $\infty$ if the counts are unequal. Note that this makes our measure a quasi-metric. In the second stage, we compute the distance between the half-infinite intervals. This problem reduces to finding a matching that minimizes the distance between two point sets, namely the low endpoints of the two sets of half-infinite intervals. We sort the intervals according to their low endpoints and match them one-to-one according to their ranks. Given a matched pair of half-infinite intervals $I \in B_1, J \in B_2$, their dissimilarity is $\delta(I,J) = |\text{low}(I) - \text{low}(J)|$, where $\text{low}(\cdot)$ denotes the low endpoint of an interval.
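The first two stages can be sketched directly (our naming; we represent an interval as a (low, high) pair with high set to infinity for half-infinite intervals):

```python
def infinite_interval_distance(B1, B2):
    """Stages one and two of the barcode quasi-metric: compare counts of
    half-infinite intervals, then match them by rank of low endpoint."""
    inf = float('inf')
    L1 = sorted(low for low, high in B1 if high == inf)
    L2 = sorted(low for low, high in B2 if high == inf)
    if len(L1) != len(L2):
        return inf                      # unequal counts: quasi-metric blows up
    return sum(abs(a - b) for a, b in zip(L1, L2))
```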
+
+In the third stage, we compute the distance between finite intervals using a matching problem. Minimizing the distance is equivalent to maximizing the intersection length [18]. We accomplish the latter by recasting the problem as a graph problem. Given sets $B_1$ and $B_2$, we define $G(V, E)$ to be a weighted bipartite graph [20]. We place a vertex in $V$ for each interval in $B_1 \cup B_2$. After sorting the intervals, we scan them to compute all intersecting pairs between the two sets [28]. Each pair $(I, J) \in B_1 \times B_2$ adds an edge with weight $|I \cap J|$ to $E$. Maximizing the similarity is equivalent to the well-known maximum-weight bipartite matching problem. In our software, we solve this problem with the function `MAX_WEIGHT_BIPARTITE_MATCHING` from the LEDA graph library [29,30]. We then sum the dissimilarity of each pair of matched intervals, as well as the lengths of the unmatched intervals, to get the distance.
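For small barcodes, the third stage can be illustrated by brute force (our own stand-in; the actual software uses LEDA's maximum-weight bipartite matching, which scales far better than enumerating matchings):

```python
from itertools import permutations

def max_overlap(F1, F2):
    """Maximum total intersection length over one-to-one matchings of two
    sets of finite intervals, each given as a (low, high) pair."""
    if len(F1) > len(F2):
        F1, F2 = F2, F1
    def overlap(I, J):
        return max(0.0, min(I[1], J[1]) - max(I[0], J[0]))  # |I ∩ J|
    return max((sum(overlap(F1[i], F2[j]) for i, j in enumerate(p))
                for p in permutations(range(len(F2)), len(F1))), default=0.0)
```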
+
+## 4. Algebraic curves
+
+Having described our methods for computing the metric space of barcodes, we examine our shape descriptor for PCDs of families of algebraic curves. Throughout this section, we use a neighborhood of $k = 20$ points for computing fibers and estimating curvature.
+
+### 4.1. Family of ellipses
+
+Our first family of spaces consists of ellipses given by the equation $\frac{x^2}{a^2} + \frac{y^2}{b^2} = 1$. We compute PCDs for the five ellipses shown in Fig. 4 with semi-major axis $a = 0.5$ and semi-minor axes $b$ equal to 0.5, 0.4, 0.3, 0.2, and 0.1, from top to bottom. To generate the point sets, we select 50 points per unit length spaced evenly along the x- and y-axes, and then project these samples onto the true curve. The points are therefore roughly $\Delta x = 0.02$ apart. We then add Gaussian noise to each point with mean 0 and standard deviation equal to half the inter-point distance, or 0.01. For our metric, we use a scaling factor $\omega = 0.1$. To determine an appropriate value of $\epsilon$ for computing the Rips complex, we utilize our rule of thumb, Eq. (2) from Section 3.2. The maximum curvature for the ellipses shown is $\kappa_{max} = 50$, so $\epsilon \approx 0.02\sqrt{1+5^2}/2 \approx 0.05$. This value successfully connects points with close basepoints and tangent directions, while still keeping antipodal points in the individual fibers separated.
+
+Fig. 4. Family of ellipses: PCD *P*, fibers $\pi^{-1}(P)$ colored by curvature, and $\beta_0$-barcode.
+
+### 4.2. Family of cubics
+
+Our second family of spaces consists of cubics given by the equation $y = x^3 - ax$. The five cubics shown in Fig. 5 have $a$ equal to 0, 1, 2, 3, and 4, respectively. In this case, the portion of the graph sampled is approximately three by three. In order to have roughly the same number of points as for the ellipses, we select 15 points per unit length spaced evenly along the x- and y-axes, and project them as before. The points of *P* are now roughly 0.06 apart. We add Gaussian noise to each point with mean 0 and standard deviation half the inter-point distance, or 0.03. For our metric, we use $\omega = 0.5$, primarily for aesthetic reasons, as the fibers are then more spread out. The maximum curvature on the cubics is $\kappa_{max} \approx 8$, and our rule of thumb suggests that we need $\epsilon \approx 0.4$. However, $\epsilon = 0.2$ is sufficient in this case.
+
+## 5. Extensions
+
+In Section 3, we assumed that our PCD was sampled from a closed smooth curve in the plane. Our PCDs in the last section, however, violated this assumption: both families had added noise, and the family of cubics featured boundary points. Our method nevertheless performed quite well, and we would naturally like it to generalize to other misbehaving PCDs. In this section, we characterize several such phenomena. For each problem, we describe possible solutions that are restrictions of methods that work in arbitrary dimensions. In the final section, we briefly discuss computing our structures for point cloud data sampled from surfaces embedded in three dimensions.
+
+Fig. 5. Family of cubics: PCD *P*, fibers $\pi^{-1}(P)$ colored by curvature, and $\beta_0$-barcode.
+
+### 5.1. Non-manifold points
+
+Suppose that our PCD $P$ is sampled from a geometric object $X$ that is not a manifold. In other words, there are points in the object that are not contained in any neighborhood that can be parametrized by a Euclidean space of some dimension. In the case of curves, a non-manifold point appears at a *crossing*, where two arcs intersect transversally. For example, the junction point of the letter 'T' is a non-manifold point. We would like our method to behave gracefully in the presence of non-manifold points.
+
+Our approach is to create the tangent complex for $P$ as before, but remove points for which there is no well-defined linear approximation due to proximity to a singular point in $X$. Such points are identified by a relatively large ratio between the eigenvalues of the TLS covariance matrix constructed for computing fibers in Section 3.1. The point removal effectively segments the tangent complex into pieces. With appropriately large values of $\epsilon$ and $\omega$, we can still connect the remaining pieces correctly.
+
+Fig. 6 shows a PCD for the letter ‘T’. Near the non-manifold point, the tangent direction (height) and curvature (color) estimates deviate from the correct values, and the fiber over the crossing point appears as two rogue points away from the main segments. By removing all points whose eigenvalue ratio is greater than 0.25, we successfully eliminate both rogue and high-curvature points. The gap introduced in the fibers over the crossbar of the ‘T’ is narrower than the vertical (angular) spacing between components. With a well-chosen value of $\epsilon$, the $\epsilon$-balls will bridge this gap yet leave four components, as desired. For the images here, we perturbed points 0.01 apart by Gaussian noise with mean 0 and standard deviation 0.005—half the inter-point distance. The tangent complexes are displayed with angular scaling factor $\omega = 0.2$. Balls of radius $\epsilon = 0.1$ give the correct $T(P)$.
+
+### 5.2. Singularities
+
+Our PCD may be sampled from a non-smooth manifold. For curves, a non-smooth point typically appears as a “kink”, such as in the letter ‘V’. We call such a corner a *singular point* of the PCD. If the goal is simply to detect the presence of a singular point, then our solution for non-manifold points above, snipping out those points with a bad linear approximation, works quite well here, as Fig. 7 displays for the letter ‘V’ with the same parameters as above.
+
+Sometimes, however, we would like to study a family of spaces that contain singular points in order to understand shape variability. Since our curvature estimates at a non-smooth point are large, such points are included in the filtered tangent complex relatively late, breaking the complex into many components early on. Moreover, the curvature estimates correlate well with the “kinkiness” of the singularity and enable a parametrization of the family, as an example in the next section illustrates. This method extends easily to higher dimensions with higher-dimensional barcodes.
+
+Fig. 6. PCD for the letter ‘T’, all fibers in $\pi^{-1}(P)$, and fibers with eigenvalue ratio less than 0.25.
+
+Fig. 7. PCD for the letter ‘V’, all fibers in $\pi^{-1}(P)$, and fibers with eigenvalue ratio less than 0.25.
+
+### 5.3. Boundary points
+
+We may have a PCD sampled from a space with boundary. Counting boundary points of curves could be an effective tool for differentiating between them. Currently, our method does not distinguish boundary points, but simply allows them to get curvature estimates similar to their neighboring points in the PCD, as seen for the shapes in Figs. 5, 6, and 7. We propose a method, however, that distinguishes boundary points via one-dimensional *relative homology*. Around each point $p$, we may construct $B_\epsilon(p)$ with its boundary $S_\epsilon(p)$. For a manifold point, the relative homology group $H_1(B_\epsilon(p), S_\epsilon(p))$ has rank 1. Around non-manifold points, the group has rank greater than 1. At a boundary point, the group has rank 0. This strategy would empower our method, for example, to distinguish between the letters ‘I’ and ‘J’ with serifs. We plan to implement this strategy in the near future.
+
+### 5.4. Noise
+
+Our PCD samples may contain noise, which affects our method in two different ways:
+
+(1) Noise may effectively thicken a curve so that it is no longer a one-dimensional object. Once the curve is thick enough, it becomes significantly more difficult to compute reliable tangent and curvature estimates.
+
+(2) Noise may also create outliers that disrupt homology calculations by introducing spurious components that result in long barcode intervals that are indistinguishable from genuine persistent intervals.
+
+We resolve the first problem in part by averaging the estimated curvature values over a neighborhood of each point. This averaging smooths the curvature calculations, but does not fix incorrect tangent estimates, which can result in a mis-connected tangent complex. For some real-world datasets, our technique encounters problems in computing reliable tangent estimates. Fig. 8 gives an example of the resulting disconnected tangent complex for a scanned-in digit from the MNIST database of handwritten digits [31].
+
+We may resolve the second problem by considering the density of points in the point cloud, and preprocessing the PCD by removing points with low density values. Another strategy which shows promise is to postpone including a point in the filtration of $T(P)$ until it is part of a component with at least $k$ points, for some threshold size $k$. Not only does this omit singleton outliers, but it also reduces the number and size of the noisy short intervals we see for small $\kappa$ in our barcodes.
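The density-based preprocessing can be sketched as follows (names and thresholds are ours and would be tuned per dataset):

```python
import math

def remove_outliers(points, r, k_min):
    """Keep only points that have at least k_min other points within
    distance r; isolated outliers are dropped before building T(P)."""
    def neighbors(i):
        return sum(1 for j, q in enumerate(points)
                   if j != i and math.dist(points[i], q) <= r)
    return [p for i, p in enumerate(points) if neighbors(i) >= k_min]
```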
+
+### 5.5. Surfaces
+
+We end this section with a short discussion on computing tangent complexes for PCDs that are sampled from two-dimensional manifolds or *surfaces*. We have a formal presentation of our techniques for surfaces in our previous paper, where we also analyze several families of mathematical surfaces [18]. Recall that on a curve, there is at most a single osculating circle. On a smooth surface, however, the tangent space at each point is a plane, giving us a whole circle of tangent directions, as shown in Fig. 9. For each point $x$ and direction $\zeta$, there exists an osculating circle. The radius of this circle determines the curvature at that point and direction. As before, we use this curvature to filter the points $(x, \zeta)$ in the tangent complex.
+
+In practice, we only have a PCD sampled from a surface and need to approximate the tangent complex using the fibers of the points. At each point, we may approximate the surface locally by fitting a quadratic surface, sample the circle of directions, and compute curvatures for those directions. This computation gives us the fibers $\pi^{-1}(P) \subset T(X)$ that sample the tangent complex. We may then compute barcodes for the surface by completing the pipeline outlined in Fig. 2. Observe that the tangent complex for a surface will be embedded in $\mathbb{R}^5$, making the Rips complex rather large and the computation potentially expensive. We are exploring alternative structures, such as the witness complex [27], as well as alternative techniques to make the computation manageable.
+
+Fig. 8. Two examples of hand-printed scanned-in digits ‘0’ from the MNIST database. We successfully construct $T(P)$ on the left, but the misshapen left side of the right ‘0’ is too thick, resulting in tangent estimate errors and an incorrect $T(P)$.
+
+Fig. 9. Surface $X$ with the tangent plane at $x$ and the unit tangent circle $T(X)_x$. We also show a dotted portion of the osculating circle in the direction $\zeta$.
+
+## 6. Applications
+
+In this section, we discuss the application of our work to shape parametrization and classification. We have implemented a complete system for computing and visualizing filtered tangent complexes, and for computing, displaying, and comparing barcodes.
+
+### 6.1. Parametrizing a family of shapes
+
+The two families of shapes we saw in Section 4 may be easily parametrized via barcodes. For a family of ellipses
+
+$$\frac{x^2}{a^2} + \frac{y^2}{b^2} = 1,$$
+
+with parameters $a \ge b > 0$, we can show mathematically that the $\beta_0$-barcode consists of two half-infinite intervals
+
+$$\left(\frac{b}{a^2}, \infty\right)$$
+
+and two long finite intervals
+
+$$\left(\frac{b}{a^2}, \frac{a}{b^2}\right).$$
+
+For fixed *a* and decreasing *b*, the intervals should grow longer, and this is precisely the behavior of the barcodes in Fig. 4. Similarly, for a family of cubics with equation $y = x^3 - ax$ parametrized by *a*, the barcode should contain two half-infinite intervals and four long finite ones, with the exact equations being rather complex [18]. As the parameter *a* grows in value, the length of the finite intervals should increase. Once again, the barcode captures this behavior in practice, as seen in Fig. 5.
+
+An interesting application of shape parametrization is the recovery of the motion of a two-link articulated arm, as shown in Fig. 10. Suppose we have PCDs for the arm at angles from 0° to 90° in 15° intervals, and we wish to recover the sequence that describes the bending motion of this arm. As the figure illustrates, sorting the PCDs by the length of the longest finite interval in their $\beta_0$-barcodes recovers the motion sequence. The noise in the data creates many small intervals. The intrinsic shape of the arm, however, is described by the two half-infinite intervals and the two long finite ones.
+
+To illustrate the robustness of our barcode metric, we compute ten random copies of each of the seven articulations and compute the distance between them. Fig. 11 displays the resulting distance matrix, where the distances are computed using the barcode metric of Section 3.5. The distance between each pair of points is mapped to gray-scale, with black corresponding to zero distance. Pairs whose matrix entry is near the diagonal of the matrix are close in the sequence, and consequently have close articulation angles. They are also close in the barcode metric, making the diagonal of the matrix dark.
+
+We generate each arm by placing 100 points 0.02 apart, and perturbing each by Gaussian noise with mean 0 and standard deviation 0.01. We use $\omega = 0.1$ and $\epsilon = 0.05$, as for the ellipses in Fig. 4. In addition, we utilize the curvature-averaging strategy of Section 5.4 with the twenty nearest neighbors to cope with the noise.
+
+### 6.2. Classifying shapes
+
+To demonstrate the power of our technique for shape classification, we apply it to a collection of hand-printed scanned-in letters of the alphabet. We stress that our aim here is not to outperform existing OCR techniques, but to present an instructive example that illustrates the power of our techniques. It is clear that a pure topological classification cannot distinguish between the letters of the alphabet, as it partitions the Roman alphabet into three classes based on the number of holes: the letter ‘B’ has two holes; the letters ‘A’, ‘D’, ‘O’, ‘P’, ‘Q’, and ‘R’ have one hole; and the remaining letters have none.
+
+Fig. 10. A bending two-link articulated arm. The $\beta_0$-barcodes enable the recovery of the sequence, and hence the motion.
+
+Fig. 11. Distance matrix for the two-link articulated arm in Fig. 10. We have 10 copies of each of the seven articulations of the arm. We map the distance between each pair to gray-scale, with black indicating zero distance.
+
+However, we can utilize the techniques of this paper to glean more information. For example, the letters ‘U’ and ‘V’ have the same topology, but ‘U’ is smooth while ‘V’ has a kink. This singularity splits the tangent complex for ‘V’ into four pieces, as in Fig. 7, while the tangent complex for ‘U’ has only two components. In turn, these components translate into the half-infinite intervals in the $\beta_0$-barcodes of the letters: four for ‘V’, and two for ‘U’, as shown in Fig. 12. Similarly, although ‘O’ and ‘D’ have the same topology, they are distinguishable by the number of half-infinite intervals in their barcodes, with the singularities in ‘D’ generating two additional intervals.
+
+Even when two letters are both smooth, we may use their curvature information to distinguish between them. For example, the difference in curvature between the letters ‘C’ and ‘I’ results in different low endpoints for intervals in their barcodes, as seen in Fig. 12. Even though the tangent complexes for the two letters have the same number of components, our barcode metric can distinguish between them. Finally, we consider the letters ‘A’ and ‘R’, both of which have tangent complexes that split into six components that are represented via the six half-infinite intervals in their respective barcodes. Again, the curved portion of ‘R’ results in a different low endpoint for one pair of half-infinite intervals, and hence a different barcode than for ‘A’.
+
+For our experiments, we scan ten hand-printed copies of each of the eight letters discussed. Each letter has between 69 and 294 points, spaced 0.025 apart. We remove points whose eigenvalue ratio is greater than 0.1, as discussed in Section 5.1. We then use $k = 20$ nearest neighbors to estimate the normal direction and curvature at each point, and use ball radius $\varepsilon = 0.2$ and scale factor $\omega = 0.1$ to compute the filtered tangent complex. Fig. 13(a) displays the resulting distance matrix, where distance is mapped as before. The letters are grouped according to the number of components in $T(P)$ (equivalently, half-infinite intervals): 6, 4, and 2, respectively. The distance between letters from different groups is infinite, as reflected in the large white regions of the matrix. The matrix illustrates that we may exploit tangent information to distinguish between letters that have the same topology.
+
+Fig. 12. Hand-drawn scanned-in copies of the letters discussed in Section 6.2, together with their $\beta_0$-barcodes.
+
+### 6.3. Multiple signatures
+
+In the last section, we showed how tangent information provides additional power for discriminating between letters. It is clear from the resulting matrix in Fig. 13(a), however, that the tangent information, by itself, is insufficient for a full classification. In the tangent domain, the letters ‘D’ and ‘V’ look very similar. But we already know how to distinguish between them using a pure topological method: ‘D’ has one hole, and ‘V’ has none. So, we must employ information from both domains for classifying letters. We generalize this insight to advocate a framework for distinguishing shapes via multiple barcodes. Within our framework, each shape has several associated barcodes derived from different geometric filtrations. For example, we may consider the shape itself, various metrics or Morse functions on the space, or derived complexes, such as the tangent complex. Each barcode captures a different shape invariant. Since we have a single shape descriptor throughout our system, we utilize our metric to find distances between the respective barcodes of two shapes. We then combine the information from the different barcodes for shape comparison.
+
+Fig. 13. Distance matrices for 10 scanned instances of the letters ‘A’, ‘R’, ‘D’, ‘V’, ‘U’, ‘C’, ‘I’ and ‘O’. We map the distance between each pair to gray-scale. (a) $\beta_0$-barcodes for the tangent complex $T(P)$ filtered by curvature, (b) a mask matrix, where distance is 0 if the letters have the same $\beta_1$ and $\infty$ otherwise, (c) the combination of (a) and (b), (d) $\beta_0$ of the original space filtered top-down distinguishes ‘U’ from ‘C’, (e) $\beta_0$ of the original space filtered right-left distinguishes the pairs {‘A’, ‘R’} and {‘C’, ‘I’}, (f) the combination of all four distance functions distinguishes all letters.
+
+In the rest of this section, we demonstrate our framework through the letter classification example. We must emphasize, however, that the key aspect of our framework is its generality: it does not rely on detailed ad hoc analysis of a particular classification problem, but rather on a family of signatures that are potentially useful for solving other problems. Our methods also have a conceptual nature that extends naturally to higher-dimensional settings.
+
+We begin by extracting information from the topology of the letters. As already discussed, the letters are partially classifiable using the number of loops or $β_1$. We
+
+compute $β_1$ for each letter via a simple method that does not require a filtration, although we may employ more robust methods for its calculation. We include all points of the PCD and use balls of radius 0.08 to construct a complex. We then create a *mask* matrix, shown in Fig. 13(b), where the distance between two shapes is 0 if they have identical $β_1$ and infinite otherwise. Observe that the mask allows us to distinguish ‘D’ from ‘V’, and ‘O’ from ‘U’, ‘C’, and ‘I’. We now apply this mask to the matrix of tangent information in Fig. 13(a) that we obtained in the last section to get Fig. 13(c). Our new matrix classifies ‘D’, ‘V’, and ‘O’ correctly, but still has two blocks that group ‘A’ and ‘R’, and ‘U’, ‘C’, and ‘I’, respectively.
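The masking step can be sketched in a few lines (an illustrative sketch: the $\beta_1$ values and the small distance matrix are hypothetical stand-ins for the data behind Fig. 13):

```python
import math

def mask_matrix(beta1):
    """Distance 0 for shapes with equal beta_1, infinity otherwise."""
    n = len(beta1)
    return [[0.0 if beta1[i] == beta1[j] else math.inf for j in range(n)]
            for i in range(n)]

def combine(dist, mask):
    """Entrywise sum: pairs with different beta_1 become infinitely far apart."""
    n = len(dist)
    return [[dist[i][j] + mask[i][j] for j in range(n)] for i in range(n)]

# Hypothetical example: three shapes; the first two have one loop each.
beta1 = [1, 1, 0]            # e.g. 'D', 'O', 'V'
tangent = [[0.0, 0.2, 0.1],  # tangent-barcode distances (made up)
           [0.2, 0.0, 0.3],
           [0.1, 0.3, 0.0]]
combined = combine(tangent, mask_matrix(beta1))
```

Pairs with different $\beta_1$ can then never be grouped together, no matter how close their tangent barcodes are.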
+
+We next examine separating ‘U’ and ‘C’. An important characteristic of our metric is that it is invariant under both small elastic and large rigid motions. Since a ‘C’ is basically a ‘U’ turned on its side, we should not expect any topological method to separate them. However, as the alphabet illustrates, there are situations where it is necessary to distinguish between an object and a rotated version of itself. One way to do so is to employ a directional Morse function and examine the evolution of the excursion sets $X_f = \{(x,y) \in X | y > f\}$. As with all our techniques, this method for directional distinction extends in an obvious way to higher dimensional point clouds. In this case, we consider a vertical top-down filtration. Note that $β_0(U_f) = 2$ while $β_0(C_f) = 1$ for most values of $f$. Therefore, the corresponding $β_0$-barcodes allow us to distinguish between ‘U’ and ‘C’, as seen in Fig. 14. This filtration gives us the distance matrix shown in Fig. 13(d).
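One minimal way to realize this directional filtration is to compute $\beta_0$ of a single excursion set with a union-find over a fixed-radius neighborhood graph (a sketch: the toy ‘U’ and ‘C’ clouds, the radius, and the threshold are our own, not the scanned data):

```python
import math

def beta0_excursion(points, f, radius):
    """Connected components of the excursion set {p : p_y > f}, joining
    surviving points at distance <= radius (union-find with path halving)."""
    pts = [p for p in points if p[1] > f]
    parent = list(range(len(pts)))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    for i in range(len(pts)):
        for j in range(i + 1, len(pts)):
            if math.dist(pts[i], pts[j]) <= radius:
                parent[find(i)] = find(j)
    return len({find(i) for i in range(len(pts))})

# Toy 'U': two vertical strands joined at the bottom.
U = [(0, y) for y in range(5)] + [(2, y) for y in range(5)] + [(1, 0)]
# Toy 'C': one vertical strand with short arms at top and bottom.
C = [(0, y) for y in range(5)] + [(1, 4), (2, 4), (1, 0), (2, 0)]
```

Sweeping the threshold $f$ from top to bottom and recording these counts yields the $\beta_0$-barcode of Fig. 14: the ‘U’ carries two long components, the ‘C’ only one.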
+
+Our final filtration employs a horizontal right-left Morse function. This filtration sharpens the distinction between the pairs {‘A’, ‘R’} and {‘C’, ‘I’}. We show the resulting $β_0$-barcodes for representatives of each pair in Fig. 15, and the resulting distance matrix is shown in Fig. 13(e).
+
+Having described our filtrations, we combine the multiple signatures into a single measure, depicted by the distance matrix in Fig. 13(f). This measure is based on the four invariant signatures:
+
+(1) $β_0$-barcodes of the tangent complex, filtered by curvature, in Fig. 13(a),
+
+(2) $β_1$ of the letters in Fig. 13(b),
+
+Fig. 14. Filtering the original point clouds from top to bottom gives different $β_0$ barcodes for ‘C’ and ‘U’.
+---PAGE_BREAK---
+
+Fig. 15. Filtering the original point clouds from right to left distinguishes between the pairs {'C', 'I'} and {'A', 'R'}.
+
+(3) $\beta_0$-barcodes of the letters, filtered top-down, in Fig. 13(d),
+
+(4) $\beta_0$-barcodes of the letters, filtered right-left, in Fig. 13(e).
+
+Let {$M_1, M_2, M_3, M_4$} denote the corresponding distance matrices, respectively. Since the distance range varies for each matrix, we re-scale the matrices prior to combining them. Specifically, if $\max(M)$ is the maximum value of the finite entries in the matrix $M$, we set $\lambda_{1,3} = \max(M_1)/\max(M_3)$ and $\lambda_{1,4} = \max(M_1)/\max(M_4)$. Then our combined distance matrix is
+
+$$M = M_1 + M_2 + \lambda_{1,3} \cdot M_3 + \lambda_{1,4} \cdot M_4.$$
+
+By combining all four signatures, we can readily distinguish between the eight letters discussed above. Fig. 13(f) depicts the final result.
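Under the stated rescaling convention, the combination can be sketched directly (the $2 \times 2$ matrices below are hypothetical):

```python
import math

def max_finite(M):
    """Largest finite entry of a distance matrix."""
    return max(x for row in M for x in row if math.isfinite(x))

def combine_signatures(M1, M2, M3, M4):
    """M = M1 + M2 + lambda_13 * M3 + lambda_14 * M4, with the lambdas
    rescaling M3 and M4 to the range of M1."""
    lam13 = max_finite(M1) / max_finite(M3)
    lam14 = max_finite(M1) / max_finite(M4)
    n = len(M1)
    return [[M1[i][j] + M2[i][j] + lam13 * M3[i][j] + lam14 * M4[i][j]
             for j in range(n)] for i in range(n)]

# Hypothetical matrices for two shapes:
M1 = [[0.0, 4.0], [4.0, 0.0]]   # tangent-complex distances
M2 = [[0.0, 0.0], [0.0, 0.0]]   # beta_1 mask (equal beta_1 here)
M3 = [[0.0, 2.0], [2.0, 0.0]]   # top-down filtration distances
M4 = [[0.0, 1.0], [1.0, 0.0]]   # right-left filtration distances
M = combine_signatures(M1, M2, M3, M4)   # lambda_13 = 2.0, lambda_14 = 4.0
```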
+
+Our work suggests a number of enticing research directions:
+
+* Implementing density estimation techniques to remove spurious components arising from noise,
+
+* A systematic study of the thickness problem of scanned curves,
+
+* Implementing the strategy for identifying boundary points, further strengthening our method,
+
+* Applying our methods to surface point cloud data.
+
+It is also potentially very useful to study higher dimensional persistence, that is, situations where a complex is filtered by two or more variables. In this situation, the classification of multiply filtered vector spaces is much more difficult than the single variable persistence. Indeed, we do not expect to find a complete classification in terms of simple combinatorial data like a barcode. However, we still hope to find interesting computable invariants that would be quite useful in studying shape recognition.
+
+We should note that an eventual goal of this work is automatic shape classification via learning algorithms. Our approach replaces the raw shape data with information-rich barcodes that contain geometric and topological invariants. Learning algorithms could then discover significant intervals within a barcode, recover parameterizations, and provide proper weights for combining barcodes derived from different filtrations. We believe the combination of our techniques with learning theory represents an exciting new avenue of research.
+
+## 7. Conclusion
+
+In this paper we apply ideas from our earlier paper [18] to provide novel methods for studying the qualitative properties of one-dimensional spaces in the plane. Our method is based on studying the connected components of a complex constructed from a curve using its tangential information, and it generates a compact shape descriptor, called a barcode, for a given PCD. We illustrate the feasibility of our methods by applying them to the classification and parametrization of several families of noisy shape PCDs. We also provide an effective metric for comparing shape barcodes. Finally, we discuss the limitations of our methods and possible extensions. An important property of our methods is that they are applicable to any curve PCD without any need for specialized knowledge about the curve. The salient feature of our work is its theoretical foundation, which allows us to extend our methods to shapes in higher dimensions, such as families of surfaces embedded in $\mathbb{R}^3$, where we utilize higher-dimensional barcodes.
+
+## References
+
+[1] Duda RO, Hart PE, Stork DG. Pattern classification and scene analysis, 2nd ed. Hoboken, NJ: Wiley-Interscience; 2000.
+
+[2] Gonzalez RC, Woods RE. Digital image processing, 2nd ed. Boston, MA: Addison-Wesley; 2002.
+
+[3] Sebastian T, Klein P, Kimia B. Recognition of shapes by editing shock graphs. In: International Conference on Computer Vision; 2001. p. 755-62.
+
+[4] Fan T-J. Describing and recognizing 3D objects using surface properties. New York, NY: Springer; 1990.
+
+[5] Fisher RB. From surfaces to objects: computer vision and three-dimensional scene analysis. New York, NY: John Wiley; 1989.
+
+[6] Osada R, Funkhouser T, Chazelle B, Dobkin D. Matching 3D models with shape distributions. In: International Conference on Shape Modeling & Applications; 2001. p. 154-66.
+
+[7] Hilaga M, Shinagawa Y, Kohmura T, Kunii TL. Topology matching for fully automatic similarity estimation of 3D shapes. In: Proceedings of the ACM SIGGRAPH 2001; 2001. p. 203-12.
+---PAGE_BREAK---
+
+[8] Levoy M, Whitted T. The use of points as display primitives. Tech. rep., The University of North Carolina at Chapel Hill; 1985.
+
+[9] Adamson A, Alexa M. Ray tracing point set surfaces. In: Proceedings of Shape Modeling International; 2003.
+
+[10] Rusinkiewicz S, Levoy M. QSplat: a multiresolution point rendering system for large meshes. In: Proceedings of ACM SIGGRAPH 2000; 2000. p. 343–52.
+
+[11] Alexa M, Behr J, Cohen-Or D, Fleishman S, Levin D, Silva CT. Point set surfaces. In: Proceedings of Vision; 2001.
+
+[12] Zwicker M, Pauly M, Knoll O, Gross M. Pointshop 3D: an interactive system for point-based surface editing. In: Proceedings of ACM SIGGRAPH 2002, vol. 21; 2002. p. 322–9.
+
+[13] Adams B, Dutré P. Interactive boolean operations on surfel-bounded solids. ACM Trans Graph 2003;22(3):651–6.
+
+[14] Pauly M, Keiser R, Kobbelt LP, Gross M. Shape modeling with point-sampled geometry. In: Proceedings of ACM SIGGRAPH 2003, vol. 22; 2003. p. 641–50.
+
+[15] Donoho DL, Grimes C. Hessian eigenmaps: locally linear embedding techniques for high-dimensional data. PNAS 2003;100:5591–6.
+
+[16] Lee AB, Pedersen KS, Mumford D. The non-linear statistics of high contrast patches in natural images, Tech. rep., Brown University, Available online, 2001.
+
+[17] Tenenbaum JB, de Silva V, Langford JC. A global geometric framework for nonlinear dimensionality reduction. Science 2000;290:2319–23.
+
+[18] Carlsson G, Zomorodian A, Collins A, Guibas L. Persistence barcodes for shapes. In: Proceedings of Geometry Process; 2004. p. 127–38.
+
+[19] Munkres JR. Elements of algebraic topology. Redwood City, California: Addison-Wesley; 1984.
+
+[20] Cormen TH, Leiserson CE, Rivest RL, Stein C. Introduction to algorithms. Cambridge, MA: MIT Press; 2001.
+
+[21] Edelsbrunner H, Letscher D, Zomorodian A. Topological persistence and simplification. Discrete Computational Geometry 2002;28:511–33.
+
+[22] Zomorodian A, Carlsson G. Computing persistent homology. Discrete Computational Geometry, to appear.
+
+[23] Mitra NJ, Nguyen A, Guibas L. Estimating surface normals in noisy point cloud data. International Journal of Computational Geometry and Applications 2004, to appear.
+
+[24] Gromov M. Hyperbolic groups. In: Gersten S editor. Essays in group theory. New York, NY: Springer; 1987. p. 75–263.
+
+[25] de Berg M, van Kreveld M, Overmars M, Schwarzkopf O. Computational geometry: algorithms and applications. New York: Springer; 1997.
+
+[26] Edelsbrunner H. The union of balls and its dual shape. Discrete Computational Geometry 1995;13:415–40.
+
+[27] de Silva V, Carlsson G. Topological estimation using witness complexes. In: Proceedings of Symposium on Point-Based Graphics; 2004. p. 157–66.
+
+[28] Zomorodian A, Edelsbrunner H. Fast software for box intersection. International Journal of Computational Geometry and Applications 2002;12:143–72.
+
+[29] Algorithmic Solutions Software GmbH, LEDA, 2004.
+
+[30] Mehlhorn K, Näher S. The LEDA platform of combinatorial and geometric computing. Cambridge, UK: Cambridge University Press; 1999.
+
+[31] LeCun Y. The MNIST database of handwritten digits. http://yann.lecun.com/exdb/mnist/.
\ No newline at end of file
diff --git a/samples/texts_merged/3880484.md b/samples/texts_merged/3880484.md
new file mode 100644
index 0000000000000000000000000000000000000000..88d8e65e8ca39402398b0ef315ad5ec9f3a7a032
--- /dev/null
+++ b/samples/texts_merged/3880484.md
@@ -0,0 +1,346 @@
+
+---PAGE_BREAK---
+
+February 2021
+
+# The Integer-antimagic Spectra of Graphs with a Chord
+
+Richard M. Low
+*San Jose State University, richard.low@sjsu.edu*
+
+Dan Roberts
+*Illinois Wesleyan University, drobert1@iwu.edu*
+
+Jinze Zheng
+*Illinois Wesleyan University, jzheng@iwu.edu*
+
+Follow this and additional works at: https://digitalcommons.georgiasouthern.edu/tag
+
+Part of the Discrete Mathematics and Combinatorics Commons
+
+**Recommended Citation**
+
+Low, Richard M.; Roberts, Dan; and Zheng, Jinze (2021) "The Integer-antimagic Spectra of Graphs with a Chord," *Theory and Applications of Graphs*: Vol. 8; Iss. 1, Article 1.
+DOI: 10.20429/tag.2021.080101
+Available at: https://digitalcommons.georgiasouthern.edu/tag/vol8/iss1/1
+---PAGE_BREAK---
+
+# The Integer-antimagic Spectra of Graphs with a Chord
+
+## Cover Page Footnote
+
+The authors are grateful for the valuable comments made by the referees. This improved the final manuscript. The second author was supported by a grant from Illinois Wesleyan University.
+---PAGE_BREAK---
+
+**Abstract**
+
+Let $A$ be a nontrivial abelian group. A connected simple graph $G = (V, E)$ is $A$-antimagic if there exists an edge labeling $f : E(G) \to A \setminus \{0\}$ such that the induced vertex labeling $f^+ : V(G) \to A$, defined by $f^+(v) = \sum_{uv \in E(G)} f(uv)$, is injective. The integer-antimagic spectrum of a graph $G$ is the set $\text{IAM}(G) = \{k \mid G \text{ is } \mathbb{Z}_k\text{-antimagic and } k \ge 2\}$. In this paper, we determine the integer-antimagic spectra for cycles with a chord, paths with a chord, and wheels with a chord.
+
+# 1 Introduction
+
+Labelings form a large and important area of study in graph theory. First formally introduced by Rosa [7] in the 1960s, graph labelings have captivated the interest of many mathematicians in the ensuing decades. In addition to the intrinsic beauty of the subject matter, graph labelings have applications (discussed in papers by Bloom and Golomb [1, 2]) in graph factorization problems, X-ray crystallography, radar pulse code design, and addressing systems in communication networks. The interested reader is directed to Gallian's [4] dynamic survey, which contains thousands of references to research papers and books on the topic of graph labelings.
+
+Let $G$ be a connected simple graph. For any nontrivial abelian group $A$ (written additively), let $A^* = A \setminus \{0\}$, where $0$ is the additive identity of $A$. Let a function $f : E(G) \to A^*$ be an edge labeling of $G$ and $f^+ : V(G) \to A$ be its induced vertex labeling, which is defined by $f^+(v) = \sum_{uv \in E(G)} f(uv)$. If there exists an edge labeling $f$ whose induced vertex labeling $f^+$ on $V(G)$ is injective, then we say that $f$ is an *$A$-antimagic labeling* and that $G$ is an *$A$-antimagic graph*. The *integer-antimagic spectrum* of a graph $G$ is the set $\text{IAM}(G) = \{k \mid G \text{ is } \mathbb{Z}_k\text{-antimagic and } k \ge 2\}$.
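These definitions translate directly into a small checker; the sketch below (with our own graph encoding and $K_3$ example labels) verifies $\mathbb{Z}_k$-antimagicness of a given edge labeling:

```python
def is_zk_antimagic(n_vertices, labeling, k):
    """labeling: dict {(u, v): label} on vertices 0..n_vertices-1, labels
    taken mod k. Checks labels are nonzero and vertex sums f+(v) injective."""
    if any(lab % k == 0 for lab in labeling.values()):
        return False
    sums = [0] * n_vertices
    for (u, v), lab in labeling.items():
        sums[u] = (sums[u] + lab) % k
        sums[v] = (sums[v] + lab) % k
    return len(set(sums)) == n_vertices

# K_3 with k = 4: labels 1, 2, 3 give vertex sums 0, 3, 1 (mod 4) -- antimagic.
# Labels 1, 1, 2 give sums 3, 2, 3 -- not injective, so not antimagic.
```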
+
+The concept of the *A*-antimagicness property for a graph *G* (introduced independently in [3, 5]) naturally arises as a variation of the *A*-magic labeling problem (where the induced vertex labeling is a constant map). There is a large body of research on *A*-magic graphs within the mathematical literature. As for *A*-antimagic graphs (which are the focus of our paper), cycles, paths, various classes of trees, dumbbells, multi-cyclic graphs, $K_{m,n}$, $K_{m,n} - e$, tadpoles and lollipop graphs were investigated in [3, 6, 8, 9, 10].
+
+A result of Jones and Zhang [5] finds the minimum element of $\text{IAM}(G)$, for all connected graphs on three or more vertices. In their paper, a $\mathbb{Z}_n$-antimagic labeling of a graph on $n$ vertices is referred to as a *nowhere-zero modular edge-graceful labeling*. They proved the following theorem.
+
+**Theorem 1.1** (Jones and Zhang [5]). If $G$ is a connected simple graph of order $n \ge 3$, then $\min\{t: t \in \text{IAM}(G)\} \in \{n, n+1, n+2\}$. Furthermore,
+
+1. $\min\{t: t \in \text{IAM}(G)\} = n$ if and only if $n \not\equiv 2 \pmod{4}$, $G \neq K_3$, and $G$ is not a star of even order,
+
+2. $\min\{t: t \in \text{IAM}(G)\} = n+1$ if and only if $G = K_3$, or $n \equiv 2 \pmod{4}$ and $G$ is not a star of even order, and
+---PAGE_BREAK---
+
+3. $\min\{t: t \in \text{IAM}(G)\} = n+2$ if and only if G is a star of even order.
+
+Motivation for our current work is found in the following conjecture [6].
+
+**Conjecture 1.1.** Let $G$ be a connected simple graph. If $t$ is the least positive integer such that $G$ is $\mathbb{Z}_t$-antimagic, then $\text{IAM}(G) = \{k: k \ge t\}$.
+
+In [3, 6, 8, 9, 10], Conjecture 1.1 was shown to be true for various classes of graphs. The purpose of this paper is to provide additional evidence for Conjecture 1.1 by verifying it for cycles with a chord, paths with a chord, and wheels with a chord. We use constructive methods to determine the integer-antimagic spectra for these particular classes of graphs.
+
+## 2 Cycles with a Chord
+
+In this paper, we use the constructed labelings from the proof of the following theorem.
+
+**Theorem 2.1** (Chan, Low and Shiu [3]). If $r = 0, 1, 3$, then $C_{4m+r}$ is $\mathbb{Z}_k$-antimagic, for all $m \in \mathbb{N}$, $k \ge 4m+r$. $C_{4m+2}$ is $\mathbb{Z}_k$-antimagic, for all $m \in \mathbb{N}$, $k \ge 4m+3$.
+
+For the sake of completeness, here are the labelings. Let $e_1, e_2, ..., e_n$ be edges of $C_n$ arranged in counter-clockwise direction. A $\mathbb{Z}_k$-antimagic labeling of $C_n$ can be obtained as follows.
+
+Case 1: $n = 4m$:
+
+$$f(e_i) = \begin{cases} i & \text{if } 1 \le i \le 2m; \\ 3 + 2\left(2m - \left\lceil \frac{i}{2} \right\rceil\right) & \text{if } 2m + 1 \le i \le 4m. \end{cases}$$
+
+Case 2: $n = 4m + 1$:
+
+$$f(e_i) = \begin{cases} i & \text{if } 1 \le i \le 2m; \\ 3 + 2\left(2m - \left\lceil \frac{i}{2} \right\rceil\right) & \text{if } 2m + 1 \le i \le 4m + 1. \end{cases}$$
+
+Case 3: $n = 4m + 2$:
+
+$$f(e_i) = \begin{cases} i & \text{if } 1 \le i \le 2m + 3; \\ 3 + 2(2m - \left\lfloor \frac{i-2}{2} \right\rfloor) & \text{if } 2m + 4 \le i \le 4m + 2. \end{cases}$$
+
+Case 4: $n = 4m + 3$:
+
+$$f(e_i) = \begin{cases} i & \text{if } 1 \le i \le 2m + 3; \\ 3 + 2\left(2m - \left\lceil \frac{i-3}{2} \right\rceil\right) & \text{if } 2m + 4 \le i \le 4m + 3. \end{cases}$$
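The four constructions can be generated and verified mechanically. In the sketch below (indexing ours), the half-index rounding in Cases 1, 2 and 4 is taken as $\lceil \cdot \rceil$, which is what the vertex-sum computations later in the proofs rely on:

```python
def cycle_labeling(n):
    """Edge labels f(e_1), ..., f(e_n) for C_n (n >= 4), following the four cases."""
    m, r = n // 4, n % 4
    half_up = lambda a: (a + 1) // 2          # ceil(a / 2)
    f = []
    for i in range(1, n + 1):
        if r in (0, 1):
            f.append(i if i <= 2 * m else 3 + 2 * (2 * m - half_up(i)))
        elif r == 2:
            f.append(i if i <= 2 * m + 3 else 3 + 2 * (2 * m - (i - 2) // 2))
        else:
            f.append(i if i <= 2 * m + 3 else 3 + 2 * (2 * m - half_up(i - 3)))
    return f

def is_antimagic_cycle(f, k):
    """True iff all labels are nonzero mod k and the sums f(e_{i-1}) + f(e_i)
    at the vertices v_i are pairwise distinct mod k."""
    n = len(f)
    if any(x % k == 0 for x in f):
        return False
    sums = [(f[i - 1] + f[i]) % k for i in range(n)]
    return len(set(sums)) == n
```

Running the check confirms Theorem 2.1 for small $n$, including that $C_{10}$ is $\mathbb{Z}_{11}$-antimagic but not $\mathbb{Z}_{10}$-antimagic.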
+
+Let $(1, 2, ..., n)$ denote the $n$-cycle with counterclockwise edges $\{i, i+1\}$ for $1 \le i \le n-1$ and $\{1, n\}$. Let $[1, 2, ..., n]$ denote the path of length $n-1$ with edges $\{i, i+1\}$ for $1 \le i \le n-1$. Suppose that $C_n$ is the cycle $(v_1, v_2, ..., v_n)$, and let $2 \le l \le \lfloor \frac{n}{2} \rfloor$. Define $C_n(l)$ to be the graph
+---PAGE_BREAK---
+
+obtained from $C_n$ by adding an edge $c = \{v_i, v_j\}$, where $l = \min\{|i-j|, n-|i-j|\}$. We call $C_n(l)$ an $n$-cycle with a chord $c$ of perimeter $l$.
+
+Note that $C_n(l)$ is the union of two cycles which share a common edge, namely chord $c$ of $C_n(l)$. The minor subcycle of $C_n(l)$ is the shorter of the two cycles, denoted by $C_n^-(l)$. The major subcycle of $C_n(l)$ is the longer of the two cycles, denoted by $C_n^+(l)$.
+
+The alternating cycle labeling of $C_n = (1, 2, ..., n)$, starting with the edge $\{1, 2\}$, is the function $g: E(C_n) \to \{1, -1\}$, such that $g(\{1, 2\}) = 1$ and $g$ alternates between 1 and -1.
+
+The alternating path labeling of the path $P_n = [1, 2, ..., n]$ starting with edge $\{1, 2\}$ is the function $t: E(P_n) \to \{1, -1\}$, such that $t$ alternates between 1 and -1, and the labeling on the first edge must be specified.
+
+**Lemma 2.2.** Let $n \ge 4$ be an integer and let $l$ be a positive odd integer with $2 \le l \le \lfloor \frac{n}{2} \rfloor$. Then, $\text{IAM}(C_n(l)) = \{k : k \ge n\}$ if $n \equiv 0, 1, 3 \pmod{4}$ and $\text{IAM}(C_n(l)) = \{k : k \ge n+1\}$ if $n \equiv 2 \pmod{4}$.
+
+*Proof.* The main idea of the proof is to find an even cycle within $C_n(l)$ containing the chord and to use the alternating cycle labeling on it. One can choose the endpoints of the chord to be $v_1$ and $v_{l+1}$. Now let $f$ be the $\mathbb{Z}_k$-antimagic labeling of $C_n$ defined in Theorem 2.1. Since $l$ is odd, $C_n^-(l)$ has even length. This follows from the fact that $C_n^-(l)$ contains an odd number of edges from $C_n$, along with the chord. Let $g$ be the alternating cycle labeling on the edges of $C_n^-(l)$ starting with edge $\{v_1, v_2\}$.
+
+Now define the labeling $h(e) = f(e) + g(e)$, where $g(e)$ is defined to be 0 for all edges not included in $C_n^-(l)$. Since $3 \le l+1 \le \lfloor \frac{n}{2} \rfloor + 1$, among the edges $e_1, \dots, e_l$ of $C_n^-(l)$ we have $f(e_i) = 1$ if and only if $i = 1$, and $g(e_1) = 1$; the chord receives the label $h(c) = g(c) = -1$. Therefore, $h(e) \ne 0$ for all $e \in E(C_n(l))$. Furthermore, since $g$ alternates around the even cycle $C_n^-(l)$, its contributions cancel at every vertex, so $h^+(v) = f^+(v)$ for all $v \in V(C_n(l))$. Since $f$ is a $\mathbb{Z}_k$-antimagic labeling, so is $h$. $\square$
+
+Figure 1 illustrates the proof of Lemma 2.2.
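The instance of Figure 1 can be checked directly; a sketch (edge and vertex indexing ours, with the $C_7$ labels from Theorem 2.1, Case 4):

```python
def check_c7_chord():
    """Overlay the alternating cycle labeling on C_7(3) and verify mod k = 7."""
    k, n = 7, 7
    f = [1, 2, 3, 4, 5, 3, 3]          # Case 4 labels f(e_1), ..., f(e_7) (m = 1)
    g = {0: 1, 1: -1, 2: 1, 'c': -1}   # alternating labels on e_1, e_2, e_3, chord
    h_edges = [f[i] + g.get(i, 0) for i in range(n)]
    h_chord = g['c'] % k
    assert all(x % k != 0 for x in h_edges + [h_chord])
    # v_i meets e_{i-1} and e_i (indices mod n); v_1 and v_4 also meet the chord
    sums = [(h_edges[i - 1] + h_edges[i]) % k for i in range(n)]
    sums[0] = (sums[0] + h_chord) % k
    sums[3] = (sums[3] + h_chord) % k
    return len(set(sums)) == n
```

A return value of `True` shows that 7 lies in IAM($C_7(3)$).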
+
+**Theorem 2.3.** Let $n$ and $l$ be positive integers with $2 \le l \le \lfloor \frac{n}{2} \rfloor$. Then $\text{IAM}(C_n(l)) = \{k : k \ge n\}$ if $n \equiv 0, 1, 3 \pmod{4}$ and $\text{IAM}(C_n(l)) = \{k : k \ge n+1\}$ if $n \equiv 2 \pmod{4}$.
+
+*Proof.* It suffices to consider only the case where *l* is even, since the case where *l* is odd is addressed in Lemma 2.2. First, let *f* be the $\mathbb{Z}_k$-antimagic labeling of $C_n$ defined in Theorem 2.1.
+
+If $n$ is odd, then $C_n^+(l)$ has even length. This follows from the fact that $C_n^+(l)$ contains an odd number of edges from $C_n$, along with the chord. We will define the labeling $h(e) = f(e) + g(e)$ where $g$ is the alternating cycle labeling on the edges of $C_n^+(l)$ starting with the edge $\{v_1, v_n\}$, and $g(e)$ is defined to be 0 for all edges not in $C_n^+(l)$. To ensure that $h(e) \ne 0$ for all $e \in E(C_n(l))$, we must show that $f(e) \ne -g(e)$. We observe the following about the minimum and maximum values of $f$.
+
+In the case where $n = 4m + 1$, the maximum value of $f$ is given by $f(e_{2m+1}) = 2m + 1$. The minimum value of $f$ is 1. Furthermore, $f(e_i) = 1$ if and only if $i \in \{1, 4m + 1\}$.
+
+In the case where $n = 4m + 3$, the maximum value of $f$ is given by $f(e_{2m+3}) = 2m + 3$. The minimum value of $f$ is 1. Furthermore $f(e_i) = 1$ if and only if $i = 1$.
+
+In either case, $f(e) \ne -1$ for all edges $e$; therefore $f(e) \ne -g(e)$ for all edges where $g(e) = 1$. We now have to check the edge labels on $e_1$ and $e_n$. Since $e_1$ is not in $C_n^+(l)$,
+---PAGE_BREAK---
+
+Figure 1: IAM($C_7(3)$) = {7, 8, 9, ...}.
+
+$g(e_1) = 0$ by definition; thus, $f(e_1) \neq -g(e_1)$. By the definition of $g$, we have that $g(e_n) = 1$; thus, $f(e_n) \neq -g(e_n)$. Thus when $n$ is odd, we have that $h$ is the desired $\mathbb{Z}_k$-antimagic labeling.
+
+In the case where $n$ is even, we consider the following two subcases.
+
+**Subcase 1:** $n = 4m+2$. We assume chord $c$ has endpoints $v_{2m+3}$ and $v_{2m+3+l}$. Note that $2 \le l \le 2m$. So in the case where $l=2m$, the endpoints of $c$ are $v_{2m+3}$ and $v_1$. Define $P$ to be the path
+
+$$[v_{2m+3}, v_{2m+3+l}, v_{2m+2+l}, v_{2m+1+l}, v_{2m+l}, \dots, v_{2m+4}].$$
+
+Now, define $h: E(C_n(l)) \to \mathbb{Z}_k^*$ by
+
+$$h(e) = f(e) + z(e),$$
+
+where addition is in $\mathbb{Z}_k$. Here, $f$ is the $\mathbb{Z}_k$-antimagic labeling for $C_n$ (given in Theorem 2.1, Case 3) and
+
+$$z(e) = \begin{cases} t(e) & \text{if } e \in P; \\ 0 & \text{otherwise.} \end{cases}$$
+
+Here, $t$ is the alternating path labeling of $P$ starting with the chord, which is labeled $t(c) = -1$.
+---PAGE_BREAK---
+
+First, we will show that no edge is labeled 0. Since $z(e) \in \{0, 1, -1\}$, the label $h(e) = f(e) + z(e)$ can only vanish if $f(e) = 1$ and $z(e) = -1$, or if $f(e) \equiv -1 \pmod{k}$ and $z(e) = 1$. Since $n \equiv 2 \pmod{4}$, $f(e_i) = 1$ if and only if $i = 1$, and by the definition of $P$ we have $e_1 \notin P$; hence the first case cannot occur. By the definition of $f$, we see that $\max\{f(e_i) : 1 \le i \le 4m+2\} = f(e_{2m+3}) = 2m+3 < k - 1$; hence the second case cannot occur. The chord itself receives the label $h(c) = t(c) = -1 \ne 0$. Thus, $h(e) \not\equiv 0 \pmod{k}$ for all edges $e \in E(C_n(l))$.
+
+Next, we will show that all induced vertex labels are distinct. Since $t$ is the alternating path labeling, we have that $h^+(v) = f^+(v)$ for all vertices $v$ (besides the two endpoints of $P$, which are $v_{2m+3}$ and $v_{2m+4}$). We claim that $h^+(v_{2m+3}) = f^+(v_{2m+4})$ and $h^+(v_{2m+4}) = f^+(v_{2m+3})$. To see this, observe that by the definition of $f$, we have that
+
+$$
+\begin{aligned}
+f^+(v_{2m+3}) &= f(e_{2m+2}) + f(e_{2m+3}) \\
+&= (2m+2) + (2m+3) \\
+&= 4m + 5
+\end{aligned}
+ $$
+
+and
+
+$$
+\begin{aligned}
+f^+(v_{2m+4}) &= f(e_{2m+3}) + f(e_{2m+4}) \\
+&= (2m+3) + 3 + 2\left(2m - \left\lfloor \frac{2m+4-2}{2} \right\rfloor\right) \\
+&= 4m + 4.
+\end{aligned}
+ $$
+
+Then, we also have
+
+$$ h^+(v_{2m+3}) = f^+(v_{2m+3}) + t(c) = 4m + 5 - 1 = 4m + 4 $$
+
+and
+
+$$ h^+(v_{2m+4}) = f^+(v_{2m+4}) + t(e_{2m+4}) = 4m + 4 + 1 = 4m + 5. $$
+
+Thus, the net result of combining the alternating path labeling of $P$ with the $\mathbb{Z}_k$-antimagic labeling of $C_{4m+2}$ is that the induced vertex labels of $v_{2m+3}$ and $v_{2m+4}$ are transposed. Therefore, $h$ is the desired $\mathbb{Z}_k$-antimagic labeling.
+
+**Subcase 2:** $n = 4m$. We assume chord $c$ has endpoints $v_{2m+1}$ and $v_{2m+1+l}$. Define $P$ to be the path $[v_{2m+1}, v_{2m+1+l}, v_{2m+l}, v_{2m+l-1}, v_{2m+l-2}, \dots, v_{2m+2}]$.
+
+We define the same labeling $h$ as in the proof of Subcase 1, but with the alternating path labeling $t$ of the newly defined path $P$ starting with the chord $t(c) = 1$. The argument follows the same structure. The only differences are the calculations of the induced vertex labels, which are as follows.
+
+By the definition of $f$, we have that
+
+$$
+\begin{aligned}
+f^+(v_{2m+1}) &= f(e_{2m}) + f(e_{2m+1}) \\
+&= 2m + 3 + 2\left(2m - \left\lceil \frac{2m+1}{2} \right\rceil\right) \\
+&= 4m + 1
+\end{aligned}
+ $$
+---PAGE_BREAK---
+
+and
+
+$$
+\begin{align*}
+f^{+}(v_{2m+2}) &= f(e_{2m+1}) + f(e_{2m+2}) \\
+&= (2m + 1) + 3 + 2 \left( 2m - \left\lceil \frac{2m+2}{2} \right\rceil \right) \\
+&= 4m + 2.
+\end{align*}
+$$
+
+Then,
+
+$$h^+(v_{2m+1}) = f^+(v_{2m+1}) + t(c) = 4m + 1 + 1 = 4m + 2$$
+
+and
+
+$$h^+(v_{2m+2}) = f^+(v_{2m+2}) + t(e_{2m+2}) = 4m + 2 - 1 = 4m + 1.$$
+
+Thus, $h$ is the desired $\mathbb{Z}_k$-antimagic labeling. $\square$
+
+Figures 2 and 3 illustrate the proof of Theorem 2.3.
+
+Figure 2: IAM($C_6(2)$) = {7, 8, 9, ...}.
+---PAGE_BREAK---
+
+Figure 3: IAM($C_8(4)$) = \{8, 9, 10, ...\}.
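Subcase 1 can likewise be checked on the instance of Figure 2, where $n = 6$, $m = 1$, $l = 2$ and $k = 7$ (a sketch with our own indexing):

```python
def check_c6_chord():
    """Overlay the alternating path labeling on C_6(2) and verify mod k = 7."""
    k, n = 7, 6
    f = [1, 2, 3, 4, 5, 3]        # Case 3 labels f(e_1), ..., f(e_6) (m = 1)
    # chord c = {v_5, v_1}; the path P = [v_5, v_1, v_6] gives t(c) = -1, t(e_6) = +1
    h = f[:5] + [f[5] + 1]
    h_chord = -1 % k
    assert all(x % k != 0 for x in h + [h_chord])
    sums = [(h[i - 1] + h[i]) % k for i in range(n)]
    sums[0] = (sums[0] + h_chord) % k   # v_1 is an endpoint of the chord
    sums[4] = (sums[4] + h_chord) % k   # v_5 is the other endpoint
    return len(set(sums)) == n
```

The check also exhibits the transposition described in the proof: the induced labels of $v_5$ and $v_6$ swap while all others stay fixed.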
+
+## 3 Paths with a Chord
+
+A *path with a chord $c$ of perimeter $l$* (denoted $P_n(l)$) is defined similarly to a cycle with a chord of perimeter $l$. More precisely, $P_n(l)$ denotes the graph obtained by adding an edge, called the *chord*, to the path $P_n$. The *perimeter* of the chord is the length of the path from one of its endpoints to the other which does not use the chord itself. We assume that the two endpoints of the chord are not both end vertices of the original path, since in that case the graph would simply be a cycle. The following theorem gives constructions of labelings which characterize IAM($P_n$); again, we use these particular labelings in characterizing IAM($P_n(l)$).
+
+**Theorem 3.1** (Chan, Low and Shiu [3]). If $r = 0, 1, 3$, then $P_{4m+r}$ is $\mathbb{Z}_k$-antimagic, for all $m \in \mathbb{N}, k \ge 4m+r$. $P_{4m+2}$ is $\mathbb{Z}_k$-antimagic, for all $m \in \mathbb{N}, k \ge 4m+3$.
+
+For the sake of completeness, here are the labelings. Let $e_1, e_2, ..., e_{n-1}$ be edges of $P_n$,
+from left to right. A $\mathbb{Z}_k$-antimagic labeling of $P_n$ can be obtained as follows.
+
+Case 1: $n = 4m$:
+
+$$f(e_i) = \begin{cases} \frac{i+1}{2} & \text{if } i \text{ is odd;} \\ \frac{i}{2} & \text{if } i \text{ is even and } 2 \le i \le 2m-2; \\ \frac{i+2}{2} & \text{if } i \text{ is even and } 2m \le i \le 4m-2. \end{cases}$$
+---PAGE_BREAK---
+
+**Case 2:** $n = 4m + 1$:
+
+$$f(e_i) = \begin{cases} \frac{i}{2} & \text{if } i \text{ is even;} \\ \frac{i+3}{2} & \text{if } i \text{ is odd and } 1 \le i \le 2m-3; \\ \frac{i+5}{2} & \text{if } i \text{ is odd and } 2m-1 \le i \le 4m-1. \end{cases}$$
+
+**Case 3:** $n = 4m + 2$:
+
+$$f(e_i) = \begin{cases} \frac{i+1}{2} & \text{if } i \text{ is odd;} \\ \frac{i+2}{2} & \text{if } i \text{ is even and } 2 \le i \le 2m-2; \\ \frac{i+4}{2} & \text{if } i \text{ is even and } 2m \le i \le 4m. \end{cases}$$
+
+**Case 4:** $n = 4m + 3$:
+
+$$f(e_i) = \begin{cases} \frac{i}{2} & \text{if } i \text{ is even;} \\ \frac{i+1}{2} & \text{if } i \text{ is odd and } 1 \le i \le 2m-1; \\ \frac{i+3}{2} & \text{if } i \text{ is odd and } 2m+1 \le i \le 4m+1. \end{cases}$$
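As with the cycle labelings, these four cases can be generated and verified mechanically (a sketch; indexing ours):

```python
def path_labeling(n):
    """Edge labels f(e_1), ..., f(e_{n-1}) for P_n (n >= 4), following the four cases."""
    m, r = n // 4, n % 4
    f = []
    for i in range(1, n):
        if r == 0:
            f.append((i + 1) // 2 if i % 2 == 1 else
                     (i // 2 if i <= 2 * m - 2 else (i + 2) // 2))
        elif r == 1:
            f.append(i // 2 if i % 2 == 0 else
                     ((i + 3) // 2 if i <= 2 * m - 3 else (i + 5) // 2))
        elif r == 2:
            f.append((i + 1) // 2 if i % 2 == 1 else
                     ((i + 2) // 2 if i <= 2 * m - 2 else (i + 4) // 2))
        else:
            f.append(i // 2 if i % 2 == 0 else
                     ((i + 1) // 2 if i <= 2 * m - 1 else (i + 3) // 2))
    return f

def is_antimagic_path(f, k):
    """End vertices meet one edge; interior vertex v_i meets e_{i-1} and e_i."""
    n = len(f) + 1
    if any(x % k == 0 for x in f):
        return False
    sums = ([f[0] % k]
            + [(f[i - 1] + f[i]) % k for i in range(1, n - 1)]
            + [f[-1] % k])
    return len(set(sums)) == n
```

The check confirms Theorem 3.1 for small $n$, including that $P_{10}$ is $\mathbb{Z}_{11}$-antimagic but not $\mathbb{Z}_{10}$-antimagic.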
+
+A tadpole graph $T(r, s)$ is obtained by joining a cycle $C_r$ and an end-vertex of a path $P_s$ by a bridge, where $r \ge 3$ and $s \ge 1$. The following technical lemma will be used in the proof of Theorem 3.3.
+
+**Lemma 3.2** (Shiu, Sun and Low [10]). For $r \ge 3$ and $s \ge 1$,
+
+$$\mathrm{IAM}(T(r,s)) = \begin{cases} \{r+s, r+s+1, \dots\} & \text{if } r+s \not\equiv 2 \pmod{4}; \\ \{r+s+1, r+s+2, \dots\} & \text{if } r+s \equiv 2 \pmod{4}. \end{cases}$$
+
+We are now ready to establish the integer-antimagic spectrum of a path with a chord.
+
+**Theorem 3.3.** Let $n$ and $l$ be positive integers with $2 \le l \le n-2$. Then, $\mathrm{IAM}(P_n(l)) = \{k: k \ge n\}$ if $n \equiv 0, 1, 3 \pmod 4$, and $\mathrm{IAM}(P_n(l)) = \{k: k \ge n+1\}$ if $n \equiv 2 \pmod 4$.
+
+*Proof*. First, let $f$ be the $\mathbb{Z}_k$-antimagic labeling of $P_n$ defined in Theorem 3.1. We make several observations about the nature of $f$ and $f^+$. From the definition of $f$, we see that if $f(e_i) = 1$, then $i \in \{1, 2\}$. Thus, at most two edges are labeled with 1. We can also see that for every edge $e \in E(P_n)$, $f(e) \le n/2+2$; in particular, $f(e) \ne k-1$, since $k \ge n$. The last (and most nuanced) observation is that there are at most two edges, with the exception of the first and last edges, which do not have consecutive induced vertex labels on their endpoints. Moreover, the two edges whose induced vertex labels are not consecutive are never adjacent to each other.
+
+The graph $P_n(l)$ contains exactly one cycle, which will be denoted $C$. If $l$ is odd, then $C$ (which consists of $l$ path edges together with the chord) has even length, and the claim is established by overlaying the alternating cycle labeling, much like how it was used in the proof of Theorem 2.3. Hence, we may assume that $l$ is even.
+---PAGE_BREAK---
+
+**Case 1:** $2 \le l \le n-3$. By symmetry, we may assume that $e_1, e_2 \notin E(C)$. Thus, for all $e \in E(C)$, $f(e) \notin \{1, k-1\}$. By the observations above, there exists at least one edge (say, $x \in E(C)$) whose endpoints (say, $\alpha$ and $\beta$) have consecutive induced vertex labels; that is, $f^+(\alpha) = a-1$ and $f^+(\beta) = a$ for some $a \in \mathbb{Z}_k$. Removing $x$ leaves the path $C-x$, which has even length (the cycle $C$ has odd length $l+1$). We let $t$ be the alternating path labeling of $C-x$ that starts at the end incident with $\alpha$ and labels its first edge with 1.
+
+Now, define $h: E(P_n(l)) \to \mathbb{Z}_k^*$ by
+
+$$h(e) = f(e) + z(e),$$
+
+where addition is in $\mathbb{Z}_k$, and
+
+$$z(e) = \begin{cases} t(e) & \text{if } e \in C - x; \\ 0 & \text{otherwise.} \end{cases}$$
+
+Note that $h^+(v) = (f+z)^+(v) = (f+t)^+(v)$ for all $v \notin \{\alpha, \beta\}$. Furthermore, $(f+t)^+(\alpha) = f^+(\beta)$ and $(f+t)^+(\beta) = f^+(\alpha)$. In other words, by overlaying the labeling $t$ on top of $f$ we transpose the induced vertex labels of $\alpha$ and $\beta$, and we leave all other induced vertex labels fixed. Thus, $h$ is the desired $\mathbb{Z}_k$-antimagic labeling of $P_n(l)$.
+
+**Case 2:** $l = n-2$. Here, we have a cycle with a pendant path (i.e., a tadpole graph). Hence, the claim is true for this case, by Lemma 3.2. $\square$
+
+Figures 4 and 5 illustrate the proof of Theorem 3.3.
+
+# 4 Wheels with a Chord
+
+Let $W_n$ denote the wheel on $n$ spokes, which is the graph containing a cycle of length $n$ with another special vertex not on the cycle, called the central vertex, that is adjacent to every vertex on the cycle. The integer-antimagic spectra of wheels were determined in [6] and are stated in the following theorem.
+
+**Theorem 4.1** (Roberts and Low [6]). For each integer $m \ge 1$, $\text{IAM}(W_{4m+r}) = \{k : k \ge 4m+r+1\}$ if $r=0,2,3$ and $\text{IAM}(W_{4m+1}) = \{k : k \ge 4m+3\}$.
+
+Figure 6 illustrates Theorem 4.1.
+
+A wheel on $n$ spokes with a chord (denoted $W_n^+$) is a graph obtained by adding an edge to $W_n$. Since the central vertex of $W_n$ is adjacent to all other vertices, a chord added to $W_n$ must have both endpoints on the outer cycle. Note that $W_3^+$ is a multigraph and is not considered in this paper.
+
+**Theorem 4.2.** For each integer $n \ge 4$, $\text{IAM}(W_n^+) = \{k : k \ge n+1\}$ if $n \equiv 0,2,3 \pmod 4$, and $\text{IAM}(W_n^+) = \{k : k \ge n+2\}$ if $n \equiv 1 \pmod 4$.
+---PAGE_BREAK---
+
+Figure 4: IAM($P_{15}(6)$) = {15, 16, 17, ...}.
+
+*Proof*. For $n \in \{4, 5, 6, 7, 8, 9\}$, the $\mathbb{Z}_k$-antimagic labelings of $W_n^+$ are given in Figures 7–12.
+
+Suppose $n \ge 10$. Let $f$ be a $\mathbb{Z}_k$-antimagic labeling of $W_n$, as constructed in [6] (see Theorem 4.1). Let the vertices $w$ and $z$ be the endpoints of the chord in $W_n^+$, and note that neither $w$ nor $z$ is the central vertex. Denote the central vertex by $y$. There must also exist a vertex, say $x$, on the outer cycle which is different from $z$ and is adjacent to $w$.
+
+Consider the 4-cycle $C$, with edges $\{w,x\}$, $\{x,y\}$, $\{y,z\}$, and $\{z,w\}$. Since $n \ge 10$, we have that $k \ge 11$. So by the Pigeonhole Principle, there exists some $a \in \mathbb{Z}_k^*$ such that $\pm a \notin \{\pm f(\{w,x\}), \pm f(\{x,y\}), \pm f(\{y,z\}), \pm f(\{z,w\})\}$. We will overlay a variation of an
+---PAGE_BREAK---
+
+Figure 5: IAM($T(6, 1)$) = {7, 8, 9, ...}.
+
+Figure 6: IAM($W_7$) = {8, 9, 10, ...} and IAM($W_9$) = {11, 12, 13, ...}.
+
+alternating cycle labeling on $C$ by defining the edge labeling $t: E(W_n^+) \to \mathbb{Z}_k$ as follows.
+
+$$ t(e) = \begin{cases} a & \text{if } e \in \{\{w,x\}, \{y,z\}\}; \\ -a & \text{if } e \in \{\{x,y\}, \{w,z\}\}; \\ 0 & \text{otherwise.} \end{cases} $$
+
+Now, define $h: E(W_n^+) \to \mathbb{Z}_k^*$ by $h(e) = f(e) + t(e)$, where addition is in $\mathbb{Z}_k$. Note that $h(e) \neq 0$ for all $e \in E(W_n^+)$, since $a$ was chosen appropriately. Furthermore, $h^+(v) = f^+(v)$ for all $v \in V(W_n^+)$. Thus, $h$ is the desired $\mathbb{Z}_k$-antimagic labeling. $\square$
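The Pigeonhole choice of $a$ in this proof is effective: one can simply scan $\mathbb{Z}_k^*$ for a residue avoiding the (at most eight) forbidden values $\pm f(\{w,x\}), \pm f(\{x,y\}), \pm f(\{y,z\}), \pm f(\{z,w\})$. A minimal sketch (the helper name is hypothetical):

```python
def find_overlay_label(k, used):
    """Find a in Z_k \\ {0} such that neither a nor -a lies in the set
    {+-l : l in used}; by the Pigeonhole Principle such an a exists
    whenever k is large enough relative to the number of used labels."""
    forbidden = {l % k for l in used} | {(-l) % k for l in used}
    for a in range(1, k):
        if a not in forbidden and (-a) % k not in forbidden:
            return a
    return None  # no valid choice: k too small for these labels

print(find_overlay_label(11, [1, 2, 3, 4]))  # 5 (and -5 = 6 is also free)
```

With $k = 11$ and the four cycle-edge labels $1,2,3,4$, the forbidden residues are $\{1,2,3,4,7,8,9,10\}$, leaving $\{5,6\}$ available, exactly the pigeonhole situation the proof exploits.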
+
+## References
+
+[1] G.S. Bloom and S.W. Golomb, Numbered complete graphs, unusual rulers and assorted applications, *Theory and Applications of Graphs: Lecture Notes in Math.*, Vol. 642, Springer-Verlag, New York (1978), pp. 53-65.
+
+[2] G.S. Bloom and S.W. Golomb, Applications of numbered undirected graphs, *Proc. of the IEEE*, 65(4):562–570, (1977).
+
+[3] W.H. Chan, R.M. Low and W.C. Shiu, Group-antimagic labelings of graphs, *Congr. Numer.*, 217:21–31, (2013).
+---PAGE_BREAK---
+
+Figure 7: IAM($W_4^+$) = {5, 6, 7, ...}.
+
+Figure 8: IAM($W_5^+$) = {7, 8, 9, ...}.
+
+[4] J.A. Gallian, A dynamic survey of graph labeling, *Electronic Journal of Combinatorics*, Dynamic Survey DS6, (2018), http://www.combinatorics.org.
+
+[5] R. Jones and P. Zhang, Nowhere-zero modular edge-graceful graphs, *Discussiones Mathematicae Graph Theory*, 32:487-505, (2012).
+
+[6] D. Roberts and R.M. Low, Group-antimagic labelings of multi-cyclic graphs, *Theory and Applications of Graphs*, 3(1), Article 6, (2016), electronic.
+
+[7] A. Rosa, On certain valuations of the vertices of a graph, in: Théorie des graphes, journées internationales d'études, Rome 1966 (Dunod, Paris, 1967) 349-355.
+
+[8] W.C. Shiu and R.M. Low, The integer-antimagic spectra of dumbbell graphs, *Bull. Inst. Combin. Appl.*, 77:89-110, (2016).
+
+[9] W.C. Shiu and R.M. Low, Integer-antimagic spectra of complete bipartite graphs and complete bipartite graphs with a deleted edge, *Bull. Inst. Combin. Appl.*, 76:54-68, (2016).
+
+[10] W.C. Shiu, P.K. Sun and R.M. Low, Integer-antimagic spectra of tadpole and lollipop graphs, *Congr. Numer.*, 225:5-22, (2015).
+---PAGE_BREAK---
+
+Figure 9: IAM($W_6^+$) = {7, 8, 9, ...}.
+
+Figure 10: IAM($W_7^+$) = {8, 9, 10, ...}.
+
+Figure 11: IAM($W_8^+$) = {9, 10, 11, ...}.
+---PAGE_BREAK---
+
+Figure 12: IAM($W_9^+$) = {11, 12, 13, ...}.
+
+---PAGE_BREAK---
+
+Feedback for linearly distributive categories: traces and
+fixpoints
+
+R.F. Blute^{a,1}, J.R.B. Cockett^{b,2}, R.A.G. Seely^{c,*,3}
+
+^{a}Department of Mathematics, University of Ottawa, 585 King Edward St., Ottawa, ON, Canada K1N 6N5
+
+^{b}Department of Computer Science, University of Calgary, 2500 University Drive, Calgary, AB, Canada T2N 1N4
+
+^{c}Department of Mathematics, McGill University, 805 Sherbrooke St., Montréal, PQ, Canada H3A 2K6
+
+Communicated by M. Barr
+
+Dedicated to Bill Lawvere to mark the occasion of his 60th birthday
+
+**Abstract**
+
+In the present paper, we develop the notion of a trace operator on a linearly distributive category, which amounts to essentially working within a subcategory (the core) which has the same sort of “type degeneracy” as a compact closed category. We also explore the possibility that an object may have several trace structures, introducing a notion of compatibility in this case. We show that if we restrict to compatible classes of trace operators, an object may have at most one trace structure (for a given tensor structure). We give a linearly distributive version of the “geometry of interaction” construction, and verify that we obtain a linearly distributive category in which traces become canonical. We explore the relationship between our notions of trace and fixpoint operators, and show that an object admits a fixpoint combinator precisely when it admits a trace and is a cocommutative comonoid. This generalises an observation of Hyland and Hasegawa. © 2000 Elsevier Science B.V. All rights reserved.
+
+MSC: 03B70; 03G30; 19D23
+
+* Corresponding author.
+
+*E-mail address:* rags@math.mcgill.ca (R.A.G. Seely).
+
+¹ Research partially supported by Le Fonds FCAR, Québec, and NSERC, Canada.
+
+² Research partially supported by NSERC, Canada.
+
+³ Research partially supported by Le Fonds FCAR, Québec, and NSERC, Canada.
+---PAGE_BREAK---
+
+**Introduction**
+
+In order to understand the computational aspects of the cut elimination process, in particular with respect to linear logic, Girard [13] makes a distinction between the denotational semantics of a logic and a quite different idea, which he calls “geometry of interaction”. Girard observes that the denotational semantics of a logic measures the outcome of normalization: every proof is equivalent to a cut-free proof. What this viewpoint fails to capture is the *dynamics* of cut-elimination. In [14], Girard proposed a specific model of geometry of interaction: proofs are interpreted as operators on a Hilbert space, and cut-elimination is achieved by the iteration of a single operator. The dynamics were then captured by an “execution formula” which described the iteration required to normalize the terms.
+
+In [3], Abramsky and Jagadeesan proposed a reformulation of Girard's ideas which was much more categorical in flavour. Rather than using Hilbert spaces, the Abramsky–Jagadeesan version of the “geometry of interaction” construction worked on categories of domains and produced a model of linear logic. In this new construction the execution formula relied on the presence of fixed points and was motivated by the semantics of feedback in dataflow networks.
+
+In [20] Joyal et al. introduced the notion of a traced monoidal category. The trace was designed to model such constructs as braid closure, feedback, and, of course, the trace operator on finite-dimensional Hilbert spaces. They proved that every compact closed category has a canonical trace and that every traced monoidal category embeds into a compact closed category while making the trace canonical. This last result was obtained by a construction which functorially assigned to a traced monoidal category a compact closed category: their construction was essentially the same as the Abramsky–Jagadeesan “geometry of interaction” construction [1].
+
+Any doubt about the relationship between these two constructions was removed when Hyland and Hasegawa [15] independently observed that a category with a traced product is precisely the same thing as a category with fixed points. Thus, the Joyal–Street–Verity construction was an abstract reformulation of the Abramsky–Jagadeesan “geometry of interaction” construction.
+
+In retrospect, Girard’s original construction can also be seen as exploiting the presence of a trace. Thus, it is reasonable to view a “geometry of interaction” semantics as being one given by an execution formula determined by a trace. (This has recently been made precise by Hines [17].) Of course, this use of a trace to obtain a dynamical semantics for linear logic can now be seen to have an important side-effect: the codomain of the interpretation will be a compact closed category. This should be contrasted with the standard denotational semantics for linear logic which is an interpretation into a *-autonomous category.
+
+In this way, Girard’s rather concrete desire to understand the dynamic process of cut-elimination gives way to the development of an abstract understanding of trace combinators and their relation to fixed points. This understanding has wider ramifications than its original application to linear logic. For example, the fact that there
+---PAGE_BREAK---
+
+are two apparently completely different semantic denotations, domain theoretic and iteration theoretic, of imperative programming constructs such as the “while loop” can now be simply explained: what is needed to interpret these constructs is a trace and both theories supply settings which are traced. The theory of traces opens the door to a better understanding of the various forms of feedback which occur in all walks of mathematical life from matrix traces to recursive equations.
+
+The purpose of this paper is to develop the theory of trace (and fixpoint) combinatorics in the linearly distributive setting. We take a local view of the trace combinator: rather than assuming that a trace is available at every object, we consider the effect of particular objects having a trace. This allows us to separate the concerns of compatibility (Section 3), which arise when tracing is possible at multiple objects, from the mere presence of a trace (Section 2).
+
+A trace is a special feedback combinator, that is, a functional from maps to maps: given a map $U \otimes A \xrightarrow{f} U \oplus B$, it delivers a map $A \xrightarrow{\text{tr}(f)} B$. We start by regarding $U$ as a constant, so that feedback need only be available at one object. While feedback is often available at all objects, there are many examples in which this is not the case. For example, in the category of finite posets a notion of trace (on the product) can be provided by using least fixed points. However, not every object in this category will have a trace, since the least fixed point construction requires a least element.
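The least-fixed-point trace on finite posets can be made concrete. A minimal sketch, assuming a map $f$ that is monotone in its first argument on a finite pointed poset (so Kleene iteration from the least element terminates); the function name is hypothetical:

```python
def trace_lfp(f, bottom, a):
    """Trace of f : U x A -> U x B on a finite poset U with least element:
    iterate u -> fst(f(u, a)) from the bottom element until a fixed point
    u* is reached, then return snd(f(u*, a)).  Termination assumes f is
    monotone in u and U is finite."""
    u = bottom
    while True:
        u_next, b = f(u, a)
        if u_next == u:
            return b
        u = u_next

# Example on the chain 0 < 1 < 2 < 3: u -> min(u + 1, 2) has least fixed
# point 2, so the traced map sends a to 2 + a.
f = lambda u, a: (min(u + 1, 2), u + a)
print(trace_lfp(f, 0, 10))  # 12
```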
+
+It should also be pointed out that a category, or indeed an object, can have more than one trace structure. Thus, in finite posets the greatest fixed point operator is also available as a basis for the construction of a trace. Of course, these two different trace structures are not compatible (in the sense of Section 3), and indeed we shall show that two compatible trace structures must coincide. (Similar observations have been noted in a slightly different setting by Simpson [22].)
+
+There are many different notions of feedback, and one might remark that they need not all satisfy the axioms demanded of a trace in the sense of [20]. For example, the main (indeed, only) non-structural trace axiom is “yanking” (Section 2), which says that feedback on the “switch” map is the identity. Yanking is definitely not satisfied by the usual notion of feedback in “stream processing”, where one delays the output until the next time step when one uses it as input: for streams, feedback on the switch is delay.
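The stream counterexample can be made concrete. A minimal sketch, assuming elementwise state-passing with a unit delay (names hypothetical):

```python
def feedback(f, xs, init):
    """Stream feedback with a unit delay: the fed-back value at time t is
    the state produced at time t-1 (initialised with `init`).  Here f is
    applied elementwise: f(state, input) = (new state, output)."""
    u, out = init, []
    for a in xs:
        u_next, b = f(u, a)
        out.append(b)
        u = u_next
    return out

switch = lambda u, a: (a, u)           # the "switch" map (u, a) -> (a, u)
print(feedback(switch, [1, 2, 3], 0))  # [0, 1, 2]
```

Feedback on the switch returns the input stream delayed by one step (padded with `init`), not the input itself, so yanking fails: this feedback operator is not a trace in the sense of [20].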
+
+It should also be mentioned that a notion of feedback which is specific to certain maps (not just objects) is also possible, see [2]. There the authors introduce the notion of a trace ideal. The motivating example is in the category of Hilbert spaces: while many morphisms do not have a trace, within each $\operatorname{Hom}(\mathcal{H}, \mathcal{H})$ there is a subspace of morphisms which can be traced. This subspace forms a two-sided ideal which is closely related to the ideal of Hilbert-Schmidt maps. In Remarks 8 and 16 we discuss how these ideas can be generalised to this setting.
+
+The denotational semantics of (multiplicative) linear logic is essentially given by *-autonomous categories. From this perspective, compact closed categories are slightly degenerate, since they correspond to models of linear logic in which the two multiplicative
+---PAGE_BREAK---
+
+connectives are canonically isomorphic. However, compact closed categories are the natural doctrine to model the geometry of interaction semantics; in other words, the distinction between *-autonomous and compact closed categories is essentially equivalent to the distinction between denotational semantics and geometry of interaction semantics for linear logic. As indicated above, in the present paper we propose a construction which attempts to bridge this gap, namely the notion of a trace operator on a *-autonomous category, or more generally on a linearly distributive category. Even though *-autonomous categories make up the basic ingredient of categorical models of linear logic, it is quite productive to ignore the closed structure entirely and instead focus on the interaction between the tensor product and its dual cotensor, par. This was one of the motivations of the latter two authors in introducing linearly distributive categories. In a sequence of papers [6,7,9,11,12], it has been amply demonstrated that once one understands the linearly distributive structure, the extension of crucial structural results to *-autonomy is straightforward. These results are achieved by introducing a graph-theoretic language for specifying morphisms which is inspired by proof nets. It should be thought of as a logical version of the Joyal–Street geometry of tensor calculus [19].
+
+Furthermore, the general “geometry of interaction” construction (Section 4) completes a category by adding “complements” to make the traces canonical. The construction, however, is pointless if all the complements are already present: thus, it is crucial to start in a setting which can lack complements. Linearly distributive categories, being the notion of *-autonomous categories without complementation, are therefore a natural starting point from which to consider such a construction.
+
+Significantly, to make sense of a trace combinator in the linearly distributive setting it is necessary to suppose the MIX rule holds. In a MIX category, the very fact that an object has a trace immediately forces the object to be in the “core” (Section 1.2) where the distinction between tensor and par is lost. This reflects the fact that a geometry of interaction semantics necessarily lies in a compact closed category. We also explore the notion that an object may have several trace structures, and we introduce a notion of compatibility in this case, which turns out to be equivalent to the dinaturality of the trace operator, and so to the axiom “sliding” of [20]. For a given tensor structure, restricting to compatible classes of trace operators guarantees that an object may have at most one trace structure.
+
+The link between fixpoint combinators and trace combinators can also be expressed in this setting (Section 5). We show that an object admits a fixpoint combinator precisely when it admits a trace and is a cocommutative comonoid. This generalises the observation by Hyland and Hasegawa [15].
+
+Finally, we repeat a frequent warning about terminology and notation from previous papers in this series. The reader will have already noticed that we have adopted the term “linearly distributive category” for what previously we have called “weakly distributive category”, continuing the practice begun in [11]. More controversial perhaps is our insistence upon the use of ⊕ for “par”, preferring + for the coproduct “sum”.
+---PAGE_BREAK---
+
+# 1. The core of a MIX category
+
+## 1.1. Preliminaries
+
+### 1.1.1. Linearly distributive categories
+
+For the full definition of a linearly distributive category, we refer the reader to [7,9,10] (where the term “weakly distributive category” is used). Briefly, a linearly distributive category is a category with two tensors $\otimes, \oplus$ and two strength natural transformations, making each tensor strong (respectively costrong) with respect to the other. These strength transformations will be denoted by
+
+$$ \begin{align*} \delta_L^L & : A \otimes (B \oplus C) \rightarrow (A \otimes B) \oplus C, \\ \delta_R^R & : (B \oplus C) \otimes A \rightarrow B \oplus (C \otimes A). \end{align*} $$
+
+In this paper, we shall suppose these tensors are symmetric, and so we have two additional induced strength transformations:
+
+$$ \begin{align*} \delta_R^L & : A \otimes (B \oplus C) \rightarrow B \oplus (A \otimes C), \\ \delta_L^R & : (B \oplus C) \otimes A \rightarrow (B \otimes A) \oplus C. \end{align*} $$
+
+In this case, $\delta_R^R$ is induced from $\delta_L^L$ and need not be assumed as a primitive. These data must satisfy standard coherence conditions, discussed in [9]. These tensors must satisfy the usual conditions for a monoidal category, and in addition there are conditions for the “distributivities” above; in the symmetric case which is the main concern in the present paper, these essentially amount to requiring the following commutative diagrams:
+
+The coherence diagrams express the compatibility of the distributivities with the unit and associativity isomorphisms. The third diagram, shown here up to associativity, asserts that the two ways of distributing across $(A \oplus B) \otimes (C \oplus D)$ agree:
+
+$$
+\begin{array}{ccc}
+(A \oplus B) \otimes (C \oplus D) & \xrightarrow{\ \delta_L^L\ } & ((A \oplus B) \otimes C) \oplus D \\
+{\scriptstyle \delta_R^R}\downarrow\ & & \ \downarrow{\scriptstyle \delta_R^R \oplus 1} \\
+A \oplus (B \otimes (C \oplus D)) & \xrightarrow{\ 1 \oplus \delta_L^L\ } & A \oplus (B \otimes C) \oplus D
+\end{array}
+$$
+---PAGE_BREAK---
+
+The third diagram above is the most controversial, as it fails to be true in distributive categories, and is why distributive categories cannot be linearly distributive (unless they are posetal, see [10]). However, it is a direct consequence of the logical interpretation of linearly distributive categories: it corresponds to a natural (and necessary) cut-elimination step. For in essence, ignoring associativity,
+
+$$ \delta_L^L;\ \delta_R^R \oplus 1 = \dfrac{\dfrac{A \oplus B \to A, B \qquad B, C \to B \otimes C}{A \oplus B, C \to A, B \otimes C} \qquad C \oplus D \to C, D}{A \oplus B, C \oplus D \to A, B \otimes C, D} $$
+
+and
+
+$$ \delta_R^R;\ 1 \oplus \delta_L^L = \dfrac{A \oplus B \to A, B \qquad \dfrac{B, C \to B \otimes C \qquad C \oplus D \to C, D}{B, C \oplus D \to B \otimes C, D}}{A \oplus B, C \oplus D \to A, B \otimes C, D} $$
+
+and a standard permutation of the cuts would require these to be equivalent.
+
+We shall see some examples below in Example 4.
+
+### 1.1.2. Polycategorical composition and circuits
+
+In [9] we showed that the linear distributivities are precisely what is necessary to model the *cut* rule for polycategories (or equivalently, for sequent calculus with multiple conclusions and multiple premises). In this paper it will be convenient to use the polycategorical cut rule, which we shall call “polycategorical composition”; the reader should consult [9] for further details, but the following example ought to make the notion clear. Suppose we have maps $C_1 \otimes A \otimes C_2 \xrightarrow{g} C_3$ and $D_1 \xrightarrow{f} D_2 \oplus A \oplus D_3$ (one may imagine the objects $C_i$, $D_j$ are finite “sequences” of objects, i.e. finite tensors or pars of such sequences, as appropriate). We can construct $f_{\circlearrowleft} g: C_1 \otimes D_1 \otimes C_2 \to D_2 \oplus C_3 \oplus D_3$ as follows (ignoring several instances of associativity for simplicity):
+
+$$
+\begin{align*}
+f_{\circlearrowleft} g &= C_1 \otimes D_1 \otimes C_2 \xrightarrow{1 \otimes f \otimes 1} C_1 \otimes (D_2 \oplus A \oplus D_3) \otimes C_2 \\
+&\xrightarrow{\delta_R^L \otimes 1} (D_2 \oplus (C_1 \otimes (A \oplus D_3))) \otimes C_2 \\
+&\xrightarrow{\delta_R^R} D_2 \oplus ((C_1 \otimes (A \oplus D_3)) \otimes C_2) \\
+&\xrightarrow{1 \oplus (\delta_L^L \otimes 1)} D_2 \oplus (((C_1 \otimes A) \oplus D_3) \otimes C_2) \\
+&\xrightarrow{1 \oplus \delta_L^R} D_2 \oplus ((C_1 \otimes A \otimes C_2) \oplus D_3) \\
+&\xrightarrow{1 \oplus g \oplus 1} D_2 \oplus C_3 \oplus D_3
+\end{align*}
+$$
+
+(where 1 represents identity morphisms). There are other ways of achieving this poly-categorical composition, but they are equivalent under the coherence conditions imposed on linearly distributive categories.
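In the degenerate case where $\otimes = \oplus$ is the cartesian product (so all the distributivities are just re-associations), the composite above reduces to threading the shared object $A$ out of $f$ and into $g$. A minimal sketch of this degenerate case (names and example maps hypothetical):

```python
def poly_compose(f, g):
    """Polycategorical composition (cut on A) in the degenerate cartesian
    case: f : D1 -> (D2, A, D3) and g : (C1, A, C2) -> C3 compose to a
    map (C1, D1, C2) -> (D2, C3, D3)."""
    def h(c1, d1, c2):
        d2, a, d3 = f(d1)          # run f, exposing the cut object A
        return (d2, g(c1, a, c2), d3)  # feed A into g, pass D2, D3 through
    return h

# Illustrative maps on integers:
f = lambda d1: (d1 + 1, d1 * 2, d1 + 3)
g = lambda c1, a, c2: c1 + a + c2
print(poly_compose(f, g)(1, 2, 3))  # (3, 8, 5)
```

The chain of distributivities in the displayed derivation is exactly what replaces the tuple re-shuffling here when tensor and par genuinely differ.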
+---PAGE_BREAK---
+
+Before leaving this subsection, we ought to remark on a distinction that must be made between the circuit diagrams we use and the categorical diagrams they are intended to illuminate. Circuits in fact correspond to morphisms in polycategories, not categories. One can get a categorical morphism by “tensoring” inputs and “par’ing” outputs, but unless this is systematically done throughout the circuit, there will be polycategorical elements remaining in the circuit. There is a direct translation between polycategorical morphisms and categorical ones; in [9] we showed that adding tensor and par to polycategories was conservative (in the categorical sense of an adjunction with a fully faithful unit), and that linearly distributive categories are equivalent to polycategories with tensor and par. But since we often place our discussions in the context of circuits, rather than using categorical conditions, the reader must keep the distinction clear. Circuits and categorical diagrams emphasize different aspects of the underlying structure; it is to be expected that in translating between them, certain features will gain or lose prominence. But we must be clear about the following: circuits are a precise means of discussing the categorical (as well as the polycategorical) structure of the categories we consider. There is a precise “term logic” for them, given in [7], which makes them more than just an analogy. Using this, the reader who wants to recast our language into categorical terms may do so. Our point, of course, is that the present presentation helps make the structure easier to understand. Circuits handle the various instances of tensorial strength that determine the structure of linearly distributive categories with particular elegance.
+
+A piece of terminology: we refer to non-logical axioms (or generating morphisms) as “components”; one of the key results of [7] is that for purposes of determining the equivalence of circuits (i.e. the equality of maps), rules given in terms of components may in fact be used with arbitrary (sequentializable) subcircuits⁴ playing the same role as components.
+
+1.1.3. MIX categories
+
+Next, we recall from [11] that a **MIX** category is a linearly
+distributive category with a morphism $m: \perp \to \top$, satisfying
+a simple coherence condition. Note that in a **MIX** category,
+there is a morphism (also denoted $m$) $m_{AB}: A \otimes B \to A \oplus B$ for any
+$A, B$, which essentially amounts to either of the equivalent nets
+at left. The coherence condition referred to above is just that
+these two canonical ways of constructing this map are equal.
+
+In circuits, this condition amounts to being able to “switch”, or rewire, the unit thinning
+links that may be attached to the *m* component, as in the figure. In fact, in [11] we
+show that this condition need only be supposed to hold when the two wires have a
+unit type. We can strengthen the definition of MIX: an isoMIX category is a MIX
+
+⁴ Unless we state otherwise, we shall use “subcircuits” to refer to sequentializable subgraphs, and “subgraphs” to refer to those that are not necessarily sequentializable.
+---PAGE_BREAK---
+
+category whose “mix” morphism $m: \perp \to \top$ is an isomorphism. (Note this does not force $A \otimes B \xrightarrow{m_{AB}} A \oplus B$ to be an isomorphism.) The isoMIX condition is essentially equivalent to having a biunit, and indeed simply forcing the units to be isomorphic also forces the mix condition. Thus a linearly distributive category in which $\top$ is isomorphic to $\perp$ is an isoMIX category.
+
+In a symmetric linearly distributive category, a morphism $f: A \to B$ is said to be nuclear [12] if there are morphisms $\tau: \top \to C \oplus B$, $\gamma: A \otimes C \to \perp$ so that $(u_\otimes^R)^{-1}; 1 \otimes \tau; \delta_L^L; \gamma \oplus 1; u_\oplus^L = f$ (where $u$ are appropriate unit isomorphisms). We say $C$ (and $\tau, \gamma$) “witnesses” the nuclearity of $f$. An object is nuclear if its identity map is nuclear; nuclear morphisms form a 2-sided ideal. The nucleus of a linearly distributive category is the full subcategory of its nuclear objects. In the symmetric case, this operation is idempotent (though not in the non-symmetric case).
+
+We noted in [11] that a linearly distributive category is MIX if and only if its nucleus is. Related to this is the notion of complement: an object *A* of a linearly distributive category is said to be complemented if there is an object *B* and maps $\tau: \top \to B \oplus A$, $\gamma: A \otimes B \to \perp$ so that $(u_\otimes^R)^{-1}; 1 \otimes \tau; \delta_L^L; \gamma \oplus 1; u_\oplus^L = 1_A$ and $(u_\otimes^L)^{-1}; \tau \otimes 1; \delta_R^R; 1 \oplus \gamma; u_\oplus^R = 1_B$. This means that each object *A*, *B* is nuclear, and moreover each is a witness of the other's nuclearity. For a complemented object *A*, its complement is unique up to a unique isomorphism. If idempotents split, then nuclear objects must be complemented, so these two notions coincide.
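As a concrete sanity check (our illustration, not drawn from the text: we use the compact closed category **Rel** of sets and relations, in which $\otimes = \oplus = \times$, the units are a singleton, and every object is its own complement), the composite defining nuclearity can be verified directly, with $\tau$ and $\gamma$ both given by the diagonal relation. A minimal sketch, encoding relations as sets of (input, output) pairs:

```python
# Relations as sets of (input, output) pairs; relational composition.
def compose(r, s):
    return {(x, z) for (x, y) in r for (y2, z) in s if y == y2}

A = {0, 1, 2}
tau   = {((), (a, a)) for a in A}   # tau   : 1 -> A (x) A, the diagonal
gamma = {((a, a), ()) for a in A}   # gamma : A (x) A -> 1, the diagonal

# The composite A ~ A x 1 --1 x tau--> A x (A x A) ~ (A x A) x A --gamma x 1--> 1 x A ~ A,
# with the unit isomorphisms elided.
one_x_tau = {(a, (a, p)) for a in A for (_, p) in tau}
gamma_x_one = {((a, (b, c)), c) for a in A for b in A for c in A
               if ((a, b), ()) in gamma}     # reassociate, then apply gamma (x) 1
snake = compose(one_x_tau, gamma_x_one)
assert snake == {(a, a) for a in A}          # the identity: A witnesses its own nuclearity
```

The composite collapses to the identity relation, so every object of this model is nuclear, and indeed complemented, matching the remark that the two notions coincide when idempotents split.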
+
+## 1.2. The core
+
+**Definition 1.** Suppose $\mathbf{X}$ is a MIX category. We say an object $U$ is in the core of $\mathbf{X}$ if the natural transformation $m_U : U \otimes (-) \to U \oplus (-)$ is an isomorphism.
+
+**Lemma 2.** *The following diagram commutes in a MIX category. So, if U is in the core of such a category, then the linear distributivity $\delta_L^L$ is an isomorphism (essentially corresponding to associativity).*
+
+$$
+\begin{array}{ccc}
+U \otimes (A \oplus B) & \xrightarrow{\delta_L^L} & (U \otimes A) \oplus B \\
+{\scriptstyle m}\big\downarrow & & \big\downarrow{\scriptstyle m \oplus 1} \\
+U \oplus (A \oplus B) & \xrightarrow{a^{-1}} & (U \oplus A) \oplus B
+\end{array}
+$$
+
+**Proof.** This is most simply shown by examining the proof circuits for the maps involved. Throughout this paper, we represent the “MIX-barbell” by $\mathfrak{g}$. Note it has thinning links attached at either end; rewiring these is a key step in such proofs. In [7] we gave a Rewiring Theorem which showed that any rewiring past a subcircuit was valid; the reader ought to consult that paper for the full details.
+---PAGE_BREAK---
+
+$$m;\, a^{-1} \;=\; \delta_L^L;\, (m \oplus 1)$$
+
+*(The proof circuits for the two sides, which are shown equal by rewiring the thinning links attached to the MIX-barbell, are omitted here.)* $\square$
+
+**Proposition 3.** If $\mathbf{X}$ is a MIX category and $U, V$ are in the core of $\mathbf{X}$, then $U \otimes V$, $U \oplus V$ are also in the core. Moreover, $U \otimes V$, $U \oplus V$ are isomorphic. If $\mathbf{X}$ is isoMIX, then $\top, \bot$ are also in the core (so that for an isoMIX category, the core forms a full compact (i.e. $\otimes \cong \bot$) linearly distributive sub-category of $\mathbf{X}$).
+
+**Proof.** Most of this is obvious; the only point that needs some elaboration is that $U \otimes V$ is in the core if $U$ and $V$ are. This follows from the commutativity of the following diagram:
+
+$$
+\begin{array}{ccc}
+(U \otimes V) \otimes A & \xrightarrow{\;m\;} & (U \otimes V) \oplus A \\
+{\scriptstyle a}\big\downarrow & & \big\uparrow{\scriptstyle \delta_L^L} \\
+U \otimes (V \otimes A) & \xrightarrow{\;1 \otimes m\;} & U \otimes (V \oplus A)
+\end{array}
+$$
+
+Since $a$ is an isomorphism, $1 \otimes m$ is an isomorphism ($V$ being in the core), and $\delta_L^L$ is an isomorphism by Lemma 2 ($U$ being in the core), the top map $m$ is an isomorphism as well.
+
+The commutativity of the diagram follows from the following circuit equation:
+
+$$a;\, (1 \otimes m);\, \delta_L^L \;=\; m$$
+
+*(The circuit rewrite, which again proceeds by rewiring the thinning links of the MIX-barbell, is omitted here.)* $\square$
+
+**Example 4.** There are a number of examples of MIX categories which have non-empty cores but in which the tensor and par cannot be identified.
+
+(i) In any isoMIX category the core is non-empty since the unit is in the core. However, this may be all that is in the core. If the category has biproducts acting as
+---PAGE_BREAK---
+
+tensor and par, then it is not hard to show that all (finite) biproducts of the unit
+will also be in the core. This may then be a non-trivial category.
+
+An example of this phenomenon is given by the category **RTVec** of reflexive
+linear topological vector spaces [5,8,21], i.e. vector spaces equipped with a
+linear topology which are isomorphic to their double duals. In this category, all
+finite-dimensional vector spaces are in the core but infinite-dimensional vector
+spaces are not in the core.
+
+(ii) As another example, consider the category of sup-lattices, where the objects are lattices with arbitrary suprema and the functions preserve these; this is a well-known $*$-autonomous category [4]. The tensor is determined as the adjoint to the function space construction (where the functions are given the pointwise ordering) and the tensor unit is the two-element boolean algebra 2. This unit is also the dualizing object. The elements of $A \to 2$ correspond to ideals which, being closed under suprema, are principal. These ideals are ordered pointwise as maps to 2 which, in fact, means that they are ordered by the reverse of inclusion. Thus, the “perp” of an object is the sup-lattice itself but with the reverse ordering.
+
+As the tensor unit and the dualizing object coincide in this category, it is in fact an isoMIX category.
+What is its core? First observe that sup-lattices have biproducts: thus the core
+contains at least all the biproducts of 2 (which are the finite boolean algebras).
+But clearly it contains more.
+
+A sup-lattice $A$ is nuclear in case the function space can be expressed as $A \to B = (A \to 2) \otimes B$: in [16] these are shown to be the completely distributive lattices. However, since we are in a $*$-autonomous category this forces $A \to 2$ to be in the core for any nuclear $A$, and thus by Proposition 5 $A$ must be in the core. The converse is also true: core objects are nuclear. This means that the objects in the core (which we shall see are also those objects which can be traced) in the category of sup-lattices are exactly the completely distributive lattices.
+
+(iii) Any symmetric monoidal category $\mathbf{X}$ may be regarded as an isoMIX category (taking $\oplus = \otimes$) in which every object is in the core. The finite bicompletion of $\mathbf{X}$ is an isoMIX linearly distributive category $\Lambda(\mathbf{X})$, see [18]; its core includes $\mathbf{X}$ but is not the whole category.
+
+(iv) In a symmetric monoidal category an object $V$ is said to have a tensor inverse when there is an object $V'$ such that $V \otimes V'$ is the unit (and certain coherence diagrams hold, see [9]). Given such an object one can define a "par" as $A \oplus B = A \otimes V \otimes B$ which has unit $V'$. If there is a map $\hat{m}: \top \to V$ then this provides a MIX structure; notice that in this case, the mix map is the mate $m: V' \to \top$ obtained by tensoring $\hat{m}$ with $V'$. An object is $V$-invariant in case $\hat{m} \otimes 1: \top \otimes A \to V \otimes A$ is an isomorphism. Clearly $V$-invariant objects are in the core.
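The characterisation in example (ii) can be probed on small finite lattices: for a finite lattice, complete distributivity coincides with ordinary distributivity, so a finite Boolean algebra is in the core while the diamond $M_3$ is not. A small illustrative check (the encoding of the lattices below is ours, not the text's):

```python
from itertools import product

def is_distributive(elems, meet, join):
    """Check x meet (y join z) == (x meet y) join (x meet z) for all triples."""
    return all(meet(x, join(y, z)) == join(meet(x, y), meet(x, z))
               for x, y, z in product(elems, repeat=3))

# The Boolean algebra 2 x 2, encoded as pairs of bits.
bools = [(i, j) for i in (0, 1) for j in (0, 1)]
b_meet = lambda x, y: (min(x[0], y[0]), min(x[1], y[1]))
b_join = lambda x, y: (max(x[0], y[0]), max(x[1], y[1]))

# The diamond M3: 0 < a, b, c < 1 with a, b, c pairwise incomparable.
m3 = ["0", "a", "b", "c", "1"]
def m3_meet(x, y):
    if x == y: return x
    if x == "1": return y
    if y == "1": return x
    return "0"
def m3_join(x, y):
    if x == y: return x
    if x == "0": return y
    if y == "0": return x
    return "1"

assert is_distributive(bools, b_meet, b_join)     # finite Boolean algebra: in the core
assert not is_distributive(m3, m3_meet, m3_join)  # M3: complete, but not in the core
```

For $M_3$ the failure is visible by hand: $a \wedge (b \vee c) = a \wedge 1 = a$, while $(a \wedge b) \vee (a \wedge c) = 0 \vee 0 = 0$.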
+
+**Proposition 5.** If $U$ is in the core of $\mathbf{X}$ and $V$ is a complement of $U$, then $V$ is in the core of $\mathbf{X}$.
+
+**Proof.** Recall that $\langle U, V \rangle$ being a complement pair means that we have the following components and equalities:
+---PAGE_BREAK---
+
+The inverse to $V \otimes A \xrightarrow{m} V \oplus A$ is
+
+$$
+\begin{align*}
+V \oplus A \xrightarrow{u^{-1}} \top \otimes (V \oplus A) &\xrightarrow{\tau \otimes 1} (V \oplus U) \otimes (V \oplus A) \\
+&\xrightarrow{m^{-1} \otimes 1} (V \otimes U) \otimes (V \oplus A) \xrightarrow{a;\, 1 \otimes \delta_L^L} V \otimes ((U \otimes V) \oplus A) \\
+&\xrightarrow{1 \otimes (\gamma \oplus 1)} V \otimes (\bot \oplus A) \xrightarrow{1 \otimes u} V \otimes A,
+\end{align*}
+$$
+
+where $m^{-1}$ is the inverse to $m_{VU}: V \otimes U \to V \oplus U$, which exists since $U$ is in the core.
+As a proof circuit, this map is the following. Note that to simplify these circuits, we
+shall drop the grounded unit nodes which are attached to the $\tau$ and $\gamma$ nodes, writing,
+for example, the $\tau$ node without any input wires, and dually for $\gamma$:
+
+To see this is the required inverse amounts to some circuit rewrites. First, we precom-
+pose with $m_{VA}$, and show this is equivalent to the identity on $V \otimes A$:
+---PAGE_BREAK---
+
+which is the expanded normal form of the identity on $V \otimes A$. Note the rewirings around subcircuits, and the use of the equivalence $m^{-1}$; $m = 1$ in the second rewrite.
+
+Next, we postcompose with $m_{VA}$, and show this is equivalent to the identity on $V \oplus A$, just as above:
+
+which is the expanded normal form of the identity on $V \oplus A$. □
+---PAGE_BREAK---
+
+## 2. Traced objects
+
+We begin with a definition based upon the similarly named notion of Joyal et al. [20].
+
+**Definition 6.** Suppose $\mathbf{X}$ is a MIX category, $U$ an object of $\mathbf{X}$. We say $U$ has a trace if there is a family of functions $\text{tr}_U^{AB}: \mathbf{X}(U \otimes A, U \oplus B) \to \mathbf{X}(A, B)$ satisfying the following axioms:
+
+$$
+\begin{align*}
+\text{Yanking: } & \mathrm{tr}_U^{U,U}(c_{\otimes};\, m_{UU}) = 1_U = \mathrm{tr}_U^{U,U}(m_{UU};\, c_{\oplus}) \\
+\text{Tightening: } & \mathrm{tr}(g \,\S\, f \,\S\, h) = g \,\S\, \mathrm{tr}(f) \,\S\, h \\
+\text{Superposing: } & \mathrm{tr}(f \otimes C) = \mathrm{tr}(f) \otimes C \\
+ & \mathrm{tr}(f \oplus C) = \mathrm{tr}(f) \oplus C
+\end{align*}
+$$
+
+We must clarify the notation. Above, $c_\otimes$ and $c_\oplus$ are the “twist” maps (which exist since we are assuming $\otimes$ and $\oplus$ are symmetric). First note that $c; m = m; c: U \otimes U \to U \oplus U$; these are the “twisted” versions of $m$. Then the meaning of “yanking” is clear: these are both sent to the identity on $U$ under the trace operator. For “tightening”, we suppose given morphisms (again we ignore instances of associativity where the meaning is clear, and will continue to do so when appropriate) $f: U \otimes A \otimes B \to U \oplus X \oplus Y$, $g: D \to B \oplus C$, $h: Y \otimes Z \to W$; here $A, D$, and $Z$ may be thought of as arbitrary finite strings of (i.e. tensors of) objects, and $X, C$, and $W$ may be thought of as arbitrary finite strings of (i.e. pars of) objects. The notation “$\S$” refers to the evident polycategorical composition discussed in the previous section, so the resultant equation is between maps $A \otimes D \otimes Z \to X \oplus W \oplus C$. Finally, for “superposing”, we suppose $f$ as for tightening, and then we mean that the trace of the map $U \otimes A \otimes B \otimes C \xrightarrow{f \otimes 1} (U \oplus X \oplus Y) \otimes C \xrightarrow{\delta'} U \oplus (X \oplus (Y \otimes C))$ is the map $A \otimes B \otimes C \xrightarrow{\text{tr}(f) \otimes 1} (X \oplus Y) \otimes C \xrightarrow{\delta} X \oplus (Y \otimes C)$, and similarly for $\oplus$. Here the $\delta$'s are given by the evident linear distributivities. We shall present these axioms as circuit rewrites shortly.
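For readers who prefer a concrete model: in the category **Rel** of sets and relations (a compact closed, hence isoMIX, category, used here purely as an illustration and not discussed in the text), the canonical trace of $R \subseteq (U \times A) \times (U \times B)$ relates $a$ to $b$ exactly when $R$ relates $(u,a)$ to $(u,b)$ for some $u$. Yanking and an instance of tightening can then be checked mechanically:

```python
def trace_U(rel, U):
    """Canonical trace in Rel: tr_U(R) relates a to b exactly when
    some u in U satisfies R((u, a), (u, b))."""
    return {(a, b) for ((u, a), (v, b)) in rel if u == v and u in U}

def comp(r, s):
    """Relational composition."""
    return {(x, z) for (x, y) in r for (y2, z) in s if y == y2}

U = {0, 1, 2}
# In Rel the mix map m is the identity, so c ; m is just the symmetry c.
c = {((x, y), (y, x)) for x in U for y in U}
assert trace_U(c, U) == {(u, u) for u in U}        # yanking: tr(c ; m) = 1_U

# Tightening (an instance): precompose with 1 (x) g, trace, and compare.
A, B, D = {10, 11}, {20, 21}, {30, 31}
R = {((u, a), (v, b)) for u in U for a in A for v in U for b in B
     if (u + a + v + b) % 3 == 0}                  # an arbitrary relation U x A -> U (+) B
g = {(30, 10), (31, 11), (31, 10)}                 # g : D -> A
one_x_g = {((u, d), (u, a)) for u in U for (d, a) in g}
assert trace_U(comp(one_x_g, R), U) == comp(g, trace_U(R, U))
```

Both identities hold for arbitrary choices of $R$ and $g$ in this model, since each side unfolds to the same existential formula over the traced variable $u$.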
+
+The names for these axioms are those of the corresponding axioms in [20]. There are two axioms in [20] we have omitted: “sliding”, for which we shall soon have a replacement, and “vanishing”, which will be a definition, not an axiom, in our treatment. Superposing is both more and less general here than in [20]: their version is only given in terms of the tensor, whereas we must also include a version for the par. Their version involves another map $g$, so we have the equation $\text{tr}(f \otimes g) = \text{tr}(f) \otimes g$ and similarly for $\oplus$, but it is a simple exercise that this general version is a consequence of our version plus tightening (at least in the present context).
+
+To present these axioms as proof circuits, we need a notation for the trace operator. We shall use a box notation similar to that used in [11], viz. we shall box the circuit corresponding to the map $U \otimes A \xrightarrow{f} U \oplus B$, anchoring the $U$ wires to the box, leaving only the $A, B$ wires, which will represent the map $A \to B$. When we wish to consider different trace operators (for example, operators for different traced objects), we shall “decorate” the box with suitable labels. So, with these remarks to guide the reader, we present the three axioms for a trace operator in terms of proof circuits.
+---PAGE_BREAK---
+
+**Yanking:**
+
+**Tightening:**
+
+**Superposing:**
+---PAGE_BREAK---
+
+Tightening is necessary in order to make the trace operator a strong combinator, and superposing is necessary for the category – polycategory translation that underlies all operations on linearly distributive categories. We shall consider these two axioms more closely in order to elucidate this structure.
+
+First, it is standard to identify tightening as the requirement that $\text{tr}_U^{A,B}$ is a natural transformation in $A, B$. The tightening axiom we have is a polycategorical generalisation of this, since we have allowed $g, h$ to be “polymorphisms”.⁵ This amounts to adding a measure of tensorial strength to the situation; to explain this in detail would require a digression to introduce the notion of a “strong combinator”: from this point of view, the trace operator acts on endomorphisms of $U$, and the strength allows the smooth handling of the parameters $A$ and $B$. (An account of strong combinators in the cartesian case can be found in the thesis of Vesely [23] — the generalisation to the present context is fairly straightforward, but is not necessary for our purposes.)
+
+Next we consider the axiom of superposing. Note that in the circuit rewrites for superposing, as given above, pulling the lower $\otimes$ node out of the trace box may be done with our “polycategorical” version of tightening, as may pulling out the upper $\oplus$ node, since these are subcircuits. So the essential content of this axiom involves pulling out the other nodes, which are not subcircuits. These nodes are “switching”, in the terminology of proof nets, and their usual role is (for $\otimes$) to make two input wires into a single tensored input wire, and dually for the par, making two output wires a single par’ed output wire. In other words, they serve to translate between a polycategorical circuit, which has multiple in/output wires, and a categorical circuit, which has exactly one input and one output wire. Since these are *not* subcircuits (they do not correspond to morphisms), tightening does not apply to them; superposing then essentially amounts to allowing these moves, and so can be given in the following equivalent form.
+
+Superposing (ii):
+
+⁵ We refer to “polymorphisms” to mean morphisms in a polycategory. As we pointed out in [9], any linearly distributive category may be regarded as a polycategory, and our circuits make such a viewpoint very natural. Thinking of a morphism as a component box in circuit notation, a morphism has one input and one output wire, whereas a polymorphism has many inputs and outputs.
+---PAGE_BREAK---
+
+This form of superposing has the feature that if we try to give a categorical version,
+then since we use the very nodes that are being moved in and out of the scope of the
+trace boxes, these equations end up being identities. So in effect, superposing (in our
+context) amounts to enabling the polycategorical – categorical translation. (A similar
+effect was noted in the linear implication scope boxes of [11].)
+
+The point of these remarks is to underline the qualitative difference between the
+tightening and superposing requirements, and yanking: tightening and superposing are
+structural axioms which express no more than the strength of the combinator; yanking,
+however, is a distinct requirement — which makes the operator a trace rather than a
+general feedback combinator.
+
+**Remark 8 (Trace ideals).** In [2], the authors introduce the notion of a trace ideal, a notion which models the fact that in (for example) Hilbert spaces one has many maps without a trace; in particular, identity morphisms generally do not have traces. We can model this phenomenon in our setting by allowing $\text{tr}_U$ to be a partial operator. The axioms which account for the bistrength, viz. tightening and superposing, will remain as before, but we must modify yanking, since there is no reason to suppose $c$; $m$ is in the domain of $\text{tr}_U$ in general. In this setting, we would take the following variant of yanking, for $A \xrightarrow{f} U \oplus D$, $U \otimes C \xrightarrow{g} B$. If $(g \otimes f) \,\S\, m \,\S\, c$ is in the domain of $\text{tr}_U^{C \otimes A, B \oplus D}$, then
+
+$$\text{Generalised Yanking: } \quad c \,\S\, \text{tr}_U((g \otimes f) \,\S\, m \,\S\, c) = f \,\S\, g.$$
+
+As before, we may suppose *A* and *B* represent finite strings of objects.
+In circuit notation, this is the following rewrite:
+
+Note that in the case of a global trace operator, this generalised version of yanking
+is a consequence of (ordinary) yanking and tightening. In the partial operator case,
+tightening is to be interpreted as requiring that (using the notation of Definition 6) if
+$f$ is in the domain of tr, so is $g \,\S\, f \,\S\, h$, and moreover $\text{tr}(g \,\S\, f \,\S\, h) = g \,\S\, \text{tr}(f) \,\S\, h$. Likewise,
+superposing is to be interpreted as requiring that if $f$ is in the domain of tr, so are
+$f \otimes C$, $f \oplus C$, and moreover $\text{tr}(f \otimes C) = \text{tr}(f) \otimes C$, $\text{tr}(f \oplus C) = \text{tr}(f) \oplus C$. With this
+---PAGE_BREAK---
+
+revised definition, note that tightening guarantees that the domain of $\text{tr}_U^{AB}$ (for any $A$, $B$, $U$) is a two-sided ideal.
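In finite dimensions the Hilbert-space situation of [2] degenerates: every map is trace-class, and $\text{tr}_U$ becomes the familiar partial trace, $\text{tr}_U(f)_{(b,a)} = \sum_u f_{(u,b),(u,a)}$. A minimal matrix sketch (our encoding, using plain nested lists rather than any particular library):

```python
def partial_trace(f, du, da, db):
    """tr_U(f) for a linear map f : U (x) A -> U (x) B, with f given as a
    (du*db) x (du*da) matrix, rows = outputs, index (u, x) flattened as u*dim + x."""
    return [[sum(f[u * db + b][u * da + a] for u in range(du))
             for a in range(da)]
            for b in range(db)]

du = 2
# The symmetry c on U (x) U sends e_u (x) e_v to e_v (x) e_u;
# since the mix map is the identity here, c also plays the role of c ; m.
swap = [[0] * (du * du) for _ in range(du * du)]
for u in range(du):
    for v in range(du):
        swap[v * du + u][u * du + v] = 1

# Yanking: the partial trace of the twisted mix map is the identity on U.
assert partial_trace(swap, du, du, du) == [[1, 0], [0, 1]]
```

In infinite dimensions the sum over $u$ need not converge, which is exactly the point of restricting the domain of $\text{tr}_U$ to an ideal.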
+
+Our first key result about traces is that in a MIX category, any complemented core object has a canonical trace operator. This fact plays the role in our theory that the canonical trace on a tortile monoidal category plays in the theory of [20].
+
+**Proposition 9.** Suppose $\mathbf{X}$ is a MIX category, $U$ a complemented object in the core of $\mathbf{X}$. Then $U$ has a trace, called the *complement trace*, defined as follows. For $f: U \otimes A \to U \oplus B$, $\text{tr}(f) = A \xrightarrow{u^{-1}} A \otimes \top \xrightarrow{1 \otimes \tau} A \otimes (V \oplus U) \xrightarrow{\delta'} V \oplus (U \otimes A) \xrightarrow{1 \oplus f} V \oplus (U \oplus B) \xrightarrow{m^{-1}} V \otimes (U \oplus B) \xrightarrow{\delta''} (U \otimes V) \oplus B \xrightarrow{\gamma \oplus 1} \bot \oplus B \xrightarrow{u} B$. This is given by the circuit below:
+
+**Proof.** There are three circuit diagrams to verify. The diagram for yanking is trivial: just rewire the MIX-barbell so it falls just below the $m^{-1}$ node, and then the circuit reduces directly to the identity wire. For tightening and superposing, there is actually nothing to do: the circuits are the same on either side of the equations. $\square$
+
+**Proposition 10.** If $U$ is a traced object of a MIX category $X$, then $U$ is in the core of $X$.
+
+**Proof.** The map inverse to $m: U \otimes A \to U \oplus A$ is the trace of the linear distributivity $\delta: U \otimes (U \oplus A) \to U \oplus (U \otimes A)$. The following rewrites show these are indeed inverse. The first shows that $m; \text{tr}(\delta) = 1_{U \otimes A}$, the second shows that $\text{tr}(\delta); m = 1_{U \oplus A}$. The main subtlety here is that in the first case we can rewire the thinning link at the bottom around the $\otimes$ link, and in the second case we must rewire the thinning link at the top around the $\oplus$ link. Switching the MIX-barbell to its mirror image is valid by the
+---PAGE_BREAK---
+
+coherence condition for MIX categories:
+
+There is an analogous result for the tensor units. It is clear that in an isoMIX category, the (common) unit for the tensor and par is trivially traceable; however, the converse is also true.
+
+**Proposition 11.** Suppose $X$ is a MIX category. If either $\top$ or $\bot$ has a trace, then $\bot \cong \top$, so that $X$ is an isoMIX category.
+
+Note that this implies that making all objects traced would eliminate the distinction between the set-up of this paper, using linearly distributive categories, and that of [20], since we would then have an isoMIX category in which all objects were in the core, and linear distributivity essentially just becomes associativity.
+
+**Proof.** From an “abstract” point of view, this is obvious: if, say, $\top$ is traced, then it is in the core, so the functor $\top \otimes _{-}$ is isomorphic to the functor $\top \oplus _{-}$, and so $\top$ is a unit for the par. However, it may be of interest to see what the isomorphism is explicitly. We
+---PAGE_BREAK---
+
+begin with the observation that if any object has a trace operator, then there is induced
+a morphism from $\top$ to $\bot$, viz. the trace of the morphism $U \otimes \top \xrightarrow{u_\otimes} U \xrightarrow{u_\oplus^{-1}} \bot \oplus U$. In
+general, there is no reason for this to be the inverse for $m: \bot \to \top$, but if $U$ is either
+unit, then we can show that it is indeed the inverse. We shall consider the case $U=\bot$;
+the other case is similar. The key step is to note that if we have $m$ on a wire, we
+can split the wire above and/or below the $m$ using the unit expansion rewrites, which
+creates a MIX-barbell $\mathfrak{g}$. Likewise we can split the unit wire which is attached to the
+trace box, again creating a thinning link. Then the rest of the proof involves rewiring
+the unit thinning links as necessary. (This can be somewhat subtle, and the order in
+which such rewirings is done can be vital, as we showed in [7]. A rewiring can alter
+the empires of other units, thus altering what other rewirings are possible. In this way,
+rewirings become possible that were blocked before.)
+
+First we consider the composite $m; \mathrm{tr}(u_\otimes; u_\oplus^{-1})$:
+
+which is the identity wire on $\bot$, by yanking. Note the step at the second equality,
+where we split the $\bot$ wire just above the $m$, creating a MIX-barbell, and the rewiring
+at the next step, where we rewired two thinning links, making it possible to join up
+the $\bot$ wires to give the last circuit.
+---PAGE_BREAK---
+
+Next, we consider the other composite; the steps are similar, although more rewiring
+is necessary.
+
+which, after yanking and unit reduction, is the identity wire on $\top$. □
+
+One might ask whether an object *U* may have more than one trace operator. If we
+add a further condition, “compatibility”, then in fact this is not possible, as we shall
+see in the next section.
+
+**3. Compatible traces**
+
+**Definition 12.** Suppose $\mathbf{X}$ is a MIX category, and $U$, $V$ objects of $\mathbf{X}$ each with a trace operator, say $\text{tr}_U$, $\text{tr}_V$. These traces are called *compatible* if for any $f: U \otimes V \otimes A \to U \oplus V \oplus B$, $\text{tr}_V(\text{tr}_U(f)) = \text{tr}_U(\text{tr}_V(f'))$, where $f' = V \otimes U \otimes A \xrightarrow{c \otimes 1} U \otimes V \otimes A \xrightarrow{f} U \oplus V \oplus B \xrightarrow{c \oplus 1} V \oplus U \oplus B$. In circuits, this is the following equation:
+---PAGE_BREAK---
+
+We shall say a trace operator is “self-compatible” if it is compatible with itself. This condition really ought to be considered part of the definition of a “good” notion of trace; we have separated it just to keep clear what notions depend on what conditions.
+
+In [20] there is an axiom “sliding” which corresponds to the dinaturality in the variable $U$ of the family $\text{tr}_U$. The equivalent equation in our context is the following consequence of compatibility.
+
+**Proposition 13.** Suppose $U, V$ are objects of a MIX category with compatible trace operators $\text{tr}_U, \text{tr}_V$. Suppose $f : U \otimes X \to V \oplus Y$ and $g : V \otimes A \to U \oplus B$; let $f \,\S\, g : U \otimes X \otimes A \to U \oplus B \oplus Y$ and $g \,\S\, f : V \otimes X \otimes A \to V \oplus B \oplus Y$ be the evident “polycategorical compositions”. Then $\text{tr}_U(f \,\S\, g) = \text{tr}_V(g \,\S\, f)$.
+
+*Proof.*
+
+
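In the compact model **Rel** (again our illustration, not the text's), both sides of the sliding equation unfold to the same “interaction” formula: $(x,a)$ is related to $(b,y)$ precisely when $f$ and $g$ agree on some pair of intermediate values $u, v$. So compatibility holds on the nose there, which makes the content of Proposition 13 easy to check mechanically:

```python
U, V = {0, 1}, {0, 1, 2}
X, A, B, Y = {0}, {0, 1}, {0}, {0, 1}
f = {((u, x), (v, y)) for u in U for x in X for v in V for y in Y
     if (u + v + y) % 2 == 0}            # f : U (x) X -> V (+) Y
g = {((v, a), (u, b)) for v in V for a in A for u in U for b in B
     if (v + a + u) % 2 == 1}            # g : V (x) A -> U (+) B

# tr_U(f § g): feed f's V-output into g, g's U-output back into f, trace over U.
lhs = {((x, a), (b, y))
       for ((u, x), (v, y)) in f for ((v2, a), (u2, b)) in g
       if v == v2 and u == u2}
# tr_V(g § f): the same interaction, traced over V instead.
rhs = {((x, a), (b, y))
       for ((v, a), (u, b)) in g for ((u2, x), (v2, y)) in f
       if u == u2 and v == v2}
assert lhs == rhs                        # sliding
```

The two comprehensions range over exactly the same witnesses $(u, v)$, which is the relational shadow of the circuit proof: the rewiring never changes which components interact, only which wire carries the trace box.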
+---PAGE_BREAK---
+
+As a corollary, taking $U = V$, $A = \top$, $B = \bot$, and $g = 1$, we obtain the result we
+promised in the last section.
+
+**Corollary 14.** Any two compatible traces are equal.
+
+There remains an axiom of [20] that we have not yet considered, viz. "vanishing",
+which allows the trace of a tensor to be given in terms of the traces of the components
+of the tensor. (The nullary case of vanishing, viz. trace on the tensor unit is identity,
+is either trivially true, when $\top \cong \bot$, i.e. in the isoMIX case, or does not even type
+properly, so we shall not deal further with that case of the vanishing axiom.) Now, it is
+fairly easy to define a trace operator on $U \otimes V$ given traces on $U$ and $V$: indeed there
+are two natural candidates, viz. for $f : (U \otimes V) \otimes A \to (U \otimes V) \oplus B$, $\text{tr}_V(\text{tr}_U(f \S m))$
+and its "twisted" variant $\text{tr}_U(\text{tr}_V(c\S f\S m\S c))$. It is easy to see that these operators are
+both compatible with any operators which are compatible with $\text{tr}_U$ and $\text{tr}_V$, but unless
+these last two are compatible with each other, there is no reason for them to be equal.
+So, although a slightly greater generality is possible, it seems most natural to define
+traces on tensors when the individual traces are compatible. This then leads us to the
+following definition/proposition.
+
+**Proposition 15.** Suppose $U, V$ are objects of a MIX category with compatible, self-compatible trace operators $\text{tr}_U, \text{tr}_V$. Then there is a canonical trace operator on $U \otimes V$ (as defined above) which is compatible with $\text{tr}_U, \text{tr}_V$, and in general is compatible with any trace which is compatible with $\text{tr}_U, \text{tr}_V$. In particular, it is self-compatible.
+
+Note that Corollary 14 shows that there can be at most one trace operator on $U \otimes V$
+with these properties, since any two such traces must be compatible.
+
+**Proof.** The trace on $U \otimes V$ has been defined above; as a circuit this is the following
+(we leave to the reader the construction of the “twisted” variant).
+
+To show this is indeed a trace is fairly straightforward — the only point that requires
+some effort is to verify yanking (tightening and superposing are trivial). This we do
+---PAGE_BREAK---
+
+with the rewrites below. The rest of the proposition, the statements about compatibility, is trivial.
+
+Note that if we take $U \otimes V$ (or the isomorphic $U \oplus V$) as the trace object, then (by Proposition 15) Definition 12 just amounts to the naturality condition for $\text{tr}_{U \otimes V}$. So in essence, dinaturality in $U = \text{sliding} = \text{compatibility}$.
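The “vanishing” behaviour of Proposition 15 can likewise be observed in **Rel** (once more our illustrative model): tracing out the compound object $U \otimes V$ in one step agrees with tracing out $U$ and then $V$:

```python
U, V, A, B = {0, 1}, {0, 1, 2}, {0, 1}, {0, 1}
# An arbitrary relation f : (U (x) V) (x) A -> (U (x) V) (+) B.
f = {(((u, v), a), ((u2, v2), b))
     for u in U for v in V for a in A for u2 in U for v2 in V for b in B
     if (u + 2 * v + 3 * a + u2 + 2 * v2 + 3 * b) % 5 < 2}

# Trace over the compound object U (x) V in one step ...
tr_uv = {(a, b) for (((u, v), a), ((u2, v2), b)) in f if (u, v) == (u2, v2)}
# ... or trace out U first and then V, as in Proposition 15.
tr_u = {((v, a), (v2, b)) for (((u, v), a), ((u2, v2), b)) in f if u == u2}
tr_u_then_v = {(a, b) for ((v, a), (v2, b)) in tr_u if v == v2}
assert tr_uv == tr_u_then_v
```

Both sides reduce to the set of pairs $(a,b)$ admitting a simultaneous witness $(u,v)$, so the equality holds for any choice of $f$ in this model.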
+
+**Remark 16** (*Trace ideals, continued*). Continuing the ideas of Remark 8, we can extend the notion of compatible trace operators to the partial operator case. Here in Definition 12, we must interpret the equation $\text{tr}_V(\text{tr}_U(f)) = \text{tr}_U(\text{tr}_V(f'))$ in the sense that if one side is defined, so is the other, and the equality holds. Then in this case, it is easy to see that Proposition 13 holds, in a similar sense, that one trace is defined if the other is, and the equality holds. The proof is even somewhat simpler, as generalised yanking allows several steps to be combined in one. So, if we suppose that the partial trace operators are pairwise compatible, we have the key properties of a trace ideal, in the sense of [2], viz. that the domains are ideals, that the traces satisfy sliding, and that trace maps are closed under $\otimes$ and $\oplus$.
+---PAGE_BREAK---
+
+**Proposition 17.** If $U$ is complemented, then any trace on $U$ is compatible with the complement trace.
+
+**Proof.**
+
+**Corollary 18.** If $U$ is complemented, then any trace on $U$ is equal to the complement trace.
+
+**4. The geometry of interaction construction for MIX categories**
+
+In [20] a construction, originally given by Abramsky and Jagadeesan [3], is given
+of a tortile monoidal category $\mathrm{Int}$ from a traced monoidal category $\mathcal{V}$, together
+---PAGE_BREAK---
+
+with a full and faithful embedding $N : \mathcal{V} \to \mathrm{Int} \, \mathcal{V}$. The point is that this
+construction provides a complement to a traced object. In this section we propose to
+give the analogous construction in the present setting; again, our approach is somewhat
+more “local”, in that we shall start with a set U of compatibly traced objects in a MIX
+category X, and fully and faithfully embed the category X into a MIX category $X[U]$
+so that the image of U lies in the nucleus of $X[U]$.
+
+**Definition 19.** Suppose $\mathbf{X}$ is a *MIX* category, $\mathbf{U}$ a set of pairwise compatibly traced objects of $\mathbf{X}$. The category $\mathbf{X}[U]$ is defined as follows. An object is an ordered tuple $([U_1, U_2, \dots, U_n], A)$, where the (possibly empty) sequence $[U_1, U_2, \dots, U_n]$ consists of objects of $\mathbf{U}$, and $A$ is an arbitrary object of $\mathbf{X}$. A morphism
+
+$$f : ([U_1, U_2, \dots, U_n], A) \to ([V_1, V_2, \dots, V_m], B)$$
+
+of $\mathbf{X}[U]$ is a morphism
+
+$$f: V_1 \otimes V_2 \otimes \cdots \otimes V_m \otimes A \rightarrow U_1 \oplus U_2 \oplus \cdots \oplus U_n \oplus B$$
+
+of $\mathbf{X}$.
+
+$\mathbf{X}[U]$ is a category: the identity morphism for $([U_1, U_2, \dots, U_n], A)$ is given by the
+MIX isomorphism (appropriately extended to many types) $m: U_1 \otimes \dots \otimes U_n \otimes A \to$
+$U_1 \oplus \dots \oplus U_n \oplus A$. (As a circuit, this is a set of parallel wires each joined to its
+neighbour by a MIX-barbell.) Composition is defined using the trace operators. Given
+$f: ([U_1, U_2, \dots, U_n], A) \to ([V_1, V_2, \dots, V_m], B)$, viz. $f: V_1 \otimes V_2 \otimes \dots \otimes V_m \otimes A \to$
+$U_1 \oplus U_2 \oplus \dots \oplus U_n \oplus B$, and $g: ([V_1, V_2, \dots, V_m], B) \to ([W_1, W_2, \dots, W_k], C)$, viz.
+$g: W_1 \otimes W_2 \otimes \dots \otimes W_k \otimes B \to V_1 \oplus V_2 \oplus \dots \oplus V_m \oplus C$, the composite $f; g$ in $\mathbf{X}[U]$ is
+$\mathrm{tr}_{V_m}(\dots \mathrm{tr}_{V_1}(c; f; c; g; c)\dots).$
+
+Here, by $c$ we mean sufficient uses of symmetry to bring the objects into the correct
+position for the trace to be applied. This is perhaps clearer in circuit notation; for
+example, assuming $m=n=k=2$, the composition is the following circuit.
+
+In general, imagine the left-hand wires (*U*, *V*, *W*) represent “ribbons” of wires, with
+the *V* ribbon caught in a series of trace boxes. So in effect, composition amounts to
+“poly-composition” (i.e. “cutting” the intermediate rightmost variables) and then tracing
+on the intermediate *U*-variables to eliminate them.
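This style of composition can be made concrete in the traced category $\mathbf{Rel}$ of sets and relations, where tensor and par may both be taken to be disjoint union and every object carries a trace given by the familiar “execution formula”: direct paths, plus paths that loop through the feedback object any number of times. The sketch below is our illustration, not the paper's construction, and all names are our own; composition as defined above then amounts to relational composition followed by tracing out the intermediate objects.

```python
def compose(r, s):
    """Relational composition: (a, c) whenever a -r-> b and b -s-> c."""
    return {(a, c) for (a, b) in r for (b2, c) in s if b == b2}

def star(r, xs):
    """Reflexive-transitive closure of a relation r on the finite set xs."""
    closure = {(x, x) for x in xs}
    frontier = set(r)
    while frontier - closure:
        closure |= frontier
        frontier = compose(frontier, r)
    return closure

def trace_rel(r, a_set, b_set, x_set):
    """Trace over x_set of r, a relation from a_set | x_set to b_set | x_set:
    the execution formula  tr(r) = r_AB | r_AX ; (r_XX)* ; r_XB,
    i.e. direct paths plus paths looping through the feedback object."""
    r_ab = {(a, b) for (a, b) in r if a in a_set and b in b_set}
    r_ax = {(a, x) for (a, x) in r if a in a_set and x in x_set}
    r_xx = {(x, y) for (x, y) in r if x in x_set and y in x_set}
    r_xb = {(x, b) for (x, b) in r if x in x_set and b in b_set}
    return r_ab | compose(compose(r_ax, star(r_xx, x_set)), r_xb)
```

Tracing out two feedback objects in either order gives the same relation here, which is the compatibility (sliding) property that the definition of composition in $\mathbf{X}[U]$ relies on.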
+
+**Remark 20.** The key point to notice about this definition is the “contravariance” in the “first” variable $[U_1, U_2, \dots, U_n]$. Such a sequence ought to be thought of as the tensor (or par — these are isomorphic since these objects all lie in the core of $\mathbf{X}$) of the individual $U_i$, and the pair $([U_1, U_2, \dots, U_n], A)$ then ought to be thought of as the tensor (or par — again these are isomorphic) of the complement of $[U_1, U_2, \dots, U_n]$ and $A$. This is why the notion of morphism “flips” the $U$'s.
+
+This definition is essentially just that given by [20] for $\mathrm{Int}\,\mathcal{V}$; we use sequences of traced objects $U_i$ instead of single objects, just so that we can simulate using the tensor unit as such a $U$ (which is an option closed to us, since the units need not be traced) by using the empty sequence. We could in fact use sequences of length less than 2, if we supposed that the set $U$ was closed under tensor (and so also closed under par, since $U$ lies inside the core of $\mathbf{X}$). Although that is not necessary, in view of the next paragraph, we shall act as if that is in fact what we are doing.
+
+In order to simplify the notation, we shall adopt the following convention. We shall represent an arbitrary morphism $f : V_1 \otimes V_2 \otimes \cdots \otimes V_m \otimes A \to U_1 \oplus U_2 \oplus \cdots \oplus U_n \oplus B$ by a map of the form $f : V \otimes A \to U \oplus B$, intending by this that $V$ represents an arbitrary finite tensor of objects, and similarly $U$ an arbitrary finite par of objects. This convention serves to avoid notational clutter, and makes it easier to see what is going on in the proofs. Generally a rigorous proof may be done by induction based on the pattern of the “two input – two output” case.
+
+First, we must show that $\mathbf{X}[U]$ actually is a category, and indeed, a linearly distributive category.
+
+**Theorem 21.** $\mathbf{X}[U]$ is a linearly distributive category.
+
+**Proof.** To verify the categorical axioms for $\mathbf{X}[U]$ is straightforward. The unit equations follow from yanking: for example, for $f : V \otimes A \to U \oplus B$, $1; f = \mathrm{tr}(c; m; c; f; c) = \mathrm{tr}(c; c; f; m; c) = f; \mathrm{tr}(m; c) = f; 1 = f$. Associativity is an immediate consequence of compatibility; more precisely, using vanishing, the composite of three maps may be reduced to poly-composing the three maps and then doing a trace on the two intermediate $U$-variables simultaneously, by tracing on their tensor product. (We shall leave this as an easy exercise.)
+
+Next we must define the linearly distributive structure. It will be simpler to do this via a short detour. Given an object $U$ in $U$, we can define a functor $U : \mathbf{X} \to \mathbf{X}[U]$ which takes an object $A$ to the object $([U], A)$ (which we denote $(U, A)$, dropping the sequence brackets in the case of a singleton sequence). For a map $f : A \to B$, $U(f)$ is defined as $U \otimes A \xrightarrow{\mathrm{m}} U \oplus A \xrightarrow{\mathrm{1} \oplus f} U \oplus B$, or equivalently, $U \otimes A \xrightarrow{\mathrm{1} \otimes f} U \otimes B \xrightarrow{\mathrm{m}} U \oplus B$. That we are using the same notation for the object $U$ and the induced functor ought not to cause confusion, context making the intended meaning clear.
+
+**Lemma 22.** $U(-)$ as defined above is a functor.
+
+**Proof.** (of the lemma) Clearly $U$ preserves identity maps, by definition. To see that it also preserves composition, consider $A \xrightarrow{f} B \xrightarrow{g} C$. Then $U(f); U(g)$ is the circuit on the left below:
+
+and the circuit on the right is $U(f; g)$, so we are done. $\square$
+
+Clearly, this construction and lemma apply to any sequence $\vec{U}$ of objects of $U$. The case when $\vec{U} = [\ ]$ is the empty sequence is of particular importance: it gives us an (obviously full and faithful) embedding $J : \mathbf{X} \to \mathbf{X}[U]$, which is our version of the embedding $N$ of [20]. We remark here that in general the functor $\vec{U}$ is neither full nor faithful: for example, we shall see later that $(U, \top) \cong (U, \bot)$ (Lemma 23).
+
+Now we return to the matter of the linearly distributive structure on $X[U]$. The tensor and par of the objects ($[U_1, \dots, U_n], A$) and ($[U_{n+1}, \dots, U_{n+m}], B$) are given by
+
+$$([U_1, \dots, U_n], A) \otimes ([U_{n+1}, \dots, U_{n+m}], B) = ([U_1, \dots, U_{n+m}], A \otimes B),$$
+
+$$([U_1, \dots, U_n], A) \oplus ([U_{n+1}, \dots, U_{n+m}], B) = ([U_1, \dots, U_{n+m}], A \oplus B).$$
+
+Note that by merely appending the lists of $U$ objects in both cases, we are relying on the fact that these objects are all in the core. The units are given by the images under $J$ of the corresponding units in $X$: $\top = ([ ], \top)$ and $\bot = ([ ], \bot)$. The natural transformations for associativity, unit isomorphisms, and linear distributivities are all given by the images of the similarly named transformations in $X$ under the appropriate functors $\vec{U}$, where $\vec{U}$ (the sequence, not the functor) is formed by concatenating the appropriate $U$ sequences so that the domain and codomain work out right. An example will illustrate this: $\delta_L^l : (U,A) \otimes ((V,B) \oplus (W,C)) \to ((U,A) \otimes (V,B)) \oplus (W,C)$ is $[U,V,W](\delta_L^l)$. The point here is that since both tensor and par merely concatenate the $U$ sequences, these natural transformations will have the same $U$ sequences in domain and codomain, and so we can use this trick to extend them to $X[U]$. There is a minor complication with the symmetry maps: one must apply symmetry to the $U$ component first. Consider $c_\otimes : (U,A) \otimes (V,B) \to (V,B) \otimes (U,A)$ for example. Since $(U,A) \cong (U,\top) \otimes ([ ],A)$ (and in view of Lemma 23 below, $\cong (U,\bot) \otimes ([ ],A) \cong (U,\top) \oplus ([ ],A) \cong (U,\bot) \oplus ([ ],A)$), we can identify the symmetry $c_\otimes$ with the tensor (or par) of the symmetries $c_0 : [U,V] \to [V,U]$ in the free monoid generated by the objects of $U$, and $c_1 : A \otimes B \to B \otimes A$ in $X$. This means we can “decompose” any
+coherence diagram in $\mathbf{X}[U]$ into one in the free monoid generated by $U$ and one in $\mathbf{X}$. Since these will both commute, the diagram in $\mathbf{X}[U]$ will too. Diagrams not involving the symmetries are even simpler, although the same trick will work: since the linearly distributive structure is given functorially, the required coherence diagrams are just the images of similar diagrams in $\mathbf{X}$ and so automatically commute. So $\mathbf{X}[U]$ is indeed a linearly distributive category. This completes the proof of Theorem 21. □
+
+**Lemma 23.** $(U, \top) \cong (U, \bot)$.
+
+*Proof.* (of the lemma) The (inverse) maps are given as follows. $\alpha : U \otimes \top \to U \to U \oplus \bot$ represents $\alpha : (U, \top) \to (U, \bot)$, and $\beta : U \otimes \bot \xrightarrow{m} U \oplus \bot \xrightarrow{1 \oplus m} U \oplus \top$ represents $\beta : (U, \bot) \to (U, \top)$. In circuits, these are as follows.
+
+To see these are inverse, we first consider $\beta; \alpha$. In circuits:
+
+and the right-hand circuit is the identity. The reverse direction is similar, and will be left as an exercise. □
+
+**Proposition 24.** For each object $U$ of $\mathcal{U}$, $J(U)$ is complemented in $\mathbf{X}[U]$. Moreover, under $J$, the trace on $U$ becomes the complement trace on $J(U)$ in $\mathbf{X}[U]$.
+
+*Proof.* The complement of $U$, or rather $([ ], U)$, is given by $(U, \top)$, or equivalently, by the isomorphic $(U, \bot)$.
+
+To show that $(U, \top)$ or $(U, \bot)$ is the complement of $([], U)$, we just have to construct the appropriate $\tau$ and $\gamma$ and show these satisfy the appropriate coherence conditions (Section 1.1.3). $\tau: ([], \top) \to ([], U) \oplus (U, \bot) \cong (U, U)$ is represented by the unit isomorphism $U \otimes \top \to U$. $\gamma: (U, \top) \otimes ([], U) \cong (U, U) \to ([], \bot)$ is represented by the unit isomorphism $U \to \bot \oplus U$. We have to show the composite $(1 \otimes \tau); \delta_L; (\gamma \oplus 1)$ is (once the unit isomorphisms are “factored out”) the identity on $(U, \top)$. (There is a similar dual condition giving the identity on $([], U)$.) It is possible to eliminate the units from the calculation by grounding them; once we do that, this composite becomes the circuit on the left below, and we must reduce that to the identity on $U$. We use dotted arcs to represent MIX-barbells in order to save space.
+
+And this concludes the proof of Proposition 24. □
+
+**Remark 25.** It is a simple corollary that any $(U, V)$ in $\mathbf{X}[U]$, where $U, V$ are in $U$, is complemented, with complement $(V, U)$.
+
+The construction of $\mathbf{X}[U]$ is the universal solution to making a trace “canonical”
+(in the sense that complement traces are canonical). To state this precisely, we need
+some definitions. In the following, “preservation” is understood as being up to coherent
+isomorphisms.
+
+**Definition 26.** Tr is the 2-category whose objects are pairs $\langle X,U \rangle$, where X is a MIX category and U is a collection of pairwise compatibly traced objects of X. A morphism $\langle X,U \rangle \to \langle X',U' \rangle$ is a functor $F : X \to X'$ that preserves the linearly distributive category structure and for which $F(U)$ is an object of $U'$ for each object $U$ of $U$. Moreover, $F$ must preserve the trace on each $U$. A 2-cell is a natural transformation that preserves the tensor and par, in the sense that $\alpha_{A \otimes B} = \alpha_A \otimes \alpha_B$, and similarly for par.
+
+CTr is the full sub-2-category of Tr whose objects satisfy the property that the objects of U are all complemented, and whose traces are all complement traces. Again, the 2-cells are natural transformations preserving tensor and par.
+
+It is straightforward to show that these are indeed 2-categories, and that in the case of CTr, if a functor preserves linearly distributive category structure, it must also preserve complements and so complement traces, so that the morphisms of CTr need only preserve linearly distributive category structure and send objects of U to U'. We shall denote the inclusion 2-functor by $i: \mathrm{CTr} \to \mathrm{Tr}$; the construction of $\mathbf{X}[U]$ induces a 2-functor $\mathcal{I}: \mathrm{Tr} \to \mathrm{CTr}$. Then the following result is a direct analogue of the corresponding result in [20].
+
+**Proposition 27.** *$\mathcal{I}$ is left biadjoint to $i$, and $J$ induces the unit of this biadjunction.*
+
+**Proof.** (Sketch) By “biadjoint” we mean (as usual) that the usual identities hold up to coherent isomorphism. In the following we shall suppress mention of the inclusion $i$ when it is clear from the context.
+
+We shall sketch the equivalence of the appropriate hom categories. Note that for $\langle \mathbf{X}, U \rangle \in \mathrm{Tr}$, $\mathcal{I}\langle \mathbf{X}, U \rangle = \langle \mathbf{X}[U], J(U) \rangle$. Suppose given $F: \mathcal{I}\langle \mathbf{X}, U \rangle \to \langle \mathbf{Y}, V \rangle$ in $\mathrm{CTr}$; define $F^*: \langle \mathbf{X}, U \rangle \to \langle \mathbf{Y}, V \rangle$ in $\mathrm{Tr}$ by $F^* = J; F$. For the reverse association, suppose given $G: \langle \mathbf{X}, U \rangle \to \langle \mathbf{Y}, V \rangle$ in $\mathrm{Tr}$; then define $G_*: \mathcal{I}\langle \mathbf{X}, U \rangle \to \langle \mathbf{Y}, V \rangle$ in $\mathrm{CTr}$ by $G_*(([U_1, \dots, U_n], A)) = G(U_1)^\perp \otimes \dots \otimes G(U_n)^\perp \otimes G(A)$, where we denote the complement of an object $V$ by $V^\perp$. On morphisms $G_*$ is easily induced by the canonical construction of a morphism $W^\perp \otimes B \to V^\perp \otimes C$ from a morphism $V \otimes B \to W \otimes C$, for objects $V, W$ in the core. (This requires the preservation of linearly distributive structure by $G$.) It is a straightforward matter to verify that $F^*$ and $G_*$ satisfy the required conditions to be 1-cells in the appropriate 2-category, and that these associations are binatural.
+
+Next we must verify that $(F^*)_* \cong F$ and $(G_*)^* \cong G$. To check the former, we note that since $([U_1, \dots, U_n], A) \cong (U_1, \top) \otimes \dots \otimes (U_n, \top) \otimes ([ ], A)$, it suffices to verify that $(F^*)_*((U, \top)) \cong F((U, \top))$ and that $(F^*)_*(([ ], A)) \cong F(([ ], A))$. Since $F$ preserves linearly distributive structure, the first isomorphism essentially reduces to the fact that the complement of $([ ], U)$ is $(U, \top)$. The second isomorphism is trivial, and essentially reduces to the fact that the complement of $\bot$ is $\top$. This remark also suffices to show that $(G_*)^*(A) \cong G(A)$. This completes the sketch of the proof. $\square$
+
+**5. Fixpoint combinators**
+
+There is a well-understood connection between trace operators and fixpoint operators in the context of cartesian categories (see Hasegawa [15] for example). In this section we investigate this connection more locally and in a more general setting, namely by considering a fixpoint operator on a given object in a MIX category. There will be some additional structure we must impose, as we shall see below, and further, there are some subtle variations on the more familiar context. Note that we refer to “fixpoint operators”, but our definition does not postulate the usual fixpoint equations. Indeed, without some modest further conditions, these need not be satisfied, but we shall see that the present notion does capture the essence of fixpoint operators.
+
+**Definition 28.** Suppose $\mathbf{X}$ is a MIX category, $U$ an object of $\mathbf{X}$. We say $U$ has a *fixpoint combinator* if there is a morphism $e: U \to \bot$ in $\mathbf{X}$ and for each pair of objects $A, B$ of $\mathbf{X}$ a map $\text{fix}_U^{AB}: \text{Hom}(U \otimes A, U \oplus B) \to \text{Hom}(A, U \oplus B)$ making fix a strong combinator. These must satisfy the following conditions:
+
+Yanking: $\mathrm{fix}(c_{\otimes}; m_{UU}); (e \oplus 1); u_{\oplus}^{L} = 1_U$
+
+Fixed compatibility: $\mathrm{fix}(\mathrm{fix}(f)) = \mathrm{fix}(\mathrm{fix}(c_{\otimes} \otimes 1; f))$
+
+The first axiom, yanking, is just the yanking axiom for trace operators: under the association between traces and fixpoint operators, these yanking axioms correspond to each other. In the fixed compatibility axiom, $f$ is a morphism $U \otimes U \otimes A \to U \oplus B$, and we require the iterated fix of this to be equal to the similarly iterated fix of the morphism $U \otimes U \otimes A \xrightarrow{c_\otimes \otimes 1} U \otimes U \otimes A \xrightarrow{f} U \oplus B$. We have suppressed some instances of associativity here. This fixed compatibility axiom is a variant of compatibility for traces. The point of this axiom is that the “fixed” output is the same for both instances of the fixpoint combinator (this ought to be clearer in the circuit diagram below). Later we shall give a definition of (ordinary) compatibility for fixpoint combinators in which each fixpoint combinator will have a different “fixed” output; this will correspond to the compatibility condition as given before for trace operators.
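To spell out the types in the fixed compatibility axiom (a reading we supply, with associativity suppressed as in the text):

```latex
% For f : U \otimes U \otimes A \to U \oplus B, fixing the outer U
% (i.e. taking the "A" of the combinator to be U \otimes A) gives
\mathrm{fix}(f) : U \otimes A \to U \oplus B,
\qquad
\mathrm{fix}(\mathrm{fix}(f)) : A \to U \oplus B.
% The right-hand side first swaps the two U inputs,
c_\otimes \otimes 1 \mathbin{;} f : U \otimes U \otimes A \to U \oplus B,
% and the axiom asserts that the two iterated fixes agree:
\mathrm{fix}(\mathrm{fix}(f)) = \mathrm{fix}(\mathrm{fix}(c_\otimes \otimes 1 \mathbin{;} f)).
```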
+
+Note that by requiring fix to be a strong combinator, we are requiring that it satisfies the evident versions of tightening and superposing. We shall list the axioms below in circuit form. We use a box notation similar to that for trace operators for the fixpoint combinator, but indicate the output for the *U* that has been “fixed” with a small circle. (This may be regarded as the “principal port” of the fixpoint box.) Also, we denote *e* followed by the terminal node for ⊥ by a terminal *e* node (i.e. a node without output wires), as we did for γ earlier in the paper.
+
+Yanking:
+
+Fixed compatibility:
+
+Tightening:
+
+Superposing:
+
+Of course, there is an alternate version of superposing, corresponding to the alternate
+version for traces.
+
+**Definition 29.** Suppose $\mathbf{X}$ is a MIX category, and $U$, $V$ objects of $\mathbf{X}$ each with a fixpoint combinator, say $\mathrm{fix}_U$, $\mathrm{fix}_V$. These are called *compatible* if for any $f : U \otimes V \otimes A \to U \oplus V \oplus B$, $\mathrm{fix}_V(\mathrm{fix}_U(f); c_\oplus \oplus 1); c_\oplus \oplus 1 = \mathrm{fix}_U(\mathrm{fix}_V(f'); c_\oplus \oplus 1)$, where $f' = V \otimes U \otimes A \xrightarrow{c_\otimes \otimes 1} U \otimes V \otimes A \xrightarrow{f} U \oplus V \oplus B \xrightarrow{c_\oplus \oplus 1} V \oplus U \oplus B$. (We have suppressed some evident uses of associativity here.) In circuits, this is the following equation:
+
+**Remark 30.** We shall see in Corollary 34 below that from the compatibility condition given above, it follows that there is another compatibility condition that must hold between compatible fixpoint combinators $\text{fix}_1$, $\text{fix}_2$ defined on the same object $U$, viz. the two-combinator version of fixed compatibility $\text{fix}_1(\text{fix}_2(f)) = \text{fix}_2(\text{fix}_1(c \otimes 1; f))$, for a map $f: U \otimes U \otimes A \to U \oplus B$.
+
+Now we note that given a fixpoint combinator on $U$, there is an induced cocommutative comonoid (with respect to $\oplus$) structure on $U$. The comultiplication is given by the morphism $\Delta$ defined by $\mathrm{fix}(m_{UU}; c_\oplus): U \to U \oplus U$. Note that this is equal to $\mathrm{fix}(c_\otimes; m_{UU})$. (We shall show later that if $U$ has a fixpoint combinator, it also has a trace, and so is in the core. So if we wanted, we could define this comultiplication, and so the comonoid structure, with respect to the tensor $\otimes$.) We shall occasionally denote this as follows.
+
+**Lemma 31.** $\Delta$ is cocommutative.
+
+**Proof.** This is most simply shown by the following circuit rewrites (for the moment, ignore the small indices — their role will appear below in the proof of Corollary 34).
+
+The key step in the proof above is in the middle of the second line, where use is made
+of fixed compatibility. In addition, some rewiring and some playing around with the
+“twist” maps $c_{\otimes}$, $c_{\oplus}$ has been done silently, most importantly, moving $c_{\oplus}$ inside the
+boxes and then past the barbell to cancel a twist introduced by the fixed compatibility.
+Moving a twist past a barbell uses the identity $c;m=m;c$ which we have seen before.
+
+**Proposition 32.** (*U*,*e*,Δ) is a cocommutative comonoid (with respect to ⊕).
+
+**Proof.** We have seen that $\Delta$ is cocommutative; using this it is easy to show $e$ is a unit for $\Delta$, since yanking gives one side, and cocommutativity then allows us to derive the other side. So it only remains to show that $\Delta$ is coassociative:
+
+**Lemma 33.** Given any self-compatible fixpoint combinator on an object *U*, we have the following two equations:
+
+**Proof.** The following circuit rewrites show these equations; note that we have again represented the MIX-barbell with a dotted arc to save space.
+
+There are several corollaries that we can derive from these equations.
+
+**Corollary 34.** Given two compatible fixpoint combinators $fix_1$ and $fix_2$ on the same object $U$, the following variant of fixed compatibility holds: $fix_1(fix_2(f)) = fix_2(fix_1(c \otimes 1; f))$ for a map $f : U \otimes U \otimes A \to U \oplus B$. In circuits, this is the following.
+
+**Proof.** In the following we use the $\Delta$ notation as defined above. As we shall use Lemma 33 to “pull” this $\Delta$ outside a fixpoint box for $fix_2$, it will be necessary to use the $\Delta$ for $fix_2$, though it is simple to show that $\Delta$ is independent of such a choice. (This fact will also follow from the next corollary.)
+
+**Corollary 35.** Any two compatible fixpoint combinators on $U$ are equal.
+
+**Proof.** Notice the small indices in the proof that $\Delta$ is cocommutative (Lemma 31). If these identify two fixpoint combinators, then the step in the second line involving fixed compatibility is valid if the combinators are compatible, according to Corollary 34. Then we see that the two combinators must be equal. Note that this result will also follow from the corresponding result for compatible trace operators, once we have established the correspondence between fixpoint combinators and trace operators. $\square$
+
+**Theorem 36.** Suppose $X$ is a MIX category, and $U$ an object of $X$. The following are equivalent.
+
+(i) $U$ has a self-compatible trace operator and a cocommutative comonoid structure (with respect to $\oplus$).
+
+(ii) $U$ has a self-compatible fixpoint combinator.
+
+**Proof.** Given a fixpoint combinator $\mathrm{fix}$ on $U$, we define a trace operator $\mathrm{tr}^{\mathrm{fix}}$ by $\mathrm{tr}^{\mathrm{fix}}(f) = \mathrm{fix}(f); e$. Using circuits, this is
+
+Clearly, yanking for this trace operator is given by yanking for the fixpoint combinator, and tightening, superposing, and self-compatibility are similarly induced by the same properties of the fixpoint combinator. $U$ has a cocommutative comonoid structure (Proposition 32).
+
+For the reverse direction, if we have a self-compatible trace operator $\mathrm{tr}$ on $U$ with the stated properties, then we can define a fixpoint combinator by $\mathrm{fix}^t(f) = \mathrm{tr}(f; \Delta)$.
+
+In circuits:
+
+Again, yanking, tightening, and superposing are easy consequences of the corresponding equations for the trace operator. Self-compatibility likewise is straightforward, but to show fixed compatibility we need to use self-compatibility of the trace operator plus cocommutativity and coassociativity of $\Delta$ to get the wires arranged in the correct manner. This is a simple exercise similar to the many such calculations we have already seen.
+
+Finally, we want to show that these constructions are inverse. One direction is trivial: starting from a trace operator, the trace operator induced by the induced fixpoint combinator is clearly the original trace operator, since $e$ is a unit for $\Delta$:
+
+$$ \mathrm{tr}^{\mathrm{fix}^t}(f) = \mathrm{fix}^t(f); e = \mathrm{tr}(f; \Delta); e = \mathrm{tr}(f; \Delta; e) = \mathrm{tr}(f). $$
+
+For the reverse direction, we need an application of Lemma 33 to move the $\Delta$ (which is attached to the principal port of the fixpoint box) outside the fixpoint box so that it may be cancelled: $\mathrm{fix}^{\mathrm{tr}^{\mathrm{fix}}}(f) = \mathrm{tr}^{\mathrm{fix}}(f; \Delta) = \mathrm{fix}(f; \Delta); e = \mathrm{fix}(f); \Delta; e = \mathrm{fix}(f)$. $\square$
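The correspondence in Theorem 36 is easiest to see in the cartesian setting of Hasegawa [15], where $\Delta$ is the genuine diagonal and the trace of $f : X \times A \to X \times B$ feeds the $X$ output back and takes a least fixed point. The following sketch is ours, not the paper's: it models the bottom of a flat domain as `None` and assumes the feedback iteration stabilises.

```python
BOT = None  # stand-in for the bottom element of a flat domain (our convention)

def cartesian_trace(f):
    """tr(f) for f : X x A -> X x B in the cartesian setting: feed the X
    output back to the X input, iterating from bottom until it stabilises."""
    def traced(a):
        x = BOT
        while True:
            x_next, b = f(x, a)
            if x_next == x:
                return b
            x = x_next
    return traced

def fix_from_trace(f):
    """fix(f) = tr(f ; Delta): duplicate the X output of f, feed one copy
    back and return the other, as in the reverse direction of Theorem 36."""
    return cartesian_trace(lambda x, a: (f(x, a), f(x, a)))
```

For example, `fix_from_trace(lambda x, a: a if x is BOT else x // 2 + a)(5)` iterates through 5, 7, 8, 9 and returns 9, a solution of $x = \lfloor x/2 \rfloor + 5$; and tracing the swap map $(x, a) \mapsto (a, x)$ returns its argument unchanged, which is the yanking axiom.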
+
+Before leaving this section, we point out some properties that are characteristic of fixpoint combinators (and leave some others as exercises for the reader).
+
+To begin with, we need some notation. If we want to consider $\Delta$ as defined on the $\otimes$ structure, we must put an instance of $m^{-1}$ following $\Delta$, so as to change the implicit $\oplus$ into an $\otimes$, just as we did with the $\tau$ when we defined the complement trace (Proposition 9). To save space, we shall denote this use of $m^{-1}$ by an oval-shaped box — this is not the usual sort of component box, since its input wires are par’ed and its output wires are tensored. So this means that the oval on the left below is an abbreviation for the graph on the right.
+
+**Lemma 37 (The diagonal property).** *Suppose fix is a self-compatible fixpoint combinator on $U$, and $f: U \otimes U \otimes A \to U \oplus B$. Then $\mathrm{fix}(\Delta; f) = \mathrm{fix}(\mathrm{fix}(f))$. In circuits:*
+
+We could make this statement somewhat more elegant by defining a new $\Delta$ operator that contained both the “old” $\Delta$ and the $m^{-1}$ oval. As we have no further need of this “new” $\Delta$ we see no real need for this definition, however.
+
+**Proof.** In the circuits below, we again represent the MIX-barbell by a dotted wire.
+
+We end this section with a derivation of the fixpoint property, which we ought to expect a fixpoint combinator to satisfy. It turns out, however, that for this we need a further property of $\Delta$, namely that it be “natural” in the following sense. (Recall that since we are presenting this “locally”, for a fixed $U$, $\Delta$ is not a natural transformation; the property we now want would be a consequence of $\Delta$ being a natural transformation.) For simplicity, we begin with the “categorical” (one input, one output) case.
+
+**Definition 38.** Suppose $U$ has a fixpoint combinator; let $\Delta$ be the induced comultiplication map. $\Delta$ is said to be *natural* if for any $f : U \to U$, $\Delta; (f \oplus f) = f; \Delta : U \to U \oplus U$.
+
+**Proposition 39.** Suppose $U$ has a fixpoint combinator fix for which the induced comultiplication $\Delta$ is natural. Then for any $f : U \to U$, $\text{fix}(f); f = \text{fix}(f) : \top \to U$.
+
+**Proof.** It is simple to construct a circuit proof of this — it may be considered a special case of the next proposition in any event. The following equations ought to provide the necessary hint. $\mathrm{fix}(f); f = \mathrm{fix}(f; \Delta); (e \oplus f) = \mathrm{fix}(\Delta; (f \oplus f)); (e \oplus 1) = \mathrm{fix}(f; \Delta); (e \oplus 1) = \mathrm{fix}(f).$
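The hint in the proof of Proposition 39 may be annotated step by step (this is our reading of the justifications, not the paper's):

```latex
\begin{aligned}
\mathrm{fix}(f) ; f
 &= \mathrm{fix}(f ; \Delta) ; (e \oplus f)
    && e \text{ is a unit for } \Delta \\
 &= \mathrm{fix}(\Delta ; (f \oplus f)) ; (e \oplus 1)
    && \text{tightening, then sliding } f \text{ around the loop} \\
 &= \mathrm{fix}(f ; \Delta) ; (e \oplus 1)
    && \text{naturality: } \Delta ; (f \oplus f) = f ; \Delta \\
 &= \mathrm{fix}(f)
    && e \text{ is a unit for } \Delta
\end{aligned}
```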
+
+For the general (“polycategorical”) case, we need to generalise the notion of “natu-
+rality” for $\Delta$. First, we shall try to simplify the notation. As we did in our discussion of
+$X[U]$, and without loss of generality, we shall avoid notational clutter by using the “two
+input – two output” case as generic. So we shall consider a map $f: U \otimes A \to U \oplus B$
+as an arbitrary component (or polymorphism). We shall state the next definition in this
+“two input – two output” style, but with obvious modifications, this may be restated
+in full generality, which is our intended meaning.
+
+**Definition 40.** Suppose $U$ has a fixpoint combinator; let $\Delta$ be the induced comultiplication map. Suppose $f$ is an arbitrary morphism as above; we shall denote it as $f : U \otimes A \to U \oplus B$. Suppose $A$ (i.e. each input other than the initial $U$) has a “$\otimes$-duplication” map $\blacktriangle : A \to A \otimes A$, and $B$ (i.e. each output other than the initial $U$) has a “$\otimes$-duplication” map $\Delta : B \to B \otimes B$. $\Delta$ is said to be *polynatural* for $f$ if the following diagram commutes:
+
+where $\delta'$ is the evident linear distributivity (plus some symmetry and associativity). In circuits this is the following. We represent $\blacktriangle$ as a link (this functions like a tensor link).
+
+**Proposition 41.** Suppose $U$ has a fixpoint combinator, $\Delta$ the induced comultiplication, and $f: U \otimes A \to U \oplus B$ is a morphism for which $\Delta$ is polynatural. (This includes some structural assumptions on $A$ and $B$, as in Definition 40.) Then $\blacktriangle; \mathrm{fix}(f); f = \mathrm{fix}(f); \Delta : A \to U \oplus (B \otimes B)$. In circuits, this is the following:
+
+**Proof.**
+
+**Remark 42.** There is a question of what is the most appropriate level of generality for the fixpoint property. We have stated it in a minimalist form: the conditions necessary are assumed, and no more. However, the most suitable context for this result would seem to be something a bit stronger; perhaps a cartesian linearly distributive category. If we suppose also that the product is both tensor and par, then we are essentially in the context of [15].
+
+**Remark 43.** There is another property of fixpoint operators that one often encounters, namely the Bekič property. It is well-known that in the usual cartesian context, this property is true of any fixpoint operator that is induced by a trace operator (see [15] for instance). In the present context, it may be shown that any fixpoint combinator satisfies this property. (It was in fact by establishing the Bekič property for traces that we were led to the correspondence of Theorem 36.) However, as we have no need of this result here, and it involves some lengthy circuit calculations, we are happy to leave it as a pleasant exercise for the reader.
+
+## Acknowledgements
+
+We wish to thank the anonymous referee for several helpful remarks and suggestions. Diagrams in this paper were produced with the help of the TeXcad drawing program of G. Horn and the *diagram* macros of F. Borceux.
+
+## References
+
+[1] S. Abramsky, Retracing some paths in process algebra, in: U. Montanari, V. Sassone (Eds.), CONCUR '96: Concurrency Theory, Proceedings of Seventh International Conference, Pisa, Italy, August 1996, Lecture Notes in Computer Science, vol. 1119, Springer, Berlin, 1996, pp. 1–17.
+
+[2] S. Abramsky, R. Blute, P. Panangaden, Nuclear and trace ideals in tensored *-categories. J. Pure Appl. Algebra (Special issue in honour of Prof. Michael Barr) 143 (1999) 3–47.
+
+[3] S. Abramsky, R. Jagadeesan, New foundations for the geometry of interaction, Inform. and Comput. 111 (1994) 53–119.
+
+[4] M. Barr, *-Autonomous Categories, Lecture Notes in Mathematics, vol. 752, Springer, Berlin, 1979.
+
+[5] M. Barr, Duality of vector spaces, Cahiers Topologie Géom. Différentielle 17 (1976) 3–14.
+
+[6] R.F. Blute, J.R.B. Cockett, R.A.G. Seely, ! and ?: storage as tensorial strength, Math. Struct. Comput. Sci. 6 (1996) 313–351.
+
+[7] R.F. Blute, J.R.B. Cockett, R.A.G. Seely, T.H. Trimble, Natural deduction and coherence for weakly distributive categories, J. Pure Appl. Algebra 113 (1996) 229–296.
+
+[8] R.F. Blute, P.J. Scott, Linear Läuchli semantics, Ann. Pure Appl. Logic 77 (1996) 101–142.
+
+[9] J.R.B. Cockett, R.A.G. Seely, Weakly distributive categories, in: M.P. Fourman, P.T. Johnstone, A.M. Pitts (Eds.), Applications of Categories to Computer Science, London Mathematical Society Lecture Note Series, vol. 177, 1992, pp. 45–65.
+
+[10] J.R.B. Cockett, R.A.G. Seely, Weakly distributive categories, J. Pure Appl. Algebra 114 (1997) 133–173. (Updated version available on http://www.math.mcgill.ca/~rags.)
+
+[11] J.R.B. Cockett, R.A.G. Seely, Proof theory for full intuitionistic linear logic, bilinear logic, and MIX categories, Theory and Appl. Categories 3 (1997) 85–131.
+
+[12] J.R.B. Cockett, R.A.G. Seely, Linearly distributive functors, J. Pure Appl. Algebra (Special issue in honour of Prof. Michael Barr) 143 (1999) 155–203.
+
+[13] J.-Y. Girard, Geometry of interaction I: Interpretation of system F, in: R. Ferro et al. (Eds.), Logic Colloquium '88, Studies in Logic and the Foundations of Mathematics, vol. 127, North-Holland, Amsterdam, 1989, pp. 221–260.
+
+[14] J.-Y. Girard, Geometry of interaction II: Deadlock-free algorithms, in: P. Martin-Löf, G. Mints (Eds.), COLOG-88, Lecture Notes in Computer Science, vol. 417, Springer, Berlin, 1990, pp. 76–93.
+
+[15] M. Hasegawa, Models of sharing graphs, Ph.D. Thesis, Edinburgh, 1997.
+
+[16] D.A. Higgs, K.A. Rowe, Nuclearity in the category of complete semilattices, J. Pure Appl. Algebra 57 (1989) 67–78.
+
+[17] P.M. Hines, A one-object compact closed category used in the geometry of interaction, manuscript, 1997.
+
+[18] H. Hu, A. Joyal, Coherence completions of categories and their enriched softness, in: S. Brookes, M. Mislove (Eds.), Proceedings, Mathematical Foundations of Programming Semantics, 13th Annual Conference, Electronic Notes in Theoretical Computer Science, vol. 6, 1997 (URL: http://www.elsevier.nl/cas/tree/store/tcs/free/noncas/pc/volume6.htm).
+
+[19] A. Joyal, R. Street, The geometry of tensor calculus I, Adv. Math. 88 (1991) 55–112.
+
+[20] A. Joyal, R. Street, D. Verity, Traced monoidal categories, Math. Proc. Cambridge Philos. Soc. 119 (1996) 447–468.
+
+[21] S. Lefschetz, Algebraic Topology, American Mathematical Society Colloquium Publications, vol. 27, 1963.
+
+[22] A.K. Simpson, A characterization of least-fixed-point operator by dinaturality, Theoret. Comput. Sci. 118 (1993) 301–314.
+
+[23] P. Vesely, Categorical combinators for CHARITY, M.Sc. Thesis, University of Calgary, 1997.
\ No newline at end of file
diff --git a/samples/texts_merged/3955960.md b/samples/texts_merged/3955960.md
new file mode 100644
index 0000000000000000000000000000000000000000..0503a508599a6f01c40f40013920d2745f9092cf
--- /dev/null
+++ b/samples/texts_merged/3955960.md
@@ -0,0 +1,1937 @@
+
+---PAGE_BREAK---
+
+Generalized Bayesian Likelihood-Free Inference
+Using Scoring Rules Estimators
+
+Lorenzo Pacchiardi¹\*, Ritabrata Dutta²
+
+¹Department of Statistics, University of Oxford, UK
+
+²Department of Statistics, University of Warwick, UK
+
+21st May 2021
+
+Abstract
+
+We propose a framework for Bayesian Likelihood-Free Inference (LFI) based on Generalized Bayesian Inference using scoring rules (SRs). SRs are used to evaluate probabilistic models given an observation; a proper SR is minimised in expectation when the model corresponds to the data generating process for the observations. Using a strictly proper SR, for which the above minimum is unique, ensures posterior consistency of our method. Further, we prove finite sample posterior consistency and outlier robustness of our posterior for the Kernel and Energy Scores. As the likelihood function is intractable for LFI, we employ consistent estimators of SRs using model simulations in a pseudo-marginal MCMC; we show that the target of such a chain converges to the exact SR posterior as the number of simulations increases. Furthermore, we note that popular LFI techniques such as Bayesian Synthetic Likelihood (BSL) can be seen as special cases of our framework using a proper (but not strictly so) SR. We empirically validate our consistency and outlier robustness results and show how related approaches do not enjoy these properties. Practically, we use the Energy and Kernel Scores, but our general framework sets the stage for extensions with other scoring rules.
+
+# 1 Introduction
+
+This work is concerned with performing inference for intractable likelihood models, for which it is impossible or very expensive to evaluate the likelihood $p(x|\theta)$, but from which it is easy to obtain a simulation $x$ at a given parameter value $\theta$. Given some observation $y$ and a prior on the parameters $\pi(\theta)$, the standard Bayesian posterior is $\pi(\theta|y) \propto \pi(\theta)p(y|\theta)$. However, obtaining it explicitly or sampling from it with Markov Chain Monte Carlo (MCMC) techniques is impossible without access to the likelihood.
+
+Standard Likelihood-Free Inference (LFI) techniques make it possible to approximate the exact posterior distribution when the likelihood is unavailable, by relying on simulations from the model. Broadly, they can be split into two categories, differing in the kind of approximation used: methods in the first category [Price et al., 2018, An et al., 2020, Thomas et al., 2020] replace the intractable likelihood with a surrogate, misspecified one whose parameters can be easily estimated from simulations. The second category consists of Approximate Bayesian Computation (ABC) methods [Lintusaari et al., 2017, Bernton et al., 2019], which implicitly approximate the likelihood by weighting parameter values according to the mismatch between observed and simulated data.
+
+In this work, we build on the generalized Bayesian inference setup [Bissiri et al., 2016, Jewson et al., 2018, Knoblauch et al., 2019] and propose a set of LFI approaches which extend the first category of methods discussed above. In order to generalize Bayesian inference, Bissiri et al. [2016] considered a generic loss $\ell(y, \theta)$ between data $y$ and parameter $\theta$ and studied the following update for beliefs on parameter values:
+
+$$ \pi(\theta|y) \propto \pi(\theta) \exp(-w \cdot \ell(y, \theta)), \quad (1) $$
+
+\*Corresponding author: lorenzo.pacchiardi@stats.ox.ac.uk.
+---PAGE_BREAK---
+
+which is a way of learning about the parameter value which minimizes the expected loss over the data generating process¹, and respects Bayesian additivity (i.e., the posterior obtained by sequentially updating the belief with a set of observations does not depend on the order the observations are received). Here, $w$ is a scalar which controls speed of learning.
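+As a concrete illustration of the update in Eq. (1) and of Bayesian additivity, the following sketch evaluates the generalized posterior on a discrete parameter grid; the squared-error loss, flat prior, and grid are illustrative choices of ours, not prescribed by the text.
+
+```python
+import numpy as np
+
+def generalized_posterior(prior, losses, w=1.0):
+    """Generalized Bayes update of Eq. (1) on a parameter grid.
+
+    prior  : prior weights over the grid points (any positive array)
+    losses : shape (n_obs, n_grid), losses[i, j] = loss(y_i, theta_j)
+    """
+    log_post = np.log(prior) - w * losses.sum(axis=0)
+    log_post -= log_post.max()  # subtract the max for numerical stability
+    post = np.exp(log_post)
+    return post / post.sum()
+
+rng = np.random.default_rng(0)
+theta_grid = np.linspace(-3.0, 3.0, 201)
+prior = np.ones_like(theta_grid)                   # flat prior (toy choice)
+y = rng.normal(0.5, 1.0, size=20)                  # toy observations
+losses = (y[:, None] - theta_grid[None, :]) ** 2   # toy squared-error loss
+
+post = generalized_posterior(prior, losses)
+# Bayesian additivity: only the sum of per-observation losses enters the
+# update, so permuting the order of the observations changes nothing.
+post_permuted = generalized_posterior(prior, losses[rng.permutation(len(y))])
+assert np.allclose(post, post_permuted)
+```
+
+With the squared-error loss the posterior peaks near the sample mean of the observations, illustrating how the update concentrates on the minimizer of the cumulative loss.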
+
+The update in Eq. (1) can be defined even without specifying a model distribution $P_\theta$. In the LFI case, however, we have a model $P_\theta$ but cannot evaluate its likelihood $p(y|\theta)$. Therefore, we propose to take $\ell(y, \theta)$ to be a Scoring Rule (SR) $S(P_\theta, y)$, which assesses the performance of $P_\theta$ for an observation $y$, thus obtaining the *Scoring Rule posterior* $\pi_S$. If the chosen Scoring Rule can be easily estimated empirically from samples from $P_\theta$, we can apply this approach in an LFI setting without worrying about the missing likelihood $p(y|\theta)$ (unlike the standard posterior).
+
+We study theoretically the properties of $\pi_S$. First, when $S$ is strictly proper (meaning it is minimized in expectation over the observation if and only if $P_\theta$ corresponds to the data generating process), we show that the Scoring Rule posterior concentrates asymptotically on the exact parameter value in an M-closed scenario, and on the parameter value minimizing the expected Scoring Rule in an M-open setup (if the minimizer is unique). Further, with some specific SRs (Kernel and Energy Score), we establish a finite sample consistency property as well as outlier robustness, both of which hold without assuming correct model specification.
+
+Additionally, we discuss employing pseudo-marginal MCMC [Andrieu et al., 2009] to sample from an approximation of $\pi_S$ by generating simulations from $P_\theta$ at each step of the chain, and we show that this approximate target converges to the exact Scoring Rule posterior as the number of simulations increases. Next, we connect our approach with related works in LFI [Price et al., 2018, An et al., 2020, Thomas et al., 2020, Chérief-Abdellatif and Alquier, 2020]; specifically, we show that a proper (but not strictly so) Scoring Rule gives rise to the popular Bayesian Synthetic Likelihood (BSL, Price et al., 2018) approach.
+
+Finally, we assess the performance of our proposed method with two different Scoring Rules and compare with related approaches; specifically, we study posterior concentration with the g-and-k model (in both the well specified and misspecified cases) and outlier robustness on a normal location example, and showcase the performance of our method on other commonly used benchmark models.
+
+Scoring Rules have been previously used to generalize Bayesian inference in Jewson et al. [2018], Loaiza-Maya et al. [2019], Giummolè et al. [2019]. Specifically, Giummolè et al. [2019] considered an update similar to ours, but adjusted the parameter value so that the posterior has the same asymptotic covariance matrix as the frequentist minimum Scoring Rule estimator. Instead, Loaiza-Maya et al. [2019] considered a time-series setting in which the task is to learn about the parameter value which yields the best prediction, given the previous observations. Finally, Jewson et al. [2018] motivated Bayesian inference using general divergences (beyond the Kullback-Leibler one which underpins standard Bayesian inference) in an M-open setup, and discussed posteriors which employ estimators of the divergences from observed data; some of these estimators can be written using Scoring Rules. However, none of the above works explicitly considered the LFI setup.
+
+Parallel to our work and similar in spirit, Matsubara et al. [2021] investigated the generalized posterior obtained by using a Kernel Stein Discrepancy (first introduced in Chwialkowski et al., 2016, Liu et al., 2016); the authors showed how this posterior satisfies robustness and consistency properties, and is computationally convenient for doubly-intractable models (i.e., for which the likelihood is available, but only up to the normalizing constant). Although the focus of our work differs from that of Matsubara et al. [2021], we extend and build on some of their theoretical results to provide guarantees for our proposed method.
+
+The rest of this manuscript is organized as follows. First, in Section 2 we review Scoring Rules and show how they can be used to define a generalized Bayesian update; further, our theoretical results ensuring asymptotic normality, finite sample posterior consistency and outlier robustness of $\pi_S$ are presented. In Section 3 we discuss our proposed method for intractable-likelihood models, which employs empirical estimators of the Scoring Rules; specifically, we provide insights on the target of the pseudo-marginal MCMC and show connections to other works. Section 4 presents some experimental
+
+¹Indeed setting $\ell(y, \theta) = -\log p(y|\theta)$ and $w = 1$ recovers the standard Bayes update, which learns about the parameter value minimizing the KL divergence [Bissiri et al., 2016].
+---PAGE_BREAK---
+
+results. We conclude in Section 5, and suggest future directions for exploration.
+
+## 1.1 Notation
+
+We set here notation for the rest of our manuscript. We will denote respectively by $\mathcal{X} \subseteq \mathbb{R}^d$ and $\Theta \subseteq \mathbb{R}^p$ the data and parameter space. We will assume the observations are generated by a distribution $P_0$; we will instead use $P_\theta$ to denote the distribution of our model class, and $p(\cdot|\theta)$ its likelihood. Generic distributions will be indicated by $P$ or $Q$, while $S$ will denote a generic Scoring Rule. Other upper case letters will denote random variables while lower case ones will denote observed (fixed) values. We will denote by $Y$ or $y$ the observations (correspondingly random variables and realizations) and $X$ or $x$ the simulations; therefore, we will often write $Y \sim P_0$ and $X \sim P_\theta$. Finally, subscripts will denote sample index, while superscripts will denote vector components.
+
+# 2 Bayesian inference using Scoring Rules
+
+A Scoring Rule (SR) $S$ [Dawid and Musio, 2014, Gneiting and Raftery, 2007] is a function of a probability distribution over $\mathcal{X}$ and of an observation in $\mathcal{X}$. In the framework of probabilistic forecasting, $S(P,y)$ represents the penalty incurred when stating a forecast $P$ for an observation $y$.²
+
+Assuming that the observation $y$ is a realization of a random variable $Y$ with distribution $Q$, the expected Scoring Rule is defined as:
+
+$$S(P,Q) := E_{Y\sim Q}S(P,Y),$$
+
+where we overload notation in the second argument of $S$. The Scoring Rule $S$ is said to be proper relative to a set of distributions $\mathcal{P}(\mathcal{X})$ over $\mathcal{X}$ if
+
+$$S(Q,Q) \leq S(P,Q) \quad \forall P,Q \in \mathcal{P}(\mathcal{X}),$$
+
+i.e., if the expected Scoring Rule is minimized in $P$ when $P=Q$. Moreover, $S$ is strictly proper relative to $\mathcal{P}(\mathcal{X})$ if $P=Q$ is the unique minimum:
+
+$$S(Q,Q) < S(P,Q) \quad \forall P,Q \in \mathcal{P}(\mathcal{X}) \text{ s.t. } P \neq Q.$$
+
+This nomenclature comes from the probabilistic forecasting literature [Gneiting and Raftery, 2007], as a forecaster minimizing an expected strictly proper Scoring Rule would provide the exact distribution for $Y$.
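+For intuition, propriety can be verified in closed form for the log score $S(P, y) = -\log p(y)$ (defined in Section 2.2) on univariate Gaussians, where the expected score $S(P, Q)$ is the Gaussian cross-entropy. A small sketch, with parameter values of our own choosing:
+
+```python
+import numpy as np
+
+def expected_log_score(mu_p, s2_p, mu_q, s2_q):
+    """Closed-form S(P, Q) = E_{Y~Q}[-log p(Y)] for univariate Gaussians
+    P = N(mu_p, s2_p), Q = N(mu_q, s2_q): the Gaussian cross-entropy."""
+    return 0.5 * np.log(2 * np.pi * s2_p) + (s2_q + (mu_q - mu_p) ** 2) / (2 * s2_p)
+
+mu_q, s2_q = 0.0, 1.0
+s_qq = expected_log_score(mu_q, s2_q, mu_q, s2_q)  # = entropy of Q
+for mu_p, s2_p in [(1.0, 1.0), (0.0, 2.0), (-0.5, 0.5)]:
+    # strict propriety: any P != Q incurs a strictly larger expected score;
+    # the gap S(P, Q) - S(Q, Q) is the divergence D(P, Q), here KL(Q, P).
+    assert expected_log_score(mu_p, s2_p, mu_q, s2_q) > s_qq
+```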
+
+By following Dawid and Musio [2014], we define the divergence related to a proper Scoring Rule as: $D(P,Q) := S(P,Q) - S(Q,Q) \geq 0$. Notice that $P = Q \implies D(P,Q) = 0$, but there may be $P \neq Q$ such that $D(P,Q) = 0$. However, if $S$ is strictly proper, $D(P,Q) = 0 \iff P = Q$, which is the commonly used condition to define a statistical divergence (as for instance the common Kullback-Leibler, or KL, divergence). Therefore, each strictly proper Scoring Rule is connected to a statistical divergence between probability distributions³.
+
+Consider now a set of observations $\mathbf{y} = (y_1, y_2, \dots, y_n) \in \mathcal{X}^n$, which are generated by a distribution $P_0$. We introduce the SR posterior for $S$ by setting $\ell(y,\theta) = S(P_\theta,y)$ in the general Bayes update in Eq. (1):
+
+$$\pi_S(\theta|\mathbf{y}) \propto \pi(\theta) \exp \left\{ -w \sum_{i=1}^{n} S(P_\theta, y_i) \right\}, \quad (2)$$
+
+which, as mentioned in the introduction, is reminiscent of the posterior updates considered in [Jewson et al., 2018, Giummolè et al., 2019, Loaiza-Maya et al., 2019].
+
+²Notice that some authors [Gneiting and Raftery, 2007] use a different convention, in which $S(P,y)$ denotes a reward rather than a penalty. Everything we discuss here still holds with that convention up to a change of sign.
+
+³Conversely, if a statistical divergence $D(P,Q)$ can be written as: $D(P,Q) = E_{Y\sim Q}[S(P,Y)] - E_{Y\sim Q}[S(Q,Y)]$, then $S$ is a strictly proper SR.
+---PAGE_BREAK---
+
+**Remark 1 (Bayesian additivity).** The formulation of the posterior in Eq. (2) satisfies Bayesian additivity, meaning that the posterior obtained by sequentially updating the belief with a set of observations does not depend on the order in which the observations are received. Notice that some related approaches do not satisfy this property; for instance, the Hellinger posterior (among others) considered in Jewson et al. [2018] builds an estimate of the data generating density using all observations $y_i$ at once, so that additivity fails. Notice also that the ABC posterior [Lintusaari et al., 2017, Bernton et al., 2019] does not satisfy it.
+
+In Section 3, we discuss ways to estimate $S(P_\theta, y_i)$ in the case of intractable likelihood models. In the rest of this Section, we first provide an asymptotic normality result for the SR posterior (Sec. 2.1), then discuss some specific Scoring Rules (Sec. 2.2), and then present finite sample posterior consistency and outlier robustness results holding for the SRs we will use in the rest of this work (Secs 2.3 and 2.4).
+
+## 2.1 Asymptotic normality
+
+Across this section, we will consider univariate $\theta$ for simplicity, but extending our statement and proof to the multivariate case only involves notational difficulties. Moreover, our asymptotic normality result holds in probability, but almost sure convergence could be shown as well (see for instance Miller, 2019, Matsubara et al., 2021). However, the main aim of this work is the validation of the SR posterior for LFI rather than its asymptotic theory; therefore, we chose to provide the asymptotic normality result in the present form as it directly generalizes the Bernstein-von Mises theorem for standard Bayesian inference (we follow here the proof in Ghosh and Ramamoorthi, 2003, Section 1.4.2) and is thus insightful, even if it could be strengthened in the ways suggested above.⁴
+
+Let us first introduce the following shorthand notation:
+
+$$S_n(\theta, \mathbf{y}) = \sum_{i=1}^{n} S(P_\theta, y_i).$$
+
+We then state the assumptions needed for our result:
+
+**A1** For each value of $n$, the Scoring Rule minimizer
+
+$$\hat{\theta}^{(n)}(\mathbf{y}) = \arg\min_{\theta \in \Theta} \frac{1}{n} S_n(\theta, \mathbf{y})$$
+
+is unique and in the interior of $\Theta$, so that it can be found by solving:
+
+$$\frac{d}{d\theta} S_n(\theta, \mathbf{y}) |_{\theta=\hat{\theta}^{(n)}(\mathbf{y})} = 0,$$
+
+as long as $S$ is differentiable with respect to $\theta$. Moreover, we also assume the minimizer of the expected Scoring Rule:
+
+$$\theta^* = \arg\min_{\theta \in \Theta} S(P_\theta, P_0) = \arg\min_{\theta \in \Theta} D(P_\theta, P_0)$$
+
+to be unique; if the model is well specified, this implies $P_{\theta^*} = P_0$.
+
+**A2** $S(P_\theta, y)$ is thrice differentiable with respect to $\theta$ in a neighborhood $(\theta^* - \delta, \theta^* + \delta)$. If $d_\theta S$, $d_\theta^2 S$, and $d_\theta^3 S$ stand for the first, second, and third derivatives with respect to $\theta$, then $\mathbb{E}_{Y \sim P_0} d_\theta S(P_{\theta^*}, Y)$ and $I(\theta^*) = \mathbb{E}_{Y \sim P_0} d_\theta^2 S(P_{\theta^*}, Y)$ are both finite and
+
+$$\sup_{\theta \in (\theta^* - \delta, \theta^* + \delta)} |d_{\theta}^{3} S(P_{\theta}, y)| < M(y) \quad \text{and} \quad \mathbb{E}_{Y \sim P_0} M(Y) = C < \infty.$$
+
+⁴Besides almost sure convergence, another possible extension could be along the lines of Loaiza-Maya et al. [2019], which provides an asymptotic normality result for non-iid observations. We discuss more in details their result and the differences from our approach in Appendix A.1.
+---PAGE_BREAK---
+
+**A3** For any $\delta > 0$, there exists an $\epsilon > 0$ such that
+
+$$P_0 \left\{ \sup_{\theta:|\theta-\theta^*|>\delta} \frac{1}{n} (S_n(\theta^*, \mathbf{Y}) - S_n(\theta, \mathbf{Y})) \leq -\epsilon \right\} \to 1,$$
+
+which says that, with probability tending to 1 as $n \to \infty$, $\theta^*$ achieves a strictly smaller score than parameter values far away from it.
+
+**A4** The prior has a density $\pi(\theta)$ with respect to Lebesgue measure; $\pi(\theta)$ is continuous and positive at $\theta^*$.
+
+Notice that uniqueness of $\theta^*$ (in Assumption **A1**) is implied by $S$ being strictly proper and the
+model being well specified. If the model class is not well specified, a strictly proper $S$ does not guarantee
+the minimizer to be unique (as in fact there may be pathological cases where multiple minimizers exist),
+but it is likely that this condition is verified for most cases of practical interest. Additionally, notice
+that $I(\theta^*)$ generalizes the Fisher information, which is obtained for $S(P_\theta, y) = -\log p(y|\theta)$.
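+To spell out the last claim: for the log score, with a well specified model ($p(\cdot|\theta^*)$ equal to the density of $P_0$) and the usual regularity conditions permitting differentiation under the integral sign,
+
+$$I(\theta^*) = \mathbb{E}_{Y \sim P_0}\, d_\theta^2 \left(-\log p(Y|\theta)\right)\big|_{\theta=\theta^*} = \mathbb{E}_{Y \sim P_0}\left[\left(d_\theta \log p(Y|\theta)\big|_{\theta=\theta^*}\right)^2\right],$$
+
+which is the usual (univariate) Fisher information.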
+
+We also remark that our Assumption **A1** above and standard results for M-estimators ensure that $\hat{\theta}^{(n)}(\mathbf{Y}) \to \theta^*$ as $n \to \infty$ in $P_0$ probability, namely, $\hat{\theta}^{(n)}(\mathbf{Y})$ is a consistent finite sample estimator of $\theta^*$; see for instance Theorem 4.1 in Dawid et al. [2016].
+
+We now state our result, whose proof is reported in Appendix A.1.
+
+**Theorem 1.** Under Assumptions **A1** to **A4**, let $\mathbf{Y}_n = (Y_1, Y_2, \dots, Y_n)$. Denote by $\pi_S^*(s|\mathbf{Y}_n)$ the SR posterior density of $s = \sqrt{n}(\theta - \hat{\theta}^{(n)}(\mathbf{Y}_n))$. Then as $n \to \infty$, for any $w > 0$, in $P_0$ probability:
+
+$$\int_{\mathbb{R}} \left| \pi_S^*(s | \mathbf{Y}_n) - \frac{\sqrt{wI(\theta^*)}}{\sqrt{2\pi}} e^{-\frac{s^2 wI(\theta^*)}{2}} \right| ds \to 0.$$
+
+Let $\mathcal{N}(\mu, \sigma^2)$ denote now the normal distribution with mean $\mu$ and variance $\sigma^2$. By the above theorem, $s$ has, asymptotically in $P_0$ probability, a posterior distribution converging to $\mathcal{N}(0, \frac{1}{wI(\theta^*)})$. Therefore, in a similar fashion the posterior distribution for $\theta$ converges to $\mathcal{N}\left(\theta^*, \frac{1}{n \cdot w I(\theta^*)}\right)$ as $\hat{\theta}^{(n)}(\mathbf{Y}_n) \to \theta^*$ in $P_0$ probability. This implies that the SR posterior concentrates, for large $n$, on the parameter value minimizing the expected SR, if that minimizer is unique. If the model is well specified and $S$ is strictly proper, the SR posterior concentrates therefore on the true parameter value; this property is usually referred to as *posterior consistency*.
+
+**Remark 2 (Asymptotic fractional normality).** We remark that, in case multiple minimizers of $S(P_\theta, P_0)$ exist (in finite number), it may be possible to obtain an asymptotic fractional normality result, which ensures the SR posterior convergence to a mixture of normal distributions centered in the different minimizers; see for instance [Frazier et al., 2021a] for an example of such results. We leave this for future work.
+
+## 2.2 Some specific Scoring Rules
+
+We list here some Scoring Rules which are of interest for our work.
+
+**Log score** The log score is defined as:
+
+$$S_{\log}(P, y) = -\log p(y),$$
+
+where $p$ is the density for $P$. The corresponding divergence is the Kullback-Leibler (KL) divergence.
+Notice that this score only depends on the likelihood evaluated at $y$; it is therefore local. Using this
+in Eq. (2) yields the standard Bayesian posterior.
+---PAGE_BREAK---
+
+**Dawid-Sebastiani score** The Dawid-Sebastiani (DS) score is defined as:
+
+$$S_{\text{DS}}(P, y) = \ln |\Sigma_P| + (y - \mu_P)' \Sigma_P^{-1} (y - \mu_P),$$
+
+where $\mu_P$ and $\Sigma_P$ are the mean vector and covariance matrix of $P$. The DS score is equal to the negative log-likelihood of a multivariate normal distribution with mean $\mu_P$ and covariance matrix $\Sigma_P$, up to some constants. Therefore, it is equivalent to the log score when $P$ is a multivariate normal distribution.
+
+For a set of distributions $\mathcal{P}(\mathcal{X})$ which have well-defined second moments, this Scoring Rule is proper but not strictly so (as in fact several distributions in that class yield the same score, as long as their first two moments match, Gneiting and Raftery, 2007); it is strictly proper if distributions in $\mathcal{P}(\mathcal{X})$ are fully determined by their first two moments, as is the case for the multivariate normal distribution.
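+A plug-in estimate of the DS score from model simulations, estimating $\mu_P$ and $\Sigma_P$ empirically as in synthetic-likelihood-type methods, can be sketched as follows; the toy Gaussian simulations (and the absence of any covariance regularisation) are our own simplifications:
+
+```python
+import numpy as np
+
+def ds_score(simulations, y):
+    """Dawid-Sebastiani score with mean and covariance estimated from
+    the simulations (no covariance regularisation, for simplicity)."""
+    mu = simulations.mean(axis=0)
+    cov = np.cov(simulations, rowvar=False)
+    diff = y - mu
+    _, logdet = np.linalg.slogdet(cov)
+    return logdet + diff @ np.linalg.solve(cov, diff)
+
+rng = np.random.default_rng(1)
+sims = rng.normal(size=(500, 3))    # 500 toy model simulations in R^3
+y_near = sims.mean(axis=0)          # observation at the simulated mean
+y_far = y_near + 5.0                # observation far from the bulk
+assert ds_score(sims, y_near) < ds_score(sims, y_far)
+```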
+
+**Energy score** The energy score is given by:
+
+$$S_{\mathrm{E}}(P, y) = 2 \cdot \mathbb{E}||X - y||_2^{\beta} - \mathbb{E}||X - X'||_2^{\beta},$$
+
+where $X, X'$ are independent copies of a random variable distributed according to $P$ and $\beta \in (0, 2)$. This is a strictly proper Scoring Rule for the class $\mathcal{P}_{\beta}(\mathcal{X})$ of probability measures $P$ such that $\mathbb{E}_{X \sim P}||X||^{\beta} < \infty$ [Gneiting and Raftery, 2007]. The related divergence is the square of the energy distance⁵, which is a metric between probability distributions [Rizzo and Székely, 2016]:
+
+$$D_{\mathrm{E}}(P, Q) = 2 \cdot \mathbb{E}||X - Y||_2^{\beta} - \mathbb{E}||X - X'||_2^{\beta} - \mathbb{E}||Y - Y'||_2^{\beta},$$
+
+for $X, X' \sim P$ and $Y, Y' \sim Q$. Across the rest of this work, we will fix $\beta = 1$ unless otherwise specified.
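+The expectations in the energy score are easily replaced by Monte Carlo averages over $m$ model simulations, which is the kind of estimator used for LFI in Section 3; a sketch (our normalisation of the double sum by $m(m-1)$ is a standard unbiased choice and may differ in detail from the paper's estimator):
+
+```python
+import numpy as np
+
+def energy_score(simulations, y, beta=1.0):
+    """Monte Carlo estimate of S_E(P, y) from m simulations x_1..x_m:
+    (2/m) sum_j ||x_j - y||^beta - (1/(m(m-1))) sum_{j!=k} ||x_j - x_k||^beta."""
+    m = simulations.shape[0]
+    term1 = 2.0 * np.mean(np.linalg.norm(simulations - y, axis=1) ** beta)
+    pair = np.linalg.norm(simulations[:, None, :] - simulations[None, :, :], axis=2)
+    term2 = (pair ** beta).sum() / (m * (m - 1))  # diagonal terms are zero
+    return term1 - term2
+
+rng = np.random.default_rng(2)
+sims = rng.normal(size=(300, 2))    # toy simulations from P = N(0, I)
+# observations compatible with P receive a better (smaller) score
+assert energy_score(sims, np.zeros(2)) < energy_score(sims, np.full(2, 10.0))
+```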
+
+**Kernel score** Let $k(\cdot, \cdot)$ be a positive definite kernel. The kernel Scoring Rule for $k$ can be defined as [Gneiting and Raftery, 2007]:
+
+$$S_k(P, y) = \mathbb{E}[k(X, X')] - 2 \cdot \mathbb{E}[k(X, y)],$$
+
+where $X, X'$ are independent copies of a random variable distributed according to $P$. Notice that choosing $k(x, y) = -||x - y||_2^{\beta}$ leads to the energy score. The corresponding divergence is the squared Maximum Mean Discrepancy (MMD, Gretton et al., 2012) relative to the kernel $k$ (see Appendix C.1):
+
+$$D_k(P, Q) = \mathbb{E}[k(X, X')] + \mathbb{E}[k(Y, Y')] - 2 \cdot \mathbb{E}[k(X, Y)],$$
+
+for $X, X' \sim P$ and $Y, Y' \sim Q$.
+
+This Scoring Rule is proper for the class of probability distributions for which $\mathbb{E}[k(X, X')]$ is finite (by Theorem 4 in Gneiting and Raftery [2007]). Additionally, it is strictly proper under conditions which ensure that the MMD is a metric for probability distributions on $\mathcal{X}$ (see Appendix C.1). These conditions are satisfied, among others, by the Gaussian kernel (which we will use across our work):
+
+$$k(x, y) = \exp\left(-\frac{\|x - y\|_2^2}{2\gamma^2}\right),$$
+
+where $\gamma$ is a bandwidth tuned from data as described in Appendix D.1 in the Supplementary Information.
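+A Monte Carlo estimate of the Gaussian kernel score follows the same pattern; here we set the bandwidth $\gamma$ by the median heuristic on the simulations, a common default which is only a stand-in for the tuning procedure of Appendix D.1:
+
+```python
+import numpy as np
+
+def gaussian_kernel_score(simulations, y, gamma=None):
+    """Monte Carlo estimate of S_k(P, y) = E k(X, X') - 2 E k(X, y) with a
+    Gaussian kernel; gamma defaults to the median heuristic (an assumption,
+    not necessarily the tuning used in the paper)."""
+    m = simulations.shape[0]
+    sq = np.sum((simulations[:, None, :] - simulations[None, :, :]) ** 2, axis=2)
+    if gamma is None:
+        gamma = np.sqrt(np.median(sq[np.triu_indices(m, k=1)]))
+    k_xx = np.exp(-sq / (2 * gamma ** 2))
+    e_kxx = (k_xx.sum() - np.trace(k_xx)) / (m * (m - 1))  # unbiased, j != k
+    k_xy = np.exp(-np.sum((simulations - y) ** 2, axis=1) / (2 * gamma ** 2))
+    return e_kxx - 2.0 * k_xy.mean()
+
+rng = np.random.default_rng(3)
+sims = rng.normal(size=(300, 2))    # toy simulations from P = N(0, I)
+assert gaussian_kernel_score(sims, np.zeros(2)) < gaussian_kernel_score(sims, np.full(2, 10.0))
+```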
+
+⁵The probabilistic forecasting literature [Gneiting and Raftery, 2007] uses a different convention for the energy score and the subsequent kernel score, which amounts to multiplying our definition by 1/2. We follow here the convention used in the statistical inference literature [Rizzo and Székely, 2016, Chérief-Abdellatif and Alquier, 2020, Nguyen et al., 2020].
+---PAGE_BREAK---
+
+**Remark 3 (Likelihood principle).** The SR posterior with general Scoring Rules in Eq. (2) does not respect the likelihood principle, which states that the likelihood at the observation contains all information needed to update beliefs about the parameters of the model; this principle is instead satisfied by the standard Bayes posterior (i.e., by the SR posterior with the log score). In Jewson et al. [2018], the authors argue that the likelihood principle is sensible when the model is well specified, but not in an M-open setup. Additionally, for likelihood-free inference the likelihood is unavailable in the first place, so that for computational convenience it may be preferable to replace it with a Scoring Rule which does not respect the likelihood principle but for which an easy estimator is available.
+
+**Remark 4 (Non-invariance to change of data coordinates).** An immediate consequence of the violation of the likelihood principle is that the SR posterior with a general Scoring Rule is not invariant to a change of the coordinates used for representing the observation; this is a property common to loss-based frequentist estimators and to the generalized posterior obtained from them [Matsubara et al., 2021].
+
+Specifically, for a single observation $y$, let $\pi_S^Y$ denote the SR posterior conditioned on values of $Y$, while $\pi_S^Z$ denote instead the posterior conditioned on values of $Z = f(Y)$ for some one-to-one function $f$; in general, $\pi_S^Y(\theta|y) \neq \pi_S^Z(\theta|f(y))$. By denoting as $w_Z$ (respectively $w_Y$) and $P_\theta^Z$ (respectively $P_\theta^Y$) the weight and model distributions appearing in $\pi_S^Z$ (resp. $\pi_S^Y$), the equality would in fact require $w_Z S(P_\theta^Z, f(y)) = w_Y S(P_\theta^Y, y) + C \ \forall \theta, y$ for some choice of $w_Z, w_Y$ and for all transformations $f$, where $C$ is a constant in $\theta$. Notice that this is satisfied for the standard Bayesian posterior with $w_Z = w_Y = 1$.
+
+Asymptotically, when a strictly proper Scoring Rule is used and the model is well specified, the SR posterior concentrates on the parameter value corresponding to the data generating process independently of the data coordinates used (although the covariance structure may depend on them). If the model is misspecified, however, SR posteriors using different data coordinates will in general concentrate on different parameter values; this is consistent with the SR posterior learning about the parameter value which minimizes the considered expected Scoring Rule, which in turn depends on the chosen coordinate system. We explain these properties in more detail in Appendix B.
+
+## 2.3 Finite sample posterior consistency
+
+In this Section, we consider the Energy and Kernel Score posteriors and their corresponding divergences, and provide a theoretical guarantee bounding the probability that the posterior expectation of the divergence deviates from the minimum divergence achievable by the model class. This result holds for a finite number of samples $n$, and does not require the model to be well specified, nor the minimizer of the divergence to be unique. In the literature, this is alternatively referred to as a generalization result [Chérief-Abdellatif and Alquier, 2020] or as posterior consistency [Matsubara et al., 2021]; our result in this Section is inspired by the analogous one for the Kernel Stein Discrepancy posterior in the latter work.
+
+In order to obtain our result, we need to assume the following prior mass condition (following Matsubara et al. [2021]):
+
+A5 The prior has a density $\pi(\theta)$ (with respect to Lebesgue measure) which satisfies
+
+$$ \int_{B_n(\alpha_1)} \pi(\theta) d\theta \geq e^{-\alpha_2 \sqrt{n}} $$
+
+for some constants $\alpha_1, \alpha_2 > 0$, where we define the sets
+
+$$ B_n(\alpha_1) := \{ \theta \in \Theta : |D(P_\theta, P_0) - D(P_{\theta^*}, P_0)| \leq \alpha_1 / \sqrt{n} \}, $$
+
+where $D$ is the divergence associated to the Scoring Rule $S$ and $\theta^* \in \arg\min_{\theta \in \Theta} D(P_\theta, P_0)$.
+
+Assumption **A5** constrains the minimum amount of prior mass which needs to be given to $D$-balls with size decreasing as $n^{-1/2}$, and, albeit difficult to verify in practice, is in general a weak condition (similar assumptions are taken in Chérief-Abdellatif and Alquier [2020], Matsubara et al. [2021]; see the former for an example of explicit verification). It is however stronger than Assumption **A4**.
+---PAGE_BREAK---
+
+We now give our result, which considers the case of Kernel Score posterior with bounded kernel (as for instance the Laplace and Gaussian ones), or alternatively the case of Energy Score posterior with bounded $\mathcal{X}$.
+
+**Theorem 2.** Let $\mathbf{Y}_n = (Y_1, Y_2, \dots, Y_n)$, $Y_i \sim P_0$ for $i = 1, \dots, n$; the following two statements hold:
+
+1. Let $\pi_{S_k}(\cdot|\mathbf{Y}_n)$ be the Kernel Score posterior relative to a kernel $k$ such that $0 \le \sup_{x,y\in\mathcal{X}} k(x,y) \le \kappa < \infty$, and let $D_k$ be its associated divergence, for which $\theta^* \in \arg\min_{\theta\in\Theta} D_k(P_\theta, P_0)$; if the prior $\pi(\theta)$ satisfies Assumption **A5** for $D_k$, we have:
+
+$$P_0\left(\left|\int_{\Theta} D_k(P_{\theta}, P_0)\pi_{S_k}(\theta|\mathbf{Y}_n)d\theta - D_k(P_{\theta^*}, P_0)\right| \geq \epsilon\right) \leq 2 \exp\left\{-\frac{1}{2}\left(\frac{\sqrt{n}\epsilon - \alpha_1 - \alpha_2/w}{8\kappa}\right)^2\right\}.$$
+
+2. Let $\pi_{S_E^{(\beta)}}(\cdot|\mathbf{Y}_n)$ be the Energy Score posterior with power $\beta$, and assume the space $\mathcal{X}$ is bounded such that $\sup_{x,y\in\mathcal{X}} ||x-y||_2 \le B < \infty$; let also $D_E^{(\beta)}$ be its associated divergence, for which $\theta^* \in \arg\min_{\theta\in\Theta} D_E^{(\beta)}(P_\theta, P_0)$; if the prior $\pi(\theta)$ satisfies Assumption **A5** for $D_E^{(\beta)}$:
+
+$$P_0\left(\left|\int_{\Theta} D_{E}^{(\beta)}(P_{\theta}, P_0)\pi_{S_E^{(\beta)}}(\theta|\mathbf{Y}_n)d\theta - D_{E}^{(\beta)}(P_{\theta^*}, P_0)\right| \geq \epsilon\right) \leq 2 \exp\left\{-\frac{1}{2}\left(\frac{\sqrt{n}\epsilon - \alpha_1 - \alpha_2/w}{8B^{\beta}}\right)^2\right\}.$$
+
+In both cases, the probability is considered with respect to $\mathbf{Y}_n$.
+
+The proof of the Theorem, which is inspired by that of Theorem 1 in Matsubara et al. [2021], is given in Appendix A.2. As $\epsilon$ or $n$ increases, the bound on the probability tends to 0; this implies that, for $n \to \infty$, the SR posterior concentrates on those parameter values for which the model achieves minimum divergence from the data generating process $P_0$. Compared with Theorem 1, Theorem 2 provides guarantees on the infinite-sample behavior of the SR posterior even when $\theta^*$ is not unique; however, the result only holds for specific SRs and, unlike Theorem 1, it does not describe the form of the asymptotic distribution.
+
+## 2.4 Global bias-robustness
+
+Following similar arguments to those in Matsubara et al. [2021], we now establish a robustness property with respect to contaminations in the dataset which, similarly to the consistency result in Sec. 2.3, holds for the Kernel Score posterior with bounded kernel (as for instance the Laplace and Gaussian ones) and for the Energy Score posterior with bounded $\mathcal{X}$.
+
+In this subsection, let us denote by $\hat{P}_n = \frac{1}{n} \sum_{i=1}^n \delta_{y_i}$ the empirical distribution given by the observations $(y_1, \dots, y_n)$, considered as fixed; moreover, for a Scoring Rule $S$, let us define:
+
+$$L(\theta, \hat{P}_n) = \frac{1}{n} \sum_{i=1}^{n} S(P_{\theta}, y_i) = E_{\hat{P}_n} S(P_{\theta}, Y),$$
+
+so that the SR posterior in Eq. (2) can be written as $\pi_S(\theta|\mathbf{y}) = \pi_S(\theta|\hat{P}_n) \propto \pi(\theta) \exp\{-wnL(\theta, \hat{P}_n)\}$.
+
+We consider now the $\epsilon$-contamination distribution $\hat{P}_{n,\epsilon,z} = (1-\epsilon)\hat{P}_n + \epsilon\delta_z$; specifically, we perturb the observed empirical distribution with an outlier $z$ which has weight $\epsilon$. We now define the *posterior influence function* [Ghosh and Basu, 2016]:
+
+$$\text{PIF}\left(z, \theta, \hat{P}_n\right) = \frac{d}{d\epsilon} \pi_S\left(\theta | \hat{P}_{n,\epsilon,z}\right) \bigg|_{\epsilon=0},$$
+
+which measures how much the posterior in $\theta$ changes when an infinitesimal perturbation located at $z$ is added to the observations. The SR-posterior is said to be globally bias-robust if
+
+$$\sup_{\theta \in \Theta} \sup_{z \in \mathcal{X}} |\text{PIF}(z, \theta, \hat{P}_n)| < \infty.$$
+
+The following Theorem establishes global bias-robustness of the SR posterior with the Kernel and Energy SRs:
+---PAGE_BREAK---
+
+**Theorem 3.** Assume the prior $\pi(\theta)$ is bounded over $\Theta$; the following two statements hold:
+
+1. Let $\pi_{S_k}(\cdot|\mathbf{y})$ be the Kernel Score posterior relative to a kernel $k$ such that $0 \le \sup_{x,y\in\mathcal{X}} k(x,y) \le \kappa < \infty$; then, $\pi_{S_k}(\cdot|\mathbf{y})$ is globally bias-robust.
+
+2. Further, let $\pi_{S_E}(\cdot|\mathbf{y})$ be the Energy Score posterior, and assume the space $\mathcal{X}$ is bounded such that $\sup_{x,y\in\mathcal{X}} ||x-y||_2 \le B < \infty$; then, $\pi_{S_E}(\cdot|\mathbf{y})$ is globally bias-robust.
+
+The proof is given in Appendix A.3. We remark again that the Gaussian kernel (used later in this work) is bounded. Further, we highlight that our result does not hold for the Energy Score posterior when $\mathcal{X}$ is unbounded.
+
+# 3 Bayesian inference using Scoring Rules estimators
+
+Recall that, if $P_\theta$ is an intractable-likelihood model, we are unable to evaluate the likelihood $p(\cdot|\theta)$ but we can easily sample from $P_\theta$. Therefore, we can employ consistent estimators of the Scoring Rules discussed in Sec. 2.2 in order to obtain an approximation of the posterior. Specifically, we replace $S(P_\theta, y_i)$ with an estimator $\hat{S}(\{x_j^{(\theta)}\}_{j=1}^m, y_i)$, where $\{x_j^{(\theta)}\}_{j=1}^m$ is a set of samples $x_j^{(\theta)} \sim P_\theta$ and $\hat{S}$ is a function such that $\hat{S}(\{X_j^{(\theta)}\}_{j=1}^m, y_i) \to S(P_\theta, y_i)$ in probability as $m \to \infty$ (i.e., it estimates the Scoring Rules consistently).
+
+Therefore, we can employ an MCMC where, for each proposed value of $\theta$, we simulate $x_j^{(\theta)} \sim P_\theta$, $j = 1, \dots, m$, and we estimate the target in Eq. (2) with:
+
+$$ \pi(\theta) \exp \left\{ -w \sum_{i=1}^{n} \hat{S}(\{x_j^{(\theta)}\}_{j=1}^{m}, y_i) \right\}. \quad (3) $$
+
+This procedure is an instance of pseudo-marginal MCMC [Andrieu et al., 2009], with target:
+
+$$ \pi_{\hat{S}}^{(m)}(\theta|\mathbf{y}) \propto \pi(\theta)p_{\hat{S}}^{(m)}(\mathbf{y}|\theta), \quad (4) $$
+
+where:
+
+$$
+\begin{aligned}
+p_{\hat{S}}^{(m)}(\mathbf{y}|\theta) &= \mathbb{E} \left[ \exp \left\{ -w \sum_{i=1}^{n} \hat{S}(\{X_j^{(\theta)}\}_{j=1}^m, y_i) \right\} \right] \\
+&= \int \exp \left\{ -w \sum_{i=1}^{n} \hat{S}(\{x_j^{(\theta)}\}_{j=1}^m, y_i) \right\} \prod_{j=1}^{m} p(x_j^{(\theta)}|\theta) dx_1^{(\theta)} dx_2^{(\theta)} \cdots dx_m^{(\theta)}.
+\end{aligned}
+$$
+
+In fact, the quantity in Eq. (3) for a single draw $\{x_j^{(\theta)}\}_{j=1}^m$ is a non-negative and unbiased estimate of the quantity in Eq. (4); this argument is reminiscent of the one originally discussed in Drovandi et al. [2015] for inference with auxiliary likelihoods, of which the Bayesian Synthetic Likelihood approach by Price et al. [2018] is a specific instance.
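The scheme just described can be sketched in a few lines. The following is a minimal sketch (not the ABCpy implementation used in our experiments), where `score_est` (a consistent estimator of the chosen Scoring Rule) and `simulate` (a sampler for $P_\theta$) are assumed callables supplied by the user:

```python
import numpy as np

def sr_pseudo_marginal_mcmc(y_obs, simulate, score_est, log_prior,
                            theta0, w=1.0, m=500, n_steps=1000,
                            step=0.1, rng=None):
    """Random-walk Metropolis-Hastings on the estimated target of Eq. (3):
    pi(theta) * exp(-w * sum_i S_hat({x_j^(theta)}, y_i))."""
    rng = np.random.default_rng(rng)
    theta = np.atleast_1d(np.asarray(theta0, dtype=float))

    def log_target(th):
        lp = log_prior(th)
        if not np.isfinite(lp):
            return -np.inf
        x = simulate(th, m, rng)  # m fresh draws x_j ~ P_theta
        return lp - w * sum(score_est(x, y) for y in y_obs)

    logp, chain = log_target(theta), []
    for _ in range(n_steps):
        prop = theta + step * rng.standard_normal(theta.shape)
        logp_prop = log_target(prop)
        # Metropolis accept/reject on the *estimated* target
        if np.log(rng.uniform()) < logp_prop - logp:
            theta, logp = prop, logp_prop
        chain.append(theta.copy())
    return np.asarray(chain)
```

Note that the estimate of the target at the current state is retained between iterations and only refreshed when a proposal is accepted, as required for the pseudo-marginal argument to apply.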
+
+Similarly to Price et al. [2018], we remark that the target $\pi_{\hat{S}}^{(m)}(\theta|\mathbf{y})$ is not the same as the original $\pi_S(\theta|\mathbf{y})$ and depends on the number of simulations $m$; in fact, in general:
+
+$$ \mathbb{E} \left[ \exp \left\{ -w \sum_{i=1}^{n} \hat{S}(\{X_j^{(\theta)}\}_{j=1}^m, y_i) \right\} \right] \neq \exp \left\{ -w \sum_{i=1}^{n} S(P_\theta, y_i) \right\}; $$
+
+even if $\hat{S}(\{x_j^{(\theta)}\}_{j=1}^m, y)$ is an unbiased estimate of $S(P_\theta, y)$, the above is not an equality due to the presence of the exponential function. However, it is possible to show that, as $m \to \infty$, $\pi_{\hat{S}}^{(m)}$ converges to $\pi_S$, as stated by the following Theorem (adapted from Drovandi et al. [2015]; more complete statement and proof in Appendix A.4):
+
+**Theorem 4.** If $\hat{S}(\{X_j^{(\theta)}\}_{j=1}^m, y_i)$ converges in probability to $S(P_\theta, y_i)$ as $m \to \infty$ for all $y_i$, $i = 1, \dots, n$, then, under some minor technical assumptions (see Appendix A.4):
+
+$$ \lim_{m \to \infty} \pi_{\hat{S}}^{(m)} = \pi_S. $$
+---PAGE_BREAK---
+
+**Remark 5 (Asymptotic normality for pseudo-marginal MCMC target).** *Notice that our theoretical results in Sec. 2 refer to the “exact” SR posterior in Eq. (2), which is different from the target of the pseudo-marginal MCMC in Eq. (4) (albeit the latter converges to the former as the number of simulations $m \to \infty$). Similarly to what is done in Frazier et al. [2021c], it may be possible to show asymptotic normality of the latter when both $n \to \infty$ and $m \to \infty$ at the same time; we leave this investigation for future work.*
+
+## 3.1 Connection to related works
+
+**Bayesian Synthetic Likelihood (BSL).** Synthetic Likelihood (SL) [Wood, 2010] replaces the exact likelihood of a model by assuming that $P_\theta$ is a normal distribution⁶ for each value of $\theta$, with mean $\mu_\theta$ and covariance matrix $\Sigma_\theta$. In a Bayesian setting, Price et al. [2018] therefore defined the following posterior (Bayesian Synthetic Likelihood, BSL):
+
+$$ \pi_{\text{SL}}(\theta|y) \propto \pi(\theta)\mathcal{N}(y; \mu_{\theta}, \Sigma_{\theta}), $$
+
+where $\mathcal{N}(y; \mu_\theta, \Sigma_\theta)$ denotes the multivariate normal density with mean $\mu_\theta$ and variance matrix $\Sigma_\theta$ evaluated in $y$.
+
+As the exact values of $\mu_\theta$ and $\Sigma_\theta$ are unknown, BSL estimates those from simulations (following Wood, 2010):
+
+$$ \hat{\mu}_{\theta}^{(m)} = \frac{1}{m} \sum_{j=1}^{m} x_{j}^{(\theta)}, $$
+
+$$ \hat{\Sigma}_{\theta}^{(m)} = \frac{1}{m-1} \sum_{j=1}^{m} (x_j^{(\theta)} - \hat{\mu}_{\theta}^{(m)})(x_j^{(\theta)} - \hat{\mu}_{\theta}^{(m)})^T, $$
+
+where data $\{x_j^{(\theta)}\}_{j=1}^m$ are generated from $P_\theta$, and inserts this in a pseudo-marginal MCMC.
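Concretely, the resulting synthetic log-likelihood estimate for a single observation can be sketched as follows; this is a minimal version assuming the plain estimators above with no shrinkage, where `x_sims` stacks the $m$ simulations row-wise:

```python
import numpy as np

def bsl_loglik(y, x_sims):
    """Estimated synthetic log-likelihood log N(y; mu_hat, Sigma_hat),
    with mu_hat and Sigma_hat the empirical estimates from m simulations."""
    m, d = x_sims.shape
    mu_hat = x_sims.mean(axis=0)
    centred = x_sims - mu_hat
    sigma_hat = centred.T @ centred / (m - 1)   # 1/(m-1) normalisation
    diff = y - mu_hat
    _, logdet = np.linalg.slogdet(sigma_hat)
    quad = diff @ np.linalg.solve(sigma_hat, diff)
    return -0.5 * (d * np.log(2.0 * np.pi) + logdet + quad)
```

Up to an additive constant not depending on $\theta$, this equals $-\frac{1}{2}$ times an estimate of the DS Scoring Rule.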
+
+Commonly, BSL is regarded as an inferential approach with a misspecified likelihood; additionally, we remark that it is an instance of our SR posterior using $w = 1$ and the DS Scoring Rule (see Section 2.2). From this point of view, the above empirical estimators of the mean and covariance matrix are combined to obtain an estimator of the DS Scoring Rule. Of course, other ways to estimate the Gaussian density are possible [Ledoit and Wolf, 2004, Price et al., 2018, An et al., 2019], which correspond to alternative ways of estimating the DS Scoring Rule; we remark that the latter is proper but, in general, not strictly so.
+
+**Semi-parametric BSL (semiBSL).** In An et al. [2020], the authors relaxed the normality assumption of BSL by assuming that the dependency structure between the different components of the model $P_\theta$ can be described by a Gaussian copula, with no constraints on the marginal densities; this leads to larger robustness towards deviations of the statistics from normality.
+
+The semi-parametric BSL (semiBSL) likelihood for one single observation $y$ is thus:
+
+$$ p_{\text{semiBSL}}(y|\theta) = c_{R_{\theta}}(F_{\theta,1}(y^1), \dots, F_{\theta,d}(y^d)) \prod_{k=1}^{d} f_{\theta,k}(y^k), \quad (5) $$
+
+where $f_{\theta,k}$ and $F_{\theta,k}$ are respectively the marginal density and Cumulative Distribution Function (CDF) for the $k$-th component of the model; $c_R(u)$ denotes instead the Gaussian copula density for $u \in [0, 1]^d$ and correlation matrix $R \in [-1, 1]^{d \times d}$, whose explicit form is given in Appendix C.2.
+
+⁶In BSL (and in the subsequent semi-parametric version), the data is usually summarized with a statistics function before applying the normal density; this is usually done because certain choices of summary statistics are approximately normal, for instance sums of a very large number of variables by Central Limit Theorem arguments. Here, we keep the same notation as in the rest of our work, remarking that applying a statistics function to the data corresponds to redefining the data space $\mathcal{X}$ and the data generating process. Additionally, we note that works investigating the asymptotic properties of BSL [Frazier et al., 2021a,c] consider a different asymptotic regime from ours (Sec. 2.1); specifically, in those works an increasing number of observations (which do not need to be i.i.d.) is used to estimate one single set of summary statistics used in the definition of the posterior; in our approach, instead, each observation contributes to the posterior with a new term in a multiplicative fashion (this also holds for the non-i.i.d. setup considered in Loaiza-Maya et al. [2019]).
+---PAGE_BREAK---
+
+Similarly to BSL, An et al. [2020] considered a pseudo-marginal MCMC where, for each value of $\theta$, simulations from $P_{\theta}$ are used to obtain an estimate of the correlation matrix of the Gaussian copula $\hat{R}_{\theta}$ as well as Kernel Density Estimates (KDE) of the marginals $\hat{f}_{\theta,k}$ (from which estimates of the CDFs $\hat{F}_{\theta,k}$ are obtained by integration). The estimated density is therefore:
+
+$$c_{\hat{R}_{\theta}}(\hat{F}_{\theta,1}(y^1), \dots, \hat{F}_{\theta,d}(y^d)) \prod_{k=1}^{d} \hat{f}_{\theta,k}(y^k). \quad (6)$$
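This estimate can be sketched as follows for a single observation, using `scipy` Gaussian KDEs for the marginals and, as a simplification of the Gaussian rank correlation used by An et al. [2020], a plug-in copula correlation computed from the normal scores of the ranks:

```python
import numpy as np
from scipy.stats import gaussian_kde, norm

def semibsl_logdensity(y, x_sims):
    """Sketch of the estimated semiBSL log-density of Eq. (6) at y (d >= 2)."""
    m, d = x_sims.shape
    log_marg = 0.0
    u = np.empty(d)
    for k in range(d):
        kde = gaussian_kde(x_sims[:, k])
        log_marg += np.log(kde(y[k])[0])          # log f_hat_{theta,k}(y^k)
        h = np.sqrt(kde.covariance[0, 0])
        # CDF of the KDE = average of normal CDFs centred at the simulations
        u[k] = norm.cdf(y[k], loc=x_sims[:, k], scale=h).mean()
    # plug-in copula correlation from normal scores of the ranks
    ranks = x_sims.argsort(axis=0).argsort(axis=0) + 1.0
    R = np.corrcoef(norm.ppf(ranks / (m + 1)), rowvar=False)
    z = norm.ppf(np.clip(u, 1e-12, 1 - 1e-12))
    _, logdet = np.linalg.slogdet(R)
    # log Gaussian copula density: -1/2 log|R| - 1/2 z'(R^{-1} - I)z
    log_cop = -0.5 * logdet - 0.5 * (z @ np.linalg.solve(R, z) - z @ z)
    return log_cop + log_marg
```

For independent components the estimated $R$ is close to the identity, so the copula term vanishes and the estimate reduces to the product of the KDE marginals.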
+
+More details on semiBSL are given in Appendix C.2. Similarly as for BSL, we can connect semiBSL to our framework by rewriting Eq. (5) as:
+
+$$
+\begin{aligned}
+p_{\text{semiBSL}}(y|\theta) &= \exp \left\{ \sum_{k=1}^{d} \log f_{\theta,k}(y^k) + \log c_{R_\theta}(F_{\theta,1}(y^1), \dots, F_{\theta,d}(y^d)) \right\} \\
+&= \exp \left\{ -\sum_{k=1}^{d} S_{\log}\left(P_{\theta}^{k}, y^{k}\right) - S_{Gc}\left(C_{\theta}, (F_{\theta,1}(y^{1}), \dots, F_{\theta,d}(y^{d}))\right) \right\},
+\end{aligned}
+$$
+
+where $P_{\theta}^k$ is the distribution associated to the model for the $k$-th component, $C_{\theta}$ is the copula associated to $P_{\theta}$ and $S_{Gc}(C, u)$ is the Scoring Rule associated to the Gaussian copula, which evaluates the copula distribution $C$ at an observation $u$; we show in Appendix C.2 that this is a proper, but not strictly proper, Scoring Rule for copula random variables. The approximate likelihood in Eq. (6) can therefore be seen as the estimate obtained when the marginal log Scoring Rules are estimated with KDEs and $S_{Gc}$ is estimated with the plug-in estimator $\hat{R}_{\theta}$.
+
+**MMD-Bayes.** In Chérief-Abdellatif and Alquier [2020], the following posterior, termed MMD-Bayes, is considered:
+
+$$\pi_{\text{MMD}}(\theta|\mathbf{y}) \propto \pi(\theta) \exp \left\{-\beta \cdot D_k \left(P_\theta, \hat{P}_n\right)\right\},$$
+
+where $\beta > 0$ and $D_k(P_\theta, \hat{P}_n)$ denotes the squared MMD (see Appendix C.1) between the empirical measure of the observations $\hat{P}_n = \frac{1}{n} \sum_{i=1}^n \delta_{y_i}$ and the model distribution $P_\theta$.
+
+This posterior is equivalent to our SR posterior $\pi_{S_k}$ using the Kernel Scoring Rule $S_k$ (see Appendix C.1.1 for a proof). However, Chérief-Abdellatif and Alquier [2020] considered a variational approximation to sample from $\pi_{\text{MMD}}$ in the LFI setting, while we instead use a pseudo-marginal MCMC approach for our method.
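The equivalence can also be verified numerically: the average of the Kernel Score estimates over the observations differs from the estimated squared MMD only by the term $\frac{1}{n^2}\sum_{i,i'} k(y_i, y_{i'})$, which does not depend on $\theta$. A small sketch, assuming a Gaussian kernel and shared model samples for both quantities:

```python
import numpy as np

def gauss_k(a, b, gamma=1.0):
    """Gaussian kernel matrix between 1-d sample vectors a and b."""
    return np.exp(-(a[:, None] - b[None, :]) ** 2 / (2.0 * gamma ** 2))

def mean_kernel_score(x, y):
    """(1/n) sum_i S_hat_k({x_j}, y_i), using the U-statistic over x."""
    m = len(x)
    kxx = gauss_k(x, x)
    within = (kxx.sum() - np.trace(kxx)) / (m * (m - 1))
    return within - 2.0 * gauss_k(x, y).mean(axis=0).mean()

def sq_mmd(x, y):
    """Estimated squared MMD between P_theta (samples x) and P_hat_n (samples y)."""
    m = len(x)
    kxx = gauss_k(x, x)
    within = (kxx.sum() - np.trace(kxx)) / (m * (m - 1))
    return within - 2.0 * gauss_k(x, y).mean() + gauss_k(y, y).mean()

rng = np.random.default_rng(0)
y = rng.standard_normal(50)
x1, x2 = rng.normal(0.0, 1.0, 200), rng.normal(1.0, 1.0, 200)
# the difference is the same theta-independent constant for both "theta" values
c1 = sq_mmd(x1, y) - mean_kernel_score(x1, y)
c2 = sq_mmd(x2, y) - mean_kernel_score(x2, y)
```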
+
+**Ratio Estimation.** The Ratio Estimation (RE) approach [Thomas et al., 2020] exploits the fact that having access to the ratio $r(y; \theta) = \frac{p(y|\theta)}{p(y)}$ is enough to perform Bayesian inference, as $\pi(\theta|y) = \pi(\theta) \cdot r(y; \theta)$. An approximate posterior can therefore be obtained by estimating the log ratio with some function $\hat{h}^{\theta}(y) \approx \log r(y; \theta)$ and considering $\pi_{\text{re}}(\theta|y) \propto \pi(\theta) \exp(\hat{h}^{\theta}(y))$.
+
+In practice, for every fixed $\theta$, Thomas et al. [2020] suggested estimating the log ratio with logistic regression. Given a set of $m$ samples from $P_{\theta}$, $\{x_j^{(\theta)}\}_{j=1}^m$, and $m$ reference samples from the marginal data distribution⁷, $\{x_j^{(r)}\}_{j=1}^m$, logistic regression solves the following optimization problem⁸:
+
+$$
+\begin{gathered}
+\hat{h}_m^\theta = \arg \min_h J_m^\theta(h), \\
+J_m^\theta(h) = \frac{1}{2m} \left\{ \sum_{j=1}^m \log [1 + \exp(-h(x_j^{(\theta)}))] + \sum_{j=1}^m \log [1 + \exp(h(x_j^{(r)}))] \right\}. \quad (7)
+\end{gathered}
+$$
+
+In the infinite data limit, the minimizer $h_{*}^{\theta}(y)$ of $J_m^{\theta}$ is equal to $\log r(y; \theta)$ (as discussed in Appendix C.3). For finite data, however, $\hat{h}_m^{\theta} = \arg \min_h J_m^{\theta}(h)$ is only an approximation of the log ratio.
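For intuition, here is a minimal version of this construction with the linear function class $h(y) = a + by$, fitted by gradient descent on $J_m^\theta$ (equivalently, logistic regression with label 1 for model samples and 0 for reference samples); when the true log ratio is itself linear, as for two unit-variance Gaussians, the fit recovers it for large $m$:

```python
import numpy as np

def fit_log_ratio(x_model, x_ref, n_iter=3000, lr=0.5):
    """Minimise the logistic loss J_m over h(y) = a + b*y."""
    X = np.concatenate([x_model, x_ref])
    t = np.concatenate([np.ones(len(x_model)), np.zeros(len(x_ref))])
    a = b = 0.0
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-(a + b * X)))  # sigmoid(h(X))
        g = p - t                                # d(loss)/dh at each point
        a -= lr * g.mean()
        b -= lr * (g * X).mean()
    return a, b
```

With `x_model` $\sim \mathcal{N}(1,1)$ and `x_ref` $\sim \mathcal{N}(0,1)$, the true log ratio is $y - 1/2$, so the fitted $(a, b)$ should approach $(-1/2, 1)$.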
+
+⁷Which are obtained by drawing $\theta_j \sim p(\theta)$, $x_j \sim p(\cdot|\theta_j)$, and discarding $\theta_j$.
+
+⁸In general the number of reference samples and samples from the model can be different, see Appendix C.3; we make this choice here for the sake of simplicity.
+---PAGE_BREAK---
+
+We can therefore write this approach under our SR posterior framework by fixing $w = 1$ and defining:
+
+$$ \hat{S}_{\text{RE}}(\{x_j^{(\theta)}\}_{j=1}^m, \{x_j^{(r)}\}_{j=1}^m, y) = -\hat{h}_m^{\theta}(y), $$
+
+which, differently from the other SR estimators considered previously, depends on the reference samples besides the simulations from $P_{\theta}$. By the above, $\hat{S}_{\text{RE}}$ converges in probability to the log-score (up to a term constant in $\theta$) as $m \to \infty$.
+
+The above characterization relies on using the set of all functions in the optimization problem in Eq. (7); in practice, the minimization is restricted to a set of functions $\mathcal{H}$ (for instance linear combinations of predictors). In this case, the infinite data limit minimizer $h_{\mathcal{H}\star}^{\theta}(y)$ does not in general correspond to $\log r(y; \theta)$ (see Appendix C.3), but to the best possible approximation in $\mathcal{H}$ in some sense. Therefore, Ratio Estimation with a restricted set of functions $\mathcal{H}$ cannot be written exactly under our SR posterior framework. However, very flexible function classes (as for instance neural networks) can produce reasonable approximations to the log score when $m \to \infty$.
+
+## 3.2 Choice of w
+
+In the generalized Bayesian posterior in Eq. (1) and its Scoring Rules version in Eq. (2), $w$ represents the amount of information, relative to the prior, that one observation brings to the decision maker. In the standard Bayesian update, $w$ is fixed to 1, which corresponds to the natural scaling between prior and likelihood implied by Bayes' theorem being the optimal way to process information in a well specified scenario [Zellner, 1988]. When the model is misspecified, some works have argued for the use of $w < 1$ in the standard Bayes update (see for instance Holmes and Walker, 2017).
+
+In the setup of generalized Bayesian inference, several works have suggested ways to tune this parameter (see Section 3 in Bissiri et al. [2016] for a selection of possibilities). Specifically for Scoring Rules, Loaiza-Maya et al. [2019] set $w$ so that the rate of update of their posterior is the same as that of a posterior with a misspecified likelihood, while Giummolè et al. [2019] kept the scale of the Scoring Rules fixed and instead investigated how to transform the parameter value so as to match the asymptotic variances of the SR posterior and of the frequentist SR estimator.
+
+Here, we propose a heuristic that can be used in the LFI setup before the posterior $\pi_S$ for a given $w$ is obtained; specifically, notice that, as remarked by Bissiri et al. [2016]:
+
+$$ \log \underbrace{\left\{ \frac{\pi_S(\theta|y)}{\pi_S(\theta'|y)} / \frac{\pi(\theta)}{\pi(\theta')} \right\}}_{\text{BF}(\theta,\theta';y)} = -w \{S(P_{\theta}, y) - S(P_{\theta'}, y)\} \iff w = - \frac{\log \text{BF}(\theta, \theta'; y)}{S(P_{\theta}, y) - S(P_{\theta'}, y)}, $$
+
+where BF($\theta, \theta'$; $y$) denotes the Bayes Factor of $\theta$ with respect to $\theta'$ for observation $y$. The practitioner can therefore choose the value BF($\theta, \theta'$; $y$) for a single choice of $\theta, \theta'$, $y$, thus determining $w$.
+
+We consider now the case in which the user has access to another posterior, say $\tilde{\pi}(\theta|y)$, which is obtained by means of a (in general misspecified) likelihood $\tilde{p}(y|\theta)$, with corresponding Bayes Factor $\widetilde{\text{BF}}$; if we choose:
+
+$$ w = - \frac{\log \widetilde{\text{BF}}(\theta, \theta'; y)}{S(P_{\theta}, y) - S(P_{\theta'}, y)}, $$
+
+for some $\theta, \theta', y$, we would ensure $\widetilde{\text{BF}}(\theta, \theta'; y) = \text{BF}(\theta, \theta'; y)$. In practice, we have no prior reason to prefer a specific choice of $(\theta, \theta')$; therefore, we set $w$ to be the median of $-\frac{\log \widetilde{\text{BF}}(\theta, \theta'; y)}{S(P_{\theta}, y) - S(P_{\theta'}, y)}$ over values of $\theta, \theta'$ sampled from the prior. Using the median (instead of the mean) results in a value of $w$ which is robust to outliers in the computation of the above ratio for some choices of $\theta, \theta'$. Additionally, if $P_{\theta}$ is an intractable-likelihood model, we estimate $w$ by replacing $S(P_{\theta}, y)$ with $\hat{S}(\{x_j^{(\theta)}\}_{j=1}^m, y)$, generating data $\{x_j^{(\theta)}\}_{j=1}^m$ for each considered value of $\theta$.
+
+In our experiments, we will set $w$ for the SR posterior considering as a reference likelihood $\tilde{p}$ the one obtained with BSL.
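The heuristic itself takes a few lines. A sketch, where `loglik_ref` (the reference log-likelihood, BSL in our experiments) and `score` (the possibly estimated Scoring Rule) are assumed callables:

```python
import numpy as np

def choose_w(prior_draws, loglik_ref, score, y):
    """Median-of-ratios heuristic of Sec. 3.2: for pairs (theta, theta')
    drawn from the prior, compute -log BF~(theta, theta'; y) / (S - S')."""
    ratios = []
    for theta, theta2 in zip(prior_draws[::2], prior_draws[1::2]):
        # the prior ratio cancels in the Bayes Factor of the reference posterior
        log_bf = loglik_ref(theta, y) - loglik_ref(theta2, y)
        delta_s = score(theta, y) - score(theta2, y)
        if delta_s != 0.0:
            ratios.append(-log_bf / delta_s)
    return float(np.median(ratios))
```

As a sanity check, if the reference log-likelihood is exactly $-c$ times the score for some $c > 0$, the heuristic returns $w = c$.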
+
+**Remark 6 (Synthetic Likelihood and model misspecification).** BSL and semiBSL correspond to standard Bayesian inference with a misspecified likelihood, therefore setting $w = 1$. As mentioned
+---PAGE_BREAK---
+
+above, some works argued for $w < 1$ in the case of misspecified likelihoods [Holmes and Walker, 2017]; this choice attributes more importance to prior information, but still allows information to accumulate through the likelihood, if the decision maker believes that some aspects of the misspecified likelihood are representative of the data generating process. Holmes and Walker [2017] designed a strategy that sets $w$ by matching the expected information gain in two experiments, one involving the exact data-generating process and the other the best model approximation. That strategy recovers $w = 1$ when the model is well specified, and sets $w < 1$ otherwise. It would be of interest to understand whether applying similar strategies would improve the performance of BSL and semiBSL for misspecified models. We leave this for future exploration.
+
+**Remark 7 (Posterior invariance with data rescaling).** Following on from Remark 4, we highlight here that the Kernel and Energy Score posteriors are invariant to an affine transformation of the data ($Z = a \cdot Y + b$ for $a, b \in \mathbb{R}$), albeit non-invariant to a generic transformation of the data coordinates. Specifically, the Kernel Score posterior with Gaussian kernel is invariant to such transformations with $w_Z = w_Y$, provided the kernel bandwidth is scaled too, while the Energy Score posterior is invariant when $w_Z \cdot a^\beta = w_Y$, which is ensured by our heuristic for choosing the weight.
+
+### 3.3 Estimators for the Energy and Kernel Scores
+
+In this manuscript, we propose to perform inference with the SR posterior using the Energy Score and the Kernel Score with a Gaussian kernel. As both these Scoring Rules are defined through an expectation (see Section 2.2), the following U-statistics are immediately obtained:
+
+$$ \hat{S}_E(\{x_j\}_{j=1}^m, y) = \frac{2}{m} \sum_{j=1}^m ||x_j - y||_2^\beta - \frac{1}{m(m-1)} \sum_{\substack{j,k=1 \\ k \neq j}}^m ||x_j - x_k||_2^\beta, $$
+
+$$ \hat{S}_k(\{x_j\}_{j=1}^m, y) = \frac{1}{m(m-1)} \sum_{\substack{j,k=1 \\ k \neq j}}^m k(x_j, x_k) - \frac{2}{m} \sum_{j=1}^m k(x_j, y). $$
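In code, the two U-statistics read as follows; a sketch for univariate data, with an assumed Gaussian-kernel bandwidth $\gamma$:

```python
import numpy as np

def energy_score_hat(x, y, beta=1.0):
    """U-statistic estimate of the Energy Score from m samples x ~ P_theta."""
    m = len(x)
    cross = 2.0 / m * np.sum(np.abs(x - y) ** beta)
    d = np.abs(x[:, None] - x[None, :]) ** beta
    within = (d.sum() - np.trace(d)) / (m * (m - 1))  # excludes j = k terms
    return cross - within

def kernel_score_hat(x, y, gamma=1.0):
    """U-statistic estimate of the Kernel Score with Gaussian kernel."""
    m = len(x)
    k = lambda a, b: np.exp(-(a - b) ** 2 / (2.0 * gamma ** 2))
    kxx = k(x[:, None], x[None, :])
    within = (kxx.sum() - np.trace(kxx)) / (m * (m - 1))
    return within - 2.0 / m * np.sum(k(x, y))
```

For $X \sim \mathcal{N}(0,1)$ and $y = 0$, both estimates converge to closed-form values ($2\sqrt{2/\pi} - 2/\sqrt{\pi}$ and $1/\sqrt{3} - \sqrt{2}$ respectively, for $\beta = 1$ and $\gamma = 1$), a convenient correctness check.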
+
+In our experiments, we therefore employ these estimators of the Scoring Rules in order to perform inference for Likelihood-Free models; we remark that alternative estimators can be employed (for instance the V-statistic suggested in Nguyen et al. [2020] for the squared Energy Distance), but we do not consider them here.
+
+## 4 Experiments
+
+We present here some experiments aiming to illustrate the behavior of our proposed approach. The LFI techniques are run using the ABCpy Python library [Dutta et al., 2020], while the PyMC3 library [Salvatier et al., 2016] is used to sample from the standard Bayes posterior when that is available (except for the M/G/1 example, where the custom strategy described in Shestopaloff and Neal, 2014 is exploited); code for reproducing all results is available at this link. We test our proposed method using the Energy (with $\beta = 1$) and Kernel Scores (with Gaussian kernel) and compare with Bayesian Synthetic Likelihood (BSL) and semi-parametric BSL (semiBSL). For all examples, the bandwidth of the Gaussian kernel is set from simulations as illustrated in Appendix D.1.
+
+In all experiments below, inference for the different methods is performed using MCMC with independent normal proposals for each component. In the examples where we use uniform priors on $\theta$, we run the MCMC on a transformed unbounded space.
+
+### 4.1 Concentration with the g-and-k model
+
+First, we study the behavior of the SR posteriors with an increasing number of observations, in order to verify our theoretical concentration results. We consider the univariate g-and-k model and its multivariate extension; the univariate g-and-k distribution [Prangle, 2017] is defined in terms of the
+---PAGE_BREAK---
+
+inverse of its cumulative distribution function $F^{-1}$. For this reason, likelihood evaluation is costly as it requires numerical inversion of $F^{-1}$. Given a quantile $q$, we define:
+
+$$F^{-1}(q) = A + B \left[ 1 + c \frac{1 - e^{-gz(q)}}{1 + e^{-gz(q)}} \right] (1 + z(q)^2)^k z(q),$$
+
+where $c$ is set to 0.8 to avoid degeneracy [Prangle, 2017], the parameters $A, B, g, k$ are broadly associated with the location, scale, skewness and kurtosis of the distribution, and $z(q)$ denotes the $q$-th quantile of the standard normal distribution $\mathcal{N}(0, 1)$. Sampling from this distribution is therefore immediate: draw $z \sim \mathcal{N}(0, 1)$ and plug it in place of $z(q)$ in the above transformation. A multivariate extension was first considered in the LFI literature in Drovandi and Pettitt [2011]; here we follow the setup of Jiang [2018]. Specifically, we draw a multivariate normal $(Z^1, \dots, Z^5) \sim \mathcal{N}(0, \Sigma)$, where $\Sigma \in \mathbb{R}^{5\times5}$ has a sparse correlation structure: $\Sigma_{kk} = 1$, $\Sigma_{kl} = \rho$ for $|k-l| = 1$ and $\Sigma_{kl} = 0$ otherwise; each component of $Z$ is then transformed as in the univariate case. The sets of parameters are therefore $\theta = (A, B, g, k)$ for the univariate case and $\theta = (A, B, g, k, \rho)$ for the multivariate one. We use uniform priors on $[0, 4]^4$ for the univariate case and on $[0, 4]^4 \times [-\sqrt{3}/3, \sqrt{3}/3]$ for the multivariate case.
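Sampling from the univariate g-and-k thus takes a few lines; a sketch using the quantile-function parameterisation above with $c = 0.8$:

```python
import numpy as np

def gk_sample(m, A, B, g, k, c=0.8, rng=None):
    """Draw m samples by plugging z ~ N(0,1) into the quantile function F^{-1}."""
    rng = np.random.default_rng(rng)
    z = rng.standard_normal(m)
    skew = 1.0 + c * (1.0 - np.exp(-g * z)) / (1.0 + np.exp(-g * z))
    return A + B * skew * (1.0 + z ** 2) ** k * z
```

With $g = k = 0$ the skewness and kurtosis terms vanish and the output is exactly $\mathcal{N}(A, B^2)$, a convenient sanity check.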
+
+We will study concentration in both the well specified and the misspecified case; in the well specified case, using a strictly proper SR ensures the uniqueness of $\theta^*$ in Assumption **A1**, which is required for the asymptotic normality result in Theorem 1 to hold. In the misspecified case, verifying Assumption **A1** is hard in practice (for both proper and strictly proper SRs); we therefore proceed by studying the behavior of the posterior with increasing $n$ and deduce from this whether $\theta^*$ is unique or not for the different SRs.
+
+In both setups, we perform inference with the different methods (excluding semiBSL for the univariate g-and-k, as semiBSL is defined for a multivariate setup only), setting the number of simulations per parameter value to $m = 500$, and run MCMC for 110,000 steps, of which the first 10,000 are discarded as burn-in. We repeat this for $n = 1, 5, 10, 15, \dots, 100$ observations.
+
+### 4.1.1 Well specified case
+
+For both the univariate and the multivariate case, we consider a set of 100 i.i.d. synthetic observations generated from parameter values $A^* = 3$, $B^* = 1.5$, $g^* = 0.5$, $k^* = 1.5$ and $\rho^* = -0.3$ (the latter is not used in the univariate case). For the SR posteriors, we fix $w$ by our suggested heuristic (Sec. 3.2), using BSL as a reference with one single observation. The values of $w$ used are reported in Appendix D.2.1, together with the proposal sizes for MCMC and the resulting acceptance rates for all methods.
+
+For the univariate g-and-k, Fig. 1 reports the marginal posterior distributions for each parameter at different number of observations for the considered methods. With increasing $n$, the BSL posterior does not concentrate (except for the parameter $k$); the Energy Score posterior concentrates close to the true parameter value (green vertical line) for all parameters, while the Kernel Score posterior performs slightly worse, not being able to concentrate for the parameter $g$ (albeit this may happen with an even larger $n$, which we did not consider here).
+
+Similar results for the multivariate g-and-k are reported in Fig. 2. For this example, the MCMCs targeting the semiBSL and BSL posteriors do not converge beyond respectively 1 and 15 observations; the figure therefore reports only the posteriors for the numbers of observations for which MCMC converged. Instead, with the Kernel and Energy Scores we do not experience such a problem; additionally, the Energy Score concentrates well on the exact parameter value in this case too, while the Kernel Score is able to concentrate well for some parameters ($g$ and $k$), and some concentration can be observed for $\rho$; however, the Kernel Score posterior marginals for $A$ and $B$ are flatter and noisier (it may be that a larger $n$ leads to more concentrated posteriors for $A$ and $B$ as well, but we did not investigate this further).
+
+We investigate now the reason for the poor performance of semiBSL and BSL for this example. The behavior of MCMC is illustrated in Fig. 3, where we fixed $n = 20$ and run MCMC with 10 different initializations, for 10000 MCMC steps with no burn-in, for BSL and semiBSL. After a short transient,
+---PAGE_BREAK---
+
+Figure 1: Marginal posterior distributions for the different parameters for the well specified univariate g-and-k model, with increasing number of observations (n = 1, 5, 10, 15, ..., 100). Darker (respectively lighter) colors denote a larger (smaller) number of observations. The densities are obtained by KDE on the MCMC output thinned by a factor 10. The Energy and Kernel Score posteriors concentrate around the true parameter value (green vertical line), while BSL does not.
+---PAGE_BREAK---
+
+Figure 2: Marginal posterior distributions for the different parameters for the well specified multivariate g-and-k model, with increasing number of observations (n = 1, 5, 10, 15, ..., 100). Darker (respectively lighter) colors denote a larger (smaller) number of observations. The densities are obtained by KDE on the MCMC output thinned by a factor 10. The Energy Score posterior concentrates well around the true parameter value (green vertical line), with the Kernel Score one performing slightly worse; we were not able to run BSL and semiBSL for a number of observations larger than 15 and 1 respectively (see text).
+---PAGE_BREAK---
+
+Figure 3: Traceplots for semiBSL and BSL for $n = 20$ for 10 different initializations (different colors), with 10000 MCMC steps (no burn-in); the green dashed line denotes the true parameter value. It can be seen that the chains are very sticky, and that they explore different parts of the parameter space. This behavior is worse for BSL, while semiBSL is able to move towards the true parameter value for some parameters.
+
+it can be seen that the different chains get stuck at different parameter values and have very “sticky” behavior. A possible explanation could be the use of pseudo-marginal MCMC with large variance in the target estimate (which is known to produce similar issues); in fact, increasing the number of observations could lead to a large variance in the estimate. We therefore repeat the same experiments increasing the number of simulations $m$ (see Appendix D.2.2 for more details); however, even using $m = 30000$ does not solve the issue. Additionally, while the BSL assumptions are unreasonable for this model (which is non-Gaussian), we remark that the multivariate g-and-k fulfills the assumptions underlying semiBSL: in fact, applying a one-to-one transformation to each component of a random vector does not change the copula structure, which is Gaussian in this case. It is therefore surprising that the performance of semiBSL degrades so rapidly when $n$ increases.
+
+We are therefore unable to provide a conclusive explanation for this behavior, which we believe can be due to a combination of high variance in the pseudo-marginal MCMC and highly concentrated posterior.
+
+### 4.1.2 Misspecified setup
+
+Here, we consider as data generating process $P_0$ the Cauchy distribution, which has fatter tails than the g-and-k one. For the univariate case, the univariate Cauchy is used; for the multivariate case, the five components of each observation are drawn independently from the univariate Cauchy distribution (i.e., there is no correlation between components). For the SR posteriors, we use the values of $w$ obtained with our heuristic in the well specified case, in order to have the same scale of posterior update across the two setups; additional experimental details are reported in Appendix D.2.3.
+
+For the univariate g-and-k, we report the marginal posteriors in Fig. 4. The two Scoring Rule posteriors concentrate on a similar parameter value; additionally, differently from the well specified case, the BSL posterior concentrates as well, albeit on a slightly different parameter value from the SR posteriors (especially for $B$ and $k$). Therefore, we can conclude that, with this kind of misspecification, $\theta^*$ is unique both when using the strictly proper Kernel and Energy Scores and when using the not strictly proper Dawid-Sebastiani Score (corresponding to BSL).
+
+For the multivariate g-and-k, we experienced the same issue with MCMC as in the well specified case for BSL and semiBSL; therefore, we do not report results for those methods.
+---PAGE_BREAK---
+
+Figure 4: Marginal posterior distributions for the different parameters for the univariate g-and-k model, with increasing number of observations ($n = 1, 5, 10, 15, \dots, 100$) generated from the Cauchy distribution. Darker (respectively lighter) colors denote a larger (smaller) number of observations. The densities are obtained by KDE on the MCMC output thinned by a factor 10. The Energy and Kernel Score posteriors concentrate around the same parameter value, while BSL concentrates on a slightly different one (especially for $B$ and $k$).
+---PAGE_BREAK---
+
+Figure 5: Marginal posterior distributions for the different parameters for the multivariate g-and-k model, with increasing number of observations ($n = 1, 5, 10, 15, \dots, 100$) generated from the Cauchy distribution. Darker (respectively lighter) colors denote a larger (smaller) number of observations. The densities are obtained by KDE on the MCMC output thinned by a factor 10. Both Energy and Kernel Score posteriors concentrate on a very similar parameter value, with a slightly larger difference for $k$.
+
+Marginal posteriors for the Energy and Kernel Score posteriors can be seen in Fig. 5; for both methods, the posterior concentrates for all parameters except $\rho$ (which, we recall, describes the correlation among the components of the observations, absent here). For the other parameters, the two methods concentrate on very close parameter values, with a slightly larger difference for $k$, for which the Kernel Score posterior does not concentrate very well.
+
+## 4.2 Bias-robustness in normal location model
+
+We now empirically demonstrate the robustness properties of the SR posterior; specifically, following Matsubara et al. [2021], we consider a univariate normal model with fixed standard deviation $P_{\theta} = N(\theta, 1)$. We consider 100 observations, a proportion $1 - \epsilon$ of which is generated by $P_{\theta}$ with $\theta = 1$, while the remaining proportion $\epsilon$ is generated by $N(z, 1)$ for some value of $z$. Therefore, $\epsilon$ and $z$ control respectively the number and the location of outliers. The prior distribution on $\theta$ is set to $N(0, 1)$. In order to perform inference with our proposed SR posterior, we use $m = 500$ simulations and 60000 MCMC steps, of which 40000 are burned-in. Additionally, we also perform standard Bayesian inference (as the likelihood is available here). For the SR posteriors, $w$ is fixed in order to get approximately the same posterior variance as standard Bayes in the well specified case ($\epsilon = 0$); values are reported in Appendix D.3, together with the proposal sizes for MCMC and the resulting acceptance rates.
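As a point of comparison, the contamination mechanism and the conjugate standard Bayes posterior in this model can be sketched as follows; the seed and the specific choices $\epsilon = 0.1$, $z = 10$ are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
n, eps, z = 100, 0.1, 10.0                 # sample size, outlier proportion, outlier location
n_out = int(eps * n)
data = np.concatenate([rng.normal(1.0, 1.0, n - n_out),   # bulk: N(theta = 1, 1)
                       rng.normal(z, 1.0, n_out)])        # outliers: N(z, 1)

# Standard Bayes is conjugate here: with an N(0, 1) prior and N(theta, 1) likelihood,
# the posterior is N(sum(y) / (n + 1), 1 / (n + 1)), so outliers shift the mean directly.
post_mean = data.sum() / (n + 1)
post_var = 1.0 / (n + 1)
print(post_mean, post_var)
```

The closed-form mean makes the lack of robustness of standard Bayes explicit: each outlier at $z$ contributes $z/(n+1)$ to the posterior mean, regardless of how far $z$ lies from the bulk.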
+
+We consider $\epsilon$ taking values in $(0, 0.1, 0.2)$ and $z$ in $(1, 3, 5, 7, 10, 20)$; in Figure 6, some results are shown: in each row of the Figure, we fix either $z$ or $\epsilon$ and vary the other variable. Results for all combinations of $z$ and $\epsilon$ are available in Figure 10 in Appendix D.3. From Figure 6, it can be seen that the Kernel Score posterior is highly robust with respect to outliers, while the Energy Score posterior performs slightly worse. As expected, the standard Bayes posterior shifts significantly when either $\epsilon$ or $z$ is increased. We highlight that our theoretical result in Theorem 3 only ensures robustness for small values of $\epsilon$ and all values of $z$ for the Kernel Score posterior, which is in fact experimentally verified (our robustness result for the Energy Score posterior does not apply here as $\mathcal{X}$ is unbounded); however, we find empirically that both SR posteriors are more robust than the standard Bayes one when both $z$ and $\epsilon$ are increased.
+
+We stress that the Kernel Score posterior is remarkably insensitive to outliers far away from the rest of the data (notice in fact how the posterior distribution for $z = 3$ is shifted more than the one for, say, $z = 20$). This same property implies that special care needs to be taken when
+---PAGE_BREAK---
+
+Figure 6: Posterior distribution for the misspecified normal location model, following the experimental setup introduced in Matsubara et al. [2021]. First row: fixed outlier location *z* = 10 and varying proportion *ε*; second row: fixed outlier proportion *ε*, varying location *z*. Both rows show that the Kernel and Energy Score posteriors are more robust than the standard Bayes one. The densities are obtained by KDE on the MCMC output thinned by a factor 10.
+
+running an MCMC targeting the Kernel SR posterior. In fact, if the chain is started close to the outlier location *z*, and if the proposal size is too small or the chain too short, the obtained MCMC posterior is insensitive to the bulk of the data (which is sampled close to *θ* = 1) and remains centered close to *z*. This issue is however easily solved by increasing the proposal size and the number of burn-in steps. Moreover, convergence of the posterior can be checked by initializing the MCMC both from the prior and from the outlier location, which we do here.
+
+Finally, notice that BSL is well specified for this Gaussian example, so that it should recover the standard Bayes posterior. However, our experimental results with BSL were unsatisfactory; specifically, BSL is able to reproduce the standard Bayes posterior when no outliers are present or when *z* is not much larger than 1; in all other cases, MCMC does not converge and exhibits a sticky behavior, similar to what was already mentioned in Section 4.1. Further details on this are given in Appendix D.3.
+
+## 4.3 Performance with single observation for MA2 and M/G/1 models
+
+We now test the performance of the different methods on two other commonly used benchmark models with a single observation, the MA2 and the M/G/1 models [Marin et al., 2012, An et al., 2020], in a well specified setting; for these, we also report the true posterior distribution. We remark here that the SR posterior does not approximate the standard Bayesian posterior, but rather is defined as a generalized Bayesian update; for this reason, it is unfair to evaluate the SR posterior (with respect to, say, BSL) by assessing the mismatch from the standard Bayes one. Still, it is insightful to show the performance of our methods alongside the true posterior and its approximations BSL and semiBSL. With both models, we find that the SR posteriors are located in similar regions of the parameter space as the true posterior, but are centered on slightly different parameter values.
+---PAGE_BREAK---
+
+Figure 7: Contour plot for the posterior distributions for the MA(2) model, with darker colors denoting larger posterior density, and dotted line denoting true parameter value. The posterior densities are obtained by KDE on the MCMC output thinned by a factor 10. Here, the Energy and Kernel Score posteriors are similar and broader than the true posterior; notice that they do not approximate the true posterior but rather provide a general Bayesian update. BSL and semiBSL reproduce the true posterior well, as expected for this model. The prior distribution is uniform on the white triangular region.
+
+### 4.3.1 MA(2)
+
+The Moving Average model of order 2, or MA(2), is a time-series model for which simulation is easy and the likelihood is available in analytical form; it has 2 parameters $\theta = (\theta^1, \theta^2)$. Sampling from the model is achieved with the following recursive process:
+
+$$x^1 = \xi^1, \quad x^2 = \xi^2 + \theta^1\xi^1, \quad x^t = \xi^t + \theta^1\xi^{t-1} + \theta^2\xi^{t-2}, \quad t = 3, \dots, 50,$$
+
+where $\xi^t$'s are i.i.d. samples from the standard normal distribution (recall here superscripts do not represent power but vector indices). The vector random variable $X \in \mathbb{R}^{50}$ has a multivariate normal distribution with sparse covariance matrix; therefore, this model satisfies the assumptions of both BSL and semiBSL. We set the prior distribution over the parameters to be uniform in the triangular region defined through the following inequalities: $-1 < \theta^2 < 1$, $\theta^1 + \theta^2 > -1$, $\theta^1 - \theta^2 < 1$. We consider an observation generated from $\theta^* = (0.6, 0.2)$; further, we use $m = 500$ simulations and 30000 MCMC steps, of which 10000 are burned-in, in order to sample from the different methods.
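The recursion above translates directly into a simulator; the sketch below is ours (function name and vector conventions are illustrative), with the convention $\xi^t = 0$ for $t < 1$.

```python
import numpy as np

def simulate_ma2(theta1, theta2, T=50, rng=None):
    """Draw one MA(2) series x_t = xi_t + theta1 * xi_{t-1} + theta2 * xi_{t-2},
    with xi_t i.i.d. standard normal (missing past terms treated as zero)."""
    rng = np.random.default_rng() if rng is None else rng
    xi = rng.standard_normal(T)
    x = xi.copy()
    x[1:] += theta1 * xi[:-1]   # theta1 * xi_{t-1}, for t >= 2
    x[2:] += theta2 * xi[:-2]   # theta2 * xi_{t-2}, for t >= 3
    return x

x = simulate_ma2(0.6, 0.2, rng=np.random.default_rng(0))
print(x.shape)
```

Since $X$ is a linear transformation of i.i.d. Gaussians, its multivariate normality (and hence the validity of the BSL and semiBSL assumptions) is immediate from this construction.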
+
+For the SR posteriors, we attempted setting $w$ with our heuristics (Sec. 3.2), which however led to broad posteriors; therefore, we investigated the posterior behavior with different values of $w$ (results available in Appendix D.4, together with MCMC proposal sizes and acceptance rates) and finally set respectively $w = 640$ and $w = 30$ for the Kernel and Energy Score posteriors. Exact posterior samples are obtained with MCMC using the exact MA(2) likelihood with 6 parallel chains of 20000 steps, of which 10000 are burned in, with the PyMC3 library [Salvatier et al., 2016]. For all methods, we report the bivariate posteriors in Fig. 7. The Energy Score and Kernel Score posteriors perform similarly and are centered around the same parameter value as the true posterior, which is however narrower. As expected, both BSL and semiBSL recover the true posterior well.
+
+### 4.3.2 M/G/1
+
+The M/G/1 model is a single-server queuing system with Poisson arrivals and general service times. Specifically, we assume the distribution of the service time to be Uniform in $(\theta^1, \theta^2)$ and the interarrival times to have exponential distribution with parameter $\theta^3$, and denote the set of parameters as $\theta = (\theta^1, \theta^2, \theta^3)$. The observed data is the logarithm of the first 50 interdeparture times; as shown in An et al. [2020], the distribution of simulated data does not resemble any common distributions; we give more details on the model and how to simulate from it in Appendix D.5.1. We set a Uniform prior on the region $[0, 10] \times [0, 10] \times [0, 1/3]$ for $(\theta^1, \theta^2 - \theta^1, \theta^3)$ and generate observations from $\theta^* = (1, 5, 0.2)$. We use $m = 1000$ simulations and 30000 MCMC steps, of which 10000 are burned-in, in order to sample from the different methods.
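The precise construction is deferred to Appendix D.5.1; the sketch below uses the standard single-server queue recursion (a customer's service starts at the later of its arrival time and the previous departure), which we believe matches the model up to details given there. In particular, measuring the first interdeparture time from time 0 is an assumption of this sketch.

```python
import numpy as np

def simulate_mg1(theta1, theta2, theta3, n_obs=50, rng=None):
    """Log interdeparture times for an M/G/1 queue: interarrival times are
    exponential with rate theta3, service times are Uniform(theta1, theta2)."""
    rng = np.random.default_rng() if rng is None else rng
    arrivals = np.cumsum(rng.exponential(1.0 / theta3, n_obs))  # scale = 1 / rate
    services = rng.uniform(theta1, theta2, n_obs)
    departures = np.empty(n_obs)
    prev = 0.0
    for i in range(n_obs):
        prev = max(arrivals[i], prev) + services[i]  # server idles until arrival
        departures[i] = prev
    inter = np.diff(np.concatenate([[0.0], departures]))
    return np.log(inter)

y = simulate_mg1(1.0, 5.0, 0.2, rng=np.random.default_rng(0))
print(y.shape)
```

Note that each interdeparture time is at least the corresponding service time, so for $\theta^1 > 0$ the logarithm is always well defined.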
+---PAGE_BREAK---
+
+Figure 8: Posterior distributions for the M/G/1 model, with each row showing contour plots of the bivariate marginals for a different pair of parameters; darker colors denote larger posterior density, and dotted lines denote the true parameter value. The posterior densities are obtained by KDE on the MCMC output thinned by a factor 10. All posteriors are close in parameter space (notice that the axes do not span the full prior range for the parameters); however, the Energy and Kernel Score posteriors are slightly different from each other as well as from the BSL and true posteriors. We remark that the SR posteriors do not approximate the true one but rather provide a generalized Bayesian update. As already noted in An et al. [2020], semiBSL recovers the true posterior well, while BSL performs worse.
+
+Again, we attempted setting $w$ for the SR posteriors with our heuristics (Sec. 3.2), which led to broad posteriors; we therefore obtained posteriors with different values of $w$ (results available in Appendix D.5.2, together with MCMC proposal sizes and acceptance rates) and finally set respectively $w = 7000$ and $w = 50$ for the Kernel and Energy Score posteriors. To sample from the true posterior distribution, we exploit the custom procedure described in Shestopaloff and Neal [2014]. For all methods, we report bivariate marginals of the posterior in Fig. 8. As already noted in An et al. [2020], semiBSL is able to recover the true posterior quite well, while BSL performs worse. The Kernel and Energy Score posteriors are centered on slightly different parameter values from the true posterior, highlighting the fact that the SRs focus on different features of the data. However, we remark here that all posteriors are close in parameter space (notice that the axes in the plots in Fig. 8 do not span the full prior range). Finally, we add that both the true posterior and the SR posteriors are guaranteed (by our Theorem 1) to concentrate on the exact parameter value as $n \to \infty$.
+---PAGE_BREAK---
+
+# 5 Conclusion
+
+We introduced a new way to perform Likelihood-Free Inference based on Generalized Bayesian Inference using Scoring Rules (SR). We showed how our SR posterior includes previously investigated approaches [Price et al., 2018, Thomas et al., 2020, Chérief-Abdellatif and Alquier, 2020] as special cases, and we hope new research directions are inspired by the connection we established between the Generalized Bayesian and Likelihood-Free Inference frameworks.
+
+As we study intractable likelihood models, we proposed to sample from the SR posterior in a pseudo-marginal MCMC fashion by consistently estimating the Scoring Rules using simulations from the model, and showed how the MCMC target converges to the exact SR posterior as the number of simulations at each MCMC step increases (generalizing previous results for BSL in Price et al., 2018).
+
+Further, we proved asymptotic normality (Sec. 2.1) when the minimizer of the expected Scoring Rule is unique (which is verified when a strictly proper Scoring Rule is used in a well specified setup); we empirically demonstrated this fact with the g-and-k model, showing how BSL (which does not employ a strictly proper SR) fails to concentrate on the true parameter value as the number of observations increases, in the well specified case.
+
+We also provided two additional theoretical results for the Kernel and Energy Score posteriors which hold for misspecified models: the first (Sec. 2.3) is a finite sample posterior consistency result which ensures that, when the number of observations increases, the SR posterior gives more mass to regions of the parameter space centered around the (potentially multiple) minimizers of the expected Scoring Rule. The second result (Sec. 2.4) establishes outliers robustness, which we empirically verified on a normal location example.
+
+We also tested our proposed method on two common benchmark models with a single observation, highlighting how the SR posterior behaves differently from the standard Bayesian posterior and its approximations. Across our experiments, we considered two Scoring Rules which admit easy empirical estimators, namely the Energy and the Kernel Scores.
+
+We envisage several extensions of this work:
+
+* Across our work, we have employed two specific Scoring Rules; however, many more exist [Gneiting and Raftery, 2007, Dawid and Musio, 2014, Ziel and Berk, 2019], some of which may be fruitfully applied in LFI setups.
+
+* During our experiments, we encountered issues with the pseudo-marginal MCMC approach with a large number of observations (as in the g-and-k example with BSL, Sec. 4.1), a large $w$ (as mentioned in Appendices D.4 and D.5.1 for the SR posteriors) or far-away outliers (normal location example with BSL, Sec. 4.2). Although we were unable to provide a conclusive explanation for this behavior (which may be due to a combination of highly concentrated posteriors and noise in the pseudo-marginal acceptance rate), we believe that a variational inference setup would be better suited to sample from approximations of the SR posterior in such cases; this could be implemented similarly to what was done in Ong et al. [2018], Chérief-Abdellatif and Alquier [2020], Frazier et al. [2021b] for related methods.
+
+* Generalized Bayesian approaches are often motivated with robustness arguments with respect to model misspecification, as the standard Bayes posterior may perform poorly in this setup [Bissiri et al., 2016, Jewson et al., 2018, Knoblauch et al., 2019]. Most LFI techniques are approximations of the true posterior, and as such are unsuited to a misspecified setup (albeit an emerging literature investigating the effect of misspecification in LFI exists, see Ridgway [2017], Frazier et al. [2017], Frazier [2020], Frazier et al. [2020], Fujisawa et al. [2021]). Although we provided an outlier robustness result, it would be of interest to better study the behavior of the SR posterior with more general forms of model misspecification.
+
+## Acknowledgment
+
+LP is supported by the EPSRC and MRC through the OxWaSP CDT programme (EP/L016710/1), which also funds the computational resources used to perform this work. RD is funded by EPSRC
+---PAGE_BREAK---
+
+(grant nos. EP/V025899/1, EP/T017112/1) and NERC (grant no. NE/T00973X/1). We thank Alex
+Shestopaloff for providing code for exact MCMC for the M/G/1 model, and Jeremias Knoblauch and
+François-Xavier Briol for valuable feedback and suggestions.
+
+# References
+
+Z. An, L. F. South, D. J. Nott, and C. C. Drovandi. Accelerating Bayesian synthetic likelihood with the graphical lasso. *Journal of Computational and Graphical Statistics*, 28(2):471–475, 2019.
+
+Z. An, D. J. Nott, and C. Drovandi. Robust Bayesian synthetic likelihood via a semi-parametric approach. *Statistics and Computing*, 30(3):543–557, 2020.
+
+C. Andrieu, G. O. Roberts, et al. The pseudo-marginal approach for efficient Monte Carlo computations. *The Annals of Statistics*, 37(2):697–725, 2009.
+
+E. Bernton, P. E. Jacob, M. Gerber, and C. P. Robert. Approximate Bayesian computation with the Wasserstein distance. *Journal of the Royal Statistical Society: Series B (Statistical Methodology)*, 81(2):235–269, 2019. doi: 10.1111/rssb.12312. URL https://rss.onlinelibrary.wiley.com/doi/abs/10.1111/rssb.12312.
+
+P. Billingsley. *Convergence of probability measures*. John Wiley & Sons, 2nd edition, 1999.
+
+P. G. Bissiri, C. C. Holmes, and S. G. Walker. A general framework for updating belief distributions. *Journal of the Royal Statistical Society. Series B, Statistical methodology*, 78(5):1103, 2016.
+
+K. Boudt, J. Cornelissen, and C. Croux. The Gaussian rank correlation estimator: robustness properties. *Statistics and Computing*, 22(2):471–483, 2012.
+
+F.-X. Briol, A. Barp, A. B. Duncan, and M. Girolami. Statistical inference for generative models with maximum mean discrepancy. *arXiv preprint arXiv:1906.05944*, 2019.
+
+B.-E. Chérief-Abdellatif and P. Alquier. MMD-Bayes: Robust Bayesian estimation via maximum mean discrepancy. In *Symposium on Advances in Approximate Bayesian Inference*, pages 1–21. PMLR, 2020.
+
+K. Chwialkowski, H. Strathmann, and A. Gretton. A kernel test of goodness of fit. In *International conference on machine learning*, pages 2606–2615. PMLR, 2016.
+
+A. P. Dawid and M. Musio. Theory and applications of proper scoring rules. *Metron*, 72(2):169–183, 2014.
+
+A. P. Dawid, M. Musio, and L. Ventura. Minimum scoring rule inference. *Scandinavian Journal of Statistics*, 43(1): 123–138, 2016.
+
+C. C. Drovandi and A. N. Pettitt. Likelihood-free Bayesian estimation of multivariate quantile distributions. *Computational Statistics & Data Analysis*, 55(9):2541–2556, 2011.
+
+C. C. Drovandi, A. N. Pettitt, and A. Lee. Bayesian indirect inference using a parametric auxiliary model. *Statistical Science*, 30(1):72–95, 2015.
+
+R. Dutta, M. Schoengens, L. Pacchiardi, A. Ummadisingu, N. Widmer, J.-P. Onnela, and A. Mira. ABCpy: A high-performance computing perspective to approximate Bayesian computation. *arXiv preprint arXiv:1711.04694*, 2020.
+
+D. T. Frazier. Robust and efficient approximate Bayesian computation: A minimum distance approach. *arXiv preprint arXiv:2006.14126*, 2020.
+
+D. T. Frazier, C. P. Robert, and J. Rousseau. Model misspecification in ABC: consequences and diagnostics. *arXiv preprint arXiv:1708.01974*, 2017.
+
+D. T. Frazier, C. Drovandi, and R. Loaiza-Maya. Robust approximate Bayesian computation: An adjustment approach. *arXiv preprint arXiv:2008.04099*, 2020.
+
+D. T. Frazier, C. Drovandi, and D. J. Nott. Synthetic likelihood in misspecified models: Consequences and corrections. *arXiv preprint arXiv:2104.03436*, 2021a.
+
+D. T. Frazier, R. Loaiza-Maya, G. M. Martin, and B. Koo. Loss-based variational bayes prediction. *arXiv preprint arXiv:2104.14054*, 2021b.
+
+D. T. Frazier, D. J. Nott, C. Drovandi, and R. Kohn. Bayesian inference using synthetic likelihood: asymptotics and adjustments. *arXiv preprint arXiv:1902.04827*, 2021c.
+---PAGE_BREAK---
+
+M. Fujisawa, T. Teshima, I. Sato, and M. Sugiyama. γ-abc: Outlier-robust approximate Bayesian computation based on a robust divergence estimator. In *International Conference on Artificial Intelligence and Statistics*, pages 1783–1791. PMLR, 2021.
+
+A. Ghosh and A. Basu. Robust Bayes estimation using the density power divergence. *Annals of the Institute of Statistical Mathematics*, 68(2):413–437, 2016.
+
+J. K. Ghosh and R. Ramamoorthi. *Bayesian nonparametrics*. Springer Science & Business Media, 2003.
+
+F. Giummolè, V. Mameli, E. Ruli, and L. Ventura. Objective Bayesian inference with proper scoring rules. *Test*, 28(3): 728–755, 2019.
+
+T. Gneiting and A. E. Raftery. Strictly proper scoring rules, prediction, and estimation. *Journal of the American Statistical Association*, 102(477):359–378, 2007.
+
+A. Gretton, K. M. Borgwardt, M. J. Rasch, B. Schölkopf, and A. Smola. A kernel two-sample test. *The Journal of Machine Learning Research*, 13(1):723–773, 2012.
+
+C. Holmes and S. Walker. Assigning a value to a power likelihood in a general Bayesian model. *Biometrika*, 104(2): 497–503, 2017.
+
+J. Jewson, J. Q. Smith, and C. Holmes. Principles of Bayesian inference using general divergence criteria. *Entropy*, 20 (6):442, 2018.
+
+B. Jiang. Approximate Bayesian computation with Kullback-Leibler divergence as data discrepancy. In *International Conference on Artificial Intelligence and Statistics*, pages 1711–1721, 2018.
+
+J. Knoblauch, J. Jewson, and T. Damoulas. Generalized variational inference: Three arguments for deriving new posteriors. arXiv preprint arXiv:1904.02063, 2019.
+
+O. Ledoit and M. Wolf. A well-conditioned estimator for large-dimensional covariance matrices. *Journal of multivariate analysis*, 88(2):365–411, 2004.
+
+J. Lintusaari, M. U. Gutmann, R. Dutta, S. Kaski, and J. Corander. Fundamentals and recent developments in approximate Bayesian computation. *Systematic biology*, 66(1):e66-e82, 2017. doi: 10.1093/sysbio/syw077. URL https://doi.org/10.1093/sysbio/syw077.
+
+Q. Liu, J. Lee, and M. Jordan. A kernelized Stein discrepancy for goodness-of-fit tests. In *International conference on machine learning*, pages 276–284. PMLR, 2016.
+
+R. Loaiza-Maya, G. M. Martin, and D. T. Frazier. Focused Bayesian prediction. arXiv preprint arXiv:1912.12571, 2019.
+
+J.-M. Marin, P. Pudlo, C. P. Robert, and R. J. Ryder. Approximate Bayesian computational methods. *Statistics and Computing*, 22(6):1167–1180, 2012.
+
+T. Matsubara, J. Knoblauch, F.-X. Briol, C. Oates, et al. Robust generalised Bayesian inference for intractable likelihoods.
+*arXiv preprint arXiv:2104.07359*, 2021.
+
+C. McDiarmid. On the method of bounded differences. *Surveys in combinatorics*, 141(1):148–188, 1989.
+
+J. W. Miller. Asymptotic normality, concentration, and coverage of generalized posteriors.
+*arXiv preprint arXiv:1907.09611*, 2019.
+
+B. Nelson. *Foundations and methods of stochastic simulation: a first course*. Springer Science & Business Media, 2013.
+
+H. D. Nguyen, J. Arbel, H. Lü, and F. Forbes. Approximate Bayesian computation via the energy statistic.
+*IEEE Access*, 8:131683–131698, 2020.
+
+V. M.-H. Ong, D. J. Nott, M.-N. Tran, S. A. Sisson, and C. C. Drovandi. Likelihood-free inference in high dimensions with synthetic likelihood.
+*Computational Statistics & Data Analysis*, 128:271–291, 2018.
+
+M. Park, W. Jitkrittum, and D. Sejdinovic. K2-ABC: Approximate Bayesian computation with kernel embeddings.
+In *Artificial Intelligence and Statistics*, 2016.
+
+D. Prangle. gk: An R package for the g-and-k and generalised g-and-h distributions.
+*arXiv preprint arXiv:1706.06889*, 2017.
+
+L. F. Price, C. C. Drovandi, A. Lee, and D. J. Nott. Bayesian synthetic likelihood.
+*Journal of Computational and Graphical Statistics*, 27(1):1–11, 2018.
+---PAGE_BREAK---
+
+J. Ridgway. Probably approximate Bayesian computation: nonasymptotic convergence of ABC under misspecification. *arXiv preprint arXiv:1707.05987*, 2017.
+
+M. L. Rizzo and G. J. Székely. Energy distance. *Wiley Interdisciplinary Reviews: Computational Statistics*, 8(1):27–38, 2016.
+
+J. Salvatier, T. V. Wiecki, and C. Fonnesbeck. Probabilistic programming in Python using PyMC3. *PeerJ Computer Science*, 2:e55, 2016.
+
+H. Scheffé. A useful convergence theorem for probability distributions. *The Annals of Mathematical Statistics*, 18(3): 434–438, 1947.
+
+A. Y. Shestopaloff and R. M. Neal. On Bayesian inference for the M/G/1 queue with efficient MCMC sampling. *arXiv preprint arXiv:1401.5548*, 2014.
+
+O. Thomas, R. Dutta, J. Corander, S. Kaski, M. U. Gutmann, et al. Likelihood-free inference by ratio estimation. *Bayesian Analysis*, 2020.
+
+S. N. Wood. Statistical inference for noisy nonlinear ecological dynamic systems. *Nature*, 466(7310):1102, 2010.
+
+A. Zellner. Optimal information processing and Bayes's theorem. *The American Statistician*, 42(4):278–280, 1988.
+
+F. Ziel and K. Berk. Multivariate forecasting evaluation: On sensitive and strictly proper scoring rules. *arXiv preprint arXiv:1910.07325*, 2019.
+
+# A Proofs of theoretical results
+
+## A.1 Proof and more details on Theorem 1
+
+In this section, we use the following short hand notation: $S(\theta, y) = S(P_{\theta}, y)$ and
+
+$$S_n(\theta, \mathbf{y}) = \sum_{i=1}^{n} S(\theta, y_i),$$
+
+using which the posterior can be written as:
+
+$$\pi_S(\theta|\mathbf{y}) = \frac{\pi(\theta) \exp(-wS_n(\theta, \mathbf{y}))}{\int_{\Theta} \pi(\theta) \exp(-wS_n(\theta, \mathbf{y}))d\theta}.$$
+
+Recall that upper case letters denote random variables while lower case ones denote observed (fixed) values. We assume observations are generated by the distribution $P_0$: $Y \sim P_0$, and denote by $\mathbb{E}_{Y\sim P_0}$ expectation over $Y \sim P_0$. Moreover, let $\xrightarrow{P}$ denote convergence in probability under the distribution $P$ as $n \to \infty$.
+
+For simplicity, we consider here a univariate $\theta$, but multivariate extensions of the result are immediate and do not entail any technical difficulty except for notational ones.
+
+In proving our result, we follow and adapt the Bernstein-von Mises theorem reported in Ghosh and Ramamoorthi [2003] (Theorem 1.4.2).
+
+We remark that uniqueness of the minimizer of the expected scoring rule $\theta^*$ (in Assumption **A1**) is satisfied in a well specified setup if $S$ is a strictly proper scoring rule (in which case $P_{\theta^*} = P_0$). If the model class is not well specified, a strictly proper $S$ does not guarantee the minimizer to be unique (as there may be pathological cases where multiple minimizers exist), but this condition is likely to be verified in most cases of practical interest.
+
+Additionally, it may be the case that, for a specific $P_0$ and misspecified model class $P_\theta$, the minimizer of $S(P_\theta, P_0)$ is unique even if $S$ is proper but not strictly so; in general, being not strictly proper means that there exists at least one pair of values $\theta^{(1)} \neq \theta^{(2)}$ for which $S(P_{\theta^{(1)}}, P_{\theta^{(2)}}) = S(P_{\theta^{(2)}}, P_{\theta^{(2)}})$, but it may still be that $\operatorname{argmin}_{\theta \in \Theta} S(P_\theta, P_0)$ is unique for that specific choice of $P_0$, as the minimizer lies in a region of the parameter space in which no other parameter values lead to the same value of the scoring rule.
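A concrete instance of the proper-but-not-strictly-proper situation is the Dawid-Sebastiani score underlying BSL which, up to additive and multiplicative constants, depends on $P$ only through its mean $\mu_P$ and covariance $\Sigma_P$:

$$ S_{\mathrm{DS}}(P, y) = \log \det \Sigma_P + (y - \mu_P)^\top \Sigma_P^{-1} (y - \mu_P); $$

any two distributions sharing their first two moments therefore receive identical scores, so the minimizer over $\theta$ can only be unique when no other parameter value reproduces the moment pair $(\mu_{P_{\theta^*}}, \Sigma_{P_{\theta^*}})$.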
+---PAGE_BREAK---
+
+In our proof below, we will use the fact that $\hat{\theta}^{(n)}(\mathbf{Y}) \xrightarrow{P_0} \theta^*$ for $n \to \infty$, namely, $\hat{\theta}^{(n)}(\mathbf{Y})$ is a consistent finite sample estimator of $\theta^*$. This is ensured by our assumptions above by applying standard results for M-estimators; see for instance Theorem 4.1 in Dawid et al. [2016].
+
+Before stating our result, we remark that Appendix A in Loaiza-Maya et al. [2019] provides an analogous result which also holds for non-i.i.d. (independent and identically distributed) data. Additionally, they replace our differentiability assumptions (which ensure the existence of the Taylor series expansion in the proof below) with the assumption that the difference of the scoring rules $S_n(\theta^*, \mathbf{y}) - S_n(\theta, \mathbf{y})$ can be written as a quadratic term plus a bounded remainder term, which is slightly more general.
+
+Additionally, Miller [2019] investigated the asymptotic behavior of general Bayes posterior with generic losses and established almost sure asymptotic normality; their result assumes the loss can be written, for each value of $\theta$, as a quadratic form plus a remainder which is bounded by a cubic function of the coordinate, similar to Loaiza-Maya et al. [2019]. In Matsubara et al. [2021], this result is used when the loss is taken to be the Kernel Stein Discrepancy; in order to satisfy the conditions, third order differentiability conditions are assumed in Matsubara et al. [2021], which are similar to our Assumption **A2**. The remaining assumptions in Matsubara et al. [2021] are similar to ours, including prior continuity and uniqueness of the minimizer $\theta^*$. However, by exploiting the stronger results in Miller [2019] they are able to obtain almost sure convergence, while our result (as well as the one in Loaiza-Maya et al. [2019]) ensures only convergence in probability.
+
+Albeit the results mentioned above are more general and stronger, we believe our approach (in line with standard Bernstein-von Mises results [Ghosh and Ramamoorthi, 2003]) to be informative and worth stating.
+
+We report here the Theorem for ease of reference and prove it subsequently:
+
+**Theorem 1.** Under Assumptions **A1** to **A4**, let $\mathbf{Y}_n = (Y_1, Y_2, \dots, Y_n)$. Denote by $\pi_S^*(s|\mathbf{Y}_n)$ the SR posterior density of $s = \sqrt{n}(\theta - \hat{\theta}^{(n)}(\mathbf{Y}_n))$. Then as $n \to \infty$, for any $w > 0$:
+
+$$ \int_{\mathbb{R}} \left| \pi_S^*(s|\mathbf{Y}_n) - \sqrt{\frac{wI(\theta^*)}{2\pi}}e^{-\frac{s^2wI(\theta^*)}{2}} \right| ds \xrightarrow{P_0} 0. $$
+
+*Proof.* Without loss of generality, we will absorb $w$ into the scoring rule $S$; if $w > 0$, in fact, the new scoring rule $S' = w \cdot S$ inherits the propriety properties of $S$, and has a second derivative $I'(\theta^*) = \mathbb{E}_{Y\sim P_0} d_\theta^2 S'(\theta^*, Y) = w \cdot I(\theta^*)$. Therefore, it is enough to prove the statement of the Theorem with $w = 1$.
+
+We first prove that the statement in the Theorem is equivalent to another one, which is easier to prove. We will use shorthand notation $\hat{\theta}^{(n)} = \hat{\theta}^{(n)}(\mathbf{Y}_n)$.
+
+Because $s = \sqrt{n}(\theta - \hat{\theta}^{(n)})$,
+
+$$ \pi_S^*(s|\mathbf{Y}_n) = \frac{\pi\left(\hat{\theta}^{(n)} + s/\sqrt{n}\right) e^{-S_n\left(\hat{\theta}^{(n)} + s/\sqrt{n}\right) + S_n\left(\hat{\theta}^{(n)}\right)}}{\int_{\mathbb{R}} \pi\left(\hat{\theta}^{(n)} + t/\sqrt{n}\right) e^{-S_n\left(\hat{\theta}^{(n)} + t/\sqrt{n}\right) + S_n\left(\hat{\theta}^{(n)}\right)} dt}, $$
+
+where we also dropped $\mathbf{Y}_n$ in $S_n$; thus we need to show:
+
+$$ \int_{\mathbb{R}} \left| \frac{\pi\left(\hat{\theta}^{(n)} + s/\sqrt{n}\right) e^{-S_n\left(\hat{\theta}^{(n)} + s/\sqrt{n}\right) + S_n\left(\hat{\theta}^{(n)}\right)}}{C_n} - \sqrt{\frac{I(\theta^*)}{2\pi}} e^{-\frac{s^2 I(\theta^*)}{2}} \right| ds \xrightarrow{P_0} 0, \quad (8) $$
+
+where we defined:
+
+$$ C_n = \int_{\mathbb{R}} \pi\left(\hat{\theta}^{(n)} + t/\sqrt{n}\right) e^{-S_n\left(\hat{\theta}^{(n)} + t/\sqrt{n}\right) + S_n\left(\hat{\theta}^{(n)}\right)} dt $$
+
+We show now that the statement in Eq. (8) is equivalent to showing:
+
+$$ I_1 \xrightarrow{P_0} 0, \qquad (9) $$
+---PAGE_BREAK---
+
+where
+
+$$I_1 = \int_{\mathbb{R}} \left| \pi\left(\hat{\theta}^{(n)} + t/\sqrt{n}\right) e^{-S_n\left(\hat{\theta}^{(n)} + t/\sqrt{n}\right) + S_n\left(\hat{\theta}^{(n)}\right)} - \pi\left(\theta^*\right) e^{-\frac{t^2 I(\theta^*)}{2}} \right| dt.$$
+
+To see this, notice that the original statement in Eq. (8) is (provided $C_n$ is finite and greater than 0):
+
+$$C_n^{-1} \left[ \int_{\mathbb{R}} \left| \pi\left(\hat{\theta}^{(n)} + s/\sqrt{n}\right) e^{-S_n\left(\hat{\theta}^{(n)} + s/\sqrt{n}\right) + S_n\left(\hat{\theta}^{(n)}\right)} - C_n \sqrt{\frac{I(\theta^*)}{2\pi}} e^{-\frac{s^2 I(\theta^*)}{2}} \right| ds \right] \xrightarrow{P_0} 0 \quad (10)$$
+
+If Eq. (9) holds, that implies that $C_n \to \pi(\theta^*)\sqrt{2\pi/I(\theta^*)}$, which is finite and greater than 0 due to Assumptions **A2** and **A4**; for this reason, showing the statement in Eq. (10) is equivalent to showing that the integral in the square brackets in Eq. (10) goes to 0 in probability. Moreover, by the triangle inequality, that term is less than $I_1 + I_2$, where:
+
+$$I_2 = \int_{\mathbb{R}} \left| \pi(\theta^*) e^{-\frac{s^2 I(\theta^*)}{2}} - C_n \sqrt{\frac{I(\theta^*)}{2\pi}} e^{-\frac{s^2 I(\theta^*)}{2}} \right| ds$$
+
+If Eq. (9) holds, $I_1$ goes to 0 and $I_2$ is equal to:
+
+$$\left|\pi(\theta^*) - C_n \sqrt{\frac{I(\theta^*)}{2\pi}}\right| \int_{\mathbb{R}} e^{-\frac{s^2 I(\theta^*)}{2}} ds,$$
+
+which goes to 0 because $C_n \to \pi(\theta^*)\sqrt{2\pi/I(\theta^*)}$. Combining these arguments shows that Eq. (9) implies the original statement in Eq. (8). Therefore, we now prove the statement in Eq. (9).
+
+We set:
+
+$$h_n = \frac{1}{n} \sum_{i=1}^{n} d_{\theta}^{2} S (\hat{\theta}^{(n)}, Y_i) = \frac{1}{n} d_{\theta}^{2} S_n (\hat{\theta}^{(n)}, \mathbf{Y}_n).$$
+
+Since, as $n \to \infty$, $h_n \xrightarrow{P_0} I(\theta^*)$ (by the Weak Law of Large Numbers and the fact that $\mathbb{E}_{Y \sim P_0} d_\theta^2 S(\theta^*, Y)$ is finite due to Assumption **A2**), to verify Eq. (9) it is enough to show that
+
+$$\int_{\mathbb{R}} \left| \pi\left(\hat{\theta}^{(n)} + t/\sqrt{n}\right) e^{-S_n\left(\hat{\theta}^{(n)} + t/\sqrt{n}\right) + S_n\left(\hat{\theta}^{(n)}\right)} - \pi\left(\hat{\theta}^{(n)}\right) e^{-\frac{t^2 h_n}{2}} \right| dt \xrightarrow{P_0} 0, \quad (11)$$
+
+which also relies on the consistency of $\hat{\theta}^{(n)}$ to $\theta^*$.
+
+To show Eq. (11), given any $\delta', c > 0$, we break $\mathbb{R}$ into three regions:
+
+* $A_1 = \{t : |t| < c \log \sqrt{n}\}$
+
+* $A_2 = \{t : c \log \sqrt{n} < |t| < \delta' \sqrt{n}\}$, and
+
+* $A_3 = \{t : |t| > \delta' \sqrt{n}\}$
+
+and we prove that the integral over each of these three regions goes to 0.
+
+**Region $A_3$:**
+
+$$
+\begin{align*}
+& \int_{A_3} \left| \pi\left(\hat{\theta}^{(n)} + t/\sqrt{n}\right) e^{-S_n\left(\hat{\theta}^{(n)} + t/\sqrt{n}\right) + S_n\left(\hat{\theta}^{(n)}\right)} - \pi\left(\hat{\theta}^{(n)}\right) e^{-\frac{t^2 h_n}{2}} \right| dt \\
+& \leq \int_{A_3} \pi\left(\hat{\theta}^{(n)} + t/\sqrt{n}\right) e^{-S_n\left(\hat{\theta}^{(n)} + t/\sqrt{n}\right) + S_n\left(\hat{\theta}^{(n)}\right)} dt + \int_{A_3} \pi\left(\hat{\theta}^{(n)}\right) e^{-\frac{t^2 h_n}{2}} dt.
+\end{align*}
+$$
+
+Here, the second integral goes to 0 as $n \to \infty$ by the usual tail estimates for a normal. The first goes to 0 by Assumption **A3**; to see this, we first rewrite:
+
+$$
+\int_{A_3} \pi\left(\hat{\theta}^{(n)} + t/\sqrt{n}\right) e^{-S_n\left(\hat{\theta}^{(n)} + t/\sqrt{n}\right) + S_n\left(\hat{\theta}^{(n)}\right)} dt \\
+\leq e^{n \sup_{t \in A_3} \left\{ \frac{1}{n} \left(S_n\left(\hat{\theta}^{(n)}\right) - S_n\left(\hat{\theta}^{(n)} + t/\sqrt{n}\right)\right) \right\}} \underbrace{\int_{A_3} \pi\left(\hat{\theta}^{(n)} + t/\sqrt{n}\right) dt}_{\leq \sqrt{n}};
+$$
+---PAGE_BREAK---
+
+recalling that $\theta = \hat{\theta}^{(n)} + t/\sqrt{n}$, we have that:
+
+$$t \in A_3 \iff |t| > \delta' \sqrt{n} \iff |\theta - \hat{\theta}^{(n)}| > \delta';$$
+
+we can therefore write:
+
+$$\sup_{t \in A_3} \left\{ \frac{1}{n} \left( S_n \left( \hat{\theta}^{(n)} \right) - S_n \left( \hat{\theta}^{(n)} + t/\sqrt{n} \right) \right) \right\} = \sup_{\theta: |\theta - \hat{\theta}^{(n)}| > \delta'} \left\{ \frac{1}{n} \left( S_n \left( \hat{\theta}^{(n)} \right) - S_n \left( \theta \right) \right) \right\};$$
+
+as $n \to \infty$, since $\hat{\theta}^{(n)} \to \theta^*$, Assumption **A3** guarantees that there exists $\epsilon' > 0$ such that, with probability (on the observations) converging to 1:
+
+$$\sup_{\theta: |\theta - \hat{\theta}^{(n)}| > \delta'} \left\{ \frac{1}{n} \left( S_n \left( \hat{\theta}^{(n)} \right) - S_n \left( \theta \right) \right) \right\} \leq -\epsilon',$$
+
+from which, with probability converging to 1:
+
+$$\int_{A_3} \pi(\hat{\theta}^{(n)} + t/\sqrt{n}) e^{-S_n(\hat{\theta}^{(n)} + t/\sqrt{n}) + S_n(\hat{\theta}^{(n)})} dt \leq \sqrt{n}\, e^{-n\epsilon'} \to 0.$$
+
+**Region $A_1$:**
+
+By Taylor's theorem:
+
+$$S_n\left(\hat{\theta}^{(n)} + t/\sqrt{n}\right) - S_n\left(\hat{\theta}^{(n)}\right) = \frac{t^2}{2n} d_{\theta}^2 S_n\left(\hat{\theta}^{(n)}\right) + \frac{1}{6} (t/\sqrt{n})^3 d_{\theta}^3 S_n\left(\theta'_n(t)\right) = \frac{t^2 h_n}{2} + R_n(t),$$
+
+for some $\theta'_n(t) \in (\hat{\theta}^{(n)}, \hat{\theta}^{(n)} + t/\sqrt{n})$ if $t > 0$ and $\theta'_n(t) \in (\hat{\theta}^{(n)} + t/\sqrt{n}, \hat{\theta}^{(n)})$ if $t < 0$; here we used the Lagrange form for the remainder $R_n(t)$, the first order term vanishes because $\hat{\theta}^{(n)}$ minimizes $S_n$ (so that $d_\theta S_n(\hat{\theta}^{(n)}) = 0$), and we highlighted the fact that $\theta'_n$ depends on $t$ as well. Now consider:
+
+$$\begin{aligned}
+& \int_{A_1} \left| \pi\left(\hat{\theta}^{(n)} + t/\sqrt{n}\right) e^{-\frac{t^2 h_n}{2} - R_n(t)} - \pi\left(\hat{\theta}^{(n)}\right) e^{-\frac{t^2 h_n}{2}} \right| dt \\
+& \leq \int_{A_1} \pi\left(\hat{\theta}^{(n)} + t/\sqrt{n}\right) \left| e^{-\frac{t^2 h_n}{2} - R_n(t)} - e^{-\frac{t^2 h_n}{2}} \right| dt + \int_{A_1} \left| \pi\left(\hat{\theta}^{(n)} + t/\sqrt{n}\right) - \pi\left(\hat{\theta}^{(n)}\right) \right| e^{-\frac{t^2 h_n}{2}} dt.
+\end{aligned}$$
+
+As $\pi$ is continuous in $\theta^*$, the second integral goes to 0 in $P_0$ probability. The first integral equals:
+
+$$\int_{A_1} \pi(\hat{\theta}^{(n)} + t/\sqrt{n}) e^{-\frac{t^2 h_n}{2}} |e^{-R_n(t)} - 1| dt \leq \int_{A_1} \pi(\hat{\theta}^{(n)} + t/\sqrt{n}) e^{-\frac{t^2 h_n}{2}} e^{|R_n(t)|} |R_n(t)| dt, \quad (12)$$
+
+where the above inequality holds as, for all $x$'s:
+
+$$|e^x - 1| = \left| \sum_{n=1}^{\infty} \frac{x^n}{n!} \right| \leq \sum_{n=1}^{\infty} \frac{|x|^n}{n!} = |x| \sum_{n=0}^{\infty} \frac{|x|^n}{(n+1)!} \leq |x| \sum_{n=0}^{\infty} \frac{|x|^n}{n!} = |x| e^{|x|}$$
+
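The elementary inequality above is easy to check numerically; a purely illustrative sketch:

```python
import math

# Quick numerical check of |e^x - 1| <= |x| e^{|x|} on a grid of
# positive and negative x (the small epsilon absorbs rounding at x = 0).
xs = [i / 10.0 for i in range(-50, 51)]
ok = all(abs(math.exp(x) - 1.0) <= abs(x) * math.exp(abs(x)) + 1e-12 for x in xs)
print(ok)  # True
```
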
+Now,
+
+$$\sup_{t \in A_1} |R_n(t)| \leq \sup_{t \in A_1} \frac{1}{6} \left(\frac{|t|}{\sqrt{n}}\right)^3 |d_{\theta}^3 S_n(\theta'_n(t))|
+\leq \frac{c^3 (\log \sqrt{n})^3}{6 n^{3/2}} \sup_{t \in A_1} |d_{\theta}^3 S_n(\theta'_n(t))|.
+\quad (13)$$
+
+For each $t$, we have:
+
+$$|d_{\theta}^{3} S_{n}\left(\theta'_{n}(t)\right)| \leq \sum_{i=1}^{n}|d_{\theta}^{3} S\left(\theta'_{n}(t), Y_{i}\right)|;$$
+
+moreover, $\theta'_n(t) \xrightarrow{P_0} \theta^*$ due to Assumption **A1** and the fact that $t/\sqrt{n} \to 0$ uniformly over $t \in A_1$. Additionally, by the Law of Large Numbers, $\frac{1}{n}\sum_{i=1}^n |d_\theta^3 S(\theta, Y_i)| \xrightarrow{P_0} E_{P_0}|d_\theta^3 S(\theta, Y)|$ (the expectation being finite due to Assumption **A2**), so that, putting things together, we have:
+
+$$\frac{1}{n}\sum_{i=1}^{n}|d_{\theta}^{3}S(\theta'_{n}(t), Y_i)| \xrightarrow{P_0} E_{P_0}|d_{\theta}^{3}S(\theta^*, Y)| \leq E_{P_0}[M(Y)] < \infty,$$
+---PAGE_BREAK---
+
+where the upper bound is given by Assumption **A2**. The upper bound in Eq. (13) is therefore:
+
+$$ \frac{c^3 (\log \sqrt{n})^3}{6 n^{3/2}} \sup_{t \in A_1} |d_\theta^3 S_n (\theta'_n(t))| = \frac{c^3 (\log \sqrt{n})^3}{6 n^{1/2}} O_p(1) = o_p(1), $$
+
+so that $\sup_{t \in A_1} |R_n(t)| = o_p(1)$. Hence, Eq. (12) is upper bounded by
+
+$$ \underbrace{\int_{A_1} e^{-\frac{t^2 h_n}{2}} dt}_{=O_p(1)} \cdot \underbrace{\sup_{t \in A_1} \pi \left(\hat{\theta}^{(n)} + t/\sqrt{n}\right)}_{=O_p(1)} \cdot \underbrace{\sup_{t \in A_1} e^{|R_n(t)|}}_{=O_p(1)} \cdot \underbrace{\sup_{t \in A_1} |R_n(t)|}_{=o_p(1)} = o_p(1), $$
+
+where the second factor being $O_p(1)$ is guaranteed by Assumption **A4**.
+
+**Region $A_2$:**
+
+Finally, consider:
+
+$$ \begin{aligned} & \int_{A_2} \left|\pi\left(\hat{\theta}^{(n)} + t/\sqrt{n}\right) e^{-\frac{t^2 h_n}{2} - R_n(t)} - \pi\left(\hat{\theta}^{(n)}\right) e^{-\frac{t^2 h_n}{2}}\right| dt \\ & \leq \int_{A_2} \pi\left(\hat{\theta}^{(n)} + t/\sqrt{n}\right) e^{-\frac{t^2 h_n}{2} - R_n(t)} dt + \pi\left(\hat{\theta}^{(n)}\right) \int_{A_2} e^{-\frac{t^2 h_n}{2}} dt; \end{aligned} $$
+
+the second term is:
+
+$$ \le 2 \cdot \pi(\hat{\theta}^{(n)}) e^{-\frac{h_n (c \log \sqrt{n})^2}{2}} [\delta' \sqrt{n} - c \log \sqrt{n}] \le K \pi(\hat{\theta}^{(n)}) \frac{\sqrt{n}}{n^{\frac{c^2 h_n}{8} \log n}}, $$
+
+where $K$ is a constant independent of $n$; the above goes to 0 in $P_0$ probability, as $h_n$ converges to $I(\theta^*)$, which is finite by Assumption **A2**.
+
+For the first integral, we will upper bound the integrand by a function whose integral over $A_2$ goes to 0. Notice that, as $t \in A_2$, $c \log \sqrt{n} < |t| < \delta' \sqrt{n} \implies |t|/\sqrt{n} < \delta'$. Thus, $|R_n(t)| = (\frac{|t|}{\sqrt{n}})^3 \frac{1}{6} |d_\theta^3 S_n(\theta'_n(t))| \le \delta' \frac{t^2}{6n} |d_\theta^3 S_n(\theta'_n(t))|$.
+
+Now, recall the definition of convergence in probability: a sequence of random variables $Z_n$ converges in probability to $Z$ if, $\forall \delta' > 0, \forall \epsilon_1 > 0, \exists n_1 : n > n_1 \implies P(|Z_n - Z| < \delta') > 1 - \epsilon_1$. Further, notice that, if $\theta'_n(t) \in (\theta^* - \delta, \theta^* + \delta)$, then $\frac{1}{n}|d_\theta^3 S_n(\theta'_n(t))| < \frac{1}{n}\sum_{i=1}^n M(Y_i)$ by Assumption **A2** (where $\delta$ is defined there).
+
+Using the definition of convergence in probability, we can write that, $\forall \delta' > 0, \forall \epsilon_1 > 0, \exists n_1 : n > n_1 \implies P_0(|\hat{\theta}^{(n)} - \theta^*| < \delta') > 1 - \epsilon_1$. Moreover, notice that $\theta'_n(t) \in (\hat{\theta}^{(n)} - \delta', \hat{\theta}^{(n)} + \delta')$ (as in fact $\theta'_n(t)$ is in an interval with width $|t|/\sqrt{n}$ whose upper or lower boundary is $\hat{\theta}^{(n)}$, and $|t|/\sqrt{n} < \delta'$), so that $P_0(|\theta'_n(t) - \theta^*| < 2\delta') > 1 - \epsilon_1$. Therefore, as long as we choose $\delta' < \frac{1}{2}\delta$, the following statement holds:
+
+$$ P_0\left\{\frac{1}{n}|d_{\theta}^{3}S_{n}(\theta'_{n}(t))| < \frac{1}{n}\sum_{i=1}^{n}M(Y_{i}) \quad \forall t \in A_{2}\right\} > 1 - \epsilon_{1}, \forall n > n_{1}. $$
+
+Now, we have that $\frac{1}{n}\sum_{i=1}^n M(Y_i) \xrightarrow{P_0} C < \infty$ as $n \to \infty$ by the Weak Law of Large Numbers, with $C = E_{P_0}[M(Y)]$; this is equivalent to saying:
+
+$$ \forall \delta'' > 0, \forall \epsilon_2 > 0, \exists n_2 : n > n_2 \implies P_0\left(\left|\frac{1}{n}\sum_{i=1}^{n} M(Y_i) - C\right| < \delta''\right) > 1 - \epsilon_2; $$
+
+putting this together with our previous statement, we have that:
+
+$$ P_0\left\{\frac{\delta' t^2}{6n}|d_{\theta}^3 S_n(\theta'_n(t))| < \frac{\delta' t^2}{6}(C + \delta'') \quad \forall t \in A_2\right\} > 1 - \epsilon_1 - \epsilon_2, \quad \forall n > \max\{n_1, n_2\}. $$
+
+Finally, recall that $h_n = \frac{d_\theta^2 S_n(\hat{\theta}^{(n)})}{n} \xrightarrow{P_0} I(\theta^*) < \infty$, by combining the Weak Law of Large Numbers and the fact that $\hat{\theta}^{(n)} \xrightarrow{P_0} \theta^*$. Therefore,
+
+$$ \forall \delta''' > 0, \epsilon_3 > 0, \exists n_3 : n > n_3 \implies P_0(I(\theta^*) - \delta''' < h_n < I(\theta^*) + \delta''') > 1 - \epsilon_3, $$
+---PAGE_BREAK---
+
+so that we can choose $\delta'$ to be small enough that:
+
+$$P_0 \left\{ |R_n(t)| < \frac{t^2}{2} \frac{\delta'(C + \delta'')}{3} < \frac{t^2}{4} (I(\theta^*) - \delta''') < \frac{t^2 h_n}{4} \quad \forall t \in A_2 \right\} > 1 - \epsilon_1 - \epsilon_2 - \epsilon_3, \quad \forall n > \max\{n_1, n_2, n_3\}.$$
+
+Hence, with probability greater than $1 - \epsilon_1 - \epsilon_2 - \epsilon_3$ and $\forall n > \max\{n_1, n_2, n_3\}$:
+
+$$\int_{A_2} \pi(\hat{\theta}^{(n)} + t/\sqrt{n})\, e^{-\frac{t^2 h_n}{2} - R_n(t)}\, dt \leq \sup_{t \in A_2} \pi(\hat{\theta}^{(n)} + t/\sqrt{n}) \int_{A_2} e^{-\frac{t^2 h_n}{4}}\, dt$$
+
+and finally:
+
+$$\sup_{t \in A_2} \pi(\hat{\theta}^{(n)} + t/\sqrt{n}) \int_{A_2} e^{-\frac{t^2 h_n}{4}} dt \to 0 \text{ as } n \to \infty.$$
+
+The three steps can be put together by first choosing $\delta'$ small enough for the bound over $A_2$ to hold, and then using this same $\delta'$ in the arguments for $A_1$ and $A_3$. $\square$
+
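Theorem 1 can be illustrated numerically in a toy setting. The sketch below is a hypothetical illustration, not part of the text: it takes the Gaussian model $P_\theta = N(\theta, 1)$ with the scoring rule $S(P_\theta, y) = (y - \theta)^2$ (proper for the mean, with $I(\theta^*) = \mathbb{E}\, d_\theta^2 S = 2$) and a flat prior, and compares the standardized SR posterior with its Gaussian limit in total variation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative assumptions: P_theta = N(theta, 1), S(P_theta, y) = (y - theta)^2,
# flat prior; then I(theta*) = 2 and the standardized posterior should be
# close to N(0, 1/(w I(theta*))).
n, w = 2000, 1.0
y = rng.normal(0.3, 1.0, size=n)              # data from P_0 = N(0.3, 1)

theta_hat = y.mean()                          # minimizer of S_n
s = np.linspace(-5.0, 5.0, 2001)              # s = sqrt(n) (theta - theta_hat)
ds = s[1] - s[0]
theta = theta_hat + s / np.sqrt(n)

# SR posterior density of s, computed on the grid.
log_post = -w * np.sum((y[None, :] - theta[:, None]) ** 2, axis=1)
log_post -= log_post.max()
post = np.exp(log_post)
post /= post.sum() * ds

# Gaussian limit of Theorem 1.
I_star = 2.0
limit = np.sqrt(w * I_star / (2.0 * np.pi)) * np.exp(-s**2 * w * I_star / 2.0)

tv = 0.5 * np.sum(np.abs(post - limit)) * ds  # halved L1 distance on the grid
print(tv < 0.01)  # True
```

In this conjugate-like example the standardized posterior is exactly Gaussian, so the distance is essentially discretization error; for non-quadratic scoring rules the match only emerges as $n$ grows.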
+## A.2 Proof of Theorem 2
+
+First, we prove a finite sample posterior consistency result which is valid for the generalized Bayes posterior with a generic loss, assuming a concentration property and a prior mass condition. Next, we will use this Lemma to prove Theorem 2 (stated in Section 2.3 of the main body of the paper), by first proving concentration results for the Kernel and Energy Scores. Our results are inspired by Theorem 1 in Matsubara et al. [2021].
+
+### A.2.1 Lemma for generalized Bayes posterior with generic loss
+
+In this Subsection, we consider the following generalized Bayes posterior:
+
+$$\pi_L(\theta|\mathbf{y}) \propto \pi(\theta) \exp\{-wnL(\theta, \mathbf{y})\}, \quad (14)$$
+
+where $\mathbf{y} = (y_1, y_2, \dots, y_n)$ denote the observations, $\pi$ is the prior and $L(\theta, \mathbf{y})$ is a generic loss function (which does not need to be additive in $y_i$). Here, the SR posterior for the scoring rule $S$ corresponds to choosing:
+
+$$L(\theta, \mathbf{y}) = \frac{1}{n} \sum_{i=1}^{n} S(P_{\theta}, y_i) = \frac{1}{n} S_n(\theta, \mathbf{y}).$$
+
+First, we state a result concerning this form of the posterior which we will use later (taken from Knoblauch et al. [2019]), and reproduce here the proof for convenience:
+
+**Lemma 1** (Theorem 1 in Knoblauch et al. [2019]). *Provided that $\int_{\Theta} \pi(\theta) \exp\{-wnL(\theta, \mathbf{y})\} d\theta < \infty$, $\pi_L(\cdot|\mathbf{y})$ in Eq. (14) can be written as the solution to a variational problem:*
+
+$$\pi_L(\cdot|\mathbf{y}) = \underset{\rho \in \mathcal{P}(\Theta)}{\arg\min} \left\{ wn E_{\theta \sim \rho} [L(\theta, \mathbf{y})] + \mathrm{KL}(\rho\|\pi) \right\}, \quad (15)$$
+
+where $\mathcal{P}(\Theta)$ denotes the set of distributions over $\Theta$, and KL denotes the KL divergence.
+
+*Proof.* We follow here (but adapt to our notation) the proof given in Knoblauch et al. [2019], which in turn is based on the one for the related result contained in Bissiri et al. [2016].
+
+Notice that the minimizer of the objective in Eq. (15) can be written as:
+
+$$\begin{align*}
+\pi^*(\cdot|\mathbf{y}) &= \arg\min_{\rho \in \mathcal{P}(\Theta)} \left\{ \int_{\Theta} \left[ \log(\exp\{wnL(\theta, \mathbf{y})\}) + \log\left(\frac{\rho(\theta)}{\pi(\theta)}\right) \right] \rho(\theta)d\theta \right\} \\
+&= \arg\min_{\rho \in \mathcal{P}(\Theta)} \left\{ \int_{\Theta} \left[ \log\left(\frac{\rho(\theta)}{\pi(\theta)\exp\{-wnL(\theta, \mathbf{y})\}}\right) \right] \rho(\theta)d\theta \right\}.
+\end{align*}$$
+---PAGE_BREAK---
+
+As we are only interested in the minimizer $\pi^*(\cdot|\mathbf{y})$ (and not in the value of the objective), it holds that, for any constant $Z > 0$:
+
+$$
+\begin{aligned}
+\pi^*(\cdot|\mathbf{y}) &= \arg\min_{\rho\in\mathcal{P}(\Theta)} \left\{ \int_{\Theta} \left[ \log \left( \frac{\rho(\theta)}{\pi(\theta)\exp\{-wnL(\theta,\mathbf{y})\} Z^{-1}} \right) \right] \rho(\theta) d\theta - \log Z \right\} \\
+&= \arg\min_{\rho\in\mathcal{P}(\Theta)} \{\text{KL}(\rho\|\pi_L(\cdot|\mathbf{y}))\}.
+\end{aligned}
+$$
+
+Now, we can set $Z = \int_{\Theta} \pi(\theta) \exp\{-wnL(\theta, \mathbf{y})\} d\theta$ (which is finite by assumption) and notice that we get:
+
+$$ \pi^*(\cdot|\mathbf{y}) = \arg\min_{\rho\in\mathcal{P}(\Theta)} \{\text{KL}(\rho||\pi_L(\cdot|\mathbf{y}))\}, $$
+
+which yields $\pi^*(\cdot|\mathbf{y}) = \pi_L(\cdot|\mathbf{y})$ as the KL is minimized uniquely if the two arguments are the same. ☐
+
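Lemma 1 can be sanity-checked numerically on a finite grid of parameter values, where all quantities in Eq. (15) become finite sums. The sketch below uses illustrative assumptions (uniform prior on the grid, arbitrary non-negative loss values) and verifies that the Gibbs-type density of Eq. (14) attains a lower variational objective than random candidate distributions:

```python
import numpy as np

rng = np.random.default_rng(1)

# Discrete sketch of Eq. (15): on a grid of K theta values, the normalized
# Gibbs measure pi * exp(-w n L) should minimize  w n E_rho[L] + KL(rho || pi).
K, w, n = 50, 0.5, 30
pi = np.full(K, 1.0 / K)            # uniform prior on the grid
L = rng.normal(size=K) ** 2         # arbitrary loss values L(theta_k, y)

gibbs = pi * np.exp(-w * n * L)
gibbs /= gibbs.sum()

def objective(rho):
    mask = rho > 0
    kl = np.sum(rho[mask] * np.log(rho[mask] / pi[mask]))
    return w * n * np.sum(rho * L) + kl

# Random distributions on the simplex, via gaps between sorted uniforms.
def random_simplex():
    return np.diff(np.sort(np.concatenate([[0.0], rng.random(K - 1), [1.0]])))

best = objective(gibbs)
trials = [objective(random_simplex()) for _ in range(200)]
print(best <= min(trials))  # True
```
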
+Next, we prove a finite sample posterior consistency result (finite sample in that it holds for a fixed number of samples $n$). Our statement and proof follow and slightly generalize Lemma 8 in Matsubara et al. [2021] (as we consider a generic loss function $L(\theta, \mathbf{y})$, while they consider the Kernel Stein Discrepancy only).
+
+In order to do this, let $J$ be a function of the parameter $\theta$, with $J(\theta)$ representing some loss (of which we will assume $L(\theta, \mathbf{y})$ is a finite sample estimate; the meaning of $J$ will be made clearer in the following and when applying this result to the SR posterior). Also, let us denote $\theta^* \in \arg\min_{\theta \in \Theta} J(\theta)$.
+
+We will assume the following *prior mass condition*, which is more generic with respect to the one considered in the main body of this manuscript (Assumption **A5**):
+
+**A5bis** The prior has a density $\pi(\theta)$ (with respect to Lebesgue measure) which satisfies
+
+$$ \int_{B_n(\alpha_1)} \pi(\theta) d\theta \geq e^{-\alpha_2 \sqrt{n}} $$
+
+for some constants $\alpha_1, \alpha_2 > 0$, where we define the sets
+
+$$ B_n(\alpha_1) := \{ \theta \in \Theta : |J(\theta) - J(\theta^*)| \leq \alpha_1 / \sqrt{n} \}. $$
+
+Assumption **A5bis** constrains the minimum amount of prior mass which needs to be given to $J$-balls with decreasing size, and is in general quite a weak condition (similar assumptions are taken in Chérief-Abdellatif and Alquier [2020], Matsubara et al. [2021]).
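
To make Assumption **A5bis** concrete, consider the following purely illustrative example (the quadratic loss and uniform prior are assumptions of ours, not taken from the text): the prior mass of $B_n(\alpha_1)$ then decays only polynomially in $n$, which dominates $e^{-\alpha_2 \sqrt{n}}$ for $\alpha_2$ large enough.

```latex
% Illustrative example: J(theta) = (theta - theta^*)^2 and
% pi = Uniform([theta^* - 1, theta^* + 1]), so pi(theta) = 1/2 on its support.
B_n(\alpha_1) = \left\{\theta : (\theta - \theta^*)^2 \leq \alpha_1/\sqrt{n}\right\}
             = \left[\theta^* - \alpha_1^{1/2} n^{-1/4},\; \theta^* + \alpha_1^{1/2} n^{-1/4}\right],
\qquad
\int_{B_n(\alpha_1)} \pi(\theta)\, d\theta \;=\; \min\left\{1,\; \alpha_1^{1/2} n^{-1/4}\right\}
\;\geq\; e^{-\alpha_2 \sqrt{n}}
\quad \text{for all } n, \text{ once } \alpha_2 \text{ is chosen large enough.}
```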
+
+Next, we state our result, which as mentioned above generalizes Lemma 8 in Matsubara et al. [2021]:
+
+**Lemma 2.** Consider the generalized posterior $\pi_L(\theta|\mathbf{y})$ defined in Eq. (14), and assume that:
+
+• (concentration) for all $\delta \in (0, 1]$:
+
+$$ P_0\{|L(\theta, \mathbf{Y}) - J(\theta)| \leq \epsilon_n(\delta)\} \geq 1 - \delta, \quad (16) $$
+
+where $\epsilon_n(\delta) \ge 0$ is an approximation error term;
+
+• $J(\theta^*) = \min_{\theta \in \Theta} J(\theta)$ is finite;
+
+• Assumption **A5bis** holds.
+
+Then, for all $\delta \in (0, 1]$, with probability at least $1 - \delta$:
+
+$$ \int_{\Theta} J(\theta)\pi_L(\theta|\mathbf{Y})d\theta \leq J(\theta^*) + \frac{\alpha_1 + \alpha_2/w}{\sqrt{n}} + 2\epsilon_n(\delta), $$
+
+where the probability is taken with respect to realisations of the dataset $\mathbf{Y} = \{Y_i\}_{i=1}^n, Y_i \stackrel{iid}{\sim} P_0$ for $i = 1, \dots, n$; this also implies the following statement:
+
+$$ P_0\left( \left| \int_{\Theta} J(\theta) \pi_L(\theta | \mathbf{Y}) d\theta - J(\theta^*) \right| \geq \frac{\alpha_1 + \alpha_2/w}{\sqrt{n}} + 2\epsilon_n(\delta) \right) \leq \delta. $$
+---PAGE_BREAK---
+
+This result ensures that, with high probability, the expectation over the posterior of $J(\theta)$ is close to the minimum $J(\theta^*)$, provided that the distribution of $L(\theta, \mathbf{Y})$ (where $\mathbf{Y} \sim P_0^n$ is a random variable) satisfies a concentration bound, which constrains how far $L(\theta, \mathbf{Y})$ is distributed from the loss function $J(\theta)$. Notice that this result does not require the minimizer of $J$ to be unique.
+
+Typically the approximation error term $\epsilon_n(\delta)$ is such that $\epsilon_n(\delta) \xrightarrow{\delta \to 0} +\infty$ and $\epsilon_n(\delta) \xrightarrow{n \to \infty} 0$. If the second limit is verified, the posterior concentrates, for large $n$, on the values of $\theta$ which minimize $J$. In practical cases (as for instance for the SR posterior), it is common to have $J(\theta) = D(\theta, P_0)$, i.e., corresponding to a loss function relating $\theta$ with the data generating process $P_0$.
+
+We now prove the result.
+
+*Proof of Lemma 2.* Due to the absolute value in Eq. (16), the following two inequalities hold simultaneously with probability (w.p.) at least $1 - \delta$:
+
+$$J(\theta) \le L(\theta, \mathbf{Y}) + \epsilon_n(\delta), \quad (17)$$
+
+$$L(\theta, \mathbf{Y}) \le J(\theta) + \epsilon_n(\delta). \quad (18)$$
+
+Taking expectation with respect to the generalized posterior on both sides of Eq. (17) yields, w.p. $\ge 1 - \delta$:
+
+$$\int_{\Theta} J(\theta)\pi_L(\theta|\mathbf{Y})d\theta \le \int_{\Theta} L(\theta, \mathbf{Y})\pi_L(\theta|\mathbf{Y})d\theta + \epsilon_n(\delta).$$
+
+We now want to apply the identity in Eq. (15); therefore, we add $(wn)^{-1}\text{KL}(\pi_L(\cdot|\mathbf{Y})\|\pi) \ge 0$ in the right hand side such that, w.p. $\ge 1 - \delta$:
+
+$$\int_{\Theta} J(\theta)\pi_L(\theta|\mathbf{Y})d\theta \le \frac{1}{wn} \left\{ \int_{\Theta} wnL(\theta, \mathbf{Y})\pi_L(\theta|\mathbf{Y})d\theta + \text{KL}(\pi_L(\cdot|\mathbf{Y})\|\pi) \right\} + \epsilon_n(\delta).$$
+
+Now by Eq. (15):
+
+$$\begin{align}
+\int_{\Theta} J(\theta)\pi_L(\theta|\mathbf{Y})d\theta &\le \frac{1}{wn} \inf_{\rho \in \mathcal{P}(\Theta)} \left\{ \int_{\Theta} wnL(\theta, \mathbf{Y})\rho(\theta)d\theta + \text{KL}(\rho\|\pi) \right\} + \epsilon_n(\delta) \\
+&= \inf_{\rho \in \mathcal{P}(\Theta)} \left\{ \int_{\Theta} L(\theta, \mathbf{Y})\rho(\theta)d\theta + \frac{1}{wn}\text{KL}(\rho\|\pi) \right\} + \epsilon_n(\delta),
+\end{align} \quad (19)$$
+
+where $\mathcal{P}(\Theta)$ denotes the space of probability distributions over $\Theta$. Putting now Eq. (18) in Eq. (19) we have, w.p. $\ge 1 - \delta$:
+
+$$\int_{\Theta} J(\theta)\pi_L(\theta|\mathbf{Y})d\theta \le \inf_{\rho \in \mathcal{P}(\Theta)} \left\{ \int_{\Theta} J(\theta)\rho(\theta)d\theta + \frac{1}{wn}\text{KL}(\rho\|\pi) \right\} + 2\epsilon_n(\delta),$$
+
+and using the trivial bound $J(\theta) \le J(\theta^*) + |J(\theta) - J(\theta^*)|$ we get:
+
+$$\int_{\Theta} J(\theta)\pi_L(\theta|\mathbf{Y})d\theta \le J(\theta^*) + \inf_{\rho \in \mathcal{P}(\Theta)} \left\{ \int_{\Theta} |J(\theta) - J(\theta^*)| \rho(\theta)d\theta + \frac{1}{wn}\text{KL}(\rho\|\pi) \right\} + 2\epsilon_n(\delta).$$
+
+Finally, we upper bound the infimum term by exploiting the prior mass condition in Assumption **A5bis**. Specifically, letting $\Pi(B_n) = \int_{B_n} \pi(\theta)d\theta$, we take $\rho(\theta) = \pi(\theta)/\Pi(B_n)$ for $\theta \in B_n$ and $\rho(\theta) = 0$ otherwise. By Assumption **A5bis**, we have therefore $\int_{B_n} |J(\theta) - J(\theta^*)| \rho(\theta)d\theta \le \alpha_1/\sqrt{n}$ and that $\text{KL}(\rho\|\pi) = \int_{\Theta} \log(\rho(\theta)/\pi(\theta))\rho(\theta)d\theta = \int_{B_n} -\log(\Pi(B_n))\pi(\theta)d\theta/\Pi(B_n) = -\log\Pi(B_n) \le \alpha_2\sqrt{n}$. Thus, we have:
+
+$$\int_{\Theta} J(\theta)\pi_L(\theta|\mathbf{Y})d\theta \le J(\theta^*) + \frac{\alpha_1 + \alpha_2/w}{\sqrt{n}} + 2\epsilon_n(\delta),$$
+
+as claimed in the first statement.
+
+In order to obtain the second statement, notice that:
+
+$$J(\theta) - J(\theta^*) \ge 0, \quad \forall \theta \in \Theta \implies \int_{\Theta} J(\theta)\pi_L(\theta|\mathbf{Y})d\theta - J(\theta^*) \ge 0;$$
+
+thus:
+
+$$P_0\left( \left| \int_{\Theta} J(\theta)\pi_L(\theta|\mathbf{Y})d\theta - J(\theta^*) \right| \le \frac{\alpha_1 + \alpha_2/w}{\sqrt{n}} + 2\epsilon_n(\delta) \right) \ge 1 - \delta;$$
+
+taking the complement yields the result. □
+---PAGE_BREAK---
+
+### A.2.2 Case of Kernel and Energy Score posteriors
+
+We now state and prove concentration results of the form in Eq. (16) for the Kernel and Energy Scores. In this regard, notice that the kernel SR posterior can be written as:
+
+$$
+\begin{aligned}
+\pi_S(\theta|\mathbf{y}) & \propto \pi(\theta) \exp \left\{ -w \sum_{i=1}^{n} [\mathbb{E}_{X,X' \sim P_\theta} k(X, X') - 2\mathbb{E}_{X \sim P_\theta} k(X, y_i)] \right\} \\
+& \propto \pi(\theta) \exp \left\{ -w \sum_{i=1}^{n} \left[ \mathbb{E}_{X,X' \sim P_\theta} k(X, X') - 2\mathbb{E}_{X \sim P_\theta} k(X, y_i) + \frac{1}{n-1} \sum_{\substack{j=1 \\ j \neq i}}^{n} k(y_i, y_j) \right] \right\},
+\end{aligned}
+$$
+
+as in fact the terms $k(y_i, y_j)$ are independent of $\theta$. From the second line in the above expression and the form of the generalized Bayes posterior with generic loss in Eq. (14), we can identify:
+
+$$ L(\theta, \mathbf{y}) = \mathbb{E}_{X,X' \sim P_{\theta}} k(X, X') - \frac{2}{n} \sum_{i=1}^{n} \mathbb{E}_{X \sim P_{\theta}} k(X, y_i) + \frac{1}{n(n-1)} \sum_{\substack{i,j=1 \\ i \neq j}}^{n} k(y_i, y_j). \quad (20) $$
+
+Similarly, the Energy Score posterior can be obtained by identifying in Eq. (14):
+
+$$ L(\theta, \mathbf{y}) = \frac{2}{n} \sum_{i=1}^{n} \mathbb{E}_{X \sim P_{\theta}} ||X - y_i||_2^{\beta} - \frac{1}{n(n-1)} \sum_{\substack{i,j=1 \\ i \neq j}}^{n} ||y_i - y_j||_2^{\beta} - \mathbb{E}_{X,X' \sim P_{\theta}} ||X - X'||_2^{\beta}; \quad (21) $$
+
+this can be obtained by simply setting $k(x, y) = -\|x - y\|_2^\beta$ in Eq. (20), as the Kernel SR with that choice of kernel recovers the Energy SR.
+
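In practice, the losses in Eqs. (20) and (21) can be estimated by replacing the expectations over $P_\theta$ with Monte Carlo averages over simulated draws. A minimal sketch (the Gaussian model, sample sizes, and function names are all illustrative assumptions, not from the text):

```python
import numpy as np

rng = np.random.default_rng(2)

def kernel_loss(x, y, k):
    """Eq. (20), with E over P_theta estimated from draws x ~ P_theta."""
    n, m = len(y), len(x)
    Kxx = k(x[:, None], x[None, :])
    Kxy = k(x[:, None], y[None, :])
    Kyy = k(y[:, None], y[None, :])
    term_xx = (Kxx.sum() - np.trace(Kxx)) / (m * (m - 1))  # E k(X, X')
    term_xy = 2.0 * Kxy.mean()                             # (2/n) sum_i E k(X, y_i)
    term_yy = (Kyy.sum() - np.trace(Kyy)) / (n * (n - 1))  # U-statistic on the data
    return term_xx - term_xy + term_yy

def energy_loss(x, y, beta=1.0):
    # Eq. (21): the Kernel Score loss with k(a, b) = -|a - b|^beta.
    return kernel_loss(x, y, lambda a, b: -np.abs(a - b) ** beta)

y = rng.normal(0.0, 1.0, size=500)        # observations from P_0 = N(0, 1)
x_good = rng.normal(0.0, 1.0, size=2000)  # draws from P_theta close to P_0
x_bad = rng.normal(3.0, 1.0, size=2000)   # draws from a poorly fitting P_theta
print(energy_loss(x_good, y) < energy_loss(x_bad, y))  # True
```

Since the loss differs from the divergence only by a $\theta$-independent, mean-zero fluctuation, comparing losses across $\theta$ values is equivalent (up to noise) to comparing divergences.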
+For both SRs, $L(\theta, \mathbf{Y})$ is an unbiased estimator (with respect to $Y_i \sim P_0$) of the associated divergence; in fact, considering $X, X' \sim P_\theta$ and $Y, Y' \sim P_0$, the associated divergence for the Kernel SR is the squared MMD (see Section 2.2):
+
+$$ D_k(P_\theta, P_0) = \mathbb{E}[k(X, X')] + \mathbb{E}[k(Y, Y')] - 2\mathbb{E}[k(X, Y)], \quad (22) $$
+
+while, for the Energy SR, the associated divergence is the squared Energy Distance:
+
+$$ D_E(P_\theta, P_0) = 2\mathbb{E}||X - Y||_2^\beta - \mathbb{E}||X - X'||_2^\beta - \mathbb{E}||Y - Y'||_2^\beta. \quad (23) $$
+
+In order to prove our concentration results, we will exploit the following Lemma:
+
+**Lemma 3** (McDiarmid's inequality, McDiarmid 1989). Let $g$ be a function of $n$ variables $y = (y_1, y_2, ..., y_n)$, and let
+
+$$ \delta_i g(\mathbf{y}) := \sup_{z \in \mathcal{X}} g(y_1, \dots, y_{i-1}, z, y_{i+1}, \dots, y_n) - \inf_{z \in \mathcal{X}} g(y_1, \dots, y_{i-1}, z, y_{i+1}, \dots, y_n), $$
+
+and $\| \delta_i g \|_{\infty} := \sup_{\mathbf{y} \in \mathcal{X}^n} |\delta_i g(\mathbf{y})|$. If $Y_1, \dots, Y_n$ are independent random variables, then:
+
+$$ P(g(Y_1, \dots, Y_n) - \mathbb{E}g(Y_1, \dots, Y_n) \geq \varepsilon) \leq e^{-2\varepsilon^2 / \sum_{i=1}^n \| \delta_i g \|_{\infty}^2}. $$
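
McDiarmid's inequality is easy to illustrate with the simplest bounded-difference function, the sample mean of $[0,1]$-valued variables, for which $\|\delta_i g\|_\infty = 1/n$ and the bound reduces to the Hoeffding-type $e^{-2n\varepsilon^2}$. A purely illustrative sketch:

```python
import numpy as np

rng = np.random.default_rng(3)

# g = sample mean of n iid Uniform(0, 1) variables; E g = 1/2 and each
# ||delta_i g||_inf = 1/n, so Lemma 3 gives P(g - E g >= eps) <= exp(-2 n eps^2).
n, eps, trials = 200, 0.1, 20000
means = rng.random((trials, n)).mean(axis=1)
freq = np.mean(means - 0.5 >= eps)       # empirical tail frequency
bound = np.exp(-2 * n * eps**2)          # McDiarmid / Hoeffding bound
print(freq <= bound)  # True
```
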
+
+We are now ready to prove two concentration results of the form of Eq. (16). The first holds for the Kernel SR assuming a bounded kernel, while the latter holds for the Energy SR assuming a bounded $\mathcal{X}$. Let us start with a simple equality stated in the following Lemma:
+
+**Lemma 4.** For $L(\theta, \mathbf{Y})$ defined in Eq. (20) and $D_k(P_\theta, P_0)$ defined in Eq. (22), we have:
+
+$$ L(\theta, \mathbf{Y}) - D_k(P_\theta, P_0) = g(Y_1, Y_2, \dots, Y_n) - \mathbb{E}[g(Y_1, Y_2, \dots, Y_n)] $$
+
+for
+
+$$ g(Y_1, Y_2, \dots, Y_n) = \frac{1}{n(n-1)} \sum_{\substack{i,j=1 \\ i \neq j}}^{n} k(Y_i, Y_j) - \frac{2}{n} \sum_{i=1}^{n} \mathbb{E}_{X \sim P_\theta} k(X, Y_i). \quad (24) $$
+
+A similar expression holds for $L(\theta, \mathbf{Y})$ defined in Eq. (21) and $D_E(P_\theta, P_0)$ defined in Eq. (23), by setting $k(x, y) = -\|x - y\|_2^\beta$.
+---PAGE_BREAK---
+
+*Proof.* First, notice that, for $L(\theta, \mathbf{Y})$ defined in Eq. (20) and $D_k(P_\theta, P_0)$ defined in Eq. (22):
+
+$$
+\begin{align*}
+L(\theta, \mathbf{Y}) - D_k(P_\theta, P_0) &= \mathbb{E}_{X,X' \sim P_\theta} k(X, X') - \frac{2}{n} \sum_{i=1}^{n} \mathbb{E}_{X \sim P_\theta} k(X, Y_i) + \frac{1}{n(n-1)} \sum_{\substack{i,j=1 \\ i \neq j}}^{n} k(Y_i, Y_j) + \\
+&\quad - \mathbb{E}_{X,X' \sim P_\theta} k(X, X') - \mathbb{E}_{Y,Y' \sim P_0} [k(Y, Y')] + 2 \mathbb{E}_{X \sim P_\theta, Y \sim P_0} [k(X, Y)] \\
+&= \frac{1}{n(n-1)} \sum_{\substack{i,j=1 \\ i \neq j}}^{n} k(Y_i, Y_j) - \frac{2}{n} \sum_{i=1}^{n} \mathbb{E}_{X \sim P_\theta} k(X, Y_i) + \\
+&\quad - \left(\mathbb{E}_{Y,Y' \sim P_0} [k(Y, Y')] - 2 \mathbb{E}_{X \sim P_\theta, Y \sim P_0} [k(X, Y)]\right) \\
+&= g(Y_1, Y_2, \dots, Y_n) - \mathbb{E}[g(Y_1, Y_2, \dots, Y_n)],
+\end{align*}
+$$
+
+where the expectation in the last line is with respect to $Y_i \sim P_0$, $i = 1, \dots, n$, and where we set $g$ as in Eq. (24). $\square$
+
+Now, we give the concentration result for the kernel SR:
+
+**Lemma 5.** Consider $L(\theta, \mathbf{y})$ defined in Eq. (20) (corresponding to the loss function defining the Kernel Score posterior) and $D_k(P_\theta, P_0)$ defined in Eq. (22); if the kernel is such that $|k(x, y)| \le \kappa$, we have:
+
+$$P_0 \left( |L(\theta, \mathbf{Y}) - D_k(P_\theta, P_0)| \le \sqrt{-\frac{32\kappa^2}{n} \log \frac{\delta}{2}} \right) \ge 1 - \delta.$$
+
+*Proof.* First, we write:
+
+$$L(\theta, \mathbf{Y}) - D_k(P_\theta, P_0) = g(Y_1, Y_2, \dots, Y_n) - \mathbb{E}[g(Y_1, Y_2, \dots, Y_n)],$$
+
+where *g* is defined in Eq. (24) in Lemma 4. Next, notice that:
+
+$$
+\begin{align*}
+P_0(|g(\mathbf{Y}) - \mathbb{E}[g(\mathbf{Y})]| \ge \epsilon) &\le P_0(g(\mathbf{Y}) - \mathbb{E}[g(\mathbf{Y})] \ge \epsilon) + P_0(g(\mathbf{Y}) - \mathbb{E}[g(\mathbf{Y})] \le -\epsilon) \\
+&= P_0(g(\mathbf{Y}) - \mathbb{E}[g(\mathbf{Y})] \ge \epsilon) + P_0(-g(\mathbf{Y}) - \mathbb{E}[-g(\mathbf{Y})] \ge \epsilon)
+\end{align*}
+$$
+
+by the union bound. We now use McDiarmid's inequality (Lemma 3) to prove the result. Consider first $P_0(g(\mathbf{Y}) - \mathbb{E}[g(\mathbf{Y})] \ge \epsilon)$; we bound:
+
+$$
+\begin{align*}
+|\delta_i g(\mathbf{Y})| &= \left| \sup_z \left\{ \frac{2}{n(n-1)} \sum_{\substack{j=1 \\ j \neq i}}^n k(z, Y_j) - \frac{2}{n} \mathbb{E}_{X \sim P_\theta} k(X, z) \right\} - \inf_z \left\{ \frac{2}{n(n-1)} \sum_{\substack{j=1 \\ j \neq i}}^n k(z, Y_j) - \frac{2}{n} \mathbb{E}_{X \sim P_\theta} k(X, z) \right\} \right| \\
+&= \left| \sup_z \left\{ \frac{2}{n(n-1)} \sum_{\substack{j=1 \\ j \neq i}}^n k(z, Y_j) - \frac{2}{n} \mathbb{E}_{X \sim P_\theta} k(X, z) \right\} + \sup_z \left\{ \frac{2}{n} \mathbb{E}_{X \sim P_\theta} k(X, z) - \frac{2}{n(n-1)} \sum_{\substack{j=1 \\ j \neq i}}^n k(z, Y_j) \right\} \right| \\
+&\leq \sup_z \left| \frac{2}{n(n-1)} \sum_{\substack{j=1 \\ j \neq i}}^n k(z, Y_j) - \frac{2}{n} \mathbb{E}_{X \sim P_\theta} k(X, z) \right| + \sup_z \left| \frac{2}{n} \mathbb{E}_{X \sim P_\theta} k(X, z) - \frac{2}{n(n-1)} \sum_{\substack{j=1 \\ j \neq i}}^n k(z, Y_j) \right| \\
+&= 2 \cdot \frac{2}{n} \sup_z \left| \frac{1}{n-1} \sum_{\substack{j=1 \\ j \neq i}}^n k(z, Y_j) - \mathbb{E}_{X \sim P_\theta} k(X, z) \right| \\
+&\leq \frac{4}{n} \sup_z \left\{ \frac{1}{n-1} \sum_{\substack{j=1 \\ j \neq i}}^n |k(z, Y_j)| + \mathbb{E}_{X \sim P_\theta} |k(X, z)| \right\} \\
+&\leq \frac{4}{n} (\kappa + \kappa) = \frac{8\kappa}{n}.
+\end{align*}
+$$
+---PAGE_BREAK---
+
+As the bound does not depend on $\mathbf{Y}$, we have that $||\delta_i g||_\infty \le \frac{8\kappa}{n}$, from which McDiarmid's inequality (Lemma 3) gives:
+
+$$P_0(g(\mathbf{Y}) - \mathbb{E}[g(\mathbf{Y})] \ge \epsilon) \le \exp\left(-\frac{2\epsilon^2}{n \cdot \frac{64\kappa^2}{n^2}}\right) = e^{-\frac{n\epsilon^2}{32\kappa^2}}.$$
+
+For the bound on the other side, notice that $||\delta_i(-g)||_\infty = ||\delta_i g||_\infty$; therefore, we also have
+
+$$P_0(-g(\mathbf{Y}) - \mathbb{E}[-g(\mathbf{Y})] \ge \epsilon) \le e^{-\frac{n\epsilon^2}{32\kappa^2}},$$
+
+from which:
+
+$$P_0(|g(\mathbf{Y}) - \mathbb{E}[g(\mathbf{Y})]| \ge \epsilon) \le 2e^{-\frac{n\epsilon^2}{32\kappa^2}}.$$
+
+Defining the right hand side of the bound as $\delta$, we get:
+
+$$P_0 \left( |g(\mathbf{Y}) - \mathbb{E}[g(\mathbf{Y})]| \ge \sqrt{-\frac{32\kappa^2}{n} \log \frac{\delta}{2}} \right) \le \delta,$$
+
+from which the result is obtained taking the complement.
+
+We now give the analogous result for the Energy Score:
+
+**Lemma 6.** Consider $L(\theta, \mathbf{y})$ defined in Eq. (21) (corresponding to the loss function defining the Energy Score posterior) and $D_E(P_\theta, P_0)$ defined in Eq. (23); assume that the space $\mathcal{X}$ is bounded such that $\sup_{x,y \in \mathcal{X}} ||x-y||_2 \le B < \infty$; then we have:
+
+$$P_0 \left( |L(\theta, \mathbf{Y}) - D_E(P_\theta, P_0)| \le \sqrt{-\frac{32B^{2\beta}}{n} \log \frac{\delta}{2}} \right) \ge 1 - \delta.$$
+
+*Proof.* We rely on Lemma 5; in fact, recall that the Kernel Score recovers the Energy Score for $k(x, y) = -\|x - y\|_2^\beta$. With this choice of $k$, Eqs. (20) and (22) (considered in Lemma 5) respectively recover Eqs. (21) and (23).
+
+Additionally, assuming $\mathcal{X}$ to be bounded ensures that $|k(x, y)| = \|x - y\|_2^\beta \le B^\beta$; therefore, we can apply Lemma 5 with $\kappa = B^\beta$, from which the result follows.
+
+We are finally ready to prove our posterior consistency results:
+
+*Proof of Theorem 2.* The proof consists in verifying the assumptions of Lemma 2, for both the Energy and Kernel Score posteriors. First, notice that **A5** is a specific case of **A5bis** by identifying $J(\theta) = D_k(P_\theta, P_0)$ or $J(\theta) = D_E(P_\theta, P_0)$. We therefore need to verify the first and second assumptions only.
+
+As mentioned before, the Kernel Score posterior corresponds to the generalized Bayes posterior in Eq. (14) by choosing $L(\theta, \mathbf{Y})$ defined in Eq. (20); with this choice of $L(\theta, \mathbf{Y})$, Lemma 5 holds, which corresponds to the first assumption of Lemma 2 with $J(\theta) = D_k(P_\theta, P_0)$ ($D_k$ being the divergence related to the kernel SR, defined in Eq. (22)) and:
+
+$$\epsilon_n(\delta) = \sqrt{-\frac{32\kappa^2}{n} \log \frac{\delta}{2}}.$$
+
+Finally, we have that $D_k(P_\theta, P_0) \ge 0$, which ensures the second assumption of Lemma 2. Thus, we have, from Lemma 2:
+
+$$P_0 \left( \left| \int_{\Theta} D_k(P_{\theta}, P_0) \pi_{S_k}(\theta | \mathbf{Y}) d\theta - D_k(P_{\theta}^*, P_0) \right| \geq \frac{1}{\sqrt{n}} \left( \alpha_1 + \frac{\alpha_2}{w} + 8\kappa \sqrt{-2 \log \frac{\delta}{2}} \right) \right) \leq \delta;$$
+
+by defining the deviation term as $\epsilon$ and inverting the relation, we obtain the result for the Kernel Score posterior.
+
+The same steps can be taken for the Energy Score posterior; specifically, we notice that it corresponds to the generalized Bayes posterior in Eq. (14) by choosing $L(\theta, \mathbf{Y})$ defined in Eq. (21);
+
+with this choice of $L(\theta, \mathbf{Y})$, Lemma 6 holds, which corresponds to the first assumption of Lemma 2 with $J(\theta) = D_E(P_\theta, P_0)$ ($D_E$ being the divergence related to the Energy SR, defined in Eq. (23)) and:
+
+$$\epsilon_n(\delta) = \sqrt{-\frac{32B^{2\beta}}{n} \log \frac{\delta}{2}}.$$
+
+Finally, we have that $D_E(P_\theta, P_0) \ge 0$, which ensures the second assumption of Lemma 2. Thus, we have, from Lemma 2:
+
+$$P_0 \left( \left| \int_{\Theta} D_E(P_\theta, P_0) \pi_{S_E}(\theta | \mathbf{Y}) d\theta - D_E(P_\theta^*, P_0) \right| \ge \frac{1}{\sqrt{n}} \left( \alpha_1 + \frac{\alpha_2}{w} + 8B^\beta \sqrt{-2 \log \frac{\delta}{2}} \right) \right) \le \delta;$$
+
+by defining the deviation term as $\epsilon$ and inverting the relation, we obtain the result for the Energy Score Posterior.
+
+□
+
+We remark here that Theorem 1 in Chérief-Abdellatif and Alquier [2020] proved a similar consistency result for the Kernel Score posterior holding in expectation (rather than in high probability, as for our bounds), albeit under a slightly different prior mass condition.
+
+### A.3 Proof of Theorem 3
+
+Our proof of Theorem 3 is inspired by the proof given in Matsubara et al. [2021] for their analogue result. Specifically, we will rely on Lemma 7, which establishes sufficient conditions for global bias-robustness to hold for generalized Bayes posterior with generic loss function; we recall that the definition of global bias-robustness is given in Sec. 2.4.
+
+Across this Section, we define as $\hat{P}_n = \frac{1}{n} \sum_{i=1}^n \delta_{y_i}$ the empirical distribution given by the observations $\mathbf{y} = (y_1, \dots, y_n)$ (considered to be non-random here) and consider the generalized Bayes posterior:
+
+$$\pi_L(\theta|\hat{P}_n) \propto \pi(\theta) \exp\{-wnL(\theta, \hat{P}_n)\}, \quad (25)$$
+
+from which the SR posterior in Eq. (2) with Scoring Rule S is recovered with:
+
+$$L(\theta, \hat{P}_n) = \frac{1}{n} \sum_{i=1}^{n} S(P_{\theta}, y_i) = E_{Y \sim \hat{P}_n} S(P_{\theta}, Y).$$
+
+We remark that the notation here is slightly different from Appendix A.2, in which we considered $L$ to be a function of $\theta$ and $\mathbf{y}$ (compare Eq. (25) with Eq. (14)). The reason for this will become clear in the following.
+
+We start by stating the result we will rely on, to which we provide proof for ease of reference.
+
+**Lemma 7** (Lemma 5 in Matsubara et al. [2021]). Let $\pi_L(\theta|\hat{P}_n)$ be the generalized posterior defined in Eq. (25) for fixed $n \in \mathbb{N}$, with a generic loss $L(\theta, \hat{P}_n)$ and prior $\pi(\theta)$. Suppose $L(\theta, \hat{P}_n)$ is lower bounded and $\pi(\theta)$ upper bounded over $\theta \in \Theta$, for any $\hat{P}_n$. Denote $\mathrm{DL}(z, \theta, \hat{P}_n) = (d/d\epsilon)L(\theta, \hat{P}_{n,\epsilon,z})|_{\epsilon=0}$. Then, $\pi_L$ is globally bias-robust if, for any $\hat{P}_n$,
+
+1. $\sup_{\theta \in \Theta} \sup_{z \in X} |\text{DL}(z, \theta, \hat{P}_n)| \pi(\theta) < \infty$, and
+
+2. $\int_{\Theta} \sup_{z \in X} |\text{DL}(z, \theta, \hat{P}_n)| \pi(\theta) d\theta < \infty$.
+
+*Proof.* We follow the proof given in Matsubara et al. [2021] and adapt it to our notation. First, Eq. (17) of Ghosh and Basu [2016] demonstrates that
+
+$$\text{PIF}\left(z, \theta, \hat{P}_n\right) = wn\pi_L\left(\theta|\hat{P}_n\right) \left(-\text{DL}\left(z, \theta, \hat{P}_n\right) + \int_{\Theta} \text{DL}\left(z, \theta', \hat{P}_n\right) \pi_L\left(\theta'|\hat{P}_n\right)d\theta'\right),$$
+
+where PIF denotes the posterior influence function defined in Sec. 2.4 in the main text.
+
+We can apply Jensen's inequality to get the following upper bound:
+
+$$ \sup_{\theta \in \Theta} \sup_{z \in \mathcal{X}} |\text{PIF}(z, \theta, \hat{P}_n)| \le w n \sup_{\theta \in \Theta} \pi_L(\theta|\hat{P}_n) \left( \sup_{z \in \mathcal{X}} |\text{DL}(z, \theta, \hat{P}_n)| + \int_{\Theta} \sup_{z \in \mathcal{X}} |\text{DL}(z, \theta', \hat{P}_n)| \pi_L(\theta'|\hat{P}_n) d\theta' \right). $$
+
+Recall now that $\pi_L(\theta|\hat{P}_n) = \pi(\theta)\exp(-wnL(\theta;\hat{P}_n))/Z$, where $0 < Z < \infty$ is the normalising constant. Thus we can obtain an upper bound $\pi_L(\theta|\hat{P}_n) \le \pi(\theta)\exp(-wn\inf_{\theta\in\Theta}L(\theta;\hat{P}_n))/Z =: C\pi(\theta)$ for some constant $0 < C < \infty$, since $L(\theta,\hat{P}_n)$ is lower bounded by assumption and $n$ fixed.
+From this upper bound, we have:
+
+$$
+\begin{align*}
+\sup_{\theta \in \Theta} \sup_{z \in \mathcal{X}} |\text{PIF}(z, \theta, \hat{P}_n)| &\le w n C \sup_{\theta \in \Theta} \pi(\theta) \left( \sup_{z \in \mathcal{X}} |\text{DL}(z, \theta, \hat{P}_n)| + C \int_{\Theta} \sup_{z \in \mathcal{X}} |\text{DL}(z, \theta', \hat{P}_n)| \pi(\theta') d\theta' \right) \\
+&\le w n C \sup_{\theta \in \Theta} \left( \pi(\theta) \sup_{z \in \mathcal{X}} |\text{DL}(z, \theta, \hat{P}_n)| \right) + \\
+&\qquad w n C^2 \left( \sup_{\theta \in \Theta} \pi(\theta) \right) \int_{\Theta} \sup_{z \in \mathcal{X}} |\text{DL}(z, \theta', \hat{P}_n)| \pi(\theta') d\theta'.
+\end{align*}
+$$
+
+Since $\sup_{\theta \in \Theta} \pi(\theta) < \infty$ by assumption, it follows that:
+
+$$ \sup_{\theta \in \Theta} \left( \pi(\theta) \sup_{z \in \mathcal{X}} |\mathrm{DL}(z, \theta, \hat{P}_n) | \right) < \infty \quad \text{and} \quad \int_{\Theta} \pi(\theta) \sup_{z \in \mathcal{X}} |\mathrm{DL}(z, \theta, \hat{P}_n)| d\theta < \infty $$
+
+are sufficient conditions for $\sup_{\theta \in \Theta} \sup_{z \in \mathcal{X}} |\text{PIF}(z, \theta, \hat{P}_n)| < \infty$, as claimed. $\square$
+
+Next, we give the explicit form for $\mathrm{DL}(z, \theta, \hat{P}_n)$ in our case in the following Lemma:
+
+**Lemma 8.** For $L(\theta, \hat{P}_{n,\epsilon,z}) = E_{Y\sim\hat{P}_{n,\epsilon,z}} S(P_\theta, Y)$, we have:
+
+$$ DL(z, \theta, \hat{P}_n) = S(P_\theta, z) - E_{Y\sim\hat{P}_n} S(P_\theta, Y); $$
+
+further, setting $S = S_k$, where $S_k$ is the kernel scoring rule with kernel $k$, we have:
+
+$$ DL(z, \theta, \hat{P}_n) = 2E_{X\sim P_\theta}\left[E_{Y\sim\hat{P}_n} k(X,Y) - k(X,z)\right]; $$
+
+finally, the form for the energy score can be obtained by setting $k(x,y) = -||x-y||_2^\beta$.
+
+*Proof.* For the first statement, notice that:
+
+$$ E_{Y\sim\hat{P}_{n,\epsilon,z}} S(P_\theta, Y) = (1-\epsilon)E_{Y\sim\hat{P}_n} S(P_\theta, Y) + \epsilon S(P_\theta, z), $$
+
+from which differentiating with respect to $\epsilon$ gives the statement.
+
+For the second statement, recall the form for the kernel SR:
+
+$$ S_k(P, z) = E_{X,X' \sim P}[k(X, X')] - 2E_{X \sim P}[k(X, z)], $$
+
+from which:
+
+$$
+\begin{align*}
+\mathrm{DL}(z, \theta, \hat{\mathrm{P}}_n) &= S_k(P_\theta, z) - E_{\hat{\mathrm{P}}_n} S_k(P_\theta, Y) \\
+&= E_{X,X' \sim P_\theta}[k(X, X')] - 2E_{X \sim P_\theta}[k(X, z)] - E_{Y \sim \hat{\mathrm{P}}_n}[E_{X,X' \sim P_\theta}[k(X, X')] - 2E_{X \sim P_\theta}[k(X, Y)]] \\
+&= E_{X,X' \sim P_\theta}[k(X, X')] - 2E_{X \sim P_\theta}[k(X, z)] - E_{X,X' \sim P_\theta}[k(X, X')] + 2E_{Y \sim \hat{\mathrm{P}}_n}[E_{X \sim P_\theta}[k(X, Y)]] \\
+&= 2E_{X \sim P_\theta}[E_{Y \sim \hat{\mathrm{P}}_n} k(X, Y) - k(X, z)].
+\end{align*}
+$$
+
+$\square$
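As an illustration of Lemma 8, the expression for $\mathrm{DL}$ can be evaluated by Monte Carlo. The sketch below (Python, all names illustrative) uses a bounded Gaussian kernel with $\kappa = 1$, so that $|\mathrm{DL}| \le 4\kappa$, the bound used later in the proof of Theorem 3:

```python
import math
import random

random.seed(0)

def gauss_kernel(a, b, bw=1.0):
    # Bounded kernel: |k(a, b)| <= kappa = 1 for all a, b.
    return math.exp(-((a - b) ** 2) / (2 * bw ** 2))

def DL(z, x_sample, y_obs):
    # DL(z, theta, P_hat_n) = 2 E_{X~P_theta}[ E_{Y~P_hat_n} k(X, Y) - k(X, z) ],
    # with the expectation over P_theta replaced by an average over x_sample.
    total = 0.0
    for x in x_sample:
        mean_ky = sum(gauss_kernel(x, y) for y in y_obs) / len(y_obs)
        total += mean_ky - gauss_kernel(x, z)
    return 2 * total / len(x_sample)

x_sample = [random.gauss(0, 1) for _ in range(500)]   # draws from P_theta
y_obs = [random.gauss(0.5, 1) for _ in range(50)]     # observations
val = DL(3.0, x_sample, y_obs)
print(abs(val) <= 4.0)  # |DL| <= 4 * kappa = 4 for this kernel
```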
+
+Finally, we state the proof for Theorem 3:
+
+*Proof of Theorem 3.* The proof consists of verifying the conditions of Lemma 7 for the Kernel and Energy Score posteriors.
+
+First, let us consider the Kernel Score posterior and let us show that, under the assumptions of the Theorem, $L(\theta, \hat{P}_n)$ for the Kernel Score $S_k$ is lower bounded; specifically, we have:
+
+$$
+\begin{align*}
+L(\theta, \hat{P}_n) &= \frac{1}{n} \sum_{i=1}^{n} S_k(P_\theta, y_i) = \frac{1}{n} \sum_{i=1}^{n} \mathbb{E} [k(X, X') - 2k(X, y_i)] \geq -\frac{1}{n} \mathbb{E} \left| \sum_{i=1}^{n} [k(X, X') - 2k(X, y_i)] \right| \\
+&\geq -\frac{1}{n} \sum_{i=1}^{n} \mathbb{E} [|k(X, X')| + |2k(X, y_i)|] \\
+&\geq -\frac{1}{n} \sum_{i=1}^{n} [\kappa + 2\kappa] = -3\kappa > -\infty,
+\end{align*}
+$$
+
+where all expectations are over $X, X' \sim P_\theta$ and the bound exploits the fact that $|k(x,y)| < \kappa$.
+
+We now need to verify Assumptions 1 and 2 from Lemma 7. For the former, we proceed in a similar way as above by noticing that, for the kernel SR (using Lemma 8):
+
+$$
+\begin{align*}
+|\mathrm{DL}(z, \theta, \hat{P}_n)| &= 2 |\mathrm{E}_{X \sim P_\theta} \mathrm{E}_{Y \sim \hat{P}_n} [k(X, Y) - k(X, z)]| \\
+&\le 2 \mathrm{E}_{X \sim P_\theta} \mathrm{E}_{Y \sim \hat{P}_n} [|k(X, Y)| + |k(X, z)|] \\
+&\le 2 \mathrm{E}_{X \sim P_\theta} \mathrm{E}_{Y \sim \hat{P}_n} [\kappa + \kappa] = 4\kappa.
+\end{align*}
+$$
+
+Now:
+
+$$
+\sup_{\theta \in \Theta} \left( \pi(\theta) \sup_{z \in \mathcal{X}} \left| DL(z, \theta, \hat{P}_n) \right| \right) \leq 4\kappa \sup_{\theta \in \Theta} \pi(\theta),
+$$
+
+which is < $\infty$ as the prior is assumed to be upper bounded over $\Theta$, which verifies Assumption 1. Next:
+
+$$
+\int_{\Theta} \sup_{z \in \mathcal{X}} |\mathrm{DL}(z, \theta, \hat{P}_n)| \pi(\theta) d\theta \leq 4\kappa \int_{\Theta} \pi(\theta) d\theta = 4\kappa < \infty,
+$$
+
+which verifies Assumption 2; together, these prove the first statement.
+
+For the statement about the Energy Score posterior, we proceed in similar manner. First, let us show that, under the assumptions of the Theorem, $L(\theta, \hat{P}_n)$ for the Energy Score $S_E$ is lower bounded; in fact:
+
+$$
+\begin{align*}
+L(\theta, \hat{P}_n) &= \frac{1}{n} \sum_{i=1}^{n} S_E(P_\theta, y_i) = \frac{1}{n} \sum_{i=1}^{n} \mathbb{E} [2||X - y_i||_2^\beta - ||X - X'||_2^\beta] \\
+&= \underbrace{\frac{2}{n} \sum_{i=1}^{n} \mathbb{E}||X - y_i||_2^\beta - \mathbb{E}||X - X'||_2^\beta - \frac{1}{n^2} \sum_{i,j=1}^{n} ||y_i - y_j||_2^\beta}_{=D_E(P_\theta, \hat{P}_n)} + \frac{1}{n^2} \sum_{i,j=1}^{n} ||y_i - y_j||_2^\beta,
+\end{align*}
+$$
+
+where $D_E(P_\theta, \hat{P}_n)$ is the squared Energy Distance between $P_\theta$ and the empirical distribution $\hat{P}_n$; as the Energy Distance is a distance between probability measures [Rizzo and Székely, 2016], $D_E(P_\theta, \hat{P}_n) \ge 0$, from which:
+
+$$
+L(\theta, \hat{P}_n) = D_E(P_\theta, \hat{P}_n) + \frac{1}{n^2} \sum_{i,j=1}^{n} ||y_i - y_j||_2^\beta \geq 0.
+$$
+
+We now need to verify Assumptions 1 and 2 from Lemma 7. For the former, notice that, for the Energy SR (using Lemma 8):
+
+$$
+\begin{align*}
+|\mathrm{DL}(z, \theta, \hat{P}_n)| &= 2 |\mathrm{E}_{X \sim P_\theta} \mathrm{E}_{Y \sim \hat{P}_n} [||X - z||_2^\beta - ||X - Y||_2^\beta]| \\
+&\le 2\mathrm{E}_{X \sim P_\theta} \mathrm{E}_{Y \sim \hat{P}_n} \left[ ||X - z||_2^\beta + ||X - Y||_2^\beta \right] \\
+&\le 2\mathrm{E}_{X \sim P_\theta} \mathrm{E}_{Y \sim \hat{P}_n} \left[ B^\beta + B^\beta \right] = 4B^\beta,
+\end{align*}
+$$
+
+where the last inequality is due to $z \in \mathcal{X}$ and the boundedness assumptions for $\mathcal{X}$. Now:
+
+$$\sup_{\theta \in \Theta} \left( \pi(\theta) \sup_{z \in \mathcal{X}} \left| DL(z, \theta, \hat{P}_n) \right| \right) \le 4B^{\beta} \sup_{\theta \in \Theta} \pi(\theta),$$
+
+which is $< \infty$ as the prior is assumed to be upper bounded over $\Theta$, which verifies Assumption 1. Next:
+
+$$\int_{\Theta} \sup_{z \in \mathcal{X}} |DL(z, \theta, \hat{P}_n)| \pi(\theta) d\theta \le 4B^{\beta} \int_{\Theta} \pi(\theta) d\theta = 4B^{\beta} < \infty,$$
+
+which verifies Assumption 2; together, these prove the second statement. $\square$
+
+## A.4 Proof of Theorem 4
+
+In order to prove Theorem 4, we adapt and generalize the proof for the analogous result for Bayesian inference with an auxiliary likelihood [Drovandi et al., 2015]. Our setup is slightly more general as we do not constrain the update to be defined in terms of a likelihood; notice that the original setup in Drovandi et al. [2015] is recovered when we consider $S$ being the negative log likelihood, for some auxiliary likelihood.
+
+We recall here for simplicity the useful definitions. We consider the SR posterior:
+
+$$\pi_S(\theta|\mathbf{y}) \propto \pi(\theta) \underbrace{\exp\left\{-w \sum_{i=1}^{n} S(P_\theta, y_i)\right\}}_{p_S(\mathbf{y}|\theta)}.$$
+
+Further, we recall the form of the target of the pseudo-marginal MCMC:
+
+$$\pi_{\hat{S}}^{(m)}(\theta|\mathbf{y}) \propto \pi(\theta)p_{\hat{S}}^{(m)}(\mathbf{y}|\theta),$$
+
+where:
+
+$$
+\begin{aligned}
+p_{\hat{S}}^{(m)}(\mathbf{y}|\theta) &= \mathbb{E}\left[\exp\left\{-w \sum_{i=1}^{n} \hat{S}(\{X_j^{(\theta)}\}_{j=1}^m, y_i)\right\}\right] \\
+&= \int \exp\left\{-w \sum_{i=1}^{n} \hat{S}(\{x_j\}_{j=1}^m, y_i)\right\} \prod_{j=1}^{m} p(x_j|\theta)dx_1 dx_2 \cdots dx_m.
+\end{aligned}
+$$
+
+We begin by stating a useful property:
+
+**Lemma 9 (Theorem 3.5 in Billingsley [1999]).** If $X_n$ is a sequence of uniformly integrable random variables and $X_n$ converges in distribution to $X$, then $X$ is integrable and $\mathbb{E}[X_n] \to \mathbb{E}[X]$ as $n \to \infty$.
+
+**Remark 8 (Remark 1 in Drovandi et al. [2015]).** A simple sufficient condition for uniform integrability is that for some $\delta > 0$:
+
+$$\sup_n \mathbb{E}[|X_n|^{1+\delta}] < \infty.$$
+
+The result in the main text is the combination of the following two Theorems, which respectively follow Results 1 and 2 in Drovandi et al. [2015]:
+
+**Theorem 5 (Generalizes Result 1 in Drovandi et al. [2015]).** Assume that $p_{\hat{S}}^{(m)}(\mathbf{y}|\theta) \to p_S(\mathbf{y}|\theta)$ as $m \to \infty$ for all $\theta$ with positive prior support; further, assume $\inf_m \int_{\Theta} p_{\hat{S}}^{(m)}(\mathbf{y}|\theta)\pi(\theta)d\theta > 0$ and $\sup_{\theta \in \Theta} p_S(\mathbf{y}|\theta) < \infty$. Then
+
+$$\lim_{m \to \infty} \pi_{\hat{S}}^{(m)}(\theta|\mathbf{y}) = \pi_S(\theta|\mathbf{y}).$$
+
+Furthermore, if $f: \Theta \to \mathbb{R}$ is a continuous function satisfying $\sup_m \int_{\Theta} |f(\theta)|^{1+\delta} \pi_{\hat{S}}^{(m)}(\theta|\mathbf{y})d\theta < \infty$ for some $\delta > 0$, then
+
+$$\lim_{m \to \infty} \int_{\Theta} f(\theta) \pi_{\hat{S}}^{(m)}(\theta | \mathbf{y}) d\theta = \int_{\Theta} f(\theta) \pi_S(\theta | \mathbf{y}) d\theta.$$
+
+*Proof.* The first part follows from the fact that the numerator of
+
+$$
+\pi_{\hat{S}}^{(m)}(\theta|\mathbf{y}) = \frac{p_{\hat{S}}^{(m)}(\mathbf{y}|\theta)\pi(\theta)}{\int_{\Theta} p_{\hat{S}}^{(m)}(\mathbf{y}|\theta)\pi(\theta)d\theta}
+$$
+
+converges pointwise and the denominator is positive and converges by the bounded convergence theorem.
+
+For the second part, if for each $m \in \mathbb{N}$, $\theta_m$ is distributed according to $\pi_{\hat{S}}^{(m)}(\cdot|\mathbf{y})$ and $\theta$ is distributed according to $\pi_S(\cdot|\mathbf{y})$, then $\theta_m$ converges to $\theta$ in distribution as $m \to \infty$ by Scheffé's lemma [Scheffé, 1947]. Since $f$ is continuous, $f(\theta_m)$ converges in distribution to $f(\theta)$ as $m \to \infty$ by the continuous mapping theorem, and we conclude by application of Remark 8 and Lemma 9. $\square$
+
+The following gives a convenient way to ensure $p_{\hat{S}}^{(m)}(\mathbf{y}|\theta) \to p_S(\mathbf{y}|\theta):$
+
+**Theorem 6** (Generalizes Result 2 in Drovandi et al. [2015]). *Assume that $\exp\{-w \sum_{i=1}^n \hat{S}(\{X_j^{(\theta)}\}_{j=1}^m, y_i)\}$ converges in probability to $p_S(\mathbf{y}|\theta)$ as $m \to \infty$. If*
+
+$$
+\sup_m \mathbb{E} \left[ \left| \exp\left\{ -w \sum_{i=1}^n \hat{S}(\{X_j^{(\theta)}\}_{j=1}^m, y_i) \right\} \right|^{1+\delta} \right] < \infty
+$$
+
+for some $\delta > 0$ then $p_{\hat{S}}^{(m)}(\mathbf{y}|\theta) \to p_S(\mathbf{y}|\theta)$ as $m \to \infty$.
+
+*Proof.* The proof follows by applying Remark 8 and Lemma 9. $\square$
+
+Notice that the convergence in probability of $\exp\{-w \sum_i \hat{S}(\{X_j^{(\theta)}\}_{j=1}^m, y_i)\}$ to $p_S(\mathbf{y}|\theta)$ is implied by
+the convergence in probability of $\hat{S}(\{X_j^{(\theta)}\}_{j=1}^m, y_i)$ to $S(P_\theta, y_i)$ and by the continuity of the exponential
+function. Therefore, Theorem 4 follows from applying Theorem 6 followed by Theorem 5.
+
+# B Changing data coordinates
+
+We give here some more details on the behavior of the SR posterior when the coordinate system used
+to represent the data is changed, as mentioned in Remark 4.
+
+**Frequentist estimator** First, we investigate whether the minimum scoring rule estimator (for a strictly proper scoring rule) is affected by a transformation of the data. Specifically, considering a strictly proper $S$, we are interested in whether $\theta_Y^* = \arg \min_{\theta \in \Theta} S(P_\theta^Y, Q_Y) = \arg \min_{\theta \in \Theta} D(P_\theta^Y, Q_Y)$ is the same as $\theta_Z^* = \arg \min_{\theta \in \Theta} S(P_\theta^Z, Q_Z) = \arg \min_{\theta \in \Theta} D(P_\theta^Z, Q_Z)$, where $Z = f(Y) \implies Y \sim Q_Y \iff Z \sim Q_Z$ and $Y \sim P_\theta^Y \iff Z \sim P_\theta^Z$. If the model is well specified, $P_{\theta_Y^*}^Y = Q_Y, P_{\theta_Z^*}^Z = Q_Z \implies \theta_Y^* = \theta_Z^*$.
+
+If the model is misspecified, for a generic SR the minimizer of the expected SR may
+change according to the parametrization. We remark that this is not a drawback of the frequentist
+minimum SR estimator but rather a feature, as such estimator is the parameter value corresponding
+to the model minimizing the chosen expected scoring rule from the data generating process *in that*
+coordinate system, and it is therefore completely reasonable for it to change when the coordinate system
+is modified.
+
+Notice that a sufficient condition for $\theta_Y^* = \theta_Z^*$ is $S(P_\theta^Y, y) = a \cdot S(P_\theta^Z, z) + b$ for $a > 0, b \in \mathbb{R}$. This condition is verified when $S$ is chosen to be the log-score, as in fact:
+
+$$
+S(P_{\theta}^{Z}, f(y)) = -\ln p_{Z}(f(y)|\theta) = S(P_{\theta}^{Y}, y) + \ln |J_{f}(y)|,
+$$
+
+where we assumed $f$ to be a one-to-one function and we applied the change of variable formula to the
+density $p_Z$.
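This relation can be verified numerically for a concrete one-to-one transformation, e.g. $f(y) = e^y$ with a standard normal model, for which $\ln |J_f(y)| = y$. A minimal sketch (all names illustrative):

```python
import math

def neg_log_pY(y):
    # Log-score of a standard normal model: S(P^Y, y) = -ln p_Y(y).
    return 0.5 * y * y + 0.5 * math.log(2 * math.pi)

def neg_log_pZ(z):
    # Z = f(Y) = exp(Y): change of variables gives p_Z(z) = p_Y(ln z) / z.
    return neg_log_pY(math.log(z)) + math.log(z)

y = 0.7
lhs = neg_log_pZ(math.exp(y))          # S(P^Z, f(y))
rhs = neg_log_pY(y) + y                # S(P^Y, y) + ln|J_f(y)|, J_f(y) = e^y
print(abs(lhs - rhs) < 1e-12)
```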
+
+**Generalized Bayesian posterior** As mentioned in Remark 4, in order to have invariance to change of data coordinates we would need $w_Z S(P_\theta^Z, f(y)) = w_Y S(P_\theta^Y, y) + C \quad \forall \theta, y$ for some choice of $w_Z, w_Y$ and for all transformations $f$, where $C$ is a constant in $\theta$.
+
+This condition is easily satisfied with $w_Y = w_Z$ when $S$ is the log-score (due to what is said in the previous paragraph); instead, for other scoring rules the above condition cannot be satisfied in general for any choice of $w_Z, w_Y$. For instance, consider the kernel SR:
+
+$$S(P_{\theta}^{Z}, f(y)) = \mathbb{E}[k(Z, \tilde{Z})] - 2\mathbb{E}[k(Z, f(y))] = \mathbb{E}[k(f(Y), f(\tilde{Y}))] - 2\mathbb{E}[k(f(Y), f(y))];$$
+
+for general kernels and functions $f$, the above is different from $S(P_{\theta}^{Y}, y) = \mathbb{E}[k(Y, \tilde{Y})] - 2\mathbb{E}[k(Y, y)]$ up to a constant. Therefore, the posterior shape depends on the chosen data coordinates. Considering the expression for the kernel SR, it is clear that this is a consequence of the fact that the likelihood principle is not satisfied (as the kernel SR does not depend only on the likelihood value at the observation). A similar argument holds for the Energy Score posterior.
+
+We remark that this is also the case for BSL [Price et al., 2018], as in that case the model is assumed to be multivariate normal, and changing the data coordinates impacts its normality (in fact, it is common practice in BSL to look for transformations of the data which yield distributions as close as possible to normal).
+
+The theoretical semiBSL posterior [An et al., 2020], instead, is invariant with respect to one-to-one transformations applied independently to each data coordinate, as these do not affect the copula structure. Notice however that different data coordinate systems may yield better empirical estimates of the marginal KDEs from model simulations.
+
+# C More details on related techniques
+
+## C.1 Maximum Mean Discrepancy (MMD)
+
+We follow here Section 2.2 in Gretton et al. [2012]; all proofs of our statements can be found there.
+Let $k(\cdot, \cdot) : \mathcal{X} \times \mathcal{X} \to \mathbb{R}$ be a positive definite and symmetric kernel. Under these conditions, there exists a unique Reproducing kernel Hilbert space (RKHS) $\mathcal{H}_k$ of real functions on $\mathcal{X}$ associated to $k$.
+Now, let's define the Maximum Mean Discrepancy (MMD).
+
+**Definition 1.** Let $\mathcal{F}$ be a class of functions $f: \mathcal{X} \to \mathbb{R}$; we define the MMD relative to $\mathcal{F}$ as:
+
+$$\text{MMD}_{\mathcal{F}}(P,Q) = \sup_{f \in \mathcal{F}} [\mathbb{E}_{X \sim P} f(X) - \mathbb{E}_{Y \sim Q} f(Y)].$$
+
+We will show here how choosing $\mathcal{F}$ to be the unit ball in an RKHS $\mathcal{H}_k$ turns out to be computationally convenient, as it allows us to avoid computing the supremum explicitly. First, let us define the mean embedding of the distribution $P$ in $\mathcal{H}_k$:
+
+**Lemma 10 (Lemma 3 in Gretton et al. [2012]).** If $k(\cdot, \cdot)$ is measurable and $\mathbb{E}_{X \sim P} \sqrt{k(X, X)} < \infty$, then the mean embedding of the distribution $P$ in $\mathcal{H}_k$ is:
+
+$$\mu_P = \mathbb{E}_{X \sim P} [k(X, \cdot)] \in \mathcal{H}_k.$$
+
+Using this fact, the following Lemma shows that the MMD relative to $\mathcal{H}_k$ can be expressed as the distance in $\mathcal{H}_k$ between the mean embeddings:
+
+**Lemma 11 (Lemma 4 in Gretton et al. [2012].)** *Assume the conditions in Lemma 10 are satisfied, and let $\mathcal{F}$ be the unit ball in $\mathcal{H}_k$; then:*
+
+$$\text{MMD}_{\mathcal{F}}^2(P,Q) = ||\mu_P - \mu_Q||_{\mathcal{H}_k}^2.$$
+
+In general, the MMD is a pseudo-metric for probability distributions (i.e., it is symmetric, satisfies the triangle inequality and $\text{MMD}_{\mathcal{F}}(P,P) = 0$, Briol et al. 2019). For probability measures on a compact metric space $\mathcal{X}$, the next Lemma states the conditions under which the MMD is a metric, which additionally ensures that $\text{MMD}_{\mathcal{F}}(P,Q) = 0 \implies P = Q$. Specifically, this holds when the kernel is universal, which requires $k(\cdot, \cdot)$ to be continuous and $\mathcal{H}_k$ to be dense in $C(\mathcal{X})$ with respect to the $L_\infty$ norm (these conditions are satisfied by the Gaussian and Laplace kernels).
+
+**Lemma 12** (Theorem 5 in Gretton et al. [2012]). Let $\mathcal{F}$ be the unit ball in $H_k$, where $H_k$ is defined on a compact metric space $\mathcal{X}$ and has associated continuous kernel $k(\cdot, \cdot)$. Then:
+
+$$MMD_{\mathcal{F}}(P, Q) = 0 \iff P = Q.$$
+
+This result can be generalized to more general spaces $\mathcal{X}$ via the notion of a characteristic
+kernel, for which the mean map is injective; it can be shown that the Laplace and Gaussian kernels are
+characteristic [Gretton et al., 2012], so that the MMD for those two kernels is a metric for distributions
+on $\mathbb{R}^d$.
+
+Additionally, the form of MMD for a unit-ball in an RKHS allows easy estimation, as shown next:
+
+**Lemma 13** (Lemma 6 in Gretton et al. [2012]). Assume that the form for MMD given in Lemma 11 holds; let $X, X' \sim P$, $Y, Y' \sim Q$, and let $\mathcal{F}$ be the unit ball in $\mathcal{H}_k$. Then:
+
+$$MMD_{\mathcal{F}}^2(P, Q) = \mathbb{E}[k(X, X')] + \mathbb{E}[k(Y, Y')] - 2\mathbb{E}[k(X, Y)].$$
+
+In the main body of this work (Section 2.2), we denoted the squared MMD by $D_k(P, Q)$.
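Lemma 13 suggests a direct plug-in (V-statistic) estimate of the squared MMD from samples. A minimal 1-D sketch with a Gaussian kernel (all names illustrative):

```python
import math
import random

random.seed(0)

def k(a, b, bw=1.0):
    # Gaussian (universal) kernel.
    return math.exp(-((a - b) ** 2) / (2 * bw ** 2))

def mmd2(xs, ys):
    # V-statistic estimate of MMD^2 = E k(X,X') + E k(Y,Y') - 2 E k(X,Y);
    # nonnegative by construction, and exactly zero for identical samples.
    kxx = sum(k(a, b) for a in xs for b in xs) / len(xs) ** 2
    kyy = sum(k(a, b) for a in ys for b in ys) / len(ys) ** 2
    kxy = sum(k(a, b) for a in xs for b in ys) / (len(xs) * len(ys))
    return kxx + kyy - 2 * kxy

xs = [random.gauss(0, 1) for _ in range(100)]
ys = [random.gauss(2, 1) for _ in range(100)]
print(mmd2(xs, xs) < 1e-12)        # zero for identical samples
print(mmd2(xs, ys) > mmd2(xs, xs))  # larger for shifted distributions
```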
+
+### C.1.1 Equivalence between MMD-Bayes posterior and $\pi_{S_k}$
+
+Chérief-Abdellatif and Alquier [2020] considered the following posterior, termed MMD-Bayes:
+
+$$\pi_{\text{MMD}}(\theta|\mathbf{y}) \propto \pi(\theta) \exp \left\{-\beta \cdot D_k (P_\theta, \hat{P}_n)\right\}$$
+
+where $\beta > 0$ is a temperature parameter and $D_k(P_\theta, \hat{P}_n)$ denotes the squared MMD between the
+empirical measure of the observations $\hat{P}_n = \frac{1}{n} \sum_{i=1}^n \delta_{y_i}$ and the model distribution $P_\theta$.
+From the properties of MMD (see Appendix C.1), notice that:
+
+$$
+\begin{align*}
+D_k (P_\theta, \hat{P}_n) &= \mathbb{E}_{X,X' \sim P_\theta} k(X, X') + \frac{1}{n^2} \sum_{i,j=1}^{n} k(y_i, y_j) - \frac{2}{n} \sum_{i=1}^{n} \mathbb{E}_{X \sim P_\theta} k(X, y_i) \\
+&= \frac{1}{n} \left( n \cdot \mathbb{E}_{X,X' \sim P_\theta} k(X, X') - 2 \sum_{i=1}^{n} \mathbb{E}_{X \sim P_\theta} k(X, y_i) \right) + \frac{1}{n^2} \sum_{i,j=1}^{n} k(y_i, y_j) \\
+&= \frac{1}{n} \left( \sum_{i=1}^{n} S_k (P_\theta, y_i) \right) + \frac{1}{n^2} \sum_{i,j=1}^{n} k(y_i, y_j),
+\end{align*}
+$$
+
+where we used the expression of the kernel scoring rule $S_k$, and where the second term is independent of
+$\theta$. Therefore, the MMD-Bayes posterior is equivalent to the SR posterior with kernel scoring rule $S_k$,
+by identifying $w = \beta/n$.
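This identity can be checked numerically once the expectations under $P_\theta$ are replaced by averages over a common simulated sample. A minimal sketch (all names illustrative):

```python
import math
import random

random.seed(0)

def k(a, b):
    return math.exp(-0.5 * (a - b) ** 2)

x = [random.gauss(0, 1) for _ in range(80)]   # stands in for P_theta
y = [random.gauss(1, 1) for _ in range(40)]   # observations
n, m = len(y), len(x)

Ekxx = sum(k(a, b) for a in x for b in x) / m**2   # E k(X, X')

def S_k(yi):
    # Kernel SR with the same x-sample used for both expectations.
    return Ekxx - 2 * sum(k(a, yi) for a in x) / m

# D_k(P_theta, P_hat_n): MMD^2 between x-sample and empirical measure of y.
D_k = Ekxx + sum(k(a, b) for a in y for b in y) / n**2 \
      - 2 * sum(k(a, b) for a in x for b in y) / (m * n)

lhs = D_k
rhs = sum(S_k(yi) for yi in y) / n + sum(k(a, b) for a in y for b in y) / n**2
print(abs(lhs - rhs) < 1e-10)  # equality up to float rounding
```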
+
+## C.2 Semi-Parametric Synthetic Likelihood
+
+We discuss here in more details the semiBSL approach An et al. [2020], introduced in the main body
+in Sec. 3.1.
+
+**Copula theory** First, recall that a copula is a multivariate Cumulative Distribution Function (CDF) such that the marginal distribution of each variable is uniform on the interval $[0,1]$. Consider now a multivariate random variable $X = (X^1, ..., X^d)$, whose marginal CDFs are denoted by $F_j(x) = P(X^j \le x)$; then, the multivariate random variable built as:
+
+$$(U^1, U^2, \dots, U^d) = (F_1(X^1), F_2(X^2), \dots, F_d(X^d))$$
+
+has uniform marginals on $[0, 1]$.
+
+Sklar's theorem exploits copulas to decompose the density $h$ of $X$⁹; specifically, it states that the following decomposition is valid:
+
+$$h(x^1, \dots, x^d) = c(F_1(x^1), \dots, F_d(x^d)) f_1(x^1) \cdots f_d(x^d),$$
+
+where $f_j$ is the marginal density of the j-th coordinate, and $c$ is the density of the copula.
+
+We now review the definition and properties of the Gaussian copula, which is defined by a correlation matrix $R \in [-1, 1]^{d \times d}$, and has cumulative distribution function:
+
+$$C_R(u) = \Phi_R(\Phi^{-1}(u^1), \dots, \Phi^{-1}(u^d)),$$
+
+where $\Phi^{-1}$ is the inverse CDF (quantile function) of a standard normal, and $\Phi_R$ is the CDF of a multivariate normal with covariance matrix $R$ and mean 0. Defining $U$ as the random variable distributed according to $C_R$, it can easily be seen that $R$ is the covariance matrix of the multivariate normal random variable $Z = \Phi^{-1}(U)$, where $\Phi^{-1}$ is applied element-wise. In fact:
+
+$$P(Z \le \eta) = P(U \le \Phi(\eta)) = C_R(\Phi(\eta)) = \Phi_R(\eta),$$
+
+where the inequalities are intended component-wise.
+
+By defining as $\eta$ a $d$-vector with components $\eta^k = \Phi^{-1}(u^k)$, the Gaussian copula density is:
+
+$$c_R(u) = \frac{1}{\sqrt{|R|}} \exp \left\{ -\frac{1}{2} \eta^\top (R^{-1} - I_d) \eta \right\},$$
+
+where $I_d$ is a $d$-dimensional identity matrix, and $|\cdot|$ denotes the determinant.
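As a sanity check of this density, consider $d = 2$ with $R = \big(\begin{smallmatrix} 1 & \rho \\ \rho & 1 \end{smallmatrix}\big)$, for which $R^{-1} - I_d$ can be written out explicitly. A minimal sketch using the standard library `statistics.NormalDist` for $\Phi^{-1}$ (function name illustrative):

```python
import math
from statistics import NormalDist

def gaussian_copula_density(u, rho):
    # c_R(u) = |R|^{-1/2} exp{-(1/2) eta^T (R^{-1} - I) eta} for d = 2,
    # with R = [[1, rho], [rho, 1]] and eta_k = Phi^{-1}(u_k).
    e1, e2 = (NormalDist().inv_cdf(v) for v in u)
    det = 1 - rho ** 2
    # eta^T (R^{-1} - I) eta written out for the 2x2 correlation matrix:
    q = (rho ** 2 * (e1 ** 2 + e2 ** 2) - 2 * rho * e1 * e2) / det
    return math.exp(-0.5 * q) / math.sqrt(det)

# At u = (0.5, 0.5), eta = 0 and the density reduces to 1/sqrt(|R|).
val = gaussian_copula_density((0.5, 0.5), 0.3)
print(abs(val - 1 / math.sqrt(1 - 0.3 ** 2)) < 1e-12)
```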
+
+**Semiparametric Bayesian Synthetic Likelihood (semiBSL)** The semiBSL approach assumes that the likelihood for the model has a Gaussian copula; therefore, the likelihood for a single observation $y$ can be written as:
+
+$$p_{\text{semiBSL}}(y|\theta) = c_{R_\theta}(F_{\theta,1}(y^1), \dots, F_{\theta,d}(y^d)) \prod_{k=1}^{d} f_{\theta,k}(y^k),$$
+
+where $y^k$ is the $k$-th component of $y$, $f_{\theta,k}$ is the marginal density of the $k$-th component and $F_{\theta,k}$ is the CDF of the $k$-th component.
+
+In order to obtain an estimate for it, we exploit simulations from $P_\theta$ to estimate $R_\theta$, $f_{\theta,k}$ and $F_{\theta,k}$; this leads to:
+
+$$
+\begin{aligned}
+\hat{p}_{\text{semiBSL}}(y|\theta) &= c_{\hat{R}_\theta}(\hat{F}_{\theta,1}(y^1), \dots, \hat{F}_{\theta,d}(y^d)) \prod_{k=1}^{d} \hat{f}_{\theta,k}(y^k) \\
+&= \frac{1}{\sqrt{|\hat{R}_\theta|}} \exp \left\{ -\frac{1}{2} \hat{\eta}_y^\top (\hat{R}_\theta^{-1} - I_d) \hat{\eta}_y \right\} \prod_{k=1}^{d} \hat{f}_{\theta,k}(y^k),
+\end{aligned}
+$$
+
+where $\hat{f}_{\theta,k}$ and $\hat{F}_{\theta,k}$ are estimates for $f_{\theta,k}$ and $F_{\theta,k}$, $\hat{\eta}_y = (\hat{\eta}_y^1, \dots, \hat{\eta}_y^d)$, $\hat{\eta}_y^k = \Phi^{-1}(\hat{u}^k)$, $\hat{u}^k = \hat{F}_{\theta,k}(y^k)$. Moreover, $\hat{R}_\theta$ is an estimate of the correlation matrix.
+
+We now discuss how the different quantities are estimated. First, a Kernel Density Estimate (KDE) is used for the marginal densities and cumulative distribution functions. Specifically, given samples $x_1, \dots, x_m \sim P_\theta$, a KDE estimate for the $k$-th marginal density is:
+
+$$\hat{f}_{\theta,k}(y^k) = \frac{1}{m} \sum_{j=1}^{m} K_h(y^k - x_j^k),$$
+
+where $K_h$ is a normalized kernel which is chosen to be Gaussian in the original implementation [An et al., 2020]. The CDF estimates are obtained by integrating the KDE density.
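Since integrating a Gaussian KDE yields a mixture of normal CDFs, both $\hat{f}_{\theta,k}$ and $\hat{F}_{\theta,k}$ have closed forms. A minimal 1-D sketch (bandwidth and names illustrative):

```python
from statistics import NormalDist
import random

random.seed(0)

def kde_pdf(y, sample, h=0.3):
    # f_hat(y) = (1/m) sum_j K_h(y - x_j), with a Gaussian kernel K_h.
    return sum(NormalDist(x, h).pdf(y) for x in sample) / len(sample)

def kde_cdf(y, sample, h=0.3):
    # F_hat(y): the integral of the Gaussian KDE is a mixture of normal CDFs.
    return sum(NormalDist(x, h).cdf(y) for x in sample) / len(sample)

sample = [random.gauss(0, 1) for _ in range(200)]
print(kde_pdf(0.0, sample) > 0)
print(0.0 <= kde_cdf(0.0, sample) <= 1.0)
print(kde_cdf(1.0, sample) >= kde_cdf(-1.0, sample))  # CDF estimate is monotone
```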
+
+⁹Provided that the density exists in the first place; a more general version of Sklar’s theorem is concerned with general random variables, but we restrict here to the case where densities are available.
+---PAGE_BREAK---
+
+Next, for estimating the correlation matrix, An et al. [2020] proposed to use a robust procedure based on the ranks (grc, Gaussian rank correlation, Boudt et al., 2012); specifically, given *m* simulations $x_1, \dots, x_m \sim P_\theta$, the estimate for the (k,l)-th entry of $R_\theta$ is given by:
+
+$$
+\left[ \hat{R}_{\theta}^{\text{grc}} \right]_{k,l} = \frac{\sum_{j=1}^{m} \Phi^{-1}\left(\frac{r(x_j^k)}{m+1}\right) \Phi^{-1}\left(\frac{r(x_j^l)}{m+1}\right)}{\sum_{j=1}^{m} \Phi^{-1}\left(\frac{j}{m+1}\right)^2},
+$$
+
+where $r(\cdot) : \mathbb{R} \to \{1, \dots, m\}$ is the rank function.
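A minimal sketch of this estimator follows (the function name is ours; only NumPy and the standard library's $\Phi^{-1}$ are used):

```python
import numpy as np
from statistics import NormalDist

def gaussian_rank_corr(x):
    """Gaussian rank correlation (grc) estimate of the correlation
    matrix R_theta, from an (m, d) array x of simulations."""
    m, _ = x.shape
    inv = NormalDist().inv_cdf  # Phi^{-1}
    # r(x_j^k): rank of x_j^k within the k-th column, in {1, ..., m}
    ranks = np.argsort(np.argsort(x, axis=0), axis=0) + 1
    z = np.vectorize(lambda r: inv(r / (m + 1)))(ranks)
    # denominator: sum_j Phi^{-1}(j / (m+1))^2
    denom = sum(inv(j / (m + 1)) ** 2 for j in range(1, m + 1))
    return z.T @ z / denom
```

Since the ranks within each column are a permutation of $\{1,\dots,m\}$, the diagonal entries equal 1 by construction, so the output is a valid correlation matrix estimate.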
+
+**Copula scoring rule** Finally, we write down the explicit expression of the copula scoring rule $S_{Gc}$, associated to the Gaussian copula. We show that this is a proper, but not strictly so, scoring rule for copula distributions. Specifically, let $C$ be a distribution for a copula random variable, and let $u \in [0, 1]^d$. We define:
+
+$$
+S_{Gc}(C, u) = \frac{1}{2} \log |R_C| + \frac{1}{2} (\Phi^{-1}(u))^T (R_C^{-1} - I_d) \Phi^{-1}(u),
+$$
+
+where $\Phi^{-1}$ is applied element-wise to $u$, and $R_C$ is the correlation matrix associated to $C$ in the following way: define the copula random variable $V \sim C$ and its transformation $\Phi^{-1}(V)$; then, $\Phi^{-1}(V)$ will have a multivariate normal distribution with mean 0 and covariance matrix $R_C$.
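For concreteness, a direct transcription of $S_{Gc}$ reads as follows (a sketch; the function name is ours):

```python
import numpy as np
from statistics import NormalDist

def gaussian_copula_score(R_C, u):
    """Gaussian copula score S_Gc(C, u) for a copula observation
    u in (0, 1)^d, given the correlation matrix R_C associated to C."""
    inv = NormalDist().inv_cdf
    eta = np.array([inv(ui) for ui in u])  # Phi^{-1} applied element-wise
    d = len(eta)
    _, logdet = np.linalg.slogdet(R_C)     # log |R_C|
    quad = eta @ (np.linalg.inv(R_C) - np.eye(d)) @ eta
    return 0.5 * logdet + 0.5 * quad
```

Two sanity checks: for $R_C = I_d$ the score is identically 0, and at $u = (0.5, \dots, 0.5)$ the quadratic term vanishes (since $\Phi^{-1}(0.5) = 0$), leaving $\frac{1}{2}\log|R_C|$.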
+
+Similarly to the Dawid-Sebastiani score (see Sec. 2.2), this scoring rule is proper but not strictly so as it only depends on the first 2 moments of the distribution of the random variable $\Phi^{-1}(V)$ (the first one being equal to 0). To show this, assume the copula random variable $U$ has an exact distribution $Q$ and consider the expected scoring rule:
+
+$$
+S_{Gc}(C,Q) = E_{U \sim Q} S_{Gc}(C,U) = \frac{1}{2} \log |R_C| + E_{U \sim Q} \left[ (\Phi^{-1}(U))^T (R_C^{-1} - I_d) \Phi^{-1}(U) \right];
+$$
+
+now, notice that $\Phi^{-1}(U)$ is a random vector whose marginals are standard normals (it need not be jointly normal).
+Therefore, its covariance matrix, which we denote as $R_Q$, is a correlation matrix. From
+the well-known form for the expectation of a quadratic form¹⁰, it follows that:
+
+$$
+\begin{align*}
+S_{Gc}(C,Q) &= \frac{1}{2} \log |R_C| + \frac{1}{2} \operatorname{Tr} [ (R_C^{-1} - \mathbf{I}_d) \cdot R_Q ] \\
+&= \frac{1}{2} \log |R_C| + \frac{1}{2} \operatorname{Tr} [ R_C^{-1} \cdot R_Q ] - \frac{1}{2} \operatorname{Tr} [ R_Q ] \\
+&= \frac{1}{2} \left\{ \underbrace{\log \frac{|R_C|}{|R_Q|} - d + \operatorname{Tr}[R_C^{-1} \cdot R_Q]}_{D_{KL}(Z_Q||Z_C)} \right\} + \frac{1}{2} \log |R_Q| + \frac{d}{2} - \frac{1}{2} \operatorname{Tr}[R_Q],
+\end{align*}
+$$
+
+where $D_{KL}(Z_Q||Z_C)$ is the KL divergence between two multivariate normal distributions $Z_Q$ and $Z_C$
+of dimension $d$, with mean 0 and covariance matrix $R_Q$ and $R_C$ respectively. Further, notice that the
+remaining factors do not depend on the distribution $C$. Therefore, $S_{Gc}(C,Q)$ is minimized whenever
+$R_C$ is equal to $R_Q$; this happens when $C=Q$, but also for all other choices of $C$ which share the
+associated covariance matrix with $Q$. This implies that the Gaussian copula score is a proper, but not
+strictly so, scoring rule for copula distributions.
+
+**C.3 Ratio estimation**
+
+We discuss here in more detail the Ratio Estimation approach by Thomas et al. [2020], introduced
+in the main body in Sec. 3.1. In doing so, we also relax the assumption of having the same number of
+simulations in both datasets (which we used in the main body for ease of exposition).
+
+¹⁰$\mathbb{E}[X^T\Lambda X] = \text{tr}[\Lambda\Sigma] + \mu^T\Lambda\mu$, for a symmetric matrix $\Lambda$, and where $\mu$ and $\Sigma$ are the mean and covariance matrix of $X$ (which in general does not need to be normal, but only needs to have well defined second moments).
+---PAGE_BREAK---
+
+Specifically, recall that logistic regression, given a function $h$, a predictor $x$ and a response $t \in \{0,1\}$, approximates the probability of the response being 1 as:
+
+$$ \mathrm{Pr}(T = 1 | X = x; h) = \frac{1}{1 + \exp(-h(x))}. \qquad (26) $$
+
+Then, considering a training dataset of $m_0$ elements $\{x_j^{(0)}\}_{j=1}^{m_0}$ belonging to class 0 and $m_1$ elements $\{x_j^{(1)}\}_{j=1}^{m_1}$ belonging to class 1, the function $h$ corresponding to the best classifier is determined by minimizing the cross entropy loss:
+
+$$ J_{\mathbf{m}}(h) = \frac{1}{m_0 + m_1} \left\{ \sum_{j=1}^{m_1} \log \left[ 1 + \exp(-h(x_j^{(1)})) \right] + \sum_{j=1}^{m_0} \log \left[ 1 + \exp(h(x_j^{(0)})) \right] \right\}, $$
+
+where $\mathbf{m} = (m_0, m_1)$.
+
+In the setup of interest to the main body of this paper and for a fixed $\theta$, class 1 is associated to being sampled from the likelihood $p(\cdot|\theta)$ and class 0 to being sampled from the marginal $p(\cdot)$, which implies that $X|T=1 \sim p(\cdot|\theta)$ and $X|T=0 \sim p(\cdot)$.
+
+We will consider the setting in which $m_1, m_0 \to \infty$ but such that the limit of their ratio is a constant $\nu = \lim m_1/m_0$, equal to the ratio of prior probabilities $\frac{P(T=1)}{P(T=0)}$. In this limit, we want to show that the minimizer of $J_{\mathbf{m}}(h)$ is $h^*(x) = \log r(x; \theta) + \log \nu$, as long as the minimization of $J$ is performed over the full set of functions $h$; an alternative proof of this fact is given in Thomas et al. [2020].
+
+We proceed in two steps: first, we show how $\mathrm{Pr}(T = 1 | X = x; h^*) = \mathrm{Pr}(T = 1 | X = x)$; secondly, we use that fact to show that $h^*(x) = \log r(x; \theta) + \log \nu$.
+
+Let us now denote $g(h(x)) = \mathrm{Pr}(T = 1 | X = x; h)$, so that $g$ is the logistic sigmoid. For the first part, we proceed by rewriting the objective in the infinite data limit as:
+
+$$
+\begin{aligned}
+J(h) &= \mathbb{E}_{X,T}[-T \log g(h(X)) - (1-T)\log(1-g(h(X)))] \\
+&= \mathbb{E}_X \mathbb{E}_{T|X}[-T \log g(h(X)) - (1-T)\log(1-g(h(X)))].
+\end{aligned}
+ $$
+
+For fixed $X$, $g(h(X))$ is a probability value in $(0, 1)$, and $-T \log g(h(X)) - (1-T)\log(1-g(h(X)))$ is the logarithmic score for the binary variable $T$, which is a strictly proper scoring rule (see Example 3 in Gneiting and Raftery [2007]). Therefore,
+
+$$ \mathbb{E}_{T|X}[-T \log g(h(X)) - (1-T)\log(1-g(h(X)))] $$
+
+is minimized for each fixed $X$ whenever $g(h(x)) = P(T = 1 | X = x)$, and the overall $J(h)$ is minimized when the inner expectation is minimized for each value of $X$, so that $h^*$ is such that
+
+$$ \mathrm{Pr}(T = 1 | X = x; h^*) = \mathrm{Pr}(T = 1 | X = x). $$
+
+For the second part, let's now consider:
+
+$$
+P(T = 1 | X = x) = \frac{\overbrace{p(x|\theta)}^{p(X=x|T=1)} P(T=1)}{p(x|\theta)P(T=1) + p(x)P(T=0)},
+$$
+
+$$
+P(T = 0 | X = x) = \frac{\overbrace{p(x)}^{p(X=x|T=0)} P(T=0)}{p(x|\theta)P(T=1) + p(x)P(T=0)}.
+$$
+---PAGE_BREAK---
+
+Moreover, the definition in Eq. (26) implies that:
+
+$$h(x) = \log \frac{P(T=1|X=x; h)}{P(T=0|X=x; h)}.$$
+
+Therefore, by performing logistic regression in the infinite data limit, we get that:
+
+$$h^*(x) = \log \frac{P(T=1|X=x; h^*)}{P(T=0|X=x; h^*)} = \log \frac{P(X=x|T=1)}{P(X=x|T=0)} + \log \frac{P(T=1)}{P(T=0)} = \log \frac{p(x|\theta)}{p(x)} + \log \nu,$$
+
+which concludes our proof, with $\nu = \frac{P(T=1)}{P(T=0)}$.
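This limit behavior can be checked numerically on a toy example where the log ratio is known in closed form: taking class 1 samples from $N(1,1)$, class 0 samples from $N(0,1)$ and $m_1 = m_0$ (so $\log \nu = 0$), the exact minimizer is $h^*(x) = x - 1/2$, which lies in the affine family. The sketch below (our own illustration, not the implementation of Thomas et al. [2020]) fits an affine $h$ by plain gradient descent on the cross-entropy:

```python
import numpy as np

# Class 1: stand-in for p(.|theta), N(1, 1); class 0: stand-in for p(.), N(0, 1).
# Exact log ratio: log r(x) = x - 1/2; with m1 = m0 the shift log(nu) vanishes.
rng = np.random.default_rng(0)
m = 20000
x = np.concatenate([rng.normal(1.0, 1.0, m), rng.normal(0.0, 1.0, m)])
t = np.concatenate([np.ones(m), np.zeros(m)])

a, b = 0.0, 0.0
for _ in range(2000):  # plain gradient descent on the cross-entropy J
    p = 1.0 / (1.0 + np.exp(-(a * x + b)))
    a -= 0.5 * np.mean((p - t) * x)  # gradient of J w.r.t. a
    b -= 0.5 * np.mean(p - t)        # gradient of J w.r.t. b

# h*(x) = a x + b now approximates log r(x) = x - 1/2,
# i.e. a is close to 1 and b is close to -0.5.
```

Up to finite-sample noise, the fitted slope and intercept recover the analytic log density ratio.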
+
+# D Further experimental details
+
+## D.1 Tuning the bandwidth of the Gaussian kernel
+
+Consider the Gaussian kernel:
+
+$$k(x, y) = \exp\left(-\frac{\|x - y\|_2^2}{2\gamma^2}\right);$$
+
+inspired by Park et al. [2016], we fix the bandwidth $\gamma$ with the following procedure:
+
+1. Simulate a value $\theta_j \sim \pi(\theta)$ and a set of samples $x_{jk} \sim P_{\theta_j}$, for $k = 1, \dots, m_\gamma$.
+
+2. Estimate the median of $\{\|x_{jk} - x_{jl}\|_2\}_{k,l=1}^{m_\gamma}$ and call it $\hat{\gamma}_j$.
+
+3. Repeat points 1) and 2) for $j = 1, \dots, m_{\theta,\gamma}$.
+
+4. Set the estimate for $\gamma$ as the median of $\{\hat{\gamma}_j\}_{j=1}^{m_{\theta,\gamma}}$.
+
+Empirically, we use $m_{\theta,\gamma} = 1000$ and we set $m_\gamma$ to the corresponding value of $m$ for the different models.
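The four steps above can be sketched as follows (a sketch only: `prior_sample` and `simulator` are placeholder callables for the user's prior and model, and the default repetition counts are smaller than the ones used in the experiments):

```python
import numpy as np

def tune_bandwidth(prior_sample, simulator, m_gamma=100, m_theta_gamma=50, seed=0):
    """Median-heuristic tuning of the Gaussian kernel bandwidth gamma.

    prior_sample(rng) draws theta ~ pi(theta); simulator(theta, n, rng)
    returns an (n, d) array of samples from P_theta."""
    rng = np.random.default_rng(seed)
    medians = []
    for _ in range(m_theta_gamma):
        theta = prior_sample(rng)                     # step 1: theta_j ~ pi
        x = simulator(theta, m_gamma, rng)            #         x_jk ~ P_theta_j
        diff = x[:, None, :] - x[None, :, :]          # pairwise differences
        dists = np.sqrt((diff ** 2).sum(-1))
        iu = np.triu_indices(m_gamma, k=1)
        medians.append(np.median(dists[iu]))          # step 2: median distance
    return float(np.median(medians))                  # steps 3-4: outer median
```

For a toy model where $P_\theta = N(\theta, 1)$ in one dimension, the pairwise distance is $|N(0,2)|$, whose median is roughly $0.95$ regardless of $\theta$, so the returned $\gamma$ is close to that value.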
+
+## D.2 The g-and-k model
+
+### D.2.1 Additional experimental details for well specified setup
+
+We report here additional experimental details on the g-and-k model experiments (Sec. 4.1.1).
+
+First, we discuss settings for the SR posteriors:
+
+* For the Energy Score posterior, our heuristic procedure (Sec. 3.2) for setting $w$ using BSL as a reference resulted in $w \approx 0.35$ for the univariate model and $w \approx 0.16$ for the multivariate one.
+
+* For the Kernel Score posterior, we first fit the value of the Gaussian kernel bandwidth parameter as described in Appendix D.1, which resulted in $\gamma \approx 5.50$ for the univariate case and $\gamma \approx 52.37$ for the multivariate one. Then, the heuristic procedure for $w$ using BSL as a reference resulted in $w \approx 18.30$ for the univariate model and $w \approx 52.29$ for the multivariate one.
+
+Next, we discuss the proposal sizes for MCMC; recall that we use independent normal proposals on each component of $\theta$, with standard deviation $\sigma$. We report here the values for $\sigma$ used in the experiments; we stress that, as the MCMC is run in the transformed unbounded parameter space (obtained applying a logit transformation), these proposal sizes refer to that space.
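Concretely, the transformation and the proposal step can be sketched as below, assuming each component of $\theta$ has prior bounds `(lo, hi)` (a generic sketch of the mechanism, not the exact implementation used in the experiments):

```python
import numpy as np

def to_unbounded(theta, lo, hi):
    """Logit map from the bounded parameter space (lo, hi) to R^d."""
    u = (theta - lo) / (hi - lo)
    return np.log(u / (1 - u))

def to_bounded(phi, lo, hi):
    """Inverse (sigmoid) map back to (lo, hi)."""
    return lo + (hi - lo) / (1 + np.exp(-phi))

def propose(phi, sigma, rng):
    """Independent normal proposal with std sigma on each transformed component."""
    return phi + sigma * rng.standard_normal(phi.shape)
```

Proposing in the transformed space guarantees that the mapped-back proposal always lies inside the prior support, so no proposal is rejected merely for falling outside the bounds (a Jacobian correction for the change of variables is still needed in the acceptance ratio).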
+
+For the univariate g-and-k, the proposal sizes we use are the following:
+
+* For BSL, we use $\sigma = 1$ for all values of $n$.
+
+* For Energy and Kernel Scores, we take $\sigma = 1$ for $n$ from 1 up to 25 (included), $\sigma = 0.4$ for $n$ from 30 to 50, and $\sigma = 0.2$ for $n$ from 55 to 100.
+---PAGE_BREAK---
+
+For the multivariate g-and-k:
+
+* For BSL and semiBSL, we use $\sigma = 1$ for all values of $n$ for which the chain converges. We stress that we tried decreasing the proposal size, but that did not solve the non-convergence issue (discussed in Sec. 4.1.1).
+
+* For Energy and Kernel Scores, we take $\sigma = 1$ for $n$ from 1 up to 15 (included), $\sigma = 0.4$ for $n$ from 20 to 35, $\sigma = 0.2$ for $n$ from 40 to 50 and $\sigma = 0.1$ for $n$ from 55 to 100.
+
+In Table 1, we report the acceptance rates the different methods achieve for all values of $n$, with the proposal sizes mentioned above. We denote by “/” the experiments for which we did not manage to run MCMC satisfactorily. We remark how the Energy Score achieves larger acceptance rates than the Kernel Score in all experiments.
+
| N. obs. | Univariate g-and-k | | | Multivariate g-and-k | | | |
|---|---|---|---|---|---|---|---|
| | BSL | Kernel Score | Energy Score | BSL | semiBSL | Kernel Score | Energy Score |
| 1 | 0.355 | 0.456 | 0.401 | 0.211 | 0.173 | 0.470 | 0.422 |
| 5 | 0.218 | 0.318 | 0.372 | 0.048 | / | 0.126 | 0.178 |
| 10 | 0.132 | 0.235 | 0.262 | 0.009 | / | 0.116 | 0.170 |
| 15 | 0.110 | 0.222 | 0.202 | / | / | 0.066 | 0.149 |
| 20 | 0.100 | 0.139 | 0.195 | / | / | 0.125 | 0.271 |
| 25 | 0.091 | 0.139 | 0.201 | / | / | 0.089 | 0.219 |
| 30 | 0.084 | 0.196 | 0.322 | / | / | 0.099 | 0.207 |
| 35 | 0.083 | 0.153 | 0.298 | / | / | 0.041 | 0.146 |
| 40 | 0.077 | 0.131 | 0.270 | / | / | 0.038 | 0.192 |
| 45 | 0.070 | 0.104 | 0.239 | / | / | 0.037 | 0.174 |
| 50 | 0.061 | 0.099 | 0.200 | / | / | 0.044 | 0.175 |
| 55 | 0.060 | 0.152 | 0.279 | / | / | 0.035 | 0.209 |
| 60 | 0.058 | 0.146 | 0.287 | / | / | 0.035 | 0.196 |
| 65 | 0.055 | 0.141 | 0.278 | / | / | 0.048 | 0.207 |
| 70 | 0.050 | 0.134 | 0.242 | / | / | 0.043 | 0.198 |
| 75 | 0.049 | 0.123 | 0.233 | / | / | 0.041 | 0.195 |
| 80 | 0.046 | 0.124 | 0.221 | / | / | 0.047 | 0.192 |
| 85 | 0.046 | 0.114 | 0.210 | / | / | 0.038 | 0.147 |
| 90 | 0.045 | 0.110 | 0.207 | / | / | 0.036 | 0.143 |
| 95 | 0.045 | 0.106 | 0.199 | / | / | 0.028 | 0.135 |
| 100 | 0.043 | 0.098 | 0.194 | | | | |
+
+Table 1: Acceptance rates for the univariate and multivariate g-and-k experiments with different values of $n$, with the MCMC proposal sizes reported in Appendix D.2.1. “/” denotes experiments for which MCMC did not run satisfactorily.
+
+### D.2.2 Effect of increased number of simulations
+
+As mentioned in the main text (Sec. 4.1.1), we report here the results of our study increasing the number of simulations for a fixed number of observations $n = 20$ for the g-and-k model in order to better understand the reason why MCMC with BSL and semiBSL performs poorly in this setup.
+
+Specifically, we want to investigate whether the poor performance is due to large variance in the estimate of the target (as we recall we use pseudo-marginal MCMC, Andrieu et al., 2009); as increasing the number of simulations reduces such variance, we study the effect of this on the MCMC performance.
+
+We investigated $m = 500, 1000, 1500, 2000, 2500, 3000$ for both of them, and additionally tried $m = 30\,000$ for BSL (testing that with semiBSL was prohibitively expensive); as discussed in Appendix D.2.1, we used a proposal size $\sigma = 0.4$, with which the Energy and Kernel Score posteriors
+---PAGE_BREAK---
+
+Figure 9: Traceplots for BSL and semiBSL for $n = 20$ using different numbers of simulations $m$, reported in the legend for each row; the green dashed line denotes the true parameter value. There is no improvement in the mixing of the chain when increasing the number of simulations.
+
| N. simulations $m$ | 500 | 1000 | 1500 | 2000 | 2500 | 3000 | 30000 |
|---|---|---|---|---|---|---|---|
| Acc. rate BSL | $3.0 \cdot 10^{-5}$ | $8.0 \cdot 10^{-5}$ | $2.2 \cdot 10^{-4}$ | $1.4 \cdot 10^{-4}$ | $5.7 \cdot 10^{-4}$ | $4.0 \cdot 10^{-5}$ | $8.0 \cdot 10^{-5}$ |
| Acc. rate semiBSL | $1.6 \cdot 10^{-4}$ | $4.0 \cdot 10^{-5}$ | $3.1 \cdot 10^{-4}$ | $3.0 \cdot 10^{-5}$ | $2.5 \cdot 10^{-4}$ | $1.5 \cdot 10^{-4}$ | / |
+
+Table 2: Acceptance rates for BSL and semiBSL for $n = 20$ using different numbers of simulations $m$; there is no improvement in the acceptance rate when increasing the number of simulations. We recall that we were not able to run semiBSL for $m = 30000$ due to its high computational cost.
+
+performed well. We report traceplots in Fig. 9 and corresponding acceptance rates in Table 2; from this experiment, we note that increasing the number of simulations does not have any effect on the chain mixing; therefore, the poor MCMC performance cannot be imputed to the variance of the target estimate alone.
+
+### D.2.3 Additional experimental details for misspecified setup
+
+We report here additional experimental details on the misspecified g-and-k model experiments, where the observations were generated by a Cauchy distribution (Sec. 4.1.2).
+
+In order to have coherent results with respect to the well specified case, we use here the values of *w* and $\gamma$ determined in the well specified case (reported in Appendix D.2.1).
+
+Next, we discuss the proposal sizes for MCMC (which is run with independent normal proposals on each component of $\theta$ with standard deviation $\sigma$, in the same way as in the well specified case, after applying a logit transformation to the parameter space).
+
+* For the univariate g-and-k, for all methods (BSL, Energy and Kernel Scores), we take $\sigma = 1$ for $n$ from 1 up to 25 (included), $\sigma = 0.4$ for $n$ from 30 to 50, and $\sigma = 0.2$ for $n$ from 55 to 100.
+
+* For the multivariate g-and-k, recall that we did not report results for BSL and semiBSL here, as we were not able to sample the posteriors with MCMC for large $n$, as already experienced in the well specified case. For the remaining techniques, we used the same values of $\sigma$ as in the well specified experiments (Appendix D.2.1).
+
+In Table 3, we report the acceptance rates the different methods achieve for all values of $n$, with the proposal sizes discussed above. We remark how the Energy Score achieves larger acceptance rates than the Kernel Score in all experiments.
+---PAGE_BREAK---
+
| N. obs. | Misspecified univariate g-and-k | | | Misspecified multivariate g-and-k | |
|---|---|---|---|---|---|
| | BSL | Kernel Score | Energy Score | Kernel Score | Energy Score |
| 1 | 0.454 | 0.469 | 0.518 | 0.474 | 0.475 |
| 5 | 0.302 | 0.392 | 0.443 | 0.314 | 0.375 |
| 10 | 0.189 | 0.412 | 0.413 | 0.359 | 0.332 |
| 15 | 0.144 | 0.405 | 0.377 | 0.361 | 0.277 |
| 20 | 0.103 | 0.218 | 0.301 | 0.544 | 0.410 |
| 25 | 0.092 | 0.237 | 0.302 | 0.533 | 0.377 |
| 30 | 0.146 | 0.361 | 0.447 | 0.535 | 0.356 |
| 35 | 0.134 | 0.238 | 0.413 | 0.534 | 0.331 |
| 40 | 0.128 | 0.257 | 0.406 | 0.621 | 0.415 |
| 45 | 0.124 | 0.257 | 0.396 | 0.499 | 0.362 |
| 50 | 0.114 | 0.200 | 0.366 | 0.342 | 0.332 |
| 55 | 0.161 | 0.183 | 0.436 | 0.398 | 0.415 |
| 60 | 0.149 | 0.162 | 0.418 | 0.347 | 0.379 |
| 65 | 0.153 | 0.163 | 0.412 | 0.304 | 0.362 |
| 70 | 0.149 | 0.132 | 0.386 | 0.289 | 0.341 |
| 75 | 0.138 | 0.167 | 0.391 | 0.180 | 0.299 |
| 80 | 0.140 | 0.156 | 0.381 | 0.149 | 0.255 |
| 85 | 0.133 | 0.142 | 0.372 | 0.168 | 0.256 |
| 90 | 0.136 | 0.086 | 0.335 | 0.155 | 0.254 |
| 95 | 0.138 | 0.068 | 0.322 | 0.163 | 0.248 |
| 100 | 0.132 | 0.096 | 0.330 | 0.153 | 0.234 |
+
+Table 3: Acceptance rates for the univariate and multivariate g-and-k experiments with different values of *n*, with the MCMC proposal sizes reported in Appendix D.2.3.
+
+## D.3 Additional details on misspecified normal location model
+
+As mentioned in the main text (Sec. 4.2), we set the weight $w$ such that the variance achieved by our SR posteriors is approximately the same as the one achieved by the standard Bayes distribution for the well specified case ($\epsilon = 0$). This resulted in $w = 1$ for the Energy Score posterior and $w = 2.8$ for the Kernel Score posterior. Additionally, the bandwidth for the Gaussian kernel was tuned to be $\gamma \approx 0.9566$ (with the strategy discussed in Appendix D.1).
+
+In Figure 10 we report the full set of posterior distributions for the different values of $\epsilon$ and $z$ obtained with the standard Bayes posterior and with our SR posteriors.
+
+In the MCMC with the SR posteriors, a proposal size $\sigma = 2$ is used for all values of $\epsilon$ and $z$. For all experiments, Table 4 reports the acceptance rates obtained with the SR posteriors, while Table 5 reports the posterior standard deviations obtained with the SR posteriors and with the standard Bayes distribution (for the latter we do not give proposal size and acceptance rate, as it was sampled with the PyMC3 library [Salvatier et al., 2016], which employs more advanced MCMC techniques than standard Metropolis-Hastings).
+
+
+
+
| Setup | ε = 0 | ε = 0.1 | | | | | ε = 0.2 | | | | |
|---|---|---|---|---|---|---|---|---|---|---|---|
| | | z = 3 | z = 5 | z = 7 | z = 10 | z = 20 | z = 3 | z = 5 | z = 7 | z = 10 | z = 20 |
| Kernel Score | 0.079 | 0.081 | 0.079 | 0.083 | 0.090 | 0.085 | 0.074 | 0.076 | 0.079 | 0.078 | 0.077 |
| Energy Score | | | | | | | | | | | |
+
+Table 4: Acceptance rates for MCMC targeting the Energy and Kernel Score posteriors for the different outlier setups, for the misspecified normal location model.
+
+Finally, as mentioned in the main text (Sec. 4.2), we attempted using BSL in this scenario. As the model is Gaussian, we expected the BSL posterior to be very close to the standard posterior. Indeed,
+---PAGE_BREAK---
+
+Figure 10: Posterior distributions obtained with the Scoring Rules and exact Bayes for the misspecified normal location model; each panel represents a different choice of $\epsilon$ and $z$. It can be seen that both the Kernel and Energy Score posteriors are more robust than standard Bayes, with the Kernel Score one being extremely robust. The densities are obtained by KDE on the MCMC output thinned by a factor 10.
+
| Setup | ε = 0 | ε = 0.1 | | | | | ε = 0.2 | | | | |
|---|---|---|---|---|---|---|---|---|---|---|---|
| | | z = 3 | z = 5 | z = 7 | z = 10 | z = 20 | z = 3 | z = 5 | z = 7 | z = 10 | z = 20 |
| Standard Bayes | 0.100 | 0.100 | 0.099 | 0.099 | 0.099 | 0.100 | 0.099 | 0.099 | 0.100 | 0.099 | 0.099 |
| Kernel Score | 0.103 | 0.108 | 0.107 | 0.110 | 0.108 | 0.104 | 0.120 | 0.126 | 0.109 | 0.117 | 0.119 |
| Energy Score | 0.102 | 0.104 | 0.107 | 0.107 | 0.107 | 0.106 | 0.109 | 0.114 | 0.114 | 0.121 | 0.114 |
+
+Table 5: Obtained posterior standard deviation for the standard Bayes and the Energy and Kernel Score posteriors, for the different outlier setups, for the misspecified normal location model.
+
+this is what we observed in the well specified case and for small $z$ (Figure 11). When however $z$ is increased, the MCMC targeting the BSL posterior does not perform satisfactorily (see the trace plots in Figure 12). Neither reducing the proposal size nor running the chain for a larger number of steps seems to solve this issue, which is reminiscent of the issue discussed in Sec. 4.1.
+
+## D.4 Additional details on the MA2 model experiment
+
+We report here additional experimental details on the MA(2) model experiment (Sec. 4.3.1).
+
+First, Table 6 reports the proposal sizes $\sigma$ and the resulting acceptance rates and trace of the posterior covariance matrix $\Sigma_{\text{post}}$ for BSL and semiBSL; we also report the trace of $\Sigma_{\text{post}}$ for the true posterior, for which we do not give proposal size and acceptance rate, as it was sampled with the PyMC3 library [Salvatier et al., 2016], which employs more advanced MCMC techniques than standard Metropolis-Hastings.
+
| Technique | BSL | semiBSL | True posterior |
|---|---|---|---|
| Proposal size σ | 1 | 0.2 | / |
| Acceptance rate | 0.16 | 0.16 | / |
| Tr[Σpost] | 0.0860 | 0.0527 | 0.04483 |
+
+Table 6: Proposal sizes and acceptance rates for the BSL, semiBSL and the true posterior for the MA2 model.
+
+For the Kernel Score posterior with the Gaussian Kernel, we first fit the value of the bandwidth
+---PAGE_BREAK---
+
+Figure 11: Standard Bayes and BSL posteriors for the normal location model, for different choices of $\epsilon$ and $z$. First row: fixed outliers location $z = 3$ and varying proportion $\epsilon$; second row: fixed outlier proportion $\epsilon$, varying location $z$. As expected, the BSL posterior is very close to the standard Bayes posterior. The densities are obtained by KDE on the MCMC output thinned by a factor 10.
+
+Figure 12: Trace plots for MCMC targeting the BSL posterior with different choices of $z$ and $\epsilon$, for the misspecified normal location model. We used here proposal size $\sigma = 2$ and 60000 MCMC steps, of which 40000 were burned in; reducing the proposal size or increasing the number of steps did not seem to solve this issue.
+---PAGE_BREAK---
+
+Figure 13: Contour plot for the posterior distributions for the MA(2) model with different values of *w*, with darker colors denoting larger posterior density and dotted line denoting true parameter value. The posterior densities are obtained by KDE on the MCMC output thinned by a factor 10. The prior distribution is uniform on the white triangular region. We remark how increasing *w* leads to narrower posteriors, as expected.
+
+of the Gaussian kernel as described in Appendix D.1, which resulted in $\gamma \approx 12.77$.
+
+Next, we attempted tuning the value of the weight $w$ for both the Kernel and Energy Score posteriors using our heuristic procedure (Sec. 3.2); this resulted in $w \approx 12.97$ for the Energy Score and $w \approx 208$ for the Kernel Score. However, running the inference scheme with these values led to quite broad posterior densities, from which it is hard to understand the behavior of the SR posteriors. For this reason, we obtained the Scoring Rule posteriors with different choices of $w$, by running an MCMC chain with 30000 steps of which 10000 are burned in, and by using $m = 500$. In Table 7, we report the proposal size, acceptance rate and trace of the posterior covariance matrix for the different weights used, for both the Kernel and Energy Score posteriors. Using larger values of $w$ than the ones reported there did not lead to satisfactory MCMC performance; this could possibly be improved by using a multivariate proposal with a tuned covariance matrix, which we however did not pursue further here. We highlight that increasing $w$ leads to smaller posterior variance (as expected), but even with the largest values of $w$ we were able to use, the posterior variance is still larger than the one obtained with BSL, semiBSL and the true posterior.
+
+The posterior density plots for the different values of $w$ are reported in Figure 13.
+
| Kernel Score | | | | Energy Score | | | |
|---|---|---|---|---|---|---|---|
| w | Prop. size σ | Acc. rate | Tr[Σpost] | w | Prop. size σ | Acc. rate | Tr[Σpost] |
| 250 | 1 | 0.37 | 0.2827 | 12 | 0.3 | 0.46 | 0.2715 |
| 300 | 0.9 | 0.32 | 0.2628 | 14 | 0.3 | 0.4 | 0.2537 |
| 350 | 0.8 | 0.29 | 0.2461 | 16 | 0.3 | 0.35 | 0.2344 |
| 400 | 0.7 | 0.26 | 0.2392 | 18 | 0.3 | 0.31 | 0.2278 |
| 450 | 0.6 | 0.22 | 0.2204 | 20 | 0.15 | 0.29 | 0.2264 |
| 500 | 0.5 | 0.19 | 0.2276 | 22 | 0.15 | 0.25 | 0.2066 |
| 550 | 0.4 | 0.18 | 0.2138 | 24 | 0.15 | 0.21 | 0.1917 |
| 600 | 0.3 | 0.14 | 0.1932 | 26 | 0.15 | 0.18 | 0.1863 |
| 620 | 0.15 | 0.15 | 0.1888 | 28 | 0.1 | 0.16 | 0.1673 |
| 640 | 0.15 | 0.14 | 0.1849 | 30 | 0.1 | 0.13 | 0.1589 |
+Table 7: Proposal size, acceptance rate and trace of the posterior covariance matrix for different weight values for MA2, for the Kernel and Energy Score posteriors.
+---PAGE_BREAK---
+
+## D.5 The M/G/1 model
+
+### D.5.1 Simulating the M/G/1 model
+
+We give here two different recursive formulations of the M/G/1 model which can be used to generate samples from it.
+
+We follow the notation and the model description in Shestopaloff and Neal [2014]. Specifically, we consider customers arriving at a single server with independent interarrival times $W_i$ distributed according to an exponential distribution with parameter $\theta_3$. The service time $U_i$ is assumed to be $U_i \sim \text{Uni}(\theta_1, \theta_2)$; the observed random variables are the interdeparture times $Y_i$. In Shestopaloff and Neal [2014], $Y_i$ is written using the following recursive formula:
+
+$$Y_i = U_i + \max\left(0, \sum_{j=1}^{i} W_j - \sum_{j=1}^{i-1} Y_j\right) = U_i + \max(0, V_i - X_{i-1}), \quad (27)$$
+
+where $V_i = \sum_{j=1}^i W_j$ and $X_i = \sum_{j=1}^i Y_j$ are respectively the arrival time and the departure time of the $i$-th customer.
+
+A different formulation of the same process is given in Chapter 4.3 in Nelson [2013] by exploiting Lindley's equation, and is of independent interest. We give it here and show how the two formulations correspond. Specifically, this formulation considers an additional variable $Z_i$ which denotes the waiting time of customer $i$, for which the following recursion holds:
+
+$$Z_i = \max(0, Z_{i-1} + U_{i-1} - W_i),$$
+
+where $Z_0 = 0$ and $U_0 = 0$. Then, the interdeparture time is found to be:
+
+$$Y_i = W_i + U_i - U_{i-1} + Z_i - Z_{i-1}; \quad (28)$$
+
+this can be easily seen, as the absolute departure time of the $i$-th customer is $\sum_{j=1}^{i} W_j + U_i + Z_i$.
+These two formulations are the same; indeed, the latter can be written as:
+
+$$Y_i = U_i + \max(0, Z_{i-1} + U_{i-1} - W_i) - Z_{i-1} - U_{i-1} + W_i = U_i + \max(0, W_i - Z_{i-1} - U_{i-1}). \quad (29)$$
+
+By comparing Eqs. (27) and (29), the two formulations are equal if the following equality is verified:
+
+$$\max(0, W_i - Z_{i-1} - U_{i-1}) = \max\left(0, \sum_{j=1}^{i} W_j - \sum_{j=1}^{i-1} Y_j\right)$$
+
+which is equivalent to:
+
+$$W_i - Z_{i-1} - U_{i-1} = \sum_{j=1}^{i} W_j - \sum_{j=1}^{i-1} Y_j \iff Z_{i-1} + U_{i-1} = \sum_{j=1}^{i-1} (Y_j - W_j)$$
+
+Now, from Eq. (28) we have:
+
+$$\sum_{j=1}^{i-1} (Y_j - W_j) = \sum_{j=1}^{i-1} (U_j + Z_j - U_{j-1} - Z_{j-1}) = U_{i-1} + Z_{i-1} - U_0 - Z_0 = U_{i-1} + Z_{i-1},$$
+
+so the required equality is indeed satisfied.
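Both recursions are straightforward to implement, and their equivalence can also be checked numerically; the sketch below (with function names of our choosing) simulates the interdeparture times given draws of the interarrival times $W_i$ and service times $U_i$:

```python
import numpy as np

def mg1_shestopaloff(w, u):
    """Interdeparture times via Eq. (27): Y_i = U_i + max(0, V_i - X_{i-1})."""
    y = np.empty_like(u)
    v, x_prev = 0.0, 0.0
    for i in range(len(u)):
        v += w[i]                          # arrival time V_i
        y[i] = u[i] + max(0.0, v - x_prev)
        x_prev += y[i]                     # departure time X_i
    return y

def mg1_lindley(w, u):
    """Same process via Lindley's recursion, Eqs. (28)-(29)."""
    y = np.empty_like(u)
    z_prev, u_prev = 0.0, 0.0
    for i in range(len(u)):
        z = max(0.0, z_prev + u_prev - w[i])   # waiting time Z_i
        y[i] = w[i] + u[i] - u_prev + z - z_prev
        z_prev, u_prev = z, u[i]
    return y
```

Drawing $W_i \sim \text{Exp}(\theta_3)$ and $U_i \sim \text{Uni}(\theta_1, \theta_2)$ for some parameter values and feeding the same draws to both functions yields identical interdeparture times, consistently with the derivation above.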
+
+### D.5.2 Additional experimental details
+
+We report here additional experimental details on the M/G/1 model experiment (Sec. 4.3.2).
+
+First, Table 8 reports the proposal sizes $\sigma$ and the resulting acceptance rates and trace of the posterior covariance matrix $\Sigma_{post}$ for BSL and semiBSL; we also report the trace of $\Sigma_{post}$ for the true posterior, for which we do not give the proposal size and acceptance rate as it was sampled using more
+---PAGE_BREAK---
+
| Technique | BSL | semiBSL | True posterior |
|---|---|---|---|
| Proposal size σ | 1 | 0.2 | / |
| Acceptance rate | 0.12 | 0.11 | / |
| Tr[Σpost] | 4.5183 | 0.2726 | 0.2108 |
+
+Table 8: Proposal sizes and acceptance rates for the BSL, semiBSL and the true posterior for the M/G/1 model.
+
+advanced MCMC techniques than standard Metropolis-Hastings, via the PyMC3 library [Salvatier et al., 2016].
+
+For the Kernel Score posterior with the Gaussian Kernel, we first fit the value of the Gaussian kernel bandwidth parameter as described in Appendix D.1, which resulted in $\gamma \approx 28.73$.
+
+Next, we attempted tuning the value of the weight $w$ for both the Kernel and Energy Score posteriors using our heuristic procedure (Sec. 3.2); this resulted in $w \approx 10.98$ for the Energy Score and $w \approx 597$ for the Kernel Score. However, running the inference scheme with these values led to quite broad posterior densities, from which it is hard to understand the behavior of the SR posteriors. For this reason, we obtained the Scoring Rule posteriors with different choices of $w$, by running an MCMC chain with 30000 steps of which 10000 are burned in, and by using $m = 1000$. In Table 9, we report the proposal size, acceptance rate and trace of the posterior covariance matrix for the different weights used, for both the Kernel and Energy Score posteriors. Using larger values of $w$ than the ones reported there did not lead to satisfactory MCMC performance; this could possibly be improved by using a multivariate proposal with a tuned covariance matrix, which we however did not pursue further here. We highlight that the largest values of $w$ lead to much smaller posterior variance than BSL, almost as small as that of semiBSL and the true posterior.
+
+The posterior density plots for the different values of $w$ are reported in Figure 14, where it can be seen that increasing $w$ leads to narrower posteriors.
+
+| Kernel Score | | | | Energy Score | | | |
+|---|---|---|---|---|---|---|---|
+| w | Prop. size σ | Acc. rate | Tr[Σpost] | w | Prop. size σ | Acc. rate | Tr[Σpost] |
+| 500 | 1 | 0.33 | 5.4777 | 11 | 0.9 | 0.16 | 5.0448 |
+| 1000 | 0.8 | 0.28 | 4.8602 | 14 | 0.8 | 0.14 | 4.6072 |
+| 1500 | 0.5 | 0.3 | 4.8131 | 17 | 0.6 | 0.14 | 3.6742 |
+| 2000 | 0.3 | 0.32 | 4.7935 | 20 | 0.5 | 0.13 | 3.6797 |
+| 2500 | 0.2 | 0.32 | 4.6715 | 23 | 0.4 | 0.13 | 3.8724 |
+| 3000 | 0.1 | 0.36 | 4.1218 | 26 | 0.3 | 0.13 | 2.1551 |
+| 3500 | 0.1 | 0.32 | 4.0993 | 29 | 0.2 | 0.16 | 3.4833 |
+| 4000 | 0.05 | 0.35 | 2.8173 | 32 | 0.05 | 0.29 | 2.8267 |
+| 4500 | 0.05 | 0.3 | 2.3159 | 35 | 0.05 | 0.27 | 3.3188 |
+| 5000 | 0.04 | 0.27 | 2.3113 | 38 | 0.05 | 0.24 | 2.3488 |
+| 5500 | 0.02 | 0.27 | 1.5395 | 41 | 0.05 | 0.21 | 1.7488 |
+| 6000 | 0.02 | 0.25 | 2.4561 | 44 | 0.04 | 0.2 | 1.1017 |
+| 6500 | 0.01 | 0.25 | 0.9995 | 47 | 0.04 | 0.18 | 0.7685 |
+| 7000 | 0.005 | 0.23 | 1.0091 | 50 | 0.04 | 0.17 | 0.8157 |
+| 7500 | 0.005 | 0.19 | 0.5664 | 53 | 0.04 | 0.15 | 1.191 |
+| 8000 | 0.002 | 0.15 | 0.3769 | 56 | 0.01 | 0.21 | 0.5315 |
+
+Table 9: Proposal size, acceptance rate and trace of the posterior covariance matrix for different weight values for M/G/1, for the Kernel and Energy Score posteriors.
+---PAGE_BREAK---
+
+Figure 14: Posterior densities for the Kernel and Energy Score posteriors with different values of $w$ for the M/G/1 model; for both methods, each row shows bivariate marginals for a different pair of parameters, with darker colors denoting larger posterior density and dotted lines denoting the true parameter values. In the figure for the Kernel Score posterior, we write $\tilde{w} = w/100$ for brevity. The posterior densities are obtained by KDE on the MCMC output thinned by a factor 10. Notice that the axes do not span the full prior range of the parameters. We remark how increasing $w$ leads to narrower posteriors, as expected.
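The KDE-on-thinned-output step mentioned in the caption can be sketched as follows (a generic illustration with `scipy`, not the exact plotting code used for Figure 14):

```python
import numpy as np
from scipy.stats import gaussian_kde

def thinned_kde(chain, thin=10):
    """KDE of an MCMC chain (shape (n_samples, n_params)) after thinning by `thin`."""
    thinned = chain[::thin]
    return gaussian_kde(thinned.T)  # scipy expects shape (n_params, n_samples)
```

The resulting density object can then be evaluated on a grid over each pair of parameters to draw the bivariate marginals.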
\ No newline at end of file
diff --git a/samples/texts_merged/4054627.md b/samples/texts_merged/4054627.md
new file mode 100644
index 0000000000000000000000000000000000000000..fd0e93409d3fad66554eec60220d22579465c401
--- /dev/null
+++ b/samples/texts_merged/4054627.md
@@ -0,0 +1,1183 @@
+
+---PAGE_BREAK---
+
+# Mixed convection flow and heat transfer in a vertical wavy channel containing porous and fluid layer with traveling thermal waves
+
+J.C. Umavathi* and M. Shekar
+
+Department of Mathematics, Gulbarga University, Gulbarga-585 106, Karnataka, INDIA.
+
+*Corresponding Author: jc_uma11@yhaoo.com.
+
+## Abstract
+
+Mixed convection flow and heat transfer in a vertical wavy channel filled with porous and fluid layers is studied analytically. The flow in the porous medium is modeled using the Darcy-Brinkman equation. The coupled non-linear partial differential equations describing the conservation of mass, momentum and energy are solved by a linearization technique, wherein the flow is assumed to consist of two parts: a mean part and a perturbed part. Exact solutions are obtained for the mean part, and the perturbed part is solved using the long wave approximation. The separate solutions are matched at the interface using suitable matching conditions. Results for a wide range of governing parameters such as the Grashof number, viscosity ratio, width ratio, conductivity ratio and traveling thermal waves are plotted for different values of the porous parameter on the velocity and temperature fields. Closed form expressions for the skin friction and Nusselt number at both the left and right channel walls are also derived, and all the results are depicted pictorially. It is found that the porous matrix, viscosity ratio and conductivity ratio suppress the velocity, whereas the Grashof number and width ratio promote the velocity parallel to the flow direction, with the reverse effect observed on the velocity perpendicular to the flow direction.
+
+**Keywords:** convective flow; wavy channel; porous medium; traveling thermal waves.
+
+DOI: http://dx.doi.org/10.4314/ijest.v3i6.16
+
+## 1. Introduction
+
+Mixed convection heat transfer in porous media has received increasing interest over the last twenty years, due to its numerous applications in geophysics and energy-related systems. Mixed convection in ducted flows occurs in many applications, such as heat exchangers, chemical processing equipment, transport of heated and cooled fluids, solar collectors and micro-electronic cooling. Buoyancy effects distort the velocity and temperature profiles relative to the forced convection case. This phenomenon is of substantial significance because it may strongly affect wall friction, pressure drop, heat transfer, the occurrence of extreme temperatures and the stability of the flow. Convective heat transfer and fluid flow in a system containing simultaneously a fluid reservoir and a porous medium saturated with fluid is of great mathematical and physical interest. More specifically, the existence of a fluid layer adjacent to a layer of fluid-saturated porous medium is a common occurrence in both geophysical and engineering environments. The fundamental nature and the growing volume of work in this area are amply documented in the books by Ingham and Pop (2005), Vafai (2005), Nield and Bejan (2006), etc.
+
+Composite systems are also part of numerous other engineering applications, such as fibrous and granular insulation, porous insulation of ducts, ambient air heat transfer from hair-covered skin, grain storage, and the drying of paper. Freezing of soils and melting of ice in frozen soils due to changes in weather conditions also require knowledge of the interaction mechanism between the fluid and porous layers. Composite layers also find application in porous journal bearings.
+
+Deajani et al. (1986) and Nield (1983) studied the thermal instabilities of a superposed porous and fluid layer using Darcy's law together with matching conditions. Masuoka (1974) observed convective flow in a layer of fluid heated from below and divided by a horizontal porous wall. He found that the porous wall suppresses the convection. Recently, heat transfer in channels partially filled with porous media has received considerable attention and was the focus of several investigations (Chikh et al., 1995; Vafai and Kim, 1995). As previously mentioned, the need for better understanding of heat
+---PAGE_BREAK---
+
+transfer in porous media is motivated by the numerous engineering applications encountered. In general, most analytical studies of fluid flow adopt Darcy's law (refer to Nield and Bejan, 2006 for different models such as the Darcy model, Darcy-Brinkman model and Brinkman-Forchheimer model). As noted by Al-Nimr and Alkam (1998), there appears to be very limited research on problems of forced convection in composite fluid and porous layers. Beavers and Joseph (1967) first investigated the fluid mechanics at the interface between a fluid layer and a porous medium over a flat plate. Rudraiah (1985) investigated the same problem using the Darcy-Brinkman model. Nield (1991) discussed the limitations of the Brinkman-Forchheimer model in porous media and at the interface between the clear fluid and the porous region. Later on, Vafai and Kim (1995) presented an exact solution for the same problem. Recently, Tang et al. (2010) studied the combined heat and moisture convective transport in a partial enclosure with multiple free ports.
+
+Composite layer flows are exciting because of the modeling challenge they pose to researchers in formulating interface and boundary conditions. By browsing the literature, it becomes evident that the interface conditions in use are either of the 'slip' or 'stick' type (see Beavers and Joseph, 1967 and Ochoa-Tapia and Whitaker, 1995). Several works have appeared on composite layer flows using these conditions or variants of them. Vafai and Kim (1990) presented an exact solution for the fluid flow at the interface between a porous medium and a fluid layer including the inertia and boundary effects. They took the shear stresses in the fluid and in the porous medium to be equal at the interface region. Kuznetsov (1998) assumed that the shear stress jump is inversely proportional to the permeability of the porous medium. Later, Alazmi and Vafai (2001) gave a detailed analysis of different types of interfacial conditions between a porous medium and a fluid layer, which is the benchmark article for understanding the various types of interface conditions found in the literature. Following the interfacial conditions as defined in Vafai and Thiyagaraja (1987), Malashetty and his research group worked on flow and heat transfer through composite porous medium channels. Convective flow and heat transfer in an inclined channel bounded by two rigid plates, with one region filled with a porous matrix saturated with a viscous fluid and another region with a clear viscous fluid different from the fluid in the first region, was studied by Malashetty et al. (2004). The same authors in 2005 analysed flow and heat transfer in an inclined channel consisting of a fluid layer sandwiched between two porous matrix layers. Oscillatory flow and heat transfer in a composite porous medium channel was studied by Umavathi *et al.* (2006).
An analysis of fully developed combined free and forced convective flow in a fluid saturated porous medium channel bounded by two vertical parallel plates was presented by Prathap Kumar *et al.* (2009). Recently Umavathi *et al.* (2010) found the exact solutions for the generalized plain Couette flow in a composite channel.
+
+The dynamics of incompressible viscous fluid flows bounded by wavy walls is of special interest and has many practical applications in the transpiration cooling of re-entry vehicles and rocket boosters, cross-hatching on ablative surfaces and film vaporization in combustion chambers, the finishing of painted walls, and in reducing friction drag on the hulls of ships and submarines. Wavy walls are also employed in medical operations in order to increase mass transfer (blood oxygenators; Eldabe et al., 2008). Processes involving heat and mass transfer are often encountered in the chemical industry, in reservoir engineering in connection with thermal recovery processes, and in the study of the dynamics of salty hot springs in the sea. In view of these applications, several authors have investigated fluid flows over a wavy wall. Vajravelu and Sastri (1978) made an interesting analysis of the free convective heat transfer in a viscous incompressible fluid bounded by a long (compared to the width of the channel) vertical wavy wall and a parallel flat wall. Later, Vajravelu (1989) studied the combined free and forced convection in hydromagnetic flows in a vertical wavy channel with traveling thermal waves. Malashetty et al. (2001) studied magnetoconvective flow and heat transfer between a vertical wavy wall and a parallel flat wall. Luo (2008) studied the flow of two superposed viscous fluid layers in a two-dimensional channel confined between a plane and a wavy wall by analytical and numerical methods at arbitrary Reynolds numbers. Srinivas and Muthuraj (2010) studied MHD flow with slip effects and a temperature-dependent heat source in a vertical wavy porous space. Recently, Umavathi et al. (2010, 2011) studied the flow and heat transfer in a long vertical channel composed of a smooth and a corrugated wall filled with two immiscible viscous fluids.
+
+Little attention has been given to mixed convection flow and heat transfer in a fluid-superposed porous medium in a vertical wavy channel, even though the study is useful in many of the areas of application mentioned above. Thus, the objective of this work is to study the flow and heat transfer in a vertical wavy channel containing a porous layer saturated with a fluid and a clear viscous fluid layer. In this study, the porous matrix is assumed to be sparse; the Darcy-Brinkman model is thus adopted to describe the fluid flow in the porous medium region.
+
+## 2. Mathematical formulation of the problem
+
+The geometry under consideration, illustrated in Figure 1, consists of wavy walls in which the X-axis is taken vertically upward, parallel to the direction of buoyancy, and the Y-axis is normal to it. The wavy walls are represented by $Y = -h^{(1)} + a \cos(\lambda X + \theta)$ (right wavy wall) and $Y = h^{(2)} + a \cos(\lambda X)$ (left wavy wall). The walls are held at different constant temperatures $\hat{T}_1$ (right wavy wall) and $\hat{T}_2$ (left wavy wall), where $\hat{T}_2 > \hat{T}_1$. The region $-h^{(1)} \leq Y \leq 0$ (region-I) is occupied by a fluid-saturated porous medium of density $\rho$, specific heat at constant pressure $C_p$, viscosity $\mu$, permeability $\kappa$, thermal conductivity $K$ and thermal expansion coefficient $\beta$, and the region $0 \leq Y \leq h^{(2)}$ (region-II) is occupied by the same fluid as in region-I without the porous matrix.
+We make the following assumptions:
+---PAGE_BREAK---
+
+(a) the fluid properties are assumed to be constant and the Boussinesq approximation will be used so that the density variation is retained only in the buoyancy term;
+
+(b) the flow is laminar and two-dimensional (that is, the flow is identical in vertical layers, which is a valid assumption);
+
+(c) the wave length of the wavy wall which is proportional to $a^{-1}$ is very large where a is the amplitude.
+
+**Figure 1:** Physical configuration.
+
+We consider the fluid to be incompressible and the flow to be steady and fully developed. Under these assumptions, the continuity, momentum, energy and state equations based on the Darcy-Brinkman model are (Nield and Bejan, 2006):
+
+Region-I
+
+$$ \frac{\partial U^{(1)}}{\partial X^{(1)}} + \frac{\partial V^{(1)}}{\partial Y^{(1)}} = 0 \quad (1) $$
+
+$$ \rho \left( U^{(1)} \frac{\partial U^{(1)}}{\partial X^{(1)}} + V^{(1)} \frac{\partial U^{(1)}}{\partial Y^{(1)}} \right) = -\frac{\partial P^{(1)}}{\partial X^{(1)}} + \mu_{eff} \left( \frac{\partial^2 U}{\partial X^2} + \frac{\partial^2 U}{\partial Y^2} \right)^{(1)} - \rho g - \frac{\mu}{\kappa} U^{(1)} \quad (2) $$
+
+$$ \rho \left( U^{(1)} \frac{\partial V^{(1)}}{\partial X^{(1)}} + V^{(1)} \frac{\partial V^{(1)}}{\partial Y^{(1)}} \right) = -\frac{\partial P^{(1)}}{\partial Y^{(1)}} + \mu_{eff} \left( \frac{\partial^2 V}{\partial X^2} + \frac{\partial^2 V}{\partial Y^2} \right)^{(1)} - \frac{\mu}{\kappa} V^{(1)} \quad (3) $$
+
+$$ \rho C_p \left( U^{(1)} \frac{\partial T^{(1)}}{\partial X^{(1)}} + V^{(1)} \frac{\partial T^{(1)}}{\partial Y^{(1)}} \right) = K_{eff} \left( \frac{\partial^2 T}{\partial X^2} + \frac{\partial^2 T}{\partial Y^2} \right)^{(1)} \quad (4) $$
+
+$$ \rho = \rho_s (1 - \beta (T^{(1)} - T_s)) $$
+
+Region - II
+
+$$ \frac{\partial U^{(2)}}{\partial X^{(2)}} + \frac{\partial V^{(2)}}{\partial Y^{(2)}} = 0 \quad (5) $$
+
+$$ \rho \left( U^{(2)} \frac{\partial U^{(2)}}{\partial X^{(2)}} + V^{(2)} \frac{\partial U^{(2)}}{\partial Y^{(2)}} \right) = -\frac{\partial P^{(2)}}{\partial X^{(2)}} + \mu \left( \frac{\partial^2 U}{\partial X^2} + \frac{\partial^2 U}{\partial Y^2} \right)^{(2)} - \rho g \quad (6) $$
+
+$$ \rho \left( U^{(2)} \frac{\partial V^{(2)}}{\partial X^{(2)}} + V^{(2)} \frac{\partial V^{(2)}}{\partial Y^{(2)}} \right) = -\frac{\partial P^{(2)}}{\partial Y^{(2)}} + \mu \left( \frac{\partial^2 V}{\partial X^2} + \frac{\partial^2 V}{\partial Y^2} \right)^{(2)} \quad (7) $$
+---PAGE_BREAK---
+
+$$
+\rho C_p \left( U^{(2)} \frac{\partial T^{(2)}}{\partial X^{(2)}} + V^{(2)} \frac{\partial T^{(2)}}{\partial Y^{(2)}} \right) = K \left( \frac{\partial^2 T}{\partial X^2} + \frac{\partial^2 T}{\partial Y^2} \right)^{(2)} \quad (8)
+$$
+
+$$
+\rho = \rho_s (1 - \beta (T^{(2)} - T_s))
+$$
+
+The fluid viscosity and the effective viscosity in the Brinkman term are distinguished as $\mu$ and $\mu_{\text{eff}}$, respectively, in Eqs. (2) and (3). Most works which used the Brinkman model assumed that $\mu_{\text{eff}} = \mu$. However, direct numerical simulations (Martys et al., 1994) and experimental investigations (Givler and Altobelli, 1994) have demonstrated that there are situations in which it is important to distinguish between these two coefficients. For example, Givler and Altobelli (1994) investigated water flow through a tube filled with an open-cell rigid foam of high porosity. They found that for this flow $\mu_{\text{eff}} = (7.5_{-2.4}^{+3.4})\mu$.
+
+The boundary conditions on $U^{(j)}$, $V^{(j)}$ are the no-slip conditions, and the boundary conditions on $T$ are $\hat{T}_1$ at the right wavy wall and $\hat{T}_2$ at the left wavy wall. For the problem displayed in Figure 1, at the interface (between region-I and region-II) we adopt the assumptions of Kim and Choi (1996) and Kuznetsov (1999), that is, continuity of velocity, continuity of shear stress, continuity of pressure gradient along the flow direction, continuity of temperature and continuity of heat flux, as given below.
+
+The relevant boundary and interface conditions on velocity are
+
+$$
+U^{(1)} = V^{(1)} = 0 \quad \text{at } Y = -h^{(1)} + a \cos(\lambda X + \theta); \quad U^{(2)} = V^{(2)} = 0 \quad \text{at } Y = h^{(2)} + a \cos(\lambda X)
+$$
+
+$$
+U^{(1)} = U^{(2)}, \quad V^{(1)} = V^{(2)}, \quad \mu_{\text{eff}} \left( \frac{\partial U}{\partial Y} + \frac{\partial V}{\partial X} \right)^{(1)} = \mu \left( \frac{\partial U}{\partial Y} + \frac{\partial V}{\partial X} \right)^{(2)}, \quad \frac{\partial P^{(1)}}{\partial X^{(1)}} = \frac{\partial P^{(2)}}{\partial X^{(2)}} \quad \text{at } Y=0 \quad (9)
+$$
+
+The relevant boundary and interface conditions for temperature are
+
+$$
+\begin{align*}
+T^{(1)} &= T_1 [1 + a \cos(\lambda X)] \\
+&= \hat{T}_1 (\text{say}) && \text{at } Y = -h^{(1)} + a \cos(\lambda X + \theta) \\
+T^{(2)} &= T_2 [1 + a \cos(\lambda X)] \\
+&= \hat{T}_2 (\text{say}) && \text{at } Y = h^{(2)} + a \cos(\lambda X)
+\end{align*}
+$$
+
+$$
+T^{(1)} = T^{(2)}, \quad K_{\text{eff}} \left( \frac{\partial T}{\partial Y} + \frac{\partial T}{\partial X} \right)^{(1)} = K \left( \frac{\partial T}{\partial Y} + \frac{\partial T}{\partial X} \right)^{(2)} \quad \text{at } Y=0 \qquad (10)
+$$
+
+We next introduce the non-dimensional flow variables as
+
+$$
+x^{(1)} = \frac{X^{(1)}}{h^{(1)}}, x^{(2)} = \frac{X^{(2)}}{h^{(2)}}, y^{(1)} = \frac{Y^{(1)}}{h^{(1)}}, y^{(2)} = \frac{Y^{(2)}}{h^{(2)}}, u^{(1)} = \frac{h^{(1)}}{\nu} U^{(1)}, v^{(1)} = \frac{h^{(1)}}{\nu} V^{(1)}, u^{(2)} = \frac{h^{(2)}}{\nu} U^{(2)}, v^{(2)} = \frac{h^{(2)}}{\nu} V^{(2)},
+$$
+
+$$
+p^{(1)} = \frac{\rho P^{(1)}}{\mu^2 / h^{(1)^2}}, p^{(2)} = \frac{\rho P^{(2)}}{\mu^2 / h^{(2)^2}}, T^{*(1)} = \frac{T^{(1)} - T_s}{\hat{T}_2 - \hat{T}_1}, T^{*(2)} = \frac{T^{(2)} - T_s}{\hat{T}_2 - \hat{T}_1}, Gr = \frac{h^{(1)^3} g \beta (\hat{T}_2 - \hat{T}_1)}{\nu^2}, m = \frac{\mu_{\text{eff}}}{\mu}, k = \frac{K_{\text{eff}}}{K}, h = \frac{h^{(2)}}{h^{(1)}}
+$$
+
+$$
+\sigma = \frac{h^{(1)}}{\sqrt{\kappa}}, \quad \varepsilon = \frac{a}{h^{(1)}}, \quad \lambda^* = \frac{\lambda}{h^{(1)}}, \quad \text{Pr} = \frac{C_p \mu}{K} \tag{11}
+$$
+
+In terms of these non-dimensional variables, the basic Eqs. (1) to (8) can be expressed in the dimensionless form, as, (for simplicity, the notation is considered as $x^{(1)} = x$ ; $y^{(1)} = y$ in region-I and $x^{(2)} = x$ ; $y^{(2)} = y$ in region-II)
+
+Region-I
+
+$$
+\frac{\partial u^{(1)}}{\partial x} + \frac{\partial v^{(1)}}{\partial y} = 0 \quad (12)
+$$
+
+$$
+u^{(1)} \frac{\partial u^{(1)}}{\partial x} + v^{(1)} \frac{\partial u^{(1)}}{\partial y} = -\frac{\partial p^{(1)}}{\partial x} + m \left( \frac{\partial^2 u^{(1)}}{\partial x^2} + \frac{\partial^2 u^{(1)}}{\partial y^2} \right) + GrT^{*(1)} - \sigma^2 u^{(1)} \quad (13)
+$$
+
+$$
+u^{(1)} \frac{\partial v^{(1)}}{\partial x} + v^{(1)} \frac{\partial v^{(1)}}{\partial y} = -\frac{\partial p^{(1)}}{\partial y} + m \left(\frac{\partial^2 v^{(1)}}{\partial x^2} + \frac{\partial^2 v^{(1)}}{\partial y^2}\right) - \sigma^2 v^{(1)} \quad (14)
+$$
+
+$$
+u^{(1)} \frac{\partial T^{*(1)}}{\partial x} + v^{(1)} \frac{\partial T^{*(1)}}{\partial y} = \frac{k}{\mathrm{Pr}} \left(\frac{\partial^2 T^{*(1)}}{\partial x^2} + \frac{\partial^2 T^{*(1)}}{\partial y^2}\right) \quad (15)
+$$
+
+Region-II
+
+$$
+\frac{\partial u^{(2)}}{\partial x} + \frac{\partial v^{(2)}}{\partial y} = 0 \quad (16)
+$$
+
+
+---PAGE_BREAK---
+
+$$
+\begin{align}
+& u^{(2)} \frac{\partial u^{(2)}}{\partial x} + v^{(2)} \frac{\partial u^{(2)}}{\partial y} = - \frac{\partial p^{(2)}}{\partial x} + \frac{\partial^2 u^{(2)}}{\partial x^2} + \frac{\partial^2 u^{(2)}}{\partial y^2} + Grh^3 T^{*(2)} \tag{17} \\
+& u^{(2)} \frac{\partial v^{(2)}}{\partial x} + v^{(2)} \frac{\partial v^{(2)}}{\partial y} = - \frac{\partial p^{(2)}}{\partial y} + \frac{\partial^2 v^{(2)}}{\partial x^2} + \frac{\partial^2 v^{(2)}}{\partial y^2} \tag{18} \\
+& u^{(2)} \frac{\partial T^{*(2)}}{\partial x} + v^{(2)} \frac{\partial T^{*(2)}}{\partial y} = \frac{1}{\text{Pr}} \left( \frac{\partial^2 T^{*(2)}}{\partial x^2} + \frac{\partial^2 T^{*(2)}}{\partial y^2} \right) \tag{19}
+\end{align}
+$$
+
+Using Eqn. (11), the boundary and interface conditions (9) for the velocity field become
+
+$$
+\begin{gather*}
+u^{(1)} = v^{(1)} = 0 \quad \text{at} \quad y = -1 + \epsilon \cos(\lambda^* x + \theta); \quad u^{(2)} = v^{(2)} = 0 \quad \text{at} \quad y = 1 + \epsilon \cos(\lambda^* x) \\
+u^{(1)} = \frac{u^{(2)}}{h}, \quad v^{(1)} = \frac{v^{(2)}}{h}, \quad \frac{\partial u^{(1)}}{\partial y} + \frac{\partial v^{(1)}}{\partial x} = \frac{1}{mh^2} \left( \frac{\partial u^{(2)}}{\partial y} + \frac{\partial v^{(2)}}{\partial x} \right), \quad \frac{\partial p^{(1)}}{\partial x} = \frac{1}{h^3} \frac{\partial p^{(2)}}{\partial x} \quad \text{at } y=0 \quad (20)
+\end{gather*}
+$$
+
+Using Eqn. (11), the boundary and interface conditions (10) for the temperature field become
+
+$$
+T^{*(1)} = 0 \text{ at } y = -1 + \epsilon \cos(\lambda^* x + \theta);
+$$
+
+$$
+T^{*(2)} = 1 \text{ at } y = 1 + \epsilon \cos(\lambda^* x)
+$$
+
+$$
+T^{*(1)} = T^{*(2)}, \quad \frac{\partial T^{*(1)}}{\partial y} + \frac{\partial T^{*(1)}}{\partial x} = \frac{1}{kh} \left( \frac{\partial T^{*(2)}}{\partial y} + \frac{\partial T^{*(2)}}{\partial x} \right) \quad \text{at } y=0 \quad (21)
+$$
+
+In the static fluid we have (see Vajravelu and Sastri, 1978)
+
+$$
+0 = -\frac{\partial p_s}{\partial x} - \frac{\rho_s g h^{(1)3}}{\rho \nu^2} = -\frac{\partial p_s}{\partial x} - \frac{\rho_s g h^{(2)3}}{\rho \nu^2} \quad (22)
+$$
+
+In view of Eqn. (22), Equations (13) and (17) become
+
+$$
+u^{(1)} \frac{\partial u^{(1)}}{\partial x} + v^{(1)} \frac{\partial u^{(1)}}{\partial y} = -\frac{\partial (p^{(1)} - p_s)}{\partial x} + m \left( \frac{\partial^2 u^{(1)}}{\partial x^2} + \frac{\partial^2 u^{(1)}}{\partial y^2} \right) - \sigma^2 u^{(1)} + GrT^{*(1)} \quad (23)
+$$
+
+$$
+u^{(2)} \frac{\partial u^{(2)}}{\partial x} + v^{(2)} \frac{\partial u^{(2)}}{\partial y} = -\frac{\partial (p^{(2)} - p_s)}{\partial x} + \frac{\partial^2 u^{(2)}}{\partial x^2} + \frac{\partial^2 u^{(2)}}{\partial y^2} + Grh^3 T^{*(2)} \quad (24)
+$$
+
+## 3. Solutions to the problem
+
+Equations (12), (14)-(16), (18), (19), (23), and (24) are coupled and nonlinear, and are to be solved simultaneously. Due to the nonlinearity, exact analytical solutions are difficult; however, approximate solutions can be obtained using perturbation techniques. Assuming that the solution consists of a mean part and a perturbed part, the velocity, pressure and temperature can be written as
+
+$$
+u^{(j)} (x, y) = u_{0}^{(j)} (y) + u_{1}^{(j)} (x, y) \quad (25)
+$$
+
+$$
+v^{(j)} (x, y) = v_{1}^{(j)} (x, y) \quad (26)
+$$
+
+$$
+p^{(j)}(x, y) = p_0^{(j)}(x, y) + p_1^{(j)}(x, y) \quad (27)
+$$
+
+$$
+T^{*(j)} (x, y) = T_0^{*(j)} (y) + T_1^{*(j)} (x, y) \quad (28)
+$$
+
+where the perturbed quantities $u_1$, $v_1$, $p_1$ and $T_1^*$ are small compared with the mean (zeroth-order) quantities $u_0$, $p_0$ and $T_0^*$. The asterisk on $T$ and $\lambda$ is dropped for simplicity in what follows.
+
+Using Eqs. (25) to (28) in the Eqs. (12), (14)-(16), (18), (19), (23), and (24), separating the mean part (zeroth order) and the perturbed part (first order), gives the following equations.
+
+Zeroth order equations
+
+$$
+\begin{align}
+&\frac{d^2 T_0^{(1)}}{dy^2} = 0 \tag{29}\\
+&m\frac{d^2 u_0^{(1)}}{dy^2} - \sigma^2 u_0^{(1)} + GrT_0^{(1)} = 0 \tag{30}
+\end{align}
+$$
+
+$$
+\frac{d^2 T_0^{(2)}}{dy^2} = 0 \tag{31}
+$$
+
+$$
+\frac{d^2 u_0^{(2)}}{dy^2} + Grh^3 T_0^{(2)} = 0 \tag{32}
+$$
+---PAGE_BREAK---
+
+First order equations
+
+$$
+\begin{align}
+& u_0^{(1)} \frac{\partial u_1^{(1)}}{\partial x} + v_1^{(1)} \frac{du_0^{(1)}}{dy} = - \frac{\partial p_1^{(1)}}{\partial x} + m \left( \frac{\partial^2 u_1^{(1)}}{\partial x^2} + \frac{\partial^2 u_1^{(1)}}{\partial y^2} \right) - \sigma^2 u_1^{(1)} + GrT_1^{(1)} \tag{33} \\
+& u_0^{(1)} \frac{\partial v_1^{(1)}}{\partial x} = - \frac{\partial p_1^{(1)}}{\partial y} + m \left( \frac{\partial^2 v_1^{(1)}}{\partial x^2} + \frac{\partial^2 v_1^{(1)}}{\partial y^2} \right) - \sigma^2 v_1^{(1)} \tag{34} \\
+& u_0^{(1)} \frac{\partial T_1^{(1)}}{\partial x} + v_1^{(1)} \frac{dT_0^{(1)}}{dy} = \frac{k}{Pr} \left( \frac{\partial^2 T_1^{(1)}}{\partial x^2} + \frac{\partial^2 T_1^{(1)}}{\partial y^2} \right) \tag{35} \\
+& u_0^{(2)} \frac{\partial u_1^{(2)}}{\partial x} + v_1^{(2)} \frac{du_0^{(2)}}{dy} = - \frac{\partial p_1^{(2)}}{\partial x} + \frac{\partial^2 u_1^{(2)}}{\partial x^2} + \frac{\partial^2 u_1^{(2)}}{\partial y^2} + Grh^3 T_1^{(2)} \tag{36} \\
+& u_0^{(2)} \frac{\partial v_1^{(2)}}{\partial x} = - \frac{\partial p_1^{(2)}}{\partial y} + \frac{\partial^2 v_1^{(2)}}{\partial x^2} + \frac{\partial^2 v_1^{(2)}}{\partial y^2} \tag{37} \\
+& u_0^{(2)} \frac{\partial T_1^{(2)}}{\partial x} + v_1^{(2)} \frac{dT_0^{(2)}}{dy} = \frac{1}{Pr} \left( \frac{\partial^2 T_1^{(2)}}{\partial x^2} + \frac{\partial^2 T_1^{(2)}}{\partial y^2} \right) \tag{38}
+\end{align}
+$$
+
+In view of Eqs. (25) to (28) the boundary and interface conditions as defined in Eqs. (20) and (21) can be split as follows,
+Zeroth order boundary and interface conditions for velocity and temperature are
+
+$$
+\begin{align*}
+& u_{0}^{(1)} = 0 \text{ at } y = -1; & u_{0}^{(2)} = 0 \text{ at } y = 1; & u_{0}^{(1)} = \frac{u_{0}^{(2)}}{h}, \quad \frac{du_{0}^{(1)}}{dy} = \frac{1}{mh^2}\frac{du_{0}^{(2)}}{dy} &&\text{at } y=0 \tag{39}\\
+& T_{0}^{(1)} = 0 \text{ at } y = -1; & T_{0}^{(2)} = 1 \text{ at } y = 1; & T_{0}^{(1)} = T_{0}^{(2)}, \quad \frac{dT_{0}^{(1)}}{dy} = \frac{1}{kh}\frac{dT_{0}^{(2)}}{dy} &&\text{at } y=0 \tag{40}
+\end{align*}
+$$
+
+First order boundary and interface conditions for velocity and temperature are
+
+$$
+\begin{align*}
+& u_1^{(1)} = -\cos(\lambda x + \theta) \frac{du_0^{(1)}}{dy}, \quad v_1^{(1)} = 0 \quad \text{at } y = -1; && u_1^{(2)} = -\frac{\cos(\lambda x)}{h} \frac{du_0^{(2)}}{dy}, \quad v_1^{(2)} = 0 && \text{at } y = 1 \\
+& u_1^{(1)} = \frac{1}{h} u_1^{(2)}, \quad v_1^{(1)} = \frac{1}{h} v_1^{(2)}, \quad \frac{\partial u_1^{(1)}}{\partial y} + \frac{\partial v_1^{(1)}}{\partial x} = \frac{1}{mh^2} \left( \frac{\partial u_1^{(2)}}{\partial y} + \frac{\partial v_1^{(2)}}{\partial x} \right), \quad \frac{\partial p^{(1)}}{\partial x} = \frac{1}{h^3} \frac{\partial p^{(2)}}{\partial x} && \text{at } y=0 \tag{41} \\
+& T_1^{(1)} = -\cos(\lambda x + \theta) \frac{dT_0^{(1)}}{dy} && at y = -1; & T_1^{(2)} = -\frac{\cos(\lambda x)}{h} \frac{dT_0^{(2)}}{dy} && at y = 1 \\
+& T_1^{(1)} = T_1^{(2)}, \quad \frac{\partial T_1^{(1)}}{\partial y} + \frac{\partial T_1^{(2)}}{\partial x} = \frac{1}{kh} \left( \frac{\partial T_1^{(2)}}{\partial y} + \frac{\partial T_1^{(2)}}{\partial x} \right) && at y=0 \tag{42}
+\end{align*}
+$$
+
+In order to solve Eqs. (33) to (38) for the first-order quantities, it is convenient to introduce a stream function $\bar{\psi}$ in the following form
+
+$$
+u_1^{(j)} = -\frac{\partial\bar{\psi}^{(j)}}{\partial y} \quad \text{and} \quad v_1^{(j)} = \frac{\partial\bar{\psi}^{(j)}}{\partial x} \quad \text{for } j=1,2 \tag{43}
+$$
+
+The stream function approach reduces the number of dependent variables to be solved and also eliminates the pressure from the list of variables. Differentiating Eqn. (33) with respect to $y$, differentiating Eqn. (34) with respect to $x$, and subtracting the second result from the first eliminates the pressure $p_1^{(1)}$. A similar procedure is used to eliminate the pressure $p_1^{(2)}$ from Eqs. (36) and (37). After elimination of $p_1^{(1)}$ and $p_1^{(2)}$, Eqs. (33) to (38) can be expressed in terms of the stream function $\bar{\psi}$ in the form
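The first property, that the stream function of Eq. (43) satisfies the continuity equation identically, can be checked symbolically (a verification sketch, not code from the paper):

```python
import sympy as sp

x, y = sp.symbols('x y')
psi = sp.Function('psi')(x, y)

u1 = -sp.diff(psi, y)   # Eq. (43)
v1 = sp.diff(psi, x)

# Continuity, du1/dx + dv1/dy, cancels for any smooth psi because
# mixed partial derivatives commute.
divergence = sp.simplify(sp.diff(u1, x) + sp.diff(v1, y))
```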
+
+Region-I
+
+$$
+u_{0}^{(1)}\bar{\psi}_{xyy}^{(1)} - \bar{\psi}_{x}^{(1)}u_{0yy}^{(1)} + u_{0}^{(1)}\bar{\psi}_{xxx}^{(1)} - m\left(\bar{\psi}_{xxxx}^{(1)} + 2\bar{\psi}_{xxyy}^{(1)} + \bar{\psi}_{yyyy}^{(1)}\right) + \sigma^2\left(\bar{\psi}_{xx}^{(1)} + \bar{\psi}_{yy}^{(1)}\right) + GrT_{1y}^{(1)} = 0 \quad (44)
+$$
+
+$$
+u_{0}^{(1)}T_{1x}^{(1)} + \bar{\psi}_{x}^{(1)}T_{0y}^{(1)} = \frac{k}{Pr}\left(T_{1xx}^{(1)} + T_{1yy}^{(1)}\right) \quad (45)
+$$
+
+Region-II
+
+$$
+u_{0}^{(2)}\bar{\psi}_{xyy}^{(2)} - \bar{\psi}_{x}^{(2)}u_{0yy}^{(2)} + u_{0}^{(2)}\bar{\psi}_{xxx}^{(2)} - \bar{\psi}_{xxxx}^{(2)} - 2\bar{\psi}_{xxyy}^{(2)} - \bar{\psi}_{yyyy}^{(2)} + Grh^3 T_{1y}^{(2)} = 0 \quad (46)
+$$
+
+$$
+u_{0}^{(2)}T_{1x}^{(2)} + \bar{\psi}_{x}^{(2)}T_{0y}^{(2)} = \frac{1}{Pr}\left(T_{1xx}^{(2)} + T_{1yy}^{(2)}\right) \quad (47)
+$$
+---PAGE_BREAK---
+
+where a suffix $x$ or $y$ denotes a partial derivative with respect to $x$ or $y$, and a suffix 0 or 1 denotes a zeroth-order or first-order quantity, respectively.
+
+The corresponding boundary and interface conditions on velocity and temperature reduces to
+
+$$
+\bar{\psi}_y^{(1)} = \cos(\lambda x + \theta) u_{0y}^{(1)}, \quad \bar{\psi}_x^{(1)} = 0 \text{ at } y = -1; \qquad \bar{\psi}_y^{(2)} = \frac{\cos(\lambda x)}{h} u_{0y}^{(2)}, \quad \bar{\psi}_x^{(2)} = 0 \text{ at } y = 1,
+$$
+
+$$
+\bar{\psi}_{y}^{(1)} = \frac{\bar{\psi}_{y}^{(2)}}{h}, \quad \bar{\psi}_{x}^{(1)} = \frac{\bar{\psi}_{x}^{(2)}}{h}, \quad \bar{\psi}_{xx}^{(1)} - \bar{\psi}_{yy}^{(1)} = \frac{1}{mh^2} (\bar{\psi}_{xx}^{(2)} - \bar{\psi}_{yy}^{(2)}) \text{ at } y=0
+$$
+
+$$
+\bar{\psi}_x^{(1)} u_{0y}^{(1)} - u_0^{(1)} \bar{\psi}_{xy}^{(1)} + m (\bar{\psi}_{xxy}^{(1)} + \bar{\psi}_{yyy}^{(1)}) - \sigma^2 \bar{\psi}_y^{(1)} - Gr T_1^{(1)} = \frac{1}{h^3} (\bar{\psi}_x^{(2)} u_{0y}^{(2)} - u_0^{(2)} \bar{\psi}_{xy}^{(2)} + \bar{\psi}_{xxy}^{(2)} + \bar{\psi}_{yyy}^{(2)} - Grh^3 T_1^{(2)}) \quad \text{at } y=0 \quad (48)
+$$
+
+$$
+T_1^{(1)} = -\cos(\lambda x + \theta) \frac{dT_0^{(1)}}{dy} \quad \text{at } y = -1; \qquad T_1^{(2)} = -\frac{\cos(\lambda x) dT_0^{(2)}}{h dy} \quad \text{at } y = 1
+$$
+
+$$
+T_1^{(1)} = T_1^{(2)}, \quad T_{1y}^{(1)} + T_{1x}^{(1)} = \frac{T_{1y}^{(2)} + T_{1x}^{(2)}}{kh} \quad \text{at } y=0 \tag{49}
+$$
+
+We assume stream function and temperature in the following form
+
+$$
+\bar{\psi}^{(j)} = \epsilon e^{i\lambda x} \psi(y), \quad T_1^{(j)} = \epsilon e^{i\lambda x} t(y) \quad \text{for } j=1,2 \tag{50}
+$$
+
+from which we infer
+
+$$
+u_1(x, y) = \epsilon e^{i\lambda x} u_1(y), \quad v_1(x, y) = \epsilon e^{i\lambda x} v_1(y) \tag{51}
+$$
+
+where *i* is the imaginary unit.
+
+In view of Eqn. (50), Eqs. (44) to (47) become
+
+Region-I
+
+$$
+m\psi_{yyyy}^{(1)} - i(\lambda u_{0}^{(1)} + 2m\lambda^2 + \sigma^2)\psi_{yy}^{(1)} + (i\lambda u_{0yy}^{(1)} + \sigma^2\lambda^2 + i\lambda^3 u_{0}^{(1)} + m\lambda^4)\psi^{(1)} - Grt_{y}^{(1)} = 0 \quad (52)
+$$
+
+$$
+i(\lambda u_0^{(1)} t^{(1)} + \lambda T_{0y}^{(1)} \psi^{(1)}) = \frac{k}{Pr} (-\lambda^2 t^{(1)} + t_{yy}^{(1)}) \quad (53)
+$$
+
+Region-II
+
+$$
+\psi_{yyyy}^{(2)} - (i\lambda u_0^{(2)} + 2\lambda^2)\psi_{yy}^{(2)} + (i\lambda u_{0yy}^{(2)} + i\lambda^3 u_0^{(2)} + \lambda^4)\psi^{(2)} - Gr h^3 t_y^{(2)} = 0 \tag{54}
+$$
+
+$$
+i\lambda (u_0^{(2)} t^{(2)} + T_{0y}^{(2)} \psi^{(2)}) = \frac{1}{Pr} (t_{yy}^{(2)} - \lambda^2 t^{(2)}) \tag{55}
+$$
+
+The boundary and interface conditions (48) and (49) can be written in terms of $\psi^{(j)}$ and $t^{(j)}$ as
+
+$$
+\frac{\partial \psi^{(1)}}{\partial y} = \cos(\theta) \frac{du_0^{(1)}}{dy}, \quad \psi^{(1)} = 0 \quad \text{at } y = -1;
+$$
+
+$$
+\frac{\partial \psi^{(2)}}{\partial y} = \frac{1}{h} \frac{du_0^{(2)}}{dy}, \quad \psi^{(2)} = 0, \quad \text{at } y=1
+$$
+
+$$
+\psi_y^{(1)} = \frac{\psi_y^{(2)}}{h}, \quad \psi^{(1)} = \frac{\psi^{(2)}}{h}, \quad \psi_{yy}^{(1)} + \lambda^2 \psi^{(1)} = \frac{\psi_{yy}^{(2)} + \lambda^2 \psi^{(2)}}{mh^2} \quad \text{at } y=0
+$$
+
+$$
+i\lambda \psi^{(1)} u_{0y}^{(1)} - i\lambda u_0^{(1)}\psi_y^{(1)} + m(\psi_{yyy}^{(1)} - \lambda^2\psi_y^{(1)}) - \sigma^2\psi_y^{(1)} - Gr\, t^{(1)} = \frac{1}{h^3}(i\lambda\psi^{(2)} u_{0y}^{(2)} - i\lambda u_0^{(2)}\psi_y^{(2)} + \psi_{yyy}^{(2)} - \lambda^2\psi_y^{(2)} - Gr h^3 t^{(2)}) \quad \text{at } y=0 \quad (56)
+$$
+
+$$
+t^{(1)} = -\cos(\theta) \frac{dT_0^{(1)}}{dy} \quad \text{at } y = -1; \quad t^{(2)} = -\frac{1}{h} \frac{dT_0^{(2)}}{dy} \quad \text{at } y = 1; \quad t^{(1)} = t^{(2)}, \quad t_y^{(1)} + i\lambda t^{(1)} = \frac{t_y^{(2)} + i\lambda t^{(2)}}{kh} \quad \text{at } y = 0 \quad (57)
+$$
+
+We restrict our attention to the real parts of the solutions for the perturbed quantities $\psi$, $t$, $u_1$, $v_1$ and $T_1$.
+
+We consider only small values of $\lambda$. On substituting
+
+$$
+\psi(\lambda, y) = \sum_{z=0}^{\infty} \lambda^z \psi_z, \quad t(\lambda, y) = \sum_{z=0}^{\infty} \lambda^z t_z \tag{58}
+$$
+
+into Eqs. (52) to (57), we obtain, to order $\lambda$, the following sets of ordinary differential equations.
+
+Zeroth order
+
+$$
+\begin{gather}
+\frac{d^2 t_{10}}{dy^2} = 0 \\
+m \frac{d^4 \psi_{10}}{dy^4} - \sigma^2 \frac{d^2 \psi_{10}}{dy^2} - Gr \frac{dt_{10}}{dy} = 0
+\end{gather}
+$$
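The zeroth-order system above is solvable in closed form: $t_{10}$ is linear in $y$, and the $\psi_{10}$ equation is then a constant-coefficient ODE whose homogeneous solutions are exactly the polynomial and $\cosh(\sqrt{a}\,y)$, $\sinh(\sqrt{a}\,y)$ terms of the appendix, with $a = \sigma^2/m$. A sympy sketch, assuming $t_{10} = c_1 y + c_2$ as in the appendix:

```python
import sympy as sp

y, m, sigma, Gr = sp.symbols('y m sigma Gr', positive=True)
c1, c2 = sp.symbols('c1 c2')   # integration constants of t10
psi = sp.Function('psi')

# d^2 t10/dy^2 = 0  ->  t10 is linear in y
t10 = c1 * y + c2

# m psi'''' - sigma^2 psi'' = Gr dt10/dy  (zeroth-order stream function, region-I)
ode = sp.Eq(m * psi(y).diff(y, 4) - sigma**2 * psi(y).diff(y, 2),
            Gr * sp.diff(t10, y))

# General solution: span{1, y, exp(+-sigma*y/sqrt(m))} plus a quadratic
# particular part -- i.e. the cosh/sinh(sqrt(a) y) form with a = sigma^2/m.
sol = sp.dsolve(ode, psi(y))
print(sol.rhs)
```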
+---PAGE_BREAK---
+
+$$ \frac{d^2 t_{20}}{dy^2} = 0 $$
+
+$$ \frac{d^4 \psi_{20}}{dy^4} - Grh^3 \frac{dt_{20}}{dy} = 0 \qquad (59) $$
+
+First order
+
+$$ \frac{d^2 t_{11}}{dy^2} = i \frac{\text{Pr}}{k} \left( u_0^{(1)} t_{10} + \frac{dT_0^{(1)}}{dy} \psi_{10} \right) $$
+
+$$ m \frac{d^4 \psi_{11}}{dy^4} - \sigma^2 \frac{d^2 \psi_{11}}{dy^2} = i \left( u_0^{(1)} \frac{d^2 \psi_{10}}{dy^2} - \frac{d^2 u_0^{(1)}}{dy^2} \psi_{10} \right) + Gr \frac{dt_{11}}{dy} $$
+
+$$ \frac{d^2 t_{21}}{dy^2} = i\, \text{Pr} \left( u_0^{(2)} t_{20} + \frac{dT_0^{(2)}}{dy} \psi_{20} \right) $$
+
+$$ \frac{d^4 \psi_{21}}{dy^4} = i \left( u_0^{(2)} \frac{d^2 \psi_{20}}{dy^2} - \frac{d^2 u_0^{(2)}}{dy^2} \psi_{20} \right) + Grh^3 \frac{dt_{21}}{dy} \qquad (60) $$
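As a consistency check, substituting $\psi = \psi_0 + \lambda\psi_1$ and $t = t_0 + \lambda t_1$ into the region-II momentum equation and collecting powers of $\lambda$ should reproduce the zeroth- and first-order equations above. The sketch below does this symbolically, taking the momentum operator in the fourth-order form consistent with Eqs. (59)–(60):

```python
import sympy as sp

y, lam, Gr, h = sp.symbols('y lam Gr h', positive=True)
u0 = sp.Function('u0')(y)
psi0, psi1 = sp.Function('psi0')(y), sp.Function('psi1')(y)
t0, t1 = sp.Function('t0')(y), sp.Function('t1')(y)

# Truncated expansions psi = psi0 + lam*psi1, t = t0 + lam*t1
psi = psi0 + lam * psi1
t = t0 + lam * t1

# Region-II momentum operator (Eq. (54)), applied to the expansions
expr = (psi.diff(y, 4) - (sp.I*lam*u0 + 2*lam**2) * psi.diff(y, 2)
        + (sp.I*lam*u0.diff(y, 2) + sp.I*lam**3*u0 + lam**4) * psi
        - Gr * h**3 * t.diff(y))

order0 = sp.expand(expr.subs(lam, 0))            # O(1): the zeroth-order equation
order1 = sp.expand(expr.diff(lam).subs(lam, 0))  # O(lam): the first-order equation
print(order0)
print(order1)
```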
+
+Zeroth order boundary and interface conditions in terms of stream function and temperature are
+
+$$ \frac{d\psi_{10}}{dy} = \cos(\theta) \frac{du_0^{(1)}}{dy}, \quad \psi_{10} = 0 \text{ at } y = -1; \quad \frac{d\psi_{20}}{dy} = \frac{1}{h} \frac{du_0^{(2)}}{dy}, \quad \psi_{20} = 0 \text{ at } y = 1 $$
+
+$$ \frac{d\psi_{10}}{dy} = \frac{1}{h}\frac{d\psi_{20}}{dy}, \quad \psi_{10} = \frac{1}{h}\psi_{20}, \quad \frac{d^2\psi_{10}}{dy^2} = \frac{1}{mh^2}\frac{d^2\psi_{20}}{dy^2} \quad \text{at } y=0 $$
+
+$$ m \frac{d^3\psi_{10}}{dy^3} - \sigma^2 \frac{d\psi_{10}}{dy} - Gr t_{10} = \frac{1}{h^3} \left( \frac{d^3\psi_{20}}{dy^3} - Grh^3 t_{20} \right) \quad \text{at } y=0 $$
+
+$$ t_{10} = -\cos(\theta) \frac{dT_0^{(1)}}{dy} \quad \text{at } y = -1; \quad t_{20} = -\frac{1}{h} \frac{dT_0^{(2)}}{dy} \quad \text{at } y = 1; \quad t_{10} = t_{20}, \quad \frac{dt_{10}}{dy} = \frac{1}{kh} \frac{dt_{20}}{dy} \quad \text{at } y = 0 \qquad (61) $$
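To see how such wall and interface conditions fix the integration constants, the toy sketch below solves the analogous two-layer mean-temperature problem: a linear profile in each layer, wall values $T_0(-1)=0$ and $T_0(1)=1$ (the values used in the Section 4 comparison), continuity of $T_0$ at $y=0$, and a flux condition of the same $1/(kh)$ form as in Eq. (61). The 4×4 system is an assumption of this illustration, not the paper's full set of constants:

```python
import numpy as np

def mean_temperature(k=1.0, h=1.0):
    """Constants (a, b, c, d) of T0^(1) = a*y + b on [-1, 0] and
    T0^(2) = c*y + d on [0, 1]."""
    A = np.array([
        [-1.0, 1.0,  0.0,          0.0],  # T0^(1)(-1) = 0
        [ 0.0, 0.0,  1.0,          1.0],  # T0^(2)(+1) = 1
        [ 0.0, 1.0,  0.0,         -1.0],  # continuity: T0^(1)(0) = T0^(2)(0)
        [ 1.0, 0.0, -1.0 / (k*h),  0.0],  # flux: dT0^(1)/dy = (1/kh) dT0^(2)/dy
    ])
    rhs = np.array([0.0, 1.0, 0.0, 0.0])
    return np.linalg.solve(A, rhs)

a, b, c, d = mean_temperature(k=1.0, h=1.0)
# For k = h = 1 both layers collapse to T0 = (y + 1)/2, which reproduces the
# T0 column of Table 2 (T0 = 0.125 at y = -0.75, T0 = 0.5 at y = 0, ...).
print(a, b, c, d)
```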
+
+First order boundary and interface conditions in terms of stream function and temperature are
+
+$$ \frac{d\psi_{11}}{dy} = 0, \quad \psi_{11} = 0 \text{ at } y = -1; \quad \frac{d\psi_{21}}{dy} = 0, \quad \psi_{21} = 0 \text{ at } y = 1 $$
+
+$$ \frac{d\psi_{11}}{dy} = \frac{1}{h}\frac{d\psi_{21}}{dy}, \quad \psi_{11} = \frac{\psi_{21}}{h}, \quad \frac{d^2\psi_{11}}{dy^2} = \frac{1}{mh^2}\frac{d^2\psi_{21}}{dy^2} \quad \text{at } y=0 $$
+
+$$ i\frac{du_0^{(1)}}{dy}\psi_{10} - i u_0^{(1)}\frac{d\psi_{10}}{dy} + m\frac{d^3\psi_{11}}{dy^3} - \sigma^2\frac{d\psi_{11}}{dy} - Gr t_{11} = \frac{1}{h^3}\left(i\frac{du_0^{(2)}}{dy}\psi_{20} - iu_0^{(2)}\frac{d\psi_{20}}{dy} + \frac{d^3\psi_{21}}{dy^3} - Grh^3t_{21}\right) \quad \text{at } y=0 $$
+
+$$ t_{11} = 0 \text{ at } y = -1; \quad t_{21} = 0 \text{ at } y = 1; \quad t_{11} = t_{21}, \quad \frac{dt_{11}}{dy} + it_{10} = \frac{1}{kh}\left(\frac{dt_{21}}{dy} + it_{20}\right) \quad \text{at } y=0 \qquad (62) $$
+
+The set of Eqs. (29) to (32) subject to the boundary and interface conditions (39) and (40) has been solved exactly for $u_0^{(j)}$ and $T_0^{(j)}$, and the sets of Eqs. (59) and (60) subject to the boundary and interface conditions (61) and (62) have been solved for $\psi_j$ and $t_j$ ($j=1, 2$). From these solutions, the first order quantities can be put in the form
+
+$$ \psi_j = (\psi_r + i\psi_i)_j = \psi_{j0} + \lambda\psi_{j1}, \quad t_j = (t_r + i t_i)_j = t_{j0} + i\lambda t_{j1} \quad (j=1, 2) \qquad (63) $$
+
+where suffix r denotes the real part and i denotes the imaginary part.
+
+Considering only the real part, the expressions for the first order velocity and temperature become
+
+$$ u_1^{(j)} = \varepsilon \left( \lambda \sin(\lambda x) \frac{d\psi_i^{(j)}}{dy} - \cos(\lambda x) \frac{d\psi_r^{(j)}}{dy} \right) \qquad (64) $$
+
+$$ v_1^{(j)} = \varepsilon (-\lambda\psi_r^{(j)}\sin(\lambda x) - \lambda^2\psi_i^{(j)}\cos(\lambda x)) \qquad (65) $$
+
+$$ T_1^{(j)} = \varepsilon (\cos(\lambda x)t_r^{(j)} - \lambda\sin(\lambda x)t_i^{(j)}) \qquad (66) $$
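Given the real and imaginary amplitude profiles, Eqs. (64)–(66) assemble the physical first-order fields. A numerical sketch, in which the amplitude functions are arbitrary placeholders rather than the appendix solutions:

```python
import numpy as np

def first_order_fields(y, psi_r, psi_i, t_r, t_i, lam, x, eps):
    """Evaluate Eqs. (64)-(66) on a grid; psi_r/psi_i/t_r/t_i are callables."""
    dy = y[1] - y[0]
    dpsi_r = np.gradient(psi_r(y), dy)   # d(psi_r)/dy by finite differences
    dpsi_i = np.gradient(psi_i(y), dy)
    u1 = eps * (lam * np.sin(lam*x) * dpsi_i - np.cos(lam*x) * dpsi_r)      # (64)
    v1 = eps * (-lam * psi_r(y) * np.sin(lam*x)
                - lam**2 * psi_i(y) * np.cos(lam*x))                         # (65)
    T1 = eps * (np.cos(lam*x) * t_r(y) - lam * np.sin(lam*x) * t_i(y))      # (66)
    return u1, v1, T1

# Placeholder amplitudes; lam*x = pi/4 and eps = 0.02 as in Section 4.
y = np.linspace(-1.0, 1.0, 201)
u1, v1, T1 = first_order_fields(y, np.sin, np.cos, np.tanh, np.sinh,
                                lam=0.05, x=(np.pi/4)/0.05, eps=0.02)
```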
+
+The total solution for the velocity and temperature is the sum of the mean and perturbed parts.
+
+The solutions and constants are given in the Appendix.
+---PAGE_BREAK---
+
+### 3.1 Skin friction and Nusselt number
+
+The shearing stress $τ_{xy}$ at any point in the fluid is given in non-dimensional form, by
+
+$$ \tau_{xy} = \left( \frac{h^2}{\rho \nu^2} \right) \bar{\tau}_{xy} = \frac{\partial u}{\partial y} + \frac{\partial v}{\partial x} \quad (67) $$
+
+At the wavy walls, $y = -1 + \varepsilon \cos(\lambda x + \theta)$ and $y = 1 + \frac{\varepsilon \cos(\lambda x)}{h}$, the skin friction $τ_{xy}$ becomes
+
+$$ \tau_{-1} = \tau_{-1}^{0} + \varepsilon (\cos(\lambda x + \theta)\, u_{0}''(-1) + u_{1}'(-1)) \quad (68) $$
+
+and
+
+$$ \tau_1 = \tau_1^0 + \varepsilon \left( \frac{1}{h} \cos(\lambda x)\, u_0''(1) + u_1'(1) \right) \quad (69) $$
+
+respectively, where
+
+$$ \tau_{-1}^{0} = \frac{du_{0}^{(1)}}{dy}(-1) \quad \text{and} \quad \tau_{1}^{0} = \frac{du_{0}^{(2)}}{dy}(1) $$
+
+The dimensionless Nusselt number is given by
+
+$$ Nu = \frac{\partial T}{\partial y} = T'_{0}(y) + \varepsilon Re(e^{i\lambda x} T'_{1}(y)) \quad (70) $$
+
+At the wavy walls, $y = -1 + \varepsilon \cos(\lambda x + \theta)$ and $y = 1 + \frac{\varepsilon \cos(\lambda x)}{h}$, Eqn. (70) assumes the form
+
+$$ Nu_{-1} = Nu_{-1}^{0} + \varepsilon (\cos(\lambda x + \theta)\, T_{0}^{(1)\prime\prime}(-1) + t_1'(-1)) \quad (71) $$
+
+and
+
+$$ Nu_1 = Nu_1^0 + \varepsilon \left( \frac{1}{h} \cos(\lambda x)\, T_0^{(2)\prime\prime}(1) + t_1'(1) \right) \quad (72) $$
+
+respectively, where
+
+$$ Nu_{-1}^{0} = \frac{dT_{0}^{(1)}}{dy}(-1) \quad \text{and} \quad Nu_{1}^{0} = \frac{dT_{0}^{(2)}}{dy}(1) $$
+
+where Re denotes the real part.
+
+The expressions for $\tau_{-1}^0$, $\tau_1^0$, $Nu_{-1}^0$ and $Nu_1^0$ are obtained from the zeroth order solutions $u_0$ and $T_0$ and are evaluated numerically for several sets of values of the parameters $m$, $h$, $k$ and $\theta$. The wall skin frictions $\tau_{-1}$, $\tau_1$ and the wall Nusselt numbers $Nu_{-1}$ and $Nu_1$ are also calculated numerically, and some of the qualitatively interesting features are presented graphically.
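A small sketch of how Eq. (70) is evaluated at a wall: the mean slope plus $\varepsilon$ times the real part of the complex perturbation slope. The numbers below are illustrative placeholders, not the paper's solutions:

```python
import numpy as np

def nusselt(T0_prime, t_prime, lam, x, eps):
    """Local Nusselt number, Eq. (70): Nu = T0'(y) + eps*Re(e^{i lam x} t'(y)).
    t_prime may be complex (t = t_r + i*t_i)."""
    return T0_prime + eps * np.real(np.exp(1j * lam * x) * t_prime)

# Mean conduction profile T0 = (y+1)/2 gives T0' = 0.5 at either wall; a small
# (made-up) complex perturbation slope shifts Nu slightly away from 0.5.
Nu_left = nusselt(0.5, 0.01 - 0.002j, lam=0.05, x=(np.pi/4)/0.05, eps=0.02)
print(Nu_left)
```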
+
+## 4. Results and discussion
+
+Mixed convective flow and heat transfer in a composite porous medium in a vertical wavy channel is studied analytically. The Prandtl number, wave number, amplitude parameter and $\lambda x$ are fixed at 0.7, 0.05, 0.02 and 0.785398 respectively for all computations, whereas the Grashof number, viscosity ratio, width ratio, conductivity ratio and traveling thermal temperature are fixed at 5, 1, 1, 1 and 0.785398 respectively for all the graphs except for the parameter being varied. The effect of the porous parameter $\sigma$ is examined throughout Figures 2 to 11 and Tables 1 to 3.
+
+The effect of increasing the Grashof number $Gr$ is to increase the fluid motion for the zeroth order velocity $u_0$, as seen in Figure 2a. For a large porous parameter $\sigma$, the frictional drag against the flow becomes pronounced and, as a result, the velocity is generally reduced in the porous region: as $\sigma$ increases, the velocity decreases significantly in the permeable fluid layer. Through the dragging effect across the interface, the velocity in region-II also decreases as $\sigma$ increases. The first order velocity decreases in region-I ($y = -1$ to 0 approximately) and increases in region-II as the Grashof number increases, as seen in Figure 2b. The first order velocity $u_1$ is pronounced as the porous parameter $\sigma$ increases in region-I ($y = -1$ to 0 approximately), whereas it decreases in region-II as $\sigma$ increases. The velocity $u$ for different values of the Grashof number and porous parameter is shown in Figure 2c; the effect of $Gr$ and $\sigma$ on the velocity $u$ parallel to the flow direction is similar to their effect on the zeroth order velocity $u_0$. Physically, an increase in the Grashof number means an increase in the buoyancy force, which supports the motion. The dependence of the fluid velocity $v$ perpendicular to the channel length on the Grashof number and porous parameter is shown in Figure 2d: as the Grashof number increases, the velocity $v$ decreases, whereas as the porous parameter $\sigma$ increases, the velocity $v$ increases.
+---PAGE_BREAK---
+
+The effect of the viscosity ratio $m (= \mu_{eff} / \mu)$ on the velocities $u$ and $v$ is shown in Figure 3. As the viscosity ratio $m$ increases, the zeroth order velocity decreases in both regions, as seen in Figure 3a. It is also observed that as the porous parameter $\sigma$ increases, the zeroth order velocity decreases in both regions; however, the effect of $\sigma$ is more operative in region-I than in region-II. The effect of the viscosity ratio on the first order velocity $u_1$ is to enhance the velocity in magnitude in the porous region and suppress it in the viscous region, as seen in Figure 3b. The effect of the viscosity ratio $m$ and porous parameter $\sigma$ on the velocity $u$ is again similar to their effect on the zeroth order velocity $u_0$, as seen in Figure 3c. Physically, an increase in the viscosity ratio means the fluid becomes thicker, which weakens the flow field. The fluid velocity $v$ perpendicular to the channel length increases as the viscosity ratio $m$ and porous parameter $\sigma$ increase, as seen in Figure 3d.
+
+**Figure 2:** Velocity profiles for different values of Grashof number and porous parameter.
+(a) zeroth order profiles, (b) first order profiles, (c) velocity profiles in $u$ and (d) velocity profiles in $v$.
+
+The influence of the width ratio $h (= h^{(2)}/h^{(1)})$ on the velocity field is displayed in Figure 4. The effect of the width ratio $h$ on the zeroth order velocity $u_0$ is to enhance the velocity in both regions; that is, the larger the width of the clear viscous fluid layer compared to that of the permeable fluid layer, the stronger the flow field. The width ratio $h$ is more effective in the viscous fluid region than in the permeable fluid region. The effect of the porous parameter $\sigma$ on the zeroth order velocity is to reduce the velocity in both regions, as seen in Figure 4a. The effect of the width ratio $h$ on the first order velocity $u_1$ is not significant in region-I compared to region-II, where the velocity increases with the width ratio. The effect of the porous parameter $\sigma$ is to decrease the first order velocity $u_1$ in region-II, while its effect is not significant in region-I, as seen in Figure 4b. The effect of the width ratio $h$ and porous parameter $\sigma$ on the velocity $u$ is exactly similar to their effect on the zeroth order velocity $u_0$, as observed
+---PAGE_BREAK---
+
+in Figure 4c. The effect of the width ratio $h$ and porous parameter $\sigma$ on the fluid velocity $v$ is shown in Figure 4d. As the width ratio increases, the velocity $v$ decreases for small porous parameter and increases for $\sigma \geq 4$ in the porous region; in the viscous region, the width ratio reduces the velocity $v$, whereas the porous parameter $\sigma$ enhances it.
+
+**Figure 3:** Velocity profiles for different values of viscosity ratio and porous parameter.
+(a) zeroth order profiles, (b) first order profiles, (c) velocity profiles in *u* and (d) velocity profiles in *v*.
+
+The influence of the width ratio $h$ on the temperature field is to decrease the zeroth order temperature $T_0$, as displayed in Figure 5a. Figure 5b shows that as the width ratio $h$ increases, the first order temperature $T_1$ increases. The effect of the width ratio $h$ on the temperature $T$ is similar to its effect on the zeroth order temperature $T_0$. It is also observed from Figures 5a and 5c that the porous parameter $\sigma$ does not affect the temperature field, whereas the first order temperature decreases to the order of $10^{-3}$ as the porous parameter increases, as seen in Figure 5b.
+
+The role of the conductivity ratio $k (= K_{\text{eff}} / K)$ and porous parameter $\sigma$ is to suppress the zeroth order velocity $u_0$, as seen in Figure 6a. The first order velocity $u_1$ increases in region-I and decreases in region-II as the conductivity ratio $k$ and porous parameter $\sigma$ increase, as seen in Figure 6b. The effect of $k$ and $\sigma$ on the velocity $u$ is similar to their effect on the zeroth order velocity. Physically, the larger the conductivity of the porous matrix compared to the fluid, the weaker the flow field. The fluid velocity $v$ is enhanced as the conductivity ratio $k$ and porous parameter increase, as seen in Figure 6d.
+
+The effect of the conductivity ratio $k$ on the zeroth order and total temperature $T$ is similar to the effect of the width ratio $h$ (Figure 5), as seen in Figures 7a and 7c; that is, as the conductivity ratio increases, the zeroth order and total temperature decrease. The first order temperature $T_1$ decreases as the conductivity ratio increases, as seen in Figure 7b. It is also observed from Figures 7a and 7c
+---PAGE_BREAK---
+
+that the porous parameter $\sigma$ does not affect the temperature field, whereas the first order temperature decreases as the porous parameter $\sigma$ increases, as seen in Figure 7b.
+
+**Figure 4:** Velocity profiles for different values of width ratio and porous parameter.
+(a) zeroth order profiles, (b) first order profiles, (c) velocity profiles in $u$ and (d) velocity profiles in $v$.
+
+The effect of the traveling thermal temperature $\theta$ on the first order velocity $u_1$ and the velocity $u$ is shown in Figure 8. The first order velocity $u_1$ increases as $\theta$ increases from $y = -1$ to approximately $-0.25$ and decreases from $y = -0.25$ onwards, as seen in Figure 8a. The velocity $u$ increases with $\theta$ in region-I near the left wavy wall and decreases in region-II near the interface, as seen in Figure 8b. The fluid velocity $v$ perpendicular to the channel increases as $\theta$ increases, as seen in Figure 8c; its effect is most significant near the left wavy wall.
+
+The effect of the traveling thermal temperature $\theta$ on the first order and total temperature is shown in Figure 9. The first order temperature increases as $\theta$ increases, and its effect is greater in region-I than in region-II, as seen in Figure 9a. The total temperature increases slightly with $\theta$ near the left wavy wall and remains constant at the right wavy wall, as seen in Figure 9b.
+---PAGE_BREAK---
+
+**Figure 5:** Temperature profiles for different values of width ratio and porous parameter. (a) zeroth order, (b) first order and (c) total temperature.
+
+The shear stress $\tau$ at the walls of the channel is analysed for different values of the Grashof number $Gr$, width ratio $h$ and porous parameter $\sigma$ in Figure 10. As the Grashof number $Gr$ and width ratio $h$ increase, the skin friction increases at the left wall and decreases at the right wall. The porous parameter decreases the skin friction at the left wall and increases it in magnitude at the right wall, as seen in Figure 10.
+
+The heat transfer coefficient $Nu$ for different values of $Gr$ and the width ratio $h$ is shown in Figure 11. The Nusselt numbers at the left and right wavy walls, $Nu_{-1}$ and $Nu_1$, remain invariant with the Grashof number. When the width ratio $h$ is varied, the Nusselt number at the left wavy wall is very large compared to that at the right wavy wall.
+
+Table 1 depicts the effect of the amplitude and porous parameter on the Nusselt number distribution for fixed values of $Gr = 10$, $m=h=k=1$, Pr = 0.7, $\lambda x = \pi/4$ and $\theta = 0$. It is noted that $Nu_{-1}$ decreases and $Nu_1$ increases with increasing amplitude (or wave number) and porous parameter, similar to the result obtained by Jang *et al.* (2003, 2004); that is, the Nusselt number is small for a large amplitude-to-wavelength ratio. Also, the Nusselt number decreases as the wavelength increases, as observed by Varol and Oztop (2006).
+---PAGE_BREAK---
+
+**Figure 6:** Velocity profiles for different values of conductivity ratio and porous parameter.
+
+(a) zeroth order profiles, (b) first order profiles, (c) velocity profiles in *u* and (d) velocity profiles in *v*.
+
+The effect of the convective parameter $Gr$ on the temperature is shown in Table 2. The zeroth order temperature equation does not contain the Grashof number, so the zeroth order temperature remains invariant with the Grashof number. However, the Grashof number enters the first order temperature equation through the zeroth order velocity. It is observed that the first order temperature increases to the order of $10^{-5}$ with an increase in the Grashof number, and the total temperature also increases to the order of $10^{-5}$. The porous parameter $\sigma$ decreases the first order and total temperature, as seen in Table 2.
+
+The effect of the viscosity ratio on the temperature field is shown in Table 3. It is observed that the zeroth order temperature is invariant with the viscosity ratio, whereas the first order temperature varies to the order of $10^{-3}$; the total temperature also varies to the order of $10^{-3}$. There is no effect of the porous parameter $\sigma$ on the zeroth order, first order and total temperature, as seen in Table 3.
+
+To validate the present model, the results are compared with Umavathi and Shekar (2011) and Vajravelu and Sastri (1978). Considering a purely viscous fluid in region-I, the present model reduces to that of Umavathi and Shekar (2011). Setting the viscosity ratio, width ratio and conductivity ratio to one, in the absence of the porous parameter and with $\lambda x = \pi/2$ and $\theta=0$, reduces the present model to the one-fluid model of Vajravelu and Sastri (1978). The results for the velocity $u$ and temperature $T$ agree very well with Umavathi and Shekar (2011) and Vajravelu and Sastri (1978), as seen in Table 4. For the comparison with Vajravelu and Sastri (1978), their problem is solved in the absence of a heat source/sink, with the plates placed at $y = -1 + \epsilon \cos(\lambda x)$ instead of $y = \epsilon \cos(\lambda x)$ and the boundary conditions on temperature chosen as $T = 0$ at $y = -1 + \epsilon \cos(\lambda x)$ and $T = 1$ at $y = 1$.
+---PAGE_BREAK---
+
+**Figure 7:** Temperature profiles for different values of conductivity ratio and porous parameter. (a) zeroth order, (b) first order and (c) total temperature profiles.
+
+**Figure 8:** Velocity profiles for different values of traveling thermal temperature $\theta$. (a) velocity in $u$, (b) first order velocity and (c) velocity in $v$.
+---PAGE_BREAK---
+
+**Figure 9:** Temperature profiles for different values of traveling thermal temperature $\theta$.
+
+**Figure 10:** skin friction profiles for different values of width ratio and porous parameter.
+---PAGE_BREAK---
+
+Figure 11: Nusselt number profiles for different values of width ratio.
+
+Table 1. Values of the Nusselt number for different values of amplitude, $\lambda x$ and porous parameter with $m = k = h = 1$.
+
+In the table below, the first four data columns correspond to $\epsilon = 0.02$, $\lambda = 0.05$ and the last four to $\epsilon = 1.0$, $\lambda = 1.0$; "flat walls" denotes $\lambda x = \pi/2$ and "wavy walls" $\lambda x = \pi/4$, with $\theta = 0$ throughout.
+
+| $\sigma$ | $Nu_{-1}$ (flat) | $Nu_1$ (flat) | $Nu_{-1}$ (wavy) | $Nu_1$ (wavy) | $Nu_{-1}$ (flat) | $Nu_1$ (flat) | $Nu_{-1}$ (wavy) | $Nu_1$ (wavy) |
+|---|---|---|---|---|---|---|---|---|
+| 10.0 | 0.499991 | 0.500047 | 0.499994 | 0.500033 | 0.491215 | 0.547170 | 0.493788 | 0.533354 |
+| 20.0 | 0.499976 | 0.500064 | 0.499983 | 0.500045 | 0.475529 | 0.564109 | 0.482696 | 0.545332 |
+| 30.0 | 0.499973 | 0.500066 | 0.499981 | 0.500047 | 0.472535 | 0.566010 | 0.480579 | 0.546676 |
+
+Table 2. Values of the Temperature field for different values of Grashof number and porous parameter.
+
+**$\sigma = 2$**
+
+| $y$ | $T_0$ | $T_1$ ($Gr=5$) | $T_1$ ($Gr=50$) | $T_1$ ($Gr=100$) | $T$ ($Gr=5$) | $T$ ($Gr=50$) | $T$ ($Gr=100$) |
+|---|---|---|---|---|---|---|---|
+| -1 | 0 | -0.005 | -0.005 | -0.005 | -0.0050 | -0.0050 | -0.0050 |
+| -0.75 | 0.125 | -0.00526 | -0.00524 | -0.00523 | 0.11974 | 0.11976 | 0.11978 |
+| -0.5 | 0.25 | -0.00551 | -0.00548 | -0.00544 | 0.24449 | 0.24452 | 0.24456 |
+| -0.25 | 0.375 | -0.00577 | -0.00573 | -0.00568 | 0.36923 | 0.36927 | 0.36932 |
+| 0 | 0.5 | -0.00603 | -0.00599 | -0.00595 | 0.49397 | 0.49401 | 0.49405 |
+| 0.25 | 0.625 | -0.00629 | -0.00626 | -0.00623 | 0.61871 | 0.61874 | 0.61877 |
+| 0.5 | 0.75 | -0.00655 | -0.00653 | -0.00651 | 0.74345 | 0.74347 | 0.74349 |
+| 0.75 | 0.875 | -0.00681 | -0.00680 | -0.00679 | 0.86819 | 0.86820 | 0.86821 |
+| 1 | 1 | -0.00707 | -0.00707 | -0.00707 | 0.99293 | 0.99293 | 0.99293 |
+
+**$\sigma = 4$**
+
+| $y$ | $T_0$ | $T_1$ ($Gr=5$) | $T_1$ ($Gr=50$) | $T_1$ ($Gr=100$) | $T$ ($Gr=5$) | $T$ ($Gr=50$) | $T$ ($Gr=100$) |
+|---|---|---|---|---|---|---|---|
+| -1 | 0 | -0.005 | -0.005 | -0.005 | -0.0050 | -0.0050 | -0.0050 |
+| -0.75 | 0.125 | -0.00526 | -0.00525 | -0.00523 | 0.11974 | 0.11975 | 0.11977 |
+| -0.5 | 0.25 | -0.00551 | -0.00549 | -0.00547 | 0.24448 | 0.24451 | 0.24453 |
+| -0.25 | 0.375 | -0.00577 | -0.00575 | -0.00573 | 0.36923 | 0.36925 | 0.36927 |
+| 0 | 0.5 | -0.00603 | -0.00602 | -0.00601 | 0.49397 | 0.49398 | 0.49399 |
+| 0.25 | 0.625 | -0.00629 | -0.00630 | -0.00630 | 0.61871 | 0.61870 | 0.61870 |
+| 0.5 | 0.75 | -0.00655 | -0.00656 | -0.00657 | 0.74345 | 0.74344 | 0.74343 |
+| 0.75 | 0.875 | -0.00681 | -0.00682 | -0.00682 | 0.86819 | 0.86818 | 0.86818 |
+| 1 | 1 | -0.00707 | -0.00707 | -0.00707 | 0.99293 | 0.99293 | 0.99293 |
+
+**$\sigma = 6$**
+
+| $y$ | $T_0$ | $T_1$ ($Gr=5$) | $T_1$ ($Gr=50$) | $T_1$ ($Gr=100$) | $T$ ($Gr=5$) | $T$ ($Gr=50$) | $T$ ($Gr=100$) |
+|---|---|---|---|---|---|---|---|
+| -1 | 0 | -0.005 | -0.005 | -0.005 | -0.0050 | -0.0050 | -0.0050 |
+| -0.75 | 0.125 | -0.00526 | -0.00526 | -0.00526 | 0.11974 | 0.11974 | 0.11975 |
+| -0.5 | 0.25 | -0.00552 | -0.00551 | -0.00551 | 0.24448 | 0.24449 | 0.24449 |
+| -0.25 | 0.375 | -0.00577 | -0.00578 | -0.00578 | 0.36922 | 0.36922 | 0.36922 |
+| 0 | 0.5 | -0.00604 | -0.00606 | -0.00608 | 0.49396 | 0.49394 | 0.49392 |
+| 0.25 | 0.625 | -0.00630 | -0.00633 | -0.00636 | 0.61870 | 0.61867 | 0.61864 |
+| 0.5 | 0.75 | -0.00656 | -0.00659 | -0.00662 | 0.74344 | 0.74341 | 0.74338 |
+| 0.75 | 0.875 | -0.00681 | -0.00683 | -0.00685 | 0.86819 | 0.86817 | 0.86815 |
+
+**Table 3.** Values of the Temperature field for different values of viscosity ratio and porous parameter.
+
+*(The tabulated values of $T_0$, $T_1$ and $T$ for $\sigma$ = 2, 4 and 6 at the different viscosity ratios are illegible in the source and are omitted here.)*
+---PAGE_BREAK---
+
+**Table 4.** Comparison of results with one fluid and two fluid model for $m = k = h = 1$, $\theta = 0$, $\lambda x = \pi/2$, $\lambda = 0.05$ and $\epsilon = 0.02$.
+
+| $y$ | $u$ (Umavathi and Shekar, 2011; two-fluid, $h = 0.1$) | $T$ | $u$ (Present model, $h = 0.1$) | $T$ | $u$ (Vajravelu and Sastri, 1978; one-fluid) | $T$ | $u$ (Present model, $m = k = h = 1$) | $T$ |
+|---|---|---|---|---|---|---|---|---|
+| -1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
+| -0.75 | 0.2173 | 0.22749 | 0.2173 | 0.22749 | 0.41016 | 0.125 | 0.41014 | 0.125 |
+| -0.5 | 0.36363 | 0.45497 | 0.36363 | 0.45497 | 0.78125 | 0.25 | 0.78123 | 0.25 |
+| -0.25 | 0.36791 | 0.68243 | 0.36791 | 0.68243 | 1.07422 | 0.375 | 1.07419 | 0.375 |
+| 0 | 0.1591 | 0.90986 | 0.1591 | 0.90986 | 1.25 | 0.5 | 1.24998 | 0.5 |
+| 0 | 0.01591 | 0.90986 | 0.01591 | 0.90986 | 1.25 | 0.5 | 1.24998 | 0.5 |
+| 0.25 | 0.01238 | 0.93239 | 0.01238 | 0.93239 | 1.26953 | 0.625 | 1.26951 | 0.625 |
+| 0.5 | 0.00855 | 0.95493 | 0.00855 | 0.95493 | 1.09375 | 0.75 | 1.09374 | 0.75 |
+| 0.75 | 0.00443 | 0.97746 | 0.00443 | 0.97746 | 0.68359 | 0.875 | 0.68359 | 0.875 |
+| 1 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 1 |
+
+**5. Conclusions**
+
+Mixed convection flow and heat transfer in a vertical wavy channel filled with porous and fluid layers was studied. Results were presented for variations of Grashof number, viscosity ratio, width ratio, conductivity ratio and traveling thermal temperature on the velocity, temperature, skin friction and Nusselt number.
+
+1. The Grashof number and width ratio enhanced the velocity parallel to the flow and diminished the velocity perpendicular to the flow.
+
+2. The viscosity ratio and conductivity ratio suppressed the velocity parallel to the flow and promoted the velocity perpendicular to the flow.
+
+3. The Grashof number and viscosity ratio had no effect on the temperature, whereas the width ratio and conductivity ratio reduced the temperature.
+
+4. It was also observed in all the results that, as the porous parameter increases, the velocity parallel to the flow decreases and the velocity perpendicular to the flow increases, while the temperature remains invariant.
+
+5. The effect of the traveling thermal temperature was to enhance the velocity and temperature near the left wavy wall, with no effect at the right wavy wall.
+
+6. The skin friction increases at the left wall and decreases at the right wall as the Grashof number increases, whereas the width ratio and porous parameter decrease the skin friction at the left wall and increase it at the right wall.
+
+7. The Nusselt number remains invariant with the Grashof number, whereas it decreases at the left wall and increases at the right wall as the width ratio increases. The amplitude and porous parameter decrease the Nusselt number at both walls.
+
+8. The results obtained were in good agreement with the two-fluid model of Umavathi and Shekar (2011) and the one-fluid model of Vajravelu and Sastri (1978).
+
+**Nomenclature**
+
+a amplitude (m)
+
+$C_p$ specific heat at constant pressure (kJ kg$^{-1}$ K$^{-1}$)
+
+g acceleration due to gravity (ms$^{-2}$)
+
+Gr Grashof number ($g \beta \Delta T (h^{(1)})^3 / \nu^2$)
+
+$h$ width ratio of the channel ($h^{(2)}/h^{(1)}$)
+
+K thermal conductivity (W m$^{-1}$K$^{-1}$)
+
+$K_{eff}$ effective thermal conductivity (W m$^{-1}$K$^{-1}$)
+
+$k$ thermal conductivity ratio ($K_{eff} / K$)
+
+$m$ viscosity ratio ($\mu_{eff} / \mu$)
+
+Nu Nusselt number
+---PAGE_BREAK---
+
+P pressure (Nm⁻²)
+
+p dimensionless pressure
+
+$p_s$ static pressure (Nm⁻²)
+
+Pr Prandtl number ($C_p \mu/K$)
+
+T temperature (K)
+
+$T_s$ static temperature (K)
+
+U, V velocities along X and Y directions (ms⁻¹)
+
+u, v dimensionless velocities
+
+X, Y space co-ordinates (m)
+
+x, y dimensionless space co-ordinates
+
+**Greek Symbols**
+
+β coefficient of thermal expansion
+
+$\epsilon$ dimensionless amplitude parameter ($a/h^{(1)}$)
+
+κ permeability of the porous media (m²)
+
+λ wavelength (m)
+
+$\lambda^*$ dimensionless wave number
+
+μ viscosity (kg m⁻¹ s⁻¹)
+
+$\mu_{eff}$ effective viscosity (kg m⁻¹ s⁻¹)
+
+ν kinematic viscosity (μ / ρ)
+
+ρ density (kg m⁻³)
+
+$\rho_s$ static density (kg m⁻³)
+
+$\sigma$ porous parameter ($h^{(1)}/\sqrt{\kappa}$)
+
+$\tau$ skin friction
+
+$\psi$ stream function
+
+**Superscripts**
+
+1 and 2 refer to quantities for the fluids in region-I and region-II, respectively.
+
+**Subscripts**
+
+p porous
+
+f fluid
+
+0 and 1 refer to quantities in the zeroth order and first order equations, respectively.
+
+## Appendix
+
+A. Primary categories of fluid flow interface conditions between a porous medium and a fluid layer.
+
+Model 1: $u_p = u_f$ ; $\left(\frac{du}{dy}\right)_p = \left(\frac{du}{dy}\right)_f$ used by Neale and Nader (1974), Vafai and Kim (1990), Jang and Chen (1992).
+
+Model 2: $u_p = u_f$ ; $\mu_{eff} \left(\frac{du}{dy}\right)_p = \mu \left(\frac{du}{dy}\right)_f$ used by Kim and Choi (1996), Poulikakos and Kazmierczak (1987).
+
+Model 3: $u_p = u_f$ ; $\frac{\mu}{\varepsilon} \left(\frac{du}{dy}\right)_p - \mu \left(\frac{du}{dy}\right)_f = \beta_1 \frac{\mu}{\sqrt{K}} u$ used by Ochoa-Tapia and Whitaker (1995), Kuznetsov (1999).
+
+Model 4: $u_p = u_f$ ; $\frac{\mu}{\varepsilon} \left(\frac{du}{dy}\right)_p - \mu \left(\frac{du}{dy}\right)_f = \beta_1 \frac{\mu}{\sqrt{K}} u + \beta_2 \rho u^2$ used by Ochoa-Tapia and Whitaker (1998).
+---PAGE_BREAK---
+
+Model 5: $\left(\frac{du}{dy}\right)_f = \frac{\alpha^*}{\sqrt{K}}(u_{int} - u_\infty)$ used by Beavers and Joseph (1967), Sahraoui and Kaviani (1992). A Forchheimer term is added to the momentum equation in the porous side for the purpose of comparison.
+
+### B. Primary categories of heat transfer interface conditions between a porous medium and a fluid layer.
+
+Model 1: $T_p = T_f$; $K_{eff} \frac{\partial T_f}{\partial y} = K_f \frac{\partial T_p}{\partial y}$ used by Kuznetsov (1999), Jang and Chen (1992).
+
+Model 2: $T_p = T_f$; $\phi + K_f \left(\frac{\partial T}{\partial y}\right)_f = K_{eff} \left(\frac{\partial T}{\partial y}\right)_p$ used by Ochoa-Tapia and Whitaker (1998).
+
+Model 3: $\left(\frac{dT}{dy}\right)_p = \frac{\alpha_T}{\lambda}(T_p - T_f)$; $K_{eff} \frac{\partial T_p}{\partial y} = K_f \frac{\partial T_f}{\partial y}$ used by Sahraoui and Kaviany (1994), with the fluid flow conditions of Model 1.
+
+Model 4: $\left(\frac{dT}{dy}\right)_p = \frac{\alpha_T}{\lambda}(T_p - T_f)$; $K_{eff} \frac{\partial T_p}{\partial y} = K_f \frac{\partial T_f}{\partial y}$ used by Sahraoui and Kaviany (1994), with the fluid flow conditions of Model 3.
+
+For the present problem, Model A2 is used for the fluid flow and Model B1 for the heat transfer, together with continuity of the pressure gradient along the flow direction.
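The flux-matching idea in Model B1 can be illustrated with a tiny numerical sketch. The function name, layer thicknesses and conductivity values below are invented for illustration and are not taken from the paper; for pure conduction, temperature continuity plus flux continuity at the interface make the two layers behave as thermal resistances in series.

```python
# Illustrative sketch (not the paper's solution): steady 1-D conduction across
# a porous layer (0 <= y <= 1, conductivity k_eff) and a fluid layer
# (1 <= y <= 2, conductivity k_f), with T(0) = T_hot and T(2) = T_cold.
# Model B1 imposes temperature continuity and flux continuity
# k_eff dT/dy|_p = k_f dT/dy|_f at the interface y = 1.
def interface_temperature(k_eff, k_f, T_hot=1.0, T_cold=0.0):
    # Series thermal resistances of the two unit-thickness layers.
    R_p, R_f = 1.0 / k_eff, 1.0 / k_f
    q = (T_hot - T_cold) / (R_p + R_f)      # common heat flux (Model B1)
    return T_hot - q * R_p                  # temperature at the interface

# Example values: k_eff twice k_f puts only 1/3 of the drop in the porous layer.
print(interface_temperature(k_eff=2.0, k_f=1.0))  # 0.666...
```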
+
+### Solutions and constants
+
+$$ \theta_0^{(1)} = c_1 y + c_2, \quad \theta_0^{(2)} = c_3 y + c_4, \quad u_0^{(1)} = l_1 y + l_2 + d_1 \cosh(\sqrt{a}y) + d_2 \sinh(\sqrt{a}y), \quad u_0^{(2)} = l_3 y^3 + l_4 y^2 + d_3 y + d_4, $$
+
+$$ u_1^{(1)} = -\cos(\lambda x)(2l_5 y + d_6 + d_7 \sqrt{a} \sinh(\sqrt{a}y) + d_8 \sqrt{a} \cosh(\sqrt{a}y)) + \lambda \sin(\lambda x)(2f_{11} y + 3f_{12} y^2 + 4f_{13} y^3 + 5f_{14} y^4 \\
++ (\sqrt{a}f_{16} + 2f_{17}) y \cosh(\sqrt{a}y) + (\sqrt{a}f_{15} + 2f_{18}) y \sinh(\sqrt{a}y) \\
++ (\sqrt{a}f_{18} + 3f_{19}) y^2 \cosh(\sqrt{a}y) + (\sqrt{a}f_{17} + 3f_{20}) y^2 \sinh(\sqrt{a}y) \\
++ f_{15} \cosh(\sqrt{a}y) + f_{16} \sinh(\sqrt{a}y) + f_{19} \sqrt{a} y^3 \sinh(\sqrt{a}y) + f_{20} \sqrt{a} y^3 \cosh(\sqrt{a}y) \\
++ d_{14} + d_{15} \sqrt{a} \sinh(\sqrt{a}y) \\
++ d_{16} \sqrt{a} \cosh(\sqrt{a}y) - Gr c_9 y / ma) $$
+
+$$ u_1^{(2)} = -\cos(\lambda x)\left(4l_6 y^3 + \frac{d_9}{2} y^2 + d_{10} y + d_{11}\right) + \lambda \sin(\lambda x)\left(9l_{27} y^8 + 8l_{28} y^7 + 7l_{29} y^6 + 6l_{30} y^5 + 5l_{31} y^4 + 4l_{32} y^3 + 3l_{33} y^2 + \frac{d_{17}}{2} y^2\right. \\
+\left.+ d_{18} y + d_{19}\right) $$
+
+$$ v_1^{(1)} = -\lambda \sin(\lambda x)(l_5 y^2 + d_5 + d_6 y + d_7 \cosh(\sqrt{a}y) + d_8 \sinh(\sqrt{a}y)) - \lambda^2 \cos(\lambda x)(f_{11} y^2 + f_{12} y^3 + f_{13} y^4 + f_{14} y^5 + f_{15} y \cosh(\sqrt{a}y) \\
++ f_{16} y \sinh(\sqrt{a}y) + f_{17} y^2 \cosh(\sqrt{a}y) + f_{18} y^2 \sinh(\sqrt{a}y) + f_{19} y^3 \cosh(\sqrt{a}y) + f_{20} y^3 \sinh(\sqrt{a}y) - Gr c_9 / (2ma) + d_{13} + d_{14} y \\
++ d_{15} \cosh(\sqrt{a}y) + d_{16} \sinh(\sqrt{a}y)) $$
+
+$$ v_1^{(2)} = -\lambda \sin(\lambda x)\left(l_6 y^4 + \frac{d_9}{6} y^3 + \frac{d_{10}}{2} y^2 + d_{11} y + d_{12}\right) - \lambda^2 \cos(\lambda x)\left(l_{27} y^9 + l_{28} y^8 + l_{29} y^7 + l_{30} y^6 + l_{31} y^5 + l_{32} y^4 + l_{33} y^3 + \frac{d_{17}}{6} y^3\right. \\
+\left. + \frac{d_{18}}{2} y^2 + d_{19} y + d_{20}\right) $$
+
+$$ \theta_1^{(1)} = \cos(\lambda x)(c_5 y + c_6) - \lambda \sin(\lambda x)(l_{14} y^6 + l_{15} y^5 + l_{16} \cosh(\sqrt{a}y) + l_{17} \sinh(\sqrt{a}y) + l_{18} y^4 + l_{19} y \cosh(\sqrt{a}y) + l_{20} y \sinh(\sqrt{a}y) \\
++ c_9 y + c_{10}) $$
+
+$$ \theta_1^{(2)} = \cos(\lambda x)(c_7 y + c_8) - \lambda \sin(\lambda x)(l_{21} y^6 + l_{22} y^5 + l_{23} y^4 + l_{24} y^3 + l_{25} y^2 + c_{11} y + c_{12}), $$
+
+$$ \tau_{-1}^{0} = l_1 - d_1 \sqrt{a} \sinh(\sqrt{a}) + d_2 \sqrt{a} \cosh(\sqrt{a}), \quad \tau_1^{0} = 3l_3 + 2l_4 + d_3, $$
+
+$$ \tau_{-1} = l_1 - d_1 \sqrt{a} \sinh(\sqrt{a}) + d_2 \sqrt{a} \cosh(\sqrt{a}) + \varepsilon \cos(\lambda x)(ad_1 \cosh(\sqrt{a}) + d_2 a \sinh(\sqrt{a}) - 2l_5 + ad_7 \cosh(\sqrt{a}) + d_8 a \sinh(\sqrt{a})) \\
++ \varepsilon\lambda \sin(\lambda x)(2f_{11}-6f_{12}+12f_{13}-20f_{14}) \\
++ (\sqrt{a}f_{16}+2f_{17}-af_{15}+af_{17}-4\sqrt{a}f_{18}-6f_{19}-af_{19}+6f_{20}\sqrt{a})\cosh(\sqrt{a}) \\
++ (2\sqrt{a}f_{15}+af_{16}+4\sqrt{a}f_{17}-2f_{18}-af_{18}-6\sqrt{a}f_{19}+6f_{20}+af_{20})\sinh(\sqrt{a}) - Gr c_9/(ma)+ad_{15}\cosh(\sqrt{a})-ad_{16}\sqrt{a}\sinh(\sqrt{a}) $$
+---PAGE_BREAK---
+
+$$
+\begin{aligned}
+& \tau_1 = 3l_3 + 2l_4 + d_3 - \varepsilon \cos(\lambda x)(12l_6 + d_9 + d_{10}) - \varepsilon \lambda \sin(\lambda x)(72l_{27} + 56l_{28} + 42l_{29} + 30l_{30} + 20l_{31} + 12l_{32} + 6l_{33} + d_{17} + d_{18}), \\
+& Nu_{-1} = c_1 + \varepsilon \cos(\lambda x)c_5 - \varepsilon \lambda \sin(\lambda x)(-2l_{14} + 3l_{15} - (\sqrt{a}(l_{16}-l_{19})+l_{20})\sinh(\sqrt{a})+(\sqrt{a}l_{17}+l_{19}-\sqrt{a}l_{20})\cosh(\sqrt{a})-4l_{18}+c_9), \\
+& Nu_1 = c_3 + \varepsilon \cos(\lambda x)c_7 - \varepsilon \lambda \sin(\lambda x)(6l_{21}+5l_{22}+4l_{23}+3l_{24}+2l_{25}+c_{11}).
+\end{aligned}
+$$
+
+$$
+\begin{aligned}
+a &= \frac{\sigma^2}{m}, \quad c_3 = \frac{kh}{kh+1}, \quad c_1 = \frac{c_3}{kh}, \quad c_4 = 1-c_3, \quad c_2 = c_4, \quad l_1 = \frac{Gr c_1}{ma}, \quad l_2 = \frac{Gr c_2}{ma}, \quad l_3 = -\frac{Gr h^3 c_3}{6}, \quad l_4 = -\frac{Gr h^3 c_4}{2}, \quad l_5 = \frac{Gr c_5}{2ma}, \\
+l_6 &= \frac{Gr h^3 c_7}{24}, \quad d_1 = \frac{d_2 \sinh(\sqrt{a}) + l_1 - l_2}{\cosh(\sqrt{a})}, \quad d_2 = \frac{(l_2 - l_1)h - (l_3 + l_4 + mh^2 l_1 + hl_2)\cosh(\sqrt{a})}{mh^2\sqrt{a}\cosh(\sqrt{a}) + h\sinh(\sqrt{a})}, \quad d_3 = mh^2(d_2\sqrt{a} + l_1), \quad c_8 = -c_7 - \frac{c_3}{h}, \\
+d_4 &= h(d_1 + l_2), \quad c_7 = \frac{k(c_1 h \cos(\theta) - c_3)}{kh+1}, \quad c_5 = \frac{c_7}{kh}, \quad c_6 = c_8, \quad l_7 = c_1 d_5 + c_6 l_2, \quad l_8 = c_5 l_2 + c_6 l_1 + d_6 c_1, \quad l_9 = c_6 d_1 + d_7 c_1, \\
+l_{10} &= c_6 d_2 + d_8 c_1, \quad l_{11} = c_5 l_1 + c_1 l_5, \quad l_{12} = d_1 c_5, \quad l_{13} = d_2 c_5, \quad l_{14} = \frac{Pr l_7}{2k}, \quad l_{15} = \frac{Pr l_8}{2k}, \quad l_{16} = \frac{Pr}{k}\left(\frac{l_9}{a} - \frac{l_{12}}{\sqrt{a^3}} - \frac{l_{13}}{\sqrt{a^3}}\right), \quad l_{18} = \frac{Pr l_{11}}{2k}, \\
+l_{17} &= \frac{Pr}{k}\left(\frac{l_{10}}{a} - \frac{l_{12}}{\sqrt{a^3}} - \frac{l_{13}}{\sqrt{a^3}}\right), \quad l_{19} = \frac{Pr l_{12}}{ka}, \quad l_{20} = \frac{Pr l_{13}}{ka}, \quad l_{21} = \frac{Pr}{30}(c_7 l_3 + c_3 l_6), \quad l_{22} = \frac{Pr}{20}\left(c_7 l_4 + c_8 l_3 + \frac{c_3 d_9}{6}\right), \\
+l_{23} &= \frac{Pr}{12}\left(c_7 d_3 + c_8 l_4 + \frac{c_3 d_{10}}{2}\right), \quad l_{24} = \frac{Pr}{6}(c_7 d_4 + c_8 d_3 + c_3 d_{11}), \quad l_{25} = \frac{Pr}{2}(c_8 d_4 + c_3 d_{12}), \quad f_1 = 2l_1 l_5 + 2Gr l_{14}, \quad f_2 = 3Gr l_{15}, \\
+f_3 &= 4Gr l_{18}, \quad f_4 = al_2 d_7 + 2l_5 d_1 - ad_1 d_5 + Gr(l_{17}\sqrt{a} + l_{19}), \quad f_9 = -ad_2 l_5, \quad f_5 = al_2 d_8 + 2l_5 d_2 - ad_2 d_5 + Gr(l_{16}\sqrt{a} + l_{20}), \\
+f_6 &= al_1 d_7 - ad_1 d_6 + Gr l_{20}\sqrt{a}, \quad f_7 = al_1 d_8 - ad_2 d_6 + Gr l_{19}\sqrt{a}, \quad f_8 = -ad_1 l_5, \quad f_{10} = 2l_2 l_5, \quad f_{11} = -\frac{f_2}{ma^2} - \frac{f_{10}}{2ma},
+\end{aligned}
+$$
+
+$$
+\begin{align*}
+f_{12} &= -\frac{f_{1}}{6ma} - \frac{f_{3}}{ma^{2}}, & f_{13} &= -\frac{f_{2}}{12ma}, & f_{14} &= -\frac{f_{3}}{20ma}, & f_{15} &= \frac{f_{5}}{2m\sqrt{a^{3}}} - \frac{5f_{6}}{4ma^{2}} + \frac{17f_{9}}{4m\sqrt{a^{5}}}, & f_{16} &= \frac{f_{4}}{2m\sqrt{a^{3}}} - \frac{5f_{7}}{4ma^{2}} + \frac{17f_{8}}{4m\sqrt{a^{5}}}, \\
+f_{17} &= \frac{f_{7}}{4m\sqrt{a^{3}}} - \frac{5f_{8}}{4ma^{2}}, & f_{18} &= \frac{f_{6}}{4m\sqrt{a^{3}}} - \frac{5f_{9}}{4ma^{2}}, & f_{19} &= \frac{f_{9}}{6m\sqrt{a^{3}}}, & f_{20} &= \frac{f_{8}}{6m\sqrt{a^{3}}}, & f_{21} &= 6l_{3}l_{6} + 6Grh^{3}l_{21}, & f_{22} &= 10l_{4}l_{6} + 5Grh^{3}l_{22}, \\
+f_{23} &= 12d_{3}l_{6} + \frac{2l_{4}d_{9}}{3} - 2l_{3}d_{10} + 4Grh^{3}l_{23}, & f_{24} &= 12d_{4}l_{6} + d_{3}d_{9} - 6l_{3}d_{11} + 3Grh^{3}l_{24}, & f_{25} &= d_{4}d_{9} + d_{3}d_{10} - 6l_{3}d_{12} - 2l_{4}d_{11} + 2Grh^{3}l_{25}, \\
+f_{26} &= d_{4}d_{10} - 2l_{4}d_{12}, & f_{27} &= \frac{f_{21}}{3024}, & f_{28} &= \frac{f_{22}}{1680}, & f_{29} &= \frac{f_{23}}{840}, & f_{30} &= \frac{f_{24}}{360}, & f_{31} &= \frac{f_{25}}{120}, & f_{32} &= \frac{f_{26}}{24}, & f_{33} &= \frac{Grh^{3}c_{11}}{6}; \\
+z_{1} &= \cos(\theta)(-d_{1}\sqrt{a}\sinh(\sqrt{a}) + d_{2}\sqrt{a}\cosh(\sqrt{a}) + l_{1}) + 2l_{5}, & z_{2} &= \frac{3l_{3} + 2l_{4} + d_{3}}{h} - 4l_{6}, & z_{3} &= \frac{-mah^{3} + 6h}{6}, & z_{4} &= \frac{mah^{2} + 2h}{2},
+\end{align*}
+$$
+
+$$
+z_s = l_s - mh^2 l_s, \quad z_d = -mh^3 l_s, \quad z_h = mh^4 a, \quad z_g = 2mh^3 l_s - z_m, \quad z_v = h + z_s, \quad z_w = z_m - h \cosh(\sqrt{a}), \quad z_x = h\sqrt{a} + h \sinh(\sqrt{a}),
+$$
+
+$$
+z'_1 = -hl_5 + z_5, \quad z'_3 = z_7 + z_6 \sqrt{a} \sinh(\sqrt{a}), \quad z'_4 = h\sqrt{a} - z_6 \sqrt{a} \cosh(\sqrt{a}), \quad z'_5 = z_8 + z_1 z_6, \quad z'_9 = z_8 + z_9 \sqrt{a} \sinh(\sqrt{a}),
+$$
+
+$$
+z'_7 = z'_9 - z'_8 \sqrt{a} \cosh(\sqrt{a}), \quad z'_8 = z'_9 + z'_8 z'_9; \quad d'_8 = z'_9 z'_8 - z'_8 z'_9, \quad d'_7 = -z'_9 z'_8 - z'_8 d'_9, \quad d'_6 = d'_7 \sqrt{a} \sinh(\sqrt{a}) - d'_8 \sqrt{a} \cosh(\sqrt{a}) + z'_9,
+$$
+
+$$
+d_s = d_g \sinh(\sqrt{a}) - d_r \cosh(\sqrt{a}) + d_e - l_s, \quad d_v = -h^3 m a d_s, \quad d_w = mh^2(2l_s + ad_r), \quad d_x = h(d_e + ad_g), \quad d_y = h(d_s + d_r);
+$$
+
+$$
+z'_{19} = l_{14} - l_{15} + (l_{16} - l_9) \cosh(\sqrt{a}) + (l_{20} - l_7) \sinh(\sqrt{a}) + l_8,
+$$
+
+$$
+z'_{20} = l_{21} + l_{22} + l_{23} + l_{24} + l_{25}, \quad z'_{21} = kh\left(c_6 + l_7 \sqrt{a} + l_9 - \frac{c_8}{kh}\right);
+$$
+
+$$
+c_9 = \frac{z'_{19} - z'_{20} - z'_{21} - l_{16}}{kh+1}, \quad c_{11} = z'_{21} + kh\, c_9.
+$$
+---PAGE_BREAK---
+
+$$
+\begin{align*}
+z_{27} &= -(d_1+l_2)(d_6+d_8\sqrt{a})+(l_1+d_2\sqrt{a})(d_5+d_7), & z_{28} &= m(6f_{12}+3af_{15}+6\sqrt{a}f_{18}+6f_{19})-\sigma^2 f_{15}-Gr l_{16}+d_4 d_{11}-d_3 d_{12}, \\
+z_{29} &= \frac{1}{2}(h^2ma+2h), & z_{30} &= z_{24}+\frac{z_{26}}{2}+\frac{z_{27}}{6}+hf_{15}, & z_{31} &= \frac{1}{2}(-h^3ma+2h), & z_{32} &= h^2ma, & z_{33} &= z_{25}+z_{26}+\frac{z_{27}}{2}+hf_{15}, & z_{34} &= z_{28}+h, \\
+z_{35} &= z_{29}-h \cosh(\sqrt{a}), & z_{36} &= h\sqrt{a}+h \sinh(\sqrt{a}), & z_{37} &= z_{30}-hz_{22}, & z_{38} &= z_{32}+z_{31}\sqrt{a}\sinh(\sqrt{a}), & z_{39} &= h\sqrt{a}-z_{31}\sqrt{a}\cosh(\sqrt{a}), \\
+z_{40} &= z_{33}-z_{23}z_{31}, & z_{41} &= z_{35}+z_{34}\sqrt{a}\sinh(\sqrt{a}), & z_{42} &= z_{36}-z_{34}\sqrt{a}\cosh(\sqrt{a}), & z_{43} &= z_{37}-z_{23}z_{34}, & d_{15} &= \frac{-z_{39}d_{16}+z_{40}}{z_{38}}, \\
+d_{13} &= d_{14}-d_{15}\cosh(\sqrt{a})+d_{16}\sinh(\sqrt{a})-z_{22}, & d_{14} &= d_{15}\sqrt{a}\sinh(\sqrt{a})-d_{16}\sqrt{a}\cosh(\sqrt{a})-z_{23}, & d_{16} &= \frac{z_{38}z_{43}-z_{40}z_{41}}{z_{39}z_{41}-z_{38}z_{42}}, \\
+d_{17} &= -mah^3d_{14}+z_{27}, & d_{18} &= mah^2d_{15}+z_{26}, & d_{19} &= h(d_{14}+d_{16}\sqrt{a}+f_{15}), & d_{20} &= h(d_{13}+d_{15}).
+\end{align*}
+$$
+
+## Acknowledgements
+
+The authors would like to thank the UGC, New Delhi, for financial support under a UGC Major Research Project.
+
+## References
+
+Alazmi, M. and Vafai, K., 2001. Analysis of fluid flow and heat transfer interfacial conditions between a porous medium and a fluid layer, *Int. J. Heat Mass Transfer*, Vol. 44, pp. 1735-1749.
+
+Al-Nimr, M.A. and Alkam, M.K., 1998. Unsteady non-Darcian fluid flow in parallel channels filled with porous material, *Heat Mass Transfer*, Vol. 33, pp. 315-318.
+
+Beavers, G.S. and Joseph, D.D., 1967. Boundary conditions at a naturally permeable wall, *J. Fluid Mech.*, Vol. 30, pp. 197-207.
+
+Chikh, S., Boumedian, A., Bouhadef, K. and Lauriat, G., 1995. Analytical solution of non-Darcian forced convection in an annular duct partially filled with porous medium, *Int. J. Heat Mass Transfer*, Vol. 38, pp. 1543-1551.
+
+Deajani, G., Taslim, M.E. and Narusawa, U., 1986. Effects of boundary conditions on thermal instability of superposed porous and fluid layer, *Natural Convection in Enclosures*, R.S. Figliola and I. Catton (eds.), ASME, New York, pp. 83-89.
+
+Eldabe, N.T.M., El-Sayed, M.F., Ghaly, A.Y. and Sayed, H.M., 2008. Mixed convective heat and mass transfer in a non-Newtonian fluid at a peristaltic surface with temperature dependent viscosity, *Arch. Appl. Mech.*, Vol. 78, pp. 599-624.
+
+Givler, R.C. and Altobelli, S.A., 1994. A determination of the effective viscosity for the Brinkman-Forchheimer flow model, *J. Fluid Mech.*, Vol. 258, pp. 355-370.
+
+Ingham, D.B. and Pop, I., 2005. Transport phenomena in porous media, *Elsevier*, Oxford.
+
+Jang, J.Y. and Chen, J.L., 1992. Forced convection in a parallel plate channel partially filled with a high porosity medium, *Int. Commun. Heat Mass Transfer*, Vol. 19, pp. 263-273.
+
+Jang, J.H., Yan, W.M. and Liu, H.C., 2003. Natural convection heat and mass transfer along a vertical wavy surface, *Int. J. Heat Mass Transfer*, Vol. 46, pp. 1075-1083.
+
+Jang, J.H. and Yan, W.M., 2004. Mixed convection heat and mass transfer along a vertical wavy surface, *Int. J. Heat Mass Transfer*, Vol. 47, pp. 419-428.
+
+Kim, S.J. and Choi, C.Y., 1996. Convection heat transfer in porous and overlying layers heated from below, *Int. J. Heat Mass Transfer*, Vol. 39, pp. 319-329.
+
+Kuznetsov, A.V., 1999. Fluid mechanics and heat transfer in the interface region between a porous medium and a fluid layer: a boundary layer solution, *J. Porous Media*, Vol. 2(3), pp. 309-321.
+
+Kuznetsov, A.V., 1998. Analytical investigation of Couette flow in a composite channel partially filled with porous medium and partially filled with clear fluid, *Int. J. Heat Mass Transfer*, Vol. 41, pp. 2556-2560.
+
+Tang, L., Liu, D., Zhao, F.-Y. and Tang, G.-F., 2010. Combined heat and moisture convective transport in a partial enclosure with multiple free ports, *Applied Thermal Engineering*, Vol. 30, pp. 977-990.
+
+Luo, H., Blyth, M.G. and Pozrikidis, C., 2008. Two-layer flow in a corrugated channel, *J. Eng. Math.* Vol. 60, pp.127–147.
+
+Malashetty, M.S., Umavathi, J.C. and Leela, V., 2001. Magnetoconvective flow and heat transfer between a vertical wavy wall and a parallel flat wall, *Int. J. Appl. Mech. Engg.*, Vol. 6(2), pp. 437-456.
+
+Malashetty, M.S., Umavathi, J.C. and Prathap Kumar, J., 2004. Two fluid flow and heat transfer in an inclined channel containing porous and fluid layers, *Heat and Mass Transfer*, Vol. 40, pp. 871-876.
+
+Malashetty, M.S., Umavathi, J.C. and Prathap Kumar, J., 2005. Flow and heat transfer in an inclined channel containing a fluid layer sandwiched between two porous layers, *J. Porous Media*, Vol. 8, No. 5, pp. 443-453.
+
+Martys, N., Bentz, D.P. and Garboczi, E.J., 1994. Computer simulation study of the effective viscosity in Brinkman's equation, *Phys. Fluids*, Vol. 6, pp. 1434-1439.
+
+Masuoka, T., 1974. Convective currents in a horizontal layer divided by a permeable wall, *Bull. Japan Soc. Mech. Eng.*, Vol. 17, pp. 225-237.
+
+Neale, G. and Nader, W., 1974. Practical significance of Brinkman's extension of Darcy's law: coupled parallel flows within
+a channel and a bounding porous medium, *Can. J. Chem. Engrg.*, Vol. 52, pp. 475-478.
+---PAGE_BREAK---
+
+Nield, D.A., 1991. The limitations of the Brinkman-Forchheimer equation in modeling flow in a saturated porous medium and at an interface, *Int. J. Heat Fluid Flow*, Vol. 12, pp. 269-272.
+
+Nield, D.A., 1983. The boundary correction for the Rayleigh-Darcy problem: Limitations of Brinkman equation, *J. Fluid Mech.*, Vol. 128, pp. 37-46.
+
+Nield, D.A., and Bejan, A., 2006. Convection in porous media (3rd Ed.), Springer-Verlag, New York.
+
+Ochoa-Tapia, J.A. and Whitaker, S., 1995. Momentum transfer at the boundary between a porous medium and homogeneous fluid I: Theoretical development, *Int. J. Heat Mass Transfer*, Vol. 38, pp. 2635-2646.
+
+Ochoa-Tapia, J.A. and Whitaker, S., 1998. Heat transfer at the boundary between a porous medium and a homogeneous fluid: the one-equation model, *J. Porous Media*, Vol. 1, pp. 31-46.
+
+Prathap Kumar, J., Umavathi, J.C., Pop, I. and Biradar, B.M., 2009. Fully developed mixed convection flow in a vertical channel containing porous and fluid layers with isothermal or isoflux boundaries, *Transp. Porous Media*, Vol. 80, pp. 117-135.
+
+Poulikakos, D. and Kazmierczak, M., 1987. Forced convection in a duct partially filled with a porous material, *ASME J. Heat Transfer*, Vol. 109, pp. 653-662.
+
+Rudraiah, N., 1985. Forced convection in a parallel plate channel partially filled with a porous material, *ASME J. Heat Transfer*, Vol. 107, pp. 331-332.
+
+Sahraoui, M. and Kaviany, M., 1992. Slip and no-slip velocity boundary conditions at the interface of porous, plain media, *Int. J. Heat Mass Transfer*, Vol. 35, pp. 927-943.
+
+Sahraoui, M. and Kaviany, M., 1994. Slip and no-slip temperature boundary conditions at the interface of porous, plain media: convection, *Int. J. Heat Mass Transfer*, Vol. 37, pp. 1029-1044.
+
+Srinivas, S. and Muthuraj, R., 2010. MHD flow with slip effects and temperature-dependent heat source in a vertical wavy porous space, *Chem. Eng. Comm.* Vol. 197, pp. 1387-1403.
+
+Umavathi, J.C., Chamkha, A.J., Abdul Mateen and Al-Mudhaf, A., 2006. Oscillatory flow and heat transfer in a horizontal composite porous medium channel, *Int. J. Heat Tech.*, Vol. 24, pp. 75-86.
+
+Umavathi, J.C., Chamkha, A.J. and Sridhar, K.S.R., 2010. Generalized plain Couette flow and heat transfer in a composite channel, *Transp. Porous Media*, Vol. 85, pp. 157-169.
+
+Umavathi, J.C., Prathap Kumar, J. and Shekar, M., 2010. Mixed convective flow of immiscible viscous fluids confined between a long vertical wavy wall and a parallel flat wall, *Int. J. Engg. Sci. Tech.*, Vol. 2, No. 6, pp. 256-277.
+
+Umavathi, J.C. and Shekar, M., 2011. Mixed convective flow of two immiscible viscous fluids in a vertical wavy channel with traveling thermal waves, *Heat Transfer Asian Research*, Vol. 40, No.7, pp. 608-640.
+
+Vafai, K. and Thiyagaraja, R., 1987. Analysis of flow and heat transfer at the interface region of a porous medium, *Int. J. Heat Mass Transfer*, Vol. 30, pp. 1391-1405.
+
+Vafai, K., 2005. Handbook of porous media (2nd Ed.), Taylor & Francis, Boca Raton.
+
+Vafai, K. and Kim, S.J., 1990. Fluid mechanics of the interface region between a porous medium and a fluid layer: an exact solution, *Int. J. Heat Fluid Flow*, Vol. 11, pp. 254-256.
+
+Vafai, K. and Kim, S.J., 1995. On the limitations of the Brinkman-Forchheimer-extended Darcy equation, *Int. J. Heat Fluid Flow*, Vol. 16, pp. 11-15.
+
+Vajravelu, K. and Sastri, K.S., 1978. Free convective heat transfer in a viscous incompressible fluid confined between a long vertical wavy wall and a parallel flat wall, *J. Fluid Mech.*, Vol. 86, pp. 365-383.
+
+Vajravelu, K., 1989. Combined free and forced convection in hydromagnetic flows in vertical wavy channels with traveling thermal waves, *Int. J. Engg. Sci.*, Vol. 30, pp. 278-289.
+
+Varol, Y. and Oztop, H.F., 2006. Free convection in a shallow wavy enclosure, *Int. Commun. Heat and Mass Transfer*, Vol. 33, pp. 764-771.
+
+## Biographical notes
+
+**J. C. Umavathi** received her Ph.D. degree from Gulbarga University, Gulbarga, India, in 1992. She is a Professor in the Department of Mathematics, Gulbarga University, Gulbarga, Karnataka, India. Her research interests include heat and mass transfer of multiple (Newtonian and non-Newtonian) fluids through channels and rectangular ducts, numerical simulation using finite differences and the Runge-Kutta-Gill method, magnetohydrodynamics, and flow through porous media. She has published more than 70 papers in refereed international journals and has presented more than 20 research articles at national and international conferences. She is currently handling a few projects sponsored by the Government of India.
+
+**M. Shekar** received his postgraduate degree in Mathematics from Bangalore University, Bangalore, Karnataka, India, in 2009. He is working toward a Ph.D. and is a Fellow of a research project sponsored by the Government of India. His research interests include heat and mass transfer of two-fluid flows of Newtonian and non-Newtonian fluids through wavy channels.
+
+Received May 2011
+Accepted November 2011
+Final acceptance in revised form December 2011
\ No newline at end of file
diff --git a/samples/texts_merged/4055151.md b/samples/texts_merged/4055151.md
new file mode 100644
index 0000000000000000000000000000000000000000..3c2cfed2908100e7b9f5a109cf219276e5cd8379
--- /dev/null
+++ b/samples/texts_merged/4055151.md
@@ -0,0 +1,545 @@
+
+---PAGE_BREAK---
+
+# Restoration and Storage of Film and Video
+Archive Material
+
+P.M.B. van Roosmalen and J. Biemond and R.L. Lagendijk
+
+*Information and Communication Theory Group*
+*Faculty of Information Technology and Systems*
+*Delft University of Technology*
+*The Netherlands*
+
+**ABSTRACT.** Many unique records of archived motion pictures are in a fragile state and contain many artifacts. Preservation of these pictures can be achieved by copying them onto new digital media in compressed format, using, for instance, the MPEG standard. Restoration of the old film sequences prior to renewed storage is often beneficial both in terms of visual quality and in terms of coding efficiency. Restoring old image sequences manually is a tedious and costly process, and therefore an automated system for image restoration capable of dealing with common and less common artifacts is desirable. This chapter presents the AURORA project, whose goal it is to create such an automated system. Three algorithms developed within the AURORA project, for dealing with noise, blotches and intensity flicker, are described here. The success of these algorithms is demonstrated using both common performance measures and a new objective measure based on coding efficiency.
+
+## 1. Introduction
+
+AURORA¹ is an acronym for *AUtomated Restoration of ORiginal video and film Archives*. AURORA consortium members that are actively involved in research include the BBC, Snell & Wilcox, and Cambridge University in the U.K., Tampere University in Finland, INA in France, and the Delft University of Technology in the Netherlands.
+
+The goal of AURORA is to create new algorithms and real-time hardware for the restoration of old video and film sequences. Many unique records of historic, artistic, and cultural developments of every aspect of the 20th century are stored in huge stocks of archived moving pictures and many of these historically significant items are in a fragile state. Preservation of these pictures can be achieved by copying them onto new digital media in compressed format, using, for instance, the MPEG standard. There are a number of reasons why old motion pictures should be restored before they are stored on digital media. First, restoration improves the visual quality of the film sequence and thereby the commercial value increases. Second, restoration generally speaking leads to more efficient compression, i.e., to higher quality at identical bit rates and conversely to lower bit rates at identical quality. The latter is especially important in digital broadcasting and storage environments where the price of broadcasting/storage is directly related to the number of bits being broadcast/stored.
+
+The first part of the AURORA acronym, *AUtomated*, should be stressed because of the absolutely huge amounts of old film and video that need to be processed and because of economical constraints. Automated, real-time image restoration systems allow for bulk processing of data and reduce the high cost of labor required for manual restoration. In order to make bulk processing possible, the AURORA system takes digital image sequences as its starting point instead of physical reels of film. Rather than having a system capable of
+
+¹This work was funded by the European Union under contract AC 072.
+---PAGE_BREAK---
+
+handling a large number of film formats and systems that have been used in one period or
+another over the last century, it is assumed that these films have been digitized by skilled
+engineers who know best how to digitize the various types of film. Digital image sequences
+are obtained by digitizing the output of the film-to-video telecine. It must be kept in mind
+that the earlier telecines have their limitations in terms of noise characteristics and resolution.
+Sometimes a copy on video obtained from an earlier telecine is all that remains of a film.
+
+Areas of interest within AURORA are typical artifacts in old film sequences such as noise, line-scratches, blotches (dirt and sparkle), film unsteadiness, line jitter and intensity flicker. In the following sections we focus on restoration techniques for a selection of these artifacts. In Section 2 we discuss the general principles of noise reduction and present a new algorithm for this purpose. We devote Section 3 to the detection (and correction) of blotches. In Section 4 we present a novel and very successful approach to correcting intensity flicker. In Section 5 we investigate the improvement in quality of the restored sequences using a new objective quality measure; here it is also verified that image restoration and coding efficiency are closely related. We conclude this chapter with a discussion in Section 6.
+
+## 2. Noise reduction
+
+Noise is a common problem in old film and video sequences and many methods for noise reduction can be found in the literature [15], [16], [22], [2], [7], [10]. In this section we consider image sequences *I(x, y, t)* corrupted by noise η(x, y, t) that is signal independent, white and additive (x, y indicate discrete spatial co-ordinates, t indicates the frame number). The observed images *Y(x, y, t)* are thus given by:
+
+$$
+(1) \qquad Y(x, y, t) = I(x, y, t) + \eta(x, y, t).
+$$
+
+For many practical purposes (1) is an adequate model. Using more accurate signal-dependent models (e.g., for taking into account the gamma correction applied by television cameras, which makes the noise signal dependent, and the signal-dependent film grain noise [7], [3]) often leads to little extra gain compared to the added complexity.
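As a quick numerical sketch of the observation model (1), the snippet below synthesizes a frame and adds signal-independent white noise; the frame contents, size and noise level are arbitrary example values, not parameters from the chapter.

```python
import numpy as np

# Sketch of model (1): Y = I + eta, with eta signal-independent, white, additive.
rng = np.random.default_rng(0)
I = rng.uniform(0.0, 255.0, size=(64, 64))     # "clean" frame I(x, y, t)
eta = rng.normal(0.0, 10.0, size=I.shape)      # zero-mean white noise, sigma = 10
Y = I + eta                                    # observed frame Y(x, y, t)

# Signal independence shows up as near-zero sample correlation between I and eta.
corr = float(np.corrcoef(I.ravel(), eta.ravel())[0, 1])
print(abs(corr) < 0.1, abs(float(eta.mean())) < 1.0)
```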
+
+The goal of noise reduction is to obtain an estimate of the original signal $\hat{I}(x, y, t)$ given
+the noisy observed signal $Y(x, y, t)$. A class of very successful nonlinear filters is based
+on scale-space representations of the data to be filtered [22], [10]. Having representations
+of the data with various levels of detail allows local and global signal characteristics to be
+preserved better than purely spatial methods, e.g., Wiener filters. Noise reduction is achieved
+by *thresholding* or *coring* the transform coefficients computed at the various scales. Taking
+the inverse transform of the adjusted coefficients gives the noise reduced result.
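The thresholding/coring step can be sketched in a few lines. This is a hedged toy version, not the AURORA filter: a one-level 1-D Haar transform stands in for the full scale-space decomposition, and the signal, noise level and threshold are invented example values.

```python
import numpy as np

def haar_fwd(x):
    a = (x[0::2] + x[1::2]) / np.sqrt(2)   # low-pass (approximation) coefficients
    d = (x[0::2] - x[1::2]) / np.sqrt(2)   # high-pass (detail) coefficients
    return a, d

def haar_inv(a, d):
    x = np.empty(2 * a.size)
    x[0::2] = (a + d) / np.sqrt(2)
    x[1::2] = (a - d) / np.sqrt(2)
    return x

def soft_threshold(d, t):
    # "Coring": shrink small (mostly noise) coefficients toward zero.
    return np.sign(d) * np.maximum(np.abs(d) - t, 0.0)

rng = np.random.default_rng(1)
clean = np.repeat([0.0, 8.0], 16)                  # piecewise-constant signal
noisy = clean + rng.normal(0.0, 0.5, clean.size)   # add white noise
a, d = haar_fwd(noisy)
denoised = haar_inv(a, soft_threshold(d, t=1.0))   # inverse of adjusted coefficients
print(np.mean((denoised - clean) ** 2) <= np.mean((noisy - clean) ** 2))  # True
```

Because the clean signal here has no detail-band energy, shrinking the detail coefficients can only reduce the error, which is the intuition behind coring.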
+
+In the following sections we present a *scale-space* based noise reduction filter which tries to get an optimal separation between signal and noise by including both directional and temporal information into the transformation. In Section 2.1 we motivate the approach we take to obtaining a scale-space representation of the image data. After presenting details of this approach in Section 2.2, we extend the decomposition to 3D using wavelets in Section 2.3. We introduce the actual noise reduction operation in Section 2.4. In Section 2.5 we present some experiments and results. We conclude the topic of noise reduction with a brief discussion in Section 2.6.
+
+### 2.1. Methods for obtaining scale-space representations of data
+
+The DWT is a popular tool for obtaining scale-space representations of data. A problem with this transform, however, is that, due to the aliasing caused by the critical subsampling, a spatial shift of the input image may lead to a totally different distribution of the signal energy over the transform coefficients. Therefore, shifting the input image can lead to significantly different filtering results. The DWT is not shift invariant. Shift invariance can be obtained
+---PAGE_BREAK---
+
+by applying a nondecimated DWT using bi-orthogonal wavelets [10]. However, this leads to a massive increase in the number of transform coefficients.
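The shift-variance problem is easy to demonstrate. The toy example below (a critically sampled one-level Haar transform on a made-up signal) shows the band energies changing completely under a one-sample shift:

```python
import numpy as np

def haar_bands(x):
    # One-level, critically sampled Haar DWT: return (approx, detail) energies.
    a = (x[0::2] + x[1::2]) / np.sqrt(2)
    d = (x[0::2] - x[1::2]) / np.sqrt(2)
    return float(np.sum(a ** 2)), float(np.sum(d ** 2))

x = np.array([0.0, 0.0, 4.0, 4.0] * 2)   # constant pairs: no detail energy
ea0, ed0 = haar_bands(x)
ea1, ed1 = haar_bands(np.roll(x, 1))     # same signal, shifted by one sample
print(ed0, ed1)  # 0.0 vs ~32.0: the energy distribution depends on the shift
```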
+
+Instead of using the DWT, we follow the approach of Simoncelli et al. [20], who proposed a pyramid decomposition that is shift invariant. The shift invariance is accomplished by avoiding aliasing effects, i.e., by ensuring that no components with frequencies larger than $\pi/2$ are present before 2:1 subsampling. Furthermore, the Simoncelli pyramid has the advantage that it is based on directionally sensitive filters. This means that the distribution of signal energy over frequency bands depends on the orientation of structures within the image. This gives improved filtering results for diagonal components in the image compared to a separable DWT. Also, in the case that, say, 4 orientations are used, the noise energy is distributed over 4 orientations, whereas the energy of image structures such as straight lines is distributed over 1 or 2 orientations. This leads to better local separation between noise and signal.
+
+The Simoncelli decomposition is significantly overcomplete: the number of transform coefficients is much larger than the number of pixels in the original image. For example, a five-level pyramid decomposition (four sets of high-pass coefficients and one set of low-pass coefficients) with 4 orientations of an $N \times N$ image results in about $9.3N^2$ coefficients. The undecimated DWT gives $10N^2$ for a four-level decomposition.
+
+### 2.2. The Simoncelli pyramid
+
+Figure 1 shows the 2D Simoncelli pyramid decomposition and reconstruction scheme. The filters $L_i$, $H_i$ and $F_j$ are the 2D low-pass, high-pass and directional (fan) filters, respectively. Figure 2 represents the decomposition in the frequency domain. The filters $L_0(\omega)$, $H_0(\omega)$ and $H_1(\omega)$ ideally are linear phase, self-inverting and satisfy the following constraints: the aliasing in the low-frequency (subsampled) bands is minimized (eq. (2)), the overall system has unity response (eq. (3)), and all radial bands have a bandwidth of one octave (eq. (4)):
+
+$$ (2) \qquad L_1(\omega) \to 0 \quad \text{for } \omega > \pi/2 $$
+
+$$ (3) \qquad |L_i(\omega)|^2 + |H_i(\omega)|^2 = 1 $$
+
+$$ (4) \qquad L_0(\omega) = L_1(2\omega). $$
+
+The 2D filters can be obtained from 1D linear phase FIR filters using the McClellan transform [12]. Using (4), the two-dimensional filter $L_0(\omega)$ can be obtained from $L_1(\omega)$. We used a conjugate gradient algorithm to find the filters $H_0(\omega)$ and $H_1(\omega)$ under the constraints set by (3) [17].
+
+For practical purposes the high-pass filters $H_0(\omega)$ and $H_1(\omega)$ are directly combined with the fan filters $F_1(\omega)$, $F_2(\omega)$, $F_3(\omega)$, and $F_4(\omega)$. This can be done by multiplying the Fourier transforms of the filters $H_0(\omega)$ and $H_1(\omega)$ with the angular term $f(\theta - \theta_m)$ in (5), where $\theta_m$ is the center orientation of the $m$-th fan filter. The inverse Fourier transform of the product gives the required 2D filter coefficients.
+
+$$ (5) \qquad f(\theta - \theta_m) = \begin{cases} 1 & \text{if } |\theta - \theta_m| < \pi/16 \\ \cos\big(4(|\theta - \theta_m| - \pi/16)\big) & \text{if } \frac{\pi}{16} \le |\theta - \theta_m| < 3\pi/16 \\ 0 & \text{otherwise} \end{cases} $$
+---PAGE_BREAK---
+
+FIGURE 1. The Simoncelli analysis/synthesis filter bank. The total decomposition is obtained by recursively inserting the contents of the dashed box into the white spot.
+
+FIGURE 2. The pyramid decomposition in the frequency domain.
+
+### 2.3. An extension to 3D using wavelets
+
+The 2D decorrelating pyramid transform separates signal and noise. We now include motion compensated temporal information to improve on this separation. If the signals are stationary in the temporal direction, the motion compensated frames from $t-n, \dots, t+m$ should all be identical to frame $t$, except for the noise term. The pyramid decompositions of these images should then also be identical, except for the noise term. If we view a set of coefficients at corresponding scale-space locations in a temporal sense, we have a 1D DC signal plus noise. These can be separated into a low-pass and a high-pass signal; we propose applying the DWT for this purpose.
+
+Note that, ideally, one would use a long filter to obtain good separation between signal and noise in the spatio-temporal decomposition step. In practice, the inaccuracies of the motion estimator and the fact that areas become occluded or uncovered limit the length of the wavelet that can be used. The inaccuracies of the motion estimator are also the reason why we do not obtain the low-pass signal by straightforward averaging in the temporal direction: we want to avoid blur.
+
+The 3D spatial-temporal pyramid decomposition is given by the following steps:
+
+1. Calculate the motion compensated images for images $t-n, \dots, t+m$ ($n, m$ depend on the length of the wavelet),
+
+2. Calculate the Simoncelli pyramid decomposition for each (motion compensated) image (Fig. 1),
+
+3. Apply the DWT in the temporal direction to each set of coefficients at corresponding scale-space locations, with the coefficient belonging to frame $t$ being the center coefficient.
+
+The reconstruction phase is as follows:
+---PAGE_BREAK---
+
+FIGURE 3. Coring functions. (a) Soft thresholding. (b) Hard thresholding. (c) Bayesian thresholding. (d) Piecewise linear approximation to Bayesian thresholding.
+
+FIGURE 4. Schematic representation of the 3D pyramid noise reduction system
+
+4a Apply the inverse DWT to each set of wavelet coefficients; this leaves us with the spatially decomposed frame at *t*,
+
+5a Apply the synthesis stage of the Simoncelli scheme (Fig. 1).
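The temporal machinery of steps 1–3 and 4a–5a can be sketched as follows. This is a minimal sketch, not the actual implementation: motion compensation is replaced by an identity stand-in, the spatial Simoncelli pyramid is elided (the pixel domain stands in for one coefficient band), and a one-level orthogonal Haar pair (our choice, not the 6-tap wavelet used in the experiments) serves as the temporal DWT so that the round trip can be verified.

```python
import numpy as np

def temporal_dwt(stack):
    # One level of an orthogonal Haar DWT along the time axis (step 3)
    lo = (stack[0::2] + stack[1::2]) / np.sqrt(2.0)
    hi = (stack[0::2] - stack[1::2]) / np.sqrt(2.0)
    return lo, hi

def temporal_idwt(lo, hi):
    # Inverse of temporal_dwt (step 4a)
    out = np.empty((2 * lo.shape[0],) + lo.shape[1:])
    out[0::2] = (lo + hi) / np.sqrt(2.0)
    out[1::2] = (lo - hi) / np.sqrt(2.0)
    return out

def denoise_center_frame(frames, threshold):
    # Step 1: motion compensation (identity stand-in here)
    mc = np.stack(list(frames))
    # Step 2: the Simoncelli pyramid of each frame would be computed here;
    # we keep the pixel domain as a stand-in for one coefficient band.
    lo, hi = temporal_dwt(mc)
    # Soft-threshold the temporal high-pass band (cf. step 3b below)
    hi = np.sign(hi) * np.maximum(np.abs(hi) - threshold, 0.0)
    rec = temporal_idwt(lo, hi)
    # Step 5a: spatial synthesis would be applied here.
    return rec[len(frames) // 2]
```

For a temporally stationary stack the high-pass band carries only noise, so thresholding it leaves the DC content untouched, which is exactly the behavior the text relies on.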
+
+## 2.4. Noise reduction by coring
+
+Coring or thresholding [10], [19] the transform domain coefficients is a popular filtering operation, with soft and hard thresholding as the most common variants. Alternatively, coring functions that are optimal in a mean-squared-error sense can be computed or approximated using a Bayesian framework. Figure 3 shows these coring functions. Soft thresholding leads to a slight loss of contrast in the reconstructed image; hard thresholding introduces disturbing ringing artifacts near edges.
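For illustration, the soft and hard thresholding curves of Fig. 3a and 3b amount to a few lines each (a sketch; the function names and the threshold `t` are ours):

```python
import numpy as np

def soft_threshold(c, t):
    # Fig. 3a: shrink every coefficient toward zero by t
    return np.sign(c) * np.maximum(np.abs(c) - t, 0.0)

def hard_threshold(c, t):
    # Fig. 3b: keep coefficients whose magnitude exceeds t, kill the rest
    return np.where(np.abs(c) > t, c, 0.0)

c = np.array([-3.0, 0.5, 2.0])
print(soft_threshold(c, 1.0))   # [-2.  0.  1.]
print(hard_threshold(c, 1.0))   # [-3.  0.  2.]
```

The contrast loss of soft thresholding is visible in the example: the surviving coefficients are all reduced in magnitude by the threshold.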
+
+The structure of the proposed decomposition/reconstruction algorithm offers several possibilities for applying coring by introducing the following steps (see Fig. 4):
+
+2b For all spatially decomposed (motion compensated) frames, threshold all the spatial transform coefficients (except for those in the DC band),
+
+3b Threshold all the high-pass spatio-temporal coefficients,
+
+4b Threshold the coefficients in all spatial frequency bands of the reconstructed frame *t* (except for those in the DC band).
+
+Note that the spatio-temporal decomposition makes temporal filtering of the DC band from the Simoncelli pyramid possible without introducing visible blur or other artifacts. 2D scale-space noise reduction filters have no way of filtering the DC bands.
+
+## 2.5. Experiments and results
+
+For our experiments we used a three-level pyramid with four orientations. To significantly reduce the computational load we omitted the filters $L_0(\omega)$ and $H_0(\omega)$, i.e., we omitted everything outside the dashed box in Fig. 1. This yields $5.3N^2$ transform coefficients for an $N \times N$ image. As a result, the highest frequency bands cover a larger relative bandwidth than the lower frequency bands. $L_1(\omega)$ was designed to have an attenuation of 40 dB at $\pi/2$. A 6-tap bi-orthogonal Daubechies wavelet [1] was used for the temporal extension. For the
+---PAGE_BREAK---
+
+FIGURE 5. (a) Noisy field from Plane sequence. (b) Corrected field.
+
+sake of simplicity we applied soft-thresholding with values 1.0, 10.0 and 1.0 in steps 2b, 3b and 4b respectively (see Section 2.4).
+
+The test sequence (100 frames) of a plane flying over a landscape was processed. It contains fine detail, sharp edges, uniform regions and significant motion. Eight levels of white Gaussian noise were added to this scene, leading to an average PSNR of 25.2 dB. The motion vectors from one field to the next were computed using a hierarchical block-matcher with additional constraints on the smoothness of the motion vectors. Motion vectors to and from fields further apart were obtained by vector tracing (adding the motion vectors of consecutive fields).
+
+Figure 5 shows a noisy field and a noise-reduced field. Clearly, much of the noise has been removed while edges have not been blurred. However, we do note a slight reduction of contrast in some lightly textured regions. This new filter yields a considerable increase in PSNR of 5.2 dB.
+
+## 2.6. Discussion
+
+Even though the proposed filter gives very good results, it is not a practical filter due to its high complexity. The low-pass and high-pass filters used consist of 23 × 23 taps; therefore, to ensure a reasonable processing time, the convolutions have to be performed in the Fourier domain. Clearly, this filter is too expensive to be built in real-time hardware. However, its function is rather to serve as a benchmark: to see what amount of noise reduction is attainable and how close the noise filters that are being implemented in hardware for AURORA come to that optimum.
+---PAGE_BREAK---
+
+### 3. Blotch detection
+
+Blotches are a common type of artifact in old film sequences, manifesting themselves as disturbing bright or dark spots caused by dirt and by loss of the gelatin covering the film due to aging and poor film quality. Characteristic of blotches is that they seldom appear at the same spatial location in consecutive frames, that they tend to be smooth (little texture), and that they usually have intensity values very different from the original content they cover (see, e.g., Fig. 9a). Films corrupted by blotches are often restored using a two-step approach. In the first step, blotches are detected and detection masks are generated that indicate for each pixel whether or not it is part of a blotch. In the second step, corrupted pixels are corrected by means of spatio-temporal interpolation [8], [9], [6], [14].
+
+Blotch detectors are either object based or pixel based. Pixel based detectors determine for each pixel whether or not it is part of a blotch, independently of whether its neighboring pixels are considered to be part of a blotch. Object based detectors exploit the spatial coherence within blotches via, e.g., Markov random fields. So far, pixel based detectors have been shown to achieve detection results similar to those of object based detectors at a fraction of the computational cost [8].
+
+We present three post-processing operations that can be applied to the candidate blotches output by a blotch detector to improve the quality of the detection masks. The key is that we use a pixel based detector and that in the post-processing we consider blotches as objects. This allows us to exploit the spatial coherency within blotches while maintaining low complexity and low computational effort. The first post-processing operation detects and removes possible false alarms by taking into account the probability that the detector wrongly detects a blotch (an object) of a certain size due to noise. The second post-processing operation finds missing pieces of blotches that would otherwise be detected only partially, by applying a technique called hysteresis thresholding [4]. The final post-processing operation consists of a constrained dilation operator that fills small holes in and on the edges of the candidate blotches.
+
+Section 3.1 describes the blotch detector we will be using. Section 3.2 describes the post-processing operations. Section 3.3 describes the results and concludes this topic.
+
+#### 3.1. *The simplified ranked ordered difference (S-ROD) detector*
+
+Blotches are characterized by the fact that they seldom appear at the same location in a pair of consecutive frames and that they have intensity values different from the original image contents. Therefore, blotches can be detected by detecting temporal discontinuities in image intensity. The additional use of motion compensation significantly reduces the number of false alarms. The ROD detector [14] is based on these principles. We present a simplified version of ROD, which we call S-ROD.
+
+Let $I_n(z)$ denote the intensity of a pixel at a spatial location $z^T = (x, y)$ in frame $n$. Let $p_{n,i}(z)$ form a set of six reference pixels, ordered by magnitude, obtained from spatially co-sited pixels and their vertical neighbors in motion compensated previous and next frames (see Fig. 6). The output of S-ROD is then defined by:
+
+$$ (6) \qquad d_n(z) = \begin{cases} \min(p_{n,i}(z)) - I_n(z) & \text{if } \min(p_{n,i}(z)) - I_n(z) > 0 \\ I_n(z) - \max(p_{n,i}(z)) & \text{if } I_n(z) - \max(p_{n,i}(z)) > 0, \\ 0 & \text{otherwise} \end{cases} $$
+
+and a blotch is detected when:
+
+$$ (7) \qquad d_n(z) > T_1 \quad \text{with} \quad T_1 \ge 0. $$
+
+What S-ROD basically does is compute the range of the reference intensities from the motion compensated frames and compare the intensity of the pixel under investigation to this range
+---PAGE_BREAK---
+
+FIGURE 6. Selection of reference pixels $p_{n,i}(z)$ from previous and next frames using motion compensation
+
+FIGURE 7. Schematic overview of the functioning of the Simplified ROD detector. (The reference pixels $p_i$ have been ordered by their magnitude)
+
+FIGURE 8. ROC-curves resulting from ROD, S-ROD, and S-ROD with post-processing applied to the test sequence
+
+(Fig. 7). A blotch is detected if the intensity of the current pixel lies far enough outside that range. What is considered "far enough" is determined by $T_1$. If $T_1$ is small, many blotches will be detected correctly, but many false alarms will occur. As $T_1$ becomes larger, fewer blotches are detected and the number of false alarms drops.
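Eqs. (6) and (7) translate almost directly into code (a minimal sketch; function and variable names are ours):

```python
def s_rod(pixel, refs):
    # Eq. (6): distance of the current pixel to the range spanned by the
    # six reference intensities taken from the motion compensated previous
    # and next frames; zero when the pixel lies inside the range.
    lo, hi = min(refs), max(refs)
    if lo - pixel > 0:
        return lo - pixel
    if pixel - hi > 0:
        return pixel - hi
    return 0

def is_blotch(pixel, refs, t1):
    # Eq. (7), with t1 >= 0
    return s_rod(pixel, refs) > t1

refs = [100, 102, 103, 104, 105, 106]
print(s_rod(90, refs), s_rod(103, refs), s_rod(120, refs))  # 10 0 14
print(is_blotch(90, refs, 5))                               # True
```

Since only the minimum and maximum of the six reference pixels are needed, the full ordering by magnitude used in ROD can be skipped, which is precisely the simplification S-ROD makes.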
+
+By means of a *receiver operator characteristic* (ROC) curve, which depicts the probability of false alarm vs. the probability of correct detection, the numerical performance of a detector can be visualized graphically. Figure 8 shows the ROC curves obtained from a test sequence using ROD, S-ROD and S-ROD with the post-processing proposed in the next section. The test sequence, which was also used in [8], [14], is the *Western* sequence (64 frames) to which artificial blotches have been added. Each artificial blotch had a fixed gray value which was drawn uniformly between 0 and 255. We observe that the performance of S-ROD is slightly below that of ROD. We also see that much is gained after applying the post-processing operations we describe next.
+---PAGE_BREAK---
+
+## 3.2. Improving the detection results by post-processing
+
+We propose a number of post-processing operations in which the goal is to maximize the ratio of correct detections to false alarms. The key to the post-processing operations is that the candidate blotches are viewed not as individual pixels but as objects. Section 3.2.1 defines what we consider to be an object. Section 3.2.2 presents the first post-processing technique that detects and removes possible false alarms due to noise given a specific detector. Often blotches are detected only partially. Section 3.2.3 presents our second post-processing technique that finds more complete blotches in those cases. Section 3.2.4 presents the third post-processing technique, which is a constrained dilation technique that includes small holes in the candidate blotches that are missed by the detector.
+
+### 3.2.1. Object definition.
+We want to manipulate candidate blotches as objects rather than as individual pixels. Because we are particularly interested in blotches it is reasonable to use characteristics of blotches in the object definition. The characteristics we use are that blotches are spatially coherent and that they tend to be smooth, i.e., that adjacent pixels have similar intensities. We consider a pair of pixels to be similar if their difference is smaller than twice the standard deviation of the noise. Other characteristics of blotches are taken into account implicitly due to the fact that we are only interested in pixels flagged by the blotch detector.
+
+Therefore, adjacent pixels that have similar intensities and that are flagged by the blotch detector are considered to be part of the same candidate blotch. To differentiate between the various candidate blotches a unique label is assigned to each candidate blotch (and to each pixel that is part of that blotch).
+
+### 3.2.2. Removing false alarms due to noise.
+High correct detection rates are achieved by setting the blotch detector to a high degree of sensitivity. However, the detector is then not only sensitive to blotches but also to noise and many false alarms result. This is undesirable because it increases the probability of introducing visible errors during the correction stage. To reduce the influence of noise, we propose computing the probability that the detector gives a specific response due to noise under the assumption that no blotches are present. This allows us to compute the probability that a blotch of given size is wrongly detected. If that probability exceeds a certain risk $R$, all candidate blotches of that size and with the same corresponding detector response are removed from the detection mask.
+
+We demonstrate this approach for the S-ROD detector. After labelling the candidate blotches we compute the size $N$ and the mean value of the detector output $d_n(z)$ for each blotch. We assume that in the absence of noise $I_n(z) = p_{n,i}(z)$ for at least one $i$, i.e., that no false alarms occur in the absence of noise. We also assume that the noise is *independent and identically distributed* (i.i.d.). The probability that S-ROD, with $T_1 \ge 0$, generates a single false alarm due to noise is then given by:
+
+$$
+\begin{align}
+P[d_n(z) > T_1] &= P[I_n(z) - \max(p_{n,i}(z)) > T_1, \quad I_n(z) - \max(p_{n,i}(z)) > 0] \\
+&\quad + P[\min(p_{n,i}(z)) - I_n(z) > T_1, \quad \min(p_{n,i}(z)) - I_n(z) > 0] \\
+&= P[I_n(z) - \max(p_{n,i}(z)) > T_1] + P[\min(p_{n,i}(z)) - I_n(z) > T_1] \\
+&= P[\text{all } I_n(z) - p_{n,i}(z) > T_1] + P[\text{all } p_{n,i}(z) - I_n(z) > T_1] \\
+&= P^6[I_n(z) - p_{n,i}(z) > T_1] + P^6[p_{n,i}(z) - I_n(z) > T_1].
+\end{align}
+\tag{8} $$
+
+Because we operate in the digital domain, it is easy to compute the probability mass function $P[d_n(z) = X]$ once (8) has been determined, i.e., the probability that S-ROD gives a specific response $X$ for a single pixel due to noise.
+
+After the labeling procedure, a candidate blotch is an object with spatial support $S$ that consists of $N$ pixels, and the mean value of the output of the blotch detector equals $\bar{d}_n(z)$ for that object. Let $H_0$ denote the hypothesis that this object is purely the result of false alarms
+---PAGE_BREAK---
+
+FIGURE 9. (a) Blotched frame from test sequence. (b) Mask of artificial blotches. (c) Initial detection mask using S-ROD with $T_1 = 0$. (d) Detection mask after removing possible false alarms due to noise.
+
+Table 1. See text for explanation.
+
+due to noise. $P[H_0]$ is then the probability that a collection of $N$ individual pixels are flagged by the S-ROD independently of their location and of their neighbors. By approximation:
+
+$$ (9) \qquad P[H_0] = P\left[\frac{1}{N} \sum_{z \in S} d_n(z) = \overline{d_n(z)}, \text{ size} = N \mid \text{no blotch present}\right] \\ \approx P^N\left[d_n(z) = \overline{d_n(z)}\right] $$
+
+We now remove those candidate blotches for which the probability that they are solely the result of noise exceeds a certain risk $R$:
+
+$$ (10) \qquad P[H_0] > R. $$
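Under the approximation of eq. (9), the removal rule of eq. (10) reduces to a size threshold per detector-response value: with $p$ denoting $P[d_n(z) = \overline{d_n(z)}]$, a candidate of $N$ pixels is removed when $p^N > R$, i.e., when $N < \log R / \log p$. A sketch with hypothetical numbers (`p_response` and `risk` are illustrative inputs, not values from Table 1):

```python
import math

def min_size_to_keep(p_response, risk):
    # Smallest blotch size N for which P[H0] ~ p^N no longer exceeds the
    # risk R; candidates with fewer pixels are discarded (eqs. (9)-(10)).
    return math.ceil(math.log(risk) / math.log(p_response))

print(min_size_to_keep(0.5, 1e-3))   # 10
```

With $p = 0.5$ and $R = 10^{-3}$, candidates of fewer than 10 pixels are discarded, since $0.5^9 \approx 2\cdot10^{-3} > R$ while $0.5^{10} \approx 10^{-3} \not> R$.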
+
+The result of this approach is illustrated in Figure 9, which shows frame 8 of the *Western* sequence together with the artificial blotch mask and the detection masks before and after post-processing. The initial detection mask was obtained using S-ROD with $T_1 = 0$. The noise was assumed to be i.i.d. Gaussian, and its variance was estimated to be 9 using the method described in [11]. Table 1 shows the probability of a false alarm due to noise and the sizes below which blotches are removed when the average detector response for a blotch equals $\overline{d_n(z)}$. These sizes were computed by setting the risk to $R = 10^{-5}$.
+
+In frame 8, 84.8% of the blotches were detected correctly and 13.1% of the uncorrupted pixels were mistakenly flagged as being part of a blotch before post-processing. After post-processing, 83.4% of the blotches were detected correctly and only 1.0% of the clean pixels were mistakenly flagged. Clearly, this is a significant improvement.
+---PAGE_BREAK---
+
+FIGURE 10. Schematic overview of hysteresis thresholding. (a) Detection mask from detector set to low sensitivity with partially detected blotches. (b) Detection mask from detector set to high sensitivity with many false alarms. (c) Result after validation (propagation): the partially detected blotches are completed
+
+### 3.2.3. Completing partially detected blotches.
+We note that when the detector is set to low detection rates, many blotches are not detected at all and others are detected only partially. We now want to make those partially detected blotches more complete. We achieve this by noting from Fig. 8 that as $T_1$ is raised the probability of false alarms decreases faster than the probability of correct detections. This means that detections resulting from a blotch detector set to a low detection rate are more likely to be correct, and can thus be used to validate the detections from that detector when it is set to a high detection rate.
+
+This can be implemented by applying hysteresis thresholding [4] (see Fig. 10). The first stage computes and labels the set of candidate blotches using the operator settings of the blotch detector (in the case of S-ROD this is the operator setting of $T_1$). Possible false alarms due to noise are removed as described before. The second stage sets the blotch detector to a very high detection rate (i.e., $T_1 = 0$ for S-ROD) and again a set of candidate blotches is computed and labeled. Candidate blotches from the second set can now be validated; they are preserved if corresponding candidate blotches in the first set exist. The other candidate blotches in the second set, which are more likely to have resulted from false alarms, are discarded. Effectively we have preserved the candidate blotches detected using the operator settings and we have made them more complete.
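The validation stage can be sketched as follows, assuming 4-connectivity for the candidate blotches and using a simple flood fill for the labelling (names and the connectivity choice are ours; the text does not fix them):

```python
import numpy as np

def validate(high_mask, low_mask):
    """Keep only those connected candidates in the sensitive mask
    (high_mask, e.g. T1 = 0) that touch at least one detection of the
    conservative mask (low_mask, obtained with a larger T1)."""
    out = np.zeros_like(high_mask, dtype=bool)
    seen = np.zeros_like(high_mask, dtype=bool)
    h, w = high_mask.shape
    for y in range(h):
        for x in range(w):
            if high_mask[y, x] and not seen[y, x]:
                # Flood-fill one candidate blotch (4-connectivity)
                stack, comp = [(y, x)], []
                seen[y, x] = True
                while stack:
                    cy, cx = stack.pop()
                    comp.append((cy, cx))
                    for ny, nx in ((cy-1, cx), (cy+1, cx), (cy, cx-1), (cy, cx+1)):
                        if 0 <= ny < h and 0 <= nx < w and \
                                high_mask[ny, nx] and not seen[ny, nx]:
                            seen[ny, nx] = True
                            stack.append((ny, nx))
                # Preserve the candidate only if validated by the low mask
                if any(low_mask[cy, cx] for cy, cx in comp):
                    for cy, cx in comp:
                        out[cy, cx] = True
    return out
```

Candidates of the sensitive detector that have no counterpart in the conservative mask are dropped as likely false alarms, which is exactly the propagation step of Fig. 10c.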
+
+### 3.2.4. Constrained dilation for missing details.
+Even though a blotch detector may be very refined, there is always a probability that it fails to detect parts of a blotch. This is illustrated by Fig. 9, where it can be seen that, even though the S-ROD detector has been set to its most sensitive setting, not all blotches have been detected completely. In this final post-processing step we refine the candidate blotches by filling small holes in and on the edges of the candidate blotches.
+
+We propose using a constrained dilation operation for filling in the holes. It applies the following rule: if a pixel's neighbor is flagged as being blotched and its intensity difference with that neighbor is small (e.g., less than twice the standard deviation of the noise) then that pixel should also be flagged as being part of that blotch. Because of the constraint on the differences in intensity, the probability that uncorrupted pixels surrounding a blotch will mistakenly become flagged as "blotched" is reduced because blotches tend to have gray values that are significantly different from their surroundings.
+
+It is important not to apply too many iterations of this constrained dilation operation because it is always possible that the contrast between a candidate blotch and its surroundings is low. The result would be that the candidate blotch would grow completely out of its bounds and many false alarms would occur. In practice, we found that applying two iterations leads to good results.
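The dilation rule above can be sketched as follows, assuming 4-connectivity and the twice-the-noise-standard-deviation similarity constraint from Section 3.2.1 (a minimal sketch; names are ours):

```python
import numpy as np

def constrained_dilate(mask, image, sigma, iterations=2):
    """Grow the detection mask into 4-neighbours whose intensity differs
    by less than 2*sigma from an already flagged neighbour."""
    mask = mask.astype(bool).copy()
    h, w = mask.shape
    for _ in range(iterations):
        grown = mask.copy()
        for y in range(h):
            for x in range(w):
                if mask[y, x]:
                    continue
                for ny, nx in ((y-1, x), (y+1, x), (y, x-1), (y, x+1)):
                    if 0 <= ny < h and 0 <= nx < w and mask[ny, nx] and \
                            abs(float(image[y, x]) - float(image[ny, nx])) < 2.0 * sigma:
                        grown[y, x] = True
                        break
        mask = grown
    return mask
```

The intensity constraint is what keeps the mask from leaking into the surroundings; without it, two iterations of plain dilation would flag every pixel within two steps of a candidate blotch.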
+---PAGE_BREAK---
+
+FIGURE 11. (a) Mask of artificial blotches. (b) Blotches detected using ROD (c) Blotches detected using S-ROD with post-processing. (d) Blotched frame. (e) corrected frame after ROD. (f) Corrected frame after S-ROD with post-processing. Note the differences in the boxed regions.
+
+### 3.3. Results and conclusions
+
+We already observed in Figure 8 that S-ROD in combination with post-processing gives a significant improvement over plain ROD. For example, the number of false alarms resulting from S-ROD with post-processing is a factor 6 lower than that which results from ROD at a correct detection rate of 85%.
+
+Figure 11 shows the detection masks and corrected versions of frame 8 of the test sequence that result from ROD and from S-ROD with post-processing. The ML3Dex interpolation method described in [9], a three-dimensional multi-stage median filter, was used for interpolating the blotched data. ROD and S-ROD with post-processing were set to an overall false alarm rate of 0.001 for the whole test sequence. Note the differences in the number of small blotches and the differences in detection and correction results in the boxed regions.
+
+In conclusion, the methodology described here can also be applied to other blotch detectors. Alternatively, the rules and constraints posed by the post-processing could well be defined implicitly in a new detector, e.g. based on Markov random fields. This would, however, significantly increase the complexity and computational effort.
+
+## 4. Correction of intensity flicker
+
+A common artifact in old black-and-white film sequences is intensity flicker. We define intensity flicker as unnatural temporal fluctuations in perceived image intensity that do not originate from the original scene. Intensity flicker has a great number of causes, e.g., aging of film, dust, chemical processing, copying, aliasing, and, in the case of earlier film cameras, variations in shutter time. Neither equalizing the intensity histograms nor equalizing the mean frame values of consecutive frames, as suggested in [13], [18], gives a general solution
+---PAGE_BREAK---
+
+to the problem. These methods do not take changes in scene contents into account, and they
+do not appreciate the fact that intensity flicker can be a spatially localized effect. We propose
+equalizing local intensity means and variances in a temporal sense to reduce the undesirable
+temporal fluctuations in image intensities.
+
+In Section 4.1 we model the effects of intensity flicker, and we derive a solution to this problem for stationary sequences. Here we also define a measure of reliability for the model parameters. In Section 4.2 we extend the applicability of our method to nonstationary sequences by incorporating motion. In the presence of intensity flicker, it is difficult to compensate for motion of local objects in order to satisfy the requirement of stationarity. We therefore describe a method for compensating global motion (camera pan) and a method for detecting the remaining local object motion. The model parameters are interpolated where local motion is detected. In Section 4.3 we present the overall intensity-flicker correction system and discuss some practical aspects. Experiments and results form the topics of Section 4.4. Finally, we conclude with a discussion in Section 4.5.
+
+**4.1. Estimating and correcting intensity flicker in stationary sequences**
+
+We develop a method for correcting intensity flicker that is robust to the wide range of causes
+of this artifact. First, in Section 4.1.1 we model the effects of intensity flicker. We find
+a solution to this problem that is optimal in a linear mean square error sense. In Section
+4.1.2 we concentrate on how the model parameters can be estimated for stationary image
+sequences, and we define a measure of reliability of those estimated parameters.
+
+4.1.1. *Modeling the effects of intensity flicker.* It is not practical to find explicit physical models for each of the mechanisms mentioned that cause intensity flicker. Instead, our model of the effects of this phenomenon is based on the observation that it causes temporal fluctuations in local intensity mean and variance. Since noise is unavoidable in the various phases of digital image formation, we also include a noise term in our model:
+
+$$
+(11) \qquad Y(x,y,t) = \alpha(x,y,t) \cdot I(x,y,t) + \beta(x,y,t) + \eta(x,y,t)
+$$
+
+Here $x, y$ are discrete spatial coordinates and $t$ indicates the frame number. $Y(x, y, t)$ and $I(x, y, t)$ indicate the observed and original image intensities. Note that by $I(x, y, t)$ we do not necessarily mean the original scene intensities, but a signal that, prior to the introduction of intensity flicker, may already have been distorted. The distortion could be due to signal-dependent additive granular noise that is characteristic of film [3], [16], for example. The multiplicative and additive intensity-flicker parameters are denoted by $\alpha(x, y, t)$ and $\beta(x, y, t)$. In the ideal case, when no intensity flicker is present, $\alpha(x, y, t) = 1$ and $\beta(x, y, t) = 0$ for all $x, y, t$. We assume that $\alpha(x, y, t)$ and $\beta(x, y, t)$ are spatially smooth functions.
+
+The intensity-flicker-independent noise, denoted by $\eta(x, y, t)$, models noise that has been
+added to the signal after the introduction of intensity flicker. We assume that this noise term is
+uncorrelated with the original image intensities. We also assume that $\eta(x, y, t)$ is a zero-mean
+signal with known variance. Examples are quantization noise and thermal noise originating
+from electronic studio equipment (VCR, amplifiers, etc.).
+
+To correct intensity flicker, we must estimate the original intensity for each pixel from
+the observed intensities. We propose using the following linear estimator for estimating
+$I(x, y, t)$:
+
+$$
+(12) \qquad \hat{I}(x, y, t) = a(x, y, t) \cdot Y(x, y, t) + b(x, y, t).
+$$
+
+If we define the error between the original image intensity and the estimated original
+image intensity as:
+---PAGE_BREAK---
+
+$$ (13) \qquad \epsilon(x, y, t) = I(x, y, t) - \hat{I}(x, y, t), $$
+
+then we can easily determine that, given $\alpha(x, y, t)$ and $\beta(x, y, t)$, the optimal values for $a(x, y, t)$ and $b(x, y, t)$ in a linear minimum mean square error (LMMSE) sense are given by:
+
+$$ (14) \qquad a(x, y, t) = \frac{\mathrm{var}[Y(x, y, t)] - \mathrm{var}[\eta(x, y, t)]}{\mathrm{var}[Y(x, y, t)]} \cdot \frac{1}{\alpha(x, y, t)} $$
+
+$$ (15) \qquad b(x, y, t) = -\frac{\beta(x, y, t)}{\alpha(x, y, t)} + \frac{\mathrm{var}[\eta(x, y, t)]}{\mathrm{var}[Y(x, y, t)]} \cdot \frac{E[Y(x, y, t)]}{\alpha(x, y, t)} $$
+
+where $E[]$ stands for the expectation operator and $\mathrm{var}[]$ indicates the variance. It is interesting to note that from equations (12), (14), and (15) it follows that, in the absence of noise, $a(x,y,t) = 1/\alpha(x,y,t)$, $b(x,y,t) = -\beta(x,y,t)/\alpha(x,y,t)$ and that $\hat{I}(x,y,t) = I(x,y,t)$. That is to say, the estimated intensities are exactly equal to the original intensities. In the extreme case that the observed signal variance equals the noise variance, we find that $a(x,y,t) = 0$ and $\hat{I}(x,y,t) = b(x,y,t) = E[I(x,y,t)];$ the estimated intensities equal the expected values of the original intensities.
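The noise-free limiting case can be checked numerically: with $\mathrm{var}[\eta] = 0$, eqs. (14) and (15) reduce the estimator of eq. (12) to $(Y - \beta)/\alpha$, which recovers the original intensity exactly. A sketch with arbitrary illustrative numbers (all names are ours):

```python
def lmmse_gain(var_y, var_eta, alpha):
    # Eq. (14)
    return (var_y - var_eta) / var_y / alpha

def lmmse_offset(var_y, var_eta, alpha, beta, mean_y):
    # Eq. (15)
    return -beta / alpha + var_eta / var_y * mean_y / alpha

# Noise-free sanity check: alpha = 2, beta = 10, var_eta = 0
alpha, beta = 2.0, 10.0
I = 50.0
Y = alpha * I + beta                      # eq. (11) without noise
a = lmmse_gain(var_y=16.0, var_eta=0.0, alpha=alpha)
b = lmmse_offset(var_y=16.0, var_eta=0.0, alpha=alpha, beta=beta, mean_y=Y)
print(a * Y + b)                          # 50.0, i.e. I is recovered (eq. (12))
```

In the opposite extreme, `var_y == var_eta`, the gain `a` becomes zero and the estimate collapses to the offset term, matching the discussion above.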
+
+**4.1.2. Estimating intensity-flicker parameters in stationary scenes.** In the previous section we derived a LMMSE solution to intensity flicker, assuming that the intensity-flicker parameters $\alpha(x, y, t)$ and $\beta(x, y, t)$ are known. This is not the case in most practical situations, and these parameters will have to be estimated from the observed data. In this section we determine how the intensity-flicker parameters can be estimated from stationary image sequences. We already assumed that $\alpha(x, y, t)$ and $\beta(x, y, t)$ are spatially smooth functions. For practical purposes we now also assume that the intensity-flicker parameters are constant locally:
+
+$$ (16) \qquad \begin{cases} \alpha(x, y, t) = \alpha_{m,n}(t) \\ \beta(x, y, t) = \beta_{m,n}(t) \end{cases} \quad \text{for } x, y \in \Omega_{m,n}, $$
+
+where $\Omega_{m,n}$ indicates a small image region. The image regions $\Omega_{m,n}$ can, in principle, have any shape, but they are rectangular blocks in practice, and $m, n$ indicate their horizontal and vertical spatial locations. The $\alpha_{m,n}(t)$ and $\beta_{m,n}(t)$ corresponding to $\Omega_{m,n}$ are considered as frame-dependent matrix entries at $m, n$. The size $M \times N$ of the matrix depends on the total number of blocks in the horizontal and vertical directions.
+
+Keeping in mind the assumption that the zero-mean noise $\eta(x,y,t)$ is signal independent, we compute from (11) the expected value and variance of $Y(x,y,t)$ in a spatial sense for $x,y \in \Omega_{m,n}$:
+
+$$ (17) \qquad E[Y(x,y,t)] = \alpha_{m,n}(t) \cdot E[I(x,y,t)] + \beta_{m,n}(t), $$
+
+$$ (18) \qquad \mathrm{var}[Y(x, y, t)] = \alpha_{m,n}^2(t) \cdot \mathrm{var}[I(x, y, t)] + \mathrm{var}[\eta(x, y, t)]. $$
+
+Rewriting (17) and (18) gives for $x,y \in \Omega_{m,n}$:
+
+$$ (19) \qquad \beta_{m,n}(t) = E[Y(x,y,t)] - \alpha_{m,n}(t) \cdot E[I(x,y,t)], $$
+---PAGE_BREAK---
+
+$$ (20) \qquad \alpha_{m,n}(t) = \sqrt{\frac{\mathrm{var}[Y(x, y, t)] - \mathrm{var}[\eta(x, y, t)]}{\mathrm{var}[I(x, y, t)]}}. $$
+
+We now wish to solve (19) and (20) in a practical situation. The means and variances of $Y(x, y, t)$ can be estimated directly from the observed data in the regions $\Omega_{m,n}$. We assumed the noise variance to be known. What remains to be estimated are the expected values and variances of $I(x, y, t)$. For $x, y \in \Omega_{m,n}$, these estimates can be obtained by using the previously corrected frame as a reference:
+
+$$ (21) \qquad E[I(x, y, t)] = E[\hat{I}(x, y, t-1)], $$
+
+$$ (22) \qquad \operatorname{var}[I(x, y, t)] = \operatorname{var}[\hat{I}(x, y, t-1)]. $$
+
+Thus, for $x, y \in \Omega_{m,n}$, the estimated intensity-flicker parameters are given by:
+
+$$ (23) \qquad \hat{\beta}_{m,n}(t) = E[Y(x, y, t)] - \hat{\alpha}_{m,n}(t) \cdot E[\hat{I}(x, y, t-1)], $$
+
+$$ (24) \qquad \hat{\alpha}_{m,n}(t) = \sqrt{\frac{\mathrm{var}[Y(x,y,t)] - \mathrm{var}[\eta(x,y,t)]}{\mathrm{var}[\hat{I}(x,y,t-1)]}}. $$
+
+We need a measure of reliability for $\hat{\alpha}_{m,n}(t)$ and $\hat{\beta}_{m,n}(t)$, to be able to avoid introducing significant errors in the corrected sequence. There are some cases in which the $\hat{\alpha}_{m,n}(t)$ and $\hat{\beta}_{m,n}(t)$ are not very reliable. The first case is that of uniform image intensities. For any original image intensity in a uniform region, there are an infinite number of combinations of $\alpha_{m,n}(t)$ and $\beta_{m,n}(t)$ that lead to the observed intensity. Another case in which $\hat{\alpha}_{m,n}(t)$ and $\hat{\beta}_{m,n}(t)$ are potentially unreliable is caused by the fact that (21) and (22) discard the noise in $I(x, y, t)$ originating from $\eta(x, y, t)$. Considerable errors result in regions $\Omega_{m,n}$ in which the signal variance is small compared to the noise variance (low signal-to-noise ratio). It is clear from these examples that the accuracy of the estimated parameters decreases with decreasing signal variances. Therefore we define the following measure of reliability, for $x, y \in \Omega_{m,n}$:
+
+$$ (25) \qquad W_{m,n,t} = \begin{cases} 0 & \text{if } \mathrm{var}[Y(x,y,t)] < T_n, \\ \sqrt{\frac{\mathrm{var}[Y(x,y,t)]-T_n}{T_n}} & \text{otherwise}, \end{cases} $$
+
+where $T_n$ is a threshold depending on the variance of $\eta(x, y, t)$. Large values of $W_{m,n,t}$ indicate reliable estimates; small values indicate unreliable ones.
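Per-block estimation via (23)–(25) is simple to implement. The sketch below is our own illustration (the function and threshold names are not from the text): it estimates $\hat{\alpha}$, $\hat{\beta}$ and the reliability weight $W$ for one block from the observed frame and the previously corrected reference.

```python
import numpy as np

def estimate_flicker_params(y_block, ref_block, noise_var, t_n):
    """Estimate (alpha, beta, W) for one block via Eqs. (23)-(25),
    using the previously corrected frame as the reference."""
    var_y = y_block.var()
    var_ref = ref_block.var()
    # Reliability weight (25): low observed variance => unreliable estimate.
    if var_y < t_n or var_ref <= 0.0:
        return 1.0, 0.0, 0.0          # fall back to "no flicker", weight 0
    w = np.sqrt((var_y - t_n) / t_n)
    alpha = np.sqrt(max(var_y - noise_var, 0.0) / var_ref)   # Eq. (24)
    beta = y_block.mean() - alpha * ref_block.mean()          # Eq. (23)
    return alpha, beta, w

# Synthetic check: apply a known flicker to a textured block, then estimate it.
rng = np.random.default_rng(0)
ref = rng.uniform(50, 200, size=(24, 24))       # stands in for I_hat(t-1)
obs = 1.3 * ref + 12.0                          # alpha = 1.3, beta = 12, no noise
a, b, w = estimate_flicker_params(obs, ref, noise_var=0.0, t_n=1.0)
```

Blocks returned with weight 0 would later be filled in by the interpolation step of Section 4.2.3.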
+
+## 4.2. Incorporating motion
+
+We have modeled the effects of intensity flicker, and we derived a solution for stationary sequences. Real sequences, of course, are seldom stationary. Measures will have to be taken to avoid estimates of $\alpha(x, y, t)$ and $\beta(x, y, t)$ that are incorrect due to motion. Compensating motion between $Y(x, y, t)$ and $\hat{I}(x, y, t-1)$ helps satisfy the assumption of stationarity. This requires motion estimation.
+
+Robust methods for estimating global motion (camera pan) that are relatively insensitive to fluctuations in image intensities exist. Unfortunately, the presence of intensity flicker hampers the estimation of local motion (motion in small image regions), because local motion estimators usually rely on a constant-luminance constraint; this applies to pel-recursive methods and to all motion estimators that make use of block matching in one stage or another [21]. Even if
+---PAGE_BREAK---
+
+motion can be well compensated, a strategy is required for correcting flicker in previously occluded regions that have become uncovered.
+
+For these reasons, our strategy for estimating intensity-flicker parameters in nonstationary scenes is based on local motion detection. First, we register a pair of frames to compensate for global motion (Section 4.2.1). Then we estimate the intensity-flicker parameters as outlined in Section 4.1.2. Using these parameters, we detect the remaining local motions (Section 4.2.2). Finally, we interpolate the missing parameters of the nonstationary regions (Section 4.2.3).
+
+**4.2.1. Estimating global motion with phase correlation.** In sequences with camera pan, applying global-motion compensation only helps satisfy the requirement of stationarity if the global motion vectors (one vector per frame) are accurate, i.e., if the global motion estimator is robust against intensity flicker. A global motion estimator that suits our purpose is the phase correlation method [21] applied to high-pass-filtered versions of the images. Phase correlation determines motion based on phase shifts in the Fourier domain. Because it uses Fourier coefficients normalized by their magnitude, this method is relatively insensitive to fluctuations in image intensity. As we assumed that the amount of intensity flicker varies smoothly in a spatial sense, the direction of changes in intensity over edges and textured regions is preserved in the presence of intensity flicker. This means that the phases of the higher-frequency components are not affected by intensity flicker. Local mean intensities, however, can show considerable variations from frame to frame, which gives rise to random variations in the phase of the low-frequency components. These random variations disturb the motion estimation process; they can be avoided by removing the low-frequency components from the input images.
+
+Phase correlation determines phase shift in the Fourier domain as follows:
+
+$$ (26) \qquad C_{t,t-1}(\omega_1, \omega_2) = \frac{S_t(\omega_1, \omega_2) S_{t-1}^*(\omega_1, \omega_2)}{\|S_t(\omega_1, \omega_2) S_{t-1}^*(\omega_1, \omega_2)\|} $$
+
+where $S_t(\omega_1, \omega_2)$ stands for the 2D Fourier transform of $Y(x, y, t)$ and $*$ denotes the complex conjugate. If $Y(x, y, t)$ and $Y(x, y, t-1)$ are spatially shifted but otherwise identical images, the inverse transform of (26) leads to a delta pulse in the 2D correlation function. Its location yields the global displacement vector $(d_x, d_y)^T$. We can now compensate for global motion in estimating the model parameters by replacing (23) and (24) with:
+
+$$ (27) \qquad \hat{\beta}_{m,n}(t) = E[Y(x,y,t)] - \hat{\alpha}_{m,n}(t) \cdot E[\hat{I}(x-d_x,y-d_y,t-1)], $$
+
+$$ (28) \qquad \hat{\alpha}_{m,n}(t) = \sqrt{\frac{\mathrm{var}[Y(x,y,t)] - \mathrm{var}[\eta(x,y,t)]}{\mathrm{var}[\hat{I}(x-d_x,y-d_y,t-1)]}} $$
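The phase-correlation step (26) can be sketched with 2-D FFTs as follows. This is a minimal illustration of the method, not the paper's implementation; in particular, the high-pass prefiltering discussed above is omitted.

```python
import numpy as np

def phase_correlation_shift(cur, prev):
    """Global displacement via the phase-correlation method, Eq. (26):
    normalize the cross-power spectrum, invert, locate the peak."""
    S_t = np.fft.fft2(cur)
    S_p = np.fft.fft2(prev)
    cross = S_t * np.conj(S_p)
    c = cross / np.maximum(np.abs(cross), 1e-12)   # magnitude-normalized spectrum
    corr = np.real(np.fft.ifft2(c))                # near-delta at the shift
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Map peak coordinates to signed displacements.
    h, w = corr.shape
    if dy > h // 2:
        dy -= h
    if dx > w // 2:
        dx -= w
    return dx, dy

rng = np.random.default_rng(1)
prev = rng.standard_normal((64, 64))
cur = np.roll(prev, shift=(3, -5), axis=(0, 1))    # shift by dy=3, dx=-5
dx, dy = phase_correlation_shift(cur, prev)
```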
+
+**4.2.2. Detecting remaining local motion.** It is important to detect the remaining local motion after we have compensated for global motion. Local motion causes changes in image statistics that are not due to intensity flicker. This leads to incorrect estimates of $\alpha(x, y, t)$ and $\beta(x, y, t)$: i.e., to visible artifacts in the corrected image sequence.
+
+We have developed a robust motion detection system that relies on the current frame only. The underlying assumption of this method is that motion should only be detected if visible artifacts would otherwise be introduced. First, the observed image is subdivided into blocks $\Omega_{m,n}$ that overlap their neighbors both horizontally and vertically. The overlapping boundary regions form sets of reference intensities. The intensity-flicker parameters are estimated for
+---PAGE_BREAK---
+
+FIGURE 12. (a) Set of original measurements that have variable accuracy, the missing measurements have been set to 1. (b) Smoothed and interpolated parameters using SOR.
+
+each block by (27) and (28). These parameters are used with (12), (14), and (15) for correcting the intensities in the boundary regions. Then, for each pair of overlapping blocks, the common pixels that are assigned significantly different values are counted. Corrected pixels are considered to be significantly different when their absolute difference exceeds a threshold $T_d$. Finally, motion is flagged if the number of significantly different pixels exceeds a constant $D_{max}$, which depends on the number of pixels compared.
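A compact sketch of the detection rule follows; as a simplification of our own, an affine correction stands in for the full correction of (12), (14) and (15), and the thresholds are illustrative.

```python
import numpy as np

def motion_flag(region, params_a, params_b, t_d, d_max):
    """Flag local motion in a boundary region shared by two overlapping
    blocks: correct it with each block's (alpha, beta) estimate and count
    pixels whose corrected values differ by more than T_d."""
    a1, b1 = params_a
    a2, b2 = params_b
    corr1 = a1 * region + b1        # simplified stand-in for Eqs. (12), (14), (15)
    corr2 = a2 * region + b2
    n_diff = int(np.sum(np.abs(corr1 - corr2) > t_d))
    return n_diff > d_max           # motion flagged if too many disagreements

region = np.linspace(0, 255, 100).reshape(10, 10)
# Nearly identical parameter estimates: corrections agree, no motion flagged.
agree = motion_flag(region, (1.10, 5.0), (1.11, 4.5), t_d=10.0, d_max=20)
# Wildly different estimates (as caused by motion): corrections disagree.
clash = motion_flag(region, (1.50, 40.0), (0.80, -10.0), t_d=10.0, d_max=20)
```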
+
+**4.2.3. Interpolation of missing parameters using successive overrelaxation (SOR).** Due to noise and local motion, the estimated intensity-flicker parameters are unreliable in some cases. We refer to these parameters as *missing*; the other parameters are referred to as *known*. We want to estimate the *missing* parameters by means of interpolation, and we also want to smooth the *known* parameters. The interpolation and smoothing functions should meet the following requirements. First, the intensity-flicker correction system should switch itself off when the correctness of the interpolated values is less certain. This means that the interpolator should incorporate biases for $\hat{\alpha}_{m,n}(t)$ and $\hat{\beta}_{m,n}(t)$ towards unity and zero, respectively, that grow as the smallest distance to a region with *known* parameters becomes larger. Second, the reliability of the *known* parameters should be taken into account, i.e., the weights $W_{m,n,t}$ defined in (25).
+
+An interpolation method that meets our requirements is SOR, which is a well-known iterative method based on repeated low-pass filtering [17]. Figure 12 shows an example of this method.
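The exact cost function being relaxed is not spelled out in this section, so the sketch below is only a loose illustration of the idea under our own assumptions: each entry is repeatedly relaxed toward the average of its neighbours, reliable data are weighted by $W_{m,n,t}$, and a weak bias toward a default value (unity for $\hat{\alpha}$) takes over far from known data.

```python
import numpy as np

def smooth_interpolate(values, weights, default, n_iter=200, damp=0.05):
    """Weighted smoothing/interpolation in the spirit of SOR: relax every
    entry toward its 4-neighbour average, with a data term weighted by W
    (Eq. 25) and a weak pull toward `default` that dominates wherever no
    reliable data is nearby. Boundaries wrap for brevity."""
    est = np.full_like(values, default, dtype=float)
    neigh_avg = lambda a: (np.roll(a, 1, 0) + np.roll(a, -1, 0)
                           + np.roll(a, 1, 1) + np.roll(a, -1, 1)) / 4.0
    for _ in range(n_iter):
        neigh = neigh_avg(est)
        est = (weights * values + neigh + damp * default) / (weights + 1.0 + damp)
    return est

# alpha field: known (weight 1) on the left half, missing (weight 0) right.
alpha = np.where(np.arange(8)[None, :] < 4, 1.4, 0.0) * np.ones((8, 8))
w = np.where(np.arange(8)[None, :] < 4, 1.0, 0.0) * np.ones((8, 8))
filled = smooth_interpolate(alpha, w, default=1.0)
```

Known entries stay close to their data (here 1.4), while missing entries interpolate smoothly and drift toward the default of 1.0 with growing distance from known regions, as required.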
+
+### 4.3. Practical issues
+
+Figure 13 shows the overall structure of the system of intensity-flicker correction. We have added some operations to this figure that we have not mentioned before and that improve the system's behavior. First, the current input and the previous system output (with global motion compensation) are low-pass filtered with a 5×5 Gaussian kernel. Prefiltering suppresses the influence of high-frequency noise and the effects of small motion. Then, local means $\mu$ and variances $\sigma^2$ are computed to be used for estimating the intensity-flicker parameters. These and the current input are used to detect local motions. Then, the missing parameters are interpolated and the known parameters are smoothed. Bilinear interpolation is used for upsampling the estimated parameters to full spatial resolution. This avoids the introduction of blocking artifacts in the correction stage that follows.
+---PAGE_BREAK---
+
+FIGURE 13. Global structure of intensity-flicker correction system
+
+To avoid possible drift due to error accumulation (resulting from the need to approximate the expectation operator and from model mismatches), we bias the corrected intensities towards the contents of the current frame. Equation (12) is therefore replaced by:
+
+$$ (29) \qquad \hat{I}(x, y, t) = \kappa \cdot \left( \alpha(x, y, t) \cdot Y(x, y, t) + b(x, y, t) \right) + (1-\kappa) \cdot Y(x, y, t), $$
+
+where $\kappa$ is the forgetting factor. A practical value for $\kappa$ is 0.85.
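A direct transcription of the blend in (29), with the corrected frame taken as given:

```python
import numpy as np

KAPPA = 0.85  # practical forgetting factor suggested in the text

def bias_toward_current(corrected, observed, kappa=KAPPA):
    """Eq. (29): blend the flicker-corrected frame with the observed frame
    to prevent drift from accumulated estimation errors."""
    return kappa * corrected + (1.0 - kappa) * observed

corrected = np.array([110.0, 170.0])   # output of the correction stage
observed = np.array([100.0, 150.0])
out = bias_toward_current(corrected, observed)
```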
+
+## 4.4. Experiments and results
+
+We applied the system of intensity-flicker correction both to sequences containing artificially added intensity flicker and to sequences with real (non-synthetic) intensity flicker. The first set of experiments takes place in a controlled environment and allows us to evaluate the correction system under extreme conditions. The second set of experiments verifies the practical effectiveness of our system and forms a verification of the underlying assumptions of our approach.
+
+If our algorithm functions well and the image content does not change significantly, then the equalized frame means and variances should have a low variance in a temporal sense. The converse need not be true, of course, but visual inspection helps us verify the results. We therefore propose to use the decrease in the variation of frame means and of frame variances to measure the effectiveness of our system.
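This measure is easy to compute; a minimal sketch (our own illustration):

```python
import numpy as np

def temporal_flicker_measures(frames):
    """Effectiveness measure from the text: the temporal variance of the
    per-frame means and of the per-frame variances. Lower values after
    correction indicate reduced flicker (subject to visual inspection)."""
    means = np.array([f.mean() for f in frames])
    variances = np.array([f.var() for f in frames])
    return means.var(), variances.var()

rng = np.random.default_rng(2)
base = rng.uniform(0, 255, size=(32, 32))
# Flickered clip: per-frame gain/offset changes; steady clip: none.
flickered = [g * base + o for g, o in zip([0.8, 1.2, 1.0, 0.9], [10, -5, 0, 20])]
steady = [base.copy() for _ in range(4)]
m_f, v_f = temporal_flicker_measures(flickered)
m_s, v_s = temporal_flicker_measures(steady)
```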
+
+### 4.4.1. *Experiment on artificial intensity flicker.*
+
+For our first experiment we used the Mobile sequence (40 frames), which contains moving objects and camera panning (0.8 pixels/frame). The sequence was degraded in a manner that simulates noise and intensity flicker as one can expect to find in a practical situation. First, we added film-grain noise to the original sequence. This noise was generated by the method mentioned in [16] with a variance of 33. Then we added artificial intensity flicker. The intensity-flicker parameters were created from second-order 2D polynomial surfaces. The coefficients for the surfaces were drawn from the normal distribution $N(0, 0.1)$ (from $N(1, 0.1)$ for the zeroth-order (DC) term) to generate the $\alpha(x, y, t)$, and from $N(0, 10)$ to generate the $\beta(x, y, t)$. Visually speaking, this leads to severe amounts of intensity flicker. Finally, we simulated thermal noise introduced by, for instance, a *telecine* or VCR. For this purpose we added zero-mean signal-independent white Gaussian noise with a variance of 20.
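The degradation can be reproduced along the following lines. The coefficient distributions come from the text, but the polynomial basis, its normalization, and the interpretation of the second parameter of $N(\cdot,\cdot)$ as a standard deviation are our assumptions.

```python
import numpy as np

def flicker_surface(h, w, rng, dc_mean, dc_std, coef_std):
    """Second-order 2-D polynomial surface with random coefficients, used
    to create an artificial alpha or beta field over an h-by-w frame."""
    y, x = np.mgrid[0:h, 0:w]
    x = x / (w - 1) - 0.5                      # normalize coordinates to [-0.5, 0.5]
    y = y / (h - 1) - 0.5
    basis = [x, y, x * y, x**2, y**2]
    surf = rng.normal(dc_mean, dc_std) * np.ones((h, w))   # zeroth-order (DC) term
    for b in basis:
        surf = surf + rng.normal(0.0, coef_std) * b
    return surf

rng = np.random.default_rng(3)
alpha = flicker_surface(72, 88, rng, dc_mean=1.0, dc_std=0.1, coef_std=0.1)
beta = flicker_surface(72, 88, rng, dc_mean=0.0, dc_std=10.0, coef_std=10.0)
```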
+---PAGE_BREAK---
+
+FIGURE 14. Top: frames 16, 17 and 18 of Mobile sequence with synthetic intensity-flicker. Bottom: corrected frames
+
+FIGURE 15. Means (left) and variances (right) of original Mobile sequence, Mobile sequence degraded with artificial intensity-flicker and noise, and corrected Mobile sequence.
+
+Global motion was compensated for during the sequence correction.
+
+Figure 14 shows some frames from the degraded and the corrected sequence. Figure 15 shows the frame means and variances of the original, degraded and corrected test sequence. From these graphs we note that the variation in mean and in variance has been significantly reduced. Visual inspection confirms that the amount of intensity flicker has been reduced significantly.
+
+### 4.4.2. *Experiment on real intensity flicker.*
+For our second experiment we used a sequence, which we call *Tunnel*, that originates from the film archives of INA. Tunnel is 226 frames long and shows a man entering the scene through a tunnel. There is some camera unsteadiness during the first 120 frames; then the camera pans to the right and up. There is film-grain noise and considerable intensity flicker in this sequence. We used the method described in [11] to estimate the total noise variance, which was 8.9. Global motion compensation was applied during the correction. Figure 16 shows three frames from this sequence, original and corrected. Figure 17 shows that the fluctuation in frame means and variances
+---PAGE_BREAK---
+
+FIGURE 16. Top: frames 13, 14 and 15 of the Tunnel sequence. Bottom: corrected frames
+
+FIGURE 17. Means (left) and variances (right) of degraded and corrected Tunnel sequence.
+
+have been significantly reduced. Visual inspection shows that the intensity flicker has been reduced significantly without introducing visible new artifacts.
+
+## 4.5. Conclusions
+
+We introduced a system for correcting intensity flicker that performs well on artificially and naturally degraded sequences. In broadcasting and film restoration environments, real-time implementation of our system is required. From an implementation point of view, the required computations are simple and can easily be performed in hardware.
+
+## 5. Objective evaluation of the improvement in quality of restored image sequences
+
+Once an integrated image restoration system that is capable of simultaneously correcting multiple artifacts has been designed, such as the one in AURORA, one wants to know how effective
+---PAGE_BREAK---
+
+FIGURE 18. Overview of the OMIQ system.
+
+the system is, how well it does its job. Because the goal of restoration is to improve the visual (and audio) quality of degraded film sequences, the appropriate evaluation method is by subjective evaluation. However, subjective evaluation is time consuming, expensive, and does not allow large amounts of information to be evaluated due to practical constraints. This means that some objective measure of quality must be defined.
+
+In the previous sections we proposed three algorithms for image restoration and we used three different measures of performance. Though these measures give good objective indications of the performance of the individual restoration processes, they do not directly give a measure of quality of the overall result. In fact, some of the measures used require information that is not available in a practical situation, such as the original noise-free images and the real (not estimated) blotch masks.
+
+In the next section, Section 5.1, we propose an objective measure for the improvement in image quality based on coding efficiency. Then in Section 5.2 we apply this measure to the experimental results from the previous sections. We conclude with a discussion in Section 5.3.
+
+## 5.1. An Objective Measure of Improvement in image Quality (OMIQ)
+
+It should be noted that, as no original unimpaired reference sequences are available, any measure for the quality of a restored sequence can be at most a relative measure. In order to come to an objective relative measure we make the following two assumptions:
+
+1. Image restoration improves the objective quality by removing artifacts,
+
+2. Removing artifacts increases the coding efficiency.
+
+The second assumption is likely to be true for artifacts such as noise, blotches and intensity flicker, because removing these reduces the magnitude of the prediction errors in both a temporal and a spatial sense. Having smaller prediction errors means that fewer bits are required to obtain the same quality and, equivalently, that a higher quality can be obtained at the same bit rate. In [5] there is evidence that image stabilization also increases the coding efficiency. Note that there are some artifacts for which the second assumption is not true; e.g., deblurred images require more bits for coding than the out-of-focus originals. For that reason we exclude unsharpness artifacts from the discussion.
+
+We now define a measure of quality improvement $\Delta Q$ as the difference between the coding efficiency of the restored image sequence and the coding efficiency of the impaired sequence:
+
+$$ \Delta Q = E(\text{corrected}) - E(\text{impaired}). $$
+---PAGE_BREAK---
+
+The unit of $\Delta Q$ can be either *bits* or *dB*. Figure 18 shows the scheme that allows us to measure coding efficiency when dB is taken as the unit of efficiency. First, the restored image sequence is encoded at a constant bitrate. Then $E(\text{corrected})$ is given by the PSNR computed between the input and decoded output. Next, the impaired image sequence is encoded at the same bitrate and $E(\text{impaired})$ is given by the PSNR computed between the input and decoded output. $\Delta Q$ can now be computed; its interpretation, in a digital broadcasting or storage environment, is that of a lower bound on the improvement in image quality as observed by the (home) viewer. It is a lower bound because, according to assumption 1, the quality of the corrected sequence is higher than that of the impaired original, but we do not know how much higher. It measures an *improvement* because, according to assumption 2, the coding error is smaller and thus the visual quality of the decoded sequence is higher.
+
+When we use *bit* as the unit of coding efficiency, we set the coder's bitrate so that the PSNR between coded/decoded corrected sequences equals the PSNR between coded/decoded impaired sequences. $E(\text{corrected})$ and $E(\text{impaired})$ are given by the sizes in bits of the compressed sequence. The interpretation of $\Delta Q$, which is negative in this case, is that of how many bits of irrelevant information were removed by the restoration process.
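With the two decoded sequences in hand, the dB variant of $\Delta Q$ reduces to a difference of PSNRs. The sketch below is our own illustration; the codec itself (MPEG2 in Section 5.2) is replaced here by synthetic coding errors.

```python
import numpy as np

def psnr(a, b, peak=255.0):
    """PSNR in dB between a coder's input and its decoded output."""
    mse = np.mean((a - b) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak**2 / mse)

def delta_q_db(restored, restored_decoded, impaired, impaired_decoded):
    """OMIQ in dB: coding efficiency is the input/decoded-output PSNR at a
    fixed bitrate; Delta-Q is the efficiency of the restored sequence minus
    that of the impaired one."""
    return psnr(restored, restored_decoded) - psnr(impaired, impaired_decoded)

rng = np.random.default_rng(4)
clean = rng.uniform(0, 255, size=(16, 16))
# Stand-in for codec outputs: restored material codes with a smaller error.
restored_dec = clean + rng.normal(0, 2.0, clean.shape)
impaired = clean + rng.normal(0, 12.0, clean.shape)      # noisy original
impaired_dec = impaired + rng.normal(0, 6.0, impaired.shape)
dq = delta_q_db(clean, restored_dec, impaired, impaired_dec)
```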
+
+## 5.2. *Objective evaluation of the experimental results*
+
+We applied OMIQ to the impaired and corrected sequences from the previous sections. We used the standard TM5 MPEG2 encoder. Initially we set the bitrate to 5 Mb/s for the Plane, Mobile and Tunnel sequences, which all have a frame size of 720×576 pixels. Because the frame size of the Western sequence is smaller (256×256 pixels) we selected a lower bitrate for this sequence, namely 0.8 Mb/s. Figure 19 shows the PSNR between the input sequences and the coded/decoded results. From these graphs it is clear that the coding errors for corrected sequences are smaller than those for the original impaired sequences. This is in accordance with assumption 2 in the previous section.
+
+Table 2 shows the mean of the PSNR of the coded sequences and the objective improvement in image quality in dB. These numbers confirm that image restoration does indeed lead to a higher quality of the coded sequence at identical bitrates for the artifacts under consideration. The gains achieved by noise reduction are especially noteworthy. Table 2 also gives the objective improvement in image quality in bits. These numbers were obtained by coding the impaired sequences at higher bit rates than before such that the PSNR of coded/decoded impaired sequences equals the PSNR of coded/decoded corrected sequences (at 5 Mb/s and 0.8 Mb/s respectively). We see that applying restoration can lead to great savings, ranging from 30 to 270 percent in bits, whilst maintaining the same coding quality.
+
+## 5.3. *Discussion*
+
+A question is how well this objective measure corresponds to subjective evaluation. We stated that OMIQ gives a lower bound on the improvement in image quality as observed by the (home) viewer. This, however, does not mean that the viewer experiences this improvement in quality as such. And, in fact, he or she does not. The reason is that it is difficult for an observer to grade the improvement in image quality in terms of dB or bits. It is natural to say, for instance, that removing intensity flicker, noise and blotches has increased the visual quality, but it is difficult to assign a figure of merit to this increase.
+
+The conclusion, therefore, must be that OMIQ indicates what savings can be achieved in terms of bandwidth and storage capacity by applying image restoration prior to digital broadcast and storage. One can assume (for the artifacts under consideration) that, when taking advantage of these savings, the visual quality of coded/decoded corrected sequences is at least equal to, and probably better than, that of coded/decoded impaired sequences. The achievable savings are a measure of the overall performance of the restoration system.
+---PAGE_BREAK---
+
+FIGURE 19. PSNR of sequences after MPEG2 encoding. (a) Impaired (noise) and corrected Plane sequence. (b) Impaired (blotches) and corrected Western sequence. (c) Impaired (intensity-flicker) and corrected Mobile sequence. (d) Impaired (intensity-flicker) and corrected Tunnel sequence.
+
+| SEQUENCE | IMPAIRMENT | MEAN PSNR (dB) IMPAIRED | MEAN PSNR (dB) CORRECTED | ΔQ (dB) | ΔQ (Mbit) | % |
+|---|---|---|---|---|---|---|
+| Plane | Noise | 25.2 | 30.4 | 5.2 | -13.5 | 270 |
+| Western | Blotches | 34.6 | 36.1 | 1.5 | -0.35 | 44 |
+| Mobile | Flicker | 28.7 | 29.9 | 1.2 | -2.7 | 54 |
+| Tunnel | Flicker | 36.4 | 36.9 | 0.5 | -1.5 | 30 |
+
+Table 2. Sequence mean in PSNR between input and coded/decoded output at 5 Mbit/s. Also given is the improvement in image quality in dB and in bits. The last column gives the relative increase in the number of bits. See text for explanation.
+
+## 6. Discussion
+
+Applying image restoration prior to storing or broadcasting an image sequence in a compressed format can lead to significant savings in the number of bits required for coding.
+---PAGE_BREAK---
+
+This is important to note in an era in which broadcasters are transforming their broadcasts from analog to digital. Due to compression and digital broadcasting, the number of channels available to the home viewer will increase dramatically. However, these channels require programming. Nowadays, production costs for new programming are tremendous. An alternative to making new programming is recycling the many films, soaps and quiz shows that have been made over the last 50 or 60 years. This alternative can only be viable if the visual and audio quality meets the standards expected by the modern viewer and if the costs for broadcasting old films do not exceed the costs for broadcasting contemporary films. We suggest that image restoration can fulfill both of these requirements.
+
+The Plane sequence was made available by courtesy of the BBC. The Western sequence was made available by courtesy of Dr. A.C. Kokaram of the University of Cambridge. The Tunnel sequence was made available by courtesy of INA.
+
+## References
+
+[1] M. Antonini, T. Gaidon, P. Mathieu, and M. Barlaud. Wavelet transforms and image coding. In M. Barlaud, editor, *Wavelets in Image Communication*, volume 5, pages 66–78. Elsevier, 1994.
+
+[2] G. R. Arce. Multistage order statistic filters for image sequence processing. *IEEE Trans. on Signal Processing*, 39(5):1146–1163, 1991.
+
+[3] F.C. Billingsley. Noise considerations in digital image processing hardware. In T.S. Huang, editor, *Topics in Applied Physics*, volume 6. Springer Verlag, Berlin, 1975.
+
+[4] J. Canny. A computational approach to edge detection. *IEEE PAMI*, 8(6), 1986.
+
+[5] A. T. Erdem and C. Eroglu. The effect of image stabilization on the performance of the MPEG-2 video coding algorithm. In *Proc. of VCIP-98*, volume 1, pages 272–277, 1998.
+
+[6] S. Karla, M.N. Chong, and D. Krishnan. A new autoregressive (AR) model-based algorithm for motion picture restoration. In *Proc. ICASSP97*, volume 4, pages 2557–2560, Munich, 1997. ICASSP97.
+
+[7] R.P. Kleihorst. *Noise Filtering of Image Sequences*. PhD thesis, Delft University of Technology, October 1994.
+
+[8] A.C. Kokaram, R.D. Morris, W.J. Fitzgerald, and P.J.W. Rayner. Detection of missing data in image sequences. *IEEE Trans. on Image Processing*, 4(11):1496–1508, 1995.
+
+[9] A.C. Kokaram, R.D. Morris, W.J. Fitzgerald, and P.J.W. Rayner. Interpolation of missing data in image sequences. *IEEE Trans. on Image Processing*, 4(11):1509–1519, 1995.
+
+[10] M. Lang, H. Guo, J.E. Odegard, C.S. Burrus, and R.O. Wells. Nonlinear processing of a shift-invariant DWT for noise reduction. volume 2491. Orlando, Florida, 1995.
+
+[11] J.B. Martens. Adaptive contrast enhancement through residue-image processing. *Signal Processing*, (44):1–18, 1995.
+
+[12] J.H. McClellan. The design of two-dimensional filters by transformations. In *Proc. 7th Annual Princeton conference of Sciences and Systems*, pages 247–251, 1973.
+
+[13] H. Muller-Seelich, W. Plaschzug, P. Schallauer, S. Potzman, and W. Haas. Digital restoration of 35mm film. In *Proc. of ECMAST 96*, volume 1, pages 255–265, Louvain-la-Neuve, Belgium, 1996. ECMAST96.
+
+[14] M.J. Nadenau and S.K. Mitra. Blotch and scratch detection in image sequences based on rank ordered differences. volume 5 of *Time-Varying Image Processing and Moving Object Recognition*. Elsevier, 1997.
+
+[15] M.K. Özkan, A.T. Erdem, M.I. Sezan, and A.M. Tekalp. Efficient multiframe wiener restoration of blurred and noisy image sequences. *IEEE Trans. on Image Processing*, 1(4):453–476, 1992.
+
+[16] M.K. Özkan, M.I. Sezan, and M. Tekalp. Adaptive motion compensated filtering of noisy image sequences. *IEEE Trans. on Circuits and Systems for Video Technology*, 3(4), 1993.
+
+[17] W.H. Press, S.A. Teukolsky, W.T. Vetterling, and B.P. Flannery. *Numerical Recipes in C*. Cambridge University Press, 2nd edition, 1992.
+
+[18] P. Richardson and D. Suter. Restoration of historic film for digital compression: A case study. In *Proc. of ICIP-95*, volume 2, pages 49–52. ICIP95, 1995.
+
+[19] E.P. Simoncelli and E.H. Adelson. Noise removal via Bayesian wavelet coring. In *Proc. of ICIP-96*, volume 1, pages 379–382. ICIP96, 1996.
+
+[20] E.P. Simoncelli, W.T. Freeman, E.H. Adelson, and D.J. Heeger. Shiftable multiscale transforms. *IEEE Trans. on Information Theory*, 38(2):587–607, 1992.
+
+[21] A.M. Tekalp. *Digital Video Processing*. Prentice Hall, 1995.
+---PAGE_BREAK---
+
+[22] N. Weyrich and G. T. Warhola. Wavelet shrinkage and generalized cross validation for image denoising. *IEEE Trans. on Image Processing*, 7(1):82–90, 1998.
+---PAGE_BREAK---
+
diff --git a/samples/texts_merged/4085474.md b/samples/texts_merged/4085474.md
new file mode 100644
index 0000000000000000000000000000000000000000..7f505821776f4a2669a40f098191a995a58fc02f
--- /dev/null
+++ b/samples/texts_merged/4085474.md
@@ -0,0 +1,2512 @@
+
+---PAGE_BREAK---
+
+NEAR EQUILIBRIUM FLUCTUATIONS FOR SUPERMARKET MODELS
+WITH GROWING CHOICES
+
+SHANKAR BHAMIDI, AMARJIT BUDHIRAJA, AND MIHEER DEWASKAR
+
+**ABSTRACT.** We consider the supermarket model in the usual Markovian setting where jobs arrive at rate $n\lambda_n$ for some $\lambda_n > 0$, with $n$ parallel servers each processing jobs in its queue at rate 1. An arriving job joins the shortest among $d_n \le n$ randomly selected service queues. We show that when $d_n \to \infty$ and $\lambda_n \to \lambda \in (0, \infty)$, under natural conditions on the initial queues, the state occupancy process converges in probability, in a suitable path space, to the unique solution of an infinite system of constrained ordinary differential equations parametrized by $\lambda$. Our main interest is in the study of fluctuations of the state process about its near equilibrium state in the critical regime, namely when $\lambda_n \to 1$. Previous papers e.g. [25] have considered the regime $\frac{d_n}{\sqrt{n}\log n} \to \infty$ while the objective of the current work is to develop diffusion approximations for the state occupancy process that allow for all possible rates of growth of $d_n$. In particular we consider the three canonical regimes (a) $d_n/\sqrt{n} \to 0$; (b) $d_n/\sqrt{n} \to c \in (0, \infty)$ and, (c) $d_n/\sqrt{n} \to \infty$. In all three regimes we show, by establishing suitable functional limit theorems, that (under conditions on $\lambda_n$) fluctuations of the state process about its near equilibrium are of order $n^{-1/2}$ and are governed asymptotically by a one dimensional Brownian motion. The forms of the limit processes in the three regimes are quite different; in the first case we get a linear diffusion; in the second case we get a diffusion with an exponential drift; and in the third case we obtain a reflected diffusion in a half space. In the special case $d_n/(\sqrt{n}\log n) \to \infty$ our work gives alternative proofs for the universality results established in [25].
+
+# 1. INTRODUCTION
+
+In this work we study the asymptotic behavior of a family of randomized load balancing schemes for many server systems. Consider a processing system with $n$ parallel queues in which each queue's jobs are processed by the associated server at rate 1. Jobs arrive at rate $n\lambda_n$ and join the shortest queue amongst $d_n$ randomly selected queues (without replacement), with $d_n \in [n] := \{1, \dots, n\}$. The interarrival times and service times are mutually independent exponential random variables. This queuing system with the above described 'join-the-shortest-queue amongst chosen queues' discipline is often denoted as $JSQ(d_n)$ and frequently referred to as the supermarket model (cf. [12,17–19,21,25] and references therein). Note that when $d_n = n$ the above description corresponds to a policy where an incoming job joins the shortest of all queues in the system (see e.g. [8]). The case $d_n = 1$ is the other extreme, corresponding to incoming jobs joining a randomly chosen queue, in which case the system is equivalent to one with $n$ independent $M/M/1$ queues with arrival rate $\lambda_n$ and service rate 1. The case $d_n = d$, where $d > 1$ is a fixed positive integer, is sometimes also referred to as the power-of-$d$ scheme. The analysis of $JSQ(d_n)$ schemes has been a focus of much recent research motivated by problems from large scale service centers, cloud computing platforms, and data storage and retrieval systems (see, e.g., [0, 2, 12, 23, 16]). The influential works of Mitzenmacher [23, 24] and Vvedenskaya et al. [29] showed by considering a fluid scaling
+
+DEPARTMENT OF STATISTICS AND OPERATIONS RESEARCH, 304 HANES HALL, UNIVERSITY OF NORTH CAROLINA, CHAPEL HILL, NC 27599
+
+E-mail addresses: bhamidi@email.unc.edu, budhiraja@email.unc.edu, miheer@live.unc.edu.
+
+2010 Mathematics Subject Classification. Primary: 90B15, 60F17, 90B22, 60C05.
+
+Key words and phrases. power of choice, join-the-shortest-queue, fluid limits, heavy traffic, Halfin-Whitt, load balancing, diffusion approximations, Skorohod problem, reflected diffusions, functional limit theorems.
+---PAGE_BREAK---
+
+that increasing $d$ from 1 to 2 leads to a significant improvement in performance in terms of steady-state queue length distributions, in that the tails of the asymptotic steady-state distributions decay exponentially when $d=1$ and super-exponentially when $d=2$. Limit theorems under a diffusion scaling for the $JSQ(d)$ system, with a fixed $d$, can be found in [5, 7]. Although $JSQ(d)$ for a fixed $d \ge 2$ leads to significant improvements over $JSQ(1)$, as observed in [10, 11], no fixed value of $d$ provides the optimal waiting time properties of the join-the-shortest-queue system (i.e. $JSQ(n)$). See the survey [28] for an overview of the progress in this general area. This motivates the study of the asymptotic behavior of a $JSQ(d)$ system in which the number of choices $d$ increases with the system size $n$. Such an asymptotic study is the goal of this work.
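To fix ideas, here is a minimal simulation sketch of the $JSQ(d)$ dynamics described above (our own illustration, not taken from the cited works), together with the occupancy fractions used to describe the system state:

```python
import random

def simulate_jsq(n, d, lam, horizon, seed=0):
    """Markovian JSQ(d): jobs arrive at total rate n*lam, each joining the
    shortest of d queues sampled without replacement; every nonempty queue
    serves at rate 1. Starts empty; returns queue lengths at `horizon`."""
    rng = random.Random(seed)
    queues = [0] * n
    t = 0.0
    while True:
        busy = sum(1 for q in queues if q > 0)
        rate = n * lam + busy                 # total event rate (uniformization-free)
        t += rng.expovariate(rate)
        if t > horizon:
            break
        if rng.random() < n * lam / rate:     # arrival event
            chosen = rng.sample(range(n), d)  # d queues without replacement
            target = min(chosen, key=lambda i: queues[i])
            queues[target] += 1
        else:                                 # departure from a random busy queue
            i = rng.choice([i for i, q in enumerate(queues) if q > 0])
            queues[i] -= 1
    return queues

q1 = simulate_jsq(n=200, d=1, lam=0.7, horizon=20.0, seed=1)
qd = simulate_jsq(n=200, d=10, lam=0.7, horizon=20.0, seed=1)
# Occupancy fraction: proportion of queues holding at least i jobs.
G = lambda queues, i: sum(q >= i for q in queues) / len(queues)
```

Comparing `G(q1, i)` with `G(qd, i)` for moderate `i` illustrates the power-of-choice effect: sampling more queues sharply thins the tail of the queue-length distribution.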
+
+The paper [25] studied the law of large numbers (LLN) behavior of a $JSQ(d_n)$ system, under a standard scaling, when $d_n \to \infty$. The precise result of [25] is as follows. For $i \in \mathbb{N}_0 := \{0, 1, 2, ...\}$ and $t \in [0, \infty)$, let $G_{n,i}(t)$ denote the fraction of queues with at least $i$ customers at time $t$ in the $n$-th system. Note that $G_{n,0}(t) = 1$ for all $t \ge 0$. We will call $\mathbf{G}_n(t) := \{G_{n,i}(t) : i > 0\}$ the state occupancy process. This process has sample paths in the space of summable nonnegative sequences. More precisely, for $p \ge 1$, let $\ell_p$ be the space of real sequences $\mathbf{x} := (x_1, x_2, ...)$ such that $\|\mathbf{x}\|_p = (\sum_{i=1}^\infty |x_i|^p)^{1/p} < \infty$. Let
+
+$$ \ell_1^\downarrow := \{\mathbf{x} \in \ell_1 : x_i \ge x_{i+1} \text{ and } x_i \in [0, 1] \text{ for all } i \in \mathbb{N}\} \quad (1.1) $$
+
+be the space of non-increasing sequences in $\ell_1$ with values in $[0, 1]$, equipped with the topology generated by $\|\cdot\|_1$. Note that $\ell_1^\downarrow$ is a closed subset of $\ell_1$ and hence is a Polish space. Then, whenever $\|\mathbf{G}_n(0)\|_1 < \infty$ a.s., it can be shown that $\{\mathbf{G}_n(t) : t \ge 0\}$ is a stochastic process with sample paths in $\mathcal{D}([0, \infty) : \ell_1^\downarrow)$ (the space of right continuous functions with left limits from $[0, \infty)$ to $\ell_1^\downarrow$ equipped with the usual Skorohod topology); see Section 3. The paper [25] shows the following two facts under the assumption that $\mathbf{G}_n(0)$ converges in probability to some $\mathbf{r} \in \ell_1^\downarrow$:
+
+(a) When $d_n = n$ and $\lambda_n \to \lambda \in (0, \infty)$, $\mathbf{G}_n$ is a tight sequence in $\mathcal{D}([0, \infty) : \ell_1^\downarrow)$ and every weak limit point satisfies a certain set of “fluid limit equations” (see [25, Theorem 5], and equations (2.4)-(2.5) in the current work);
+
+(b) When $d_n$ is an arbitrary sequence growing to $\infty$ and $\lambda_n \to \lambda \in (0, 1)$, then the statements in (a) hold once more for $\mathbf{G}_n$.
+
+The current work begins by revisiting the above LLN results from [25]. In Theorem 2.1 of this work, we show that, when $\mathbf{G}_n(0) \xrightarrow{P} \mathbf{r}$, for arbitrary sequences $d_n \to \infty$ and $\lambda_n \to \lambda \in (0, \infty)$, $\mathbf{G}_n$ converges in probability in $\mathcal{D}([0, \infty) : \ell_1^\downarrow)$ to a continuous trajectory $g$ in $\ell_1^\downarrow$ that is characterized as the unique solution of an infinite system of constrained ordinary differential equations (ODE) (see (2.2) in Proposition 2.1). Using standard properties of the Skorohod map we observe in Remark 2.3 that a continuous trajectory in $\ell_1^\downarrow$ solves the fluid limit equations of [25] if and only if it solves (2.2). This together with Proposition 2.1 proves that the fluid limit equations in [25] in fact have a unique solution. In this manner we complete and strengthen the result from [25]. Our proof of the LLN result is quite different from the arguments in [25]. The latter are based on sophisticated ideas of separation of time scales and weak convergence of measure valued processes from [14] to handle the convergence for $d_n = n$, and certain coupling techniques to treat the general case when $d_n < n$ and $d_n \to \infty$. In contrast, our approach is more direct and uses martingale estimates and well known characterization properties of solutions of Skorohod problems (see e.g. proof of Lemma 4.7).
+
Our main goal in this work is to study diffusion approximations for $\mathbf{G}_n$ in the heavy traffic regime, namely when $\lambda_n \to 1$. In the case when $d_n = n$ ($JSQ(n)$ system), this problem has been studied in [7]. Their basic result is as follows. Suppose $d_n = n$ and $\sqrt{n}(1 - \lambda_n) \to \beta > 0$. Consider the unit vector $\mathbf{e}_1 = (1, 0, ...)$ in $\ell_2$. Then under conditions on $\mathbf{G}_n(0)$, $\mathbf{Y}_n(\cdot) = \sqrt{n}(\mathbf{G}_n(\cdot) - \mathbf{e}_1)$ converges in distribution in $\mathcal{D}([0, \infty) : \ell_2)$ to a continuous stochastic process $\mathbf{Y} = (Y_1, Y_2, ...)$, described in terms
+of a one dimensional Brownian motion, for which $Y_i = 0$ for $i > r$ for some $r \in \mathbb{N}$ (which depends on the conditions assumed on $G_n(0)$). Specifically, when $r=2$, the pair $Y_1, Y_2$ is given as a two dimensional diffusion in the half space $(-\infty, 0] \times \mathbb{R}$ with oblique reflection in the direction $(-1, 1)'$ at the boundary $\{0\} \times \mathbb{R}$. (For the form of the limit in the general case see Corollary 2.6). In [25] this result is extended to the case where $d_n < n$ and $\frac{d_n}{\sqrt{n} \log n} \to \infty$. Under the same assumptions on the initial condition as in [7], it is shown in [25] that $Y_n$ converges to the same limit process as for the case $d_n = n$. The proof, as for the LLN result, proceeded by constructing a suitable coupling between a JSQ($d_n$) and JSQ($n$) system. The paper [25] also argued that when $\frac{d_n}{\sqrt{n} \log n} \to 0$, the process $Y_n$ cannot be tight and thus in this regime the above diffusion approximation cannot hold.
+
Our objective in this work is to develop diffusion approximations for $G_n$ in the critical regime (i.e. when $\lambda_n \to 1$ in a suitable manner) that allow for possibly a slower growth of $d_n$ than that permitted by the results in [25]. In fact the results we establish will allow for $d_n \to \infty$ in an arbitrary manner and will recover the results of [25] in the special case $\frac{d_n}{\sqrt{n} \log n} \to \infty$ (with a different proof). In order to motivate the type of limit theorems we seek, we begin by observing that the centering $e_1$ used in the definition of $Y_n$ is a stationary point of the fluid limit given in (2.2) with $\lambda = 1$ and thus the results of [7] and [25] give information on fluctuations of the state process $G_n$ about this stationary point. However $e_1$ is not the only stationary point of (2.2) (when $\lambda = 1$) and in fact this ODE has uncountably many fixed points with a typical such point given as $\mathbf{f}_k^\gamma = \sum_{j=1}^k e_j + \gamma e_{k+1}$, where $e_j$ is the $j$-th unit vector in $\ell_2$ (with 1 at the $j$-th coordinate and zeroes elsewhere), $k \in \mathbb{N}$ and $\gamma \in [0, 1]$. All of these stationary points arise in a natural fashion. Indeed, it turns out that the evolution of the state process $G_n$ can be described via the equation (see Remark 3.1)
+
+$$ G_n(t) = G_n(0) + \int_0^t [\boldsymbol{a}_n(G_n(s)) - \boldsymbol{b}(G_n(s))]ds + M_n(t), $$
+
where $M_n$ is an (infinite dimensional) martingale converging to 0 in probability (see Lemma 4.1) and $\boldsymbol{a}_n, \boldsymbol{b}$ are certain maps from $\ell_1^\downarrow$ to $\ell_1$ (see Remark 3.1 for details). Thus for large $n$, trajectories of $G_n$ will be close to solutions of the infinite dimensional ODE
+
+$$ \dot{g}_n = a_n(g_n) - b(g_n). $$
+
This equation has a unique stationary point $\mu_n$ which is introduced in Definition 2. The fixed point $\mu_n$ corresponds to the point in the state space $\ell_1^\downarrow$ at which the inflow rate equals the outflow rate in the $n$-th system and thus it is of interest to explore system behavior in the neighborhood of this point. Since $G_n$ is approximated by $g_n$ (over any compact time interval), one can loosely interpret $\mu_n$ as a *near fixed point* of the state process $G_n$. Furthermore, it can be shown (see Remark 2.4(iv)) that, if $d_n \to \infty$ and $\lambda_n \to 1$ in a suitable manner, $\mu_n$ can converge to any specified fixed point $\mathbf{f}_k^\gamma$ of (2.2) and thus every fixed point of (2.2) arises from $\mu_n$ in a suitable asymptotic regime. In order to explore fluctuations of $G_n$ close to different fixed points of (2.2) it is then natural to study the asymptotic behavior of
+
+$$ Z_n(t) \doteq \sqrt{n}(G_n(t) - \mu_n), \quad t \ge 0. \tag{1.2} $$
+
We note that in the regime considered in [25] where $\frac{d_n}{\sqrt{n} \log n} \to \infty$ and $\sqrt{n}(1-\lambda_n) \to \alpha > 0$, $\sqrt{n}(\mathbf{e}_1 - \boldsymbol{\mu}_n) \to \alpha \mathbf{e}_1$ and so in this case the asymptotic behavior of $Z_n$ can be read off from that of $Y_n$ (see Corollary 2.6 and Remark 2.7(v)). However in general $\sqrt{n}(\mathbf{e}_1 - \boldsymbol{\mu}_n)$ (and more generally, $\sqrt{n}(\mathbf{f}_k^\gamma - \boldsymbol{\mu}_n)$) may not be bounded and so the asymptotic behavior of $Z_n$ and $Y_n$ may be very different.
+
+In this work we obtain limit theorems for $Z_n$ as $d_n \to \infty$ in an arbitrary fashion and $\lambda_n \to 1$ in a suitable manner. Specifically in Theorems 2.2, 2.3 and 2.4 we consider the three cases:
+
(a) $d_n/\sqrt{n} \to 0$; (b) $d_n/\sqrt{n} \to c \in (0, \infty)$; and (c) $d_n/\sqrt{n} \to \infty$, respectively. In all three regimes we consider initial conditions $G_n(0)$ such that for some $r \in \mathbb{N}$, $G_{n,m}(0) = \mu_{n,m} + o_p(n^{-1/2})$ for all $m > r$ and in each case (under conditions on $\lambda_n$) we obtain a limit process driven by a one
dimensional Brownian motion with continuous sample paths in $\ell_2$ which has all but finitely many coordinates 0. In particular, when $r=2$ in the second and the third case and $r=k+2$ for some $k \in \mathbb{N}$ in the first case (and $d_n$, $\lambda_n$ depend on $k$ in a suitable fashion), one can describe the limit through a two dimensional diffusion driven by a one dimensional Brownian motion. The form of this two dimensional process in the three regimes is quite different; in the first case we get a linear diffusion (i.e. the drift is of the form $b(y) = Ay$ for $y \in \mathbb{R}^2$ and some $2 \times 2$ matrix $A$); in the second case we get a diffusion with an exponential drift; and in the third case we obtain a reflected diffusion in the half space $(-\infty, \alpha] \times \mathbb{R}$ for some $\alpha \ge 0$.
+
+Although the limit processes in Theorems 2.2 and 2.3 are quite different from those obtained in [7] and [25], the limit in Theorem 2.4 has a similar form (in that it is a reflected diffusion in a half space) as in the above papers. However here as well there are some differences. In particular, depending on how $\lambda_n$ approaches 1, the reflection occurs at a different barrier $\alpha \in (0, \infty)$; in fact $\alpha = \infty$ is possible as well in which case there is no reflection. Furthermore, recall that $Z_n$ is defined by centering about $\mu_n$. In general $\sqrt{n}(\mu_n - e_1)$ will diverge and thus the process $Y_n$ considered in the above cited papers may not converge in this regime. However, as noted previously, when $d_n$ grows sufficiently fast, namely $\frac{d_n}{\sqrt{n}\log n} \to \infty$ the process $Y_n$ will indeed converge and in that case we recover the result in [25] (in fact a slight strengthening in that the drift parameter in Corollary 2.6 is allowed to be 0). In addition Theorem 2.4 also covers the case $\frac{d_n}{\sqrt{n}\log n} \to c \in (0, \infty)$ and situations where $\lambda_n = 1 + O(n^{-1/2})$ (see Remark 2.7 (iv)). In such settings, once more both $Z_n$ and $Y_n$ converge and the limit of the latter has the same form as in [7, 25].
+
As is observed in Remarks 2.5 and 2.7, under conditions of Theorem 2.3 or Theorem 2.4, $\mu_n$ must converge to the fixed point $e_1 = f_1^0$. In contrast, Theorem 2.2 allows for a range of asymptotic behavior for $\mu_n$. In particular, under the conditions of the theorem, with suitable $\lambda_n$, $d_n$, $\mu_n$ can converge to the fixed point $f_k^0$ for an arbitrary $k \in \mathbb{N}$ (see [4] for a similar observation). In such a setting the first $k-1$ coordinates of the limit process are essentially 0 (see Theorem 2.2 for a precise statement) and the $k$-th coordinate is the first one to exhibit stochastic variability. Thus a rather novel asymptotic behavior for the $JSQ(d_n)$ system emerges when $d_n$ approaches $\infty$ at significantly slower rates than those considered in [25] and $\lambda_n$ approaches 1 in a suitable manner (in relation to $d_n$).
+
+We now make some comments on the proofs of Theorems 2.2 - 2.4. The starting point is a convenient semimartingale representation for the centered state process $Z_n$ in (6.1). In the study of the behavior of the drift term in this decomposition, an important ingredient is an analysis of the asymptotic properties of the near fixed point $\mu_n$, and the asymptotic behavior of the function $\beta_n$ (see Definition 1) in $O(n^{-1/2})$ sized neighborhoods around the coordinates of $\mu_n$. This behavior, which is different in the three regimes considered above, determines the asymptotics of the drift $A_n(Z_n(s)) - b(Z_n(s))$. Properties of $\mu_n$ are also key in arguing that, in all three cases, under our conditions, $(Z_{n,r+1}, \dots)$ converges to 0 in probability in $D([0, \infty) : l_2)$ (see Lemma 6.4). The rest of the work is in characterizing the asymptotics of the finite dimensional process $(Z_{n,1}, \dots, Z_{n,r})$. For this study, the three regimes require different approaches. In particular, Theorem 2.2 hinges on a detailed understanding of the asymptotic behavior of a tridiagonal matrix function $A_n(s)$ (see e.g. Lemmas 7.4 and 7.5); Theorem 2.3 requires an analysis of a stochastic differential equation with an exponential drift term (in particular the drift does not satisfy the usual growth conditions); and Theorem 2.4 is based on a careful study of excursions of the prelimit processes above the limiting reflecting barrier and properties of Skorohod maps in order to characterize the reflection properties of the limit process.
+
+## 1.1. Organization of the paper.
Section 2 contains all our main results. The remaining sections, starting with Section 3, contain the proofs of the main results.
+
1.2. **Notation and setup.** For $m \ge 1$, let $[m] = \{1, 2, \dots, m\}$. We will denote finite-dimensional vectors in $\mathbb{R}^m$ as $\vec{x}, \vec{y}$, etc. and $\langle \vec{x}, \vec{y} \rangle$ will denote the standard inner-product. The standard basis vectors in $\mathbb{R}^m$ will be denoted by $\vec{e}_i$ for $i = 1, 2, \dots, m$. Also, $\|\vec{x}\| = \langle \vec{x}, \vec{x} \rangle^{1/2}$ will denote the usual Euclidean norm.
+
+We will often use bold symbols such as $\boldsymbol{x} := (x_1, x_2, \dots)$ to denote an infinite dimensional vector or function. For $p \in \{1, 2, \dots, \infty\}$, let $\|\cdot\|_p$ denote the usual p-th norm on the space of infinite sequences and $\ell_p := \{\boldsymbol{x} \in \mathbb{R}^\infty \mid \|\boldsymbol{x}\|_p < \infty\}$. Let $\ell_1^\downarrow$ be as in (1.1), which is a Polish space under $\|\cdot\|_1$. For $k \in \mathbb{N}$, let $\boldsymbol{f}_k = (1, 1, \dots, 1, 0, 0, \dots) \in \ell_1^\downarrow$ denote the vector with first $k$ indices equal to $1$, and $\boldsymbol{e}_k = (0, \dots, 0, 1, 0, \dots) \in \ell_1$ denote the vector with $1$ in the $k$th coordinate. Finally, for $\boldsymbol{z} = (z_1, z_2, \dots) \in \mathbb{R}^\infty$ and $r \in \mathbb{N}$, let $\boldsymbol{z}_{r+} := (z_{r+1}, z_{r+2}, \dots) \in \mathbb{R}^\infty$ denote the vector shifted by $r$ steps. Similar notation will be used for functions and processes with values in $\mathbb{R}^\infty$.
+
For a Polish space $\mathcal{S}$ and $T > 0$, denote by $\mathcal{C}([0,T] : \mathcal{S})$ (resp. $\mathcal{D}([0,T] : \mathcal{S})$) the space of continuous functions (resp. right continuous functions with left limits) from $[0,T]$ to $\mathcal{S}$, endowed with the uniform topology (resp. Skorohod topology). Spaces $\mathcal{C}([0,\infty) : \mathcal{S})$, $\mathcal{D}([0,\infty) : \mathcal{S})$ are defined similarly. For $f \in \mathcal{D}([0,T] : \mathbb{R})$ and $t \le T$, let $|f|_{*,t} := \sup_{s \in [0,t]} |f(s)|$. Similarly for $\boldsymbol{g} \in \mathcal{D}([0,T] : \ell_p)$, let $\|\boldsymbol{g}\|_{p,t} := \sup_{s \in [0,t]} \|\boldsymbol{g}(s)\|_p$.
+
+We will use $\mathbb{I}_{\{\text{cond}\}}$ to denote the indicator function that takes the value 1 if cond is true, otherwise it takes the value 0. We will denote by id the identity map, $id(t) = t$, on $[0,T]$ or $[0,\infty)$.
+
+We use **P** and **E** to denote the probability and expectation operators, respectively. For $x,y \in \mathbb{R}$, $x \wedge y$ denotes the minimum and $x \vee y$ the maximum of $x$ and $y$ respectively. For any $x \in \mathbb{R}$, $x^+ = x \vee 0$ and $x^- = (-x) \vee 0$. We use $\xrightarrow{P}$ and $\Rightarrow$ to denote convergence in probability and convergence in distribution respectively on an appropriate Polish space which will depend on the context. For a sequence of real valued random variables $(X_n, n \ge 1)$, we write $X_n = o_P(b_n)$ when $|X_n|/b_n \xrightarrow{P} 0$ as $n \to \infty$. For non-negative functions $f(\cdot), g(\cdot)$, we write $f(n) = O(g(n))$ when $f(n)/g(n)$ is uniformly bounded, and $f(n) = o(g(n))$ (or $f(n) \ll g(n)$) when $\lim_{n\to\infty} f(n)/g(n) = 0$. We write $f(n) \sim g(n)$ if $f(n)/g(n) \to 1$ as $n \to \infty$. We will use the notation $\lambda_n \nearrow 1$ to mean that $\lambda_n < 1$ for every $n$ and $\lambda_n \to 1$ as $n \to \infty$.
+
+## 2. MAIN RESULTS
+
+Recall the process $\mathbf{G}_n$ from Section 1. Our first result gives a law of large numbers (LLN) for the process $\mathbf{G}_n$ as $n \to \infty$. In order to state this result we begin by recalling the one dimensional Skorohod map (cf. [15, Section 3.6.C]) with a reflecting barrier at $\alpha \in \mathbb{R}$. For $\alpha \in \mathbb{R}$ and $f \in \mathcal{D}([0, \infty) : \mathbb{R})$ with $f(0) \le \alpha$, define $\Gamma_\alpha(f), \hat{\Gamma}_\alpha(f) \in \mathcal{D}([0, \infty) : \mathbb{R})$ as
+
+$$
+\Gamma_{\alpha}(f)(t) = f(t) - \sup_{s \in [0,t]} (f(s) - \alpha)^+, \quad \hat{\Gamma}_{\alpha}(f)(t) = \sup_{s \in [0,t]} (f(s) - \alpha)^+.
+\quad (2.1)
+$$
+
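On a discretized path, the pair $(\Gamma_\alpha, \hat{\Gamma}_\alpha)$ of (2.1) can be evaluated by tracking a running supremum. A minimal Python sketch (function and variable names are ours):

```python
def skorohod(f, alpha):
    """One-dimensional Skorohod map with reflecting barrier at alpha, applied
    to a sampled path f = [f(t_0), f(t_1), ...] with f(t_0) <= alpha.
    Returns (Gamma_alpha(f), hatGamma_alpha(f)) at the sample times."""
    gamma, hat = [], []
    running = 0.0  # sup_{s <= t} (f(s) - alpha)^+
    for x in f:
        running = max(running, x - alpha)
        hat.append(running)
        gamma.append(x - running)
    return gamma, hat
```

Note that $\Gamma_\alpha(f) \le \alpha$ everywhere, and $\hat{\Gamma}_\alpha(f)$ is nondecreasing and increases only when the constraint is active.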
The map $\Gamma_\alpha$ (and sometimes the pair $(\Gamma_\alpha, \hat{\Gamma}_\alpha)$) is referred to as the one-dimensional Skorohod map (with reflection at $\alpha$). The following well-posedness result, which is proved in Section 4, will be used to characterize the LLN limit of $\mathbf{G}_n$.
+
**Proposition 2.1.** Fix $\boldsymbol{r} \in \ell_1^\downarrow$. Then there is a unique $(\boldsymbol{g}, \boldsymbol{v}) \in \mathcal{C}([0, \infty) : \ell_1^\downarrow \times \ell_\infty)$ that solves the following system of equations
+
+$$
+\begin{align}
g_i(t) &= \Gamma_1\left(r_i - \int_0^{\cdot} (g_i(s) - g_{i+1}(s))ds + v_{i-1}(\cdot)\right)(t) && \forall i \ge 1, t \ge 0 \\
v_i(t) &= \hat{\Gamma}_1\left(r_i - \int_0^{\cdot} (g_i(s) - g_{i+1}(s))ds + v_{i-1}(\cdot)\right)(t) && \forall i \ge 1, v_0(t) = \lambda t, t \ge 0.
+\end{align}
+\quad (2.2)
+$$
+
**Remark 2.2.** Using the well known characterization of the one-dimensional Skorohod map, one can alternatively characterize $(\boldsymbol{g}, \boldsymbol{v})$ as the unique pair in $\mathcal{C}([0, \infty) : \ell_1^\downarrow \times \ell_\infty)$ such that $v_i$ is nondecreasing,
+
+$$
+\left.
+\begin{aligned}
+g_i(t) &= r_i - \int_0^t (g_i(s) - g_{i+1}(s))ds + v_{i-1}(t) - v_i(t) \\
+v_i(t) &\ge 0, g_i(t) \le 1, \quad \int_0^t (1-g_i(s))dv_i(s) = 0
+\end{aligned}
+\right\} \forall i \ge 1 \qquad (2.3)
+$$
+
+and $v_0(t) = \lambda t$, for all $t > 0$.
+
+We can now present the LLN result. The proof is given in Section 4.
+
**Theorem 2.1.** Let $\mathbf{r} \in \ell_1^\downarrow$. Suppose that $\mathbf{G}_n(0) \xrightarrow{P} \mathbf{r}$ in $\ell_1^\downarrow$, $\lambda_n \to \lambda$ and $d_n \to \infty$, as $n \to \infty$. Then $\mathbf{G}_n \to \mathbf{g}$ in probability in $\mathcal{D}([0, \infty) : \ell_1^\downarrow)$ as $n \to \infty$, where $(\mathbf{g}, \mathbf{v}) \in \mathcal{C}([0, \infty) : \ell_1^\downarrow \times \ell_\infty)$ is the unique solution of (2.2).
+
**Remark 2.3.** Note that Theorem 2.1 allows $d_n \to \infty$ in an arbitrary manner. In [25, Theorem 1] it is shown that, under the assumptions of Theorem 2.1, $\mathbf{G}_n$ is a tight sequence of $\mathcal{D}([0, \infty) : \ell_1^\downarrow)$ valued random variables and that every subsequential weak limit $\hat{\mathbf{g}}$ satisfies a system of equations given as
+
+$$ \hat{g}_i(t) = r_i - \int_0^t (\hat{g}_i(s) - \hat{g}_{i+1}(s))ds + \int_0^t p_{i-1}(\hat{g}(s))ds \quad \text{for } i \ge 1 \quad (2.4) $$
+
+where
+
+$$ p_j(\hat{\mathbf{g}}(s)) =
+\begin{cases}
+\lambda - (\lambda - 1 + \hat{\mathbf{g}}_{j+2}(s))^+ & \text{if } j = m(\hat{\mathbf{g}}(s)) - 1 \\
+(\lambda - 1 + \hat{\mathbf{g}}_{j+1}(s))^+ & \text{if } j = m(\hat{\mathbf{g}}(s)) > 0 \\
+\lambda & \text{if } j = m(\hat{\mathbf{g}}(s)) = 0 \\
+0 & \text{otherwise,}
+\end{cases}
+\quad (2.5) $$
+
and for $\boldsymbol{x} \in \ell_1^\downarrow$, $m(\boldsymbol{x}) = \inf\{i \mid x_{i+1} < 1\}$. (Note that $m(\mathbf{G}_n(t))$ is the minimum queue length at time $t$.) The uniqueness of the above system of equations was not shown in [25].
+
+From (2.2) and the definition in (2.1) it follows that each $v_i$ is absolutely continuous and, for a.e. $t$,
+
+$$ \frac{dv_i(t)}{dt} = \left( \frac{dv_{i-1}(t)}{dt} - g_i(t) + g_{i+1}(t) \right)^+ \mathbb{I}\{g_i(t)=1\} $$
+
+for any $i \ge 1$. From this we see that, for a.e. $t$,
+
+$$
+\frac{dv_i(t)}{dt} =
+\begin{cases}
+\lambda & \text{if } i=0 \\
+\displaystyle\frac{dv_{i-1}(t)}{dt} & \text{if } i < m(\mathbf{g}(t)) \text{ and } i \ge 1, \\
\displaystyle\left(\frac{dv_{i-1}(t)}{dt} - 1 + g_{i+1}(t)\right)^+ & \text{if } i = m(\mathbf{g}(t)) \text{ and } i \ge 1, \\
+0 & \text{if } i > m(\mathbf{g}(t)).
+\end{cases}
+\qquad (2.6)
+$$
+
and consequently $p_j(\boldsymbol{g}(s)) = \frac{dv_j(s)}{ds} - \frac{dv_{j+1}(s)}{ds}$ for a.e. $s$. Substituting this back in (2.3) shows that $\boldsymbol{g}$ solves the system of equations in (2.4). Conversely, for any solution $\hat{\boldsymbol{g}}$ of (2.4), defining $\hat{\boldsymbol{v}}$ by the right side of (2.6) with $\boldsymbol{g}$ replaced by $\hat{\boldsymbol{g}}$, we see that $(\hat{\boldsymbol{g}}, \hat{\boldsymbol{v}})$ solves (2.3). From the uniqueness result in Proposition 2.1 it then follows that in fact there is only one solution to the system of equations in (2.4) and this solution equals $\boldsymbol{g}$ given in (2.2).
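Combining the reflection rates (2.6) with the integral form (2.3) also yields a simple numerical scheme for the fluid limit. A minimal Euler sketch in Python, truncated to finitely many coordinates (all names are ours):

```python
def fluid_rhs(g, lam):
    """Right-hand side of the fluid ODE via (2.3)/(2.6), truncated to
    K = len(g) coordinates; g[i] is g_{i+1} in the paper's 1-based indexing."""
    K = len(g)
    m = 0
    while m < K and g[m] >= 1.0 - 1e-12:
        m += 1                          # m = m(g), the minimum queue length
    vdot = [0.0] * (K + 1)              # vdot[i] = dv_i/dt
    vdot[0] = lam
    for i in range(1, K + 1):
        if i < m:
            vdot[i] = vdot[i - 1]
        elif i == m:
            g_next = g[i] if i < K else 0.0     # g_{i+1}
            vdot[i] = max(vdot[i - 1] - 1.0 + g_next, 0.0)
        # vdot[i] = 0 for i > m
    dg = []
    for i in range(1, K + 1):
        gi1 = g[i] if i < K else 0.0
        dg.append(-(g[i - 1] - gi1) + vdot[i - 1] - vdot[i])
    return dg

def fluid_solve(g0, lam, dt, steps):
    """Explicit Euler integration, clipping each coordinate to [0, 1]."""
    g = list(g0)
    for _ in range(steps):
        rhs = fluid_rhs(g, lam)
        g = [min(max(x + dt * r, 0.0), 1.0) for x, r in zip(g, rhs)]
    return g
```

For $\lambda < 1$ the iterate approaches $(\lambda, 0, 0, \dots)$.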
+
+Consider now the time asymptotic behavior of $g$ given in (2.2). When $\lambda < 1$, $(\lambda, 0, 0, ... ) \in \ell_1$ is the unique fixed point of (2.2), as can be seen by setting the derivative of the right side of (2.4) to 0. In the critical case, i.e. when $\lambda = 1$, the situation is very different and in fact there are uncountably
many fixed points, given by $\{\boldsymbol{f} \in \ell_1^\downarrow \mid m(\boldsymbol{f}) > 0,\ f_{m(\boldsymbol{f})+2} = 0\}$, which once more is seen by checking that the derivative on the right side of (2.4) is 0 at exactly these points when $\lambda = 1$. In this work we are interested in the fluctuations of $\mathbf{G}_n$ in the critical case when the system starts suitably close to one of the fixed points of (2.2). Thus for the remainder of this section we will assume that $\lambda_n < 1$ for every $n$ and $\lambda_n \to 1$ as $n \to \infty$. In order to formulate precisely what we mean by 'suitably close to the fixed point' we need some definitions and notation. The functions $\beta_n$ in the next definition will play a central role.
+
+**Definition 1.** Define the function $\beta_n : [0, 1] \to [0, 1]$ by
+
+$$
+\beta_n(x) \doteq \prod_{i=0}^{d_n-1} \left( \frac{x - \frac{i}{n}}{1 - \frac{i}{n}} \right)^{+} \quad (2.7)
+$$
+
+The function $\beta_n(\cdot)$ arises when sampling $d_n$ random servers without replacement. Specifically, when $nx \in \mathbb{N}$, $\beta_n(x) = P(\mathbb{A}_{n,d_n} \subseteq [nx]) = \binom{nx}{d_n}/\binom{n}{d_n}$, where $\mathbb{A}_{n,d_n}$ is a randomly chosen subset (without replacement) from $[n]$ of size $d_n$. An alternative is to perform sampling with replacement, which corresponds to the simpler function $\gamma_n(x) \doteq x^{d_n}$ in place of $\beta_n$.
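The identity $\beta_n(x) = \binom{nx}{d_n}/\binom{n}{d_n}$ and the bound $\beta_n(x) \le \gamma_n(x) = x^{d_n}$ are easy to check numerically. A small Python sketch (names ours):

```python
from math import comb

def beta_n(x, n, d):
    """beta_n(x) from (2.7): for nx an integer, the probability that d servers
    sampled uniformly without replacement from [n] all fall in a fixed set
    of size n*x."""
    p = 1.0
    for i in range(d):
        p *= max((x - i / n) / (1 - i / n), 0.0)
    return p
```

For instance, with $n=100$, $d_n=5$ and $x=0.3$ the product equals $\binom{30}{5}/\binom{100}{5}$, and it is dominated by the with-replacement value $0.3^5$.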
+
+We now introduce the notion of a 'near fixed point' of $\mathbf{G}_n$.
+
**Definition 2.** For $n \in \mathbb{N}$, the **near fixed point** $\boldsymbol{\mu}_n$ of $\mathbf{G}_n$ is the vector in $\ell_1^\downarrow$ given as $\boldsymbol{\mu}_n = (\mu_{n,1}, \mu_{n,2}, \dots)$ where the $\mu_{n,i}$ are defined recursively by $\mu_{n,1} = \lambda_n$ and $\mu_{n,i+1} = \lambda_n \beta_n(\mu_{n,i})$ for $i \ge 1$.
+
Using $\beta_n(x) \le x^{d_n} \le x$ and $\lambda_n < 1$, it is easy to check that $\boldsymbol{\mu}_n \in \ell_1^\downarrow$. The reason $\boldsymbol{\mu}_n$ is referred to as a near fixed point of $\boldsymbol{G}_n$ is discussed in Remark 3.1. To study the fluctuations of the process around the near fixed point $\boldsymbol{\mu}_n$ we define the centered and scaled process, $\boldsymbol{Z}_n$ as in (1.2). We now present our three main results on fluctuations which correspond to the three cases $d_n/\sqrt{n} \to 0$, $d_n/\sqrt{n} \to c \in (0, \infty)$, and $d_n/\sqrt{n} \to \infty$ respectively.
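The recursion in Definition 2 is immediate to implement; since $\beta_n$ vanishes on $[0, (d_n-1)/n)$, the computed sequence hits 0 after finitely many steps. A minimal Python sketch with illustrative parameters (`near_fixed_point` is our name):

```python
def near_fixed_point(n, d, lam, kmax=50):
    """Near fixed point mu_n of Definition 2:
    mu_{n,1} = lambda_n and mu_{n,i+1} = lambda_n * beta_n(mu_{n,i})."""
    def beta(x):
        # beta_n from (2.7), sampling d of n servers without replacement
        p = 1.0
        for i in range(d):
            p *= max((x - i / n) / (1 - i / n), 0.0)
        return p
    mu = [lam]
    while mu[-1] > 0.0 and len(mu) < kmax:
        mu.append(lam * beta(mu[-1]))
    return mu
```

Since $\beta_n(x) \le x$, the coordinates are nonincreasing; e.g. with $n = 10^4$, $d_n = 100$, $\lambda_n = 0.95$ the mass concentrates in the first coordinate and the sequence collapses after two steps.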
+
+**Theorem 2.2.** Suppose that, as $n \to \infty$, $1 \ll d_n \ll \sqrt{n}$, $\lambda_n \nearrow 1$, and there is a $k \in \mathbb{N}$ so that $\mu_{n,k} \to 1$ and $\beta'_n(\mu_{n,k}) \to \alpha \in [0, \infty)$ as $n \to \infty$. Further suppose that $\{\|\mathbf{Z}_n(0)\|_1\}_{n \in \mathbb{N}}$ is tight and that $\mathbf{Z}_n(0) \xrightarrow{P} z$ in $\ell_2$, where $\mathbf{z}_{r+} = 0$ for some $r > k$. Then for any $T \in (0, \infty)$,
+
+$$
+\lim_{M \to \infty} \sup_n P (\|\mathbf{Z}_n\|_{2,T} > M) = 0. \tag{2.8}
+$$
+
+Furthermore, if $k > 1$, then $\sup_{t \in [\epsilon, T]} |Z_{n,i}(t)| \xrightarrow{P} 0$ as $n \to \infty$ for any $T < \infty$, $0 < \epsilon \le T$ and $i \in [k-1]$.
+
Consider the shifted process $\mathbf{Y}_n(t) \doteq (\sum_{i=1}^k Z_{n,i}(t), Z_{n,k+1}(t), Z_{n,k+2}(t), \ldots)$ and $\mathbf{y} \doteq (\sum_{i=1}^k z_i, z_{k+1}, z_{k+2}, \ldots)$. Then $\mathbf{Y}_n \Rightarrow \mathbf{Y}$ in $\mathcal{D}([0, \infty) : \ell_2)$, where $\mathbf{Y} \in \mathcal{C}([0, \infty) : \ell_2)$ is the unique pathwise solution to
+
+$$
+\begin{align*}
Y_1(t) &= y_1 - (\alpha + \mathbb{I}_{\{k=1\}}) \int_0^t Y_1(s)ds + \int_0^t Y_2(s)ds + \sqrt{2}B(t) \\
+Y_2(t) &= y_2 + \alpha \int_0^t Y_1(s)ds - \int_0^t Y_2(s)ds + \int_0^t Y_3(s)ds \\
+Y_i(t) &= y_i - \int_0^t Y_i(s)ds + \int_0^t Y_{i+1}(s)ds &\text{for } i \in \{3, \ldots, r-k+1\} \\
+Y_i(t) &= 0 &\text{for } i > r-k+1,
+\end{align*}
+$$
+
+and $B(\cdot)$ is a one dimensional standard Brownian motion.
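Since the Brownian term enters only the first coordinate, the limit system is straightforward to simulate. An Euler-Maruyama sketch of its first $m$ coordinates, with names and parameters of our own choosing:

```python
import math
import random

def simulate_limit_Y(y0, alpha, k, T, dt, seed=0):
    """Euler-Maruyama discretization of the limit diffusion in Theorem 2.2,
    truncated to m = len(y0) coordinates (Y_i = 0 beyond them).  Only the
    first coordinate receives the sqrt(2) B(t) noise."""
    rng = random.Random(seed)
    y = list(y0)
    m = len(y)
    for _ in range(int(T / dt)):
        nxt = [y[i + 1] if i + 1 < m else 0.0 for i in range(m)]
        dy = [0.0] * m
        dy[0] = -(alpha + (1 if k == 1 else 0)) * y[0] + nxt[0]
        if m > 1:
            dy[1] = alpha * y[0] - y[1] + nxt[1]
        for i in range(2, m):
            dy[i] = -y[i] + nxt[i]
        dw = rng.gauss(0.0, math.sqrt(dt))
        y = [y[i] + dy[i] * dt + (math.sqrt(2) * dw if i == 0 else 0.0)
             for i in range(m)]
    return y
```

With $\alpha = 0$ and $k = 1$ the first coordinate reduces to an Ornstein-Uhlenbeck process, as in Remark 2.4(iv)(b).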
+
+**Remark 2.4.**
+
+(i) Note that the convergence $\sup_{t \in [\epsilon, T]} |Z_{n,i}(t)| \xrightarrow{P} 0$ as $n \to \infty$ for any $0 < \epsilon \le T$ is equivalent to the statement that $Z_{n,i} \to 0$ in probability in $D((0,T] : \mathbb{R})$ where the latter space is equipped with the topology of uniform convergence on compacts. Note also that, since, for $i \in [k-1]$, $Z_{n,i}(0)$ may converge in general to a non-zero limit, the above convergence to 0 cannot be strengthened to a convergence in probability in $D([0,T] : \mathbb{R})$.
+
+(ii) By Corollary 5.3 in Section 5, when $\mu_{n,k}$ is away from 0,
+
+$$\beta'_{n}(\mu_{n,k}) = (1 + o(1)) \frac{d_n \mu_{n,k+1}}{\lambda_n \mu_{n,k}}$$
+
as $n \to \infty$. Hence the assumptions $d_n \to \infty$, $\lambda_n \to 1$, $\mu_{n,k} \to 1$ and $\beta'_n(\mu_{n,k}) \to \alpha < \infty$ in Theorem 2.2 say that $\mu_{n,k+1} \to 0$. Since $\mu_{n,k} \to 1$, this in fact shows that $\boldsymbol{\mu}_n \to \boldsymbol{f}_k$ in $\ell_1^\downarrow$, where recall that $\boldsymbol{f}_k$ is one of the fixed points of the fluid-limit (2.2) when $\lambda = 1$. The fact that the convergence happens in $\ell_1^\downarrow$ can be seen on observing that if $\mu_{n,k+1} \le \epsilon$ then, by (5.2), $\mu_{n,k+1+i} \le \epsilon^{d_n^i}$. This convergence, along with (2.8) shows that most queues will be of length $k$ on any fixed interval $[0,T]$. We also note that in general $\sqrt{n}(\boldsymbol{\mu}_n - \boldsymbol{f}_k)$ will diverge, and thus $\sqrt{n}(G_n - \boldsymbol{f}_k)$ will typically not be tight, in this regime.
+
+(iii) In the special case when the system starts sufficiently close to the near fixed point $\boldsymbol{\mu}_n$ so that $z_i = 0$ for $i > k + 1$, the limit process $\mathbf{Y}$ simplifies to an essentially two dimensional process given as, $Y_i(t) = 0$ for $i > 2$, and
+
+$$
+\begin{align*}
Y_1(t) &= y_1 - (\alpha + \mathbb{I}_{\{k=1\}}) \int_0^t Y_1(s)ds + \int_0^t Y_2(s)ds + \sqrt{2}B(t) \\
+Y_2(t) &= y_2 + \alpha \int_0^t Y_1(s)ds - \int_0^t Y_2(s)ds
+\end{align*}
+$$
+
+(iv) The convergence behavior of $Z_n$ is governed by the sequence of parameters $(d_n, \lambda_n)$. In Corollary 5.5 from Section 5, we show that if $1 \ll d_n^{k+1} \ll n$ and $1 - \lambda_n = \frac{\xi_n + \log d_n}{d_n^k}$ with $\xi_n \to -\log(\alpha) \in (-\infty, \infty]$ and $\frac{\xi_n^2}{d_n} \to 0$, then the conditions $\mu_{n,k} \to 1$ and $\beta'_n(\mu_{n,k}) \to \alpha \in [0, \infty)$ of Theorem 2.2 are satisfied. Using this fact we make the following observations. For simplicity, consider $z=0$.
+
+(a) Suppose that $d_n = \log n$, $1 - \lambda_n = \frac{\log \log n}{(\log n)^k}$. In this case the assumptions of Theorem 2.2 are satisfied and one essentially sees non-zero fluctuations only in the $k$-th and $k+1$-th coordinates. Note that as $k$ becomes large, the traffic intensity increases and one sees more and more coordinates of the near fixed point approach 1.
+
(b) With the same $d_n$ as in (a) but a somewhat lower traffic intensity given as $1 - \lambda_n = \frac{(\log n)^{1/2-\epsilon}}{(\log n)^k}$ for some $\epsilon \in (0, 1/2)$, one sees that the conditions of the theorem are satisfied with $\alpha = 0$ (i.e. $\beta'_n(\mu_{n,k}) \to 0$). Thus the limit process $\mathbf{Y}$, in the case $k > 1$, simplifies to $Y_i = 0$ for $i > 1$ and $Y_1(t) = \sqrt{2}B(t)$. When $k=1$, $Z_1 = Y_1$ is instead given as the following Ornstein-Uhlenbeck (OU) process
+
$$Z_1(t) = - \int_0^t Z_1(s) ds + \sqrt{2}B(t). \quad (2.10)$$
+
+(c) With higher values of $d_n$, using Theorem 2.2, one can analyze fluctuations for systems with higher traffic intensity. For example, suppose that $d_n = \frac{\sqrt{n}}{\log n}$. Then the conditions of the theorem are satisfied with $k=1$ and $1 - \lambda_n \sim (\log n)^2/\sqrt{n}$. In fact in this case $\alpha = 0$ and the limit process is described by the one dimensional OU process (2.10). With a slightly higher traffic intensity given as $1 - \lambda_n = ((\log n)^2 - 2\log n \log \log n)/2\sqrt{n}$ one obtains a two dimensional limit diffusion.
+
(d) The theorem allows for traffic intensity in the Halfin-Whitt scaling regime (i.e. $\sqrt{n}(1-\lambda_n) \to \beta > 0$) as well. Specifically, for $k \ge 2$, if $d_n = (\sqrt{n} \log n)^{1/k}$ and $(1 - \lambda_n) = \frac{\beta+o(1)}{\sqrt{n}}$ for some $\beta > \beta_0 = \frac{1}{2k}$, the conditions of the theorem are satisfied with $\alpha = 0$. With slightly higher traffic intensity (e.g. $\beta + o(1)$ replaced by $\beta_0 + (\frac{1}{k} \log \log n - \log \alpha)/\log n$) the conditions of the theorem are met with a non-zero $\alpha$.
+
(e) Recall that a fixed point of (2.2) when $\lambda = 1$ takes the form $\mathbf{f}_k^\gamma = \mathbf{f}_k + \gamma \mathbf{e}_{k+1}$, where $k \in \mathbb{N}$ and $\gamma \in [0, 1)$. Although Theorem 2.2 only considers settings where the near fixed point $\mu_n$ converges to $\mathbf{f}_k^0 = \mathbf{f}_k$ for some $k$, it is possible to give conditions under which $\mu_n$ converges to a different fixed point. Specifically, suppose that $1 \ll d_n^{k+1} \ll n$ and $1 - \lambda_n = \frac{a}{d_n^k}$ for some $a > 0$. Then it can be checked using Lemma 5.4 that $\mu_n \to \mathbf{f}_k^\gamma$ with $\gamma = e^{-a}$.
+
+(v) Suppose for some $a \in (0, \frac{1}{2})$, $d_n = n^{a+o(1)}$ and $\lambda_n$ is taken as in Remark 2.4 (iv) with $k \in \mathbb{N}$ such that $a(k+1) < 1$. By Theorem 2.2, all but $O(\sqrt{n})$ queues will have length $k$ over bounded times. This result is analogous to [4, Theorem 1.1] which considers, for such choice of $d_n$, $\lambda_n$, the behavior of queues in equilibrium in a setting where $d_n$ queues are sampled with replacement (instead of without replacement as in the current work). In fact, for this scenario, [4, Theorem 1.1] is able to show a stronger result which says that with high probability, as $n \to \infty$, most of the queues in equilibrium will have length $k$ and that there will be no larger queues.
+
+The next theorem describes the fluctuations of $Z_n$ when $d_n$ is of order $\sqrt{n}$.
+
+**Theorem 2.3.** Suppose that $\frac{d_n}{\sqrt{n}} \to c \in (0, \infty)$ and $\lambda_n = 1 - \left(\frac{\log d_n}{d_n} + \frac{\alpha_n}{\sqrt{n}}\right)$ with $\alpha_n \to \alpha \in (-\infty, \infty]$ and $\alpha_n = o(n^{1/4})$. Then, $\boldsymbol{\mu}_n \to \mathbf{f}_1$ in $l_1^\downarrow$. Suppose further that $\{\|\boldsymbol{Z}_n(0)\|_1\}_{n \in \mathbb{N}}$ is tight and $\boldsymbol{Z}_n(0) \xrightarrow{P} z$ in $l_2$ with $z_{r+} = 0$ for some $r \ge 2$. Then, as $n \to \infty$, $\boldsymbol{Z}_n \Rightarrow \mathbf{Z}$ in $D([0, \infty) : l_2)$, where $\mathbf{Z}$ is the unique pathwise solution to:
+
+$$
+\begin{align*}
Z_1(t) &= z_1 - \int_0^t (Z_1(s) - Z_2(s))ds - (ce^{c\alpha})^{-1} \int_0^t (e^{cZ_1(s)} - 1)ds + \sqrt{2}B(t), \\
Z_2(t) &= z_2 - \int_0^t (Z_2(s) - Z_3(s))ds + (ce^{c\alpha})^{-1} \int_0^t (e^{cZ_1(s)} - 1)ds, \\
Z_i(t) &= z_i - \int_0^t (Z_i(s) - Z_{i+1}(s))ds &\text{for each } i \in \{3, \dots, r\}, \\
Z_i(t) &= 0 &\text{for each } i > r,
+\end{align*}
+$$
+
and $B$ is a standard Brownian motion.
+
+**Remark 2.5.**
+
(i) Note that the coefficients in the above system of equations are only locally Lipschitz and exhibit exponential growth. However, since $c$ is positive, the system of equations has a unique pathwise solution, as is shown in Lemma 8.2.
+
+(ii) Once more, when $z_i = 0$ for all $i > 2$, the system of equations simplifies to a two dimensional system given as $Z_i = 0$ for all $i > 2$, and
+
+$$
+\begin{align*}
Z_1(t) &= z_1 - \int_0^t (Z_1(s) - Z_2(s))ds - (ce^{c\alpha})^{-1} \int_0^t (e^{cZ_1(s)} - 1)ds + \sqrt{2}B(t), \\
Z_2(t) &= z_2 - \int_0^t Z_2(s)ds + (ce^{c\alpha})^{-1} \int_0^t (e^{cZ_1(s)} - 1)ds.
+\end{align*}
+$$
+
+(iii) In the regime considered in Theorem 2.3, the near fixed point $\mu_n$ can converge to only one particular fixed point of (2.2), namely $\mathbf{f}_1$. As before, the term $\sqrt{n}(\mu_n - \mathbf{f}_1)$ may diverge and thus $\sqrt{n}(G_n(\cdot) - \mathbf{f}_1)$ will in general not be tight.
+
+(iv) Suppose that $d_n = c\sqrt{n}$ for some $c > 0$, $z = 0$ and $1 - \lambda_n = (\beta + o(1))\log n/\sqrt{n}$ for some $\beta > \beta_0 = 1/2c$. Then the assumptions of the above theorem are satisfied with $\alpha = \infty$ and the limit system simplifies to a one dimensional OU process given as $Z_i = 0$ for all $i > 1$, and $Z_1$ satisfies (2.10). If $(\beta + o(1)) \log n$ is replaced by $\beta_0 \log n + \gamma$ for some $\gamma \in \mathbb{R}$, we instead obtain a two dimensional limit system given as $Z_i = 0$ for all $i > 2$, and
+
+$$
+\begin{aligned}
+Z_1(t) &= -\int_0^t (Z_1(s) - Z_2(s))ds - e^{-c\gamma}\int_0^t (e^{cZ_1(s)} - 1)ds + \sqrt{2}B(t), \\
+Z_2(t) &= -\int_0^t Z_2(s)ds + e^{-c\gamma}\int_0^t (e^{cZ_1(s)} - 1)ds.
+\end{aligned}
+ $$
+
+Finally we consider the fluctuation behavior when $d_n \gg \sqrt{n}$. This time the limit system will involve reflected diffusion processes. Recall from (2.1) the definition of the Skorohod maps $\Gamma_\alpha$ and $\hat{\Gamma}_\alpha$ associated with a reflection barrier at $\alpha \in \mathbb{R}$. We will extend the definition of these maps to $\alpha = \infty$ by setting
+
+$$ \Gamma_{\infty}(f) = f, \quad \hat{\Gamma}_{\infty}(f) = 0 \text{ for } f \in D([0, \infty) : \mathbb{R}). \tag{2.11} $$
+
+**Theorem 2.4.** Suppose that $\sqrt{n} \ll d_n$ and
+
+$$ \lambda_n = 1 - \left(\frac{\log d_n}{d_n} + \frac{\alpha_n}{\sqrt{n}}\right), \quad \text{where } \alpha_n \to \alpha \in [0, \infty], \text{ with } \alpha_n^- = O(\sqrt{n}/d_n), \text{ and } \alpha_n = O(n^{1/6}). \tag{2.12} $$
+
Then $\boldsymbol{\mu}_n \to \mathbf{f}_1$ in $\ell_1^\downarrow$. Suppose further that $\{\|\boldsymbol{Z}_n(0)\|_1\}_{n \in \mathbb{N}}$ is tight and $\boldsymbol{Z}_n(0) \xrightarrow{P} \mathbf{z}$ in $\ell_2$ where $z_1 \le \alpha$ and $\boldsymbol{z}_{r+} = 0$ for some $r \ge 2$. Then, as $n \to \infty$, $\boldsymbol{Z}_n \Rightarrow \boldsymbol{Z}$ in $D([0, \infty) : \ell_2)$, where $(\boldsymbol{Z}, \eta)$ is an $\ell_2 \times \mathbb{R}_+$ valued continuous process given as the unique solution to:
+
+$$
+\begin{align*}
+Z_1(t) &= \Gamma_{\alpha} \left( z_1 - \int_0^t (Z_1(s) - Z_2(s))ds + \sqrt{2}B(\cdot) \right) (t), \\
+Z_2(t) &= z_2 - \int_0^t (Z_2(s) - Z_3(s))ds + \eta(t), \\
+\eta(t) &= \hat{\Gamma}_{\alpha} \left( z_1 - \int_0^t (Z_1(s) - Z_2(s))ds + \sqrt{2}B(\cdot) \right) (t), \tag{2.13} \\
Z_i(t) &= z_i - \int_0^t (Z_i(s) - Z_{i+1}(s))ds & \text{for each } i \in \{3, \dots, r\}, \\
+Z_i(t) &= 0 & \text{for each } i > r,
+\end{align*}
+ $$
+
+and B is a standard Brownian motion.
+
As a corollary to this theorem, we obtain the specific regime considered in [25] (in fact we provide a slight strengthening in that, unlike [25], we allow $\alpha = 0$). See Remark 2.7 (v) for further discussion.
+
**Corollary 2.6.** Suppose that, as $n \to \infty$, $d_n \gg \sqrt{n} \log n$ and $\sqrt{n}(1-\lambda_n) \to \alpha \in [0, \infty)$, along with $\sqrt{n}(1-\lambda_n) \ge (\sqrt{n} \log n)/d_n$ for large $n$ if $\alpha = 0$. Let $\mathbf{Y}_n(\cdot) := \sqrt{n}(\mathbf{G}_n(\cdot) - \mathbf{f}_1)$ and assume that the sequence of random variables $\{\|\mathbf{Y}_n(0)\|_1\}$ is tight, and as $n \to \infty$, $\mathbf{Y}_n(0) \xrightarrow{P} \mathbf{y} \in \ell_2$ with $\mathbf{y}_{r+} = 0$ for some $r \ge 2$. Then $\mathbf{Y}_n \Rightarrow \mathbf{Y}$ in $D([0, \infty) : \ell_2)$, where $(\mathbf{Y}, \tilde{\eta})$ is the $\ell_2 \times [0, \infty)$ valued continuous
+process given by the unique solution to
+
+$$
+\begin{align*}
+Y_1(t) &= \Gamma_0 \left( y_1 - \alpha \operatorname{id}(\cdot) - \int_0^t (Y_1(s) - Y_2(s))ds + \sqrt{2}B(\cdot) \right) (t) \\
+Y_2(t) &= y_2 - \int_0^t (Y_2(s) - Y_3(s))ds + \tilde{\eta}(t), \\
+\tilde{\eta}(t) &= \hat{\Gamma}_0 \left( y_1 - \alpha \operatorname{id}(\cdot) - \int_0^t (Y_1(s) - Y_2(s))ds + \sqrt{2}B(\cdot) \right) (t), \\
Y_i(t) &= y_i - \int_0^t (Y_i(s) - Y_{i+1}(s))ds & \text{for each } i \in \{3, \dots, r\}, \\
+Y_i(t) &= 0 & \text{for each } i > r,
+\end{align*}
+$$
+
+and B is a standard Brownian motion.
+
+**Remark 2.7.**
+
(i) The existence and uniqueness of solutions to the stochastic integral equations in (2.13) follow from standard fixed-point arguments using the Lipschitz property of the map $\Gamma_\alpha$ on $D([0, \infty) : \mathbb{R})$. This system of equations can equivalently be written as
+
+$$
+\begin{align}
+Z_1(t) &= z_1 - \int_0^t (Z_1(s) - Z_2(s))ds + \sqrt{2}B(t) - \eta(t), \nonumber \\
+Z_2(t) &= z_2 - \int_0^t (Z_2(s) - Z_3(s))ds + \eta(t), \tag{2.14} \\
Z_i(t) &= z_i - \int_0^t (Z_i(s) - Z_{i+1}(s))ds & \text{for each } i \in \{3, \dots, r\}, \nonumber \\
+Z_i(t) &= 0 & \text{for each } i > r, \nonumber
+\end{align}
+$$
+
+where $\eta = 0$ when $\alpha = \infty$, and when $\alpha \in \mathbb{R}$, it satisfies
+
+$$
+\left.
+\begin{array}{l}
+\eta(0) = 0 \text{ and } \eta \text{ is a monotonically increasing function.} \\
+Z_1(t) \leq \alpha \\
+\displaystyle\int_0^\infty (\alpha - Z_1(s))d\eta(s) = 0
+\end{array}
+\right\}
+\quad (2.15)
+$$
+
(ii) The convergence $\boldsymbol{\mu}_n \to \mathbf{f}_1$ along with tightness of $\{\boldsymbol{Z}_n\}_{n\in\mathbb{N}}$ shows that, under the conditions of Theorem 2.3 or 2.4, most queues will be of length 1 on any fixed interval $[0, T]$.
+
+(iii) The limit system in Theorem 2.4 simplifies when $z_i = 0$ for $i > 2$ and is given as $Z_i = 0$ for all $i > 2$, and
+
+$$
+\begin{align*}
+Z_1(t) &= z_1 - \int_0^t (Z_1(s) - Z_2(s))ds + \sqrt{2}B(t) - \eta(t), \\
+Z_2(t) &= z_2 - \int_0^t Z_2(s)ds + \eta(t),
+\end{align*}
+$$
+
+where $\eta$ is as in the statement of the theorem.
+
(iv) Suppose that $d_n = \sqrt{n} \log n/(2a)$ for some $a > 0$ and $1 - \lambda_n = \frac{a}{\sqrt{n}} + \frac{2a(\log \log n + O(1))}{\sqrt{n} \log n}$. Then the assumptions in Theorem 2.4 are satisfied with $\alpha = 0$. In this case the reflection barrier is at 0, namely $Z_1(t) \le 0$ for all $t$. Also note that, since $\sqrt{n}(1-\lambda_n) \to a$, we have $\mu_{n,1} = \lambda_n \to 1$. Since $d_n/\sqrt{n} \to \infty$, this shows that
+
+$$
\sqrt{n}\mu_{n,2} = \sqrt{n}\lambda_n\beta_n(\lambda_n) \leq \sqrt{n}\lambda_n\lambda_n^{d_n} = \sqrt{n}\lambda_n^{d_n+1} \to 0.
+$$
+
Using $\mu_{n,i+1} \le \mu_{n,i}^{d_n}$, we see that $\sqrt{n}\mu_{n,k} \to 0$ for every $k \ge 2$, and since $\sqrt{n}(\mu_{n,1} - 1) = -\sqrt{n}(1-\lambda_n) \to -a$, it follows that $\sqrt{n}(\boldsymbol{\mu}_n - \mathbf{f}_1) \to -a\boldsymbol{e}_1$ in $\ell_1$. Hence the fluctuations of $\mathbf{G}_n$ about the fixed point $\mathbf{f}_1$ can be characterized as well. Specifically, letting $\mathbf{Y}_n(\cdot) = \sqrt{n}(\mathbf{G}_n(\cdot) - \mathbf{f}_1) = \mathbf{Z}_n(\cdot) + \sqrt{n}(\boldsymbol{\mu}_n - \mathbf{f}_1)$, we see that, under the conditions of the above theorem, $\mathbf{Y}_n \Rightarrow \mathbf{Y}$ in $D([0, \infty) : \ell_2)$, where $\mathbf{Y} = \mathbf{Z} - a\boldsymbol{e}_1$. Hence, assuming $z_i = 0$ for $i > 2$, $(\mathbf{Y}, \tilde{\eta}) \in C([0, \infty) : \ell_2 \times \mathbb{R}_+)$ is the unique solution to (2.15) with $(Z_1, \eta, \alpha)$ replaced by $(Y_1, \tilde{\eta}, -a)$, and the equations
+
+$$
+\begin{aligned}
Y_1(t) &= y_1 - at - \int_0^t (Y_1(s) - Y_2(s))ds + \sqrt{2}B(t) - \tilde{\eta}(t), \\
+Y_2(t) &= y_2 - \int_0^t Y_2(s)ds + \tilde{\eta}(t),
+\end{aligned}
+ $$
+
+where $y = z - ae_1$ and $B$ is a standard Brownian motion. In particular, the limit $Y$ takes the same form as in [7,25].
+
(v) Suppose that $d_n \gg \sqrt{n} \log n$. Then it is easy to see that (2.12) holds with some $\alpha > 0$ if and only if $\sqrt{n}(1-\lambda_n) \to \alpha > 0$. This regime was studied in [25]. Using arguments as in (iv) above, it is easy to check that $\sqrt{n}(\boldsymbol{\mu}_n - \mathbf{f}_1) \to -\alpha\boldsymbol{e}_1$ in $\ell_1$ (and hence in $\ell_2$). Corollary 2.6 is immediate from this and Theorem 2.4. In particular we recover [25, Theorem 3]. However, the proof techniques in the current paper are different from the stochastic coupling techniques employed in [25].
+
+(vi) Suppose $\sqrt{n} \ll d_n \ll \sqrt{n} \log n$ and that (2.12) holds with $\alpha < \infty$. Then, as observed in [25], in this regime $Y_n$ is not tight. Indeed, it is easy to see that $\sqrt{n}(1-\lambda_n) = (\sqrt{n} \log d_n)/d_n + \alpha_n \to \infty$. Nevertheless the process $Z_n$ converges in distribution and the limit process has a reflecting barrier at $\alpha$, i.e. $Z_1 \le \alpha$. In particular, unlike the case $d_n \gg \sqrt{n} \log n$, the barrier in this case does not come from the constraint $G_{n,1} \le 1$.
+
(vii) Theorem 2.4 allows for a slower approach to criticality than $n^{-1/2}$, e.g. $\lambda_n$ such that $n^{1/3}(1 - \lambda_n) \to \gamma > 0$. In this case $\alpha = \infty$ and there is no reflection. When $z_i = 0$ for all $i \ge 2$, the system reduces to the one dimensional OU process given by (2.10) with $Z_i = 0$ for $i > 1$.
+
# 3. POISSON REPRESENTATION OF STATE PROCESSES
+
We now embark on the proofs of the main results. We start with a brief overview of the organization of the proofs. In this section we describe a specific construction of the state process. The proof of the law of large numbers (Theorem 2.1) is given in Section 4. Section 5 describes fine-scaled (deterministic) properties of the function $\beta_n$ and the near fixed points $\boldsymbol{\mu}_n$, which play a key technical role in the proofs of our diffusion approximations. Section 6 derives preliminary estimates required to prove all the main results for the fluctuations of the state process. Sections 7, 8 and 9 complete the proofs of Theorems 2.2, 2.3 and 2.4, respectively.
+
We start with a specific construction of the state process through time-changed Poisson processes (cf. [9, 16]). A similar representation has been used in previous work on *JSQ(d)* systems (cf. [7, 25]). Let $\{N_{i,+}, N_{i,-} : i \ge 1\}$ be a collection of mutually independent rate-one Poisson processes given on some probability space $(\Omega, \mathcal{F}, \mathbf{P})$. Then $\mathbf{G}_n$ has the following (equivalent in distribution) representation. For $i \ge 1$ and $t \ge 0$
+
+$$
+\begin{align}
+G_{n,i}(t) = G_{n,i}(0) &- \frac{1}{n} N_{i,-} \left( n \int_0^t [G_{n,i}(s) - G_{n,i+1}(s)] ds \right) \tag{3.1} \\
+&+ \frac{1}{n} N_{i,+} \left( \lambda_n n \int_0^t [\beta_n(G_{n,i-1}(s)) - \beta_n(G_{n,i}(s))] ds \right),
+\end{align}
+ $$
+
+where $G_{n,0}(t) = 1$ for all $t \ge 0$. Denoting
+
+$$ A_{n,i}(t) \doteq N_{i,+}\left(\lambda_n n \int_0^t [\beta_n(G_{n,i-1}(s)) - \beta_n(G_{n,i}(s))]ds\right), D_{n,i}(t) \doteq N_{i,-}\left(n \int_0^t [G_{n,i}(s) - G_{n,i+1}(s)]ds\right), $$
+
+the above evolution equation can be rewritten as
+
+$$
+G_{n,i}(t) = G_{n,i}(0) - \frac{1}{n}D_{n,i}(t) + \frac{1}{n}A_{n,i}(t), \quad i \in \mathbb{N}, t \ge 0. \tag{3.2}
+$$
+
+Here $D_{n,i}$ describe events causing a decrease in $G_{n,i}$ owing to completion of service events for jobs
+in queues of length exactly $i$ whilst $A_{n,i}$ describe events causing an increase in $G_{n,i}$ which only occur
+if the chosen queue of a new job has exactly $i-1$ individuals; this occurs if amongst the $d_n$ random
+choices made by this job, all of the chosen queues have load at least $i-1$ but not all have load at
+least $i$.
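The arrival and departure mechanism just described is straightforward to mimic in a toy simulation. The sketch below is a Gillespie-style discrete-event simulation with Poisson arrivals of rate $\lambda_n n$, unit-rate exponential services, and $d_n$ queues sampled without replacement; all names and parameter values are illustrative choices of ours, not part of the construction above.

```python
import random

def simulate_jsq_d(n=200, d=20, lam=0.9, T=10.0, seed=1):
    """Toy Gillespie-style simulation of JSQ(d): jobs arrive at total rate
    lam * n, each samples d queues without replacement and joins a shortest
    sampled queue; each nonempty queue completes service at rate 1.
    Returns G = (G_1, G_2, ...) with G_i = fraction of queues of length >= i
    at the final time."""
    rng = random.Random(seed)
    X = [0] * n                      # queue lengths
    t = 0.0
    while True:
        busy = sum(1 for x in X if x > 0)
        rate = lam * n + busy        # total event rate
        t += rng.expovariate(rate)
        if t > T:
            break
        if rng.random() < lam * n / rate:
            sampled = rng.sample(range(n), d)            # without replacement
            target = min(sampled, key=lambda i: X[i])    # a shortest sampled queue
            X[target] += 1
        else:
            # departure from a uniformly chosen busy queue
            i = rng.choice([j for j in range(n) if X[j] > 0])
            X[i] -= 1
    m = max(X)
    return [sum(1 for x in X if x >= i) / n for i in range(1, max(m, 1) + 1)]
```

By construction the returned vector is nonincreasing in $i$ and takes values in $[0, 1]$, i.e. it lies in the state space of $\mathbf{G}_n$.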
+
+Let
+
+$$
\tilde{\mathcal{F}}_t^n = \sigma \{ A_{n,i}(s), D_{n,i}(s) : s \le t, i \ge 1 \},
+$$
+
+and let $\mathcal{F}_t^n$ be the augmentation of $\tilde{\mathcal{F}}_t^n$ with **P**-null sets. It then follows that, for each $i \ge 1$
+
+$$
M_{n,i,+}(t) \doteq \frac{1}{n} N_{i,+} \left( \lambda_n n \int_0^t (\beta_n(G_{n,i-1}(s)) - \beta_n(G_{n,i}(s))) ds \right) \\
\phantom{M_{n,i,+}(t) \doteq} - \lambda_n \int_0^t (\beta_n(G_{n,i-1}(s)) - \beta_n(G_{n,i}(s))) ds \tag{3.3}
+$$
+
+and
+
+$$
M_{n,i,-}(t) \doteq \frac{1}{n} N_{i,-} \left( n \int_0^t (G_{n,i}(s) - G_{n,i+1}(s)) ds \right) - \int_0^t (G_{n,i}(s) - G_{n,i+1}(s)) ds \quad (3.4)
+$$
+
+are {$\mathcal{F}_t^n$}-martingales with predictable (cross) quadratic variation processes given, for $t \ge 0$, as
+
+$$
+\begin{align*}
+\langle M_{n,i,+} \rangle_t &= \frac{\lambda_n}{n} \int_0^t (\beta_n(G_{n,i-1}(s)) - \beta_n(G_{n,i}(s))) ds, & i \ge 1, \\
+\langle M_{n,i,-} \rangle_t &= \frac{1}{n} \int_0^t (G_{n,i}(s) - G_{n,i+1}(s)) ds, & i \ge 1, \\
\langle M_{n,i,-}, M_{n,j,-} \rangle_t &= \langle M_{n,i,+}, M_{n,j,+} \rangle_t = 0, & \text{for all } i,j \ge 1, i \ne j, \text{ and} \\
\langle M_{n,i,+}, M_{n,k,-} \rangle_t &= 0 & \text{for all } i,k \ge 1.
+\end{align*}
+$$
+
Using these martingales, the evolution of $\mathbf{G}_n$ can be rewritten as
+
+$$
+\begin{equation}
+\begin{aligned}
+G_{n,i}(t) = G_{n,i}(0) & - \int_0^t (G_{n,i}(s) - G_{n,i+1}(s))ds \\
& + \lambda_n \int_0^t (\beta_n(G_{n,i-1}(s)) - \beta_n(G_{n,i}(s))) ds + M_{n,i}(t), \quad i \ge 1
+\end{aligned}
+\tag{3.5}
+\end{equation}
+$$
+
+where $M_{n,i}(t) \doteq M_{n,i,+}(t) - M_{n,i,-}(t)$ and
+
+$$
+\langle M_{n,i} \rangle_t = \frac{1}{n} \left( \int_0^t (G_{n,i}(s) - G_{n,i+1}(s))ds + \lambda_n \int_0^t (\beta_n(G_{n,i-1}(s)) - \beta_n(G_{n,i}(s)))ds \right). \quad (3.6)
+$$
+
We will assume throughout that $\mathbf{G}_n(0) \in \ell_1^\downarrow$ a.s. Then it follows that, for every $t \ge 0$, $\|\mathbf{G}_n(t)\|_1 < \infty$ almost surely. Indeed, over any time interval $[0,t]$ finitely many jobs enter the system a.s., and denoting by $k(n)$ the number of jobs that arrive over $[0,t]$, we see that $\|\mathbf{G}_n(t)\|_1 \le \|\mathbf{G}_n(0)\|_1 + k(n)/n < \infty$ a.s. Thus $\mathbf{G}_n$ is a stochastic process with sample paths in $\mathbb{D}([0,\infty) : \ell_1^\downarrow)$. Note that, for any $t > 0$, $\|\mathbf{G}_n(t) - \mathbf{G}_n(t-)\|_1 \le 1/n$.
+
**Remark 3.1.** Let $\boldsymbol{a}_n, \boldsymbol{b} : \ell_1^\downarrow \to \ell_1$ be given by
+
+$$
\boldsymbol{a}_n(\boldsymbol{x})_i \doteq \lambda_n (\beta_n(x_{i-1}) - \beta_n(x_i)), \quad \boldsymbol{b}(\boldsymbol{x})_i \doteq x_i - x_{i+1}, \quad \boldsymbol{x} \in \ell_1^\downarrow, i \ge 1,
+$$
+
where, by convention, for $\boldsymbol{x} \in \ell_1^\downarrow$, $x_0 = 1$. Then (3.5) can be rewritten as an evolution equation in $\ell_1$ as,
+
+$$ \mathbf{G}_n(t) = \mathbf{G}_n(0) + \int_0^t [\boldsymbol{a}_n(\mathbf{G}_n(s)) - \boldsymbol{b}(\mathbf{G}_n(s))]ds + \mathbf{M}_n(t), \quad (3.7) $$
+
where $\mathbf{M}_n(t) \doteq (M_{n,i}(t))_{i\ge 1}$ is a stochastic process with sample paths in $D([0,\infty) : \ell_1)$ and the integral is a Bochner integral [30]. Note that the near fixed point $\boldsymbol{\mu}_n$ from Definition 2 satisfies $\boldsymbol{a}_n(\boldsymbol{\mu}_n) = \boldsymbol{b}(\boldsymbol{\mu}_n)$. It is in fact the unique solution to
+
$$ \boldsymbol{a}_n(\boldsymbol{x}) = \boldsymbol{b}(\boldsymbol{x}) \quad \text{for } \boldsymbol{x} \in \ell_1^\downarrow, \qquad (3.8) $$
+
+as is seen by adding up all the coordinates of (3.8) and using $\boldsymbol{x} \in \ell_1$. In Lemma 4.1 we will see that for any $T > 0$, as $n \to \infty$, $\sup_{t \le T} \| \mathbf{M}_n(t) \|_2 \xrightarrow{P} 0$. Hence if $\mathbf{G}_n(0) = \boldsymbol{\mu}_n$, then by (3.7), we expect the process $\mathbf{G}_n(t)$ to stay close to $\boldsymbol{\mu}_n$ (over any compact time interval) as $n \to \infty$. In this sense $\boldsymbol{\mu}_n$ can be viewed as a 'near fixed point' of $\mathbf{G}_n(\cdot)$ and the terminology in Definition 2 is justified. Another reason for this terminology comes from the results in Theorems 2.2–2.4 which show that, under conditions, $\boldsymbol{\mu}_n$ converges to one of the fixed points of the fluid limit (2.2) when $\lambda = 1$.
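Summing the coordinates of (3.8) from index $i$ onward (the tails vanish since $\boldsymbol{x} \in \ell_1$ and $\beta_n(x_j) \le x_j \to 0$) collapses (3.8) to the recursion $\mu_{n,i} = \lambda_n \beta_n(\mu_{n,i-1})$ with $\mu_{n,0} = 1$. The sketch below iterates this recursion using the proxy $\beta(x) = x^d$; the exact $\beta_n$ for sampling without replacement is not reproduced here, only the bound $\beta_n(x) \le x^{d_n}$ noted in the text, so the numbers are purely illustrative.

```python
def near_fixed_point(lam, d, tol=1e-300, max_len=50):
    """Iterate mu_i = lam * beta(mu_{i-1}) with mu_0 = 1, using the proxy
    beta(x) = x**d. Coordinates below tol are treated as zero."""
    mu, prev = [], 1.0
    for _ in range(max_len):
        cur = lam * prev ** d
        if cur < tol:
            break
        mu.append(cur)
        prev = cur
    return mu

mu = near_fixed_point(lam=0.99, d=30)
# mu[0] = lam = 0.99, and the remaining coordinates collapse
# super-geometrically, so the vector is already close to f_1.
```

Even for moderate $d$, only the first coordinate is non-negligible, consistent with $\boldsymbol{\mu}_n$ approaching a fixed point $\mathbf{f}_k$ of the fluid limit.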
+
+# 4. THE LAW OF LARGE NUMBERS
+
+In this section we prove Proposition 2.1 and Theorem 2.1.
+
+## 4.1. Uniqueness of Fluid Limit Equations.
In this subsection we show that there is at most one solution of (2.2) in $C([0, \infty) : \ell_1^\downarrow \times \ell_\infty)$. Results of Section 4.2 will provide existence of solutions to this equation. Suppose $(g, v)$ and $(g', v')$ are two solutions to (2.2) in $C([0, \infty) : \ell_1^\downarrow \times \ell_\infty)$. We will now argue that the two solutions are equal.
+
We claim that $v_i'$ and $v_i$ are non-zero for only finitely many $i$. Indeed, since $g, g' \in C([0,T] : \ell_1^\downarrow)$, there is a constant $C \in (0, \infty)$ so that $\sup_{s \le T} \|g(s)\|_1 \vee \sup_{s \le T} \|g'(s)\|_1 \le C$. Since
+
$$ x_i \le \|x\|_1/i \quad \text{for any } x \in \ell_1^\downarrow, \qquad (4.1) $$
+
taking $M \doteq [C+1] \in \mathbb{N}$ shows that $\sup_{s \le T} (g_i(s) \vee g_i'(s)) < 1$ for any $i \ge M$. But then, by the equivalent representation of (2.2) given in (2.3) (in particular the second line), we must have $v_i = v_i' = 0$ for any $i \ge M$. This proves the claim.
+
+Since $v_i = v_i' = 0$ for $i \ge M$, the first line of the equivalent formulation in (2.3) shows that both $\boldsymbol{x} = g$ and $\boldsymbol{x} = g'$ satisfy the integral equations
+
+$$ x_i(t) = r_i - \int_0^t (x_i(s) - x_{i+1}(s))ds \quad \text{for } i \ge M+1 \text{ and } t \in [0,T]. $$
+
By standard arguments using Gronwall's lemma [9, Appendix 5], we must then have $g_i = g_i'$ for each $i \ge M+1$. Indeed, letting $z_i(\cdot) \doteq g_i(\cdot) - g_i'(\cdot)$ for $i \ge M+1$ and $w(t) \doteq \sum_{i=M+1}^\infty |z_i(t)|$ for $t \in [0,T]$, we have that

$$ |z_i(t)| \leq \int_0^t (|z_i(s)| + |z_{i+1}(s)|)ds \quad \text{for all } i \geq M+1, \text{ and } t \in [0,T] $$

and so

$$ w(t) \leq 2 \int_{0}^{t} w(s) ds, \quad t \in [0, T], $$

which implies that $w(t) = 0$ for $t \in [0, T]$.
+
+We now show that $g_i = g_i'$ for $i \le M$. From the definition of the Skorohod map in (2.1) we see that for $f_1, f_2 \in D([0, \infty) : \mathbb{R})$ with $f_i(0) \le 1$, $i = 1, 2$, and $t \ge 0$
+
$$ \| \Gamma_1(f_1) - \Gamma_1(f_2) \|_{*,t} \le 2 \| f_1 - f_2 \|_{*,t}, \quad \| \hat{\Gamma}_1(f_1) - \hat{\Gamma}_1(f_2) \|_{*,t} \le \| f_1 - f_2 \|_{*,t}. $$
+
+Thus, since $(g, v)$ and $(g', v')$ solve (2.2),
+
+$$
+\begin{align}
+\|g_i - g'_i\|_{*,t} &\le 2 \left( \int_0^t \|g_i - g'_i\|_{*,s} ds + \int_0^t \|g_{i+1} - g'_{i+1}\|_{*,s} ds + \|v_{i-1} - v'_{i-1}\|_{*,t} \right), \text{ and} \tag{4.2} \\
+\|v_i - v'_i\|_{*,t} &\le \int_0^t \|g_i - g'_i\|_{*,s} ds + \int_0^t \|g_{i+1} - g'_{i+1}\|_{*,s} ds + \|v_{i-1} - v'_{i-1}\|_{*,t} \tag{4.3}
+\end{align}
+$$
+
+for any $i \ge 1$. Let $H_t \doteq \max_{i \in \{1, \dots, M\}} \|g_i - g'_i\|_{*,t}$. Note $g_{M+1} = g'_{M+1}$ and hence $H_t = \max_{i \in \{1, \dots, M+1\}} \|g_i - g'_i\|_{*,t}$. Then from (4.3), we have
+
+$$
+\|v_i - v'_i\|_{*,t} \le 2 \int_0^t H_s ds + \|v_{i-1} - v'_{i-1}\|_{*,t} \quad \text{for any } i \le M. \quad (4.4)
+$$
+
+Repeatedly using (4.4) along with $v_0 = v'_0$ shows that $\|v_i - v'_i\|_{*,t} \le 2i \int_0^t H_s ds$ for any $i \le M$.
+Using this bound in (4.2) shows for $1 \le i \le M$:
+
+$$
+\|g_i - g_i'\|_{*,t} \le 2 \left( 2 \int_0^t H_s ds + 2(i-1) \int_0^t H_s ds \right) = 4i \int_0^t H_s ds.
+$$
+
Hence, taking the maximum of $\|g_i - g_i'\|_{*,t}$ over $1 \le i \le M$, we get
+
+$$
+0 \le H_t \le 4M \int_0^t H_s ds \quad \text{for each } t \in [0, T].
+$$
+
Gronwall's lemma now shows that $H_T = 0$, and hence $g_i = g'_i$ for $i = 1, \dots, M$. Finally, since $v_0 = v'_0$,
we see recursively from the second equation in (2.2) that $v_i = v'_i$ for all $i \ge 0$. ■
+
## 4.2. Tightness and Limit Point Characterization.

Some of the arguments in this section are similar to [25]; however, in order to keep the presentation self-contained, we provide the details in a concise manner. The next result establishes the convergence of the martingale term $\mathbf{M}_n$ in the semimartingale decomposition in (3.7). Throughout this subsection and the next we assume that the conditions of Theorem 2.1 are satisfied, namely, $\mathbf{G}_n(0) \xrightarrow{P} \boldsymbol{r}$ in $\ell_1^\downarrow$, $\lambda_n \to \lambda$ and $d_n \to \infty$, as $n \to \infty$.
+
+**Lemma 4.1.** For any $T > 0$, $\sup_{s \le T} \|M_n(s)\|_2 \xrightarrow{P} 0$.
+
+*Proof.* It suffices to show that for any $T > 0$, $\lim_n E \sup_{s \le T} \|M_n(s)\|_2^2 = 0$. Applying Doob's maximal inequality we have that
+
+$$
+E \sup_{s \le T} \|M_n(s)\|_2^2 \le 4E \|M_n(T)\|_2^2 = 4E \sum_{i \ge 1} M_{n,i}(T)^2. \quad (4.5)
+$$
+
+Since $EM_{n,i}^2(T) = E\langle M_{n,i} \rangle_T$, using the monotone convergence theorem in (4.5) shows,
+
+$$
+E \sup_{s \le T} \|M_n(s)\|_2^2 \le 4E \sum_{i \ge 1} \langle M_{n,i} \rangle_T \le 4 \frac{T(1 + \sup_n \lambda_n)}{n}, \quad (4.6)
+$$
+
+where the last inequality is from (3.6) on observing that
+
+$$
\sum_{i=1}^{\infty} \langle M_{n,i} \rangle_T \leq \frac{1}{n} \int_0^T G_{n,1}(s)\, ds + \frac{\lambda_n}{n} \int_0^T \beta_n(G_{n,0}(s))\, ds \leq \frac{T(1+\lambda_n)}{n}.
+$$
+
+Sending $n \to \infty$ in (4.6) completes the proof of the lemma. $\blacksquare$
+
The next result characterizes compact sets in $\ell_1^\downarrow$. The proof is standard and can be found, for example, in [25].
+
**Proposition 4.2.** A subset $C \subseteq \ell_1^\downarrow$ is precompact if and only if the following two conditions hold:
+
+(1) (norm-bounded) $\sup_{x \in C} \|x\|_1 < \infty$, and
+
+(2) (uniformly decaying tails) $\limsup_{M \to \infty} \sup_{x \in C} \sum_{i>M} |x_i| = 0$.
+
+**Lemma 4.3.** For each $n \in \mathbb{N}$ there is a square integrable $\{\mathcal{F}_t^n\}$-martingale $\{L_n(t)\}$ such that, for any $t \ge 0$,
+
+$$ \sup_{s \in [0,t]} \|\mathbf{G}_n(s)\|_1 \le \|\mathbf{G}_n(0)\|_1 + \lambda_n t + L_n(t). $$
+
+Furthermore, $\langle L_n \rangle_t \le \frac{\lambda_n t}{n}$, for all $t \ge 0$.
+
+*Proof.* For $i = 1, \dots, n$, let $X_i(t)$ denote the number of jobs in the $i$-th server's queue at time $t$. Then
+
+$$ \|\mathbf{G}_n(t)\|_1 = \sum_{j=1}^{\infty} G_{n,j}(t) = \sum_{j=1}^{\infty} \sum_{i=1}^{n} \frac{\mathbb{I}_{\{X_i(t) \ge j\}}}{n} = \frac{1}{n} \sum_{i=1}^{n} \sum_{j=1}^{\infty} \mathbb{I}_{\{X_i(t) \ge j\}} = \frac{1}{n} \sum_{i=1}^{n} X_i(t). $$
+
+Hence $\|\mathbf{G}_n(t)\|_1$ is the total number of jobs in the system at time $t$, divided by $n$.
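This identity is easy to sanity-check: for any vector of queue lengths, summing the tail fractions over $j$ recovers the average queue length. A minimal sketch with hypothetical queue lengths:

```python
def tail_fractions(X):
    """G_j = fraction of queues with at least j jobs, j = 1, ..., max(X)."""
    n = len(X)
    return [sum(1 for x in X if x >= j) / n for j in range(1, max(X) + 1)]

X = [0, 1, 3, 2, 2]            # hypothetical queue lengths, n = 5
G = tail_fractions(X)          # [0.8, 0.6, 0.2]
assert abs(sum(G) - sum(X) / len(X)) < 1e-12   # sum of tails = mean length
```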
+
Since the total number of jobs in the system at time $t$ is bounded above by the sum of the number of job arrivals by time $t$ and the initial number of jobs, $\sup_{s \in [0,t]} \|\mathbf{G}_n(s)\|_1 \le \|\mathbf{G}_n(0)\|_1 + \frac{A_n(t)}{n}$, where $A_n(t)$ is the total number of arrivals to the system by time $t$. Since $A_n$ is a Poisson process with arrival rate $\lambda_n n$, the result follows on setting $L_n(t) = \frac{A_n(t)}{n} - \lambda_n t$, $t \ge 0$. ■
+
+The estimate in the next lemma will be useful when applying Aldous-Kurtz tightness criteria [9] for proving tightness of $\{\mathbf{G}_n\}$.
+
+**Lemma 4.4.** Fix $n \in \mathbb{N}$ and $\delta \in (0, \infty)$. Let $\tau$ be a bounded $\{\mathcal{F}_t^n\}$-stopping time. Then
+
$$ E \|\mathbf{G}_n(\tau + \delta) - \mathbf{G}_n(\tau)\|_1 \le (\lambda_n + 1)\delta. $$
+
+*Proof.* From (3.2), for any $i \in \mathbb{N}$,
+
+$$ |G_{n,i}(\tau + \delta) - G_{n,i}(\tau)| \le \frac{1}{n}(A_{n,i}(\tau + \delta) - A_{n,i}(\tau) + D_{n,i}(\tau + \delta) - D_{n,i}(\tau)). \quad (4.7) $$
+
+From (3.3) and (3.4) we see that
+
+$$
+\begin{aligned}
+E \frac{1}{n}(A_{n,i}(\tau + \delta) - A_{n,i}(\tau)) &= \lambda_n E \int_{\tau}^{\tau+\delta} (\beta_n(G_{n,i-1}(s)) - \beta_n(G_{n,i}(s))) ds \\
+E \frac{1}{n}(D_{n,i}(\tau + \delta) - D_{n,i}(\tau)) &= E \int_{\tau}^{\tau+\delta} (G_{n,i}(s) - G_{n,i+1}(s)) ds.
+\end{aligned}
+$$
+
+Using the above identities in (4.7)
+
+$$
+\begin{aligned}
+& E |G_{n,i}(\tau + \delta) - G_{n,i}(\tau)| \\
+&\leq \lambda_n E \int_{\tau}^{\tau+\delta} (\beta_n(G_{n,i-1}(s)) - \beta_n(G_{n,i}(s))) ds + E \int_{\tau}^{\tau+\delta} (G_{n,i}(s) - G_{n,i+1}(s)) ds && (4.8)
+\end{aligned}
+$$
+
Summing (4.8) over $i \in \mathbb{N}$, we have
+
+$$
+\begin{aligned}
+E \|\mathbf{G}_n(\tau + \delta) - \mathbf{G}_n(\tau)\|_1 &\leq \lambda_n \sum_{i=1}^{\infty} E \int_{\tau}^{\tau+\delta} (\beta_n(G_{n,i-1}(s)) - \beta_n(G_{n,i}(s))) ds \\
+&\quad + \sum_{i=1}^{\infty} E \int_{\tau}^{\tau+\delta} (G_{n,i}(s) - G_{n,i+1}(s)) ds \\
+&\leq E \int_{\tau}^{\tau+\delta} (\lambda_n \beta_n(G_{n,0}(s)) + G_{n,1}(s)) ds \\
+&\leq (\lambda_n + 1)\delta.
+\end{aligned}
+$$
+
The following lemma will be useful in verifying the tightness of $\{\mathbf{G}_n(t)\}$ in $\ell_1^\downarrow$ for each fixed $t \ge 0$.
+
+**Lemma 4.5.** For every $n, m \in \mathbb{N}$ there is a square integrable $\{\mathcal{F}_t^n\}$ martingale $L_{n,m}(\cdot)$ so that, for all $t \ge 0$,
+
$$ \sup_{s \le t} \sum_{i>m} G_{n,i}(s) \le \sum_{i>m} G_{n,i}(0) + \frac{\lambda_n t}{m} \|G_n\|_{1,t} + L_{n,m}(t) $$
+
+and $\langle L_{n,m} \rangle_t \le \frac{\lambda_n t}{nm} \|G_n\|_{1,t}$.
+
+*Proof.* From (3.1), for any $i \in \mathbb{N}$ and $t \ge 0$:
+
$$ G_{n,i}(t) \le G_{n,i}(0) + \frac{1}{n} N_{i,+} \left( n\lambda_n \int_0^t (\beta_n(G_{n,i-1}(s)) - \beta_n(G_{n,i}(s)))ds \right) \quad (4.9) $$
+
+Consider the point-process given by
+
$$ B_{n,m}(t) = \sum_{i>m} N_{i,+} \left( n\lambda_n \int_0^t (\beta_n(G_{n,i-1}(s)) - \beta_n(G_{n,i}(s)))ds \right). $$
+
+Adding over $i > m$ in (4.9) we get
+
+$$ \sup_{s \le t} \sum_{i>m} G_{n,i}(s) \le \sum_{i>m} G_{n,i}(0) + \frac{1}{n} B_{n,m}(t) \quad (4.10) $$
+
+It is easy to see that, with
+
+$$ b_{n,m}(t) = n\lambda_n \sum_{i>m} \int_0^t \beta_n(G_{n,i-1}(s)) - \beta_n(G_{n,i}(s))ds, \quad t \ge 0, $$
+
+$\tilde{L}_{n,m}(t) = B_{n,m}(t) - b_{n,m}(t)$ is a $\mathcal{F}_t^n$-martingale and
+
+$$
+\begin{align*}
\langle \tilde{L}_{n,m} \rangle_t &= b_{n,m}(t) = n\lambda_n \int_0^t \beta_n(G_{n,m}(s))ds \\
&\le n\lambda_n \int_0^t G_{n,m}(s)ds \le n\lambda_n t \left( \sup_{s \le t} G_{n,m}(s) \right) \le \frac{n\lambda_n t}{m} \|G_n\|_{1,t},
+\end{align*}
+$$
+
+where, for the last inequality we have used (4.1). The lemma now follows on setting $L_{n,m}(t) = \tilde{L}_{n,m}(t)/n$ and using (4.10). ■
+
+Recall that under our assumptions, $\lambda_n \to \lambda$ and $d_n \to \infty$ as $n \to \infty$.
+
**Lemma 4.6.** Suppose that $\{\mathbf{G}_n(0)\}_{n\ge 1}$ is a tight sequence of $\ell_1^\downarrow$ valued random variables. Then for any $T > 0$, $\{\mathbf{G}_n\}_{n\ge 1}$ is a tight sequence of $\mathbb{D}([0,T] : \ell_1^\downarrow)$ valued random variables.
+
*Proof.* To show that $\{\mathbf{G}_n\}_{n\ge 1}$ is tight it suffices to show that (cf. [9, Theorem 8.6])
+
+(1) For any $t \in [0, T]$ and $\epsilon > 0$, there is a compact set $\Gamma \subset \ell_1^\downarrow$ so that $\inf_{n\in\mathbb{N}} P(G_n(t) \in \Gamma) \ge 1 - \epsilon$.
+
+(2) $\lim_{\delta\to 0} \limsup_{n\to\infty} \sup_{\tau\le T} E\|\mathbf{G}_n(\tau+\delta) - \mathbf{G}_n(\tau)\|_1 = 0$, where the innermost supremum is taken over all $\mathcal{F}_t^n$-stopping times $\tau$ that are bounded by $T-\delta$.
+
The second condition is immediate from Lemma 4.4. Now consider (1). Fix $\epsilon > 0$. Let $\bar{\lambda} = \sup_{n\ge 1} \lambda_n$. Since $\mathbf{G}_n(0)$ is tight, there is a compact $K_1 \subset \ell_1^\downarrow$ such that
+
+$$ P(G_n(0) \in K_1) \ge 1 - \frac{\epsilon}{8} \quad \text{for all } n \in \mathbb{N}. $$
+
+From Proposition 4.2 there is a $\kappa_1 \in (0, \infty)$ such that $\sup_{x\in K_1} \|x\|_1 \le \kappa_1$. From Lemma 4.3 we can find $\kappa_2 \in (0, \infty)$ so that
+
+$$ P(\bar{\lambda}T + \|L_n\|_{1,T} > \kappa_2) \le \frac{\epsilon}{8}. $$
+
+Then, using the above estimates and Lemma 4.3 again, with $\kappa = \kappa_1 + \kappa_2$,
+
+$$P(\|\mathbf{G}_n\|_{1,T} \geq \kappa) \leq \frac{\epsilon}{4}.$$
+
+Let $m_k \uparrow \infty$ be a sequence such that $4 \frac{\bar{\lambda} T \kappa}{m_k^{1/2}} \leq \frac{\epsilon}{2^{k+2}}$ for all $k \in \mathbb{N}$. Define
+
$$K_2 = \left\{ y \in \ell_1^\downarrow : \|y\|_1 \le \kappa \text{ and for some } x \in K_1, \sum_{i>m_k} y_i \le \sum_{i>m_k} x_i + \frac{\bar{\lambda}T\kappa}{m_k} + \frac{1}{m_k^{1/4}}, \forall k \in \mathbb{N} \right\}.$$
+
Since $K_1$ is compact, it is immediate from Proposition 4.2 that $K_2$ is precompact in $\ell_1^\downarrow$. Also, using Lemma 4.5, for any $t \in [0, T]$,
+
+$$
+\begin{aligned}
+P(\mathbf{G}_n(t) \in K_2^c) &\le P(\|\mathbf{G}_n\|_{1,T} \ge \kappa) + P(\mathbf{G}_n(0) \in K_1^c) + P(\|L_{n,m_k}\|_{*,T} > \frac{1}{m_k^{1/4}} \text{ for some } k \in \mathbb{N}) \\
+&\le \frac{\epsilon}{4} + \frac{\epsilon}{8} + 4\kappa\bar{\lambda}T \sum_{k=1}^{\infty} m_k^{1/2} \frac{1}{m_k} \le \epsilon,
+\end{aligned}
+$$
+
+where the second inequality follows from Doob's maximal inequality and from the expression of $\langle L_{n,m_k} \rangle$ in Lemma 4.5 and the third inequality follows from the choice of $\{m_k\}$. This proves (1) and completes the proof of the lemma. ■
+
+The following lemma gives a characterization of the limit points of $\mathbf{G}_n$.
+
**Lemma 4.7.** Fix $T \in (0, \infty)$. Suppose that, along some subsequence $\{n_k\}_{k\ge1}$, $\mathbf{G}_{n_k} \Rightarrow \mathbf{G}$ in $D([0, T] : \ell_1^\downarrow)$ as $k \to \infty$. Then $\mathbf{G} \in C([0, T] : \ell_1^\downarrow)$ a.s., and (2.2) is satisfied with $(g_i, v_i)$ replaced with $(G_i, V_i)$, where $V_i$ are defined recursively using the second equation in (2.2) with $V_0(t) = \lambda t$ for $t \ge 0$.
+
*Proof.* From Lemma 4.1 we see that $\mathbf{M}_{n_k} \xrightarrow{P} 0$ in $D([0,T]: \ell_2)$. By the Skorohod embedding theorem, we may assume that $\mathbf{G}_{n_k}, \mathbf{M}_{n_k}, \mathbf{G}$ are all defined on the same probability space and
+
+$$ (\mathbf{G}_{n_k}, \mathbf{M}_{n_k}) \rightarrow (\mathbf{G}, 0), \text{ a.s.} $$
+
in $D([0,T] : \ell_1^\downarrow \times \ell_2)$. Since the jumps of $\mathbf{G}_n$ have size at most $1/n$, $\mathbf{G}$ is continuous and $\|\mathbf{G} - \mathbf{G}_{n_k}\|_{1,T} \to 0$ a.s. Similarly, $\|\mathbf{M}_{n_k}\|_{2,T} \to 0$ almost surely. To simplify notation, from now on we will write $n$ for $n_k$.
+
+Let $V_{n,i}(t) = \lambda_n \int_0^t \beta_n(G_{n,i}(s))ds$ for $i \ge 1$ and $V_{n,0}(t) = \lambda_n t$. From (3.1), for any $i \ge 1$
+
+$$ G_{n,i}(t) = G_{n,i}(0) - \int_{0}^{t} (G_{n,i}(s) - G_{n,i+1}(s))ds + V_{n,i-1}(t) - V_{n,i}(t) + M_{n,i}(t). \quad (4.11) $$
+
+For $i \in \mathbb{N}$, $\sup_{s \le T} |G_{n,i}(s) - G_i(s)| \le \sup_{s \le T} \|\mathbf{G}_n(s) - \mathbf{G}(s)\|_1 \to 0$ and $\sup_{s \le T} |M_{n,i}(s)| \le \sup_{s \le T} \|\mathbf{M}_n(s)\|_2 \to 0$, almost surely as $n \to \infty$. We now show that, for each $i \in \mathbb{N}_0$, $V_{n,i}$ converges uniformly (a.s.) to some limit process $V_i$. Clearly this is true for $i=0$ and in fact $V_0(t) = \lambda t$, $t \ge 0$. Proceeding recursively, suppose now that $V_{n,i-1} \to V_{i-1}$ for some $i \ge 1$. Then, since all the terms in (4.11), except $V_{n,i}$, converge uniformly, $V_{n,i}$ must converge uniformly as well to some limit process $V_i$. Sending $n \to \infty$ in (4.11) we get, for every $t \le T$ and $i \ge 1$:
+
+$$ G_i(t) = G_i(0) - \int_0^t (G_i(s) - G_{i+1}(s))ds + V_{i-1}(t) - V_i(t), \text{ a.s.} $$
+
+This shows the first line in (2.3) is satisfied with $(g_i, v_i)$ replaced with $(G_i, V_i)$.
+
+We now show that the second line in (2.3) is satisfied as well. Since $V_i$ is the limit of $\{V_{n,i}\}$, the following properties hold:
+
+(i) $V_0(t) = \lambda t$ for all $t \in [0, T]$.
+
+(ii) $V_i$ is continuous, non-decreasing and $V_i(0) = 0$.
+
+(iii) For any $t \in [0, T]$, $\int_0^t (1-G_i(s))dV_i(s) = 0$. This is a consequence of the following identities:
+
+$$
+\begin{align*}
+\int_0^t (1 - G_i(s)) dV_i(s) &= \lim_n \int_0^t (1 - G_i(s)) dV_{n,i}(s) \\
+&= \lim_n \int_0^t \lambda_n(1 - G_i(s)) \beta_n(G_{n,i}(s)) ds \\
+&= \lambda \int_0^t \lim_{n \to \infty} (1 - G_i(s)) \beta_n(G_{n,i}(s)) ds \\
+&= 0
+\end{align*}
+$$
+
+where the first equality holds since $G_i$ is a continuous and bounded function and $V_{n,i} \to V_i$ uniformly on $[0,T]$; the second equality uses the definition of $V_{n,i}$; the third is from the dominated convergence theorem; and the fourth follows since $\beta_n(x) \le x^{d_n}$ for $x \in [0,1]$, so that, as $d_n \to \infty$, $\beta_n(x) \to 0$ for every $x \in [0,1)$.
+
+Thus we have verified that the second line in (2.3) is satisfied with $(G_i, V_i)$ as well. The result is now immediate from Remark 2.2. ■
+
+### 4.3. Completing the Proof of LLN.
+We can now complete the proofs of Proposition 2.1 and Theorem 2.1.
+
+**Proof of Proposition 2.1.** Fix $r \in l_1^\perp$, $\lambda > 0$ and choose a sequence $r_n \in l_1^\perp$ such that $r_n \to r$ in $l_1^\perp$ and, for each $i$, $nr_{n,i} \in \mathbb{N}_0$. Consider parameters $\lambda_n = \lambda$, $d_n = n$ and a JSQ($d_n$) system initialized at $\mathbf{G}_n(0) = r_n$. From Lemma 4.7 we have that there is at least one solution of (2.2), given as a limit point of an arbitrary weakly convergent subsequence of $\mathbf{G}_n$ (such a subsequence exists in view of the tightness shown in Lemma 4.6). The fact that this equation can have at most one solution was shown in Section 4.1. The result follows. ■
+
+**Proof of Theorem 2.1.** Since $G_n(0) \xrightarrow{P} r$ in $l_1^\perp$, the hypothesis of Lemma 4.6 is satisfied, and thus the sequence $\{G_n\}_{n \ge 1}$ is tight in $\mathcal{D}([0,T] : l_1^\perp)$ for any fixed $T > 0$. The result is now immediate from Lemma 4.7 and unique solvability of (2.2) shown in Proposition 2.1. ■
+
+**Remark 4.8.** We note that the proofs of Lemma 4.7 and Theorem 2.1 also show that, under the conditions of Theorem 2.1, for each $i \ge 1$,
+
+$$ \sup_{t \le T} \left| \lambda_n \int_0^t \beta_n(G_{n,i}(s))ds - v_i(t) \right| \xrightarrow{P} 0, $$
+
+where $(g_i, v_i)$ is the unique solution of (2.2).
+
+## 5. PROPERTIES OF THE NEAR FIXED POINT
+
+In this section we give some important properties of the near fixed point $\mu_n$ that will be needed in the proofs of fluctuation theorems. Since $\mu_n$ is defined in terms of the function $\beta_n$, we begin by giving some results on the asymptotic behavior of $\beta_n$ and its derivatives. Proofs follow via elementary algebra and Taylor's approximation and can be found in Appendix A. Roughly speaking, these results control the error between sampling with and without replacement of $d_n$ servers from a collection of $n$ servers. We first note that the function $\beta_n$ is differentiable on $(0,1) \setminus \{\frac{d_n-1}{n}\}$ and the derivative is given as
+
+$$ \beta'_n(x) = \sum_{j=0}^{d_n-1} (1-j/n)^{-1} \prod_{\substack{i=0 \\ i \ne j}}^{d_n-1} \frac{x-i/n}{1-i/n} \quad \text{for } x \in (\frac{d_n-1}{n}, 1] \text{ and } \beta'_n(x) = 0 \text{ for } x \in (0, \frac{d_n-1}{n}). \quad (5.1) $$
+
+As a convention, we set $\beta'_n(x) = 0$ for $x = \frac{d_n-1}{n}$.
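As a quick numerical sanity check, (5.1) can be compared against a finite-difference derivative of the product form of $\beta_n$. The sketch below is illustrative only: the values $n = 1000$, $d_n = 5$ and the test point are arbitrary, and it assumes the product representation $\beta_n(x) = \prod_{i=0}^{d_n-1} \frac{x - i/n}{1 - i/n}$ (for $x \ge \frac{d_n-1}{n}$) that underlies (5.1).

```python
n, d = 1000, 5  # illustrative values with d << n

def beta(x):
    # assumed product form of beta_n (sampling d of n servers without replacement)
    p = 1.0
    for i in range(d):
        p *= (x - i / n) / (1 - i / n)
    return p

def dbeta(x):
    # the sum formula in (5.1)
    total = 0.0
    for j in range(d):
        term = 1.0 / (1 - j / n)
        for i in range(d):
            if i != j:
                term *= (x - i / n) / (1 - i / n)
        total += term
    return total

x, h = 0.7, 1e-6
numerical = (beta(x + h) - beta(x - h)) / (2 * h)  # central difference
assert abs(numerical - dbeta(x)) < 1e-6
```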
+
+Note that $f(t) = \frac{a+t}{b+t}$ is an increasing function of $t$ on $(-b, \infty)$ when $b > a$. Using this fact in (2.7) shows that, when $d_n \le n$,
+
+$$0 \le \beta_n(x) \le x^{d_n} \doteq \gamma_n(x), x \in [0, 1]. \quad (5.2)$$
+
+Using the same fact in (5.1) shows that, for $d_n < n$,
+
+$$0 \le \beta'_n(x) \le \frac{d_n x^{d_n-1}}{1 - \frac{d_n}{n}}, x \in (0, 1). \quad (5.3)$$
+
+The following lemma estimates the ratio between $\beta_n$ and $\gamma_n$ and its derivatives.
+
+**Lemma 5.1.** Assume $\frac{d_n}{n} \to 0$. Then for any $\epsilon \in (0, 1)$, as $n \to \infty$,
+
+$$\sup_{x \in [\epsilon, 1]} \left| \frac{\beta'_{n}(x)/\beta_{n}(x)}{\gamma'_{n}(x)/\gamma_{n}(x)} - 1 \right| \to 0. \quad (5.4)$$
+
+Furthermore, if $\frac{d_n}{\sqrt{n}} \to 0$, then
+
+$$\sup_{x \in [\epsilon, 1]} \left| \frac{\beta_n(x)}{\gamma_n(x)} - 1 \right| \to 0 \quad \text{and} \quad \sup_{x \in [\epsilon, 1]} \left| \frac{\beta'_n(x)}{\gamma'_n(x)} - 1 \right| \to 0. \quad (5.5)$$
+
+**Corollary 5.2.** Assume $d_n \ll n$. Then for any $\epsilon \in (0, 1)$
+
+$$\sup_{x \in [\epsilon, 1]} |\log \beta_n(x) - \log \gamma_n(x)| = O\left(\frac{d_n^2}{n}\right).$$
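The $O(d_n^2/n)$ rate in Corollary 5.2 can be probed numerically. In the sketch below the values $n = 10^6$, $d = 50$ and $\epsilon = 0.5$ are illustrative, and the product representation of $\beta_n$ underlying (5.1) is again assumed.

```python
n, d = 10**6, 50  # illustrative: d^2/n = 0.0025

def beta(x):
    # assumed product form of beta_n; zero below (d-1)/n
    if x < (d - 1) / n:
        return 0.0
    p = 1.0
    for i in range(d):
        p *= (x - i / n) / (1 - i / n)
    return p

def gamma(x):
    return x ** d  # gamma_n(x) = x^{d_n}, cf. (5.2)

eps = 0.5
grid = [eps + j * (1 - eps) / 100 for j in range(101)]
worst = max(abs(beta(x) / gamma(x) - 1) for x in grid)
assert worst < 10 * d * d / n  # consistent with the O(d_n^2/n) rate
```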
+
+Recall the near fixed points $\mu_n = (\mu_{n,i})_{i \ge 1}$ introduced in Definition 2.
+
+**Corollary 5.3.** Suppose that $d_n \ll n$. Let $i \in \mathbb{N}$ be such that $\liminf_n \mu_{n,i} > 0$. Then
+
+$$\lim_{n \to \infty} \frac{\lambda_n \mu_{n,i} \beta'_n(\mu_{n,i})}{d_n \mu_{n,i+1}} = 1.$$
+
+**Lemma 5.4.** Assume $d_n \ll n$ and fix $\epsilon \in (0, 1)$. Then there is a $C \in (0, \infty)$ and $n_0 \in \mathbb{N}$ such that, if for some $k \in \mathbb{N}$ and $n_1 \in \mathbb{N}$, $\mu_{n,k} \ge \epsilon$ for all $n \ge n_1$, then for all $n \ge n_1 \vee n_0$
+
+$$\left| \log \mu_{n,k+1} - (\log \lambda_n) \left( \sum_{i=0}^{k} d_n^i \right) \right| \le \frac{C}{n} \sum_{i=1}^{k} d_n^{i+1}.$$
+
+**Corollary 5.5.** Suppose that $d_n \to \infty$ and that for some $k \in \mathbb{N}$, $d_n^{k+1} \ll n$. Suppose also that $1 - \lambda_n = \frac{\xi_n + \log d_n}{d_n^k}$ where $\xi_n \to -\log(\alpha) \in (-\infty, \infty]$ and $\frac{\xi_n^2}{d_n} \to 0$. Then $\mu_{n,k} \to 1$ and $\beta'_n(\mu_{n,k}) \to \alpha$.
+
+**Lemma 5.6.** Suppose that $\lambda_n \nearrow 1$, $d_n \to \infty$ and $d_n \ll n$. Suppose also that, for some $k \ge 2$, $\mu_{n,k} \to 1$ and $\beta'_n(\mu_{n,k}) \to \alpha \in [0, \infty)$ as $n \to \infty$. Then $\beta'_n(\mu_{n,1}) \to \infty$ and for any $i \in [k-1]$
+
+$$\frac{\beta'_{n}(\mu_{n,i})}{\beta'_{n}(\mu_{n,1})} \to 1.$$
+
+The following result is along the lines of Lemma 5.1. It allows for weaker assumptions on $d_n$ but gives an approximation only in a neighborhood of 1.
+
+**Lemma 5.7.** Suppose that $\frac{d_n}{n^{2/3}} \to 0$, as $n \to \infty$. Let $\{\epsilon_n\}$ be a sequence in $[0, 1]$ such that $d_n\epsilon_n^2 \to 0$. Then as $n \to \infty$:
+
+$$\sup_{x \in [1-\epsilon_n, 1]} \left| \frac{\beta_n(x)}{\gamma_n(x)} - 1 \right| \to 0,
+\quad (5.6)$$
+
+and
+
+$$
+\sup_{x \in [1-\epsilon_n, 1]} \left| \frac{\beta'_n(x)}{\gamma'_n(x)} - 1 \right| \to 0. \quad (5.7)
+$$
+
+The next result shows that if $d_n \to \infty$, then the behavior of $\beta_n(x)$ is interesting only when $x$ is sufficiently close to 1.
+
+**Lemma 5.8.** Suppose that $d_n \to \infty$, and let $\epsilon_n \doteq \frac{2\log d_n}{d_n}$. Then, as $n \to \infty$, $\sup_{x \in [0,1-\epsilon_n]} |\beta_n(x)| \to 0$. Furthermore, if $\limsup_n \frac{d_n}{n} < 1$, then we also have $\sup_{x \in [0,1-\epsilon_n]} |\beta'_n(x)| \to 0$.
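The first claim of Lemma 5.8 is visible numerically through the bound $\beta_n(x) \le x^{d_n} \le e^{-d_n\epsilon_n} = d_n^{-2}$ on $[0, 1-\epsilon_n]$. The sketch below uses illustrative values and again assumes the product form of $\beta_n$ from (5.1).

```python
import math

n, d = 10**7, 1000  # illustrative values with d << n

def beta(x):
    # assumed product form of beta_n; zero below (d-1)/n
    if x < (d - 1) / n:
        return 0.0
    p = 1.0
    for i in range(d):
        p *= (x - i / n) / (1 - i / n)
    return p

eps = 2 * math.log(d) / d  # the threshold of Lemma 5.8
grid = [j * (1 - eps) / 200 for j in range(201)]
sup_beta = max(beta(x) for x in grid)
# beta_n <= gamma_n on [0,1], and (1-eps)^d <= exp(-d*eps) = d^{-2}
assert sup_beta <= d ** -2
```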
+
+## 6. PRELIMINARY ESTIMATES UNDER DIFFUSION SCALING
+
+Recall the near fixed point $\boldsymbol{\mu}_n$ from Definition 2 and the process $Z_n$ introduced in (1.2). Also,
+recall the maps $\boldsymbol{a}_n$ and $\boldsymbol{b}$ from Remark 3.1. We will extend the definition of $\beta_n$ and $\beta_n'$ to $\mathbb{R}$ by setting
+$\beta_n(x) = \beta_n'(x) = 0$ for $x < 0$. Further, in what follows, for $z < 0$ and real valued integrable function
+$h(\cdot)$, the integral $\int_{[0,z]} h(u)du = -\int_{[z,0]} h(u)du$. We start by giving a semimartingale decomposition
+for $Z_n$.
+
+**Lemma 6.1.** For $t \ge 0$, $Z_n(t)$ satisfies
+
+$$
+\mathbf{Z}_n(t) = \mathbf{Z}_n(0) + \int_0^t \mathbf{A}_n(\mathbf{Z}_n(s))ds - \int_0^t \mathbf{b}(\mathbf{Z}_n(s))ds + \sqrt{n}\mathbf{M}_n(t) \quad (6.1)
+$$
+
+where $\mathbf{A}_n : \ell_\infty \rightarrow \ell_\infty$ is given as
+
+$$
+\mathbf{A}_n(z)_i = t_{n,i-1}(z_{i-1}) - t_{n,i}(z_i), \quad i \in \mathbb{N} \tag{6.2}
+$$
+
+and, for $i \in \mathbb{N}_0$,
+
+$$
+\begin{align*}
+t_{n,i}(z) &\doteq \lambda_n \int_{[0,z]} \beta'_n (\mu_{n,i} + y/\sqrt{n}) dy, && z \in \mathbb{R}, \ i \ge 1, \\
+t_{n,0}(z) &\doteq 0, && z \in \mathbb{R}. \tag{6.3}
+\end{align*}
+$$
+
+*Proof.* From (3.7) and since $\mathbf{a}_n(\boldsymbol{\mu}_n) = \mathbf{b}(\boldsymbol{\mu}_n)$,
+
+$$
+\begin{equation}
+\begin{split}
+\sqrt{n}(\mathbf{G}_n(t) - \boldsymbol{\mu}_n) = {}& \sqrt{n}(\mathbf{G}_n(0) - \boldsymbol{\mu}_n) + \int_0^t \sqrt{n}\{\boldsymbol{a}_n(\mathbf{G}_n(s)) - \boldsymbol{a}_n(\boldsymbol{\mu}_n)\}ds \\
+& - \int_0^t \sqrt{n}\{\boldsymbol{b}(\mathbf{G}_n(s)) - \boldsymbol{b}(\boldsymbol{\mu}_n)\}ds + \sqrt{n}\mathbf{M}_n(t)
+\end{split}
+\end{equation}
+$$
+
+Let $\mathbf{A}_n(z) \doteq \sqrt{n}\{\mathbf{a}_n(\boldsymbol{\mu}_n + n^{-1/2}z) - \mathbf{a}_n(\boldsymbol{\mu}_n)\}$. By the definition of $\mathbf{a}_n$ we see that (6.2) holds where
+
+$$
+t_{n,i}(z) \doteq \begin{cases} \lambda_n \sqrt{n} \{\beta_n(\mu_{n,i} + z/\sqrt{n}) - \beta_n(\mu_{n,i})\} & \text{for } i \ge 1 \\ 0 & \text{if } i = 0 \end{cases} \tag{6.4}
+$$
+
+Clearly, the $t_{n,i}$ defined in (6.4) is the same as that given in (6.3). The result follows. ■
+
+**Lemma 6.2.** Suppose that $d_n \to \infty$, $\lambda_n \to 1$, and for some $k \ge 1$, $G_n(0) \xrightarrow{P} f_k$ in $\ell_1$. Then there is a standard Brownian motion $B$ so that $\sqrt{n}\mathbf{M}_n \Rightarrow \sqrt{2}B\mathbf{e}_k$ in $D([0, \infty) : \ell_2)$.
+
+*Proof.* Fix $T > 0$. Since $G_n(0) \to f_k$ and $f_k$ is a fixed point of (2.2), by Theorem 2.1, $G_n \xrightarrow{P} f_k$ in $D([0,T] : \ell_1)$, where $f_k$ here is viewed as the function on $[0,T]$ that takes the constant value $f_k \in \ell_1$. Moreover, by Remark 4.8, for every $i \ge 1$, $V_{n,i}(t) = \lambda_n \int_0^t \beta_n(G_{n,i}(s))ds$ converges uniformly on $[0,T]$ in probability to $v_i(t)$, where $v_i$ solves
+
+$$
+v_i = \hat{\Gamma}_1(f_{k,i} - (f_{k,i} - f_{k,i+1})\mathrm{id} + v_{i-1}(\cdot)), i \ge 1, \quad (6.5)
+$$
+
+and $v_0(t) \doteq t$, where recall that $\text{id} : [0, T] \to [0, T]$ is the identity map. Recalling the definition of $f_k$ we see by a recursive argument that
+
+$$ v_i(t) \doteq \begin{cases} t & \text{if } i < k \\ 0 & \text{if } i \ge k. \end{cases} \tag{6.6} $$
+
+Combining this with (3.6), we have for each $i \ge 1$
+
+$$ \begin{aligned} \langle \sqrt{n} M_{n,i} \rangle_t &= \int_0^t (G_{n,i}(s) - G_{n,i+1}(s)) ds + \lambda_n \int_0^t (\beta_n(G_{n,i-1}(s)) - \beta_n(G_{n,i}(s))) ds \\ &\to (f_{k,i} - f_{k,i+1}) \text{id} + v_{i-1}(\cdot) - v_i(\cdot) = H(\cdot), \end{aligned} $$
+
+in probability in $C([0,T]: \mathbb{R})$ where
+
+$$ H(t) \doteq \begin{cases} 2t & \text{if } i=k \\ 0 & \text{if } i \neq k, \end{cases} \qquad t \in [0,T]. $$
+
+Adding (3.6) over $i$, we have for $t \in [0, T]$,
+
+$$ \sum_{i>k} \langle \sqrt{n} M_{n,i} \rangle_t \le \int_0^t G_{n,k+1}(s) ds + \lambda_n \int_0^t \beta_n(G_{n,k}(s)) ds. \tag{6.7} $$
+
+The process on the right side converges in probability in $C([0,T]: \mathbb{R})$ to $f_{k,k+1}\text{id} + v_k(\cdot) = 0$ and thus $\sum_{i>k} \langle\sqrt{n}M_{n,i}\rangle_T$ converges to 0 in probability. By Doob's maximal inequality,
+
+$$ n E \sup_{t \le T} \sum_{i>k} M_{n,i}^2(t) \le 4 E \sum_{i>k} \langle \sqrt{n} M_{n,i} \rangle_T \to 0, \text{ as } n \to \infty, $$
+
+where the last convergence follows by the dominated convergence theorem on noting that the right side of (6.7) is bounded above by $\sup_n(1+\lambda_n) < \infty$. The result now follows on using the martingale central limit theorem (cf. [9, Theorem 7.1.4]) for the $k$-dimensional martingale sequence $(\sqrt{n}M_{n,1}, \dots, \sqrt{n}M_{n,k})$. $\blacksquare$
+
+Recall the functions $t_{n,i}$ from Lemma 6.1.
+
+**Lemma 6.3.** Assume that for some $r \in \mathbb{N}$, $\limsup_{n\to\infty} \mu_{n,r} < 1$. Then for any $L > 0$
+
+$$ \limsup_{n\to\infty} \sup_{i \ge r} \sup_{0<|z|\le L} \left|\frac{t_{n,i}(z)}{z}\right| = 0. $$
+
+*Proof.* By (6.3):
+
+$$ \begin{aligned} \sup_{i \ge r} \sup_{0 < |z| \le L} \left| \frac{t_{n,i}(z)}{z} \right| &\le \lambda_n \sup_{i \ge r} \sup_{0 < |z| \le L} \sup_{|y| \le |z|} \left| \beta'_n \left( \mu_{n,i} + \frac{y}{\sqrt{n}} \right) \right| \\ &= \lambda_n \sup_{i \ge r} \sup_{|z| \le L} \left| \beta'_n \left( \mu_{n,i} + \frac{z}{\sqrt{n}} \right) \right| \le \lambda_n \sup_{0 \le x \le \mu_{n,r} + \frac{L}{\sqrt{n}}} \beta'_n(x) \end{aligned} $$
+
+which converges to 0 by Lemma 5.8, since $\limsup_{n\to\infty} (\mu_{n,r} + L/\sqrt{n}) < 1$. $\blacksquare$
+
+For $L \in (0, \infty)$ define the stopping time
+
+$$ \tau_{n,L} \doteq \inf \left\{ t \ge 0 : \|\mathbf{Z}_n(t)\|_2 \ge L - \frac{1}{\sqrt{n}} \right\}. \quad (6.8) $$
+
+Since the jumps of $\mathbf{Z}_n$ are of size $1/\sqrt{n}$, we see that, for any $T > 0$,
+
+$$ \|\mathbf{Z}_n\|_{2,T \wedge \tau_{n,L}} \le L. \quad (6.9) $$
+
+Recall from Section 1.2 the vector $z_{r+} \in \mathbb{R}^\infty$ associated with a vector $z \in \mathbb{R}^\infty$.
+
+**Lemma 6.4.** Suppose that as $n \to \infty$, $G_n(0) \xrightarrow{P} f_k$ in $\ell_1^\downarrow$ and $Z_{n,r+}(0) \xrightarrow{P} 0$ in $\ell_2$ for some $r > k$. Then for any $T, L > 0$, $\|Z_{n,r+}\|_{2,T \wedge \tau_{n,L}} \xrightarrow{P} 0$.
+
+*Proof.* For $i > k$ and $z \in \mathbb{R}$, let $\Delta_{n,i}(z) \doteq \frac{t_{n,i}(z)}{z} \mathbb{I}_{\{z \neq 0\}}$. Then, since $\lim_{n \to \infty} \mu_{n,k+1} = 0$, by Lemma 6.3
+
+$$
+\delta_{n,L} \doteq \sup_{i \ge k+1} \sup_{|z| \le L} |\Delta_{n,i}(z)| \to 0, \text{ as } n \to \infty. \quad (6.10)
+$$
+
+Next, from (6.1), for $i \ge r + 1 > k + 1$
+
+$$
+\begin{align*}
+Z_{n,i}(t \wedge \tau_n) &= Z_{n,i}(0) + \int_0^{t \wedge \tau_n} \Delta_{n,i-1}(Z_{n,i-1}(s)) Z_{n,i-1}(s) ds - \int_0^{t \wedge \tau_n} \Delta_{n,i}(Z_{n,i}(s)) Z_{n,i}(s) ds \\
+&\quad - \int_0^{t \wedge \tau_n} (Z_{n,i}(s) - Z_{n,i+1}(s)) ds + \sqrt{n} M_{n,i}(t \wedge \tau_n)
+\end{align*}
+$$
+
+where we use $\tau_n$ instead of $\tau_{n,L}$ for notational simplicity. Then, observing from (6.10) that
+$\sup_{i \ge k+1} \sup_{t \in [0, \tau_n]} |\Delta_{n,i}(Z_{n,i}(t))| \le \delta_{n,L}$, we have
+
+$$
+\begin{equation}
+\begin{split}
+|Z_{n,i}(t \wedge \tau_n)| &\le |Z_{n,i}(0)| + \delta_{n,L} \int_0^{t \wedge \tau_n} (|Z_{n,i-1}(s)| + |Z_{n,i}(s)|)ds \\
+&\quad + \int_0^{t \wedge \tau_n} (|Z_{n,i}(s)| + |Z_{n,i+1}(s)|)ds + |\sqrt{n}M_{n,i}(t \wedge \tau_n)|.
+\end{split}
+\tag{6.11}
+\end{equation}
+$$
+
+Define maps $\mathbf{A}_1, \mathbf{A}_2 : \mathbb{R}^\infty \to \mathbb{R}^\infty$ by
+
+$$
+(A_1 x)_i = \begin{cases} x_1 & i=1 \\ x_{i-1} + x_i & i \ge 2 \end{cases}
+$$
+
+$$
+(A_2 x)_i = x_i + x_{i+1}, i \in \mathbb{N}.
+$$
+
+Then by collecting (6.11) over all $i \ge r + 1$ we get
+
+$$
+\begin{equation}
+\begin{aligned}
+|Z_{n,r+}(t \wedge \tau_n)| &\le |Z_{n,r+}(0)| + \delta_{n,L} \int_0^{t \wedge \tau_n} A_1 |Z_{n,r+}(s)| ds + \delta_{n,L} \int_0^{t \wedge \tau_n} |Z_{n,r}(s)| e_1 ds \\
+&\quad + \int_0^{t \wedge \tau_n} A_2 |Z_{n,r+}(s)| ds + |\sqrt{n} M_{n,r+}(t \wedge \tau_n)|
+\end{aligned}
+\tag{6.12}
+\end{equation}
+$$
+
+where the absolute values and the integrals are interpreted as being coordinate-wise for infinite
+dimensional vectors. Now noting that the maps $A_i$, when considered from $\ell_2 \to \ell_2$, are bounded
+linear operators with norm bounded by 2, we have for $i = 1, 2,$
+
+$$
+\left\| \int_0^{t \wedge \tau_n} A_i |Z_{n,r+}(s)| ds \right\|_2 \leq \int_0^{t \wedge \tau_n} 2 \|Z_{n,r+}(s)\|_2 ds.
+$$
+
+Using the triangle inequality in (6.12) shows for any $t \le T$
+
+$$
+\begin{align*}
+\|\mathbf{Z}_{n,r+}(t \wedge \tau_n)\|_2 &\leq \|\mathbf{Z}_{n,r+}(0)\|_2 + \|(\sqrt{n}\mathbf{M}_{n,r+})\|_{2,T} + \delta_{n,L}LT \\
+&\quad + 2(1+\delta_{n,L})\int_0^{t \wedge \tau_n} \|\mathbf{Z}_{n,r+}(s)\|_2 ds
+\end{align*}
+$$
+
+where we have used that $\int_0^{t \wedge \tau_n} |Z_{n,r}(s)|ds \le Lt$. Hence, using Gronwall's inequality,
+
+$$
+\|\mathbf{Z}_{n,r+}\|_{2,T\wedge\tau_n} \le (\|\mathbf{Z}_{n,r+}(0)\|_2 + \delta_{n,L}LT + \|(\sqrt{n}\mathbf{M}_{n,r+})\|_{2,T})e^{2(1+\delta_{n,L})T}
+$$
+
+Now, as $n \to \infty$, $\|\mathbf{Z}_{n,r+}(0)\|_2 \xrightarrow{P} 0$ by assumption, $\delta_{n,L} \to 0$ by (6.10), and $\|\sqrt{n}\mathbf{M}_{n,r+}\|_{2,T} \xrightarrow{P} 0$
+by Lemma 6.2. The result follows. ■
+
+The following elementary lemma will allow us to replace $\tau_{n,L} \wedge T$ with $T$ in various convergence results. The proof is omitted.
+
+**Lemma 6.5.** Fix $T \in [0, \infty)$. Suppose for each $n \in \mathbb{N}$ and $L > 0$ that $\tau_{n,L}$ is a $[0, T]$ valued random variable such that $\lim_{L \to \infty} \sup_n P(\tau_{n,L} < T) = 0$. Suppose that there is a sequence of stochastic processes $\{F_n\}_{n \in \mathbb{N}}$ with sample paths in $D([0, T] : \mathbb{R})$ such that for each $L > 0$, $|F_n|_{*,T \wedge \tau_{n,L}} \xrightarrow{P} 0$ as $n \to \infty$. Then in fact $|F_n|_{*,T} \xrightarrow{P} 0$ as $n \to \infty$.
+
+The next lemma gives conditions under which the near fixed point $\boldsymbol{\mu}_n$ converges to $\boldsymbol{f}_1$.
+
+**Lemma 6.6.** Let $0 \le \epsilon_n \doteq 1 - \lambda_n$ be such that $\epsilon_n \to 0$ and $\epsilon_n d_n \to \infty$. Then $\boldsymbol{\mu}_n \to \boldsymbol{f}_1$ in $\ell_1$ as $n \to \infty$.
+
+*Proof.* Since $d_n \to \infty$ under our assumptions, in order to show $\boldsymbol{\mu}_n \to \boldsymbol{f}_1$ in $\ell_1$ it suffices to show that (1) $\mu_{n,1} \to 1$, and (2) $\mu_{n,2} \to 0$. The convergence in (1) is immediate on observing that $\mu_{n,1} = \lambda_n = 1 - \epsilon_n \to 1$, and (2) follows by noting from Definition 2 and (5.2) that $\mu_{n,2} \le \mu_{n,1}^{d_n} = (1 - \epsilon_n)^{d_n} \le e^{-\epsilon_n d_n} \to 0$. ■
+
+The following lemma gives a convenient approximation of the term $t_{n,1}$ introduced in (6.3) in terms of certain exponentials.
+
+**Lemma 6.7.** Suppose $d_n \to \infty$ and $d_n \ll n^{2/3}$. Let $\lambda_n = 1 - (\log d_n / d_n + \alpha_n / \sqrt{n})$ for some real sequence $\{\alpha_n\}$ satisfying $\frac{d_n \alpha_n^2}{n} \to 0$. Then, for any $L > 0$,
+
+$$
+\limsup_{n \to \infty} \sup_{0 < |z| \le L} \left| \frac{\exp\left(\frac{d_n}{\sqrt{n}}(z-\alpha_n)\right) - \exp\left(-\frac{d_n}{\sqrt{n}}\alpha_n\right)}{t_{n,1}(z)d_n/\sqrt{n}} - 1 \right| = 0. \quad (6.13)
+$$
+
+*Proof.* We only consider the case $0 < z \le L$. The case $-L < z < 0$ is treated similarly. Recall that $\mu_{n,1} = \lambda_n$. Noting that $d_n(1 - \lambda_n + \frac{L}{\sqrt{n}})^2 \le 4d_n(\frac{\log^2 d_n}{d_n^2} + \alpha_n^2/n + L^2/n) \to 0$ we have on applying Lemma 5.7 with $\epsilon_n = (1 - \lambda_n + \frac{L}{\sqrt{n}})$ that, for any $|z| \le L$,
+
+$$
+\begin{align*}
+t_{n,1}(z) &= (1+o(1)) \int_0^z \gamma'_n \left( \lambda_n + \frac{y}{\sqrt{n}} \right) dy \\
+&= (1+o(1)) \int_0^z \exp\left( (d_n-1)\log\left\{ \lambda_n + \frac{y}{\sqrt{n}} \right\} + \log d_n \right) dy \\
+&= (1+o(1)) \int_0^z \exp\left( d_n \log\left\{ \lambda_n + \frac{y}{\sqrt{n}} \right\} + \log d_n \right) dy
+\end{align*}
+$$
+
+Using the Taylor expansion of $\log(1+h)$ around $h=0$ and once more the fact that $d_n\left(1-\lambda_n+\frac{L}{\sqrt{n}}\right)^2 \to 0$,
+
+$$
+\begin{align*}
+t_{n,1}(z) &= (1+o(1)) \int_0^z \exp\left(d_n\left\{\lambda_n - 1 + \frac{y}{\sqrt{n}}\right\} + \log d_n\right) dy \\
+&= (1+o(1)) \int_0^z \exp\left(\frac{d_n}{\sqrt{n}}(y-\alpha_n)\right) dy \\
+&= (1+o(1)) \frac{\exp\left(\frac{d_n}{\sqrt{n}}(z-\alpha_n)\right) - \exp\left(-\frac{d_n}{\sqrt{n}}\alpha_n\right)}{d_n/\sqrt{n}}
+\end{align*}
+$$
+
+which proves (6.13). ■
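The exponential approximation in Lemma 6.7 is easy to probe numerically. The sketch below is illustrative only: the values $n = 10^{10}$, $d_n = 5000$, $\alpha_n \equiv 1$ are chosen so that $d_n \ll n^{2/3}$ and $d_n\alpha_n^2/n$ is small; it assumes the product form of $\beta_n$ from (5.1), computes $t_{n,1}$ from the increment form (6.4) with $\mu_{n,1} = \lambda_n$, and compares it with the exponential expression in (6.13).

```python
import math

n, d, alpha = 10**10, 5000, 1.0  # illustrative; d << n^(2/3)
rn = math.sqrt(n)
lam = 1 - (math.log(d) / d + alpha / rn)  # lambda_n as in Lemma 6.7

def beta(x):
    # assumed product form of beta_n
    p = 1.0
    for i in range(d):
        p *= (x - i / n) / (1 - i / n)
    return p

def t_n1(z):
    # the increment form (6.4) of t_{n,1}, using mu_{n,1} = lambda_n
    return lam * rn * (beta(lam + z / rn) - beta(lam))

def approx(z):
    # the exponential expression appearing in (6.13)
    c = d / rn
    return (math.exp(c * (z - alpha)) - math.exp(-c * alpha)) / c

for z in (0.5, 1.0, 3.0):
    assert abs(t_n1(z) / approx(z) - 1) < 0.05
```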
+
+The proof of the following lemma proceeds by standard arguments, but we provide the details in Appendix B.
+
+**Lemma 6.8.** Fix $T > 0$. Let $f,g,M$ be three bounded measurable functions from $[0,T] \to \mathbb{R}$ and assume further that $M$ is a right continuous bounded variation function. Suppose that $m \doteq \inf_{s \in [0,T \wedge \tau]} f(s) > 0$ for some $\tau \ge 0$. Let $z : [0,T] \to \mathbb{R}$ be a bounded measurable function that satisfies for every $t \in [0,T]$
+
+$$z(t) = z(0) - \int_0^t f(s)z(s)ds + \int_0^t g(s)ds + M(t). \quad (6.14)$$
+
+Then for any $t \in [0, T \wedge \tau]$
+
+$$|z(t)| \leq \frac{|g|_{*,T \wedge \tau}}{m} + 2|M|_{*,T \wedge \tau} + e^{-mt}(|z(0)| + |M(0)|).$$
+
+**Lemma 6.9.** Fix $T \in (0, \infty)$. For each $n$, let $V_n$ be a martingale with respect to some filtration $\{G^n_t\}$ such that $V_n(0) = 0$. Let $(r_n)_{n=1}^\infty$ be a positive sequence so that $\lim_{n\to\infty} r_n = +\infty$. Suppose that there is a $C \in (0, \infty)$ such that for all $n \in \mathbb{N}$ and $t \in [0, T]$, $\langle V_n \rangle_t \le Ct$. Then for any $\epsilon > 0$
+
+$$P\left(\sup_{t \le T} (V_n(t) - r_n t) > \epsilon\right) \to 0$$
+
+as $n \to \infty$.
+
+*Proof.* Let $\delta_n \doteq \frac{1}{\sqrt{r_n}}$. Then
+
+$$
+\begin{align*}
+P\left(\sup_{0\le t\le T} [V_n(t) - r_n t] > \epsilon\right)
+&\le P\left(\sup_{0\le t\le \delta_n} |V_n(t)| > \epsilon\right) + P\left(\sup_{\delta_n < t \le T} |V_n(t)| > r_n \delta_n\right) \\
+&\le \frac{4E V_n(\delta_n)^2}{\epsilon^2} + \frac{4E V_n(T)^2}{(r_n \delta_n)^2} \\
+&= \frac{4E \langle V_n \rangle_{\delta_n}}{\epsilon^2} + \frac{4E \langle V_n \rangle_T}{(r_n \delta_n)^2} \le \frac{4C\delta_n}{\epsilon^2} + \frac{4CT}{(r_n \delta_n)^2} \to 0
+\end{align*}
+$$
+
+where the inequality on the second line is from Doob's maximal inequality. ■
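A small Monte Carlo simulation illustrates Lemma 6.9: for a martingale with $\langle V \rangle_t \le Ct$, the drift $-r_n t$ makes the exceedance probability vanish as $r_n$ grows. The sketch below uses a simple scaled random-walk martingale; all parameter values are illustrative.

```python
import random

random.seed(0)

def exceed_prob(r, n_steps=1000, trials=1000, eps=0.5):
    # V has steps +-1/sqrt(n_steps), so <V>_t = t on [0, 1] (C = 1)
    step = n_steps ** -0.5
    hits = 0
    for _ in range(trials):
        v = 0.0
        for i in range(1, n_steps + 1):
            v += step if random.random() < 0.5 else -step
            if v - r * i / n_steps > eps:  # sup_t (V(t) - r t) > eps
                hits += 1
                break
    return hits / trials

# with no drift the sup exceeds eps often; with a large drift, essentially never
assert exceed_prob(0.0) > 0.3
assert exceed_prob(50.0) < 0.05
```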
+
+## 7. PROOF OF THEOREM 2.2
+
+We start with some preliminary lemmas. Recall from Remark 2.4(ii) that under the hypothesis of Theorem 2.2 we have $\boldsymbol{\mu}_n \to \boldsymbol{f}_k \in \ell_1^\perp$ as $n \to \infty$. Along with the tightness of $\{\|\boldsymbol{Z}_n(0)\|_1\}_{n \in \mathbb{N}}$, this also shows that $\boldsymbol{G}_n(0) \xrightarrow{P} \boldsymbol{f}_k \in \ell_1^\perp$ as $n \to \infty$.
+
+**Lemma 7.1.** Let $d_n \to \infty$, $\frac{d_n}{\sqrt{n}} \to 0$, and $\lambda_n \nearrow 1$. Assume that for some $k \in \mathbb{N}$, $\boldsymbol{\mu}_n \to \boldsymbol{f}_k$ in $\mathbb{R}^\infty$. Then for any $M > 0$ and $1 \le i \le k$, as $n \to \infty$
+
+$$ \sup_{0<|z| \le M} \left| \frac{t_{n,i}(z)}{\beta'_n(\mu_{n,i})z} - 1 \right| \to 0. \quad (7.1) $$
+
+**Remark 7.2.** Recall the functions $\Delta_{n,i}(z) \doteq \frac{t_{n,i}(z)}{z} \mathbb{I}_{\{z \neq 0\}}$, $i > k$, from the proof of Lemma 6.4. We extend this definition by setting
+
+$$
+\Delta_{n,i}(z) = t_{n,i}(z) / (\beta'_{n}(\mu_{n,i})z) I_{\{z \neq 0\}} - 1 \quad \text{if } 1 \le i \le k \quad (7.2)
+$$
+
+where $t_{n,i}$ is defined by (6.3). With this extension
+
+$$
+t_{n,i}(z) = \begin{cases} \beta'_{n}(\mu_{n,i})(1 + \Delta_{n,i}(z))z & \text{if } 1 \le i \le k \\ \Delta_{n,i}(z)z & \text{if } i > k \end{cases}. \quad (7.3)
+$$
+
+Using this notation, Lemma 7.1 and Lemma 6.3 show that, for any $L > 0$
+
+$$
+\gamma_{n,L} \doteq \sup_{i \in \mathbb{N}} \sup_{0 < |z| \le L} |\Delta_{n,i}(z)| \to 0 \text{ as } n \to \infty. \quad (7.4)
+$$
+
+The following corollary is an immediate consequence of Remark 7.2 and Lemma 6.1.
+
+**Corollary 7.3.** Under the hypothesis of Lemma 7.1, $Z_n$ satisfies the following integral equations.
+
+For $i=1$
+
+$$
+Z_{n,1}(t) = Z_{n,1}(0) - \int_0^t \beta_n'(\mu_{n,1})(1 + \Delta_{n,1}(Z_{n,1}(s)))Z_{n,1}(s)ds - \int_0^t (Z_{n,1}(s) - Z_{n,2}(s))ds + \sqrt{n}M_{n,1}(t)
+$$
+
+For $i \in \{2, \dots, k\}$
+
+$$
+\begin{align*}
+Z_{n,i}(t) = Z_{n,i}(0) &+ \int_0^t \beta_n'(\mu_{n,i-1})(1 + \Delta_{n,i-1}(Z_{n,i-1}(s)))Z_{n,i-1}(s)ds \\
+& - \int_0^t \beta_n'(\mu_{n,i})(1 + \Delta_{n,i}(Z_{n,i}(s)))Z_{n,i}(s)ds - \int_0^t (Z_{n,i}(s) - Z_{n,i+1}(s))ds + \sqrt{n}M_{n,i}(t).
+\end{align*}
+$$
+
+For $i = k+1$
+
+$$
+\begin{align*}
+Z_{n,k+1}(t) = Z_{n,k+1}(0) &+ \int_0^t \beta_n'(\mu_{n,k})(1 + \Delta_{n,k}(Z_{n,k}(s)))Z_{n,k}(s)ds \\
+& - \int_0^t \Delta_{n,k+1}(Z_{n,k+1}(s))Z_{n,k+1}(s)ds - \int_0^t (Z_{n,k+1}(s) - Z_{n,k+2}(s))ds + \sqrt{n}M_{n,k+1}(t),
+\end{align*}
+$$
+
+For $i > k + 1$
+
+$$
+\begin{align*}
+Z_{n,i}(t) = Z_{n,i}(0) &+ \int_0^t \Delta_{n,i-1}(Z_{n,i-1}(s)) Z_{n,i-1}(s) ds - \int_0^t \Delta_{n,i}(Z_{n,i}(s)) Z_{n,i}(s) ds \\
+& - \int_0^t (Z_{n,i}(s) - Z_{n,i+1}(s)) ds + \sqrt{n} M_{n,i}(t),
+\end{align*}
+$$
+
+where $\Delta_{n,i}$ is as in Remark 7.2.
+
+Finally, if $Y_{n,1} \doteq \sum_{i=1}^k Z_{n,i}$, then
+
+$$
+\begin{align}
+Y_{n,1}(t) = Y_{n,1}(0) & - \int_0^t \beta'_n(\mu_{n,k}) (1 + \Delta_{n,k}(Z_{n,k}(s))) Z_{n,k}(s) ds \nonumber \\
+& - \int_0^t (Z_{n,1}(s) - Z_{n,k+1}(s)) ds + \sum_{i=1}^k \sqrt{n} M_{n,i}(t) \tag{7.5}
+\end{align}
+$$
+
+**Lemma 7.4.** Suppose $\lambda_n \nearrow 1$, $d_n \to \infty$ and $d_n \ll n$. Assume that for some $k \ge 2$, $\mu_{n,k} \to 1$ and $\beta'_n(\mu_{n,k}) \to \alpha \in [0, \infty)$ as $n \to \infty$. Define the $(k-1) \times (k-1)$ tridiagonal matrix $A_n(s)$ as
+
+$$
+\begin{align*}
+A_n(s)[j,j] &= a_{n,j}(s) + 1, && 1 \le j \le k-1, \\
+A_n(s)[j,j+1] &= -1, && 1 \le j \le k-2, \\
+A_n(s)[j,j-1] &= -a_{n,j-1}(s), && 2 \le j \le k-1,
+\end{align*}
+\quad (7.6)
+$$
+
+and $A_n(s)[j,l] = 0$ for all other entries $(j,l)$, where $a_{n,i}(s) \doteq \beta'_n(\mu_{n,i})(1 + \Delta_{n,i}(Z_{n,i}(s)))$. Then for any $T, L \in (0, \infty)$
+
+$$
+\lim_{n \to \infty} \inf_{s \in [0, T \wedge \tau_{n,L}]} \inf_{\vec{x} \in \mathbb{R}^{k-1} \setminus \{0\}} \frac{\vec{x}^t A_n(s) \vec{x}}{\|\vec{x}\|^2} = +\infty.
+$$
+
+*Proof.* Let $b_{n,i}(s) \doteq a_{n,i}(s) + 1$ and let $B_n(s) \doteq A_n(s) + A_n(s)^t$. Then $B_n(s)$ is a symmetric tridiagonal matrix with entries
+
+$$
+\begin{align*}
+B_n(s)[j,j] &= 2b_{n,j}(s), && 1 \le j \le k-1, \\
+B_n(s)[j,j+1] &= -b_{n,j}(s), && 1 \le j \le k-2, \\
+B_n(s)[j,j-1] &= -b_{n,j-1}(s), && 2 \le j \le k-1,
+\end{align*}
+\quad (7.7)
+$$
+
+Let $b_n \doteq \beta'_n(\mu_{n,1})$. By Lemma 5.6, $b_n \to \infty$ and by the uniform convergence in (7.4) and Lemma 5.6 once more
+
+$$
+\max_{i \le k-1} \sup_{s \in [0, T \wedge \tau_{n,L}]} \left| \frac{b_{n,i}(s)}{b_n} - 1 \right| \to 0 \text{ as } n \to \infty.
+$$
+
+This in particular shows that
+
+$$
+\sup_{s \in [0, T \wedge \tau_{n,L}]} \left\| \frac{1}{b_n} B_n(s) - H \right\|_F \to 0,
+\quad (7.8)
+$$
+
+where $\|\cdot\|_F$ is the Frobenius norm and $H$ is the $k-1 \times k-1$ tridiagonal matrix given as
+
+$$
+\begin{align*}
+H[j,j] &= 2, && 1 \le j \le k-1, \\
+H[j,j+1] &= -1, && 1 \le j \le k-2, \\
+H[j,j-1] &= -1, && 2 \le j \le k-1,
+\end{align*}
+$$
+
+Note that for any $\vec{x} = (x_1, x_2, \dots, x_{k-1}) \in \mathbb{R}^{k-1}$, by completing squares,
+
+$$
+\vec{x}^t H \vec{x} = x_1^2 + (x_2 - x_1)^2 + (x_3 - x_2)^2 + \dots + (x_{k-1} - x_{k-2})^2 + x_{k-1}^2,
+$$
+
+which is strictly positive whenever $\vec{x} \neq 0$. This shows that $H$ is a positive definite matrix. Let $c \doteq \inf_{\|\vec{x}\|=1} \vec{x}^t H \vec{x}$. Since the unit sphere is compact, the infimum is attained and hence $c > 0$.
+
+Finally,
+
+$$
+\begin{align*}
+\frac{1}{b_n} \vec{x}^t B_n(s) \vec{x} &= \vec{x}^t H \vec{x} + \vec{x}^t \left(\frac{1}{b_n} B_n(s) - H\right) \vec{x} \\
+&\geq c\|\vec{x}\|^2 - \|b_n^{-1} B_n(s) - H\|_F \|\vec{x}\|^2
+\end{align*}
+$$
+
+By (7.8), there is an $N_0 \in \mathbb{N}$ so that for each $n \ge N_0$ and each $s \in [0, T \wedge \tau_{n,L}]$, $\|b_n^{-1} B_n - H\|_F \le c/2$. Hence for each $\vec{x} \in \mathbb{R}^{k-1}$
+
+$$
+2\vec{x}^t A_n(s)\vec{x} = \vec{x}^t B_n(s)\vec{x} \ge (c/2)b_n \|\vec{x}\|^2.
+$$
+
+Since $b_n \to \infty$, this completes the proof. ■
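The completing-the-square identity for $H$ used in the proof, and the resulting positive definiteness, can be verified numerically; in the sketch below the value $k = 6$ and the random test vectors are purely illustrative.

```python
import random

random.seed(1)
k = 6
m = k - 1

# the (k-1) x (k-1) tridiagonal matrix H: 2 on the diagonal, -1 off it
H = [[0.0] * m for _ in range(m)]
for j in range(m):
    H[j][j] = 2.0
    if j + 1 < m:
        H[j][j + 1] = H[j + 1][j] = -1.0

def quad(x):
    # the quadratic form x^t H x
    return sum(x[i] * H[i][j] * x[j] for i in range(m) for j in range(m))

for _ in range(100):
    x = [random.uniform(-1, 1) for _ in range(m)]
    squares = x[0] ** 2 + sum((x[i + 1] - x[i]) ** 2 for i in range(m - 1)) + x[m - 1] ** 2
    assert abs(quad(x) - squares) < 1e-9  # completing squares
    assert quad(x) > 0                    # positive definiteness
```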
+
+**Lemma 7.5.** Suppose that the hypothesis of Theorem 2.2 holds with $k \ge 2$ and let $\vec{X}_n := (Z_{n,1}, Z_{n,2}, \dots, Z_{n,k-1})$. Then for $L, T, \epsilon \in (0, \infty)$
+
+$$ P\left( \sup_{s \in [0, T \wedge \tau_{n,L}]} \|\vec{X}_n(s)\| > \|\vec{X}_n(0)\| + \epsilon \right) \to 0, \quad (7.9) $$
+
+and
+
+$$ \sup_{s \in [\epsilon, T \wedge \tau_{n,L}]} \|\vec{X}_n(s)\| \xrightarrow{P} 0, \qquad (7.10) $$
+
+as $n \to \infty$.
+
+*Proof.* Let $\vec{W}_n = (\sqrt{n}M_{n,1}, \dots, \sqrt{n}M_{n,k-1})$. Then by Corollary 7.3
+
+$$ \vec{X}_n(t) = \vec{X}_n(0) - \int_0^t A_n(s)\vec{X}_n(s)ds + \vec{e}_{k-1}\int_0^t Z_{n,k}(s)ds + \vec{W}_n(t). \quad (7.11) $$
+
+where $\vec{e}_{k-1}$ is the vector $(0, 0, \dots, 0, 1)' \in \mathbb{R}^{k-1}$ and $A_n(s)$ is the $(k-1) \times (k-1)$ matrix defined in (7.6). Applying Itô's formula to the function $f(\vec{x}) = \|\vec{x}\|^2$ along with the semimartingale representation from (7.11),
+
+$$
+\begin{aligned}
+\|\vec{X}_n(t)\|^2 &= \|\vec{X}_n(0)\|^2 + 2 \int_{0^+}^t \langle \vec{X}_n(s-), d\vec{X}_n(s) \rangle + [\vec{W}_n]_t \\
+&= \|\vec{X}_n(0)\|^2 - 2 \int_0^t \langle \vec{X}_n(s), A_n(s)\vec{X}_n(s) \rangle ds + 2 \int_0^t Z_{n,k}(s) \langle \vec{X}_n(s), \vec{e}_{k-1} \rangle ds \\
+&\quad + 2 \int_0^t \langle \vec{X}_n(s-), d\vec{W}_n(s) \rangle + [\vec{W}_n]_t,
+\end{aligned}
+\quad (7.12)
+$$
+
+where $[\vec{W}_n]_t = \sum_{i=1}^{k-1} [\sqrt{n}M_{n,i}]_t$. Define
+
+$$
+\begin{align*}
+f_n(s) &\doteq \frac{\vec{X}_n(s)^t A_n(s) \vec{X}_n(s)}{\|\vec{X}_n(s)\|^2} \mathbb{I}\{\vec{X}_n(s) \neq 0\} + n\mathbb{I}\{\vec{X}_n(s) = 0\} \\
+g_n(s) &\doteq 2Z_{n,k}(s)Z_{n,k-1}(s) \\
+B_n(t) &\doteq 2 \int_0^t \langle \vec{X}_n(s-), d\vec{W}_n(s) \rangle + [\vec{W}_n]_t
+\end{align*}
+$$
+
+then (7.12) becomes
+
+$$ \|\vec{X}_n(t)\|^2 = \|\vec{X}_n(0)\|^2 - 2 \int_0^t f_n(s) \|\vec{X}_n(s)\|^2 ds + \int_0^t g_n(s) ds + B_n(t). \quad (7.13) $$
+
+By Lemma 7.4
+
+$$ m_n = \inf_{s \in [0, T \wedge \tau_{n,L}]} f_n(s) \to +\infty \text{ as } n \to \infty. \quad (7.14) $$
+
+By Doob's maximal inequality and Itô's isometry, for $i \le k-1$,
+
+$$
+\begin{aligned}
+\mathbf{E} \sup_{t \in [0, T \wedge \tau_{n,L}]} &\left| \int_0^t Z_{n,i}(s-) d(\sqrt{n}M_{n,i})(s) \right|^2 \\
+&\le 4\mathbf{E} \int_0^{T \wedge \tau_{n,L}} Z_{n,i}^2(s-) d[\sqrt{n}M_{n,i}]_s \\
+&\le 4L^2\mathbf{E}[\sqrt{n}M_{n,i}]_T = 4L^2\mathbf{E}\langle\sqrt{n}M_{n,i}\rangle_T.
+\end{aligned}
+$$
+
+where the second inequality is obtained by using $\|\mathbf{Z}_n\|_{2,T\wedge\tau_{n,L}} \le L$. From the proof of Lemma 6.2 we see that for any $i \le k-1$, $\mathbf{E}\langle\sqrt{n}M_{n,i}\rangle_T \to 0$ as $n \to \infty$. Hence from the definition of $B_n$ and $\vec{W}_n$,
+
+$$ |B_n|_{*,T^\wedge\tau_{n,L}} \xrightarrow{P} 0 \text{ as } n \to \infty. \quad (7.15) $$
+
+Applying Lemma 6.8 to (7.13) with $z(t) = \|X_n(t)\|^2$, $f = 2f_n$, $g = g_n$, $M = B_n$, and $\tau = \tau_{n,L}$ shows for any $t \in [0, T \wedge \tau_{n,L}]$
+
+$$
+\|\vec{X}_n(t)\|^2 \leq \frac{|g_n|_{*,T \wedge \tau_{n,L}}}{2m_n} + 2|B_n|_{*,T \wedge \tau_{n,L}} + e^{-2m_n t} \left( \|\vec{X}_n(0)\|^2 + |B_n(0)| \right).
+$$
+
+Taking $t = \epsilon_n \doteq 1/\sqrt{m_n}$ and using (7.14), (7.15), $|g_n|_{*,T \wedge \tau_{n,L}} \le 2L^2$ and $\vec{X}_n(0) \xrightarrow{P} (z_1, \dots, z_{k-1})^t$, we see that
+
+$$
+\sup_{t \in [\epsilon_n, T \wedge \tau_{n,L}]} \left\| \vec{X}_n(t) \right\| \xrightarrow{P} 0.
+$$
+
+Since $\epsilon_n \to 0$, this shows (7.10) for any fixed $\epsilon > 0$. Finally, from (7.13), we see that
+
+$$
+\sup_{t \in [0, \epsilon_n \wedge \tau_{n,L} \wedge T]} \|\vec{X}_n(t)\|^2 \leq \|\vec{X}_n(0)\|^2 + |g_n|_{*,T \wedge \tau_{n,L}} \epsilon_n + |B_n|_{*,T \wedge \tau_{n,L}}.
+$$
+
+The convergence in (7.9) is now immediate on using $\epsilon_n \to 0$, $|g_n|_{*,T \wedge \tau_{n,L}} \le 2L^2$ and (7.15).
+
+**Corollary 7.6.** Under the assumptions of Lemma 7.5, for each $i < k$, $\int_0^{T \wedge \tau_{n,L}} |Z_{n,i}(s)| ds \xrightarrow{P} 0$, as $n \to \infty$.
+
+*Proof.* For any $\epsilon > 0$
+
+$$
+\begin{align*}
+\int_0^{T \wedge \tau_{n,L}} |Z_{n,i}(s)| ds &\leq \int_{[0,\epsilon \wedge \tau_{n,L}]} |Z_{n,i}(s)| ds + \int_{[\epsilon, T \wedge \tau_{n,L}]} |Z_{n,i}(s)| ds \\
+&\leq L\epsilon + \sup_{s \in [\epsilon, T \wedge \tau_{n,L}]} |Z_{n,i}(s)| T.
+\end{align*}
+$$
+
+Now fix $\delta > 0$ and let $\epsilon = \frac{\delta}{2L}$. Then for any $i < k$
+
+$$
+\mathbf{P}\left(\int_0^{T \wedge \tau_{n,L}} |Z_{n,i}(s)| ds > \delta\right) \leq \mathbf{P}\left(\sup_{s \in [\epsilon, T \wedge \tau_{n,L}]} |Z_{n,i}(s)| > \frac{\delta}{2T}\right), \quad (7.16)
+$$
+
+which from (7.10) converges to 0 as $n \to \infty$. Since $\delta > 0$ was arbitrary, this completes the proof.
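The splitting bound used in the proof above is elementary; the following quick numerical sanity check (not part of the argument; the path $Z$ and the constants $L$, $T$, $\epsilon$ are illustrative choices) confirms it on a sample path bounded by $L$:

```python
import numpy as np

# Sanity check of the splitting bound in the proof of Corollary 7.6:
# for |Z| <= L on [0, eps], int_0^T |Z(s)| ds <= L*eps + sup_{[eps, T]} |Z| * T.
# L, T, eps and the path Z below are illustrative choices, not from the paper.
L, T, eps = 2.0, 1.0, 0.25
t = np.linspace(0.0, T, 2001)
Z = L * np.cos(7.0 * t) * np.exp(-t)            # an arbitrary path with |Z| <= L
dt = t[1] - t[0]
integral = np.sum(0.5 * (np.abs(Z[1:]) + np.abs(Z[:-1])) * dt)  # trapezoid rule
tail_sup = np.abs(Z[t >= eps]).max()
assert integral <= L * eps + tail_sup * T
```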
+
+*Proof of Theorem 2.2.* Recall the conditions in the theorem. By Remark 2.4(ii) and the tightness of $\{\|\boldsymbol{Z}_n(0)\|_1\}_{n \in \mathbb{N}}$, the hypothesis of Lemma 6.2 holds. Hence by Skorokhod's embedding theorem, we can assume that $\{(\boldsymbol{Z}_n(0), \boldsymbol{M}_n)\}_{n \in \mathbb{N}}$ and a standard Brownian motion $B$ are defined on a common probability space such that for any $T > 0$
+
+$$
+\sup_{t \le T} \| \sqrt{n} M_n(t) - \sqrt{2}B(t) e_k \|_2 \to 0 \quad (7.17)
+$$
+
+and
+
+$$
+\|\mathbf{Z}_n(0) - z\|_2 \to 0
+\quad (7.18)
+$$
+
+almost surely, as $n \to \infty$. Let $\mathbf{Y}$ and $\mathbf{Y}_n$ be as in the statement of the theorem. Taking $m = r-k+1$, let $\vec{Y}_n = (\sum_{i=1}^k Z_{n,i}, Z_{n,k+1}, \dots, Z_{n,r})$ be the stochastic process with sample paths in $\mathcal{D}([0,T] : \mathbb{R}^m)$ corresponding to the first $m$ coordinates of $\mathbf{Y}_n$. Note $\mathbf{Y}_{n,m+} = \mathbf{Z}_{n,r+}$, $Z_{n,k} = Y_{n,1} - \sum_{i=1}^{k-1} Z_{n,i}$, and
+
+for $k=1$, $Y_{n,1} = Z_{n,1}$. Hence by Corollary 7.3, $\vec{Y}_n$ satisfy
+
+$$
+\begin{align}
+Y_{n,1}(t) &= Y_{n,1}(0) - \int_0^t a_{n,k}(s)Y_{n,1}(s)ds - \mathbb{I}_{\{k=1\}} \int_0^t Y_{n,1}(s)ds + \int_0^t Y_{n,2}(s)ds + \sqrt{n}M_{n,k}(t) \nonumber \\
+&\quad + \sum_{i=1}^{k-1} \int_0^t a_{n,k}(s)Z_{n,i}(s)ds - \mathbb{I}_{\{k>1\}} \int_0^t Z_{n,1}(s)ds + \sum_{i=1}^{k-1} \sqrt{n}M_{n,i}(t), \tag{7.19}
+\end{align}
+$$
+
+$$
+\begin{align}
+Y_{n,2}(t) = Y_{n,2}(0) &+ \int_0^t a_{n,k}(s)Y_{n,1}(s)ds - \int_0^t Y_{n,2}(s)ds + \int_0^t Y_{n,3}(s)ds \nonumber \\
+& - \sum_{i=1}^{k-1} \int_0^t a_{n,k}(s)Z_{n,i}(s)ds - \int_0^t \delta_{n,k+1}(s)Y_{n,2}(s)ds + \sqrt{n}M_{n,k+1}(t), \tag{7.20}
+\end{align}
+$$
+
+and for $i \in \{3, 4, \dots, m\}$
+
+$$
+\begin{align}
+Y_{n,i}(t) = Y_{n,i}(0) &- \int_0^t Y_{n,i}(s)ds + \int_0^t Y_{n,i+1}(s)ds \nonumber \\
+&+ \int_0^t \delta_{n,k+i-2}(s)Y_{n,i-1}(s)ds - \int_0^t \delta_{n,k+i-1}(s)Y_{n,i}(s)ds + \sqrt{n}M_{n,k+i-1}(t). \tag{7.21}
+\end{align}
+$$
+
+where $a_{n,k}(s)$ is as in Lemma 7.4 and $\delta_{n,i}(s) = \Delta_{n,i}(Z_{n,i}(s))$ for $i \in \mathbb{N}$.
+Since $\|\mathbf{Z}_n\|_{2,T\wedge\tau_{n,L}} \le L$, we have by (7.4) that, for any $i \in \mathbb{N}$,
+
+$$ |\delta_{n,i}|_{*,T\wedge\tau_{n,L}} \leq \gamma_{n,L} \to 0 \text{ as } n \to \infty. \quad (7.22) $$
+
+Moreover, since $\beta'_n(\mu_{n,k}) \to \alpha \in [0, \infty)$, this also shows that
+
+$$ \sup_{s \in [0, T \wedge \tau_{n,L}]} |a_{n,k}(s) - \alpha| \to 0 \text{ as } n \to \infty. \quad (7.23) $$
+
+We now show that
+
+$$ \| \mathbf{Y}_n - \mathbf{Y} \|_{2, T \wedge \tau_{n,L}} \xrightarrow{P} 0 \text{ as } n \to \infty. \quad (7.24) $$
+
+To see this, note that, by Remark 2.4(ii), the hypothesis of Lemma 6.4 is satisfied, and hence
+$\|\mathbf{Z}_{n,r+}\|_{2,T\wedge\tau_{n,L}} \xrightarrow{P} 0$. Since $\mathbf{Y}_{n,m+} = \mathbf{Z}_{n,r+}$ and $\mathbf{Y}_{m+} = 0$, this shows that
+
+$$
+\| \mathbf{Y}_{n,m+} - \mathbf{Y}_{m+} \|_{2, T \wedge \tau_{n,L}} \xrightarrow{P} 0. \quad (7.25)
+$$
+
+Thus in order to prove (7.24) it suffices to show that $\sum_{i=1}^m |Y_{n,i} - Y_i|_{*,T \wedge \tau_{n,L}} \xrightarrow{P} 0$ as $n \to \infty$. To show this we consider $U_{n,i} = Y_{n,i} - Y_i$. Subtracting (2.9) from (7.19), (7.20) and (7.21), we see
+
+$$
+\begin{align}
+U_{n,1}(t) &= U_{n,1}(0) - (\alpha + \mathbb{I}_{\{k=1\}}) \int_0^t U_{n,1}(s) ds + \int_0^t U_{n,2}(s) ds + \sqrt{n} M_{n,k}(t) - \sqrt{2}B(t) + W_{n,1}(t) \\
+U_{n,2}(t) &= U_{n,2}(0) + \alpha \int_0^t U_{n,1}(s) ds - \int_0^t U_{n,2}(s) ds + \int_0^t U_{n,3}(s) ds + W_{n,2}(t) \\
+U_{n,i}(t) &= U_{n,i}(0) - \int_0^t U_{n,i}(s) ds + \int_0^t U_{n,i+1}(s) ds + W_{n,i}(t) && \text{for } i \in \{3, 4, \dots, m\}
+\end{align}
+\quad (7.26)
+$$
+
+where
+
+$$
+\begin{align*}
+W_{n,1}(t) & \doteq \int_0^t (\alpha - a_{n,k}(s)) Y_{n,1}(s) ds + \sum_{i=1}^{k-1} \int_0^t a_{n,k}(s) Z_{n,i}(s) ds - \mathbb{I}_{\{k>1\}} \int_0^t Z_{n,1}(s) ds + \sum_{i=1}^{k-1} \sqrt{n} M_{n,i}(t) \\
+W_{n,2}(t) & \doteq \int_0^t (a_{n,k}(s) - \alpha) Y_{n,1}(s) ds - \sum_{i=1}^{k-1} \int_0^t a_{n,k}(s) Z_{n,i}(s) ds - \int_0^t \delta_{n,k+1}(s) Y_{n,2}(s) ds + \sqrt{n} M_{n,k+1}(t), \\
+W_{n,i}(t) & \doteq \int_0^t \delta_{n,k+i-2}(s) Y_{n,i-1}(s) ds - \int_0^t \delta_{n,k+i-1}(s) Y_{n,i}(s) ds + \sqrt{n} M_{n,k+i-1}(t) \quad \text{for } i \in \{3, \dots, m\}.
+\end{align*}
+$$
+
+Note that, for each $n$, $\|\mathbf{Y}_n\|_{2,T\wedge\tau_{n,L}} \le k \|\mathbf{Z}_n\|_{2,T\wedge\tau_{n,L}}$, which by (6.9) is bounded above by $kL$.
+Hence by (7.23), (7.22), (7.17) and Corollary 7.6,
+
+$$
+|W_{n,i}|_{*,T \wedge \tau_{n,L}} \xrightarrow{P} 0 \text{ as } n \to \infty \qquad (7.27)
+$$
+
+for each $i \in [m]$. Let $\|U_n\|_{1,t} = \sup_{s \in [0,t]} \sum_{i=1}^m |U_{n,i}(s)|$. Then, from (7.26), for any $t \in [0, T \wedge \tau_{n,L}]$
+
+$$
+\|U_n\|_{1,t} \leq \sum_{i=1}^{m} \left( |U_{n,i}(0)| + |W_{n,i}|_{*,T \wedge \tau_{n,L}} \right) + |\sqrt{n}M_{n,k} - \sqrt{2}B|_{*,T} + R \int_0^t \|U_n\|_{1,s} ds
+$$
+
+with $R = \max(2\alpha + \mathbb{I}_{\{k=1\}}, 2)$. Hence by Gronwall's inequality
+
+$$
+\|U_n\|_{1,T \wedge \tau_{n,L}} \leq \left( |\sqrt{n}M_{n,k} - \sqrt{2}B|_{*,T} + \sum_{i=1}^{m} (|U_{n,i}(0)| + |W_{n,i}|_{*,T \wedge \tau_{n,L}}) \right) e^{RT}.
+$$
+
+By our hypothesis, as $n \to \infty$, $|U_{n,i}(0)| = |Z_{n,k+i-1}(0) - z_{k+i-1}| \xrightarrow{P} 0$ for each $i \in [m]$. Hence by (7.27) and (7.17), $\|U_n\|_{1,T\wedge\tau_{n,L}} = \sum_{i=1}^m |Y_{n,i} - Y_i|_{*,T\wedge\tau_{n,L}} \xrightarrow{P} 0$ as $n \to \infty$. Combined with (7.25), this completes the proof of (7.24).
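The Gronwall step above can be checked numerically. Below is a minimal sketch (not part of the proof; the constants $A$, $R$, $T$ and the nonnegative slack term are arbitrary illustrative choices): a path satisfying the integral inequality $z(t) \le A + R\int_0^t z(s)\,ds$ on $[0,T]$ also satisfies the Gronwall bound $z(t) \le A e^{Rt}$.

```python
import numpy as np

# Illustration of Gronwall's inequality: if z(t) <= A + R * int_0^t z(s) ds,
# then z(t) <= A * exp(R * t).  A, R, T and the slack term are arbitrary here.
A, R, T = 1.5, 2.0, 1.0
t = np.linspace(0.0, T, 1001)
dt = t[1] - t[0]

# Build a path satisfying the inequality: z' = R*z - (nonnegative slack), z(0) = A.
z = np.empty_like(t)
z[0] = A
for i in range(1, len(t)):
    slack = 0.3 * np.sin(t[i - 1]) ** 2           # nonnegative slack term
    z[i] = z[i - 1] + dt * (R * z[i - 1] - slack)

# Running integral of z by the trapezoid rule.
running_int = np.concatenate(([0.0], np.cumsum(0.5 * (z[1:] + z[:-1]) * dt)))
assert np.all(z <= A + R * running_int + 1e-8)    # z obeys the integral inequality
assert np.all(z <= A * np.exp(R * t) + 1e-8)      # ... hence the Gronwall bound
```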
+
+Next we prove (2.8). Fix $\delta > 0$. Since $\mathbf{Y}$ has sample paths in $C([0,T] : l_2)$, we can find
+$L_1 \in (0, \infty)$ so that
+
+$$
+P(\|\mathbf{Y}\|_{2,T} > L_1) \leq \frac{\delta}{2}. \tag{7.28}
+$$
+
+Also, since $\mathbf{Z}_n(0) \xrightarrow{P} z$, we can find an $L_2 \in (0, \infty)$ so that
+
+$$
+\sup_n P (\|Z_n(0)\|_2 > L_2) \leq \frac{\delta}{2}. \tag{7.29}
+$$
+
+Let $L = (L_1 + 1) + k(L_2 + 1) + 1$. Also, let $\vec{X}_n$ be as in Lemma 7.5 when $k > 1$. For $k = 1$, we set
+$\vec{X}_n = 0$. Then,
+
+$$
+\begin{align*}
+\|\mathbf{Z}_n\|_{2,T\wedge\tau_{n,L}} &\leq \|\vec{X}_n\|_{2,T\wedge\tau_{n,L}} + \left\|\mathbf{Y}_n - e_1 \sum_{i=1}^{k-1} Z_{n,i}\right\|_{2,T\wedge\tau_{n,L}} \\
+&\leq \|\mathbf{Y}_n\|_{2,T\wedge\tau_{n,L}} + k\mathbb{I}_{\{k>1\}} \left\|\vec{X}_n\right\|_{2,T\wedge\tau_{n,L}}.
+\end{align*}
+$$
+
+Hence for each $n \in \mathbb{N}$
+
+$$
+\begin{align*}
+P(\tau_{n,L} \le T) &\le P(\|\mathbf{Z}_n\|_{2,T\wedge\tau_{n,L}} > L-1) \\
+&\le P(\|\mathbf{Y}_n\|_{2,T\wedge\tau_{n,L}} > L_1+1) + P(\|\vec{X}_n\|_{2,T\wedge\tau_{n,L}} > L_2+1), \\
+&\le \delta + P(\|\mathbf{Y}_n - \mathbf{Y}\|_{2,T\wedge\tau_{n,L}} > 1) + P(\|\vec{X}_n\|_{2,T\wedge\tau_{n,L}} > \|\vec{X}_n(0)\|+1),
+\end{align*}
+$$
+
+where the last inequality uses (7.28) and (7.29). From Lemma 7.5 and (7.24) we see
+
+$$
+\limsup_{n \to \infty} \mathbf{P} \left( \| \mathbf{Z}_n \|_{2,T} \ge L \right) \le \limsup_{n \to \infty} \mathbf{P} \left( \tau_{n,L} \le T \right) \le \delta.
+$$
+
+Since $\delta > 0$ is arbitrary, the convergence in (2.8) is now immediate.
+
+This convergence in particular says that $\lim_{L \to \infty} \sup_n P(\tau_{n,L} \le T) = 0$. Using Lemma 6.5 with $F_n(t) = \|Y_n - Y\|_{2,t}$, we now see from (7.24) that $\|Y_n - Y\|_{2,T} \xrightarrow{P} 0$ as $n \to \infty$. Similarly, if $k > 1$, then taking $F_n(t) = \sup_{s \in [\epsilon, t]} |Z_{n,i}(s)|$ in Lemma 6.5, we conclude from Lemma 7.5 that for each $i \in [k-1]$ and $\epsilon > 0$, $\sup_{s \in [\epsilon, T]} |Z_{n,i}(s)| \xrightarrow{P} 0$ as $n \to \infty$. This completes the proof of Theorem 2.2. ■
+
+8. PROOF OF THEOREM 2.3
+
+In this section we give the proof of Theorem 2.3. We begin by giving a convenient representation
+for $\mathbf{Z}_n$ under the assumptions of Theorem 2.3 and establishing some a priori convergence properties.
+
+**Lemma 8.1.** Suppose $c_n = \frac{d_n}{\sqrt{n}} \to c \in (0, \infty)$ and $\lambda_n = 1 - (\frac{\log d_n}{d_n} + \frac{\alpha_n}{\sqrt{n}})$ where $\alpha_n \in \mathbb{R}$,
+$\liminf_{n\to\infty} \alpha_n > -\infty$ and $\frac{\alpha_n}{n^{1/4}} \to 0$. Suppose also that $\{\|\mathbf{Z}_n(0)\|_1\}_{n\in\mathbb{N}}$ is a tight sequence of
+random variables and $\mathbf{Z}_{n,r+}(0) \xrightarrow{P} \mathbf{0}$ in $\ell_2$ for some $r \ge 2$. Then there are real stochastic processes
+$\delta_n, W_{n,i}$ with sample paths in $\mathcal{D}([0, \infty) : \mathbb{R})$ such that for any $t \ge 0$
+
+$$
+\begin{align}
+Z_{n,1}(t) &= Z_{n,1}(0) - \int_0^t Z_{n,1}(s)ds + \int_0^t Z_{n,2}(s)ds + \sqrt{n}M_{n,1}(t) \nonumber \\
+&\quad - (c_n e^{c_n\alpha_n})^{-1} \int_0^t (1+\delta_n(s))(e^{c_n Z_{n,1}(s)} - 1)ds \\
+Z_{n,2}(t) &= Z_{n,2}(0) - \int_0^t Z_{n,2}(s)ds + \int_0^t Z_{n,3}(s)ds + W_{n,2}(t) \tag{8.1} \\
+&\quad + (c_n e^{c_n\alpha_n})^{-1} \int_0^t (1+\delta_n(s))(e^{c_n Z_{n,1}(s)} - 1)ds \\
+Z_{n,i}(t) &= Z_{n,i}(0) - \int_0^t Z_{n,i}(s)ds + \int_0^t Z_{n,i+1}(s)ds + W_{n,i}(t) \quad \text{for } i \in \{3, \dots, r\}
+\end{align}
+$$
+
+and for any fixed $L, T \in (0, \infty)$,
+
+(1) $\sqrt{n}M_{n,1} \Rightarrow \sqrt{2}B$ in $\mathbb{D}([0, \infty) : \mathbb{R})$ where $B$ is a standard Brownian motion,
+
+(2) $|\delta_n|_{*,T_n} \to 0$ a.s.
+
+(3) $|W_{n,i}|_{*,T_n} \xrightarrow{P} 0$ for $i \in \{2, \dots, r\},$
+
+(4) $\|\mathbf{Z}_{n,r+}\|_{2,T_n} \xrightarrow{P} 0,$
+
+where $T_n = T \wedge \tau_{n,L}$ and $\tau_{n,L}$ is defined as in (6.8).
+
+*Proof.* Recall the definition of $t_{n,i}$ from Lemma 6.1. Define
+
+$$
+\delta_n(s) \doteq t_{n,1}(Z_{n,1}(s))c_n\left(e^{c_n[Z_{n,1}(s)-\alpha_n]} - e^{-c_n\alpha_n}\right)^{-1} - 1
+$$
+
+so that
+
+$$
+t_{n,1}(Z_{n,1}(s)) = (1 + \delta_n(s))c_n^{-1}\left(e^{c_n[Z_{n,1}(s)-\alpha_n]} - e^{-c_n\alpha_n}\right).
+$$
+
+Since $\sup_{s \le T \wedge \tau_{n,L}} |Z_{n,1}(s)| \le L$, Lemma 6.7 shows that $|\delta_n|_{*,T_n} \to 0$ a.s. Define
+
+$$
+\begin{align*}
+W_{n,2}(t) &\doteq -\int_0^t t_{n,2}(Z_{n,2}(s))ds + \sqrt{n}M_{n,2}(t) \\
+W_{n,i}(t) &\doteq \int_0^t t_{n,i-1}(Z_{n,i-1}(s))ds - \int_0^t t_{n,i}(Z_{n,i}(s))ds + \sqrt{n}M_{n,i}(t) \quad \text{for } i \in \{3, \dots, r\}.
+\end{align*}
+$$
+
+From Lemma 6.1 it follows that (8.1) is satisfied. Lemma 6.6 shows that $\boldsymbol{\mu}_n \to \mathbf{f}_1 \in \ell_1^\perp$. Along with the assumed tightness of $\{\|\boldsymbol{Z}_n(0)\|_1\}_{n \in \mathbb{N}}$, this shows $\boldsymbol{G}_n(0) = \boldsymbol{\mu}_n + \frac{\mathbf{Z}_n(0)}{\sqrt{n}} \xrightarrow{P} \mathbf{f}_1$ in $\ell_1^\perp$. Hence by Lemma 6.2 and Lemma 6.4,
+
+$$ \sqrt{n}\boldsymbol{M}_n \Rightarrow \sqrt{2}B\boldsymbol{e}_1 \text{ in } \mathbb{D}([0, \infty) : \ell_2) \quad (8.2) $$
+
+and $\|\mathbf{Z}_{n,r+}\|_{2, T \wedge \tau_{n,L}} \xrightarrow{P} 0$ as $n \to \infty$. Since $|Z_{n,i}|_{*,T \wedge \tau_{n,L}} \le L$ and $\mu_{n,2} \to 0$, Lemma 6.3, together with (8.2), shows that $|W_{n,i}|_{*,T_n} \xrightarrow{P} 0$ for each $i \in \{2, \dots, r\}$, as $n \to \infty$. ■
+
+The next lemma gives pathwise existence and uniqueness of solutions to a system of stochastic differential equations in which the drift fails to satisfy a linear growth condition.
+
+**Lemma 8.2.** Suppose $c \in (0, \infty)$, $\alpha \in (0, \infty]$ and B is a standard Brownian motion. Then for any $r \ge 2$ the system of equations
+
+$$
+\begin{align}
+Z_1(t) &= z_1 - \int_0^t Z_1(s)ds + \int_0^t Z_2(s)ds + \sqrt{2}B(t) - (ce^{c\alpha})^{-1} \int_0^t (e^{cZ_1(s)} - 1)ds \\
+Z_2(t) &= z_2 - \int_0^t Z_2(s)ds + \int_0^t Z_3(s)ds + (ce^{c\alpha})^{-1} \int_0^t (e^{cZ_1(s)} - 1)ds \\
+Z_i(t) &= z_i - \int_0^t Z_i(s)ds + \int_0^t Z_{i+1}(s)ds \quad \text{for } i \in \{3, \dots, r\} \\
+Z_i(t) &= 0 \quad \text{for } i > r
+\end{align}
+\tag{8.3}
+$$
+
+has a unique pathwise solution **Z** with sample paths in $C([0, \infty) : \ell_2)$ for any $(z_1, \dots, z_r) \in \mathbb{R}^r$.
+
+*Proof.* The case when $\alpha = \infty$ is standard and is thus omitted. Consider now the case $\alpha < \infty$. It is straightforward to see that there is a unique $Z_{2+} = (Z_3, Z_4, \dots)$ in $C([0, \infty) : \ell_2)$ that solves the last two equations in (8.3). Hence it suffices to show that the system of equations
+
+$$
+\begin{align}
+Z_1(t) &= z_1 - (ce^{c\alpha})^{-1} \int_0^t (e^{cZ_1(s)} - 1)ds + \int_0^t (Z_2(s) - Z_1(s))ds + \sqrt{2}B(t) \\
+Z_2(t) &= z_2 + (ce^{c\alpha})^{-1} \int_0^t (e^{cZ_1(s)} - 1)ds - \int_0^t Z_2(s)ds + \int_0^t f(s)ds
+\end{align}
+\tag{8.4}
+$$
+
+has a unique pathwise solution $(Z_1, Z_2)$ with sample paths in $C([0, \infty) : \mathbb{R}^2)$ where $f = Z_3 \in C([0, \infty) : \mathbb{R})$ is a given (non-random) continuous trajectory and $(z_1, z_2) \in \mathbb{R}^2$.
+
+Define $y_1 = z_1$, $y_2 = z_1 + z_2$ and consider the equation:
+
+$$
+\begin{align}
+Y_1(t) &= y_1 - (ce^{c\alpha})^{-1} \int_0^t (e^{cY_1(s)} - 1)ds + \int_0^t (Y_2(s) - 2Y_1(s))ds + \sqrt{2}B(t) \\
+Y_2(t) &= y_2 - \int_0^t Y_1(s)ds + \int_0^t f(s)ds + \sqrt{2}B(t).
+\end{align}
+\tag{8.5}
+$$
+
+Note that $(Z_1, Z_2)$ solve (8.4) if and only if $(Y_1, Y_2)$, with $Y_1 = Z_1$ and $Y_2 = Z_1 + Z_2$ solve (8.5). Thus it suffices to prove existence and uniqueness of solutions for (8.5).
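The change of variables can be verified directly on the drift terms. The quick numerical check below (with arbitrary illustrative values for $c$, $\alpha$, the input $f$ and the state) confirms that the substitution $Y_1 = Z_1$, $Y_2 = Z_1 + Z_2$ maps the drifts of (8.4) to those of (8.5); the common $\sqrt{2}B$ term then enters both coordinates of (8.5), since in (8.4) it enters only $Z_1$.

```python
import numpy as np

rng = np.random.default_rng(0)
c, alpha = 0.7, 1.3                      # arbitrary illustrative constants
a = 1.0 / (c * np.exp(c * alpha))        # a = (c e^{c alpha})^{-1}

for _ in range(100):
    z1, z2, f = rng.normal(size=3)
    # drifts of (8.4) (the sqrt(2) dB term enters the first equation only)
    dZ1 = -a * (np.exp(c * z1) - 1) + (z2 - z1)
    dZ2 = a * (np.exp(c * z1) - 1) - z2 + f
    # change of variables Y1 = Z1, Y2 = Z1 + Z2
    y1, y2 = z1, z1 + z2
    # expected drifts from (8.5)
    dY1 = -a * (np.exp(c * y1) - 1) + (y2 - 2 * y1)
    dY2 = -y1 + f
    assert abs(dZ1 - dY1) < 1e-12            # dY1 = dZ1
    assert abs((dZ1 + dZ2) - dY2) < 1e-12    # dY2 = dZ1 + dZ2
```

Note that the exponential terms cancel exactly in the sum $dZ_1 + dZ_2$, which is why the equation for $Y_2$ in (8.5) has no exponential drift.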
+
+For $L \in (0, \infty)$, let $\eta_L : \mathbb{R} \to [0, 1]$ be such that $\eta_L$ is smooth, $\eta_L(x) = 1$ for $|x| \le L$ and $\eta_L(x) = 0$ for $|x| \ge L + 1$. Consider the equation
+
+$$
+\begin{align}
+Y_1^L(t) &= y_1 - (ce^{c\alpha})^{-1} \int_0^t e^{cY_1^L(s)} \eta_L(Y_1^L(s)) ds + (ce^{c\alpha})^{-1} t \nonumber \\
+&\quad + \int_0^t (Y_2^L(s) - 2Y_1^L(s)) ds + \sqrt{2}B(t) \tag{8.6}
+\end{align}
+$$
+
+$$ Y_2^L(t) = y_2 - \int_0^t Y_1^L(s)ds + \int_0^t f(s)ds + \sqrt{2}B(t). $$
+
+Since for each $L$ (8.6) is an equation with (globally) Lipschitz coefficients, by standard results, it
+has a unique pathwise continuous solution.
+
+Fix $T \in (0, \infty)$ and let $\tau_L = \inf\{t \ge 0 : |Y_1^L(t)| \ge L\} \wedge T$ for any $L > 0$. Then by pathwise uniqueness of (8.6), for $0 \le t \le \tau_L \wedge \tau_{L+1}$,
+
+$$ Y^L(t) = Y^{L+1}(t). $$
+
+This in particular shows that, $\tau_L \le \tau_{L+1}$ a.s.
+
+We now estimate the second moment of $|Y_1^L(t)|$. By Itô's formula
+
+$$
+\begin{align*}
+(Y_1^L(t))^2 &= (y_1)^2 - 2(ce^{c\alpha})^{-1} \int_0^t Y_1^L(s)e^{cY_1^L(s)} \eta_L(Y_1^L(s)) ds + 2(ce^{c\alpha})^{-1} \int_0^t Y_1^L(s) ds \\
+&\quad + 2 \int_0^t Y_1^L(s)(Y_2^L(s) - 2Y_1^L(s)) ds + 2\sqrt{2} \int_0^t Y_1^L(s) dB(s) + 2t \\
+(Y_2^L(t))^2 &= (y_2)^2 - 2 \int_0^t Y_1^L(s)Y_2^L(s) ds + 2 \int_0^t Y_2^L(s)f(s) ds + 2\sqrt{2} \int_0^t Y_2^L(s) dB(s) + 2t.
+\end{align*}
+$$
+
+Thus
+
+$$
+\begin{align*}
+(Y_1^L(t))^2 + (Y_2^L(t))^2 &= (y_1)^2 + (y_2)^2 - 2(ce^{c\alpha})^{-1} \int_0^t Y_1^L(s)e^{cY_1^L(s)} \eta_L(Y_1^L(s)) ds \\
+&\quad + 2(ce^{c\alpha})^{-1} \int_0^t Y_1^L(s) ds + 2 \int_0^t Y_2^L(s) f(s) ds \\
+&\quad - 4 \int_0^t (Y_1^L(s))^2 ds + 2\sqrt{2} \int_0^t (Y_1^L(s) + Y_2^L(s)) dB(s) + 4t.
+\end{align*}
+$$
+
+Since $c > 0$, we have on using the inequality $|x| \le 1 + |x|^2$ that $-xe^{cx}\eta_L(x) \le (1 + |x|^2)$ for all $x \in \mathbb{R}$. Thus with $\|Y^L\|_{*,t} = \sup_{s \in [0,t]} \|Y^L(s)\|$:
+
+$$
+\begin{align*}
+\|Y^L\|_{*,t}^2 &\le \|y\|^2 + 4(ce^{c\alpha})^{-1} \int_0^t (1 + \|Y^L\|_{*,s}^2) ds \\
+&\quad + 2 \int_0^t (1 + \|Y^L\|_{*,s}^2) |f(s)| ds \\
+&\quad + 2\sqrt{2} \left( 1 + \sup_{0 \le s \le t} \left| \int_0^s (Y_1^L(u) + Y_2^L(u)) dB(u) \right|^2 \right) + 4t.
+\end{align*}
+$$
+
+Taking expectations, for any $t \in [0, T]$
+
+$$
+\begin{align*}
+\mathbf{E} \|Y^L\|_{*,t}^2 &\leq \|y\|^2 + (4(ce^{c\alpha})^{-1} + 2|f|_{*,T}) \int_0^t (1 + \mathbf{E}\|Y^L\|_{*,s}^2) ds \\
+&\quad + 2\sqrt{2} \left( 1 + 4\mathbf{E} \int_0^t |Y_1^L(u) + Y_2^L(u)|^2 du \right) + 4t \\
+&\leq (\|y\|^2 + K(T+1)) + K \int_0^t \mathbf{E}\|Y^L\|_{*,s}^2 ds.
+\end{align*}
+$$
+
+with $K \doteq 4(ce^{c\alpha})^{-1} + 2|f|_{*,T} + 16\sqrt{2}$. By Gronwall's lemma, for every $L \in \mathbb{N}$
+
+$$
+\mathbf{E}\|\mathbf{Y}^L\|_{*,T}^2 \leq (\|\mathbf{y}\|^2 + K(T+1))e^{KT} \doteq c_1.
+$$
+
+Thus, as $L \to \infty$
+
+$$
+P(\tau_L < T) \le P(\|Y^L\|_{*,T} \ge L) \le c_1/L^2 \to 0
+$$
+
+and consequently $\tau_L \uparrow T$ as $L \to \infty$. Now define $Y(t) = Y^L(t)$ for $0 \le t \le \tau_L$. Then $Y$ is a solution of (8.5) on $[0,T)$. The same argument as before shows that this is the unique pathwise solution on $[0,T)$. Since $T$ is arbitrary we get a unique pathwise solution of (8.5) on $[0,\infty)$. This completes the proof of the lemma. ■
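The localized equation (8.6) is easy to simulate. Below is a minimal Euler–Maruyama sketch of it (an illustration only, not part of the proof: the parameter values and the choice $f \equiv 0$ are arbitrary, and the stand-in cutoff function is merely continuous rather than smooth as the $\eta_L$ in the proof):

```python
import numpy as np

def cutoff(x, L):
    # Continuous stand-in for the cutoff eta_L: equals 1 on [-L, L] and 0
    # outside [-L-1, L+1] (the eta_L used in the proof is in addition smooth).
    return float(np.clip(L + 1.0 - abs(x), 0.0, 1.0))

def simulate_8_6(y1, y2, c=0.7, alpha=1.3, L=5.0, T=1.0, n_steps=10_000, seed=0):
    """Euler-Maruyama sketch of the truncated system (8.6) with f = 0."""
    rng = np.random.default_rng(seed)
    a = 1.0 / (c * np.exp(c * alpha))       # a = (c e^{c alpha})^{-1}
    dt = T / n_steps
    Y1, Y2 = y1, y2
    for _ in range(n_steps):
        dB = np.sqrt(dt) * rng.standard_normal()
        drift1 = -a * np.exp(c * Y1) * cutoff(Y1, L) + a + (Y2 - 2.0 * Y1)
        drift2 = -Y1                        # f = 0 in this illustration
        Y1, Y2 = (Y1 + drift1 * dt + np.sqrt(2.0) * dB,
                  Y2 + drift2 * dt + np.sqrt(2.0) * dB)   # same B drives both
    return Y1, Y2

Y1, Y2 = simulate_8_6(0.5, -0.2)
assert np.isfinite(Y1) and np.isfinite(Y2)
```

The cutoff makes the drift globally Lipschitz, which is exactly what guarantees the truncated scheme stays well behaved for every fixed $L$.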
+
+**Lemma 8.3.** Suppose the assumptions of Theorem 2.3 hold. Suppose further that $\mathbf{Z}_n(0)$, $\mathbf{M}_n$ and a standard Brownian motion $B$ are given on a common probability space such that $\mathbf{Z}_n(0) \to z$ in $\ell_1^\downarrow$ and $\sqrt{n}\mathbf{M}_n \to \sqrt{2}Be_1$ in $\mathbb{D}([0, \infty) : \ell_2)$ almost surely. Let $\mathbf{Z}$ be as defined in Lemma 8.2. Then for any $T, L \in (0, \infty)$
+
+$$
+\|\mathbf{Z}_n - \mathbf{Z}\|_{2,T \wedge \tau_{n,L} \wedge \tau_L} \xrightarrow{P} 0 \quad \text{as } n \to \infty. \tag{8.7}
+$$
+
+where $\tau_L \doteq \inf\{t \ge 0 : \|\mathbf{Z}(t)\|_2 > L\}$.
+
+*Proof.* Fix $L, T \in (0, \infty)$ and let $T_n \doteq T \wedge \tau_{n,L} \wedge \tau_L$ and $U_{n,i} \doteq Z_{n,i} - Z_i$ for $i \in \mathbb{N}$. Using the estimate $|e^{ax} - e^{ay}| \le ae^{a(x\vee y)} |x-y|$ for $x,y \in \mathbb{R}$, $a \ge 0$ and since $|Z_{n,1}(s)|, |Z_1(s)| \le L$ for any $s \in [0, T_n]$,
+
+$$
+\begin{align*}
+& |a_n(s)e^{c_n Z_{n,1}(s)} - ae^{cZ_1(s)}| \\
+&\le |a_n(s)e^{c_n Z_{n,1}(s)} - a_n(s)e^{c_n Z_1(s)}| + |a_n(s)e^{c_n Z_1(s)} - a_n(s)e^{cZ_1(s)}| + |e^{cZ_1(s)}| |a_n(s) - a| \\
+&\le |a_n(s)| c_n e^{c_n L} |U_{n,1}(s)| + |a_n(s)| Le^{L(c_n \vee c)} |c_n - c| + e^{cL} |a_n(s) - a|
+\end{align*}
+$$
+
+where $a_n(s) \doteq (c_n e^{c_n \alpha_n})^{-1}(1+\delta_n(s))$, $c_n = d_n/\sqrt{n}$, $\delta_n$ is as in Lemma 8.1, and $a \doteq (ce^{c\alpha})^{-1}$. Since $c_n \to c$ and $|\delta_n|_{*,T_n} \to 0$ by Lemma 8.1, $|a_n - a|_{*,T_n} \to 0$. Hence for any $s \in [0, T_n]$
+
+$$
+|a_n e^{c_n Z_{n,1}} - a e^{c Z_1}|_{*,s} \le K |U_{n,1}|_{*,s} + r_n
+\quad (8.8)
+$$
+
+where $K\doteq\sup_n(c_n e^{c_n L}|a_n|_{*,T_n})<\infty$ and
+
+$$
+r_n \doteq |a_n|_{*,T_n} L e^{L(c_n \vee c)} |c_n - c| + e^{cL} |a_n - a|_{*,T_n} \to 0
+$$
+
+almost surely. Subtracting (8.3) from (8.1), for any $t > 0$,
+
+$$
+\begin{align}
+U_{n,1}(t) &= U_{n,1}(0) - \int_0^t (U_{n,1}(s) - U_{n,2}(s))ds + \sqrt{n}M_{n,1}(t) - \sqrt{2}B(t) \nonumber \\
+&\quad - \int_0^t (a_n(s)e^{c_n Z_{n,1}(s)} - ae^{cZ_1(s)})ds + \int_0^t (a_n(s) - a)ds \notag \\
+U_{n,2}(t) &= U_{n,2}(0) - \int_0^t (U_{n,2}(s) - U_{n,3}(s))ds + W_{n,2}(t) \tag{8.9} \\
+&\quad + \int_0^t (a_n(s)e^{c_n Z_{n,1}(s)} - ae^{cZ_1(s)})ds - \int_0^t (a_n(s) - a)ds \notag \\
+U_{n,i}(t) &= U_{n,i}(0) - \int_0^t (U_{n,i}(s) - U_{n,i+1}(s))ds + W_{n,i}(t) \quad \text{for } i \in \{3, \dots, r\}. \notag
+\end{align}
+$$
+
+Let $H_t = \sup_{s \in [0,t]} \sum_{i=1}^r |U_{n,i}(s)|$. Then from (8.8) and (8.9), for any $t \in [0, T_n]$,
+
+$$ H_t \le H_0 + |\sqrt{n}M_{n,1} - \sqrt{2}B|_{*,T} + 2T(|a_n - a|_{*,T_n} + r_n) + \sum_{i=2}^r |W_{n,i}|_{*,T_n} + |U_{n,r+1}|_{*,T_n} + 2(1+K) \int_0^t H_s ds. $$
+
+Hence by Gronwall's lemma
+
+$$ H_{T_n} \le \left( H_0 + \left| \sqrt{n}M_{n,1} - \sqrt{2}B \right|_{*,T} + 2T(|a_n - a|_{*,T_n} + r_n) + \sum_{i=2}^r |W_{n,i}|_{*,T_n} + |U_{n,r+1}|_{*,T_n} \right) e^{2(1+K)T}. $$
+
+Note $\mathbf{U}_{n,r+} = \mathbf{Z}_{n,r+}$ and $U_{n,i}(0) = Z_{n,i}(0) - z_i$. Then by Lemma 8.1, $H_{T_n} \xrightarrow{P} 0$ and $\|\mathbf{U}_{n,r+}\|_{2,T_n} \xrightarrow{P} 0$. Together these show $\|\mathbf{U}_n\|_{2,T_n} = \|\mathbf{Z}_n - \mathbf{Z}\|_{2,T_n} \xrightarrow{P} 0$ as $n \to \infty$. ■
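The elementary estimate $|e^{ax} - e^{ay}| \le a e^{a(x\vee y)}|x-y|$ used at the start of the proof is a consequence of the mean value theorem; it can be spot-checked at random points (the sampling ranges below are arbitrary illustrative choices):

```python
import numpy as np

# Spot-check of |e^{ax} - e^{ay}| <= a * e^{a*max(x, y)} * |x - y| for a >= 0,
# which follows from the mean value theorem applied to u -> e^{au}.
rng = np.random.default_rng(1)
for _ in range(1000):
    a = rng.uniform(0.0, 3.0)
    x, y = 2.0 * rng.normal(size=2)
    lhs = abs(np.exp(a * x) - np.exp(a * y))
    rhs = a * np.exp(a * max(x, y)) * abs(x - y)
    assert lhs <= rhs + 1e-12
```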
+
+**Corollary 8.4.** Under the assumptions of Theorem 2.3, $\{\|\mathbf{Z}_n\|_{2,T}\}_{n \in \mathbb{N}}$ is a tight sequence of random variables and
+
+$$ \lim_{L \to \infty} \limsup_n P(\tau_{n,L} \le T) = 0. \qquad (8.10) $$
+
+*Proof.* Fix $\delta > 0$. Since $\mathbf{Z}$ has sample paths in $C([0,T] : \ell_2)$, we can find $L \in (0,\infty)$ so that $P(\|\mathbf{Z}\|_{2,T} > L) \le \delta$. Note that $\{\|\mathbf{Z}\|_{2,T} \le L\} \subseteq \{\tau_{L+2} > T\}$, where $\tau_{L+2} = \inf\{t \mid \|\mathbf{Z}(t)\|_2 > L+2\}$. Hence for each $n \in \mathbb{N}$, by right continuity of $\mathbf{Z}_n$
+
+$$
+\begin{align*}
+P(\tau_{n,L+2} \le T) &\le P(\|\mathbf{Z}_n\|_{2,T \wedge \tau_{n,L+2}} > L+1) \\
+&\le P(\|\mathbf{Z}_n - \mathbf{Z}\|_{2,T \wedge \tau_{n,L+2}} > 1 \text{ or } \|\mathbf{Z}\|_{2,T} > L) \\
+&\le P(\|\mathbf{Z}_n - \mathbf{Z}\|_{2,T \wedge \tau_{n,L+2} \wedge \tau_{L+2}} > 1) + P(\|\mathbf{Z}\|_{2,T} > L) \\
+&\le \delta + P(\|\mathbf{Z}_n - \mathbf{Z}\|_{2,T \wedge \tau_{n,L+2} \wedge \tau_{L+2}} > 1).
+\end{align*}
+$$
+
+Sending $n \to \infty$ and using Lemma 8.3 we see that $\limsup_n P(\tau_{n,L+2} \le T) \le \delta$. Finally,
+
+$$ \limsup_n P(\|\mathbf{Z}_n\|_{2,T} > L+2) \le \limsup_n P(\tau_{n,L+2} \le T) \le \delta. $$
+
+Since $\delta > 0$ is arbitrary, this shows that $\{\|\mathbf{Z}_n\|_{2,T}\}_{n \in \mathbb{N}}$ is tight. The convergence in (8.10) is now immediate. ■
+
+*Proof of Theorem 2.3.* Using Lemma 8.1 and the Skorokhod embedding theorem we can assume without loss of generality that $\mathbf{Z}_n(0)$, $\mathbf{M}_n$ and $B$ are given on a common probability space, $\mathbf{Z}_n(0) \to z$ in $\ell_1^\downarrow$, and $\sqrt{n}\mathbf{M}_n \to \sqrt{2}Be_1$ in $D([0,\infty) : \ell_2)$ almost surely. From Lemma 8.3 we now have that for
+
+every $T, L \in (0, \infty)$ (8.7) holds. Finally, using from Corollary 8.4 the fact that $\sup_n P(\tau_{n,L} \le T) + P(\tau_L \le T) \to 0$ as $L \to \infty$, we have that $\|Z_n - Z\|_{2,T} \xrightarrow{P} 0$ as $n \to \infty$. The result follows. ■
+
+9. PROOF OF THEOREM 2.4
+
+In this section we give the proof of Theorem 2.4. As for the proof of Theorem 2.3, we begin with a convenient representation for $\mathbf{Z}_n$ and establish some useful convergence properties.
+
+**Lemma 9.1.** Let $\lambda_n, \alpha_n, d_n$ be as in the statement of Theorem 2.4. Suppose that $\{\|\boldsymbol{Z}_n(0)\|_1\}_{n \in \mathbb{N}}$ is a tight sequence of random variables and $\boldsymbol{Z}_{n,r+}(0) \xrightarrow{P} \mathbf{0}$ in $\ell_2$ for some $r \ge 2$. Then there are real stochastic processes $U_n, V_n, W_{n,i}, \eta_n$ with sample paths in $\mathbb{D}([0, \infty) : \mathbb{R})$ so that, $U_n, \eta_n$ have absolutely continuous paths a.s., $U_n(0) = \eta_n(0) = 0$, and for any $t \ge 0$
+
+$$Z_{n,1}(t) = Z_{n,1}(0) - \int_0^t Z_{n,1}(s)ds + \int_0^t Z_{n,2}(s)ds + \sqrt{n}M_{n,1}(t) + U_n(t) - \eta_n(t) \quad (9.1)$$
+
+$$Z_{n,2}(t) = Z_{n,2}(0) - \int_0^t Z_{n,2}(s)ds + \int_0^t Z_{n,3}(s)ds + V_n(t) + \eta_n(t) \quad (9.2)$$
+
+$$Z_{n,i}(t) = Z_{n,i}(0) - \int_0^t Z_{n,i}(s)ds + \int_0^t Z_{n,i+1}(s)ds + W_{n,i}(t) \quad \text{for } i \in \{3, \dots, r\}. \quad (9.3)$$
+
+Furthermore, $\eta_n$ is a non-decreasing process with $\eta_n(0) = 0$ that satisfies
+
+$$\eta_n(t) = \int_0^t \mathbb{I}_{\{Z_{n,1}(s) \ge \theta_n\}} d\eta_n(s) \quad \text{a.s.} \quad (9.4)$$
+
+for some constants $\theta_n = \alpha_n + O(\sqrt{n}/d_n) \ge 0$ as $n \to \infty$. Also for any $L, T \in (0, \infty)$, as $n \to \infty$
+
+(1) $\sqrt{n}M_{n,1} \Rightarrow \sqrt{2}B$ in $\mathbb{D}([0,T] : \mathbb{R})$
+
+(2) $\text{tv}(U_n; [0, T_n]) \doteq \int_0^{T_n} |\dot{U}_n(s)|ds \xrightarrow{P} 0$
+
+(3) $|V_n|_{*,T_n} \xrightarrow{P} 0$ and $|W_{n,i}|_{*,T_n} \xrightarrow{P} 0$ for $i \in \{3, \dots, r\}$
+
+(4) $\|\boldsymbol{Z}_{n,r+}\|_{2,T_n} \xrightarrow{P} 0$.
+
+Here B is a standard Brownian motion and $T_n = T \wedge \tau_{n,L}$.
+
+*Proof.* By our assumptions on $\alpha_n$, we can find $\kappa \in (0, \infty)$ such that $\theta_n = \alpha_n + \frac{\kappa\sqrt{n}}{d_n} \ge 0$ for every $n$. Note $\theta_n \to \alpha$ as $n \to \infty$. Define
+
+$$U_n(t) := -\int_0^t t_{n,1}(Z_{n,1}(s))\mathbb{I}_{\{Z_{n,1}(s)<\theta_n\}}ds$$
+
+$$\eta_n(t) := \int_0^t t_{n,1}(Z_{n,1}(s))\mathbb{I}_{\{Z_{n,1}(s)\ge\theta_n\}}ds$$
+
+so that $\eta_n(t) = \int_0^t \mathbb{I}_{\{Z_{n,1}(s)\ge\theta_n\}}d\eta_n(s)$, and
+
+$$\int_0^t t_{n,1}(Z_{n,1}(s))ds = \eta_n(t) - U_n(t). \quad (9.5)$$
+
+From Lemma 6.1 it then follows that (9.1) is satisfied. Recall the expression for $t_{n,1}(z)$ from (6.4). Then, by monotonicity of $\beta_n$, $t_{n,1}(z) \ge 0$ whenever $z \ge 0$. Since $\theta_n \ge 0$, $\eta_n$ is non-decreasing and
+
+$$
+\begin{align*}
+\sup_{z \le \theta_n} |t_{n,1}(z)| &\le 2\sqrt{n}\beta_n (\lambda_n + \theta_n/\sqrt{n}) \le 2\sqrt{n}(\lambda_n + \theta_n/\sqrt{n})^{d_n} \\
+&= 2\sqrt{n}\left(1 - ((\log d_n)/d_n + (\alpha_n - \theta_n)/\sqrt{n})\right)^{d_n} = 2\sqrt{n}\left(1 - (\log d_n - \kappa)/d_n\right)^{d_n} \\
+&\le 2\exp\left(-\log \frac{d_n}{\sqrt{n}} + \kappa\right) \to 0 \text{ as } n \to \infty.
+\end{align*}
+$$
+
+This shows that $\text{tv}(U_n; [0, T]) \to 0$ almost surely.
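The final bound in the display above, namely $2\sqrt{n}\left(1 - (\log d_n - \kappa)/d_n\right)^{d_n} \le 2\exp(\kappa - \log(d_n/\sqrt{n}))$, follows from $(1 - x/d)^d \le e^{-x}$; the following spot check (with illustrative choices of $n$, $d_n = n^{3/4}$ and $\kappa$, not from the paper) confirms the bound and its decay when $d_n \gg \sqrt{n}$:

```python
import math

# Spot-check of 2*sqrt(n)*(1 - (log(d) - kappa)/d)**d <= 2*exp(kappa - log(d/sqrt(n)))
# (a consequence of (1 - x/d)^d <= e^{-x}), and of its decay when d >> sqrt(n).
# The values of n, d = n^{3/4} and kappa are illustrative choices only.
kappa = 0.5
bounds = []
for n in [10**4, 10**6, 10**8]:
    d = round(n ** 0.75)
    lhs = 2.0 * math.sqrt(n) * (1.0 - (math.log(d) - kappa) / d) ** d
    rhs = 2.0 * math.exp(kappa - math.log(d / math.sqrt(n)))
    assert lhs <= rhs
    bounds.append(rhs)
assert bounds[0] > bounds[1] > bounds[2]   # the bound decreases as n grows
```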
+
+Next, since $d_n(1 - \lambda_n) \to \infty$, Lemma 6.6 shows that
+
+$$ \mu_n \to f_1 \in \ell_1^{\perp} \text{ as } n \to \infty. \quad (9.6) $$
+
+Therefore $G_n(0) = \mu_n + \frac{Z_n(0)}{\sqrt{n}} \to f_1$ in $\ell_1^{\perp}$. Then by Lemma 6.2,
+
+$$ \sqrt{n}M_n \Rightarrow \sqrt{2}Be_1 \quad \text{in } D([0, \infty) : \ell_2), \quad (9.7) $$
+
+and by Lemma 6.4, $\|\mathbf{Z}_{n,r+}\|_{2,T\wedge\tau_{n,L}} \xrightarrow{P} 0$ as $n \to \infty$. Define
+
+$$ V_n(t) := - \int_0^t t_{n,2}(Z_{n,2}(s))ds + \sqrt{n}M_{n,2}(t) - U_n(t). $$
+
+Using (9.5) and Lemma 6.1 once more, we see that (9.2) is satisfied. Finally, for $i \in \{3, \dots, r\}$, define
+
+$$ W_{n,i}(t) := \int_0^t t_{n,i-1}(Z_{n,i-1}(s))ds - \int_0^t t_{n,i}(Z_{n,i}(s))ds + \sqrt{n}M_{n,i}(t). $$
+
+Then, from Lemma 6.1 again, it follows that (9.3) is satisfied with the above choice of $W_{n,i}$.
+Lemma 6.3 along with (9.6), (9.7), and $|Z_{n,i}|_{*,T\wedge\tau_{n,L}} \le L$ show that, as $n \to \infty$, $|V_n|_{*,T_n} \xrightarrow{P} 0$
+and $|W_{n,i}|_{*,T_n} \xrightarrow{P} 0$ for each $i \in \{3, \dots, r\}$. ■
+
+**Corollary 9.2.** Suppose that the assumptions in Lemma 9.1 are satisfied. Assume further that $d_n \ll n^{2/3}$. Then, the conclusions of Lemma 9.1 hold with $\theta_n = \alpha_n$ and
+
+$$ \eta_n(t) := \int_0^t \gamma_n^{-1}(1 + \delta_n(s))^+ e^{\gamma_n(Z_{n,1}(s)-\alpha_n)} \mathbb{I}_{\{Z_{n,1}(s)\ge\alpha_n\}} ds, \quad (9.8) $$
+
+where $\gamma_n \doteq \frac{d_n}{\sqrt{n}}$ and $\delta_n$ is a process with sample paths in $D([0, \infty), \mathbb{R})$ such that $|\delta_n|_{*,T\wedge\tau_{n,L}} \to 0$ a.s. for each $L > 0$.
+
+*Proof.* Since $d_n \ll n^{2/3}$ and $\alpha_n = O(n^{1/6})$, the hypothesis of Lemma 6.7 is satisfied. Define
+
+$$ \delta_n(s) := t_{n,1}(Z_{n,1}(s))\gamma_n\left(e^{\gamma_n[Z_{n,1}(s)-\alpha_n]} - e^{-\gamma_n\alpha_n}\right)^{-1} - 1. $$
+
+Since $\sup_{s\le T\wedge\tau_{n,L}} |Z_{n,1}(s)| \le L$, Lemma 6.7 shows that $|\delta_n|_{*,T_n} \to 0$ a.s. as $n \to \infty$. Next define
+
+$$
+\begin{aligned}
+U_n(t) &:= \gamma_n^{-1} \int_0^t (1 + \delta_n(s)) (e^{-\gamma_n \alpha_n} - e^{\gamma_n (Z_{n,1}(s)-\alpha_n)}) \mathbb{I}_{\{Z_{n,1}(s)<\alpha_n\}} ds \\
+&\quad + \int_0^t \gamma_n^{-1} (1 + \delta_n(s))^{-} e^{\gamma_n (Z_{n,1}(s)-\alpha_n)} \mathbb{I}_{\{Z_{n,1}(s)\ge\alpha_n\}} ds
+\end{aligned}
+$$
+
+Then $U_n(0) = 0$, $U_n$ is absolutely continuous and, with $\kappa = \sup_n \frac{d_n}{\sqrt{n}}\alpha_n^- < \infty$,
+
+$$
+\begin{aligned}
+\text{tv}(U_n; [0, T_n]) \mathbb{I}_{\{|\delta_n|_{*,T_n} < 1\}} &= \gamma_n^{-1} \int_0^{T_n} |1 + \delta_n(s)| \left|e^{-\gamma_n \alpha_n} - e^{\gamma_n (Z_{n,1}(s)-\alpha_n)}\right| \mathbb{I}_{\{Z_{n,1}(s)<\alpha_n\}} ds \\
+&\leq \frac{2(1 + e^\kappa)T}{\gamma_n} \to 0 \quad \text{as } n \to \infty.
+\end{aligned}
+$$
+
+Hence, since $|\delta_n|_{*,T_n} \to 0$, we have that $\text{tv}(U_n; [0, T_n]) \xrightarrow{P} 0$ as $n \to \infty$. By rearranging terms we see that, with the above definitions of $U_n$ and $\eta_n$, (9.5) is satisfied. The result follows. ■
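The integrand bound behind the estimate $\text{tv}(U_n;[0,T_n]) \le 2(1+e^\kappa)T/\gamma_n$ can be spot-checked directly: on $\{z < \alpha\}$ with $|\delta| < 1$ and $\gamma\alpha^- \le \kappa$, one has $|1+\delta| \le 2$, $e^{-\gamma\alpha} \le e^\kappa$ and $e^{\gamma(z-\alpha)} \le 1$. The check below uses arbitrary illustrative sampling ranges:

```python
import math
import random

# Spot-check of the integrand bound for tv(U_n): for z < alpha, |delta| < 1 and
# gamma * max(-alpha, 0) <= kappa,
#   |1 + delta| * |exp(-gamma*alpha) - exp(gamma*(z - alpha))| <= 2 * (1 + e^kappa).
random.seed(2)
kappa = 1.0
for _ in range(1000):
    gamma = random.uniform(0.5, 50.0)
    alpha = random.uniform(-kappa / gamma, 3.0)   # ensures gamma * alpha^- <= kappa
    z = alpha - random.uniform(0.0, 5.0)          # z < alpha
    delta = random.uniform(-1.0, 1.0)
    integrand = abs(1 + delta) * abs(math.exp(-gamma * alpha) - math.exp(gamma * (z - alpha)))
    assert integrand <= 2 * (1 + math.exp(kappa))
```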
+
+Since $\gamma_n \to \infty$ and $\theta_n \to \alpha$ as $n \to \infty$, the previous results suggest a connection to the Skorokhod map $\Gamma_\alpha$ defined in (2.1). In order to make this connection precise, we begin with the following lemma.
+
+**Lemma 9.3.** *Under the assumptions of Theorem 2.4, for any $L \in (0, \infty)$*
+
+$$
+\sup_{t \in [0, T \wedge \tau_{n,L}]} (Z_{n,1}(t) - \alpha_n)^+ \xrightarrow{P} 0 \text{ as } n \to \infty. \quad (9.9)
+$$
+
+*Proof.* Consider first the case $d_n \gg \sqrt{n} \log n$. For this case $\epsilon_n \doteq \frac{\sqrt{n} \log d_n}{d_n} \to 0$, and since
+
+$$
+Z_{n,1}(t) = \sqrt{n}(G_{n,1}(t) - \lambda_n) \le \sqrt{n}(1 - \lambda_n) = \frac{\sqrt{n} \log d_n}{d_n} + \alpha_n,
+$$
+
+we have that (9.9) holds. Now consider the complementary case, namely $d_n = O(\sqrt{n} \log n)$. We will use Corollary 9.2. Since $Z_{n,1}(0) \xrightarrow{P} z_1 \in \mathbb{R}$ with $z_1 \le \alpha$, we have $(Z_{n,1}(0) - \alpha_n)^+ \xrightarrow{P} 0$ as $n \to \infty$. It now suffices to show that for any $\epsilon > 0$
+
+$$
+\mathbf{P}\left( \sup_{t \in [0, T \wedge \tau_{n,L}]} Z_{n,1}(t) > \alpha_n + 6\epsilon \right) \to 0 \text{ as } n \to \infty.
+$$
+
+Let $\tau_n \doteq \inf\{t \ge 0 \mid Z_{n,1}(t) > \alpha_n + 6\epsilon\}$ and, as before, $T_n \doteq T \wedge \tau_{n,L}$. It is then enough to show that $\mathbf{P}(\tau_n \le T_n) \to 0$ as $n \to \infty$. For this, inductively define stopping times $\sigma_{n,0} = 0$,
+
+$$
+\begin{align*}
+\sigma_{n,2k-1} &= \inf\{t > \sigma_{n,2k-2} | Z_{n,1}(t) > \alpha_n + 3\epsilon\} \\
+\sigma_{n,2k} &= \inf\{t > \sigma_{n,2k-1} | Z_{n,1}(t) < \alpha_n + 2\epsilon\}, \quad k \in \mathbb{N}.
+\end{align*}
+$$
+
+Note that for each $n \in \mathbb{N}$, $\sigma_{n,r} \to \infty$ as $r \to \infty$, almost surely. Also, henceforth, without loss of generality we consider only $n$ that are large enough so that $1/\sqrt{n} < \epsilon$. Hence on the set $\{\tau_n < \infty\}$, $\tau_n \in [\sigma_{n,2k-1}, \sigma_{n,2k})$ for some $k \in \mathbb{N}$. Then for every $K \in \mathbb{N}$
+
+$$
+\mathbf{P}(\tau_n \le T_n) \le \sum_{k=1}^{K} \mathbf{P}(\tau_n \in [\sigma_{n,2k-1}, \sigma_{n,2k} \wedge T_n]) + \mathbf{P}(\sigma_{n,2K+1} \le T_n).
+$$
+
+Hence to complete the proof it is enough to show that
+
+(1) For each $k \in \mathbb{N}$, $\lim_{n\to\infty} P(\tau_n \in [\sigma_{n,2k-1}, \sigma_{n,2k} \wedge T_n]) = 0$,
+
+(2) $\lim_{K\to\infty} \limsup_{n\to\infty} P(\sigma_{n,2K+1} \le T_n) = 0$.
+
+Consider (1) first. Note that on the set $C_{n,1} := \{\, Z_{n,1}(0) \le \alpha_n + 3\epsilon \,\}$, for any $k \in \mathbb{N}$,
+
+$$
+\begin{equation}
+\begin{aligned}
+\alpha_n + 3\epsilon &\le Z_{n,1}(\sigma_{n,2k-1}) = Z_{n,1}(\sigma_{n,2k-1}-) + Z_{n,1}(\sigma_{n,2k-1}) - Z_{n,1}(\sigma_{n,2k-1}-) \\
+&\le Z_{n,1}(\sigma_{n,2k-1}-) + \epsilon \le \alpha_n + 4\epsilon.
+\end{aligned}
+\tag{9.10}
+\end{equation}
+$$
+
+Similarly,
+
+$$
+Z_{n,1}(t) \geq \alpha_n + \epsilon \quad \text{for each } t \in [\sigma_{n,2k-1}, \sigma_{n,2k}]. \tag{9.11}
+$$
+
+Let $M'_n(t) = \sqrt{n}M_{n,1}(t+\sigma_{n,2k-1}) - \sqrt{n}M_{n,1}(\sigma_{n,2k-1})$ for $t \ge 0$ and consider the sets
+
+$$
+C_{n,2} := \{\tau_n, \sigma_{n,2k-1} \le T_n\}, \quad C_{n,3} := \left\{ |U_n|_{*,T_n} \le \epsilon/2, |\delta_n|_{*,T_n} \le \frac{1}{2} \right\}.
+$$
+
+Then on the set $C_n = \cap_{i=1}^3 C_{n,i}$, using Corollary 9.2, for any $t \in [0, (T_n \wedge \sigma_{n,2k}) - \sigma_{n,2k-1}]$,
+
+$$
+\begin{align*}
+Z_{n,1}(t + \sigma_{n,2k-1}) - Z_{n,1}(\sigma_{n,2k-1}) \\
+&= -\int_{\sigma_{n,2k-1}}^{\sigma_{n,2k-1}+t} (Z_{n,1}(s) - Z_{n,2}(s))ds + M'_{n}(t) + U_{n}(t + \sigma_{n,2k-1}) - U_{n}(\sigma_{n,2k-1}) \\
+&\quad -\int_{\sigma_{n,2k-1}}^{\sigma_{n,2k-1}+t} \gamma_n^{-1}(1 + \delta_n(s))^{+} e^{\gamma_n(Z_{n,1}(s)-\alpha_n)} \mathbb{I}_{\{Z_{n,1}(s)\ge\alpha_n\}}\, ds
+\end{align*}
+$$
+
+Since for $t$ in the above interval $\sigma_{n,2k-1} + t \le T_n \le \tau_{n,L}$, $|Z_{n,1}(s)| + |Z_{n,2}(s)| \le 2L$ for any $s \le \sigma_{n,2k-1}+t$. Also, since $\sigma_{n,2k-1}+t \le \sigma_{n,2k}$, by (9.11), $Z_{n,1}(s)-\alpha_n \ge \epsilon$ for any $s \in [\sigma_{n,2k-1}, \sigma_{n,2k-1}+t]$. Thus on $C_n$ we have
+
+$$ Z_{n,1}(t + \sigma_{n,2k-1}) - Z_{n,1}(\sigma_{n,2k-1}) \le 2Lt + M'_n(t) + \epsilon - \frac{t}{2\gamma_n} \exp(\gamma_n \epsilon) \doteq Y_n(t). \quad (9.12) $$
+
+Using (9.10), on $C_n$, $Z_{n,1}(\tau_n) - Z_{n,1}(\sigma_{n,2k-1}) \ge \alpha_n + 6\epsilon - \alpha_n - 4\epsilon = 2\epsilon$. Hence
+
+$$
+\begin{aligned}
+P(\tau_n \in [\sigma_{n,2k-1}, \sigma_{n,2k} \wedge T_n)) &\le P(\tau_n \in [\sigma_{n,2k-1}, \sigma_{n,2k} \wedge T_n), C_n) + P(C_{n,1}^c) + P(C_{n,3}^c) \\
+&\le P\left(\sup_{t \in [0,T]} Y_n(t) \ge 2\epsilon\right) + P(C_{n,1}^c) + P(C_{n,3}^c),
+\end{aligned}
+\quad (9.13)
+$$
+
+where the second inequality follows on observing that on the set $\{\tau_n \in [\sigma_{n,2k-1}, \sigma_{n,2k} \wedge T_n)\}$, (9.12) holds with $t$ replaced by $\tau_n$. Next note that $M'_n$ is a $\mathcal{G}_t^n$ martingale, where $\mathcal{G}_t^n = \mathcal{F}_{t+\sigma_{n,2k-1}}^n$ and
+
+$$
+\begin{align*}
+\langle M'_n \rangle_t &= \langle \sqrt{n} M_{n,1} \rangle_{t+\sigma_{n,2k-1}} - \langle \sqrt{n} M_{n,1} \rangle_{\sigma_{n,2k-1}} \\
+&= \int_{\sigma_{n,2k-1}}^{\sigma_{n,2k-1}+t} \left[ G_{n,1}(s) - G_{n,2}(s) + \lambda_n(1 - \beta_n(G_{n,1}(s))) \right] ds \\
+&\le 2t,
+\end{align*}
+$$
+
+where the second equality is from (3.6).
+
+Since $\gamma_n \to \infty$ we can apply Lemma 6.9 to conclude
+
+$$ P\left(\sup_{t \in [0,T]} Y_n(t) \ge 2\epsilon\right) = P\left(\sup_{t \in [0,T]} \left[M'_n(t) - \left(\frac{\exp(\gamma_n \epsilon)}{2\gamma_n} - 2L\right)t\right] \ge \epsilon\right) \to 0 $$
+
+as $n \to \infty$. We also have $\lim_n P(C_{n,i}^c) = 0$ for $i=1,3$ since, as noted earlier, $(Z_{n,1}(0) - \alpha_n)^+ \xrightarrow{P} 0$, and by Corollary 9.2, respectively. From these observations it follows that the right side of (9.13) converges to 0 as $n \to \infty$, which completes the proof of (1).
+
+Now we prove (2). Let $\rho_{n,i} = \sigma_{n,i} \wedge \tau_{n,L}$ and define
+
+$$ Y_{n,K}(t) \doteq \sum_{i=0}^{K} (Z_{n,1}(t \wedge \rho_{n,2i+1}) - Z_{n,1}(t \wedge \rho_{n,2i})). $$
+
+Note that $\{\sigma_{n,2K+1} \le T_n\} \subseteq \{Y_{n,K}(T) \ge K\epsilon\}$ and hence to prove (2) it is sufficient to show that
+
+$$ \limsup_{n \to \infty} P(Y_{n,K}(T) \ge K\epsilon) \to 0 \quad \text{as } K \to \infty. \quad (9.14) $$
+
+From Corollary 9.2, we have that on the set $C_{n,4} = \{\text{tv}(U_n; [0,T_n]) \le 1\}$,
+
+$$
+\begin{align*}
+Y_{n,K}(T) &= \sum_{i=0}^{K} \int_{T \wedge \rho_{n,2i}}^{T \wedge \rho_{n,2i+1}} (Z_{n,2}(s) - Z_{n,1}(s)) ds + \sum_{i=0}^{K} \left( \sqrt{n} M_{n,1}(T \wedge \rho_{n,2i+1}) - \sqrt{n} M_{n,1}(T \wedge \rho_{n,2i}) \right) \\
+&\quad + \sum_{i=0}^{K} \left( U_n(T \wedge \rho_{n,2i+1}) - U_n(T \wedge \rho_{n,2i}) \right) - \sum_{i=0}^{K} \int_{T \wedge \rho_{n,2i}}^{T \wedge \rho_{n,2i+1}} \gamma_n^{-1} (1 + \delta_n(s))^+ e^{\gamma_n (Z_{n,1}(s)-\alpha_n)} \mathbb{I}_{\{Z_{n,1}(s) \ge \alpha_n\}}\, ds \\
+&\le 2LT + \sum_{i=0}^{K} (\sqrt{n} M_{n,1}(T \wedge \rho_{n,2i+1}) - \sqrt{n} M_{n,1}(T \wedge \rho_{n,2i})) + \text{tv}(U_n; [0,T]) \\
+&\le 2LT + 1 + M'_{n,K}(T)
+\end{align*}
+$$
+
+where we have used the facts that $\sup_{s \le \tau_{n,L}} |Z_{n,1}(s)| \le L$, and that the rightmost term in the third line is non-positive. Also, here
+
+$$ M'_{n,K}(t) = \sum_{i=0}^{K} (\sqrt{n} M_{n,1}(t \wedge \rho_{n,2i+1}) - \sqrt{n} M_{n,1}(t \wedge \rho_{n,2i})) . $$
+
+Using (3.6), we see that $M'_{n,K}$ is a $\mathcal{F}_t^n$-martingale with quadratic variation given by
+
+$$
+\begin{align*}
+\langle M'_{n,K} \rangle_t &= \sum_{i=0}^{K} \left( \langle \sqrt{n} M_{n,1} \rangle_{t\wedge\rho_{n,2i+1}} - \langle \sqrt{n} M_{n,1} \rangle_{t\wedge\rho_{n,2i}} \right) \\
+&= \sum_{i=0}^{K} \int_{t\wedge\rho_{n,2i}}^{t\wedge\rho_{n,2i+1}} (G_{n,1}(s) - G_{n,2}(s) + \lambda_n - \lambda_n\beta_n(G_{n,1}(s))) ds \\
+&\le 2t.
+\end{align*}
+$$
+
+Hence
+
+$$
+\begin{align*}
+\boldsymbol{P}(Y_{n,K}(T) \geq K\epsilon) &\leq \boldsymbol{P}(Y_{n,K}(T) \geq K\epsilon, C_{n,4}) + \boldsymbol{P}(C_{n,4}^c) \\
+&\leq \boldsymbol{P}(M'_{n,K}(T) > K\epsilon - (2LT + 1)) + \boldsymbol{P}(C_{n,4}^c) \\
+&\leq \frac{\boldsymbol{E}M'^2_{n,K}(T)}{(K\epsilon - (2LT + 1))^2} + \boldsymbol{P}(C_{n,4}^c) \\
+&\leq \frac{2T}{(K\epsilon - (2LT + 1))^2} + \boldsymbol{P}(C_{n,4}^c).
+\end{align*}
+$$
+
+From Corollary 9.2, $\mathbf{P}(C_{n,4}^c) \to 0$ as $n \to \infty$. This together with the above display shows $\lim_{K \to \infty} \limsup_{n \to \infty} \mathbf{P}(Y_{n,K}(T) \ge K\epsilon) = 0$. Thus we have shown (9.14) and the proof of (2) is complete. The result follows. ■
+
+**Lemma 9.4.** Suppose the hypothesis of Theorem 2.4 holds. Then for each $n \in \mathbb{N}$, there is a real constant $\theta_n = \alpha_n + O(\sqrt{n}/d_n) \ge 0$ and processes $U'_n, V_n$ with sample paths in $D([0, \infty) : \mathbb{R})$ such that, with $\tilde{Z}_{n,1} \doteq Z_{n,1} \wedge \theta_n$,
+
+$$
+\tilde{Z}_{n,1}(t) = \Gamma_{\theta_n} \left( \tilde{Z}_{n,1}(0) - \int_0^{\cdot} (\tilde{Z}_{n,1}(s) - Z_{n,2}(s))\, ds + \sqrt{n} M_{n,1}(\cdot) + U'_n(\cdot) \right)(t), \quad \text{and} \quad (9.15)
+$$
+
+$$
+Z_{n,2}(t) = Z_{n,2}(0) - \int_0^t (Z_{n,2}(s) - Z_{n,3}(s))ds + V_n(t) + \eta_n(t) \quad \text{for all } t > 0,
+$$
+
+where
+
+$$
+\eta_n = \hat{\Gamma}_{\theta_n} \left( \tilde{Z}_{n,1}(0) - \int_0^{\cdot} (\tilde{Z}_{n,1}(s) - Z_{n,2}(s))\, ds + \sqrt{n} M_{n,1}(\cdot) + U'_n(\cdot) \right). \quad (9.16)
+$$
+
+Furthermore, for any $L, T \in (0, \infty)$, $|U'_n|_{*,T \wedge \tau_{n,L}}$, $|V_n|_{*,T \wedge \tau_{n,L}}$ and $|(Z_{n,1} - \theta_n)^+|_{*,T \wedge \tau_{n,L}} \xrightarrow{P} 0$ as $n \to \infty$.
+
+*Proof.* Let $\theta_n$ be as in Lemma 9.1. Since $d_n \gg \sqrt{n}$, $\theta_n = \alpha_n + o(1)$, and Lemma 9.3 shows
+
+$$
+|(Z_{n,1} - \theta_n)^+|_{*, T \wedge \tau_{n,L}} \xrightarrow{P} 0. \tag{9.17}
+$$
+
+Note that $\tilde{Z}_{n,1} = Z_{n,1} - (Z_{n,1} - \theta_n)^+$. Hence we can rewrite (9.1) and (9.2) as
+
+$$
+\begin{align}
+\tilde{Z}_{n,1}(t) &= \tilde{Z}_{n,1}(0) - \int_0^t \tilde{Z}_{n,1}(s)ds + \int_0^t Z_{n,2}(s)ds + \sqrt{n}M_{n,1}(t) + U'_n(t) - \eta_n(t) \tag{9.18} \\
+Z_{n,2}(t) &= Z_{n,2}(0) - \int_0^t (Z_{n,2}(s)-Z_{n,3}(s))ds + V_n(t) + \eta_n(t),
+\end{align}
+$$
+
+where
+
+$$
+U'_{n}(t) = U_{n}(t) - \int_{0}^{t} (Z_{n,1}(s) - \theta_{n})^{+} ds - (Z_{n,1}(t) - \theta_{n})^{+} + (Z_{n,1}(0) - \theta_{n})^{+}.
+$$
+
+The properties of $\eta_n$ from Lemma 9.1 (and Corollary 9.2) say that $\eta_n$ is a non-decreasing process,
+with $\eta_n(0) = 0$ and $\eta_n(t) = \int_0^t \mathbb{I}_{\{\tilde{Z}_{n,1}(s)=\theta_n\}}\, d\eta_n(s)$. Since $\tilde{Z}_{n,1} \le \theta_n$, (9.18) and the characterizing
+properties of the Skorokhod map show (9.15) and (9.16). Finally, by Lemma 9.1, Corollary 9.2, and Lemma 9.3
+
+$$
+|U_n|_{*,T \wedge \tau_{n,L}} \xrightarrow{P} 0, \text{ and } |V_n|_{*,T \wedge \tau_{n,L}} \xrightarrow{P} 0
+$$
+
+as $n \to \infty$. Hence, using (9.17), $|U'_n|_{*,T \wedge \tau_{n,L}} \xrightarrow{P} 0$ as $n \to \infty$, and the result follows. ■
+
+The following lemma will be needed in order to prove the tightness of $\mathbf{Z}_n$.
+
+**Lemma 9.5.** *Under the hypothesis of Theorem 2.4, the collection of random variables $\{\|\mathbf{Z}_n\|_{2,T}\}_{n\in\mathbb{N}}$ is tight for any $T \in (0, \infty)$.*
+
+*Proof.* Fix $T \in (0, \infty)$. In Lemma 9.4, using the definition of the Skorokhod map $\Gamma_{\theta_n}$ for $\theta_n \ge 0$ (see (2.1)), we have, for any $t > 0$, that
+
+$$
+\eta_n(t) \le |\tilde{Z}_{n,1}(0)| + \int_0^t |\tilde{Z}_{n,1}(s)| ds + \int_0^t |Z_{n,2}(s)| ds + |\sqrt{n}M_{n,1}|_{*,t} + |U'_n|_{*,t}.
+$$
+
+This shows that for any $t \ge 0$
+
+$$
+\begin{align*}
+|\tilde{Z}_{n,1}|_{*,t} &\le 2\left( |\tilde{Z}_{n,1}(0)| + \int_0^t |\tilde{Z}_{n,1}|_{*,s} ds + \int_0^t |Z_{n,2}|_{*,s} ds + |\sqrt{n}M_{n,1}|_{*,t} + |U'_n|_{*,t} \right) \\
+|Z_{n,2}|_{*,t} &\le |\tilde{Z}_{n,1}(0)| + |Z_{n,2}(0)| + \int_0^t |\tilde{Z}_{n,1}|_{*,s} ds + \int_0^t (2|Z_{n,2}|_{*,s} + |Z_{n,3}|_{*,s})ds \\
+&\quad + |\sqrt{n}M_{n,1}|_{*,t} + |U'_n|_{*,t} + |V_n|_{*,t},
+\end{align*}
+$$
+
+and
+
+$$
+|Z_{n,i}|_{*,t} \le |Z_{n,i}(0)| + \int_0^t |Z_{n,i}|_{*,s} ds + \int_0^t |Z_{n,i+1}|_{*,s} ds + |W_{n,i}|_{*,t} \quad \text{for } i \in \{3, \dots, r\}
+$$
+
+where the last line is from Lemma 9.1. Let $H_t = |\tilde{Z}_{n,1}|_{*,t} + |Z_{n,2}|_{*,t} + \dots + |Z_{n,r}|_{*,t}$. Adding the inequalities in the above display, we have for $t \in [0, \tau]$ and $\tau \in [0, T]$ that
+
+$$
+0 \le H_t \le 4 \left( H_0 + |\sqrt{n}M_{n,1}|_{*,\tau} + |U'_n|_{*,\tau} + |V_n|_{*,\tau} + \sum_{i=3}^{r} |W_{n,i}|_{*,\tau} + \int_0^t H_s ds \right).
+$$
+
+By Gronwall's inequality, for all $\tau \in [0, T]$,
+
+$$
+H_\tau \le 4 \left( H_0 + |\sqrt{n}M_{n,1}|_{*,\tau} + |U'_n|_{*,\tau} + |V_n|_{*,\tau} + \sum_{i=3}^{r} |W_{n,i}|_{*,\tau} \right) e^{4\tau}. \quad (9.19)
+$$
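The Gronwall step above can be sanity-checked numerically on a toy, saturated version of the inequality; `A` below is an illustrative stand-in for the constant term $4\big(H_0 + |\sqrt{n}M_{n,1}|_{*,\tau} + \dots\big)$.

```python
import math

# Discrete check of the Gronwall step behind (9.19):
# if H_t <= A + 4 * int_0^t H_s ds, then H_t <= A * exp(4 t).
A, T, steps = 2.5, 1.0, 100_000   # illustrative constants
dt = T / steps

H = A                             # saturate the inequality at t = 0
for _ in range(steps):
    H += 4.0 * H * dt             # Euler step for the saturated bound H' = 4 H

# Euler gives H = A * (1 + 4*dt)^steps <= A * exp(4 T), so the bound holds
assert H <= A * math.exp(4 * T)
```

Since $1 + x \le e^x$, the Euler iterate never exceeds $A e^{4T}$, mirroring how (9.19) follows from the integral inequality.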
+
+Let $\vec{Z}_n := (\tilde{Z}_{n,1}, Z_{n,2}, \dots, Z_{n,r})$. Since $\vec{Z}_n(0) \xrightarrow{P} (z_1, \dots, z_r)$ and $\sqrt{n}M_n \Rightarrow Be_1$, for every $\epsilon > 0$ there is an $L_1 \in (0, \infty)$ such that for every $n \in \mathbb{N}$
+
+$$
+\mathbf{P}(C_{n,1}) \leq \frac{\epsilon}{2}, \text{ where } C_{n,1} = \{H_0 + |\sqrt{n}M_{n,1}|_{*,T} \geq L_1\}.
+$$
+
+Applying Lemmas 9.1 and 9.4 with $L = 4(L_1 + 1)e^{4T} + 2$, we can find an $n_0 \in \mathbb{N}$ so that $\mathbf{P}(C_{n,2}) \leq \frac{\epsilon}{2}$ for $n \geq n_0$, where
+
+$$
+C_{n,2} = \left\{ |U'_{n}|_{*,T_n} + |V_n|_{*,T_n} + \sum_{i=3}^{r} |W_{n,i}|_{*,T_n} + |(Z_{n,1} - \alpha_n)^+|_{*,T_n} + \|\mathbf{Z}_{n,r+}\|_{2,T_n} \ge 1 \right\}
+$$
+
+and $T_n = T \wedge \tau_{n,L}$. On the event $(C_{n,1} \cup C_{n,2})^c$
+
+$$
+\|\vec{Z}_n\|_{1,T_n} = H_{T_n} < 4(L_1+1)e^{4T}
+$$
+
+by (9.19), and hence by the triangle inequality
+
+$$
+\begin{align}
+\|\mathbf{Z}_n\|_{2,T_n} & \leq \|\vec{Z}_n\|_{1,T_n} + |(Z_{n,1} - \alpha_n)^+|_{*,T_n} + \|\mathbf{Z}_{n,r+}\|_{2,T_n} \tag{9.20} \\
+& < 4(L_1+1)e^{4T} + 1 = L-1. \nonumber
+\end{align}
+$$
+
+Also, by the definition of $\tau_{n,L}$, $\|\mathbf{Z}_n(\tau_{n,L})\|_2 \ge L - \frac{1}{\sqrt{n}}$ on the set $\{\tau_{n,L} < T\}$. Hence we must have $\tau_{n,L} \ge T$ whenever (9.20) holds, and hence
+
+$$
+\|\mathbf{Z}_n\|_{2,T} < L - 1 \text{ on the event } (C_{n,1} \cup C_{n,2})^c.
+$$
+
+This shows that
+
+$$
+P\left(\|\mathbf{Z}_n\|_{2,T} \ge L\right) \le P(C_{n,1} \cup C_{n,2}) \le \epsilon \quad \text{for all } n \ge n_0.
+$$
+
+Since $\epsilon > 0$ is arbitrary, the result follows. ■
+
+The following result is immediate from Lemmas 6.5, 9.1, 9.4, and 9.5.
+
+**Corollary 9.6.** Under the hypothesis of Theorem 2.4, for any $T > 0$, $\lim_{L \to \infty} \sup_n P(\tau_{n,L} \le T) = 0$. In particular the processes $W_{n,i}, U'_n, V_n, \|\mathbf{Z}_{n,r+}\|_2, (Z_{n,1} - \alpha_n)^+, (Z_{n,1} - \theta_n)^+$ converge in probability to zero in $D([0, \infty) : \mathbb{R})$.
+
+**Corollary 9.7.** *Under the hypothesis of Theorem 2.4, the sequence of processes $\{\mathbf{Z}_n\}_{n \in \mathbb{N}}$ is tight in $D([0, \infty) : \ell_2)$.*
+
+*Proof.* Let $\theta_n$ be as in Lemma 9.4. Then by Corollary 9.6, for each fixed $T < \infty$, $\|\mathbf{Z}_{n,r+}\|_{2,T} \xrightarrow{P} 0$ and $|(Z_{n,1}-\theta_n)^+|_{*,T} \xrightarrow{P} 0$. Hence it is sufficient to show that the sequence $\{\vec{Z}_n\}_{n \in \mathbb{N}}$ introduced in the proof of Lemma 9.5 is tight in $D([0,T] : \mathbb{R}^r)$. From Lemma 9.5, the convergence of $W_{n,i}$ in Corollary 9.6, and the equations for $Z_{n,j}, j = 3, \dots, r$ in Lemma 9.1, it is immediate that $(Z_{n,3}, \dots, Z_{n,r})$ is tight in $D([0, \infty) : \mathbb{R}^{r-2})$. Finally consider the pair $(\tilde{Z}_{n,1}, Z_{n,2})$. Once again using Lemma 9.5, the convergence of $\sqrt{n}M_{n,1}$ in Lemma 9.1, and the convergence of $U'_n$ in Corollary 9.6, it follows that
+
+$$
+R_n \doteq \tilde{Z}_{n,1}(0) - \int_0^\cdot (\tilde{Z}_{n,1}(s) - Z_{n,2}(s))\, ds + \sqrt{n} M_{n,1}(\cdot) + U'_n(\cdot) \quad (9.21)
+$$
+
+is tight in $D([0, \infty) : \mathbb{R})$. Using the identity
+
+$$
+\Gamma_{\theta_n}(R_n)(t) = \Gamma_{\theta_n}(\Gamma_{\theta_n}(R_n)(s) + R_n(\cdot + s) - R_n(s))(t-s)
+$$
+
+for $0 \le s \le t \le T$, we see from the definition of the Skorokhod map that
+
+$$
+|\Gamma_{\theta_n}(R_n)(t) - \Gamma_{\theta_n}(R_n)(s)| \le 2 \sup_{s \le u \le t} |R_n(u) - R_n(s)|.
+$$
+
+Together with the tightness of $R_n$ this immediately implies the tightness of $\tilde{Z}_{n,1} = \Gamma_{\theta_n}(R_n)$ and of $\hat{\Gamma}_{\theta_n}(R_n)$. Finally the tightness of $Z_{n,2}$ is now immediate from Lemma 9.5, the convergence of $V_n$ in Corollary 9.6 and the tightness of $\hat{\Gamma}_{\theta_n}(R_n)$ noted above. The result follows.
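As a concrete illustration of the two Skorokhod map properties used in this proof, here is a minimal discrete-time sketch (the random-walk path and all parameters are illustrative), based on the explicit representation $\Gamma_\theta(R)(t) = R(t) - \sup_{s \le t}(R(s) - \theta)^+$ from (2.1):

```python
import random

def skorokhod_upper(path, theta):
    # Gamma_theta(R)(t) = R(t) - sup_{s <= t} (R(s) - theta)^+
    out, running = [], 0.0
    for r in path:
        running = max(running, r - theta)   # running sup of (R(s) - theta)^+
        out.append(r - running)
    return out

random.seed(0)
R = [0.0]
for _ in range(2000):                       # a random-walk test path
    R.append(R[-1] + random.uniform(-0.1, 0.1))

theta = 0.5
G = skorokhod_upper(R, theta)
assert all(g <= theta + 1e-12 for g in G)   # reflected path stays below the barrier

# oscillation bound: |Gamma(R)(t) - Gamma(R)(s)| <= 2 * sup_{s<=u<=t} |R(u) - R(s)|
for s in range(0, len(R), 97):
    for t in range(s, len(R), 131):
        osc = max(abs(R[u] - R[s]) for u in range(s, t + 1))
        assert abs(G[t] - G[s]) <= 2 * osc + 1e-12
```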
+
+*Proof of Theorem 2.4.* From Lemma 6.6 and from the tightness of $\{\|\mathbf{Z}_n(0)\|_1\}_{n\in\mathbb{N}}$, it follows under the conditions of the theorem that $\mu_n \xrightarrow{P} f_1$ and $G_n(0) \xrightarrow{P} f_1$ in $\ell_1^{\perp}$. This proves the first statement in the theorem. Now consider the second statement. Fix $T < \infty$. From Corollary 9.7, $\{\mathbf{Z}_n\}_{n\in\mathbb{N}}$ is tight in $D([0, \infty) : \ell_2)$. Also, from Lemma 9.1, $\sqrt{n}M_{n,1}$ converges in distribution to $\sqrt{2}B$, where $B$ is a standard Brownian motion, and from Corollary 9.6
+
+$$
+(\{W_{n,i}\}_{i=3}^{r}, U'_n, V_n, (Z_{n,1} - \theta_n)^+) \xrightarrow{P} 0 \text{ in } D([0,T] : \mathbb{R}^{r+1}).
+$$
+
+Suppose that along a subsequence
+
+$(\mathbf{Z}_n, \sqrt{n}M_{n,1}, \{W_{n,i}\}_{i=3}^r, U'_n, V_n, (Z_{n,1} - \theta_n)^+) \Rightarrow (\mathbf{Z}, \sqrt{2}B, \mathbf{0})$
+
+in $D([0, \infty) : \ell_2 \times \mathbb{R}^{r+2})$ and for notational simplicity label the subsequence once more as $\{n\}$. Also, by appealing to the Skorokhod representation theorem, we assume that all the processes in the above display are given on a common probability space and that the above convergence holds a.s. Since $J(Z_n) \le \frac{1}{\sqrt{n}}$ and $Z_n(0) \xrightarrow{P} z$, we have $J(Z) = 0$ and $Z(0) = z$ almost surely. In particular, $Z$ has sample paths in $C([0, \infty) : \ell_2)$ and $(Z_n, \sqrt{n}M_{n,1}) \to (Z, \sqrt{2}B)$ uniformly over compact time intervals in $\ell_2 \times \mathbb{R}$. Since by Corollary 9.6, for every $T < \infty$, $\|Z_{n,r+}\|_{2,T} \xrightarrow{P} 0$, it suffices to show that $(Z_1, \dots, Z_r)$ along with $B$ satisfy (2.13).
+
+From the equations of $(Z_{n,3}, \dots, Z_{n,r})$ in Lemma 9.1, uniform convergence of $Z_n$ to $Z$, and the uniform convergence of $W_{n,i}$ to 0, it is immediate that $(Z_3, \dots, Z_r)$ satisfy (2.13). Finally consider the equations for $(Z_1, Z_2)$. From (9.21) and uniform convergence properties observed above it is immediate that $R_n$ converges uniformly, a.s., to $R$ given as
+
+$$R(\cdot) = Z_1(0) - \int_0^\cdot (Z_1(s) - Z_2(s))\, ds + \sqrt{2}B(\cdot).$$
+
+Since $\theta_n = \alpha_n + O(\sqrt{n}/d_n) \to \alpha$, this shows that, for every $T < \infty$,
+
+$$
+\begin{aligned}
+\Gamma_{\theta_n}(R_n)(t) &= R_n(t) - \sup_{s \in [0,t]} (R_n(s) - \theta_n)^+ \\
+&\to R(t) - \sup_{s \in [0,t]} (R(s) - \alpha)^+ = \Gamma_\alpha(R)(t)
+\end{aligned}
+$$
+
+uniformly for $t \in [0, T]$, a.s., where $(R(s) - \alpha)^+$ is taken to be 0 when $\alpha = \infty$. Similarly,
+
+$$\hat{\Gamma}_{\theta_n}(R_n)(t) \to \hat{\Gamma}_\alpha(R)(t)$$
+
+uniformly for $t \in [0, T]$, a.s. Here, when $\alpha = \infty$, $\Gamma_\alpha$ and $\hat{\Gamma}_\alpha$ are as introduced in (2.11). The fact that $(Z_1, Z_2)$ solve the first two equations in (2.13) is now immediate from Lemma 9.4, the convergence $\tilde{Z}_{n,1} - Z_{n,1} \xrightarrow{P} 0$, and the uniform convergence of $V_n$ to 0 noted previously. The result follows. ■
+
+## ACKNOWLEDGEMENTS
+
+Research of SB is supported in part by NSF grants DMS-1613072, DMS-1606839 and ARO grant W911NF-17-1-0010. Research of AB is supported in part by the National Science Foundation (DMS-1814894 and DMS-1853968). Research of MD is supported by the NSF grant DMS-1613072 and NIH R01 grant HG009125-01.
+
+## REFERENCES
+
+[1] E. Altman, U. Ayesta, and B. J. Prabhu, *Load balancing in processor sharing systems*, Telecommunication Systems **47** (2011), no. 1-2, 35–48.
+
+[2] P. Billingsley, *Probability and Measure*, Wiley, New York, 1995.
+
+[3] M. Bramson, Y. Lu, and B. Prabhakar, *Asymptotic independence of queues under randomized load balancing*, Queueing Systems **71** (2012), no. 3, 247–292.
+
+[4] G. Brightwell, M. Fairthorne, and M. J Luczak, *The supermarket model with bounded queue lengths in equilibrium*, Journal of Statistical Physics **173** (2018), no. 3-4, 1149–1194.
+
+[5] A. Budhiraja and E. Friedlander, *Diffusion approximations for load balancing mechanisms in cloud storage systems*, Advances in Applied Probability **51** (2019), no. 1, 41–86.
+
+[6] V. Cardellini, E. Casalicchio, M. Colajanni, and P. S. Yu, *The state of the art in locally distributed web-server systems*, ACM Computing Surveys (CSUR) **34** (2002), no. 2, 263–311.
+
+[7] P. Eschenfeldt and D. Gamarnik, *Supermarket queueing system in the heavy traffic regime. Short queue dynamics*, arXiv preprint arXiv:1610.03522 (2016).
+
+[8] P. Eschenfeldt and D. Gamarnik, *Join the shortest queue with many servers. The heavy-traffic asymptotics*, Mathematics of Operations Research **43** (2018), no. 3, 867–886.
+
+[9] S. N. Ethier and T. G. Kurtz, *Markov Processes: Characterization and Convergence*, Vol. 282, John Wiley & Sons, 2009.
+
+[10] D. Gamarnik, J. N. Tsitsiklis, and M. Zubeldia, *Delay, memory, and messaging tradeoffs in distributed service systems*, ACM SIGMETRICS Performance Evaluation Review **44** (2016), no. 1, 1–12.
+
+[11] D. Gamarnik, J. N. Tsitsiklis, and M. Zubeldia, *Delay, memory, and messaging tradeoffs in distributed service systems*, Stochastic Systems **8** (2018), no. 1, 45–74.
+
+[12] C. Graham, *Chaoticity on path space for a queueing network with selection of the shortest queue among several*, Journal of Applied Probability **37** (2000), no. 1, 198–211.
+
+[13] V. Gupta, M. Harchol-Balter, K. Sigman, and W. Whitt, *Analysis of join-the-shortest-queue routing for web server farms*, Performance Evaluation **64** (2007), no. 9-12, 1062–1081.
+
+[14] P. Hunt and T. Kurtz, *Large loss networks*, Stochastic Processes and their Applications **53** (1994), no. 2, 363–378.
+
+[15] I. Karatzas and S. E Shreve, *Brownian Motion and Stochastic Calculus*, Springer-Verlag, New York, 1998.
+
+[16] T. G Kurtz, *Strong approximation theorems for density dependent Markov chains*, Stochastic Processes and their Applications **6** (1978), no. 3, 223–240.
+
+[17] M. Luczak and C. McDiarmid, *Asymptotic distributions and chaos for the supermarket model*, Electronic Journal of Probability **12** (2007), 75–99.
+
+[18] M. J. Luczak and C. McDiarmid, *On the maximum queue length in the supermarket model*, The Annals of Probability **34** (2006), no. 2, 493–527.
+
+[19] M. J. Luczak and J. Norris, *Strong approximation for the supermarket model*, The Annals of Applied Probability **15** (2005), no. 3, 2038–2061.
+
+[20] S. T. Maguluri, R. Srikant, and L. Ying, *Stochastic models of load balancing and scheduling in cloud computing clusters*, 2012 proceedings IEEE Infocom, 2012, pp. 702–710.
+
+[21] J. Martin and Y. M. Suhov, *Fast Jackson networks*, The Annals of Applied Probability **9** (1999), no. 3, 854–870.
+
+[22] E. J. McShane and T. A. Botts, *Real Analysis*, Courier Corporation, 2013.
+
+[23] M. Mitzenmacher, *The power of two choices in randomized load balancing*, IEEE Transactions on Parallel and Distributed Systems **12** (2001), no. 10, 1094–1104.
+
+[24] M. Mitzenmacher, A. W. Richa, and R. Sitaraman, *The power of two random choices: A survey of techniques and results*, Handbook of randomized computing, Vol. I, II, 2001, pp. 255–312. MR1966907
+
+[25] D. Mukherjee, S. C. Borst, J. S. van Leeuwaarden, and P. A. Whiting, *Universality of power-of-d load balancing in many-server systems*, Stochastic Systems **8** (2018), no. 4, 265–292.
+
+[26] A. Mukhopadhyay and R. R. Mazumdar, *Analysis of randomized join-the-shortest-queue (JSQ) schemes in large heterogeneous processor-sharing systems*, IEEE Transactions on Control of Network Systems **3** (2015), no. 2, 116–126.
+
+[27] D. Ongaro, S. M. Rumble, R. Stutsman, J. Ousterhout, and M. Rosenblum, *Fast crash recovery in RAMCloud*, Proceedings of the twenty-third ACM symposium on operating systems principles, 2011, pp. 29–41.
+
+[28] M. van der Boor, S. C. Borst, J. S. van Leeuwaarden, and D. Mukherjee, *Scalable load balancing in networked systems: A survey of recent advances*, arXiv preprint arXiv:1806.05444 (2018).
+
+[29] N. D. Vvedenskaya, R. L. Dobrushin, and F. I. Karpelevich, *Queueing system with selection of the shortest of two queues: An asymptotic approach*, Problemy Peredachi Informatsii **32** (1996), no. 1, 20–34.
+
+[30] K. Yosida, *Functional Analysis*, Springer Science & Business Media, 2012.
+
+## APPENDIX A. PROOFS OF RESULTS IN SECTION 5
+
+### A.1. Proof of Lemma 5.1:
+
+*Proof.* Fix $\epsilon \in (0, 1)$. First suppose $\frac{d_n}{n} \to 0$. Consider $x \in (\epsilon, 1]$. Let $\Delta_n(x) \doteq \log \beta_n(x) - \log \gamma_n(x)$. Let $n_0 \in \mathbb{N}$ be such that for all $n \ge n_0$, $d_n/n < \epsilon/2$. Then, for $n \ge n_0$,
+
+$$
+\begin{aligned}
+\Delta_n(x) &\doteq \sum_{i=0}^{d_n-1} \log\left(\frac{x-i/n}{1-i/n}\right) - \log x^{d_n} = \sum_{i=0}^{d_n-1} \left\{\log\left(\frac{x-i/n}{1-i/n}\right) - \log x\right\}, \\
+&= \sum_{i=0}^{d_n-1} \log\left(\frac{1-i/(nx)}{1-i/n}\right) = \sum_{i=0}^{d_n-1} \log\left(1-(i/n)\frac{1/x-1}{1-i/n}\right).
+\end{aligned}
+\quad (\text{A.1})
+$$
+
+Differentiating $\Delta_n$ gives,
+
+$$ \Delta'_n(x) = \sum_{i=0}^{d_n-1} \left( \frac{1}{x - i/n} - \frac{1}{x} \right) = \sum_{i=0}^{d_n-1} \frac{i/n}{x(x - i/n)}. $$
+
+Since $n \ge n_0$ and $x \in [\epsilon, 1]$ we have $x(x - \frac{i}{n}) \ge \epsilon^2/2$ for $i \le d_n - 1$. Hence,
+
+$$|\Delta'_n(x)| \le \frac{2}{\epsilon^2} \sum_{i=0}^{d_n-1} (i/n) \le \frac{1}{\epsilon^2} \frac{d_n^2}{n}.$$
+
+From the definition of $\Delta_n$, we also have,
+
+$$\Delta'_{n}(x) = \frac{\beta'_{n}(x)}{\beta_{n}(x)} - \frac{\gamma'_{n}(x)}{\gamma_{n}(x)} = \frac{\gamma'_{n}(x)}{\gamma_{n}(x)} \left( \frac{\beta'_{n}(x)}{\gamma'_{n}(x)} \frac{\gamma_{n}(x)}{\beta_{n}(x)} - 1 \right). \quad (\text{A.2})$$
+
+Since $\frac{\gamma_n'(x)}{\gamma_n(x)} = \frac{d_n}{x} \ge d_n$ for $x \in [\epsilon, 1]$, from (A.2) we have,
+
+$$\sup_{x \in [\epsilon, 1]} \left| \frac{\beta_n'(x)}{\gamma_n'(x)} \frac{\gamma_n(x)}{\beta_n(x)} - 1 \right| \le \frac{1}{d_n} \sup_{x \in [\epsilon, 1]} |\Delta_n'(x)| \le \frac{1}{\epsilon^2} \frac{d_n}{n} \to 0.$$
+
+This proves (5.4).
+
+Now assume $\frac{d_n}{\sqrt{n}} \to 0$. Once more consider $x \in (\epsilon, 1]$ and $n \ge n_0$. Let $C := \sup_{n \ge n_0} \frac{1/\epsilon - 1}{1 - d_n/n} < \infty$ and let $n_1 > n_0$ be such that $d_n C/n < 1/2$ for all $n \ge n_1$. Then for $n \ge n_1$ and $x \in [\epsilon, 1]$:
+
+$$|\Delta_n(x)| \le \sum_{i=0}^{d_n-1} 2 \left| (i/n) \frac{1/x - 1}{1 - i/n} \right| \le 2C \sum_{i=0}^{d_n-1} i/n \le C \frac{d_n^2}{n}, \quad (\text{A.3})$$
+
+where the first inequality is from (A.1) and the inequality $|\log(1+h)| \le 2|h|$ for $|h| \le 1/2$. This shows $\sup_{x \in [\epsilon, 1]} |\Delta_n(x)| \to 0$, hence showing the first convergence in (5.5). Finally, the second convergence in (5.5) is immediate on combining the first convergence with (5.4). ■
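The uniform estimate in (A.3) can be illustrated numerically; the parameters and the constant `10` in the assertion are illustrative stand-ins for $2C$.

```python
import math

# beta_n(x) = prod_{i<d} (x - i/n)/(1 - i/n) and gamma_n(x) = x^d;
# (A.3) says Delta_n = log beta_n - log gamma_n is O(d^2/n) uniformly on [eps, 1]
def log_beta(x, n, d):
    return sum(math.log((x - i / n) / (1 - i / n)) for i in range(d))

n, d, eps = 10**7, 100, 0.2                 # illustrative; d^2/n = 1e-3
grid = [eps + j * (1 - eps) / 50 for j in range(51)]
worst = max(abs(log_beta(x, n, d) - d * math.log(x)) for x in grid)

assert worst <= 10 * d * d / n              # |Delta_n| = O(d^2/n) on [eps, 1]
```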
+
+### A.2. Proof of Corollary 5.2:
+
+*Proof.* This is an immediate consequence of the estimate in (A.3). ■
+
+### A.3. Proof of Corollary 5.3:
+
+*Proof.* Let $\epsilon > 0$ and $n_0 \in \mathbb{N}$ be such that $\mu_{n,i} > \epsilon$ for all $n \ge n_0$. By Lemma 5.1, as $n \to \infty$:
+
+$$\frac{\beta'_{n}(\mu_{n,i})}{\beta_{n}(\mu_{n,i})} = (1 + o(1)) \frac{\gamma'_{n}(\mu_{n,i})}{\gamma_{n}(\mu_{n,i})}. \quad (\text{A.4})$$
+
+Recall that $\mu_{n,i+1} = \lambda_n\beta_n(\mu_{n,i})$ and $\gamma_n(x) = x^{d_n}$. Hence (A.4) gives:
+
+$$\frac{\beta'_{n}(\mu_{n,i})}{\mu_{n,i+1}/\lambda_n} = (1 + o(1)) \frac{d_n}{\mu_{n,i}} \quad (\text{A.5})$$
+
+completing the proof. ■
+
+### A.4. Proof of Lemma 5.4:
+
+*Proof.* From Corollary 5.2, there is a $n_0 \in \mathbb{N}$ and $C \in (0, \infty)$ such that for all $n \ge n_0$:
+
+$$\sup_{x \in [\epsilon, 1]} |\log \beta_n(x) - \log \gamma_n(x)| \le \frac{Cd_n^2}{n}.$$
+
+Thus, if for $n \ge n_0$ and $i \in \mathbb{N}$, $\mu_{n,i} \ge \epsilon$, then:
+
+$$\begin{align}
+\log \mu_{n,i+1} &= \log \lambda_n + \log \beta_n(\mu_{n,i}) = \log \lambda_n + \log \gamma_n(\mu_{n,i}) + \gamma_{n,i} \\
+&= \log \lambda_n + d_n \log \mu_{n,i} + \gamma_{n,i},
+\tag{A.6}
+\end{align}$$
+
+where $\lvert\gamma_{n,i}\rvert \le \frac{Cd_n^2}{n}$. Now let $k \in \mathbb{N}$ and $n_1 \in \mathbb{N}$ be such that for all $n \ge n_1$, $\mu_{n,k} \ge \epsilon$. We will show that, for $n \ge n_0 \lor n_1$ and $j \in \{1, \dots, k\}$,
+
+$$\log \mu_{n,j+1} = (\log \lambda_n) \left( \sum_{i=0}^{j} d_n^i \right) + \beta_{n,j},
+\quad (\text{A.7})$$
+
+where $|\beta_{n,j}| \le \frac{C}{n} \sum_{i=1}^{j} d_n^{i+1}$. Note that the lemma is immediate from (A.7) on taking $j=k$. To prove (A.7) we argue inductively. First note that since $\boldsymbol{\mu}_n \in \ell_1^\perp$, $\mu_{n,i} \ge \mu_{n,k} \ge \epsilon$ for each $i \le k$ and $n \ge n_1$. Hence (A.6) holds for each $i \le k$ and $n \ge n_0 \lor n_1$. Taking $i=1$ in (A.6) and noting that $\mu_{n,1} = \lambda_n$ proves (A.7) for the case $j=1$.
+
+Suppose now (A.7) holds for some $j \le k-1$. Then, using $i = j+1$ in (A.6),
+
+$$\log \mu_{n,j+2} = \log \lambda_n + d_n \log \mu_{n,j+1} + \gamma_{n,j+1},$$
+
+where $|\gamma_{n,j+1}| \le \frac{Cd_n^2}{n}$. By the induction hypothesis, (A.7) holds for $j$. Hence
+
+$$\log \mu_{n,j+2} = \log \lambda_n + d_n \left\{ (\log \lambda_n) \left( \sum_{i=0}^{j} d_n^i \right) + \beta_{n,j} \right\} + \gamma_{n,j+1} = (\log \lambda_n) \left( \sum_{i=0}^{j+1} d_n^i \right) + d_n \beta_{n,j} + \gamma_{n,j+1}$$
+
+and hence $\beta_{n,j+1} = d_n \beta_{n,j} + \gamma_{n,j+1}$. This shows
+
+$$|\beta_{n,j+1}| = |d_n\beta_{n,j} + \gamma_{n,j+1}| \le d_n \frac{C}{n} \sum_{i=1}^{j} d_n^{i+1} + \frac{Cd_n^2}{n} = \frac{C}{n} \sum_{i=1}^{j+1} d_n^{i+1}$$
+
+which shows that (A.7) holds for $j+1$. This completes the proof. ■
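The induction bound on $\beta_{n,j}$ can also be exercised directly with a toy iteration (`C`, `n`, `d` and the error draws are illustrative admissible values):

```python
import random

# Iterating beta_{j+1} = d * beta_j + gamma_{j+1} with |gamma_i| <= C d^2 / n
# keeps |beta_j| <= (C/n) * sum_{i=1}^{j} d^{i+1}, as in the proof of (A.7)
random.seed(1)
C, n, d = 1.0, 10**6, 10
beta = 0.0
for j in range(1, 9):
    gamma = random.uniform(-1.0, 1.0) * C * d * d / n   # admissible error term
    beta = d * beta + gamma                             # beta_1 = gamma_1 at j = 1
    bound = (C / n) * sum(d ** (i + 1) for i in range(1, j + 1))
    assert abs(beta) <= bound + 1e-12
```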
+
+### A.5. Proof of Corollary 5.5:
+
+*Proof.* Since $d_n \to \infty$, the assumption $\frac{\xi_n^2}{d_n} \to 0$ shows that $\frac{|\xi_n|}{d_n} \le \frac{1+\xi_n^2}{d_n} \to 0$. This shows that $\epsilon_n \doteq 1 - \lambda_n = \frac{\xi_n + \log d_n}{d_n^k}$ also converges to 0.
+
+We first show that $\mu_{n,i} \to 1$ for each $i \in \{1, \dots, k\}$. We will argue inductively. Since $\mu_{n,1} \doteq \lambda_n = 1 - \epsilon_n$, we have $\mu_{n,1} \to 1$. Suppose now that $\mu_{n,i} \to 1$ for some $i \le k-1$. Hence eventually $\mu_{n,i} \ge \frac{1}{2}$. Applying Lemma 5.4 with $k=i$ and $\epsilon=\frac{1}{2}$ and simplifying the resulting expression, we get
+
+$$
+\begin{align}
+\log \mu_{n,i+1} &= (\log \lambda_n) \frac{d_n^{i+1}-1}{d_n-1} + O\left(\frac{d_n^2(d_n^i-1)}{n(d_n-1)}\right) && (A.8) \\
+&= O(\epsilon_n) \frac{d_n^{i+1}-1}{d_n-1} + O\left(\frac{d_n^2(d_n^i-1)}{n(d_n-1)}\right) = O\left(\frac{\xi_n + \log d_n}{d_n^{k-i}}\right) + O\left(\frac{d_n^{i+1}}{n}\right), && (A.9)
+\end{align}
+$$
+
+where the second equality uses $\log \lambda_n = \log(1-\epsilon_n) = O(\epsilon_n)$ and the third follows on recalling that $d_n \to \infty$. Since $i \le k-1$, $\frac{|\xi_n|}{d_n^{k-i}} \le \frac{1+\xi_n^2}{d_n} \to 0$. Using this along with $d_n^{k+1} \ll n$ in (A.9) shows that $\mu_{n,i+1} \to 1$. Hence, by induction, $\mu_{n,i} \to 1$ for $i \le k$.
+
+Next we argue that $\beta'_n(\mu_{n,k}) \to \alpha$. Since $\lambda_n \to 1$ and $\mu_{n,k} \to 1$, from Corollary 5.3 we have that
+
+$$\lim_{n\to\infty} \frac{\beta'_{n}(\mu_{n,k})}{d_n\mu_{n,k+1}} = 1$$
+
+Hence it suffices to show that $d_n\mu_{n,k+1} \to \alpha$. For this note that
+
+$$
+\begin{align*}
+\log(d_n\mu_{n,k+1}) &= \log\mu_{n,k+1} + \log d_n \\
+&= \log(1-\epsilon_n)\left(\frac{d_n^{k+1}-1}{d_n-1}\right) + \log d_n + O\left(\frac{d_n^{k+1}}{n}\right) \\
+&= (-\epsilon_n + O(\epsilon_n^2))d_n^k(1+O(1/d_n)) + \log d_n + O\left(\frac{d_n^{k+1}}{n}\right),
+\end{align*}
+$$
+
+where the second equality is from (A.8) and last equality is by using Taylor's expansion for $\log(1-\epsilon_n)$.
+Using $d_n^{k+1} \ll n$ and $|\epsilon_n^2 d_n^k| \le \frac{2(\xi_n^2 + (\log d_n)^2)}{d_n^k} \to 0$, we now have
+
+$$
+\begin{align*}
+\log(d_n \mu_{n,k+1}) &= (-\epsilon_n d_n^k + o(1))(1 + O(1/d_n)) + \log d_n + o(1) \\
+&= (-\xi_n - \log d_n)(1 + O(1/d_n)) + \log d_n + o(1) \\
+&= -\xi_n - \log d_n + \log d_n + O\left(\frac{\xi_n + \log d_n}{d_n}\right) + o(1) = -\xi_n + o(1) \to \log(\alpha)
+\end{align*}
+$$
+
+where the last equality once more uses the observation that $\frac{|\xi_n|}{d_n} \to 0$. Thus we have $d_n\mu_{n,k+1} \to \alpha$
+as $n \to \infty$, which completes the proof. ■
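The conclusion can be illustrated by iterating the exact recursion $\mu_{n,1} = \lambda_n$, $\mu_{n,i+1} = \lambda_n \beta_n(\mu_{n,i})$ for one large but finite $n$; this is a numerical sketch assuming the near-critical scaling $\lambda_n = 1 - (\xi_n + \log d_n)/d_n^k$, with illustrative parameters chosen so that $d_n^{k+1} \ll n$.

```python
import math

def beta(x, n, d):
    # beta_n(x) = prod_{i<d} (x - i/n) / (1 - i/n)
    return math.exp(sum(math.log((x - i / n) / (1 - i / n)) for i in range(d)))

n, d, k, xi = 10**12, 200, 2, 0.5           # illustrative; d^(k+1) = 8e6 << n
lam = 1 - (xi + math.log(d)) / d**k         # lambda_n = 1 - (xi + log d)/d^k
mu = lam                                    # mu_{n,1} = lambda_n
for _ in range(k):                          # iterate up to mu_{n,k+1}
    mu = lam * beta(mu, n, d)

# Corollary 5.5: d * mu_{n,k+1} should be close to alpha = e^{-xi}
assert abs(math.log(d * mu) + xi) < 0.1
```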
+
+### A.6. Proof of Lemma 5.6:
+
+*Proof.* Since $\mu_{n,k} \to 1$ and $j \mapsto \mu_{n,j}$ is nonincreasing, we have $\mu_{n,i} \to 1$ for each $i \le k$. Additionally, since $\lambda_n \to 1$, Corollary 5.3 shows that for any $i \in [k]$, $\lim_{n\to\infty} \frac{\beta'_n(\mu_{n,i})}{d_n\mu_{n,i+1}} = 1$. As a consequence, $\beta'_n(\mu_{n,k-1}) \to \infty$ as $n \to \infty$, and for any $j \in [k-2]$
+
+$$
+\lim_{n \to \infty} \frac{\beta'_{n}(\mu_{n,j})}{\beta'_{n}(\mu_{n,j+1})} = \lim_{n \to \infty} \frac{d_{n}\mu_{n,j+1}}{d_{n}\mu_{n,j+2}} = \lim_{n \to \infty} \frac{\mu_{n,j+1}}{\mu_{n,j+2}} = 1.
+$$
+
+This completes the proof of the lemma. ■
+
+### A.7. Proof of Lemma 5.7:
+
+*Proof.* By the first part of Lemma 5.1, (5.7) is immediate from (5.6). Now consider (5.6). Taking logarithms in (2.7), for $x > d_n/n$,
+
+$$
+\log \beta_n(x) = \sum_{i=0}^{d_n-1} \left( \log \left( x - \frac{i}{n} \right) - \log \left( 1 - \frac{i}{n} \right) \right) = \sum_{i=0}^{d_n-1} \left( \log \left( 1 - \frac{i}{n} - (1-x) \right) - \log \left( 1 - \frac{i}{n} \right) \right).
+$$
+
+Let $\delta_n = \epsilon_n + \frac{d_n}{n}$. For large $n$, $\delta_n \le \frac{1}{2}$, and hence, using the expansion $\log(1-h) = -h + O(h^2)$ for $|h| \le \frac{1}{2}$, for any $x \in [1-\epsilon_n, 1]$:
+
+$$
+\begin{align*}
+\log \beta_n(x) &= \sum_{i=0}^{d_n-1} \left\{ -\frac{i}{n} - (1-x) + \frac{i}{n} + O(\delta_n^2) \right\} \\
+&= -d_n(1-x) + O(d_n \delta_n^2) \\
+&= d_n \log(1-(1-x)) + O(d_n \delta_n^2) = \log \gamma_n(x) + O(d_n \delta_n^2).
+\end{align*}
+$$
+
+Note that $\delta_n^2 = (\epsilon_n + d_n/n)^2 \le 2(\epsilon_n^2 + \frac{d_n^2}{n^2})$. Hence by our assumptions $d_n\delta_n^2 \to 0$. This proves (5.6)
+and completes the proof of the lemma. ■
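As a quick numerical illustration of (5.6) (our sketch, not part of the argument; the values of $n$, $d_n$ and $\epsilon_n$ below are hypothetical), one can check that $\log \beta_n(x)$ agrees with $\log \gamma_n(x) = d_n \log x$ up to an error of the claimed order $d_n\delta_n^2$:

```python
# Sanity check of (5.6): beta_n(x) = prod_{i<d_n} (x - i/n)/(1 - i/n) is close
# to gamma_n(x) = x^{d_n} on [1 - eps_n, 1], with error O(d_n * delta_n^2).
import math

def log_beta(n, d, x):
    return sum(math.log(x - i / n) - math.log(1 - i / n) for i in range(d))

n, d = 10**6, 100      # chosen so that d^2/n and d*eps^2 are both small
eps = 1e-3             # epsilon_n
x = 1 - eps            # left endpoint of [1 - eps_n, 1]
delta = eps + d / n    # delta_n = eps_n + d_n/n
err = abs(log_beta(n, d, x) - d * math.log(x))
assert err <= 10 * d * delta**2   # error of the claimed order d_n * delta_n^2
```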
+
+**A.8. Proof of Lemma 5.8:**
+
+*Proof.* By (5.2)
+
+$$
+\sup_{x \in [0,1-\epsilon_n]} |\beta_n(x)| \le (1-\epsilon_n)^{d_n} = e^{-d_n\epsilon_n+o(1)} \to 0.
+$$
+
+Similarly, by (5.3), under the assumption $\limsup_n \frac{d_n}{n} < 1$, for large $n$,
+
+$$
+\sup_{x \in [0,1-\epsilon_n]} |\beta'_n(x)| \le (1-d_n/n)^{-1} d_n (1-\epsilon_n)^{d_n-1} = e^{-d_n\epsilon_n+\log d_n+O(1)} \to 0.
+$$
+---PAGE_BREAK---
+
+## APPENDIX B. PROOF OF LEMMA 6.8
+
+For a right continuous bounded variation function $F: [0, T] \to \mathbb{R}$, let $dF$ denote the signed measure on $(0, T]$ given by $dF(a, b] = F(b) - F(a)$ for $0 \le a < b \le T$, and let $d\lambda$ denote the Lebesgue measure on $(0, T]$. A bounded measurable function $h: [0, T] \to \mathbb{R}$ acts on a signed measure $d\mu$ on $(0, T]$ on the left as follows: $hd\mu$ denotes the signed measure $A \mapsto \int_A h(x)d\mu(x)$, $A \in \mathcal{B}(0, T]$.
+
+Let $F(t) \doteq \int_0^t f(s)d\lambda(s)$ for $t \in [0, T]$. Note that $z$ defined in (6.14) is a right continuous function of bounded variation. The corresponding measure $dz$ on $(0, T]$ satisfies the identity
+
+$$dz = -fzd\lambda + gd\lambda + dM,$$
+
+namely
+
+$$dz + fzd\lambda = gd\lambda + dM.$$
+
+Acting on the left in the above identity by the bounded continuous function $e^F(x) \doteq e^{F(x)}$ we get
+
+$$e^F dz + e^F fzd\lambda = e^F gd\lambda + e^F dM.$$
+
+Since $dF = fd\lambda$, by the change of variable formula (cf. [22, Theorem VI.8.3]) $de^F = fe^F d\lambda$. Hence
+
+$$e^F dz + zde^F = e^F gd\lambda + e^F dM.$$
+
+Two applications of the integration by parts formula (cf. [2, Theorem 18.4]) show that
+
+$$d(e^F z) = e^F gd\lambda + d(e^F M) - Mde^F.$$
+
+Computing the total measure on $(0, t]$ for $t \le T$:
+
+$$e^{F(t)}z(t) - z(0) = \int_{0}^{t} e^{F(s)}g(s)d\lambda(s) + e^{F(t)}M(t) - M(0) - \int_{0}^{t} M(s)de^{F}(s).$$
+
+Rearranging terms and multiplying by $e^{-F(t)}$ on both sides:
+
+$$z(t) = \int_{0}^{t} e^{F(s)-F(t)}g(s)d\lambda(s) + M(t) - e^{-F(t)}\int_{0}^{t} M(s)de^{F}(s) + e^{-F(t)}(z(0) - M(0)). \quad (B.1)$$
+
+We now estimate the various terms on the right hand side of (B.1). The first term satisfies, for $t \in [0, T \wedge \tau]$,
+
+$$
+\begin{align*}
+\left|\int_0^t e^{F(s)-F(t)} g(s) d\lambda(s)\right| &\le |g|_{*,T\wedge\tau} \int_0^t e^{F(s)-F(t)} d\lambda(s) \\
+&\le |g|_{*,T\wedge\tau} \int_0^t e^{-\int_s^t f(u)du} d\lambda(s) \\
+&\le |g|_{*,T\wedge\tau} \int_0^t e^{-m(t-s)} d\lambda(s) = |g|_{*,T\wedge\tau} \frac{1-e^{-tm}}{m} \le \frac{|g|_{*,T\wedge\tau}}{m}. \tag{B.2}
+\end{align*}
+$$
+
+Next we estimate the third term on the right hand side of (B.1). Since $f$ is non-negative on $[0, T \wedge \tau]$, $de^F$ is a positive measure on $(0, T \wedge \tau]$. Hence for $t \in [0, T \wedge \tau]$
+
+$$\left| e^{-F(t)} \int_0^t M(s) de^F(s) \right| \le |M|_{*,T \wedge \tau} e^{-F(t)} \int_0^t de^F(s) \le |M|_{*,T \wedge \tau}. \quad (B.3)$$
+
+Finally, the last term on the right hand side of (B.1) can, for any $t \in [0, \tau \wedge T]$, be bounded as
+
+$$\left| e^{-F(t)}(z(0) - M(0)) \right| \le (|z(0)| + |M(0)|)e^{-F(t)} \le (|z(0)| + |M(0)|)e^{-mt}. \quad (B.4)$$
+
+Using (B.2), (B.3) and (B.4) in (B.1) completes the proof of the lemma. ■
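The identity (B.1) is a variation-of-constants formula; with $M \equiv 0$ and smooth data it can be checked numerically. Below is a sketch (not part of the proof; $f$, $g$, $z(0)$ are hypothetical) comparing (B.1) against a crude Euler scheme for $z' = -fz + g$:

```python
import math

# Data for the check: f bounded below by m = 0.5 > 0, as in the lemma.
f = lambda s: 1.0 + 0.5 * math.sin(s)
g = lambda s: math.cos(s)
F = lambda t: t - 0.5 * (math.cos(t) - 1.0)   # F(t) = ∫_0^t f(s) ds, in closed form
z0, T, N = 2.0, 1.0, 200_000
h = T / N

# Euler scheme for z' = -f z + g, z(0) = z0
z = z0
for i in range(N):
    s = i * h
    z += h * (-f(s) * z + g(s))

# Right hand side of (B.1) with M ≡ 0, the integral evaluated by the midpoint rule
integral = sum(math.exp(F((j + 0.5) * h) - F(T)) * g((j + 0.5) * h)
               for j in range(N)) * h
z_formula = math.exp(-F(T)) * z0 + integral

assert abs(z - z_formula) < 1e-3
assert abs(integral) <= 1.0 / 0.5   # consistent with the bound (B.2): |g|_* / m
```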
\ No newline at end of file
diff --git a/samples/texts_merged/411192.md b/samples/texts_merged/411192.md
new file mode 100644
index 0000000000000000000000000000000000000000..d3530faf6fb3ffceeab5167fd2d175f88c910b9c
--- /dev/null
+++ b/samples/texts_merged/411192.md
@@ -0,0 +1,601 @@
+
+---PAGE_BREAK---
+
+On the detection of a moving rigid solid in a perfect fluid
+
+Carlos Conca, Muslim Malik, Alexandre Munnier
+
+► To cite this version:
+
+Carlos Conca, Muslim Malik, Alexandre Munnier. On the detection of a moving rigid solid in a perfect fluid. Inverse Problems, IOP Publishing, 2010, 26 (9), 10.1088/0266-5611/26/9/095010. inria-00468480v2
+
+HAL Id: inria-00468480
+
+https://hal.inria.fr/inria-00468480v2
+
+Submitted on 3 Apr 2010
+
+**HAL** is a multi-disciplinary open access
+archive for the deposit and dissemination of sci-
+entific research documents, whether they are pub-
+lished or not. The documents may come from
+teaching and research institutions in France or
+abroad, or from public or private research centers.
+
+---PAGE_BREAK---
+
+On the detection of a moving rigid solid in a perfect fluid
+
+Carlos Conca*
+
+Center for Mathematical Modelling,
+University of Chile, Santiago, Chile,
+
+Cconca@dim.uchile.cl
+
+Muslim Malik
+
+Center for Mathematical Modelling,
+University of Chile, Santiago, Chile
+
+Alexandre Munnier
+
+Institut Elie Cartan UMR 7502, Nancy-Université,
+CNRS, INRIA, B.P. 239,
+F-54506 Vandoeuvre-lès-Nancy Cedex, France,
+
+Alexandre.munnier@iecn.u-nancy.fr
+
+April 2, 2010
+
+Abstract
+
+In this paper, we consider a moving rigid solid immersed in a potential fluid. The fluid-solid system fills the whole space and the fluid is assumed to be at rest at infinity. Our aim is to study the inverse problem, initially introduced in [3], that consists in recovering the position and the velocity of the solid assuming that the potential function is known at a given time.
+
+We show that this problem is in general ill-posed by providing counterexamples for which the same potential corresponds to different positions and velocities of a same solid. However, it is also possible to find solids having a specific shape, like ellipses for instance, for which the problem of detection admits a unique solution.
+
+Using complex analysis, we prove that the well-posedness of the inverse problem is equivalent to the solvability of an infinite set of nonlinear equations. This result allows us to show that when the solid enjoys some symmetry properties, it can be *partially* detected. Besides, for any solid, the velocity can always be recovered when both the potential function and the position are supposed to be known.
+
+At last, we prove that by performing continuous measurements of the fluid potential over a time interval, we can always track the position of the solid.
+
+# 1 Introduction
+
+## 1.1 History
+
+Sonars are the most common devices used to spot immersed bodies, like submarines or banks of fish. These systems use acoustic waves: active sonars emit acoustic waves (making themselves
+
+*C. Conca thanks the MICDB for partial support through Grant ICM P05-001-F, Fondap-Basal-Conicyt, and the French & Chilean Governments through Ecos-Conicyt Grant C07E05.
+---PAGE_BREAK---
+
+detectable), while passive sonars only listen (and therefore are only able to detect targets that are noisy enough). To overcome these limitations, it would be interesting to design systems imitating the *lateral line systems* of fish, a sense organ they use to detect movement and vibration in the surrounding water.
+
+Most of the published results on inverse problems in Fluid Mechanics concern the detection of fixed immersed obstacles. Let us mention the article [1] of Alvarez et al, in which the authors prove that a fixed smooth convex obstacle surrounded by a fluid governed by the Navier-Stokes equations can be identified via a localized boundary measurement of the velocity of the fluid and the Cauchy forces. In [4], the authors identify a single rigid obstacle immersed in a Navier-Stokes fluid by measuring both the gradient of the pressure and the velocity of the fluid on one part of the boundary. The distance from a given point to an obstacle is estimated in [5] from boundary measurements for a fluid governed by the stationary Stokes equations.
+
+To our knowledge, the only work addressing the detection of moving bodies is the article [3] of Conca et al. In this paper, the authors consider a single moving disk in an ideal fluid and prove that the position and velocity of the body can be deduced from one single measurement of the potential along some part of the exterior boundary of the fluid. They obtain linear stability results as well, by using shape differentiation techniques.
+
+## 1.2 Problem settings
+
+### Domains, frames, coordinates
+
+At a given time $t$, we assume that a rigid solid occupies the domain $S \subset \mathbf{R}^2$, while the domain $\mathcal{F} := \mathbf{R}^2 \setminus \bar{S}$ is filled by a perfect fluid. Let us assume that $S$ is a simply connected compact set. The unitary normal to $\partial\mathcal{F}$ directed toward the exterior of $\mathcal{F}$ is denoted by **n**. Being rigid, $S$ is the image by a rotation and a translation of a given reference domain $S_0$ which will merely be called in the sequel the *shape* of the solid. Therefore, at any time, there exist an angle $\theta \in \mathbf{R}/2\pi$, a rotation matrix $R(\theta) \in SO_2(\mathbf{R})$ of angle $\theta$, a point $\mathbf{s} := (s_1, s_2)^T \in \mathbf{R}^2$ (the center of the rotation) and a vector $\mathbf{r} := (r_1, r_2)^T$ of $\mathbf{R}^2$ such that $S = R(\theta)(S_0 - \mathbf{s}) + \mathbf{r}$. We will be concerned with recovering the position of the solid, so it is worth remarking that the triplet $(\theta, \mathbf{s}, \mathbf{r})$ is not unique. Two triplets $(\theta_j, \mathbf{s}_j, \mathbf{r}_j) \in \mathbf{R}/2\pi \times \mathbf{R}^2 \times \mathbf{R}^2$ ($j=1,2$) give the same position for any $S_0$ if and only if $R(\theta_1) = R(\theta_2) = \mathbf{R}$ and $R(\mathbf{s}_1 - \mathbf{s}_2) = \mathbf{r}_1 - \mathbf{r}_2$. These equalities define an equivalence relation in $\mathbf{R}/2\pi \times \mathbf{R}^2 \times \mathbf{R}^2$. However, we want also to take into account the possible symmetries of the solid. So, given $S_0$, we say that two triplets $(\theta_j, \mathbf{s}_j, \mathbf{r}_j)$ are equivalent when $R(\theta_1)(S_0 - \mathbf{s}_1) + \mathbf{r}_1 = R(\theta_2)(S_0 - \mathbf{s}_2) + \mathbf{r}_2$. We denote by $\mathcal{P}$ the set of all equivalence classes **p**. We will make no difference in the notation between **p** and any element $(\theta, \mathbf{s}, \mathbf{r})$ belonging to this class. In particular, we will write in short that for any $x \in \mathbf{R}^2$, $\mathbf{p}x = R(\theta)(x - \mathbf{s}) + \mathbf{r}$. In the sequel, **p** will merely be referred to as the *position* of the solid.
+
+Later on, we will use tools of complex analysis so rather than $\mathbb{R}^2$, we will sometimes identify the plane with the complex field $\mathbb{C}$. For any complex number $z := z_1 + i z_2$ ($i^2 = -1$, $z_1, z_2 \in \mathbf{R}$), we will denote $\bar{z} := z_1 - i z_2$ the conjugate of $z$ and $\overline{D}$ will stand for the unitary disk of $\mathbb{C}$.
+
+### Sequences of complex numbers
+
+For any sequence of complex numbers $c := (c_k)_{k \in \mathbb{Z}}$, we can define $\bar{c} := (\bar{c}_k)_{k \in \mathbb{Z}}$ and $\check{c} := (\bar{c}_{-k})_{k \in \mathbb{Z}}$. For any two sequences $a := (a_k)_{k \in \mathbb{Z}}$ and $b := (b_k)_{k \in \mathbb{Z}}$, we recall the definition of the convolution
+---PAGE_BREAK---
+
+product: $a*b := (\sum_{j \in \mathbb{Z}} a_{k-j} b_j)_{k \in \mathbb{Z}}$. The convolution product can be iterated $n$ times ($n$ an integer)
+to obtain $a^n := a * a * \dots * a$.
+
+### Rigid velocity
+
+The solid is moving. We denote by $\mathbf{v}(x) := (v_1(x), v_2(x))^T \in \mathbf{R}^2$ the rigid Eulerian velocity field
+defined for all $x \in S$. This notation turns out to be $\mathbf{v}(z) := v_1(z) + iv_2(z)$ in complex notation.
+It is well known in Solid Mechanics that $\mathbf{v}$ can be decomposed into the sum of an instantaneous
+rotational velocity field and a translational velocity field. For any $x := (x_1, x_2)^T \in \mathbf{R}^2$, we introduce
+the notation $x^\perp := (-x_2, x_1)^T$ and we have $\mathbf{v}(x) = \omega(x - \mathbf{s})^\perp + \mathbf{w}$ where $\mathbf{s} \in \mathbf{R}^2$ is the center of the
+instantaneous rotation, $\omega \in \mathbf{R}$ is the angular velocity and $\mathbf{w} := (w_1, w_2)^T \in \mathbf{R}^2$ the translational
+velocity. Since we will be willing to recover these data, it is worth observing that the triplet
+$(\omega, \mathbf{s}, \mathbf{w}) \in \mathbf{R} \times \mathbf{R}^2 \times \mathbf{R}^2$ is not unique. Both triplets $(\omega_j, \mathbf{s}_j, \mathbf{w}_j)$ ($j=1,2$) give the same rigid
+velocity field $\mathbf{v}$ if and only if $\omega_1 = \omega_2 = \omega$ and $\omega(\mathbf{s}_1^\perp - \mathbf{s}_2^\perp) = \mathbf{w}_1 - \mathbf{w}_2$ (in particular, we can always
+choose for $\mathbf{s}$ any point of $\mathbf{R}^2$). This is an equivalence relation and the velocity $\mathbf{v}$ can be seen as
+an equivalence class. We denote by $\mathcal{V}$ the set of all equivalence classes and we will make no
+difference, in what follows, between the vector field, the equivalence class and any element of this
+class. All of them will be denoted by $\mathbf{v}$.
+
+**Definition 1.1 (Configurations).** For any given shape $S_0$, we define a configuration as any position-velocity pair $(\mathbf{p}, \mathbf{v}) \in \mathcal{P} \times \mathcal{V}$.
+
+### Fluid dynamics
+
+The dynamics of the fluid is described by means of its Eulerian velocity field $\mathbf{u}(x) := (u_1(x), u_2(x))^T$
+defined for all $x \in \mathcal{F}$. Since the fluid is assumed to be perfect (i.e. incompressible and inviscid) and
+the flow irrotational, there exists a potential function $\varphi$, harmonic in $\mathcal{F}$, such that $\mathbf{u}(x) = \nabla\varphi(x)$
+($x \in \mathcal{F}$). The fluid is assumed to be at rest at infinity so we impose the asymptotic behavior
+$|\nabla\varphi(x)| \to 0$ as $|x| \to +\infty$. The classical slip boundary condition for inviscid fluid reads $\mathbf{u} \cdot \mathbf{n} = \mathbf{v} \cdot \mathbf{n}$
+on $\partial\mathcal{S}$ and yields a Neumann boundary condition for $\varphi$, namely $\partial_n\varphi = \mathbf{v} \cdot \mathbf{n}$ on $\partial\mathcal{S}$. Although the
+domain $\mathcal{F}$ is not simply connected, we can still consider $\psi$, the harmonic conjugate function to $\varphi$,
+because $\int_{\partial\mathcal{S}} \partial_n\varphi d\sigma = 0$. The functions $\varphi$ and $\psi$ satisfy the relation $\nabla\psi = (\nabla\varphi)^\perp$ in $\mathcal{F}$. In Fluid
+Mechanics, $\psi$ is called the stream function and the complex function $\xi = \varphi + i\psi$ is the holomorphic
+complex potential. As usual, we define $u := u_1 + iu_2 = \overline{\xi'}$ as the complex fluid velocity. Observe
+that the complex potential, as being solution of a boundary value problem, depends on the domain $\mathcal{F}$ and the velocity $\mathbf{v}$ only. With the notation introduced earlier, we deduce that $\xi$ depends only on
+the shape $S_0$ and the configuration $(\mathbf{p}, \mathbf{v}) \in \mathcal{P} \times \mathcal{V}$.
+
+The complex potential is defined up to an additive constant which can be chosen such that
+$|\xi(z)| \to 0$ as $|z| \to +\infty$. For any $\nu \in \mathbf{C}$, the complex potential can be expanded in the form of a
+Laurent series:
+
+$$
+\xi(z) := \sum_{j \ge 1} \frac{\lambda_j(\nu)}{(z-\nu)^j}, \quad |z-\nu| > R(\nu), \qquad (1.1)
+$$
+
+where $\lambda_j(\nu)$ ($j \ge 1$) are complex numbers and $R(\nu) := \limsup_{j\to+\infty} |\lambda_j(\nu)|^{1/j}$. The series is
+uniformly convergent on the set $\{z \in \mathbb{C} : |z - \nu| > R(\nu)\}$.
+---PAGE_BREAK---
+
+### Measurements
+
+We measure the complex velocity $u$ of the fluid in some open subset of $\mathcal{F}$. The Analytic Continuation theorem tells us that we can deduce the value of $\xi'$ everywhere in the connected open set $\mathcal{F}$ and then also the value of $\xi$, up to an additive constant. In particular, we will assume that for all $\nu \in \mathbf{C}$, we can always evaluate all of the terms of the complex sequence $(\lambda_j(\nu))_{j \ge 1}$ arising in the expression (1.1).
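In practice, each $\lambda_j(\nu)$ could be evaluated from $\xi$ through the standard Laurent coefficient formula $\lambda_j(\nu) = \frac{1}{2\pi i}\oint_{|z-\nu|=R} \xi(z)(z-\nu)^{j-1}\,dz$ (our addition, not stated in the paper). The following sketch, with a hypothetical $\xi$, approximates this contour integral by the trapezoidal rule, which is spectrally accurate on the circle:

```python
import cmath

def laurent_coeff(xi, nu, j, R=2.0, N=512):
    # (1/2πi) ∮ xi(z) (z - nu)^{j-1} dz on the circle |z - nu| = R,
    # trapezoidal rule: dz = i R e^{it} dt and 1/(2πi) cancel against 2π/N.
    s = 0.0 + 0.0j
    for k in range(N):
        t = 2 * cmath.pi * k / N
        z = nu + R * cmath.exp(1j * t)
        s += xi(z) * (z - nu) ** (j - 1) * R * cmath.exp(1j * t)
    return s / N

xi = lambda z: 1j / z**2          # a potential of the kind built in Section 2
assert abs(laurent_coeff(xi, 0, 2) - 1j) < 1e-12   # lambda_2 = i
assert abs(laurent_coeff(xi, 0, 1)) < 1e-12        # lambda_1 = 0
```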
+
+## 1.3 Main results
+
+**Definition 1.2** (Detectability). A solid of shape $S_0$ is said to be detectable if for any configuration $(\mathbf{p}, \mathbf{v}) \in \mathcal{P} \times \mathcal{V}$, the knowledge of the potential holomorphic function $\xi$ suffices to recover the pair $(\mathbf{p}, \mathbf{v})$.
+
+Observe that this definition makes the property of being detectable independent of the configuration: detectability is a purely geometric property of the solid. Our first result is that not all the solids are detectable:
+
+**Theorem 1.3.** For any integer $n \ge 2$, there exist a holomorphic function $\xi$, a shape $S_0$, and $n$ configurations $(\mathbf{p}_j, \mathbf{v}_j) \in \mathcal{P} \times \mathcal{V}$, $j = 1, \dots, n$ satisfying $\mathbf{p}_j \neq \mathbf{p}_k$ if $j \neq k$ such that $\xi$ is the potential of the fluid corresponding to the solid of shape $S_0$ with any of the configurations $(\mathbf{p}_j, \mathbf{v}_j)$, $j = 1, \dots, n$.
+
+In other words, for any integer $n$, there exists at least one solid that can occupy $n$ different positions with $n$ different velocities and for which the fluid potential is the same. This theorem shows that the result obtained in [3] for a disk cannot be generalized to any solid. However, the disk is not the only detectable body:
+
+**Proposition 1.1.** Any ellipse is a detectable solid.
+
+Going back to the general case, it is easy to see that the holomorphic potential never admits an analytic continuation over the whole complex plane. Furthermore, for any analytic continuation of the potential inside the solid, we will prove that the locations of the singularities provide clues allowing one in many cases to determine the position of the solid. This discussion is carried out in Subsection 6.1.
+
+According to Theorem 1.3, the problem of detection is ill-posed in the general case. However, we claim that when the solid enjoys some symmetry properties, it can be *partially detected* (i.e. some but not all of the parameters among $\mathbf{r}$, $\alpha$, $\mathbf{w}$, $\omega$ can be deduced from the potential). The following proposition illustrates this idea:
+
+**Proposition 1.2.** If the shape of the solid is invariant under a rotation of angle $\pi/2$ then $\mathbf{r}$, $\mathbf{w}$ and $|\omega|$ can be deduced from the potential function.
+
+Results of this kind are made precise in Propositions 6.4 and 6.5.
+
+Going back now to the general case, we can also try to determine less parameters with more information. For instance, we can prove:
+
+**Proposition 1.3.** For any solid with configuration $(\mathbf{p}, \mathbf{v}) \in \mathcal{P} \times \mathcal{V}$, the knowledge of both the potential function and the position $\mathbf{p}$ suffices to recover $\mathbf{v}$.
+---PAGE_BREAK---
+
+At last, we can also measure the potential function not only at a given instant but over a time interval. In this case, we obtain:
+
+**Theorem 1.4 (Tracking).** For any solid $S_0$, if we know its position at the time $t = 0$ and we perform continuous measurements of the complex potential over the time interval $[0,T]$ for some $T > 0$ then we can deduce the configuration of the solid at any time $t \in [0,T]$.
+
+## 1.4 Outline of the paper
+
+In the next section, we provide examples of non-detectable solids and prove Theorem 1.3. In Section 3, we derive the expression of the complex potential. In Section 4, we determine all the stealth solids, i.e. all the solids that can move in the fluid without disturbing it. The detection of a moving ellipse is discussed in Section 5. Section 6 is split into three parts: the first one is dedicated to the study of the singularities of the potential function and the second one to its asymptotic expansion and how these results can be used for the detection problem we are dealing with. The third subsection deals with an example of detection. In Section 7 we give the proof of Theorem 1.4 and at last in Section 8, we indicate some remaining open problems.
+
+# 2 Examples of Non-detectable Solids
+
+This Section is mostly devoted to the proof of Theorem 1.3.
+
+## Expression of the stream function
+
+Let a shape $S_0$ and a configuration $(\mathbf{p}, \mathbf{v})$ be given with $\mathbf{v} = \omega(x - \mathbf{s})^\perp + \mathbf{w}$ (for some real number $\omega$ and some vector $\mathbf{s}$) and remember that $\mathcal{S} = \mathbf{p}(S_0)$. Then, let us introduce $\gamma : [0, \ell] \ni s \mapsto \gamma(s) = (\gamma_1(s), \gamma_2(s))^\top \in \mathbb{R}^2$ a parameterization of $\partial\mathcal{S}$ satisfying $|\gamma'(s)| = 1$ for all $s \in [0, \ell[$ ($\ell > 0$). We assume that $\partial\mathcal{S}$ is described positively (counterclockwise parameterization), we denote $\tau = \gamma'$ (the unitary tangent vector to $\partial\mathcal{S}$) and we get $\mathbf{n} = \boldsymbol{\tau}^\perp$. We deduce that $\partial_n\varphi = -\partial_\tau\psi$ and hence that $\partial_\tau\psi(\gamma) = -w_1\gamma'_2 + w_2\gamma'_1 + \omega\gamma'\cdot(\gamma - \mathbf{s})$. We can integrate along $\partial\mathcal{S}$ to obtain $\psi(\gamma) = -w_1\gamma_2 + w_2\gamma_1 + (\omega/2)|\gamma - \mathbf{s}|^2 + C$ on $\partial\mathcal{S}$, where $C$ is a real constant. This Dirichlet boundary condition for the stream function reads also: $\psi(x) = -w_1x_2 + w_2x_1 + (\omega/2)|x - \mathbf{s}|^2 + C$ on $\partial\mathcal{S}$. In this form, the boundary of the solid turns out to be a level set of the function $g(x) := (\omega/2)|x - \mathbf{s}|^2 - w_1x_2 + w_2x_1 - \psi(x)$, an observation we will now take advantage of.
+
+## Proof of Theorem 1.3
+
+Pick some integer $n \ge 2$ and consider the harmonic function whose expression in polar coordinates is $\psi(r, \theta) := \cos(n\theta)r^{-n}$. Since, in Cartesian coordinates, $|(\partial\psi/\partial x_j)(x)| \le |\nabla\psi(x)| = n|x|^{-n-1} \quad (j=1,2)$, we deduce that for any $\omega > 0$ and $\mathbf{s} \in \mathbb{R}^2$, there exists $\delta > 0$ such that $(\partial\psi/\partial x_1)(x) - \omega(x_1 - s_1)$ and $(\partial\psi/\partial x_2)(x) - \omega(x_2 - s_2)$ cannot simultaneously vanish provided $|x - \mathbf{s}| > \delta$. Applying the local inversion theorem, we deduce that for any $\lambda \in \mathbb{R}$, the solutions of
+
+$$ \frac{\omega}{2} |x - s|^2 - \psi(x) - \lambda = 0, \qquad (2.1) $$
+
+satisfying $|x - s| > \delta$ (if any) are locally smooth curves.
+---PAGE_BREAK---
+
+From the estimate $|\psi(x)| \le |x|^{-n}$ ($x \in \mathbf{R}^2 \setminus \{0\}$), we deduce that for all $\mathbf{s} := (s_1, s_2)^T$, all $\omega > 0$ and all $\varepsilon > 0$, there exists $\delta' > 0$ such that:
+
+$$ \frac{\omega - \varepsilon}{2} |x - \mathbf{s}|^2 - \lambda \leq \frac{\omega}{2} |x - \mathbf{s}|^2 - \psi(x) - \lambda \leq \frac{\omega + \varepsilon}{2} |x - \mathbf{s}|^2 - \lambda, $$
+
+for all $\lambda \in \mathbf{R}$ provided $|x-s| \ge \delta'$. If we choose for instance $\lambda > \max(\delta', \delta)^2(\omega+\varepsilon)$, there is a zero level set of the function $g(x) := \omega|x-s|^2/2 - \psi(x) - \lambda$ between the circles $|x-s| = \sqrt{2\lambda}/\sqrt{\omega-\varepsilon}$ and $|x-s| = \sqrt{2\lambda}/\sqrt{\omega+\varepsilon}$ (because $\sqrt{2\lambda}/\sqrt{\omega+\varepsilon} > \sqrt{2}\max(\delta', \delta) > \delta$). It remains to choose $\mathbf{s}$ properly, in order to take advantage of the symmetry of the function $\psi$. Let $\rho$ be any positive number and denote $S_1$ the zero level set of $g$ obtained as described above by specifying $\mathbf{s}_1 := (\rho, 0)^T$. This level set defines the smooth boundary of a solid for which $\psi$ is the stream function (and $\xi(z) := i/z^n$ the holomorphic potential) associated with the velocity $\mathbf{v}_1 := \omega(x - \mathbf{s}_1)^{\perp}$. By choosing next $\mathbf{s}_k = (\rho \cos(2(k-1)\pi/n), \rho \sin(2(k-1)\pi/n))^T$ for $k=2, \dots, n$, we obtain $n-1$ copies of $S_1$ at $n-1$ different positions with respective velocities $\mathbf{v}_k := \omega(x - \mathbf{s}_k)^{\perp}$.
+
+Figure 1: For both configurations, the stream function is the same. It reads $\psi(r, \theta) := \cos(2\theta)/r^2$ in polar coordinates. The holomorphic potential is $\xi(z) := i/z^2$.
+
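The sign change of $g$ used in the proof can be observed numerically. In this sketch (hypothetical parameters $n$, $\omega$, $\varepsilon$, $\lambda$, $\mathbf{s}$), $g$ is negative on the inner circle and positive on the outer one, so each ray from $\mathbf{s}$ crosses the zero level set in between:

```python
import math

n_, omega, eps, lam = 2, 1.0, 0.5, 50.0   # hypothetical values with eps < omega
s = (1.0, 0.0)                             # s_1 = (rho, 0) with rho = 1

def psi(x, y):                             # psi(r, theta) = cos(n theta) / r^n
    r, th = math.hypot(x, y), math.atan2(y, x)
    return math.cos(n_ * th) / r**n_

def g(x, y):                               # g = (omega/2)|x - s|^2 - psi - lambda
    return 0.5 * omega * ((x - s[0])**2 + (y - s[1])**2) - psi(x, y) - lam

r_in = math.sqrt(2 * lam / (omega + eps))  # inner bounding circle
r_out = math.sqrt(2 * lam / (omega - eps)) # outer bounding circle
for k in range(36):
    th = 2 * math.pi * k / 36
    d = (math.cos(th), math.sin(th))
    assert g(s[0] + r_in * d[0], s[1] + r_in * d[1]) < 0
    assert g(s[0] + r_out * d[0], s[1] + r_out * d[1]) > 0
```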
+## 3 The Complex Potential
+
+Before going further, we need to describe the shape $S_0$. Actually, for convenience, rather than $S_0$ we shall describe $\mathcal{F}_0 := \mathbf{C} \setminus \bar{S}_0$. Thus, assume that $\mathcal{F}_0$ is the image by a conformal mapping $f$ of $\Omega := \mathbf{C} \setminus \bar{D}$, the exterior of the unitary disk. For any simply connected shape $S_0$ and corresponding domain $\mathcal{F}_0$, the Riemann mapping Theorem tells us that $f$ can be written in the form:
+
+$$ f(z) = c_1z + c_0 + \sum_{k \le -1} c_k z^k, \quad (z \in \Omega), \tag{3.1} $$
+
+where $c_k \in \mathbf{C}$ for $k = 1$ and all $k \le -1$ and $c_1 \ne 0$. We can assume, without loss of generality, that $c_0 = 0$. To simplify forthcoming computations, we will also assume that $c_k$ is actually defined for all $k \in \mathbf{Z}$ and that $c_k = 0$ for $k = 0$ and $k \ge 2$. We denote $c := (c_k)_{k \in \mathbf{Z}}$ the complex sequence of elements $c_k$ and the *area Theorem* (see [7, Theorem 14.13]) tells us that the area of $S_0$ is equal
+---PAGE_BREAK---
+
+Figure 2: For both configurations, the stream function and the holomorphic potential are the same as in Figure 1.
+
+to $\pi \sum_{k \le 1} k |c_k|^2$. Since $S_0$ is of finite extent, it means that this sum has to be finite. Actually, we will assume also that $c \in \ell^1(\mathbf{C})$, which entails in particular that $f$ is continuous in the closed set $\bar{\Omega}$.
+
+Such a description allows us to consider a broad set of solids. In particular, the boundary of the solid can be very rough. Degenerate cases can be considered as well (for instance $S_0$ can be a segment modeling a one dimensional beam).
+
+For any position $\mathbf{p} := (\alpha, \mathbf{0}, \mathbf{r})$, we recall that $\mathcal{S} := R(\alpha)S_0 + \mathbf{r}$ is the actual domain occupied by the solid. Let us introduce then the functions $\varphi_0(x) := \varphi(R(\alpha)x + \mathbf{r})$ and $\psi_0(x) := \psi(R(\alpha)x + \mathbf{r})$, which are harmonic (and defined) over the fixed domain $\mathcal{F}_0$. For any velocity $\mathbf{v} := (\omega, \mathbf{r}, \mathbf{w})$ (we choose here $\mathbf{s} = \mathbf{r}$), the Dirichlet boundary condition for $\psi$ turns out to be, in complex notation, $2i\psi_0 := w_0\bar{z} - \bar{w}_0 z + i\omega|z|^2$, where $w_0 := e^{-i\alpha}w$ is the translational velocity expressed in the frame of the solid. We introduce $\zeta$, the holomorphic complex potential of the fluid, defined for any $z \in \Omega$ by $\zeta(z) = \varphi_0(f(z)) + i\psi_0(f(z))$. Since $\bar{z} = 1/z$ on $\partial\Omega$, we get the identity $2i\psi_0(f(z)) = -\bar{w}_0 f(z) + w_0 \bar{f}(1/z) + i\omega f(z)\bar{f}(1/z)$. For any $z \in \partial\Omega$, we have also $\bar{f}(1/z) = \sum_{k \in \mathbf{Z}} \bar{c}_k z^{-k} = \sum_{k \in \mathbf{Z}} \check{c}_k z^k$ and $f(z)\bar{f}(1/z) = \sum_{k \in \mathbf{Z}} (\check{c} * c)_k z^k$. So we get:
+
+$$
+2i\psi_0(f(z)) = \sum_{k \in \mathbb{Z}} [-\bar{w}_0 c_k + w_0 \check{c}_k + i\omega(\check{c} * c)_k] z^k, \quad (z \in \partial\Omega). \tag{3.2}
+$$
+
+According to [6, Chap. IX, §9.63], we keep only the negative powers in (3.2) to get the expression of $\zeta$. Defining the coefficients $\zeta_k(w_0, \omega) := -\bar{w}_0 c_k + w_0 \check{c}_k + i\omega(\check{c} * c)_k$ for all $k \le -1$, we obtain:
+
+$$
+\zeta(z) = \sum_{k \le -1} \zeta_k(w_0, \omega) z^k, \quad (z \in \Omega). \tag{3.3}
+$$
+
+Eventually, the expression of the measured complex potential, defined in $\mathcal{F}$, is:
+
+$$
+\xi(z) = \zeta(f^{-1}((z-r)e^{-i\alpha})), \quad (z \in \mathcal{F}). \tag{3.4}
+$$
+
+According to our rule of notation, we introduce as well
+
+$$
+\xi_0(z) = \varphi_0(z) + i\psi_0(z) = \zeta(f^{-1}(z)), \quad (z \in \mathcal{F}_0). \tag{3.5}
+$$
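Formula (3.3) can be sanity-checked against the ellipse of Section 5, for which only $c_1$ and $c_{-1}$ are nonzero. A sketch (hypothetical values of $a$, $b$, $w_0$, $\omega$; finitely supported sequences stored as dictionaries):

```python
# zeta_k(w0, omega) = -conj(w0) c_k + w0 check(c)_k + i omega (check(c) * c)_k
# computed from c for f(z) = (a+b)/2 z + (a-b)/(2z), vs the closed form of Section 5.
def conv(a, b):
    out = {}
    for i, ai in a.items():
        for j, bj in b.items():
            out[i + j] = out.get(i + j, 0) + ai * bj
    return out

a_, b_ = 2.0, 1.0
w0, omega = 1.0 + 2.0j, -0.5
c = {1: (a_ + b_) / 2, -1: (a_ - b_) / 2}
c_check = {-k: v.conjugate() for k, v in c.items()}   # check(c)_k = conj(c_{-k})
cc = conv(c_check, c)

def zeta_k(k):
    return (-w0.conjugate() * c.get(k, 0) + w0 * c_check.get(k, 0)
            + 1j * omega * cc.get(k, 0))

# Section 5: zeta(z) = [-(a-b)/2 conj(w0) + (a+b)/2 w0]/z + i omega (a^2-b^2)/4 /z^2
assert abs(zeta_k(-1) - (-(a_ - b_) / 2 * w0.conjugate() + (a_ + b_) / 2 * w0)) < 1e-12
assert abs(zeta_k(-2) - 1j * omega * (a_**2 - b_**2) / 4) < 1e-12
```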
+---PAGE_BREAK---
+
+Figure 3: The stream function is $\psi(r, \theta) = \cos(6\theta)/r^6$, the holomorphic potential is $\xi = i/z^6$ and $\omega = 0.7$, $\rho = 0.9$, $\lambda = -2.5$ and $\mathbf{s}_k = (\rho \cos(2k\pi/6), \rho \sin(2k\pi/6))^T$ for $k = 1, \dots, 6$.
+
+# 4 Stealth Rigid Solids
+
+In this Section we aim to determine all the possible shapes and configurations of solids for which the complex potential $\xi$ is identically null. Such a displacement will be termed stealth.
+
+**Theorem 4.1.** *The only solids S that can undergo stealth motions in a fluid are:*
+
+* Disks rotating about their centers;
+
+* Arcs of circles and segments with velocity field everywhere tangent to $\mathcal{S}$.
+
+Arcs of circles and segments are one-dimensional solids and can be considered as degenerate cases.
+
+*Proof.* Let us assume that $\xi = 0$. Then we have also $\zeta = 0$, which means that $-2\Im(\bar{w}_0 f(z)) + \omega|f(z)|^2$ is constant on $\partial\Omega$. If $\omega \neq 0$, some easy computations tell us that for all $z \in \partial\Omega$, $f(z)$ belongs to a circle of center $iw_0/\omega$ (its radius being fixed by the constant). Since $f$ is a homeomorphism from $\partial\Omega$ onto $f(\partial\Omega)$, $f(\partial\Omega)$ is a connected compact subset of this circle.
+---PAGE_BREAK---
+
+* If $f(\partial\Omega)$ is the complete circle, it means that $c_1 = 1$ and $c_k = 0$ for all $k \neq 1$. In this case, since $\zeta_{-1} = 0$ and $(\check{c}*c)_{-1} = 0$, we deduce that $w_0 = 0$ and hence that the circle is just rotating about its center.
+
+* Up to a translation and a rotation, all the conformal mappings that map the circle onto an arc of circle have the form $f(z) = z + (1 - h^2)/(z + ih)$ where $h$ is any real number such that $0 < h < 1$. We can put $f$ into the general form (3.1) by setting: $c_1 = 1$, $c_{-1} = 1 - h^2$ and $c_k = (1 - h^2)(-ih)^{-k-1}$ for all $k \le -2$. Some simple computations lead to: $(\check{c}*c)_{-1} = -ihc_{-1}$ and $(\check{c}*c)_k = (ih^{-1} - ih)c_k$ for all $k \le -2$. Plugging these expressions into (3.3) and writing that $\zeta_k(w_0, \omega) = 0$ for all $k \le -1$ we obtain the same equation for all $k$ which yields the relation: $w_0 = \omega(h - 1/h)$. We can then easily prove that this motion corresponds to the case where the velocity field is tangent to the solid.
+
+Let us assume now that $\omega$ is zero (and $w_0 \neq 0$). In this case, we deduce from (3.3) that $c_k = 0$ for all $k \le -2$. For $k = -1$, we get $c_{-1} = \bar{c}_1 w_0 / \bar{w}_0$. We set $w_0 = Re^{i\theta}$, $c_1 = \tilde{R}e^{i\beta}$ and we rewrite $f$ in the form:
+
+$$f(z) = \tilde{R} \left[ e^{i\beta} z + e^{i(2\theta-\beta)}/z \right] = \tilde{R}e^{i\theta} \left[ e^{i(\beta-\theta)}z + e^{-i(\beta-\theta)}/z \right].$$
+
+We seek the image of the unitary circle by $f$. We specify $z = e^{it}$ with $t \in \mathbb{R}/2\pi$ and we get $f(e^{it}) = 2\tilde{R}e^{i\theta}\cos(\beta - \theta + t)$. So the image of the unitary circle is the segment $[-2\tilde{R}, 2\tilde{R}]$ turned by an angle $\theta$. The velocity $w_0$ is collinear to the segment. $\square$
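The last computation can be verified numerically. A sketch (hypothetical $\tilde{R}$, $\beta$, $\theta$) checking that the image of the unitary circle under $f(z) = \tilde{R}e^{i\beta}z + \tilde{R}e^{i(2\theta-\beta)}/z$ is a segment of length $4\tilde{R}$ turned by the angle $\theta$:

```python
import cmath
import math

Rt, beta, theta = 1.3, 0.4, 1.1   # hypothetical R~, beta, theta
f = lambda z: (Rt * cmath.exp(1j * beta) * z
               + Rt * cmath.exp(1j * (2 * theta - beta)) / z)

for k in range(24):
    w = f(cmath.exp(1j * 2 * math.pi * k / 24))
    # w = 2 R~ cos(beta - theta + t) e^{i theta}: real after rotating by -theta
    u = w * cmath.exp(-1j * theta)
    assert abs(u.imag) < 1e-12
    assert abs(u.real) <= 2 * Rt + 1e-12
```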
+
+# 5 Detection of a Moving Ellipse
+
+When $S_0$ is an ellipse, the function $f$ has the form $f(z) = \frac{a+b}{2}z + \frac{a-b}{2z}$ where $a > b > 0$. We can now give the proof of Proposition 1.1.
+
+*Proof.* First, we can explicitly compute the inverse function
+
+$$f^{-1}(z) = \frac{z}{(a+b)} \left( 1 + \sqrt{1 - \frac{(a^2 - b^2)}{z^2}} \right), \quad (z \in \mathcal{F}_0). \tag{5.1}$$
+
+In this expression, $-\sqrt{a^2 - b^2}$ and $\sqrt{a^2 - b^2}$ are branch points and the function is holomorphic everywhere but on the segment $[-\sqrt{a^2 - b^2}, \sqrt{a^2 - b^2}]$ which is a branch cut. Next, we get:
+
+$$\zeta(z) = \left[ -\bar{w}_0 \frac{a-b}{2} + w_0 \frac{a+b}{2} \right] \frac{1}{z} + i\omega \frac{a^2-b^2}{4} \frac{1}{z^2}, \quad (z \in \Omega),$$
+
+and then:
+
+$$\xi(z) = \frac{\left[-(a^2 - b^2)\bar{w}_0 + (a+b)^2 w_0\right]e^{i\alpha}}{2(z-r)\left[1 + \sqrt{1 - \frac{(a^2-b^2)e^{2i\alpha}}{(z-r)^2}}\right]} + \frac{i(a^2-b^2)(a+b)^2 e^{2i\alpha}\omega}{4(z-r)^2\left[1+\sqrt{1-\frac{(a^2-b^2)e^{2i\alpha}}{(z-r)^2}}\right]^2}, \quad (z \in \mathcal{F}).$$
+
+Observe that, owing to the symmetry of the ellipse, we can change $\alpha$ into $\alpha + \pi$ and accordingly $w_0$ into $-w_0$ without changing the expression of $\xi$. The potential $\xi$ is holomorphic everywhere but on the branch cut $[r - \sqrt{a^2 - b^2}e^{i\alpha}, r + \sqrt{a^2 - b^2}e^{i\alpha}]$. So if we know $\xi$ thoroughly, we can determine the location of the branch points $r - \sqrt{a^2 - b^2}e^{i\alpha}$ and $r + \sqrt{a^2 - b^2}e^{i\alpha}$ and hence also
+---PAGE_BREAK---
+
+the position of the center $r$ and the orientation $\alpha$ (up to $\pi$ only). We compute next the limit
+$\mu := \lim_{|z|\to+\infty} e^{-i\alpha}\xi(z)z/(a+b) = [-(a-b)\bar{w}_0 + (a+b)w_0]/4$ and we deduce the expression of
+$w_0$, namely: $w_0 = (\mu + \bar{\mu})/b + (\mu - \bar{\mu})/a$. The only remaining unknown quantity $\omega$ is next easily
+obtained, following the same idea. $\square$
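The recovery of $w_0$ from the limit $\mu$ can be checked numerically. A minimal sketch in Python, with arbitrary assumed values of $a$, $b$ and $w_0$:

```python
# Check that w0 = (mu + conj(mu))/b + (mu - conj(mu))/a, where
# mu = [-(a - b) conj(w0) + (a + b) w0] / 4 is the limit computed above.
a, b = 2.0, 1.0          # assumed semi-axes, a > b > 0
w0 = 0.8 + 0.3j          # assumed translational velocity

mu = (-(a - b) * w0.conjugate() + (a + b) * w0) / 4
w0_rec = (mu + mu.conjugate()) / b + (mu - mu.conjugate()) / a
assert abs(w0_rec - w0) < 1e-12
```

The check works because $\mu + \bar{\mu}$ isolates $b\,\Re(w_0)$ and $\mu - \bar{\mu}$ isolates $ia\,\Im(w_0)$.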
+
+Figure 4: Level sets of the holomorphic potential $\xi_0$, for $a=2$, $b=1$, $w_0 = e^{i\pi/3}$ and $\omega = -2$. The boundary of the ellipse (dashed line) can hardly be directly detected but the branch points $-\sqrt{3}$ and $\sqrt{3}$ are easily spotted.
+
+# 6 Detection: General Case
+
+## 6.1 Singularities of the holomorphic potential
+
+In the preceding example, the branch points of the potential $\xi$ played a crucial role in determining
+the position of the solid in the fluid. Notice that the existence of these points did not depend on the
+configuration but only on the shape of the solid (they came out in the definition (5.1) of the inverse
+function $f^{-1}$ and were subsequently just translated and rotated according to the position). We
+shall prove that this result can be generalized to any solid: there is no singularity in the potential
+function that does not come from the conformal mapping $f^{-1}$ (but unfortunately, the potential
+function may have fewer singular points than the function $f^{-1}$). Let us make this statement precise:
+
+**Definition 1** (Analytic continuation). A holomorphic function $\tilde{\xi}$ (resp. $\tilde{\xi}_0$) defined in a connected open set $\tilde{\mathcal{F}}$ (resp. $\tilde{\mathcal{F}}_0$) containing $\mathcal{F}$ (resp. $\mathcal{F}_0$) is called an analytic continuation of $\xi$ (resp. $\xi_0$) when $\tilde{\xi} = \xi$ in $\mathcal{F}$ (resp. $\tilde{\xi}_0 = \xi_0$ in $\mathcal{F}_0$).
+
+There may exist several analytic continuations of $\xi$ that do not coincide everywhere. Assume that
+$\tilde{\xi}_1$ and $\tilde{\xi}_2$ are two such functions defined respectively on $\tilde{\mathcal{F}}_1$ and $\tilde{\mathcal{F}}_2$. So the Analytic Continuation
+---PAGE_BREAK---
+
+Theorem ensures only that $\tilde{\xi}_1 = \tilde{\xi}_2$ on the connected component of $\tilde{\mathcal{F}}_1 \cap \tilde{\mathcal{F}}_2$ containing $\mathcal{F}$. In Section 5 for instance, we cannot choose where the branch points are, but there are many different possible choices for the branch cut, each one corresponding to a different analytic continuation of $\xi$.
+
+Assume that for some potential function $\xi$, there exists an analytic continuation $\tilde{\xi}$ such that $\tilde{\mathcal{F}} = \mathbf{C}$. Since, by construction, $\xi(z)$ tends to 0 as $|z|$ goes to infinity, $\tilde{\xi}$ is a bounded entire function. According to Liouville's Theorem, this function is constant, equal to 0. This case was treated in Section 4 and is possible only for the solids listed in Theorem 4.1. For all other solids and for any analytic continuation $\tilde{\xi}$, there exists at least one point, located inside the solid, which does not belong to $\tilde{\mathcal{F}}$. This very simple observation already allows one to locate the solid as a first approximation.
+
+Let us now prove that the singularities of $\tilde{\xi}$ come from the singularities of $f^{-1}$.
+
+**Proposition 6.1.** If there exists an analytic continuation $\tilde{g}$ of $f^{-1}$ defined on an open connected set $\tilde{\mathcal{F}}_0$ containing $\mathcal{F}_0$, then for any configuration $(\mathbf{p}, \mathbf{v}) \in \mathcal{P} \times \mathcal{V}$, there exists an analytic continuation $\tilde{\xi}$ of $\xi$ defined on $\tilde{\mathcal{F}} := \mathbf{p}(\tilde{\mathcal{F}}_0 \setminus \tilde{g}^{-1}(\{0\}))$.
+
+In this proposition, the notation $\tilde{g}^{-1}(\{0\})$ stands for the preimage of $\{0\}$ under $\tilde{g}$ and does not mean that $\tilde{g}$ is invertible. Since $\tilde{g}$ is holomorphic, the set $\tilde{g}^{-1}(\{0\})$ consists only of isolated points and $\tilde{\mathcal{F}}_0 \setminus \tilde{g}^{-1}(\{0\})$ is still connected and still contains $\mathcal{F}_0$.
+
+*Proof.* For all $z \in \partial D$, we can rewrite $\zeta$ in the form:
+
+$$
+\begin{aligned}
+\zeta(z) = & -\bar{w}_0(f(z) - c_1z) + w_0\bar{c}_1z^{-1} + i\omega[\bar{c}_1z^{-1}(f(z) - c_1z) + \\
+& \qquad \sum_{k \ge 1} \bar{c}_{-k}z^k(f(z) - \sum_{-k \le j \le 1} c_j z^j)].
+\end{aligned}
+\quad (6.1)
+$$
+
+Expanding the right hand side and recombining terms, we get the identity:
+
+$$ \zeta(z) = -\bar{w}_0 f(z) + c_1 \bar{w}_0 z + w_0 \bar{c}_1 z^{-1} + i \omega [f(z) \bar{f}(z^{-1}) - H_1(z)], \quad (z \in \partial D), $$
+
+where $H_1(z) := \sum_{j \ge 0} (\check{c} * c)_j z^j$ and $\bar{f}(z) := \bar{c}_1 z + \sum_{k \le -1} \bar{c}_k z^k$. Classical results for the convolution product ensure that this series is uniformly convergent for $|z| \le 1$ since $\|\check{c} * c\|_{\ell^1(\mathcal{C})} \le \|\bar{c}\|_{\ell^1(\mathcal{C})} \|c\|_{\ell^1(\mathcal{C})}$. We next obtain that:
+
+$$ \zeta(f^{-1}(z)) = -\bar{w}_0 z + c_1 \bar{w}_0 f^{-1}(z) + w_0 \bar{c}_1 / f^{-1}(z) + i\omega[z\bar{f}(1/f^{-1}(z)) - H_1(f^{-1}(z))], \quad (z \in \partial S_0). $$
+
+Let $\tilde{g}$ be any analytic continuation of $f^{-1}$ (not necessarily invertible), defined in an open set $\tilde{\mathcal{F}}_0$. This set can be split into three parts: $\tilde{\mathcal{F}}_0^+ := \{z \in \tilde{\mathcal{F}}_0 : |\tilde{g}(z)| > 1\}$, $\tilde{\mathcal{F}}_0^- := \{z \in \tilde{\mathcal{F}}_0 : 0 < |\tilde{g}(z)| < 1\}$ and $\tilde{\mathcal{F}}_0^1 := \{z \in \tilde{\mathcal{F}}_0 : |\tilde{g}(z)| = 1\}$. We can next define:
+
+$$
+\begin{aligned}
+\tilde{\xi}_0(z) &:= -\bar{w}_0 z + c_1 \bar{w}_0 \tilde{g}(z) + w_0 \bar{c}_1 / \tilde{g}(z) + i\omega[z\bar{f}(1/\tilde{g}(z)) - H_1(\tilde{g}(z))], && z \in \tilde{\mathcal{F}}_0^- \cup \tilde{\mathcal{F}}_0^1, \\
+\tilde{\xi}_0(z) &:= -\bar{w}_0 z + c_1 \bar{w}_0 \tilde{g}(z) + w_0 \bar{c}_1 / \tilde{g}(z) + i\omega[H_2(\tilde{g}(z))], && z \in \tilde{\mathcal{F}}_0^+,
+\end{aligned}
+$$
+
+where $H_2(z) := \sum_{j \le -1} (\check{c} * c)_j z^j$ is uniformly convergent for $|z| \ge 1$. We deduce that the function $\tilde{\xi}_0$ is holomorphic in $\tilde{\mathcal{F}}_0^-$ and in $\tilde{\mathcal{F}}_0^+$ and continuous in $\tilde{\mathcal{F}}_0 \setminus \tilde{g}^{-1}(\{0\})$. Let $z_0 \in \tilde{\mathcal{F}}_0^1$ and denote
+---PAGE_BREAK---
+
+Figure 5: The conformal mapping $\tilde{g}$ nearby the point $z_0$ for $n = 3$.
+
+$z_1 := \tilde{g}(z_0)$. Since $\tilde{g}$ is holomorphic at the point $z_0$, there exist $R > 0$, $n \ge 1$ and a function $h$ holomorphic in the disk $D(0, R)$ such that $h(0) \ne 0$ and
+
+$$ \tilde{g}(z) = z_1 + (z - z_0)^n h(z - z_0), \quad z \in D(z_0, R). $$
+
+This identity allows us to describe the set $\tilde{\mathcal{F}}_0^1$ nearby the point $z_0$. For instance, if $n=3$, and for $R$ small enough, we get something like in Figure 5 where $z_1 = \tilde{g}(z_0)$, $\cup_{j=1}^3 A_j^+ = D(z_0, R) \cap \tilde{\mathcal{F}}_0^+$, $\cup_{j=1}^3 A_j^- = D(z_0, R) \cap \tilde{\mathcal{F}}_0^-$ and the curves radiating from $z_0$ correspond to the set $D(z_0, R) \cap \tilde{\mathcal{F}}_0^1$. Except maybe at the point $z_0$, the boundaries shared by the regions $A_j^+$ and $A_j^-$ are smooth. The function $\tilde{g}$ maps $A_1^+$ onto $U^+$ (an open set located outside the unitary disk) and $A_1^-$ onto $U^-$ (an open set located inside the unitary disk). Furthermore, the function $\tilde{g}|_{A_1^+ \cup A_1^-} : A_1^+ \cup A_1^- \to U^- \cup U^+$ is a conformal mapping. Consider next a conformal mapping $\phi$ which maps a neighborhood of $z_1$ onto a neighborhood of 0 as in Figure 5 and such that the image of the unitary circle is the imaginary axis. We can then apply [7, Theorem 16.8]: the function $\tilde{\xi}_0 \circ \tilde{g}^{-1} \circ \phi^{-1}$ is holomorphic on both sides of the imaginary axis and continuous across this boundary, so it is holomorphic over the whole domain. We deduce that $\tilde{\xi}_0$ is holomorphic across the boundary between $A_1^+$ and $A_1^-$. We can repeat this process with the domains $A_1^+ \cup A_2^-$, $A_2^+ \cup A_2^-$ and so on. Finally, we obtain that $\tilde{\xi}_0$ is holomorphic on the whole disk $D(z_0, R)$ except maybe at the point $z_0$. But once more, the continuity of the function $\tilde{\xi}_0$ rules out this possibility. □
+
+As already mentioned, the potential function can make some singularities of $f^{-1}$ vanish. This is illustrated by the example of Figure 6.
+
+## 6.2 Asymptotic expansion of the holomorphic potential
+
+In this section, we shall compute the asymptotic expansion of $\xi$ in terms of the geometrical data of $S_0$ and the configuration $(\mathbf{p}, \mathbf{v}) \in \mathcal{P} \times \mathcal{V}$. As explained in the preceding section, we know that if $S_0$ is not one of the solids listed in Theorem 4.1, the potential function admits no analytic continuation to the whole complex plane. This allows one to deduce approximately where the solid is. For all $\nu \in \mathbf{C}$, we can next consider $\Gamma$, a contour large enough to encircle the solid and the point $\nu$. The contour $\tilde{\Gamma} := f^{-1}(e^{-i\alpha}(\Gamma - r))$ encircles the unitary disk. According to the expression (1.1) of the
+---PAGE_BREAK---
+
+Figure 6: The solid consists in a disk of center C with a segment [A, B]. When this solid is moving to the left, parallel to the segment, one can easily check that the potential coincides with the potential of the disk. Although any analytic continuation of $f^{-1}$ has singularities at A and B, the complex potential does not see these points. In this case, the singularities do not allow one to determine the orientation of the solid.
+
+potential as a Laurent series, we obtain that, for all $n \ge 1$:
+
+$$
+\begin{aligned}
+\lambda_n(\nu) &= \frac{1}{2i\pi} \oint_{\Gamma} \xi(z)(z-\nu)^{n-1} dz \\
+&= \frac{e^{i\alpha}}{2i\pi} \oint_{\tilde{\Gamma}} \zeta(z) (e^{i\alpha} f(z) + r - \nu)^{n-1} f'(z) dz \\
+&= \frac{1}{n} \frac{1}{2i\pi} \oint_{\tilde{\Gamma}} \zeta(z) \frac{d}{dz} (e^{i\alpha} f(z) + r - \nu)^n dz \\
+&= -\frac{1}{n} \frac{1}{2i\pi} \oint_{\tilde{\Gamma}} \zeta'(z) (e^{i\alpha} f(z) + r - \nu)^n dz \\
+&= -\frac{1}{n} \frac{1}{2i\pi} \sum_{k=0}^{n} \binom{n}{k} e^{ik\alpha} (r-\nu)^{n-k} \oint_{\tilde{\Gamma}} \zeta'(z) f(z)^k dz.
+\end{aligned}
+$$
+
+If we define the complex sequence $d := (d_k)_{k \in \mathbb{Z}}$ by $d_k := (k+1)\zeta_{k+1}$ for all $k \le -1$ and $d_k = 0$ for $k \ge 0$, we obtain:
+
+$$ \lambda_n(\nu) = -\frac{1}{n} \sum_{k=1}^{n} \binom{n}{k} e^{ik\alpha} (r-\nu)^{n-k} (d*c^k)_{-1}, \quad (\nu \in \mathbb{C}). $$
+
+We can rewrite the last term:
+
+$$
+\begin{aligned}
+(d * c^k)_{-1} &= \sum_{i_1 + \dots + i_{k+1} = -1} c_{i_1} \dots c_{i_k} d_{i_{k+1}} \\
+&= -A_k \bar{w}_0 + B_k w_0 + i\omega C_k,
+\end{aligned}
+$$
+
+where
+
+$$ A_k := \sum_{\substack{i_1+\cdots+i_{k+1}=0 \\ i_1 \le -1}} i_1 c_{i_1} \cdots c_{i_{k+1}}, $$
+
+$$ B_k := \sum_{\substack{i_1+\cdots+i_{k+1}=0 \\ i_1 \le -1}} i_1 \bar{c}_{-i_1} c_{i_2} \cdots c_{i_{k+1}}, $$
+
+$$ C_k := \sum_{\substack{i_1+\cdots+i_{k+2}=0 \\ i_1+i_2 \le -1}} (i_1+i_2)\bar{c}_{-i_1} c_{i_2} \cdots c_{i_{k+2}}. $$
+
+In the following, to simplify the notation, we will rather consider the quantities $\mathcal{A}_k := -A_k/k$, $\mathcal{B}_k := -B_k/k$ and $\mathcal{C}_k := -C_k/k$. Indeed, we get, for all $n \ge 1$:
+
+$$ \lambda_n(\nu) = \sum_{k=1}^{n} \binom{n-1}{k-1} e^{ik\alpha} (r-\nu)^{n-k} [-\mathcal{A}_k \bar{w}_0 + \mathcal{B}_k w_0 + i\omega \mathcal{C}_k]. \quad (6.2) $$
+---PAGE_BREAK---
+
+The problem of detection can now be reformulated as a purely algebraic problem: the complex sequence $(\lambda_j(\nu))_{j \ge 1}$ being given for all $\nu \in \mathbb{C}$, as well as the complex numbers $\mathcal{A}_k$, $\mathcal{B}_k$ and $\mathcal{C}_k$ ($k \ge 1$), can we solve the infinite nonlinear system of equations (6.2) and find the values of $r, \alpha, w_0$ and $\omega$? According to the results of both Section 2 and Section 4, we already know that there exist cases (namely, coefficients $(c_k)_{k \in \mathbb{Z}}$) for which the answer is negative.
+
+In order to rewrite this infinite set of equations in a convenient short form, we introduce some linear operators: let $N$ be any positive integer and define $\mathcal{G}_N : \mathbf{R}^3 \to \mathbf{C}^N$ by $(\mathcal{G}_N U)_k := (-\mathcal{A}_k + \mathcal{B}_k)U_1 + i(\mathcal{A}_k + \mathcal{B}_k)U_2 + i\mathcal{C}_k U_3$ for all $U = (U_1, U_2, U_3)^T \in \mathbf{R}^3$ and all $1 \le k \le N$. We define $D_N : \mathbf{C}^N \to \mathbf{C}^N$ and $S_N : \mathbf{C}^N \to \mathbf{C}^N$ as well, by respectively $(D_N Z)_k := kZ_k$ ($1 \le k \le N$) and $(S_N Z)_1 := 0$ and $(S_N Z)_k := Z_{k-1}$ ($2 \le k \le N$) for all $Z := (Z_1, \dots, Z_N) \in \mathbf{C}^N$. The first $N$ equations (6.2) can now be rewritten as:
+
+$$
+\begin{align}
+\Lambda_N(\nu) &= e^{\log(r-\nu)D_N} e^{S_N D_N} e^{-\log(r-\nu)D_N} e^{i\alpha D_N} \mathcal{G}_N U, \tag{6.3a} \\
+&= \Theta_N(r-\nu, \alpha) \mathcal{G}_N U, \tag{6.3b}
+\end{align}
+ $$
+
+where $U := (\Re(w_0), \Im(w_0), \omega)^T$ and $\Lambda_N(\nu) := (\lambda_1(\nu), \dots, \lambda_N(\nu))^T$. The operator $e^{S_N D_N}$ is lower triangular and the identity $(e^{S_N D_N})_{n,k} = \binom{n-1}{k-1}$ for all $1 \le k \le n \le N$, not so obvious, can be found in [2]. Considering the expressions (6.3), it is worth noticing that:
+
+* In (6.3), the coefficients $\lambda_j(\nu)$ in the asymptotic expansion of $\xi$ are obtained by applying to the vector $U$ (the velocity) first the operator $\mathcal{G}_N$ encapsulating the information relating to the geometry of the solid and next the operator $\Theta_N(r - \nu, \alpha)$ depending only on the position.
+
+* The linear operator $\mathcal{G}_N$ depends on the complex sequence $c$ only, i.e. on the shape of the solid. Moreover, the complex quantities $\mathcal{A}_k - c_{-k}c_1^k$ ($k \ge 1$), $\mathcal{B}_{k+1} - c_{-k}\bar{c}_1c_1^k$ ($k \ge 2$) and $\mathcal{C}_{k-1} - c_{-k}\bar{c}_{-1}c_1^{k-1}$ ($k \ge 1$) do not depend on $c_{-n}$ for all $n \ge k$. In other words, for all $N \ge 1$, $\mathcal{G}_N$ depends on $c_1, c_{-1}, c_{-2}, \dots, c_{-N-1}$ only. We deduce:
+
+**Proposition 6.2.** Let $S_0^1$ and $S_0^2$ be two shapes described by means of the complex sequences $(c_k^1)_{k \in \mathbb{Z}}$ and $(c_k^2)_{k \in \mathbb{Z}}$, such that $c_k^1 = c_k^2$ for all $k \ge -N$. Then, if both solids have the same configuration, their complex potentials will have the same asymptotic expansion up to the order $N-1$.
+
+* The solutions $(r_1, r_2, \alpha, \Re(w_0), \Im(w_0), \omega)^T$ of all of the equations (6.3) (for all $N \ge 1$), form a subanalytic set of $\mathbf{R}^6$ (because $\Theta_N$ is analytic in $r_1, r_2$ and $\alpha$). This subanalytic set has dimension $d$ with $0 \le d \le 6$. However, because the dependence in $(\Re(w_0), \Im(w_0), \omega)^T$ is linear, if it had dimension $d \ge 4$, it would entail the existence of a position $(r_1, r_2, \alpha)^T$ and a non-zero velocity $U_0 \in \mathbf{R}^3$ such that $\Theta_N(r - \nu, \alpha)\mathcal{G}_N U_0 = 0$ for all $N \ge 1$. As already mentioned before, this case is only possible if the solid is in the list of Theorem 4.1.
+
+We do not know if there exist solids such that $0 < d \le 3$. Observe that in Section 2, we have only given examples for which the solids can occupy a finite number of different positions, so $d = 0$ in these cases.
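The Pascal-matrix identity $(e^{S_N D_N})_{n,k} = \binom{n-1}{k-1}$ quoted above is easy to verify numerically: $S_N D_N$ is nilpotent, so the exponential series terminates. A small self-contained check in Python, using exact rational arithmetic:

```python
from fractions import Fraction
from math import comb, factorial

# The matrix of S_N D_N: (S_N D_N Z)_k = (k-1) Z_{k-1}, i.e. the entry
# at row k, column k-1 (1-indexed) is k-1.
N = 8
M = [[Fraction(0)] * N for _ in range(N)]
for k in range(2, N + 1):
    M[k - 1][k - 2] = Fraction(k - 1)

def matmul(X, Y):
    return [[sum(X[i][l] * Y[l][j] for l in range(N)) for j in range(N)]
            for i in range(N)]

E = [[Fraction(int(i == j)) for j in range(N)] for i in range(N)]  # identity
term = [row[:] for row in E]
for p in range(1, N):        # M^N = 0, so the exponential series stops here
    term = matmul(term, M)
    for i in range(N):
        for j in range(N):
            E[i][j] += term[i][j] / factorial(p)

# The exponential is the lower-triangular Pascal matrix.
for n in range(1, N + 1):
    for k in range(1, n + 1):
        assert E[n - 1][k - 1] == comb(n - 1, k - 1)
```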
+
+For all $N \ge 1$, we can invert the system (6.3) to obtain:
+
+$$
+\begin{align}
+\mathcal{G}_N U &= e^{-i\alpha D_N} e^{\log(r-\nu)D_N} e^{-S_N D_N} e^{-\log(r-\nu)D_N} \Lambda_N(\nu), \tag{6.4} \\
+&= \Theta_N(r-\nu, \alpha)^{-1} \Lambda_N(\nu), \tag{6.5}
+\end{align}
+ $$
+---PAGE_BREAK---
+
+or equivalently, with the notation of equation (6.2),
+
+$$
+-\mathcal{A}_n \bar{w}_0 + \mathcal{B}_n w_0 + i\omega \mathcal{C}_n = e^{-in\alpha} \left[ \sum_{k=1}^{n} \binom{n-1}{k-1} (\nu-r)^{n-k} \lambda_k(\nu) \right], \quad (6.6)
+$$
+
+for all $n \in \mathbb{N}$. In this form, we can easily prove:
+
+**Proposition 6.3.** If the solid does not come out in Theorem 4.1 and its position is given, then we can deduce its velocity.
+
+*Proof.* Denote by $\mathcal{G}^j$ ($j = 1, 2, 3$) the complex sequences respectively defined by $\mathcal{G}^1 := (-\mathcal{A}_k + \mathcal{B}_k)_{k \ge 1}$,
+$\mathcal{G}^2 := (i(\mathcal{A}_k + \mathcal{B}_k))_{k \ge 1}$ and $\mathcal{G}^3 := (i\mathcal{C}_k)_{k \ge 1}$. If these sequences were not $\mathbf{R}$-linearly independent in
+$\mathbf{C}^{\mathbb{N}}$, there would exist $(U_1, U_2, U_3)^T \neq 0$ in $\mathbf{R}^3$ such that $\sum_j U_j \mathcal{G}^j = 0$ and then, for any position $(r, \alpha)$
+and any $N \ge 1$, we would have $\Theta_N(r - \nu, \alpha)\mathcal{G}_N U = 0$, which contradicts the assumption that the
+solid is not listed in Theorem 4.1. Conversely, if the sequences $\mathcal{G}^j$ are $\mathbf{R}$-linearly independent, then
+there exists $N_0 \ge 3$ such that for all $N \ge N_0$, $\mathcal{G}_N$ is of rank 3, and the proof is complete. $\square$
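Once $\mathcal{G}_N$ has rank 3, recovering the velocity from the measurements is just a real linear solve. The following sketch (Python; the two complex rows standing in for $\mathcal{G}_N$ are made up, not computed from any actual shape) illustrates the recovery of $U = (\Re(w_0), \Im(w_0), \omega)^T$:

```python
# Two complex rows of a hypothetical G_N (assumed values; rank 3 over the reals).
G = [(1 + 2j, 0.5 - 1j, 3j),
     (-1j, 2 + 0j, 1 + 1j)]
U = [0.3, -1.2, 0.7]             # the "unknown" velocity (Re w0, Im w0, omega)

meas = [sum(g * u for g, u in zip(row, U)) for row in G]

# Split each complex equation into two real ones: A x = rhs.
A, rhs = [], []
for row, m in zip(G, meas):
    A.append([g.real for g in row]); rhs.append(m.real)
    A.append([g.imag for g in row]); rhs.append(m.imag)

def det3(M):
    return (M[0][0] * (M[1][1] * M[2][2] - M[1][2] * M[2][1])
          - M[0][1] * (M[1][0] * M[2][2] - M[1][2] * M[2][0])
          + M[0][2] * (M[1][0] * M[2][1] - M[1][1] * M[2][0]))

M3 = A[:3]                        # three independent real equations
d = det3(M3)
U_rec = []
for j in range(3):                # Cramer's rule
    Mj = [r[:] for r in M3]
    for i in range(3):
        Mj[i][j] = rhs[i]
    U_rec.append(det3(Mj) / d)

assert max(abs(u - v) for u, v in zip(U, U_rec)) < 1e-9
```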
+
+**Proposition 6.4.** Assume that the shape $\mathcal{S}_0$ of the solid, described by the conformal mapping (3.1),
+is such that $e^{i\pi/2}\mathcal{S}_0 = \mathcal{S}_0$ (the shape of the solid is invariant by rotation of angle $\pi/2$ and center
+0). Then, from the holomorphic potential $\xi$, we can always deduce the values of $r$, $w_0e^{i\alpha}$ (the
+linear velocity expressed in the fixed reference frame) and $|\omega|$ (the absolute value of the rotational
+velocity).
+
+*Proof.* The assumption on the shape $\mathcal{S}_0$ means that $f(\Omega) = \tilde{f}(\Omega)$ where $\tilde{f}$ is the conformal mapping defined by:
+
+$$
+\tilde{f}(z) := \tilde{c}_1 z + \sum_{k \le -1} \tilde{c}_k z^k, \quad (z \in \Omega),
+$$
+
+with $\tilde{c}_k = e^{i\pi/2}c_k$ for $k = 1$ and for all $k \le -1$. Replacing the function $f$ by $\tilde{f}$ in the computations,
+we obtain that:
+
+$$
+\lambda_n(\nu) = \sum_{k=1}^{n} \binom{n-1}{k-1} e^{ik\alpha} (r-\nu)^{n-k} \left[ -\mathcal{A}_k e^{i(k+1)\pi/2} \bar{w}_0 + \mathcal{B}_k w_0 e^{i(k-1)\pi/2} + i\omega \mathcal{C}_k e^{ik\pi/2} \right], \quad (6.7)
+$$
+
+and the coefficients $\lambda_k(\nu)$, defined equivalently by (6.2) and by (6.7), must be equal for all $r, \nu \in \mathbf{C}$,
+$\alpha \in \mathbf{R}/2\pi$ and all $w_0 \in \mathbf{C}$ and $\omega \in \mathbf{R}$. In particular, for $r = \nu = 0$ and $\alpha = 0$, we obtain
+that $\mathcal{A}_n(1 - e^{i(n+1)\pi/2})\bar{w}_0 - \mathcal{B}_n(1 - e^{i(n-1)\pi/2})w_0 = 0$ and $\mathcal{C}_n(1 - e^{in\pi/2})\omega = 0$ for all $w_0 \in \mathbf{C}$,
+$\omega \in \mathbf{R}$ and $n \ge 1$. We deduce that $(\mathcal{A}_n \ne 0) \Rightarrow (n \equiv -1 \pmod 4)$, $(\mathcal{B}_n \ne 0) \Rightarrow (n \equiv 1 \pmod 4)$ and
+$(\mathcal{C}_n \ne 0) \Rightarrow (n \equiv 0 \pmod 4)$.
+
+Let us consider the problem of detection now that we have this extra information about $\mathcal{A}_n$, $\mathcal{B}_n$ and
+$\mathcal{C}_n$. Equation (6.6) with $n=2$ gives $\lambda_1(\nu-r)+\lambda_2(\nu)=0$ for all $\nu \in \mathbf{C}$. Two cases have to be
+considered:
+
+* Either $\lambda_1 = 0$, which also means that $\lambda_2(\nu) = 0$ for all $\nu \in \mathbf{C}$. It entails, according to equation (6.6) with $n=1$, that $\mathcal{B}_1 w_0 = 0$. But $\mathcal{B}_1 = |c_1|^2 \ne 0$ and hence $w_0 = 0$. We get $\omega \ne 0$, otherwise we would have $\xi = 0$. Let now $m$ be the smallest index such that $\mathcal{C}_m \ne 0$. We know that such an index exists, otherwise it would mean that $\mathcal{C}_k = 0$ for all $k \ge 1$ and hence that any rotational motion of the solid generates a complex potential equal to 0. This is impossible for a solid not listed in Theorem 4.1. By induction on $n$ with
+---PAGE_BREAK---
+
+equation (6.6), we must now have $\lambda_n(\nu) = 0$ for all $n < m$. We then use equation (6.6) with $n = m$ to obtain $i\omega \mathcal{C}_m = e^{-im\alpha}\lambda_m(\nu)$ and next equation (6.2) with $n = m + 1$ to get $\lambda_{m+1}(\nu) = me^{im\alpha}(r - \nu)i\omega \mathcal{C}_m$. Combining these two identities, we eventually obtain $\lambda_{m+1}(\nu) = m\lambda_m(\nu)(r - \nu)$, whence we deduce the value of $r$.
+
+* Or $\lambda_1 \neq 0$. In this case, we easily deduce the value of $r$ from the relation $\lambda_1(\nu-r)+\lambda_2(\nu)=0$.
+
+Then, equation (6.6) with $n=1$ gives us the value of $w_0e^{i\alpha}$. There exists at least one $m > 0$ such that $\mathcal{C}_m \neq 0$. Once again, equation (6.6) with $n=m$ allows us to deduce the value of $|\omega|$.
+
+If we have more information on $S_0$, we can go further in our study:
+
+**Proposition 6.5.** In the preceding proposition, assume furthermore that one of the following assumptions is satisfied:
+
+1. $w_0 \neq 0$ and there exists an integer $m$ such that either $\mathcal{A}_m$ and $\mathcal{A}_{m+4}$ or $\mathcal{B}_m$ and $\mathcal{B}_{m+4}$ are both different from 0;
+
+2. $\omega \neq 0$ and there exists an integer $m$ such that $\mathcal{C}_m$ and $\mathcal{C}_{m+4}$ are both different from 0.
+
+*Then, the solid is detectable.*
+
+*Proof.* We know that we can compute $r$ and $w_0e^{i\alpha}$. Assume for instance that the second assumption holds (the reasoning would be the same with the first one). In this case, equation (6.6) with $n=m$ and $n=m+4$ allows us to compute $\omega e^{im\alpha}$ and $\omega e^{i(m+4)\alpha}$. We next deduce $e^{i4\alpha}$ and hence the orientation of the solid (because of the symmetry property, $e^{i4\alpha}$ suffices to provide the orientation). Since $m \equiv 0 \pmod{4}$ (because $\mathcal{C}_m \neq 0$), we next deduce the values of $e^{im\alpha}$ and $\omega$. We know that there exists at least one index $p$ such that $\mathcal{A}_p \neq 0$. We use equation (6.6) with $n=p$ to deduce the value of $\mathcal{A}_p \bar{w}_0 e^{ip\alpha}$, or equivalently $\bar{\mathcal{A}}_p w_0 e^{-ip\alpha}$. Since necessarily $p \equiv -1 \pmod{4}$, we have $-p \equiv 1 \pmod{4}$, and together with $w_0 e^{i\alpha}$ we can deduce the values of both $w_0$ and $e^{i\alpha}$.
+
+To conclude this section, we give examples of solids without any symmetry, which are detectable
+(one is pictured in Figure 7):
+
+## 6.3 Example of detection
+
+We consider a shape $S_0$ described by a complex sequence $c$ such that $c_1 \neq 0$, $c_{-4} \neq 0$ and $c_{-7} \neq 0$, all the other coefficients $c_k$ being zero. Direct computations lead to:
+
+| $k$ | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 |
+|---|---|---|---|---|---|---|---|---|
+| $\mathcal{A}_k$ | 0 | 0 | 0 | $c_1^4 c_{-4}$ | 0 | 0 | $c_1^7 c_{-7}$ | 0 |
+| $\mathcal{B}_k$ | $\lvert c_1\rvert^2$ | 0 | 0 | 0 | 0 | $\lvert c_1\rvert^2 c_1^4 c_{-4}$ | 0 | 0 |
+| $\mathcal{C}_k$ | 0 | 0 | $c_1^3 \bar{c}_{-4} c_{-7}$ | 0 | $2\lvert c_1\rvert^2 c_1^4 c_{-4}$ | 0 | 0 | $(2\lvert c_1\rvert^2 + 3\lvert c_{-4}\rvert^2) c_1^7 c_{-7}$ |
+---PAGE_BREAK---
+
+Plugging these results into System (6.6), we obtain that:
+
+$$\mathcal{B}_1 w_0 = e^{-i\alpha}\lambda_1, \quad (6.8a)$$
+
+$$0 = e^{-i2\alpha}[\lambda_1(\nu - r) + \lambda_2(\nu)], \quad (6.8b)$$
+
+$$\mathcal{C}_3 i\omega = e^{-i3\alpha} \left[ \sum_{k=1}^{3} \binom{2}{k-1} (\nu-r)^{3-k} \lambda_k(\nu) \right], \quad (6.8c)$$
+
+$$-\mathcal{A}_4 \bar{w}_0 = e^{-i4\alpha} \left[ \sum_{k=1}^{4} \binom{3}{k-1} (\nu-r)^{4-k} \lambda_k(\nu) \right]. \quad (6.8d)$$
+
+* If $\omega \neq 0$ and $w_0 \neq 0$: From equation (6.8a) we deduce that $\lambda_1 \neq 0$ and from (6.8b) we deduce the value of $r$. Equation (6.8c) allows us to determine $|\omega|$ and $3\alpha$ up to $\pi$. Combining next equations (6.8a) and (6.8d), we get $5\alpha$. Using Bezout's relation $\alpha = u \cdot 3\alpha + v \cdot 5\alpha$ with $u=2$ and $v=-1$, we get $\alpha$. We determine $w_0$ with equation (6.8a) and $\omega$ with equation (6.8c).
+
+* If $\omega = 0$ and $w_0 \neq 0$: We compute $r$ as in the preceding case. Then, we need further calculations:
+
+$$\mathcal{B}_6 w_0 = e^{-i6\alpha} \left[ \sum_{k=1}^{6} \binom{5}{k-1} (\nu-r)^{6-k} \lambda_k(\nu) \right], \quad (6.9a)$$
+
+$$-\mathcal{A}_7 \bar{w}_0 = e^{-i7\alpha} \left[ \sum_{k=1}^{7} \binom{6}{k-1} (\nu-r)^{7-k} \lambda_k(\nu) \right]. \quad (6.9b)$$
+
+With (6.9a) and (6.8a) we get $5\alpha$ and with (6.9b) and (6.8a) we get $8\alpha$. Since 5 and 8 are coprime numbers, we deduce the value of $\alpha$. We conclude as in the preceding case.
+
+* If $\omega \neq 0$ and $w_0 = 0$: With (6.8a) and (6.8b) we deduce that $\lambda_1 = \lambda_2(\nu) = 0$ for all $\nu \in \mathbf{C}$. We rewrite (6.8c) and (6.8d) as
+
+$$\mathcal{C}_3 i\omega = e^{-i3\alpha}\lambda_3,$$
+
+$$0 = \lambda_4(\nu) + 3(\nu - r)\lambda_3,$$
+
+and we deduce first that $\lambda_3 \neq 0$ and then the value of $r$. We next add the equations:
+
+$$\mathcal{C}_5 i\omega = e^{-i5\alpha} \left[ \sum_{k=1}^{5} \binom{4}{k-1} (\nu-r)^{5-k} \lambda_k(\nu) \right], \quad (6.10a)$$
+
+$$\mathcal{C}_8 i\omega = e^{-i8\alpha} \left[ \sum_{k=1}^{8} \binom{7}{k-1} (\nu-r)^{8-k} \lambda_k(\nu) \right]. \quad (6.10b)$$
+
+Equations (6.8c) and (6.10a) give us $2\alpha$ and (6.10a) and (6.10b) give us $3\alpha$. Since 2 and 3 are coprime numbers, we get $\alpha$ and next $\omega$ with (6.8c).
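The coprime-exponent trick used in the three cases above is easy to check directly: knowing $e^{in_1\alpha}$ and $e^{in_2\alpha}$ with $\gcd(n_1, n_2) = 1$ determines $e^{i\alpha}$ through a Bezout relation. A quick sketch in Python, with an arbitrary assumed $\alpha$:

```python
import cmath

# 2*3 - 1*5 = 1, so e^{i a} = (e^{i 3a})^2 / e^{i 5a};
# likewise 2*8 - 3*5 = 1 for the pair (5a, 8a).
alpha = 0.9137                 # assumed, arbitrary orientation
u3, u5, u8 = (cmath.exp(1j * n * alpha) for n in (3, 5, 8))

assert abs(u3 ** 2 / u5 - cmath.exp(1j * alpha)) < 1e-12
assert abs(u8 ** 2 / u5 ** 3 - cmath.exp(1j * alpha)) < 1e-12
```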
+---PAGE_BREAK---
+
+Figure 7: Example of detectable solid as described in Subsection 6.3.
+
+# 7 Tracking
+
+In this section, we give the proof of Theorem 1.4. We thus assume that we know the complex potential for all $t$ in a time interval $[0, T]$ ($T > 0$). For all $t \in [0, T]$, we have an expression
+
+$$ \xi(t, z) := \sum_{j \ge 1} \frac{\lambda_j(t, \nu)}{(z - \nu)^j}, \quad |z - \nu| > R(t, \nu), \qquad (7.1) $$
+
+where $\lambda_j(t, \nu)$ are complex numbers and $R(t, \nu) := \limsup_{j \to +\infty} |\lambda_j(t, \nu)|^{1/j}$. At any time $t$, the series is uniformly convergent on $\{z \in \mathbf{C} : |z - \nu| > R(t, \nu)\}$. For all $N \ge 1$, we denote $\Lambda_N(t, \nu) := (\lambda_1(t, \nu), \dots, \lambda_N(t, \nu))^T$ and we have, according to the results of the preceding section:
+
+$$ \mathcal{G}_N U(t) = \Theta_N(r(t) - \nu, \alpha(t))^{-1} \Lambda_N(t, \nu), $$
+
+where we recall that $U(t) := (\Re(w_0(t)), \Im(w_0(t)), \omega(t))^T$. In the proof of Proposition 6.3 we have shown that if the solid is not one of those described in Theorem 4.1, then there exists $N \ge 1$ such that $\mathcal{G}_N$ has rank 3. It means that there exists (at least) one left inverse $\mathcal{G}_N^{-1}$ allowing one to express the velocity as:
+
+$$ U(t) := \mathcal{G}_N^{-1} \Theta_N (r(t) - \nu, \alpha(t))^{-1} \Lambda_N(t, \nu). $$
+
+We next get:
+
+$$ \begin{pmatrix} e^{i\alpha(t)}w_0(t) \\ \omega(t) \end{pmatrix} = \begin{pmatrix} e^{i\alpha(t)} & i e^{i\alpha(t)} & 0 \\ 0 & 0 & 1 \end{pmatrix} \mathcal{G}_N^{-1} \Theta_N (r(t) - \nu, \alpha(t))^{-1} \Lambda_N (t, \nu). $$
+
+This equation can be rewritten as:
+
+$$ \frac{d}{dt} \begin{pmatrix} r(t) \\ \alpha(t) \end{pmatrix} = \begin{pmatrix} e^{i\alpha(t)} & i e^{i\alpha(t)} & 0 \\ 0 & 0 & 1 \end{pmatrix} \mathcal{G}_N^{-1} \Theta_N (r(t) - \nu, \alpha(t))^{-1} \Lambda_N (t, \nu), $$
+
+to which we can apply the Cauchy-Lipschitz Theorem. The proof is then completed.
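In the simplest situation where the reconstructed body-frame velocity is constant, this ODE integrates in closed form, which gives a way to sanity-check a numerical scheme. A sketch in Python (the values of $w_0$ and $\omega$ are assumed, not taken from the text):

```python
import cmath

w0, omega = 1.0 + 0.5j, 0.8      # assumed constant body-frame velocity
r, alpha = 0j, 0.0
dt, steps = 1e-4, 20000          # integrate t from 0 to T = 2

for _ in range(steps):
    r += cmath.exp(1j * alpha) * w0 * dt   # dr/dt     = e^{i alpha} w0
    alpha += omega * dt                    # dalpha/dt = omega

T = dt * steps
# Closed form with alpha(0) = 0: r(T) = w0 (e^{i omega T} - 1) / (i omega).
r_exact = w0 * (cmath.exp(1j * omega * T) - 1) / (1j * omega)
assert abs(r - r_exact) < 1e-3             # first-order Euler accuracy
assert abs(alpha - omega * T) < 1e-9
```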
+---PAGE_BREAK---
+
+# 8 Conclusion
+
+In this article, we have proved that not all the solids moving in a perfect fluid are detectable by measuring the potential of the fluid. This observation has led us to define the notion of detectable solids, which is a purely geometric property. When the geometry was described by means of a conformal mapping, we were able to exhibit examples of detectable (or partially detectable) solids. However, the complete characterization of such solids in terms of the complex sequence $(c_k)_{k \in \mathbb{Z}}$ remains to be done.
+
+## References
+
+[1] C. Alvarez, C. Conca, L. Friz, O. Kavian, and J. H. Ortega. Identification of immersed obstacles via boundary measurements. *Inverse Problems*, 21(5):1531–1552, 2005.
+
+[2] G. S. Call and D. J. Velleman. Pascal’s matrices. *Amer. Math. Monthly*, 100(4):372–376, 1993.
+
+[3] C. Conca, P. Cumsille, J. Ortega, and L. Rosier. On the detection of a moving obstacle in an ideal fluid by a boundary measurement. *Inverse Problems*, 24(4):045001, 18, 2008.
+
+[4] A. Doubova, E. Fernández-Cara, and J. H. Ortega. On the identification of a single body immersed in a Navier-Stokes fluid. *European J. Appl. Math.*, 18(1):57–80, 2007.
+
+[5] H. Heck, G. Uhlmann, and J.-N. Wang. Reconstruction of obstacles immersed in an incompressible fluid. *Inverse Probl. Imaging*, 1(1):63–76, 2007.
+
+[6] L. M. Milne-Thomson. *Theoretical hydrodynamics*. 4th ed. The Macmillan Co., New York, 1960.
+
+[7] W. Rudin. *Real and complex analysis*. McGraw-Hill Book Co., New York, third edition, 1987.
\ No newline at end of file
diff --git a/samples/texts_merged/4172868.md b/samples/texts_merged/4172868.md
new file mode 100644
index 0000000000000000000000000000000000000000..1102ad2149c88e9652b1fdb00f8b456c34ff8703
--- /dev/null
+++ b/samples/texts_merged/4172868.md
@@ -0,0 +1,704 @@
+
+---PAGE_BREAK---
+
+# Probable Innocence Revisited
+
+Konstantinos Chatzikokolakis, Catuscia Palamidessi
+
+► To cite this version:
+
+Konstantinos Chatzikokolakis, Catuscia Palamidessi. Probable Innocence Revisited. Third International Workshop on Formal Aspects in Security and Trust (FAST 2005), Jul 2005, Newcastle Upon Tyne, United Kingdom. pp.142-157. inria-00201109
+
+HAL Id: inria-00201109
+
+https://hal.inria.fr/inria-00201109
+
+Submitted on 23 Dec 2007
+
+**HAL** is a multi-disciplinary open access archive for the deposit and dissemination of scientific research documents, whether they are published or not. The documents may come from teaching and research institutions in France or abroad, or from public or private research centers.
+
+---PAGE_BREAK---
+
+# Probable Innocence Revisited*
+
+Konstantinos Chatzikokolakis, Catuscia Palamidessi
+
+INRIA Futurs and LIX, École Polytechnique
+
+## Abstract
+
+In this paper we propose a formalization of probable innocence, a notion of probabilistic anonymity that is associated with “realistic” protocols such as Crowds. We analyze critically two different definitions of probable innocence from the literature. The first one, corresponding to the property that Reiter and Rubin have proved for Crowds, aims at limiting the probability of detection. The second one, by Halpern and O’Neill, aims at constraining the attacker’s confidence. Our proposal combines the spirit of both these definitions while generalizing them. In particular, our definition does not need symmetry assumptions, and it does not depend on the probabilities of the users to perform the action of interest. We show that, in the case of a symmetric system, our definition corresponds exactly to the one of Reiter and Rubin. Furthermore, in the case of users with uniform probabilities, it amounts to a property similar to that of Halpern and O’Neill.
+
+Another contribution of our paper is the study of probable innocence in the case of protocol composition, namely when multiple runs of the same protocol can be linked, as in the case of Crowds.
+
+## 1 Introduction
+
+Often we wish to ensure that the identity of the user performing a certain action is maintained secret. This property is called *anonymity*. Examples of situations in which we may wish to provide anonymity include: publishing on the web, retrieving information from the web, sending a message, etc. Many protocols have been designed for this purpose, for example, Crowds [15], Onion Routing [23], the Free Haven [7], Web MIX [1] and Freenet [4].
+
+* This work has been partially supported by the Project Rossignol of the ACI Sécurité Informatique (Ministère de la recherche et nouvelles technologies) and by the INRIA/ARC project ProNoBiS.
+
+Email addresses: kostas@lix.polytechnique.fr (Konstantinos Chatzikokolakis), catuscia@lix.polytechnique.fr (Catuscia Palamidessi).
+
+Most of the protocols providing anonymity use random mechanisms. Consequently, it is natural to think of anonymity in probabilistic terms. Various notions of probabilistic anonymity have been proposed in the literature, at different levels of strength. The notion of anonymity in [3], called conditional anonymity in [9,10], and investigated also in [2], describes the ideal situation in which the protocol does not leak any information concerning the identity of the user. This property is satisfied for instance by the Dining Cryptographers with fair coins [3]. Protocols used in practice, however, especially in the presence of attackers or corrupted users, are only able to provide a weaker notion of anonymity.
+
+In [15] Reiter and Rubin have proposed a hierarchy of notions of probabilistic anonymity in the context of Crowds. We recall that Crowds is a system for anonymous web surfing aimed at protecting the identity of the users when sending (originating) messages. This is achieved by forwarding the message to another user selected randomly, who in turn forwards the message, and so on, until the message reaches its destination. Part of the users may be corrupted (attackers), and one of the main purposes of the protocol is to protect the identity of the originator of the message from those attackers.
+
+Quoting from [15], the hierarchy is described as follows. Here the *sender* stands for the user that forwards the message to the attacker.
+
+**Beyond suspicion** From the attacker's point of view, the sender appears no more likely to be the originator of the message than any other potential sender in the system.
+
+**Probable innocence** From the attacker's point of view, the sender appears no more likely to be the originator of the message than to not be the originator.
+
+**Possible innocence** From the attacker's point of view, there is a nontrivial probability that the real sender is someone else.
+
+In [15] the authors also considered a formal definition of probable innocence tailored to the characteristics of the Crowds system, and proved it to hold for Crowds under certain conditions. Later, Halpern and O'Neill proposed in [10] a formal interpretation of the notions of the hierarchy above in more general terms. Their definitions are based on the confidence of the attacker. More precisely, their definition of probable innocence holds if, for the attacker, given the events that he has observed, the probability that a user $i$ has performed the action of interest is no more than $1/2$.
+
+However, the property of probable innocence that Reiter and Rubin express formally and prove for the Crowds system in [15] does not mention the user's probability of being the originator, but only the probability of the event observed by the attacker. More precisely, the property proved for Crowds is that the probability that the originator forwards the message to an attacker (given that an attacker eventually receives the message) is at most 1/2. In other words, their definition expresses a limit on the probability of detection.
+
+The property proved for Crowds in [15] depends only on the way the protocol works, and on the number of attackers. It is totally independent of the probability of each user to be the originator. This is of course a very desirable property, since we do not want the correctness of a protocol to depend on the users' intentions of originating a message. For stronger notions of anonymity, this abstraction from the users' probabilities¹ leads to the notion of probabilistic anonymity defined in [2], which is equivalent to the conditional anonymity defined in [9,10]. Note that this definition is different from the notion of strong probabilistic anonymity given in [9,10]: the latter depends, again, on the probabilities of the users to perform the action of interest.
+
+Another intended feature of our notion of probable innocence is the abstraction from the specific characteristics of Crowds. In Crowds, there are certain symmetries that derive from the assumption that the probability that user $i$ forwards the message to user $j$ is the same for all $i$ and $j$. The property of probable innocence proved for Crowds in [15] depends strongly on this assumption. We want a general notion that can hold even in protocols which do not satisfy the Crowds symmetries.
+
+For completeness, we also consider the composition of protocol executions, with specific focus on the case in which the originator is the same and the protocol to be executed is the same. This situation can arise, for instance, when an attacker can induce the originator to repeat the protocol (multiple paths attack). We extend the definition of probable innocence to the case of protocol composition under the same originator, and we study how this property depends on the number of compositions.
+
+All the notions developed in this paper are defined by using a model, for protocols and systems, based on a simplified version of Probabilistic Automata ([18]). Probabilistic Automata, and similar models like Concurrent Markov Chains, are by now a mature field of research with a solid theory and well-established model checking tools like PRISM [13]. This opens the way to the automatic verification of our notion of probable innocence. We refer to [5] for various examples of verification, using PRISM, of the related notion of weak anonymity developed within the same framework of simplified Probabilistic Automata. Furthermore, we are currently developing a model checker for the probabilistic $\pi$-calculus [11,14]. This is a formalism whose semantics is again based on simplified Probabilistic Automata, and it is a natural language for expressing protocols running on distributed systems like Crowds. We aim in particular at developing efficient model checking techniques for computing the conditional probability of events, which constitutes the only kind of quantitative information needed for proving the formula expressing our notion of probable innocence.
+
+¹ For simplicity, we will sometimes refer to the users' probability of performing the action of interest as the “users' probabilities”.
+
+The main goal of this paper is to establish a general notion of probable innocence which combines the spirit of the approaches discussed above, namely it expresses a limit both on the attacker's confidence and on the probability of detection. Furthermore, we aim at a notion that does not depend on symmetry assumptions or on the probabilities of the users to perform the action of interest.
+
+We show that our definition, while being more general, corresponds exactly to the property that Reiter and Rubin have proved for Crowds, under the specific symmetry conditions which are satisfied by Crowds. We also show that in the particular case that the users have uniform probability of being the originator, we obtain a property similar to the definition of probable innocence given by Halpern and O'Neill.
+
+A second contribution is the analysis of the robustness of probable innocence under multiple paths attacks, which induce a repetition of the protocol. We show a general negative result, namely that no protocol can ensure probable innocence under an arbitrary number of repetitions, unless the system is strongly anonymous. This generalizes the result, already known in the literature, that Crowds cannot guarantee probable innocence under unbounded multiple paths attacks.
+
+## 1.2 Plan of the paper
+
+In the next section we recall some notions which are used in the rest of the paper: Probabilistic Automata, the framework for anonymity developed in [2], and the definition of (strong) probabilistic anonymity given in [2]. In Section 3 we illustrate the Crowds protocol, recall the property proved for Crowds and the definition of probable innocence by Halpern and O'Neill, and discuss them. In Section 4 we propose our notion of probable innocence and compare it with those of Section 3. In Section 5 we consider the repetition of an anonymity protocol and show that we cannot guarantee probable innocence for arbitrary repetition unless the protocol is strongly anonymous. In Section 6 we discuss some related work from the literature. Section 7 concludes.
+
+Fig. 1. Examples of probabilistic automata
+
+## 2 Preliminaries
+
+## 2.1 Probabilistic Automata
+
+In our approach we consider systems that can perform both probabilistic and nondeterministic choices. Intuitively, a probabilistic choice represents a set of alternative transitions, each of them associated with a certain probability of being selected. The probabilities over the alternatives of the choice must sum to 1, i.e. they form a *probability distribution*. A nondeterministic choice is also a set of alternatives, but we have no information on how likely each alternative is to be selected.
+
+There have been many models proposed in the literature that combine both nondeterministic and probabilistic choice. One of the most general is the formalism of *probabilistic automata* proposed in [18]. In this work we use this formalism to model anonymity protocols; we give a brief description of it here.
+
+A probabilistic automaton consists of a set of states and labeled transitions between them. For each node, the outgoing transitions are partitioned into groups called *steps*. Each step represents a probabilistic choice, while the choice between the steps is nondeterministic.
+
+Figure 1 illustrates some examples of probabilistic automata. We represent a step by putting an arc across the member transitions. For instance, in (a), state $s_1$ has two steps; the first is a probabilistic choice between two transitions with labels *a* and *b*, each with probability 1/2. When there is only one transition in a step, like the one from state $s_3$ to state $s_6$, the probability is of course 1 and we omit it.
+
+In this paper, we use only a simplified kind of automaton, in which from each node we have either a probabilistic choice or a nondeterministic choice (more precisely, either one step or a set of singleton steps), like in (b). In the particular case that the choices are all probabilistic, like in (c), the automaton is called *fully probabilistic*.
+
+Given an automaton $M$, we denote by $\mathit{etree}(M)$ its unfolding, i.e. the tree of all possible executions of $M$ (in Figure 1 the automata coincide with their unfoldings because there are no loops). If $M$ is fully probabilistic, then each execution (maximal branch) of $\mathit{etree}(M)$ has a probability obtained as the product of the probabilities of the edges along the branch. In the finite case, we can define a probability measure for each set of executions, called an *event*, by summing up the probabilities of its elements$^2$. Given an event $x$, we will denote by $p(x)$ the probability of $x$. For instance, let the event $c$ be the set of all executions in which $c$ occurs. In (c) its probability is $p(c) = 1/3 \times 1/2 + 1/6 = 1/3$.
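+This computation is easy to mechanize. The following Python sketch represents a finite execution tree as nested pairs; the tree below is a hypothetical reconstruction of automaton (c), chosen only so that its edge probabilities reproduce the arithmetic $p(c) = 1/3 \times 1/2 + 1/6 = 1/3$ above, and the function name `event_prob` is ours.

```python
from fractions import Fraction as F

# A node is (label, [(prob, child), ...]); leaves have an empty child list.
def event_prob(node, label):
    """Probability of the set of maximal branches that contain `label`."""
    lbl, children = node
    if lbl == label:
        return F(1)           # every completion of this branch is in the event
    return sum(p * event_prob(child, label) for p, child in children)

# Hypothetical reconstruction of automaton (c) from Figure 1.
automaton_c = ("root", [
    (F(1, 3), ("s", [(F(1, 2), ("c", [])), (F(1, 2), ("d", []))])),
    (F(1, 6), ("c", [])),
    (F(1, 2), ("e", [])),
])
print(event_prob(automaton_c, "c"))   # -> 1/3
```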
+
+When nondeterminism is present, the probability can vary, depending on how we resolve the nondeterminism. In other words we need to consider a function $\zeta$ that, each time there is a choice between different steps, selects one of them. By pruning the non-selected steps, we obtain a fully probabilistic execution tree $\mathit{etree}(M, \zeta)$ on which we can define the probability as before. For historical reasons (i.e. since nondeterminism typically arises from the parallel operator), the function $\zeta$ is called *scheduler*.
+
+It should then be clear that the probability of an event is relative to the particular scheduler. We will denote by $p_\zeta(x)$ the probability of the event $x$ under the scheduler $\zeta$. For example, consider (a). We have two possible schedulers determined by the choice of the step in $s_1$. Under one scheduler, the probability of $c$ is $1/2$. Under the other, it is $2/3 \times 1/2 + 1/3 = 2/3$. In (b) we have three possible schedulers under which the probability of $c$ is 0, $1/2$ and $1$, respectively.
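+Schedulers can be made concrete in the same style. In the sketch below, a node is a leaf, a probabilistic step, or a nondeterministic choice; the shape of `automaton_b` is a hypothetical reconstruction of automaton (b), chosen only so that its three schedulers give the probabilities 0, 1/2 and 1 quoted above.

```python
from fractions import Fraction as F
from itertools import product

def scheduler_probs(node, label):
    """Set of probabilities of observing `label`, one value per scheduler."""
    kind, data = node
    if kind == "leaf":
        return {F(1) if data == label else F(0)}
    if kind == "nondet":
        # the scheduler selects exactly one of the alternative steps
        return set().union(*(scheduler_probs(ch, label) for ch in data))
    # probabilistic step: any combination of sub-scheduler choices may occur
    weights = [p for p, _ in data]
    child_sets = [scheduler_probs(ch, label) for _, ch in data]
    return {sum(w * v for w, v in zip(weights, combo))
            for combo in product(*child_sets)}

# Hypothetical reconstruction of automaton (b): three singleton steps.
automaton_b = ("nondet", [
    ("leaf", "d"),
    ("prob", [(F(1, 2), ("leaf", "c")), (F(1, 2), ("leaf", "d"))]),
    ("leaf", "c"),
])
print(sorted(scheduler_probs(automaton_b, "c")))   # the values 0, 1/2 and 1
```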
+
+## 2.2 Anonymity systems
+
+The concept of anonymity is relative to the set of anonymous users and to what is visible to the observer. Hence, following [17,16] we classify the actions of the automaton into the three sets $A$, $B$ and $C$ as follows:
+
+* $A$ is the set of the anonymous actions $A = \{a(i) | i \in I\}$ where $I$ is the set of the identities of the anonymous users and $a$ is an injective function from $I$ to the set of actions, which we call *abstract action*. We also call the pair $(I, a)$ *anonymous action generator*.
+
+* $B$ is the set of the observable actions. We will use $b, b', \dots$ to denote the elements of this set.
+
+* $C$ is the set of the remaining actions (which are unobservable).
+
+$^2$ In the infinite case things are more complicated: we cannot define a probability measure for all sets of executions, and we need to consider as event space the $\sigma$-field generated by the cones of $\mathit{etree}(M)$. However, in this paper, we consider only the finite case.
+
+Note that the actions in $A$ normally are not visible to the observer, or at least, not for the part that depends on the identity $i$. However, for the purpose of defining and verifying anonymity we model the elements of $A$ as visible outcomes of the system.
+
+**Definition 1** An anonymity system is a tuple $(M, I, a, B, Z, p)$, where $M$ is a probabilistic automaton, $(I, a)$ is an anonymous action generator, $B$ is a set of observable actions, $Z$ is the set of all possible schedulers for $M$, and for every $\varsigma \in Z$, $p_\varsigma$ is the probability measure on the event space generated by $\textit{etree}(M, \varsigma)$.
+
+For simplicity, we assume the users to be the only possible source of nondeterminism in the system. If they are probabilistic, then the system is fully probabilistic, hence $Z$ is a singleton and we omit it.
+
+We introduce the following notation to represent the events of interest:
+
+• $a(i)$ : all the executions in $\textit{etree}(M, \varsigma)$ containing the action $a(i)$;
+
+• $a$ : all the executions in $\textit{etree}(M, \varsigma)$ containing an action $a(i)$ for an arbitrary $i$;
+
+• $o$ : all the executions in $\textit{etree}(M, \varsigma)$ containing the sequence of observable actions $o$ (where $o$ is of the form $b_1b_2\dots b_n$ for some $b_1, b_2, \dots, b_n \in B$). We denote by $O$ (observables) the set of all $o$'s of interest.
+
+We use the symbols $\cup$, $\cap$ and $\neg$ to represent the union, the intersection, and the complement of events, respectively.
+
+We wish to keep the notion of observables as general as possible, but we still need to make some assumptions on them. First, we want the observables to be execution-disjoint events, in the sense that no execution can contain both $o_1$ and $o_2$ if $o_1 \neq o_2$. Second, they must cover all possible outcomes. Third, an observable $o$ must indicate unambiguously whether $a$ has taken place or not, i.e. it either implies $a$, or it implies $\neg a$. In set-theoretic terms this means that either $o$ is a subset of $a$ or of the complement of $a$. Formally $^3$:
+
+Assumption 1 (on the observables)
+
+$$ (1) \quad \forall \varsigma \in Z. \ \forall o_1, o_2 \in O. \ o_1 \neq o_2 \Rightarrow p_{\varsigma}(o_1 \cup o_2) = p_{\varsigma}(o_1) + p_{\varsigma}(o_2) $$
+
+$$ (2) \quad \forall \varsigma \in Z. \ p_{\varsigma}(O) = 1 $$
+
+$$ (3) \quad \forall \varsigma \in Z. \ \forall o \in O. \ (p_{\varsigma}(o \cap a) = p_{\varsigma}(o) \lor p_{\varsigma}(o \cap \neg a) = p_{\varsigma}(o)) $$
+
+Analogously, we need to make some assumptions on the anonymous actions. We consider first the conditions tailored for nondeterministic users: each scheduler determines completely whether an action of the form $a(i)$ takes place or not, and in the positive case, there is only one such $i$. Formally:
+
+³ Note that the intuitive explanations here are stronger than the corresponding formal assumptions because, in the infinite case, there could be non-trivial sets of measure 0. However, in the case of anonymity we usually deal with finite scenarios. In any case, these formal assumptions are enough for ensuring the properties of the anonymity notions that we need in this paper.
+
+**Assumption 2 (on the anonymous actions, for nondeterministic users)**
+
+$$
+\forall \varsigma \in Z.\ p_{\varsigma}(a) = 0 \lor (\exists i \in I.\ (p_{\varsigma}(a(i)) = 1 \land \forall j \in I.\ j \neq i \Rightarrow p_{\varsigma}(a(j)) = 0))
+$$
+
+We now consider the case in which the users are fully probabilistic. The assumption on the anonymous actions in this case is much weaker: we only require that there be at most one user that performs $a$, i.e. $a(i)$ and $a(j)$ must be disjoint for $i \neq j$. Formally:
+
+**Assumption 3 (on the anonymous actions, for probabilistic users)**
+
+$$
+\forall i, j \in I. i \neq j \Rightarrow p(a(i) \cup a(j)) = p(a(i)) + p(a(j))
+$$
+
+## 2.3 Strong probabilistic anonymity
+
+In this section we recall the notion of strong anonymity proposed in [2].
+
+Let us first assume that the users are nondeterministic. Intuitively, a system is strongly anonymous if, given two schedulers $\varsigma$ and $\vartheta$ that both choose $a$ (say $a(i)$ and $a(j)$, respectively), it is not possible to detect from the probability measure of the observables whether the scheduler has been $\varsigma$ or $\vartheta$ (i.e. whether the selected user was $i$ or $j$).
+
+Note that $\varsigma$ chooses $a$ if and only if $p_\varsigma(a) = 1$ or, equivalently, if and only if $p_\varsigma(a(i)) = 1$ for some $i$.
+
+**Definition 2** A system $(M, I, a, B, Z, p)$ with nondeterministic users is anonymous if
+
+$$
+\forall \varsigma, \vartheta \in Z. \forall o \in O. p_{\varsigma}(a) = p_{\vartheta}(a) = 1 \Rightarrow p_{\varsigma}(o) = p_{\vartheta}(o)
+$$
+
+The probabilistic counterpart of Definition 2 can be formalized using the concept of conditional probability. Recall that, given two events $x$ and $y$ with $p(y) > 0$, the conditional probability of $x$ given $y$, denoted by $p(x|y)$, is equal to $p(x \cap y)/p(y)$.
+
+**Definition 3** A system $(M, I, a, B, p)$ with probabilistic users is anonymous if
+
+$$
+\forall i, j \in I. \forall o \in O. (p(a(i)) > 0 \land p(a(j)) > 0) \Rightarrow p(o | a(i)) = p(o | a(j))
+$$
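+Definition 3 can be checked mechanically once a fully probabilistic system is summarised by its priors $p(a(i))$ and its conditional table $p(o \mid a(i))$. The Python sketch below does exactly that; the function name and the toy numbers are ours, not from the paper.

```python
from fractions import Fraction as F

def is_anonymous(prior, cond):
    """Definition 3: rows cond[i][o] = p(o | a(i)) must coincide
    for all users i with positive prior p(a(i))."""
    active = [i for i in prior if prior[i] > 0]
    return all(cond[i] == cond[j] for i in active for j in active)

# Toy system with two users and two observables.
prior = {1: F(1, 4), 2: F(3, 4)}
same_rows = {1: {"o1": F(1, 2), "o2": F(1, 2)},
             2: {"o1": F(1, 2), "o2": F(1, 2)}}   # observations uninformative
leaky = {1: {"o1": F(1), "o2": F(0)},
         2: {"o1": F(0), "o2": F(1)}}             # observations reveal the user
print(is_anonymous(prior, same_rows), is_anonymous(prior, leaky))  # -> True False
```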
+
+The notions of anonymity illustrated so far focus on the probability of the observables. More precisely, they require the probability of the observables to be independent of the selected user. In [2] it was shown that Definition 3 is equivalent to the notion adopted implicitly in [3], and called conditional anonymity in [9]. As illustrated in the introduction, the idea of this notion is that a system is anonymous if the observations do not change the probability of the $a(i)$'s. In other words, we may know the probability of $a(i)$ by some means external to the system, but the system should not increase our knowledge about it.
+
+**Proposition 4 ([2])** A system $(M, I, a, B, p)$ with probabilistic users is anonymous iff
+
+$$ \forall i \in I. \forall o \in O. p(o \cap a) > 0 \Rightarrow p(a(i) | o) = p(a(i) | a) $$
+
+**Note 1** To be precise, the probabilistic counterpart of Definition 2 should be stronger than that given in Definition 3; in fact, it should be independent of the probabilities of the users to perform the action of interest, like Definition 2 is. We could achieve this by assuming the system to be parametric with respect to the probability distribution of the users, and then requiring the formula to hold for every possible distribution. Proposition 4 should be modified accordingly.
+
+**Note 2** The large number of anonymity definitions often leads to confusion. In the rest of the paper we will refer to Definition 3 as (strong) probabilistic anonymity. By conditional anonymity we will refer to the condition in Proposition 4 which corresponds to the definition of Halpern and O'Neill ([9]). Finally by strong anonymity we will refer to the corresponding definition in [9] which can be expressed as:
+
+$$ \forall i, j \in I. \forall o \in O : p(a(i) | o) = p(a(j) | o) \quad (1) $$
+
+## 3 Probable Innocence
+
+Strong and conditional anonymity are notions which are usually difficult to achieve in practice. For instance, in the case of protocols like Crowds, the originator needs to take some initiative, thus revealing himself to the attacker with greater probability than the rest of the users. As a result, more relaxed levels of anonymity, such as probable innocence, are provided by real protocols.
+
+Probable innocence is verbally defined by Reiter and Rubin ([15]) as 'the sender (the user who forwards the message to the attacker) appears no more likely to be the originator than not to be the originator'. Two different approaches to formalizing this notion exist. The first focuses on the probability of the observables and constrains the probability of detecting a user. The second focuses on the probability of the users and constrains the attacker's confidence that the detected user is the originator.
+
+In this section we first present the Crowds protocol. Then we discuss the two existing definitions in the literature, corresponding to the approaches above, and we argue that each of them has some shortcoming: the first does not seem satisfactory when the system is not symmetric; the second depends on the users (their probability to perform the action) while, intuitively, anonymity should be a property of the protocol only. In Section 4 we will present a new definition which combines the spirit of the existing ones, and at the same time overcomes the above shortcomings.
+
+## 3.1 *The Crowds protocol*
+
+This protocol, presented in [15], allows Internet users to perform web transactions without revealing their identity. The idea is to randomly route the request through a crowd of users. Thus, when the web server receives the request, it does not know who the originator is, since the user who sent the request to the server is simply forwarding it. The more interesting case, however, is when an attacker is a member of the crowd and participates in the protocol. In this case the originator is exposed with higher probability than any other user and strong anonymity cannot be achieved. However, it can be proved that Crowds provides probable innocence under certain conditions.
+
+More specifically, a crowd is a group of $m$ users who participate in the protocol. Some of the users may be corrupted, which means they can collaborate in order to reveal the identity of the originator. Let $c$ be the number of such users and $p_f$ a parameter of the protocol, explained below. When a user, called the *initiator* or *originator*, wants to request a web page he must create a *path* between him and the server. This is achieved by the following process:
+
+* The initiator selects randomly a member of the crowd (possibly himself) and forwards the request to him. We will refer to this latter user as the *forwarder*.
+
+* A forwarder, upon receiving a request, flips a biased coin. With probability $1-p_f$ he delivers the request directly to the server. With probability $p_f$ he selects randomly, with uniform probability, a new forwarder (possibly himself) and forwards the request to him. The new forwarder repeats the same procedure.
+
+The response from the server follows the same route in the opposite direction to return to the initiator. It must be mentioned that all communication in the path is encrypted using a *path key*, mainly to defend against local eavesdroppers (see [15] for more details). In this paper we are interested in attacks performed by corrupted members of the crowd to reveal the initiator's identity. Each member is considered to have access only to the traffic routed through him, so he cannot intercept messages addressed to other members.
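+The path-formation process just described is easy to simulate. The following Monte Carlo sketch is our own illustration, not code from [15]; it estimates the probability (studied in Section 3.2) that the member observed by the first corrupted node on the path is the initiator, given that some corrupted node appears on the path.

```python
import random

# Members 0..m-c-1 are honest, members m-c..m-1 are corrupted; member 0 is the
# initiator.  As assumed in Section 3.2, a corrupted member does not forward
# the request any further.
def run_path(m, c, p_f, rng):
    """One run: True if the initiator hands the request to the first corrupted
    member, False if another honest member does, None if no corrupted member
    ever sees the request."""
    current = 0                         # the initiator always forwards once
    while True:
        nxt = rng.randrange(m)          # uniform choice, possibly oneself
        if nxt >= m - c:                # handed to a corrupted member: observed
            return current == 0
        current = nxt
        if rng.random() >= p_f:         # with probability 1 - p_f: to the server
            return None

rng = random.Random(1)
m, c, p_f = 10, 2, 0.75                 # illustrative parameter values (ours)
runs = (run_path(m, c, p_f, rng) for _ in range(200_000))
observed = [r for r in runs if r is not None]
est = sum(observed) / len(observed)     # estimate of p(I | H1+), here ~0.475
```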
+
+## 3.2 Definition of probable innocence
+
+### 3.2.1 First approach (limit on the probability of detection):
+
+Reiter and Rubin ([15]) give a definition which considers the probability of the originator being observed by a corrupted member, that is, being directly before him in the path. Let $I$ denote the event "the originator is observed by a corrupted member" and $H_{1+}$ the event "at least one corrupted member appears in the path". Then probable innocence can be defined as
+
+$$
+p(I | H_{1+}) \leq 1/2 \tag{2}
+$$
+
+In [15] it is proved that this property is satisfied by Crowds if $m \ge \frac{p_f}{p_f-1/2}(c+1)$.
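+This bound can be checked numerically. The closed form used below, $p(I|H_{1+}) = 1 - p_f(m-c-1)/m$ with $m$ the total number of crowd members, is the standard analysis of Crowds; requiring it to be at most $1/2$ rearranges algebraically to the bound just stated. The parameter values are our own illustration.

```python
from fractions import Fraction as F

def p_detect(m, c, p_f):
    """Exact p(I | H1+) for Crowds with m members, c corrupted, parameter p_f."""
    return 1 - p_f * F(m - c - 1, m)

p_f, c = F(3, 4), 2
threshold = p_f / (p_f - F(1, 2)) * (c + 1)   # = 9 for these values
for m in (8, 9, 10):
    print(m, p_detect(m, c, p_f))
# m = 8 gives 17/32 > 1/2 (fails); m = 9 gives exactly 1/2; m = 10 gives 19/40.
```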
+
+For simplicity, we suppose that a corrupted user will not forward a request to other crowd members, so at most one user can be observed. This approach is also followed in [15,21,24] and the reason is that by forwarding the request the corrupted users cannot gain any new information since forwarders are chosen randomly.
+
+We now express the above definition in the framework of this paper (Section 2.2). Since $I \Rightarrow H_{1+}$, we have $p(I|H_{1+}) = p(I)/p(H_{1+})$. If $A_i$ denotes the event "user $i$ is the originator" and $D_i$ the event "user $i$ was observed by a corrupted member (appears in the path right before the corrupted member)", then $p(I) = \sum_i p(D_i \cap A_i) = \sum_i p(D_i | A_i)p(A_i)$. Since $p(D_i | A_i)$ is the same for all $i$, the definition (2) can be written as $\forall i : p(D_i | A_i)/p(H_{1+}) \le 1/2$.
+
+Let $A$ be the set of all crowd members and $O = \{o_i \mid i \in A\}$ the set of observables. Essentially $a(i)$ denotes $A_i$ and $o_i$ denotes $D_i$. Note that $D_i$ is an observable since it can be observed by a corrupted user (remember that corrupted users share their information). Also let $h = \bigcup_{i \in A} o_i$, meaning that some user was observed. The definition (2) can now be written:
+
+$$
+\forall i \in A : p(o_i | a(i)) \leq \frac{1}{2} p(h) \quad (3)
+$$
+
+This is indeed an intuitive definition for Crowds. However, there are many questions raised by this approach. For example, we are only interested in the probability of one specific event; what about other events that might reveal the identity of the initiator? For instance, the event $\neg o_i$ will have probability greater than $p(h)/2$; is this important? Moreover, consider the case where the probability of $o_i$ under a different initiator $j$ is negligible. Then, if we observe $o_i$, isn't it more probable that user $i$ sent the message, even if $p(o_i | a(i))$ is less than $p(h)/2$?
+
+If we consider arbitrary protocols, then there are cases where the condition (3) does not express the expected properties of probable innocence. We give two examples of such systems in Figure 2 and we explain them below.
+
+Fig. 2. Examples of arbitrary (non-symmetric) protocols. The value at position $i, j$ represents $p(o_j | a(i))$ for user $i$ and observable $o_j$.
+
+**Example 5** On the left-hand side of Figure 2, $m$ users are participating in a Crowds-like protocol. The only difference, with respect to the standard Crowds, is that user 1 is behind a firewall, which means that he can send messages to any other user but he cannot receive messages from any of them. In the corresponding table we give the conditional probabilities $p(o_j|a(i))$, where we recall that $o_j$ means that $j$ is the user who sends the message to the corrupted member, and $a(i)$ means that $i$ is the initiator. When user 1 is the initiator, the probability of observing him is $c/(m-p_f)$ (there is a $c/m$ chance that user 1 sends the message to a corrupted user, and there is also a chance that he forwards it to himself and sends it to a corrupted user in the next round). All other users can be observed with the same probability $l$. When any other user is the initiator, however, the probability of observing user 1 is 0, since he will never receive the message. In fact, the protocol will behave exactly like a Crowd of $m-1$ users, as shown in the table.
+
+Note that Reiter and Rubin's definition (3) requires the diagonal of this table to be at most $p(h)/2$. In this example the definition holds provided that $m - 1 \ge \frac{p_f}{p_f - 1/2}(c + 1)$. In fact, for all users $i \ne 1$, $p(o_i|a(i))$ is the same as in the original Crowds (which satisfies the definition), and for user 1 it is even smaller. However, if a corrupted member observes user 1, he can be sure that he is the initiator, since no other initiator leads to the observation of user 1. The problem here is that Reiter and Rubin's definition constrains only the probability of detection of user 1 and says nothing about the attacker's confidence in case of detection. We believe that totally revealing the identity of the initiator with non-negligible probability is undesirable and should be considered a violation of an anonymity notion such as probable innocence.
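+The two halves of this example can be checked numerically. The sketch below (our own sanity check, with illustrative values $m = 10$, $c = 2$, $p_f = 3/4$) solves the geometric recurrence $x = c/m + (p_f/m)\,x$ for the probability that user 1 is observed when he initiates, and then computes the attacker's posterior confidence after observing user 1, assuming uniform priors.

```python
from fractions import Fraction as F

# If user 1 initiates, he can only re-receive the message from himself, so the
# probability x that he is the one observed satisfies x = c/m + (p_f/m) * x,
# i.e. x = c / (m - p_f).
def p_observe_user1(m, c, p_f):
    return F(c) / (m - p_f)

m, c, p_f = 10, 2, F(3, 4)          # illustrative values (ours)
x = p_observe_user1(m, c, p_f)
print(x)                            # -> 8/37

# Attacker's confidence after observing user 1, with uniform priors:
# p(o_1 | a(j)) = 0 for every j != 1, so Bayes gives posterior 1.
likelihoods = {1: x, **{j: F(0) for j in range(2, m + 1)}}
posterior_1 = likelihoods[1] / sum(likelihoods.values())
print(posterior_1)                  # -> 1
```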
+
+**Example 6** On the right-hand side we have an opposite counter-example. Three users want to communicate with a web server, but they can only access it through a proxy. We suppose that all users are honest, but they do not trust the proxy, so they do not want to reveal their identity to him. They therefore use the following protocol: the initiator first forwards the message to one of the users 1, 2 and 3 with probabilities 2/3, 1/6 and 1/6 respectively, regardless of who the initiator is. The user who receives the message forwards it to the proxy. The probabilities of observing each user are shown in the corresponding table. Regardless of who the initiator is, user 1 will be observed with probability 2/3 and the others with probability 1/6 each.
+
+In this example Reiter and Rubin's definition does not hold since $p(o_1 | a(1)) > 1/2$. However all users produce the same observables with the same probabilities hence we cannot distinguish between them. Indeed the system is strongly anonymous (Definition 3 holds)! Thus, in the general case, we cannot adopt (3) as the definition of probable innocence since we want such a notion to be implied by strong anonymity.
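+Both claims can be verified directly on the table of Example 6. The check below is our own sketch; note that $p(h) = 1$ here, since some user is always observed.

```python
from fractions import Fraction as F

# Rows are p(o_j | a(i)): every initiator induces the same distribution.
cond = {i: {"o1": F(2, 3), "o2": F(1, 6), "o3": F(1, 6)} for i in (1, 2, 3)}

# Definition 3 (strong probabilistic anonymity): identical rows.
strongly_anonymous = all(cond[i] == cond[j] for i in cond for j in cond)

# Condition (3) with p(h) = 1: p(o_i | a(i)) <= 1/2 for every i.
reiter_rubin = all(cond[i][f"o{i}"] <= F(1, 2) for i in cond)

print(strongly_anonymous, reiter_rubin)   # -> True False
```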
+
+However, it should be noted that in the case of Crowds the definition of Reiter and Rubin is correct, because of a special symmetry property of the protocol. This is discussed in detail in Section 4.1.
+
+Finally, note that the above definition does not mention the probability of the users to be the originator. It only considers such events as conditions in the conditional probability of the event $o_i$ given that $i$ is the originator. The value of such a conditional probability does not imply anything about the user: he might have a very small or a very large probability of initiating the message. This is a major difference with respect to the next approach.
+
+### 3.2.2 Second approach (limit on the attacker's confidence):
+
+Halpern and O'Neill propose in [9] a general framework for defining anonymity properties. We give a very abstract idea of this framework; detailed information is available in [9]. In this framework a system consists of a group of agents, each having a local state at each point of the execution. The local state contains all information that the user may have and does not need to be explicitly defined. At each point $(r, m)$ user $i$ can only have access to his local state $r_i(m)$. So he does not know the actual point $(r, m)$, but at least he knows that it must be a point $(r', m')$ such that $r'_i(m') = r_i(m)$. Let $K_i(r, m)$ be the set of all these points. If a formula $\phi$ is true in all points of $K_i(r, m)$ then we say that $i$ knows $\phi$. In the probabilistic setting it is possible to define a measure on $K_i(r, m)$ and draw conclusions of the form "formula $\phi$ is true with probability $p$".
+
+To define probable innocence, Halpern and O'Neill first define a formula $\theta(i, a)$ meaning "user $i$ performed the event $a$". We then say that a system has probable innocence if, for all points $(r, m)$ and all users $j$, the probability of $\theta(i, a)$ at this point (that is, the probability that arises by measuring $K_j(r, m)$) is less than one half.
+
+This definition can be expressed in the framework of Section 2.2. The probability of a formula $\phi$ for user $j$ at the point $(r, m)$ depends only on the set $K_j(r, m)$ which itself depends only on $r_j(m)$. The latter is the local state of the user, that is the only things that he can observe. In our framework this corresponds to the observables of the probabilistic automaton. Thus, we can reformulate the definition of Halpern and O'Neill as:
+
+$$ \forall i \in I, \forall o \in O : p(a(i)|o) \leq 1/2 \quad (4) $$
+
+This definition is similar to the one of Reiter and Rubin, but not the same. The difference is that it considers the probability that, given a certain observation, the user has performed the action of interest, not the opposite. If this probability is less than one half then, intuitively, $i$ appears less likely to have performed the action than not to.
+
+The problem with this definition is that the probabilities of the users are not part of the system and we can make no assumptions about them. Consider for example the case where we know that user $i$ visits a specific web site very often, so even if we have 100 users, the probability that he performed a request to this site is 0.99. Then we cannot expect this probability to become less than one half under all observations. A similar remark about strong anonymity led Halpern and O'Neill to define conditional anonymity. If a user $i$ has a higher probability of performing the action than user $j$ then we cannot expect this to change because of the system. Instead we can require that the system does not provide any new information about the originator of the action.
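
To see the issue concretely, here is a small sketch (hypothetical numbers, not taken from the paper). It computes the posterior $p(a(i)|o)$ by Bayes' rule in the best possible case for anonymity, where every observable is equally likely under every user: the heavy user's posterior equals his prior of 0.99, so condition (4) cannot hold no matter how good the protocol is.

```python
# Hypothetical illustration: Bayes posterior for a "heavy" user.
n = 100
u = [0.99] + [0.01 / (n - 1)] * (n - 1)   # priors u(i): user 0 is heavy

# Best case for anonymity: every observable o is equally likely
# under every user, p(o | a(i)) = 1/n.
p_o_given_a = [1.0 / n] * n

# Bayes: p(a(i) | o) = p(o | a(i)) u(i) / sum_j p(o | a(j)) u(j)
total = sum(pi * ui for pi, ui in zip(p_o_given_a, u))
posterior = [pi * ui / total for pi, ui in zip(p_o_given_a, u)]

print(round(posterior[0], 2))  # 0.99: the observation reveals nothing,
                               # yet condition (4) fails for user 0
```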
+
+# 4 A new definition of probable innocence
+
+In this section we give a new definition of probable innocence that combines the spirit of the two existing ones. The spirit of Reiter and Rubin's definition is to constrain the probability of detection of a user, which is captured in our Definition 8. The spirit of Halpern and O'Neill's definition is to constrain the attacker's confidence, which is captured in our Definition 7. The new definition combines both spirits in the sense that Definitions 7 and 8 are equivalent. Moreover, it overcomes the shortcomings discussed in the previous section, namely, it does not depend on the symmetry of the system and it does not depend on the users' probabilities. We also show that our definition is a generalization of the existing ones, since it can be reduced to them under the assumption of symmetry for the first, and of uniform user probabilities for the second.
+
+One of the goals of the new definition is to abstract from the probabilities of the users to perform the action of interest. These probabilities, although they affect the probability measure $p$ of the anonymity system, are not part of the protocol and can vary in different executions. To model this fact, let $u$ be a probability measure on the set $I$ of anonymous users. Then, we suppose that the anonymity system is
+equipped with a probability measure $p_u$, which depends on $u$, satisfying the following conditions:
+
+$$p_u(a(i)) = u(i) \qquad (5)$$
+
+$$p_u(o | a(i)) = p_{u'}(o | a(i)) \qquad (6)$$
+
+for all users $i$, observables $o$ and user distributions $u, u'$ such that $u(i) > 0$, $u'(i) > 0$. Condition (5) requires that the selection of user is made using the distribution $u$. Condition (6) requires that, having selected a user, the distribution $u$ does not affect the probability of any observable $o$. In other words $u$ is used to select a user and only for that. This is typical in anonymity protocols where a user is selected in the beginning (this models the user's decision to send a message) and then some observables are produced that depend on the selected user. We will denote by $p(o|a(i))$ the probability $p_u(o|a(i))$ under some $u$ such that $u(i) > 0$.
+
+In general we would like our anonymity definitions to range over all possible values of $u$, since we cannot assume anything about the probabilities of the users to perform the action of interest. Thus, Halpern and O'Neill's definition (4) should be written $\forall u\, \forall i\, \forall o : p_u(a(i)|o) \le 1/2$, which makes even clearer the fact that it cannot hold for all $u$, for example if we take $u(i)$ very close to 1. On the other hand, Reiter and Rubin's definition contains only probabilities of the form $p(o|a(i))$. Crowds satisfies condition (6), so these probabilities are independent of $u$.
+
+In [9], where they define conditional anonymity, Halpern and O'Neill make the following remark about strong anonymity. Since the probabilities of the users to perform the action of interest are generally unknown, we cannot expect all users to appear with the same probability. All that we can ensure is that the system does not reveal any information, that is, that the probability of every user should be the same before and after making an observation. In other words, the ratio between the probabilities of any pair of users need not be one, but it should at least remain the same before and after the observation.
+
+We apply the same idea to probable innocence. We start by rewriting relation (4) as
+
+$$\forall i \in I, \forall o \in O : 1 \ge \frac{p_u(a(i)|o)}{p_u(\bigvee_{j \ne i} a(j)|o)} \qquad (7)$$
+
+As we already explained, if $u(i)$ is very high then we cannot expect this fraction to be less than 1. Instead, we could require that it does not exceed the corresponding fraction of the probabilities before the execution of the protocol. So we generalize condition (7) in the following definition.
+
+**Definition 7** A system $(M, I, a, B, p_u)$ has probable innocence if for all user distributions $u$, users $i \in I$ and observables $o \in O$, the following holds:
+
+$$
+(n-1) \frac{p_u(a(i))}{p_u(\bigvee_{j \neq i} a(j))} \geq \frac{p_u(a(i) | o)}{p_u(\bigvee_{j \neq i} a(j) | o)}
+$$
+
+where $n = |I|$ is the number of anonymous users.
+
+In probable innocence we compare the probability of a user performing the action of interest with the probability of all the other users together. Definition 7 requires that the ratio of these probabilities after the execution of the protocol be no bigger than $n-1$ times the same ratio before the execution. The $n-1$ factor comes from the fact that in probable innocence some information about the sender's identity is leaked. For example, if users are uniformly distributed, each of them has probability $1/n$ before the protocol, and the sender could appear with probability $1/2$ afterwards. In this case, the ratio between the sender and all the other users is $\frac{1}{n-1}$ before the protocol and becomes 1 after it. Definition 7 states that this ratio can be increased, thus leaking some information, but by no more than a factor of $n-1$.
+
+Definition 7 generalizes relation (4) and can be applied in cases where the distribution of users is not uniform. However, it still involves the probabilities of the users to perform the action of interest, which are not part of the system. What we would like is a definition similar to Def. 3, which involves only probabilities of events that are part of the system. To achieve this we rewrite Definition 7 using the following transformations. We assume that $u(i) > 0$ for all users; users with zero probability to perform the action can be removed from Definition 7 before proceeding.
+
+$$
+\begin{align*}
+(n-1) \frac{p_u(a(i))}{\sum_{j \neq i} p_u(a(j))} &\ge \frac{p_u(a(i) | o)}{\sum_{j \neq i} p_u(a(j) | o)} &&\Leftrightarrow \\
+(n-1) \frac{p_u(a(i))}{\sum_{j \neq i} p_u(a(j))} &\ge \frac{\frac{p_u(o|a(i))p_u(a(i))}{p_u(o)}}{\sum_{j \neq i} \frac{p_u(o|a(j))p_u(a(j))}{p_u(o)}} &&\Leftrightarrow \\
+(n-1) \sum_{j \neq i} p_u(o|a(j))p_u(a(j)) &\ge p_u(o|a(i)) \sum_{j \neq i} p_u(a(j))
+\end{align*}
+$$
+
+We obtain a lower bound of the left-hand side by replacing each $p_u(o|a(j))$ with their minimum. So we require that
+
+$$
+(n-1) \min_{j \neq i} \{p_u(o|a(j))\} \sum_{j \neq i} p_u(a(j)) \geq p_u(o|a(i)) \sum_{j \neq i} p_u(a(j)) \Leftrightarrow \quad (8)
+$$
+
+$$
+(n - 1) \min_{j \neq i} p_u(o | a(j)) \geq p_u(o | a(i)) \quad (9)
+$$
+
+Condition (9) can be interpreted as follows: for each observable, the probability that user $i$ performs the action should be balanced by the corresponding probabilities of the other users. It would be more natural to have the sum of all $p_u(o|a(j))$ on the left side; in fact, the left side of (9) is a lower bound of this sum. However, since the probabilities of the users are unknown, we have to consider the “worst” case, in which the user with the minimum $p_u(o|a(j))$ has the greatest probability of appearing.
+
+Finally, condition (9) is equivalent to the following definition that we propose as a general definition of probable innocence.
+
+**Definition 8** A system $(M, I, a, B, p_u)$ has probable innocence if for all observables $o \in O$ and for all users $i, j \in I$:⁴
+
+$$ (n-1)p(o|a(j)) \geq p(o|a(i)) $$
+
+The meaning of this definition is that, in order for $p_u(a(i))/p_u(\bigvee_{j \neq i} a(j))$ to increase by at most a factor of $n-1$ (Def. 7), the corresponding ratio between the probabilities of the observables must be at most $n-1$. Note that in probabilistic anonymity (Def. 3) $p(o|a(i))$ and $p(o|a(j))$ are required to be equal. In probable innocence we allow $p(o|a(i))$ to be bigger, thus losing some anonymity, but by no more than a factor of $n-1$.
+
+Definition 8 has the advantage of involving only the probabilities of the observables and not those of the users, similarly to Definition 3 of probabilistic anonymity. It is clear that Definition 8 implies Definition 7, since the former was obtained by strengthening the latter. Since Definition 7 considers all possible distributions of the users, the converse implication also holds.
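
As an illustration, Definition 8 can be checked mechanically on a finite system given as a table of the probabilities $p(o|a(i))$. The sketch below and the two example systems are hypothetical, not taken from the paper:

```python
def probable_innocence(p, tol=1e-12):
    """Definition 8 on a finite system: p[o][i] = p(o | a(i)).

    Requires (n-1) * p(o | a(j)) >= p(o | a(i)) for every
    observable o and all pairs of users i, j.
    """
    n = len(p[0])  # number of anonymous users
    return all((n - 1) * row[j] >= row[i] - tol
               for row in p for i in range(n) for j in range(n))

# Hypothetical 3-user system: the initiator is observed with
# probability 1/2, each of the others with probability 1/4.
symmetric = [[1/2, 1/4, 1/4],
             [1/4, 1/2, 1/4],
             [1/4, 1/4, 1/2]]

# A system exposing user 0: (n-1) * 0.05 < 0.9 in the first row.
exposing = [[0.9, 0.05, 0.05],
            [0.1, 0.95, 0.95]]

print(probable_innocence(symmetric), probable_innocence(exposing))
```

Each column of the table is the distribution of observables under one user, so the check runs over every row and every pair of columns.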
+
+**Proposition 9** *Definitions 7 and 8 are equivalent.*
+
+*Proof* Def. 8 $\Rightarrow$ Def. 7 is trivial, since we strengthened the second to obtain the first. For the converse, suppose that Def. 7 holds but Def. 8 does not, so there exist users $k, l$ and an observable $o$ such that $(n-1)p_u(o|a(k)) < p_u(o|a(l))$. Thus there exists an $\epsilon > 0$ s.t.
+
+$$ (n-1)(p_u(o|a(k)) + \epsilon) \leq p_u(o|a(l)) \quad (10) $$
+
+Def. 7 should hold for all user distributions $u$, so we select one that assigns a very small total probability $\delta$ to the users other than $k, l$, that is $u(i) = \frac{\delta}{n-2}$ for all $i \neq k, l$. From Def. 7 (for $i = l$) we have:
+
+⁴ Remember that $p_u(o|a(i))$ is independent from $u$ so we can take any distribution such that $u(i) > 0$, for example a uniform one.
+
+$$
+\begin{align}
+(n-1)\Big(p_u(a(k))\,p_u(o|a(k)) + \sum_{j \neq k,l} \tfrac{\delta}{n-2}\,p_u(o|a(j))\Big) &\geq p_u(o|a(l))\big(\delta + p_u(a(k))\big) \tag{11} \\
+(n-1)\big(p_u(a(k))\,p_u(o|a(k)) + \delta\big) &\geq p_u(o|a(l))\big(\delta + p_u(a(k))\big) && \text{by } p_u(o|a(j)) \leq 1 \nonumber \\
+p_u(a(k))\,p_u(o|a(k)) + \delta &\geq \big(p_u(o|a(k)) + \epsilon\big)\big(\delta + p_u(a(k))\big) && \text{by (10)} \nonumber \\
+\delta\big(1 - p_u(o|a(k)) - \epsilon\big) &\geq \epsilon\, p_u(a(k)) \nonumber \\
+\delta &\geq \frac{\epsilon\, p_u(a(k))}{1 - \frac{p_u(o|a(l))}{n-1}} && \text{by (10)} \tag{12}
+\end{align}
+$$
+
+If $n > 2$ then the right side of inequality (12) is strictly positive, so it is sufficient to take a smaller $\delta$ to reach a contradiction. If $n = 2$ then there are no users other than $k, l$ and we can proceed similarly. $\square$
+
+**Example 10** Recall the two examples of Figure 2. If we apply Definition 8 to the first one we see that it does not hold, since $(n-1)p(o_1|a(2)) = 0 \not\ge \frac{c}{n-p_f} = p(o_1|a(1))$. This agrees with our intuition that probable innocence is violated when user 1 is observed. In the second example the definition holds, since $\forall i, j$: $p(o_i|a(i)) = p(o_j|a(j))$. Thus, in these two examples our definition correctly reflects the notion of probable innocence.
+
+## 4.1 Relation to other definitions
+
+### 4.1.1 Definition by Reiter and Rubin
+
+Reiter and Rubin's definition can be expressed by condition (3). It considers the probabilities of the observables (not of the users) and requires that, for any user who originates the message, a special observable, representing the detection of the user by a corrupted member, have probability at most $p(h)/2$. As we saw in the examples of Figure 2, what matters is not the actual probability of an observable when a specific user is the originator, but its relation to the corresponding probabilities when the other users are the originators.
+
+However, in Crowds there are some important symmetries. First of all, the number of observables is the same as the number of users: for each user $i$ there is an observable $o_i$ meaning that user $i$ is observed. When $i$ is the initiator, $o_i$ clearly has a higher probability than the other observables. However, since forwarders are randomly selected, the probability of $o_j$ is the same for all $j \neq i$. The same holds for the observables: $o_i$ is more likely to have been produced by $i$, but all other users $j \neq i$ have the same probability of producing it. These symmetries can be expressed as:
+
+$$ \forall i \in I, \forall k, l \neq i : p(o_k | a(i)) = p(o_l | a(i)) \quad (13) $$
+
+$$ p(o_i | a(k)) = p(o_i | a(l)) \quad (14) $$
+
+Because of these symmetries, we cannot have a situation similar to those of Figure 2. On the left-hand side, for example, the probability $p(o_1 | a(2)) = 0$ should be the same as $p(o_3 | a(2))$. To keep the value 0 (which is the reason why probable innocence is not satisfied) we would need 0 everywhere in the row (except $p(o_2 | a(2))$), which is impossible since the sum of the row should be $p(h)$ and $p(o_2 | a(2)) \le p(h)/2$.
+
+So the reason why probable innocence is satisfied in Crowds is not the fact that observing the initiator has low probability by itself (which is what definition (2) ensures), but the fact that definition (2), because of the symmetry, forces the probability of observing any of the other users to be high enough.
+
+Note that the number of anonymous users *n* is not the same as the number of users *m* in Crowds, in fact *n* = *m* − *c* where *c* is the number of corrupted users.
+
+**Proposition 11** *Under the symmetry requirements (13) and (14), Definition 8 is equivalent to the one of Reiter and Rubin.*
+
+**Proof** Due to the symmetry it is easy to see that there are only two possible values for $p(o_i | a(j))$: when $i$ is the sender, the probability of observing $i$ is the same for all $i$; similarly, the probability of observing a different user $j \neq i$ is the same for all $j$. So
+
+$$ p(o_i | a(j)) = \begin{cases} \phi & \text{if } i = j \\ \chi & \text{if } i \neq j \end{cases} $$
+
+Note that $\phi + (n-1)\chi = p(h)$. So Def. 8 for $o_i$ becomes
+
+$$
+\begin{align*}
+p(o_i | a(i)) &\le (n-1)p(o_i | a(j)) \Rightarrow \\
+&\quad \phi \le (n-1)\chi \Rightarrow \\
+&\quad \phi \le p(h) - \phi \Rightarrow \\
+p(o_i | a(i)) &\le \frac{1}{2}p(h)
+\end{align*}
+$$
+
+which corresponds to Reiter and Rubin's definition. $\square$
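
The proof can also be checked numerically (with hypothetical values of $n$ and $p(h)$, written `p_h` below): in a symmetric system, $\phi$ and $\chi$ are linked by $\phi + (n-1)\chi = p(h)$, and Definition 8 agrees exactly with Reiter and Rubin's bound $\phi \le p(h)/2$.

```python
# Numeric check of Proposition 11 (hypothetical n and p_h).
# In a symmetric system, phi = p(o_i | a(i)) and chi = p(o_i | a(j))
# for j != i satisfy phi + (n-1)*chi = p_h.
n, p_h = 5, 0.8

def def8_holds(phi):
    chi = (p_h - phi) / (n - 1)       # forced by the symmetry
    return phi <= (n - 1) * chi       # Definition 8 for o_i

for phi in [0.3, 0.39, 0.41, 0.5]:
    print(phi, def8_holds(phi), phi <= p_h / 2)  # the two tests agree
```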
+
+### 4.1.2 Definition of Halpern and O'Neill
+
+One of the motivations behind the new definition of probable innocence is that it should make no assumptions about the probabilities of the users. If we assume a uniform distribution of users then it can be shown that our definition becomes the same as the one of Halpern and O'Neill.
+
+Fig. 3. Relation between the various anonymity definitions
+
+**Proposition 12** *The definition of Halpern and O'Neill can be obtained from Definition 7 if we consider a uniform distribution of users, that is a distribution $u$ such that $\forall i, j \in I : u(i) = u(j) = 1/n$.*
+
+**Proof** Trivial. Since all users have the same probability, $\forall i \in I : p(a(i)) = 1/n$ and the left side of Definition 7 is equal to 1. $\square$
+
+Note that the equivalence of Def. 7 and Def. 8 is based on the fact that the former ranges over all possible distributions *u*. Thus Def. 8 is strictly stronger than the one of Halpern and O’Neill.
+
+### 4.1.3 Probabilistic anonymity
+
+It is easy to see that strong anonymity (equation (1)) implies Halpern and O'Neill's definition of probable innocence. Definition 8 preserves the same implication in the case of probabilistic anonymity.
+
+**Proposition 13** *Probabilistic anonymity implies probable innocence (Definition 8).*
+
+**Proof** Trivial. If Definition 3 holds then $p(o|a(j)) = p(o|a(i))$ for all $o, i, j$. $\square$
+
+The relation between the various definitions of anonymity is summarized in Figure 3. The classification in columns is based on the type of probabilities that are considered. The first column considers the probabilities of different users, the second the probability of the same user before and after an observation, and the third the probabilities of the observables. Concerning the rows, the first corresponds to the strong case and the second to probable innocence. It is clear from the table that the new definition is to probable innocence as conditional anonymity is to strong anonymity.
+
+# 5 Protocol Composition
+
+In protocol analysis it is often easier to split complex protocols into parts, analyze each part separately and then combine the results. In this section we consider the case where a protocol is “repeated” multiple times but with only one user-selection phase in the beginning. This situation arises when an attacker can force a user to repeat the protocol many times. We examine the anonymity guarantees of the resulting protocol with respect to those of the original one, obtaining a general result for a class of attacks that appear in protocols such as Crowds.
+
+First, we define the “sequential composition” of two anonymity systems.
+
+**Definition 14** Let $A_1 = (M_1, I, a_1, B_1, p_1)$, $A_2 = (M_2, I, a_2, B_2, p_2)$ be two anonymity systems with the same set of anonymous users $I$. The sequential composition of $A_1$ and $A_2$, denoted by $A_1; A_2$, is an anonymity system $(M, I, a, B, p)$ such that:
+
+$$
+\begin{align}
+exec(M) &\subseteq exec(M_1) \times exec(M_2) \tag{15} \\
+a_1^{-1}(\xi_1) &= a_2^{-1}(\xi_2) \quad \forall \xi_1 \xi_2 \in exec(M) \tag{16} \\
+p(o_1 o_2 | a(i)) &= p_1(o_1 | a(i)) \cdot p_2(o_2 | a(i)) \quad \forall o_1 o_2 \in O_1 \times O_2 \tag{17}
+\end{align}
+$$
+
+where $exec(M)$ is the set of all executions in $etree(M)$, $a_i^{-1}$ is the inverse function of $a_i$, and $O_i$ is the set of observables of $A_i$.
+
+Intuitively, $A_1$; $A_2$ emulates $A_1$ in the beginning. When $A_1$ terminates then it emulates $A_2$ but without re-selecting a user, keeping the same user that was selected in $A_1$. So the executions of $A_1$; $A_2$ are of the form $\xi_1\xi_2$, where $\xi_i$ is an execution of $A_i$, with the constraint that $\xi_1, \xi_2$ should correspond to the same user. Since the user is selected once, the probability of the event $o_1o_2$ given a user $i$ is the product of the corresponding probabilities of each system. We are not interested in the exact structure of the automaton $M$, however it should be relatively simple to construct it from $M_1$ and $M_2$.
+
+Repetition is a special case of sequential composition in which the two systems are the same.
+
+**Definition 15** Let $A$ be an anonymity system. We define the $m$-repetition of $A$ as $A^m = A; \dots; A$, $m$ times.
+
+Let $A$ be an anonymity system and $O$ its set of observables. We will examine the anonymity guarantees of $A^m$ with respect to those of $A$. From Definition 3 and equation (17) it is easy to conclude that $A^m$ is strongly anonymous if and only if $A$ is, which is expected since the probability of each single event is the same under any user. However, the case of probable innocence is more interesting, since an event might have greater probability under user $i$ than under user $j$.
+
+Consider a system with three users and one event $o$, with probabilities $p(o|a(1)) = 1/2$ and $p(o|a(2)) = p(o|a(3)) = 1/4$. This system satisfies Definition 8, thus it provides probable innocence. If we repeat the protocol two times then the probabilities for the event $oo$ will be $p(oo|a(1)) = 1/4$ and $p(oo|a(2)) = p(oo|a(3)) = 1/16$, and now Definition 8 is violated. In the original protocol the probability of $o$ under user 1 was twice as big as the corresponding probability of the other users, but after the repetition it became 4 times as big, which Definition 8 does not allow.
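
The arithmetic of this example can be checked directly. In the sketch below, `ok` is a hypothetical helper that tests Definition 8 for the single event $o$ with $n = 3$, and the squaring of probabilities follows the independence property (17):

```python
# Direct check of the example above: 3 users, one event o.
p1 = {1: 1/2, 2: 1/4, 3: 1/4}            # p(o | a(i)) in the base system

def ok(p, n=3):
    # Definition 8 restricted to the single event o.
    return all((n - 1) * p[j] >= p[i] for i in p for j in p)

p2 = {i: v ** 2 for i, v in p1.items()}  # p(oo | a(i)) after 2 repetitions

print(ok(p1), ok(p2))  # True False
```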
+
+In the general case, the system $A^m$ satisfies (by definition) probable innocence if
+
+$$ (n-1)p(o_1 \dots o_m | a(i)) \geq p(o_1 \dots o_m | a(j)) \quad \forall o_1, \dots, o_m \in O, \forall i, j \in I \quad (18) $$
+
+The following lemma states that it is sufficient to check only the events of the form $o\dots o$ (the same event repeated $m$ times), and expresses the probable innocence of $A^m$ using probabilities of $A$.
+
+**Lemma 16** Let $A = (M, I, a, B, p)$ be an anonymity system, $n = |I|$ and $O$ its set of observable events. $A^m$ satisfies probable innocence if and only if:
+
+$$ (n-1)p^m(o|a(i)) \geq p^m(o|a(j)) \quad \forall o \in O, \forall i, j \in I \quad (19) $$
+
+*Proof* (only if) Use equation (18) with $o_1 = \dots = o_m = o$ and then (17) to obtain (19). (if) We can write (19) as $\sqrt[m]{n-1}\,p(o|a(i)) \geq p(o|a(j))$. Let $o_1, \dots, o_m$ be events; by applying this inequality to each of them we have:
+
+$$ \sqrt[m]{n-1}p(o_1|a(i)) \geq p(o_1|a(j)) \\ \vdots \\ \sqrt[m]{n-1}p(o_m|a(i)) \geq p(o_m|a(j)) $$
+
+Then by multiplying these inequalities we obtain (18). $\square$
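
The lemma can be sanity-checked by brute force on a small hypothetical system (the table `p` below is illustrative, not taken from the paper): condition (18) is evaluated over all $m$-tuples of observables using the independence property (17), and compared with condition (19), which looks only at the repeated events $o \dots o$.

```python
from itertools import product
from math import prod

# Brute-force check of Lemma 16 on a small hypothetical system:
# p[o][i] = p(o | a(i)), two observables, n = 3 users, m = 3 repetitions.
p = [[0.6, 0.4, 0.5],
     [0.4, 0.6, 0.5]]
n, m = 3, 3

# Condition (18): all m-tuples of observables, via independence (17).
cond18 = all(
    (n - 1) * prod(p[o][i] for o in obs) >= prod(p[o][j] for o in obs)
    for obs in product(range(len(p)), repeat=m)
    for i in range(n) for j in range(n))

# Condition (19): only the repeated events o...o.
cond19 = all((n - 1) * row[i] ** m >= row[j] ** m
             for row in p for i in range(n) for j in range(n))

print(cond18, cond19)  # the two conditions always agree
```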
+
+Lemma 16 explains our previous example. The probability $p(o|a(2)) = 1/4$ was smaller than $p(o|a(1)) = 1/2$ but sufficient to provide probable innocence. When we raised these probabilities to the power of two, however, $1/16$ was too small, so the event $oo$ would expose user 1. In fact, if we allow an arbitrary number of repetitions, equation (19) can never hold, unless the probability of every event is the same under any user, that is, unless the system is strongly anonymous.
+
+**Proposition 17** Let A be an anonymity system. $A^m$ satisfies probable innocence for all $m$ if and only if A is strongly anonymous.
+
+**Proof** We rewrite equation (19) as⁵:
+
+$$n - 1 \geq \left( \frac{p(o | a(j))}{p(o | a(i))} \right)^m \quad \forall o \in O, \forall i, j \in I \quad (20)$$
+
+If $A$ is strongly anonymous then by Definition 3 $p(o|a(i)) = p(o|a(j))$ for all $o, i, j$, so the right side of inequality (20) is 1 and the inequality always holds (for $n \ge 2$). Otherwise there exist $o, i, j$ such that $p(o|a(j)) > p(o|a(i))$, so (20) cannot hold for all $m$, since $\alpha^m \to \infty$ when $m \to \infty$ for $\alpha > 1$. $\square$
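
Inequality (20) also tells how many repetitions a protocol survives. A small sketch, with hypothetical ratios $r = p(o|a(j))/p(o|a(i)) > 1$:

```python
import math

# Inequality (20) fails as soon as r**m > n - 1, so the protocol
# survives only finitely many repetitions.
def breaking_m(r, n):
    """Smallest m with r**m > n - 1 (assumes r > 1, n >= 2)."""
    return math.floor(math.log(n - 1, r)) + 1

print(breaking_m(2.0, 100))   # ratio 2, 100 users: 7 repetitions suffice
print(breaking_m(1.1, 100))   # even a small bias breaks eventually
```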
+
+## 5.1 Multiple paths attack
+
+As stated in the original paper on Crowds, after creating a random path to a server a user should use the same path for all future requests to the same server. However, there is a chance that some node in the path leaves the network, in which case the user has to create a new path using the same procedure. In theory the two paths cannot be linked together, that is, the attacker cannot know that the same user created both paths. In practice, however, such a link could be established by means unrelated to the protocol, such as the URL of the server, the data of the request, etc. By linking the two requests the attacker obtains more observables that he can use to track down the originator. Since the attacker also participates in the protocol, he could voluntarily break existing paths that pass through him in order to force the users to recreate them.
+
+If $\mathcal{C}$ is an anonymity system that models Crowds, then the $m$-paths version corresponds to the $m$-repetition of $\mathcal{C}$, which repeats the protocol $m$ times without re-selecting a user. From Proposition 17, and since Crowds is not strongly anonymous, we have that probable innocence cannot be satisfied if we allow an arbitrary number of paths. Intuitively this is justified. Even if the attacker sees the event $o_1$, meaning that user 1 was detected (was right before a corrupted user in the path), it could be the case (with non-trivial probability) that user 2 was the real originator: he sent the message to user 1, who sent it to the attacker. However, if there are ten paths and the attacker sees $o_1 \dots o_1$ (ten times), then it is much more improbable that all ten times user 2 sent the message to user 1 and user 1 to the attacker. It appears much more likely that user 1 was indeed the originator.
+
+⁵ Note that in order to have probable innocence (or strong anonymity) $p(o|a(i))$ should be non-zero for all $o$ and $i$, except in trivial systems where all observables have zero probability. Thus, we consider only non-zero values of $p(o|a(i))$.
+
+This attack had been foreseen in the original paper of Crowds and further analysis
+was presented in [24,20]. However our result is more general since we prove that
+probable innocence is impossible for any protocol that allows ‘multiple paths’, in
+other words that can be modeled as an *m*-repetition, unless the original protocol is
+strongly anonymous. Also our analysis is simpler since we did not need to calculate
+the actual probabilities of any observables in a specifi c protocol.
+
+# 6 Related Work
+
+Anonymity and privacy have been an active area of research for over two decades, with increasing interest in the subject during the last five years, resulting in a great number of publications. The work most closely related to ours, as already discussed in the introduction and Section 3, is that of Reiter and Rubin ([15]) and that of Halpern and O'Neill ([10]).
+
+Apart from the above two, there are many papers in the anonymity literature in which formal definitions of various notions of anonymity are given. Schneider and Sidiropoulos ([17]) propose a definition of anonymity based on CSP. Hughes and Shmatikov ([12]) develop a modular framework to formalize a range of properties (including numerous flavors of anonymity and privacy) using the notion of *function views* to represent a mathematical abstraction of partial knowledge of a function. Syverson and Stubblebine ([22]) introduce the notion of *group principals* and an associated epistemic logic to axiomatize anonymity. In these papers possibilistic frameworks are used, and it is not clear how the definitions could be extended to a probabilistic setting.
+
+On the other hand, Bhargava and Palamidessi ([2]) propose a probabilistic definition of strong anonymity using the same framework as this paper. The resulting definition can be seen as the strong variant of Definition 8 (in fact, it implies Definition 8, as shown in Section 4.1.3). Serjantov and Danezis ([19]) and Díaz et al. ([6]) take an information-theoretic approach by considering the *entropy* of the probability distribution that the attacker assigns to the anonymous agents after observing the system.
+
+Finally, we should mention an interesting work by Evfimievski et al. ([8]) in the field of privacy-preserving data mining. Their definition requires that the probability of a private value $x_1$ producing an output $y$ be at most $\gamma$ times the corresponding probability of a different value $x_2$. This is very close in spirit to our definition of probable innocence.
+
+# 7 Conclusion
+
+In this paper we have considered probable innocence, a weak notion of anonymity provided by real-world systems such as Crowds. We have analyzed the definitions of probable innocence existing in the literature, in particular the one by Reiter and Rubin, which is suitable for systems that, like Crowds, satisfy certain symmetries, and the one given by Halpern and O'Neill, which expresses a condition on the probabilities of the users.
+
+Our first contribution is a definition of probable innocence which is (intuitively) adequate for a general class of protocols, abstracts from the probabilities of the users and involves only the probabilities that depend solely on the system. We have shown that the new definition is equivalent to the existing ones under symmetry conditions (Reiter and Rubin) or uniform distribution of the users (Halpern and O'Neill).
+
+A second contribution is the extension of the definition of probable innocence to the case of protocol repetition, which is induced by multiple paths attacks. We have shown a general negative result, namely that no protocol can ensure probable innocence under an arbitrary number of repetitions.
+
+## References
+
+[1] Oliver Berthold, Hannes Federrath, and Stefan Köpsell. Web mixes: A system for anonymous and unobservable internet access. In *Designing Privacy Enhancing Technologies, International Workshop on Design Issues in Anonymity and Unobservability*, volume 2009 of *Lecture Notes in Computer Science*, pages 115–129. Springer, 2000.
+
+[2] Mohit Bhargava and Catuscia Palamidessi. Probabilistic anonymity. In Martín Abadi and Luca de Alfaro, editors, *Proceedings of CONCUR 2005*, volume 3653 of *Lecture Notes in Computer Science*, pages 171–185. Springer-Verlag, 2005.
+
+[3] David Chaum. The dining cryptographers problem: Unconditional sender and recipient untraceability. *Journal of Cryptology*, 1:65–75, 1988.
+
+[4] Ian Clarke, Oskar Sandberg, Brandon Wiley, and Theodore W. Hong. Freenet: A distributed anonymous information storage and retrieval system. In *Designing Privacy Enhancing Technologies, International Workshop on Design Issues in Anonymity and Unobservability*, volume 2009 of *Lecture Notes in Computer Science*, pages 44–66. Springer, 2000.
+
+[5] Yuxin Deng, Catuscia Palamidessi, and Jun Pang. Weak probabilistic anonymity. In *Proceedings of SecCo 2005*, Electronic Notes in Theoretical Computer Science. Elsevier Science Publishers, 2005. To appear.
+
+[6] Claudia Díaz, Stefaan Seys, Joris Claessens, and Bart Preneel. Towards measuring anonymity. In *Proceedings of PET 2002*, pages 54–68, 2002.
+
+[7] Roger Dingledine, Michael J. Freedman, and David Molnar. The free haven project: Distributed anonymous storage service. In *Designing Privacy Enhancing Technologies, International Workshop on Design Issues in Anonymity and Unobservability*, volume 2009 of *Lecture Notes in Computer Science*, pages 67–95. Springer, 2000.
+
+[8] Alexandre V. Evfimievski, Johannes Gehrke, and Ramakrishnan Srikant. Limiting privacy breaches in privacy preserving data mining. In *Proceedings of PODS 2003*, pages 211–222, 2003.
+
+[9] Joseph Y. Halpern and Kevin R. O'Neill. Anonymity and information hiding in multiagent systems. In *Proc. of the 16th IEEE Computer Security Foundations Workshop*, pages 75–88, 2003.
+
+[10] Joseph Y. Halpern and Kevin R. O'Neill. Anonymity and information hiding in multiagent systems. *Journal of Computer Security*, 2005. To appear.
+
+[11] Oltea Mihaela Herescu and Catuscia Palamidessi. Probabilistic asynchronous π-calculus. In Jerzy Tiuryn, editor, *Proceedings of FOSSACS 2000 (Part of ETAPS 2000)*, volume 1784 of *Lecture Notes in Computer Science*, pages 146–160. Springer-Verlag, 2000.
+
+[12] Dominic Hughes and Vitaly Shmatikov. Information hiding, anonymity and privacy: a modular approach. *Journal of Computer Security*, 12(1):3–36, 2004.
+
+[13] Marta Z. Kwiatkowska, Gethin Norman, and David Parker. PRISM 2.0: A tool for probabilistic model checking. In *Proceedings of the First International Conference on Quantitative Evaluation of Systems (QEST 2004)*, pages 322–323, 2004.
+
+[14] Catuscia Palamidessi and Oltea M. Herescu. A randomized encoding of the π-calculus with mixed choice. *Theoretical Computer Science*, 335(2-3):373–404, 2005.
+
+[15] Michael K. Reiter and Aviel D. Rubin. Crowds: anonymity for Web transactions. *ACM Transactions on Information and System Security*, 1(1):66–92, 1998.
+
+[16] Peter Y. Ryan and Steve Schneider. *Modelling and Analysis of Security Protocols*. Addison-Wesley, 2001.
+
+[17] Steve Schneider and Abraham Sidiropoulos. CSP and anonymity. In *Proc. of the European Symposium on Research in Computer Security (ESORICS)*, volume 1146 of *Lecture Notes in Computer Science*, pages 198–218. Springer-Verlag, 1996.
+
+[18] Roberto Segala and Nancy Lynch. Probabilistic simulations for probabilistic processes. *Nordic Journal of Computing*, 2(2):250–273, 1995. An extended abstract appeared in *Proceedings of CONCUR '94*, LNCS 836: 481-496.
+
+[19] Andrei Serjantov and George Danezis. Towards an information theoretic metric for anonymity. In *Proceedings of PET 2002*, pages 41–53, 2002.
+---PAGE_BREAK---
+
+[20] Vitaly Shmatikov. Probabilistic model checking of an anonymity system. *Journal of Computer Security*, 12(3/4):355–377, 2004.
+
+[21] Vitaly Shmatikov. Probabilistic analysis of anonymity. In *IEEE Computer Security Foundations Workshop (CSFW)*, pages 119–128, 2002.
+
+[22] Paul F. Syverson and Stuart G. Stubblebine. Group principals and the formalization of anonymity. In *World Congress on Formal Methods (1)*, pages 814–833, 1999.
+
+[23] Paul F. Syverson, David M. Goldschlag, and Michael G. Reed. Anonymous connections and onion routing. In *IEEE Symposium on Security and Privacy*, pages 44–54, Oakland, California, 1997.
+
+[24] Matthew Wright, Micah Adler, Brian N. Levine, and Clay Shields. An analysis of the degradation of anonymous protocols. In *ISOC Network and Distributed System Security Symposium (NDSS)*, 2002.
\ No newline at end of file
diff --git a/samples/texts_merged/419557.md b/samples/texts_merged/419557.md
new file mode 100644
index 0000000000000000000000000000000000000000..b5da762889374e6b035a6ea011002b1dbb233a85
--- /dev/null
+++ b/samples/texts_merged/419557.md
@@ -0,0 +1,191 @@
+
+---PAGE_BREAK---
+
+# Conditions Where the Chaotic Set Has a Non-Empty Residual Julia Set for Two Classes of Meromorphic Functions*
+
+Patricia Domínguez, Iván Hernández
+
+Facultad de Ciencias Físico-Matemáticas, Benemérita Universidad Autónoma de Puebla, Puebla, Mexico
+Email: pdsoto@fcfm.buap.mx, ivanho_5@hotmail.com
+
+Received August 15, 2013; revised September 15, 2013; accepted September 23, 2013
+
+Copyright © 2013 Patricia Domínguez, Iván Hernández. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
+
+## ABSTRACT
+
+We define the Fatou and Julia sets for two classes of meromorphic functions. The Julia set is the chaotic set, where fractals appear. The chaotic set can have points and components which are buried. The set of these points and components is called the *residual Julia set*, denoted by $J_r(f)$, and is defined as the subset of those points of the Julia set (the chaotic set) which do not belong to the boundary of any component of the Fatou set (the stable set). The points of $J_r(f)$ are called *buried points* and the components of $J_r(f)$ are called *buried components*. In this paper we extend some results related to the residual Julia set of transcendental meromorphic functions to functions which are meromorphic outside a compact countable set of essential singularities. We give some conditions under which $J_r(f) \neq \emptyset$.
+
+**Keywords:** Fatou Set; Julia Set; Residual Julia Set; Buried Points; Buried Components
+
+## 1. Introduction
+
+Let $X, Y$ be Riemann surfaces (complex 1-manifolds) and let $D_f$ be an arbitrary non-empty open subset of $X$. We define
+
+$$ \mathrm{Hol}(X,Y) = \{ f : D_f \to Y \mid f \text{ is analytic} \} $$
+
+and $\mathrm{Hol}(X,X) = \mathrm{Hol}(X)$.
+
+The set of singular values of $f \in \mathrm{Hol}(X,Y)$ is $\mathrm{SV}(f) = \overline{C(f) \cup A(f)}$, where $C(f)$ is the set of critical values and $A(f)$ is the set of asymptotic values.
+
+Let $f \in \mathrm{Hol}(X)$; the sequence formed by its iterates is defined and denoted by $f^0 := \mathrm{Id}$, $f^n := f \circ f^{n-1}$, $n \in \mathbb{N}$. The study makes sense and is non-trivial when $X$ is either the Riemann sphere $\hat{\mathbb{C}}$, the complex plane $\mathbb{C}$, or the complex plane minus one point, that is, $\mathbb{C} \setminus \{0\}$.
+
+Taking $X = \mathbb{C}$ and $Y = \hat{\mathbb{C}}$ we deal with the following classes of meromorphic maps.
+
+$$ M = \{ f : X \to Y \mid f \text{ is transcendental meromorphic with at least one not omitted pole} \}. $$
+
+$$ K = \{ f : Y \setminus B \to Y \mid B \text{ is a compact countable set and } f \text{ is meromorphic} \}. $$
+
+The set B is formed by the essential singularities of f, where f is non-constant. We assume B to have at least two elements and f to have poles. With this assumption we have $M \cap K = \emptyset$.
+
+If $f$ is a map in any of the classes above, the Fatou set $F(f)$ consists of all points $z \in X$ (or $z \in Y \setminus B$) such that the sequence of iterates of $f$ is well defined and forms a normal family in a neighbourhood of $z$. The Julia set is the complement of the Fatou set, denoted by $J(f) = (F(f))^c$. The Fatou and the Julia sets are also known as *the stable and the chaotic sets* respectively.
+
+*The authors were supported by CONACYT grant 128005.
+---PAGE_BREAK---
+
+In the Julia set, or chaotic set, it is easy to find fractals; examples of this fact are given below. Fractals are typically self-similar patterns, where self-similar means they are “the same from near as from far” [1].
+
+Examples of functions in class $\mathcal{M}$ live in the family $f_{\lambda,\mu}(z) = \lambda e^{z} + \frac{\mu}{z}$ studied in [2]. The stable set (Fatou set) and the chaotic set (Julia set) for the parameters $\lambda = -4$, $\mu = -1$ can be seen in **Figure 1**.
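The behaviour of this family can also be explored numerically. The sketch below is not from the paper; the helper names and the escape threshold are illustrative assumptions. It iterates $f_{\lambda,\mu}(z) = \lambda e^{z} + \mu/z$ for $\lambda = -4$, $\mu = -1$ and stops when an orbit hits the pole at $0$ or leaves a large disc:

```python
import cmath

def f(z, lam=-4.0, mu=-1.0):
    # f_{lambda,mu}(z) = lambda * e^z + mu / z; the map has a pole at z = 0.
    return lam * cmath.exp(z) + mu / z

def orbit(z0, n=50, escape=1e8):
    # Compute up to n iterates of f at z0, stopping if the orbit hits the
    # pole or leaves a large disc (a crude numerical notion of "escape").
    zs = [z0]
    z = z0
    for _ in range(n):
        if z == 0 or abs(z) > escape:
            break
        try:
            z = f(z)
        except OverflowError:
            # exp blew up: treat as escaped
            break
        zs.append(z)
    return zs
```

Colouring each starting point by how long its orbit stays bounded is the standard way pictures like **Figure 1** are produced.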
+
+Examples of functions in class $\mathcal{K}$ can be found in the family $f_c = R(z)e^{\epsilon/(z-c)}$, where $R(z)$ is a rational function, $\epsilon \in \mathbb{R}$, $\epsilon \neq 0$ and $c \in \mathbb{C}$. We do not have any picture of the Fatou and Julia sets, but the Julia set should be a fractal for some parameters $c$ and $\epsilon$ sufficiently small.
+
+Class $\mathcal{M}$ was initially studied by Baker, Kotus and Lü [3-6]. Class $\mathcal{K}$ was introduced and studied by Bolsch in [7-9].
+
+Many properties of $J(f)$ and $F(f)$ are much the same for all classes above but different proofs are needed and some discrepancies arise. For functions in classes $\mathcal{M}$ or $\mathcal{K}$ we recall some properties of the Fatou and Julia sets: the Fatou set $F(f)$ is open and the Julia set $J(f)$ is closed; the Julia set is perfect and non-empty; the sets $J(f)$ and $F(f)$ are completely invariant under $f$; and finally the repelling periodic points are dense in $J(f)$.
+
+A Fatou component for a function in class $\mathcal{M}$ or $\mathcal{K}$ can be periodic, pre-periodic or wandering. A periodic component of the Fatou set is either an attracting domain, a parabolic domain, a Siegel disc, a Herman ring or a Baker domain. **Figure 1** is an example of a Baker domain for the function $f_{-4,-1}(z) = -4e^{z} - \frac{1}{z}$, see [2] for details.
+
+It was proved in [5], for functions in class $\mathcal{M}$, and in [8], for functions in class $\mathcal{K}$, that a periodic Fatou component (of arbitrary period) is simply, doubly or infinitely connected.
+
+In [6] the authors proved that for functions in class $\mathcal{M}$ with a finite set of singular values there are neither wandering components nor Baker domains. The same statement holds for functions in class $\mathcal{K}$, and the proofs are similar to those in [6].
+
+We define the *residual Julia set* of $f$, denoted by $J_r(f)$, as the set of those points of $J(f)$ which do not belong to the boundary of any component of the Fatou set $F(f)$. The points of $J_r(f)$ are called *buried points* and the components of $J_r(f)$ are called *buried components*. Thus the residual Julia set consists of the buried points and buried components lying in the chaotic set.
+
+This concept was first introduced in the context of Kleinian groups by Abikoff in [10,11]. In [12], McMullen defined a buried component of a rational function to be a component of the Julia set which does not meet the boundary of any component of the Fatou set, and similarly a buried point of the Julia set. McMullen gave an example of a rational function with buried components.
+
+Baker and Domínguez in [13] extended some results of Qiao [14] (for rational functions) concerning buried points and buried components to functions in class $\mathcal{M}$. In Section 2 we prove that the same results can be extended to functions in class $\mathcal{K}$.
+
+Finally, Section 3 contains Theorems A and B which assure with some conditions that the residual Julia set is not empty for functions in classes $\mathcal{M}$ and $\mathcal{K}$.
+
+## 2. Basic Results of the Residual Julia Set for Functions in Classes $\mathcal{M}$ and $\mathcal{K}$
+
+In this Section we will state some basic results about the residual Julia set which hold for functions in classes $\mathcal{M}$ and $\mathcal{K}$. The proofs of these results can be found in [13] and [15].
+
+**Proposition 2.1.** Let $f$ be in class $\mathcal{M}$ or $\mathcal{K}$. If the Fatou set of $f$ has a completely invariant component, then the residual Julia set is empty.
+
+**Proposition 2.2.** Let $f$ be in class $\mathcal{M}$ or $\mathcal{K}$. If there exists a buried component of $J(f)$, then $J(f)$ is disconnected.
+
+**Proposition 2.3.** Let $f$ be in class $\mathcal{M}$ or $\mathcal{K}$. If $J_r(f) \neq \emptyset$, then $J_r(f)$ is completely invariant, dense in $J(f)$ and uncountably infinite.
+
+**Proposition 2.4.** If $f \in \mathcal{K}$ has no wandering domains and $J_r(f) = \emptyset$, then there is a periodic Fatou component $U$ such that $J(f) = \partial U$.
+
+## 3. Some Conditions When $J_r(f) \neq \emptyset$ for Functions in Class $\mathcal{K}$
+
+In this section we extend some results related to the residual Julia set from functions in class $\mathcal{M}$ to functions in class $\mathcal{K}$. Qiao in [14] proved the following theorem for rational functions.
+
+**Theorem 3.1.** Let $f$ be a rational function and
+
+Figure 1. The chaotic set, which is a fractal, shown in colours; the Fatou set is shown in black.
+---PAGE_BREAK---
+
+$J(f) \neq \hat{\mathbb{C}}$. The Julia set $J(f)$ contains buried components if and only if $J(f)$ is disconnected and $F(f)$ has no completely invariant component.
+
+Baker and Domínguez in [13] gave the following result, which was a step towards a generalisation of Theorem 3.1 to functions in class $\mathcal{M}$.
+
+**Theorem 3.2.** Let $f$ be a meromorphic function in $\mathbb{C}$ with no wandering domains. Assume that $J(f)$ is not connected and that $F(f)$ has no completely invariant component. Then the residual Julia set $J_r(f)$ is non-empty.
+
+If we remove the hypothesis of no wandering domains from Theorem 3.2 and extend it to functions in class $\mathcal{K}$, the statement is as follows.
+
+**Theorem A.** Let $f$ be a function in class $\mathcal{K}$. If $J(f)$ is not connected and $F(f)$ has no completely invariant component, then the residual Julia set $J_r(f)$ is non-empty, that is, $J_r(f) \neq \emptyset$.
+
+In order to prove Theorem A we need to state some results for functions in classes $\mathcal{M}$ and $\mathcal{K}$. The following lemma was given in [16] for functions in class $\mathcal{M}$; since the proof carries over to functions in class $\mathcal{K}$, we omit it.
+
+**Lemma 3.3.** If $f \in \mathcal{M}$ or $\mathcal{K}$ and $U$ is a multiply connected periodic Fatou component such that $\partial U = J(f)$, then $U$ is completely invariant.
+
+The following result was given in [17] for functions in class $\mathcal{K}$.
+
+**Theorem 3.4.** Let $f \in \mathcal{K}$. Suppose that the Fatou set has no completely invariant domain and the Julia set is disconnected in such a way that the Fatou set has a component $H$ of connectivity at least five. Then singleton components are dense and buried in $J(f)$.
+
+**Proof of Theorem A.**
+
+If $U$ is a component of the Fatou set, then it is either periodic, pre-periodic or wandering. We split the proof into two cases: the no-wandering case and the wandering case.
+
+**No wandering case.**
+
+Let $f \in \mathcal{K}$. Assume that there are no wandering domains in the Fatou set and, for contradiction, that $J_r(f) = \emptyset$. By Proposition 2.4 there is a periodic Fatou component $U$ such that $\partial U = J(f)$. The component $U$ is multiply connected, since the Julia set is by hypothesis not connected. By Lemma 3.3 the component $U$ must be completely invariant, which gives a contradiction. Therefore, the residual Julia set is not empty.
+
+**Wandering case.**
+
+We assume that the Fatou set has wandering components. We prove the result in two cases: 1) every Fatou component of $f$ has finite connectivity, and 2) $f$ has at least one infinitely connected Fatou component.
+
+1) Since the Julia set is disconnected, it consists of uncountably many components. Now, as the connectivity of each component of the Fatou set of $f$ is finite, the number of boundary components of all Fatou components is countable. Thus the Julia set has uncountably many buried components. Therefore, $J_r(f) \neq \emptyset$.
+
+2) If we take a multiply connected Fatou component $U$ of connectivity $n \geq 5$, $n \in \mathbb{N}$, then the proof follows as the proof of Theorem 3.4 in [17]. Thus singleton buried components are dense in the Julia set. Therefore, $J_r(f) \neq \emptyset$.
+
+The following theorem is an extension of Proposition 6.1 in [16]; since the proof given in [16] extends easily to our case of functions in class $\mathcal{K}$, we give just a sketch of it.
+
+**Theorem B.** Let $f \in \mathcal{K}$, and let $A \subset (\hat{\mathbb{C}} \setminus B)$ be a closed set with non-empty interior. Suppose the following two conditions are satisfied:
+
+* $((\hat{\mathbb{C}} \setminus B) \setminus A) \cap J(f) \neq \emptyset$.
+
+* All the Fatou components of $f$ eventually iterate inside $A$ and never leave again. That is, if $\Omega$ is a Fatou component, then $f^n(\Omega) \subset A$ for all $n > N$, where $N$ depends on $\Omega$.
+
+Then $J_r(f) \neq \emptyset$.
+
+**Sketch of Proof B.**
+
+Take any point $z \in ((\hat{\mathbb{C}} \setminus B) \setminus A) \cap J(f)$ and a neighbourhood $V \subset (\hat{\mathbb{C}} \setminus B) \setminus A$ of $z$. Since periodic points are dense in the Julia set, $V$ must contain a periodic point $\zeta$ of the Julia set. Under iteration the point $\zeta$ returns to itself infinitely often.
+
+By hypothesis, points on the boundary of any Fatou component must eventually iterate inside $A$ and never leave again. Hence points of the Julia set which leave $A$ infinitely often cannot lie on the boundary of any Fatou component; thus $\zeta \in J_r(f)$, since under iteration it returns infinitely often to the complement of $A$. Therefore $J_r(f) \neq \emptyset$.
+
+## REFERENCES
+
+[1] J. F. Gouyet, “Physics and Fractal Structures,” Masson Springer, Paris, New York, 1996.
+
+[2] M. A. Montes de Oca Balderas, G. J. F. Sienra Loera and J. E. King Dávalos, “Baker Domains for Period Two for the Family $f_{\lambda,\mu} = \lambda e^z + \frac{\mu}{z}$,” *International Journal of Bifurcation and Chaos*, in press, 2013.
+
+[3] I. N. Baker, J. Kotus and Y. N. Lü, “Iterates of Meromorphic Functions II: Examples of Wandering Domains,” *Journal of the London Mathematical Society*, Vol. 42, No. 2, 1990, pp. 267-278.
+http://dx.doi.org/10.1112/jlms/s2-42.2.267
+
+[4] I. N. Baker, J. Kotus and Y. N. Lü, “Iterates of Meromorphic Functions: I,” *Ergodic Theory and Dynamical Systems*, Vol. 11, No. 2, 1991, pp. 241-248.
+
+[5] I. N. Baker, J. Kotus and Y. N. Lü, “Iterates of Meromorphic Functions III. Preperiodic Domains,” *Ergodic Theory and Dynamical Systems*, Vol. 11, 1991, pp. 603-618.
+
+---PAGE_BREAK---
+
+[6] I. N. Baker, J. Kotus and Y. N. Lü, "Iterates of Meromorphic Functions IV. Critically Finite Functions," *Results in Mathematics*, Vol. 22, No. 2-4, 1992, pp. 651-656.
+
+[7] A. Bolsch, "Repulsive Periodic Points of Meromorphic Functions," *Complex Variables Theory and Application*, Vol. 31, No. 1, 1996, pp. 75-79.
+http://dx.doi.org/10.1080/17476939608814947
+
+[8] A. Bolsch, "Iteration of Meromorphic Functions with Countably Many Essential Singularities," Technische Universität Berlin, Berlin, 1997.
+
+[9] A. Bolsch, "Periodic Fatou Components of Meromorphic Functions," *Bulletin of the London Mathematical Society*, Vol. 31, No. 5, 1999, pp. 543-555.
+http://dx.doi.org/10.1112/S0024609399005950
+
+[10] W. Abikoff, "Some Remarks on Kleinian Groups," *Annals of Mathematics Studies*, Vol. 66, 1971, pp. 1-5.
+
+[11] W. Abikoff, "The Residual Limits Sets of Kleinian Groups," *Acta Mathematica*, Vol. 130, No. 1, 1973, pp. 127-144.
+http://dx.doi.org/10.1007/BF02392264
+
+[12] C. McMullen, "Automorphisms of Rational Maps," *Holomorphic Functions and Moduli I*, MSRI Publications 10, Springer-Verlag, New York, 1988.
+
+[13] I. N. Baker and P. Domínguez, "Residual Julia Sets," *Journal of Analysis*, Vol. 8, 2000, pp. 121-137.
+
+[14] J. Y. Qiao, "The Buried Points on the Julia Sets of Rational and Entire Functions," *Science in China Series A*, Vol. 38, No. 12, 1995, pp. 1409-1419.
+
+[15] P. Domínguez and N. Fagella, "Residual Julia Sets for Rational and Transcendental Functions," Cambridge University Press, Cambridge, 2008, pp. 138-164.
+http://dx.doi.org/10.1017/CBO9780511735233.008
+
+[16] P. Domínguez and N. Fagella, "Existence of Herman Rings for Meromorphic Functions," *Complex Variables*, Vol. 49, No. 12, 2004, pp. 851-870.
+http://dx.doi.org/10.1080/02781070412331298589
+
+[17] P. Domínguez, "Residual Julia Sets for Meromorphic Functions with Countably Many Essential Singularities," *Journal of Difference Equations and Applications*, Vol. 16, No. 5-6, 2010, pp. 519-522.
+http://dx.doi.org/10.1080/10236190903203879
\ No newline at end of file
diff --git a/samples/texts_merged/4254409.md b/samples/texts_merged/4254409.md
new file mode 100644
index 0000000000000000000000000000000000000000..42f5bb725cdeee22e423b34e2836779bea5ce00f
--- /dev/null
+++ b/samples/texts_merged/4254409.md
@@ -0,0 +1,507 @@
+
+---PAGE_BREAK---
+
+On Bubble Generators in Directed Graphs
+
+Vicente Acuña, Roberto Grossi, Giuseppe Italiano, Leandro Lima, Romeo Rizzi, Gustavo Sacomoto, Marie-France Sagot, Blerina Sinaimeri
+
+► To cite this version:
+
+Vicente Acuña, Roberto Grossi, Giuseppe Italiano, Leandro Lima, Romeo Rizzi, et al. On Bubble Generators in Directed Graphs. Algorithmica, Springer Verlag, 2019, pp. 1-19. 10.1007/s00453-019-00619-z. hal-02284946
+
+HAL Id: hal-02284946
+
+https://hal.inria.fr/hal-02284946
+
+Submitted on 12 Sep 2019
+
+---PAGE_BREAK---
+
+# On Bubble Generators in Directed Graphs*
+
+V. Acuña · R. Grossi · G. F. Italiano ·
+L. Lima · R. Rizzi · G. Sacomoto ·
+M.-F. Sagot · B. Sinaimeri*
+
+Received: date / Accepted: date
+
+**Abstract** Bubbles are pairs of internally vertex-disjoint (s, t)-paths in a directed graph, which have many applications in the processing of DNA and RNA data. Listing and analysing all bubbles in a given graph is usually unfeasible in practice, due to the exponential number of bubbles present in real data graphs. In this paper, we propose a notion of bubble generator set, i.e., a polynomial-sized subset of bubbles from which all the other bubbles can be obtained through a suitable application of a specific symmetric difference operator. This set provides a compact representation of the bubble space of
+
+* A shorter version of this paper appeared as a conference article [1].
+
+* Corresponding author: Blerina Sinaimeri, E-mail: blerina.sinaimeri@inria.fr
+
+V. Acuña
+Center for Mathematical Modeling, Universidad de Chile and UMI CNRS 2807, Santiago, Chile.
+E-mail: viacuna@dim.uchile.cl
+
+R. Grossi
+Università di Pisa, Pisa, Italy and Erable, INRIA, France.
+E-mail: grossi@di.unipi.it
+
+G. F. Italiano
+LUISS University, Roma, Italy and Erable, INRIA, France.
+E-mail: gitaliano@luiss.it
+
+R. Rizzi
+Università di Verona, Verona, Italy.
+E-mail: Romeo.Rizzi@univr.it
+
+L. Lima
+Erable INRIA Grenoble Rhône-Alpes, Université Lyon 1; CNRS, UMR5558, LBBE, Villeurbanne, France and Università di Roma “Tor Vergata”, Roma, Italy.
+E-mail: leandro.ishi-soares-de-lima@inria.fr
+
+G. Sacomoto · M.-F. Sagot · B. Sinaimeri
+Erable INRIA Grenoble Rhône-Alpes, Université Lyon 1; CNRS, UMR5558, LBBE, Villeurbanne, France.
+E-mail: gustavo.sacomoto@gmail.com marie-france.sagot@inria.fr blerina.sinaimeri@inria.fr
+---PAGE_BREAK---
+
+a graph. A bubble generator can be useful in practice, since some pertinent
+information about all the bubbles can be more conveniently extracted from
+this compact set. We provide a polynomial-time algorithm to decompose any
+bubble of a graph into the bubbles of such a generator in a tree-like fashion.
+Finally, we present two applications of the bubble generator on a real RNA-seq
+dataset.
+
+**Keywords** Bubbles · Bubble generator set · Decomposition algorithm
+
+# 1 Introduction
+
+Bubbles are pairs of internally vertex-disjoint (s, t)-paths in a directed graph, which find many applications in the processing of DNA and RNA data. For example, in the genomic context, genome assemblers usually identify and remove bubbles in order to linearise the graph [17,22,26]. Bubbles can also represent interesting biological events, e.g., allelic differences (SNPs and indels) when processing DNA data [10,24,25], and alternative splicing events in RNA data [19,18,13,20]. Due to their practical relevance, several theoretical studies concerning bubbles were carried out in the past few years [3,5,16,19,23], usually related to bubble-enumeration algorithms.
+
+Although the enumeration of bubbles could be important to describe biological events appearing in the sequences, this approach has a significant disadvantage. Indeed, while many biological events can be represented by bubbles in a de Bruijn graph (the graph built from the reads provided by a sequencing process; see e.g. [20,15,18]), the opposite is not true: most of the bubbles do not correspond to any biological phenomenon and appear just because of a combination of other events [13,18]. In practice, due to the high throughput of second-generation sequencing machines, the genomic and transcriptomic de Bruijn graphs tend to be huge, usually containing from millions to billions of vertices. As expected, the number of bubbles also tends to be huge, in the worst case exponential in the number of vertices. As a consequence, algorithms that deal with bubbles either tend to simplify the graph by removing them, or just enumerate a small subset of the bubbles. Such subsets usually correspond to bubbles with some predefined characteristics, and may not be the best representatives of the biological phenomena under study. More worrying is the fact that, by focusing only on these particular bubbles, all the relevant events described by bubbles that do not satisfy the constraints may be lost. On the other hand, any algorithm that tries to be more exhaustive, say by enumerating a large portion of the bubbles, will certainly spend a prohibitive amount of time on real data graphs and thus is not likely to be practical [13,18]. This motivates further work on finding efficient ways to recognise bubbles that correspond to relevant events and/or to represent the set of bubbles in a more concise way.
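To make the link with sequencing data concrete, here is a minimal sketch (the helper name and the toy reads are ours, not from the paper) of how a single-nucleotide difference between reads produces a bubble in a de Bruijn graph:

```python
from collections import defaultdict

def de_bruijn(reads, k):
    # Nodes are (k-1)-mers; every k-mer occurring in a read contributes a
    # directed edge from its length-(k-1) prefix to its length-(k-1) suffix.
    graph = defaultdict(set)
    for read in reads:
        for i in range(len(read) - k + 1):
            kmer = read[i:i + k]
            graph[kmer[:-1]].add(kmer[1:])
    return graph

# Two reads differing in a single position (an SNP-like difference):
g = de_bruijn(["ACGTA", "ACTTA"], k=3)
# The variant opens two internally vertex-disjoint paths from "AC" to "TA",
# i.e., an ("AC","TA")-bubble: AC -> CG -> GT -> TA and AC -> CT -> TT -> TA.
```

Real transcriptomic graphs are built from millions of reads in the same way, which is why the number of bubbles explodes.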
+
+In this paper, we propose an elementary bubble generator, i.e., a subset of bubbles that is able to generate any other bubble in the graph. More specifically, we show how to identify, for any given directed graph *G*, a generator
+---PAGE_BREAK---
+
+set $\mathcal{G}(G)$ of bubbles which is of polynomial size in the input graph, and such that any bubble in $G$ can be obtained in a polynomial number of steps by properly combining the bubbles in the generator $\mathcal{G}(G)$ through a symmetric difference operator. In several biological applications, it is desirable to decompose a bubble into elementary bubbles in such a way that only bubbles are generated at each step of the decomposition. This happens, for instance, when one wishes to decompose complex alternative splicing events [20] into several elementary alternative splicing events. Our bubble generator enjoys this property: to take this into account, we consider a constrained version of the symmetric difference operator, where two bubbles are combinable only if the output is also a bubble (i.e., the operator is undefined if the output is not a bubble). Moreover, we present an $O(n^3)$-time decomposition algorithm that, given a bubble $B$ in a graph $G$ with $n$ vertices, finds a sequence of bubbles from the generator $\mathcal{G}(G)$ whose combination results in $B$. Our algorithm can be applied whenever one needs to know how to decompose a bubble into its elementary parts.
+
+At first sight, a bubble generator might seem related to a cycle basis, which represents a compact description of all Eulerian subgraphs in a graph. The study of cycle bases started a long time ago [14] and has attracted much attention in the last fifteen years, leading to many interesting results, such as the classification of different types of cycle bases, the generalisation of these notions to weighted and to directed graphs, as well as to several complexity results for constructing bases. We refer the interested reader to the books of Deo [7] and Bollobás [4], and to the survey of Kavitha et al. [11] for an in-depth coverage of cycle bases. Unfortunately, problems related to bubble generators appear to be very different (and more difficult) from their counterparts in cycle bases, so that it does not seem possible to apply directly to bubble generators all the techniques developed for cycle bases. Indeed, a cycle basis in a directed graph contains subgraphs that are *not* necessarily directed cycles in the original graph, but more generally cycles in the underlying undirected graph [12]. As a consequence, the techniques developed for cycle bases in undirected and directed graphs cannot be applied to our problem, since they do not guarantee a decomposition into elementary bubbles, which generates only bubbles at each step.
+
+To test the practical effectiveness of our generator set of bubbles, we applied it in two different directions in the analysis of a real RNA-seq dataset. First, we consider the use of the generator as a preprocessing step to reduce the graph in input for algorithms that find bubbles, by “cleaning” from the graph all unnecessary arcs (i.e. arcs that do not belong to any bubble). Second, we use it to find alternative splicing (henceforth denoted by AS) events in a reference-free context. In particular, some bubbles in our generator set correspond to AS events that are hard to find by the state-of-art algorithm for AS events enumeration [13]. However, this application should still be seen just as a proof-of-concept on the practical potential of the bubble generator or as
+---PAGE_BREAK---
+
+complementary to current methods, since it is still limited for the exhaustive enumeration of AS events. The latter would require a non-trivial procedure to enumerate AS-associated bubbles by combining generator bubbles and would be beyond the scope of this paper (see Section 6).
+
+The remainder of this paper is organised as follows. Section 2 presents some definitions that will be used throughout the paper. Section 3 introduces our bubble generator. Section 4 presents an $O(n^3)$-time algorithm for decomposing any bubble in a graph into elements of our bubble generator. Section 5 presents two applications of the bubble generator in processing and analysing RNA data. Finally, we conclude with open problems in Section 6.
+
+## 2 Preliminaries
+
+Throughout the paper we assume that the reader is familiar with the standard graph terminology, as contained for instance in [6]. A graph is a pair $G = (V, E)$, where $V$ is the set of vertices, and $E \subseteq V \times V$ is the set of edges. For convenience, we may also denote the set of vertices $V$ of $G$ by $V(G)$ and its set of edges $E$ by $E(G)$. We further set $n = |V(G)|$ and $m = |E(G)|$. A graph may be *directed* or *undirected*, depending on whether its edges are directed or undirected. In this paper, we will deal with graphs that are directed, unweighted, finite and without parallel edges or self-loops. An edge $e = (u, v)$ is said to be *incident* to the vertices $u$ and $v$, and $u$ and $v$ are said to be the endpoints of $e = (u, v)$. For a directed graph, edge $e = (u, v)$ is said to be leaving vertex $u$ and entering vertex $v$. Alternatively, $e = (u, v)$ is an outgoing edge for $u$ and an incoming edge for $v$. The *in-degree* of a vertex $v$ is given by the number of edges entering $v$, while the *out-degree* of $v$ is the number of edges leaving $v$. The *degree* of $v$ is the sum of its in-degree and out-degree.
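As a quick illustration of the degree definitions above (the helper is ours, not part of the paper), in- and out-degrees can be computed directly from an edge list:

```python
from collections import Counter

def degrees(edges):
    # In-degree, out-degree and total degree of every vertex of a directed
    # graph given as a list of (u, v) edges, following the definitions above.
    outdeg = Counter(u for u, v in edges)
    indeg = Counter(v for u, v in edges)
    vertices = {x for e in edges for x in e}
    return {v: (indeg[v], outdeg[v], indeg[v] + outdeg[v]) for v in vertices}

# A small example: a directed triangle plus one chord.
example = [("a", "b"), ("b", "c"), ("c", "a"), ("a", "c")]
```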
+
+We say that a graph $G' = (V', E')$ is a *subgraph* of a graph $G = (V, E)$ if $V' \subseteq V$ and $E' \subseteq E$. Given a subset of vertices $V' \subseteq V$, the subgraph of $G$ induced by $V'$, denoted by $G_{V'}$, has $V'$ as vertex set and contains all edges of $G$ that have both endpoints in $V'$. Given a subset of edges $E' \subseteq E$, the subgraph of $G$ induced by $E'$, denoted by $G_{E'}$, has $E'$ as edge set and contains all vertices of $G$ that are endpoints of edges in $E'$. Given a subset of vertices $V' \subseteq V$ and a subset of edges $E' \subseteq E$, we denote by $G \setminus V'$ the graph induced by $V \setminus V'$ and by $G \setminus E'$ the graph induced by $E \setminus E'$. Given a set $S$ of subgraphs of $G$, $G_S$ denotes the graph induced by the edges in $\cup_{s \in S} E(s)$. Given two subgraphs $G$ and $H$, their union $G \cup H$ is the graph $F$ for which $V(F) = V(G) \cup V(H)$ and $E(F) = E(G) \cup E(H)$. Their intersection $G \cap H$ is the graph $F$ for which $V(F) = V(G) \cap V(H)$ and $E(F) = E(G) \cap E(H)$.
+
+Let $s, t$ be any two vertices in $G$. A *(directed) path* from $s$ to $t$ in $G$ is a sequence of vertices and edges $s = v_1, e_1, v_2, e_2, \dots, v_{k-1}, e_{k-1}, v_k = t$, such that $e_i = (v_i, v_{i+1})$ for $i = 1, 2, \dots, k-1$. Since there is no danger of ambiguity, in the remainder of the paper we will also denote a path simply as $s = v_1, v_2, \dots, v_{k-1}, v_k = t$ (i.e., as a sequence of vertices). A path is *simple* if it does not contain repeated vertices, except possibly for the first and the last vertex.
+---PAGE_BREAK---
+
+Fig. 1: An example of a graph $G$ and the set $B(G)$ of all the bubbles in $G$. The set $\mathcal{G}(G) = \{B_1, B_2, B_4\}$ is a generator set that satisfies the conditions of Theorem 1.
+
+Throughout this paper, all the paths considered will be simple, and we refer to them simply as paths. A path from $s$ to $t$ is also referred to as an $(s,t)$-path. The length of a path $p$ is the number of edges in $p$ and will be denoted by $|p|$. Note that, as a special case, we also allow a single vertex to be a path, i.e., a path of length 0. If $p$ and $q$ are paths, we say that $p$ is a subpath of $q$ if $p$ is contained in $q$, and we denote this by $p \subseteq q$. Given a path $p_1$ from $x$ to $y$ and a path $p_2$ from $y$ to $z$, we denote by $p_1 \cdot p_2$ their concatenation, i.e., the path from $x$ to $z$ defined by the path $p_1$ followed by $p_2$. A path $q$ is a prefix of a path $p$ if there exists a path $r$ such that $p = q \cdot r$. Similarly, a path $q$ is a suffix of a path $p$ if there exists a path $r$ such that $p = r \cdot q$. A (directed) cycle is a simple path (of length greater than zero) starting and ending on the same vertex.
+
+**Definition 1** Given a directed graph $G$ and two (not necessarily distinct) vertices $s, t \in V(G)$, an $(s,t)$-bubble consists of two directed $(s,t)$-paths that are internally vertex disjoint. Vertex $s$ is the source and $t$ is the target of the bubble. If for a bubble $B$ it holds that $s = t$ then exactly one of the paths of the bubble has length 0, and therefore $B$ corresponds to a directed cycle. In this case, we say that $B$ is a degenerate bubble.
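Definition 1 can be checked mechanically. The following Python sketch (our own encoding, not from the paper: a leg is a list of vertices) tests whether two paths form an $(s,t)$-bubble, including the degenerate case.

```python
def is_simple_path(p):
    # a simple path repeats no vertex, except that the first may equal the last
    inner = p[1:] if len(p) > 1 and p[0] == p[-1] else p
    return len(set(inner)) == len(inner)

def is_bubble(p, q):
    """Definition 1: p and q form an (s,t)-bubble iff they share source and
    target, are simple, and are internally vertex-disjoint; if s = t, exactly
    one leg must have length 0 (degenerate bubble = directed cycle)."""
    if p[0] != q[0] or p[-1] != q[-1]:
        return False                      # legs must share s and t
    if not (is_simple_path(p) and is_simple_path(q)):
        return False
    if set(p[1:-1]) & set(q[1:-1]):
        return False                      # internally vertex-disjoint
    if p[0] == p[-1]:                     # degenerate case
        return (len(p) == 1) != (len(q) == 1)
    return p != q
```

For instance, `is_bubble(['s'], ['s', 'a', 's'])` recognises a degenerate bubble, i.e. a directed cycle through `s`.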
+
+In Fig. 1 we show an example of a graph and all the bubbles in it. We denote by $B(G)$ the set of all bubbles in $G$. Before formally defining a bubble generator of $G$, we recall some basic definitions of cycle bases in undirected graphs.
+
+Let $G$ be an undirected graph. Two subgraphs $G_1, G_2$ of $G$ can be combined by the operator $\triangle$, which simply consists of the symmetric difference of their sets of edges. More formally, $G_1 \triangle G_2$ is the graph induced by the set of edges $(E(G_1) \cup E(G_2)) \setminus (E(G_1) \cap E(G_2))$. It has been shown that the space of all Eulerian subgraphs of $G$ (called the *cycle space* of $G$) forms a vector space over $\mathrm{GF}(2)$ with the $\triangle$ operation and scalar multiplication $1 \cdot A = A, 0 \cdot B = \emptyset$ for $A, B$ in the cycle space of $G$ [9, 11, 12, 14]. In the theory of vector spaces, a set of
+---PAGE_BREAK---
+
+vectors is said to be *linearly dependent* if one of the vectors in the set can be defined as a linear combination of the others; if no vector in the set can be written in this way, then the vectors are said to be *linearly independent* [21]. A basis is a minimum set of vectors such that any vector in the space is a linear combination of this set. Clearly, a basis is a set of linearly independent vectors. Furthermore, given a vector space and a set $F$ of $k$ linearly independent vectors, the subspace of vectors generated starting from elements in $F$ is called the span of $F$, and its dimension is $k$. It is well-known that the cycle space of a connected undirected graph $G$ has dimension $m - n + 1$, and hence a cycle basis $C(G)$ has cardinality $m - n + 1$. If the graph $G$ is not connected, this is generalised to $m - n + c$, where $c$ is the number of connected components (see, e.g., [9,11,12,14]).
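These two facts can be illustrated with a short Python sketch (our own helper code, assuming vertices labelled $0, \dots, n-1$ and undirected edges as pairs): the symmetric difference on edge sets is $\mathrm{GF}(2)$ addition, and the cycle-space dimension is $m - n + c$.

```python
def sym_diff(E1, E2):
    """G1 triangle G2 on edge sets: edges in exactly one of the two subgraphs,
    i.e. addition in the cycle space over GF(2)."""
    return E1 ^ E2

def cycle_space_dimension(n, edges):
    """Dimension m - n + c of the cycle space of an undirected graph, where c
    is the number of connected components (computed with a simple union-find)."""
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x
    for u, v in edges:
        parent[find(u)] = find(v)
    c = len({find(v) for v in range(n)})
    return len(edges) - n + c
```

A triangle has cycle-space dimension $3 - 3 + 1 = 1$; two disjoint triangles have $6 - 6 + 2 = 2$.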
+
+As mentioned in Section 1, we are interested in decomposing a bubble into
+elementary bubbles in such a way that, at each step of the decomposition,
+only bubbles are generated. To ensure this property, we define next a suitable
+symmetric difference operator which takes as input two bubbles and produces
+one bubble as output. Given two bubbles $B_1$ and $B_2$, the constrained symmetric
+difference operator $\Delta$ is such that $B_1\Delta B_2$ is defined if and only if the subgraph
+induced by $(E(B_1)\cup E(B_2))\setminus(E(B_1)\cap E(B_2))$ is a bubble. Otherwise, we say
+that $B_1\Delta B_2$ is undefined. If $B_1\Delta B_2$ is defined, we also say that $B_1$ and $B_2$
+are combinable. Given two combinable bubbles $B_1$ and $B_2$, we refer to $B_1\Delta B_2$
+as the sum of $B_1$ and $B_2$, and denote it also by $B_1+B_2$. We also say that the
+bubble $B_1+B_2$ is generated from bubbles $B_1$ and $B_2$, or alternatively that
+it can be decomposed into the bubbles $B_1$ and $B_2$. Let $\mathcal{B}$ be a set of bubbles
+in $G$. We say that a bubble $B$ is spanned by $\mathcal{B}$ if it can be generated starting
+from bubbles in $\mathcal{B}$. The set of all the bubbles spanned by $\mathcal{B}$ is called the span
+of $\mathcal{B}$. $\mathcal{B}$ is a bubble generator if each bubble in $G$ is spanned by $\mathcal{B}$, i.e., each
+bubble in $G$ can be generated by starting from the bubbles in $\mathcal{B}$.
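The constrained operator $\Delta$ can be sketched in Python (our own encoding: a bubble as a set of directed edges). The partiality of $\Delta$ is made explicit by returning `None` when the symmetric difference is not a bubble; the bubble test relies on the degree profile of a bubble's edge set plus connectivity.

```python
from collections import defaultdict

def edges_form_bubble(edges):
    """A non-empty edge set induces a bubble iff it is a single directed cycle
    (degenerate case), or it has exactly one vertex with out-degree 2 and
    in-degree 0 (the source), one with in-degree 2 and out-degree 0 (the
    target), all others with in-degree = out-degree = 1, and is connected."""
    if not edges:
        return False
    outd, ind = defaultdict(int), defaultdict(int)
    for u, v in edges:
        outd[u] += 1
        ind[v] += 1
    verts = set(outd) | set(ind)
    mids = [v for v in verts if ind[v] == 1 and outd[v] == 1]
    srcs = [v for v in verts if outd[v] == 2 and ind[v] == 0]
    tgts = [v for v in verts if ind[v] == 2 and outd[v] == 0]
    cycle = len(mids) == len(verts)
    two_paths = len(srcs) == 1 and len(tgts) == 1 and len(mids) == len(verts) - 2
    if not (cycle or two_paths):
        return False
    adj = defaultdict(set)                 # connectivity of the underlying graph
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    seen, stack = set(), [next(iter(verts))]
    while stack:
        x = stack.pop()
        if x not in seen:
            seen.add(x)
            stack.extend(adj[x] - seen)
    return seen == verts

def constrained_sym_diff(E1, E2):
    """B1 Delta B2: defined only when the symmetric difference is a bubble."""
    E = E1 ^ E2
    return E if edges_form_bubble(E) else None
```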
+
+Due to our constrained symmetric difference operator $\Delta$, all subgraphs
+generated by the elements in $\mathcal{B}$ are necessarily bubbles. Since not all pairs of
+bubbles of $G$ are combinable, the bubble space is not closed under $\Delta$, and
+therefore it does not form a vector space (over $\mathbb{Z}_2$). Hence, the techniques
+developed for cycle bases cannot be applied directly to bubble generators.
+
+A generator is *minimal* if it does not contain a proper subset that is also a generator; and a generator is *minimum* if it has the minimum cardinality. We are interested in finding a minimum bubble generator of a given directed graph *G*.
+
+**3 The bubble generator**
+
+In this section, we present a bubble generator for a directed graph $G$. Throughout, we assume that shortest paths in $G$ are unique. This is without loss of generality, since there are many standard techniques for achieving this, including perturbing edge weights by infinitesimals. However, for our goal, it suffices to use a “lexicographic ordering”. Namely, we define an arbitrary ordering $v_1, \dots, v_n$ on the vertices of $G$. A path $p$ is considered lexicographically
+---PAGE_BREAK---
+
+smaller than a path $q$ if the length of $p$ is strictly smaller than the length of $q$, or, if $p$ and $q$ have the same length, the sequence of vertices associated with $p$ is lexicographically smaller than the sequence associated with $q$. We denote this by $p <_{lex} q$.
+
+We denote by $B = (p, q)$ the bubble having $p, q$ as its two internally vertex-disjoint paths, referred to as *legs*. We denote by $\ell(B)$ (resp., by $\mathcal{L}(B)$) the shorter (resp., longer) of the two legs $p, q$ of $B$. Note that, because of the lexicographic order, there are no ties. We also denote by $|B|$ the number of edges of bubble $B$. Note that $|B| = |\ell(B)| + |\mathcal{L}(B)|$. Next, we define a total order on the set of bubbles.
+
+**Definition 2** Let $B_1$ and $B_2$ be any two bubbles. $B_1$ is smaller than $B_2$ (in symbols, $B_1 < B_2$) if one of the following holds: either (i) $\mathcal{L}(B_1) <_{lex} \mathcal{L}(B_2)$; or (ii) $\mathcal{L}(B_1) = \mathcal{L}(B_2)$ and $\ell(B_1) <_{lex} \ell(B_2)$.
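The lexicographic order on paths and the total order of Definition 2 can be sketched as comparison keys in Python (our own encoding: a path is a vertex list, `order` a dictionary realising the arbitrary vertex ordering):

```python
def lex_key(path, order):
    """Key for the paper's lexicographic order on paths: compare lengths
    first, then the vertex sequences under a fixed vertex ordering."""
    return (len(path) - 1, [order[v] for v in path])

def legs(B, order):
    """Return (ell(B), L(B)): the lexicographically shorter and longer leg."""
    p, q = B
    return (p, q) if lex_key(p, order) < lex_key(q, order) else (q, p)

def bubble_smaller(B1, B2, order):
    """Definition 2: B1 < B2 iff L(B1) <lex L(B2), or the longer legs are
    equal and ell(B1) <lex ell(B2)."""
    l1, L1 = legs(B1, order)
    l2, L2 = legs(B2, order)
    return (lex_key(L1, order), lex_key(l1, order)) < \
           (lex_key(L2, order), lex_key(l2, order))
```

Since tuples compare componentwise, Python's built-in tuple order directly realises "length first, then sequence".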
+
+**Definition 3** A bubble $B$ is composed if it can be obtained as a sum of two smaller bubbles. Otherwise, the bubble $B$ is called *simple*.
+
+For a directed graph $G$, we denote by $\mathcal{S}(G)$ the set of simple bubbles of $G$. It is not difficult to see that $\mathcal{S}(G)$ is a generator. However, we are currently unable to prove that any bubble in $G$ can be obtained in a polynomial number of steps from bubbles in $\mathcal{S}(G)$. Nevertheless, to achieve the latter goal, we will next introduce another generator $\mathcal{G}(G) \supseteq \mathcal{S}(G)$. Let $p : s = x_0, x_1, \dots, x_h = t$ be a path from $s$ to $t$ and let $0 \le i \le j \le h$. To ease the notation, we denote by $p_{i,j}$ the subpath of $p$ from $x_i$ to $x_j$, and refer also to $p_{0,j}$ as $p_{s,j}$ and to $p_{i,h}$ as $p_{i,t}$. The next theorem provides some properties of simple bubbles.
+
+**Theorem 1** Let $B$ be a simple $(s,t)$-bubble in a directed graph $G$. The following holds:
+
+(1) $\ell(B)$ is the shortest path from $s$ to $t$ in $G$;
+
+(2) Let $\mathcal{L}(B) = s, v_1, \dots, v_r, t$. Then $s, v_1, \dots, v_r$ is the shortest path from $s$ to $v_r$ in $G$.
+
+*Proof* Let $B$ be a simple $(s,t)$-bubble: we show that both conditions (1) and (2) must hold.
+
+We first consider condition (1). If $B$ is degenerate, then it trivially satisfies condition (1). Therefore, assume that $B$ is non-degenerate and, for contradiction, that $\ell(B)$ is not the shortest path from $s$ to $t$. Let $p^* : s = x_0, x_1, \dots, x_h = t$ be the shortest path from $s$ to $t$ in $G$. For $0 \le i \le j \le h$, by subpath optimality, $p_{i,j}^*$ is the shortest path from $x_i$ to $x_j$. Let $k$ be the smallest index, $0 \le k < h$, for which the edge $(x_k, x_{k+1})$ does not belong to either one of the legs of $B$. Such an index $k$ must exist, as otherwise $p^*$ would coincide with a leg of $B$. Furthermore, let $l$, $k < l \le h$, be the smallest index greater than $k$ for which $x_l \in V(B)$. Such a vertex $x_l$ must also exist, since $x_h = t \in V(B)$. In other words, $x_k$ is the first vertex of the bubble $B$ where $p^*$ departs from $B$ and $x_l$, $l > k$, is the first vertex where the shortest path $p^*$ intersects again the bubble $B$. By definition of $x_k$ and $x_l$, $p_{k,l}^*$ is internally vertex-disjoint with
+---PAGE_BREAK---
+
+Fig. 2: Case (1) of the proof of Theorem 1. The prefix of the shortest path from $s$ to $t$ is shown as a solid line.
+
+both legs of B. We now claim that B can be obtained as the sum of two smaller
+bubbles, thus contradicting our assumption that B is a simple bubble.
+
+To prove the claim, we distinguish two cases, depending on whether $x_k$ and $x_l$ are on the same leg of B or not. Consider first the case when $x_k$ and $x_l$ are on the same leg $p$ of B (see Fig. 2(a)). Let $B_1$ be the bubble with $\ell(B_1) = p_{k,l}^*$ and $\mathcal{L}(B_1) = p_{k,l}$. First, note that if either $x_k \neq s$ or $x_l \neq t$, then $p_{k,l}$ is a proper subpath of a leg of B. Hence, $|\mathcal{L}(B_1)| = |p_{k,l}| < |\mathcal{L}(B)|$, and $B_1 < B$. Otherwise, suppose $s = x_k$ and $t = x_l$. Then either $\mathcal{L}(B_1) = \ell(B) <_{lex} \mathcal{L}(B)$, or $\mathcal{L}(B_1) = \mathcal{L}(B)$ and $\ell(B_1) = p_{k,l}^* = p^* <_{lex} \ell(B)$. In both cases, $B_1 < B$. Let $B_2$ be the bubble which is obtained from B by replacing $p_{k,l}$ by $p_{k,l}^*$ (see Fig. 2(a)). Since $p_{k,l}^*$ is a shortest path, by subpath optimality, $p_{k,l}^* <_{lex} p_{k,l}$, thus $B_2 < B$. As a result, B can be obtained as the sum of two smaller bubbles $B_1, B_2$, thus contradicting the assumption that B is simple.
+
+Consider now the case where $x_k$ and $x_l$ are on different legs of B (see Fig. 2(b)). Notice that this means $x_k \neq s$ and $x_l \neq t$. Let $p$ be the leg containing $x_l$ and $q$ the one containing $x_k$. Note that $p = p_{0,l} \cdot p_{l,h}$ and $q = p_{0,k}^* \cdot q_{k,h}$. Moreover, the two legs of bubble $B_1$ are $p_{0,k}^* \cdot p_{k,l}^* <_{lex} q$ and $p_{0,l}$, which is a proper subpath of $p$. Hence, $B_1 < B$. The two legs of bubble $B_2$ are $q_{k,h}$ which is a proper subpath of $q$ and $p_{k,l}^* \cdot p_{l,h} <_{lex} p$. Hence, $B_2 < B$, and $B = B_1 + B_2$ which implies again that B is not simple.
+
+We show now that B satisfies also condition (2). Assume, for contradiction, that B satisfies condition (1) but not (2), and so $p = s, v_1, \dots, v_r$ (note that $p$ is equal to $\mathcal{L}(B)$ without its last edge) is not the shortest path from $s$ to $v_r$ in G. Let $p^*: s = x_0, \dots, x_{h-1} = v_r, p^* \neq p$, be such a shortest path in G. Similarly to the previous case, let $k$ be the smallest index, $0 \le k < h-1$, for which the edge $(x_k, x_{k+1})$ does not belong to either one of the legs of B, i.e. $x_k$ is the first vertex where the shortest path $p^*$ departs from B. Such an index $k$ must exist, as otherwise $p^*$ would be contained in a leg of B. Let $l, k < l \le h-1$, be the smallest index such that $x_l \in V(B)$. Namely, $x_l$ is the first vertex after $x_k$ where the shortest path $p^*$ intersects again bubble B. Such a vertex $x_l$ must always exist, since $x_{h-1} = v_r \in V(B)$. Since $k < l$, we have that $|p_{k,l}^*| \ge 1$. Furthermore, we claim that $x_l$ must be in $\mathcal{L}(B) \setminus \{s,t\}$. If this were not the case, i.e. $x_l \in \ell(B)$, using the assumption that B satisfies condition (1) and hence $\ell(B)$ is a shortest path, we would have two distinct
+---PAGE_BREAK---
+
+Fig. 3: Case (2) of the proof of Theorem 1. The shortest path from $s$ to $t$ and the prefix of the shortest path from $s$ to $v_r$ are shown as solid lines.
+
+shortest paths from $s$ to $x_l$ in $G$ ($p^*_{0,l}$ and the subpath of $\ell(B)$ from $s = x_0$ to $x_l$), which contradicts our assumption that shortest paths are unique.
+
+As $x_l \in \mathcal{L}(B)$ and $x_k \in V(B)$, we need to distinguish two cases: when both $x_k, x_l$ belong to $\mathcal{L}(B)$, and when $x_k \in \ell(B)$ and $x_l \in \mathcal{L}(B)$. We set $p = \mathcal{L}(B)$, $q = \ell(B)$.
+
+In the first case (see Fig. 3(a)), let $B_1$ be the bubble such that: (a) one leg coincides with $\ell(B)$; since $\ell(B)$ is the shortest path from $s$ to $t$ and shortest paths are unique, this is necessarily the shorter leg of $B_1$, hence $\ell(B_1) = \ell(B)$; and (b) the other leg is $\mathcal{L}(B_1) = p^*_{0,k} \cdot p^*_{k,l} \cdot p_{l,h}$. Since $p^*_{k,l} <_{lex} p_{k,l}$, we have $\mathcal{L}(B_1) <_{lex} \mathcal{L}(B)$, and thus $B_1 < B$. Let $B_2$ be the bubble with $\ell(B_2) = p^*_{k,l}$ and $\mathcal{L}(B_2) = p_{k,l}$. Since $\mathcal{L}(B_2) \subset \mathcal{L}(B)$ (as $x_l \neq t$), $B_2 < B$. As a result, $B$ can be obtained as the sum of two smaller bubbles $B_1, B_2$, thus contradicting the assumption that $B$ is simple.
+
+In the second case (see Fig. 3(b)), let $B_1$ be the bubble with $\ell(B_1) = p^*_{0,k} \cdot p^*_{k,l}$ (notice that $\ell(B_1)$ is indeed the shorter leg of $B_1$, being a prefix of the unique shortest path $p^*$) and $\mathcal{L}(B_1) = p_{0,l}$. Since $\mathcal{L}(B_1) \subset \mathcal{L}(B)$, $B_1 < B$. Let $B_2$ be the bubble with $\ell(B_2) = q_{k,h}$ and $\mathcal{L}(B_2) = p^*_{k,l} \cdot p_{l,h}$. As $\mathcal{L}(B) = p_{0,l} \cdot p_{l,h}$ and $p^*_{0,k} \cdot p^*_{k,l}$ is strictly shorter than $p_{0,l}$ (by the uniqueness of shortest paths), we have $\mathcal{L}(B_2) <_{lex} \mathcal{L}(B)$, and thus $B_2 < B$. Again, $B$ can be obtained as the sum of two smaller bubbles $B_1, B_2$, thus contradicting the assumption that $B$ is simple. Finally, notice that this includes also the case $x_k = t$, and the argument holds identically with $B_2$ being a degenerate bubble. For the sake of clarity, we depicted this case separately in Fig. 3($b_1$). ■
+
+Given a directed graph $G$, we denote by $\mathcal{G}(G)$ the set of bubbles in $G$
+satisfying conditions (1) and (2) of Theorem 1. An example of a graph together
+with a generator $\mathcal{G}(G)$ is given in Fig. 1.
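Membership in $\mathcal{G}(G)$ can be tested against BFS distances. A minimal Python sketch, assuming unique shortest paths (so that comparing path lengths with distances suffices) and an adjacency-list digraph:

```python
from collections import deque

def bfs_dist(adj, s):
    """Single-source shortest-path lengths in an unweighted digraph."""
    dist = {s: 0}
    q = deque([s])
    while q:
        u = q.popleft()
        for v in adj.get(u, ()):
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def in_generator(adj, ell, L):
    """Theorem 1 conditions for a bubble with shorter leg ell and longer leg L:
    (1) ell is a shortest (s,t)-path, and (2) L minus its last edge is a
    shortest (s, v_r)-path."""
    s, t = L[0], L[-1]
    d = bfs_dist(adj, s)
    return len(ell) - 1 == d.get(t) and len(L) - 2 == d.get(L[-2])
```

For example, with a shortcut edge from $s$ to the last internal vertex of $\mathcal{L}(B)$, condition (2) fails and the bubble drops out of $\mathcal{G}(G)$.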
+
+**Theorem 2** Let G be a directed graph. The following holds:
+---PAGE_BREAK---
+
+(1) $\mathcal{G}(G)$ is a generator set for all the bubbles of $G$;
+
+(2) $|\mathcal{G}(G)| \leq nm.$
+
+*Proof* (1) Recall that $\mathcal{S}(G)$ is the set of simple bubbles. By Theorem 1, $\mathcal{S}(G) \subseteq \mathcal{G}(G)$, and thus $\mathcal{G}(G)$ is a generator set for all the bubbles of $G$.
+
+(2) Since every bubble $b$ in $\mathcal{G}(G)$, with $\ell(b) = s, u_1, \dots, t$ and $\mathcal{L}(b) = s, v_1, \dots, v_r, t$, can be uniquely identified by its source vertex $s$ and its edge $(v_r, t)$, the number of bubbles in $\mathcal{G}(G)$ is upper-bounded by $nm$. ■
+
+The upper bound given in Theorem 2 is asymptotically tight, as shown by the family of simple directed graphs on vertex set $V_n = \{1, 2, \dots, n\}$ with all possible $n(n-1)$ edges in their edge set $E_n = \{(u,v) : u \neq v,\ u, v \in V_n\}$.
+
+**Remark 1** Conditions (1) and (2) of Theorem 1 are not sufficient to guarantee that a bubble is simple, e.g., see Fig. 4. Thus, the generator $\mathcal{G}(G)$ is not necessarily minimal. Recall that a generator is *minimal* if it does not contain a proper subset that is also a generator; and a generator is *minimum* if it has the minimum cardinality.
+
+Fig. 4: An example showing that conditions (1) and (2) of Theorem 1 are not sufficient to guarantee that a bubble is simple. (a) A directed graph $G$. (b) The three bubbles $B_1, B_2$ and $B_3$ of $G$ satisfying conditions (1) and (2) of Theorem 1, in which $B_1$ and $B_2$ are simple, but $B_3$ is composed, since $B_1 < B_3$, $B_2 < B_3$ and $B_3 = B_1 + B_2$.
+
+**4 A polynomial-time algorithm for decomposing bubbles**
+
+The main result of this section is a polynomial-time algorithm for decomposing any bubble $B$ of $G$ into bubbles of $\mathcal{G}(G)$. To do so, we make use of a tree-like decomposition. Intuitively, a bubble $B$ has a tree-like decomposition if $B$ can be decomposed following a rooted tree structure where each node
+---PAGE_BREAK---
+
+corresponds to a bubble; in particular, the root and the leaves correspond to $B$ and bubbles in the generator, respectively. Moreover, each bubble in an internal node is obtained by the sum of its children. We need to take extra care in this decomposition since a naive approach could generate (several times) all the bubbles that are smaller than $B$, yielding an exponential number of steps.
+
+**Definition 4** A bubble $B$ is *short* if it satisfies condition (1) of Theorem 1, but not necessarily condition (2). Namely, with $\mathcal{L}(B) = s, v_1, \dots, v_r, t$, the leg $\ell(B)$ is a shortest path from $s$ to $t$ in $G$, but $s, v_1, \dots, v_r$ is not necessarily the shortest path from $s$ to $v_r$ in $G$.
+
+We next introduce a measure for describing how “close” a bubble is to being short.
+
+**Definition 5** Given an $(s,t)$-bubble $B$, let $p^*$ be the shortest path from $s$ to $t$. We say that $B$ is *k-short*, for $k \ge 0$, if there is a leg $p \in \{\ell(B), \mathcal{L}(B)\}$ for which $p^*$ and $p$ share a prefix of exactly $k$ edges.
+
+Since in our case shortest paths are unique, only one leg of a bubble $B$ can share a prefix with the shortest path $p^*$. Furthermore, any bubble $B$ is $k$-short for some $k$, $0 \le k \le |\ell(B)|$. In particular, a bubble is short if and only if it is $k$-short for $k = |\ell(B)|$.
+
+**Definition 6** Given a $k$-short bubble $B$, we define the *short residual* of $B$ as follows: $\text{residual}_s(B) = |B| - k$.
+
+Since $0 \le k \le |\ell(B)|$, and $|B| = |\ell(B)| + |\mathcal{L}(B)|$, we have that $|\mathcal{L}(B)| \le \text{residual}_s(B) \le |B|$.
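Definitions 5 and 6 amount to a prefix comparison; a small Python sketch (our own encoding of legs and $p^*$ as vertex lists) makes them concrete:

```python
def shared_prefix_edges(p, q):
    """Number of edges in the longest common prefix of two paths that
    start at the same vertex."""
    k = 0
    while k + 1 < min(len(p), len(q)) and p[k + 1] == q[k + 1]:
        k += 1
    return k

def residual_s(ell, L, p_star):
    """Definitions 5 and 6: B is k-short where k is the number of prefix
    edges one of its legs shares with the shortest (s,t)-path p_star, and
    residual_s(B) = |B| - k, with |B| counted in edges."""
    k = max(shared_prefix_edges(ell, p_star), shared_prefix_edges(L, p_star))
    return (len(ell) - 1) + (len(L) - 1) - k
```

When $\ell(B) = p^*$ the bubble is short and the residual collapses to $|\mathcal{L}(B)|$, matching the stated lower bound.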
+
+We now present our polynomial time algorithm for decomposing a bubble of the graph $G$ into bubbles of $\mathcal{G}(G)$. In the following, we assume that we have done a preprocessing step to compute all-pairs shortest paths in $G$ in $O(mn + n^2 \log n)$ time.
+
+**Lemma 1** Let $B$ be an $(s,t)$-bubble that is not short. Then, $B$ can be decomposed into two bubbles $B_1$ and $B_2$ ($B = B_1 + B_2$), such that: (a) $B_1$ is short, and (b) $\text{residual}_s(B_2) < \text{residual}_s(B)$. Moreover, $B_1$ and $B_2$ can be found in $O(n)$ time.
+
+*Proof* Let $B$ be a $k$-short $(s,t)$-bubble, $0 \le k < |\ell(B)|$, and let $p^* : s = x_0, x_1, \dots, x_h = t$ be the shortest path from $s$ to $t$ in $G$. To prove (a), we follow a similar approach to Theorem 1. Since $B$ is $k$-short, there is a leg $p \in \{\ell(B), \mathcal{L}(B)\}$ such that $p^*$ and $p$ share a prefix of exactly $k$ edges, $0 \le k < h$. In other terms, leg $p$ starts with the edges $(x_0, x_1), \dots, (x_{k-1}, x_k)$, while the edge $(x_k, x_{k+1})$ is not in leg $p$, i.e., $x_k$ is the first vertex where the shortest path $p^*$ departs from the leg $p$. Note that, as a special case, we may have $k=0$, in which case $x_k = x_0 = s$. Let $l$, $k < l \le h$, be the smallest index such that $x_l \in V(B)$. Namely, $x_l$ is the first vertex after $x_k$ where the shortest path $p^*$ intersects the bubble $B$ again. Such a vertex $x_l$ must always exist, since $x_h = t \in V(B)$. Since $k < l$, we have
+---PAGE_BREAK---
+
+that $|p_{k,l}^*| \ge 1$. We have two possible cases: either the vertices $x_k$ and $x_l$ are on the same leg of B (see Fig. 2(a)) or $x_k$ and $x_l$ are on different legs of B (see Fig. 2(b)). In either case, we can decompose B as $B = B_1 + B_2$, as illustrated in Fig. 2. Note that in both cases, the bubble $B_1$ is short since one leg of $B_1$ is a subpath of the shortest path $p^*$, and hence a shortest path itself by subpath optimality.
+
+Consider now $B_2$ in Fig. 2. To prove (b), we distinguish among the following three cases: (1) $x_k \neq s$ and vertices $x_k$ and $x_l$ are on the same leg of $B$; (2) $x_k \neq s$ and vertices $x_k$ and $x_l$ are on different legs of $B$; (3) $x_k = s$. First, consider case (1) (see Fig. 2(a)) and note that $\text{residual}_s(B) = |p_{k,l}| + |p_{l,h}| + |q_{0,h}|$ where $q$ is the other leg of $B$ different from $p$. Moreover, $\text{residual}_s(B_2) = |p_{l,h}| + |q_{0,h}|$. Hence, $\text{residual}_s(B) - \text{residual}_s(B_2) = |p_{k,l}| \ge |p_{k,l}^*| \ge 1$. Consider now case (2), (see Fig. 2(b)) and note that $\text{residual}_s(B) = |p_{0,l}| + |p_{l,h}| + |q_{k,h}|$ and $\text{residual}_s(B_2) = |p_{l,h}| + |q_{k,h}|$, and thus $\text{residual}_s(B) - \text{residual}_s(B_2) = |p_{0,l}| \ge |p_{0,k}^*| + |p_{k,l}^*| \ge 1$. The proof of case (3) is completely analogous to the one of case (1), with $x_k = s$ and $p_{0,k}^* = \emptyset$, and again $\text{residual}_s(B) - \text{residual}_s(B_2) = |p_{k,l}| \ge |p_{k,l}^*| \ge 1$. In all cases, $\text{residual}_s(B) - \text{residual}_s(B_2) > 0$, and thus the claim follows. Finally, note that in order to compute $B_1$ and $B_2$ from $B$, it is sufficient to trace the shortest path $p^*$. Since all shortest paths are pre-computed in a preprocessing step, this can be done in $O(n)$ time. ■
+
+**Lemma 2** Any bubble B can be represented as a sum of $O(n)$ (not necessarily distinct) short bubbles. This decomposition can be found in $O(n^2)$ time in the worst case.
+
+*Proof* Each time we apply Lemma 1 to a bubble B, we produce in $O(n)$ time a short bubble $B_1$ and a bubble $B_2$ such that $\text{residual}_s(B_2) < \text{residual}_s(B)$. Since $\text{residual}_s(B) \le |B| \le n$, the lemma follows. ■
+
+We next show how to further decompose short bubbles. Before doing that, we define the notion of *residual* for short bubbles, which measures how “close” a short bubble is to being a bubble of our generator set $\mathcal{G}(G)$.
+
+**Definition 7** Let $B$ be a short $(s,t)$-bubble, let $\ell(B) = p^*$ be the shortest path from $s$ to $t$ in $G$, and let $\mathcal{L}(B) = s, v_1, \dots, v_r, t$ be the other leg of $B$. Let $p$ be the longest prefix of $\mathcal{L}(B) - (v_r, t)$ such that $p$ is a shortest path in $G$. Then, the *residual* of $B$ is defined as $\text{residual}(B) = |\mathcal{L}(B)| - 1 - |p|$.
+
+Since $p$ is a prefix of $\mathcal{L}(B) - (v_r, t)$, we have that $0 \le |p| \le |\mathcal{L}(B)| - 1$. Thus, $0 \le \text{residual}(B) \le |\mathcal{L}(B)| - 1$.
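Definition 7 can be computed directly from single-source distances. A Python sketch, assuming unique shortest paths (so a prefix of $j$ edges is a shortest path iff its endpoint is at distance $j$ from $s$) and a precomputed distance map:

```python
def residual(L, dist_from_s):
    """Definition 7: residual(B) = |L(B)| - 1 - |p|, where p is the longest
    prefix of L(B) minus its last edge that is a shortest path from s.
    dist_from_s maps vertices to their shortest-path distance from s."""
    prefix = 0
    for j in range(1, len(L) - 1):        # internal vertices of L(B) - (v_r, t)
        if dist_from_s.get(L[j]) == j:    # prefix of j edges is shortest
            prefix = j
        else:
            break                          # prefixes of shortest paths are shortest
    return (len(L) - 2) - prefix
```

A residual of 0 means $\mathcal{L}(B)$ minus its last edge is itself a shortest path, i.e. condition (2) of Theorem 1 holds.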
+
+**Lemma 3** Let $B$ be a short $(s,t)$-bubble such that $\text{residual}(B) > 0$. $B$ can be decomposed into two bubbles $B_1$ and $B_2$ ($B = B_1 + B_2$) such that $B_1$ and $B_2$ are short and $\text{residual}(B_1) + \text{residual}(B_2) < \text{residual}(B)$. Moreover, it is possible to find the bubbles $B_1$ and $B_2$ in $O(n)$ time.
+---PAGE_BREAK---
+
+*Proof* Since $B$ is a short $(s,t)$-bubble, it satisfies condition (1) of Theorem 1. Furthermore, as $\text{residual}(B) > 0$, it does not satisfy condition (2). Therefore, there exist two bubbles $B_1 < B$ and $B_2 < B$ such that $B = B_1 + B_2$ (from Theorem 1). Since $\ell(B)$ is the shortest path from $s$ to $t$, using arguments similar to the ones in Theorem 1, it can be shown that $B$ can be decomposed into $B_1$ and $B_2$, and the only possible cases are the ones depicted in Fig. 3. Note that in all three cases of Fig. 3, each of the bubbles $B_1$ and $B_2$ has one leg that is a shortest path. Thus, in all three cases, $B_1$ and $B_2$ are short. Moreover, in Fig. 3(a), $\text{residual}(B_1) \le |p_{l,h}| - 1$ and $\text{residual}(B_2) \le |p_{k,l}| - 1$. Therefore, $\text{residual}(B_1) + \text{residual}(B_2) \le |p_{l,h}| - 1 + |p_{k,l}| - 1 = \text{residual}(B) - 1 < \text{residual}(B)$. Similarly, in Fig. 3(b) and ($b_1$), $\text{residual}(B_1) \le |p_{0,l}| - 1$ and $\text{residual}(B_2) \le |p_{l,h}| - 1$, and thus $\text{residual}(B_1) + \text{residual}(B_2) \le |p_{0,l}| - 1 + |p_{l,h}| - 1 = \text{residual}(B) - 1 < \text{residual}(B)$. In all three cases, $B_1$ and $B_2$ are short and $\text{residual}(B_1) + \text{residual}(B_2) < \text{residual}(B)$. The claim thus follows.
+
+Once again, observe that in order to compute $B_1$ and $B_2$ from $B$, it is sufficient to trace the shortest path $p^*$. Since all shortest paths are pre-computed in a preprocessing step, this can be done in $O(n)$ time. ■
+
+**Lemma 4** Any short bubble $B$ has a tree-like decomposition into $O(n)$ (not necessarily distinct) bubbles from the generator $\mathcal{G}(G)$. This decomposition can be found in $O(n^2)$ time in the worst case.
+
+*Proof* Each time we apply Lemma 3 to a short bubble $B$, we produce in $O(n)$ time two short bubbles $B_1$ and $B_2$ such that $\text{residual}(B_1) + \text{residual}(B_2) < \text{residual}(B)$. Since $|\ell(B)| + \text{residual}(B) \le n$, this implies that a short bubble can be decomposed into $O(n)$ bubbles from the generator set $\mathcal{G}(G)$ in $O(n^2)$ time. ■
+
+**Theorem 3** Given a graph $G$, any bubble $B$ in $G$ can be represented as a sum of $O(n^2)$ bubbles that belong to $\mathcal{G}(G)$. This decomposition can be found in a total of $O(n^3)$ time.
+
+*Proof* The theorem follows from Lemma 2 and Lemma 4. ■
+
+**5 Applications of the bubble generator in analysing RNA-seq data**
+
+In this section, we describe, as a proof of concept, two applications of the bubble generator to the analysis of RNA-seq data.
+
+Our test dataset is a subset (coming from the same chromosome) of the 58 million RNA-seq Illumina paired-end reads extracted from mouse brain tissue (available in the ENA repository under study PRJEB25574). The length of the reads is 151bp. We mapped all reads to the *Mus Musculus* reference genome and annotations (Ensembl release 94) using STAR [8]. We then selected only the reads mapping to chromosome 10 of the genome, comprising 4,932,572 reads, as our test dataset. We built the de Bruijn graph from these reads and applied standard sequencing-error-removal procedures,
+
+---PAGE_BREAK---
+
+by using KisSplice [13,18], a method to find alternative splicing events in a reference-free context by enumerating bubbles in a de Bruijn graph. Finally, we extracted the bubble generator from the resulting graph, and evaluated it on two aspects: (i) how well it can preprocess the de Bruijn graph to reduce the work required by a subsequent bubble enumeration algorithm, and (ii) how it performs in terms of finding alternative splicing events. These applications are detailed in the following subsections.
+
+5.1 Preprocessing the de Bruijn graph
+
+Similarly to the practical application of a cycle basis, the bubble generator can be used as a preprocessing step in all algorithms that find bubbles, by "cleaning" from the graph all unnecessary edges and vertices, i.e. those that do not belong to any bubble. In KisSplice [13,18], this cleaning is based on a biconnected component (BCC) decomposition. A biconnected undirected graph $G$ is a connected graph such that, for any $v \in V(G)$, $G-v$ is connected. Biconnected components (BCCs) are the maximal biconnected subgraphs of a graph $G$. Given a directed graph, consider its underlying undirected version, obtained by ignoring the direction of its edges. Clearly, a bubble in the directed graph corresponds to a cycle in the underlying graph, and every edge that belongs to a cycle also belongs to a BCC of the graph. The graph can then be cleaned by removing every vertex or edge that does not belong to a BCC. This cleaning partitions a potentially massive graph into smaller subgraphs, which are then processed by a bubble enumeration algorithm (e.g. [13,18]). However, the BCC-decomposition-based cleaning is not perfect: some vertices and edges might belong only to undirected cycles and not to bubbles.
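A cycle-based cleaning of this kind can be sketched as follows: an undirected edge lies on a cycle iff it is not a bridge, i.e. iff its endpoints stay connected after removing that one edge. The Python sketch below (our own quadratic illustration; Tarjan's bridge-finding algorithm achieves the same in linear time, which is what a real implementation would use) keeps only directed edges whose undirected version lies on a cycle:

```python
from collections import defaultdict

def clean_non_cycle_edges(edges):
    """Keep a directed edge only if its undirected version lies on a cycle,
    i.e. its endpoints remain connected after removing that single edge
    (edges are indexed, so a 2-cycle u->v, v->u is correctly kept)."""
    def connected(u, v, skip_idx):
        adj = defaultdict(set)
        for i, (a, b) in enumerate(edges):
            if i != skip_idx:
                adj[a].add(b)
                adj[b].add(a)
        seen, stack = set(), [u]
        while stack:
            x = stack.pop()
            if x == v:
                return True
            if x not in seen:
                seen.add(x)
                stack.extend(adj[x] - seen)
        return False
    return [e for i, e in enumerate(edges) if connected(e[0], e[1], i)]
```

On a small graph consisting of one bubble plus a dangling edge, the dangling edge is removed and the bubble survives intact.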
+
+To improve over this, we perform a more refined cleaning: we compute a bubble generator $\mathcal{G}(G)$ of the directed graph $G$ and remove every edge and vertex that does not belong to any bubble in $\mathcal{G}(G)$. Notice that this is a perfect cleaning, in the sense that after applying it, every remaining edge of the graph belongs to some bubble.
+
+We evaluated this cleaning procedure on the de Bruijn graph constructed from our test dataset. We first applied the BCC-decomposition-based cleaning on this de Bruijn graph. Then, to the resulting graph, which is irreducible by this cleaning, we applied a second cleaning procedure using the bubble generator. The bubble generator cleaning led to a reduction of 40.1% in the number of vertices and of 39.8% in the number of edges. This shows that the generator can indeed yield a better procedure for cleaning the graph, although computing the generator requires more time than computing the BCCs (recall that the BCCs can be computed in linear time). In other words, as expected, a better cleaning comes at the expense of a higher computing time.
+---PAGE_BREAK---
+
+5.2 Calling alternative splicing events
+
+As a second application, we consider the problem of finding AS events in a reference-free context. As already mentioned in the introduction, this is a challenging problem in bioinformatics. Indeed, local assemblers such as KisSplice [13] face a dramatically large (and often practically unfeasible) running time due to the exponentially large number of bubbles present, most of which are not interesting as they are not related to AS events. In particular, a significantly large number of bubbles is due to artifacts of the de Bruijn graph created by repeats longer than the reads (i.e., artificial bubbles not associated with biological events). Hence, in order not to get "lost" listing false positives, KisSplice relies on heuristics that try to avoid listing bubbles that traverse a repeat-induced subgraph. More specifically, based on the idea that subgraphs of the de Bruijn graph related to repeats have many branching vertices (i.e. vertices with in-degree or out-degree at least 2), KisSplice enumerates only bubbles whose number of branching vertices is below some threshold $b$. This constraint significantly improved the scalability of KisSplice, at the cost of losing the AS events that correspond to bubbles with more than $b$ branching vertices.
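The branching-vertex count used by this heuristic is a simple degree statistic; a minimal Python sketch (our own encoding of a bubble as its set of directed edges):

```python
from collections import Counter

def branching_vertices(edges):
    """Vertices with in-degree >= 2 or out-degree >= 2: the measure used to
    bound which bubbles are enumerated."""
    outd = Counter(u for u, v in edges)
    ind = Counter(v for u, v in edges)
    return {v for v in set(outd) | set(ind) if outd[v] >= 2 or ind[v] >= 2}
```

For a plain $(s,t)$-bubble only the source and target are branching; bubbles crossing repeat-induced subgraphs accumulate many more.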
+
+The question we tackle in this section is how many AS events we are able to find just by looking at the bubbles in the generator set. Notice that the bubble generator can generate all the bubbles in the graph, thus a first idea is to focus on a subset of it in order to filter out bubbles that are not real AS events. To this purpose, given our dataset, we consider the set of bubbles belonging to the generator and the set of bubbles generated by KISSPLICE (KISSPLICE being run with default parameters, with a maximum number of branching vertices set to 5). In both cases some simple filters are applied to discard bubbles that probably do not correspond to AS events (e.g., the shorter leg of AS events usually has a length between 2k − 8 and 2k − 2, with k being the size of the k-mer in the de Bruijn graph [13, 18]). Computing the bubble generator took 716 seconds, while KISSPLICE took 129 seconds. We obtained, as putative AS events, 1403 bubbles for the generator set and 1293 bubbles for KISSPLICE. In order to assess the precision of our method, we mapped the bubbles output by both methods to the *Mus musculus* reference genome and annotations (Ensembl release 94) using STAR [8], and the results were then analysed by KISSPLICE2REFGENOME [2]. KISSPLICE2REFGENOME provides, for each bubble, the gene name, the AS event type (exon skipping, alternative acceptor/donor splice site, intron retention, etc.), the genomic coordinates and the list of splice sites used (novel or annotated). We retrieved only those bubbles that corresponded to AS events.
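The leg-length filter mentioned above can be sketched in a few lines; the function name and the input representation (a pair of leg sequences) are our assumptions, not part of KISSPLICE's interface:

```python
def is_putative_as_event(bubble_legs, k):
    """Keep a bubble as a putative AS event if its shorter leg has a
    length between 2k - 8 and 2k - 2, where k is the k-mer size of the
    de Bruijn graph (the heuristic range quoted in the text). The real
    pipelines apply further filters that are omitted here."""
    shorter = min(len(leg) for leg in bubble_legs)
    return 2 * k - 8 <= shorter <= 2 * k - 2
```

For k = 41, for instance, the shorter leg must be between 74 and 80 nucleotides long.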
+
+Among the generator bubbles classified as putative AS events, 1085 bubbles correspond to true AS events, according to KISSPLICE2REFGENOME, yielding a precision (AS events / putative AS events) of 77.3%. Note that the precision of KISSPLICE is 90.3% for this dataset. However, what is interesting to see is that 18.5% of the putative AS events from our bubble generator will never be found by KISSPLICE using the default parameters, as they have more
+---PAGE_BREAK---
+
+than 5 branching vertices. Moreover, 10% of these bubbles correspond to true AS events that are missed by KISSPLICE. Increasing the maximum number of allowed branching vertices increases the running time of KISSPLICE's algorithm exponentially, so a large threshold $b$ is in practice infeasible. Since the generator contains bubbles corresponding to putative AS events with more than 20 branching vertices, these events will be missed by KISSPLICE.
+
+This analysis shows the practical interest of the bubble generator. Even this simple application led to results that were comparable with, and sometimes complementary to, those of the state-of-the-art algorithm KISSPLICE.
+
+## 6 Conclusions and open problems
+
+Bubbles in de Bruijn graphs represent interesting biological events, like alternative splicing and allelic differences (SNPs and indels). However, the set of all bubbles in a de Bruijn graph built from real data is usually too large to be efficiently enumerated and analysed. To tackle this issue, in this paper we have proposed a bubble generator, which is a polynomial-sized subset of the bubble space that can be used to generate all and only the bubbles in a directed graph. In particular, we have presented efficient algorithms to identify, for any given directed graph $G$, a generator set of bubbles $\mathcal{G}(G)$, and to decompose any bubble $B$ in $G$ into bubbles from $\mathcal{G}(G)$. Concerning the applications of the bubble generator, we showed its usefulness in analysing RNA data. In particular, we indicated that our bubble generator can be used in addition to KISSPLICE to find AS events corresponding to bubbles with a high branching number.
+
+Our work raises several open theoretical questions. First, our generator $\mathcal{G}(G)$ is not necessarily minimal, i.e. it might happen that there exist three bubbles $B_1, B_2, B_3 \in \mathcal{G}(G)$ such that $B_1 < B_3$, $B_2 < B_3$, and $B_3 = B_1 + B_2$. Is it possible to find in polynomial time a generator $\mathcal{G}'(G)$ that is minimal? Second, it seems natural to ask whether all minimal generators for bubbles in directed graphs have the same cardinality. Third, it would be interesting to find a generator $\mathcal{G}(G)$ with some additional biologically motivated constraints, as for example the maximum length of the legs of a bubble [19]. Given an integer $k$ and a graph $G$, is it possible to find a generator $\mathcal{G}(G)$ that generates all and only the bubbles of $G$ which have both legs of length at most $k$? Fourth, are there faster algorithms to find a bubble generator? Fifth, this work is related to the research done in the direction of cycle bases. However, as we already mentioned, our problem displays characteristics that make it very different from the ones related to cycle bases. Thus, it may be of independent interest to further investigate the connections between those two problems.
+
+There are also some practical questions that need to be addressed in future work, and which might be interesting on their own. We see three possible directions: (i) reduce the false positive AS events by adding more biologically motivated constraints (e.g. the ones mentioned in the previous paragraph) to the bubbles in the generator, (ii) find “complex” AS events by listing also
+---PAGE_BREAK---
+
+the bubbles that result from a combination of two or more bubbles from the
+generator.
+
+Finally, our polynomial-time decomposition algorithm could be useful in
+the case where we want to identify and decompose complex alternative splicing
+events [20] into their elementary parts. We defer all those problems to further
+investigations.
+
+## Acknowledgments
+
+V. Acuña is supported by Fondecyt 1140631, PIA Fellowship AFB170001 and Center for Genome Regulation FONDAP 15090007. R. Grossi and G. F. Italiano are partially supported by MIUR, the Italian Ministry for Education, University and Research, under PRIN Project AHEAD (Efficient Algorithms for Harnessing Networked Data). Part of this work was done while G. F. Italiano was visiting Université de Lyon. L. Lima is supported by the Brazilian Ministry of Science, Technology and Innovation (in Portuguese, Ministério da Ciência, Tecnologia e Inovação - MCTI) through the National Council of Technological and Scientific Development (in Portuguese, Conselho Nacional de Desenvolvimento Científico e Tecnológico - CNPq), under the Science Without Borders (in Portuguese, Ciências Sem Fronteiras) scholarship grant process number 203362/2014-4. B. Sinaimeri, L. Lima and M.-F. Sagot are partially funded by the French ANR project Aster (2016-2020), and together with V. Acuña, also by the Stic AmSud project MAIA (2016-2017). This work was performed using the computing facilities of the CC LBBE/PRABI.
+
+## References
+
+1. Acuña, V., Grossi, R., Italiano, G.F., Lima, L., Rizzi, R., Sacomoto, G., Sagot, M., Sinaimeri, B.: On bubble generators in directed graphs. In: Graph-Theoretic Concepts in Computer Science - 43rd International Workshop, WG 2017, Eindhoven, The Netherlands, June 21-23., Lecture Notes in Computer Science, vol. 10520, pp. 18–31. Springer (2017)
+2. Benoit-Pilven, C., Marchet, C., Chautard, E., Lima, L., Lambert, M.P., Sacomoto, G., Rey, A., Cologne, A., Terrone, S., Dulaurier, L., Claude, J.B., Bourgeois, C., Auboeuf, D., Lacroix, V.: Complementarity of assembly-first and mapping-first approaches for alternative splicing annotation and differential analysis from RNAseq data. Scientific Reports 8(1) (2018)
+3. Birmelé, E., Crescenzi, P., Ferreira, R., Grossi, R., Lacroix, V., Marino, A., Pisanti, N., Sacomoto, G., Sagot, M.F.: Efficient Bubble Enumeration in Directed Graphs. In: SPIRE, pp. 118–129 (2012)
+4. Bollobás, B.: Modern graph theory, Graduate Texts in Mathematics, vol. 184. Springer-Verlag, Berlin (1998)
+5. Brankovic, L., Iliopoulos, C.S., Kundu, R., Mohamed, M., Pissis, S.P., Vayani, F.: Linear-time superbubble identification algorithm for genome assembly. Theoretical Computer Science 609, 374–383 (2016)
+6. Cormen, T.H., Leiserson, C.E., Rivest, R.L.: Introduction to Algorithms. The MIT Electrical Engineering and Computer Science Series. MIT Press, Cambridge, MA (1991)
+7. Deo, N.: Graph theory with applications to engineering and computer science. Prentice-Hall series in automatic computation. Englewood Cliffs, N.J. Prentice-Hall (1974)
+---PAGE_BREAK---
+
+8. Dobin, A., Davis, C.A., Schlesinger, F., Drenkow, J., Zaleski, C., Jha, S., Batut, P., Chaisson, M., Gingeras, T.R.: STAR: ultrafast universal RNA-seq aligner. Bioinformatics 29(1), 15–21 (2013)
+
+9. Gleiss, P.M., Leydold, J., Stadler, P.F.: Circuit bases of strongly connected digraphs. *Discussiones Mathematicae Graph Theory* **23**(2), 241–260 (2003)
+
+10. Iqbal, Z., Caccamo, M., Turner, I., Flicek, P., McVean, G.: De novo assembly and genotyping of variants using colored de Bruijn graphs. *Nat Genet* **44**(2), 226–232 (2012)
+
+11. Kavitha, T., Liebchen, C., Mehlhorn, K., Michail, D., Rizzi, R., Ueckerdt, T., Zweig, K.A.: Cycle bases in graphs characterization, algorithms, complexity, and applications. Computer Science Review **3**(4), 199 – 243 (2009). DOI http://dx.doi.org/10.1016/j.cosrev.2009.08.001
+
+12. Kavitha, T., Mehlhorn, K.: Algorithms to compute minimum cycle bases in directed graphs. *Theory of Computing Systems* **40**(4), 485 – 505 (2007)
+
+13. Lima, L., Sinaimeri, B., Sacomoto, G., Lopez-Maestre, H., Marchet, C., Miele, V., Sagot, M.F., Lacroix, V.: Playing hide and seek with repeats in local and global de novo transcriptome assembly of short RNA-seq reads. *Algorithms Mol Biol* **12**, 2–2 (2017). DOI 10.1186/s13015-017-0091-2
+
+14. MacLane, S.: A combinatorial condition for planar graphs. Fundamenta Mathematicae **28**, 22–32 (1937)
+
+15. Miller, J.R., Koren, S., Sutton, G.: Assembly algorithms for next-generation sequencing data. *Genomics* **95**(6), 315–327 (2010)
+
+16. Onodera, T., Sadakane, K., Shibuya, T.: Detecting Superbubbles in Assembly Graphs. In: *Algorithms in Bioinformatics*, Lecture Notes in Computer Science, vol. 8126, pp. 338–348. Springer Berlin Heidelberg (2013)
+
+17. Pevzner, P.A., Tang, H., Tesler, G.: De Novo Repeat Classification and Fragment Assembly. Genome Research **14**(9), 1786–1796 (2004)
+
+18. Sacomoto, G., Kielbassa, J., Chikhi, R., Uricaru, R., Antoniou, P., Sagot, M.F., Peterlongo, P., Lacroix, V.: KisSplice: de-novo calling alternative splicing events from RNA-seq data. BMC Bioinformatics **13**(S-6), S5 (2012)
+
+19. Sacomoto, G., Lacroix, V., Sagot, M.F.: A polynomial delay algorithm for the enumeration of bubbles with length constraints in directed graphs and its application to the detection of alternative splicing in RNA-seq data. In: WABI, pp. 99–111 (2013)
+
+20. Sammeth, M.: Complete alternative splicing events are bubbles in splicing graphs. Journal of Computational Biology **16**(8), 1117–1140 (2009)
+
+21. Shilov, G.E.: Linear Algebra. Dover Publications, New York (1977). (Trans. R. A. Silverman)
+
+22. Simpson, J.T., Wong, K., Jackman, S.D., Schein, J.E., Jones, S.J.M., Birol, I.: ABySS: A parallel assembler for short read sequence data. Genome Research **19**(6), 1117–1123 (2009)
+
+23. Sung, W.K., Sadakane, K., Shibuya, T., Belorkar, A., Pyrogova, I.: An $O(m \log m)$-time algorithm for detecting superbubbles. IEEE/ACM Trans. Comput. Biol. Bioinformatics **12**(4), 770–777 (2015)
+
+24. Uricaru, R., Rizk, G., Lacroix, V., Quillery, E., Plantard, O., Chikhi, R., Lemaitre, C., Peterlongo, P.: Reference-free detection of isolated SNPs. Nucleic Acids Research **43**(2), e11 (2015)
+
+25. Younsi, R., MacLean, D.: Using 2k + 2 bubble searches to find single nucleotide polymorphisms in k-mer graphs. *Bioinformatics* **31**(5), 642–646 (2015)
+
+26. Zerbino, D., Birney, E.: Velvet: Algorithms for De Novo Short Read Assembly Using De Bruijn Graphs. Genome Res. (2008)
\ No newline at end of file
diff --git a/samples/texts_merged/4283718.md b/samples/texts_merged/4283718.md
new file mode 100644
index 0000000000000000000000000000000000000000..a968f9e8301c747908545e3267848b0f6f5c4a25
--- /dev/null
+++ b/samples/texts_merged/4283718.md
@@ -0,0 +1,30 @@
+
+---PAGE_BREAK---
+
+**Methodological note**
+
+PIN Flash 11 – En route to safer mobility in EU capitals
+
+Regression estimation of the average annual percentage change in road
+mortality rates over the past decade using centred 3-year moving averages
+
+To estimate the average yearly percentage change in road deaths over a given period, one should make use of the whole time series of counts, not just the counts in the first and the last year.
+
+Since the road death counts in certain jurisdictions are small numbers subject to randomness, it is preferable to use centred moving averages instead of single-year values. The recorded number of deaths is replaced by the average of the counts registered in the given year, the previous year and the following year.
+
+$$Y_i^* = (Y_{i-1} + Y_i + Y_{i+1})/3$$
+
+The resulting estimate will be less sensitive to randomness and is likely to be more reliable.
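The smoothing step above can be sketched in a few lines (a generic illustration, not the note's own analysis code):

```python
def centred_moving_average(counts):
    """Y*_i = (Y_{i-1} + Y_i + Y_{i+1}) / 3 for the interior years.
    Endpoint handling (e.g. averaging only the last two years, as the
    note does for 2007) is left to the caller."""
    return [(counts[i - 1] + counts[i] + counts[i + 1]) / 3
            for i in range(1, len(counts) - 1)]
```

Applied to annual counts [3, 6, 9, 12], this yields the smoothed interior values [6.0, 9.0].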
+
+Fig. 1: Death counts for Vienna together with 3-year averages.
+
+The task is now to estimate the average annual change in road mortality in the period 1997-2007, while using three-year centred averages instead of single year values. For year 2007, the average of 2006 and 2007 is used.
+
+We assume a priori a reduction in the mortality rate over time. To fix the sign of the change, we take reduction as positive, so that a minus sign indicates an increase. Let the average reduction per year, as a percentage of the previous year, be $p$. If $\lambda_n$ is the risk of death in year $n$, then we wish to fit the model $\lambda_n = \lambda_0(1 - p/100)^n$, where in this case year 0 is 1997 and $n = 10$ in 2007.
+---PAGE_BREAK---
+
+This is equivalent to $\ln(\lambda_n/\lambda_0) = n \cdot \ln(1-p/100)$, so if we fit $\ln(\lambda_n/\lambda_0) = an$ by linear regression, then $a$ is the estimate of $\ln(1-p/100)$ and $p$ is estimated by $100(1 - e^a)$.
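The estimation step can be sketched as follows; forcing the regression line through the origin is our reading of the no-intercept model $y = ax$:

```python
import math

def estimate_annual_reduction(log_ratios):
    """Given y_n = ln(lambda_n / lambda_0) for n = 0..N, fit y = a*n by
    least squares through the origin (an assumption suggested by the
    no-intercept model y = ax) and return p = 100*(1 - e^a), the
    estimated average annual percentage reduction."""
    num = sum(n * y for n, y in enumerate(log_ratios))
    den = sum(n * n for n in range(len(log_ratios)))
    a = num / den
    return 100 * (1 - math.exp(a))
```

Feeding in an exact series $y_n = -0.0254\,n$ for $n = 0, \dots, 10$ recovers $a = -0.0254$ and hence $p \approx 2.51$, matching the Vienna example below.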
+
+Fig. 2: Linear regression function for logarithmically transformed changes in death counts, with 2001 as the baseline.
+
+In this figure, which illustrates the use of the method and is constructed for Vienna, the function $\ln(\lambda_n/\lambda_0) = an$ corresponds to the function $y=ax$, so $a$ is equal to $-0.0254$. The value of $p$ can now be estimated as $100(1 - e^a) = 100(1 - e^{-0.0254}) \approx 2.51$. The average yearly reduction in road deaths is thus estimated as 2.5%.
\ No newline at end of file
diff --git a/samples/texts_merged/4399087.md b/samples/texts_merged/4399087.md
new file mode 100644
index 0000000000000000000000000000000000000000..771ef3fdbcc60deef7b108fb5f0100a0d2b2c5d6
--- /dev/null
+++ b/samples/texts_merged/4399087.md
@@ -0,0 +1,846 @@
+
+---PAGE_BREAK---
+
+Technical Report No. 177
+
+# Approximation Algorithms for Bregman Clustering, Co-clustering and Tensor Clustering
+
+Suvrit Sra¹, Stefanie Jegelka¹
+Arindam Banerjee²
+
+17 Oct 2008
+
+¹ MPI für biologische Kybernetik, AGBS; ² University of Minnesota, MN, USA.
+---PAGE_BREAK---
+
+# Approximation Algorithms for Bregman Clustering, Co-clustering and Tensor Clustering
+
+Suvrit Sra, Stefanie Jegelka, and Arindam Banerjee
+
+**Abstract.** The Euclidean K-means problem is fundamental to clustering and over the years it has been intensely investigated. More recently, generalizations such as Bregman k-means [8], co-clustering [10], and tensor (multi-way) clustering [40] have also gained prominence. A well-known computational difficulty encountered by these clustering problems is the NP-Hardness of the associated optimization task, and commonly used methods guarantee at most local optimality. Consequently, approximation algorithms of varying degrees of sophistication have been developed, though largely for the basic Euclidean K-means (or $ℓ_1$-norm K-median) problem. In this paper we present approximation algorithms for several Bregman clustering problems by building upon the recent paper of Arthur and Vassilvitskii [5]. Our algorithms obtain objective values within a factor $O(\log K)$ for Bregman k-means, Bregman co-clustering, Bregman tensor clustering, and weighted kernel k-means. To our knowledge, except for some special cases, approximation algorithms have not been considered for these general clustering problems. There are several important implications of our work: (i) under the same assumptions as Ackermann et al. [2] it yields a much faster algorithm (non-exponential in $K$, unlike [2]) for information-theoretic clustering, (ii) it answers several open problems posed by [4], including generalizations to Bregman co-clustering, and tensor clustering, (iii) it provides practical and easy to implement methods—in contrast to several other common approximation approaches.
+
+## 1 Introduction
+
+Partitioning data points into clusters is a fundamentally hard problem. The well-known Euclidean k-means problem that seeks to partition the input data into $K$ clusters, so that the sum of squared distances of the input points to their corresponding cluster centroids is minimized, is an NP-Hard problem [22]. Simple and frequently used procedures that rapidly obtain local minima have existed for a long time [26, 32]. For example, Lloyd's algorithm [32], commonly referred to as the K-means algorithm, is arguably the most popular approach to solving Euclidean k-means. Here, one begins with $K$ centers (usually chosen randomly) and assigns points to their closest centers. Each cluster center is then recomputed as the mean of the points assigned to it, and these two steps are repeated until the procedure converges. A similar greedy procedure also exists for the Bregman k-means problem, as shown in [8]. Despite enjoying properties such as monotonic descent in the objective function value and utter simplicity of implementation, these simplistic iterative approaches can often get stuck in poor local optima. Therefore, heuristic local search strategies (e.g., [21]), or even guaranteed approximation algorithms have been designed for it (e.g., [31] or references therein). Heuristic strategies can be quite effective but are not accompanied by better than local optimality guarantees, while standard approximation algorithms quickly sacrifice the simplicity, and thereby the efficiency of the K-means algorithm.
+
+Fortunately, in a recent paper Arthur and Vassilvitskii [5] presented a simple initialization scheme for Euclidean k-means along with an elegant analysis guaranteeing an $O(\log K)$ approximation to the globally optimal objective function value. The greatest advantage of their scheme is that it retains the simplicity and efficiency of the K-means algorithm, while still maintaining theoretical guarantees. This
+---PAGE_BREAK---
+
+paper is directly motivated by their work, which we greatly extend to obtain approximation algorithms for several Bregman clustering problems. We summarize our main results below.
+
+## 1.1 Results.
+
+We present approximation algorithms for the following Bregman divergence based clustering problems:
+
+1. Bregman k-means [8] (§2),
+
+2. Bregman co-clustering [10] (§5),
+
+3. Bregman tensor clustering [9] (§5).
+
+Additionally, as an easy generalization of [5], we also obtain an approximation algorithm for weighted kernel k-means [20] (§4).
+
+**Implications.** Our results have several important implications. Under assumptions similar to those of Ackermann et al. [2], we obtain a much faster approximation algorithm for information-theoretic clustering as a special case of our approximation for Bregman k-means (§3.1). Ackermann et al. [2] require time exponential (or worse) in $K$, while our methods run in time linear in $K$. In fact, while preparing this paper we became aware of a very recent SODA 2009 paper of Ackermann and Blömer [1] (yet to appear in print), who provide new approximation algorithms for Bregman k-means. However, their new algorithms, while faster than those in [2], are *still* exponential (or worse) in $K$; our algorithm operates under the *same* assumptions on the Bregman divergences as made by [1], and is much faster (non-exponential in $K$).
+
+Our results for Bregman co-clustering and Bregman tensor clustering answer two open problems posed by [4], and yield the first (to our knowledge) known approximation algorithms for these problems. Finally, using our $O(\log K)$ approximation for weighted kernel K-means, one can obtain potentially better algorithms for graph-cut objectives and certain semi-supervised clustering problems by exploiting the equivalences described by [20, 30].
+
+## 1.2 Related work
+
+There exist several books and a vast array of papers dealing with the problem of clustering. However, as our focus is on approximation algorithms for Bregman divergence based clustering problems, we summarize below only work dealing with approximation algorithms for clustering. Graph partitioning also forms a large class of clustering problems and algorithms. However, it lies outside the scope of this paper, apart from the connection via weighted kernel k-means as mentioned above.
+
+### 1.2.1 Clustering
+
+The most directly related work is the paper [5] that has motivated our algorithms for clustering. If one fixes the number of clusters $K$, and the data dimensionality $d$, then Euclidean k-means can be solved exactly, in time $O(n^{Kd})$ [28]. We remark that using the Bregman Voronoi ideas of Nielsen et al. [35], it might be possible to generalize the work of Inaba et al. [28] to Bregman k-means.
+
+Several other polynomial-time approximation algorithms for K-means have been proposed in the literature, for example, [17, 25, 31] (also see the references therein). All of the algorithms proposed in these papers suffer from a common problem, namely an exponential (or poly-exponential) dependence on $K$, rendering them impractical despite their theoretical appeal.
+
+Of particular interest is the paper of Ackermann et al. [2], who extended clustering guarantees of Kumar et al. [31] to generic divergence measures (including Bregman divergences). Under the *same* assumptions on the underlying Bregman divergence as Ackermann et al. [2] we obtain a much faster and practical approximation algorithm for Bregman k-means than their methods. Their approximation factor is $(1 + \epsilon)$ with a running time of $O(dn^2 (K/\epsilon)^{O(1)})$, while our factor is $O(\log K)$ with a running time of
+---PAGE_BREAK---
+
+$O(dnK)$. In an even more recent paper that will appear in SODA 2009, Ackermann and Blömer [1] (preprint available from the authors' website) have introduced a new $O(dKn + d^2 2^{(K/\epsilon)\Theta(1)} \log^{K+2} n)$ approximation algorithm for Bregman k-means that again achieves $(1+\epsilon)$ approximation, with the same assumptions as [2] on the underlying Bregman divergences. Our approximation algorithms for Bregman k-means are much more practical than theirs, both because of the lower running time as well as the implementational simplicity.
+
+Related to Bregman k-means is the important special case of information theoretic clustering, wherein one minimizes the sum of KL divergences of input data points to their cluster centroids. In addition to the generic methods already summarized above, recent important work worth mentioning here is the paper of Chaudhuri and McGregor [14], who present a KL-divergence clustering algorithm that does not make any assumptions on the input data, but yields a non-constant $O(\log n)$ approximation.
+
+Other relevant works such as [29, 34, 36] are summarized in [1, 2, 5], and we refer the reader to those papers for additional information.
+
+### 1.2.2 Co-clustering and Tensor clustering.
+
+For a detailed discussion of co-clustering and several relevant references we refer the reader to [10], while for the lesser known problem of tensor clustering we refer the reader to [3, 9, 11, 23, 33, 40].
+
+Approximation algorithms for co-clustering are much less well-studied. We are aware of only two very recent attempts (both papers are from 2008), namely [38] and [4], and both papers follow similar approaches to obtain their approximation guarantees. In this paper, we build upon [4] and obtain approximation algorithms for Bregman co-clustering as well as Bregman tensor clustering. We therefore answer *two open problems* posed by [4], namely, whether their methods for Euclidean co-clustering could be generalized to Bregman co-clustering, and more importantly, whether generalizations to tensors could be found. Our approximation results for co-clustering and tensor clustering may be viewed independently of our results for Bregman clustering, because they are based on being able to solve the 1-dimensional (standard) clustering problem with *any* guaranteed approximation method. One can, and we do, however, invoke our Bregman clustering results to obtain actual efficient algorithms.
+
+Now we are ready to discuss details of our methods and we begin with Bregman clustering below.
+
+## 2 Bregman Clustering
+
+Bregman k-means (BREGM) was introduced by Banerjee et al. [8], and it can be viewed as a generalization of Euclidean k-means and information theoretic clustering (ITC) [19]. Below we derive a randomized algorithm called BREG++ for Bregman k-means and prove it to be within $O(\log K)$ of the optimal. In §3.1 we discuss the particularly interesting special case of ITC in further detail, especially because ITC has only recently (in 2008) witnessed some progress in terms of approximation algorithms [2, 14]. We also discuss implications of our BREG++ for mixture-modeling in §3.2.
+
+### 2.1 Setup and Algorithm.
+
+Let $\mathcal{X} = \{\mathbf{x}_1, \dots, \mathbf{x}_n\}$ be the input data, and $\{w_1, \dots, w_n\}$ corresponding non-negative weights. For a strictly convex function $f$, let $B_f$ denote a Bregman divergence defined as [13]
+
+$$B_f(\mathbf{x}, \mathbf{y}) = f(\mathbf{x}) - f(\mathbf{y}) - \nabla f(\mathbf{y})^T (\mathbf{x} - \mathbf{y}). \quad (2.1)$$
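Definition (2.1) can be implemented directly; the plain-list representation and the two instantiations below (squared Euclidean distance and KL divergence, both standard examples of Bregman divergences) are our choices for illustration:

```python
import math

def bregman_divergence(f, grad_f, x, y):
    """B_f(x, y) = f(x) - f(y) - <grad f(y), x - y>, cf. (2.1);
    vectors are plain Python lists (an implementation choice here)."""
    inner = sum(g * (xi - yi) for g, xi, yi in zip(grad_f(y), x, y))
    return f(x) - f(y) - inner

# f(x) = ||x||^2 recovers the squared Euclidean distance:
sq = lambda x: sum(v * v for v in x)
sq_grad = lambda x: [2.0 * v for v in x]

# f(x) = sum_i x_i log x_i (negative entropy) recovers the
# KL divergence on the probability simplex:
neg_ent = lambda x: sum(v * math.log(v) for v in x)
neg_ent_grad = lambda x: [math.log(v) + 1.0 for v in x]
```

For instance, with $f(\mathbf{x}) = \|\mathbf{x}\|^2$ the divergence between $[1, 2]$ and $[3, 1]$ equals $\|[1,2]-[3,1]\|^2 = 5$.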
+
+Given $B_f$, Bregman k-means seeks a partition $C = \{C_1, \dots, C_K\}$ of $\mathcal{X}$, such that the following objective is minimized:
+
+$$J(C) = \sum_{h=1}^{K} \sum_{\mathbf{x}_i \in C_h} w_i B_f(\mathbf{x}_i, \boldsymbol{\mu}_h), \quad (2.2)$$
+---PAGE_BREAK---
+
+where $\mu_h$ is the weighted mean of cluster $C_h$, i.e.,
+
+$$ \mu_h = \frac{\sum_{\mathbf{x}_i \in C_h} w_i \mathbf{x}_i}{\sum_{\mathbf{x}_i \in C_h} w_i}. \quad (2.3) $$
+
+The means given by (2.3) are optimal for a given clustering, as shown formally below.
+
+**Lemma 2.1.** Let $\mathcal{A}$ be a set of points with weighted mean $\boldsymbol{\mu}_{\mathcal{A}}$, and let $z$ be an arbitrary point. Then,
+
+$$ \sum_{\mathbf{x}_i \in \mathcal{A}} w_i B_f(\mathbf{x}_i, z) = \sum_{\mathbf{x}_i \in \mathcal{A}} w_i B_f(\mathbf{x}_i, \boldsymbol{\mu}_{\mathcal{A}}) + W_{\mathcal{A}} B_f(\boldsymbol{\mu}_{\mathcal{A}}, z), $$
+
+where $W_{\mathcal{A}} = \sum_{\mathbf{x}_i \in \mathcal{A}} w_i$.
+
+*Proof.* Follows directly from the definition (2.1) of $B_f$ and equation (2.3)—for details see [8]. $\square$
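A quick numeric sanity check of the identity in Lemma 2.1 for the squared-Euclidean instance of $B_f$ (a plain-Python sketch; the particular data below are made up for illustration):

```python
def weighted_mean(points, weights):
    """Weighted mean (2.3) for points given as plain Python lists."""
    W = sum(weights)
    dim = len(points[0])
    return [sum(w * p[j] for w, p in zip(weights, points)) / W
            for j in range(dim)]

def sq_dist(x, y):
    """Squared Euclidean distance, i.e. B_f for f(x) = ||x||^2."""
    return sum((a - b) ** 2 for a, b in zip(x, y))

# Check the identity of Lemma 2.1 on arbitrary weighted data:
pts = [[0.0, 0.0], [2.0, 0.0], [1.0, 3.0]]
wts = [1.0, 2.0, 1.0]
z = [5.0, -1.0]
mu = weighted_mean(pts, wts)
lhs = sum(w * sq_dist(p, z) for w, p in zip(wts, pts))
rhs = (sum(w * sq_dist(p, mu) for w, p in zip(wts, pts))
       + sum(wts) * sq_dist(mu, z))
assert abs(lhs - rhs) < 1e-9
```

For squared Euclidean distance this is the classical bias-variance (parallel-axis) decomposition; the lemma states that the same identity holds for every Bregman divergence.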
+
+**Algorithm.** Given $K$ initial means, the Bregman k-means (BREGM) algorithm [8] follows the outline:
+
+1. For each $i = 1, \dots, n$, assign point $\mathbf{x}_i$ to its nearest (in $B_f$) mean, updating the clusters $C_h$ accordingly.
+
+2. For each $h = 1, \dots, K$, compute $\boldsymbol{\mu}_h$ using (2.3).
+
+3. Repeat steps 1 and 2 until convergence.
+
+This simple k-means type approach monotonically decreases the objective function, finally stopping once the clusters stabilize. To obtain approximation guarantees, we must modify this basic algorithm a little. To that end, we generalize the careful initialization technique of [5], as shown below.
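The three steps above can be sketched as follows (function and variable names are ours; the paper specifies the procedure, not an implementation):

```python
def bregman_kmeans(points, weights, means, divergence, iters=100):
    """Lloyd-style BREGM loop for points as plain Python lists.
    `divergence(x, mu)` is any Bregman divergence; ties and empty
    clusters are handled in the simplest possible way."""
    dim = len(points[0])
    labels = [0] * len(points)
    for _ in range(iters):
        # Step 1: assign each point to its nearest mean under B_f.
        labels = [min(range(len(means)),
                      key=lambda h: divergence(x, means[h]))
                  for x in points]
        # Step 2: recompute weighted means via (2.3).
        new_means = []
        for h in range(len(means)):
            idx = [i for i, lab in enumerate(labels) if lab == h]
            if not idx:                      # empty cluster: keep old mean
                new_means.append(means[h])
                continue
            W = sum(weights[i] for i in idx)
            new_means.append([sum(weights[i] * points[i][j] for i in idx) / W
                              for j in range(dim)])
        # Step 3: repeat until the means stabilize.
        if new_means == means:
            break
        means = new_means
    return means, labels

# With the squared Euclidean divergence this is exactly K-means:
sqd = lambda x, y: sum((a - b) ** 2 for a, b in zip(x, y))
```

On the toy 1-D input {0, 1, 10, 11} with unit weights and initial means {0, 10}, the loop converges to the means 0.5 and 10.5.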
+
+As its initialization, BREG++ selects cluster centers from $\mathcal{X}$ sequentially following a weighted farthest-first scheme. The first center $\mu_1$ is chosen with probability proportional to its weight, i.e., for some $\mathbf{x}_i \in \mathcal{X}$
+
+$$ P(\boldsymbol{\mu}_1 = \mathbf{x}_i) = \frac{w_i}{\sum_{j=1}^n w_j}. $$
+
+The remaining centers are chosen from $\mathcal{X}$ with a different weighting. At a given stage in the initialization, let $C$ be the set of centers already chosen. Let $D(\mathbf{x})$ denote the smallest Bregman divergence of a point $\mathbf{x}$ in $\mathcal{X}$ to an already chosen center, i.e.,
+
+$$ D(\mathbf{x}) = \min_{\boldsymbol{\mu} \in C} B_f(\mathbf{x}, \boldsymbol{\mu}). \qquad (2.4) $$
+
+Then BREG++ chooses the next center $\mu_h$ by letting $\mu_h = x_i \in \mathcal{X}$, with probability
+
+$$ P(\boldsymbol{\mu}_h = \mathbf{x}_i) = \frac{w_i D(\mathbf{x}_i)}{\sum_{j=1}^{n} w_j D(\mathbf{x}_j)}. \quad (2.5) $$
+
+The initialization steps (2.4) and (2.5) are repeated until we have chosen $K$ centers, which are then used to initialize the standard BREGM algorithm. Interestingly, this weighted farthest-first initialization alone is sufficient to bring BREG++ within a factor $O(\log K)$ of the optimal, as the analysis below shows.
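The seeding procedure just described can be sketched as follows, using Python's standard weighted sampling; this is an illustrative sketch of (2.4)-(2.5), not the authors' code:

```python
import random

def bregpp_seed(points, weights, K, divergence, rng=random):
    """BREG++ initialization sketch: draw the first centre with
    probability proportional to its weight; draw each further centre
    with probability proportional to w_i * D(x_i), where D(x_i) is the
    divergence to the nearest centre chosen so far ((2.4)-(2.5)).
    Assumes fewer than K points never exhaust the probability mass."""
    centres = [rng.choices(points, weights=weights, k=1)[0]]
    while len(centres) < K:
        D = [min(divergence(x, c) for c in centres) for x in points]
        w = [wi * di for wi, di in zip(weights, D)]
        centres.append(rng.choices(points, weights=w, k=1)[0])
    return centres
```

Note that a point already chosen as a centre has $D(\mathbf{x}) = 0$ and therefore zero probability of being chosen again.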
+
+## 2.2 Analysis.
+
+The analysis of Arthur and Vassilvitskii [5] does not directly generalize to Bregman k-means; some additional details must be developed, as outlined in this section. We assume that the Bregman divergence being minimized has bounded curvature, i.e., $\exists \sigma_1, \sigma_2$ with $0 < \sigma_1 \le \sigma_2 < \infty$, such that
+
+$$ \sigma_1 \| \mathbf{x} - \mathbf{y} \|^2 \le B_f(\mathbf{x}, \mathbf{y}) \le \sigma_2 \| \mathbf{x} - \mathbf{y} \|^2. \quad (2.6) $$
+---PAGE_BREAK---
+
+Note that the bounds (2.6) need not hold over the entire domain of $f$—they can be limited to convex hulls of the input data points. Specifically, we can select
+
+$$ \sigma_1 = \inf_{\substack{\mathbf{x} \in \mathcal{X} \\ \mathbf{y} \in \mathrm{conv}(\mathcal{X})}} \frac{B_f(\mathbf{x}, \mathbf{y})}{\|\mathbf{x} - \mathbf{y}\|^2}, \quad \sigma_2 = \sup_{\substack{\mathbf{x} \in \mathcal{X} \\ \mathbf{y} \in \mathrm{conv}(\mathcal{X})}} \frac{B_f(\mathbf{x}, \mathbf{y})}{\|\mathbf{x} - \mathbf{y}\|^2}, \qquad (2.7) $$
+
+where $\mathrm{conv}(\mathcal{X})$ denotes the convex hull of $\mathcal{X}$, i.e., the set of points that can be expressed as $\mathbf{y} = \sum_{\mathbf{x} \in \mathcal{X}} \alpha_{\mathbf{x}}\mathbf{x}$ with $\alpha_{\mathbf{x}} \ge 0$ and $\sum_{\mathbf{x} \in \mathcal{X}} \alpha_{\mathbf{x}} = 1$. Though these bounds might appear restrictive, they are in fact not that limiting, as our treatment of information theoretic clustering in Section 3.1 shows. In fact, it turns out that similar bounds were assumed in the very recent work of Ackermann et al. [2], and [1]. In the language of convex optimization, these bounds are nothing but bounds on the curvature (Hessian) of the convex function $f$ (e.g., in the context of strong convexity [12, §9.1.2]).
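On a given dataset, the quantities in (2.7) can be probed empirically; the Monte-Carlo sketch below only *estimates* $\sigma_1$ and $\sigma_2$ (the true inf/sup may be more extreme), and sampling $\mathbf{y}$ as a convex combination of just two data points is a simplification we make here:

```python
import random

def empirical_curvature_bounds(points, bregman, sq_norm,
                               n_pairs=2000, seed=0):
    """Monte-Carlo estimate of (2.7): sample x from the data and y from
    conv(X) as a random convex combination of two data points, and track
    the min and max of B_f(x, y) / ||x - y||^2."""
    rng = random.Random(seed)
    lo, hi = float("inf"), 0.0
    for _ in range(n_pairs):
        x = rng.choice(points)
        a, b = rng.choice(points), rng.choice(points)
        t = rng.random()
        y = [t * ai + (1 - t) * bi for ai, bi in zip(a, b)]
        denom = sq_norm(x, y)
        if denom < 1e-12:            # skip near-coincident pairs
            continue
        ratio = bregman(x, y) / denom
        lo, hi = min(lo, ratio), max(hi, ratio)
    return lo, hi
```

As a sanity check, for the squared Euclidean divergence the ratio is identically 1, so both estimates equal 1.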
+
+On a more intriguing note, it seems that without such assumptions on the curvature of $B_f$, one might not be able to obtain constant approximation ratios; this intuition is reinforced by the recent results of [14], who avoided making such assumptions, but ended up with an $O(\log n)$ approximation.
+
+We now prove the approximation in three steps (following [5]). First, we show that BREG++ is competitive on those clusters of the optimal clustering $C_{OPT}$ from which it happens to sample a center.
+
+**Lemma 2.2.** Let $\mathcal{A}$ be an arbitrary cluster in $C_{OPT}$ and let $\mathcal{C}$ be the clustering with just one center that was chosen with probability proportional to the weight of points in $\mathcal{A}$. Then, if $J(\mathcal{A})$ is the contribution of points in $\mathcal{A}$ to the final objective, we have
+
+$$ E[J(\mathcal{A})] \leq \left(1 + \frac{\sigma_2}{\sigma_1}\right) J_{OPT}(\mathcal{A}). $$
+
+*Proof.* Let $\mu_A$ denote the weighted mean of cluster $\mathcal{A}$. Since $C_{OPT}$ is optimal, it must be using $\mu_A$ as its center. Let $W_A = \sum_{x_i \in A} w_i$. Now invoking Lemma 2.1 we note that $E[J(A)]$ is given by
+
+$$
+\begin{align*}
+& \sum_{\boldsymbol{\mu}_0 \in \mathcal{A}} \frac{w_0}{W_{\mathcal{A}}} \sum_{\mathbf{x}_i \in \mathcal{A}} w_i B_f(\mathbf{x}_i, \boldsymbol{\mu}_0) \\
+&= \sum_{\boldsymbol{\mu}_0 \in \mathcal{A}} \frac{w_0}{W_{\mathcal{A}}} \Bigl( \sum_{\mathbf{x}_i \in \mathcal{A}} w_i B_f(\mathbf{x}_i, \boldsymbol{\mu}_{\mathcal{A}}) + W_{\mathcal{A}} B_f(\boldsymbol{\mu}_{\mathcal{A}}, \boldsymbol{\mu}_0) \Bigr) \\
+&= J_{OPT}(\mathcal{A}) + \sum_{\boldsymbol{\mu}_0 \in \mathcal{A}} w_0 \frac{B_f(\boldsymbol{\mu}_{\mathcal{A}}, \boldsymbol{\mu}_0)}{B_f(\boldsymbol{\mu}_0, \boldsymbol{\mu}_{\mathcal{A}})} B_f(\boldsymbol{\mu}_0, \boldsymbol{\mu}_{\mathcal{A}}) \\
+&\leq J_{OPT}(\mathcal{A}) + \frac{\sigma_2}{\sigma_1} \sum_{\boldsymbol{\mu}_0 \in \mathcal{A}} w_0 B_f(\boldsymbol{\mu}_0, \boldsymbol{\mu}_{\mathcal{A}}) = \Bigl(1 + \frac{\sigma_2}{\sigma_1}\Bigr) J_{OPT}(\mathcal{A}),
+\end{align*}
+$$
+
+where the last inequality follows from (2.7). $\square$
+
+The second step consists of showing how the algorithm behaves for the remaining centers that are chosen with the weighted farthest-first sampling.
+
+**Lemma 2.3.** Let $\mathcal{A}$ be an arbitrary cluster in $C_{OPT}$, and let $\mathcal{C}$ be an arbitrary clustering. If we add a random point of $\mathcal{A}$ as a center to $\mathcal{C}$ using (2.4) and (2.5), then
+
+$$ E[J(\mathcal{A})] \leq 4 \frac{\sigma_2}{\sigma_1} \left(1 + \frac{\sigma_2}{\sigma_1}\right) J_{OPT}(\mathcal{A}). $$
+
+*Proof.* After choosing a center $\mathbf{x}_0$ from $\mathcal{A}$, any point $\mathbf{x}_i \in \mathcal{A}$ will contribute $w_i \min(D(\mathbf{x}_i), B_f(\mathbf{x}_i, \mathbf{x}_0))$ to the objective. Since we sample according to (2.5), the expected value of the objective $E[J(A)]$ is
+
+$$ \sum_{\mathbf{x}_0 \in A} \frac{w_0 D(\mathbf{x}_0)}{\sum_{\mathbf{x} \in A} w_\mathbf{x} D(\mathbf{x})} \sum_{\mathbf{x}_i \in A} w_i \min(D(\mathbf{x}_i), B_f(\mathbf{x}_i, \mathbf{x}_0)). $$
+
+Let $c_0$ and $c_i$ be the centers closest to $\mathbf{x}_0$ and $\mathbf{x}_i$, respectively. From the triangle inequality we have
+
+$$
+\|\mathbf{x}_0 - \mathbf{c}_i\| \leq \|\mathbf{x}_0 - \mathbf{x}_i\| + \|\mathbf{x}_i - \mathbf{c}_i\|.
+$$
+
+Then using (2.7) we can bound the divergence
+
+$$
+\begin{align*}
+B_f(\mathbf{x}_0, \mathbf{c}_i) &\le \sigma_2 \|\mathbf{x}_0 - \mathbf{c}_i\|^2 \le \sigma_2 (\|\mathbf{x}_0 - \mathbf{x}_i\| + \|\mathbf{x}_i - \mathbf{c}_i\|)^2 \\
+&\le 2\sigma_2 \|\mathbf{x}_0 - \mathbf{x}_i\|^2 + 2\sigma_2 \|\mathbf{x}_i - \mathbf{c}_i\|^2 \\
+&\le 2 \frac{\sigma_2}{\sigma_1} B_f(\mathbf{x}_i, \mathbf{x}_0) + 2 \frac{\sigma_2}{\sigma_1} B_f(\mathbf{x}_i, \mathbf{c}_i).
+\end{align*}
+$$
+
+Noting that $D(\mathbf{x}_0) = B_f(\mathbf{x}_0, \mathbf{c}_0) \le B_f(\mathbf{x}_0, \mathbf{c}_i)$ and $D(\mathbf{x}_i) = B_f(\mathbf{x}_i, \mathbf{c}_i)$, we have the bound
+
+$$
+D(\mathbf{x}_0) \leq 2 \frac{\sigma_2}{\sigma_1} B_f(\mathbf{x}_i, \mathbf{x}_0) + 2 \frac{\sigma_2}{\sigma_1} D(\mathbf{x}_i).
+$$
+
+Multiplying both sides by $w_i$ and summing over all $\mathbf{x}_i \in \mathcal{A}$, we have (for $W_{\mathcal{A}} = \sum_{\mathbf{x}_i \in \mathcal{A}} w_i$)
+
+$$
+W_A D(\mathbf{x}_0) \leq 2 \frac{\sigma_2}{\sigma_1} \sum_{\mathbf{x}_i \in A} \left( w_i B_f(\mathbf{x}_i, \mathbf{x}_0) + w_i D(\mathbf{x}_i) \right), \text{ i.e.,}
+$$
+
+$$
+w_0 D(\mathbf{x}_0) \leq 2 \frac{\sigma_2}{\sigma_1} \frac{w_0}{W_A} \sum_{\mathbf{x}_i \in A} \left( w_i B_f(\mathbf{x}_i, \mathbf{x}_0) + w_i D(\mathbf{x}_i) \right).
+$$
+
+Now letting $R(A) = \sum_{\mathbf{x}_i \in A} w_i \min(D(\mathbf{x}_i), B_f(\mathbf{x}_i, \mathbf{x}_0))$, we see that $E[J(A)]$ is upper bounded by
+
+$$
+\begin{align*}
+& 2 \frac{\sigma_2}{\sigma_1} \sum_{\mathbf{x}_0 \in A} \frac{w_0}{W_A} \left( \frac{\sum_{\mathbf{x}_i \in A} w_i (B_f(\mathbf{x}_i, \mathbf{x}_0) + D(\mathbf{x}_i))}{\sum_{\mathbf{x} \in A} w_\mathbf{x} D(\mathbf{x})} R(A) \right) \\
+&\le 2 \frac{\sigma_2}{\sigma_1} \sum_{\mathbf{x}_0 \in A} \frac{w_0}{W_A} \sum_{\mathbf{x}_i \in A} w_i B_f(\mathbf{x}_i, \mathbf{x}_0) \\
+&+ 2 \frac{\sigma_2}{\sigma_1} \sum_{\mathbf{x}_0 \in A} \frac{w_0}{W_A} \sum_{\mathbf{x}_i \in A} w_i B_f(\mathbf{x}_i, \mathbf{x}_0) \\
+&= 4 \frac{\sigma_2}{\sigma_1} \sum_{\mathbf{x}_0 \in A} \frac{w_0}{W_A} \sum_{\mathbf{x}_i \in A} w_i B_f(\mathbf{x}_i, \mathbf{x}_0) \\
+&\le 4 \frac{\sigma_2}{\sigma_1} \left(1 + \frac{\sigma_2}{\sigma_1}\right) J_{\text{OPT}}(A),
+\end{align*}
+$$
+
+where in the second line we simplified $R(A)$ by using $\min(D(\boldsymbol{x}_i), B_f(\boldsymbol{x}_i, \boldsymbol{x}_0)) \le D(\boldsymbol{x}_i)$, while for the
+third line we used $\min(D(\boldsymbol{x}_i), B_f(\boldsymbol{x}_i, \boldsymbol{x}_0)) \le B_f(\boldsymbol{x}_i, \boldsymbol{x}_0)$. The last inequality follows from Lemma 2.2.
+$\square$
+
+**Remark 2.3 (K-means).** For $f(x) = \frac{1}{2}x^T x$ we have $\sigma_1 = \sigma_2$, Bregman k-means reduces to Euclidean k-means, and our analysis reduces to that of [5].
+
+**Remark 2.4 (Mahalanobis).** For $f = \frac{1}{2} x^T A x$, where $A$ is a positive-definite matrix, the resulting Bregman divergence is a Mahalanobis distance. Here, one has $\sigma_1 = \lambda_{\min}(A)$, and $\sigma_2 = \lambda_{\max}(A)$, independent of the input data (the $\lambda$s denote eigenvalues of $A$). The approximation ratio depends naturally on the condition number $\lambda_{\max}(A)/\lambda_{\min}(A)$ of $A$.
+
+Given Lemmas 2.2 and 2.3, the third step of our proof is simple as we can essentially invoke
+Lemma 3.3 of [5] to show that the total error incurred via the weighted farthest-first sampling is within a
+factor $O(\log K)$ of the optimal.
+
+**Lemma 2.4.** Let $\mathcal{C}$ be an arbitrary clustering. Choose $u > 0$ “uncovered”¹ clusters from $C_{OPT}$, and let $\mathcal{X}_u$ denote the set of points in these clusters. Also let $\mathcal{X}_c = X \setminus \mathcal{X}_u$. Now suppose we add $t \le u$ random centers to $\mathcal{C}$, chosen with the weighted farthest-first sampling. Let $\mathcal{C}'$ denote the resulting clustering, and let $J'$ denote the corresponding objective. Then, $E[J']$ is at most
+
+$$ (J(\mathcal{X}_c) + 4\frac{\sigma_2}{\sigma_1}(1+\frac{\sigma_2}{\sigma_1})J_{OPT}(\mathcal{X}_u)) \cdot (1+H_t) + \frac{u-t}{u} \cdot J(\mathcal{X}_u), $$
+
+where $H_t$ denotes the Harmonic number $1 + \frac{1}{2} + \dots + \frac{1}{t}$.
+
+*Proof.* Direct from the proof of Lemma 3.3 of [5]. □
+
+Finally, we have the main approximation theorem.
+
+**Theorem 2.5.** A clustering $\mathcal{C}$ obtained via BREG++ satisfies
+
+$$ E[J(\mathcal{C})] \leq 4 \frac{\sigma_2}{\sigma_1} \left(1 + \frac{\sigma_2}{\sigma_1}\right) (\log K + 2) J_{OPT}. $$
+
+*Proof.* Immediate from Theorem 3.1 of [5] by a direct application of Lemma 2.4. □
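The seeding step analysed above can be sketched as follows (a minimal illustration under our reading of (2.4)–(2.5); the function `breg_plus_plus_seeds` and the toy data are our own and are not the authors' code):

```python
import numpy as np

def breg_plus_plus_seeds(X, w, K, bregman_div, rng=None):
    """Weighted farthest-first seeding for Bregman k-means.

    The first center is drawn with probability proportional to the point
    weights (as in Lemma 2.2); each later center is drawn with probability
    proportional to w_i * D(x_i), where D(x) is the Bregman divergence to
    the closest center chosen so far (as in Lemma 2.3)."""
    rng = rng or np.random.default_rng()
    n = len(X)
    centers = [int(rng.choice(n, p=w / w.sum()))]
    D = np.array([bregman_div(X[i], X[centers[0]]) for i in range(n)])
    while len(centers) < K:
        p = w * D
        c = int(rng.choice(n, p=p / p.sum()))
        centers.append(c)
        D = np.minimum(D, [bregman_div(X[i], X[c]) for i in range(n)])
    return centers

# Example with f(x) = x^T x / 2, i.e., the squared-Euclidean (k-means++) case.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(m, 0.1, size=(20, 2)) for m in (0.0, 5.0, 10.0)])
w = np.ones(len(X))
seeds = breg_plus_plus_seeds(X, w, 3, lambda x, y: np.sum((x - y) ** 2) / 2,
                             rng=np.random.default_rng(2))
```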
+
+# 3 Implications of BREG++
+
+We now describe some important implications of our BREG++ method derived above.
+
+## 3.1 Information Theoretic Clustering
+
+With $f(x) = \sum_j x_j \log x_j$, the Bregman divergence $B_f$ becomes the (un-normalized) Kullback-Leibler divergence, and Bregman k-means reduces to information theoretic clustering (ITC). Even though local and greedy methods for ITC have been well studied [6, 19, 37], approximation algorithms for it have only been developed very recently [2, 14] (both papers are from 2008).
+
+For ITC, our assumptions on bounded $\sigma_1$ and $\sigma_2$ are equivalent to those of [2], as mentioned previously. Under these assumptions we obtain an efficient k-means type $O(\log K)$ approximation algorithm, while Ackermann et al. [2] obtain a $(1+\epsilon)$ approximation algorithm with an impractical running time of $O(dn^2(\frac{K}{\epsilon})^{O(1)})$. The even more recent paper of Ackermann and Blömer [1] yields an ITC algorithm faster than that of [2], but it still has an impractical running time of $O(dKn + d^2 2^{(K/\epsilon)^{O(1)}} \log^{K+2} n)$.
+
+Chaudhuri and McGregor [14] develop an approximation algorithm for ITC that does not make any assumptions on the data. They first lower bound the KL-divergence using a Hellinger distance, then cluster approximately using this distance, before recovering clusters for KL. Their method is clever, but leads to a non-constant approximation ratio of $O(\log n)$ ($n$ is the number of input points). Obtaining an $O(\log K)$ algorithm for ITC without any assumptions on the data therefore remains an open problem—though we suspect that without additional assumptions, ITC might be inapproximable to better than a polylog factor.
+
+**Details.** Observe that given the definition of the Bregman divergence (2.1), a Taylor expansion of $f$ around $\boldsymbol{y}$ immediately yields
+
+$$ B_f(\boldsymbol{x}, \boldsymbol{y}) = \frac{1}{2}(\boldsymbol{x} - \boldsymbol{y})^T \nabla^2 f(\xi_{\boldsymbol{x},\boldsymbol{y}})(\boldsymbol{x} - \boldsymbol{y}), $$
+
+where $\xi_{\boldsymbol{x},\boldsymbol{y}}$ is some point between $\boldsymbol{x}$ and $\boldsymbol{y}$. Since $\nabla^2 f$ is positive definite, the constants $\sigma_1$ and $\sigma_2$ can be obtained by bounding the minimum and maximum eigenvalues of $\nabla^2 f$. For ITC, one assumes the input data to be normalized, i.e., for each $\boldsymbol{x} \in \mathcal{X}$, $\sum_j x_j = 1$. Since the Hessian for the KL-divergence is $\nabla^2 f(\boldsymbol{x}) = \text{Diag}(x_1^{-1}, \dots, x_d^{-1})$ at a point $\boldsymbol{x} = (x_1, \dots, x_d)$, its maximum eigenvalue is $(1 - \|1 - \boldsymbol{x}\|_\infty)^{-1}$ and the minimum eigenvalue is $\|\boldsymbol{x}\|_\infty^{-1}$.
+
+¹An “uncovered” cluster is one from which a center has not been chosen by the weighted farthest-first sampling procedure.
+
+Since the interpolating point $\xi_{x,y}$ lies in $\text{conv}(\mathcal{X})$ for $x$, $y \in \text{conv}(\mathcal{X})$, we can select
+
+$$\sigma_1^{-1} = \max_{y \in \text{conv}(\mathcal{X})} \|y\|_\infty = \max_{x \in \mathcal{X}} \|x\|_\infty.$$
+
+Analogously we have,
+
+$$\sigma_2^{-1} = \min_{y \in \text{conv}(\mathcal{X})} (1 - \|1 - y\|_{\infty}) = \min_{x \in \mathcal{X}} (1 - \|1 - x\|_{\infty}).$$
+
+The reduction from $\text{conv}(\mathcal{X})$ to $\mathcal{X}$ results from each $y \in \text{conv}(\mathcal{X})$ being a convex combination of the data points, whereby its coordinates are a convex combination of the data point coordinates. This convex combination is maximized by putting all weight on the maximum component. Thus, $\sigma_1 \ge 1$ and $\sigma_2$ corresponds to the inverse of the minimum coordinate entry $\gamma$ in the data set, so $\sigma_2/\sigma_1 \le \gamma^{-1}$. The algorithms of Ackermann et al. [2] and Ackermann and Blömer [1] also have a similar dependence on $\gamma$.
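To illustrate, the constants can be read off directly from a normalized data matrix (a toy sketch with a made-up matrix; not part of the development):

```python
import numpy as np

X = np.array([[0.5, 0.3, 0.2],
              [0.1, 0.6, 0.3],
              [0.2, 0.2, 0.6]])          # rows lie on the probability simplex

sigma1_inv = X.max()                     # max_{x in X} ||x||_inf
sigma2_inv = 1 - np.abs(1 - X).max()     # min_{x in X} (1 - ||1 - x||_inf)
gamma = X.min()                          # smallest coordinate in the data set

sigma1, sigma2 = 1 / sigma1_inv, 1 / sigma2_inv
assert sigma1 >= 1                       # since every coordinate is at most 1
assert abs(sigma2 - 1 / gamma) < 1e-12   # sigma2 is the inverse minimum entry
assert sigma2 / sigma1 <= 1 / gamma + 1e-12
```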
+
+## 3.2 Mixture Modeling on Exponential Families
+
+The parametric mixture modeling problem entails fitting a mixture of $K$ distributions from a pre-defined family to a set of observations. Let $x$ denote an observation, $\pi$ a prior over the mixture components, and $\boldsymbol{\theta}_z$ the parameters corresponding to the $z$-th mixture component. Then, a mixture model assumes the following generative process: (i) sample $z \sim \pi$, and (ii) sample $x \sim p(x|\boldsymbol{\theta}_z)$. Given a set of observations, the basic mixture modeling problem is that of finding the parameters $\Theta = (\pi, \{\boldsymbol{\theta}_z\}_{z=1}^{K})$ such that $\log p(x|\Theta)$ is maximized.² More formally, if $X$ denotes the random variable corresponding to the observations and $Z$ denotes the one corresponding to the mixture components, a direct calculation [7] shows that the problem is equivalent to maximizing
+
+$$J_{MM}(\Theta) = E_{Z|X}[\log p(X, Z|\Theta)] + H(Z|X) \quad (3.1)$$
+
+over $\Theta$, where
+
+$$p(z|\boldsymbol{x}) = \frac{\pi_z p(\boldsymbol{x}|\boldsymbol{\theta}_z)}{p(\boldsymbol{x})},$$
+
+and $H(\cdot)$ denotes the Shannon entropy of $Z|X$. For the purposes of analysis one can focus on the expected log-likelihood of the data, i.e., the first term in (3.1). In practice, for several real datasets, the distribution $p(z|\boldsymbol{x})$ is typically skewed in that it has a high value $\approx 1$ for some $z^*$, and low values $\approx 0$ for other $z$, so that the entropy $H(Z|X)$ is rather small. As a result, ignoring the entropy term may be reasonable in an application. A more theoretically well motivated justification can be given by considering “hard clustering” for the mixture modeling problem, where we focus on the family of posterior distributions
+
+$$q(z|\boldsymbol{x}) = \begin{cases} 1, & \text{if } p(z|\boldsymbol{x}) > p(z'|\boldsymbol{x}), \forall z' \neq z, \\ 0, & \text{otherwise.} \end{cases}$$
+
+For simplicity, we let $z_i^* = z$ such that $p(z|x_i) > p(z'|x_i), \forall z' \neq z$. If $J_Q(\Theta)$ is the corresponding objective, then following [7] we have
+
+$$J_{MM}(\Theta) - H(Z|X) \le J_Q(\Theta) \le J_{MM}(\Theta).$$
+
+Thus, $J_Q(\Theta)$ forms a tight lower bound to the original objective $J_{MM}(\Theta)$, especially when entropy of $Z|X$ is small, which is true for several real world problems.
+
+²Note that the objective $\log p(x|\Theta)$ is always non-positive since $p(x|\Theta) \le 1$. All objective functions in this section share the same property.
+
+We focus on mixture models over exponential family distributions, whose density functions can be written as
+
+$$p(\boldsymbol{x}|\boldsymbol{\theta}) = \exp(\langle \boldsymbol{x}, \boldsymbol{\theta} \rangle - \psi(\boldsymbol{\theta}))\,p_0(\boldsymbol{x}),$$
+
+where $\psi$ is a convex function of Legendre [39] type known as the cumulant, $\theta$ is the natural parameter, and $p_0(x)$ is a base measure. A particular choice of $\psi$ determines a family, such as Gaussian or Poisson, while a particular choice of $\boldsymbol{\theta}$ determines a specific distribution in the family. The expectation parameter $\boldsymbol{\mu} = E[X]$ of an exponential family distribution is uniquely tied to the natural parameter through a Legendre transform $\boldsymbol{\mu} = \nabla\psi(\boldsymbol{\theta})$, and $\boldsymbol{\theta} = \nabla\phi(\boldsymbol{\mu})$ where $\phi$ is the Legendre conjugate of $\psi$ [39].
+
+Our results below rely on the following key connection between exponential family distributions and Bregman divergences [8]. The density function $p(x|\theta_z)$ of an exponential family distribution can be uniquely written as
+
+$$p(\boldsymbol{x}|\boldsymbol{\theta}_z) = \exp(-B_{\phi}(\boldsymbol{x}, \boldsymbol{\mu}_z))\,f_0(\boldsymbol{x}), \quad (3.2)$$
+
+where $\phi$ is the Legendre conjugate of $\psi$, and $\boldsymbol{\mu}_z = E[X|\boldsymbol{\theta}_z] = \nabla\psi(\boldsymbol{\theta}_z)$ is the corresponding expectation parameter.
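As a concrete instance of (3.2), consider the Poisson family (a numerical sketch; the formulas for $\psi$, $\phi$, and $f_0$ below are the standard ones from the exponential-family/Bregman correspondence of [8], and the check itself is ours):

```python
import math

# Poisson: psi(theta) = exp(theta), mu = exp(theta); the Legendre conjugate
# is phi(mu) = mu*log(mu) - mu, giving B_phi(x, mu) = x*log(x/mu) - x + mu.
def poisson_pmf(x, mu):
    return mu ** x * math.exp(-mu) / math.factorial(x)

def B_phi(x, mu):
    xlog = x * math.log(x / mu) if x > 0 else 0.0
    return xlog - x + mu

def f0(x):
    # base term f_0(x) = exp(phi(x)) / x!   (with the convention phi(0) = 0)
    phi_x = x * math.log(x) - x if x > 0 else 0.0
    return math.exp(phi_x) / math.factorial(x)

# Verify p(x|theta) = exp(-B_phi(x, mu)) f_0(x) for several x and mu.
for x in range(0, 8):
    for mu in (0.5, 2.0, 4.5):
        assert abs(poisson_pmf(x, mu) - math.exp(-B_phi(x, mu)) * f0(x)) < 1e-9
```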
+
+Now we use BREG++ to optimize $J_Q(\boldsymbol{\Theta})$ based on the expectation parameter $\boldsymbol{\mu}_z$ of each component $z=1, \dots, K$. The only additional step is to set (for each $i$) $z_i^* = z$ if $\mathbf{x}_i$ is assigned to cluster $z$.
+
+**Lemma 3.1.** Let $\boldsymbol{\Theta}_{MM}$ denote the natural parameters corresponding to the final mean parameters after convergence, and let $\pi_z = |C_z|/n$, where $|C_z|$ denotes the number of elements in the z-th cluster. Then, (recall $J_Q$ is negative)
+
+$$E[J_Q(\boldsymbol{\Theta}_{MM})] \geq 4 \frac{\sigma_2}{\sigma_1} \left(1 + \frac{\sigma_2}{\sigma_1}\right) J_Q(\boldsymbol{\Theta}^*),$$
+
+where $\boldsymbol{\Theta}^*$ denotes an optimum set of parameters.
+
+*Proof.* By definition (note that $H_Q(Z|X) = 0$ since $q$ is deterministic),
+
+$$
+\begin{align*}
+\max_{\boldsymbol{\Theta}} J_Q(\boldsymbol{\Theta}) &= \max_{\boldsymbol{\Theta}} E_{Z|X \sim Q}[\log p(X, Z|\boldsymbol{\Theta})] + H_Q(Z|X) \\
+&= \max_{\boldsymbol{\Theta}} \frac{1}{n} \sum_{i=1}^{n} \log p(\mathbf{x}_i | \boldsymbol{\theta}_{z_i^*}) \\
+&= \max_{\boldsymbol{\mu}} -\frac{1}{n} \sum_{i=1}^{n} B_{\phi}(\mathbf{x}_i, \boldsymbol{\mu}_{z_i^*}) = -\min_{\boldsymbol{\mu}} \frac{1}{n} \sum_{i=1}^{n} B_{\phi}(\mathbf{x}_i, \boldsymbol{\mu}_{z_i^*}),
+\end{align*}
+$$
+
+and the minimization is precisely the objective function for Bregman k-means. Thus, the result follows from Lemma 2.3, with the direction of the inequality flipped because the minimization problem is converted into a maximization by multiplying both sides by $-1$. $\square$
+
+# 4 Weighted Kernel K-means
+
+In this section we present WKKM++, an $O(\log K)$ approximation algorithm for the weighted kernel k-means (WKKM) problem [20].
+
+Let $\mathcal{X} = \{\mathbf{x}_1, \dots, \mathbf{x}_n\}$ denote the set of input data points (which may or may not be available explicitly), and let $\phi: \mathbf{x} \mapsto \phi(\mathbf{x}) \in \mathcal{H}$ denote the feature map that takes $\mathbf{x}$ to its corresponding point in an RKHS $\mathcal{H}$. Further, let $w_1, w_2, \dots, w_n$ denote non-negative weights corresponding to each input point.
+
+WKKM seeks a clustering $C = \{C_1, C_2, \dots, C_K\}$ such that the following objective is minimized
+
+$$J(C) = \sum_{h=1}^{K} \sum_{\mathbf{x}_i \in C_h} w_i ||\phi(\mathbf{x}_i) - \boldsymbol{\mu}_h||^2, \quad (4.1)$$
+
+where $\mu_h$ is the weighted mean of cluster $C_h$, i.e.,
+
+$$ \mu_h = \frac{\sum_{\mathbf{x}_i \in C_h} w_i \phi(\mathbf{x}_i)}{\sum_{\mathbf{x}_i \in C_h} w_i}. \quad (4.2) $$
+
+For a given clustering, the weighted centroid $\mu_h$ (4.2) is optimal; formally stated
+
+**Lemma 4.1 (Optimality of Mean).** Let $\mathcal{A}$ be a set of points with weighted mean $\boldsymbol{\mu}_{\mathcal{A}}$, and let $\phi(\mathbf{z})$ be an arbitrary point. Then,
+
+$$ \begin{aligned} & \sum_{\mathbf{x}_i \in \mathcal{A}} w_i \|\phi(\mathbf{x}_i) - \phi(\mathbf{z})\|^2 \\ &= \sum_{\mathbf{x}_i \in \mathcal{A}} w_i \|\phi(\mathbf{x}_i) - \boldsymbol{\mu}_{\mathcal{A}}\|^2 + W \|\boldsymbol{\mu}_{\mathcal{A}} - \phi(\mathbf{z})\|^2, \end{aligned} $$
+
+where $W = \sum_{\mathbf{x}_i \in \mathcal{A}} w_i$.
+
+*Proof.* Elementary; similar to Lemma 2.1. $\square$
+
+The WKKM++ algorithm proceeds exactly like the BREG++ algorithm of Section 2. Specifically, WKKM++ selects $K$ initial means from amongst the data points using a particular sampling procedure. These $K$ means are then used as an initialization for the WKKM algorithm:
+
+1. For each $i = 1..n$, assign point $\mathbf{x}_i$ to its nearest mean and update the corresponding cluster $C_h$.
+
+2. For each $h = 1..K$ update $\mu_h$ using (4.2).
+
+3. Repeat steps 1 and 2 until convergence.
+
+This standard approach is guaranteed to monotonically decrease the objective function. However, the crux of the analysis is in showing that after just the weighted farthest-first initialization based on (4.3) and (4.4), WKKM++ comes to within a factor $O(\log K)$ of the optimal. We elaborate on this below.
+
+The first mean $\mu_1 = \phi(\mathbf{x}_i)$ is chosen uniformly at random from the data points. The remaining means are chosen with a weighted farthest-first sampling procedure outlined below.
+
+First, we define the weighting function
+
+$$ D(\mathbf{x}) = \min_{\boldsymbol{\mu} \in C} \| \phi(\mathbf{x}) - \boldsymbol{\mu} \|, \quad (4.3) $$
+
+whose square is easily computable using dot-products only, because
+
+$$ \|\phi(\mathbf{x}) - \boldsymbol{\mu}\|^2 = \phi(\mathbf{x})^T \phi(\mathbf{x}) + \boldsymbol{\mu}^T \boldsymbol{\mu} - 2\phi(\mathbf{x})^T \boldsymbol{\mu}, $$
+
+and $\boldsymbol{\mu}$ is just one of the mapped points $\phi(\mathbf{x}_i)$ during the initialization.
+
+At a given stage in the algorithm, suppose we wish to select the next mean $\mu_h$. The probability that a given point $\phi(\mathbf{x}_i)$ is chosen to be $\mu_h$ is set to
+
+$$ P(\boldsymbol{\mu}_h = \phi(\mathbf{x}_i)) = \frac{w_i D(\mathbf{x}_i)^2}{\sum_{j=1}^{n} w_j D(\mathbf{x}_j)^2}. \quad (4.4) $$
+
+The probability (4.4) is also computable using dot-products only, as it involves only the weighting function $D(\mathbf{x})$, which itself is so computable. We repeatedly select means using (4.3) and (4.4) until we have selected $K$ different means.
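A minimal sketch of this seeding procedure, phrased purely in terms of the Gram matrix (our own illustration and naming, not the authors' code; the $D$-weighting follows (4.4)):

```python
import numpy as np

def wkkm_pp_seeds(Kmat, w, K, rng=None):
    """Seeding for weighted kernel k-means using only the Gram matrix.

    During seeding every candidate mean is a mapped data point, so
    ||phi(x_i) - phi(x_j)||^2 = K_ii + K_jj - 2 K_ij.  Each center after
    the first is drawn with probability proportional to the weighted
    squared kernel distance to the closest chosen center."""
    rng = rng or np.random.default_rng()
    n = Kmat.shape[0]
    diag = np.diag(Kmat).copy()
    centers = [int(rng.integers(n))]                 # first mean: uniform
    D2 = np.clip(diag + diag[centers[0]] - 2 * Kmat[:, centers[0]], 0, None)
    while len(centers) < K:
        p = w * D2
        c = int(rng.choice(n, p=p / p.sum()))
        centers.append(c)
        D2 = np.minimum(D2, np.clip(diag + diag[c] - 2 * Kmat[:, c], 0, None))
    return centers

# Linear kernel k(x, y) = x*y on three well-separated 1-D blobs.
rng = np.random.default_rng(3)
X = np.concatenate([rng.normal(m, 0.05, 15) for m in (1.0, 3.0, 6.0)])
Kmat = np.outer(X, X)
seeds = wkkm_pp_seeds(Kmat, np.ones(len(X)), 3, rng=np.random.default_rng(4))
```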
+
+Now we proceed to show that the weighted farthest-first initialization as described above is sufficient to guarantee an $O(\log K)$ factor. First we show that WKKM++ is competitive in those clusters out of the optimal clustering $C_{OPT}$ from which it happens to sample a center.
+
+**Lemma 4.2** (First mean). Let $\mathcal{A}$ be an arbitrary cluster in $C_{OPT}$, and let $\mathcal{C}$ be the clustering with just one center, which is chosen with probability proportional to the weight of points in $\mathcal{A}$. Then, if $J(\mathcal{A})$ is the contribution of points in $\mathcal{A}$ to the final objective, $E[J(\mathcal{A})] \le 2J_{OPT}(\mathcal{A})$.
+
+*Proof.* Let $\mu_A$ denote the weighted mean of cluster $\mathcal{A}$. Since $C_{OPT}$ is optimal, it must be using $\mu_A$ as its center. Let $W_A = \sum_{\mathbf{x}_i \in A} w_i$. With the random initialization, assuming all points in $\mathcal{A}$ stay assigned to the cluster $\mathcal{A}$ till the end, using Lemma 4.1 we see that $E[J(\mathcal{A})]$ is given by
+
+$$
+\begin{align*}
+& \sum_{\mathbf{x}_0 \in A} \frac{w_0}{W_A} \left( \sum_{\mathbf{x}_i \in A} w_i \|\phi(\mathbf{x}_i) - \phi(\mathbf{x}_0)\|^2 \right) \\
+&= \sum_{\mathbf{x}_0 \in A} \frac{w_0}{W_A} \left( \sum_{\mathbf{x}_i \in A} w_i \|\phi(\mathbf{x}_i) - \boldsymbol{\mu}_A\|^2 + W_A \|\phi(\mathbf{x}_0) - \boldsymbol{\mu}_A\|^2 \right) \\
+&= \sum_{\mathbf{x}_i \in A} w_i \|\phi(\mathbf{x}_i) - \boldsymbol{\mu}_A\|^2 + \sum_{\mathbf{x}_0 \in A} w_0 \|\phi(\mathbf{x}_0) - \boldsymbol{\mu}_A\|^2 \\
+&= 2J_{OPT}(\mathcal{A}).
+\end{align*}
+$$
+
+Since the contribution of each point can only decrease in subsequent WKKM++ iterations, we have
+$E[J(\mathcal{A})] \le 2J_{OPT}(\mathcal{A})$. $\square$
+
+Next we show how WKKM++ behaves for the remaining centers that it picks.
+
+**Lemma 4.3 (Other means).** Let $\mathcal{A}$ be an arbitrary cluster in $C_{OPT}$, and let $\mathcal{C}$ be an arbitrary clustering. If we add a random point from $\mathcal{A}$ as a center to $\mathcal{C}$ using the farthest-first sampling, then $E[J(\mathcal{A})] \le 8J_{OPT}(\mathcal{A})$.
+
+*Proof.* After choosing a center $\phi(\mathbf{x}_0)$, any point $\mathbf{x}_i \in \mathcal{A}$ will contribute $w_i \min(D(\mathbf{x}_i), \|\phi(\mathbf{x}_i) - \phi(\mathbf{x}_0)\|)^2$ to the objective. Since subsequent assignments can only decrease the contribution, we have
+
+$$ E[J(\mathcal{A})] \leq \sum_{\mathbf{x}_0 \in A} \frac{w_0 D(\mathbf{x}_0)^2}{\sum_{\mathbf{x}' \in A} w_{\mathbf{x}'} D(\mathbf{x}')^2} \times \sum_{\mathbf{x}_i \in A} w_i \min(D(\mathbf{x}_i), \|\phi(\mathbf{x}_i) - \phi(\mathbf{x}_0)\|)^2. $$
+
+Let $c_0, c_i$ be the closest centers to $\phi(x_0)$, $\phi(x_i)$, respectively. Then, from the triangle inequality we have
+
+$$ \| \phi(\mathbf{x}_0) - c_i \| \leq \| \phi(\mathbf{x}_0) - \phi(\mathbf{x}_i) \| + \| \phi(\mathbf{x}_i) - c_i \| . $$
+
+Further, from the Cauchy-Schwarz inequality we have
+
+$$ \| \phi(\mathbf{x}_0) - c_i \|^{2} \leq 2 \| \phi(\mathbf{x}_0) - \phi(\mathbf{x}_i) \|^{2} + 2 \| \phi(\mathbf{x}_i) - c_i \|^{2}. $$
+
+Noting that $D(\mathbf{x}_0) = \| \phi(\mathbf{x}_0) - c_0 \| \leq \| \phi(\mathbf{x}_0) - c_i \|$, we hence have
+
+$$ D(\mathbf{x}_0)^2 \leq 2\|\phi(\mathbf{x}_0) - \phi(\mathbf{x}_i)\|^2 + 2D(\mathbf{x}_i)^2. $$
+
+Multiplying both sides by $w_i$ and summing over $\mathbf{x}_i \in \mathcal{A}$ we obtain
+
+$$
+\begin{align*}
+W_A D(\mathbf{x}_0)^2 &\le 2 \sum_{\mathbf{x}_i \in A} \left( w_i \|\phi(\mathbf{x}_i) - \phi(\mathbf{x}_0)\|^2 + w_i D(\mathbf{x}_i)^2 \right) \\
+&\Rightarrow w_0 D(\mathbf{x}_0)^2 \le 2 \frac{w_0}{W_A} \sum_{\mathbf{x}_i \in A} \left( w_i \|\phi(\mathbf{x}_i) - \phi(\mathbf{x}_0)\|^2 + w_i D(\mathbf{x}_i)^2 \right).
+\end{align*}
+$$
+
+Hence, $E[J(\mathcal{A})]$ is upper bounded by
+
+$$
+\begin{align*}
+& 2 \sum_{\mathbf{x}_0 \in \mathcal{A}} \frac{w_0}{W_A} \left( \frac{\sum_{\mathbf{x}_i \in \mathcal{A}} w_i ||\phi(\mathbf{x}_i) - \phi(\mathbf{x}_0)||^2}{\sum_{\mathbf{x}' \in \mathcal{A}} w_{\mathbf{x}'} D(\mathbf{x}')^2} \sum_{\mathbf{x}_i \in \mathcal{A}} w_i \min(D(\mathbf{x}_i), ||\phi(\mathbf{x}_i) - \phi(\mathbf{x}_0)||)^2 \right) \\
+& + 2 \sum_{\mathbf{x}_0 \in \mathcal{A}} \frac{w_0}{W_A} \left( \frac{\sum_{\mathbf{x}_i \in \mathcal{A}} w_i D(\mathbf{x}_i)^2}{\sum_{\mathbf{x}' \in \mathcal{A}} w_{\mathbf{x}'} D(\mathbf{x}')^2} \sum_{\mathbf{x}_i \in \mathcal{A}} w_i \min(D(\mathbf{x}_i), ||\phi(\mathbf{x}_i) - \phi(\mathbf{x}_0)||)^2 \right) \\
+& \leq 2 \sum_{\mathbf{x}_0 \in \mathcal{A}} \frac{w_0}{W_A} \sum_{\mathbf{x}_i \in \mathcal{A}} w_i ||\phi(\mathbf{x}_i) - \phi(\mathbf{x}_0)||^2 + 2 \sum_{\mathbf{x}_0 \in \mathcal{A}} \frac{w_0}{W_A} \sum_{\mathbf{x}_i \in \mathcal{A}} w_i ||\phi(\mathbf{x}_i) - \phi(\mathbf{x}_0)||^2 \\
+& = 4 \sum_{\mathbf{x}_0 \in \mathcal{A}} \frac{w_0}{W_A} \sum_{\mathbf{x}_i \in \mathcal{A}} w_i ||\phi(\mathbf{x}_i) - \phi(\mathbf{x}_0)||^2 \\
+& \leq 8 J_{\text{OPT}}(\mathcal{A}),
+\end{align*}
+$$
+
+where in the first expression we used $\min(D(\boldsymbol{x}_i), \|\phi(\boldsymbol{x}_i) - \phi(\boldsymbol{x}_0)\|)^2 \le D(\boldsymbol{x}_i)^2$, in the second expression $\min(D(\boldsymbol{x}_i), \|\phi(\boldsymbol{x}_i) - \phi(\boldsymbol{x}_0)\|)^2 \le \|\phi(\boldsymbol{x}_i) - \phi(\boldsymbol{x}_0)\|^2$, and the last line follows from Lemma 4.2. $\square$
+**Theorem 4.4 (Approximation Ratio).** A clustering *C* obtained via WKKM++ satisfies
+
+$$ E[J(C)] \leq 8(\log K + 2)J_{OPT}. $$
+
+*Proof.* Follows exactly the proof structure of Section 2. $\square$
+
+**Remark 4.1 (Implications).** WKKM++ has two important implications. First, by exploiting the equivalence between several graph-cut criteria and WKKM [20], one can hope to obtain better graph-cuts using WKKM++. A formal proof of this observation however remains an open problem. Second, the connection of WKKM to semi-supervised graph-clustering [30] leads to a potentially improved algorithm for semi-supervised clustering.
+
+# 5 Approximation Algorithms for Tensor Clustering and Co-clustering
+
+In its simplest formulation, co-clustering refers to the simultaneous partitioning of the rows and columns of the input data matrix into $K \times L$ co-clusters (sub-matrices). Co-clustering, also called bi-clustering [15, 27], has witnessed increasing interest over the years (see [10] and references therein). Anagnostopoulos et al. [4] seem to be the first to present an approximation algorithm for co-clustering based on a minimum sum-squared residue criterion of [16]. They posed two open questions:
+
+* Could one extend their ideas to obtain approximation algorithms for Bregman co-clustering?
+
+* Could one design approximation algorithms for 3-way co-clustering of a tensor in $\mathbb{R}^{n_1 \times n_2 \times n_3}$?
+
+Below we answer both these questions in the affirmative, leading to the first (to our knowledge) approximation algorithms for Bregman matrix and tensor co-clustering. In fact, our results hold for arbitrary *m*-way tensor co-clustering, not just the 3-way case.
+
+We directly develop an approximation algorithm for *m*-way tensor co-clustering (hereafter *clustering*) that yields an approximation algorithm for Bregman co-clustering, which is nothing but tensor clustering for *m* = 2. Specifically, we show a competitiveness of $O(m \log K)$ for *m*-way Bregman co-clustering.
+
+Tensors are well-studied in multilinear algebra [24], but they are not so widespread in the machine learning community. Therefore, to facilitate an easier understanding of our proofs, we briefly summarize some important tensor notation below for those unfamiliar with it.
+
+## 5.1 Background on Tensors
+
+Most of the material in this section is taken from the well-written paper of de Silva and Lim [18], whose notation turns out to be particularly suitable for our analysis. An order-$m$ tensor $\mathbf{A}$ may be viewed as an element of the vector space $\mathbb{R}^{n_1 \times \cdots \times n_m}$. A particular component of the tensor $\mathbf{A}$ is represented by the multiply-indexed value $a_{i_1 i_2 \dots i_m}$, where $i_j = 1 \dots n_j$ for $1 \le j \le m$.
+
+**Multilinear matrix multiplication.** The most important operation that we do with tensors is that of multilinear matrix multiplication, which is a generalization of the familiar concept of matrix multiplication. Matrices act on other matrices by either left or right multiplications. For an order-3 tensor, there are three dimensions along which a matrix may act via matrix multiplication. For example, given an order-3 tensor $\mathbf{A} \in \mathbb{R}^{n_1 \times n_2 \times n_3}$, and three matrices $\mathbf{P} \in \mathbb{R}^{p_1 \times n_1}$, $\mathbf{Q} \in \mathbb{R}^{p_2 \times n_2}$, and $\mathbf{R} \in \mathbb{R}^{p_3 \times n_3}$, *multilinear matrix multiplication* is the operation defined by the action of these three matrices on the different dimensions of $\mathbf{A}$ that yields the tensor $\mathbf{A}' \in \mathbb{R}^{p_1 \times p_2 \times p_3}$. Formally, the entries of the tensor $\mathbf{A}'$ are given by
+
+$$ a'_{lmn} = \sum_{i,j,k=1}^{n_1,n_2,n_3} p_{li} q_{mj} r_{nk} a_{ijk}, \quad (5.1) $$
+
+and this operation is written compactly as
+
+$$ A' = (P, Q, R) \cdot A. \quad (5.2) $$
+
+The notation (5.2) is particularly nice and may be viewed as the *group action* of $(\mathbf{P}, \mathbf{Q}, \mathbf{R})$ on $\mathbf{A}$ (group action refers to the situation when a group with a particular algebraic structure “acts” on another set; for the multilinear multiplication notation one can view it as the set $G = \mathbb{R}^{p_1 \times n_1} \times \mathbb{R}^{p_2 \times n_2} \times \mathbb{R}^{p_3 \times n_3}$ acting on the set $X = \mathbb{R}^{n_1 \times n_2 \times n_3}$). Addition in $G$ is defined entry-wise:
+
+$$ (\mathbf{P}_1, \mathbf{Q}_1, \mathbf{R}_1) + (\mathbf{P}_2, \mathbf{Q}_2, \mathbf{R}_2) = (\mathbf{P}_1 + \mathbf{P}_2, \mathbf{Q}_1 + \mathbf{Q}_2, \mathbf{R}_1 + \mathbf{R}_2). $$
+
+Multilinear multiplication extends naturally to tensors of arbitrary (finite) order. If $\mathbf{A} \in \mathbb{R}^{n_1 \times n_2 \times \cdots \times n_m}$, and $\mathbf{P}_1 \in \mathbb{R}^{p_1 \times n_1}$, $\dots$, $\mathbf{P}_m \in \mathbb{R}^{p_m \times n_m}$, then $\mathbf{A}' = (\mathbf{P}_1, \dots, \mathbf{P}_m) \cdot \mathbf{A}$ has entries
+
+$$ a'_{i_1 i_2 \dots i_m} = \sum_{j_1, \dots, j_m = 1}^{n_1, \dots, n_m} p_{i_1 j_1}^{(1)} \cdots p_{i_m j_m}^{(m)} a_{j_1 \dots j_m}, \quad (5.3) $$
+
+where $p_{ij}^{(k)}$ denotes the $(i,j)$-entry of matrix $\mathbf{P}_k$.
+
+**Example 5.1 (Matrix Multiplication).** Let $\mathbf{A} \in \mathbb{R}^{n_1 \times n_2}$, $\mathbf{P} \in \mathbb{R}^{p_1 \times n_1}$, and $\mathbf{Q} \in \mathbb{R}^{q_1 \times n_2}$ be given. The matrix product $\mathbf{P}\mathbf{A}\mathbf{Q}^T$ can be written as the multilinear multiplication $(\mathbf{P}, \mathbf{Q}) \cdot \mathbf{A}$.
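The multilinear action (5.3) is straightforward to realize as a sequence of mode-wise contractions; the sketch below (our own illustration using NumPy, not part of the paper) checks the matrix case of Example 5.1:

```python
import numpy as np

def multilinear(mats, A):
    # (P_1, ..., P_m) . A : contract the second index of P_k with mode k of A,
    # one mode at a time (cf. (5.3)).
    out = A
    for k, P in enumerate(mats):
        out = np.moveaxis(np.tensordot(P, out, axes=(1, k)), 0, k)
    return out

rng = np.random.default_rng(5)
A = rng.standard_normal((4, 5))
P = rng.standard_normal((3, 4))
Q = rng.standard_normal((2, 5))

# Matrix case (Example 5.1): (P, Q) . A = P A Q^T
assert np.allclose(multilinear([P, Q], A), P @ A @ Q.T)

# Order-3 case: shapes transform as (n1, n2, n3) -> (p1, p2, p3)
T = rng.standard_normal((4, 5, 6))
R = rng.standard_normal((7, 6))
assert multilinear([P, Q, R], T).shape == (3, 2, 7)
```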
+
+**Example 5.2 (Basic Properties).** The following properties of multilinear multiplication are easily verified (and generalized to tensors of arbitrary order):
+
+1. **Linearity:** Let $\alpha, \beta \in \mathbb{R}$, and $\mathbf{A}$ and $\mathbf{B}$ be tensors with same dimensions, then
+
+$$ (\mathbf{P}, \mathbf{Q}) \cdot (\alpha \mathbf{A} + \beta \mathbf{B}) = \alpha (\mathbf{P}, \mathbf{Q}) \cdot \mathbf{A} + \beta (\mathbf{P}, \mathbf{Q}) \cdot \mathbf{B} $$
+
+2. **Product rule:** For matrices $\mathbf{P}_1, \mathbf{P}_2, \mathbf{Q}_1, \mathbf{Q}_2$ of appropriate dimensions, and a tensor $\mathbf{A}$ we have
+
+$$ (\mathbf{P}_1, \mathbf{P}_2) \cdot ((\mathbf{Q}_1, \mathbf{Q}_2) \cdot \mathbf{A}) = (\mathbf{P}_1 \mathbf{Q}_1, \mathbf{P}_2 \mathbf{Q}_2) \cdot \mathbf{A} $$
+
+3. **Multilinearity:** Let $\alpha, \beta \in \mathbb{R}$, and $\mathbf{P}, \mathbf{Q},$ and $\mathbf{R}$ be matrices of appropriate dimensions. Then, for a tensor $\mathbf{A}$ the following holds
+
+$$ (\mathbf{P}, \alpha\mathbf{Q} + \beta\mathbf{R}) \cdot \mathbf{A} = \alpha(\mathbf{P}, \mathbf{Q}) \cdot \mathbf{A} + \beta(\mathbf{P}, \mathbf{R}) \cdot \mathbf{A} $$
+
+**Inner Product:** The Frobenius norm is induced by the standard inner product on tensors, defined as
+
+$$
+\langle \mathbf{A}, \mathbf{B} \rangle = \sum_{i_1, \dots, i_m} a_{i_1 \dots i_m} b_{i_1 \dots i_m}, \qquad (5.4)
+$$
+
+so that $\|\mathbf{A}\|_F^2 = \langle \mathbf{A}, \mathbf{A} \rangle$ holds as usual. The following property of this inner product is easily verified
+
+$$
+\langle (\mathbf{P}_1, \ldots, \mathbf{P}_m) \cdot \mathbf{A}, (\mathbf{Q}_1, \ldots, \mathbf{Q}_m) \cdot \mathbf{B} \rangle = \langle \mathbf{A}, (\mathbf{P}_1^T \mathbf{Q}_1, \ldots, \mathbf{P}_m^T \mathbf{Q}_m) \cdot \mathbf{B} \rangle. \quad (5.5)
+$$
+
+*Proof:* Using definition (5.3) along with (5.4) we have
+
+$$
+\begin{align*}
+\langle (\mathbf{P}_1, \dots, \mathbf{P}_m) \cdot \mathbf{A}, (\mathbf{Q}_1, \dots, \mathbf{Q}_m) \cdot \mathbf{B} \rangle &= \sum_{i_1, \dots, i_m} \sum_{\substack{j_1, \dots, j_m \\ k_1, \dots, k_m}} p_{i_1 j_1}^{(1)} q_{i_1 k_1}^{(1)} \cdots p_{i_m j_m}^{(m)} q_{i_m k_m}^{(m)} a_{j_1 \dots j_m} b_{k_1 \dots k_m}, \\
+&= \sum_{\substack{j_1, \dots, j_m \\ k_1, \dots, k_m}} \left(\sum_{i_1} p_{i_1 j_1}^{(1)} q_{i_1 k_1}^{(1)}\right) \cdots \left(\sum_{i_m} p_{i_m j_m}^{(m)} q_{i_m k_m}^{(m)}\right) a_{j_1 \dots j_m} b_{k_1 \dots k_m} \\
+&= \sum_{\substack{j_1, \dots, j_m \\ k_1, \dots, k_m}} (\mathbf{P}_1^T \mathbf{Q}_1)_{j_1 k_1} \cdots (\mathbf{P}_m^T \mathbf{Q}_m)_{j_m k_m} a_{j_1 \dots j_m} b_{k_1 \dots k_m} = \sum_{j_1, \dots, j_m} a_{j_1 \dots j_m} b'_{j_1 \dots j_m} = \langle \mathbf{A}, \mathbf{B}' \rangle,
+\end{align*}
+$$
+
+where $\mathbf{B}' = (\mathbf{P}_1^T \mathbf{Q}_1, \dots, \mathbf{P}_m^T \mathbf{Q}_m) \cdot \mathbf{B}$.
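+
+In the special case $m = 1$, where the tensors are vectors $\mathbf{a}$ and $\mathbf{b}$, identity (5.5) is just the familiar adjoint relation:
+
+$$ \langle \mathbf{P} \mathbf{a}, \mathbf{Q} \mathbf{b} \rangle = (\mathbf{P} \mathbf{a})^T \mathbf{Q} \mathbf{b} = \mathbf{a}^T (\mathbf{P}^T \mathbf{Q}) \mathbf{b} = \langle \mathbf{a}, (\mathbf{P}^T \mathbf{Q}) \mathbf{b} \rangle. $$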
+
+5.2 Tensor clustering
+
+Given the background above, we are now ready to formally state the Bregman tensor clustering problem.
+
+Let $\mathbf{A} \in \mathbb{R}^{n_1 \times \cdots \times n_m}$ be an order-$m$ tensor. Tensor clustering refers to a partitioning of $\mathbf{A}$ into sub-tensors or simply clusters, so that the entries of each cluster are as coherent as possible. The goal of (one simple version) of Bregman tensor clustering is to partition $\mathbf{A}$ into sub-tensors so that the sum of the Bregman divergences of individual elements in the sub-tensor to their corresponding cluster representatives is minimized. The cluster representatives turn out to be simply the means of the associated sub-tensors because we are minimizing Bregman divergences.
+
+A cluster (sub-tensor) is indexed by subsets of indices along each dimension. Let $I_j \subseteq \{1, \dots, n_j\}$ denote such an index subset for dimension $j$. Then the cluster representative corresponding to a sub-tensor is simply its mean, i.e.
+
+$$
+M_{I_1...I_m} = \frac{1}{|I_1| \cdots |I_m|} \sum_{i_1 \in I_1, \dots, i_m \in I_m} a_{i_1 \dots i_m}. \tag{5.6}
+$$
+
+Assuming that each dimension $j$ is partitioned into $k_j$ clusters, we can collect all the different representatives (each of which can be written in the form (5.6)) into a *means tensor* $\mathcal{M} \in \mathbb{R}^{k_1 \times \cdots \times k_m}$. Thus, we have a total of $\prod_j k_j$ tensor clusters. Let $\bar{\mathcal{C}}_j \in \{0, 1\}^{n_j \times k_j}$ denote the cluster indicator matrix for dimension $j$. In such a matrix, entry $i$ of column $k$ is one if and only if $i$ is in the $k$-th index set for tensor dimension $j$.
+
+Given this notation, we can now formally state the Bregman tensor clustering problem:
+
+$$
+\underset{\bar{\mathcal{C}}_1, \dots, \bar{\mathcal{C}}_m}{\text{minimize}} \quad B_f(\mathbf{A}, (\bar{\mathcal{C}}_1, \dots, \bar{\mathcal{C}}_m) \cdot \mathbf{M}), \quad \text{s.t.} \quad \bar{\mathcal{C}}_j \in \{0, 1\}^{n_j \times k_j}. \tag{5.7}
+$$
+
+Problem (5.7) can be rewritten in a more useful form. To that end, let $\mathcal{C}_j$ be the normalized cluster indicator matrix obtained from $\bar{\mathcal{C}}_j$ by normalizing the columns to have unit norm (i.e., $\mathcal{C}_j^T \mathcal{C}_j = I_{k_j}$). Then (5.7) may be rewritten as
+
+$$
+\operatorname*{minimize}_{\mathcal{C}_1, ..., \mathcal{C}_m} J(\mathcal{C}) = B_f(\mathbf{A}, (\mathbf{P}_1, ..., \mathbf{P}_m) \cdot \mathbf{A}), \quad \text{where } \mathbf{P}_j = \mathcal{C}_j \mathcal{C}_j^T, \tag{5.8}
+$$
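+
+For example, when $f(x) = x^2$ the Bregman divergence is $B_f(x, y) = x^2 - y^2 - 2y(x - y) = (x - y)^2$, so that (5.8) specializes to the squared Frobenius-norm objective
+
+$$ J(\mathcal{C}) = \|\mathbf{A} - (\mathbf{P}_1, \dots, \mathbf{P}_m) \cdot \mathbf{A}\|_F^2. $$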
+---PAGE_BREAK---
+
+In the sequel, we will refer to a clustering by its parametrization via both indicator and projection matrices. As a side remark: since $\mathcal{C}_j^T \mathcal{C}_j = I_{k_j}$, one can relax the “hard” clustering constraints on $\mathcal{C}_j$ to mere orthogonality constraints. Indeed, such relaxations form the basis of spectral relaxations for Euclidean K-means as well as co-clustering [16]. However, we will not use such relaxations in this paper.
+
+**Summary of the Algorithm:** Broadly speaking, our tensor clustering approximation algorithm is based on clustering along subsets of dimensions using a guaranteed approximation algorithm, and then combining the resulting clusterings to obtain tensor clusters. A particular example of such a scheme is to cluster along single dimensions using a method such as BREG++ clustering. Note that clustering along a single dimension of a tensor is a generalization of clustering the one-dimensional sub-tensors, i.e. vectors, of a matrix. In a tensor, we form groups of $(m-1)$-way sub-tensors, for instance, grouping matrices in a 3-way tensor. Thanks to the separability of our Bregman divergences, BREG++ directly extends to sub-tensor objects. Taking the 3-way example with sub-matrices $\mathbf{A}$, $\mathbf{B}$, recall that $B_f(\mathbf{A}, \mathbf{B}) = B_f(\operatorname{vec}(\mathbf{A}), \operatorname{vec}(\mathbf{B}))$. So we simply treat the sub-tensors as vectors.
+
+Our analysis below establishes that given a clustering algorithm that clusters along $t$ of the dimensions at a time with an approximation factor of $\alpha_t$, we can achieve an objective within $O((m/t) \frac{\sigma_2}{\sigma_1} \alpha_t)$ of the optimal; the scaling factors $\sigma_1$ and $\sigma_2$ are defined as
+
+$$ \sigma_1 = \inf_{\substack{x \in \{a_{ij}\} \\ y \in \text{conv}(\{a_{ij}\})}} \frac{B_f(x, y)}{(x - y)^2}, \quad \sigma_2 = \sup_{\substack{x \in \{a_{ij}\} \\ y \in \text{conv}(\{a_{ij}\})}} \frac{B_f(x, y)}{(x - y)^2}. \qquad (5.9) $$
+
+**Note:** For simplicity of exposition we assume that we cluster an order-$m$ tensor along $t$ dimensions at a time and to eventually combine the resulting $m/t$ sub-clusterings. Our analysis can be generalized (at the expense of laborious algebra) to the case where we cluster along partitions of varying sizes, say $\{t_1, \dots, t_r\}$, where $t_1 + \dots + t_r = m$.
+
+### 5.2.1 Analysis
+
+In this section we prove our tensor clustering approximation theorem, which yields as corollaries efficient approximation algorithms based on BREG++ for both Bregman co-clustering and Bregman tensor clustering.
+
+**Theorem 5.3 (Approximation guarantee).** Let $\mathbf{A}$ be the input order-$m$ tensor, and let $\mathcal{C}_j$ denote the clustering of $\mathbf{A}$ along the $j$th subset of $t$ dimensions ($1 \le j \le m/t$), as obtained by a multiway clustering algorithm with guarantee$^3$ $\alpha_t$. Let $\mathcal{C} = (\mathcal{C}_1, \dots, \mathcal{C}_{m/t})$ denote the induced tensor clustering. Then$^4$
+
+$$ J(\mathcal{C}) \le 2^{\log(m/t)} \frac{\sigma_2}{\sigma_1} \alpha_t J_{OPT}(m) $$
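+
+Since the analysis assumes $m = 2^h t$ for an integer $h$, the leading factor simplifies to $2^{\log(m/t)} = 2^h = m/t$, so the bound may equivalently be written as
+
+$$ J(\mathcal{C}) \le \frac{m}{t} \cdot \frac{\sigma_2}{\sigma_1} \alpha_t J_{OPT}(m). $$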
+
+**Corollary 5.4 (Approximation with BREG++).** Let $\mathbf{A}$ be the input order-$m$ tensor, and let $\mathcal{C}_j$ denote the clustering of $\mathbf{A}$ along dimension $j$ ($1 \le j \le m$), as obtained via BREG++. Let $\mathcal{C} = (\mathcal{C}_1, \dots, \mathcal{C}_m)$ denote the induced tensor clustering. Then,
+
+$$ E[J(\mathcal{C})] \le 4m \frac{\sigma_2^2}{\sigma_1^2} \left(1 + \frac{\sigma_2}{\sigma_1}\right) (\log K^* + 2) J_{OPT}(m), $$
+
+where $K^* = \max_{1 \le j \le m} k_j$ is the maximum number of clusters across all dimensions.
+
+³By “guarantee α”, we mean that the algorithm yields a solution that is guaranteed to have an objective value within a factor of $O(\alpha_t)$ of the optimum.
+
+⁴Here and in the sequel, the argument $m$ to $J_{OPT}$ denotes the best $m$-way clustering to avoid confusions about dimensions.
+---PAGE_BREAK---
+
+To establish Theorem 5.3, we will first bound the quality of a combination of dimension-wise clusterings for the Frobenius norm, with the help of the Pythagorean Property (Lemma 5.5). It is clear that compressing along only a subset of dimensions achieves lower divergence than clustering along all dimensions. Generalizing an idea of [4], we upper bound the full combined clustering in terms of the (approximately) optimal clustering along a subset of dimensions (Prop. 5.6). Finally, we extend this upper bound to general Bregman divergences and relate it to the optimal tensor clustering.
+
+In the analysis below we assume without loss of generality that $m = 2^h t$ for an integer $h$ (otherwise, pad in empty dimensions). We assume that we have access to an algorithm that can cluster along a subset of $t$ dimensions, while achieving an objective function within a factor $\alpha_t$ of the optimal (for those $t$ dimensions), i.e., $\alpha_t J_{OPT}(t)$. For example, when $t = 1$ we can use BREG++ (or in theory, even the approximation algorithms of [1]).
+
+**Lemma 5.5 (Pythagorean Property).** Let $\mathcal{P} = (\mathbf{P}_1, \dots, \mathbf{P}_t)$, $\mathcal{Q} = (\mathbf{P}_{t+1}, \dots, \mathbf{P}_m)$, and $\mathcal{P}^\perp = (\mathbf{I} - \mathbf{P}_1, \dots, \mathbf{I} - \mathbf{P}_t)$ be combinations of projection matrices $\mathbf{P}_j$. Then
+
+$$
+\|(\mathcal{P}, \mathcal{Q}) \cdot \mathbf{A} + (\mathcal{P}^{\perp}, \mathcal{R}) \cdot \mathbf{B}\|^2 = \|(\mathcal{P}, \mathcal{Q}) \cdot \mathbf{A}\|^2 + \|(\mathcal{P}^{\perp}, \mathcal{R}) \cdot \mathbf{B}\|^2, \quad (5.10)
+$$
+
+where $\mathcal{R}$ is some arbitrary combination of $m-t$ projection matrices.
+
+*Proof.* Using the inner-product (5.4) we can expand the left-hand side of (5.10) as
+
+$$
+\|(\mathcal{P}, \mathcal{Q}) \cdot \mathbf{A} + (\mathcal{P}^{\perp}, \mathcal{R}) \cdot \mathbf{B}\|^2 = \|(\mathcal{P}, \mathcal{Q}) \cdot \mathbf{A}\|^2 + \|(\mathcal{P}^{\perp}, \mathcal{R}) \cdot \mathbf{B}\|^2 + 2 \langle (\mathcal{P}, \mathcal{Q}) \cdot \mathbf{A}, (\mathcal{P}^{\perp}, \mathcal{R}) \cdot \mathbf{B} \rangle.
+$$
+
+With (5.5) the latter term simplifies to
+
+$$
+\langle (\mathcal{P}, \mathcal{Q}) \cdot \mathbf{A}, (\mathcal{P}^{\perp}, \mathcal{R}) \cdot \mathbf{B} \rangle = \langle \mathbf{A}, (\mathcal{P}^T \mathcal{P}^{\perp}, \mathcal{Q}^T \mathcal{R}) \cdot \mathbf{B} \rangle = 0,
+$$
+
+thus yielding the claim (5.10), since $\mathbf{P}_j^T(\mathbf{I} - \mathbf{P}_j) = \mathbf{P}_j - \mathbf{P}_j = 0$ for each orthogonal projection $\mathbf{P}_j$. $\square$
+
+**Some more notation.** Before diving into the proofs, we outline some more useful notation. Since we can only cluster along $t$ dimensions at a time, we recursively halve the initial set of $m$ dimensions until, after $\log(m/t) + 1$ recursions, the sets have length $t$. Let $l$ denote the level of recursion, starting at $l = \log(m/t) = h$ down to $l = 0$, where the sets have length $t$. At level $l$, the sets have length $2^l t$. Each clustering along a subset of $2^l t$ dimensions is represented by the corresponding $2^l t$ projection matrices. We denote their combination by $\mathcal{P}_i^l$. At level $l$, $i$ ranges from $1$ to $2^{h-l}$.
+
+For illustration, consider an order-8 tensor, and $t = 2$. Then $h = \log(m/t) = 2$, so we will need
+3 levels. For simplicity, we always partition the set of dimensions in the middle, i.e. $\{1, \dots, 8\}$ into
+$\{1, \dots, 4\}$ and $\{5, \dots, 8\}$ and so on, ending with $\{\{1, 2\}, \{3, 4\}, \{5, 6\}, \{7, 8\}\}$. The projection matrix
+for dimension $i$ is $\mathbf{P}_i$. The full tensor clustering is $(\mathbf{P}_1, \dots, \mathbf{P}_8)$. So here we get
+
+$$
+\begin{align*}
+\mathcal{P}_1^2 &= (\mathbf{P}_1, \mathbf{P}_2, \mathbf{P}_3, \mathbf{P}_4, \mathbf{P}_5, \mathbf{P}_6, \mathbf{P}_7, \mathbf{P}_8) \\
+\mathcal{P}_1^1 &= (\mathbf{P}_1, \mathbf{P}_2, \mathbf{P}_3, \mathbf{P}_4), & \mathcal{P}_2^1 &= (\mathbf{P}_5, \mathbf{P}_6, \mathbf{P}_7, \mathbf{P}_8) \\
+\mathcal{P}_1^0 &= (\mathbf{P}_1, \mathbf{P}_2), & \mathcal{P}_2^0 &= (\mathbf{P}_3, \mathbf{P}_4), & \mathcal{P}_3^0 &= (\mathbf{P}_5, \mathbf{P}_6), & \mathcal{P}_4^0 &= (\mathbf{P}_7, \mathbf{P}_8)
+\end{align*}
+$$
+
+To represent a clustering of the tensor along only a subset of dimensions, we pad the corresponding $\mathcal{P}_i^l$
+with $m - 2^l t$ identity matrices for the non-clustered dimensions. We refer to this padded collection as $\mathcal{Q}_i^l$.
+In the example above, e.g. $\mathcal{Q}_1^0 = (\mathbf{P}_1, \mathbf{P}_2, I, I, I, I, I, I)$, $\mathcal{Q}_2^1 = (\mathbf{P}_5, \mathbf{P}_6, \mathbf{P}_7, \mathbf{P}_8, I, I, I, I)$ (after reordering the dimensions so that the identities sit at the back), and $\mathcal{Q}_1^2 = \mathcal{P}_1^2$. With recursive partitions of the dimensions, $\mathcal{Q}_i^l$ subsumes $\mathcal{Q}_j^0$ for $2^l (i-1) < j \le 2^l i$: by the product rule of Example 5.2, $\mathcal{Q}_i^l$ is the composition of the $\mathcal{Q}_j^0$ with $2^l(i-1) < j \le 2^l i$.
+The algorithm for the subsets of dimensions will yield the $\mathcal{Q}_i^0$ and $\mathcal{P}_i^0$. The remaining clusterings are
+simply combinations of those level-0 clusterings. Finally, we refer to the collection of $m - 2^l t$ identity
+matrices (for simplicity, we assume that they have the correct dimensionalities) as $\mathscr{J}^l$, so, for instance,
+$\mathcal{Q}_1^l = (\mathcal{P}_1^l, \mathscr{J}^l)$.
+---PAGE_BREAK---
+
+Note that the order of the dimensions is arbitrary, as long as the index sets remain the same and we reorder the dimensions of all tensors and matrices correspondingly. Hence, we always shift the identity matrices to the back for “ease” of notation. Furnished with this notation, we can now turn towards the details of the proofs. We start with the relation of the combined clustering to a subclustering with the Frobenius norm objective function.
+
+**Proposition 5.6.** Let $\mathbf{A}$ be an order-$m$ tensor and $m \ge 2^l t$. The objective function for any $2^l t$-way clustering $\mathcal{P}_1^l = (\mathcal{P}_1^0, \dots, \mathcal{P}_{2^l}^0)$ can be bounded via the subclusterings along single sets of $t$ dimensions:
+
+$$
+\| \mathbf{A} - \mathcal{Q}_1^l \cdot \mathbf{A} \|^{2} = \| \mathbf{A} - (\mathcal{P}_1^l, \mathscr{J}^l) \cdot \mathbf{A} \|^{2} \leq 2^l \max_{1 \leq j \leq 2^l} \| \mathbf{A} - \mathcal{Q}_j^0 \cdot \mathbf{A} \|^{2}. \quad (5.11)
+$$
+
+Note that the Proposition actually holds for any set of $2^l$ sub-clusterings by permuting dimensions accordingly:
+
+$$
+\|\mathbf{A} - \mathcal{Q}_i^l \cdot \mathbf{A}\|^2 \leq 2^l \max_{2^l(i-1) < j \le 2^l i} \|\mathbf{A} - \mathcal{Q}_j^0 \cdot \mathbf{A}\|^2.
+$$
+
+2nd: 2:00 P.M., Sunday
+
+3rd: 9:00 P.M., Sunday
+
+4th: 4:00 A.M., Monday
+
+5th: 11:00 A.M., Monday
+
+6th: 6:00 P.M., Monday
+
+**10. Answer: A.**
+
+$$
+\begin{align*}
+\text{Red} &= x \\
+\text{Green} &= 3x \\
+\text{White} &= 2(3x) = 6x \\
+\text{Total} &= 10x
+\end{align*}
+$$
+
+$$P(\text{white}) = \frac{6x}{10x} = \frac{3}{5}$$
+
+**11. Answer: C.**
+
+The first term $a_1$ is 2 and the common difference is equal to: $5 - 2 = 8 - 5 = 3$
+
+Hence, applying the formula for the nth term, $a_n = a_1 + (n - 1)d$, with $a_n = 227$, we can write the equation:
+
+$$227 = 2 + (n - 1)3$$
+
+Solve the above for n
+
+$n - 1 = \frac{(227 - 2)}{3} = 75$ and $n = 76$
+
+The 76th term is equal to 227.
+
+**12. Answer: D.**
+
+The first term $a_1 = 9$ and $d = 2$ (the difference between any two consecutive odd integers).
+Hence the sum $S_n$ of the n terms may be written as follows
+
+$$S_n = \frac{n}{2}\left[2a_1 + (n-1)d\right] = 15{,}860$$
+
+With $a_1 = 9$ and $d=2$, the above equation in n may be written as follows
+
+$$n^2 + 8n - 15860 = 0$$
+---PAGE_BREAK---
+
+Solve the above for n
+
+$n = 122 \text{ and } n = -130$
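+
+These roots can be checked with the quadratic formula:
+
+$$ n = \frac{-8 \pm \sqrt{8^2 + 4 \times 15860}}{2} = \frac{-8 \pm \sqrt{63504}}{2} = \frac{-8 \pm 252}{2} $$
+
+which gives $n = 122$ or $n = -130$; only the positive root is admissible.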
+
+The solution to the problem is that 122 consecutive odd numbers must be added in order to obtain a sum of 15,860.
+
+**13. Answer: A**
+
+A probability is always greater than or equal to 0 and less than or equal to 1, hence only A above cannot represent probabilities.
+
+**14. Answer: C.**
+
+We construct a table of frequencies for the blood groups as follows
+
+| Group | Frequency |
+|---|---|
+| A | 50 |
+| B | 65 |
+| O | 70 |
+| AB | 15 |
+
+We use the empirical formula for the probability:
+
+$$P(E) = \frac{\text{Frequency for O blood}}{\text{Total frequencies}} = \frac{70}{200} = 0.35$$
+
+**15. Answer: C.**
+
+The probability of selecting a red marble on the first draw is 10/32 because there are 10 red marbles and 32 total marbles. After removing the first red marble there are 9 red marbles and 31 total marbles left, so there is a 9/31 chance of selecting a second red marble. To find the probability of both events occurring, we multiply the probabilities to get $\frac{10}{32} \times \frac{9}{31} = \frac{90}{992}$, which reduces to $\frac{45}{496}$.
+---PAGE_BREAK---
+
+UPCAT
+Mathematics
+Answer Key
+
+Set 3
+
+**16. Answer: C.**
+
+Recall that when provided with a point and the slope of a line, we can use point-slope formula to write an equation for the line. The point slope formula is $y - y_1 = m(x - x_1)$ where $(x_1, y_1)$ is the point provided and $m$ is the slope. Plug in the point and slope provided and solve for $y$:
+
+$$y - 3 = -\frac{1}{3}(x - 2)$$
+
+$$y = -\frac{1}{3}x + \frac{11}{3}$$
+
+**17. Answer: D.**
+
+In approaching this problem, consider the number of options the students have for each role. As a role is taken up, there is 1 less student to fill the next role. For President there are 6 options, for Vice President 5 options, for Secretary 4 options, and for Treasurer 3 options. Multiply each of these to find 360 different groups:
+
+$$6 * 5 * 4 * 3 = 360$$
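+
+Equivalently, this is the permutation count $P(6, 4)$:
+
+$$ P(6, 4) = \frac{6!}{(6-4)!} = \frac{720}{2} = 360 $$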
+
+**18. Answer: C.**
+
+Begin by rounding the number to the nearest hundredth: 89.88. Now add the tenths place, 8, and the hundredths, 8, to get 16.
+
+**19. Answer: C.**
+
+Use an equation to represent the situation, $x + x + 2 + x + 4 + x + 6 = 36$. Solve for $x$ to find 6, but recognize that this is the smallest integer in the set; the third smallest is 10.
+
+**20. Answer: C.**
+
+Find $n$:
+
+$$a_n = a_1 + (n-1)d$$
+
+$$-29 = 91 + (n-1)(-6)$$
+
+$$6(n - 1) = 91 + 29$$
+
+$$n - 1 = 120 \div 6$$
+
+$$n = 21$$
+
+$$S = \text{sum}$$
+
+To get more UPCAT review materials, visit https://filipiknow.net/upcat-reviewer/
+
+To God be the glory!
+---PAGE_BREAK---
+
+
+$$
+\begin{aligned}
+&= n (a_1 + a_n) \div 2 \\
+&= 21 (91 + [-29]) \div 2 \\
+&= 21 (62) \div 2 \\
+&= 21 (31)
+\end{aligned}
+$$
+
+S = 651
+
+**21. Answer: D.**
+
+Since in this case the number of scores is even, the median is the average of the two middlemost scores.
+
+$$ \text{median} = \frac{50 + 51}{2} = \frac{101}{2} = 50.5 $$
+
+**22. Answer: C.**
+
+The number of ways of arranging $n$ objects at a round table is $(n-1)!$ ways. For the five students the number of arrangements is $(5-1)! = 4! = 24$.
+
+**23. Answer: B.**
+
+Solve this problem by setting up a proportion. We are told the ratio of milk to juice is $13:x$ and that there are 39 milk and 18 juice, so $\frac{13}{x} = \frac{39}{18}$; cross multiply and solve for $x$ to get 6.
+
+**24. Answer: C.**
+
+$$ \frac{P_1}{P_2} = \sqrt{\frac{A_1}{A_2}} = \sqrt{\frac{64}{729}} = \frac{8}{27} = 8:27 $$
+
+**25. Answer: C.**
+
+$$
+\begin{aligned}
+\frac{a+b+c}{3} &= \frac{a+b}{2} \rightarrow 2(a+b+c) = 3(a+b) \rightarrow 2a+2b+2c = 3a+3b \rightarrow 2c = a+b \rightarrow c \\
+&= \frac{a+b}{2}
+\end{aligned}
+$$
+
+---PAGE_BREAK---
+
+
+**26. Answer: B.**
+
+Let n be the number of tickets.
+
+$$n = 4 + \frac{(n-4)}{2} + 12 + 15 \rightarrow n = \frac{(n-4)}{2} + 31 \rightarrow 2n = n - 4 + 62 \rightarrow n = 58$$
+
+**27. Answer: B.**
+
+Let x = original number of candies
+
+Catherine: $\frac{1}{6}x$
+
+Farah: $\frac{2}{5}x$
+
+Wendy: 4
+
+Jane: 100
+
+$$x = \frac{1}{6}x + \frac{2}{5}x + 4 + 100$$
+
+$$x = \frac{1}{6}x + \frac{2}{5}x + 104$$
+
+$$30x = 5x + 12x + 3120$$
+
+$$13x = 3120$$
+
+$$x = 240$$
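+
+As a check, substituting $x = 240$ back into the equation:
+
+$$ \frac{1}{6}(240) + \frac{2}{5}(240) + 4 + 100 = 40 + 96 + 104 = 240 $$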
+
+**28. Answer: C.**
+
+Begin by setting up an equation representing the average: $\frac{2 + x + 31}{7} = 24$. Solve for $x$ to find 135 and recognize that this $x$ represents the sum of the remaining 5 scores.
+To find the average, divide 135 by 5 to find 27.
+
+**29. Answer: B.**
+
+Recall that vertex-form of a parabola is:
+
+---PAGE_BREAK---
+
+a(x - h)² + k, where (h, k) represents the vertex.
+
+We wish to translate our vertex from (0,0) to (4,-6) so h = 4 and k = -6.
+
+$f(x) = (x - 4)^2 - 6$
+
+**30. Answer: B.**
+
+Recall that slope-intercept form is y = mx + b where m is the slope and b is the y-intercept. Solve for y:
+
+$8x - 2y = -6$
+
+$2y = 8x + 6$
+
+Divide everything by 2:
+
+$y = 4x + 3$
+
+**31. Answer: B.**
+
+If the product of two numbers is positive, the two numbers must have the same sign.
+That is, if ab > 0, then either a > 0 and b > 0, or a < 0 and b < 0.
+
+We are told that A < -1 (which implies that A < 0).
+
+So we know that B < 0.
+
+We also know that AB = 1, so A = 1/B
+
+Since A = 1/B, and A < -1, we can infer that 1/B < -1
+---PAGE_BREAK---
+
+
+If we take reciprocals on both sides of the last inequality, we must flip the inequality sign. Hence: $B > -1$
+
+So we know that $B < 0$, and $B > -1$. We can represent this as a compound inequality: $-1 < B < 0$
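+
+As a concrete check, take $A = -2$:
+
+$$ B = \frac{1}{A} = -\frac{1}{2}, \qquad AB = 1, \qquad -1 < -\frac{1}{2} < 0 $$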
+
+**32. Answer: C.**
+
+Because both of these equations are already solved for the variable x, we can set them equal to each other to find the value of y. Begin by multiplying both sides by 3 to remove the denominator.
+
+$$y - 7 = y + 4$$
+
+Notice that this equation will never be true. Since there is no solution, we can conclude that the lines do not intersect.
+
+**33. Answer: B.**
+
+Recall the slope-intercept form of a line:
+
+$y = mx + b$ where m is the slope.
+
+Solve the given equation for y to find the slope:
+
+$$2x - 6 - 6y = 10$$
+
+$$-6y = -2x + 16$$
+
+$$y = \frac{1}{3}x - \frac{16}{6}$$
+
+Slope is equal to $\frac{1}{3}$.
+
+---PAGE_BREAK---
+
+
+**34. Answer: D.**
+
+She runs for 20 minutes and arrives 5 minutes late → she needs to be there in exactly 15 minutes.
+
+Using a bike with a speed of 1/3 km per minute → $t = \frac{d}{r} = \frac{2}{1/3} = 6$
+
+15 minutes - 6 minutes = 9 minutes earlier.
+
+**35. Answer: C.**
+
+Solve the inequality for one variable:
+
+$$y + 3 > -3x + 6$$
+
+$$y > -3x + 3$$
+
+This states that the y-coordinate must be larger than -3 times the x-coordinate plus 3.
+Test the points provided to see which one satisfies the given inequality (this can also be done graphically). Only (-3, 15) satisfies the inequality.
\ No newline at end of file
diff --git a/samples/texts_merged/4767451.md b/samples/texts_merged/4767451.md
new file mode 100644
index 0000000000000000000000000000000000000000..d3a46b8897dbb6d02ad20c68f2e01c9dcd8ed892
--- /dev/null
+++ b/samples/texts_merged/4767451.md
@@ -0,0 +1,51 @@
+
+---PAGE_BREAK---
+
+Name:
+
+Class:
+
+Date:
+
+Mark / 10 %
+
+1) Evaluate, giving your answer as a simplified fraction [4]
+
+a) $26^{-1}$
+
+b) $2^{-1}$
+
+c) $2^{-2}$
+
+d) $10^{-2}$
+
+2) Give your answer in the form $\frac{1}{a^b}$, where a and b are positive integers [2]
+
+a) $7^{-3}$
+
+b) $2^{-5}$
+
+3) Give your answer in the form $a^b$, where a and b are integers [2]
+
+a) $\frac{1}{7^5}$
+
+b) $\frac{1}{2^5}$
+
+4) Give your answer in the form $\frac{a}{b^c}$, where a, b and c are positive integers [2]
+
+a) $2 \times 3^{-6}$
+
+b) $6 \times 2^{-3}$
+---PAGE_BREAK---
+
+Solutions for the assessment Indices Rules - Negative Index
+
+1) a) $\frac{1}{26}$ b) $\frac{1}{2}$
+
+c) $\frac{1}{4}$ d) $\frac{1}{100}$
+
+2) a) $\frac{1}{7^3}$ b) $\frac{1}{2^5}$
+
+3) a) $7^{-5}$ b) $2^{-5}$
+
+4) a) $\frac{2}{3^6}$ b) $\frac{6}{2^3}$
\ No newline at end of file
diff --git a/samples/texts_merged/47713.md b/samples/texts_merged/47713.md
new file mode 100644
index 0000000000000000000000000000000000000000..b2f798e1c230ecc50ec531622c7a25fc2a6dabb5
--- /dev/null
+++ b/samples/texts_merged/47713.md
@@ -0,0 +1,1319 @@
+
+---PAGE_BREAK---
+
+The causal topology of neutral
+4-manifolds with null boundary
+
+Nikos Georgiou and Brendan Guilfoyle
+
+**ABSTRACT.** This paper considers aspects of 4-manifold topology from the point of view of the null cone of a neutral metric, a point of view we call neutral causal topology. In particular, we construct and investigate neutral 4-manifolds with null boundaries that arise from canonical 3- and 4-dimensional settings.
+
+A null hypersurface is foliated by its normal and, in the neutral case, inherits a pair of totally null planes at each point. This paper focuses on these plane bundles in a number of classical settings.
+
+The first construction is the conformal compactification of flat neutral 4-space into the 4-ball. The null foliation on the boundary in this case is the Hopf fibration on the 3-sphere and the totally null planes in the boundary are integrable. The metric on the 4-ball is a conformally flat, scalar-flat, positive Ricci curvature neutral metric.
+
+The second construction concerns subsets of the 4-dimensional space of oriented geodesics in a 3-dimensional space-form, equipped with its canonical neutral metric. We consider all oriented geodesics tangent to a given embedded strictly convex 2-sphere. Both totally null planes on this null hypersurface are contact, and we characterize the curves in the null boundary that are Legendrian with respect to either totally null plane bundle. The Reeb vector field associated with the alpha-planes is shown to consist of the oriented lines normal to geodesics in the surface.
+
+The third is a neutral geometric model for the intersection of two surfaces in a 4-manifold. The surfaces are the sets of oriented normal lines to two round spheres in Euclidean 3-space, which form Lagrangian surfaces in the 4-dimensional space of all oriented lines. The intersection of the boundaries of their normal neighbourhoods form tori that we prove are totally real and Lorentz if the spheres do not intersect.
+
+We conclude with possible topological applications of the three con-
+structions, including neutral Kirby calculus, neutral knot invariants and
+neutral Casson handles, respectively.
+
+CONTENTS
+
+1. Introduction 478
+
+2. Conformal compactification 480
+
+Received October 15, 2017.
+
+2010 Mathematics Subject Classification. Primary: 53A35; Secondary: 57N13.
+Key words and phrases. Neutral metric, null boundary, hyperbolic 3-space, 3-sphere,
+spaces of constant curvature, geodesic spaces, contact.
+---PAGE_BREAK---
+
+
+3. Tangent hypersurfaces 484
+
+4. Intersection tori of null hypersurfaces 497
+
+5. Neutral causal topology 501
+
+References 504
+
+# 1. Introduction
+
+This paper considers certain 4-manifolds with boundary which carry a neutral metric (pseudo-Riemannian of signature (2,2)) with respect to which the boundary is a null hypersurface. We seek to extract geometric and topological information from the null cone of such metrics in a number of canonical situations.
+
+The results can be viewed as the first steps in the development of a neutral causal topology for 4-manifolds with boundary.¹ From this point of view, Section 2 presents the 0-handle of a neutral Kirby calculus, with preferred curves along which to do surgery. The neutral metric appears to be ideally suited to 2-handle constructions in which the framing is tracked by the null cone on the associated tori.
+
+Section 3 develops the theory of knots in tangent hypersurfaces in order to identify neutral knot invariants in null boundaries, while Section 4 constructs a local geometric model for the normal neighbourhood of a transverse double point of a Lagrangian disc.
+
+In more detail, we consider the conformal compactification of an open neutral 4-manifold. Conformal compactifications of both Riemannian and Lorentzian 4-manifolds have been long studied [3] [34]. For neutral 4-manifolds even the flat case has not received much attention. In the next section we seek to remedy this by providing the canonical example:
+
+**Theorem 1.1.** There exists a smooth embedding $f : (\mathbb{R}^{2,2}, \mathcal{G}) \to (B^4, \tilde{\mathcal{G}})$
+and a function $\Omega : B^4 \to \mathbb{R}$ such that
+
+(i) $f$ is a conformal diffeomorphism onto the interior of $B^4$ with $f^*\tilde{\mathcal{G}} = \Omega^2\mathcal{G}$,
+
+(ii) $\Omega = 0$ on $\partial B^4 = S^3$,
+
+(iii) *the boundary is null*,
+
+(iv) $d\Omega = 0$ on the boundary $S^3$ precisely on an embedded Hopf link.
+
+The metric $\tilde{\mathcal{G}}$ on the 4-ball is a conformally flat, scalar-flat neutral metric with positive definite Ricci tensor, analogous to the Einstein static universe. Thus, space-like infinity and timelike infinity are Hopf-linked in the boundary of a flat universe with two times.
+
+¹Expository video clips explaining the results and motivations of this paper can be found at the following link: https://www.youtube.com/watch?v=VU1PMPwT-hA
+---PAGE_BREAK---
+
+The null boundary inherits a degenerate Lorentz metric, whose null cone is a pair of transverse totally null planes at each point ($\alpha$-planes and $\beta$-planes). In the conformal compactification of $\mathbb{R}^{2,2}$ these plane fields are both integrable, and contain the tangents to the (1,1) and (1,-1) curves on the Hopf tori about the link.
+
+This 4-ball should be viewed as the 0-handle of a neutral Kirby calculus so that one can consider attaching handles along framed curves in the boundary [17] [27]. In order to carry the neutral metric along, certain causal conditions must be fulfilled, conditions that mirror the restrictions on neutral metrics in the compact case [22] [32]. One can then develop neutral surgery on conformal classes of neutral metrics. In this case, the foliation by Lorentz tori tracks the framing for such surgery along the Hopf link.
+
+The second type of 4-manifold, detailed in Section 3, are subsets of the space $L(M^3)$ of oriented geodesics in a 3-dimensional space-form $(M^3, g)$. It is well-known that $L(M^3)$ admits an invariant neutral metric $G$ [15] [19] [24] [36] [37].
+
+Given a smoothly embedded surface $S \subset M^3$, define the *tangent hypersurface* of $S$, denoted $H(S) \subset L(M^3)$, to be the set of oriented geodesics that are tangent to $S$. This 3-manifold is locally a circle bundle over $S$, with projection $\pi: H(S) \to S$ and fibre generated by rotation about the normal to $S$.
+
+In this paper we investigate the geometric properties of $H(S)$ induced by the neutral metric on $L(M^3)$. If $S \subset M^3$ is a smooth surface, then $H(S)$ is an immersed hypersurface which is null with respect to $G$.
+
+Thus, $H(S)$ is foliated by null geodesics and contains an $\alpha$-plane and a $\beta$-plane at each point. A knot $C \subset H(S)$, which is an oriented tangent line field over a curve $c \subset S$, is said to be *$\alpha$-Legendrian* ($\beta$-Legendrian) if its tangent lies in the $\alpha$-planes ($\beta$-planes, respectively).
+
+Given a contact structure on a 3-manifold with contact 1-form $\omega$, the Reeb vector field $X$ is characterised by
+
+$$d\omega(X, \cdot) = 0 \qquad \omega(X) = 1$$
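+
+For example, for the standard contact form $\omega = dz - y\,dx$ on $\mathbb{R}^3$, one has $d\omega = dx \wedge dy$, and the Reeb vector field is
+
+$$X = \frac{\partial}{\partial z}, \qquad \text{since} \qquad d\omega\left(\frac{\partial}{\partial z}, \cdot\right) = 0, \qquad \omega\left(\frac{\partial}{\partial z}\right) = 1.$$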
+
+In the case where $S$ is a strictly convex 2-sphere, the tangent hypersurface bounds a disc bundle of Euler number 2 in $L(M^3)$, and we prove:
+
+**Theorem 1.2.** If $S \subset M^3$ is a smooth convex 2-sphere in a 3-dimensional space-form, then the $\alpha$-planes and $\beta$-planes of the neutral metric are both contact.
+
+Moreover, a knot $C \subset H(S)$, with contact curve $c = \pi(C) \subset S$, is $\alpha$-Legendrian iff $\forall \gamma \in C$, $\gamma$ is tangent to $c \subset S \subset M^3$.
+
+In addition, any two of the following imply the third:
+
+(i) $C$ is $\beta$-Legendrian,
+
+(ii) $\forall \gamma \in C$, $\gamma$ is normal to $c$,
+
+(iii) either $c$ is a line of curvature of $S$, or $S$ is umbilic along $c$.
+---PAGE_BREAK---
+
+Finally, the Reeb vector field of the $\alpha$-planes consists of the oriented lines normal to a geodesic of $S$.
+
+The proof requires separate formalisms in the flat and non-flat cases.
+
+Section 4 contains a local geometric model of the normal neighbourhood of an isolated double point on an immersed surface, given by the intersection of two Lagrangian surfaces in $L(\mathbb{R}^3)$. These surfaces are the oriented normal lines to two round spheres in $\mathbb{R}^3$ and the boundaries of a normal neighbourhood of the surfaces can be identified with the tangent hypersurfaces of the spheres.
+
+**Theorem 1.3.** Let $S_1, S_2 \subset \mathbb{R}^3$ be round spheres of radii $r_1 \ge r_2$ with centres separated by a distance $l$ in $\mathbb{R}^3$. Then,
+
+(i) $\mathcal{H}(S_1) \cap \mathcal{H}(S_2) = \emptyset$ if and only if $l < r_1 - r_2$,
+
+(ii) $\mathcal{H}(S_1) \cap \mathcal{H}(S_2) = S^1$ if and only if $l = r_1 - r_2$,
+
+(iii) $\mathcal{H}(S_1) \cap \mathcal{H}(S_2) = T^2$ if and only if $r_1 - r_2 < l \le r_1 + r_2$,
+
+(iv) $\mathcal{H}(S_1) \cap \mathcal{H}(S_2) = T^2 \sqcup T^2$ if and only if $r_1 + r_2 < l$.
+
+If $l > r_1 + r_2$, so that $S_1 \cap S_2 = \emptyset$, then the intersection tori $T^2$ are
+totally real and Lorentz.
+
+In the final section, we discuss these three constructions from a topological
+point of view.
+
+## 2. Conformal compactification
+
+**2.1. Neutral geometry.** Let us assemble some facts of neutral geometry that will be required in this paper. The statements are in $\mathbb{R}^4$, but hold in the tangent space at a point in any neutral 4-manifold.
+
+Consider the flat neutral metric $\mathcal{G}$,
+
+$$ds^2 = (dx^1)^2 + (dx^2)^2 - (dx^3)^2 - (dx^4)^2,$$
+
+on $\mathbb{R}^4$ in standard coordinates $(x^1, x^2, x^3, x^4)$. Throughout, denote $\mathbb{R}^4$ endowed with this metric by $\mathbb{R}^{2,2}$.
+
+**Definition 2.1.** The neutral *null cone* is the set of null vectors in $\mathbb{R}^{2,2}$:
+
+$$\mathcal{K} = \{X \in \mathbb{R}^{2,2} \mid \mathcal{G}(X, X) = 0\}.$$
+
+The null cone is a cone over a torus, in distinction to the Lorentz $\mathbb{R}^{3,1}$ case where the null cone is a cone over a 2-sphere. To see the torus, note that the map $f: \mathbb{R} \times S^1 \times S^1 \to \mathcal{K}$
+
+$$f(a, \theta_1, \theta_2) = (a \cos \theta_1, a \sin \theta_1, a \cos \theta_2, a \sin \theta_2),$$
+
+parameterizes the null vectors as a cone over $\mathbb{T}^2$.
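+
+Indeed, for the vector $X = f(a, \theta_1, \theta_2)$ one checks directly that
+
+$$ \mathcal{G}(X, X) = a^2\cos^2\theta_1 + a^2\sin^2\theta_1 - a^2\cos^2\theta_2 - a^2\sin^2\theta_2 = 0, $$
+
+so the image of $f$ lies in $\mathcal{K}$; conversely, every null vector arises this way, since $\mathcal{G}(X, X) = 0$ forces $(x^1)^2 + (x^2)^2 = (x^3)^2 + (x^4)^2 =: a^2$.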
+
+**Definition 2.2.** A plane $P \subset \mathbb{R}^{2,2}$ is *totally null* if every vector in $P$ is null with respect to $\mathcal{G}$, and the inner product of any two vectors in $P$ is zero.
+---PAGE_BREAK---
+
+Since every vector that lies in a totally null plane is null, we can picture a
+null plane as a cone over a circle in $\mathcal{K}$. A straightforward calculation shows
+that:
+
+**Proposition 2.3.** *A totally null plane is a cone over either a (1,1)-curve or a (1,-1)-curve on the torus, the former for an α-plane, the latter for a β-plane.*
+
+Here the $(1, \pm 1)$-curves on the torus are given by the equations $\theta_1 \mp \theta_2 = \text{constant}$. By rotating around the meridian we see that the set of totally null planes is $S^1 \sqcup S^1$.
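+
+As a worked example, the cone over the $(1,1)$-curve $\theta_2 = \theta_1$ is the plane spanned by $(1, 0, 1, 0)$ and $(0, 1, 0, 1)$, and for any two of its vectors $X_\theta = a(\cos\theta, \sin\theta, \cos\theta, \sin\theta)$ and $X_{\theta'} = a'(\cos\theta', \sin\theta', \cos\theta', \sin\theta')$,
+
+$$ \mathcal{G}(X_\theta, X_{\theta'}) = aa'\left(\cos\theta\cos\theta' + \sin\theta\sin\theta' - \cos\theta\cos\theta' - \sin\theta\sin\theta'\right) = 0, $$
+
+so the plane is indeed totally null.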
+
+The metric has two natural compatible complex structures (up to an overall sign), which in the coordinates $(x^1, x^2, x^3, x^4)$ take the form
+
+$$
+\mathcal{J}^{+} = \begin{bmatrix}
+0 & 1 & 0 & 0 \\
+-1 & 0 & 0 & 0 \\
+0 & 0 & 0 & 1 \\
+0 & 0 & -1 & 0
+\end{bmatrix} \qquad \mathcal{J}^{-} = \begin{bmatrix}
+0 & 1 & 0 & 0 \\
+-1 & 0 & 0 & 0 \\
+0 & 0 & 0 & -1 \\
+0 & 0 & 1 & 0
+\end{bmatrix}.
+$$
+
+**Proposition 2.4.** [16] *An $\alpha$-plane ($\beta$-plane) is invariant under the complex structure $\mathcal{J}^+$ ($\mathcal{J}^-$, respectively).*
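+
+This can be seen directly on the $\alpha$-plane spanned by $(1, 0, 1, 0)$ and $(0, 1, 0, 1)$: the matrix $\mathcal{J}^+$ above maps the first vector to $(0, -1, 0, -1)$ and the second to $(1, 0, 1, 0)$, both of which lie in the plane again, while $\mathcal{J}^-$ maps $(1, 0, 1, 0)$ to $(0, -1, 0, 1)$, which does not.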
+
+Note that, in general, these only extend to compatible almost complex
+structures on a neutral manifold. The space of compatible almost complex
+structures on a neutral 4-manifold is referred to as the hyperbolic twistor
+space of the metric [30].
+
+Composition of either of the complex structures with the metric yields a
+2-form, which is symplectic in the flat case. However, the 2-form does not
+tame the almost complex structure in the sense of Gromov [18] - neutral
+metrics walk on the wild side.
+
+Now consider a null vector $X \in \mathbb{R}^{2,2}$. The set of vectors orthogonal to $X$ is
+3-dimensional and contains the vector $X$ itself. Choosing another null vector
+$Y$ which has $G(X,Y) = 1$, complete this to a frame $\{e_+, e_-, e_0 = X, f_0 = Y\}$
+such that
+
+$$
+G = \begin{bmatrix}
+1 & 0 & 0 & 0 \\
+0 & -1 & 0 & 0 \\
+0 & 0 & 0 & 1 \\
+0 & 0 & 1 & 0
+\end{bmatrix}.
+$$
+
+Clearly the hyperplane orthogonal to $X$ has a degenerate Lorentz metric and its set of null vectors consists of two totally null planes, intersecting along the normal vector $X$.
+
+In particular, given any null vector $X$ there exists a pair of totally null planes containing $X$,
+
+$$
+P_{\pm} = \operatorname{span}_{\mathbb{R}} \{e_{+} \pm e_{-}, X\},
+$$
+
+which are exactly the α-planes and β-planes. This structure exists on any
+null hypersurface in a neutral 4-manifold and will be considered in some
+detail in the constructions of this paper.
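+
+A quick check in the frame above: since $G(e_+, e_+) = 1$, $G(e_-, e_-) = -1$, $G(X, X) = 0$ and all other inner products among $e_\pm$ and $X$ vanish,
+
+$$ G(e_+ \pm e_-, e_+ \pm e_-) = 1 - 1 = 0 \qquad G(e_+ \pm e_-, X) = 0, $$
+
+so each $P_\pm$ is indeed totally null.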
+---PAGE_BREAK---
+
+**2.2. The conformal compactification of $\mathbb{R}^{2,2}$.** We will now conformally embed $\mathbb{R}^{2,2}$ as an open 4-ball in $\mathbb{R}^4$ so that the points at infinity in $\mathbb{R}^{2,2}$ form the boundary 3-sphere.
+
+First, let us introduce the coordinate change
+
+$$ (x^1, x^2, x^3, x^4) \rightarrow (R_1, R_2, \theta_1, \theta_2) $$
+
+defined by the double polar transformation:
+
+$$ x^1 + ix^2 = R_1 e^{i\theta_1} \qquad x^3 + ix^4 = R_2 e^{i\theta_2}. \tag{2.1} $$
+
+To bring the points at infinity (i.e. $R_1$ or $R_2$ going to infinity) to a finite
+distance, define
+
+$$ \tan p = R_1 + R_2 \qquad \tan q = R_1 - R_2. $$
+
+Clearly the coordinates $(p, q, \theta_1, \theta_2)$, with
+
+$$ 0 \le p < \pi/2 \qquad -p \le q \le p \qquad 0 \le \theta_1, \theta_2 < 2\pi, $$
+
+cover all of $\mathbb{R}^{2,2}$. Moreover, infinity has been brought in to the boundary
+$p = \pi/2$.
+
+This boundary is in fact a 3-sphere bounding a 4-ball $B^4$, as can be seen
+by the identification of $(z_1, z_2) \in \mathbb{C}^2 = \mathbb{R}^4$
+
+$$ z_1 = p \sin(\psi/2) e^{i\theta_1} \qquad z_2 = p \cos(\psi/2) e^{i\theta_2}, \tag{2.2} $$
+
+where $q = p \cos \psi$ with $0 \le \psi \le \pi$. The boundary is the 3-sphere $\partial B^4 = S^3$
+of radius $\pi/2$ and the tori parameterized by $(\theta_1, \theta_2)$ are exactly the Hopf
+tori in $S^3$.
+
+Consider the neutral metric $\tilde{\mathcal{G}}$ on the 4-ball given by
+
+$$ d\tilde{s}^2 = dpdq + \frac{1}{4}\sin^2(p+q)d\theta_1^2 - \frac{1}{4}\sin^2(p-q)d\theta_2^2. \tag{2.3} $$
+
+Under the diffeomorphism $f(x^1, x^2, x^3, x^4) = (p, q, \theta_1, \theta_2)$ the pull-back of $\tilde{\mathcal{G}}$ is conformal to $\mathcal{G}$: $f^*\tilde{\mathcal{G}} = \Omega^2\mathcal{G}$, where $\Omega$ is the real function $\Omega = 2\cos p\cos q$ on the 4-ball. Note that this vanishes at the boundary $p=\pi/2$.
+
+The metric $\tilde{\mathcal{G}}$ is obviously conformally flat and is also a scalar-flat neutral
+metric, being the neutral analogue of the Einstein static universe. The Ricci
+tensor has non-vanishing components:
+
+$$ \tilde{R}_{pp} = \tilde{R}_{qq} = 2 \qquad \tilde{R}_{\theta_1\theta_1} = \sin^2(p+q) \qquad \tilde{R}_{\theta_2\theta_2} = \sin^2(p-q). $$
+
+Clearly, the boundary 3-sphere is null, and this has interesting consequences: the normal vector lies in the sphere itself. In general the set of points on the boundary at which $d\Omega$ vanishes would be zero-dimensional, but the fact that the hypersurface is null (so that $|d\Omega| = 0$ everywhere on the boundary) means that the zero locus is 1-dimensional.
+
+A short calculation shows that $d\Omega = 0$ on $S^3$ when $q = \pm \frac{\pi}{2}$. Since $p = \frac{\pi}{2}$,
+we have $\psi \in \{0, \pi\}$ and equations (2.2) tell us that the gradient of the
+conformal factor vanishes on a pair of Hopf-linked circles in the boundary.
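+
+Explicitly, with $\Omega = 2\cos p \cos q$,
+
+$$ d\Omega = -2\sin p\cos q \, dp - 2\cos p \sin q \, dq, $$
+
+which at $p = \pi/2$ reduces to $-2\cos q \, dp$, and so vanishes precisely when $q = \pm\pi/2$.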
+
+We have now proven Theorem 1.1 and propose that the four conditions of
+this Theorem are natural for the conformal compactification of more general
+---PAGE_BREAK---
+
+neutral 4-manifolds - with the Hopf link replaced by some other link in the boundary.
+
+The metric induced on a null hypersurface by a neutral metric has degenerate signature $(0, +, -)$ and the null cone degenerates to a pair of totally null planes, called $\alpha$-planes and $\beta$-planes, which intersect on the normal to the hypersurface, which, being null, lies in the tangent space to the hypersurface.
+
+**Proposition 2.5.** *Both the $\alpha$-planes and $\beta$-planes on the boundary are integrable.*
+
+**Proof.** The pullback of the metric (2.3) onto the boundary 3-sphere $p = \frac{\pi}{2}$ is
+
+$$d\tilde{s}^2|_{S^3} = \frac{1}{4} \cos^2 q (d\theta_1^2 - d\theta_2^2),$$
+
+and so the null cone is spanned by
+
+$$X_{\pm} = a \frac{\partial}{\partial q} + b \left( \frac{\partial}{\partial \theta_1} \pm \frac{\partial}{\partial \theta_2} \right).$$
+
+The 1-forms that vanish on these two planes are proportional to
+
+$$\omega_{\pm} = d\theta_1 \mp d\theta_2,$$
+
+so that $\omega_{\pm} \wedge d\omega_{\pm} = 0$ and the distributions are integrable. $\square$
+
+Note here that the null planes intersect the tori $q = \text{constant}$ in the (1,1) and (1,-1) curves, which gives the null cone structure on these Lorentz tori.
+
+The existence of a conformal compactification with null boundary means
+that the metric $\mathcal{G}$ must be scalar flat at infinity in the original 4-manifold,
+since by the well-known conformal change
+
+$$\Omega^2 \bar{R} = R - 6\Omega \bar{\Delta}\Omega + 12|\bar{\nabla}\Omega|^2$$
+
+along the null boundary $|\bar{\nabla}\Omega|^2 = 0$ and so $R \to 0$ as $\Omega \to 0$. The 4-manifolds we consider are scalar-flat throughout, and so this obstruction does not arise.
+---PAGE_BREAK---
+
+### 3. Tangent hypersurfaces
+
+#### 3.1. Flat 3-space.
+
+**3.1.1. The neutral metric.** Interest in the neutral metric on the space of oriented geodesics of a 3-dimensional space of constant curvature has grown recently [15] [19] [24] [36] [37]. The underlying smooth 4-manifold in the $\mathbb{R}^3$ case is the total space of the tangent bundle to the 2-sphere $L(\mathbb{R}^3) \equiv TS^2$, and we adopt the notation of [20] for the local description.
+
+This identification is made concrete by choosing Euclidean coordinates $(x^1, x^2, x^3)$ and considering tangent vectors to the unit 2-sphere in the same $\mathbb{R}^3$. Thus, choosing holomorphic coordinates about the north pole on $S^2$, the tangent vector
+
+$$V = \eta \frac{\partial}{\partial \xi} + \bar{\eta} \frac{\partial}{\partial \bar{\xi}},$$
+
+for $\eta \in \mathbb{C}$ is identified with the oriented parameterized line $\gamma : \mathbb{R} \to \mathbb{R}^3$:
+$r \mapsto \gamma(r)$ given by
+
+$$z = x^1 + ix^2 = \frac{2(\eta - \bar{\xi}^2\bar{\eta})}{(1 + \xi\bar{\xi})^2} + \frac{2\xi}{1 + \xi\bar{\xi}}r, \quad (3.1)$$
+
+$$x^3 = -\frac{2(\xi\bar{\eta} + \bar{\xi}\eta)}{(1 + \xi\bar{\xi})^2} + \frac{1 - \xi\bar{\xi}}{1 + \xi\bar{\xi}}r. \quad (3.2)$$
+
+Fixing the two complex numbers $\xi$ and $\eta$, as we vary $r$ the point $(x^1, x^2, x^3)$ in $\mathbb{R}^3$ moves along a straight line. The parameter $r$ is arc-length along the line, with $r=0$ determining the point on the line that is closest to the origin.
+
+Moreover, it is easily seen that the direction of the line is $\xi$, obtained by stereographic projection from the south pole. The perpendicular displacement of the line from the origin is determined by the complex number $\eta$.
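+
+As a consistency check, the coefficient of $r$ in equations (3.1) and (3.2) is the direction vector of the line, and it is indeed a unit vector: since
+
+$$ \left|\frac{2\xi}{1+\xi\bar{\xi}}\right|^2 + \left(\frac{1-\xi\bar{\xi}}{1+\xi\bar{\xi}}\right)^2 = \frac{4\xi\bar{\xi} + (1-\xi\bar{\xi})^2}{(1+\xi\bar{\xi})^2} = 1, $$
+
+the parameter $r$ is arc-length, as claimed.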
+
+Thus, $(\xi, \eta)$ are local coordinates on the space of oriented lines $L(\mathbb{R}^3)$ with the fibre over the south pole removed. A similar local patch, obtained by stereographic projection from the north pole, can be glued to this one so as to cover the whole 2-sphere of directions.
+
+Computing the rotation of $\eta$ as one traverses a circle in the overlap of the two charts, one obtains a vector bundle with Euler number 2, thus identifying $L(\mathbb{R}^3)$ with the total space of the tangent bundle to the 2-sphere $TS^2$.
+
+In fact $L(\mathbb{R}^3)$ admits a pair of canonical complex structures $\mathcal{J}^+$ and $\mathcal{J}^-$ which, when expressed in the coordinates $(\xi, \bar{\xi}, \eta, \bar{\eta})$, take the form
+
+$$\mathcal{J}^+ = \begin{bmatrix} i & 0 & 0 & 0 \\ 0 & -i & 0 & 0 \\ 0 & 0 & i & 0 \\ 0 & 0 & 0 & -i \end{bmatrix}$$
+
+$$\mathcal{J}^- = \begin{bmatrix} 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & -1 \\ -1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \end{bmatrix}.$$
+---PAGE_BREAK---
+
+In addition, there is a neutral metric $\mathbb{G}$ on $L(\mathbb{R}^3)$ that is invariant under the Euclidean group, which takes the form
+
+$$ \mathbb{G} = 2(1 + \xi\bar{\xi})^{-2}\operatorname{Im} \left( d\bar{\eta}d\xi + \frac{2\bar{\xi}\eta}{1+\xi\bar{\xi}}d\xi d\bar{\xi} \right). \quad (3.3) $$
+
+Up to the addition of a spherical factor, this is the unique metric (of any signature) on the space of lines that is invariant under the Euclidean group - in any dimension [36].
+
+Clearly, the metric is compatible with $\mathcal{J}^+$, but not with $\mathcal{J}^-$. The complex structure $\mathcal{J}^+$ has played a significant role in holomorphic methods applied to Euclidean problems, such as monopoles [23] and minimal surfaces [38].
+
+The composition of $\mathcal{J}^+$ and the neutral metric $\mathbb{G}$ yields a symplectic form
+
+$$ \Omega = 2(1 + \xi\bar{\xi})^{-2}\operatorname{Re} \left( d\bar{\eta} \wedge d\xi + \frac{2\bar{\xi}\eta}{1+\xi\bar{\xi}} d\xi \wedge d\bar{\xi} \right). \quad (3.4) $$
+
+While this symplectic structure does not tame $\mathcal{J}^+$, it has the following property: a surface $\Sigma$ in $L(\mathbb{R}^3)$, that is, a 2-parameter family of oriented lines, is normal to a surface in $\mathbb{R}^3$ iff $\Sigma$ is Lagrangian: $\Omega_{\Sigma} = 0$ [19].
+
+This symplectic form coincides with the pull-back of the canonical symplectic form $\Omega'$ on $T^*S^2$ via the round metric on $S^2$, considered as a map $g: TS^2 \to T^*S^2$: $\Omega = g^*\Omega'$.
+
+**3.1.2. Tangent hypersurfaces.** For any smoothly embedded convex surface $S \subset \mathbb{R}^3$ define the tangent hypersurface $\mathcal{H}(S) \subset L(\mathbb{R}^3)$ to be
+
+$$ \mathcal{H}(S) = \{\gamma \in L(\mathbb{R}^3) \mid \gamma \subset T_{\gamma \cap S} S\}. $$
+
+Clearly rotation about the normal to $S$ at a point $p$ generates a circle in $\mathcal{H}(S)$, so that the hypersurface is the unit circle bundle of the tangent bundle over $S$.
+
+From now on we assume that $S$ is a closed strictly convex surface, so that $\mathcal{H}(S)$ is an embedded copy of the unit tangent bundle to $S$ and we have no lines that are tangent to $S$ at more than one point.
+
+**Proposition 3.1.** *The hypersurface $\mathcal{H}(S)$ is null with respect to $\mathbb{G}$ and foliated by null circles which are geodesics of the ambient metric.*
+
+**Proof.** Rotating an oriented line about a line in $\mathbb{R}^3$ generates a null circle in $L(\mathbb{R}^3)$ which is geodesic in $TS^2$ [19]. The tangents to these circles are in fact normal to $\mathcal{H}(S)$ in $TS^2$, as can be seen as follows.
+
+Since $S \subset \mathbb{R}^3$ is convex it can be parameterized by the direction of its normal line. In local coordinates we have $\mathbb{C} \to L(\mathbb{R}^3): \nu \mapsto (\xi = \nu, \eta = \eta_0(\nu, \bar{\nu}))$. It is well known that this is a Lagrangian section of the canonical bundle $\pi: L(\mathbb{R}^3) \to S^2$.
+
+The point along the normal line where it intersects $S$ is determined by the support function $r_0: S \to \mathbb{R}$ which satisfies
+
+$$ \partial_{\nu} r_0 = \frac{2\bar{\eta}_0}{(1 + \nu\bar{\nu})^2}. \quad (3.5) $$
+---PAGE_BREAK---
+
+The sum and difference of the radii of curvature $r_1 \ge r_2$ of $S$ are
+
+$$r_1 + r_2 = \psi_0 \qquad r_1 - r_2 = |\sigma_0|,$$
+
+where
+
+$$\psi_0 = r_0 + 2(1 + \nu\bar{\nu})^2 \text{Re} \, \partial_\nu \left[ \frac{\eta_0}{(1 + \nu\bar{\nu})^2} \right] \qquad \sigma_0 = -\partial_\nu \bar{\eta}_0. \quad (3.6)$$
+
+We are interested in the oriented lines that are tangent to S, that is, they
+are orthogonal to the normal.
+
+**Lemma 3.2.** The oriented great circle in $S^2$ which is dual to the point with holomorphic coordinate $\nu$ is generated by
+
+$$\xi = \frac{\nu + e^{iA}}{1 - \bar{\nu}e^{iA}}, \qquad (3.7)$$
+
+for $A \in [0, 2\pi)$.
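+
+For example, at $\nu = 0$ (the north pole) equation (3.7) gives $\xi = e^{iA}$, which traces out the equator, i.e. the great circle dual to the north pole, as expected.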
+
+An oriented line $(\xi, \eta)$ passes through a point $(x^1, x^2, x^3) \in \mathbb{R}^3$ iff
+
+$$\eta = \frac{1}{2} (x^1 + i x^2 - 2x^3\xi - (x^1 - i x^2)\xi^2). \qquad (3.8)$$
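+
+As a sanity check of (3.8), consider a vertical line $\xi = 0$: by equations (3.1) and (3.2) its points are $z = 2\eta$, $x^3 = r$, and (3.8) indeed returns
+
+$$ \eta = \tfrac{1}{2}(x^1 + ix^2) = \tfrac{1}{2}z, $$
+
+independently of the point chosen along the line.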
+
+Substituting equations (3.1), (3.2) with $(\xi, \eta) = (\nu, \eta_0)$ and $r=r_0$, and
+(3.7) into (3.8) yields
+
+$$\eta = \frac{\eta_0 - e^{2iA}\bar{\eta}_0 - (1 + \nu\bar{\nu})e^{iA}r_0}{(1 - \bar{\nu}e^{iA})^2}. \qquad (3.9)$$
+
+Thus, the hypersurface $\mathcal{H}(S)$ is locally parameterized by (3.7) and (3.9)
+for $(\nu, \bar{\nu})$ varying over the normal directions of $S$ and $A \in S^1$.
+
+Pulling back the metric onto $\mathcal{H}$, we find that the induced metric in these
+coordinates (making use of equation (3.5) and definitions (3.6)) is
+
+$$ds^2 = - \frac{2}{(1 + \nu\bar{\nu})^2} \mathrm{Im} \left[ (\sigma_0 + \psi_0 e^{-2iA}) d\nu^2 + \sigma_0 e^{2iA} d\nu d\bar{\nu} \right]. \qquad (3.10)$$
+
+Thus the metric is degenerate along the null vector in the $A$-direction. This
+completes the proof.
+□
+
+The null vectors tangent to $\mathcal{H}(S)$ form a pair of planes, the $\alpha$-planes
+and $\beta$-planes, which intersect on the null normal. The former planes are
+preserved by the complex structure $\mathcal{J}^+$ and the latter by $\mathcal{J}^-$ [16].
+
+The first part of Theorem 1.2 is established by the following proposition:
+
+**Proposition 3.3.** If $S \subset \mathbb{R}^3$ is a smooth convex 2-sphere, then the $\alpha$-planes and $\beta$-planes of $\mathcal{H}(S)$ are both contact.
+
+**Proof.** Consider the induced metric (3.10) and write down the null planes.
+In particular,
+
+**Lemma 3.4.** The vector $\vec{X} \in T_{(\nu,A)}\mathcal{H}(S)$
+
+$$\vec{X} = a \frac{\partial}{\partial A} + b \operatorname{Re} \left[ e^{iB} \frac{\partial}{\partial \nu} \right],$$
+---PAGE_BREAK---
+
+for $a, b \in \mathbb{R}$, is null iff either
+
+$$B = A + \frac{1}{2i} \ln \left( \frac{\psi_0 + \bar{\sigma}_0 e^{-2iA}}{\psi_0 + \sigma_0 e^{2iA}} \right) \quad \text{or} \quad B = A + \frac{\pi}{2}.$$
+
+The former spans the $\alpha$-plane, while the latter the $\beta$-plane.
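+
+As a check of the second alternative, for $B = A + \pi/2$ we have $e^{2iB} = -e^{2iA}$, while $d\nu(\vec{X}) = \tfrac{b}{2}e^{iB}$ and $dA(\vec{X}) = a$, so that the induced metric (3.10) evaluates to
+
+$$ ds^2(\vec{X}, \vec{X}) = -\frac{2}{(1+\nu\bar{\nu})^2}\operatorname{Im}\left[ -(\sigma_0 e^{2iA} + \psi_0)\frac{b^2}{4} + \sigma_0 e^{2iA}\frac{b^2}{4} \right] = \frac{2}{(1+\nu\bar{\nu})^2}\operatorname{Im}\left[\frac{b^2\psi_0}{4}\right] = 0, $$
+
+since $\psi_0$ is real: the $\sigma_0$ terms cancel and the vector is null for all $a, b$.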
+
+The 1-form $\omega^+$ that vanishes on the $\alpha$-plane is
+
+$$\omega^+ = -2\operatorname{Im} \left[\frac{e^{-iA}\psi_0 + e^{iA}\sigma_0}{1 + \nu\bar{\nu}} d\nu\right], \qquad (3.11)$$
+
+and so
+
+$$\omega^+ \wedge d\omega^+ = - \frac{2i(\psi_0^2 - \sigma_0\bar{\sigma}_0)}{(1 + \nu\bar{\nu})^2} dA \wedge d\nu \wedge d\bar{\nu}.$$
+
+For a convex surface $\psi_0^2 - \sigma_0\bar{\sigma}_0$ is never zero, and so the distribution of $\alpha$-planes is contact.
+
+On the other hand, the 1-form $\omega^-$ that vanishes on the $\beta$-plane is
+
+$$\omega^- = 2\operatorname{Re}\left[e^{-iA}d\nu\right],$$
+
+and so
+
+$$\omega^- \wedge d\omega^- = -2idA \wedge d\nu \wedge d\bar{\nu}.$$
+
+Thus the distribution of $\beta$-planes is contact. $\square$
+
+Note that these tangent hypersurfaces sit within a wider class of oriented
+lines passing through $S$ making an angle $0 \le a \le \pi/2$ with the outward
+pointing normal:
+
+$$\mathcal{H}_a(S) = \{\gamma \in L(\mathbb{R}^3) | \gamma \cap S \neq \emptyset, \langle \dot{\gamma}, \hat{N} \rangle = \cos a\},$$
+
+where $\dot{\gamma}$ is the direction of the oriented line $\gamma$ and $\hat{N}$ is the unit outward
+pointing normal vector.
+
+For $a=0$ this hypersurface degenerates to a Lagrangian surface in $L(\mathbb{R}^3)$,
+while for $a = \pi/2$ it is the tangent hypersurface. We refer to $\mathcal{H}_a(S)$ in the
+general $0 < a \le \pi/2$ case as the *constant angle hypersurface of $S$*; these were
+first introduced in [20] in the construction of a mod 2 neutral knot invariant.
+
+The local equations for the $\mathcal{H}_a(S)$ (generalizing equations (3.7) and (3.9))
+are
+
+$$\xi = \frac{\nu + \epsilon e^{iA}}{1 - \bar{\nu}\epsilon e^{iA}} \qquad \eta = \frac{\eta_0 - \epsilon^2 e^{2iA} \bar{\eta}_0 - (1 + \nu\bar{\nu})\epsilon e^{iA} r_0}{(1 - \bar{\nu}\epsilon e^{iA})^2} \quad (3.12)$$
+
+where $\epsilon = \tan(a/2)$.
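+
+The parameter $\epsilon = \tan(a/2)$ interpolates between the two extremes: at $a = 0$ we have $\epsilon = 0$ and (3.12) reduces to $(\xi, \eta) = (\nu, \eta_0)$, the Lagrangian congruence of normal lines to $S$, while at $a = \pi/2$ we have $\epsilon = 1$ and (3.12) reduces to equations (3.7) and (3.9) for the tangent hypersurface.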
+
+For $a < \pi/2$, these hypersurfaces are not null but they have the following
+property:
+
+**Proposition 3.5.** A hypersurface $\mathcal{H}_a(S)$ with $a < \pi/2$ is null exactly at the oriented lines through an umbilic point on $S$ and at the oriented lines whose projection orthogonal to the normal is tangent to the lines of curvature of $S$.
+---PAGE_BREAK---
+
+**Proof.** This follows from pulling back the neutral metric (3.3) to the hypersurface (3.12) and taking the determinant. The result is
+
+$$ \det \mathcal{G}|_{\mathcal{H}_a(S)} = - \frac{2\epsilon^2(1-\epsilon^2)^2(\sigma_0 e^{2iA} - \bar{\sigma}_0 e^{-2iA})(\psi_0^2 - \sigma_0\bar{\sigma}_0)}{(1+\epsilon^2)^4(1+\nu\bar{\nu})^4}, $$
+
+and the result follows. $\square$
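+
+Note the consistency with Proposition 3.1: at $a = \pi/2$ we have $\epsilon = \tan(\pi/4) = 1$, the factor $(1-\epsilon^2)^2$ kills the determinant identically, and $\mathcal{H}_{\pi/2}(S) = \mathcal{H}(S)$ is null everywhere. For $0 < a < \pi/2$, convexity gives $\psi_0^2 - \sigma_0\bar{\sigma}_0 \neq 0$, so the determinant vanishes iff $\sigma_0 = 0$ (an umbilic point) or $\sigma_0 e^{2iA} \in \mathbb{R}$ (a principal direction), exactly the two conditions in the statement.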
+
+We return to these hypersurfaces in Section 4 when considering normal neighbourhoods of Lagrangian discs in $L(\mathbb{R}^3)$.
+
+Given the two contact distributions, introduce the following terminology:
+
+**Definition 3.6.** A knot $C \subset \mathcal{H}(S)$ is $\alpha$-Legendrian ($\beta$-Legendrian) if its tangent lies in an $\alpha$-plane ($\beta$-plane) at each point.
+
+The *contact curve* of $C$ is the curve $c = \pi(C) \subset S$ obtained by the canonical projection $\pi: \mathcal{H}(S) \to S$.
+
+We now prove the second part of Theorem 1.2.
+
+Let $c \subset S$ be a curve on a convex surface parameterized by arc-length $u \mapsto (x^1(u), x^2(u), x^3(u))$. Let $(\nu, \eta_0)$ be the outward pointing normal line to $S$ along $c$ so that
+
+$$ z = x^1 + ix^2 = \frac{2(\eta_0 - \bar{\nu}^2 \bar{\eta}_0)}{(1 + \nu \bar{\nu})^2} + \frac{2\nu}{1 + \nu \bar{\nu}} r_0, \quad (3.13) $$
+
+$$ x^3 = - \frac{2(\nu\bar{\eta}_0 + \bar{\nu}\eta_0)}{(1 + \nu\bar{\nu})^2} + \frac{1 - \nu\bar{\nu}}{1 + \nu\bar{\nu}} r_0, \quad (3.14) $$
+
+where $r_0 : S \to \mathbb{R}$ is the support function of $S$.
+
+To find the oriented line fields along $c$, differentiate equations (3.13) and (3.14) with respect to $u$ to find
+
+$$ \dot{z} = \frac{2}{(1 + \nu\bar{\nu})^2} [(\psi_0 + \sigma_0\nu^2)\dot{\nu} - (\psi_0\nu^2 + \sigma_0)\dot{\bar{\nu}}] $$
+
+$$ \dot{x}^3 = - \frac{2}{(1 + \nu\bar{\nu})^2} [(\psi_0\bar{\nu} - \sigma_0\nu)\dot{\nu} + (\psi_0\nu - \bar{\sigma}_0\bar{\nu})\dot{\bar{\nu}}], $$
+
+where we have substituted for the derivatives of $\eta_0$ and $r_0$ using equation (3.5) and the definitions of $\sigma_0$ and $\psi_0$ which yield:
+
+$$ \dot{\eta}_0 = \frac{\partial \eta_0}{\partial \nu} \dot{\nu} + \frac{\partial \eta_0}{\partial \bar{\nu}} \dot{\bar{\nu}} = \left( \psi_0 - r_0 + \frac{2\bar{\nu}\eta_0}{1 + \nu\bar{\nu}} \right) \dot{\nu} - \bar{\sigma}_0 \dot{\bar{\nu}} $$
+
+$$ \dot{r}_0 = \frac{\partial r_0}{\partial \nu} \dot{\nu} + \frac{\partial r_0}{\partial \bar{\nu}} \dot{\bar{\nu}} = \frac{2\bar{\eta}_0}{(1 + \nu\bar{\nu})^2} \dot{\nu} + \frac{2\eta_0}{(1 + \nu\bar{\nu})^2} \dot{\bar{\nu}}. $$
+
+The curve is parameterized by arc length iff
+
+$$ |\vec{T}|^2 = \dot{z}\dot{\bar{z}} + (\dot{x}^3)^2 = \frac{4}{(1 + \nu\bar{\nu})^2} |\psi_0\dot{\nu} - \bar{\sigma}_0\dot{\bar{\nu}}|^2 = 1, $$
+
+where $\vec{T}$ is the tangent vector to $c$. That is, there exists $\hat{\beta} \in [0, 2\pi)$ such that
+
+$$ \psi_0 \dot{\nu} - \bar{\sigma}_0 \dot{\bar{\nu}} = \frac{1}{2}(1 + \nu\bar{\nu})e^{i\hat{\beta}}, $$
+---PAGE_BREAK---
+
+inverting this last equation (with the aid of its conjugate) gives
+
+$$
+\dot{\nu} = \frac{(1 + \nu \bar{\nu})}{2(\psi_0^2 - |\sigma_0|^2)} \left[ \psi_0 e^{i\hat{\beta}} + \bar{\sigma}_0 e^{-i\hat{\beta}} \right].
+$$
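+
+In detail, the inversion runs as follows: pairing the equation with its conjugate gives the linear system
+
+$$ \psi_0 \dot{\nu} - \bar{\sigma}_0 \dot{\bar{\nu}} = \tfrac{1}{2}(1 + \nu\bar{\nu})e^{i\hat{\beta}} \qquad \psi_0 \dot{\bar{\nu}} - \sigma_0 \dot{\nu} = \tfrac{1}{2}(1 + \nu\bar{\nu})e^{-i\hat{\beta}}, $$
+
+and multiplying the first equation by $\psi_0$, the second by $\bar{\sigma}_0$ and adding eliminates $\dot{\bar{\nu}}$:
+
+$$ (\psi_0^2 - \sigma_0\bar{\sigma}_0)\dot{\nu} = \tfrac{1}{2}(1 + \nu\bar{\nu})\left(\psi_0 e^{i\hat{\beta}} + \bar{\sigma}_0 e^{-i\hat{\beta}}\right). $$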
+
+Now comparing this with
+
+$$
+\vec{T} = \dot{z}\frac{\partial}{\partial z} + \dot{\bar{z}}\frac{\partial}{\partial \bar{z}} + \dot{x}^3\frac{\partial}{\partial x^3} = \frac{2\xi}{1+\xi\bar{\xi}}\frac{\partial}{\partial z} + \frac{2\bar{\xi}}{1+\xi\bar{\xi}}\frac{\partial}{\partial \bar{z}} + \frac{1-\xi\bar{\xi}}{1+\xi\bar{\xi}}\frac{\partial}{\partial x^3}
+$$
+
+where $\xi$ is given by equation (3.7), we find that the oriented line is tangent
+to its contact curve iff $\hat{\beta} = A$. Moreover, the tangent to the curve $C \subset \mathcal{H}(S)$
+at a point is of the form
+
+$$
+\vec{X} = a \frac{\partial}{\partial A} + b \operatorname{Re} \left[ e^{iB} \frac{\partial}{\partial \nu} \right],
+$$
+
+for $a, b \in \mathbb{R}$ with
+
+$$
+B = A + \frac{1}{2i} \ln \left( \frac{\psi_0 + \bar{\sigma}_0 e^{-2iA}}{\psi_0 + \sigma_0 e^{2iA}} \right).
+$$
+
+We conclude by Lemma 3.4 that the tangent vector to a knot $C \subset \mathcal{H}(S)$ is contained in an $\alpha$-plane iff the oriented line field is tangent to its contact curve.
+
+We now turn to the third part of Theorem 1.2.
+
+On the other hand, the normal $\vec{N}$ to the curve $c$ gives rise to the vector
+$\vec{X}$ with
+
+$$
+B = A + \frac{1}{2i} \ln \left( \frac{\psi_0 + \bar{\sigma}_0 e^{-2iA}}{\psi_0 + \sigma_0 e^{2iA}} \right) + \frac{\pi}{2},
+$$
+
+and this is contained in a $\beta$-plane iff either $\sigma_0 = 0$, in which case the point is umbilic, or if $\sigma_0 e^{2iA}$ is real, in which case the curve $c$ is a line of curvature.
+
+Similarly, if $C$ is $\beta$-Legendrian, then the oriented lines are normal to $c$ iff
+either $\sigma_0 = 0$, in which case the point is umbilic, or if $\sigma_0 e^{2iA}$ is real, in which
+case the curve $c$ is a line of curvature.
+
+To prove the final part of Theorem 1.2 consider the contact 1-form $\omega^+$
+defined in equation (3.11).
+
+The Reeb vector field associated with $\omega^+$ is easily found to be
+
+$$
+X = \frac{i(1 + \nu\bar{\nu})}{2(\psi_0^2 - \sigma_0\bar{\sigma}_0)} \left[ (\psi_0 e^{iA} - \bar{\sigma}_0 e^{-iA}) \frac{\partial}{\partial\nu} + (\psi_0 e^{-iA} - \sigma_0 e^{iA}) \frac{\partial}{\partial\bar{\nu}} \right] \\
+\qquad + \frac{1}{2(\psi_0^2 - \sigma_0\bar{\sigma}_0)} \left[ (\psi_0\bar{\nu} - \sigma_0\nu)e^{iA} + (\psi_0\nu - \bar{\sigma}_0\bar{\nu})e^{-iA} \right] \frac{\partial}{\partial A}
+$$
+
+We conclude that flowing by the Reeb vector using a parameter $r$ leads
+to the flow
+
+$$
+\begin{gather*}
+\frac{d\nu}{dr} = \frac{i(1 + \nu\bar{\nu})}{2(\psi_0^2 - |\sigma_0|^2)} (\psi_0 e^{iA} - \bar{\sigma}_0 e^{-iA}) \\
+\frac{dA}{dr} = \frac{1}{2(\psi_0^2 - |\sigma_0|^2)} [(\psi_0\bar{\nu} - \sigma_0\nu)e^{iA} + (\psi_0\nu - \bar{\sigma}_0\bar{\nu})e^{-iA}]
+\end{gather*}
+$$
+---PAGE_BREAK---
+
+This flow can be understood by considering the geodesic flow on $S$ which
+induces the following flow on $\mathcal{H}(S)$:
+
+$$ \frac{d\nu}{d\tau} = \frac{(1 + \nu\bar{\nu})}{2(\psi_0^2 - |\sigma_0|^2)} (\psi_0 e^{iA} + \bar{\sigma}_0 e^{-iA}) $$
+
+$$ \frac{dA}{d\tau} = - \frac{i}{2(\psi_0^2 - |\sigma_0|^2)} [(\psi_0 \bar{\nu} - \sigma_0 \nu) e^{iA} - (\psi_0 \nu - \bar{\sigma}_0 \bar{\nu}) e^{-iA}] $$
+
+The Reeb flow is obtained from the geodesic flow by replacing $A$ by $A + \pi/2$. Thus the integral curves of the Reeb flow consist of the oriented lines along a geodesic of $S$ that are orthogonal to the geodesic.
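+
+Explicitly, replacing $A$ by $A + \pi/2$ sends $e^{iA} \mapsto ie^{iA}$ and $e^{-iA} \mapsto -ie^{-iA}$, so the first geodesic flow equation becomes
+
+$$ \frac{d\nu}{d\tau} = \frac{(1 + \nu\bar{\nu})}{2(\psi_0^2 - |\sigma_0|^2)}\left(i\psi_0 e^{iA} - i\bar{\sigma}_0 e^{-iA}\right) = \frac{i(1 + \nu\bar{\nu})}{2(\psi_0^2 - |\sigma_0|^2)}(\psi_0 e^{iA} - \bar{\sigma}_0 e^{-iA}), $$
+
+which is precisely the Reeb equation for $d\nu/dr$; the equation for $dA/d\tau$ transforms in the same way.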
+
+This completes the proof of Theorem 1.2 in the flat case.
+
+**3.2. The non-flat case.**
+
+**3.2.1. The neutral metric.** For $\epsilon \in \{-1, 1\}$ consider the following flat metrics in $\mathbb{R}^4$:
+
+$$ \langle \cdot, \cdot \rangle_{\epsilon} = \epsilon(dx^1)^2 + \epsilon(dx^2)^2 + \epsilon(dx^3)^2 + (dx^4)^2. $$
+
+Let $\mathbb{S}_{\epsilon}^3 = \{x \in \mathbb{R}^4 : \langle x, x \rangle_{\epsilon} = 1\}$ be the 3-(pseudo-)sphere in the (pseudo-)Euclidean space $\mathbb{R}_{\epsilon}^4 := (\mathbb{R}^4, \langle \cdot, \cdot \rangle_{\epsilon})$. Note that $\mathbb{S}_1^3$ is the standard 3-sphere, while $\mathbb{S}_{-1}^3$ is anti-isometric to the hyperbolic 3-space $\mathbb{H}^3$.
+
+Let $\iota: \mathbb{S}_{\epsilon}^3 \hookrightarrow \mathbb{R}^4$ be the inclusion map and denote by $g_{\epsilon}$ the induced metric
+$\iota^* \langle \cdot, \cdot \rangle_{\epsilon}$. The space of oriented geodesics $\mathcal{L}(\mathbb{S}_{\epsilon}^3)$ of $(\mathbb{S}_{\epsilon}^3, g_{\epsilon})$ is 4-dimensional
+and $\mathcal{L}(\mathbb{S}_1^3)$ can be identified with the Grassmannian of oriented planes in $\mathbb{R}_1^4$,
+while $\mathcal{L}(\mathbb{S}_{-1}^3)$ can be identified with the Grassmannian of oriented planes in
+$\mathbb{R}_{-1}^4$ such that the induced metric is Lorentzian [4].
+
+Thus, $\mathcal{L}(\mathbb{S}_{\epsilon}^3)$ is the following sub-manifold of the space $\Lambda^2(\mathbb{R}^4)$ of bivectors
+in $\mathbb{R}^4$:
+
+$$ \mathcal{L}(\mathbb{S}_{\epsilon}^{3}) = \{x \wedge y \in \Lambda^{2}(\mathbb{R}^{4}) : y \in T_{x}\mathbb{S}_{\epsilon}^{3}, \langle y, y \rangle_{\epsilon} = \epsilon\}. $$
+
+In fact, an element $x \wedge y \in \mathcal{L}(\mathbb{S}_\epsilon^3)$ is the oriented geodesic $\gamma \subset \mathbb{S}_\epsilon^3$ passing
+through $x \in \mathbb{S}_\epsilon^3$ with direction $y \in T_x\mathbb{S}_\epsilon^3$, where $\langle y, y \rangle_\epsilon = \epsilon$.
+
+Endow $\Lambda^2(\mathbb{R}^4)$ with the flat metric $\langle\langle \cdot, \cdot \rangle\rangle_\epsilon$ defined by:
+
+$$ \langle\langle x_1 \wedge y_1, x_2 \wedge y_2 \rangle\rangle_\epsilon = \langle x_1, x_2 \rangle_\epsilon \langle y_1, y_2 \rangle_\epsilon - \langle x_1, y_2 \rangle_\epsilon \langle y_1, x_2 \rangle_\epsilon. $$
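+
+For example, for $x \wedge y \in \mathcal{L}(\mathbb{S}_\epsilon^3)$ itself, since $\langle x, x\rangle_\epsilon = 1$, $\langle y, y\rangle_\epsilon = \epsilon$ and $\langle x, y\rangle_\epsilon = 0$,
+
+$$ \langle\langle x \wedge y, x \wedge y \rangle\rangle_\epsilon = \langle x, x \rangle_\epsilon \langle y, y \rangle_\epsilon - \langle x, y \rangle_\epsilon^2 = \epsilon, $$
+
+so $\mathcal{L}(\mathbb{S}_\epsilon^3)$ lies in the quadric $\{\omega \in \Lambda^2(\mathbb{R}^4) : \langle\langle\omega, \omega\rangle\rangle_\epsilon = \epsilon\}$.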
+
+If $x \wedge y \in \mathcal{L}(\mathbb{S}_\epsilon^3)$, the tangent space $T_{x\wedge y}\mathcal{L}(\mathbb{S}_\epsilon^3)$ is the vector space consisting
+of vectors of the form $x \wedge X + y \wedge Y$, where $X, Y \in (x \wedge y)^\perp = \{\xi \in \mathbb{R}^4 :$
+$\langle\xi, x\rangle_\epsilon = \langle\xi, y\rangle_\epsilon = 0\}$.
+
+A complex (resp. paracomplex) structure $J$ can be defined on the oriented
+plane $x \wedge y \in \mathcal{L}(\mathbb{S}_1^3)$ (resp. $\mathcal{L}(\mathbb{S}_{-1}^3)$) by $Jx = y$ and $Jy = -x$ (resp. $Jy = x$),
+and let $J'$ be the complex structure on the oriented plane $(x \wedge y)^\perp$. Define
+the endomorphisms $\mathcal{J}$ and $\mathcal{J}'$ on $T_{x\wedge y}\mathcal{L}(\mathbb{S}_\epsilon^3)$ as follows:
+
+$$ \mathcal{J}(x \wedge X + y \wedge Y) = Jx \wedge X + Jy \wedge Y = y \wedge X - \epsilon x \wedge Y, $$
+
+and
+
+$$ \mathcal{J}'(x \wedge X + y \wedge Y) = x \wedge J'(X) + y \wedge J'(Y). $$
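+
+A quick check that these square correctly: on the plane $x \wedge y$ one has $J^2 = -\epsilon\,\mathrm{id}$ (e.g. for $\epsilon = 1$, $J^2x = Jy = -x$, while for $\epsilon = -1$, $J^2x = Jy = x$), and so
+
+$$ \mathcal{J}^2(x \wedge X + y \wedge Y) = J^2x \wedge X + J^2y \wedge Y = -\epsilon\,(x \wedge X + y \wedge Y), $$
+
+giving a complex structure for $\epsilon = 1$ and a paracomplex structure for $\epsilon = -1$.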
+---PAGE_BREAK---
+
+For $\epsilon = 1$ (resp. $\epsilon = -1$), $\mathcal{J}$ is a complex (resp. paracomplex) structure on $\mathcal{L}(\mathbb{S}_\epsilon^3)$, while $\mathcal{J}'$ is a complex structure for $\epsilon = \pm 1$ [1] [2] [4] [6].
+
+Denoting the inclusion map by $\iota : \mathcal{L}(\mathbb{S}_\epsilon^3) \hookrightarrow \Lambda^2(\mathbb{R}^4)$, the metric $\iota^* \langle\langle \cdot, \cdot \rangle\rangle_\epsilon$ is Riemannian and Einstein [31]. The metric $G_\epsilon = -\iota^* \langle\langle \mathcal{J} \cdot, \mathcal{J}' \cdot \rangle\rangle_\epsilon$ is of neutral signature, locally conformally flat and is invariant under the natural action of the group $SO((7+\epsilon)/2, (1-\epsilon)/2)$ of isometries of $\mathbb{S}_\epsilon^3$. Additionally, both structures $(\mathcal{L}(\mathbb{S}_\epsilon^3), \mathcal{J}, \iota^* \langle\langle \cdot, \cdot \rangle\rangle_\epsilon)$ and $(\mathcal{L}(\mathbb{S}_\epsilon^3), \mathcal{J}', G_\epsilon)$ are (para-)Kähler manifolds [1] [2] [4] [15] [24].
+
+**3.2.2. Tangent hypersurfaces.** Consider an oriented smooth surface $S$ of $\mathbb{S}_\epsilon^3$ given by the immersion $\phi : \bar{S} \to \mathbb{S}_\epsilon^3$, with $S = \phi(\bar{S})$. Let $(e_1, e_2)$ be an oriented orthonormal frame of the tangent bundle of $S$ and let $N$ be the unit normal vector field such that $(\phi, e_1, e_2, N)$ is a positive oriented orthonormal frame in $\mathbb{R}_\epsilon^4$. Then
+
+$$ \langle \phi, \phi \rangle_\epsilon = \epsilon \langle e_1, e_1 \rangle_\epsilon = \epsilon \langle e_2, e_2 \rangle_\epsilon = \epsilon \langle N, N \rangle_\epsilon = 1. $$
+
+For $\theta \in \mathbb{S}^1$, define the following tangential vector fields
+
+$$ v(x, \theta) = \cos \theta e_1 + \sin \theta e_2, \qquad v^\perp(x, \theta) = -\sin \theta e_1 + \cos \theta e_2. $$
+
+As in the flat case, the tangent hypersurface $\mathcal{H}(S)$ in $\mathcal{L}(\mathbb{S}_\epsilon^3)$ is the image of the immersion $\bar{\phi} : \bar{S} \times S^1 \to \mathcal{L}(\mathbb{S}_\epsilon^3) : (x, \theta) \mapsto \phi(x) \wedge v(x, \theta)$.
+
+Identify $e_i$ with $d\phi(e_i)$ and assume that $(e_1, e_2)$ diagonalize the shape operator, that is, $h(e_i, e_j) = k_i\delta_{ij}$, where $k_i$ and $h$ denote the principal curvatures and second fundamental form, respectively.
+
+If $\nabla$ denotes the Levi-Civita connection of the induced metric $\phi^*g_\epsilon$ and setting $v_1 := \langle\nabla_{e_1} v, v^\perp\rangle_\epsilon$ and $v_2 := \langle\nabla_{e_2} v, v^\perp\rangle_\epsilon$, the derivative of $\bar{\phi}$ is given by:
+
+$$
+\left.
+\begin{aligned}
+d\bar{\phi}(e_1) &= v_1\phi \wedge v^\perp + k_1 \cos\theta \phi \wedge N + \sin\theta v \wedge v^\perp \\
+d\bar{\phi}(e_2) &= v_2\phi \wedge v^\perp + k_2 \sin\theta \phi \wedge N - \cos\theta v \wedge v^\perp \\
+d\bar{\phi}(\partial/\partial\theta) &= \phi \wedge v^\perp.
+\end{aligned}
+\right\} (3.15)
+$$
+
+A direct computation shows that
+
+$$ G_\epsilon(d\bar{\phi}(\partial/\partial\theta), d\bar{\phi}(e_1)) = G_\epsilon(d\bar{\phi}(\partial/\partial\theta), d\bar{\phi}(e_2)) = 0. $$
+
+In addition, $d\bar{\phi}(\partial/\partial\theta)$ is null, that is,
+
+$$ G_\epsilon(d\bar{\phi}(\partial/\partial\theta), d\bar{\phi}(\partial/\partial\theta)) = 0. $$
+
+Now, a brief computation gives
+
+$$ G_\epsilon(d\bar{\phi} e_1, d\bar{\phi} e_1)G_\epsilon(d\bar{\phi} e_2, d\bar{\phi} e_2) - G_\epsilon(d\bar{\phi} e_1, d\bar{\phi} e_2)^2 = -(k_2 \sin^2 \theta + k_1 \cos^2 \theta)^2. $$
+
+Thus, $d\bar{\phi}(\partial/\partial\theta)$ is both a tangential and a normal vector field of the hypersurface $\mathcal{H}(S)$. The induced metric $\bar{\phi}^*G_\epsilon$ is degenerate, of signature $(+,-,0)$.
+
+Let $\rho_1 = d\bar{\phi}(e_1)$ and $\rho_2$ be defined by
+
+$$ \rho_2 = \frac{2k_1v_2 \cos\theta \sin\theta + (k_1 \cos^2\theta - k_2 \sin^2\theta)v_1}{k_1 \cos^2\theta + k_2 \sin^2\theta} \phi \wedge v^\perp + k_1 \cos\theta \phi \wedge N - \sin\theta v \wedge v^\perp. \quad (3.16) $$
+
+Consider the null vectors $e_+$ and $e_-$ defined by $e_+ = \rho_1 + \rho_2$ and $e_- = \rho_1 - \rho_2$.
+
+If $e_0 = d\bar{\phi}(\partial/\partial\theta)$, define the null planes $\Pi_+ := \text{span}\{e_+, e_0\}$ and $\Pi_- := \text{span}\{e_-, e_0\}$. A brief computation shows that
+
+$$ \Pi_+ = \text{span}\{\phi \wedge v^\perp, \phi \wedge N\} \text{ and } \Pi_- = \text{span}\{\phi \wedge v^\perp, v \wedge v^\perp\}. $$
+
+**Proposition 3.7.** The plane $\Pi_+$ is an $\alpha$-plane, while $\Pi_-$ is a $\beta$-plane.
+
+**Proof.** If $\xi \in \Pi_+$, we have that $\xi = \xi_1 \phi \wedge v^\perp + \xi_2 \phi \wedge N$ and thus,
+
+$$ J'\xi = -\xi_1 \phi \wedge N + \xi_2 \phi \wedge v^\perp \in \Pi_+. $$
+
+Therefore, the null plane $\Pi_+$ is $J'$-holomorphic and, since it is totally null, it is an $\alpha$-plane.
+
+If $\xi \in \Pi_-$ we have that $\xi = \xi_1 \phi \wedge v^\perp + \xi_2 v \wedge v^\perp$. Then,
+
+$$ J\xi = \xi_1 v \wedge v^\perp - \epsilon\xi_2 \phi \wedge v^\perp \in \Pi_-, $$
+
+which shows that $\Pi_-$ is $J$-holomorphic, and thus, $\Pi_-$ is a $\beta$-plane. $\square$
+
+The following proposition establishes the first part of Theorem 1.2 in the non-flat cases:
+
+**Proposition 3.8.** Let $S$ be a smooth oriented convex surface in $\mathbb{S}_\epsilon^3$ and let $\mathcal{H}(S)$ be its tangent hypersurface. Then, $(\mathcal{H}(S), \Pi_+)$ and $(\mathcal{H}(S), \Pi_-)$ are both contact 3-manifolds.
+
+**Proof.** Assuming that $S$ is convex, we have that $k_1k_2 > 0$ and thus
+
+$$ k_1(x) \cos^2\theta + k_2(x) \sin^2\theta \neq 0, \quad \forall (x, \theta) \in \mathcal{H}(S). $$
+
+Set $\eta_1 = \phi \wedge v^\perp$, $\eta_2 = \phi \wedge N$ and $\eta_3 = v \wedge v^\perp$. We simply write $e_i$ for the tangential vector fields $d\bar{\phi}(e_i)$ and $\partial/\partial\theta$ for the tangential vector field $d\bar{\phi}(\partial/\partial\theta)$. Then solving the relations (3.15) for $\eta_i$ we have
+
+$$
+\begin{align*}
+\eta_1 &= \partial/\partial\theta \\
+\eta_2 &= \frac{\cos\theta}{k_1 \cos^2\theta + k_2 \sin^2\theta} e_1 + \frac{\sin\theta}{k_1 \cos^2\theta + k_2 \sin^2\theta} e_2 - \frac{v_1 \cos\theta + v_2 \sin\theta}{k_1 \cos^2\theta + k_2 \sin^2\theta} \frac{\partial}{\partial\theta} \\
+\eta_3 &= \frac{k_2 \sin\theta}{k_1 \cos^2\theta + k_2 \sin^2\theta} e_1 - \frac{k_1 \cos\theta}{k_1 \cos^2\theta + k_2 \sin^2\theta} e_2 - \frac{v_1 k_2 \sin\theta - v_2 k_1 \cos\theta}{k_1 \cos^2\theta + k_2 \sin^2\theta} \frac{\partial}{\partial\theta}.
+\end{align*}
+$$
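As a sanity check, the inversion of (3.15) can be verified numerically by writing the relations as a linear system in the basis $(\eta_1, \eta_2, \eta_3)$; the sample values for $k_i$, $v_i$ and $\theta$ below are arbitrary, chosen with $k_1 k_2 > 0$ as for a convex surface:

```python
import numpy as np

# arbitrary sample values (k_1 k_2 > 0, as for a convex surface)
k1, k2, v1, v2, th = 1.3, 0.7, 0.4, -0.2, 0.9
c, s = np.cos(th), np.sin(th)

# relations (3.15) written in the basis (eta_1, eta_2, eta_3)
eta = np.eye(3)
e1 = v1 * eta[0] + k1 * c * eta[1] + s * eta[2]
e2 = v2 * eta[0] + k2 * s * eta[1] - c * eta[2]
dth = eta[0]                       # d\bar{phi}(d/d theta) = eta_1

# the claimed expressions for eta_2 and eta_3
D = k1 * c**2 + k2 * s**2
eta2 = (c * e1 + s * e2 - (v1 * c + v2 * s) * dth) / D
eta3 = (k2 * s * e1 - k1 * c * e2 - (v1 * k2 * s - v2 * k1 * c) * dth) / D

assert np.allclose(eta2, eta[1]) and np.allclose(eta3, eta[2])
```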
+
+Thus, $\{\eta_1, \eta_2, \eta_3\}$ is a frame of tangential vector fields; let $(\eta^1, \eta^2, \eta^3)$ be the dual coframe, so that $\eta^i(\eta_j) = \delta^i_j$ and $\eta^i \in T^*(\mathcal{H}(S))$.
+
+Observe that $\Pi_+$ is generated by the vectors $\eta_1, \eta_2$, and thus $\eta^3(\Pi_+) = 0$. If $(e^1, e^2, d\theta)$ is the dual frame of $(e_1, e_2, \partial/\partial\theta)$ we have
+
+$$ \eta^3 = \sin\theta e^1 - \cos\theta e^2. $$
+
+Hence,
+
+$$ \eta^3 \wedge d\eta^3 = e^1 \wedge e^2 \wedge d\theta, \tag{3.17} $$
+
+which implies that $\eta^3 \wedge d\eta^3 \neq 0$, and thus $(\mathcal{H}(S), \Pi_+)$ is a contact manifold.
+
+The $\beta$-plane $\Pi_-$ is generated by the vectors $\eta_1, \eta_3$, and thus $\eta^2(\Pi_-) = 0$. A brief computation gives
+
+$$ \eta^2 = k_1 \cos \theta e^1 + k_2 \sin \theta e^2, \tag{3.18} $$
+
+and then,
+
+$$ \eta^2 \wedge d\eta^2 = -k_1 k_2 e^1 \wedge e^2 \wedge d\theta. $$
+
+Using the fact that $S$ is convex, it follows that $\eta^2 \wedge d\eta^2 \neq 0$ and thus $(\mathcal{H}(S), \Pi_-)$ is a contact manifold. $\square$
+
+For any smoothly embedded convex surface $S \subset \mathbb{S}_\epsilon^3$, consider the constant angle hypersurface $\mathcal{H}_a(S)$ of $\mathbb{L}(\mathbb{S}_\epsilon^3)$, which is the set of all oriented geodesics passing through $S$ and making an angle $a$ with the normal vector field $N$ of $S$. As in the flat case, $\mathcal{H}_0(S)$ is a Lagrangian surface in $\mathbb{L}(\mathbb{S}_\epsilon^3)$, while $\mathcal{H}_{\pi/2}(S)$ is the tangent hypersurface. The following proposition covers the other cases:
+
+**Proposition 3.9.** For $a \in (0, \pi/2)$, the hypersurface $\mathcal{H}_a(S)$ is null exactly at the oriented geodesics either
+
+(1) passing through an umbilic point on $S$, or
+
+(2) whose direction projected to the tangent bundle $TS$ is tangent to a line of curvature of $S$.
+
+**Proof.** Let $\phi$ be an immersion of $S$ in $\mathbb{S}_\epsilon^3$ and consider, as before, the oriented orthonormal frame $(\phi, e_1, e_2, N)$, where $(e_1, e_2)$ are the principal directions.
+
+For $a \in (0, \pi/2)$, the hypersurface $\mathcal{H}_a(S)$ is given by the image of the immersion
+
+$$ \bar{\phi}_a(x, \theta) = \phi(x) \wedge v_a(x, \theta), $$
+
+where $x \in S$ and $\theta \in [0, 2\pi)$, with
+
+$$ v_a(x, \theta) = (\cos \theta e_1(x) + \sin \theta e_2(x)) \sin a + N \cos a. $$
+
+Consider the following normal vector field of $\mathcal{H}_a(S)$:
+
+$$
+\begin{aligned}
+\bar{N}_a = & -(\cos \theta \langle \bar{\nabla}_{e_1} v_a, \xi_2 \rangle_\epsilon + \sin \theta \langle \bar{\nabla}_{e_2} v_a, \xi_2 \rangle_\epsilon) \phi \wedge \xi_1 + \cos a v_a \wedge \xi_1 \\
+& - \cos a (\sin \theta \langle \bar{\nabla}_{e_1} v_a, \xi_2 \rangle_\epsilon - \cos \theta \langle \bar{\nabla}_{v_a} v_a, \xi_2 \rangle_\epsilon) \phi \wedge \xi_2,
+\end{aligned}
+ $$
+
+where $\bar{\nabla}$ denotes the Levi-Civita connection of $g_\epsilon$. Then,
+
+$$ G(\bar{N}_a, \bar{N}_a) = (k_2 - k_1) \cos^2 a \sin 2\theta, $$
+
+and the Proposition follows. $\square$
+
+Let $S$ be a smooth convex 2-sphere in $\mathbb{S}_\epsilon^3$ and let $C : I \to \mathcal{H}(S) : u \mapsto \phi(u) \wedge v(u)$ be an $\alpha$-Legendrian curve, where the curve $\phi(u)$ in $S$ is parameterised by the arc-length $u$.
+
+By definition we have $\dot{C} = \dot{\phi} \wedge v + \phi \wedge \dot{v} \in \operatorname{span}\{\phi \wedge v^\perp, \phi \wedge N\}$, and thus there exist two real functions $\lambda_1$ and $\lambda_2$ such that
+
+$$ \dot{\phi} \wedge v + \phi \wedge \dot{v} = \lambda_1 \phi \wedge v^\perp + \lambda_2 \phi \wedge N. $$
+
+We have
+
+$$ \dot{\phi} \wedge v + \phi \wedge (\dot{v} - \lambda_1 v^\perp - \lambda_2 N) = 0. \quad (3.19) $$
+
+If $\dot{\phi} = a_1v + a_2\phi + a_3v^\perp + a_4N$, it is obvious that $a_3 = a_4 = 0$. Then $\dot{\phi} = a_1v + a_2\phi$ and since $\langle \dot{\phi}, \phi \rangle_\epsilon = 0$ we have that $a_2 = 0$. Then $\dot{\phi} = a_1v$ and since $|\dot{\phi}|_\epsilon^2 = \epsilon$ we have that either $a_1 = 1$ or $a_1 = -1$. In any case, $v = a_1\dot{\phi}$ and thus,
+
+$$ C = \phi \wedge v = \phi \wedge a_1 \dot{\phi}, $$
+
+where $a_1^2 = 1$.
+
+We turn now to the proof of the second part of Theorem 1.2 in the non-flat case in 3 steps.
+
+(i) and (ii) imply (iii):
+
+The fact that $C$ is $\beta$-Legendrian implies that there exist functions $a, b$ along the curve $\phi$ such that,
+
+$$ \dot{C} = \dot{\phi} \wedge v + \phi \wedge \dot{v} = a\phi \wedge v^\perp + bv \wedge v^\perp, \quad (3.20) $$
+
+and since $C = \phi \wedge v$ is normal to the curve $\phi$ we have that
+
+$$ \langle \dot{\phi}, v \rangle_{\epsilon} = 0. $$
+
+Since $N$ is the unit normal vector field of $S$, we have $\langle \dot{\phi}, N \rangle_{\epsilon} = 0$, and therefore $\dot{\phi} = \pm v^\perp$. Now, (3.20) yields $\phi \wedge (\dot{v} - av^\perp) = 0$, which implies $\dot{v} = \mu\phi + av^\perp$. Then
+
+$$ \langle \dot{v}, N \rangle_{\epsilon} = 0, \quad (3.21) $$
+
+and since
+
+$$ \langle \dot{N}, \phi \rangle_{\epsilon} = -\langle \dot{\phi}, N \rangle_{\epsilon} = 0 \quad \text{and} \quad \langle \dot{N}, v \rangle_{\epsilon} = -\langle \dot{v}, N \rangle_{\epsilon} = 0, $$
+
+we have that $\dot{N} = \lambda v^\perp = \pm\lambda \dot{\phi}$ and therefore $\phi$ is a line of curvature.
+
+On the other hand,
+
+$$
+\begin{aligned}
+0 = \langle \dot{v}, N \rangle_{\epsilon} &= -\langle v, \nabla_{v^{\perp}} N \rangle_{\epsilon} = \langle \cos \theta e_1 + \sin \theta e_2, A(-\sin \theta e_1 + \cos \theta e_2) \rangle_{\epsilon} \\
+&= \langle \cos \theta e_1 + \sin \theta e_2, -\sin \theta A e_1 + \cos \theta A e_2 \rangle_{\epsilon} = \epsilon (k_2 - k_1) \cos \theta \sin \theta,
+\end{aligned}
+$$
+
+which shows that $S$ is umbilic along the curve $\phi$.
+
+(i) and (iii) imply (ii):
+
+The fact that $\mathcal{C}$ is $\beta$-Legendrian gives (3.20). Suppose that the curve $\phi$ is also a line of curvature. Then $\dot{N} = \lambda \dot{\phi}$, where $\lambda$ is a nonzero function along the curve. We also have
+
+$$ \dot{\phi} = a_1 v + a_2 v^{\perp}. $$
+
+From (3.20) we have
+
+$$ a_2 v^{\perp} \wedge v + \phi \wedge \dot{v} = a\phi \wedge v^{\perp} + bv \wedge v^{\perp}, $$
+
+which gives $\phi \wedge (\dot{v} - av^{\perp}) = 0$, and thus $\dot{v} = av^{\perp} + \mu\phi$. Then, $\langle \dot{v}, N \rangle_{\epsilon} = 0$, which yields,
+
+$$ 0 = \langle \dot{v}, N \rangle_{\epsilon} = - \langle v, \dot{N} \rangle_{\epsilon} = -\lambda \langle v, \dot{\phi} \rangle_{\epsilon}. $$
+
+It follows that $\langle \dot{\phi}, v \rangle_{\epsilon} = 0$ and hence $\mathcal{C} = \phi \wedge v$ is normal to the curve $\phi$.
+
+Suppose now that $S$ is umbilic along the curve $\phi = \phi(u)$, i.e., $k_1 = k_2$. The relation (3.20) implies
+
+$$ (\dot{\phi} + bv^{\perp}) \wedge v + \phi \wedge (\dot{v} - av^{\perp}) = 0, $$
+
+which gives the following equations:
+
+$$ \dot{\phi} = -bv^{\perp} + \mu v \quad \text{and} \quad \dot{v} = av^{\perp} + s\phi. $$
+
+Then $\langle \dot{v}, N \rangle_{\epsilon} = 0$ and hence,
+
+$$ 0 = \langle v, \dot{N} \rangle_{\epsilon} = \langle v, \nabla_{\dot{\phi}} N \rangle_{\epsilon} = \langle v, \nabla_{-bv^{\perp}+\mu v} N \rangle_{\epsilon} = -b \langle v, \nabla_{v^{\perp}} N \rangle_{\epsilon} + \mu \langle v, \nabla_v N \rangle_{\epsilon}. $$
+
+The fact that $S$ is umbilic along the curve implies that $\langle v, \nabla_{v^{\perp}} N \rangle_{\epsilon} = 0$. Then
+
+$$ 0 = \mu \langle v, \nabla_v N \rangle_{\epsilon} = -\epsilon\mu(k_1 \cos^2\theta + k_2 \sin^2\theta), $$
+
+and since $S$ is convex, we have that $k_1 \cos^2\theta + k_2 \sin^2\theta \neq 0$. It follows that $\mu = 0$ and thus $\dot{\phi} = -bv^\perp$. Therefore, $\langle \dot{\phi}, v \rangle_\epsilon = 0$ and hence $\mathcal{C} = \phi \wedge v$ is normal to the curve $\phi = \phi(u)$.
+
+(ii) and (iii) imply (i):
+
+The fact that $\mathcal{C}$ is normal to $\phi = \phi(u)$ implies that $\langle \dot{\phi}, v \rangle_{\epsilon} = 0$. Suppose that the curve $\phi$ is a line of curvature. Then $\dot{N} = \lambda \dot{\phi}$, where $\lambda$ is a nonzero function along the curve. We also have that,
+
+$$ \dot{\phi} = a_1 v + b_1 v^{\perp}. \tag{3.22} $$
+
+Since
+
+$$ \langle \dot{v}, N \rangle_{\epsilon} = -\langle \dot{N}, v \rangle_{\epsilon} = -\lambda \langle \dot{\phi}, v \rangle_{\epsilon} = 0, $$
+
+we obtain
+
+$$ \dot{v} = a_2\phi + b_2v^{\perp}. \tag{3.23} $$
+
+Using (3.22) and (3.23) we have:
+
+$$
+\begin{align*}
+\dot{C} &= \dot{\phi} \wedge v + \phi \wedge \dot{v} = (a_1 v + b_1 v^{\perp}) \wedge v + \phi \wedge (a_2 \phi + b_2 v^{\perp}) \\
+&= -b_1 v \wedge v^{\perp} + b_2 \phi \wedge v^{\perp} \in \Pi_{-},
+\end{align*}
+$$
+
+and thus $\mathcal{C}$ is $\beta$-Legendrian.
+
+Suppose that $S$ is umbilic along the curve $\phi$ and that $\mathcal{C}$ is normal to $\phi =
+\phi(u)$. Then $\langle \dot{\phi}, v \rangle_{\epsilon} = 0$ and hence the equation (3.22) becomes $\dot{\phi} = b_1 v^\perp$.
+It follows that $\langle \dot{v}, \phi \rangle_{\epsilon} = -\langle \dot{\phi}, v \rangle_{\epsilon} = 0$ and
+
+$$
+\begin{align*}
+\langle \dot{v}, N \rangle_{\epsilon} &= -\langle v, \dot{N} \rangle_{\epsilon} = -\langle v, \nabla_{\dot{\phi}} N \rangle_{\epsilon} \\
+&= -b_1 \langle v, \nabla_{v^{\perp}} N \rangle_{\epsilon} = \epsilon b_1 (k_2 - k_1) \cos \theta \sin \theta = 0.
+\end{align*}
+$$
+
+Thus,
+
+$$
+\dot{\phi} = b_1 v^{\perp} \quad \text{and} \quad \dot{v} = b_2 v^{\perp}.
+$$
+
+Therefore,
+
+$$
+\dot{C} = \dot{\phi} \wedge v + \phi \wedge \dot{v} = -b_1 v \wedge v^{\perp} + b_2 \phi \wedge v^{\perp} \in \Pi_{-},
+$$
+
+which shows again that $\mathcal{C}$ is $\beta$-Legendrian.
+
+We prove the final part of Theorem 1.2 for the case of $\mathbb{L}(\mathbb{S}^3)$; the proof for the case of $\mathbb{L}(\mathbb{H}^3)$ is similar. Consider the contact 1-form $\eta^3$ of the contact manifold $(\mathcal{H}(S), \Pi_+)$ given in equation (3.17).
+
+A brief computation shows that the Reeb vector field $X$ associated with
+$\eta^3$ is
+
+$$
+X = \langle v, \nabla_{v^\perp} v^\perp \rangle \phi \wedge v^\perp + (k_1 - k_2) \cos \theta \sin \theta \phi \wedge N + v \wedge v^\perp.
+$$
+
+Let $C(t) = \phi(t) \wedge v(t)$ be a smooth regular curve in $\mathcal{H}(S)$, where $t$ is the arclength of the contact curve $\phi = \phi(t)$, and suppose that for every $t$ the velocity $\dot{C}(t)$ is a Reeb vector. It then follows that
+
+$$
+\dot{\phi} \wedge v + \phi \wedge \dot{v} = \langle v, \nabla_{v^\perp} v^\perp \rangle \phi \wedge v^\perp + (k_1 - k_2) \cos\theta \sin\theta \phi \wedge N + v \wedge v^\perp,
+$$
+
+which yields,
+
+$$
+\dot{\phi} = -v^{\perp}, \qquad \dot{v} = \langle v, \nabla_{v^{\perp}} v^{\perp} \rangle v^{\perp} + (k_1 - k_2) \cos\theta \sin\theta N. \quad (3.24)
+$$
+
+Thus,
+
+$$
+\langle \dot{\phi}, v \rangle = -\langle v^{\perp}, v \rangle = 0.
+$$
+
+Therefore, the curve $C(t)$ is formed by the oriented geodesics that are orthogonal to the contact curve $\phi$ and therefore we have proved the first statement.
+
+Using that $\langle \dot{\phi}, \dot{\phi} \rangle = 1$, we have
+
+$$
+\ddot{\phi} = -\dot{v}^{\perp} = -\phi + \langle v, \nabla_{v^{\perp}} v^{\perp} \rangle v + (k_1 \sin^2 \theta + k_2 \cos^2 \theta) N. \quad (3.25)
+$$
+
+Denoting the vector fields $d\phi(\partial/\partial t)$, $d\phi(\partial/\partial\theta)$ by $\partial/\partial t$, $\partial/\partial\theta$, respectively and using (3.24), we have
+
+$$ \nabla_{\partial/\partial\theta} \nabla_{v^\perp} = -\nabla_{\partial/\partial\theta} \nabla_{\partial/\partial t} = -\nabla_{\partial/\partial t} \nabla_{\partial/\partial\theta} = \nabla_{v^\perp} \nabla_{\partial/\partial\theta}. \quad (3.26) $$
+
+Note that
+
+$$ \langle v, \nabla_{e_1} v^\perp \rangle = \langle e_1, \nabla_{e_1} e_2 \rangle \qquad \langle v, \nabla_{e_2} v^\perp \rangle = \langle e_2, \nabla_{e_2} e_1 \rangle, $$
+
+and thus,
+
+$$ (\partial/\partial\theta) \langle v, \nabla_{e_1} v^\perp \rangle = 0, \qquad (\partial/\partial\theta) \langle v, \nabla_{e_2} v^\perp \rangle = 0. $$
+
+We then have,
+
+$$ \begin{aligned} -\langle v, \nabla_v v^\perp \rangle &= -\cos\theta \langle v, \nabla_{e_1} v^\perp \rangle - \sin\theta \langle v, \nabla_{e_2} v^\perp \rangle \\ &= (\partial/\partial\theta) (-\sin\theta \langle v, \nabla_{e_1} v^\perp \rangle + \cos\theta \langle v, \nabla_{e_2} v^\perp \rangle) \\ &= \langle \partial v / \partial \theta, \nabla_{v^\perp} v^\perp \rangle + \langle v, \nabla_{\partial/\partial\theta} \nabla_{v^\perp} v^\perp \rangle, \end{aligned} $$
+
+and using (3.26) we get
+
+$$ \begin{aligned} \langle v, \nabla_v v^\perp \rangle &= -\langle v^\perp, \nabla_{v^\perp} v^\perp \rangle - \langle v, \nabla_{v^\perp} \nabla_{\partial/\partial\theta} v^\perp \rangle \\ &= \langle v, \nabla_{v^\perp} v \rangle = 0. \end{aligned} \quad (3.27) $$
+
+Using (3.27), along the contact curve $\phi$, we have:
+
+$$ \langle e_1, \nabla_{e_1} e_2 \rangle = \langle e_2, \nabla_{e_2} e_1 \rangle = 0, $$
+
+and therefore,
+
+$$ \begin{aligned} \langle v, \nabla_{v^\perp} v^\perp \rangle &= -\cos\theta \langle e_1, \nabla_{e_1} e_2 \rangle - \sin\theta \langle e_2, \nabla_{e_2} e_1 \rangle \\ &= 0 \end{aligned} \quad (3.28) $$
+
+Substituting (3.28) into (3.25) we get
+
+$$ \ddot{\phi} = -\dot{v}^{\perp} = -\phi + (k_1 \sin^2 \theta + k_2 \cos^2 \theta) N. $$
+
+Hence $\ddot{\phi}$ lies in the plane spanned by $\phi$ and $N$, and thus $\nabla_{\dot{\phi}} \dot{\phi} = 0$, that is, $\phi$ is a geodesic of $S$. Thus the integral curves of the Reeb vector field are formed by the oriented geodesics tangent to $S$ that are orthogonal to a geodesic of $S$.
+
+This completes the proof of Theorem 1.2. $\square$
+
+**4. Intersection tori of null hypersurfaces**
+
+Given a smooth convex surface $S \subset \mathbb{R}^3$, the set of oriented outward-pointing normal geodesics forms a surface $\Sigma$ in $\mathbb{L}(\mathbb{R}^3)$ which is Lagrangian and totally real away from umbilic points on $S$ [15] [19].
+
+A normal neighbourhood of $\Sigma$ can be constructed by considering the set
+
+$$ \mathcal{N}_a(\Sigma) = \{ \gamma \in \mathbb{L}(\mathbb{R}^3) \;|\; \exists\, \gamma_0 \in \Sigma \text{ s.t. } \gamma \cap \gamma_0 = p \in S \text{ and } \dot{\gamma} \cdot \dot{\gamma}_0 \geq \cos a \}, $$
+
+for $a \in [0, \pi/2)$. It is not hard to see that $\mathcal{N}_0(\Sigma) = \Sigma$, while for $a > 0$ the 4-manifold $\mathcal{N}_a(\Sigma)$ is a disc bundle over $\Sigma$ which is a normal neighbourhood of $\Sigma$ in $\mathbb{L}(\mathbb{R}^3)$. Moreover, the boundary of the normal neighbourhood is the constant angle hypersurface introduced in Section 3: $\partial\mathcal{N}_a(\Sigma) = \mathcal{H}_a(S)$.
+
+Thus, for $a = \pi/2$, the null hypersurfaces $\mathcal{H}_{\pi/2}(S) = \mathcal{H}(S)$ that we have been studying are the boundaries of normal neighbourhoods of Lagrangian surfaces in $\mathbb{L}(\mathbb{R}^3)$.
+
+Consider as a local geometric model, a pair of Lagrangian discs intersecting at an isolated point, given by the oriented outward-pointing normal lines to two convex spheres $S_1$ and $S_2$, viewed as surfaces $\Sigma_1$ and $\Sigma_2$ in $\mathbb{L}(\mathbb{R}^3)$.
+
+These intersect in two points $\Sigma_1 \cap \Sigma_2 = \{\gamma_1, \gamma_2\}$, which when viewed in $\mathbb{R}^3$ are the pair of oriented lines through the centers of $S_1$ and $S_2$, where the following min/max quantities of the two-point distance function are attained:
+
+$$ \min_{p_1 \in S_1} \max_{p_2 \in S_2} d(p_1, p_2) \quad \text{and} \quad \max_{p_1 \in S_1} \min_{p_2 \in S_2} d(p_1, p_2). $$
+
+To remove the doubling due to orientation, choose $\gamma_1$ say and consider only discs $D_1 \subset S_1$ and $D_2 \subset S_2$ about the associated points of intersection $\gamma_1 \cap S_k$ for $k = 1, 2$. Let $\Sigma_1$ and $\Sigma_2$ be the oriented normal lines to these discs so that $\Sigma_1 \cap \Sigma_2 = \{\gamma_1\}$.
+
+About each disc the boundary of a normal neighbourhood as constructed above is $\mathcal{H}(D_j)$, and the intersection $\mathcal{H}(D_1) \cap \mathcal{H}(D_2)$ is the disjoint union of two tori, since each common tangent line to $D_1$ and $D_2$ has two orientations.
+
+For simplicity, let $S$ be a round sphere of radius $r_0$ centred at the origin in $\mathbb{R}^3$. The set of oriented lines normal to $S$ is equal to the set of oriented lines passing through the origin. This is an embedded holomorphic Lagrangian sphere $\Sigma \cong S^2$ given by the zero section of $TS^2$. In local coordinates this is $\xi \mapsto (\xi, \eta = 0)$.
+
+On the other hand, the set of oriented lines tangent to $S$ can be characterized as those oriented lines whose perpendicular distance from the origin is $r_0$. The perpendicular distance to the origin of an oriented line $(\xi, \eta)$ is given by:
+
+$$ \chi = \frac{2|\eta|}{1 + \xi\bar{\xi}}. $$
+
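This formula can be checked against elementary vector geometry. The sketch below assumes the standard identification of an oriented line with $(\xi, \eta)$, where $\xi$ is the stereographic image of the direction vector and $\eta = \frac{1}{2}(x^1 + ix^2 - 2x^3\xi - (x^1 - ix^2)\xi^2)$ for any point $(x^1, x^2, x^3)$ on the line; this convention is assumed here rather than taken from the text:

```python
import numpy as np

def line_coords(x, e):
    """(xi, eta) coordinates of the oriented line through the point x with unit
    direction e (assumed convention: stereographic xi, standard formula for eta)."""
    xi = (e[0] + 1j * e[1]) / (1 + e[2])
    z, t = x[0] + 1j * x[1], x[2]
    eta = 0.5 * (z - 2 * t * xi - np.conj(z) * xi**2)
    return xi, eta

def chi(xi, eta):
    """Claimed perpendicular distance of the line (xi, eta) from the origin."""
    return 2 * abs(eta) / (1 + abs(xi)**2)

rng = np.random.default_rng(0)
for _ in range(100):
    e = rng.normal(size=3)
    e /= np.linalg.norm(e)
    if 1 + e[2] < 1e-3:            # avoid the pole of the stereographic chart
        continue
    x = rng.normal(size=3)
    xi, eta = line_coords(x, e)
    d = np.linalg.norm(x - np.dot(x, e) * e)   # geometric distance from origin to line
    assert abs(chi(xi, eta) - d) < 1e-6
```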
+The hypersurface $\mathcal{H}(S)$ is given locally by:
+
+$$ \xi = \frac{\nu + e^{iA}}{1 - \bar{\nu}e^{iA}}, \qquad \eta = \frac{1}{2}(1 + \xi\bar{\xi})r_0 e^{iA} = -\frac{(1 + \nu\bar{\nu})}{(1 - \bar{\nu}e^{iA})^2} r_0 e^{iA}, $$
+
+for $(\nu, A) \in \mathbb{C} \times S^1$.
+
+Note that passing to the constant angle hypersurface $\mathcal{H}_a(S)$ here does not change the picture, as the constant angle hypersurface of a round sphere with a given radius is the tangent hypersurface of a round sphere with a different radius.
+
+**Proof of Theorem 1.3:**
+
+Consider the intersection of two such tangent hypersurfaces $\mathcal{H}(S_1)$ and $\mathcal{H}(S_2)$, where $S_1$ and $S_2$ are spheres of radii $r_1 \ge r_2$, respectively, whose centres are separated by a distance $l$. By a translation and a rotation, move the centre of the larger sphere to the origin and the centre of the smaller sphere to the positive $x^3$-axis. The Lagrangian sections $\Sigma_1$ and $\Sigma_2$ then intersect at the oriented line along the $x^3$-axis.
+
+The hypersurface $\mathcal{H}(S_1)$ is given by
+
+$$ \frac{2|\eta|}{1 + \xi\bar{\xi}} = r_1, $$
+
+while translating $\mathcal{H}(S_2)$ by $l$ to move its centre to the origin induces the change $\eta \rightarrow \eta + l\xi$, after which it is given by
+
+$$ \frac{2|\eta + l\xi|}{1 + \xi\bar{\xi}} = r_2. $$
+
+These are the two equations we must solve to find the intersection.
+
+The first equation is readily solved in polar coordinates
+
+$$ \xi = Re^{i\theta} \qquad \eta = \frac{1}{2}(1 + R^2)r_1 e^{i\psi}, $$
+
+for $R \in \mathbb{R}_+$, $\theta \in [0, 2\pi)$ and $\psi = \psi(R, \theta)$.
+
+Substituting this into the second equation yields
+
+$$ l^2 \frac{R^2}{(1+R^2)^2} + lr_1 \cos(\psi-\theta) \frac{R}{1+R^2} + \frac{1}{4}(r_1^2 - r_2^2) = 0. $$
+
+The set of solutions to this equation depends upon the relative values of $l$, $r_1$ and $r_2$. Switching to spherical polar coordinates by the substitution $R = \tan(\phi/2)$, we can write this as a quadratic equation for $e^{i\psi}$ thus:
+
+$$ lr_1 \sin \phi e^{2i(\psi-\theta)} + (l^2 \sin^2 \phi + r_1^2 - r_2^2) e^{i(\psi-\theta)} + lr_1 \sin \phi = 0. $$
+
+A unimodular solution of this exists only if the discriminant is non-positive, in which case
+
+$$ e^{i\psi} = (-K \pm \sqrt{1-K^2} i) e^{i\theta}, $$
+
+where $K(\phi)$ is the function
+
+$$ K = \frac{r_1^2 - r_2^2 + l^2 \sin^2 \phi}{2lr_1 \sin \phi}. $$
+
+Clearly we must have $K^2 \le 1$ for there to be a solution, which implies that
+
+$$ r_1 - r_2 \le l \sin \phi \le r_1 + r_2. \quad (4.1) $$
+
+Thus, for a solution to exist we must have $l \ge r_1 - r_2$, i.e. one sphere cannot lie completely inside the other sphere. When equality holds, the surfaces $S_1$ and $S_2$ intersect at a single point $p$, and the solution set is a circle (parameterized by $\theta$) with $\phi = \pi/2$. This is the circle of oriented lines in the common tangent plane $T_pS_1 = T_pS_2$ which forms a null curve in $\mathbb{L}(\mathbb{R}^3)$.
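The roots and the range (4.1) can be checked numerically: for sample values satisfying (4.1) (the choices of $r_1$, $r_2$, $l$, $\theta$, $\phi$ below are arbitrary), the root is unimodular, solves the quadratic, and the reconstructed oriented line lies on both tangent hypersurfaces:

```python
import numpy as np

r1, r2, l = 2.0, 1.0, 2.5            # arbitrary radii and separation with l > r1 - r2
theta, phi = 0.7, 1.1                # arbitrary angles
u = l * np.sin(phi)
assert r1 - r2 <= u <= r1 + r2       # the range (4.1)

K = (r1**2 - r2**2 + u**2) / (2 * u * r1)
w = -K + 1j * np.sqrt(1 - K**2)      # candidate root e^{i(psi - theta)}

# w solves  l r1 sin(phi) w^2 + (l^2 sin^2(phi) + r1^2 - r2^2) w + l r1 sin(phi) = 0
assert abs(u * r1 * w**2 + (u**2 + r1**2 - r2**2) * w + u * r1) < 1e-9
assert abs(abs(w) - 1) < 1e-12       # unimodular, so psi is a genuine angle

# reconstruct the oriented line and check it lies on both tangent hypersurfaces
R = np.tan(phi / 2)
xi = R * np.exp(1j * theta)
eta = 0.5 * (1 + R**2) * r1 * w * np.exp(1j * theta)      # e^{i psi} = w e^{i theta}
assert abs(2 * abs(eta) / (1 + abs(xi)**2) - r1) < 1e-9            # on H(S_1)
assert abs(2 * abs(eta + l * xi) / (1 + abs(xi)**2) - r2) < 1e-9   # on H(S_2)
```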
+
+On the other hand, the surfaces $S_1$ and $S_2$ intersect for $r_1 - r_2 < l \le r_1 + r_2$ and the tangent lines (with both orientations) to the intersection circle are contained in both $\mathcal{H}(S_1)$ and $\mathcal{H}(S_2)$. In fact, $\mathcal{H}(S_1) \cap \mathcal{H}(S_2) = T^2$ in this range and the intersection set is connected.
+
+One of the circle factors in the torus $T^2 = S^1 \times S^1$ is parameterized by $\theta$, which generates rotations about the axis of symmetry through the centres of the spheres. The second circle factor comes about by fixing $\theta$ and varying $\phi$ from $\phi = \sin^{-1}((r_1 - r_2)/l)$ (one solution), through $\sin^{-1}((r_1 - r_2)/l) < \phi < \sin^{-1}((r_1 + r_2)/l)$ (two solutions), to $\phi = \sin^{-1}((r_1 + r_2)/l)$ (one solution).
+
+This last circle can be identified with the intersection of the boundary of the image of $S_2$ with the horizon, as seen by someone standing on $S_1$. As the person moves on $S_1$ towards $S_2$ these points of intersection trace out a circle, starting with a single point of internal tangency (when $S_2$ is below the horizon), then two points as $S_2$ rises over $S_1$, ending with a single point of external tangency to the horizon.
+
+Finally, if $l > r_1 + r_2$, the intersection set has two connected components, both tori, which are related by flipping the orientation of the common tangent lines.
+
+Let us now compute the induced metric on the solution set when $l > r_1 - r_2$, i.e. the intersection tori. The torus is given by local sections in polar coordinates
+
+$$ \xi = Re^{i\theta} \qquad \eta = \frac{1}{2}(1+R^2)r_1(-K \pm \sqrt{1-K^2}i)e^{i\theta}, $$
+
+with
+
+$$ K = \frac{(r_1^2 - r_2^2)(1 + R^2)^2 + 4l^2 R^2}{4lr_1 R(1 + R^2)}. $$
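Under the substitution $R = \tan(\phi/2)$ this expression agrees with the earlier formula $K(\phi) = (r_1^2 - r_2^2 + l^2\sin^2\phi)/(2lr_1\sin\phi)$; a quick numerical check with arbitrary sample values:

```python
import numpy as np

def K_phi(r1, r2, l, phi):
    # K as a function of the spherical angle phi
    return (r1**2 - r2**2 + l**2 * np.sin(phi)**2) / (2 * l * r1 * np.sin(phi))

def K_R(r1, r2, l, R):
    # K as a function of R = tan(phi/2)
    return ((r1**2 - r2**2) * (1 + R**2)**2 + 4 * l**2 * R**2) / (4 * l * r1 * R * (1 + R**2))

# the two expressions agree for any phi in (0, pi), since sin(phi) = 2R/(1 + R^2)
for phi in np.linspace(0.2, 2.9, 10):
    assert np.isclose(K_phi(2.0, 1.0, 2.5, phi), K_R(2.0, 1.0, 2.5, np.tan(phi / 2)))
```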
+
+This surface has a complex point precisely at the points where [19]
+
+$$ \sigma = -\partial_{\xi}\bar{\eta} = -\frac{1}{2}e^{-i\theta}\left(\partial_{R} - \frac{i}{R}\partial_{\theta}\right)\bar{\eta} = 0. $$
+
+For the torus, a computation shows that
+
+$$ |\sigma|^2 = \frac{r_1^2 r_2^2 l^2 \cos^2 \phi}{(l^2 \sin^2 \phi - (r_1 - r_2)^2) ((r_1 + r_2)^2 - l^2 \sin^2 \phi)}. $$
+
+On the other hand, the pullback of the symplectic form (3.4) to a section is
+
+$$ \Omega|_{\Sigma} = \lambda \frac{2i\,d\xi \wedge d\bar{\xi}}{(1+\xi\bar{\xi})^2} = \mathrm{Im} \left[ \partial_{\xi} \left( \frac{\eta}{(1+\xi\bar{\xi})^2} \right) \right] 2i\,d\xi \wedge d\bar{\xi}, $$
+
+which in our case is
+
+$$ \lambda = - \frac{l(l^2 \sin^2 \phi - r_1^2 - r_2^2) \cos \phi}{2[(l^2 \sin^2 \phi - (r_1 - r_2)^2) ((r_1 + r_2)^2 - l^2 \sin^2 \phi)]^{1/2}}. $$
+
+Thus the determinant of the metric induced on $T^2$ by the neutral metric (3.3) is [19]
+
+$$ \det G|_{T^2} = \lambda^2 - |\sigma|^2 = -\frac{1}{4}l^2 \cos^2 \phi. $$
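The cancellation behind this identity (the numerator of $\lambda^2 - |\sigma|^2$ factors as minus the common denominator) can be confirmed numerically from the two displayed expressions; the sample values below are arbitrary points of the admissible range:

```python
import numpy as np

def det_G(r1, r2, l, phi):
    """lambda^2 - |sigma|^2, computed from the displayed formulas on the torus."""
    u = l**2 * np.sin(phi)**2
    D = (u - (r1 - r2)**2) * ((r1 + r2)**2 - u)    # common denominator, positive in range
    sigma_sq = r1**2 * r2**2 * l**2 * np.cos(phi)**2 / D
    lam = -l * (u - r1**2 - r2**2) * np.cos(phi) / (2 * np.sqrt(D))
    return lam**2 - sigma_sq

for (r1, r2, l, phi) in [(2.0, 1.0, 2.5, 1.1), (3.0, 1.5, 5.0, 0.6), (1.0, 1.0, 1.5, 1.2)]:
    assert r1 - r2 < l * np.sin(phi) < r1 + r2     # strictly inside the range (4.1)
    assert np.isclose(det_G(r1, r2, l, phi), -0.25 * l**2 * np.cos(phi)**2)
```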
+
+If $r_1 - r_2 < l < r_1 + r_2$ then the surfaces $S_1$ and $S_2$ intersect on a circle and the tangent lines to this circle are common to $\mathcal{H}(S_1)$ and $\mathcal{H}(S_2)$. These lines are horizontal, so $\phi = \pi/2$ along them. Thus, when the surfaces intersect, the tangent hypersurfaces intersect along a torus that is Lorentz and totally real, except along a curve of complex points where the induced metric is degenerate.
+
+If $l > r_1 + r_2$ then the surfaces $S_1$ and $S_2$ do not intersect and $\phi \neq \pi/2$, and so the tangent hypersurfaces intersect along a pair of tori (opposite orientations on the same lines) that are totally real and Lorentz.
+
+This completes the proof of Theorem 1.3.
+
+**5. Neutral causal topology**
+
+This section contains a discussion of the preceding constructions, with a view to explaining the motivation behind them, putting them in a broader context and indicating their possible applications to 4-manifold topology.
+
+Neutral metrics offer us geometric tools that are sensitive to the underlying topology, at the level of the metric, rather than its curvature. This can be seen from a geometric, analytic and topological perspective. While the individual scenarios are classical in some sense, it is their concatenation that is of particular interest.
+
+As the discussion necessitates spanning a number of areas, the bibliography will be selective rather than exhaustive. Further aspects of neutral metrics which we do not discuss can be found for example in [5] [8] [28] [29] and references therein.
+
+A fundamental observation is that, point-wise, the null cone of a neutral metric is a cone over a torus. Since the cross-section is not simply connected, under the right circumstances, it is possible to encode topological information in the null cone of a neutral metric.
+
+Put another way, the metric must fit with the underlying 4-manifold topology and so, for example, there are obstructions to its existence. For compact smooth 4-manifolds the matter is clarified by the following theorem, which uses Hirzebruch and Hopf's 1950s work on plane fields [22]:
+
+**Theorem 5.1.** [26] [32] Let $\mathbb{N}^4$ be a smooth compact 4-manifold admitting a neutral metric. Then
+
+$$ \chi(\mathbb{N}^4) + \tau(\mathbb{N}^4) = 0 \mod 4 \quad \text{and} \quad \chi(\mathbb{N}^4) - \tau(\mathbb{N}^4) = 0 \mod 4, $$
+
+where $\chi(\mathbb{N}^4)$ is the Euler number and $\tau(\mathbb{N}^4)$ the Hirzebruch signature of $\mathbb{N}^4$.
+
+Moreover, if $\mathbb{N}^4$ is simply connected, these conditions are sufficient for the existence of a neutral metric.
+
+Thus, neither $\mathbb{S}^4$ nor $\mathbb{CP}^2$ admit a neutral metric, while K3 manifolds do. If one demands further that the neutral metric is Kähler with respect to some compatible complex structure, then the list of compact manifolds becomes smaller [35]. Thus, a K3 manifold admits a neutral metric, but not a neutral Kähler metric.
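The parity conditions of Theorem 5.1 are easy to evaluate on these examples (the Euler numbers and signatures used below are the standard values):

```python
def admits_neutral_metric(chi, tau):
    """Necessary condition of Theorem 5.1 (sufficient when the manifold is simply connected)."""
    return (chi + tau) % 4 == 0 and (chi - tau) % 4 == 0

# (Euler number, signature) for some standard closed 4-manifolds
examples = {"S^4": (2, 0), "CP^2": (3, 1), "K3": (24, -16), "S^2 x S^2": (4, 0)}

assert not admits_neutral_metric(*examples["S^4"])    # chi + tau = 2
assert not admits_neutral_metric(*examples["CP^2"])   # chi - tau = 2
assert admits_neutral_metric(*examples["K3"])         # 24+16 and 24-16 are divisible by 4
assert admits_neutral_metric(*examples["S^2 x S^2"])
```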
+
+One motivation for this paper is to consider the extension of the above
+to 4-manifolds with boundary and to ask: what does the null boundary
+geometry see of the interior of a neutral 4-manifold?
+
+Similar to the holographic principle, but predating it by 60 years, the X-ray transform, or strictly speaking its symmetric reduction to the Radon transform, is used every day in hospitals' CAT-scans and achieves this feat at the level of functions [10].
+
+That is, given a real-valued function on $L(\mathbb{R}^3)$ (the difference between intensities along a ray, or oriented geodesic) reconstruct a function on $\mathbb{R}^3$ (the material density). The compatibility requirement is that the function on $L(\mathbb{R}^3)$ satisfy the flat ultra-hyperbolic equation [25].
+
+Hilbert and Courant showed that the appropriate Cauchy hypersurface for the ultra-hyperbolic equation is null, as otherwise there are consistency conditions on the Cauchy data in the initial value formulation [7]. Thus we encounter our first evidence, from analysis, that null boundaries are natural for neutral metrics.
+
+The nullity of the boundary introduces more structure than in the Riemannian case, in particular, a null foliation. Moreover, the neutral metric has more structure again than a Lorentz metric with null boundary, namely, two distributions of totally null planes. It is to this structure that we look for echoes of the interior geometry.
+
+It also follows from Hirzebruch and Hopf that all open 4-manifolds admit a neutral metric, and so the question arises about the compactification of such neutral 4-manifolds. Since the null cone is preserved by conformal transformations, it is natural then to look at the conformal compactification of open neutral 4-manifolds. In Section 2 we investigated the simplest case, neutral flat 4-space.
+
+Note that in this paper, the conformal compactification has non-empty boundary, in contrast to earlier consideration of neutral conformal compactifications into a 4-manifold without boundary [39].
+
+Conformal compactifications of both Lorentz and Riemannian cases have been considered in some detail (e.g. [3] [34]). In the Riemannian case it is natural to assume that the gradient of the conformal factor is nowhere zero on the boundary, while in the Lorentzian case it vanishes at points (at $i^0, i^\pm$ [34]). In the neutral case we consider, it vanishes along a link in the boundary (property (iv) in Theorem 1.1) and this is the manner in which the geometric topology intervenes.
+
+For a flat 4-dimensional universe with two times, the spacelike and time-
+like infinities are Hopf linked in the boundary. This is the simplest situation
+and one would expect these linked infinities to also link to whatever topology
+the boundary has.
+
+What is more, the boundary is foliated by Lorentz tori about the link. This feature persists for other neutral conformal compactifications and is amenable to surgery along the link in a manner that preserves the null cone structure.
+
+Indeed, it should be possible to do surgery at infinity and preserve not just the conformal structure, but certain curvature conditions. What precisely the conditions are depends on the amount of flexibility required: we can impose a stiff restriction such as Kähler, or a softer one such as anti-self-dual and scalar flat.
+
+Certainly all of the examples considered in this paper are conformally flat and scalar flat, and this is a natural class in which to do the surgery. In fact, by 2-handle attachments to the 4-ball one can generate the conformal compactifications of all of the oriented geodesic spaces $L(M^3)$. We postpone the details of this aspect to a later paper.
+
+The fact that the $\alpha$-planes and $\beta$-planes are integrable in the boundary gives a sense in which the conformal compactification is asymptotically well-behaved. This can be traced back to the fact that $\mathbb{R}^{2,2}$ is 2-connected at infinity and therefore the neutral metric has nothing to hang on to at infinity. This observation suggests the use of neutral metrics to detect topology at infinity via their conformal compactifications.
+
+This example represents the 0-handle in a neutral version of Kirby calculus [17] [27] and therefore acts as a basis for handle-body constructions. The degenerate Lorentz structure on the boundary has preferred curves along which to attach 2-handles and the Lorentz tori give framings in the right circumstances.
+
+In contrast, in the tangent hypersurfaces of Section 3 the $\alpha$-planes and $\beta$-planes were found to be contact. The boundary is not a 3-sphere but a circle bundle of Euler number 2 over a 2-sphere. The fibres are null, and the totally null planes rotate as one traverses a fibre.
+
+The neutral 4-manifolds bounded by the tangent hypersurface and its generalisation, the constant angle hypersurface, were introduced in [20] to prove a global version of a classical result of Joachimsthal.
+
+The study of Legendrian knots and their invariants is a well-established area in symplectic topology [13] [14]. Generally the knots are in the 3-sphere, but many results extend to more general contact 3-manifolds. In Section 3, we have a non-simply connected 3-manifold with a pair of independent contact structures.
+
+A curve $C$ on the null boundary $\mathcal{H}(S)$ corresponds to an oriented tangent line field along a curve $c$ in $S$. The classical Thurston-Bennequin index and the rotation index of the curve $C$ can be expressed in terms of the twisting of the oriented line and the rotation of $c$ in $S$ through the neutral structure.
+
+The fact that the Reeb vector field is given by the normal lines to the geodesics on the surface is important. It means that *Reeb chords* (Reeb flow-lines that begin and end on a Legendrian knot) minimize the induced two-point distance function on the knot. Reeb chords play a critical role in knot contact homology [11], as they represent crossings in the Legendrian projection. Further details of these neutral knot invariants will appear in a future paper.
+
+Many peculiarities of 4-dimensional manifolds (as distinct from higher dimensions) arise because generic 2-discs are only immersed rather than embedded, and one loses the ability to contract loops across such discs. Thus, attempts to exploit assumptions of simple connectedness become more difficult and higher-dimensional techniques fail.
+
+The local model of these double points and their normal neighbourhoods plays a key role in our understanding of 4-manifolds (or lack thereof). In particular, the intersections of the boundaries of the normal neighbourhoods are tori, called *distinguished* in [27] and *characteristic* in [21]. It is this basic model for which we set out to find a neutral geometric interpretation in Section 4.
+
+In the first instance, the intersecting discs should be flexible enough to be pushed around and stretched, for example in Casson's famous "finger-move" [21]. In the geometric category, Lagrangian discs are certainly flexible enough for this task since they satisfy the h-principle [12].
+
+The boundary of a normal neighbourhood of these Lagrangian discs can be identified with the tangent hypersurface introduced in Section 3. The work of Casson in the 1970s involved repeated attempts to remove unwanted intersections by adding thickened discs that cancel the double point. The issue, peculiar to dimension 4, is that such discs may themselves have double points, leading to an iterative chain of operations seeking to push the double point out to infinity.
+
+While Casson achieved this at the homotopy level, giving rise to *flexible handles*, it certainly fails in the smooth category due to implications of the work of Donaldson [9]. A motivation for the present work is to explore this gap by geometrizing the boundary with a neutral metric and carrying it along in this iterative construction.
+
+In Section 4 we found a geometric model for the intersection torus of
+a double point. Similar natural neutral constructions exist for the other
+elements of the Casson handle, such as the Whitehead double, although in
+the tangent model a twisted version is more natural. Further details of these
+constructions will be given in a future paper.
+
+**References**
+
+[1] ALEKSEEVSKY, D.V.; GUILFOYLE, B.; KLINGENBERG, W. On the geometry of spaces of oriented geodesics. Ann. Global Anal. Geom. **40** (2011), 389–409. MR2836829, Zbl 1244.53046, doi: 10.1007/s10455-011-9261-5. 491
+
+[2] ALEKSEEVSKY, D.V.; GUILFOYLE, B.; KLINGENBERG, W. Erratum to: On the geometry of spaces of oriented geodesics. Ann. Global Anal. Geom. **50** (2016), 97–99. MR3521560, Zbl 06618586, doi: 10.1007/s10455-016-9515-3. 491
+
+[3] ANDERSON, M.T. $L^2$ curvature and volume renormalization of AHE metrics on 4-manifolds. Math. Res. Lett. **8** (2001), no. 2, 171–188. MR1825268, Zbl 0999.53034, doi: 10.4310/MRL.2001.v8.n2.a6. 478, 502
+
+[4] ANCIAUX, H. Space of geodesics of pseudo-Riemannian space forms and normal congruences of hypersurfaces. *Trans. Amer. Math. Soc.* **366** (2014), no. 5, 2699–2718. MR3165652, Zbl 1288.53032, doi: 10.1090/S0002-9947-2013-05972-7. 490, 491
+
+[5] BROZOS-VÁZQUEZ, M.M.; GARCÍA-RIO, E.; GILKEY, P.; VÁZQUEZ-LORENZO, R. Compact Osserman manifolds with neutral metric. *Results Math.* **59** (2011), no. 3-4, 495-506. MR2793470, Zbl 1223.53050, doi:10.1007/s00025-011-0116-y. 501
+
+[6] CASTRO, I.; URBANO, F. Minimal Lagrangian surfaces in $S^2 \times S^2$. *Comm. Anal. Geom.* **15** (2007), no. 2, 217-248. MR2344322, Zbl 1185.53063, doi:10.4310/CAG.2007.v15.n2.a1. 491
+
+[7] COURANT, R.; HILBERT, D. Methods of Mathematical Physics, Vol. 2. *Wiley Classics Edition, New York*, 1989. xxii, 830 pp, ISBN: 978-0471504399, MR1013360, Zbl 0729.35001. 502
+
+[8] DAVIDOV, J.; GRANTCHAROV, G.; MUSHKAROV, O. Geometry of neutral metrics in dimension four. *Proceedings of the Thirty Seventh Spring Conference of the Union of Bulgarian Mathematicians*, Borovets, April 2-6, 2008. arXiv:0804.2132. 501
+
+[9] DONALDSON, S.K.; KRONHEIMER, P.B. The Geometry of Four-Manifolds. *Oxford University Press, UK*, 1990. ix, 440 pp, ISBN: 978-0198502692. MR1079726, Zbl 0904.57001. 504
+
+[10] DUNAJSKI, M.; WEST, S. Anti-self-dual conformal structures in neutral signature, in Recent Developments in Pseudo-Riemannian Geometry, D.V. Alekseevsky and H. Baum (Eds), *Eur. Math. Soc., Zurich*, ESI Lect. Math. Phys., (2008) 113–148. MR2436230, Zbl 1158.53014. 502
+
+[11] EKHOLM, T.; ETNYRE, J.B.; NG, L.; SULLIVAN, M.G. Knot contact homology. *Geom. Topol.* **17** (2013), no. 2, 975–1112. MR3070519, Zbl 1267.53095, doi:10.2140/gt.2013.17.975. 503
+
+[12] ELIASHBERG, Y.; MISHACHEV, N.M. Introduction to the h-Principle. *American Mathematical Society, Providence, RI*, 2002. 206 pp, ISBN: 978-0-8218-3227-1. MR1909245, Zbl 1008.58001. 504
+
+[13] ETNYRE, J.B.; HONDA, K. Knots and contact geometry I: torus knots and the figure eight knot. *J. Symplectic Geom.* **1** (2001), no. 1, 63–120. MR1959579, Zbl 1037.57021. 503
+
+[14] ETNYRE, J.B.; HONDA, K. On connected sums and Legendrian knots. *Adv. Math.* **179** (2003), no. 1, 59–74. MR2004728, Zbl 1047.57006, doi:10.1016/S0001-8708(02)00027-0. 503
+
+[15] GEORGIOU, N.; GUILFOYLE, B. On the space of oriented geodesics of hyperbolic 3-space. *Rocky Mountain J. Math.* **40** (2010), no. 4, 1183–1219. MR2718810, Zbl 1202.53045. 479, 484, 491, 497
+
+[16] GEORGIOU, N.; GUILFOYLE, B.; KLINGENBERG, W. Totally null surfaces in neutral Kähler 4-manifolds. *Balkan J. Geom. Appl.* **21** (2016), no. 1, 27–41. MR3511135, Zbl 1356.53027, https://www.emis.de/journals/BJGA/v21n1/B21-ige-a87.pdf. 481, 486
+
+[17] GOMPF, R.E.; STIPSICZ, A.I. 4-Manifolds and Kirby Calculus, Vol. 20. *American Mathematical Society, Providence, RI*, 1999. 558 pp. ISBN: 978-0821809945. MR1707327, Zbl 0933.57020. 479, 503
+
+[18] GROMOV, M. Pseudo holomorphic curves in symplectic manifolds. *Invent. Math.* **82** (1985), no. 2, 307–347. MR0809718, Zbl 0592.53025, doi:10.1007/BF01388806. 481
+
+[19] GUILFOYLE, B.; KLINGENBERG, W. An indefinite Kähler metric on the space of oriented lines. *J. London Math. Soc.* **72** (2005), no. 2, 497–509. Zbl 1084.53017, doi:10.1112/S0024610705006605. 479, 484, 485, 497, 500
+
+[20] GUILFOYLE, B.; KLINGENBERG, W. A global version of a classical result of Joachimsthal. *Houston J. Math.* **45** (2019), no. 2, 455–467. MR3995479, Zbl 07103362, arXiv:1404.5509. 484, 487, 503
+
+[21] GUILLOU, L.; MARIN, A. A la recherche de la topologie perdue. *Birkhauser, Boston*, 1986. 244 pp. ISBN: 978-0817633295. MR1001966, Zbl 0597.57001, doi: 10.1002/bimj.4710290505. 504
+
+[22] HIRZEBRUCH, F.; HOPF, H. Felder von Flächenelementen in 4-dimensionalen Manigfaltigkeiten. *Math. Ann.* **136** (1958), 156–172. MR0100844, Zbl 0088.39403, doi: 10.1007/BF01362296. 479, 501
+
+[23] HITCHIN, N.J. Monopoles and geodesics. *Comm. Math. Phys.* **83** (1982), no. 4, 579–602. MR0649818, Zbl 0502.58017, doi: 10.1007/BF01208717. 485
+
+[24] HONDA, A. Note on the space of oriented geodesics in the three-sphere. *RIMS Kokyuroku Bessatsu* **38** (2013), 169–187. MR3156909, Zbl 1293.53052. 479, 484, 491
+
+[25] JOHN, F. The ultrahyperbolic wave equation with four independent variables. *Duke Math. J.* **4** (1938), no. 2, 300–322. MR1546052, Zbl 0019.02404, doi: 10.1215/S0012-7094-38-00423-5. 502
+
+[26] KAMADA, H. Self-dual Kähler metrics of neutral signature on complex surfaces. *Tohoku Mathematical Publications* **24** (2002), no. 24, 1–94. MR1938369, Zbl 1016.53028. 501
+
+[27] KIRBY, ROBION C. The topology of 4-manifolds. Lecture Notes in Mathematics, 1374. Springer-Verlag, Berlin, 1989. vi+108 pp. ISBN: 3-540-51148-2. MR1001966, Zbl 0668.57001, doi: 10.1007/BFb0089031. 479, 503, 504
+
+[28] LAW, P. Neutral Einstein metrics in four dimensions. *J. Math. Phys.* **32** (1991), no. 11, 3039–3042. MR1131685, Zbl 0749.53043, doi: 10.1063/1.529048. 501
+
+[29] LAW, P. Classification of the Weyl curvature spinors of neutral metrics in four dimensions. *J. Geom. Phys.* **56** (2006), no. 10, 2093–2108. MR2241739, Zbl 1110.53016, doi: 10.1016/j.geomphys.2005.11.008. 501
+
+[30] LEBRUN, C.; MASON, L. Nonlinear gravitons, null geodesics and holomorphic discs. *Duke Math. J.* **136** (2007), no. 2, 205–273. MR2286630, Zbl 1113.53032, arXiv:math/0504582. 481
+
+[31] LEICHTWEISS, K. Zur Riemannschen geometrie in Grassmannschen manigfaltigkeiten. *Math. Z.* **76** (1961), no. 1, 334–366. MR0126808, Zbl 0113.37102, doi: 10.1007/BF01210982. 491
+
+[32] MATSUSHITA, Y. Fields of 2-planes and two kinds of almost complex structures on compact 4-dimensional manifolds. *Math. Z.* **207** (1991), no. 1, 281–291. MR1109666, Zbl 0724.57020, doi: 10.1007/BF02571388. 479, 501
+
+[33] MATSUSHITA, Y.; LAW, P. Hitchin-Thorpe type inequalities for pseudo-Riemannian 4-manifolds of metric signature (+ + --). *Geom. Dedicata* **87** (2001), no. 1-3, 65–89. MR1866843, Zbl 1018.53031, doi: 10.1023/A:1012002211862.
+
+[34] PENROSE, R.; RINDLER, W. Spinors and space-time: Volume 2, Spinor and twistor methods in space-time geometry. Cambridge University Press, UK 1988. 512 pp, ISBN: 978-0521347860. MR0944085, Zbl 0591.53002. 478, 502
+
+[35] PETEAN, J. Indefinite Kaehler-Einstein metrics on compact complex surfaces. *Commun. Math. Phys.* **189** (1997), no. 1, 227–235. MR1478537, Zbl 0898.53046, doi: 10.1007/s002200050197. 501
+
+[36] SALVAI, M. On the geometry of the space of oriented lines of Euclidean space. *Manuscripta Math.* **118** (2005), no. 2, 181–189. MR2177684, Zbl 1082.53049, doi: 10.1007/s00229-005-0576-z. 479, 484, 485
+
+[37] SALVAI, M. On the geometry of the space of oriented lines of hyperbolic space. *Glasg. Math. J.* **49** (2007), no. 2, 357–366. MR2347266, Zbl 1130.53013, doi: 10.1017/S0017089507003710. 479, 484
+
+[38] WEIERSTRASS, K. Untersuchungen über die Flächen, deren mittlere Krümmung überall gleich Null ist. *Monatsber. Akad. Wiss. Berlin* (1866), 612–625. 485
+
+[39] WOODHOUSE, N.M.J. Contour integrals for the ultrahyperbolic wave equation. *Proc. R. Soc. Lond. Ser. A Math. Phys. Eng. Sci.* **438** (1992), no. 1902, 197–205. MR1174728, Zbl 0803.58053, doi: 10.1098/rspa.1992.0102. 502
+
+(Nikos Georgiou) DEPARTMENT OF MATHEMATICS, WATERFORD INSTITUTE OF TECHNOLOGY, WATERFORD, CO. WATERFORD, IRELAND
+ngeorgiou@wit.ie
+
+(Brendan Guilfoyle) SCHOOL OF SCIENCE, TECHNOLOGY, ENGINEERING AND MATHEMATICS, INSTITUTE OF TECHNOLOGY, TRALEE, CLASH, TRALEE, CO. KERRY, IRELAND
+brendan.guilfoyle@ittralee.ie
+
+This paper is available via http://nyjm.albany.edu/j/2021/27-20.html.
+
+# THE SECOND BOUNDARY VALUE PROBLEM OF RIEMANN'S TYPE FOR BIANALYTICAL FUNCTIONS WITH DISCONTINUOUS COEFFICIENTS
+
+I. B. BOLOTIN
+
+Smolensk State Pedagogical University
+Przevalskogo 4, 214000 Smolensk, Russia
+E-mail: ivan_bolotin@list.ru
+
+Received October 13, 2003; revised February 5, 2004
+
+**Abstract.** The paper is devoted to the investigation of one of the basic boundary value problems of Riemann type for bianalytical functions with discontinuous coefficients. A constructive method is developed for the solution of the problem in the unit circle. It is shown that the solution of the problem under consideration reduces to the consecutive solution of two Riemann boundary value problems for analytical functions in the unit circle. An example is also constructed.
+
+**Key words:** bianalytical function, boundary value problem, plane with slots, index
+
+## 1. Introduction
+
+Let $L = \{t : |t| = 1\}$, $D^+ = \{z : |z| < 1\}$ and $D^- = \overline{\mathbb{C}} \setminus (D^+ \cup L)$.
+
+Let $G_k(t)$ ($k = 0, 1$) be functions given on the contour $L$ which satisfy the Hölder condition everywhere on $L$ except at a finite number of points, where they have jump discontinuities, and $G_k(t) \neq 0$ on the contour. We shall also assume that the function $G_0(t)$ has a derivative satisfying the Hölder condition, except at a finite number of points where it may have jump discontinuities. Hereinafter, following N.I. Muskhelishvili (see, for example, [2]), we call the points of discontinuity of the functions $G_0(t)$, $G'_0(t)$ and $G_1(t)$ knots, and the remaining points of the contour $L$ ordinary. Moreover, we count all points of discontinuity of the function $G_0(t)$ and of its derivative among the knots of the function $G_1(t)$.
+
+In what follows we generally use the terms and definitions adopted in [3].
+
+**DEFINITION 1.** We say that a bianalytical function $F^\pm(z)$ in the domain $D^\pm$ belongs to the class $A_2(D^\pm) \cap I^{(2)}(L)$ if it extends continuously to the contour $L$ together with the partial derivatives $\frac{\partial^{\alpha+\beta} F^{\pm}(z)}{\partial z^{\alpha} \partial \bar{z}^{\beta}}$ ($\alpha = 0, 1$; $\beta = 0, 1$), and the boundary values of this function and of all the specified derivatives satisfy the Hölder condition everywhere on $L$, except possibly at the knots, where they may become infinite of integrable order when $\alpha + \beta < 2$.
+
+It is required to find all piecewise bianalytical functions $F(z) = \{F^{+}(z), F^{-}(z)\}$ belonging to the class $A_2(D^{\pm}) \cap I^{(2)}(L)$, vanishing at infinity, bounded near the knots of the contour and satisfying at all ordinary points of $L$ the following boundary conditions:
+
+$$F^{+}(t) = G_{0}(t)F^{-}(t) + g_{0}(t), \qquad (1.1)$$
+
+$$\frac{\partial F^{+}(t)}{\partial n_{+}} = -G_{1}(t)\frac{\partial F^{-}(t)}{\partial n_{-}} - g_{1}(t), \qquad (1.2)$$
+
+where $\frac{\partial}{\partial n_{+}}$ ($\frac{\partial}{\partial n_{-}}$) denotes the derivative along the interior (exterior) normal to the contour $L$, and $g_k(t)$ ($k=0,1$) are functions given on $L$ of the class $H^{(1-k)}(L)$, with $g_0(t) = (t-c)^{\gamma_c} g^*(t)$, where $c$ is any knot of the function $G_0(t)$ and the $\gamma_c > 0$ are well-defined numbers.
+
+Here, in equality (1.2), the factor $(-1)$ at $G_1(t)$ and $g_1(t)$ is introduced for later convenience.
+
+We shall call the problem formulated above *the second basic boundary value problem of Riemann type* for bianalytical functions with discontinuous coefficients in the unit circle, or, in short, *the problem $R_{2,2}$*; the corresponding homogeneous problem ($g_0(t) \equiv g_1(t) \equiv 0$) will be called *the problem $R_{2,2}^{0}$*.
+
+Let us note that the problem $R_{2,2}$, stated by F. D. Gakhov as one of the basic boundary value problems for bianalytical functions (see, for example, [1], p. 316), was investigated in detail in the case of continuous coefficients and smooth closed contours in the work of K. M. Rasulov (see [3]).
+
+In the statement given above, the problem $R_{2,2}$ is investigated in the present work for the first time.
+
+## 2. On the Solution of the Problem $R_{2,2}$
+
+It is known (see [1, 3]) that any piecewise bianalytical function $F(z)$ vanishing at infinity with line of jumps $L$ can be represented in the form:
+
+$$F(z) = \begin{cases} F^{+}(z) = \varphi_{0}^{+}(z) + \bar{z}\varphi_{1}^{+}(z), & z \in D^{+}, \\ F^{-}(z) = \varphi_{0}^{-}(z) + \bar{z}\varphi_{1}^{-}(z), & z \in D^{-}, \end{cases} \qquad (2.1)$$
+
+where $\varphi_k^\pm(z)$ are functions analytical in the domains $D^\pm$ (the analytical components of the piecewise bianalytical function), for which the following conditions are fulfilled:
+
+$$\Pi\{\varphi_k^-, \infty\} \geq 1+k, \quad k=0,1;$$
+
+here $\Pi\{\varphi_k^-, \infty\}$ means the order of the function $\varphi_k^-(z)$ in the point $z = \infty$.
+We seek the solution of the problem $R_{2,2}$ in the form
+
+$$F(z) = f_0(z) + (z\bar{z} - 1)f_1(z). \quad (2.2)$$
+
+Then the functions $f_k(z)$ ($k = 0, 1$) will be connected with analytical components of the required bianalytical function $F(z)$ by the formulas:
+
+$$\varphi_0(z) = f_0(z) - f_1(z), \qquad \varphi_1(z) = zf_1(z). \quad (2.3)$$
+
+As is known (see [1], p. 304),
+
+$$\frac{\partial}{\partial n_{\pm}} = \pm i \left( t' \frac{\partial}{\partial t} - \bar{t}' \frac{\partial}{\partial \bar{t}} \right), \quad (2.4)$$
+
+so, taking into account (2.2) and the fact that the equality $\bar{t} = \frac{1}{t}$ holds on $L$, the boundary conditions (1.1) and (1.2) can be rewritten, respectively, as:
+
+$$f_0^+(t) = G_0(t)f_0^-(t) + g_0(t), \quad (2.5)$$
+
+$$f_1^+(t) = G_1(t)f_1^-(t) + \frac{1}{2}\left(-t\frac{df_0^+(t)}{dt} + tG_1(t)\frac{df_0^-(t)}{dt} + g_1(t)\right). \quad (2.6)$$
+
+The equalities (2.5) and (2.6) represent boundary conditions of usual Riemann's problems for analytical functions with discontinuous coefficients in the unit circle (see [1] or [2]).
+
+Thus, in essence, the solution of the initial problem $R_{2,2}$ reduces to the sequential solution of the two auxiliary Riemann problems (2.5) and (2.6) in classes of piecewise analytical functions with line of jumps $L$. But since in the problem $R_{2,2}$ we seek solutions that are bounded near the knots of the contour and vanish at infinity, it becomes necessary to choose definite classes of analytical functions when solving the auxiliary problems (2.5) and (2.6). We therefore first determine in which classes the solutions of the boundary value problems (2.5) and (2.6) must be sought.
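The first of these reductions is immediate: since $t\bar{t} = 1$ at every point of $L$, the representation (2.2) gives

```latex
F^{\pm}(t) = f_0^{\pm}(t) + (t\bar{t} - 1)\, f_1^{\pm}(t) = f_0^{\pm}(t),
\qquad t \in L ,
```

so (1.1) passes directly into (2.5); applying the normal derivatives (2.4) to the same representation produces the terms $t\,\frac{df_0^{\pm}(t)}{dt}$ appearing in (2.6).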
+
+From the equalities (2.3) we see that the function $f_0^-(z)$ must have a zero of order not less than one at infinity, and $f_1^-(z)$ a zero of order not less than three.
+
+Let us study the behaviour of the function $F(z)$ near the knots of the contour $L$. Let $c$ be any knot; then $\bar{c}c = 1$ and $|c| = 1$.
+
+We have the following chains of inequalities:
+
+$$
+\begin{align*}
+|F(z)| &= |f_0(z) + (z\bar{z} - 1)f_1(z)| \le |f_0(z)| + |f_1(z)||z\bar{z} - 1| \\
+&= |f_0(z)| + |f_1(z)||(z - c + c)(\bar{z} - \bar{c} + \bar{c}) - 1| \\
+&\le |f_0(z)| + 2|f_1(z)||z - c| + |f_1(z)||z - c|^2; \tag{2.7}
+\end{align*}
+$$
+
+$$
+\begin{align*}
+|F(z)| &= |f_0(z) + (z\bar{z} - 1)f_1(z)| \ge |f_0(z)| - |f_1(z)||z\bar{z} - 1| \\
+&= |f_0(z)| - |f_1(z)||(z - c + c)(\bar{z} - \bar{c} + \bar{c}) - 1| \\
+&\ge |f_0(z)| - 2|f_1(z)||z - c| - |f_1(z)||z - c|^2. \tag{2.8}
+\end{align*}
+$$
+
+Thus, in order for the function $F(z)$ to be bounded near the knots of the contour $L$, it is necessary and sufficient that the function $f_0(z)$ be bounded and that the function $f_1(z)$ admit the estimate:
+
+$$|f_1(z)| \le \frac{\text{const}}{|z-c|^\alpha}, \quad 0 \le \alpha < 1. \qquad (2.9)$$
+
+Indeed, if the function $f_0(z)$ is bounded near $c$ and the function $f_1(z)$ admits the estimate (2.9), then from the inequalities (2.7) it follows that the required bianalytical function $F(z)$ is bounded in a neighbourhood of the knot $c$.
+
+Conversely, if a function $F(z)$ of the class $A_2(D^\pm) \cap I^{(2)}(L)$ is bounded near $c$, then from the inequalities (2.8) it follows that the function $f_1(z)$ must admit the estimate (2.9) (otherwise not all solutions of the problem $R_{2,2}$ would be found), and hence the function $f_0(z)$ must be bounded in a neighbourhood of the knot $c$.
+
+Therefore, the solution of the problem (2.5) is to be sought in the class of functions vanishing at infinity and bounded near the knots, and the solution of the problem (2.6) is to be sought in the class of functions having a zero of the third order at infinity and an infinity of integrable order near the knots of the contour $L$.
+
+Let us solve the Riemann boundary value problem (2.5) by the method offered by F.D. Gakhov (see, for example, [1], p. 448).
+
+Let the index of the problem (2.5) in the specified class be equal to $\kappa_0$.
+
+Then, if $\kappa_0 \ge 0$, the general solution of the problem (2.5) is given by the formula (see [1, 2]):
+
+$$f_0(z) = X_0(z) \left( \frac{1}{2\pi i} \int_L \frac{g_0(\tau)}{X_0^+(\tau)} \frac{d\tau}{\tau - z} + P_{\kappa_0-1}(z) \right), \quad (2.10)$$
+
+where $X_0(z)$ is the canonical function of the problem (2.5) and $P_{\kappa_0-1}(z)$ is a polynomial of degree not higher than $\kappa_0 - 1$ with arbitrary complex coefficients.
+
+In the case $\kappa_0 < 0$, the solution of the problem (2.5) is also expressed by the formula (2.10), with the single modification that $P_{\kappa_0-1}(z) \equiv 0$, subject to the $|\kappa_0|$ solvability conditions of the form:
+
+$$\int_L \frac{g_0(\tau)}{X_0^+(\tau)} \tau^{k-1} d\tau = 0, \quad k = 1, \dots, |\kappa_0|.$$
+
+Let us now define the numbers $\gamma_c$ specified in the statement of the problem $R_{2,2}$. Let $c_1, c_2, \dots, c_m$ be the knots of the function $G_0(t)$.
+
+Below we shall consider that
+
+$$\begin{align}
+\gamma_{c_k} &> \frac{1}{2\pi} \left(\arg G_0(c_k - 0) - \arg G_0(c_k + 0)\right), && k = 2, \dots, m, \nonumber \\
+\gamma_{c_1} &> \frac{1}{2\pi} \left(\arg G_0(c_1 - 0) - \arg G_0(c_1 + 0) - 2\pi\kappa_0\right). && (2.11)
+\end{align}$$
+
+Further, from the function $f_0(z)$ found above, by differentiation and with the help of the Sokhotski-Plemelj formulas (see [4], p. 333, and [1, 2]), we find the boundary values $\frac{df_0^\pm(t)}{dt}$ of the function $\frac{df_0(z)}{dz}$.
+
+**Note 1.** We note that if the knot $c$ is not singular, or $c$ is singular but $\ln|G_0(c-0)| - \ln|G_0(c+0)| = 0$, then from the conditions (2.11) it follows that the functions $\frac{df_0^\pm(t)}{dt}$ satisfy the Hölder condition everywhere on $L$ except, possibly, at the knots, where they may have a singularity of integrable order (knots of the first type). Otherwise the functions $\frac{df_0^\pm(t)}{dt}$ will have a singularity of the first order near the knots (knots of the second type).
+
+Next we solve the Riemann boundary value problem (2.6).
+
+Let the index of the problem (2.6) in the specified class be equal to $\kappa_1$.
+
+As is known (see [1, 2]), if $\kappa_1 \ge 3$, the general solution of the problem (2.6) is given by the formula:
+
+$$f_1(z) = X_1(z) \left( \frac{1}{2\pi i} \int_L \frac{Q_1(\tau)}{X_1^+(\tau)} \frac{d\tau}{\tau - z} + P_{\kappa_1-3}(z) \right), \quad (2.12)$$
+
+where $X_1(z)$ is the canonical function of the problem (2.6), $P_{\kappa_1-3}(z)$ is a polynomial of degree not higher than $\kappa_1 - 3$ with arbitrary complex coefficients, and
+
+$$Q_1(t) = \frac{1}{2} \left( -t \frac{df_0^+(t)}{dt} + t G_1(t) \frac{df_0^-(t)}{dt} + g_1(t) \right).$$
+
+If $\kappa_1 \le 2$, the solution of the problem (2.6) is also expressed by the formula (2.12), with the single modification that $P_{\kappa_1-3}(z) \equiv 0$, subject to the $-\kappa_1 + 2$ solvability conditions of the form:
+
+$$\int_L \frac{Q_1(\tau)}{X_1^+(\tau)} \tau^{k-1} d\tau = 0, \quad k = 1, \dots, -\kappa_1 + 2.$$
+
+**Note 2.** Generally speaking, the absolute term $Q_1(t)$ of the problem (2.6) satisfies the Hölder condition everywhere on $L$ except, possibly, at the knots $c_1, c_2, \dots, c_m$, where it may have a singularity of the first order (knots of the second type), and at the remaining knots, where it may have an integrable singularity. Moreover, if a knot of the second type of the problem (2.5) is a singular knot of the problem (2.6), then the problem $R_{2,2}$ is *insoluble* in the class $A_2(D^\pm) \cap I^{(2)}(L)$.
+
+Further, from the functions $f_0(z)$ and $f_1(z)$ found above, using the formulas (2.3), we restore the analytical components of the required piecewise bianalytical function, and then the piecewise bianalytical function $F(z)$ itself by the formula (2.1).
+
+Thus, the following basic result holds.
+
+**Theorem 1.** Let $L = \{t : |t| = 1\}$, $D^+ = \{z : |z| < 1\}$ and $D^- = \overline{\mathbb{C}} \setminus (D^+ \cup L)$. Then the solution of the problem $R_{2,2}$ reduces to the sequential solution of the two scalar Riemann boundary value problems (2.5) and (2.6) with discontinuous coefficients in classes of functions analytical in the unit circle, where the solution of the problem (2.5) is sought in the class of functions vanishing at infinity and bounded at the knots of the contour, and the solution of the problem (2.6) is sought in the class of functions having a zero of the third order at infinity and an infinity of integrable order at the knots of the contour $L$. The problem $R_{2,2}$ is solvable if and only if the problems (2.5) and (2.6) are simultaneously solvable in the specified classes of functions and the knots of the second type are not singular for the coefficient $G_1(t)$ of the problem (2.6).
+
+*Example 1.* Let $L = \{t : |t| = 1\}$, $D^+ = \{z : |z| < 1\}$ and $D^- = \overline{\mathbb{C}} \setminus (D^+ \cup L)$. It is required to find all piecewise bianalytical functions $F(z) = \{F^+(z), F^-(z)\}$ belonging to the class $A_2(D^{\pm}) \cap I^{(2)}(L)$, vanishing at infinity, bounded near the knots of the contour and satisfying at all ordinary points of $L$ the following boundary conditions:
+
+$$
+F^{+}(t) = G_{0}(t)F^{-}(t) + (t-1)^{\frac{3}{2}}(t+1)^{\frac{3}{2}}, \quad (2.13)
+$$
+
+$$
+\frac{\partial F^{+}(t)}{\partial n_{+}} = -t^{4} \frac{\partial F^{-}(t)}{\partial n_{-}} - (3t^{2}(t^{2}-1)^{\frac{1}{2}} + 2t^{2}), \quad (2.14)
+$$
+
+Here
+
+$$
+G_0(t) = \begin{cases} 1, & t \in L_1 = \{t : t = e^{is}, 0 \le s \le \pi\}, \\ -1, & t \in L_2 = \{t : t = e^{is}, \pi \le s \le 2\pi\}, \end{cases}
+$$
+
+$G_1(t) = t^4$, $g_0(t) = (t-1)^{\frac{3}{2}}(t+1)^{\frac{3}{2}}$ and $g_1(t) = 3t^2(t^2-1)^{\frac{1}{2}} + 2t^2$, in accordance with (2.14).
+
+Using the equalities (2.2)–(2.4), the boundary condition (2.13) takes the following form:
+
+$$
+f_0^+(t) = G_0(t)f_0^-(t) + (t-1)^{\frac{3}{2}}(t+1)^{\frac{3}{2}}, \quad (2.15)
+$$
+
+The knots of the problem (2.15) are the points $t = 1$ and $t = -1$, at which the function $G_0(t)$ has jump discontinuities.
+
+Let us calculate the index of the problem (2.15), choosing $t = 1$ as the initial point. We have
+
+$$
+G_0(1 + 0) = 1 = e^{i0}, \quad \theta_1 = 0,
+$$
+
+the change of argument of the function $G_0(t)$ on the arc $L_1$ will be equal
+
+$$
+\Delta\theta_1 = [\arg G_0(t)]_{L_1} = 0.
+$$
+
+Therefore
+
+$$
+G_0(-1 - 0) = 1 = e^{i0}.
+$$
+
+Let $G_0(-1 + 0) = -1 = e^{i\theta_2}$. We choose the value $\theta_2$ so that the inequality
+
+$$
+0 \le -\theta_2 < 2\pi
+$$
+
+is fulfilled, that is, $\theta_2 = -\pi$.
+
+The change of the argument of the function $G_0(t)$ on the arc $L_2$ is equal to zero, so $G_0(1 - 0) = -1 = e^{-i\pi}$.
+
+Let us determine the integer $\kappa_0$ satisfying the following condition:
+
+$$
+0 \leq -\pi - 2\pi\kappa_0 < 2\pi.
+$$
+
+Thus, the index of the problem (2.15) is equal to $\kappa_0 = -1$.
+
+The general solution of the problem has the form:
+
+$$
+\begin{align*}
+f_0^+(z) &= (z^2 - 1)^{\frac{3}{2}}, \\
+f_0^-(z) &\equiv 0,
+\end{align*}
+$$
+
+subject to the single solvability condition:
+
+$$
+\int_L \tau (\tau^2 - 1) d\tau = 0,
+$$
+
+which is obviously fulfilled.
+
+The boundary values of the derivative of the solution of the problem (2.15) will be set by the formulas:
+
+$$
+\frac{df_0^+(t)}{dt} = 3t(t^2 - 1)^{\frac{1}{2}}, \quad (2.16)
+$$
+
+$$
+\frac{df_{0}^{-}(t)}{dt} \equiv 0. \tag{2.17}
+$$
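The differentiation in (2.16) can be checked symbolically, a sketch using the sympy library (not part of the original computation):

```python
import sympy as sp

z = sp.symbols('z')

# f0^+(z) = (z^2 - 1)^(3/2), the solution of problem (2.15) found above
f0_plus = (z**2 - 1)**sp.Rational(3, 2)

# Differentiate and compare with the stated boundary value (2.16)
df0_plus = sp.diff(f0_plus, z)
assert sp.simplify(df0_plus - 3*z*sp.sqrt(z**2 - 1)) == 0
```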
+
+Taking into account (2.16)–(2.17), the boundary condition (2.14) takes the form:
+
+$$
+f_1^+(t) = t^4 f_1^-(t) + t^2. \tag{2.18}
+$$
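This reduction can be verified symbolically: substituting (2.16), (2.17) and the data of the example into the absolute term $Q_1(t)$ of (2.6) indeed yields $t^2$ (a sketch using sympy, with $g_1$ read off from the right-hand side of (2.14)):

```python
import sympy as sp

t = sp.symbols('t')

df0_plus = 3*t*sp.sqrt(t**2 - 1)          # boundary value (2.16)
df0_minus = 0                              # boundary value (2.17)
G1 = t**4
g1 = 3*t**2*sp.sqrt(t**2 - 1) + 2*t**2    # from the right-hand side of (2.14)

# Absolute term of problem (2.6)
Q1 = sp.Rational(1, 2)*(-t*df0_plus + t*G1*df0_minus + g1)
assert sp.simplify(Q1 - t**2) == 0
```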
+
+Generally speaking, the coefficient of the problem (2.18) is a continuous function on $L$, but, following the statement of the problem, the points $t = 1$ and $t = -1$ are still treated as knots.
+
+Let us calculate the index of the problem (2.18), choosing $t = 1$ as the initial point. We have
+
+$$
+G_1(1+0) = 1 = e^{i0}, \quad \theta_1 = 0,
+$$
+
+The change of argument of function $G_1(t)$ on the arc $L_1$ will be equal
+
+$$
+\Delta\theta_1 = [\arg G_1(t)]_{L_1} = 4\pi.
+$$
+
+Therefore
+
+$$
+G_1(-1-0) = 1 = e^{i4\pi}.
+$$
+
+Let $G_1(-1+0) = 1 = e^{i\theta_2}$. We choose the value $\theta_2$ so that the inequality
+
+$$
+-2\pi < 4\pi - \theta_2 \leq 0
+$$
+
+is fulfilled, that is, $\theta_2 = 4\pi$.
+
+The change of the argument of the function $G_1(t)$ on the arc $L_2$ is equal to $4\pi$, so $G_1(1-0) = 1 = e^{i8\pi}$.
+
+Let us determine the integer $\kappa_1$ satisfying the following condition:
+
+$$
+-2\pi < 8\pi - 2\pi\kappa_{1} \leq 0.
+$$
+
+Thus, the index of problem (2.18) equals $\kappa_1 = 4$.
+
+Hence, the general solution of problem (2.18) has the form:
+
+$$
+f_1^+(z) = z^2 + a_1 z + a_0,
+$$
+
+$$
+f_{1}^{-}(z)=\frac{a_{1}}{z^{3}}+\frac{a_{0}}{z^{4}},
+$$
+---PAGE_BREAK---
+
+where $a_0$ and $a_1$ are arbitrary complex coefficients.
+
+Using the functions $f_0(z)$ and $f_1(z)$ found above and formulas (2.3), we recover the analytic components of the required piecewise bianalytic function $F(z)$:
+
+$$ \varphi_0^+(z) = (z^2 - 1)^{\frac{3}{2}} - z^2 + a_1 z + a_0, \tag{2.19} $$
+
+$$ \varphi_0^-(z) = -\frac{a_1}{z^3} - \frac{a_0}{z^4}, \tag{2.20} $$
+
+$$ \varphi_1^+(z) = z^3 + a_1 z^2 + a_0 z, \tag{2.21} $$
+
+$$ \varphi_1^-(z) = \frac{a_1}{z^2} + \frac{a_0}{z^3}. \tag{2.22} $$
+
+Thus, the general solution of problem (2.13) – (2.14) is given by the formula:
+
+$$ F(z) = \begin{cases} F^{+}(z) = \varphi_{0}^{+}(z) + \bar{z}\varphi_{1}^{+}(z), & z \in D^{+}, \\ F^{-}(z) = \varphi_{0}^{-}(z) + \bar{z}\varphi_{1}^{-}(z), & z \in D^{-}, \end{cases} $$
+
+where the functions $\varphi_0^\pm(z)$ and $\varphi_1^\pm(z)$ are defined by the formulas (2.19) – (2.22).
+
+## References
+
+[1] F.D. Gakhov. *Boundary value problems*. Nauka, Moscow, 1977. (in Russian)
+
+[2] N.I. Muskhelishvili. *Singular integral equations*. Nauka, Moscow, 1968. (in Russian)
+
+[3] K.M. Rasulov. *Boundary value problems for polyanalytical functions and some of their applications*. Smolensk State Pedagogical University, Smolensk, 1998. (in Russian)
+
+[4] N.P. Vekua. *Systems of singular integral equations and some boundary value problems*. Nauka, Moscow, 1970. (in Russian)
+
+## The second boundary value problem for Riemann-type bianalytic functions with discontinuous coefficients
+
+I.B. Bolotin
+
+The paper solves the second boundary value problem for Riemann-type bianalytic functions with discontinuous coefficients. It is shown that the problem reduces to solving two Riemann problems for analytic functions with discontinuous coefficients. The classes of analytic functions in which a solution may exist are determined. An example illustrating the solution algorithm for the considered problem is given.
\ No newline at end of file
diff --git a/samples/texts_merged/482562.md b/samples/texts_merged/482562.md
new file mode 100644
index 0000000000000000000000000000000000000000..a00732452bcaafd1dd06124883f1d408ddb4a3ac
--- /dev/null
+++ b/samples/texts_merged/482562.md
@@ -0,0 +1,134 @@
+
+---PAGE_BREAK---
+
+Revised Evaluation of Electric Cars
+
+The article below titled **Why Electric Cars Are Not The Answer** was posted October 21, 2012. Today I am posting a reconsideration of some salient issues. While these reconsiderations make electric cars more acceptable, serious reservations remain.
+
+The key conversion formula for horsepower (HP) and watts (W) is still
+
+$$1\text{HP} = 745.69987... \text{ W}$$
+
+Thus, a 400 HP car is also a 300 kilowatt (kW) car to within 0.6 percent. While this conversion enables one to talk about engine power in two languages, it does not reflect the differences in efficiency between a gasoline-combustion engine and an electric motor. A reliable source for quantitative comparisons is found at https://www.fueleconomy.gov/. The important fact is that electric motors are about 3 times more efficient than gasoline motors in converting latent energy content into motive power: electric motors are about 60% efficient whereas gasoline motors are about 20% efficient, plus or minus a percent or two in each case. Another conversion pertinent here is the electric energy equivalent of the combustion energy produced by a gallon of octane. Octane is the principal hydrocarbon in gasoline (87% - 91%) and its combustion can be represented by the chemical formula
+
+$$2 C_8H_{18} + 25 O_2 \leftrightarrow 16 CO_2 + 18 H_2O$$
+
+Going to the right this reaction releases 33.3 kWh (kilowatt-hour) of energy for each gallon of octane oxidized.
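The two conversion facts above can be combined in a short numerical sketch (Python; the 20% and 60% efficiencies and the 33.3 kWh-per-gallon figure are the essay's quoted values, not independently derived here):

```python
HP_TO_W = 745.69987                 # 1 horsepower in watts
KWH_PER_GALLON = 33.3               # combustion energy of a gallon of octane
EFF_GAS, EFF_ELECTRIC = 0.20, 0.60  # the essay's engine efficiencies

def hp_to_kw(hp):
    """Convert mechanical horsepower to kilowatts."""
    return hp * HP_TO_W / 1000.0

# A 400 HP car is a ~300 kW car to within 0.6 percent.
print(round(hp_to_kw(400), 1))      # ~298.3 kW

# Useful (motive) energy per gallon for a combustion engine...
useful_gas = KWH_PER_GALLON * EFF_GAS        # ~6.7 kWh
# ...and the electric energy needed to deliver the same motive energy.
needed_electric = useful_gas / EFF_ELECTRIC  # ~11.1 kWh
print(round(useful_gas, 2), round(needed_electric, 2))
```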
+
+Cars do not normally operate at their HP ratings. For example a 400 HP car only approaches its 400 HP maximum when rapidly accelerating in a low gear.
+---PAGE_BREAK---
+
+When traveling at a constant speed of 60 mph it operates at a lower HP that is determined by wind drag and wheel/motor friction. If there were no wind drag or wheel/motor friction, the 60 mph speed could be maintained by almost no power expenditure at all (Newton's second law for no friction). However there is drag and friction so power must be expended to maintain a constant speed, even on a flat surface, but a 400 HP car may operate at only 200 HP, or less, to do so. The wind drag increases with the square of the speed and efficiency at 60 mph can be reduced by as much as 20% at 70 mph, and much more at 80 mph. A gallon of octane generates 33.3 kWh of energy but at 20% efficiency for a combustion engine, this is only 6.6 kWh of energy actually expended for motion of the car. At 60% efficiency for an electric motor, the same amount of energy needed electrically is 11 kWh.
+
+Let us compare costs. A gallon of octane is needed to produce 6.6 kWh of energy in the motion of a combustion car whereas the same motion is achieved by 11 kWh of energy provided to an electric car. In recent times that gallon of octane costs somewhere around $2 to $4. At the national average for electrical energy of 12 cents per kWh, the 11 kWh cost $1.32. That is quite a savings in fuel costs. Current all-electric vehicles can take you 100 miles for 25 – 40 kWh of electricity. That's equivalent to 15 – 24 kWh of useful energy at 60% efficiency. At 20% efficiency a combustion engine needs 75 – 120 kWh of octane energy, or 2.4 – 3.8 gallons (about 41 mpg down to 26 mpg). Thus the 100 mile trip costs $3 – $4.80 electrically and $4.80 – $7.60 at $2 a gallon of octane, or $9.60 – $15.20 at $4 a gallon. These numbers reflect current rates for electricity and gasoline. Increased use of electricity could force prices up, although at present most power suppliers give lower prices for higher levels of usage. This could reverse if demand becomes very great.
+
+Gasoline engines pollute because they release carbon dioxide and water vapor, both of which are major greenhouse gases. Coal burning power plants also pollute but wind power and photocells produce electricity without releasing greenhouse gases. Hydroelectric power is non-polluting from the gases viewpoint but it causes ecological problems and dams have a finite lifetime. Nuclear power production raises a whole host of long term problems.
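The 100-mile cost comparison above can be reproduced numerically (Python sketch; the gallon figures come out near, though slightly below, the rounded 2.4 – 3.8 quoted in the essay):

```python
# Cost of a 100-mile trip, using the essay's figures.
PRICE_KWH = 0.12                # national average, $/kWh
EV_KWH_PER_100MI = (25, 40)     # range quoted for current all-electric vehicles
EFF_ELECTRIC, EFF_GAS = 0.60, 0.20
KWH_PER_GALLON = 33.3

for ev_kwh in EV_KWH_PER_100MI:
    electric_cost = ev_kwh * PRICE_KWH
    useful = ev_kwh * EFF_ELECTRIC                 # motive energy actually delivered
    gallons = useful / EFF_GAS / KWH_PER_GALLON    # octane needed for the same trip
    print(f"{ev_kwh} kWh: ${electric_cost:.2f} electric, {gallons:.2f} gal "
          f"-> ${2 * gallons:.2f} at $2/gal, ${4 * gallons:.2f} at $4/gal")
```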
+---PAGE_BREAK---
+
+The bottom line is that my earlier estimate of power usage from 10 times to 20 times the present household usage rate, if two-car families convert from gasoline to electricity, is too large by a factor of 3; that is, electric power usage will increase 3.3-fold to 6.7-fold. A household using 35 kWh a day on average with two gasoline cars will use 120 kWh – 240 kWh, or more, a day if the cars become all electric. This assumes the equivalent of 200 HP - 400 HP cars used for commuting and all other transportation needs.
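A minimal check of the revised estimate (Python; the 35 kWh/day baseline and the factor-of-3 correction are the essay's numbers):

```python
# Revised household load if both cars go electric.
BASE_KWH_DAY = 35                          # current household usage
LOW_FACTOR, HIGH_FACTOR = 10 / 3, 20 / 3   # earlier 10x-20x estimates, cut by 3

low = BASE_KWH_DAY * LOW_FACTOR            # ~117 kWh/day
high = BASE_KWH_DAY * HIGH_FACTOR          # ~233 kWh/day
print(round(low), round(high))             # roughly the quoted 120 - 240 kWh/day
```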
+
+Ronald F. Fox
+Smyrna, Georgia
+
+June 30, 2016
+
+# Why Electric Cars Are Not The Answer
+
+As the USA's dependence on foreign oil becomes increasingly problematic and as gasoline combustion byproducts continue to pollute the air, there has been a surge in the promotion of personal, electric-powered transportation. Using some easily available numbers and a little common sense I will argue below why this approach is not the answer. To be clear, I am talking about cars that are purely electric and do not use any internal combustion. This excludes hybrid cars that can be made to use the energy of motion, while braking, to charge a battery and then use that stored electric energy for propulsion or other purposes. Such hybrids simply recover the braking energy that standard cars waste. They require no outside source of electricity and generate their own electric power. As is well understood, hybrids only work well as gasoline savers in stop and go city traffic and not so well on long distance trips.
+
+The key to understanding my subsequent remarks is the basic conversion formula for converting mechanical power, "horsepower," into electric power, "watts." The conversion is
+
+$$1\text{HP} = 745.69987... \text{W}$$
+---PAGE_BREAK---
+
+where *HP* denotes horsepower and *W* denotes watts. There is also a term in usage called the “electrical horsepower” that is simply 746 *W* and is sufficiently accurate for our purposes. We owe the unit of horsepower to the Scottish inventor James Watt (1736-1819), after whom the unit of a watt was named. Watt established the unit of *HP* by comparing steam engines with draft horses. From 1960 until 1993 I thought Watt’s horses must have been sick or weak. However, in 1993 it was shown that for a few seconds a horse could achieve a peak power of 14.9 *HP* but for sustained effort it could only achieve, fittingly enough, 1 *HP*, the power of seven or eight 100 *W* light bulbs.
+
+Several car companies have put purely electric cars on the market and a sampling of what is available indicates how much power they can generate:
+
+Nissan LEAF: 80 kW
+
+Tesla Roadster 2.5 Sport: 215 kW
+
+Mitsubishi i-MiEV: 47 kW
+
+Ford Focus Electric: 107 kW
+
+in which *kW* denotes kilowatts, i.e. 1,000 *W*. To convert to horsepower, multiply the number of *kW* by 1.34
+
+$$1 \text{ kW} = \frac{1000}{746} \text{ HP} \approx 1.34 \text{ HP}$$
+
+where I have used the electrical horsepower for simplicity and have dropped digits beyond the second decimal place in the second equality. Clearly all four cars have relatively low horsepower, indeed very low horsepower by contemporary SUV standards (the top ten SUV horsepowers go from 400 HP to well over 500 HP).
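Applying the conversion to the four cars listed above (a small Python sketch using the "electrical horsepower" of 746 W):

```python
# Rated power of the four electric cars listed above, converted to HP.
cars_kw = {
    "Nissan LEAF": 80,
    "Tesla Roadster 2.5 Sport": 215,
    "Mitsubishi i-MiEV": 47,
    "Ford Focus Electric": 107,
}
for name, kw in cars_kw.items():
    hp = kw * 1000 / 746          # 1 kW = 1000/746 HP, about 1.34 HP
    print(f"{name}: {kw} kW ~ {hp:.0f} HP")
```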
+
+For a purely electric car to work there must be a source of electricity to charge up its storage batteries. Presently storage batteries are expensive, heavy and take up space. They propel the car a rather short distance (less than 100 miles)
+---PAGE_BREAK---
+
+before they need recharging and a typical commuter would need to recharge daily.
+Long road trips are not possible with present technology and the currently available
+electricity depots along our highways. Compared to refilling the gasoline tank, a
+complete recharging takes a long time as well.
+
+Everyone should remember the times when there have been electric
+blackouts, especially during hot summers when too many air conditioners are used
+simultaneously. These events give a sense of how close to capacity our normal
+electric needs operate. Let us consider just the demands for a household and how
+they would change if all of us used electric cars. This leaves out the considerable
+drains on electricity caused by industries and city infrastructures. The average
+power use per home in the USA is estimated to be:
+
+$$
+\frac{958 \text{ kWhr}}{\text{month}} = \frac{32 \text{ kWhr}}{\text{day}} = 1\frac{1}{3} \text{ kW}
+$$
+
+Remember that *energy is power multiplied by time*. So a household averaging
+
+$1\frac{1}{3}kW$
+
+of power over a whole day uses
+
+$$
+24 \text{ hr} \times 1\frac{1}{3} \text{ kW} = 32 \text{ kWhr}
+$$
+
+of energy in a day (*hr* denotes hour and *kWhr* is the unit of energy *kilowatt-hour*).
+The census bureau estimates that there are 115,000,000 households in the USA (a
+third of which are apartments). It is also estimated that there are 220,000,000 light
+duty vehicles (cars) in the USA, or almost two per household. There are about
+220,000,000 adults averaging 1.5 hours a day per person operating these cars. Let
+us now tally up the results. The daily total energy consumption by all households is
+---PAGE_BREAK---
+
+$$115,000,000 \times 32 \text{ kWhr} = 3,680,000,000 \text{ kWhr}$$
+
+The daily total energy consumption by all electric cars, if everyone owned only electric cars (assume they are all Ford Focus Electrics for simplicity), is
+
+$$220,000,000 \times 107 \text{ kW} \times 1.5 \text{ hr} = 35,310,000,000 \text{ kWhr}$$
+
+This is **ten times** the energy consumption per day per household for all other purposes. Switch to the higher **kW** generating Tesla Roadsters and it is **twenty times** as much.
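The tally can be verified directly (Python; all figures as given above, with the Tesla Roadster's 215 kW for the "twenty times" variant):

```python
HOUSEHOLDS = 115_000_000
HOUSEHOLD_KWH_DAY = 32
CARS = 220_000_000
HOURS_PER_DAY = 1.5

household_total = HOUSEHOLDS * HOUSEHOLD_KWH_DAY   # daily kWh, all households
focus_total = CARS * 107 * HOURS_PER_DAY           # all cars as Ford Focus Electrics
tesla_total = CARS * 215 * HOURS_PER_DAY           # all cars as Tesla Roadsters

print(household_total, focus_total)
# Ratios: roughly ten and twenty times household consumption.
print(round(focus_total / household_total, 1), round(tesla_total / household_total, 1))
```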
+
+Where is all of this electric energy going to originate? More hydroelectric dams, more coal burning generators, more windmills, more nuclear power plants, ...? Note that electricity is always at least a secondary energy source, that is, some other energy source has to be converted into electricity: gravity in hydroelectric dams, coal oxidation in coal burning generators, wind energy for windmills and nuclear fission in nuclear power plants. This is less efficient than direct conversion. For example one could argue that each household could do its own conversion, say by running an electric generator. Imagine that you used a gasoline (kerosene, propane, natural gas, etc.) powered generator. It is more efficient to simply use a gasoline (kerosene, propane, natural gas, etc.) burning car instead of first making electricity and then using an electric car. Each conversion step is less than 100% efficient.
+
+Maybe a solar cell converter could be used by each household. At two cars per household, each at 107 kW for 1.5 hours a day, 321 kWhr of energy would be needed per day. Remember that this is **ten times** the energy per day consumption per household (see above). In October of 2012 roof top solar photovoltaic converters rated at 3.5 kWhr per day were being highly touted in sunny California. Such solar photovoltaic cell converters are 11-13% efficient in converting sunlight into electricity. Even at 100% efficiency they would barely meet household needs, much less power the electric cars. More sophisticated versions of solar power using solar concentrators and sun trackers work better to
+---PAGE_BREAK---
+
+meet household needs but it is clear that none of them would ever meet the needs for two electric cars.
+
+Centrally located, large scale generation of power has advantages and disadvantages. You can generate electricity in a coal burning power plant but you cannot build a car that has a coal burning engine. The nature of the fuel and the nature of the vehicle are tightly connected. Size scale is critical. You can build a train locomotive that burns coal, and then many persons would have to share transportation rather than park their own little locomotive at home. Wind energy is ideal for powering a sailboat, even a small one. Coal burning can power a steamboat but not a canoe. Not everyone can build a hydroelectric dam on their own property. Few homes have a sufficient source of flowing water. We have saturated the reasonable sites for large scale dams in the USA already. Many are already silting up and otherwise becoming obsolescent. Private household windmills are also unfeasible. How would they help high rise apartment dwellers? Even roof top solar cell arrays require that you live where there are many sunny days per year. However, solar cell farms where there are many sunny days per year can distribute power to many homes where there are not. Nuclear power can be housed inside a nuclear submarine but it is out of the question for family cars. Only large scale plants can be adequately controlled for safety, and so far that hasn't always worked out well. The eventual embrittlement of the containment vessels produces the need to dispose of them and that creates radioactive waste the disposition of which is not easy.
+
+Biofuels are another approach to these problems, although not strictly an electric car solution. In this scenario, sunlight is used to grow crops such as corn. The corn is harvested and then fermented to produce alcohol, for example ethanol. The alcohol is burned by the car using an engine very much like a gasoline (or diesel) burning engine. Note that this is a three stage process, each step of which is less than 100% efficient. Using these fuels to power an electric generator instead introduces still another less than 100% efficient stage of conversion. So one would simply burn the biofuel in an internal combustion car rather than generate electricity for an electric car. Biofuels require vast amounts of land to grow
+---PAGE_BREAK---
+
+sufficient amounts of corn (or sugar cane etc.). This raises environmental questions regarding irrigation (droughts could be catastrophic to drivers), fertilizers (phosphate runoff creates algal blooms and then anoxia in streams, ponds and lakes, killing a level of the food chain including fish, amphibians and insect larvae), pesticides (not such a big problem if the crop is used to make biofuel rather than food), *etcetera*, and aggravates worldwide starvation. Burning a molecule of ethanol produces three molecules of water and two molecules of carbon dioxide, a greenhouse gas. Thus, one pollution issue is not solved. Some efficiency is lost to the energy costs involved in powering planters, harvesters and fermenters etc. and to the lower energy density of ethanol compared to gasoline. These debits are relatively large compared to the energy production.
+
+Generally, using sunlight at the start of an energy conversion cascade is limiting because of the low power density that is not used at 100% efficiency (on a clear day at sea level the optimal *solar insolation* at the Earth's surface perpendicular to the Sun's rays is 1 *kW* per square meter). This is partly the reason why it only directly powers plant growth, a slow process, and not directly animals that use muscles and nerves, tissues that require high levels of power to operate properly (an adult human male averages over one day roughly 100 *W* of power consumption, 0.134 *HP*, that is provided by biochemical metabolism, not sunlight, and while running can exert a peak power of 1 *HP*).
+
+Electric cars are not the solution to the gasoline powered car problems, neither the petro-political issues nor the environmental impacts. If a complete replacement of gas powered cars by purely electric cars were to be undertaken there would be insufficient electric generation to run them. Total household electricity utilization would increase at least ten-fold. Local, house by house electric generation scenarios are inadequate and unfeasible to provide the needed electricity for a typical household with two cars. Adequate large scale electric generation by solar photovoltaic farms, by windmills, by hydroelectricity or by nuclear energy has many problems as well.
+---PAGE_BREAK---
+
+Hopefully this essay convinces you that some simple arithmetic homework
+needs to be done before jumping on the bandwagon for any of various alternatives
+to energy conversion and utilization that would replace gasoline powered cars by
+purely electric cars. These small scale, high powered, long ranging and rapidly
+refueled personal vehicles were indeed a remarkable invention.
+
+Ronald F. Fox
+
+Smyrna, Georgia
+
+October 21, 2012
\ No newline at end of file
diff --git a/samples/texts_merged/491078.md b/samples/texts_merged/491078.md
new file mode 100644
index 0000000000000000000000000000000000000000..d62e06ebd917724fbbe91a650de3b2d5b53f6468
--- /dev/null
+++ b/samples/texts_merged/491078.md
@@ -0,0 +1,172 @@
+
+---PAGE_BREAK---
+
+# Useful and Necessary Formulas
+
+http://www2.ucdsb.on.ca/tiss/stretton/Database/formulas_content.html
+
+## 1. Electromagnetic Radiation
+
+a) Speed of Light
+b) Wavelength
+c) Frequency
+d) Energy in a photon
+e) $c = \lambda \cdot \nu$
+f) $\lambda = c / \nu$
+g) $\nu = c / \lambda$
+h) $E = h \cdot \nu$
+
+## 2. Concentration and Molar Mass
+
+a) Density (D)
+b) Moles (n)
+c) Moles (# of particles)
+d) Moles (solution)
+e) Moles (gas equation)
+f) Molarity (M)
+g) Molar mass (mm)
+h) $D = m / V$
+i) $n = g / mm$
+j) $n$ = number of particles / Avogadro's number
+k) $n$ = concentration • volume
+l) $n = PV / RT$
+m) $M = n / V$
+n) $mm = m / n$
+
+## 3. Gases
+
+a) Boyle's Law
+b) Charles' Law
+c) Combined Gas Law
+d) Ideal Gas Law
+e) Dalton's Law of Partial Pressures
+f) $P_1 \cdot V_1 = P_2 \cdot V_2$
+g) $V_1 \cdot T_2 = V_2 \cdot T_1$
+h) $P_1 \cdot V_1 / T_1 = P_2 \cdot V_2 / T_2$
+i) $PV = nRT$
+j) $P_T = P_1 + P_2 + P_3 + ... + P_n$
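As a worked example of the ideal gas law $PV = nRT$ from item i) (Python sketch, using the value of $R$ quoted in the constants list on the next page):

```python
# Molar volume of an ideal gas at STP (0 °C, 1 atm).
R = 0.0821                 # L·atm / (mol·K)
n, P, T = 1.0, 1.0, 273.15

V = n * R * T / P
print(round(V, 1))         # ~22.4 L, the STP molar volume quoted in the constants list
```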
+
+## 4. Acids and Bases
+
+a) pH
+b) pOH
+c) $[H_3O^{+1}]$
+d) $[OH^{-1}]$
+e) $pH = -\log[H_3O^{+1}]$
+f) $pOH = -\log[OH^{-1}]$
+g) $[H_3O^{+1}] = 10^{-pH}$
+h) $[OH^{-1}] = 10^{-pOH}$
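The four relations above can be sketched as two inverse functions (illustrative Python, not part of the original sheet):

```python
import math

def pH(h3o):
    """pH = -log10[H3O+]."""
    return -math.log10(h3o)

def h3o(ph_value):
    """[H3O+] = 10^(-pH)."""
    return 10 ** (-ph_value)

print(pH(1e-7))    # neutral water: pH 7
print(h3o(3.0))    # an acidic solution: 0.001 M
```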
+
+## 5. Heat
+
+a) Quantity of Heat (Q)
+b) Quantity of Heat (fusion)
+c) Quantity of Heat (vaporization)
+d) Celsius to Kelvin
+e) Kelvin to Celsius
+f) $Q = m \cdot c \cdot \Delta t$
+g) $Q = m \cdot L_f$
+h) $Q = m \cdot L_v$
+i) K = °C + 273.15
+j) °C = K - 273.15
+
+## 6. Mathematics
+
+a) Quadratic Equation
+b) $x = \frac{-b \pm \sqrt{b^2 - 4ac}}{2a}$
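The quadratic formula translates directly into code (illustrative Python, not part of the original sheet):

```python
import math

def quadratic_roots(a, b, c):
    """Real roots of a*x^2 + b*x + c = 0 via the quadratic formula."""
    disc = b * b - 4 * a * c
    if disc < 0:
        return ()                   # no real roots
    root = math.sqrt(disc)
    return ((-b + root) / (2 * a), (-b - root) / (2 * a))

print(quadratic_roots(1, -3, 2))    # roots of x^2 - 3x + 2: (2.0, 1.0)
```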
+---PAGE_BREAK---
+
+# Common Physical and Chemical Constants
+
+http://www2.ucdsb.on.ca/tiss/stretton/Database/constants.htm
+
+
+
+| Constant | Value |
+| --- | --- |
+| Avogadro's number | $6.02217 \times 10^{23}$ things/mole |
+| Planck's constant | $6.6260755 \times 10^{-34}$ J s |
+| 1 atmosphere (atm) | 101,325 pascals (Pa) = 101.325 kPa = 760 mm of Hg = 760 Torr = 1.01325 bar |
+| 1 mole of any gas at STP | 22.4 L (0°C, 1 atm) |
+| 1 mole of any gas at SATP | 24.8 L (25°C, 1 atm) |
+| Ideal gas law constant (R) | 0.0821 L atm mol$^{-1}$ K$^{-1}$ = 8.31430 L kPa mol$^{-1}$ K$^{-1}$ = 8.31441 J mol$^{-1}$ K$^{-1}$ |
+| 1 calorie (cal) | 4.184 J |
+| 1 Cal | 1 kcal = 1000 calories |
+| 1 atomic mass unit (amu) | $1.6605665 \times 10^{-24}$ g |
+| 1 tonne (t) | 1000 kg = 1 Mg |
+| Speed of light in a vacuum | 299,792,458 m s$^{-1}$ ($3.0 \times 10^{8}$ m s$^{-1}$) |
+| Rest mass of an electron ($m_e$) | 0.000548712 u = $9.1093897 \times 10^{-28}$ g |
+| Rest mass of a proton ($m_p$) | 1.00727605 u = $1.67262305 \times 10^{-24}$ g |
+| Rest mass of a neutron ($m_n$) | 1.008665 u = $1.674954 \times 10^{-24}$ g |
+| 1 kilowatt-hour (kWh) | 3.6 MJ |
+| 1 joule (J) | 1 kg m$^{2}$ s$^{-2}$ = $1.0 \times 10^{7}$ erg |
+| 1 coulomb (C) | $6.24 \times 10^{18}$ e$^{-}$ |
+| Charge on an electron | $1.60217733 \times 10^{-19}$ C |
+| 1 ampere (A) | 1 coulomb/s |
+| 1 volt (V) | 1 J/C = 96.5 kJ/mole |
+| 1 electron volt (eV) | $1.60219 \times 10^{-19}$ J |
+| Faraday's constant | 96,486.7 C/mole e$^{-}$ |
+
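As a quick consistency check between three of the constants above (illustrative Python): Faraday's constant is Avogadro's number times the charge on an electron.

```python
# Values as quoted in the constants list above.
AVOGADRO = 6.02217e23        # things/mole
E_CHARGE = 1.60217733e-19    # C, charge on an electron
FARADAY = 96_486.7           # C/mole e-

# N_A * e reproduces Faraday's constant to within a few hundredths of a percent.
print(AVOGADRO * E_CHARGE)   # ~9.65e4 C/mole e-
```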
+
\ No newline at end of file
diff --git a/samples/texts_merged/4961582.md b/samples/texts_merged/4961582.md
new file mode 100644
index 0000000000000000000000000000000000000000..893ed9b742b978f0bfdc2bc8f3bfafac2585b9ca
--- /dev/null
+++ b/samples/texts_merged/4961582.md
@@ -0,0 +1,776 @@
+
+---PAGE_BREAK---
+
+# An Operator-Based Approach for the Construction of Closed-Form Solutions to Fractional Differential Equations
+
+Zenonas Navickas$^a$, Tadas Telksnys$^a$, Inga Timofejeva$^a$, Romas Marcinkevičius$^b$ and Minvydas Ragulskis$^a$
+
+$^a$Research Group for Mathematical and Numerical Analysis of Dynamical Systems, Kaunas University of Technology
+
+Studentu 50-147, Kaunas LT-51368, Lithuania
+
+$^b$Department of Software Engineering, Kaunas University of Technology
+
+Studentu 50-415, Kaunas LT-51368, Lithuania
+
+E-mail(corresp.): tadas.telksnys@ktu.lt
+
+Received March 1, 2018; revised September 13, 2018; accepted September 13, 2018
+
+**Abstract.** An operator-based approach for the construction of closed-form solutions to fractional differential equations is presented in this paper. The technique is based on the analysis of Caputo and Riemann-Liouville algebras of fractional power series. Explicit solutions to a class of linear fractional differential equations are obtained in terms of Mittag-Leffler and fractionally-integrated exponential functions in order to demonstrate the viability of the proposed technique.
+
+**Keywords:** fractional differential equation, operator calculus, analytical solution, closed-form solution.
+
+**AMS Subject Classification:** 34A08; 34A25; 26A33.
+
+## 1 Introduction
+
+Fractional differential equations (FDEs) have become one of the cornerstones in the modeling of various real-world systems in recent years. Fractional-order models are extensively utilized in the study of physics [17], nanomaterials [9], control problems in engineering [24], economics [14] and biomedicine [21].
+
+Several typical recent examples of research involving fractional calculus are reviewed below. A fractional calculus model of supercapacitor energy storage is presented in [18]. Viscoelastic constitutive laws for arterial wall mechanics are investigated via fractional-order models in [36]. Atomic chain dynamics
+
+Copyright © 2018 The Author(s). Published by VGTU Press
+This is an Open Access article distributed under the terms of the Creative Commons Attribution
+License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribu-
+tion, and reproduction in any medium, provided the original author and source are credited.
+---PAGE_BREAK---
+
+are approximated using fractional differential equations in [32]. Dynamics of the transition from laminar to turbulent fluid flow is described using fractional models in [15].
+
+A fractional-order model of anomalous cosmic ray diffusion with a finite velocity of free particle motion is considered in [34]. An efficient fractional-derivative based model for the prediction of multiaxial visco-hyperelastic behavior of elastomers is constructed in [8]. Fractional dynamical systems with types of attractors that are distinct from attractors of integer-order systems are considered in [10]. It is demonstrated in [20] that the dependence of the firing rate of single rat neocortical pyramidal neuron is a fractional derivative of slowly varying stimulus parameters.
+
+Due to the wide possibilities of fractional-order model application, a number of methods for the construction of exact analytical solutions to FDEs have been developed. In [35], the Carleman embedding technique is used to transform the fractional logistic equation into an infinite-order set of linear equations from which the exact solution to the fractional logistic equation is obtained. Agrawal presents a scheme for the construction of analytical solutions for a class of FDEs that contain both left- and right-fractional derivatives in [1]. The joint Laplace and Fourier transform is employed to construct solutions to fractional partial differential equations occurring in quantum mechanics in [29]. A class of explicit particular solutions to the Cauchy-Euler fractional partial differential equation is obtained in [19].
+
+Analytical solutions to the fractional modified Telegraph and Rayleigh equations are constructed in terms of Mittag-Leffler, Hypergeometric, Hermite and Fox's H functions in [33]. The solution to a Hilfer-generalized Riemann-Liouville fractional diffusion equation is obtained using variable separation, Laplace transform and Sturm-Liouville analysis in [31]. A bivariate operational method is used to construct a solution to the two-term time-fractional Thornley problem in [7]. Exact solutions to a class of fractional Hamiltonian equations are analyzed in [6]. An extension of Frobenius' method is applied to linear fractional differential equations with variable coefficients in [30].
+
+A number of techniques use power series with fractional powers to construct solutions to fractional differential equations. The viability of this approach is proven in [11] by generalizing some results from integer-power series to fractional-power series using the Caputo fractional derivative. A new approach for the iterated construction of series solutions to fractional differential equations is presented in [2] and expanded in [3].
+
+An analytical technique based on power series is exploited in [5] to predict and represent the multiplicity of solutions to nonlinear boundary value problems of fractional order. This approach is further generalized in [12] and [4] in which the authors propose schemes for the construction of exact analytical solutions to linear and nonlinear equations based on the generalized Taylor series formula.
+
+The main objective of this paper is to develop an operator-based approach for the construction of closed-form solutions to fractional differential equations. Some key concepts of such operator-based techniques are presented in [26] for fractional derivative order $\alpha = \frac{1}{2}$. A generalization of this technique for
+---PAGE_BREAK---
+
+rational-valued fractional derivative order $\alpha = \frac{m}{n}$ is presented in this paper.
+
+The presented technique is based on Caputo and Riemann-Liouville algebras of fractional power series. Fractional differentiation and integration operators are defined using the basis of these algebras. It is demonstrated that a one-to-one correspondence exists between Caputo and Riemann-Liouville series.
+
+To provide a more concise and clear presentation of these concepts, linear fractional differential equations are considered. Using the properties of Caputo and Riemann-Liouville algebras, explicit expressions of solutions to linear fractional differential equations are obtained via linear recurring sequences. Furthermore, it is shown that elements of Caputo and Riemann-Liouville algebras can be represented by a finite sum of fractionally integrated integer-power series, which enables the transformation of fractional differential equations into systems of ordinary differential equations (ODEs). The viability of our approach is demonstrated using the fractional damped harmonic oscillator – it is shown that as the fractional derivative order approaches the standard integer value, the exact solution of the FDE converges to the exponential-function exact solution of the respective ODE.
+
+## 2 Main concepts and definitions
+
+A generalization of algebras and operators given in [26] for fractional derivatives of order $\alpha = \frac{1}{n}, n \in \mathbb{N}$ is presented in this section.
+
+### 2.1 Power series and extensions
+
+As in [26], functions in this paper are represented by power series:
+
+$$y(z) := \sum_{j=0}^{+\infty} a_j \frac{z^j}{j!}; \quad a_j, z \in \mathbb{C}. \qquad (2.1)$$
+
+If series (2.1) is convergent (in the Cauchy sense) in any ball $|z| < \varepsilon$, classical function extension techniques can be used to extend (2.1) to a wider domain in the complex plane. Denoting $y(x), x \in \mathbb{R}$ as a real-valued function that is obtained by evaluating the extended power series on the real line allows one to consider (2.1) for arguments that are not necessarily in the convergence radius $|x| < \varepsilon$. In such cases, the extended function $y(x)$ and the respective power series are considered congruent.
+
+If series (2.1) is divergent for $z \neq 0$, it is well-known that while such series cannot be evaluated, they still contain important structural information [16]. Thus, divergent series can be solutions to fractional differential equations in the structural sense.
+
+### 2.2 Fractional power series
+
+Solutions to fractional differential equations can be expressed via power series that are summed over non-integer powers. In the remainder of this paper, we let $x \ge 0$.
+---PAGE_BREAK---
+
+**DEFINITION 1.** Let $n \in \mathbb{N}$. The fractional power series basis $z_0^{(n)}$, $z_1^{(n)}$, ... is defined as:
+
+$$z_j^{(n)}(x) := x^{\frac{j-n+1}{n}} / \Gamma\left(\frac{j+1}{n}\right); \quad x \ge 0; \quad j = 0, 1, \dots$$
+
+$\Gamma$ denotes the Gamma function:
+
+$$\Gamma(x) := \int_{0}^{+\infty} \xi^{x-1} \exp(-\xi) \, d\xi.$$
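As a quick numerical illustration (not part of the paper), the basis functions of Definition 1 can be evaluated directly, with `math.gamma` standing in for $\Gamma$:

```python
import math

def z(j, n, x):
    """Fractional basis function z_j^(n)(x) = x^((j-n+1)/n) / Gamma((j+1)/n)."""
    return x ** ((j - n + 1) / n) / math.gamma((j + 1) / n)

# z_{n-1}^(n) is the constant function 1: exponent (n-1-n+1)/n = 0, Gamma(1) = 1.
print(z(1, 2, 0.5))   # n = 2, j = n - 1 = 1 -> 1.0
```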
+
+**DEFINITION 2.** The set of fractional power series with respect to parameter $n \in \mathbb{N}$ reads:
+
+$$\mathbb{F}_n := \left\{ \sum_{j=0}^{+\infty} c_j z_j^{(n)} ; \ c_j \in \mathbb{C}, \ j = 0, 1, \dots \right\}. \qquad (2.2)$$
+
+## 2.3 Riemann-Liouville algebra
+
+Consider the linear space over $\mathbb{C}$ defined on the set $\mathbb{F}_n$ with the standard sum and scalar multiplication operations. Let $z_k^{(n)}$, $z_l^{(n)}$; $k, l = 0, 1, \dots$ be basis elements of any fractional power series.
+
+**DEFINITION 3.** The Riemann-Liouville type product operation $\ast_n$ in the set $\mathbb{F}_n$ is defined as:
+
+$$z_k^{(n)} \ast_n z_l^{(n)} := \binom{(k+l)/n}{k/n} z_{k+l}^{(n)}, \qquad (2.3)$$
+
+where $\binom{\alpha}{\beta}$ is the generalized binomial coefficient:
+
$$\binom{\alpha}{\beta} = \frac{\Gamma(\alpha + 1)}{\Gamma(\beta + 1)\Gamma(\alpha - \beta + 1)}; \quad \alpha, \beta \in \mathbb{C}.$$
+
Note that (2.3) is not the natural product obtained by multiplying the basis functions in the conventional way, since conventional multiplication would yield powers of $x$ lower than $-1$, which lie outside the basis.
+
+Using relation (2.3), the product of any $f_1^{(n)} = \sum_{j=0}^{+\infty} a_j z_j^{(n)}$, $f_2^{(n)} = \sum_{j=0}^{+\infty} b_j z_j^{(n)} \in \mathbb{F}_n$ is defined as:
+
+$$\begin{align}
+f_1^{(n)} *_n f_2^{(n)} &:= \sum_{j=0}^{+\infty} \sum_{k=0}^{j} a_k b_{j-k} z_k^{(n)} *_n z_{j-k}^{(n)} \nonumber \\
+&= \sum_{j=0}^{+\infty} \left( \sum_{k=0}^{j} a_k b_{j-k} \binom{j/n}{k/n} \right) z_j^{(n)} \in \mathbb{F}_n. \tag{2.4}
+\end{align}$$
+
+The neutral element with respect to $\ast_n$ is $z_0^{(n)}$. Furthermore, $\ast_n$ is distributive with respect to the conventional sum operation, thus the linear space $\mathbb{F}_n$ together with product $\ast_n$ defines an algebra.
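For truncated series, the coefficient rule (2.4) is straightforward to implement; the sketch below (helper names `gbinom` and `rl_product` are ours) also confirms numerically that $z_0^{(n)}$ acts as the neutral element:

```python
from math import gamma

def gbinom(a, b):
    """Generalized binomial coefficient Gamma(a+1) / (Gamma(b+1) Gamma(a-b+1))."""
    return gamma(a + 1) / (gamma(b + 1) * gamma(a - b + 1))

def rl_product(a, b, n, terms):
    """First `terms` coefficients of the product (2.4) of two series in F_n,
    given by their coefficient lists a, b (zero-padded as needed)."""
    a = a + [0.0] * terms
    b = b + [0.0] * terms
    return [sum(a[k] * b[j - k] * gbinom(j / n, k / n) for k in range(j + 1))
            for j in range(terms)]

# z_0^(n) (coefficient list [1]) is the neutral element of *_n:
print(rl_product([1.0], [0.3, -1.2, 2.0], 2, 3))  # -> [0.3, -1.2, 2.0]
```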
+---PAGE_BREAK---
+
**DEFINITION 4.** The commutative algebra $\mathcal{F}_n := \langle \mathbb{F}_n; +, \ast_n \,|\, \mathbb{C} \rangle$ is called the Riemann-Liouville algebra.
+
+**DEFINITION 5.** Riemann-Liouville fractional differentiation and integration operators are defined for elements of $F_n$ in the classical sense [22,27]:
+
+$$ \mathbf{D}^{(1/n)} z_j^{(n)} := \begin{cases} 0, & j = 0; \\ z_{j-1}^{(n)}, & j = 1, 2, \dots, \end{cases}, \quad \mathbf{I}^{(1/n)} z_j^{(n)} := z_{j+1}^{(n)}, \quad j = 0, 1, \dots $$
+
+Note that Riemann-Liouville differentiation of a constant does not result in zero, since $\mathbf{D}^{(1/n)} 1 = \mathbf{D}^{(1/n)} z_{n-1}^{(n)} = z_{n-2}^{(n)}$. It is clear that $\mathbf{D}^{(1/n)} f, \mathbf{I}^{(1/n)} f \in \mathbb{F}_n$ for any $f \in \mathbb{F}_n$.
+
+## 2.4 Caputo algebra
+
+Consider the truncated set of basis functions (2.2), starting at $z_{n-1}^{(n)} = 1$:
+
+$$ w_j^{(n)} := z_{j+n-1}^{(n)}, \quad j = 0, 1, \dots $$
+
+**DEFINITION 6.** Fractional power series constructed using the basis $w_0^{(n)}, w_1^{(n)}, w_2^{(n)}, \dots$ comprise the set of Caputo power series:
+
+$$ C\mathbb{F}_n := \left\{ \sum_{j=0}^{+\infty} c_j w_j^{(n)} ; \ c_j \in \mathbb{C}, \ j = 0, 1, \dots \right\}. $$
+
As in the previous subsection, the set of Caputo power series forms a linear space over $\mathbb{C}$ with the conventional sum and multiplication-by-scalar operations. Since powers of $x$ in the set $C\mathbb{F}_n$ are non-negative, the standard algebraic product operation can be used to define products of basis functions:
+
$$ w_k^{(n)} w_l^{(n)} = \binom{(k+l)/n}{k/n} w_{k+l}^{(n)}; \quad k, l = 0, 1, \dots \qquad (2.5) $$
+
+By (2.5), the product of any $g_1^{(n)} = \sum_{j=0}^{+\infty} a_j w_j^{(n)}$, $g_2^{(n)} = \sum_{j=0}^{+\infty} b_j w_j^{(n)} \in C\mathbb{F}_n$ is defined as:
+
+$$ g_1^{(n)} g_2^{(n)} = \sum_{j=0}^{+\infty} \left( \sum_{k=0}^{j} a_k b_{j-k} \binom{j/n}{k/n} \right) w_j^{(n)} \in C\mathbb{F}_n. \qquad (2.6) $$
+
+The neutral element with respect to (2.6) is $w_0^{(n)} = 1$. It is clear that standard algebraic sum, product and product by a scalar operations define an algebra in the set $C\mathbb{F}_n$.
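Since powers of $x$ in $C\mathbb{F}_n$ are non-negative, the product (2.6) of two finite Caputo series must reproduce ordinary pointwise multiplication; this can be verified numerically (a sketch, with helper names `w` and `caputo_product` of our choosing):

```python
from math import gamma

def w(j, n, x):
    """Caputo basis function w_j^(n)(x) = x^(j/n) / Gamma(j/n + 1)."""
    return x ** (j / n) / gamma(j / n + 1)

def caputo_product(a, b, n):
    """All coefficients of the product (2.6) for finite series a, b in CF_n."""
    c = [0.0] * (len(a) + len(b) - 1)
    for k, ak in enumerate(a):
        for l, bl in enumerate(b):
            # binom((k+l)/n, k/n) written out via Gamma functions
            c[k + l] += ak * bl * gamma((k + l) / n + 1) / (
                gamma(k / n + 1) * gamma(l / n + 1))
    return c

# For finite series the product coefficients reproduce pointwise multiplication:
n, x = 2, 0.8
a, b = [1.0, 0.5, -2.0], [3.0, 0.0, 1.0]
g1 = sum(ak * w(k, n, x) for k, ak in enumerate(a))
g2 = sum(bl * w(l, n, x) for l, bl in enumerate(b))
g12 = sum(cj * w(j, n, x) for j, cj in enumerate(caputo_product(a, b, n)))
print(abs(g12 - g1 * g2) < 1e-12)  # -> True
```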
+
**DEFINITION 7.** The commutative algebra $C\mathcal{F}_n := \langle C\mathbb{F}_n; +, \cdot \,|\, \mathbb{C} \rangle$ is called the Caputo algebra.
+---PAGE_BREAK---
+
+**DEFINITION 8.** Caputo differentiation and integration operators are defined for any $g \in C\mathbb{F}_n$ via the following relations:
+
+$$
+C\mathbf{D}^{(1/n)} w_j^{(n)} = \begin{cases} 0, & j=0, \\ w_{j-1}^{(n)}, & j=1, 2, \dots, \end{cases} \qquad C\mathbf{I}^{(1/n)} w_j^{(n)} = w_{j+1}^{(n)}, \quad j=0, 1, \dots
+$$
+
Note that the Caputo differentiation of unity is equal to zero, since
$C\mathbf{D}^{(1/n)}w_0^{(n)} = 0$. The set $C\mathbb{F}_n$ is closed with respect to the operators $C\mathbf{D}^{(1/n)}$, $C\mathbf{I}^{(1/n)}$.
+
+## 2.5 Relationship between Riemann-Liouville and Caputo algebras and operators
+
+### 2.5.1 Relationship between algebras with equal differentiation order
+
+It has already been noted that the set $C\mathbb{F}_n$ consists of a subset of basis functions from the set $\mathbb{F}_n$. Let $\tau$ define the following mapping:
+
+$$
+\tau(z_j^{(n)}) = w_j^{(n)}; \quad \tau^{-1}(w_j^{(n)}) = z_j^{(n)}; \quad j = 0, 1, \dots \tag{2.7}
+$$
+
+Then, $\tau$ is a bijection between sets $\mathbb{F}_n$ and $C\mathbb{F}_n$. Note that:
+
+$$
+\tau \left(z_k^{(n)} *_n z_l^{(n)}\right) = \tau \left(z_k^{(n)}\right) \tau \left(z_l^{(n)}\right) = w_k^{(n)} w_l^{(n)}, \quad (2.8)
+$$
+
+$$
+\tau^{-1} (w_k^{(n)} w_l^{(n)}) = \tau^{-1} (w_k^{(n)}) *_n \tau^{-1} (w_l^{(n)}) = z_k^{(n)} *_n z_l^{(n)}, \quad (2.9)
+$$
+
+for $k,l = 0, 1, \dots$. Equations (2.8), (2.9) yield that:
+
+$$
+\tau(f_1 *_n f_2) = \tau(f_1) \tau(f_2); \quad \tau^{-1}(g_1 g_2) = \tau^{-1}(g_1) *_n \tau^{-1}(g_2),
+$$
+
+for any $f_1, f_2 \in \mathbb{F}_n$, $g_1, g_2 \in C\mathbb{F}_n$. Thus the mapping (2.7) defines a bijection between algebras $\mathcal{F}_n$ and $C\mathcal{F}_n$.
+
+It can be observed that the mappings $\tau, \tau^{-1}$ can be realized via operators
+$\mathbf{D}^{(1/n)}$, $\mathbf{I}^{(1/n)}$:
+
+$$
+\tau|_{\mathbb{F}_n} = (\mathbf{I}^{(1/n)})^{n-1}, \quad \tau^{-1}|_{C\mathbb{F}_n} = (\mathbf{D}^{(1/n)})^{n-1}.
+$$
+
+The relationship between algebras $\mathcal{F}_n$ and $C\mathcal{F}_n$ is summarized in Figure 1.
+Note that:
+
+$$
+C\mathbf{D}^{(1/n)}|_{C\mathbb{F}_n} = (\mathbf{I}^{(1/n)})^{n-1} (\mathbf{D}^{(1/n)})^n|_{C\mathbb{F}_n}, \quad C\mathbf{I}^{(1/n)}|_{C\mathbb{F}_n} = \mathbf{I}^{(1/n)}|_{C\mathbb{F}_n}
+$$
+
+Furthermore, if $f_k \in \mathbb{F}_n$ and $g_k \in C\mathbb{F}_n$, $k = 1, \dots, m$, then:
+
+$$
(\mathbf{I}^{(1/n)})^{n-1} (f_1 *_n f_2 *_n \dots *_n f_m) = \prod_{k=1}^{m} \left( (\mathbf{I}^{(1/n)})^{n-1} f_k \right),
+$$
+---PAGE_BREAK---
+
$$
\tau = (\mathbf{I}^{(1/n)})^{n-1} : \; \mathbb{F}_n \; \rightleftarrows \; C\mathbb{F}_n \; : (\mathbf{D}^{(1/n)})^{n-1} = \tau^{-1}
$$

**Figure 1.** Schematic diagram of the bijective mappings between Caputo and Riemann-Liouville algebras.
+
+$$
+\left(\mathbf{D}^{(1/n)}\right)^{n-1} \left(\prod_{k=1}^{m} g_k\right) = \left(\left(\mathbf{D}^{(1/n)}\right)^{n-1} g_1\right) *_{n} \left(\left(\mathbf{D}^{(1/n)}\right)^{n-1} g_2\right) *_{n} \dots *_{n} \left(\left(\mathbf{D}^{(1/n)}\right)^{n-1} g_m\right).
+$$
+
+Note that the mapping $\tau$ can be used to exchange Caputo differentiation with Riemann-Liouville and vice-versa:
+
+$$
\tau \left( \left( \mathbf{D}^{(1/n)} \right)^m g \right) = \left( C\mathbf{D}^{(1/n)} \right)^m f, \quad \tau^{-1} \left( \left( C\mathbf{D}^{(1/n)} \right)^m f \right) = \left( \mathbf{D}^{(1/n)} \right)^m g,
+$$
+
+where $f \in C\mathbb{F}_n; g = \tau^{-1}(f) \in \mathbb{F}_n$ and $m=0,1,\dots$.
+
+### 2.5.2 Relationship between algebras with distinct differentiation order
+
+Let us consider the fractional derivative order parameter $n$ factored into powers of primes $p_1, \dots, p_m$:
+
+$$
+n = \prod_{j=1}^{m} p_j^{k_j}, \quad m, k_j \in \mathbb{N}.
+$$
+
+The following relations hold true for Caputo fractional power series:
+
+$$
+C\mathbb{F}_{p_j} \subset \cdots \subset C\mathbb{F}_{p_j^{k_j}} \subseteq C\mathbb{F}_n, \qquad (2.10)
+$$
+
however, in general, $C\mathbb{F}_{p_1} \cup C\mathbb{F}_{p_1^2} \cup \dots \cup C\mathbb{F}_{p_m^{k_m}} \neq C\mathbb{F}_n$. Furthermore, $C\mathbb{F}_{p_j} \cap C\mathbb{F}_{p_l} = C\mathbb{F}_1$, $j \neq l$. The set $C\mathbb{F}_1$ contains basis elements with integer powers; thus, the unit element $w_0^{(p_j)} = w_0^{(p_l)} = w_0^{(n)} = 1$ is the same for any subset $C\mathbb{F}_{p_j^l}$, $l = 1, \dots, k_j$. This leads to the fact that the algebras formed from the sets given in (2.10) are subalgebras of $C\mathcal{F}_n$:
+
+$$
+C\mathcal{F}_{p_j} \subset \dots \subset C\mathcal{F}_{p_j^{k_j}} \subseteq C\mathcal{F}_n.
+$$
+
+In the case of Riemann-Liouville derivative, the relation analogous to (2.10)
+holds true:
+
+$$
+\mathbb{F}_{p_j} \subset \dots \subset \mathbb{F}_{p_j^{k_j}} \subseteq \mathbb{F}_n.
+$$
+---PAGE_BREAK---
+
However, in this case the unit elements are distinct for each algebra formed
with the subsets $\mathbb{F}_{p_j^l}$, $l = 1, \dots, k_j$. Furthermore, the definition of the product
(2.4) depends on the order of the algebra. These observations yield that $\mathcal{F}_{p_j^l}$
are not subalgebras of $\mathcal{F}_n$.
+
An example of the results of this subsection for $n=6$ is given in Figure 2.
Note that $C\mathcal{F}_2$, $C\mathcal{F}_3$ are subalgebras of $C\mathcal{F}_6$, but $\mathcal{F}_2$, $\mathcal{F}_3$ are not subalgebras
of $\mathcal{F}_6$, even though $\mathbb{F}_2, \mathbb{F}_3 \subset \mathbb{F}_6$. The bijection $\tau$ maps Caputo and Riemann-Liouville algebras of the same order to each other.
+
**Figure 2.** An example of the relationship between Caputo and Riemann-Liouville algebras of orders $n = 2, 3, 6$. Caputo algebras $C\mathcal{F}_2$, $C\mathcal{F}_3$ are subalgebras of $C\mathcal{F}_6$. Riemann-Liouville fractional power series sets satisfy $\mathbb{F}_2, \mathbb{F}_3 \subset \mathbb{F}_6$, but Riemann-Liouville algebras $\mathcal{F}_2$, $\mathcal{F}_3$ are not subalgebras of $\mathcal{F}_6$ due to the fact that the product operation $\ast_n$ depends on the algebra order. $e$ denotes the unit element of all considered Caputo algebras; $e_2, e_3, e_6$ denote unit elements of Riemann-Liouville algebras that correspond to the respective fractional differentiation orders.
+
+# 3 Linear fractional differential equations
+
+Linear fractional differential equations with respect to Caputo differentiation
+operators are analyzed in this section.
+
+## 3.1 Mittag-Leffler functions
+
+Mittag-Leffler functions, first introduced in [23], play a pivotal role in fractional calculus.
+
+**DEFINITION 9.** Let $\alpha, \beta \in \mathbb{C}$ and $\mathrm{Re}(\alpha) > 0$. Then, the Mittag-Leffler function is defined as [28]:
+
+$$E_{\alpha,\beta}(t) := \sum_{j=0}^{+\infty} \frac{t^j}{\Gamma(\alpha j + \beta)}. \quad (3.1)$$
+
Special cases of (3.1) include the exponential and hyperbolic functions.
+---PAGE_BREAK---
+
Note that setting $\alpha = 1/n$, $\beta = 1$, $t = x^{\frac{1}{n}}$, where $n \in \mathbb{N}$, results in:
+
+$$E_{\frac{1}{n}, 1} \left( x^{\frac{1}{n}} \right) = \sum_{j=0}^{+\infty} w_j^{(n)}. \quad (3.2)$$
+
+Thus, the Mittag-Leffler functions can be considered as the analogy of exponential functions for fractional differential operators, since they are the sum of all basis elements. Furthermore, the following relation holds true for any $\rho \in \mathbb{C}$:
+
+$$E_{\frac{1}{n}, 1} \left( \rho x^{\frac{1}{n}} \right) = \sum_{j=0}^{+\infty} \rho^j w_j^{(n)},$$
+
+which leads to:
+
+$${}^C\mathbf{D}^{(1/n)} E_{\frac{1}{n}, 1} \left(\rho x^{\frac{1}{n}}\right) = \rho E_{\frac{1}{n}, 1} \left(\rho x^{\frac{1}{n}}\right). \quad (3.3)$$
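Definition 9 translates directly into code; the sketch below (function name `mittag_leffler` is ours) checks the classical special cases $E_{1,1}(t) = e^t$ and $E_{1/2,1}(t) = e^{t^2}\operatorname{erfc}(-t)$ with truncated sums:

```python
from math import gamma, exp, erfc

def mittag_leffler(alpha, beta, t, terms=100):
    """Truncated series (3.1) for the Mittag-Leffler function E_{alpha,beta}(t)."""
    return sum(t ** j / gamma(alpha * j + beta) for j in range(terms))

# alpha = beta = 1 recovers the exponential function:
print(abs(mittag_leffler(1, 1, 1.3) - exp(1.3)) < 1e-12)  # -> True
# alpha = 1/2, beta = 1 gives exp(t^2) * erfc(-t):
t = 0.7
print(abs(mittag_leffler(0.5, 1, t) - exp(t * t) * erfc(-t)) < 1e-10)  # -> True
```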
+
+Let $s = 0, 1, \dots$. The following relation will be used to express solutions to linear differential equations in the Caputo algebra:
+
+$$\begin{aligned} \frac{d^s}{d\rho^s} E_{\frac{1}{n}, 1} \left( \rho x^{\frac{1}{n}} \right) &= \sum_{j=0}^{+\infty} \left( \frac{d^s}{d\rho^s} \rho^j \right) w_j^{(n)} = \sum_{j=s}^{+\infty} \frac{j!}{(j-s)!} \rho^{j-s} w_j^{(n)} \\ &= s! \sum_{j=0}^{+\infty} \binom{j}{s} \rho^{j-s} w_j^{(n)}. \end{aligned} \quad (3.4)$$
+
+Note that $\binom{j}{s} = 0$ when $j < s$ and $j, s \in \mathbb{N} \cup \{0\}$. Equation (3.4) yields:
+
+$$\sum_{j=0}^{+\infty} \binom{j}{s} \rho^{j-s} w_j^{(n)} = \frac{1}{s!} \frac{d^s}{d\rho^s} E_{\frac{1}{n}, 1} \left( \rho x^{\frac{1}{n}} \right). \quad (3.5)$$
+
+## 3.2 Linear equations with constant coefficients
+
Consider the following differential equation with respect to $y \in C\mathbb{F}_n$:
+
+$$( {}^C\mathbf{D}^{(1/n)} )^m y + b_{m-1} ( {}^C\mathbf{D}^{(1/n)} )^{m-1} y + \cdots + b_1 {}^C\mathbf{D}^{(1/n)} y + b_0 y = f, \quad (3.6)$$
+
where $f = \sum_{j=0}^{+\infty} f_j w_j^{(n)} \in C\mathbb{F}_n$ is a known function; $b_k \in \mathbb{C}$, $k = 0, \dots, m-1$ and $n, m \in \mathbb{N}$.
+
+Note that equation (3.6) can be transformed into a Riemann-Liouville differential equation using the previously described mapping $\tau^{-1}$. Letting $\hat{y} = \tau^{-1}(y)$ and applying the mapping $\tau^{-1}$ on both sides of (3.6) yields:
+
+$$(\mathbf{D}^{(1/n)})^m \hat{y} + b_{m-1} (\mathbf{D}^{(1/n)})^{m-1} \hat{y} + \dots + b_1 \mathbf{D}^{(1/n)} \hat{y} + b_0 \hat{y} = \tau^{-1}(f). \quad (3.7)$$
+
+Similarly, applying $\tau$ to both sides of (3.7) results in (3.6). Thus it is sufficient to consider Caputo differential equations (3.6).
+---PAGE_BREAK---
+
+### 3.2.1 Homogeneous case
+
+Consider the case $f = 0$. Let $y = \sum_{j=0}^{+\infty} c_j w_j^{(n)}$, where $c_j \in \mathbb{C}$ are undetermined coefficients. Note that:
+
+$$ ({}^C\mathbf{D}^{(1/n)})^k y = \sum_{j=0}^{+\infty} c_{j+k} w_j^{(n)}, \quad k = 0, 1, \dots \qquad (3.8) $$
+
+Inserting the expression of $y$ into (3.6) and using (3.8) yields:
+
+$$ \sum_{j=0}^{+\infty} (c_{j+m} + b_{m-1}c_{j+m-1} + \cdots + b_1c_{j+1} + b_0c_j) w_j^{(n)} = 0. $$
+
+Thus, the coefficients of the solution must satisfy:
+
+$$ c_{j+m} + b_{m-1}c_{j+m-1} + \cdots + b_1c_{j+1} + b_0c_j = 0, \quad j = 0, 1, \dots \qquad (3.9) $$
+
+Equation (3.9) defines a linear recurrence relation, which can be solved by considering roots of the following characteristic polynomial:
+
+$$ P(\rho) = \rho^m + b_{m-1}\rho^{m-1} + \cdots + b_1\rho + b_0 = 0. \qquad (3.10) $$
+
Suppose (3.10) has roots $\rho_1, \dots, \rho_l$ with multiplicities $l_1, \dots, l_l$, so that $l_1 + \dots + l_l = m$. Then, the solution to (3.9) takes the following form [13,25]:

$$ c_j = \sum_{k=1}^{l} \sum_{s=0}^{l_k - 1} \gamma_{ks} \binom{j}{s} \rho_k^{j-s}, \quad j = 0, 1, \dots, \qquad (3.11) $$

where $\gamma_{ks} \in \mathbb{C}$ are constants that can be determined from a system of linear equations obtained by selecting $j = 0, \dots, m-1$ in (3.11). Note that $0^0 := 1$ and the product $\binom{j}{s}\rho_k^{j-s}$ is considered to be zero if at least one of its factors is equal to zero.
+
+Inserting (3.11) into the expression of $y$ and using (3.5) yields:
+
$$ \begin{aligned} y &= \sum_{j=0}^{+\infty} \left( \sum_{k=1}^{l} \sum_{s=0}^{l_k - 1} \gamma_{ks} \binom{j}{s} \rho_k^{j-s} \right) w_j^{(n)} = \sum_{k=1}^{l} \sum_{s=0}^{l_k - 1} \gamma_{ks} \left( \sum_{j=s}^{+\infty} \binom{j}{s} \rho_k^{j-s} w_j^{(n)} \right) \\ &= \sum_{k=1}^{l} \sum_{s=0}^{l_k - 1} C_{ks} \left( \frac{d^s}{d\rho^s} E_{\frac{1}{n},1} \left( \rho x^{\frac{1}{n}} \right) \right) \Bigg|_{\rho=\rho_k}, \end{aligned} $$
+
+where $C_{ks} = {1\over s!}\gamma_{ks}$. The results of this subsection can be summarized with the following Lemma.
+
+**Lemma 1.** Let a linear homogeneous fractional differential equation with constant coefficients with respect to the Caputo derivative be given:
+
+$$ ({}^C\mathbf{D}^{(1/n)})^m y + b_{m-1}({}^C\mathbf{D}^{(1/n)})^{m-1} y + \cdots + b_1 {}^C\mathbf{D}^{(1/n)} y + b_0 y = 0. \quad (3.12) $$
+---PAGE_BREAK---
+
+The general solution to (3.12) has the following form:
+
+$$
y = \sum_{k=1}^{l} \sum_{s=0}^{l_k - 1} C_{ks} \left( \frac{d^s}{d\rho^s} E_{\frac{1}{n}, 1} \left( \rho x^{\frac{1}{n}} \right) \right) \Big|_{\rho=\rho_k} , \quad (3.13)
+$$
+
+where $C_{ks} \in \mathbb{C}$ are any constants.
+
+It can be observed that the structure of solution (3.13) to (3.12) mirrors that of ordinary linear differential equations with constant coefficients, where exponential functions are replaced with Mittag-Leffler functions.
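The correspondence between the recurrence (3.9) and the closed form (3.11) is easy to check numerically in the distinct-root case; the sketch below uses the sample polynomial $P(\rho) = \rho^2 - \rho - 2$ with roots $2$ and $-1$ (our choice of example, not from the text):

```python
# Distinct roots rho_1 = 2, rho_2 = -1 of P(rho) = rho^2 - rho - 2 give the
# recurrence c_{j+2} - c_{j+1} - 2 c_j = 0; by (3.11), c_j = g1*2^j + g2*(-1)^j
# solves it for any constants g1, g2 (here g1 = 3, g2 = -5).
g1, g2 = 3.0, -5.0
c = [g1 * 2 ** j + g2 * (-1) ** j for j in range(20)]
print(all(abs(c[j + 2] - c[j + 1] - 2 * c[j]) < 1e-9 for j in range(18)))  # -> True
```

By Lemma 1, the corresponding solution of the fractional equation would then be $y = 3 E_{\frac{1}{n},1}(2x^{\frac{1}{n}}) - 5 E_{\frac{1}{n},1}(-x^{\frac{1}{n}})$.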
+
### 3.2.2 Expression of solution via exponential functions
+
+*Corollary 1.* Let us consider the homogeneous linear fractional differential equation with constant coefficients (3.12). Suppose that the roots $\rho_1, \dots, \rho_m$ of the characteristic polynomial (3.10) are distinct. Then, the solution (3.13) can be written in the following form:
+
+$$
y = \sum_{s=0}^{n-1} \sum_{k=1}^{m} C_k \rho_k^s \left( C\mathbf{I}^{(1/n)} \right)^s \exp(\rho_k^n x). \quad (3.14)
+$$
+
*Proof*. Note that the exponential function $\exp(\lambda x)$, $\lambda \in \mathbb{C}$, can be written using the basis functions $w_{jn}^{(n)}$, $j = 0, 1, \dots$:
+
+$$
+\exp(\lambda x) = \sum_{j=0}^{+\infty} \lambda^j w_{jn}^{(n)}.
+$$
+
+It can be observed that selecting $\lambda = \rho^n$ yields:
+
+$$
\rho^s \left( C\mathbf{I}^{(1/n)} \right)^s \exp(\rho^n x) = \sum_{j=0}^{+\infty} \rho^{jn+s} w_{jn+s}^{(n)}.
+$$
+
+Thus, (3.2) yields that:
+
+$$
E_{\frac{1}{n}, 1} \left( \rho x^{\frac{1}{n}} \right) = \sum_{s=0}^{n-1} \rho^s \left( C\mathbf{I}^{(1/n)} \right)^s \exp \left( \rho^n x \right). \quad (3.15)
+$$
+
+If roots of the characteristic polynomial (3.10) are distinct, inserting (3.15) into
+(3.13) yields (3.14), which finishes the proof. $\square$
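Identity (3.15) can be spot-checked numerically for $n = 2$; in the sketch below (function names `ml` and `frac_int_exp` are ours), the half-order Caputo integral of $\exp(\rho^2 x)$ is evaluated via its series $\sum_j \rho^{2j} w_{2j+1}^{(2)}$:

```python
from math import gamma, exp

def ml(t, terms=80):
    """Truncated E_{1/2,1}(t) = sum_j t^j / Gamma(j/2 + 1)."""
    return sum(t ** j / gamma(j / 2 + 1) for j in range(terms))

def frac_int_exp(rho, x, terms=40):
    """Series for the half-order Caputo integral of exp(rho^2 x):
    sum_j rho^(2j) w_{2j+1}^{(2)}(x)."""
    return sum(rho ** (2 * j) * x ** ((2 * j + 1) / 2) / gamma((2 * j + 1) / 2 + 1)
               for j in range(terms))

rho, x = 0.9, 1.1
lhs = ml(rho * x ** 0.5)
rhs = exp(rho ** 2 * x) + rho * frac_int_exp(rho, x)
print(abs(lhs - rhs) < 1e-10)  # -> True
```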
+
### 3.2.3 Equivalence between fractional differential equations and systems of ODEs
+
+It can be demonstrated that fractional differential equations can be reduced
+to systems of ordinary differential equations. Let us consider a special case of
+(3.12) where *m* = *n*:
+
+$$
+\left( C \mathbf{D}^{(1/n)} \right)^n y + b_{n-1} \left( C \mathbf{D}^{(1/n)} \right)^{n-1} y + \dots + b_1 C \mathbf{D}^{(1/n)} y + b_0 y = 0. \quad (3.16)
+$$
+---PAGE_BREAK---
+
+Note that the solution $y \in C\mathbb{F}_n$ can be written in the following form:
+
+$$ y = \sum_{l=0}^{n-1} \left( C\mathbf{I}^{(1/n)} \right)^l f_l, \quad (3.17) $$
+
+where $f_l \in C\mathbb{F}_1$ are series with integer powers of $x$. Inserting (3.17) into (3.16) yields:
+
+$$ \begin{aligned} & \sum_{j=1}^{n-1} \sum_{l=j}^{n-1} b_{l-j} \left(C\mathbf{I}^{(1/n)}\right)^j f_l + \sum_{j=1}^{n-1} \sum_{k=n-j}^{n} b_k \left(C\mathbf{I}^{(1/n)}\right)^j \left(\frac{d}{dx} f_{j+k-n}\right) \\ & + \frac{d}{dx} f_0 + \sum_{j=0}^{n-1} b_j f_j = 0. \end{aligned} \quad (3.18) $$
+
+Rearranging and simplifying (3.18) yields the following system of ordinary differential equations:
+
+$$ \sum_{j=0}^{n-l-1} b_j f_{j+l} + \sum_{j=n-l}^{n} b_j \left( \frac{d}{dx} f_{j+l-n} \right) = 0; \quad l = 0, 1, \dots, n-1. $$
+
+Note that this simple example is not the limit of this technique – using (3.17) any fractional differential equation can be converted into a system of ODEs. The solution of this system is then used to obtain the general solution to (3.16).
+
+### 3.2.4 Non-homogeneous case
+
+Let $f = \sum_{j=0}^{+\infty} f_j w_j^{(n)} \neq 0$ in (3.6). Expanding on the results of the previous subsection, the following statement can be formulated.
+
+*Remark 1.* The general solution to (3.6) has the following structure:
+
+$$ y = \bar{y} + y^*, \quad (3.19) $$
+
+where
+
$$ \bar{y} = \sum_{k=1}^{l} \sum_{s=0}^{l_k - 1} C_{ks} \left. \frac{d^s}{d\rho^s} E_{\frac{1}{n}, 1} \left( \rho x^{\frac{1}{n}} \right) \right|_{\rho=\rho_k} $$
+
+is the solution to the associated homogeneous equation (obtained by setting $f=0$ in (3.6)) and
+
+$$ y^* = \sum_{j=0}^{+\infty} q_j w_j^{(n)}, \quad q_j \in \mathbb{C} $$
+
+is a particular solution to the nonhomogeneous linear equation (3.6).
+
*Proof.* Let $y = \sum_{j=0}^{+\infty} c_j w_j^{(n)} \in C\mathbb{F}_n$ be the unknown solution to (3.6). Inserting $y$ into (3.6) and simplifying results in:
+
+$$ \sum_{j=0}^{+\infty} \left( c_{j+m} + b_{m-1}c_{j+m-1} + \cdots + b_1c_{j+1} + b_0c_j - f_j \right) w_j^{(n)} = 0. $$
+---PAGE_BREAK---
+
+Thus $y$ is a solution to (3.6) if and only if the following recurrence relation holds true:
+
+$$c_{j+m} + b_{m-1}c_{j+m-1} + \cdots + b_1c_{j+1} + b_0c_j = f_j, \quad j = 0, 1, \ldots \quad (3.20)$$
+
+Relation (3.20) defines a non-homogeneous linear recurrence relation with respect to sequence $c_j, j = 0, 1, \dots$. It is well-known (see, for example, [13]) that the general solution to (3.20) reads:
+
$$c_j = \sum_{k=1}^{l} \sum_{s=0}^{l_k - 1} \gamma_{ks} \binom{j}{s} \rho_k^{j-s} + q_j, \quad j = 0, 1, \ldots \quad (3.21)$$
+
The term $q_j$ is a particular solution to the recurrence (3.20) and the remaining terms constitute the general solution to the associated homogeneous recurrence relation (3.9).
+
+Inserting (3.21) into $y$ yields (3.19). $\square$
+
+**Example 1.** Consider the following non-homogeneous linear differential equation with constant coefficients:
+
+$$\begin{aligned} & \left( {}^C\mathbf{D}^{(1/n)} \right)^4 y - \left( {}^C\mathbf{D}^{(1/n)} \right)^3 y - 5 \left( {}^C\mathbf{D}^{(1/n)} \right)^2 y - {}^C\mathbf{D}^{(1/n)} y - 6y \\ &= a_0 + a_1 w_1^{(n)} + a_2 w_2^{(n)}, \end{aligned} \quad (3.22)$$
+
+where $a_0, a_1, a_2 \in \mathbb{C}$ are any constants. By the results of Remark 1, the coefficients of the solution series obey the following linear recurrence:
+
+$$c_{j+4} - c_{j+3} - 5c_{j+2} - c_{j+1} - 6c_j = a_j, \quad j = 0, 1, \ldots, \quad (3.23)$$
+
where $a_j = 0$ for $j = 3, 4, \ldots$. The solution to the homogeneous part of recurrence (3.23) can be constructed by considering its characteristic polynomial (which is also the characteristic polynomial of the associated homogeneous equation of (3.22)):
+
+$$P(\rho) = \rho^4 - \rho^3 - 5\rho^2 - \rho - 6 = 0. \quad (3.24)$$
+
+The roots of (3.24) are $\rho_{1,2} = \pm i$, $\rho_3 = -2$, $\rho_4 = 3$. By Remark 1, the solution to the homogeneous part of (3.22) reads:
+
+$$\bar{y}=C_1 E_{\frac{1}{n},1} \left(ix^{\frac{1}{n}}\right)+C_2 E_{\frac{1}{n},1} \left(-ix^{\frac{1}{n}}\right)+C_3 E_{\frac{1}{n},1} \left(-2x^{\frac{1}{n}}\right)+C_4 E_{\frac{1}{n},1} \left(3x^{\frac{1}{n}}\right). \quad (3.25)$$
+
Next, a particular solution $y^*$ to (3.22) must be constructed. This is equivalent to finding a particular solution $q_j$, $j = 0, 1, \ldots$ to recurrence (3.23). Using the method of undetermined coefficients, an initial guess for $q_j$ reads:
+
+$$q_j = \nu_0 0^j + \nu_1 \binom{j}{1} 0^{j-1} + \nu_2 \binom{j}{2} 0^{j-2}; \quad j = 0, 1, \ldots, \quad (3.26)$$
+
where $\nu_0, \nu_1, \nu_2 \in \mathbb{C}$ are undetermined constants. It is clear that $q_j = a_j = 0$ for $j = 3, 4, \ldots$. Inserting (3.26) into (3.23) for $j = 0, 1, 2$ results in:
+
+$$\begin{aligned} & -5\nu_2 - \nu_1 - 6\nu_0 = a_0, \\ & -\nu_2 - 6\nu_1 = a_1, \\ & -6\nu_2 = a_2. \end{aligned} \quad (3.27)$$
+---PAGE_BREAK---
+
Solving (3.27) and inserting into (3.26) yields the particular solution to (3.22):

$$y^* = -\frac{a_0}{6} + \frac{a_1}{36} + \frac{29a_2}{216} + \frac{1}{36}(a_2 - 6a_1)w_1^{(n)} - \frac{a_2}{6}w_2^{(n)}. \quad (3.28)$$
+
+Thus, by Remark 1, the general solution to (3.22) is given by combining (3.25) and (3.28):
+
$$y = C_1 E_{\frac{1}{n},1} \left(ix^{\frac{1}{n}}\right) + C_2 E_{\frac{1}{n},1} \left(-ix^{\frac{1}{n}}\right) + C_3 E_{\frac{1}{n},1} \left(-2x^{\frac{1}{n}}\right) + C_4 E_{\frac{1}{n},1} \left(3x^{\frac{1}{n}}\right) \\ - \frac{a_0}{6} + \frac{a_1}{36} + \frac{29a_2}{216} + \frac{1}{36}(a_2 - 6a_1) w_1^{(n)} - \frac{a_2}{6} w_2^{(n)}.$$
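The particular solution (3.28) can be verified by substituting its coefficients back into recurrence (3.23); a short check with arbitrary sample values of $a_0, a_1, a_2$ (the values below are ours):

```python
# Coefficients q_0, q_1, q_2 of the particular solution (and q_j = 0 for
# j >= 3) must satisfy c_{j+4} - c_{j+3} - 5 c_{j+2} - c_{j+1} - 6 c_j = a_j.
a0, a1, a2 = 2.0, -3.0, 12.0   # arbitrary sample values
q = [-a0 / 6 + a1 / 36 + 29 * a2 / 216,
     (a2 - 6 * a1) / 36,
     -a2 / 6] + [0.0] * 7
a = [a0, a1, a2] + [0.0] * 7
print(all(abs(q[j + 4] - q[j + 3] - 5 * q[j + 2] - q[j + 1] - 6 * q[j] - a[j]) < 1e-12
          for j in range(6)))  # -> True
```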
+
+### 3.2.5 Cauchy initial value problems
+
*Corollary 2.* Consider the following initial value problem for (3.6):
+
+$$ (C\mathbf{D}^{(1/n)})^k y|_{x=0} = \sigma_k, \quad \sigma_k \in \mathbb{C}, \quad k = 0, 1, \dots, m-1. \qquad (3.29) $$
+
+By Remark 1, the solution to (3.6) depends on $m$ undetermined constants $C_{ks}$. Evaluating the solution (3.19) at $x=0$ and using initial conditions (3.29) yields a system of linear equations that can be used to compute the values of $C_{ks}$. Since $\rho_k \neq \rho_l, k \neq l$, the linear system is consistent and has a unique solution. Thus, the initial value problem (3.6), (3.29) has a unique solution.
+
**Example 2.** Consider the initial value problem on (3.22) with $a_0 = 6$, $a_1 = -7$, $a_2 = -6$ and the following initial conditions:
+
+$$ (C\mathbf{D}^{(1/n)})^k y|_{x=0} = \sigma_k, \quad k = 0, 1, 2, 3. \qquad (3.30) $$
+
The general solution to (3.22), as constructed in Example 1, reads:
+
$$ y = C_1 E_{\frac{1}{n},1} \left(ix^{\frac{1}{n}}\right) + C_2 E_{\frac{1}{n},1} \left(-ix^{\frac{1}{n}}\right) + C_3 E_{\frac{1}{n},1} \left(-2x^{\frac{1}{n}}\right) \\ + C_4 E_{\frac{1}{n},1} \left(3x^{\frac{1}{n}}\right) - 2 + w_1^{(n)} + w_2^{(n)}. \qquad (3.31) $$
+
+Note that (3.3) yields that for any $\rho \in \mathbb{C}$:
+
$$ (C\mathbf{D}^{(1/n)})^k E_{\frac{1}{n},1} \left(\rho x^{\frac{1}{n}}\right) \Big|_{x=0} = \rho^k, \quad k = 0, 1, \dots $$
+
+Thus, applying initial conditions (3.30) to the general solution (3.31) yields:
+
$$ \begin{aligned} & C_1 + C_2 + C_3 + C_4 - 2 = \sigma_0, \\ & iC_1 - iC_2 - 2C_3 + 3C_4 + 1 = \sigma_1, \\ & -C_1 - C_2 + 4C_3 + 9C_4 + 1 = \sigma_2, \\ & -iC_1 + iC_2 - 8C_3 + 27C_4 = \sigma_3. \end{aligned} \qquad (3.32) $$
---PAGE_BREAK---

Solving system (3.32) yields the following constants:

$$
\begin{align*}
C_1 &= \left(\frac{21}{50} - \frac{3i}{50}\right)\sigma_0 + \left(\frac{1}{100} - \frac{43i}{100}\right)\sigma_1 - \left(\frac{2}{25} + \frac{3i}{50}\right)\sigma_2 + \left(\frac{1}{100} + \frac{7i}{100}\right)\sigma_3 + \frac{91}{100} + \frac{37i}{100}, \\
C_2 &= \left(\frac{21}{50} + \frac{3i}{50}\right)\sigma_0 + \left(\frac{1}{100} + \frac{43i}{100}\right)\sigma_1 - \left(\frac{2}{25} - \frac{3i}{50}\right)\sigma_2 + \left(\frac{1}{100} - \frac{7i}{100}\right)\sigma_3 + \frac{91}{100} - \frac{37i}{100}, \\
C_3 &= \frac{1}{25}\left(3\sigma_0 - \sigma_1 + 3\sigma_2 - \sigma_3 + 4\right), \\
C_4 &= \frac{1}{50}\left(2\sigma_0 + \sigma_1 + 2\sigma_2 + \sigma_3 + 1\right).
\end{align*}
$$
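As a cross-check, the linear system for $C_1, \dots, C_4$ can be assembled directly from the evaluation rule $(C\mathbf{D}^{(1/n)})^k E_{\frac{1}{n},1}(\rho x^{\frac{1}{n}})\big|_{x=0} = \rho^k$ with roots $i, -i, -2, 3$ and the particular solution $-2 + w_1^{(n)} + w_2^{(n)}$, then solved numerically (a sketch with a plain Gauss-Jordan solver; the sample $\sigma_k$ values are ours):

```python
def solve(A, b):
    """Gauss-Jordan elimination with partial pivoting (complex arithmetic)."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        M[col] = [v / M[col][col] for v in M[col]]
        for r in range(n):
            if r != col:
                M[r] = [vr - M[r][col] * vc for vr, vc in zip(M[r], M[col])]
    return [row[-1] for row in M]

roots = [1j, -1j, -2, 3]
sigma = [0.4, -1.0, 2.5, 0.3]        # arbitrary sample initial values
# k-th Caputo derivative of the particular part -2 + w_1 + w_2 at x = 0:
part = [-2.0, 1.0, 1.0, 0.0]
A = [[rho ** k for rho in roots] for k in range(4)]
b = [sigma[k] - part[k] for k in range(4)]
C = solve(A, b)
residual = max(abs(sum(A[k][i] * C[i] for i in range(4)) - b[k]) for k in range(4))
print(residual < 1e-12)  # -> True
```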
+
## 3.3 Viability of the presented approach for linear FDEs with variable coefficients
+
The fractional power series approach for the construction of closed-form analytical solutions to fractional differential equations has been illustrated using linear FDEs with constant coefficients. However, the presented technique is not limited to such equations – it can also be used to construct solutions to linear FDEs with variable coefficients or nonlinear problems. It has already been shown for the special case of fractional derivative order $1/2$ in [26] that an approach based on fractional power series can be used to construct solutions to nonlinear fractional differential equations.
+
+Note that it is not always possible to construct closed form solutions to such equations in terms of Mittag-Leffler or other standard functions, but the computation of fractional power series coefficients can always be performed. As an example, let us consider the following linear fractional differential equation with variable coefficients:
+
+$$
+\begin{align}
{}^{C}\mathbf{D}^{(1/n)}y - \sqrt[n]{x}\, y = v, \quad n \in \mathbb{N}, \ v \in \mathbb{R}, \tag{3.33} \\
+y(0) = A; \quad A \in \mathbb{R}. \tag{3.34}
+\end{align}
+$$
+
+Letting $y = \sum_{j=0}^{+\infty} c_j w_j^{(n)}$ and using previously described techniques yields:
+
+$$
+\begin{equation}
+{}^c\mathbf{D}^{(1/n)} y = \sum_{j=0}^{+\infty} c_{j+1} w_j^{(n)}, \tag{3.35}
+\end{equation}
+$$
+
+$$
+\sqrt[n]{x} y = \Gamma\left(\frac{1}{n} + 1\right) w_1^{(n)} \sum_{j=0}^{+\infty} c_j w_j^{(n)} = \sum_{j=1}^{+\infty} c_{j-1} \frac{\Gamma(j/n + 1)}{\Gamma((j-1)/n + 1)} w_j^{(n)}. \quad (3.36)
+$$
+
+Inserting (3.35) and (3.36) into (3.33) results in:
+
+$$
+c_1 - v + \sum_{j=1}^{+\infty} \left( c_{j+1} - c_{j-1} \frac{\Gamma(j/n+1)}{\Gamma((j-1)/n+1)} \right) w_j^{(n)} = 0.
+$$
+
+The above equation together with initial condition (3.34) yields the following relations for coefficients $c_j$:
+
+$$
c_0 = A, \quad c_1 = v; \qquad c_{j+1} = c_{j-1} \frac{\Gamma(j/n+1)}{\Gamma((j-1)/n+1)}, \quad j=1,2,\dots \quad (3.37)
+$$
+---PAGE_BREAK---
+
+The solution to recurrence relation (3.37) reads:
+
+$$c_{2j} = \frac{\prod_{k=0}^{j-1} \Gamma\left(\frac{2j-2k-1}{n} + 1\right)}{\prod_{l=1}^{j} \Gamma\left(\frac{2j-2l}{n} + 1\right)} A; \quad j = 1, 2, \dots, \qquad (3.38)$$
+
$$c_{2j+1} = \frac{\prod_{k=0}^{j-1} \Gamma\left(\frac{2j-2k}{n} + 1\right)}{\prod_{l=1}^{j} \Gamma\left(\frac{2j-2l+1}{n} + 1\right)} v; \quad j = 1, 2, \dots \qquad (3.39)$$
+
It is clear that, using coefficients (3.38), (3.39), the solution to (3.33) cannot be expressed in closed form in terms of standard functions. However, the presented approach yields a series solution that can be used to approximate the exact solution with arbitrary accuracy.
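Coefficients generated by recurrence (3.37) can be used to evaluate the truncated series solution; in the sketch below (function names `coeffs` and `y` are ours), the case $n = 1$, where (3.33) reduces to the ordinary equation $y' - xy = v$, is verified with a finite-difference residual:

```python
from math import gamma

def coeffs(A, v, n, terms):
    """Generate c_j from recurrence (3.37): c_0 = A, c_1 = v,
    c_{j+1} = c_{j-1} * Gamma(j/n + 1) / Gamma((j-1)/n + 1)."""
    c = [A, v]
    for j in range(1, terms - 1):
        c.append(c[j - 1] * gamma(j / n + 1) / gamma((j - 1) / n + 1))
    return c

def y(A, v, n, x, terms=60):
    """Truncated series solution of (3.33)-(3.34)."""
    return sum(cj * x ** (j / n) / gamma(j / n + 1)
               for j, cj in enumerate(coeffs(A, v, n, terms)))

# For n = 1 the equation reduces to y' - x y = v; check the ODE residual
# with a central difference:
A_, v_, x, h = 1.0, 0.5, 0.4, 1e-5
dy = (y(A_, v_, 1, x + h) - y(A_, v_, 1, x - h)) / (2 * h)
print(abs(dy - x * y(A_, v_, 1, x) - v_) < 1e-6)  # -> True
```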
+
+# 4 Computational experiments
+
+## 4.1 Fractional damped harmonic oscillator
+
+Consider the paradigmatic model of the damped harmonic oscillator:
+
+$$\frac{d^2 z}{dx^2} - 2\lambda \frac{dz}{dx} + (\lambda^2 + \mu^2) z = 0, \qquad (4.1)$$
+
+$$z(0) = A, \quad \left.\frac{dz}{dx}\right|_{x=0} = B, \qquad (4.2)$$
+
+where $\lambda, \mu, A, B \in \mathbb{R}$ and $\lambda \pm i\mu$ are the eigenvalues of (4.1). The general solution to (4.1) reads:
+
+$$z = A \exp(\lambda x) \cos(\mu x) - \frac{1}{\mu} (\lambda A - B) \exp(\lambda x) \sin(\mu x). \qquad (4.3)$$
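Solution (4.3) can be verified numerically with central finite differences (a minimal sketch; the parameter values are our choice):

```python
from math import exp, cos, sin

def z(x, lam=-0.1, mu=1.0, A=1.0, B=0.0):
    """Closed-form solution (4.3) of the damped harmonic oscillator (4.1)."""
    return (A * exp(lam * x) * cos(mu * x)
            - (lam * A - B) / mu * exp(lam * x) * sin(mu * x))

# Residual of (4.1) via central finite differences:
lam, mu, x, h = -0.1, 1.0, 2.0, 1e-4
d1 = (z(x + h) - z(x - h)) / (2 * h)
d2 = (z(x + h) - 2 * z(x) + z(x - h)) / h ** 2
print(abs(d2 - 2 * lam * d1 + (lam ** 2 + mu ** 2) * z(x)) < 1e-6)  # -> True
```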
+
+Now consider the fractional version of (4.1), where the second derivative is replaced with a fractional Caputo derivative of order $m/n, m > n$:
+
+$$\left( {}^C D^{(1/n)} \right)^m y - 2\lambda \left( {}^C D^{(1/n)} \right)^n y + (\lambda^2 + \mu^2) y = 0, \qquad (4.4)$$
+
where $y \in C\mathbb{F}_n$. Note that the first-order differentiation operator $\frac{d}{dx}$ is undefined in the algebra $C\mathcal{F}_n$, thus it is replaced by the operator $\left({}^C D^{(1/n)}\right)^n$. The effects of applying $\frac{d}{dx}$ or $\left({}^C D^{(1/n)}\right)^n$ to $z \in C\mathbb{F}_1$ are identical.
+
+* The initial conditions (4.2) also define the initial value problem for the fractional differential equation (4.4). Thus, initial conditions (4.2) are rewritten as:
+
+$$y(0) = A, \quad \left({}^C D^{(1/n)}\right)^n y \bigg|_{x=0} = B, \quad \left({}^C D^{(1/n)}\right)^k y \bigg|_{x=0} = 0,$$
+
where $k = 1, \dots, n-1, n+1, \dots, m-1$.
+---PAGE_BREAK---
+
+* The values of parameters $\lambda$ and $\mu$ define the equilibrium points of both (4.1) and (4.4). If $\lambda > 0$, there exists an unstable node, while the cases $-1 < \lambda < 0$ and $\lambda \le -1$ result in a stable spiral and node respectively.
+
+By Lemma 1, the solution to (4.4) reads:
+
$$y = \sum_{k=1}^{m} C_k E_{\frac{1}{n}, 1} \left( \rho_k x^{\frac{1}{n}} \right),$$
+
+where $\rho_k, k = 1, \dots, m$ are the roots of the characteristic polynomial:
+
+$$\rho^m - 2\lambda\rho^n + \lambda^2 + \mu^2 = 0.$$
+
Note that if $m = 2n$ is selected, the constants $C_k$ can be chosen in such a way as to obtain solution (4.3). In the case $\lambda = -0.1, \mu = 1$, if $m$ is smaller than $2n$, it can be observed that the resulting fractional damped harmonic oscillator exhibits smaller amplitudes and the damping comes into effect more quickly (see Figure 3 (a)). The opposite effect can be observed when the oscillator is overdamped (for values of $\lambda = -1, \mu = 1$): the solutions decay more rapidly for values of $\frac{m}{n}$ closer to 2 (see Figure 3 (b)).
+
+Note that equation (4.4) can be transformed into a system of ODEs using the procedure described in subsection 3.2.3. This is discussed in detail in Appendix A.
+
+**Figure 3.** The solutions to damped harmonic oscillator (solid black line) and fractional damped harmonic oscillator (dashed, dotted and dash-dotted lines represent $n = 20, 10, 5$ respectively) for $\lambda = -0.1, \mu = 1, A = 1, B = 0$ in (a) and $\lambda = -1, \mu = 1, A = 1, B = 0$ in (b). The derivative order $m$ is set to $2n - 1$ in all cases. It can be seen that as $\frac{m}{n}$ becomes closer to 2, the solution of the fractional damped harmonic oscillator tends to the non-fractional solution (4.3). Also, for values of $\frac{m}{n}$ significantly smaller than 2, the damping effect is more powerful in the case $\lambda = -0.1$ (see (a)). However, if $\lambda = -1$, the opposite is true – the solutions decay more slowly when $\frac{m}{n}$ differs most from 2 (see (b)).
+
+# 5 Conclusions
+
+An operator-based approach for the construction of closed-form solutions to fractional differential equations is presented in this paper. The considered technique is a generalization of the results presented in [26] for rational-valued
+---PAGE_BREAK---
+
+fractional derivative order. Caputo and Riemann-Liouville fractional differentiation and integration operators are defined for respective sets of fractional power series.
+
+In order to demonstrate the viability of the proposed technique, explicit expressions of solutions to linear fractional differential equations are obtained in terms of Mittag-Leffler or fractionally-integrated exponential functions. It is also shown that the components of solutions to linear fractional differential equations satisfy associated systems of linear ordinary differential equations.
+
+Even though the operator-based approach is only illustrated using linear fractional differential equations, its applicability to nonlinear problems has already been considered for fractional derivatives of order $\alpha = \frac{1}{2}$ in [26]. The further development of this technique for nonlinear differential equations with rational-valued derivative order is a definite objective of future research.
+
+## Acknowledgements
+
+This research was funded by a grant (No. MIP078/2015) from the Research Council of Lithuania.
+
+## References
+
+[1] O.P. Agrawal. Analytical schemes for a new class of fractional differential equations. *J. Phys. A.*, **40**(21):5469-5477, 2007. https://doi.org/10.1088/1751-8113/40/21/001.
+
+[2] M. Al-Refai, M. Ali Hajji and M.I. Syam. An efficient series solution for fractional differential equations. *Abstr. Appl. Anal.*, **ID891837**, 2014. https://doi.org/10.1155/2014/891837.
+
+[3] M.K. Al-Srihin and M. Al-Refai. An efficient series solution for nonlinear multiterm fractional differential equations. *Discrete Dyn. Nat. Soc.*, **ID5234151**, 2017. https://doi.org/10.1155/2017/5234151.
+
+[4] O.A. Arqub, A. El-Ajou and S. Momani. Constructing and predicting solitary pattern solutions for nonlinear time-fractional dispersive partial differential equations. *J. Comp. Phys.*, **293**:385-399, 2015. https://doi.org/10.1016/j.jcp.2014.09.034.
+
+[5] O.A. Arqub, A. El-Ajou, Z. Al Zhou and S. Momani. Multiple solutions of nonlinear boundary value problems of fractional order: a new analytic iterative technique. *Entropy*, **16**(1):471-493, 2014. https://doi.org/10.3390/e16010471.
+
+[6] D. Baleanu and J.J. Trujillo. Exact solutions of a class of fractional Hamiltonian equations involving Caputo derivatives. *Phys. Scr.*, **80**(5):055101, 2009. https://doi.org/10.1088/0031-8949/80/05/055101.
+
+[7] E. Bazhlekova and I. Dimovski. Exact solution of two-term time-fractional Thornley's problem by operational method. *Integr. Transf. Spec. F.*, **25**(1):61-74, 2014. https://doi.org/10.1080/10652469.2013.815184.
+
+[8] S. Bouzidi, H. Bechir and F. Brémand. Phenomenological isotropic visco-hyperelasticity: a differential model based on fractional derivatives. *J. Engrg. Math.*, **99**(1):1-28, 2016. https://doi.org/10.1007/s10665-015-9818-6.
+---PAGE_BREAK---
+
+[9] S. Das. *Functional Fractional Calculus*. Springer, Berlin-Heidelberg, 2011.
+https://doi.org/10.1007/978-3-642-20545-3.
+
+[10] M. Edelman. Fractional standard map: Riemann-Liouville vs. Caputo. *Commun. Nonlin. Sci. Numer. Simulat.*, **16**(12):4573–4580, 2011. https://doi.org/10.1016/j.cnsns.2011.02.007.
+
+[11] A. El-Ajou, O.A. Arqub, Z. Al Zhou and S. Momani. New results on fractional power series: Theories and applications. *Entropy*, **15**(12):5305–5323, 2013. https://doi.org/10.3390/e15125305.
+
+[12] A. El-Ajou, O.A. Arqub, S. Momani, D. Baleanu and A. Alsaedi. A novel expansion iterative method for solving linear partial differential equations of fractional order. *Appl. Math. Comp.*, **257**:119–133, 2015.
+https://doi.org/10.1016/j.amc.2014.12.121.
+
+[13] G. Everest, A. van der Poorten, I. Shparlinski and T. Ward. *Recurrence sequences*. American Mathematical Society, Providence, RI, 2003.
+https://doi.org/10.1090/surv/104.
+
+[14] H. Fallahgoul, S. Focardi and F. Fabozzi. *Fractional Calculus and Fractional Processes with Applications to Financial Economics: Theory and Application*. Academic Press, 2016.
+
+[15] E.F. Doungmo Goufo and J.J. Nieto. Attractors for fractional differential problems of transition to turbulent flows. *J. Comput. Appl. Math.*, 2017.
+
+[16] G.H. Hardy (Ed.). *Divergent Series*. Clarendon Press, Oxford, 1949.
+
+[17] R. Hilfer. *Applications of Fractional Calculus in Physics*. World Scientific, Singapore, 2000. https://doi.org/10.1142/3779.
+
+[18] R. Kopka. Estimation of supercapacitor energy storage based on fractional differential equations. *Nanoscale Res. Lett.*, **12**(1):636, 2017. https://doi.org/10.1186/s11671-017-2396-y.
+
+[19] S.-D. Lin, C.-H. Lu and S.-M. Su. Particular solutions of a certain class of associated Cauchy-Euler fractional partial differential equations via fractional calculus. *Bound. Value Probl.*, **2013**(1):126, 2013. https://doi.org/10.1186/1687-2770-2013-126.
+
+[20] B.N. Lundstrom, M.H. Higgs, W.J. Spain and A.L. Fairhall. Fractional differentiation by neocortical pyramidal neurons. *Nat. Neurosci.*, **11**(11):1335–1342, 2008. https://doi.org/10.1038/nn.2212.
+
+[21] R.L. Magin. *Fractional Calculus in Bioengineering*. Begell House Redding, 2006.
+
+[22] K.S. Miller and B. Ross (Eds.). *An Introduction to the Fractional Calculus and Fractional Differential Equations*. Wiley, New York, 1993.
+
+[23] G.M. Mittag-Leffler. Sur la nouvelle fonction $E_\alpha(z)$. *C. R. Acad. Sci.*, **137**:554–558, 1903.
+
+[24] C.A. Monje, Y.-Q. Chen, B.M. Vinagre, D. Xue and V. Feliu. *Fractional-order Systems and Controls*. Springer, London, 2010. https://doi.org/10.1007/978-1-84996-335-0.
+
+[25] Z. Navickas and L. Bikulsiene. Expressions of solutions of ordinary differential equations by standard functions. *Math. Model. Anal.*, **11**:399–412, 2006.
+
+[26] Z. Navickas, T. Telksnys, R. Marcinkevicius and M. Ragulskis. Operator-based approach for the construction of analytical soliton solutions to nonlinear fractional-order differential equations. *Chaos, Solitons & Fractals*, **104**:625–634, 2017. https://doi.org/10.1016/j.chaos.2017.09.026.
+---PAGE_BREAK---
+
+[27] K.B. Oldham and J. Spanier (Eds.). *The Fractional Calculus: Theory and Applications of Differentiation and Integration to Arbitrary Order*. Academic Press, Cambridge, 1974.
+
+[28] F.W.J. Olver, D.M. Lozier, R.F. Boisvert and C.W. Clark. *NIST Handbook of Mathematical Functions*. Cambridge University Press, Cambridge, 2010.
+
+[29] S.D. Purohit and S.L. Kalla. On fractional partial differential equations related to quantum mechanics. *J. Phys. A*, **44**(4):045202, 2010. https://doi.org/10.1088/1751-8113/44/4/045202.
+
+[30] M. Rivero, L. Rodriguez-Germa and J.J. Trujillo. Linear fractional differential equations with variable coefficients. *Appl. Math. Lett.*, **21**(5):892–897, 2008. https://doi.org/10.1016/j.aml.2007.09.010.
+
+[31] T. Sandev, R. Metzler and Ž. Tomovski. Fractional diffusion equation with a generalized Riemann-Liouville time fractional derivative. *J. Phys. A*, **44**(25):255203, 2011. https://doi.org/10.1088/1751-8113/44/25/255203.
+
+[32] S. Tang and Y. Ying. Homogenizing atomic dynamics by fractional differential equations. *J. Comput. Phys.*, **346**:539–551, 2017. https://doi.org/10.1016/j.jcp.2017.06.038.
+
+[33] A.M. Tawfik, H. Fichtner, R. Schlickeiser and A. Elhanbaly. Analytical study of fractional equations describing anomalous diffusion of energetic particles. *J. Phys.: Conf. Ser.*, **869**:012050, 2017.
+
+[34] V.V. Uchaikin and R.T. Sibatov. On fractional differential models for cosmic ray diffusion. *Gravitation Cosmol.*, **18**(2):122–126, 2012. https://doi.org/10.1134/S0202289312020132.
+
+[35] B.J. West. Exact solution to fractional logistic equation. *Physica A*, **429**:103–108, 2015. https://doi.org/10.1016/j.physa.2015.02.073.
+
+[36] Y. Yu, P. Perdikaris and G.E. Karniadakis. Fractional modeling of viscoelasticity in 3D cerebral arteries and aneurysms. *J. Comput. Phys.*, **323**:219–242, 2016. https://doi.org/10.1016/j.jcp.2016.06.038.
+
+## Appendix A. Transformation of the fractional damped harmonic oscillator equation into a system of ODEs
+
+As shown in subsection 3.2.3, the fractional damped harmonic oscillator equation can be transformed into a system of ODEs. Letting $m = 2n - 1$ yields the following equation and initial conditions:
+
+$$ ({}^C\mathbf{D}^{(1/n)})^{2n-1} y - 2\lambda ({}^C\mathbf{D}^{(1/n)})^n y + (\lambda^2 + \mu^2) y = 0, \quad (0.1) $$
+
+$$ y(0) = A; \quad ({}^C\mathbf{D}^{(1/n)})^n y \bigg|_{x=0} = B, \quad ({}^C\mathbf{D}^{(1/n)})^k y \bigg|_{x=0} = 0, \quad (0.2) $$
+
+where $k = 1, \dots, n-1, n+1, \dots, 2n-2$. The solution to (0.1), (0.2) can be written in the following form:
+
+$$ y = \sum_{k=0}^{n-1} ({}^C\mathbf{I}^{(1/n)})^k f_k; \quad f_k = \sum_{j=0}^{+\infty} a_j^{(k)} \frac{x^j}{j!} \in {}^C\mathbb{F}_1. \quad (0.3) $$
+---PAGE_BREAK---
+
+Inserting (0.3) into (0.1) results in:
+
+$$
+\begin{equation}
+\begin{aligned}
+& \sum_{k=0}^{n-1} \left({}^C \mathbf{D}^{(1/n)}\right)^{2n-k-1} f_k - 2\lambda \sum_{k=0}^{n-1} \left({}^C \mathbf{D}^{(1/n)}\right)^{n-k} f_k \\
+& \qquad + (\lambda^2 + \mu^2) \sum_{k=0}^{n-1} \left({}^C \mathbf{I}^{(1/n)}\right)^k f_k = 0.
+\end{aligned}
+\tag{0.4}
+\end{equation}
+$$
+
+Note that
+
+$$
+\left({}^C \mathbf{D}^{(1/n)}\right)^l \left({}^C \mathbf{I}^{(1/n)}\right)^s f_k = \left({}^C \mathbf{D}^{(1/n)}\right)^{l-s} f_k; \quad l \ge s, \quad (0.5)
+$$
+
+and
+
+$$
+\left( {}^C \mathbf{D}^{(1/n)} \right)^{ln} f_k = \frac{d^l f_k}{dx^l}; \quad l = 0, 1, \dots \qquad (0.6)
+$$
+
+Applying the operator $\left({}^C \mathbf{D}^{(1/n)}\right)^{n-1}$ to (0.4) and using (0.5), (0.6) on (0.3) yields:
+
+$$
+\sum_{k=1}^{n-1} \left({}^C \mathbf{D}^{(1/n)}\right)^{n-k-1} \left( \frac{d^2 f_{k-1}}{dx^2} - 2\lambda \frac{df_k}{dx} + (\lambda^2 + \mu^2) f_k \right) \\
++ \left({}^C \mathbf{D}^{(1/n)}\right)^{n-1} \left( \frac{df_{n-1}}{dx} - 2\lambda \frac{df_0}{dx} + (\lambda^2 + \mu^2) f_0 \right) = 0.
+$$
+
+Thus, the components $f_0, \dots, f_{n-1}$ of solution (0.3) satisfy the following system
+of ODEs:
+
+$$
+\frac{d^2 f_{k-1}}{dx^2} - 2\lambda \frac{df_k}{dx} + (\lambda^2 + \mu^2) f_k = 0; \quad k = 1, \dots, n-1 \quad (0.7)
+$$
+
+$$
+\frac{d f_{n-1}}{dx} - 2\lambda \frac{d f_0}{dx} + (\lambda^2 + \mu^2) f_0 = 0. \quad (0.8)
+$$
+
+The initial conditions (0.2) are transformed as follows:
+
+$$
+f_0(0) = A; \quad f_k(0) = 0; \quad k = 1, \dots, n-1,
+\\
+\left. \frac{df_0}{dx} \right|_{x=0} = B, \quad \left. \frac{df_k}{dx} \right|_{x=0} = 0, \quad k = 1, \dots, n-1.
+$$
+
+The system (0.7), (0.8) can be solved to obtain the solution to (0.1).
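As an illustration, the first-order form of system (0.7), (0.8) can be integrated numerically. The sketch below (assuming SciPy; $n = 3$ and the Figure 3 (a) parameters $\lambda = -0.1$, $\mu = 1$, $A = 1$, $B = 0$ are illustrative choices) treats $f_{n-1}$ as a first-order variable, since (0.8) gives its derivative algebraically:

```python
import numpy as np
from scipy.integrate import solve_ivp

lam, mu, A, B, n = -0.1, 1.0, 1.0, 0.0, 3
c = lam**2 + mu**2

# State: [f_0..f_{n-2}, f_0'..f_{n-2}', f_{n-1}].
# Equation (0.8) gives f_{n-1}' algebraically, so f_{n-1} needs
# no separate derivative variable.
def rhs(x, s):
    f, g, f_last = s[:n-1], s[n-1:2*(n-1)], s[-1]
    df_last = 2*lam*g[0] - c*f[0]          # equation (0.8)
    dg = np.empty(n - 1)
    for k in range(1, n - 1):              # equation (0.7), k = 1..n-2
        dg[k-1] = 2*lam*g[k] - c*f[k]
    dg[n-2] = 2*lam*df_last - c*f_last     # equation (0.7), k = n-1
    return np.concatenate([g, dg, [df_last]])

s0 = np.zeros(2*(n-1) + 1)
s0[0], s0[n-1] = A, B                      # f_0(0) = A, f_0'(0) = B
sol = solve_ivp(rhs, (0.0, 10.0), s0, rtol=1e-9, atol=1e-12)
```

The components $f_k$ returned in `sol.y` would still have to be combined through the fractional integrals in (0.3) to recover $y$ itself.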
\ No newline at end of file
diff --git a/samples/texts_merged/4966082.md b/samples/texts_merged/4966082.md
new file mode 100644
index 0000000000000000000000000000000000000000..eb68539a3b0dd33bcf65add2af6c5b73abbe9030
--- /dev/null
+++ b/samples/texts_merged/4966082.md
@@ -0,0 +1,131 @@
+
+---PAGE_BREAK---
+
+## 14.1 Voltage Divider Circuit Review
+
+Before modeling the 2D touchscreen, let's review the most important concept that has enabled 1D touchscreen modeling: the voltage divider circuit.
+
+$$V_{\text{out}} = \frac{R_2}{R_1 + R_2} \times V_s. \quad (1)$$
+
+By using a voltage divider circuit, we can map $u_{mid}$ to $L_{touch}$. A relationship between $u_{mid}$ and $L_{touch}$ exists such that:
+
+$$u_{\text{mid}} = \frac{R_2}{R_1 + R_2} \times V_s = \frac{L_{\text{touch}}}{L} \times V_s. \quad (2)$$
+---PAGE_BREAK---
+
+## 14.2 EE16A Physics Revisited
+
+Before we dive into the modeling of 2D resistive touchscreen, let's review the I-V characteristics for some basic circuit elements.
+
+From the I-V plots, although a resistor, a wire and an open circuit can behave quite differently, their behaviors are exactly the same at (0, 0). This means that at (0, 0), these three circuit elements can be replaced by one another and the same behavior ($I = 0, V = 0$) is still expected.
+
+## 14.3 An Interesting Circuit
+
+Let's look at an example of different circuit elements behaving in the same way. The circuit we will analyze next (and the corresponding thought process) will be important when we analyze our 2D resistive touchscreen at the end of this note. Consider the following circuit:
+
+If we want to solve for $u_1, u_2, u_3$, we could use our general analysis procedure. However, we can simplify
+---PAGE_BREAK---
+
+our analysis by noticing that this circuit is very similar to the voltage divider we already analyzed. In fact, it
+is two voltage dividers — one consisting of resistors $R_1$, $kR_1$ and another consisting of $R_2$, $kR_2$.
+
+Therefore, we can apply our voltage divider equation twice to find $u_2$ and $u_3$. Note that the total voltage drop over both voltage dividers is $V_s$.
+
+$$
+\begin{aligned}
+u_2 &= \frac{kR_1}{R_1 + kR_1} V_s = \frac{k}{1+k} V_s \\
+u_3 &= \frac{kR_2}{R_2 + kR_2} V_s = \frac{k}{1+k} V_s
+\end{aligned}
+ $$
+
+We see that regardless of the resistances $R_1$ and $R_2$, the potentials $u_2$ and $u_3$ are the same! This holds as long as $k$ is constant.
+
+$$ u_2 = u_3 = \frac{k}{1+k} V_s $$
+
+Now, let's add another resistor $R_3$ to the circuit.
+
+Once again, we could analyze this circuit from scratch using our circuit analysis procedure, but maybe we
+can simplify the analysis. Let's make a bold assumption that adding $R_3$ will not affect circuit operation and
+therefore $u_2 = u_3$ from our analysis above. We can determine if this assumption is true by analyzing the
+circuit and seeing if there are any contradictions that arise. If there are no contradictions, then we know that
+this bold assumption is true.
+
+First, analyze the current flow through $R_3$.
+
+$$ R_3 i_3 = u_2 - u_3 \tag{3} $$
+
+Under the assumption that $R_3$ does not affect circuit operation, then $u_2 = u_3$. Plugging this in tells us that the current flowing through $R_3$ is zero. In addition, we can calculate that the voltage drop across $R_3$ is $(u_2 - u_3)$ which is also zero. This means that $R_3$ is at the special (0, 0) point on the I-V plot, where it behaves the same way as a wire or open circuit. This means that there is no contradiction from our bold assumption since a resistor and open circuit have the same current and voltage at this point! Our bold assumption that $R_3$ will not affect circuit operation is correct!
+
+To complete our analysis, we can replace $R_3$ with an open circuit:
+---PAGE_BREAK---
+
+Now, we can write $u_2, u_3$ directly, using the voltage divider equation as we did above:
+
+$$u_2 = u_3 = \frac{k}{1+k} V_s. \quad (4)$$
+
+When is it OK to replace $R_3$ with an open circuit? If $R_1, kR_1, R_2, kR_2$ were four arbitrary resistors, could we still replace $R_3$ with an open circuit? No, this simplification is only possible because we have set our resistor values so that $u_2 = u_3$. This means that the voltage drop over $R_3$ and the current flowing in $R_3$ are both zero. In this case, the resistor $R_3$ is operating at (0, 0) on the I-V plot, so we can replace it without affecting circuit operation.
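The argument above can be checked with straightforward nodal analysis. The sketch below (assuming NumPy; the resistor values are arbitrary placeholders) solves KCL at $u_2$ and $u_3$ with $R_3$ left in place, and confirms that $u_2 = u_3 = \frac{k}{1+k} V_s$ and that no current flows through $R_3$:

```python
import numpy as np

Vs, k = 5.0, 2.0
R1, R2, R3 = 100.0, 470.0, 1234.0   # arbitrary example values

# KCL at nodes u2 and u3 (u1 is held at Vs by the source):
#   (u2 - Vs)/R1 + u2/(k*R1) + (u2 - u3)/R3 = 0
#   (u3 - Vs)/R2 + u3/(k*R2) + (u3 - u2)/R3 = 0
G = np.array([[1/R1 + 1/(k*R1) + 1/R3, -1/R3],
              [-1/R3, 1/R2 + 1/(k*R2) + 1/R3]])
b = np.array([Vs/R1, Vs/R2])
u2, u3 = np.linalg.solve(G, b)

i3 = (u2 - u3) / R3                 # current through R3
```

Changing `R1`, `R2` or `R3` leaves `u2` and `u3` unchanged as long as the ratio `k` is the same, which is exactly why the bold assumption holds.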
+
+Next, we will introduce the 2D resistive touchscreen — and we'll see a very similar circuit appear in our model!
+
+## 14.4 2D Resistive Touchscreen
+
+Now, let's introduce the physical structure of a 2D touchscreen: it consists of a top red plate and a bottom black plate. When a finger touches the screen, the top red plate is pushed into contact with the bottom black plate at the touch point.
+---PAGE_BREAK---
+
+The top and bottom ends of the top red plate, as well as the left and right ends of the bottom black plate, are made of materials with very low resistivity ρ, so we can treat them as ideal wires (ρ = 0). The material of the transparent screen that we touch in the middle has a much higher resistivity.
+
+In a 2D touchscreen, we want to figure out the vertical position and the horizontal position of the touch point: $L_{\text{touch, vertical}}$, $L_{\text{touch, horizontal}}$.
+
+Let's first analyze the physical structure of the top red plate. We can divide the top red plate into three segments (of equal width) represented by resistors, which are connected in between by horizontal resistors $R_{h1}$, $R_{h2}$.
+
+As we did with the 1D touchscreen, we connect a voltage supply $V_s$ to the top and bottom ends of the top red plate:
+---PAGE_BREAK---
+
+Let's analyze the circuit we just built with the top red plate and a voltage supply $V_s$:
+
+Does this circuit remind you of the "interesting circuit" we analyzed in the previous section? Since $R_{rest}$ and $R_{touch}$ are the same for each segment, we know that $u_2 = u_3 = u_4$. As with the "interesting circuit", we can replace the horizontal resistors $R_{h1}, R_{h2}$ with open circuits.
+---PAGE_BREAK---
+
+After replacing the horizontal resistors with open circuits, we can use a voltmeter to measure $u_3$. Once again, using the voltage divider equation, we get:
+
+$$u_3 = \frac{R_{\text{touch}}}{R_{\text{rest}} + R_{\text{touch}}} \times V_s. \quad (5)$$
+
+Given $R_{\text{touch}} = \rho \frac{L_{\text{touch}}}{A}$, $R_{\text{rest}} = \rho \frac{L_{\text{rest}}}{A}$, $u_3$ can be further simplified:
+
+$$u_3 = \frac{L_{\text{touch}}}{L} \times V_s \quad \text{where} \quad L_{\text{touch}} = L_{\text{touch, vertical}} \qquad (6)$$
+
+This means that $u_3$ is mapped to the vertical position touched in the same way as the 1D touchscreen. When measuring the vertical position touched ($L_{\text{touch, vertical}}$), the bottom black plate connects to a voltmeter and measures $u_3$, the same way it did in the 1D touchscreen. Note that, although we have represented the top red plate by three segments of equal width in the circuit model we built, the value of $u_3$ will remain the same if we choose to represent the top red plate by an infinite number of segments.
+
+Now that we found $L_{\text{touch, vertical}}$, how can we find $L_{\text{touch, horizontal}}$? We know from linear algebra that if we want to find two values (i.e. vertical and horizontal position), we will need two measurements. What's another useful measurement that we can take? Well, the black bottom plate is rotated 90° compared to the red plate, so we can repeat this procedure on the black plate to get the horizontal touch position.
+
+To do this, we connect the supply voltage source $V_s$ to the bottom black plate, and connect the top red plate to a voltmeter. As before, we choose to represent the bottom black plate by three segments of equal width which are connected in between by vertical resistors $R_{v1}, R_{v2}$.
+---PAGE_BREAK---
+
+Let's analyze the circuit model for the bottom black plate:
+
+Once again, we see that this is very similar to the “interesting circuit” and we can replace $R_{v1}$ and $R_{v2}$ with open circuits. Then we perform the same analysis as for the top red plate, and we can derive $u_3$, which is:
+
+$$u_3 = \frac{R_{\text{touch}}}{R_{\text{touch}} + R_{\text{rest}}} \times V_s \quad (7)$$
+---PAGE_BREAK---
+
+Here, $R_{touch} = \rho \frac{L_{touch, horizontal}}{A}$ in which $L_{touch, horizontal}$ is the horizontal position touched.
+
+The measurement of vertical and horizontal positions ($L_{touch, vertical}$, $L_{touch, horizontal}$) for a 2D touchscreen can be summarized as follows:
+
+* Vertical Position Measurement
+
+We connect a voltage source $V_s$ to the top red plate and connect a voltmeter to the bottom black plate.
+We can map the voltage measured to the vertical position touched:
+
+$$V_{out} = \frac{L_{touch, vertical}}{L} \times V_s. \quad (8)$$
+
+* Horizontal Position Measurement
+
+We connect a voltage source $V_s$ to the bottom black plate and connect a voltmeter to the top red plate.
+We can map the voltage measured to the horizontal position touched:
+
+$$V_{out} = \frac{L_{touch, horizontal}}{L} \times V_s. \quad (9)$$
+
+The important simplification used is replacing $R_{h1}, R_{h2}$ with open circuits for the $L_{touch, vertical}$ measurement,
+and replacing $R_{v1}, R_{v2}$ with open circuits for the $L_{touch, horizontal}$ measurement. However, this kind of simplification is valid only if the
+resistor is at (0, 0) on the I-V plot, which means the resistor has zero current flow and therefore zero voltage
+drop ($V = IR = 0$).
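The read-out arithmetic in equations (8) and (9) can be sketched as follows; the plate length, supply voltage and the two voltmeter readings below are hypothetical example values:

```python
# Invert V_out = (L_touch / L) * Vs for L_touch; one call per plate.
L = 0.10     # plate length in metres (hypothetical)
Vs = 5.0     # supply voltage (hypothetical)

def touch_position(v_out, Vs, L):
    """Map a voltmeter reading back to the touch coordinate."""
    return v_out / Vs * L

L_vertical = touch_position(1.8, Vs, L)    # source on red plate, meter on black
L_horizontal = touch_position(3.1, Vs, L)  # source on black plate, meter on red
```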
\ No newline at end of file
diff --git a/samples/texts_merged/4982835.md b/samples/texts_merged/4982835.md
new file mode 100644
index 0000000000000000000000000000000000000000..b18b0ccc81e4cea4640943e3514682659b768a4d
--- /dev/null
+++ b/samples/texts_merged/4982835.md
@@ -0,0 +1,422 @@
+
+---PAGE_BREAK---
+
+Department of
+Computer Science
+
+Analysis of Non-Strict
+Functional Implementations of
+the Dongarra-Sorensen
+Eigensolver
+
+S. Sur and W. Bohm
+
+Technical Report CS-93-133
+
+December 15, 1993
+
+Colorado State University
+---PAGE_BREAK---
+
+# Analysis of Non-Strict Functional Implementations of the Dongarra-Sorensen Eigensolver
+
+S. Sur and W. Böhm *
+
+Department of Computer Science
+Colorado State University
+Ft. Collins, CO 80523
+
+December 14, 1993
+
+## Abstract
+
+We study the producer-consumer parallelism of Eigensolvers composed of a tridiagonalization function, a tridiagonal solver, and a matrix multiplication, written in the non-strict functional programming language Id. We verify the claim that non-strict functional languages allow the natural exploitation of this type of parallelism, in the framework of realistic numerical codes. We compare the standard top-down Dongarra-Sorensen solver with a new, bottom-up version. We show that this bottom-up implementation is much more space efficient than the top-down version. Also, we compare both versions of the Dongarra-Sorensen solver with the more traditional QL algorithm, and verify that the Dongarra-Sorensen solver is much more efficient, even when run in a serial mode. We show that in a non-strict functional execution model, the Dongarra-Sorensen algorithm can run completely in parallel with the Householder function. Moreover, this can be achieved without any change in the code components. We also indicate how the critical path of the complete Eigensolver can be improved.
+
+## Address for Correspondence:
+
+A. P. W. Böhm
+
+Department of Computer Science
+Colorado State University
+Ft. Collins, CO 80523
+
+Tel: (303) 491-7595
+Fax: (303) 491-6639
+Email: bohm@CS.ColoState.Edu
+
+*This work is supported in part by NSF Grant MIP-9113268, Motorola Grant YCM002, and a Motorola Monsoon donation from ARPA
+---PAGE_BREAK---
+
+# 1 Introduction
+
+In our work we study the effectiveness of non-strict functional programming languages in expressing the parallelism of complex numerical algorithms in a machine independent style. A numerical application is often composed of a number of algorithms. In this paper, for example, we study an Eigensolver composed of a tridiagonalization function, a tridiagonal solver, and a matrix multiplication. We verify the claim that non-strict functional languages allow the natural exploitation of fine-grain parallelism of modular programs [7]. Elements of a non-strict data structure can be used before the whole structure is defined. Combined with the data-driven execution of functional modules, this provides for maximal exploitation of parallelism without the need for explicit specification of it.
+
+In this paper we compare the standard top-down Dongarra-Sorensen solver with a new, bottom-up version. We show that this bottom-up implementation is much more space efficient than the top-down version. Also, we compare both versions of the Dongarra-Sorensen solver with the more traditional QL algorithm, and verify that the Dongarra-Sorensen solver is much more efficient, even when run in a serial mode.
+
+Our algorithms are written in Id [8] and run on the Motorola Monsoon machine [6]. To obtain parallelism profiles, we run our programs on a Monsoon Interpreter. To obtain information about the space usage of our programs, we determine the largest problem size that can run on a one node (one processor module and one storage module) Monsoon machine.
+
+Dongarra and Sorensen mention the possibility of exploiting producer-consumer parallelism between Householder and their algorithm [4] and mention that "an efficient implementation of this scheme is difficult". We will show that in a non-strict functional execution environment, the Dongarra-Sorensen algorithm can run completely in parallel with the Householder function. Moreover, this has been achieved without any change in the code components.
+
+# 2 The Dongarra-Sorensen Eigensolver
+
+Let $A$ be a symmetric matrix, for which we want to find the eigenvectors and eigenvalues. The Householder transformation function takes $A$ and produces a tridiagonal matrix represented by the diagonal $d$ and upper diagonal $e$, and an orthogonal transformation matrix $Q$. The QL factorization function transforms $d$ and $e$ into a vector containing the eigenvalues and a matrix $Q'$ of eigenvectors of the tridiagonal system. The eigenvalues of $A$ are equal to the eigenvalues of the tridiagonal system, whereas the eigenvectors of $A$ are obtained by multiplying $Q$ and $Q'$. The Dongarra-Sorensen algorithm performs the same operation as QL, but in a divide and conquer fashion. For further details concerning eigensolvers we refer to [5].
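The compositional structure described above can be sketched with off-the-shelf routines standing in for the Id modules (a SciPy-based illustration, not the paper's implementation): Hessenberg reduction of a symmetric matrix plays the role of Householder tridiagonalization, a tridiagonal eigensolver plays the role of QL or Dongarra-Sorensen, and a matrix multiplication recovers the eigenvectors of $A$:

```python
import numpy as np
from scipy.linalg import hessenberg, eigh_tridiagonal

rng = np.random.default_rng(0)
M = rng.normal(size=(8, 8))
A = (M + M.T) / 2                   # a symmetric test matrix

# Stage 1: tridiagonalize. For symmetric A the Hessenberg form is
# tridiagonal: A = Q T Q^T (the role of the Householder module).
T, Q = hessenberg(A, calc_q=True)
d, e = np.diag(T), np.diag(T, 1)

# Stage 2: solve the tridiagonal eigenproblem (the role of the
# tridiagonal solver): T = Q' diag(w) Q'^T.
w, Qp = eigh_tridiagonal(d, e)

# Stage 3: matrix multiply to recover the eigenvectors of A.
V = Q @ Qp
```

In the non-strict functional setting studied in the paper, stage 2 can begin consuming `d` and `e` while stage 1 is still producing them; the sequential sketch above only shows the data flow between the three modules.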
+
+In this section we introduce the existing theory regarding Dongarra-Sorensen algorithm for solving the eigenvalue problem of a tridiagonal matrix in some more detail, because we will introduce a bottom up version of the algorithm later.
+
+The Dongarra-Sorensen algorithm is a *divide and conquer* approach [4, 2] for computing the eigen-
+---PAGE_BREAK---
+
+values and eigenvectors of a *symmetric tridiagonal* matrix. Let T be a symmetric tridiagonal
+matrix:
+
+$$
+\mathbf{T} =
+\begin{bmatrix}
+a_1 & b_1 &        &         &         & 0 \\
+b_1 & a_2 & b_2    &         &         &   \\
+    & b_2 & a_3    & \ddots  &         &   \\
+    &     & \ddots & \ddots  & b_{n-2} &   \\
+    &     &        & b_{n-2} & a_{n-1} & b_{n-1} \\
+0   &     &        &         & b_{n-1} & a_n
+\end{bmatrix}
+\quad (1)
+$$
+
+The Dongarra-Sorensen algorithm computes the Schur decomposition
+
+$Q^T T Q = \Lambda = \text{diag}(\lambda_1, \dots, \lambda_n), \quad Q^T Q = I$
+
+by gluing together the Schur decompositions of two half sized tridiagonal problems derived from
+the original matrix T. To obtain these half-sized problems we use partitioning by rank-one tearing
+discussed below. Each of these reductions can in turn be specified by a pair of quarter sized Schur
+decompositions and so on.
+
+## 2.1 Partitioning by rank-one tearing:
+
+One can easily check that any symmetric tridiagonal matrix T can be reduced to the following
+form:
+
+$$
+\mathbf{T} = \begin{pmatrix} T_1 & \beta e_k e_1^T \\ \beta e_1 e_k^T & T_2 \end{pmatrix} = \begin{pmatrix} \hat{T}_1 & 0 \\ 0 & \hat{T}_2 \end{pmatrix} + \theta^{-1} \beta \begin{pmatrix} e_k \\ \theta e_1 \end{pmatrix} \begin{pmatrix} e_k^T & \theta e_1^T \end{pmatrix} \quad (2)
+$$
+
+where $1 \le k \le n$ and $e_j$ represents the $j$-th unit vector of appropriate dimension and $\beta = b_k$. $\hat{T}_1$ is identical to the top $k \times k$ tridiagonal sub-matrix of T except that the last diagonal element $\tilde{a}_k$ is modified so that $\tilde{a}_k = a_k - \rho$, where $\rho = \beta/\theta$. Similarly, $\hat{T}_2$ is the bottom $(n-k) \times (n-k)$ tridiagonal submatrix of T, with only the first diagonal element modified. This modified element $\tilde{a}_{k+1}$ is given by $\tilde{a}_{k+1} = a_{k+1} - \rho\theta^2$. The factor $\theta$ is incorporated to avoid certain numerical difficulties associated with cancellation of diagonal terms [4].
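The tearing identity (2) is easy to verify numerically. The sketch below (assuming NumPy; the size $n$, tear index $k$ and factor $\theta$ are arbitrary example choices) rebuilds a random symmetric tridiagonal $T$ from the two modified blocks plus the rank-one correction:

```python
import numpy as np

rng = np.random.default_rng(1)
n, k, theta = 6, 3, 0.5                    # example sizes; tear after row k
a, b = rng.normal(size=n), rng.normal(size=n - 1)
T = np.diag(a) + np.diag(b, 1) + np.diag(b, -1)

beta = b[k - 1]                            # beta = b_k (1-based, as in the text)
rho = beta / theta
T1 = T[:k, :k].copy();  T1[-1, -1] -= rho            # a_k -> a_k - rho
T2 = T[k:, k:].copy();  T2[0, 0] -= rho * theta**2   # a_{k+1} -> a_{k+1} - rho*theta^2

u = np.zeros(n); u[k - 1], u[k] = 1.0, theta         # the vector (e_k, theta*e_1)
T_rec = np.zeros((n, n))
T_rec[:k, :k], T_rec[k:, k:] = T1, T2
T_rec += (beta / theta) * np.outer(u, u)             # theta^{-1} * beta rank-one term
```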
+
+## 2.2 Divide and conquer step:
+
+Now we have two smaller tridiagonal eigenvalue problems to solve. We can first find the Schur
+decompositions of $\hat{T}_1$ and $\hat{T}_2$ so that:
+
+$$
+\hat{T}_1 = Q_1 D_1 Q_1^T, \quad \hat{T}_2 = Q_2 D_2 Q_2^T
+$$
+---PAGE_BREAK---
+
+which gives,
+
+$$
+\mathbf{T} = \begin{pmatrix} Q_1 D_1 Q_1^T & 0 \\ 0 & Q_2 D_2 Q_2^T \end{pmatrix} + \theta^{-1} \beta \begin{pmatrix} e_k \\ \theta e_1 \end{pmatrix} \begin{pmatrix} e_k^T & \theta e_1^T \end{pmatrix} \quad (3)
+$$
+
+Therefore,
+
+$$
+T = \begin{pmatrix} Q_1 & 0 \\ 0 & Q_2 \end{pmatrix} \left( \begin{pmatrix} D_1 & 0 \\ 0 & D_2 \end{pmatrix} + \theta^{-1}\beta \begin{pmatrix} q_1 \\ \theta q_2 \end{pmatrix} \begin{pmatrix} q_1^T & \theta q_2^T \end{pmatrix} \right) \begin{pmatrix} Q_1^T & 0 \\ 0 & Q_2^T \end{pmatrix} \quad (4)
+$$
+
+where $q_1 = Q_1^T e_k$ (the last row of matrix $Q_1$) and $q_2 = Q_2^T e_1$ (the first row of matrix $Q_2$). The problem is now reduced to computing the eigensystem of the interior matrix in the previous equation, which is discussed in the following section.
+
+## 2.3 The updating problem:
+
+The problem that is left to solve is that of computing the eigensystem of a matrix of the form
+
+$$
+\hat{Q}\hat{D}\hat{Q}^T = D + \rho z z^T
+$$
+
+where $D$ is a real $n \times n$ diagonal matrix, $\rho$ is a non-zero scalar and $z$ is a real vector of order $n$. In our case
+
+$$
+D = \begin{pmatrix} D_1 & 0 \\ 0 & D_2 \end{pmatrix}, z = \begin{pmatrix} q_1 \\ \theta q_2 \end{pmatrix}, \text{ and } \rho = \frac{\beta}{\theta}
+$$
+
+In this study we implement this eigensolver for the case where all the eigenvalues are distinct and so we can write $D = \operatorname{diag}(\delta_1, \delta_2, \dots, \delta_n)$, where $\delta_i \neq \delta_j$ for $i \neq j$. Moreover, we can sort the $\delta$s and sort $z$ accordingly such that $\delta_i < \delta_j$ for $i < j$. We also assume that no component $\zeta_i$ of vector $z$ is zero. The eigensolver can be modified to solve problems with equal eigenvalues and zero components of $z$ by incorporating certain deflation techniques into the algorithm [4]. We will, however, not deal with these deflation techniques. Under the above assumptions, the eigenpair $\lambda$ (the eigenvalue) and $q$ (the corresponding eigenvector) satisfying
+
+$$
+(D + \rho z z^T)q = \lambda q
+$$
+
+can be obtained from satisfying the following equations [4]:
+
+$$
+1 + \rho z^T (D - \lambda I)^{-1} z = 0
+\quad (6)
+$$
+
+and $q$ is obtained from
+
+$$
+q = (D - \lambda I)^{-1} z
+\quad (7)
+$$
+
+If equation (6) is written in terms of the components $\zeta_i$ of $z$, then $\lambda$ must be a root of the equation
+
+$$
+f(\lambda) = 1 + \rho \sum_{j=1}^{n} \frac{\zeta_j^2}{\delta_j - \lambda} = 0
+\quad (8)
+$$
+
+Equation (8) is referred to as the *secular equation*. A Newton's method to solve this will not converge [2] and the general bisection method would be too slow. However, this equation has the delightful property of having a distinct root between every pair of consecutive diagonal elements ($\delta_i, \delta_{i+1}$). This property is used by Dongarra and Sorensen [4] to come up with a fast root-finder described below.
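The interlacing property already yields a correct (if slower) root-finder. The sketch below (assuming NumPy) uses plain bisection in each interval $(\delta_i, \delta_{i+1})$ instead of the rational interpolation described next, and checks the roots against a dense eigensolver; it is an illustration of the root structure, not the Dongarra-Sorensen iteration:

```python
import numpy as np

def secular_roots(d, z, rho, iters=100):
    """Roots of f(lam) = 1 + rho * sum(z_j^2 / (d_j - lam)), for rho > 0,
    d sorted with distinct entries and all z_j nonzero.  One root lies in
    each interval (d_i, d_{i+1}) and one beyond d_n; since f is monotone
    increasing there, running from -inf at the left pole to +inf (or to 1)
    at the right end, plain bisection on the sign of f converges."""
    f = lambda lam: 1.0 + rho * np.sum(z**2 / (d - lam))
    uppers = np.append(d[1:], d[-1] + rho * np.dot(z, z) + 1.0)
    roots = []
    for lo, hi in zip(d, uppers):
        for _ in range(iters):
            mid = 0.5 * (lo + hi)
            if f(mid) < 0.0:
                lo = mid        # root is to the right of mid
            else:
                hi = mid        # root is to the left of mid
        roots.append(0.5 * (lo + hi))
    return np.array(roots)

rng = np.random.default_rng(2)
d = np.sort(rng.normal(size=6))  # distinct diagonal entries
z = rng.normal(size=6)           # nonzero weights
rho = 0.7
lam = secular_roots(d, z, rho)
exact = np.linalg.eigvalsh(np.diag(d) + rho * np.outer(z, z))
```

The quadratically convergent rational interpolation of Section 2.4 replaces the inner bisection loop; the bracketing intervals are the same.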
+---PAGE_BREAK---
+
+## 2.4 The root-finder of the secular equation:
+
+Without loss of generality, one can assume that the coefficient $\rho$ of the secular equation is positive. If it is not, a change of variable can be used in which $\rho$ is replaced by $-\rho$. To achieve this without changing the secular equation, $\delta_i$ needs to be replaced by $-\delta_{n-i+1}$ and $\zeta_i$ by $\zeta_{n-i+1}$ for all $i$. Given that we wish to find the $i$-th root $\hat{\delta}_i$ of the function $f$ in equation (8), the function can be rewritten as
+
+$$f(\lambda) = 1 + \psi(\lambda) + \phi(\lambda) \tag{9}$$
+
+where
+
+$$\psi(\lambda) = \rho \sum_{j=1}^{i} \frac{\zeta_j^2}{\delta_j - \lambda}$$
+
+and
+
+$$\phi(\lambda) = \rho \sum_{j=i+1}^{n} \frac{\zeta_j^2}{\delta_j - \lambda}$$
+
+This root lies in the open interval $(\delta_i, \delta_{i+1})$ and for $\lambda$ in this interval all of the terms of $\psi$ are negative and all of the terms of $\phi$ are positive. This situation is very suitable for an iterative method for solving the equation
+
+$$-\psi(\lambda) = 1 + \phi(\lambda)$$
+
+One can start with an initial guess $\lambda_0$ close to $\delta_i$ in the appropriate interval so that $\lambda_0 < \lambda$ [2], and then construct simple rational interpolants of the form
+
+$$\frac{p}{q - \lambda} \quad \text{and} \quad r + \frac{s}{\delta - \lambda}$$
+
+where $\delta$ is fixed at $\delta_{i+1}$ (the $(i+1)$-th diagonal element of $D$) and the parameters $p, q, r, s$ are defined by the interpolation conditions
+
+$$\frac{p}{q - \lambda_0} = \psi(\lambda_0), \quad r + \frac{s}{\delta - \lambda_0} = \phi(\lambda_0), \quad \frac{p}{(q - \lambda_0)^2} = \psi'(\lambda_0), \quad \frac{s}{(\delta - \lambda_0)^2} = \phi'(\lambda_0)$$
+
+The new approximate $\lambda_1$ to the root $\hat{\delta}_i$ is then found by solving
+
+$$\frac{-p}{q - \lambda} = 1 + r + \frac{s}{\delta - \lambda} \tag{10}$$
+
+A sequence of iterates is derived following the same principle, and the process is stopped when the absolute value of the secular function at the current iterate falls below a certain threshold. Bunch et al. [2] showed that this iteration converges quadratically from one side of the root and does not need any safeguarding.
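One step of this scheme can be sketched as follows (an illustration only, not the production root-finder: we recover $p, q, r, s$ directly from the interpolation conditions above and solve equation (10) as a quadratic in $\lambda$):

```python
import numpy as np

def rational_step(lam0, i, d, z, rho):
    """One step toward the i-th root (0-based) of the secular equation,
    using the interpolants p/(q - lam) and r + s/(delta - lam), with
    delta fixed at d[i+1]."""
    delta = d[i + 1]
    psi  = rho * np.sum(z[:i+1]**2 / (d[:i+1] - lam0))
    dpsi = rho * np.sum(z[:i+1]**2 / (d[:i+1] - lam0)**2)
    phi  = rho * np.sum(z[i+1:]**2 / (d[i+1:] - lam0))
    dphi = rho * np.sum(z[i+1:]**2 / (d[i+1:] - lam0)**2)
    q = lam0 + psi / dpsi        # from p/(q-lam0) = psi, p/(q-lam0)^2 = psi'
    p = psi**2 / dpsi
    s = dphi * (delta - lam0)**2  # from the two conditions on phi
    r = phi - dphi * (delta - lam0)
    # -p/(q-lam) = 1 + r + s/(delta-lam), cleared of denominators, becomes
    # (1+r)lam^2 - [p + (1+r)(q+delta) + s]lam + [p delta + (1+r)q delta + s q] = 0
    cand = np.roots([1 + r,
                     -(p + (1 + r) * (q + delta) + s),
                     p * delta + (1 + r) * q * delta + s * q]).real
    cand = cand[(cand > d[i]) & (cand < delta)]
    return cand[np.argmin(np.abs(cand - lam0))] if cand.size else lam0

d = np.array([1.0, 2.0, 3.0])
z = np.array([0.3, 0.5, 0.4])
rho, i = 1.0, 0
lam = d[i] + 1e-3                # start just right of delta_i
for _ in range(50):
    nxt = rational_step(lam, i, d, z, rho)
    if nxt == lam:
        break
    lam = nxt
```

In this toy run the first step already lands within about $10^{-3}$ of the root, consistent with the quadratic convergence shown in [2].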
+
+# 3 A bottom-up approach
+
+The theory described in the previous section is particularly suitable for a top-down implementation, where each problem is recursively reduced to two smaller size problems, until the trivial case (problem size 1) is reached. As we will see in the analysis and results section this implementation
+---PAGE_BREAK---
+
+is very inefficient in terms of space. In this section we develop the theory for an alternative version
+of the algorithm, which starts from the bottom instead of the top, glues the solutions of smaller
+problems together at every iteration, and finally arrives at the full solution. It turns out that this
+approach is extremely space efficient without losing any time efficiency. Our
+bottom-up approach also provides more insight into the workings of the Dongarra-Sorensen algorithm.
+In addition, it is useful for other language implementations, as while loops are usually more efficient
+than recursive calls, and this form of implementation suits languages that do not
+support recursion, e.g. FORTRAN.
+
+In order to perform the Dongarra-Sorensen algorithm bottom-up, we need to compute the effects of
+all the rank-one tearings, so that we solve the same size-1 problems and combine them in the same
+way as the top-down algorithm does. The key observation on which our bottom-up approach
+is based is that the mathematical theory behind the Dongarra-Sorensen algorithm is independent of
+the positions of the rank-one tearings. It is therefore irrelevant where the matrix is partitioned, as
+long as the final size-1 matrices are derived from $n-1$ tearings, one at each position. Thus,
+iteratively tearing at the top, across all the elements, gives the same bottom case as the
+recursive half-and-half tearing. So we start with the tridiagonal matrix given in (1) and partition it
+by a rank-one tearing about the first off-diagonal element $b_1$. We get
+
+$$
+T = \begin{bmatrix}
+a_1 - b_1/\theta & 0 & 0 & \dots & 0 \\
+0 & a_2 - b_1\theta & b_2 & & \\
+0 & b_2 & a_3 & \ddots & \\
+\vdots & & \ddots & \ddots & b_{n-1} \\
+0 & & & b_{n-1} & a_n
+\end{bmatrix}
++ \frac{b_1}{\theta}
+\begin{bmatrix}
+1 & \theta & 0 & \dots & 0 \\
+\theta & \theta^2 & 0 & \dots & 0 \\
+0 & 0 & 0 & & \\
+\vdots & \vdots & & \ddots & \\
+0 & 0 & & & 0
+\end{bmatrix}
+$$
+
+The second matrix of the above equation can be rewritten as $\frac{b_1}{\theta} \begin{pmatrix} e_1^1 \\ \theta e_1^{n-1} \end{pmatrix} \begin{pmatrix} e_1^1 \\ \theta e_1^{n-1} \end{pmatrix}^T$. Here $e_j^k$ represents the j-th unit vector of length k. Tearing the matrix again with respect to element $b_2$ gives $T =$
+
+$$
+\begin{bmatrix}
+a_1 - \frac{b_1}{\theta} & 0 & 0 & \dots & 0 \\
+0 & a_2 - b_1\theta - \frac{b_2}{\theta} & 0 & & \\
+0 & 0 & a_3 - b_2\theta & b_3 & \\
+\vdots & & b_3 & \ddots & b_{n-1} \\
+0 & & & b_{n-1} & a_n
+\end{bmatrix}
++ \frac{b_1}{\theta} \begin{pmatrix} e_1^1 \\ \theta e_1^{n-1} \end{pmatrix} \begin{pmatrix} e_1^1 \\ \theta e_1^{n-1} \end{pmatrix}^T
++ \frac{b_2}{\theta} \begin{pmatrix} e_2^2 \\ \theta e_1^{n-2} \end{pmatrix} \begin{pmatrix} e_2^2 \\ \theta e_1^{n-2} \end{pmatrix}^T
+$$
+
+Repeating this tearing process $n-1$ times, i.e. once for each off-diagonal element, the tridiagonal matrix $T$ is torn down to a diagonal matrix plus a sum of rank-one matrices of the form $\rho z z^T$:
+---PAGE_BREAK---
+
+$$
+T = \begin{bmatrix}
+a_1 - \frac{b_1}{\theta} & 0 & \dots & 0 & 0 \\
+0 & a_2 - b_1\theta - \frac{b_2}{\theta} & & & \\
+\vdots & & \ddots & & \vdots \\
+0 & & & a_{n-1} - b_{n-2}\theta - \frac{b_{n-1}}{\theta} & 0 \\
+0 & 0 & \dots & 0 & a_n - b_{n-1}\theta
+\end{bmatrix}
++ \sum_{i=1}^{n-1} \frac{b_i}{\theta} \begin{pmatrix} e_i^i \\ \theta e_1^{n-i} \end{pmatrix} \begin{pmatrix} e_i^i \\ \theta e_1^{n-i} \end{pmatrix}^T
+\quad (11)
+$$
+
+The diagonal matrix in equation (11) is the starting point of our bottom-up approach and, as mentioned before, is the same as the bottom case of the top-down approach. Every element of the diagonal matrix can be considered as the eigensolution (Schur decomposition) $QDQ^T$ of a single-element matrix, where $D$ is the element itself and $Q$ is the $1 \times 1$ identity matrix. Now, two of these size-1 solutions can be updated to obtain the solution of a size-2 problem. The first combination step can be viewed as:
+
+$$
+T = \begin{bmatrix}
+\delta_1 + \frac{b_1}{\theta} \begin{pmatrix} 1 & \theta \\ \theta & \theta^2 \end{pmatrix} & 0 & \cdots & 0 \\
+0 & \delta_3 + \frac{b_3}{\theta} \begin{pmatrix} 1 & \theta \\ \theta & \theta^2 \end{pmatrix} & \cdots & 0 \\
+\vdots & \vdots & \ddots & \vdots \\
+0 & 0 & \cdots & \delta_{n-1} + \frac{b_{n-1}}{\theta} \begin{pmatrix} 1 & \theta \\ \theta & \theta^2 \end{pmatrix}
+\end{bmatrix}
++ \sum_{i \text{ even}} \frac{b_i}{\theta} \begin{pmatrix} e_i^i \\ \theta e_1^{n-i} \end{pmatrix} \begin{pmatrix} e_i^i \\ \theta e_1^{n-i} \end{pmatrix}^T
+\quad (12)
+$$
+
+where $\delta_i$ is the $2 \times 2$ matrix given by:
+
+$$
+\left(
+\begin{array}{cc}
+ a_i - b_{i-1}\theta - \frac{b_i}{\theta} & 0 \\
+ 0 & a_{i+1} - b_i\theta - \frac{b_{i+1}}{\theta}
+\end{array}
+\right)
+$$
+
+assuming $b_0 = 0$ and $b_n = 0$. Here the matrix $\delta_i + \frac{b_i}{\theta} \begin{pmatrix} 1 & \theta \\ \theta & \theta^2 \end{pmatrix}$ is in $D + \rho z z^T$ form and can be
+Schur decomposed to $Q_i D_i Q_i^T$ form by solving the updating problem given in equation 5. Hence
+after the first step of updating the tridiagonal matrix is:
+
+$$
+T = \begin{bmatrix}
+Q_1 D_1 Q_1^T & 0 & \dots & 0 \\
+0 & Q_3 D_3 Q_3^T & \dots & 0 \\
+\vdots & \vdots & \ddots & \vdots \\
+0 & 0 & \dots & Q_{n-1} D_{n-1} Q_{n-1}^T
+\end{bmatrix}
++ \sum_{i \text{ even}} \frac{b_i}{\theta} \begin{pmatrix} e_i^i \\ \theta e_1^{n-i} \end{pmatrix} \begin{pmatrix} e_i^i \\ \theta e_1^{n-i} \end{pmatrix}^T
+\quad (13)
+$$
+
+We now continue the same process; in the next step we move the terms associated with $b_2$, $b_6$,
+etc. from the summation into the updated matrix, in the following form:
+
+$$
+\frac{b_i}{\theta} \begin{bmatrix} 0 & 0 & 0 & 0 \\ 0 & 1 & \theta & 0 \\ 0 & \theta & \theta^2 & 0 \\ 0 & 0 & 0 & 0 \end{bmatrix}
+$$
+---PAGE_BREAK---
+
+The same updating is then done as given by equation 4, and Schur decompositions of $4 \times 4$ matrices are obtained. The process continues until all the terms of the summation outside the matrix are exhausted, and the final result is the eigensolution of the tridiagonal problem. A top-down recursive approach updates the same elements in its recombination step (after the problem has been broken down to the bottom-most level).
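The full tearing identity of equation (11) can be checked numerically. The following sketch (our own illustration, with an arbitrary $\theta$) rebuilds $T$ from the torn diagonal plus the $n-1$ rank-one terms:

```python
import numpy as np

n, theta = 6, 0.5
rng = np.random.default_rng(0)
a = rng.standard_normal(n)          # diagonal of T
b = rng.standard_normal(n - 1)      # off-diagonal of T
T = np.diag(a) + np.diag(b, 1) + np.diag(b, -1)

# torn diagonal: d_i = a_i - b_{i-1}*theta - b_i/theta, with b_0 = b_n = 0
bb = np.concatenate(([0.0], b, [0.0]))
d = a - bb[:-1] * theta - bb[1:] / theta

# rank-one terms: (b_i/theta) * z_i z_i^T with z_i = e_i + theta*e_{i+1}
R = np.zeros((n, n))
for i in range(n - 1):
    z = np.zeros(n)
    z[i], z[i + 1] = 1.0, theta
    R += (b[i] / theta) * np.outer(z, z)

assert np.allclose(np.diag(d) + R, T)   # equation (11) holds exactly
```

Each rank-one term restores $b_i/\theta$ and $b_i\theta$ on the diagonal and $b_i$ on the off-diagonal, which is exactly what the torn diagonal gave up.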
+
+# 4 Implementation Issues
+
+## 4.1 Top-down implementation
+
+Here we describe the functional implementation of the original top-down version of the Dongarra-Sorensen algorithm. A function `ds(d, e, n, θ)` takes `d`, the diagonal vector, `e`, the off-diagonal vector, `n`, the dimension, and `θ`, the stability factor discussed in section 2, as arguments. It returns `ev`, the vector of eigenvalues, and `Q`, the corresponding matrix of eigenvectors. If the size is 1, the only element of the diagonal vector is returned as the eigenvalue and the $1 \times 1$ identity matrix is returned as `Q`. Otherwise the problem is broken into two parts, each half the size of the original. If the problem size is odd the two halves are of unequal size, which needs no special treatment because the position of the tearing does not affect the final result. Two half-size vectors are generated from each of the vectors `d` and `e`. The last element of the first half of `d` and the first element of the second half of `d` are modified, as they represent the last diagonal element of $\hat{T}_1$ and the first diagonal element of $\hat{T}_2$, respectively (described in section 2.1). Now the two half-size eigenproblems are solved by recursively calling `ds` for each half. The rest of the work is to glue these two solutions together, i.e. the recombination step of a normal recursive function. Let us call the vector of eigenvalues and the matrix of eigenvectors of the first half $v_1$ and $Q_1$, respectively, and the corresponding ones for the second half $v_2$ and $Q_2$. Combining $v_1$ and $v_2$ gives the term `D` of equation (5). To obtain `z` of the same equation, we combine the last row of $Q_1$ and `θ` times the first row of $Q_2$. The next step is to solve the updating problem described in section 2, which requires the elements of `D` to be sorted. Consequently, to keep the same correspondence between `D` and `z` in the $D + ρzz^T$ eigenproblem, `z` also needs to be rearranged according to `D`'s sorted indices. 
To solve the $D + ρzz^T$ eigenproblem the secular equation can now be formed and its roots will give us the eigenvalues. A function `rf(D, z, ρ)` computes `n` roots by choosing a starting point $\lambda_0$ infinitesimally right of the left endpoint within each interval of components of `D` and iterating within the interval as follows:
+
+$$ \lambda_{i+1} = \lambda_i + \frac{2b}{a + \sqrt{a^2 - 4b}}, $$
+
+where
+
+$$
+\begin{align*}
+a &= \frac{\Delta(1 + \phi_i) + \psi_i^2/\psi_i'}{c} + \frac{\psi_i}{\psi_i'}, \\
+b &= \frac{\Delta w \psi_i}{\psi_i' c}, \\
+c &= 1 + \phi_i - \Delta \phi_i', \\
+w &= 1 + \phi_i + \psi_i,
+\end{align*}
+$$
+
+and
+
+$$ \Delta = \delta_{i+1} - \lambda_i $$
+---PAGE_BREAK---
+
+Figure 1: Idealized Parallelism Profile for Top-Down Dongarra-Sorensen
+
+where $\phi$ and $\psi$ have the same meaning as in section 2. The roots of the secular equation thus obtained are the eigenvalues of the $D + \rho z z^T$ problem, which are also the eigenvalues of the symmetric tridiagonal problem we started with. The corresponding eigenvectors are obtained using equation (7) of section 2. These eigenvectors are normalized and the final matrix form of the eigenvectors of the tridiagonal problem is obtained by multiplying this matrix of eigenvectors (of $D + \rho z z^T$ problem) with the zero padded $Q$ matrix of equation (4).
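The recursive structure of `ds` can be sketched as follows (a compact illustration, not the paper's Id code: the secular-equation machinery is replaced by a dense symmetric eigensolver for the rank-one update, so no sorting of `D` is needed, and `θ` is written `theta`):

```python
import numpy as np

def solve_update(D, z, rho):
    """Eigen-decomposition of diag(D) + rho*z z^T; a dense solver stands
    in for the secular-equation root-finder."""
    return np.linalg.eigh(np.diag(D) + rho * np.outer(z, z))

def ds(d, e, theta):
    """Top-down Dongarra-Sorensen: d diagonal, e off-diagonal of T."""
    n = len(d)
    if n == 1:
        return d.copy(), np.eye(1)
    m = n // 2
    rho = e[m - 1] / theta
    d1, d2 = d[:m].copy(), d[m:].copy()
    d1[-1] -= e[m - 1] / theta          # last diagonal entry of T1-hat
    d2[0]  -= e[m - 1] * theta          # first diagonal entry of T2-hat
    v1, Q1 = ds(d1, e[:m - 1], theta)
    v2, Q2 = ds(d2, e[m:], theta)
    D = np.concatenate([v1, v2])
    # z = (last row of Q1, theta * first row of Q2)
    z = np.concatenate([Q1[-1, :], theta * Q2[0, :]])
    w, U = solve_update(D, z, rho)
    Q = np.block([[Q1, np.zeros((m, n - m))],
                  [np.zeros((n - m, m)), Q2]]) @ U
    return w, Q

rng = np.random.default_rng(1)
dvec = rng.standard_normal(8)
evec = rng.standard_normal(7)
w, Q = ds(dvec, evec, theta=0.7)
T = np.diag(dvec) + np.diag(evec, 1) + np.diag(evec, -1)
```

Note that, as the text observes, odd sizes need no special handling; only the modified boundary entries matter.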
+
+## 4.2 Iterative (bottom-up) Dongarra-Sorensen algorithm
+
+This implementation first creates the bottom case by filling a vector *curv* (the current vector of eigenvalues) with $a_i - b_{i-1}\theta - b_i/\theta$ (refer to the preceding section), assuming $b_0 = b_n = 0$. The corresponding matrix of eigenvectors is the identity matrix at this point. In a *while* loop we start solving the updating problem (see section 2), gluing the solutions of size 1 into solutions of size 2. Every two consecutive elements of the vector *curv* now contain the eigenvalues of a size-2 problem (obtained from the usual formulation and solution of the secular equation). The matrix of eigenvectors *curQ* contains $2 \times 2$ submatrices along the diagonal (and 0's elsewhere) representing the eigenvectors of the size-2 problems. In the next step two of these size-2 solutions are glued together to form solutions of size 4, and the process is continued, increasing the step size by a factor of 2 in every successive iteration, until the whole problem is solved. This method requires the problem size to be a power of two. This limitation can be overcome by breaking the problem size *n* into a sum of powers of two (the binary representation of *n*), solving each part using the method described above, and then gluing the solutions together in the usual way (by solving the updating problem).
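The loop above can be sketched as follows (again illustrative, not the paper's Id code: the rank-one updates use a dense eigensolver in place of the secular-equation root-finder, and $n$ is assumed to be a power of two):

```python
import numpy as np

def ds_bottom_up(a, b, theta):
    """Iterative (bottom-up) Dongarra-Sorensen sketch for a symmetric
    tridiagonal matrix with diagonal a and off-diagonal b."""
    n = len(a)
    bb = np.concatenate(([0.0], b, [0.0]))           # b_0 = b_n = 0
    curv = a - bb[:-1] * theta - bb[1:] / theta      # bottom case
    blocks = [(curv[i:i + 1].copy(), np.eye(1)) for i in range(n)]
    size = 1
    while size < n:
        merged = []
        for j in range(0, len(blocks), 2):
            (v1, Q1), (v2, Q2) = blocks[j], blocks[j + 1]
            k = j * size + size - 1                  # glued off-diagonal
            rho = b[k] / theta
            D = np.concatenate([v1, v2])
            z = np.concatenate([Q1[-1, :], theta * Q2[0, :]])
            w, U = np.linalg.eigh(np.diag(D) + rho * np.outer(z, z))
            Q = np.block([[Q1, np.zeros((size, size))],
                          [np.zeros((size, size)), Q2]]) @ U
            merged.append((w, Q))
        blocks = merged
        size *= 2
    return blocks[0]

rng = np.random.default_rng(2)
a = rng.standard_normal(8)
b = rng.standard_normal(7)
w, Q = ds_bottom_up(a, b, theta=1.3)
T = np.diag(a) + np.diag(b, 1) + np.diag(b, -1)
```

At every iteration only the current list of blocks is live, which is the source of the space efficiency discussed in section 5.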
+---PAGE_BREAK---
+
+Figure 2: Idealized Parallelism Profile for Bottom-Up Dongarra-Sorensen
+
+Figure 3: Idealized Parallelism Profile for QL
+---PAGE_BREAK---
+
+| Function cost | HH + Top-down DS | HH + Bottom-up DS | HH + QL |
+|---|---|---|---|
+| HH eigenvalue computation | 93,320 | 89,622 | 91,754 |
+| HH eigenvector computation | 77,905 | 99,699 | 167,002 |
+| Matrix multiply | 114,553 | 112,781 | 109,756 |
+| Tridiagonal solver (DS or QL) | 78,560 | 53,775 | 256,391 |
+| Total instructions | 374,405 | 379,010 | 679,928 |
+| Critical path length | 109,640 | 110,450 | 189,340 |
+| Max size on Monsoon 1PE | 55×55 | 166×166 | 101×101 |
+
+Table 1: Quantitative Characteristics of the Eigensolvers
+
+# 5 Time and Space Analysis
+
+Figures 1, 2 and 3 display the idealized parallelism profiles of the three implementations of the eigensolver for an input matrix of size 16 × 16, where the (i,j)-th element equals (i+j). The algorithms behave similarly for other symmetric input matrices we have tried. The horizontal *cycles* axis represents the critical path measured in the number of execution cycles under the idealized assumption that an instruction will get executed as soon as its inputs are available. The vertical *instructions* axis represents the number of instructions that can execute in parallel at a certain cycle. The simulator allows us to mark the instructions of a certain function with a certain “color”. Colors 0 and 1 display the instructions executed by the run-time system. In this way we can study the overlapping of function execution. In all figures, color 2 represents the part of the Householder function that produces the diagonals *d* and *e*, and color 6 represents the part of the Householder function that produces the orthogonal transformation matrix *Q*. Color 4 represents the particular tridiagonal solver, and color 3 represents the matrix multiplication. The peak at the beginning of the execution in figure 1 represents the parallel unrolling of the divide-and-conquer tree of the recursive *ds* function. The second peak in figure 1 and the peak in figure 2 represent the execution of the root-finder.
+
+Table 1 gives for each implementation information regarding the number of instructions per function, the total number of instructions executed, the total critical path length, and the maximum size problem that can be run on a one node Monsoon machine [6], which has a 4 Megaword data memory. From the figures and table we draw the following conclusions.
+
+* Both top-down and bottom-up Dongarra-Sorensen implementations (color 4) exploit producer-consumer parallelism, and run in parallel with the part of Householder that creates the diagonals *d* and *e*. The Dongarra-Sorensen tridiagonal solvers end virtually at the same time as the part of Householder that creates the diagonals *d* and *e*, which indicates that the non-strict implementation of arrays in Id is highly effective for this application.
+
+* The part of the Householder function that produces *Q* (color 6) contributes to the critical path. Therefore, in order to shorten the critical path of the program, attention should be paid to this part of the algorithm. Improving the parallelism of Householder can be easily achieved by increasing the K-Bounds of the loops [3].
+---PAGE_BREAK---
+
+* The matrix multiplication (color 3) can only start after the matrix $Q$ has been produced by the Householder function, and contributes largely to the critical path length. Again, a more parallel version of matrix multiply would shorten the overall critical path length, and can also be realized by increasing K-bounds of loops.
+
+* Table 1 indicates that the bottom-up version of Dongarra-Sorensen is much more space efficient than the top-down version. The largest problem size solvable on a one node Monsoon machine is 9 times larger for the bottom-up than that for the top-down algorithm. The bottom-up algorithm runs out of heap memory (data structure space), whereas the top-down algorithm runs out of frame memory (run-time stack space). This is because the recursive algorithm is executed in an eager, breadth first order, which uses excessive amounts of frame store. This relates directly to the *throttling problem* [9] of controlling the amount of program parallelism such that the machine resources are utilized, but not swamped.
+
+* For the 16 × 16 problem, QL (color 4) has a 75% longer critical path length than the two Dongarra-Sorensen algorithms, and executes 80% more instructions. This verifies the claim in [4] that even in sequential mode, the Dongarra-Sorensen algorithm outperforms QL. Also note that QL cannot do much work until all of $d$ and $e$ have been produced, which is an inherent characteristic of the algorithm. This makes producer consumer parallelism hard to achieve for QL.
+
+# 6 Conclusions
+
+We have studied symmetric eigensolvers, consisting of the Householder tridiagonalization, a tridiagonal solver, and a matrix multiplication, in a non-strict functional environment. For the tridiagonal solver we have studied implementations of the Dongarra-Sorensen algorithm and compared them with the more traditional QL algorithm. We have introduced a new, bottom-up, implementation of the Dongarra-Sorensen algorithm, and have shown that it is considerably more space efficient than the top-down version without incurring time inefficiency.
+
+The Dongarra-Sorensen functions run completely in parallel with the Householder function, which verifies the claim that non-strict functional languages allow the modular design of applications, exploiting producer-consumer parallelism to the fullest.
+
+We have observed that the critical path of the programs can be improved by making Householder and Matrix Multiply more parallel, and indicated how this can be achieved.
+
+The QL algorithm needs all values of its input vectors in order to start executing and therefore does not benefit as much from the non-strictness of the data structures. QL has a longer critical path length and executes a considerably larger number of instructions, which verifies the claim that for sufficiently large problems, Dongarra-Sorensen is a better algorithm, even in a sequential execution environment.
+---PAGE_BREAK---
+
+References
+
+[1] Böhm, A. P. and R. E. Hiromoto, "The Dataflow Time and Space Complexity of FFTs", **J. Par and Dist Comp**, Vol. 18, pp. 301-313, 1993.
+
+[2] Bunch, J. R., C.P. Nielsen and D. C. Sorensen, "Rank-one Modification of the Symmetric Eigenproblem", **Numerische Mathematik**, 31, pp. 31-48, 1978.
+
+[3] Culler, D. E., "Managing parallelism and resources in scientific dataflow programs", PhD Thesis, MIT, June 1989.
+
+[4] Dongarra, J. J. and D. C. Sorensen, "A Fully Parallel Algorithm for the Symmetric Eigenvalue Problem", **SIAM J. Sci and Stat Comp**, Vol. 8, pp. S139-S154, March 1987.
+
+[5] Golub, G. H. and C. F. Van Loan, "Matrix Computations", The Johns Hopkins University Press, 2nd edition, 1989.
+
+[6] Hicks, James, D. Chiou, B. S. Ang and Arvind, "Performance studies of Id on the Monsoon dataflow system," **Journal of Parallel and Distributed Computing** no. 18, pp 273-300, 1993.
+
+[7] Hughes, J. "Why Functional Programming Matters", **The Computer Journal**, April 1989.
+
+[8] Nikhil, R. S., "Id Reference Manual, version 90.1", Computation structures group memo 284-2, MIT Laboratory for Computer Science, Cambridge, MA, Sept 1990.
+
+[9] Ruggiero, C. A. and J. Sargeant, "Control of Parallelism in the Manchester Dataflow Computer", Lecture Notes in Computer Science no. 274, pp 1-15, 1987.
\ No newline at end of file
diff --git a/samples/texts_merged/5091277.md b/samples/texts_merged/5091277.md
new file mode 100644
index 0000000000000000000000000000000000000000..c77cdf91373c2a6cef66ca9647041f629b431ef4
--- /dev/null
+++ b/samples/texts_merged/5091277.md
@@ -0,0 +1,764 @@
+
+---PAGE_BREAK---
+
+# Decision Procedures Over Sophisticated Fractional Permissions
+
+Le Xuan Bach Cristian Gherghina\* Aquinas Hobor\*\*
+
+National University of Singapore
+
+**Abstract.** Fractional permissions enable sophisticated management of resource accesses in both sequential and concurrent programs. Entailment checkers for formulae that contain fractional permissions must be able to reason about said permissions to verify the entailments. We show how entailment checkers for separation logic with fractional permissions can extract equation systems over fractional shares. We develop a set of decision procedures over equations drawn from the sophisticated boolean binary tree fractional permission model developed by Dockins et al. [4]. We prove that our procedures are sound and complete and discuss their computational complexity. We explain our implementation and provide benchmarks to help understand its performance in practice. We detail how our implementation has been integrated into the HIP/SLEEK verification toolset. We have machine-checked proofs in Coq.
+
+## 1 Introduction
+
+Separation logic is fundamentally a logic of resource accounting [13]. Control of some resource (i.e., a cell of memory) allows the owner to take certain actions with that resource. Traditionally, ownership is a binary property, with full ownership associated with complete control (e.g., the ability to read, modify, and deallocate the cell), and empty ownership associated with no control.
+
+Many programs, particularly many concurrent programs, are not easy to verify with such a coarse understanding of access control [2, 1]. Fractional permissions track ownership—i.e., access control—at a finer level of granularity. For example, partial ownership might allow for reading, while full ownership might in addition enable writing and deallocation. This access control scheme helps verify concurrent programs that allow multiple threads to share read access to heap cells as long as no thread has write access.
+
+A share model defines the representation for fractions $\pi$ (e.g., a rational number between 0 and 1) and a three-place join relation $\oplus$ that combines them (e.g., addition, allowed only when the sum is no more than 1). The join relation must satisfy a number of technical properties such as functionality, associativity, and commutativity. The fractional $\pi$-ownership of the memory cell $\ell$, whose value is currently $v$, can then be written in separation logic as $\ell \stackrel{\pi}{\mapsto} v$. When $\pi$ is full
+
+\* Supported by MoE Tier-2 grant MOE2009-T2-1-063
+
+\*\* Supported by a Lee Kuan Yew Postdoctoral Fellowship (T1-251RES0902)
+---PAGE_BREAK---
+
+ownership we simply omit it. We modify the standard separating conjunction $\star$ to respect fractional permissions via the equivalence $\ell \stackrel{\pi_1 \oplus \pi_2}{\mapsto} v \Leftrightarrow \ell \stackrel{\pi_1}{\mapsto} v \star \ell \stackrel{\pi_2}{\mapsto} v$.
+
+Unfortunately, while they are very intuitive, rational numbers are not a good model for fractional ownership. Consider the following attempt at a recursively defined predicate for fractionally-owned binary trees:
+
+$$ \mathbf{tree}(\ell, \pi) \equiv (\ell = \text{null} \land \mathbf{emp}) \lor (\ell \stackrel{\pi}{\mapsto} (\ell_l, \ell_r) \star \mathbf{tree}(\ell_l, \pi) \star \mathbf{tree}(\ell_r, \pi)) \quad (1) $$
+
+This tree predicate is obtained directly from the standard recursive predicate for wholly-owned binary trees in separation logic by asserting only $\pi$ ownership of the root and recursively doing the same for the left and right substructures, and so at first glance looks obviously correct. The problem is that when $\pi \le 0.5$, the predicate can also describe some non-tree directed acyclic graphs: since $\pi \oplus \pi$ is then defined, two parents can each contribute $\pi$ ownership of a single shared substructure.
+
+Parkinson then developed a share model that avoided this problem, but at the cost of certain other technical shortcomings and a total loss of decidability (even for equality testing) [12]. Decidability is crucial for developing automated tools to reason about separation logic formulae containing fractional permissions. Finally, Dockins et al. then developed a tree-share model detailed in §3 that overcame the technical shortcomings in Parkinson's model and in addition enjoyed a decidable test for equality and a computable join relation $\oplus$ [4]. The pleasant theoretical properties of the tree-share model led to its use in the design and soundness proofs of several flavors of concurrent separation logic [7, 8], and the basic computability results led to its incorporation in two program verification tools: HIP/SLEEK [11] and Heap-Hop [15].
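To give a flavour of why equality and $\oplus$ are computable in this model, here is a minimal sketch (our own illustration, not the paper's Coq or OCaml code) of boolean-binary-tree shares: a share is a boolean leaf or a node, kept canonical by folding children that are identical boolean leaves, and $\oplus$ is leafwise disjoint union after unfolding the two trees to a common shape:

```python
# A share is True (full), False (empty), or a pair (left, right).

def canon(t):
    """Canonical form: fold a node whose children are the same leaf."""
    if isinstance(t, bool):
        return t
    l, r = canon(t[0]), canon(t[1])
    if isinstance(l, bool) and l == r:
        return l
    return (l, r)

def join(t1, t2):
    """t1 (+) t2: leafwise disjoint union; None when the shares overlap."""
    if isinstance(t1, bool) and isinstance(t2, bool):
        return None if (t1 and t2) else (t1 or t2)
    l1, r1 = (t1, t1) if isinstance(t1, bool) else t1   # unfold leaves
    l2, r2 = (t2, t2) if isinstance(t2, bool) else t2
    l, r = join(l1, l2), join(r1, r2)
    if l is None or r is None:
        return None
    return canon((l, r))

# the two halves of the full share join back to the full share:
assert join((True, False), (False, True)) is True
```

Canonical forms make equality testing a structural comparison, and unlike rationals, splitting a tree share always yields two distinguishable halves.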
+
+However, it is one thing to incorporate a technique into a verification tool, and another thing to make it complete enough to work well. Heap-Hop employed a simplistic heuristic to prove entailments involving tree shares [14], and although HIP/SLEEK did better, the techniques were still highly incomplete [9]. Even verifying small programs can require hundreds of share entailment checks, so in practice this incompleteness was a significant barrier to the use of these tools to reason about programs whose verification required fractional shares.
+
+Our work overcomes this barrier. We show how to extract a system of equations over shares from separation logic formulae such that the truth of the system is equivalent to the truth of the share portion of the formulae. This extraction can be done with no knowledge about the underlying model for shares. These systems of equations are then handed to our solver: a standalone library of sound and complete decision procedures over fraction tree shares. Although the worst-case complexity is high, our benchmarks demonstrate that our library is fast enough in practice to be incorporated into practical entailment checkers.
+
+### Contributions.
+
+- We demonstrate how to extract a system of equations over fractional shares from separation logic formulae (§2).
+
+- We prove that the key problems over these systems are decidable.
+
+- We develop a tool that solves the problems and benchmark its performance.
+
+- We incorporated our tool into the HIP/SLEEK verification toolset.
+
+- Our prototype is available at:
+
+www.comp.nus.edu.sg/~cristian/projects/prover/shares.html
+---PAGE_BREAK---
+
+## 2 Extracting Shares from Separation Logic Formulae
+
+Program verification tools, such as HIP, usually do not verify programs on their own. Instead, a program verifier usually applies Hoare rules to verify program commands and then emits the associated entailments to separate checkers such as SLEEK. Entailment checkers usually follow in the footsteps of SMT solvers by dividing the input formulae according to the background theories and in turn relying on specialized provers for each theory, e.g. Omega for Presburger arithmetic.
+
+We plan to follow the same pattern for fractional shares. The program verifier itself needs to know almost nothing about fractional shares, because it will simply emit entailments over formulae containing such shares to its entailment checker. The entailment checker needs to know a bit more: how to separate share information from formulae into a specialized domain, i.e., systems of equations over shares. The choice of this domain is an important modularity boundary because it allows the entailment prover to treat shares as an abstract type. The entailment checker only knows about certain basic operations such as equality testing, combining, and splitting shares. To check entailments over shares it calls our new backend share prover (detailed in §4).
+
+To demonstrate that the entailment checker can treat the shares abstractly, we defer the share model until §3, and will first outline the extraction of systems of equations over shares from separation logic formulae. Here we will just write $\chi$ for share constants; if our domain were rationals between 0 and 1, then an example $\chi$ would be 0.25. Our actual domain is more sophisticated and is given in §3, but our point here is that extracting equations over shares can be done without actually knowing the underlying share model.
+
+Entailment checkers are complicated, in part because information discovered in one subdomain can impact another (e.g., alias analysis can affect share constraints). Due to the tight link between heap-specific reasoning and share reasoning, extra share constraints are generated while discharging heap obligations. This information seepage prevents a modular and compositional description of the share constraint generation process. For brevity, we will illustrate share constraint extraction from a core separation logic; interested readers are referred to the description for a richer logic given in [9, §8.4]. Extracting share information from more complex formulae depends on the exact nature of said formulae but usually follows the pattern we give here in a straightforward way; the end result is just larger systems of equations.
+
+The logic formulae we will consider here are of the following form:
+
+$$ \Phi := \exists v.\kappa \wedge \beta \quad | \quad \kappa \wedge \beta $$
+
+$$ \kappa := \kappa * \kappa \mid v \stackrel{\pi}{\mapsto} v $$
+
+$$ \beta := \beta \wedge \beta \mid v = \pi \mid \pi \oplus \pi = \pi $$
+
+$$ \pi := v \mid \chi $$
+
+Here, $v$ denotes variables (over shares, locations, and values) and $v \stackrel{\pi}{\mapsto} v$ is the fractional points-to predicate. Obtaining the share equation systems from the entailment $\Phi_a \vdash \Phi_c$ conceptually requires three steps.
+
+First, the formulae are normalized in order to ensure that the heap component does not contain two distinct points-to predicates when the pointers are provably aliased. This reduction step can be described as:
+
+$$ v_1 \stackrel{\pi_1}{\mapsto} v_2 * v_3 \stackrel{\pi_2}{\mapsto} v_4 \wedge \beta \quad \xrightarrow{\;\beta \vdash v_1 = v_3\;} \quad \exists \pi_3.\, v_1 \stackrel{\pi_3}{\mapsto} v_2 \wedge (\beta \wedge \pi_3 = \pi_1 \oplus \pi_2 \wedge v_2 = v_4) $$
+---PAGE_BREAK---
+
+Second, formulae are partitioned based on the domains (e.g., heaps, shares, arithmetics, bags) and all non heap related expressions are floated out of the heap fragment *k*. Share constants are floated out of the points-to relations by introducing a fresh share variable. Thus $v_1 \stackrel{\chi}{\mapsto} v_2$ becomes $\exists v'. v_1 \stackrel{v'}{\mapsto} v_2 \land v' = \chi$.
+
+Third, heap related obligations are discharged and any share constraint generated in the process is added to the share constraints previously extracted. SLEEK discharges heap constraints by pairing each points-to predicate $p_c \stackrel{s_c}{\mapsto} c_c$ in the consequent with a corresponding predicate in the antecedent $p_a \stackrel{s_a}{\mapsto} c_a$ when $p_a = p_c$. This pairing generates extra proof obligations over both the content of the memory ($c_a = c_c$) and the shares. For shares, SLEEK considers two possibilities: either the owned share $s_a$ in the antecedent is equal to the one in the consequent ($s_a = s_c$), or $s_a$ is strictly greater ($\exists s_r . s_a = s_c \oplus s_r$). This case split leads to the generation of two proof obligations, with the original entailment succeeding if at least one of the two new obligations is satisfied¹.
+
+Furthermore, it is common for separation logic entailment checkers to also infer a frame or residue—the part of the antecedent not required to prove the consequent. If $s_a$ is larger than $s_c$, then there exists a non-empty share $s_r$ such that $s_r \oplus s_c = s_a$. This share residue is captured by the instantiation of $s_r$.
+
+After the heap constraints are discharged, the share-relevant portion of the entailment consists of sets of formulae over **non-empty** shares with the syntax:
+
+$$ \phi ::= \exists v.\phi \mid \phi_1 \land \phi_2 \mid \pi_1 \oplus \pi_2 = \pi_3 \mid v_1 = v_2 \mid v = \chi $$
+
+That is, share formulae $\phi$ contain share variables $v$, existential binders $\exists$, conjunctions $\land$, join facts $\oplus$, equalities between variables, and assignments of variables to constants $\chi$. Unless bound by an existential, variables are assumed to be universally bound, with universals bound before existentials ($\forall \exists$ rather than $\exists \forall$); despite implementing a translation for the feature-rich separation logic for SLEEK [9] we have not needed arbitrary nesting of quantifiers. We will view the share formulae as equation systems $\Sigma$, i.e. as a pair of sets: (1) a set of equations of the form $a \oplus b = c$ or $v = w$, and (2) a set of existentially quantified variables.
+
+To clarify the interaction between entailment checkers and the share solver, we outline extraction of share equations from two entailments:
+
+$$ x \stackrel{\chi_1}{\mapsto} v_a * x \stackrel{\chi_2}{\mapsto} v_a \vdash \exists s_c.\, x \stackrel{s_c}{\mapsto} v_c \qquad \Big\| \qquad x \stackrel{\chi_a}{\mapsto} v_a \vdash x \stackrel{\chi_c}{\mapsto} v_c $$
+
+First, the two entailments need to be normalized and the shares floated out²:
+
+$$ x \stackrel{s_a}{\mapsto} v_a \land \chi_1 \oplus \chi_2 = s_a \vdash \exists s_c.\, x \stackrel{s_c}{\mapsto} v_c \qquad \Big\| \qquad x \stackrel{s_a}{\mapsto} v_a \land s_a = \chi_a \vdash \exists s_c.\, x \stackrel{s_c}{\mapsto} v_c \land s_c = \chi_c $$
+
+Discharging the heap obligations occurs by pairing the $x \stackrel{s_c}{\mapsto} v_c$ predicate with $x \stackrel{s_a}{\mapsto} v_a$, which generates the share obligations $s_a=s_c$ or $\exists s_r. s_a=s_c \oplus s_r$. These obligations are combined with the rest of the share constraints, resulting in two share proof obligations for each original entailment.
+
+$$ \begin{cases} \chi_1 \oplus \chi_2 = s_a \vdash \exists s_c.\, s_a = s_c \\ \chi_1 \oplus \chi_2 = s_a \vdash \exists s_c, s_r.\, s_c \oplus s_r = s_a \end{cases} \quad \Big\| \quad \begin{cases} s_a = \chi_a \vdash \exists s_c.\, s_c = \chi_c \land s_a = s_c \\ s_a = \chi_a \vdash \exists s_c, s_r.\, s_c = \chi_c \land s_a = s_c \oplus s_r \end{cases} $$
+
+¹ We are almost always able to avoid a serious exponential search by using the search prunings described in [9].
+
+² The antecedent $\exists$ is automatically interpreted as a $\forall$ over the entailment using renaming when needed to avoid name clashes.
+---PAGE_BREAK---
+
+Although simple, the first original entailment often occurs when verifying a method that requires only read access to a heap location; the existential allows callers to be flexible regarding which specific share of $x$ they have. One technical point is that many separation logics (including those used in HIP/SLEEK, Heap-Hop, and coreStar) only allow positive (non-empty) fractional shares over a points-to predicate (the empty share over a points-to is equivalent to $\bot$); thus, the above existential must be restricted to never choose the empty share.
+
+We have now given two examples of extracting share equations from separation logic formulae. Once the translation is finished, a separation logic entailment checker can ask our share prover two questions:
+
+1. (SAT) A solution $S$ of $\Sigma$ is a (finite) mapping from the variables of $\Sigma$ into tree shares. We say that a solution $S$ satisfies the equation system $\Sigma$, written $S \models \Sigma$, when the mapping makes the equations and equalities in $\Sigma$ true. The SAT query asks if an equation system $\Sigma$ is satisfiable, i.e., $\exists S. S \models \Sigma$? SLEEK uses SAT checks to help prune unfeasible proof searches.
+
+2. (IMPL) Given two systems $\Sigma_a$ and $\Sigma_c$, does $\Sigma_a$ entail $\Sigma_c$? That is: $\Sigma_a \vdash \Sigma_c$ iff $\forall S. S \models \Sigma_a \Rightarrow S \models \Sigma_c$.
+
+In practice this is sufficient; in §4 we will detail how we answer these questions.
+
+# 3 Binary Boolean Trees as a Fractional Share Model
+
+Here we briefly explain the tree-share fractional permissions model of Dockins et al. [4]. A tree share $\tau$ is inductively defined as a binary tree with boolean leaves:
+
+$$ \tau ::= \circ \mid \bullet \mid \widehat{\tau\,\tau} \tag{1} $$
+
+Here $\circ$ denotes an “empty” leaf while $\bullet$ denotes a “full” leaf. The tree $\circ$ is thus the empty share, and $\bullet$ the full share. There are two “half” shares: $\widehat{\bullet\,\circ}$ and $\widehat{\circ\,\bullet}$, and four “quarter” shares, beginning with $\widehat{\widehat{\bullet\,\circ}\,\circ}$. Notice that the two half shares are not identical; this is a feature, not a bug: this property ensures that the definition of tree from equation (1) really describes fractional trees instead of DAGs.
+
+Notice also that we presented the first quarter share as $\widehat{\widehat{\bullet\,\circ}\,\circ}$ instead of $\widehat{\widehat{\bullet\,\circ}\,\widehat{\circ\,\circ}}$. This is deliberate: the second choice is not a valid share because the tree is not in canonical form. A tree is in canonical form when it is in its most compact representation under the inductively-defined equivalence relation $\cong$:
+
+$$ \circ \cong \circ \qquad \bullet \cong \bullet \qquad \widehat{\circ\,\circ} \cong \circ \qquad \widehat{\bullet\,\bullet} \cong \bullet \qquad \frac{\tau_1 \cong \tau'_1 \quad \tau_2 \cong \tau'_2}{\widehat{\tau_1\,\tau_2} \cong \widehat{\tau'_1\,\tau'_2}} $$
+
+The canonical representation is needed to guarantee some of the technical properties described below. Managing the canonicality is a minor performance cost for the computable parts of our system but a major technical hassle in the proofs. Our strategy for this presentation is to gloss over some of these details, folding and unfolding trees into canonical form when required by the narrative. We justify our informalism in the presentation because all of the operations we define on trees have been verified in Coq to respect the canonicality.
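As a concrete illustration, the tree grammar and canonicalization can be sketched as follows. This is a minimal sketch with a representation of our own choosing (boolean leaves and pairs), not the paper's verified Coq development:

```python
# A sketch of tree shares: a share is a boolean leaf (False = empty ◦,
# True = full •) or a pair (left, right) representing a node.
def canon(t):
    """Fold a tree into canonical form using Node(◦,◦) ≅ ◦ and Node(•,•) ≅ •."""
    if isinstance(t, bool):
        return t
    l, r = canon(t[0]), canon(t[1])
    if isinstance(l, bool) and l == r:
        return l            # collapse a node whose children are equal leaves
    return (l, r)

# The first quarter share is canonical...
quarter = ((True, False), False)
# ...while the equivalent tree with an unfolded right child is not:
assert canon(((True, False), (False, False))) == quarter
```

The `canon` function realizes the congruence rules of $\cong$ bottom-up, so a fully folded result is reached in one pass.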
+---PAGE_BREAK---
+
+**Functional:** $x \oplus y = z_1 \Rightarrow x \oplus y = z_2 \Rightarrow z_1 = z_2$
+
+**Commutative:** $x \oplus y = y \oplus x$
+
+**Associative:** $x \oplus (y \oplus z) = (x \oplus y) \oplus z$
+
+**Cancellative:** $x_1 \oplus y = z \Rightarrow x_2 \oplus y = z \Rightarrow x_1 = x_2$
+
+**Unit:** $\exists u. \forall x. x \oplus u = x$
+
+**Disjointness:** $x \oplus x = y \Rightarrow x = y$
+
+**Cross split:**
+$a \oplus b = z \wedge c \oplus d = z \Rightarrow \exists ac, ad, bc, bd.$
+$ac \oplus ad = a \wedge bc \oplus bd = b \wedge ac \oplus bc = c \wedge ad \oplus bd = d$
+
+$$\forall \left( \begin{array}{c|c} a & b \end{array} \;,\; \begin{array}{c} c \\ \hline d \end{array} \right) \; \exists \left( \begin{array}{c|c} ac & bc \\ \hline ad & bd \end{array} \right)$$
+
+**Infinite Splittability:** $x \neq \emptyset \Rightarrow \exists x_1, x_2. x_1 \neq \emptyset \wedge x_2 \neq \emptyset \wedge x_1 \oplus x_2 = x$
+
+**Fig. 1.** Properties of tree shares
+
+The join relation for trees is inductively defined by unfolding both trees to the same shape and joining leafwise using the rules $\circ \oplus \circ = \circ$, $\circ \oplus \bullet = \bullet$, and $\bullet \oplus \circ = \bullet$; afterwards the result is refolded into canonical form, e.g., $\widehat{\bullet\,\circ} \oplus \widehat{\circ\,\bullet} = \widehat{\bullet\,\bullet} \cong \bullet$.
+
+Because $\bullet \oplus \bullet$ is undefined, the join relation on trees is a partial operation. Dockins et al. prove that the join relation satisfies a number of useful properties detailed in Figure 1. The tree share model is the only model that simultaneously satisfies Disjointness (forces the tree predicate—equation 1— to behave properly), Cross-split (used e.g. in settings involving overlapping data structures), and Infinite splittability (to verify divide-and-conquer algorithms). In the domain of tree shares, Disjointness is equivalent to $x \oplus x = y \Rightarrow x = \emptyset$; the name Disjointness comes from a related axiom at the level of formulae by Parkinson.
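The unfold-join-refold procedure for $\oplus$ can be sketched as follows. This is our own illustration (trees as boolean leaves — `False` for $\circ$, `True` for $\bullet$ — or pairs of subtrees), not the authors' implementation; `None` stands for "undefined":

```python
# A sketch of the partial join ⊕ on tree shares.
def canon(t):
    if isinstance(t, bool):
        return t
    l, r = canon(t[0]), canon(t[1])
    return l if isinstance(l, bool) and l == r else (l, r)

def join(a, b):
    if isinstance(a, bool) and isinstance(b, bool):
        return None if (a and b) else (a or b)  # ◦⊕◦=◦, ◦⊕•=•⊕◦=•, •⊕• undefined
    if isinstance(a, bool):                     # unfold a leaf to match shapes
        a = (a, a)
    if isinstance(b, bool):
        b = (b, b)
    l, r = join(a[0], b[0]), join(a[1], b[1])
    return None if l is None or r is None else canon((l, r))

# The two half shares join to the full share:
assert join((True, False), (False, True)) is True
```

Note how `join((True, False), (True, False))` is `None`: any leaf where both operands are full makes the whole join undefined, which is exactly what makes the Disjointness property hold.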
+
+Unfortunately, while the $\oplus$ operation has many nice properties useful for verifying programs, they fall far short of those necessary to permit traditional algebraic techniques like Gaussian elimination. Dockins also defines a kind of multiplicative operation $\bowtie$ between shares used to manage a token counting setting (as opposed to the divide-and-conquer algorithms we can verify), but our decision procedures do not support $\bowtie$ at this time.
+
+# 4 Decision Procedures over Tree Shares
+
+Here we introduce a decision procedure for discharging tree share proof obligations generated by program verifiers. Recall from §2 that equation systems contain equations of the form $a \oplus b = c$ and $v = w$, plus a list of variables that should be quantified existentially. Moreover, a *solution* $S$ of $\Sigma$ is a (finite) mapping from the variables of $\Sigma$ into tree shares. We write $S \models \Sigma$ to mean that the system $\Sigma$ is satisfied by solution $S$; the SAT query is then simply $\exists S.S \models \Sigma$. Furthermore, we write $\Sigma_a \vdash \Sigma_c$ to mean that every solution $S$ that satisfies $\Sigma_a$ also satisfies $\Sigma_c$, i.e., $\forall S. S \models \Sigma_a \Rightarrow S \models \Sigma_c$; this is exactly the IMPL query.
+---PAGE_BREAK---
+
+Fig. 2. SAT
+
+Fig. 3. IMPL
+
+The key reason SAT and IMPL are nontrivial is that the space is dense$^3$. That is, there exist trees of arbitrary height, seeming to rule out a brute force search. If we cannot find a solution to $\Sigma$ at height 5, how do we know that one is not lurking at height 10,000? If we check $\Sigma_a \vdash \Sigma_c$ when the variables are restricted to constants of height 5, how do we know that the entailment will continue to hold when the variables range over constants of arbitrary height?
+
+Our key theoretical insight is that despite the infinite domain, both SAT and IMPL are decidable by searching in the finite subdomain of trees with bounded height. Define the *system height* $|\Sigma|$ as the height of the highest tree constant in $\Sigma$ or 0 if $\Sigma$ contains only variables$^4$. For solutions $S$, let $|S|$ be the highest constant in its codomain. In §5, we will prove our key theoretical result: that for both SAT and IMPL queries, if the height of the system(s) of equations is $n$, then it is sufficient to restrict the search to solutions of height $n$.
+
+Of course, we do not want to blindly search through an exponentially large space if we can avoid it! Our goal for this section is to describe and prove sound the algorithms for SAT and IMPL given in Figures 2 and 3. At the core of our decision procedures are the REDUCE and REDUCEI functions, which use the shape of the tree constants in the system to guide their search. There are four subroutines: SIMPLIFY, DECOMPOSE, FORMULA, and SMT_SOLVER. SMT_SOLVER is just a call into an off-the-shelf SAT/SMT solver; our prototype attaches to both MiniSat and Z3. The other three subroutines are discussed in detail below.
+
+SIMPLIFY. SAT/SMT solvers can require a lot of computation time. Accordingly, SIMPLIFY attempts to reduce the size of the problem with a combination of several techniques. First, each equation that contains two or three tree constants is simplified into an equality (or $\top$/$\bot$). To do so, SIMPLIFY sometimes uses the inverse operation of $\oplus$, written $\ominus$, which satisfies $a \oplus b = c$ iff $c \ominus a = b$. To calculate the (partial) operation $a \ominus b$, unfold $a$ and $b$ to the same shape (just
+
+3 This is by design: density is needed to enable the “Infinite Splittability” axiom, which is needed to support the verification of divide-and-conquer algorithms.
+
+4 Since we are computer scientists, we start counting with 0, so $|\circ| = |\bullet| = 0$.
+---PAGE_BREAK---
+
+as with $\oplus$); calculate the difference leafwise using the rules $\bullet \ominus \bullet = \circ$, $\bullet \ominus \circ = \bullet$, and $\circ \ominus \circ = \circ$; and then fold the result back into canonical form, e.g., $\bullet \ominus \widehat{\bullet\,\circ} = \widehat{\bullet\,\bullet} \ominus \widehat{\bullet\,\circ} = \widehat{\circ\,\bullet}$.
+
+$\ominus$ is needed when one of the constants appears on the RHS of an equation, e.g.,⁵
+
+$$
+v_1 \oplus v_2 = \circ \quad \rightsquigarrow \quad v_1 = \circ \wedge v_2 = \circ \\
+v_1 \oplus \bullet = v_2 \quad \rightsquigarrow \quad v_1 = \circ \wedge v_2 = \bullet \\
+v_1 \oplus v_1 = v_2 \quad \rightsquigarrow \quad v_1 = \circ \wedge v_2 = \circ
+$$
+
+$$
+v_1 \oplus \circ = v_2 \quad \rightsquigarrow \quad v_1 = v_2 \\
+v_1 \oplus v_2 = v_1 \quad \rightsquigarrow \quad v_2 = \circ
+$$
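The rewrites above can be sketched as a function on single equations. This is a simplified sketch of our own: terms are variable names or the constants `'o'`/`'*'` standing in for $\circ$/$\bullet$, and a full implementation would also handle the symmetric cases (e.g., the constant on the left operand):

```python
# A sketch of SIMPLIFY's per-equation rewrites for a ⊕ b = c.  Terms are
# variable names (str) or the leaf constants 'o' (◦) and '*' (•); the result
# is a list of equalities, or None when no listed rewrite applies.
def simplify_eq(a, b, c):
    if c == 'o':                 # v1 ⊕ v2 = ◦   ⇝  v1 = ◦ ∧ v2 = ◦
        return [(a, 'o'), (b, 'o')]
    if b == '*':                 # v1 ⊕ • = v2   ⇝  v1 = ◦ ∧ v2 = •
        return [(a, 'o'), (c, '*')]
    if a == b:                   # v1 ⊕ v1 = v2  ⇝  v1 = ◦ ∧ v2 = ◦ (Disjointness)
        return [(a, 'o'), (c, 'o')]
    if b == 'o':                 # v1 ⊕ ◦ = v2   ⇝  v1 = v2
        return [(a, c)]
    if a == c:                   # v1 ⊕ v2 = v1  ⇝  v2 = ◦
        return [(b, 'o')]
    return None
```

Each rewrite replaces a join fact with equalities, so repeated application shrinks the system while preserving its solutions.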
+
+The result of SIMPLIFY is a new (in practice considerably smaller!) system of equations $\Sigma'$ that has the same solutions, as expressed by the following lemma:
+
+**Lemma 1.** For all solutions $S$, $S \models \Sigma$ iff $S \models \text{SIMPLIFY}(\Sigma)$.
+
+We will also need to know that SIMPLIFY does not increase the height of an equation system.
+
+To prove this, we need the following fact about ⊕ and ⊖:
+
+**Lemma 2.** If $a \oplus b = c$ or $a \ominus b = c$ then $|c| \le \max(|a|, |b|)$.
+
+Given that fact, it is straightforward to prove the associated fact on SIMPLIFY:
+
+**Lemma 3.** $|\text{SIMPLIFY}(\Sigma)| \leq |\Sigma|$
+
+*Proper equation systems.* An equation system $\Sigma$ is proper when all of the equations and equalities in $\Sigma$ have no more than one constant. SIMPLIFY($\Sigma$) is always proper, which simplifies some of our upcoming soundness proofs; accordingly, hereafter we assume that all of our equation systems are proper.
+
+DECOMPOSE. The heart of our decision procedure is DECOMPOSE, which takes an equation system $\Sigma$ of height $n$ and produces two independent systems $\Sigma_l$ and $\Sigma_r$ with heights at most $n-1$.
+
+We decompose variables, constants, equations, and equalities as follows:
+
+$$
+\underbrace{v \rightsquigarrow (v_l, v_r)}_{\text{vars}} \qquad
+\underbrace{\circ \rightsquigarrow (\circ, \circ) \qquad \bullet \rightsquigarrow (\bullet, \bullet) \qquad \widehat{\tau_1\,\tau_2} \rightsquigarrow (\tau_1, \tau_2)}_{\text{consts}}
+$$
+
+$$
+\underbrace{\frac{a \rightsquigarrow (a_l, a_r) \qquad b \rightsquigarrow (b_l, b_r) \qquad c \rightsquigarrow (c_l, c_r)}{a \oplus b = c \;\rightsquigarrow\; (a_l \oplus b_l = c_l,\; a_r \oplus b_r = c_r) \qquad a = b \;\rightsquigarrow\; (a_l = b_l,\; a_r = b_r)}}_{\text{eqs}}
+$$
+
+⁵ In §4 we use the symbol $\rightsquigarrow$ to indicate a transformation taken by the subroutine currently under discussion, so here it is referring to one of the operations of SIMPLIFY.
+
+
+---PAGE_BREAK---
+
+In addition, **DECOMPOSE** also transforms the list of existentially bound variables so that if $v$ was existentially bound in $\Sigma$ then $v_l$ is existentially bound in $\Sigma_l$ and $v_r$ is existentially bound in $\Sigma_r$. Fresh variable names are chosen so that the system can determine which “parent” variables are associated with which “child” variables. We write $\hat{x}$ for the parent variable function, e.g., $\widehat{v_l} = \widehat{v_r} = v$.
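This step can be sketched as follows, with encoding choices of our own: variables are strings, constants are boolean leaves or pairs, and child variables are named `v.l`/`v.r` so the parent function is recoverable from the name:

```python
# A sketch of DECOMPOSE on a proper system of join equations a ⊕ b = c.
def split(term):
    if isinstance(term, str):        # variable v ⇝ (v_l, v_r)
        return term + '.l', term + '.r'
    if isinstance(term, bool):       # ◦ ⇝ (◦, ◦) and • ⇝ (•, •)
        return term, term
    return term                      # node (τ1, τ2) ⇝ (τ1, τ2)

def decompose(eqs, exvars):
    left, right = [], []
    for a, b, c in eqs:
        (al, ar), (bl, br), (cl, cr) = split(a), split(b), split(c)
        left.append((al, bl, cl))
        right.append((ar, br, cr))
    return ((left, [v + '.l' for v in exvars]),
            (right, [v + '.r' for v in exvars]))
```

For example, decomposing $x \oplus \widehat{\bullet\,\circ} = z$ yields $x_l \oplus \bullet = z_l$ on the left and $x_r \oplus \circ = z_r$ on the right; both child systems mention only height-0 constants.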
+
+The key fact about **DECOMPOSE** is that the solution of the original system is tightly related to the solutions of the decomposed systems, as follows:
+
+**Lemma 4.** *Given a system $\Sigma$ and a solution $S$ such that $\text{DECOMPOSE}(\Sigma) = (\Sigma_l, \Sigma_r)$ and $\text{DECOMPOSE}(S) = (S_l, S_r)$, then $S \models \Sigma$ iff $S_l \models \Sigma_l$ and $S_r \models \Sigma_r$.*
+
+By $\text{DECOMPOSE}(S)$ we mean the division of $S$ into two independent solutions:
+
+$$ \text{DECOMPOSE}(S) \equiv (\lambda v.\text{DECOMPOSE}(S(\hat{v})).1, \lambda v.\text{DECOMPOSE}(S(\hat{v})).2) $$
+
+Lemma 4 holds because the left and right subtrees of a binary tree are independent from each other. Moreover, **DECOMPOSE** decreases height:
+
+**Lemma 5.** If $\text{DECOMPOSE}(\Sigma) = (\Sigma_l, \Sigma_r)$, then $|\Sigma| > \max(|\Sigma_l|, |\Sigma_r|)$ or we were at height 0 to begin with, i.e., $|\Sigma| = |\Sigma_l| = |\Sigma_r| = 0$.
+
+**FORMULA.** After repeatedly applying **DECOMPOSE**, $|\Sigma| = 0$, i.e., the embedded constants are only ◦ and •. Tree constants at height zero have a natural interpretation as booleans, with ◦ as ⊥ and • as ⊤. Likewise, solutions at height zero can be used as valuations (maps from variables to ⊤ and ⊥) for logic formulae. Accordingly, **FORMULA** translates the equations and equalities in a system of equations of height zero into logic formulae as follows:
+
+$$
+\begin{align*}
+a \oplus b &= c & \rightsquigarrow (\neg a \land \neg b \land \neg c) \lor (\neg a \land b \land c) \lor (a \land \neg b \land c) \\
+a &= b & \rightsquigarrow (\neg a \land \neg b) \lor (a \land b)
+\end{align*}
+$$
+
+Each resulting formula is $\land$-conjoined together to get a single formula that represents the entire system, as indicated by the following lemma:
+
+**Lemma 6.** Let $|S| = |\Sigma| = 0$ and $v_1, \dots, v_n$ be the existentially bound variables in $\Sigma$. Then $S \models \Sigma$ iff $S \models \exists v_1 \dots \exists v_n$. $\text{FORMULA}(\Sigma)$
+
+To connect to a pure SAT solver (e.g., MiniSat) we then compile the existential into a disjunction; e.g., $\exists v.\phi \rightsquigarrow (v=\top \land \phi) \lor (v=\bot \land \phi)$. In contrast, SMT solvers such as Z3 can handle existentials over booleans directly.
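For a height-zero system, the translation and the satisfiability check can be sketched by brute force over valuations; this toy enumeration is our own stand-in for the MiniSat/Z3 call, and it treats every variable as (implicitly) existential:

```python
# A sketch of FORMULA plus a brute-force satisfiability check at height 0.
from itertools import product

def join_ok(a, b, c):
    # FORMULA's translation of a ⊕ b = c over booleans:
    # (¬a ∧ ¬b ∧ ¬c) ∨ (¬a ∧ b ∧ c) ∨ (a ∧ ¬b ∧ c)
    return ((not a and not b and not c) or (not a and b and c)
            or (a and not b and c))

def sat0(eqs, variables):
    """Is the height-0 system of join equations a ⊕ b = c satisfiable?"""
    look = lambda env, t: env[t] if isinstance(t, str) else t
    for vals in product([False, True], repeat=len(variables)):
        env = dict(zip(variables, vals))
        if all(join_ok(look(env, a), look(env, b), look(env, c))
               for a, b, c in eqs):
            return True
    return False

assert sat0([('x', 'y', True)], ['x', 'y'])     # x = ◦, y = • works
assert not sat0([(True, True, 'z')], ['z'])     # • ⊕ • is undefined
```

The disjuncts of `join_ok` are exactly the three defined leaf joins, so a valuation satisfies the formula iff the corresponding height-0 solution satisfies the system.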
+
+The proof of Lemma 6 is by simple case analysis, but critics will rightly observe that the hypothesis $|S| = 0$, which is crucial to make the case analysis finite, is in general not true. We will see below how to overcome this difficulty.
+
+SAT. We are almost ready to prove the correctness of the SAT function. The last puzzle piece we need is one of the two major theoretical insights of this paper:
+
+**Theorem 1.** $\Sigma$ is satisfiable if and only if $\Sigma$ can be satisfied with a solution $S$ whose height is $|\Sigma|$, i.e., $\exists S. S \models \Sigma \iff \exists S. |S| = |\Sigma| \land S \models \Sigma$.
+---PAGE_BREAK---
+
+We will defer the proof of Theorem 1 until §5.1; our task in this section is to show how it fits into our correctness proof for SAT, i.e.,
+
+**Theorem 2.** $\text{SAT}(\Sigma) = \top$ if and only if $\Sigma$ is satisfiable, i.e., $\exists S. S \models \Sigma$.
+
+*Proof.* Given $\Sigma$, we call REDUCE and feed the result into the SMT solver, so Theorem 2 depends on REDUCE turning $\Sigma$ into an equivalent logical formula.
+
+The proof of REDUCE is by (complete) induction on $|\Sigma|$. Both the base case and the inductive case begin by applying SIMPLIFY to reach $\Sigma'$. By Lemma 1, $\Sigma'$ is satisfiable iff $\Sigma$ was satisfiable; moreover, by Lemma 3, $|\Sigma'| \le |\Sigma|$. After simplification, the base case and the inductive case proceed differently.
+
+In the base case, $|\Sigma'| = 0$ and REDUCE calls FORMULA to produce a logical formula that by Lemma 6 is equivalent to $\Sigma'$ as long as the solution has height 0. Theorem 1 completes the base case by telling us that testing satisfiability at height 0 is sufficient to determine satisfiability in general.
+
+In the inductive case, we DECOMPOSE $\Sigma'$ into $\Sigma_l$ and $\Sigma_r$. Lemma 5 tells us that both new systems have lower height, so we can apply the induction hypothesis to verify the recursive calls and get two new formulae whose truth is equivalent to that of $\Sigma_l$ and $\Sigma_r$. Lemma 4 completes the inductive step by telling us that the conjunction of the two formulae is equivalent to $\Sigma'$. $\square$
+
+IMPL. We need the second major theoretical insight of this paper to verify IMPL.
+
+**Theorem 3.** $\Sigma_a \vdash \Sigma_c$ iff $S \models \Sigma_a \Rightarrow S \models \Sigma_c$ for all solutions $S$ s.t. $|S| = |\Sigma_a|$, i.e., $\forall S. S \models \Sigma_a \Rightarrow S \models \Sigma_c$ iff $\forall S. |S| = |\Sigma_a| \Rightarrow S \models \Sigma_a \Rightarrow S \models \Sigma_c$.
+
+We will defer the proof until §5.2; just as we did with Theorem 1 above, our task here is to show how Theorem 3 fits into our correctness proof for IMPL, i.e.,
+
+**Theorem 4.** IMPL($\Sigma_a, \Sigma_c$) if and only if $\Sigma_a \vdash \Sigma_c$, i.e., $\forall S. S \models \Sigma_a \Rightarrow S \models \Sigma_c$.
+
+*Proof.* The major effort is proving that REDUCEI correctly transforms $\Sigma_a$ and $\Sigma_c$ into equivalent logical formulae $\Phi_a$ and $\Phi_c$ such that $\Sigma_a \vdash \Sigma_c$ iff $\Phi_a \Rightarrow \Phi_c$; afterwards we simply use the standard SAT/SMT solver trick of converting a validity check for $\Phi_a \Rightarrow \Phi_c$ into an unsatisfiability check for $\Phi_a \wedge \neg \Phi_c$.
+
+The proof of REDUCEI is largely in parallel with the proof of REDUCE in Theorem 2. We proceed by complete induction, this time on $\max(|\Sigma_a|, |\Sigma_c|)$.
+
+Again the base and inductive cases begin in the same way. We apply SIMPLIFY to reach $\Sigma'_a$ and $\Sigma'_c$ and again use Lemma 1 to guarantee that $\Sigma'_a \vdash \Sigma'_c$ iff $\Sigma_a \vdash \Sigma_c$; Lemma 3 ensures that $\max(|\Sigma'_a|, |\Sigma'_c|) \le \max(|\Sigma_a|, |\Sigma_c|)$.
+
+After simplification, the base and inductive cases diverge. In the base case, $\max(|\Sigma'_a|, |\Sigma'_c|) = 0$ and we call FORMULA to reach two logical formulae, the first equivalent to $\Sigma'_a$ and the second equivalent to $\Sigma'_c$, as long as the solutions are of height zero (Lemma 6). Theorem 3 completes the base case by observing that it is sufficient to check only the solutions of height $|\Sigma'_a|$, i.e. zero.
+
+In the inductive case, we DECOMPOSE $\Sigma'_a$ and $\Sigma'_c$ into $\Sigma_a^l$, etc., decreasing the maximum of their heights (Lemma 5), and thus letting us use the induction hypothesis for the recursive calls. Afterwards, we have four formulae ($\Phi_a^l$, etc.); we then conjoin both antecedents and both consequents using Lemma 4. $\square$
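The height-zero IMPL check can likewise be sketched by brute force. This toy enumeration of ours stands in for the $\Phi_a \wedge \neg\Phi_c$ unsatisfiability query sent to the solver, and it ignores existential binders for simplicity:

```python
# A brute-force sketch of height-0 IMPL: Σa ⊢ Σc holds when every valuation
# satisfying Σa also satisfies Σc, i.e. Φa ∧ ¬Φc is unsatisfiable.
from itertools import product

def join_ok(a, b, c):                # boolean reading of a ⊕ b = c
    return ((not a and not b and not c) or (not a and b and c)
            or (a and not b and c))

def holds(eqs, env):
    look = lambda t: env[t] if isinstance(t, str) else t
    return all(join_ok(look(a), look(b), look(c)) for a, b, c in eqs)

def impl0(ante, cons, variables):
    return all(holds(cons, dict(zip(variables, vals)))
               for vals in product([False, True], repeat=len(variables))
               if holds(ante, dict(zip(variables, vals))))

# x ⊕ • = y forces x = ◦, which entails x ⊕ x = x:
assert impl0([('x', True, 'y')], [('x', 'x', 'x')], ['x', 'y'])
# but x ⊕ y = • permits x = •, so the same consequent fails:
assert not impl0([('x', 'y', True)], [('x', 'x', 'x')], ['x', 'y'])
```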
+
+
+---PAGE_BREAK---
+
+*Optimizations.* The algorithms presented in Figures 2 and 3 get the job done but yield far from optimal performance, so our prototype incorporates a number of additional optimizations. Optimizations during SAT include dropping equalities after substitution and a lazier, on-demand version of DECOMPOSE. In addition to utilizing the lazier version of DECOMPOSE, optimizations during IMPL include dropping existentials from the antecedent; substituting equalities from the antecedent into the consequent; and, when the antecedent has reached height zero but the consequent has not, stopping decomposition and performing a SAT check on the antecedent. Several optimizations require some additional theoretical insight; e.g., the last requires the following:
+
+**Lemma 7.** Let $S$ be a solution of $\Sigma$. Then $|S| \ge |\Sigma|$.
+
+*Proof.* Recall that we assume that $\Sigma$ is proper, i.e., each equation has at most one constant. If $|\Sigma| = 0$, we are done. Otherwise, by definition of $|\Sigma| = n$, there must be an equation $\sigma$ containing a constant $\chi$ with height $n$. Since $S \models \Sigma$ we know that $S \models \sigma$. Assume both variables $v_1$ and $v_2 \in \sigma$ have height lower than $n$ in $S$ (i.e., $\max(|S(v_1)|, |S(v_2)|) < |\chi|$). By Lemma 2 we also know that $|\chi| \le \max(|S(v_1)|, |S(v_2)|)$, so by transitivity we have $|\chi| < |\chi|$, a contradiction. Accordingly, at least one of the variables $v_i$ must have had height at least $n$. $\square$
+
+Unsurprisingly, the actual code used in the prototype is much more complicated than the algorithms presented above, and accordingly is much harder to verify. For future work we would like to develop a verified implementation.
+
+*Complexity.* One might wonder what complexity class SAT and IMPL belong to. Tree-SAT when restricted to systems of height zero is already NP-COMPLETE.
+
+*Proof.* We can use the following clause-by-clause reduction from 3-SAT, in which new variables $(X, Y, Z, M$, and $N$) are chosen fresh for each disjunctive clause:
+
+$$
+\begin{align*}
+A \lor B \lor C &\leadsto (A \oplus X = \bullet) \land (X \oplus Y = Z) \land (B \oplus M = Z) \land (M \oplus N = C) \\
+\neg A \lor B \lor C &\leadsto (A \oplus X = Y) \land (B \oplus Z = Y) \land (Z \oplus M = C) \\
+\neg A \lor \neg B \lor C &\leadsto (A \oplus X = Y) \land (B \oplus Z = M) \land (C \oplus X = M) \\
+\neg A \lor \neg B \lor \neg C &\leadsto (A \oplus X = Y) \land (B \oplus Z = M) \land (X \oplus Z = C)
+\end{align*}
+$$
+
+The clause on the LHS is satisfiable iff the clause on the RHS is satisfiable. $\square$
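The first gadget can be checked exhaustively at height 0 (a brute-force sketch of our own: for every truth assignment to $A$, $B$, $C$, the system is satisfiable in the fresh variables exactly when the clause holds):

```python
# Brute-force check of the gadget for A ∨ B ∨ C:
# (A⊕X=•) ∧ (X⊕Y=Z) ∧ (B⊕M=Z) ∧ (M⊕N=C) with X,Y,Z,M,N fresh.
from itertools import product

def join_ok(a, b, c):                # boolean reading of a ⊕ b = c
    return ((not a and not b and not c) or (not a and b and c)
            or (a and not b and c))

def gadget_sat(A, B, C):
    return any(join_ok(A, X, True) and join_ok(X, Y, Z)
               and join_ok(B, M, Z) and join_ok(M, N, C)
               for X, Y, Z, M, N in product([False, True], repeat=5))

# Satisfiable exactly when the clause A ∨ B ∨ C is true:
assert all(gadget_sat(A, B, C) == (A or B or C)
           for A, B, C in product([False, True], repeat=3))
```

Intuitively, $A \oplus X = \bullet$ forces $X = \neg A$, and the remaining equations propagate that complement through $B$ and $C$; only the all-false assignment makes the final equation hit the undefined $\bullet \oplus \bullet$ case.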
+
+We hypothesize that tree-SAT on systems of arbitrary height is still “only” NP-COMPLETE because our SAT algorithm seems to scale the formulae polynomially with the size of the system's description. Going a bit further out on a limb, we further hypothesize that tree-IMPL is no worse than NP$^{\text{NP}}$-COMPLETE. Happily, as we show in §7, performance seems to be adequate in practice.
+
+# 5 Sufficiency of Finite Search over Tree Shares
+
+The SAT and IMPL algorithms presented in §4 are basically doing a shape-guided search through a finite domain. Our key theoretical insight is that a finite
+---PAGE_BREAK---
+
+search is sufficient, as formalized in the statement of Theorems 1 and 3 in §4. Our next task is to prove these theorems, which is the focus of the remainder of this section. The most technical parts—Lemmas 8 and 10—have been mechanically verified in Coq. The remaining proofs have been carefully checked on paper.
+
+## 5.1 The Sufficiency of Finite Search for SAT
+
+We begin by explaining two related operations given a tree $\tau$ and a natural number $n$: left rounding, written $[\overleftarrow{\tau}]_n$, and right rounding, written $[\overrightarrow{\tau}]_n$. Because of the canonical form for tree shares, their associated formal definitions are somewhat unpleasant, but informally what is going on is simple. First, the tree $\tau$ is unfolded to height $n$. Second, we shrink the height of the tree by uniformly choosing the left (respectively, right) leaf from each pair of leaves at height $n$. Finally, we refold the resulting tree back into canonical form.
+
+For illustration, here we left and right round the tree ⋄○○● to height 3. To help visually track what is going on, we have highlighted the left leaf in each pair with the color red and the right leaf in each pair with the color blue.
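The rounding operations can also be modelled executably. The following Python sketch is purely illustrative (leaves ○/● are encoded as `False`/`True`, nodes as pairs, and the function names are ours, not the prototype's):

```python
def canon(t):
    """Refold into canonical form: collapse nodes whose children are identical leaves."""
    if isinstance(t, tuple):
        l, r = canon(t[0]), canon(t[1])
        return l if l == r and not isinstance(l, tuple) else (l, r)
    return t

def unfold(t, n):
    """Unfold t so that every leaf sits at depth exactly n."""
    if n == 0:
        return t
    l, r = t if isinstance(t, tuple) else (t, t)
    return (unfold(l, n - 1), unfold(r, n - 1))

def pick(t, k, side):
    """At depth k every node is a pair of leaves; keep the chosen one (0 = left)."""
    if k == 0:
        return t[side]
    return (pick(t[0], k - 1, side), pick(t[1], k - 1, side))

def round_left(t, n):
    return canon(pick(unfold(t, n), n - 1, 0))

def round_right(t, n):
    return canon(pick(unfold(t, n), n - 1, 1))

# Rounding Node(Node(o, *), *) at its own height 2 decreases the height...
t = ((False, True), True)
assert round_left(t, 2) == (False, True)
assert round_right(t, 2) is True
# ...while rounding above the height of t leaves it unchanged.
assert round_left(t, 3) == t and round_right(t, 3) == t
```

The two assertions at the end mirror the informal description: rounding a tree at its own height strictly lowers the height, and rounding above the height is the identity.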
+
+**Lemma 8 (Properties of $[\overleftarrow{\tau}]_n$ and $[\overrightarrow{\tau}]_n$).**
+
+1. If $n > |\tau|$ then $[\overleftarrow{\tau}]_n = [\overrightarrow{\tau}]_n = \tau$
+
+2. If $n = |\tau|$, $\tau_l = [\overleftarrow{\tau}]_n$, and $\tau_r = [\overrightarrow{\tau}]_n$, then $\max(|\tau_l|, |\tau_r|) < n$
+
+3. If $\tau_1 \oplus \tau_2 = \tau_3$ and $n = \max(|\tau_1|, |\tau_2|, |\tau_3|)$, then $[\overleftarrow{\tau_1}]_n \oplus [\overleftarrow{\tau_2}]_n = [\overleftarrow{\tau_3}]_n$ and $[\overrightarrow{\tau_1}]_n \oplus [\overrightarrow{\tau_2}]_n = [\overrightarrow{\tau_3}]_n$
+
+Proved in Coq. Lemma 8 states (1) that $[\overleftarrow{\tau}]_n$ and $[\overrightarrow{\tau}]_n$ do not affect $\tau$ if $n > |\tau|$; and (2) that they decrease the height if $n = |\tau|$. Most importantly, (3) $[\overleftarrow{\tau}]_n$ and $[\overrightarrow{\tau}]_n$ preserve the join relation when $n$ is the height of the equation.
+
+We extend $[\overleftarrow{\cdot}]_n$ and $[\overrightarrow{\cdot}]_n$ to work over solutions $S$ pointwise as follows:
+
+$$ [\overleftarrow{S}]_n \equiv \lambda v.\ [\overleftarrow{S(v)}]_n \qquad [\overrightarrow{S}]_n \equiv \lambda v.\ [\overrightarrow{S(v)}]_n $$
+
+The key point of the rounding functions is given by the next lemma, a corollary of Lemma 8 obtained by using a solution $S$ to instantiate the variables in a system $\Sigma$.
+
+**Lemma 9.** Let $S \models \Sigma$, $n = |S| > |\Sigma|$, $S_l = [\overleftarrow{S}]_n$, and $S_r = [\overrightarrow{S}]_n$. Then $S_l \models \Sigma$, $S_r \models \Sigma$, and $\max(|S_l|, |S_r|) < n$.
+---PAGE_BREAK---
+
+The key to this lemma is that since we are rounding only at a height $n > |\Sigma|$, all of the constants in $\Sigma$ are unchanged. Only the variables in $S$ with height greater than $|\Sigma|$ are modified, but their new values are also solutions for $\Sigma$. With the preliminaries out of the way, we are finally ready to prove Theorem 1.
+
+**Theorem 1.** $\Sigma$ is satisfiable if and only if $\Sigma$ can be satisfied with a solution $S$ whose height is $|\Sigma|$, i.e., $\exists S. S \models \Sigma \iff \exists S. |S| = |\Sigma| \wedge S \models \Sigma$.
+
+*Proof.* $\Leftarrow$: Immediate. $\Rightarrow$: Suppose $S \models \Sigma$. By Lemma 7, we have $|S| \geq |\Sigma|$, i.e., $|S| = |\Sigma| + n$ for some $n$. We proceed by strong induction on $n$. If $n=0$ we are done. Otherwise, by Lemma 9 we know that $S_l = [\overleftarrow{S}]_{|\Sigma|+n}$ satisfies $\Sigma$ and $|S_l| < |S|$, letting us apply the induction hypothesis. $\square$
+
+## 5.2 The Sufficiency of Finite Search for IMPL
+
+IMPL is more complicated than SAT due to the contravariance. Suppose we have computationally checked that all solutions $S$ of height $|\Sigma_a|$ that satisfy $\Sigma_a$ also satisfy $\Sigma_c$. Now suppose that $S \models \Sigma_a$ for some $S$ such that $|S| = |\Sigma_a| + 1$, and we wish to know if $S \models \Sigma_c$. Lemma 9 tells us that $[\overleftarrow{S}]_{|\Sigma_a|+1} \models \Sigma_a$. Our computational verification then tells us that $[\overleftarrow{S}]_{|\Sigma_a|+1} \models \Sigma_c$, but then we are stuck: on its own, $[\overleftarrow{S}]_{|\Sigma_a|+1} \models \Sigma_c$ is too weak to prove $S \models \Sigma_c$.
+
+The root of the problem is that $[\overleftarrow{\tau}]_n$ does not contain enough information about the original because half of the leaves are removed. Fortunately, the leaves that were dropped when we round left are exactly the leaves that are kept when we round right, and vice versa. We can define a third operation, written $\tau_l \triangledown_n \tau_r$ and pronounced “average”, that recombines the rounded trees back into the original. Just as was the case with the rounding functions, although the formal definition of $\tau_l \triangledown_n \tau_r$ is somewhat unpleasant due to the necessity of managing the canonical forms, the core idea is straightforward. First, $\tau_l$ and $\tau_r$ are unfolded to height $n-1$. Second, each leaf in $\tau_l$ is paired with its corresponding leaf in $\tau_r$. Finally, the resulting tree is folded back into canonical form.
+
+We illustrate with another example, highlighting again with **red** and **blue**:
+
+**Lemma 10 (Properties of $\tau_l \triangledown_n \tau_r$).**
+
+1. If $n > |\tau|$ then $\tau \triangledown_n \tau = \tau$.
+
+2. If $n \geq |\tau|$, then $[\overleftarrow{\tau}]_n \triangledown_n [\overrightarrow{\tau}]_n = \tau$.
+
+3. If $n > \max(|\tau_1|, |\tau_2|, |\tau_3|, |\tau'_1|, |\tau'_2|, |\tau'_3|)$, $\tau_1 \oplus \tau_2 = \tau_3$, and $\tau'_1 \oplus \tau'_2 = \tau'_3$, then $(\tau_1 \triangledown_n \tau'_1) \oplus (\tau_2 \triangledown_n \tau'_2) = (\tau_3 \triangledown_n \tau'_3)$.
+
+Proved in Coq. The key points are that (1) averaging a tree with itself is the identity, (2) $\triangledown_n$ inverts the pair $[\overleftarrow{\tau}]_n$ and $[\overrightarrow{\tau}]_n$, and (3) $\triangledown_n$ preserves the join operation $\oplus$.
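The average operation admits a similarly small executable model (a Python sketch; leaves ○/● are encoded as `False`/`True`, nodes as pairs, and all function names are ours):

```python
def canon(t):
    """Refold into canonical form: collapse nodes whose children are identical leaves."""
    if isinstance(t, tuple):
        l, r = canon(t[0]), canon(t[1])
        return l if l == r and not isinstance(l, tuple) else (l, r)
    return t

def unfold(t, n):
    """Unfold t so that every leaf sits at depth exactly n."""
    if n == 0:
        return t
    l, r = t if isinstance(t, tuple) else (t, t)
    return (unfold(l, n - 1), unfold(r, n - 1))

def pick(t, k, side):
    if k == 0:
        return t[side]
    return (pick(t[0], k - 1, side), pick(t[1], k - 1, side))

def round_left(t, n):
    return canon(pick(unfold(t, n), n - 1, 0))

def round_right(t, n):
    return canon(pick(unfold(t, n), n - 1, 1))

def avg(tl, tr, n):
    """tau_l avg_n tau_r: unfold both to height n-1, pair corresponding leaves, refold."""
    def zip_leaves(a, b, k):
        if k == 0:
            return (a, b)
        return (zip_leaves(a[0], b[0], k - 1), zip_leaves(a[1], b[1], k - 1))
    return canon(zip_leaves(unfold(tl, n - 1), unfold(tr, n - 1), n - 1))

t = ((False, True), True)
# Property 2: averaging the two roundings recovers the original tree.
assert avg(round_left(t, 2), round_right(t, 2), 2) == t
# Property 1: a tree averaged with itself (n > |t|) is unchanged.
assert avg(t, t, 3) == t
```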
+---PAGE_BREAK---
+
+Given a system $\Sigma$, Lemma 10 contains the facts we need to prove that the $\triangledown_n$-combination of two solutions $S_l$ and $S_r$ as defined below is also a solution.
+
+$$S_l \triangledown_n S_r \equiv \lambda v. S_l(v) \triangledown_n S_r(v)$$
+
+**Lemma 11 (Properties of $S_l \triangledown_n S_r$).**
+
+1. For all $S$, if $n \ge |S|$ then $[\overleftarrow{S}]_n \triangledown_n [\overrightarrow{S}]_n = S$.
+
+2. Let $S_l, S_r$ be solutions of $\Sigma$ and $n > \max(|S_l|, |S_r|)$. Then $S = S_l \triangledown_n S_r$ is a solution of $\Sigma$.
+
+Direct from Lemma 10. We are now ready to attack the main IMPL theorem.
+
+**Theorem 3.** $\Sigma_a \vdash \Sigma_c$ iff the entailment holds for all solutions $S$ s.t. $|S| = |\Sigma_a|$, i.e., $\forall S.\ S \models \Sigma_a \Rightarrow S \models \Sigma_c$ iff $\forall S.\ |S| = |\Sigma_a| \Rightarrow S \models \Sigma_a \Rightarrow S \models \Sigma_c$.
+
+*Proof.* $\Rightarrow$: Immediate. $\Leftarrow$: We apply complete induction, starting from $|\Sigma_a|$, on the height of solutions $S$ of $\Sigma_a$. The base case ($|S| = |\Sigma_a|$) is immediate. For the inductive case, we know $S \models \Sigma_a$ and that all solutions $S'$ of $\Sigma_a$ such that $|S'| < |S|$ are also solutions of $\Sigma_c$. By Lemma 9, we know that $[\overleftarrow{S}]_{|S|}$ and $[\overrightarrow{S}]_{|S|}$ are both solutions to $\Sigma_a$ with lower heights. The induction hypothesis yields that $[\overleftarrow{S}]_{|S|}$ and $[\overrightarrow{S}]_{|S|}$ are also both solutions of $\Sigma_c$. Lemma 11 completes the proof by telling us that $[\overleftarrow{S}]_{|S|} \triangledown_{|S|} [\overrightarrow{S}]_{|S|} = S$ is also a solution of $\Sigma_c$. $\square$
+
+# 6 Handling Non-zeros
+
+To avoid cluttering the presentation, we omitted showing how to restrict a variable to non-zero shares in SAT and IMPL queries.
+
+However, our methods are able to handle this detail: each system of equations also contains a list of “non-zero” variables. This list is taken into account when constructing the first-order boolean formula: for each non-zero variable, an extra disjunctive clause over all the decompositions of that variable is generated. This forces at least one of the boolean variables corresponding to the initial non-zero variable to be true in each solution. In the tree domain, this clause ensures that the non-zero variable has at least one • leaf.
+
+The full algorithms have an extra call to the `NON_ZERO` function, which returns a conjunction of the clauses encoding the non-zero disjunctions (not shown: sometimes we need to lift existentials to the top level). To force a variable $v$ to be strictly positive, at least one of the variables decomposed from $v$ must be true (i.e., the tree value has at least one • leaf). If $\{v_i\}$ is the set of variables obtained by decomposing $v$, then positivity is encoded by $\bigvee_i v_i$.
+---PAGE_BREAK---
+
+Because these non-zero constraints relate otherwise disjoint equation subsystems to each other, it is not obvious how to verify each subsystem independently, which is why we produce one large boolean formula rather than many small ones.
+
+Furthermore, the non-zero set forces extra system decompositions. To illustrate this point, observe that the equation $v_1 \oplus v_2 = \bullet$ has no solution of depth 0 in which both $v_1$ and $v_2$ are non-empty. However, decomposing the system once yields the system $v_1^l \oplus v_2^l = \bullet \wedge v_1^r \oplus v_2^r = \bullet$ with two possible solutions ($v_1^l = \circ; v_1^r = \bullet; v_2^l = \bullet; v_2^r = \circ$) and ($v_1^l = \bullet; v_1^r = \circ; v_2^l = \circ; v_2^r = \bullet$), which translate into ($v_1 = \circ\bullet; v_2 = \bullet\circ$) and ($v_1 = \bullet\circ; v_2 = \circ\bullet$). We have proved that for each non-zero variable, $\lceil \log_2(n) \rceil$ is an upper bound on the number of extra decompositions, where $n$ is the total number of variables. In practice we do not need to decompose nearly that much, and we have not noticed a meaningful performance cost. We speculate that we avoid most of the cost of the additional decompositions because the extra variables are often handled by the fast simplification rules we have incorporated into our tool.
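This example can be replayed by brute force over the boolean encoding (a Python sketch; the variable names are ours). After one decomposition each tree variable becomes two booleans, each join equation becomes disjointness-plus-cover, and the non-zero constraint is the disjunction of a variable's decompositions:

```python
from itertools import product

def join_eq(a, b, c):
    """Boolean join a + b = c: a and b disjoint, and their union equals c."""
    return not (a and b) and (a or b) == c

# v1 + v2 = full, decomposed once: v1l + v2l = full and v1r + v2r = full,
# with non-zero clauses (v1l or v1r) and (v2l or v2r).
solutions = [
    (v1l, v1r, v2l, v2r)
    for v1l, v1r, v2l, v2r in product([False, True], repeat=4)
    if join_eq(v1l, v2l, True) and join_eq(v1r, v2r, True)
    and (v1l or v1r) and (v2l or v2r)
]
# Exactly the two mixed solutions survive, matching the text's translation.
assert sorted(solutions) == [(False, True, True, False), (True, False, False, True)]
```

The non-zero clauses prune the two assignments in which one variable would be entirely empty, leaving precisely the two depth-1 solutions given above.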
+
+# 7 Solver Implementation
+
+Here we discuss some implementation issues. Our prototype is an OCaml library that implements (an optimized version of) the algorithms from §4 to resolve the SAT and IMPL queries issued by an entailment checker such as SLEEK.
+
+**Architecture.** Our library contains four modules with clearly delimited interfaces so that each component can be independently used and improved:
+
+1. An implementation of tree shares that exposes basic operations like equality testing, tree constructors, the join operation, and left/right projection.
+
+2. The core, which reduces equation systems to boolean satisfiability. The bulk of the core module translates equation systems into boolean formulas via an optimized version of the procedures given in §4. As we will see, a considerable number of queries reduce to tautologies after repeated simplification/decomposition and can thus be discharged without the SAT/SMT solver. If we are not that lucky, then the system is reduced to a list of existentially quantified variables, a list of variables that must be strictly positive, and a list of join facts over booleans of the form $v_1 \oplus v_2 = (\bullet|v_3)$.
+
+3. The backend, tasked with interfacing with the SAT/SMT solver: it translates the output format of the core to the input format of the SAT/SMT solver and retrieves the result. Our backend is quite lightweight, so changing the underlying solver is a breeze. We provide backends to MiniSat [5] and Z3 [3]; each adds some final solver-specific optimizations.
+
+4. A frontend: although the prover can be used as an OCaml library, we believe users may also want to query it as a standalone program. We provide a module for parsing input files and calling the core module.
+---PAGE_BREAK---
+
+*Evaluation A: SLEEK embedding.* Our OCaml library is designed to be easily incorporated into a general verification system. Accordingly, we tested our implementation by incorporating it into the SLEEK separation logic entailment prover and comparing its performance with our previous attempt at a share prover [9, §8.1]. That prover attempted to find solutions by iteratively bounding the range of variables and trying to reach a fixed point; for example, from $\bullet\circ \oplus x = y$ it would deduce $\circ \le x \le \circ\bullet$ and $\bullet\circ \le y$. The resulting highly incomplete solver was unable to prove most entailments containing more than one share variable, even for many extremely simple examples such as $v_1 \oplus v_2 = v_3 \vdash v_2 \oplus v_1 = v_3$.
+
+We denote the implementation of the method presented here as ShP (Share Prover), and use BndP (Bound Prover) for the previous prover and present our results in Table 1. In the first column, we name our tests, which are broken into three test groups. The next five columns deal with the SAT queries generated by the tests, and the final five columns with the IMPL queries.
+
+The first two test groups were developed for BndP in [9] and so the share problems they generate are not particularly difficult. The first four tests verify increasingly precise properties of a short (32-line) concurrent program in HIP, which calls SLEEK, which then calls BndP/ShP. In either case, the number of calls is the same and is given in the column labeled “call no.”; e.g., barrier-weak requires 116 SAT checks and 222 IMPL checks.
+
+The columns labeled “BndP (ms)” contain the cumulative time in milliseconds to run the BndP checker on all the queries in the associated test, e.g., barrier-weak spends 0.4ms to verify 116 SAT problems and 2.1ms to verify 222 IMPL checks. BndP may be highly incomplete, but at least it is rapidly highly incomplete. The columns labeled “ShP” contain the cumulative time in milliseconds to run the ShP checker, e.g., barrier-weak spends 610ms verifying 116 SAT problems and 650ms verifying 222 IMPL problems. Obviously this is quite a bit slower, but part of the context is that the rest of HIP/SLEEK takes approximately 3,000ms on each of the first four tests—in other words, ShP, although much slower than BndP, is still considerably faster than the rest of HIP/SLEEK.
+
+The remaining columns shed some light on what is going on; “SAT no.” gives the number of queries that ShP actually submitted to the underlying SAT solver. For example, barrier-weak submitted 73 out of 116 queries to the underlying solver for SAT and 42 out of 222 queries to the underlying solver for IMPL; the remaining 43+180 queries were solved during simplification/decomposition. Finally “SAT (ms)” gives the total amount of time spent in the underlying SAT solver itself; in every case this is the dominant timing factor. While it is not surprising that the SAT solver takes a certain amount of time to work its mojo, we suspect that most of the time is actually spent with process startup/teardown and hypothesize that performance would improve considerably with some clever systems engineering. Of course, another way to improve the timings in practice is to run BndP first and only resort to ShP when BndP gets confused.
+
+Tests five through nine were also developed for BndP, but bypass HIP to test certain parts of SLEEK directly. Observe that when the underlying solver is not called, ShP is quite fast, although still considerably slower than BndP.
+---PAGE_BREAK---
+
+| test | SAT call no. | SAT BndP (ms) | SAT ShP (ms) | SAT no. | SAT (ms) | IMPL call no. | IMPL BndP (ms) | IMPL ShP (ms) | IMPL SAT no. | IMPL SAT (ms) |
+|---|---|---|---|---|---|---|---|---|---|---|
+| barrier-weak | 116 | 0.4 | 610 | 73 | 530 | 222 | 2.1 | 650 | 42 | 450 |
+| barrier-strong | 116 | 0.6 | 660 | 73 | 510 | 222 | 2.2 | 788 | 42 | 460 |
+| barrier-paper | 116 | 0.7 | 664 | 73 | 510 | 216 | 2.2 | 757 | 42 | 460 |
+| barrier-paper-ex | 114 | 0.8 | 605 | 71 | 520 | 212 | 2.3 | 610 | 40 | 430 |
+| fractions | 63 | 0.1 | 0.1 | 0 | 0 | 89 | 0.1 | 110 | 11 | 110 |
+| fractions1 | 11 | 0.1 | 0.1 | 0 | 0 | 15 | 0.1 | 31.3 | 3 | 30 |
+| barrier | 68 | 0.1 | 0.9 | 0 | 0 | 174 | 1.2 | 3.9 | 0 | 0 |
+| barrier3 | 36 | 0.2 | 0.1 | 0 | 0 | 92 | 0.2 | 2.2 | 0 | 0 |
+| barrier4 | 59 | 0.1 | 0.7 | 0 | 0 | 140 | 0.9 | 2.4 | 0 | 0 |
+| read_ops | | FAIL | 210 | 14 | 208 | | FAIL | 317 | 9 | 150 |
+| construct | | FAIL | 70 | 4 | 65 | | FAIL | 880 | 17 | 270 |
+| join_ent | | FAIL | 70 | 3 | 30 | | FAIL | 50 | 3 | 48 |
+
+**Table 1.** Experimental timing results
+
+On the other hand, even if the total time is reasonable, what is the point of advocating a slower prover unless it can verify things the faster prover cannot? The tenth test tries to verify a simple 25-line **sequential** program whose verification uses fractional shares; we write FAIL to indicate that BndP is unable to verify the queries. Finally, the eleventh and twelfth tests bypass HIP and instruct SLEEK to check entailments that BndP is unable to help verify.
+
+For brevity, we report here the timings obtained only with the Z3 backend.
+
+Usually, choice of backend does not make much difference, but in a few cases, e.g. read_ops and join_ent, choosing MiniSat can degrade the performance by a factor of 10. We leave the investigation of this behavior for future work.
+
+*Evaluation B: Standalone.* While verifying programs and their associated separation logic entailments is the main goal, it is not so easy to casually develop HIP and SLEEK input files that exercise share provers aggressively. We designed a benchmark of 53 SAT and 50 IMPL queries, many of which we specifically designed to stress a share prover in various tricky ways, including heavily skewed tree constants, evil mixes of non-zero variables, deep heterogeneous tree constants, numerous unconstrained variables, and a number of others.
+
+ShP solved the entire test suite in 1.4s; 24 SAT checks and 18 IMPL checks reached the underlying solver. BndP could solve fewer than 10% of the queries.
+
+# 8 Related and Future Work
+
+Simpler fractional permissions are used in a variety of logics [2, 1] and verification tools [10]. Their use is by no means restricted to separation logic, as indicated by CHALICE [6]. Despite the simpler domain, and the associated loss of useful technical properties, we could find no completeness claims in the literature. It is our hope that other program verification tools will decide to incorporate more sophisticated share models now that they can use our solver.
+---PAGE_BREAK---
+
+In the future we would like to improve the performance of our tool by trying to mix the sound but incomplete bounds-based method [9] with the techniques described here; make a number of performance-related engineering enhancements, integrate the $\bowtie$ operation, and develop a mechanically-verified implementation.
+
+# 9 Conclusion
+
+We have shown how to extract a system of equations over a sophisticated fractional share model from separation logic formulae. We have developed a solver for the equation systems and proven that the associated problems are decidable. We have integrated our solver into the HIP/SLEEK verification toolset and benchmarked its performance to show that the system is usable in practice.
+
+## References
+
+1. Richard Bornat, Cristiano Calcagno, Peter O'Hearn, and Matthew Parkinson. Permission accounting in separation logic. In *POPL*, pages 259–270, 2005.
+
+2. John Boyland. Checking interference with fractional permissions. In *SAS*, pages 55–72, 2003.
+
+3. Leonardo de Moura and Nikolaj Bjørner. Z3: An efficient SMT solver. In *TACAS*, 2008.
+
+4. Robert Dockins, Aquinas Hobor, and Andrew W. Appel. A fresh look at separation algebras and share accounting. In *APLAS*, pages 161–177, 2009.
+
+5. N. Eén and N. Sörensson. An extensible SAT-solver. In *SAT*, pages 502–508, 2003.
+
+6. Stefan Heule, K. Rustan M. Leino, Peter Müller, and Alexander J. Summers. Fractional permissions without the fractions. In *FTfJP*, 2011.
+
+7. Aquinas Hobor. *Oracle Semantics*. PhD thesis, Princeton University, Department of Computer Science, Princeton, NJ, October 2008.
+
+8. Aquinas Hobor and Cristian Gherghina. Barriers in concurrent separation logic. In *ESOP*, pages 276–296, 2011.
+
+9. Aquinas Hobor and Cristian Gherghina. Barriers in concurrent separation logic: Now with tool support! *Logical Methods in Computer Science*, 8(2), 2012.
+
+10. Bart Jacobs, Jan Smans, Pieter Philippaerts, Frederic Vogels, Willem Penninckx, and Frank Piessens. Verifast: A powerful, sound, predictable, fast verifier for C and Java. In *NASA Formal Methods*, volume 6617, pages 41–55, 2011.
+
+11. Huu Hai Nguyen, Cristina David, Shengchao Qin, and Wei-Ngan Chin. Automated verification of shape and size properties via separation logic. In *VMCAI*, pages 251–266, 2007.
+
+12. Matthew Parkinson. *Local Reasoning for Java*. PhD thesis, University of Cambridge, 2005.
+
+13. John C. Reynolds. Separation logic: A logic for shared mutable data structures. In *LICS*, pages 55–74, 2002.
+
+14. Jules Villard. personal communication, 2012.
+
+15. Jules Villard, Étienne Lozes, and Cristiano Calcagno. Tracking heaps that hop with Heap-Hop. In *TACAS*, pages 275–279, 2010.
\ No newline at end of file
diff --git a/samples/texts_merged/5137227.md b/samples/texts_merged/5137227.md
new file mode 100644
index 0000000000000000000000000000000000000000..bdfcaeacc11bab99fabcc09434f52d3dfc706227
--- /dev/null
+++ b/samples/texts_merged/5137227.md
@@ -0,0 +1,897 @@
+
+---PAGE_BREAK---
+
+# Truncated cluster algebras and Feynman integrals with algebraic letters
+
+Song He$^{a,b,c,d,e}$ Zhenjie Li$^{a,d}$ Qinglin Yang$^{a,d}$
+
+$^a$CAS Key Laboratory of Theoretical Physics, Institute of Theoretical Physics, Chinese Academy of Sciences, Beijing 100190, China
+
+$^b$School of Fundamental Physics and Mathematical Sciences, Hangzhou Institute for Advanced Study, UCAS, Hangzhou 310024, China
+
+$^c$ICTP-AP International Centre for Theoretical Physics Asia-Pacific, Beijing/Hangzhou, China
+
+$^d$School of Physical Sciences, University of Chinese Academy of Sciences, No.19A Yuquan Road, Beijing 100049, China
+
+$^e$Peng Huanwu Center for Fundamental Theory, Hefei, Anhui 230026, P. R. China
+E-mail: songhe@itp.ac.cn, lizhenjie@itp.ac.cn, yangqinglin@itp.ac.cn
+
+**ABSTRACT:** We propose that the symbol alphabet for classes of planar, dual-conformal-invariant Feynman integrals can be obtained as truncated cluster algebras purely from their kinematics, which correspond to boundaries of (compactifications of) $G_+(4, n)/T$ for the $n$-particle massless kinematics. For one-, two-, three-mass-easy hexagon kinematics with $n = 7, 8, 9$, we find finite cluster algebras $D_4, D_5$ and $D_6$ respectively, in accordance with previous results on alphabets of these integrals. As the main example, we consider hexagon kinematics with two massive corners on opposite sides and find a truncated affine $D_4$ cluster algebra whose polytopal realization is a co-dimension 4 boundary of that of $G_+(4, 8)/T$ with 39 facets; the normal vectors for 38 of them correspond to $g$-vectors and the remaining one gives a limit ray, which yields an alphabet of 38 rational letters and 5 algebraic ones with the unique four-mass-box square root. We construct the space of integrable symbols with this alphabet and physical first-entry conditions, whose dimension can be reduced using conditions from a truncated version of cluster adjacency. Already at weight 4, by imposing last-entry conditions inspired by the $n=8$ double-pentagon integral, we are able to uniquely determine an integrable symbol that gives the algebraic part of the most generic double-pentagon integral. Finally, we locate in the space the $n=8$ double-pentagon ladder integrals up to four loops using differential equations derived from Wilson-loop $d$ log forms, and we find a remarkable pattern in the appearance of algebraic letters.
+---PAGE_BREAK---
+
+# Contents
+
+| Section | Title | Page |
+| --- | --- | --- |
+| 1 | Introduction and review | 1 |
+| 1.1 | Review of cluster algebras and polytopes from stringy integrals | 6 |
+| 2 | Truncated cluster algebras for Feynman integrals | 9 |
+| 2.1 | Warm-up examples: truncated $D_4$, $D_5$ and $D_6$ cluster algebras | 10 |
+| 2.2 | Truncated affine $D_4$ cluster algebras | 14 |
+| 3 | The cluster function space and double-penta ladders to four loops | 16 |
+| 3.1 | First entries, cluster adjacency and algebraic letters | 16 |
+| 3.2 | Double-penta-ladders: last entries, differential equations etc. | 20 |
+| 3.3 | Locating the integrals and the pattern for algebraic letters | 22 |
+| 4 | Conclusion and Discussions | 25 |
+
+## 1 Introduction and review
+
+Recent years have witnessed enormous progress in computing and understanding analytic structures of scattering amplitudes in QFT. Not only have these developments greatly pushed the frontier of perturbative calculations relevant for high energy experiments, but they also offer deep insights into the theory itself and exhibit surprising connections with mathematics. An outstanding example is $\mathcal{N} = 4$ super-Yang-Mills theory (SYM), where one can perform calculations that were unimaginable before and discover rich mathematical structures underlying them. For example, the positive Grassmannian [1] and the amplituhedron [2] have provided a new geometric formulation for its planar integrand to all loop orders.
+
+A related direction, which we focus on in this paper, concerns the deep connection between singularities of loop amplitudes in planar SYM and cluster algebras related to positive Grassmannians [3]. It was first discovered in [1] that Grassmannian cluster algebras [4] naturally appear from the quivers dual to the plabic graphs that compute the loop integrand of the theory. Remarkably, it has been realized in [1] that cluster algebras of the Grassmannian $G(4, n)$ are directly relevant for branch cuts of loop amplitudes with $n$ particles. More precisely, the $\mathcal{A}$ coordinates of $G(4, n)$ cluster algebras are related to symbol [5, 6] letters of amplitudes: the 9 letters of six-particle amplitudes and the 42 letters of seven-particle ones are nicely explained by the $A_3$ and $E_6$ cluster algebras, respectively $^1$; they have been exploited in the bootstrap program to impressively high
+
+¹The rank of the cluster algebra is given by the dimension of the kinematic space parameterized by momentum twistors, which is $3(n-5)$ for $G_+(4, n)/T$ due to dual conformal symmetry.
+---PAGE_BREAK---
+
+orders [7–18]. Perhaps even more surprisingly, cluster algebras seem to dictate how different singularities of amplitudes are related to each other, a phenomenon known as “cluster adjacency” [14, 19, 20], which is closely related to the so-called extended Steinmann relations [12, 13, 16, 18]. For $n = 6, 7$, all known amplitudes exhibit a remarkable pattern that only $\mathcal{A}$ coordinates that belong to the same cluster can appear adjacent in the symbol.
+
+Beyond $n = 6, 7$, the Grassmannian cluster algebras of $G(4, n)$ for $n \ge 8$ become infinite, so it is an important question how to truncate them to a finite symbol alphabet. Moreover, as already seen for one-loop N$^2$MHV, amplitudes with $n \ge 8$ generally involve letters that cannot be expressed as rational functions of Plücker coordinates of the kinematics $G(4, n)/T$; more non-trivial algebraic (= non-rational throughout the paper) letters appear in new computations based on $\bar{Q}$ equations [21] for two-loop NMHV amplitudes for $n=8$ and $n=9$ [22, 23]. This means that in addition to the truncation, new ingredients are needed in the context of Grassmannian cluster algebras to explain these and more algebraic letters. A solution to both problems has been proposed using the tropical positive Grassmannian [4] and related tools for $n=8$ [24–29] and very recently for $n=9$ [30, 31]$^2$. Another method for explaining the alphabet has been proposed using Yangian invariants or the associated collections of plabic graphs [39–42].
+
+On the other hand, $\mathcal{N} = 4$ SYM has proven to be an extremely fruitful laboratory for the study of Feynman integrals (c.f. [43–48] and references therein). Remarkably, the connection to cluster algebras seems to extend to individual Feynman integrals as well, e.g. the same $A_3$ and $E_6$ control $n=6, 7$ multi-loop integrals in $\mathcal{N}=4$ SYM [19, 45]. Very recently, cluster algebra structures have been discovered for Feynman integrals beyond those in planar $\mathcal{N}=4$ SYM [49]: there is strong evidence for a $C_2$ cluster algebra and adjacency for four-point Feynman integrals with an off-shell leg, various cluster-algebra alphabets have been found for one-loop integrals, and there is a general five-particle alphabet which plays an important role in recent two-loop computations [50–52]. Apart from the connection to cluster algebras, knowledge of the alphabet (and further information) can be used for bootstrapping Feynman integrals [46, 53] (see also [54]). In [55], we identified cluster algebras and certain adjacency conditions for a class of finite, dual conformal invariant (DCI) [56, 57] Feynman integrals to high loops, based on the recently-proposed Wilson-loop $d \log$ representation [58] (see [47] for a closely-related Feynman-parameter representation). For ladder integrals with possible “chiral pentagons” on one or both ends (without any square roots), we find a sequence of cluster algebras $D_2, D_3, \dots, D_6$ for their alphabets, depending on $n$ and the kinematic configurations.
+
+$^2$In this paper, we consider the polytopal realization of $G_+(4, n)/T$ as explicitly computed in [32] using a Minkowski sum based on stringy integrals [33], which is dual to the tropical positive Grassmannian. We will not consider the tropical Grassmannian explicitly, though the latter can be recovered from our polytope easily. Also see [34–38] for recent studies in a different context.
+---PAGE_BREAK---
+
+Note that some integrals share the same (or almost the same) alphabet, such as $A_3$ or $E_6$, as the amplitudes for $n = 6, 7$ since the kinematics is just that of the $n$ massless particles; other integrals depend on less kinematic variables, e.g. double-penta-ladder integrals for $n = 7$ (with two legs on a corner) depend on 4 out of 6 variables and the alphabet turns out to be $D_4$. What is non-trivial about results in [55] is that for such a class of Feynman integrals we always find a cluster algebra, which is a sub-algebra of that of $G(4, n)$ (as opposed to a random subset), which is already interesting for $n = 7$ but more so for $n = 8, 9$ etc.³. Another intriguing observation is that the alphabet and cluster algebra structure of these DCI Feynman integrals seem to be independent of details such as numerators or loop orders, but controlled by the kinematics only. It is natural to ask if one can predict the alphabet and possible adjacency conditions for these DCI integrals, as well as those with algebraic letters, from cluster algebra considerations. In this paper, we take a first step in making prediction for the alphabet of DCI Feynman integrals from cluster algebras associated with their kinematics, which correspond to boundaries $G_+(4, n)/T$. We propose that for certain kinematics which can be parametrized by a positroid cell of $G_+(4, n)$, the candidate alphabet for Feynman integrals is given by a cluster algebra obtained from an initial quiver which is the dual of the plabic graph; we are done if the resulting cluster algebra is finite (these include the type-$D$ cases in [55]), but generically it becomes infinite just as $G(4, n)$ cluster algebra for $n \ge 8$, and we need truncation. 
+The procedure is equivalent to that in [25–27] (see also [32]): we construct a polytopal realization of this boundary of $G_+(4, n)/T$ by taking the Minkowski sum of the Newton polytopes of the (non-vanishing) Plücker coordinates, and the facets of the polytope (dual to the rays of the tropical positive Grassmannian) teach us how to truncate the cluster algebra and possibly include algebraic letters. We will loosely refer to the alphabet that comes from this procedure as a *truncated cluster algebra* associated with the kinematics.⁴ We expect that the truncation using the Minkowski sum or tropicalization of all non-vanishing Plücker coordinates commutes with taking boundaries in $G_+(4, n)/T$; thus, alternatively, we could take the truncated cluster algebra of the latter and go to the corresponding boundary. However, the computation for $G_+(4, n)/T$ becomes extremely complicated beyond $n = 8$, so for low-dimensional boundaries it makes no sense to do the full computation and then go down. Our proposal makes it more practical to predict the symbol alphabets of higher-point Feynman integrals, especially those with more massive corners (whose
+
+³At one or two loops, we usually only see a subset of the full alphabet, but at high enough loop order the alphabet becomes stable and corresponds exactly to, e.g., those type-$D$ cluster algebras.
+
+⁴It is important to note that the compactification introduced by the Minkowski sum/tropicalization always gives a truncation, even in cases with a finite cluster algebra; e.g. for $G_+(4, 7)/T$, there are various different compactifications which all give an alphabet with 42 letters, but the polytopes/tropical fans are different! In this paper we stick to the analog of the Speyer-Williams fan by taking the Minkowski sum of all (non-vanishing) Plücker coordinates.
+
+kinematics depend on fewer variables). Moreover, boundaries of $G_+(4, n)/T$ and the corresponding truncated cluster algebras deserve investigation in their own right (cf. [59]); a systematic study of these boundaries is beyond the scope of this paper, and we will illustrate our proposal with a few examples which can be applied to the classes of Feynman integrals we are interested in. In particular, we find a co-dimension 4 boundary of $G_+(4, 8)/T$ whose cluster algebra is of affine $D_4$ type. The Minkowski sum gives a polytope with 39 facets, from which we obtain 38 rational letters plus 5 algebraic ones.
+
+Another motivation for our study comes from interest in Feynman integrals (and scattering amplitudes) with algebraic letters, which pose certain challenges for multi-loop computations. Using direct integration (either of $d\log$ forms [58] or in Feynman-parametrized form [47]), it is straightforward to evaluate such DCI Feynman integrals to high loop orders in cases without any square roots. The presence of algebraic letters makes the computation difficult and obscures structures, due to the need for rationalization and cancellation of spurious square roots [60, 61]; for example, the symbol of the most general double-pentagon integrals contains 16 square roots of four-mass-box type, and for each of them there are 5 (multiplicatively independent) algebraic letters. The technical difficulty involved is almost identical to that in computing two-loop NMHV amplitudes from $\bar{Q}$ equations [22, 23], and extensions to higher loops become more and more difficult. It is an interesting and difficult problem to compute (the symbol of) these integrals and amplitudes at higher loops, and to understand the structures of the results involving algebraic letters.
+
+Among the integrals we consider, arguably the simplest all-loop series involving non-trivial algebraic letters is the class of double-penta-ladder integrals $\Omega_L(1, 4, 5, 8)$ [47, 58] (as shown in the figure above); the kinematics involved can be drawn as a hexagon with two massive corners on opposite sides, which corresponds to the co-dimension 4 boundary of $G_+(4, 8)/T$. As we will explain, the $L$-loop integral can be written as a two-fold integral over the $(L-1)$-loop one (alternatively, a pair of nice first-order differential operators reduces the former to the latter). However, unlike the seven-point counterpart (or those higher-point cases without square roots), performing such integrations becomes tricky due to the presence of square roots, which also prevents us from seeing the underlying structures concerning algebraic letters. Now, equipped with the alphabet from the truncated cluster algebra (and physical discontinuity conditions), we can construct the space of all possible multiple polylogarithm (MPL) functions at
+
+symbol level, which can be further reduced by adjacency conditions, and we conjecture that this space includes all DCI integrals with this kinematics. This “bootstrap” strategy can be viewed as an extension of results in [46] to include algebraic letters⁵. Already at weight 4, we find that, simply by imposing last-entry conditions implied by the $d\log$ form or differential equations, the part containing the square root is uniquely determined! Moreover, this weight-4 function with algebraic letters turns out to be the “seed” for (the algebraic part of) the most general $n = 12$ double-pentagon integrals: the latter can be obtained as a sum of 16 such functions with relabelled arguments; this suggests that the $n = 12$ case contains 16 such truncated cluster algebras.
+
+Moving to higher weights, we can easily determine $\Omega_L(1, 4, 5, 8)$ to four loops (as strong support for the alphabet and adjacency conditions) by imposing differential equations and boundary conditions obtained from $d\log$ forms, which circumvents the need for rationalization altogether. Furthermore, we discover a nice pattern in the algebraic letters, which confirms a conjecture we had for these integrals: at least through four loops, non-trivial algebraic letters only appear in the third entry of the symbol, with the accompanying first two entries being those of the four-mass box! Thus, for the algebraic part, the highly non-trivial procedure of performing $d\log$ integrals/solving differential equations amounts to simply “translating” the first three entries and “attaching” rational letters in subsequent entries. Similar observations have been made at the $k + \ell = 3$ level of $n = 8$ amplitudes [22], and we hope our results can provide a starting point for future investigations into similar structures of multi-loop integrals and amplitudes with algebraic letters.
+
+The rest of the paper is organized as follows. We first give a quick review of cluster algebras and polytopes from certain stringy integrals, which will be used in our study of truncated cluster algebras. In sec. 2, we describe our procedure for predicting the alphabets of Feynman integrals: after presenting warm-up examples for the finite cases $D_4, D_5$ and $D_6$, we give the truncated affine $D_4$ as the alphabet for “two-mass-opposite” hexagon kinematics. It consists of 38 rational letters associated with $g$-vectors, and 5 algebraic ones associated with the limit ray (containing the unique four-mass-box square root). In sec. 3, we move to constructing the cluster function space at symbol level and determining $\Omega_L(1, 4, 5, 8)$ inside this space. With first-entry conditions and constraints from a truncated version of cluster adjacency, we obtain the reduced space up to weight 6, and already at weight 4 one can determine a unique function responsible for the algebraic part of the most generic double-pentagon integrals. We then discuss constraints for $\Omega_L(1, 4, 5, 8)$, including last entries, differential equations, etc. Finally, we determine $\Omega_L(1, 4, 5, 8)$ up to four loops and discuss the pattern concerning the algebraic letters.
+
+⁵This is in spirit a bit different from bootstrapping amplitudes/form factors since in principle we have Wilson-loop $d$ log forms/differential equations which determine the answer; in some sense all we need to do is to locate the solution.
+
+## 1.1 Review of cluster algebras and polytopes from stringy integrals
+
+Let us first give a lightning review of cluster algebras [62–65], covering only the
+ingredients needed in this paper. Cluster algebras are commutative alge-
+bras with a particular set of generators $\mathcal{A}_i$, known as the cluster $\mathcal{A}$-coordinates; they
+are grouped into *clusters*, which are subsets of rank $n$. From an initial cluster, one
+can construct all the $\mathcal{A}$-coordinates by *mutations* acting on the $\mathcal{A}$'s (so-called frozen
+coordinates, or coefficients, can also be included; these do not mutate).
+
+Cluster variables in a cluster are related by arrows, which form a quiver $Q$
+(without 2-cycles, i.e. pairs of opposite arrows $i \to j$ and $j \to i$). We then associate with $Q$ a skew-
+symmetric exchange matrix $B(Q) = (b_{ij})$ with $b_{ij} = -b_{ji} = l$ whenever there are $l$
+arrows from vertex $i$ to vertex $j$. Suppose we mutate at vertex $k$; then the exchange
+matrix of the mutated quiver reads
+
+$$
+b'_{ij} = \begin{cases} -b_{ij} & \text{if } i=k \text{ or } j=k \\ b_{ij} + \operatorname{sgn}(b_{ik}) [b_{ik} b_{kj}]_+ & \text{otherwise,} \end{cases}
+$$
+
+where $[x]_+ := \max(x, 0)$, and the cluster variable $x_k$ on vertex $k$ is mutated to $x'_k$
+given by
+
+$$
+x'_{k} x_{k} = \prod_{i=1}^{n} x_{i}^{[b_{ik}]_{+}} + \prod_{i=1}^{n} x_{i}^{[-b_{ik}]_{+}}.
+$$
+
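To make the two mutation rules above concrete, here is a minimal Python sketch of matrix and cluster-variable mutation (the code, function names and the numerical $A_2$ run are our own illustration, not from the original computation):

```python
from fractions import Fraction

def sgn(x):
    return (x > 0) - (x < 0)

def mutate(B, x, k):
    """Mutate the exchange matrix B and cluster variables x at vertex k:
    b'_ij = -b_ij if i = k or j = k, else b_ij + sgn(b_ik) [b_ik b_kj]_+,
    and x'_k x_k = prod_i x_i^[b_ik]_+  +  prod_i x_i^[-b_ik]_+ ."""
    n = len(x)
    newB = [[-B[i][j] if k in (i, j)
             else B[i][j] + sgn(B[i][k]) * max(B[i][k] * B[k][j], 0)
             for j in range(n)] for i in range(n)]
    plus, minus = Fraction(1), Fraction(1)
    for i in range(n):
        plus *= x[i] ** max(B[i][k], 0)
        minus *= x[i] ** max(-B[i][k], 0)
    newx = list(x)
    newx[k] = (plus + minus) / x[k]
    return newB, newx

# A2 example (quiver 1 -> 2) with a numerical initial cluster (2, 3):
B = [[0, 1], [-1, 0]]
x = [Fraction(2), Fraction(3)]
for k in [0, 1, 0, 1, 0]:  # five alternating mutations
    B, x = mutate(B, x, k)
print(x)
```

The five alternating mutations illustrate the rank-2 periodicity of $A_2$: the initial cluster returns with its two entries swapped.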
+In general, mutations generate an infinite number of cluster variables. As classified in [63], only cluster algebras whose quiver can be mutated from a Dynkin diagram of type $A$, $B$, $C$, $D$, $E$, $F$, $G$ have a finite number of cluster variables; these are known as cluster algebras of finite type. The numbers of cluster variables $N$ for the finite types read:
+
+$$
+\begin{align*}
+N(A_n) &= n(n+3)/2, & N(B_n) &= N(C_n) = n(n+1), & N(D_n) &= n^2, \\
+N(E_6) &= 42, & N(E_7) &= 70, & N(E_8) &= 128, & N(F_4) &= 28, & N(G_2) &= 8.
+\end{align*}
+$$
+
+According to [65], one can further assign a coefficient to each vertex, where the
+coefficient is a monomial in some given free variables. The mutation rule for the
+cluster variable $x_k$ and the coefficient $y_k$ on vertex $k$ then reads
+
+$$
+y_j' = \begin{cases} y_k^{-1} & \text{if } j=k, \\ y_j y_k^{[b_{kj}]_+} (y_k \oplus 1)^{-b_{kj}} & \text{if } j \neq k, \end{cases} \quad (1.1)
+$$
+
+and
+
+$$
+x'_{k}x_{k} = \frac{y_{k}}{y_{k} \oplus 1} \prod_{i=1}^{n} x_{i}^{[b_{ik}]_{+}} + \frac{1}{y_{k} \oplus 1} \prod_{i=1}^{n} x_{i}^{[-b_{ik}]_{+}}, \quad (1.2)
+$$
+
+where the addition $\oplus$ for monomials of free variables $\{u_i\}$ is defined by
+
+$$
+\prod_j u_j^{a_j} \oplus \prod_j u_j^{b_j} = \prod_j u_j^{\min(a_j, b_j)}.
+$$
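In code, this $\oplus$ is just a componentwise minimum on exponent vectors; a minimal sketch (our own illustration):

```python
def trop_sum(*monomials):
    """The tropical addition ⊕ on monomials, each given as a tuple of
    exponents: componentwise minimum."""
    return tuple(min(es) for es in zip(*monomials))

# u1^2 u2  ⊕  u1 u2^3  =  u1 u2
print(trop_sum((2, 1), (1, 3)))  # -> (1, 1)
# y1 ⊕ 1 = 1 whenever y1 has non-negative exponents
print(trop_sum((1, 0), (0, 0)))  # -> (0, 0), i.e. the monomial 1
```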
+
+If the coefficients of a cluster are exactly the chosen free variables, we say that
+this cluster has *principal coefficients*. Starting from a cluster $\{x_i\}_{i=1,\dots,n}$
+with exchange matrix $B = (b_{ij})$ and principal coefficients $\{y_i\}_{i=1,\dots,n}$, the cluster
+variable on vertex $k$ after a sequence of mutations $\mathbf{v}$ is a rational function of the
+initial cluster variables and coefficients
+
+$$
+X_{v,k} = X_{v,k}(x_1, \dots, x_n; y_1, \dots, y_n).
+$$
+
+Furthermore, if one defines a $\mathbb{Z}^n$-grading on $\mathbb{Z}[x_1^{\pm 1}, \dots, x_n^{\pm 1}; y_1, \dots, y_n]$ by $\deg(x_i) = e_i$
+(1 in the *i*-th component and 0 in the rest) and $\deg(y_j) = -\sum_i b_{ij}e_i$, then $X_{v,k}$
+is homogeneous, and its degree $g_{v,k} = (g_{v,k}^1, \dots, g_{v,k}^n) \in \mathbb{Z}^n$ is called its *g*-vector.
+Another useful polynomial related to $X_{v,k}$ is the *F*-polynomial
+
+$$
+F_{v,k}(y_1, \ldots, y_n) := X_{v,k}(1, \ldots, 1; y_1, \ldots, y_n). \quad (1.3)
+$$
+
+Once the $F$-polynomial and $g$-vector are known, we can recover the whole $X_{\mathbf{v},k}$ by
+
+$$
+X_{\mathbf{v},k}(x_1, \dots, x_n; y_1, \dots, y_n) = x_1^{g_{\mathbf{v},k}^1} \cdots x_n^{g_{\mathbf{v},k}^n} F_{\mathbf{v},k} \left( y_1 \prod_i x_i^{b_{i1}}, \dots, y_n \prod_i x_i^{b_{in}} \right). \quad (1.4)
+$$
+
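As a quick numerical sanity check of eq. (1.4), consider the $A_2$ case with $b_{12} = 1 = -b_{21}$ (our own illustration): the first mutated cluster variable $x'_1 = (x_2 + y_1)/x_1$ has $F$-polynomial $F(u_1, u_2) = 1 + u_1$ and $g$-vector $(-1, 1)$, and eq. (1.4) indeed reassembles it:

```python
from fractions import Fraction as Q

def X_from_g_and_F(x1, x2, y1, y2):
    """eq. (1.4) for g = (-1, 1), F(u1, u2) = 1 + u1 and the A2 exchange
    matrix b11 = b22 = 0, b21 = -1, b12 = 1."""
    u1 = y1 * x2**-1   # y1 * x1^b11 * x2^b21
    u2 = y2 * x1       # y2 * x1^b12 * x2^b22 (unused: F has no u2 dependence)
    return x1**-1 * x2 * (1 + u1)

x1, x2, y1, y2 = Q(2), Q(3), Q(5), Q(7)
assert X_from_g_and_F(x1, x2, y1, y2) == (x2 + y1) / x1  # both equal 4
print(X_from_g_and_F(x1, x2, y1, y2))
```

Exact rational arithmetic avoids any floating-point ambiguity in the check.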
+There is even a conjecture [65] for reading off the *g*-vector from the *F*-polynomial alone: if $F_{\mathbf{v},k} \neq 1$, then
+
+$$
+y_1^{g_{\mathbf{v},k}^1} \cdots y_n^{g_{\mathbf{v},k}^n} = \frac{F_{\mathbf{v},k}|_{\text{Trop}}(y_1^{-1}, \dots, y_n^{-1})}{F_{\mathbf{v},k}|_{\text{Trop}}(\prod_i y_i^{b_{i1}}, \dots, \prod_i y_i^{b_{in}})}, \quad (1.5)
+$$
+
+where $F_{\mathbf{v},k}|_{\text{Trop}}$ means that $+$ is replaced by $\oplus$ in the $F$-polynomial.
+
+Associated with a finite-type cluster algebra (or more generally a truncated clus-
+ter algebra), there is a natural space of polylogarithm functions, whose alphabet is
+given by the cluster variables (equivalently, they can be chosen as the $N-n$ $F$-polynomials and
+the $n$ principal coefficients). A cluster function $I^{(w)}$ [66, 67] of transcendental weight
+$w$ is defined such that its differential has the form
+
+$$
+dI^{(w)} = \sum_{i} I_{i}^{(w-1)} d \log X_{i} \tag{1.6}
+$$
+
+where $I_i^{(w-1)}$ are cluster functions of transcendental weight $w-1$ and the $X_i$ are cluster
+$\mathcal{A}$-coordinates (or $F$-polynomials). We see that the alphabet of a cluster function
+is, by definition, the corresponding cluster algebra.
+
+Already for finite-type cluster algebras, it is natural to consider the so-called
+cluster string integrals, which are “stringy canonical forms” [33] associated with clus-
+ter polytopes (they also give natural “cluster configuration spaces” [59], which will
+not be discussed here). For a finite-type cluster algebra (denoted $\Phi_n$) with prin-
+cipal coefficients $\mathbf{f} = (f_1, \cdots, f_n)$ and $F$-polynomials $F_I(\mathbf{f})$ for $I = n+1, \cdots, N$ $^6$, we
+
+⁶From here on we denote principal coefficients by $f_I \equiv F_I$ for $I = 1, \dots, n$; in our subsequent studies they can be chosen to be *face variables* of a plabic graph.
+
+define:
+
+$$
+\mathcal{I}_{\Phi_n}(\{s\}) = (\alpha')^n \int_{\mathbb{R}_{>0}^n} \prod_{i=1}^{n} d \log f_i \prod_{I=1}^{N} F_I(\mathbf{f})^{\alpha' s_I}. \quad (1.7)
+$$
+
+As $\alpha' \to 0$, the leading order of the integral computes the canonical function of the cluster polytope of type $\Phi_n$, which is nicely given by the Minkowski sum of the Newton polytopes of the $F$-polynomials. Vertices of this polytope correspond to clusters: whenever two vertices are connected by an edge, one can mutate from one cluster to the other in the cluster algebra. Furthermore, the $N(\Phi_n)$ facets of the polytope correspond to cluster variables $X_{\mathbf{v},k}(x_1, \cdots, x_n; f_1, \cdots, f_n)$, and we can compute the outward normal vectors of these facets [33, 68] in terms of the exponents of $\mathbf{f} = \{f_1, \cdots, f_n\}$, i.e. $\{s_1, \cdots, s_n\}$. Very nicely, these vectors are nothing but the corresponding $g$-vectors. Note that the $g$-vectors for the cluster variables in any given cluster span a cone: the cones for different clusters are non-overlapping, and the union of all cones (known as the cluster fan) covers the full space for any finite type.
+
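The dictionary between Minkowski sums, facets and $g$-vectors can be checked concretely in the simplest case, the $A_2$ cluster algebra, whose three non-trivial $F$-polynomials are $1+y_1$, $1+y_2$ and $1+y_1+y_1y_2$ (in one standard convention; this small self-contained check is our own illustration): the Minkowski sum of their Newton polytopes is a pentagon whose five primitive outward edge normals reproduce the five $A_2$ $g$-vectors.

```python
import itertools
import math

import numpy as np
from scipy.spatial import ConvexHull

# Newton polytopes (vertex lists of exponent vectors) of the A2 F-polynomials
newton = [
    [(0, 0), (1, 0)],          # 1 + y1
    [(0, 0), (0, 1)],          # 1 + y2
    [(0, 0), (1, 0), (1, 1)],  # 1 + y1 + y1 y2
]

# Minkowski sum: add one vertex from each summand, in all possible ways
points = {tuple(map(sum, zip(*combo))) for combo in itertools.product(*newton)}
hull = ConvexHull(np.array(sorted(points)))

# primitive outward edge normals (hull.vertices is counter-clockwise in 2D)
verts = hull.points[hull.vertices]
normals = set()
for i in range(len(verts)):
    dx, dy = verts[(i + 1) % len(verts)] - verts[i]
    g = math.gcd(int(abs(dx)), int(abs(dy)))
    normals.add((int(dy) // g, int(-dx) // g))

print(len(verts), sorted(normals))  # pentagon: 5 vertices, 5 facet normals
```

The resulting normal set $\{(1,0), (0,1), (-1,0), (0,-1), (-1,1)\}$ matches the $A_2$ $g$-vectors in this convention.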
+As mentioned, the Grassmannian cluster algebras for $G(4, 6)$ and $G(4, 7)$ are $A_3$ and $E_6$ respectively, and starting at $n = 8$ they become infinite. A natural way to truncate an infinite cluster algebra to a finite one has been proposed in [32] using similar *Grassmannian string integrals*. With a positive parametrization of $G_+(k, n)/T$, we can write the integral where the positive polynomials are instead given by all (or a reasonable subset of) the Plücker coordinates of $G_+(k, n)$. The leading order as $\alpha' \to 0$ is given by the Minkowski sum of the Newton polytopes of these polynomials, and one obtains a polytope for the compactification of $G_+(k, n)/T$ $^7$. For infinite type, e.g. $G(4, n)$ with $n \ge 8$ (or $G(3, n)$ with $n \ge 9$), by taking the normal vectors of the facets of the polytope, we truncate the infinite cluster algebra by identifying a finite set of $g$-vectors; there will also be some normal vectors which are not $g$-vectors (called exceptional rays). We remark that the truncation is not unique, since it depends on the choice of Plücker coordinates, and it is equivalent to the tropical Grassmannian method, since the normal vectors are exactly the rays when the same set of Plücker coordinates is chosen for tropicalization [24–26]. For $G(4, 8)$, if we choose the polynomials to be all Plücker coordinates, the Minkowski sum gives a polytope with 360 facets, where 356 normal vectors are $g$-vectors of $G(4, 8)$ cluster variables, and the remaining 4 are exceptional rays; if we only keep those of the form $\langle ii+1jj+1 \rangle$ and $\langle i-1ii+1j \rangle$ (which respect parity), we get a 274-facet polytope, where 272 are $g$-vectors and the other 2 are exceptional. Moreover, as we will see shortly, at least in the $G(4, 8)$ case, an exceptional ray turns out to be a *limit ray*, which naturally gives algebraic letters associated with a square root, in addition to the rational ones corresponding to $g$-vectors.
+
+⁷For $k = 2$, this is the well-known Deligne-Mumford compactification [69–71] for the moduli space $\mathcal{M}_{0,n}^+$, which gives the $(n-3)$-dimensional associahedron. For $(k,n) = (3,6), (3,7), (3,8)$ we have polytopes that are related to $D_4, E_6, E_8$ cluster algebras, respectively.
+
+## 2 Truncated cluster algebras for Feynman integrals
+
+In this section, we propose an algorithm which predicts the symbol alphabet for classes of DCI Feynman integrals with the same kinematics. Here the kinematics simply means the *m* dual points which the class of Feynman integrals universally depends on, without referring to the actual propagator structure or possible numerators. We will refer to such a kinematical configuration as an *m*-gon with certain massless and massive corners, where for each massless (massive) corner we put one (two) massless legs, with *n* legs in total for $n \ge m$ $^8$; when $n = m$, all dual points are null separated, which is the kinematics of *n* massless legs. For example, all off-shell four-point integrals relevant for four-point CFT correlators share the kinematics of an $n = 8$ square ($m = 4$, with all four corners massive), and in particular all-loop box ladder integrals belong to this class. The $n = 7, 8$ pentagon-box ladders proposed in [44] belong to the $n = 7, 8$ pentagon ($m = 5$) with two or three massive corners, respectively. It is fun to count the dimension of such kinematics with DCI: each dual point has 4 degrees of freedom, each pair of null-separated points reduces the count by one, and DCI means subtracting 15 in the end. For the two- (three-)mass pentagon, the dimension is $4 \times 5 - 3\,(2) - 15 = 2\,(3)$, as expected; the four-mass square is trickier: the kinematics is so special that one of the 15 redundancies no longer exists, so we have $4 \times 4 - 14 = 2$ dimensions, as expected $^9$.
+
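The counting just described fits in a two-line helper (our own illustration; the function name is ours):

```python
def dci_dim(m, massless_corners, redundancies=15):
    """Dimension of m-gon kinematics with dual conformal invariance:
    4 dof per dual point, minus 1 per null separation (massless corner),
    minus the conformal redundancies."""
    return 4 * m - massless_corners - redundancies

assert dci_dim(5, 3) == 2       # two-mass pentagon
assert dci_dim(5, 2) == 3       # three-mass pentagon
assert dci_dim(4, 0, 14) == 2   # four-mass square: one redundancy is absent
print(dci_dim(6, 6), dci_dim(6, 4))  # massless and two-mass hexagons -> 3 5
```

The hexagon values cross-check the boundary counting of the next paragraph: $\dim G_+(4,6)/T = 3$, and each massive corner removes 2 dimensions.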
+For $m=n=6,7$, the symbol alphabet of the amplitude and of all DCI integrals computed so far is dictated by the kinematics, and is given by $A_3$ and $E_6$ respectively. What we propose here is a natural extension to more general kinematics with $m < n$, which we identify with certain boundaries of $G_+(4, n)/T$. This first gives an equivalent way of counting: starting from $G_+(4, n)/T$, which has dimension $3(n-5)$, generically each massive corner reduces the dimension by 2. It is generally unclear how to identify which boundary of $G_+(4, n)/T$ corresponds to a given kinematics, and how to find a truncated cluster algebra for its symbol alphabet. In this paper we focus on special cases where the boundary can be identified with a positroid cell of $G_+(4, n)$ (mod torus action) [47], which can be labelled by plabic graphs.
+
+The algorithm we propose consists of the following steps; it crucially depends on the fact that the kinematics is associated with a positroid cell.
+
+* By imposing conditions on the Plücker coordinates of the *n* momentum twistors according to the kinematics, we identify a positroid cell $\Gamma$ of $G_+(4, n)$ represented by a plabic graph $G_\Gamma$, which gives a positive parametrization $Z_\Gamma$ of the kinematics (after modding out the torus action). More precisely, $Z_\Gamma(\{f\})$ depends on the internal face variables $f_i$ for $i = 1, 2, \dots, d$, where *d* is the dimension of $\Gamma/T$ (we set all but one of the external face variables to 1).
+
+⁸We can trivially add more than two legs at a massive corner, which gives higher-point Feynman integrals with the same kinematics, thus the *n* here is the minimal number of legs.
+
+⁹We thank Nima Arkani-Hamed for first explaining this to us.
+
+* We define the cluster algebra $\mathcal{A}_\Gamma$ by applying mutations from the initial quiver, which is the dual of the plabic graph. We use the face variables as principal coefficients, which parametrize the positive part of the cluster variety, and we are interested in the $F$-polynomials. We obtain a finite alphabet if the cluster algebra is of finite type.
+
+* We consider all non-vanishing Plücker coordinates (or a subset of them) of $Z_\Gamma$, which are positive polynomials of $f$'s (a subset of $F$-polynomials); we take the Minkowski sum of their Newton polytopes, which gives a polytope denoted as $P_\Gamma$. We conjecture that $P_\Gamma$ is always a boundary of the polytopal realization of $G_+(4, n)/T$ (which is dual to tropical $G_+(4, n)$).
+
+* At least a subset of the normal vectors of the facets of the polytope $P_\Gamma$ should coincide with certain $g$-vectors of $\mathcal{A}_\Gamma$, and the rational alphabet consists of the $F$-polynomials associated with these facets (as well as $f_1, \cdots, f_d$). For the exceptional normal vectors that do not correspond to $g$-vectors, we conjecture that they are associated with non-rational letters, which need to be treated differently.
+
+## 2.1 Warm-up examples: truncated $D_4$, $D_5$ and $D_6$ cluster algebras
+
+Let us begin with warm-up examples for one-, two- and three-mass easy hexagon kinematics with $n = 7, 8, 9$. We will not give details of the computation for these finite-type cases, and simply list the positroid cells given in [47] (with decorated permutations and plabic graphs), positive parametrizations of the kinematics, the polytopes from Minkowski sum and the resulting cluster algebras.
+
+**Figure 1:** One-, two-, three-mass-easy hexagon kinematics with $n = 7, 8, 9$ legs
+
+Let us first consider one-mass kinematics with dual points $(x_1, x_2, x_4, x_5, x_6, x_7)$, which should correspond to a co-dimension 2 positroid. As explained in [47], the latter can be specified by $\langle n123 \rangle = \langle 2345 \rangle = 0$, which gives the decorated permutation $\sigma = \{6, 5, 7, 8, 9, 11, 10\}$, and we find the plabic graph
+
+To mod out the torus action, we fix all but one of the external face variables to unity,
+and the resulting **Z** matrix, which positively parametrizes the kinematics, reads
+
+$$
+\begin{pmatrix}
+f_3 f_4 & (1+f_3) f_4 & 1+f_4+f_3 f_4 & 1 & 0 & 0 & 0 \\
+0 & f_1 f_2 f_4 & f_2 (1+f_1+f_1 f_4) & 1+f_2+f_1 f_2 & 1 & 0 \\
+0 & 0 & f_2 & 1+f_2 & 1 & 0 \\
+0 & 0 & 0 & 1 & 1 & 0
+\end{pmatrix}.
+$$
+
+We have drawn the dual quiver diagram of the plabic graph (ignoring all
+external faces) on the right. It is easy to see that this is a quiver for the *D*₄
+cluster algebra and, as mentioned above, the face variables correspond to principal
+coefficients assigned to each node. By applying the mutation rules we find 16 cluster
+variables, which can be identified with *f*₁, *f*₂, *f*₃, *f*₄ and 12 *F*-polynomials of the *f*'s:
+
+$$
+\begin{gathered}
+\{1 + f_1,\; 1 + f_2,\; 1 + f_3,\; 1 + f_4,\; 1 + f_2 + f_1 f_2,\; 1 + f_3 + f_1 f_3,\; 1 + f_1 + f_1 f_4,\; 1 + f_4 + f_2 f_4,\\
+1 + f_4 + f_3 f_4,\; 1 + f_2 + f_3 + f_2 f_3 + f_1 f_2 f_3,\; 1 + f_4 + f_2 f_4 + f_3 f_4 + f_2 f_3 f_4,\\
+1 + f_4 + f_2 f_4 + f_3 f_4 + f_2 f_3 f_4 + f_1 f_2 f_3 f_4\}
+\end{gathered}
+\tag{2.1}
+$$
+
+Since this is a finite type we have a finite alphabet: we conjecture that any DCI
+integral with this one-mass hexagon kinematics has a symbol alphabet of the 16
+letters.
+
+On the other hand, by computing all non-vanishing Plücker coordinates of the
+**Z** matrix, we find 15 positive polynomials, which already contain 15 of the 16
+letters above, all except $1 + f_4 + f_2 f_4 + f_3 f_4 + f_2 f_3 f_4$. Computing the
+Minkowski sum of the Newton polytopes of these 15 polynomials, we remarkably find
+a polytope with 16 facets, whose $f$-vector is
+
+$$
+\mathbf{f} = (1, 49, 99, 66, 16, 1)
+$$
+
+which is almost the $D_4$ polytope (whose $\mathbf{f} = (1, 50, 100, 66, 16, 1)$). Moreover, the
+(outward) normal vectors of all 16 facets are nothing but the $g$-vectors of the
+16 letters, which allows us to identify each letter with a facet of the polytope. Note
+
+that both the co-dimension 1 and co-dimension 2 boundaries of this polytope agree with those of the $D_4$
+polytope, but it misses one edge and one vertex (and becomes slightly non-simple).
+We can of course include the last $F$-polynomial in the Minkowski sum/tropicalization,
+which then gives exactly the $D_4$ polytope.
+
+Next we consider the two-mass-easy case with dual points $(x_1, x_2, x_4, x_5, x_7, x_8)$: the
+(co-dimension 4) positroid is given by the two conditions above together with $\langle 3456 \rangle = \langle 5678 \rangle = 0$,
+and we have the decorated permutation $\sigma = \{7, 5, 6, 9, 8, 10, 12, 11\}$ and plabic graph
+
+We obtain the following **Z** matrix as a positive parametrization (after modding out torus action):
+
+$$
+\begin{pmatrix}
+f_5 & f_5 & 1+f_5 & 1 & 0 & 0 & 0 & 0 \\
+0 & f_1 f_2 f_3 f_4 f_5 & f_2 f_4 (1+f_1+f_1 f_3+f_1 f_3 f_5) & 1+f_2+f_1 f_2+f_2 f_4+f_1 f_2 f_4+f_1 f_2 f_3 f_4 & 1+f_2+f_1 f_2 & 1 & 0 & 0 \\
+0 & 0 & f_2 f_4 & 1+f_2+f_2 f_4 & 1+f_2 & 1 & 0 & -1 \\
+0 & 0 & 0 & 1 & 1 & 1 & 1 & 0
+\end{pmatrix}
+$$
+
+The dual quiver diagram (on the right) is again one for the $D_5$ cluster algebra; by applying the mutation rules we find 25 letters, namely $f_1, \cdots, f_5$ and 20 $F$-polynomials of the $f$'s. We will not write this $D_5$ alphabet explicitly: it suffices to say that it consists of the 23 positive polynomials from all non-vanishing Plücker coordinates of the **Z** matrix above, plus two missing letters, $1+f_3$ and $1+f_2+f_5+f_2f_5+f_2f_4f_5$.
+
+To obtain a truncated cluster algebra, we take the Minkowski sum of Newton polytopes of the 23 polynomials, and we obtain a polytope with 25 facets whose $f$-vector is
+
+$f = (1, 178, 449, 408, 160, 25, 1),$
+
+which is a truncated $D_5$ polytope. The normal vectors of these facets turn out to be
+exactly the $g$-vectors of the 25 letters we find. We see that it again differs from the $D_5$
+polytope starting from co-dimension 3 boundaries, and by including the two missing
+$F$-polynomials we of course recover the $D_5$ polytope.
+
+As our last warm-up example, we consider three-mass-easy kinematics with dual
+points $(x_1, x_2, x_4, x_5, x_7, x_8)$: the (co-dimension 6) positroid is given by the four
+conditions above together with $\langle 6789 \rangle = \langle 8912 \rangle = 0$, thus we have the decorated permutation $\sigma = \{7, 5, 6, 10, 8, 9, 13, 11, 12\}$, and the plabic graph
+
+and after modding out torus action, the $Z$ matrix reads
+
+$$
+\begin{pmatrix}
+f_6 & f_6 & 1+f_6 & 1 & 0 & 0 & 0 & 0 \\
+0 & f_1 f_2 f_3 f_4 f_5 f_6 & f_3 f_5 (1+f_1+f_1 f_2+f_1 f_2 f_4+f_1 f_2 f_4 f_6) & * & 1+f_1+f_3+f_1 f_3+f_1 f_2 f_3 & 1+f_1 & 0 & -1 \\
+0 & 0 & f_3 f_5 & 1+f_3+f_3 f_5 & 1+f_3 & 1 & 0 & -1 \\
+0 & 0 & 0 & 1 & 1 & 1 & 1 & 0
+\end{pmatrix}
+$$
+
+where $* = 1+f_1+f_3+f_1f_3+f_3f_5+f_1f_2f_3+f_1f_3f_5+f_1f_2f_3f_5+f_1f_2f_3f_4f_5$. As drawn on the right, the quiver diagram is one for the $D_6$ cluster algebra, and the resulting alphabet consists of $f_1, \dots, f_6$ and 30 $F$-polynomials. Thirty-three of these 36 letters coincide with the positive polynomials from the non-vanishing Plücker coordinates of the above $Z$ matrix; the remaining 3 are $1+f_4$, $1+f_3+f_6+f_3f_6+f_3f_5f_6$, and $f_1f_3f_2^2+f_1f_3f_5f_2^2+f_1f_3f_4f_5f_2^2+f_1f_2+2f_1f_3f_2+f_3f_2+2f_1f_3f_5f_2+f_3f_5f_2+f_1f_3f_4f_5f_2+f_1+f_1f_3+f_3+f_1f_3f_5+f_3f_5+1$. By taking the Minkowski sum of the Newton polytopes of these 33 polynomials, we find a truncated $D_6$ polytope with 36 facets and $f$-vector
+
+$$
+\mathbf{f} = (1, 657, 1986, 2292, 1257, 330, 36, 1).
+$$
+
+The normal vectors agree with all the *g*-vectors of the 36 letters, but it differs from
+the *D*6 polytope starting at co-dimension 3 boundaries.
+
+As checked to at least three loops in [55], these alphabets apply to all DCI
+integrals we computed with such kinematics, including double-penta-ladder integrals
+for n = 6, 7, 8 with various possible numerators. It is remarkable that their symbol
+alphabets seem to be determined by truncated cluster algebras naturally associated
+with the kinematics.
+
+In the next subsection, we move to a more non-trivial case, where the cluster
+algebra from the dual quiver is of infinite type (affine $D_4$). The Minkowski sum
+becomes crucial here, since it provides a natural truncation that gives a finite
+(rational) alphabet, as well as limit ray(s) that give non-rational letters.
+
+## 2.2 Truncated affine $D_4$ cluster algebras
+
+The main example we are interested in is the hexagon with two massive corners on opposite sides, where we have dual points $(x_1, x_2, x_4, x_5, x_6, x_8)$. The (co-dimension 4) positroid can be obtained by $\langle 8123 \rangle = \langle 2345 \rangle = \langle 4567 \rangle = \langle 6781 \rangle = 0$, thus the decorated permutation reads $\sigma = \{6, 5, 8, 7, 10, 9, 12, 11\}$. We have a rather symmetric plabic graph, and the dual quiver diagram can be identified with one for affine $D_4$
+
+type. This is an infinite-type cluster algebra (though it is mutation finite), and we must rely on the Minkowski sum to obtain a finite alphabet. After modding out the torus action, we have the $Z$ matrix:
+
+$$ \begin{pmatrix}
+f_1 f_2 f_3 f_4 f_5 & f_1 f_2 f_3 f_4 f_5 & f_2 f_4 (-1+f_1 f_3 f_5) & -1-f_2-f_2 f_4 & -1-f_2 & -1 & 0 & 0 \\
+0 & f_3 f_4 f_5 & f_4 (1+f_3+f_3 f_5) & 1+f_4+f_3 f_4 & 1 & 0 & 0 & 0 \\
+0 & 0 & f_2 f_4 & 1+f_2+f_2 f_4 & 1+f_2 & 1 & 0 & -1 \\
+0 & 0 & 0 & 1 & 1 & 1 & 1 & 0
+\end{pmatrix} $$
+
+From all non-vanishing Plücker coordinates, we find exactly 25 positive polynomials, which we record as $W_i$ for $i = 1, \dots, 25$ (anticipating that they will be part of the full alphabet). The first 10 of them are linear in the $f$'s, which we write as
+
+$$ W_i = f_i, \quad W_{5+i} = 1+f_i, \quad \text{for } i=1,\dots,5; \tag{2.2} $$
+
+The next 8 letters are degree-2 polynomials of the form $w_{i,j} := 1 + f_j + f_i f_j$:
+
+$$ \begin{align}
+W_{11} &= w_{1,2}, & W_{12} &= w_{3,1}, & W_{13} &= w_{2,3}, & W_{14} &= w_{4,2}, \nonumber \\
+W_{15} &= w_{3,4}, & W_{16} &= w_{1,5}, & W_{17} &= w_{5,3}, & W_{18} &= w_{4,5}. \tag{2.3}
+\end{align} $$
+
+Finally, the last 7 letters involve polynomials of degree 3, 4 or 5; introducing $w_{i,j,k} := 1+f_i+f_j+f_if_j+f_if_jf_k$, we have
+
+$$ \begin{align}
+W_{19} &= w_{1,4,3}, & W_{20} &= 1+f_3(f_2+w_{2,5}), & W_{21} &= 1+f_2w_{1,4,3}, & W_{22} &= 1+f_3w_{2,5,1}, \tag{2.4} \\
+W_{23} &= 1+f_5w_{1,4,3}, & W_{24} &= 1+f_3w_{2,5,4}, & W_{25} &= 1+f_3(w_{2,5,1}+w_{3,1}f_2f_4f_5). \notag
+\end{align} $$
+
+By taking the Minkowski sum of their Newton polytopes, we obtain a 5-dimensional
+polytope with $f$-vector
+
+$$
+\mathbf{f} = (1, 280, 739, 694, 272, 39, 1),
+$$
+
+and it is easy to compute the normal vectors of these 39 facets. By comparing
+these 39 vectors with the $g$-vectors of the affine $D_4$ cluster algebra above, we see that
+38 of them correspond to $g$-vectors, and for completeness we record them here. For
+$W_1, \dots, W_5$, the $g$-vectors are $g_i = e_i$ for $i = 1, \dots, 5$, and the remaining 33, $g_i$ for
+$i = 6, 7, \dots, 38$, read:
+
+$$
+\begin{align*}
+& (-1, 0, 1, 0, 0), (1, -1, 0, 1, 0), (0, 1, -1, 0, 1), (0, 0, 1, -1, 0), (1, 0, 0, 1, -1), (0, -1, 0, 1, 0), (-1, 0, 0, 0, 0), \\
+& (0, 0, -1, 0, 1), (1, -1, 0, 0, 0), (0, 0, 0, -1, 0), (0, 0, 0, 1, -1), (0, 1, -1, 0, 0), (1, 0, 0, 0, -1), (-1, 0, 1, -1, 0), \\
+& (1, 0, -1, 1, 0), (0, -1, 0, 0, 0), (0, 0, -1, 1, 0), (0, 0, 0, 0, -1), (1, 0, -1, 0, 0), (0, 0, -1, 0, 0), (0, -1, 1, 0, 0), \\
+& (1, -1, 0, 2, -1), (0, 0, 1, 0, -1), (2, -1, 0, 1, -1), (0, -1, 1, 1, -1), (0, -1, 0, 1, -1), (1, -1, -1, 1, 0), \\
+& (1, -1, 1, 0, -1), (1, -1, 0, 0, -1), (0, -1, 1, 0, -1), (1, -2, 0, 1, -1), (1, 0, -1, 1, -1), (1, -1, 0, 1, -2)
+\end{align*}
+$$
+
+These 38 facets then give $F$-polynomials including the above 25 polynomials,
+and we find 13 additional polynomials. We denote these letters as $W_{26}, \cdots, W_{38}$.
+Note that some of the remaining ones are relabellings of what we have seen in the
+first 25 letters. For example, $W_{26} = 1 + f_2(f_1 + w_{1,4})$, $W_{27} = w_{2,5,1}$,
+$W_{28} = 1 + f_5(f_1 + w_{1,5})$, etc. All 38 rational letters are recorded in the ancillary file. Note
+that these letters can also be obtained by simply parametrizing the 356 rational
+letters of $G_+(4,8)/T$ using our $Z$ matrix. It is interesting to see that if we start with
+the smaller (rational) alphabet with 272 letters for $G_+(4,8)/T$, we obtain only 33
+letters, with $\{W_{30}, W_{33}, W_{35}, W_{36}, W_{38}\}$ missing; we will come back to this smaller
+alphabet later. It is, however, not clear to us how to directly obtain the 33 letters
+(plus algebraic ones) by a Minkowski sum; e.g. if we use the parity-invariant subset of
+non-vanishing minors of our $Z$ matrix, we obtain a polytope with only 18 facets,
+all of which correspond to rational letters, which is insufficient.
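The Minkowski-sum-of-Newton-polytopes step is easy to prototype in low dimensions. Here is a toy 2D sketch with scipy (not the 5-dimensional computation of the text; the two polynomials are arbitrary illustrations):

```python
import itertools
import numpy as np
from scipy.spatial import ConvexHull

def newton_polytope(exponents):
    """The Newton polytope is the hull of the exponent vectors."""
    return np.array(exponents, dtype=float)

def minkowski_sum(P, Q):
    """Minkowski sum of two point sets: all pairwise sums (hull taken later)."""
    return np.array([p + q for p, q in itertools.product(P, Q)])

# toy example: Newton polytopes of 1 + x + y and 1 + x*y
P = newton_polytope([(0, 0), (1, 0), (0, 1)])
Q = newton_polytope([(0, 0), (1, 1)])

hull = ConvexHull(minkowski_sum(P, Q))
print(len(hull.vertices))   # 5 vertices (= 5 facets in 2D)
# the outward facet normals (the analogue of reading g-vectors off the
# 5d polytope in the text) sit in hull.equations[:, :2]
```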
+
+There is a remaining normal vector, $g_\infty = (1,-1,0,1,-1)$. After an extensive search, it turns out not to be any $g$-vector of the infinite cluster algebra. As shown in [25], after infinite sequences of mutations on a quiver with a doubled arrow, the directions of the $g$-vectors on the two ends of the doubled arrow asymptote to a so-called *limit ray*. The difference between the two $g$-vectors at the ends of the doubled arrow stays invariant under the infinite mutations, giving the limit ray they asymptote to, which is exactly our $g_\infty$! For instance, after the mutation series {5, 1, 4} from the initial cluster, the quiver turns out to be
+---PAGE_BREAK---
+
+with $g(2) = g_2$ and $g(5) = g_{10}$. It is straightforward to check that $g_\infty = g_{10} - g_2$, and
+that the difference stays invariant under the infinite mutations.
+
+As can be computed from the algorithm in [25–27], $g_\infty$ is associated with exactly
+the square root of the unique four-mass box $(x_2, x_4, x_6, x_8)$ for this kinematics, which
+is defined as
+
+$$
+\Delta := \sqrt{(1 - u_3 - v_3)^2 - 4u_3v_3}, \quad u_3 = \frac{\langle 1234 \rangle \langle 5678 \rangle}{\langle 1256 \rangle \langle 3478 \rangle}, \quad v_3 = \frac{\langle 1278 \rangle \langle 3456 \rangle}{\langle 1256 \rangle \langle 3478 \rangle}, \quad (2.5)
+$$
+
+where the two cross-ratios can be expressed using the letters as $u_3 = 1/W_{25}$, $v_3 = W_1W_2W_3^2W_4W_5/W_{25}$. It is convenient to introduce the two roots $\alpha_\pm = \frac{1}{2}(1+u_3-v_3\pm\Delta)$ (such that $\alpha_+ - \alpha_- = \Delta$), which appear in the (second entry of the) symbol of the famous four-mass box
+
+$$
+S[F(x_2, x_4, x_6, x_8)] = -\frac{1}{2} (v_3 \otimes L_1 + u_3 \otimes L_2),
+$$
+
+where the two simplest *algebraic letters* are denoted as $L_1 = \frac{\alpha_+}{\alpha_-}$ and $L_2 = \frac{1-\alpha_-}{1-\alpha_+}$.
+In addition, we find infinite sequences of mutations which produce these and other
+algebraic letters, similar to what was done in [25, 26]. The upshot is that we find a
+space of 5 multiplicatively independent algebraic letters: $L_1, L_2$ and
+
+$$
+L_3 = \frac{W_{17}^{-1} - \alpha_{-}}{W_{17}^{-1} - \alpha_{+}}, \quad L_4 = \frac{W_{13}/W_{25} - \alpha_{-}}{W_{13}/W_{25} - \alpha_{+}}, \quad L_5 = \frac{(1 - W_1 W_2 W_3)^{-1} - \alpha_{+}}{(1 - W_1 W_2 W_3)^{-1} - \alpha_{-}}. \quad (2.6)
+$$
+
+It is remarkable that this is precisely the 5-dimensional space of algebraic letters
+found for double-pentagon integral $\Omega_2(1, 4, 5, 8)$ [60]!
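The algebra of the roots $\alpha_\pm$ used above is the standard four-mass-box one and can be verified symbolically; a quick sympy check:

```python
import sympy as sp

u3, v3 = sp.symbols('u3 v3', positive=True)
Delta = sp.sqrt((1 - u3 - v3)**2 - 4*u3*v3)
ap = (1 + u3 - v3 + Delta)/2   # alpha_+
am = (1 + u3 - v3 - Delta)/2   # alpha_-

assert sp.simplify(ap - am - Delta) == 0
assert sp.simplify(ap*am - u3) == 0              # alpha_+ alpha_- = u3
assert sp.simplify((1 - ap)*(1 - am) - v3) == 0  # (1-a_+)(1-a_-) = v3
```

In particular $\alpha_+\alpha_- = u_3$ and $(1-\alpha_+)(1-\alpha_-) = v_3$, the usual $z\bar z$-type relations for a four-mass box.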
+
+# 3 The cluster function space and double-penta ladders to four loops
+
+## 3.1 First entries, cluster adjacency and algebraic letters
+
+Having obtained the alphabet with 38 rational letters and 5 non-rational ones, it
+is natural to construct the space of cluster functions, and we will content ourselves
+---PAGE_BREAK---
+
+with first building all integrable symbols. There are two important constraints we can impose: first, we are interested in symbols whose first entries consist of only physical discontinuities, which can be chosen to be 5 independent space-time cross-ratios. Moreover, we will impose cluster adjacency conditions, i.e. only letters that appear in the same cluster (of the truncated cluster algebra) can be adjacent to each other in the symbol.
+
+As discussed in [55], the 5 independent cross-ratios which can appear on the first entry are $u_3, v_3$ defined above, as well as the following three:
+
+$$u_1 = \frac{\langle 1245 \rangle \langle 5681 \rangle}{\langle 1256 \rangle \langle 4581 \rangle} = \frac{1}{W_8}, \quad u_2 = \frac{\langle 3481 \rangle \langle 4578 \rangle}{\langle 3478 \rangle \langle 4581 \rangle} = \frac{W_{13}W_{17}}{W_8W_{25}}, \quad u_4 = \frac{\langle 1234 \rangle \langle 4581 \rangle}{\langle 1245 \rangle \langle 3481 \rangle} = \frac{W_8}{W_{17}}$$
+
+With the alphabet and first entries, we are ready to build functions or integrable symbols, starting from $\log(u_1)$, $\log(u_2)$, $\log(u_3)$, $\log(u_4)$, $\log(v_3)$ at weight 1. Our construction is recursive: at each weight $w$, we consider all integrable symbols of weight $w-1$ tensored with any of the 38+5 letters, and impose integrability conditions on the final two entries. We start from the ansatz $\sum_{i,j} c_{i,j} S_i^{(w-1)} d \log l_j$, where $S_i^{(w-1)}$ denotes weight-$(w{-}1)$ integrable symbols and $l_j$ the letters, i.e. $W_1, \dots, W_{38}, L_1, \dots, L_5$. The integrability condition reads
+
+$$\sum_{i,j,m} c_{i,j} S_{i;m}^{(w-2)} d \log l_m \wedge d \log l_j = 0, \qquad (3.1)$$
+
+where $S_{i;m}^{(w-2)}$ denote the coefficients of $d \log l_m$ in $S_i^{(w-1)}$, which are linear combinations of weight-$(w{-}2)$ integrable symbols. Therefore, all we need is to find all linear relations among the $\binom{43}{2}$ $d\log$ 2-forms (some of them vanish identically, e.g. $d \log f_i \wedge d \log(1+f_i) = 0$), and all such relations are recorded in the ancillary file. In this way, we can easily construct the space to relatively high weight: it turns out that there are 5, 24, 113, 530 such integrable symbols at weight $w=1, 2, 3, 4$.
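Relations among $d\log$ 2-forms of this kind can be checked symbolically. A minimal two-variable sketch (toy letters, not the 43-letter alphabet of the text):

```python
import sympy as sp

x, y = sp.symbols('x y', positive=True)

def wedge(a, b):
    """Coefficient of dx ^ dy in d log a ^ d log b."""
    la, lb = sp.log(a), sp.log(b)
    return sp.simplify(sp.diff(la, x)*sp.diff(lb, y)
                       - sp.diff(la, y)*sp.diff(lb, x))

# identical-variable pairs drop out, as used in the text:
assert wedge(x, 1 + x) == 0

# a genuine d log relation among three letters of the toy alphabet {x, y, x+y}:
# d log x ^ d log(x+y) + d log(x+y) ^ d log y = d log x ^ d log y
lhs = wedge(x, x + y) + wedge(x + y, y)
assert sp.simplify(lhs - wedge(x, y)) == 0
```

Sampling such wedge coefficients at random points and taking a nullspace is one practical way to collect all linear relations before imposing (3.1).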
+
+Now we turn to possible cluster adjacency conditions to reduce the space, which forbid letters that cannot be in the same cluster from appearing next to each other in the symbol. More precisely, we will use the truncated cluster algebra and its polytope to impose these conditions: if two letters have facets that intersect in the polytope, then clearly they belong to the same cluster; otherwise we claim that they are a *forbidden pair* in the truncated cluster algebra. We do not know if there exists a cluster in the infinite affine $D_4$ cluster algebra which includes a forbidden pair, but for our purpose we will use this “truncated” version of cluster adjacency and forbid such a pair to appear next to each other in the symbol $^{\mathrm{10}}$.
+
+$^{\mathrm{10}}$This may sound too strong, but in fact what we have done is we first “bootstrapped” the integrals $\Omega_L(1, 4, 5, 8)$ up to $L=4$ without using such adjacency conditions; the result does respect these conditions, which means that they can indeed be imposed, and in the following we present the improved bootstrap in this reduced space.
+---PAGE_BREAK---
+
+We apply this version of cluster adjacency to the rational letters $W_1, \cdots, W_{38}$ (it is not clear to us how to extend it to the remaining 5 $L$'s, which are all assigned to the same facet). Since $W_1, \cdots, W_5$ are not $F$-polynomials, we do not consider them in the study of adjacency conditions; in other words, we list the facets for $W_i$ with $i = 6, 7, \cdots, 38$, and find all pairs that do not intersect in the polytope. In this way, we find 350 forbidden pairs out of $\frac{33 \times 34}{2}$, which we record in the ancillary file (in practice, this can be trivially done using e.g. **polymake**). Applying these adjacency conditions to the construction, we find that the space is reduced significantly (more and more so at higher weights). For $w = 1, 2, 3, 4$, the dimension of the space is reduced to 5, 23, 93, 340. Moreover, we have computed the reduced space for $w = 5, 6$, and find 1141 and 3585 such integrable symbols respectively. The physical meaning of such adjacency is unclear as usual, but we conjecture that this reduced space contains DCI Feynman integrals with such “two-mass opposite” kinematics, and we can use it to bootstrap such integrals at least up to three loops.
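The adjacency test only needs the vertex/facet incidence data of the polytope (which polymake provides). Here is a toy stand-in on the 3-cube, where the forbidden pairs are exactly the 3 pairs of opposite facets:

```python
from itertools import combinations, product

# facets of the 3-cube, each recorded by the set of vertices lying on it
vertices = list(product((0, 1), repeat=3))
facets = [frozenset(v for v in vertices if v[axis] == val)
          for axis in range(3) for val in (0, 1)]

# a "forbidden pair" = two facets that do not intersect in the polytope
forbidden = [(F, G) for F, G in combinations(facets, 2) if not (F & G)]
print(len(forbidden))   # the 3 pairs of opposite facets
```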
+
+Before proceeding, we remark that one can similarly bootstrap warm-up cases such as $D_4$ functions for one-mass hexagon kinematics with $n=7$. Note that this $D_4$ alphabet can be obtained as a boundary of our truncated cluster algebra, e.g. by sending $u_3 \to 0$. It is straightforward to construct the space of integrable symbols for $D_4$ functions, with first entries given by $u_1, u_2, u_4, v_3$. The dimensions of the space at weight 1, 2, 3, 4 are 4, 16, 63, 246; we can impose adjacency conditions which forbid 30 pairs out of the $\frac{12 \times 13}{2}$ pairs of $F$-polynomials, and these conditions reduce the dimensions to 4, 15, 50, 155 up to weight 4. Nicely, any integral with one-mass hexagon kinematics up to weight 4 that we know of can be found in this space.
+
+What can we say about non-rational letters? Although we do not know how to impose conditions such as cluster adjacency on them, it turns out that they are still constrained, at least at low weights. The first observation is that there is only one weight-2 function involving them: the four-mass box, whose symbol we recorded above with $L_1$ and $L_2$ in the second entry. Similarly, we find only 11 weight-3 functions with algebraic letters. Among them, the first five have the form
+
+$$ S(F(x_2, x_4, x_6, x_8)) \otimes L_i + \text{rational part} \quad (3.2) $$
+
+with $i = 1, \dots, 5$, while the other 6 functions are linear combinations of
+
+$$ \{\mathcal{S}(F(x_2, x_4, x_6, x_8)) \otimes W_j, \mathcal{S}(\text{dilog. with } W_j) \otimes L_i\} \quad (3.3) $$
+
+Note that under the "parity" $\Delta \to -\Delta$, the symbols in (3.2) stay invariant, while those in (3.3) pick up a minus sign. For any "parity-even" amplitude or integral, what we need are the even functions in (3.2), or the odd ones in (3.3) dressed with a prefactor that is an odd function of $\Delta$, such as $1/\Delta$¹¹.
+
+¹¹This is also true for the “odd” four-mass box at weight 2, which can be normalized with a prefactor $1/\Delta$ to make it “even”, e.g. when appearing in one-loop amplitudes.
+---PAGE_BREAK---
+
+For higher weights, the number of functions involving algebraic letters grows rapidly. However, we are mostly interested in a particular class of functions starting at weight 4. In the next subsection we will locate $\Omega_L(1, 4, 5, 8)$ up to $L=4$; for now, let us see what it takes to determine the part that contains algebraic letters at $L=2$. We will show shortly how to determine the *last entries* of $\Omega_L(1, 4, 5, 8)$ for $L \ge 2$ from the Wilson-loop $d\log$ form or differential equations: starting at $L=2$, the symbol of $\Omega_L(1, 4, 5, 8)$ contains exactly 5 last entries, which we denote as $\{z_i\}_{i=1,\dots,5}$:
+
+$$ z_1 = -W_3, z_2 = -\frac{W_2 W_3 W_5 W_{12} W_{15}}{W_{13} W_{17}}, z_3 = \frac{W_2 W_3}{W_{13}}, z_4 = \frac{W_3 W_5}{W_{17}}, z_5 = \frac{W_2 W_3^2 W_5 W_{12}}{W_{13} W_{17}} \quad (3.4) $$
+
+Therefore it is natural to ask which symbols with algebraic letters and only these 5 last entries can be found in the space. Surprisingly, after imposing the last-entry conditions at weight 4, only one independent weight-4 function containing algebraic letters $L_i$ is left, and we record this integrable symbol up to the part involving purely rational letters $W_j$ (the rational part depends on our basis of weight-4 functions):
+
+$$ S_{2,4,6,8} := S(F_{2,4,6,8}) \otimes \left( \frac{L_2 L_5}{L_1 L_3} \otimes z_1 + \frac{L_2 L_5}{L_1 L_4} \otimes z_2 + \frac{L_5}{L_1^2 L_3 L_4} \otimes z_3 + \frac{L_5}{L_1} \otimes z_4 + \frac{L_1^2 L_3 L_4}{L_2 L_5^2} \otimes z_5 \right) + \text{rational} \quad (3.5) $$
+
+where we have denoted the four-mass box as $F_{2,4,6,8} := F(x_2, x_4, x_6, x_8)$. We see that by restricting to the five last entries, exactly the first five weight-3 functions described above contribute, which can be viewed as generating the first derivatives $\partial_{z_i}$ for $i = 1, 2, \dots, 5$ of $S_{2,4,6,8}$. In fact, the first two weight-3 functions (involving $L_1$ and $L_2$) can be chosen to be the two weight-3 functions that appear when solving the differential equations for the double-box integral [44], and it is nice to see that we just have three additional weight-3 functions, involving $L_3, L_4, L_5$, when solving the weight-4 double-pentagon. We record the symbol of $\Omega_2(1, 4, 5, 8)$ in the ancillary file, and it is easy to see that its algebraic part is given by (3.5).
+
+Having obtained a function that captures the algebraic part of $\Omega_2(1, 4, 5, 8)$, we remark that from it we can easily obtain the algebraic part of the most general double-pentagon integral; we denote it as $\Omega_2(i, j, k, l)$, with the first fully general case at $n=12$ and e.g. $(i,j,k,l) = (1,4,7,10)$ [60]. $\Omega_2(i,j,k,l)$ contains $2^4=16$ four-mass-box square roots, labelled by $(x_a,x_b,x_c,x_d)$ with $(a,b,c,d) = (i+\sigma_i,j+\sigma_j,k+\sigma_k,l+\sigma_l)$ and each $\sigma=0,1$ [60]. For each $(a,b,c,d)$, all we need to do is simply relabel the momentum twistors of $\Omega_2(1,4,5,8)$ by $\{1 \to i, 4 \to j, 5 \to k, 8 \to l\}$ and $\{2 \to i \pm 1, 3 \to j \pm 1, 6 \to k \pm 1, 7 \to l \pm 1\}$, where the choice $\pm 1$ depends on the $\sigma$'s, e.g. for $\sigma_i=1$ ($a=i+1$), $2 \to i+1$. By summing over 16 such relabelled symbols (with alternating signs), we obtain an integrable symbol that contains the algebraic part of $\Omega_2(i,j,k,l)$:
+
+$$ S(\Omega_2(i,j,k,l)) = \sum_{\{\sigma\}} (-)^{\sum \sigma} S_{i+\sigma_i,j+\sigma_j,k+\sigma_k,l+\sigma_l} + S(R) \quad (3.6) $$
+---PAGE_BREAK---
+
+where the sum is over the $2^4 = 16$ choices of $\sigma$'s, with a minus sign when $\sigma_i+\sigma_j+\sigma_k+\sigma_l$ is odd; $R$ denotes a weight-4 function with only rational letters. It is remarkable that, up to this $R$ function, the most generic double-pentagon integral can be obtained using 16 weight-4 integrable symbols found in our space.
+
+## 3.2 Double-penta-ladders: last entries, differential equations etc.
+
+Now we move to the computation of the double-penta ladder integrals $\Omega_L(1, 4, 5, 8)$, which can be defined directly from the Wilson-loop $d \log$ representation: we can rewrite an $L$-loop ladder as a two-fold integral over an $(L-1)$-loop integral as
+
+$$ \Omega_L(1, 4, 5, 8) = \int d\log \langle 148Y \rangle \, d\log \frac{\langle 1X4Y \rangle}{t} \; \Omega_{L-1} \quad (3.7) $$
+
+where $X = Z_1 - tZ_3$, $Y = Z_3 - sZ_5$ with $t$ and $s$ integrated on $\mathbb{R}_{\ge 0}^2$. We can rescale $t$ and $s$ to make DCI property manifest, and we arrive at the recursion
+
+$$ \begin{aligned} \Omega_{L+\frac{1}{2}}(u_1, u_2, u_4, u_3, v_3) &= \int_0^\infty d\log \frac{t+1}{t}\, \Omega_L \left( \frac{u_1(t+u_4)}{t+u_1u_4}, u_2, \frac{u_4(t+1)}{t+u_4}, \frac{u_3(t+1)}{t+u_1u_4}, \frac{tv_3}{t+u_1u_4} \right), \\ \Omega_{L+1}(u_1, u_2, u_4, u_3, v_3) &= \int_0^\infty d\log(s+1)\, \Omega_{L+\frac{1}{2}} \left( u_1, \frac{u_2(s+1)}{u_2s+1}, \frac{s+u_4}{s+1}, \frac{u_3(1+s/u_4)}{1+su_2}, \frac{v_3}{1+su_2} \right). \end{aligned} \quad (3.8) $$
+
+Note that in the limit $u_3 \to 0$, $\Omega_L(1, 4, 5, 8)$ and the recursions degenerate to the $\Omega_L(1, 4, 5, 7)$ case. The source of the recursion is the one-loop 8-pt chiral hexagon, whose result is well known [58] (e.g. in a box expansion including $F_{2,4,6,8}$):
+
+$$ \Omega_1(1, 4, 5, 8) = \text{[one-loop chiral hexagon; see [58]]} \quad (3.9) $$
+
+Note that at two loops, after some tedious calculation based on rationalization, the symbol (and even the function [47]) of $\Omega_2$ can be computed from the recursion (3.8) with the source (3.9). Its alphabet consists of the 5 algebraic letters $\{L_1, L_2, L_3, L_4, L_5\}$ and 21 rational letters, namely $\{W_1, \dots, W_{25}\}$ with $\{W_6, W_9, W_{22}, W_{24}\}$ absent. As mentioned, the last entries of the answer are the five $z$-variables (3.4), which are related to the cross-ratios $\{u_1, u_2, u_3, u_4, v_3\}$ by
+
+$$ u_1 = \frac{1}{1-z_1}, \quad u_2 = \frac{1}{1-z_2}, \quad u_4 = 1-z_4, \quad u_3 = \frac{(1-z_3)(1-z_4)}{(1-z_1)(1-z_2)}, \quad v_3 = -\frac{(z_1 z_2 - z_5)(z_3 z_4 - z_5)}{(1-z_1)(1-z_2)z_5}. $$
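Since all quantities here are explicit rational functions of the $f$'s, the whole $z \leftrightarrow u$ dictionary can be verified symbolically; a sympy sketch using the letter definitions of section 2:

```python
import sympy as sp

f1, f2, f3, f4, f5 = sp.symbols('f1:6', positive=True)

# the letters needed for the dictionary, from (2.2)-(2.4)
W8 = 1 + f3
W12 = 1 + f1 + f1*f3              # w_{3,1}
W13 = 1 + f3 + f2*f3              # w_{2,3}
W15 = 1 + f4 + f3*f4              # w_{3,4}
W17 = 1 + f3 + f3*f5              # w_{5,3}
w251 = 1 + f2 + f5 + f2*f5 + f1*f2*f5   # w_{2,5,1}
W25 = 1 + f3*(w251 + W12*f2*f4*f5)

# last entries (3.4)
z1 = -f3
z2 = -f2*f3*f5*W12*W15/(W13*W17)
z3 = f2*f3/W13
z4 = f3*f5/W17
z5 = f2*f3**2*f5*W12/(W13*W17)

# cross-ratios in terms of the letters (section 2 and 3.1)
u1, u2, u4 = 1/W8, W13*W17/(W8*W25), W8/W17
u3, v3 = 1/W25, f1*f2*f3**2*f4*f5/W25

# the z <-> u dictionary quoted in the text
checks = [u1 - 1/(1 - z1),
          u2 - 1/(1 - z2),
          u4 - (1 - z4),
          u3 - (1 - z3)*(1 - z4)/((1 - z1)*(1 - z2)),
          v3 + (z1*z2 - z5)*(z3*z4 - z5)/((1 - z1)*(1 - z2)*z5)]
assert all(sp.simplify(c) == 0 for c in checks)
```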
+---PAGE_BREAK---
+
+These z-variables make many properties of the ladder integrals $\Omega_L(1, 4, 5, 8)$ manifest, and we will use them extensively in the following discussions. For instance, the integrals have two axial symmetries, which are given by
+
+$$z_1 \leftrightarrow z_2 \quad \text{and} \quad z_3 \leftrightarrow z_4.$$
+
+The deformations are also simplified in terms of z-variables to
+
+$$\Omega_{L+\frac{1}{2}}(z_1, \dots, z_5) = \int_0^\infty d\log \frac{t+1}{t} \Omega_L \left( \frac{tz_1}{t-z_4+1}, z_2, z_3, \frac{tz_4}{t-z_4+1}, \frac{tz_5}{t-z_4+1} \right) \quad (3.10)$$
+
+and
+
+$$\Omega_{L+1}(z_1, \dots, z_5) = \int_0^\infty d\log(s+1) \Omega_{L+\frac{1}{2}} \left( z_1, \frac{z_2}{s+1}, z_3, \frac{z_4}{s+1}, \frac{z_5}{s+1} \right). \quad (3.11)$$
+
+Following the same algorithm used to determine the last entries of all-loop penta-box integrals [58], it is straightforward to see that the last entries of $\Omega_L(1, 4, 5, 8)$ remain unchanged for $L \ge 2$. Recall that, with constants $a$ and $b$, the last entries of the integral $\int_0^\infty F(t) \otimes (t+b)\, d\log(t+a)$ are $a$ or $(b-a)$, and those of $\int_0^\infty F(t) \otimes b\, d\log(t+a)$ are $a$ or $b$. After the first-step integration (3.10) with $d\log((t+1)/t)$, the five original last entries give six last entries $\{z_1, z_2, z_3, z_4, 1-z_4, z_5\}$, where the new one, $1-z_4$, comes from the integration
+
+$$\int_0^\infty F(t) \otimes (t - z_4 + 1) d\log(t).$$
+
+However, the deformed $1-z_4$ again only contributes $z_4$ as a last entry in the second-step integration (3.11), since after the deformation it only produces terms like
+
+$$\int_0^\infty F(s) \otimes \frac{s+1-z_4}{s+1} ds.$$
+
+
+
+Therefore, by induction, we have proven that the last entries of the integral $\Omega_L(1, 4, 5, 8)$ are always $\{z_1, \dots, z_5\}$ for arbitrary $L$.
+
+Using $z$ variables, we also find remarkably simple first-order differential equations:
+
+$$\Omega_{L+\frac{1}{2}} = (z_2 \partial_{z_2} + z_4 \partial_{z_4} + z_5 \partial_{z_5}) \Omega_{L+1} \quad (3.12)$$
+
+and
+
+$$\Omega_L = (z_4 - 1)(z_1 \partial_{z_1} + z_4 \partial_{z_4} + z_5 \partial_{z_5}) \Omega_{L+\frac{1}{2}}. \quad (3.13)$$
+
+For example, consider the deformation $L + 1/2 \rightarrow L + 1$:
+
+$$\begin{align*}
+\Omega_{L+1}(z_1, \dots, z_5) &= \int_0^\infty d\log(s+1) \, \Omega_{L+\frac{1}{2}} \left( z_1, \frac{z_2}{s+1}, z_3, \frac{z_4}{s+1}, \frac{z_5}{s+1} \right) \\
+&= \int_0^{z_5} d\log t \, \Omega_{L+\frac{1}{2}} \left( z_1, \frac{z_2}{z_5}t, z_3, \frac{z_4}{z_5}t, t \right),
+\end{align*}$$
+---PAGE_BREAK---
+
+its derivative with respect to $z_5$ is
+
+$$z_5 \partial_{z_5} \Omega_{L+1} = \Omega_{L+\frac{1}{2}}(z_1, \dots, z_5) - \frac{1}{z_5} \int_0^{z_5} d\log t \, (z_2 t \partial_2 + z_4 t \partial_4) \Omega_{L+\frac{1}{2}} \left( z_1, \frac{z_2}{z_5} t, z_3, \frac{z_4}{z_5} t, t \right),$$
+
+where $\partial_2$ and $\partial_4$ denote the partial derivatives acting on the second and fourth arguments respectively; (3.12) then follows from the identity
+
+$$\frac{1}{z_5}(z_2 t \partial_2 + z_4 t \partial_4) \Omega_{L+\frac{1}{2}} \left( z_1, \frac{z_2}{z_5}t, z_3, \frac{z_4}{z_5}t, t \right) = (z_2 \partial_{z_2} + z_4 \partial_{z_4}) \Omega_{L+\frac{1}{2}} \left( z_1, \frac{z_2}{z_5}t, z_3, \frac{z_4}{z_5}t, t \right).$$
+
+(3.13) can be found in a similar way from the deformation of $L \to L + 1/2$. With the DE (3.12) and the symmetry, it is also easy to see that the last entries of $\Omega_L$ for $L \ge 2$ can only be $\{z_1, \dots, z_5\}$ since $\Omega_{L+\frac{1}{2}}$ is pure for $L \ge 1$.
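The manipulation behind (3.12) only uses the chain rule and can be sanity-checked on a toy integrand; a sympy sketch with a polynomial stand-in (our choice of `G`) for the dependence of $\Omega_{L+\frac{1}{2}}$ on its second, fourth and fifth slots:

```python
import sympy as sp

z2, z4, z5, t = sp.symbols('z2 z4 z5 t', positive=True)

# toy polynomial stand-in for the integrand's dependence on slots 2, 4, 5
G = lambda b, d, e: b**2*d + e**3

# the second-step integral, in the t variable used in the derivation
I = sp.integrate(G(z2/z5*t, z4/z5*t, t)/t, (t, 0, z5))

# the Euler operator of the first-order DE (3.12) returns the integrand
euler = z2*sp.diff(I, z2) + z4*sp.diff(I, z4) + z5*sp.diff(I, z5)
assert sp.simplify(euler - G(z2, z4, z5)) == 0
```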
+
+Finally, the recursion also allows us to impose certain boundary conditions. Since the $d\log$ form $d\log \frac{t+1}{t}$ diverges at $t=0$ in eq. (3.10), the deformed function $\Omega_L(1, 4, 5, 8)$ should vanish as $t \to 0$, which gives the constraint:
+
+$$\lim_{t \to 0} \Omega_L(tz_1, z_2, z_3, tz_4, tz_5) = 0. \quad (3.14)$$
+
+We expect that, within our function space, the differential equations (3.12) and (3.13), together with the boundary conditions, determine the symbol of $\Omega_L(1, 4, 5, 8)$ recursively. This will be confirmed up to weight 8 in the next subsection.
+
+As we have mentioned, setting $u_3 \to 0$, i.e. $z_3 \to 1$, $\Omega_L(1, 4, 5, 8)$ degenerates to the 7-point ladder integrals $\Omega_L(1, 4, 5, 7)$, which have been computed up to $L=4$ in [55, 58]. We use this collinear limit as a cross check for our result.
+
+## 3.3 Locating the integrals and the pattern for algebraic letters
+
+We have constructed the space with given first entries and adjacency conditions in section 3.1; in this subsection we use these conditions to bootstrap $\Omega_L$ up to $L=4$. As mentioned above, the DE and boundary conditions are sufficient for the task, but computationally it is easier to first impose last-entry conditions and the symmetry of the integral.
+
+To impose DE explicitly, we use the derivative formula of a symbol:
+
+$$\partial_a(F \otimes w) = F \frac{\partial}{\partial a} \log w.$$
+
+In practice, the derivative in DE (3.12) takes the ansatz $\sum_i F_i \otimes z_i$ into
+
+$$ (z_2 \partial_{z_2} + z_4 \partial_{z_4} + z_5 \partial_{z_5}) \sum_i F_i \otimes z_i = F_2 + F_4 + F_5. $$
+
+For the other DE (3.13), one may need to calculate the derivative of letters by
+
+$$\frac{\partial W_i}{\partial z_j} = \sum_k \frac{\partial W_i}{\partial f_k} \frac{\partial f_k}{\partial z_j},$$
+---PAGE_BREAK---
+
+but it is more convenient to first require that the last entries of $F_2 + F_4 + F_5$ are
+$\{z_1, z_2, z_3, z_4, 1-z_4, z_5\}$, which was proven for $\Omega_{L+1/2}$ in the last subsection; then
+the derivative $(z_4-1)(z_1\partial_{z_1} + z_4\partial_{z_4} + z_5\partial_{z_5})$ is trivial to apply. To impose the symmetry, we
+rewrite the two symmetries $z_1 \leftrightarrow z_2$ and $z_3 \leftrightarrow z_4$ in terms of the alphabet. For example, the
+transformation $z_3 \leftrightarrow z_4$ is simply
+
+$$
+\begin{align*}
+& \{W_2 \leftrightarrow W_5, W_7 \leftrightarrow W_{10}, W_{11} \leftrightarrow W_{16}, W_{13} \leftrightarrow W_{17}, W_{14} \leftrightarrow W_{18}, W_{21} \leftrightarrow W_{23}, \\
+& \qquad W_{26} \leftrightarrow W_{28}, W_{32} \leftrightarrow W_{37}, W_{36} \leftrightarrow W_{38}, L_3 \rightarrow \frac{1}{L_1 L_4}, L_4 \rightarrow \frac{1}{L_1 L_3}, L_5 \rightarrow \frac{L_5}{L_1 L_3 L_4}\}.
+\end{align*}
+$$
+
+The other symmetry, $z_1 \leftrightarrow z_2$, is more complicated: under it, the rational letters $\{W_{30}, W_{33}, W_{35}, W_{36}, W_{38}\}$ produce new factors, so for bootstrapping $\Omega_L$ we only need the other 33 rational letters. It is remarkable that this is exactly the smaller rational alphabet obtained from the parity-invariant $G_+(4, 8)/T$ mentioned above, i.e. precisely the alphabet that respects the symmetry; for our purpose, it is sufficient to use only these 33 rational letters (plus the 5 algebraic ones).
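On the algebraic letters the transformation $z_3 \leftrightarrow z_4$ acts multiplicatively, so it can be encoded as an integer matrix on exponent vectors; a quick numpy check that the quoted substitution squares to the identity:

```python
import numpy as np

# exponent vectors in the multiplicative basis (L1, ..., L5):
# column i gives the image of L_i under z3 <-> z4
M = np.array([[1, 0, -1, -1, -1],   # power of L1 in the image
              [0, 1,  0,  0,  0],   # power of L2
              [0, 0,  0, -1, -1],   # power of L3
              [0, 0, -1,  0, -1],   # power of L4
              [0, 0,  0,  0,  1]])  # power of L5

# acting twice must give back the identity on the algebraic letters
assert (M @ M == np.eye(5, dtype=int)).all()
```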
+
+Before going to higher loops, it is already interesting to re-derive $\Omega_2$ with the bootstrapping strategy. As mentioned in subsection 3.1, after imposing the last-entry condition, there is only one algebraic function left. Imposing the DE (with the weight-2 function being the one-loop hexagon (3.9)) and the boundary conditions, we arrive at the unique symbol of $\Omega_2$, which is recorded in the ancillary file.
+
+
+| conditions | # free parameters |
+| --- | --- |
+| weight-6 function space | 3585 |
+| last entry | 257 |
+| symmetry $z_3 \leftrightarrow z_4$ | 146 |
+| symmetry $z_1 \leftrightarrow z_2$ | 56 |
+| DE | 3 |
+| boundary conditions | 0 |
+
+**Table 1:** Number of free parameters left after imposing the constraints in the left column when bootstrapping $\Omega_3$.
+
+We continue to determine $\Omega_3$ in this way: the number of free parameters of the ansatz during the bootstrap of $\Omega_3$ is given in Table 1. Note that the number of letters in the symbol is dramatically reduced after imposing the last-entry conditions: only 29+5 letters are left, and these 29 rational letters behave well under the transformation $z_1 \leftrightarrow z_2$. Imposing the derivative ($z_2\partial_{z_2} + z_4\partial_{z_4} + z_5\partial_{z_5}$), we then get an ansatz for $\Omega_{2+1/2}$ whose last entries are proven to be $\{z_1, z_2, z_3, z_4, z_5, 1-z_4\}$, although naively the ansatz contains 9 extra last entries. It is convenient to simply eliminate these “spurious” last entries, and then apply the second DE and the boundary conditions, which allow us to immediately determine the symbol of $\Omega_3$.
+---PAGE_BREAK---
+
+Next, we want to determine $\Omega_4$. Since the function space (even after using adjacency) is too large at weight 8, we find it useful to directly impose last-entry conditions when constructing the space at weight 7 and above (we also use the alphabet with 33 rational letters, which respects the symmetry $z_1 \leftrightarrow z_2$). After obtaining the reduced space, it becomes straightforward to apply the DE and boundary conditions, which uniquely determine the symbol of $\Omega_4$ (it takes a few hours on a laptop).
+
+The symbol of $\Omega_3$ has about $8 \times 10^4$ terms and is recorded in the ancillary file, while that of $\Omega_4$ has more than $10^6$ terms, which is too lengthy to be recorded. Although the complexity of the result grows fast with the number of loops, we find some hidden simplicity, at least for the algebraic part. Just like for $\Omega_2$, the parts of $\Omega_3$ and $\Omega_4$ containing algebraic letters take a strikingly simple form:
+
+$$ \sum_{i=1}^{5} S(F_{2,4,6,8}) \otimes L_i \otimes S(F_i) $$
+
+where $F_i$ are weight-3 (weight-5) MPL functions with rational letters only, for $\Omega_3$ ($\Omega_4$) respectively. This means that, in addition to $L_1, L_2$ in the second entry as part of $S(F_{2,4,6,8})$, the 5 algebraic letters *only* appear in the third entries and not in any subsequent ones. This phenomenon was also observed in the special $R^{1,1}$ kinematics [72]: there $\Omega_L$ is a rational $A_2$ function $\Omega_L(v, w)$, and the only non-trivial algebraic letters, $L_3, L_4, L_5$, become the “mixing letter” $v-w$. We have proven that in $R^{1,1}$ kinematics, the part of $\Omega_L(v, w)$ with the mixing letter reads [72]:
+
+$$ S(F_{2,4,6,8}) \otimes (v-w) \otimes S\left(\log^{2L-3}\left(\frac{v}{w}\right)\right). $$
+
+This indicates that for all $L$, the algebraic letters $L_i$ for $i = 3, 4, 5$ can only appear in the third entries (with the symbol of $F_{2,4,6,8}$ in the first two), but it does not exclude the possibility that the simpler algebraic letters, $L_1, L_2$, appear in subsequent entries. Here we confirm that, at least through four loops, no algebraic letters appear beyond the third entry. Recall that $\Omega_L(1, 4, 5, 8)$ is given by a two-fold integral of $\Omega_{L-1}(1, 4, 5, 8)$, and the pattern we observe means that the first three entries of the algebraic part remain unchanged! It is an interesting problem to prove this by carefully analysing the rationalization and possible cancellations of spurious square roots in our integration routine. Note that the same phenomenon is expected to hold for three-loop MHV amplitudes, which follow a similar pattern to two-loop NMHV ones via $\bar{Q}$ equations, as observed in [73] in $R^{1,1}$ kinematics.
+
+We also note that the rational alphabet of $\Omega_3$ and $\Omega_4$ does not contain all 38 (or even 33) rational letters but only the first 25, $\{W_1, \cdots, W_{25}\}$, which can already be found among the non-zero Plücker coordinates. We believe that other integrals sharing the same kinematics as $\Omega_L(1, 4, 5, 8)$ may contain some of the remaining letters, and we leave finding and studying such integrals to future work.
+---PAGE_BREAK---
+
+## 4 Conclusion and Discussions
+
+In this paper, we have conjectured that symbol alphabets for certain classes of DCI Feynman integrals can be determined by truncated cluster algebras purely from their kinematics, which are boundaries of $G_+(4, n)/T$. The main example we study is the two-mass-opposite hexagon kinematics, and our method produces an alphabet of 38 rational letters and 5 algebraic ones (as a truncated affine $D_4$ cluster algebra). We construct the space of integrable symbols after imposing physical first-entry conditions and a truncated version of cluster adjacency, which we believe to be universal. When restricting to $\Omega_L(1, 4, 5, 8)$, we derive differential equations and last-entry conditions from our $d\log$ recursion, which allow us to locate its symbol in the space up to weight 8. We also find a remarkable pattern up to four loops for the appearance of algebraic letters, which begs for an explanation. Since the rationalization is very similar to that required for computing multi-loop amplitudes using $\bar{Q}$ equations, it is tempting to look for a similar pattern for higher-loop $n=8$ amplitudes.
+
+We have only focused on cases where the kinematics can be naturally given in terms of positroid cells of $G_+(4, n)$. If the kinematics for a class of Feynman integrals cannot be labelled by positroids, our method does not directly apply, since we do not know the quiver to begin with. For example, currently we do not know any positroid cell for two-mass-hard hexagon kinematics. In [49], the alphabet of the latter was conjectured to be the subset of the octagon alphabet that is annihilated by first-order differential operators encoding the kinematics. This seems to be a general method whenever we know the alphabet of $G_+(4, n)/T$, and it would be interesting to study the relation of such subsets to our truncated cluster algebras. For example, if we apply the differential operators to our two-mass-opposite case, we obtain 33 of the 38 rational letters and the 5 algebraic letters. An important difference is that these subsets generally do not correspond to boundaries of $G_+(4, n)/T$ (while we expect our truncated cluster algebras do). We have looked at higher-dimensional cases: e.g. for one-mass heptagon kinematics with $n=8$, our method gives a co-dimension-2 boundary of $G_+(4, 8)/T$ which has 100 + 1 facets, with 100 $g$-vectors and 1 limit ray (the subset from the differential operators of [49] is smaller). Since the computation of the $G_+(4, n)/T$ cluster algebra becomes very difficult beyond $n=8$ (there are recent results for $n=9$ using a subset of all Plücker coordinates [30]), it is crucial to develop both methods for studying higher-point DCI integrals. It is also an interesting mathematical problem to systematically classify the boundaries of $G_+(4, n)/T$ (see [28]) and study their relevance for Feynman integrals.
+
+Both for amplitudes and integrals in $N=4$ SYM, the possibility of determining symbol alphabets purely from kinematics sounds like magic: despite more and more data supporting such conjectures, we do not have a good understanding of the mechanism. Compared to scattering amplitudes, there might be a better chance to systematically understand why alphabets of certain DCI Feynman integrals are related to
+---PAGE_BREAK---
+
+such truncated cluster algebras, especially via canonical differential equations [74, 75]. It would also be highly desirable to connect alphabets for these integrals to certain 4k-dimensional plabic graphs of $G_+(k, n)$, as has been studied for amplitudes, which essentially amounts to maps from such cells in $G_+(k, n)$ to (boundaries of) $G_+(4, n)/T$. A pressing question is to see if and how more complicated algebraic letters, including those containing higher-order roots, appear in truncated cluster algebras for the corresponding integrals (one could even speculate that something “elliptic” might appear in the “alphabet” of the kinematics for the two-loop $n=10$ double-box integral). Last but not least, cluster algebra structures have been observed for Feynman integrals that are not DCI, including those with IR divergences; their alphabets can sometimes be obtained from the DCI case, e.g. the pentagon alphabet with one massive leg can be obtained by sending a dual point to infinity in the two-mass-hard hexagon kinematics [49]. It would be extremely interesting to find possible truncated cluster algebras for these more general integrals, which should again be directly related to their canonical differential equations.
+
+## Acknowledgement
+
+It is a pleasure to thank Nima Arkani-Hamed, Yichao Tang, Yihong Wang, Chi Zhang, Yang Zhang, Yong Zhang and Peng Zhao for inspiring discussions, correspondence and collaborations on related projects. We would especially like to thank James Drummond and Ömer Gürdoğan for helpful comments on the first version of the paper. This research is supported in part by the National Natural Science Foundation of China under Grant Nos. 11935013, 11947301, 12047502 and 12047503.
+
+## References
+
+[1] N. Arkani-Hamed, J. L. Bourjaily, F. Cachazo, A. B. Goncharov, A. Postnikov, and J. Trnka, *Grassmannian Geometry of Scattering Amplitudes*. Cambridge University Press, 4, 2016. arXiv:1212.5605 [hep-th].
+
+[2] N. Arkani-Hamed and J. Trnka, “The Amplituhedron,” JHEP 10 (2014) 030, arXiv:1312.2007 [hep-th].
+
+[3] J. Golden, A. B. Goncharov, M. Spradlin, C. Vergu, and A. Volovich, “Motivic Amplitudes and Cluster Coordinates,” JHEP 01 (2014) 091, arXiv:1305.1617 [hep-th].
+
+[4] D. Speyer and L. Williams, “The tropical totally positive grassmannian,” *Journal of Algebraic Combinatorics* **22** no. 2, (2005) 189–210.
+
+[5] A. B. Goncharov, M. Spradlin, C. Vergu, and A. Volovich, “Classical Polylogarithms for Amplitudes and Wilson Loops,” *Phys. Rev. Lett.* **105** (2010) 151605, arXiv:1006.5703 [hep-th].
+---PAGE_BREAK---
+
+[6] C. Duhr, H. Gangl, and J. R. Rhodes, "From polygons and symbols to polylogarithmic functions," JHEP 10 (2012) 075, arXiv:1110.0458 [math-ph].
+
+[7] L. J. Dixon, J. M. Drummond, and J. M. Henn, "Bootstrapping the three-loop hexagon," JHEP 11 (2011) 023, arXiv:1108.4461 [hep-th].
+
+[8] L. J. Dixon, J. M. Drummond, C. Duhr, M. von Hippel, and J. Pennington, "Bootstrapping six-gluon scattering in planar N=4 super-Yang-Mills theory," PoS LL2014 (2014) 077, arXiv:1407.4724 [hep-th].
+
+[9] L. J. Dixon and M. von Hippel, "Bootstrapping an NMHV amplitude through three loops," JHEP 10 (2014) 065, arXiv:1408.1505 [hep-th].
+
+[10] J. M. Drummond, G. Papathanasiou, and M. Spradlin, "A Symbol of Uniqueness: The Cluster Bootstrap for the 3-Loop MHV Heptagon," JHEP 03 (2015) 072, arXiv:1412.3763 [hep-th].
+
+[11] L. J. Dixon, M. von Hippel, and A. J. McLeod, "The four-loop six-gluon NMHV ratio function," JHEP 01 (2016) 053, arXiv:1509.08127 [hep-th].
+
+[12] S. Caron-Huot, L. J. Dixon, A. McLeod, and M. von Hippel, "Bootstrapping a Five-Loop Amplitude Using Steinmann Relations," Phys. Rev. Lett. 117 no. 24, (2016) 241601, arXiv:1609.00669 [hep-th].
+
+[13] L. J. Dixon, J. Drummond, T. Harrington, A. J. McLeod, G. Papathanasiou, and M. Spradlin, "Heptagons from the Steinmann Cluster Bootstrap," JHEP 02 (2017) 137, arXiv:1612.08976 [hep-th].
+
+[14] J. Drummond, J. Foster, Ö. Gürdoğan, and G. Papathanasiou, "Cluster adjacency and the four-loop NMHV heptagon," JHEP 03 (2019) 087, arXiv:1812.04640 [hep-th].
+
+[15] S. Caron-Huot, L. J. Dixon, F. Dulat, M. von Hippel, A. J. McLeod, and G. Papathanasiou, "Six-Gluon amplitudes in planar $N=4$ super-Yang-Mills theory at six and seven loops," JHEP 08 (2019) 016, arXiv:1903.10890 [hep-th].
+
+[16] S. Caron-Huot, L. J. Dixon, F. Dulat, M. von Hippel, A. J. McLeod, and G. Papathanasiou, "The Cosmic Galois Group and Extended Steinmann Relations for Planar $N=4$ SYM Amplitudes," JHEP 09 (2019) 061, arXiv:1906.07116 [hep-th].
+
+[17] L. J. Dixon and Y.-T. Liu, "Lifting Heptagon Symbols to Functions," JHEP 10 (2020) 031, arXiv:2007.12966 [hep-th].
+
+[18] S. Caron-Huot, L. J. Dixon, J. M. Drummond, F. Dulat, J. Foster, O. Gürdoğan, M. von Hippel, A. J. McLeod, and G. Papathanasiou, "The Steinmann Cluster Bootstrap for $N=4$ Super Yang-Mills Amplitudes," PoS CORFU2019 (2020) 003, arXiv:2005.06735 [hep-th].
+
+[19] J. Drummond, J. Foster, and Ö. Gürdoğan, "Cluster Adjacency Properties of Scattering Amplitudes in $N=4$ Supersymmetric Yang-Mills Theory," Phys. Rev. Lett. 120 no. 16, (2018) 161601, arXiv:1710.10953 [hep-th].
+---PAGE_BREAK---
+
+[20] J. Drummond, J. Foster, and Ö. Gürdoğan, “Cluster adjacency beyond MHV,” JHEP 03 (2019) 086, arXiv:1810.08149 [hep-th].
+
+[21] S. Caron-Huot and S. He, “Jumpstarting the All-Loop S-Matrix of Planar N=4 Super Yang-Mills,” JHEP 07 (2012) 174, arXiv:1112.1060 [hep-th].
+
+[22] S. He, Z. Li, and C. Zhang, “Two-loop Octagons, Algebraic Letters and $\bar{Q}$ Equations,” Phys. Rev. D 101 no. 6, (2020) 061701, arXiv:1911.01290 [hep-th].
+
+[23] S. He, Z. Li, and C. Zhang, “The symbol and alphabet of two-loop NMHV amplitudes from $\bar{Q}$ equations,” arXiv:2009.11471 [hep-th].
+
+[24] J. Drummond, J. Foster, Ö. Gürdoğan, and C. Kalousios, “Tropical Grassmannians, cluster algebras and scattering amplitudes,” arXiv:1907.01053 [hep-th].
+
+[25] J. Drummond, J. Foster, O. Gürdoğan, and C. Kalousios, “Algebraic singularities of scattering amplitudes from tropical geometry,” arXiv:1912.08217 [hep-th].
+
+[26] N. Henke and G. Papathanasiou, “How tropical are seven- and eight-particle amplitudes?,” JHEP 08 (2020) 005, arXiv:1912.08254 [hep-th].
+
+[27] N. Arkani-Hamed, T. Lam, and M. Spradlin, “Non-perturbative geometries for planar $N=4$ SYM amplitudes,” arXiv:1912.08222 [hep-th].
+
+[28] N. Arkani-Hamed, T. Lam, and M. Spradlin, “Positive configuration space,” Commun. Math. Phys. 384 no. 2, (2021) 909–954, arXiv:2003.03904 [math.CO].
+
+[29] A. Herderschee, “Algebraic branch points at all loop orders from positive kinematics and wall crossing,” arXiv:2102.03611 [hep-th].
+
+[30] N. Henke and G. Papathanasiou, “Singularities of eight- and nine-particle amplitudes from cluster algebras and tropical geometry,” arXiv:2106.01392 [hep-th].
+
+[31] L. Ren, M. Spradlin, and A. Volovich, “Symbol Alphabets from Tensor Diagrams,” arXiv:2106.01405 [hep-th].
+
+[32] S. He, L. Ren, and Y. Zhang, “Notes on polytopes, amplitudes and boundary configurations for Grassmannian string integrals,” JHEP 04 (2020) 140, arXiv:2001.09603 [hep-th].
+
+[33] N. Arkani-Hamed, S. He, and T. Lam, “Stringy canonical forms,” JHEP 02 (2021) 069, arXiv:1912.08707 [hep-th].
+
+[34] F. Cachazo, N. Early, A. Guevara, and S. Mizera, “Scattering Equations: From Projective Spaces to Tropical Grassmannians,” JHEP 06 (2019) 039, arXiv:1903.08904 [hep-th].
+
+[35] F. Cachazo, A. Guevara, B. Umbert, and Y. Zhang, “Planar Matrices and Arrays of Feynman Diagrams,” arXiv:1912.09422 [hep-th].
+
+[36] J. Drummond, J. Foster, O. Gürdoğan, and C. Kalousios, “Tropical fans, scattering equations and amplitudes,” arXiv:2002.04624 [hep-th].
+
+[37] M. Parisi, M. Sherman-Bennett, and L. K. Williams, “The m=2 amplituhedron and the hypersimplex: signs, clusters, triangulations, Eulerian numbers,” arXiv:2104.08254 [math.CO].
+---PAGE_BREAK---
+
+[38] T. Lukowski, M. Parisi, and L. K. Williams, “The positive tropical Grassmannian, the hypersimplex, and the $m=2$ amplituhedron,” arXiv:2002.06164 [math.CO].
+
+[39] J. Mago, A. Schreiber, M. Spradlin, and A. Volovich, “Symbol alphabets from plabic graphs,” JHEP 10 (2020) 128, arXiv:2007.00646 [hep-th].
+
+[40] S. He and Z. Li, “A Note on Letters of Yangian Invariants,” JHEP 02 (2021) 155, arXiv:2007.01574 [hep-th].
+
+[41] J. Mago, A. Schreiber, M. Spradlin, A. Yelleshpur Srikant, and A. Volovich, “Symbol Alphabets from Plabic Graphs II: Rational Letters,” arXiv:2012.15812 [hep-th].
+
+[42] J. Mago, A. Schreiber, M. Spradlin, A. Yelleshpur Srikant, and A. Volovich, “Symbol Alphabets from Plabic Graphs III: $n=9$,” arXiv:2106.01406 [hep-th].
+
+[43] N. Arkani-Hamed, J. L. Bourjaily, F. Cachazo, and J. Trnka, “Local Integrals for Planar Scattering Amplitudes,” JHEP 06 (2012) 125, arXiv:1012.6032 [hep-th].
+
+[44] J. M. Drummond, J. M. Henn, and J. Trnka, “New differential equations for on-shell loop integrals,” JHEP 04 (2011) 083, arXiv:1010.3679 [hep-th].
+
+[45] S. Caron-Huot, L. J. Dixon, M. von Hippel, A. J. McLeod, and G. Papathanasiou, “The Double Pentaladder Integral to All Orders,” JHEP 07 (2018) 170, arXiv:1806.01361 [hep-th].
+
+[46] J. Henn, E. Herrmann, and J. Parra-Martinez, “Bootstrapping two-loop Feynman integrals for planar $N = 4$ sYM,” JHEP 10 (2018) 059, arXiv:1806.06072 [hep-th].
+
+[47] J. L. Bourjaily, A. J. McLeod, M. von Hippel, and M. Wilhelm, “Rationalizing Loop Integration,” JHEP 08 (2018) 184, arXiv:1805.10281 [hep-th].
+
+[48] E. Herrmann and J. Parra-Martinez, “Logarithmic forms and differential equations for Feynman integrals,” JHEP 02 (2020) 099, arXiv:1909.04777 [hep-th].
+
+[49] D. Chicherin, J. M. Henn, and G. Papathanasiou, “Cluster algebras for Feynman integrals,” arXiv:2012.12285 [hep-th].
+
+[50] S. Abreu, L. J. Dixon, E. Herrmann, B. Page, and M. Zeng, “The two-loop five-point amplitude in $N = 4$ super-Yang-Mills theory,” Phys. Rev. Lett. 122 no. 12, (2019) 121603, arXiv:1812.08941 [hep-th].
+
+[51] D. Chicherin, T. Gehrmann, J. M. Henn, P. Wasser, Y. Zhang, and S. Zoia, “All Master Integrals for Three-Jet Production at Next-to-Next-to-Leading Order,” Phys. Rev. Lett. 123 no. 4, (2019) 041603, arXiv:1812.11160 [hep-ph].
+
+[52] D. Chicherin, T. Gehrmann, J. M. Henn, P. Wasser, Y. Zhang, and S. Zoia, “Analytic result for a two-loop five-particle amplitude,” Phys. Rev. Lett. 122 no. 12, (2019) 121602, arXiv:1812.11057 [hep-th].
+---PAGE_BREAK---
+
+[53] D. Chicherin, J. Henn, and V. Mitev, “Bootstrapping pentagon functions,” JHEP 05 (2018) 164, arXiv:1712.09610 [hep-th].
+
+[54] L. J. Dixon, A. J. McLeod, and M. Wilhelm, “A Three-Point Form Factor Through Five Loops,” arXiv:2012.12286 [hep-th].
+
+[55] S. He, Z. Li, and Q. Yang, “Notes on cluster algebras and some all-loop Feynman integrals,” arXiv:2103.02796 [hep-th].
+
+[56] J. M. Drummond, J. Henn, V. A. Smirnov, and E. Sokatchev, “Magic identities for conformal four-point integrals,” JHEP 01 (2007) 064, arXiv:hep-th/0607160 [hep-th].
+
+[57] J. Drummond, G. Korchemsky, and E. Sokatchev, “Conformal properties of four-gluon planar amplitudes and Wilson loops,” Nucl. Phys. B 795 (2008) 385–408, arXiv:0707.0243 [hep-th].
+
+[58] S. He, Z. Li, Y. Tang, and Q. Yang, “The Wilson-loop $d$ log representation for Feynman integrals,” JHEP 05 (2021) 052, arXiv:2012.13094 [hep-th].
+
+[59] N. Arkani-Hamed, S. He, and T. Lam, “Cluster configuration spaces of finite type,” arXiv:2005.11419 [math.AG].
+
+[60] S. He, Z. Li, Q. Yang, and C. Zhang, “Feynman Integrals and Scattering Amplitudes from Wilson Loops,” Phys. Rev. Lett. 126 (2021) 231601, arXiv:2012.15042 [hep-th].
+
+[61] J. L. Bourjaily, A. J. McLeod, C. Vergu, M. Volk, M. Von Hippel, and M. Wilhelm, “Rooting Out Letters: Octagonal Symbol Alphabets and Algebraic Number Theory,” JHEP 02 (2020) 025, arXiv:1910.14224 [hep-th].
+
+[62] S. Fomin and A. Zelevinsky, “Cluster algebras I: Foundations,” Journal of the American Mathematical Society 15 no. 2, (2002) 497–529.
+
+[63] S. Fomin and A. Zelevinsky, “Cluster algebras II: Finite type classification,” Inventiones mathematicae 154 no. 1, (2003) 63–121.
+
+[64] A. Berenstein, S. Fomin, and A. Zelevinsky, “Cluster algebras III: Upper bounds and double Bruhat cells,” Duke Mathematical Journal 126 no. 1, (2005) 1–52.
+
+[65] S. Fomin and A. Zelevinsky, “Cluster algebras IV: Coefficients,” Compositio Mathematica 143 no. 1, (2007) 112–164.
+
+[66] J. Golden, M. F. Paulos, M. Spradlin, and A. Volovich, “Cluster Polylogarithms for Scattering Amplitudes,” J. Phys. A47 no. 47, (2014) 474005, arXiv:1401.6446 [hep-th].
+
+[67] D. Parker, A. Scherlis, M. Spradlin, and A. Volovich, “Hedgehog bases for $A_n$ cluster polylogarithms and an application to six-point amplitudes,” JHEP 11 (2015) 136, arXiv:1507.01950 [hep-th].
+
+[68] Z. Li and C. Zhang, “Blowing up Stringy Canonical Forms: An Algorithm to Win a Simplified Hironaka’s Polyhedra Game,” arXiv:2002.04528 [hep-th].
+---PAGE_BREAK---
+
+[69] P. Deligne and D. Mumford, “The irreducibility of the space of curves of given genus,” Publications Mathématiques de l'Institut des Hautes Études Scientifiques 36 no. 1, (1969) 75–109.
+
+[70] S. L. Devadoss, *Tessellations of moduli spaces and the mosaic operad*. The Johns Hopkins University, 1999.
+
+[71] N. Arkani-Hamed, Y. Bai, S. He, and G. Yan, “Scattering Forms and the Positive Geometry of Kinematics, Color and the Worldsheet,” JHEP 05 (2018) 096, arXiv:1711.09102 [hep-th].
+
+[72] S. He, Z. Li, Y. Tang, and Q. Yang, “Bootstrapping octagons in reduced kinematics from $A_2$ cluster algebras,” arXiv:2106.03709 [hep-th].
+
+[73] S. Caron-Huot and S. He, “Three-loop octagons and $n$-gons in maximally supersymmetric Yang-Mills theory,” JHEP 08 (2013) 101, arXiv:1305.2781 [hep-th].
+
+[74] J. M. Henn, “Multiloop integrals in dimensional regularization made simple,” Phys. Rev. Lett. 110 (2013) 251601, arXiv:1304.1806 [hep-th].
+
+[75] J. M. Henn, “Lectures on differential equations for Feynman integrals,” J. Phys. A 48 (2015) 153001, arXiv:1412.2296 [hep-ph].
\ No newline at end of file
diff --git a/samples/texts_merged/5165017.md b/samples/texts_merged/5165017.md
new file mode 100644
index 0000000000000000000000000000000000000000..58693bce17e704636a06820b911c6594fffb51d5
--- /dev/null
+++ b/samples/texts_merged/5165017.md
@@ -0,0 +1,1107 @@
+
+---PAGE_BREAK---
+
+# Strategic Planning, Design and Development of the
+Shale Gas Supply Chain Network
+
+Diego C. Cafaro¹ and Ignacio E. Grossmann²*
+
+¹ INTEC (UNL - CONICET), Güemes 3450, 3000 Santa Fe, ARGENTINA
+
+² Department of Chemical Engineering, Carnegie Mellon University, Pittsburgh, PA 15213, U.S.A.
+
+## ABSTRACT
+
+The long-term planning of the shale gas supply chain is a relevant problem that has not been addressed before in the literature. This paper presents a mixed-integer nonlinear programming (MINLP) model to optimally determine the number of wells to drill at every location, the size of gas processing plants, the section and length of pipelines for gathering raw gas and delivering processed gas and by-products, the power of gas compressors, and the amount of freshwater required from reservoirs for drilling and hydraulic fracturing so as to maximize the economics of the project. Since the proposed model is a large-scale non-convex MINLP, we develop a decomposition approach based on successively refining a piecewise linear approximation of the objective function. Results on realistic instances show the importance of heavier hydrocarbons to the economics of the project, as well as the optimal usage of the infrastructure by properly planning the drilling strategy.
+
+## KEYWORDS
+
+Shale gas, supply chain, strategic planning, MINLP, solution algorithm
+
+* Corresponding author. Tel.: +1 412 268 2230; fax: +1 412 268 7139.
+
+E-mail address: grossmann@cmu.edu (I.E. Grossmann).
+---PAGE_BREAK---
+
+# 1. INTRODUCTION
+
+Natural gas is the cleanest-burning fossil fuel. Natural gas extracted from dense shale rock formations has become the fastest-growing fuel in the U.S. and has the potential of becoming a significant new global energy source. Over the past decade, the combination of horizontal drilling and hydraulic fracturing has allowed access to large volumes of shale gas that were previously uneconomical to produce. The production of natural gas from shale formations has reinvigorated the natural gas and chemical industries in the U.S. The Energy Information Administration projects U.S. shale gas production to grow from 23% to almost 50% of the total gas production in the next 25 years.[1] Shale gas is found in plays containing significant accumulations of natural gas, sharing similar geologic and geographic properties. A decade of production experience has come from the Barnett Shale play in Texas. The experience gained from the Barnett Shale has improved the efficiency of shale gas development around the country. Today, one of the most productive plays is the Marcellus Shale in the eastern U.S., mainly in Pennsylvania. Regarding both its economic and environmental impacts, the long-term planning and development of the shale gas supply chain network around each play is a very relevant problem. However, to the best of our knowledge, it has not been addressed before in the literature.
+
+The raw gas extracted from shale formations is transported from wellbores to processing plants through pipelines. The processing of shale gas consists of the separation of the various hydrocarbons and fluids from the pure gas (methane) to produce what is known as “pipeline quality” dry natural gas.[2] This means that before the natural gas can be transported by midstream distributors, it must be purified to meet the requirement for pipeline, industrial and commercial uses. The associated hydrocarbons (ethane, propane, butane, pentanes and natural gasoline) known as Natural Gas Liquids (NGLs), are valuable byproducts after the natural gas has been purified and fractionated. These NGLs are sold separately (usually through dedicated pipelines) and have a variety of different uses, including enhancing oil recovery in wells, providing raw materials for oil refineries or petrochemical plants, and as sources of energy.[3] One of the most critical issues in the design and planning of the shale gas supply
+---PAGE_BREAK---
+
+chain network is the sizing and location of new shale gas processing and fractionation plants (as well as future expansions) due to their high cost.
+
+On the other hand, the number of wells drilled in each location can dramatically influence the costs and the ecological footprint of natural gas operations.[4] The ability to drill multiple wells from a single location (or “pad”) is seen as a major technological breakthrough driving natural gas development, as has happened, for instance, in the Marcellus Shale. The utilization of multi-well pads also has large environmental and socio-economic implications, given that as many as 20 or more natural gas wells and the associated pipeline infrastructure can be concentrated in a single location. Furthermore, the total amount of industrial activity can be compressed in time, as these wells can be drilled in rapid succession and the technology now exists to perform hydraulic fracturing stimulations on multiple wells simultaneously. Hence, another key decision tackled by this paper is the drilling strategy, i.e. how many wells to set up or add on existing well pads at every time period.
+
+Another critical aspect of shale gas production is water management. Shale gas production is a highly water-intensive process, with a typical well requiring around 5 million gallons of water, normally over a 3-month period, to drill and fracture, depending on the basin and geological formation.[5] The vast majority of this water is used during the fracturing process, with large volumes of water pumped into the well together with sand and chemicals to facilitate the extraction of the gas. Although increasing amounts of water are being recycled and reused, freshwater is still required in large quantities for the drilling operations, as flowback water usually represents only about 25-30% of the water injected into the well. The need for freshwater is an issue of growing importance, especially in water-scarce regions and in areas with high cumulative demand for water, leading to pressure on sources and competition for water withdrawal permits. Therefore, a long-term planning model for the development of shale gas fields should also account for water availability.
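
As a rough back-of-the-envelope illustration of this balance (the 5 million gallons and the 25-30% flowback share come from the text above; the 80% recycle fraction and the function name are assumptions of this sketch, not values from the model):

```python
# Rough freshwater balance for one well (illustrative, not the model's constraint).
# Assumptions: 5 million gallons injected per well; flowback is ~28% of the
# injected water, of which only a fraction can be recycled into new wells.

def net_freshwater_per_well(injected_mgal=5.0, flowback_frac=0.28, recycle_frac=0.8):
    """Freshwater (million gallons) that must be drawn from external sources."""
    reused = injected_mgal * flowback_frac * recycle_frac
    return injected_mgal - reused

demand = net_freshwater_per_well()
print(round(demand, 2))  # 5.0 - 5.0*0.28*0.8 = 3.88
```

Even with aggressive reuse, most of the injected water must still come from freshwater sources, which is why source selection enters the planning model.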
+
+The goal of this paper is to develop a mixed-integer nonlinear programming (MINLP) model for the sustainable long-term planning and development of shale gas supply chains, which should optimally determine: (a) the number of wells to drill on new/existing pads; (b) the size and location of new gas
+---PAGE_BREAK---
+
+processing plants (as well as future expansions); (c) the section, length and location of new pipelines for gathering raw gas, delivering dry gas, and moving NGLs; (d) the location and power of new gas compressors to be installed, according to the flowrate at every line; and (e) the amount of freshwater coming from available reservoirs used for well drilling and fracturing, so as to maximize the economic results (NPV-based approach) over a planning horizon comprising 10 years.
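
The NPV-based objective can be sketched as follows; the quarterly discount rate and cash-flow numbers are hypothetical placeholders, and in the actual model the cash flows are built from the decisions (a)-(e) above:

```python
def npv(cash_flows, quarterly_rate=0.025):
    """Net present value of quarterly cash flows over the planning horizon.

    cash_flows[t] is the net cash flow (revenues minus capital and operating
    expenditures) in quarter t; t = 0 is the first quarter (undiscounted).
    """
    return sum(cf / (1.0 + quarterly_rate) ** t for t, cf in enumerate(cash_flows))

# 40 quarters (10 years): a large up-front investment followed by revenues.
flows = [-100.0] + [6.0] * 39
print(round(npv(flows), 2))
```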
+
+## Literature Review
+
+Some of the first papers on the optimal design and planning of supply chains were published in the literature 40 years ago.[6] A complete review of more recent developments in supply chain optimization problems can be found in the work of Melo et al.[7] Regarding the strategic planning of natural gas supply chains, Durán and Grossmann[8] propose a superstructure representation, an MINLP model and a solution strategy for the optimal synthesis of gas pipelines, deciding on the gathering pipeline system configuration, compressor powers and pipeline pressures. Iyer et al.[9] propose a multiperiod MILP model for the optimal planning and scheduling of offshore oilfield infrastructure investment and operations. Since the resulting model becomes intractable due to its large scale, the nonlinear reservoir equations are approximated through piecewise linear functions. Van den Heever and Grossmann[10] propose a multiperiod generalized nonlinear disjunctive programming model for oilfield infrastructure planning, whose optimal solution is found through a bilevel decomposition method. In this model, the number of wells is given beforehand through a fixed drilling plan. More recently, Gupta and Grossmann[11] address some new features of the same problem, accounting for all three components (oil, water, and gas) explicitly in the formulation. They also incorporate more accurate estimations of the nonlinear reservoir behaviour, a variable number of wells for each field (to capture drill rig limitations) and facility expansions, including their lead times.
+
+On the other hand, some work has also been reported on the optimization of the operation of shale gas fields. Rahman et al.[12] present an integrated optimization model for hydraulic fracturing design, accounting for fracture geometry, material balances, operational limitations, characteristics of the gas
+---PAGE_BREAK---
+
+formation, and production profiles. By combining genetic algorithms and evolutionary techniques, improved hydraulic fracturing designs reduce the treatment (stimulation) costs by up to 44% at the expense of a 12% reduction in gas production. Knudsen et al.[13] propose a Lagrangean relaxation approach for scheduling shut-in times in tight-formation multi-well pads, so as to stimulate the shale gas production of different wells to comply with the gas rates required by the distribution company. In that work, a proxy model captures the physics during shut-in operations. Based on the proxy model results, the time domain is discretized into daily time periods, and an MILP model is then solved using Lagrangean relaxation techniques.
+
+Finally, a few recent publications deal with the strategic and operational management of water resources and other environmental concerns in the development of shale gas plays. Mauter et al.[14] argue that strategic planning by both companies and regulatory agencies is critical to mitigate the environmental impacts of unconventional extraction. Rahm and Riha[15] attempt to determine the water resource impacts of shale gas extraction from regional, collective-based perspectives, seeking to balance the need for development with environmental concerns and regulatory constraints. Yang and Grossmann[16] present an MILP formulation whose main objective is to schedule the drilling and fracturing of well pads so as to minimize the transportation, treatment, freshwater acquisition and treatment infrastructure costs, while maximizing the number of well stages to be completed within the time frame. The goal is to find an optimal short-term fracturing schedule, the water recycling ratio, and the need for additional impoundment and treatment capacity.
+
+Like most enterprise-wide optimization (EWO) problems, the strategic planning of the shale gas supply chain has great economic potential. Considerable effort has been devoted to the solution of EWO problems during the last 20 years, particularly in the field of oil and gas production.[17] But none of this work has focused on the shale gas supply chain. Shale gas production has its own peculiarities, and is a problem of very recent development.[18] In fact, one of the major barriers to achieving the goal of EWO is the size and complexity of the computational optimization models.[19] The strategic planning of shale gas infrastructure consists of the design of large supply chains, including
+---PAGE_BREAK---
+
+well-pads, processing plants, compressors, product delivery nodes, and the complex pipeline network transporting shale gas and the resulting hydrocarbons. As concluded by Oliveira et al.,[20] careful evaluation of the investment options in this kind of problems has particular importance, and the use of efficient decision-making tools that capture the problem complexity becomes crucial.
+
+**Figure 1.** A simplified superstructure of the shale gas supply chain (for the sake of clarity, only few arcs of each type are drawn).
+
+## 2. PROBLEM DESCRIPTION
+
+We address the problem of determining the optimal design for a shale gas supply chain network, the well drilling and hydraulic fracturing strategy over the planning horizon, together with the size and location of gas separation plants, compressors and pipeline infrastructure, in order to maximize the net present value of the project. This problem can be formally stated as follows.
+---PAGE_BREAK---
+
+A comprehensive shale gas supply chain network superstructure like the one depicted in Figure 1 is given. It includes: (a) potential or existing well pads where new wells can be drilled and hydraulically fractured over the planning horizon (nodes $i \in I$), (b) potential or existing junction nodes where shale gas flows coming from nearby well pads converge (nodes $j \in J$), (c) potential or existing flow pipelines connecting nodes i and j, (d) candidate sites for the installation/expansion of new/existing shale gas processing plants (nodes $p \in P$), (e) potential/existing gathering pipelines connecting junction nodes j with plant sites p, (f) demand nodes for dry natural gas (nodes $k \in K$) and ethane (nodes $l \in L$), (g) potential/existing transmission and liquid pipelines connecting plant sites p with nodes k and l, respectively, and (h) freshwater source nodes from which the water required for drilling and fracturing new wells can be supplied.
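
A minimal way to encode such a superstructure (node sets and candidate arcs) is shown below; the class, field and node names are hypothetical illustrations, not the paper's notation:

```python
from dataclasses import dataclass, field

@dataclass
class Superstructure:
    """Node sets and candidate arcs of a shale gas network superstructure."""
    pads: set            # I: potential/existing well pads
    junctions: set       # J: potential/existing junction nodes
    plants: set          # P: candidate processing-plant sites
    gas_demands: set     # K: dry-gas demand nodes
    ethane_demands: set  # L: ethane demand nodes
    water_sources: set   # freshwater source nodes
    arcs: set = field(default_factory=set)  # candidate pipelines (origin, dest)

net = Superstructure(
    pads={"i1", "i2"}, junctions={"j1"}, plants={"p1"},
    gas_demands={"k1"}, ethane_demands={"l1"}, water_sources={"w1"},
    arcs={("i1", "j1"), ("i2", "j1"), ("j1", "p1"), ("p1", "k1"), ("p1", "l1")},
)
assert all(a != b for a, b in net.arcs)  # no self-loops among candidate arcs
```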
+
+A strategic long-term planning horizon is considered. In this paper, a planning horizon of 10 years is considered, divided into 40 time periods (quarters). The reasons for this time discretization are as follows: (1) Gas prices normally exhibit a seasonal behavior with a high peak in the winter. (2) The drilling and completion of wells normally takes between 50 and 90 days, plus the following 20 days during which the well does not produce a steady stream of gas, but a flowback of water that is captured and stored for further treatment. Overall, approximately 90 days (three months) are required from the time the well pad is set up and wells start to be drilled until they begin to produce a steady flow of shale gas. (3) Freshwater availability in some water-scarce regions is strongly seasonal, and can be a critical issue if high cumulative demand for water leads to pressure on sources and competition for water withdrawal permits.
+
+Besides the network superstructure and the time horizon, the productivity profile of every well at any location is assumed to be deterministic and known beforehand. Dry and semi-dry shale gas wells exhibit many of the same characteristics: an early peak in the gas rate from the sudden release of gas stored in pores and natural fracture networks, followed by a long transient decline in the production rate. Such decline in the rate is caused both by pressure loss and the inherently low permeability of shale rocks. In this problem, the well productivity (measured in Mm³/day) is represented by a piecewise constant
+---PAGE_BREAK---
+
+function of the well age. The parameter $pw_{i,\tau}$ stands for the production rate of a shale gas well of age $\tau$ (given in quarters) drilled in location $i$ (see Figure 2). Moreover, the shale gas composition, and particularly its “wetness” (% of hydrocarbons other than methane), are assumed to be known and independent of both the well site and its age. This assumption can be relaxed, as will be discussed later in this paper.
+
+**Figure 2.** Piecewise constant well productivity profile.
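
A sketch of such a piecewise constant profile, and of the aggregate pad production it induces, is given below; the decline values are invented for illustration, while real values of $pw_{i,\tau}$ come from reservoir data:

```python
# pw[tau]: production rate (Mm3/day) of a well of age tau quarters.
# Illustrative decline curve: no steady production in the drilling quarter,
# an early peak, a transient decline, then a flat tail.
pw = {0: 0.0, 1: 230.0, 2: 150.0, 3: 110.0, 4: 90.0}
TAIL_RATE = 75.0  # rate assumed for all ages beyond the tabulated quarters

def well_rate(age_quarters):
    return pw.get(age_quarters, TAIL_RATE)

def pad_rate(drill_quarters, t):
    """Total rate of a pad at quarter t, given the quarters its wells were drilled."""
    return sum(well_rate(t - q) for q in drill_quarters if q <= t)

# Two wells drilled in quarter 0 and one in quarter 2, evaluated at quarter 3:
print(pad_rate([0, 0, 2], 3))  # 110 + 110 + 230 = 450.0
```

Summing shifted copies of the same profile is how staggered drilling lets a pad sustain a roughly constant output despite each well's decline.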
+
+Regarding the pipeline infrastructure, gas and liquid pipelines must be considered separately. On the one hand, gas pipelines (transporting either raw or processed gas) are assumed to handle an ideal mixture of ideal gases. Raw gas pipelines connecting nodes $i$ to $j$ (well pads to junction nodes) and $j$ to $p$ (junction nodes to plants) operate at medium-low pressures, while transmission pipelines $p-k$ supplying gas demand nodes from processing plants operate at higher pressures. For simplicity, gas suction/discharge pressures at every node of the network are assumed to be given constant values. These are as follows: (i) shale gas discharge pressure at the well pads is $Pd_i$, (ii) junction nodes receive the shale gas at a pressure of $Ps_j < Pd_i$, (iii) compressor stations installed at junction nodes increase the pressure from $Ps_j$ to $Pd_j$, to make the gas flow towards processing plants, (iv) the shale gas pressure at the inlet of processing plants is $Pi_p < Pd_j$, (v) processing plants deliver dry gas at a pressure of $Po_p$, (vi) compressor stations installed at the outlet of processing plants increase the dry gas pressure from $Po_p =$
+---PAGE_BREAK---
+
+$Ps_p$ to $Pd_p$ before sending flows to markets, and (vii) gas demand nodes receive dry gas at a pressure of $Pr_k < Pd_p$. By fixing such values, the maximum flow of a gas pipeline is directly proportional to the pipeline diameter raised to the power of 2.667, and the proportionality factor depends on the gas properties, the input/output pressures and the pipeline length.[8][21] Moreover, compressors are assumed to be adiabatic and their power is directly proportional to the gas flow, since the compression ratio is a given parameter. More details are given in the Appendix.
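
These two proportionalities can be sketched as follows; the constants `k` and `w` are placeholders, and folding the length dependence into a Weymouth-type $1/\sqrt{L}$ factor is an assumption of this sketch:

```python
def max_gas_flow(diameter_m, length_km, k=1.0):
    """Maximum flow of a gas pipeline: proportional to diameter**2.667,
    with a factor that depends on gas properties, the fixed inlet/outlet
    pressures and the length (here assumed to scale as 1/sqrt(length))."""
    return k * diameter_m ** 2.667 / length_km ** 0.5

def compressor_power(flow, w=1.0):
    """Adiabatic compressor at a fixed compression ratio: power is
    directly proportional to the gas flow (w is a placeholder constant)."""
    return w * flow

# Doubling the diameter multiplies the maximum flow by 2**2.667:
ratio = max_gas_flow(0.6, 10.0) / max_gas_flow(0.3, 10.0)
print(round(ratio, 2))  # 6.35
```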
+
+On the other hand, liquid pipelines transport hydrocarbons like ethane, propane, butane, pentanes and natural gasoline (NGLs) in liquid state from separation plants to either petrochemical plants or LPG (liquefied petroleum gases) distribution facilities. In this problem, all NGLs except ethane are assumed to be separately sold to customers near the processing plants, while ethane is continuously delivered to petrochemical plants by dedicated pipelines. The maximum flow in liquid pipelines is assumed to be directly proportional to the pipeline section since a maximum mean velocity is imposed.
+
+**Figure 3.** Simplified network superstructure and alternative network designs.
+
+An illustrative example comparing two network designs is presented in Figure 3. The shale gas produced at two different well pads *i1* and *i2* is sent to a processing plant in two alternative ways: (A) through an intermediate junction node *j1*, or (B) directly, through separate lines. Typical values for the suction and discharge pressures at each node are given in the figure. The example also reveals one of the trade-offs to be determined by the model. Option (A) requires a compressor station at node *j1*, but pipelines are smaller in diameter and shorter than in option (B) which does not require a compressor.
+Pipeline and compressor costs as functions of their size (usually modeled by economies-of-scale functions) are the key to determining which option is the most convenient.
+
+Finally, freshwater consumption, mainly for hydraulic fracturing, is considered to be a fixed amount required during the drilling period, which depends on the well-pad location and the possibility of reusing the flowback water. The selection of optimal sources for water supply is a key model decision, but no details on the water transportation logistics are considered at this planning level. Other operational issues like flowback water capture, treatment and final disposal, as well as planning shut-ins and well stimulations are also out of the scope of this work.
+
+Given all the items described above, the goal is to optimally determine: (a) the number of wells to drill on new/existing pads in every quarter; (b) the size and location of new gas processing plants (as well as future expansions); (c) the section, length and location of new pipelines for gathering raw gas, delivering dry gas, and transporting NGLs; (d) the location and power of new gas compressors to be installed, and (e) the amount of freshwater from available reservoirs for well drilling and fracturing so as to maximize the Net Present Value (NPV) of the project.
+
+**Assumptions**
+
+The main assumptions have already been discussed and can be summarized as follows:
+
+(1) Shale gas is assumed to be an ideal mixture of ideal gases.
+
+(2) The composition, and particularly the shale gas "wetness", are known constants independent of the well location. The relaxation of this assumption is discussed after the model presentation.
+
+(3) The planning horizon is discretized in time periods, commonly quarters.
+
+(4) Multiple wells can be drilled in a single pad over one time period, although not necessarily at the same time. It is assumed that all of them are hydraulically fractured and completed within the same time period they are drilled.
+
+(5) Wells start to produce shale gas in the period following the drilling period. Once the wells are completed, their production cannot be delayed by shutting them in. This assumption can be relaxed as shown in the Appendix.
+
+(6) After the well is completed, its productivity rate is a piecewise constant function in terms of the well age. In other words, a decreasing function as depicted in Figure 2 is assumed to be given.
+
+(7) Multi-well pads can be set up, and multiple wells in the same pad can be drilled, fractured and completed during the same period. However, an upper bound is given due to technology limitations. Moreover, the total number of wells that can be drilled in the same pad over the given time horizon is also bounded.
+
+(8) The pressure at pipelines transporting raw gas from well pads to junction nodes decreases from $Pd_i$ to $Ps_j$ as a function of their length[21] (for further details see the Appendix). The same is also valid for pipelines transporting raw gas from junction nodes to processing plants (from $Pd_j$ to $Pi_p$), and dry gas from plants to demand nodes (from $Pd_p$ to $Pr_k$).
+
+(9) All the gas pressures ($Pd_i$ at the outlet of pad $i$; $Ps_j$ at the inlet of the junction node $j$; $Pd_j$ at the outlet of $j$; $Pi_p$ at the inlet of the plant $p$; $Po_p = Ps_p$ at the outlet of plant $p$; $Pd_p$ at the outlet of the $p$-compressor station; and $Pr_k$ at the gas demand node $k$) are given. Relaxing this assumption would imply solving a much more complex optimization problem.[8] Although pressure optimization is out of the scope for this model, it will be shown later that varying pressure levels within normal values does not lead to major changes in the optimal solution.
+
+(10) The liquid pipeline flow is bounded by a maximum mean velocity (commonly, 1.5 m/s).
+
+(11) Centrifugal pumps have negligible costs compared to processing plants, pipelines and gas compressors.
+
+(12) Shale gas processing plants separate NGLs (namely ethane, propane, butane, pentanes and natural gasoline) from the shale gas (methane), also removing $H_2S$, $CO_2$, $N_2$ and $H_2O$; and finally delivering the methane to consumer markets. All NGLs except ethane are sold to nearby markets, while ethane is sent to chemical plants by dedicated pipelines.
+
+(13) Concave cost functions of the form $f(x) = c \cdot x^r$ (with $0 < r < 1$ and $c > 0$) are assumed for: (a) the cost $f_a(x_a)$ of a shale gas processing plant with a capacity of $x_a$ MMm³/day, [22] (b) the cost $f_b(x_b)$ of a pipeline of diameter $x_b$, (c) the cost $f_c(x_c)$ of a compressor station of power $x_c$, [23] and (d) the cost $f_d(x_d)$ of drilling and hydraulically fracturing $x_d$ wells during the same quarter year.
+
+(14) Pipeline diameters are treated as continuous variables, but after the solution they are rounded up to the closest commercial diameter. A rigorous model would explicitly handle discrete diameter sizes, but this is out of the scope of this work.
+
+# 3. MATHEMATICAL FORMULATION
+
+The optimization problem for the long-term planning, design and development of the shale gas supply chain is formulated in terms of a mixed-integer nonlinear programming (MINLP) model described in the following sections.
+
+## 3.1 Model Constraints
+
+The feasible region of the model is determined by a set of linear constraints. They are grouped into five blocks: Shale Gas Production; Flow Balances; Plants, Pipelines and Compressors Sizing; Water Supplies; and Maximum Demands. Plant, pipeline and compressor costing is addressed separately in Sections 3.2 and 3.3.
+
+### 3.1.1 Shale Gas Production
+
+*Number of Wells Drilled in a Pad.* The number of wells drilled, fractured and completed in the multi-well pad *i* during the period *t* is represented by the variable $N_{i,t}$. Its value is determined by eq. (1) in terms of 0-1 variables $y_{i,n,t}$, one of which is equal to one to make $N_{i,t} = n$. The index *n* stands for an integer number greater than or equal to zero and less than or equal to $\bar{n}_i$, where $\bar{n}_i$ is the maximum number of wells that can be drilled during a single quarter in pad *i*. For the examples solved in the results section, the value of $\bar{n}_i$ varies from 2 to 4. Moreover, the total number of wells that can be drilled in a pad over the given planning horizon is bounded by eq. (3) to a maximum of $\bar{N}_i$. The current trend in shale gas production is to increase this number as much as possible to reduce the environmental impact.[4]
+
+$$N_{i,t} = \sum_{n=0}^{\bar{n}_i} n y_{i,n,t} \quad \forall i \in I, t \in T \qquad (1)$$
+
+$$\sum_{n=0}^{\bar{n}_i} y_{i,n,t} = 1 \quad \forall i \in I, t \in T \qquad (2)$$
+
+$$\sum_{t \in T} N_{i,t} \le \bar{N}_i \quad \forall i \in I \qquad (3)$$
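A minimal numeric sketch of the one-hot encoding in eqs. (1)-(2), with illustrative values:

```python
# Eq. (2) forces exactly one binary y_n to be active; eq. (1) then recovers
# the integer well count N from it. Illustrative: n_bar = 4 and N = 3 wells.

y = [0, 0, 0, 1, 0]                           # y_n for n = 0..4 (one-hot)
assert sum(y) == 1                            # eq. (2)
N = sum(n * y_n for n, y_n in enumerate(y))   # eq. (1)
print(N)  # 3
```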
+
+**Shale Gas Production at Every Well-Pad.** As stated in the model assumptions, the total production of shale gas (including methane, ethane and other NGLs) in a well-pad *i* at a certain period *t* depends on the age of every active well at that time. If $pw_{i,a}$ is a model parameter standing for the productivity (in Mm³ of shale gas per day) of a well drilled in pad *i*, *a* quarters before the current time period *t*, then the total daily production coming from all the wells in pad *i* can be determined through eq. (4). Note that at time *t*, the age of a well drilled in time period *τ* < *t* is *a* = *t* − *τ*. Moreover, wells of age “0” (being drilled and fractured) do not produce gas until the following period (see Figure 2).
+
+$$\sum_{\tau=1}^{t-1} N_{i,\tau} \, pw_{i,t-\tau} = SP_{i,t} \quad \forall i \in I, t > 1 \qquad (4)$$
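Eq. (4) can be sketched with an illustrative decline profile (the numbers below are not data from the paper):

```python
# Daily production of a pad at quarter t: sum over past drilling periods tau
# of (wells drilled at tau) x (productivity at age a = t - tau). Wells of
# age 0 do not contribute yet.

pw = {1: 100.0, 2: 60.0, 3: 40.0, 4: 30.0}  # productivity by age (Mm3/day)
N = {1: 2, 3: 3}                            # wells drilled per quarter

def pad_production(t):
    return sum(N.get(tau, 0) * pw.get(t - tau, 0.0) for tau in range(1, t))

print(pad_production(4))  # 2 wells of age 3 plus 3 wells of age 1 -> 380.0
```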
+
+**Methane, Ethane and other NGLs Produced at Well Pads.** Since the shale gas composition at every well is assumed to be the same (uniform gas “wetness”), the production of such fuels within the shale gas stream coming from each well pad can be easily determined from eqs. (5), (6) and (7).
+
+$$SP_{i,t}^{G} = gc SP_{i,t} \quad \forall i \in I, t > 1 \qquad (5)$$
+
+$$SP_{i,t}^{E} = ec SP_{i,t} \quad \forall i \in I, t > 1 \qquad (6)$$
+
+$$SP_{i,t}^{L} = lc SP_{i,t} \quad \forall i \in I, t > 1 \qquad (7)$$
+
+where *gc* is the volume methane composition, *ec* is the ethane composition, and *lc* is the remaining hydrocarbons composition. If these parameters become dependent on the well location, the model
+structure must be modified in order to preserve the linearity of model constraints. This will be discussed
+in a later section.
+
+**3.1.2 Flow Balances**
+
+*Stream Flows from a Well Pad to Junction Nodes.* Shale gas production at a certain pad during a time period is sent to one or more junction nodes (depending on the network design), which is controlled by eq. (8).
+
+$$
+SP_{i,t} = \sum_{j \in J} FP_{i,j,t} \quad \forall i \in I, t > 1 \tag{8}
+$$
+
+The model variable $FP_{i,j,t}$ stands for the daily shale gas flowing from pad $i$ to junction node $j$ during period $t$. By simple extension of eqs. (5), (6) and (7), individual hydrocarbon flows in the shale gas stream ($FP^G_{i,j,t}$, $FP^E_{i,j,t}$, $FP^L_{i,j,t}$) can be easily obtained.
+
+*Flow Balances at Junction Nodes.* Eq. (9) states that the sum of incoming shale gas flows at a certain junction node equals the sum of outgoing streams sent to one or more processing plants, depending on the network design. Under the given assumptions, flow splitting is allowed at both well pads and junction nodes.
+
+$$
+\sum_{i \in I} FP_{i,j,t} = \sum_{p \in P} GP_{j,p,t} \quad \forall j \in J, t > 1 \tag{9}
+$$
+
+Similarly to variable $FP_{i,j,t}$, individual fuel flows can also be derived from the shale gas stream flowing between nodes $j$ and $p$ ($GP_{j,p,t}$, $GP^G_{j,p,t}$, $GP^E_{j,p,t}$, $GP^L_{j,p,t}$).
+
+*Flow Balances at Separation Plants.* Assuming that all the methane from the shale gas flows processed at plant *p* is separated and sent to one or more dry gas demand nodes *k*, eq. (10) is added to the formulation. $TP_{p,k,t}$ is the flow of dry gas (methane) transported through pipeline *p-k* during period *t*.
+
+$$
+\sum_{j \in J} GP^G_{j,p,t} = \sum_{k \in K} TP_{p,k,t} \quad \forall p \in P, t > 1 \qquad (10)
+$$
+
+The same also applies for ethane flows, which are received with the shale gas, separated and pumped to one or more petrochemical plants *l* in liquid state through dedicated pipelines *p*-*l*, at a rate of $LP_{p,l,t}$ tons per day, during the entire period *t*.
+
+$$s_g^E \sum_{j \in J} GP_{j,p,t}^E = \sum_{l \in L} LP_{p,l,t} \quad \forall p \in P, t > 1 \qquad (11)$$
+
+$s_g^E$ is the density of ethane at standard conditions, given in ton/MMm³.
+
+Finally, other NGLs from junction nodes are processed, and sold to nearby markets at a rate of $NP_{p,t}$ tons per day as stated by eq. (12).
+
+$$s_g^L \sum_{j \in J} GP_{j,p,t}^L = NP_{p,t} \quad \forall p \in P, t > 1 \qquad (12)$$
+
+### 3.1.3 Plants, Pipelines and Compressors Sizing
+
+**Separation Plants.** The total processing capacity of a plant *p* at time *t* ($SepCap_{p,t}$) is given in MMm³ of shale gas per day, and can be calculated from its capacity at the previous period $(t - 1)$ plus the capacity expansion started at the beginning of period $(t - \tau_s)$, i.e. $SepInst_{p,t-\tau_s}$. In other words, it is assumed that separation plant installations/expansions take $\tau_s$ time periods, as stated in eq. (13).
+
+$$SepCap_{p,t} = SepCap_{p,t-1} + SepInst_{p,t-\tau_s} \quad \forall p \in P, t > 1 \qquad (13)$$
+
+**Upper Bound on the Shale Gas Flows Converging to a Separation Plant.** The sum of the shale gas flows coming from one or several junction nodes to a single separation plant during every period *t* should not exceed its processing capacity, as expressed by eq. (14).
+
+$$\sum_{j \in J} GP_{j,p,t} \leq SepCap_{p,t} \quad \forall p \in P, t > 1 \qquad (14)$$
+
+**Installation of Gas Pipelines.** As shown in the Appendix, given the gas inlet and outlet pressures, the fluid properties and the pipeline length, maximum gas flows are directly proportional to the pipeline
+diameter to the power of 2.667. It is also assumed that both raw and dry gases are ideal mixtures of ideal
+gases. In order to preserve linearity in the constraints, the diameter of the pipeline installed between a
+pair of nodes during a certain time period (a model decision) is substituted by a variable that stands for
+such diameter raised to the power of 2.667. In other words, the model variables $DFP_{i,j,t}$, $DGP_{j,p,t}$ and
+$DTP_{p,k,t}$ stand for the diameters of the pipelines installed at period $t$ between nodes $i-j$, $j-p$, and $p-k$,
+respectively, raised to the power of 2.667.
+
+In summary, the maximum gas pipeline flows with regard to pipeline diameters are calculated from eqs. (15), (16) and (17).
+
+$$
+\begin{align}
+FPFlow_{i,j,t} &= k_{i,j} l_{i,j}^{-0.5} DFP_{i,j,t} && \forall i \in I, j \in J, t > 1 \tag{15} \\
+GPFlow_{j,p,t} &= k_{j,p} l_{j,p}^{-0.5} DGP_{j,p,t} && \forall j \in J, p \in P, t > 1 \tag{16} \\
+TPFlow_{p,k,t} &= k_{p,k} l_{p,k}^{-0.5} DTP_{p,k,t} && \forall p \in P, k \in K, t > 1 \tag{17}
+\end{align}
+$$
+
+Due to assumption 9, parameters $k_{i,j}$, $k_{j,p}$ and $k_{p,k}$ take fixed values that can be calculated as shown in the Appendix. Distances between every pair of nodes ($l_{i,j}$, $l_{j,p}$ and $l_{p,k}$) are also given data.
+
+*Maximum Gas Flow between a Pair of Nodes.* The maximum gas flow between every pair of nodes depends on the size of the pipelines installed in previous periods, plus the additional flow capacity added due to a recent pipeline construction, as stated in eqs. (18), (19) and (20). It is assumed that pipelines are installed from period (*t* − *q*) to (*t* − 1) and are not able to transport gas until the period *t*, with *q* being the pipeline construction lead time in quarters.
+
+$$
+\begin{align}
+FPCap_{i,j,t} &= FPCap_{i,j,t-1} + FPFlow_{i,j,t-q} && \forall i \in I, j \in J, t > 1 \tag{18} \\
+GPCap_{j,p,t} &= GPCap_{j,p,t-1} + GPFlow_{j,p,t-q} && \forall j \in J, p \in P, t > 1 \tag{19} \\
+TPCap_{p,k,t} &= TPCap_{p,k,t-1} + TPFlow_{p,k,t-q} && \forall p \in P, k \in K, t > 1 \tag{20}
+\end{align}
+$$
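The capacity recursions (18)-(20) simply accumulate installations delayed by the construction lead time; a sketch with illustrative numbers:

```python
# Flow capacity between a pair of nodes: previous capacity plus the pipeline
# sized q quarters earlier (construction lead time). Values are illustrative.

q = 2
flow_added = {1: 10.0, 3: 5.0}   # FPFlow-type decisions taken at period t

def capacity_profile(horizon):
    cap = {0: 0.0}
    for t in range(1, horizon + 1):
        cap[t] = cap[t - 1] + flow_added.get(t - q, 0.0)
    return cap

cap = capacity_profile(6)
print(cap[3], cap[5])  # 10.0 15.0 -- pipelines become usable q periods later
```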
+
+Finally, shale gas and dry gas flows at every time period are bounded by the flow capacity connecting every pair of nodes, as enforced by eqs. (21), (22) and (23).
+
+$$FP_{i,j,t} \le FPCap_{i,j,t} \quad \forall i \in I, j \in J, t > 1 \quad (21)$$
+
+$$GP_{j,p,t} \le GPCap_{j,p,t} \quad \forall j \in J, p \in P, t > 1 \quad (22)$$
+
+$$TP_{p,k,t} \le TPCap_{p,k,t} \quad \forall p \in P, k \in K, t > 1 \quad (23)$$
+
+*Installation of Liquid Pipelines.* By assumption 10, a maximum mean velocity is imposed on liquid flows to make sure that head losses remain within specified values. In liquid pipeline network design, a typical value used is $v^{\max} = 1.5$ m/s. Under such an assumption, liquid flows are directly proportional to the pipeline section, and by extension, directly proportional to the pipeline diameter raised to the power of 2. As for gas pipelines, the diameter of a liquid pipeline installed between a gas processing plant *p* and a petrochemical plant *l* during a certain time period (a model decision) is substituted by an analogous variable, which stands for such diameter to the power of 2 (variable $DLP_{p,l,t}$). As a result, liquid pipeline flows (given in tons per day) with regard to pipeline diameters are calculated by eq. (24).
+
+$$LPFlow_{p,l,t} = k_{p,l} DLP_{p,l,t} \quad \forall p \in P, l \in L, t > 1 \quad (24)$$
+
+where $k_{p,l} = 3600 \cdot 24 \cdot \rho \pi v_{p,l}^{\max} / 4$, $\rho$ is the liquid (ethane) density given in ton/m³, and $v_{p,l}^{\max}$ the maximum mean velocity, in m/s.
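The constant $k_{p,l}$ can be evaluated directly; the ethane density used below is an assumed round figure for illustration only:

```python
import math

# Eq. (24) sketch: with a capped mean velocity, the liquid capacity scales
# with the pipe section. rho is an assumed density for liquid ethane.

rho = 0.55          # ton/m3 (assumed, illustrative)
v_max = 1.5         # m/s, the typical cap quoted in the text
k = 3600 * 24 * rho * math.pi * v_max / 4   # ton/day per (m of diameter)^2

def max_liquid_flow(diameter_m):
    return k * diameter_m ** 2

# Capacity grows with the square of the diameter:
print(round(max_liquid_flow(0.2) / max_liquid_flow(0.1), 1))  # 4.0
```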
+
+*Maximum Flow in Liquid Pipelines.* Similarly to gas pipelines, the model decides when to install a new pipeline and its corresponding size. Eq. (25) determines the flow capacity of each liquid pipeline at every time period (in ton/day), while constraint (26) imposes such value as an upper bound on the liquid flow from *p* to *l* during period *t*.
+
+$$LPCap_{p,l,t} = LPCap_{p,l,t-1} + LPFlow_{p,l,t-q} \quad \forall p \in P, l \in L, t > 1 \quad (25)$$
+
+$$LP_{p,l,t} \leq LPCap_{p,l,t} \quad \forall p \in P, l \in L, t > 1 \quad (26)$$
+
+*Power of Compressors.* If the suction and discharge pressures are given (see assumption 9), and assuming that compressors are adiabatic, a simple expression can be derived in order to calculate the required compression power (in kW) as shown in the Appendix. Under such assumptions, the required power is directly proportional to the total flow of gas being compressed. In the case of raw gas, compressed at junction nodes and sent to processing plants, the total power installed up to time *t* ($JCP_{j,t}$) must be greater than or equal to the power demanded by the total flows of raw gas compressed by *j* at time *t*, as expressed by eq. (27).
+
+$$JCP_{j,t} \geq kc_j \sum_{p \in P} GP_{j,p,t} \quad \forall j \in J, t > 1 \quad (27)$$
+
+Similarly, the power of compressors installed at the outlet of the processing plant *p* up to time *t* ($PCP_{p,t}$) sending dry gas to demand nodes *k* (e.g., gas distribution companies) is bounded from below by constraint (28).
+
+$$PCP_{p,t} \geq kc_p \sum_{k \in K} TP_{p,k,t} \quad \forall p \in P, t > 1 \quad (28)$$
+
+Moreover, compressor stations can be expanded in the planning horizon by installing new compressors at the same node. Eqs. (29) and (30) determine the total power of compressors installed up to time *t* at nodes *j* and *p*, respectively, where $\tau_c$ is the compressor installation lead time in quarters.
+
+$$JCP_{j,t} = JCP_{j,t-1} + JCInst_{j,t-\tau_c} \quad \forall j \in J, t > 1 \quad (29)$$
+
+$$PCP_{p,t} = PCP_{p,t-1} + PCInst_{p,t-\tau_c} \quad \forall p \in P, t > 1 \quad (30)$$
+
+### 3.1.4 Water Supplies
+
+*Water Demand for Drilling and Fracturing Wells.* As explained before, a large amount of freshwater is required in the shale gas industry for the hydraulic fracturing of new wells. This model assumes that
+the total amount of water required by a single well during the drilling, fracturing and completion processes ($wr_i$) is known (typically, 20 Mm³/well) but may depend on the well location. Eq. (31) states that the total number of wells drilled, fractured and completed in pad $i$ during period $t$ determines the total water requirement of that pad at that period, and such amount should be supplied from one or more freshwater sources $f$. The amount of freshwater supplied by source $f$ for drilling and fracturing new wells in pad $i$ during period $t$ is a key model decision represented by the continuous variable $WS_{f,i,t}$. In addition, if the well-pad $i$ has the infrastructure for flowback water treatment and reuse, a reuse factor $rf_i$ (usually below 20%) can reduce the need for freshwater, as shown in the LHS of eq. (31).
+
+$$N_{i,t} \ wr_i / (1 + rf_i) = \sum_{f \in F} WS_{f,i,t} \quad \forall i \in I, t \in T \quad (31)$$
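A quick numeric reading of eq. (31), with illustrative values:

```python
# Freshwater that the sources f must jointly supply to a pad in one quarter:
# wells drilled times water per well, reduced by the flowback reuse factor.
# Numbers are illustrative (wr near the typical 20 Mm3/well cited above).

wr = 20.0    # Mm3 of water per well
rf = 0.15    # 15% flowback reuse factor
N_it = 3     # wells drilled in pad i during period t

freshwater = N_it * wr / (1 + rf)
print(round(freshwater, 2))  # 52.17 Mm3, instead of 60 without reuse
```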
+
+*Water Availability.* Every freshwater resource (rivers, lakes, underground water, etc.) usually has an upper limit on the amount of water that it can provide to the shale gas industry, often given by a seasonal profile. If the parameter $fwa_{f,t}$ stands for the maximum volume of freshwater that source $f$ can supply to the drilling and fracturing of new wells during the whole period $t$, the total amount supplied from $f$ to every pad $i$ should be bounded from above as in constraint (32).
+
+$$\sum_{i \in I} WS_{f,i,t} \leq fwa_{f,t} \quad \forall f \in F, t \in T \qquad (32)$$
+
+### 3.1.5 Maximum Demands
+
+A critical model decision is where to sell both the dry gas and the ethane flows produced by the shale gas processing plants. Every potential market (or demand node) is assumed to consume a maximum amount of product (dry gas for gas distributors, ethane for petrochemical plants) based on their own transportation or processing capacities. Moreover, such a demand profile can be seasonal, especially in dry gas markets. Constraints (33) and (34) restrict the total flow of dry gas and ethane that can be sent from processing plants to each demand node during every period of the planning horizon.
+
+$$ \sum_{p \in P} TP_{p,k,t} \le gasdem_{k,t} \quad \forall k \in K, t > 1 \qquad (33) $$
+
+$$ \sum_{p \in P} LP_{p,l,t} \le ethdem_{l,t} \quad \forall l \in L, t > 1 \qquad (34) $$
+
+## 3.2 Objective Function
+
+The objective function of the model is to maximize the Net Present Value (NPV) of the long-term planning project as expressed in eq. (35).
+
+$$
+\begin{aligned}
+NPV = & \sum_{t \in T} (1 + dr/4)^{-t} \left[ \sum_{p \in P} \sum_{k \in K} gasp_{k,t} nd_t TP_{p,k,t} + \sum_{p \in P} \sum_{l \in L} ethp_{l,t} nd_t LP_{p,l,t} + \sum_{p \in P} lpgp_t nd_t NP_{p,t} \right. \\
+& - \sum_{i \in I} \sum_{j \in J} shgc_{i,t} nd_t FP_{i,j,t} \\
+& - \sum_{p \in P} ks \, SepInst_{p,t}^{SepExp} \\
+& - \sum_{i \in I} \sum_{n=1}^{\bar{n}_i} kd \, n^{WellExp} y_{i,n,t} \\
+& - \sum_{i \in I} \sum_{j \in J} kp_i l_{i,j} DFP_{i,j,t}^{GasPipeExp} - \sum_{j \in J} \sum_{p \in P} kp_j l_{j,p} DGP_{j,p,t}^{GasPipeExp} \\
+& - \sum_{p \in P} \sum_{k \in K} kp_p l_{p,k} DTP_{p,k,t}^{GasPipeExp} - \sum_{p \in P} \sum_{l \in L} kp_l l_{p,l} DLP_{p,l,t}^{LiqPipeExp} \\
+& - \sum_{j \in J} kc \, JCInst_{j,t}^{CompExp} - \sum_{p \in P} kc \, PCInst_{p,t}^{CompExp} \\
+& \left. - \sum_{f \in F} \sum_{i \in I} (fix_f + var_f l_{f,i}) WS_{f,i,t} \right]
+\end{aligned}
+\qquad (35)
+$$
+
+The objective function comprises positive and negative terms for every period of the planning horizon, discounted back to present value by the project's annual discount rate, *dr*. Positive terms are the income from sales of dry gas, ethane, and NGLs other than ethane. Negative terms are the shale gas acquisition cost (including production, transportation and other operating costs), the cost of drilling, hydraulic fracturing and completion of shale gas wells, the cost of installing/expanding shale gas processing capacity at separation plants, the cost of constructing new pipelines either for
+gathering raw gas or distributing dry gas and ethane, the cost of installing new compressor stations at junction nodes and processing plants, and freshwater acquisition and transportation costs for drilling and fracturing purposes.
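As a point of reference, the factor $(1 + dr/4)^{-t}$ applies the annual discount rate on a quarterly basis; a minimal sketch with assumed figures:

```python
# Quarterly discounting used in eq. (35): each period's net cash flow is
# multiplied by (1 + dr/4)^-t. Cash flows below are assumed, for illustration.

dr = 0.10                                # 10% annual discount rate
cash = {1: 100.0, 2: 100.0, 3: 100.0}    # net cash flow per quarter

npv = sum(cf * (1 + dr / 4) ** -t for t, cf in cash.items())
print(round(npv, 2))  # slightly below the undiscounted 300
```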
+
+It should be noticed that instead of using linear costs with fixed charges, nonlinear expressions are used to represent economies of scale functions in some of the negative terms of eq. (35), featuring exponents between 0 and 1. Hence, the objective function can be classified as non-convex, with strictly concave separable terms. However, all constraints are linear as was shown in the previous sections.
+
+### 3.3 Cost Estimation
+
+Special attention must be paid to the equipment costing in the objective function (35). Regarding shale gas separation plants and gas compressors, typical values for the exponents *SepExp* and *CompExp* vary from 0.60 to 0.77.[24] However, a particular case arises in this model for pipeline construction. By assumption 13, the cost of pipelines also follows an economy of scale function with regards to the pipeline diameter, with a typical exponent of 0.60. However, it should be noticed that pipeline diameters are not directly considered in the model but through the substituted variables $DFP_{i,j,t}$, $DGP_{j,p,t}$, $DTP_{p,k,t}$ (for gas pipelines) and $DLP_{p,l,t}$ (for liquid pipelines). In fact, such variables account for the diameters raised to the power of 2.667 in the case of gas pipelines, and to the power of 2 in the case of liquid pipelines. Therefore, if 0.60 is taken as the economy-of-scale exponent for pipeline construction, the values of the exponents *GasPipeExp* and *LiqPipeExp* in the objective function (35) will be $0.60/2.667 = 0.225$ and $0.60/2 = 0.30$, respectively (see Appendix).
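The exponent substitution can be checked numerically:

```python
# Costing a gas pipeline as c * D^0.60 is equivalent to costing
# c * (D^2.667)^(0.60/2.667) on the substituted variable DFP = D^2.667.

D = 0.5                                   # diameter, arbitrary units
direct = D ** 0.60
via_substitution = (D ** 2.667) ** (0.60 / 2.667)
print(abs(direct - via_substitution) < 1e-12)  # True
```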
+
+### 3.4 Model Adaptation to Account for Shale Gas Composition Variations
+
+Relaxing the assumption of uniform shale gas composition so that the gas wetness is dependent on the well location significantly complicates the nature of the constraints. In order to precisely trace the composition of shale gas flows, the critical points in the proposed network superstructure are the junction nodes. If it is required to split flows to more than one separation plant, the model involves
+bilinear equations so that the composition of all the outgoing flows takes a common value given that junctions are mixing-splitting nodes as stated by eqs. (36), (37) and (38).
+
+$$GComp_{j,t} GP_{j,p,t} = GP^{G}_{j,p,t} \quad \forall j \in J, p \in P, t > 1 \qquad (36)$$
+
+$$EComp_{j,t} GP_{j,p,t} = GP^{E}_{j,p,t} \quad \forall j \in J, p \in P, t > 1 \qquad (37)$$
+
+$$LComp_{j,t} GP_{j,p,t} = GP^{L}_{j,p,t} \quad \forall j \in J, p \in P, t > 1 \qquad (38)$$
+
+Eqs. (36), (37) and (38) include the additional variables $GComp_{j,t}$, $EComp_{j,t}$, $LComp_{j,t}$ (not dependent on the index $p$) which are the volume compositions of methane, ethane and LPG (hydrocarbons other than methane and ethane) in the shale gas, forcing all the flows departing from the junction node $j$ (a mixing-splitting node) to have the same composition. Moreover, individual component flow balances are incorporated to the formulation through eqs. (39), (40) and (41).
+
+$$\sum_{i \in I} FP^{G}_{i,j,t} = \sum_{p \in P} GP^{G}_{j,p,t} \quad \forall j \in J, t > 1 \qquad (39)$$
+
+$$\sum_{i \in I} FP^{E}_{i,j,t} = \sum_{p \in P} GP^{E}_{j,p,t} \quad \forall j \in J, t > 1 \qquad (40)$$
+
+$$\sum_{i \in I} FP^{L}_{i,j,t} = \sum_{p \in P} GP^{L}_{j,p,t} \quad \forall j \in J, t > 1 \qquad (41)$$
+
+It can be easily seen that equations (36), (37) and (38) involve bilinear terms that add significant difficulty to the MINLP model, especially because the feasible region can no longer be modeled with linear constraints. However, the next section presents a particular case in which no bilinear terms have to be added when incorporating shale gas composition variations.
+
+### 3.4.1 Shale Gas Flows Converging to a Single Processing Plant
+
+If the model is intended to select only one of the given locations to install a separation plant (as expected, due to the high cost of such plants), linear expressions hold since no splitting occurs at
+junction nodes. The tendency of the model to select only one plant location is demonstrated in the
+results section with Example 1.
+
+Under this assumption, the model modifications necessary to account for shale gas composition variations are as follows. First, we include a new binary variable $w_p$, representing whether the location $p$ is selected to install the plant. As a result, the single plant condition leads to constraints (42) and (43).
+
+$$ \text{SepCap}_{p,t} \le \text{sepmax } w_p \quad \forall p \in P, t > 1 \tag{42} $$
+
+$$ \sum_{p \in P} w_p \le 1 \tag{43} $$
+
+where *sepmax* is an upper bound on the capacity of a single gas processing plant. Note that although the plant location must be unique, it may be installed and expanded in different time periods.
+
+In this way, upper bounds on the individual product flows emerging from every plant are imposed by
+constraints (44), (45) and (46) in place of eqs. (10), (11) and (12).
+
+$$ \max_{i \in I} \{gc_i\} \sum_{j \in J} GP_{j,p,t} \geq \sum_{k \in K} TP_{p,k,t} \quad \forall p \in P, t > 1 \tag{44} $$
+
+$$ s_g^E \max_{i \in I} \{ec_i\} \sum_{j \in J} GP_{j,p,t} \geq \sum_{l \in L} LP_{p,l,t} \quad \forall p \in P, t > 1 \tag{45} $$
+
+$$ s_g^L \max_{i \in I} \{lc_i\} \sum_{j \in J} GP_{j,p,t} \ge NP_{p,t} \quad \forall p \in P, t > 1 \tag{46} $$
+
+$gc_i$, $ec_i$ and $lc_i$ are the volume compositions of methane, ethane and LPG in the shale gas produced at pad $i$. Note that from eq. (14), if the plant is not selected (zero capacity) no shale gas flows can be sent to it, and the LHS of the last inequalities is zero. Finally, individual component balances are given by eqs. (47), (48) and (49). Eq. (47) for the methane balance is illustrated through the simple example depicted in Figure 4, comprising two well-pads, one junction node, the processing plant and the gas demand node.
+
+**Figure 4.** Mixing flows at a single processing plant.
+
+$$ \sum_{i \in I} gc_i SP_{i,t} = \sum_{p \in P} \sum_{k \in K} TP_{p,k,t} \quad (47) $$
+
+$$ s_g^E \sum_{i \in I} ec_i SP_{i,t} = \sum_{p \in P} \sum_{l \in L} LP_{p,l,t} \quad (48) $$
+
+$$ s_g^L \sum_{i \in I} lc_i SP_{i,t} = \sum_{p \in P} NP_{p,t} \quad (49) $$
+
+In summary, the modified MINLP model accounting for shale gas composition variations according to the well site, assuming that a unique processing plant location is to be selected, seeks to maximize the objective function (35) subject to constraints (1)-(4), (8), (9), (13)-(34), (42)-(49).
+
+## 4. SOLUTION STRATEGIES
+
+Solving the MINLP models described in the previous sections is a very challenging task for three main reasons: (1) the size of the model is large, (2) the objective function involves non-concave terms accounting for equipment costs (processing plants, pipelines and compressors), and (3) such nonlinear functions have unbounded derivatives at zero values. The last two features directly follow from the economies of scale usually used to model the equipment cost variation with regard to the equipment
+---PAGE_BREAK---
+
+size. In principle, the MINLP model can be solved to global optimality with a spatial branch-and-bound search using convex envelopes (secants) for the concave terms in the objective. However, given its large size, the problem is intractable for such methods as implemented in BARON, LINDOGLOBAL and COUENNE.[25] Therefore, in this section a tailored strategy is described for solving the large-scale MINLP problem.
+
+## 4.1 Plant and Equipment Cost Estimations
+
+Nonconvex power law expressions of the form $f(x) = c x^{\alpha}$ with exponents less than one, as in Biegler et al.,[23] are commonly handled with two approaches: (a) approximating the concave function by a piecewise linear function,[26] or (b) adding a small value $\varepsilon$ to the variable $x$, thus slightly displacing the curve so as to ensure a non-zero argument in the function. Approach (a) is computationally costly, but can be useful for generating global upper bounds for the maximization problem by solving approximate MILP problems with piecewise linear approximations (underestimations) of the concave equipment cost functions.[27][28] On the other hand, approach (b) is meant to avoid unbounded derivatives but can have drawbacks, especially if the exponents are rather small.[29] To overcome this problem, a simple expression of logarithmic form is used here. In the following sections, both the piecewise linear approach and the logarithmic approximation used are briefly presented.
+
+## 4.2 Piecewise Linear Approximation of Concave Cost Functions
+
+Given the nonlinear concave cost functions in eq. (1) (generically referred to as $f(x)$), accounting for the cost $f(x)$ of a processing plant, pipeline, or compressor of size $x \in X$, it is simple to demonstrate that piecewise linear approximations like the one depicted in Figure 5 provide valid underestimations of $f(x)$. That is achieved by partitioning the domain of variable $x$ into intervals ($X = [a_0; a_1] \cup [a_1; a_2] \cup \dots \cup [a_{m-1}; a_m]$) and introducing binary variables $z_v$ to determine to which interval the selected value of $x$ belongs. On every interval $v$, function $f(x)$ is approximated by $\phi(x) = f(a_{v-1}) + (x - a_{v-1}) [f(a_v) - f(a_{v-1})] / (a_v - a_{v-1})$. According to Padberg,[30] such a piecewise linearization can be modeled through two formulations: $\delta$ and $\lambda$. In this case, we adopt the $\delta$-formulation, which leads to:
+---PAGE_BREAK---
+
+$$x = a_0 + \sum_{v=1}^{m} y_v$$
+
+$$\phi(x) = f(a_0) + \sum_{v=1}^{m} y_v [f(a_v) - f(a_{v-1})] / [a_v - a_{v-1}]$$
+
+$$\begin{align*}
+y_v \ge (a_v - a_{v-1})z_{v+1} & \qquad v = 1 \dots m-1 \\
+y_v \le (a_v - a_{v-1})z_v & \qquad v = 2 \dots m \\
+y_1 \le a_1 - a_0 \\
+y_v \ge 0 & \qquad v = 1 \dots m \\
+z_v \in \{0, 1\} & \qquad v = 1 \dots m
+\end{align*}$$
+
+(50)
+
+Note that if $z_v = 0$ (with $v > 1$), $y_v = 0$ because of the fourth constraint in (50). From that, $z_{v+1}$ is also zero to satisfy the third constraint in (50), given that $(a_v - a_{v-1})$ is always greater than zero. In other words, $z_v = 0$ implies $z_{v+1} = 0$, and equivalently, $z_{v+1} = 1$ implies $z_v = 1$, for $v > 1$. To illustrate the meaning of the variables in (50) reconsider the example given in Figure 5. Assume that $x = 6.5$. From the constraints described above, it follows that $z_2 = z_3 = z_4 = 1$, $y_1 = a_1 - a_0 = 2$ (because $z_2 = 1$), $y_2 = a_2 - a_1 = 2$ (because $z_3 = 1$), $y_3 = a_3 - a_2 = 2$ (because $z_4 = 1$), and $y_4 = 0.5$.
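The bookkeeping of formulation (50) can be reproduced numerically for the worked example above; the breakpoints $a = [0, 2, 4, 6, 8]$ below are the illustrative values implied by Figure 5, not model data.

```python
def delta_formulation(x, a):
    """Recover the y_v and z_v values of the delta-formulation (50)
    for a given x and breakpoints a[0..m]."""
    m = len(a) - 1
    y = [0.0] * (m + 1)          # y[1..m]; index 0 unused
    z = [0] * (m + 1)            # z[2..m] meaningful; z[0], z[1] unused
    for v in range(1, m + 1):
        if x >= a[v]:            # interval v is fully used up
            y[v] = a[v] - a[v - 1]
            if v + 1 <= m:
                z[v + 1] = 1     # z_{v+1} = 1 forces y_v to its full length
        elif x > a[v - 1]:
            y[v] = x - a[v - 1]  # x falls inside interval v
    return y, z

def phi(x, a, f):
    """Piecewise linear underestimate of a concave f on breakpoints a."""
    y, _ = delta_formulation(x, a)
    return f(a[0]) + sum(
        y[v] * (f(a[v]) - f(a[v - 1])) / (a[v] - a[v - 1])
        for v in range(1, len(a)))

y, z = delta_formulation(6.5, [0, 2, 4, 6, 8])
print(y[1:], z[2:])   # [2, 2, 2, 0.5] and [1, 1, 1], matching the example
```

For any concave $f$, the resulting $\phi$ never exceeds $f$, which is what makes the MILP approximation a valid bound.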
+
+**Figure 5.** Concave cost function and piecewise linear underestimation.
+---PAGE_BREAK---
+
+By replacing the nonlinear terms in the objective function (35) with the piecewise linear approximations given in (50), the MINLP model reduces to an MILP model yielding valid upper bounds for the global optimum of the original problem. A key decision is how to divide the variable domain, i.e., how many intervals to consider. The finer the domain discretization, the closer the upper bound is to the actual objective value of the MINLP, but also the higher the CPU time required by the MILP, since the number of integer variables increases significantly. In this work, such a tradeoff is managed through the successive refining strategy presented in Section 4.4, based on the ideas of You and Grossmann[31][32] for dealing with the nonlinear concave function $\sqrt{x}$.
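The discretization tradeoff can be checked numerically on a toy concave function ($f(x) = x^{0.6}$ on $[0, 10]$, illustrative values unrelated to the model data): the maximum underestimation gap shrinks monotonically as the uniform partition is refined.

```python
# Toy check: the maximum gap between a concave f and its piecewise linear
# underestimate decreases as the number of uniform intervals m grows.
f = lambda x: x**0.6

def max_gap(m, lo=0.0, hi=10.0, samples=2000):
    """Largest value of f(x) - phi(x) over a sampling grid, where phi is
    the piecewise linear interpolant of f on m uniform intervals."""
    bp = [lo + (hi - lo) * v / m for v in range(m + 1)]
    def phi(x):
        for a0, a1 in zip(bp, bp[1:]):
            if a0 <= x <= a1:
                t = (x - a0) / (a1 - a0)
                return (1 - t) * f(a0) + t * f(a1)
    xs = [lo + (hi - lo) * s / samples for s in range(samples + 1)]
    return max(f(x) - phi(x) for x in xs)

gaps = [max_gap(m) for m in (1, 2, 4, 8)]
print([round(g, 3) for g in gaps])   # strictly decreasing sequence
```

Note that the gap is largest near the origin, where the derivative of the power-law cost is unbounded, which is precisely why the refinement in Section 4.4 concentrates breakpoints where the incumbent solution lies.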
+
+## 4.3 Logarithmic Approximation of Concave Cost Functions
+
+In order to avoid unbounded derivatives and estimation errors when solving NLP subproblems in the MINLP model, the alternative approximation function $g(x)$ for $f(x)$ proposed by Cafaro and Grossmann[29] is used:
+
+$$ f(x) = c \ x^r \approx g(x) = k \ln(bx + 1) \tag{51} $$
+
+where $x$ is the size of the equipment, $f(x)$ is the actual cost of the equipment of size $x$, $g(x)$ is the estimated cost of the equipment of size $x$, and $k, b > 0$ are parameters selected to fit $f(x)$ as closely as possible. Further details of this approximation are given in Cafaro and Grossmann.[29]
+
+The proposed function has two main advantages with regard to the classic $\epsilon$-approximation $f(x) \approx h(x) = c (x + \epsilon)^r$: (1) the cost at $x=0$ is exactly zero, $g(0) = k \ln(b \cdot 0 + 1) = k \ln(1) = 0$, and (2) the derivatives of $g(x)$ for all $x \ge 0$ are bounded positive values given by $g'(x) = b k / (b x + 1)$. In particular, at the origin $g'(0) = b k$. These properties are particularly useful when dealing with concave cost functions with small exponents, like the cost of pipelines (exponents 0.225 and 0.300 for gas and liquid pipelines, respectively).
+
+Appropriate values for parameters $k$ and $b$ in function $g(x)$ can be found relatively easily for liquid and gas pipelines, and the logarithmic approximation leads to very good results (less than 0.50% error) in the calculation of pipeline costs in all of the case studies tackled in Section 5.
+---PAGE_BREAK---
+
+## 4.4 Solution Algorithm: Branch-Refine-Optimize (BRO) Strategy
+
+To find the global optimum of the nonconvex MINLP model presented in Section 3, a two-level branch-and-refine procedure is proposed (see Figure 6). In the upper level, we successively solve MILP approximations of the original MINLP problem serving two purposes: (1) providing valid (and increasingly tighter) upper bounds on the global optimum, and (2) proposing efficient supply chain network configurations. Once the MILP approximation is solved, the corresponding supply chain network design is fixed by removing all the nodes and arcs of the original superstructure that are not active in the MILP solution, thus defining the lower-level optimization procedure.
+---PAGE_BREAK---
+
+**Figure 6.** Branch-Refine-Optimize (BRO) algorithm.
+
+The aim of the lower level of the algorithm is to find the global optimal solution of a reduced MINLP problem (or subproblem) focused only on equipment sizing (plant, pipelines and compressors) and the
+---PAGE_BREAK---
+
+drilling strategy (integer variables $n_{i,t}$), since the network structure is fixed. Because the reduced MINLP problem is nonconvex, its global optimal solution is found by, on the one hand, solving the reduced MINLP with a non-global solver (DICOPT, SBB)[25] to determine a lower bound for the selected network and, on the other, successively partitioning the equipment size domains and recursively solving piecewise linear approximations of the objective function to determine tighter upper bounds (inner loop). Finally, the global optimal solution of the reduced MINLP is a feasible solution of the original MINLP, and its objective value provides a valid global lower bound for the problem.
+
+Note that supply chain network designs proposed by the upper level at previous iterations are excluded with integer cuts in order to reduce the enumeration effort. Such integer cuts are similar to those proposed by Durán and Grossmann,[8] which eliminate particular binary combinations accounting for network configurations already analyzed. The cuts are derived from the values of the binary variables $z_v$ used by the piecewise linear approximation of the concave cost terms in the objective function of the MILP (see Section 4.2). As a result, if the approximate solution obtained by the upper level in a new iteration is worse than the best solution found (or global lower bound), the algorithm automatically stops. Otherwise, the outer loop refines the piecewise linear approximation of the original problem and might improve the network structure so that the global optimal solution can be obtained after a finite number of iterations.
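A minimal sketch of such an integer cut in its classic form, $\sum_{v \in B_1}(1 - z_v) + \sum_{v \in B_0} z_v \ge 1$, is given below; the variable names are illustrative, and in the actual algorithm the cut acts on the $z_v$ binaries of the piecewise linearization.

```python
def integer_cut(z_hat):
    """Build the coefficients (a, rhs) of the linear cut
    sum_v a[v]*z[v] >= rhs that excludes exactly the binary point z_hat,
    following sum_{v in B1}(1 - z_v) + sum_{v in B0} z_v >= 1."""
    a = [-1 if val == 1 else 1 for val in z_hat]
    rhs = 1 - sum(z_hat)          # 1 minus the number of ones in z_hat
    return a, rhs

def satisfied(a, rhs, z):
    """True if the binary point z satisfies the cut."""
    return sum(ai * zi for ai, zi in zip(a, z)) >= rhs

a, rhs = integer_cut([1, 0, 1])
print(satisfied(a, rhs, [1, 0, 1]))   # False: the incumbent is cut off
print(satisfied(a, rhs, [1, 1, 1]))   # True: every other point survives
```

Any binary vector differing from the incumbent in at least one position changes the left-hand side by at least one unit, so only the incumbent configuration is eliminated.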
+
+In summary, the proposed solution algorithm is as follows:
+
+**Step 1. Initialization.** A one-piece linear underestimation (secant) is used for all the concave cost terms of later periods (for instance, $t > 10$), while in earlier periods the starting piecewise linearization comprises two to four intervals. The global upper bound is set to $GUB = +\infty$, and the global lower bound $GLB = -\infty$.
+
+**Step 2. Global Piecewise Linear Approximation.** Solving the incumbent MILP approximation of the original MINLP problem (as shown in Section 4.2) provides a global upper bound $GUB$. Since all the constraints in the MINLP are linear, the optimal solution of the MILP is also a feasible solution of the MINLP problem. Thus, a global lower bound (GLB) can be directly obtained by substituting the
+---PAGE_BREAK---
+
+optimal solution of the MILP into the MINLP. However, this solution can be taken as the initial point of
+a non-global MINLP solving step to improve the GLB.
+
+**Step 3. Reduced Problem Optimization.** By fixing the network structure, i.e. removing all the nodes (well-pads, junctions, gas processing plants, compressors) and arcs (pipelines) that were not selected in the optimal solution of the MILP, we successively solve a reduced MINLP problem with a non-global algorithm (DICOPT, SBB) that is intended to improve the best solution found. The MINLP model makes use of the logarithmic approximation presented in Section 4.3, which avoids the numerical difficulties reported by You and Grossmann.[31] In this way, solving the nonconvex reduced MINLP might yield an improved lower bound for the subproblem (RLB). Next, based on the optimal values of the equipment size variables, we bisect the corresponding intervals of the piecewise linear approximations. If the optimal solution of the MINLP problem lies at the bounds of some intervals, no new interval is added for those terms. After refining the domain partition, a tighter upper bound for the reduced problem (RUB) can be obtained, as shown in the next step.
+
+**Step 4. Reduced Problem Piecewise Linear Approximation.** The MILP with the piecewise linear approximation of the reduced problem provides an upper bound RUB, whose value tends to decrease as the domain partition is refined. The inner optimization loop iterates until the lower bound from the MINLP and upper bound of the MILP are within an optimality tolerance $\epsilon_1$. Once that occurs, the global optimum of the reduced problem has been found, and the lower bound of the original problem (GLB) is updated.
+
+**Step 5. Stopping Criteria.** From the values of the variables in the best solution found by the reduced MINLP, the intervals of the piecewise linear approximations in the original problem are bisected, and a new integer cut is added to the upper-level MILP to avoid network configurations already tried. Next, the algorithm returns to Step 2 and two cases may occur: (a) a tighter global upper bound is found, or (b) the approximate solution is worse than the global lower bound. In case (a), the main optimization loop keeps iterating until the global lower and upper bounds are close enough to satisfy the optimality criterion $\epsilon_2$ (with $\epsilon_2 > \epsilon_1$). In case (b), the algorithm stops and the optimal solution is the best solution found.
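The interplay of the bounds can be illustrated on a deliberately tiny concave-cost problem (all numbers illustrative). Note that in this minimization toy the piecewise linear underestimate yields the lower bound and feasible points yield upper bounds; in the paper's NPV maximization the roles are reversed.

```python
# Tiny illustration of the bound-and-refine logic: minimize f(x1) + f(x2)
# s.t. x1 + x2 = d, 0 <= xi <= cap, with f(x) = x**0.6 (economies of scale).
f = lambda x: x**0.6
d, cap = 7.0, 5.0

def phi(x, bp):
    """Piecewise linear underestimate of f on the breakpoint list bp."""
    for a0, a1 in zip(bp, bp[1:]):
        if a0 <= x <= a1:
            t = (x - a0) / (a1 - a0)
            return (1 - t) * f(a0) + t * f(a1)

def solve_approx(bp):
    """Minimize phi(x1) + phi(d - x1): a piecewise linear objective attains
    its minimum at a vertex, i.e. x1 equal to a breakpoint or to d minus a
    breakpoint, clipped to the feasible range [d - cap, cap]."""
    cands = {min(max(x, d - cap), cap) for x in bp + [d - b for b in bp]}
    x1 = min(cands, key=lambda x: phi(x, bp) + phi(d - x, bp))
    return x1, phi(x1, bp) + phi(d - x1, bp)

bp = [0.0, cap]                                   # start from the secant
best_ub = float("inf")
for _ in range(60):
    x1, lb = solve_approx(bp)                     # valid global lower bound
    best_ub = min(best_ub, f(x1) + f(d - x1))     # feasible -> upper bound
    if best_ub - lb <= 1e-6:
        break
    for i, (a0, a1) in enumerate(zip(bp, bp[1:])):
        if a0 < x1 < a1 or a0 < d - x1 < a1:      # bisect an active interval
            bp.insert(i + 1, 0.5 * (a0 + a1))
            break
    else:
        break                                     # incumbent on breakpoints
print(round(best_ub, 3), round(lb, 3))
```

As in the BRO strategy, each pass tightens the approximation only where the incumbent solution lies, so the two bounds close on each other without a uniform, expensive discretization.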
+---PAGE_BREAK---
+
+## 5. RESULTS AND DISCUSSION
+
+In order to illustrate the application of the MINLP model and the proposed optimization algorithm, three examples are considered in this section. Example 1 deals with a real-size illustrative problem for optimizing the supply chain network design for a new shale gas exploitation area covering more than 150,000 km². In this case, a different production profile is assumed for each potential site where the wells are drilled. However, the gas "wetness" (or hydrocarbon composition) is assumed to be the same in each well pad. In turn, Example 2 is a variant of the previous case where the gas wetness becomes dependent on the well location. The aim of the second example is twofold: (a) find out how the gas wetness distribution affects the drilling strategy, and (b) highlight the contribution of the hydrocarbons other than methane to the economics of the project. The third example introduces variations in the pipeline pressures in order to show changes in the optimal solution. Finally, a real-world case study of the U.S. shale gas industry is tackled at the end of this section.
+
+---PAGE_BREAK---
+
+**Figure 7.** Nodes of the supply chain network superstructure for Examples 1, 2 and 3.
+
+## 5.1 Example 1: The same shale gas wetness in all the wells.
+
+Consider the shale gas supply chain superstructure whose nodes are shown in Figure 7. It comprises nine potential sites for drilling wells (i1...i9), eight potential sites for junction/compression nodes (j1...j8), three possible sites for processing plant installation (p1...p3), three methane demand nodes (k1...k3), three ethane demand nodes (l1...l3), and three freshwater sources (f1...f3). The Cartesian coordinates of each site (in km) are given in Table 1. Distances between nodes for pipeline length and water transportation calculations are measured in the Euclidean norm.
+
+**Table 1.** Cartesian Coordinates of Problem Nodes (in km)
+
+Well pads and junction nodes:

| | i1 | i2 | i3 | i4 | i5 | i6 | i7 | i8 | i9 | j1 | j2 | j3 | j4 | j5 | j6 | j7 | j8 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| x | 0 | 50 | 100 | 0 | 50 | 100 | 0 | 50 | 100 | 25 | 75 | 125 | 50 | 100 | 25 | 75 | 125 |
| y | 0 | 0 | 0 | 50 | 50 | 50 | 100 | 100 | 100 | 25 | 25 | 25 | 50 | 50 | 75 | 75 | 75 |
+
+Processing plants, demand nodes and freshwater sources:

| | p1 | p2 | p3 | k1 | k2 | k3 | l1 | l2 | l3 | f1 | f2 | f3 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| x | 75 | 125 | 175 | 145 | 145 | 145 | 175 | 75 | 100 | 150 | 100 | 25 |
| y | 75 | 25 | 75 | 15 | 25 | 100 | 75 | 50 | -50 | -50 | 150 | 125 |
+
+The planning horizon comprises 40 time periods (quarters), and the annual rate considered for discounting cash flows is 13.5%. The methane price is assumed to be seasonal, with a base price of $142.86/Mm³ for periods t1, t5, t9,..., and seasonality factors of 1.10 for periods t2, t6, t10,...; 1.25 for t3, t7, t11,...; and 1.10 for t4, t8, t12,... The shale gas cost is fixed at $35.71/Mm³, the price of liquid ethane is $329.48/ton, and the other hydrocarbons (heavier than ethane) are liquefied petroleum gases (propane, butanes and pentanes) separately sold at $749.56/ton (more than double the ethane price).
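For concreteness, the price seasonality and discounting described above can be written as below; the per-quarter compounding convention $(1+r)^{t/4}$ is an assumption for illustration, since the paper does not state it explicitly.

```python
# Seasonal methane price and quarterly discount factors, as described above.
base_price = 142.86                   # $/Mm3 in quarters t1, t5, t9, ...
season = [1.00, 1.10, 1.25, 1.10]     # factors for t1..t4, repeating
annual_rate = 0.135

def methane_price(t):                 # t = 1, 2, 3, ...
    return base_price * season[(t - 1) % 4]

def discount(t):
    # Assumed convention: annual rate compounded over t/4 years.
    return 1.0 / (1.0 + annual_rate) ** (t / 4.0)

print(methane_price(3))   # peak-season quarter: 142.86 * 1.25
```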
+
+The liquid ethane density is 0.546 ton/m³, while the LPG density is averaged at 0.600 ton/m³. Maximum methane demands are 10, 5 and 15 MMm³/day for nodes k1, k2 and k3, while maximum ethane demands
+---PAGE_BREAK---
+
+at nodes l1, l2 and l3 are 2500, 2000 and 1500 ton/day, respectively. LPG maximum demands are 3000 ton/day at every plant node *p*.
+
+Freshwater availability is also assumed to be seasonal, with reference values of 250, 80 and 190 Mm³/quarter for sources f1, f2 and f3, respectively, and seasonality factors of 1.20 for periods t1, t5, t9,...; 1.00 for periods t2, t6, t10,...; 0.80 for t3, t7, t11,...; and 1.10 for t4, t8, t12,... Every individual well requires 20 Mm³ of freshwater to be drilled and hydraulically fractured, regardless of its location. Moreover, no more than three wells can be drilled in a single location during one quarter, and 20 wells is the maximum number permitted for a single well-pad. Overall, a total of 180 wells can be drilled over the time horizon.
+
+The shale gas pressure at well-pads is set to 2.1 MPa, compressors at junction nodes increase the shale gas pressure from 1.4 to 2.1 MPa, processing plants receive the shale gas at 1.4 MPa, while compressors increase the methane pressure from 4.0 to 6.0 MPa. Finally, methane is delivered at demand nodes at 4.0 MPa.
+
+Regarding the cost of processing plants, wells and compressors, economies of scale functions of the form $C(x) = c x^r$ are used, with $c = \text{MM}\$210$, $\text{MM}\$5$, $\text{MM}\$0.011150$, and $r = 0.60, 0.60, 0.77$, respectively. The units of the size variables are MMm³/day for plants, wells for drilling/fracturing, and kW for compressors. For costing pipelines, a function $C(l,D) = c l D^r$ is used, with $c = \text{MM}\$0.125594$, $r = 0.60$, $l$ (length) measured in km and $D$ (diameter) in inches. The same function is used regardless of the product transported (liquid or gas) and the nodes being joined. For instance, the cost of a pipeline of 10 inches in diameter and 100 km in length is MM\$50.
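These cost functions can be collected in a short sketch (units as stated above); the last line reproduces the MM$50 pipeline example from the text.

```python
# Equipment cost functions with economies of scale, as specified above.
def plant_cost(x):       return 210.0 * x**0.60        # x in MMm3/day -> MM$
def drilling_cost(n):    return 5.0 * n**0.60          # n wells -> MM$
def compressor_cost(kw): return 0.011150 * kw**0.77    # kW -> MM$
def pipeline_cost(l, D): return 0.125594 * l * D**0.60 # l in km, D in inches -> MM$

# A 10-inch, 100-km pipeline costs MM$50, as stated in the text.
print(round(pipeline_cost(100.0, 10.0), 2))
```

The exponents below one are what produce the economies of scale: doubling any equipment size less than doubles its cost.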
+
+The shale gas productivity (in MMm³/day) at every well is modeled as a decreasing function of the well age *t* (see Figure 2) with the form $P(t) = k_i\, t^{-0.37}$, for *t* = 1...40. The constant $k_i$ is 0.0806 for wells drilled in locations *i* = i1, i4; 0.0732 for *i* = i2, i5, i7; 0.0659 for *i* = i3, i6, i8; and 0.0586 for *i* = i9. Finally, the shale gas composition (independent of the well location) and its water content are given in the second column of Table 2. Notice the relatively high fraction of wet gas components (about 25%, with half of it being ethane).
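A sketch of the decline curve follows; the exponent is taken as $-0.37$ so that the profile actually decreases with well age, as the decline-curve description requires, and the conversion of MMm³/day to a quarterly total (~91.3 days per quarter) is a simplifying assumption for illustration.

```python
# Well productivity decline (MMm3/day) as a function of well age t (quarters).
def productivity(k_i, t):
    return k_i * t**-0.37     # negative exponent: output declines with age

k = {"i1": 0.0806, "i2": 0.0732, "i3": 0.0659, "i9": 0.0586}  # subset shown

# Rough cumulative gas from one well at i1 over the 40-quarter horizon,
# assuming ~91.3 days per quarter (a simplifying assumption):
days_per_quarter = 365.25 / 4
total = sum(productivity(k["i1"], t) for t in range(1, 41)) * days_per_quarter
print(round(total, 1))   # MMm3 over the 10-year horizon
```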
+---PAGE_BREAK---
+
+**Table 2.** Shale Gas Water Content (kg/MMm³) and Composition (Molar % in Dry Basis)
+
+For Example 1 a single composition applies to all pads (first data column); for Example 2 the composition varies by well pad (columns i1–i9).

| | Example 1 | i1 | i2 | i3 | i4 | i5 | i6 | i7 | i8 | i9 |
|---|---|---|---|---|---|---|---|---|---|---|
| H2O (kg/MMm³) | 615 | 615 | 615 | 615 | 615 | 615 | 615 | 615 | 615 | 615 |
| N2 (mole %) | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 |
| CO2 (mole %) | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 |
| CH4 (mole %) | 74.6 | 87.6 | 83.6 | 80.6 | 82.6 | 80.6 | 77.6 | 78.6 | 75.6 | 74.6 |
| C2H6 (mole %) | 12.8 | 5.8 | 7.8 | 8.8 | 9.8 | 9.8 | 11.8 | 10.8 | 12.8 | 12.8 |
| C3H8 (mole %) | 7.6 | 3.6 | 4.6 | 5.6 | 4.6 | 5.6 | 5.6 | 6.6 | 6.6 | 7.6 |
| i-C4H10 (mole %) | 1.2 | 0.5 | 0.7 | 1.2 | 0.5 | 0.8 | 1.1 | 0.7 | 0.8 | 1.2 |
| n-C4H10 (mole %) | 0.8 | 0.2 | 0.3 | 0.8 | 0.2 | 0.2 | 0.9 | 0.3 | 0.7 | 0.8 |
| i-C5H12 (mole %) | 0.5 | 0.2 | 0.5 | 0.5 | 0.2 | 0.7 | 0.5 | 0.7 | 1.0 | 0.5 |
| n-C5H12 (mole %) | 0.5 | 0.1 | 0.5 | 0.5 | 0.1 | 0.3 | 0.5 | 0.3 | 0.5 | 0.5 |
+
+After implementing the BRO solution algorithm for this example, the optimal design for the shale gas supply chain is depicted in Figure 8, and the strategic drilling plan yields an NPV of MM$1664.48. The optimal solution determines that only one shale gas processing plant is installed, at site *p*1, with a maximum capacity of 6.594 MMm³ of shale gas per day and a total cost of MM$651.2. Due to the economies of scale, the plant and all the pipelines are installed in the first period of the time horizon, with no expansions planned over the first ten years. Regarding shale gas compression power at junction nodes, 1236 kW, 706 kW and 818 kW are installed at nodes *j*4, *j*5 and *j*6, respectively.
+
+The selected destinations for methane and ethane are nodes *k*3 and *l*2, respectively. Methane is supplied by a gas pipeline of 74.33 km in length and 17 ½ inches in diameter (or the closest larger commercial gas pipeline diameter), requiring a compressor of 2428 kW. In turn, ethane is transported through a liquid pipeline of 5 ¾ inches in diameter. The maximum flow for both pipelines is reached in quarter *t*7, when the plant is operated at full capacity to produce methane at a rate of 4.915 MMm³/day and ethane at 1130 ton/day (see Figure 9). The production level at the plant remains high for another six quarters, until the maximum number of wells at every pad (20) is reached.
+---PAGE_BREAK---
+
+**Figure 8.** Optimal design for the shale gas supply chain network of Example 1.
+
+**Figure 9.** Amount of methane and ethane produced during the first four years in the optimal solution of Example 1.
+
+One of the important features of the model is the ability to generate an optimal drilling strategy so as to keep the level of production well balanced over the entire time horizon (see Figure 9). In this way, plant, compressor and pipeline sizes can be smaller than those needed when a very intensive drilling plan is applied.
+---PAGE_BREAK---
+
+**Figure 10.** Optimal drilling strategy for Example 1.
+
+The optimal drilling plan is depicted in Figure 10. The height of each single-colored column (which can be 0, 1, 2 or 3) represents the number of wells drilled at each location in every period, recalling that at most three wells can be drilled per pad in a single time period. The drilling plan is developed over the first three years of the planning horizon, and two phases can easily be distinguished: (1) an intensive drilling phase, and (2) a flow maintenance phase. The first phase covers the first five quarters, and its main objective is to drill and fracture as many wells as possible, since there are no wells at the initial time. However, this strategy is partially limited by the water availability, which is scarce in periods *t*2 and *t*3. Even under these circumstances, the model tends to rapidly increase the shale gas production, focusing on the most productive regions. The second drilling phase takes place
+---PAGE_BREAK---
+
+during the following six quarters, and seeks to maintain a stable flow of shale gas in every pipeline until
+the maximum number of wells (20) is reached in every well-pad.
+
+Overall, the optimal strategy yields a positive net present value of MM$1664.48, with a total investment in period *t*1 amounting to MM$1077.60. Of the initial investment, 60% corresponds to the gas processing plant, 30% to pipeline installation, 8% to well drilling and fracturing costs, 1% to compressors, and less than 1% to water acquisition and transportation charges. The discounted payback period of the project is 3 years. Most of the project revenues come from LPG sales (50%), followed by methane (34%) and finally ethane (16%). The next example analyzes how the solution changes when the shale gas wetness depends on the well location, with the gas being much drier in some regions.
+
+**5.2 Example 2: Variable shale gas wetness.**
+
+The second example is a variant of Example 1 in which the shale gas wetness depends on the well site. The shale gas composition with regard to the location is presented in Table 2, where it can be seen that the fraction of wet gas components is less than 25% in many well pads. The only site producing shale gas with exactly the same composition as in Example 1 is node i9. The shale gas becomes drier in the direction of node i1. In fact, the methane mole percentage increases from 74.6% (node i9) to 87.6% (node i1). All the other data remain unchanged. Regarding the MINLP model, we use the modified version presented in Section 3.4.1, which preserves linearity in the constraints under the assumption that a single processing plant is installed. Comparing the solution with the one obtained in Example 1, it can be concluded that the single-plant assumption is not overly restrictive for our case study.
+
+In fact, the optimal network configuration obtained is exactly the same as for Example 1. This is an expected result, since the total amount of shale gas produced in every pad is the same. There are only minor variations in the pipeline diameters, compressor power and processing plant size. In particular, the plant capacity is reduced from 6.594 MMm³/day to 6.222 MMm³/day due to a more extended drilling
+---PAGE_BREAK---
+
+strategy. The differences in the shale gas composition, however, definitely affect the drilling strategy, as well as the economics of the project. As shown in Figure 11, the optimal drilling strategy now tends to prioritize the pads producing wetter gas (i.e., those yielding a higher amount of heavier hydrocarbons). Wells drilled in less attractive pads (i1, i2, i3, i4, i5) are left for later periods (t8 to t13). As a result, the overall drilling strategy now takes 13 periods instead of 11.
+
+**Figure 11.** Optimal drilling strategy for Example 2.
+
+From Figure 12 it can be seen that the production of methane is extended through a longer period of time, but the amount of ethane and heavier hydrocarbons is significantly lower than in Example 1. In summary, the optimal strategic plan involves an initial investment of MM$1054.67, while the net
+---PAGE_BREAK---
+
+present value is MM$1202.54, 27.8% below the net present value for Example 1. As a consequence, the discounted payback period increases from 3 to 3.7 years. The economic differences can be clearly noticed in the product sales income distribution. Given the new shale gas composition of each well-pad, LPG and methane sales each represent 43% of the total income, while ethane accounts for only 14%, versus 50%, 34% and 16%, respectively, in the previous example.
+
+**Figure 12.** Amount of methane and ethane produced during the first four years in the optimal solution of Example 2.
+
+### 5.3 Example 3: Changes in gas pipeline pressures.
+
+By assumption (9), all the gas pressures are specified at fixed values before solving the model. In Examples 1 and 2, the shale gas pipeline pressures vary from 2.1 MPa (inlet) to 1.4 MPa (outlet), while transmission pipelines transport dry gas from 6.0 MPa down to 4.0 MPa. In both cases, gas compressors are assumed to operate at a common compression ratio of 1.5 (a typical value for centrifugal compressors). Even though determining the optimal pressure for every pipeline is beyond the scope of this model, Example 3 is intended to show how the results are affected by changes in the pressure values. Example 3 is a variation of Example 1 in which both shale gas and dry gas compressors operate at a pressure ratio of 2, while shale gas at wellbores is delivered at a pressure of 2.8 MPa instead
+---PAGE_BREAK---
+
+of 2.1 MPa. More precisely, gathering pipelines transport shale gas from 2.8 MPa to 1.4 MPa, while dry
+gas is transported through transmission pipelines from 8.0 to 4.0 MPa.
+
+The main findings of this example are related to pipeline and compressor sizing, since the pipeline
+network structure, the processing plant size and the drilling strategy do not change in the optimal
+solution. As expected, pipeline diameters can be reduced at the expense of using higher power in the
+compressors. On the one hand, the pipeline diameters are reduced by 15.75% on average (from 12.38 to
+10.43 inches), with the gathering pipelines reduced by 15.91% (from 16.59 to 13.95 inches) and the gas
+transmission pipeline by 14.71% (from 17.33 to 14.78 inches). On the other hand, shale gas compressors
+(a total of three, at junction nodes j4, j5 and j6) increase their total power by a factor of 1.71 (from
+2760.47 kW to 4723.31 kW), while the only dry gas compressor at the outlet of the processing plant has
+a total power of 4189.29 kW (3000 kW installed in period t1 and the remaining 1189.29 kW in period
+t4), which implies a 72.6% increase in the methane compressor power compared to Example 1.
+
+Overall, the pipeline installation cost is reduced from MM$321.35 to MM$291.10, the investment in
+compressor stations increases from MM$11.14 to MM$17.56, while the NPV of the project is improved
+by 1.15%. Although the difference is rather small, future work will focus on determining the optimal
+pressures for the gas pipelines.
+
+**5.4 Computational Results**
+
+The most time-consuming step in the BRO algorithm is the solution of the MILP approximation of
+the full-size problem, i.e. the global piecewise linear approximation. As proposed in Section 4.4, we
+initialize the algorithm with a one-piece linear underestimation for all the concave cost terms for periods
+$t > 10$, while the starting piecewise linearization involves two to four intervals for $t \le 10$ (two for
+pipelines and compressors, four for processing plants). Even under those conditions, the size of the first
+MILP approximation of Example 1 is rather large: 51,880 equations, 47,643 continuous variables, and
+3,490 binary variables (2,343 after pre-processing), as shown in Table 3. Of the binary variables, 1,440
+determine the number of wells to drill in every period, while the remaining ones are the binaries of the $\delta$
+piecewise linearization. Even though the relaxation is somewhat tight (16.8% integrality gap), the first
+---PAGE_BREAK---
+
+MILP takes almost 5 hours of CPU time (using GAMS/GUROBI 5.5.0[25] on an Intel Core i7 CPU, 2.93 GHz, 12 GB RAM, with 6 parallel threads) to solve the problem with an optimality gap of 0.25%. Having solved this problem, the first global upper bound is found: MM$1709.04. Next, this solution is used as the initial point of a non-global MINLP optimization algorithm (GAMS/DICOPT 24.1.3, with GUROBI 5.5.0 as the MILP solver and CONOPT 3.15 as the NLP solver)[25] which, after 4 major iterations and 190 CPU seconds, finds the first optimized integer solution, yielding an NPV = MM$1655.83 (3.21% global optimality gap).
+
+**Table 3.** Computational Results for Example 1.
+
| Outer Loop # | Inner Loop # | MILP Cont. Var. | MILP Bin. Var. | MILP Eq. | MILP CPUs | MINLP Cont. Var. | MINLP Bin. Var. | MINLP Eq. | Major Iters. | MINLP CPUs | Red. Opt. Gap % ($\epsilon_1$) | Global Opt. Gap % ($\epsilon_2$) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 1 | 1 | 47,643 | 3,490 | 51,880 | 17,530 | 31,633 | 1,440 | 28,900 | 4 | 190 | 3.21 | 3.21 |
| | 2* | 21,344 | 3,737 | 24,578 | 14 | 5,087 | 1,440 | 5,532 | 3 | 20 | 2.12 | 2.94 |
| | 3* | 21,591 | 3,984 | 25,072 | 35 | 5,087 | 1,440 | 5,532 | 3 | 13 | 1.34 | 2.94 |
| | 4* | 21,817 | 4,210 | 25,524 | 44 | 5,087 | 1,440 | 5,532 | 4 | 21 | 0.98 | 2.88 |
| 2† | 1 | 47,644 | 3,491 | 51,883 | 30,922 | 31,633 | 1,440 | 28,900 | 5 | 364 | 2.27 | 2.27 |
| | 2* | 21,344 | 3,737 | 24,578 | 20 | 5,087 | 1,440 | 5,532 | 4 | 25 | 1.60 | 2.27 |
| | 3* | 21,591 | 3,984 | 25,072 | 17 | 5,087 | 1,440 | 5,532 | 3 | 7 | 1.14 | 2.27 |
| | 4* | 21,798 | 4,191 | 25,486 | 25 | 5,087 | 1,440 | 5,532 | 3 | 7 | 0.81 | 2.27 |
+
+* Network configuration is fixed
+
+† Plant cost estimation is refined
+
+At the next step, the BRO algorithm fixes the network configuration, and the inner loop starts to optimize the reduced MINLP problem, by successively refining piecewise linear approximations of the concave cost terms in the objective function. As observed in the second line of Table 3, the first reduced MILP problem has one half of the binaries of the full-size MILP approximation, while the number of
+---PAGE_BREAK---
+
+continuous variables and equations are cut down by factors of 7 and 5, respectively. In fact, the MILP approximations of the reduced problem are small, never requiring more than 60 s to find the optimal solution (0.25% optimality gap). From Table 3 it follows that the optimality gap of the reduced MINLP problem falls below $\epsilon_1 = 1\%$ after 4 iterations. At that point, the algorithm adds an integer cut removing the network configuration already tried, and refines the full-size MILP piecewise linearization based on the values of the variables at the best solution found.
+
+Even though the size of the MILP does not increase considerably in the second iteration of the outer loop, the time to find the optimal solution increases to 8.5 hours. Figure 13 shows the progress of the global upper bound, the best solution found, and the upper bound for the solution of the reduced problem over two iterations of the outer loop of the BRO algorithm. Overall, after solving eight MILP and eight MINLP models in 13.7 h of CPU time, the global optimality gap is reduced below $\epsilon_2 = 2.5\%$.
+
+**Figure 13.** Progress of the global upper bound, the reduced problem upper bound and the best solution found in the solution of Example 1 through the BRO algorithm.
+---PAGE_BREAK---
+
+Regarding Example 2, the alternative formulation presented in Section 3.4.1 slightly increases the size of the models compared to Example 1. In the first iteration, the global MILP approximation has 51,998 constraints, 47,643 continuous variables, and 3,493 integer variables, taking more than 12 h of computational time to reduce the optimality gap below 1.00%. After two iterations of the outer loop, in more than 24 h of computational time, the global optimality gap is 7.5%.
+
+## 5.5 Real-world case study: Shale Gas Development Project
+
+A shale gas production company is interested in expanding the drilling and production activity in the Marcellus shale play. The company has determined more than 150 potential sites for well pads, which can be grouped into nine regions. All the shale gas produced in each region is collected by a low-pressure trunk pipeline that transports the gas to a nearby compressor station. Finally, the raw gas is dehydrated and sent through high-pressure transmission pipelines tied to midstream lines owned by third-party distribution companies. Pipeline construction and compressor installation require considerable lead times (more than two years), which are considered in the formulation. In addition, the company has the possibility of drilling the wells and keeping them closed for some periods until the pipelines collecting the shale gas become available. Such an assumption requires a model modification shown in the Appendix. Moreover, a maximum of four wells per pad can be drilled and completed in a single period (up to twelve wells in at most three pads per period), and each pad should not contain more than ten wells. Fourteen freshwater reservoirs are available in the area. For confidentiality reasons, further details on the problem cannot be given.
+
+Since the shale gas is dry (95% mol methane), the study does not account for gas processing and fractionation plants. However, the large number of pads yields a large-scale MINLP model with 4,815 discrete variables, 12,226 continuous variables and 16,815 constraints. After two major iterations of the BRO algorithm and 71,000 CPU seconds of computational time, the best solution found yields an NPV of MM$815, with a global optimality gap of 8.2%. The most convenient regions to be exploited during the following 10 years are regions 2 and 6, as seen in Figure 14, where a total of 22 and 18 pads are
+---PAGE_BREAK---
+
+constructed, respectively. Gathering pipelines of 7 to 10 inches in diameter collect the shale gas in each region, while trunk pipelines of 23 to 24 inches, and transmission pipelines of 12 inches (to delivery point 1) and 18 inches (to delivery point 2) are planned. Finally, a compressor station with a total power of 32,000 kW should be installed. Freshwater for drilling and fracturing is supplied by three of the available reservoirs. Figure 15 shows the drilling strategy for the 380 selected wells of regions 2 and 6, while Figure 16 illustrates the shale gas flows in major pipelines for periods 14 to 40, showing the trend of the model toward maximizing the pipeline utilization by maintaining a stable flow over time.
+
+**Figure 14.** Schematic representation of the shale gas supply chain superstructure for the real-world case.
+---PAGE_BREAK---
+
+**Figure 15.** Optimal drilling strategy for the real-world case study.
+
+**Figure 16.** Shale gas flows in the optimal solution of the real-world case study.
+---PAGE_BREAK---
+
+## 6. CONCLUSIONS AND FURTHER WORK
+
+A new MINLP model for the strategic planning of the shale gas supply chain has been presented in this work. The proposed formulation determines many of the critical decisions to be simultaneously optimized in the development of a shale gas project: the drilling and fracturing plan over time; the location, sizing and expansion of gas processing and fractionation plants; the section, length and location of gas and liquid pipelines (the network configuration); the power of gas compressors; and the amount of freshwater used for well drilling and fracturing, transported from available reservoirs, all so as to maximize the economic results of the project. All the problem conditions such as flow balances, equipment sizing and expansions are modeled as linear constraints, while concave terms arise in the objective function due to the economies of scale determining the cost of plants, pipelines and compressors. Moreover, through a simple adaptation, the model can also account for shale gas composition variations depending on the geographic location of the wells.
+
+Since the model becomes intractable for commercial global optimizers, a two-level decomposition algorithm successively refining piecewise linear approximations of the concave cost terms and solving reduced MINLP problems was implemented. The use of the $\delta$-piecewise linear formulation[30] yields good relaxations of the MILP models, while the logarithmic approximation recently proposed by Cafaro and Grossmann[29] avoids numerical difficulties in the execution of the non-global MINLP solver. The proposed Branch-Reduce-Optimize (BRO) algorithm proves to be a useful tool for solving large-scale supply chain design problems in reasonable CPU times, although reducing the global optimality gap below 2.5% is quite hard for more challenging problems.
+
+Results on realistic instances show the importance of heavier hydrocarbons to the economics of the project, and how the optimal planning of the drilling/fracturing strategy maximizes the utilization of gas processing/transportation infrastructure and improves the use of water resources. A real-world case study of the shale gas industry in north-western Pennsylvania involving more than 150 potential sites for well-pads was successfully solved. The solution obtained is of particular importance for industrial
+---PAGE_BREAK---
+
+decision-makers, who cannot readily optimize the drilling strategy together with the pipeline configuration and compressor sizing so as to obtain a higher profit.
+
+Future work will focus on the optimization of the gas pipeline pressures, as well as the consideration of stochastic conditions for products demands, gas prices, water availability and shale gas production profiles at the wells.
+
+## ACKNOWLEDGMENTS
+
+Financial support from Fulbright Commission Argentina, CONICET and CAPD at Carnegie Mellon University is gratefully acknowledged. We are also most grateful to EQT Corporation for the case study provided to us for Section 5.5.
+
+## NOTATION
+
+**Sets**
+
+| Set | Description |
+| --- | --- |
+| $F$ | Freshwater sources |
+| $I$ | Well-pads |
+| $J$ | Junction nodes |
+| $K$ | Gas demand points |
+| $P$ | Gas processing and fractionation plants |
+| $L$ | Ethane demand points |
+| $T$ | Time periods |
+
+**Parameters**
+
+| Parameter | Description |
+| --- | --- |
+| $dr$ | Annual discount rate |
+| $ethdem_{l,t}$ | Maximum demand for ethane at node $l$ in period $t$ |
+| $ethp_t$ | Unit price of ethane in period $t$ (forecast) |
+| $fix_f$ | Unit cost for freshwater acquisition from source $f$ |
+| $fwa_{f,t}$ | Amount of freshwater available from source $f$ during period $t$ |
+
+---PAGE_BREAK---
+
+| Parameter | Description |
+| --- | --- |
+| $gasdem_{k,t}$ | Maximum demand for methane at node $k$ in period $t$ |
+| $gasp_t$ | Unit price of methane in period $t$ (forecast) |
+| $gc_i$, $ec_i$, $lc_i$ | Methane, ethane and LPG composition of the shale gas produced in pad $i$ |
+| $ks$, $kd$, $kp$, $kc$ | Base cost of plants, wells, pipelines and compressors in economy-of-scale functions |
+| $l_{i,j}$ | Distance between nodes $i$ and $j$ |
+| $lpgp_t$ | Average unit price of LPGs in period $t$ (forecast) |
+| $\bar{n}_i$ | Upper bound on the number of wells to drill in pad $i$ during one period |
+| $\bar{N}_i$ | Upper bound on the number of wells to drill in pad $i$ over the planning horizon |
+| $pw_{i,a}$ | Daily shale gas production of a well of age $a$ (periods) drilled in pad $i$ |
+| $rf_i$ | Water reuse factor in well-pad $i$ |
+| $sep^{max}$ | Upper bound on the shale gas processing capacity of a single plant |
+| $s_g$ | Specific gravity in standard conditions |
+| $shgp_t$ | Unit cost of shale gas in period $t$ |
+| $var_f$ | Unit cost for freshwater transportation from source $f$ |
+| $\tau$, $\varphi$, $\tau_c$ | Lead times for installing gas plants, pipelines and compressors, in quarters |
+
+**Binary Variables**
+
+| Variable | Description |
+| --- | --- |
+| $w_p$ | = 1 if the processing plant $p$ is operative during the planning horizon |
+| $y_{i,n,t}$ | = 1 if $n$ wells are drilled at pad $i$ during period $t$ |
+
+**Continuous Variables**
+
+| Variable | Description |
+| --- | --- |
+| $DFP_{i,j,t}$ | Diameter of the gas pipeline installed between $i$ and $j$ in period $t$ |
+| $DGP_{j,p,t}$ | Diameter of the gas pipeline installed between $j$ and $p$ in period $t$ |
+| $DTP_{p,k,t}$ | Diameter of the gas pipeline installed between $p$ and $k$ in period $t$ |
+| $DLP_{p,l,t}$ | Diameter of the liquid pipeline installed between $p$ and $l$ in period $t$ |
+| $EComp_{j,t}$ | Ethane composition of the shale gas flow at the outlet of node $j$ during period $t$ |
+---PAGE_BREAK---
+
+| Variable | Description |
+| --- | --- |
+| $FP_{i,j,t}$ | Shale gas flow from well-pad $i$ to junction $j$ during period $t$ |
+| $FPE_{i,j,t}$ | Individual ethane gas flow from well-pad $i$ to junction $j$ during period $t$ |
+| $FPG_{i,j,t}$ | Individual methane flow from well-pad $i$ to junction $j$ during period $t$ |
+| $FPL_{i,j,t}$ | Individual LPG flow from well-pad $i$ to junction $j$ during period $t$ |
+| $FPCap_{i,j,t}$ | Total shale gas transportation capacity between $i$ and $j$ in period $t$ |
+| $FPFlow_{i,j,t}$ | Shale gas transportation capacity installed between $i$ and $j$ in period $t$ |
+| $GComp_{j,t}$ | Methane composition of the shale gas flow at the outlet of node $j$ during period $t$ |
+| $GP_{j,p,t}$ | Shale gas flow from junction node $j$ to plant $p$ during period $t$ |
+| $GPCap_{j,p,t}$ | Total shale gas transportation capacity between $j$ and $p$ in period $t$ |
+| $GPFlow_{j,p,t}$ | Shale gas transportation capacity installed between $j$ and $p$ in period $t$ |
+| $JCInst_{j,t}$ | Compression power installed at node $j$ in period $t$ |
+| $JCP_{j,t}$ | Total compression power at $j$ in period $t$ |
+| $LComp_{j,t}$ | LPG composition of the shale gas flow at the outlet of node $j$ during period $t$ |
+| $LP_{p,l,t}$ | Ethane flow from plant $p$ to demand point $l$ during period $t$ |
+| $LPCap_{p,l,t}$ | Total ethane transportation capacity between $p$ and $l$ in period $t$ |
+| $LPFlow_{p,l,t}$ | Ethane transportation capacity installed between $p$ and $l$ in period $t$ |
+| $N_{i,t}$ | Number of wells drilled in pad $i$ during period $t$ |
+| $NP_{p,t}$ | Daily production of LPG in plant $p$ during period $t$ |
+| $PCInst_{p,t}$ | Compression power installed at plant $p$ in period $t$ |
+| $PCP_{p,t}$ | Total compression power at $p$ in period $t$ |
+| $TP_{p,k,t}$ | Dry gas (methane) flow from plant $p$ to demand point $k$ during period $t$ |
+| $TPCap_{p,k,t}$ | Total methane transportation capacity between $p$ and $k$ in period $t$ |
+| $TPFlow_{p,k,t}$ | Methane transportation capacity installed between $p$ and $k$ in period $t$ |
+| $SepCap_{p,t}$ | Total shale gas processing capacity of plant $p$ in period $t$ |
+| $SepInst_{p,t}$ | Daily shale gas processing capacity installed in plant $p$ at period $t$ |
+| $SP_{i,t}$ | Daily shale gas production of well-pad $i$ during period $t$ |
+---PAGE_BREAK---
+
+$$SP_{i,t}^E \quad \text{Daily ethane production of well-pad } i \text{ during period } t$$
+
+$$SP_{i,t}^G \quad \text{Daily methane production of well-pad } i \text{ during period } t$$
+
+$$SP_{i,t}^L \quad \text{Daily LPG production of well-pad } i \text{ during period } t$$
+
+$$WS_{f,i,t} \quad \text{Amount of freshwater supplied from source } f \text{ to pad } i \text{ during period } t$$
+
+# APPENDIX A: Pipeline Flow, Compressor Power and Cost Calculations
+
+## Gas Pipeline Diameter, Flow and Cost
+
+Similar to Durán and Grossmann,[8] the head loss in a gas pipeline segment *i-j* with diameter $D_{i,j}$ (in m), either transporting raw gas or methane, is assumed to be given by the Weymouth[21] flow equation (A1).
+
+$$D_{i,j} = l_{i,j}^{\alpha} (P_i^2 - P_j^2)^{-\alpha} (B_{i,j})^{\alpha} \quad (A1)$$
+
+where
+
+$$B_{i,j} = s_g T [P_o / (0.375 T_o)]^2 (Flow_{i,j})^2 \quad (A2)$$
+
+$s_g$ is the gas specific gravity relative to air (0.729 for shale gas, 0.554 for methane) at standard conditions ($P_o$ = 0.1013 MPa; $T_o$ = 288.9 K). $T$ is the average gas temperature, in this case fixed at 288.9 K, and $\alpha$ = 3/16. The input and output pressures ($P_i$, $P_j$ in MPa) are assumed to be known (see assumption 9 in Section 2), as well as the pipeline length $l_{i,j}$ (in km). By combining (A1) and (A2), the gas flow ($Flow_{i,j}$ in MMm³/day) can be expressed by eq. (A3).
+
+$$Flow_{i,j} = \left\{ \frac{(P_i^2 - P_j^2)}{s_g T [P_o / (0.375 T_o)]^2} \right\}^{1/2} l_{i,j}^{-1/2} D_{i,j}^{1/(2\alpha)} \quad (A3)$$
+
+As a result, if the inlet and outlet pressures are given, the gas flow is a function of the pipeline diameter to the power of 2.667, as shown in eq. (A4).
+
+$$Flow_{i,j} = k_{i,j} l_{i,j}^{-0.5} D_{i,j}^{2.667} \quad (A4)$$
+---PAGE_BREAK---
+
+For shale gas pipelines transporting raw gas from 2.1 to 1.4 MPa, the value of $k_{i,j}$ is 115.35, or 0.006423 if the diameter is given in inches. On the other hand, dry gas pipelines operating from 6.0 MPa to 4.0 MPa show a value of $k_{i,j}$ equal to 378.06, or 0.02105 if the diameter is given in inches.
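
As a quick check, these constants can be recomputed directly from eq. (A3). The sketch below uses the pressures, specific gravities and reference conditions quoted in the text; it is illustrative, not part of the model.

```python
import math

def weymouth_k(p_in, p_out, s_g, T=288.9, P_o=0.1013, T_o=288.9):
    """Constant k in Flow = k * l**-0.5 * D**2.667 (Flow in MMm3/day,
    l in km, D in m), from the Weymouth relation in eq. (A3)."""
    return math.sqrt((p_in**2 - p_out**2) / (s_g * T * (P_o / (0.375 * T_o))**2))

# Raw shale gas line, 2.1 -> 1.4 MPa, s_g = 0.729:
k_raw = weymouth_k(2.1, 1.4, 0.729)       # ~115.35
# Dry gas line, 6.0 -> 4.0 MPa, s_g = 0.554:
k_dry = weymouth_k(6.0, 4.0, 0.554)       # ~378.06
# Same constant when the diameter is expressed in inches:
k_raw_in = k_raw * 0.0254**2.667          # ~0.0064 (the text quotes 0.006423)
```
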
+
+Finally, we use the economy of scale function (A5) to determine the cost of the gas pipeline i-j.
+
+$$ Cost_{i,j} = k_{pl,i,j} l_{i,j} D_{i,j}^{0.60} \qquad (A5) $$
+
+By substituting $D_{ij}$ with the variable $DP_{ij} = D_{ij}^{2.667}$, eqs. (A4) and (A5) yield (A6) and (A7), which are the equations actually used in the MINLP model.
+
+$$ Flow_{i,j} = k_{i,j} l_{i,j}^{-0.5} DP_{i,j} \qquad (A6) $$
+
+$$ Cost_{i,j} = k_{pl,i,j} l_{i,j} DP_{i,j}^{0.225} \qquad (A7) $$
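
The effect of the substitution can be sketched numerically: the flow becomes linear in $DP$, while the cost keeps a concave exponent of $0.60/2.667 \approx 0.225$. The diameter and segment length below are illustrative values, not case-study data.

```python
# Sketch of the variable change DP = D**2.667 behind eqs. (A6)-(A7).
# The sample diameter and segment length are illustrative only.
D = 0.5                      # m, sample pipeline diameter
l = 10.0                     # km, sample segment length
k = 115.35                   # raw-gas Weymouth constant from the text

DP = D**2.667                # transformed diameter variable
flow = k * l**-0.5 * DP      # eq. (A6): flow is now linear in DP

cost_exponent = 0.60 / 2.667          # ~0.225, the exponent in eq. (A7)
# The original cost term D**0.60 is recovered exactly as DP**cost_exponent:
assert abs(D**0.60 - DP**cost_exponent) < 1e-12
```
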
+
+## Liquid Pipeline Diameter, Flow and Cost
+
+To calculate liquid flows in a pipeline p-l, a mean velocity (normally equal to $v^{\max} = 1.5$ m/s) is assumed. That yields eq. (A8).
+
+$$ Flow_{p,l} = 86{,}400 \pi / 4 v^{\max} \rho D_{p,l}^2 \qquad (A8) $$
+
+where $Flow_{p,l}$ is given in ton/day, $D_{p,l}$ (diameter) in meters, 86,400 is the total number of seconds per day the pipeline remains operative, and $\rho$ is the liquid density, in ton/m³ (0.546 ton/m³ for liquid ethane).
+
+Using the concave cost function given in (A5), and substituting $D_{p,l}$ with the variable $DP_{p,l} = D_{p,l}^2$ yield eqs. (A9) and (A10), which are the equations finally used in the MINLP model.
+
+$$ Flow_{p,l} = k_{p,l} DP_{p,l} \qquad (A9) $$
+
+$$ Cost_{p,l} = k_{pl,p,l} l_{p,l} DP_{p,l}^{0.3} \qquad (A10) $$
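
A minimal numerical sketch of eqs. (A8) and (A9) for liquid ethane follows; the 0.3 m example diameter is an assumed value.

```python
import math

# Numerical sketch of eqs. (A8)-(A9) for a liquid ethane pipeline.
# Flow [ton/day] = 86400 * (pi/4) * v_max * rho * D**2, with D in m.
v_max = 1.5                  # m/s, assumed mean liquid velocity
rho = 0.546                  # ton/m3, liquid ethane density

k_liq = 86_400 * (math.pi / 4) * v_max * rho   # constant k_{p,l} in eq. (A9)

# Example: a 0.3 m line, i.e. DP = D**2 = 0.09 m^2 (assumed diameter)
flow = k_liq * 0.3**2        # ton/day
```
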
+---PAGE_BREAK---
+
+## Compression Power
+
+Since the compressors are assumed to be adiabatic, the power requirement of a compressor installed at node j ($CP_j$ in kW) can be calculated through eq. (A11). [8]
+
+$$
+F_j CP_j = (Pd_j / Ps_j)^b - 1 \quad (A11)
+$$
+
+where
+
+$$
+F_j = \left[ \frac{(\gamma - 1) \eta}{4.0426 T \gamma} \right] Flow_j^{-1}
+\quad
+(A12)
+$$
+
+$$
+b = z (\gamma - 1) / \gamma
+\quad (A13)
+$$
+
+$z$ is the gas compressibility factor (by the ideal gas assumption, $z = 1$), $\gamma$ is the heat capacity ratio (typically, $\gamma = 1.26$), $\eta$ is the compressor efficiency, and $T$ is the gas temperature at suction conditions ($T$ = 288.9 K). $Flow_j$ is given in MMm³/day. By assumption 9 (see Section 2), the compression ratio ($Pd_j / Ps_j$) both at junction and plant compressors is given (usually equal to 1.5). Hence, combining (A11), (A12) and (A13) yields eq. (A14), stating that the power requirement is linearly proportional to the gas flow (see eqs. (27) and (28) of the MINLP model).
+
+$$
+CP_j = \left[ \frac{(4.0426 T \gamma)}{(\gamma - 1) \eta} \right] \left[ \left( \frac{P d_j}{P s_j} \right)^{\frac{\gamma-1}{\gamma}} - 1 \right] Flow_j = kc_j Flow_j \quad (A14)
+$$
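
The constant $kc_j$ can be evaluated directly from eq. (A14). In the sketch below, the efficiency $\eta = 0.75$ is an assumed value (the text does not fix it); the other figures are those quoted above.

```python
# Sketch of the constant kc in eq. (A14), in kW per MMm3/day of gas.
T = 288.9        # K, suction temperature
gamma = 1.26     # heat capacity ratio
eta = 0.75       # ASSUMED adiabatic efficiency (not given in the text)
ratio = 1.5      # compression ratio Pd/Ps (assumption 9)

b = (gamma - 1) / gamma          # eq. (A13) with z = 1 (ideal gas)
kc = (4.0426 * T * gamma) / ((gamma - 1) * eta) * (ratio**b - 1)

# A station compressing 10 MMm3/day would then need about kc * 10 kW.
power_10 = kc * 10.0
```
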
+
+# APPENDIX B: Other Model Features
+
+## Delayed Production of a Well
+
+Some companies may drill, fracture and complete a non-conventional gas well but delay its production until the required infrastructure (pipelines, compressors, etc.) becomes available. In that case, the model is adapted by incorporating an integer variable accounting for the number of wells of pad $i$ that become productive at the beginning of period $t$ ($NP_{i,t}$). Then eq. (B1) is added to the formulation, and eq. (4) in the original model is replaced by (B2).
+
+$$
+\sum_{\tau \le t} NP_{i,\tau} \le \sum_{\tau < t} N_{i,\tau} \quad \forall i \in I, t \in T \qquad (B1)
+$$
+---PAGE_BREAK---
+
+$$ \sum_{\tau=1}^{t} NP_{i,\tau} pw_{i,t-\tau+1} = SP_{i,t} \quad \forall i \in I, t > 1 \qquad (B2) $$
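
The sum in eq. (B2) amounts to a convolution of the turn-on schedule with the per-well decline profile. The sketch below uses illustrative numbers for $pw$ and $NP$, not data from the case studies.

```python
# Sketch of eq. (B2): daily pad production as the sum of per-well decline
# profiles, shifted by the period each well batch is turned on.
pw = [100.0, 60.0, 40.0, 30.0, 25.0]   # pw[a-1]: output of a well of age a
NP = {1: 0, 2: 4, 3: 0, 4: 2}          # NP[t]: wells turned on in period t

def pad_production(t):
    """SP_t = sum over tau <= t of NP_tau * pw_{t - tau + 1} (eq. B2)."""
    total = 0.0
    for tau, n in NP.items():
        age = t - tau + 1              # age (in periods) of that batch
        if tau <= t and 1 <= age <= len(pw):
            total += n * pw[age - 1]
    return total

# Period 4: the 4 wells from t=2 are age 3 (40 each), the 2 new wells age 1.
sp4 = pad_production(4)   # 4*40 + 2*100 = 360
```
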
+
+## Cost of Rigs and Crew for Drilling New Wells
+
+If the cost of moving rigs, drilling crews and other resources from one pad to the other is significant, the model should be able to determine the period in which the crew arrives at a pad to start or continue drilling new wells. With that purpose, we incorporate a new binary variable $x_{i,t}$ that is equal to one if at least one well is drilled and fractured in pad *i* during period *t*. That is controlled by eqs. (B3) and (B4).
+
+$$ N_{i,t} \le \bar{N}_i x_{i,t} \quad \forall i \in I, t \in T \qquad (B3) $$
+
+$$ N_{i,t} \ge x_{i,t} \quad \forall i \in I, t \in T \qquad (B4) $$
+
+As a result, the cost of arriving at a well pad *i* to start or continue the drilling of new wells in period *t* is lower bounded by eq. (B5), and is included in the objective function (35).
+
+$$ RC_{i,t} \ge rigc_i (x_{i,t} - x_{i,t-1}) \quad \forall i \in I, t \in T \qquad (B5) $$
+
+Finally, if the total number of rigs (and/or drilling crews) available is *rigmax*, eq. (B6) imposes an upper bound on the number of pads where new wells are drilled during a single period.
+
+$$ \sum_{i \in I} x_{i,t} \le rigmax \quad \forall t \in T \qquad (B6) $$
+
+## LITERATURE CITED
+
+[1] U.S. Energy Information Administration (EIA). *Annual Energy Outlook 2012 with Projections to 2035*. Washington, DC: US Department of Energy, 2012.
+
+[2] U.S. Energy Information Administration (EIA). *Natural Gas Processing Plants in the United States: 2010 Update*. Washington, DC: US Department of Energy, 2011.
+---PAGE_BREAK---
+
+[3] Laiglecia JI, Lopez-Negrete R, Diaz MS, Biegler LT. A simultaneous dynamic optimization approach for natural gas processing plants. *Proceedings of Foundations of Computer Aided Process Operations (FOCAPO)*, 2012.
+
+[4] Ladlee J, Jacquet J. *The Implications of Multi-Well Pads in the Marcellus Shale*. *Research & Policy Brief Series*. Ithaca, NY: Cornell University's Community & Regional Development Institute (CaRDI), 2011.
+
+[5] Stark M, Allingham R, Calder J, Lennartz-Walker T, Wai K, Thompson P, Zhao S. *Water and Shale Gas Development*. *Leveraging the US experience in new shale developments*. Dublin, Ireland: Accenture, 2012.
+
+[6] Geoffrion AM, Graves GW. Multicommodity distribution system design by Benders decomposition. *Management Science*. 1974; 20: 822-844.
+
+[7] Melo MT, Nickel S, Saldanha-da-Gama F. Facility location and supply chain management – A review. *European Journal of Operational Research*. 2009; 196: 401-412.
+
+[8] Durán MA, Grossmann IE. A mixed-integer nonlinear programming algorithm for process systems synthesis. *AIChE J*. 1986; 32: 592-606.
+
+[9] Iyer RR, Grossmann IE, Vasantharajan S, Cullick AS. Optimal planning and scheduling of offshore oil field infrastructure investment and operations. *Ind. Eng. Chem. Res.* 1998; 37: 1380-1397.
+
+[10] Van Den Heever SA, Grossmann IE. An iterative aggregation/disaggregation approach for the solution of a mixed-integer nonlinear oilfield infrastructure planning model. *Ind. Eng. Chem. Res.* 2000; 39: 1955-1971.
+
+[11] Gupta V, Grossmann IE. An efficient multiperiod MINLP model for optimal planning of offshore oil and gas field infrastructure. *Ind. Eng. Chem. Res.* 2012; 51: 6823-6840.
+---PAGE_BREAK---
+
+[12] Rahman MM, Rahman MK, Rahman SS. An integrated model for multiobjective design optimization of hydraulic fracturing. *Journal of Petroleum Science and Engineering*. 2001; 31: 41-62.
+
+[13] Knudsen BR, Foss B, Grossmann IE, Gupta V. Lagrangian relaxation based production optimization of tight-formation wells. Submitted for publication to *Comput. Chem. Eng.* 2013.
+
+[14] Mauter MS, Palmer VR, Tang Y, Behrer RP. *The next frontier in United States unconventional shale gas and tight oil extraction: Strategic reduction of environmental impact*. Belfer Center for Science and International Affairs, Harvard Kennedy School, March 2013.
+
+[15] Rahm BG, Riha SJ. Toward strategic management of shale gas development: Regional, collective impacts on water resources. *Environmental Science and Policy*. 2012; 17: 12-23.
+
+[16] Yang L, Grossmann IE. Superstructure-Based Shale Play Water Management Optimization. 2013 AIChE Meeting. San Francisco, CA.
+
+[17] Nikolaou M. Computer-aided process engineering in oil and gas production. *Comput. Chem. Eng.* 2013; 51: 96-101.
+
+[18] Troner A. *Natural gas liquids in the shale revolution*. James A. Baker III Institute for Public Policy Rice University. April, 2013.
+
+[19] Grossmann IE. Enterprise-wide optimization: A new frontier in process systems engineering. *AIChE J.* 2005; 51: 1846-1857.
+
+[20] Oliveira F, Gupta V, Hamacher S, Grossmann IE. A Lagrangean decomposition approach for oil supply chain investment planning under uncertainty with risk considerations. *Comput. Chem. Eng.* 2013; 50: 184-195.
+
+[21] Weymouth TR. Problems in natural gas engineering. *ASME Transactions*. 1912; 34: 185-234.
+---PAGE_BREAK---
+
+[22] Nahmias, S. *Production and operations analysis*. New York, NY: McGraw-Hill, 2009.
+
+[23] Biegler LT, Grossmann IE, Westerberg AW. *Systematic methods of chemical process design*. New Jersey, NJ: Prentice Hall, 1997.
+
+[24] Guthrie KM. Capital cost estimating. *Chemical Engineer*. 1969; 76: 114-142.
+
+[25] McCarl BA. *Expanded GAMS user guide version 23.6*. Washington, DC: GAMS Development Corporation, 2011.
+
+[26] Geoffrion AM. Objective function approximations in mathematical programming. *Math. Prog*. 1977; 13: 23-37.
+
+[27] Bergamini ML, Aguirre P, Grossmann IE. Logic-based outer approximation for globally optimal synthesis of process networks. *Comput. Chem. Eng.* 2005; 29: 1914-1933.
+
+[28] Bergamini ML, Grossmann IE, Scenna N, Aguirre P. An improved piecewise outer-approximation algorithm for the global optimization of MINLP models involving concave and bilinear terms. *Comput. Chem. Eng.* 2008; 32: 477-493.
+
+[29] Cafaro DC, Grossmann IE. Alternate approximation of concave cost functions for process design and supply chain optimization problems. Submitted for publication to *Comput. Chem. Eng.* 2013.
+
+[30] Padberg M. Approximating separable nonlinear functions via mixed zero-one programs. *Oper. Res. Lett.* 2000; 27: 1-5.
+
+[31] You F, Grossmann IE. Stochastic inventory management for tactical process planning under uncertainties: MINLP Model and Algorithms. *AIChE J.* 2011; 57: 1250-1277.
+
+[32] You F, Grossmann IE. Integrated multi-echelon supply chain design with inventories under uncertainty: MINLP models, computational strategies. *AIChE J.* 2010; 56: 419-440.
+---PAGE_BREAK---
+
+$$LP_{p,l,t} \leq LPCap_{p,l,t} \quad \forall p \in P, l \in L, t > 1 \quad (26)$$
+
+*Power of Compressors.* If the suction and discharge pressures are given (see assumption 9), and assuming that compressors are adiabatic, a simple expression can be derived to calculate the required compression power (in kW), as shown in the Appendix. Under such assumptions, the required power is directly proportional to the total flow of gas being compressed. In the case of raw gas, compressed at junction nodes and sent to processing plants, the total power installed up to time $t$ ($JCP_{j,t}$) must be greater than or equal to the power demanded by the total flows of raw gas compressed by $j$ at time $t$, as expressed by eq. (27).
+
+$$JCP_{j,t} \geq kc_j \sum_{p \in P} GP_{j,p,t} \quad \forall j \in J, t > 1 \quad (27)$$
+
+Similarly, the power of compressors installed at the outlet of the processing plant $p$ up to time $t$ ($PCP_{p,t}$), sending dry gas to demand nodes $k$ (e.g., gas distribution companies), is bounded from below by constraint (28).
+
+$$PCP_{p,t} \geq kc_p \sum_{k \in K} TP_{p,k,t} \quad \forall p \in P, t > 1 \quad (28)$$
+
+Moreover, compressor stations can be expanded in the planning horizon by installing new compressors at the same node. Eqs. (29) and (30) determine the total power of compressors installed up to time $t$ at nodes $j$ and $p$, respectively, where $\tau_c$ is the compressor installation lead time in quarters.
+
+$$JCP_{j,t} = JCP_{j,t-1} + JCInst_{j,t-\tau_c} \quad \forall j \in J, t > 1 \quad (29)$$
+
+$$PCP_{p,t} = PCP_{p,t-1} + PCInst_{p,t-\tau_c} \quad \forall p \in P, t > 1 \quad (30)$$
+
+### 3.1.4 Water Supplies
+
+*Water Demand for Drilling and Fracturing Wells.* As explained before, a large amount of freshwater is required in the shale gas industry for the hydraulic fracturing of new wells. This model assumes that
+---PAGE_BREAK---
+
+the total amount of water required by a single well during the drilling, fracturing and completion processes ($wr_i$) is known (typically, 20 Mm³/well) but may depend on the well location. Eq. (31) states that the total number of wells drilled, fractured and completed in pad $i$ during period $t$ determines the total water requirement of that pad at that period, and such amount should be supplied from one or more freshwater sources $f$. The amount of freshwater supplied by source $f$ for drilling and fracturing new wells in pad $i$ during period $t$ is a key model decision represented by the continuous variable $WS_{f,i,t}$. In addition, if the well-pad $i$ has the infrastructure for flowback water treatment and reuse, a reuse factor $rf_i$ (usually below 20%) can reduce the need for freshwater, as shown in the LHS of eq. (31).
+
+$$N_{i,t} \ wr_i / (1+rf_i) = \sum_{f \in F} WS_{f,i,t} \quad \forall i \in I, t \in T \quad (31)$$
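
For a single pad and period, eq. (31) reduces to a simple calculation; the well count and the 15% reuse factor below are illustrative values, not case-study data.

```python
# Numerical sketch of the water balance in eq. (31) for one pad and one
# period. Figures are illustrative: 4 wells drilled, 20 Mm3 of water per
# well, and an assumed 15% flowback reuse factor.
N_it = 4         # wells drilled and fractured in pad i during period t
wr_i = 20.0      # Mm3 (thousand m3) of water required per well
rf_i = 0.15      # water reuse factor of the pad

freshwater = N_it * wr_i / (1 + rf_i)   # must equal the sum of WS_{f,i,t}
# Roughly 69.6 Mm3, versus 80 Mm3 if no flowback water were reused.
```
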
+
+*Water Availability.* Every freshwater resource (rivers, lakes, underground water, etc.) usually has an upper limit on the amount of water that it can provide to the shale gas industry, often given by a seasonal profile. If the parameter $fwa_{f,t}$ stands for the maximum volume of freshwater that source $f$ can supply to the drilling and fracturing of new wells during the whole period $t$, the total amount supplied from $f$ to every pad $i$ should be bounded from above as in constraint (32).
+
+$$\sum_{i \in I} WS_{f,i,t} \leq fwa_{f,t} \quad \forall f \in F, t \in T \qquad (32)$$
+
+### 3.1.5 Maximum Demands
+
+A critical model decision is where to sell both the dry gas and the ethane flows produced by the shale gas processing plants. Every potential market (or demand node) is assumed to consume a maximum amount of product (dry gas for gas distributors, ethane for petrochemical plants) based on their own transportation or processing capacities. Moreover, such a demand profile can be seasonal, especially in dry gas markets. Constraints (33) and (34) restrict the total flow of dry gas and ethane that can be sent from processing plants to each demand node during every period of the planning horizon.
+---PAGE_BREAK---
+
+$$ \sum_{p \in P} TP_{p,k,t} \le gasdem_{k,t} \quad \forall k \in K, t > 1 \qquad (33) $$
+
+$$ \sum_{p \in P} LP_{p,l,t} \le ethdem_{l,t} \quad \forall l \in L, t > 1 \qquad (34) $$
+
+## 3.2 Objective Function
+
+The objective function of the model is to maximize the Net Present Value (NPV) of the long-term planning project as expressed in eq. (35).
+
+$$
+\begin{aligned}
+NPV = & \sum_{t \in T} (1 + dr/4)^{-t} \left[ \sum_{p \in P} \sum_{k \in K} gasp_{k,t} nd_t TP_{p,k,t} + \sum_{p \in P} \sum_{l \in L} ethp_{l,t} nd_t LP_{p,l,t} + \sum_{p \in P} lpgp_t nd_t NP_{p,t} \right. \\
+& - \sum_{i \in I} \sum_{j \in J} shgc_{i,t} nd_t FP_{i,j,t} \\
+& - \sum_{p \in P} ks SepInst_{p,t}^{SepExp} \\
+& - \sum_{i \in I} \sum_{n=1}^{\bar{n}_i} kd \, n^{WellExp} y_{i,n,t} \\
+& - \sum_{i \in I} \sum_{j \in J} kp_i l_{i,j} DFP_{i,j,t}^{GasPipeExp} - \sum_{j \in J} \sum_{p \in P} kp_j l_{j,p} DGP_{j,p,t}^{GasPipeExp} \\
+& - \sum_{p \in P} \sum_{k \in K} kp_k l_{p,k} DTP_{p,k,t}^{GasPipeExp} - \sum_{p \in P} \sum_{l \in L} kp_l l_{p,l} DLP_{p,l,t}^{LiqPipeExp} \\
+& - \sum_{j \in J} kc JCInst_{j,t}^{CompExp} - \sum_{p \in P} kc PCInst_{p,t}^{CompExp} \\
+& \left. - \sum_{f \in F} \sum_{i \in I} (fix_f + var_f l_{f,i}) WS_{f,i,t} \right]
+\end{aligned}
+\qquad (35) $$
+
+The objective function comprises positive and negative terms for every period of the planning horizon, discounted back to present value by the annual discount rate of the project, *dr*. Positive terms are the income from sales of dry gas, ethane, and NGLs other than ethane. Negative terms are the shale gas acquisition cost (including production, transportation and other operating costs), the cost of drilling, hydraulic fracturing and completing shale gas wells, the cost of installing/expanding shale gas processing capacity at separation plants, the cost of constructing new pipelines either for
\ No newline at end of file
diff --git a/samples/texts_merged/5244113.md b/samples/texts_merged/5244113.md
new file mode 100644
index 0000000000000000000000000000000000000000..1cd015d796825e8d05438fd156273c899c0b78db
--- /dev/null
+++ b/samples/texts_merged/5244113.md
@@ -0,0 +1,405 @@
+
+---PAGE_BREAK---
+
+Yet another approach to the inverse square law and to the
+circular character of the hodograph of Kepler orbits
+
+Adel H. Alameh
+
+Lebanese University, Department of Physics, Hadath, Beirut, Lebanon*
+
+(Dated: June 4, 2021)
+
+Abstract
+
+The law of centripetal force governing the motion of celestial bodies in eccentric conic sections was established and thoroughly investigated by Sir Isaac Newton in his Principia Mathematica. Yet its profound implications for the understanding of such motions are still being explored.
+
+In a paper to the Royal Irish Academy, Sir William Hamilton demonstrated that this law underlies the circular character of hodographs for Kepler orbits, a fact which was the object of later research and exploration by Richard Feynman and many other authors [1].
+
+In effect, a minute examination of the geometry of elliptic trajectories reveals interesting geometric properties and relations which, combined with the law of conservation of angular momentum, lead eventually, and without any recourse to differential equations, to the equation of the trajectory and to the derivation of the equation of its corresponding hodograph.
+
+In this respect, and for the sake of founding the approach on a solid basis, I devised two mathematical theorems: one concerning the existence of geometric means, the other establishing the parametric equation of an off-center circle. Compounded with other simple arguments, these ultimately give rise to the inverse square law of force that governs the motion of bodies in elliptic trajectories, as well as to the equation of their inherent circular hodographs.
+
+* adel.alameh@eastwoodcollege.com
+---PAGE_BREAK---
+
+**PRELIMINARY GEOMETRY OF ELLIPTIC TRAJECTORIES**
+
+Let $S$ and $S'$, separated by $SS' = 2c$, be the foci of an ellipse ($E$), described by a celestial body $P$ (a planet) that is impelled by a force tending toward a center of force $S$ (a star), as shown in Figure 1. Draw the principal circle ($C_p$) of center $O$ and radius $OA = a$, where $a$ is the length of the semi major axis of ($E$). Draw the director circle ($C_d$) of center $S'$ and radius $S'M = 2a$. Produce $S'P$ to meet the director circle ($C_d$) in $M$, let fall the perpendicular $PH$ to $SM$, then produce $HP$ to meet the principal circle ($C_p$) in $Z$. Produce $MS$ to meet ($C_p$) in $R$.
+
+Figure 1: Geometry of elliptic orbits
+
+It is now required to prove that the radius vector $\mathbf{r} = \mathbf{SP}$ is parallel to $\mathbf{RZ}$. For that purpose, we start off with an alternative definition of the ellipse [2], which states that an ellipse is the locus of the centers of circles passing through one focus $S$ and internally tangent to the director circle of radius $2a$ whose center lies at the other focus $S'$. The case being so, it is easy to infer that $\Delta PSM$ is isosceles, and that $H$ is the midpoint of $SM$. Let $PL$ be the bisector of the angle $\widehat{SPS'}$. We seek now to prove that $PL$ is parallel to $SM$. Evidently $\widehat{MPH} = \widehat{ZPS'}$ as they are vertically opposite angles, and $\widehat{MPH} = \widehat{SPH}$ as corresponding parts of congruent triangles, hence $\widehat{ZPS'} = \widehat{HPS}$, and it is readily inferred that $\widehat{HPL} = 90^\circ$. Hence, $PL$ is parallel to $SM$ and correlatively $PH$ is tangent to the
+---PAGE_BREAK---
+
+ellipse since it is perpendicular to PL which is the internal bisector of angle $\widehat{SPS'}$.
+
+Now in $\triangle S'SM$, the points $O$ and $H$ are the midpoints of $S'S$ and $SM$ respectively, then
+$OH = \frac{S'M}{2} = a$, therefore $H$ belongs to the principal circle ($C_p$), and having the angle
+$\widehat{RHZ} = 90^\circ$ leads to saying that $ZR$ is a diameter of the principal circle.
+A few more steps are still needed to achieve the required proof. We proceed by observing
+that $\widehat{PSH} = \widehat{PMH}$, since $\triangle PSM$ is isosceles. Furthermore, $\triangle ORH$ is also isosceles, for
+$OR = OH$ are radii of the same circle, hence $\widehat{ORH} = \widehat{OHR}$; but $\widehat{OHR} = \widehat{PMH}$, since $OH$ is
+parallel to $S'M$; thus $\widehat{PSH} = \widehat{ORH}$. Correlatively $SP$ is
+parallel to $RZ$, therefore $\widehat{PSW} = \widehat{ZOS} = \theta$.
+
+**THE GEOMETRIC MEAN THEOREM**
+
+**Theorem 1.** Let $f: x \to f(x)$ be a continuous function that does not vanish anywhere on the interval $[a,b]$ and is differentiable within it, with $f'(x) \neq 0$ inside the interval. Then there exists a point of abscissa $x = c$, with $a < c < b$, such that
+
+$$f^2(c) = f(a) \cdot f(b) \tag{1}$$
+
+**Proof.** Let us construct the auxiliary function $\beta(x)$ defined in the interval $[a, b]$ such that
+
+$$\beta(x) = \frac{f(x)}{f(a)} + \frac{f(b)}{f(x)} \tag{2}$$
+
+*Now*
+
+$$\beta(a) = 1 + \frac{f(b)}{f(a)} \tag{3}$$
+
+*And*
+
+$$\beta(b) = \frac{f(b)}{f(a)} + 1 \tag{4}$$
+
+*Therefore*
+
+$$\beta(a) = \beta(b) \tag{5}$$
+
+And so, by Rolle's theorem [3], there exists a point of abscissa $x = c$, with $a < c < b$, such that $\beta'(c) = 0$
+
+$$\beta'(c) = \frac{f'(c)}{f(a)} - \frac{f(b) \cdot f'(c)}{f^2(c)} = 0 \tag{6}$$
+
+By rearranging and canceling $f'(c)$, which is nonzero by hypothesis, we get
+
+$$f^2(c) = f(a) \cdot f(b) \tag{7}$$
+
+And the theorem is proved.
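As a quick numerical illustration (not part of the proof; the choice $f(x) = e^x$ on $[0,1]$ is ours), the geometric-mean point can be located by bisection, and for the exponential it falls exactly at the midpoint of the interval:

```python
import math

# f(x) = e^x is continuous, non-vanishing and has f'(x) != 0 on [0, 1],
# so Theorem 1 predicts a point c with f(c)^2 = f(0) * f(1).
f = math.exp
a, b = 0.0, 1.0

# Solve f(c)^2 = f(a)*f(b) by bisection on g(c) = f(c)^2 - f(a)*f(b),
# which is negative at c = a and positive at c = b for this f.
g = lambda c: f(c) ** 2 - f(a) * f(b)
lo, hi = a, b
for _ in range(80):
    mid = 0.5 * (lo + hi)
    if g(lo) * g(mid) <= 0:
        hi = mid
    else:
        lo = mid
c = 0.5 * (lo + hi)
# For f = exp, the geometric-mean point is the midpoint (a + b) / 2.
```
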
+---PAGE_BREAK---
+
+**PROBING INTO KEPLER ELLIPTIC TRAJECTORIES**
+
+The radius vector **r** of a planet moving on an elliptic orbit, is a continuous function of the
+angle $\theta$ that it makes with the major axis from the perihelion side. The angle $\theta$ in turn
+changes also with time. The modulus of **r** takes a minimum value $r_p = a-c$ at $\theta=0$, when
+the planet passes through the perihelion and a maximum value $r_a = a+c$ at $\theta=\pi$, in its
+passage through the aphelion. Then, according to Theorem (1), there exists a value $\theta_b$ of $\theta$
+such that $r^2(\theta_b) = r(0)r(\pi)$, that is, $r^2(\theta_b) = a^2 - c^2$. But in the case of an ellipse, the length
+of the semi-minor axis is given by the relation $b^2 = a^2 - c^2$. Therefore the modulus of the
+radius vector takes a value equal to the length of the semi-minor axis of the ellipse
+at a well specified angle; I call it $\theta_b$.
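This prediction is easy to confirm numerically against the focal polar equation of the ellipse, $r(\theta) = \frac{b^2}{a + c\cos\theta}$ (equation (25) of this paper); the values of $a$ and $c$ in this sketch are arbitrary:

```python
import math

# Arbitrary ellipse: semi-major axis a and focal distance c (c < a).
a, c = 5.0, 3.0
b = math.sqrt(a * a - c * c)                 # semi-minor axis, b^2 = a^2 - c^2

# Polar equation of the ellipse about the focus S.
r = lambda th: b * b / (a + c * math.cos(th))

# Theorem 1 applied to r: r(0) * r(pi) = (a - c)(a + c) = b^2, so some
# theta_b satisfies r(theta_b) = b; solving a + c*cos(theta_b) = b gives:
theta_b = math.acos((b - a) / c)
```
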
+
+Returning to the geometric features of Figure 1, by a well-known relation (the power of the point $S$ with respect to the principal circle, $S$, $H$, $R$ being collinear and $H$, $R$, $A$, $B$ lying on that circle) we have:
+
+$$
+SH \times SR = SA \times SB \tag{8}
+$$
+
+that means
+
+$$
+SH \times SR = (a-c)(a+c) = b^2 \quad (9)
+$$
+
+The position vector of the planet expressed in polar coordinates has the form $\mathbf{r} = r\hat{\mathbf{e}}_r$ and
+its velocity vector is $\mathbf{V} = \frac{d\mathbf{r}}{dt} = \dot{r}\hat{\mathbf{e}}_r + r\dot{\theta}\hat{\mathbf{e}}_\theta$. Now, since the planet is urged by a centripetal
+force toward the star, the applied external torque on the planet is zero, hence its angular
+momentum $\mathbf{L} = \mathbf{r} \times m\mathbf{V}$ is conserved [4]. The magnitude of the angular momentum is
+$L = mV_{\theta}r$, where $V_{\theta} = r\dot{\theta}$ is the transverse component of the velocity vector. The magnitude
+of $\mathbf{L}$ may also be expressed as $L = m \times V \times SH$, owing to the fact that $SH$ is the perpendicular
+distance from the focus $S$ to the tangent $PH$.
+So, we can say that:
+
+$$
+L = mV_{\theta}r = mV \times SH \qquad (10)
+$$
+
+By canceling *m*, we get the expression of *h* which is that of the angular momentum per unit mass as being:
+
+$$
+h = rV_{\theta} = SH \times V = r^{2}\dot{\theta} \qquad (11)
+$$
+
+In the course of motion, the radius vector should, at a certain moment, take the value $r = b$
+at the angular position $\theta = \theta_b$; accordingly, $h$ being constant can be expressed as $h = b^2\dot{\theta}_b$,
+where $\dot{\theta}_b$ is the angular velocity at $\theta = \theta_b$. So, from equation (11) we can say that
+
+$$
+SH \times V = b^2 \dot{\theta}_b \tag{12}
+$$
+
+and
+
+$$
+r V_{\theta} = b^2 \dot{\theta}_b \tag{13}
+$$
+
+Now multiplying equation (9) by $\dot{\theta}_b$ we get
+
+$$
+SH \times SR \times \dot{\theta}_b = b^2 \dot{\theta}_b \tag{14}
+$$
+
+By comparing equations (12) and (14) we get $V = SR \times \dot{\theta}_b$.
+But $SR = ZS'$, since $\Delta OS'Z$ is congruent to $\Delta OSR$, for $OS = OS' = c$, $OZ = OR = a$ and
+$\widehat{S'OZ} = \widehat{SOR}$ are vertically opposite angles; hence,
+
+$$
+V = ZS' \times \dot{\theta}_b \tag{15}
+$$
+---PAGE_BREAK---
+
+Therefore
+
+$$
+\mathbf{V} = ZS' \times \dot{\theta}_b \hat{\mathbf{e}}_t \quad (16)
+$$
+
+where $\hat{\mathbf{e}}_t$ is a unit vector tangent to the trajectory in the same direction as $\mathbf{V}$.
+In the language of mathematics, the velocity vector $\mathbf{V}$ is the image of $\mathbf{ZS}'$ under a direct
+similitude of ratio $\dot{\theta}_b > 0$ and of angle ($\mathbf{ZS}', \mathbf{V}$) $= -\frac{\pi}{2} + 2k\pi$.
+
+It remains only to put equation (16) in a more explicit form, by finding the expressions of $\mathbf{S}'\mathbf{Z}$
+and $\hat{\mathbf{e}}_t$ in terms of the dynamic parameters of the motion. For that purpose, returning
+to figure 1, we notice that $\mathbf{S}'\mathbf{Z} = \mathbf{S}'\mathbf{O} + \mathbf{OZ}$, i.e. $\mathbf{S}'\mathbf{Z} = c\hat{\mathbf{e}}_x + a\hat{\mathbf{e}}_r$, since $OZ$ is parallel to $\mathbf{r}$
+as proved before. But $\hat{\mathbf{e}}_r = \cos\theta\, \hat{\mathbf{e}}_x + \sin\theta\, \hat{\mathbf{e}}_y$; therefore $\mathbf{S}'\mathbf{Z} = (c + a\cos\theta)\, \hat{\mathbf{e}}_x + a\sin\theta\, \hat{\mathbf{e}}_y$.
+Then expressing it in polar coordinates by the well known transformation relation:
+
+$$
+\begin{pmatrix} \hat{\mathbf{e}}_x \\ \hat{\mathbf{e}}_y \end{pmatrix} = \begin{pmatrix} \cos \theta & -\sin \theta \\ \sin \theta & \cos \theta \end{pmatrix} \begin{pmatrix} \hat{\mathbf{e}}_r \\ \hat{\mathbf{e}}_\theta \end{pmatrix} \qquad (17)
+$$
+
+We get **S′Z** to be:
+
+$$
+\mathbf{S}'\mathbf{Z} = (a + c \cos \theta) \hat{\mathbf{e}}_r - c \sin \theta \hat{\mathbf{e}}_{\theta} \quad (18)
+$$
+
+Thus the modulus of **S′Z** will have the expression
+
+$$
+|\mathbf{S}'\mathbf{Z}| = \sqrt{a^2 + c^2 + 2ac \cos \theta} \tag{19}
+$$
+
+and a unit vector $\hat{\mathbf{e}}_n$ along the normal to the trajectory should be $\hat{\mathbf{e}}_n = -\frac{\mathbf{S}'\mathbf{Z}}{|\mathbf{S}'\mathbf{Z}|}$, so
+
+$$
+\hat{\mathbf{e}}_n = - \frac{a+c\cos\theta}{\sqrt{a^2+c^2+2ac\cos\theta}} \hat{\mathbf{e}}_r + \frac{c\sin\theta}{\sqrt{a^2+c^2+2ac\cos\theta}} \hat{\mathbf{e}}_\theta \quad (20)
+$$
+
+The unit vector $\hat{\mathbf{e}}_t$ that is in the direction of the velocity vector $\mathbf{V}$ is perpendicular to $\hat{\mathbf{e}}_n$
+and its expression will be
+
+$$
+\hat{\mathbf{e}}_t = + \frac{c \sin \theta}{\sqrt{a^2 + c^2 + 2ac \cos \theta}} \hat{\mathbf{e}}_r + \frac{a + c \cos \theta}{\sqrt{a^2 + c^2 + 2ac \cos \theta}} \hat{\mathbf{e}}_{\theta} \quad (21)
+$$
+
+Finally by substituting (21) and (19) in (16) we obtain the expression of the velocity vector
+
+$$
+\mathbf{V} = c \dot{\theta}_b \sin \theta \hat{\mathbf{e}}_r + (a + c \cos \theta) \dot{\theta}_b \hat{\mathbf{e}}_{\theta} \quad (22)
+$$
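Equation (22) can be checked independently: with $r(\theta) = b^2/(a + c\cos\theta)$ and $\dot{\theta} = b^2\dot{\theta}_b/r^2$ from the conservation law (11), the components $V_r = \dot{r}$ and $V_\theta = r\dot{\theta}$ computed directly should match it. A short sketch with arbitrary parameters:

```python
import math

a, c, theta_dot_b = 5.0, 3.0, 0.7     # arbitrary ellipse and angular rate
b2 = a * a - c * c                     # b^2

r = lambda th: b2 / (a + c * math.cos(th))
theta_dot = lambda th: b2 * theta_dot_b / r(th) ** 2   # from r^2 theta' = b^2 theta_b'

th = 1.1                               # arbitrary angular position
# Components predicted by equation (22):
V_r_pred = c * theta_dot_b * math.sin(th)
V_th_pred = (a + c * math.cos(th)) * theta_dot_b

# Independent check: V_r = dr/dt = (dr/dtheta) * theta_dot (central difference),
# and V_theta = r * theta_dot.
h = 1e-6
dr_dth = (r(th + h) - r(th - h)) / (2 * h)
V_r_num = dr_dth * theta_dot(th)
V_th_num = r(th) * theta_dot(th)
```
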
+
+Accordingly we pursue our search to retrieve the equation of the elliptic trajectory from
+what preceded. It is to be noticed in this context that, the components of the velocity
+vector in the polar system are given by equation (22) as:
+
+$$
+V_r = c \dot{\theta}_b \sin \theta \tag{23}
+$$
+
+$$
+V_{\theta} = (a + c \cos \theta) \dot{\theta}_{b} \tag{24}
+$$
+
+Substituting $V_\theta$ as given by equation (24) in equation (13) and canceling $\dot{\theta}_b$ gives rise
+to the equation of the ellipse in a harmonious fashion:
+
+$$
+r = \frac{b^2}{a + c \cos \theta} = \frac{a(1 - \epsilon^2)}{1 + \epsilon \cos \theta} \quad (25)
+$$
+---PAGE_BREAK---
+
+where $\epsilon = \frac{c}{a}$ is the eccentricity of the ellipse.
+
+Further, the expression of **V** can also be transformed into the cartesian system by the use of equation (17) in the inverted form, thus:
+
+$$
+\mathbf{V} = -a \dot{\theta}_b \sin \theta \hat{\mathbf{e}}_x + (c + a \cos \theta) \dot{\theta}_b \hat{\mathbf{e}}_y \quad (26)
+$$
+
+and its modulus will be
+
+$$
+V = \dot{\theta}_b \sqrt{a^2 + c^2 + 2ac \cos \theta} \tag{27}
+$$
+
+and by squaring equation (27) we get
+
+$$
+V^2 = (a \dot{\theta}_b)^2 + 2(a c) \dot{\theta}_b^2 \cos \theta + (c \dot{\theta}_b)^2 \quad (28)
+$$
+
+then by calling $V_c = a\dot{\theta}_b$ and $V_0 = c\dot{\theta}_b$ we obtain
+
+$$
+V^2 = V_c^2 + 2 V_c V_0 \cos \theta + V_0^2 \tag{29}
+$$
+
+It must be pointed out that equation (29) is of major importance in providing the value of $V$ in terms of the angle $\theta$ that the planet makes with the perihelion direction at any moment. One can also credit this approach, among others yet to come, with a peculiar privilege, namely that equation (16) introduced the second focus of the ellipse into the scene.
+
+Moreover, it is to be noticed that if we plug $\theta = 0$ at the perihelion and $\theta = \pi$ at the aphelion into equations (23) and (24), one obtains that in both cases the radial velocity is zero and that the transverse components of the velocity vector **V** take respectively the values $V_{pe} = (a+c)\dot{\theta}_b$ and $V_{ap} = (a-c)\dot{\theta}_b$. But $V_{pe} = (a-c)\dot{\theta}_{pe}$ and $V_{ap} = (a+c)\dot{\theta}_{ap}$. Then by matching the two expressions of $V_{pe}$ and $V_{ap}$, one obtains:
+
+$$
+\dot{\theta}_{pe} = \frac{a+c}{a-c} \dot{\theta}_b \quad \text{and} \quad \dot{\theta}_{ap} = \frac{a-c}{a+c} \dot{\theta}_b
+\qquad (30)
+$$
+
+Finally by multiplying the last two expressions we get
+
+$$
+\dot{\theta}_b^2 = \dot{\theta}_{pe} \dot{\theta}_{ap} \tag{31}
+$$
+
+and it turns out that our $\dot{\theta}_b$ is nothing but the geometric mean of $\dot{\theta}_{pe}$ and $\dot{\theta}_{ap}$.
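Equation (31) follows from the conservation law alone, and a few lines suffice to confirm it numerically (the values of $a$, $c$ and $h$ are arbitrary):

```python
import math

a, c, h = 5.0, 3.0, 2.0          # arbitrary a, c and angular momentum per unit mass
b2 = a * a - c * c               # b^2

# h = r^2 * theta_dot gives the angular velocity at each special radius:
theta_dot_pe = h / (a - c) ** 2  # perihelion, r = a - c
theta_dot_ap = h / (a + c) ** 2  # aphelion,   r = a + c
theta_dot_b = h / b2             # at r = b

# Equation (31): theta_dot_b is the geometric mean of the two extremes,
# since (a - c)^2 (a + c)^2 = (b^2)^2.
```
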
+
+We now turn our attention to deriving the law of force, so we proceed by rearranging equation (22) to the form
+
+$$
+\mathbf{V} = c \dot{\theta}_b (\sin \theta \hat{\mathbf{e}}_r + \cos \theta \hat{\mathbf{e}}_\theta) + a \dot{\theta}_b \hat{\mathbf{e}}_\theta \quad (32)
+$$
+
+and knowing from equation (17) that $\hat{\mathbf{e}}_y = \sin \theta \hat{\mathbf{e}}_r + \cos \theta \hat{\mathbf{e}}_\theta$ we obtain
+
+$$
+\mathbf{V} = c \dot{\theta}_b \hat{\mathbf{e}}_y + a \dot{\theta}_b \hat{\mathbf{e}}_\theta
+\quad (33)
+$$
+
+then we differentiate equation (33) with respect to time, and knowing that $\frac{d\hat{\mathbf{e}}_y}{dt} = \mathbf{0}$,
+
+and $\frac{d\hat{\mathbf{e}}_\theta}{dt} = -\dot{\theta} \hat{\mathbf{e}}_r$ we thus obtain the expression of the acceleration vector
+
+$$
+\mathcal{A} = -a \dot{\theta}_b \dot{\theta} \hat{\mathbf{e}}_r
+\quad (34)
+$$
+---PAGE_BREAK---
+
+and given that $r^2 \dot{\theta} = b^2 \dot{\theta}_b$ means that $\dot{\theta}$ can be expressed as $\dot{\theta} = \frac{b^2 \dot{\theta}_b}{r^2}$ which when substituted in equation (34), gives the expression of the acceleration vector as
+
+$$ \boldsymbol{\mathcal{A}} = -\frac{ab^2 \dot{\theta}_b^2}{r^2} \hat{\mathbf{e}}_r \qquad (35) $$
+
+And on the basis of Newton's second law $\mathcal{F} = m \boldsymbol{\mathcal{A}}$, one obtains
+
+$$ \boldsymbol{\mathcal{F}} = -\frac{mab^2 \dot{\theta}_b^2}{r^2} \hat{\mathbf{e}}_r \qquad (36) $$
+
+and knowing that $b = a\sqrt{1-\epsilon^2}$ we get
+
+$$ \boldsymbol{\mathcal{F}} = -\frac{ma^3(1-\epsilon^2)\dot{\theta}_b^2}{r^2} \hat{\mathbf{e}}_r \qquad (37) $$
+
+In Newton's law of universal gravitation, the force is given as:
+
+$$ \boldsymbol{\mathcal{F}} = -\frac{GmM}{r^2} \hat{\mathbf{e}}_r \qquad (38) $$
+
+and by comparing equations (37) and (38) one obtains the value of $\dot{\theta}_b$ to be
+
+$$ \dot{\theta}_b = \sqrt{\frac{GM}{a^3(1-\epsilon^2)}} \qquad (39) $$
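As an illustrative check of equation (39) for the Sun-Earth system (the constants below are standard values, and the identity $h^2 = GMa(1-\epsilon^2)$ used for the cross-check is classical Kepler-orbit theory rather than a result of this paper):

```python
import math

GM = 1.32712440018e20    # Sun's gravitational parameter, m^3/s^2
a_m = 1.496e11           # Earth's semi-major axis, m
eps = 0.0167             # Earth's orbital eccentricity

# Equation (39):
theta_dot_b = math.sqrt(GM / (a_m ** 3 * (1.0 - eps ** 2)))

# Cross-check via h = sqrt(GM * a * (1 - eps^2)) together with
# theta_dot_b = h / b^2 from equation (11) evaluated at r = b:
b_m = a_m * math.sqrt(1.0 - eps ** 2)
h = math.sqrt(GM * a_m * (1.0 - eps ** 2))
theta_dot_b_check = h / b_m ** 2

# The result is close to Earth's mean motion 2*pi / (1 year).
```
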
+
+Moreover, a closer inspection of equation (39) suggests that, since the star permanently consumes its mass in favor of the energy of electromagnetic radiation and other emitted particles, it might have had at an earlier stage a mass
+
+$$ M' = \frac{M}{1 - \epsilon^2} \qquad (40) $$
+
+and then the angular velocity of the planet would have been $\dot{\theta}_b = \sqrt{\frac{GM'}{a^3}}$ which is that of a uniform circular motion. Accordingly, one could infer that the planet should have been revolving at that stage in a uniform circular motion, and hence its orbit is becoming more and more elliptic with time.
+
+Furthermore, equation (40) gives rise to the law that governs the variation of the eccentricity of the elliptic orbit with time as follows
+
+$$ \epsilon(t) = \sqrt{1 - \frac{M(t)}{M'}} \qquad (41) $$
+
+Knowledge of the actual power of the star, along with the use of Einstein's mass-energy relation $E = mc^2$, would not be enough to find exactly the relation between the current mass $M$ of the star and its earlier mass $M'$ when the planet was orbiting it in uniform circular motion, because a part of the mass of the star flees it randomly through stellar winds. Despite these difficulties, an interesting feature may be extracted from equation (41), namely the effect of the mass on the geometry, as it indicates that the geometry of
+---PAGE_BREAK---
+
+the orbit is changing from circular to elliptic as the mass of the star decreases with time. In this respect, one should also notice that these spontaneous modifications in the geometry of the orbit occur in a sense as to change the orbit from the most ordered shape (the circle) to a less ordered shape (the ellipse).
+
+Another implication of Newton’s law of gravitational interaction expressed in the form of equation (37) may be noticed when it comes to a moon of mass *m* orbiting a planet of presumably constant mass *M* on an elliptic orbit: one could predict that another moon of mass *m*′ = *m*(1 − ε²), let go with the same initial conditions as *m*, would orbit the planet in a uniform circular motion, because its angular velocity would then be
+
+$$
+\dot{\theta}_b = \sqrt{\frac{GM}{a^3}}
+$$
+
+**Consequence.** A planet orbiting a star in an elliptic orbit should possess, at two specific instants in every complete revolution around the star, an angular velocity equal to the angular velocity it would have had, had it been rotating in a uniform circular motion at an earlier stage of the life of the star.
+
+Before we proceed to extract the information from equation (29), a little digression into defining the hodograph is needed. Thus, the hodograph is the curve generated by the tip of a vector equipollent to the velocity vector and whose tail lies at the origin of the velocity space. The velocity vector of a moving body is permanently tangent to the trajectory described by that body at any instant. Except for uniform circular motion, in which the modulus of the velocity vector remains constant, all other sorts of curvilinear motion are characterized by a velocity vector changing in both modulus and direction. Nonetheless, we still need to construct the equation of an off-center circle in polar coordinates, and for that sake we introduce Theorem (2).
+
+**Theorem 2.** The modulus of the radius vector of a point $M$ moving on a circle of radius $\rho$ centered at $(r_0, \phi_0)$, with parameter $\theta$, satisfies:
+
+$$
+r^2 = \rho^2 + 2\rho r_0 \cos \theta + r_0^2 \tag{42}
+$$
+
+**Proof.** Let us consider a circle $\Omega$ (figure 2) of radius $\rho$ and center $C(r_0, \phi_0)$. A point $M$ on the circle is located by its radius $r$ and by the azimuthal angle $\phi$.
+
+Figure 2: Relations satisfied by off-center circles in polar coordinates
+---PAGE_BREAK---
+
+$$\text{Now} \qquad \boldsymbol{r} = \boldsymbol{r}_0 + \boldsymbol{\rho} \tag{43}$$
+
+Projecting equation (43) successively on $\hat{\boldsymbol{e}}_x$ and $\hat{\boldsymbol{e}}_y$ we get
+
+$$r \cos \phi = r_0 \cos \phi_0 + \rho \cos(\theta + \phi_0) \tag{44}$$
+
+and
+
+$$r \sin \phi = r_0 \sin \phi_0 + \rho \sin(\theta + \phi_0) \tag{45}$$
+
+In reality equations (44) and (45) represent the parametric equations of the circle $\Omega$
+
+$$\begin{cases} x = r_0 \cos \phi_0 + \rho \cos(\theta + \phi_0) \\ y = r_0 \sin \phi_0 + \rho \sin(\theta + \phi_0) \end{cases} \tag{46}$$
+
+Then, by squaring and adding equations (44) and (45) we get
+
+$$r^2 = r_0^2 + \rho^2 + 2 r_0 \rho[\cos(\theta + \phi_0)\cos\phi_0 + \sin(\theta + \phi_0)\sin\phi_0] \tag{47}$$
+
+But
+
+$$\cos(m - n) = \cos m \cos n + \sin m \sin n \tag{48}$$
+
+Therefore
+
+$$r^2 = \rho^2 + 2\rho r_0 \cos \theta + r_0^2 \tag{49}$$
+
+And the theorem is proved.
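Theorem (2) is straightforward to verify numerically by sweeping the parameter $\theta$ around an arbitrary off-center circle:

```python
import math

rho, r0, phi0 = 2.0, 0.8, 0.3        # arbitrary radius, center distance, azimuth

max_err = 0.0
for k in range(24):                   # sweep the parameter theta around the circle
    theta = 2 * math.pi * k / 24
    # Parametric equations (46) of the off-center circle:
    x = r0 * math.cos(phi0) + rho * math.cos(theta + phi0)
    y = r0 * math.sin(phi0) + rho * math.sin(theta + phi0)
    # Equation (49) should reproduce x^2 + y^2 exactly:
    r2_formula = rho ** 2 + 2 * rho * r0 * math.cos(theta) + r0 ** 2
    max_err = max(max_err, abs(x * x + y * y - r2_formula))
```
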
+
+It is obvious that equation (29) is the analogue of equation (49), hence on the basis of theorem (2), the hodograph is a circle of parametric equations
+
+$$\begin{cases} V_x = -a\dot{\theta}_b \sin\theta \\ V_y = (c + a \cos\theta) \dot{\theta}_b \end{cases} \tag{50}$$
+
+So, in accordance with equation (29), the hodograph of the motion (figure 3) is an off-center circle of radius $V_c = a\dot{\theta}_b$ and center ($V_0 = c\dot{\theta}_b$, $\phi_0 = \frac{\pi}{2}$), traced in the velocity space by making use of its parametric equations (50).
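A short sketch (arbitrary $a$, $c$, $\dot{\theta}_b$) confirms that the points generated by the parametric equations (50) lie on a circle of radius $V_c$ centered at $(0, V_0)$ and reproduce the speed law (29):

```python
import math

a, c, theta_dot_b = 5.0, 3.0, 0.7     # arbitrary orbit parameters
V_c, V_0 = a * theta_dot_b, c * theta_dot_b

max_err_circle = 0.0
max_err_speed = 0.0
for k in range(24):
    theta = 2 * math.pi * k / 24
    # Parametric equations (50) of the hodograph:
    V_x = -V_c * math.sin(theta)
    V_y = V_0 + V_c * math.cos(theta)
    # The point sits on a circle of radius V_c centered at (0, V_0):
    max_err_circle = max(max_err_circle,
                         abs(math.hypot(V_x, V_y - V_0) - V_c))
    # and its squared distance from the origin reproduces equation (29):
    V2 = V_c ** 2 + 2 * V_c * V_0 * math.cos(theta) + V_0 ** 2
    max_err_speed = max(max_err_speed, abs(V_x ** 2 + V_y ** 2 - V2))
```
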
+
+Figure 3: Hodograph of an elliptic trajectory
+---PAGE_BREAK---
+
+The hodograph of a body in uniform rectilinear motion is a fixed point in the velocity space. Correlatively, the existence of the hodograph curve is a manifestation of departure from uniform rectilinear motion. If two or more motions present the same hodograph, then these motions undergo the same deviation from uniform rectilinear motion. It appears from equation (33) that the motion of planets in elliptic orbits is a combination of a uniform rectilinear part, represented by the component $c\dot{\theta}_b \hat{e}_y$, and a uniform circular part, represented by the component $a\dot{\theta}_b \hat{e}_{\theta}$. It is evident that the deviation from uniform rectilinear motion that the planet undergoes in elliptic motion is restricted to the uniform circular part, and that this deviation is exactly the same as the one it would undergo had its motion been uniform circular at an earlier stage. Therefrom, one can speak about the uniqueness of the hodograph of planets vis à vis the changes in the eccentricity of their elliptic trajectories influenced by the decrease in the mass of the star about which they revolve. In a paper [6] published in 2019, I proved that a uniform circular motion of a spaceship around a planet consists of an infinite number of successive infinitesimal free falls, a fact that explains the absence of the sensation of gravity aboard a spaceship revolving around a planet in a uniform circular motion. The same reasoning applies here, so one can attribute the absence of the sensation of the gravity of stars on planets to the sameness of the deviation from uniform rectilinear motion for elliptic and circular trajectories.
+
+To recapitulate, the hodograph of a planet orbiting a star is invariant under mass dissipations occurring in the star.
+
+**CONCLUSION**
+
+As a matter of fact, all credit goes to Newton, who was the first to allude to relation (37) in the *Principia Mathematica* by saying literally [5]:
+
+"If a body *P*, by means of a centripetal force tending to any given point *R*, move in the perimeter of any given conic section whose center is *C*; and the law of centripetal force is required: draw *CG* parallel to the radius *RP*, and meeting the tangent *PG* of the orbit in *G*; and the force required (by Cor.1, and Schol. X, and Cor.3, Prop.VII) will be as $\frac{CG^3}{RP^2}$."
+
+Equation (37), giving the expression of the central force acting on a body in elliptic motion around a center of force, is in complete agreement with what Newton predicted. Nevertheless, it constitutes a step forward by realizing that the point *G* to which Newton referred, and which is called *Z* in figure 1, belongs to the principal circle; as such we recognize that his *CG* is nothing but the length of the semi major axis *OZ* = *a*. Furthermore, it provides an explicit formula for the value of the centripetal force in terms of the geometric parameters of the trajectory and the mass of the planet, i.e. an equality and not a proportionality. In other words, the missing constant in Newton's prediction turned out to be $m(1 - \epsilon^2)\dot{\theta}_b^2$.
+
+[1] For a more detailed history on the subject of Kepler orbits, see "A new look at the Feynman 'hodograph' approach to the Kepler first law", arXiv:1605.01204v1 [math-ph], 4 May 2016.
+---PAGE_BREAK---
+
+[2] A. Thuizat, G. Girault, E. Aspeele, M. Voilquin, *Mathématiques Terminales - Géométrie* (Collection Durande, Paris 13), p. 200.
+
+[3] Thomas, *Calculus* (Addison-Wesley, 2001, tenth edition), p. 237.
+
+[4] Herbert Goldstein, *Classical Mechanics* (Addison-Wesley, 1980, second edition), p. 72.
+
+[5] Isaac Newton, *Newton's Principia: The Mathematical Principles of Natural Philosophy* (First American Edition, New York), p. 125.
+
+[6] Adel Alameh, "Uniform circular motion of a spaceship and its relation to free fall", The Physics Teacher, vol. 57, 478 (2019). https://doi.org/10.1119/1.5126829
\ No newline at end of file
diff --git a/samples/texts_merged/5261757.md b/samples/texts_merged/5261757.md
new file mode 100644
index 0000000000000000000000000000000000000000..9a3b007322fc2e4095eb0c43f5b7fea26df3e7e9
--- /dev/null
+++ b/samples/texts_merged/5261757.md
@@ -0,0 +1,304 @@
+
+---PAGE_BREAK---
+
+# The CRLB for Bilinear Systems and Its Biomedical Applications
+
+Zhiping Lin, Qiyue Zou
+Centre for Signal Processing
+School of Electrical and Electronic Engineering
+Nanyang Technological University, Singapore 639798
+Email: ezplin@ntu.edu.sg
+
+Raimund J. Ober
+Center for Systems, Communications and Signal Processing
+Eric Jonsson School of EECS
+University of Texas at Dallas
+Richardson, TX 75083-0688, USA
+
+**Abstract**—The Cramer Rao lower bound (CRLB) provides a lower bound on the covariance matrix of any unbiased estimator of unknown parameters. It is shown in this paper that the CRLB for a data set generated by a bilinear system with additive Gaussian measurement noise can be expressed explicitly in terms of the outputs of its derivative system which is also bilinear. For bilinear systems with piecewise constant inputs the CRLB for uniformly sampled data can be efficiently computed through solving certain Lyapunov equations. The theoretical results are illustrated through an example arising from surface plasmon resonance experiments for the determination of the kinetic parameters of protein-protein interactions.
+
+## I. INTRODUCTION
+
+A fundamental problem in biomedical applications is to estimate unknown system parameters from output observations [1]. Although there are many methods for parameter estimation in linear systems, it is well known that linear models are not appropriate for some biomedical applications and hence bilinear and/or nonlinear system models have to be used [1], [2]. For parameter estimation in bilinear systems, which is discussed in this paper, an important question is the accuracy of the estimation that can be achieved based on the observed noisy outputs. The Cramer Rao lower bound (CRLB) gives a lower bound on the covariance matrix of any unbiased estimator of unknown parameters [3]. It is commonly used to evaluate the performance of an estimation algorithm and can provide guidance to improve the experimental design.
+
+The CRLB or Fisher information matrix for one-dimensional (1D) dynamic non-stationary systems with deterministic input and Gaussian measurement noise has been investigated in [4]. The calculation of the Fisher information matrix for the 1D data is performed in terms of the derivative system with respect to the system parameters and by using the solution to a Lyapunov equation. The above approach has been extended to multidimensional (nD) data sets generated by nD linear separable-denominator systems and applied to the analysis of nD nuclear magnetic resonance spectroscopy data sets [5].
+
+Here we generalize the results in [4] to bilinear systems. It is shown that the Fisher information matrix for the output data samples of a multiple-input-multiple-output (MIMO) bilinear system can be expressed in terms of the outputs of its derivative system which is also an MIMO bilinear system.
+
+The notion of derivative system is very useful in that it gives an explicit expression for the Fisher information matrix and the CRLB. Furthermore, for uniformly sampled data sets, the CRLB can be efficiently computed using algorithms based on solutions to certain Lyapunov equations. The results are then applied to estimation of kinetic constants of protein-protein interactions arising from surface plasmon resonance experiments [2], [6].
+
+## II. CRAMER RAO LOWER BOUND
+
+Consider the state-space model of a general MIMO bilinear system given by (see [7])
+
+$$ \dot{x}_{\theta}(t) = Ax_{\theta}(t) + \sum_{q=1}^{m} F_q u_q(t)x_{\theta}(t) + Bu(t), \quad x_{\theta}(t^{[0]}) = x_0, \quad (1) $$
+
+$$ y_{\theta}(t) = Cx_{\theta}(t), \quad t \ge t^{[0]}, \quad (2) $$
+
+where $x_{\theta}(t) \in \mathbb{R}^{n \times 1}$ is the state vector, $u(t) \in \mathbb{R}^{m \times 1}$ is the input vector with components $u_1(t), \dots, u_m(t)$, $y_{\theta}(t) \in \mathbb{R}^{p \times 1}$ is the system output vector, $A \in \mathbb{R}^{n \times n}$, $B \in \mathbb{R}^{n \times m}$, $C \in \mathbb{R}^{p \times n}$, $F_q \in \mathbb{R}^{n \times n}$, $q = 1, \dots, m$, are the system matrices depending on the unknown parameter vector $\theta := [\theta_1 \dots \theta_K]^T$, and $x_0$ is the initial state vector, which can also depend on the parameter vector $\theta$. The $i^{\text{th}}$ element of $y_{\theta}(t)$ is represented by $y_{\theta,i}(t)$, and the $i^{\text{th}}$ row of $C$ is denoted by $C_i$, $i = 1, \dots, p$.
+
+In this paper, we consider only piecewise constant inputs $u \in \mathbb{U}$, represented by
+
+$$ u(t) = \sum_{l=0}^{L-1} u^{[l]} \beta_l(t), \quad t^{[0]} \le t < t^{[L]}, \quad (3) $$
+
+where $u^{[l]} := [u_1^{[l]} \dots u_m^{[l]}]^T$ are constant vectors, and $\beta_l(t)$ ($l=0, \dots, L-1$) are the indicator functions defined by
+
+$$ \beta_l(t) = \begin{cases} 1, & \text{for } t \in [t^{[l]}, t^{[l+1]}), \\ 0, & \text{for } t \notin [t^{[l]}, t^{[l+1]}). \end{cases} \quad (4) $$
+
+Here, $t^{[0]}, \dots, t^{[L]}$ denote the starting and ending points of the time intervals with $t^{[0]} < \dots < t^{[L]}$, where $t^{[L]}$ can be either finite or infinite. Note that $u^{[l]}$ could be a zero vector, and that for a piecewise constant input $u \in \mathbb{U}$ as defined in (3)
+---PAGE_BREAK---
+
+we are only interested in the output $y_{\theta}(t)$ for $t^{[0]} \le t < t^{[L]}$. We denote $L_1 := L - 1$ throughout. For more general inputs and proofs of the results presented here, see [8].
+
+**Lemma 2.1:** Consider the bilinear system $\Phi = \{A, B, C, F_1, \dots, F_m\}$. Let $F^{[l]} := \sum_{q=1}^m F_q u_q^{[l]}$ and assume that $A + F^{[l]}$ is invertible, $l = 0, \dots, L_1$. Then the output of the system is given by
+
+$$y_{\theta}(t) = \sum_{l=0}^{L_1} \left[ C Q_l(t) (W^{[l]} + x_{\theta}(t^{[l]})) - C W^{[l]} \right] \beta_l(t), \quad (5)$$
+
+where $Q_l(t) := e^{(A+F^{[l]})(t-t^{[l]})}$ and $W^{[l]} := (A+F^{[l]})^{-1} Bu^{[l]}$, $l=0,\dots,L_1$, and $x_{\theta}(t^{[l]})$ is given by
+
+$$x_{\theta}(t^{[l]}) = \begin{cases} x_0, & l = 0, \\ Q_{l-1}(t^{[l]}) (W^{[l-1]} + x_{\theta}(t^{[l-1]})) - W^{[l-1]}, & l = 1, \dots, L_1. \end{cases}$$
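Lemma 2.1 can be exercised numerically. The sketch below builds a toy single-input bilinear system (all matrix values are illustrative, not taken from the paper), evaluates the closed-form output over two constant-input intervals, and checks it against direct ODE integration (NumPy and SciPy are assumed to be available):

```python
import numpy as np
from scipy.linalg import expm
from scipy.integrate import solve_ivp

# Toy single-input, single-output bilinear system (n = 2, m = 1, p = 1).
A = np.array([[-1.0, 0.3], [0.0, -0.5]])
B = np.array([[1.0], [0.5]])
C = np.array([[1.0, 1.0]])
F1 = np.array([[0.0, 0.1], [0.2, 0.0]])
x0 = np.array([0.2, -0.1])

t_knots = [0.0, 1.0, 2.5]          # t^[0], t^[1], t^[2]
u_vals = [0.8, -0.4]               # constant input value on each interval

def y_lemma(t):
    """Output via Lemma 2.1: x(t) = Q_l(t)(W^[l] + x(t^[l])) - W^[l]."""
    x = x0
    for l in range(len(u_vals)):
        M = A + F1 * u_vals[l]                          # A + F^[l]
        W = np.linalg.solve(M, B[:, 0] * u_vals[l])     # W^[l] = (A+F^[l])^{-1} B u^[l]
        if t < t_knots[l + 1]:
            return (C @ (expm(M * (t - t_knots[l])) @ (W + x) - W)).item()
        x = expm(M * (t_knots[l + 1] - t_knots[l])) @ (W + x) - W
    raise ValueError("t outside the input horizon")

# Reference: integrate xdot = (A + F1*u) x + B u interval by interval.
x = x0
for l, (t0, t1) in enumerate([(0.0, 1.0), (1.0, 2.0)]):
    M = A + F1 * u_vals[l]
    sol = solve_ivp(lambda t, x: M @ x + B[:, 0] * u_vals[l],
                    (t0, t1), x, rtol=1e-10, atol=1e-12)
    x = sol.y[:, -1]
y_ode = (C @ x).item()
y_cf = y_lemma(2.0)
```
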
+
+The following assumptions are made throughout the paper. Assume that we have acquired noise corrupted samples $s_{\theta,i}(j)$, $i=1,\dots,p$, $j=0,\dots,J-1$, of the measured output of the bilinear system, i.e.,
+
+$$s_{\theta,i}(j) = y_{\theta,i}(t_j) + w_i(j), \quad (6)$$
+
+where $y_{\theta,i}(t_j)$ is the $i$th noise free output element at the sampling point $t_j$ and $w_i(j)$ is the measurement noise component, $i=1,\dots,p$, $j=0,\dots,J-1$, $t^{[0]} \le t_0 < t_1 < \dots < t_{J-1}$. The measurement noise components are assumed to have independent Gaussian distributions with zero mean and variance $\sigma_{i,j}^2$, $i=1,\dots,p$, $j=0,\dots,J-1$. Hence the probability density function $p(S;\theta)$ for the acquired data set $S := \{s_{\theta,i}(j), i=1,\dots,p, j=0,\dots,J-1\}$ is given by
+
+$$p(S; \theta) = \prod_{i=1}^{p} \prod_{j=0}^{J-1} \frac{1}{\sqrt{2\pi\sigma_{i,j}^2}} \exp \left( -\frac{1}{2\sigma_{i,j}^2} [s_{\theta,i}(j) - y_{\theta,i}(t_j)]^2 \right).$$
+
+Assume $p(S;\theta)$ satisfies the standard regularity conditions (see e.g. [3]). The Fisher information matrix $I(\theta)$ is then defined as
+
+$$[I(\theta)]_{sr} = E \left\{ \left( \frac{\partial \ln p(S; \theta)}{\partial \theta_s} \right) \left( \frac{\partial \ln p(S; \theta)}{\partial \theta_r} \right) \right\}, \quad 1 \le s, r \le K,$$
+
+By the CRLB, any unbiased estimator $\hat{\theta}$ of $\theta$ has a covariance such that
+
+$$\operatorname{var}(\hat{\theta}) \ge I^{-1}(\theta),$$
+
+where $\operatorname{var}(\hat{\theta}) \ge I^{-1}(\theta)$ is interpreted as meaning that the matrix $(\operatorname{var}(\hat{\theta}) - I^{-1}(\theta))$ is positive semidefinite.
+
+In the following theorem we first show that the derivative system (with respect to the given parameter vector $\theta$) of a general MIMO bilinear system is also an MIMO bilinear system. The Fisher information matrix for the sampled output data of the bilinear system for Gaussian measurement noise is then expressed using the output samples of its derivative system.
+
+**Theorem 2.1:** Consider the bilinear system represented by $\Phi = \{A, B, C, F_1, \dots, F_m\}$. Assume that the partial
+
+derivatives of $A, B, C, F_1, \dots, F_m$ and $x_0$ with respect to the elements of $\theta$ exist for all $\theta \in \Theta$, and that the input $u(t)$ is independent of the parameter vector $\theta$. Let
+
+$$\begin{align*}
+\mathcal{Y}_{\theta}(t) &:= \begin{bmatrix} \mathcal{Y}_{\theta,1}(t) \\ \vdots \\ \mathcal{Y}_{\theta,p}(t) \end{bmatrix}, && \text{with} \\
+\mathcal{Y}_{\theta,i}(t) &:= \begin{bmatrix} \frac{\partial y_{\theta,i}(t)}{\partial \theta_1} \\ \vdots \\ \frac{\partial y_{\theta,i}(t)}{\partial \theta_K} \end{bmatrix}, && (i = 1, \dots, p), \quad t \ge t^{[0]}.
+\end{align*}$$
+
+Then, 1.) $\mathcal{Y}_{\theta}(t), t \ge t^{[0]}$, is the output of the derivative system $\Phi' := \{\mathcal{A}, \mathcal{B}, \mathcal{C}, \mathcal{F}_1, \dots, \mathcal{F}_m\}$, which is an MIMO time-invariant bilinear system with state vector $\mathcal{X}_{\theta}(t), t \ge t^{[0]}$, and has the same input $u$ as $\Phi$. The state vector $\mathcal{X}_{\theta}$, initial state $\mathcal{X}_0$, and system matrices $\mathcal{A}, \mathcal{B}, \mathcal{C}, \mathcal{F}_1, \dots, \mathcal{F}_m$ are given as follows.
+
+$$\begin{align*}
+\mathcal{X}_{\theta}(t) &:= \begin{bmatrix} \partial_1 x_{\theta}(t) \\ \vdots \\ \partial_K x_{\theta}(t) \end{bmatrix}, &
+\mathcal{X}_0 &:= \begin{bmatrix} \partial_1 x_{\theta}(t^{[0]}) \\ \vdots \\ \partial_K x_{\theta}(t^{[0]}) \end{bmatrix}, \\
+\mathcal{A} &:= \operatorname{diag}\{\partial_1 A, \dots, \partial_K A\}, &
+\mathcal{B} &:= \begin{bmatrix} \partial_1 B \\ \vdots \\ \partial_K B \end{bmatrix}, &
+\mathcal{C} &:= \begin{bmatrix} \mathcal{C}_1 \\ \vdots \\ \mathcal{C}_p \end{bmatrix}
+\end{align*}$$
+
+$$\mathcal{C}_i := \operatorname{diag}\{\partial_1 C_i, \dots, \partial_K C_i\}, \quad i = 1, \dots, p,$$
+
+$$\mathcal{F}_q := \operatorname{diag}\{\partial_1 F_q, \dots, \partial_K F_q\}, \quad q = 1, \dots, m,$$
+
+where for $s = 1, \dots, K$
+
+$$\begin{align*}
+&\partial_s x_\theta(t) := \begin{bmatrix} x_\theta(t) \\ \frac{\partial x_\theta(t)}{\partial \theta_s} \end{bmatrix}, && \partial_s x_\theta(t^{[0]}) := \begin{bmatrix} x_0 \\ \frac{\partial x_0}{\partial \theta_s} \end{bmatrix}, \\
+&\partial_s A := \begin{bmatrix} A & 0 \\ \frac{\partial A}{\partial \theta_s} & A \end{bmatrix}, && \partial_s B := \begin{bmatrix} B \\ \frac{\partial B}{\partial \theta_s} \end{bmatrix}, && \partial_s C_i := \begin{bmatrix} \frac{\partial C_i}{\partial \theta_s} & C_i \end{bmatrix},
+\end{align*}$$
+
+$$\partial_s F_q := \begin{bmatrix} F_q & 0 \\ \frac{\partial F_q}{\partial \theta_s} & F_q \end{bmatrix}, \quad q = 1, \dots, m;$$
+
+2.) The Fisher information matrix is given by
+
+$$I(\theta) = \sum_{i=1}^{p} \sum_{j=0}^{J-1} \frac{1}{\sigma_{i,j}^2} P_i \mathcal{Y}_{\theta}(t_j) \mathcal{Y}_{\theta}^T(t_j) P_i^T. \quad (7)$$
+
+Here $P_i \in \mathbb{R}^{K\times pK}$, $i = 1, \dots, p$, is defined as
+
+$$P_i = [\underbrace{\mathbf{0} \cdots \mathbf{0}}_{i-1}, \; I_K, \; \underbrace{\mathbf{0} \cdots \mathbf{0}}_{p-i}], \quad (8)$$
+
+where $\mathbf{0}$ denotes the $K\times K$ zero matrix and $I_K$ the $K\times K$ identity matrix.
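Although not spelled out in the paper, the block matrices $\partial_s A$ admit a classical interpretation that explains why the derivative system reproduces the output sensitivities: by a Van Loan block-matrix identity, the lower-left block of $e^{\partial_s A\, t}$ equals $\partial e^{A t}/\partial \theta_s$. A finite-difference sketch with an illustrative $A(\theta)$:

```python
import numpy as np
from scipy.linalg import expm

theta, t = 0.4, 0.7                          # illustrative parameter value and time

def A_of(th):
    # Illustrative parameter-dependent system matrix A(theta).
    return np.array([[-1.0, th], [0.5 * th, -2.0]])

A = A_of(theta)
dA = np.array([[0.0, 1.0], [0.5, 0.0]])      # elementwise dA/dtheta

# Lower-left block of expm([[A, 0], [dA, A]] * t) is d/dtheta expm(A(theta) t).
big = np.block([[A, np.zeros((2, 2))], [dA, A]])
d_expm = expm(big * t)[2:, :2]

# Finite-difference reference for the same derivative:
h = 1e-6
fd = (expm(A_of(theta + h) * t) - expm(A_of(theta - h) * t)) / (2 * h)
err = np.max(np.abs(d_expm - fd))
```
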
+
+For the data set generated by a bilinear system with a piecewise constant input $u \in \mathbb{U}$, the following corollary derives an explicit expression for its associated Fisher information matrix.
+
+**Corollary 2.1:** Assume that the bilinear system model and assumptions are the same as in Theorem 2.1, and that $A+F^{[l]}$ is invertible, where $F^{[l]} := \sum_{q=1}^m F_q u_q^{[l]}$, $l = 0, \dots, L_1$. Let
+---PAGE_BREAK---
+
+$\mathcal{F}^{[l]} := \text{diag}\{\partial_1 F^{[l]}, \dots, \partial_K F^{[l]}\}, l = 0, \dots, L_1$, where for
+$s = 1, \dots, K$
+
+$$
+\partial_s F^{[l]} := \left[ \begin{array}{cc} F^{[l]} & 0 \\ \frac{\partial F^{[l]}}{\partial \theta_s} & F^{[l]} \end{array} \right] = \sum_{q=1}^{m} \left[ \begin{array}{cc} F_q^{} & 0 \\ \frac{\partial F_q^{}}{\partial \theta_s} & F_q^{} \end{array} \right] u_q^{[l]}.
+$$
+
+Then, the output of the derivative system $\Phi'$ is given by
+
+$$
+\mathcal{Y}_{\theta}(t) = \sum_{l=0}^{L_1} \left[ \mathcal{C} Q_l(t) (\mathcal{W}^{[l]} + \mathcal{X}_{\theta}(t^{[l]})) - \mathcal{C} \mathcal{W}^{[l]} \right] \beta_l(t), \quad (9)
+$$
+
+where $Q_l(t) := e^{(\mathcal{A}+\mathcal{F}^{[l]})(t-t^{[l]})}$, $\mathcal{W}^{[l]} := (\mathcal{A}+\mathcal{F}^{[l]})^{-1} \mathcal{B} u^{[l]}$,
+$l=0,\dots,L_1$, and
+
+$$
+\mathcal{X}_\theta(t^{[l]}) =
+\begin{cases}
+\mathcal{X}_0, & l=0, \\
+Q_{l-1}(t^{[l]}) (\mathcal{W}^{[l-1]} + \mathcal{X}_\theta(t^{[l-1]})) - \mathcal{W}^{[l-1]}, & l=1, \dots, L_1.
+\end{cases}
+\quad (10)
+$$
+
+When the output of a bilinear system is sampled uniformly,
+the associated Fisher information matrix and the CRLB can
+be computed efficiently through solving certain Lyapunov
+equations, as shown in the following theorem.
+
+**Theorem 2.2:** Assume that the data model is the same as in Corollary 2.1, and that the output signal is uniformly sampled with sampling period $T_l$ in the $l^{th}$ interval of the piecewise constant input, i.e., at $t_{j^{[l]}} = t^{[l]} + t^{[l,0]} + j^{[l]} T_l$, $j^{[l]} = 0, \dots, J^{[l]} - 1$, $t^{[l]} \le t_{j^{[l]}} < t^{[l+1]}$, where $t_{j^{[l]}}$ denotes the $j^{[l]}$-th sampling instant in the $l^{th}$ interval, $t^{[l,0]}$ is the starting time relative to $t^{[l]}$ for sampling in the $l^{th}$ interval, and $J^{[l]}$ is the total number of samples acquired in the $l^{th}$ interval, $l = 0, \dots, L_1$. Assume further that the independent Gaussian measurement noise has variance $\sigma_{i,j^{[l]}}^2 = \sigma^2$, $i = 1, \dots, p$, $j^{[l]} = 0, \dots, J^{[l]} - 1$, for $l = 0, \dots, L_1$. Then the Fisher information matrix for the given data set is
+
+$$
+\begin{align*}
+I(\theta) = \frac{1}{\sigma^2} \sum_{i=1}^{p} P_i \mathcal{C} \Bigg\{ \sum_{l=0}^{L_1} \Big[ & \left( \mathcal{A}_d^{[l]} \right)^{t^{[l,0]}/T_l} \mathcal{P}_1^{[l]} \Big( \left( \mathcal{A}_d^{[l]} \right)^{t^{[l,0]}/T_l} \Big)^T \\
+& - \left( \mathcal{A}_d^{[l]} \right)^{t^{[l,0]}/T_l} \mathcal{P}_2^{[l]} - \left( \mathcal{P}_2^{[l]} \right)^T \Big( \left( \mathcal{A}_d^{[l]} \right)^{t^{[l,0]}/T_l} \Big)^T \\
+& + J^{[l]} \mathcal{W}^{[l]} \left( \mathcal{W}^{[l]} \right)^T \Big] \Bigg\} \mathcal{C}^T P_i^T,
+\end{align*}
+$$
+
+where $\mathcal{A}_d^{[l]} := e^{(\mathcal{A}+\mathcal{F}^{[l]}) T_l}$ is the transition matrix of the derivative system over one sampling period, and $\mathcal{P}_1^{[l]}$ and $\mathcal{P}_2^{[l]}$ are obtained as follows.
+
+$\mathcal{P}_1^{[l]}$, $l = 0, \dots, L_1$, is the unique solution to the following
+Lyapunov equation
+
+$$
+\begin{align*}
+\mathcal{A}_d^{[l]} \mathcal{P}_1^{[l]} (\mathcal{A}_d^{[l]})^T - \mathcal{P}_1^{[l]} = & - (\mathcal{W}^{[l]} + \mathcal{X}_\theta(t^{[l]})) (\mathcal{W}^{[l]} + \mathcal{X}_\theta(t^{[l]}))^T \\
+& + (\mathcal{A}_d^{[l]})^{J^{[l]}} (\mathcal{W}^{[l]} + \mathcal{X}_\theta(t^{[l]})) (\mathcal{W}^{[l]} + \mathcal{X}_\theta(t^{[l]}))^T \big( (\mathcal{A}_d^{[l]})^{J^{[l]}} \big)^T.
+\end{align*}
+$$
+
+$\mathcal{P}_2^{[l]}, l = 0, \dots, L_1,$ is given by
+
+$$
+\begin{align*}
+\mathcal{P}_2^{[l]} &= \left(I - (\mathcal{A}_d^{[l]})^{J^{[l]}}\right) (I - \mathcal{A}_d^{[l]})^{-1} \\
+&\quad \cdot (\mathcal{W}^{[l]} + \mathcal{X}_\theta(t^{[l]})) (\mathcal{W}^{[l]})^T.
+\end{align*}
+$$
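The discrete Lyapunov route can be checked numerically. The sketch below (assuming NumPy and SciPy; the 2×2 values are hypothetical toy data, with `Ad` standing in for $\mathcal{A}_d^{[l]}$ and `M` for $(\mathcal{W}^{[l]} + \mathcal{X}_\theta(t^{[l]}))(\mathcal{W}^{[l]} + \mathcal{X}_\theta(t^{[l]}))^T$) verifies that the solution of the Lyapunov equation equals the finite sum $\sum_{j=0}^{J-1} \mathcal{A}_d^j M (\mathcal{A}_d^j)^T$ appearing in the Fisher information matrix:

```python
import numpy as np
from scipy.linalg import solve_discrete_lyapunov

# Toy stable 2x2 system; hypothetical values, not taken from the paper.
rng = np.random.default_rng(0)
Ad = np.array([[0.9, 0.1], [0.0, 0.8]])
w = rng.standard_normal((2, 1))
M = w @ w.T            # plays the role of (W + X)(W + X)^T
J = 50                 # number of samples in the interval

# A_d P A_d^T - P = -M + A_d^J M (A_d^J)^T  is equivalent to
# P = A_d P A_d^T + Q with Q = M - A_d^J M (A_d^J)^T,
# which solve_discrete_lyapunov handles directly.
AdJ = np.linalg.matrix_power(Ad, J)
Q = M - AdJ @ M @ AdJ.T
P1 = solve_discrete_lyapunov(Ad, Q)

# P1 equals the partial Gramian sum_{j=0}^{J-1} A_d^j M (A_d^j)^T.
S = sum(np.linalg.matrix_power(Ad, j) @ M @ np.linalg.matrix_power(Ad, j).T
        for j in range(J))
assert np.allclose(P1, S)
```

Solving one Lyapunov equation per input interval replaces the $J^{[l]}$-term sum, which is the efficiency gain the theorem refers to.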
+
+In the next section we illustrate the theoretical results pre-
+sented in this section using an example from surface plasmon
+resonance experiments for the determination of the kinetic
+parameters of protein-protein interactions.
+
+### III. BIOMEDICAL APPLICATIONS
+
+Surface plasmon resonance (SPR) (see, e.g. [2], [6]) occurs under certain conditions from a conducting film at the interface between two media of different refractive index. Biosensors such as instruments by the BIAcore company offer a technique for monitoring protein-protein interactions in real time using an optical detection principle based on SPR. In the experiments one of the proteins (ligand) is coupled to a sensor chip and the second protein (analyte) is flowed across the surface coupled ligand using a micro-fluidic device. SPR response reflects a change in mass concentration at the detector surface as molecules bind or dissociate from the sensor chip. It can be used to estimate the kinetic constants of protein-protein interactions.
+
+In this section we use the theoretical results presented in
+the previous section to analyze the SPR experiments for one-
+to-one protein-protein interactions that can be modeled by the
+differential equation
+
+$$
+\dot{R}(t) = k_a (R_{max} - R(t)) C_0(t) - k_d R(t), t \geq t^{[0]}, R(t^{[0]}) = 0, \quad (11)
+$$
+
+where $R(t)$ is the measured SPR response in resonance units (RU), $k_a$ and $k_d$ are the kinetic association and dissociation constants of the interaction respectively, $R_{max}$ is the maximum analyte binding capacity in RU, $C_0(t)$ is the concentration value of the analyte in the flow cell which can be controlled in the experiments, and the initial SPR response is assumed to be zero.
+
+Let $x_\theta(t) := R(t)$, $u(t) := C_0(t)$, $y_\theta(t) := R(t)$, $t \ge t^{[0]}$,
+and $x_0 := R(t^{[0]}) = 0$. Then (11) becomes the following bilinear
+system $\Phi = \{A, B, C, F_1\}$
+
+$$
+\begin{equation}
+\begin{split}
+& \dot{x}_{\theta}(t) = Ax_{\theta}(t) + F_1 u(t)x_{\theta}(t) + Bu(t), && x_{\theta}(t^{[0]}) = x_0, \\
+& y_{\theta}(t) = Cx_{\theta}(t), && t \ge t^{[0]},
+\end{split}
+\tag{12}
+\end{equation}
+$$
+
+where $A = -k_d$, $B = k_a R_{max}$, $C = 1$, $F_1 = -k_a$. The unknown parameter vector to be estimated in the experiments is $\theta = [ k_a \ k_d \ R_{max} ]^T$.
+
+A practical SPR experiment may consist of an association phase ($t^{[0]} \le t < t^{[1]}$) and a dissociation phase ($t^{[1]} \le t < t^{[2]}$), or one of these two phases. During the association phase analyte is flowed across the ligand on the sensor chip with constant concentration $C_0$ up to time $t^{[1]}$, i.e., $C_0(t) = C_0$, $t^{[0]} \le t < t^{[1]}$. The dissociation phase immediately follows the association phase and is characterized by analyte
+---PAGE_BREAK---
+
+free buffer being flowed across the sensor chip, i.e., $C_0(t) = 0$, $t^{[1]} \le t < t^{[2]}$. Hence, a two-phase SPR experiment can be modeled by the bilinear system $\Phi = \{A, B, C, F_1\}$ with a two-phase piecewise constant input
+
+$$u(t) = u^{[0]}\beta_0(t) + u^{[1]}\beta_1(t), \quad t^{[0]} \le t < t^{[2]},$$
+
+where $u^{[0]} = C_0$, $u^{[1]} = 0$ and $\beta_0(t)$, $\beta_1(t)$ are the indicators (see (4)). Note that in the two-phase SPR experiment the output samples are obtained from $y_\theta(t)$ for $t^{[0]} \le t < t^{[2]}$.
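Since each phase has a constant input, (11) can be integrated in closed form, which is convenient for simulating data. A minimal sketch (assuming NumPy; the parameter values are the ones quoted from [6] later in this section, while the phase length `t1` and concentration `C0` are illustrative choices, not prescribed by the paper):

```python
import numpy as np

# Closed-form response of the two-phase SPR model (11).
k_a, k_d, R_max = 1478.0, 4.5e-3, 7.75   # M^-1 s^-1, s^-1, RU (from [6])
C0 = 2.0e-5                               # analyte concentration (M), assumed
t1 = 600.0                                # end of association phase (s), assumed

def spr_response(t):
    """R(t) for the piecewise-constant input u = C0*beta_0 + 0*beta_1."""
    rate = k_a * C0 + k_d
    R_eq = k_a * C0 * R_max / rate        # steady-state response
    if t < t1:                            # association phase: C_0(t) = C0
        return R_eq * (1.0 - np.exp(-rate * t))
    R1 = R_eq * (1.0 - np.exp(-rate * t1))
    return R1 * np.exp(-k_d * (t - t1))   # dissociation phase: C_0(t) = 0

# The response rises toward R_eq, then decays once buffer flows.
assert spr_response(0.0) == 0.0
assert spr_response(599.0) > spr_response(10.0)
assert spr_response(1200.0) < spr_response(600.0)
```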
+
+The first step is the calculation of the derivative system by Theorem 2.1. We represent the derivative system of $\Phi = \{A, B, C, F_1\}$ by $\Phi' = \{\mathcal{A}, \mathcal{B}, \mathcal{C}, \mathcal{F}_1\}$, where $\mathcal{A}, \mathcal{B}, \mathcal{C}, \mathcal{F}_1$ are given as follows.
+
+$$\mathcal{A} := \operatorname{diag}\{\partial_1 A, \partial_2 A, \partial_3 A\}, \text{ where}$$
+
+$$\partial_1 A = \partial_3 A = \begin{bmatrix} -k_d & 0 \\ 0 & -k_d \end{bmatrix}, \qquad \partial_2 A = \begin{bmatrix} -k_d & 0 \\ -1 & -k_d \end{bmatrix}.$$
+
+$$\mathcal{B} := \begin{bmatrix} \partial_1 B \\ \partial_2 B \\ \partial_3 B \end{bmatrix}, \text{ where } \partial_1 B = \begin{bmatrix} k_a R_{max} \\ R_{max} \end{bmatrix}, \quad \partial_2 B = \begin{bmatrix} k_a R_{max} \\ 0 \end{bmatrix}, \quad \partial_3 B = \begin{bmatrix} k_a R_{max} \\ k_a \end{bmatrix}.$$
+
+$$\mathcal{C} := \operatorname{diag}\{\partial_1 C_1, \partial_2 C_1, \partial_3 C_1\}, \text{ where}$$
+
+$$\partial_1 C_1 = \partial_2 C_1 = \partial_3 C_1 = \begin{bmatrix} 0 & 1 \end{bmatrix}.$$
+
+$$\mathcal{F}_1 := \operatorname{diag}\{\partial_1 F_1, \partial_2 F_1, \partial_3 F_1\}, \text{ where}$$
+
+$$\partial_1 F_1 = \begin{bmatrix} -k_a & 0 \\ -1 & -k_a \end{bmatrix}, \qquad \partial_2 F_1 = \partial_3 F_1 = \begin{bmatrix} -k_a & 0 \\ 0 & -k_a \end{bmatrix}.$$
+
+Since the initial state $x_0$ of $\Phi$ is equal to zero, the initial state vector $\mathcal{X}_0$ of $\Phi'$ is also equal to zero.
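The block structure above is mechanical to assemble in code. A sketch (assuming NumPy/SciPy; the calligraphic derivative-system matrices are built from the blocks just listed, with the parameter values quoted from [6]):

```python
import numpy as np
from scipy.linalg import block_diag

# Assemble the derivative system of the SPR bilinear model (K = 3 parameters).
k_a, k_d, R_max = 1478.0, 4.5e-3, 7.75   # values quoted from [6]

dA = [np.array([[-k_d, 0.0], [0.0, -k_d]]),   # partial w.r.t. k_a
      np.array([[-k_d, 0.0], [-1.0, -k_d]]),  # partial w.r.t. k_d
      np.array([[-k_d, 0.0], [0.0, -k_d]])]   # partial w.r.t. R_max
dB = [np.array([[k_a * R_max], [R_max]]),
      np.array([[k_a * R_max], [0.0]]),
      np.array([[k_a * R_max], [k_a]])]
dC = [np.array([[0.0, 1.0]])] * 3
dF = [np.array([[-k_a, 0.0], [-1.0, -k_a]]),
      np.array([[-k_a, 0.0], [0.0, -k_a]]),
      np.array([[-k_a, 0.0], [0.0, -k_a]])]

A_cal = block_diag(*dA)          # 6 x 6 block-diagonal
B_cal = np.vstack(dB)            # 6 x 1 stacked
C_cal = block_diag(*dC)          # 3 x 6 block-diagonal
F_cal = block_diag(*dF)          # 6 x 6 block-diagonal

assert A_cal.shape == (6, 6) and B_cal.shape == (6, 1)
assert C_cal.shape == (3, 6) and F_cal.shape == (6, 6)
```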
+
+The next step is to apply Theorem 2.2 to numerically calculate the CRLB. Here we use simulated data so that we can conveniently select various experimental settings. For comparison, typical numerical values from [6] are assigned to the unknown parameters, i.e.,
+
+$$k_a = 1478 \text{ M}^{-1}\text{s}^{-1}, \quad k_d = 4.5 \times 10^{-3} \text{ s}^{-1}, \quad R_{max} = 7.75 \text{ RU}.$$
+
+The sampling intervals are chosen as $T_0 = T_1 = 1$ s, and the noise variance is assumed to be $\sigma^2 = 1$. Fig. 1 plots the CRLB in terms of the standard deviations of $k_a$, $k_d$ and $R_{max}$ as functions of $C_0$ and the number of data samples. It shows that increasing the number of samples improves the accuracy of estimation. As can be seen from the figure, when the number of samples is sufficiently large, e.g. $J^{[0]} = J^{[1]} = 1000$, the CRLB approaches the asymptotic CRLB, which is the lowest possible CRLB given fixed sampling intervals. The plot also reveals that the concentration value $C_0$ has an influence on the accuracy of parameter estimation. From Fig. 1(a), the optimal values of $C_0$ corresponding to the lowest variances of $k_a$ for different numbers of data samples lie between $1.0 \times 10^{-5}$ M and $2.0 \times 10^{-5}$ M, and for $C_0$ greater than the optimal values the variance increases slowly with $C_0$. On the other hand, the variances of $k_d$ and $R_{max}$ decrease with the increase of $C_0$, but remain almost constant when $C_0$ is greater than $2.0 \times 10^{-5}$ M. Therefore, a good choice of $C_0$ for practical two-phase SPR experiments would be around $2.0 \times 10^{-5}$ M.
+
+Fig. 1. The CRLB for simulated two-phase one-to-one SPR experimental data with $T_0 = T_1 = 1$ s and $\sigma^2 = 1$. (a), (b) and (c) plot the standard deviations of the estimates of $k_a$, $k_d$ and $R_{max}$ respectively for different concentration values and different numbers of samples acquired in the association and dissociation phases.
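As a cross-check of such CRLB computations, the bound can also be obtained directly from output sensitivities, without the Lyapunov machinery. The sketch below (assuming NumPy; the closed-form response and the choices `C0 = 2e-5` M and `t1 = 600` s are illustrative, not values fixed by the paper) forms the Fisher information matrix as $(1/\sigma^2)\, S^T S$ from finite-difference sensitivities:

```python
import numpy as np

# Finite-difference CRLB check for the SPR model with T_0 = T_1 = 1 s.
theta = np.array([1478.0, 4.5e-3, 7.75])       # [k_a, k_d, R_max], from [6]
C0, t1, sigma2 = 2.0e-5, 600.0, 1.0            # assumed settings
t = np.arange(0.0, 2 * t1, 1.0)                # uniform 1 s sampling

def response(th, t):
    """Closed-form R(t; theta) for the two-phase piecewise-constant input."""
    k_a, k_d, R_max = th
    rate = k_a * C0 + k_d
    R_eq = k_a * C0 * R_max / rate
    R1 = R_eq * (1.0 - np.exp(-rate * t1))
    assoc = R_eq * (1.0 - np.exp(-rate * t))
    dissoc = R1 * np.exp(-k_d * (t - t1))
    return np.where(t < t1, assoc, dissoc)

S = np.empty((t.size, 3))
for s in range(3):                              # central differences
    h = 1e-6 * abs(theta[s])
    tp, tm = theta.copy(), theta.copy()
    tp[s] += h
    tm[s] -= h
    S[:, s] = (response(tp, t) - response(tm, t)) / (2.0 * h)

fim = S.T @ S / sigma2                          # Fisher information matrix
crlb = np.linalg.inv(fim)                       # lower bound on covariance
std = np.sqrt(np.diag(crlb))                    # std-dev bounds for k_a, k_d, R_max
assert fim.shape == (3, 3) and np.all(std > 0)
```

The diagonal of `crlb` corresponds to the per-parameter variance bounds plotted in Fig. 1.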
+
+REFERENCES
+
+[1] G. D. Baura, *System Theory and Practical Applications of Biomedical Signals*. New York: Wiley, 2001.
+
+[2] R. Karlsson and A. Falt, “Experimental design for kinetic analysis of protein-protein interactions with surface plasmon resonance biosensors,” *J. Immunol. Methods*, vol. 200, pp. 121–133, 1997.
+
+[3] S. M. Kay, *Fundamentals of Statistical Signal Processing : Estimation Theory*. New Jersey: Prentice-Hall, 1993, vol. I.
+
+[4] R. J. Ober, “The Fisher information matrix for linear systems,” *Systems and Control Letters*, vol. 47, pp. 221–226, 2002.
+
+[5] R. J. Ober, Q. Zou, and Z. Lin, “Calculation of the Fisher information matrix for multidimensional data sets,” *IEEE Trans. Signal Processing*, vol. 51, pp. 2679–2691, Oct. 2003.
+
+[6] R. J. Ober and E. S. Ward, “The influence of signal noise on the accuracy of kinetic constants measured by surface plasmon resonance experiments,” *Anal. Biochem.*, vol. 273, pp. 49–59, 1999.
+
+[7] R. R. Mohler, *Nonlinear systems*, vol. II, Applications to Bilinear Control. New Jersey: Prentice-Hall, 1991.
+
+[8] Q. Zou, Z. Lin, and R. J. Ober, "The Cramér-Rao lower bound for bilinear systems," *IEEE Trans. Signal Processing*, (submitted) 2004.
\ No newline at end of file
diff --git a/samples/texts_merged/5341423.md b/samples/texts_merged/5341423.md
new file mode 100644
index 0000000000000000000000000000000000000000..5a4dbb827cbb454ea0aa0d168195e59012334ca8
--- /dev/null
+++ b/samples/texts_merged/5341423.md
@@ -0,0 +1,778 @@
+
+---PAGE_BREAK---
+
+COMPRESSIVE SENSING WITH HIGHLY COHERENT
+DICTIONARIES
+
+SARA COHEN
+
+**ABSTRACT.** Compressive sensing is an emerging field based on the discovery that sparse signals and images can be reconstructed from highly incomplete information. Conventional approaches follow Shannon's theorem, which states that the sampling rate must be twice the maximum frequency present in the signal. In the case that the sensing matrix is highly coherent, which happens when signals are only sparse in a truly redundant dictionary, one must consider less traditional approaches to reconstruct the signals. Sensing matrices are highly coherent in imaging problems such as radar and medical imaging. This work compares existing methods to solve the coherence problem. The first method attempts recovery via an $l_1$-analysis optimization problem, the second uses $l_1$-minimization, and the third method uses algorithms based on band exclusion and local optimization. Detailed comparisons demonstrate the superiority of the $l_1$-minimization method which minimizes the distance of the nonzero entries in the sparse vector $x$.
+
+# 1. INTRODUCTION
+
+This paper is by no means an exhaustive survey of the literature on compressive sensing. It is merely an account of others' work and thinking in this area, and it includes a large number of references to other people's work. The following introduction is credited to the authors Massimo Fornasier, Holger Rauhut, Emmanuel J. Candès, Yonina C. Eldar, Deanna Needell, Paige Randall, Albert Fannjiang, and Wenjing Liao.
+
+The Nyquist/Shannon sampling theorem states that to avoid losing information when measuring a signal, one must sample twice as fast as the bandwidth of the signal. Similarly, the fundamental theorem of linear algebra suggests that the number of collected measurements of a discrete finite-dimensional signal should be at least as large as its length or dimension in order to ensure reconstruction. However, in some applications, increasing the sampling rate is either quite expensive or not feasible in the first place.
+
+*Date:* May 23, 2012.
+---PAGE_BREAK---
+
+Compressive sensing is a rapidly growing field which presents a new method that allows signals to be captured and measured at a much lower sampling rate despite common wisdom. There are many applications of compressed sensing which range from medical imaging to radar and remote sensing to video electronics.
+
+Compressive sensing depends on the empirical observation that many types of signals can be well-approximated by a sparse expansion, i.e., one with only a small number of non-zero coefficients, in terms of a suitable basis. This is the key to many lossy compression techniques such as JPEG or MP3 [3]. Lossy compression refers to a data encoding method that compresses data by discarding, or losing, some of it. A compression is obtained by storing only the largest basis coefficients. When reconstructing the signal, the non-stored coefficients are set to zero. This is a reasonable strategy when full information of the signal is available. However, when the signal has to be obtained by a costly, lengthy, or otherwise difficult sensing procedure, this strategy seems to be a waste of resources. Time and money are spent in order to obtain full measurements, and then most of the information is thrown away during the compression stage. The goal would be to obtain the compressed version of the signal more directly by taking a smaller number of measurements of the signal in the first place. It is not clear whether or not this is possible since measuring the large coefficients requires knowing their location beforehand. Nevertheless, compressive sensing provides a way to reconstruct a compressed version of the original signal by taking only a small amount of linear and non-adaptive measurements. The particular number of measurements is comparable to the compressed size of the signal. The measurements have to be suitably designed and surprisingly, all good measurement matrices designed thus far have been random.
+
+In terms of compressive sensing, the interest is in the undersampled case, meaning there are fewer measurements than unknown signal values. There are countless numbers of applications in which this type of problem arises. For example, in radiology and biomedical imaging one is typically able to collect far fewer measurements about an image of interest than the number of unknown pixels. In wideband radio frequency signal analysis, one may be able to acquire a signal at a rate which is far lower than the Nyquist rate because of current limitations in Analog-to-Digital Converter technology. Lastly, gene expression studies also provide examples in that one would like
+---PAGE_BREAK---
+
+to infer the gene expression level of thousands of genes from a low number
+of observations.
+
+It is another important feature of compressive sensing that useful recon-
+struction can be performed by using efficient algorithms. Since the attention
+is in the immensely undersampled case, the linear system describing the mea-
+surements is underdetermined and therefore has infinitely many solutions.
+The main idea is that the sparsity helps in isolating the original vector. The
+first naïve approach to a reconstruction algorithm entails searching for the
+sparsest vector that is consistent with the linear measurements. This leads
+to the combinatorial $l_0$-problem which is unfortunately NP-hard in general.
+There are essentially two approaches for alternative algorithms. The first
+is convex relaxation leading to $l_1$-minimization while the second constructs
+greedy algorithms. A greedy algorithm is an algorithm that follows the
+problem solving heuristic of making the locally optimal choice at each stage
+with the hope of finding a global optimum. This paper will explain ba-
+sic properties of the measurement matrix which ensure sparse recovery by
+$l_1$-minimization such as the null space property (NSP) and the restricted
+isometry property (RIP).
+
+Compressed sensing suggests obtaining a signal $x \in \mathbb{R}^n$ by collecting $m$ linear measurements of the form $y_k = \langle a_k, x \rangle + z_k, 1 \le k \le m$, or in matrix notation
+
+$$ (1.1) \qquad y = Ax + z, $$
+
+where $A$ is an $m \times n$ sensing matrix with $m$ usually smaller than $n$ by one or
+numerous orders of magnitude and $z$ is an error term modeling measurement
+errors.
+
+The matrix $A$ is chosen independently of $x$. Under specific conditions of
+the matrix $A$, compressive sensing indicates that as long as the unknown
+signal $x$ is reasonably sparse (contains mostly zeros) it is possible to recover
+$x$. One recovers $x$ by solving
+
+$$ (1.2) \qquad \min_{\tilde{x} \in \mathbb{R}^n} \| \tilde{x} \|_1 \text{ subject to } \| A\tilde{x} - y \|_2 \le \epsilon, $$
+
+where $\|\cdot\|_2$ denotes the standard Euclidean norm, $\|x\|_1 = \sum |x_i|$ is the $l_1$-norm and $\epsilon^2$ is a likely upper bound on the noise power $\|z\|_2^2$.
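In the noiseless case ($\epsilon = 0$), (1.2) reduces to basis pursuit, which can be solved as a linear program by splitting $x = u - v$ with $u, v \ge 0$. A sketch (assuming NumPy and SciPy's `linprog`; the problem sizes and seed are illustrative):

```python
import numpy as np
from scipy.optimize import linprog

# Basis pursuit: min ||x||_1 subject to Ax = y, as a linear program.
rng = np.random.default_rng(3)
m, n, s = 40, 100, 4
A = rng.standard_normal((m, n)) / np.sqrt(m)   # Gaussian sensing matrix
x_true = np.zeros(n)
support = rng.choice(n, size=s, replace=False)
x_true[support] = rng.standard_normal(s)
y = A @ x_true

# Variables z = [u; v] >= 0 with x = u - v, so ||x||_1 = sum(u) + sum(v).
c = np.ones(2 * n)
A_eq = np.hstack([A, -A])                      # A u - A v = y
res = linprog(c, A_eq=A_eq, b_eq=y, bounds=(0, None))
x_hat = res.x[:n] - res.x[n:]

# With m on the order of s log(n/s) Gaussian measurements, recovery of the
# s-sparse vector is exact up to solver tolerance (with high probability).
assert np.linalg.norm(x_hat - x_true) < 1e-4
```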
+
+Compressed sensing typically compares the quality of the reconstruction
+from the data $y$ and the model $y = Ax + z$ with the $s$ most significant
+entries of $x$. Let $x_s$ denote the vector consisting of the $s$ largest coefficients
+---PAGE_BREAK---
+
+of $x \in \mathbb{R}^n$ in magnitude
+
+$$ (1.3) \qquad x_s = \arg\min_{||\tilde{x}||_0 \le s} ||x - \tilde{x}||_2, $$
+
+where $||x||_0 = |\\{i: x_i \ne 0\\}|$. This vector $x_s$ has $s$ nonzero entries and is the best $s$-sparse approximation to the vector $x$. In other words, $x - x_s$ is the tail of the signal and consists of the smallest $n-s$ entries of $x$. It has been established by Romberg and Tao that (1.2) recovers a signal $\tilde{x}$ observing
+
+$$ (1.4) \qquad ||\tilde{x} - x||_2 \le C_0 \frac{||x - x_s||_1}{\sqrt{s}} + C_1 \epsilon, $$
+
+given that the 2s-restricted isometry constant of A obeys $\delta_{2s} < 0.4652$. Since the recovery error from (1.2) is proportional to the measurement error and the tail of the signal, $x-x_s$, the approximation error of a nearly sparse signal is very small and the error completely vanishes for precisely sparse signals.
+
+**Definition 1.1.** For an $m \times n$ measurement matrix $A$, the $s$-restricted isometry constant $\delta_s$ of $A$ is the smallest quantity such that
+
+$$ (1.5) \qquad (1-\delta_s)||x||_2^2 \le ||Ax||_2^2 \le (1+\delta_s)||x||_2^2, $$
+
+holds for all $s$-sparse signals $x$. Then matrix $A$ is said to satisfy the **$s$-restricted isometry property** with the $s$-restricted isometry constant $\delta_s$. The RIP characterizes matrices which are almost orthonormal when operating on sparse vectors, and it assures accurate recovery of signals that are nearly sparse in a highly overcomplete and coherent dictionary (frame) [1].
+
+The condition (1.5) which $A$ must obey is quite natural since it prevents sparse signals from lying in the null space of the sensing matrix $A$. A matrix having a small restricted isometry constant means that every subset of $s$ or fewer columns is nearly orthonormal. Many matrices with Gaussian, Bernoulli, or Fourier entries have small restricted isometry constants when the number of measurements $m$ is on the order of $s \log(n/s)$.
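The near-orthonormality of small column subsets can be probed empirically: the extreme singular values of random $s$-column submatrices bound how far $\|Ax\|_2^2/\|x\|_2^2$ can stray from 1. A Monte-Carlo sketch (assuming NumPy; sizes and trial count are illustrative, and sampling random supports only lower-bounds the true $\delta_s$):

```python
import numpy as np

# Empirical probe of the restricted isometry constant of a Gaussian matrix.
rng = np.random.default_rng(1)
m, n, s = 120, 400, 5
A = rng.standard_normal((m, n)) / np.sqrt(m)   # normalized: E||Ax||^2 = ||x||^2

delta = 0.0
for _ in range(200):                           # sample random s-subsets of columns
    T = rng.choice(n, size=s, replace=False)
    sv = np.linalg.svd(A[:, T], compute_uv=False)
    # ||A_T x||^2 / ||x||^2 lies in [sv_min^2, sv_max^2] for supp(x) = T
    delta = max(delta, sv[0] ** 2 - 1.0, 1.0 - sv[-1] ** 2)

# For these sizes the observed constant stays well below 1.
assert 0.0 < delta < 1.0
```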
+
+Compressive sensing is based on the observation that many types of real-world signals and images have a sparse expansion in terms of a suitable basis. This means that the expansion has only a small number of significant terms, or in other words, that the coefficient vector can be well-approximated with one having only a small number of nonzero entries.
+
+The **support** of a vector $x$ is denoted $\text{supp}(x) = \\{j: x_j \ne 0\\}$, and
+---PAGE_BREAK---
+
+$$
+\|x\|_0 := |\text{supp}(x)|.
+$$
+
+It has become common to call $\|\cdot\|_0$ the $l_0$-norm, although it is not even a quasi-norm. A vector $x$ is called **s-sparse** if $\|x\|_0 \le s$. For $s \in \{1, 2, ..., N\}$,
+
+$$
+\Sigma_s := \{x \in \mathbb{C}^N : \|x\|_0 \leq s\}
+$$
+
+denotes the set of s-sparse vectors. Furthermore, the best s-term approxi-
+mation error of a vector $x \in \mathbb{C}^N$ in $l_p$ is defined as
+
+$$
+\sigma_s(x)_p = \inf_{z \in \Sigma_s} \|x - z\|_p.
+$$
+
+If $\sigma_s(x)$ decays quickly in $s$ then $x$ is called **compressible**. In order to compress $x$ one may simply store only the $s$ largest entries. When reconstructing $x$ from its compressed version the nonstored entries are simply set to zero, and the reconstruction error is $\sigma_s(x)_p$. It is emphasized at this point that the procedure of obtaining the compressed version of $x$ is adaptive and nonlinear since it requires the search of the largest entries of $x$ in absolute value. Specifically, the location of the non-zeros is a nonlinear type of information.
+
+The best $s$-term approximation of $x$ can be obtained using the non-increasing rearrangement $r(x) = (|x_{i_1}|, \dots, |x_{i_N}|)^T$, where $i_j$ denotes a permutation of the indices such that $|x_{i_{j+1}}| \le |x_{i_j}|$ for $j = 1, \dots, N-1$.
+
+Then it is straightforward to check that
+
+$$
+\sigma_s(x)_p = \left( \sum_{j=s+1}^{N} r_j(x)^p \right)^{1/p}, \quad 0 < p < \infty,
+$$
+
+and the vector $x_{[s]}$ derived from $x$ by setting to zero all the $N-s$ smallest
+entries in absolute value is the best $s$-term approximation,
+
+$$
+x_{[s]} = \arg \min_{z \in \Sigma_s} \|x - z\|_p,
+$$
+
+for any $0 < p \le \infty$.
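The hard-thresholding step just described is straightforward to implement. A minimal sketch (assuming NumPy; the sample vector is illustrative):

```python
import numpy as np

# Best s-term approximation x_[s]: keep the s largest entries in absolute
# value; sigma_s(x)_p is the l_p norm of the discarded tail.
def best_s_term(x, s):
    idx = np.argsort(np.abs(x))[::-1]      # non-increasing rearrangement
    x_s = np.zeros_like(x)
    x_s[idx[:s]] = x[idx[:s]]
    return x_s

def sigma_s(x, s, p=2):
    return np.linalg.norm(x - best_s_term(x, s), ord=p)

x = np.array([0.1, -3.0, 0.02, 2.0, -0.5])
x2 = best_s_term(x, 2)
assert np.array_equal(x2, np.array([0.0, -3.0, 0.0, 2.0, 0.0]))
assert np.isclose(sigma_s(x, 2), np.sqrt(0.1**2 + 0.02**2 + 0.5**2))
```

Note the nonlinearity: `best_s_term` must search for the largest entries, which is exactly the adaptive step discussed above.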
+
+The next lemma states essentially that $l_q$-balls with small $q$ (ideally $q \le 1$)
+are good models for compressible vectors.
+
+**Lemma 1.2.** Let $0 < q < p \le \infty$ and set $r = \frac{1}{q} - \frac{1}{p}$. Then
+
+$$
+\sigma_s(x)_p \le s^{-r}, s = 1, 2, \dots, N \text{ for all } x \in B_q^N.
+$$
+
+*Proof.* Let $T$ be the set of indices of the $s$ largest entries of $x$ in absolute value. The non-increasing rearrangement satisfies $r_s(x) \le |x_j|$ for all $j \in T$, and therefore
+
+$$
+s\, r_s(x)^q \leq \sum_{j \in T} |x_j|^q \leq \|x\|_q^q \leq 1.
+$$
+
+Hence, $r_s(x) \leq s^{-1/q}$. Therefore
+---PAGE_BREAK---
+
+$$ \sigma_s(x)_p^p = \sum_{j \notin T} |x_j|^p \leq \sum_{j \notin T} r_s(x)^{p-q} |x_j|^q \leq s^{-\frac{p-q}{q}} \|x\|_q^q \leq s^{-\frac{p-q}{q}}, $$
+
+which implies $\sigma_s(x)_p \leq s^{-r}$. $\square$
+
+The null space property is fundamental in the analysis of $l_1$-minimization.
+
+**Definition 1.3.** A matrix $A \in \mathbb{C}^{m \times N}$ is said to satisfy the **null space property** (NSP) of order $s$ with constant $\gamma \in (0, 1)$ if
+
+$$ (1.6) \qquad \| \eta_T \|_1 \leq \gamma \| \eta_{T^c} \|_1, $$
+
+for all sets $T \subset \{1, \dots, N\}$, with $\#T \leq s$ and for all $\eta \in \ker A$.
+
+The following sparse recovery result is based on this notion.
+
+**Theorem 1.4.** Let $A \in \mathbb{C}^{m \times N}$ be a matrix that satisfies the NSP of order $s$ with constant $\gamma \in (0, 1)$. Let $x \in \mathbb{C}^N$ and $y = Ax$ and let $x^*$ be a solution of the $l_1$-minimization problem. Then
+
+$$ (1.7) \qquad \|x - x^*\|_1 \leq \frac{2(1+\gamma)}{1-\gamma} \sigma_s(x)_1. $$
+
+In particular, if $x$ is $s$-sparse $x^* = x$.
+
+*Proof.* Let $\eta = x^* - x$. Then $\eta \in \ker A$ and
+
+$$ \|x^*\|_1 \le \|x\|_1, $$
+
+because $x^*$ is a solution of the $l_1$-minimization problem. Let $T$ be the set of the $s$-largest entries of $x$ in absolute value. One has
+
+$$ \|x_T^*\|_1 + \|x_{T^c}^*\|_1 \le \|x_T\|_1 + \|x_{T^c}\|_1. $$
+
+It follows immediately from the triangle inequality that
+
+$$ \|x_T\|_1 - \|\eta_T\|_1 + \|\eta_{T^c}\|_1 - \|x_{T^c}\|_1 \le \|x_T\|_1 + \|x_{T^c}\|_1. $$
+
+Hence,
+
+$$ \|\eta_{T^c}\|_1 \le \|\eta_T\|_1 + 2\|x_{T^c}\|_1 \le \gamma\|\eta_{T^c}\|_1 + 2\sigma_s(x)_1, $$
+
+or, equivalently,
+
+$$ (1.8) \qquad \| \eta_{T^c} \|_1 \leq \frac{2}{1-\gamma} \sigma_s(x)_1. $$
+
+Finally,
+
+$$ \|x - x^*\|_1 = \|\eta_T\|_1 + \|\eta_{T^c}\|_1 \le (\gamma+1)\|\eta_{T^c}\|_1 \le \frac{2(1+\gamma)}{1-\gamma} \sigma_s(x)_1 $$
+
+and the proof is completed. $\square$
+---PAGE_BREAK---
+
+One can also show that if all $s$-sparse $x$ can be recovered from $y = Ax$
+using $l_1$-minimization then necessarily $A$ satisfies the NSP of order $s$ with
+some constant $\gamma \in (0, 1)$. Therefore, the NSP is actually equivalent to sparse
+$l_1$-recovery.
+
+The RIP implies the NSP as shown in the following lemma.
+
+**Lemma 1.5.** Assume that $A \in \mathbb{C}^{m \times N}$ satisfies the RIP of order $S = s + h$
+with constant $\delta_S \in (0, 1)$. Then $A$ has the NSP of order $s$ with constant
+
+$$
+\gamma = \sqrt{\frac{s(1+\delta_S)}{h(1-\delta_S)}}.
+$$
+
+*Proof.* Let $\eta \in \mathcal{N} = \ker A$ and $T \subset \{1, \dots, N\}$, with $\#T \le s$. Define $T_0 = T$ and $T_1, T_2, \dots, T_s$ to be disjoint sets of indices of size at most $h$, associated to a non-increasing rearrangement of the entries of $\eta$, i.e.,
+
+$$
+(1.9) \qquad |\eta_j| \le |\eta_i| \text{ for all } j \in T_l, \ i \in T_{l'}, \ l' \le l.
+$$
+
+Note that $A\eta = 0$ implies $A\eta_{T_0 \cup T_1} = -\sum_{j=2}^s A\eta_{T_j}$. Then, from the Cauchy-Schwarz inequality, the RIP, and the triangle inequality, the following sequence is deduced,
+
+$$
+\begin{align*}
+\|\eta_T\|_1 &\le \sqrt{s}\|\eta_T\|_2 \le \sqrt{s}\|\eta_{T_0 \cup T_1}\|_2 \\
+(1.10) \qquad &\le \sqrt{\frac{s}{1-\delta_S}}\|A\eta_{T_0 \cup T_1}\|_2 = \sqrt{\frac{s}{1-\delta_S}}\|A\eta_{T_2 \cup \dots \cup T_s}\|_2 \\
+&\le \sqrt{\frac{s}{1-\delta_S}} \sum_{j=2}^s \|A\eta_{T_j}\|_2 \le \sqrt{\frac{(1+\delta_S)s}{1-\delta_S}} \sum_{j=2}^s \|\eta_{T_j}\|_2.
+\end{align*}
+$$
+
+It follows from (1.9) that $|\eta_i| \le |\eta_l|$ for all $i \in T_{j+1}$ and $l \in T_j$. Taking the sum over $l \in T_j$ first and then the $l_2$-norm over $i \in T_{j+1}$ yields
+
+$$
+|\eta_i| \leq h^{-1} ||\eta_{T_j}||_1, \text{ and } ||\eta_{T_{j+1}}||_2 \leq h^{-1/2} ||\eta_{T_j}||_1.
+$$
+
+Using the latter estimates in (1.10) gives
+
+$$
+(1.11) \quad \| \eta_T \|_1 \le \sqrt{\frac{(1+\delta_S)s}{(1-\delta_S)h}} \sum_{j=1}^{s-1} \| \eta_{T_j} \|_1 \le \sqrt{\frac{(1+\delta_S)s}{(1-\delta_S)h}} \| \eta_{T^c} \|_1,
+$$
+
+and the proof is finished. $\square$
+
+Taking $h = 2s$ above shows that $\delta_{3s} < 1/3$ implies $\gamma < 1$. By Theorem 1.4 recovery of all $s$-sparse vectors by $l_1$-minimization is then guaranteed. Additionally, stability in $l_1$ is also ensured. The next theorem shows that RIP implies also a bound on the reconstruction error in $l_2$.
+---PAGE_BREAK---
+
+**Theorem 1.6.** Assume $A \in \mathbb{C}^{m \times N}$ satisfies the RIP of order $3s$ with $\delta_{3s} < 1/3$. For $x \in \mathbb{C}^N$, let $y = Ax$ and $x^*$ be the solution of the $l_1$-minimization problem. Then
+
+$$
+\|x - x^*\|_2 \le C \frac{\sigma_s(x)_1}{\sqrt{s}},
+$$
+
+with $C = \frac{2}{1-\gamma} \left( \frac{\gamma+1}{\sqrt{2}} + \gamma \right)$, and $\gamma = \sqrt{\frac{1+\delta_{3s}}{2(1-\delta_{3s})}}$.
+
+*Proof.* Similarly as in the proof of Lemma 1.5, let $\eta = x^* - x \in N =$ ker A, $T_0 = T$ the set of the 2s-largest entries of $\eta$ in absolute value, and $T_j$'s of size at most s corresponding to the non-increasing rearrangement of $\eta$. Then using (1.10) and (1.11) with $h = 2s$ of the previous proof,
+
+$$
+\|\eta_T\|_2 \leq \sqrt{\frac{1+\delta_{3s}}{2(1-\delta_{3s})}} s^{-1/2} \|\eta_{T^c}\|_1.
+$$
+
+From the assumption $\delta_{3s} < 1/3$ it follows that $\gamma := \sqrt{\frac{1+\delta_{3s}}{2(1-\delta_{3s})}} < 1$.
+Lemmas 1.2 and 1.5 yield
+
+$$
+\begin{equation}
+\begin{aligned}
+\|\eta_{T^c}\|_2 &= \sigma_{2s}(\eta)_2 \le (2s)^{-\frac{1}{2}} \|\eta\|_1 = (2s)^{-1/2} (\|\eta_T\|_1 + \|\eta_{T^c}\|_1) \\
+&\le (2s)^{-1/2} (\gamma \|\eta_{T^c}\|_1 + \|\eta_{T^c}\|_1) \le \frac{\gamma+1}{\sqrt{2}} s^{-1/2} \|\eta_{T^c}\|_1.
+\end{aligned}
+\tag{1.12}
+\end{equation}
+$$
+
+Since *T* is the set of 2s-largest entries of η in absolute value, it holds
+
+$$
+(1.13) \quad \|\eta_{T^c}\|_1 \le \|\eta_{(\operatorname{supp} x_{[2s]})^c}\|_1 \le \|\eta_{(\operatorname{supp} x_{[s]})^c}\|_1,
+$$
+
+where $x_{[s]}$ is the best $s$-term approximation to $x$. The use of this latter
+estimate, combined with inequality (1.8), finally gives
+
+$$
+\begin{align*}
+\|x - x^*\|_2 &\le \|\eta_T\|_2 + \|\eta_{T^c}\|_2 \\
+(1.14) \qquad &\le \left(\frac{\gamma+1}{\sqrt{2}} + \gamma\right)s^{-1/2}\|\eta_{T^c}\|_1 \\
+&\le \frac{2}{1-\gamma}\left(\frac{\gamma+1}{\sqrt{2}} + \gamma\right)s^{-1/2}\sigma_s(x)_1.
+\end{align*}
+$$
+
+This concludes the proof. □
+
+The compressive sensing techniques described above are used when the signals are sparse with respect to an orthonormal basis. However, there are often times when a signal is not sparse in an orthonormal basis. Sparsity is often expressed in terms of an overcomplete dictionary. An overcomplete dictionary refers to a dictionary or matrix which has many more columns than rows. The use of overcomplete dictionaries is now widespread in the
+---PAGE_BREAK---
+
+field of compressed sensing. They are often used when working in situations in which no good orthonormal basis is known to exist. Additionally, overcomplete dictionaries provide benefits in certain applications such as deconvolution, tomography, and other signal-denoising problems.
+
+In the overcomplete dictionary situation our signal $f \in \mathbb{R}^n$ is now expressed as $f = Dx$ where $D \in \mathbb{R}^{n \times d}$ is some overcomplete dictionary. Now, consider the case in which the sensing matrix $A$ has Gaussian entries. If $D$ is not a unitary matrix then the matrix $AD$ will have correlated columns and thus would not satisfy the traditional requirements imposed by compressive sensing. This paper will discuss the potential of good recovery when the columns are highly correlated.
+
+Traditional assumptions imposed by compressive sensing and sparse signal recovery say that the measurement matrix must have uncorrelated columns.
+
+**Definition 1.7.** The coherence of a matrix $B$ with columns $b_1, \dots, b_N$ is defined as
+
+$$ (1.15) \qquad \mu(B) = \max_{j \ne k} \frac{|\langle b_j, b_k \rangle|}{\|b_j\|_2 \|b_k\|_2}. $$
+
+In what follows we consider random matrices $A \in \mathbb{R}^{m \times N}$ normalized so that for all $x \in \mathbb{R}^N$ we have the identity
+$\mathbb{E}\|Ax\|_2^2 = \|x\|_2^2$, where $\mathbb{E}$ denotes expectation.
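Coherence is cheap to compute from the Gram matrix of the normalized columns. A minimal sketch (assuming NumPy; the toy dictionaries below are illustrative):

```python
import numpy as np

# Coherence of a dictionary: largest absolute inner product between
# distinct unit-normalized columns.
def coherence(B):
    Bn = B / np.linalg.norm(B, axis=0)     # unit-norm columns
    G = np.abs(Bn.T @ Bn)                  # absolute Gram matrix
    np.fill_diagonal(G, 0.0)               # ignore self inner products
    return G.max()

# Orthonormal columns are perfectly incoherent; a duplicated column is
# maximally coherent, the regime this paper is concerned with.
I3 = np.eye(3)
assert coherence(I3) == 0.0
D = np.hstack([I3, I3[:, :1]])             # repeat the first column
assert np.isclose(coherence(D), 1.0)
```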
+
+The starting point for the simple approach is a concentration inequality
+of the form
+
+$$
+(1.19) \quad \mathbb{P}\left(\left| \|Ax\|_2^2 - \|x\|_2^2 \right| \ge \delta \|x\|_2^2\right) \le 2e^{-c_0 \delta^2 m}, \quad 0 < \delta < 1,
+$$
+
+where $c_0 > 0$ is some constant. The two most relevant examples of random
+matrices which satisfy the above concentration are Gaussian and Bernoulli
+matrices. Based on the concentration inequality, the following estimate on RIP
+constants can be shown.
+
+**Theorem 1.8.** Let $A \in \mathbb{R}^{m \times N}$ be a random matrix satisfying the concentration property. Then there exists a constant $C$ depending only on $c_0$ such that the restricted isometry constant of $A$ satisfies $\delta_k \le \delta$ with probability exceeding $1 - \epsilon$ provided
+
+$$
+m \ge C\delta^{-2}\big(k \log(N/m) + \log(\epsilon^{-1})\big).
+$$
+---PAGE_BREAK---
+
+Combining this RIP estimate with the recovery results for $l_1$-minimization shows that all $k$-sparse vectors $x \in \mathbb{C}^N$ can be stably recovered from a random draw of $A$ satisfying (1.19) with high probability provided
+
+$$ (1.20) \qquad m \ge Ck \log(N/m). $$
+
+Up to the log-factor this provides the desired linear scaling of the number $m$ of measurements with respect to the sparsity $k$. Furthermore, the above condition cannot be improved; in particular, the log-factor cannot be removed.
+
+**Theorem 1.9.** If a system of linear equations $Ax = b$ has a solution obeying $\|x\|_0 < \frac{1}{2}(1 + 1/\mu(A))$, this solution is necessarily the sparsest possible.
+
+The coherence can never be smaller than $1/\sqrt{n}$, and therefore the cardinality bound of the above theorem is never larger than $\frac{1}{2}(1 + \sqrt{n})$.
+
+**Theorem 1.10.** For a system of linear equations $Ax = b$ ($A \in \mathbb{R}^{m \times n}$ full-rank with $m < n$), if a solution $x$ exists obeying
+
+$$ \|x\|_0 < \frac{1}{2} \left( 1 + \frac{1}{\mu(A)} \right), $$
+
+then $l_1$-minimization is guaranteed to find it exactly.
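Theorem 1.10 can be checked numerically by casting $l_1$-minimization as a linear program via the standard split $x = u - v$ with $u, v \ge 0$. The sketch below is illustrative (it assumes scipy is available; the matrix sizes and support are my own choices):

```python
import numpy as np
from scipy.optimize import linprog

def basis_pursuit(A, b):
    """Solve min ||x||_1 s.t. Ax = b as an LP with the split x = u - v, u, v >= 0."""
    m, n = A.shape
    res = linprog(c=np.ones(2 * n),
                  A_eq=np.hstack([A, -A]), b_eq=b,
                  bounds=[(0, None)] * (2 * n))
    return res.x[:n] - res.x[n:]

rng = np.random.default_rng(0)
A = rng.standard_normal((30, 60)) / np.sqrt(30)   # underdetermined Gaussian matrix
x = np.zeros(60)
x[[3, 17, 42]] = [1.5, -2.0, 0.7]                 # 3-sparse signal
x_hat = basis_pursuit(A, A @ x)
print(np.linalg.norm(x_hat - x))                  # should be near zero for this instance
```

With $m = 30$ measurements and sparsity 3, the instance is comfortably inside the regime where $l_1$-minimization succeeds with high probability.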
+
+Next I would like to discuss how well one can estimate the response $Ax$ where $A$ is a matrix and $x$ is an $s$-sparse vector. The generic $s$-sparse model is defined as follows:
+
+(1) The support $I \subset \{1, \dots, p\}$ of the $s$ nonzero coefficients of $x$ is selected uniformly at random.
+
+(2) Conditionally on $I$, the signs of the nonzero entries of $x$ are independent and equally likely to be $-1$ or $1$.
+
+No assumptions are made on the amplitudes. In some sense, this is the simplest statistical model. It says that all subsets of a given cardinality are equally likely, or in other words, one is not biased towards certain variables nor is there any reason to believe that a given coefficient is positive or negative.
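The two model assumptions can be sampled directly; a small numpy sketch (illustrative, with amplitudes drawn arbitrarily since the model leaves them unconstrained):

```python
import numpy as np

def generic_sparse(p, s, rng):
    """Draw from the generic s-sparse model described above."""
    x = np.zeros(p)
    support = rng.choice(p, size=s, replace=False)   # (1) support uniform at random
    signs = rng.choice([-1.0, 1.0], size=s)          # (2) independent fair signs
    amplitudes = 1.0 + rng.random(s)                 # amplitudes are left unconstrained
    x[support] = signs * amplitudes
    return x

x = generic_sparse(256, 10, np.random.default_rng(1))
print(np.count_nonzero(x))  # → 10
```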
+
+On the other hand, suppose that our sparse vector $x$ does not satisfy a minimum distance between its nonzero entries. To test this scenario I chose a 64×256 DFT and a sparse vector with 10 consecutive nonzero entries. Using $l_1$-minimization I attempted to recover the signal; however, as is apparent in Figure 1, the reconstruction was not accurate: the relative error was approximately 1.21, which is quite high. Therefore signal recovery
+---PAGE_BREAK---
+
+via $l_1$-minimization fails unless the sensing matrix or the signal itself satisfies
+certain conditions.
+
+FIGURE 1
+
+If two columns are highly correlated, it would be nearly impossible to distinguish whether the signal comes from one or the other. For example, suppose we are not undersampling, so that $A$ is the identity matrix, and we observe $y = Dx$. If the first two columns of $D$ are identical, then it is not possible to reconstruct a unique sparse coefficient vector $x$ from the measurements $y = ADx$. However, instead of recovering the coefficient vector $x$ we are interested in the actual signal $Dx$. Thus the high correlation between the columns of $D$ does not create a problem, since differentiating between coefficient vectors is not the goal. Therefore low coherence of $D$ may not be a necessary requirement for recovery.
+
+To introduce my results, I will first discuss a concrete situation. I first assume that the dictionary $D$ consists of shifted Gaussian atoms with decay rate $\epsilon$ and shift $0.25$. Next, I conduct the same tests using Fourier matrices. I am interested in recovering the actual signal $f = Dx$ instead of the sparse vector $x$.
+---PAGE_BREAK---
+
+The goal is to solve $\min_{f \in \mathbb{R}^n} \|D^*f\|_1$ subject to $\|Af - y\|_2 \le \epsilon$, given measurements $y = Af + z$ where $z$ is noise. I want to find an $f$ that fits the data up to $\epsilon$, which is related to the noise by $\|z\|_2 \approx \epsilon$.
+
+I tried three different methods to solve this compressed sensing problem. The first method is $l_1$-analysis, the second is $l_1$-minimization, and the third method is band exclusion. The results show the superiority of $l_1$-minimization.
+
+## 2. METHODS
+
+**2.1. $l_1$-analysis.** This section proposes a reconstruction from $y = Af + z$ by the method of $l_1$-analysis:
+
+$$ (2.1) \quad \tilde{f} = \arg \min_{\tilde{f} \in \mathbb{R}^n} \|D^*\tilde{f}\|_1 \text{ subject to } \|A\tilde{f} - y\|_2 \le \epsilon $$
+
+where again $\epsilon$ is a likely upper bound on the noise level $\|z\|_2$. Empirical studies have shown very promising results for the $l_1$-analysis problem. Its geometry has been studied as well as its applications to image restoration. However, there are no results in the literature about its performance in regard to the case where $D$ is a redundant dictionary made of Gaussian functions. The solution to (2.1) is very accurate provided that $D^*f$ has rapidly decreasing coefficients.
+
+**Theorem 2.1.** Let $D$ be an arbitrary $n \times d$ tight frame and let $A$ be an $m \times n$ Gaussian matrix with $m$ on the order of $s \log(d/s)$. Then the solution $\tilde{f}$ to (2.1) obeys
+
+$$ \| \tilde{f} - f \|_2 \le C_0 \epsilon + C_1 \frac{\| D^* f - (D^* f)_s \|_1}{\sqrt{s}}, $$
+
+for some numerical constants $C_0$ and $C_1$, and where $(D^*f)_s$ is the vector consisting of the largest $s$ entries of $D^*f$ in magnitude as in (1.3).
+
+**2.2. Band Exclusion.** This method relies on band exclusion. While many $l_1$-minimization algorithms require either incoherence or the Restricted Isometry Property to perform well, this method does not. [2]
+
+According to the theory of optimal recovery, for time sampling in $[0,1]$, the minimum resolvable length in the frequency domain is unity. This is the Rayleigh threshold, and this length will be referred to as the Rayleigh length (RL). Thus, for the traditional inversion methods to work, it is essential that the grid spacing be no less than 1 RL. In the compressed sensing setting,
+---PAGE_BREAK---
+
+the Rayleigh threshold is closely related to the decay property of the mutual coherence. [2]
+
+Without any prior information about the object support, the gridding error for the resolved grid, however, can be as large as the data itself, creating an unfavorable condition for sparse reconstruction. To reduce the gridding error, it is natural to consider the fractional grid
+
+$$ \mathbb{Z}/F = \{j/F : j \in \mathbb{Z}\} $$
+
+with some large integer $F \in \mathbb{N}$ called the refinement factor. The relative gridding error is roughly inversely proportional to the refinement factor; however, the mutual coherence increases with $F$ as the near-by columns of the sensing matrix become highly correlated.
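The trade-off between gridding error and coherence can be seen numerically; the sketch below (my own illustration, with `fractional_fourier_matrix` a hypothetical helper name) builds a Fourier sensing matrix on the fractional grid and shows its coherence growing with the refinement factor $F$:

```python
import numpy as np

def fractional_fourier_matrix(m, F):
    """Columns exp(-2*pi*i*t*omega) for frequencies omega on the fractional grid
    with spacing 1/(m*F); F = 1 gives the orthonormal DFT (one RL spacing)."""
    t = np.arange(m)[:, None]
    omega = np.arange(m * F)[None, :] / (m * F)
    return np.exp(-2j * np.pi * t * omega) / np.sqrt(m)

def coherence(B):
    Bn = B / np.linalg.norm(B, axis=0)
    G = np.abs(Bn.conj().T @ Bn)
    np.fill_diagonal(G, 0.0)
    return G.max()

# Mutual coherence increases with F as neighbouring columns become correlated.
for F in (1, 2, 4, 8):
    print(F, coherence(fractional_fourier_matrix(16, F)))
```

For $F = 1$ the columns are orthonormal (coherence essentially 0); as $F$ grows the coherence climbs toward 1, which is the phenomenon the band-exclusion technique is designed to handle.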
+
+The hope is that if the objects are sufficiently separated with respect to the coherence band, then the problem of a huge condition number associated with unresolved grids can be somehow circumvented and the object support can be approximately reconstructed.
+
+The first technique that I will introduce to take advantage of the information that objects are widely separated is called Band Exclusion and it can be easily embedded in the greedy algorithm, Orthogonal Matching Pursuit (OMP). The following proposition is a standard performance guarantee for OMP.
+
+**Proposition 2.2.** Suppose that the sparsity $s$ of the signal vector $x$ satisfies
+
+$$ \mu(A)(2s - 1) + 2 \frac{\|e\|_2}{x_{min}} < 1, $$
+
+where $x_{\min} = \min_k |x_k|$. Let $\tilde{x}$ denote the output of the OMP reconstruction. Then
+
+$$ \mathrm{supp}(\tilde{x}) = \mathrm{supp}(x), $$
+
+where $\mathrm{supp}(x)$ is the support of $x$. In the ideal case where $e = 0$, this reduces to
+
+$$ \mu(A) < \frac{1}{2s - 1}, $$
+
+which is near the threshold of OMP's capacity for exact reconstruction of arbitrary objects of sparsity $s$.
+
+Intuitively speaking, if the objects are not in each other's coherence band, then it should be possible to localize the objects approximately within their respective coherence bands, no matter how large the mutual coherence is.
+
+Define the $\eta$-coherence band of the index $k$ to be the set
+
+$$ B_{\eta}(k) = \{i \mid \mu(i, k) > \eta\}, $$
+---PAGE_BREAK---
+
+and the $\eta$-coherence band of the index set $S$ to be the set
+
+$$B_{\eta}(S) = \bigcup_{k \in S} B_{\eta}(k).$$
+
+Due to the symmetry $\mu(i, k) = \mu(k, i)$, we have $i \in B_{\eta}(k)$ if and only if $k \in B_{\eta}(i)$. Denote
+
+$$ (2.2) \qquad B_{\eta}^{(2)}(k) := B_{\eta}(B_{\eta}(k)) = \cup_{j \in B_{\eta}(k)} B_{\eta}(j), $$
+
+$$ (2.3) \qquad B_{\eta}^{(2)}(S) \equiv B_{\eta}(B_{\eta}(S)) = \cup_{k \in B_{\eta}(S)} B_{\eta}(k). $$
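The band definitions translate directly into code; an illustrative Python sketch (the shifted-Gaussian toy dictionary at the end is my own construction, mirroring the dictionaries used later in the paper):

```python
import numpy as np

def pairwise_coherence(A):
    """mu(i, k): normalized inner products between columns i and k."""
    An = A / np.linalg.norm(A, axis=0)
    return np.abs(An.conj().T @ An)

def band(mu, k, eta):
    """B_eta(k) = { i : mu(i, k) > eta }; note k itself always belongs."""
    return {i for i in range(mu.shape[0]) if mu[i, k] > eta}

def band_of_set(mu, S, eta):
    """B_eta(S): union of the bands of the members of S."""
    return {i for k in S for i in band(mu, k, eta)}

def double_band(mu, S, eta):
    """B_eta^{(2)}(S) = B_eta(B_eta(S)), as in (2.2)-(2.3)."""
    return band_of_set(mu, band_of_set(mu, S, eta), eta)

# Shifted Gaussian atoms: neighbouring columns overlap, so bands are wider than one index.
t = np.arange(32)[:, None]
D = np.exp(-(t - np.arange(0.0, 32.0, 0.5)[None, :])**2 / 9.0)
mu = pairwise_coherence(D)
print(sorted(band(mu, 32, 0.9)))
```

Since $\mu(k,k) = 1$, each index lies in its own band, and therefore $B_\eta(S) \subseteq B_\eta^{(2)}(S)$, which is why excluding the double band in BOMP also excludes the band itself.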
+
+To embed BE into OMP, we make the following change to the matching step
+
+$$ i_{\max} = \arg \max_i | \langle r^{n-1}, a_i \rangle |, \quad i \notin B_{\eta}^{(2)}(S^{n-1}), \quad n = 1, 2, \dots $$
+
+meaning that the double $\eta$-band of the estimated support in the previous iteration is avoided in the current search. This is natural if the sparsity pattern of the object is such that $B_\eta(j)$, $j \in \text{supp}(x)$ are pairwise disjoint. We call the modified algorithm the Band-excluded Orthogonal Matching Pursuit (BOMP) which is formally stated in the following Algorithm.
+
+**Algorithm 1** Band-excluded Orthogonal Matching Pursuit (BOMP)
+
+**Input:** $A$, $b$, $\eta > 0$
+**Initialization:** $x^0 = 0$, $r^0 = b$, $S^0 = \emptyset$
+**Iteration:** For $n = 1, \dots, s$
+ (1) $i_{\max} = \arg\max_i |\langle r^{n-1}, a_i \rangle|$, $i \notin B_{\eta}^{(2)}(S^{n-1})$
+ (2) $S^n = S^{n-1} \cup \{i_{\max}\}$
+ (3) $x^n = \arg\min_z \|Az - b\|_2$ s.t. $\mathrm{supp}(z) \subseteq S^n$
+ (4) $r^n = b - Ax^n$
+**Output:** $x^s$
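The pseudocode above can be sketched compactly in Python (an illustrative implementation of Algorithm 1, not the authors' code; the orthonormal sanity check at the end is my own):

```python
import numpy as np

def bomp(A, b, s, eta):
    """Band-excluded OMP: the matching step skips the double eta-coherence
    band of the current support, then refits by least squares."""
    n = A.shape[1]
    An = A / np.linalg.norm(A, axis=0)
    mu = np.abs(An.T @ An)                      # pairwise coherences mu(i, k)
    band = lambda S: {i for k in S for i in range(n) if mu[i, k] > eta}
    x, r, S = np.zeros(n), b.astype(float), set()
    for _ in range(s):
        excluded = band(band(S))                # B_eta^{(2)}(S^{n-1})
        corr = np.abs(A.T @ r)
        corr[list(excluded)] = -np.inf          # band exclusion in the matching step
        S.add(int(np.argmax(corr)))
        idx = sorted(S)
        coef, *_ = np.linalg.lstsq(A[:, idx], b, rcond=None)
        x = np.zeros(n)
        x[idx] = coef
        r = b - A @ x
    return x

# Sanity check: with an orthonormal A the bands are singletons and BOMP is plain OMP.
x_hat = bomp(np.eye(8), np.array([0, 2.0, 0, 0, 0, -1.0, 0, 0]), 2, 0.5)
print(x_hat[1], x_hat[5])  # → 2.0 -1.0
```

On a correlated dictionary, the interesting behaviour is that the selected indices land inside the coherence bands of the true support rather than on it exactly, which is precisely the guarantee of Theorem 2.3.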
+
+A main theoretical result of the present paper is the following performance guarantee for BOMP.
+
+**Theorem 2.3.** Let $x$ be $s$-sparse. Let $\eta > 0$ be fixed. Suppose that
+
+$$ (2.4) \qquad B_{\eta}(i) \cap B_{\eta}^{(2)}(j) = \emptyset, \forall i, j \in \text{supp}(x), $$
+
+*and that*
+
+$$ (2.5) \qquad \eta(5s - 4) \frac{x_{\max}}{x_{\min}} + \frac{5 \|e\|_2}{2x_{\min}} < 1, $$
+
+*where*
+
+$$ x_{\max} = \max_k |x_k|, x_{\min} = \min_k |x_k|. $$
+---PAGE_BREAK---
+
+Let $\tilde{x}$ be the BOMP reconstruction. Then $\text{supp}(\tilde{x}) \subseteq B_{\eta}(\text{supp}(x))$ and moreover every nonzero component of $\tilde{x}$ is in the $\eta$-coherence band of a unique component of $x$.
+
+First, numerical evidence shows degradation in BOMP's performance as the dynamic range increases, consistent with the prediction of (2.5); the dynamic range of the objects is clearly an essential factor in the performance of recovery. This sensitivity to dynamic range can be drastically reduced by the local optimization technique introduced next. Secondly, condition (2.4) means that BOMP can resolve objects separated by 3 RLs. Numerical experiments show that BOMP can resolve objects separated by close to 1 RL when the dynamic range is close to 1.
+
+Numerical experiments show that the main shortcoming of BOMP is its failure to perform well when the dynamic range is even moderate. To overcome this problem, we now introduce the second technique: Local Optimization (LO). LO is a residual-reduction technique applied to the current estimate $S^n$ of the object support. To this end, we minimize the residual $\|A\tilde{x} - b\|_2$ by varying one location at a time while all other locations are held fixed. In each step we consider a vector $\tilde{x}$ whose support differs from $S^n$ by at most one index in the coherence band of $S^n$ and whose amplitudes are chosen to minimize the residual. The search is local in the sense that while searching in the coherence band of one nonzero component, the locations of the other nonzero components are fixed. The amplitudes of the improved estimate are obtained by solving a least squares problem. Because of the local nature of the LO step, the computation is not expensive.
+
+**Algorithm 2** Local Optimization (LO)
+
+Input: $A, b, \eta > 0, S^0 = \{i_1, \dots, i_k\}$
+
+Iteration: For $n = 1, 2, \dots, k$
+
+(1) $x^n = \arg \min_z \|Az - b\|_2,$
+
+$$ \operatorname{supp}(z) = (S^{n-1}\setminus\{i_n\}) \cup \{j_n\}, \quad j_n \in B_{\eta}(\{i_n\}) $$
+
+(2) $S^n = \text{supp}(x^n)$
+
+Output: $S^k$
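Algorithm 2 can be sketched the same way (an illustrative implementation; `local_opt` and the two-atom toy example are my own names and construction):

```python
import numpy as np

def local_opt(A, b, S0, eta):
    """Local Optimization: for each support index, search its eta-coherence
    band for the one-index swap that most reduces the least-squares residual,
    holding the other indices fixed."""
    n = A.shape[1]
    An = A / np.linalg.norm(A, axis=0)
    mu = np.abs(An.T @ An)
    def residual(idx):
        coef, *_ = np.linalg.lstsq(A[:, idx], b, rcond=None)
        return np.linalg.norm(b - A[:, idx] @ coef)
    S = list(S0)
    for pos in range(len(S)):
        i_n = S[pos]
        best_j, best_r = i_n, residual(sorted(S))
        for j in range(n):                       # j_n ranges over B_eta({i_n})
            if mu[j, i_n] > eta and j not in S:
                r = residual(sorted(S[:pos] + [j] + S[pos + 1:]))
                if r < best_r:
                    best_j, best_r = j, r
        S[pos] = best_j
    return set(S)

# Two highly correlated unit-norm atoms; LO slides the wrong index onto the right one.
th = 0.3
A = np.array([[1.0, np.cos(th)], [0.0, np.sin(th)]])
print(local_opt(A, A[:, 0], [1], 0.9))  # → {0}
```

The toy example shows the mechanism: the initial index 1 lies in the coherence band of the true atom 0 (their correlation is $\cos 0.3 \approx 0.955 > \eta$), and the swap drives the residual to zero.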
+
+Embedding LO in BOMP gives rise to the Band-excluded, Locally Optimized Orthogonal Matching Pursuit (BLOOMP).
+
+We now give a condition under which LO does not spoil the BOMP reconstruction of Theorem 2.3.
+---PAGE_BREAK---
+
+**Algorithm 3** Band-excluded, Locally Optimized Orthogonal Matching Pursuit (BLOOMP)
+
+Input: $A$, $b$, $\eta > 0$
+
+Initialization: $x^0 = 0$, $r^0 = b$, $S^0 = \emptyset$
+
+Iteration: For n = 1, ..., s
+
+(1) $i_{\max} = \arg \max_i |\langle r^{n-1}, a_i \rangle|$, $i \notin B_{\eta}^{(2)}(S^{n-1})$
+
+(2) $S^n = LO(S^{n-1} \cup \{i_{\max}\})$ where LO is the output from Algorithm 2
+
+(3) $x^n = \arg \min_z \|Az - b\|_2 \text{ s.t. } \text{supp}(z) \subseteq S^n$
+
+(4) $r^n = b - Ax^n$
+
+Output: $x^s$
+
+**Theorem 2.4.** Let $\eta > 0$ and let $x$ be an $s$-sparse vector such that (2.4) holds. Let $S^0$ and $S^k$ be the input and output, respectively, of the LO algorithm. If
+
+$$
+(2.6) \quad x_{\min} > (\epsilon + 2(s-1)\eta) \left( \frac{1}{1-\eta} + \sqrt{\frac{1}{(1-\eta)^2} + \frac{1}{1-\eta^2}} \right), \quad \epsilon = \|e\|_2,
+$$
+
+and each element of S^0 is in the η-coherence band of a unique nonzero component of x, then each element of S^k remains in the η-coherence band of a unique nonzero component of x.
+
+**Corollary 2.5.** Let $\tilde{x}$ be the output of BLOOMP. Under the assumptions of Theorems 2.3 and 2.4, $\text{supp}(\tilde{x}) \subseteq B_{\eta}(\text{supp}(x))$ and moreover every nonzero component of $\tilde{x}$ is in the $\eta$-coherence band of a unique nonzero component of $x$.
+
+Even though we cannot improve the performance guarantee for BLOOMP,
+in practice the LO technique enhances the success probability of recovery
+so much that BLOOMP has the best performance, with respect to noise
+stability and dynamic range, among all the algorithms tested. In particular,
+the LO step greatly enhances the performance of BOMP with respect to
+dynamic range. Moreover, whenever Corollary 2.5 holds, for all practical
+purposes we have the residual bound for the BLOOMP reconstruction $\tilde{x}$
+
+$$
+(2.7) \qquad \|b - A\tilde{x}\|_2 \le c\|e\|_2, \quad c \sim 1.
+$$
+
+On the other hand, it is difficult to obtain bounds for the reconstruction
+error since ||x - $\tilde{x}$||₂ is not a meaningful error metric without exact recovery
+of an overwhelming majority of the object support.
+---PAGE_BREAK---
+
+3. NUMERICS
+
+3.1. **Method 1: l₁-analysis.** To test the accuracy for recovery using the l₁-analysis method I will start by defining the Gaussian matrix *D*.
+
+I define the dictionary $D$ by the following MATLAB code. [5]
+
+```matlab
+t = (-((row/2) - 1) : (row/2)).';
+for k = -((column/2) - 1) : (column/2)
+    D(:, k + (column/2)) = exp(-(t - (0.25*k)).^2 / epsilon^2);
+end
+```
+
+The variable `epsilon` is the decay rate of the Gaussian function. I have chosen to set this
+value to 3 due to the localization and specific coherence properties of the
+Gaussian function that I chose to use.
+
+The next step is to create an $s$-sparse vector $x$. The method I use
+relies on the MATLAB command `randperm` to choose the entries of the vector
+which will contain the random nonzero coefficients [5]. The number of
+nonzero coefficients is denoted by `sparsity` and the length of the vector $x$ is
+denoted by `leng`.
+
+```matlab
+function [ vector ] = sparsevec( leng, sparsity )
+index = randperm(leng);
+index = index(1:round(sparsity));
+vector = zeros(leng, 1);
+vector(index) = randn(size(index));
+end
+```
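For readers outside MATLAB, the same construction can be mirrored in numpy (an illustrative translation of `sparsevec`, not code from the original):

```python
import numpy as np

def sparsevec(leng, sparsity, rng):
    """numpy analogue of the MATLAB sparsevec above."""
    vector = np.zeros(leng)
    index = rng.permutation(leng)[:round(sparsity)]  # randperm, then keep `sparsity` entries
    vector[index] = rng.standard_normal(index.size)  # Gaussian nonzero coefficients
    return vector

v = sparsevec(256, 10, np.random.default_rng(0))
print(np.count_nonzero(v))  # → 10
```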
+
+To solve this $l_1$-analysis problem I have chosen to use CVX, a
+MATLAB-based modeling system for convex optimization. [4] [5] I was
+interested in testing this method for sparsity ranging from 1 to $\text{round}(row/\log(column))$,
+where $row$ is the number of rows of the matrix $D$ and $column$ is the number
+of columns. The Gaussian matrix I chose does not satisfy the requirements
+of Equation (1.20), so I cannot claim that this upper bound holds; I can
+only use it as a guideline.
+
+I tested the percent error of recovering $f = Dx$ for sparsities between 1
+and $\text{round}(row/\log(column))$. For my simulation I used a row length of
+64 and a column length of 256, which means that sparsity ranged from 1 to 12.
+For each sparsity level I ran the code 10 times in order to compute an average error.
+---PAGE_BREAK---
+
+The following code minimizes $\|D^*f\|_1$:
+
+```matlab
+cvx_begin
+    variable g(row)
+    minimize( norm(D' * g, 1) )
+    subject to
+        norm(A * g - y, 2) <= delta * norm(y, 2)
+cvx_end
+```
+
+The graphs in Figure 2 show the resulting percent errors for a range of sparsities. The results for the $l_1$-analysis method are disappointing: the resulting error when minimizing $\|D^*f\|_1$ is higher than desired. The trend is somewhat random, which suggests that the tail $(D^*f) - (D^*f)_s$ is too large, causing the $l_1$-analysis method to lack accuracy in reconstruction.
+
+As the results of the MATLAB test make apparent, $l_1$-analysis is not the best method for recovery when using a dictionary of slowly decaying Gaussian atoms. [5]
+
+Next, I would like to test the $l_1$-analysis method for Fourier matrices. I use the same code; however, I now define the dictionary $D$ as a Discrete Fourier Transform (DFT) matrix via the following MATLAB code. [5]
+
+```matlab
+for k = 1 : row
+    xi = rand/20;
+    D(k, :) = exp(-1i*2*pi*xi*(0:column-1)) / sqrt(row);
+end
+```
+
+Similarly to the Gaussian case, a sparse vector $x$ is created and **CVX** is used to solve the convex optimization problem. [4] Again, I was interested in analyzing the relative error in accurately recovering $f = Dx$ using a range of sparsities. The following graph in Figure 3 shows the resulting percent errors for a range of sparsities. The results, similar to the $l_1$-analysis method using Gaussian matrices, are discouraging. The resulting error when minimizing
+---PAGE_BREAK---
+
+FIGURE 2
+
+$\|D^*f\|_1$ is higher than desired. There is no straightforward trend as the sparsity of the vector $x$ increases; as in the Gaussian case the behaviour is somewhat random, which suggests that the tail $(D^*f) - (D^*f)_s$ is too large, causing the $l_1$-analysis method to lack accuracy in reconstruction.
+
+As we have noticed from the results of the MATLAB test it is apparent that $l_1$-analysis is not the best method for recovery when using a dictionary that contains either Gaussian or Fourier matrices. [5]
+
+**3.2. Method 2: $l_1$-minimization.** Similar to the $l_1$-analysis method I used the same Gaussian matrix $D$ and tested the percent error of recovering $f = Dx$ for sparsities between 1 and $\text{round}(row/\log(column))$. For my simulation I used a row length of 64 and a column length of 256 which means that sparsity ranged from 1 to 12. For each sparsity level I ran the code 10 times in order to compute an average error.
+
+The following graphs in Figure 4 show the resulting percent errors for a range of sparsities. The results for the $l_1$-minimization method where the
+---PAGE_BREAK---
+
+FIGURE 3
+
+vector $x$ is being minimized appear to show success: the errors across the range of sparsities are on the order of $10^{-3}$, which is relatively low.
+
+Next, I tested the $l_1$-minimization method for Fourier matrices. I used
+the same Fourier matrix as in the $l_1$-analysis method and similarly
+tested the percent error of recovering $f = Dx$ for sparsity between 1 and
+$\text{round}(row/\log(column))$. The graph in Figure 5 shows the
+resulting percent errors for a range of sparsities. The results, like the
+$l_1$-minimization results for Gaussian matrices, are quite accurate and are
+on the order of $10^{-8}$.
+
+In both cases, Gaussian and Fourier, the $l_1$-minimization
+method recovers the signal with great accuracy.
+
+**3.3. Method 3: Band Exclusion.** To test the accuracy of recovery using the Band Exclusion method I considered multiple cases. I started with a dictionary $D$ of Gaussian atoms. I first considered the noiseless case and then added Gaussian noise at three levels: 5%, 10%, and 20%.
+---PAGE_BREAK---
+
+FIGURE 4
+
+To create the code I used a refinement factor of 10; the larger the refinement factor, the smaller the gridding error, but the more computations are involved. A value of 10 seemed a good balance between accuracy and computational cost. Next I chose a 64 × 480 sensing matrix, which is significantly underdetermined, and a signal vector of sparsity 8, due to the decay properties of the function. Then I created the dictionary $D$ with $N = 64$, $M = 480$, and $\epsilon = 6$ (the decay variable of the Gaussian function) via the following MATLAB code. [5]
+
+```matlab
+t = (-((N/2) - 1) : (N/2)).';
+for k = -((M/2) - 1) : (M/2)
+    D(:, k + (M/2)) = exp(-(t - (0.25*k)).^2 / epsilon^2);
+end
+```
+
+After defining the dictionary $D$, I needed to determine the band. The band is based on the choice of the number of columns of the sensing
+---PAGE_BREAK---
+
+FIGURE 5
+
+matrix as well as the shift of the Gaussian function. I wrote a function called findband which determines the length of the band by first comparing a Gaussian curve to the threshold $3.5/\sqrt{\text{row length}}$ and then determining the index where the Gaussian curve is below the threshold. The code is as follows:
+
+```matlab
+function [ radius ] = findband( M, N, CoMatrix )
+% Normalised central row of the coherence matrix
+curve = abs(CoMatrix(M/2,:) / max(CoMatrix(M/2,:)));
+% Distance of the curve from the threshold 3.5/sqrt(N)
+diff = curve - (3.5/sqrt(N));
+% Band radius: first index right of the peak where the curve drops below the threshold
+radius = find(diff(M/2:end) < 0, 1) - 1;
+end
+```
+
+One can choose $y > 0$ such that $-y$ is a regular value of $f$. One can show that there exists a closed set $\Gamma \subset f^{-1}(-y)$ such that a non-trivial trajectory of the gradient field is attracted by the origin if and only if it intersects $f^{-1}(-y)$ transversally at a point belonging to $\Gamma$. Thus one may equip the set of non-trivial trajectories attracted by 0 with the topology induced from $\Gamma$.
+
+By [18], the Čech–Alexander cohomology groups $\tilde{H}^*(\Gamma)$ are isomorphic to the cohomology groups $H^*(F_y)$ of the real Milnor fibre $F_y = \{x \in f^{-1}(-y) \mid |x| \le d\}$, where $0 < y \ll d \ll 1$. A more general version concerning analytic functions on manifolds is presented in [19].
+
+By [8], if $n=3$ and $f$ is harmonic then $\Gamma$ may be stratified.
+
+Kurdyka et al. [11], in the course of proving Thom's conjecture, showed in particular that to each trajectory attracted by 0 (and so to each point in $\Gamma$) one may associate an element of a finite subset $L' \subset \mathbb{Q}^+ \times \mathbb{R}_-$. This way we obtain a natural partition
+
+$$ \Gamma = \bigcup_{(l,a) \in L'} \Gamma(l, a). $$
+
+2000 Mathematics Subject Classification: Primary 37B35, 58K05; Secondary 14B05, 34C08.
+
+Key words and phrases: gradient, characteristic exponents, asymptotic critical values.
+---PAGE_BREAK---
+
+In $\mathbb{Q}^+ \times \mathbb{R}_-$ we may introduce the lexicographic order, so we may enumerate the elements of $L'$ according to this order: $L' = \{(l_1, a_1), \dots, (l_j, a_j), \dots, (l_s, a_s)\}$.
+
+We will show that
+
+$$\Gamma(l_1, a_1) \subset \dots \subset \bigcup_{j=1}^{i} \Gamma(l_j, a_j) \subset \dots \subset \bigcup_{j=1}^{s} \Gamma(l_j, a_j) = \Gamma$$
+
+is a filtration of $\Gamma$ by closed sets, and that there are regular values $0 < z_1 < \dots < z_i < \dots < z_s$ of the distance function $|x|$ restricted to the Milnor fibre $F_y$ such that each inclusion
+
+$$\bigcup_{j=1}^{i} \Gamma(l_j, a_j) \hookrightarrow \{x \in F_y \mid |x| \leq z_i\}$$
+
+induces an isomorphism of Čech–Alexander cohomology groups. Hence one may apply techniques of differential topology to investigate the topology of the partition $\{\Gamma(l_i, a_i)\}$ of the set of trajectories attracted by the origin.
+
+Among the references we list several papers [2–7, 9, 10, 12, 13, 15, 17, 20–22] devoted to geometric and topological properties of solutions of the gradient equation.
+
+**2. Preliminaries.** Let $f : \mathbb{R}^n, 0 \to \mathbb{R}, 0$ be an analytic function defined in a neighbourhood of the origin, having a critical point at 0. We consider the gradient $\nabla f$ of $f$. We will denote by $x(t)$ a trajectory of this vector field, that is, a curve satisfying
+
+$$\dot{x}(t) = \nabla f(x(t)).$$
+
+It is easy to see that $\frac{d}{dt}f(x(t)) > 0$ unless $x(t)$ is constant, that is, $f$ is increasing along the trajectory $x(t)$. For $x$ with $f(x) \le 0$ and sufficiently close to the origin, we denote by $\tau_x$ the set of points on the trajectory passing through $x$ belonging to $\{y \mid f(y) \ge f(x)\}$. Denote by $\omega(x) \in f^{-1}(0)$ either the intersection point of $\tau_x$ and $f^{-1}(0)$ or the limit point of the trajectory if it tends to $f^{-1}(0)$. It is well known that $\omega$ is a strong deformation retraction.
+
+There is a neighbourhood $U_0$ of the origin, $0 < \varrho < 1$ and $c_\varrho, c_f > 0$ such that
+
+$$ (2.1) \qquad |\nabla f(x)| \geq c_\varrho |f(x)|^\varrho, $$
+
+$$ (2.2) \qquad |x| |\nabla f(x)| \geq c_f |f(x)|, $$
+
+for $x \in U_0$. Inequality (2.1) is due to Łojasiewicz (see [14]), and (2.2) is known as the Bochnak–Łojasiewicz inequality (see [1]). In particular, as a consequence of (2.1) we have $(\nabla f)^{-1}(0) \subseteq f^{-1}(0)$.
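The inclusion stated at the end of the paragraph is immediate from (2.1) alone: if $\nabla f(x) = 0$ for some $x \in U_0$, then

$$ 0 = |\nabla f(x)| \ge c_\varrho |f(x)|^\varrho \quad \Longrightarrow \quad f(x) = 0, $$

so $x \in f^{-1}(0)$.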
+---PAGE_BREAK---
+
+The gradient $\nabla f(x)$ splits into its radial component $\frac{\partial f}{\partial r}(x) \frac{x}{|x|}$ and the spherical one $\nabla' f(x) = \nabla f(x) - \frac{\partial f}{\partial r}(x) \frac{x}{|x|}$. We shall denote $x/|x|$ by $\partial/\partial r$ and $\partial f/\partial r$ by $\partial_r f$. We will also often write $r$ instead of $|x|$. Then
+
+$$ \nabla f = \nabla' f + \partial_r f \frac{\partial}{\partial r} $$
+
+and
+
+$$ |\nabla f|^2 = |\nabla' f|^2 + |\partial_r f|^2. $$
+
+Now let $y, d$ be such that $0 < y \ll d \ll 1$, and $-y \in \mathbb{R}$ is a regular value of $f$. We call the set $F_y = \{x \mid |x| \le d, f(x) = -y\}$ the *real Milnor fibre* of $f$. It is either an $(n-1)$-dimensional compact manifold with boundary or an empty set (see [16]). If $f(x) \le -y$ and $0 \in \bar{\tau}_x$ then $\tau_x \cap f^{-1}(-y) \neq \emptyset$, because the function is increasing along the trajectory. The intersection is transversal and consists exactly of one point. This justifies
+
+**DEFINITION.** $\Gamma = \{x \in F_y \mid 0 \in \bar{\tau}_x\} = \{x \in F_y \mid \omega(x) = 0\}$.
+
+Nowel and the second-named author showed that each trajectory attracted by the origin intersects $F_y$ at a point in $\Gamma$ and the topology of the set $\Gamma$ is related to the topology of the Milnor fibre. We have (see [18])
+
+**THEOREM 1.** The inclusion $\Gamma \hookrightarrow F_y$ induces an isomorphism
+
+$$ \tilde{H}^*(\Gamma) \simeq H^*(F_y), $$
+
+where $\tilde{H}^*$ denotes the Čech–Alexander cohomology groups.
+
+**3. Invariants associated with trajectories.** In order to say more about the topology of the set $\Gamma$, we need some notions introduced in [11]. For $\varepsilon > 0$ define
+
+$$ W^\varepsilon = \{ x \mid f(x) \neq 0, \varepsilon |\nabla' f| \leq |\partial_r f| \}. $$
+
+Kurdyka et al. have defined the characteristic exponents, which are characterised by the following proposition ([11, Proposition 4.2]).
+
+**PROPOSITION 2.** There exists a finite subset of positive rationals $L \subset \mathbb{Q}^+$ such that for any sequence $W^\varepsilon \ni x \to 0$ there is a subsequence $W^\varepsilon \ni x' \to 0$ and $l \in L$ such that
+
+$$ \frac{|x'| |\partial_r f(x')|}{f(x')} \to l. $$
+
+In particular, as a germ at the origin, each $W^\varepsilon$ is the disjoint union
+
+$$ W^\varepsilon = \bigcup_{l \in L} W_l^\varepsilon, $$
+
+where
+
+$$ W_l^\varepsilon = \left\{ x \in W^\varepsilon \,\middle|\, \left| \frac{|x|\, \partial_r f}{f} - l \right| \le |x|^\delta \right\}, $$
+---PAGE_BREAK---
+
+for $\delta > 0$ sufficiently small. Moreover, there exist constants $0 < c_\varepsilon < C_\varepsilon$, which depend on $\varepsilon$, such that
+
+$$c_\varepsilon \leq \frac{|f|}{|x|^l} \leq C_\varepsilon \quad \text{on } W_l^\varepsilon.$$
+
+Fix $l > 0$, not necessarily in $L$, and consider $F = f/|x|^l$ defined in the complement of the origin. We say that $a \in \mathbb{R}$ is an asymptotic critical value of $F$ at the origin if there exists a sequence $x \to 0$, $x \neq 0$, such that
+
+(a) $|x| |\nabla F(x)| \to 0,$
+
+(b) $F(x) \to a.$
+
+By [11, Propositions 5.1 and 5.4] we have
+
+**PROPOSITION 3.** The set of asymptotic critical values of $F = f/|x|^l$ is finite. The real number $a \neq 0$ is an asymptotic critical value if and only if there exists a sequence $x \to 0$, $x \neq 0$, such that
+
+(a') $\frac{|\nabla' f(x)|}{|\partial_r f(x)|} \to 0,$
+
+(b) $F(x) \to a.$
+
+By the above proposition, the set
+
+$$L' = \{(l, a) \mid l \in L, a < 0 \text{ is an asymptotic critical value of } f/|x|^l\}$$
+
+is a finite subset of $\mathbb{Q}^+ \times \mathbb{R}_-$.
+
+For a given characteristic exponent $l \in L$ there can be more than one asymptotic critical value $a$. By Section 6 of [11] we have
+
+**THEOREM 4.** For every trajectory $x(t) \to 0$ of the gradient vector field there exists a unique pair $(l, a) \in L'$ such that $f(x(t))/|x(t)|^l \to a$.
+
+**4. Partition of the set of trajectories**
+
+**DEFINITION.** There is a natural partition of $\Gamma$ associated with $L'$. Namely for $(l, a) \in L'$,
+
+$$\Gamma(l, a) = \{x \in \Gamma \mid f(x(t))/|x(t)|^l \to a \text{ on the trajectory } \tau_x\}.$$
+
+**DEFINITION.** In the set $\mathbb{Q}^+ \times \mathbb{R}_-$ we may introduce the lexicographic order
+
+$$(l, a) \le (l', a') \quad \text{if } l < l', \text{ or } l = l' \text{ and } a \le a'.$$
+
+It is obvious that $(l, a) \le (l', a')$ if and only if $a|x|^l \le a'|x|^{l'}$ near the origin.
+
+We enumerate the elements of $L'$ according to this order.
+
+Let $\langle \cdot, \cdot \rangle$ denote the standard inner product in $\mathbb{R}^n$. We have the following
+
+**LEMMA 5.** If $(l, a) \in (\mathbb{Q}^+ \times \mathbb{R}_-) \setminus L'$ then
+
+$$\langle \nabla(f - a|x|^l)(x), \nabla f(x) \rangle > 0$$
+
+for $x \in (f - a|x|^l)^{-1}(0) \setminus \{0\}$ near 0.
+---PAGE_BREAK---
+
+*Proof.* Suppose, contrary to our claim, that there is a sequence $x \to 0$, $x \neq 0$, such that $f(x) - a|x|^l = 0$ and
+
+$$ (4.3) \quad \begin{aligned} & 0 \ge \langle \nabla(f - a|x|^l), \nabla f \rangle \\ &= |\nabla f|^2 - \left\langle la|x|^{l-1} \frac{\partial}{\partial r}, \nabla'f + \partial_r f \frac{\partial}{\partial r} \right\rangle \\ &= |\nabla f|^2 - lar^{l-1}\partial_r f = |\nabla f|^2 - \frac{lf}{r} \partial_r f. \end{aligned} $$
+
+Using (2.2) we have
+
+$$ l|f| |\partial_r f| \ge r |\nabla f|^2 \ge c_f |f| |\nabla f|. $$
+
+Hence
+
+$$ (4.4) \quad \frac{c_f}{l} |\nabla f| \le |\partial_r f|, $$
+
+which means that $x \in W^{c_f/l}$. By Proposition 2, there are $l' \in L$ and a subsequence $x'$ such that
+
+$$ \frac{|x'| \partial_r f}{f} \to l'. $$
+
+All $x'$ lie in $W_{l'}^{c_f/l}$, hence
+
+$$ c \le \frac{|f|}{|x'|^{l'}} \le C, $$
+
+where $c = c_{cf/l}$ and $C = C_{cf/l}$. Since $f(x') = a|x'|^l$, $l = l'$ is a characteristic exponent.
+
+We shall now prove that $a$ is an asymptotic critical value. Let us transform the inequality (4.3):
+
+$$ 0 \ge |\nabla' f|^2 + |\partial_r f|^2 - \frac{lf}{r} \frac{|\partial_r f|^2}{\partial_r f} = |\nabla' f|^2 + |\partial_r f|^2 \left(1 - \frac{lf}{r \partial_r f}\right). $$
+
+Hence
+
+$$ (4.5) \quad \frac{|\nabla' f|^2}{|\partial_r f|^2} \le \left|1 - \frac{lf}{r \partial_r f}\right|. $$
+
+Since
+
+$$ \frac{r \partial_r f}{f} = \frac{|x'| \partial_r f(x')}{f(x')} \to l' = l, $$
+
+the right-hand side of the inequality (4.5) tends to 0. So does the left-hand side and we have
+
+$$ \frac{|\nabla' f|}{|\partial_r f|}(x') \to 0 \quad \text{and} \quad \frac{f(x')}{|x'|^l} = a. $$
+
+By Proposition 3, $a$ is an asymptotic critical value of $f/r^l$. ■
+---PAGE_BREAK---
+
+Take $(l, a) \in (\mathbb{Q}^{+} \times \mathbb{R}_{-}) \setminus L'$ and $y > 0$ close to 0 such that $-y$ is a regular value of $f$. Define
+
+$$\Theta(l, a) = F_y \cap \{f - a|x|^l \le 0\} = F_y \cap \{|x| \le (y/(-a))^{1/l}\}.$$
+
+We will show a relation between the cohomologies of $\Theta(l, a)$ and
+
+$$\tilde{\Gamma}(l, a) = \bigcup_{(l_i, a_i) < (l, a)} \Gamma(l_i, a_i), \quad \text{where } (l_i, a_i) \in L'.$$
+
+**THEOREM 6.** For every $(l, a) \in (\mathbb{Q}^{+} \times \mathbb{R}_{-}) \setminus L'$ and every $y > 0$ small enough, $\tilde{\Gamma}(l, a)$ is closed, and there is an inclusion
+
+$$\tilde{\Gamma}(l, a) \hookrightarrow \Theta(l, a),$$
+
+which induces an isomorphism
+
+$$\check{H}^*(\tilde{\Gamma}(l, a)) \cong H^*(\Theta(l, a)).$$
+
+**LEMMA 7.** For every $\varepsilon > 0$ there exists $\eta = \eta(\varepsilon) > 0$ such that if $|x| < \eta$ then for every point $y$ on $\tau_x$ between $x$ and $\omega(x)$ we have $|y| < \varepsilon$.
+
+*Proof.* For $a \in \tau_x$ denote by $\ell(x, a)$ the length of the trajectory between $x$ and $a$. From the Łojasiewicz inequality (2.1) it follows (see [11]) that for $x$ close to the origin
+
+$$\ell(x, a) \le c_\varrho (1 - \varrho)^{-1} \bigl[|f(x)|^{1-\varrho} - |f(a)|^{1-\varrho}\bigr].$$
+
+As $a \to \omega(x)$ we get
+
+$$\ell(x, \omega(x)) \le c_\varrho (1 - \varrho)^{-1} |f(x)|^{1-\varrho} = c_1 |f(x)|^{1-\varrho}.$$
+
+By continuity of $f$ there exists $\eta$, $0 < \eta < \varepsilon/2$, such that for $|x| < \eta$,
+
+$$\ell(x, \omega(x)) \le c_1 |f(x)|^{1-\varrho} < \varepsilon/2.$$
+
+That is, for $x'$ between $x$ and $\omega(x)$,
+
+$$|x'| \le |x| + \ell(x, x') < \frac{\varepsilon}{2} + \frac{\varepsilon}{2} = \varepsilon. \blacksquare$$
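The length estimate used at the start of the proof is the standard argument from [11]; here is a sketch, assuming the flow convention $\dot{x} = \nabla f$ (so that $f$ increases to $0$ along trajectories) and writing the Łojasiewicz inequality (2.1) as $|\nabla f| \ge c|f|^{\varrho}$ with $c_\varrho = 1/c$. Parametrizing the trajectory by arclength $s$, one has $df/ds = |\nabla f|$, hence

$$\frac{d}{ds}|f|^{1-\varrho} = -(1-\varrho)\,|f|^{-\varrho}\,\frac{df}{ds} = -(1-\varrho)\,|f|^{-\varrho}\,|\nabla f| \le -(1-\varrho)\,c,$$

and integrating from $x$ to $a$ along the trajectory gives $\ell(x, a) \le c_\varrho (1-\varrho)^{-1}\bigl[|f(x)|^{1-\varrho} - |f(a)|^{1-\varrho}\bigr]$.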
+
+Define $A_{\le} = \{x | -y \le f(x) \le a|x|^l\}$ and $A_= = \{x | -y \le f(x) = a|x|^l\}$. If $y$ is small enough then $A_{\le}$ is bounded by $A_=$ and $\Theta(l, a)$. By Corollary 6, $A_=$ and $\Theta(l, a)$ intersect transversally.
+
+If $x \in \Theta(l, a)$ then $\nabla f(x)$ is normal to $\Theta(l, a)$ and points into $A_{\le}$. If $x \in A_=\setminus\{0\}$ then $\nabla(f - a|x|^l)$ is normal to $A_=$ and points away from $A_{\le}$.
+
+We consider a mapping $\gamma : \Theta(l, a) \to A_=$ such that $\gamma(x)$ is the point of intersection of the trajectory $\tau_x$ with the set $A_=$ or $\gamma(x) = \omega(x) = 0$ if $\tau_x$ does not intersect $A_=$.
+
+**LEMMA 8.** $\gamma$ is well defined, and $\gamma^{-1}(0) = \tilde{\Gamma}(l, a).$
+
+*Proof.* Consider trajectories starting from $\Theta(l, a)$. Some of them will stay in the set $A_{\le}$ and others will leave it forever. (A trajectory cannot get back to $A_{\le}$, because for a point $x \in A_=\setminus\{0\}$ we have $\langle\nabla(f - a|x|^l)(x), \nabla f(x)\rangle > 0$.
+---PAGE_BREAK---
+
+The angle between the gradients $\nabla(f - a|x|^l)(x)$ and $\nabla f(x)$ is less than $\pi/2$, so the trajectory passing through $x$ leaves $A_\le$.)
+
+Consider a trajectory $\tau_x$ which stays in $A_\le$. By the Łojasiewicz inequality (2.1), $\nabla f$ does not vanish on $A_\le \setminus \{0\}$. Hence $x(t) \to 0$, i.e. $\gamma(x) = \omega(x) = 0$ and $x \in \Gamma$. That is, we proved $\gamma$ is well defined. By Theorem 4 there is $(l_i, a_i) \in L'$ such that $f(x(t))/|x(t)|^{l_i} \to a_i$.
+
+The trajectory stays inside $A_\le$, so
+
+$$f(x(t)) - a|x(t)|^l \le 0.$$
+
+For every $\varepsilon > 0$, if $x(t)$ is sufficiently close to the origin we have
+
+$$(a_i - \varepsilon)|x(t)|^{l_i} < f(x(t)) \le a|x(t)|^l.$$
+
+Therefore $l_i < l$ or $l_i = l$ and $a_i - \varepsilon < a$ for every $\varepsilon > 0$. Hence
+
+$$(l_i, a_i) \le (l, a).$$
+
+Since $(l, a) \notin L'$, $(l_i, a_i) < (l, a)$.
+
+Now consider a trajectory $\tau_x$ which leaves $A_\le$, i.e. $\gamma(x) \ne 0$. Then for $t$ large enough we have $f(x(t)) > a|x(t)|^l$. If $\tau_x$ starts from $\Gamma$, then $x(t) \to 0$ and there is $(l_i, a_i) \in L'$ such that $f(x(t))/|x(t)|^{l_i} \to a_i$. For every $\varepsilon > 0$,
+
+$$(a_i + \varepsilon)|x(t)|^{l_i} > f(x(t)) > a|x(t)|^l$$
+
+if $x(t)$ is sufficiently close to the origin. Applying arguments similar to the above, we have $(l_i, a_i) > (l, a)$. The same holds for a trajectory which starts from $\Gamma$ outside $\Theta(l, a)$: it cannot enter the set $A_\le$, and hence the pair $(l_i, a_i)$ corresponding to that trajectory is greater than $(l, a)$. $\blacksquare$
+
+**LEMMA 9.** $\gamma$ is continuous, and $\gamma$ restricted to $\Theta(l, a) \setminus \tilde{\Gamma}(l, a)$ is a homeomorphism onto $\text{Im}\,\gamma \setminus \{0\} = A_= \setminus \{0\}$. In particular, $\tilde{\Gamma}(l, a)$ is compact.
+
+*Proof.* Consider $x \in \Theta(l, a)$ such that $\gamma(x) \neq 0$. Then $\tau_x$ is transversal to $\Theta(l, a)$ at $x$ and to $A_=$ at $\gamma(x)$, therefore $\gamma$ is a Poincaré mapping in some neighbourhood of $x$. Hence $\gamma$ is a local homeomorphism at $x$.
+
+Now take $x$ such that $\gamma(x) = 0$. Then $\tau_x \subset A_\le$ and $0 \in \bar{\tau}_x$. Fix an $\varepsilon > 0$. There is $x' \in \tau_x$ such that $|x'| < \eta/2$, where $\eta = \eta(\varepsilon)$ comes from Lemma 7. Now consider a neighbourhood $V$ of $x'$ of diameter $\eta/2$ contained in $A_\le$. Reversing trajectories we get an open neighbourhood $W \subset \Theta(l, a)$ of $x$ such that $|\gamma(y)| < \varepsilon$ for $y \in W$. $\blacksquare$
+
+**LEMMA 10.** For every open neighbourhood $U$ of $\tilde{\Gamma}(l, a)$ in $\Theta(l, a)$, $\gamma(U)$ is an open neighbourhood of $0$ in $\text{Im}\,\gamma = A_=$.
+
+*Proof.* Rewrite the proof of Lemma 9 in [18] substituting $\Theta(l, a)$ for $F_r$ and $\text{Im}\,\gamma$ for $Z_r$. $\blacksquare$
+
+*Proof of Theorem 6.* The inclusion $\tilde{\Gamma}(l, a) \subseteq \Theta(l, a)$ follows from the fact that $\tilde{\Gamma}(l, a) = \gamma^{-1}(0)$ as stated in Lemma 8.
+---PAGE_BREAK---
+
+In order to prove that the inclusion induces an isomorphism of Čech–Alexander cohomology groups, we will construct a descending family $\Theta(l, a) = U_1 \supset U_2 \supset \dots$ of open neighbourhoods of $\tilde{\Gamma}(l, a)$ in $\Theta(l, a)$, which satisfies
+
+(u1) every inclusion $U_{n+1} \subset U_n$ is a homotopy equivalence,
+
+(u2) for every neighbourhood $U$ of $\tilde{\Gamma}(l, a)$ in $\Theta(l, a)$ there is $n$ such that $U_n \subset U$.
+
+The set $\text{Im}\,\gamma = A_= = \{x \mid f = a|x|^l, |x| \le (y/(-a))^{1/l}\}$ is, for $y$ small enough, homeomorphic to a cone with vertex at 0, so there is a descending family $A_= = V_1 \supset V_2 \supset \dots$ of open neighbourhoods of 0 in $A_=$ such that every inclusion is a homotopy equivalence and for every open neighbourhood $V$ of 0 in $A_=$ there is $n$ such that $V_n \subset V$. We put $U_n = \gamma^{-1}(V_n)$. Clearly $\{U_n\}$ is a family of open neighbourhoods of $\tilde{\Gamma}(l, a)$ in $\Theta(l, a)$. The mapping $\gamma$ restricted to $\Theta(l, a) \setminus \tilde{\Gamma}(l, a)$ is a homeomorphism onto $A_= \setminus \{0\}$, hence (u1) holds. If $U$ is an open neighbourhood of $\tilde{\Gamma}(l, a)$ then by Lemma 10, $\gamma(U)$ is an open neighbourhood of 0. There is $n$ such that $V_n \subset \gamma(U)$; then $U_n \subset U$, so (u2) holds.
+
+As the family $\{U_n\}$ is cofinal in the family of all open neighbourhoods of $\tilde{\Gamma}(l, a)$ in $\Theta(l, a)$ ordered by $\supseteq$, we have an isomorphism of direct limits
+
+$$
+\varinjlim_{U} H^*(U) \cong \varinjlim_{n} H^*(U_n) = \check{H}^*(\tilde{\Gamma}(l, a)).
+$$
+
+Since $H^*(U_n) \cong H^*(\Theta(l, a))$ by (u1), the theorem holds. $\blacksquare$
+
+For given $l \in \mathbb{Q}^+$ and $y$, $(y/(-a))^{1/l}$ is a regular value of $|x|_{F_y}$ for almost all $a \in \mathbb{R}_-$. In that case $\Theta(l, a)$ is either void or a compact $(n-1)$-manifold with boundary.
+
+**PROPOSITION 11.** For each $(l, a) \in (\mathbb{Q}^+ \times \mathbb{R}_-) \setminus L'$ and each $y > 0$ small enough, $z = (y/(-a))^{1/l}$ is a regular value for $|x|_{F_y}$ and the inclusion
+
+$$
+\tilde{\Gamma}(l, a) = \bigcup_{(l_i, a_i) < (l, a)} \Gamma(l_i, a_i) \hookrightarrow F_y \cap \{|x| \le z\}
+$$
+
+induces an isomorphism of Čech-Alexander cohomology groups.
+
+*Proof.* Consider the set of critical values of $|x|_{F_y}$. For a given $y$ we have finitely many critical values $w_1(y), \dots, w_p(y)$. We can treat $w_j(y)$ as a real function. The graph of $w_j$ is a subanalytic set. Since it lies in the plane, it is semianalytic. Hence we can write the Puiseux expansion for each $w_j$ (see [14]):
+
+$$
+w_j(y) = by^m + \cdots \quad (b > 0, m \in \mathbb{Q}_{+}).
+$$
+
+We will show that $(1/m, -b^{-1/m}) \in L'$.
+
+By the curve selection lemma we can choose a curve $\xi(r)$ of critical points corresponding to $w_j$. We parametrize the curve by the distance to the origin.
+---PAGE_BREAK---
+
+Put $y(r) = -f(\xi(r))$. That is, $\xi(r) \in F_{y(r)}$ is a critical point of $|x|_{F(y(r))}$ such that
+
+$$ (4.6) \qquad r = |\xi(r)| = w_j(y(r)) = b(y(r))^m + \dots $$
+
+We can also write a Puiseux expansion of $f$ along this curve,
+
+$$ f(\xi(r)) = -\alpha r^q + \dots \quad (\alpha > 0, q \in \mathbb{Q}_+). $$
+
+Thus
+
+$$ (4.7) \qquad y(r) = \alpha r^q + \dots $$
+
+By (4.7) and (4.6) we get
+
+$$ (4.8) \qquad r = b(\alpha r^q)^m + \dots = b\alpha^m r^{qm} + \dots $$
+
+along the curve $\xi(r)$. Hence $qm = 1$ and $b\alpha^m = 1$. That is,
+
+$$ (4.9) \qquad f(\xi(r)) = -b^{-1/m}r^{1/m} + \dots $$
+
+The curve $\xi(r)$ consists of critical points of $|x|_{F_{y(r)}}$ and therefore on $\xi(r)$ we have $|\nabla'f| \equiv 0$, $|\nabla f| = |\partial_r f|$. For every $\varepsilon > 0$ we have $\varepsilon|\nabla'f| < |\partial_r f|$, and that means the curve $\xi$ lies in every $W^\varepsilon$, so there exists a characteristic exponent $l'$ such that $\xi$ lies in $W_{l'}^\varepsilon$.
+
+Since $f(\xi(r))/|\xi(r)|^{1/m} \to -b^{-1/m}$, it follows that $l' = 1/m \in L$ by the last statement of Proposition 2. By Proposition 3, $-b^{-1/m}$ is the corresponding asymptotic critical value for $f/r^{1/m}$. In particular, $(1/m, -b^{-1/m}) \in L'$. Assume that $(l, a) \notin L'$. If $y$ is small enough, then $(y/(-a))^{1/l} = (-a)^{-1/l}y^{1/l}$ is different from any $w_j(y)$. Hence it is a regular value for $|x|_{F_y}$.
+
+Now it is enough to apply Theorem 6. ■
+
+The proof above gives us even more:
+
+**THEOREM 12.** Let $f: (\mathbb{R}^n, 0) \to (\mathbb{R}, 0)$ be an analytic function defined in a neighbourhood of the origin, having a critical point at 0. For each $y$ small enough there is a finite sequence $0 < z_1 < \dots < z_i < \dots < z_s$ of regular values of $|x|_{F_y}$ such that
+
+$$ \Gamma(l_1, a_1) \subset \dots \subset \bigcup_{j=1}^{i} \Gamma(l_j, a_j) \subset \dots \subset \bigcup_{j=1}^{s} \Gamma(l_j, a_j) = \Gamma $$
+
+is a filtration of $\Gamma$ by closed sets, and the inclusions
+
+$$ \bigcup_{j=1}^{i} \Gamma(l_j, a_j) \hookrightarrow \{x \in F_y \mid |x| \le z_i\} $$
+
+induce isomorphisms of Čech-Alexander cohomology groups. One can take
+$z_i = (y/(-a))^{1/l}$, where $(l_i, a_i) < (l, a) < (l_{i+1}, a_{i+1})$.
+
+*Proof.* Let $s$ be the cardinality of $L'$. As we have seen in the proof of Proposition 11, if $(l, a) \notin L'$ then $(y/(-a))^{1/l}$ is a regular value of $|x|_{F_y}$. Since
+---PAGE_BREAK---
+
+$L'$ is totally ordered by the lexicographic ordering, for every $i$ we can choose a pair $(l, a)$ such that
+
+$$ (l_i, a_i) < (l, a) < (l_{i+1}, a_{i+1}), $$
+
+where $(l_{s+1}, a_{s+1})$ is greater than any pair in $L'$. Set $z_i(y) = (y/(-a))^{1/l}$. One can easily see that $z_i < z_{i+1}$ and $z_i(y) \neq w_j(y)$ for sufficiently small $y$.
+
+By Proposition 11, these inclusions induce isomorphisms of the Čech–Alexander cohomology groups. $\blacksquare$
+
+The above theorem shows that applying well known methods of differential topology and Morse theory to the distance function $|x|$ on the Milnor fibre may provide important information about the topology of families of trajectories of an analytic gradient vector field with given characteristic exponent and asymptotic critical value.
+
+**Acknowledgments.** The authors wish to express their gratitude to the referee for helpful comments.
+
+Research partially supported by grant BW 5100-5-0148-4, and the European Community IHP-Network RAAG (HPRN-CT-2001-00271).
+
+References
+
+[1] J. Bochnak and S. Łojasiewicz, *A converse to the Kuiper-Kuo theorem*, in: Proc. Liverpool Singularities Symposium I, Lecture Notes in Math. 192, Springer, New York, 1971, 254-261.
+
+[2] F. Cano, R. Moussu et F. Sanz, *Oscillation, spiralement, tourbillonnement*, Comment. Math. Helv. 75 (2000), 284-318.
+
+[3] O. Cornea, *Homotopical dynamics III: Real singularities and Hamiltonian flows*, Duke Math. J. 209 (2001), 183-204.
+
+[4] D. D'Acunto and V. Grandjean, *On gradient at infinity of real polynomials*, preprint.
+
+[5] N. Dancer, *Degenerate critical points, homotopy indices and Morse inequalities*, J. Reine Angew. Math. 382 (1984), 1-22.
+
+[6] P. Fortuny and F. Sanz, *Gradient vector fields do not generate twister dynamics*, J. Differential Equations 174 (2001), 91-100.
+
+[7] V. Grandjean, *On the limit set at infinity of gradient of semialgebraic function*, preprint.
+
+[8] P. Goldstein, *Flows of gradients of harmonic functions on $\mathbb{R}^3$*, PhD. Thesis, Warsaw Univ., 2004 (in Polish).
+
+[9] F. Ichikawa, *Thom's conjecture on singularities of gradient vector fields*, Kodai Math. J. 15 (1992), 134-140.
+
+[10] K. Kurdyka, *On the gradient conjecture of R. Thom*, in: Seminari di Geometria 1998-1999, Università di Bologna, Istituto di Geometria, Dipartimento di Matematica, 2000, 143-151.
+
+[11] K. Kurdyka, T. Mostowski and A. Parusiński, *Proof of the gradient conjecture of R. Thom*, Ann. of Math. 152 (2000), 763-792.
+
+[12] K. Kurdyka and A. Parusiński, *$w_f$-stratification of subanalytic functions and the Łojasiewicz inequality*, C. R. Acad. Sci. Paris Sér. I 318 (1994), 129-133.
+---PAGE_BREAK---
+
+[13] J.-M. Lion, R. Moussu et F. Sanz, *Champs de vecteurs analytiques et champs de gradients*, Ergodic Theory Dynam. Systems 22 (2002), 525-534.
+
+[14] S. Łojasiewicz, *Ensembles semi-analytiques*, IHES, 1965.
+
+[15] —, *Sur les trajectoires du gradient d'une fonction analytique*, in: Seminari di Geometria 1982-1983, Università di Bologna, Istituto di Geometria, Dipartimento di Matematica, 1984, 115-117.
+
+[16] J. Milnor, *Singular Points of Complex Hypersurfaces*, Ann. of Math. Stud. 61, Princeton Univ. Press, Princeton, NJ, 1968.
+
+[17] R. Moussu, *Sur la dynamique des gradients. Existence de variétés invariantes*, Math. Ann. 307 (1997), 445-460.
+
+[18] A. Nowel and Z. Szafraniec, *On trajectories of analytic gradient vector fields*, J. Differential Equations 184 (2002), 215-223.
+
+[19] —, —, *On trajectories of analytic gradient vector fields on analytic manifolds*, Topol. Methods Nonlinear Anal., to appear.
+
+[20] F. Sanz, *Non-oscillating solutions of analytic gradient vector fields*, Ann. Inst. Fourier (Grenoble) 48 (1998), 1045-1067.
+
+[21] F. Takens, *The minimal number of critical points of a function on a compact manifold and the Lusternik-Schnirelmann category*, Invent. Math. 6 (1968), 197-244.
+
+[22] R. Thom, *Problèmes rencontrés dans mon parcours mathématique: un bilan*, Publ. Math. IHES 70 (1989), 200-214.
+
+Institute of Mathematics
+University of Gdańsk
+Wita Stwosza 57
+80-952 Gdańsk, Poland
+E-mail: adam.dzedzej@math.univ.gda.pl
+zbigniew.szafraniec@math.univ.gda.pl
+
+Received by the Editors 24.9.2004
+
+Revised 24.5.2005
+
+(1618)
\ No newline at end of file
diff --git a/samples/texts_merged/555274.md b/samples/texts_merged/555274.md
new file mode 100644
index 0000000000000000000000000000000000000000..ed201bb1dbe869622094a2f54c6db095bf235819
--- /dev/null
+++ b/samples/texts_merged/555274.md
@@ -0,0 +1,321 @@
+
+---PAGE_BREAK---
+
+Concurrent multiple impacts modelling: Case-study of a
+3-ball chain
+
+Vincent Acary, Bernard Brogliato
+
+► To cite this version:
+
+Vincent Acary, Bernard Brogliato. Concurrent multiple impacts modelling: Case-study of a 3-ball chain. Computational Fluid and Solid Mechanics. Second MIT Conference 2003, MIT, Jun 2003, Cambridge, United States. inria-00424298
+
+HAL Id: inria-00424298
+
+https://hal.inria.fr/inria-00424298
+
+Submitted on 14 Oct 2009
+
+**HAL** is a multi-disciplinary open access archive for the deposit and dissemination of scientific research documents, whether they are published or not. The documents may come from teaching and research institutions in France or abroad, or from public or private research centers.
+
+---PAGE_BREAK---
+
+Concurrent multiple impacts modelling:
+Case study of a 3-ball chain
+
+Vincent ACARY, Bernard BROGLIATO
+
+INRIA Rhône-Alpes Projet BIP
+
+ZIRST, 655 avenue de l'Europe, MONTBONNOT, 38334 ST ISMIER Cedex FRANCE
+
+tel:+33 (0)4 76 61 52 29 fax:+33 (0)4 76 61 54 77 Web: http://www.inrialpes.fr/bip/
+
+e-mail: Vincent.Acary@inrialpes.fr, Bernard.Brogliato@inrialpes.fr
+
+**Abstract**
+
+The aim of this work is to exhibit a multiple impact law for rigid body dynamical systems which meets the properties of closing the non-smooth dynamical equations and of corroborating experiments. This law is based on the *impulse correlation ratio*, which is computed from an equivalent regularised model with compliant contacts. Case studies of a 3-ball chain and an n-ball chain are delineated, and results on finite dimensional systems are stated.
+
+**Keywords**
+
+Non-smooth dynamics, multiple impacts, unilateral contact, numerical modelling, Newton's cradle.
+---PAGE_BREAK---
+
+# 1 Introduction and motivations
+
+Roughly speaking, a multiple impact can be defined as the occurrence of several shocks at the same time at various points of a mechanical system of rigid bodies. A chain of balls or Newton's cradle are academic examples of systems where concurrent multiple impacts occur.
+
+When a rigid body mechanical system with perfect unilateral constraints is subjected to impact, the definition of an *impact law* allows one to compute the post-impact velocities [2]. An impact law must possess the following properties:
+
+1. It closes the system of non-smooth equations of motion in the sense that it provides the post-impact velocities and the percussions for any pre-impact conditions. The fact that the dynamical system associated with an impact law is mathematically well-posed is an additional interesting feature.
+
+2. It corroborates the experimental observations, and the set of parameters which enter the law must be measurable and physically justified. In particular, the law must describe an energetic behavior which is compatible with the basic principles of thermodynamics, and must provide post-impact velocities in agreement with the experiments. Ideally, the parameters of the law may be correlated with the geometrical and material characteristics of the impacting bodies.
+
+The aim of this work is to exhibit an impact law which meets both the preceding conditions.
+
+When multiple impacts occur, most classical formulations do not respect both requirements 1 and 2 above. The algorithm of Han and Gilmore [7] provides a good energetic treatment, but the existence of a solution is not guaranteed [3]. Moreau [10] proposes an impact law, numerically efficient, which always provides a solution, but the post-impact velocities are not always satisfying from an experimental point of view. Frémond [6] presents an elegant and rigorous framework to add internal constraints in mechanical systems, which are consistent with thermodynamic principles. Motivated by an experimental work on Newton's cradle,
+---PAGE_BREAK---
+
+Ceanga and Hurmuzlu [4] postulate the existence of an *impulse correlation ratio* (ICR) $\alpha$ for a triplet of balls. With the help of energetic restitution coefficients, the post-impact velocities are experimentally shown to be well approximated. However, in the two latter works, a precise physical definition of the parameters of such laws is somewhat lacking.
+
+In this paper, we shed new light on the ICR by studying the regularised system of a 3-ball chain with elastic contact springs. The physical justification of this choice may be found in the work of Falcon et al. [5] on one-dimensional columns of beads. The industrial application of this work is pursued through a fruitful collaboration with Abadie [1] from Schneider Electric, concerning the virtual prototyping of circuit breaker mechanisms, where a fine modelling of impact is an essential step.
+
+# 2 Case study of a 3-ball chain regularised with elastic springs
+
+In this section, we focus our attention on 3-ball chains, which are very interesting examples of systems with multiple impacts. A hard ball behaves as a rigid body with massless springs at the contacts. In other words, the impact process between hard balls does not excite the natural modes of each ball. Furthermore, the Hertz theory of contact is very well correlated with experiments in the low velocity range [5].
+
+## 2.1 Rigid body model of a 3-ball chain
+
+A dynamical system of three rigid balls of equal mass $m$, described by their center of mass positions $q_1, q_2, q_3$ and velocities $v_1, v_2, v_3$ is considered. Each ball slides without friction on a straight line and the dynamics at the instant of impact is:
+
+$$ \begin{cases} m(v_1^+ - v_1) = -p_1 \\ m(v_2^+ - v_2) = p_1 - p_2 \\ m(v_3^+ - v_3) = p_2 \end{cases} \tag{1} $$
+---PAGE_BREAK---
+
+where $v_i, v_i^+$ are respectively the pre-impact and post-impact velocities and $p_i$ the impulses. Without loss of generality, the pre-impact velocity of the middle ball is chosen equal to zero ($v_2 = 0$). An additional law is given to address the energetic behaviour at impact. For the conservative case, we have:
+
+$$v_1^2 + v_3^2 = (v_1^+)^2 + (v_2^+)^2 + (v_3^+)^2 \quad (2)$$
+
+If a multiple impact occurs (i.e. the three balls are in contact at the same instant), this system is not mathematically well-posed. Indeed, for $[v_1, v_3] = [1, 0]$, one can easily check that both $[v_1^+, v_2^+, v_3^+] = [0, 0, 1]$ and $[v_1^+, v_2^+, v_3^+] = [-1/3, 2/3, 2/3]$ are solutions of this system, obtained by applying conservative Newton impact laws sequentially to the first or to the second pair of balls [3].
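This non-uniqueness is easy to check directly; a minimal sketch (equal unit masses assumed):

```python
# Pre-impact velocities [v1, v2, v3] = [1, 0, 0] (unit mass for each ball).
v_pre = [1.0, 0.0, 0.0]

# Two distinct post-impact candidates, both admissible under (1)-(2).
candidates = [
    [0.0, 0.0, 1.0],   # sequential treatment starting with the first pair
    [-1/3, 2/3, 2/3],  # sequential treatment starting with the second pair
]

for v_post in candidates:
    # Momentum conservation (equal masses).
    assert abs(sum(v_pre) - sum(v_post)) < 1e-12
    # Kinetic energy conservation (2).
    assert abs(sum(v * v for v in v_pre) - sum(v * v for v in v_post)) < 1e-12
```

Both candidates pass both conservation checks, so (1)-(2) alone cannot select a unique outcome.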
+
+If we introduce a value for the ICR, $\alpha = \frac{p_1}{p_2}$, the system becomes well-posed and the unique solution is given by:
+
+$$\left\{
+\begin{array}{l}
+v_1^+ = v_1 - \frac{\alpha}{(1-\alpha+\alpha^2)} \cdot [\alpha v_1 - v_3] \\
+v_2^+ = \frac{\alpha-1}{(1-\alpha+\alpha^2)} \cdot [\alpha v_1 - v_3] \\
+v_3^+ = v_3 + \frac{1}{(1-\alpha+\alpha^2)} \cdot [\alpha v_1 - v_3]
+\end{array}
+\right.
+\qquad (3)$$
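A small sketch of (3), with the hypothetical helper name `icr_post_impact`, checking the Newton's-cradle case and the conservation laws:

```python
def icr_post_impact(v1, v3, alpha):
    """Post-impact velocities from (3), with v2 = 0 and the ICR alpha = p1/p2."""
    s = (alpha * v1 - v3) / (1.0 - alpha + alpha**2)
    return (v1 - alpha * s, (alpha - 1.0) * s, v3 + s)

# Newton's-cradle behaviour: alpha = 1 transfers all the motion to the last ball.
assert icr_post_impact(1.0, 0.0, 1.0) == (0.0, 0.0, 1.0)

# Momentum and kinetic energy are conserved for any value of alpha.
for alpha in (0.3, 0.5, 1.0, 2.0):
    vp = icr_post_impact(1.0, -0.2, alpha)
    assert abs(sum(vp) - (1.0 - 0.2)) < 1e-12
    assert abs(sum(v * v for v in vp) - (1.0**2 + 0.2**2)) < 1e-12
```

Fixing $\alpha$ thus selects one point on the one-parameter family of momentum- and energy-conserving outcomes.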
+
+## 2.2 Numerical experiments
+
+Let us consider an equivalent regularised system for the 3-ball chain. The interaction between two balls is no longer rigid but realised through a Hertzian spring model. We are interested in the relative motion between the balls, therefore we choose to write down the dynamical system
+---PAGE_BREAK---
+
+in terms of the indentations $\delta_i = q_{i+1} - q_i$, as:
+
+$$
+\begin{equation}
+\begin{cases}
+m \ddot{\delta}_1 = -2f_1(\delta_1) + f_2(\delta_2) \\
+m \ddot{\delta}_2 = -2f_2(\delta_2) + f_1(\delta_1) \\
+0 \le \mathbf{f} \perp \mathbf{f} - \mathbf{K}(\mathbf{\delta}) \cdot \mathbf{\delta} \ge 0
+\end{cases}
+\tag{4}
+\end{equation}
+$$
+
+where **f** = [f₁, f₂]ᵀ represents the efforts between balls, **δ** = [δ₁, δ₂]ᵀ the vector of collected indentations and **K**(**δ**) is the stiffness matrix. For Hertzian contact, the stiffness matrix takes the form:
+
+$$
+K = \begin{bmatrix}
+k_1 (\delta_1)^{1/2} & 0 \\
+0 & k_2 (\delta_2)^{1/2}
+\end{bmatrix}
+\quad (5)
+$$
+
+where $k_1 = k$ and $k_2 = \kappa k, \kappa \in \mathbb{R}_+$ are the coefficients of stiffness related to some material and geometrical parameters.
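For illustration, the standard Hertz result $k = \tfrac{4}{3} E^* \sqrt{R_{\text{eff}}}$ can be used to estimate such a stiffness for two identical spheres; the formula and the material values below are assumptions added here (the text only states that $k$ is related to material and geometrical parameters):

```python
import math

def hertz_stiffness(E, nu, R1, R2):
    """Contact stiffness k in f = k * delta^(3/2) for two spheres (Hertz theory)."""
    E_eff = E / (2.0 * (1.0 - nu ** 2))   # identical materials on both sides
    R_eff = R1 * R2 / (R1 + R2)           # effective radius of the contact
    return (4.0 / 3.0) * E_eff * math.sqrt(R_eff)

# Steel balls of radius 10 mm (the material values used in Section 3).
k = hertz_stiffness(210e9, 0.3, 0.01, 0.01)
assert 1.0e10 < k < 1.2e10   # about 1.09e10 in SI units (N/m^(3/2))
```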
+
+The integration, which is intractable analytically, is performed with Scilab® for various initial relative velocities (choosing $v_2 = 0$). Actually, the solution is sufficiently smooth to allow the use of a traditional numerical ODE solver.
+
+Figure 1 shows the forces between balls versus time. One can remark that the process of collision is not trivial: several periods of contact may occur before the balls separate definitively (see Figures 1(b), 1(c)), or the contact period between two balls may not begin at the first instant of contact (see Figure 1(d)).
+
+If we define a multiple impact in regularised systems as the existence of a time interval where both contact forces are different from zero, all of these processes lead to multiple impacts. Naturally, the rigid limit in a mathematical sense requires additional care.
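The integration described above can be reproduced with any standard ODE solver; a minimal sketch in Python/SciPy (rather than Scilab), treating the balls as points that touch at zero distance, with assumed values $m = 1$, $k = 10^4$, $\kappa = 1$ and pre-impact velocities $(1, 0, 0)$:

```python
import numpy as np
from scipy.integrate import solve_ivp

m, k = 1.0, 1.0e4   # assumed mass and Hertz stiffness (kappa = 1: equal contacts)

def rhs(t, y):
    q1, q2, q3, v1, v2, v3 = y
    # Hertzian repulsion f = k * delta^(3/2), active only for positive overlap.
    f1 = k * max(q1 - q2, 0.0) ** 1.5
    f2 = k * max(q2 - q3, 0.0) ** 1.5
    return [v1, v2, v3, -f1 / m, (f1 - f2) / m, f2 / m]

y0 = [0.0, 0.0, 0.0, 1.0, 0.0, 0.0]   # all balls touching, first one moving
sol = solve_ivp(rhs, (0.0, 1.0), y0, rtol=1e-10, atol=1e-12)
v = sol.y[3:, -1]

assert abs(v.sum() - 1.0) < 1e-6          # momentum is conserved
assert abs((v ** 2).sum() - 1.0) < 1e-5   # purely elastic: energy is conserved
assert v[2] > 0.8                         # most of the motion is transmitted
assert all(abs(vi) > 1e-3 for vi in v)    # dispersion: no ball is left at rest
```

The final velocities differ from the pairwise-sequential outcome $[0, 0, 1]$: in particular the middle ball keeps a non-zero velocity.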
+---PAGE_BREAK---
+
+## 2.3 Analytical results for linear springs
+
+Let us now analyse the 3-ball chain with linear springs. This model is not consistent with the contact mechanics between two balls, but it is useful if we want to perform some analytical developments which are intractable with the Hertz model.
+
+For example, let us consider $v_1 > 0$, $v_2 = v_3 = 0$ with $\kappa > 1$. We can demonstrate that there exists a non-zero interval $[0, t^*]$ on which the system behaves as the following bilateral system:
+
+$$
+\begin{equation}
+\left\{
+\begin{aligned}
+m \ddot{\delta}_1 &= -2k\delta_1 + \kappa k\delta_2 \\
+m \ddot{\delta}_2 &= -2\kappa k\delta_2 + k\delta_1 \\
+\delta_1(0) &= \delta_2(0) = 0, \quad \dot{\delta}_1(0) = -v_1, \quad \dot{\delta}_2(0) = v_3 = 0
+\end{aligned}
+\right.
+\tag{6}
+\end{equation}
+$$
+
+On $[0, t^*]$, the solution of (6) is:
+
+$$
+\left\{
+\begin{aligned}
+\delta_1(t) &= \frac{-v_1}{\beta - \gamma} \left( \frac{\beta}{\omega_1} \sin(\omega_1 t) - \frac{\gamma}{\omega_2} \sin(\omega_2 t) \right) \\
+\delta_2(t) &= \frac{-\beta \gamma v_1}{\beta - \gamma} \left( \frac{1}{\omega_2} \sin(\omega_2 t) - \frac{1}{\omega_1} \sin(\omega_1 t) \right)
+\end{aligned}
+\right.
+\qquad (7)
+$$
+
+where $(\omega_i, \phi_i)$ are the natural modes of the system given by :
+
+$$
+\left\{
+\begin{array}{ll}
+\omega_1^2 = \displaystyle\frac{k}{m} (\kappa + 1 - \sqrt{\kappa^2 - \kappa + 1}), & \phi_1 = [\beta = \kappa - 1 + \sqrt{\kappa^2 - \kappa + 1}, 1]^T \\
+\\
+\omega_2^2 = \displaystyle\frac{k}{m} (\kappa + 1 + \sqrt{\kappa^2 - \kappa + 1}), & \phi_2 = [\gamma = \kappa - 1 - \sqrt{\kappa^2 - \kappa + 1}, 1]^T
+\end{array}
+\right.
+\qquad (8)
+$$
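As a cross-check, the modes (8) can be compared with a direct eigendecomposition of the linearised system; a small sketch with assumed values $k = m = 1$, $\kappa = 2$:

```python
import numpy as np

k = m = 1.0
kappa = 2.0

# Linear-spring system (6) in matrix form: m * delta'' = A @ delta
A = np.array([[-2.0 * k, kappa * k],
              [k, -2.0 * kappa * k]]) / m

eig = np.sort(-np.linalg.eigvals(A).real)   # eigenvalues of A are -omega_i^2

disc = np.sqrt(kappa ** 2 - kappa + 1.0)
omega_sq = np.array([k / m * (kappa + 1.0 - disc),
                     k / m * (kappa + 1.0 + disc)])   # formula (8)

assert np.allclose(eig, omega_sq)   # here: 3 -/+ sqrt(3)
```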
+
+The first time one of the contacts breaks, denoted by $t^*$, is provided by the smallest positive root of the transcendental equations:
+
+$$
+t_{12} = \min_{t \in \mathbb{R}^{+*}} \left\{ f_1(t) = 0 \text{ with } f_1(t) = \sin(\omega_1 t) - \frac{\omega_1}{\omega_2} \frac{\gamma}{\beta} \sin(\omega_2 t) \right\} \text{ (first pair of balls)}
+$$
+
+$$
+t_{23} = \min_{t \in \mathbb{R}^{+*}} \left\{ f_2(t) = 0 \text{ with } f_2(t) = \sin(\omega_1 t) - \frac{\omega_1}{\omega_2} \sin(\omega_2 t) \right\} \text{ (second pair of balls)}
+$$
+---PAGE_BREAK---
+
+Finding the smallest root with respect to the physical parameters of the system is a painful task. However, for this particular case, the following holds:
+
+**Proposition 2.1**
+
+If $\omega_2/\omega_1 = j \in \mathbb{N}^*$ then $t_{12} = t_{23} = t^* = \pi/\omega_1$.
+
+If $\omega_2/\omega_1 \in (j; j+1)$, $j \in \mathbb{N}^*$ and $j$ odd (resp. even) then $t^* = t_{12} < t_{23}$ (resp. $t_{12} > t_{23} = t^*$).
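The first statement can be verified numerically in the resonant case $\omega_2/\omega_1 = 2$, reached for $\kappa = (43 + \sqrt{825})/32$ (a worked value added here for illustration, obtained by solving $16\kappa^2 - 43\kappa + 16 = 0$): both transcendental equations then have their smallest positive root at $t^* = \pi/\omega_1$.

```python
import math

k = m = 1.0
kappa = (43.0 + math.sqrt(825.0)) / 32.0   # makes omega_2 = 2 * omega_1
disc = math.sqrt(kappa ** 2 - kappa + 1.0)
w1 = math.sqrt(k / m * (kappa + 1.0 - disc))
w2 = math.sqrt(k / m * (kappa + 1.0 + disc))
assert abs(w2 / w1 - 2.0) < 1e-12

beta = kappa - 1.0 + disc
gamma = kappa - 1.0 - disc
f1 = lambda t: math.sin(w1 * t) - (w1 / w2) * (gamma / beta) * math.sin(w2 * t)
f2 = lambda t: math.sin(w1 * t) - (w1 / w2) * math.sin(w2 * t)

def smallest_positive_root(f, t_max, n=50000):
    """Locate the first sign change of f on a uniform grid over (0, t_max]."""
    ts = [t_max * i / n for i in range(1, n + 1)]
    for a, b in zip(ts, ts[1:]):
        if f(a) * f(b) <= 0.0:
            return a
    return None

t_star = math.pi / w1
for f in (f1, f2):
    root = smallest_positive_root(f, 1.5 * t_star)
    assert abs(root - t_star) < 1e-3   # t_12 = t_23 = pi / omega_1
```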
+
+For $t > t^*$, only two balls are still in contact. The rest of the process is easily integrable up to the final separation at the time $t_f$. Moreover, one can show that there is no further contact between the balls as illustrated in Figure 1(e).
+
+For $t^* = t_{12} < t_{23}$, the ICR is calculated as follows:
+
+$$
+\alpha = \frac{p_1}{p_2} = \frac{1}{\beta\gamma} \left( -\frac{\beta}{\omega_1^2} [\cos(\omega_1 t_{12}) - 1] - \frac{\gamma}{\omega_2^2} [\cos(\omega_2 t_{12}) - 1] \right) \Big/ \left[ \frac{1}{\omega_2^2} (\cos(\omega_2 t_{12}) - 1) - \frac{1}{\omega_1^2} (\cos(\omega_1 t_{12}) - 1) + \frac{1}{{\omega_2'}^2} (\cos(\omega_2 t_{12}) - \cos(\omega_1 t_{12})) (\cos(\omega_2' \hat{t}_{23}) - 1) - \frac{1}{\omega_2'} \left( \frac{1}{\omega_2} \sin(\omega_2 t_{12}) - \frac{1}{\omega_1} \sin(\omega_1 t_{12}) \right) \sin(\omega_2' \hat{t}_{23}) \right] \quad (9)
+$$
+
+where $\omega_2' = \sqrt{2\kappa k/m}$ is the natural pulsation of two balls in contact and $\hat{t}_{23} = t_{23} - t^*$.
+
+## 2.4 Preliminary conclusions
+
+Other cases have been treated in the same way. It is noteworthy that the occurrence of transcendental equations in the resolution creates serious difficulties to integrate analytically the process of collisions. In particular, the time and the order of interactions are not easily predictable.
+
+Nevertheless, a preliminary conclusion can be stated, on which more general results will be provided in Section 4:
+---PAGE_BREAK---
+
+**Proposition 2.2**
+
+The instants of changes in the contact interactions, in a dimensionless time scale such as $T = \omega_i t$, and the ratio of impulses $\alpha$ do not depend on the absolute values of the stiffness $k$ and mass $m$. Moreover, the impulse correlation ratio $\alpha$ is completely determined by the natural modes of the regularised dynamical system and the pre-impact velocities.
+
+This conclusion outlines two important consequences:
+
+* from a mechanical point of view, the introduction of an impulse ratio enhances the model with some information about the behavior of the dynamical system when it is bound by elastic contacts.
+
+* from a numerical modelling point of view, the independence from the absolute value of $k$ allows one to consider in a consistent manner very large stiffnesses, which are generally encountered in applications.
+
+# 3 Some remarks on impulse correlation ratios in n-ball chains
+
+An important aspect of a correct impact law is that it qualitatively represents the physical phenomena. For the *n*-ball chain or Newton's cradle, we know that conservation of kinetic energy and momentum is not sufficient to explain that there is no ball at rest after an impact [8]. The introduction of a set of ICRs in the *n*-ball chain, as Ceanga and Hurmuzlu [4] have done, describes this important phenomenon qualitatively.
+
+From a quantitative point of view, some remarks must be made. Let us study the values of the ICR obtained by numerical simulation of an *n*-ball chain made of steel (E = 210 GPa, ν = 0.3, ρ = 7800 kg/m³), regularised with the elastic Hertz model, where the first ball is dropped at 1 m/s and the other balls are at rest.
+---PAGE_BREAK---
+
+In Figure 2(a), the number of balls of radius 10 mm in the chain ranges from 3 to 21.
+
+For *n* balls, there are *n* − 1 impulses and *n* − 2 ICRs, defined by:
+
+$$
+\alpha_i = \text{icr}(i) = \frac{p_i}{p_{i+1}} \tag{10}
+$$
+
+The first remark is that only the ICR which corresponds to the last triplet in the chain (for instance, the point A for 8 balls) is very different from the others. Therefore, the value of ICR measured from an experiment on a triplet cannot be used for the *n*-ball chain.
+
+In Figure 2(b), we observe the values of the ICR in a 21-ball chain in which the tenth ball has been replaced by a big ball of radius 50 mm. Not only does the ICR corresponding to the percussion on the big ball differ, but so do the ICR values for triplets 10 to 19. Moreover, the ICR computed for a 3-ball chain with a big middle ball is about 63.47, which is very different from the value computed in the whole chain (point B). This shows that the ICR depends on the dynamical features of the whole coupled system.
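
Chains of this kind are straightforward to integrate directly. The sketch below is our own code, with illustrative values of the contact stiffness, mass and time step rather than the paper's steel parameters; it uses semi-implicit Euler integration, accumulates the contact impulses, and returns the ICRs defined in Eq. (10).

```python
import math

def hertz_chain_icr(n, K=5.0e9, m=0.033, v0=1.0, dt=1e-7, steps=20000):
    """Semi-implicit Euler integration of an n-ball chain whose contacts are
    Hertzian springs f = K * delta**1.5 (delta = overlap, compression only).
    Ball 0 strikes the resting chain at speed v0.  Returns the final
    velocities, the contact impulses p_i = integral of f_i dt, and the
    impulse correlation ratios icr(i) = p_i / p_{i+1}."""
    u = [0.0] * n                # displacements from the touching configuration
    v = [v0] + [0.0] * (n - 1)   # velocities
    p = [0.0] * (n - 1)          # accumulated contact impulses
    for _ in range(steps):
        f = [K * max(u[i] - u[i + 1], 0.0) ** 1.5 for i in range(n - 1)]
        for i in range(n):
            a = ((f[i - 1] if i > 0 else 0.0) - (f[i] if i < n - 1 else 0.0)) / m
            v[i] += a * dt
        for i in range(n):
            u[i] += v[i] * dt
        for i in range(n - 1):
            p[i] += f[i] * dt
    return v, p, [p[i] / p[i + 1] for i in range(n - 2)]
```

With `n = 3` the total momentum $m \sum_i v_i$ is conserved by construction, and the last ball carries most of the motion, as expected for a Hertzian chain.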
+
+# **4 Towards an extension to finite dimensional systems – Major results and conclusion**
+
+The case study of the 3-ball chain is extended to finite dimensional systems subjected to perfect unilateral constraints. The major results are:
+
+1. The post-impact velocity, computed with the multiple impact law defined by the *impulse correlation ratios*, is uniquely determined and the system becomes mathematically well-posed.
+
+2. If the perfect constraints are regularised by a general viscoelastic contact model corresponding to a linear viscoelastic bulk behavior [9, 11], i.e.
+
+$$
+f = K \delta^n + C \delta^{n-1} \dot{\delta} \tag{11}
+$$
+
+then
+---PAGE_BREAK---
+
+(a) the ratio of impulse is finite and the subspace of the state space defined by
+
+$$E = \{\delta \ge 0, \dot{\delta} \ge 0\} \tag{12}$$
+
+is globally attractive. Moreover, the amplitude of the force asymptotically tends towards zero and the relative velocity $\dot{\delta}$ towards a finite constant. This last point is very important from a numerical point of view. Extending these results to finite-time convergence is still an open issue;
+
+(b) the ICRs are independent of the absolute value of the stiffness.
+
+3. If the perfect constraints are regularised by a linear model, i.e.
+
+$$f = K\delta \tag{13}$$
+
+then the ICRs depend only on the natural modes of the system and the pre-impact velocities.
+
+4. The augmented impact law, which consists of a set of *energetic coefficients* and *impulse correlation ratios*, fits within Frémond's thermodynamic framework [6]. It ensures that the principles of thermodynamics are respected.
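
Result 2(b), and the stiffness independence already noted for the 3-ball chain, can be probed numerically: integrating a 3-ball chain with linear contact springs at two very different stiffnesses should give the same ICR, provided the time step is scaled with the natural time $\sqrt{m/K}$. A minimal sketch, with parameter values of our own choosing:

```python
import math

def linear_chain_icr(K, m=1.0, v0=1.0):
    """3-ball chain with linear contact springs f = K*delta (compression only).
    The time step and duration are scaled with sqrt(m/K), so runs at very
    different stiffnesses resolve the contact equally well."""
    n = 3
    t_unit = math.sqrt(m / K)
    dt, steps = 0.005 * t_unit, 8000        # total simulated time: 40*t_unit
    u, v, p = [0.0] * n, [v0, 0.0, 0.0], [0.0, 0.0]
    for _ in range(steps):
        f = [K * max(u[i] - u[i + 1], 0.0) for i in range(n - 1)]
        for i in range(n):
            a = ((f[i - 1] if i > 0 else 0.0) - (f[i] if i < n - 1 else 0.0)) / m
            v[i] += a * dt
        for i in range(n):
            u[i] += v[i] * dt
        p[0] += f[0] * dt
        p[1] += f[1] * dt
    return p[0] / p[1]
```

Because the linear dynamics rescale exactly in time, the ICR returned at stiffness $K$ and at $10^4 K$ agree to integrator precision.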
+
+**References**
+
+[1] M. Abadie. Dynamic simulation of rigid bodies: Modelling of frictional contact. In B. Brogliato, editor, *Impacts in Mechanical Systems: Analysis and Modelling*, volume 551 of LNP, pages 61–144. Springer, 2000.
+
+[2] P. Ballard. The dynamics of discrete mechanical systems with perfect unilateral constraints. *Archives for Rational Mechanics and Analysis*, 154:199–274, 2000.
+
+[3] B. Brogliato. *Nonsmooth Mechanics: Models, Dynamics and Control*. Communications and Control Engineering. Springer-Verlag, second edition, 1999.
+---PAGE_BREAK---
+
+[4] V. Ceanga and Y. Hurmuzlu. A new look at an old problem: Newton's cradle. *Journal of Applied Mechanics, Transactions of the A.S.M.E.*, 68(4):575–583, July 2001.
+
+[5] E. Falcon, A. Laroche, and C. Coste. Collision of a 1-D column of beads with a wall. *The European Physical Journal B*, 5:111–131, 1998.
+
+[6] M. Frémond. *Non-Smooth Thermo-mechanics*. Springer-Verlag, 2002.
+
+[7] I. Han and B.J. Gilmore. Multi-body impact motion with friction: analysis, simulation, and experimental validation. *A.S.M.E. Journal of Mechanical Design*, 115:412–422, 1993.
+
+[8] F. Herrmann and M. Seitz. How does the ball-chain work? *American Journal of Physics*, 50(11):977–981, 1982.
+
+[9] J.M. Hertzsch, F. Spahn, and N.V. Brilliantov. On low-velocity collisions of viscoelastic particles. *Journal de Physique II (France)*, 5:1725–1738, 1995.
+
+[10] J.J. Moreau. Unilateral contact and dry friction in finite freedom dynamics. In J.J. Moreau and P.D. Panagiotopoulos, editors, *Nonsmooth Mechanics and Applications*, number 302 in CISM Courses and Lectures, pages 1–82. Springer-Verlag, 1988.
+
+[11] R. Ramirez, T. Pöschel, N.V. Brilliantov, and T. Schwager. Coefficient of restitution of colliding viscoelastic spheres. *Physical Review E*, 60(4):4465–4472, 1999.
+---PAGE_BREAK---
+
+Figure 1: Numerical integration of a 3-ball chain. Forces between balls versus time. Figures (a–d): Hertzian spring contact. Figures (e–h): linear spring contact.
+---PAGE_BREAK---
+
+Figure 2: Impulse correlation ratios in a *n*-ball chain
\ No newline at end of file
diff --git a/samples/texts_merged/565481.md b/samples/texts_merged/565481.md
new file mode 100644
index 0000000000000000000000000000000000000000..903abebffbb7920018e31733f53dee8df8a5fa87
--- /dev/null
+++ b/samples/texts_merged/565481.md
@@ -0,0 +1,149 @@
+
+---PAGE_BREAK---
+
+Homework Handout II
+
+4. For the following, $\mathcal{V}$ is a three-dimensional space of traditional vectors with standard basis
+
+$$S = \{\mathbf{i}, \mathbf{j}, \mathbf{k}\}.$$
+
+(If you prefer, use $\{\mathbf{e}_1, \mathbf{e}_2, \mathbf{e}_3\}$ or $\{\mathbf{x}, \mathbf{y}, \mathbf{z}\}$.)
+
+Also, let
+
+$$\mathcal{B} = \{\mathbf{b}_1, \mathbf{b}_2, \mathbf{b}_3\}$$
+
+where
+
+$$\mathbf{b}_1 = \mathbf{i} + 2\mathbf{j}, \quad \mathbf{b}_2 = 3\mathbf{j} - \mathbf{k} \text{ and } \mathbf{b}_3 = 2\mathbf{i} - 3\mathbf{j}.$$
+
+1. Solve for $\mathbf{i}, \mathbf{j}$ and $\mathbf{k}$ in terms of $\mathbf{b}_1, \mathbf{b}_2$ and $\mathbf{b}_3$.
+
+2. Is $\mathcal{B}$ a basis for $\mathcal{V}$? Give a reason for your answer.
+
+3. Let $\mathbf{v} = 2\mathbf{i} + 3\mathbf{j} + 4\mathbf{k}$. What is $\mathbf{v}$ in terms of $\mathcal{B}$?
+
+4. Find the following (with $\mathbf{v}$ as above):
+
+$$
+\begin{align*}
+|\mathbf{i}\rangle_S, & |\mathbf{j}\rangle_S, |\mathbf{k}\rangle_S, |\mathbf{b}_1\rangle_S, |\mathbf{b}_2\rangle_S, |\mathbf{b}_3\rangle_S, |\mathbf{v}\rangle_S, \\
+|\mathbf{i}\rangle_B, & |\mathbf{j}\rangle_B, |\mathbf{k}\rangle_B, |\mathbf{b}_1\rangle_B, |\mathbf{b}_2\rangle_B, |\mathbf{b}_3\rangle_B \text{ and } |\mathbf{v}\rangle_B
+\end{align*}
+$$
+
+5. Compute $\langle \mathbf{b}_i | \mathbf{b}_j \rangle$ (i.e., $\mathbf{b}_i \cdot \mathbf{b}_j$) for all possible $i$'s and $j$'s.
+
+6. Let $\mathbf{v} = v_1 \mathbf{b}_1 + v_2 \mathbf{b}_2 + v_3 \mathbf{b}_3$ and $\mathbf{w} = w_1 \mathbf{b}_1 + w_2 \mathbf{b}_2 + w_3 \mathbf{b}_3$. Find the corresponding component formulas for $\langle \mathbf{v} | \mathbf{w} \rangle$ and $\|\mathbf{v}\|$.
+(Note: $\langle \mathbf{v} | \mathbf{w} \rangle \neq v_1 w_1 + v_2 w_2 + v_3 w_3$ and $\|\mathbf{v}\| \neq \sqrt{(v_1)^2 + (v_2)^2 + (v_3)^2}$! )
+
+7. (optional) Suppose $\mathbf{c}$ is any vector in $\mathcal{V}$ and let the components of $\mathbf{c}$ with respect to our two bases be denoted by
+
+$$|\mathbf{c}\rangle_S = \begin{bmatrix} \alpha_1 \\ \alpha_2 \\ \alpha_3 \end{bmatrix} \quad \text{and} \quad |\mathbf{c}\rangle_B = \begin{bmatrix} \beta_1 \\ \beta_2 \\ \beta_3 \end{bmatrix}.$$
+
+Find the formulas for computing the $\alpha_k$'s from the $\beta_k$'s, and for computing the $\beta_k$'s from the $\alpha_k$'s.
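
For part 7, one concrete way to organise the computation (a sketch using exact rational arithmetic; the helper `solve3` is ours, not part of the assignment) is to put $\mathbf{b}_1, \mathbf{b}_2, \mathbf{b}_3$ as the columns of a matrix $M$, so that $\alpha = M\beta$ and $\beta = M^{-1}\alpha$:

```python
from fractions import Fraction as F

def solve3(M, rhs):
    """Solve a 3x3 system M x = rhs by Gaussian elimination over the rationals."""
    A = [row[:] + [r] for row, r in zip(M, rhs)]
    for c in range(3):
        piv = next(r for r in range(c, 3) if A[r][c] != 0)
        A[c], A[piv] = A[piv], A[c]
        A[c] = [x / A[c][c] for x in A[c]]
        for r in range(3):
            if r != c:
                A[r] = [x - A[r][c] * y for x, y in zip(A[r], A[c])]
    return [A[r][3] for r in range(3)]

# Columns of M are b1, b2, b3 written in the standard basis S:
M = [[F(1), F(0), F(2)],
     [F(2), F(3), F(-3)],
     [F(0), F(-1), F(0)]]

# beta = M^{-1} alpha converts S-components into B-components,
# e.g. for v = 2i + 3j + 4k:
beta = solve3(M, [F(2), F(3), F(4)])
```

Multiplying $M$ by the resulting $\beta$ reproduces the $\alpha$ one started from, which is a convenient self-check.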
+---PAGE_BREAK---
+
+B. Consider (but don't bother solving yet) the differential equation
+
+$$y'' + y = 0 .$$
+
+1. Suppose $y_1$ and $y_2$ are two solutions to this differential equation. Verify that any linear combination of these two solutions is also a solution.
+
+2. Let $S$ be the set of all solutions to this differential equation. Is $S$ a vector space?
+Explain.
+
+3. What is the general solution to this differential equation? What does it tell you about a possible basis for $S$ and the dimension of $S$?
+
+C. Let $S$ be the set of all solutions to some given homogeneous linear differential equation
+
+$$ay'' + by' + cy = 0$$
+
+where $a$, $b$, and $c$ are known functions. Show that $S$ is a vector space. (If you recall enough from your old differential equations class, you can even state the dimension of $S$.)
+
+D. Compute $\langle \mathbf{v} | \mathbf{w} \rangle$, $\langle \mathbf{w} | \mathbf{v} \rangle$ and $\|\mathbf{v}\|$ when the vector space is $\mathbb{C}^2$, $\mathbf{v} = (3i, 2+3i)$ and $\mathbf{w} = (4, 5+2i)$.
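
A small sketch of the convention used here (conjugation on the first slot, matching $\langle f | g \rangle = \int f^* g$ below; the helper name `inner` is ours):

```python
import math

def inner(v, w):
    """Hermitian inner product on C^n, conjugating the first argument."""
    return sum(a.conjugate() * b for a, b in zip(v, w))

v = (3j, 2 + 3j)
w = (4 + 0j, 5 + 2j)
vw = inner(v, w)                        # <v|w>
wv = inner(w, v)                        # <w|v> = conjugate of <v|w>
norm_v = math.sqrt(inner(v, v).real)    # ||v||
```

Note that $\langle \mathbf{w} | \mathbf{v} \rangle$ is the complex conjugate of $\langle \mathbf{v} | \mathbf{w} \rangle$, not the same number.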
+
+E. Compute the “energy norm” inner product of two functions $f$ and $g$ on the interval $[0, 1]$,
+
+$$\langle f | g \rangle = \int_{0}^{1} f^{*}(x) g(x) dx,$$
+
+for the following choices of $f$ and $g$ (simplify your answers as much as practical):
+
+1. $f(x) = 3 + (2 + 3i)x$ and $g(x) = 5x - 2ix^2$
+
+2. $f(x) = 3 + (2 + 3i)e^{i2\pi x}$ and $g(x) = e^{i\pi x}$
+
+3. $f(x) = e^{i2\pi x}$ and $g(x) = 2 + x$
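
These integrals can be spot-checked numerically. The sketch below (our own helper `inner_fg`) applies the composite midpoint rule to choice 3, for which integration by parts gives $\langle f | g \rangle = i/(2\pi)$:

```python
import cmath, math

def inner_fg(f, g, n=4000):
    """<f|g> = integral over [0,1] of f*(x) g(x) dx, composite midpoint rule."""
    h = 1.0 / n
    return h * sum(f((k + 0.5) * h).conjugate() * g((k + 0.5) * h)
                   for k in range(n))

# Choice 3: f(x) = e^{i 2 pi x}, g(x) = 2 + x.
val = inner_fg(lambda x: cmath.exp(2j * math.pi * x), lambda x: 2 + x)
```

The constant part of $g$ integrates against $e^{-i2\pi x}$ to zero, so only the $x$ term contributes.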
+
+F. Let **a** and **v** be two (nonzero) vectors from a vector space $\mathcal{V}$ with an inner product $\langle \cdot | \cdot \rangle$. Define the “generalized projection of vector **v** onto vector **a**” by
+
+$$\vec{\mathrm{pr}}_{\mathbf{a}}(\mathbf{v}) = \frac{\langle \mathbf{a} | \mathbf{v} \rangle}{\|\mathbf{a}\|^2} \mathbf{a}$$
+
+and define the corresponding “generalized projection of **v** orthogonal to **a**” by
+
+$$\overrightarrow{\mathrm{or}}_{\mathbf{a}}(\mathbf{v}) = \mathbf{v} - \vec{\mathrm{pr}}_{\mathbf{a}}(\mathbf{v}).$$
+---PAGE_BREAK---
+
+Note that we automatically have that $\vec{\mathrm{pr}}_{\mathbf{a}}(\mathbf{v})$ is "parallel" to $\mathbf{a}$, and that
+
+$$ \mathbf{v} = \vec{\mathrm{pr}}_{\mathbf{a}}(\mathbf{v}) + \overrightarrow{\mathrm{or}}_{\mathbf{a}}(\mathbf{v}) . $$
+
+Now confirm that the set $\{\vec{\mathrm{pr}}_{\mathbf{a}}(\mathbf{v}), \overrightarrow{\mathrm{or}}_{\mathbf{a}}(\mathbf{v})\}$ is orthogonal.
+
+G. Let $V$ be the linear space of all functions of the form
+
+$$ f(x) = \alpha_{-2}e^{-i4\pi x} + \alpha_{-1}e^{-i2\pi x} + \alpha_0 + \alpha_1 e^{i2\pi x} + \alpha_2 e^{i4\pi x} $$
+
+where $\alpha_{-2}, \alpha_{-1}, \alpha_0, \alpha_1$ and $\alpha_2$ are constants.
+
+1. Using the inner product
+
+$$ \langle f | g \rangle = \int_{0}^{1} f^*(x) g(x) dx , $$
+
+verify that both
+
+$$ B_E = \{e^{-i4\pi x}, e^{-i2\pi x}, 1, e^{i2\pi x}, e^{i4\pi x}\} $$
+
+and
+
+$$ B_T = \{1, \cos(2\pi x), \sin(2\pi x), \cos(4\pi x), \sin(4\pi x)\} $$
+
+are orthogonal bases for $V$.
+
+2. What is $|e^{i2\pi x}\rangle_{B_T}$? $|\sin(4\pi x)\rangle_{B_E}$? (That is, find the components of each function with respect to the indicated basis.)
+
+3. Construct the orthonormal basis corresponding to $B_E$ and the orthonormal basis corresponding to $B_T$.
+
+H. Let $V$ be a three-dimensional space of traditional vectors with a “standard” basis
+
+$$ S = \{\mathbf{i}, \mathbf{j}, \mathbf{k}\} . $$
+
+(If you prefer, use $\{\mathbf{e}_1, \mathbf{e}_2, \mathbf{e}_3\}$ or $\{\mathbf{x}, \mathbf{y}, \mathbf{z}\}$.)
+
+Using the Gram-Schmidt procedure, construct an orthonormal basis for $V$ from
+
+$$ B = \{\mathbf{b}_1, \mathbf{b}_2, \mathbf{b}_3\} $$
+
+where
+
+$$ \mathbf{b}_1 = \mathbf{i} + 2\mathbf{j} , \quad \mathbf{b}_2 = 3\mathbf{j} - \mathbf{k} \quad \text{and} \quad \mathbf{b}_3 = 2\mathbf{i} - 3\mathbf{j} . $$
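
A compact sketch of the procedure (project out the already-built orthonormal vectors, then normalise; plain Python with our own helper names):

```python
import math

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def gram_schmidt(vectors):
    """Classical Gram-Schmidt: orthogonalize each vector against the
    orthonormal vectors built so far, then normalize it."""
    ortho = []
    for v in vectors:
        w = list(v)
        for q in ortho:
            c = dot(q, v)                       # component of v along q
            w = [wi - c * qi for wi, qi in zip(w, q)]
        nrm = math.sqrt(dot(w, w))
        ortho.append([wi / nrm for wi in w])
    return ortho

# b1, b2, b3 in standard components (i, j, k):
Q = gram_schmidt([(1, 2, 0), (0, 3, -1), (2, -3, 0)])
```

Since $B$ is a basis (the $\mathbf{b}_k$ are linearly independent), no normalisation step divides by zero.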
+---PAGE_BREAK---
+
+I. You should have already convinced yourself that the space $P$ of all polynomials has the
+basis
+
+$$ \{1, x, x^2, x^3, x^4, x^5, \ldots\} . $$
+
+However, this basis is not orthonormal or even orthogonal with respect to the inner product
+
+$$ \langle f | g \rangle = \int_{0}^{1} f^{*}(x) g(x) dx . $$
+
+Let
+
+$$ \Phi = \{ \phi_0(x), \phi_1(x), \phi_2(x), \phi_3(x), \phi_4(x), \phi_5(x), \ldots \} $$
+
+be the corresponding orthonormal basis generated from the above basis by the Gram-Schmidt procedure.
+
+1. Find the formulas for $\phi_0(x)$, $\phi_1(x)$ and $\phi_2(x)$.
+
+2. Find the components of $f(x) = 2 + 3x^2$ with respect to basis $\Phi$.
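
Since $\langle x^i | x^j \rangle = \int_0^1 x^{i+j}\,dx = 1/(i+j+1)$, the first few $\phi_k$ can be generated numerically by running Gram-Schmidt on coefficient lists. A sketch (the representation and function names are our own):

```python
import math

def poly_inner(p, q):
    """<p|q> = integral over [0,1] of p(x) q(x) dx for real polynomials
    stored as coefficient lists [c0, c1, ...]; uses int x^n dx = 1/(n+1)."""
    return sum(a * b / (i + j + 1)
               for i, a in enumerate(p) for j, b in enumerate(q))

def gram_schmidt_polys(deg):
    """Gram-Schmidt on {1, x, ..., x^deg}, all stored as length-(deg+1) lists."""
    basis = []
    for k in range(deg + 1):
        w = [0.0] * (deg + 1)
        w[k] = 1.0                         # the monomial x^k
        for q in basis:
            c = poly_inner(q, w)
            w = [wi - c * qi for wi, qi in zip(w, q)]
        nrm = math.sqrt(poly_inner(w, w))
        basis.append([wi / nrm for wi in w])
    return basis

phi = gram_schmidt_polys(2)   # coefficient lists of phi_0, phi_1, phi_2
```

For instance $\phi_1$ comes out proportional to $x - 1/2$, normalised so that $\int_0^1 \phi_1^2 = 1$.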
\ No newline at end of file
diff --git a/samples/texts_merged/5658873.md b/samples/texts_merged/5658873.md
new file mode 100644
index 0000000000000000000000000000000000000000..4feefefbd28ee35a64362507d75b2ad161cbb8c0
--- /dev/null
+++ b/samples/texts_merged/5658873.md
@@ -0,0 +1,458 @@
+
+---PAGE_BREAK---
+
+Weakly useful sequences
+
+Stephen A. Fenner* Jack H. Lutz† Elvira Mayordomo‡ Patrick Reardon§
+
+July 23, 2004
+
+Abstract
+
+An infinite binary sequence *x* is defined to be
+
+(i) *strongly useful* if there is a computable time bound within which every decidable sequence is Turing reducible to *x*; and
+
+(ii) *weakly useful* if there is a computable time bound within which all the sequences in a non-measure 0 subset of the set of decidable sequences are Turing reducible to *x*.
+
+Juedes, Lathrop, and Lutz (1994) proved that every weakly useful sequence is strongly deep in the sense of Bennett (1988) and asked whether there are sequences that are weakly useful but not strongly useful.
+The present paper answers this question affirmatively. The proof is a direct construction that combines
+the *martingale diagonalization* technique of Lutz (1994) with a new technique, namely, the construction
+of a sequence that is “computably deep” with respect to an arbitrary, given uniform reducibility. The
+*abundance* of such computably deep sequences is also proven and used to show that every weakly useful
+sequence is computably deep with respect to every uniform reducibility.
+
+# 1 Introduction
+
+It is a truism that the usefulness of a data object does not vary directly with its information content. For example, consider two infinite binary strings, $\chi_K$, the characteristic sequence of the halting problem (whose nth bit is 1 if and only if the nth Turing machine halts on input $n$), and $z$, a sequence that is algorithmically random in the sense of Martin-Löf [10]. The following facts are well-known.
+
+1. The first $n$ bits of $\chi_K$ can be specified using only $O(\log n)$ bits of information, namely, the number of 1's in the first $n$ bits of $\chi_K$ [1].
+
+2. The first $n$ bits of $z$ cannot be specified using significantly fewer than $n$ bits of information [10].
+
+3. Oracle access to $\chi_K$ would enable one to decide any decidable sequence in polynomial time (i.e., decide the nth bit of the sequence in time polynomial in the length of the binary representation of $n$) [11].
+
+4. Even with oracle access to $z$, most decidable sequences cannot be computed in polynomial time. (This appears to be folklore, known at least since [2].)
+
+*University of South Carolina, Columbia, South Carolina, USA. E-mail: fenner@cse.sc.edu. This research was supported in part by National Science Foundation Grants CCR-9209833, CCR-9501794, and CCR-9996310.
+
+†Iowa State University, Ames, Iowa, USA. E-mail: lutz@iastate.edu. This research was supported in part by National Science Foundation Grants 9157382, 9610461, 9988483, and 0344187.
+
+‡University of Zaragoza, Zaragoza, SPAIN. E-mail: elvira@unizar.es. This research was supported in part by Spanish Government projects PB98-0937-C04-02 and TIC2002-04019-C03-03 and by National Science Foundation Grant 0344187.
+
+§SE Oklahoma State University, Durant, Oklahoma, USA. E-mail: preardon@sosu.edu. Work supported in part by a grant from the Applied and Organized Research Fund at SEOSU.
+---PAGE_BREAK---
+
+Facts (1) and (2) tell us that $\chi_K$ contains far less information than $z$. In contrast, facts (3) and (4) tell us that $\chi_K$ is computationally much more useful than $z$. That is, the information in $\chi_K$ is "more usefully organized" than that in $z$.
+
+Bennett [2] introduced the notion of computational depth (also called "logical depth") in order to quantify the degree to which the information in an object has been organized. In particular, for infinite binary sequences, Bennett defined two "levels" of depth, strong depth and weak depth, and argued that the above situation arises from the fact that $\chi_K$ is strongly deep, while $z$ is not even weakly deep. (The present paper is motivated by the study of computational depth, but does not directly use strong or weak depth, so definitions are omitted here. The interested reader is referred to [2], [7], or [5] for details, and for related aspects of algorithmic information theory.)
+
+Investigating this matter further, Juedes, Lathrop, and Lutz [5] considered two "levels of usefulness" for infinite binary sequences. Specifically, let $\mathbf{C}$ be the Cantor space of all infinite binary sequences and let DEC be the set of all decidable elements of $\mathbf{C}$. For $x \in \mathbf{C}$ and $t: \mathbf{N} \to \mathbf{N}$, let $\text{DTIME}^x(t)$ be the set of all $y \in \mathbf{C}$ for which there exists an oracle Turing machine $M$ that, on input $n \in \mathbf{N}$ with oracle $x$, computes $y[n]$, the $n$th bit of $y$, in at most $t(\ell)$ steps, where $\ell$ is the number of bits in the binary representation of $n$. Then a sequence $x \in \mathbf{C}$ is defined to be *strongly useful* if there is a computable time bound $t: \mathbf{N} \to \mathbf{N}$ such that $\text{DTIME}^x(t)$ contains every decidable sequence. A sequence $x \in \mathbf{C}$ is defined to be *weakly useful* if there is a computable time bound $t: \mathbf{N} \to \mathbf{N}$ such that the set of decidable sequences contained in $\text{DTIME}^x(t)$ is a non-measure 0 subset of DEC in the sense of resource-bounded measure [9]. That is, $x$ is weakly useful if access to $x$ enables one to decide a *nonnegligible set* of decidable sequences within some fixed computable time bound. No decidable or algorithmically random sequence can be weakly useful. It is evident that $\chi_K$ is strongly useful, and that every strongly useful sequence is weakly useful.
+
+Juedes, Lathrop, and Lutz [5] generalized Bennett's result that $\chi_K$ is strongly deep by proving that every weakly useful sequence is strongly deep. This confirmed Bennett's intuitive arguments by establishing a definite relationship between computational depth and computational usefulness. It also substantially extended Bennett's result on $\chi_K$ by implying (in combination with known results of recursion theory [10, 13, 3, 4]) that all high Turing degrees and some low Turing degrees contain strongly deep sequences.
+
+Notwithstanding this progress, Juedes, Lathrop, and Lutz [5] left a critical question open: Do there exist weakly useful sequences that are not strongly useful? The main result of the present paper answers this question affirmatively. This establishes the existence of strongly deep sequences that are not strongly useful. More importantly, it indicates a need for further investigation of the class of weakly useful sequences.
+
+The proof of our main result is a direct construction that combines the *martingale diagonalization* technique introduced by Lutz [8] with a new technique, namely, the construction of a sequence that is *computably F-deep*, where *F* is an arbitrary uniform reducibility. This notion of computable uniform depth is closely related to Bennett's notion of weak depth.
+
+The paper is organized as follows. Section 2 contains basic definitions. In Section 3 we introduce and investigate the notions of computable *F*-depth and computable weak depth. In addition to using specific constructions of computably *F*-deep sequences, we prove that for each uniform reducibility *F*, *almost every* sequence in DEC is computably *F*-deep. This implies that a weakly useful sequence is computably *F*-deep for any uniform reducibility *F*. The main theorem is proved in Section 4, where in addition we introduce a canonical technique for constructing computably *F*-deep sequences that satisfy an additional property which, loosely translated, guarantees that the depths of their initial segments increase at a rate exponential in the length of the segment.
+
+## 2 Preliminaries
+
+We use $\mathbf{N}$ to denote the set of natural numbers (including 0), and $\mathbf{Q}$ to denote the set of rational numbers.
+We write $[\varphi]$ for the Boolean value of a condition $\varphi$, i.e.,
+
+$$ [\varphi] = \text{if } \varphi \text{ then } 1 \text{ else } 0. $$
+---PAGE_BREAK---
+
+For any $x, y \in \{0, 1\}^* \cup \{0, 1\}^\infty$, we write $x \sqsubseteq y$ to mean that $x$ is a prefix of $y$. For every $w \in \{0, 1\}^*$, define $C_w = \{x \in C : w \sqsubseteq x\}$. We fix a computable, bijective pairing function $\langle \cdot, \cdot \rangle: N^2 \to N$, monotone in both arguments, such that $i \le \langle i, j \rangle$ and $j \le \langle i, j \rangle$ for all $i, j \in N$.
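
One concrete pairing function with all of the stated properties is Cantor's; a sketch (the inverse is not needed in the text but is included for completeness):

```python
def pair(i, j):
    """Cantor pairing <i,j> = (i+j)(i+j+1)/2 + j: a computable bijection
    N^2 -> N, monotone in each argument, with i <= <i,j> and j <= <i,j>."""
    return (i + j) * (i + j + 1) // 2 + j

def unpair(n):
    """Inverse of pair: recover (i, j) from n."""
    s = int(((8 * n + 1) ** 0.5 - 1) // 2)   # diagonal index i+j
    while (s + 1) * (s + 2) // 2 <= n:        # guard against float rounding
        s += 1
    while s * (s + 1) // 2 > n:
        s -= 1
    j = n - s * (s + 1) // 2
    return s - j, j
```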
+
+Weakly useful sequences are defined (in Section 1) in terms of *computable measure*, a special case of the resource-bounded measure developed by Lutz [9]. We very briefly sketch the elements of this theory, referring the reader to [9, 8] for motivation, details, and intuition.
+
+**Definition 2.1** A martingale is a function $d: \{0,1\}^* \to [0,\infty)$ such that $d(w) = \frac{d(w0)+d(w1)}{2}$ for all $w \in \{0,1\}^*$. A martingale $d$ is *computable* if there is a total computable function $\hat{d}: \mathbb{N} \times \{0,1\}^* \to \mathbb{Q}$ such that, for all $r \in \mathbb{N}$ and $w \in \{0,1\}^*$,
+
+$$|\hat{d}(r,w) - d(w)| \le 2^{-r}.$$
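
A toy example may help fix the averaging condition: the martingale below starts with capital 1 and always bets half its current capital on the next bit being 1 (the betting fractions are our own choice, not from the paper).

```python
def d(w):
    """A simple computable martingale on bit strings: starting capital 1;
    at each bit, stake half the capital on the next bit being 1."""
    capital = 1.0
    for bit in w:
        capital *= 1.5 if bit == "1" else 0.5
    return capital
```

Here $d(w) = \frac{d(w0) + d(w1)}{2}$ holds for every $w$, and $d$ succeeds on, e.g., the all-ones sequence, since $d(1^n) = 1.5^n \to \infty$.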
+
+We make use of two notions of “success” for a martingale.
+
+**Definition 2.2** Suppose $d$ is a martingale.
+
+1. *d succeeds on a sequence* $x \in C$ if
+
+$$\limsup_{n \to \infty} d(x[0 \dots n - 1]) = \infty,$$
+
+where $x[0 \dots n - 1]$ is the $n$-bit prefix of $x$.
+
+2. The *success set* of $d$ is
+
+$$S^\infty[d] = \{x \in C : d \text{ succeeds on } x\}.$$
+
+3. The *strong unitary success set* of $d$ is
+
+$$SS^1[d] = \{x \in C : \text{ for all but finitely many } n, d(x[0 \dots n - 1]) \ge 1\}.$$
+
+**Definition 2.3** Let $X \subseteq C$.
+
+1. $X$ has *computable measure 0*, and we write $\mu_{comp}(X) = 0$, if there is a computable martingale $d$ such that $X \subseteq S^\infty[d]$.
+
+2. $X$ has *computable measure 1*, and we write $\mu_{comp}(X) = 1$, if $\mu_{comp}(X^c) = 0$, where $X^c = C - X$ is the complement of $X$.
+
+3. $X$ has *measure 0 in DEC*, and we write $\mu(X | \text{DEC}) = 0$, if $\mu_{comp}(X \cap \text{DEC}) = 0$.
+
+4. $X$ has *measure 1 in DEC*, and we write $\mu(X | \text{DEC}) = 1$, if $\mu(X^c | \text{DEC}) = 0$. In this case, we say that $X$ contains *almost every* element of DEC.
+
+# 3 Uniform Computable Depth
+
+Bennett [2] defines an infinite sequence $A$ to be *weakly deep* if $A$ is not tt-reducible to any algorithmically random sequence. The definition of algorithmic randomness, due to Martin-Löf [10], can be stated in terms of constructive null sets, which are sets with a computably enumerable sequence of open covers whose measures grow arbitrarily small. In this section we develop a similar notion of depth based on computable measure, a special case of the resource-bounded measure developed by Lutz [9]. This depth property is used in the proof of our main result in Section 4. It is also of independent interest as it is closely related to Bennett's weak depth.
+---PAGE_BREAK---
+
+We first make our terminology precise. As in [12], we define a *truth-table condition* (briefly, a *tt-condition*) to be an ordered pair $\tau = ((n_1, \dots, n_k), g)$, where $k, n_1, \dots, n_k \in \mathbf{N}$ and $g: \{0, 1\}^k \to \{0, 1\}$. We write TTC for the class of all tt-conditions. The *tt-value* of a sequence $B \in \mathbf{C}$ under a tt-condition $\tau = ((n_1, \dots, n_k), g)$ is the bit $\tau^B = g(B[n_1]B[n_2]\cdots B[n_k])$. If $\tau$ is a tt-condition, then we say that $\tau$ *queries* the integer $m$ if $m \in \{n_1, \dots, n_k\}$, and the *query height* of $\tau$ is defined as $\max(n_1, \dots, n_k) + 1$.
+
+A *truth-table reduction* (briefly, a *tt-reduction*) is a total computable function $F: \mathbf{N} \to \text{TTC}$. A truth-table reduction $F$ naturally induces a function $\hat{F}: \mathbf{C} \to \mathbf{C}$ defined by
+
+$$ \hat{F}(B) = F(0)^B F(1)^B \dots $$
+
+In general, we identify a truth-table reduction $F$ with the induced function $\hat{F}$, writing $F$ for either function and relying on context to avoid confusion.
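
A tt-condition and its tt-value are easy to render concretely. In the sketch below (our own encoding), a condition is a pair of query positions and a truth table given as a function, and an oracle $B$ is a function $\mathbf{N} \to \{0,1\}$:

```python
def tt_value(tau, B):
    """Evaluate the tt-value tau^B = g(B[n1] ... B[nk]) of a tt-condition
    tau = ((n1, ..., nk), g) on an oracle B given as a function N -> {0,1}."""
    positions, g = tau
    return g(tuple(B(n) for n in positions))

# Example: a condition that queries positions 0 and 2 and returns their XOR.
tau = ((0, 2), lambda bits: bits[0] ^ bits[1])
B = lambda n: n % 2          # the sequence 0 1 0 1 0 1 ...
```

For this $\tau$ the query height is $\max(0, 2) + 1 = 3$, and $\tau^B = B[0] \oplus B[2] = 0$.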
+
+The following terminology is convenient for our purposes.
+
+**Definition 3.1** A *uniform reducibility* is a total computable function $F: \mathbf{N} \times \mathbf{N} \to \text{TTC}$.
+
+If $F$ is a uniform reducibility, then we use the notation $F_k(n) = F(k, n)$ for all $k, n \in \mathbf{N}$. We thus regard a uniform reducibility as a computable sequence $F_0, F_1, F_2, \dots$ of tt-reductions.
+
+**Definition 3.2** If $F$ and $G$ are uniform reducibilities, then we define the *composition* of $F$ with $G$ to be the uniform reducibility
+
+$$ F \circ G: \mathbf{N} \times \mathbf{N} \to \text{TTC} $$
+
+defined by
+
+$$ (F \circ G)(\langle k, j \rangle, n) = (F_k \circ G_j)(n) $$
+
+for all $k, j, n \in \mathbf{N}$, where “$F_k \circ G_j$” denotes the (easily defined) truth-table reduction satisfying $(F_k \circ G_j)(B) = F_k(G_j(B))$ for all $B \in \mathbf{C}$.
+
+**Definition 3.3** Suppose $F$ is a uniform reducibility, and $A$ and $B$ are infinite binary sequences.
+
+1. A is *F-reducible* to $B$, and we write $A \leq_F B$, if there is some $k$ such that $A = F_k(B)$.
+
+2. The *upper F-span* of $A$ is the set $F^{-1}(A) = \{X \in \mathbf{C} : A \leq_F X\}$.
+
+3. A is computably F-deep if $\mu_{comp}(F^{-1}(A)) = 0$.
+
+4. A is computably weakly deep if, for every uniform reducibility $F$, A is computably F-deep.
+
+We pursue for a moment the analogy between Definition 3.3(3) and Bennett's weak depth. In [15], Terwijn and Torenvliet extended the resource-bounded measure of Lutz [9] using computably enumerable supermartingales, functions like martingales except the averaging condition they must satisfy is weaker than that required of ordinary martingales. Using this notion of measure, termed c.e. measure, Terwijn and Torenvliet proved that the class of non-algorithmically random languages is the maximum c.e. measure 0 class. Bennett's notion of weak depth can thus be characterized in terms of c.e. measure in the sense that a language $A$ is weakly deep if and only if the c.e. measure of its upper tt-span is 0. Definition 3.3(3) reflects the spirit of this characterization, but replaces 'tt-reducible' with 'F-reducible' and replaces 'c.e. measure 0' with 'comp-measure 0'. Regarding Definition 3.3(4), observe that every computably weakly deep sequence is weakly deep. Lathrop and Lutz [6] have shown the converse is not true.
+
+Although the definition of a weakly useful sequence was stated in terms of Turing reductions, we work almost exclusively in this section and the next with truth table reductions. The connection between these two notions is expressed by the following well-known fact.
+
+**Lemma 3.4** For every computable time bound $t(n)$, there is a uniform reducibility $F$ such that for all $x \in \mathbf{C}$,
+$DTIME^x(t) = \{F_0(x), F_1(x), \dots\}$.
+---PAGE_BREAK---
+
+We now prove the main result of this section.
+
+**Theorem 3.5** For every uniform reducibility *F*, comp-almost every sequence in DEC is computably *F*-deep.
+
+*Proof.* Let $F$ denote a uniform reducibility, and write $F = F_0, F_1, \dots$. For each $j \in \mathbb{N}$, define $D_j(w) = \{B \in \mathbf{C} : F_j(B)[0 \dots |w|-1] = w\}$, i.e., those oracles which allow $F_j$ to correctly compute $w$. For every $j \in \mathbb{N}$ and $w \in \{0, 1\}^*$, we may, using a program that computes $F$, calculate $L_j(w) = \max(\{h \in \mathbb{N} : h$ is the query height of $F_j(i)$ for some $0 \le i < |w|\})$, and then poll $\{0, 1\}^{L_j(w)}$ using the tt-conditions $F_j(i)$ for $0 \le i < |w|$ to obtain the set
+
+$$E_j(w) = \{\alpha \in \{0,1\}^{L_j(w)} : F_j(B)[0\dots|w|-1] = w \text{ for all } B \supseteq \alpha\}.$$
+
+Let $\Pr(D_j(w))$ denote the probability that an oracle chosen at random belongs to $D_j(w)$. Then the function $\tilde{d}_j(w) = 2^{|w|} \cdot \Pr(D_j(w))$ is a martingale, and the family $\{\tilde{d}_j\}_{j=0}^\infty$ is uniformly computable since $\Pr(D_j(w)) = |E_j(w)| \cdot 2^{-L_j(w)}$. Set
+
+$$\tilde{d}(w) = \sum_{j=0}^{\infty} 2^{-j} \cdot \tilde{d}_j(w).$$
+
+It is routine to verify that $\tilde{d}$ is computable.
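
The polling step can be made concrete. The sketch below (our own code; it assumes every tt-condition queries at least one position) computes $\Pr(D_j(w))$ for a toy tt-reduction by enumerating $\{0,1\}^{L_j(w)}$; with it one can check directly that $\tilde{d}_j(w) = 2^{|w|} \Pr(D_j(w))$ satisfies the martingale averaging condition.

```python
from itertools import product

def prob_Dj(F_j, w):
    """Pr(D_j(w)) = |E_j(w)| * 2**(-L_j(w)), computed by polling every
    alpha in {0,1}^L.  F_j(i) is the tt-condition ((n1,...,nk), g) that
    computes output bit i; w is a bit string such as "101"."""
    conds = [F_j(i) for i in range(len(w))]
    L = max([max(ns) + 1 for ns, _ in conds] + [1])   # query height L_j(w)
    hits = sum(
        all(g(tuple(alpha[n] for n in ns)) == int(b)
            for (ns, g), b in zip(conds, w))
        for alpha in product((0, 1), repeat=L))
    return hits / 2 ** L
```

For the identity reduction $F_j(i) = ((i), \text{bit}\mapsto\text{bit})$ this gives $\Pr(D_j(w)) = 2^{-|w|}$, so $\tilde{d}_j$ is identically 1.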
+
+To show that $\mu_{comp}(F^{-1}(A)) = 0$ for comp-almost every $A \in \text{DEC}$, we construct a computable martingale $d$ whose success set contains $F^{-1}(A)$. Suppose $A \in \text{DEC} - S^\infty[\tilde{d}]$. Then for every $j \in \mathbb{N}$, we have
+
+$$\lim_{m \to \infty} \Pr(D_j(A[0 \dots m])) = 0.$$
+
+Because of this we may, for each $j, n \in \mathbb{N}$, compute a number $m_{j,n}$ such that $\Pr(D_j(A[0 \dots m_{j,n}])) \le 2^{-j-n-1}$. This can be accomplished by using programs that compute both $F$ and $A$ to calculate, for any $j, n \in \mathbb{N}$, $\Pr(D_j(A[0 \dots m]))$ for increasing values of $m$. We then define $m_{j,n}$ to be the least $m$ such that $\Pr(D_j(A[0 \dots m])) \le 2^{-j-n-1}$. We remark that $\{m_{j,n}\}_{j,n=0}^\infty$ is uniformly computable, and define a uniformly computable sequence $\{d_{j,n}\}_{j,n=0}^\infty$ of martingales as follows. For all $j, n \in \mathbb{N}$, let $d_{j,n}$ be the unique martingale with initial value $d_{j,n}(\lambda) = \Pr(D_j(A[0 \dots m_{j,n}]))$, and satisfying $D_j(A[0 \dots m_{j,n}]) = SS^1[d_{j,n}]$.
+
+This implies that $d_{j,n}(\lambda) \le 2^{-j-n-1}$ for all $j, n \in \mathbb{N}$, and we define a function $d: \{0,1\}^* \to [0,\infty)$ by
+
+$$d(w) = \sum_{n=0}^{\infty} \sum_{j=0}^{\infty} d_{j,n}(w).$$
+
+Then
+
+$$d(\lambda) = \sum_{n=0}^{\infty} \sum_{j=0}^{\infty} d_{j,n}(\lambda) \leq \sum_{n=0}^{\infty} \sum_{j=0}^{\infty} 2^{-j-n-1} = 2,$$
+
+and thus $d$ is a martingale. To see that it is computable, define $\hat{d}(r, w) = \sum_{n=0}^{r+1+|w|} \sum_{j=0}^{r+1+|w|} d_{j,n}(w)$. The fact that $\{d_{j,n}\}_{j,n=0}^\infty$ is uniformly computable implies that $\hat{d}(r, w)$ is computable, and
+
+$$
+\begin{align*}
+\left| d(w) - \hat{d}(r,w) \right| &\leq \sum_{n=0}^{\infty} \sum_{j=r+2+|w|}^{\infty} 2^{|w|} \cdot 2^{-j-n-1} + \sum_{n=r+2+|w|}^{\infty} \sum_{j=0}^{\infty} 2^{|w|} \cdot 2^{-j-n-1} \\
+&= \sum_{n=0}^{\infty} 2^{-r-2-n} + \sum_{n=r+2+|w|}^{\infty} 2^{|w|-n} \\
+&= 2^{-r-1} + 2^{-r-1} = 2^{-r}.
+\end{align*}
+$$
+
+For every $B \in F^{-1}(A)$, there exists $j \in N$ such that for all $n \in N$, $B \in SS^1[d_{j,n}]$, whence $F^{-1}(A) \subseteq S^\infty[d]$. This shows that $\mu_{comp}(F^{-1}(A)) = 0$. The sequence $A \in DEC - S^\infty[\tilde{d}]$ was arbitrary, so it follows that comp-almost every decidable sequence is computably F-deep. $\square$
+---PAGE_BREAK---
+
+**Theorem 3.6** Every weakly useful sequence is computably weakly deep.
+
+**Proof.** Assume that A is weakly useful and fix a uniform reducibility F. Fix a computable time bound $t: \mathbf{N} \to \mathbf{N}$ such that $\mu(\text{DTIME}^A(t) | \text{DEC}) \neq 0$. Then by Lemma 3.4 there is a uniform reducibility $\tilde{F}$ such that $\text{DTIME}^A(t) = \{\tilde{F}_0(A), \tilde{F}_1(A), \dots\}$. Let X denote the collection of computably $(\tilde{F} \circ F)$-deep sequences. By Theorem 3.5, $\mu(X | \text{DEC}) = 1$, so there is a sequence $B \in X \cap \text{DTIME}^A(t) \cap \text{DEC}$. Let $C \in F^{-1}(A)$ and choose $j, k \in \mathbf{N}$ such that $A = F_j(C)$ and $B = \tilde{F}_k(A)$. Then $B = \tilde{F}_k(F_j(C))$, so $C \in (\tilde{F} \circ F)^{-1}(B)$. This shows that $F^{-1}(A) \subseteq (\tilde{F} \circ F)^{-1}(B)$. Since $B \in X$, we have $\mu_{\text{comp}}((\tilde{F} \circ F)^{-1}(B)) = 0$, hence $\mu_{\text{comp}}(F^{-1}(A)) = 0$, whence A is computably weakly deep. $\square$
+
+# 4 Main Result
+
+In this section, we prove the existence of weakly useful sequences that are not strongly useful. Although we consider infinite, nonuniform collections of uniform reducibilities F, our construction uses computably F-deep sets that are constructed in a canonical way.
+
+We will deal extensively with *partial characteristic functions*, i.e., functions with domain a subset of **N** and range {0, 1}. If σ and τ are partial characteristic functions, we let dom(σ) denote the domain of σ, and say that σ and τ are *compatible* if they agree on all elements of dom(σ) ∩ dom(τ). We say that *σ is extended by* τ (σ ⊑ τ) if σ and τ are compatible and dom(σ) ⊆ dom(τ). If D ⊆ **N**, then *σ restricted to D* is the partial characteristic function τ given by
+
+$$\tau[x] = \begin{cases} \sigma[x] & \text{if } x \in D \\ \text{undefined} & \text{otherwise.} \end{cases}$$
+
+We often identify $\mathbf{N}$ with $\mathbf{N}^2$ via the pairing function. The $i$th section of natural numbers $\{\langle i, j \rangle : j \in \mathbf{N}\}$ is denoted by $\mathbf{N}_i$, and the union of the first $i$ sections $\mathbf{N}_0 \cup \dots \cup \mathbf{N}_{i-1}$ by $\mathbf{N}_{<i}$. Choose $L$ at least as large as the query height of $F_j(\langle j, n' \rangle)$ for all $n' \le n$. Partition $\{0, 1\}^L$ into two sets $R_0$ and $R_1$ so that
+
+$$\alpha \in R_b \iff (F_j(\langle j, n \rangle)^B = b \text{ for every oracle } B \supseteq \alpha).$$
+
+Informally, we identify $R_0$ with the set of oracles which answer “No” when queried by $F_j(\langle j, n \rangle)$, and $R_1$ with the set of oracles which answer “Yes.” Our strategy will be to diagonalize against the majority in the construction of $A$, ensuring that only the minority answer among those consistent with previous answers can correctly compute any given bit of $A$. We thus define $A[\langle j, n \rangle]$ by induction as follows. Assume that $A[\langle j, n' \rangle]$ has already been defined for all $n' < n$, and let
+
+$$R = \{\alpha \in \{0,1\}^L : (\forall B \supseteq \alpha)(\forall n' < n)\; F_j(\langle j, n' \rangle)^B = A[\langle j, n' \rangle]\},$$
+
+the set of oracle prefixes whose answers are consistent with the previously defined bits of $A$. Set $A[\langle j, n \rangle] = 1$ if $|R \cap R_0| \ge |R \cap R_1|$ and $A[\langle j, n \rangle] = 0$ otherwise, so that $A[\langle j, n \rangle]$ disagrees with the majority answer. The construction also uses, for $i \le k$ and $j, q, \ell \in \mathbf{N}$, the sets $D_{j,q\ell}^i = \{B \in \mathbf{C} : F_j^i(\langle j, m \rangle)^B = A^i[\langle j, m \rangle] \text{ for all } m < q\ell\}$ and the martingales
+
+$$d_{k,q,\ell}^{i,j}(w) = \begin{cases} 2^{-\ell} \cdot \dfrac{\Pr\bigl(D_{j,q\ell}^i \bigm| \mathbf{C}_{w \upharpoonright y_{i,j}(q\ell)}\bigr)}{\Pr\bigl(D_{j,q\ell}^i\bigr)} & \text{if } \Pr\bigl(D_{j,q\ell}^i\bigr) > 0 \\ 2^{-\ell} & \text{otherwise,} \end{cases} \quad (1)$$
+
+where $\mathbf{C}_w$ denotes the set of sequences extending the string $w$ and $w \upharpoonright y$ is $w$ truncated to its first $y$ bits,
+
+where the probabilities refer to the uniform probability measure on $\mathbf{C}$, and, for measurable sets $X, Y \subseteq \mathbf{C}$ with $\Pr(Y) > 0$, we define $\Pr(X | Y)$ to be $\frac{\Pr(X \cap Y)}{\Pr(Y)}$ as usual. Note that the definition of $d_{k,q,\ell}^{i,j}$ above remains unchanged if we replace $y_{i,j}(q\ell)$ with any $y \ge y_{i,j}(q\ell)$, because $D_{j,q\ell}^i$ depends on $B \in \mathbf{C}$ only through the queries made by $F_j^i$ on inputs $\langle j, 0 \rangle, \dots, \langle j, q\ell - 1 \rangle$, and none of these queries is of the form $\langle x, y \rangle$ for $y \ge y_{i,j}(q\ell)$.
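The majority-vote diagonalization described above can be illustrated at toy scale. In the sketch below, the "oracle answers" come from a hypothetical stand-in predicate (parity of the prefix), not from the actual reducibilities $F_j$; only the counting logic is the point:

```python
from itertools import product

def diagonal_bit(consistent, answers):
    """Given the set R of still-consistent oracle prefixes and a map
    alpha -> answer bit (the R_0/R_1 partition), return the bit that
    the majority of R fails to compute: 1 iff at least half answer 0."""
    yes = sum(1 for a in consistent if answers[a] == 1)
    no = len(consistent) - yes
    return 1 if no >= yes else 0

# Toy example with L = 3: the 'oracle answer' of a prefix is its parity
# (a stand-in for F_j(<j, n>)^B).
L = 3
oracles = list(product((0, 1), repeat=L))
answers = {a: sum(a) % 2 for a in oracles}
bit = diagonal_bit(set(oracles), answers)
```

Only a minority of the consistent prefixes can then compute the chosen bit correctly.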
+
+Summing over $\ell \ge 1$, define
+
+$$d_{k,q}^{i,j}(w) = \sum_{\ell=1}^{\infty} d_{k,q,\ell}^{i,j}(w).$$
+
+Finally, set
+
+$$d_k(w) = \tilde{d}_k(w) + \sum_{i=0}^{k} \sum_{j=0}^{\infty} \sum_{q=0}^{\infty} d_{k,q}^{i,j}(w) \cdot 2^{-q-j}.$$
+
+We define $H_k: N_k \rightarrow \{0,1\}$ so that it is compatible with $\alpha_k$ on their common domain but diagonalizes out of the success set of $d_k$ otherwise. Specifically,
+
+$$H_k[n] = H_k[\langle k, n \rangle] = \begin{cases} \alpha_k[\langle k, n \rangle] & \text{if it is defined} \\ \bigl[\, d_k(H_k[0..n-1]\,1) \le d_k(H_k[0..n-1]\,0) \,\bigr] & \text{otherwise.} \end{cases}$$
+
+As stated, it may not be the case that the comparison in the definition can actually be accomplished since a
+computable martingale such as $d_k$ cannot in general be computed exactly, but is only approximated. What
+we are really comparing then are not $d_k(H_k[0..n-1]\,1)$ and $d_k(H_k[0..n-1]\,0)$, but rather their $n$th approximations,
+which *are* computable. Since these approximations are guaranteed to be within $2^{-n}$ of the actual values,
+and our sole aim is to make $d_k$ fail on $H_k$, it suffices for our purposes to consider only the approximations
+when doing the comparisons above. The same trick is used in [8].
+
+$H_k$ is decidable, and for cofinitely many $n$, $H_k[n]$ is chosen so that $d_k(H_k[0..n]) \le d_k(H_k[0..n-1]) + 2^{-n}$, the $2^{-n}$ owing to the error in the approximation of $d_k$. Thus $d_k$ fails on $H_k$, from which we obtain
+
+**Fact 4.5** The martingales $\tilde{d}_k$ and $d_{k,q}^{i,j}$, where $j,q$ are arbitrary and $i \le k$, all fail on $H_k$.
+
+Thus Conditions 1 and 2 above are satisfied. Define $H$ to be the function whose value at $\langle u, v \rangle$ is $H_u[v]$.
+Each $H_k$ preserves the diagonalization commitments made by the $\alpha_{k'}$ for $k' \le k$, and one can easily see that
+$\alpha_0 \sqsubseteq \alpha_1 \sqsubseteq \cdots \sqsubseteq H$. This completes the construction of $H$.
+---PAGE_BREAK---
+
+*H is weakly useful but not strongly useful*
+
+Notice that $J = \{H_0, H_1, ...\} \subseteq \text{DTIME}^H(\text{linear})$, but $\mu_{\text{comp}}(J) \neq 0$. Therefore, *H* is weakly useful. It only remains to show that *H* is not strongly useful. For every *i*, $\text{DTIME}^H(t_i) = \{B \in \mathbf{C} : B \leq_{F^i} H\}$. Hence it suffices to show that for all *i* and *j*, the canonical computably $F^i$-deep set $A^i \neq F_j^i(H)$.
+
+We assume there are natural numbers $r$ and $s$ such that $A^r = F_s^r(H)$ and work towards a contradiction. Define $k_0 = \langle r, s \rangle$, let $\sigma = H \upharpoonright \mathbf{N}_{<k_0}$, and choose $q_0 > r$ such that $\mathbf{N}_{\ge q_0} \cap \text{dom}(\sigma) = \emptyset$. We will show that $d_{n,q_0}^{r,s}$ succeeds on $H_n$ for some $n < q_0$, contradicting Fact 4.5.
+
+For every $y \in \mathbf{N}$, let $\{\delta_i : 0 \le i < 2^{ry}\}$ be an enumeration of all partial characteristic functions with domain $\mathcal{T} = \{\langle u, v\rangle : 0 \le u < r \text{ and } 0 \le v < y\}$, such that $\delta_0 = H \upharpoonright \mathcal{T}$, and let $R_{q_0,y}$ denote the set of sequences $B \in \mathbf{C}$ that extend $\delta_0$. For every $\ell \ge 1$ and sufficiently large $y$,
+
+$$\Pr\bigl(D_{s,q_0\ell}^r\bigr) \ge \Pr\bigl(R_{q_0,y}\bigr) > 0, \quad (2)$$
+
+since $R_{q_0,y} \subseteq D_{s,q_0\ell}^r$ by our assumption. For $r \le n < q_0$, then, only the top case of (1) is relevant. Thus for sufficiently large $y$,
+
+$$
+\prod_{n=r}^{q_0-1} d_{n,q_0,\ell}^{r,s}\bigl(H_n[0..y-1]\bigr) > 0.
+$$
+
+---PAGE_BREAK---
+
| | PAGE |
|---|---|
| Preface (A. AUTHIER) | xi |
| PART 1. TENSORIAL ASPECTS OF PHYSICAL PROPERTIES | |
| 1.1. Introduction to the properties of tensors (A. AUTHIER) | 3 |
| 1.1.1. The matrix of physical properties | 3 |
| 1.1.2. Basic properties of vector spaces | 5 |
| 1.1.3. Mathematical notion of tensor | 7 |
| 1.1.4. Symmetry properties | 10 |
| 1.1.5. Thermodynamic functions and physical property tensors | 31 |
| 1.1.6. Glossary | 32 |
| 1.2. Representations of crystallographic groups (T. JANSSEN) | 34 |
| 1.2.1. Introduction | 34 |
| 1.2.2. Point groups | 35 |
| 1.2.3. Space groups | 46 |
| 1.2.4. Tensors | 51 |
| 1.2.5. Magnetic symmetry | 53 |
| 1.2.6. Tables | 56 |
| 1.2.7. Introduction to the accompanying software Tenχar (M. EPHRAIM, T. JANSSEN, A. JANNER AND A. THIERS) | 62 |
| 1.2.8. Glossary | 70 |
| 1.3. Elastic properties (A. AUTHIER AND A. ZAREMBOWITCH) | 72 |
| 1.3.1. Strain tensor | 72 |
| 1.3.2. Stress tensor | 76 |
| 1.3.3. Linear elasticity | 80 |
| 1.3.4. Propagation of elastic waves in continuous media – dynamic elasticity | 86 |
| 1.3.5. Pressure dependence and temperature dependence of the elastic constants | 89 |
| 1.3.6. Nonlinear elasticity | 91 |
| 1.3.7. Nonlinear dynamic elasticity | 94 |
| 1.3.8. Glossary | 97 |
| 1.4. Thermal expansion (H. KÜPPERS) | 99 |
| 1.4.1. Definition, symmetry and representation surfaces | 99 |
| 1.4.2. Grüneisen relation | 100 |
| 1.4.3. Experimental methods | 101 |
| 1.4.4. Relation to crystal structure | 103 |
| 1.4.5. Glossary | 104 |
| 1.5. Magnetic properties (A. S. BOROVIK-ROMANOV AND H. GRIMMER) | 105 |
| 1.5.1. Introduction | 105 |
| 1.5.2. Magnetic symmetry | 109 |
| 1.5.3. Phase transitions into a magnetically ordered state | 116 |
| 1.5.4. Domain structure | 125 |
| 1.5.5. Weakly non-collinear magnetic structures | 127 |
| 1.5.6. Reorientation transitions | 131 |
| 1.5.7. Piezomagnetism | 132 |
| 1.5.8. Magnetoelectric effect | 137 |
| 1.5.9. Magnetostriction | 142 |
| 1.5.10. Transformation from Gaussian to SI units | 146 |
| 1.5.11. Glossary | 146 |
+
+---PAGE_BREAK---
+
+CONTENTS
+
| | |
|---|---|
| 1.6. Classical linear crystal optics (A. M. GLAZER AND K. G. COX) | 150 |
| 1.6.1. Introduction | 150 |
| 1.6.2. Generalized optical, electro-optic and magneto-optic effects | 150 |
| 1.6.3. Linear optics | 152 |
| 1.6.4. Practical observation of crystals | 154 |
| 1.6.5. Optical rotation | 166 |
| 1.6.6. Linear electro-optic effect | 172 |
| 1.6.7. The linear photoelastic effect | 173 |
| 1.6.8. Glossary | 176 |
| 1.7. Nonlinear optical properties (B. BOULANGER AND J. ZYSS) | 178 |
| 1.7.1. Introduction | 178 |
| 1.7.2. Origin and symmetry of optical nonlinearities | 178 |
| 1.7.3. Propagation phenomena | 183 |
| 1.7.4. Determination of basic nonlinear parameters | 212 |
| 1.7.5. The main nonlinear crystals | 214 |
| 1.7.6. Glossary | 216 |
| 1.8. Transport properties (G. D. MAHAN) | 220 |
| 1.8.1. Introduction | 220 |
| 1.8.2. Macroscopic equations | 220 |
| 1.8.3. Electrical resistivity | 220 |
| 1.8.4. Thermal conductivity | 224 |
| 1.8.5. Seebeck coefficient | 226 |
| 1.8.6. Glossary | 227 |
| 1.9. Atomic displacement parameters (W. F. KUHS) | 228 |
| 1.9.1. Introduction | 228 |
| 1.9.2. The atomic displacement parameters (ADPs) | 228 |
| 1.9.3. Site-symmetry restrictions | 232 |
| 1.9.4. Graphical representation | 232 |
| 1.9.5. Glossary | 242 |
| 1.10. Tensors in quasiperiodic structures (T. JANSSEN) | 243 |
| 1.10.1. Quasiperiodic structures | 243 |
| 1.10.2. Symmetry | 245 |
| 1.10.3. Action of the symmetry group | 247 |
| 1.10.4. Tensors | 249 |
| 1.10.5. Tables | 255 |
+
+## PART 2. SYMMETRY ASPECTS OF EXCITATIONS
+
| | |
|---|---|
| 2.1. Phonons (G. ECKOLD) | 266 |
| 2.1.1. Introduction | 266 |
| 2.1.2. Fundamentals of lattice dynamics in the harmonic approximation | 266 |
| 2.1.3. Symmetry of lattice vibrations | 274 |
| 2.1.4. Conclusion | 291 |
| 2.1.5. Glossary | 291 |
| 2.2. Electrons (K. SCHWARZ) | 294 |
| 2.2.1. Introduction | 294 |
| 2.2.2. The lattice | 294 |
| 2.2.3. Symmetry operators | 294 |
| 2.2.4. The Bloch theorem | 295 |
| 2.2.5. The free-electron (Sommerfeld) model | 297 |
+---PAGE_BREAK---
+
+
+
+
+
| | |
|---|---|
| 2.2.6. Space-group symmetry | 297 |
| 2.2.7. The k vector and the Brillouin zone | 298 |
| 2.2.8. Bloch functions | 299 |
| 2.2.9. Quantum-mechanical treatment | 299 |
| 2.2.10. Density functional theory | 300 |
| 2.2.11. Band-theory methods | 301 |
| 2.2.12. The linearized augmented plane wave method | 303 |
| 2.2.13. The local coordinate system | 304 |
| 2.2.14. Characterization of Bloch states | 305 |
| 2.2.15. Electric field gradient tensor | 307 |
| 2.2.16. Examples | 310 |
| 2.2.17. Conclusion | 312 |
| 2.3. Raman scattering (I. GREGORA) | 314 |
| 2.3.1. Introduction | 314 |
| 2.3.2. Inelastic light scattering in crystals – basic notions | 314 |
| 2.3.3. First-order scattering by phonons | 315 |
| 2.3.4. Morphic effects in Raman scattering | 322 |
| 2.3.5. Spatial-dispersion effects | 325 |
| 2.3.6. Higher-order scattering | 326 |
| 2.3.7. Conclusions | 327 |
| 2.3.8. Glossary | 328 |
| 2.4. Brillouin scattering (R. VACHER AND E. COURTENS) | 329 |
| 2.4.1. Introduction | 329 |
| 2.4.2. Elastic waves | 329 |
| 2.4.3. Coupling of light with elastic waves | 330 |
| 2.4.4. Brillouin scattering in crystals | 330 |
| 2.4.5. Use of the tables | 331 |
| 2.4.6. Techniques of Brillouin spectroscopy | 331 |
+
+
+
+
+## PART 3. SYMMETRY ASPECTS OF STRUCTURAL PHASE TRANSITIONS, TWINNING AND DOMAIN STRUCTURES
+
+
+
+
| | | |
|---|---|---|
| 3.1. | Structural phase transitions (J.-C. TOLÉDANO, V. JANOVEC, V. KOPSKÝ, J. F. SCOTT AND P. BOČEK) | 338 |
| 3.1.1. | Introduction (J.-C. TOLÉDANO) | 338 |
| 3.1.2. | Thermodynamics of structural transitions (J.-C. TOLÉDANO) | 340 |
| 3.1.3. | Equitranslational phase transitions. Property tensors at ferroic phase transitions (V. JANOVEC AND V. KOPSKÝ) | 350 |
| 3.1.4. | Example of a table for non-equitranslational phase transitions (J.-C. TOLÉDANO) | 361 |
| 3.1.5. | Microscopic aspects of structural phase transitions and soft modes (J. F. SCOTT) | 361 |
| 3.1.6. | Group informatics and tensor calculus (V. KOPSKÝ AND P. BOČEK) | 372 |
| 3.1.7. | Glossary | 374 |
| 3.2. | Twinning and domain structures (V. JANOVEC, TH. HAHN AND H. KLAPPER) | 377 |
| 3.2.1. | Introduction and history | 377 |
| 3.2.2. | A brief survey of bicrystallography | 378 |
| 3.2.3. | Mathematical tools | 379 |
| 3.3. | Twinning of crystals (TH. HAHN AND H. KLAPPER) | 393 |
| 3.3.1. | Crystal aggregates and intergrowths | 393 |
| 3.3.2. | Basic concepts and definitions of twinning | 394 |
| 3.3.3. | Morphological classification, simple and multiple twinning | 398 |
| 3.3.4. | Composite symmetry and the twin law | 399 |
+
+
+
+---PAGE_BREAK---
+
+
| | |
|---|---|
| 3.3.5. Description of the twin law by black-white symmetry | 402 |
| 3.3.6. Examples of twinned crystals | 403 |
| 3.3.7. Genetic classification of twins | 412 |
| 3.3.8. Lattice aspects of twinning | 416 |
| 3.3.9. Twinning by merohedry and pseudo-merohedry | 422 |
| 3.3.10. Twin boundaries | 426 |
| 3.3.11. Glossary | 444 |
| 3.4. Domain structures (V. JANOVEC AND J. PŘÍVRATSKÁ) | 449 |
| 3.4.1. Introduction | 449 |
| 3.4.2. Domain states | 451 |
| 3.4.3. Domain pairs: domain twin laws, distinction of domain states and switching | 470 |
| 3.4.4. Domain twins and domain walls | 491 |
| 3.4.5. Glossary | 502 |
| List of terms and symbols used in this volume | 507 |
| Author index | 509 |
| Subject index | 514 |
+---PAGE_BREAK---
+
+# Preface
+
+BY ANDRÉ AUTHIER
+
+The initial idea of having a volume of *International Tables for Crystallography* dedicated to the physical properties of crystals is due to Professor B. T. M. Willis. He submitted the proposal to the Executive Committee of the International Union of Crystallography during their meeting in Vienna in 1988. The principle was then adopted, with Professor Willis as Editor. After his resignation in 1990, I was asked by the Executive Committee to become the new Editor. Following a broad consultation with many colleagues, a nucleus of potential authors met in Paris in June 1991 to define the contents of the volume and to designate its contributors. This was followed by a meeting in 1995, hosted by Theo Hahn in Aachen, of the authors contributing to Part 3 and by another meeting in 1998, hosted by Vaclav Janovec and Vojtech Kopský in Prague, of the authors of the supplementary software.
+
+The aim of Volume D is to provide an up-to-date account of the physical properties of crystals, with many useful tables, to a wide readership in the fields of mineralogy, crystallography, solid-state physics and materials science. An original feature of the volume is the bringing together of various topics that are usually to be found in quite different handbooks but that have in common their tensorial nature and the role of crystallographic symmetry. Part 3 thus confronts the properties of twinning, which traditionally pertains to crystallography and mineralogy, and the properties of ferroelectric or ferroelastic domains, which are usually studied in physics.
+
+The volume comprises three parts and a CD-ROM of supplementary software.
+
+The first part is devoted to the tensorial properties of physical quantities. After a presentation of the matrix of physical properties and an introduction to the mathematical notion of a tensor, the symmetry properties of tensors and the representations of crystallographic groups are discussed, with a special treatment for the case of quasiperiodic structures. The first part also includes several examples of physical property tensors developed in separate chapters: elastic properties, thermal expansion, magnetic properties, optical properties (both linear and nonlinear), transport properties and atomic displacement parameters.
+
+The second part is concerned with the symmetry aspects of excitations in reciprocal space. It includes bases of solid-state physics and describes in the first two chapters the properties of phonons and electrons in crystals. The following two chapters deal with Raman and Brillouin scattering.
+
+The third part concerns structural phase transitions and twinning. The first chapter includes an introduction to the
+
+Landau theory, a description of the behaviour of physical property tensors at ferroic phase transitions and an approach to the microscopic aspects of structural transitions and soft modes, with practical examples. The second chapter explains the relationship between twinning and domain structures and introduces the group-theoretical tools needed for the analysis of domain structures and twins. In the third chapter, the basic concepts and definitions of twinning are presented, as well as the morphological, genetic and lattice classifications of twins and the properties of twin boundaries, with many examples. The fourth chapter is devoted to the symmetry and crystallographic analysis of domain structures. The relations that govern their formation are derived and tables with useful ready-to-use data on domain structures of ferroic phases are provided.
+
+An innovation of Volume D is an accompanying CD-ROM containing two programs. The first, *Tenχar (Calculations with Tensors and Characters)*, supports Part 1 for the determination of irreducible group representations and tensor components. The second, *GI\*KoBo-1*, supports Part 3 on structural phase transitions and enables the reader to find the changes in the tensor properties of physical quantities during ferroic phase transitions.
+
+For various reasons, Volume D has taken quite a long time to produce, from the adoption of its principle in 1990 to its actual printing in 2003, and it is a particular pleasure for me to see the outcome of so many efforts. I would like to take this opportunity to thank all those who have contributed to the final result. Firstly, thanks are due to Terry Willis, whose idea the volume was and who made the initial push to have it accepted. I am very grateful to him for his encouragement and for having translated into English a set of notes that I had written for my students and which served as the nucleus of Chapter 1.1. I am greatly indebted to the Technical Editors who have worked tirelessly over the years: Sue Barnes in the early years and then Nicola Ashcroft, Amanda Berry and the staff of the Editorial Office in Chester, who did the hard work of editing all the chapters and translating them into Standard Generalized Markup Language (SGML); I thank them for their infinite patience and good humour. I am also very grateful to the Research and Development Officer, Brian McMahon, for his successful integration of the supplementary software and for his constant cooperation with its authors. Last but not least, I would like to thank all the authors who contributed to the volume and made it what it is.
+---PAGE_BREAK---
+
+SAMPLE PAGES
+---PAGE_BREAK---
+
+# 1.1. Introduction to the properties of tensors
+
+BY A. AUTHIER
+
+## 1.1.1. The matrix of physical properties
+
+### 1.1.1.1. Notion of extensive and intensive quantities
+
+Physical laws express in general the response of a medium to a certain influence. Most physical properties may therefore be defined by a relation coupling two or more measurable quantities. For instance, the specific heat characterizes the relation between a variation of temperature and a variation of entropy at a given temperature in a given medium, the dielectric susceptibility the relation between electric field and electric polarization, the elastic constants the relation between an applied stress and the resulting strain etc. These relations are between quantities of the same nature: thermal, electrical and mechanical, respectively. But there are also cross effects, for instance:
+
+(a) *thermal expansion* and *piezocalorific effect*: mechanical reaction to a thermal impetus or the reverse;
+
+(b) *pyroelectricity* and *electrocalorific effect*: electrical response to a thermal impetus or the reverse;
+
+(c) *piezoelectricity* and *electrostriction*: electric response to a mechanical impetus;
+
+(d) *piezomagnetism* and *magnetostriction*: magnetic response to a mechanical impetus;
+
+(e) *photoelasticity*: birefringence produced by stress;
+
+(f) *acousto-optic effect*: birefringence produced by an acoustic wave;
+
+(g) *electro-optic effect*: birefringence produced by an electric field;
+
+(h) *magneto-optic effect*: appearance of a rotatory polarization under the influence of a magnetic field.
+
+The physical quantities that are involved in these relations can be divided into two categories:
+
+(i) *extensive quantities*, which are proportional to the volume of matter or to the mass, that is to the number of molecules in the medium, for instance entropy, energy, quantity of electricity etc. One uses frequently specific extensive parameters, which are given per unit mass or per unit volume, such as the specific mass, the electric polarization (dipole moment per unit volume) etc.
+
+(ii) *intensive parameters*, quantities whose product with an extensive quantity has the dimensions of an energy. For instance, volume is an extensive quantity; the energy stored by a gas undergoing a change of volume dV under pressure p is p dV. Pressure is therefore the intensive parameter associated with volume. Table 1.1.1.1 gives examples of extensive quantities and of the related intensive parameters.
+
+### 1.1.1.2. Notion of tensor in physics
+
+Each of the quantities mentioned in the preceding section is represented by a mathematical expression. Some are direction independent and are represented by *scalars*: specific mass, specific heat, volume, pressure, entropy, temperature, quantity of electricity, electric potential. Others are direction dependent and are represented by *vectors*: force, electric field, electric displacement, the gradient of a scalar quantity. Still others cannot be represented by scalars or vectors and are represented by more complicated mathematical expressions. Magnetic quantities are represented by *axial vectors* (or pseudovectors), which are a particular kind of tensor (see Section 1.1.4.5.3). A few examples will show the necessity of using tensors in physics and Section 1.1.3 will present elementary mathematical properties of tensors.
+
+(i) *Thermal expansion*. In an isotropic medium, thermal expansion is represented by a single number, a scalar, but this is
+
+not the case in an anisotropic medium: a sphere cut in an anisotropic medium becomes an ellipsoid when the temperature is varied and thermal expansion can no longer be represented by a single number. It is actually represented by a tensor of rank 2.
+
+(ii) *Dielectric constant*. In an isotropic medium of a perfect dielectric we can write, in SI units,
+
+$$ \mathbf{P} = \epsilon_0 \chi_e \mathbf{E} $$
+
+$$ \mathbf{D} = \epsilon_0 \mathbf{E} + \mathbf{P} = \epsilon_0 (1 + \chi_e) \mathbf{E} = \varepsilon \mathbf{E}, $$
+
+where **P** is the electric polarization (= dipole moment per unit volume), $\epsilon_0$ the permittivity of vacuum, $\chi_e$ the dielectric susceptibility, **D** the electric displacement and $\varepsilon$ the dielectric constant, also called dielectric permittivity. These expressions indicate that the electric field, on the one hand, and polarization and displacement, on the other hand, are linearly related. In the general case of an anisotropic medium, this is no longer true and one must write expressions indicating that the components of the displacement are linearly related to the components of the field:
+
+$$ \left\{ \begin{aligned} D^1 &= \epsilon_1^1 E^1 + \epsilon_1^2 E^2 + \epsilon_1^3 E^3 \\ D^2 &= \epsilon_1^2 E^1 + \epsilon_2^2 E^2 + \epsilon_2^3 E^3 \\ D^3 &= \epsilon_1^3 E^1 + \epsilon_2^3 E^2 + \epsilon_3^3 E^3 \end{aligned} \right. \qquad (1.1.1.1) $$
+
+The dielectric constant is now characterized by a set of nine components $\epsilon_i^j$; they are the components of a tensor of rank 2. It will be seen in Section 1.1.4.5.2.1 that this tensor is symmetric ($\epsilon_i^j = \epsilon_j^i$) and that the number of independent components is equal to six.
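Equation (1.1.1.1) is just a matrix–vector product. A small numerical sketch (the tensor components below are illustrative values, not data from the text):

```python
import numpy as np

# A symmetric rank-2 dielectric tensor (illustrative values, in units of eps_0).
eps = np.array([[2.0, 0.1, 0.0],
                [0.1, 3.0, 0.2],
                [0.0, 0.2, 4.0]])

E = np.array([1.0, 0.0, 0.0])   # field along x
D = eps @ E                     # equation (1.1.1.1): D^i = sum_j eps_i^j E^j

# The symmetry eps_i^j = eps_j^i leaves 6 independent components out of 9.
n_independent = 3 * (3 + 1) // 2
```

In the anisotropic case **D** is in general not parallel to **E**, which is exactly why a scalar no longer suffices.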
+
+(iii) *Stressed rod (Hooke's law)*. If one pulls a rod of length $l$ and cross section $A$ with a force $F$, its length is increased by a quantity $\Delta l$ given by $\Delta l/l = (1/E)F/A$, where $E$ is Young's modulus, or elastic stiffness (see Section 1.3.3.1). But, at the same time, the radius, $r$, decreases by $\Delta r$ given by $\Delta r/r = -(v/E)F/A$, where $v$ is Poisson's ratio (Section 1.3.3.4.3). It can be seen that a scalar is not sufficient to describe the elastic deformation of a material, even if it is isotropic. The number of independent components depends on the symmetry of the medium and it will be seen that they are the components of a tensor of rank 4. It was precisely to describe the properties of elasticity by a mathematical expression that the notion of a tensor was introduced in physics by W. Voigt in the 19th century (Voigt, 1910) and by L. Brillouin in the first half of the 20th century (Brillouin, 1949).
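The two relations for the stressed rod can be evaluated directly. A sketch with illustrative steel-like values (E = 200 GPa, ν = 0.3; the numbers are assumptions, not from the text):

```python
import math

def stressed_rod(F, A, l, r, E, nu):
    """Hooke's law for a pulled rod: returns (delta_l, delta_r), using
    delta_l / l = (1/E) F/A  and  delta_r / r = -(nu/E) F/A."""
    strain = F / (A * E)              # relative elongation (1/E) F/A
    return l * strain, -r * nu * strain

# Rod of length 1 m and radius 5 mm, pulled with 10 kN.
A = math.pi * 0.005 ** 2              # cross section in m^2
dl, dr = stressed_rod(1e4, A, 1.0, 0.005, 200e9, 0.3)
```

The ratio of the transverse to the longitudinal relative deformation recovers Poisson's ratio ν.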
+
+Table 1.1.1.1. Extensive quantities and associated intensive parameters
+
+The last four lines of the table refer to properties that are time dependent.
+
| Extensive quantities | Intensive parameters |
|---|---|
| Volume | Pressure |
| Strain | Stress |
| Displacement | Force |
| Entropy | Temperature |
| Quantity of electricity | Electric potential |
| Electric polarization | Electric field |
| Electric displacement | Electric field |
| Magnetization | Magnetic field |
| Magnetic induction | Magnetic field |
| Reaction rate | Chemical potential |
| Heat flow | Temperature gradient |
| Diffusion of matter | Concentration gradient |
| Electric current | Potential gradient |
+---PAGE_BREAK---
+
+1.1. INTRODUCTION TO THE PROPERTIES OF TENSORS
+
+Furthermore, the left-hand term of (1.1.4.11) remains unchanged if we interchange the indices i and j. The terms on the right-hand side therefore also remain unchanged, whatever the value of $T_{ll}$ or $T_{kl}$. It follows that
+
+$$
+\begin{align*}
+s_{ijll} &= s_{jill} \\
+s_{ijkl} &= s_{ijlk} = s_{jikl} = s_{jilk}.
+\end{align*}
+$$
+
+Similar relations hold for $c_{ijkl}$, $Q_{ijkl}$, $P_{ijkl}$ and $\pi_{ijkl}$: the submatrices
+**2** and **3**, **4** and **7**, **5**, **6**, **8** and **9**, respectively, are equal.
+
+Equation (1.1.4.11) can be rewritten, introducing the coefficients of the Voigt strain matrix:
+
+$$
+S_{\alpha} = S_{ii} = \sum_{l} s_{iill} T_{ll} + \sum_{k \neq l} (s_{iikl} + s_{iilk}) T_{kl} \quad (\alpha = 1, 2, 3)
+$$
+
+$$
+S_{\alpha} = S_{ij} + S_{ji} = \sum_{l} (s_{ijll} + s_{jill}) T_{ll} + \sum_{k \neq l} (s_{ijkl} + s_{ijlk} + s_{jikl} + s_{jilk}) T_{kl} \quad (\alpha = 4, 5, 6).
+$$
+
+We shall now introduce a two-index notation for the elastic
+compliances, according to the following conventions:
+
+$$
+\left.
+\begin{array}{l}
+i = j; \quad k = l; \quad s_{\alpha\beta} = s_{iikk} \\
+i = j; \quad k \neq l; \quad s_{\alpha\beta} = s_{iikl} + s_{iilk} \\
+i \neq j; \quad k = l; \quad s_{\alpha\beta} = s_{ijkk} + s_{jikk} \\
+i \neq j; \quad k \neq l; \quad s_{\alpha\beta} = s_{ijkl} + s_{ijlk} + s_{jikl} + s_{jilk}.
+\end{array}
+\right\}
+\quad (1.1.4.12)
+$$
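The convention (1.1.4.12) can be applied mechanically. A sketch, assuming the standard ordering of Voigt index pairs (1↔11, 2↔22, 3↔33, 4↔23, 5↔13, 6↔12) and a compliance tensor with the symmetries derived above:

```python
import numpy as np

# Voigt pairs: alpha = 1..6 <-> (i, j), written zero-based here.
VOIGT = [(0, 0), (1, 1), (2, 2), (1, 2), (0, 2), (0, 1)]

def compliance_to_voigt(s):
    """Contract a rank-4 compliance tensor s[i,j,k,l] (with the symmetries
    s_ijkl = s_ijlk = s_jikl) to the 6x6 matrix of (1.1.4.12): each
    'mixed' Voigt index (i != j or k != l) contributes a factor 2."""
    S = np.empty((6, 6))
    for a, (i, j) in enumerate(VOIGT):
        for b, (k, l) in enumerate(VOIGT):
            fac = (1 if i == j else 2) * (1 if k == l else 2)
            S[a, b] = fac * s[i, j, k, l]
    return S
```

For an isotropic compliance tensor this reproduces the familiar results s₁₁ = 1/E, s₁₂ = −ν/E and s₄₄ = 2(1 + ν)/E.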
+
+We have thus associated with the fourth-rank tensor a square
+$6 \times 6$ matrix with 36 coefficients:
+
+
+
+
| α \ β | 1 | 2 | 3 | 4 | 5 | 6 |
|---|---|---|---|---|---|---|
| 1 | 11 | 12 | 13 | 14 | 15 | 16 |
| 2 | 21 | 22 | 23 | 24 | 25 | 26 |
| 3 | 31 | 32 | 33 | 34 | 35 | 36 |
| 4 | 41 | 42 | 43 | 44 | 45 | 46 |
| 5 | 51 | 52 | 53 | 54 | 55 | 56 |
| 6 | 61 | 62 | 63 | 64 | 65 | 66 |
+
+
+
+
+One can translate relation (1.1.4.12) using the 9 × 9 matrix
+representing $s_{ijkl}$ by adding term by term the coefficients of
+submatrices **2** and **3**, **4** and **7** and **5**, **6**, **8** and **9**, respectively:
+
+$$
+\begin{pmatrix} \mathbf{1} \\ \mathbf{2}+\mathbf{3} \end{pmatrix} = \begin{pmatrix} \mathbf{1} & \mathbf{2}+\mathbf{3} \\ \mathbf{4}+\mathbf{7} & \mathbf{5}+\mathbf{6}+\mathbf{8}+\mathbf{9} \end{pmatrix} \times \begin{pmatrix} \mathbf{1} \\ \mathbf{2}+\mathbf{3} \end{pmatrix}
+$$
+
+Using the two-index notation, equation (1.1.4.9) becomes
+
+$$
+S_{\alpha} = s_{\alpha\beta} T_{\beta}. \tag{1.1.4.13}
+$$
+
+A similar development can be applied to the other fourth-rank
+tensors $\pi_{ijkl}$, which will be replaced by $6 \times 6$ matrices with 36
+coefficients, according to the following rules.
+
+(i) Elastic stiffnesses, $c_{ijkl}$ and elasto-optic coefficients, $p_{ijkl}$:
+
+$$
+\begin{pmatrix} \mathbf{1} \\ \mathbf{2} \end{pmatrix} = \begin{pmatrix} \mathbf{1} & \mathbf{2} \\ \mathbf{4} & \mathbf{5} \end{pmatrix} \times \begin{pmatrix} \mathbf{1} \\ \mathbf{2} \end{pmatrix}
+$$
+
+where
+
+$$
+c_{\alpha\beta} = c_{ijkl}
+$$
+
+$$
+p_{\alpha\beta} = p_{ijkl}.
+$$
+
+(ii) Piezo-optic coefficients, $\pi_{ijkl}$:
+
+$$
+\begin{pmatrix} \mathbf{1} \\ \mathbf{2} \end{pmatrix} = \begin{pmatrix} \mathbf{1} & \mathbf{2}+\mathbf{3} \\ \mathbf{4} & \mathbf{5}+\mathbf{6} \end{pmatrix} \times \begin{pmatrix} \mathbf{1} \\ \mathbf{2} \end{pmatrix}
+$$
+
+where
+
+$$
+\left.
+\begin{array}{l}
+i = j; \quad k = l; \quad \pi_{\alpha\beta} = \pi_{iikk} \\
+i = j; \quad k \neq l; \quad \pi_{\alpha\beta} = \pi_{iikl} + \pi_{iilk} \\
+i \neq j; \quad k = l; \quad \pi_{\alpha\beta} = \pi_{ijkk} = \pi_{jikk} \\
+i \neq j; \quad k \neq l; \quad \pi_{\alpha\beta} = \pi_{ijkl} + \pi_{ijlk} = \pi_{jikl} + \pi_{jilk}.
+\end{array}
+\right\}
+$$
+
+(iii) Electrostriction coefficients, $Q_{ijkl}$: same relation as for the elastic compliances.
+
+1.1.4.10.6. Independent components of the matrix associated with a fourth-rank tensor according to the following point groups
+
+1.1.4.10.6.1. Triclinic system, groups $\bar{1}, 1$
+
+$$
+\left(
+\begin{array}{cccccc}
+ * & * & * & * & * & * \\
+ * & * & * & * & * & * \\
+ * & * & * & * & * & * \\
+ * & * & * & * & * & * \\
+ * & * & * & * & * & * \\
+ * & * & * & * & * & *
+\end{array}
+\right)
+$$
+
+36 independent components
+
+$$
+c_{\alpha\beta}, s_{\alpha\beta}
+$$
+
+$$
+\left(
+\begin{array}{cccccc}
+ * & * & * & * & * & * \\
+ * & * & * & * & * & * \\
+ * & * & * & * & * & * \\
+ * & * & * & * & * & * \\
+ * & * & * & * & * & * \\
+ * & * & * & * & * & *
+\end{array}
+\right)
+$$
+
+20 independent components
+
+$$
+c_{\alpha\beta}, s_{\alpha\beta}
+$$
+
+$$
+\left(
+\begin{array}{cccccc}
+ * & * & * & * & * & * \\
+ * & * & * & * & * & * \\
+ * & * & * & * & * & * \\
+ * & * & * & * & * & * \\
+ * & * & * & * & * & * \\
+ * & * & * & * & * & *
+\end{array}
+\right)
+$$
+
+9 independent components
+
+$$
+c_{\alpha\beta}, s_{\alpha\beta}
+$$
+
+
+1.1.4.10.6.3. Orthorhombic system
+
+Groups mmm, 2mm, 222:
+---PAGE_BREAK---
+
+# 1.2. Representations of crystallographic groups
+
+BY T. JANSSEN
+
+## 1.2.1. Introduction
+
+Symmetry arguments play an important role in science. Often one can use them in a heuristic way, but the correct formulation is in terms of group theory. This remark is in fact superfluous for crystallographers, who are used to point groups and space groups as they occur in the description of structures. However, besides these structural problems there are many others where group theory may play a role. A central role in this context is played by representation theory, which treats the action of a group on physical quantities, and usually this is done in terms of linear transformations, although nonlinear representations may also occur.
+
+To start with an example, consider a spin system, an arrangement of spins on sites with a certain symmetry, for example space-group symmetry. The elements of the space group map the sites onto other sites, but at the same time the spins are rotated or transformed otherwise in a well defined fashion. The spins can be seen as elements of a vector space (spin space) and the transformation in this space is an image of the space-group element. In a similar way, all symmetric tensors of rank 2 form a vector space, because one can add them and multiply them by a real factor. A linear change of coordinates changes the vectors, and the transformations in the space of tensors are the image of the coordinate transformations. Probably the most important use of such representations is in quantum mechanics, where transformations in coordinate space are mapped onto linear transformations in the quantum mechanical space of state vectors.
+
+To see the relation between groups of transformations and the use of their representations in physics, consider a tensor which transforms under a certain point group. Let us take a symmetric rank 2 tensor $T_{ij}$ in three dimensions. We take as example the point group 222. From Section 1.1.3.2 one knows how such a tensor transforms: it transforms into a tensor $T'_{ij}$ according to
+
+$$T'_{ij} = \sum_{k=1}^{3} \sum_{m=1}^{3} R_{ik} R_{jm} T_{km} \quad (1.2.1.1)$$
+
+for all orthogonal transformations $R$ in the group 222. This action of the point group 222 is obviously a linear one:
+
+$$ (c_1 T_{ij}^{(1)} + c_2 T_{ij}^{(2)})' = c_1 T_{ij}^{(1)'} + c_2 T_{ij}^{(2)'} $$
+
+The transformations on the tensors really form an image of the group, because if one writes $D(R)T$ for $T'$, one has for two elements $R^{(1)}$ and $R^{(2)}$ the relation
+
+$$ (D(R^{(1)}R^{(2)}))T = D(R^{(1)})(D(R^{(2)})T) $$
+
+or
+
+$$ D(R^{(1)}R^{(2)}) = D(R^{(1)})D(R^{(2)}). \quad (1.2.1.2) $$
+
+This property is said to define a (linear) representation. Because of the representation property, it is sufficient to know how the tensor transforms under the generators of a group. In our example, one could be interested in symmetric tensors that are invariant under the group 222. Then it is sufficient to consider the rotations over 180° around the x and y axes. If the point group is a symmetry group of the system, a tensor describing the relation between two physical quantities should remain the same. For invariant tensors one has
+
+$$ \begin{pmatrix} a_{11} & a_{12} & a_{13} \\ a_{12} & a_{22} & a_{23} \\ a_{13} & a_{23} & a_{33} \end{pmatrix} = \begin{pmatrix} 1 & 0 & 0 \\ 0 & -1 & 0 \\ 0 & 0 & -1 \end{pmatrix} \begin{pmatrix} a_{11} & a_{12} & a_{13} \\ a_{12} & a_{22} & a_{23} \\ a_{13} & a_{23} & a_{33} \end{pmatrix} \begin{pmatrix} 1 & 0 & 0 \\ 0 & -1 & 0 \\ 0 & 0 & -1 \end{pmatrix}, $$
+
+$$ \begin{pmatrix} a_{11} & a_{12} & a_{13} \\ a_{12} & a_{22} & a_{23} \\ a_{13} & a_{23} & a_{33} \end{pmatrix} = \begin{pmatrix} -1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & -1 \end{pmatrix} \begin{pmatrix} a_{11} & a_{12} & a_{13} \\ a_{12} & a_{22} & a_{23} \\ a_{13} & a_{23} & a_{33} \end{pmatrix} \begin{pmatrix} -1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & -1 \end{pmatrix}. $$
+
+and the solution of these equations is
+
+$$ \begin{pmatrix} a_{11} & a_{12} & a_{13} \\ a_{12} & a_{22} & a_{23} \\ a_{13} & a_{23} & a_{33} \end{pmatrix} = \begin{pmatrix} a_{11} & 0 & 0 \\ 0 & a_{22} & 0 \\ 0 & 0 & a_{33} \end{pmatrix}. $$
+
+The $3 \times 3$ matrices, i.e. the rank 2 tensors, form a nine-dimensional vector space. The rotation over 180° around the x axis can then be written as
+
+$$ \begin{pmatrix} a'_{11} \\ a'_{12} \\ a'_{13} \\ a'_{21} \\ a'_{22} \\ a'_{23} \\ a'_{31} \\ a'_{32} \\ a'_{33} \end{pmatrix} = \begin{pmatrix} 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & -1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & -1 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & -1 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & -1 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} a_{11} \\ a_{12} \\ a_{13} \\ a_{21} \\ a_{22} \\ a_{23} \\ a_{31} \\ a_{32} \\ a_{33} \end{pmatrix}. $$
+
+This nine-dimensional matrix, together with the one corresponding to a rotation around the y axis, generates a representation of the group 222 in the nine-dimensional space of three-dimensional rank 2 tensors. The invariant tensors form the subspace $(a_{11}, 0, 0, 0, a_{22}, 0, 0, 0, a_{33})$. In this simple case, group theory is barely needed. However, in more complex situations, the calculations may become quite cumbersome without group theory. Moreover, group theory may give a wealth of other information, such as selection rules and orthogonality relations, that could be obtained only with much effort without group theory, in particular representation theory. Tables of tensor properties, and irreducible representations of point and space groups, have been in use for a long time. For point groups see, for example, Butler (1981) and Altmann & Herzig (1994); for space groups, see Miller & Love (1967), Kovalev (1987) and Stokes & Hatch (1988).
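The averaging argument used here can be checked numerically. The following sketch (assuming NumPy; the variable names are ours, not the chapter's) builds the nine-dimensional representation of 222 as Kronecker products and projects onto the invariant subspace:

```python
import numpy as np

# Generators of point group 222: twofold rotations about the x and y axes.
Rx = np.diag([1.0, -1.0, -1.0])
Ry = np.diag([-1.0, 1.0, -1.0])

# A rank 2 tensor transforms as T' = R T R^T, so on the nine components
# (a11, a12, ..., a33) the action is the Kronecker product R ⊗ R.
Dx, Dy = np.kron(Rx, Rx), np.kron(Ry, Ry)

# Representation property: D(R1 R2) = D(R1) D(R2).
assert np.allclose(np.kron(Rx @ Ry, Rx @ Ry), Dx @ Dy)

# Averaging over the group projects onto the invariant subspace.
P = (np.eye(9) + Dx + Dy + Dx @ Dy) / 4

print(int(round(np.trace(P))))  # dimension of the invariant subspace: 3
```

The trace of the projector equals the number of invariant components, reproducing the three free parameters $a_{11}, a_{22}, a_{33}$ found above.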
+
+In the following, we shall discuss the representation theory of crystallographic groups. We shall adopt a slightly abstract language, which has the advantage of conciseness and generality, but we shall consider examples of the most important notions. Another point that could give rise to some problems is the fact that we shall consider in part the theory for crystallographic groups in arbitrary dimension. Of course, physics occurs in three-dimensional space.
+---PAGE_BREAK---
+
+# 1. TENSORIAL ASPECTS OF PHYSICAL PROPERTIES
+
+Table 1.2.6.5. Irreducible representations and character tables for the 32 crystallographic point groups in three dimensions
+
+(a) $C_1$
+
+| $C_1$ | ε |
+|---|---|
+| n | 1 |
+| Order | 1 |
+| Γ1 | 1 |
+
+$$ \begin{array}{lll} 1 & \Gamma_1 : A = \chi_1 & x, y, z \qquad x^2, y^2, z^2, yz, xz, xy \\ C_1 & & \end{array} $$
+
+(b) $C_2$
+
+| $C_2$ | ε | α |
+|---|---|---|
+| n | 1 | 1 |
+| Order | 1 | 2 |
+| Γ1 | 1 | 1 |
+| Γ2 | 1 | -1 |
+
+$$ \begin{array}{llll} 2 & \alpha = C_{2z} & \Gamma_1 : A = \chi_1 & z \qquad x^2, y^2, z^2, xy \\ C_2 & & \Gamma_2 : B = \chi_3 & x, y \qquad yz, xz \\ m & \alpha = \sigma_z & \Gamma_1 : A' = \chi_1 & x, y \qquad x^2, y^2, z^2, xy \\ C_s & & \Gamma_2 : A'' = \chi_3 & z \qquad yz, xz \\ \bar{1} & \alpha = I & \Gamma_1 : A_g = \chi_1^+ & \qquad x^2, y^2, z^2, yz, xz, xy \\ C_i & & \Gamma_2 : A_u = \chi_1^- & x, y, z \end{array} $$
+
+(c) $C_3$ [$\omega = \exp(2\pi i/3)$].
+
+| $C_3$ | ε | α | α² |
+|---|---|---|---|
+| n | 1 | 1 | 1 |
+| Order | 1 | 3 | 3 |
+| Γ1 | 1 | 1 | 1 |
+| Γ2 | 1 | ω | ω² |
+| Γ3 | 1 | ω² | ω |
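The rows of this character table satisfy the first orthogonality relation for irreducible characters, $(1/|G|)\sum_g \chi_i(g)\chi_j(g)^* = \delta_{ij}$. A minimal numerical check (the dictionary keys are our own labels, not the chapter's notation):

```python
import cmath

w = cmath.exp(2j * cmath.pi / 3)  # ω = exp(2πi/3)

# Rows of the C3 character table over the elements (ε, α, α²).
chars = {
    "G1": [1, 1, 1],
    "G2": [1, w, w**2],
    "G3": [1, w**2, w],
}

# Orthogonality: (1/|G|) Σ_g χ_i(g) χ_j(g)* = δ_ij.
def inner(ci, cj):
    return sum(a * b.conjugate() for a, b in zip(ci, cj)) / 3

for i in chars:
    for j in chars:
        expected = 1 if i == j else 0
        assert abs(inner(chars[i], chars[j]) - expected) < 1e-12

print("orthogonality holds")
```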
+
+Matrices of the real two-dimensional representation:
+
+| | ε | α | α² |
+|---|---|---|---|
+| Γ2 ⊕ Γ3 | $\begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}$ | $\begin{pmatrix} 0 & -1 \\ 1 & -1 \end{pmatrix}$ | $\begin{pmatrix} -1 & 1 \\ -1 & 0 \end{pmatrix}$ |
+
+$$ \begin{array}{llll} 3 & \alpha = C_{3z} & \Gamma_1 : A = \chi_1 & z \qquad x^2 + y^2, z^2 \\ C_3 & & \Gamma_2 \oplus \Gamma_3 : E = \chi_{1c} + \chi_{1c}^* & x, y \qquad x^2 - y^2, xy, xz, yz \end{array} $$
+
+(d) $C_4$
+
+| $C_4$ | ε | α | α² | α³ |
+|---|---|---|---|---|
+| n | 1 | 1 | 1 | 1 |
+| Order | 1 | 4 | 2 | 4 |
+| Γ1 | 1 | 1 | 1 | 1 |
+| Γ2 | 1 | i | -1 | -i |
+| Γ3 | 1 | -1 | 1 | -1 |
+| Γ4 | 1 | -i | -1 | i |
+
+Matrices of the real two-dimensional representation:
+
+| | ε | α | α² | α³ |
+|---|---|---|---|---|
+| Γ2 ⊕ Γ4 | $\begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}$ | $\begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix}$ | $\begin{pmatrix} -1 & 0 \\ 0 & -1 \end{pmatrix}$ | $\begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix}$ |
+
+$$ \begin{array}{llll} 4 & \alpha = C_{4z} & \Gamma_1 : A = \chi_1 & z \qquad x^2 + y^2, z^2 \\ C_4 & & \Gamma_3 : B = \chi_3 & \qquad x^2 - y^2, xy \\ & & \Gamma_2 \oplus \Gamma_4 : E = \chi_{1c} + \chi_{1c}^* & x, y \qquad yz, xz \\ \bar{4} & \alpha = S_{4z} & \Gamma_1 : A = \chi_1 & \qquad x^2 + y^2, z^2 \\ S_4 & & \Gamma_3 : B = \chi_3 & z \qquad x^2 - y^2, xy \\ & & \Gamma_2 \oplus \Gamma_4 : E = \chi_{1c} + \chi_{1c}^* & x, y \qquad yz, xz \end{array} $$
+
+
+
+(f) $D_2$
+
+| $D_2$ | ε | α | β | αβ |
+|---|---|---|---|---|
+| n | 1 | 1 | 1 | 1 |
+| Order | 1 | 2 | 2 | 2 |
+| Γ1 | 1 | 1 | 1 | 1 |
+| Γ2 | 1 | 1 | -1 | -1 |
+| Γ3 | 1 | -1 | 1 | -1 |
+| Γ4 | 1 | -1 | -1 | 1 |
+
+$$ \begin{array}{llll} 222 & \alpha = C_{2x} & \Gamma_1 : A = \chi_1 & \qquad x^2, y^2, z^2 \\ D_2 & \beta = C_{2y} & \Gamma_2 : B_3 = \chi_3 & x \qquad yz \\ & & \Gamma_3 : B_2 = \chi_4 & y \qquad xz \\ & & \Gamma_4 : B_1 = \chi_2 & z \qquad xy \\ mm2 & \alpha = C_{2z} & \Gamma_1 : A_1 = \chi_1 & z \qquad x^2, y^2, z^2 \\ C_{2v} & \beta = \sigma_x & \Gamma_2 : A_2 = \chi_2 & \qquad xy \\ & & \Gamma_3 : B_2 = \chi_4 & y \qquad yz \\ & & \Gamma_4 : B_1 = \chi_3 & x \qquad xz \\ 2/m & \alpha = C_{2z} & \Gamma_1 : A_g = \chi_1^+ & \qquad x^2, y^2, z^2, xy \\ C_{2h} & \beta = I & \Gamma_2 : A_u = \chi_1^- & z \\ & & \Gamma_3 : B_g = \chi_3^+ & \qquad xz, yz \\ & & \Gamma_4 : B_u = \chi_3^- & x, y \end{array} $$
+
+---PAGE_BREAK---
+
+Fig. 1.3.3.4. Representation surface of the inverse of Young's modulus. (a) NaCl, cubic, anisotropy factor > 1; (b) W, cubic, anisotropy factor = 1; (c) Al, cubic, anisotropy factor < 1; (d) Zn, hexagonal; (e) Sn, tetragonal; (f) calcite, trigonal.
+
+If one introduces the Lamé constants,
+
+$$
+\begin{aligned}
+\mu &= (1/2)(c_{11} - c_{12}) = c_{44} \\
+\lambda &= c_{12},
+\end{aligned}
+ $$
+
+the equations may be written in the form often used in mechanics:
+
+$$
+\begin{aligned}
+T_1 &= 2\mu S_1 + \lambda(S_1 + S_2 + S_3) \\
+T_2 &= 2\mu S_2 + \lambda(S_1 + S_2 + S_3) \\
+T_3 &= 2\mu S_3 + \lambda(S_1 + S_2 + S_3).
+\end{aligned}
+ \quad (1.3.3.16) $$
+
+Two coefficients suffice to define the elastic properties of an isotropic material: any of the pairs $s_{11}$ and $s_{12}$; $c_{11}$ and $c_{12}$; $\mu$ and $\lambda$; $\mu$ and $\nu$; etc. Table 1.3.3.3 gives the relations between the more common elastic coefficients.
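As an illustration of such conversions (not a reproduction of Table 1.3.3.3), the following sketch derives the remaining common coefficients from $c_{11}$ and $c_{12}$ using the standard isotropic formulas; the numerical stiffness values are made up for the example:

```python
# Illustrative isotropic stiffnesses (GPa); not measured values.
c11, c12 = 108.0, 62.0

mu = 0.5 * (c11 - c12)                    # Lamé μ (shear modulus); equals c44 here
lam = c12                                 # Lamé λ
E = mu * (3 * lam + 2 * mu) / (lam + mu)  # Young's modulus
nu = lam / (2 * (lam + mu))               # Poisson's ratio
K = lam + 2 * mu / 3                      # bulk modulus

print(f"mu={mu:.1f}  lambda={lam:.1f}  E={E:.1f}  nu={nu:.3f}  K={K:.1f}")
```

Any of the pairs printed here determines all the others, which is the content of Table 1.3.3.3.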
+
+### 1.3.3.6. Equilibrium conditions of elasticity for isotropic media
+
+We saw in Section 1.3.2.3 that the condition of equilibrium is
+
+$$ \partial T_{ij} / \partial x_i + \rho F_j = 0. $$
+
+If we use the relations of elasticity, equation (1.3.3.2), this condition can be rewritten as a condition on the components of the strain tensor:
+
+$$ c_{ijkl} \frac{\partial S_{kl}}{\partial x_j} + \rho F_i = 0. $$
+
+Recalling that
+
+$$ S_{kl} = \frac{1}{2} \left[ \frac{\partial u_k}{\partial x_l} + \frac{\partial u_l}{\partial x_k} \right], $$
+
+the condition becomes a condition on the displacement vector, **u**(**r**):
+
+$$ c_{ijkl} \frac{\partial^2 u_k}{\partial x_j \partial x_l} + \rho F_i = 0. $$
+
+In an isotropic medium, this equation, projected on the axis $Ox_1$, can be written with the aid of relations (1.3.3.5) and (1.3.3.9):
+
+$$
+\begin{aligned}
+& c_{11} \frac{\partial^2 u_1}{(\partial x_1)^2} + c_{12} \left[ \frac{\partial^2 u_2}{\partial x_1 \partial x_2} + \frac{\partial^2 u_3}{\partial x_1 \partial x_3} \right] \\
+& \qquad + \frac{1}{2}(c_{11} - c_{12}) \left[ \frac{\partial^2 u_1}{(\partial x_2)^2} + \frac{\partial^2 u_2}{\partial x_1 \partial x_2} + \frac{\partial^2 u_1}{(\partial x_3)^2} + \frac{\partial^2 u_3}{\partial x_1 \partial x_3} \right] + \rho F_1 \\
+&= 0.
+\end{aligned}
+ $$
+
+This equation can finally be rearranged in one of the three following forms with the aid of Table 1.3.3.3.
+
+$$
+\begin{aligned}
+& \frac{1}{2}(c_{11} - c_{12})\Delta\mathbf{u} + \frac{1}{2}(c_{11} + c_{12})\nabla(\nabla\cdot\mathbf{u}) + \rho\mathbf{F} = 0 \\
+& \mu\Delta\mathbf{u} + (\mu + \lambda)\nabla(\nabla\cdot\mathbf{u}) + \rho\mathbf{F} = 0 \\
+& \mu\left[\Delta\mathbf{u} + \frac{1}{1-2\nu}\nabla(\nabla\cdot\mathbf{u})\right] + \rho\mathbf{F} = 0.
+\end{aligned}
+ \quad (1.3.3.17) $$
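The equivalence of the three forms of (1.3.3.17) can be confirmed numerically: the coefficients of $\Delta\mathbf{u}$ and of the gradient-of-divergence term must coincide. A short sketch with illustrative stiffness values:

```python
# Illustrative isotropic stiffnesses (GPa); not measured values.
c11, c12 = 108.0, 62.0
mu, lam = 0.5 * (c11 - c12), c12
nu = lam / (2 * (lam + mu))

# Coefficients of Δu and ∇(∇·u) in the three forms of (1.3.3.17):
forms = [
    (0.5 * (c11 - c12), 0.5 * (c11 + c12)),
    (mu, mu + lam),
    (mu, mu / (1 - 2 * nu)),
]
ref = forms[0]
assert all(abs(a - ref[0]) < 1e-9 and abs(b - ref[1]) < 1e-9 for a, b in forms)
print("all three forms coincide")
```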
+---PAGE_BREAK---
+
+Fig. 1.3.5.4. Temperature dependence of the elastic constant $c_{11}$ in KNiF$_3$, which undergoes a para-antiferromagnetic phase transition. Reprinted with permission from *Appl. Phys. Lett.* (Nouet et al., 1972). Copyright (1972) American Institute of Physics.
+
+The softening of $c_{44}$ when the temperature decreases starts more than 100 K before the critical temperature, $\Theta_c$. In contrast, Fig. 1.3.5.4 shows the temperature dependence of $c_{11}$ in KNiF$_3$, a crystal that undergoes a para-antiferromagnetic phase transition at 246 K; the coupling between the elastic and the magnetic energy is weak, consequently $c_{11}$ decreases abruptly only a few degrees before the critical temperature. We can generalize this observation and state that the softening of an elastic constant occurs over a large domain of temperature when this constant is the order parameter or is strongly coupled to the order parameter of the transformation; for instance, in the cooperative Jahn-Teller phase transition in DyVO$_4$, $(c_{11} - c_{12})/2$ is the soft acoustic phonon mode leading to the phase transition and this parameter anticipates the phase transition 300 K before it occurs (Fig. 1.3.5.5).
+
+### 1.3.5.3. Pressure dependence of the elastic constants
+
+As mentioned above, anharmonic potentials are needed to explain the stress dependence of the elastic constants of a crystal. Thus, if the strain-energy density is developed in a polynomial in terms of the strain, only the first and the second elastic constants are used in linear elasticity (harmonic potentials), whereas higher-order elastic constants are also needed for nonlinear elasticity (anharmonic potentials).
+
+Concerning the pressure dependence of the elastic constants (nonlinear elastic effect), considerable attention has been paid to their experimental determination since they are a unique source of significant information in many fields:
+
+(i) In geophysics, a large part of the knowledge we have on the interior of the earth comes from the measurement of the transit time of elastic bursts propagating in the mantle and in the core (in the upper mantle, the average pressure is estimated to be a few hundred GPa, a value which is comparable to that of the elastic stiffnesses of many materials).
+
+(ii) In solid-state physics, the pressure dependence of the elastic constants gives significant indications concerning the stability of crystals. For example, Fig. 1.3.5.2 shows the pressure dependence of the elastic constants of KZnF$_3$, a cubic crystal belonging to the perovskite family. As mentioned previously, this crystal is known to be stable over a wide range of temperature and the elastic stiffnesses $c_{ij}$ depend linearly on pressure. It may be noted that, consequently, the third-order elastic constants
+
+Fig. 1.3.5.5. Temperature dependence of $(c_{11} - c_{12})/2$ in DyVO$_4$, which undergoes a cooperative Jahn-Teller phase transition (after Melcher & Scott, 1972).
+
+(TOECs) are constant. On the contrary, we observe in Fig. 1.3.5.6 that the pressure dependence of the elastic constants of TlCdF$_3$, a cubic crystal belonging to the same family but which is known to become unstable when the temperature is decreased to 191 K (Fischer, 1982), is nonlinear even at low pressures. In this case, the development of the strain-energy density in terms of strains cannot be stopped after the terms containing the third-order elastic constants; the contributions of the fourth- and fifth-order elastic constants are not negligible.
+
+(iii) For practical use in the case of technical materials such as concrete or worked metals, the pressure dependence of the elastic moduli is also required for examining the effect of applied stresses or of an applied hydrostatic pressure, and for studying residual stresses resulting from loading (heating) and unloading (cooling) the materials.
+
+## 1.3.6. Nonlinear elasticity
+
+### 1.3.6.1. Introduction
+
+In a solid body, the relation between the stress tensor $T$ and the strain tensor $S$ is usually described by Hooke's law, which postulates linear relations between the components of $T$ and $S$ (Section 1.3.3.1). Such relations can be summarized by (see equation 1.3.3.2)
+
+$$T_{ij} = c_{ijkl}S_{kl},$$
+
+where the $c_{ijkl}$'s are the elastic stiffnesses.
+
+Table 1.3.5.2. Order of magnitude of the temperature dependence of the elastic stiffnesses for different types of crystals
+
+| Type of crystal | $(\partial \ln c_{11}/\partial\Theta)_p$ (K$^{-1}$) | $(\partial \ln c_{44}/\partial\Theta)_p$ (K$^{-1}$) |
+|---|---|---|
+| Ionic | $-10^{-3}$ | $-3 \times 10^{-4}$ |
+| Covalent | $-10^{-4}$ | $-8 \times 10^{-5}$ |
+| Metallic | $-2 \times 10^{-4}$ | $-3 \times 10^{-4}$ |
+
+Fig. 1.3.5.6. Pressure dependence of the elastic constants $(c_{11} - c_{12})/2$ in TlCdF$_3$. Reproduced with permission from Ultrasonics Symposium Proc. IEEE (Fischer et al., 1980). Copyright (1980) IEEE.
+---PAGE_BREAK---
+
+# 1. TENSORIAL ASPECTS OF PHYSICAL PROPERTIES
+
+Table 1.4.1.1. Shape of the quadric and symmetry restrictions
+
+| System | Quadric: shape | Quadric: direction of principal axes | No. of independent components |
+|---|---|---|---|
+| Triclinic | General ellipsoid or hyperboloid | No restrictions | 6 |
+| Monoclinic | General ellipsoid or hyperboloid | One axis parallel to the twofold axis ($b$) | 4 |
+| Orthorhombic | General ellipsoid or hyperboloid | Parallel to the crystallographic axes | 3 |
+| Trigonal, tetragonal, hexagonal | Revolution ellipsoid or hyperboloid | $c$ axis is the revolution axis | 2 |
+| Cubic, isotropic media | Sphere | Arbitrary, not defined | 1 |
+
+## 1.4.2. Grüneisen relation
+
+Thermal expansion of a solid is a consequence of the anharmonicity of interatomic forces (see also Section 2.1.2.8). If the potentials were harmonic, the atoms would oscillate (even with large amplitudes) symmetrically about their equilibrium positions and their mean central position would remain unchanged. In order to describe thermal expansion, the anharmonicity is most conveniently accounted for by means of the so-called ‘quasiharmonic approximation’, assuming the lattice vibration frequencies $\omega$ to be independent of temperature but dependent on volume [$(\partial\omega/\partial V) \neq 0$]. Anharmonicity is taken into account by letting the crystal expand, but it is assumed that the atoms vibrate about their new equilibrium positions harmonically, i.e. lattice dynamics are still treated in the harmonic approximation. Allowing $(\partial\omega/\partial V) \neq 0$ generalizes the postulate, made for the harmonic oscillator, that the frequency does not depend on the amplitude of vibration.
+
+This approach leads, as demonstrated below, to the Grüneisen relation, which combines thermal expansion with other material constants and, additionally, gives an approximate description of the temperature dependence of thermal expansion (cf. Krishnan et al., 1979; Barron, 1998).
+
+For isotropic media, the volume expansion $\beta$ [= $3\alpha$ = $\alpha_{11} + \alpha_{22} + \alpha_{33}$], cf. (1.4.1.2), can be expressed by the thermodynamic relation
+
+$$ \beta = \frac{1}{V} \left( \frac{\partial V}{\partial T} \right)_p = - \frac{1}{V} \left( \frac{\partial V}{\partial p} \right)_T \left( \frac{\partial p}{\partial T} \right)_V = \kappa \left( \frac{\partial p}{\partial T} \right)_V, \quad (1.4.2.1) $$
+
+$\kappa$ being the isothermal compressibility. To obtain the quantity $(\partial p / \partial T)_V$, the pressure $p$ is deduced from the free energy $F$, whose differential is $dF = -S \, dT - p \, dV$, i.e. from
+
+$$ p = - (\partial F / \partial V)_T. \quad (1.4.2.2) $$
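The thermodynamic identity (1.4.2.1) can be checked numerically for any system with a known equation of state. The sketch below uses an ideal gas $p = NkT/V$ purely for illustration (the identity itself is general); for this choice $\kappa = 1/p$ and $(\partial p/\partial T)_V = p/T$, so $\beta$ must come out as $1/T$:

```python
# Numerical check of beta = kappa * (dp/dT)_V, eq. (1.4.2.1), for an ideal
# gas p = N k T / V (an illustrative choice; the identity holds generally).
k_B = 1.380649e-23    # Boltzmann constant (J/K)
N = 6.022e23          # particle number (one mole, illustrative)
T, V = 300.0, 0.0224  # temperature (K) and volume (m^3)

def p(T, V):
    return N * k_B * T / V

dT, dV = T * 1e-6, V * 1e-6
dpdV = (p(T, V + dV) - p(T, V - dV)) / (2 * dV)   # (dp/dV)_T
kappa = -1.0 / (V * dpdV)                          # isothermal compressibility
dpdT = (p(T + dT, V) - p(T - dT, V)) / (2 * dT)    # (dp/dT)_V
beta = kappa * dpdT

# For an ideal gas, (1/V)(dV/dT)_p = 1/T exactly.
assert abs(beta * T - 1.0) < 1e-9
print(f"beta = {beta:.6e} K^-1, 1/T = {1.0/T:.6e} K^-1")
```

The central differences stand in for the partial derivatives; for a real solid the same check could be run against a fitted equation of state.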
+
+In a crystal consisting of $N$ unit cells with $p$ atoms in each unit cell, there are $3p$ normal modes with frequencies $\omega_s$ (denoted by an index $s$ running from 1 to $3p$) and with $N$ allowed wavevectors
+
+$\mathbf{q}_t$ (denoted by an index $t$ running from 1 to $N$). Each normal mode $\omega_s(\mathbf{q}_t)$ contributes to the free energy by the amount
+
+$$ f_{s,t} = \frac{\hbar}{2} \omega_s(\mathbf{q}_t) + kT \ln \left[ 1 - \exp\left(-\frac{\hbar\omega_s(\mathbf{q}_t)}{kT}\right) \right]. \quad (1.4.2.3) $$
+
+The total free energy amounts, therefore, to
+
+$$ \begin{aligned} F &= \sum_{s=1}^{3p} \sum_{t=1}^{N} f_{s,t} \\ &= \sum_{s=1}^{3p} \sum_{t=1}^{N} \left\{ \frac{\hbar}{2} \omega_s(\mathbf{q}_t) + kT \ln \left[ 1 - \exp\left(-\frac{\hbar\omega_s(\mathbf{q}_t)}{kT}\right) \right] \right\}. \end{aligned} \quad (1.4.2.4) $$
+
+From (1.4.2.2)
+
+$$ \begin{aligned} p &= -\left(\frac{\partial F}{\partial V}\right)_T \\ &= -\sum_{s=1}^{3p} \sum_{t=1}^{N} \left\{ \frac{\hbar}{2} \frac{\partial \omega_s}{\partial V} + \frac{\exp(-\hbar\omega_s/kT)\hbar(\partial\omega_s/\partial V)}{1 - \exp(-\hbar\omega_s/kT)} \right\}. \end{aligned} \quad (1.4.2.5) $$
+
+The last term can be written as
+
+$$ \frac{\hbar(\partial\omega_s/\partial V)}{\exp(\hbar\omega_s/kT) - 1} = \hbar n(\omega_s(\mathbf{q}_t), T) \frac{\partial\omega_s}{\partial V}, \quad (1.4.2.6) $$
+
+where $n(\omega_s, T)$ is the Bose-Einstein distribution
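The step from (1.4.2.5) to (1.4.2.6) is just the algebraic identity $e^{-x}/(1 - e^{-x}) = 1/(e^{x} - 1)$ with $x = \hbar\omega_s/kT$. A short numerical sketch (the frequency and temperature below are illustrative assumptions) confirms this, and also checks that the factor multiplying $\partial\omega_s/\partial V$ in (1.4.2.5) is $\hbar/2 + \hbar n$, i.e. $\partial f/\partial\omega$ for the mode free energy (1.4.2.3):

```python
import math

hbar = 1.054571817e-34  # reduced Planck constant (J s)
k_B = 1.380649e-23      # Boltzmann constant (J/K)

def n_bose(omega, T):
    """Bose-Einstein occupation number n(omega, T)."""
    return 1.0 / math.expm1(hbar * omega / (k_B * T))

def f_mode(omega, T):
    """Free energy of one normal mode, eq. (1.4.2.3)."""
    x = hbar * omega / (k_B * T)
    return 0.5 * hbar * omega + k_B * T * math.log1p(-math.exp(-x))

omega, T = 2.0e13, 300.0  # a typical phonon frequency (rad/s), room temperature
x = hbar * omega / (k_B * T)

# Identity used between (1.4.2.5) and (1.4.2.6):
assert abs(math.exp(-x) / (1.0 - math.exp(-x)) - n_bose(omega, T)) < 1e-12

# df/d(omega) = hbar/2 + hbar*n, the factor multiplying d(omega)/dV in (1.4.2.5):
dw = omega * 1e-6
dfdw = (f_mode(omega + dw, T) - f_mode(omega - dw, T)) / (2 * dw)
assert abs(dfdw - (0.5 * hbar + hbar * n_bose(omega, T))) < 1e-6 * dfdw
```

`expm1` and `log1p` avoid loss of precision at small $x$, where the naive forms cancel catastrophically.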
+---PAGE_BREAK---
+
+The behaviour of the transition-metal ions is very different. In contrast to the rare-earth ions, the electrons of the partly filled shell in transition metals interact strongly with the electric field of the crystal. As a result, their energy levels are split and the orbital moments can be ‘quenched’. This means that relation (1.5.1.5) transforms to
+
+$$
+p_{ij} = (g_{\text{eff}})_{ij} [S(S+1)]^{1/2}. \quad (1.5.1.6)
+$$
+
+Here the value of the effective spin $S$ represents the degeneracy of the lowest electronic energy level produced by the splitting in the crystalline field; $(g_{\text{eff}})_{ij}$ differs from the usual Landé g-factor. The values of its components lie between 0 and 10–20. The tensor $(g_{\text{eff}})_{ij}$ becomes diagonal in the principal axes. According to relation (1.5.1.6), the magnetic susceptibility also becomes a tensor. The anisotropy of $(g_{\text{eff}})_{ij}$ can be studied using electron paramagnetic resonance (EPR) techniques.
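A concrete illustration of (1.5.1.6) is the ‘spin-only’ effective moment obtained in the fully quenched, isotropic limit with $g_{\text{eff}} = 2$ (a common textbook simplification; the ion assignments below are illustrative):

```python
import math

# Spin-only effective moments p = g_eff [S(S+1)]^(1/2), eq. (1.5.1.6),
# in the fully quenched limit with an isotropic g_eff = 2 (illustrative).
g_eff = 2.0
ions = {"V4+": 0.5, "Ni2+": 1.0, "Fe3+": 2.5}  # effective spins S

for ion, S in ions.items():
    p_eff = g_eff * math.sqrt(S * (S + 1.0))
    print(f"{ion} (S = {S}): p_eff = {p_eff:.2f} Bohr magnetons")
    # -> 1.73, 2.83 and 5.92 Bohr magnetons for the three ions
```

In a real crystal the anisotropic tensor $(g_{\text{eff}})_{ij}$ would replace the single scalar $g_{\text{eff}}$ above, giving a direction-dependent moment.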
+
+The Curie–Weiss law describes the behaviour of those paramagnets in which the magnetization results from the competition of two forces. One is connected with the reduction of the magnetic energy by orientation of the magnetic moments of ions in the applied magnetic field; the other arises from thermal fluctuations, which resist the tendency of the field to orient these moments. At low temperatures and in strong magnetic fields, the linear dependence of the magnetization *versus* magnetic field breaks down and the magnetization can be saturated in a sufficiently strong magnetic field. Most of the paramagnetic substances that obey the Curie–Weiss law ultimately transform to an ordered magnetic state as the temperature is decreased.
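The crossover from the linear regime to saturation described above can be sketched with the Brillouin function $B_J(x)$, which gives $M/M_{\text{sat}}$ for non-interacting moments; the values of $J$, $g$ and $T$ below are illustrative assumptions:

```python
import math

mu_B = 9.274e-24    # Bohr magneton (J/T)
k_B = 1.380649e-23  # Boltzmann constant (J/K)

def brillouin(J, x):
    """Brillouin function B_J(x); M/M_sat = B_J(x) with x = g J mu_B B / k_B T."""
    a = (2.0 * J + 1.0) / (2.0 * J)
    b = 1.0 / (2.0 * J)
    return a / math.tanh(a * x) - b / math.tanh(b * x)

J, g, T = 2.5, 2.0, 2.0  # illustrative values; low T favours saturation
for B in (0.01, 1.0, 10.0):
    x = g * J * mu_B * B / (k_B * T)
    print(f"B = {B:5.2f} T: M/M_sat = {brillouin(J, x):.4f}")

# Weak field: linear in B (Curie behaviour), slope (J+1)/(3J).
assert abs(brillouin(J, 1e-4) / 1e-4 - (J + 1.0) / (3.0 * J)) < 1e-3
# Strong field at low T: the magnetization saturates.
assert brillouin(J, 20.0) > 0.999
```

The two assertions capture exactly the two regimes discussed in the text: a field-proportional magnetization at small $x$ and saturation at large $x$.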
+
+The conduction electrons in metals possess paramagnetism in addition to diamagnetism. The paramagnetic susceptibility of the conduction electrons is small (of the same order of magnitude as the diamagnetic susceptibility) and does not depend on temperature. This is due to the fact that the conduction electrons are governed by the laws of Fermi-Dirac statistics.
+
+### 1.5.1.2. Ordered magnetics
+
+1.5.1.2.1. Ferromagnets (including ferrimagnets)
+
+As stated above, all ordered magnetics that possess a spontaneous magnetization $\mathbf{M}_s$ different from zero (a magnetization even in zero magnetic field) are called ferromagnets. The simplest type of ferromagnet is shown in Fig. 1.5.1.2(a). This type possesses only one kind of magnetic ion or atom. All their magnetic moments are aligned parallel to each other in the same direction. This magnetic structure is characterized by one vector $\mathbf{M}$. It turns out that there are very few ferromagnets of this type in which only atoms or ions are responsible for the ferromagnetic magnetization (CrBr$_3$, EuO etc.). The overwhelming majority of ferromagnets of this simplest type are metals, in which the magnetization is the sum of the magnetic moments of the localized ions and of the conduction electrons, which are partly polarized.
+
+More complicated is the type of ferromagnet which is called a ferrimagnet. This name is derived from the name of the oxides of the elements of the iron group. As an example, Fig. 1.5.1.2(b) schematically represents the magnetic structure of magnetite (Fe$_3$O$_4$). It contains two types of magnetic ions and the number of Fe$^{3+}$ ions ($\mu_1$ and $\mu_2$) is twice the number of Fe$^{2+}$ ions ($\mu_3$). The values of the magnetic moments of these two types of ions differ. The magnetic moments of all Fe$^{2+}$ ions are aligned in one direction. The Fe$^{3+}$ ions are divided into two parts: the magnetic moments of one half of these ions are aligned parallel to the magnetic moments of Fe$^{2+}$ and the magnetic moments of the other half are aligned antiparallel. The array of all magnetic moments of identical ions oriented in one direction is called a magnetic sublattice. The magnetization vector of a given sublattice will be denoted by $\mathbf{M}_r$. Hence the magnetic structure of Fe$_3$O$_4$ consists of three magnetic sublattices. The magnetizations of two of them are aligned in one direction, the magnetization of the third one is oriented in the opposite direction. The net ferromagnetic magnetization is $\mathbf{M}_s = \mathbf{M}_1 - \mathbf{M}_2 + \mathbf{M}_3 = \mathbf{M}_3$.
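The sublattice bookkeeping for magnetite can be made explicit with a toy vector sum. Moments are given per formula unit in Bohr magnetons; the spin-only value 5 for Fe$^{3+}$ and an approximate 4 for Fe$^{2+}$ are illustrative assumptions (here the antiparallel sublattice is written as a negative vector, so the text's $M_1 - M_2 + M_3$ of magnitudes becomes a plain sum):

```python
import numpy as np

# Sublattice magnetizations of Fe3O4 per formula unit, in Bohr magnetons:
# two antiparallel Fe3+ sublattices (spin-only moment 5) cancel, so the net
# spontaneous magnetization equals the Fe2+ sublattice (approx. 4, illustrative).
M1 = np.array([0.0, 0.0, +5.0])  # Fe3+ sublattice, parallel to Fe2+
M2 = np.array([0.0, 0.0, -5.0])  # Fe3+ sublattice, antiparallel
M3 = np.array([0.0, 0.0, +4.0])  # Fe2+ sublattice

M_s = M1 + M2 + M3
assert np.allclose(M_s, M3)      # M_s = M_3, as stated in the text
print("net moment per formula unit:", M_s[2], "Bohr magnetons")
```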
+
+The special feature of ferrimagnets, as well as of many antiferromagnets, is that they consist of sublattices aligned antiparallel to each other. Such a structure is governed by the nature of the main interaction responsible for the formation of the ordered magnetic structures, the exchange interaction. The energy of the exchange interaction does not depend on the direction of the interacting magnetic moments (or spins $S$) rela-
+Fig. 1.5.1.2. Ordered arrangements of magnetic moments $\mu_i$ in: (a) an ordinary ferromagnet $\mathbf{M}_s = N\mathbf{\mu}_1$; (b) a ferrimagnet $\mathbf{M}_s = (N/3)(\mathbf{\mu}_1+\mathbf{\mu}_2+\mathbf{\mu}_3)$; (c) a weak ferromagnet $\mathbf{M} = \mathbf{M}_D = (N/2)(\mathbf{\mu}_1+\mathbf{\mu}_2)$, $\mathbf{L} = (N/2)(\mathbf{\mu}_1-\mathbf{\mu}_2)$, $(L_x \gg M_y; M_x = M_z = L_y = L_z = 0)$. (N is the number of magnetic ions per cm$^3$.)
+---PAGE_BREAK---
+
+Fig. 1.5.1.4. Helical and sinusoidal magnetic structures. (a) An antiferromagnetic helix; (b) a cone spiral; (c) a cycloidal spiral; (d) a longitudinal spin-density wave; (e) a transverse spin-density wave.
+
+the vectors of the magnetization of the layers are arranged on the surface of a cone. The ferromagnetic magnetization is aligned along the z axis. This structure is called a ferromagnetic helix. It usually belongs to the incommensurate magnetic structures.
+
+More complicated antiferromagnetic structures also exist: sinusoidal structures, which also consist of layers in which all the magnetic moments are parallel to each other. Fig. 1.5.1.4(c) displays the cycloidal spiral and Figs. 1.5.1.4(d) and (e) display longitudinal and transverse spin density waves, respectively.
+
+### 1.5.2. Magnetic symmetry
+
+As discussed in Section 1.5.1, in studies of the symmetry of magnetics one should take into account not only the crystallographic elements of symmetry (rotations, reflections and translations) but also the time-inversion element, which causes the reversal of the magnetic moment density vector **m**(r). Following Landau & Lifshitz (1957), we shall denote this element by **R**. If combined with any crystallographic symmetry element **G** we get a product **RG**, which some authors call the space-time symmetry operator. We shall not use this terminology in the following.
+
+To describe the symmetry properties of magnetics, one should use magnetic point and space groups instead of crystallographic ones. (See also Section 1.2.5.)
+
+By investigating the ‘four-dimensional groups of three-dimensional space’, Heesch (1930) found not only the 122 groups that now are known as magnetic point groups but also the seven triclinic and 91 monoclinic magnetic space groups. He also recognized that these groups can be used to describe the symmetry of spin arrangements. The present interest in magnetic symmetry was much stimulated by Shubnikov (1951), who considered the symmetry groups of figures with black and white faces, which he called antisymmetry groups. The change of colour of the faces in antisymmetry (black–white symmetry, see also Section 3.3.5) corresponds to the element **R**. These antisymmetry classes were derived as magnetic symmetry point groups by Tavger & Zaitsev (1956). Beside antisymmetry, the concept of
+
+colour (or generalized) symmetry also was developed, in which the number of colours is not 2 but 3, 4 or 6 (see Belov et al., 1964; Koptsik & Kuzhukeev, 1972). A different generalization to more than two colours was proposed by van der Waerden & Burckhardt (1961). The various approaches have been compared by Schwarzenberger (1984).
+
+As the theories of antisymmetry and of magnetic symmetry evolved often independently, different authors denote the operation of time inversion (black–white exchange) by different symbols. Of the four frequently used symbols ($R$, $E'$, $\underline{1}$, $1'$) we shall use in this article only two: $R$ and $1'$.
+
+#### 1.5.2.1. Magnetic point groups
+
+Magnetic point groups may contain rotations, reflections, the element **R** and their combinations. A set of such elements that satisfies the group properties is called a magnetic point group. It is obvious that there are 32 trivial magnetic point groups; these are the ordinary crystallographic point groups supplemented by the element **R**. Each of these point groups contains all the elements of the ordinary point group $\mathcal{P}$ and also all the elements of this group $\mathcal{P}$ multiplied by **R**. This type of magnetic point group $M_{P1}$ can be represented by
+
+$$ M_{P1} = \mathcal{P} + R\mathcal{P}. \quad (1.5.2.1) $$
+
+These groups are sometimes called ‘grey’ magnetic point groups. As pointed out above, all dia- and paramagnets belong to this type of point group. To this type belong also antiferromagnets with a magnetic space group that contains translations multiplied by **R** (space groups of type III$^b$).
+
+The second type of magnetic point group, which is also trivial in some sense, contains all the 32 crystallographic point groups without the element **R** in any form. For this type $M_{P2} = \mathcal{P}$. Thirteen of these point groups allow ferromagnetic spontaneous magnetization (ferromagnetism, ferrimagnetism, weak ferromagnetism). They are listed in Table 1.5.2.4. The remaining 19 point groups describe antiferromagnets. The groups $M_{P2}$ are often called ‘white’ magnetic point groups.
+
+The third type of magnetic point group $M_{P3}$, ‘black and white’ groups (which are the only nontrivial ones), contains those point groups in which **R** enters only in combination with rotations or reflections. There are 58 point groups of this type. Eighteen of them describe different types of ferromagnetism (see Table 1.5.2.4) and the others represent antiferromagnets.
+
+Replacing **R** by the identity element **E** in the magnetic point groups of the third type does not change the number of elements in the point group. Thus each group of the third type $M_{P3}$ is isomorphic to a group $\mathcal{P}$ of the second type.
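The counting in (1.5.2.1) and the isomorphism just mentioned can be made concrete with a toy implementation. The representation is an assumption for illustration only: proper rotations about a single axis, stored as angles mod 360°, with the time-inversion element $R$ as a Boolean flag that commutes with all rotations:

```python
from itertools import product

def mul(a, b):
    """Multiply (rotation angle, primed flag) pairs; R commutes with rotations."""
    return ((a[0] + b[0]) % 360, a[1] ^ b[1])

def is_group(ops):
    """Check closure under multiplication (enough for a finite subset of a group)."""
    ops = set(ops)
    return all(mul(x, y) in ops for x, y in product(ops, ops))

P = [(r, False) for r in (0, 90, 180, 270)]  # point group 4 (C4)
grey = P + [(r, True) for r, _ in P]         # M_P1 = P + RP, eq. (1.5.2.1)
H = [(0, False), (180, False)]               # index-2 subgroup 2 (C2)
# Black-white group built from H plus the remaining rotations combined with R:
bw = H + [(r, True) for r, _ in P if (r, False) not in H]

assert is_group(grey) and is_group(bw)
assert len(grey) == 2 * len(P)  # the grey group doubles the order of P
assert len(bw) == len(P)        # replacing R by E maps the black-white group onto P
```

The black-white group constructed here is the magnetic group usually written $4'$; dropping the primed flag recovers the ordinary group 4, illustrating the isomorphism stated in the text.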
+
+The method of derivation of the nontrivial magnetic groups given below was proposed by Indenbom (1959). Let $\mathcal{H}$ denote the set of those elements of the group $\mathcal{P}$ which enter into the asso-
+
+Table 1.5.2.2. Comparison of different symbols for the elements of magnetic point groups
+
+| Magnetic point group | Elements (Schoenflies) | Elements (Hermann–Mauguin) |
+|---|---|---|
+| $D_{4R} = 4221'$ | $E$, $C_2$, $2C_4$, $2U_2$, $2U_2^a$, $R$, $RC_2$, $2RC_4$, $2RU_2$, $2RU_2^a$ | $1$, $2_x$, $2_y$, $2_z$, $2_{xy}$, $2_{-xy}$, $\pm 4_z$, $1'$, $2'_x$, $2'_y$, $2'_z$, $2'_{xy}$, $2'_{-xy}$, $\pm 4'_z$ |
+| $D_4 = 422$ | $E$, $C_2$, $2C_4$, $2U_2$, $2U_2^a$ | $1$, $2_x$, $2_y$, $2_z$, $2_{xy}$, $2_{-xy}$, $\pm 4_z$ |
+| $D_4(C_4) = 42'2'$ | $E$, $C_2$, $2C_4$, $2RU_2$, $2RU_2^a$ | $1$, $2_z$, $\pm 4_z$, $2'_x$, $2'_y$, $2'_{xy}$, $2'_{-xy}$ |
+| $D_4(D_2) = 4'22'$ | $E$, $C_{2z}$, $C_{2x}$, $C_{2y}$, $2RU_2^a$, $2RC_4$ | $1$, $2_x$, $2_y$, $2_z$, $2'_{xy}$, $2'_{-xy}$, $\pm 4'_z$ |
+---PAGE_BREAK---
+
+# 1.6. Classical linear crystal optics
+
+BY A. M. GLAZER AND K. G. COX†
+
+## 1.6.1. Introduction
+
+The field of classical crystal optics is an old one, and in the last century, in particular, it was the main subject of interest in the study of crystallography. Since the advent of X-ray diffraction, however, crystal optics tended to fall out of widespread use, except perhaps in mineralogy, where it has persisted as an important technique for the classification and identification of mineral specimens. In more recent times, however, with the growth in optical communications technologies, there has been a revival of interest in the optical properties of crystals, both linear and nonlinear. There are many good books dealing with classical crystal optics, which the reader is urged to consult (Hartshorne & Stuart, 1970; Wahlstrom, 1959; Bloss, 1961). In addition, large collections of optical data on crystals exist (Groth, 1906–1919; Winchell, 1931, 1939, 1951, 1954, 1965; Kerr, 1959). In this chapter, both linear and nonlinear optical effects will be introduced briefly in a generalized way. Then the refractive-index surface for a crystal will be derived in the classical way. This leads on to a discussion of the practical means by which conventional crystal optics can be used in the study of crystalline materials, particularly in connection with mineralogical study, although the techniques described apply equally well to other types of crystals. Finally, some detailed explanations of certain linear optical tensors will be given.
+
+## 1.6.2. Generalized optical, electro-optic and magneto-optic effects
+
+When light of a particular cyclic frequency $\omega$ is incident on a crystal of the appropriate symmetry, in general an electrical polarization **P** may be generated within the crystal. This can be expressed in terms of a power series with respect to the electric vector of the light wave (Nussbaum & Phillips, 1976; Butcher & Cotter, 1990; Kaminow, 1974):
+
+$$ P = \sum \epsilon_o \chi^{(i)} E^i = \epsilon_o (\chi^{(1)}E + \chi^{(2)}E^2 + \chi^{(3)}E^3 + \dots), \quad (1.6.2.1) $$
+
+where the $\chi^{(i)}$ are susceptibilities of order $i$. Those working in the field of electro-optics tend to use this notation as a matter of course. The susceptibility $\chi^{(1)}$ is a linear term, whereas the higher-order susceptibilities describe nonlinear behaviour.
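To get a feeling for why the higher-order terms in (1.6.2.1) only matter at laser-level fields, one can compare the terms numerically. The susceptibility values below are order-of-magnitude assumptions in SI units, not data for any particular crystal:

```python
eps0 = 8.854e-12                       # vacuum permittivity (F/m)
chi1, chi2, chi3 = 2.0, 1e-12, 1e-23   # illustrative orders of magnitude (SI)

for E in (1e3, 1e8):                   # modest field vs intense laser field (V/m)
    P1 = eps0 * chi1 * E               # linear polarization
    P2 = eps0 * chi2 * E**2            # second-order term
    P3 = eps0 * chi3 * E**3            # third-order term
    print(f"E = {E:.0e} V/m: P2/P1 = {P2/P1:.1e}, P3/P1 = {P3/P1:.1e}")

# Even at the strong field the nonlinear terms remain small corrections:
assert P2 < P1 and P3 < P2
```

The fractions $P_2/P_1$ and $P_3/P_1$ grow linearly and quadratically with $E$, which is why nonlinear behaviour is effectively invisible in ordinary light but accessible with lasers.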
+
+However, it is convenient to generalize this concept to take into account other fields (e.g. electrical, magnetic and stress fields) that can be imposed on the crystal, not necessarily due to the incident light. The resulting polarization can be considered to arise from many different so-called electro-optic, magneto-optic and photoelastic (elasto-optic) effects, expressed as a series expansion of $P_i$ in terms of susceptibilities $\chi_{ijk\ell...}$ and the applied fields **E**, **B** and **T**. This can be written in the following way:
+
+$$ P_i = P_i^0 + \epsilon_o \chi_{ij} E_j^\omega + \epsilon_o \chi_{ij\ell} \nabla_\ell E_j^\omega + \epsilon_o \chi_{ijk} E_j^{\omega_1} E_k^{\omega_2} \\ + \epsilon_o \chi_{ijk\ell} E_j^{\omega_1} E_k^{\omega_2} E_\ell^{\omega_3} + \epsilon_o \chi_{ijk} E_j^{\omega_1} B_k^{\omega_2} \\ + \epsilon_o \chi_{ijk\ell} E_j^{\omega_1} B_k^{\omega_2} B_\ell^{\omega_3} + \epsilon_o \chi_{ijk\ell} E_j^{\omega_1} T_{k\ell}^{\omega_2} + \dots \quad (1.6.2.2) $$
+
+Here, the superscripts refer to the frequencies of the relevant field terms and the susceptibilities are expressed as tensor components. Each term in this expansion gives rise to a specific effect that may or may not be observed, depending on the crystal symmetry and the size of the susceptibility coefficients. Note a possible confusion: in the notation $\chi^{(i)}$, $i$ is equal to one less than its rank. It is important to understand that these terms describe various properties, both linear and nonlinear. Those terms that describe the effect purely of optical frequencies propagating through the crystal give rise to *linear* and *nonlinear* optics. In the former case, the input and output frequencies are the same, whereas in the latter case, the output frequency results from sums or differences of the input frequencies. Furthermore, it is apparent that nonlinear optics depends on the intensity of the input field, and so is an effect that is induced by the strong optical field.
+
+If the input electrical fields are static (the term 'static' is used here to mean zero or low frequency compared with that of light), the resulting effects are either linear or nonlinear electrical effects, in which case they are of no interest here. There is, however, an important class of effects in which both static and optical fields are involved: linear and nonlinear electro-optic effects. Here, the use of the terms linear and nonlinear is open to confusion, depending on whether it is the electrical part or the optical part to which reference is made (see for example below in the discussion of the linear electro-optic effect). Similar considerations apply to applied magnetic fields to give linear and nonlinear magneto-optic effects and to applied stresses, the *photoelastic* effects. Table 1.6.2.1 lists the most important effects according to the terms in this series. The susceptibilities are written in the form $\chi(\omega_1; \omega_2, \omega_3, ...)$ to indicate the frequency $\omega_1$ of the output electric field, followed after the semicolon by the input frequencies $\omega_1, \omega_2, ...$
+
+Table 1.6.2.1. Summary of linear and nonlinear optical properties
+
+| Type of polarization term | Susceptibility | Effect |
+|---|---|---|
+| $P_i^0$ | $\chi(0; 0)$ | Spontaneous polarization |
+| $\epsilon_o\chi_{ij}E_j^\omega$ | $\chi(\omega; \omega)$ | Dielectric polarization, refractive index, linear birefringence |
+| $\epsilon_o\chi_{ij\ell}\nabla_\ell E_j^\omega$ | $\chi(\omega; \omega)$ | Optical rotation (gyration) |
+| $\epsilon_o\chi_{ijk}E_j^{\omega_1}E_k^{\omega_2}$ | $\chi(0; 0, 0)$ | Quadratic electric effect |
+| | $\chi(\omega; \omega, 0)$ | Linear electro-optic effect or Pockels effect |
+| | $\chi(\omega_1 \pm \omega_2; \omega_1, \omega_2)$ | Sum/difference frequency generation, two-wave mixing |
+| | $\chi(\omega; \omega/2, \omega/2)$ | Second harmonic generation (SHG) |
+| | $\chi(0; \omega/2, \omega/2)$ | Optical rectification |
+| | $\chi(\omega_3; \omega_1, \omega_2)$ | Parametric amplification |
+| $\epsilon_o\chi_{ijk\ell}E_j^{\omega_1}E_k^{\omega_2}E_\ell^{\omega_3}$ | $\chi(\omega; \omega, 0, 0)$ | Quadratic electro-optic effect or Kerr effect |
+| | $\chi(\omega; \omega/2, \omega/2, 0)$ | Electric-field induced second harmonic generation (EFISH) |
+| | $\chi(-\omega_1; \omega_2, \omega_3, -\omega_4)$ | Four-wave mixing |
+| $\epsilon_o\chi_{ijk}E_j^{\omega_1}B_k^{\omega_2}$ | $\chi(\omega; \omega, 0)$ | Faraday rotation |
+| $\epsilon_o\chi_{ijk\ell}E_j^{\omega_1}B_k^{\omega_2}B_\ell^{\omega_3}$ | $\chi(\omega; \omega, 0, 0)$ | Quadratic magneto-optic effect or Cotton–Mouton effect |
+| $\epsilon_o\chi_{ijk\ell}E_j^{\omega_1}T_{k\ell}^{\omega_2}$ | $\chi(\omega; \omega, 0)$ | Linear elasto-optic effect or photoelastic effect |
+| | $\chi(\omega_1 \pm \omega_2; \omega_1, \omega_2)$ | Linear acousto-optic effect |
+
+† The sudden death of Keith Cox is deeply regretted. He died in a sailing accident on 27 August 1998 in Scotland at the age of 65.
+---PAGE_BREAK---
+
+Fig. 1.6.4.7. Three birefringence images of industrial diamond viewed along [111] taken with the rotating analyser system. (a) $I_0$; (b) $|\sin \delta|$; (c) orientation $\varphi$ of slow axis with respect to horizontal.
+
+images observed in plane-polarized light rely on scattering from point sources within the specimen, and do not depend strictly on whether the configuration is conoscopic or orthoscopic. Nevertheless, relief and the Becke line are much more clearly observable in orthoscopic use.
+
+The principle of conoscopic use is quite different. Here, the image is formed in the *back focal plane* of the objective. Any group of parallel rays passing through the specimen is brought to a focus in this plane, at a specific point depending on the direction of transmission. Hence every point in the image corresponds to a different transmission direction (see Fig. 1.6.4.8). Moreover, the visible effects are entirely caused by interference, and there is no image of the details of the specimen itself. That image is of course also present, towards the top of the tube at or near the cross wires, but the two are not simultaneously visible. The conoscopic image may be viewed simply by removing the eyepiece and looking down the tube, where it appears as a small but bright circle. More commonly however, the Bertrand lens is inserted in the tube, which has the effect of transferring the conoscopic image from the back focal plane of the objective to the front focal plane of the eyepiece, where it coincides with the cross wires and may be examined as usual.
+
+Fig. 1.6.4.8. Formation of the interference figure. The microscope axis lies vertically in the plane of the paper. A bundle of rays travelling through the crystal parallel to the microscope axis (dashed lines) is brought to a focus at A in the back focal plane of the objective. This is the centre of the interference figure. A bundle of oblique rays (solid lines) is brought to a focus at B, towards the edge of the figure.
+
+It is useful to think of the conoscopic image as analogous to the gnomonic projection as used in crystallography. The geometrical principles are the same, as each direction through the crystal is projected directly through the centre of the lens into the back focal plane.
+
+### 1.6.4.12. Uniaxial figures
+
+To understand the formation of an interference figure, consider a simple example, a specimen of calcite cut at right angles to the c crystallographic axis. Calcite is uniaxial negative, with the optic axis parallel to c. The rays that have passed most obliquely through the specimen are focused around the edge of the figure, while the centre is occupied by rays that have travelled parallel to the optic axis (see Fig. 1.6.4.8). The birefringence within the image clearly must increase from nil in the centre to some higher value at the edges, because the rays here have had longer path lengths through the crystal. Furthermore, the image must have radial symmetry, so that the first most obvious feature of the figure is a series of coloured rings, corresponding in outward sequence to the successive orders. The number of rings visible will of course depend on the thickness of the sample, and when birefringence is low enough no rings will be obvious because all colours lie well within the first order (Figs. 1.6.4.9a and b). Fig. 1.6.4.10(a) illustrates, by reference to the indicatrix, the way in which the vibration directions of the o and e rays are disposed. Fig. 1.6.4.10(b) shows the disposition of vibration directions in the figure. Note that o rays always vibrate tangentially and e rays radially. The o-ray vibration directions lie in the plane of the figure, but e-ray vibration directions become progressively more inclined to the plane of the figure towards the edge.
+
+The shaded cross on the figure illustrates the position of dark ‘brushes’ known as *isogyres* (Fig. 1.6.4.10b). These develop wherever vibration directions lie N-S or E-W, hence corresponding to the vibration directions of the analyser and polarizer. As the stage is rotated, as long as the optic axis is truly parallel to the microscope axis, the figure will not change. This is an example of a centred uniaxial optic axis figure, and such a figure identifies the crystal as belonging to the tetragonal, trigonal or hexagonal systems (see Fig. 1.6.4.11a).
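The isogyres and rings described above follow from the standard transmission formula for a birefringent plate between crossed polars, $I/I_0 = \sin^2(2\varphi)\,\sin^2(\delta/2)$, where $\varphi$ is the azimuth of the slow vibration direction relative to the polarizer and $\delta$ the phase retardation. A minimal sketch:

```python
import math

# Crossed-polars transmitted intensity for a birefringent plate:
# I/I0 = sin^2(2*phi) * sin^2(delta/2). Isogyres appear where the vibration
# directions lie N-S or E-W (phi = 0 or 90 deg); dark rings where delta = 2*pi*m.
def transmitted(phi_deg, delta):
    phi = math.radians(phi_deg)
    return math.sin(2 * phi) ** 2 * math.sin(delta / 2) ** 2

assert transmitted(0.0, math.pi) == 0.0        # isogyre: slow axis along polarizer
assert transmitted(90.0, math.pi) < 1e-30      # isogyre: slow axis along analyser
assert transmitted(45.0, 2 * math.pi) < 1e-30  # dark ring: whole-wave retardation
assert abs(transmitted(45.0, math.pi) - 1.0) < 1e-12  # maximum transmission
```

In the uniaxial figure, $\delta$ grows with inclination from the optic axis (longer path, larger birefringence), which generates the ring sequence, while $\varphi$ varies with azimuth around the axis, which generates the dark cross.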
+
+From the point of view of crystal identification, one can also determine whether the crystal is uniaxial positive ($n_e > n_o$) or uniaxial negative ($n_e < n_o$). Inserting the sensitive-tint plate will move the coloured rings up or down the birefringence scale by a complete order. Fig. 1.6.4.11(c) shows the centred optic axis figure for calcite, which is optically negative. The insertion of a tint plate with its slow vibration direction lying NE-SW lowers the colours in the NE and SW quadrants of the figure, and raises
+---PAGE_BREAK---
+
+# 1.7. Nonlinear optical properties
+
+BY B. BOULANGER AND J. ZYSS
+
+## 1.7.1. Introduction
+
+The first nonlinear optical phenomenon was observed by Franken *et al.* (1961): ultraviolet radiation at 0.3471 µm was detected at the exit of a quartz crystal illuminated with a ruby laser beam at 0.6942 µm. This was the first demonstration of second harmonic generation at optical wavelengths. Coherent light with an intensity of a few W cm⁻² is necessary for the observation of nonlinear optical interactions, which thus requires the use of laser beams.
+
+The basis of nonlinear optics, including quantum-mechanical perturbation theory and Maxwell equations, is given in the paper published by Armstrong *et al.* (1962).
+
+It would take too long here to give a complete historical account of nonlinear optics, because it involves an impressive range of different aspects, from theory to applications, from physics to chemistry, from microscopic to macroscopic aspects, from quantum mechanics of materials to classical and quantum electrodynamics, from gases to solids, from mineral to organic compounds, from bulk to surface, from waveguides to fibres and so on.
+
+Among the main nonlinear optical effects are harmonic generation, parametric wave mixing, stimulated Raman scattering, self-focusing, multiphoton absorption, optical bistability, phase conjugation and optical solitons.
+
+This chapter deals mainly with harmonic generation and parametric interactions in anisotropic crystals, which stand out as one of the most important fields in nonlinear optics and certainly one of its oldest and most rigorously treated topics. Indeed, there is a great deal of interest in the development of solid-state laser sources, be they tunable or not, in the ultraviolet, visible and infrared ranges. Spectroscopy, telecommunications, telemetry and optical storage are some of the numerous applications.
+
+The electric field of light interacts with the electric field of matter by inducing a dipole due to the displacement of the electron density away from its equilibrium position. The induced dipole moment is termed polarization and is a vector: it is related to the applied electric field via the dielectric susceptibility tensor. For fields with small to moderate amplitude, the polarization remains linearly proportional to the field magnitude and defines the linear optical properties. For increasing field amplitudes, the polarization is a nonlinear function of the applied electric field and gives rise to nonlinear optical effects. The polarization is properly modelled by a Taylor power series of the applied electric field if its strength does not exceed the atomic electric field (10⁸–10⁹ V cm⁻¹) and if the frequency of the electric field is far away from the resonance frequencies of matter. Our purpose lies within this framework because it encompasses the most frequently encountered cases, in which laser intensities remain in the kW to MW per cm² range, that is to say with electric fields from 10³ to 10⁴ V cm⁻¹. The electric field products appearing in the Taylor series express the interactions of different optical waves. Indeed, a wave at the circular frequency ω can be radiated by the second-order polarization induced by two waves at ω_a and ω_b such that ω = ω_a ± ω_b; these interactions correspond to sum-frequency generation (ω = ω_a + ω_b), with the particular cases of second harmonic generation (2ω_a = ω_a + ω_a) and indirect third harmonic generation (3ω_a = ω_a + 2ω_a); the other three-wave process is difference-frequency generation, including optical parametric amplification and optical parametric oscillation. In the same way, the third-order polarization governs four-wave mixing: direct third harmonic generation (3ω_a = ω_a + ω_a + ω_a)
+
+and more generally sum- and difference-frequency generations (ω = ω_a ± ω_b ± ω_c).
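The wave-mixing bookkeeping above (sum frequency, difference frequency, second harmonic, rectification) can be seen directly by squaring a two-frequency field, as the quadratic polarization does. The frequencies here are arbitrary illustrative units:

```python
import numpy as np

# The quadratic term chi2 * E^2 generates new frequencies: with
# E = cos(w_a t) + cos(w_b t), E^2 contains DC (optical rectification),
# w_a - w_b, 2*w_b, w_a + w_b and 2*w_a. Frequencies in arbitrary units.
fa, fb = 5.0, 3.0
t = np.arange(4096) / 4096.0          # one exact period of the sampling grid
E = np.cos(2 * np.pi * fa * t) + np.cos(2 * np.pi * fb * t)
spectrum = np.abs(np.fft.rfft(E ** 2)) / len(t)

peaks = sorted(k for k in range(20) if spectrum[k] > 1e-6)
print(peaks)   # -> [0, 2, 6, 8, 10]: DC, fa-fb, 2*fb, fa+fb, 2*fa
```

Because the chosen frequencies are integers over an exact number of sampling periods, the spectrum has no leakage and the five mixing products stand out cleanly; the third-order term $E^3$ would add the four-wave products mentioned in the text.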
+
+Here, we do not consider optical interactions at the microscopic level, and we ignore the way in which the atomic or molecular dielectric susceptibility determines the macroscopic optical properties. Microscopic solid-state considerations and the relations between microscopic and macroscopic optical properties, particularly successful in the realm of organic crystals, play a considerable role in materials engineering and optimization. This important topic, known as molecular and crystalline engineering, lies beyond the scope of this chapter. Therefore, all the phenomena studied here are connected to the macroscopic first-, second- and third-order dielectric susceptibility tensors χ⁽¹⁾, χ⁽²⁾ and χ⁽³⁾, respectively; we give these tensors for all the crystal point groups.
+
+We shall mainly emphasize propagation aspects, on the basis of Maxwell equations which are expressed for each Fourier component of the optical field in the nonlinear crystal. The reader will then follow how the linear optical properties come to play a pivotal role in the nonlinear optical interactions. Indeed, an efficient quadratic or cubic interaction requires not only a high magnitude of χ⁽²⁾ or χ⁽³⁾, respectively, but also specific conditions governed by χ⁽¹⁾: existence of phase matching between the induced nonlinear polarization and the radiated wave; suitable symmetry of the field tensor, which is defined by the tensor product of the electric field vectors of the interacting waves; and small or nil double refraction angles. Quadratic and cubic processes cannot be considered as fully independent in the context of cascading. Significant phase shifts driven by a sequence of sum- and difference-frequency generation processes attached to a χ⁽²⁾ · χ⁽²⁾ contracted tensor expression have been reported (Bosshard, 2000). These results point out the relevance of polar structures to cubic phenomena in both inorganic and organic structures, thus somewhat blurring the borders between quadratic and cubic NLO.
+
+We analyse in detail second harmonic generation, which is the prototypical interaction of frequency conversion. We also present indirect and direct third harmonic generations, sum-frequency generation and difference-frequency generation, with the specific cases of optical parametric amplification and optical parametric oscillation.
+
+An overview of the methods of measurement of the nonlinear optical properties is provided, and the chapter concludes with a comparison of the main mineral and organic crystals showing nonlinear optical properties.
+
+## 1.7.2. Origin and symmetry of optical nonlinearities
+
+### 1.7.2.1. Induced polarization and susceptibility
+
+The macroscopic electronic polarization of a unit volume of the material system is classically expanded in a Taylor power series of the applied electric field **E**, according to Bloembergen (1965):
+
+$$ \mathbf{P} = \mathbf{P}_0 + \epsilon_o (\chi^{(1)} \cdot \mathbf{E} + \chi^{(2)} \cdot \mathbf{E}^2 + \dots + \chi^{(n)} \cdot \mathbf{E}^n + \dots), \quad (1.7.2.1) $$
+
+where $\chi^{(n)}$ is a tensor of rank $n+1$, $\mathbf{E}^n$ is a shorthand abbreviation for the $n$th-order tensor product $\mathbf{E} \otimes \mathbf{E} \otimes \dots \otimes \mathbf{E} = \otimes^n \mathbf{E}$ and the dot stands for the contraction of the last $n$ indices of the tensor $\chi^{(n)}$ with the $n$ electric field vectors.
+
+# 1. TENSORIAL ASPECTS OF PHYSICAL PROPERTIES
+
+Table 1.7.3.9. Field-tensor components specifically nil in the principal planes of uniaxial and biaxial crystals for three-wave and four-wave interactions
+($i, j, k, l$) = $x$, $y$ or $z$.
+
| Configurations of polarization | (xy) plane | (xz) plane | (yz) plane |
|---|---|---|---|
| eoo | $F_{xjk} = 0$; $F_{yjk} = 0$ | $F_{ixk} = F_{ijx} = 0$; $F_{iyk} = 0$ | $F_{iyk} = F_{ijy} = 0$; $F_{xjk} = 0$ |
| oee | $F_{ixk} = F_{ijx} = 0$; $F_{iyk} = F_{ijy} = 0$ | $F_{iyk} = F_{ijy} = 0$; $F_{xik} = 0$ | $F_{ixk} = F_{ijx} = 0$; $F_{iyk} = 0$ |
| eooo | $F_{xjkl} = 0$; $F_{yjkl} = 0$ | $F_{ixkl} = F_{ijxl} = F_{ijkx} = 0$; $F_{yjkl} = 0$ | $F_{iykl} = F_{ijyl} = F_{ijky} = 0$; $F_{xjkl} = 0$ |
| oeee | $F_{ixkl} = F_{ijxl} = F_{ijkx} = 0$; $F_{iykl} = F_{ijyl} = F_{ijky} = 0$ | $F_{iykl} = F_{ijyl} = F_{ijky} = 0$; $F_{xjkl} = 0$ | $F_{ixkl} = F_{ijxl} = F_{ijkx} = 0$; $F_{iykl} = 0$ |
| ooee | $F_{xjxl} = F_{ijxl} = 0$; $F_{iyjl} = F_{ijly} = 0$ | $F_{xjkl} = F_{ixkl} = 0$; $F_{iyjl} = F_{ijky} = 0$ | $F_{iyjl} = F_{ijkl} = 0$; $F_{xjxl} = F_{ijxl} = 0$ |
+
+and configurations of polarization: $D_4$ and $D_6$ for 2o.e, $C_{4v}$ and $C_{6v}$ for 2e.o, $D_6$, $D_{6h}$, $D_{3h}$ and $C_{6v}$ for 3o.e and 3e.o. Thus, even if phase-matching directions exist, the effective coefficient in these situations is nil, which forbids the interactions considered (Boulanger & Marnier, 1991; Boulanger et al., 1993). The number of forbidden crystal classes is greater under the Kleinman approximation. The forbidden crystal classes have been determined for the particular case of third harmonic generation assuming the Kleinman conjecture and without consideration of the field tensor (Midwinter & Warner, 1965).
+
+### 1.7.3.2.4.3. Biaxial class
+
+The symmetry of the biaxial field tensors is the same as for the uniaxial class, though only for a propagation in the principal planes $xz$ and $yz$; the associated matrix representations are given in Tables 1.7.3.7 and 1.7.3.8, and the nil components are listed in Table 1.7.3.9. Because of the change of optic sign on either side of the optic axis, the field tensors of the interactions for which the phase-matching cone joins areas b and a or a and c, shown in Fig. 1.7.3.5, change from one area to another: for example, an (*oee*) field tensor becomes an (*oeo*) one, and so the solicited components of the electric susceptibility tensor are not the same.
+
+Fig. 1.7.3.6. Schematic configurations for second harmonic generation: (a) non-resonant SHG; (b) external resonant SHG: the resonant wave may either be the fundamental or the harmonic one; (c) internal resonant SHG. $P^{\omega,2\omega}$ are the fundamental and harmonic powers; $HT^{\omega}$ and $HR^{\omega,2\omega}$ are the high-transmission and high-reflection mirrors at $\omega$ or $2\omega$, and $T^{\omega,2\omega}$ are the transmission coefficients of the output mirror at $\omega$ or $2\omega$. NLC is the nonlinear crystal with a nonzero $\chi^{(2)}$.
+
+The nonzero field-tensor components for a propagation in the $xy$ plane of a biaxial crystal are: $F_{zxx}$, $F_{zyy}$, $F_{zxy} \neq F_{zyx}$ for (*eoo*); $F_{xzz}$, $F_{yzz}$ for (*oee*); $F_{zxxx}$, $F_{zyyy}$, $F_{zxyy} \neq F_{zyxy} \neq F_{zyyx}$, $F_{zxxy} \neq F_{zxyx} \neq F_{zyxx}$ for (*eooo*); $F_{xzzz}$, $F_{yzzz}$ for (*oeee*); $F_{xyzz} \neq F_{yxzz}$, $F_{xxzz}$, $F_{yyzz}$ for (*ooee*). The nonzero components for the other configurations of polarization are obtained by the associated permutations of the Cartesian indices and the corresponding polarizations.
+
+The field tensors are not symmetric for a propagation out of the principal planes in the general case where all the frequencies are different: in this case there are 27 independent components for the three-wave interactions and 81 for the four-wave interactions, and so all the electric susceptibility tensor components are solicited.
+
+As phase matching imposes the directions of the electric fields of the interacting waves, it also determines the field tensor and hence the effective coefficient. Thus there is no possibility of choosing the $\chi^{(2)}$ coefficients once a given type of phase matching is considered. In general, the largest coefficients of polar crystals, i.e. $\chi_{zzz}$, are implicated at a very low level when phase matching is achieved, because the corresponding field tensor, i.e. $F_{zzz}$, is often weak (Boulanger *et al.*, 1997). In contrast, QPM authorizes the coupling between three waves polarized along the $z$ axis, which leads to an effective coefficient that is purely $\chi_{zzz}$, i.e. $\chi_{eff} = (2/\pi)\chi_{zzz}$, where the numerical factor comes from the periodic character of the rectangular modulation function (Fejer *et al.*, 1992).
+
+#### 1.7.3.3. Integration of the propagation equations
+
+##### 1.7.3.3.1. Spatial and temporal profiles
+
+The resolution of the coupled equations (1.7.3.22) or (1.7.3.24) over the crystal length $L$ leads to the electric field amplitude $E_l(X, Y, L)$ of each interacting wave. The general solutions are Jacobian elliptic functions (Armstrong *et al.*, 1962; Fève, Boulanger & Douady, 2002). The integration of the systems is simplified for cases where one or several beams are held constant, which is called the undepleted pump approximation. We consider mainly this kind of situation here. The power of each interacting wave is calculated by integrating the intensity over the cross section of each beam according to (1.7.3.8). For our main purpose, we consider the simple case of plane-wave beams with two kinds of transverse profile:
+
+$$
+\begin{aligned}
+\mathbf{E}(X, Y, Z) &= \mathbf{e}E_o(Z) && \text{for } X^2 + Y^2 \le w_o^2 \\
+\mathbf{E}(X, Y, Z) &= 0 && \text{elsewhere}
+\end{aligned}
+\quad (1.7.3.36)
+$$
+
+for a flat distribution over a radius $w_o$;
+
+$$ \mathbf{E}(X, Y, Z) = \mathbf{e}E_o(Z) \exp[-(X^2 + Y^2)/w_o^2] \quad (1.7.3.37) $$
+
+for a Gaussian distribution, where $w_o$ is the radius at $(1/e)$ of the electric field and so at $(1/e^2)$ of the intensity.
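The power integration over these two profiles can be sketched numerically and checked against the closed forms that follow from the definitions above, $P = I_0\pi w_o^2/2$ for the Gaussian profile (with $w_o$ the $1/e^2$ intensity radius) and $P = I_0\pi w_o^2$ for the flat profile. Function name and grid parameters are our own illustrative choices:

```python
import numpy as np

def beam_power(profile, w0, i0=1.0, extent=5.0, n=2001):
    """Numerically integrate the beam intensity over the cross section.

    'profile' is 'gaussian' (w0 = radius at 1/e of the field, i.e. 1/e^2
    of the intensity) or 'flat' (uniform intensity inside radius w0).
    """
    x = np.linspace(-extent * w0, extent * w0, n)
    X, Y = np.meshgrid(x, x)
    r2 = X**2 + Y**2
    if profile == "gaussian":
        I = i0 * np.exp(-2.0 * r2 / w0**2)   # intensity ~ |E|^2
    else:
        I = np.where(r2 <= w0**2, i0, 0.0)   # flat distribution over radius w0
    dx = x[1] - x[0]
    return I.sum() * dx * dx

w0 = 1.0
print(beam_power("gaussian", w0) / (np.pi * w0**2 / 2))  # ratio close to 1
print(beam_power("flat", w0) / (np.pi * w0**2))          # ratio close to 1
```

The factor of two between the two integrals is why conversion-efficiency formulas quoted for flat-top beams must be corrected when applied to Gaussian beams.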
+
+$$ \rho(T) = \rho_i(1 + BT^2) + AT^2. \quad (1.8.3.14) $$
+
+The term $\rho_i$ is the constant due to impurity scattering. There is also the term $\rho_i BT^2$, which is proportional to the impurity resistivity. This term is due to the Koshino–Taylor effect (Koshino, 1960; Taylor, 1964), which has been treated rigorously by Mahan & Wang (1989): it is the inelastic scattering of electrons by impurities. The impurity is part of the lattice, and phonons can be excited when the impurity scatters the electrons. The term $AT^2$ is due to electron-electron interactions. The Coulomb interaction between electrons is highly screened and makes only a small contribution to $A$; the largest contribution to $A$ is caused by phonons: MacDonald *et al.* (1981) showed that electrons can interact by exchanging phonons. There are also terms due to boundary scattering, which is important in thin films: see Bruls *et al.* (1985).
+
+Note that (1.8.3.14) has no term from phonons of $O(T^5)$. Such a term is lacking in simple metals, contrary to the assertion in most textbooks. Its absence is due to *phonon drag*. For a review and explanation of this behaviour, see Wiser (1984). The $T^5$ term is found in the noble metals, where phonon drag is less important owing to the complexities of the Fermi surface.
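Equation (1.8.3.14) is straightforward to evaluate; the sketch below uses illustrative parameter values (not measured data) and shows that only the residual impurity term survives at $T = 0$:

```python
def resistivity(T, rho_i, A, B):
    """Low-temperature resistivity of a simple metal, equation (1.8.3.14):
    impurity term rho_i, Koshino-Taylor term rho_i*B*T^2 and an
    electron-electron term A*T^2.  No T^5 phonon term appears, owing to
    phonon drag.  All parameter values used here are illustrative."""
    return rho_i * (1.0 + B * T**2) + A * T**2

# At T = 0 only the residual (impurity) resistivity remains:
print(resistivity(0.0, rho_i=1.0, A=1e-5, B=1e-6))   # 1.0
print(resistivity(10.0, rho_i=1.0, A=1e-5, B=1e-6))  # both T^2 terms contribute
```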
+
+## 1.8.3.2. Metal alloys
+
+Alloys are solids composed of a mixture of two or more elements that do not form a stoichiometric compound. An example is Cu$_x$Ni$_{1-x}$, in which $x$ can have any value. For small values of $x$, or of $(1-x)$, the atoms of one element just serve as impurities in the other element. This results in the type of behaviour described above. However, in the range $0.2 < x < 0.8$, a different type of resistivity is found. This was first summarized by Mooij (1973), who found a remarkable range of behaviours. He measured the resistivity of hundreds of alloys and also surveyed the published literature for additional results. He represented the resistivity at $T = 300$ K by two values: the resistivity itself, $\rho(T = 300)$, and its logarithmic derivative, $\alpha = \mathrm{d}\ln(\rho)/\mathrm{d}T$. He produced the graph shown in Fig. 1.8.3.2, where these two values are plotted against each other. Each point is one sample as represented by these two numbers. He found that all of the results fit within a band of values, in which larger values of $\rho(T = 300)$ are accompanied by negative values of $\alpha$. Alloys with very high values of resistivity generally have a resistivity $\rho(T)$ that decreases with increasing temperature. The region where $\alpha = 0$ corresponds to a resistivity of $\rho^* = 150~\mu\Omega$ cm, which appears to be a fixed point. As the temperature is increased, the resistivities of alloys with $\rho > \rho^*$ decrease towards this value, while the resistivities of alloys with $\rho < \rho^*$ increase towards it.
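Mooij's descriptor $\alpha = \mathrm{d}\ln(\rho)/\mathrm{d}T$ can be estimated from sampled $\rho(T)$ data by finite differences; the data below are synthetic, chosen only to illustrate the sign convention for an alloy with $\rho > \rho^*$:

```python
import numpy as np

def mooij_alpha(T, rho):
    """Logarithmic temperature derivative alpha = d ln(rho)/dT,
    estimated by finite differences from sampled rho(T) data."""
    return np.gradient(np.log(rho), T)

# Synthetic (not measured) data: an alloy with rho above the fixed point
# rho* ~ 150 micro-ohm cm whose resistivity decreases with temperature
# has negative alpha, as on Mooij's plot.
T = np.linspace(250.0, 350.0, 101)
rho = 200.0 - 0.01 * (T - 300.0)        # micro-ohm cm, slowly decreasing
alpha = mooij_alpha(T, rho)
print(alpha[50] < 0)                    # negative temperature coefficient
```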
+
+Fig. 1.8.3.2. The temperature coefficient of resistance versus resistivity for alloys according to Mooij (1973). Data are shown for bulk alloys (+), thin films (●) and amorphous alloys (×).
+
+Mooij's observations are obviously important, but the reason for this behaviour is not certain. Several different explanations have been proposed and all are plausible: see Jonson & Girvin (1979), Allen & Chakraborty (1981) or Tsuei (1986).
+
+Recently, another group of alloys has been found; they are called *bad metals*. The ruthenates (see Allen *et al.*, 1996; Klein *et al.*, 1996) have a resistivity $\rho > \rho^*$ that increases at high temperatures. Their values are outliers on Mooij's plot.
+
+## 1.8.3.3. Semiconductors
+
+The resistivity of semiconductors varies from sample to sample, even of the same material. The conductivity can be written as $\sigma = n_0 e\mu$, where $e$ is the charge on the electron, $\mu = e\tau/m^*$ is the mobility and $n_0$ is the density of conducting particles (electrons or holes). It is the density of particles $n_0$ that varies from sample to sample. It depends upon the impurity content of the semiconductor as well as upon temperature. Since no two samples have exactly the same number of impurities, they do not have the same values of $n_0$. In semiconductors and insulators, the conducting particles are extrinsic – they come from defects, impurities or thermal excitation – in contrast to metals, where the density of the conducting electrons is usually an intrinsic property.
+
+In semiconductors, instead of talking about the conductivity, the more fundamental transport quantity (Rode, 1975) is the mobility $\mu$. It is the same for each sample at high temperature if the density of impurities and defects is low. There is an intrinsic mobility, which can be calculated assuming there are no impurities and can be measured in samples with a very low density of impurities. We shall discuss the intrinsic mobility first.
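A minimal sketch of these relations, with assumed (not measured) values of the scattering time and effective mass; the chosen numbers happen to give a mobility of the right order for electrons in silicon at room temperature:

```python
def mobility(tau, m_eff):
    """Drift mobility mu = e*tau/m* in SI units
    (tau in s, m_eff in kg, mu in m^2 V^-1 s^-1)."""
    e = 1.602176634e-19   # elementary charge, C
    return e * tau / m_eff

def conductivity(n0, tau, m_eff):
    """sigma = n0*e*mu; n0 is the density of conducting particles,
    the quantity that varies from sample to sample."""
    e = 1.602176634e-19
    return n0 * e * mobility(tau, m_eff)

# Assumed values: tau = 0.1 ps and an effective mass of 0.26 electron
# masses give mu ~ 0.07 m^2/Vs, i.e. roughly 700 cm^2/Vs.
m_e = 9.1093837015e-31   # electron mass, kg
print(mobility(1e-13, 0.26 * m_e))
```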
+
+Fig. 1.8.3.3 shows the intrinsic mobility of electrons in silicon, from Rode (1972), as a function of temperature. The mobility generally decreases with increasing temperature. This behaviour is found in all common semiconductors. The mobility also decreases with an increasing concentration of impurities: see Jacoboni et al. (1977).
+
+The intrinsic mobility of semiconductors is due to the scattering of electrons and holes by phonons. The phonons come in various branches called TA, LA, TO and LO, where T is transverse, L is longitudinal, A is acoustic and O is optical. At long wavelengths, the acoustic modes are just the sound waves, which
+
+Fig. 1.8.3.3. The intrinsic mobility of electrons in silicon. Solid line: theory; points: experimental. After Rode (1972).
+
+# 1.9. Atomic displacement parameters
+
+BY W. F. KUHS
+
+## 1.9.1. Introduction
+
+Atomic thermal motion and positional disorder are at the origin of a systematic intensity reduction of Bragg reflections as a function of the scattering vector **Q**. The intensity reduction is described by the well known Debye-Waller factor (DWF); the DWF may be of purely thermal origin (thermal DWF or temperature factor) or it may contain contributions of static atomic disorder (static DWF). As atoms of chemically or isotopically different elements behave differently, the individual atomic contributions to the global DWF (describing the weakening of Bragg intensities) vary. Formally, one may split the global DWF into the individual atomic contributions. Crystallographic experiments usually measure the global weakening of Bragg intensities, and the individual contributions have to be assessed by adjusting individual atomic parameters in a least-squares refinement.
+
+The theory of lattice dynamics (see e.g. Willis & Pryor, 1975) shows that the atomic thermal DWF $T_α$ is given by an exponential of the form
+
+$$T_{\alpha}(\mathbf{Q}) = \langle \exp(i\mathbf{Q}\mathbf{u}_{\alpha}) \rangle, \quad (1.9.1.1)$$
+
+where $\mathbf{u}_α$ are the individual atomic displacement vectors and the brackets symbolize the thermodynamic (time-space) average over all contributions $\mathbf{u}_α$. In the harmonic (Gaussian) approximation, (1.9.1.1) reduces to
+
+$$T_{\alpha}(\mathbf{Q}) = \exp[-(1/2)\langle(\mathbf{Q}\mathbf{u}_{\alpha})^2\rangle]. \quad (1.9.1.2)$$
+
+The thermodynamically averaged atomic mean-square displacements (of thermal origin) are given as $U^{ij} = \langle u^i u^j \rangle$, i.e. they are the thermodynamic average of the product of the displacements along the $i$ and $j$ coordinate directions. Thus (1.9.1.2) may be expressed with $\mathbf{Q} = 2\pi h_i \mathbf{a}^i$ in a form more familiar to the crystallographer as
+
+$$T_{\alpha}(\mathbf{h}) = \exp(-2\pi^2 h_i |\mathbf{a}^i| h_j |\mathbf{a}^j| U_{\alpha}^{ij}), \quad (1.9.1.3)$$
+
+where $h_i$ are the covariant Miller indices, $\mathbf{a}^i$ are the reciprocal-cell basis vectors and $1 \le i, j \le 3$. Here and in the following, tensor notation is employed; implicit summation over repeated indices is assumed unless stated otherwise. For computational convenience one often writes
+
+$$T_{\alpha}(\mathbf{h}) = \exp(-h_i h_j \beta_{\alpha}^{ij}) \quad (1.9.1.4)$$
+
+with $\beta_{\alpha}^{ij} = 2\pi^2 |\mathbf{a}^i||\mathbf{a}^j|U_{\alpha}^{ij}$ (no summation). Both **h** and **β** are dimensionless tensorial quantities; **h** transforms as a covariant tensor of rank 1, **β** as a contravariant tensor of rank 2 (for details of the mathematical notion of a tensor, see Chapter 1.1).
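Equations (1.9.1.3) and (1.9.1.4) translate directly into code; the cell parameters and $U^{ij}$ values below are assumed for illustration only:

```python
import numpy as np

def debye_waller(h, astar_lengths, U):
    """Thermal Debye-Waller factor of equation (1.9.1.3):
    T(h) = exp(-2*pi^2 * h_i |a*_i| h_j |a*_j| U^ij).

    h            : Miller indices, shape (3,)
    astar_lengths: moduli of the reciprocal-cell basis vectors, shape (3,)
    U            : mean-square displacement tensor U^ij, shape (3, 3),
                   in units consistent with the reciprocal-cell lengths
    """
    h = np.asarray(h, dtype=float)
    a = np.asarray(astar_lengths, dtype=float)
    beta = 2.0 * np.pi**2 * np.outer(a, a) * U   # beta^ij of eq. (1.9.1.4)
    return float(np.exp(-h @ beta @ h))

# Illustrative isotropic case (values assumed): U^ij = U_iso * delta_ij
U_iso = 0.01                  # Angstrom^2
a_star = [0.2, 0.2, 0.2]      # Angstrom^-1, an artificial cubic cell
print(debye_waller([1, 0, 0], a_star, U_iso * np.eye(3)))  # slightly below 1
```

Because the exponent grows quadratically with the Miller indices, high-order reflections are damped most strongly, which is the systematic intensity reduction described above.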
+
+Similar formulations are found for the static atomic DWF $S_α$, where the average of the atomic static displacements $\Delta\mathbf{u}_α$ may also be approximated [though with weaker theoretical justification, see Kuhs (1992)] by a Gaussian distribution:
+
+$$S_{\alpha}(\mathbf{Q}) = \exp[-(1/2)\langle(\mathbf{Q}\Delta\mathbf{u}_{\alpha})^2\rangle]. \quad (1.9.1.5)$$
+
+As in equation (1.9.1.3), the static atomic DWF may be formulated with the mean-square disorder displacements $\Delta U^{ij} = \langle \Delta u^i \Delta u^j \rangle$ as
+
+$$S_{\alpha}(\mathbf{h}) = \exp(-2\pi^2 h_i |\mathbf{a}^i| h_j |\mathbf{a}^j| \Delta U_{\alpha}^{ij}). \quad (1.9.1.6)$$
+
+It is usually difficult to separate thermal and static contributions, and it is often wise to use the sum of both and call them simply (mean-square) atomic displacements. A separation may however be achieved by a temperature-dependent study of atomic displacements. A harmonic diagonal tensor component of purely thermal origin extrapolates linearly to zero at 0 K; zero-point motion causes a deviation from this linear behaviour at low temperatures, but an extrapolation from higher temperatures (where the contribution from zero-point motion becomes negligibly small) still yields a zero intercept. Any positive intercept in such extrapolations is then due to a (temperature-independent) static contribution to the total atomic displacements. Care has to be taken in such extrapolations, as pronounced anharmonicity (frequently encountered at temperatures higher than the Debye temperature) will change the slope, thus invalidating the linear extrapolation (see e.g. Willis & Pryor, 1975). Owing to the difficulty in separating thermal and static displacements in a standard crystallographic structure analysis, a subcommittee of the IUCr Commission on Crystallographic Nomenclature has recommended the use of the term *atomic displacement parameters* (ADPs) for $U^{ij}$ and $\beta^{ij}$ (Trueblood et al., 1996).
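The extrapolation argument can be sketched with a linear fit; the data below are synthetic, with a built-in static contribution that the zero-kelvin intercept recovers (variable names are ours):

```python
import numpy as np

def static_intercept(T, U_total):
    """Fit U(T) = U_static + c*T over a high-temperature range and return
    the extrapolated intercept at T = 0 K; a positive intercept signals a
    temperature-independent static contribution.  Assumes the harmonic
    regime (linear in T) and temperatures well above the zero-point
    region, as discussed in the text."""
    c, intercept = np.polyfit(T, U_total, 1)
    return intercept

# Synthetic (not measured) data: a thermal part linear in T plus a static
# disorder contribution of 0.002 Angstrom^2.
T = np.linspace(150.0, 300.0, 16)
U = 0.002 + 4e-5 * T
print(round(static_intercept(T, U), 6))   # recovers ~0.002
```

In a real analysis, anharmonicity above the Debye temperature would bend $U(T)$ and invalidate this linear fit, exactly as cautioned above.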
+
+## 1.9.2. The atomic displacement parameters (ADPs)
+
+One notes that in the Gaussian approximation, the mean-square atomic displacements (composed of thermal and static contributions) are fully described by six coefficients $\beta^{ij}$, which transform on a change of the direct-lattice base (according to $\mathbf{a}'_k = A_{ki}\mathbf{a}_i$) as
+
+$$\beta'^{kl} = A_{ki}A_{lj}\beta^{ij}. \quad (1.9.2.1)$$
+
+This is the transformation law of a tensor (see Section 1.1.3.2); the mean-square atomic displacements are thus tensorial properties of an atom $\alpha$. As the tensor is contravariant and in general is described in a (non-Cartesian) crystallographic basis system, its indices are written as superscripts. It is convenient for comparison purposes to quote the dimensionless coefficients $\beta^{ij}$ as their dimensioned representations $U^{ij}$.
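The transformation law (1.9.2.1) is a one-line tensor contraction; the basis-change matrix and $\beta$ values below are arbitrary illustrations:

```python
import numpy as np

def transform_beta(A, beta):
    """Transform the contravariant ADP tensor under a change of the
    direct-lattice basis a'_k = A_ki a_i, equation (1.9.2.1):
    beta'^kl = A_ki A_lj beta^ij."""
    return np.einsum('ki,lj,ij->kl', A, A, beta)

# Sanity checks: the identity leaves beta unchanged, and transforming a
# symmetric tensor yields a symmetric tensor again.
beta = np.array([[2.0, 0.3, 0.0],
                 [0.3, 1.5, 0.1],
                 [0.0, 0.1, 1.0]])
print(np.allclose(transform_beta(np.eye(3), beta), beta))   # True
A = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0]])
b2 = transform_beta(A, beta)
print(np.allclose(b2, b2.T))                                # True
```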
+
+In the harmonic approximation, the atomic displacements are fully described by the fully symmetric second-order tensor given in (1.9.2.1). Anharmonicity and disorder, however, cause deviations from a Gaussian distribution of the atomic displacements around the atomic position. In fact, anharmonicity in the thermal motion also provokes a shift of the atomic position as a function of temperature. A generalized description of atomic displacements therefore also involves first-, third-, fourth- and even higher-order displacement terms. These terms are defined by a moment-generating function $M(\mathbf{Q})$ which expresses $\langle \exp(i\mathbf{Q}\mathbf{u}_\alpha) \rangle$ in terms of an infinite number of moments; for a Gaussian distribution of displacement vectors, all moments of order $> 2$ are identically equal to zero. Thus
+
+$$M(Q) = \langle \exp(iQ\mathbf{u}_\alpha) \rangle = \sum_{N=0}^{\infty} \frac{i^N}{N!} \langle (\mathbf{Q}\mathbf{u}_\alpha)^N \rangle . \quad (1.9.2.2)$$
+
+The moments $\langle (\mathbf{Q}\mathbf{u}_\alpha)^N \rangle$ of order $N$ may be expressed in terms of cumulants $\langle (\mathbf{Q}\mathbf{u}_\alpha)^N \rangle_{\text{cum}}$ by the identity
+
+$$\sum_{N=0}^{\infty} \frac{i^N}{N!} \langle (\mathbf{Q}\mathbf{u}_{\alpha})^N \rangle = \exp\left[\sum_{N=1}^{\infty} \frac{i^N}{N!} \langle (\mathbf{Q}\mathbf{u}_{\alpha})^N \rangle_{\text{cum}}\right]. \quad (1.9.2.3)$$
+
+Fig. 1.9.4.1. A selection of graphical representations of density modulations due to higher-order terms in the Gram-Charlier series expansion of a Gaussian atomic probability density function. All figures are drawn on a common scale and have a common orientation. All terms within any given order of expansion are numerically identical and refer to the same underlying isotropic second-order term; the terms of different orders of expansion differ by one order of magnitude, but refer again to the same underlying isotropic second-order term. The orthonormal crystallographic axes are oriented as follows: x oblique out of the plane of the paper towards the observer, y in the plane of the paper and to the right, and z in the plane of the paper and upwards. All surfaces are scaled to 1% of the absolute value of the maximum modulation within each density distribution. Positive modulations (i.e. an increase of density) are shown in red, negative modulations in blue. The source of illumination is located approximately on the [111] axis. The following graphs are shown (with typical point groups for specific cases given in parentheses). Third-order terms: (a) $b^{222}$; (b) $b^{223}$; (c) $b^{113} = -b^{223}$ (point group 4); (d) $b^{123}$ (point group $\bar{4}3m$). Fourth-order terms: (e) $b^{2222}$; (f) $b^{1111} = b^{2222}$; (g) $b^{1111} = b^{2222} = b^{3333}$ (point group $m\bar{3}m$); (h) $b^{1222}$; (i) $b^{1112} = b^{1222}$; (j) $b^{1122}$; (k) $b^{1133} = b^{2233}$; (l) $b^{1122} = b^{1133} = b^{2233}$ (point group $m\bar{3}m$).
+
+# 1.10. Tensors in quasiperiodic structures
+
+BY T. JANSSEN
+
+## 1.10.1. Quasiperiodic structures
+
+### 1.10.1.1. Introduction
+
+Many materials are known which show a well ordered state without lattice translation symmetry, often in a restricted temperature or composition range. This can be seen in the diffraction pattern from the appearance of sharp spots that cannot be labelled in the usual way with three integer indices. The widths of the peaks are comparable with those of perfect lattice periodic crystals, and this is a sign that the coherence length is comparable as well.
+
+A typical example is $K_2SeO_4$, which has a normal lattice periodic structure above 128 K with space group *Pcmn*, but below this temperature shows satellites at positions $\gamma c^*$, where $\gamma$ is an irrational number, which in addition depends on temperature. These satellites cannot be labelled with integer indices with respect to the reciprocal basis **a**\*, **b**\*, **c**\* of the structure above the transition temperature. Therefore, the corresponding structure cannot be lattice periodic.
+
+The diffraction pattern of $K_2SeO_4$ arises because the original lattice periodic *basic structure* is deformed below 128 K. The atoms are displaced from their positions in the basic structure such that the displacement itself is again periodic, but with a period that is *incommensurate* with respect to the lattice of the basic structure.
+
+Such a *modulated structure* is just a special case of a more general type of structure. These structures are characterized by the fact that the diffraction pattern has sharp Bragg peaks at positions **H** that are linear combinations of a finite number of basic vectors:
+
+$$ \mathbf{H} = \sum_{i=1}^{n} h_i \mathbf{a}_i^* \quad (\text{integer } h_i). \qquad (1.10.1.1) $$
+
+Structures that have this property are called *quasiperiodic*. The minimal number $n$ of basis vectors such that all $h_i$ are integers is called the *rank* of the structure. If the rank is three and the vectors $\mathbf{a}_i^*$ do not all fall on a line or in a plane, the structure is just lattice periodic. Lattice periodic structures form special cases of quasiperiodic structures. The collection of vectors **H** forms the *Fourier module* of the structure. For rank three, this is just the *reciprocal lattice* of the lattice periodic structure.
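The labelling of equation (1.10.1.1) can be illustrated with a brute-force search for the integer indices $h_i$; the rank-4 basis below is an assumed toy model (a cubic lattice plus a modulation wavevector $\gamma\mathbf{c}^*$ with irrational $\gamma$), not data for K₂SeO₄, and the search is a sketch rather than an efficient indexing scheme:

```python
from itertools import product

import numpy as np

def index_peak(H, basis, max_index=5, tol=1e-6):
    """Search for integer coefficients h_i with H = sum_i h_i a*_i,
    equation (1.10.1.1), for a rank-n Fourier module spanned by the n
    rows of 'basis'.  Brute force over |h_i| <= max_index."""
    basis = np.asarray(basis, dtype=float)
    n = len(basis)
    for h in product(range(-max_index, max_index + 1), repeat=n):
        if np.allclose(np.asarray(h) @ basis, H, atol=tol):
            return h
    return None

# Rank-4 toy module: three reciprocal-lattice vectors plus gamma*c*
gamma = (5**0.5 - 1) / 2                       # irrational, for illustration
basis = [[1, 0, 0], [0, 1, 0], [0, 0, 1], [0, 0, gamma]]
H = np.array([1.0, 0.0, 2.0 + 3 * gamma])      # a satellite of (1 0 2)
print(index_peak(H, basis))                    # (1, 0, 2, 3)
```

Because $\gamma$ is irrational, the four integer indices of each peak are unique, even though the module vectors lie dense on the axis.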
+
+The definition given above results in some important practical difficulties. In the first place, it is not possible to show experimentally that a wavevector has irrational components instead of rational ones, because an irrational number can be approximated arbitrarily well by a rational number. Very often the wavevector of the satellite changes with temperature. It has been reported that in some compounds the variation shows plateaux, but even when the change seems to be continuous and smooth one cannot be sure about the irrationality. On the other hand, if the wavevector jumps from one rational position to another, the structure would always be lattice periodic, but the unit cell of this structure would vary wildly with temperature. This means that, if one wishes to describe the incommensurate phases in a unified fashion, it is more convenient to treat the wavevector as generically irrational. This experimental situation is by no means dramatic. It is similar to the way in which one can never be sure that the angles between the basis vectors of an orthorhombic lattice are really 90°, although this is a concept that no-one has problems understanding.
+
+A second problem stems from the fact that the wavevectors of the Fourier module are dense. For example, in the case of $K_2SeO_4$ the linear combinations of $\mathbf{c}^*$ and $\gamma \mathbf{c}^*$ cover the $\mathbf{c}^*$ axis uniformly. To pick out a basis here could be problematic, but the intensity of the spots is usually such that choosing a basis is not a problem. In fact, one only observes peaks with an intensity above a certain threshold, and these form a discrete set. At most, the occurrence of scale symmetry may make the choice less obvious.
+
+### 1.10.1.2. Types of quasiperiodic crystals
+
+One may distinguish various families of quasiperiodic systems. [Sometimes these are also called incommensurate systems (Janssen & Janner, 1987).] It is not a strict classification, because one may have intermediate cases belonging to more than one family as well. Here we shall consider a number of pure cases.
+
+An *incommensurately modulated structure* or *incommensurate crystal (IC) phase* is a periodically modified structure that without the modification would be lattice periodic. Hence there is a *basic structure* with space-group symmetry. The periodicity of the modification should be incommensurate with respect to the basic structure. The position of the $j$th atom in the unit cell with origin at the lattice point $\mathbf{n}$ is $\mathbf{n} + \mathbf{r}_j$ ($j = 1, 2, \ldots, s$).
+
+For a *displacive modulation*, the positions of the atoms are shifted from a lattice periodic basic structure. A simple example is a structure that can be derived from the positions of the basic structure with a simple displacement wave. The positions of the atoms in the IC phase are then
+
+$$ \mathbf{n} + \mathbf{r}_j + \mathbf{f}_j(\mathbf{Q} \cdot \mathbf{n}) \quad [\mathbf{f}_j(x) = \mathbf{f}_j(x+1)]. \qquad (1.10.1.2) $$
+
+Here the *modulation wavevector Q* has irrational components with respect to the reciprocal lattice of the basic structure. One has
+
+$$ \mathbf{Q} = \alpha \mathbf{a}^* + \beta \mathbf{b}^* + \gamma \mathbf{c}^*, \qquad (1.10.1.3) $$
+
+where at least one of $\alpha$, $\beta$ or $\gamma$ is irrational. A simple example is the function $\mathbf{f}_j(x) = \mathbf{A}_j \cos(2\pi x + \varphi_j)$, where $\mathbf{A}_j$ is the *polarization vector* and $\varphi_j$ is the phase of the modulation. The diffraction pattern of the structure (1.10.1.2) shows spots at positions
+
+$$ \mathbf{H} = h_1 \mathbf{a}^* + h_2 \mathbf{b}^* + h_3 \mathbf{c}^* + h_4 \mathbf{Q}. \qquad (1.10.1.4) $$
+
+Therefore, the rank is four and $\mathbf{a}_4^* = \mathbf{Q}$. In a more general situation, the components of the atom positions in the IC phase are given by
+
+$$ n^\alpha + r_j^\alpha + \sum_m A_j^\alpha(\mathbf{Q}_m) \cos(2\pi \mathbf{Q}_m \cdot \mathbf{n} + \varphi_{jm\alpha}), \quad \alpha = x, y, z. \qquad (1.10.1.5) $$
+
+Here the vectors $\mathbf{Q}_m$ belong to the Fourier module of the structure. Then there are vectors $\mathbf{Q}_j$ such that any spot in the diffraction pattern can be written as
+
+$$ \mathbf{H} = \sum_{i=1}^{3} h_i \mathbf{a}_i^* + \sum_{j=1}^{d} h_{3+j} \mathbf{Q}_j \qquad (1.10.1.6) $$
+
+and the rank is 3+d. The peaks corresponding to the basic structure [the combinations of the three reciprocal-lattice vectors
+
+$$
+\mathbf{e}(\mathbf{q}, j) = \begin{pmatrix} \mathbf{e}_1(\mathbf{q}, j) \\ \vdots \\ \mathbf{e}_N(\mathbf{q}, j) \end{pmatrix} = \begin{pmatrix} e_1^x(\mathbf{q}, j) \\ e_1^y(\mathbf{q}, j) \\ e_1^z(\mathbf{q}, j) \\ \vdots \\ e_N^x(\mathbf{q}, j) \\ e_N^y(\mathbf{q}, j) \\ e_N^z(\mathbf{q}, j) \end{pmatrix} \quad (2.1.2.18)
+$$
+
+and simultaneously the 3 × 3 matrices $\mathbf{F}_{\kappa\kappa'}(\mathbf{q})$ to a 3N × 3N matrix $\mathbf{F}(\mathbf{q})$
+
+$$
+\mathbf{F}(\mathbf{q}) =
+\begin{pmatrix}
+F_{11}^{xx} & F_{11}^{xy} & F_{11}^{xz} & \cdots & F_{1N}^{xx} & F_{1N}^{xy} & F_{1N}^{xz} \\
+F_{11}^{yx} & F_{11}^{yy} & F_{11}^{yz} & \cdots & F_{1N}^{yx} & F_{1N}^{yy} & F_{1N}^{yz} \\
+F_{11}^{zx} & F_{11}^{zy} & F_{11}^{zz} & \cdots & F_{1N}^{zx} & F_{1N}^{zy} & F_{1N}^{zz} \\
+\vdots & \vdots & \vdots & \ddots & \vdots & \vdots & \vdots \\
+F_{N1}^{xx} & F_{N1}^{xy} & F_{N1}^{xz} & \cdots & F_{NN}^{xx} & F_{NN}^{xy} & F_{NN}^{xz} \\
+F_{N1}^{yx} & F_{N1}^{yy} & F_{N1}^{yz} & \cdots & F_{NN}^{yx} & F_{NN}^{yy} & F_{NN}^{yz} \\
+F_{N1}^{zx} & F_{N1}^{zy} & F_{N1}^{zz} & \cdots & F_{NN}^{zx} & F_{NN}^{zy} & F_{NN}^{zz}
+\end{pmatrix},
+\quad (2.1.2.19)
+$$
+
+equation (2.1.2.17) can be written in matrix notation and takes
+the simple form
+
+$$
+\omega_{q,j}^2 \mathbf{e}(q, j) = [\mathbf{M} \mathbf{F}(q) \mathbf{M}] \mathbf{e}(q, j) = \mathbf{D}(q) \mathbf{e}(q, j), \quad (2.1.2.20)
+$$
+
+where the diagonal matrix
+
+$$
+\mathbf{M} =
+\begin{pmatrix}
+\frac{1}{\sqrt{m_1}} & & & & & \\
+& \frac{1}{\sqrt{m_1}} & & & & \\
+& & \frac{1}{\sqrt{m_1}} & & & \\
+& & & \ddots & & \\
+& & & & \frac{1}{\sqrt{m_N}} & \\
+& & & & & \frac{1}{\sqrt{m_N}}
+\end{pmatrix}
+\tag{2.1.2.21}
+$$
+
+contains the masses of all atoms. The 3N × 3N matrix
+
+$$
+\mathbf{D}(\mathbf{q}) = \mathbf{M}\mathbf{F}(\mathbf{q})\mathbf{M}
+\quad (2.1.2.22)
+$$
+
+is called the dynamical matrix. It contains all the information
+about the dynamical behaviour of the crystal and can be calcu-
+lated on the basis of specific models for interatomic interactions.
+In analogy to the 3 × 3 matrices $\mathbf{F}_{\kappa\kappa'}(\mathbf{q})$, we introduce the submatrices of the dynamical matrix:
+
+$$
+\mathbf{D}_{\kappa\kappa'}(\mathbf{q}) = \frac{1}{\sqrt{m_\kappa m_{\kappa'}}} \mathbf{F}_{\kappa\kappa'}(\mathbf{q}). \quad (2.1.2.22a)
+$$
+
+Owing to the symmetry of the force-constant matrix,
+
+$$
+V_{\alpha\beta}(\kappa l, \kappa' l') = V_{\beta\alpha}(\kappa' l', \kappa l), \quad (2.1.2.23)
+$$
+
+the dynamical matrix is Hermitian:¹
+
+$$
+\mathbf{D}^T(\mathbf{q}) = \mathbf{D}^*(\mathbf{q}) = \mathbf{D}(-\mathbf{q}) \quad (2.1.2.24)
+$$
+
+or more specifically
+
+$$
+D_{\kappa\kappa'}^{\alpha\beta}(\mathbf{q}) = D_{\kappa'\kappa}^{\beta\alpha\,*}(\mathbf{q}) = D_{\kappa'\kappa}^{\beta\alpha}(-\mathbf{q}). \quad (2.1.2.24a)
+$$
+
+Fig. 2.1.2.3. Phonon dispersion of b.c.c. hafnium for wavevectors along the main symmetry directions of the cubic structure. The symbols represent experimental data obtained by inelastic neutron scattering and the full lines are the results of the model. From Trampenau *et al.* (1991). Copyright (1991) by the American Physical Society.
+
+Obviously, the squares of the vibrational frequencies $\omega_{\mathbf{q},j}$ and the polarization vectors $\mathbf{e}(\mathbf{q}, j)$ are eigenvalues and corresponding eigenvectors of the dynamical matrix. As a direct consequence of equation (2.1.2.20), the eigenvalues $\omega_{\mathbf{q},j}^2$ are real quantities and the following relations hold:
+
+$$
+\omega_{\mathbf{q},j}^2 = \omega_{-\mathbf{q},j}^2, \quad (2.1.2.25)
+$$
+
+$$
+\mathbf{e}^*(\mathbf{q}, j) = \mathbf{e}(-\mathbf{q}, j). \quad (2.1.2.26)
+$$
+
+Moreover, the eigenvectors are mutually orthogonal and can be
+chosen to be normalized.
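+
+As a small numerical illustration of (2.1.2.20) — a sketch only, using a hypothetical diatomic chain with illustrative force constant and masses, not a model from the text — the dynamical matrix $\mathbf{D}(\mathbf{q}) = \mathbf{M}\mathbf{F}(\mathbf{q})\mathbf{M}$ can be built and diagonalized directly; because $\mathbf{D}$ is Hermitian, a Hermitian eigensolver returns real eigenvalues $\omega_{\mathbf{q},j}^2$ and mutually orthonormal eigenvectors:

```python
import numpy as np

# Toy diatomic chain (one Cartesian direction kept for brevity).
# K, m1, m2, a are illustrative values, not taken from the text.
K, m1, m2, a = 1.0, 1.0, 2.0, 1.0

def dynamical_matrix(q):
    """D(q) = M F(q) M with M = diag(1/sqrt(m_kappa))."""
    F = np.array([[2 * K, -K * (1 + np.exp(-1j * q * a))],
                  [-K * (1 + np.exp(1j * q * a)), 2 * K]])
    M = np.diag([1 / np.sqrt(m1), 1 / np.sqrt(m2)])
    return M @ F @ M

D = dynamical_matrix(0.3 * np.pi / a)
assert np.allclose(D, D.conj().T)              # Hermitian, cf. (2.1.2.24)

# eigh: real eigenvalues omega^2 (ascending) and orthonormal eigenvectors
omega2, e = np.linalg.eigh(D)
assert np.all(omega2 >= -1e-12)                # stable lattice
assert np.allclose(e.conj().T @ e, np.eye(2))  # orthonormality
omega = np.sqrt(np.abs(omega2))                # acoustic and optic branch
```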
+
+### 2.1.2.4. Eigenvalues and phonon dispersion, acoustic modes
+
+The wavevector dependence of the vibrational frequencies is called phonon dispersion. For each wavevector **q** there are 3N fundamental frequencies, yielding 3N phonon branches when $\omega_{\mathbf{q},j}$ is plotted versus **q**. In most cases, the phonon dispersion is displayed for wavevectors along high-symmetry directions. These dispersion curves are, however, only special projections of the dispersion hypersurface in the four-dimensional **q**–ω space. As a simple example, the phonon dispersion of b.c.c. hafnium is displayed in Fig. 2.1.2.3. The wavevectors are restricted to the first Brillouin zone (see Section 2.1.3.1), and the dispersion curves for different directions of the wavevector are combined in a single diagram, making use of the fact that different high-symmetry directions meet at the Brillouin-zone boundary. Note that in Fig. 2.1.2.3 the moduli of the wavevectors are scaled by the Brillouin-zone boundary values and represented by the reduced coordinate ξ. Owing to the simple b.c.c. structure of hafnium with one atom per primitive cell, there are only three phonon branches. Moreover, for all wavevectors along the directions [00ξ] and [ξξξ], two branches exhibit the same frequencies – they are said to be degenerate. Hence in the corresponding parts of Fig. 2.1.2.3 only two branches can be distinguished.
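+
+The degeneracy of two branches along a high-symmetry direction can be made concrete with a toy calculation. In the sketch below (a hypothetical cubic nearest-neighbour model with illustrative constants `K_L`, `K_T`, `m`, `a` — not fitted to hafnium), the dynamical matrix for $\mathbf{q}$ along $[00\xi]$ is diagonal with two equal transverse entries, so only two distinct frequencies appear:

```python
import numpy as np

# Hypothetical cubic model: for q along [0, 0, xi] the 3x3 dynamical
# matrix is diagonal with two equal transverse entries, so two of the
# three branches are degenerate. K_L, K_T, m, a are illustrative.
K_L, K_T, m, a = 2.0, 1.0, 1.0, 1.0

def branches(xi):
    """Frequencies (ascending) at reduced coordinate xi in [0, 1]."""
    q = xi * np.pi / a
    d_T = 4 * K_T * np.sin(q * a / 2)**2 / m  # transverse (doubly degenerate)
    d_L = 4 * K_L * np.sin(q * a / 2)**2 / m  # longitudinal
    return np.sqrt(np.linalg.eigvalsh(np.diag([d_T, d_T, d_L])))

w = branches(0.5)
assert np.isclose(w[0], w[1])   # two degenerate transverse branches
assert w[2] > w[1]              # distinct longitudinal branch
```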
+
+Whereas in this simple example the different branches can be
+separated quite easily, this is no longer true for more complicated
+crystal structures. For illustration, the phonon dispersion of the
+high-$T_c$ superconductor Nd$_2$CuO$_4$ is shown in Fig. 2.1.2.4 for the
+main symmetry directions of the tetragonal structure (space
+group I4/mmm, seven atoms per primitive cell). Note that in
+many publications on lattice dynamics the frequency $\nu = \omega/2\pi$ is
+used rather than the angular frequency $\omega$.
+
+The 21 phonon branches of Nd₂CuO₄ with their more
+complicated dispersion reflect the details of the interatomic
+interactions between all atoms of the structure. The phonon
+frequencies $\nu$ cover a range from 0 to 18 THz. In crystals with
+
+¹ The superscripts T and * are used to denote the transposed and the complex conjugate matrix, respectively.
+---PAGE_BREAK---
+
+## 2.2.5. The free-electron (Sommerfeld) model
+
+The free-electron model corresponds to the special case of taking a constant potential in the Schrödinger equation (2.2.4.1). The physical picture relies on the assumption that the (metallic) valence electrons can move freely in the field of the positively charged nuclei and the tightly bound core electrons. Each valence electron moves in a potential which is nearly constant due to the screening of the remaining valence electrons. This situation can be idealized by assuming the potential to be constant [$V(\mathbf{r}) = 0$]. This simple picture represents a crude model for simple metals but has its importance mainly because the corresponding equation can be solved analytically. By rewriting equation (2.2.4.1), we have
+
+$$ \nabla^2 \psi_k(\mathbf{r}) = -\frac{2mE}{\hbar^2} \psi_k(\mathbf{r}) = -|\mathbf{k}|^2 \psi_k(\mathbf{r}), \quad (2.2.5.1) $$
+
+where in the last step the constants are abbreviated (for later convenience) by $|\mathbf{k}|^2$. The solutions of this equation are plane waves (PWs)
+
+$$ \psi_k(\mathbf{r}) = C \exp(i\mathbf{k} \cdot \mathbf{r}), \quad (2.2.5.2) $$
+
+where $C$ is a normalization constant which is defined from the integral over one unit cell with volume $\Omega$. The PWs satisfy the Bloch condition and can be written (using the bra–ket notation) as
+
+$$ |\mathbf{k}\rangle = \psi_{\mathbf{k}}(\mathbf{r}) = \Omega^{-1/2} \exp(i\mathbf{k} \cdot \mathbf{r}). \quad (2.2.5.3) $$
+
+From (2.2.5.1) we see that the corresponding energy (labelled by **k**) is given by
+
+$$ E_{\mathbf{k}} = \frac{\hbar^2}{2m} |\mathbf{k}|^2. \quad (2.2.5.4) $$
+
+In this context it is useful to consider the momentum of the electron, which classically is the vector **p** = *m***v**, where *m* and **v** are the mass and velocity, respectively. In quantum mechanics we must replace **p** by the corresponding operator **P**:
+
+$$ \mathbf{P}|\mathbf{k}\rangle = \frac{\hbar}{i} \frac{\partial}{\partial \mathbf{r}} |\mathbf{k}\rangle = \frac{\hbar}{i} i \mathbf{k} |\mathbf{k}\rangle = \hbar \mathbf{k} |\mathbf{k}\rangle. \quad (2.2.5.5) $$
+
+Thus a PW is an eigenfunction of the momentum operator with eigenvalue $\hbar\mathbf{k}$. Therefore the **k** vector is also called the *momentum* vector. Note that this is strictly true for a vanishing potential but is otherwise only approximately true (referred to as *pseudomomentum*).
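+
+The eigenvalue relation (2.2.5.5) is easy to check numerically. The sketch below (with $\hbar = 1$ and an arbitrary $k$) applies $(1/i)\,\mathrm{d}/\mathrm{d}x$ to a sampled plane wave by central finite differences and recovers the eigenvalue $k$ at interior points:

```python
import numpy as np

# Apply P = (hbar/i) d/dx (hbar = 1) to a sampled plane wave and
# recover the eigenvalue k; k and the grid are arbitrary choices.
k = 2.5
x = np.linspace(0.0, 10.0, 20001)
psi = np.exp(1j * k * x)

dpsi = np.gradient(psi, x)   # central differences in the interior
p_psi = dpsi / 1j            # (1/i) d/dx acting on psi

# away from the interval edges, P psi ~= k psi
ratio = p_psi[100:-100] / psi[100:-100]
assert np.allclose(ratio.real, k, atol=1e-4)
assert np.allclose(ratio.imag, 0.0, atol=1e-4)
```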
+
+Another feature of a PW is that its phase is constant in a plane perpendicular to the vector **k** (see Fig. 2.2.5.1). For this purpose, consider a periodic function in space and time,
+
+$$ \varphi_k(\mathbf{r}, t) = \exp[i(\mathbf{k} \cdot \mathbf{r} - \omega t)], \quad (2.2.5.6) $$
+
+which has a constant phase factor $\exp(-i\omega t)$ within such a plane. We can characterize the spatial part by **r** within this plane. Taking the nearest parallel plane (with vector **r'**) for which the same phase factors occur again but at a distance $\lambda$ away (with the unit vector **e** normal to the plane),
+
+$$ \mathbf{r}' = \mathbf{r} + \lambda \mathbf{e} = \mathbf{r} + \lambda \frac{\mathbf{k}}{|\mathbf{k}|}, \quad (2.2.5.7) $$
+
+then $\mathbf{k} \cdot \mathbf{r}'$ must differ from $\mathbf{k} \cdot \mathbf{r}$ by $2\pi$. This is easily obtained from (2.2.5.7) by multiplication with **k** leading to
+
+Fig. 2.2.5.1. Plane waves. The wavevector **k** and the unit vector **e** are normal to the two planes and the vectors **r** in plane 1 and **r'** in plane 2.
+
+$$ \mathbf{k} \cdot \mathbf{r}' = \mathbf{k} \cdot \mathbf{r} + \lambda \frac{|\mathbf{k}|^2}{|\mathbf{k}|} = \mathbf{k} \cdot \mathbf{r} + \lambda |\mathbf{k}| \quad (2.2.5.8) $$
+
+$$ \mathbf{k} \cdot \mathbf{r}' - \mathbf{k} \cdot \mathbf{r} = \lambda |\mathbf{k}| = 2\pi \quad (2.2.5.9) $$
+
+$$ \lambda = \frac{2\pi}{|\mathbf{k}|} \text{ or } |\mathbf{k}| = \frac{2\pi}{\lambda}. \quad (2.2.5.10) $$
+
+Consequently $\lambda$ is the wavelength and thus the **k** vector is called the *wavevector* or *propagation vector*.
+
+## 2.2.6. Space-group symmetry
+
+### 2.2.6.1. Representations and bases of the space group
+
+The effect of a space-group operation $\{p|\mathbf{w}\}$ on a Bloch function, labelled by **k**, is to transform it into a Bloch function that corresponds to the vector $p\mathbf{k}$,
+
+$$ \{p|\mathbf{w}\}\psi_{\mathbf{k}} = \psi_{p\mathbf{k}}, \quad (2.2.6.1) $$
+
+which can be proven by using the multiplication rule of Seitz operators (2.2.3.12) and the definition of a Bloch state (2.2.4.17). A special case is the inversion operator, which leads to
+
+$$ \{i|\mathbf{0}\}\psi_{\mathbf{k}} = \psi_{-\mathbf{k}}. \quad (2.2.6.2) $$
+
+The Bloch functions $\psi_k$ and $\psi_{pk}$, where $p$ is any operation of the point group $P$, belong to the same basis for a representation of the space group $G$.
+
+$$ \{\psi_{\mathbf{k}}\} = \{\psi_{p\mathbf{k}} \text{ for all } p \in P \text{ such that } p\mathbf{k} \in \text{BZ}\}. \quad (2.2.6.3) $$
+
+The same $p\mathbf{k}$ cannot appear in two different bases, thus the two bases $\psi_k$ and $\psi_{k'}$ are either identical or have no **k** in common.
+
+Irreducible representations of $T$ are labelled by the $N$ distinct **k** vectors in the BZ, which separate into disjoint bases of $G$ (with no **k** vector in common). If a **k** vector falls on the BZ edge, application of the point-group operation $p$ can lead to an equivalent **k**′ vector that differs from the original by **K** (a vector of the reciprocal lattice). The set of all mutually inequivalent **k** vectors $p\mathbf{k}$ ($p \in P$) defines the *star of the* **k** *vector* ($S_k$) (see also Section 1.2.3.3 of the present volume).
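+
+A minimal sketch of constructing the star of a **k** vector: apply each point-group operation and keep the mutually inequivalent images, treating two vectors as equivalent when they differ by a reciprocal-lattice vector **K**. For brevity only the four rotations about a fourfold axis are used here (an illustrative subgroup in reduced coordinates, not a full point group):

```python
import numpy as np

# Four rotations about a fourfold z axis, acting on reduced k vectors
# (reciprocal-lattice vectors K are integer triples in these units).
def rot_z(n):
    c, s = np.cos(n * np.pi / 2), np.sin(n * np.pi / 2)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

ops = [rot_z(n) for n in range(4)]

def star(k, ops):
    """Mutually inequivalent images p k, equivalence modulo K."""
    members = []
    for p in ops:
        pk = np.round(p @ k, 12)
        pk_red = tuple(np.round(pk % 1.0, 12))  # reduce into [0, 1)
        if pk_red not in members:
            members.append(pk_red)
    return members

# a general k vector has as many star members as group operations;
# a k vector on the rotation axis is left invariant
assert len(star(np.array([0.1, 0.2, 0.0]), ops)) == 4
assert len(star(np.array([0.0, 0.0, 0.3]), ops)) == 1
```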
+
+The set of all operations that leave a **k** vector invariant (or transform it into an equivalent **k** + **K**) forms the *group* $G_k$ of the **k** vector. Application of $q$, an element of $G_k$, to a Bloch function (Section 2.2.8) gives
+
+$$ q\psi_{\mathbf{k}}^{j}(\mathbf{r}) = \psi_{\mathbf{k}}^{j}(\mathbf{r}) \text{ for } q \in G_{\mathbf{k}}, \quad (2.2.6.4) $$
+---PAGE_BREAK---
+
+## 2. SYMMETRY ASPECTS OF EXCITATIONS
+
+Table 2.2.13.1. Picking rules for the local coordinate axes and the corresponding LM combinations ($\ell$mp) of non-cubic groups taken from Kurki-Suonio (1977)
+
+| Symmetry | Coordinate axes | $\ell, m, p$ of $y_{\ell mp}$ | Crystal system |
+|---|---|---|---|
+| $1$ | Any | $(\ell, m, \pm)$ | Triclinic |
+| $\bar{1}$ | Any | $(2\ell, m, \pm)$ | Triclinic |
+| $2$ | $2 \parallel z$ | $(\ell, 2m, \pm)$ | Monoclinic |
+| $m$ | $m \perp z$ | $(\ell, \ell - 2m, \pm)$ | Monoclinic |
+| $2/m$ | $2 \parallel z$, $m \perp z$ | $(2\ell, 2m, \pm)$ | Monoclinic |
+| $222$ | $2 \parallel z$, $2 \parallel y$ ($2 \parallel x$) | $(2\ell, 2m, +)$, $(2\ell+1, 2m, -)$ | Orthorhombic |
+| $mm2$ | $2 \parallel z$, $m \perp y$ ($m \perp x$) | $(\ell, 2m, +)$ | Orthorhombic |
+| $mmm$ | $m \perp z$, $m \perp y$, $m \perp x$ | $(2\ell, 2m, +)$ | Orthorhombic |
+| $4$ | $4 \parallel z$ | $(\ell, 4m, \pm)$ | Tetragonal |
+| $\bar{4}$ | $\bar{4} \parallel z$ | $(2\ell, 4m, \pm)$, $(2\ell+1, 4m+2, \pm)$ | Tetragonal |
+| $4/m$ | $4 \parallel z$, $m \perp z$ | $(2\ell, 4m, \pm)$ | Tetragonal |
+| $422$ | $4 \parallel z$, $2 \parallel y$ ($2 \parallel x$) | $(2\ell, 4m, +)$, $(2\ell+1, 4m, -)$ | Tetragonal |
+| $4mm$ | $4 \parallel z$, $m \perp y$ ($m \perp x$) | $(\ell, 4m, +)$ | Tetragonal |
+| $\bar{4}2m$ | $\bar{4} \parallel z$, $2 \parallel x$ ($m$: $xy \rightarrow yx$) | $(2\ell, 4m, +)$, $(2\ell+1, 4m+2, -)$ | Tetragonal |
+| $4/mmm$ | $4 \parallel z$, $m \perp z$, $m \perp x$ | $(2\ell, 4m, +)$ | Tetragonal |
+| $3$ | $3 \parallel z$ | $(\ell, 3m, \pm)$ | Rhombohedral |
+| $\bar{3}$ | $\bar{3} \parallel z$ | $(2\ell, 3m, \pm)$ | Rhombohedral |
+| $32$ | $3 \parallel z$, $2 \parallel y$ | $(2\ell, 3m, +)$, $(2\ell+1, 3m, -)$ | Rhombohedral |
+| $3m$ | $3 \parallel z$, $m \perp y$ | $(\ell, 3m, +)$ | Rhombohedral |
+| $\bar{3}m$ | $\bar{3} \parallel z$, $m \perp y$ | $(2\ell, 3m, +)$ | Rhombohedral |
+| $6$ | $6 \parallel z$ | $(\ell, 6m, \pm)$ | Hexagonal |
+| $\bar{6}$ | $\bar{6} \parallel z$ | $(2\ell, 6m, \pm)$, $(2\ell+1, 6m+3, \pm)$ | Hexagonal |
+| $6/m$ | $6 \parallel z$, $m \perp z$ | $(2\ell, 6m, \pm)$ | Hexagonal |
+| $622$ | $6 \parallel z$, $2 \parallel y$ ($2 \parallel x$) | $(2\ell, 6m, +)$, $(2\ell+1, 6m, -)$ | Hexagonal |
+| $6mm$ | $6 \parallel z$, $m \perp y$ ($m \perp x$) | $(\ell, 6m, +)$ | Hexagonal |
+| $\bar{6}m2$ | $\bar{6} \parallel z$, $m \perp y$ ($2 \parallel x$) | $(2\ell, 6m, +)$, $(2\ell+1, 6m+3, +)$ | Hexagonal |
+| $6/mmm$ | $6 \parallel z$, $m \perp z$, $m \perp y$ ($m \perp x$) | $(2\ell, 6m, +)$ | Hexagonal |
+
+Therefore in the MTA one must make a compromise, whereas in full-potential calculations this problem practically disappears.
+
+### 2.2.13. The local coordinate system
+
+The partition of a crystal into atoms (or molecules) is ambiguous and thus the atomic contribution cannot be defined uniquely. However, whatever the definition, it must follow the relevant site symmetry for each atom. There are at least two reasons why one would want to use a local coordinate system at each atomic site: the concept of crystal harmonics and the interpretation of bonding features.
+
+#### 2.2.13.1. Crystal harmonics
+
+All spatial observables of the bound atom (e.g. the potential or the charge density) must have the crystal symmetry, i.e. the point-group symmetry around an atom. Therefore they must be representable as an expansion in terms of site-symmetrized spherical harmonics. Any point-symmetry operation transforms a spherical harmonic into another of the same $\ell$. We start with the usual complex spherical harmonics,
+
+$$Y_{\ell m}(\vartheta, \varphi) = N_{\ell m} P_{\ell}^{m}(\cos \vartheta) \exp(im\varphi), \quad (2.2.13.1)$$
+
+which satisfy Laplace's differential equation. The $P_{\ell}^{m}(\cos \vartheta)$ are the associated Legendre polynomials and the normalization $N_{\ell m}$ follows the convention of Condon & Shortley (1953). For the $\varphi$-dependent part one can use the real and imaginary parts, i.e. $\cos(m\varphi)$ and $\sin(m\varphi)$, instead of the $\exp(im\varphi)$ functions,
+
+but we must introduce a parity $p$ to distinguish the functions with the same $|m|$. For convenience we take real spherical harmonics, since physical observables are real. The even and odd polynomials are given by the combination of the complex spherical harmonics with the parity $p$ either $+$ or $-$ by
+
+$$
+\begin{align}
+y_{\ell mp} &=
+\begin{cases}
+y_{\ell m+} = (1/\sqrt{2})(Y_{\ell m} + Y_{\ell \bar{m}}) & + \text{parity}, \\
+y_{\ell m-} = -(i/\sqrt{2})(Y_{\ell m} - Y_{\ell \bar{m}}) & - \text{parity},
+\end{cases}
+& m = 2n \\
+y_{\ell mp} &=
+\begin{cases}
+y_{\ell m+} = -(1/\sqrt{2})(Y_{\ell m} - Y_{\ell \bar{m}}) & + \text{parity}, \\
+y_{\ell m-} = (i/\sqrt{2})(Y_{\ell m} + Y_{\ell \bar{m}}) & - \text{parity},
+\end{cases}
+& m = 2n+1.
+\end{align}
+(2.2.13.2) $$
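+
+Equation (2.2.13.2) can be verified numerically: building the complex $Y_{\ell m}$ with the Condon–Shortley phase (which `scipy.special.lpmv` already includes) and forming the combinations above must yield purely real functions. A sketch with arbitrary test angles:

```python
import numpy as np
from scipy.special import lpmv, factorial

# Complex Y_lm with the Condon-Shortley phase (included in lpmv).
def Y(l, m, theta, phi):
    N = np.sqrt((2*l + 1) / (4*np.pi)
                * factorial(l - m) / factorial(l + m))
    return N * lpmv(m, l, np.cos(theta)) * np.exp(1j * m * phi)

# Real harmonics y_{l m p} following the sign pattern of (2.2.13.2).
def y_real(l, m, p, theta, phi):
    Yp, Ym = Y(l, m, theta, phi), Y(l, -m, theta, phi)
    if m % 2 == 0:   # m = 2n
        return (Yp + Ym)/np.sqrt(2) if p == '+' else -1j*(Yp - Ym)/np.sqrt(2)
    else:            # m = 2n + 1
        return -(Yp - Ym)/np.sqrt(2) if p == '+' else 1j*(Yp + Ym)/np.sqrt(2)

theta, phi = 0.7, 1.1  # arbitrary test angles
for l, m, p in [(2, 2, '+'), (2, 2, '-'), (3, 1, '+'), (3, 1, '-')]:
    assert abs(y_real(l, m, p, theta, phi).imag) < 1e-12
```

+Since physical observables are real, the vanishing imaginary parts confirm the sign pattern of (2.2.13.2).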
+
+The expansion of – for example – the charge density $\rho(\mathbf{r})$ around an atomic site can be written using the LAPW method [see the analogous equation (2.2.12.5) for the potential] in the form
+
+$$\rho(\mathbf{r}) = \sum_{LM} \rho_{LM}(r) K_{LM}(\hat{\mathbf{r}}) \text{ inside an atomic sphere,} \quad (2.2.13.3)$$
+
+where we use capital letters *LM* for the indices (i) to distinguish this expansion from that of the wavefunctions in which complex spherical harmonics are used [see (2.2.12.1)] and (ii) to include the parity $p$ in the index *M* (which represents the combined index *mp*). With these conventions, $K_{LM}$ can be written as a linear combination of real spherical harmonics $y_{\ell mp}$ which are symmetry-adapted to the site symmetry,
+
+$$K_{LM}(\hat{\mathbf{r}}) =
+\begin{cases}
+y_{\ell mp} & \text{non-cubic} \\
+\sum_j c_{Lj}\, y_{\ell jp} & \text{cubic,}
+\end{cases}
+\quad (2.2.13.4)$$
+
+Table 2.2.13.2. LM combinations of cubic groups as linear combinations of $y_{lmp}$'s (given in parentheses)
+
+The linear-combination coefficients can be found in Kurki-Suonio (1977).
+
+| Symmetry | LM combinations |
+|---|---|
+| $23$ | (0 0), (3 2−), (4 0, 4 4+), (6 0, 6 4+), (6 2+, 6 6+) |
+| $m3$ | (0 0), (4 0, 4 4+), (6 0, 6 4+), (6 2+, 6 6+) |
+| $432$ | (0 0), (4 0, 4 4+), (6 0, 6 4+) |
+| $\bar{4}3m$ | (0 0), (3 2−), (4 0, 4 4+), (6 0, 6 4+) |
+| $m3m$ | (0 0), (4 0, 4 4+), (6 0, 6 4+) |
+
+i.e. they are either $y_{lmp}$ [(2.2.13.2)] in the non-cubic cases (Table 2.2.13.1) or are well defined combinations of $y_{lmp}$'s in the five cubic cases (Table 2.2.13.2), where the coefficients $c_{Lj}$ depend on the normalization of the spherical harmonics and can be found in Kurki-Suonio (1977).
+
+According to Kurki-Suonio, the number of (non-vanishing) *LM* terms [e.g. in (2.2.13.3)] is minimized by choosing for each atom a local Cartesian coordinate system adapted to its site
+---PAGE_BREAK---
+
+## 2.3. Raman scattering
+
+BY I. GREGORA
+
+### 2.3.1. Introduction
+
+The term Raman scattering, traditionally used for light scattering by molecular vibrations or optical lattice vibrations in crystals, is often applied in a general sense to a vast variety of phenomena of inelastic scattering of photons by various excitations in molecules, solids or liquids. In crystals these excitations may be collective (phonons, plasmons, polaritons, magnons) or single-particle (electrons, electron-hole pairs, vibrational and electronic excitation of impurities). Raman scattering provides an important tool for the study of the properties of these excitations. In the present chapter, we shall briefly review the general features of Raman scattering in perfect crystals on a phenomenological basis, paying special attention to the consequences of the crystal symmetry. Our focus will be mainly on Raman scattering by vibrational excitations of the crystal lattice – phonons. Nevertheless, most of the conclusions have general validity and may be (with possible minor modifications) transferred also to inelastic scattering by other excitations.
+
+### 2.3.2. Inelastic light scattering in crystals – basic notions
+
+Although quantum concepts must be used in any complete theory of inelastic scattering, basic insight into the problem may be obtained from a semiclassical treatment. In classical terms, the origin of inelastically scattered light in solids should be seen in the modulation of the dielectric susceptibility of a solid by elementary excitations. The exciting light polarizes the solid and the polarization induced *via* the modulated part of the susceptibility is re-radiated at differently shifted frequencies. Thus inelastic scattering of light by the temporal and spatial fluctuations of the dielectric susceptibility that are induced by elementary excitations provides information about the symmetry and wavevector-dependent frequencies of the excitations themselves as well as about their interaction with electromagnetic waves.
+
+#### 2.3.2.1. Kinematics
+
+Let us consider the incident electromagnetic radiation, the scattered electromagnetic radiation and the elementary excitation to be described by plane waves. The incident radiation is characterized by frequency $\omega_I$, wavevector $\mathbf{k}_I$ and polarization vector $\mathbf{e}_I$. Likewise, the scattered radiation is characterized by $\omega_S$, $\mathbf{k}_S$ and $\mathbf{e}_S$:
+
+$$ \mathbf{E}_{I,S}(\mathbf{r}, t) = E_{I,S}\, \mathbf{e}_{I,S} \exp[i(\mathbf{k}_{I,S} \cdot \mathbf{r} - \omega_{I,S} t)]. \quad (2.3.2.1) $$
+
+The scattering process involves the annihilation of the incident photon, the emission or annihilation of one or more quanta of elementary excitations and the emission of a scattered photon. The scattering is characterised by a *scattering frequency* $\omega$ (also termed the *Raman shift*) corresponding to the energy transfer $\hbar\omega$ from the radiation field to the crystal, and by a *scattering wave-vector* $\mathbf{q}$ corresponding to the respective momentum transfer $\hbar\mathbf{q}$. Since the energy and momentum must be conserved in the scattering process, we have the conditions
+
+$$ \begin{gathered} \omega_I - \omega_S = \omega, \\ \mathbf{k}_I - \mathbf{k}_S = \mathbf{q}. \end{gathered} \quad (2.3.2.2) $$
+
+Strictly speaking, the momentum conservation condition is valid only for sufficiently large, perfectly periodic crystals. It is further assumed that there is no significant absorption of the incident and
+
+scattered light beams, so that the wavevectors may be considered real quantities.
+
+Since the photon wavevectors ($\mathbf{k}_I$, $\mathbf{k}_S$) and frequencies ($\omega_I$, $\omega_S$) are related by the dispersion relation $\omega = ck/n$, where c is the speed of light in free space and n is the refractive index of the medium at the respective frequency, the energy and wavevector conservation conditions imply for the magnitude of the scattering wavevector q
+
+$$ c^2 q^2 = n_I^2 \omega_I^2 + n_S^2 (\omega_I - \omega)^2 - 2n_I n_S \omega_I (\omega_I - \omega) \cos \varphi, \quad (2.3.2.3) $$
+
+where $\varphi$ is the *scattering angle* (the angle between $\mathbf{k}_I$ and $\mathbf{k}_S$). This relation defines in the ($\omega, q$) plane the region of wavevectors and frequencies accessible to the scattering. This relation is particularly important for scattering by excitations whose frequencies depend markedly on the scattering wavevector (e.g. acoustic phonons, polaritons etc.).
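+
+For concreteness, the accessible scattering wavevector of (2.3.2.3) can be evaluated directly. All numbers below (a 514.5 nm laser line, a 10 THz Raman shift, right-angle scattering, $n_I = n_S = 1.5$) are illustrative assumptions, not values from the text:

```python
import numpy as np

# Evaluate (2.3.2.3) for illustrative parameters (assumed, not from
# the text): 514.5 nm excitation, 10 THz Raman shift, 90-degree
# scattering, refractive indices n_I = n_S = 1.5.
c = 2.998e8                         # speed of light in vacuum, m/s
omega_I = 2 * np.pi * c / 514.5e-9  # incident angular frequency, rad/s
omega = 2 * np.pi * 10e12           # Raman shift, rad/s
n_I = n_S = 1.5
phi = np.pi / 2                     # scattering angle

# energy conservation (2.3.2.2) fixes the scattered frequency
omega_S = omega_I - omega

# magnitude of the scattering wavevector from (2.3.2.3)
q = np.sqrt(n_I**2 * omega_I**2 + n_S**2 * omega_S**2
            - 2 * n_I * n_S * omega_I * omega_S * np.cos(phi)) / c

# q must lie between |k_I - k_S| (forward) and k_I + k_S (backscattering)
k_I, k_S = n_I * omega_I / c, n_S * omega_S / c
assert abs(k_I - k_S) <= q <= k_I + k_S
```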
+
+#### 2.3.2.2. Cross section
+
+In the absence of any excitations, the incident field $\mathbf{E}_I$ at frequency $\omega_I$ induces in the crystal the polarization **P**, related to the field by the *linear* dielectric susceptibility tensor $\chi$ ($\epsilon_0$ is the permittivity of free space):
+
+$$ \mathbf{P} = \epsilon_0 \chi(\omega_I) \mathbf{E}_I. \quad (2.3.2.4) $$
+
+The linear susceptibility $\chi(\omega_I)$ is understood to be independent of position, depending on the crystal characteristics and on the frequency of the radiation field only. In the realm of nonlinear optics, additional terms of higher order in the fields may be considered; they are expressed through the respective *nonlinear* susceptibilities.
+
+The effect of the excitations is to modulate the wavefunctions and the energy levels of the medium, and can be represented macroscopically as an additional contribution to the linear susceptibility. Treating this modulation as a perturbation, the resulting contribution to the susceptibility tensor, the so-called *transition susceptibility* $\delta\chi$, can be expressed as a Taylor expansion in terms of *normal coordinates* $Q_j$ of the excitations:
+
+$$ \chi \rightarrow \chi + \delta\chi, \text{ where } \delta\chi = \sum_j \chi^{(j)} Q_j + \sum_{j,f} \chi^{(j,f)} Q_j Q_f + \dots \quad (2.3.2.5) $$
+
+The tensorial coefficients $\chi^{(j)}, \chi^{(j,f)}, \dots$ in this expansion are, in a sense, *higher-order susceptibilities* and are often referred to as *Raman tensors* (of the first, second and higher orders). They are obviously related to *susceptibility derivatives* with respect to the normal coordinates of the excitations. The time-dependent polarization induced by $\delta\chi$ via time dependence of the normal coordinates can be regarded as the source of the inelastically scattered radiation.
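+
+This mechanism can be demonstrated with a short simulation: keeping only the first-order term of (2.3.2.5) for a single normal coordinate $Q_j(t) = \cos\omega t$, the spectrum of the induced polarization acquires sidebands at $\omega_I \pm \omega$ (anti-Stokes and Stokes lines). All numerical values are illustrative:

```python
import numpy as np

# First-order term of (2.3.2.5): chi(t) = chi0 + chi1 * cos(omega * t)
# modulates a carrier at omega_I; the induced polarization then shows
# sidebands at omega_I +/- omega. Dimensionless, illustrative values.
omega_I, omega, chi0, chi1 = 50.0, 5.0, 1.0, 0.1
t = np.linspace(0.0, 200 * np.pi, 2**16, endpoint=False)
P = (chi0 + chi1 * np.cos(omega * t)) * np.cos(omega_I * t)

spec = np.abs(np.fft.rfft(P))
freqs = np.fft.rfftfreq(t.size, d=t[1] - t[0]) * 2 * np.pi  # rad/unit time

# the three strongest lines: carrier plus Stokes / anti-Stokes sidebands
peaks = np.sort(freqs[np.argsort(spec)[-3:]])
assert np.allclose(peaks, [omega_I - omega, omega_I, omega_I + omega],
                   atol=freqs[1] - freqs[0])
```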
+
+The central quantity in the description of Raman scattering is the *spectral differential cross section*, defined as the relative rate of energy loss from the incident beam (frequency $\omega_I$, polarization $\mathbf{e}_I$) as a result of its scattering (frequency $\omega_S$, polarization $\mathbf{e}_S$) in volume V into a unit solid angle and unit frequency interval. The corresponding formula may be concisely written as (see e.g. Hayes & Loudon, 1978)
+
+$$ \frac{\mathrm{d}^2\sigma}{\mathrm{d}\Omega\,\mathrm{d}\omega_S} = \frac{\omega_S^3\, \omega_I\, V^2\, n_S}{(4\pi)^2 c^4\, n_I} \left\langle \left| \mathbf{e}_I \cdot \delta\chi \cdot \mathbf{e}_S \right|^2 \right\rangle_\omega. \quad (2.3.2.6) $$
+---PAGE_BREAK---
+
+## 2. SYMMETRY ASPECTS OF EXCITATIONS
+
+Table 2.3.3.3. Raman selection rules in crystals of the 4mm class
+
+| Scattering geometry | Cross section for $A_1$ | Cross section for $E$ |
+|---|---|---|
+| $\mathbf{q} \parallel z$ | $\sim \vert a_{\text{LO}}\vert^2$ | — |
+| $\mathbf{q} \perp z$ | $\sim \vert b_{\text{TO}}\vert^2$ | — |
+| $\bar{y}(xz)y$, $\bar{x}(yz)x$ | — | $\sim \vert f_{\text{TO}}\vert^2$ |
+| $x'(zx')y'$, $x'(y'z)y'$ | — | $\tfrac{1}{2}\vert f_{\text{TO}}\vert^2 + \tfrac{1}{2}\vert f_{\text{LO}}\vert^2$ |
+
+*Example:* To illustrate the salient features of polar-mode scattering let us consider a crystal of the 4mm class, where of the Raman-active symmetry species the modes $A_1(z)$ and $E(x, y)$ are polar. According to Table 2.3.3.1, their ($\mathbf{q}=0$) Raman tensors are identical to those of the $A_{1g}$ and $E_g$ modes in the preceding example of a 4/mmm-class crystal. Owing to the macroscopic electric field, however, here one has to expect directional dispersion of the frequencies of the long wavelength ($\mathbf{q} \approx 0$) $A_1$ and E optic phonon modes according to their longitudinal or transverse character. Consequently, in determining the polarization selection rules, account has to be taken of the direction of the phonon wavevector (i.e. the scattering wavevector) **q** with respect to the crystallographic axes. Since for a general direction of **q** the modes are coupled by the field, a suitable experimental arrangement permitting the efficient separation of their respective contributions should have the scattering wavevector **q** oriented along principal directions. At $\mathbf{q} \parallel \mathbf{z}$, the $A_1$ phonons are longitudinal (LO$_{\|}$) and both E modes (2TO$_{\perp}$) are transverse, remaining degenerate, whereas at $\mathbf{q} \parallel \mathbf{x}$ or $\mathbf{q} \parallel \mathbf{y}$, the $A_1$ phonons become transverse (TO$_{\perp}$) and the E phonons split into a pair of (TO$_{\perp}$, LO$_{\perp}$) modes of different frequencies. The subscripts $\|$ or $\perp$ explicitly indicate the orientation of the electric dipole moment carried by the mode with respect to the fourfold axis ($4 \parallel \mathbf{c} \equiv \mathbf{z}$).
+
+Schematically, the situation (i.e. frequency shifts and splittings) at $\mathbf{q} \approx 0$ can be represented by
+
+$$
+\begin{array}{lcl}
+\mathbf{q} \parallel \mathbf{z} & & \mathbf{q} \parallel \mathbf{x} \\
+A_1(\text{LO}_{\|}) & \longrightarrow & A_1(\text{TO}_{\|}) \\
+E(2\text{TO}_{\perp}) & \longrightarrow & E_x(\text{LO}_{\perp}),\ E_y(\text{TO}_{\perp})
+\end{array}
+$$
+
+For a general direction of **q**, the modes are of a mixed character and their frequencies show directional (angular) dispersion. The overall picture depends on the number of $A_1$ and E phonons present in the given crystal, as well as on their effective charges and on the ordering of their eigenfrequencies. In fact, only the $E(\text{TO}_\perp)$ modes remain unaffected by the directional dispersion.
+
+Table 2.3.3.3 gives the corresponding contributions of these modes to the cross section for several representative scattering geometries, where subscripts TO and LO indicate that the components of the total Raman tensor may take on different values for TO and LO modes due to electro-optic contributions in the latter case.
+
+### 2.3.3.6. q-dependent terms
+
+So far, we have not explicitly considered the dependence of the Raman tensor on the magnitude of the scattering wavevector, assuming $\mathbf{q} \to 0$ (the effects of directional dispersion in the case of scattering by polar modes were briefly mentioned in the preceding section). In some cases, however, the Raman tensors vanish in this limit, or **q**-dependent corrections to the scattering may appear. Formally, we may expand the susceptibility in a Taylor series in **q**. The coefficients in this expansion are higher-order susceptibility derivatives taken at $\mathbf{q} = 0$. The symmetry-restricted form of these tensorial coefficients may be determined in the same way as that of the zero-order term, i.e. by decomposing the reducible representation of the third-, fourth- and
+
+higher-order polar Cartesian tensors into irreducible components $\Gamma(j)$. General properties of the **q**-dependent terms can be advantageously discussed in connection with the so-called *morphic* effects (see Sections 2.3.4 and 2.3.5).
+
+### 2.3.4. Morphic effects in Raman scattering
+
+By *morphic* effects we understand the effects that arise from a reduction of the symmetry of a system caused by the application of *external forces*. The relevant consequences of morphic effects for Raman scattering are changes in the selection rules. Applications of external forces may, for instance, render it possible to observe scattering by excitations that are otherwise inactive. Again, group-theoretical arguments may be applied to obtain the symmetry-restricted component form of the Raman tensors under applied forces.
+
+It should be noted that under external forces in this sense various ‘built-in’ fields can be included, e.g. electric fields or elastic strains typically occurring near the crystal surfaces. Effects of ‘intrinsic’ macroscopic electric fields associated with long-wavelength LO polar phonons can be treated on the same footing. Spatial-dispersion effects connected with the finiteness of the wavevectors, **q** or **k**, may also be included among morphic effects, since they may be regarded as being due to the gradients of the fields (displacement or electric) propagating in the crystal.
+
+#### 2.3.4.1. General remarks
+
+Various types of applied forces – in a general sense – can be classified according to symmetry, i.e. according to their transformation properties. Thus a force is characterized as a *polar* force if it transforms under the symmetry operation of the crystal like a polar tensor of appropriate rank (rank 1: electric field **E**; rank 2: electric field gradient $\nabla$**E**, stress **T** or strain **S**). It is an *axial* force if it transforms like an axial tensor (rank 1: magnetic field **H**). Here we shall deal briefly with the most important cases within the macroscopic approach of the susceptibility derivatives. We shall treat explicitly the first-order scattering only and neglect, for the moment, **q**-dependent terms.
+
+In a perturbation approach, the first-order transition susceptibility $\delta\chi$ in the presence of an applied force **F** can be expressed in terms of Raman tensors $\mathbf{R}^I(\mathbf{F})$ expanded in powers of **F**:
+
+$$
+\begin{gathered}
+\delta\chi(\mathbf{F}) = \sum_j \mathbf{R}^{j}(\mathbf{F})\, Q_j, \\
+\text{where } \mathbf{R}^{j}(\mathbf{F}) = \mathbf{R}^{j0} + \mathbf{R}^{jF}\mathbf{F} + \tfrac{1}{2}\mathbf{R}^{jFF}\mathbf{FF} + \dots
+\end{gathered}
+\quad (2.3.4.1)
+$$
+
+Here, $\mathbf{R}^{j0} = \chi^{(j)}(0) = (\partial\chi_{\alpha\beta}/\partial Q_j)$ is the zero-field intrinsic Raman tensor, whereas the tensors
+
+$$
+\begin{aligned}
+\mathbf{R}^{jF}\mathbf{F} &= \left( \frac{\partial^2 \chi_{\alpha\beta}}{\partial Q_j\, \partial F_\mu} \right) F_\mu, \\
+\mathbf{R}^{jFF}\mathbf{FF} &= \left( \frac{\partial^3 \chi_{\alpha\beta}}{\partial Q_j\, \partial F_\mu\, \partial F_\nu} \right) F_\mu F_\nu \quad \text{etc.}
+\end{aligned}
+\quad (2.3.4.2)
+$$
+
+are the *force-induced* Raman tensors of the respective order in the field, associated with the $j$th normal mode. The scattering cross section for the $j$th mode becomes proportional to $|\mathbf{e}_S(\mathbf{R}^{j0} + \mathbf{R}^{jF}\mathbf{F} + \frac{1}{2}\mathbf{R}^{jFF}\mathbf{FF} + \dots)\mathbf{e}_I|^2$, which, in general, may modify the polarization selection rules. If, for example, the mode is intrinsically Raman inactive, i.e. $\mathbf{R}^{j0} = 0$ whereas $\mathbf{R}^{jF} \neq 0$, we deal with purely force-induced Raman scattering; its intensity is proportional to $F^2$ in the first order. Higher-order terms must be investigated if, for symmetry reasons, the first-order terms vanish.
+
+For force-induced Raman activity, in accordance with general rules, invariance again requires that a particular symmetry species $\Gamma(j)$ can contribute to the first-order transition susceptibility by terms of order $n$ in the force only if the identity
+---PAGE_BREAK---
+
+## 2.4. BRILLOUIN SCATTERING
+
+dissipation theorem in the classical limit for $h\,\delta\nu \ll k_B T$ (Hayes & Loudon, 1978). The coupling coefficient *M* is given by
+
+$$M = |e_m e'_n \kappa_{mi} \kappa_{nj} p'_{ijk\ell} \hat{u}_k \hat{Q}_\ell|^2. \quad (2.4.4.8)$$
+
+In practice, the incident intensity is defined outside the scattering volume, $I_{\text{out}}$, and for normal incidence one can write
+
+$$I_{\text{in}} = \frac{4n}{(n+1)^2} I_{\text{out}}. \qquad (2.4.4.9a)$$
+
+Similarly, the scattered power is observed outside as $P_{\text{out}}$, and
+
+$$P_{\text{out}} = \frac{4n'}{(n'+1)^2} P_{\text{in}}, \qquad (2.4.4.9b)$$
+
+again for normal incidence. Finally, the approximative relation between the scattering solid angle $\Omega_{\text{out}}$, outside the sample, and the solid angle $\Omega_{\text{in}}$, in the sample, is
+
+$$\Omega_{\text{out}} = (n')^2 \Omega_{\text{in}}. \qquad (2.4.4.9c)$$
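+
+A quick numerical reading of the correction factors (2.4.4.9a–c), for an assumed glass-like refractive index $n = n' = 1.5$ (an illustrative value, not from the text):

```python
# Normal-incidence correction factors of (2.4.4.9a-c); n = n' = 1.5
# is an assumed glass-like value, not taken from the text.
n = n_prime = 1.5

T_in = 4 * n / (n + 1)**2               # I_in / I_out, (2.4.4.9a)
T_out = 4 * n_prime / (n_prime + 1)**2  # P_out / P_in, (2.4.4.9b)
solid_angle_ratio = n_prime**2          # Omega_out / Omega_in, (2.4.4.9c)

# each interface transmits 96% of the power at n = 1.5, and
# refraction expands the solid angle by n'^2 on leaving the sample
assert abs(T_in - 0.96) < 1e-12
assert abs(solid_angle_ratio - 2.25) < 1e-12
```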
+
+Substituting (2.4.4.9a,b,c) in (2.4.4.7), one obtains (Vacher & Boyer, 1972)
+
+$$\frac{\mathrm{d}P_{\text{out}}}{\mathrm{d}\Omega_{\text{out}}} = \frac{8\pi^2 k_B T}{\lambda_0^4} \frac{n^4}{(n+1)^2} \frac{(n')^4}{(n'+1)^2} \beta V I_{\text{out}}, \quad (2.4.4.10)$$
+
+where the coupling coefficient $\beta$ is
+
+$$\beta = \frac{1}{n^4(n')^4} \frac{|e_m e'_n \kappa_{mi} \kappa_{nj} p'_{ijk\ell} \hat{u}_k \hat{Q}_\ell|^2}{C}. \qquad (2.4.4.11)$$
+
+In the cases of interest here, the tensor $\mathbf{\kappa}$ is diagonal, $\kappa_{ij} = n_i^2 \delta_{ij}$ without summation on $i$, and (2.4.4.11) can be written in the simpler form
+
+$$\beta = \frac{1}{n^4(n')^4} \frac{|e_i n_i^2 p'_{ijk\ell} \hat{u}_k \hat{Q}_\ell e'_j n_j^2|^2}{C}. \qquad (2.4.4.12)$$
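The chain of refraction corrections (2.4.4.9a–c) is simple enough to evaluate numerically. The sketch below uses an illustrative refractive index (not tied to any particular crystal) to tabulate the normal-incidence transmission factors and the combined geometric prefactor appearing in (2.4.4.10):

```python
def transmission(n):
    """Normal-incidence intensity transmission factor 4n/(n+1)^2,
    as in (2.4.4.9a) and (2.4.4.9b)."""
    return 4.0 * n / (n + 1.0) ** 2

n, n_prime = 1.54, 1.54   # illustrative refractive indices for incident / scattered light

I_in_over_I_out = transmission(n)        # eq. (2.4.4.9a)
P_out_over_P_in = transmission(n_prime)  # eq. (2.4.4.9b)
omega_out_over_omega_in = n_prime ** 2   # eq. (2.4.4.9c): Omega_out = n'^2 Omega_in

# Geometric prefactor n^4/(n+1)^2 * (n')^4/(n'+1)^2 appearing in (2.4.4.10)
prefactor = n ** 4 / (n + 1.0) ** 2 * n_prime ** 4 / (n_prime + 1.0) ** 2

print(round(I_in_over_I_out, 4))  # 0.9548
print(round(prefactor, 4))
```

For an index-matched interface ($n = 1$) the transmission factor is exactly 1, as it must be.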
+
+### 2.4.5. Use of the tables
+
+The tables in this chapter give information on modes and scattering geometries that are in most common use in the study of hypersound in single crystals. Just as in the case of X-rays, Brillouin scattering is not sensitive to the presence or absence of a centre of symmetry (Friedel, 1913). Hence, the results are the same for all crystalline classes belonging to the same centric group, also called Laue class. The correspondence between the point groups and the Laue classes analysed here is shown in Table 2.4.5.1. The monoclinic and triclinic cases, being too cumbersome, will not be treated here.
+
+For tensor components $c_{ijk\ell}$ and $p_{ijk\ell}$, the tables make use of the usual contracted notation for index pairs running from 1 to 6. However, as the tensor $p'_{ijk\ell}$ is not symmetric upon interchange of $(k, \ell)$, it is necessary to distinguish the order $(k, \ell)$ and $(\ell, k)$. This is accomplished with the following correspondence:
+
+| 1, 1 → 1 | 2, 2 → 2 | 3, 3 → 3 |
+|---|---|---|
+| 2, 3 → 4 | 3, 1 → 5 | 1, 2 → 6 |
+| 3, 2 → 4̅ | 1, 3 → 5̅ | 2, 1 → 6̅ |
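The correspondence between index pairs and contracted labels described above can be captured in a small lookup, as a sketch (the suffix `bar` standing in for the overlined labels that distinguish the transposed pairs):

```python
# Contracted (Voigt-like) notation for the non-symmetric pair (k, l) of
# p'_{ijkl}: transposed pairs map to "barred" labels.
PAIR_TO_CONTRACTED = {
    (1, 1): '1', (2, 2): '2', (3, 3): '3',
    (2, 3): '4', (3, 1): '5', (1, 2): '6',
    (3, 2): '4bar', (1, 3): '5bar', (2, 1): '6bar',
}

def contract(k, l):
    """Map an index pair (k, l) to its contracted label."""
    return PAIR_TO_CONTRACTED[(k, l)]

print(contract(2, 3))   # '4'
print(contract(3, 2))   # '4bar'
```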
+
+Geometries for longitudinal modes (LA) are listed in Tables 2.4.5.2 to 2.4.5.8. The first column gives the direction of the scattering vector $\hat{\mathbf{Q}}$ that is parallel to the displacement $\hat{\mathbf{u}}$. The second column gives the elastic coefficient according to (2.4.2.6). In piezoelectric materials, effective elastic coefficients defined in (2.4.2.11) must be used in this column. The third column gives the direction of the light polarizations $\hat{\mathbf{e}}$ and $\hat{\mathbf{e}}'$, and the last column gives the corresponding coupling coefficient $\beta$ [equation (2.4.4.11)]. In general, the strongest scattering intensity is obtained for polarized scattering ($\hat{\mathbf{e}} = \hat{\mathbf{e}}'$), which is the only situation listed in the tables. In this case, the coupling to light ($\beta$) is independent of the scattering angle $\theta$, and thus the tables apply to any $\theta$ value.
+
+Tables 2.4.5.9 to 2.4.5.15 list the geometries usually used for the observation of TA modes in backscattering ($\theta = 180^\circ$). In this case, $\hat{\mathbf{u}}$ is always perpendicular to $\hat{\mathbf{Q}}$ (pure transverse modes), and $\hat{\mathbf{e}}'$ is not necessarily parallel to $\hat{\mathbf{e}}$. Cases where pure TA modes with $\hat{\mathbf{u}}$ in the plane perpendicular to $\hat{\mathbf{Q}}$ are degenerate are indicated by the symbol $D$ in the column for $\hat{\mathbf{u}}$. For the Pockels tensor components, the notation is $p_{\alpha\beta}$ if the rotational term vanishes by symmetry, and it is $p'_{\alpha\beta}$ otherwise.
+
+Tables 2.4.5.16 to 2.4.5.22 list the common geometries used for the observation of TA modes in $90^\circ$ scattering. In these tables, the polarization vector $\hat{\mathbf{e}}$ is always perpendicular to the scattering plane and $\hat{\mathbf{e}}'$ is always parallel to the incident wavevector of light $\mathbf{q}$. Owing to birefringence, the scattering vector $\hat{\mathbf{Q}}$ does not exactly bisect $\mathbf{q}$ and $\mathbf{q}'$ [equation (2.4.4.4)]. The tables are written for strict $90^\circ$ scattering, $\mathbf{q} \cdot \mathbf{q}' = 0$, and in the case of birefringence the values of $\mathbf{q}^{(m)}$ to be used are listed separately in Table 2.4.5.23. The latter assumes that the birefringences are not large, so that the values of $\mathbf{q}^{(m)}$ are given only to first order in the birefringence.
+
+### 2.4.6. Techniques of Brillouin spectroscopy
+
+Brillouin spectroscopy with visible laser light requires observing frequency shifts falling typically in the range ~1 to ~100 GHz, or ~0.03 to ~3 cm⁻¹. To achieve this with good resolution one mostly employs interferometry. For experiments at very small angles (near forward scattering), photocorrelation spectroscopy can also be used. If the observed frequency shifts are ≥ 1 cm⁻¹, rough measurements of spectra can sometimes be obtained with modern grating instruments. Recently, it has also become possible to perform Brillouin scattering using other excitations, in particular neutrons or X-rays. In these cases, the coupling does not occur via the Pockels effect, and the frequency shifts that are observed are much larger. The following discussion is restricted to optical interferometry.
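As a numerical sanity check of these ranges and of typical interferometer figures, the sketch below converts frequency shifts to wavenumbers ($\tilde\nu = \nu/c$) and evaluates rough planar Fabry-Perot numbers. The wavelength, etalon thickness and finesse are illustrative assumptions, and the resolving power is taken as finesse times the interference order $2\ell/\lambda$ (standard Fabry-Perot theory):

```python
c_cm = 2.99792458e10       # speed of light (cm/s)

def ghz_to_inv_cm(nu_ghz):
    """Convert a frequency shift in GHz to wavenumbers (cm^-1)."""
    return nu_ghz * 1e9 / c_cm

print(round(ghz_to_inv_cm(1.0), 4))    # 0.0334
print(round(ghz_to_inv_cm(100.0), 2))  # 3.34

# Illustrative planar Fabry-Perot figures (all inputs are assumptions):
ell = 5e-2                 # optical thickness (m)
lam = 514.5e-9             # laser wavelength (m), e.g. an Ar-ion line
finesse = 100              # practical finesse limit quoted in the text
order = 2 * ell / lam      # interference order 2*ell/lambda
print(f"resolving power ~ {finesse * order:.1e}")
```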
+
+The most common interferometer that has been used for this purpose is the single-pass planar Fabry-Perot (Born & Wolf, 1993). Upon illumination with monochromatic light, the frequency response of this instrument is given by the Airy function, which consists of a regular comb of maxima obtained as the optical path separating the mirrors is increased. Successive maxima are separated by $\lambda/2$. The ratio of the maxima separation to the width of a single peak is called the finesse $F$, which increases as the mirror reflectivity increases. The finesse is also limited by the planarity of the mirrors. A practical limit is $F \sim 100$. The resolving power of such an instrument is $R = F \times 2\ell/\lambda$, where $\ell$ is the optical thickness and $2\ell/\lambda$ the interference order. Values of $R$ around $10^6$ to $10^7$ can be achieved. It is impractical to increase $\ell$ above ~5 cm because the luminosity of the instrument is proportional to $1/\ell$. If higher
+
+Table 2.4.5.1. Definition of Laue classes
+
+| Crystal system | Laue class | Point groups |
+|---|---|---|
+| Cubic | C1 | 432, 4̅3m, m3̅m |
+| | C2 | 23, m3̅ |
+| Hexagonal | H1 | 622, 6̅2m, 6mm, 6/mmm |
+| | H2 | 6, 6̅, 6/m |
+| Tetragonal | T1 | 422, 4̅2m, 4mm, 4/mmm |
+| | T2 | 4, 4̅, 4/m |
+| Trigonal | R1 | 32, 3̅m, 3m |
+| | R2 | 3, 3̅ |
+| Orthorhombic | O | 222, mm2, mmm |
+---PAGE_BREAK---
+
+Fig. 3.1.2.3. Plots representative of the equations $\alpha_1(p, T) = 0$ and $\alpha_2(p, T) = 0$. The simultaneous vanishing of these coefficients occurs for a single couple of temperature and pressure ($p_0, T_0$).
+
+($T_0, p_0$). Let us consider, for instance, the situation depicted in Fig. 3.1.2.3. For $p > p_0$, on lowering the temperature, $\alpha_1$ vanishes at $T'$ and $\alpha_2$ remains positive in the neighbourhood of $T'$. Hence, the equilibrium value of the set ($d_x, d_y$) remains equal to zero on either side of $T'$. A transition at this temperature will only concern a possible change in $d_z^0$.
+
+Likewise, for $p$ below $p_0$, a transition at $T''$ will only concern a possible change of the set of components ($d_x^0, d_y^0$), the third component $d_z$ remaining equal to zero on either side of $T''$. Hence an infinitesimal change of the pressure (for instance a small fluctuation of the atmospheric pressure) from above $p_0$ to below $p_0$ will modify qualitatively the nature of the phase transformation, with the direction of the displacement changing abruptly from $z$ to the $(x, y)$ plane. As will be seen below, the crystalline symmetries of the phases stable below $T'$ and $T''$ are different. This is a singular situation of instability in the type of phase transition, not encountered in real systems. Rather, the standard situation corresponds to pressures away from $p_0$, for which a slight change of the pressure does not modify significantly the direction of the displacement. In this case, only one coefficient $\alpha_i$ vanishes and changes sign at the transition temperature, as stated above.
+
+### 3.1.2.2.5. Stable state below $T_c$ and physical anomalies induced by the transition
+
+We have seen that either $d_z$ or the couple ($d_x, d_y$) of components of the displacement constitute the order parameter of the transition and that the free energy needs only to be expanded as a function of the components of the order parameter. Below the transition, the corresponding coefficient $\alpha_i$ is negative and, accordingly, the free energy, limited to its second-degree terms, has a maximum for $\mathbf{d} = 0$ and no minimum. Such a truncated expansion is not sufficient to determine the equilibrium state of the system. The stable state of the system must be determined by positive terms of higher degrees. Let us examine first the simplest case, for which the order parameter coincides with the $d_z$ component.
+
+The same symmetry argument used to establish the form (3.1.2.1) of the Landau free energy allows one straightforwardly to assert the absence of a third-degree term in the expansion of $F$ as a function of the order parameter $d_z$, and to check the effective occurrence of a fourth-degree term. If we assume that this simplest form of expansion is sufficient to determine the equilibrium state of the system, the coefficient of the fourth-degree term must be positive in the neighbourhood of $T_c$. Up to the latter degree, the form of the relevant contributions to the free energy is therefore
+
+$$F = F_0(T, p) + \frac{\alpha(T - T_c)}{2} d_z^2 + \frac{\beta}{4} d_z^4. \quad (3.1.2.2)$$
+
+In this expression, the coefficient of $d_z^2$, which is an odd function of $(T - T_c)$ since it vanishes and changes sign at $T_c$, has been expanded linearly as $\alpha(T - T_c)$. Likewise, the lowest-degree expansion of the function $\beta(T, p)$ is a positive constant in the vicinity of $T_c$. The function $F_0$, which is the zeroth-degree term in the expansion, represents
+
+Fig. 3.1.2.4. Plots of the Landau free energy as a function of the order parameter, for values of the temperature above or below $T_c$ or coincident with $T_c$. The shape of the plot changes qualitatively from a one-minimum plot to a two-minimum plot.
+
+the normal 'background' part of the free energy. It behaves smoothly since it does not depend on the order parameter. A plot of $[F(d_z) - F_0]$ for three characteristic temperatures is shown in Fig. 3.1.2.4.
+
+The minima of $F$, determined by the set of conditions
+
+$$\frac{\partial F}{\partial d_z} = 0; \quad \frac{\partial^2 F}{\partial d_z^2} > 0, \qquad (3.1.2.3)$$
+
+occur above $T_c$ for $d_z = 0$, as expected. For $T < T_c$ they occur for
+
+$$d_z^0 = \pm \sqrt{\frac{\alpha(T_c - T)}{\beta}}. \quad (3.1.2.4)$$
+
+This behaviour has a general validity: the order parameter of a transition is expected, in the framework of Landau's theory, to possess a square-root dependence as a function of the deviation of the temperature from $T_c$.
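The square-root law is easy to verify numerically. The sketch below minimizes the truncated free energy (3.1.2.2) on a grid, with arbitrary illustrative values of $\alpha$, $\beta$ and $T_c$, and compares the result with the analytic minimum obtained from $\partial F/\partial d_z = 0$:

```python
import math

# Illustrative Landau parameters (assumptions, not from any real material)
alpha, beta, Tc = 2.0, 4.0, 300.0

def F(d, T):
    """Order-parameter part of the free energy (3.1.2.2), without F_0."""
    return 0.5 * alpha * (T - Tc) * d ** 2 + 0.25 * beta * d ** 4

T = 290.0                                  # a temperature below Tc
ds = [i * 1e-4 for i in range(100000)]     # grid of d values in [0, 10)
d_min = min(ds, key=lambda d: F(d, T))     # crude grid minimization

# Analytic result: dF/dd = 0 gives d^2 = alpha*(Tc - T)/beta
d_analytic = math.sqrt(alpha * (Tc - T) / beta)
print(round(d_min, 3), round(d_analytic, 3))  # 2.236 2.236
```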
+
+Note that one finds two minima corresponding to the same value of the free energy and opposite values of $d_z^0$. The corresponding upward and downward displacements of the $M^+$ ion (Fig. 3.1.2.1) are distinct states of the system possessing the same stability.
+
+Other physical consequences of the form (3.1.2.2) of the free energy can be drawn: absence of latent heat associated with the crossing of the transition, anomalous behaviour of the specific heat, anomalous behaviour of the *dielectric susceptibility* related to the order parameter.
+
+The *latent heat* is $L = T\Delta S$, where $\Delta S$ is the difference in entropy between the two phases at $T_c$. We can derive $S$ in each phase from the equilibrium free energy $F(T, p, d_z^0(T, p))$ using the expression
+
+$$S = -\left.\frac{\mathrm{d}F}{\mathrm{d}T}\right|_{d_z^0} = -\left[\frac{\partial F}{\partial T}\bigg|_{d_z^0} + \frac{\partial F}{\partial d_z}\left(\frac{\mathrm{d}(d_z)}{\mathrm{d}T}\right)\bigg|_{d_z^0}\right]. \quad (3.1.2.5)$$
+
+However, since $F$ is a minimum for $d_z = d_z^0$, the second contribution vanishes. Hence
+
+$$S = -\frac{\alpha}{2} (d_z^0)^2 - \frac{\partial F_0}{\partial T}. \quad (3.1.2.6)$$
+
+Since both $d_z^0$ and $(\partial F_0 / \partial T)$ are continuous at $T_c$, there is no entropy jump ($\Delta S = 0$) and *no latent heat at the transition*.
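The continuity of the entropy at $T_c$ can be checked directly from (3.1.2.6): above $T_c$ the order parameter vanishes, while below $T_c$ its square grows linearly from zero, so the order-parameter part of the entropy goes to zero continuously from both sides. A minimal sketch with illustrative parameters:

```python
# Illustrative Landau parameters (assumptions)
alpha, beta, Tc = 2.0, 4.0, 300.0

def d0_sq(T):
    """Square of the equilibrium order parameter: 0 above Tc,
    alpha*(Tc - T)/beta below (from minimizing (3.1.2.2))."""
    return 0.0 if T >= Tc else alpha * (Tc - T) / beta

def S_op(T):
    """Order-parameter contribution to the entropy, eq. (3.1.2.6)."""
    return -0.5 * alpha * d0_sq(T)

eps = 1e-6
# Both limits vanish: no entropy jump, hence no latent heat.
print(S_op(Tc - eps), S_op(Tc + eps))
```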
+
+Several values of the specific heat can be considered for a system, depending on the quantity that is maintained constant. In the above example, the displacement $\mathbf{d}$ of a positive ion determines the occurrence of an electric dipole (or of a macroscopic polarization **P**). The quantity $\epsilon$, which is thermodynamically conjugated to $d_z$, is therefore proportional to an electric field (the conjugation between quantities $\eta$ and $\zeta$ is expressed by the fact that infinitesimal work on the system has the form $\zeta\,\mathrm{d}\eta$; cf.
+---PAGE_BREAK---
+
+Fig. 3.1.5.12. Structure of the tungsten bronze barium sodium niobate Ba₂NaNb₅O₁₅ in its highest-temperature *P4/mbm* phase above 853 K.
+
+with ribbons of such octahedra rather widely separated by the large ionic radius barium ions in the *b* direction. The resulting structure is, both magnetically and mechanically, rather two-dimensional, with easy cleavage perpendicular to the *b* axis and highly anisotropic electrical (ionic) conduction.
+
+Most members of the BaMF₄ family (*M* = Mg, Zn, Mn, Co, Ni, Fe) have the same structure, which is that of orthorhombic $C_{2v}$ (mm2) point-group symmetry. These materials are all ferroelectric (or at least pyroelectric; the high conductivity of some makes switching difficult to demonstrate) at all temperatures, with an 'incipient' ferroelectric Curie temperature extrapolated from various physical parameters (dielectric constant, spontaneous polarization etc.) to lie 100 K or more above the melting point (ca. 1050 K). The Mn compound is unique in having a low-temperature phase transition. The reason is that Mn²⁺ represents (Shannon & Prewitt, 1969) an end point in ionic size (largest) for the divalent transition metal ions Mn, Zn, Mg, Fe, Ni, Co; hence, the Mn ion and the space for it in the lattice are not a good match. This size mismatch can be accommodated by the r.m.s. thermal motion above room temperature, but at lower temperatures a structural distortion must occur.
+
+This phase transition was first detected (Spencer et al., 1970) via ultrasonic attenuation as an anomaly near 255 K. This experimental technique is without question one of the most sensitive in discovering phase transitions, but unfortunately it gives no direct information about structure and often it signals something that is not in fact a true phase transition (in BaMnF₄ Spencer et al. emphasized that they could find no other evidence that a phase transition occurred).
+
+Raman spectroscopy was clearer (Fig. 3.1.5.11b), showing unambiguously additional vibrational spectra that arise from a doubling of the primitive unit cell. This was afterwards confirmed directly by X-ray crystallography at the Clarendon Laboratory, Oxford, by Wondre (1977), who observed superlattice lines indicative of cell doubling in the *bc* plane.
+
+The real structural distortion near 250 K in this material is even more complicated, however. Inelastic neutron scattering at Brookhaven by Shapiro et al. (1976) demonstrated convincingly that the ‘soft’ optical phonon lies not at (0, 1/2, 1/2) in the Brillouin zone, as would have been expected for the *bc*-plane cell doubling suggested on the basis of Raman studies, but at (0.39, 1/2, 1/2). This implies that the actual structural distortion from the high-temperature $C_{2v}^{12}$ ($Cmc2_1$) symmetry does indeed double the primitive cell along the *bc* diagonal but in addition modulates the lattice along the *a* axis with a resulting repeat length that is incommensurate with the original (high-temperature) lattice constant *a*. The structural distortion microscopically approximates a rigid rotation of the fluorine octahedra, as might be expected. Hence, the chronological history of developments for this material is that X-ray crystallography gave the correct lattice structure at room temperature; ultrasonic attenuation revealed a possible phase transition near 250 K; Raman spectroscopy confirmed the transition and implied that it involved primitive
+
+Fig. 3.1.5.13. Sequence of phases encountered with raising or lowering the temperature in barium sodium niobate.
+
+cell doubling; X-ray crystallography confirmed directly the cell doubling, and finally neutron scattering revealed an unexpected incommensurate modulation as well. This interplay of experimental techniques provides a rather good model as exemplary for the field. For most materials, EPR would also play an important role in the likely scenarios; however, the short relaxation times for Mn ions made magnetic resonance of relatively little utility in this example.
+
+### 3.1.5.2.8. Barium sodium niobate
+
+The tungsten bronzes represented by Ba₂NaNb₅O₁₅ have complicated sequences of structural phase transitions. The structure is shown in Fig. 3.1.5.12 and, viewed along the polar axis, consists of triangular, square and pentagonal spaces that may or may not be filled with ions. In barium sodium niobate, the pentagonal channels are filled with Ba ions, the square channels are filled with sodium ions, and the triangular areas are empty.
+
+The sequence of phases is shown in Fig. 3.1.5.13. At high temperatures (above $T_c$ = 853 K) the crystal is tetragonal and paraelectric ($P4/mbm = D_{4h}^5$). When cooled below 853 K it becomes ferroelectric and of space group $P4bm = C_{4v}^2$ (still tetragonal). Between ca. 543 and 582 K it undergoes an incommensurate distortion. From 543 to ca. 560 K it is orthorhombic and has a ‘1q’ modulation along a single orthorhombic axis. From 560 to 582 K it has a ‘tweed’ structure reminiscent of metallic lattices; it is still microscopically orthorhombic but has a short-range modulated order along a second orthorhombic direction and simultaneous short-range modulated order along an orthogonal axis, giving it an incompletely developed ‘2q’ structure.
+
+As the temperature is lowered still further, the lattice becomes orthorhombic but not incommensurate from 105 to 546 K; below 105 K it is incommensurate again, but with a microstructure quite different from that at 543–582 K. Finally, below ca. 40 K it becomes macroscopically tetragonal again, with probable space-group symmetry $P4nc$ ($C_{4v}^6$) and a primitive unit cell that is four times that of the high-temperature tetragonal phases above 582 K.
+
+This sequence of phase transitions involves rather subtle distortions that are in most cases continuous or nearly continuous. Their elucidation has required a combination of experimental techniques, emphasizing optical birefringence (Schneck, 1982), Brillouin spectroscopy (Oliver, 1990; Schneck et al., 1977; Tolédano et al., 1986; Errandonea et al., 1984), X-ray scattering, electron microscopy and Raman spectroscopy (Shawabkeh & Scott, 1991), among others. As with the other examples described in this chapter, it would have been difficult and perhaps impossible to establish the sequence of structures *via* X-ray techniques alone. In most cases, the distortions are very small and involve essentially only the oxygen ions.
+
+### 3.1.5.2.9. Tris-sarcosine calcium chloride (TSCC)
+
+Tris-sarcosine calcium chloride has the structure shown in Fig. 3.1.5.14. It consists of sarcosine molecules of formula
+---PAGE_BREAK---
+
+## 3.2. Twinning and domain structures
+
+BY V. JANOVEC, TH. HAHN AND H. KLAPPER
+
+### 3.2.1. Introduction and history
+
+Twins have been known for as long as mankind has collected minerals, admired their beauty and displayed them in museums and mineral collections. In particular, large specimens of contact and penetration twins with their characteristic re-entrant angles and simulated higher symmetries have caught the attention of mineral collectors, miners and scientists. Twinning as a special feature of crystal morphology, therefore, is a 'child' of mineralogy, and the terms and symbols in use for twinned crystals have developed during several centuries together with the development of mineralogy.
+
+The first scientific description of twinning, based on the observation of re-entrant angles, goes back to Romé de l'Isle (1783). Haüy (1801) introduced symmetry considerations into twinning. He described hemitropes (twofold rotation twins) and penetration twins, and stated that the twin face is parallel to a possible crystal face. Much pioneering work was done by Weiss (1809, 1814, 1817/1818) and Mohs (1822/1824, 1823), who extended the symmetry laws of twinning and analysed the symmetry relations of many twins occurring in minerals. Naumann (1830) was the first to distinguish between twins with parallel axes (Zwillinge mit parallelen Achsensystemen) and twins with inclined (crossed) axes (Zwillinge mit gekreuzten Achsensystemen), and developed the mathematical theory of twins (Naumann, 1856). A comprehensive survey of the development of the concept and understanding of twinning up to 1869 is presented by Klein (1869).
+
+At the beginning of the 20th century, several important mineralogical schools developed new and far-reaching ideas on twinning. The French school of Mallard (1879) and Friedel (1904) applied the lattice concept of Bravais to twinning. This culminated in the lattice classification of twins by Friedel (1904, 1926) and his introduction of the terms macles par mériédrie (twinning by merohedry), macles par pseudo-mériédrie (twinning by pseudo-merohedry), macles par mériédrie réticulaire [twinning by reticular (lattice) merohedry] and macles par pseudo-mériédrie réticulaire (twinning by reticular pseudo-merohedry). This concept of twinning was very soon taken up and further developed by Niggli in Zürich, especially in his textbooks (1919, 1920, 1924, 1941). The lattice theory of Mallard and Friedel was subsequently extensively applied and further extended by J. D. H. Donnay (1940), and in many later papers by Donnay & Donnay, especially Donnay & Donnay (1974). The Viennese school of Tschermak (1904, 1906), Tschermak & Becke (1915), and Tertsch (1936) thoroughly analysed the morphology of twins, introduced the Kantennormalengesetz and established the minimal conditions for twinning. The structural and energy aspects of twins and their boundaries were first accentuated and developed by Buerger (1945). Presently, twinning plays an important (but negative) role in crystal structure determination. Several sophisticated computer programs have been developed that correct for the presence of twinning in a small single crystal.
+
+A comprehensive review of twinning is given by Cahn (1954); an extensive treatment of mechanical twinning is presented in the monograph by Klassen-Neklyudova (1964). A tensor classification of twinning was recently presented by Wadhawan (1997, 2000). Brief modern surveys are contained in the textbooks by Bloss (1971), Giacovazzo (1992) and Indenbom (see Vainshtein et al., 1995), the latter mainly devoted to theoretical aspects. In previous volumes of *International Tables*, two articles on twinning
+
+have appeared: formulae for the calculation of characteristic twin data, based on the work by Friedel (1926, pp. 245–252), are collected by Donnay & Donnay in Section 3 of Volume II of the previous series (Donnay & Donnay, 1972), and a more mathematical survey is presented by Koch in Chapter 1.3 of Volume C of the present series (Koch, 1999).
+
+Independently from the development of the concept of twinning in mineralogy and crystallography, summarized above, the concept of domain structures was developed in physics at the beginning of the 20th century. This started with the study of ferromagnetism by Weiss (1907), who put forward the idea of a molecular field and formulated the hypothesis of differently magnetized regions, called ferromagnetic domains, that can be switched by an external magnetic field. Much later, von Hámos & Thiessen (1931) succeeded in visualizing magnetic domains by means of colloidal magnetic powder. For more details about magnetic domains see Section 1.6.4 of the present volume.
+
+In 1921, Valasek (1921) observed unusual dielectric behaviour in Rochelle salt and pointed out its similarity with anomalous properties of ferromagnetic materials. This analogy led to a prediction of 'electric' domains, i.e. regions with different directions of spontaneous polarization that can be switched by an electric field. Materials with this property were called Seignette electrics (derived from the French, 'sel de Seignette', denoting Rochelle salt). The term seignettoelectrics is still used in Russian, but in English has been replaced by the term ferroelectrics (Mueller, 1935). Although many experimental and theoretical results gave indirect evidence for ferroelectric domain structure [for an early history see Cady (1946)], it was not until 1944 that Zwicker & Scherrer (1944) reported the first direct optical observation of the domain structure in ferroelectric potassium dihydrogen phosphate (KDP). Four years later, Klassen-Neklyudova et al. (1948) observed the domain structure of Rochelle salt in a polarizing microscope (see Klassen-Neklyudova, 1964, p. 27). In the same year, Blattner et al. (1948), Kay (1948) and Matthias & von Hippel (1948) visualized domains and domain walls in barium titanate crystals using the same technique.
+
+These early studies also gave direct evidence of the influence of mechanical stress and electric field on domain structure. Further, it was disclosed that a domain structure exists only below a certain temperature, called the Curie point, and that the crystal structures below and above the Curie point have different point-group symmetries. The Curie point thus marks a structural phase transition between a paraelectric phase without a domain structure and a ferroelectric phase with a ferroelectric domain structure. Later, the term 'Curie point' was replaced by the more suitable expression Curie temperature or transition temperature.
+
+The fundamental achievement in understanding phase transitions in crystals is the Landau theory of continuous phase transitions (Landau, 1937). Besides a thermodynamic explanation of anomalies near phase transitions, it discloses that any continuous phase transition is accompanied by a discontinuous decrease of crystal symmetry. In consequence, a phase with lower symmetry can always form a domain structure.
+
+The basic role of symmetry was demonstrated in the pioneering work of Zheludev & Shuvalov (1956), who derived by simple crystallographic considerations the point groups of paraelectric and ferroelectric phases of all possible ferroelectric phase transitions and gave a formula for the number of ferroelectric domain states.
+---PAGE_BREAK---
+
+Fig. 3.3.6.3. The four variants of Japanese twins of quartz (after Frondel, 1962; cf. Heide, 1928). The twin elements 2 and *m* and their orientations are shown. In actual twins, only the upper part of each figure is realized. The lower part has been added for better understanding of the orientation relation. R, L: right-, left-handed quartz. The polarity of the twofold axis parallel to the plane of the drawing is indicated by an arrow. In addition to the cases I(R) and II(R), I(L) and II(L) also exist, but are not included in the figure. Note that a vertical line in the plane of the figure is the zone axis [11̄1] for the two rhombohedral faces *r* and *z*, and is parallel to the twin and composition plane (112̄2) and the twin axis in variant II.
+
+The eigensymmetry of high-temperature quartz is 622 (order 12). Hence, the coset of the Brazil twin law contains 12 twin operations, as follows:
+
+(i) the six twin operations of a Brazil twin in low-temperature quartz, as listed above in Example 3.3.6.3.2;
+
+(ii) three further reflections across planes {101̄0}, which bisect the three Brazil twin planes {112̄0} of low-temperature quartz;
+
+(iii) three further rotoinversions around [001]: 6̅¹, 6̅³ = $m_z$, 6̅⁵ = 6̅⁻¹.
+
+The composite symmetry is
+
+$$
+\mathcal{K} = \frac{6}{m'm'm'} \left(\bar{1}'\right),
+$$
+
+a supergroup of index [2] of the *eigensymmetry* 622.
+
+In high-temperature quartz, the combined Dauphiné–Brazil twins (Leydolt twins) are identical with Brazil twins, because the Dauphiné twin operation has become part of the *eigensymmetry* 622. Accordingly, both kinds of twins of low-temperature quartz merge into one upon heating above 846 K. We recommend that these twins are called ‘Brazil twins’, independent of their type of twinning in the low-temperature phase. Upon cooling below 846 K, transformation Dauphiné twin domains may appear in both Brazil growth domains, leading to four orientation states as shown in Fig. 3.3.6.2. Among these four orientation states, two Leydolt pairs occur. Such Leydolt domains, however, are not necessarily in contact (cf. Example 3.3.6.3.3 above).
+
+In addition to these twins with ‘parallel axes’ (merohedral twins), several kinds of growth twins with ‘inclined axes’ occur in high-temperature quartz. They are not treated here, but additional information is provided by Frondel (1962).
+
+Fig. 3.3.6.4. Twin intergrowth of ‘obverse’ and ‘reverse’ rhombohedra of rhombohedral FeBO₃. (a) ‘Obverse’ rhombohedron with four of the 12 alternative twin elements. (b) ‘Reverse’ rhombohedron (twin orientation). (c) Interpenetration of both rhombohedra, as observed in penetration twins of FeBO₃. (d) Idealized skeleton of the six components (exploded along [001] for better recognition) of the ‘obverse’ orientation state shown in (a). The components are connected at the edges along the threefold and the twofold eigensymmetry axes. The shaded faces are {1010} and {0001} coinciding twin reflection and contact planes with the twin components of the ‘reverse’ orientation state. Parts (a) to (c) courtesy of R. Diehl, Freiburg.
+
+### 3.3.6.5. Twinning of rhombohedral crystals
+
+In some rhombohedral crystals such as corundum Al₂O₃ (Wallace & White, 1967), calcite CaCO₃ or FeBO₃ (calcite structure) (Kotrbova et al., 1985; Klapper, 1987), growth twinning with a 'twofold twin rotation around the threefold symmetry axis [001]' (similar to the Dauphiné twins in low-temperature quartz described above) is common. Owing to the eigensymmetry 3̅2/m (order 12), the following 12 twin operations form the coset (twin law). They are described here in hexagonal axes:
+
+(i) three rotations around the threefold axis [001]: 6¹, 6³ = $2_z$, 6⁵ = 6⁻¹;
+
+(ii) three twofold rotations around the axes [120], [210], [1̄10];
+
+(iii) three reflections across the planes (101̄0), (1̄100), (011̄0);
+
+(iv) three rotoinversions around the threefold axis [001]: 6̅¹, 6̅³ = $m_z$ and 6̅⁵ = 6̅⁻¹.
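The identities $6^3 = 2_z$ and $\bar6^3 = m_z$ used in (i) and (iv) can be verified with explicit Cartesian matrices; in this sketch the rotoinversion $\bar6$ is represented as the inversion composed with a 60° rotation about [001]:

```python
import numpy as np

def rot_z(deg):
    """Rotation matrix about the z axis ([001]) by the given angle."""
    a = np.deg2rad(deg)
    return np.array([[np.cos(a), -np.sin(a), 0.0],
                     [np.sin(a),  np.cos(a), 0.0],
                     [0.0,        0.0,       1.0]])

six = rot_z(60)                     # sixfold rotation 6
six_bar = -rot_z(60)                # rotoinversion 6bar = inversion . 6
m_z = np.diag([1.0, 1.0, -1.0])     # reflection across (001)
two_z = rot_z(180)                  # twofold rotation about [001]

print(np.allclose(np.linalg.matrix_power(six, 3), two_z))      # True: 6^3 = 2_z
print(np.allclose(np.linalg.matrix_power(six_bar, 3), m_z))    # True: 6bar^3 = m_z
```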
+
+Some of these twin elements are shown in Fig. 3.3.6.4. They include the particularly conspicuous twin reflection plane $m_z$ perpendicular to the threefold axis [001]. The composite symmetry is
+
+$$
+\mathcal{K} = \frac{6'}{m'}(\bar{3})\frac{2}{m}\frac{2'}{m'} \quad (\text{order } 24).
+$$
+
+It is of interest that for FeBO₃ crystals this twin law always, without exception, forms penetration twins (Fig. 3.3.6.4), whereas for the isotypic calcite CaCO₃ only (0001) contact twins are found (Fig. 3.3.6.5). This aspect is discussed further in Section 3.3.8.6.
+---PAGE_BREAK---
+
+Fig. 3.3.10.13. Thin section of tetragonal leucite, K(AlSi₂O₆), between crossed polarizers. The two nearly perpendicular systems of (101) twin lamellae result from the cubic-to-tetragonal phase transition at about 878 K. Width of twin lamellae 20–40 µm. Courtesy of M. Raith, Bonn.
+
+(cf. Example 3.3.6.7). Three (cyclic) sets of orthorhombic twin lamellae with interfaces parallel to {10\bar{1}0}$_{hex}$ or {110}$_{orth}$ are generated by the transformation. More detailed observations on hexagonal-orthorhombic twins are available for the III→II (heating) and I→II (cooling) transformations of KLiSO$_4$ at about 712 and 938 K (Jennissen, 1990; Scherf et al., 1997). The development of the three systems of twin lamellae of the orthorhombic phase II is shown by two polarization micrographs in Fig. 3.3.10.14. A further example, the cubic-rhombohedral phase transition of the perovskite LaAlO$_3$, was studied by Bueble et al. (1998).
+
+Another surprising feature is the penetration of two or more differently oriented nano-sized twin lamellae, which is often encountered in electron micrographs (cf. Müller et al., 1989, Fig. 2b). In several cases, the penetration region is interpreted as a metastable area of the higher-symmetrical para-elastic parent phase.
+
+In addition to the fitting problems discussed above, the resulting final twin texture is determined by several further effects, such as:
+
+(a) the nucleation of the (twinned) daughter phase in one or several places in the crystal;
+
+(b) the propagation of the phase boundary (transformation front, cf. Fig. 3.3.10.14);
+
+(c) the tendency of the twinned crystal to minimize the overall elastic strain energy induced by the fitting problems of different twin lamellae systems.
+
+Systematic treatments of ferroelastic twin textures were first published by Boulesteix (1984, especially Section 3.3 and references cited therein) and by Shuvalov *et al.* (1985). This topic is extensively treated in Section 3.4.4 of the present volume. A detailed theoretical explanation and computational simulation of these twin textures, with numerous examples, was recently presented by Salje & Ishibashi (1996) and Salje *et al.* (1998). Textbook versions of these problems are available by Zheludev (1971) and Putnis (1992).
+
+### 3.3.10.7.4. Tweed microstructures
+
+The textures of ferroelastic twins and their fitting problems, discussed above, are 'time-independent' for both growth and deformation twins, i.e. after twin nucleation and growth, or after the mechanical deformation, in general no 'ripening process' takes place over time before the final twin structure is established.
+
+Fig. 3.3.10.14. Twin textures generated by the two different hexagonal-to-orthorhombic phase transitions of KLiSO₄. The figures show parts of (0001)$_{hex}$ plates (viewed along [001]) between crossed polarizers. (a) Phase boundary III→II with circular 712 K transition isotherm during heating. Transition from the inner (cooler) room-temperature phase III (hexagonal, dark) to the (warmer) high-temperature phase II (orthorhombic, birefringent). Owing to the loss of the threefold axis, lamellar {10\bar{1}0}$_{hex}$ = {110}$_{orth}$ cyclic twin domains of three orientation states appear. (b) Sketch of the orientation states 1, 2, 3 and the optical extinction directions of the twin lamellae. Note the tendency of the lamellae to orient their interfaces normal to the circular phase boundary. Arrows indicate the direction of motion of the transition isotherm during heating. (c) Phase boundary I→II with 938 K transition isotherm during cooling. The dark upper region is still in the hexagonal phase I, the lower region has already transformed into the orthorhombic phase II (below 938 K). Note the much finer and more irregular domain structure compared with the III→II transition in (a). Courtesy of Ch. Scherf, PhD thesis, RWTH Aachen, 1999; cf. Scherf *et al.* (1997).
+
+This is characteristically different for some transformation twins, both of the (slow) order-disorder and of the (fast) displacive type and for both metals and non-metals. Here, with time and/or with decreasing temperature, a characteristic microstructure is formed in between the high- and the low-temperature polymorph. This 'precursor texture' was first recognized and illustrated by Putnis in the investigation of cordierite transformation twinning and called 'tweed microstructure' (Putnis *et al.*, 1987; Putnis, 1992). In addition to the hexagonal-orthorhombic cordierite transformation, tweed structures have been investigated in particular in the K-feldspar orthoclase (monoclinic-triclinic transformation), in both cases involving (slow) Si-Al ordering processes. Examples of tweed structures occurring in (fast) displacive transformations are provided by tetragonal-orthorhombic Co-doped YBa₂Cu₃O₇₋δ (Schmahl *et al.*, 1989) and rhombohedral-monoclinic (Pb,Sr)₃(PO₄)₂ and (Pb,Ba)₃(PO₄)₂ (Bismayer *et al.*, 1995).
+
+Tweed microstructures are precursor twin textures, intermediate between those of the high- and the low-temperature modifications, with the following characteristic features:
+---PAGE_BREAK---
+
+# 3.4. Domain structures
+
+BY V. JANOVEC AND J. PŘÍVRATSKÁ
+
+## 3.4.1. Introduction
+
+### 3.4.1.1. Basic concepts
+
+It was demonstrated in Section 3.1.2 that a characteristic feature of structural phase transitions connected with a lowering of crystal symmetry is an anomalous behaviour near the transition, namely unusually large values of certain physical properties that vary strongly with temperature. In this chapter, we shall deal with another fundamental feature of structural phase transitions: the formation of a non-homogeneous, textured low-symmetry phase called a *domain structure*.
+
+When a crystal homogeneous in the parent (prototypic) phase undergoes a phase transition into a ferroic phase with lower point-group symmetry, then this ferroic phase is almost always formed as a non-homogeneous structure consisting of homogeneous regions called *domains* and contact regions between domains called *domain walls*. All domains have the same or the enantiomorphous crystal structure of the ferroic phase, but this structure has in different domains a different orientation, and sometimes also a different position in space. When a domain structure is observed by a measuring instrument, different domains can exhibit different tensor properties, different diffraction patterns and can differ in other physical properties. The domain structure can be visualized optically (see Fig. 3.4.1.1) or by other experimental techniques. Powerful high-resolution electron microscopy (HREM) techniques have made it possible to visualize atomic arrangements in domain structures (see Fig. 3.4.1.2). The appearance of a domain structure, detected by any reliable technique, provides the simplest unambiguous experimental proof of a structural phase transition.
+
+Under the influence of external fields (mechanical stress, electric or magnetic fields, or combinations thereof), the domain structure can change; usually some domains grow while others
+
+decrease in size or eventually vanish. This process is called *domain switching*. After removing or decreasing the field a domain structure might not change considerably, i.e. the form of a domain pattern depends upon the field history: the domain structure exhibits *hysteresis* (see Fig. 3.4.1.3). In large enough fields, switching results in a reduction of the number of domains. Such a procedure is called *detwinning*. In rare cases, the crystal may consist of one domain only. Then we speak of a *single-domain crystal*.
+
+There are two basic types of domain structures:
+
+(i) Domain structures with one or several systems of parallel plane domain walls that can be observed in an optical or electron microscope. Two systems of perpendicular domain walls are often visible (see Fig. 3.4.1.4). In polarized light domains exhibit different colours (see Fig. 3.4.1.1) and in diffraction experiments splitting of reflections can be observed (see Fig. 3.4.3.9). Domains can be switched by external mechanical stress. These features are typical for a *ferroelastic domain structure* in which neighbouring domains differ in mechanical strain (deformation). Ferroelastic domain structures can appear only in ferroelastic phases, i.e. as a result of a phase transition characterized by a spontaneous shear distortion of the crystal.
+
+(ii) Domain structures that are not visible using a polarized-light microscope and in whose diffraction patterns no splitting of reflections is observed. Special methods [e.g. etching, deposition of liquid crystals (see Fig. 3.4.1.5), electron or atomic force microscopy, or higher-rank optical effects (see Fig. 3.4.3.3)] are needed to visualize domains. Domains have the same strain and cannot usually be switched by an external mechanical stress. Such domain structures are called *non-ferroelastic domain structures*. They appear in all non-ferroelastic phases resulting from symmetry lowering that preserves the crystal family, and in partially ferroelastic phases.
+
+Another important kind of domain structure is a *ferroelectric domain structure*, in which domains differ in the direction of the spontaneous polarization. Such a domain structure is formed at ferroelectric phase transitions that are characterized by the appearance of a new polar direction in the ferroic phase.
+
+Fig. 3.4.1.1. Domain structure of tetragonal barium titanate (BaTiO₃). A thin section of barium titanate ceramic observed at room temperature in a polarized-light microscope (transmitted light, crossed polarizers). Courtesy of U. Täffner, Max-Planck-Institut für Metallforschung, Stuttgart. Different colours correspond to different ferroelastic domain states, connected areas of the same colour are ferroelastic domains and sharp boundaries between these areas are domain walls. Areas of continuously changing colour correspond to gradually changing thickness of wedge-shaped domains. An average distance between parallel ferroelastic domain walls is of the order of 1–10 µm.
+
+Fig. 3.4.1.2. Domain structure of a BaGa₂O₄ crystal seen by high-resolution transmission electron microscopy. Parallel rows are atomic layers. Different directions correspond to different ferroelastic domain states of domains, connected areas with parallel layers are different ferroelastic domains and boundaries between these areas are ferroelastic domain walls. Courtesy of H. Lemmens, EMAT, University of Antwerp.
+---PAGE_BREAK---
+
+Fig. 3.4.3.7. Ferroelastic twins in a very thin YBa$_2$Cu$_3$O$_{7-δ}$ crystal observed in a polarized-light microscope. Courtesy of H. Schmid, Université de Genève.
+
+YBa$_2$Cu$_3$O$_{7-δ}$ in Fig. 3.4.3.7. The symmetry descent $G = 4_z/m_z^* m_x m_{xy} \supset m_x m_y m_z = F_1 = F_2$ gives rise to two ferroelastic domain states $\mathbf{R}_1$ and $\mathbf{R}_2$. The twinning group $K_{12}$ of the non-trivial domain pair $(\mathbf{R}_1, \mathbf{R}_2)$ is
+
+$$ K_{12}[m_x m_y m_z] = J_{12}^* = m_x m_y m_z \cup 4_z^* \{2_x m_y m_z\} = 4_z^* / m_z m_x m_{xy}^*. \quad (3.4.3.61) $$
+
+The colour of a domain state observed in a polarized-light microscope depends on the orientation of the index ellipsoid (indicatrix) with respect to a fixed polarizer and analyser. This index ellipsoid transforms in the same way as the tensor of spontaneous strain, i.e. it has different orientations in ferroelastic domain states. Therefore, different ferroelastic domain states exhibit different colours: in Fig. 3.4.3.7, the blue and pink areas (with different orientations of the ellipse representing the spontaneous strain in the plane of figure) correspond to two different ferroelastic domain states. A rotation of the crystal that does not change the orientation of ellipses (e.g. a 180° rotation about an axis parallel to the fourfold rotation axis) does not change the colours (ferroelastic domain states). If one neglects disorientations of ferroelastic domain states (see Section 3.4.3.6) – which are too small to be detected by polarized-light microscopy – then none of the operations of the group $F_1 = F_2 = m_x m_y m_z$ change the single-domain ferroelastic domain states $\mathbf{R}_1, \mathbf{R}_2$, hence there is no change in the colours of domain regions of the crystal. On the other hand, all operations with a
+
+star symbol (operations lost at the transition) exchange domain states $\mathbf{R}_1$ and $\mathbf{R}_2$, i.e. also exchange the two colours in the domain regions. The corresponding permutation is a transposition of two colours and this attribute is represented by a star attached to the symbol of the operation. This exchange of colours is nicely demonstrated in Fig. 3.4.3.7 where a -90° rotation is accompanied by an exchange of the pink and blue colours in the domain regions (Schmid, 1991, 1993).
+
+It can be shown (Shuvalov et al., 1985; Dudnik & Shuvalov, 1989) that for small spontaneous strains the amount of shear $s$ and the angle $\varphi$ can be calculated from the second invariant $\Lambda_2$ of the differential tensor $\Delta u_{ik}$:
+
+$$ s = 2\sqrt{-\Lambda_2}, \qquad (3.4.3.62) $$
+
+$$ \varphi = \sqrt{-\Lambda_2}, \qquad (3.4.3.63) $$
+
+where
+
+$$ \Lambda_2 = \begin{vmatrix} \Delta u_{11} & \Delta u_{12} \\ \Delta u_{21} & \Delta u_{22} \end{vmatrix} + \begin{vmatrix} \Delta u_{22} & \Delta u_{23} \\ \Delta u_{32} & \Delta u_{33} \end{vmatrix} + \begin{vmatrix} \Delta u_{11} & \Delta u_{13} \\ \Delta u_{31} & \Delta u_{33} \end{vmatrix}. \qquad (3.4.3.64) $$
+
+In our example, where there are only two nonzero components of the differential spontaneous strain tensor [see equation (3.4.3.58)], the second invariant is $\Lambda_2 = \Delta u_{11}\Delta u_{22} = -(u_{22}-u_{11})^2$ and the angle $\varphi$ is
+
+$$ \varphi = \pm|u_{22} - u_{11}|. \qquad (3.4.3.65) $$
+
+In this case, the angle $\varphi$ can also be expressed as $\varphi = \pi/2 - 2\arctan a/b$, where a and b are lattice parameters of the orthorhombic phase (Schmid et al., 1988).
+
+The shear angle $\varphi$ ranges in ferroelastic crystals from minutes to degrees (see e.g. Schmid et al., 1988; Dudnik & Shuvalov, 1989).
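As a numerical illustration of equations (3.4.3.62)–(3.4.3.65), the sketch below evaluates the shear angle once from the differential spontaneous strain tensor and once from the lattice-parameter formula $\varphi = \pi/2 - 2\arctan a/b$. The lattice parameters are illustrative values of our own choosing (close to those of an orthorhombic YBa₂Cu₃O₇₋δ-type cell), not data quoted in the text.

```python
import math

# Illustrative orthorhombic lattice parameters (Angstrom); assumed values.
a, b = 3.82, 3.89

# Spontaneous strains of one domain state relative to the average cell:
a0 = (a + b) / 2
u11, u22 = (a - a0) / a0, (b - a0) / a0

# Differential tensor between the two domain states: only Delta u11 and
# Delta u22 are nonzero, with Delta u22 = -Delta u11.
du11, du22 = u11 - u22, u22 - u11
Lambda2 = du11 * du22                # second invariant, eq. (3.4.3.64)
phi = math.sqrt(-Lambda2)            # disorientation angle, eq. (3.4.3.63)
s = 2 * math.sqrt(-Lambda2)          # amount of shear, eq. (3.4.3.62)

# Independent check via the lattice-parameter formula (Schmid et al., 1988):
phi_lattice = math.pi / 2 - 2 * math.atan(a / b)
```

Both routes give $\varphi \approx 0.018$ rad, i.e. about 1°, which lies in the minutes-to-degrees range quoted above.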
+
+Each equally deformed plane gives rise to two compatible domain walls of the same orientation but with opposite sequence of domain states on each side of the plane. We shall use for a *simple domain twin* with a planar wall a symbol $(\mathbf{R}_1^+ | \mathbf{n} | \mathbf{R}_2^-)$ in which **n** denotes the normal to the wall. The bra–ket symbols $(\,|$ and $|\,)$ represent the half-space domain regions on the negative and positive sides of **n**, for which we have used the letters $B_1$ and $B_2$, respectively. Then $(\mathbf{R}_1^+ |$ and $|\mathbf{R}_2^-)$ represent domains $\mathbf{D}_1(\mathbf{R}_1^+, B_1)$ and $\mathbf{D}_2(\mathbf{R}_2^-, B_2)$, respectively. The symbol $(\mathbf{R}_1^+ | \mathbf{R}_2^-)$ properly specifies a domain twin with a zero-thickness domain wall.
+
+A domain wall can be considered as a domain twin with domain regions restricted to non-homogeneous parts near the plane $p$. For a domain wall in domain twin $(\mathbf{R}_1^+ | \mathbf{R}_2^-)$ we shall use the symbol $[\mathbf{R}_1^+ | \mathbf{R}_2^-]$, which expresses the fact that a domain wall of zero thickness needs the same specification as the domain twin.
+
+If we exchange domain states in the twin $(\mathbf{R}_1^+ | \mathbf{n} | \mathbf{R}_2^-)$, we get a *reversed twin (wall)* with the symbol $(\mathbf{R}_2^- | \mathbf{n} | \mathbf{R}_1^+)$. These two ferroelastic twins are depicted in the lower right and upper left parts of Fig. 3.4.3.8, where – for ferroelastic-non-ferroelectric twins – we neglect spontaneous polarization of ferroelastic domain states. The reversed twin $(\mathbf{R}_2^- | \mathbf{n}' | \mathbf{R}_1^+)$ has the opposite shear direction.
+
+A twin and its reversed twin may, but need not, be crystallographically equivalent. Thus e.g. ferroelastic-non-ferroelectric twins $(\mathbf{R}_1^+ | \mathbf{n} | \mathbf{R}_2^-)$ and $(\mathbf{R}_2^- | \mathbf{n} | \mathbf{R}_1^+)$ in Fig. 3.4.3.8 are equivalent, e.g. via $2_z$, whereas ferroelastic-ferroelectric twins $(\mathbf{S}_1^+ | \mathbf{n} | \mathbf{S}_3^-)$ and $(\mathbf{S}_3^- | \mathbf{n} | \mathbf{S}_1^+)$ are not equivalent, since there is no operation in the group $K_{12}$ that would transform $(\mathbf{S}_1^+ | \mathbf{n} | \mathbf{S}_3^-)$ into $(\mathbf{S}_3^- | \mathbf{n} | \mathbf{S}_1^+)$. As we shall show in the next section, the symmetry group $T_{12}(\mathbf{n})$ of a twin and the symmetry group $T_{21}(\mathbf{n})$ of the reversed twin are equal,
+
+$$ T_{12}(\mathbf{n}) = T_{21}(\mathbf{n}). \qquad (3.4.3.66) $$
\ No newline at end of file
diff --git a/samples/texts_merged/5725464.md b/samples/texts_merged/5725464.md
new file mode 100644
index 0000000000000000000000000000000000000000..5e9060a41dbb37c48c6cc3b22e2d5e7f7fc26c32
--- /dev/null
+++ b/samples/texts_merged/5725464.md
@@ -0,0 +1,2282 @@
+
+---PAGE_BREAK---
+
+Laconic Oblivious Transfer and its Applications
+
+Chongwon Cho
+HRL Laboratories
+
+Nico Döttling*
+UC Berkeley
+
+Sanjam Garg†
+UC Berkeley
+
+Divya Gupta†,‡
+Microsoft Research India
+
+Peihan Miao†
+UC Berkeley
+
+Antigoni Polychroniadou§
+Cornell University
+
+Abstract
+
+In this work, we introduce a novel technique for secure computation over large inputs. Specifically, we provide a new oblivious transfer (OT) protocol with a laconic receiver. Laconic OT allows a receiver to commit to a large input $D$ (of length $M$) via a short message. Subsequently, a single short message by a sender allows the receiver to learn $m_{D[L]}$, where the messages $m_0, m_1$ and the location $L \in [M]$ are dynamically chosen by the sender. All prior constructions of OT required the receiver's outgoing message to grow with $D$.
+
+Our key contribution is an instantiation of this primitive based on the Decisional Diffie-Hellman (DDH) assumption in the common reference string (CRS) model. The technical core of this construction is a novel use of somewhere statistically binding (SSB) hashing in conjunction with hash proof systems. Next, we show applications of laconic OT to non-interactive secure computation on large inputs and multi-hop homomorphic encryption for RAM programs.
+
+*Research supported by a postdoc fellowship of the German Academic Exchange Service (DAAD).
+
+†Research supported in part from 2017 AFOSR YIP Award, DARPA/ARL SAFEWARE Award W911NF15C0210, AFOSR Award FA9550-15-1-0274, NSF CRII Award 1464397, and research grants by the Okawa Foundation, Visa Inc., and Center for Long-Term Cybersecurity (CLTC, UC Berkeley). The views expressed are those of the author and do not reflect the official policy or position of the funding agencies.
+
+‡Work done while at University of California, Berkeley.
+
+§Part of the work done while visiting University of California, Berkeley. Research supported in part the National Science Foundation under Grant No. 1617676, IBM under Agreement 4915013672, and the Packard Foundation under Grant 2015-63124.
+---PAGE_BREAK---
+
+# Contents
+
+| Section | Title | Page |
+|---|---|---|
+| 1 | Introduction | 4 |
+| 1.1 | Laconic OT | 4 |
+| 1.2 | Warm-Up Application: Non-Interactive Secure Computation on Large Inputs | 5 |
+| 1.3 | Main Application: Multi-Hop Homomorphic Encryption for RAM Programs | 6 |
+| 1.4 | Roadmap | 7 |
+| 2 | Technical Overview | 8 |
+| 2.1 | Laconic OT | 8 |
+| 2.1.1 | Laconic OT with Factor-2 Compression | 8 |
+| 2.1.2 | Bootstrapping Laconic OT | 10 |
+| 2.2 | Non-interactive Secure Computation on Large Inputs | 12 |
+| 2.3 | Multi-Hop Homomorphic Encryption for RAM Programs | 13 |
+| 3 | Laconic Oblivious Transfer | 14 |
+| 3.1 | Laconic OT | 15 |
+| 3.2 | Updatable Laconic OT | 17 |
+| 4 | Laconic Oblivious Transfer with Factor-2 Compression | 18 |
+| 4.1 | Somewhere Statistically Binding Hash Functions and Hash Proof Systems | 18 |
+| 4.2 | HPS-friendly SSB Hashing | 19 |
+| 4.3 | A Hash Proof System for Knowledge of Preimage Bits | 22 |
+| 4.4 | The Laconic OT Scheme | 23 |
+| 5 | Construction of Updatable Laconic OT | 26 |
+| 5.1 | Background | 26 |
+| 5.1.1 | Garbled Circuits | 26 |
+| 5.1.2 | Merkle Tree | 27 |
+| 5.2 | Construction | 27 |
+| 5.3 | Security | 32 |
+| 6 | Warm-Up Application: Non-Interactive Secure Computation (NISC) on Large Inputs in RAM Setting | 36 |
+| 6.1 | Background | 36 |
+| 6.1.1 | Random Access Machine (RAM) Model of Computation | 36 |
+| 6.1.2 | Oblivious Transfer | 37 |
+| 6.2 | Formal Model for NISC in RAM Setting | 38 |
+| 6.3 | Construction | 40 |
+| 6.4 | Correctness | 42 |
+| 6.5 | Security Proof | 44 |
+| 6.6 | Extension | 46 |
+| 7 | Main Application: Multi-Hop Homomorphic Encryption for RAM Programs | 47 |
+| 7.1 | Our Model | 47 |
+| 7.2 | Building Blocks Needed | 50 |
+| 7.2.1 | 2-message Secure Function Evaluation based on Garbled Circuits | 50 |
+---PAGE_BREAK---
+
+| Section | Title | Page |
+|---|---|---|
+| 7.2.2 | Re-Randomizable Secure Function Evaluation based on Garbled Circuits | 50 |
+| 7.3 | Our Construction of Multi-hop RAM Scheme | 52 |
+| 7.3.1 | UMA Secure Construction | 52 |
+| 7.3.2 | Correctness | 58 |
+| 7.3.3 | Extending to Multiple Executions | 61 |
+| 7.3.4 | Security Proof | 61 |
+| 7.3.5 | UMA to Full Security for Multi-hop RAM Scheme | 66 |
+
+---PAGE_BREAK---
+
+# 1 Introduction
+
+Big data poses serious challenges for the current cryptographic technology. In particular, cryptographic protocols for secure computation are typically based on Boolean circuits, where both the computational complexity and communication complexity scale with the size of the input dataset, which makes them generally unsuitable for even moderate dataset sizes. Over the past few decades, substantial effort has been devoted towards realizing cryptographic primitives that overcome these challenges. This includes works on fully-homomorphic encryption (FHE) [Gen09, BV11b, BV11a, GSW13] and on the RAM setting of oblivious RAM [Gol87, Ost90] and secure RAM computation [OS97, GKK$^{+}$12, LO13, GHL$^{+}$14, GGMP16]. Protocols based on FHE generally have a favorable communication complexity and are basically non-interactive, yet incur a prohibitively large computational overhead (dependent on the dataset size). On the other hand, protocols for the RAM model generally have a favorable computational overhead, but lack in terms of communication efficiency (that grows with the program running time), especially in the multi-party setting. Can we achieve the best of both worlds? In this work we make positive progress on this question. Specifically, we introduce a new tool called laconic oblivious transfer that helps to strike a balance between the two seemingly opposing goals.
+
+Oblivious transfer (or OT for short) is a fundamental and powerful primitive in cryptography [Kil88, IPS08]. Since its first introduction by Rabin [Rab81], OT has been a foundational building block for realizing secure computation protocols [Yao82, GMW87, IPS08]. However, typical secure computation protocols involve executions of multiple instances of an oblivious transfer protocol. In fact, the number of needed oblivious transfers grows with the input size of one of the parties, which is the receiver of the oblivious transfer.¹ In this work, we observe that a two-message OT protocol, with a short message from the receiver, can be a key tool towards the goal of obtaining simultaneous improvements in computational and communication cost for secure computation.
+
+## 1.1 Laconic OT
+
+In this paper, we introduce the notion of laconic oblivious transfer (or laconic OT for short). Laconic OT allows an OT receiver to commit to a large input $D \in \{0, 1\}^M$ via a short message. Subsequently, the sender responds with a single short message to the receiver depending on dynamically chosen two messages $m_0, m_1$ and a location $L \in [M]$. The sender's response message allows the receiver to recover $m_{D[L]}$ (while $m_{1-D[L]}$ remains computationally hidden). Furthermore, without any additional communication with the receiver, the sender could repeat this process for multiple choices of $L$. The construction we give is secure against semi-honest adversaries, but it can be upgraded to the malicious setting in a similar way as we will discuss in Section 1.2 for the first application.
+
+Our construction of laconic OT is obtained by first realizing a “mildly compressing” laconic OT protocol for which the receiver’s message is factor-2 compressing, i.e., half the size of its input. We base this construction on the Decisional Diffie-Hellman (DDH) assumption. We note that, subsequent to our work, the factor-2 compression construction has been simplified by Döttling and Garg [DG17] (another alternative simplification can be obtained using [AIKW13]). Next we show that such a “mildly compressing” laconic OT can be bootstrapped, via the usage of a Merkle Hash
+
+¹We remark that related prior works on OT extension [Bea96, IKNP03, KK13, ALSZ13] make the number of public key operations performed during protocol executions independent of the receiver's input size. However, the communication complexity of receivers in these protocols still grows with the input size of the receiver.
+---PAGE_BREAK---
+
+Tree and Yao's Garbled Circuits [Yao82], to obtain a "fully compressing" laconic OT, where the size of the receiver's message is independent of its input size. The laconic OT scheme with a Merkle Tree structure allows for good properties like local verification and local updates, which makes it a powerful tool in secure computation with large inputs.
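The Merkle-tree component of this bootstrapping can be sketched in isolation. The code below is our own illustration, not the paper's construction (which additionally evaluates garbled circuits along the hash path): it shows how a short root digest commits to a large database while any single location can be opened, verified and locally updated using only O(log M) hashes.

```python
import hashlib

def H(x: bytes) -> bytes:
    return hashlib.sha256(x).digest()

def build_tree(leaves):
    # Pad to a power of two, then hash pairwise level by level; all internal
    # nodes are kept so locations can be opened and updated locally.
    n = 1
    while n < len(leaves):
        n *= 2
    leaves = leaves + [b""] * (n - len(leaves))
    levels = [[H(leaf) for leaf in leaves]]
    while len(levels[-1]) > 1:
        prev = levels[-1]
        levels.append([H(prev[i] + prev[i + 1]) for i in range(0, len(prev), 2)])
    return levels                      # levels[-1][0] is the short digest

def open_path(levels, L):
    # Authentication path: sibling hashes from leaf L up to the root.
    path, idx = [], L
    for level in levels[:-1]:
        path.append(level[idx ^ 1])
        idx //= 2
    return path

def verify(digest, L, leaf, path):
    h, idx = H(leaf), L
    for sib in path:
        h = H(h + sib) if idx % 2 == 0 else H(sib + h)
        idx //= 2
    return h == digest

def update(levels, L, new_leaf):
    # A write at location L changes only the O(log M) nodes on its path.
    levels[0][L] = H(new_leaf)
    idx = L
    for d in range(1, len(levels)):
        idx //= 2
        levels[d][idx] = H(levels[d - 1][2 * idx] + levels[d - 1][2 * idx + 1])
```

For a database of M leaves, the digest is a single 32-byte value, an opening consists of log₂ M sibling hashes, and `update` re-hashes only one root-to-leaf path; this is exactly the local verification and local update property the text attributes to the scheme.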
+
+We will show new applications of laconic OT to non-interactive secure computation and homomorphic encryption for RAM programs, as briefly described below in Sections 1.2 and 1.3.
+
+## 1.2 Warm-Up Application: Non-Interactive Secure Computation on Large Inputs
+
+Can a receiver publish a (small) encoding of her large confidential database $D$ so that any sender, who holds a secret input $x$, can reveal the output $f(x, D)$ (where $f$ is a circuit) to the receiver by sending her a single message? For security, we want the receiver's encoding to hide $D$ and the sender's message to hide $x$. Using laconic OT, we present the first solution to this problem. In our construction, the receiver's published encoding is independent of the size of her database, but we do not restrict the size of the sender's message.²
+
+**RAM Setting.** Consider the scenario where $f$ can be computed using a RAM program $P$ of running time $t$. We use the notation $P^D(x)$ to denote the execution of the program $P$ on input $x$ with random access to the database $D$. We provide a construction where as before the size of the receiver's published message is independent of the size of the database $D$. Moreover, the size of the sender's message (and computational cost of the sender and the receiver) grows only with $t$ and the receiver learns nothing more than the output $P^D(x)$ and the locations in $D$ touched during the computation. Note that in all prior works on general secure RAM computation [OS97, GKK$^{+}$12, LO13, WHC$^{+}$14, GHL$^{+}$14, GLOS15, GLO15] the size of the receiver's message grew at least with its input size.³
+
+**Against Malicious Adversaries.** The results above are obtained in the semi-honest setting. We can upgrade to security against a malicious sender by use of (i) non-interactive zero knowledge proofs (NIZKs) [FLS90] at the cost of additionally assuming doubly enhanced trapdoor permutations or bilinear maps [CHK04, GOS06], (ii) the techniques of Ishai et al. [IKO$^{+}$11] while obtaining slightly weaker security,⁴ or (iii) interactive zero-knowledge proofs but at the cost of additional interaction.
+
+Upgrading to security against a malicious receiver is tricky. This is because the receiver's public encoding is short and hence, it is not possible to recover the receiver's entire database just given the encoding. Standard simulation-based security can be obtained by using (i) universal arguments
+
+²We remark that solutions for this problem based on fully-homomorphic encryption (FHE) [Gen09, LNO13], unlike our result, reduce the communication cost of both the sender's and the receiver's messages to be independent of the size of $D$, but require additional rounds of interaction.
+
+³The communication cost of the receiver's message can be reduced to depend only on the running time of the program by allowing round complexity to grow with the running time of the program (using Merkle Hashing). Analogous to the circuit case, we remark that FHE-based solutions can make the communication of both the sender and the receiver small, but at the cost of extra rounds. Moreover, in the setting of RAM programs FHE-based solutions additionally incur an increased computational cost for the receiver. In particular, the receiver's computational cost grows with the size of its database.
+
+⁴The receiver is required to keep the output of the computation private.
+---PAGE_BREAK---
+
+as done by [CV12, COV15] at the cost of additional interaction, or (ii) using SNARKs at the cost
+of making extractability assumptions [BCCT12, BSCG$^{+}$13].⁵
+
+**Other Related Work.** Prior works consider secure computation which hides the input size of one [MRK03, IP07, ADT11, LNO13] or both parties [LNO13]. Our notion only requires the receiver's communication cost to be independent of its input size, and is therefore weaker. However, these results are largely restricted to special functionalities, such as zero-knowledge sets and computing certain branching programs (which imply input-size hiding private set intersection). The general result of [LNO13] uses FHE and as mentioned earlier needs more rounds of interaction.⁶
+
+## 1.3 Main Application: Multi-Hop Homomorphic Encryption for RAM Programs
+
+Consider a scenario where $S$ (a server), holding an input $x$, publishes an encryption $ct_0$ of her private input $x$ under her public key. Now this ciphertext is passed on to a client $Q_1$ that homomorphically computes a (possibly private) program $P_1$ accessing (private) memory $D_1$ on the value encrypted in $ct_0$, obtaining another ciphertext $ct_1$. More generally, the computation could be performed by multiple clients. In other words, clients $Q_2, Q_3, \dots$ could sequentially compute private programs $P_2, P_3, \dots$ accessing their own private databases $D_2, D_3, \dots$. Finally, we want $S$ to be able to use her secret key to decrypt the final ciphertext and recover the output of the computation. For security, we require simulation based security for a client $Q_i$ against a collusion of the server and any subset of the clients, and IND-CPA security for the server's ciphertext.
+
+Though we described the simple case above, we are interested in the general case when computation is performed in different sequences of the clients. Examples of two such computation paths are shown in Figure 1. Furthermore, we consider the setting of persistent databases, where each client is able to execute dynamically chosen programs on the encrypted ciphertexts while using the same database that gets updated as these programs are executed.
+
+Figure 1: Two example paths of computation on server $S$'s ciphertexts.
+
+**FHE-Based Solution.** Gentry's [Gen09] fully homomorphic encryption (FHE) scheme offers a solution to the above problem when circuit representations of the desired programs $P_1, P_2, \dots$ are considered. Specifically, $S$ could encrypt her input $x$ using an FHE
+
+$^5$We finally note that relaxing to the weaker notion of indistinguishability-based security we can expect to obtain the best of both worlds, i.e. a non-interactive solution while making only a black-box use of the adversary (a.k.a. avoiding the use of extractability assumptions). We leave this open for future work.
+
+$^6$We remark that in an orthogonal work of Hubacek and Wichs [HW15] obtain constructions where the communication cost is independent of the length of the output of the computation using indistinguishability obfuscation [GGH$^{+}$13b].
+---PAGE_BREAK---
+
+scheme. Now, the clients can publicly compute arbitrary programs on the encrypted value using a public evaluation procedure. This procedure can be adapted to preserve the privacy of the computed circuit [OPP14, DS16, BPMW16] as well. However, this construction only works for circuits. Realizing the scheme for RAM programs involves first converting the RAM program into a circuit of size at least linear in the size of the database. This linear effort can be exponential in the running time of the program for several applications of interest such as binary search.
+
+**Our Relaxation.** In obtaining homomorphic encryption for RAM programs, we start by relaxing the compactness requirement in FHE.⁷ Compactness in FHE requires that the size of the ciphertexts does not grow with computation. In particular, in our scheme, we allow the evaluated ciphertexts to be bigger than the original ciphertext. Gentry, Halevi and Vaikuntanathan [GHV10] considered an analogous setting for the case of circuits. As in Gentry et al. [GHV10], in our setting computation itself will happen at the time of decryption. Therefore, we additionally require that clients $Q_1, Q_2, \dots$ first ship pre-processed versions of their databases to S for the decryption, and security will additionally require that S does not learn the access pattern of the programs on client databases. This brings us to the following question:
+
+> Can we realize multi-hop encryption schemes for RAM programs where the ciphertext grows linearly only in the running time of the computation performed on it?
+
+We show that laconic OT can be used to realize such a multi-hop homomorphic encryption scheme for RAM programs. Our result bridges the gap between growth in ciphertext size and computational complexity of homomorphic encryption for RAM programs.
+
+Our work also leaves open the problem of realizing (fully or somewhat) homomorphic encryption for RAM programs with (somewhat) compact ciphertexts and for which computational cost grows with the running time of the computation, based on traditional computational assumptions. Our solution for multi-hop RAM homomorphic encryption is for the semi-honest (or, semi-malicious) setting only. We leave open the problem of obtaining a solution in the malicious setting.⁸
+
+## 1.4 Roadmap
+
+We now lay out a roadmap for the remainder of the paper. In Section 2 we give a technical overview of this work. We introduce the notion of laconic OT formally in Section 3, and give a construction with factor-2 compression in Section 4, which can be bootstrapped to a fully compressing updatable laconic OT in Section 5. Finally we present our two applications in Sections 6 and 7.
+
+⁷One method for realizing homomorphic encryption for RAM programs [GKP+13, GHRW14, CHJV15, BGL+15, KLW15] would be to use obfuscation [GGH+13b] based on multilinear maps [GGH13a]. However, in this paper we focus on basing homomorphic RAM computation on DDH and defer the work on obfuscation to future work.
+
+⁸Using NIZKs alone does not solve the problem, because locations accessed during computation are dynamically decided.
+---PAGE_BREAK---
+
+# 2 Technical Overview
+
+## 2.1 Laconic OT
+
+We will now provide an overview of laconic OT and our constructions of this new primitive. Laconic OT consists of two major components: a hash function and an encryption scheme. We will call the hash function **Hash** and the encryption scheme (**Send**, **Receive**). In a nutshell, laconic OT allows a receiver $R$ to compute a *succinct digest* **digest** of a large database $D$ and a private state $\hat{D}$ using the hash function **Hash**. After **digest** is made public, anyone can non-interactively send OT messages to $R$ w.r.t. a location $L$ of the database such that the receiver's choice bit is $D[L]$. Here, $D[L]$ is the database-entry at location $L$. In more detail, given **digest**, a database location $L$, and two messages $m_0$ and $m_1$, the algorithm **Send** computes a ciphertext $e$ such that $R$, who owns $\hat{D}$, can use the decryption algorithm **Receive** to decrypt $e$ to obtain the message $m_{D[L]}$.
+
+For security, we require sender privacy against a semi-honest receiver. In particular, given an honest receiver's view, which includes the database $D$, the message $m_{1-D[L]}$ is computationally hidden. We formalize this using a simulation-based definition. On the other hand, in contrast to standard oblivious transfer, we do not require receiver privacy; namely, no security guarantee is provided against a cheating (semi-honest) sender. This is mostly for ease of exposition. Nevertheless, adding receiver privacy to laconic OT can be done in a straightforward manner via the usage of garbled circuits and two-message OT (see Section 3.1 for a detailed discussion).
+
+For efficiency, we have the following requirements: First, the size of **digest** depends only on the security parameter and is independent of the size of the database $D$. Moreover, after **digest** and $\hat{D}$ are computed by **Hash**, the workload of both the sender and receiver (that is, the runtime of both **Send** and **Receive**) becomes essentially independent of the size of the database (i.e., depending at most polynomially on $\log(|D|)$).
+
+Notice that our security definition and efficiency requirement immediately imply that the **Hash** algorithm used to compute the succinct digest must be collision resistant. Thus, it is clear that the hash function must be keyed and in our case it is keyed by a common reference string.
+
+**Construction at a high level.** We first construct a laconic OT scheme with factor-2 compression, which compresses a $2\lambda$-bit database to a $\lambda$-bit digest. Next, to get laconic OT for databases of arbitrary size, we bootstrap this construction using an interesting combination of Merkle hashing and garbled circuits. Below, we give an overview of each of these steps.
+
+### 2.1.1 Laconic OT with Factor-2 Compression
+
+We start with a construction of a laconic OT scheme with factor-2 compression, i.e., a scheme that hashes a $2\lambda$-bit database to a $\lambda$-bit digest. This construction is inspired by the notion of witness encryption [GGSW13]. We will first explain the scheme based on witness encryption. Then, we show how this specific witness encryption scheme can be realized with the more standard notion of hash proof systems (HPS) [CS02]. Our overall scheme will be based on the Decisional Diffie-Hellman (DDH) assumption.
+
+**Construction Using Witness Encryption.** Recall that a witness encryption scheme is defined for an NP-language $\mathcal{L}$ (with corresponding witness relation $\mathcal{R}$). It consists of two algorithms Enc and Dec. The algorithm Enc takes as input a problem instance $x$ and a message $m$, and
+---PAGE_BREAK---
+
+produces a ciphertext. A recipient of the ciphertext can use Dec to decrypt the message if $x \in \mathcal{L}$ and the recipient knows a witness $w$ such that $\mathcal{R}(x, w)$ holds. There are two requirements for a witness encryption scheme, correctness and security. Correctness requires that if $\mathcal{R}(x, w)$ holds, then $\text{Dec}(x, w, \text{Enc}(x, m)) = m$. Security requires that if $x \notin \mathcal{L}$, then $\text{Enc}(x, m)$ computationally hides $m$.
+
+We will now discuss how to construct a laconic OT with factor-2 compression using a two-to-one hash function and witness encryption. Let $\mathbf{H} : \mathcal{K} \times \{0,1\}^{2\lambda} \to \{0,1\}^{\lambda}$ be a keyed hash function, where $\mathcal{K}$ is the key space. Consider the language $\mathcal{L} = \{(K,L,y,b) \in \mathcal{K} \times [2\lambda] \times \{0,1\}^{\lambda} \times \{0,1\} | \exists D \in \{0,1\}^{2\lambda} \text{ such that } \mathbf{H}(K,D) = y \text{ and } D[L] = b\}$. Let (Enc, Dec) be a witness encryption scheme for the language $\mathcal{L}$.
+
+The laconic OT scheme is as follows: The Hash algorithm computes $y = \mathbf{H}(K, D)$ where $K$ is the common reference string and $D \in \{0,1\}^{2\lambda}$ is the database. Then $y$ is published as the digest of the database. The Send algorithm takes as input $K, y$, a location $L$, and two messages $(m_0, m_1)$ and proceeds as follows. It computes two ciphertexts $e_0 \leftarrow \text{Enc}((K, L, y, 0), m_0)$ and $e_1 \leftarrow \text{Enc}((K, L, y, 1), m_1)$ and outputs $e = (e_0, e_1)$. The Receive algorithm takes as input $K, L, y, D$, and the ciphertext $e = (e_0, e_1)$ and proceeds as follows. It sets $b = D[L]$, computes $m \leftarrow \text{Dec}((K, L, y, b), D, e_b)$ and outputs $m$.
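The correctness logic of this scheme can be mimicked in a few lines of Python. The sketch below is a toy with no security whatsoever: the witness encryption is modeled by simply deferring the relation check to decryption time (a real scheme hides $m$ whenever the instance is false), and the two-to-one hash is a truncated SHA-256. All function names (`we_enc`, `we_dec`, `send`, `receive`) are ours, not the paper's.

```python
import hashlib

LAM = 16  # toy security parameter, in bits

def H(K: bytes, D: str) -> str:
    # toy two-to-one hash: a 2*LAM-bit string down to LAM bits
    h = hashlib.sha256(K + D.encode()).hexdigest()
    return bin(int(h, 16))[2:].zfill(256)[:LAM]

def we_enc(instance, m):
    # witness "encryption" modeled by deferring the relation check to
    # decryption time; a real scheme hides m when the instance is false
    return (instance, m)

def we_dec(ct, witness_D: str):
    (K, L, y, b), m = ct
    if H(K, witness_D) == y and witness_D[L] == str(b):
        return m
    return None

def send(K, y, L, m0, m1):
    # one witness-encryption ciphertext per possible value of D[L]
    return (we_enc((K, L, y, 0), m0), we_enc((K, L, y, 1), m1))

def receive(K, D, L, e):
    b = int(D[L])              # the receiver's "choice bit" is D[L]
    return we_dec(e[b], D)

K, D = b"crs", "01" * LAM
e = send(K, H(K, D), 5, "m0", "m1")
assert receive(K, D, 5, e) == "m1"   # D[5] = 1, so the receiver gets m1
```

Note that `we_dec(e[0], D)` returns `None` here only because the toy checks the relation explicitly; in the real scheme the analogous guarantee must come from the security of witness encryption, which is exactly the issue addressed next.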
+
+It is easy to check that the above scheme satisfies correctness. However, we run into trouble when trying to prove sender privacy. Since $\mathbf{H}$ compresses $2\lambda$ bits to $\lambda$ bits, most hash values have exponentially many pre-images. This implies that for most values of $(K, L, y)$, it holds that both $(K, L, y, 0) \in \mathcal{L}$ and $(K, L, y, 1) \in \mathcal{L}$, that is, most problem instances are yes-instances. However, to reduce sender privacy of our scheme to the security of witness encryption, we ideally want that if $y = \mathbf{H}(K, D)$, then $(K, L, y, D[L]) \in \mathcal{L}$ while $(K, L, y, 1 - D[L]) \notin \mathcal{L}$. To overcome this problem, we will use a somewhere statistically binding hash function that allows us to artificially introduce no-instances as described below.
+
+**Somewhere Statistically Binding Hash to the Rescue.** Somewhere statistically binding (SSB) hash functions [HW15, KLW15, OPWW15] support a special key generation procedure such that the hash value information theoretically fixes certain bit(s) of the pre-image. In particular, the special key generation procedure takes as input a location $L$ and generates a key $K^{(L)}$. Then the hash function keyed by $K^{(L)}$ will bind the $L$-th bit of the pre-image. That is, $K^{(L)}$ and $y = \mathbf{H}(K^{(L)}, D)$ uniquely determines $D[L]$. The security requirement for SSB hashing is the *index-hiding* property, i.e., keys $K^{(L)}$ and $K^{(L')}$ should be computationally indistinguishable for any $L \neq L'$.
+
+We can now establish security of the above laconic OT scheme when instantiated with SSB hash functions. To prove security, we will first replace the key $K$ by a key $K^{(L)}$ that statistically binds the $L$-th bit of the pre-image. The index hiding property guarantees that this change goes unnoticed. Now for every hash value $y = \mathbf{H}(K^{(L)}, D)$, it holds that $(K^{(L)}, L, y, D[L]) \in \mathcal{L}$ while $(K^{(L)}, L, y, 1 - D[L]) \notin \mathcal{L}$. We can now rely on the security of witness encryption to argue that $\text{Enc}((K^{(L)}, L, y, 1 - D[L]), m_{1-D[L]})$ computationally hides the message $m_{1-D[L]}$.
+
+**Working with DDH.** The above described scheme relies on a witness encryption scheme for the language $\mathcal{L}$. We note that witness encryption for general NP languages is only known under strong assumptions such as graded encodings [GGSW13] or indistinguishability obfuscation [GGH$^{+}$13b]. Nevertheless, the aforementioned laconic OT scheme does not need the full power of general witness
+---PAGE_BREAK---
+
+encryption. In particular, we will leverage the fact that hash proof systems [CS02] can be used to construct statistical witness encryption schemes for specific languages [GGSW13]. Towards this end, we will carefully craft an SSB hash function that is hash-proof-system friendly, that is, one that allows for a hash proof system (or statistical witness encryption) for the language $\mathcal{L}$ required above. Our construction of the HPS-friendly SSB hash is based on the Decisional Diffie-Hellman assumption and is inspired by a construction of Okamoto et al. [OPWW15].
+
+We will briefly outline our HPS-friendly SSB hash below. We strongly encourage the reader to see Section 4.2 for the full construction or see [DG17] for a simplified construction.
+
+Let $\mathbb{G}$ be a (multiplicative) cyclic group of order $p$ generated by a generator $g$. A hashing key is of the form $\hat{\mathbf{H}} = g^{\mathbf{H}}$ (the exponentiation is done component-wise), where the matrix $\mathbf{H} \in \mathbb{Z}_p^{2\times 2\lambda}$ is chosen uniformly at random. The hash value of $\mathbf{x} \in \mathbb{Z}_p^{2\lambda}$ is computed as $\mathbf{H}(\hat{\mathbf{H}}, \mathbf{x}) = \hat{\mathbf{H}}^{\mathbf{x}} \in \mathbb{G}^2$ (where $(\hat{\mathbf{H}}^{\mathbf{x}})_i = \prod_{k=1}^{2\lambda} \hat{\mathbf{H}}_{i,k}^{x_k}$, hence $\hat{\mathbf{H}}^{\mathbf{x}} = g^{\mathbf{H}\mathbf{x}}$). The binding key $\hat{\mathbf{H}}^{(i)}$ is of the form $\hat{\mathbf{H}}^{(i)} = g^{\mathbf{A}+\mathbf{T}}$, where $\mathbf{A} \in \mathbb{Z}_p^{2\times 2\lambda}$ is a random rank-1 matrix, and $\mathbf{T} \in \mathbb{Z}_p^{2\times 2\lambda}$ is a matrix with zero entries everywhere except that $\mathbf{T}_{2,i} = 1$.
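The hashing identity $\hat{\mathbf{H}}^{\mathbf{x}} = g^{\mathbf{H}\mathbf{x}}$ can be checked numerically. The sketch below uses a toy prime modulus and tiny dimensions (cryptographically meaningless parameters, chosen only so the arithmetic is visible) with the group $\mathbb{Z}_p^*$ standing in for $\mathbb{G}$; exponents are reduced modulo the group order.

```python
import random

random.seed(1)
p = 1000003          # toy prime; the group is Z_p^* of order q = p - 1
q = p - 1
g = 2
n = 8                # stands in for the database length 2*lambda

# hashing key Hhat = g^H for a random 2 x n matrix H over Z_q
Hmat = [[random.randrange(q) for _ in range(n)] for _ in range(2)]
Hhat = [[pow(g, Hmat[i][k], p) for k in range(n)] for i in range(2)]

def hash_db(Hhat, x):
    # digest_i = prod_k Hhat[i][k]^{x_k} mod p, i.e. Hhat^x = g^{Hx}
    out = []
    for i in range(2):
        acc = 1
        for k in range(n):
            acc = acc * pow(Hhat[i][k], x[k], p) % p
        out.append(acc)
    return out

x = [random.randrange(2) for _ in range(n)]   # a bit-vector database
digest = hash_db(Hhat, x)

# sanity check against the matrix-vector product in the exponent
for i in range(2):
    assert digest[i] == pow(g, sum(Hmat[i][k] * x[k] for k in range(n)) % q, p)
```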
+
+Now we describe a witness encryption scheme (Enc, Dec) for the language $\mathcal{L} = \{(\hat{\mathbf{H}}, i, \hat{\mathbf{y}}, b) \mid \exists \mathbf{x} \in \mathbb{Z}_p^{2\lambda} \text{ s.t. } \hat{\mathbf{H}}^{\mathbf{x}} = \hat{\mathbf{y}} \text{ and } x_i = b\}$. $\text{Enc}((\hat{\mathbf{H}}, i, \hat{\mathbf{y}}, b), m)$ first sets
+
+$$ \hat{\mathbf{H}}' = \begin{pmatrix} \hat{\mathbf{H}} \\ g^{\mathbf{e}_i^\top} \end{pmatrix} \in \mathbb{G}^{3 \times 2\lambda}, \qquad \hat{\mathbf{y}}' = \begin{pmatrix} \hat{\mathbf{y}} \\ g^b \end{pmatrix} \in \mathbb{G}^3, $$
+
+where $\mathbf{e}_i \in \mathbb{Z}_p^{2\lambda}$ is the $i$-th unit vector. It then picks a random $\mathbf{r} \in \mathbb{Z}_p^3$ and computes a ciphertext $c = (((\hat{\mathbf{H}}')^\top)^\mathbf{r}, ((\hat{\mathbf{y}}')^\top)^\mathbf{r} \oplus m)$. To decrypt a ciphertext $c = (\hat{\mathbf{h}}, z)$ given a witness $\mathbf{x} \in \mathbb{Z}_p^{2\lambda}$, we compute $m = z \oplus \hat{\mathbf{h}}^{\mathbf{x}}$. It is easy to check correctness. For the security proof, see Section 4.3.
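Correctness of the decryption hinges on the identity $\hat{\mathbf{h}}^{\mathbf{x}} = ((\hat{\mathbf{y}}')^\top)^{\mathbf{r}}$ whenever $\mathbf{H}'\mathbf{x} = \mathbf{y}'$ in the exponent (both sides equal $g^{\mathbf{r}^\top \mathbf{H}' \mathbf{x}}$). The toy check below verifies this in a small $\mathbb{Z}_p^*$ group; it generates $\hat{\mathbf{y}}'$ honestly from $\mathbf{H}'$ and $\mathbf{x}$ (so it drops the appended unit-vector row and the SSB structure) and masks the message multiplicatively rather than with XOR, purely for convenience.

```python
import random

random.seed(7)
p = 1000003          # toy prime; group Z_p^* of order q = p - 1
q = p - 1
g = 5
n = 6

x = [random.randrange(2) for _ in range(n)]                       # witness
Hp = [[random.randrange(q) for _ in range(n)] for _ in range(3)]  # H' (3 x n)
# y' = g^{H'x}, generated honestly so that the instance is true
yp = [pow(g, sum(Hp[i][k] * x[k] for k in range(n)) % q, p) for i in range(3)]

# Enc: pick r in Z_q^3; hhat_k = g^{(r^T H')_k}, i.e. hhat = ((H')^T)^r
r = [random.randrange(q) for _ in range(3)]
hhat = [pow(g, sum(r[i] * Hp[i][k] for i in range(3)) % q, p) for k in range(n)]
mask = 1
for i in range(3):
    mask = mask * pow(yp[i], r[i], p) % p    # ((y')^T)^r = g^{r^T H' x}
m = 123456
c2 = mask * m % p                            # multiplicative mask on m

# Dec with witness x: hhat^x recomputes the same mask
mask2 = 1
for k in range(n):
    mask2 = mask2 * pow(hhat[k], x[k], p) % p
assert mask2 == mask
assert c2 * pow(mask2, p - 2, p) % p == m    # invert the mask, recover m
```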
+
+### 2.1.2 Bootstrapping Laconic OT
+
+We will now provide a bootstrapping technique that constructs a laconic OT scheme with arbitrary compression factor from one with factor-2 compression. Let $\ell OT_{const}$ denote a laconic OT scheme with factor-2 compression.
+
+**Bootstrapping the Hash Function via a Merkle Tree.** A binary Merkle tree is a natural way to construct hash functions with an arbitrary compression factor from two-to-one hash functions, and this is exactly the route we pursue. A binary Merkle tree is constructed as follows: The database is split into blocks of $\lambda$ bits, each of which forms the leaf of the tree. An interior node is computed as the hash value of its two children via a two-to-one hash function. This structure is defined recursively from the leaves to the root. When we reach the root node (of $\lambda$ bits), its value is defined to be the (succinct) hash value or digest of the entire database. This procedure defines the hash function.
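A minimal sketch of this Merkle-style digest, with toy parameters and truncated SHA-256 standing in for the two-to-one hash of the previous section (all names are ours):

```python
import hashlib

LAM = 16  # toy block/digest length in bytes (stands in for lambda bits)

def two_to_one(left: bytes, right: bytes) -> bytes:
    # truncated SHA-256 as a toy 2*LAM -> LAM two-to-one hash
    return hashlib.sha256(left + right).digest()[:LAM]

def merkle_digest(D: bytes) -> bytes:
    # split D into LAM-byte leaves, pad to a power of two, hash up the tree
    blocks = [D[i:i + LAM].ljust(LAM, b"\0")
              for i in range(0, len(D), LAM)] or [b"\0" * LAM]
    while len(blocks) & (len(blocks) - 1):
        blocks.append(b"\0" * LAM)
    while len(blocks) > 1:
        blocks = [two_to_one(blocks[i], blocks[i + 1])
                  for i in range(0, len(blocks), 2)]
    return blocks[0]  # the root: a LAM-byte digest of the whole database

assert len(merkle_digest(b"a large database" * 1000)) == LAM
```

The point mirrored here is that the digest length stays `LAM` no matter how long `D` is; only `Hash` touches the whole database.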
+
+The next step is to define the laconic OT algorithms Send and Receive for the above hash function. Our first observation is that, given the digest, the sender can transfer specific messages corresponding to the values of the left and right children of the root (via $2\lambda$ executions of $\ell OT_{const}.Send$). Hence, a naive approach for the sender is to output $\ell OT_{const}$ encryptions for the path of nodes from the root to the leaf of interest. This approach runs into an immediate issue: to compute $\ell OT_{const}$ encryptions at any layer other than the root, the sender needs to know the value at that internal node. However, in the scheme the sender only knows the value of the root and nothing else.
+---PAGE_BREAK---
+
+**Traversing the Merkle Tree via Garbled Circuits.** Our main idea to make the above naive idea work is via an interesting usage of garbled circuits. At a high level, the sender will output a sequence of garbled circuits (one per layer of the tree) to transfer messages corresponding to the path from the root to the leaf containing the *L*-th bit, so that the receiver can traverse the Merkle tree from the root to the leaf as illustrated in Figure 2.
+
+Above, GCircuit is a circuit garbling procedure, which garbles the circuit $\ell OT_{const}.\text{Send}(\text{crs}, \cdot, \text{Keys}^2)$ using input keys $\text{Keys}^1$ (see Section 5.1.1 for the definition of garbled circuits).
+
+Figure 2: The Bootstrapping Step
+
+In more detail, the construction works as follows: The **Send** algorithm outputs $\ell OT_{const}$ encryptions using the root digest and a collection of garbled circuits, one per layer of the Merkle tree. The $i$-th circuit has a bit $b$ hardwired in it, which specifies whether the path should go to the left or right child at the $i$-th layer. It takes as input a pair of sibling nodes ($node_0, node_1$) along the path at layer $i$ and outputs $\ell OT_{const}$ encryptions corresponding to nodes on the path at layer $i+1$ w.r.t. $node_b$ as the hash value. Conceptually, the circuit computes $\ell OT_{const}$ encryptions for the next layer.
+
+The $\ell OT_{const}$ encryptions at the root encrypt the input keys of the first garbled circuit. In the garbled circuit at layer $i$, the messages being encrypted/sent correspond to the input keys of the garbled circuit at layer $i+1$. The last circuit takes as input the two sibling leaves containing $D[L]$, and outputs $\ell OT_{const}$ encryptions of $m_0$ and $m_1$ corresponding to location $L$ (among the $2\lambda$ locations).
+
+Given a laconic OT ciphertext, which consists of $\ell OT_{const}$ ciphertexts w.r.t. the root digest and a sequence of garbled circuits, the receiver can traverse the Merkle tree as follows. First, he runs $\ell OT_{const}.Receive$ on the $\ell OT_{const}$ ciphertexts using the children of the root as witness, obtaining the input labels to be fed into the first garbled circuit. Next, he uses these input labels to evaluate the first garbled circuit, obtaining $\ell OT_{const}$ ciphertexts for the second layer. He then runs $\ell OT_{const}.Receive$ again on these ciphertexts using the children of the second node on the path as witness. This procedure continues until the last layer.
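Stripped of the garbled circuits, the receiver's side of this procedure is a root-to-leaf walk in which, at every layer, the two children of the current node serve as the witness. In the toy sketch below (all names ours), the explicit hash check models the condition under which $\ell OT_{const}.Receive$ succeeds:

```python
import hashlib

LAM = 16

def two_to_one(left: bytes, right: bytes) -> bytes:
    return hashlib.sha256(left + right).digest()[:LAM]

# the receiver's private state: a toy 4-leaf Merkle tree, node -> children
leaves = [bytes([i]) * LAM for i in range(4)]
level1 = [two_to_one(leaves[0], leaves[1]), two_to_one(leaves[2], leaves[3])]
root = two_to_one(level1[0], level1[1])
tree = {root: (level1[0], level1[1]),
        level1[0]: (leaves[0], leaves[1]),
        level1[1]: (leaves[2], leaves[3])}

def traverse(node: bytes, path_bits):
    # at each layer, the children of the current node act as the witness;
    # the hash check models lOT_const.Receive accepting that witness
    for b in path_bits:
        left, right = tree[node]
        assert two_to_one(left, right) == node
        node = right if b else left
    return node

assert traverse(root, [1, 0]) == leaves[2]
```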
+
+Security of the construction can be established using the sender privacy of $\ell OT_{const}$ and the simulation-based security of the circuit garbling scheme.
+---PAGE_BREAK---
+
+**Extension.** Finally, for our RAM applications we need a slightly stronger primitive which we call *updatable laconic OT* that additionally allows for modifications/writes to the database while ensuring that the digest is updated in a consistent manner. The construction sketched in this paragraph can be modified to support this stronger notion. For a detailed description of this notion refer to Section 3.2.
+
+## 2.2 Non-interactive Secure Computation on Large Inputs
+
+**The Circuit Setting.** This is the most straightforward application of laconic OT. We will provide a non-interactive secure computation protocol where the receiver $R$, holding a large database $D$, publishes a short encoding of it such that any sender $S$, with private input $x$, can send a single message to reveal $C(x, D)$ to $R$. Here, $C$ is the circuit being evaluated.
+
+Recall the garbled circuit based approach to non-interactive secure computation, where $R$ can publish the first message of a two-message oblivious transfer (OT) for his input $D$, and the sender responds with a garbled circuit for $C[x, \cdot]$ (with hardcoded input $x$) and sends the input labels corresponding to $D$ via the second OT message. The downside of this protocol is that $R$'s public message grows with the size of $D$, which could be substantially large.
+
+We resolve this issue via our new primitive laconic OT. In our protocol, $R$'s first message is the digest *digest* of his large database $D$. Next, the sender generates the garbled circuit for $C[x, \cdot]$ as before. It also transfers the labels for each location of $D$ via laconic OT Send messages. Hence, by efficiency requirements of laconic OT, the length of $R$'s public message is independent of the size of $D$. Moreover, sender privacy against a semi-honest receiver follows directly from the sender privacy of laconic OT and security of garbled circuits. To achieve receiver privacy, we can enhance the laconic OT with receiver privacy (discussed in Section 3.1).
+
+**The RAM Setting.** This is the RAM version of the above application where $S$ holds a RAM program $P$ and $R$ holds a large database $D$. As before, we want that (1) the length of $R$'s first message is independent of $|D|$, (2) $R$'s first message can be published and used by multiple senders, (3) the database is persistent for a sequence of programs for every sender, and (4) the computational complexity of both $S$ and $R$ per program execution grows only with running time of the corresponding program. For this application, we only achieve unprotected memory access (UMA) security against a corrupt receiver, i.e., the memory access pattern in the execution of $P^D(x)$ is leaked to the receiver. We achieve full security against a corrupt sender.
+
+For simplicity, consider a read-only program in which each CPU step outputs the next location to be read based on the value read from the last location. At a high level, since we want the sender's complexity to grow only with the running time $t$ of the program, we cannot create a garbled circuit that takes $D$ as input. Instead, we go via the garbled RAM approach, where we have a sequence of $t$ garbled circuits, each of which executes one CPU step. A CPU step circuit takes the current CPU state and the last bit read from the database $D$ as input, and outputs an updated state and a new location to be read. The new location is read from the database and fed into the next CPU step. The most non-trivial part in all garbled RAM constructions is computing the correct labels for the next circuit based on the value of $D[L]$, where $L$ is the location being read. Since we are working with garbled circuits, it is crucial for security that the receiver does not learn two labels for any input wire. We solve this issue via laconic OT as follows.
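To see why circuits of size $|D|$ are overkill here, it helps to write binary search in exactly this step form: each CPU step consumes the value read at the last location and emits the next location, so only $t = O(\log |D|)$ steps (and hence $t$ garbled circuits) are needed. A plain, unsecured sketch (names and structure are ours, purely illustrative):

```python
# binary search over a sorted list, phrased as a sequence of CPU steps:
# each step gets (state, value read at the last location) and outputs
# (new state, next location to read), halting when the target is found
def cpu_step(state, read_value):
    lo, hi, target = state
    mid = (lo + hi) // 2
    if read_value == target:
        return None, None              # halt: current location is the answer
    if read_value < target:
        lo = mid + 1
    else:
        hi = mid - 1
    return (lo, hi, target), (lo + hi) // 2

def run(D, target):
    state = (0, len(D) - 1, target)
    loc = (len(D) - 1) // 2
    for _ in range(len(D).bit_length() + 1):   # t = O(log |D|) steps suffice
        state, nxt = cpu_step(state, D[loc])
        if state is None:
            return loc
        loc = nxt
    return -1                                  # not found (lo crossed hi)

D = [1, 3, 5, 7, 9, 11, 13, 15]
assert run(D, 11) == 5 and run(D, 4) == -1
```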
+---PAGE_BREAK---
+
+For the simpler case of sender security, $R$ publishes the short digest of $D$, which is fed into the first garbled circuit and passed along the sequence of garbled circuits. When a circuit wants to read a location $L$, it outputs laconic OT ciphertexts which encrypt the input keys for the next circuit, using the digest of $D$ as the hash value.⁹ Security against a corrupt receiver follows from the sender privacy of laconic OT and the security of garbled circuits. To achieve security against a corrupt sender, $R$ does not publish digest in the clear. Instead, the labels for digest for the first circuit are transferred to $R$ via regular OT.
+
+Note that the garbling time of the sender as well as execution time of the receiver will grow only with the running time of the program. This follows from the efficiency requirements of laconic OT.
+
+Above, we did not describe how we deal with general programs that also write to the database or memory. We achieve this via updatable laconic OT (for the definition, see Section 3.2), which allows for transferring the labels for the updated digest (corresponding to the updated database) to the next circuit. For a formal description of our scheme for general RAM programs, see Section 6.
+
+## 2.3 Multi-Hop Homomorphic Encryption for RAM Programs
+
+**Our model and problem — a bit more formally.** We consider a scenario where a server $S$, holding an input $x$, publishes a public key $pk$ and an encryption $ct$ of $x$ under $pk$. This ciphertext is passed on to a client $Q$ that computes a (possibly private) program $P$, accessing memory $D$, on the value encrypted in $ct$, obtaining another ciphertext $ct'$. Finally, we want the server to be able to use its secret key to recover $P^D(x)$ from the ciphertext $ct'$ and $\tilde{D}$, where $\tilde{D}$ is an encrypted form of $D$ that has been previously provided to $S$ in a one-time setup phase. More generally, the computation could be performed by multiple clients $Q_1, \dots, Q_n$. In this case, each client is required to place a pre-processed version of its database $\tilde{D}_i$ with the server during setup. The computation itself could be performed by different sequences of the clients (for extensions of the model, see Section 7.1). Examples of two such computation paths are shown in Figure 1.
+
+For security, we want IND-CPA security for the server's input $x$. For honest clients, we want *program-privacy* as well as *data-privacy*, i.e., the evaluation does not leak anything beyond the output of the computation, even when the adversary corrupts the server and any subset of the clients. We note that *data-privacy* is rather easy to achieve via encryption and ORAM. Hence we focus on the challenges of achieving UMA security for honest clients, i.e., the adversary is allowed to learn the database $D$ as well as the memory access pattern of $P$ on $D$.
+
+**UMA secure multi-hop scheme.** We first build on the ideas from non-interactive secure computation for RAM programs. Every client first passes its database to the server. Then in every round, the server sends an OT message for input $x$. We assume for simplicity that every client has an up-to-date digest of its own database. Next, the first client $Q_1$ generates a garbled program for $P_1$, say $ct_1$, and sends it to $Q_2$. Here, the garbled program consists of $t_1$ garbled circuits ($t_1$ being the running time of $P_1$) accessing $D_1$ via laconic OT as described in the previous application. Now, $Q_2$ appends its garbled program for $P_2$ to the end of $ct_1$ and generates $ct_2$ consisting of $ct_1$ and the new garbled program. Note that $P_2$ takes the output of $P_1$ as input and hence, the output keys of the
+
+⁹We note that the above idea of using laconic OT also gives a conceptually very simple solution for UMA secure garbled RAM scheme [LO13]. Moreover, there is a general transformation [GHL+14] that converts any UMA secure garbled RAM into one with full security via the usage of symmetric key encryption and oblivious RAM. This would give a simplified construction of fully secure garbled RAM under DDH assumption.
+---PAGE_BREAK---
+
+last garbled circuit of $P_1$ have to be compatible with the input keys of the first garbled circuit of $P_2$, and so on. If we continue this procedure, after the last client $Q_n$, we get a sequence of garbled circuits where the first $t_1$ circuits access $D_1$, the next set accesses $D_2$, and so on. Finally, the server $S$ can evaluate the sequence of garbled circuits given $D_1, \dots, D_n$. It is easy to see that correctness holds, but so far we have no security for the clients.
+
+The issue is similar to the one pointed out by [GHV10] for the case of multi-hop garbled circuits. If the client $Q_{i-1}$ colludes with the server, then they can learn both input labels for the garbled program of $Q_i$. To resolve this issue, it is crucial that $Q_i$ re-randomizes the garbled circuits provided by $Q_{i-1}$. For this we rely on the re-randomizable garbled circuits of [GHV10], where, given a garbled circuit, anyone can re-garble it such that the functionality of the original circuit is preserved while the re-randomized garbled circuit is unrecognizable even to the party who generated it. In our protocol we use re-randomizable garbled circuits, but we stumble upon the following issue.
+
+Recall that in the RAM application above, a garbled circuit outputs the laconic OT ciphertexts corresponding to the input keys of the next circuit. Hence, the input keys of the $(\tau + 1)$-th circuit have to be hardwired inside the $\tau$-th circuit. Since all of these circuits will be re-randomized for security, for correctness we require that we transform the hardwired keys in a manner consistent with the future re-randomization. But for security, $Q_{i-1}$ does not know the randomness that will be used by $Q_i$.
+
+Our first idea to resolve this issue is as follows: The circuits generated by $Q_{i-1}$ will take additional inputs $s_i, \dots, s_n$ which are the randomness used by future parties for their re-randomization procedure. Since we are in the non-interactive setting, we cannot run an OT protocol between clients $Q_{i-1}$ and later clients. We resolve this issue by putting the first message of OT for $s_j$ in the public key of client $Q_j$ and client $Q_{i-1}$ will send the OT second messages along with $ct_{i-1}$. We do not want the clients' public keys to grow with the running time of the programs, hence, we think of $s_j$ as PRF keys and each circuit re-randomization will invoke the PRF on a unique input.
+
+The above approach causes a subtle issue in the security proof. Suppose, for simplicity, that client $Q_i$ is the only honest client. When arguing security, we want to simulate all the garbled circuits in $ct_i$. To rely on the security of re-randomization, we need to replace the output of the PRF with key $s_i$ with uniform random values but this key is fed as input to the circuits of the previous clients. We note that this is not a circularity issue but makes arguing security hard. We solve this issue as follows: Instead of feeding in PRF keys directly to the garbled circuits, we feed in corresponding outputs of the PRF. We generate the PRF output via a bunch of PRF circuits that take the PRF keys as input (see Figure 3). Now during simulation, we will first simulate these PRF circuits, followed by the simulation of the main circuits. We describe the scheme formally in Section 7.3.
+
+# 3 Laconic Oblivious Transfer
+
+In this section, we will introduce a primitive we call *Laconic OT* (or, *LOT* for short). We will start by describing laconic OT and then provide an extension of it to the notion of updatable laconic OT.
+---PAGE_BREAK---
+
+Figure 3: One step circuit for $P_i$ along with the attached PRF circuits generated by $Q_i$.
+
+## 3.1 Laconic OT
+
+**Definition 3.1 (Laconic OT).** A laconic OT (LOT) scheme syntactically consists of four algorithms crsGen, Hash, Send and Receive.
+
+* crs $\leftarrow crsGen(1^\lambda)$. It takes as input the security parameter $1^\lambda$ and outputs a common reference string crs.
+
+* $(\text{digest}, \hat{\text{D}}) \leftarrow \text{Hash}(crs, D)$. It takes as input a common reference string crs and a database $D \in \{0, 1\}^*$ and outputs a digest $\text{digest}$ of the database and a state $\hat{\text{D}}$.
+
+* $e \leftarrow \text{Send}(crs, \text{digest}, L, m_0, m_1)$. It takes as input a common reference string crs, a digest $\text{digest}$, a database location $L \in \mathbb{N}$ and two messages $m_0$ and $m_1$ of length $\lambda$, and outputs a ciphertext $e$.
+
+* $m \leftarrow \text{Receive}^{\hat{\text{D}}}(crs, e, L)$. This is a RAM algorithm with random read access to $\hat{\text{D}}$. It takes as input a common reference string crs, a ciphertext $e$, and a database location $L \in \mathbb{N}$. It outputs a message $m$.
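The four-algorithm syntax can be made concrete with a deliberately insecure placeholder that satisfies only correctness: Send ships both messages in the clear and Receive selects one using $\hat{D}$. It illustrates nothing beyond the data flow crsGen → Hash → Send → Receive, and all function names are ours:

```python
import hashlib
import os

def crs_gen():
    return os.urandom(16)

def hash_db(crs, D):
    # digest: short and independent of |D|; Dhat: the receiver's state
    digest = hashlib.sha256(crs + D.encode()).hexdigest()[:32]
    return digest, D          # here the state is simply D itself

def send(crs, digest, L, m0, m1):
    return (L, m0, m1)        # placeholder "ciphertext": both messages leak!

def receive(crs, Dhat, e, L):
    _, m0, m1 = e
    return m1 if Dhat[L] == "1" else m0

crs = crs_gen()
digest, Dhat = hash_db(crs, "0110")
e = send(crs, digest, 2, "m0", "m1")
assert receive(crs, Dhat, e, 2) == "m1"   # D[2] = 1, so m_{D[L]} = m1
```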
+
+We require the following properties of an LOT scheme (crsGen, Hash, Send, Receive).
+
+* **Correctness:** We require that it holds for any database $D$ of size at most $M = \text{poly}(\lambda)$ for any polynomial function $\text{poly}(\cdot)$, any memory location $L \in [M]$, and any pair of messages
+---PAGE_BREAK---
+
+$$ (m_0, m_1) \in \{0, 1\}^\lambda \times \{0, 1\}^\lambda \text{ that} $$
+
+$$ \Pr \left[ m = m_{D[L]} \;\middle|\; \begin{array}{l} \text{crs} \leftarrow \text{crsGen}(1^\lambda) \\ (\text{digest}, \hat{\text{D}}) \leftarrow \text{Hash}(\text{crs}, D) \\ e \leftarrow \text{Send}(\text{crs}, \text{digest}, L, m_0, m_1) \\ m \leftarrow \text{Receive}^{\hat{\text{D}}}(\text{crs}, e, L) \end{array} \right] = 1, $$
+
+where the probability is taken over the random choices made by crsGen and Send.
+
+* **Sender Privacy Against Semi-Honest Receivers:** There exists a PPT simulator $\ell$OTSim such that the following holds. For any database $D$ of size at most $M = \text{poly}(\lambda)$ for any polynomial function $\text{poly}(\cdot)$, any memory location $L \in [M]$, and any pair of messages $(m_0, m_1) \in \{0, 1\}^\lambda \times \{0, 1\}^\lambda$, let $\text{crs} \leftarrow \text{crsGen}(1^\lambda)$ and $(\text{digest}, \hat{\text{D}}) \leftarrow \text{Hash}(\text{crs}, D)$. Then it holds that
+
+$$ (\text{crs}, \text{Send}(\text{crs}, \text{digest}, L, m_0, m_1)) \stackrel{c}{\approx} (\text{crs}, \ell\text{OTSim}(\text{crs}, D, L, m_{D[L]})). $$
+
+* **Efficiency Requirement:** The length of digest is a fixed polynomial in $\lambda$ independent of the size of the database; we will assume for simplicity that $|\text{digest}| = \lambda$. Moreover, the algorithm Hash runs in time $|D| \cdot \text{poly}(\log |D|, \lambda)$, Send and Receive run in time $\text{poly}(\log |D|, \lambda)$.
+
+**Receiver Privacy.** In the above definition, we do not require receiver privacy as opposed to standard oblivious transfer, namely, no security guarantee is provided against a cheating (semi-honest) sender. This is mostly for ease of exposition. We would like to point out that adding receiver privacy (i.e., standard simulation based security against a semi-honest sender) to laconic OT can be done in a straightforward way. Instead of sending digest directly from the receiver to the sender and sending e back to the receiver, the two parties compute Send together via a two-round secure 2PC protocol, where the input of the receiver is digest and the input of the sender is $(L, m_0, m_1)$, and only the receiver obtains the output e. This can be done using standard two-message OT and garbled circuits.
+
+**Multiple executions of Send that share the same digest.** Notice that since the common reference string is public (i.e., not chosen by the simulator), the sender can invoke the Send function multiple times while security can still be argued from the above definition (for the case of a single execution) via a standard hybrid argument.
+
+It will be convenient to use the following shorthand notations (generalizing the above notions) to run laconic OT for every single element in a database. Let $\text{Keys} = ((\text{Key}_{1,0}, \text{Key}_{1,1}), \dots, (\text{Key}_{M,0}, \text{Key}_{M,1}))$ be a list of $M = |D|$ key-pairs, where each key is of length $\lambda$. Then we will define
+
+$$ \text{Send}(\text{crs}, \text{digest}, \text{Keys}) = (\text{Send}(\text{crs}, \text{digest}, 1, \text{Key}_{1,0}, \text{Key}_{1,1}), \dots, \text{Send}(\text{crs}, \text{digest}, M, \text{Key}_{M,0}, \text{Key}_{M,1})). $$
+
+Likewise, for a vector $\mathbf{e} = (\mathbf{e}_1, \dots, \mathbf{e}_M)$ of ciphertexts define
+
+$$ \text{Receive}^{\hat{\mathcal{D}}}(crs, e) = (\text{Receive}^{\hat{\mathcal{D}}}(crs, e_1, 1), \dots, \text{Receive}^{\hat{\mathcal{D}}}(crs, e_M, M)). $$
+
+Similarly, let $\text{Labels} = \text{Keys}_D = (\text{Key}_{1,D[1]}, \dots, \text{Key}_{M,D[M]})$, and define
+
+$$ \ell\text{OTSim}(crs, D, \text{Labels}) = (\ell\text{OTSim}(crs, D, 1, \text{Key}_{1,D[1]}), \dots, \ell\text{OTSim}(crs, D, M, \text{Key}_{M,D[M]})). $$
+
+By the sender security for multiple executions, we have that
+
+$$ (\text{crs}, \text{Send}(\text{crs}, \text{digest}, \text{Keys})) \stackrel{c}{\approx} (\text{crs}, \ell\text{OTSim}(\text{crs}, D, \text{Labels})). $$
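+The batched notation above is just elementwise application of Send and Receive. A minimal Python sketch of the wrappers (with stand-in `toy_send`/`toy_receive` algorithms of our own invention, since any concrete $\ell$OT can be plugged in):

```python
def send_all(send, crs, digest, keys):
    """Keys = ((Key_{1,0}, Key_{1,1}), ..., (Key_{M,0}, Key_{M,1})):
    run Send once per database location (1-indexed, as in the text)."""
    return [send(crs, digest, L, k0, k1)
            for L, (k0, k1) in enumerate(keys, start=1)]

def receive_all(receive, crs, e_vec):
    """Run Receive (which has oracle access to D-hat) once per ciphertext."""
    return [receive(crs, e, L) for L, e in enumerate(e_vec, start=1)]

# Stand-in algorithms with no privacy -- only to exercise the wrappers:
def toy_send(crs, digest, L, m0, m1):
    return (L, m0, m1)

def make_toy_receive(D):                 # D plays the role of D-hat
    def toy_receive(crs, e, L):
        _, m0, m1 = e
        return (m0, m1)[D[L - 1]]
    return toy_receive
```

+Running the wrappers with the stand-ins returns exactly $\text{Labels} = \text{Keys}_D$, the selected key per location.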
+---PAGE_BREAK---
+
+## 3.2 Updatable Laconic OT
+
+For our applications, we will need a version of laconic OT for which the receiver's short commitment digest to his database can be updated quickly (in time much smaller than the size of the database) when a bit of the database changes. We call the primitive supporting this functionality updatable laconic OT and define it more formally below. At a high level, updatable laconic OT comes with an additional pair of algorithms SendWrite and ReceiveWrite which transfer the keys for an updated digest digest* to the receiver. For convenience, we will define ReceiveWrite such that it also performs the write in $\hat{D}$.
+
+**Definition 3.2 (Updatable Laconic OT).** An updatable laconic OT (updatable $\ell$OT) scheme consists of algorithms crsGen, Hash, Send, Receive as per Definition 3.1 and additionally two algorithms SendWrite and ReceiveWrite with the following syntax.
+
+• $e_w \leftarrow \text{SendWrite}(\text{crs}, \text{digest}, L, b, \{m_{j,0}, m_{j,1}\}_{j=1}^{|\text{digest}|})$. It takes as input the common reference string crs, a digest digest, a location $L \in \mathbb{N}$, a bit $b \in \{0, 1\}$ to be written, and $|\text{digest}|$ pairs of messages $\{m_{j,0}, m_{j,1}\}_{j=1}^{|\text{digest}|}$, where each $m_{j,c}$ is of length $\lambda$. It outputs a ciphertext $e_w$.
+
+• $\{m_j\}_{j=1}^{|\text{digest}|} \leftarrow \text{ReceiveWrite}^{\hat{D}}(\text{crs}, L, b, e_w)$. This is a RAM algorithm with random read/write access to $\hat{D}$. It takes as input the common reference string crs, a location $L$, a bit $b \in \{0, 1\}$ and a ciphertext $e_w$. It updates the state $\hat{D}$ (such that $D[L] = b$) and outputs messages $\{m_j\}_{j=1}^{|\text{digest}|}$.
+
+We require the following properties on top of properties of a laconic OT scheme.
+
+• **Correctness With Regard To Writes:** For any database $D$ of size at most $M = \text{poly}(\lambda)$ for any polynomial function $\text{poly}(\cdot)$, any memory location $L \in [M]$, any bit $b \in \{0, 1\}$, and any messages $\{m_{j,0}, m_{j,1}\}_{j=1}^{|\text{digest}|}$ of length $\lambda$, the following holds. Let $D^*$ be identical to $D$, except that $D^*[L] = b$,
+
+$$ \Pr \left[ \begin{array}{c} m'_j = m_{j,\text{digest}^*_j} \\ \forall j \in [|\text{digest}|] \end{array} \,\middle|\, \begin{array}{rl} \text{crs} & \leftarrow \text{crsGen}(1^\lambda) \\ (\text{digest}, \hat{D}) & \leftarrow \text{Hash}(\text{crs}, D) \\ (\text{digest}^*, \hat{D}^*) & \leftarrow \text{Hash}(\text{crs}, D^*) \\ e_w & \leftarrow \text{SendWrite}\left(\text{crs}, \text{digest}, L, b, \{m_{j,0}, m_{j,1}\}_{j=1}^{|\text{digest}|}\right) \\ \{m'_j\}_{j=1}^{|\text{digest}|} & \leftarrow \text{ReceiveWrite}^{\hat{D}}(\text{crs}, L, b, e_w) \end{array} \right] = 1, $$
+
+where the probability is taken over the random choices made by crsGen and SendWrite. Furthermore, we require that the execution of ReceiveWrite$^{\hat{D}}$ above updates $\hat{D}$ to $\hat{D}^*$. (Note that digest is included in $\hat{D}$, hence digest is also updated to digest$^*$.)
+
+• **Sender Privacy Against Semi-Honest Receivers With Regard To Writes:** There exists a PPT simulator $\ell$OTSimWrite such that the following holds. For any database $D$ of size at most $M = \text{poly}(\lambda)$ for any polynomial function $\text{poly}(\cdot)$, any memory location $L \in [M]$, any bit $b \in \{0, 1\}$, and any messages $\{m_{j,0}, m_{j,1}\}_{j=1}^{|\text{digest}|}$ of length $\lambda$, let crs $\leftarrow$ crsGen$(1^\lambda)$,
+---PAGE_BREAK---
+
+(digest, $\hat{D}$) $\leftarrow$ Hash(crs, $D$), and $(\text{digest}^*, \hat{D}^*) \leftarrow$ Hash(crs, $D^*$), where $D^*$ is identical to $D$
+except that $D^*[L] = b$. Then it holds that
+
+$$
+\left( \text{crs}, \text{SendWrite}\left(\text{crs}, \text{digest}, L, b, \{m_{j,0}, m_{j,1}\}_{j=1}^{|\text{digest}|}\right) \right) \stackrel{c}{\approx} \left( \text{crs}, \ell\text{OTSimWrite}\left(\text{crs}, D, L, b, \{m_{j,\text{digest}^*_j}\}_{j \in [|\text{digest}|]}\right) \right).
+$$
+
+• *Efficiency Requirements:* We require that both SendWrite and ReceiveWrite run in time poly($\log |D|$, $\lambda$).
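+For intuition on the efficiency requirement for writes: in a Merkle-tree instantiation of the digest, a single-bit write only needs to refresh the path from the changed leaf to the root. The sketch below is our own illustration (0-indexed locations; SHA-256 in place of the paper's hash), not the scheme constructed later.

```python
import hashlib

def H(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_hash(D):
    """Full hash of a bit database D (power-of-two length): O(|D|) work."""
    level = [H(bytes([b])) for b in D]
    tree = [level]
    while len(level) > 1:
        level = [H(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        tree.append(level)
    return level[0], tree

def merkle_write(tree, L, b):
    """Set D[L] = b and refresh only the root path: O(log |D|) hashes,
    the efficiency demanded of ReceiveWrite."""
    idx = L
    tree[0][idx] = H(bytes([b]))
    for lvl in range(1, len(tree)):
        idx //= 2
        tree[lvl][idx] = H(tree[lvl - 1][2 * idx] + tree[lvl - 1][2 * idx + 1])
    return tree[-1][0]  # the updated digest
```

+A local update along the root path yields the same digest as rehashing the modified database from scratch.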
+
+# 4 Laconic Oblivious Transfer with Factor-2 Compression
+
+In this section, based on the DDH assumption, we construct a laconic OT scheme for which the hash function **Hash** compresses a database of length $2\lambda$ into a digest of length $\lambda$. We will refer to this primitive as laconic OT with factor-2 compression. We note that, subsequent to our work, the factor-2 compression construction has been simplified by Döttling and Garg [DG17] (another alternative simplification can be obtained using [AIKW13]). We refer the reader to [DG17] for the simpler construction and preserve the older construction here.
+
+We will first construct the following two primitives as building blocks: (1) a somewhere statistically binding (SSB) hash function, and (2) a hash proof system that allows for proving knowledge of preimage bits for this SSB hash function. We will then present the $\ell$OT scheme with factor-2 compression in Section 4.4.
+
+## 4.1 Somewhere Statistically Binding Hash Functions and Hash Proof Systems
+
+In this section, we give definitions of somewhere statistically binding (SSB) hash functions [HW15] and hash proof systems [CS98]. For simplicity, we will only define SSB hash functions that compress $2\lambda$ values in the domain into $\lambda$ bits. The more general definition works analogously.
+
+**Definition 4.1** (Somewhere Statistically Binding Hashing). An SSB hash function SSBH consists of three algorithms crsGen, bindingCrsGen and Hash with the following syntax.
+
+• crs $\leftarrow$ crsGen$(1^\lambda)$. It takes the security parameter $\lambda$ as input and outputs a common reference string crs.
+
+• crs $\leftarrow$ bindingCrsGen$(1^\lambda, i)$. It takes as input the security parameter $\lambda$ and an index $i \in [2\lambda]$, and outputs a common reference string crs.
+
+• $y \leftarrow \text{Hash}(\text{crs}, x)$. For some domain $\mathcal{D}$, it takes as input a common reference string crs and a string $x \in \mathcal{D}^{2\lambda}$, and outputs a string $y \in \{0, 1\}^\lambda$.
+
+We require the following properties of an SSB hash function.
+
+• **Statistically Binding at Position i:** For every $i \in [2\lambda]$, an overwhelming fraction of crs in the support of bindingCrsGen$(1^\lambda, i)$, and every $x \in \mathcal{D}^{2\lambda}$, we have that $(\text{crs}, \text{Hash}(\text{crs}, x))$ uniquely determines $x_i$. More formally, for all $x' \in \mathcal{D}^{2\lambda}$ such that $x_i \neq x'_i$ we have that $\text{Hash}(\text{crs}, x') \neq \text{Hash}(\text{crs}, x)$.
+---PAGE_BREAK---
+
+• **Index Hiding**: It holds for all $i \in [2\lambda]$ that crsGen$(1^\lambda) \stackrel{c}{\approx} \text{bindingCrsGen}(1^\lambda, i)$, i.e., common reference strings generated by crsGen and bindingCrsGen are computationally indistinguishable.
+
+Next, we define hash proof systems [CS98], which are designated-verifier proof systems that allow proving that a given problem instance lies in some language. We give the formal definition as follows.
+
+**Definition 4.2 (Hash Proof System).** Let $\mathcal{L}_z \subseteq \mathcal{M}_z$ be an NP-language residing in a universe $\mathcal{M}_z$, both parametrized by some parameter $z$. Moreover, let $\mathcal{L}_z$ be characterized by an efficiently computable witness-relation $\mathcal{R}$, namely, for all $x \in \mathcal{M}_z$ it holds that $x \in \mathcal{L}_z \Leftrightarrow \exists w : \mathcal{R}(x, w) = 1$. A hash proof system HPS for $\mathcal{L}_z$ consists of three algorithms KeyGen, $H_{\text{public}}$ and $H_{\text{secret}}$ with the following syntax.
+
+• $(pk, sk) \leftarrow \text{KeyGen}(1^{\lambda}, z)$: Takes as input the security parameter $\lambda$ and a parameter $z$, and outputs a public-key and secret key pair $(pk, sk)$.
+
+• $y \leftarrow H_{\text{public}}(pk, x, w)$: Takes as input a public key pk, an instance $x \in \mathcal{L}_z$, and a witness w, and outputs a value y.
+
+• $y \leftarrow H_{\text{secret}}(sk, x)$: Takes as input a secret key sk and an instance $x \in \mathcal{M}_z$, and outputs a value y.
+
+We require the following properties of a hash proof system.
+
+• **Perfect Completeness:** For every $z$, every $(pk, sk)$ in the support of $\text{KeyGen}(1^{\lambda}, z)$, and every $x \in \mathcal{L}_z$ with witness $w$ (i.e., $\mathcal{R}(x, w) = 1$), it holds that
+
+$$H_{\text{public}}(pk, x, w) = H_{\text{secret}}(sk, x).$$
+
+• **Perfect Soundness:** For every $z$ and every $x \in \mathcal{M}_z \setminus \mathcal{L}_z$, let $(pk, sk) \leftarrow \text{KeyGen}(1^{\lambda}, z)$, then it holds that
+
+$$ (z, pk, H_{\text{secret}}(sk, x)) \equiv (z, pk, u), $$
+
+where $u$ is distributed uniformly random in the range of $H_{\text{secret}}$. Here, $\equiv$ denotes distributional equivalence.
+
+## 4.2 HPS-friendly SSB Hashing
+
+In this section, we will construct an HPS-friendly SSB hash function that supports a hash proof system. In particular, there is a hash proof system that enables proving that a certain bit of the pre-image of a hash-value has a certain fixed value (in our case, either 0 or 1).
+
+We start with some notation. Let $(\mathbb{G}, \cdot)$ be a cyclic group of order $p$ with generator $g$. Let $\mathbf{M} \in \mathbb{Z}_p^{m \times n}$ be a matrix. We will denote by $\hat{\mathbf{M}} = g^{\mathbf{M}} \in \mathbb{G}^{m \times n}$ the element-wise exponentiation of $g$ with the elements of $\mathbf{M}$. For $\hat{\mathbf{H}} \in \mathbb{G}^{m \times n}$ and $\mathbf{M} \in \mathbb{Z}_p^{n \times k}$, we also define $\hat{\mathbf{L}} = \hat{\mathbf{H}}^{\mathbf{M}} \in \mathbb{G}^{m \times k}$ by $\hat{\mathbf{L}}_{i,j} = \prod_{l=1}^{n} \hat{\mathbf{H}}_{i,l}^{\mathbf{M}_{l,j}}$ (intuitively, this operation corresponds to matrix multiplication in the exponent). This is well-defined and efficiently computable.
+---PAGE_BREAK---
+
+**Computational Assumptions.** In the following, we first define the computational problems on which we will base the security of our HPS-friendly SSB hash function.
+
+**Definition 4.3** (The Decisional Diffie-Hellman (DDH) Problem). Let $(\mathbb{G}, \cdot)$ be a cyclic group of prime order $p$ and with generator $g$. Let $a, b, c$ be sampled uniformly at random from $\mathbb{Z}_p$ (i.e., $a, b, c \stackrel{\$}{\leftarrow} \mathbb{Z}_p$). The DDH problem asks to distinguish the distributions $(g, g^a, g^b, g^{ab})$ and $(g, g^a, g^b, g^c)$.
+
+**Definition 4.4** (Matrix Rank Problem). Let $m, n$ be integers and let $\mathbb{Z}_p^{m \times n; r}$ be the set of all $m \times n$ matrices over $\mathbb{Z}_p$ with rank $r$. Further, let $1 \le r_1 < r_2 \le \min(m,n)$. The goal of the matrix rank problem, denoted as MatrixRank$(\mathbb{G}, m, n, r_1, r_2)$, is to distinguish the distributions $g^{\mathbf{M}_1}$ and $g^{\mathbf{M}_2}$, where $\mathbf{M}_1 \stackrel{\$}{\leftarrow} \mathbb{Z}_p^{m \times n; r_1}$ and $\mathbf{M}_2 \stackrel{\$}{\leftarrow} \mathbb{Z}_p^{m \times n; r_2}$.
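+The two distributions in Definitions 4.3 and 4.4 can be sampled as follows. The sketch uses a toy order-$p$ subgroup of $\mathbb{Z}_q^*$ with $q = 2p + 1$ and insecurely small parameters of our choosing; a matrix of rank (at most) $r$ is sampled as a product $\mathbf{U}\mathbf{V}$, which has rank exactly $r$ with overwhelming probability.

```python
import random

# Toy group: the order-p subgroup of Z_q^* with q = 2p + 1;
# real use needs cryptographically large parameters.
p, q, g = 1019, 2039, 4

def gexp(a: int) -> int:
    """g^a in the group."""
    return pow(g, a % p, q)

def ddh_tuple(real: bool):
    """Sample (g, g^a, g^b, g^ab) if real, else (g, g^a, g^b, g^c)."""
    a, b = random.randrange(p), random.randrange(p)
    c = (a * b) % p if real else random.randrange(p)
    return g, gexp(a), gexp(b), gexp(c)

def rank_r_in_exponent(m, n, r):
    """Sample g^M for a random m x n matrix M over Z_p of rank (at most) r,
    written as M = U * V with U: m x r and V: r x n."""
    U = [[random.randrange(p) for _ in range(r)] for _ in range(m)]
    V = [[random.randrange(p) for _ in range(n)] for _ in range(r)]
    return [[gexp(sum(U[i][k] * V[k][j] for k in range(r))) for j in range(n)]
            for i in range(m)]
```

+The MatrixRank adversary sees only `rank_r_in_exponent(...)`, i.e., group elements, never the exponents themselves.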
+
+In a recent result by Villar [Vil12] it was shown that the matrix rank problem can be reduced almost tightly to the DDH problem.
+
+**Theorem 4.5** ([Vil12] Theorem 1, simplified). Assume there exists a PPT distinguisher $\mathcal{D}$ that solves the MatrixRank$(\mathbb{G}, m, n, r_1, r_2)$ problem with advantage $\epsilon$. Then, there exists a PPT distinguisher $\mathcal{D}'$ (running in essentially the same time as $\mathcal{D}$) that solves the DDH problem over $\mathbb{G}$ with advantage at least $\frac{\epsilon}{\lceil\log_2(r_2/r_1)\rceil}$.
+
+We next give the construction of an HPS-friendly SSB hash function.
+
+**Construction.** Our construction builds on the scheme of Okamoto et al. [OPWW15]. We will not delve into the details of their scheme and directly jump into our construction.
+
+Let $n$ be an integer such that $n = 2\lambda$, and let $(\mathbb{G}, \cdot)$ be a cyclic group of order $p$ and with generator $g$. Let $\mathbf{T}_i \in \mathbb{Z}_p^{2 \times n}$ be a matrix which is zero everywhere except the $i$-th column, and the $i$-th column is equal to $\mathbf{t} = (0, 1)^{\top}$. The three algorithms of the SSB hash function are defined as follows.
+
+• crsGen$(1^\lambda)$: Pick a uniformly random matrix $\mathbf{H} \stackrel{\$}{\leftarrow} \mathbb{Z}_p^{2 \times n}$ and output $\hat{\mathbf{H}} = g^{\mathbf{H}}$.
+
+• bindingCrsGen$(1^\lambda, i)$: Pick a uniformly random vector $(w_1, w_2)^{\top} = \mathbf{w} \stackrel{\$}{\leftarrow} \mathbb{Z}_p^2$ with the restriction that $w_1 = 1$, pick a uniformly random vector $\mathbf{a} \stackrel{\$}{\leftarrow} \mathbb{Z}_p^n$ and set $\mathbf{A} \leftarrow \mathbf{w} \cdot \mathbf{a}^{\top}$. Set $\mathbf{H} \leftarrow \mathbf{T}_i + \mathbf{A}$ and output $\hat{\mathbf{H}} = g^{\mathbf{H}}$.
+
+• Hash(crs, x): Parse x as a vector in $\mathcal{D}^n$ (with $\mathcal{D} = \mathbb{Z}_p$) and parse crs $= \hat{\mathbf{H}}$. Compute $y \in \mathbb{G}^2$ as $y = \hat{\mathbf{H}}^{\mathbf{x}}$. Parse y as a binary string and output the result.
+
+**Compression.** Notice that we get factor-2 compression for the input space $\{0, 1\}^{2\lambda}$ by restricting the domain to $\mathcal{D}' = \{0, 1\} \subset \mathcal{D}$. The input length is $n = 2\lambda$, where $\lambda$ is set to twice the number of bits in the bit representation of a group element of $\mathbb{G}$. In the following we will assume that $n = 2\lambda$ and that the bit-representation size of a group element of $\mathbb{G}$ is $\frac{\lambda}{2}$.
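+The three algorithms above fit in a few lines of Python over a toy group of our choosing (insecurely small parameters; locations 0-indexed). Flipping the bound bit $x_i$ under a binding crs always changes the hash, which is exactly the content of Lemma 4.8 below.

```python
import random

p, q, g = 1019, 2039, 4   # toy order-p subgroup of Z_q^*; insecurely small
n = 8                     # stands in for n = 2*lambda

def crs_gen():
    """A uniformly random 2 x n matrix H, published in the exponent."""
    return [[pow(g, random.randrange(p), q) for _ in range(n)] for _ in range(2)]

def binding_crs_gen(i):
    """H = w * a^T + T_i with w = (1, w2)^T; i is 0-indexed here."""
    w2 = random.randrange(p)
    a = [random.randrange(p) for _ in range(n)]
    H = [a[:], [(w2 * aj) % p for aj in a]]
    H[1][i] = (H[1][i] + 1) % p          # T_i: column i gets t = (0, 1)^T
    return [[pow(g, h, q) for h in row] for row in H]

def ssb_hash(crs, x):
    """y = H-hat ** x: matrix-vector product in the exponent."""
    out = []
    for row in crs:
        acc = 1
        for Hij, xj in zip(row, x):
            acc = acc * pow(Hij, xj, q) % q
        out.append(acc)
    return tuple(out)
```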
+
+We will first show that the distributions crsGen$(1^\lambda)$ and bindingCrsGen$(1^\lambda, i)$ are computationally indistinguishable for every index $i \in [n]$, given that the DDH problem is computationally hard in the group $\mathbb{G}$.
+---PAGE_BREAK---
+
+**Lemma 4.6 (Index Hiding).** Assume that the MatrixRank($\mathbb{G}$, 2, $n$, 1, 2) problem is hard. Then the distributions crsGen(1$^{\lambda}$) and bindingCrsGen(1$^{\lambda}$, i) are computationally indistinguishable, for every $i \in [n]$.
+
+*Proof.* Assume there exists a PPT distinguisher $\mathcal{D}$ that distinguishes the distributions crsGen(1$^{\lambda}$) and bindingCrsGen(1$^{\lambda}$, i) with non-negligible advantage $\epsilon$. We will construct a PPT distinguisher $\mathcal{D}'$ that distinguishes MatrixRank($\mathbb{G}$, 2, $n$, 1, 2) with non-negligible advantage.
+
+The distinguisher $\mathcal{D}'$ does the following on input $\hat{\mathbf{M}} \in \mathbb{G}^{2 \times n}$. It computes $\hat{\mathbf{H}} \in \mathbb{G}^{2 \times n}$ as element-wise multiplication of $\hat{\mathbf{M}}$ and $g^{\mathbf{T}_i}$ and runs $\mathcal{D}$ on $\hat{\mathbf{H}}$. If $\mathcal{D}$ outputs crsGen, then $\mathcal{D}'$ outputs rank 2, otherwise $\mathcal{D}'$ outputs rank 1.
+
+We will now show that $\mathcal{D}'$ also has non-negligible advantage. Write $\mathcal{D}'$'s input as $\hat{\mathbf{M}} = g^{\mathbf{M}}$. If $\mathbf{M}$ is chosen uniformly at random with rank 2, then $\mathbf{M}$ is statistically close to uniform in $\mathbb{Z}_p^{2 \times n}$ (a uniformly random matrix has rank 2 with overwhelming probability). Hence with overwhelming probability, $\mathbf{M} + \mathbf{T}_i$ is also distributed uniformly, and it follows that $\hat{\mathbf{H}} = g^{\mathbf{M}+\mathbf{T}_i}$ is uniform in $\mathbb{G}^{2 \times n}$, which is identical to the distribution generated by crsGen(1$^{\lambda}$). On the other hand, if $\mathbf{M}$ is chosen uniformly at random with rank 1, then there exists a vector $\mathbf{w} \in \mathbb{Z}_p^2$ such that each column of $\mathbf{M}$ can be written as $a_i \cdot \mathbf{w}$. We can assume that the first element $w_1$ of $\mathbf{w}$ is 1, since the case $w_1 = 0$ happens only with probability $1/p = \text{negl}(\lambda)$, and if $w_1 \neq 0$ we can replace each $a_i$ by $a'_i = a_i \cdot w_1$ and each $w_i$ by $w'_i = \frac{w_i}{w_1}$. Thus, we can write $\mathbf{M}$ as $\mathbf{M} = \mathbf{w} \cdot \mathbf{a}^{\top}$ and consequently $\hat{\mathbf{H}}$ as $\hat{\mathbf{H}} = g^{\mathbf{w} \cdot \mathbf{a}^{\top} + \mathbf{T}_i}$. Notice that $\mathbf{a}$ is uniformly distributed, hence the distribution of $\hat{\mathbf{H}}$ is identical to that generated by bindingCrsGen(1$^{\lambda}$, i). Since $\mathcal{D}$ distinguishes the distributions crsGen(1$^{\lambda}$) and bindingCrsGen(1$^{\lambda}$, i) with non-negligible advantage $\epsilon$, $\mathcal{D}'$ distinguishes MatrixRank($\mathbb{G}$, 2, $n$, 1, 2) with advantage $\epsilon - \text{negl}(\lambda)$, which contradicts the hardness of MatrixRank($\mathbb{G}$, 2, $n$, 1, 2). □
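+The only computation $\mathcal{D}'$ performs on its challenge matrix is adding $\mathbf{T}_i$ in the exponent, which amounts to multiplying a single entry by $g$. A sketch with 0-indexed columns and toy parameters of our choosing:

```python
# D' receives M-hat in G^{2 x n} and hands D the matrix H-hat whose
# exponent is M + T_i: only the (2, i) entry changes, multiplied by g.
p, q, g = 1019, 2039, 4

def add_Ti_in_exponent(M_hat, i):
    H_hat = [row[:] for row in M_hat]
    H_hat[1][i] = H_hat[1][i] * g % q
    return H_hat
```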
+
+A corollary of Lemma 4.6 is that for all $i,j \in [n]$ the distributions bindingCrsGen(1$^{\lambda}$, i) and bindingCrsGen(1$^{\lambda}$, j) are indistinguishable, stated as follows.
+
+**Corollary 4.7.** *Assume the MatrixRank($\mathbb{G}$, 2, $n$, 1, 2) problem is computationally hard. Then it holds for all $i,j \in [n]$ that bindingCrsGen(1$^{\lambda}$, i) and bindingCrsGen(1$^{\lambda}$, j) are computationally indistinguishable.*
+
+We next show that if the common reference string crs = $\hat{\mathbf{H}}$ is generated by bindingCrsGen(1$^{\lambda}$, i), then the hash value Hash(crs, $\mathbf{x}$) statistically binds $x_i$.
+
+**Lemma 4.8 (Statistically Binding at Position i).** *For every $i \in [n]$, every $\mathbf{x} \in \mathbb{Z}_p^n$, and all choices of crs in the support of bindingCrsGen(1$^{\lambda}$, i), we have that for every $\mathbf{x}' \in \mathbb{Z}_p^n$ such that $x'_i \neq x_i$, $\text{Hash}(\text{crs}, \mathbf{x}) \neq \text{Hash}(\text{crs}, \mathbf{x}')$.*
+
+*Proof.* We first write crs as $\hat{\mathbf{H}} = g^{\mathbf{H}} = g^{\mathbf{w} \cdot \mathbf{a}^{\top} + \mathbf{T}_i}$ and Hash(crs, $\mathbf{x}$) as $\text{Hash}(\hat{\mathbf{H}}, \mathbf{x}) = g^{\mathbf{y}} = g^{\mathbf{H} \cdot \mathbf{x}}$. Thus, by taking the discrete logarithm with basis $g$, our task is to demonstrate that $x_i$ is uniquely determined by $\mathbf{H} = \mathbf{w} \cdot \mathbf{a}^{\top} + \mathbf{T}_i$ and $\mathbf{y} = \mathbf{H} \cdot \mathbf{x}$. Observe that
+
+$$
+\begin{align*}
+\mathbf{y} &= \mathbf{H} \cdot \mathbf{x} = (\mathbf{w} \cdot \mathbf{a}^\top + \mathbf{T}_i) \cdot \mathbf{x} = \mathbf{w} \cdot (\mathbf{a}, \mathbf{x}) + \mathbf{T}_i \cdot \mathbf{x} \\
+&= \binom{1}{w_2} \cdot (\mathbf{a}, \mathbf{x}) + \binom{0}{1} \cdot x_i,
+\end{align*}
+$$
+
+where $(\mathbf{a}, \mathbf{x})$ is the inner product of $\mathbf{a}$ and $\mathbf{x}$. If $\mathbf{a} \neq \mathbf{0}$, then we can use any non-zero element of $\mathbf{a}$ to compute $w_2$ from $\mathbf{H}$, and recover $x_i$ by computing $x_i = y_2 - w_2 \cdot y_1$; otherwise $\mathbf{a} = \mathbf{0}$, so $x_i = y_2$. □
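+The extraction step can be checked numerically. Working directly with the exponents (the proof's discrete-log view), the sketch below rebuilds $\mathbf{H} = \mathbf{w}\mathbf{a}^\top + \mathbf{T}_i$ and recovers $x_i = y_2 - w_2 y_1$; the function names are ours, and we hand $w_2$ to the extractor directly (the proof recovers it from $\mathbf{H}$).

```python
p, n = 1019, 8   # the argument is plain linear algebra over Z_p

def binding_H(i, w2, a):
    """Exponent matrix H = w * a^T + T_i with w = (1, w2)^T, i 0-indexed."""
    H = [a[:], [(w2 * aj) % p for aj in a]]
    H[1][i] = (H[1][i] + 1) % p
    return H

def extract_xi(w2, y):
    """Lemma 4.8 extraction: x_i = y_2 - w_2 * y_1 (1-based in the text)."""
    return (y[1] - w2 * y[0]) % p
```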
+---PAGE_BREAK---
+
+### 4.3 A Hash Proof System for Knowledge of Preimage Bits
+
+In this section, we give our desired hash proof systems. In particular, we need a hash proof system for membership in a subspace of a vector space. In our proof we need the following technical lemma.
+
+**Lemma 4.9.** Let $\mathbf{M} \in \mathbb{Z}_p^{m \times n}$ be a matrix. Let $\text{colsp}(\mathbf{M}) = \{\mathbf{M} \cdot \mathbf{x} | \mathbf{x} \in \mathbb{Z}_p^n\}$ be its column space, and $\text{rowsp}(\mathbf{M}) = \{\mathbf{x}^\top \cdot \mathbf{M} | \mathbf{x} \in \mathbb{Z}_p^m\}$ be its row space. Assume that $\mathbf{y} \in \mathbb{Z}_p^m$ and $\mathbf{y} \notin \text{colsp}(\mathbf{M})$. Let $\mathbf{r} \stackrel{\$}{\leftarrow} \mathbb{Z}_p^m$ be chosen uniformly at random. Then it holds that
+
+$$ (\mathbf{M}, \mathbf{y}, \mathbf{r}^\top \mathbf{M}, \mathbf{r}^\top \mathbf{y}) \equiv (\mathbf{M}, \mathbf{y}, \mathbf{r}^\top \mathbf{M}, u), $$
+
+where $u \stackrel{\$}{\leftarrow} \mathbb{Z}_p$ is distributed uniformly and independently of $\mathbf{r}$. Here, $\equiv$ denotes distributional equivalence.
+
+*Proof.* For any $\mathbf{t} \in \text{rowsp}(\mathbf{M})$ and $s \in \mathbb{Z}_p$, consider the following system of linear equations
+
+$$ \begin{cases} \mathbf{r}^\top \mathbf{M} = \mathbf{t} \\ \mathbf{r}^\top \mathbf{y} = s \end{cases}. $$
+
+Let $\mathcal{N}$ be the left null space of $\mathbf{M}$. We know that $\mathbf{y} \notin \text{colsp}(\mathbf{M})$, hence $\mathbf{M}$ has rank $\le m-1$, therefore $\mathcal{N}$ has dimension $\ge 1$. Let $\mathbf{r}_0$ be an arbitrary solution for $\mathbf{r}^\top\mathbf{M} = \mathbf{t}$, and let $\mathbf{n}$ be a vector in $\mathcal{N}$ such that $\mathbf{n}^\top\mathbf{y} \ne 0$ (there must be such a vector since $\mathbf{y} \notin \text{colsp}(\mathbf{M})$). Then there exists a solution $\mathbf{r}$ for the above linear equation system, that is,
+
+$$ \mathbf{r} = \mathbf{r}_0 + (\mathbf{n}^\top \mathbf{y})^{-1} \cdot (s - \mathbf{r}_0^\top \mathbf{y}) \cdot \mathbf{n}, $$
+
+where $(\mathbf{n}^\top\mathbf{y})^{-1}$ is the multiplicative inverse of $\mathbf{n}^\top\mathbf{y}$ in $\mathbb{Z}_p$. Two cases arise: either (i) the columns of the augmented matrix $(\mathbf{M} \mid \mathbf{y})$ are linearly independent, or (ii) they are not. In the first case, the solution $\mathbf{r}$ is unique. In the second case, the solution space has the same size as the left null space of $(\mathbf{M} \mid \mathbf{y})$. Therefore, in both cases, the number of solutions for $\mathbf{r}$ is the same for every pair $(\mathbf{t}, s)$.
+
+As $\mathbf{r}$ is chosen uniformly at random, all pairs $(\mathbf{t}, s) \in \text{rowsp}(\mathbf{M}) \times \mathbb{Z}_p$ have the same probability of occurrence and the claim follows. $\square$
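+The displayed solution formula can be verified on a concrete instance. The sketch below fixes a $2 \times 1$ matrix $\mathbf{M}$ over $\mathbb{Z}_p$ (toy prime of our choosing), a $\mathbf{y}$ outside its column space, and checks that the constructed $\mathbf{r}$ hits an arbitrary target pair $(\mathbf{t}, s)$:

```python
# Concrete instance of Lemma 4.9's formula r = r0 + (n^T y)^{-1} (s - r0^T y) n.
p = 101
M = [1, 2]            # the single column of M (so t is a scalar here)
y = [0, 1]            # y is not a multiple of M's column
nvec = [2, p - 1]     # left null vector of M: 2*1 + (p-1)*2 = 0 (mod p)
t, s = 5, 77          # arbitrary target pair (t, s)
r0 = [t, 0]           # satisfies r0^T M = t

inv = pow((nvec[0] * y[0] + nvec[1] * y[1]) % p, -1, p)
coef = inv * (s - (r0[0] * y[0] + r0[1] * y[1])) % p
r = [(r0[k] + coef * nvec[k]) % p for k in range(2)]
```

+Because $\mathbf{n}$ lies in the left null space of $\mathbf{M}$, adding a multiple of $\mathbf{n}$ preserves $\mathbf{r}^\top\mathbf{M} = \mathbf{t}$ while the coefficient is chosen so that $\mathbf{r}^\top\mathbf{y} = s$.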
+
+**Construction.** Fix a matrix $\hat{\mathbf{H}} \in \mathbb{G}^{2 \times n}$ and an index $i \in [n]$. We will construct a hash proof system HPS = (KeyGen, $H_{\text{public}}$, $H_{\text{secret}}$) for the following language $\mathcal{L}_{\hat{\mathbf{H}},i}$:
+
+$$ \mathcal{L}_{\hat{\mathbf{H}},i} = \{( \hat{\mathbf{y}}, b ) \in \mathbb{G}^2 \times \{0,1\} \mid \exists \mathbf{x} \in \mathbb{Z}_p^n \text{ s.t. } \hat{\mathbf{y}} = \hat{\mathbf{H}}^{\mathbf{x}} \text{ and } x_i = b \}. $$
+
+Note that in our hash proof system we only enforce that a single specified bit is $b$, where $b \in \{0,1\}$; we do not place any requirement on the values used at the other locations. In fact, the values at the other locations may be taken from the full domain $\mathcal{D}$ (i.e., $\mathbb{Z}_p$). Observe that the formal definition of the language $\mathcal{L}_{\hat{\mathbf{H}},i}$ above incorporates this difference between how the hash function is honestly computed and what the hash proof system is supposed to prove.
+
+For ease of exposition, it will be convenient to work with a matrix $\hat{\mathbf{H}}' \in \mathbb{G}^{3\times n}$:
+
+$$ \hat{\mathbf{H}}' = \begin{pmatrix} \hat{\mathbf{H}} \\ g^{\mathbf{e}_i^\top} \end{pmatrix}, $$
+
+where $\mathbf{e}_i \in \mathbb{Z}_p^n$ is the $i$-th unit vector, with all elements equal to zero except the $i$-th one, which is equal to one.
+---PAGE_BREAK---
+
+• KeyGen$(1^{\lambda}, (\hat{\mathbf{H}}, i))$: Choose $\mathbf{r} \stackrel{\$}{\leftarrow} \mathbb{Z}_p^3$ uniformly at random. Compute $\hat{\mathbf{h}} = ((\hat{\mathbf{H}}')^{\top})^{\mathbf{r}}$. Set $\mathbf{pk} = \hat{\mathbf{h}}$ and $\mathbf{sk} = \mathbf{r}$. Output $(\mathbf{pk}, \mathbf{sk})$.
+
+• $H_{\text{public}}(\mathbf{pk}, (\hat{\mathbf{y}}, b), \mathbf{x})$: Parse $\mathbf{pk}$ as $\hat{\mathbf{h}}$. Compute $\hat{z} = (\hat{\mathbf{h}}^{\top})^{\mathbf{x}}$ and output $\hat{z}$.
+
+• $H_{\text{secret}}(\mathbf{sk}, (\hat{\mathbf{y}}, b))$: Parse $\mathbf{sk}$ as $\mathbf{r}$ and set $\hat{\mathbf{y}}' = \begin{pmatrix} \hat{\mathbf{y}} \\ g^b \end{pmatrix}$. Compute $\hat{z} = ((\hat{\mathbf{y}}')^{\top})^{\mathbf{r}}$ and output $\hat{z}$.
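+A Python sketch of the three algorithms, computed directly on group elements of a toy group of our choosing (insecurely small, 0-indexed positions). For brevity we pass the witness $\mathbf{x}$ alone to $H_{\text{public}}$, since the instance $(\hat{\mathbf{y}}, b)$ is not otherwise used there.

```python
import math
import random

p, q, g = 1019, 2039, 4   # toy order-p subgroup of Z_q^* (insecure sizes)
n = 8

def key_gen(H_hat, i):
    """H_hat is the 2 x n crs; i a 0-indexed position. Returns (pk, sk)."""
    Hp = H_hat + [[g if j == i else 1 for j in range(n)]]  # append g^(e_i^T)
    r = [random.randrange(p) for _ in range(3)]            # sk = r
    # pk = h-hat = ((H'-hat)^T)^r: one group element per column of H'
    pk = [math.prod(pow(Hp[k][j], r[k], q) for k in range(3)) % q
          for j in range(n)]
    return pk, r

def h_public(pk, x):
    """H_public: z-hat = (h-hat^T)^x, using the preimage x as witness."""
    return math.prod(pow(pk[j], x[j], q) for j in range(n)) % q

def h_secret(sk, y_hat, b):
    """H_secret: z-hat = ((y'-hat)^T)^r with y'-hat = (y-hat, g^b)."""
    yp = list(y_hat) + [pow(g, b, q)]
    return math.prod(pow(yp[k], sk[k], q) for k in range(3)) % q
```

+When $b = x_i$, both sides compute $\langle \mathbf{y}', \mathbf{r} \rangle$ in the exponent, which is the perfect-completeness argument of Lemma 4.10 below.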
+
+**Lemma 4.10.** For every matrix $\hat{\mathbf{H}} \in \mathbb{G}^{2 \times n}$ and every $i \in [n]$, HPS is a hash proof system for the language $\mathcal{L}_{\hat{\mathbf{H}},i}$.
+
+*Proof.* Let $\hat{\mathbf{H}} = g^{\mathbf{H}}, \mathbf{r} = (\mathbf{r}^*, r_3)$ where $\mathbf{r}^* \in \mathbb{Z}_p^2$. Let $\mathbf{y}' := \log_g \hat{\mathbf{y}}', \mathbf{y} := \log_g \hat{\mathbf{y}}, \mathbf{H}' := \log_g \hat{\mathbf{H}}', \mathbf{h} := \log_g \hat{\mathbf{h}}$.
+
+For perfect correctness, we need to show that for every $i \in [n]$, every $\hat{\mathbf{H}} \in \mathbb{G}^{2 \times n}$, and every $(\mathbf{pk}, \mathbf{sk})$ in the support of KeyGen$(1^{\lambda}, (\hat{\mathbf{H}}, i))$, if $(\hat{\mathbf{y}}, b) \in \mathcal{L}_{\hat{\mathbf{H}},i}$ and $\mathbf{x}$ is a witness for membership (i.e., $\hat{\mathbf{y}} = \hat{\mathbf{H}}^{\mathbf{x}}$ and $x_i = b$), then it holds that $H_{\text{public}}(\mathbf{pk}, (\hat{\mathbf{y}}, b), \mathbf{x}) = H_{\text{secret}}(\mathbf{sk}, (\hat{\mathbf{y}}, b))$.
+
+To simplify the argument, we again consider the statement under the discrete logarithm with basis $g$. Then it holds that
+
+$$
+\begin{align*}
+& \log_g (H_{\text{secret}}(\text{sk}, (\hat{\mathbf{y}}, b))) \\
+&= \log_g (((\hat{\mathbf{y}}')^\top)^{\mathbf{r}}) = \langle \mathbf{y}', \mathbf{r} \rangle = \langle \mathbf{y}, \mathbf{r}^* \rangle + b \cdot r_3 \\
+&= \langle \mathbf{H} \cdot \mathbf{x}, \mathbf{r}^* \rangle + x_i \cdot r_3 = \langle \mathbf{H}' \mathbf{x}, \mathbf{r} \rangle = \langle (\mathbf{H}')^\top \mathbf{r}, \mathbf{x} \rangle \\
+&= \langle \mathbf{h}, \mathbf{x} \rangle = \log_g ((\hat{\mathbf{h}}^\top)^{\mathbf{x}}) \\
+&= \log_g (H_{\text{public}}(\text{pk}, (\hat{\mathbf{y}}, b), \mathbf{x})).
+\end{align*}
+$$
+
+For perfect soundness, let $(\mathbf{pk}, \mathbf{sk}) \leftarrow \text{KeyGen}(1^{\lambda}, (\hat{\mathbf{H}}, i))$. We will show that if $(\hat{\mathbf{y}}, b) \notin \mathcal{L}_{\hat{\mathbf{H}},i}$, then $H_{\text{secret}}(\mathbf{sk}, (\hat{\mathbf{y}}, b))$ is distributed uniformly in the range of $H_{\text{secret}}$, even given $\hat{\mathbf{H}}$, $i$, and $\mathbf{pk}$. Again under the discrete logarithm, this is equivalent to showing that $\langle \mathbf{y}', \mathbf{r} \rangle$ is distributed uniformly given $\mathbf{H}'$ and $\mathbf{h} = (\mathbf{H}')^\top \mathbf{r}$.
+
+Note that we can re-write the language as $\mathcal{L}_{\hat{\mathbf{H}},i} = \{(\hat{\mathbf{y}}, b) \in \mathbb{G}^2 \times \{0,1\} \mid \exists \mathbf{x} \in \mathbb{Z}_p^n \text{ s.t. } \mathbf{H}'\mathbf{x} = \mathbf{y}'\}$. It follows that if $(\hat{\mathbf{y}}, b) \notin \mathcal{L}_{\hat{\mathbf{H}},i}$, then $\mathbf{y}' \notin \text{colsp}(\mathbf{H}')$. Now it follows directly from Lemma 4.9 that
+
+$$
+\boldsymbol{\mathrm{r}}^\top \boldsymbol{\mathrm{y}}' \equiv u
+$$
+
+given $\boldsymbol{\mathrm{H}}'$ and $\boldsymbol{\mathrm{r}}^\top \boldsymbol{\mathrm{H}}'$, where $u$ is distributed uniformly random. This concludes the proof. $\square$
+
+**Remark 4.11.** While proving the security of our applications based on the above hash proof system, we will generate $\hat{\mathbf{H}}$ as the output of bindingCrsGen$(1^{\lambda}, i)$ and use the property that if $(\hat{\mathbf{y}}, b) \in \mathcal{L}_{\hat{\mathbf{H}},i}$, then $(\hat{\mathbf{y}}, 1-b) \notin \mathcal{L}_{\hat{\mathbf{H}},i}$. This follows directly from Lemma 4.8 (that is, $\hat{\mathbf{H}}$ and $\hat{\mathbf{y}}$ uniquely fix $x_i$).
+
+## 4.4 The Laconic OT Scheme
+
+We are now ready to put the pieces together and provide our $\ell$OT scheme with factor-2 compression.
+---PAGE_BREAK---
+
+**Construction.** Let SSBH = (SSBH.crsGen, SSBH.bindingCrsGen, SSBH.Hash) be the HPS-friendly SSB hash function constructed in Section 4.2 with domain $\mathcal{D} = \mathbb{Z}_p$. Notice that we achieve factor-2 compression (namely, compressing $2\lambda$ bits into $\lambda$ bits) by restricting the domain from $\mathcal{D}^n$ to $\{0, 1\}^n$ in our laconic OT scheme. Also, abstractly let the associated hash proof system be HPS = (HPS.KeyGen, HPS.$H_{\text{public}}$, HPS.$H_{\text{secret}}$) for the language
+
+$$ \mathcal{L}_{\text{crs},i} = \{( \text{digest}, b ) \in \{0, 1\}^\lambda \times \{0, 1\} \mid \exists D \in \mathcal{D}^{2\lambda} : \text{SSBH.Hash}( \text{crs}, D ) = \text{digest} \text{ and } D[i] = b \}. $$
+
+Recall that the bit-representation size of a group element of $\mathbb{G}$ is $\frac{\lambda}{2}$, hence the language defined above is the same as the one defined in Section 4.3.
+
+Now we construct the laconic OT scheme $\ell OT = (\text{crsGen}, \text{Hash}, \text{Send}, \text{Receive})$ as follows.
+
+* $\text{crsGen}(1^\lambda)$: Compute $\text{crs} \leftarrow \text{SSBH.crsGen}(1^\lambda)$ and output $\text{crs}$.
+
+* **Hash($\text{crs}, D \in \{0, 1\}^{2\lambda}$):**
+
+ * digest $\leftarrow$ SSBH.Hash($\text{crs}, D$)
+ * $\hat{D} \leftarrow (D, \text{digest})$
+ * Output $(\text{digest}, \hat{D})$
+
+* **Send($\text{crs}, \text{digest}, L, m_0, m_1$):**
+
+ Let HPS be the hash-proof system for the language $\mathcal{L}_{\text{crs},L}$
+
+ * ($\text{pk}, \text{sk}) \leftarrow \text{HPS.KeyGen}(1^\lambda, (\text{crs}, L))$
+ * $c_0 \leftarrow m_0 \oplus \text{HPS.H}_{\text{secret}}(\text{sk}, (\text{digest}, 0))$
+ * $c_1 \leftarrow m_1 \oplus \text{HPS.H}_{\text{secret}}(\text{sk}, (\text{digest}, 1))$
+ * Output $e = (\text{pk}, c_0, c_1)$
+
+* **Receive$^{\hat{D}}$($\text{crs}, e, L$):**
+
+ * Parse $e = (\text{pk}, c_0, c_1)$
+ * Parse $\hat{\text{D}} = (D, \text{digest})$, and set $b \leftarrow D[L]$.
+ * $m \leftarrow c_b \oplus \text{HPS.H}_{\text{public}}(\text{pk}, (\text{digest}, b), D)$
+ * Output $m$
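+Putting the SSB hash and the hash proof system together, the whole scheme fits in a short sketch (toy group of our choosing, insecurely small; messages are encoded as small integers and XORed with the integer encoding of the group element rather than a $\lambda$-bit string, and locations are 0-indexed):

```python
import math
import random

p, q, g = 1019, 2039, 4   # toy order-p subgroup of Z_q^* (insecure sizes)
n = 8                     # toy database length, standing in for 2*lambda

def crs_gen():
    return [[pow(g, random.randrange(p), q) for _ in range(n)] for _ in range(2)]

def ssb_hash(crs, D):
    """digest = H-hat ** D, a pair of group elements."""
    return tuple(math.prod(pow(crs[r][j], D[j], q) for j in range(n)) % q
                 for r in range(2))

def key_gen(crs, L):
    Hp = crs + [[g if j == L else 1 for j in range(n)]]    # append g^(e_L^T)
    r = [random.randrange(p) for _ in range(3)]
    pk = [math.prod(pow(Hp[k][j], r[k], q) for k in range(3)) % q
          for j in range(n)]
    return pk, r

def h_secret(sk, digest, b):
    yp = list(digest) + [pow(g, b, q)]
    return math.prod(pow(yp[k], sk[k], q) for k in range(3)) % q

def h_public(pk, D):
    return math.prod(pow(pk[j], D[j], q) for j in range(n)) % q

def send(crs, digest, L, m0, m1):
    """Encrypt m_b under H_secret(sk, (digest, b)) for both b."""
    pk, sk = key_gen(crs, L)
    return pk, m0 ^ h_secret(sk, digest, 0), m1 ^ h_secret(sk, digest, 1)

def receive(crs, D, e, L):
    """The receiver keeps D-hat = (D, digest) and opens the b = D[L] branch."""
    pk, c0, c1 = e
    b = D[L]
    return (c0, c1)[b] ^ h_public(pk, D)
```

+Correctness is the calculation of Lemma 4.12: for $b = D[L]$ the public and secret evaluations coincide, so the one-time pad on the $b$-branch cancels.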
+
+We will now show that $\ell OT$ is a laconic OT protocol with factor-2 compression, i.e., it has compression factor 2, and satisfies the correctness and sender privacy requirements. First notice that SSBH.Hash is factor-2 compressing, so Hash also has compression factor 2. We next argue correctness and sender privacy in Lemmas 4.12 and 4.13, respectively.
+
+**Lemma 4.12.** *Given that HPS satisfies the correctness property, the $\ell OT$ scheme also satisfies the correctness property.*
+
+*Proof.* Fix a common reference string $\text{crs}$ in the support of $\text{crsGen}(1^\lambda)$, a database string $D \in \{0, 1\}^{2\lambda}$ and an index $L \in [2\lambda]$. Let $b = D[L]$ and $\text{digest} = \text{Hash}(\text{crs}, D)$. Then it clearly holds that $(\text{digest}, b) \in \mathcal{L}_{\text{crs},L}$. Thus, by the correctness property of the hash proof system HPS it holds that
+
+$$ \text{HPS.H}_{\text{secret}}(\text{sk}, (\text{digest}, b)) = \text{HPS.H}_{\text{public}}(\text{pk}, (\text{digest}, b), D). $$
+
+By the construction of Send(crs, digest, L, $m_0$, $m_1$), $c_b = m_b \oplus \text{HPS.H}_{\text{secret}}(\text{sk}, (\text{digest}, b))$. Hence the output $m$ of Receive$^{\hat{D}}$(crs, e, L) is
+
+$$
+\begin{align*}
+m &= c_b \oplus \text{HPS.H}_{\text{public}}(\text{pk}, (\text{digest}, b), D) \\
+ &= m_b \oplus \text{HPS.H}_{\text{secret}}(\text{sk}, (\text{digest}, b)) \oplus \text{HPS.H}_{\text{public}}(\text{pk}, (\text{digest}, b), D) \\
+ &= m_b.
+\end{align*}
+$$
+
+**Lemma 4.13.** *Given that SSBH is index-hiding and statistically binding and that HPS is sound, the $\ell$OT scheme satisfies sender privacy against semi-honest receivers.*
+
+*Proof.* We first construct the simulator $\ell$OTSim.
+
+$\ell$OTSim(crs, D, L, $m_{D[L]}$):
+digest $\leftarrow$ SSBH.Hash(crs, D)
+Let HPS be the hash-proof system for the language $\mathcal{L}_{\text{crs},L}$
+(pk, sk) $\leftarrow$ HPS.KeyGen($1^\lambda$, (crs, L))
+$c_0 \leftarrow m_{D[L]} \oplus$ HPS.$\text{H}_{\text{secret}}$(sk, (digest, 0))
+$c_1 \leftarrow m_{D[L]} \oplus$ HPS.$\text{H}_{\text{secret}}$(sk, (digest, 1))
+Output (pk, $c_0$, $c_1$)
+
+For any database $D$ of size at most $M = \text{poly}(\lambda)$ for any polynomial function $\text{poly}(\cdot)$, any memory location $L \in [M]$, and any pair of messages $(m_0, m_1) \in \{0, 1\}^\lambda \times \{0, 1\}^\lambda$, let $\text{crs} \leftarrow \text{crsGen}(1^\lambda)$ and $\text{digest} \leftarrow \text{Hash}(\text{crs}, D)$. Then we will prove that the two distributions $(\text{crs}, \text{Send}(\text{crs}, \text{digest}, L, m_0, m_1))$ and $(\text{crs}, \ell\text{OTSim}(\text{crs}, D, L, m_{D[L]}))$ are computationally indistinguishable. Consider the following hybrids.
+
+• Hybrid 0: This is the real experiment, namely $(\text{crs}, \text{Send}(\text{crs}, \text{digest}, L, m_0, m_1))$.
+
+• Hybrid 1: Same as hybrid 0, except that $\text{crs}$ is computed by $\text{crs} \leftarrow \text{SSBH.bindingCrsGen}(1^\lambda, L)$.
+
+• Hybrid 2: Same as hybrid 1, except that $c_{1-D[L]}$ is computed by $c_{1-D[L]} \leftarrow m_{D[L]} \oplus \text{HPS.H}_{\text{secret}}(\text{sk}, (\text{digest}, 1 - D[L]))$. That is, both $c_0$ and $c_1$ encrypt the same message $m_{D[L]}$.
+
+• Hybrid 3: Same as hybrid 2, except that $\text{crs}$ is computed by $\text{crs} \leftarrow \text{SSBH.crsGen}(1^\lambda)$. This is the simulated experiment, namely $(\text{crs}, \ell\text{OTSim}(\text{crs}, D, L, m_{D[L]}))$.
+
+Indistinguishability of hybrid 0 and hybrid 1 follows directly from Lemma 4.6, as we replace the distribution of $\text{crs}$ from $\text{SSBH.crsGen}(1^\lambda)$ to $\text{SSBH.bindingCrsGen}(1^\lambda, L)$. Indistinguishability of hybrids 2 and 3 also follows from Lemma 4.6, as we replace the distribution of $\text{crs}$ from $\text{SSBH.bindingCrsGen}(1^\lambda, L)$ back to $\text{SSBH.crsGen}(1^\lambda)$.
+
+We will now show that hybrids 1 and 2 are identically distributed. Since $\text{crs}$ is in the support of $\text{SSBH.bindingCrsGen}(1^\lambda, L)$ and $\text{digest} = \text{SSBH.Hash}(\text{crs}, D)$, by Lemma 4.8 it holds that $(\text{digest}, 1 - D[L]) \notin \mathcal{L}_{\text{crs},L}$. By the soundness property of the hash-proof system HPS, it holds that
+
+$$
+(\mathrm{crs}, L, \mathrm{pk}, \mathrm{HPS.H}_{\mathrm{secret}}(\mathrm{sk}, (\mathrm{digest}, 1 - D[L]))) \equiv (\mathrm{crs}, L, \mathrm{pk}, u),
+$$
+
+for a uniformly random $u$. Furthermore, $c_{D[L]}$ can be computed by $m_{D[L]} \oplus \text{HPS.H}_{\text{public}}(\text{pk}, (\text{digest}, D[L]), D)$. Hence
+
+$$
+\begin{align*}
+& (\text{crs}, L, \text{pk}, m_{D[L]} \oplus \text{HPS.H}_{\text{secret}}(\text{sk}, (\text{digest}, 1 - D[L])), c_{D[L]}) \\
+& \equiv (\text{crs}, L, \text{pk}, u, c_{D[L]}) \\
+& \equiv (\text{crs}, L, \text{pk}, m_{1-D[L]} \oplus \text{HPS.H}_{\text{secret}}(\text{sk}, (\text{digest}, 1 - D[L])), c_{D[L]}).
+\end{align*}
+$$
+
+This concludes the proof. $\square$
+
+# 5 Construction of Updatable Laconic OT
+
+In this section, we construct an updatable laconic OT whose hash function compresses an input (database) of size an arbitrary polynomial in $\lambda$ down to $\lambda$ bits. As every updatable laconic OT protocol is also a (standard) laconic OT protocol, we only construct the former. Our main technique in this construction is the use of garbled circuits to bootstrap a laconic OT with factor-2 compression into one with an arbitrary compression factor.
+
+Below in Section 5.1 we describe some background on the primitives needed for realizing our laconic OT construction. Then we will give the construction of laconic OT along with its correctness and security proofs in Sections 5.2 and 5.3, respectively.
+
+## 5.1 Background
+
+In this section we recall the needed background of garbled circuits and Merkle trees.
+
+### 5.1.1 Garbled Circuits
+
+Garbled circuits were first introduced by Yao [Yao82] (see Lindell and Pinkas [LP09] and Bellare et al. [BHR12] for a detailed proof and further discussion). A circuit garbling scheme GC is a tuple of PPT algorithms (GCircuit, Eval). Very roughly GCircuit is the circuit garbling procedure and Eval the corresponding evaluation procedure. Looking ahead, each individual wire $w$ of the circuit being garbled will be associated with two labels, namely $\mathbf{key}_{w,0}$, $\mathbf{key}_{w,1}$.
+
+• $\tilde{\mathcal{C}} \leftarrow \text{GCircuit}(1^\lambda, \mathcal{C}, \{\mathbf{key}_{w,b}\}_{w \in \text{inp}(\mathcal{C}), b \in \{0,1\}})$: GCircuit takes as input a security parameter $\lambda$, a circuit $\mathcal{C}$, and a set of labels $\mathbf{key}_{w,b}$ for all the input wires $w \in \text{inp}(\mathcal{C})$ and $b \in \{0,1\}$. This procedure outputs a garbled circuit $\tilde{\mathcal{C}}$.
+
+• $y \leftarrow \text{Eval}(\tilde{\mathcal{C}}, \{\mathbf{key}_{w,x_w}\}_{w \in \text{inp}(\mathcal{C})})$: Given a garbled circuit $\tilde{\mathcal{C}}$ and a garbled input represented as a sequence of input labels $\{\mathbf{key}_{w,x_w}\}_{w \in \text{inp}(\mathcal{C})}$, Eval outputs $y$.
+
+**Terminology of Keys and Labels.** In the rest of the paper, we use the notation Keys to refer to the pairs of secret values sampled for the wires, and the notation Labels to refer to exactly one value of each pair. In other words, the generation of a garbled circuit involves Keys, while the evaluation itself depends only on Labels. Let $\text{Keys} = ((\mathbf{key}_{1,0}, \mathbf{key}_{1,1}), \dots, (\mathbf{key}_{n,0}, \mathbf{key}_{n,1}))$ be a list of $n$ key-pairs; for a string $x \in \{0,1\}^n$, we denote by $\text{Keys}_x$ the list of labels $(\mathbf{key}_{1,x_1}, \dots, \mathbf{key}_{n,x_n})$.
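As a quick illustration of the $\text{Keys}_x$ notation, a small Python sketch (the helper `select_labels` is hypothetical, introduced only for this example):

```python
from typing import List, Tuple

def select_labels(keys: List[Tuple[bytes, bytes]], x: str) -> List[bytes]:
    """Return Keys_x: for each wire i, pick key_{i, x_i} from the i-th pair."""
    assert len(keys) == len(x)
    return [pair[int(bit)] for pair, bit in zip(keys, x)]

keys = [(b"k1,0", b"k1,1"), (b"k2,0", b"k2,1"), (b"k3,0", b"k3,1")]
assert select_labels(keys, "010") == [b"k1,0", b"k2,1", b"k3,0"]
```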
+
+**Correctness.** For correctness, we require that for any circuit C and input $x \in \{0,1\}^m$ (here m is the input length to C) we have that:
+
+$$ \mathrm{Pr} \left[ C(x) = \mathrm{Eval} \left( \tilde{\mathcal{C}}, \{\mathbf{key}_{w,x_w}\}_{w \in \mathrm{inp}(C)} \right) \right] = 1 $$
+
+where $\tilde{\mathcal{C}} \leftarrow \mathrm{GCircuit} (1^\lambda, C, \{\mathbf{key}_{w,b}\}_{w \in \mathrm{inp}(C), b \in \{0,1\}})$.
+
+**Security.** For security, we require that there is a PPT simulator CircSim such that for any C, x, and uniformly random keys $\{\mathbf{key}_{w,b}\}_{w \in \mathrm{inp}(C), b \in \{0,1\}}$, we have that
+
+$$ (\tilde{\mathcal{C}}, \{\mathbf{key}_{w,x_w}\}_{w \in \mathrm{inp}(C)}) \stackrel{c}{\approx} \mathrm{CircSim} (1^\lambda, C, y) $$
+
+where $\tilde{\mathcal{C}} \leftarrow \mathrm{GCircuit} (1^\lambda, C, \{\mathbf{key}_{w,b}\}_{w \in \mathrm{inp}(C), b \in \{0,1\}})$ and $y = C(x)$.
+
+### 5.1.2 Merkle Tree
+
+In this section we briefly review Merkle trees. A Merkle tree is a hash-based data structure that generically extends the domain of a hash function. The following description is tailored to the hash function of the laconic OT scheme that we present in Section 5.2. Given a two-to-one hash function **Hash**: $\{0,1\}^{2\lambda} \rightarrow \{0,1\}^{\lambda}$, we can use a Merkle tree to construct a hash function that compresses a database of an arbitrary (a priori unbounded polynomial in $\lambda$) size to a $\lambda$-bit string. We now briefly illustrate how to compress a database $D \in \{0,1\}^M$ (assume for ease of exposition that $M = 2^d \cdot \lambda$). First, we partition $D$ into strings of length $2\lambda$; we call each string a *leaf*. Then we use **Hash** to compress each leaf into a new string of length $\lambda$; we call each such string a *node*. Next, we bundle the nodes in pairs of two and call these pairs *siblings*, i.e., each pair of siblings is a string of length $2\lambda$. We then use **Hash** again to compress each pair of siblings into a new node of size $\lambda$. We continue this process until a single node of size $\lambda$ is obtained. This process forms a binary tree structure, which we refer to as a Merkle tree. Looking ahead, the hash function of the laconic OT scheme has output $(\hat{D}, \text{digest})$, where $\hat{D}$ is the entire Merkle tree and *digest* is the root of the tree.
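The bottom-up compression just described can be sketched in a few lines of Python; truncated SHA-256 is an assumed stand-in for the two-to-one hash (with $\lambda$ read as a byte length rather than a bit length):

```python
import hashlib

LAM = 32  # node size in bytes, standing in for lambda bits

def two_to_one(block: bytes) -> bytes:
    """Stand-in for Hash: {0,1}^{2*lam} -> {0,1}^lam."""
    assert len(block) == 2 * LAM
    return hashlib.sha256(block).digest()[:LAM]

def merkle_layers(D: bytes):
    """Return all layers of the tree, hashed leaves first, root last."""
    assert len(D) % (2 * LAM) == 0
    # Partition D into leaves of length 2*lam and hash each into a node.
    nodes = [two_to_one(D[i:i + 2 * LAM]) for i in range(0, len(D), 2 * LAM)]
    layers = [nodes]
    # Repeatedly hash sibling pairs until a single root node remains.
    while len(nodes) > 1:
        nodes = [two_to_one(nodes[i] + nodes[i + 1])
                 for i in range(0, len(nodes), 2)]
        layers.append(nodes)
    return layers

D = bytes(range(256))          # 256 bytes -> 4 leaves of 2*LAM bytes each
layers = merkle_layers(D)
digest = layers[-1][0]         # the root is the lambda-sized digest
assert [len(layer) for layer in layers] == [4, 2, 1]
```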
+
+A Merkle tree has the following property. In order to verify that a database $D$ with hash root *digest* has a certain value $b$ at a location $L$ (namely, $D[L] = b$), there is no need to provide the entire Merkle tree. Instead, it is sufficient to provide a path of siblings from the Merkle tree root to the leaf that contains location $L$. It can then be easily verified if the hash values from the leaf to the root are correct.
+
+Moreover, a Merkle tree can be updated in the same fashion when the value at a certain location of the database is updated. Instead of recomputing the entire tree, we only need to recompute the nodes on the path from the updated leaf to the root. This can be done given the path of siblings from the root to the leaf.
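The path check and local update described in the last two paragraphs might look as follows (a sketch: `two_to_one` is again a truncated-SHA-256 stand-in for the two-to-one hash, a path is given root-to-leaf as the sibling pairs $(sbl^1, \dots, sbl^{d-1}, \text{leaf})$, and for simplicity the update writes a whole byte rather than a single bit):

```python
import hashlib

LAM = 32

def two_to_one(block: bytes) -> bytes:
    assert len(block) == 2 * LAM
    return hashlib.sha256(block).digest()[:LAM]

def verify_path(root: bytes, bits, path) -> bool:
    """Check a root-to-leaf path [sbl^1, ..., sbl^{d-1}, leaf] against root."""
    *sbls, leaf = path
    cur = two_to_one(leaf)
    for b, (left, right) in zip(reversed(bits), reversed(sbls)):
        if (left, right)[b] != cur:      # claimed node must match the hash below
            return False
        cur = two_to_one(left + right)   # hash the sibling pair upwards
    return cur == root

def write_and_rehash(bits, path, t: int, value: int):
    """Overwrite byte t of the leaf and recompute only the nodes on the path."""
    *sbls, leaf = path
    leaf = leaf[:t] + bytes([value]) + leaf[t + 1:]
    cur = two_to_one(leaf)
    new_sbls = []
    for b, (left, right) in zip(reversed(bits), reversed(sbls)):
        pair = (cur, right) if b == 0 else (left, cur)
        new_sbls.append(pair)
        cur = two_to_one(pair[0] + pair[1])
    return cur, [*reversed(new_sbls), leaf]  # new root and updated path

# A depth-2 example: two leaves, one sibling pair at the root.
leaf0, leaf1 = bytes(2 * LAM), bytes(range(2 * LAM))
n0, n1 = two_to_one(leaf0), two_to_one(leaf1)
root = two_to_one(n0 + n1)
path = [(n0, n1), leaf1]                 # path to the leaf with b_1 = 1
assert verify_path(root, [1], path)
new_root, new_path = write_and_rehash([1], path, 0, 0xFF)
assert verify_path(new_root, [1], new_path)
```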
+
+## 5.2 Construction
+
+We will now provide our construction to bootstrap an *ℓOT* scheme with factor-2 compression into an updatable *ℓOT* scheme with an arbitrary compression factor, which can compress a database of an arbitrary (a priori unbounded polynomial in $\lambda$) size.
+
+**Overview.** We first give an overview of the construction. Consider a database $D \in \{0, 1\}^M$ such that $M = 2^d \cdot \lambda$. Given a laconic OT scheme with factor-2 compression (denoted as $\ell OT_{const}$), we will first use a Merkle tree to obtain a hash function with arbitrary (polynomial) compression factor. As described in Section 5.1.2, the **Hash** function of the updatable $\ell OT$ scheme will have an output $(\hat{D}, \text{digest})$, where $\hat{D}$ is the entire Merkle tree, and **digest** is the root of the tree.
+
+In the **Send** algorithm, suppose we want to send a message depending on a bit $D[L]$. We follow the natural approach of traversing the Merkle tree layer by layer until reaching the leaf containing $L$. In particular, $L$ can be represented as $L = (b_1, \dots, b_{d-1}, t)$, where $b_1, \dots, b_{d-1}$ are bits representing the path from the root to the leaf containing location $L$, and $t \in [2\lambda]$ is the position within the leaf. The **Send** algorithm takes as input the root **digest** of the Merkle tree and generates a chain of garbled circuits, which enables the receiver to traverse the Merkle tree from the root to the leaf. Upon reaching the leaf, the receiver can evaluate the last garbled circuit and retrieve the message corresponding to the $t$-th bit of the leaf.
+
+We briefly explain the chain of garbled circuits as follows. The chain consists of $d-1$ traversing circuits along with a reading circuit. Every traversing circuit takes as input a pair of siblings $sbl = (sbl_0, sbl_1)$ at a certain layer of the Merkle tree, chooses $sbl_b$, which is the node on the path from the root to the leaf, and generates a laconic OT ciphertext (using $\ell OT_{const}.\text{Send}$) which encrypts the input keys of the next traversing garbled circuit and uses $sbl_b$ as the hash value. Looking ahead, when the receiver evaluates the traversing circuit and obtains the laconic OT ciphertext, he can then use the siblings at the next layer to decrypt the ciphertext (by $\ell OT_{const}.\text{Receive}$) and obtain the corresponding input labels for the next traversing garbled circuit. Using the chain of traversing garbled circuits, the receiver can therefore traverse from the first layer to the leaf of the Merkle tree. Furthermore, the correct keys for the first traversing circuit are sent via $\ell OT_{const}$ with **digest** (i.e., the root of the tree) as the hash value.
+
+Finally, the last traversing circuit will transfer keys for the last reading circuit to the receiver in a similar fashion as above. The reading circuit takes the leaf as input and outputs $m_{leaf[t]}$, i.e., the message corresponding to the $t$-th bit of the leaf. Hence, when evaluating the reading circuit, the receiver can obtain the message $m_{leaf[t]}$.
+
+**SendWrite** and **ReceiveWrite** are similar to **Send** and **Receive**, except that (a) **ReceiveWrite** updates the Merkle tree from the leaf to the root, and (b) the last writing circuit recomputes the root of the Merkle tree and outputs messages corresponding to the new root. To enable (b), the writing circuit takes as input the whole path of siblings from the root to the leaf. The input keys for the writing circuit corresponding to the siblings at the $(i+1)$-th layer are transferred via the $i$-th traversing circuit. That is, the $i$-th traversing circuit transfers the keys for the $(i+1)$-th traversing circuit as well as partial keys for the writing circuit. In the actual construction, both the reading circuit and the writing circuit take as input the entire path of siblings (for the purpose of symmetry).
+
+**The Construction.** Let $\ell OT_{const} = (\ell OT_{const}.crsGen, \ell OT_{const}.Hash, \ell OT_{const}.Send, \ell OT_{const}.Receive)$ be a laconic OT protocol with factor-2 compression. Let GC = (GCircuit, Eval) be a circuit garbling scheme. Without loss of generality, let $D \in \{0,1\}^M$ be a database such that $M = 2^d \cdot \lambda$. A location $L \in [M]$ can be represented as $(b_1, b_2, \dots, b_{d-1}, t) \in \{0,1\}^{d-1} \times [2\lambda]$, where the bits $b_i$ define the path from the root to a leaf in the Merkle tree, and $t \in [2\lambda]$ defines a position in that leaf.
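The arithmetic behind this decomposition is straightforward; a Python sketch with 0-based indices (unlike the 1-based text) and small, purely illustrative parameters:

```python
def split_location(L: int, d: int, lam: int):
    """Map L in [0, 2^d * lam) to path bits (b_1, ..., b_{d-1}) and offset t,
    where each leaf holds 2*lam bits. Indices are 0-based here."""
    leaf_index, t = divmod(L, 2 * lam)          # which leaf, and where in it
    bits = [(leaf_index >> (d - 2 - i)) & 1     # MSB-first path bits
            for i in range(d - 1)]
    return bits, t

# d = 3, lam = 4: M = 32 bit positions, 4 leaves of 8 bits each.
assert split_location(19, 3, 4) == ([1, 0], 3)  # leaf 2, offset 3
assert split_location(0, 3, 4) == ([0, 0], 0)
```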
+
+Before delving into the construction, we first describe three gadget circuits: the traversing circuit $C^{\text{trav}}$, the reading circuit $C^{\text{read}}$, and the writing circuit $C^{\text{write}}$. These circuits are defined
+
+formally in Figures 4, 5, and 6, respectively.
+
+The traversing circuit has hardwired inside it a common reference string `crs`, a bit `b` and two vectors of input keys $\text{Keys}$, $\widetilde{\text{Keys}}$, each containing $2\lambda$ key-pairs (a key-pair is a pair of $\lambda$-bit strings). It takes as input a pair of siblings $sbl = (sbl_0, sbl_1)$, each of length $\lambda$, and generates two laconic OT Send messages with $sbl_b$ as the digest and $\text{Keys}$, $\widetilde{\text{Keys}}$ as the respective message vectors. It also has hardwired the randomness needed for $\ell OT_{const}.\text{Send}$.
+
+The reading circuit $C^{read}$ has a location $t \in [2\lambda]$ and messages $m_0, m_1 \in \{0, 1\}^\lambda$ hardwired inside it. It takes as input a path of siblings from the root to a leaf, reads the $t$-th bit of the leaf, and outputs either $m_0$ or $m_1$ depending on that bit.
+
+The writing circuit $C^{write}$ has hardwired inside it a common reference string `crs`, a location $L \in [M]$, a bit `b` and a vector of input keys $\text{Keys}$ consisting of $\lambda$ key-pairs. It takes as input a path of siblings from the root to a leaf, changes the $t$-th bit of the leaf to `b` (where $L$ corresponds to the $t$-th location in the leaf), recomputes the Merkle tree root along the path, and outputs the labels corresponding to the new root/digest.
+
+Figure 4: The Traversing Circuit $C^{trav}[crs, b, \text{Keys}, \widetilde{\text{Keys}}, r, \tilde{r}]$
+
+Figure 5: The Reading Circuit $C^{read}[t, m_0, m_1]$
+
+Now we construct the updatable $\ell OT$, namely ($crsGen$, `Hash`, `Send`, `Receive`, `SendWrite`, `ReceiveWrite`) as follows.
+
+* `crsGen(1^λ)`: Sample `crs` ← $\ell OT_{const}$.crsGen(1^λ) and output `crs`.
+
+* Hash(crs, $D \in \{0, 1\}^M$):
+
+Build a Merkle tree $\hat{D}$ of $D$ using $\ell OT_{const}$.Hash(crs, ·), as in Section 5.1.2.
+Let `digest` be the root of $\hat{D}$.
+Output (`digest`, $\hat{D}$).
+
+* Send(crs, digest, $L$, $m_0, m_1$):
+
+Parse $L = (b_1, b_2, ..., b_{d-1}, t)$.
+Pick $(\tilde{\text{Keys}}^1, \dots, \tilde{\text{Keys}}^d)$ as input keys for $C^{read}$,
+
+**Circuit C**write
+
+**Hardwired Values:** crs, L, b, Keys
+
+**Input:** path
+
+Parse $L = (b_1, b_2, \dots, b_{d-1}, t)$
+
+Parse $path = (sbl^1, \dots, sbl^{d-1}, leaf)$, and parse $sbl^i = (sbl_0^i, sbl_1^i)$ for $i \in [d-1]$
+
+$leaf[t] \leftarrow b$
+
+$sbl^d \leftarrow leaf$
+
+For $i = d - 1$ downto 1:
+
+$sbl_{b_i}^i \leftarrow \ell OT_{\text{const}}.\text{Hash}(crs, sbl^{i+1})$
+
+$digest^* \leftarrow \ell OT_{\text{const}}.\text{Hash}(crs, sbl^1).$
+
+Output $Keys_{digest^*}$
+
+Figure 6: The Writing Circuit $C^{write}[crs, L, b, \text{Keys}]$
+
+where $\tilde{\text{Keys}}^i$ corresponds to the input keys of $sbl^i$ for $i \in [d-1]$,
+and $\tilde{\text{Keys}}^d$ corresponds to the input keys of leaf.
+
+$\tilde{C}^{\text{read}} \leftarrow \text{GCircuit}\left(1^{\lambda}, C^{\text{read}}[t, m_0, m_1], (\tilde{\text{Keys}}^1, \dots, \tilde{\text{Keys}}^d)\right)$
+
+Let $\text{Keys}^d$ be $0^*$
+
+For $i = d - 1$ downto 1:
+
+Pick $\text{Keys}^i$ as input keys for $C^{\text{trav}}$
+
+Pick $r_i, \tilde{r}_i$ as random coins for $\ell OT_{\text{const}}$.Send
+
+$\tilde{C}_i \leftarrow \text{GCircuit}\left(1^{\lambda}, C^{\text{trav}}[\text{crs}, b_i, \text{Keys}^{i+1}, \tilde{\text{Keys}}^{i+1}, r_i, \tilde{r}_i], \text{Keys}^i\right)$
+
+$e_0 \leftarrow \ell OT_{\text{const}}.Send(\text{crs}, \text{digest}, \text{Keys}^1)$
+
+$\tilde{e}_0 \leftarrow \ell OT_{\text{const}}.Send(\text{crs}, \text{digest}, \tilde{\text{Keys}}^1)$
+
+Output $e = (e_0, \tilde{e}_0, \tilde{C}_1, \dots, \tilde{C}_{d-1}, \tilde{C}^{\text{read}})$
+
+• Receive$^{\hat{D}}$(crs, L, e):
+
+Parse $e = (e_0, \tilde{e}_0, \tilde{C}_1, \dots, \tilde{C}_{d-1}, \tilde{C}^{\text{read}})$
+
+Parse $L = (b_1, b_2, \dots, b_{d-1}, t)$
+
+Parse $\hat{D}$ as a Merkle tree.
+
+Denote the end node of path $b_1b_2\dots b_i$ by $\hat{D}_{b_1b_2\dots b_i}$.
+
+For $i=1$ to $d-1$:
+
+$sbl^i \leftarrow (\hat{D}_{b_1\ldots b_{i-1}0}, \hat{D}_{b_1\ldots b_{i-1}1})$
+
+$ Labels^i \leftarrow \ell OT_{\text{const}}.Receive(\text{crs}, e_{i-1}, sbl^i) $
+
+$\widetilde{\text{Labels}}^i \leftarrow \ell OT_{\text{const}}.Receive(\text{crs}, \tilde{\text{e}}_{i-1}, sbl^i)$
+
+$(e_i, \tilde{e}_i) \leftarrow \text{Eval}(\tilde{C}_i, \text{Labels}^i)$
+
+$\mathit{leaf} \leftarrow (\hat{\mathcal{D}}_{b_1\ldots b_{d-1}0}, \hat{\mathcal{D}}_{b_1\ldots b_{d-1}1})$
+
+$\widetilde{\mathit{Labels}}^d \leftarrow \ell OT_{\text{const}}.Receive(\text{crs}, \tilde{\mathit{e}}_{d-1}, \mathit{leaf})$
+
+$m \leftarrow \text{Eval}\left(\tilde{\mathcal{C}}^{\text{read}}, (\widetilde{\mathit{Labels}}^1, \dots, \widetilde{\mathit{Labels}}^d)\right)$
+
+Output $m$
+
+• SendWrite(crs, digest, L, b, $\{m_{j,0}, m_{j,1}\}_{j=1}^{\lambda}$):
+
+Parse $L = (b_1, b_2, \dots, b_{d-1}, t)$.
+
+Pick $(\tilde{\text{Keys}}^1, \dots, \tilde{\text{Keys}}^d)$ as input keys for $\mathcal{C}^{\text{write}}$, where $\tilde{\text{Keys}}^i$ corresponds to the input keys of $\text{sbl}^i$ for $i \in [d-1]$, and $\tilde{\text{Keys}}^d$ corresponds to the input keys of leaf.
+
+$$ \tilde{\mathcal{C}}^{\text{write}} \leftarrow \text{GCircuit} \left( 1^\lambda, \mathcal{C}^{\text{write}}[\text{crs}, L, b, \{m_{j,0}, m_{j,1}\}_{j=1}^\lambda], (\tilde{\text{Keys}}^1, \dots, \tilde{\text{Keys}}^d) \right) $$
+
+Let $\text{Keys}^d$ be $0^*$
+
+For $i = d - 1$ downto 1:
+
+* Pick $\text{Keys}^i$ as input keys for $\mathcal{C}^{\text{trav}}$
+* Pick $r_i, \tilde{r}_i$ as random coins for $\ell OT_{\text{const}}$.Send
+* $\tilde{\mathcal{C}}_i \leftarrow \text{GCircuit} \left( 1^\lambda, \mathcal{C}^{\text{trav}}[\text{crs}, b_i, \text{Keys}^{i+1}, \tilde{\text{Keys}}^{i+1}, r_i, \tilde{r}_i], \text{Keys}^i \right)$
+* $e_0 \leftarrow \ell OT_{\text{const}}.\text{Send}(\text{crs}, \text{digest}, \text{Keys}^1)$
+* $\tilde{e}_0 \leftarrow \ell OT_{\text{const}}.\text{Send}(\text{crs}, \text{digest}, \tilde{\text{Keys}}^1)$
+* Output $e_w = (e_0, \tilde{e}_0, \tilde{\mathcal{C}}_1, \dots, \tilde{\mathcal{C}}_{d-1}, \tilde{\mathcal{C}}^{\text{write}})$
+
+* **ReceiveWrite$^{\hat{D}}$**($\text{crs}, L, b, e_w$):
+
+ Parse $e_w = (e_0, \tilde{e}_0, \tilde{\mathcal{C}}_1, \dots, \tilde{\mathcal{C}}_{d-1}, \tilde{\mathcal{C}}^{\text{write}})$
+ Parse $L = (b_1, b_2, \dots, b_{d-1}, t)$
+ Parse $\hat{D}$ as a Merkle tree.
+ Denote the end node of path $b_1b_2\dots b_i$ by $\hat{D}_{b_1b_2\dots b_i}$.
+
+**Computing messages corresponding to the new digest:**
+
+For $i=1$ to $d-1$:
+
+* $\text{sbl}^i \leftarrow (\hat{D}_{b_1 \dots b_{i-1} 0}, \hat{D}_{b_1 \dots b_{i-1} 1})$
+* $\text{Labels}^i \leftarrow \ell OT_{\text{const}}$.Receive($\text{crs}, e_{i-1}, \text{sbl}^i$)
+* $\widetilde{\text{Labels}}^i \leftarrow \ell OT_{\text{const}}$.Receive($\text{crs}, \tilde{e}_{i-1}, \text{sbl}^i$)
+* $(e_i, \tilde{e}_i) \leftarrow \text{Eval}(\tilde{\mathcal{C}}_i, \text{Labels}^i)$
+* leaf $\leftarrow (\hat{D}_{b_1 \dots b_{d-1} 0}, \hat{D}_{b_1 \dots b_{d-1} 1})$
+* $\widetilde{\text{Labels}}^d \leftarrow \ell OT_{\text{const}}$.Receive($\text{crs}, \tilde{e}_{d-1}$, leaf)
+* $\{m_j\}_{j=1}^\lambda \leftarrow \text{Eval} \left( \tilde{\mathcal{C}}^{\text{write}}, (\widetilde{\text{Labels}}^1, \dots, \widetilde{\text{Labels}}^d) \right)$
+
+**Updating the Merkle tree:**
+
+$$ (\hat{D}_{b_1 \dots b_{d-1} 0} || \hat{D}_{b_1 \dots b_{d-1} 1}) [t] \leftarrow b $$
+
+For $i = d - 1$ downto 0:
+
+$$ \hat{D}_{b_1 \dots b_i} \leftarrow \ell OT_{\text{const}}.\text{Hash}(\text{crs}, \hat{D}_{b_1 \dots b_i 0} || \hat{D}_{b_1 \dots b_i 1}) $$
+
+Update digest with the new root of $\hat{D}$
+
+Output $\{m_j\}_{j=1}^\lambda$
+
+**Correctness.** We briefly argue (perfect) correctness of the updatable laconic OT scheme. Given a ciphertext $e = (e_0, \tilde{e}_0, \tilde{C}_1, \dots, \tilde{C}_{d-1}, \tilde{C}^{\text{read}})$ computed by Send, correctness of $\ell OT_{\text{const}}$ ensures that $\text{Labels}^1 \leftarrow \ell OT_{\text{const}}.\text{Receive}(\text{crs}, e_0, \text{sbl}^1)$ outputs the correct labels for $\tilde{C}_1$ and that $\widetilde{\text{Labels}}^1 \leftarrow \ell OT_{\text{const}}.\text{Receive}(\text{crs}, \tilde{e}_0, \text{sbl}^1)$ outputs the correct labels for $\tilde{C}^{\text{read}}$, namely $\text{Labels}^1 = \text{Keys}_{\text{sbl}^1}$ and $\widetilde{\text{Labels}}^1 = \widetilde{\text{Keys}}_{\text{sbl}^1}$. In turn, correctness of the garbling scheme guarantees that $\tilde{C}_1$ outputs the correct $(e_1, \tilde{e}_1)$, namely $e_1 = \ell OT_{\text{const}}.\text{Send}(\text{crs}, \text{sbl}^1_{b_1}, \text{Keys}^2; r_1)$ and $\tilde{e}_1 = \ell OT_{\text{const}}.\text{Send}(\text{crs}, \text{sbl}^1_{b_1}, \widetilde{\text{Keys}}^2; \tilde{r}_1)$. It follows inductively that for every $i = 1, 2, \dots, d-1$: $\text{Labels}^i = \text{Keys}_{\text{sbl}^i}$, $\widetilde{\text{Labels}}^i = \widetilde{\text{Keys}}_{\text{sbl}^i}$, $e_i = \ell OT_{\text{const}}.\text{Send}(\text{crs}, \text{sbl}^i_{b_i}, \text{Keys}^{i+1}; r_i)$ and $\tilde{e}_i = \ell OT_{\text{const}}.\text{Send}(\text{crs}, \text{sbl}^i_{b_i}, \widetilde{\text{Keys}}^{i+1}; \tilde{r}_i)$. Again by correctness of $\ell OT_{\text{const}}$, $\widetilde{\text{Labels}}^d \leftarrow \ell OT_{\text{const}}.\text{Receive}(\text{crs}, \tilde{e}_{d-1}, \text{leaf})$ gives $\widetilde{\text{Labels}}^d = \widetilde{\text{Keys}}_{\text{leaf}}$. Then, by correctness of the garbling scheme, evaluating $\tilde{C}^{\text{read}}$ gives the correct output $m_{D[L]}$. Correctness with regard to writes can be argued analogously.
+
+## 5.3 Security
+
+In this section, we will prove the security of the above updatable laconic OT scheme.
+
+**Theorem 5.1 (Sender Privacy against Semi-honest Receivers).** *Given that $\ell OT_{\text{const}}$ has sender privacy and that the garbled circuit scheme GCircuit is secure, the updatable laconic OT scheme $\ell OT$ has sender privacy.*
+
+*Proof.* Let $\ell OTSim_{\text{const}}$ be the simulator for $\ell OT_{\text{const}}$ and CircSim be the simulator for the garbling scheme GCircuit. Below, we provide the two simulators $\ell OTSim$ for reads and $\ell OTSimWrite$ for writes.
+
+* $\ell OTSim(\text{crs}, D, L, m)$:
+
+(digest, $\hat{D}) \leftarrow Hash(\text{crs}, D)$
+
+Parse $L = (b_1, b_2, \dots, b_{d-1}, t)$
+
+$$ (\tilde{C}^{\text{read}}, (\widetilde{\text{Labels}}^1, \dots, \widetilde{\text{Labels}}^d)) \leftarrow \text{CircSim}(1^\lambda, C^{\text{read}}, m) $$
+
+leaf $\leftarrow (\hat{D}_{b_1 \dots b_{d-1} 0}, \hat{D}_{b_1 \dots b_{d-1} 1})$
+
+$e_{d-1} \leftarrow \ell OTSim_{\text{const}}(\text{crs}, \text{leaf}, 0^*)$
+
+$\tilde{e}_{d-1} \leftarrow \ell OTSim_{\text{const}}(\text{crs}, \text{leaf}, \widetilde{\text{Labels}}^d)$
+
+For $i = d-1$ downto 1:
+
+$(\tilde{C}_i, \text{Labels}^i) \leftarrow \text{CircSim}(1^\lambda, C^{\text{trav}}, (e_i, \tilde{e}_i))$
+
+$\text{sbl}^i \leftarrow (\hat{D}_{b_1 \dots b_{i-1} 0}, \hat{D}_{b_1 \dots b_{i-1} 1})$
+
+$e_{i-1} \leftarrow \ell OTSim_{\text{const}}(\text{crs}, \text{sbl}^i, \text{Labels}^i)$
+
+$\tilde{e}_{i-1} \leftarrow \ell OTSim_{\text{const}}(\text{crs}, \text{sbl}^i, \widetilde{\text{Labels}}^i)$
+
+Output $e = (e_0, \tilde{e}_0, \tilde{C}_1, \dots, \tilde{C}_{d-1}, \tilde{C}^{\text{read}})$
+
+* $\ell OTSimWrite(\text{crs}, D, L, b, \{m_j\}_{j=1}^\lambda)$:
+
+(digest, $\hat{D}) \leftarrow Hash(\text{crs}, D)$
+
+Parse $L = (b_1, b_2, \dots, b_{d-1}, t)$
+
+$$ (\tilde{C}^{\text{write}}, (\widetilde{\text{Labels}}^1, \dots, \widetilde{\text{Labels}}^d)) \leftarrow \text{CircSim}(1^\lambda, C^{\text{write}}, \{m_j\}_{j=1}^\lambda) $$
+
+$$ \begin{align*}
+\text{leaf} &\leftarrow (\hat{\mathrm{D}}_{b_1 \dots b_{d-1} 0}, \hat{\mathrm{D}}_{b_1 \dots b_{d-1} 1}) \\
+\mathrm{e}_{d-1} &\leftarrow \ell \text{OTSim}_{\text{const}} (\text{crs}, \text{leaf}, 0^*) \\
+\tilde{\mathrm{e}}_{d-1} &\leftarrow \ell \text{OTSim}_{\text{const}} \left( \text{crs}, \text{leaf}, \widetilde{\text{Labels}}^d \right)
+\end{align*} $$
+
+For $i = d - 1$ downto 1:
+
+$$
+\begin{align*}
+(\tilde{\mathcal{C}}_i, \text{Labels}^i) &\leftarrow \text{CircSim}(1^\lambda, C^{\text{trav}}, (e_i, \tilde{e}_i)) \\
+\mathit{sbl}^i &\leftarrow (\hat{\mathrm{D}}_{b_1 \dots b_{i-1} 0}, \hat{\mathrm{D}}_{b_1 \dots b_{i-1} 1}) \\
+e_{i-1} &\leftarrow \ell \text{OTSim}_{\text{const}}(\text{crs}, \mathit{sbl}^i, \text{Labels}^i) \\
+\tilde{e}_{i-1} &\leftarrow \ell \text{OTSim}_{\text{const}}(\text{crs}, \mathit{sbl}^i, \widetilde{\text{Labels}}^i)
+\end{align*} $$
+
+Output $\mathbf{e}_w = (\mathbf{e}_0, \tilde{\mathbf{e}}_0, \tilde{\mathcal{C}}_1, \dots, \tilde{\mathcal{C}}_{d-1}, \tilde{\mathcal{C}}^{\text{write}})$
+
+In the following we will only prove sender security with regard to reads. Since (Send, $\ell$OTSim) and (SendWrite, $\ell$OTSimWrite) are very similar, sender security with regard to writes can be argued analogously.
+
+We prove security via a hybrid argument. In the first hybrid, we replace the ciphertexts $\mathbf{e}_0$ and $\tilde{\mathbf{e}}_0$ computed by $\ell OT_{\text{const}}.\text{Send}$ with ciphertexts computed by $\ell OTSim_{\text{const}}$.
+
+Afterwards, we can use security of the garbling scheme to replace the honestly generated $\tilde{\mathcal{C}}_1$ with a simulated one, and run $\ell OTSim_{\text{const}}$ using the simulated input labels of $\tilde{\mathcal{C}}_1$. As the output of $\tilde{\mathcal{C}}_1$ is again a pair of ciphertexts $(\mathbf{e}_1, \tilde{\mathbf{e}}_1)$, we will simulate it using $\ell OTSim_{\text{const}}$ in the next hybrid. We continue alternating between simulating the garbled circuits and simulating the ciphertexts, until reaching the reading circuit. Once we reach the reading circuit, it holds that all $\widetilde{\text{Labels}}^i$ are information-theoretically fixed to the path from the root to the leaf containing $L$. We then invoke the garbled-circuit security of the reading circuit, and conclude the hybrid argument.
+
+The formal proof is as follows. For every PPT machine $\mathcal{A}$, let $\text{crs} \leftarrow \text{crsGen}(1^\lambda)$, and let $(D, L, m_0, m_1) \leftarrow \mathcal{A}(\text{crs})$. Further let $\text{digest} \leftarrow \text{Hash}(\text{crs}, D)$. Then we will prove that the two distributions $(\text{crs}, \text{Send}(\text{crs}, \text{digest}, L, m_0, m_1))$ and $(\text{crs}, \ell \text{OTSim}(\text{crs}, D, L, m_{D[L]}))$ are computationally indistinguishable. Consider the following hybrids.
+
+• **Hybrid 0:** This is the real experiment, i.e., $(\text{crs}, \text{Send}(\text{crs}, \text{digest}, L, m_0, m_1))$.
+
+• **Hybrid 1:** Same as hybrid 0, except that $\mathbf{e}_0$ and $\tilde{\mathbf{e}}_0$ are computed as follows.
+
+$$ (\text{digest}, \hat{\mathrm{D}}) \leftarrow \text{Hash}(\text{crs}, D) $$
+
+$$
+\begin{align*}
+\text{Parse } L &= (b_1, b_2, \ldots, b_{d-1}, t). \\
+\text{Pick } (\widetilde{\text{Keys}}^1, \ldots, \widetilde{\text{Keys}}^d) &\text{ as input keys for } C^{\text{read}} \\
+\tilde{\text{C}}^{\text{read}} &\leftarrow \text{GCircuit} (1^\lambda, C^{\text{read}}[t, m_0, m_1], (\widetilde{\text{Keys}}^1, \ldots, \widetilde{\text{Keys}}^d)) \\
+\text{Let } \text{Keys}^d &\text{ be } 0^* \\
+\text{For } i = d-1 &\text{ downto 1: } \\
+&\quad \text{Pick } \text{Keys}^i \text{ as input keys for } C^{\text{trav}} \\
+&\quad \text{Pick } r_i, \tilde{r}_i \text{ as random coins for } \ell \text{OT}_{\text{const}}.\text{Send} \\
+&\quad \tilde{\mathcal{C}}_i \leftarrow \text{GCircuit} (1^\lambda, C^{\text{trav}}[\text{crs}, b_i, \text{Keys}^{i+1}, \widetilde{\text{Keys}}^{i+1}, r_i, \tilde{r}_i], \text{Keys}^i)
+\end{align*} $$
+
+$$
+\begin{array}{l}
+\boxed{\substack{\text{sbl}^1 \leftarrow (\hat{\mathrm{D}}_0, \hat{\mathrm{D}}_1) \\ \text{Labels}^1 \leftarrow \text{Keys}_{\mathrm{sbl}^1}^1}}
+\end{array}
+$$
+
+$$ \begin{array}{c} \widetilde{\text{Labels}}^1 \leftarrow \widetilde{\text{Keys}}^1_{\text{sbl}^1} \\[1em] e_0 \leftarrow \ell \text{OTSim}_{\text{const}}(\text{crs}, \text{sbl}^1, \text{Labels}^1) \\[1em] \tilde{e}_0 \leftarrow \ell \text{OTSim}_{\text{const}}(\text{crs}, \text{sbl}^1, \widetilde{\text{Labels}}^1) \\[1em] \text{Output } e = (e_0, \tilde{e}_0, \tilde{\mathcal{C}}_1, \dots, \tilde{\mathcal{C}}_{d-1}, \tilde{\mathcal{C}}^{\text{read}}) \end{array} $$
+
+The differences between hybrid 0 and hybrid 1 have been marked with boxes. Indistinguishability between hybrid 0 and hybrid 1 can be argued from the multi-execution sender security of $\ell OT_{\text{const}}$ via the following reduction. Given crs by the experiment and the adversarial input $D$, compute hybrid 1 until $e_0$ and $\tilde{e}_0$ are computed. In particular, compute $\hat{D}$, $\text{Keys}^1$, $\widetilde{\text{Keys}}^1$, $\text{sbl}^1$, $\text{Labels}^1 = \text{Keys}^1_{\text{sbl}^1}$, $\widetilde{\text{Labels}}^1 = \widetilde{\text{Keys}}^1_{\text{sbl}^1}$. Then choose $\text{sbl}^1$ as the database and $(\text{Keys}^1, \widetilde{\text{Keys}}^1)$ as the messages for $\ell OT_{\text{const}}$, and obtain the challenge $(e_0^*, \tilde{e}_0^*)$, which is from one of the following two distributions:
+
+$$ (\ell OT_{\text{const}}.\text{Send}(\text{crs}, \text{sbl}^1, \text{Keys}^1), \ell OT_{\text{const}}.\text{Send}(\text{crs}, \text{sbl}^1, \widetilde{\text{Keys}}^1)); $$
+
+$$ (\ell \mathrm{OTSim}_{\mathrm{const}}(\mathrm{crs}, \mathrm{sbl}^1, \mathrm{Labels}^1), \ell \mathrm{OTSim}_{\mathrm{const}}(\mathrm{crs}, \mathrm{sbl}^1, \widetilde{\mathrm{Labels}}^1)). $$
+
+If $(e_0^*, \tilde{e}_0^*)$ is from the first distribution, then it results in hybrid 0; otherwise it results in hybrid 1. Hence the indistinguishability of the two distributions implies indistinguishability of the two hybrids.
+
+* **Hybrid** $2k$ ($k=1,2,\dots,d-1$): Same as hybrid $2k-1$, except that $\tilde{\mathcal{C}}_k$ is computed as follows.
+
+$$ (\text{digest}, \hat{D}) \leftarrow \text{Hash}(\text{crs}, D) $$
+
+$$ \text{Parse } L = (b_1, b_2, \ldots, b_{d-1}, t). $$
+
+$$ \text{Pick } (\widetilde{\text{Keys}}^1, \ldots, \widetilde{\text{Keys}}^d) \text{ as input keys for } \mathcal{C}^{\text{read}} $$
+
+$$ \tilde{\mathcal{C}}^{\text{read}} \leftarrow \text{GCircuit} (1^\lambda, \mathcal{C}^{\text{read}}[t, m_0, m_1], (\widetilde{\text{Keys}}^1, \ldots, \widetilde{\text{Keys}}^d)) $$
+
+Let $\text{Keys}^d$ be $0^*$
+
+For $i = d-1$ downto $k+1$:
+
+* Pick $\text{Keys}^i$ as input keys for $\mathcal{C}^{\text{trav}}$
+
+* Pick $r_i, \tilde{r}_i$ as random coins for $\ell OT_{\text{const}}.\text{Send}$
+
+$$ \tilde{\mathcal{C}}_i \leftarrow \text{GCircuit} (1^\lambda, \mathcal{C}^{\text{trav}}[\text{crs}, b_i, \text{Keys}^{i+1}, \widetilde{\text{Keys}}^{i+1}, r_i, \tilde{r}_i], \text{Keys}^i) $$
+
+$$ \text{sbl}^k \leftarrow (\hat{\mathcal{D}}_{b_1 \dots b_{k-1} 0}, \hat{\mathcal{D}}_{b_1 \dots b_{k-1} 1}) $$
+
+$$ e_k \leftarrow \ell OT_{\text{const}}.\text{Send}(\text{crs}, \text{sbl}_{b_k}^k, \text{Keys}^{k+1}) $$
+
+$$ \tilde{e}_k \leftarrow \ell OT_{\text{const}}.\text{Send}(\text{crs}, \text{sbl}_{b_k}^k, \widetilde{\text{Keys}}^{k+1}) $$
+
+For $i=k$ downto 1:
+
+$$ (\tilde{\mathcal{C}}_i, \text{Labels}^i) \leftarrow \text{CircSim}(1^\lambda, \mathcal{C}^{\text{trav}}, (e_i, \tilde{e}_i)) $$
+
+$$ \text{sbl}^i \leftarrow (\hat{\mathcal{D}}_{b_1 \dots b_{i-1} 0}, \hat{\mathcal{D}}_{b_1 \dots b_{i-1} 1}) $$
+
+$$
+\begin{align*}
+\widetilde{\text{Labels}}^i &\leftarrow \widetilde{\text{Keys}}^i_{\text{sbl}^i} \\
+e_{i-1} &\leftarrow \ell \text{OTSim}_{\text{const}}(\text{crs}, \text{sbl}^i, \text{Labels}^i) \\
+\tilde{e}_{i-1} &\leftarrow \ell \text{OTSim}_{\text{const}}(\text{crs}, \text{sbl}^i, \widetilde{\text{Labels}}^i) \\
+\text{Output } e &= (e_0, \tilde{e}_0, \tilde{C}_1, \dots, \tilde{C}_{d-1}, \tilde{C}^{\text{read}})
+\end{align*}
+$$
+
+* **Hybrid** $2k+1$ ($k=1,2,\dots,d-1$): Same as hybrid $2k$, except that $e_k$ and $\tilde{e}_k$ are computed as follows.
+
+$$ ( \mathit{digest}, \hat{\mathcal{D}} ) \leftarrow \mathit{Hash}( \mathit{crs}, D ) $$
+
+$$ \text{Parse } L = (b_1, b_2, \ldots, b_{d-1}, t). $$
+
+$$ \text{Pick } (\widetilde{\text{Keys}}^1, \ldots, \widetilde{\text{Keys}}^d) \text{ as input keys for } \mathcal{C}^{\text{read}} $$
+
+$$ \tilde{\mathcal{C}}^{\text{read}} \leftarrow \text{GCircuit} (1^{\lambda}, \mathcal{C}^{\text{read}}[t, m_0, m_1], (\widetilde{\text{Keys}}^1, \ldots, \widetilde{\text{Keys}}^d)) $$
+
+Let $\mathit{Keys}^d$ be $0^*$
+
+For $i = d - 1$ downto $k + 1$:
+
+Pick $\mathit{Keys}^i$ as input keys for $\mathcal{C}^{\text{trav}}$
+
+Pick $r_i, \tilde{r}_i$ as random coins for $\ell \mathit{OT}_{\mathrm{const}}$.Send
+
+$$ \tilde{\mathcal{C}}_i \leftarrow \text{GCircuit} (1^{\lambda}, \mathcal{C}^{\text{trav}}[\text{crs}, b_i, \mathit{Keys}^{i+1}, \widetilde{\mathit{Keys}}^{i+1}, r_i, \tilde{r}_i], \mathit{Keys}^i) $$
+
+$$ \mathit{sbl}^{k+1} \leftarrow (\hat{\mathcal{D}}_{b_1 \dots b_k 0}, \hat{\mathcal{D}}_{b_1 \dots b_k 1}) $$
+
+$$ \mathit{Labels}^{k+1} \leftarrow \mathit{Keys}^{k+1}_{\mathit{sbl}^{k+1}} $$
+
+$$ \widetilde{\mathit{Labels}}^{k+1} \leftarrow \widetilde{\mathit{Keys}}^{k+1}_{\mathit{sbl}^{k+1}} $$
+
+$$ e_k \leftarrow \ell \text{OTSim}_{\text{const}}(\text{crs}, \text{sbl}^{k+1}, \text{Labels}^{k+1}) $$
+
+$$ \tilde{e}_k \leftarrow \ell \text{OTSim}_{\text{const}}(\text{crs}, \text{sbl}^{k+1}, \widetilde{\text{Labels}}^{k+1}) $$
+
+For $i = k$ downto 1:
+
+$$ (\tilde{\mathcal{C}}_i, \mathit{Labels}^i) \leftarrow \text{CircSim}(1^\lambda, \mathcal{C}^{\text{trav}}, (e_i, \tilde{e}_i)) $$
+
+$$ \mathit{sbl}^i \leftarrow (\hat{\mathcal{D}}_{b_1 \dots b_{i-1} 0}, \hat{\mathcal{D}}_{b_1 \dots b_{i-1} 1}) $$
+
+$$ \widetilde{\mathit{Labels}}^i \leftarrow \widetilde{\mathit{Keys}}^i_{\mathit{sbl}^i} $$
+
+$$ e_{i-1} \leftarrow \ell \text{OTSim}_{\text{const}}(\text{crs}, \text{sbl}^i, \text{Labels}^i) $$
+
+$$ \tilde{e}_{i-1} \leftarrow \ell \text{OTSim}_{\text{const}}(\text{crs}, \text{sbl}^i, \widetilde{\text{Labels}}^i) $$
+
+Output $e = (e_0, \tilde{e}_0, \tilde{C}_1, \dots, \tilde{C}_{d-1}, \tilde{\mathcal{C}}^{\text{read}})$
+
+We will first show that hybrids $2k-1$ and $2k$ are indistinguishable via a reduction to the security of the garbling scheme GCircuit. Notice that the only difference between hybrids $2k-1$ and $2k$ is $(\tilde{\mathcal{C}}_k, e_{k-1})$. Consider the following two distributions:
+
+$$ (\tilde{\mathcal{C}}_k, \mathit{Labels}^k) \leftarrow (\text{GCircuit} (1^\lambda, \mathcal{C}^{\text{trav}}[\text{crs}, b_k, \mathit{Keys}^{k+1}, \widetilde{\mathit{Keys}}^{k+1}, r_k, \tilde{r}_k], \mathit{Keys}^k), \mathit{Keys}^k_{\mathit{sbl}^k}) ; $$
+
+$$ (\tilde{\mathcal{C}}_k, \mathit{Labels}^k) \leftarrow \text{CircSim}(1^\lambda, C^{\text{trav}}, (e_k, \tilde{e}_k)), $$
+
+where $e_k \leftarrow \ell OT_{\mathrm{const}}.\text{Send}(\mathrm{crs}, \mathit{sbl}_{b_k}^k, \mathit{Keys}^{k+1})$ and $\tilde{e}_k \leftarrow \ell OT_{\mathrm{const}}.\text{Send}(\mathrm{crs}, \mathit{sbl}_{b_k}^k, \widetilde{\mathit{Keys}}^{k+1})$. Notice that $(e_k, \tilde{e}_k)$ is the output of evaluating $\tilde{\mathcal{C}}_k$ on $\mathit{Labels}^k$ from the first distribution. By security of
+the garbled circuit scheme, the above two distributions are computationally indistinguishable. Furthermore, if $\tilde{C}_k$ is generated using the first distribution and $e_{k-1}$ is computed using Labels$^k$ from the first distribution, then it results in hybrid $2k-1$; otherwise it results in hybrid $2k$. Hence the two hybrids are computationally indistinguishable.
+
+Indistinguishability of hybrids $2k$ and $2k+1$ follows again from sender security of $\ell OT_{\text{const}}$, in the same fashion as the indistinguishability between hybrids 0 and 1.
+
+* **Hybrid 2d**: This is the simulated experiment, namely $(\text{crs}, \ell \text{OTSim}(\text{crs}, D, L, m_{D[L]}))$.
+
+The difference between hybrids $2d-1$ and $2d$ is $(\tilde{C}^{\text{read}}, \tilde{e}_0, \dots, \tilde{e}_{d-1})$. The indistinguishability follows from the security of the garbled circuit scheme, similarly to how we argued indistinguishability of hybrids $2k-1$ and $2k$.
+
+# 6 Warm-Up Application: Non-Interactive Secure Computation (NISC) on Large Inputs in RAM Setting
+
+In this section, we consider the application of non-interactive secure computation in the RAM (random access machine) setting.
+
+## 6.1 Background
+
+We recall the needed background of RAM computation model and two-message oblivious transfer in this section. We will also use garbled circuits (see Section 5.1.1) as building blocks.
+
+### 6.1.1 Random Access Machine (RAM) Model of Computation
+
+Now we define the RAM model of computation. Parts of this subsection have been taken verbatim from [GLO15].
+
+**Notation for the RAM Model of Computation.** The RAM model consists of a CPU and a memory storage of size *M*. The CPU executes a program that can access the memory by using read/write operations. In particular, for a program *P* with memory of size *M* we denote the initial contents of the memory data by $D \in \{0, 1\}^M$. Additionally, the program gets a “short” input $x \in \{0, 1\}^m$, which we alternatively think of as the initial state of the program. We use the notation $P^D(x)$ to denote the execution of program *P* with initial memory contents *D* and input *x*. The program *P* can read from and write to various locations in memory *D* throughout its execution.¹⁰
+
+We will also consider the case where several different programs are executed sequentially and the memory persists between executions. We denote this process as $(y_1, \dots, y_\ell) = (P_1(x_1), \dots, P_\ell(x_\ell))^D$ to indicate that first $P_1^D(x_1)$ is executed, resulting in some memory contents $D_1$ and output $y_1$, then $P_2^{D_1}(x_2)$ is executed resulting in some memory contents $D_2$ and output $y_2$ etc. As an example,
+
+¹⁰In general, the distinction between what to include in the program *P*, the memory data *D* and the short input *x* can be somewhat arbitrary. However as motivated by our applications we will typically be interested in a setting where the data *D* is large while the size of the program *|P|* and input length *m* is small.
+
+imagine that $D$ is a huge database and the programs $P_i$ are database queries that can read and
+possibly write to the database and are parameterized by some values $x_i$.
+
+**CPU-Step Circuit.** Consider an execution of a RAM program which involves at most *t* CPU steps. We represent a RAM program *P* via *t* small *CPU-Step Circuits* each of which executes one CPU step. In this work we will denote one CPU step by:
+
+$$C_{\text{CPU}}^{P}(\text{state}, \text{rData}) = (\text{state}', R/W, L, wData)$$
+
+This circuit takes as input the current CPU state **state** and a bit **rData**. Looking ahead, the bit **rData** will be read from the memory location that was requested by the previous CPU step. The circuit outputs an updated state **state'**, a read or write bit R/W, the next location **L** ∈ [*M*] to read from or write to, and a bit **wData** to write into that location (**wData** = ⊥ when reading). The sequence of locations and read/write values collectively form what is known as the *access pattern*, namely $\text{MemAccess} = \{(R/W^\tau, L^\tau, \text{wData}^\tau) : \tau = 1, \dots, t\}$.
+Note that in the description above we have, without loss of generality, made some simplifying assumptions. We assume that each CPU-step circuit always reads from or writes to some location in memory. This is easy to implement via a dummy read and write step. Moreover, we assume that the instructions of the program itself are hardwired into the CPU-step circuits.
+
+**Representing RAM Computation by CPU-Step Circuits.** The computation $P^D(x)$ starts with the initial state set as $\mathbf{state}^1 = x$. In each step $\tau \in \{1, \dots, t\}$, the computation proceeds as follows: If $\tau = 1$ or $R/W^{\tau-1} = \mathbf{write}$, then $\mathbf{rData}^\tau := 0$; otherwise $\mathbf{rData}^\tau := D[L^{\tau-1}]$. Next it executes the CPU-Step Circuit $C_{\mathbf{CPU}}^P(\mathbf{state}^\tau, \mathbf{rData}^\tau) = (\mathbf{state}^{\tau+1}, R/W^\tau, L^\tau, \mathbf{wData}^\tau)$. If $R/W^\tau = \mathbf{write}$, then set $D[L^\tau] = \mathbf{wData}^\tau$. Finally, when $\tau = t$, then $\mathbf{state}^{\tau+1}$ is the output of the program.
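The execution loop above can be sketched as a short interpreter. The following is a minimal illustration only; `run_ram` and the example CPU-step function `c_cpu_sum` are hypothetical names, not part of the scheme. The example program sums the first three bits of $D$ in $t = 4$ steps.

```python
# Minimal interpreter for the RAM model described above: t CPU steps, each
# implemented by one CPU-step function (hypothetical helper names).
def run_ram(c_cpu, D, x, t):
    """Execute P^D(x): state^1 = x; run t CPU steps, reading/writing D."""
    D = list(D)
    state, rw, loc = x, None, None
    for tau in range(1, t + 1):
        # rData^tau is 0 on the first step or after a write, else D[L^{tau-1}].
        rdata = 0 if (tau == 1 or rw == "write") else D[loc]
        state, rw, loc, wdata = c_cpu(state, rdata)
        if rw == "write":
            D[loc] = wdata
    return state  # state^{t+1} is the program's output

# Example CPU step: state = (i, acc); reads D[0], D[1], D[2] and sums them.
def c_cpu_sum(state, rdata):
    i, acc = state
    acc += rdata
    if i < 3:
        return ((i + 1, acc), "read", i, None)  # request D[i] next
    return ((i, acc), "read", 0, None)          # done; answer sits in acc
```

For example, `run_ram(c_cpu_sum, [1, 0, 1, 1], (0, 0), 4)` returns `(3, 2)`: the final state carries the sum $D[0] + D[1] + D[2] = 2$.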
+
+### 6.1.2 Oblivious Transfer
+
+[AIR01, NP01, HK12] gave two-message oblivious transfer (OT) protocols. We describe the definition below and refer the reader to [AIR01, NP01, HK12] for details.
+
+**Definition 6.1** (Two-Message Oblivious Transfer). A two-message oblivious transfer protocol OT = (OT₁, OT₂, OT₃) is a protocol between a sender S and a receiver R where S gets as input two strings $s_0, s_1$ of equal length and R gets as input a choice bit x ∈ {0, 1}. The algorithms have the following syntax:
+
+• $(m_1, \text{secret}) \leftarrow \text{OT}_1(1^\lambda, x)$: It takes as input the security parameter $1^\lambda$ and receiver's choice bit $x \in \{0, 1\}$ and outputs the first OT message $m_1$ (sent by the receiver) and receiver's secret state secret.
+
+• $m_2 \leftarrow \text{OT}_2(m_1, s_0, s_1)$: It takes as input the first OT message and the sender's input $(s_0, s_1)$, and outputs the second OT message $m_2$ (sent back to the receiver).
+
+• $s \leftarrow \text{OT}_3(m_2, \text{secret})$: It takes the second OT message $m_2$ and the receiver's secret state secret as input, and outputs a string $s$.
+
+* **Perfect Correctness:** For every security parameter $\lambda$, sender input strings $(s_0, s_1)$ of equal length, and receiver's choice bit $x$, let $(m_1, \text{secret}) \leftarrow \text{OT}_1(1^\lambda, x)$, $m_2 \leftarrow \text{OT}_2(m_1, s_0, s_1)$, and $s \leftarrow \text{OT}_3(m_2, \text{secret})$; then it holds that
+
+$$\Pr[s = s_x] = 1.$$
+
+* **Receiver Security:** The following two distributions are computationally indistinguishable:
+
+$$\text{OT}_1(1^\lambda, 0) \stackrel{c}{\approx} \text{OT}_1(1^\lambda, 1).$$
+
+* **Sender Security:** There exists a PPT simulator OTSim such that for all sender input strings $(s_0, s_1)$ of equal length and receiver's choice bit $x$, and any first message $m_1$ in the support of $\text{OT}_1(1^\lambda, x)$, the following two distributions are statistically close:
+
+$$\text{OT}_2(m_1, s_0, s_1) \stackrel{s}{\approx} \text{OTSim}(1^\lambda, x, s_x, m_1).$$
+
+We described the above definition with respect to one OT, but the same formalism naturally extends to support multiple parallel executions of OT. We will use the following short-hand notations (generalizing the above notions) to run multiple parallel executions. Let $\text{Keys} = ((\text{Key}_{1,0}, \text{Key}_{1,1}), \dots, (\text{Key}_{n,0}, \text{Key}_{n,1}))$ be a list of $n$ string-pairs, and $x \in \{0, 1\}^n$ be an $n$-bit choice string. Then we define
+
+* $(m_1, \text{secret}) \leftarrow \text{OT}_1(1^\lambda, x) = (\text{OT}_1(1^\lambda, x_1), \dots, \text{OT}_1(1^\lambda, x_n))$.
+
+* $m_2 \leftarrow \text{OT}_2(m_1, \text{Keys}) = (\text{OT}_2(m_{1,1}, \text{Key}_{1,0}, \text{Key}_{1,1}), \dots, \text{OT}_2(m_{1,n}, \text{Key}_{n,0}, \text{Key}_{n,1}))$.
+
+* **Labels** $\leftarrow \text{OT}_3(m_2, \text{secret}) = (\text{OT}_3(m_{2,1}, \text{secret}_1), \dots, \text{OT}_3(m_{2,n}, \text{secret}_n))$.
+
+In the above $m_1 = (m_{1,1}, \dots, m_{1,n})$, $m_2 = (m_{2,1}, \dots, m_{2,n})$, secret = ($\text{secret}_1, \dots, \text{secret}_n$). Correctness guarantees that **Labels** = **Keys**$_x$ = $(\text{Key}_{1,x_1}, \dots, \text{Key}_{n,x_n})$.
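As a concrete illustration of this shorthand, here is a toy sketch. The `ot1`/`ot2`/`ot3` functions below are insecure placeholders (the choice bit and both strings travel in the clear) and exist only to make the parallel-composition syntax and the correctness guarantee **Labels** = **Keys**$_x$ concrete; a real instantiation would use, e.g., the protocol of [NP01].

```python
# Toy, INSECURE two-message OT used only to illustrate the parallel shorthand
# above. All names here are hypothetical.
def ot1(x):
    return ("m1", x), ("secret", x)      # (first message, receiver state)

def ot2(m1, s0, s1):
    return (s0, s1)                      # toy: second message carries both

def ot3(m2, secret):
    _, x = secret
    return m2[x]                         # receiver keeps only s_x

# Parallel shorthand: n independent copies, one per choice bit.
def ot1_par(xs):
    pairs = [ot1(x) for x in xs]
    return [m for m, _ in pairs], [s for _, s in pairs]

def ot2_par(m1s, keys):                  # keys = [(Key_{i,0}, Key_{i,1}), ...]
    return [ot2(m1, k0, k1) for m1, (k0, k1) in zip(m1s, keys)]

def ot3_par(m2s, secrets):
    return [ot3(m2, s) for m2, s in zip(m2s, secrets)]
```

Running the three algorithms in sequence on `keys = [("a0", "a1"), ("b0", "b1")]` with choice string `[1, 0]` yields `["a1", "b0"]`, i.e. exactly **Keys**$_x$.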
+
+Moreover, we will use two important properties of the oblivious transfer [NP01] for our applications: (1) Security holds for multiple second OT messages with regard to the same first OT message. This will be crucial for extending NISC for RAM to support multiple senders with the same receiver. (2) The second OT message is re-randomizable. This will be crucial for the application of multi-hop homomorphic encryption for RAM.
+
+## 6.2 Formal Model for NISC in RAM Setting
+
+Suppose the receiver owns a large confidential database $D \in \{0, 1\}^M$. It first publishes a short message, denoted by $m_1$, which hides $D$. Afterwards, if a sender wants to run a RAM program $P$ (with input $x$) on $D$, it can send a single message $m_2$ to the receiver. For security we require that $m_2$ only reveals the output $P^D(x)$ and the memory access pattern MemAccess of the execution to the receiver. We require that once $m_1$ is published, the computational cost of both the sender (in computing $m_2$) and the receiver (in evaluation), as well as the size of $m_2$, should grow only with the running time of the RAM computation and the size of $m_1$, and is independent of the size of $D$.
+
+Moreover, the sender can run a sequence of programs on a persistent database by sending one message per program to the receiver. Finally, the receiver can run the protocol in parallel with multiple senders, where the same $m_1$ is used. For ease of exposition, below we will describe the setting of one single sender executing one program with the receiver. We provide details on above extensions in Section 6.6.
+
+**The Model.** A non-interactive secure RAM computation scheme NISC-RAM = (Setup, EncData, EncProg, Dec) has the following syntax. It is a two-party protocol between a receiver holding a large secret database $D$ and a sender holding secret program $P$ of running time $t$ and a short input $x$.
+
+* **Setup:** crs $\leftarrow$ Setup($1^{\lambda}$).
+On input the security parameter $1^{\lambda}$, it outputs a common reference string.
+
+* **Database Encryption:** $(m_1, \tilde{D}) \leftarrow EncData(crs, D)$.
+On input the common reference string crs and a database $D \in \{0, 1\}^M$, it outputs a message $m_1$ and a secret state $\tilde{D}$. The receiver publishes $m_1$ as the short message corresponding to $D$.
+
+* **Program Encryption:** $m_2 \leftarrow EncProg(crs, m_1, (P, x, t))$.
+It takes as input the crs, a message $m_1$, a RAM program $P$ with input $x$ and maximum run-time $t$. It then outputs another message $m_2$. The sender sends the message $m_2$.
+
+* **Decryption:** $y \leftarrow \text{Dec}^{\tilde{D}}(\text{crs}, m_2)$.
+The procedure Dec is modeled as a RAM program that can read and write to arbitrary locations of its database initially containing $\tilde{D}$. This procedure is run by the receiver. On input the crs and $m_2$, it outputs $y$.
+
+The following conditions are satisfied:
+
+* **Correctness:** For every database $D \in \{0, 1\}^M$ where $M = \text{poly}(\lambda)$ for any polynomial function $\text{poly}(\cdot)$, for every RAM program $(P, x, t)$, it holds that
+
+$$
+\Pr[\text{Dec}^{\tilde{D}}(\text{crs}, m_2) = P^D(x)] = 1,
+$$
+
+where crs $\leftarrow$ Setup($1^{\lambda}$), $(m_1, \tilde{D}) \leftarrow$ EncData(crs, D), $m_2 \leftarrow$ EncProg(crs, $m_1$, (P, x, t)).
+
+* **Receiver Privacy:** For every pair of databases $D_0 \in \{0, 1\}^M$, $D_1 \in \{0, 1\}^M$ where $M$ is polynomial in $\lambda$, for every crs in the support of Setup($1^{\lambda}$), let $(m_0, \tilde{D}_0) \leftarrow EncData(crs, D_0)$, $(m_1, \tilde{D}_1) \leftarrow EncData(crs, D_1)$. Then it holds that
+
+$$
+(\mathrm{crs}, m_0) \stackrel{c}{\approx} (\mathrm{crs}, m_1).
+$$
+
+* **Sender Privacy:** There exists a PPT simulator niscSim such that for every database $D \in \{0, 1\}^M$ where $M = \text{poly}(\lambda)$ for any polynomial function $\text{poly}(\cdot)$, and for every RAM program $(P, x, t)$, let $y = P^D(x)$ be the output of the program, and MemAccess be the memory access pattern, then it holds that
+
+$$
+(\text{crs}, D, (m_1, \tilde{D}), m_2) \stackrel{c}{\approx} \text{niscSim}(1^\lambda, D, (y, \text{MemAccess}))
+$$
+
+where crs $\leftarrow$ Setup($1^{\lambda}$), $(m_1, \tilde{D}) \leftarrow$ EncData(crs, D) and $m_2 \leftarrow$ EncProg(crs, $m_1$, (P, x, t)).
+
+* **Efficiency:** The length of $m_1$ is a fixed polynomial in $\lambda$ independent of the size of the database. Moreover, the algorithm EncData runs in time $M \cdot \text{poly}(\lambda, \log M)$, EncProg and Dec run in time $t \cdot \text{poly}(\lambda, \log M)$.
+
+## 6.3 Construction
+
+**Overview.** We first give an overview of the construction. For ease of exposition, consider a read-only program where each CPU step outputs the next location to be read based on the value read from last location.
+
+We first describe the EncProg procedure. As already mentioned in the technical overview (see Section 2.2), our construction is based on high-level ideas from garbled RAM (introduced by Lu and Ostrovsky [LO13]) to make the sender and receiver complexity grow only with the running time of the program. In particular, the sender generates a garbled RAM program consisting of a sequence of *t* garbled step circuits. As in the RAM computation model described in Section 6.1.1, every step circuit takes as input the current CPU state and the last read bit, and outputs the updated state and the next read location, say *L*. Note that the next step circuit takes the new value read from the database as input.
+
+The main challenge in program garbling is revealing the correct labels for the next circuit based on the value of $D[L]$. Moreover, it is crucial for garbled circuit security that the receiver does not learn the label corresponding to $1 - D[L]$. Prior works [LO13, GHL$^{+}$14, GLOS15, GLO15] proposed several different solutions to this problem. Here we present a new and arguably simpler solution using laconic oblivious transfer.
+
+Let digest be the hash value of *D* that would be fed into the first step circuit and passed along the sequence of circuits. That is, each circuit would take this digest as input and also output the correct input labels corresponding to the digest for the next circuit. Now, to transfer the correct label corresponding to the value in the database, a step circuit would output a laconic OT ciphertext (using algorithm Send) that encrypts the input keys of the next step circuit and uses digest as the hash value. Looking ahead, when the receiver evaluates the step circuit which outputs the laconic OT ciphertext, he can use *D* to decrypt it to obtain the correct labels (using the procedure Receive of laconic OT).
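The label-selection flow just described can be illustrated with a toy stand-in for laconic OT. The helper names are hypothetical and the sketch is insecure (the "digest" below is the whole database, and the unchosen label is not hidden); real laconic OT provides a short digest and sender privacy. It shows only the data flow: a step circuit "encrypts" both input keys of the next circuit under the digest, and the receiver uses the database to recover the key selected by $D[L]$.

```python
# Toy stand-in for laconic OT (hypothetical names; INSECURE, for data flow only).
def hash_db(D):
    return tuple(D), list(D)   # (digest, D_hat); a real digest is lambda-sized

def lot_send(crs, digest, L, keys):
    return (L, keys)           # toy "ciphertext" for (key_for_0, key_for_1)

def lot_receive(D_hat, crs, e, L):
    loc, keys = e
    return keys[D_hat[loc]]    # the label selected by the bit D[L]
```

For instance, after `digest, D_hat = hash_db([0, 1, 1, 0])`, opening `lot_send(None, digest, 2, ("lab0", "lab1"))` with `lot_receive` yields `"lab1"`, since $D[2] = 1$.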
+
+We will show that sender privacy follows from the sender privacy of laconic OT and the security of circuit garbling. In order to achieve receiver privacy, the receiver does not publish digest in the clear; instead, the labels for digest of the first step circuit are transferred from the sender to the receiver via a two-message OT. In particular, the EncData procedure outputs the first OT message of digest, and EncProg will output the garbled step circuits along with the second OT message for digest's labels.
+
+Finally, note that a general program can also write to the database, in which case we need to update the database as well as the step circuits need to know the updated digest for the correctness of laconic OT and future reads/writes. This is achieved via the updatability property of the laconic OT which allows a sender to generate a ciphertext that allows the receiver to learn messages corresponding to the updated digest. In our case, the messages encrypted would be the input digest keys of the next step circuit.
+
+Next, we give a more formal construction of our scheme.
+
+**The Construction.** Let $\ell OT = (\text{crsGen}, \text{Hash}, \text{Send}, \text{Receive}, \text{SendWrite}, \text{ReceiveWrite})$ be an updatable laconic OT protocol as per Definition 3.2. Let $\text{OT} = (\text{OT}_1, \text{OT}_2, \text{OT}_3)$ be a two-message secure oblivious transfer, and let $\text{GC} = (\text{GCircuit}, \text{Eval})$ be a circuit garbling scheme. The non-interactive secure RAM computation scheme $\text{NISC-RAM} = (\text{Setup}, \text{EncData}, \text{EncProg}, \text{Dec})$ is constructed as follows.
+
+**Setup:** crs $\leftarrow$ Setup($1^{\lambda}$).
+
+The setup algorithm is described in Figure 7. It generates the common reference string for the updatable laconic OT scheme.
+
+Figure 7: Set up procedure of NISC-RAM
+
+**Database Encryption:** $(m_1, \tilde{D}) \leftarrow EncData(crs, D)$.
+
+The algorithm is formally described in Figure 8. It hashes the database *D* using laconic OT Hash function and obtains digest. Then digest is encrypted using the $OT_1$ procedure of two-message OT protocol.
+
+Figure 8: Database encryption procedure of NISC-RAM
+
+**Program Encryption:** $m_2 \leftarrow EncProg(crs, m_1, (P, x, t))$.
+
+The program encryption procedure is formally described in Figure 9. As mentioned above, it generates *t* garbled step circuits $\{\tilde{C}_{\tau}^{\text{step}}\}_{\tau=1}^t$, where every step circuit implements the functionality of a CPU-step circuit. We describe the structure of a step circuit $C^{\text{step}}$ below. The program encryption also consists of the second OT message corresponding to the short message $m_1$ of the receiver (for digest) where the sender's messages consist of the input keys for the first garbled circuit. Finally, it also outputs the keys for decrypting the output of the last step circuit.
+
+Now we elaborate on the logic of a step circuit. The pseudocode of a step circuit $C^{\text{step}}$ is formally described in Figure 10, and the structure is illustrated in Figure 11. The input of a step circuit can be partitioned into (**state**, *rData*, **digest**), where **state** is the current CPU state, *rData* is the bit read from the database, and **digest** is the up-to-date digest of the database. If the previous step is a write, then *rData* = 0. The program encryption outputs garbled circuits for these step circuits, hence, the first step of *EncProg* is to pick the input keys for all the circuits. The $\tau$-th step circuit $C_{\tau}^{\text{step}}$ has hardwired in it the input keys *nextKeys* = (*stateKeys*, *dataKeys*, *digestKeys*) for the next step circuit $C_{\tau+1}^{\text{step}}$.
+
+The logic of the step circuit is as follows: It first computes the new (*state'*, R/W, L, wData). Then, in the case of a "read" it outputs *stateKeys* corresponding to *state'*, labels for *rData* via laconic OT procedure *Send*(·), and *digestKeys* corresponding to *digest*. The case of a write is similar, but now the labels of new updated *digest* are transferred via laconic OT procedure *SendWrite*(·).
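The read/write branching just described can be sketched as follows. This is a hedged illustration only: `send_stub` and `send_write_stub` are hypothetical placeholders for the laconic OT algorithms Send and SendWrite, and key families are modeled as plain dictionaries mapping an input value to its label.

```python
# Illustrative sketch of the step-circuit logic (cf. Figure 10). Stub
# functions stand in for the laconic OT algorithms; real Send/SendWrite
# produce ciphertexts, not plaintext tuples.
def send_stub(crs, digest, L, keys):
    return ("read-ct", L, keys)

def send_write_stub(crs, digest, L, wdata, keys):
    return ("write-ct", L, wdata, keys)

def step_circuit(crs, c_cpu, next_keys, state, rdata, digest):
    state_keys, data_keys, digest_keys = next_keys   # keys of the next circuit
    new_state, rw, loc, wdata = c_cpu(state, rdata)
    if rw == "read":
        # Transfer data_keys[D[loc]] via laconic OT; pass digest label along.
        e_data = send_stub(crs, digest, loc, data_keys)
        out = (state_keys[new_state], e_data, digest_keys[digest])
    else:
        # Transfer the labels of the updated digest via SendWrite.
        e_digest = send_write_stub(crs, digest, loc, wdata, digest_keys)
        out = (state_keys[new_state], data_keys[0], e_digest, wdata)
    return out, rw, loc
```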
+
+**Program Encryption.** $m_2 \leftarrow EncProg(crs, m_1, (P, x, t))$.
+
+1. Generate the garbled program for $P$: Generate garbled circuits $\{\tilde{C}_{\tau}^{\text{step}}\}_{\tau=1}^t$.
+ (a) Sample $\text{stateKeys}^{\tau}, \text{dataKeys}^{\tau}, \text{digestKeys}^{\tau}$ for each $\tau \in \{1, \dots, t+1\}$.
+ (b) For each $\tau \in \{1, \dots, t\}$
+
+$$ \tilde{C}_{\tau}^{\text{step}} \leftarrow \text{GCircuit} \left( 1^{\lambda}, C^{\text{step}}[\text{crs}, P, \text{Keys}^{\tau+1}], \text{Keys}^{\tau} \right), $$
+
+where $\text{Keys}^{\tau} = (\text{stateKeys}^{\tau}, \text{dataKeys}^{\tau}, \text{digestKeys}^{\tau})$.
+
+(c) For $\tau = 1$, embed labels $\text{dataKeys}_0^1$ and $\text{stateKeys}_x^1$ in $\tilde{C}_1^{\text{step}}$.
+
+2. Compute $L \leftarrow \text{OT}_2(m_1, \text{digestKeys}^1)$.
+
+3. Output $m_2 = (L, \{\tilde{C}_{\tau}^{\text{step}}\}_{\tau=1}^t, \text{stateKeys}^{t+1})$.
+
+Figure 9: Program encryption procedure of NISC-RAM
+
+**Hardwired Parameters:** [crs, *P*, nextKeys = (stateKeys, dataKeys, digestKeys)].
+**Input:** (state, rData, digest).
+
+($\text{state}'$, $R/W$, $L$, wData) := $C_{CPU}^P$(state, rData).
+
+**if** $R/W$ = read **then**
+ $e_{data} \leftarrow \text{Send}(\text{crs}, \text{digest}, L, \text{dataKeys})$.
+ **return** (($\text{stateKeys}_{\text{state}'}$, $e_{data}$, digestKeys$_{\text{digest}}$), $R/W$, $L$).
+**else**
+ $e_{digest} \leftarrow \text{SendWrite}(\text{crs}, \text{digest}, L, \text{wData}, \text{digestKeys})$.
+ **return** (($\text{stateKeys}_{\text{state}'}$, $\text{dataKeys}_0$, $e_{digest}$, wData), $R/W$, $L$).
+
+Figure 10: Pseudocode of a step circuit $C^{\text{step}}[\text{crs}, \mathbf{P}, \text{nextKeys}]$.
+
+**Decryption:** $y \leftarrow \text{Dec}^{\tilde{D}}(\text{crs}, m_2)$.
+
+The decryption procedure is described in Figure 13. At a high level the receiver evaluates the garbled step circuits one by one from $\tilde{C}_1^{\text{step}}$ to $\tilde{C}_t^{\text{step}}$, and uses the database to decrypt $\ell OT$ ciphertexts between two consecutive circuits. The output of the last step circuit can be decrypted using $\text{stateKeys}^{t+1}$ and hence $y$ is obtained.
+
+More precisely, the receiver first obtains the **digestLabels** for the first step circuit by running $\text{OT}_3$. Note that the first garbled step circuit already has labels for the **rData** and **state** embedded. Hence the receiver can obtain all the labels for the first step circuit and evaluate it. Then the receiver executes the circuits $\{\tilde{C}_{\tau}^{\text{step}}\}_{\tau=1}^t$ one by one, and learns the labels for the next circuit by running the receiver algorithms of laconic OT on its database.
+
+## 6.4 Correctness
+
+For correctness, we require that for every database $D \in \{0, 1\}^M$, for every RAM program $(P, x, t)$,
+it holds that
+
+$$ \Pr[\text{Dec}^{\tilde{D}}(\text{crs}, m_2) = P^D(x)] = 1, $$
+where crs $\leftarrow$ Setup($1^{\lambda}$), ($m_1, \tilde{D}$) $\leftarrow$ EncData(crs, D), $m_2 \leftarrow$ EncProg(crs, $m_1$, (P, x, t)). Correctness follows from Lemma 6.3 that we will prove below.
+
+Figure 11: A step circuit $C^{\text{step}}[\text{crs}, P, \text{nextKeys}]$
+
+**Claim 6.2.** The first garbled step circuit $\tilde{C}_1^{step}$ gets evaluated on $(x, 0, \text{digest})$, where $(\text{digest}, \hat{D}) = \text{Hash}(crs, D)$.
+
+*Proof.* Since $(m_1, \text{secret}) \leftarrow \text{OT}_1(1^\lambda, \text{digest})$, $L \leftarrow \text{OT}_2(m_1, \text{digestKeys}^1)$, and $\text{digestLabels}^1 \leftarrow \text{OT}_3(L, \text{secret})$, by correctness of OT, $\text{digestLabels}^1 = \text{digestKeys}_\text{digest}^1$. Moreover, $\tilde{C}_1^{step}$ already has the labels $\text{stateKeys}_x^1$ and $\text{dataKeys}_0^1$ embedded in it; by correctness of the circuit garbling scheme, $\tilde{C}_1^{step}$ gets evaluated on $(x, 0, \text{digest})$. $\square$
+
+**Lemma 6.3.** Consider the execution of $P^D(x)$. Let $(\text{state}^\tau, \text{rData}^\tau)$ be the input to the $\tau$-th CPU step. Let $D^\tau$ be the database at the beginning of step $\tau$, and let $(\text{digest}^\tau, \hat{D}^\tau) = \text{Hash}(crs, D^\tau)$. During the Dec procedure, for every $\tau \in [t]$, $\tilde{C}_\tau^{step}$ is evaluated on inputs $(\text{state}^\tau, \text{rData}^\tau, \text{digest}^\tau)$. Moreover, the state of the database held by the receiver at the beginning of evaluating $\tilde{C}_\tau^{step}$ is $\hat{D}^\tau$.
+
+*Proof.* We will prove this lemma by induction on $\tau$. The base case follows from Claim 6.2. Assume that the lemma holds for $\tau = \rho$, then we prove that the lemma holds for $\rho + 1$ in the following. We know that $(\hat{D}^\rho, \text{digest}^\rho) = \text{Hash}(crs, D^\rho)$, and that $\tilde{C}_\rho^{step}$ is executed on $(\text{state}^\rho, \text{rData}^\rho, \text{digest}^\rho)$. By correctness of GC, $\tilde{C}_\rho^{step}$ implements its code of a CPU step, namely $(\text{state}^\prime, \text{R/W}, L, wData) = C_\text{CPU}^P(\text{state}^\rho, \text{rData}^\rho)$. Also notice that nextKeys = $(\text{stateKeys}, \text{dataKeys}, \text{digestKeys})$ hardwired in $\tilde{C}_\rho^{step}$ are the input keys for $\tilde{C}_{\rho+1}^{step}$. There are two cases:
+
+* **R/W = read:** In this case, it follows directly from the Dec procedure that $\text{stateLabels}^{\rho+1} = \text{stateKeys}_{\text{state}'}$ and $\text{digestLabels}^{\rho+1} = \text{digestKeys}_{\text{digest}}$. Since $e_{\text{data}} \leftarrow \text{Send}(crs, \text{digest}^\rho, L, \text{dataKeys})$ and $\text{dataLabels}^{\rho+1} = \text{Receive}^{\hat{D}^\rho}(crs, e_{\text{data}}, L)$, by correctness of the $\ell$OT scheme, $\text{dataLabels}^{\rho+1} = \text{dataKeys}_{D^\rho[L]}$. Hence $\tilde{C}_{\rho+1}^{step}$ is evaluated on inputs $(\text{state}^\prime, D^\rho[L], \text{digest})$, which is exactly $(\text{state}^{\rho+1}, \text{rData}^{\rho+1}, \text{digest}^{\rho+1})$. And $(\hat{D}^\rho, \text{digest}^\rho)$ remains unchanged.
+
+**Decryption.** $y \leftarrow \text{Dec}^{\tilde{D}}(\text{crs}, m_2)$.
+
+1. Parse $\tilde{D} = (\text{digest}, \hat{\mathcal{D}}, \text{secret})$.
+
+2. Parse $m_2 = (L, \{\tilde{C}_{\tau}^{\text{step}}\}_{\tau=1}^t, \text{stateKeys}^{t+1})$.
+
+3. Compute $\text{digestLabels}^1 \leftarrow \text{OT}_3(L, \text{secret})$.
+
+4. Parse $\tilde{C}_1^{\text{step}} = (\tilde{C}_1^{\text{step}}, \text{dataLabels}^1, \text{stateLabels}^1)$.
+
+5. For $\tau = 1$ to $t$ do the following:
+
+$$ (X, R/W, L) := \text{Eval}(\tilde{C}_{\tau}^{\text{step}}, (\text{stateLabels}^{\tau}, \text{dataLabels}^{\tau}, \text{digestLabels}^{\tau})) $$
+
+**if** $R/W = \text{read}$ **then**
+
+$$
+\begin{align*}
+\text{Parse } X &= (\text{stateLabels}^{\tau+1}, \text{e}_{\text{data}}, \text{digestLabels}^{\tau+1}) \\
+\text{dataLabels}^{\tau+1} &= \text{Receive}^{\hat{\mathcal{D}}}(\text{crs}, \text{e}_{\text{data}}, L)
+\end{align*}
+$$
+
+**else**
+
+$$
+\begin{align*}
+\text{Parse } X &= (\text{stateLabels}^{\tau+1}, \text{dataLabels}^{\tau+1}, \text{e}_{\text{digest}}, \text{wData}) \\
+\text{digestLabels}^{\tau+1} &= \text{ReceiveWrite}^{\hat{\mathcal{D}}}(\text{crs}, L, \text{wData}, \text{e}_{\text{digest}})
+\end{align*}
+$$
+
+6. Use $\text{stateKeys}^{t+1}$ to decode $\text{stateLabels}^{t+1}$ and obtain $y$.
+
+Figure 13: Decryption procedure of NISC-RAM
+
+* **R/W = write:** In this case, it follows from the Dec procedure that $\text{stateLabels}^{\rho+1} = \text{stateKeys}_{\text{state}'}$ and $\text{dataLabels}^{\rho+1} = \text{dataKeys}_0$. Since $\text{e}_{\text{digest}} \leftarrow \text{SendWrite}(\text{crs}, \text{digest}^\rho, L, \text{wData}, \text{digestKeys})$ and $\text{digestLabels}^{\rho+1} = \text{ReceiveWrite}^{\hat{\mathcal{D}}^\rho}(\text{crs}, L, \text{wData}, \text{e}_{\text{digest}})$, by correctness of the $\ell$OT scheme, $\text{digestLabels}^{\rho+1} = \text{digestKeys}_{\text{digest}'}$, where $(\hat{\mathcal{D}}', \text{digest}') = \text{Hash}(\text{crs}, D')$ for the updated database $D'$ ($D'$ is identical to $D^\rho$ except that $D'[L] = \text{wData}$). Hence $\tilde{C}_{\rho+1}^{\text{step}}$ is evaluated on inputs $(\text{state}', 0, \text{digest}')$, which is exactly $(\text{state}^{\rho+1}, \text{rData}^{\rho+1}, \text{digest}^{\rho+1})$. Moreover, $(\hat{\mathcal{D}}^\rho, \text{digest}^\rho)$ gets updated to $(\hat{\mathcal{D}}', \text{digest}')$, which is exactly $(\hat{\mathcal{D}}^{\rho+1}, \text{digest}^{\rho+1})$. □
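The correctness argument above tracks exactly the control flow of the Dec loop in Figure 13. As a plain, unencrypted sketch of that loop, with a hash digest standing in for the laconic-OT digest and a toy `cpu_step` as the program (both are illustrative, not the actual construction):

```python
import hashlib

def digest_of(D):
    # Stand-in for the laconic-OT Hash: a short digest of the database.
    return hashlib.sha256(bytes(D)).hexdigest()[:16]

def run_ram(cpu_step, state, D, t):
    # One iteration per CPU step, mirroring the read/write branch of Dec:
    # a read fetches D[L] for the next step, a write updates D and digest.
    D = list(D)
    rData, digest = 0, digest_of(D)
    for _ in range(t):
        state, rw, L, wData = cpu_step(state, rData)
        if rw == "read":
            rData = D[L]                      # Receive on e_data
        else:
            D[L] = wData                      # ReceiveWrite updates D
            rData, digest = 0, digest_of(D)   # and refreshes the digest
    return state, D, digest

def cpu_step(state, rData):
    # Toy program: read D[0..3], accumulate, then write 1 at location 0.
    pc, acc = state
    acc += rData
    if pc < 4:
        return (pc + 1, acc), "read", pc, 0
    return ("done", acc), "write", 0, 1
```

As in the lemma, a read step leaves the digest unchanged while a write step moves both the database and the digest forward together.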
+
+## 6.5 Security Proof
+
+In this section we prove sender privacy and receiver privacy as defined in Section 6.2 under the decisional Diffie-Hellman (DDH) assumption. Receiver privacy follows directly from the receiver security of OT. Below we prove sender privacy by describing a PPT simulator niscSim such that for every database $D \in \lbrace 0, 1 \rbrace^M$, where $M$ is polynomial in $\lambda$, and for every RAM program $(P, x, t)$ with output $y = P^D(x)$ and memory access pattern MemAccess, it holds that
+
+$$ (\text{crs}, (m_1, \tilde{D}), \text{EncProg}(\text{crs}, m_1, (P, x, t))) \stackrel{c}{\approx} (\text{crs}, (m_1, \tilde{D}), \text{niscSim}(\text{crs}, m_1, D, y, \text{MemAccess})) $$
+
+where $\text{crs} \leftarrow \text{Setup}(1^\lambda)$ and $(m_1, \tilde{D}) \leftarrow \text{EncData}(\text{crs}, D)$. Notice that this definition is slightly different from the definition in Section 6.2, but in the semi-honest case it implies a simulator as defined in Section 6.2.
+---PAGE_BREAK---
+
+1. Sample input keys $(\text{stateKeys}^{t+1}, \text{dataKeys}^{t+1}, \text{digestKeys}^{t+1})$ for $C^{\text{step}}$.
+
+2. Parse MemAccess as $\{(R/W^\tau, L^\tau, \text{wData}^\tau) : \tau \in [t]\}$, where $(R/W^\tau, L^\tau, \text{wData}^\tau)$ is the partial output of the $\tau$-th CPU step circuit. Compute $(\text{rData}^\tau, D^\tau, \text{digest}^\tau)$ at the beginning of step $\tau$ for every $\tau \in [t+1]$.
+
+3. Compute ($stateLabels^{t+1}$, $dataLabels^{t+1}$, $digestLabels^{t+1}$):
+
+$$
+\begin{align*}
+stateLabels^{t+1} &\leftarrow stateKeys_y^{t+1} \\
+digestLabels^{t+1} &\leftarrow digestKeys_{digest^{t+1}}^{t+1} \\
+dataLabels^{t+1} &\leftarrow dataKeys_{rData^{t+1}}^{t+1}
+\end{align*}
+$$
+
+4. For $\tau = t$ down to 1, proceed as follows:
+
+**if** $R/W^\tau = \text{read}$ **then**
+ $e_{data} \leftarrow \ell OTSim (\text{crs}, D^\tau, L^\tau, \text{dataLabels}^{\tau+1})$.
+ $X \leftarrow (\text{stateLabels}^{\tau+1}, e_{data}, \text{digestLabels}^{\tau+1})$.
+**else**
+ $e_{digest} \leftarrow \ell OTSimWrite (\text{crs}, D^\tau, L^\tau, \text{wData}^\tau, \text{digestLabels}^{\tau+1})$.
+ $X \leftarrow (\text{stateLabels}^{\tau+1}, \text{dataLabels}^{\tau+1}, e_{digest}, \text{wData}^\tau)$.
+$$
+(\tilde{C}_\tau^{\text{step}}, \text{stateLabels}^\tau, \text{dataLabels}^\tau, \text{digestLabels}^\tau) \leftarrow \text{CircSim}(1^\lambda, C^{\text{step}}, (X, R/W^\tau, L^\tau)).
+$$
+
+5. $L \leftarrow \text{OT}_2(m_1, (\text{digestLabels}^1, \text{digestLabels}^1))$.
+
+6. Output $(L, \{\tilde{C}_{\tau}^{\text{step}}\}_{\tau=1}^t, \text{stateKeys}^{t+1})$.
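The backward pass of Step 4 can be sketched abstractly. In the toy version below, CircSim is replaced by a plain record of the output each simulated circuit is forced to produce; it only illustrates the key dependency, namely that the circuit simulated for step $\tau$ must output the input labels of the already-simulated step $\tau + 1$ (all names are illustrative):

```python
def sim_backward(mem_access, final_labels):
    # Walk tau = t down to 1: the circuit simulated for step tau is forced
    # to output the input labels of the (already simulated) step tau + 1.
    labels = final_labels            # labels for step t + 1
    circuits = {}
    for tau in range(len(mem_access), 0, -1):
        rw, L, wData = mem_access[tau - 1]
        # A read step carries a simulated e_data, a write step e_digest.
        kind = "e_data" if rw == "read" else "e_digest"
        circuits[tau] = {"rw": rw, "L": L, "kind": kind,
                         "forced_output": labels}
        labels = ("lbl", tau)        # fresh input labels, as CircSim returns
    return circuits
```

This is why the loop must run from $t$ down to 1: the forced output of step $\tau$ is not known until step $\tau + 1$ has been simulated.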
+
+We show that the above simulation is indistinguishable from the real execution through a sequence of hybrids where the first hybrid outputs the real execution and the last hybrid outputs the simulated one.
+
+• $H_{2i}$ for $i \in \{0, 1, \dots, t\}$: Notice that the output contains $t$ garbled step circuits $\{\tilde{C}_{\tau}^{\text{step}}\}_{\tau=1}^t$. In hybrid $H_{2i}$, the garbled step circuits from 1 to $i$ are simulated while the remaining step circuits ($i+1$ to $t$) are generated honestly. Given the program, all the intermediate outputs of every step circuit can be computed. Given the correct output of circuit $C_i^{\text{step}}$, the step circuits from 1 to $i$ can be simulated one by one from the $i$-th to the first, similarly to niscSim. More formally, it proceeds as follows.
+
+1. Execute $P^D(x)$ to obtain $(R/W^\tau, L^\tau, wData^\tau)$ for every $\tau \in [t]$ and $state^{t+1} = y$. Compute $(rData^\tau, D^\tau, digest^\tau)$ at the beginning of step $\tau$ for every $\tau \in [t+1]$.
+
+2. Generate the garbled circuits $\{\tilde{C}_{\tau}^{\text{step}}\}_{\tau=i+1}^t$ honestly (same as Step 1 in EncProg).
+
+3. Let $(stateKeys^{i+1}, dataKeys^{i+1}, digestKeys^{i+1})$ be the input keys of $\tilde{C}_{i+1}^{\text{step}}$.
+
+4. Compute $(stateLabels^{i+1}, dataLabels^{i+1}, digestLabels^{i+1})$:
+
+$$
+\begin{align*}
+stateLabels^{i+1} &\leftarrow stateKeys_{state^{i+1}}^{i+1} \\
+digestLabels^{i+1} &\leftarrow digestKeys_{digest^{i+1}}^{i+1} \\
+dataLabels^{i+1} &\leftarrow dataKeys_{rData^{i+1}}^{i+1}
+\end{align*}
+$$
+---PAGE_BREAK---
+
+5. For $\tau = i$ down to 1, proceed as in Step 4 of the simulator niscSim.
+
+6. $L \leftarrow OT_2(m_1, (\text{digestLabels}^1, \text{digestLabels}^1))$.
+
+7. Output $(L, \{\tilde{C}_{\tau}^{\text{step}}\}_{\tau=1}^t, \text{stateKeys}^{t+1})$.
+
+• $H_{2i+1}$ for $i \in \{0, \dots, t-1\}$: Hybrid $H_{2i+1}$ is identical to $H_{2i}$ except that $H_{2i+1}$ simulates $\tilde{C}_{i+1}^{\text{step}}$ based on the real output of $C_{i+1}^{\text{step}}$. In particular, $H_{2i+1}$ is the same as $H_{2i}$ except that Steps 2, 3, 4 proceed as follows:
+
+2. Generate the garbled circuits $\{\tilde{C}_{\tau}^{\text{step}}\}_{\tau=i+2}^t$ honestly (same as Step 1 in EncProg).
+
+3. Let $(\text{stateKeys}^{i+2}, \text{dataKeys}^{i+2}, \text{digestKeys}^{i+2})$ be the input keys of $\tilde{C}_{i+2}^{\text{step}}$.
+
+4. **if** $R/W^{i+1} = \text{read then}$
+
+$$
+\begin{align*}
+e_{\text{data}} &\leftarrow \text{Send}(crs, \text{digest}^{i+1}, L^{i+1}, \text{dataKeys}^{i+2}). \\
+X &\leftarrow (\text{stateKeys}_{\text{state}^{i+2}}^{i+2}, e_{\text{data}}, \text{digestKeys}_{\text{digest}}^{i+2}).
+\end{align*}
+$$
+
+**else**
+
+$$
+\begin{align*}
+e_{\text{digest}} &\leftarrow \text{SendWrite}(crs, \text{digest}^{i+1}, L^{i+1}, wData^{i+1}, \text{digestKeys}^{i+2}). \\
+X &\leftarrow (\text{stateKeys}_{\text{state}^{i+2}}^{i+2}, \text{dataKeys}_0^{i+2}, e_{\text{digest}}, wData^{i+1}).
+\end{align*}
+$$
+
+$$ (\tilde{C}_{i+1}^{\text{step}}, \text{stateLabels}^{i+1}, \text{dataLabels}^{i+1}, \text{digestLabels}^{i+1}) \leftarrow \text{CircSim}(1^{\lambda}, C^{\text{step}}, (X, R/W^{i+1}, L^{i+1})) $$
+
+It is easy to see that $H_0$ is the output of the real execution, and $H_{2t}$ is the simulated output. Now we prove that consecutive hybrids are computationally indistinguishable, i.e., that $H_{2i} \stackrel{c}{\approx} H_{2i+1} \stackrel{c}{\approx} H_{2(i+1)}$ for every $i \in \{0, \dots, t-1\}$. Since hybrid $H_{2i+1}$ simulates $\tilde{C}_{i+1}^{\text{step}}$ based on the real output of $C_{i+1}^{\text{step}}$, the output of $\tilde{C}_{i+1}^{\text{step}}$ is identical in hybrids $H_{2i}$ and $H_{2i+1}$. Thus, indistinguishability of $H_{2i}$ and $H_{2i+1}$ follows from garbled circuit security. Next, indistinguishability between $H_{2i+1}$ and $H_{2(i+1)}$ follows from the sender privacy property of the updatable laconic OT, since the laconic OT responses are simulated in $H_{2(i+1)}$. This concludes the proof.
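The hybrid bookkeeping can be made concrete. The sketch below simply enumerates the $2t$ adjacent pairs and the assumption each reduction relies on, matching the chain from the real execution ($H_0$) to the simulation ($H_{2t}$):

```python
def hybrid_chain(t):
    # H_0 is the real execution, H_{2t} the simulated one; each even-to-odd
    # step invokes garbled-circuit security, each odd-to-even step invokes
    # the sender privacy of the updatable laconic OT.
    steps = []
    for i in range(t):
        steps.append((f"H_{2*i}", f"H_{2*i+1}", "garbled-circuit security"))
        steps.append((f"H_{2*i+1}", f"H_{2*i+2}", "laconic-OT sender privacy"))
    return steps
```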
+
+## 6.6 Extension
+
+For simplicity of exposition, the protocol we described so far is for a single sender executing a single program with the receiver. It can be extended to the setting where a sender can execute a sequence of programs on a persistent database. Moreover, the message $m_1$ published by the receiver can be used by multiple senders, in which case the receiver maintains a different copy of the database for every sender.
+
+**Executing multiple programs on a persistent database.** After receiving the first message $m_1$ from the receiver, a sender can run multiple programs on a persistent database (with initial content $D$) by sending one message per program to the receiver. For security, we require that only the output and the memory access pattern of every program execution be revealed to the receiver. We also require that once $m_1$ is published, the computational cost of both the sender and the receiver for every program grow only with the running time of the RAM computation, independently of the size of $D$. The NISC-RAM scheme we constructed in Section 6.3 can be naturally extended to the multi-program setting. We explain the extension by describing the changes to the EncProg and Dec procedures for the second program. Encryption and evaluation of more programs follow analogously.
+---PAGE_BREAK---
+
+* EncProg: When encrypting the first program, the sender should store locally $digestKeys^* = digestKeys^{t+1}$. Then, when encrypting the second program, there are two changes in EncProg compared to encrypting the first program: (1) $digestKeys^*$ is used as the digest keys of the first step circuit, (2) L is not generated.
+
+* Dec: When evaluating the first program, the receiver should store locally $digestLabels^* = digestLabels^{t+1}$. Then, when evaluating the second program, the receiver should use $digestLabels^*$ as the digest labels for the first step circuit.
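The hand-off between consecutive programs can be pictured as follows. This sketch models the stored $digestKeys^*$/$digestLabels^*$ simply as the digest carried from one program to the next (a toy stand-in for the actual key material, not the construction itself):

```python
def run_programs(programs, D):
    # Each program returns (output, updated D, digest handed to the next
    # program); "carried" plays the role of digestKeys*/digestLabels*.
    carried, outputs = None, []
    for prog in programs:
        y, D, new_carried = prog(D)
        outputs.append((carried, y))   # first step circuit keyed with "carried"
        carried = new_carried
    return outputs, D

def flip_and_sum(D):
    # Example program: flip D[0], output the new Hamming weight.
    D = list(D)
    D[0] ^= 1
    return sum(D), D, ("digest", tuple(D))
```

The second program starts from the carried digest of the first, which is exactly why no per-program work proportional to $|D|$ is needed after $m_1$ is published.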
+
+**Multiple senders with a single receiver.** The above protocol also works for multiple parallel senders. That is, after the receiver publishes the first message $m_1$, every sender $S$ can send a message $m_S$ to the receiver enabling the execution of $P_S^D(x_S)$, where $D$ is the initial database of the receiver, and $(P_S, x_S)$ is the program of $S$. Security follows from the security of OT which supports multiple second OT messages with the same first OT message. Moreover, every sender can execute a sequence of programs on a persistent database. In this case, the receiver keeps a different copy of her initial database for every sender.
+
+# 7 Main Application: Multi-Hop Homomorphic Encryption for RAM Programs
+
+## 7.1 Our Model
+
+Consider a server $S$ and a collection of clients $Q_1, Q_2, \dots$ with private databases $D_1, D_2, \dots$, respectively. The clients ship their encrypted databases to $S$ to be computed on later, in multiple executions, in a persistent manner. At the beginning of any execution, the server $S$ encrypts his private input $x$ as $ct_0$, chooses a subset of clients $Q_{i_1}, \dots, Q_{i_n}$ and sends $ct_0$ to client $Q_{i_1}$. Next, for all $j \in [n]$, client $Q_{i_j}$ homomorphically evaluates an arbitrary program $P_j$ of his choice on $ct_{j-1}$ to obtain $ct_j$. Finally, client $Q_{i_n}$ sends $ct_n$ to the server $S$. The server decrypts this ciphertext using his secret encryption key as well as the encrypted databases sent earlier to learn $P_n^{D_{i_n}}(\dots P_1^{D_{i_1}}(x)\dots)$. During this execution, the databases get updated, and any future execution involving a client happens on his respective updated database.
+
+We require that the size of the ciphertext only grows with the cumulative running time of all programs in an execution and is independent of the size of the databases. For security, we require program and data privacy for all honest clients against an adversary that corrupts the server and any subset of the clients. Next, we describe the model formally.
+
+We say that an ordered sequence of RAM programs $P_1, \dots, P_n$ are *compatible* if the output length of $P_i$ is the same as the input length of $P_{i+1}$ for every $i \in [n-1]$. A multi-hop RAM homomorphic encryption scheme $\text{mhop-RAM} = (\text{Setup}, \text{KeyGen}, \text{InpEnc}, \text{EncData}, \text{Eval}, \text{Dec})$ has the following syntax. We define the algorithms w.r.t. clients $Q_1, \dots, Q_n$.
+
+* **Setup:** crs $\leftarrow \text{Setup}(1^\lambda)$.
+On input the security parameter $1^\lambda$, it outputs a common reference string.
+
+* **Key Generation:** (pk, sk) $\leftarrow \text{KeyGen}(1^\lambda)$.
+On input the security parameter $1^\lambda$, it outputs a public/secret key pair (pk, sk).
+---PAGE_BREAK---
+
+* **Database Encryption:** $(\tilde{D} = (\hat{D}, \text{digest})) \leftarrow \text{EncData}(crs, D)$.
+On input the common reference string crs and database $D \in \{0, 1\}^M$, it outputs an encrypted database $\tilde{D} = (\hat{D}, \text{digest})$, where digest is a short digest of the database.
+
+* **Input Encryption:** $(ct, x\_secret) \leftarrow \text{InpEnc}(x)$.
+On input $S$'s input $x$, it outputs a ciphertext $ct$ and secret state for $S$ denoted by $x\_secret$.
+
+* **Homomorphic Evaluation:** $ct' \leftarrow \text{Eval}(crs, i, \{\text{pk}_j\}_{j=i+1}^n, ct, \text{sk}, (P, t), \text{digest})$.
+It takes as input the crs, the client number $i$, the public keys of the clients later in the sequence, i.e., $Q_{i+1}, \dots, Q_n$, a ciphertext from the previous client, $Q_i$'s secret key $\text{sk}$, $Q_i$'s RAM program $P$ with maximum run-time $t$ and the digest `digest` of $Q_i$'s database $D$. It then outputs a new ciphertext $ct'$.
+
+* **Decryption:** $y = \text{Dec}^{\tilde{D}_1, \dots, \tilde{D}_n}(crs, x\_secret, ct)$.
+On input the crs, server's state $x\_secret$, the final ciphertext $ct$ from client $Q_n$, and RAM access to encrypted databases $\tilde{D}_1, \dots, \tilde{D}_n$, it outputs $y$. The procedure Dec is itself modeled as a RAM program that can read and write to arbitrary locations of its database initially containing $\tilde{D}_1, \dots, \tilde{D}_n$.
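The message flow implied by the syntax above, stripped of all cryptography, looks like the following. Encryption is the identity here; only the hop structure and the persistence of each client's database are modelled (so this is a data-flow sketch, not a secure scheme):

```python
def toy_mhop_round(x, clients):
    # InpEnc -> one Eval per client hop -> Dec. Each client applies its
    # program to the running "ciphertext" and may update its database.
    ct = x
    for prog, D in clients:
        ct, new_D = prog(ct, D)
        D[:] = new_D               # databases persist across executions
    return ct
```

For example, two clients holding lists as databases can each transform the running value and mutate their own database in place, exactly mirroring $ct_0 \to ct_1 \to ct_2$.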
+
+Next, we describe how these algorithms are used in a real execution.
+
+**Real Scenario.** In our multi-hop scheme for RAM programs, after the initialization phase that generates the common reference string crs, each client runs key generation to generate the public key and the secret key, followed by the database encryption. The encrypted database is sent to the server, and the client stores the digest of the database locally. After this initialization phase, the server $S$ can initiate various executions of RAM computations with different subsets of the clients. After each execution, the database of a client gets updated by the server during the decryption phase. It is ensured that the server also learns the updated digest of the database, which is communicated to the clients at the start of the next execution.
+
+At the onset of any execution, the server $S$ encrypts his input and sends the ciphertext $ct_0$ to the first client $Q_1$, maintaining $x\_secret$ to be used later. The client $Q_1$ generates the ciphertext $ct_1$ using his program $P_1$ and digest `digest_1` and sends it to $Q_2$. Similarly, when a client $Q_i$ receives $ct_{i-1}$ from $Q_{i-1}$, he uses program $P_i$ and digest `digest_i` to generate $ct_i$ and sends it to $Q_{i+1}$. This continues until finally $Q_n$ sends $ct_n$ back to the server $S$. Then, the server runs the decryption procedure on $ct_n$ using all the encrypted databases and secret state $x\_secret$ to obtain output $y$.
+
+For the case of multiple executions, each of the above procedures take the session identifier `sid` as additional input. We denote by $\tilde{D}_1^{(\text{sid})}, \dots, \tilde{D}_n^{(\text{sid})}$ the encrypted databases before the execution with session identifier `sid`. Initially $\tilde{D}_1^{(1)} = \tilde{D}_1, \dots, \tilde{D}_n^{(1)} = \tilde{D}_n$.
+
+We require the algorithms above to satisfy the correctness, sender-privacy, client-privacy and efficiency properties described below.
+
+**Correctness.** We require that in a sequence of executions, each output of homomorphic evaluation equals the output of the corresponding computation in the clear. We formalize this as follows: For every set of keys $\{(pk_i, sk_i)\}_{i=1}^n$ in the support of KeyGen, any collection of initial databases $D_1, \dots, D_n$, and any (unbounded) polynomial number $N$ of executions, the following holds:
+---PAGE_BREAK---
+
+For $\text{sid} \in [N]$, let $P_1^{(\text{sid})}, \dots, P_n^{(\text{sid})}$ be the sequence of programs, $x^{(\text{sid})}$ be the server's input and $D_i^{(\text{sid})}$ be the resulting database after executing session $\text{sid}-1$ in the clear; then
+
+$$ \mathrm{Pr} \left[ \mathrm{Dec}^{\tilde{D}_1^{(\text{sid})}, \dots, \tilde{D}_n^{(\text{sid})}} \left( \mathrm{crs}, \mathrm{x\_secret}^{(\text{sid})}, \mathrm{ct}_n^{(\text{sid})} \right) = \left(P_n^{(\text{sid})}\right)^{D_n^{(\text{sid})}} \left( \dots \left(P_1^{(\text{sid})}\right)^{D_1^{(\text{sid})}} \left(x^{(\text{sid})}\right) \dots \right) \right] = 1, $$
+
+where $\tilde{D}_i^{(\text{sid})}$ is the resulting garbled database after executing $\text{sid}-1$ homomorphic evaluations, $(\mathrm{ct}_0^{(\text{sid})}, \mathrm{x\_secret}^{(\text{sid})}) \leftarrow \mathrm{InpEnc}(x^{(\text{sid})})$ and $\mathrm{ct}_i^{(\text{sid})} \leftarrow \mathrm{Eval}(\mathrm{crs}, i, \{\mathrm{pk}_j\}_{j=i+1}^n, \mathrm{ct}_{i-1}^{(\text{sid})}, \mathrm{sk}_i, (P_i^{(\text{sid})}, t_i^{(\text{sid})}), \mathrm{digest}_i^{(\text{sid})})$.
+
+**Server Privacy (Semantic Security).** For server privacy, we require that for every pair of inputs $(x_0, x_1)$, with $(\mathrm{CT}_b, \mathrm{x\_secret}^b) \leftarrow \mathrm{InpEnc}(x_b)$ for $b \in \{0, 1\}$,
+
+$$ \mathrm{CT}_0 \stackrel{c}{\approx} \mathrm{CT}_1. $$
+
+**Client Privacy (Program Privacy) with Unprotected Memory Access (UMA).** We define client privacy against a semi-honest adversary that corrupts the server S as well as an arbitrary subset of clients $\mathcal{I} \subset [n]$. Intuitively, we require *program-privacy* for the honest clients such that the adversary cannot learn anything beyond the output of the honest client’s program on one input. We formalize this as follows:
+
+There exists a PPT simulator ihopSim such that the following holds. Let crs $\leftarrow$ Setup($1^\lambda$); for every set of keys $\{(\mathrm{pk}_i, \mathrm{sk}_i)\}_{i=1}^n$ in the support of KeyGen, and any collection of initial databases $D_1, \dots, D_n$, for any unbounded polynomial number $N$ of executions: For $\text{sid} \in [N]$, let $P_1^{(\text{sid})}, \dots, P_n^{(\text{sid})}$ be the sequence of programs and $x^{(\text{sid})}$ be the server's input; then
+
+$$ \left( \mathrm{crs}, (\tilde{D}_1, \dots, \tilde{D}_n), \left\{ \mathrm{ct}_0^{(\text{sid})}, \mathrm{ct}_1^{(\text{sid})}, \dots, \mathrm{ct}_n^{(\text{sid})} \right\}_{\text{sid} \in [N]} \right) \stackrel{c}{\approx} \\ \left( \mathrm{crs}, \mathrm{ihopSim} \left( \mathrm{crs}, \{(\mathrm{pk}_i, \mathrm{sk}_i)\}_{i \in [n]}, \left\{ \left( \{D_j, P_j^{(\text{sid})}\}_{j \in \mathcal{I}}, x^{(\text{sid})} \right), \left\{ D_j, \mathrm{MemAccess}_j^{(\text{sid})}, y_j^{(\text{sid})} \right\}_{j \in [n] \setminus \mathcal{I}} \right\}_{\text{sid} \in [N]} \right) \right) $$
+
+where $\tilde{D}_i, \mathrm{ct}_i^{(\text{sid})}$ correspond to outputs in the real execution given all the programs and databases, and $y_j^{(\text{sid})} = \left(P_j^{(\text{sid})}\right)^{D_j^{(\text{sid})}} \left( \dots \left(P_1^{(\text{sid})}\right)^{D_1^{(\text{sid})}} \left(x^{(\text{sid})}\right) \dots \right)$.
+
+**Remark 7.1.** We note that the above definition also captures the security against a semi-malicious adversary who may choose his randomness for KeyGen maliciously but behaves honestly in the protocol.
+
+**Client Privacy (Program Privacy) with Full Security.** In the case of full client privacy, the simulator ihopSim does not get the database and the access pattern for the honest clients. That is, the simulator ihopSim takes as input $\{(\mathrm{pk}_i, \mathrm{sk}_i)\}_{i \in [n]}$ and $\left\{ \left( \{D_j, P_j^{(\text{sid})}\}_{j \in \mathcal{I}}, x^{(\text{sid})} \right), \left\{ 1^{M_j}, 1^{t_j^{(\text{sid})}}, y_j^{(\text{sid})} \right\}_{j \in [n] \setminus \mathcal{I}} \right\}_{\text{sid} \in [N]}$,
+
+where $M_j$ is the size of $D_j$ and $t_j^{(\text{sid})}$ is the running time of $P_j^{(\text{sid})}$.
+---PAGE_BREAK---
+
+**Efficiency.** We require the following efficiency guarantees from mhop-RAM. Let $M_i = |D_i|$.
+
+* $|\tilde{D}_i| = M_i \cdot \text{poly}(\lambda, \log M_i)$ for all $i \in [n]$.
+
+* $|\text{ct}_0| = |x| \cdot \text{poly}(\lambda)$, where $\text{ct}_0$ is the output of $\text{InpEnc}(x)$.
+
+* $|\text{ct}_i| = \sum_{j=1}^i n \cdot t_j \cdot \text{poly}(\lambda, \log M_j, \log t_j)$ for all $i \in [n]$.
+
+**Extension.** This definition (and our construction) can be extended to the setting where not all clients necessarily join the homomorphic evaluation in each execution. We allow different sets of clients to participate in different executions. In particular, before the first execution, the initial database of every client is encrypted. Later, before each execution, a sequence of distinct clients $\langle i_1, \dots, i_m \rangle$ can be specified.
+
+The input encryption is the same as above, while the homomorphic evaluation is executed in the specified order (as specified by the server) as $\text{ct}_j \leftarrow \text{Eval}(\text{crs}, j, \{\text{pk}_{i_u}\}_{u=j+1}^m, \text{ct}_{j-1}, \text{sk}_{i_j}, (P_{i_j}, t_{i_j}), \text{digest}_{i_j})$ for every $j \in [m]$. The decryption is executed as $y = \text{Dec}^{\tilde{D}_{i_1}, \dots, \tilde{D}_{i_m}}(\text{crs}, \mathrm{x\_secret}, \text{ct}_m)$, where $\tilde{D}_{i_1}, \dots, \tilde{D}_{i_m}$ are the up-to-date garbled databases of clients $Q_{i_1}, \dots, Q_{i_m}$. The correctness and privacy properties extend naturally to this setting.
+
+## 7.2 Building Blocks Needed
+
+In this section we introduce building blocks needed in our construction. In addition to these building blocks, we will also need RAM computation model (see Section 6.1.1) and two-message oblivious transfer (see Section 6.1.2). We use $[n]$ to denote the set $\{1, \dots, n\}$.
+
+### 7.2.1 2-message Secure Function Evaluation based on Garbled Circuits
+
+A two-message secure function evaluation (SFE) protocol based on garbled circuits works as follows: Let $\mathcal{U}(\cdot, \cdot)$ be a particular “universal circuit evaluator” that takes as input the description of a circuit *C* and an argument *x*, and returns $\mathcal{U}(C, x)$. We write *C(x)* as a shorthand for $\mathcal{U}(C, x)$. Let Alice be the client with private input *x*, and let Bob be the server with private input a circuit *C*. The protocol is as follows:
+
+1. $(m_1, x_\text{secret}) \leftarrow \text{SFE}_1(x)$: Alice computes $(m_1, x_\text{secret}) \leftarrow \text{OT}_1(x)$ and sends $m_1$ to Bob.
+
+2. $m_2 \leftarrow \text{SFE}_2(C, m_1)$: Bob computes $\tilde{C} \leftarrow \text{GCircuit}(C, \{\text{key}_b^w\}_{w \in \text{inp}(C), b \in \{0,1\}})$ and
+ $\text{L} \leftarrow \text{OT}_2(m_1, \{\text{key}_b^w\}_{w \in \text{inp}(C), b \in \{0,1\}})$. He sends $m_2 := (\tilde{C}, \text{L})$ to Alice.
+
+3. $y = \text{SFE}_{\text{out}}(x_\text{secret}, m_2)$: Alice locally computes the output: $\{\text{key}_{x_w}^w\}_{w \in \text{inp}(C)} = \text{OT}_3(\text{L}, x_\text{secret})$, and $y = \text{Eval}(\tilde{C}, \{\text{key}_{x_w}^w\}_{w \in \text{inp}(C)})$.
+
+The correctness of the above protocol follows from the correctness of Yao garbled circuits. It can be shown that the above protocol is a secure function evaluation protocol satisfying both semi-honest client privacy and semi-honest server privacy.
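A toy run of the three moves above can be written down directly, with the garbled circuit modelled as a lookup table keyed by wire keys and OT collapsed to handing Alice the keys selected by her input bits. No hiding is achieved; only the data flow of $\text{SFE}_1 / \text{SFE}_2 / \text{SFE}_\text{out}$ is shown:

```python
import itertools
import secrets

def sfe_demo(circuit, x_bits):
    n = len(x_bits)
    # Bob's per-wire key pairs {key_b^w}: one random key per (wire, bit).
    keys = [(secrets.token_hex(4), secrets.token_hex(4)) for _ in range(n)]
    # "Garbling": map every consistent key tuple to the circuit output
    # (exponential in n, so only sensible for tiny toy circuits).
    table = {tuple(keys[w][b] for w, b in enumerate(bits)): circuit(bits)
             for bits in itertools.product((0, 1), repeat=n)}
    # OT_1/OT_2/OT_3 collapse to: Alice obtains the keys for her bits.
    chosen = tuple(keys[w][b] for w, b in enumerate(x_bits))
    return table[chosen]              # Eval on the chosen input keys
```

The point of the real construction is that Alice learns only the keys for her own bits, so the table reveals nothing about the other rows; the toy keeps the key-selection structure but not the security.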
+
+### 7.2.2 Re-Randomizable Secure Function Evaluation based on Garbled Circuits
+
+[GHV10] defined the tool of “re-randomizable secure function evaluation” that was used to realize multi-hop homomorphic computation for circuits. This tool was constructed under the DDH assumption by instantiating Yao’s garbled circuits with a special encryption scheme (BHHO [BHHO08]) and using re-randomizable two-message oblivious transfer [NP01].
+---PAGE_BREAK---
+
+**Definition 7.2.** A secure function evaluation protocol is said to be re-randomizable if there exists an efficient procedure Re-rand such that for every input $x$ and circuit $C$, and every $(m_1, x\_secret) \in SFE_1(x)$ and $m_2 \in SFE_2(C, m_1)$, the distributions $\{x, C, m_1, x\_secret, m_2, \text{Re-rand}(m_1, m_2)\}$ and $\{x, C, m_1, x\_secret, m_2, SFE_2(C, m_1)\}$ are computationally indistinguishable.
+
+[GHV10] proved the following:
+
+**Theorem 7.3 ([GHV10]).** Under the DDH assumption, there exists a re-randomizable secure function evaluation protocol satisfying Definition 7.2.
+
+Below, we abstract out the scheme of [GHV10] by stating some of the procedures implicitly provided by [GHV10] that will be needed for this paper.
+
+**Definition 7.4 (Re-randomizable Yao garbled circuits).** The scheme in [GHV10] provides the following algorithms (implicitly) for their re-randomizable Yao scheme.
+
+1. **Keys** = SampleKeys($1^\lambda$, W; r): Takes as input a set of input wires W as well as randomness $r$ and outputs the input keys for the set of wires W for re-randomizable Yao. Note that it is a deterministic function given the randomness $r$. When clear from context, we will skip mentioning the inputs in the calls to this function.
+
+2. $\tilde{C} \leftarrow \text{ReGCircuit}(C, \text{InpKeys})$: Takes as input a circuit $C$ and $\text{InpKeys}$ for the input wires of $C$ and outputs a re-randomizable garbled circuit $\tilde{C}$ where input wires have keys as $\text{InpKeys}$.
+
+3. $\tilde{C}' \leftarrow \text{ReGCircuit}'(C, \text{InpKeys}, \text{OutKeys})$: Takes as input a circuit $C$, $\text{InpKeys}$ for input wires of $C$ and $\text{OutKeys}$ for output wires of $C$, and outputs a re-randomizable garbled circuit $\tilde{C}$ where input wires have keys as $\text{InpKeys}$ and output wires have keys as $\text{OutKeys}$.
+
+4. **Keys**$^\dagger$ = Transform(Keys, r): Takes as input Keys and randomness $r$ and outputs randomized keys Keys$^\dagger$. Also, we use Transform(Keys, {$r_1, \dots, r_k$}) to denote
+
+$$ \text{Transform}(\text{Transform}(\cdots \text{Transform}(\text{Keys}, r_1) \cdots, r_{k-1}), r_k) $$
+
+5. $(\tilde{C}', L') \leftarrow \text{Re-rand}((\tilde{C}, L), \{r_w\}_{w \in \text{Wires}(C)})$: Takes as input a re-randomizable garbled circuit $\tilde{C}$ and OT second messages $L$ for the keys of input wires of $C$ and randomness to re-randomize each wire of $C$ and outputs a new functionally equivalent re-randomizable garbled circuit $\tilde{C}'$ and consistent OT second messages $L'$. This procedure satisfies the property of re-randomizable SFE. Moreover, the guarantee is that after randomization, for any wire $w$, the new keys for $w$ in $\tilde{C}'$ are Transform(key$^w$, $r_w$). Finally, re-randomization of OT messages only requires¹¹ {$r_w$}$_{w \in \text{inp}(C)}$.
+
+For our multi-hop homomorphic scheme for RAM it will be useful to define SampleKeys as follows: $\text{SampleKeys}(1^\lambda, W; r) = \text{Transform}(\text{SampleKeys}(1^\lambda, W; 0^*), r)$.
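The nested-Transform notation can be exercised with a toy Transform. Here keys are integers and Transform XORs in a PRG mask derived from $r$; this is only a stand-in (the real BHHO-based Transform of [GHV10] is not this simple, and in particular need not be an involution, unlike the XOR toy):

```python
import hashlib

def prg(r):
    # Toy PRG: 64 pseudorandom bits derived from the randomness r.
    return int.from_bytes(hashlib.sha256(r).digest()[:8], "big")

def transform(key, r):
    # Toy key re-randomization: XOR with a PRG mask.
    return key ^ prg(r)

def transform_many(key, rs):
    # Transform(Keys, {r_1, ..., r_k}) = the k-fold nested application.
    for r in rs:
        key = transform(key, r)
    return key
```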
+
+¹¹In fact, each OT message for keys of a wire can be randomized consistently just given the randomness used for that wire.
+---PAGE_BREAK---
+
+## 7.3 Our Construction of Multi-hop RAM Scheme
+
+Below, in Section 7.3.1 we provide our construction for multi-hop scheme for RAM satisfying UMA-security for clients (see Section 7.1). We give a general transformation for UMA to full security in Section 7.3.5.
+
+### 7.3.1 UMA Secure Construction
+
+In this section, we first describe the UMA-secure scheme for one execution and then explain how this scheme can be naturally extended to multiple executions in Section 7.3.3. Also, as we shall see, our scheme can easily be extended to the setting where different subsets of parties participate in each session.
+
+**UMA-secure multi-hop RAM scheme for a single execution involving all parties.**
+
+Let $Q_1, \dots, Q_n$ be the clients holding databases $D_1, \dots, D_n$, respectively, and $S$ be the server. Let the server's private input be $x$ and secret programs of clients be $P_1, \dots, P_n$, respectively. Let $\ell OT = (\text{crsGen}, \text{Commit}, \text{Send}, \text{Receive}, \text{SendWrite}, \text{ReceiveWrite})$ be an updatable laconic OT scheme with sender privacy as defined in Definition 3.2. Let $\text{Re-GC} = (\text{SampleKeys}, \text{ReGCircuit}, \text{ReGCircuit}', \text{Transform}, \text{Re-rand})$ be a re-randomizable scheme for Yao's garbled circuits given by [GHV10] (see Definition 7.4). Let $\text{OT} = (\text{OT}_1, \text{OT}_2, \text{OT}_3)$ be a two-message oblivious transfer protocol as defined in Section 6.1.2.
+
+The multi-hop RAM scheme $\text{mhop-RAM} = (\text{Setup}, \text{KeyGen}, \text{EncData}, \text{InpEnc}, \text{Eval}, \text{Dec})$ is as follows: The algorithms Setup, KeyGen, EncData are formally described in Figure 14.
+
+**Setup:** $\text{crs} \leftarrow \text{Setup}(1^\lambda)$
+Setup algorithm generates the common reference string for laconic OT.
+
+**Key Generation:** $(pk, sk) \leftarrow \text{KeyGen}(1^\lambda)$
+Each client runs this algorithm once to generate the secret key sk and public key pk. A client Q picks a PRF seed s as the secret key. Next, it generates the public key as the first message of OT for s, and the secret key as the secret state for OT together with the PRF key.
+
+Looking ahead, the client Q will use the PRF key s to garble his own P and to randomize the garbled program generated by all previous clients in any execution.
+
+**Database Encryption:** $\tilde{D} \leftarrow \text{EncData}(\text{crs}, D)$
+Each client runs this algorithm at the beginning to garble the database and sends the garbled database to the server S. The garbled database is generated by executing the **Hash** procedure of laconic OT. This outputs an encoded database $\tilde{D}$ and a digest **digest**, both of which are given to the server S.
+
+**Input Encryption:** $(ct, x\_secret) \leftarrow \text{InpEnc}(x)$
+In each execution, the server S encrypts its input x as follows: It computes the first message of OT as the ciphertext and stores the secret state of OT to be used for decryption of the computation later. The ciphertext is sent to the first client, w.l.o.g. $Q_1$. The algorithm InpEnc is described formally in Figure 15.
+---PAGE_BREAK---
+
+**Set up.** crs $\leftarrow$ Setup$(1^{\lambda})$.
+
+1. Sample crs $\leftarrow$ crsGen$(1^{\lambda})$.
+
+2. Output crs.
+
+**Key Generation.** (pk, sk) $\leftarrow$ KeyGen$(1^{\lambda})$.
+
+1. Sample a PRF key $s \leftarrow \{0, 1\}^{\lambda}$ and generate $(pk, s\_secret) \leftarrow OT_1(s)$.
+
+2. Output $(pk, sk := (s, s\_secret))$.
+
+**Database Encryption.** $\tilde{D} \leftarrow EncData(crs, D)$.
+
+1. $(\text{digest}, \hat{D}) \leftarrow \text{Hash}(\text{crs}, D)$.
+
+2. Output $\tilde{D} = (digest, \hat{D})$.
+
+Figure 14: Formal description of the set up, key generation and database encryption algorithms.
+
+**Input Encryption.** $(ct, x\_secret) \leftarrow InpEnc(crs, x)$.
+
+1. $(ct, x\_secret) \leftarrow OT_1(x)$.
+
+Figure 15: Input encryption algorithm.
+
+**Homomorphic Evaluation:** $ct' \leftarrow Eval(\text{crs}, i, \{\text{pk}_j\}_{j=i+1}^n, ct, sk = (s, s\_secret), (P, t), \text{digest})$. This algorithm is executed by client $Q_i$ to generate the next ciphertext $ct'$ given ct from client $Q_{i-1}$, and is described formally in Figure 16. This is the most involved procedure in our construction, and hence, we first provide an informal description. At a very high level, as illustrated in Figure 17, the client $Q_i$ generates the garbled program for $P$, consisting of $t$ garbled circuits, and also re-randomizes all the circuits in ct. As mentioned before, this re-randomization step is crucial to get program privacy for this client. Moreover, the re-randomization has to be done carefully so that the previous ct is consistent with the new garbled program.$^{12}$
+
+This procedure consists of four main steps: Let $T$ be the number of step-circuits in ct.
+
+1. Garble the new program $P$: For each $\tau \in [T+1, T+t]$, the client does the following: It generates a “super-circuit”, illustrated in Figure 18, consisting of a CPU step circuit $C_{\tau}^{step}$ (see Figure 19) and PRF circuits $C_{\tau,i+1}^{\text{PRF}}, \dots, C_{\tau,n}^{\text{PRF}}$ (see Figure 20). A step circuit encodes the logic of one CPU step of the program $P$, and the PRF circuits provide a part of the randomness used in re-randomization. We will elaborate on the functionality of the PRF circuits later.
+
+The garbled program will consist of garbled circuits corresponding to all the step circuits and PRF circuits. The first step is to pick the keys for the input wires of all of these circuits. Next, we begin by describing the step circuits.
+
+**Step Circuits $C_{\tau}^{step}$ (Figure 19)**: The inputs of a step circuit (see Figure 18) can be partitioned into $((state, rData, digest), Rd)$, where state is the current CPU state, rData is the bit read from the database, and digest is the up-to-date digest of the database. Rd corresponds to the randomness given as input to the step circuit, computed from the PRF circuits. A step
+
+$^{12}$We do this by keeping track of the randomness used in randomizing the input wires for each garbled circuit.
+---PAGE_BREAK---
+
+Homomorphic Evaluation.
+
+$$ \text{ct}' \leftarrow \text{Eval} \left( \text{crs}, i, \{\text{pk}_j\}_{j=i+1}^n, \text{ct} = \left( L_0, \{\tilde{\mathcal{C}}_{\tau}^{\text{step}}, \{\tilde{\mathcal{C}}_{\tau,j}^{\text{PRF}}, L_{\tau,j}\}_{j=i}^n\}_{\tau \in [T]} \right), \text{sk} = (s, s_\text{secret}), (P, t), \text{digest} \right). $$
+
+1. **Generate the “new” garbled program for P:** Generate garbled circuits $\{\tilde{\mathcal{C}}_{\tau}^{\text{step}}, \{\tilde{\mathcal{C}}_{\tau,j}^{\text{PRF}}\}_{j=i+1}^n\}_{\tau=T+1}^{T+t}$.
+
+(a) Set stateKeys$^\tau$, dataKeys$^\tau$, digestKeys$^\tau$, RdKeys$^{\tau,j}$, PKeys$^{\tau,j}$ for each $\tau \in [T+1, T+t]$ and $j \in [i+1, n]$ as SampleKeys($F_s(\text{GC}_\star || \tau)$) where $F_s(\text{GC}_\star || \tau)$ is the randomness used and $\star \in \{\text{STATE, DATA, DIGEST, RD, P}\}$, respectively.
+Set stateKeys$^\tau$, dataKeys$^\tau$, digestKeys$^\tau$ to SampleKeys$(0^*)$ for $\tau = T + t + 1$.
+
+(b) **Garble Cstep circuits:** For each $\tau \in [T+1, T+t]$,
+$$ \tilde{\mathcal{C}}_{\tau}^{\text{step}} \leftarrow \text{ReGCircuit} (\mathcal{C}^{\text{step}}[i, \text{crs}, P, \text{Keys}^{\tau+1}, F_s(\text{PSI}||\tau)], (\text{Keys}^{\tau}, \{\text{RdKeys}^{\tau,j}\}_{j=i+1}^n)), $$
+where $\text{Keys}^{\tau} = (\text{stateKeys}^{\tau}, \text{dataKeys}^{\tau}, \text{digestKeys}^{\tau})$.
+Embed labels $\text{dataKeys}_0^{T+1}$ and $\text{digestKeys}_{\text{digest}}^{T+1}$ in $\tilde{\mathcal{C}}_{T+1}^{\text{step}}$.
+
+(c) **Garble CPRF circuits:** For each $\tau \in [T+1, T+t]$ and $j \in [i+1, n]$, compute
+$$ \tilde{\mathcal{C}}_{\tau,j}^{\text{PRF}} \leftarrow \text{ReGCircuit}' (\mathcal{C}^{\text{PRF}}[\tau], \text{PKeys}^{\tau,j}, \text{RdKeys}^{\tau,j}). $$
+
+2. **Generate the OT second messages for newly generated circuits:** For all $\tau \in [T+1, T+t]$ and $j \in [i+1, n]$ compute $L_{\tau,j} \leftarrow \text{OT}_2(\text{pk}_j, \text{PKeys}^{\tau,j})$.
+
+3. **Obtain partial labels for previous circuits:**
+
+(a) For every $\tau \in [T]$, compute $M_{\tau,i} = \text{OT}_3(L_{\tau,i}, s_\text{secret})$, evaluate $\tilde{\mathcal{C}}_{\tau,i}^{\text{PRF}}$ using input labels $M_{\tau,i}$, and embed the resulting output labels in $\tilde{\mathcal{C}}_{\tau}^{\text{step}}$.
+
+(b) If $i=1$, then $L_0 \leftarrow \text{OT}_2(L_0, \text{stateKeys}^1)$.
+
+4. **Re-randomize previous garbled circuits:** If $i > 1$, do the following:
+
+(a) For each $\tau \in [T]$, re-randomize the circuit $\tilde{\mathcal{C}}_{\tau}^{\text{step}}$ using Re-rand($\cdot$) (see Definition 7.4) such that the input wire keys are randomized using $F_s(\text{GC}_\star || \tau)$, where $\star \in \{\text{STATE, DATA, DIGEST, RD}\}$ for different input wires appropriately.
+
+(b) For each $\tau \in [T]$, re-randomize the circuits $\{\tilde{\mathcal{C}}_{\tau,j}^{\text{PRF}}\}_{j \in [i+1,n]}$ and $\{L_{\tau,j}\}_{j \in [i+1,n]}$ using Re-rand($\cdot$) such that the input wires are randomized using $F_s(\text{GC\_P} || \tau)$ and the output wires are randomized using $F_s(\text{GC\_RD} || \tau)$.
+
+(c) Re-randomize $L_0$ using $F_s(\text{GC\_STATE} || 1)$.
+
+5. Output $\text{ct}' = \left( L_0, \{\tilde{\mathcal{C}}_{\tau}^{\text{step}}, \{\tilde{\mathcal{C}}_{\tau,j}^{\text{PRF}}, L_{\tau,j}\}_{j=i+1}^n\}_{\tau=1}^{T+t} \right)$.
+---PAGE_BREAK---
+
+Figure 17: Homomorphic Evaluation by $Q_i$: $Q_i$ contributes new circuits (denoted in white in the lower layer) and processes the input circuits as follows: (i) computes the yellow circuits, and (ii) re-randomizes all input circuits. The re-randomized circuits are shown in gray color.
+
+circuit executes one CPU step and passes on the updated state, the new bit read, and the new digest to the next step circuit. Note that we do not achieve this by passing the output wires of circuit $\tau$ into the input wires of circuit $\tau + 1$; that is, the output wires of $\tau$ will not have the same keys as the input wires of $\tau + 1$ (note that two consecutive step circuits are not connected by solid lines in Figure 17). Hence, the step circuit $\tau$ has the keys of the next circuit hard-coded inside it.
+
+Next, we explain the logic of a step circuit. First, it computes the new tuple (state', R/W, L, wData). Next, it computes the transformed keys **nextKeys**† of the next step circuit using the hard-coded keys and the input randomness (this uses the transform functionality of re-randomizable Yao from Section 7.2.2). Then, in the case of a "read", it outputs **stateKeys**† corresponding to the new state', and labels for data via the laconic OT procedure **Send**(·) for location L, where the sender's inputs are **dataKeys**0†, **dataKeys**1† and **digestKeys**† corresponding to **digest**. The case of a write is similar, but now the labels of the new updated digest are transferred via the laconic OT procedure **SendWrite**(·). Note that it follows from the correctness of reads and writes of the laconic OT that the evaluator is able to recover the correct labels for the read data and the new digest.
+
+The down-bend in output and input wires of step-circuits for data and digest in Figure 17 represents that these keys are not output in the clear, but are output using laconic OT.
+---PAGE_BREAK---
+
+Figure 18: One step circuit along with the attached PRF circuits.
+
+Correct labels will be learnt during execution using the encoded database $\tilde{D}_i$ and laconic OT procedures.
+
+**PRF circuits $C_{\tau,j}^{PRF}$ (Figure 20)**: This circuit takes as input the PRF key $s_j$ of client $Q_j$ and outputs the PRF values corresponding to time-step $\tau$. The use of these circuits will become clear when we describe the re-randomization step below.
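The values a PRF circuit outputs, $F_s(\text{GC\_STATE}||\tau+1)$, $F_s(\text{GC\_DATA}||\tau+1)$, $F_s(\text{GC\_DIGEST}||\tau+1)$ and $F_s(\text{LACONIC\_OT}||\tau)$, can be realized by any PRF with domain separation. A minimal sketch using HMAC-SHA256 as the PRF; the byte encoding of the label and $\tau$ is our assumption, not the paper's:

```python
# Domain-separated PRF evaluation, as a PRF circuit C^PRF[tau] computes it
# for key s. HMAC-SHA256 and the encoding are illustrative choices only.
import hmac, hashlib

def F(s: bytes, label: str, tau: int) -> bytes:
    msg = label.encode() + b"||" + tau.to_bytes(8, "big")
    return hmac.new(s, msg, hashlib.sha256).digest()

def prf_circuit(s: bytes, tau: int):
    """Output of C^PRF[tau]: randomness for the three key groups of step
    tau+1 plus the laconic-OT coins for step tau (cf. Figure 20)."""
    gc = {star: F(s, "GC_" + star, tau + 1)
          for star in ("STATE", "DATA", "DIGEST")}
    return gc, F(s, "LACONIC_OT", tau)
```

Distinct labels and step indices give independent-looking outputs, which is exactly what allows each step circuit's randomness to be derived on the fly instead of being shipped explicitly.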
+
+All these circuits are garbled such that the keys for the output wires of the PRF circuits are the same as the keys for the Rd input wires of the step circuits. In Figure 17, this is depicted by joining the output wires of the PRF circuits to the Rd input wires of the step circuit with a solid line. The garbled program consists of the garbled step circuits and garbled PRF circuits $\{\tilde{C}_{\tau}^{\text{step}}, \{\tilde{C}_{\tau,j}^{\text{PRF}}\}_{j \in \{i+1, \dots, n\}}\}_{\tau \in [T+1, T+t]}$. The client also embeds labels for rData = 0 and digest$_i$ in the first step circuit.
+
+2. Generate OT messages for $C_{\tau,j}^{PRF}$: Recall that this circuit takes as input a PRF key $s_j$ of client $Q_j$ whose OT first message is present in $pk_j$. Client $Q_i$ generates the OT second message $L_{\tau,j}$ for the input keys of $\tilde{C}_{\tau,j}^{PRF}$.
+
+3. Evaluating the PRF circuits for itself: Note that the ciphertext ct consists of a sequence of step circuits and, for each step circuit, PRF circuits corresponding to $j \in \{i, \dots, n\}$. See Figure 17, where the PRF circuits for client $Q_i$ are depicted in yellow. $Q_i$ computes the input labels for $\tilde{C}_{\tau,i}^{PRF}$ using the OT message $L_{\tau,i}$ and embeds the output labels into $\tilde{C}_{\tau}^{\text{step}}$ for
+---PAGE_BREAK---
+
+**Hard-coded parameters:** $[i, crs, P, nextKeys = (\text{stateKeys}, \text{dataKeys}, \text{digestKeys}), \psi]$.
+**Input:** $((\text{state}, \text{rData}, \text{digest}), (\{\omega_j, \phi_j\}_{j>i}))$.
+$(\text{state}', \text{R/W}, L, wData) := C_{\text{CPU}}^P(\text{state}, \text{rData})$.
+$nextKeys^\dagger := \text{Transform}(\text{nextKeys}, \{\omega_j\}_{j>i})$.
+Parse $nextKeys^\dagger$ as $(\text{stateKeys}^\dagger, \text{dataKeys}^\dagger, \text{digestKeys}^\dagger)$.
+
+if R/W = read then
+ $e_{\text{data}} \leftarrow \text{Send}(\text{crs}, \text{digest}, L, \text{dataKeys}^\dagger; \psi \oplus \bigoplus_{j>i} \phi_j)$.
+ return $((\text{stateKeys}_{\text{state}'}^\dagger, e_{\text{data}}, \text{digestKeys}_{\text{digest}}^\dagger), \text{R/W}, L)$.
+else
+ $e_{\text{digest}} \leftarrow \text{SendWrite}(\text{crs}, \text{digest}, L, wData, \text{digestKeys}^\dagger; \psi \oplus \bigoplus_{j>i} \phi_j)$.
+ return $((\text{stateKeys}_{\text{state}'}^\dagger, \text{dataKeys}_0^\dagger, e_{\text{digest}}, wData), \text{R/W}, L)$.
+
+Figure 19: Pseudocode of the step circuit $C^{\text{step}}[i, crs, P, nextKeys, \psi]$.
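The plaintext behaviour that $C^{\text{step}}$ garbles, the CPU-step map $(\text{state}, \text{rData}) \mapsto (\text{state}', \text{R/W}, L, \text{wData})$, can be sketched for a concrete toy program. The program below (summing the bits of the database), the halt signal, and all helper names are our illustrations, not part of the paper's construction:

```python
# A toy plaintext step function matching the interface of the C_CPU^P
# component of Figure 19: (state, rData) -> (state', R/W, L, wData).

def sum_program_step(state, rData, n=8):
    """state = (pc, acc); read locations 0..n-1, then halt with the sum."""
    pc, acc = state
    if pc > 0:                # pc = 0 issues the first read; later steps consume rData
        acc += rData
    if pc < n:
        return (pc + 1, acc), "read", pc, None   # read location pc next
    return (pc, acc), "halt", None, None

def run(step, state, D):
    """Plaintext RAM evaluation loop. The garbled evaluation of Figure 21
    follows the same data flow, with wire labels in place of values."""
    rData = 0                 # mirrors the embedded label for rData = 0
    while True:
        state, rw, L, wData = step(state, rData)
        if rw == "halt":
            return state[1]
        rData = D[L]
```

The initial `rData = 0` mirrors the construction's convention of embedding the label for rData = 0 in the first step circuit.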
+
+**Hard-coded parameters:** $[\tau]$.
+**Input:** $s$.
+**Output:** $\left( \left\{ F_s(\text{GC}_{-*} || \tau+1) \right\}_{* \in \{\text{STATE},\text{DATA},\text{DIGEST}\}},\; F_s(\text{LACONIC\_OT}||\tau) \right)$.
+
+Figure 20: PRF circuit $C^{\text{PRF}}[\tau]$.
+
+all $\tau \in [T]$. In other words, $Q_i$ consumes the first PRF circuit from each step of the previous ciphertext ct.
+
+4. Re-randomize the previous circuits: After consuming the first PRF circuit from each step, $Q_i$ re-randomizes all the remaining circuits using appropriate randomness. Note that the input keys of $\tilde{C}_{\tau+1}^{\text{step}}$ are randomized using exactly the randomness that was fed into $C_{\tau}^{\text{step}}$ via the PRF circuit for $Q_i$. This ensures that the hard-coded input keys of step $\tau + 1$ are randomized consistently with how $Q_i$ randomizes the circuit $\tilde{C}_{\tau+1}^{\text{step}}$.
+
+Hence, to conclude, the PRF circuits are present to provide the randomness needed to randomize the hard-coded keys inside the step circuits¹³.
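The consistency requirement above (the hard-coded keys transformed inside step $\tau$ must equal the externally re-randomized keys of step $\tau+1$, cf. Claim 7.8) can be illustrated with a toy model in which re-randomization XORs both labels of a wire with a per-client mask. This XOR model is our assumption about the re-randomizable garbling of Section 7.2.2, used only to show why the sequential and the one-shot views agree:

```python
# Toy model of key re-randomization: a wire's key pair is masked by XOR.
# Applying clients' masks one at a time (outer re-randomization) equals
# applying their combination in one shot (Transform inside C^step).
import os
from functools import reduce

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def transform(keys, masks):
    """Apply the masks of several clients to both labels of each wire."""
    m = reduce(xor, masks)
    return [(xor(k0, m), xor(k1, m)) for k0, k1 in keys]

def rerandomize_sequentially(keys, masks):
    """Clients i+1..n re-randomize the next garbled circuit one at a time."""
    for m in masks:
        keys = transform(keys, [m])
    return keys
```

Because the two views coincide, the keys hard-coded inside $\tilde{C}_{\tau}^{\text{step}}$ and transformed at evaluation time match the input keys of the re-randomized $\tilde{C}_{\tau+1}^{\text{step}}$.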
+
+**Homomorphic Decryption:** $y = \text{Dec}^{\tilde{D}_1, \dots, \tilde{D}_n} (\text{crs}, x\_secret, ct = (L_0, \{\tilde{C}_{\tau}^{\text{step}}\}_{\tau \in [T]})).$
+
+The algorithm is described formally in Figure 21. It takes as input the secret state of the server $x\_secret$ and the final ciphertext $ct_n$, consisting of the OT message for $x$ and a sequence of $T$ step circuits, where $T = \sum_{i \in [n]} t_i$. Note that all the PRF circuits have already been evaluated by the corresponding clients, and the correct labels for RdKeys have been embedded into the step circuits. The server does the following for decryption:
+
+1. It obtains the **stateLabels** for the first circuit by running $\text{OT}_3$. Note that the first circuit of the program of any client has labels for data and digest already embedded. Hence, the server now knows all the labels for the first circuit.
+
+¹³Note that randomization of garbled circuits preserves the functionality. Since the keys for the next circuit are transferred using the laconic OT, we need to feed in correct keys into the Send functions of laconic OT.
+---PAGE_BREAK---
+
+Homomorphic Decryption.
+
+$$
+y = \text{Dec}^{\tilde{D}_1, \dots, \tilde{D}_n} \left( \text{crs}, \text{x\_secret}, \text{ct} = \left( L_0, \left\{ \tilde{C}_{\tau}^{\text{step}} \right\}_{\tau \in [T]} \right) \right), \text{ where } T = \sum_{i \in [n]} t_i.
+$$
+
+1. For all $i \in [n]$, parse $\tilde{D}_i = (\text{digest}_i, \hat{\text{D}}_i)$.
+
+2. Compute $M_0 = \text{OT}_3(L_0, x_\text{secret})$.
+
+3. Parse $\tilde{C}_1^{\text{step}} = (\tilde{C}_1^{\text{step}}, \text{dataLabels}, \text{digestLabels})$.
+
+4. Define $\text{Labels}^1 = (\text{stateLabels}^1 = M_0, \text{dataLabels}^1 = \text{dataLabels}, \text{digestLabels}^1 = \text{digestLabels})$.
+
+5. For $\tau = 1$ to $T$ do the following:
+
+Define $i$ s.t. $\tau \in [\sum_{j \in [i-1]} t_j + 1, \sum_{j \in [i]} t_j]$.
+$(X, R/W, L) := \text{ReEval}(\tilde{C}_{\tau}^{\text{step}}, \text{Labels}^{\tau})$.
+
+if R/W = read then
+ Parse $X = (\text{stateLabels}^{\tau+1}, e_{\text{data}}, \text{digestLabels}^{\tau+1})$.
+ $\text{dataLabels}^{\tau+1} := \text{Receive}^{\hat{D}_i}(\text{crs}, e_{\text{data}}, L)$.
+else
+ Parse $X = (\text{stateLabels}^{\tau+1}, \text{dataLabels}^{\tau+1}, e_{\text{digest}}, \text{wData})$.
+ $\text{digestLabels}^{\tau+1} := \text{ReceiveWrite}^{\hat{D}_i}(\text{crs}, L, \text{wData}, e_{\text{digest}})$.
+
+if $\tau = \sum_{j \in [i]} t_j$ and $\tau < T$ then
+ Parse $\tilde{C}_{\tau+1}^{\text{step}} = (\tilde{C}_{\tau+1}^{\text{step}}, \text{dataLabels}, \text{digestLabels})$
+ Set $\text{dataLabels}^{\tau+1} = \text{dataLabels}$ and $\text{digestLabels}^{\tau+1} = \text{digestLabels}$.
+$\quad$ $\text{Labels}^{\tau+1} := (\text{stateLabels}^{\tau+1}, \text{dataLabels}^{\tau+1}, \text{digestLabels}^{\tau+1})$.
+
+6. Decode output $y$ using $(\text{stateLabels}^{T+1}, \text{SampleKeys}(0^*))$.
+
+Figure 21: Decryption algorithm for multi-hop scheme for RAM.
+
+2. For $\tau \in [T]$, the server executes the circuit $\tilde{C}_{\tau}^{\text{step}}$ and learns the labels for the next circuit by running the receiver algorithms of the laconic OT.
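The index bookkeeping in step 5 of Figure 21 ("define $i$ s.t. $\tau \in [\sum_{j \in [i-1]} t_j + 1, \sum_{j \in [i]} t_j]$") amounts to the following helper. This is a hypothetical utility written for illustration, not part of the construction:

```python
# Map a global step index tau (1-based, tau in [T]) to the client whose
# program segment contains it, given the running times t_1, ..., t_n.
from itertools import accumulate

def owner(tau, ts):
    for i, hi in enumerate(accumulate(ts), start=1):
        if tau <= hi:
            return i
    raise ValueError("tau out of range")
```

The server needs this mapping to know which encoded database $\hat{D}_i$ to use for the laconic OT Receive/ReceiveWrite calls at step $\tau$.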
+
+### 7.3.2 Correctness
+
+Here we first prove correctness (as defined in Section 7.1) for a single execution. In fact, we prove something stronger that will help us extend the scheme to multiple executions in a straightforward manner. We prove the following two properties:
+
+**Property 1.** For the above scheme, $y = P_n^{D_n} (\dots P_1^{D_1}(x) \dots)$, where programs, databases and input $x$ are as defined above.
+
+**Property 2.** Let $\hat{\mathrm{D}}'_i, \mathrm{digest}'_i$ denote the updated encoded database and digest with the server after
+---PAGE_BREAK---
+
+the execution. We show that these are equal to $\text{Hash}(crs, D'_i)$, where $D'_i$ is the database resulting after executing
+$P_i^{D_i} (P_{i-1}^{D_{i-1}} (\dots P_1^{D_1}(x)\dots))$.
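Property 2 is the invariant that the server's incrementally updated encoding stays equal to a fresh $\text{Hash}(crs, D'_i)$. For a Merkle-style hash (the paper's laconic-OT Hash is of this flavour; the concrete tree below is our toy stand-in, not the paper's scheme), updating along the write path preserves exactly this invariant:

```python
# Toy Merkle "Hash" over a power-of-two bit database: an in-place write
# updated along the path yields the same digest as re-hashing from scratch.
import hashlib

def H(b):
    return hashlib.sha256(b).digest()

def merkle(D):
    """Encode the database as a Merkle tree; tree[-1][0] is the digest."""
    layer = [H(bytes([bit])) for bit in D]
    tree = [layer]
    while len(layer) > 1:
        layer = [H(layer[i] + layer[i + 1]) for i in range(0, len(layer), 2)]
        tree.append(layer)
    return tree

def write(tree, L, bit):
    """Update the encoded database along the path of location L in place
    and return the new digest."""
    tree[0][L] = H(bytes([bit]))
    idx = L
    for lvl in range(1, len(tree)):
        idx //= 2
        tree[lvl][idx] = H(tree[lvl - 1][2 * idx] + tree[lvl - 1][2 * idx + 1])
    return tree[-1][0]
```

This is the mechanism behind ReceiveWrite keeping $(\hat{D}_i, \text{digest}_i)$ synchronized with the plaintext database across steps.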
+
+Below we prove correctness via a sequence of facts and claims.
+
+**Fact 7.5.** *At any point in homomorphic evaluation, the circuit $\tilde{C}_{\tau,j}^{\text{PRF}}$ and the second OT message for its input keys $L_{\tau,j}$ are consistent.*
+
+This follows from the correctness of the $\text{OT}_2(\cdot, \cdot)$ procedure at generation time and the fact that re-randomization is applied consistently in the Re-rand procedure of re-randomizable garbled circuits.
+
+**Fact 7.6.** *During the homomorphic evaluation of client $Q_i$, in Step 3a of Figure 16, while obtaining partial labels, $M_{\tau,i} = P\text{Keys}_s^{\tau,i}$, where $s$ is the PRF key of $Q_i$.*
+
+This follows from the correctness of OT protocol as well as Fact 7.5.
+
+**Fact 7.7.** *During the homomorphic evaluation of client $Q_i$, in Step 3a of Figure 16, the labels embedded in circuit $\tilde{C}_{\tau}^{\text{step}}$ correspond to $\text{RdKeys}_{\omega_i, \phi_i}^{\tau, i}$, where $\phi_i = F_s(\text{LACONIC\_OT} || \tau)$ and $\omega_i = \{F_s(\text{GC}_{-*} || \tau + 1)\}_{* \in \{\text{STATE, DATA, DIGEST}\}}$.*
+
+This is because the functionality of $C^{\text{PRF}}$ is preserved under the randomization so far, because of Fact 7.6, and because the output keys of $\tilde{C}_{\tau,i}^{\text{PRF}}$ and $\text{RdKeys}^{\tau,i}$ are the same when they are generated and are re-randomized using the same randomness in Step 4b of Figure 16.
+
+Recall that $ct_n$ consists of the garbled step circuits of client $Q_1$, followed by those of $Q_2$, and so on. We prove the following claim about the garbled step circuits belonging to some client $Q_i$ in the final ciphertext $ct_n$.
+
+**Claim 7.8.** *Consider circuits $\tilde{C}_{\tau}^{\text{step}}$ and $\tilde{C}_{\tau+1}^{\text{step}}$ such that both belong to program $P_i$ for some $i$. Since all the PRF circuits $\tilde{C}^{\text{PRF}}$ have been evaluated, the value nextKeys$^\dagger$ in $\tilde{C}_{\tau}^{\text{step}}$ is well defined. Then, nextKeys$^\dagger = \text{Keys}^{\tau+1}$, where $\text{Keys}^{\tau+1}$ corresponds to the input keys for $\tilde{C}_{\tau+1}^{\text{step}}$ in $ct_n$.*
+
+*Proof.* Initially, $Q_i$ picks $\text{Keys}^{\tau+1}$ as $\text{SampleKeys}(F_s(\text{GC}_{-*} || \tau+1))$, where $* \in \{\text{STATE, DATA, DIGEST}\}$, and uses these keys in garbling $\tilde{C}_{\tau+1}^{\text{step}}$ as well as hard-codes them inside $\tilde{C}_{\tau}^{\text{step}}$.
+
+Then, $\tilde{C}_{\tau+1}^{\text{step}}$ is randomized by clients $Q_{i+1}, \dots, Q_n$ such that stateKeys, dataKeys, digestKeys are randomized sequentially using $\omega_j = (F_{s_j}(\text{GC\_STATE}||\tau+1), F_{s_j}(\text{GC\_DATA}||\tau+1), F_{s_j}(\text{GC\_DIGEST}||\tau+1))$. This is the same as Transform($\text{Keys}^{\tau+1}, \{\omega_j\}_{j>i}$) inside $\tilde{C}_{\tau}^{\text{step}}$, since by Fact 7.7, $\omega_j$ is the value used for Transform in $\tilde{C}_{\tau}^{\text{step}}$. $\square$
+
+**Claim 7.9.** *The above claim also holds for $\tilde{C}_{\tau}^{\text{step}}$ and $\tilde{C}_{\tau+1}^{\text{step}}$ when $\tau$ is the last circuit of a program for $Q_i$ and $\tau + 1$ is the first circuit for $Q_{i+1}$.*
+
+*Proof.* When the client $Q_i$ generates $\tilde{C}_{\tau}^{\text{step}}$, the keys hard-coded are $\text{SampleKeys}(0^*)$. Then, this circuit is re-randomized by $Q_{i+1}$, resulting in keys $\text{SampleKeys}(F_{s_{i+1}}(\text{GC}_{-*} || \tau + 1))$, which is the same as the value used by $Q_{i+1}$ to generate the step circuit $\tilde{C}_{\tau+1}^{\text{step}}$. $\square$
+
+**Fact 7.10.** *The first garbled step circuit $\tilde{C}_1^{\text{step}}$ gets evaluated on $(x, 0, \text{digest}_1)$.*
+
+This follows from the correctness of OT and the consistency of re-randomization of OT and garbled circuits, similar to Fact 7.5.
+
+Now, we will prove a lemma about the execution of the circuits generated by client $Q_1$. Then, we will prove a claim about the inputs on which the circuits of $Q_2$ are executed. Finally, the correctness of the execution of the programs of all clients follows in a similar manner.
+---PAGE_BREAK---
+
+**Lemma 7.11.** Consider the program $P_1$ and the database $D_1$ of the first client and the input $x$ of the server. Consider the execution of $P_1^{D_1}(x)$ in the clear, and let $(\mathbf{state}_{\tau}, \mathbf{rData}_{\tau})$ denote the values on which $\mathbf{C}^{\text{step}}$ is executed at step $\tau$. Also, let $(\hat{\mathbf{D}}_1^{\tau}, \mathbf{digest}_{\tau})$ denote $\text{Hash}(crs, D_1^{\tau})$, where $D_1^{\tau}$ is the database at the beginning of step $\tau$. Then, during decryption, $\tilde{\mathbf{C}}_{\tau}^{\text{step}}$ is executed on inputs $(\mathbf{state}_{\tau}, \mathbf{rData}_{\tau}, \mathbf{digest}_{\tau})$. Moreover, the encoded database held by the server before step $\tau$ is $\hat{\mathbf{D}}_1^{\tau}$.
+
+*Proof.* We prove this lemma by induction on $\tau$. The base case follows from Fact 7.10. Assume that the lemma holds for $\tau = \rho$; we prove that it holds for $\rho + 1$ as follows. By assumption, $\tilde{\mathbf{C}}_{\rho}^{\text{step}}$ is executed on $(\mathbf{state}_{\rho}, \mathbf{rData}_{\rho}, \mathbf{digest}_{\rho})$, and $(\hat{\mathbf{D}}_1^{\rho}, \mathbf{digest}_{\rho})$ equals $\text{Hash}(crs, D_1^{\rho})$. Note that $\tilde{\mathbf{C}}_{\rho}^{\text{step}}$ correctly implements its code, which includes one CPU step of $P_1$. Hence, $(\mathbf{state}', R/W, L, wData) = C_{\text{CPU}}^{P_1}(\mathbf{state}_{\rho}, \mathbf{rData}_{\rho})$. Also, by Claim 7.8, $\mathbf{nextKeys}^{\dagger}$ in $\tilde{\mathbf{C}}_{\rho}^{\text{step}}$ are the correct input keys for $\tilde{\mathbf{C}}_{\rho+1}^{\text{step}}$. There are two cases:
+
+* R/W = read: In this case, database and the digest are unchanged. New CPU state and digest are output correctly. Moreover, the labels for bit read from the memory will be learnt via Receive of updatable laconic OT. Correctness of these labels follows from correctness of read of laconic OT.
+
+* R/W = write: Similar to above, in this case new state and data keys are correctly output. Moreover, the digest keys w.r.t. the new updated digest are output via laconic OT. The correctness of these labels follows from correctness of laconic OT write function ReceiveWrite. Finally, in this function, the encoded database is updated correctly. $\square$
+
+**Lemma 7.12.** Let $\tilde{\mathbf{C}}_{t_1}^{\text{step}}$ be the last circuit of client $Q_1$, i.e., of program $P_1$. Then, during decryption, $\tilde{\mathbf{C}}_{t_1+1}^{\text{step}}$ is executed on $(y_1, 0, \mathbf{digest}_2)$, where $y_1 = P_1^{D_1}(x)$ and $\mathbf{digest}_2$ is the digest for $D_2$.
+
+*Proof.* At the time of homomorphic evaluation, in Step 1b, Figure 16, labels $\mathbf{dataKeys}_0^{t_1+1}$ and $\mathbf{digestKeys}_{\mathbf{digest}}^{t_1+1}$ are embedded in $\tilde{\mathbf{C}}_{t_1+1}^{\text{step}}$. Also, by Claim 7.9, in the final ciphertext ct, $\mathbf{nextKeys}^{\dagger}$ inside $\tilde{\mathbf{C}}_{t_1}^{\text{step}}$ are correct keys for $\tilde{\mathbf{C}}_{t_1+1}^{\text{step}}$. Hence, the lemma holds since $\tilde{\mathbf{C}}_{t_1}^{\text{step}}$ outputs $\mathbf{stateKeys}_{\mathbf{state}'}^{\dagger}$, where $\mathbf{state}' = y_1$. $\square$
+
+**Lemma 7.13.** Consider the program $P_i$ and the database $D_i$ of client $Q_i$ and the input $x$ of the server. Consider the execution of $P_i^{D_i}(y_{i-1})$ in the clear, and let $(\mathbf{state}_{\tau}, \mathbf{rData}_{\tau})$ denote the values on which $\mathbf{C}^{\text{step}}$ is executed at step $\tau$. Also, let $(\hat{\mathbf{D}}_i^{\tau}, \mathbf{digest}_{\tau})$ denote $\text{Hash}(crs, D_i^{\tau})$, where $D_i^{\tau}$ is the database at the beginning of step $\tau$. Then, during decryption, $\tilde{\mathbf{C}}_{\tau}^{\text{step}}$ is executed on inputs $(\mathbf{state}_{\tau}, \mathbf{rData}_{\tau}, \mathbf{digest}_{\tau})$. Moreover, the encoded database held by the server before step $\tau$ is $\hat{\mathbf{D}}_i^{\tau}$.
+
+*Proof.* The lemma follows by induction on the number of clients, where the base case is proved in Lemma 7.11. The inductive step proceeds as in Lemma 7.11, where at the end of each program and the beginning of the next we invoke Lemma 7.12. This proves both Property 1 and Property 2. $\square$
+---PAGE_BREAK---
+
+### 7.3.3 Extending to Multiple Executions
+
+Recall that for correctness we also proved that, after one execution, the resulting garbled database $\tilde{D} = (\text{digest}, \hat{D})$ corresponds to the output of $\text{Hash}(crs, D')$, where $D'$ is the correct database resulting after the execution in the clear (see Property 2, Section 7.3.2).
+
+Given this invariant after the first execution, the next execution happens identically as the first execution with minor differences. To run the algorithm Eval, the clients need the updated digest of their respective databases. The updated digests of all the clients taking part in an execution would be sent by the server to the first client on that execution path, and would be passed along with each ciphertext. Also, to ensure that no PRF output is used twice, each PRF invocation would take the session identifier `sid` as an additional input. With these changes, the second execution is identical to the first execution and hence, its correctness follows in a straight-forward manner.
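The session-identifier tweak mentioned above can be folded directly into the PRF's input encoding. A minimal sketch, where HMAC as the PRF and the byte layout are our assumptions:

```python
# Every PRF call takes sid as an extra input, so outputs from different
# executions never collide even for the same label and step index.
import hmac, hashlib

def F(s: bytes, sid: int, label: str, tau: int) -> bytes:
    msg = b"||".join([sid.to_bytes(8, "big"),
                      label.encode(),
                      tau.to_bytes(8, "big")])
    return hmac.new(s, msg, hashlib.sha256).digest()
```

With sid in the domain, the argument that each re-randomization coin is "not used anywhere else" carries over unchanged to later executions.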
+
+Also, this does not affect UMA-security, because the simulator in the ideal world is given the databases as well as the memory access patterns of the honest clients as input.
+
+Moreover, note that this generalizes to the scenario in which different subsets of clients take part in different executions. Only the digests of the relevant clients are passed around by the server.¹⁴
+
+### 7.3.4 Security Proof
+
+Server privacy follows from receiver privacy of the oblivious transfer. For ease of exposition, we prove client UMA privacy for the setting of a single honest client $Q_i$ and a single execution. At the end of this section we show that the proof extends to the case of multiple honest clients and multiple executions as well.
+
+In the following, we prove that there exists a PPT simulator ihopSim such that, for any set of databases $\{D_j\}_{j \in [n]}$, any sequence of compatible programs $P_1, \dots, P_n$ with running times $t_1, \dots, t_n$, and any input $x$, the outputs of the following two experiments are computationally indistinguishable:
+
+**Real experiment**
+
+* $(pk_j, sk_j) \leftarrow \text{KeyGen}(1^\lambda)$ for all $j \in [n]$.
+
+* $\tilde{D}_j = (\text{digest}_j, \hat{D}_j) \leftarrow \text{EncData}(crs, D_j)$ for all $j \in [n]$.
+
+* $(ct_0, x\_secret) \leftarrow \text{InpEnc}(crs, x)$.
+
+* $ct_j \leftarrow \text{Eval}\left(\text{crs}, j, \{\text{pk}_k\}_{k=j+1}^n, \text{ct}_{j-1}, \text{sk}_j, (P_j, t_j), \text{digest}_j\right)$ for all $j \in [n]$.
+
+* Output $ct_0, \{\tilde{D}_j, \text{ct}_j\}_{j \in [n]}$.
+
+**Simulated experiment**
+
+* $(pk_i, sk_i) \leftarrow \text{ihopSim}(1^\lambda, i)$.
+
+* $(pk_j, sk_j) \leftarrow \text{KeyGen}(1^\lambda; r_j)$ for all $j \in [n] \setminus \{i\}$. Here, $r_j$ are uniform random coins.
+
+* $(\text{ct}_0, \{\tilde{D}_j, \text{ct}_j\}_{j \in [n]}) \leftarrow \text{ihopSim}(crs, x, \{\text{pk}_j, \text{sk}_j, D_j, t_j\}_{j \in [n]}, \{P_j, r_j\}_{j \in [n] \setminus \{i\}}, \text{MemAccess}_i, y_i)$,
+where $y_i = P_i^{D_i} (\cdots (P_1^{D_1}(x)) \cdots)$.
+
+* Output $\text{ct}_0, \{\tilde{D}_j, \text{ct}_j\}_{j \in [n]}$.
+
+¹⁴It can also be extended to the setting, when a client $Q_i$ occurs multiple times in the chain of clients in an execution. To handle this setting, the digest of $D_i$ is passed along all the programs between two instances of this client as additional state.
+---PAGE_BREAK---
+
+The above definition can be made semi-malicious by allowing the adversary to pick the random coins $r_j$ adversarially, given the public key $\mathbf{pk}_i$ of the honest client, as follows: $\{r_j\}_{j \in [n] \setminus \{i\}} \leftarrow \mathcal{A}(1^\lambda, \text{crs}, \mathbf{pk}_i)$, and these coins are used to define $(\mathbf{pk}_j, \mathbf{sk}_j)$ in Step 2. Our proof also supports this stronger setting.
+
+**Construction of ihopSim:** We describe the two phases of ihopSim. In the first phase, ihopSim generates the keys of the honest client $Q_i$ as $(\mathbf{pk}_i, \mathbf{sk}_i) \leftarrow \text{KeyGen}(1^{\lambda})$.
+
+In the second phase, ihopSim is described in Figure 22. At a high level, ihopSim generates everything honestly except $ct_i$. When generating $ct_i$, it simulates the step circuits one by one from the last to the first, using the output $y_i$ and the memory access pattern $\text{MemAccess}_i$. In particular, since ihopSim takes $D_i$ and $\text{MemAccess}_i$ as input, it can compute the database $D_i$ and digest$_i$ before every step circuit, and use these to compute the output of every step circuit. Security follows from the security of re-randomization of SFE, namely that re-randomized garbled circuits are indistinguishable from freshly generated ones and that freshly generated garbled circuits are indistinguishable from simulated ones.
+
+Now we give a series of hybrids such that the first hybrid is the output $(ct_0, \{\tilde{D}_j, ct_j\}_{j \in [n]})$ of the real experiment, and the last hybrid is the output of ihopSim. Notice that the only difference between the real and ideal experiments is $ct_i$, so all the hybrids generate everything in the same way except $ct_i$.
+
+* $\hat{H}_0$: Output in the real experiment.
+
+* $H_0$: In this hybrid, replace $F_{s_i}(\cdot)$ with a truly random function $F$. In particular, when computing $ct_i$ as in Figure 16, in steps 1a and 4, use the values generated by $F$; in step 3a embed labels corresponding to the values from $F$. The indistinguishability of this hybrid with $\hat{H}_0$ follows from the pseudo-randomness of $F_{s_i}(\cdot)$ and privacy of oblivious transfer ($s_i$ is hidden in $\mathbf{pk}_i$).
+
+* $H_m$ ($m \in [T_i]$): Next we consider a sequence of hybrids $H_1, \dots, H_{T_i}$. The description of $H_m$ is in Figure 23. Notice that $ct_i$ consists of $T_i$ step circuits with corresponding PRF circuits. In hybrid $H_m$, the step circuits from 1 to $m$ are simulated while the remaining step circuits ($m+1$ to $T_i$) are generated honestly. Given all the programs and secret keys, the intermediate outputs as well as the input/output labels of every step circuit can be computed. Given the correct output of circuit $\tilde{C}_m^{step}$, the step circuits from 1 to $m$ can be simulated one by one from the $m$-th to the first, similarly to ihopSim.
+
+To show $H_m$ is indistinguishable from $H_{m-1}$, first notice that they are the same except for $(\tilde{C}_m^{step}, \{\tilde{C}_{m,j}^{\text{PRF}}, L_{m,j}\}_{j=i+1}^n)$ in $ct_i$. Consider an intermediate hybrid $\hat{H}_m$ which is the same as $H_m$ except that in step 2f, when $\tau = m$, it follows the steps in Figure 24. In particular, when $\tau = m$, $\hat{H}_m$ computes the output of $\tilde{C}_m^{step}$ and uses that output to simulate $\tilde{C}_m^{step}$ via CircSim and OT$_2$. The output of $\tilde{C}_m^{step}$ is the same in $H_{m-1}$ and $\hat{H}_m$. The indistinguishability of $\hat{H}_m$ and $H_{m-1}$ follows directly from the security of garbled circuits when $m \in [T_{i-1}+1, T_i]$. When $m \in [T_{i-1}]$, it follows from the security of garbled circuits and of re-randomization. More precisely, the re-randomized garbled circuit is indistinguishable from a freshly generated garbled circuit, which is in turn indistinguishable from a simulated one. Notice that the random coins used in re-randomization for $\tilde{C}_m^{step}$ are $F(\text{GC}_{-*}||m)$, which are not used anywhere else in $H_{m-1}$, so they can be treated as truly random coins.
+---PAGE_BREAK---
+
+$$ (ct_0, \{\tilde{D}_j, ct_j\}_{j \in [n]}) \leftarrow ihopSim (\text{crs}, x, \{\text{pk}_j, \text{sk}_j, D_j, t_j\}_{j \in [n]}, \{P_j\}_{j \in [n] \setminus \{i\}}, \text{MemAccess}_i, y_i). $$
+
+1. Compute $\tilde{D}_j = (\text{digest}_j, \hat{D}_j) \leftarrow \text{EncData}(\text{crs}, D_j)$ for all $j \in [n]$.
+ Compute $(ct_0, x_{\text{secret}}) \leftarrow \text{InpEnc}(\text{crs}, x)$.
+ Compute $ct_j \leftarrow \text{Eval} (\text{crs}, j, \{\text{pk}_k\}_{k=j+1}^n, ct_{j-1}, \text{sk}_j, (P_j, t_j), \text{digest}_j)$ for every $j \in [i-1]$.
+ Pick a random function $F$ (in the following use random values for $F(\cdot)$).
+
+2. Let $T_j := \sum_{k \in [j]} t_k$. Generate $ct_i$ as follows:
+
+(a) Run the program $P_{i-1}^{D_{i-1}} (\cdots (P_1^{D_1}(x)) \cdots)$ to obtain $(\mathbb{R}/\mathbb{W}^\tau, L^\tau, wData^\tau)$ for every CPU step $\tau \in [T_{i-1}]$. Obtain $(\mathbb{R}/\mathbb{W}^\tau, L^\tau, wData^\tau)$ for $\tau \in [T_{i-1} + 1, T_i]$ from $\text{MemAccess}_i$.
+
+(b) $(\text{stateKeys}^{T_i+1}, \text{dataKeys}^{T_i+1}, \text{digestKeys}^{T_i+1}) \leftarrow \text{SampleKeys}(0^*)$.
+ Compute $\text{stateLabels}^{T_i+1}$ using $y_i$; compute $\text{dataLabels}^{T_i+1}, \text{digestLabels}^{T_i+1}$ using $(D_i, \text{digest}_i, \mathbb{R}/\mathbb{W}, L, wData)$ of the last CPU step.
+
+(c) For $\tau = T_i$ downto 1, do the following:
+ $(\mathbb{R}/\mathbb{W}, L, wData) := (\mathbb{R}/\mathbb{W}^\tau, L^\tau, wData^\tau)$.
+ Define $j$ s.t. $\tau \in [T_{j-1} + 1, T_j]$.
+ Let $D$ be the database of $Q_j$ before step $\tau$.
+
+ $(\text{stateLabels}^{\tau+1}, \text{dataLabels}^{\tau+1}, \text{digestLabels}^{\tau+1})$
+ $\leftarrow$ Transform $((\text{stateLabels}^{\tau+1}, \text{dataLabels}^{\tau+1}, \text{digestLabels}^{\tau+1}),$
+ $\{F_{s_{i+1}}(\text{GC}_{-*} || \tau + 1)\}_{* \in \{\text{STATE,DATA,DIGEST}\}} || \cdots || \{F_{s_n}(\text{GC}_{-*} || \tau + 1)\}_{* \in \{\text{STATE,DATA,DIGEST}\}})$.
+
+**if** $\mathbb{R}/\mathbb{W} = \text{read}$ **then**
+ $\mathbf{e}_{\text{data}} \leftarrow \ell\text{OTSim}(\text{crs}, D, L, \text{dataLabels}^{\tau+1})$.
+ $X \leftarrow (\text{stateLabels}^{\tau+1}, \mathbf{e}_{\text{data}}, \text{digestLabels}^{\tau+1})$.
+**else**
+ $\mathbf{e}_{\text{digest}} \leftarrow \ell\text{OTSimWrite}(\text{crs}, D, L, wData, \text{digestLabels}^{\tau+1})$.
+ $X \leftarrow (\text{stateLabels}^{\tau+1}, \text{dataLabels}^{\tau+1}, \mathbf{e}_{\text{digest}}, wData)$.
+
+$$ (\{\tilde{\mathcal{C}}_{\tau}^{\text{step}}, \{\tilde{\mathcal{C}}_{\tau,j}^{\text{PRF}}\}_{j=i+1}^n\}, \text{Labels}^{\tau}) \leftarrow \text{CircSim} (1^{\lambda}, \mathcal{U}, (X, \mathbb{R}/\mathbb{W}, L)) $$
+such that the output labels of $\tilde{\mathcal{C}}_{\tau,j}^{\text{PRF}}$ are the same as the input labels $\text{RdLabs}^{\tau,j}$ of $\tilde{\mathcal{C}}_{\tau}^{\text{step}}$.
+
+$$
+\begin{gathered}
+\text{Parse Labels}^{\tau} = (\text{stateLabels}^{\tau}, \text{dataLabels}^{\tau}, \text{digestLabels}^{\tau}, \{\text{PLabels}^{\tau,j}\}_{j \in [i+1,n]}). \\
+\mathcal{L}_{\tau,j} \leftarrow \text{OT}_2 (\text{pk}_j, (\text{PLabels}^{\tau,j}, \text{PLabels}^{\tau,j})) \text{ for every } j \in [i+1, n].
+\end{gathered}
+$$
+
+**if** $\tau = T_{j-1} + 1$ **then**
+ Embed $\text{stateLabels}^\tau$ and $\text{digestLabels}^\tau$ in $\tilde{\mathcal{C}}_\tau^\text{step}$.
+ $(\text{stateKeys}^\tau, \text{dataKeys}^\tau, \text{digestKeys}^\tau) \leftarrow \text{Transform}(\text{SampleKeys}(0^*),$
+ $\quad \{F_{s_j}(\mathrm{GC\_\star} || \tau)\} || \cdots || \{F_{s_{i-1}}(\mathrm{GC\_\star} || \tau)\} || \{F(\mathrm{GC\_\star} || \tau)\}_{\star \in \{\mathrm{STATE,DATA,DIGEST}\}})$.
+ Compute $(\text{dataLabels}^\tau, \text{digestLabels}^\tau)$ using $(D_{j-1}, \text{digest}_{j-1}, \mathbb{R}/\mathbb{W}, L, wData)$ at step $\tau - 1$.
+
+(d) $\mathcal{L}_0 \leftarrow \mathrm{OT}_2 (\mathrm{ct}_0, (\mathrm{stateLabels}^1, \mathrm{stateLabels}^1))$.
+
+(e) $\mathrm{ct}_i := (\mathcal{L}_0, \{\tilde{\mathcal{C}}_\tau^\mathrm{step}, \{\tilde{\mathcal{C}}_{\tau,j}^\mathrm{PRF}, L_{\tau,j}\}_{j=i+1}^n\}_{\tau \in [T_i]})$.
+
+3. Compute $ct_j \leftarrow \text{Eval}(j, \{\text{pk}_k\}_{k=j+1}^n, ct_{j-1}, \text{sk}_j, (P_j, t_j), \text{digest}_j)$ for every $j \in [i+1, n]$.
+
+4. Output $ct_0, \{\tilde{D}_j, ct_j\}_{j \in [n]}$.
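The repeated Transform of keys and labels in step 2c can be pictured as XOR-masking. This is a hypothetical instantiation (the paper's Transform need not be XOR), but it shows why the masks of parties $i+1, \dots, n$ can be applied in any order and compose across hops:

```python
import secrets

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def transform(label, masks):
    """Apply per-party masks to a wire label by XOR-masking (toy model)."""
    out = label
    for m in masks:
        out = xor(out, m)
    return out

label = secrets.token_bytes(16)   # a garbled-circuit wire label
m1 = secrets.token_bytes(16)      # mask derived from, e.g., F_{s_{i+1}}(GC_*||tau)
m2 = secrets.token_bytes(16)      # mask derived from, e.g., F_{s_{i+2}}(GC_*||tau)
assert transform(label, [m1, m2]) == transform(label, [m2, m1])               # order-independent
assert transform(transform(label, [m1]), [m2]) == transform(label, [m1, m2])  # composes across hops
```

The same masking applied twice cancels, which is the algebraic fact a re-randomizable garbling scheme of this flavor exploits.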
+
+1. Compute $\tilde{D}_j = (\text{digest}_j, \hat{D}_j) \leftarrow \text{EncData}(\text{crs}, D_j)$ for every $j \in [n]$.
+ Compute $(\text{ct}_0, \text{x\_secret}) \leftarrow \text{InpEnc}(x)$.
+ Compute $ct_j \leftarrow \text{Eval} (j, \{\text{pk}_k\}_{k=j+1}^n, \text{ct}_{j-1}, \text{sk}_j, (P_j, t_j), \text{digest}_j)$ for every $j \in [i-1]$.
+ Pick a random function $F$ (in the following use random values for $F(\cdot)$).
+
+2. Let $T_j := \sum_{k \in [j]} t_k$. Generate $ct_i$ as follows:
+
+(a) Compute $\{\tilde{C}_{\tau}^{\text{step}}, \{\tilde{C}_{\tau,j}^{\text{PRF}}, L_{\tau,j}\}_{j=i+1}^n\}_{\tau \in [m+1,T_i]}$ honestly as in Figure 16.
+
+(b) Run the program $P_i^{D_i}(\cdots(P_1^{D_1}(x))\cdots)$ to obtain $(\text{state}^\tau, R/W^\tau, L^\tau, w\text{Data}^\tau)$ for every $\tau \in [T_i]$.
+
+(c) Define $j$ s.t. $m \in [T_{j-1} + 1, T_j]$.
+
+(d) Set $\mathit{stateKeys}^{m+1}, \mathit{dataKeys}^{m+1}, \mathit{digestKeys}^{m+1}$ as $\mathit{SampleKeys}(F_{s_j}(\mathrm{GC\_\star} ||m+1))$ where $\star \in \{\mathrm{STATE}, \mathrm{DATA}, \mathrm{DIGEST}\}$, respectively.
+If $m = T_j$, then set $\mathit{stateKeys}^{m+1}, \mathit{dataKeys}^{m+1}, \mathit{digestKeys}^{m+1}$ to $\mathit{SampleKeys}(0^*)$.
+If $j < i$, then $(\mathit{stateKeys}^{m+1}, \mathit{dataKeys}^{m+1}, \mathit{digestKeys}^{m+1})$
+$\leftarrow \text{Transform}((\mathit{stateKeys}^{m+1}, \mathit{dataKeys}^{m+1}, \mathit{digestKeys}^{m+1}),$
+$\quad \{F_{s_{j+1}}(\mathrm{GC\_\star} ||m+1)\} || \cdots || \{F_{s_{i-1}}(\mathrm{GC\_\star} ||m+1)\} || \{F(\mathrm{GC\_\star} ||m+1)\}_{\star \in \{\mathrm{STATE}, \mathrm{DATA}, \mathrm{DIGEST}\}})$.
+
+
+
+
+
+(e) Compute $(\mathit{stateLabels}^{m+1}, \mathit{dataLabels}^{m+1}, \mathit{digestLabels}^{m+1})$ using $(\mathit{state}^m, R/W^m, L^m, w\mathit{Data}^m)$ and $(D_j, \mathit{digest}_j)$ at step $m$.
+
+(f) For $\tau = m$ downto 1, do the following:
+Follow the same steps as in Figure 22 step 2c.
+
+(g) $L_0 \leftarrow \text{OT}_2(\text{ct}_0, (\text{stateLabels}^1, \text{stateLabels}^1))$.
+
+(h) $ct_i := (L_0, \{\tilde{C}_{\tau}^{\text{step}}, \{\tilde{C}_{\tau,j}^{\text{PRF}}, L_{\tau,j}\}_{j=i+1}^n\}_{\tau \in [T_i]})$.
+
+3. Compute $ct_j \leftarrow \text{Eval}(j, \{\text{pk}_k\}_{k=j+1}^n, ct_{j-1}, \text{sk}_j, (P_j, t_j), \text{digest}_j)$ for every $j \in [i+1, n]$.
+
+4. Output $ct_0, \{\tilde{D}_j, ct_j\}_{j \in [n]}$.
+
+Figure 23: Hybrid $H_m$
+
+$(R/W, L, wData) := (R/W^\tau, L^\tau, wData^\tau)$.
+
+Define $j$ s.t. $\tau \in [T_{j-1} + 1, T_j]$.
+
+Let $(D, \text{digest})$ be the database and digest of $Q_j$ before step $\tau$.
+Let $\text{state}'$ be the CPU state after step $\tau$.
+
+$(\text{stateKeys}^{\tau+1}, \text{dataKeys}^{\tau+1}, \text{digestKeys}^{\tau+1})$
+$\leftarrow \text{Transform}((\text{stateKeys}^{\tau+1}, \text{dataKeys}^{\tau+1}, \text{digestKeys}^{\tau+1}),$
+$\quad \{F_{s_{i+1}}(\mathrm{GC\_\star}||\tau + 1)\}_{\star\in\{\mathrm{STATE,DATA,DIGEST}\}} || \cdots || \{F_{s_n}(\mathrm{GC\_\star}||\tau + 1)\}_{\star\in\{\mathrm{STATE,DATA,DIGEST}\}})$.
+
+**if** $R/W = \text{read}$ **then**
+$\mathbf{e}_{\text{data}} \leftarrow \text{Send}(\text{crs}, \text{digest}, L, \text{dataKeys}^{\tau+1}; F(\mathrm{PSI}||\tau) \oplus \bigoplus_{j>i} F_{s_j}(\mathrm{LACONIC\_OT}||\tau))$.
+$X \leftarrow (\text{stateKeys}^{\tau+1}_{\text{state}'}, \mathbf{e}_{\text{data}}, \text{digestKeys}^{\tau+1}_{\text{digest}})$.
+
+**else**
+$\mathbf{e}_{\text{digest}} \leftarrow \text{SendWrite}(\text{crs}, \text{digest}, L, wData, \text{digestKeys}^{\tau+1}; F(\mathrm{PSI}||\tau) \oplus \bigoplus_{j>i} F_{s_j}(\mathrm{LACONIC\_OT}||\tau))$.
+$X \leftarrow (\text{stateKeys}^{\tau+1}_{\text{state}'}, \text{dataKeys}^{\tau+1}_{0}, \mathbf{e}_{\text{digest}}, wData)$.
+
+$(\{\tilde{C}_{\tau}^{\text{step}}, \{\tilde{C}_{\tau,j}^{\text{PRF}}\}_{j=i+1}^n\}, \text{Labels}^{\tau}) \leftarrow \text{CircSim}(1^\lambda, \mathcal{U}, (X, R/W, L))$ such that the output labels of $\tilde{C}_{\tau,j}^{\text{PRF}}$ are the same as the input labels $\text{RdLabs}^{\tau,j}$ of $\tilde{C}_{\tau}^{\text{step}}$.
+Parse $\text{Labels}^{\tau} = (\text{stateLabels}^{\tau}, \text{dataLabels}^{\tau}, \text{digestLabels}^{\tau}, \{\text{PLabels}^{\tau,j}\}_{j \in [i+1,n]})$.
+$L_{\tau,j} \leftarrow \text{OT}_2(\text{pk}_j, (\text{PLabels}^{\tau,j}, \text{PLabels}^{\tau,j}))$ for every $j \in [i+1, n]$.
+
+**if** $\tau = T_{j-1} + 1$ **then**
+
+Embed $\text{stateLabels}^{\tau}$ and $\text{digestLabels}^{\tau}$ in $\tilde{C}_{\tau}^{\text{step}}$.
+$(\text{stateKeys}^{\tau}, \text{dataKeys}^{\tau}, \text{digestKeys}^{\tau}) \leftarrow \text{Transform}(\text{SampleKeys}(0^*),$
+$\quad \{F_{s_j}(\mathrm{GC\_\star}||\tau)\} || \cdots || \{F_{s_{i-1}}(\mathrm{GC\_\star}||\tau)\} || \{F(\mathrm{GC\_\star}||\tau)\}_{\star\in\{\mathrm{STATE,DATA,DIGEST}\}})$.
+
+Compute $(\text{dataLabels}^{\tau}, \text{digestLabels}^{\tau})$ using $(D_{j-1}, \text{digest}_{j-1}, R/W, L, wData)$ at step $\tau - 1$.
+
+Figure 24: Difference of $H_m$ and $\hat{H}_m$.
+
+To switch from $\hat{H}_m$ to $H_m$, we replace the honestly generated $\mathbf{e}_{\text{data}}$ and $\mathbf{e}_{\text{digest}}$ inside $X$ in Figure 24 with simulated ones before feeding $X$ to CircSim and $\text{OT}_2$. The indistinguishability follows from the sender privacy of updatable laconic OT and the fact that Send and SendWrite both take the random coins $F(\mathrm{PSI}||m)$.
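A small sketch, with HMAC-SHA256 standing in for the PRFs (an assumption of this illustration, not the paper's instantiation), of how the coins fed to Send/SendWrite are assembled from the simulator's share and the parties' shares:

```python
import hmac
import hashlib
from functools import reduce

def prf(key, msg):
    return hmac.new(key, msg, hashlib.sha256).digest()

def combined_coins(sim_key, party_keys, m):
    """r = F(PSI||m) XOR (XOR over j>i of F_{s_j}(LACONIC_OT||m))."""
    shares = [prf(sim_key, "PSI||{}".format(m).encode())]
    shares += [prf(k, "LACONIC_OT||{}".format(m).encode()) for k in party_keys]
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), shares)

sim_key = b"\x01" * 32                       # the simulator's random function F
party_keys = [b"\x02" * 32, b"\x03" * 32]    # hypothetical seeds s_{i+1}, s_{i+2}
r7 = combined_coins(sim_key, party_keys, 7)
assert r7 == combined_coins(sim_key, party_keys, 7)  # deterministic per step
assert r7 != combined_coins(sim_key, party_keys, 8)  # fresh coins at each step
```

Because $F(\mathrm{PSI}||m)$ is a fresh one-time share, the XOR of all shares remains fresh regardless of the other parties' contributions.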
+
+• $\hat{H}_{T_i}$: Output in the simulated experiment. This hybrid is the same as $H_{T_i}$.
+
+**Extension.** The above proof can be naturally extended to provide security for multiple clients and many executions. For example, in the case of two clients $Q_{i_1}$ and $Q_{i_2}$, ihopSim first computes $(ct_0, \{\tilde{D}_j\}_{j \in [n]}, \{ct_j\}_{j \in [i_1-1]})$ honestly, then computes $ct_{i_1}$ as in Figure 22 step 2. It then computes $\{ct_j\}_{j \in [i_1+1, i_2-1]}$ from $ct_{i_1}$ by Eval, and computes $ct_{i_2}$ as in Figure 22 step 2.¹⁵ Finally, it computes $\{ct_j\}_{j \in [i_2+1, n]}$ from $ct_{i_2}$ by Eval. To show this is indistinguishable from the real execution, we consider the following hybrids:
+
+• $H_0$: Output in the real experiment.
+
+• $H_1$: First compute $(ct_0, \{\tilde{D}_j\}_{j \in [n]}, \{ct_j\}_{j \in [i_2-1]})$ honestly, and then compute $ct_{i_2}$ same as in Figure 22 step 2. Finally it computes $\{ct_j\}_{j \in [i_2+1, n]}$ from $ct_{i_2}$ honestly by Eval.
+
+• $H_2$: Output in the simulated experiment.
+
+The above hybrids are indistinguishable because an honestly generated $ct_{i_1}$ or $ct_{i_2}$ is indistinguishable from a simulated one, as we have shown in the single-client case.
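Written out, the hybrid argument bounds a PPT distinguisher $\mathcal{A}$'s advantage by the triangle inequality over adjacent hybrids:

$$ \big| \Pr[\mathcal{A}(H_0) = 1] - \Pr[\mathcal{A}(H_2) = 1] \big| \;\le\; \sum_{k=0}^{1} \big| \Pr[\mathcal{A}(H_k) = 1] - \Pr[\mathcal{A}(H_{k+1}) = 1] \big| \;\le\; 2 \cdot \mathrm{negl}(\lambda). $$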
+
+To simulate multiple executions, ihopSim can simply repeat the procedure for every execution. We note that there is no connection between executions except the digest, so they can be simulated separately given the initial digests of every execution. The hybrids go from the real experiment to the simulated experiment by replacing all the honestly generated ct's in one execution by simulated ones, one execution per hybrid.
+
+### 7.3.5 UMA to Full Security for Multi-hop RAM Scheme
+
+In this section we provide a fully secure multi-hop RAM scheme. We first review Oblivious RAM (ORAM), which was first introduced by Goldreich [Gol87, GO96] and Ostrovsky [Ost90, Ost92, GO96]. We then use ORAM as a compiler to encode the memory and program into a special format that does not reveal the access pattern or data contents during an execution.
+
+**Definition 7.14 (Oblivious RAM).** An Oblivious RAM scheme consists of two procedures (OData, OProg) with syntax:
+
+• $(D^*, s^*) \leftarrow \text{OData}(1^\lambda, D)$: Given a security parameter $\lambda$ and memory $D \in \{0, 1\}^M$ as input, OData outputs the encoded memory $D^*$ and encoding key $s^*$.
+
+• $P^* \leftarrow \text{OProg}(1^\lambda, 1^{\log M}, 1^t, P)$: Given a security parameter $\lambda$, a memory size $M$, and a program $P$ that runs in time $t$, OProg outputs an oblivious program $P^*$ that can access $D^*$ as RAM and takes two inputs $x$ and $s^*$.
+
+¹⁵Notice that different from Figure 22, the simulator here doesn't take $P_{i_1}$ as input, but the simulation can still obtain $(R/W^\tau, L^\tau, wData^\tau)$ for every step $\tau \in [T_{i_1-1} + 1, T_{i_1}]$ given $D_{i_1}$ and MemAccess$_{i_1}$, and that is enough for the simulation.
+
+**Efficiency.** We require that the run-time of `OData` should be $M \cdot \text{polylog}(M) \cdot \text{poly}(\lambda)$, and the run-time of `OProg` should be $t \cdot \text{poly}(\lambda) \cdot \text{polylog}(M)$. Finally, the oblivious program $P^*$ itself should run in time $t' = t \cdot \text{poly}(\lambda) \cdot \text{polylog}(M)$. Both the new memory size $M' = |D^*|$ and the running time $t'$ should be efficiently computable from $M, t$, and $\lambda$.
+
+**Correctness.** Let $P_1, \dots, P_\ell$ be programs running in polynomial times $t_1, \dots, t_\ell$ on memory $D$ of size $M$. Let $x_1, \dots, x_\ell$ be the inputs and $\lambda$ be a security parameter. Then we require that:
+
+$$ \Pr[(P_1^*(x_1, s^*), \dots, P_\ell^*(x_\ell, s^*))^{D^*} = (P_1(x_1), \dots, P_\ell(x_\ell))^D] = 1 $$
+
+where $(D^*, s^*) \leftarrow \text{OData}(1^\lambda, D)$, $P_i^* \leftarrow \text{OProg}(1^\lambda, 1^{\log M}, 1^{t_i}, P_i)$, and $(P_1^*(x_1, s^*), \dots, P_\ell^*(x_\ell, s^*))^{D^*}$ denotes running the oblivious programs on $D^*$ sequentially.
+
+**Security.** For security, we require that there exists a PPT simulator $S$ such that for any sequence of programs $P_1, \dots, P_\ell$, initial memory data $D \in \{0, 1\}^M$, and inputs $x_1, \dots, x_\ell$ we have that:
+
+$$ (D^*, \text{MemAccess}) \stackrel{c}{\approx} S(1^\lambda, 1^M, \{1^{t_i}, y_i\}_{i=1}^\ell) $$
+
+where $(y_1, \dots, y_\ell) = (P_1(x_1), \dots, P_\ell(x_\ell))^D$, $(D^*, s^*) \leftarrow \text{OData}(1^\lambda, D)$, and MemAccess corresponds to the access pattern of the CPU-step circuits during the sequential execution of the oblivious programs $(P_1^*(x_1, s^*), \dots, P_\ell^*(x_\ell, s^*))^{D^*}$.
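For intuition only, here is a minimal sketch of the (OData, OProg) syntax. The address-permutation "encoding" below is NOT a secure ORAM (it does nothing to hide repeated accesses), and every name in it is hypothetical, but it makes the correctness condition above concrete:

```python
import random

def odata(security_param, D):
    """Toy OData: encode memory D under a secret address permutation s*."""
    rng = random.Random(security_param)   # stand-in for real randomness
    perm = list(range(len(D)))
    rng.shuffle(perm)
    D_star = [None] * len(D)
    for addr, img in enumerate(perm):
        D_star[img] = D[addr]             # logical addr maps to physical perm[addr]
    return D_star, perm                   # (encoded memory D*, encoding key s*)

def oprog(P):
    """Toy OProg: compile P into P* that accesses D* through the key s*."""
    def P_star(x, s_star, D_star):
        return P(x, lambda addr: D_star[s_star[addr]])
    return P_star

D = [10, 20, 30, 40]
D_star, s_star = odata(1337, D)
P = lambda x, read: read(x) + read((x + 1) % 4)   # a toy RAM program
assert oprog(P)(2, s_star, D_star) == P(2, lambda a: D[a])  # correctness condition
```

A real ORAM additionally re-shuffles (or uses a tree of buckets) on every access so that even the *pattern* of physical addresses leaks nothing; only the interface and the correctness equation are faithful here.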
+
+We prove the following theorem.
+
+**Theorem 7.15.** Assume there exist a UMA-secure multi-hop RAM scheme and an ORAM scheme. Then there exists a fully secure multi-hop RAM scheme. Moreover, our construction uses the underlying UMA-secure multi-hop RAM scheme and ORAM scheme in a black-box manner.
+
+*Proof.* We first give the construction of the scheme itself and then provide a construction of an appropriate simulator to prove security. Let (Setup, KeyGen, EncData, InpEnc, Eval, Dec) be a UMA-secure multi-hop RAM scheme and let (OData, OProg) be an ORAM scheme. We construct a new multi-hop RAM scheme $(\widehat{\text{Setup}}, \widehat{\text{KeyGen}}, \widehat{\text{EncData}}, \widehat{\text{InpEnc}}, \widehat{\text{Eval}}, \widehat{\text{Dec}})$ as follows:
+
+• $\widehat{\text{Setup}}(1^\lambda)$: Generate crs same as Setup.
+
+• $\widehat{\text{KeyGen}}(1^\lambda)$: Generate ($pk, sk$) same as KeyGen.
+
+• $\widehat{\text{EncData}}(\text{crs}, D)$: Execute $(D^*, s^*) \leftarrow \text{OData}(1^\lambda, D)$ followed by $\tilde{D} \leftarrow \text{EncData}(\text{crs}, D^*)$.
+
+• $\widehat{\text{InpEnc}}(x)$: Execute $(ct, x\_secret) \leftarrow \text{InpEnc}(x)$.
+
+• $\widehat{\text{Eval}}(i, \{\text{pk}_j\}_{j=i+1}^n, \text{ct}, \text{sk}, (P, t), \text{digest})$: Execute $(P^*, t^*) \leftarrow \text{OProg}(1^\lambda, 1^{\log M}, 1^t, P)$ followed by Eval $(i, \{\text{pk}_j\}_{j=i+1}^n, \text{ct}, \text{sk}, (P^*[s^*], t^*), \text{digest})$, where $P^*$ has $s^*$ hard-coded inside it.
+
+• $\widehat{\text{Dec}}^{\tilde{D}_1, \dots, \tilde{D}_n}(x\_secret, ct)$: Output $\text{Dec}^{\tilde{D}_1, \dots, \tilde{D}_n}(x\_secret, ct)$.
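The data flow of this composition can be sketched as follows, with `uma_encdata`, `uma_eval`, `odata`, and `oprog` as placeholder stubs (assumptions of this sketch) standing in for the real primitives:

```python
def make_full_scheme(uma_encdata, uma_eval, odata, oprog):
    def encdata_hat(crs, D, security_param):
        D_star, s_star = odata(security_param, D)   # ORAM-encode first ...
        return uma_encdata(crs, D_star), s_star     # ... then garble/encrypt

    def eval_hat(i, pks, ct, sk, P, t, digest, s_star, security_param):
        P_star, t_star = oprog(security_param, P, t)  # compile P; s* gets hard-coded
        return uma_eval(i, pks, ct, sk, (P_star, s_star, t_star), digest)

    return encdata_hat, eval_hat

# Identity-like stubs standing in for the real primitives (hypothetical):
encdata_hat, eval_hat = make_full_scheme(
    uma_encdata=lambda crs, D: ("digest", D),
    uma_eval=lambda i, pks, ct, sk, prog, digest: (ct, prog),
    odata=lambda lam, D: (list(D), "s*"),
    oprog=lambda lam, P, t: (P, t),
)
tilde_D, s_star = encdata_hat("crs", [1, 2, 3], 128)
assert tilde_D == ("digest", [1, 2, 3]) and s_star == "s*"
```

The point of the wiring is that the UMA-secure scheme only ever sees the ORAM-encoded memory $D^*$ and compiled program $P^*$, never $D$ or $P$ directly.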
+
+We prove that the construction $(\widehat{\text{Setup}}, \widehat{\text{KeyGen}}, \widehat{\text{EncData}}, \widehat{\text{InpEnc}}, \widehat{\text{Eval}}, \widehat{\text{Dec}})$ given above is a fully secure multi-hop RAM scheme.
+
+**Correctness in a single execution.** First we prove correctness for a single execution, and then we will generalize to multiple executions. In a single execution, our goal is to demonstrate that
+
+$$ \mathrm{Pr} \left[ \widehat{\mathrm{Dec}}^{\tilde{D}_1, \dots, \tilde{D}_n} (\mathrm{x\_secret}, \mathrm{ct}_n) = P_n^{D_n} ( \dots (P_1^{D_1}(x)) \dots ) \right] = 1, $$
+
+where $\tilde{D}_i \leftarrow \widehat{\mathrm{EncData}}(\mathrm{crs}, D_i)$, $(\mathrm{ct}_0, \mathrm{x\_secret}) \leftarrow \widehat{\mathrm{InpEnc}}(x)$, and $\mathrm{ct}_i \leftarrow \widehat{\mathrm{Eval}}(i, \{\mathrm{pk}_j\}_{j=i+1}^n, \mathrm{ct}_{i-1}, \mathrm{sk}_i, (P_i, t_i), \mathrm{digest}_i)$.
+
+By definition, $\widehat{\mathrm{Dec}}^{\tilde{D}_1, \dots, \tilde{D}_n}(\mathrm{x\_secret}, \mathrm{ct}_n) = \mathrm{Dec}^{\tilde{D}_1, \dots, \tilde{D}_n}(\mathrm{x\_secret}, \mathrm{ct}_n)$. By the correctness of the UMA-secure multi-hop RAM scheme, we have that $\mathrm{Dec}^{\tilde{D}_1, \dots, \tilde{D}_n}(\mathrm{x\_secret}, \mathrm{ct}_n) = P_n^*[s_n^*]^{D_n^*} (\cdots (P_1^*[s_1^*]^{D_1^*}(x)) \cdots)$. Finally, by the correctness of the ORAM scheme, $P_n^*[s_n^*]^{D_n^*} (\cdots (P_1^*[s_1^*]^{D_1^*}(x)) \cdots) = P_n^{D_n} (\cdots (P_1^{D_1}(x)) \cdots)$.
+
+**Correctness in multiple executions.** To prove correctness in multiple executions, we need to show that
+
+$$ \mathrm{Pr} \left[ \widehat{\mathrm{Dec}}^{\tilde{D}_1^{(\mathrm{sid})}, \dots, \tilde{D}_n^{(\mathrm{sid})}} (\mathrm{x\_secret}^{(\mathrm{sid})}, \mathrm{ct}_n^{(\mathrm{sid})}) = \left(P_n^{(\mathrm{sid})}\right)^{D_n^{(\mathrm{sid})}} \left( \cdots \left( \left(P_1^{(\mathrm{sid})}\right)^{D_1^{(\mathrm{sid})}} (x^{(\mathrm{sid})}) \right) \cdots \right) \right] = 1, $$
+
+where $\tilde{D}_i^{(\mathrm{sid})}$ is the garbled database resulting after $\mathrm{sid}-1$ homomorphic evaluations, $(\mathrm{ct}_0^{(\mathrm{sid})}, \mathrm{x\_secret}^{(\mathrm{sid})}) \leftarrow \widehat{\mathrm{InpEnc}}(x^{(\mathrm{sid})})$, and $\mathrm{ct}_i^{(\mathrm{sid})} \leftarrow \widehat{\mathrm{Eval}}(i, \{\mathrm{pk}_j\}_{j=i+1}^n, \mathrm{ct}_{i-1}^{(\mathrm{sid})}, \mathrm{sk}_i, (P_i^{(\mathrm{sid})}, t_i^{(\mathrm{sid})}), \mathrm{digest}_i^{(\mathrm{sid})})$.
+
+Recall that for the correctness of the UMA-secure multi-hop RAM scheme we proved that after every execution, the resulting garbled database $\tilde{D}_i^{(\mathrm{sid})}$ corresponds to the output of $\mathrm{Hash}(\mathrm{crs}, D_i^{(\mathrm{sid})})$, where $D_i^{(\mathrm{sid})}$ is the correct $D_i^*$ resulting from the previous $\mathrm{sid}-1$ executions in the clear (see Property 2, Section 7.3.2). By the correctness of ORAM and of the underlying UMA-secure multi-hop RAM scheme, we conclude correctness in multiple executions.
+
+**Security for a single client in a single execution.** Server privacy follows from the receiver privacy of oblivious transfer. We now prove client privacy for a single honest client $Q_i$ in a single execution. More precisely, we prove that there exists a PPT simulator ihopSim such that, for any set of databases $\{D_j\}_{j \in [n]}$, any sequence of compatible programs $P_1, \dots, P_n$ with running times $t_1, \dots, t_n$, and any input $x$, the outputs of the following two experiments are computationally indistinguishable:
+
+**Real experiment**
+
+• $(\mathrm{pk}_j, \mathrm{sk}_j) \leftarrow \mathrm{KeyGen}(1^\lambda)$ for every $j \in [n]$.
+
+• $\tilde{D}_j = (\mathrm{digest}_j, \hat{\mathrm{D}}_j) \leftarrow \mathrm{EncData}(\mathrm{crs}, D_j)$ for every $j \in [n]$.
+
+• $(\mathrm{ct}_0, \mathrm{x\_secret}) \leftarrow \mathrm{InpEnc}(x)$.
+
+• $\mathrm{ct}_j \leftarrow \mathrm{Eval}(j, \{\mathrm{pk}_k\}_{k=j+1}^n, \mathrm{ct}_{j-1}, \mathrm{sk}_j, (P_j, t_j), \mathrm{digest}_j)$ for every $j \in [n]$.
+
+• Output $\mathrm{ct}_0, \{\tilde{D}_j, \mathrm{ct}_j\}_{j \in [n]}$.
+
+**Simulated experiment**
+
+• $(\mathrm{pk}_i, \mathrm{sk}_i) \leftarrow \mathrm{ihopSim}(1^\lambda, i)$.
+
+• $(\mathrm{pk}_j, \mathrm{sk}_j) \leftarrow \mathrm{KeyGen}(1^\lambda; r_j)$ for every $j \in [n] \setminus \{i\}$. Here, $r_j$ are uniform random coins.
+
+* $(ct_0, \{\tilde{D}_j, ct_j\}_{j \in [n]}) \leftarrow ihopSim(crs, x, \{pk_j, sk_j, t_j\}_{j \in [n]}, \{D_j, P_j, r_j\}_{j \in [n] \setminus \{i\}}, 1^{M_i}, y_i)$, where $y_i = P_i^{D_i} (\cdots (P_1^{D_1}(x)) \cdots)$.
+
+* Output $ct_0, \{\tilde{D}_j, ct_j\}_{j \in [n]}$.
+
+We let OSim be the ORAM simulator, and USim be the simulator for the UMA-secure multi-hop RAM scheme. We describe the two phases of ihopSim. In the first phase, ihopSim generates the keys of honest client $Q_i$ as $(pk_i, sk_i) \leftarrow \text{KeyGen}(1^\lambda)$. In the second phase, ihopSim proceeds as follows.
+
+1. Compute $(D_i^*, \text{MemAccess}_i) \leftarrow \text{OSim}(1^\lambda, 1^{M_i}, 1^{t_i}, y_i)$.
+
+2. Compute $(D_j^*, s_j^*)$ from $\widehat{\text{EncData}}$ and $P_j^*$ from $\widehat{\text{Eval}}$ for every $j \in [n] \setminus \{i\}$.
+
+3. Compute $(ct_0, \{\tilde{D}_j, ct_j\}_{j \in [n]}) \leftarrow \text{USim}(crs, x, \{pk_j, sk_j, t_j^*\}_{j \in [n]}, \{D_j^*, P_j^*[s_j^*], r_j\}_{j \in [n] \setminus \{i\}}, D_i^*, \text{MemAccess}_i, y_i)$.
+
+4. Output $(ct_0, \{\tilde{D}_j, ct_j\}_{j \in [n]})$.
+
+We now prove the output of the simulator is computationally indistinguishable from the real distribution.
+
+* $H_0$: Output of the real experiment.
+
+* $H_1$: Compute $(D_j^*, s_j^*)$ from $\widehat{\text{EncData}}$ and $P_j^*$ from $\widehat{\text{Eval}}$ for every $j \in [n] \setminus \{i\}$. Use the honestly generated $(D_i^*, s_i^*)$ from $\widehat{\text{EncData}}$ and $P_i^*$ from $\widehat{\text{Eval}}$ to execute the program $P_i^*[s_i^*]^{D_i^*} (\cdots (P_1^*[s_1^*]^{D_1^*}(x)) \cdots)$ and obtain $y_i$ and a sequence of memory accesses $\text{MemAccess}_i$. Run $(ct_0, \{\tilde{D}_j, ct_j\}_{j \in [n]}) \leftarrow \text{USim}(crs, x, \{pk_j, sk_j, t_j^*\}_{j \in [n]}, \{D_j^*, P_j^*[s_j^*], r_j\}_{j \in [n] \setminus \{i\}}, D_i^*, \text{MemAccess}_i, y_i)$ and output the result. Since $(D_i^*, \text{MemAccess}_i)$ is the same as in the real execution, the indistinguishability of this hybrid and $H_0$ follows from the UMA-security of the underlying multi-hop RAM scheme.
+
+* $H_2$: Output of the simulated experiment. The only difference between $H_1$ and $H_2$ is how $D_i^*$ and $\text{MemAccess}_i$ are generated: honestly in $H_1$ and by OSim in $H_2$. $H_1 \stackrel{c}{\approx} H_2$ then follows from the security of ORAM.
+
+**Security for multiple clients and multiple executions.** As in the proof of UMA security, the above proof naturally extends to multiple clients and many executions. For example, in the case of two clients $Q_{i_1}$ and $Q_{i_2}$, ihopSim first computes $(ct_0, \{\tilde{D}_j\}_{j \in [n]}, \{ct_j\}_{j \in [i_1-1]})$ honestly, then simulates $ct_{i_1}$ as above, as if $Q_{i_1}$ were the only honest client. It then computes $\{ct_j\}_{j \in [i_1+1, i_2-1]}$ from $ct_{i_1}$ by $\widehat{\text{Eval}}$, and simulates $ct_{i_2}$ as above, as if $Q_{i_2}$ were the only honest client. Notice that when simulating $ct_{i_2}$, as in the UMA-secure scenario, ihopSim cannot generate $(D_{i_1}^*, s_{i_1}^*, P_{i_1}^*)$ honestly. Instead it uses the simulated $(D_{i_1}^*, \text{MemAccess}_{i_1})$ generated by OSim, which suffices for the simulation. Finally, it computes $\{ct_j\}_{j \in [i_2+1, n]}$ from $ct_{i_2}$ by $\widehat{\text{Eval}}$. To show this is indistinguishable from the real execution, we consider the following hybrids:
+
+• $H_0$: Output in the real experiment.
+
+• $H_1$: First compute $(ct_0, \{\tilde{D}_j\}_{j \in [n]}, \{ct_j\}_{j \in [i_2-1]})$ honestly, and then compute $ct_{i_2}$ same as above as if there were only one honest client $Q_{i_2}$. Finally it computes $\{ct_j\}_{j \in [i_2+1, n]}$ from $ct_{i_2}$ honestly by $\widehat{Eval}$.
+
+• $H_2$: Output in the simulated experiment.
+
+The above hybrids are indistinguishable because an honestly generated $ct_{i_1}$ or $ct_{i_2}$ is indistinguishable from a simulated one, as we have shown in the single-client case.
+
+To simulate multiple executions, ihopSim should first use OSim to simulate ($D_i^*$, MemAccess$_i$) for every honest client $Q_i$ in all executions, and then repeat the above procedure for every execution. In the hybrids, we start from the real execution and first replace the honestly generated ct's by simulated ones while using honestly generated ($D_i^*$, MemAccess$_i$), and this step follows from the UMA-security of the underlying multi-hop RAM scheme. Afterwards we replace the honestly generated ($D_i^*$, MemAccess$_i$) by the output of OSim, and this step follows from the security of ORAM supporting multiple executions. □
+
+## Acknowledgement
+
+We thank the anonymous reviewers of CRYPTO 2017 for their helpful suggestions in improving this paper. We also thank Yuval Ishai for useful discussions.
+
+## References
+
+[ADT11] Giuseppe Ateniese, Emiliano De Cristofaro, and Gene Tsudik. (If) size matters: Size-hiding private set intersection. In Dario Catalano, Nelly Fazio, Rosario Gennaro, and Antonio Nicolosi, editors, *PKC 2011: 14th International Conference on Theory and Practice of Public Key Cryptography*, volume 6571 of *Lecture Notes in Computer Science*, pages 156–173, Taormina, Italy, March 6–9, 2011. Springer, Heidelberg, Germany.
+
+[AIKW13] Benny Applebaum, Yuval Ishai, Eyal Kushilevitz, and Brent Waters. Encoding functions with constant online rate or how to compress garbled circuits keys. In Ran Canetti and Juan A. Garay, editors, *Advances in Cryptology – CRYPTO 2013, Part II*, volume 8043 of *Lecture Notes in Computer Science*, pages 166–184, Santa Barbara, CA, USA, August 18–22, 2013. Springer, Heidelberg, Germany.
+
+[AIR01] William Aiello, Yuval Ishai, and Omer Reingold. Priced oblivious transfer: How to sell digital goods. In Birgit Pfitzmann, editor, *Advances in Cryptology – EUROCRYPT 2001*, volume 2045 of *Lecture Notes in Computer Science*, pages 119–135, Innsbruck, Austria, May 6–10, 2001. Springer, Heidelberg, Germany.
+
+[ALSZ13] Gilad Asharov, Yehuda Lindell, Thomas Schneider, and Michael Zohner. More efficient oblivious transfer and extensions for faster secure computation. In Ahmad-Reza Sadeghi, Virgil D. Gligor, and Moti Yung, editors, *ACM CCS 13: 20th Conference on Computer and Communications Security*, pages 535–548, Berlin, Germany, November 4–8, 2013. ACM Press.
+
+[BCCT12] Nir Bitansky, Ran Canetti, Alessandro Chiesa, and Eran Tromer. From extractable collision resistance to succinct non-interactive arguments of knowledge, and back again. In Shafi Goldwasser, editor, *ITCS 2012: 3rd Innovations in Theoretical Computer Science*, pages 326–349, Cambridge, MA, USA, January 8–10, 2012. Association for Computing Machinery.
+
+[Bea96] Donald Beaver. Correlated pseudorandomness and the complexity of private computations. In *28th Annual ACM Symposium on Theory of Computing*, pages 479–488, Philadelphia, PA, USA, May 22–24, 1996. ACM Press.
+
+[BGL$^{+}$15] Nir Bitansky, Sanjam Garg, Huijia Lin, Rafael Pass, and Sidharth Telang. Succinct randomized encodings and their applications. In Rocco A. Servedio and Ronitt Rubinfeld, editors, *47th Annual ACM Symposium on Theory of Computing*, pages 439–448, Portland, OR, USA, June 14–17, 2015. ACM Press.
+
+[BHHO08] Dan Boneh, Shai Halevi, Michael Hamburg, and Rafail Ostrovsky. Circular-secure encryption from decision Diffie-Hellman. In David Wagner, editor, *Advances in Cryptology – CRYPTO 2008*, volume 5157 of *Lecture Notes in Computer Science*, pages 108–125, Santa Barbara, CA, USA, August 17–21, 2008. Springer, Heidelberg, Germany.
+
+[BHR12] Mihir Bellare, Viet Tung Hoang, and Phillip Rogaway. Foundations of garbled circuits. In Ting Yu, George Danezis, and Virgil D. Gligor, editors, *ACM CCS 12: 19th Conference on Computer and Communications Security*, pages 784–796, Raleigh, NC, USA, October 16–18, 2012. ACM Press.
+
+[BPMW16] Florian Bourse, Rafaël Del Pino, Michele Minelli, and Hoeteck Wee. FHE circuit privacy almost for free. In Matthew Robshaw and Jonathan Katz, editors, *Advances in Cryptology – CRYPTO 2016, Part II*, volume 9815 of *Lecture Notes in Computer Science*, pages 62–89, Santa Barbara, CA, USA, August 14–18, 2016. Springer, Heidelberg, Germany.
+
+[BSCG$^{+}$13] Eli Ben-Sasson, Alessandro Chiesa, Daniel Genkin, Eran Tromer, and Madars Virza. SNARKs for C: Verifying program executions succinctly and in zero knowledge. In Ran Canetti and Juan A. Garay, editors, *Advances in Cryptology – CRYPTO 2013, Part II*, volume 8043 of *Lecture Notes in Computer Science*, pages 90–108, Santa Barbara, CA, USA, August 18–22, 2013. Springer, Heidelberg, Germany.
+
+[BV11a] Zvika Brakerski and Vinod Vaikuntanathan. Efficient fully homomorphic encryption from (standard) LWE. In Rafail Ostrovsky, editor, *52nd Annual Symposium on Foundations of Computer Science*, pages 97–106, Palm Springs, CA, USA, October 22–25, 2011. IEEE Computer Society Press.
+
+[BV11b] Zvika Brakerski and Vinod Vaikuntanathan. Fully homomorphic encryption from ring-LWE and security for key dependent messages. In Phillip Rogaway, editor, *Advances in Cryptology – CRYPTO 2011*, volume 6841 of *Lecture Notes in Computer Science*, pages 505–524, Santa Barbara, CA, USA, August 14–18, 2011. Springer, Heidelberg, Germany.
+
+[CHJV15] Ran Canetti, Justin Holmgren, Abhishek Jain, and Vinod Vaikuntanathan. Succinct garbling and indistinguishability obfuscation for RAM programs. In Rocco A. Servedio and Ronitt Rubinfeld, editors, *47th Annual ACM Symposium on Theory of Computing*, pages 429–437, Portland, OR, USA, June 14–17, 2015. ACM Press.
+
+[CHK04] Ran Canetti, Shai Halevi, and Jonathan Katz. Chosen-ciphertext security from identity-based encryption. In Christian Cachin and Jan Camenisch, editors, *Advances in Cryptology – EUROCRYPT 2004*, volume 3027 of *Lecture Notes in Computer Science*, pages 207–222, Interlaken, Switzerland, May 2–6, 2004. Springer, Heidelberg, Germany.
+
+[COV15] Melissa Chase, Rafail Ostrovsky, and Ivan Visconti. Executable proofs, input-size hiding secure computation and a new ideal world. In Elisabeth Oswald and Marc Fischlin, editors, *Advances in Cryptology – EUROCRYPT 2015, Part II*, volume 9057 of *Lecture Notes in Computer Science*, pages 532–560, Sofia, Bulgaria, April 26–30, 2015. Springer, Heidelberg, Germany.
+
+[CS98] Ronald Cramer and Victor Shoup. A practical public key cryptosystem provably secure against adaptive chosen ciphertext attack. In Hugo Krawczyk, editor, *Advances in Cryptology – CRYPTO’98*, volume 1462 of *Lecture Notes in Computer Science*, pages 13–25, Santa Barbara, CA, USA, August 23–27, 1998. Springer, Heidelberg, Germany.
+
+[CS02] Ronald Cramer and Victor Shoup. Universal hash proofs and a paradigm for adaptive chosen ciphertext secure public-key encryption. In Lars R. Knudsen, editor, *Advances in Cryptology – EUROCRYPT 2002*, volume 2332 of *Lecture Notes in Computer Science*, pages 45–64, Amsterdam, The Netherlands, April 28 – May 2, 2002. Springer, Heidelberg, Germany.
+
+[CV12] Melissa Chase and Ivan Visconti. Secure database commitments and universal arguments of quasi knowledge. In Reihaneh Safavi-Naini and Ran Canetti, editors, *Advances in Cryptology – CRYPTO 2012*, volume 7417 of *Lecture Notes in Computer Science*, pages 236–254, Santa Barbara, CA, USA, August 19–23, 2012. Springer, Heidelberg, Germany.
+
+[DG17] Nico Döttling and Sanjam Garg. Identity-based encryption from the Diffie-Hellman assumption. CRYPTO 2017 (to appear), 2017.
+
+[DS16] Léo Ducas and Damien Stehlé. Sanitization of FHE ciphertexts. In Marc Fischlin and Jean-Sébastien Coron, editors, *Advances in Cryptology – EUROCRYPT 2016, Part I*, volume 9665 of *Lecture Notes in Computer Science*, pages 294–310, Vienna, Austria, May 8–12, 2016. Springer, Heidelberg, Germany.
+
+[FLS90] Uriel Feige, Dror Lapidot, and Adi Shamir. Multiple non-interactive zero knowledge proofs based on a single random string (extended abstract). In *31st Annual Symposium on Foundations of Computer Science*, pages 308–317, St. Louis, Missouri, October 22–24, 1990. IEEE Computer Society Press.
+
+[Gen09] Craig Gentry. Fully homomorphic encryption using ideal lattices. In Michael Mitzenmacher, editor, *41st Annual ACM Symposium on Theory of Computing*, pages 169–178, Bethesda, MD, USA, May 31 – June 2, 2009. ACM Press.
+
+[GGH13a] Sanjam Garg, Craig Gentry, and Shai Halevi. Candidate multilinear maps from ideal lattices. In Thomas Johansson and Phong Q. Nguyen, editors, *Advances in Cryptology – EUROCRYPT 2013*, volume 7881 of *Lecture Notes in Computer Science*, pages 1–17, Athens, Greece, May 26–30, 2013. Springer, Heidelberg, Germany.
+
+[GGH$^{+}$13b] Sanjam Garg, Craig Gentry, Shai Halevi, Mariana Raykova, Amit Sahai, and Brent Waters. Candidate indistinguishability obfuscation and functional encryption for all circuits. In *54th Annual Symposium on Foundations of Computer Science*, pages 40–49, Berkeley, CA, USA, October 26–29, 2013. IEEE Computer Society Press.
+
+[GGMP16] Sanjam Garg, Divya Gupta, Peihan Miao, and Omkant Pandey. Secure multiparty RAM computation in constant rounds. In *Theory of Cryptography - 14th International Conference, TCC 2016-B, Beijing, China, October 31 - November 3, 2016, Proceedings*, Part I, pages 491–520, 2016.
+
+[GGSW13] Sanjam Garg, Craig Gentry, Amit Sahai, and Brent Waters. Witness encryption and its applications. In Dan Boneh, Tim Roughgarden, and Joan Feigenbaum, editors, *45th Annual ACM Symposium on Theory of Computing*, pages 467–476, Palo Alto, CA, USA, June 1–4, 2013. ACM Press.
+
+[GHL$^{+}$14] Craig Gentry, Shai Halevi, Steve Lu, Rafail Ostrovsky, Mariana Raykova, and Daniel Wichs. Garbled RAM revisited. In Phong Q. Nguyen and Elisabeth Oswald, editors, *Advances in Cryptology – EUROCRYPT 2014*, volume 8441 of *Lecture Notes in Computer Science*, pages 405–422, Copenhagen, Denmark, May 11–15, 2014. Springer, Heidelberg, Germany.
+
+[GHRW14] Craig Gentry, Shai Halevi, Mariana Raykova, and Daniel Wichs. Outsourcing private RAM computation. In *55th Annual Symposium on Foundations of Computer Science*, pages 404–413, Philadelphia, PA, USA, October 18–21, 2014. IEEE Computer Society Press.
+
+[GHV10] Craig Gentry, Shai Halevi, and Vinod Vaikuntanathan. i-Hop homomorphic encryption and rerandomizable Yao circuits. In Tal Rabin, editor, *Advances in Cryptology – CRYPTO 2010*, volume 6223 of *Lecture Notes in Computer Science*, pages 155–172, Santa Barbara, CA, USA, August 15–19, 2010. Springer, Heidelberg, Germany.
+
+[GKK$^{+}$12] S. Dov Gordon, Jonathan Katz, Vladimir Kolesnikov, Fernando Krell, Tal Malkin, Mariana Raykova, and Yevgeniy Vahlis. Secure two-party computation in sublinear (amortized) time. In Ting Yu, George Danezis, and Virgil D. Gligor, editors, *ACM CCS 12: 19th Conference on Computer and Communications Security*, pages 513–524, Raleigh, NC, USA, October 16–18, 2012. ACM Press.
+---PAGE_BREAK---
+
+[GKP$^{+}$13] Shafi Goldwasser, Yael Tauman Kalai, Raluca A. Popa, Vinod Vaikuntanathan, and Nickolai Zeldovich. How to run Turing machines on encrypted data. In Ran Canetti and Juan A. Garay, editors, *Advances in Cryptology – CRYPTO 2013, Part II*, volume 8043 of *Lecture Notes in Computer Science*, pages 536–553, Santa Barbara, CA, USA, August 18–22, 2013. Springer, Heidelberg, Germany.
+
+[GLO15] Sanjam Garg, Steve Lu, and Rafail Ostrovsky. Black-box garbled RAM. In Venkatesan Guruswami, editor, *56th Annual Symposium on Foundations of Computer Science*, pages 210–229, Berkeley, CA, USA, October 17–20, 2015. IEEE Computer Society Press.
+
+[GLOS15] Sanjam Garg, Steve Lu, Rafail Ostrovsky, and Alessandra Scafuro. Garbled RAM from one-way functions. In Rocco A. Servedio and Ronitt Rubinfeld, editors, *47th Annual ACM Symposium on Theory of Computing*, pages 449–458, Portland, OR, USA, June 14–17, 2015. ACM Press.
+
+[GMW87] Oded Goldreich, Silvio Micali, and Avi Wigderson. How to play any mental game or A completeness theorem for protocols with honest majority. In Alfred Aho, editor, *19th Annual ACM Symposium on Theory of Computing*, pages 218–229, New York City, NY, USA, May 25–27, 1987. ACM Press.
+
+[GO96] Oded Goldreich and Rafail Ostrovsky. Software protection and simulation on oblivious RAMs. *J. ACM*, 43(3):431–473, 1996.
+
+[Gol87] Oded Goldreich. Towards a theory of software protection and simulation by oblivious RAMs. In Alfred Aho, editor, *19th Annual ACM Symposium on Theory of Computing*, pages 182–194, New York City, NY, USA, May 25–27, 1987. ACM Press.
+
+[GOS06] Jens Groth, Rafail Ostrovsky, and Amit Sahai. Non-interactive zaps and new techniques for NIZK. In Cynthia Dwork, editor, *Advances in Cryptology – CRYPTO 2006*, volume 4117 of *Lecture Notes in Computer Science*, pages 97–111, Santa Barbara, CA, USA, August 20–24, 2006. Springer, Heidelberg, Germany.
+
+[GSW13] Craig Gentry, Amit Sahai, and Brent Waters. Homomorphic encryption from learning with errors: Conceptually-simpler, asymptotically-faster, attribute-based. In Ran Canetti and Juan A. Garay, editors, *Advances in Cryptology – CRYPTO 2013, Part I*, volume 8042 of *Lecture Notes in Computer Science*, pages 75–92, Santa Barbara, CA, USA, August 18–22, 2013. Springer, Heidelberg, Germany.
+
+[HK12] Shai Halevi and Yael Tauman Kalai. Smooth projective hashing and two-message oblivious transfer. *Journal of Cryptology*, 25(1):158–193, January 2012.
+
+[HW15] Pavel Hubacek and Daniel Wichs. On the communication complexity of secure function evaluation with long output. In Tim Roughgarden, editor, *ITCS 2015: 6th Innovations in Theoretical Computer Science*, pages 163–172, Rehovot, Israel, January 11–13, 2015. Association for Computing Machinery.
+
+[IKNP03] Yuval Ishai, Joe Kilian, Kobbi Nissim, and Erez Petrank. Extending oblivious transfers efficiently. In Dan Boneh, editor, *Advances in Cryptology – CRYPTO 2003*, volume
+---PAGE_BREAK---
+
+2729 of *Lecture Notes in Computer Science*, pages 145–161, Santa Barbara, CA, USA, August 17–21, 2003. Springer, Heidelberg, Germany.
+
+[IKO$^{+}$11] Yuval Ishai, Eyal Kushilevitz, Rafail Ostrovsky, Manoj Prabhakaran, and Amit Sahai. Efficient non-interactive secure computation. In Kenneth G. Paterson, editor, *Advances in Cryptology – EUROCRYPT 2011*, volume 6632 of *Lecture Notes in Computer Science*, pages 406–425, Tallinn, Estonia, May 15–19, 2011. Springer, Heidelberg, Germany.
+
+[IP07] Yuval Ishai and Anat Paskin. Evaluating branching programs on encrypted data. In Salil P. Vadhan, editor, *TCC 2007: 4th Theory of Cryptography Conference*, volume 4392 of *Lecture Notes in Computer Science*, pages 575–594, Amsterdam, The Netherlands, February 21–24, 2007. Springer, Heidelberg, Germany.
+
+[IPS08] Yuval Ishai, Manoj Prabhakaran, and Amit Sahai. Founding cryptography on oblivious transfer – efficiently. In David Wagner, editor, *Advances in Cryptology – CRYPTO 2008*, volume 5157 of *Lecture Notes in Computer Science*, pages 572–591, Santa Barbara, CA, USA, August 17–21, 2008. Springer, Heidelberg, Germany.
+
+[Kil88] Joe Kilian. Founding cryptography on oblivious transfer. In *20th Annual ACM Symposium on Theory of Computing*, pages 20–31, Chicago, IL, USA, May 2–4, 1988. ACM Press.
+
+[KK13] Vladimir Kolesnikov and Ranjit Kumaresan. Improved OT extension for transferring short secrets. In Ran Canetti and Juan A. Garay, editors, *Advances in Cryptology – CRYPTO 2013, Part II*, volume 8043 of *Lecture Notes in Computer Science*, pages 54–70, Santa Barbara, CA, USA, August 18–22, 2013. Springer, Heidelberg, Germany.
+
+[KLW15] Venkata Koppula, Allison Bishop Lewko, and Brent Waters. Indistinguishability obfuscation for Turing machines with unbounded memory. In Rocco A. Servedio and Ronitt Rubinfeld, editors, *47th Annual ACM Symposium on Theory of Computing*, pages 419–428, Portland, OR, USA, June 14–17, 2015. ACM Press.
+
+[LNO13] Yehuda Lindell, Kobbi Nissim, and Claudio Orlandi. Hiding the input-size in secure two-party computation. In Kazue Sako and Palash Sarkar, editors, *Advances in Cryptology – ASIACRYPT 2013, Part II*, volume 8270 of *Lecture Notes in Computer Science*, pages 421–440, Bangalore, India, December 1–5, 2013. Springer, Heidelberg, Germany.
+
+[LO13] Steve Lu and Rafail Ostrovsky. How to garble RAM programs. In Thomas Johansson and Phong Q. Nguyen, editors, *Advances in Cryptology – EUROCRYPT 2013*, volume 7881 of *Lecture Notes in Computer Science*, pages 719–734, Athens, Greece, May 26–30, 2013. Springer, Heidelberg, Germany.
+
+[LP09] Yehuda Lindell and Benny Pinkas. A proof of security of Yao's protocol for two-party computation. *Journal of Cryptology*, 22(2):161–188, April 2009.
+
+[MRK03] Silvio Micali, Michael O. Rabin, and Joe Kilian. Zero-knowledge sets. In *44th Annual Symposium on Foundations of Computer Science*, pages 80–91, Cambridge, MA, USA, October 11–14, 2003. IEEE Computer Society Press.
+---PAGE_BREAK---
+
+[NP01] Moni Naor and Benny Pinkas. Efficient oblivious transfer protocols. In S. Rao Kosaraju, editor, *12th Annual ACM-SIAM Symposium on Discrete Algorithms*, pages 448–457, Washington, DC, USA, January 7–9, 2001. ACM-SIAM.
+
+[OPP14] Rafail Ostrovsky, Anat Paskin-Cherniavsky, and Beni Paskin-Cherniavsky. Maliciously circuit-private FHE. In Juan A. Garay and Rosario Gennaro, editors, *Advances in Cryptology – CRYPTO 2014, Part I*, volume 8616 of *Lecture Notes in Computer Science*, pages 536–553, Santa Barbara, CA, USA, August 17–21, 2014. Springer, Heidelberg, Germany.
+
+[OPWW15] Tatsuaki Okamoto, Krzysztof Pietrzak, Brent Waters, and Daniel Wichs. New realizations of somewhere statistically binding hashing and positional accumulators. In Tetsu Iwata and Jung Hee Cheon, editors, *Advances in Cryptology – ASIACRYPT 2015, Part I*, volume 9452 of *Lecture Notes in Computer Science*, pages 121–145, Auckland, New Zealand, November 30 – December 3, 2015. Springer, Heidelberg, Germany.
+
+[OS97] Rafail Ostrovsky and Victor Shoup. Private information storage (extended abstract). In *29th Annual ACM Symposium on Theory of Computing*, pages 294–303, El Paso, TX, USA, May 4–6, 1997. ACM Press.
+
+[Ost90] Rafail Ostrovsky. Efficient computation on oblivious RAMs. In *22nd Annual ACM Symposium on Theory of Computing*, pages 514–523, Baltimore, MD, USA, May 14–16, 1990. ACM Press.
+
+[Ost92] Rafail Ostrovsky. *Software Protection and Simulation On Oblivious RAMs*. PhD thesis, Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1992.
+
+[Rab81] Michael O. Rabin. How to exchange secrets with oblivious transfer. Technical Report TR-81, Aiken Computation Laboratory, Harvard University, 1981.
+
+[Vil12] Jorge Luis Villar. Optimal reductions of some decisional problems to the rank problem. In Xiaoyun Wang and Kazue Sako, editors, *Advances in Cryptology – ASIACRYPT 2012*, volume 7658 of *Lecture Notes in Computer Science*, pages 80–97, Beijing, China, December 2–6, 2012. Springer, Heidelberg, Germany.
+
+[WHC$^{+}$14] Xiao Shaun Wang, Yan Huang, T.-H. Hubert Chan, Abhi Shelat, and Elaine Shi. SCORAM: Oblivious RAM for secure computation. In Gail-Joon Ahn, Moti Yung, and Ninghui Li, editors, *ACM CCS 14: 21st Conference on Computer and Communications Security*, pages 191–202, Scottsdale, AZ, USA, November 3–7, 2014. ACM Press.
+
+[Yao82] Andrew Chi-Chih Yao. Protocols for secure computations (extended abstract). In *23rd Annual Symposium on Foundations of Computer Science*, pages 160–164, Chicago, Illinois, November 3–5, 1982. IEEE Computer Society Press.
\ No newline at end of file
diff --git a/samples/texts_merged/5730729.md b/samples/texts_merged/5730729.md
new file mode 100644
index 0000000000000000000000000000000000000000..88d8eaf416b5dfec9c141e193a64de501f575cca
--- /dev/null
+++ b/samples/texts_merged/5730729.md
@@ -0,0 +1,1026 @@
+
+---PAGE_BREAK---
+
+Time-Dependent Wave-Structure Interaction Revisited:
+Thermo-piezoelectric Scatterers
+
+GEORGE C. HSIAO* and TONATIUH SÁNCHEZ-VIZUET †
+
+*Dedicated to Professor Wolfgang L. Wendland*
+*on the occasion of his 85th Birthday*
+
+**Abstract**
+
+In this paper, we are concerned with a time-dependent transmission problem for a thermo-piezoelectric elastic body immersed in a compressible fluid. It is shown that the problem can be treated by the boundary-field equation method, provided an appropriate scaling factor is employed. As usual, based on estimates for solutions in the Laplace-transformed domain, we may obtain properties of corresponding solutions in the time-domain without having to perform the inversion of the Laplace-domain solutions.
+
+**Key words:** Wave-structure interaction; Coupling procedure; Kirchhoff representation formula; Retarded potential; Laplace transform; Boundary integral equation; Variational formulation; Sobolev space; Transient waves; Thermoelasticity; Piezoelectricity.
+**Mathematics Subject Classifications (1991):** 35J20, 35L05, 45P05, 65N30, 65N38.
+
+# 1 Introduction
+
+The mathematical description of the interaction between an acoustic wave and an elastic body is of central importance in applied mathematics and engineering, as attested for instance by its usage for the detection and identification of submerged objects. The problem is mathematically formulated as a transmission problem between elastic and acoustic fields communicating through an interface and is referred to in the literature either as a “fluid-structure interaction problem” or a “wave-structure interaction problem”. The former terminology (fluid-structure interaction) is also used to describe a similar problem that involves the coupling between fluid equations (either Stokes or Navier-Stokes) and the equations of elasticity. Here we will be interested in the coupling between the acoustic and elastic wave equations, and we will use the term “wave-structure interaction” exclusively to avoid any confusion.
+
+*Department of Mathematical Sciences, University of Delaware, Newark, DE 19716-2553, USA Email: ghsiao@udel.edu
+
+†Department of Mathematics, The University of Arizona, Tucson, AZ 85721-0089 USA Email: tonatiuh@math.arizona.edu.
+---PAGE_BREAK---
+
+In the early days of the field, most mathematical formulations of these kinds of problems were based on time-harmonic formulations. Motivated by the paper of Hamdi and Jean [16], the 1988 paper of Hsiao, Kleinman and Schuetz [19] gave the first mathematical justification of a variational formulation for wave-structure interaction problems. This paved the way for many further efforts that expanded the understanding of time-harmonic scattering (see, e.g. [5, 2, 34, 42, 18]). Over the years, time-harmonic wave-structure interaction problems have been studied in various areas such as inverse problems [12, 13], the interaction of fluids and thin structures [21], and the interaction of electromagnetic fields and elastic bodies [8, 15], to name just a few.
+
+One of the main reasons behind the use of the boundary-field equation method for treating time-harmonic wave-structure problems is to reduce the transmission problem, posed originally in an unbounded domain, to one set in the bounded domain $\Omega$ determined by the elastic scatterer (see Figure 1). However, the conversion from an unbounded to a bounded domain comes at the price of turning the problem into a non-local one, which brings along some mathematical disadvantages. Since the sesquilinear form arising from the nonlocal boundary problem satisfies only a Gårding inequality, the uniqueness of the solution becomes a requirement for applying the standard Fredholm alternative in the existence theory. However, the straightforward boundary-field method cannot circumvent this drawback, because the problem is not uniquely solvable when the frequency of the incident wave coincides with what is known as a "Jones frequency". At such frequencies, the corresponding homogeneous problem may have traction-free solutions (a recent discussion on this can be found in [10]). Moreover, uniqueness of the solutions to the boundary integral equations may not be guaranteed when the exterior wavenumber coincides with an eigenvalue of the corresponding interior Dirichlet problem (see [20]). The issue of non-uniqueness has motivated a great deal of research, and attempts to overcome these difficulties have been made with the help of methods such as Schenck's Combined Helmholtz Integral Equation Formulation [41] (commonly known as the CHIEF method) and the celebrated formulation by Burton and Miller [7].
+
+Figure 1: Schematic of the wave scattering problem. The interface between the solid and the fluid is denoted by $\Gamma$, while the outward-pointing normal vector (defined for almost every point in the boundary) is denoted by $\mathbf{n}$.
+
+In the present paper, inspired by the work of Estorff and Antes [11], we will apply the boundary-field equation method not to a time-harmonic problem, but rather one in the
+---PAGE_BREAK---
+
+transient regime. This will require the treatment of the wave equation, as opposed to
+the Helmholtz equation that is used in the frequency domain. The problem of interest
+is that of the interaction between a thermo-piezoelectric elastic body immersed in a
+compressible fluid. The method will not be directly applied in the time-domain, but
+rather in the Laplace transformed domain. The reasons for this will be made clear in
+due time. The equations will then be reduced to those of a nonlocal boundary problem
+in the transformed domain, where all the analysis will be performed. The technique
+that will be applied will allow us to understand the behavior of the transient problem
+(and even simulate it computationally if we were so inclined) *without ever having to*
+*invert the Laplace transform.*
+
+The outline of the solution/analysis procedure for the time-dependent wave-structure
+interaction is as follows:
+
+1. Formulate a time-dependent transmission problem.
+
+2. Apply the Laplace transform to the time-dependent transmission problem.
+
+3. Reduce the transformed transmission problem to a nonlocal boundary problem in the bounded domain $\Omega$ with the help of a Boundary Integral Equation (BIE). This leads to the boundary-field equation formulation of the problem in the transformed domain.
+
+4. Obtain estimates of variational solutions of the nonlocal boundary problem in terms of the Laplace transformed variable $s$.
+
+5. Deduce estimates for the solutions in the time domain from those of the corresponding solutions in the Laplace domain using Lubich's and Sayas's approach for treating BIEs of the convolution type [32, 33, 31]).
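Step 2 is what makes the whole strategy viable: the Laplace transform turns time convolutions, which lie at the heart of the retarded potentials introduced below, into products of transforms. The following numerical sanity check (our own illustrative sketch, not code from any of the cited works) verifies $\mathcal{L}\{f * f\}(s) = \left(\mathcal{L}f(s)\right)^2$ for $f(t) = e^{-t}$, for which $(f * f)(t) = t e^{-t}$ in closed form:

```python
import numpy as np

def trapezoid(y, t):
    # uniform-grid trapezoid rule (avoids version-specific numpy helpers)
    h = t[1] - t[0]
    return h * (np.sum(y) - 0.5 * (y[0] + y[-1]))

def laplace(f, s, T=60.0, n=200_000):
    # truncated numerical Laplace transform: \int_0^T f(t) e^{-s t} dt,
    # with T chosen large enough that the integrand has decayed
    t = np.linspace(0.0, T, n)
    return trapezoid(f(t) * np.exp(-s * t), t)

s = 2.0
f = lambda t: np.exp(-t)
conv = lambda t: t * np.exp(-t)     # closed form of (f * f)(t)

lhs = laplace(conv, s)              # L{f * f}(s)
rhs = laplace(f, s) ** 2            # (L f)(s)^2 -- convolution theorem
```

For $s = 2$ both sides equal $1/9$. The same identity is what converts convolutional boundary integral equations into a family of $s$-dependent equations in Step 3.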
+
+The process described above has been successfully applied to a number of special cases [17, 23, 24, 38]. However, in all the cases under consideration the formulations in the fluid domain were given in terms of velocity potentials, not in terms of standard fluid pressures. As will be seen, to formulate the problem in terms of the fluid pressure, an appropriate scaling factor will have to be introduced.
+
+The analysis will proceed more or less following the steps outlined above. The time-domain formulation of the problem is introduced in Section 2, and the corresponding nonlocal boundary problem in the Laplace-transformed domain is then described in Section 3. Section 4 contains the mathematical ingredients concerning crucial estimates for the solution of the nonlocal boundary problem in the transformed domain. The main results in the time domain are presented in Section 5, and the paper ends with some concluding remarks in Section 6.
+
+## 2 Formulations of the problem
+
+We will denote by $\Omega$ an open and bounded subset of $\mathbb{R}^3$ that will be considered to be occupied by an elastic solid. We will further assume that the boundary of the solid is Lipschitz-continuous and will denote it by $\Gamma$. The exterior of this solid, which will be denoted by $\Omega^c = \mathbb{R}^3 \setminus \bar{\Omega}$, will be filled by an inviscid and compressible fluid. A schematic of the geometric setting is depicted in Figure 1.
+
+We will consider that, when at rest, the velocity, pressure and density in the fluid are described by the constant fields $v_0 = 0, p_0$ and $\rho_f$, and will be interested in the time evolution of small perturbations from this static configuration as described by the
+---PAGE_BREAK---
+
+fields **v**, *p* and $\rho$. Their evolution is governed by the linearized Euler equation in the fluid domain $\Omega^c$
+
+$$ \rho_f \frac{\partial \mathbf{v}}{\partial t} + \nabla p = \mathbf{0}, \qquad (2.1) $$
+
+the continuity equation
+
+$$ \frac{\partial \rho}{\partial t} + \rho_f \nabla \cdot \mathbf{v} = 0, \qquad (2.2) $$
+
+for $\rho$, and **v**, and the state equation for *p* and $\rho$
+
+$$ p = c^2 \rho. \qquad (2.3) $$
+
+Above, the sound speed *c* is a function that varies depending on the properties of
+the fluid (see e.g., [1, 43]), and the operator $\frac{\partial}{\partial t}$ is the usual partial derivative with
+respect to the time variable, not to be confused with the material derivative. All these
+equations are posed in $\Omega^c \times [0, \infty)$. A simple manipulation shows that with the help
+of (2.2) and (2.3), we may replace equation (2.1) by a single wave equation for the
+pressure *p*
+
+$$ \frac{1}{c^2} \frac{\partial^2 p}{\partial t^2} - \Delta p = 0 \quad \text{in } \Omega^c \times [0, \infty). $$
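For completeness, the elimination of $\mathbf{v}$ and $\rho$ can be spelled out (treating the sound speed $c$ as constant): differentiating the continuity equation (2.2) in time, inserting the divergence of the linearized Euler equation (2.1), and using the state equation (2.3) gives

$$ \frac{\partial^2 \rho}{\partial t^2} = -\rho_f\, \nabla \cdot \frac{\partial \mathbf{v}}{\partial t} = \nabla \cdot \nabla p = \Delta p, \qquad \rho = \frac{p}{c^2} \quad\Longrightarrow\quad \frac{1}{c^2}\frac{\partial^2 p}{\partial t^2} - \Delta p = 0. $$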
+
+Now, inside the domain $\Omega$ occupied by the solid, the governing equations depend on the properties of the solid. The solid may be as simple as a purely elastic obstacle, or it may have more complicated physical behavior, as in the case of a thermoelastic solid or, as in the present case, a thermo-piezoelectric solid. The problem under consideration, for a thermo-piezoelectric body, consists of determining the stress and strain tensors, $\sigma(\mathbf{x}, t)$ and $\epsilon(\mathbf{x}, t)$, the elastic displacement $\mathbf{u}(\mathbf{x}, t)$, the temperature variation $\theta(\mathbf{x}, t)$ and the electric potential $\varphi(\mathbf{x}, t)$. The physics of the process can be described in terms of the reference density of the solid $\rho_e$, the absolute temperature in the solid $T$ and its stress-free reference temperature $T_0$, the electric displacement vector $\mathbf{D}(\mathbf{x}, t)$, and the entropy per unit volume $P(\mathbf{x}, t)$. The governing equations have been derived by Mindlin [35] and consist of three coupled partial differential equations, namely the dynamic elastic equations
+
+$$ \rho_e \frac{\partial^2 \mathbf{u}}{\partial t^2} - \nabla \cdot \boldsymbol{\sigma} = \mathbf{0}, \qquad (2.4) $$
+
+the generalized heat equation
+
+$$ T \frac{\partial P}{\partial t} - \Delta \theta = 0, \quad \theta := T - T_0, \qquad (2.5) $$
+
+and the equation of the quasi-stationary electric field (i.e., Gauss’s electric field law
+without electric charge density):
+
+$$ \nabla \cdot \mathbf{D} = 0. \qquad (2.6) $$
+
+These equations need to be supplied with adequate constitutive relations providing
+a description of the functional dependence between the unknown variables within the
+thermo-piezoelectric media. In the isotropic case, the constitutive relations may be
+simplified in the form (see [30]):
+---PAGE_BREAK---
+
+$$ \boldsymbol{\sigma} = \sigma(\mathbf{u}, \theta, \varphi) := \sigma_e(\mathbf{u}) - \zeta\theta\mathbf{I} - \mathbf{e}^\top\mathbf{E}, \quad (2.7) $$
+
+$$ P = P(\mathbf{u}, \theta, \varphi) := \zeta \nabla \cdot \mathbf{u} + \frac{c_\epsilon}{T_0}\theta + \mathbf{p} \cdot \mathbf{E}, $$
+
+$$ \mathbf{D} = \mathbf{D}(\mathbf{u}, \theta, \varphi) := \mathbf{e} \varepsilon(\mathbf{u}) + \theta \mathbf{p} + \varepsilon \mathbf{E}, \quad (2.8) $$
+
+where
+
+$$ \boldsymbol{\sigma}_e := \lambda (\nabla \cdot \mathbf{u}) \mathbf{I} + 2\mu \boldsymbol{\varepsilon}(\mathbf{u}), \quad \text{and} \quad \boldsymbol{\varepsilon}(\mathbf{u}) := \frac{1}{2} (\nabla \mathbf{u} + \nabla \mathbf{u}^{\top}) $$
+
+are the usual stress and strain tensors for isotropic elastic media, while $\mathbf{e} = ((e_{ijk}))$ is the piezoelectric tensor with constant elements such that $e_{kij} = e_{kji}$. This third-order tensor maps matrices into vectors, while its adjoint, which will be denoted by $\mathbf{e}^\top$, maps vectors into symmetric matrices. More precisely, for a real symmetric matrix $\mathbf{M} \in \mathbb{R}_{\text{sym}}^{d \times d}$ and for a vector $\mathbf{d} \in \mathbb{R}^d$, we define
+
+$$ (\mathbf{eM})_k := \sum_{ij} e_{kij} \mathbf{M}_{ij} \in \mathbb{R}^d \quad \text{and} \quad (\mathbf{e}^\top\mathbf{d})_{ij} := \sum_k e_{kij} \mathbf{d}_k \in \mathbb{R}_{\text{sym}}^{d \times d}. $$
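As a quick illustration of the adjoint relation just defined, the identity $(\mathbf{e}\mathbf{M}) \cdot \mathbf{d} = \mathbf{M} : (\mathbf{e}^\top\mathbf{d})$ can be verified numerically. The tensor entries below are random placeholders (only the symmetry $e_{kij} = e_{kji}$ is enforced), not values for any real material:

```python
import numpy as np

rng = np.random.default_rng(0)

# hypothetical piezoelectric tensor obeying the symmetry e_kij = e_kji
e = rng.standard_normal((3, 3, 3))
e = 0.5 * (e + e.transpose(0, 2, 1))

def apply_e(M):
    # (e M)_k = sum_ij e_kij M_ij : maps symmetric matrices to vectors
    return np.einsum('kij,ij->k', e, M)

def apply_e_adjoint(d):
    # (e^T d)_ij = sum_k e_kij d_k : maps vectors to symmetric matrices
    return np.einsum('kij,k->ij', e, d)

M = rng.standard_normal((3, 3))
M = 0.5 * (M + M.T)                    # symmetrize
d = rng.standard_normal(3)

lhs = apply_e(M) @ d                   # (e M) . d
rhs = np.sum(M * apply_e_adjoint(d))   # M : (e^T d)
```

Note that $\mathbf{e}^\top\mathbf{d}$ lands in the symmetric matrices precisely because of the symmetry imposed on $e_{kij}$.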
+
+The constants $\zeta$ and $\epsilon$ are respectively the thermal and dielectric constants; $c_\epsilon$ is the specific heat at constant strain, and the constant vector $\mathbf{p}$ is the pyroelectric moduli vector. The electric field $\mathbf{E}$ in the constitutive equations is replaced by $\mathbf{E} = -\nabla\varphi$. As usual, $\mu > 0$ and $\lambda$ are the Lamé constants for the elastic body (note that it is customary to require $\lambda > 0$; however, this is not necessary as long as the bulk modulus $\lambda + \frac{2}{3}\mu$ remains positive, i.e., $3\lambda + 2\mu > 0$). The theory of thermopiezoelectricity was first proposed by Mindlin [35]. The physical laws for thermopiezoelectric materials were explored by Nowacki [36] (GCH would like to thank Prof. T.W. Chou for locating this reference for him), [37], where more general constitutive relations are available than those given in (2.7)–(2.8).
+
+Making use of these constitutive relations in conjunction with the governing equations (2.4), (2.5), and (2.6) we arrive at differential equations
+
+$$ \rho_e \frac{\partial^2 \mathbf{u}}{\partial t^2} - \nabla \cdot (\boldsymbol{\sigma}_e(\mathbf{u}) - \zeta\theta\mathbf{I} + \mathbf{e}^\top\nabla\varphi) = 0 \quad (2.9) $$
+
+$$ \frac{\partial}{\partial t} (\zeta \nabla \cdot \mathbf{u} - \mathbf{p} \cdot \nabla \varphi) + \frac{1}{T_0} \left( c_\epsilon \frac{\partial \theta}{\partial t} - \Delta \theta \right) = 0 \quad (2.10) $$
+
+$$ \nabla \cdot (\boldsymbol{e}\boldsymbol{\varepsilon}(\mathbf{u}) + \theta\mathbf{p} - \varepsilon\nabla\varphi) = 0 \quad (2.11) $$
+
+We remark that Equation (2.10) is derived under the assumption that $|\frac{\theta}{T_0}| \ll 1$. This means $T \simeq T_0$, since $T = T_0(1 + \frac{\theta}{T_0})$. Equations (2.9)-(2.11) constitute the complete set of equations of thermopiezoelectricity coupling a hyperbolic equation for **u**, a parabolic equation for $\theta$, and an elliptic equation for $\varphi$. Here and in the sequel, all the constant physical quantities satisfy
+
+$$ \rho_e > 0, \mu > 0, 3\lambda + 2\mu > 0, e_{ijk} > 0, \zeta > 0, c_\varepsilon > 0. $$
+
+To formulate a typical time-dependent wave-structure problem, we need to prescribe initial, boundary and transmission conditions. This leads to a model of partial
+---PAGE_BREAK---
+
+differential equations for the time-dependent wave-structure problem.
+
+**Time-dependent transmission problem.** Given ($p^{inc}$, $\partial_n p^{inc}$, $f_\theta$, $f_D$), find the solutions $(\mathbf{u}, \theta, \varphi)$ in $\Omega \times [0, \infty)$, and $p$ in $\Omega^c \times [0, \infty)$ satisfying the partial differential equations
+
+$$
+\begin{align}
+\rho_e \frac{\partial^2 \mathbf{u}}{\partial t^2} - \nabla \cdot (\boldsymbol{\sigma}_e(\mathbf{u}) - (\zeta\theta)\mathbf{I} + \mathbf{e}^\top \nabla \varphi) &= \mathbf{0} && \text{in } \Omega \times [0, \infty) \tag{2.12} \\
+\frac{\partial}{\partial t} (\zeta \nabla \cdot \mathbf{u} - \mathbf{p} \cdot \nabla \varphi) + \frac{1}{T_0} \left( c_\epsilon \frac{\partial \theta}{\partial t} - \Delta \theta \right) &= 0 && \text{in } \Omega \times [0, \infty) \\
+\nabla \cdot (\mathbf{e} \boldsymbol{\varepsilon}(\mathbf{u}) + \theta \mathbf{p} - \epsilon \nabla \varphi) &= 0 && \text{in } \Omega \times [0, \infty)
+\end{align}
+$$
+
+and
+
+$$ \frac{1}{c^2} \frac{\partial^2 p}{\partial t^2} - \Delta p = 0 \quad \text{in} \quad \Omega^c \times [0, \infty). \qquad (2.13) $$
+
+together with the transmission conditions
+
+$$
+\begin{gather}
+\boldsymbol{\sigma}(\mathbf{u}, \theta, \varphi)^{-}\mathbf{n} = -(p+p^{inc})^{+}\mathbf{n} \quad \text{on } \Gamma \times [0, \infty), \tag{2.14} \\
+\frac{\partial \mathbf{u}^{-}}{\partial t} \cdot \mathbf{n} = -\frac{1}{\rho_f} \int_0^t \frac{\partial}{\partial n} (p+p^{inc})^+ d\tau \quad \text{on } \Gamma \times [0, \infty),
+\end{gather}
+$$
+
+the boundary conditions
+
+$$ \partial_n \theta = f_\theta, \quad \text{and} \quad \mathbf{D} \cdot \mathbf{n} = f_D, \quad \text{on } \Gamma \times [0, \infty) \qquad (2.15) $$
+
+*and homogeneous initial conditions for u, $\partial u/\partial t$, $\theta$, $p$ and $\partial p/\partial t$.*
+
+The given data and solutions are required to satisfy certain regularity properties that will be specified later. In the formulation, we use the superscripts $+$ or $-$ to denote the traces or restrictions to the boundary $\Gamma$ of a function when taken as limits from functions defined on $\Omega^c$ and $\Omega$, respectively. This is equivalent to the notation $v^+ = \gamma^+ v$ and $v^- = \gamma^- v$ customary in the mathematical literature. Whenever the trace—or restriction—of a function to the boundary does not depend on the side from which the limit is taken, we will drop the superscript and write only $\gamma v$. In this formulation, one has to solve the wave equation for the pressure in the exterior—unbounded—domain, which can be a drawback from the computational point of view.
+
+To sidestep the challenge of unboundedness, we will resort to a formulation of the transmission problem defined by (2.12), (2.13), (2.14), and (2.15) that couples boundary integral equations with partial differential equations. This technique, put forward in the context of time-harmonic problems [20], transforms the problem into a nonlocal one that is posed only in the bounded computational domain $\Omega$ by representing the pressure in the fluid domain through an integral along the interface $\Gamma$ between the solid and the fluid. To this end, we must introduce the fundamental solution to the wave equation
+
+$$ G(x - y, t) = \frac{1}{4\pi|x - y|} \delta(t - c^{-1}|x - y|). $$
+---PAGE_BREAK---
+
+Above, $\delta(\cdot)$ is Dirac's delta. Using this fundamental solution, it is possible to express any solution to (2.13) in terms of density functions $\phi$ and $\lambda$ that correspond to the Cauchy data of the problem, namely the pressure restricted to $\Gamma$ and its normal derivative, respectively. This is known as the Kirchhoff representation formula (see e.g., [27, 28, 31])
+
+$$p(x,t) = (\mathcal{D} * \phi)(x,t) - (\mathcal{S} * \lambda)(x,t), \quad (x,t) \in \Omega^c \times [0, \infty). \qquad (2.16)$$
+
+Above, the asterisk $*$ refers to convolution with respect to time,
+
+$$f * g = \int_{0}^{t} f(t - \tau)g(\tau)d\tau,$$
+
+and $\mathcal{D}$ and $\mathcal{S}$ are known respectively as the double- and simple-layer potentials. They can be defined as convolutions with the fundamental solution and its normal derivative
+
+$$
+\begin{align*}
+(\mathcal{S} * \lambda)(\boldsymbol{x}, t) &:= \int_0^t \int_\Gamma \mathcal{G}(\boldsymbol{x} - \boldsymbol{y}, t-\tau) \lambda(\boldsymbol{y}, \tau) \, d\Gamma_y d\tau \\
+&= \int_\Gamma \frac{1}{4\pi|\boldsymbol{x}-\boldsymbol{y}|} \lambda(\boldsymbol{y}, t-c^{-1}|\boldsymbol{x}-\boldsymbol{y}|) \, d\Gamma_y \\
+&= \int_\Gamma E(\boldsymbol{x}, \boldsymbol{y}) \lambda(\boldsymbol{y}, t-c^{-1}|\boldsymbol{x}-\boldsymbol{y}|) \, d\Gamma_y,
+\end{align*}
+$$
+
+$$
+\begin{align*}
+(\mathcal{D} * \phi)(\mathbf{x}, t) &:= \int_0^t \int_\Gamma \frac{\partial}{\partial n_y} \mathcal{G}(\mathbf{x} - \mathbf{y}, t-\tau) \phi(\mathbf{y}, \tau) d\Gamma_y d\tau \\
+&= \int_\Gamma \frac{\partial}{\partial n_y} \left( \frac{1}{4\pi|\mathbf{x}-\mathbf{y}|} \phi(\mathbf{y}, t-c^{-1}|\mathbf{x}-\mathbf{y}|) \right) d\Gamma_y \\
+&= \int_\Gamma \frac{\partial}{\partial n_y} \left( E(\mathbf{x}, \mathbf{y}) \phi(\mathbf{y}, t-c^{-1}|\mathbf{x}-\mathbf{y}|) \right) d\Gamma_y.
+\end{align*}
+$$
+
+In these equations, we have denoted the fundamental solution of the negative Laplacian in $\mathbb{R}^3$ by $E(x, y) := \frac{1}{4\pi|x-y|}$. The reader will notice that the convolution with the fundamental solution introduces a delay into the density functions $\lambda$ and $\phi$. It is customary in the wave propagation community to write $[\varphi] = \varphi(y, t - c^{-1}|x - y|)$ and call $[\varphi]$ the retarded value of $\varphi$. This is the reason why $(\mathcal{S} * \lambda)(x, t)$ and $(\mathcal{D} * \phi)(x, t)$ are sometimes referred to as the *retarded layer potentials*.
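The retarded single-layer potential can be made concrete with a brute-force quadrature experiment. In the sketch below (entirely our own; the Fibonacci point set and all parameter choices are illustrative), $\Gamma$ is the unit sphere, $c = 1$, and the density is a Heaviside step in time. Once $t$ exceeds the largest travel distance from $\Gamma$ to the observation point, the retarded potential settles to the static single-layer value $\int_\Gamma E(x, y)\, d\Gamma_y = 1/|x|$ for $|x| > 1$:

```python
import numpy as np

def fibonacci_sphere(n):
    # quasi-uniform nodes on the unit sphere, used as equal-weight quadrature points
    k = np.arange(n)
    phi = np.pi * (3.0 - np.sqrt(5.0)) * k
    z = 1.0 - 2.0 * (k + 0.5) / n
    r = np.sqrt(1.0 - z * z)
    return np.stack([r * np.cos(phi), r * np.sin(phi), z], axis=1)

def retarded_single_layer(x, t, density, n=20_000, c=1.0):
    # (S * lambda)(x, t) = \int_Gamma density(y, t - |x-y|/c) / (4 pi |x-y|) dGamma_y
    y = fibonacci_sphere(n)
    w = 4.0 * np.pi / n                        # equal weights summing to the area 4 pi
    dist = np.linalg.norm(x - y, axis=1)
    return np.sum(w * density(y, t - dist / c) / (4.0 * np.pi * dist))

x = np.array([2.0, 0.0, 0.0])                  # observation point, |x| = 2
heaviside = lambda y, s: (s >= 0.0).astype(float)
val = retarded_single_layer(x, 5.0, heaviside) # t = 5 exceeds the max distance 3
```

Here `val` should be close to $1/|x| = 0.5$, while for $t < 1$ (before the wave from the nearest point of $\Gamma$ arrives) the same routine returns exactly zero, exhibiting the finite speed of propagation encoded in the retarded kernel.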
+
+Similarly, by introducing the convolution integral
+
+$$(I * \varphi)(x, t) := \int_{0}^{t} \int_{\Gamma} \delta(x - y; t - \tau) \varphi(y, \tau) d\Gamma_{y} d\tau = \varphi(x, t),$$
+
+at non-singular points of $\Gamma$, the Cauchy data $\phi$ and $\lambda$ can be seen to satisfy the following system of boundary integral equations (see, e.g., [3, 4, 9, 29])
+
+$$
\begin{pmatrix} \phi \\ \lambda \end{pmatrix} = \begin{pmatrix} \frac{1}{2}\mathcal{I} + \mathcal{K} & -\mathcal{V} \\ -\mathcal{W} & \frac{1}{2}\mathcal{I} - \mathcal{K}' \end{pmatrix} * \begin{pmatrix} \phi \\ \lambda \end{pmatrix} \quad \text{on } \Gamma \times [0, \infty). \tag{2.17}
+$$
+---PAGE_BREAK---
+
+The boundary integral operators $\mathcal{V}, \mathcal{K}, \mathcal{K}',$ and $\mathcal{W}$ appearing above are known re-
+spectively as the simple layer, double layer, transpose double layer, and hypersingular
+boundary integral operators for the dynamic wave equation. They are defined as follows
+
+$$
+\left.
+\begin{aligned}
+(\mathcal{V} * \lambda) &:= \{\{\gamma(\mathcal{S} * \lambda)\}\} &&= \frac{1}{2} (\gamma^{-}(\mathcal{S} * \lambda) + \gamma^{+}(\mathcal{S} * \lambda)) \\
+ &= \gamma^{-}(\mathcal{S} * \lambda) &&= \gamma^{+}(\mathcal{S} * \lambda) \\
+(\mathcal{K} * \phi) &:= \{\{\gamma(\mathcal{D} * \phi)\}\} &&= \frac{1}{2} (\gamma^{-}(\mathcal{D} * \phi) + \gamma^{+}(\mathcal{D} * \phi)) \\
+(\mathcal{K}' * \lambda) &:= \{\{\partial_n(\mathcal{S} * \lambda)\}\} &&= \frac{1}{2} (\partial_n^{-}(\mathcal{S} * \lambda) + \partial_n^{+}(\mathcal{S} * \lambda)) \\
+(\mathcal{W} * \phi) &:= -\{\{\partial_n(\mathcal{D} * \phi)\}\} &&= -\frac{1}{2} (\partial_n^{-}(\mathcal{D} * \phi) + \partial_n^{+}(\mathcal{D} * \phi)) \\
+ &= -\partial_n^{-}(\mathcal{D} * \phi) &&= -\partial_n^{+}(\mathcal{D} * \phi)
+\end{aligned}
+\right\} \quad \text{on } \Gamma \times [0, \infty).
+$$
+
+Note that the averaging operator $\{\{\cdot\}\}$ has been defined implicitly in the second equality on the first line above.
+
+We can now state the reformulation of the original problem that we will be focusing on:
+
+**Time-dependent nonlocal problem.** Given ($p^{inc}$, $\partial_n p^{inc}$, $f_\theta$, $f_D$), find the solutions $(\mathbf{u}, \theta, \varphi)$ in $\Omega \times [0, \infty)$ and $(\phi, \lambda)$ on $\Gamma \times [0, \infty)$ satisfying the partial differential equations
+
+$$
+\begin{align*}
+\rho_e \frac{\partial^2 \mathbf{u}}{\partial t^2} - \nabla \cdot (\boldsymbol{\sigma}_e(\mathbf{u}) - (\zeta\theta)\mathbf{I} + \mathbf{e}^\top\nabla\varphi) &= \mathbf{0} && \text{in } \Omega \times [0, \infty), \\
+\frac{\partial}{\partial t} (\zeta \nabla \cdot \mathbf{u} - \mathbf{p} \cdot \nabla\varphi) + \frac{1}{T_0} \left( c_\epsilon \frac{\partial\theta}{\partial t} - \Delta\theta \right) &= 0 && \text{in } \Omega \times [0, \infty), \\
+\nabla \cdot (\mathbf{e}\boldsymbol{\varepsilon}(\mathbf{u}) + \theta\mathbf{p} - \epsilon\nabla\varphi) &= 0 && \text{in } \Omega \times [0, \infty),
+\end{align*}
+$$
+
+and the boundary integro-differential equations
+
+$$
+\begin{gather*}
+-\rho_f \frac{\partial \mathbf{u}}{\partial t} \cdot \mathbf{n} + \int_0^t \left((W*\phi)(\mathbf{x},\tau) - \frac{1}{2}\lambda(\mathbf{x},\tau) + (K'*\lambda)(\mathbf{x},\tau)\right) d\tau = \int_0^t \partial_n^+ p^{inc}(\mathbf{x},\tau)\, d\tau \quad \text{on } \Gamma \times [0, \infty), \\
+\frac{1}{2}\phi(\mathbf{x},t) - (K*\phi)(\mathbf{x},t) + (V*\lambda)(\mathbf{x},t) = 0 \quad \text{on } \Gamma \times [0, \infty).
+\end{gather*}
+$$
+
+together with the transmission condition
+
+$$
+\boldsymbol{\sigma}(\mathbf{u}, \theta, \varphi)^{-} \mathbf{n} = -(\phi + p^{inc})^{+} \mathbf{n}, \quad \text{on } \Gamma \times [0, \infty),
+$$
+
+the boundary conditions
+
+$$
+\partial_n \theta = f_\theta, \quad \text{and} \quad \mathbf{D} \cdot \mathbf{n} = f_{\mathbf{D}}, \quad \text{on } \Gamma \times [0, \infty),
+$$
+
+as well as homogeneous initial conditions for $\mathbf{u}$, $\partial\mathbf{u}/\partial t$, $\theta$, $\phi$ and $\lambda$.
+
+Throughout the paper, the given data ($p^{inc}$, $\partial_n p^{inc}$, $f_\theta$, $f_D$) will always be assumed to be causal functions, that is, functions of time $t$ that vanish identically for $t < 0$.
+
+From the definitions of the operators $\mathcal{V}, \mathcal{K}, \mathcal{K}',$ and $\mathcal{W}$, we notice that the non-locality of the boundary integral equations in (2.17) is not restricted to space, but extends also into the time variable.
+
+To study the well-posedness of this formulation, we will first transform it to the Laplace domain, where the analysis will be performed. This idea is due to Lubich and Schneider (see, e.g., [32, 33]) and has been extended by Laliena and Sayas [31, 39]. We remark that the passage to the Laplace domain is required only to simplify the analysis and the stability estimates; a computational implementation of this technique *does not* require the numerical inversion of the Laplace transform. Instead, from the estimates of the solutions in the transformed domain, the properties of the solutions in the time domain will be deduced automatically. The latter is particularly desirable from the computational point of view. In the next section, we will consider the Laplace-domain counterpart of the time-dependent nonlocal boundary transmission problem.
+
+# 3 A nonlocal boundary problem
+
+The passage to the Laplace domain will require us to first introduce some definitions. The complex plane will be denoted in the sequel by $\mathbb{C}$, while we will use the notation
+
+$$ \mathbb{C}_+ := \{s \in \mathbb{C} : \text{Re } s > 0\}, $$
+
+to refer to the positive half plane. For a complex-valued function $f: [0, \infty) \to \mathbb{C}$ with limited growth at infinity, its Laplace transform is given by
+
+$$ \hat{f}(s) = \mathcal{L}f(s) := \int_{0}^{\infty} e^{-st} f(t)dt, $$
+
+whenever the integral converges. A broad class of functions for which the Laplace transform is well-defined is that of functions of exponential order. More precisely, a function $f$ is said to be of exponential order if there exist constants $t_0 > 0$, $M \equiv M(t_0) > 0$, and $\alpha \equiv \alpha(t_0) > 0$ satisfying
+
+$$ t \geq t_0 \implies |f(t)| \leq Me^{\alpha t}. $$
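
For a causal function of exponential order, truncating the integral already gives an accurate approximation of the transform whenever $\text{Re}\, s$ exceeds the growth rate $\alpha$. A minimal numerical sketch (using $f(t) = e^{-t}$, whose transform is known in closed form to be $1/(s+1)$; the truncation time is an arbitrary choice):

```python
import numpy as np
from scipy.integrate import quad

def laplace_transform(f, s, T=60.0):
    # Approximate L{f}(s) = ∫_0^∞ e^{-st} f(t) dt by truncating at t = T;
    # the neglected tail is negligible when f has exponential order alpha < Re s.
    value, _ = quad(lambda t: np.exp(-s * t) * f(t), 0.0, T)
    return value

f = lambda t: np.exp(-t)          # causal on [0, ∞), of exponential order
print(laplace_transform(f, 2.0))  # ≈ 1/(2 + 1) = 0.3333...
```

This is only a sanity check of the definition; the analysis below never requires numerical evaluation or inversion of the transform.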
+
+In the following, let $\hat{\mathbf{u}}(s) := \mathcal{L}\{\mathbf{u}(\boldsymbol{x}, t)\}, \hat{\theta}(s) := \mathcal{L}\{\theta(\boldsymbol{x}, t)\}, \hat{\varphi}(s) := \mathcal{L}\{\varphi(\boldsymbol{x}, t)\}$, and $\hat{p}(s) := \mathcal{L}\{p(\boldsymbol{x}, t)\}$. Then, in the Laplace domain, equations (2.12), (2.13), and (2.14) become
+
+$$ -\nabla \cdot (\boldsymbol{\sigma}_e (\hat{\mathbf{u}}) - (\zeta \hat{\theta}) \mathbf{I} + \mathbf{e}^\top \nabla \hat{\varphi}) + \rho_e s^2 \hat{\mathbf{u}} = \mathbf{0} \quad \text{in } \Omega \quad (3.1) $$
+
+$$ s (\zeta \nabla \cdot \hat{\mathbf{u}} - \mathbf{p} \cdot \nabla \hat{\varphi}) + \frac{1}{T_0} (-\Delta \hat{\theta} + c_\epsilon s \hat{\theta}) = 0 \quad \text{in } \Omega \quad (3.2) $$
+
+$$ \nabla \cdot (\boldsymbol{e} \boldsymbol{\varepsilon}(\hat{\mathbf{u}}) + \hat{\theta} \mathbf{p} - \epsilon \nabla \hat{\varphi}) = 0 \quad \text{in } \Omega \quad (3.3) $$
+
+and
+
+$$ -\Delta \hat{p} + \frac{s^2}{c^2} \hat{p} = 0 \quad \text{in } \Omega^c. \quad (3.4) $$
+
+together with the transmission conditions
+
+$$
+\sigma(\hat{\mathbf{u}}, \hat{\theta}, \hat{\varphi})^{-} \mathbf{n} = -(\hat{p} + \hat{p}^{\text{inc}})^{+} \mathbf{n} \quad \text{on } \Gamma, \qquad (3.5)
+$$
+
+$$
+s^2 \hat{\mathbf{u}} \cdot \mathbf{n} = -\frac{1}{\rho_f} \frac{\partial}{\partial n} (\hat{p} + \hat{p}^{\text{inc}})^{+} \quad \text{on } \Gamma, \qquad (3.6)
+$$
+
+and the boundary conditions
+
+$$
+\partial_n \hat{\theta} = \hat{f}_\theta, \quad \text{and} \quad \hat{\mathbf{D}} \cdot \mathbf{n} = \hat{f}_\mathbf{D} \quad \text{on } \Gamma. \tag{3.7}
+$$
+
+Above, analogously to the time-domain system, the generalized stress tensor is given by $\boldsymbol{\sigma}(\hat{\mathbf{u}}, \hat{\theta}, \hat{\varphi}) := \boldsymbol{\sigma}_e(\hat{\mathbf{u}}) - (\zeta\hat{\theta})\mathbf{I} + \mathbf{e}^\top\nabla\hat{\varphi}$.
+
+We will make use of Green’s third identity to derive the equivalent non-local problem. First we must represent the solutions of (3.4) in the form:
+
+$$
+\hat{p}(s) = D(s)\hat{\phi} - S(s)\hat{\lambda} \quad \text{in } \Omega^c, \tag{3.8}
+$$
+
+where the Cauchy data for (3.4) are given by the densities $\hat{\phi} := \hat{p}^{+}(s)$ and $\hat{\lambda} := \partial\hat{p}^{+}/\partial n$, and the simple-layer, $S(s)$, and double-layer, $D(s)$, potentials for (3.4) are defined by
+
+$$
+S(s) \hat{\lambda}(\mathbf{x}) := \int_{\Gamma} E_{s/c}(\mathbf{x}, \mathbf{y}) \hat{\lambda}(\mathbf{y}) d\Gamma_{\mathbf{y}}, \quad \mathbf{x} \in \Omega^c,
+$$
+
+$$
+D(s) \hat{\phi}(\mathbf{x}) := \int_{\Gamma} \frac{\partial}{\partial n_y} E_{s/c}(\mathbf{x}, \mathbf{y}) \hat{\phi}(\mathbf{y}) d\Gamma_y, \quad \mathbf{x} \in \Omega^c.
+$$
+
+Here
+
+$$
+E_{s/c}(x, y) := \frac{e^{-s|x-y|/c}}{4\pi|x-y|}
+$$
+
+is the fundamental solution of equation (3.4). As with their counterparts in the time domain, the Cauchy data $\hat{\lambda}$ and $\hat{\phi}$ satisfy the following integral relations:
+
+$$
+\begin{pmatrix} \hat{\phi} \\ \hat{\lambda} \end{pmatrix} = \begin{pmatrix} \frac{1}{2}I + K(s) & -V(s) \\ -W(s) & (\frac{1}{2}I - K(s))' \end{pmatrix} \begin{pmatrix} \hat{\phi} \\ \hat{\lambda} \end{pmatrix} \quad \text{on } \Gamma. \tag{3.9}
+$$
+
+In the preceding relation, $V(s)$, $K(s)$, $K'(s)$ and $W(s)$ are the four basic boundary integral operators defined by
+
+$$
+\left.
+\begin{array}{l}
+V(s) := \{\{\gamma S(s)\}\} = \frac{1}{2}(\gamma^{-}S(s) + \gamma^{+}S(s)) \\
+\phantom{V(s) := } = \gamma^{-}S(s) = \gamma^{+}S(s) \\
+K(s) := \{\{\gamma D(s)\}\} = \frac{1}{2}(\gamma^{-}D(s) + \gamma^{+}D(s)) \\
+K'(s) := \{\{\partial_n S(s)\}\} = \frac{1}{2}(\partial_n^{-}S(s) + \partial_n^{+}S(s)) \\
+W(s) := -\{\{\partial_n D(s)\}\} = -\frac{1}{2}(\partial_n^{-}D(s) + \partial_n^{+}D(s)) \\
+\phantom{W(s) := } = -\partial_n^{-}D(s) = -\partial_n^{+}D(s)
+\end{array}
+\right\} \quad \text{on } \Gamma.
+$$
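
As a quick numerical sanity check on the kernel $E_{s/c}$ introduced above, one can verify by finite differences that it satisfies $-\Delta \hat{p} + (s/c)^2 \hat{p} = 0$ away from the source point. A minimal sketch, using a real value standing in for $s/c$ and an arbitrarily chosen evaluation point:

```python
import numpy as np

def E(x, y, k):
    # Fundamental solution e^{-k|x-y|} / (4*pi*|x-y|) of -Δ + k^2 in R^3
    r = np.linalg.norm(x - y)
    return np.exp(-k * r) / (4.0 * np.pi * r)

def laplacian_fd(f, x, h=1e-3):
    # Second-order central-difference approximation of the Laplacian in R^3
    lap = 0.0
    for i in range(3):
        e = np.zeros(3)
        e[i] = h
        lap += (f(x + e) - 2.0 * f(x) + f(x - e)) / h**2
    return lap

y = np.zeros(3)
k = 1.5                                   # stands in for s/c (illustration only)
x = np.array([0.7, -0.4, 0.5])            # any point away from the source y
residual = -laplacian_fd(lambda z: E(z, y, k), x) + k**2 * E(x, y, k)
print(abs(residual))                      # small: only O(h^2) discretization error
```

The residual is nonzero only because of the finite-difference truncation error; shrinking the step size (down to roundoff) shrinks it accordingly.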
+
+In terms of $\hat{\phi}$ and $\hat{\lambda}$, the two transmission conditions (3.5) and (3.6) become
+
+$$
+\begin{gather}
+\sigma(\hat{\mathbf{u}}, \hat{\theta}, \hat{\varphi})^{-} \mathbf{n} = - (\hat{\phi} + (\hat{p}^{\text{inc}})^{+}) \mathbf{n} \quad \text{on } \Gamma, \nonumber \\
+-s^2 \hat{\mathbf{u}}^{-} \cdot \mathbf{n} + \frac{1}{\rho_f} \left( W(s)\hat{\phi} - \left(\frac{1}{2}I - K(s)\right)' \hat{\lambda} \right) = \frac{1}{\rho_f} \left( \frac{\partial \hat{p}^{\text{inc}}}{\partial n} \right)^{+} \quad \text{on } \Gamma. \tag{3.10}
+\end{gather}
+$$
+
+Using the densities $\hat{\phi}$ and $\hat{\lambda}$ as new unknowns, equation (3.4) may be eliminated
+from the problem by using the second equation above together with the boundary
+integral equation in the first row of (3.9), namely
+
+$$
+\left( \frac{1}{2} I - K(s) \right) \hat{\phi} + V(s) \hat{\lambda} = 0 \quad \text{on } \Gamma. \qquad (3.11)
+$$
+
+This leads to an integro-differential formulation for the unknowns $(\hat{\mathbf{u}}, \hat{\theta}, \hat{\varphi}, \hat{\phi}, \hat{\lambda})$ satisfying the partial differential equations (3.1), (3.2), and (3.3) in $\Omega$ together with the boundary conditions (3.6), and (3.7), and the boundary integral equations (3.10) and (3.11) on $\Gamma$.
+
+Let us first define the space
+
+$$
+H_*^1(\Omega) := \left\{ \varphi \in H^1(\Omega) \mid \int_{\Omega} \varphi(x) dx = 0 \right\},
+$$
+
+and restrict our search for the unknown functions $(\hat{\mathbf{u}}, \hat{\theta}, \hat{\varphi})$ to the product space $\mathbf{H}^1(\Omega) \times H^1(\Omega) \times H_*^1(\Omega)$. To do so, we multiply equations (3.1), (3.2), and (3.3) by test functions $(\hat{\mathbf{v}}, \hat{\vartheta}, \hat{\psi}) \in \mathbf{H}^1(\Omega) \times H^1(\Omega) \times H_*^1(\Omega)$; integrating the resulting relations by parts leads to:
+
+$$
+\left.
+\begin{aligned}
+& a(\hat{\mathbf{u}}, \hat{\mathbf{v}}; s) - \zeta(\hat{\theta}, \nabla \cdot \hat{\mathbf{v}})_\Omega + (\nabla \hat{\varphi}, \mathbf{e}\boldsymbol{\varepsilon}(\hat{\mathbf{v}}))_\Omega + \langle \hat{\phi}\mathbf{n}, \hat{\mathbf{v}}^- \rangle_\Gamma = -\langle (\hat{p}^{\text{inc}})^{+}\mathbf{n}, \hat{\mathbf{v}}^- \rangle_\Gamma \\[0.5ex]
+& s(\zeta\nabla \cdot \hat{\mathbf{u}} - \mathbf{p} \cdot \nabla \hat{\varphi}, \hat{\vartheta})_\Omega + \frac{1}{T_0} b(\hat{\theta}, \hat{\vartheta}; s) = \frac{1}{T_0} \langle \hat{f}_{\theta}, \hat{\vartheta}^- \rangle_\Gamma \\[0.5ex]
+& -(\mathbf{e}\boldsymbol{\varepsilon}(\hat{\mathbf{u}}), \nabla\hat{\psi})_\Omega - (\hat{\theta}\mathbf{p}, \nabla\hat{\psi})_\Omega + \epsilon\, c(\hat{\varphi}, \hat{\psi}; s) = -\langle \hat{f}_{\mathbf{D}}, \hat{\psi}^- \rangle_\Gamma
+\end{aligned}
+\right\} \quad (3.12)
+$$
+
+where $a(\cdot, \cdot; s)$, $b(\cdot, \cdot; s)$ and $c(\cdot, \cdot; s)$ are sesquilinear forms defined respectively by
+
+$$
+\begin{align*}
+a(\hat{\mathbf{u}}, \hat{\mathbf{v}}; s) &:= (\boldsymbol{\sigma}_e(\hat{\mathbf{u}}), \boldsymbol{\varepsilon}(\hat{\mathbf{v}}))_\Omega + s^2\rho_e(\hat{\mathbf{u}}, \hat{\mathbf{v}})_\Omega \\
+b(\hat{\theta}, \hat{\vartheta}; s) &:= (\nabla\hat{\theta}, \nabla\hat{\vartheta})_\Omega + c_\varepsilon s(\hat{\theta}, \hat{\vartheta})_\Omega \\
+c(\hat{\varphi}, \hat{\psi}; s) &:= (\nabla\hat{\varphi}, \nabla\hat{\psi})_\Omega.
+\end{align*}
+$$
+
+Now let $\mathbf{A}_s$, $\mathcal{B}_s$ and $\mathcal{C}_s$ be the operators defined by the mappings
+
+$\mathbf{A}_s\hat{\mathbf{u}} := a(\hat{\mathbf{u}}, \cdot\,; s)$, $\mathcal{B}_s\hat{\theta} := b(\hat{\theta}, \cdot\,; s)$, and $\mathcal{C}_s\hat{\varphi} := c(\hat{\varphi}, \cdot\,; s)$,
+and consider the function spaces
+
+$$X := \mathbf{H}^1(\Omega) \times H^1(\Omega) \times H_*^1(\Omega) \times H^{1/2}(\Gamma) \times H^{-1/2}(\Gamma),$$
+
+$$X_0' := (\mathbf{H}^1(\Omega))' \times (H^1(\Omega))' \times (H_*^1(\Omega))' \times H^{-1/2}(\Gamma) \times H^{1/2}(\Gamma).$$
+
+Then from (3.12), (3.10) and (3.11), we pose the nonlocal problem as
+
+**The nonlocal boundary problem.**
+
+For problem data $(\hat{d}_1, \hat{d}_2, \hat{d}_3, \hat{d}_4, \hat{d}_5) \in X_0'$, given by
+
+$$
+\begin{align*}
+\hat{d}_1 &= -\gamma^{-'}(\gamma^{+}\hat{p}^{inc}\mathbf{n}), & \hat{d}_2 &= \gamma^{-'}(T_0^{-1}\hat{f}_{\theta}), & \hat{d}_3 &= -\gamma^{-'}\hat{f}_{\mathbf{D}}, \\
+\hat{d}_4 &= (\rho_f)^{-1}\partial_n^{+}\hat{p}^{inc}, & \hat{d}_5 &= 0,
+\end{align*}
+$$
+
+find functions $(\hat{\mathbf{u}}, \hat{\theta}, \hat{\varphi}, \hat{\phi}, \hat{\lambda}) \in X$ satisfying
+
+$$
+\mathbb{A}(s) (\hat{\mathbf{u}}, \hat{\theta}, \hat{\varphi}, \hat{\phi}, \hat{\lambda})^\top = (\hat{d}_1, \hat{d}_2, \hat{d}_3, \hat{d}_4, \hat{d}_5)^\top \quad (3.13)
+$$
+
+with
+
+$$
+\mathbb{A}(s) \begin{pmatrix} \hat{\mathbf{u}} \\ \hat{\theta} \\ \hat{\varphi} \\ \hat{\phi} \\ \hat{\lambda} \end{pmatrix} := \begin{pmatrix} \mathbf{A}_s & -\zeta (\nabla \cdot)' & \boldsymbol{\varepsilon}' e^\top \nabla & \gamma_n^{-'} & 0 \\ s \zeta \nabla \cdot & T_0^{-1} B_s & -s \mathbf{p} \cdot \nabla & 0 & 0 \\ -\nabla' e \boldsymbol{\varepsilon} & -\nabla' \mathbf{p} & \epsilon C_s & 0 & 0 \\ -s^2 \gamma_n^{-} & 0 & 0 & \rho_f^{-1} W(s) & -\rho_f^{-1} (\frac{1}{2} I - K(s))' \\ 0 & 0 & 0 & \frac{1}{2} I - K(s) & V(s) \end{pmatrix} \begin{pmatrix} \hat{\mathbf{u}} \\ \hat{\theta} \\ \hat{\varphi} \\ \hat{\phi} \\ \hat{\lambda} \end{pmatrix}. \tag{3.14}
+$$
+
+In the next section we will show that this problem is in fact well-posed.
+
+# 4 Variational solutions
+
+We are interested in seeking variational solutions of the nonlocal boundary problem (3.13) in the transformed domain. To this end we need some additional preliminary results and definitions. We begin with the norms:
+
+$$
+\begin{align}
+\|\hat{\mathbf{u}}\|_{|s|, \Omega}^2 &:= (\boldsymbol{\sigma}_e(\hat{\mathbf{u}}), \boldsymbol{\varepsilon}(\hat{\mathbf{u}}))_{\Omega} + \rho_e \| |s|\, \hat{\mathbf{u}} \|_{\Omega}^2 && \hat{\mathbf{u}} \in \mathbf{H}^1(\Omega), \\
+\|\hat{\theta}\|_{|s|, \Omega}^2 &:= \| \nabla \hat{\theta} \|_{\Omega}^2 + c_{\varepsilon}^{-1} \| \sqrt{|s|}\, \hat{\theta} \|_{\Omega}^2 && \hat{\theta} \in H^1(\Omega), \\
+\|\hat{\varphi}\|_{1, \Omega}^2 &:= \| \nabla \hat{\varphi} \|_{\Omega}^2 && \hat{\varphi} \in H_*^1(\Omega), \\
+\|\hat{p}\|_{|s|, \Omega^c}^2 &:= \| \nabla \hat{p} \|_{\Omega^c}^2 + c^{-2} \| |s|\, \hat{p} \|_{\Omega^c}^2 && \hat{p} \in H^1(\Omega^c). \tag{4.1}
+\end{align}
+$$
+
+For $\hat{\varphi} \in H_*^1(\Omega)$, we see that $\|\nabla\hat{\varphi}\|_\Omega^2 = 0$ if and only if $\hat{\varphi} = 0$. Hence, the third definition in (4.1) indeed defines a norm on $H_*^1(\Omega)$ (see Hsiao and Wendland [29, Lemma 5.2.5, p. 255]).
+
+We will define $\sigma := Re s$ and $\underline{\sigma} := \min\{1, \sigma\}$. With this notation, it is not hard to verify that
+
+$$
+\underline{\sigma} \leq \min\{1, |s|\}, \quad \text{and} \quad \max\{1, |s|\}\underline{\sigma} \leq |s|, \quad \forall s \in \mathbb{C}_{+}.
+$$
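
Both inequalities follow directly from $\sigma \leq |s|$ and $\underline{\sigma} \leq 1$; they are also easy to confirm numerically by sampling the half plane (a throwaway verification script, not part of the analysis):

```python
import numpy as np

rng = np.random.default_rng(0)
for _ in range(10_000):
    # Sample s in the right half plane C_+
    s = complex(rng.uniform(1e-6, 10.0), rng.uniform(-10.0, 10.0))
    sigma = s.real
    sigma_under = min(1.0, sigma)   # the quantity denoted by underline-sigma
    # First inequality: underline-sigma <= min(1, |s|)
    assert sigma_under <= min(1.0, abs(s)) + 1e-12
    # Second inequality: max(1, |s|) * underline-sigma <= |s|
    assert max(1.0, abs(s)) * sigma_under <= abs(s) + 1e-12
print("both inequalities hold on all samples")
```

The key observation in both cases is that $\sigma = \operatorname{Re} s \leq |s|$ for every $s \in \mathbb{C}_+$.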
+
+Using these relations, it is possible to prove the following inequalities relating the energy norms defined above
+
+$$
+\begin{align}
+\underline{\sigma} \|\hat{\mathbf{u}}\|_{1, \Omega} &\leq \|\hat{\mathbf{u}}\|_{|s|, \Omega} \leq \frac{|s|}{\underline{\sigma}} \|\hat{\mathbf{u}}\|_{1, \Omega}, \tag{4.2} \\
+\sqrt{\underline{\sigma}} \|\hat{\theta}\|_{1, \Omega} &\leq \|\hat{\theta}\|_{|s|, \Omega} \leq \sqrt{\frac{|s|}{\underline{\sigma}}} \|\hat{\theta}\|_{1, \Omega}, \tag{4.3} \\
+\underline{\sigma} \|\hat{p}\|_{1, \Omega^c} &\leq \|\hat{p}\|_{|s|, \Omega^c} \leq \frac{|s|}{\underline{\sigma}} \|\hat{p}\|_{1, \Omega^c}. \tag{4.4}
+\end{align}
+$$
+
+These relations will be used heavily when estimating the norms of the solutions in terms of the Laplace parameter $s$ and its real part $\sigma$. The norms $\|\cdot\|_{1, \Omega}$ and $\|\cdot\|_{1, \Omega^c}$ are respectively equivalent to $\|\cdot\|_{H^1(\Omega)}$ and $\|\cdot\|_{H^1(\Omega^c)}$. An application of Korn's second inequality [14] shows that, for a vector-valued function $\hat{\mathbf{u}}$, the energy norm $\|\cdot\|_{1, \Omega}$ is also equivalent to the standard $H^1(\Omega)$ norm.
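
The pattern behind (4.2) and (4.4) is purely scalar: a squared energy norm of the form $a + |s|^2 b$, with $a, b \geq 0$, is sandwiched between $\underline{\sigma}^2(a+b)$ and $(|s|/\underline{\sigma})^2(a+b)$. A throwaway check of this scalar analogue (the sampled ranges are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
for _ in range(10_000):
    s = complex(rng.uniform(1e-3, 5.0), rng.uniform(-5.0, 5.0))  # s in C_+
    sig = min(1.0, s.real)                # underline-sigma
    a, b = rng.uniform(0.0, 1.0, size=2)  # stand-ins for the two squared terms
    weighted = a + abs(s) ** 2 * b        # pattern of ||u||_{|s|, Omega}^2
    unweighted = a + b                    # pattern of ||u||_{1, Omega}^2
    assert sig ** 2 * unweighted <= weighted + 1e-12
    assert weighted <= (abs(s) / sig) ** 2 * unweighted + 1e-9
print("scalar analogue of (4.2) and (4.4) verified")
```

The two bounds degrade as $\sigma \to 0$ or $|s| \to \infty$, which is exactly why the stability constants below carry powers of $|s|$ and $\underline{\sigma}$.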
+
+Now, given a vector of solutions $(\hat{\mathbf{u}}, \hat{\theta}, \hat{\varphi}, \hat{\phi}, \hat{\lambda})$ to (3.13), by defining
+
+$$
+\hat{p}(s) = D(s)\hat{\phi} - S(s)\hat{\lambda} \quad \text{in } \mathbb{R}^3 \setminus \Gamma,
+$$
+
+then $\hat{p} \in H^1(\mathbb{R}^3 \setminus \Gamma)$ is the unique solution of the transmission problem:
+
+$$
+\begin{align}
+& -\Delta \hat{p} + \frac{s^2}{c^2} \hat{p}(s) = 0 && \text{in } \mathbb{R}^3 \setminus \Gamma, \tag{4.5} \\
+& [\gamma \hat{p}] = \hat{\phi} \in H^{1/2}(\Gamma) && \text{on } \Gamma, \\
+& [\partial_n \hat{p}] = \hat{\lambda} \in H^{-1/2}(\Gamma) && \text{on } \Gamma,
+\end{align}
+$$
+
+where the symbol $[\cdot]$ denotes the "jump" relations of a function across $\Gamma$. More specifically we have
+
+$$
+[\gamma \hat{p}] := (\hat{p}^{+} - \hat{p}^{-}), \quad \text{and} \quad [\partial_n \hat{p}] := (\partial_n^{+} \hat{p} - \partial_n^{-} \hat{p}).
+$$
+
+We remark that in the present case no radiation condition is needed to ensure uniqueness because of Huygens' principle. In terms of the jumps of $\hat{p}$, the last two equations of (3.13) are equivalent to
+
+$$
+\begin{equation}
+\begin{aligned}
+& -s^2 \gamma_n^- \hat{\mathbf{u}} - \frac{1}{\rho_f} \partial_n^+ \hat{p} = \frac{1}{\rho_f} \hat{d}_4 && \text{on } \Gamma \\
+& -\gamma^- \hat{p} = \hat{d}_5 && \text{on } \Gamma
+\end{aligned}
+\tag{4.6}
+\end{equation}
+$$
+
+Since $\hat{d}_5 = 0$, we conclude that $\hat{p}$ satisfies the homogeneous Dirichlet problem for (4.5) in $\Omega$ and, by uniqueness, it must follow that $\hat{p} = 0$ in $\bar{\Omega}$. As a consequence we have the following relations between the unknown densities and the Cauchy data
+
+$$
+[\gamma \hat{p}] = \gamma^{+} \hat{p} = \hat{\phi} \quad \text{and} \quad [\partial_n \hat{p}] = \partial_n^{+} \hat{p} = \hat{\lambda}. \tag{4.7}
+$$
+
+On the other hand, the transmission condition (4.6) is closely related to the variational equation of (4.5)
+
+$$
+\begin{align*}
+-\langle \partial_n^+ \hat{p}, \overline{\gamma^+ \hat{q}} \rangle_{\Gamma} &= \int_{\Omega^c} (\nabla \hat{p} \cdot \overline{\nabla \hat{q}} + (s/c)^2 \hat{p} \overline{\hat{q}}) dx \\
+&= d_{\Omega^c}(\hat{p}, \hat{q}; s) \\
+&=: (D_s \hat{p}, \overline{\hat{q}})_{\Omega^c},
+\end{align*}
+$$
+
+where the domain of integration for the sesquilinear form $d_{\Omega^c}(\hat{p}, \hat{q}; s)$ and the associated operator $D_s$ has been indicated explicitly in the definition. Now, using (4.6) we arrive at
+
+$$
+-s^2 \langle \gamma^{-} \hat{\mathbf{u}}, \overline{\gamma^{+} \hat{q}} \mathbf{n} \rangle_{\Gamma} + \frac{1}{\rho_f} (D_s \hat{p}, \overline{\hat{q}})_{\Omega^c} = \langle \hat{d}_4, \overline{\gamma^{+} \hat{q}} \rangle_{\Gamma}.
+$$
+
+Combining the above equality with the weak formulations of the first three equations in (3.13), we can formulate an equivalent variational problem. We will first introduce the space $\mathbb{H} := \mathbf{H}^1(\Omega) \times H^1(\Omega) \times H_*^1(\Omega) \times H^1(\Omega^c)$ and endow it with the norm:
+
+$$
+\|(\hat{\mathbf{u}}, \hat{\theta}, \hat{\varphi}, \hat{p})\|_{\mathbb{H}} := (\|\hat{\mathbf{u}}\|_{1, \Omega}^2 + \|\hat{\theta}\|_{1, \Omega}^2 + \|\hat{\varphi}\|_{1, \Omega}^2 + \|\hat{p}\|_{1, \Omega^c}^2)^{1/2}.
+$$
+
+**The variational problem.** Find $(\hat{\mathbf{u}}, \hat{\theta}, \hat{\varphi}, \hat{p}) \in \mathbb{H}$ satisfying
+
+$$
+\mathcal{A}((\hat{\mathbf{u}}, \hat{\theta}, \hat{\varphi}, \hat{p}), (\hat{\mathbf{v}}, \hat{\vartheta}, \hat{\psi}, \hat{q}); s) = l_d((\hat{\mathbf{v}}, \hat{\vartheta}, \hat{\psi}, \hat{q})), \quad \forall (\hat{\mathbf{v}}, \hat{\vartheta}, \hat{\psi}, \hat{q}) \in \mathbb{H} \quad (4.8)
+$$
+
+where the sesquilinear form on the left hand side of the equation is defined by
+
+$$
+\begin{align*}
+\mathcal{A}((\hat{\mathbf{u}}, \hat{\theta}, \hat{\varphi}, \hat{p}), (\hat{\mathbf{v}}, \hat{\vartheta}, \hat{\psi}, \hat{q}); s) &:= (\mathbf{A}_s \hat{\mathbf{u}}, \overline{\hat{\mathbf{v}}})_\Omega - \zeta(\hat{\theta}, \nabla \cdot \overline{\hat{\mathbf{v}}})_\Omega + (\nabla\hat{\varphi}, e\boldsymbol{\epsilon}(\overline{\hat{\mathbf{v}}}))_\Omega + \langle\gamma^+\hat{p}\,\mathbf{n}, \gamma^-\overline{\hat{\mathbf{v}}}\rangle_\Gamma \\
+&\quad + s\zeta(\nabla\cdot\hat{\mathbf{u}}, \overline{\hat{\vartheta}})_\Omega + T_0^{-1}(B_s\hat{\theta}, \overline{\hat{\vartheta}})_\Omega - s(\mathbf{p}\cdot\nabla\hat{\varphi}, \overline{\hat{\vartheta}})_\Omega \\
+&\quad - (\boldsymbol{e}\boldsymbol{\epsilon}(\hat{\mathbf{u}}), \nabla\overline{\hat{\psi}})_\Omega - (\mathbf{p}\hat{\theta}, \nabla\overline{\hat{\psi}})_\Omega + \epsilon(C_s\hat{\varphi}, \overline{\hat{\psi}})_\Omega \\
+&\quad - s^2\langle\gamma^-\hat{\mathbf{u}}, \overline{\gamma^+\hat{q}}\,\mathbf{n}\rangle_\Gamma + \frac{1}{\rho_f}(D_s\hat{p}, \overline{\hat{q}})_{\Omega^c}
+\end{align*}
+$$
+
+for $(\hat{\mathbf{v}}, \hat{\vartheta}, \hat{\psi}, \hat{q}) \in \mathbb{H}$. The bounded linear functional on the right hand side is defined by
+
+$$
+l_d((\hat{\mathbf{v}}, \hat{\vartheta}, \hat{\psi}, \hat{q})) := (\hat{d}_1, \overline{\hat{\mathbf{v}}})_{\Omega} + (\hat{d}_2, \overline{\hat{\vartheta}})_{\Omega} + (\hat{d}_3, \overline{\hat{\psi}})_{\Omega} + \langle\hat{d}_4, \overline{\gamma^+ \hat{q}}\rangle_{\Gamma},
+$$
+
+for all test functions $(\hat{\mathbf{v}}, \hat{\vartheta}, \hat{\psi}, \hat{q}) \in \mathbb{H}$. By construction, this variational problem is equivalent to the transmission problem (3.1) through (3.7), which in turn is equivalent to (3.13). Consequently, it suffices to show the existence of a solution of (4.8) to guarantee that (3.13) is indeed solvable. We now present the following basic existence and uniqueness results.
+
+**Theorem 4.1.** Assume that the constant pyroelectric moduli vector $\mathbf{p}$ satisfies the constraint
+
+$$
+\| \mathbf{p} \|_{\mathbb{R}^3} < \min\{\epsilon, \frac{c_\epsilon}{T_0}\},
+$$
+
+the variational problem (4.8) has a unique solution $(\hat{\mathbf{u}}, \hat{\theta}, \hat{\varphi}, \hat{p}) \in \mathbb{H}$. Moreover, the following estimate holds:
+
+$$
+\Vert (\hat{\mathbf{u}}, \hat{\theta}, \hat{\varphi}, \hat{p}) \Vert_{\mathbb{H}} \le c_0 \frac{|s|^3}{\sigma \underline{\sigma}^6} \Vert (\hat{d}_1, \hat{d}_2, \hat{d}_3, \hat{d}_4) \Vert_{\mathbb{H}'}.
+\quad (4.9)
+$$
+
+Here and in the sequel, $c_0 > 0$ will denote a constant that may depend on $\rho_f, T_0, c_\varepsilon, \epsilon, \mathbf{p}$ only.
+
+*Proof.* Let $\mathcal{A}((\hat{\mathbf{u}}, \hat{\theta}, \hat{\varphi}, \hat{p}), (\hat{\mathbf{v}}, \hat{\vartheta}, \hat{\psi}, \hat{q}); s)$ be the sesquilinear form defined by the variational equation (4.8). We first show that $\mathcal{A}$ is continuous. It is easy to verify that
+
+$$
+|(\mathbf{A}_s \hat{\mathbf{u}}, \bar{\hat{\mathbf{v}}})_{\Omega} + T_0^{-1}(B_s \hat{\theta}, \bar{\hat{\vartheta}})_{\Omega} + \epsilon(C_s \hat{\varphi}, \bar{\hat{\psi}})_{\Omega} + \frac{1}{\rho_f} (D_s \hat{p}, \bar{\hat{q}})_{\Omega^c}| \le \\
+m_1 \left(\frac{|s|}{\underline{\sigma}}\right)^2 \|(\hat{\mathbf{u}}, \hat{\theta}, \hat{\varphi}, \hat{p})\|_{\mathbb{H}} \|(\hat{\mathbf{v}}, \hat{\vartheta}, \hat{\psi}, \hat{q})\|_{\mathbb{H}}.
+$$
+
+The remaining terms in $\mathcal{A}((\hat{\mathbf{u}}, \hat{\theta}, \hat{\varphi}, \hat{p}), (\hat{\mathbf{v}}, \hat{\vartheta}, \hat{\psi}, \hat{q}); s)$ can be bounded easily using the Cauchy-Schwarz inequality, Poincaré's inequality in $H^1_*(\Omega)$, the trace theorem, and the estimate
+
+$$
+|(\nabla \hat{\varphi}, \mathbf{e} \boldsymbol{\varepsilon}(\hat{\mathbf{v}}))_{\Omega}| \leq e_{max} \| \nabla \hat{\varphi} \|_{\Omega} \| \boldsymbol{\varepsilon}(\hat{\mathbf{v}}) \|_{\Omega}.
+$$
+
+This leads to the continuity estimate
+
+$$
+|\mathcal{A}((\hat{\mathbf{u}}, \hat{\theta}, \hat{\varphi}, \hat{p}), (\hat{\mathbf{v}}, \hat{\vartheta}, \hat{\psi}, \hat{q}); s)| \le (m_1 + m_2) \left(\frac{|s|}{\underline{\sigma}}\right)^2 \|(\hat{\mathbf{u}}, \hat{\theta}, \hat{\varphi}, \hat{p})\|_{\mathbb{H}} \|(\hat{\mathbf{v}}, \hat{\vartheta}, \hat{\psi}, \hat{q})\|_{\mathbb{H}}.
+$$
+
+Here $m_1$ and $m_2$ are constants depending only upon the physical parameters $\zeta$, $T_0$, $\mathbf{p}$, $\epsilon$, and $e_{max} := \max\{e_{ijk} : i, j, k = 1, \dots, 3\}$.
+
+We now introduce the scaling factor
+
+$$
+Z(s) := \begin{pmatrix} s & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & s & 0 \\ 0 & 0 & 0 & s/|s|^2 \end{pmatrix}, \quad (4.10)
+$$
+
+and note that, for $(\hat{\mathbf{u}}, \hat{\theta}, \hat{\varphi}, \hat{p}) \in \mathbb{H}$, we have
+
+$$
+\mathrm{Re}\Big( \bar{s}\left(-\zeta(\hat{\theta}, \nabla \cdot \bar{\hat{\mathbf{u}}})_{\Omega} + (\mathbf{e}^{\top}\nabla\hat{\varphi}, \boldsymbol{\varepsilon}(\bar{\hat{\mathbf{u}}}))_{\Omega} + \langle\gamma^{+}\hat{p}\,\mathbf{n}, \gamma^{-}\bar{\hat{\mathbf{u}}}\rangle_{\Gamma}\right) \\
++ s\left(\zeta(\nabla \cdot \hat{\mathbf{u}}, \bar{\hat{\theta}})_{\Omega} - (\mathbf{e}\boldsymbol{\varepsilon}(\hat{\mathbf{u}}), \nabla\bar{\hat{\varphi}})_{\Omega}\right) - (\bar{s}/|s|^2)\, s^2\langle\gamma^{-}\hat{\mathbf{u}}, \overline{\gamma^{+}\hat{p}}\,\mathbf{n}\rangle_{\Gamma}\Big) = 0.
+$$
+
+Therefore, it follows that
+
+$$
+\begin{equation}
+\begin{split}
+\operatorname{Re} \left( Z(s) \mathcal{A}((\hat{\mathbf{u}}, \hat{\theta}, \hat{\varphi}, \hat{p}), (\hat{\mathbf{u}}, \hat{\theta}, \hat{\varphi}, \hat{p}); s) \right) &= \operatorname{Re} \left( \bar{s}(\mathbf{A}_s \hat{\mathbf{u}}, \bar{\hat{\mathbf{u}}})_\Omega + T_0^{-1}(B_s \hat{\theta}, \bar{\hat{\theta}})_\Omega \right. \\
+&\qquad \left. - s \left( (\mathbf{p} \cdot \nabla \hat{\varphi}, \bar{\hat{\theta}})_\Omega + (\mathbf{p} \hat{\theta}, \nabla \bar{\hat{\varphi}})_\Omega \right) + s\,\epsilon(C_s \hat{\varphi}, \bar{\hat{\varphi}})_\Omega \right. \\
+&\qquad \left. + (\bar{s}/|s|^2) \rho_f^{-1} (D_s \hat{p}, \bar{\hat{p}})_{\Omega^c} \right).
+\end{split}
+\tag{4.11}
+\end{equation}
+$$
+
+By setting to zero some of the entries of $(\hat{\mathbf{u}}, \hat{\theta}, \hat{\varphi}, \hat{p})$ in the right hand side of (4.11),
+it is possible to derive the following
+
+$$
+\left.
+\begin{aligned}
+\operatorname{Re} (\bar{s}(\mathbf{A}_s \hat{\mathbf{u}}, \bar{\hat{\mathbf{u}}})_\Omega) &= \sigma \| \hat{\mathbf{u}} \|_{|s|, \Omega}^2 \\
+\operatorname{Re} (T_0^{-1} (B_s \hat{\theta}, \bar{\hat{\theta}})_\Omega) &= T_0^{-1} (\| \nabla \hat{\theta} \|_\Omega^2 + c_\varepsilon \sigma \| \hat{\theta} \|_\Omega^2) \\
+\operatorname{Re} (-s ((\mathbf{p} \cdot \nabla \hat{\varphi}, \bar{\hat{\theta}})_\Omega + (\mathbf{p} \hat{\theta}, \nabla \bar{\hat{\varphi}})_\Omega)) &\geq -\sigma \| \mathbf{p} \|_{\mathbb{R}^3} (\| \nabla \hat{\varphi} \|_\Omega^2 + \| \hat{\theta} \|_\Omega^2) \\
+\operatorname{Re} (s\epsilon(C_s \hat{\varphi}, \bar{\hat{\varphi}})_\Omega) &= \sigma \epsilon \| \nabla \hat{\varphi} \|_\Omega^2 \\
+\operatorname{Re} ((\bar{s}/|s|^2) \rho_f^{-1} (D_s \hat{p}, \bar{\hat{p}})_{\Omega^c}) &= (\sigma/|s|^2) \rho_f^{-1} \| \hat{p} \|_{|s|, \Omega^c}^2
+\end{aligned}
+\right\}
+\tag{4.12}
+$$
+
+From (4.12) and (4.11), it follows that
+
+$$
+\begin{equation}
+\begin{split}
+\mathrm{Re}\left(Z(s)\mathcal{A}((\hat{\mathbf{u}}, \hat{\theta}, \hat{\varphi}, \hat{p}), (\hat{\mathbf{u}}, \hat{\theta}, \hat{\varphi}, \hat{p}); s)\right)
+&\geq \frac{\sigma}{|s|^2} \left(\|\hat{\mathbf{u}}\|_{|s|, \Omega}^2 + c_1 \|\hat{\theta}\|_{|s|, \Omega}^2 + c_2 \|\hat{\varphi}\|_{1, \Omega}^2 + \|\hat{p}\|_{|s|, \Omega^c}^2\right),
+\end{split}
+\tag{4.13}
+\end{equation}
+$$
+
+where $c_1 = c_\varepsilon^{-1}(c_\varepsilon T_0^{-1} - \|\mathbf{p}\|_{\mathbb{R}^3}) > 0$ and $c_2 = (\epsilon - \|\mathbf{p}\|_{\mathbb{R}^3}) > 0$. Alternatively, in view of (4.2) - (4.4), we have
+
+$$
+|\mathcal{A}((\hat{\mathbf{u}}, \hat{\theta}, \hat{\varphi}, \hat{p}), (\hat{\mathbf{u}}, \hat{\theta}, \hat{\varphi}, \hat{p}); s)| \geq \alpha_0 \frac{\sigma}{|s|^3} \|(\hat{\mathbf{u}}, \hat{\theta}, \hat{\varphi}, \hat{p})\|_{\mathbb{H}}^{2}
+$$
+
+where $\alpha_0 > 0$ is a constant independent of $\sigma$, and $|s|$. Hence, by the Lax-Milgram lemma, there exists a unique solution of the variational problem (4.8).
+
+Having shown that the problem is uniquely solvable, the stability estimate (4.9)
+can be derived from (4.13) and (4.8) as we show next
+
+$$
+\begin{align*}
+& \frac{\sigma}{|s|^2} (\|\hat{\mathbf{u}}\|_{|s|, \Omega}^2 + c_1 \| \hat{\theta} \|_{|s|, \Omega}^2 + c_2 \| \hat{\varphi} \|_{1, \Omega}^2 + \| \hat{p} \|_{|s|, \Omega^c}^2) \\
+&\leq \operatorname{Re} (Z(s) \mathcal{A}((\hat{\mathbf{u}}, \hat{\theta}, \hat{\varphi}, \hat{p}), (\hat{\mathbf{u}}, \hat{\theta}, \hat{\varphi}, \hat{p}); s)) \\
+&\leq |\bar{s}(\hat{d}_1, \bar{\hat{\mathbf{u}}})_\Omega + (\hat{d}_2, \bar{\hat{\theta}})_\Omega + s(\hat{d}_3, \bar{\hat{\varphi}})_\Omega + (\bar{s}/|s|^2) \langle\hat{d}_4, \overline{\gamma^+ \hat{p}}\rangle_\Gamma| \\
+&\leq \frac{|s|}{\underline{\sigma}^2} (|(\hat{d}_1, \hat{\mathbf{u}})_\Omega| + |(\hat{d}_2, \hat{\theta})_\Omega| + |(\hat{d}_3, \hat{\varphi})_\Omega| + |\langle \hat{d}_4, \gamma^+ \hat{p} \rangle_\Gamma|)
+.\end{align*}
+$$
+
+Consequently, using the first inequality of the equivalences (4.2) through (4.4), we
+have the estimate
+
+$$
+(\|\hat{\mathbf{u}}\|_{|s|,\Omega}^2 + \|\hat{\theta}\|_{|s|,\Omega}^2 + \|\hat{\varphi}\|_{1,\Omega}^2 + \|\hat{p}\|_{|s|,\Omega^c}^2)^{1/2} \le c_0 \frac{|s|^3}{\sigma\underline{\sigma}^5} \|(\hat{d}_1, \hat{d}_2, \hat{d}_3, \hat{d}_4, 0)\|_{X'_0}. \tag{4.14}
+$$
+
+Here $c_0$ is a constant depending only on the physical parameters $\rho_f, T_0, c_\varepsilon, \epsilon, \mathbf{p}$. The desired estimate (4.9) can then be easily derived by simplifying the right hand side of the expression above and applying (4.2) through (4.4) to the term on the left hand side. $\square$
+
+The estimate (4.14) will lead us to verify the invertibility of the operator matrix $\mathbb{A}(s)$ defined in (3.14), as we now show.
+
+**Theorem 4.2.** The operator $\mathbb{A}(s) : X \to X'_0$ as defined in (3.14) is invertible. Moreover, we have the estimate:
+
+$$
+\| \mathbb{A}^{-1}(s) \|_{X'_0, X} \leq c_0 \frac{|s|^{3+1/2}}{\sigma \underline{\sigma}^{6+1/2}}. \quad (4.15)
+$$
+
+*Proof.* From (4.7), we see that
+
+$$
+[\gamma\hat{p}] = \gamma^+\hat{p} = \hat{\phi} \quad \text{and} \quad [\partial_n\hat{p}] = \partial_n^+\hat{p} = \hat{\lambda}.
+$$
+
+From these identities it can be shown (see, e.g., [22]) that
+
+$$
+\|\hat{\phi}\|_{H^{1/2}(\Gamma)}^2 = \|\gamma^+\hat{p}\|_{H^{1/2}(\Gamma)}^2 \le c_1 \|\hat{p}\|_{1, \Omega^c}^2 \le c_1 \frac{1}{\underline{\sigma}^2} \|\hat{p}\|_{|s|, \Omega^c}^2 \quad (4.16)
+$$
+
+Similarly, we have
+
+$$
+|\langle \hat{\lambda}, \hat{q}^+ \rangle_\Gamma| = |\langle \partial_n^+ \hat{p}, \hat{q}^+ \rangle_\Gamma| = |d_{\Omega^c}(\hat{p}, \hat{q}; s)| \le \|\hat{p}\|_{|s|, \Omega^c} \|\hat{q}\|_{|s|, \Omega^c} \le c_2 \sqrt{|s|/\underline{\sigma}}\, \|\hat{p}\|_{|s|, \Omega^c} \|\hat{q}^+\|_{H^{1/2}(\Gamma)}
+$$
+
+which implies
+
+$$
+\|\hat{\lambda}\|_{H^{-1/2}(\Gamma)} = \sup_{0 \neq \hat{q}^+ \in H^{1/2}(\Gamma)} \frac{|\langle \partial_n^+ \hat{p}, \hat{q}^+ \rangle_{\Gamma}|}{\|\hat{q}^+\|_{H^{1/2}(\Gamma)}} \le c_2 \sqrt{|s|/\underline{\sigma}} \|\hat{p}\|_{|s|, \Omega^c}. \quad (4.17)
+$$
+
+Above, Bamberger and Ha-Duong's optimal lifting [3, 4] has been used to bound
+the norm $\|\hat{q}\|_{|s|,\Omega^c}$ by $\|\hat{q}^+\|_{H^{1/2}(\Gamma)}$ in (4.17). Then (4.16) and (4.17) yield the estimate
+
+$$
+\frac{1}{2} \left( \frac{\underline{\sigma}^2}{c_1} \| \hat{\phi} \|_{H^{1/2}(\Gamma)}^2 + \frac{\underline{\sigma}}{c_2^2 |s|} \| \hat{\lambda} \|_{H^{-1/2}(\Gamma)}^2 \right) \le \| \hat{p} \|_{|s|, \Omega^c}^2 . \quad (4.18)
+$$
+
+As a consequence of (4.14), it follows from (4.18) that
+
+$$
+\begin{align*}
+& (\underline{\sigma}^2 \| \hat{\mathbf{u}} \|_{1, \Omega}^2 + \underline{\sigma} \| \hat{\theta} \|_{1, \Omega}^2 + \| \hat{\varphi} \|_{\Omega}^2 \\
+&\quad + \frac{1}{2} \left( \frac{\underline{\sigma}^2}{c_1} \| \hat{\phi} \|_{H^{1/2}(\Gamma)}^2 + \frac{\underline{\sigma}}{c_2^2 |s|} \| \hat{\lambda} \|_{H^{-1/2}(\Gamma)}^2 \right))^{1/2} \\
+&\le c_0 \frac{|s|^3}{\sigma\underline{\sigma}^5} \| (\hat{d}_1, \hat{d}_2, \hat{d}_3, \hat{d}_4, \hat{d}_5) \|_{X'_0}
+\end{align*}
+$$
+
+which implies
+
+$$
+\left( \| \hat{\mathbf{u}} \|_{1, \Omega}^2 + \| \hat{\theta} \|_{1, \Omega}^2 + \| \hat{\varphi} \|_{H_*^1(\Omega)}^2 + \| \hat{\phi} \|_{H^{1/2}(\Gamma)}^2 + \| \hat{\lambda} \|_{H^{-1/2}(\Gamma)}^2 \right)^{1/2}
+$$
+
+$$
+\le c_0 \frac{|s|^{3+1/2}}{\sigma\underline{\sigma}^{6+1/2}} \|(\hat{d}_1, \hat{d}_2, \hat{d}_3, \hat{d}_4, \hat{d}_5)\|_{X'_0}. \quad \square
+$$
+---PAGE_BREAK---
+
+# 5 Results in the time domain
+
+Having established the properties of the operators and solutions to our problem in the Laplace domain, we can now return to the time domain and establish analogous results. To state the result that allows us to transfer our Laplace-domain analysis back into the time domain, following [31], let us first define a class of admissible symbols.
+
+The following definition and the proposition immediately after it (an improved version of [39, Proposition 3.2.2]; see also [40]) will be used to transform the Laplace-domain bounds into time-domain statements.
+
+**A class of admissible symbols:** Let $\mathbf{X}$ and $\mathbf{Y}$ be Banach spaces and $\mathcal{B}(\mathbf{X}, \mathbf{Y})$ be the set of bounded linear operators from $\mathbf{X}$ to $\mathbf{Y}$. An operator-valued analytic function $A: \mathbb{C}_+ \to \mathcal{B}(\mathbf{X}, \mathbf{Y})$ is said to belong to the class $\mathcal{A}(\mu, \mathcal{B}(\mathbf{X}, \mathbf{Y}))$, if there exists a real number $\mu$ such that
+
+$$ \|A(s)\|_{\mathbf{X},\mathbf{Y}} \le C_A (\mathrm{Re}(s)) |s|^{\mu} \quad \text{for } s \in \mathbb{C}_+, $$
+
+where the function $C_A : (0, \infty) \to (0, \infty)$ is non-increasing and satisfies
+
+$$ C_A(\sigma) \le \frac{c}{\sigma^m}, \quad \forall \sigma \in (0, 1] $$
+
+for some $m \ge 0$ and $c$ independent of $\sigma$.
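For a concrete scalar illustration (ours, not from the paper): with $\sigma = \operatorname{Re}(s)$, the symbol $A(s) = s^2/\sigma$ belongs to $\mathcal{A}(2, \mathcal{B}(\mathbb{C}, \mathbb{C}))$, since

```latex
\|A(s)\|_{\mathbb{C},\mathbb{C}} = \frac{|s|^2}{\sigma} = C_A(\sigma)\,|s|^2,
\qquad
C_A(\sigma) = \frac{1}{\sigma} \le \frac{c}{\sigma^m}
\ \text{on } (0,1] \ \text{with } c = m = 1,
```

and $C_A$ is non-increasing on $(0, \infty)$, as required.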
+
+**Proposition 5.1.** ([40]) Let $A = \mathcal{L}\{a\} \in \mathcal{A}(k + \alpha, \mathcal{B}(\mathbf{X}, \mathbf{Y}))$ with $\alpha \in [0, 1)$ and $k$ a non-negative integer. If $g \in C^{k+1}(\mathbb{R}, \mathbf{X})$ is causal and its derivative $g^{(k+2)}$ is integrable, then $a * g \in C(\mathbb{R}, \mathbf{Y})$ is causal and
+
+$$ \| (a * g)(t) \|_{\mathbf{Y}} \le 2^{\alpha} C_{\epsilon}(t) C_{A}(t^{-1}) \int_{0}^{t} \| (\mathcal{P}_{2}g^{(k)})(\tau) \|_{\mathbf{X}} d\tau, $$
+
+where
+
+$$ C_{\epsilon}(t) := \frac{1}{2\sqrt{\pi}} \frac{\Gamma(\epsilon/2)}{\Gamma((\epsilon+1)/2)} \frac{t^{\epsilon}}{(1+t)^{\epsilon}}, \quad (\epsilon := 1-\alpha \text{ and } \mu=k+\alpha) $$
+
+and
+
+$$ (\mathcal{P}_2 g)(t) = g + 2\dot{g} + \ddot{g}. $$
+
+The results proven in Section 4—specifically the bounds obtained in terms of the Laplace parameter $s$ and its real part $\sigma$—will now allow us to show that the operators involved belong precisely to one such class of symbols.
+
+We begin with the results of Theorem 4.1: from (4.9), we may write
+
+$$ (\hat{\mathbf{u}}, \hat{\theta}, \hat{\phi}, \hat{p})^\top = A(s)(\hat{d}_1, \hat{d}_2, \hat{d}_3, \hat{d}_4, 0)^\top, $$
+
+$$ \|A(s)|_{X'_0} \|_{X', \mathbb{H}} \le C_A \frac{|s|^3}{\sigma \underline{\sigma}^6}. \qquad (5.1) $$
+---PAGE_BREAK---
+
+Hence, $A(s) \in \mathcal{A}(3, \mathcal{B}(X_0', \mathbb{H})), \text{ and}$
+
+$$
+\begin{align*}
+(\mathbf{u}, \theta, \varphi, p)^{\top} &= \mathcal{L}^{-1}\{A(s) (\hat{d}_1, \hat{d}_2, \hat{d}_3, \hat{d}_4, 0)^{\top}\} \\
+&= \mathcal{L}^{-1}\{A(s)\} * \mathcal{L}^{-1}\{( \hat{d}_1, \hat{d}_2, \hat{d}_3, \hat{d}_4, 0 )^{\top}\} \\
+&= ( \mathcal{L}^{-1}\{A\} * \mathbf{D})(t) \\
+&=: (a * g)(t) \quad \text{according to Proposition 5.1.}
+\end{align*}
+$$
+
+From the estimate of $A(s)$ in (5.1), we have
+
+$$
+\mu = k + \alpha = 3 \quad \text{implies} \quad k = 3, \alpha = 0 \quad \text{and} \quad \varepsilon = 1 - \alpha = 1.
+$$
+
+Thus, we have established the following theorem.
+
+**Theorem 5.1.** Let $\mathbb{H} := \mathbf{H}^1(\Omega) \times \mathbf{H}^1(\Omega) \times H_*^1(\Omega) \times \mathbf{H}^1(\Omega^c)$. If
+
+$$
+\mathbf{D}(t) := \mathcal{L}^{-1} \left\{ (\hat{d}_1, \hat{d}_2, \hat{d}_3, \hat{d}_4, 0)^{\top} \right\} \in C^4(\mathbb{R}, X'_0)
+$$
+
+is causal and its derivative $\mathbf{D}^{(5)}$ is integrable, then $(\mathbf{u}, \theta, \varphi, p)^{\top} \in C(\mathbb{R}, \mathbb{H})$ is causal
+and
+
+$$
+\|(\mathbf{u}, \theta, \varphi, p)^{\top}(t)\|_{\mathbb{H}} \le c_0 \frac{t^2}{1+t} \max\{1, t^6\} \int_0^t \|(\mathcal{P}_2 \mathbf{D}^{(3)})(\tau)\| d\tau
+$$
+
+for some constant $c_0 > 0$, where $(\mathcal{P}_2\mathbf{D})(t) = \mathbf{D} + 2\dot{\mathbf{D}} + \ddot{\mathbf{D}}.$
+
+Similarly, from Theorem 4.2, we have
+
+$$
+(\hat{\mathbf{u}}, \hat{\theta}, \hat{\varphi}, \hat{\phi}, \hat{\lambda})^\top = A^{-1}(s) (\hat{d}_1, \hat{d}_2, \hat{d}_3, \hat{d}_4, \hat{d}_5)^\top,
+$$
+
+from which, using (4.15), we infer that
+
+$$
+\|\mathbb{A}^{-1}(s)|_{X_0'}\|_{X', \mathbb{H}} \le c_0 \frac{|s|^{3+1/2}}{\sigma\underline{\sigma}^{6+1/2}},
+$$
+
+hence $\mathbb{A}^{-1}(s) \in \mathcal{A}(3\frac{1}{2}, \mathcal{B}(X_0', \mathbb{H}))$. Applying Proposition 5.1 with
+
+$$
+\mu = (k + \alpha) = 3\frac{1}{2}, k = 3, \alpha = 1/2, \varepsilon = 1 - \alpha = 1/2,
+$$
+
+then yields the following theorem.
+
+**Theorem 5.2.** Let $X := \mathbf{H}^1(\Omega) \times \mathbf{H}^1(\Omega) \times H_*^1(\Omega) \times \mathbf{H}^{1/2}(\Gamma) \times \mathbf{H}^{-1/2}(\Gamma)$. If
+
+$$
+\mathbf{D}(t) := \mathcal{L}^{-1}\{(\hat{d}_1, \hat{d}_2, \hat{d}_3, \hat{d}_4, \hat{d}_5)^{\top}\}(t) \in C^4(\mathbb{R}, X'_0)
+$$
+
+is causal, and its derivative $\mathbf{D}^{(5)}$ is integrable, then $(\mathbf{u}, \theta, \varphi, \phi, \lambda)^{\top} \in C(\mathbb{R}, X)$ is
+causal, and there holds the estimate
+---PAGE_BREAK---
+
+$$
+\begin{gather*}
+\|(\mathbf{u}, \theta, \varphi, \phi, \lambda)^\top(t)\|_{X} \le c_{1/2} \frac{t^{1+1/2}}{(1+t)^{1/2}} \max\{1, t^{6+1/2}\} \int_0^t \|(\mathcal{P}_2 \mathbf{D}^{(3)})(\tau)\|_{X'} d\tau, \\
+(\mathcal{P}_2 \mathbf{D})(t) := \mathbf{D} + 2\dot{\mathbf{D}} + \ddot{\mathbf{D}}.
+\end{gather*}
+$$
+
+for some constant $c_{1/2} > 0$.
+
+In view of (3.8) and the inverse of $\mathbb{A}(s)$, we see that $\hat{\mathbf{u}}, \hat{\theta}, \hat{\varphi}$ and $\hat{p}$ are simply solutions of the following system
+
+$$
+(\hat{\mathbf{u}}, \hat{\theta}, \hat{\varphi}, \hat{p})^\top = A_2(s) \circ \mathbb{A}^{-1}(s) (\hat{d}_1, \hat{d}_2, \hat{d}_3, \hat{d}_4, 0)^\top, \quad (5.2)
+$$
+
+where
+
+$$
+A_2(s) = \begin{pmatrix}
+1 & 0 & 0 & 0 & 0 \\
+0 & 1 & 0 & 0 & 0 \\
+0 & 0 & 1 & 0 & 0 \\
+0 & 0 & 0 & D(s) & -S(s)
+\end{pmatrix}.
+$$
+
+As a consequence of Theorem 5.1, $A_2(s) \circ \mathbb{A}^{-1}(s)$ belongs to the class $\mathcal{A}(3 + 1/2, \mathcal{B}(X'_0, \mathbb{H}))$. However, we may also compute the index of the operator matrix $A_2(s)$ directly. For $\hat{\phi} \in H^{1/2}(\Gamma)$, let $\hat{u} = D(s)\hat{\phi}$ in $\mathbb{R}^3 \setminus \Gamma$; then
+
+$$
+\begin{align*}
+\sigma \|\hat{u}\|_{|s|, \mathbb{R}^3 \setminus \Gamma}^2 &= \operatorname{Re} \left( \bar{s} \left\langle W \hat{\phi}, \bar{\hat{\phi}} \right\rangle_{\Gamma} \right) \\
+&\le |s| \|W \hat{\phi}\|_{H^{-1/2}(\Gamma)} \| \hat{\phi} \|_{H^{1/2}(\Gamma)} \\
+&\le c_1 \left( \frac{|s|}{\sigma} \right)^{1/2} |s| \| \hat{u} \|_{|s|, \mathbb{R}^3 \setminus \Gamma} \| \hat{\phi} \|_{H^{1/2}(\Gamma)}.
+\end{align*}
+$$
+
+Hence, from (4.4), we obtain
+
+$$
+\|D(s)\hat{\phi}\|_{H^1(\mathbb{R}^3\setminus\Gamma)} \leq c_1 \frac{|s|^{3/2}}{\sigma\underline{\sigma}^{3/2}} \|\hat{\phi}\|_{H^{1/2}(\Gamma)},
+$$
+
+which implies $D(s) \in \mathcal{A}(3/2, \mathcal{B}(H^{1/2}(\Gamma), H^1(\mathbb{R}^3 \setminus \Gamma)))$. Similarly, for $\hat{\lambda} \in H^{-1/2}(\Gamma)$,
+if we set $\hat{u} = S(s)\hat{\lambda}$ in $\mathbb{R}^3 \setminus \Gamma$, then we may show that
+
+$$
+\| S(s) \hat{\lambda} \|_{H^1(\mathbb{R}^3 \setminus \Gamma)} \leq c_2 \frac{|s|}{\sigma \underline{\sigma}^2} \| \hat{\lambda} \|_{H^{-1/2}(\Gamma)} .
+$$
+
+That is, $S(s) \in \mathcal{A}(1, \mathcal{B}(H^{-1/2}(\Gamma), H^1(\mathbb{R}^3 \setminus \Gamma)))$, and hence
+
+$$
+\|A_2(s)\|_{X,\mathbb{H}} \le c_3 \frac{|s|^{1+1/2}}{\sigma\underline{\sigma}^{2+1/2}}.
+$$
+
+Following [31], applying the composition rule and using the estimate of $\mathbb{A}^{-1}(s)$ in (4.15), we find that the matrix of operators in (5.2) has index $\mu = (1+1/2)+(3+1/2) = 5$. However, this only gives an upper bound for the actual index of $A_2(s) \circ \mathbb{A}^{-1}(s)$ in (5.2).
+---PAGE_BREAK---
+
+## 6 Concluding remarks
+
+A few remarks are in order. This paper deals with a time-dependent wave-thermopiezoelectric structure interaction problem via the time-dependent boundary-field equation approach. With the help of an appropriate scaling factor $Z(s)$ in (4.10), we are able to establish the existence and uniqueness of solutions to the problem. For simplicity, in this paper we only impose natural boundary conditions for the corresponding partial differential equations in the interior domain $\Omega$; one may, of course, impose mixed boundary conditions as well. Moreover, the results presented in this communication generalize those in [23] for elastic-acoustic interactions, [38] for acoustic-piezoelectric interactions, and [24] for acoustic-thermoelastic interactions, since all those results can be recovered from the present ones by setting selected entries of the piezoelectric tensor or thermal constants to zero. The present work also complements the recent articles [25, 26], where boundary integral equations of the first kind are studied for the dynamic thermoelastic equations.
+
+These results can be used to simulate wave-structure interactions numerically using the now well-known convolution quadrature (CQ) method. Numerical experiments based on CQ for the special cases of wave-structure interaction listed above are available in [6, 17, 23, 24, 38]. The numerical treatment of the operators in the present paper will be reported in a separate communication.
+
+## References
+
+[1] D. J. Acheson. *Elementary Fluid Dynamics*, Oxford Applied Mathematics and Computing Science Series. Clarendon Press, Oxford, 1990.
+
+[2] S. Amini, P.J. Harris. Boundary element and finite element methods for the coupled fluid interaction problem. In: *BEM X*, C.A. Brebbia (ed.), CMP, Springer-Verlag, 509–520, 1988.
+
+[3] A. Bamberger and T. Ha Duong. Formulation Variationnelle Espace-Temps pour le Calcul par Potentiel Retardé de la Diffraction d'une Onde Acoustique (I). *Math. Meth. Appl. Sci.*, 8(3): 405–435, 1986.
+
+[4] A. Bamberger and T. Ha Duong. Formulation Variationnelle pour le Calcul de la Diffraction d'une Onde Acoustique par une Surface Rigide. *Math. Meth. Appl. Sci.*, 8(4): 598–608, 1986.
+
+[5] J. Bielak and R.C. MacCamy. Symmetric finite element and boundary integral coupling methods for fluid-solid interaction I. *Quart. J. Appl. Math.*, 49: 107-119, 1991.
+
+[6] T. S. Brown, T. Sánchez-Vizuet and F-J. Sayas. Evolution of a semidiscrete system modeling the scattering of acoustic waves by a piezoelectric solid. *ESAIM: Mathematical Modeling and Numerical Analysis (M2AN)*, 52(2):423-455, 2018.
+
+[7] A.J. Burton, G.F. Miller. The application of integral equation methods to the numerical solution of some exterior boundary-value problems. *Proc. Roy. Soc. London*, Ser. A, 323: 201–210, 1971.
+---PAGE_BREAK---
+
+[8] F. Çakoni, G.C. Hsiao. Mathematical model of the Interaction problem between electromagnetic field and elastic body, in *Acoustics, Mechanics, and the Related Topics of Mathematical Analysis*, A. Wirgin, ed., World Scientific Publishing Co., New Jersey, 48–54, 2002.
+
+[9] M. Costabel. Time-dependent problems with the boundary integral equation method. Chapter 25, 703–721, Volume 1 *Fundamentals*. In *Encyclopedia of Computational Mechanics*. John Wiley & Sons, Ltd. 2004.
+
+[10] S. Domínguez, N. Nigam, and J. Sun. Revisiting the Jones Eigenproblem in fluid-structure Interaction. *SIAM Journal on Applied Mathematics*, 79(6): 2385–2408, 2019.
+
+[11] O. von Estorff and H. Antes. On FEM-BEM coupling for fluid-structure interaction analysis in the time domain. *Int. J. Numer. Meth. Engng*, 31: 1151–1168, 1991.
+
+[12] J. Elschner, G.C. Hsiao, A. Rathsfeld. On the direct and inverse problems in fluid-structure interaction. *ICIAM 2007 Proceedings in Applied Mathematics and Mechanics*, 7: 1130501–1130502, 2007.
+
+[13] J. Elschner, G.C. Hsiao, A. Rathsfeld. An inverse problem for fluid-solid interaction. *Inverse Problems and Imaging*, 2: 83–120, 2008.
+
+[14] G. Fichera. *Existence theorems in elasticity theory*. 347–389, Volume 2 Handbuch der Physik. Springer, 1972.
+
+[15] G.N. Gatica, G.C. Hsiao, S. Meddahi. A coupled mixed finite element method for interaction problem between electromagnetic field and elastic body *SIAM Journal on Numerical Analysis*, 48:1338–1368, 2010
+
+[16] M.A. Hamdi, P. Jean. A mixed functional for the numerical resolution of wave-structure interaction problems. In: *Aero- and Hydro-Acoustic IUTAM Symposium*, G. Comte-Bellot and J.E. William (eds.) Springer-Verlag, 1985, 269–276.
+
+[17] M. E. Hassell, T. Qiu, T. Sánchez-Vizuet, and F-J. Sayas. A new and improved analysis of the time domain boundary integral operators for acoustics. *Journal of Integral Equations and Applications*, 29(1):107–136, 2017.
+
+[18] G.C. Hsiao. On the boundary-field equation methods for fluid-structure interactions. In: *Problems and Methods in Mathematical Physics*, L. Jentsch and F. Tröltzsch (eds.), Teubner-Texte zur Mathematik, Band 134, B. G. Teubner Verlagsgesellschaft, Stuttgart, Leipzig, 1994, 79–88.
+
+[19] G.C. Hsiao, R.E. Kleinman and L.S. Schuetz. On variational formulations of boundary value problems for fluid-solid interactions. In: *Elastic Wave Propagation, I.U.T.A.M.-I.U.P.A.P. Symposium*, M.F. McCarthy and M.A. Hayes (eds.), Elsevier Science Publishers B.V. (North-Holland), 1989, 321–326.
+
+[20] G.C. Hsiao, R.E. Kleinman and G.F. Roach. Weak solutions of fluid-solid interaction problems. *Math. Nachr.*, 218:139–163, 2000
+
+[21] G.C. Hsiao, N. Nigam. A transmission problem for fluid-structure interaction in the exterior of a thin domain, *Adv. Differential Equations*, 8(11): 1281–1318, 2003.
+---PAGE_BREAK---
+
+[22] G.C. Hsiao, F.-J. Sayas and R.J. Weinacht. Time-Dependent fluid-structure Interaction. *Math. Meth. Appl. Sci.*, 40: 486–500, 2017.
+Article first published online 19 Mar 2015, DOI: 10.10137/14099173X
+
+[23] G. C. Hsiao, T. Sánchez-Vizuet, and F-J. Sayas. Boundary and coupled boundary-finite element methods for transient wave-structure interaction. *IMA Journal of Numerical Analysis*, 37(1):237–265, 2016.
+
+[24] G. C. Hsiao, T. Sánchez-Vizuet, F-J. Sayas and R.J. Weinacht. A time-dependent fluid-thermoelastic solid Interaction. *IMA Journal of Numerical Analysis*, 39(2):924–956, 2018.
+
+[25] G. C. Hsiao and T. Sánchez-Vizuet. Boundary integral formulations for transient linear thermoelasticity with combined-type boundary conditions. (*Submitted*), 2020. https://arxiv.org/abs/2010.04909.
+
+[26] G. C. Hsiao and T. Sánchez-Vizuet. Time-domain boundary integral methods in linear thermoelasticity. *SIAM Journal on Mathematical Analysis*, 52(3):2463–2490, 2020. Dedicated to the memory of Francisco-Javier Sayas.
+
+[27] G. C. Hsiao and R. J. Weinacht. A representation formula for the wave equation revisited. *Applicable Analysis: An International Journal*, 91(2): 371–380, 2012.
+
+[28] G.C. Hsiao and R.J. Weinacht. Transparent boundary conditions for the wave equation—a Kirchhoff point of view. *Math. Meth. Appl. Sci.*, 36: 2011-2017, 2013
+
+[29] G.C. Hsiao and W.L. Wendland. *Boundary Integral Equations, Applied Mathematical Sciences*, 164 Springer, Berlin, 2008.
+
+[30] V. D. Kupradze. *Three-dimensional Problems of the Mathematical Theory of Elasticity and Thermoelasticity*, North-Holland Series in *Applied Mathematics and Mechanics*, 164 North-Holland Publishing Company, Amsterdam, New York, Oxford, 1979.
+
+[31] A. R. Laliena and F-J. Sayas. Theoretical aspects of the application of convolution quadrature to scattering of acoustic waves. *Numer. Math.*, 112: 637–678, 2009.
+
+[32] Ch. Lubich. On the multistep time discretization of linear initial-boundary value problems and their boundary integral equations. *Numer. Math.*, 67(3):365–389, 1994.
+
+[33] Ch. Lubich and R. Schneider. Time discretization of parabolic boundary integral equations. *Numer. Math.*, 63(4):455–481, 1992.
+
+[34] C.J. Luke and P.A. Martin. Fluid-solid interaction: acoustic scattering by a smooth elastic obstacle *SIAM J. Appl. Math.*, 55: 904–922, 1995.
+
+[35] R.D. Mindlin. On the equations of motion of piezoelectric crystals. In *Problems of Continuous Media* SIAM, Philadelphia, 1961
+
+[36] W. Nowacki. Some general theorems of thermopiezoelectricity *Journal of Thermal Stresses*, 1:171–182, 1978.
+
+[37] W. Nowacki. Electromagnetic Interaction in Elastic Solids. In *International Centre for Mechanical Sciences*, ed. by H. Parkus Springer-Verlag, Wien.
+---PAGE_BREAK---
+
+[38] T. Sánchez-Vizuet and F-J. Sayas. Symmetric boundary-finite element discretization of time dependent acoustic scattering by elastic obstacles with piezoelectric behavior. *Journal of Scientific Computing*, 70(3):1290-1315, 2017.
+
+[39] F-J. Sayas. *Retarded potentials and time domain boundary integral equations: a road-map*. Springer Series in Computational Mathematics, 50. Springer, 2016.
+
+[40] F-J. Sayas. **Errata to: Retarded potentials and time domain boundary integral equations: a road-map.** https://team-pancho.github.io/documents/ERRATA.pdf
+
+[41] H.A. Schenck. Improved integral formulation for acoustic radiation problems. *J. Acoust. Soc. Am.*, 44: 41–48, 1968.
+
+[42] H.A. Schenck, G.W. Benthien. The application of a coupled finite-element boundary-element technique to large-scale structure acoustic problems. In: *Proceedings of the Eleventh International Conference on Boundary Element Methods*, Vol. 2, 309–318, 1989.
+
+[43] J. Serrin. *Mathematical Principles of Classical Fluid Mechanics*, 3–26, Volume 8/1 Handbuch der Physik, Springer, 1959.
\ No newline at end of file
diff --git a/samples/texts_merged/5750798.md b/samples/texts_merged/5750798.md
new file mode 100644
index 0000000000000000000000000000000000000000..0912a78029fdeeb461333e8b0f6631353cde8ec2
--- /dev/null
+++ b/samples/texts_merged/5750798.md
@@ -0,0 +1,400 @@
+
+---PAGE_BREAK---
+
+Improving analogical extrapolation using case pair
+competence
+
+Jean Lieber, Emmanuel Nauer, Henri Prade
+
+► To cite this version:
+
+Jean Lieber, Emmanuel Nauer, Henri Prade. Improving analogical extrapolation using case pair competence. Case-Based Reasoning Research and Development, 27th International Conference (ICCBR-2019), Sep 2019, Otzenhausen, France. [hal-02370747](https://hal.inria.fr/hal-02370747)
+
+HAL Id: hal-02370747
+
+https://hal.inria.fr/hal-02370747
+
+Submitted on 19 Nov 2019
+
+---PAGE_BREAK---
+
+Improving analogical extrapolation using case
+pair competence
+
+Jean Lieber¹, Emmanuel Nauer¹, and Henri Prade²
+
+¹ Université de Lorraine, CNRS, Inria, LORIA, F-54000 Nancy, France
+
+² IRIT, Université de Toulouse, France
+
+**Abstract.** An analogical proportion is a quaternary relation that is to be read “*a* is to *b* as *c* is to *d*”, verifying some symmetry and permutation properties. As can be seen, it involves a pair of pairs. Such a relation is at the basis of an approach to case-based reasoning called analogical extrapolation, which consists in retrieving three cases forming an analogical proportion with the target problem in the problem space and then finding a solution to this problem by solving an analogical equation in the solution space. This paper studies how the notion of competence of pairs of source cases can be estimated and used in order to improve extrapolation. A preprocessing of the case base associates to each case pair a competence given by two scores: the support and the confidence of the case pair, computed on the basis of other case pairs forming an analogical proportion with it. An evaluation in a Boolean setting shows that using case pair competences significantly improves the results of the analogical extrapolation process.
+
+**Keywords:** analogical proportion, analogical inference, case-based rea-
+soning, competence, extrapolation
+
+# 1 Introduction
+
+In a recent paper [15], the authors have advocated that reasoning about cases (or case-based reasoning, CBR [18, 17]) need not be based only on similarity-based reasoning, looking for the nearest solved cases, but may also use analogical proportions for extrapolation purposes. Extrapolation is based on analogical inference, which uses triples of cases for building the solution of a fourth (new) case through an adaptation mechanism [2].
+
+Usually, several triples in the case base can be used for predicting the solution of the fourth case, and predictions may diverge. In fact, it has been established for Boolean features that such an inference makes no error (thus all the triples agree on the same prediction) if and only if the function that associates the solution to the description of a case is an affine Boolean function [5]. This is why, when the function is not assumed to be affine, a voting procedure is organized between the predicting triples.
+
+Such a procedure is quite brute force and does not really learn from the
+case base. Indeed, it may happen that some triples in the base fail to predict
+---PAGE_BREAK---
+
+the correct answer of another case of the case base. In this paper, we propose to take into account this kind of information for restricting the number of triples used for making a prediction in a meaningful way.
+
+The paper is organized as follows. The next section provides the necessary background on analogical proportions and the notations about CBR used throughout the paper. Section 3 discusses how to restrict the set of triples allowed to participate in a given prediction. Section 4 reports experiments showing the gain in accuracy of the new inference procedure. Section 5 discusses related work, before concluding.
+
+## 2 Preliminaries
+
+This section presents first the formal framework of this study: the nominal representations and, more specifically, the representation by tuples of Boolean values. Then, it recalls some notions and gives some notations about analogical proportions and about case-based reasoning.
+
+### 2.1 Nominal representations and Boolean setting
+
+Feature-value representations are often used in CBR (see, e.g., [13]). A nominal representation is a feature-value representation where the range of each feature is finite (and, typically, small). More formally, let $\mathcal{U}_1, \mathcal{U}_2, \dots, \mathcal{U}_p$ be $p$ finite sets and $\mathcal{U} = \mathcal{U}_1 \times \mathcal{U}_2 \times \dots \times \mathcal{U}_p$. A feature on $\mathcal{U}$ is one of the $p$ projections $(x_1, x_2, \dots, x_p) \in \mathcal{U} \mapsto x_i \in \mathcal{U}_i$ ($i \in \{1, 2, \dots, p\}$).
+
+A Boolean representation is a nominal representation where $\mathcal{U}_1 = \mathcal{U}_2 = \dots = \mathcal{U}_p = \mathbb{B}$, where $\mathbb{B} = \{0, 1\}$ is the set of Boolean values: the value “false” is assimilated to the integer 0, and “true” is assimilated to 1. The Boolean operators $\neg$, $\wedge$ and $\vee$ are defined, for $a, b \in \mathbb{B}$, by $\neg a = 1-a$, $a \wedge b = \begin{cases} 1 & \text{if } a = b = 1 \\ 0 & \text{otherwise} \end{cases}$ and $a \vee b = \neg(\neg a \wedge \neg b)$. An element of $\mathbb{B}^p$ is denoted without commas and parentheses, e.g., $01101$ stands for $(0, 1, 1, 0, 1)$.
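These definitions translate directly into a few lines of Python (a sketch; the function names `NOT`, `AND` and `OR` are ours):

```python
# Boolean operators on B = {0, 1}, exactly as defined in the text
def NOT(a):
    return 1 - a           # negation: ¬a = 1 - a

def AND(a, b):
    return 1 if a == b == 1 else 0

def OR(a, b):
    return NOT(AND(NOT(a), NOT(b)))  # a ∨ b = ¬(¬a ∧ ¬b), by De Morgan

print([OR(a, b) for a in (0, 1) for b in (0, 1)])  # [0, 1, 1, 1]
```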
+
+### 2.2 Analogical proportions
+
+Given a set $\mathcal{U}$, an analogical proportion on $\mathcal{U}$ is a quaternary relation on $\mathcal{U}$, denoted by $a:b::c:d$ for $(a,b,c,d) \in \mathcal{U}^4$, and satisfying the following postulates (for $a,b,c,d \in \mathcal{U}$):
+
+(Reflexivity) $a:b::a:b$.
+
+(Symmetry) If $a:b::c:d$ then $c:d::a:b$.
+
+(Exchange of the means) If $a:b::c:d$ then $a:c::b:d$.
+
+Given a finite set $\mathcal{U}_i$, the relation defined below is an analogical proportion:
+
+$$ a:b::c:d \stackrel{\text{def}}{=} (a=b \land c=d) \lor (a=c \land b=d) $$
+---PAGE_BREAK---
+
+Therefore, in the nominal representation, the 4-tuples $(a, b, c, d)$ in analogical proportion have one of the three following forms: $(s, s, s, s)$, $(s, t, s, t)$ and $(s, s, t, t)$ for $s, t \in \mathcal{U}_i$. In particular, if $\mathcal{U}_i = \mathbb{B}$, the set of $(\mathbf{a}, \mathbf{b}, \mathbf{c}, \mathbf{d}) \in \mathbb{B}^4$ such that $\mathbf{a}:\mathbf{b}::\mathbf{c}:\mathbf{d}$ holds is $\{0000, 0011, 0101, 1010, 1100, 1111\}$.
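The six Boolean patterns can be recovered by brute-force enumeration (a sketch; the helper `ap` is our direct transcription of the definition above):

```python
from itertools import product

def ap(a, b, c, d):
    # Nominal analogical proportion: (a = b and c = d) or (a = c and b = d)
    return (a == b and c == d) or (a == c and b == d)

# Enumerate all Boolean 4-tuples satisfying a:b::c:d
patterns = {"".join(map(str, t)) for t in product((0, 1), repeat=4) if ap(*t)}
print(sorted(patterns))  # ['0000', '0011', '0101', '1010', '1100', '1111']
```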
+
+Given a finite $\mathcal{U} = \mathcal{U}_1 \times \mathcal{U}_2 \times \dots \times \mathcal{U}_p$ the following analogical proportion can be defined:
+
+$$ a:b::c:d \stackrel{\text{def}}{=} a_1:b_1::c_1:d_1 \land a_2:b_2::c_2:d_2 \land \dots \land a_p:b_p::c_p:d_p $$
+
+Given $a,b,c \in \mathcal{U}$, solving the *analogical equation* $a:b::c:y$ aims at finding the $y \in \mathcal{U}$ such that this relation holds. In a nominal representation, such an equation has 0 or 1 solution. More precisely:
+
+- If $a=b$, the solution is $y=c$.
+
+- If $a=c$, the solution is $y=b$.
+
+- Otherwise, $a:b::c:y$ has no solution.
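Componentwise, this equation-solving procedure can be sketched in Python (the function name is ours; problems and solutions are represented as tuples of nominal values):

```python
def solve_equation(a, b, c):
    """Solve a:b::c:y componentwise for nominal tuples; None if unsolvable."""
    y = []
    for ai, bi, ci in zip(a, b, c):
        if ai == bi:
            y.append(ci)      # a = b, so y = c
        elif ai == ci:
            y.append(bi)      # a = c, so y = b
        else:
            return None       # no solution for this component
    return tuple(y)

print(solve_equation((0, 1, 1), (0, 1, 0), (1, 1, 1)))  # (1, 1, 0)
print(solve_equation((0,), (1,), (1,)))                 # None: a differs from both b and c
```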
+
+## 2.3 Notations and assumptions on CBR
+
+Let $\mathcal{P}$ and $\mathcal{S}$ be two sets. A problem $\mathbf{x}$ is by definition an element of $\mathcal{P}$ and a solution $\mathbf{y}$, an element of $\mathcal{S}$. If $a \in \mathcal{P} \times \mathcal{S}$, then $\mathbf{x}^a$ and $\mathbf{y}^a$ denote its problem and solution parts: $a = (\mathbf{x}^a, \mathbf{y}^a)$. Let $\rightsquigarrow$ be a relation on $\mathcal{P} \times \mathcal{S}$. For $(\mathbf{x}, \mathbf{y}) \in \mathcal{P} \times \mathcal{S}$, $\mathbf{x} \rightsquigarrow \mathbf{y}$ is read “$\mathbf{x}$ has for solution $\mathbf{y}$” or “$\mathbf{y}$ solves $\mathbf{x}$”. A case is a pair $(\mathbf{x}, \mathbf{y})$ such that $\mathbf{x} \rightsquigarrow \mathbf{y}$. The aim of a CBR system is to solve problems, i.e., it should approximate the relation $\rightsquigarrow$: given $\mathbf{x}^{\mathrm{tgt}} \in \mathcal{P}$ (the target problem), it aims at proposing $\mathbf{y}^{\mathrm{tgt}} \in \mathcal{S}$ such that it is plausible that $\mathbf{x}^{\mathrm{tgt}} \rightsquigarrow \mathbf{y}^{\mathrm{tgt}}$. For this purpose, a finite set of cases, called the case base and denoted by CB, is used. An element of CB is called a source case. Besides the case base, other knowledge containers are often used [17], but they are not considered in this paper.
+
+The classical way of defining a CBR process consists in selecting a set of $k$ source cases related to $\mathbf{x}^{\mathrm{tgt}}$ (retrieve phase) and solving $\mathbf{x}^{\mathrm{tgt}}$ with the help of the retrieved cases (reuse phase). Other steps are considered in the classical 4 Rs model [1], but not in this paper. In [15], three approaches are presented for $k \in \{1, 2, 3\}$. The approach for $k=3$, called analogical extrapolation, is recalled in the next section.
+
+# 3 Improving extrapolation thanks to case pair competence
+
+This section presents the proposed approach. First, it is shown how a notion of competence associated to case pairs can be used to improve extrapolation, an approach to CBR based on analogical proportions. Then, this notion of competence is formally defined. Finally, strategies for exploiting case pair competence are described.
+---PAGE_BREAK---
+
+## 3.1 Principles
+
+The analogical proportion-based inference principle [20] can be stated as follows (using the notations on CBR introduced above; $a = (x^a, y^a)$, $b = (x^b, y^b)$, $c = (x^c, y^c)$ and $d = (x^d, y^d)$ are four cases):
+
+$$ \frac{x^a:x^b::x^c:x^d \text{ holds}}{y^a:y^b::y^c:y^d \text{ holds}} $$
+
+In order to solve a new problem $x^{tgt}$, this leads to searching for all triples of source cases $(a, b, c)$ such that $x^a:x^b::x^c:x^{tgt}$ holds and such that the equation $y^a:y^b::y^c:y$ is solvable. Let $\mathcal{T}$ be the set of all these triples. The implementation of this inference pattern then uses a vote among all triples of $\mathcal{T}$ and chooses the solution $y$ found for the largest number of triples. This is the principle called analogical extrapolation (or, simply, extrapolation) in [15].
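A brute-force sketch of this voting procedure (ours, not the paper's implementation; it enumerates all ordered triples of distinct source cases, which is cubic in the size of the case base, and the toy case base below is an assumption for illustration):

```python
from collections import Counter
from itertools import permutations

def ap(a, b, c, d):
    # Componentwise analogical proportion on tuples
    return all((p == q and r == s) or (p == r and q == s)
               for p, q, r, s in zip(a, b, c, d))

def solve_eq(a, b, c):
    # Solve a:b::c:y componentwise; None if some component has no solution
    y = []
    for ai, bi, ci in zip(a, b, c):
        if ai == bi:
            y.append(ci)
        elif ai == ci:
            y.append(bi)
        else:
            return None
    return tuple(y)

def extrapolate(case_base, x_tgt):
    """Vote among all source-case triples (a, b, c) with x^a:x^b::x^c:x_tgt."""
    votes = Counter()
    for (xa, ya), (xb, yb), (xc, yc) in permutations(case_base, 3):
        if ap(xa, xb, xc, x_tgt):
            y = solve_eq(ya, yb, yc)
            if y is not None:
                votes[y] += 1
    return votes.most_common(1)[0][0] if votes else None

# Toy case base where the underlying function is the first problem feature
cb = [((0, 0), (0,)), ((0, 1), (0,)), ((1, 0), (1,))]
print(extrapolate(cb, (1, 1)))  # (1,): correct, since the function is affine
```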
+
+In the following, we assume for simplicity that all the features are nominal (e.g., Boolean). When there is only one feature for the solutions, the problem-solving task is a classification task (finding the class $y^{tgt} \in S$ to be associated with $x^{tgt}$). When there are several features, one can handle them one by one only if they are logically independent; otherwise the vote should be organized between the whole vectors describing the different solutions. In the following, we assume independence, and we consider one of the components $y_i$ of a solution $y$ (thus, the index $i$ is useless: $y_i$ is denoted by $y$).
+
+Still, one may wonder whether all triples of $\mathcal{T}$ involved in a vote for a particular prediction have the same legitimacy. Indeed, one may learn from the case base by observing that, when trying to predict the solution of one of its cases from the rest of the examples, some triples may make a wrong prediction, as suggested in [16]. The situation is better analyzed in terms of pairs, as shown now.
+
+Indeed, look at Table 1. It exhibits three Boolean pairs such that $a:b::c:d$ and $a':b'::c:d$ hold in all columns, except the last one (column 'S', as in solution). Note that the 'D' columns (first two columns, 'D' as in disagreement) show the possible patterns expressing that $a$ and $b$ differ in the same way as $c$ and $d$ and as $a'$ and $b'$.¹ The 'A' columns (as in agreement) show all the ways $a$ and $b$ agree, while $c$ and $d$ agree, and $a'$ and $b'$ also agree, maybe in different manners.² If we hide the value of $d$ in column 'S' and try to predict it from the other values in this column, the equation $a:b::c:y$ yields the correct result (i.e., $0:1::0:y$ gives $y = 1$, the value of $d$ in column 'S'), while the equation $a':b'::c:y$ gives a wrong result (i.e., $0:0::0:y$ gives $y = 0$, whereas $d = 1$ in column 'S').
+
+So, for each pair, like $(a, b)$ or $(a', b')$ in the table, one may count the number of times the pair leads to a correct and to a wrong prediction for an
+
+¹ *D*(0/1) indicates the disagreement between **a** and **b** (respectively between **c** and **d** and between **a′** and **b′**) when the former is equal to 0 and the latter is equal to 1. *D*(1/0) is the reverse disagreement.
+
+² *A*(*u*, *v*, *w*) means that **a** = **b** = *u*, **c** = **d** = *v* and **a′** = **b′** = *w*.
+---PAGE_BREAK---
+
+| | D(0/1) | D(1/0) | A(0,0,0) | A(0,0,1) | A(0,1,0) | A(0,1,1) | A(1,0,0) | A(1,0,1) | A(1,1,0) | A(1,1,1) | S |
+|---|---|---|---|---|---|---|---|---|---|---|---|
+| a | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 0 |
+| b | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 |
+| c | 0 | 1 | 0 | 0 | 1 | 1 | 0 | 0 | 1 | 1 | 0 |
+| d | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 1 | 1 | 1 |
+| a′ | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 1 | 0 |
+| b′ | 1 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 1 | 0 |
+
+**Table 1.** Double pairing of pairs (a, b), (c, d) and (a', b'): Analogy breaking on S.
+
+example taken from the case base. This provides a basis for favoring, in the voting procedure, triples containing pairs that often lead to good predictions.
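
The analogy breaking exhibited by Table 1 can be checked mechanically. In the sketch below, the six rows are transcribed from the table (`a2`, `b2` stand for $a'$, $b'$):

```python
def ap_holds(a, b, c, d):
    """Boolean analogical proportion a:b::c:d."""
    return (a == b and c == d) or (a == c and b == d)

# Rows of Table 1: columns D(0/1), D(1/0), A(0,0,0) ... A(1,1,1), S
a  = [0, 1, 0, 0, 0, 0, 1, 1, 1, 1, 0]
b  = [1, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1]
c  = [0, 1, 0, 0, 1, 1, 0, 0, 1, 1, 0]
d  = [1, 0, 0, 0, 1, 1, 0, 0, 1, 1, 1]
a2 = [0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0]   # a'
b2 = [1, 0, 0, 1, 0, 1, 0, 1, 0, 1, 0]   # b'

ab_ok   = [ap_holds(*t) for t in zip(a, b, c, d)]
a2b2_ok = [ap_holds(*t) for t in zip(a2, b2, c, d)]
print(ab_ok[-1], a2b2_ok[-1])  # a:b::c:d holds on S, a':b'::c:d breaks on S
```

Running it confirms that $a:b::c:d$ holds in every column including 'S', while $a':b'::c:d$ holds everywhere except 'S'.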
+
+The above idea of looking at pairs of cases can be related to the reading of a pair of cases (a, b) as a potential rule expressing either that the change from xª to xᵇ explains the change from yª to yᵇ, whatever the context (encoded by the features where the two examples agree), or that the change from xª to xᵇ does not modify the solution (if yª = yᵇ). This view of pairs as rules has already been proposed in CBR for finding adaptation rules [10, 6, 7] and later in an analogical proportion-based inference perspective in [4, 3].
+
+So, roughly speaking, we are interested in a preprocessing step, in order to discover analogy breakings in $\mathcal{T}$. By an analogy breaking, we mean the existence of a quadruple of cases (a, b, c, d) such that (i) xª:xᵇ::xᶜ:xᵈ holds, while (ii) yª:yᵇ::yᶜ:yᵈ does not hold. If some analogy breaking(s) can be found in $\mathcal{T}$, this means that the partially unknown Boolean function associating to a problem a solution (or a class) cannot be affine [5]. In such a situation, analogical inference cannot be blindly applied with any triple, and we should take into account the analogy breaking(s), by introducing some further restrictions in the choice of the suitable triples.
+
+More precisely, the idea is to make a preliminary preprocessing of the pairs (a, b) ∈ CB², by associating with each of them a competence. The intuition behind this notion is that the more a case pair is competent for solving problems, the more it can play a role during the voting and selection process. To assess the competence of a pair (a, b) ∈ CB², it has to be compared to other pairs (c, d) ∈ CB² such that the triple (a, b, c) can be used to solve the problem xᵈ by extrapolation. When the outcome y of the extrapolation is equal to yᵈ, then it increases the competence of the case pair (a, b). Otherwise, it lowers it. The definition of competence is detailed in the next section.
+
+The case pair competence can be used at problem-solving time according to different strategies. Section 3.3 presents some of these strategies that are experimentally evaluated in a Boolean setting in Section 4.
+
+## 3.2 Case pair competence: definition
+
+Let (a, b) be a pair of source cases: a = (xª, yª) ∈ CB and b = (xᵇ, yᵇ) ∈ CB. The competence of the pair (a, b) is defined by two scores: the support and the confidence of (a, b), defined below following the principle presented above.
+---PAGE_BREAK---
+
+First, let **SolvableBy**(a, b) be the set of source case pairs (c, d) ≠ (a, b) such that the triple (a, b, c) can be used to solve **x**d by extrapolation: **x**a:**x**b::**x**c:**x**d and the equation **y**a:**y**b::**y**c:**y** is solvable (and so, its solution **y** is unique in a nominal representation). Formally:
+
+$$
+\text{SolvableBy}(a,b) = \left\{ (c,d) \in \text{CB}^2 \mid \begin{array}{l} (c,d) \neq (a,b), \quad \mathbf{x}^a:\mathbf{x}^b::\mathbf{x}^c:\mathbf{x}^d \\ \text{and the equation } \mathbf{y}^a:\mathbf{y}^b::\mathbf{y}^c:\mathbf{y} \text{ is solvable} \end{array} \right\}
+$$
+
+In other words, `SolvableBy(a,b)` is the set of source case pairs such that `c` can be adapted into a solution of `x^d` using `(a,b)` as an adaptation rule (without considering the trivial case when `(a,b) = (c,d)`). The support of `(a,b)`, `supp(a,b)`, is simply the number of such pairs:
+
+$$
+\mathrm{supp}(a, b) = |\mathrm{SolvableBy}(a, b)|
+$$
+
+Among the $(c, d) \in \text{SolvableBy}(a, b)$, some lead to a correct solution ($y = y^d$)
+and some do not. The former constitute the following set:
+
+$$
+\text{CorrectlySolvableBy}(a,b) = \{(c,d) \in \text{SolvableBy}(a,b) \mid y^a:y^b::y^c:y^d\}
+$$
+
+For example, if $\text{supp}(a,b) = 6$ and $|\text{CorrectlySolvableBy}(a,b)| = 4$, it
+means that $(a,b)$, considered as a rule, has been tested 6 times on the case
+base and has given 4 correct answers. Thus, the proportion of correct answers is
+$4/6 = 2/3$. This proportion is called the confidence of $(a,b)$, denoted by $\text{conf}(a,b)$.
+A special case has to be considered when $\text{supp}(a,b) = 0$. This means that the
+"adaptation rule" $(a,b)$ cannot be tested on the case base. In such a situation,
+the value of the confidence is set to 0.5 (better than a confidence of, say, $3/7$,
+for which the rule fails more often than it succeeds, and worse than a confidence of
+$4/7$, for which it succeeds more often than it fails). To summarize, the confidence
+of a pair $(a,b)$ is:
+
+$$
+\mathrm{conf}(a,b) = \begin{cases} \frac{|\mathrm{CorrectlySolvableBy}(a,b)|}{\mathrm{supp}(a,b)} & \text{if } \mathrm{supp}(a,b) \neq 0 \\ 0.5 & \text{otherwise} \end{cases}
+$$
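
A direct (quadratic) computation of supp and conf for Boolean cases, following the definitions above. This is an illustrative sketch; the helper functions are the standard Boolean-proportion tests, not code from the paper:

```python
def ap_holds(a, b, c, d):
    """Boolean analogical proportion a:b::c:d."""
    return (a == b and c == d) or (a == c and b == d)

def ap_solve(a, b, c):
    """Solve a:b::c:y; None when unsolvable."""
    if a == b:
        return c
    if a == c:
        return b
    return None

def competence(CB, a, b):
    """Return (supp, conf) of the pair (a, b); each case is a pair (x, y)."""
    (xa, ya), (xb, yb) = a, b
    supp = correct = 0
    for c in CB:
        for d in CB:
            if (c, d) == (a, b):
                continue                        # skip the trivial pair
            (xc, yc), (xd, yd) = c, d
            if all(ap_holds(p, q, r, s) for p, q, r, s in zip(xa, xb, xc, xd)):
                y = ap_solve(ya, yb, yc)
                if y is not None:               # equation solvable: one test of the rule
                    supp += 1
                    correct += (y == yd)
    conf = correct / supp if supp else 0.5      # untestable rule gets 0.5
    return supp, conf

# Case base for the Boolean AND of two variables
CB = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
print(competence(CB, ((0, 0), 0), ((0, 1), 0)))  # -> (1, 0.0)
```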
+
+### 3.3 Using case pair competence for selection and vote strategies
+
+Given a target problem $\mathbf{x}^{\text{tgt}}$, extrapolation consists in retrieving triples $(a, b, c) \in \text{CB}^3$ such that $\mathbf{x}^a:\mathbf{x}^b::\mathbf{x}^c:\mathbf{x}^{\text{tgt}}$ and in adapting each such triple by solving the equation $\mathbf{y}^a:\mathbf{y}^b::\mathbf{y}^c:\mathbf{y}$ (the triples $(a, b, c)$ for which the equation has no solution are not considered). So, the result of extrapolation is the set $\mathcal{R}$ of $((a, b, c), y) \in \text{CB}^3 \times S$, $y$ being the result of the extrapolation of $(a, b, c)$ in order to solve $\mathbf{x}^{\text{tgt}}$. Now, the question is how to combine all these solutions $y$ to propose a single solution $\mathbf{y}^{\text{tgt}}$ of $\mathbf{x}^{\text{tgt}}$. Four strategies for that purpose are detailed below.
+
+The first one, called *withoutComp*, just makes a vote on all values of *y*,
+regardless of the competences. The proposed solution is thus
+
+$$
+\hat{y}^{\text{tgt}} = \underset{\hat{y}}{\arg\max}\, \left|\{((a, b, c), y) \in \mathcal{R} \mid y = \hat{y}\}\right|
+$$
+---PAGE_BREAK---
+
+This is the strategy used in [15] and the baseline for the evaluation.
+
+The second strategy, called **allConf**, considers all the $((a, b, c), y) \in \mathcal{R}$ and makes a vote weighted by the confidence:
+
+$$y^{\text{tgt}} = \operatorname*{argmax}_{\hat{y}} \sum_{((a,b,c),y) \in \mathcal{R}, y=\hat{y}} \operatorname{conf}(a,b)$$
+
+The third strategy, called **topConf**, considers only the $((a, b, c), y) \in \mathcal{R}$ with the highest confidence, then makes a vote among them. Formally:
+
+$$\begin{align*}
+& \text{with } \mathrm{conf}_{\max} = \max \{\mathrm{conf}(a,b) \mid ((a,b,c),y) \in \mathcal{R}\} \\
+& \text{and } \mathcal{R}^* = \{((a,b,c),y) \in \mathcal{R} \mid \mathrm{conf}(a,b) = \mathrm{conf}_{\max}\} \\
+& y^{\text{tgt}} = \underset{\hat{y}}{\operatorname{argmax}}\, \left|\{((a,b,c),y) \in \mathcal{R}^* \mid y = \hat{y}\}\right| \tag{1}
+\end{align*}$$
+
+The fourth strategy, called topConfSupp, is similar to the previous one, except that it uses both the confidence and the support to make a preference. More precisely, it is based on the preference relation $\succ$ on case pairs defined below (for $(a, b)$, $(a', b') \in \text{CB}^2$):
+
+$$(a,b) \succ (a',b') \quad \text{if} \quad \begin{array}{l} \mathrm{conf}(a,b) > \mathrm{conf}(a',b') \quad \text{or} \\ \big(\mathrm{conf}(a,b) = \mathrm{conf}(a',b') \ \text{and} \ \mathrm{supp}(a,b) \geq \mathrm{supp}(a',b')\big) \end{array}$$
+
+In other words, confidence is the primary criterion, but in case of equality, the higher the support, the more competent the case pair is considered. For instance, if $\mathrm{conf}(a,b) = \mathrm{conf}(a',b') = 0.75$, $\mathrm{supp}(a,b) = 8$ and $\mathrm{supp}(a',b') = 4$, then $(a, b)$ gives the correct answer in 6 situations out of 8, whereas $(a', b')$ gives the correct answer in 3 situations out of 4. In this example, $(a, b)$ is strictly preferred to $(a', b')$: $(a, b) \succ (a', b')$. Now, let $\mathcal{R}^*$ be the set of $((a, b, c), y) \in \mathcal{R}$ such that $(a, b)$ is maximal for $\succ$. Then, $y^{\text{tgt}}$ results from a vote, as described above in equation (1).
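
Under the stated preference, topConfSupp can be sketched as follows (pairs are keyed by hypothetical identifiers; `comp` maps a pair to its (supp, conf)):

```python
from collections import Counter

def top_conf_supp(R, comp):
    """R: list of ((a, b, c), y) extrapolation results; comp[(a, b)] = (supp, conf).
    Keep entries whose pair (a, b) is maximal for the lexicographic key
    (conf, supp), then take a majority vote among them."""
    def rank(triple):
        a, b, _ = triple
        supp, conf = comp[(a, b)]
        return (conf, supp)            # confidence first, support as tie-break
    best = max(rank(t) for t, _ in R)
    top = [y for t, y in R if rank(t) == best]
    return Counter(top).most_common(1)[0][0]

# Example mirroring the text: conf 0.75 with supp 8 beats conf 0.75 with supp 4
comp = {("a", "b"): (8, 0.75), ("a2", "b2"): (4, 0.75)}
R = [(("a", "b", "c"), 1), (("a2", "b2", "c2"), 0)]
print(top_conf_supp(R, comp))  # -> 1
```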
+
+The interest of considering a triple $(a, b, c)$ in the voting procedure at the end of the inference process is evaluated in terms of the competence of the pair $(a, b)$. Since analogical proportions are stable under central permutation, one might think of considering the pair $(a, c)$ as well. Preliminary investigations using different combinations (minimum, maximum, sum or product of the confidences of $(a, b)$ and $(a, c)$) have not shown any clear improvement over the simple use of the competence of $(a, b)$; this is why we have restricted ourselves to this latter type of competence assessment. However, these preliminary investigations were only based on the allConf strategy, so the question deserves to be reconsidered: this constitutes future work.
+
+# 4 Evaluation
+
+The objective of the evaluation is to study the impact of the strategies for case pair selection and vote presented before on various types of Boolean functions.
+---PAGE_BREAK---
+
+## 4.1 Experiment setting
+
+In the experiment, $\mathcal{P} = \mathbb{B}^8$ and $\mathcal{S} = \mathbb{B}$. $\rightsquigarrow$ is assumed to be functional: $\rightsquigarrow = f$, meaning that $y$ is a solution to $x$ if $y = f(x)$.
+
+The function $f$ is randomly generated using the following generators that are based on two normal forms, with the purpose of having various types of functions:
+
+DNF $f$ is generated in a disjunctive normal form, i.e., $f(x)$ is a disjunction of $n_{\text{disj}}$ conjunctions of literals, for example $f(x) = (x_1 \land \neg x_7) \lor (\neg x_3 \land x_7 \land x_8) \lor x_4$. The value of $n_{\text{disj}}$ is randomly chosen uniformly in $\{3, 4, 5\}$. Each conjunction is generated on the basis of two parameters, $p^+ > 0$ and $p^- > 0$, with $p^+ + p^- < 1$: each variable $x_i$ occurs in the disjunct in a positive (resp. negative) literal with a probability $p^+$ (resp., $p^-$). In the experiment, the values $p^+ = p^- = 0.1$ were chosen.³
+
+Pol $f$ is generated in polynomial normal form: it is the same as DNF, except that the disjunctions ($\vee$) are replaced with exclusive or's ($\oplus$). As only positive literals occur in the polynomial normal form, the parameter $p^-= 0$.
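
A possible reading of the DNF generator in code (a sketch under our interpretation of the description above; parameter names are ours, and an empty conjunction makes $f$ constantly true):

```python
import random
from itertools import product

def gen_dnf(n_vars=8, p_pos=0.1, p_neg=0.1, seed=0):
    """Random Boolean function in disjunctive normal form: n_disj conjunctions;
    each variable enters a conjunction positively with probability p_pos and
    negatively with probability p_neg."""
    rng = random.Random(seed)
    n_disj = rng.choice([3, 4, 5])
    clauses = []
    for _ in range(n_disj):
        clause = []
        for i in range(n_vars):
            r = rng.random()
            if r < p_pos:
                clause.append((i, 1))          # positive literal x_i
            elif r < p_pos + p_neg:
                clause.append((i, 0))          # negative literal (not x_i)
        clauses.append(clause)
    return lambda x: int(any(all(x[i] == pol for i, pol in cl) for cl in clauses))

f = gen_dnf()
values = {f(x) for x in product((0, 1), repeat=8)}  # a subset of {0, 1}
```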
+
+The case base $\mathbf{CB}$ is generated randomly, with the following values for its size: $|\mathbf{CB}| \in \{32, 64, 96, 128\}$, i.e., $|\mathbf{CB}|$ is between $\frac{1}{8}$ and $\frac{1}{2}$ of $|\mathcal{P}| = 2^8 = 256$. Each source case $(x,y)$ is generated as follows: $x$ is randomly chosen in $\mathcal{P}$ with a uniform distribution and $y = f(x)$.
+
+Let $\#tgt\_pb$ be the number of target problems posed to the system, $\#ans$ be the number of (correct or incorrect) answers ($\#tgt\_pb - \#ans$ is the number of target problems for which the system fails to propose a solution), and $\#corr\_ans$ be the number of correct answers. For each selection and vote strategy, the following scores are computed:
+
+**The error rate %err** is the average of $\left(1 - \frac{\#corr\_ans}{\#ans}\right) \times 100 \in [0, 100]$.
+
+**The answer rate %ans** is the average of the ratios $\frac{\#ans}{\#tgt\_pb} \times 100 \in [0, 100]$.
+
+If the system always gives an answer (correct or not) then %ans = 100.
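
As a minimal sketch (function and variable names are ours), the two scores for a single run:

```python
def scores(n_tgt_pb, n_ans, n_corr_ans):
    """%err over the answered problems, %ans over all posed problems."""
    err = (1 - n_corr_ans / n_ans) * 100 if n_ans else 0.0
    ans = n_ans / n_tgt_pb * 100
    return err, ans

print(scores(128, 128, 96))  # -> (25.0, 100.0)
```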
+
+The average is computed over one million problem-solving sessions for each function generator, requiring the generation of 1420 functions $f$ for each of them. The average computing time of a CBR session (retrieval and adaptation for solving one problem) is about 2 ms on a current standard laptop.
+
+³ A generator CNF, producing formulas in CNF (conjunctive normal form: a conjunction of disjunctions of literals), could also have been considered. However, this would not add anything new, since it is dual to the DNF generator for two reasons. First, the drawn inferences are code-independent, meaning that replacing the attributes by their negations does not change the result of the inference; in particular, for $a, b, c, d \in \mathbb{B}$, $a:b::c:d$ iff $\neg a:\neg b::\neg c:\neg d$. Second, if $f$ is obtained from the DNF generator, then $\neg f$ can easily be rewritten as a function $g$ in CNF using De Morgan's laws, and the distribution of $g$ obtained this way would be the same as the distribution from a CNF generator with the same parameters.
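
The code-independence property invoked in the footnote can be verified exhaustively for the Boolean proportion (a quick check, not from the paper):

```python
from itertools import product

def ap_holds(a, b, c, d):
    """Boolean analogical proportion a:b::c:d."""
    return (a == b and c == d) or (a == c and b == d)

# a:b::c:d iff (not a):(not b)::(not c):(not d), checked on all 16 valuations
assert all(ap_holds(a, b, c, d) == ap_holds(1 - a, 1 - b, 1 - c, 1 - d)
           for a, b, c, d in product((0, 1), repeat=4))
```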
+---PAGE_BREAK---
+
+Fig. 1. Error rate as a function of $|CB|$, for each generator (DNF on the left, Pol on the right).
+
+For the sake of reproducibility, the code for this experiment is available at https://tinyurl.com/analogyCBRTests, with the detailed results (generated functions and details of the evaluation).
+
+## 4.2 Results
+
+| | %err (\|CB\|=32) | %ans | %err (\|CB\|=64) | %ans | %err (\|CB\|=96) | %ans | %err (\|CB\|=128) | %ans |
+|---|---|---|---|---|---|---|---|---|
+| DNF withoutComp | 15.1 | | 10.1 | | 8.4 | | 7.7 | |
+| DNF allConf | 15.1 | 97.5 | 8.8 | 100.0 | 7.0 | 100.0 | 6.3 | 100.0 |
+| DNF topConf | 15.5 | | 8.0 | | 4.9 | | 3.1 | |
+| DNF topConfSupp | 16.4 | | 6.9 | | 3.3 | | 1.7 | |
+| POL withoutComp | 20.6 | | 13.7 | | 10.5 | | 8.8 | |
+| POL allConf | 20.1 | 95.8 | 10.8 | 100.0 | 6.9 | 100.0 | 4.9 | 100.0 |
+| POL topConf | 20.1 | | 8.2 | | 3.1 | | 1.2 | |
+| POL topConfSupp | 21.5 | | 6.3 | | 1.6 | | 0.5 | |
+
+**Table 2.** %err and %ans for the different selection and vote strategies for the different generators.
+
+Table 2 presents the error rate and the answer rate of the different case selection and vote strategies for the two generators, applied to the different case base sizes. Error rate curves are given in Figure 1.
+
+Given a function generator and a case base size, the answer rate is the same for the four strategies because all case pair selection strategies provide results
+---PAGE_BREAK---
+
+for a problem that could be solved, without using competences, by *withoutComp* (i.e., if a triple was found to solve a case $x^{\text{tgt}}$ by the *withoutComp* strategy, this triple is considered by the three case pair selection strategies, and either it will participate in solving $x^{\text{tgt}}$ or there exists another "better" triple according to the selection procedure). The answer rate is high for all the methods: over 95% for $|CB| = 32$ and 100% for $|CB| \ge 64$.
+
+Except for $|CB| = 32$, which seems to be too small a training set for computing competences, the error rate shows that pair selection improves the precision. For both generators, all pair selection strategies give better results than the baseline (*withoutComp*). However, the improvement differs depending on the selection strategy: the more the selection of pairs is constrained, the more the error rate decreases. *allConf* decreases the error rate a little, *topConf* decreases it a little more, and the best results are given by *topConfSupp*.
+
+The benefit of all strategies is related to the case base size: the more cases the case base contains for competence acquisition, the better the results. Compared to the baseline, the benefit of the best selection strategy, *topConfSupp*, is noteworthy. Even if the error rate is already rather good with the baseline, especially with a 100% answer rate, *topConfSupp* improves it, bringing it close to 100% correct answers. For DNF, according to the size of the case base (64, 96 and 128), the error rate %err decreases from 10.1 to 6.9 (a decrease of 32%), from 8.4 to 3.3 (a decrease of 61%) and from 7.7 to 1.7 (a decrease of 78%). For Pol, the results are even more impressive: according to the size of the case base (64, 96 and 128), %err decreases from 13.7 to 6.3 (a decrease of 54%), from 10.5 to 1.6 (a decrease of 85%), and from 8.8 to 0.5 (a decrease of 94%).
+
+So, these first experimental results show that, from a certain case base size on, the *topConfSupp* strategy outperforms all the others and drastically decreases the error rate, while using fewer triples.
+
+# 5 Discussion and Related Work
+
+In this section, the approach presented in this paper is compared to related work in CBR according to two viewpoints: the notion of competence in CBR and the adaptation knowledge learning approaches.
+
+*Competence in CBR.* In [15], three types of CBR processes are distinguished, in particular extrapolation, which retrieves and reuses cases by triples, and approximation, which retrieves and reuses cases by singletons. It is argued here that previous research on competence is related to approximation, whereas the work presented in this paper considers a notion of competence related to extrapolation.
+
+The notion of competence in CBR is generally used for the purpose of case base maintenance, either for deleting the least competent cases [19] or for adding the most competent ones [21]. In these previous studies, competence is assigned
+---PAGE_BREAK---
+
+to individual source cases, in relation to other cases from the case base. In particular, in the seminal paper [19], the competence of cases is assessed by putting source cases into categories (from pivotal cases, which are the most competent ones, to auxiliary cases), these categories being defined with the help of the binary relation of adaptability between a case and a problem. Thus, this notion of competence is linked with the approximation process (which considers source cases individually).
+
+By contrast, the current paper is concerned with competence related to the extrapolation process: cases are retrieved by triples. The competence of a triple $(a, b, c) \in CB^3$ is reduced to the competence of a pair $(a, b) \in CB^2$, which is related to the set of the other pairs $(c, d) \in CB^2$. A common point of these two notions of competence is that the competence of an object (an object being a case for approximation and a case pair for extrapolation) is not an intrinsic property of the object, but is related to other objects (from $CB$ or $CB^2$).
+
+A minor difference between previous studies on competence and the one defined in this paper lies in the use of competence: case base maintenance for the former and problem solving for the latter.
+
+*Relations with adaptation knowledge learning.* The work presented in this paper has strong links with the issue of adaptation knowledge learning (AKL). The adaptation considered here is the one that follows the retrieval of a sole case (i.e., it is a single-case adaptation). Such an adaptation benefits from the adaptation knowledge AK, which can be informally defined by:
+
+AK = "How does the solution change when the problem changes."
+
+The approach generally applied for AKL was modeled in the seminal work of Kathleen Hanney and Mark T. Keane [10]. It uses the case base for learning adaptation knowledge according to the following principle. A set TS of source case pairs (a, b), with a ≠ b, is built, either by considering all the distinct pairs from CB or by considering only the pairs (a, b) where a and b are judged sufficiently similar, according to some criterion. Then, TS is used as the training set of a supervised learning process: for each pair (a, b), the input of an example is the pair (xª, xᵇ) and its output is the pair (yª, yᵇ). The supervised learning process provides a model of this knowledge AK, used by the adaptation process.
+
+Several works are based on this general scheme. In [12], AK consists in the representation of "adaptation cases". In [6], different techniques are used, in particular decision tree induction and ensemble learning techniques. In [7], frequent closed itemset extraction is used; the expert interpretation following this extraction produces adaptation rules to be added to AK. In [9], techniques similar to those of [7] are used (formal concept analysis and frequent closed itemset extraction are similar data-mining techniques), but, in this work, negative cases (i.e., pairs (x, y) ∈ P × S such that y is not a correct solution of x) are used, which significantly improves the results of the learning process. In [11], an ensemble approach provides adaptation rules with categorical features.
+
+The work presented in this paper could also be considered as an AKL approach. In fact, in this paper, the term adaptation rule has been used for a case
+---PAGE_BREAK---
+
+pair $(a, b)$. Let us make this idea more precise. Let $\sim$ be the relation defined, for two case pairs $(a, b)$ and $(a', b')$, by:
+
+$$ (a,b) \sim (a',b') \quad \text{if} \quad \begin{array}{l} x^a:x^b::x^{a'}:x^{b'} \\ \text{and} \\ y^a:y^b::y^{a'}:y^{b'} \end{array} $$
+
+For analogical proportions on nominal representations defined in Section 2.2, ~ is an equivalence relation⁴. Thus, solving a problem $x^{\text{tgt}}$ by extrapolation from a triple $(a, b, c) \in \text{CB}^3$ or from a triple $(a', b', c) \in \text{CB}^3$ (with the same $c$) such that $(a, b) \sim (a', b')$ will give the same result: extrapolation is independent of the choice of a representative of the equivalence class of $(a, b)$ for ~. Such an equivalence class $C\ell$ can be used as an adaptation rule (where $c$ is the retrieved case and $x^{\text{tgt}}$ is the problem to be solved):
+
+*with* $(a, b)$ arbitrarily chosen in $C\ell$
+*if* $x^a:x^b::x^c:x^{\text{tgt}}$ and $y^a:y^b::y^c:y$ has a solution
+*then* this solution is a plausible solution to $x^{\text{tgt}}$
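
The equivalence test for $\sim$ on Boolean cases is direct (a sketch; cases are (x, y) pairs with scalar y):

```python
def ap_holds(a, b, c, d):
    """Boolean analogical proportion a:b::c:d."""
    return (a == b and c == d) or (a == c and b == d)

def equiv(pair1, pair2):
    """(a, b) ~ (a', b'): the problem parts and the solution parts are in
    analogical proportion, so both pairs encode the same adaptation rule."""
    (xa, ya), (xb, yb) = pair1
    (xa2, ya2), (xb2, yb2) = pair2
    return (all(ap_holds(p, q, r, s) for p, q, r, s in zip(xa, xb, xa2, xb2))
            and ap_holds(ya, yb, ya2, yb2))

r1 = (((0, 0), 0), ((0, 1), 1))
r2 = (((1, 0), 0), ((1, 1), 1))
r3 = (((1, 0), 0), ((1, 1), 0))
print(equiv(r1, r2), equiv(r1, r3))  # -> True False
```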
+
+Thus, the set of equivalence classes of the restriction of $\sim$ to $\text{CB}^2$ gives a set of candidate adaptation rules, but not all these rules are equally interesting: some give more plausible results than others. So, a criterion has to be defined for making a preference between these rules and, if it is decided to apply all of them, to do so by making a weighted vote (the more an adaptation rule is preferred, the higher its weight in the vote should be).
+
+A simple way of doing this (used, e.g., in [7]) consists in using the cardinality of $C\ell$. This can be related to the notion of competence of a case pair: if $(a,b) \in C\ell$ then $|C\ell| = \text{supp}(a,b) \times \text{conf}(a,b)$. One limitation of this approach is that it counts only the examples (supporting the rule), not the counterexamples (penalizing the rule). By contrast, the approach presented in this paper takes counterexamples into account. For example, if $\text{conf}(a,b) = 1/3$, then, for each example of the rule, there are two counterexamples, so, even if $\text{supp}(a,b)$ is large, the rule associated with $(a,b)$ is, at best, questionable.
+
+Another difference with the work of [7] is that, in [7], when several case pairs have the same variations only on a subset of the features, they are still used to build an adaptation rule. For example, if $(a,b)$ and $(a',b')$ are two source case pairs such that for most attributes $i$, $x_i^a:x_i^b::x_i^{a'}:x_i^{b'}$, the rules built on these common attributes are considered, neglecting the other attributes. In a formal framework in which analogies are rare (for example, when there are features with real number values), it could be justified to replace the exact analogical proportion with a gradual analogical proportion [8] in the approach described in this paper. Studying it constitutes a potential future work.
+
+This discussion shows how some ideas related to AKL from the case base can be easily reformulated in the framework of analogical proportions: the links so established between these two fields are therefore potentially fruitful.
+
+⁴ Reflexivity and symmetry are direct consequences of the postulates with the same name. By contrast, there exist analogical proportions for which transitivity does not hold [14].
+---PAGE_BREAK---
+
+## 6 Conclusion
+
+Classical case-based reasoning relies on the individual similarities of the problem at hand with each already solved problem that is known. We have shown that it may also be of interest to consider triples of cases (a, b, c), in order to match the change from a to b with the change from c to the problem to be solved together with its tentative solution. This is the basis of analogical extrapolation based on analogical proportions. Still, it has been observed that some triples may lead to wrong inferences.
+
+In this paper, we have proposed to discriminate triples according to an evaluation of the “competence” of the pairs involved in the triples. Indeed an analogical proportion “a is to b as c is to d” can be viewed as establishing a parallel between two pairs. The differences between the components of a pair of problems are naturally related to the differences between solutions, but this relation may depend on the context expressed by the component values that do not change. We have shown that it was possible, at least to some extent, to evaluate the competence of pairs for selecting “good” triples and improving analogical inference results. This contributes to confirm the interest of analogical extrapolation for case-based reasoning.
+
+Several future works follow these studies.
+
+The first one has been mentioned at the end of Section 3. It consists, when choosing a triple (a, b, c), in considering not only the competence of the pair (a, b) but also the competence of the pair (a, c). Preliminary studies with the strategy *allConf* were carried out but did not show significant changes in the results. However, this may be different for the other strategies, and this remains to be studied.
+
+Another future work will be to transfer contributions from the adaptation knowledge learning field to further improve the performance of analogical extrapolation (cf. Section 5). In particular, a promising direction of work is the use of a base of negative cases, as in [9].
+
+Finally, the competence of case pairs can be used to associate with a solution proposed by extrapolation an indication of its plausibility, according to the following idea: the higher the competences of the case pairs used for giving a solution, the more plausible the proposed solution is. This can be used to combine analogical extrapolation with other approaches to CBR that also provide an indication of plausibility.
+
+## References
+
+1. Aamodt, A., Plaza, E.: Case-based Reasoning: Foundational Issues, Methodological Variations, and System Approaches. AI Communications **7**(1) (1994) 39–59
+
+2. Billingsley, R., Prade, H., Richard, G., Williams, M.A.: Towards analogy-based decision - A proposal. In H. Christiansen, H.J., ed.: Proc. 12th Conf. on Flexible Query Answering Systems (FQAS’17), London, Jun. 21-23. LNAI 10333, Springer (2017) 28–35
+---PAGE_BREAK---
+
+3. Bounhas, M., Prade, H., Richard, G.: Analogical classification: A rule-based view. In: Proc. of the International Conference on Information Processing and Management of Uncertainty in Knowledge-Based Systems, Springer (2014) 485–495
+
+4. Correa, W.F., Prade, H., Richard, G.: Trying to understand how analogical classifiers work. In: Proc. of the International Conference on Scalable Uncertainty Management, Springer (2012) 582–589
+
+5. Couceiro, M., Hug, N., Prade, H., Richard, G.: Analogy-preserving functions: a way to extend Boolean samples. In: Proc. of the 26th Int. Joint Conf. on Artificial Intelligence (IJCAI’17), Morgan Kaufmann, Inc. (2017) 1575–1581
+
+6. Craw, S., Wiratunga, N., Rowe, R.C.: Learning adaptation knowledge to improve case-based reasoning. Artificial Intelligence **170**(16-17) (2006) 1175–1192
+
+7. d'Aquin, M., Badra, F., Lafrogne, S., Lieber, J., Napoli, A., Szathmary, L.: Case base mining for adaptation knowledge acquisition. In Veloso, M.M., eds.: Proc. of the 20th Int. Joint Conf. on Artificial Intelligence (IJCAI’07), Morgan Kaufmann, Inc. (2007) 750–755
+
+8. Dubois, D., Prade, H., Richard, G.: Multiple-valued extensions of analogical proportions. Fuzzy Sets and Systems **292** (2016) 193–202
+
+9. Gillard, T., Lieber, J., Nauer, E.: Improving Adaptation Knowledge Discovery by Exploiting Negative Cases: First Experiment in a Boolean Setting. In: Proc. of ICCBR 2018 - 26th International Conference on Case-Based Reasoning, Stockholm, Sweden (July 2018)
+
+10. Hanney, K., Keane, M.T.: Learning adaptation rules from a case-base. In Smith, I., Faltings, B., eds.: Advances in Case-Based Reasoning – Proc. of the Third Eur. Workshop, EWCBR’96. LNAI 1168, Springer Verlag, Berlin (1996) 179–192
+
+11. Jalali, V., Leake, D., Forouzandehmehr, N.: Learning and applying adaptation rules for categorical features: An ensemble approach. AI Communications **30**(3-4) (2017) 193–205
+
+12. Jarmulak, J., Craw, S., Rowe, R.: Using case-base data to learn adaptation knowledge for design. In: Proc. of the 17th Int. Joint Conf. on Artificial Intelligence (IJCAI’01), Morgan Kaufmann, Inc. (2001) 1011–1016
+
+13. Kolodner, J.: Case-Based Reasoning. Morgan Kaufmann, Inc. (1993)
+
+14. Lepage, Y.: Proportional analogy in written language data. In Gala, N., Rapp, R., Bel-Enguix, G., eds.: Language, Production, Cognition and the Lexicon. Text, Speech and Language Technology 48. Springer International Publishing Switzerland (2014) 151–173
+
+15. Lieber, J., Nauer, E., Prade, H., Richard, G.: Making the Best of Cases by Approximation, Interpolation and Extrapolation. In: ICCBR 2018 - 26th International Conference on Case-Based Reasoning, Stockholm, Sweden, Springer (July 2018)
+
+16. Prade, H., Richard, G.: A discussion of analogical-proportion based inference. In Sánchez-Ruiz, A.A., Kofod-Petersen, A., eds.: Proc. ICCBR’17 Workshops (CAW, CBRDL, PO-CBR), Doctoral Consortium, and Competitions co-located with the 25th Int. Conf. on Case-Based Reasoning (ICCBR’17), Trondheim, June 26-28. Volume 2028 of CEUR Workshop Proceedings. (2017) 73–82
+
+17. Richter, M.M., Weber, R.O.: Case-based reasoning, a textbook. Springer (2013)
+
+18. Riesbeck, C.K., Schank, R.C.: Inside Case-Based Reasoning. Lawrence Erlbaum Associates, Inc., Hillsdale, New Jersey (1989) Available on line.
+
+19. Smyth, B., Keane, M.T.: Remembering to forget. In: Proc. of the 14th Int. Joint Conf. on Artificial Intelligence (IJCAI’95), Montréal (1995)
+
+20. Stroppa, N., Yvon, F.: Analogical learning and formal proportions: Definitions and methodological issues. Technical Report D004, ENST-Paris (2005)
+---PAGE_BREAK---
+
+21. Zhu, J., Yang, Q.: Remembering to add: competence-preserving case-addition policies for case base maintenance. In: Proc. of the 16th Int. Joint Conf. on Artificial Intelligence (IJCAI’99). (1999) 234–241
\ No newline at end of file
diff --git a/samples/texts_merged/582208.md b/samples/texts_merged/582208.md
new file mode 100644
index 0000000000000000000000000000000000000000..4ee351e81a189ccb64e86d7f0a518cce8ba549db
--- /dev/null
+++ b/samples/texts_merged/582208.md
@@ -0,0 +1,6 @@
+
+---PAGE_BREAK---
+
+3.4 Exit Ticket
+
+Sketch the polynomial function $y = 4(-\frac{5}{4}x - 20)^3 - 1$ and describe the transformations that are being applied.
\ No newline at end of file
diff --git a/samples/texts_merged/582263.md b/samples/texts_merged/582263.md
new file mode 100644
index 0000000000000000000000000000000000000000..2821d12c95094523122176e89700bdd6f82e970b
--- /dev/null
+++ b/samples/texts_merged/582263.md
@@ -0,0 +1,418 @@
+
+---PAGE_BREAK---
+
+# Multiple Use Confidence Intervals for a Univariate Statistical Calibration
+
+Martina Chvosteková
+
+Institute of Measurement Science, Slovak Academy of Sciences, Dúbravská cesta 9, 841 04 Bratislava, Slovakia
+martina.chvostekova@savba.sk
+
+The statistical calibration problem treated here consists of constructing interval estimates for future unobserved values of a univariate explanatory variable corresponding to an unlimited number of future observations of a univariate response variable. An interval estimate is to be computed for a value $x$ of the explanatory variable after observing a response $Y_x$, using the same calibration data from a single calibration experiment; it is called a multiple use confidence interval. It is assumed that the normally distributed response variable $Y_x$ is related to the explanatory variable $x$ through a linear regression model; polynomial regression is probably the most frequently used such model in industrial applications. Construction of multiple use confidence intervals (MUCI's) by inverting the tolerance band for a linear regression has been considered by many authors, but the resulting MUCI's are conservative. A new method for determining MUCI's is suggested directly from their marginal property, assuming a distribution of the explanatory variable. Using simulations, we show that the suggested MUCI's satisfy the coverage probability requirements of MUCI's quite well and that they are narrower than previously published ones. The practical implementation of the proposed MUCI's is illustrated in detail on an example.
+
+Keywords: Statistical calibration, linear regression model, tolerance interval, multiple use confidence interval.
+
+## 1. INTRODUCTION
+
+The univariate linear regression model $Y_x = f^T(x)\beta + \epsilon, \epsilon \sim N(0, \sigma^2)$, where $\epsilon$ is an error, $Y_x$ is an observation corresponding to a value $x$, $f^T(x)$ is a known $q$-dimensional function of the value $x$, and the vector $\beta = (\beta_0, \beta_1, ..., \beta_{q-1})^T$ and $\sigma^2 > 0$ are the unknown parameters of the model, is used in many applications. For example, a second-order polynomial regression (i.e., $q=3$, $f^T(x) = (1, x, x^2)$) is used to model the relationship between a one-dimensional response variable $Y_x$ and a one-dimensional explanatory variable $x$ in the example at the end of Section 3. Statistical calibration is typically motivated by the problem of estimating $x$ for a subject in the case when measuring the corresponding $Y_x$ is relatively easier and does not require as much effort or expense. It means that we want to make a statistical inference about $x$, but it is possible to measure only the dependent variable $Y_x$. A relation between the variables is fitted based on calibration data from a calibration experiment. In this article we suppose a univariate controlled calibration, i.e. in a calibration experiment $\mathcal{E}_n = \{(x_i, Y_{x_i}), i = 1, 2, ..., n\}$ the value $x_i, i = 1, ..., n$ is treated as a known scalar and a response $Y_{x_i}, i = 1, ..., n$ is assumed to be a random variable. The calibration experiment is often designed so that the chosen values $x_1, ..., x_n$ span the range of the possible values, $\mathcal{X} = [x_{min}, x_{max}] \subset \mathbb{R}$, and it is worth emphasizing that $f^T(x)\beta$ is assumed to be a monotonic function on $\mathcal{X}$. An overview of statistical calibration tasks is provided in Osborne [15].
+
+The statistical calibration problem treated here is to construct interval estimates for the unknown independent values $x_{n+1}, x_{n+2}, ...,$ corresponding to an unlimited sequence of additional observations $Y_{x_{n+1}}, Y_{x_{n+2}}, ...$ using the same calibration data, i.e. using the same estimates of the unknown parameters $\beta, \sigma^2$. Two sources of error must be taken into account in this problem: the uncertainty of the estimates of the unknown parameters of the model from the calibration data, and the uncertainty of all future responses. Eisenhart [3] demonstrated that a $(1-\alpha)$-confidence set for a single future $x$ can be obtained by inverting a $(1-\alpha)$-prediction interval in a linear regression. It means that the limits for the true $x$-value after observing the response $Y_x$ are determined as the intersections of the $(1-\alpha)$-prediction band with the straight line $y=Y_x$, see Fig. 1. If the fitted regression line were not strictly monotone on $\mathcal{X}$, we would get an ambiguous solution (i.e., we would find more than two intersections of the horizontal line $y=Y_x$ with a band around such a fitted calibration curve). Since the interval estimates for $x_{n+1}, x_{n+2}, ...$ are constructed by repeatedly using the same estimates of the unknown parameters $\beta, \sigma^2$, we would like to make a simultaneous confidence statement about them. It must be pointed out that it is an incorrect interpretation that $100(1-\alpha)\%$ of the interval estimates for $x_{n+1}, x_{n+2}, ...$ determined by inverting the $(1-\alpha)$-prediction band contain the true $x$-value. Indeed, the coverage is much less than $100(1-\alpha)\%$ and it decreases as the number of $x_{n+i}$'s increases.
+
+DOI: 10.2478/msr-2019-0034
+---PAGE_BREAK---
+
+Fig. 1. Illustration of the construction of an interval estimate [$x_{\text{lower}}, x_{\text{upper}}$] for the x-value corresponding to an observation $Y_x$ by inverting a band around the fitted regression line.
+
+Mandel [11] considered the problem of constructing confidence sets for a prechosen number $m$ of future responses; he suggested inverting the simultaneous prediction intervals. The simultaneous prediction intervals proposed in the literature (see, e.g., Lieberman [9], Carlstein [2]) become extremely wide when $m$ is large. We can conclude that the simultaneous prediction intervals cannot be used in the case of an unknown number of future observations and that they are impractical when $m$ is large. If a prechosen number $m$ of MUCI's is constructed by inverting the simultaneous prediction intervals, then the MUCI's contain the corresponding true values with a prescribed confidence $1 - \alpha$. This strong condition, that all $m$ constructed MUCI's contain the true x-value, was replaced with the condition that at least a proportion $\gamma$ of them contains the corresponding true value with a confidence $1 - \alpha$ (see Acton [1], Halperin [4]). So, MUCI's are constructed by using the calibration data (i.e., by using the same estimates $\hat{\beta}, S^2$) from a single calibration experiment $\mathcal{E}_n$ and have the property that at least a proportion $\gamma$ of them contains the corresponding true x-value with confidence $1 - \alpha$. The two-sided MUCI for the unknown $x$ corresponding to a future observation $Y_x$ is considered in Lieberman et al. [10], Scheffé [16], Mee et al. [13], Krishnamoorthy and Mathew [7], and Witkovský [17] in the closed form
+
+$$ \mathcal{I}(Y_x) = \{x \in \mathcal{X} : f^T(x)\hat{\beta} - g(x)S \le Y_x \le f^T(x)\hat{\beta} + g(x)S\}, \quad (1) $$
+
+where $\hat{\beta}$ denotes the least squares estimator of $\beta$, $S^2$ denotes the residual mean square based on $n-q$ degrees of freedom, and $g(\cdot)$ is a positive, unimodal function determined subject to requirements of MUCI's. It means, that the two-sided MUCI is also found as an intersection of horizontal line in $y=Y_x$ with a band around the fitted calibration curve $f^T(x)\hat{\beta} \pm g(x)S, x \in \mathcal{X}$ (see Fig.1.). If an observation $Y_{x*}$ is captured by the band [$f^T(x^*)\hat{\beta} - g(x^*)S, f^T(x^*)\hat{\beta} + g(x^*)S$], then it is obvious that $\mathcal{I}(Y_{x*})$ will contain the true value $x^*$. Hence, a function $g(\cdot)$ is to be chosen so as to satisfy the condition of MUCI's, which can be expressed as
+
+$$ P_{\hat{\beta},S} \left( \liminf_{K \to \infty} \frac{1}{K} \sum_{i=1}^{K} \delta(x_{n+i}) \geq \gamma \right) = 1 - \alpha, \tag{2} $$
+
+where $\delta(x) = 1$ if $f^T(x)\hat{\beta} - g(x)S \le Y_x \le f^T(x)\hat{\beta} + g(x)S$ and $\delta(x) = 0$ otherwise, and $\frac{1}{K} \sum_{i=1}^{K} \delta(x_{n+i})$ is the proportion of the intervals $\mathcal{I}(Y_{x_{n+i}})$, $i = 1, 2, \dots, K$, which contain the true corresponding x-value. The variable $\delta(x)$ is Bernoulli distributed with success probability $C(x; \hat{\beta}, S)$ conditional on given $\hat{\beta}, S$.
+
+Thus, for a large number $K$ of future observations the property of MUCI's is simplified based on the strong law of large numbers to
+
+$$ P_{\hat{\beta},S} \left( \frac{1}{K} \sum_{i=1}^{K} C(x_i; \hat{\beta}, S) \geq \gamma \right) = 1 - \alpha, \quad (3) $$
+
+see e.g. Mee and Eberhardt [12], Krishnamoorthy and Mathew [7]. The condition (3) can be rewritten for the one-sided MUCI's, see Krishnamoorthy et al. [8], Krishnamoorthy and Mathew [7], Han et al.[5].
+
+The condition (3) is a rather difficult condition to work with. A sufficient condition for the property of MUCI's to hold is the condition of the $(1-\alpha, \gamma)$-simultaneous tolerance intervals (STI's) for a linear regression model (or equivalently the $(1-\alpha, \gamma)$-tolerance band), i.e. $P_{\hat{\beta},S}(\min_{x \in \mathcal{X}} C(x; \hat{\beta}, S) \geq \gamma) = 1 - \alpha$. Determination of the MUCI's accomplished by inverting the STI's has been exploited by several authors, see e.g., Lieberman et al. [10], Scheffé [16], Mee et al. [13], and Witkovský [17]. The two-sided STI's presented in Mee et al. [13] and the one-sided STI's presented in Odeh and Mee [14] are exact for a multiple linear regression model. For the considered model, where the covariates are assumed to have functional relationships, the STI's become conservative, except for the case of a simple linear regression. A simulation-based method for determining the exact one-sided STI's for our considered model is suggested in Han et al. [5]. Since the same fixed functional form of the function $g(\cdot)$ is used in Han et al. [5] as in Odeh and Mee [14], the computation of the resulting MUCI's is simple: a built-in root-finding function is available in commonly used analytical software, e.g. `fzero()` in MATLAB. The Han et al. method can be modified to the two-sided case, but the resulting MUCI's, as in the one-sided case, will be conservative, which means that they will be wider than necessary.
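The root-finding step used to compute a MUCI from a band can be sketched in Python (the paper's own code is in MATLAB). The fitted line, $S$, $n$ and the band constant `v` below are purely illustrative values for a hypothetical simple linear calibration, not results from the paper:

```python
import numpy as np
from scipy.optimize import brentq

# Hypothetical fitted simple linear calibration y = b0 + b1*x on X = [-3, 3],
# with x-bar = 0 and sum(x_i^2) = n, so d^2(x) = (1 + x^2)/n.  All values illustrative.
b0, b1, S, n, v = 1.0, 2.0, 0.1, 30, 2.151

def g(x):
    # band half-width factor g(x) = v * (1 + d^2(x))^(1/2)
    return v * np.sqrt(1.0 + (1.0 + x**2) / n)

def muci(Yx, lo=-3.0, hi=3.0):
    """Invert the band yhat(x) +/- g(x)*S at the horizontal line y = Yx."""
    # For an increasing calibration line, the upper band edge gives the lower
    # x-limit and the lower edge gives the upper x-limit.
    x_lower = brentq(lambda x: b0 + b1 * x + g(x) * S - Yx, lo, hi)
    x_upper = brentq(lambda x: b0 + b1 * x - g(x) * S - Yx, lo, hi)
    return x_lower, x_upper

x_lo, x_hi = muci(3.0)   # naive point estimate is (3.0 - b0)/b1 = 1.0
```

The returned interval brackets the naive point estimate; because $g(x)$ varies only mildly here, the interval is nearly symmetric around it.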
+
+The layout of this paper is as follows. Section 2 deals with the construction of the MUCI's suggested from the marginal property (3) assuming a distribution of the explanatory variable. Section 3 provides a numerical comparison of the MUCI's based on the exact $(1-\alpha, \gamma)$-STI's and constructed by the suggested method for the case of a simple linear regression. The application of MUCI's is illustrated on an example. Section 4 contains discussion and conclusions.
+---PAGE_BREAK---
+
+## 2. NEW MULTIPLE USE CONFIDENCE INTERVALS
+
+A future observation, $Y_x = f^T(x)\beta + \varepsilon, Y_x \sim N(f^T(x)\beta, \sigma^2)$, corresponding to a value $x \in \mathcal{X}$ is assumed to be independent of a vector of observations $Y = (Y_{x_1}, \dots, Y_{x_n})^T$ from the calibration experiment $\mathcal{E}_n$. Let $X$ denote a $(n \times q)$-dimensional calibration experiment design matrix with rows $f^T(x_i), i = 1, \dots, n$. Throughout, we shall assume that rank($X$) = $q$. The least squares estimator $\hat{\beta} = (X^T X)^{-1} X^T Y$ of $\beta$, and the estimator of the measurement error variance $S^2 = (Y - X\hat{\beta})^T (Y - X\hat{\beta})/(n-q)$ are independent. Under the model assumptions it holds $\hat{\beta} \sim N_q(\beta, \sigma^2(X^T X)^{-1})$ and $S^2(n-q)/\sigma^2 \sim \chi_{n-q}^2$, where $\chi_{n-q}^2$ denotes a central chi-squared random variable with $n-q$ degrees of freedom.
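These two estimators can be computed directly from the calibration data; the following is a minimal sketch on synthetic data (the design, $\beta$ and $\sigma$ below are illustrative, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic calibration experiment E_n for f^T(x) = (1, x, x^2); values illustrative
n, q = 25, 3
x = np.linspace(0.0, 10.0, n)
X = np.column_stack([np.ones(n), x, x**2])   # design matrix with rows f^T(x_i), rank q
beta_true = np.array([1.5, 0.4, -0.004])
Y = X @ beta_true + rng.normal(scale=0.05, size=n)

# Least squares estimator and residual mean square on n - q degrees of freedom
beta_hat, *_ = np.linalg.lstsq(X, Y, rcond=None)
resid = Y - X @ beta_hat
S2 = resid @ resid / (n - q)
```

`beta_hat` solves the normal equations $X^T X \hat\beta = X^T Y$, and `S2` is the unbiased estimator of $\sigma^2$ used throughout the paper.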
+
+Define independent pivotal variables
+
+$$B = \frac{\hat{\beta} - \beta}{\sigma} \sim N_q(0_q, (X^T X)^{-1}), U^2 = \frac{S^2}{\sigma^2} \sim \frac{\chi_{n-q}^2}{n-q}, \quad (4)$$
+
+where $0_q$ denotes the $q$-dimensional vector of zeros. By using the pivotal variables, the probability of covering the observation $Y_x$ in (2) can be written as
+
+$$
+\begin{aligned}
+C(x; \hat{\beta}, S) &= P_{Y_x} \left( f^T(x)B - g(x)U \le \frac{Y_x - f^T(x)\beta}{\sigma} \le f^T(x)B + g(x)U \right) \\
+&= \Phi(f^T(x)B + g(x)U) - \Phi(f^T(x)B - g(x)U) = C(x; B, U),
+\end{aligned}
+\quad (5) $$
+
+where $\Phi(\cdot)$ denotes the cumulative distribution function of the standard normal distribution. Then, the condition (3) of the MUCI's can be expressed as
+
+$$P_{B,U}\left(\frac{1}{K} \sum_{i=1}^{K} C(x_i; B, U) \geq \gamma\right) = 1 - \alpha. \quad (6)$$
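The identity in (5) can be checked by a quick simulation: conditionally on a draw of $B, U$, the band covers $Y_x$ exactly when the standardized variable $Z = (Y_x - f^T(x)\beta)/\sigma$ falls between $f^T(x)B \pm g(x)U$. A sketch with illustrative values of $f^T(x)B$ and $g(x)U$:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2)

# Fixed (conditional) values of f^T(x)B and g(x)U at one x; illustrative numbers
ftB, gU = 0.3, 2.2

# Analytic conditional coverage C(x; B, U) from (5)
analytic = norm.cdf(ftB + gU) - norm.cdf(ftB - gU)

# Empirical coverage: Z = (Y_x - f^T(x)*beta)/sigma is standard normal, and Y_x
# lies in the band iff  f^T(x)B - g(x)U <= Z <= f^T(x)B + g(x)U
Z = rng.standard_normal(200_000)
empirical = np.mean((ftB - gU <= Z) & (Z <= ftB + gU))
```

The empirical proportion agrees with the $\Phi$-difference up to Monte Carlo noise.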
+
+Inspired by a connection between MUCI's and a prediction interval in a linear regression, we will consider the function $g(\cdot)$ in the following form
+
+$$g_{new}(x) = v\{1+d^2(x)\}^{1/2}, \quad d^2(x) = f^T(x)(X^T X)^{-1}f(x) \quad (7)$$
+
+where the constant $v > 0$ will be chosen to satisfy the calibration condition. Note that for the case $K=1$ it holds that $v = t_{n-q}(1-\alpha/2)$, where $t_{n-q}(1-\alpha/2)$ denotes the $(1-\alpha/2)$-quantile of the Student's $t$-distribution with $n-q$ degrees of freedom. For other possibilities for setting $g(\cdot)$ see Witkovský [17]. Since there is arbitrariness in the choice of the sequence $\{x_{n+i}\}$, it can be assumed that the sequence $\{x_{n+i}\}$ is randomly generated from a probability distribution on the interval $\mathcal{X}$. Here, we suggest assuming a uniform distribution of the $x$'s on $\mathcal{X}$. For a specified range of possible values of the explanatory variable, this is a natural choice of the distribution of the explanatory variable. Then, the scalar $v$ is a solution of the following integral equation
+
+$$P_{B,U} \left\{ (x_{max} - x_{min})^{-1} \int_{\mathcal{X}} C(x; B, U) dx \geq \gamma \right\} = 1 - \alpha. \quad (8)$$
+
+The equation (8) is a population counterpart to (6), with the average replaced by the expected value. Multiple integration is required for solving equation (8). Since the computation of the constant $v$ depends on $\mathcal{X}$ and also on $X$, tabulating its values for various $\alpha, \gamma$ is difficult. Hence, the value of $v$ is calculated for each problem anew. The unknown constant $v$ for the MUCI's can be estimated with adequate accuracy for practical work by simulation. The detailed algorithm for calculating $v$ is shown in Algorithm 1. The code in MATLAB is available from the author upon request.
+
+Table 1. Algorithm for calculating $v$ for the new MUCI's.
+
+**Algorithm 1**
+
+1: **Input:** $X$, $\mathcal{X}$ = [$x_{min}, x_{max}$], $\alpha, \gamma, N$ - number of runs
+   ($n$ is the number of rows of $X$, $q$ is the number of columns of $X$)
+2: Generate $N$ times $B \sim N_q(0_q, (X^T X)^{-1})$ and
+   $U \sim \sqrt{\chi_{n-q}^2/(n-q)}$ (e.g. $N = 500,000$)
+3: Find the root $v$ of the equation $\sum_{i=1}^{N} Ind(cov_i \geq \gamma)/N = 1 - \alpha$,
+   where $Ind(cov_i \geq \gamma) = 1$ if $cov_i \geq \gamma$ and 0 otherwise, and
+   $cov_i = (x_{max} - x_{min})^{-1} \int_{\mathcal{X}} C(x; B_i, U_i)dx$
+4: **Output:** $v$
+
+For example, suppose the simple linear regression, i.e. $f^T(x) = (1, x)$, $q=2$, $B = (B_0, B_1)^T$,
+
+$$\begin{pmatrix} B_0 \\ B_1 \end{pmatrix} \sim N_2 \left( \begin{pmatrix} 0 \\ 0 \end{pmatrix}, \frac{1}{S_x} \begin{pmatrix} \sum_{i=1}^{n} x_i^2/n & -\bar{x} \\ -\bar{x} & 1 \end{pmatrix} \right), U \sim \sqrt{\frac{\chi_{n-2}^2}{n-2}}, \quad (9)$$
+
+where $\bar{x} = \sum_{i=1}^{n} x_i/n$, and $S_x = \sum_{i=1}^{n} (x_i - \bar{x})^2$. Further, we shall assume that $\bar{x} = 0$, $\sum_{i=1}^{n} x_i^2 = n$, $n=30$, and $\mathcal{X} = [-3,3]$. We calculated $v$ by the algorithm 50 times, taking $\gamma = 0.90$ and $\alpha = 0.05$; the average equaled 2.150 and the standard deviation equaled 0.001. Note that the exact value of $v$ for this setting of parameters equals 2.151, see Table 2. The Monte Carlo approach is widely used in the development of statistical methods, and it was also used in Han et al. [5]. The calculated value of $v$ is used repeatedly for determining $\mathcal{I}(Y_{x_{n+1}}), \mathcal{I}(Y_{x_{n+2}}), ...$ corresponding to a sequence of additional responses $Y_{x_{n+1}}, Y_{x_{n+2}}, ...$. The MUCI's are computed by using a built-in root-finding function in analytical software, e.g. `fzero()` in MATLAB.
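Algorithm 1 can be sketched in Python for this worked setting ($\bar{x} = 0$, $\sum x_i^2 = n = 30$, $\mathcal{X} = [-3,3]$, $\gamma = 0.90$, $\alpha = 0.05$). This is a reduced-size sketch, not the author's MATLAB code: the integral is approximated by a grid average and $N$ is kept small, so the result only approximates the exact value $v = 2.151$.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(3)

# Worked setting: x-bar = 0 and sum(x_i^2) = n give (X^T X)^{-1} = I/n, d^2(x) = (1 + x^2)/n
n, q, alpha, gamma, N = 30, 2, 0.05, 0.90, 10_000
xs = np.linspace(-3.0, 3.0, 41)          # grid approximating the uniform average over X
w = np.sqrt(1.0 + (1.0 + xs**2) / n)     # g_new(x)/v = (1 + d^2(x))^(1/2)

# Step 2: generate the pivotal quantities B ~ N_2(0, I/n) and U^2 ~ chi2_{n-q}/(n-q)
B = rng.normal(scale=1.0 / np.sqrt(n), size=(N, 2))
U = np.sqrt(rng.chisquare(n - q, size=N) / (n - q))
m = B[:, [0]] + B[:, [1]] * xs           # f^T(x) B_i over the grid, shape (N, 41)

def mean_cov(v):
    """Average of C(x; B_i, U_i) over X for per-draw band constants v (shape (N,))."""
    half = v[:, None] * w * U[:, None]
    return (norm.cdf(m + half) - norm.cdf(m - half)).mean(axis=1)

# Step 3: cov_i(v) increases in v, so the smallest v_i with cov_i >= gamma is found by
# bisection; the root of sum(cov_i >= gamma)/N = 1 - alpha is then the empirical
# (1 - alpha)-quantile of the v_i's.
lo, hi = np.zeros(N), np.full(N, 10.0)
for _ in range(40):
    mid = 0.5 * (lo + hi)
    ok = mean_cov(mid) >= gamma
    hi = np.where(ok, mid, hi)
    lo = np.where(ok, lo, mid)
v_hat = float(np.quantile(hi, 1.0 - alpha))
```

With the seed above the estimate should approximate the tabulated 2.151; accuracy improves with larger $N$ and a finer grid.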
+
+## 3. NUMERICAL RESULTS
+
+We have numerically investigated the statistical properties of the MUCI's constructed by inverting the suggested band, i.e. with $g_{new}$, and by inverting the exact simultaneous tolerance intervals for the case of a simple linear regression, see Mee et al. [13], Krishnamoorthy and Mathew [7] (page 76, (3.3.15)).
+---PAGE_BREAK---
+
+Table 2. The values of $v$ and $\lambda$ for $\alpha = 0.05$ and selected combinations of $n, \tau, \gamma$.
+
+| γ | n | v: τ = 2 | v: τ = 3 | v: τ = 4 | λ: τ = 2 | λ: τ = 3 | λ: τ = 4 |
+|---|---|---|---|---|---|---|---|
+| .90 | 10 | 2.846 | 2.873 | 2.894 | 1.367 | 1.379 | 1.386 |
+| | 20 | 2.297 | 2.325 | 2.364 | 1.149 | 1.158 | 1.164 |
+| | 30 | 2.128 | 2.151 | 2.187 | 1.089 | 1.096 | 1.102 |
+| | 40 | 2.042 | 2.059 | 2.090 | 1.063 | 1.068 | 1.073 |
+| | 50 | 1.988 | 2.002 | 2.029 | 1.046 | 1.051 | 1.055 |
+| .75 | 10 | 2.010 | 2.063 | 2.133 | 1.245 | 1.264 | 1.276 |
+| | 20 | 1.612 | 1.646 | 1.703 | 1.056 | 1.070 | 1.080 |
+| | 30 | 1.491 | 1.514 | 1.557 | 1.010 | 1.020 | 1.029 |
+| | 40 | 1.429 | 1.446 | 1.470 | 0.992 | 0.999 | 1.005 |
+| | 50 | 1.391 | 1.404 | 1.431 | 0.983 | 0.988 | 0.993 |
+
+Mee et al. [13], for constructing the two-sided STI's, supposed the function $g(x)$, $x \in \mathcal{X}$, in the fixed functional form $g_{STI}(x) = \lambda\{z_{(1+\gamma)/2} + \sqrt{q+2}\,d(x)\}$, where $z_{(1+\gamma)/2}$ denotes the $(1+\gamma)/2$-quantile of the standard normal distribution. The constant $\lambda > 0$ is chosen to satisfy the condition of the STI's for a multiple linear regression in which the first $m$ components are common to all rows of $X$. In the case of a simple linear regression the first component equals 1 for all $f^T(x)$, i.e. $f^T(x) = (1, x)$ (i.e., $q=2$) for all $x \in \mathcal{X}$. Under the assumption $\bar{x}=0$ it holds that $d^2(x) = (1,x)(X^T X)^{-1}(1,x)^T = 1/n + x^2/S_{xx}^2$, where $S_{xx}^2 = \sum_{i=1}^n x_i^2$. Mee et al. [13] suggested a procedure for determining $\lambda$ over a given range [$d_{min}, d_{max}$] of $d(x)$. The values of $\lambda$ reported in Mee et al. [13] and in Krishnamoorthy and Mathew [7] were calculated for a double regression, assuming $d_{min} = n^{-1/2}$ and $d_{max} = ((1+\tau^2)/n)^{1/2}$, $\tau = \{2,3,4\}$; it implies $d_{max}^2 = 1/n + \tau^2/n$. For simplicity and to obtain the same range, we considered $\mathcal{X} = [-\tau, \tau]$, i.e. $x_{max}^2 = x_{min}^2 = \tau^2$ and $S_{xx}^2 = n$. Under the above assumptions the distributions of the variables $B = (B_0, B_1)^T$ and $U$ are the same as in (9). Table 2 provides the values of $v$ and $\lambda$ computed for $n = \{10, 20, 30, 40, 50\}$, $\alpha = .05$, $\gamma = \{.75, .90\}$, $\tau = \{2, 3, 4\}$ over $\mathcal{X} = [-\tau, \tau]$. The values of $v$ and $\lambda$ were determined by direct computation (i.e., three-dimensional quadrature). Note that the values of $\lambda$ presented in Table 2 are slightly smaller than the values reported in [7] and [13].
The difference between the values of $\lambda$ is caused by the fact that the values of $\lambda$ tabulated in [7], [13] were determined assuming a double regression (i.e., $Y_{x_0,x_1} = \beta_0 x_0 + \beta_1 x_1$), while the values of $\lambda$ in Table 2 were determined for the case of a simple linear regression (i.e., $Y_{1,x_1} = \beta_0 + \beta_1 x_1$, $x_0 = 1$).
+
+In what follows, the statistical properties of the two-sided MUCI's are numerically investigated for the considered settings of parameters $n, \alpha, \gamma, \tau$ in Table 2 and by using the values of $v$ and $\lambda$ from Table 2.
+
+### 3.1. Estimated confidence
+
+Three different sequences of $\{x_{n+i}\}_{i=1}^K$ were considered to investigate the confidence of the considered MUCI's, see Fig. 2.
+
+Fig. 2. Probability density functions of the considered distributions in the numerical experiment.
+
+The first sequence (S1) consists of $x_{n+i}$'s generated from $U(\mathcal{X})$, where $U(a,b)$ denotes the uniform distribution on the interval $[a,b]$. Since $v$ is calculated by assuming the uniform distribution for $x$ on $\mathcal{X}$, we considered two triangular distributions for $x$ of different shapes to analyse the behaviour of the suggested MUCI's when this assumption is not correct. The second sequence (S2) consists of $x_{n+i}$'s generated from $Tr(\mathcal{X}, 0)$, where $Tr(I,b)$ denotes the triangular distribution on an interval $I$ with parameter of non-centrality (mode) $b$. The third sequence (S3) consists of $x_{n+i}$'s generated from $Tr(\mathcal{X}, \tau)$. In addition, we considered three ranges of possible values, given as $\mathcal{X} = [-\tau, \tau]$, $\tau = \{2, 3, 4\}$, for each sequence. The distribution of $\hat{\beta}$ depends on the design matrix $X$ through the value $S_{xx}^2 = n$ in our setting. By considering three different $\mathcal{X}$ for the fixed value of $S_{xx}^2$, we tried to investigate the influence of $X$ on the confidence of MUCI's. The empirical confidences (6) are based on $N=100,000$ generated triples $(b_0, b_1, u)^T$ of the random variables $B_0, B_1, U$, and the mean coverage is analysed for $K=10,000$ $x_{n+i}$'s on $\mathcal{X}$ corresponding to the selected sequence. The values of $\lambda$ and of $v$ reported in Table 2 were used.
+
+The estimated confidences are presented in Table 3. The estimated confidence of the MUCI's based on the suggested band around the fitted calibration curve is satisfactorily close to the prescribed level for all considered sequences of $x_i$. As we expected, the MUCI's constructed based on the exact STI's are conservative: the estimated confidence level is above the prescribed level, and their empirical confidences grow with increasing values of $\tau$ for all sequences.
+
+### 3.2. Average band width
+
+Inverting a narrower band yields narrower MUCI's, which provide more accurate information about the unknown value $x$. Because the new band and the STI's around the fitted calibration curve differ in the functional form of the function $g(x)$, $x \in \mathcal{X}$, it is not possible to compare the bands based only on the values in Table 2. The functions $g_{STI}(x)$ and $g_{new}(x)$, $x \in [-4,4]$, for $\gamma = 0.9$ with the values of $v$ and $\lambda$ from Table 2 are shown in Fig. 3. For the case $n=10$, a band constructed with $g_{new}$ for a given $\hat{\beta}$, $S^2$ would be uniformly narrower on all of $\mathcal{X}$, while for the case $n=50$ there is an interval on $\mathcal{X}$ where the tolerance band would be narrower.
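This comparison can be reproduced directly from Table 2; a short sketch using the $\gamma = 0.90$, $\tau = 4$ values for $n = 10$ and $n = 50$, with $g_{STI}$ and $g_{new}$ in the forms given in the text:

```python
import numpy as np
from scipy.stats import norm

z = norm.ppf((1 + 0.90) / 2)        # z_{(1+gamma)/2} for gamma = 0.90
q = 2
x = np.linspace(-4.0, 4.0, 401)

def bands(n, v, lam):
    d = np.sqrt(1.0 / n + x**2 / n)              # d(x) under x-bar = 0, S_xx^2 = n
    g_new = v * np.sqrt(1.0 + d**2)              # suggested band
    g_sti = lam * (z + np.sqrt(q + 2) * d)       # band of the exact STI's
    return g_new, g_sti

# Values of v and lambda from Table 2 for gamma = .90, tau = 4
g_new10, g_sti10 = bands(10, 2.894, 1.386)
g_new50, g_sti50 = bands(50, 2.029, 1.055)
```

For $n=10$ the suggested band is narrower at every grid point, while for $n=50$ each band is narrower on part of the range, matching the description above.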
+---PAGE_BREAK---
+
+Table 3. The estimated confidences of the two-sided MUCI's constructed by inverting the suggested band (v) and by the exact STI's (λ), respectively. Prescribed level 1 − α = 0.95.
+
+| | γ | n | v: τ = 2 | v: τ = 3 | v: τ = 4 | λ: τ = 2 | λ: τ = 3 | λ: τ = 4 |
+|---|---|---|---|---|---|---|---|---|
+| S1 | .90 | 10 | .950 | .950 | .951 | .972 | .976 | .980 |
+| | | 20 | .949 | .950 | .950 | .979 | .986 | .988 |
+| | | 30 | .950 | .949 | .953 | .983 | .989 | .991 |
+| | | 40 | .952 | .952 | .952 | .984 | .992 | .993 |
+| | | 50 | .952 | .952 | .950 | .982 | .990 | .999 |
+| | .75 | 10 | .950 | .952 | .952 | .977 | .982 | .984 |
+| | | 20 | .949 | .950 | .951 | .986 | .991 | .992 |
+| | | 30 | .950 | .950 | .952 | .987 | .993 | .994 |
+| | | 40 | .951 | .952 | .952 | .989 | .995 | .996 |
+| | | 50 | .952 | .951 | .951 | .988 | .995 | .995 |
+| S2 | .90 | 10 | .949 | .952 | .953 | .968 | .972 | .975 |
+| | | 20 | .953 | .955 | .959 | .974 | .981 | .986 |
+| | | 30 | .952 | .957 | .962 | .974 | .984 | .988 |
+| | | 40 | .951 | .957 | .964 | .975 | .986 | .991 |
+| | | 50 | .953 | .957 | .963 | .974 | .983 | .989 |
+| | .75 | 10 | .950 | .956 | .96 | .972 | .979 | .982 |
+| | | 20 | .953 | .958 | .965 | .978 | .987 | .992 |
+| | | 30 | .952 | .959 | .968 | .970 | .989 | .993 |
+| | | 40 | .952 | .960 | .960 | .980 | .990 | .994 |
+| S3 | … | … | … | … | … | … | … | … |
+
+The average width of a band over $\mathcal{X}$ is defined as $\xi = \int_{\mathcal{X}} g(x)dx/(x_{max} - x_{min})$. Table 4 provides the values of $\xi$ for the suggested band and for the STI's for the combinations of parameters $n$, $\tau$, $\gamma$ from Table 2. The average width of the suggested band is smaller than the average width of the STI's for all considered combinations of parameters $n$, $\tau$, $\gamma$.
+
+Fig.3. Illustration of function g for the suggested band ($g_{new}$) and
+for the exact STI's ($g_{STI}$).
+
+Table 4. The average width of the suggested band ($\xi_v$) and of the STI's ($\xi_\lambda$) for the combinations of parameters $n$, $\tau$, $\gamma$ from Table 2.
+
+## **3.3. An Example**
+
+For a numerical illustration we considered a controlled experiment that was conducted at the National Biological Service, Louisiana, to predict the amount of sodium chloride solution in deionized water (ASCS) based on electric conductivity (EC). The calibration data given in Johnson and Krishnamoorthy [6] involved 31 pairs $(x_i, y_i)$, $i = 1, 2, ..., 31$, where $x_i$ is precisely known ASCS in deionized water and $y_i$ is the corresponding EC measurement obtained by using a Fisher conductivity meter.
+---PAGE_BREAK---
+
+The calibration data can be used repeatedly to construct MUCI's for the ASCS corresponding to all future measurements of EC. In the analysis that follows, we used 28 randomly chosen measurements (out of 31) to estimate the parameters of the model. The omitted 3 measurements are used to construct MUCI's for the corresponding ASCS. Because the three true ASCS values in deionized water are known, we can see how well the constructed MUCI's capture the true value.
+
+A polynomial regression of the second order fits the data well. Based on an analysis of residuals, the distribution of the response can be modeled as normal, i.e. $Y_x \sim N(\beta_0 + \beta_1x + \beta_2x^2, \sigma^2)$, where $\beta = (\beta_0, \beta_1, \beta_2)$ and $\sigma^2 > 0$ are unknown parameters. The ordinary least squares estimate $\hat{\beta}$ of $\beta$ and the residual mean square estimate $S^2$ of $\sigma^2$ are $\hat{\beta} = [1.5911, 0.4158, -0.0043]'$ and $S^2 = 0.0007$. Moreover, $\bar{x} = 8.5893$, $\overline{x^2} = \sum_{i=1}^{28} x_i^2/28 = 110.5089$, and $d^2(x) = 1/28 + 0.00972(x - 8.5893)^2 + 0.000019(x^2 - 110.5089)^2 - 0.00082(x - 8.5893)(x^2 - 110.5089)$. For given $q=3$, $n=28$, chosen $\gamma = 0.90$, $x_{min}=0$, $x_{max}=24$, and significance level $\alpha = 0.05$, we evaluated $\lambda = 1.0607$ and $\nu = 2.1735$. The two bands are very close to each other ($\xi_v = 55.56$ and $\xi_\lambda = 59.46$); the suggested band is narrower than the tolerance band over the range of possible values of ASCS in the example. Table 5 gives the MUCI's based on the three omitted measurements of EC and the corresponding true ASCS values in deionized water.
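As a hedged illustration of how an interval for ASCS is obtained by inverting a band around the fitted curve, the sketch below evaluates the fitted second-order calibration curve from the OLS estimates above and inverts a band on a grid. The constant half-width `h` used here is a hypothetical stand-in for the paper's width $\nu \cdot S \cdot d(x)$, for illustration only.

```python
# Sketch: invert the fitted calibration curve y = b0 + b1*x + b2*x^2 over a
# grid of x values to obtain an interval estimate of ASCS from a measured EC.
# The coefficients are the OLS estimates reported in the text; the constant
# half-width h is a hypothetical stand-in for the paper's nu * S * d(x).

def f(x, beta=(1.5911, 0.4158, -0.0043)):
    """Fitted second-order calibration curve (EC as a function of ASCS)."""
    b0, b1, b2 = beta
    return b0 + b1 * x + b2 * x * x

def invert_band(y, h, x_min=0.0, x_max=24.0, step=0.001):
    """Grid approximation of {x in [x_min, x_max] : |y - f(x)| <= h}."""
    n = int((x_max - x_min) / step)
    inside = [x_min + i * step for i in range(n + 1)
              if abs(y - f(x_min + i * step)) <= h]
    return (min(inside), max(inside)) if inside else None
```

Reassuringly, the fitted curve maps the true ASCS values of Table 5 close to the observed EC values: f(2.0) ≈ 2.41, f(5.5) ≈ 3.75 and f(17.0) ≈ 7.42, and inverting around y = 2.4 recovers an interval containing x = 2.0.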
+
+Table 5. The multiple use confidence intervals from the example, where $\mathcal{I}_\lambda$ denotes the MUCI based on STI's and $\mathcal{I}_v$ denotes the new MUCI. In this artificial example the value of x is known and is given for comparison.
+
+| y | 2.4 | 3.8 | 7.5 |
+|---|---|---|---|
+| x | 2.0 | 5.5 | 17.0 |
+| $\mathcal{I}_\lambda$ | (1.8137, 2.155) | (5.469, 5.806) | (17.001, 17.513) |
+| $\mathcal{I}_v$ | (1.8296, 2.141) | (5.472, 5.802) | (17.024, 17.488) |
+
+All MUCI's constructed by inverting the suggested band are narrower than the MUCI's constructed by inverting the STI's. Although both MUCI's determined for EC equal to 7.5 missed the true ASCS value 17.0, it should be pointed out that they still provide accurate information on the true value.
+
+## 4. DISCUSSION AND CONCLUSION
+
+The procedure for constructing the multiple use confidence intervals is derived directly from the calibration condition (3), assuming a large number of future observations K and a uniformly distributed explanatory variable. The proposed multiple use confidence intervals are constructed by inverting a symmetric band of fixed functional form around the fitted calibration curve; the width of the band is proportional to a scalar $\nu$. The value of $\nu$ computed for given parameters $1 - \alpha, \gamma, n, q, X$ is used repeatedly for determining all future multiple use confidence intervals. It was demonstrated that the condition of the multiple use confidence intervals is satisfied
+
+quite well, and based on the provided numerical investigation it is concluded that the proposed MUCI's are narrower than the MUCI's constructed from the STI's. We can recommend using our MUCI's for calibration problems in which the range of possible values is spanned by the calibration experiment. The procedure for computing the value $\nu$ can be modified appropriately for a known distribution of the explanatory variable.
+
+
+
+## 5. ACKNOWLEDGEMENT
+
+The work was supported by the Slovak Research and Development Agency, project APVV-15-0295, and by the Scientific Grant Agency of the Ministry of Education of the Slovak Republic and the Slovak Academy of Sciences, project VEGA 2/0081/19 and VEGA 2/0054/18.
+
+## REFERENCES
+
+[1] Acton, F.S. (1959). *Analysis of Straight-Line Data*. New York: John Wiley.
+
+[2] Carlstein, E. (1986). Simultaneous Confidence Regions for Predictions. *The American Statistician*, 40, 277–279.
+
+[3] Eisenhart, C. (1939). The Interpretation of certain regression methods and their use in biological and industrial research. *Annals of Mathematical Statistics*, 10, 162–186.
+
+[4] Halperin, M. (1961). Fitting of straight lines and prediction when both variables are subject to error. *Journal of the American Statistical Association*, 56, 657–669.
+
+[5] Han, Y., Liu, W., Bretz, F., Wan, F., Yang, P. (2016). Statistical calibration and exact one-sided simultaneous tolerance intervals for polynomial regression. *Journal of Statistical Planning and Inference*, 168, 90–96.
+
+[6] Johnson, D., Krishnamoorthy, K. (1996). Combining independent studies in a calibration problem. *Journal of the American Statistical Association*, 91, 1707–1715.
+
+[7] Krishnamoorthy, K., Mathew, T. (2009). *Statistical Tolerance Regions: Theory, Applications, and Computation*. New Jersey: John Wiley&Sons.
+
+[8] Krishnamoorthy, K., Kulkarni, P.M., Mathew, T. (2001). Multiple use one-sided hypotheses testing in univariate linear calibration. *Journal of Statistical Planning and Inference*, 93, 211–223.
+
+[9] Lieberman, G. J. (1961). Prediction regions for several predictions from a single regression line. *Technometrics*, 3, 21–27.
+
+[10] Lieberman, G.J., Miller, R.G., Hamilton, M.A. (1967). Unlimited simultaneous discrimination intervals in regression. *Biometrika*, 54, 133–145.
+
+[11] Mandel, J. (1958). A note on confidence intervals in regression problems. *Annals of Mathematical Statistics*, 29, 903–907.
+
+[12] Mee, R.W., Eberhardt, K.R. (1996). A comparison of uncertainty criteria for calibration. *Technometrics*, 38, 221–229.
+---PAGE_BREAK---
+
+[13] Mee, R.W., Eberhardt, K.R., Reeve, C.P. (1991). Calibration and simultaneous tolerance intervals for regression. *Technometrics*, 33, 211–219.
+
+[14] Odeh, R.E., Mee, R.W. (1990). One-sided simultaneous tolerance limits for regression. *Communications in Statistics – Simulation and Computation*, 19, 663–668.
+
+[15] Osborne, C. (1991). Statistical calibration: a review. *International Statistical Review*, 59, 309–336.
+
+[16] Scheffé, H. (1973). A statistical theory of calibration. *Annals of Statistics*, 1, 1–37.
+
+[17] Witkovský, V. (2014). On the exact two-sided tolerance intervals for univariate normal distribution and linear regression. *Austrian Journal of Statistic*, 43, 279–292.
+
+Received July 8, 2019.
+Accepted November 13, 2019.
\ No newline at end of file
diff --git a/samples/texts_merged/5844994.md b/samples/texts_merged/5844994.md
new file mode 100644
index 0000000000000000000000000000000000000000..1d5ddb6487054b550084e3ce2d6bf0a41b538d0c
--- /dev/null
+++ b/samples/texts_merged/5844994.md
@@ -0,0 +1,214 @@
+
+---PAGE_BREAK---
+
+Quantifying the Uncertainty in Deterministic Phonon Transport Calculations of Thermal Conductivity using
+Polynomial Chaos Expansions
+
+Jackson R. Harter,† P. Alex Greaney,* Todd S. Palmer†
+
+†Department of Nuclear Science and Engineering, Oregon State University
+
+\*Department of Mechanical Engineering and MS&E Program, University of California - Riverside
+
+harterj@oregonstate.edu, agreaney@engr.ucr.edu, palmerts@engr.orst.edu
+
+INTRODUCTION
+
+The nature of computational simulations requires the in-
+clusion of an uncertainty analysis, as we have limited knowl-
+edge of all physically determined input parameters for a com-
+putationally simulated problem. We rely on uncertainty quan-
+tification (UQ) to characterize our confidence in the outcomes.
+Quantifying uncertainty can provide a basis for certifications
+in high-consequence decisions, such as nuclear reactor design,
+and is a fundamental component of model validation.
+
+We employ a previously developed method of uncertainty quantification, polynomial chaos expansion with stochastic collocation (PCE-SC) [1], applied to deterministic phonon transport simulations. In these simulations, we use the neutron transport code Rattlesnake [2], which solves the Self-Adjoint Angular Flux (SAAF) formulation of the transport equation with a continuous finite element (CFEM) spatial discretization and discrete ordinates or spherical harmonics angular discretizations. Rattlesnake was developed in the Multiphysics Object-Oriented Simulation Environment (MOOSE) framework [3]. We have previously shown Rattlesnake to be effective in simulating phonon transport [4].
+
+We aim to provide a deterministic phonon transport frame-
+work for heterogeneous nuclear fuel with fission product de-
+fects to predict thermal conductivity (κ) [5]. A first principles,
+physics-based calculation of thermal conductivity must in-
+volve factors such as the microstructure of nuclear fuel, which
+constantly changes during the fission process through the for-
+mation of isotopic decay products. Heat transport in oxide
+nuclear fuels is dominated by phonon transport. Impurities
+in the bulk material influence the transport of energy at the
+fundamental level, altering the scattering behavior of phonons
+and electrons.
+
+Conventionally, heat transport follows classical physics
+based on the heat equation derived from Fourier's law
+
+$$
+q = -\kappa \nabla T,
+$$
+
+(1)
+
+where *q* is heat flux, *κ* is thermal conductivity and ∇*T* is a
+temperature gradient. However, Fourier's law is a macroscopic
+empirical law in which the thermal conductivity *κ* does not
+have a mechanistic connection to the underpinning heat trans-
+port processes. Thermal conductivity of a material depends
+both on the material's intrinsic ability to transport heat and
+a variety of resistive effects caused by defects in the mate-
+rial. Thus *ab initio* prediction of the macroscopic conductivity
+— the property of interest for safe reactor operation — re-
+quires simulating detailed processes of heat transport, and
+then determining the *effective* thermal conductivity of the sim-
+ulated material from the resulting heat flux under the imposed
+temperature difference. The thermal conductivity of a bulk,
+
+homogeneous, dielectric material can be well estimated by the
+mechanistically derived expression [6]:
+
+$$
+\kappa_{\text{bulk}} = \frac{1}{3} C_v v_g \lambda,
+$$
+
+(2)
+
+where $C_v$, $v_g$, and $\lambda$ are the volumetric specific heat, phonon speed, and phonon mean free path, respectively.
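Eq. (2) is easy to check numerically. In the sketch below the mean free path is back-solved from the bulk silicon value $\kappa_{\text{bulk}} = 157$ W·m⁻¹·K⁻¹ quoted later in the paper, using the quoted $C_v$ and $v_g$; the resulting $\lambda \approx 3.4 \times 10^{-8}$ m is the gray-model value implied by the paper's numbers, not an independently measured datum.

```python
# Gray-model bulk thermal conductivity, Eq. (2): kappa = (1/3) * C_v * v_g * lambda.
def kappa_bulk(c_v, v_g, lam):
    return c_v * v_g * lam / 3.0

# Silicon parameters quoted in the paper:
C_V = 1.65e6   # volumetric specific heat, J m^-3 K^-1
V_G = 8430.0   # phonon speed, m s^-1

# Mean free path implied by kappa_bulk = 157 W m^-1 K^-1 (back-solved, illustrative):
LAM = 3.0 * 157.0 / (C_V * V_G)   # ~3.4e-8 m
```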
+
+The importance of microstructure and boundary scattering is illustrated by Fig. 1, which shows the reduction in the effective thermal conductivity across a thin slab of material (Fig. 2) as compared to its bulk value. In these problems the relevant parameter is the material’s acoustic thickness, its characteristic distance L relative to the phonon mean free path λ [7]. In Eq. (2) the largest source of uncertainty is λ. While propagating uncertainty in λ to uncertainty in κ through Eq. (2) is trivial, systems with extrinsic scattering require more sophisticated approaches. At the microscopic level, uncertainty in λ changes both a material’s intrinsic thermal conductivity and its acoustic thickness.
+
+Fig. 1. Reduction in the thermal conductivity across a thin slab of material as compared to bulk.
+
+Our goal is to propagate the uncertainty in $\lambda$ through the
+deterministic transport computation of $\kappa$. Because there are a
+small number of uncertain input quantities, we use the method
+of polynomial chaos expansion (PCE), which expresses so-
+lutions in the form of spectral expansions of an uncertain
+variable [1]. This approach combines both intrusive and non-
+intrusive methods of uncertainty propagation techniques and
+results in a unique formulation which is very effective and
+efficient for problems with few uncertain parameters [8, 9].
+---PAGE_BREAK---
+
+We use PCE-SC to measure propagation of uncertainty in a 3-D phonon transport problem of homogeneous silicon, to compute the mean and variance in temperature, heat flux and thermal conductivity.
+
+## METHODS
+
+Brute force Monte Carlo can be used to calculate statistical moments of various relevant quantities from realizations of statistical distributions of input parameters, though it can be prohibitively slow: by the Central Limit Theorem, the sampling error decreases only as $1/\sqrt{N}$ in the number of samples $N$. PCE-SC approximates integrals over the probability distributions using deterministic quadrature, and has been shown to be very computationally efficient for small numbers of uncertain input parameters.
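The efficiency difference can be seen concretely in a toy comparison: estimating $E[\Xi^2] = 1/3$ for $\Xi$ uniform on $(-1, 1)$, plain Monte Carlo converges at the $1/\sqrt{N}$ rate, while an 8-point Gauss-Legendre rule (nodes hardcoded from standard tables) integrates this low-order polynomial exactly. This is an illustrative toy, not the paper's transport calculation.

```python
import random

# 8-point Gauss-Legendre nodes/weights on (-1, 1), standard tabulated values.
NODES = [-0.9602898565, -0.7966664774, -0.5255324099, -0.1834346425,
          0.1834346425,  0.5255324099,  0.7966664774,  0.9602898565]
WEIGHTS = [0.1012285363, 0.2223810345, 0.3137066459, 0.3626837834,
           0.3626837834, 0.3137066459, 0.2223810345, 0.1012285363]

def g(xi):
    return xi * xi   # E[g] = 1/3 under Xi ~ U(-1, 1)

# Stochastic collocation: deterministic quadrature (density of U(-1,1) is 1/2).
quad_mean = 0.5 * sum(w * g(x) for w, x in zip(WEIGHTS, NODES))

# Brute-force Monte Carlo with N = 1000 samples: error ~ 1/sqrt(N).
random.seed(0)
mc_mean = sum(g(random.uniform(-1.0, 1.0)) for _ in range(1000)) / 1000
```

The quadrature answer agrees with 1/3 to within the rounding of the tabulated weights, while the Monte Carlo estimate still carries percent-level noise at N = 1000.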
+
+We solve the steady-state Boltzmann transport equation (BTE) to determine the radiant intensity of heat-carrying phonons and from the moments of the intensity, determine the effective thermal conductivity. The governing equation is the BTE for gray phonons in the SAAF formulation (which we have developed previously [5]) using the single mode relaxation time approximation [6], given by Eq. (3):
+
+$$ -\lambda \hat{\Omega} \cdot \nabla [\lambda \hat{\Omega} \cdot \nabla I(\mathbf{r}, \hat{\Omega})] + I(\mathbf{r}, \hat{\Omega}) = -\lambda \hat{\Omega} \cdot \nabla I^0(\mathbf{r}) + I^0(\mathbf{r}) \quad (3) $$
+
+Here, $I(r, \hat{\Omega})$ is phonon radiant intensity at position r(x, y, z) traveling in direction $\hat{\Omega}$. The radiance $I$ has units of W·m⁻²·sr⁻¹. In this transport problem, the change in the phonon intensity at a point has two contributions: a streaming term from the spatial variation in intensity, and a collision term due to the deviation of the radiance from the equilibrium phonon radiance $I^0(r)$. The mean free path $\lambda$ is the product of phonon speed $v_g$ and relaxation time $\tau$ and has units of length (m).
+
+The zeroth angular moment of phonon radiance $I$ is proportional to temperature, phonon speed, and volumetric specific heat capacity:
+
+$$ \int_{4\pi} I(r, \hat{\Omega}) d\Omega = \frac{C_v v_g T}{4\pi}. \qquad (4) $$
+
+The first angular moment is the heat flux:
+
+$$ q(r) = \int_{4\pi} I(r, \hat{\Omega}) \hat{\Omega} d\Omega \qquad (5) $$
+
+In a cube of silicon with side length $3\lambda$ (Fig. 2), we simulate a 1 K temperature gradient along the x-axis, with boundary temperatures of $T_R = 301$ K, $T_L = 300$ K, $C_v = 1.65 \cdot 10^6$ J·m⁻³·K⁻¹, $v_g = 8430$ m·s⁻¹. Simulations use the generalized minimal residual (GMRES) method [10] to solve the linear system, with solver convergence criteria of $\epsilon = 10^{-6}$. The finite element mesh is constructed in CUBIT and is composed of $10^3$ elements, and the simulation has ~$10^4$ degrees of freedom. The run-time is approximately 7 seconds on a 2.8 GHz Intel i7 CPU with 16 GB RAM. The reported mean free path for phonons in silicon at room temperature (300 K) varies widely [11, 12]. Reported values of $\lambda$ are averaged values that coalesce the scattering behavior of a broad spectrum of phonon wavelengths and their interactions with different
+
+types of extrinsic defects. The volumetric specific heat $C_v$ and phonon speed $v_g$ can be determined with good accuracy from first principles, while $\lambda$ is often obtained as a fitting parameter needed to make Eq. (2) match some empirically determined value of $\kappa$. The large variation in measured $\kappa$ means that applying an empirically obtained $\lambda$ from one problem to another is accompanied by a large degree of uncertainty, justifying the need for UQ for these types of problems.
+
+Fig. 2. Mesh of silicon domain with side length $3\lambda$.
+
+In this study, we have assigned a relative uncertainty on $\lambda$ of 33% ($\frac{1}{3}$) based on literature values of mean free path in silicon [11, 12], and so choose its range to be $\lambda \pm \lambda/3$. We assume that the statistical distribution for $\lambda$ is uniform, such that
+
+$$ \lambda(\Xi) = \bar{\lambda} + \frac{\bar{\lambda}}{3}\Xi, \quad \Xi \in (-1, 1), \qquad (6) $$
+
+The uniform distribution evaluated at the ordinates of an $S_8$ Gauss-Legendre quadrature is shown in Fig. 3. [Other statistical distributions are possible in the PCE-SC approach.]
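A short sketch of Eq. (6) evaluated at the collocation points: the mean free path realizations used for the deterministic solves are $\bar{\lambda}(1 + \Xi_m/3)$ at the 8-point Gauss-Legendre ordinates (standard tabulated nodes). The $\bar{\lambda}$ value below is the gray mean free path implied by the paper's silicon numbers, used here purely for illustration.

```python
# Eq. (6): lambda(Xi) = lam_bar * (1 + Xi / 3), Xi ~ U(-1, 1), sampled at the
# ordinates of an 8-point Gauss-Legendre quadrature (cf. Fig. 3).
NODES = [-0.9602898565, -0.7966664774, -0.5255324099, -0.1834346425,
          0.1834346425,  0.5255324099,  0.7966664774,  0.9602898565]
WEIGHTS = [0.1012285363, 0.2223810345, 0.3137066459, 0.3626837834,
           0.3626837834, 0.3137066459, 0.2223810345, 0.1012285363]

def lam_realizations(lam_bar):
    """Mean free paths for the deterministic BTE solves, one per node."""
    return [lam_bar * (1.0 + xi / 3.0) for xi in NODES]

LAM_BAR = 3.4e-8   # illustrative gray mean free path, m
```

Every realization stays inside the stated ±33% range, and the quadrature-weighted average recovers $\bar{\lambda}$ since $E[\Xi] = 0$.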
+
+In PCE-SC we express the randomness introduced in the intensity (and its angular moments) as an expansion in orthogonal polynomials. The uniform statistical distribution is most efficiently treated with Legendre polynomials and Gauss quadrature. Each quantity of interest in the transport simulation has a statistical distribution and is expanded in terms of a finite series of Legendre polynomials in the random variable. The expansion for temperature, for example, becomes
+
+$$ T(\Xi) \approx \sum_{l=0}^{N} T_l P_l(\Xi), \qquad (7) $$
+
+where $P_l$ is the Legendre polynomial of order $l$ and $T_l$ is the expansion coefficient. For this work, the Legendre polynomial order was set to $N=8$, with quadrature order set to $M=8$. An 8-point Gauss-Legendre quadrature exactly integrates polynomials up to degree $2M - 1 = 15$.
+
+The BTE is solved for each of the discrete values of $\lambda$ associated with the chosen quadrature, and numerical integrals are performed to estimate the statistical moments of the temperature (zeroth angular moment of $I$) via
+
+$$ T_{\ell}(\mathbf{r}) \approx \frac{2\ell + 1}{2} \sum_{m=1}^{M} w_m T(\mathbf{r}, \Xi_m) P_{\ell}(\Xi_m). \qquad (8) $$
+---PAGE_BREAK---
+
+Fig. 3. The continuous ($\lambda(\Xi)$) statistical mean free path distribution and the values of $\lambda$ corresponding to the $S_8$ Gauss quadrature ($\lambda(\Xi_m)$). $\bar{\lambda}$ is the value of the phonon mean free path [13].
+
+[An analogous equation exists for the heat flux $q$.]
+
+Once the moments have been obtained through Eq. (8), the mean and variance may be computed. The mean temperature at each spatial location is equal to zeroth Legendre moment of the temperature distribution:
+
+$$ \langle T(r) \rangle = T_0(r). \quad (9) $$
+
+The variance in temperature can be computed from the remaining Legendre moments:
+
+$$ \sigma_T^2(r) = \sum_{l=1}^{N} \frac{1}{2l+1} T_l^2(r) \quad (10) $$
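The chain from Eq. (7) through Eq. (10) can be sketched end-to-end for a toy response. The linear $T(\Xi)$ below is hypothetical; the point is that the quadrature projection of Eq. (8) yields coefficients whose zeroth entry is the mean (Eq. (9)) and whose remaining entries give the variance (Eq. (10)).

```python
# Project a response sampled at the collocation nodes onto Legendre
# polynomials (Eq. (8)), then read off mean (Eq. (9)) and variance (Eq. (10)).
NODES = [-0.9602898565, -0.7966664774, -0.5255324099, -0.1834346425,
          0.1834346425,  0.5255324099,  0.7966664774,  0.9602898565]
WEIGHTS = [0.1012285363, 0.2223810345, 0.3137066459, 0.3626837834,
           0.3626837834, 0.3137066459, 0.2223810345, 0.1012285363]

def legendre(l, x):
    """P_l(x) via the recurrence (n+1) P_{n+1} = (2n+1) x P_n - n P_{n-1}."""
    if l == 0:
        return 1.0
    p_prev, p = 1.0, x
    for n in range(1, l):
        p_prev, p = p, ((2 * n + 1) * x * p - n * p_prev) / (n + 1)
    return p

def legendre_moments(sample, n_max=8):
    """T_l = (2l+1)/2 * sum_m w_m sample(Xi_m) P_l(Xi_m), per Eq. (8)."""
    return [(2 * l + 1) / 2.0
            * sum(w * sample(xi) * legendre(l, xi)
                  for w, xi in zip(WEIGHTS, NODES))
            for l in range(n_max + 1)]

# Hypothetical linear response: a temperature varying with the perturbation.
T = legendre_moments(lambda xi: 300.5 + 0.5 * xi)
mean_T = T[0]                                                # Eq. (9)
var_T = sum(T[l] ** 2 / (2 * l + 1) for l in range(1, len(T)))  # Eq. (10)
```

For this linear response the exact answers are a mean of 300.5 and a variance of $0.5^2/3$, which the moment formulas reproduce.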
+
+From the statistical mean and variance of $q(r)$ and the temperature gradient across the axis of transport, we are able to compute a volume-averaged thermal conductivity, $\langle \kappa_{\text{eff},x} \rangle$, as
+
+$$ \langle \kappa_{\text{eff},x} \rangle = - \frac{1}{V} \frac{\int d^3 r \, e_x \cdot q(r)}{\Delta T / L}, \quad (11) $$
+
+where $V$ is the volume of the simulation domain.
+
+Values for all output quantities are reported in Table I.
+
+TABLE I. Mean and standard deviation for $\langle q \rangle$, $\langle T \rangle$, $\langle \kappa_{\text{eff},x} \rangle$
+
+| ⟨qx⟩ (W/m²) | ⟨Tx⟩ (K) | ⟨κeff,x⟩ (W/(m·K)) |
+|---|---|---|
+| -1.05 ± 0.14 | 300.5 ± 0.01 | 107.5 ± 14.2 |
+
+## RESULTS AND ANALYSIS
+
+Temperature is calculated at each spatial location $x_i$, and is a function of $C_v$, $v_g$ and $I$. Figure 4 contains the spatial distribution of temperature for the mean free path associated with each of the quadrature points.
+
+Fig. 4. Spatial temperature distributions for each $\lambda$ computed from the chosen quadrature.
+
+Figure 5 shows the mean and standard deviation of temperature at each location $x_i$ along the domain. The uncertainty in the center goes to zero due to symmetry of the problem. $\sigma_T$ is highest at the boundaries, and is a function of the acoustic thickness of the problem. As the acoustic thickness increases, the radiant sources at the boundaries shift from ballistic to diffuse scattering regimes. In an acoustically thin medium, where $L \approx \lambda$, phonons leaving a colder boundary are in the ballistic scattering regime, and propagate far across the medium to reach the hotter boundary causing the material temperature to be smaller than that associated with the prescribed incident intensity. In an acoustically thick medium, where phonons are in the diffuse scattering regime, this effect is significantly diminished. These are boundary scattering effects, and are well characterized in simulations of phonon transport [7, 13].
+
+Fig. 5. Mean and standard deviation of temperature along x-axis of silicon.
+
+Mean values of $T$, $q$ and $\kappa$ over the spatial domain along with their associated standard deviations ($\pm \max |\sigma|$) are re-
+---PAGE_BREAK---
+
+ported in Table I. We expect $\langle T_x \rangle$ to be approximately the
+midpoint average of the boundary temperatures due to the
+homogeneity of the problem.
+
+The equilibrium heat flux is constant over the spatial domain; however, the same boundary effects which influenced the temperature distribution also materialize in $q$, elevating the values of $q$ near the domain boundaries. This effect may be responsible for the inflated uncertainty in $q$, which is higher than expected. The uncertainty in $q$ propagates into the calculation of $\sigma_{\kappa}$.
+
+The value of $\langle \kappa_{\text{eff},x} \rangle$ is in good agreement with $\kappa_{\text{eff}}$ at an acoustic thickness of $3\lambda$, roughly 68% of the value of $\kappa_{\text{bulk}} = 157 \text{ W} \cdot \text{m}^{-1} \cdot \text{K}^{-1}$ given by Eq. (2). Figure 6 compares $\kappa_{\text{eff},x}$ at many acoustic thicknesses normalized to $\kappa_{\text{bulk}}$ with $\langle \kappa_{\text{eff},x} \rangle$ and its associated standard deviation. Thermal conductivity has a strong dependence on the acoustic thickness of the medium at the nanoscale; this relationship is made clear in Fig. 6.
+
+Fig. 6. $\kappa_{\text{eff}}$ normalized to $\kappa_{\text{bulk}}$ for varying acoustic thicknesses, $\langle \kappa_{\text{eff},x} \rangle$ and $\sigma_{\kappa}$.
+
+CONCLUSIONS
+
+We have shown the PCE-SC method to be effective in providing uncertainty quantification for a deterministic phonon transport simulation. This method is accurate and efficient for simulations with few uncertain variables. The phonon mean free path is our uncertain variable, and this uncertainty is projected into the solutions of the phonon transport equation which yield $\langle q \rangle$, which in turn yields $\langle \kappa_{\text{eff}} \rangle$; the uncertainty in $\langle \kappa_{\text{eff}} \rangle$ is proportional to the uncertainty in $\langle q \rangle$. The variation in $\lambda$ influences the acoustic thickness and thermal conductivity. We intend to broaden the application of PCE-SC to our heterogeneous phonon transport simulations. Parametric studies could be performed to measure the accuracy in using different orders for the Gaussian quadrature and chaos expansions, in addition to performing Monte Carlo studies to show an efficiency comparison. The deterministic sampling aspect of the PCE-SC method is very useful in simulations with a small number of uncertain parameters and is beneficial in providing
+
+UQ analysis to deterministic phonon transport simulations.
+
+ACKNOWLEDGMENTS
+
+This work was supported by Idaho National Laboratory
+and XSEDE.
+
+REFERENCES
+
+1. D. XIU and G. E. KARNIADAKIS, “The Wiener-Askey Polynomial Chaos for Stochastic Differential Equations,” *SIAM Journal of Scientific Computing*, **24**, 619–644 (2002).
+
+2. Y. WANG, H. ZHANG, and R. MARTINEAU, “Diffusion Acceleration Schemes for Self-Adjoint Angular Flux Formulation with a Void Treatment,” *Nuclear Science and Engineering*, **176**, 201–225 (2014).
+
+3. D. GASTON, C. NEWMAN, G. HANSEN, and D. LEBRUN-GRANDIE, “MOOSE: A Parallel Computational Framework for Coupled Systems of Nonlinear Equations,” *Nuclear Science and Engineering*, **239**, 1768–1778 (2009).
+
+4. J. HARTER, P. A. GREANEY, and T. PALMER, “Characterization of Thermal Conductivity using Deterministic Phonon Transport in Rattlesnake,” *Transactions of the American Nuclear Society*, **112**, 829–832 (2015).
+
+5. J. HARTER, *Predicting Thermal Conductivity in Nuclear Fuels using Rattlesnake-Based Deterministic Phonon Transport Simulations*, Master's thesis, Oregon State University (2015).
+
+6. J. ZIMAN, *Electrons and Phonons: The Theory of Transport Phenomena in Solids*, Oxford University Press (2001).
+
+7. A. MAJUMDAR, “Microscale heat conduction in dielectric thin films,” *Journal of Heat Transfer*, **115**, 7–16 (1993).
+
+8. E. FICHTL and A. PRINJA, “The stochastic collocation method for radiation transport in random media,” *Journal of Quantitative Spectroscopy & Radiative Transfer*, **112**, 646–659 (2011).
+
+9. S. DULLA, A. PRINJA, and P. RAVETTO, “Random effects on reactivity in molten salt reactors,” *Annals of Nuclear Energy*, **64**, 353–364 (2014).
+
+10. Y. SAAD and M. SCHULTZ, “GMRES: A Generalized Minimal Residual Algorithm for Solving Nonsymmetric Linear Systems,” *SIAM Journal of Scientific Computing*, **7**, 856–869 (1986).
+
+11. G. ROMANO and J. GROSSMAN, “Heat Conduction in Nanostructured Materials Predicted by Phonon Bulk Mean Free Path Distribution,” *Journal of Heat Transfer*, **137** (2015).
+
+12. P. MAREPELLI, J. MURTHY, B. QIU, and X. RUAN, “Quantifying Uncertainty in Multiscale Heat Conduction Calculations,” *Journal of Heat Transfer*, **136** (2014).
+
+13. B. YILBAS and S. BIN MANSOOR, “Phonon Transport in Two-Dimensional Silicon Thin Film: Influence of Film Width and Boundary Conditions on Temperature Distribution,” *European Physical Journal B*, **85**, 234–242 (2012).
\ No newline at end of file
diff --git a/samples/texts_merged/5874027.md b/samples/texts_merged/5874027.md
new file mode 100644
index 0000000000000000000000000000000000000000..f686b5934eefa781494722a4ff5f7fc0e60a2444
--- /dev/null
+++ b/samples/texts_merged/5874027.md
@@ -0,0 +1,366 @@
+
+---PAGE_BREAK---
+
+# MULTI-SAMPLE NONPARAMETRIC TREATMENTS COMPARISON IN MEDICAL FOLLOW-UP STUDY WITH UNEQUAL OBSERVATION PROCESSES THROUGH SIMULATION AND BLADDER TUMOUR CASE STUDY
+
+P. L. Tan¹*, N. A. Ibrahim¹, M. B. Adam¹ and J. Arasan²
+
+¹Institute for Mathematical Research (INSPEM), University Putra Malaysia, 43400 Serdang, Selangor, Malaysia
+
+²Department of Mathematics, Faculty of Science, University Putra Malaysia, 43400 Serdang, Selangor, Malaysia
+
+Published online: 10 November 2017
+
+## ABSTRACT
+
+In medical follow-up studies, disease recurrence processes evolve in continuous time and patients are usually monitored at distinct and differing intervals. Therefore, most existing methods, which assume identical observation processes, might provide misleading results in this case. To address this, a nonparametric test based on the integrated weighted difference between the mean cumulative functions, which characterize both the recurrence processes and the observation processes conditional on treatment, is proposed to allow unequal observation processes. The empirical power of the proposed test has been investigated via a Monte Carlo simulation study and a bladder tumour case study. The results are in line with earlier research; the proposed test procedure works well in practical situations and has good power in detecting treatment differences.
+
+**Keywords:** nonparametric; unequal observation; multi-sample; treatments comparison.
+
+Author Correspondence, e-mail: tan.peiling@student.upm.edu.my
+doi: http://dx.doi.org/10.4314/jfas.v9i6s.13
+---PAGE_BREAK---
+
+# 1. INTRODUCTION
+
+In medical follow-up studies, patients are usually observed at several irregular time points; the actual times of disease occurrence are unknown and only the number of occurrences between subsequent follow-ups is recorded. Such data are known as panel count data [11]. The example of panel count data given in this paper arises from a bladder tumour study conducted by the Veterans Administration Cooperative Urological Research Group [4]. All patients had a history of superficial bladder tumours; the tumours were removed and patients were randomized to one of three treatments: placebo, thiotepa or pyridoxine. These patients experienced several recurrences during the follow-up study. The number of new tumours discovered at each follow-up was recorded, and the tumours were removed at the clinical visits. Furthermore, the number of clinical visits and the observation times vary across patients. The main interest in this paper is to compare the effectiveness of different treatments in medical follow-up studies while accounting for unequal observation processes.
+
+In medicine, disease recurrence processes evolve in continuous time and patients are often monitored at distinct times and over different time intervals. In other words, the observation processes are not identically distributed. Most existing methods assume that the observation processes for the patients in different treatment groups are identically distributed [2], [3, 8-9], [16], [20]. There is limited literature on nonparametric comparison that considers unequal observation processes between treatments [7], [12], [19]. As medical follow-up data involve more than one observation time point per subject, which may vary across subjects, and the number of subjects in each treatment group may vary across treatments, the existing methods that assume identical observation processes may not be feasible in practice. To address this, a multi-sample distribution-free test based on the integrated weighted difference between mean cumulative functions, characterizing the recurrence and observation processes conditional on treatment group, is presented in this paper.
+
+# 2. FORMULATION
+
+## 2.1. Basic Notation
+
+Consider $k+1$ different treatment groups of independent subjects in a recurrent event study with total sample size $n$. Suppose only panel count data are available and observation
+---PAGE_BREAK---
+
+processes are different for the subjects from different groups. Let $n_l$ denote the number of
+subjects in the $l$th group and $s_l$ the set of indices for subjects in group $l$, where $n_1 + n_2 + ... + n_{k+1} = n$. Also let $N_{il}(t)$ denote the counting process of the total number of recurrent event
+occurrences up to time $t$ for subject $i$ in the $l$th group, with $\Lambda_l(t;Z_i) = E[N_{il}(t)|Z_i]$ the marginal
+expected number of recurrent events up to $t$ of $N_{il}(t)$ given $Z_i$, for $i \in s_l$, $l = 1, ..., k+1$, where $Z_i$
+is a group indicator associated with subject $i$.
+
+For panel count data, each subject is observed only at discrete time points, with the ordered
+distinct observation time points for subject $i$ denoted by $T_{i,1} < T_{i,2} < ... < T_{i,m_i}$
+and $m_i$ representing the total number of observation time points for subject $i$. Let $C_i$ denote
+the censoring or follow-up time of subject $i$ and $\tau$ the longest follow-up time of all subjects
+in the study. The observed data are taken to be independent and identical copies of $D_i = \{N_i, Z_i, C_i, T_{i,j}, m_i\}$,
+independent of the counting processes $N_i$. The union of all distinct observation time points is denoted by $t_1, t_2, ..., t_m$, and the
+censoring time for subject $i$ is its last observation time point in $[0, \tau]$.
+
+## 2.2. The Observation Processes
+
+Fig. 1 displays the distribution of the clinical visits for the placebo and thiotepa treatments in the bladder tumour case study. It appears that the patients in the thiotepa group have more follow-ups than the patients treated with placebo. The observation processes for the placebo and thiotepa treatments are therefore not identically distributed in this case.
+---PAGE_BREAK---
+
+**Fig.1.** Distribution of clinical visits for Placebo treatment and Thiotepa treatment
+
+To deal with the unequal observation processes between treatment groups, the number of
+observations is formulated to be proportional between groups, as this fits most event history
+analyses. An example from the bladder tumour study is shown in Fig. 2.
+
+**Fig.2.** Cumulative number of clinical observations for Placebo treatment and Thiotepa treatment
+
+The total number of observations for subject i is formulated as the model in Equation (1).
+
+$$m_i = \exp(\gamma Z_i) \quad (1)$$
+---PAGE_BREAK---
+
+where $\gamma = 0$ means the observation processes of the two groups are equal, and $\gamma \neq 0$ otherwise.
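A hedged sketch of Eq. (1): with a binary group indicator Z, the expected number of observation times scales as exp(γZ), so γ is the log-ratio of the per-group mean visit counts, and γ = 0 recovers identical observation processes. The baseline visit count below is a hypothetical value for illustration.

```python
import math

# Eq. (1): the expected number of observation times scales as exp(gamma * Z_i).
def expected_visits(gamma, z, baseline=4.0):
    """Hypothetical baseline visit count scaled by the group effect."""
    return baseline * math.exp(gamma * z)

def gamma_from_ratio(mean_group1, mean_group0):
    """gamma recovered as the log-ratio of per-group mean visit counts."""
    return math.log(mean_group1 / mean_group0)
```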
+
+## 2.3. The Recurrence Processes
+
+In order to mimic the situation tested in this paper, the recurrent events $N_{il}$ are assumed to follow mixed Poisson processes, which yield more variability than Poisson processes. The mean recurrence at time t for the lth treatment group is the proportion of the total number of recurrences observed in the lth treatment group at time t to the total number of observations at time t.
+
+$$N_i^*(t) = \frac{\sum_{i \in S_i} I(r_i(t) = 1)}{\sum_i I(O_i(t) = 1)} \quad (2)$$
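The mixed Poisson choice matters because it produces overdispersion. The simulation below uses a gamma frailty multiplying a Poisson rate, one common construction assumed here purely for illustration, and shows the marginal variance of the counts exceeding the marginal mean, unlike a plain Poisson process.

```python
import math
import random

random.seed(7)

def poisson_draw(lam):
    """Poisson sample by sequential inversion of the CDF."""
    k, p, cdf, u = 0, math.exp(-lam), math.exp(-lam), random.random()
    while u > cdf and k < 10_000:
        k += 1
        p *= lam / k
        cdf += p
    return k

def mixed_poisson_counts(n, base_rate=2.0, shape=0.5):
    """Counts with a gamma(shape, 1/shape) frailty (mean 1) scaling the rate."""
    return [poisson_draw(base_rate * random.gammavariate(shape, 1.0 / shape))
            for _ in range(n)]

counts = mixed_poisson_counts(20_000)
m = sum(counts) / len(counts)
v = sum((c - m) ** 2 for c in counts) / len(counts)
# Overdispersion: theoretical mean is 2, marginal variance 2 + 2^2 * Var(frailty) = 10.
```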
+
+The mean function of recurrent events occurring up to time t for subject i, conditional on treatment group $Z_i$, has the form of Equation (3), which is similar to the function given in [6].
+
+$$E\left[\int_0^\tau \frac{N_i(t)dN_i^*}{\exp(\gamma Z_i)} | Z_i\right] = \int_0^\tau \mu(t)S(t)\lambda_0(t)dt \quad (3)$$
+
+where $\lambda_0(t)$ is a known baseline mean, $\mu(t)$ is the common mean of $N_i(t)$, and $\gamma$ is a parameter representing the difference between the two groups.
+
+The cumulative mean number of recurrences for the $l$th treatment group can be written as in Equation (4).
+
+$$\hat{N}_l(t) = \sum_{i \in S_l} \int_0^t \frac{N_i(s)dN_i^*(s)}{\exp(\gamma Z_i)} \quad (4)$$
+
+The mean cumulative function given treatment group is accumulated over the proportion of the product of the number of patients at risk in the $l$th treatment group and the mean tumour recurrence to the size of the risk set observed at time $t_j$, $Y(t_j)$, as written in Equation (5).
+
+$$\hat{\Lambda}_i(t, Z_i) = \int_0^t \frac{Y_i(s)d\tilde{N}_i(s)}{Y(s)} \quad (5)$$
+
+where $Y(t_j) = \sum_i I(t_j \le C_i)$ denotes the size of the risk set just prior to time $t_j$.
+
+## 2.4. The Test Statistics
+
+The proposed test statistic has the form of an integrated weighted difference between the group-specific mean and the overall mean, as given in Equation (6).
+---PAGE_BREAK---
+
+$$ \phi(\hat{\gamma}) = \frac{1}{\sqrt{n}} \sum_{i=1}^{n} (Z_i - \bar{Z}) \int_0^\tau W_i(t) d\{\hat{\Lambda}_i(t) - \hat{\Lambda}_0(t)\} \quad (6) $$
+
+Following [6-7], $\gamma$ can be estimated by solving the partial likelihood score equation in Equation (7)
+
+$$ U(\gamma) = \frac{1}{\sqrt{n}} \sum_{i=1}^{n} \int_{0}^{\tau} \left\{ Z_i - \frac{S^{(1)}(t, \gamma)}{S^{(0)}(t, \gamma)} \right\} d\tilde{N}_i(t) \quad (7) $$
+
+where
+
+$$ S^{(0)}(t, \gamma) = \frac{1}{n} \sum_{i=1}^{n} I(t \le C_i) \exp(\gamma Z_i) \quad (8) $$
+
+and
+
+$$ S^{(1)}(t, \gamma) = \frac{\partial S^{(0)}(t, \gamma)}{\partial \gamma}. \quad (9) $$
+
+The null hypothesis can be tested with the statistic $T^* = \phi(\hat{\gamma})V^{-1}(\hat{\gamma})\phi(\hat{\gamma})'$, whose null distribution can be approximated by a chi-square distribution with $k$ degrees of freedom. $\phi(\hat{\gamma})$ is given in Equation (10).
+
+$$ \phi(\hat{\gamma}) = \sum_{i} \phi_i(\hat{\gamma}) \qquad (10) $$
+
+It was shown in [5] that $\phi(\hat{\gamma})$ is asymptotically normal with mean 0, and the variance can be consistently estimated by Equation (11).
+
+$$ V(\hat{\gamma}) = H(\hat{\gamma})\Gamma(\hat{\gamma})H(\hat{\gamma})' \quad (11) $$
+
+where
+
+$$ H(\hat{\gamma}) = \begin{pmatrix} I, & A(\hat{\gamma}) \\ 0, & B(\hat{\gamma}) \end{pmatrix}, I \text{ is identity matrix} \quad (12) $$
+
+and
+
+$$ \Gamma(\hat{\gamma}) = \frac{1}{n} \sum_{i=1}^{n} \int_{0}^{\tau} \left[ \begin{array}{c} \hat{a}_{i} + \hat{\alpha}_{i} \\ \hat{b}_{i} \end{array} \right] \left[ \begin{array}{c} \hat{a}_{i}' + \hat{\alpha}_{i}' \\ \hat{b}_{i}' \end{array} \right]' . \quad (13) $$
+
+Let $A(\hat{\gamma}) = \lim_{n \to \infty} \sum_i A_i(\hat{\gamma})$ and $B(\hat{\gamma}) = \lim_{n \to \infty} \sum_i B_i(\hat{\gamma})$,
+
+$$ A_i(\gamma) = \frac{\partial \phi_i(\gamma)}{\partial \gamma} \qquad (14) $$
+
+and
+---PAGE_BREAK---
+
+$$B(\gamma) = \frac{\partial U(\gamma)}{\partial \gamma}. \quad (15)$$
+
+Also, $\hat{a}_i$, $\hat{b}_i$ and $\hat{\alpha}_i$ are given in Equations (16)-(18).
+
+$$\hat{a}_i = \int_0^\tau (Z_i - \bar{Z}) W(t) d\hat{\Lambda}_i(t) \quad (16)$$
+
+$$\hat{b}_i = \int_0^\tau \left\{ Z_i - \frac{S^{(1)}(t, \hat{\gamma})}{S^{(0)}(t, \hat{\gamma})} \right\} \left\{ dN_i(t) - I(t \le T_i) \exp(\hat{\gamma} Z_i) d\hat{\Lambda}_0(t) \right\} \quad (17)$$
+
+and
+
+$$\hat{\alpha}_i = \int_0^\tau \left\{ \frac{R(t)}{S^{(0)}(t, \hat{\gamma})} \right\} \left\{ dN_i(t) - I(t \le T_i) \exp(\hat{\gamma} Z_i) d\hat{\Lambda}_0(t) \right\} \quad (18)$$
+
+where
+
+$$R(t) = n^{-1} \sum_{i=1}^{n} (Z_i - \bar{Z}) \int_0^t \frac{N_i(s) dN_i^*(s)}{\exp(\hat{\gamma} Z_i)}. \quad (19)$$
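Once $\phi(\hat{\gamma})$ and $V(\hat{\gamma})$ are computed, the decision rule for $T^*$ is a standard chi-square quadratic-form test. A minimal sketch for the one-dimensional case $k = 1$, where the chi-square(1) survival function has the closed form $\mathrm{erfc}(\sqrt{t/2})$; the input numbers below are made up, not taken from the paper:

```python
import math

def chi2_sf_1df(t):
    """Survival function of a chi-square with 1 df: P(X > t) = erfc(sqrt(t/2))."""
    return math.erfc(math.sqrt(t / 2.0))

def one_dim_test(phi, V, alpha=0.05):
    """T* = phi * V^{-1} * phi for k = 1, compared against chi-square(1)."""
    T = phi * phi / V
    p = chi2_sf_1df(T)
    return T, p, p < alpha

# Hypothetical values of phi and V, chosen to land near the bladder-tumour
# statistics reported later in the paper
T, p, reject = one_dim_test(phi=1.87, V=1.0)
print(round(T, 4), round(p, 4), reject)
```

For $k > 1$ the same quadratic form would be evaluated with a matrix inverse and the general chi-square survival function.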
+
+## 3. SIMULATION STUDY
+
+The Monte Carlo simulation study is conducted with $k = 1$, conditional on the treatment group covariate $Z_i$, where $Z_i = 0$ for $i$ in $s_1$ (group 1) and $Z_i = 1$ for $i$ in $s_2$ (group 2). All of the results are based on 5000 replications at a significance level of $\alpha = 0.05$. The computations were carried out with R functions written in version 3.2.5 of the R statistical software.
+
+The number of observations for subject $i$, $m_i$, is generated based on $m_i = \exp(\gamma Z_i)$, where $\gamma = 0$ means the observation processes of the treatment groups are equal. For unequal observation processes, $\gamma = 0.1, 0.2, 0.3$. Given $m_i$, the follow-up times $T_{i1}, T_{i2}, ..., T_{im_i}$ for subject $i$ are sampled from a uniform distribution over $(0, \tau)$ with $\tau = 10$ and $\tau = 20$, and the censoring time of subject $i$, $C_i$, is the last follow-up time of subject $i$, $T_{im_i}$. Then $t_1, t_2, ..., t_m$ are the unique order statistics of all follow-up times. The panel count data $N_i$'s are generated based on
+
+$$N_i(T_{i,j}) = N_i(T_{i,1}) + \{N_i(T_{i,2}) - N_i(T_{i,1})\} + \dots + \{N_i(T_{i,j}) - N_i(T_{i,j-1})\}$$
+
+and
+---PAGE_BREAK---
+
+$$N_i(T_{i,j}) - N_i(T_{i,j-1}) \sim \text{Poisson}\left(v_i \lambda_0 (T_{i,j} - T_{i,j-1}) \exp(\beta Z_i)\right)$$
+
+where $\lambda_0(t) = 1, \beta = 0.1, 0.2, 0.3$.
+
+For the mixed Poisson processes, the $v_i$'s are generated from a Gamma distribution with shape parameter 2 and scale parameter 0.5. For illustration, the data generated with $\gamma = 0.2, \beta = 0.2$ and $\tau = 10$ are shown in Fig. 3.
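The generation scheme above can be sketched as follows. This is an illustrative Python reimplementation rather than the authors' R code; in particular, the base number of visits `m0` is an assumed parameter, with the paper's $m_i = \exp(\gamma Z_i)$ read as a multiplicative treatment effect on it:

```python
import math
import random

def poisson_sample(mean, rng):
    """Knuth's multiplicative Poisson sampler; adequate for the small means used here."""
    limit = math.exp(-mean)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

def simulate_subject(z, gamma=0.2, beta=0.2, tau=10.0, lam0=1.0, m0=6, rng=None):
    """One subject's panel counts: uniform visit times, gamma frailty, Poisson increments.

    `m0` is an assumed base visit count (not from the paper); m_i = exp(gamma * z)
    acts as a multiplier on it.
    """
    rng = rng or random.Random()
    m = max(1, round(m0 * math.exp(gamma * z)))   # number of observation times
    times = sorted(rng.uniform(0.0, tau) for _ in range(m))
    v = rng.gammavariate(2.0, 0.5)                # frailty: shape 2, scale 0.5 (mean 1)
    counts, total, prev = [], 0, 0.0
    for t in times:
        total += poisson_sample(v * lam0 * (t - prev) * math.exp(beta * z), rng)
        counts.append(total)                      # cumulative count N_i(T_ij)
        prev = t
    return times, counts                          # censoring time C_i = times[-1]
```

The frailty has mean $2 \times 0.5 = 1$, so the marginal mean matches the plain Poisson case while the variance is inflated, which is exactly what the mixed Poisson construction is for.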
+
+**Fig.3.** Step chart of the simulated data for cumulative mean recurrences
+
+The test's performance is investigated through its empirical power, that is, the percentage of
+replications in which the test rejects the false null hypothesis. The null hypothesis of no difference between
+the mean cumulative functions of the treatments is rejected if the p-value is less than 0.05. The asymptotic
+approximation of the test in Equation (10) is checked through the plot of the standardized test
+statistic against its theoretical quantiles, which is shown in Fig. 4.
+---PAGE_BREAK---
+
+**Fig.4.** Quantile plot of standardized test statistics with $n_1=n_2=50$ and $\gamma=\beta=0.2$
+
+The asymptotic approximation of the test for $W_n^{(1)}$ with $n = 100$, given in Fig. 4, is quite good.
+
+Similar plots are obtained for the other tested situations. The asymptotic approximation of the test statistic gets closer to the normal distribution as the sample size increases.
+
+Table 1 presents the power of the proposed test for $\tau = 10$ and $\tau = 20$. The power of the test procedure increases as the sample sizes increase. Similar results are obtained when the length of the follow-up period is increased from $\tau = 10$ to $\tau = 20$. Overall, the proposed test gives good power to detect treatment differences under the tested situations. It was shown in [12] that the test worked well even when the sample sizes were imbalanced between the two treatment groups.
+
+Table 1. The empirical power for the proposed test
+
+| $\gamma$ | $n_1$ | $n_2$ | $\beta=0.1$ ($\tau=10$) | $\beta=0.2$ ($\tau=10$) | $\beta=0.3$ ($\tau=10$) | $\beta=0.1$ ($\tau=20$) | $\beta=0.2$ ($\tau=20$) | $\beta=0.3$ ($\tau=20$) |
+|---|---|---|---|---|---|---|---|---|
+| 0 | 10 | 10 | 0.9584 | 0.9648 | 0.9662 | 0.8970 | 0.8606 | 0.9130 |
+| | 15 | 15 | 0.9818 | 0.9830 | 0.9800 | 0.9862 | 0.9832 | 0.9758 |
+| | 30 | 30 | 0.9866 | 0.9890 | 0.9818 | 0.9950 | 0.9940 | 0.9938 |
+| | 50 | 50 | 0.9986 | 0.9994 | 0.9986 | 0.9996 | 0.9996 | 0.9998 |
+| 0.1 | 10 | 10 | 0.9650 | 0.9710 | 0.9666 | 0.9596 | 0.9324 | 0.9624 |
+| | 15 | 15 | 0.9858 | 0.9878 | 0.9848 | 0.9868 | 0.9848 | 0.9798 |
+| | 30 | 30 | 0.9920 | 0.9880 | 0.9866 | 0.9980 | 0.9982 | 0.9952 |
+| | 50 | 50 | 0.9996 | 0.9984 | 0.9990 | 0.9984 | 0.9992 | 0.9976 |
+| 0.2 | 10 | 10 | 0.9608 | 0.9712 | 0.9560 | 0.9302 | 0.9306 | 0.9308 |
+| | 15 | 15 | 0.9778 | 0.9716 | 0.9576 | 0.9942 | 0.9920 | 0.9864 |
+| | 30 | 30 | 0.9932 | 0.9912 | 0.9798 | 0.9958 | 0.9952 | 0.9874 |
+| | 50 | 50 | 0.9988 | 0.9988 | 0.9986 | 0.9986 | 0.9996 | 0.9986 |
+| 0.3 | 10 | 10 | 0.9718 | 0.9604 | 0.9625 | 0.9482 | 0.9154 | 0.9136 |
+| | 15 | 15 | 0.9810 | 0.9782 | 0.9716 | 0.9888 | 0.9830 | 0.9812 |
+| | 30 | 30 | 0.9942 | 0.9922 | 0.9814 | 0.9944 | 0.9882 | 0.9888 |
+| | 50 | 50 | 0.9992 | 0.9996 | 0.9994 | 0.9972 | 0.9978 | 0.9978 |
+
+## 4. BLADDER TUMOUR STUDY
+
+The nonparametric test described in the previous sections is illustrated using data from the Veterans Administration Co-operative Urological Research Group (VACURG), presented in [1]. The original data consist of patients with a history of superficial bladder tumours treated with placebo, thiotepa and pyridoxine. The third treatment, pyridoxine, was not included in the first part of the data analysis as it did not have a significant effect in reducing the recurrence of bladder tumours, as discussed in [4, 10]. However, the results of the multi-sample comparison are shown in Table 2 for comparison with existing nonparametric methods [3, 16, 19].
+
+The data consist of 85 patients, with 47 patients assigned to the placebo group and 38 patients to the thiotepa group. The observed data include the follow-up times and the numbers of recurrent tumours during the follow-up study, as well as additional baseline covariates on the size of the largest initial tumour and the number of initial tumours. The initial tumours were removed before patients entered the 53 months of follow-up. The multiple recurrences of tumours during the study were recorded.
+---PAGE_BREAK---
+
+Fig. 5 shows the mean cumulative functions of occurrence of the bladder tumours for both treatment groups. There appears to be little difference in early follow-up, but over time the curves separate and remain roughly proportional to each other. The patients treated with placebo have more recurrences than those treated with thiotepa, so the thiotepa treatment seems effective in reducing the recurrence of bladder tumours. The main interest is therefore to test whether the treatment difference is statistically significant.
+
+**Fig.5.** Mean cumulative number of recurrent tumours
+
+Let $Z = 0$ for patients treated with placebo and $Z = 1$ for patients treated with thiotepa. The proposed test was carried out under the different weight processes described in Equations (20)-(22).
+
+$$W_n^{(1)}(t) = 1 \qquad (20)$$
+
+$$W_n^{(2)}(t) = n^{-1} \sum_{i=1}^{n} I(t \le t_{i,m}) \qquad (21)$$
+
+$$W_n^{(3)} = 1 - W_n^{(2)} \quad (22)$$
+
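The three weight processes can be evaluated directly from the subjects' last observation times $t_{i,m}$; a small sketch, where the list `last_times` is a hypothetical container for those times:

```python
def w1(t, last_times):
    """Constant weight W_n^(1)(t) = 1."""
    return 1.0

def w2(t, last_times):
    """W_n^(2)(t): fraction of subjects still under observation at time t."""
    return sum(1 for tm in last_times if t <= tm) / len(last_times)

def w3(t, last_times):
    """W_n^(3)(t) = 1 - W_n^(2)(t): puts more weight on late follow-up."""
    return 1.0 - w2(t, last_times)

last = [5.0, 8.0, 10.0, 12.0]          # hypothetical last observation times
print(w2(9.0, last), w3(9.0, last))    # 2 of 4 subjects remain at t = 9
```

$W_n^{(2)}$ thus down-weights late times where few subjects remain, while $W_n^{(3)}$ emphasises differences that emerge late in follow-up.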
+Based on the weight processes $W_n^{(1)}$, $W_n^{(2)}$ and $W_n^{(3)}$, the proposed test yielded $T^*=3.5028$, 3.3064 and 3.7795 with p-values of 0.0613, 0.069 and 0.0519 respectively. The proposed test rejects the null hypothesis at the 10% level of significance. This indicates that the mean recurrence of bladder tumours differs significantly across treatment groups. The
+---PAGE_BREAK---
+
+proposed test reaches the same conclusion as discussed in [4, 10], where the treatment differences
+are statistically significant.
+
+Table 2 compares the proposed method with existing nonparametric methods: [3] is based on the nonparametric maximum likelihood estimator (NPMLE) and [16] on the nonparametric maximum pseudolikelihood estimator (NPMPLE), both of which assume identical observation processes, while [19] is based on the isotonic regression estimator (IRE), which allows unequal observation processes across treatment groups.
+
+**Table 2.** P-values comparison for Multi-sample test on bladder tumour data
+
+
+
+
+| Test | | $W_n^{(1)}$ | $W_n^{(2)}$ | $W_n^{(3)}$ |
+|---|---|---|---|---|
+| Proposed test | Test statistics | 6.6215 | 6.0534 | 5.3456 |
+| | p-values | 0.0365 | 0.0485 | 0.0691 |
+| [3] | Test statistics | 3.617, 3.269 | 1196123, 300179 | 489000, 121908 |
+| | p-values | 0.164, 0.195 | $<10^{-8}$ | $<10^{-8}$ |
+| [16] | Test statistics | 4.9281 | 3.8682 | 4.9527 |
+| | p-values | 0.0851 | 0.1445 | 0.0840 |
+| [19] | Test statistics | 5.2805 | 0.0379 | 21.7701 |
+| | p-values | 0.0713 | 0.9812 | 0.00002 |
+
+
+
+
+The test results of [3] based on the NPMLE are more significant than the other methods with $W_n^{(2)}$ and $W_n^{(3)}$, while the unweighted test fails to detect the treatment difference. On the other hand, the tests of [16, 19] fail to reject the null hypothesis with $W_n^{(2)}$. This may be because the isotonic regression estimates of the mean functions cross during the early to middle follow-up time. [19] showed that the treatments are significantly different in the late follow-up period at the 5% level of significance. [16] suggested that the treatment differences are significant at the 10% level of significance with weight processes $W_n^{(1)}$ and $W_n^{(3)}$.
+
+It appears that the proposed test is more effective than the existing tests in detecting departures from the null hypothesis with all three weight processes. The results also show that the test in [16], which assumes the observation processes are independent and identical across treatment groups, is less significant than the methods that allow unequal observation processes. In the presence of different observation processes, tests that assume the
+---PAGE_BREAK---
+
+observation processes are identical across treatment groups might provide misleading results.
+However, the weighted test of [3] gives significant results because the estimator it uses is
+more efficient than the other estimators, as shown by [13]. Thus, one should choose the right test
+with a proper weight process, as most of the existing nonparametric comparison procedures are
+applicable to pre-scheduled observations, where the observation processes across treatments
+are identical.
+
+## 5. CONCLUSION
+
+This paper discussed a distribution-free test for comparing treatment efficacy in medical follow-up studies when the observation processes differ across treatments. Based on the simulation study and the bladder tumour case study, the proposed test works well for the situations considered here. Most existing nonparametric tests for panel count data assume identical observation processes between treatments [2-3, 8-9, 16, 20]. In reality, this assumption might not hold, as shown in the bladder tumour case study, and might lead to misleading results. Thus, one should carefully choose a test procedure based on the situation at hand.
+
+There exist limited studies on nonparametric tests, and much further work remains to be done.
+The proposed test concerns univariate nonparametric comparisons with a time-independent
+covariate. One might consider the case of time-dependent covariates. Also, the
+proposed test depends on the assumption of independent censoring; in other words, the
+censoring processes are independent of the observation processes and the recurrence processes.
+Furthermore, researchers might be interested in studying treatment differences for bivariate or
+multivariate cases in future work, or in informative censoring as in [17-18].
+
+## 6. ACKNOWLEDGEMENTS
+
+The authors are grateful to the editor and referees for their careful reading of the paper. The research was supported by the Malaysia Ministry of Higher Education fund MyBrain15 and Putra grant (No. 9483600).
+---PAGE_BREAK---
+
+## 7. REFERENCES
+
+[1] Andrews D. F., Herzberg A. M. Data: A collection of problems from many fields for the student and research worker. New York: Springer Science and Business Media, 2012
+
+[2] Balakrishnan N, Zhao X. New multi-sample nonparametric tests for panel count data. The Annals of Statistics, 2009, 37(3):1112-1149
+
+[3] Balakrishnan N, Zhao X. A Nonparametric test for the equality of counting process with panel count data. Computational Statistics and Data Analysis, 2010, 54(1):135-142
+
+[4] Byar D P, Blackard C, Veterans Administration Cooperative Urological Research Group. Comparisons of placebo, pyridoxine, and topical thiotepa in preventing recurrence of stage I bladder cancer. Urology, 1977, 10(6):556-561
+
+[5] Li Y, Suchy A, Sun J. Nonparametric treatment comparison for current status data. Journal of Biostatistics, 2010, 1(102):
+
+[6] Li Y, Zhao H, Sun J, Kim K. Nonparametric tests for panel count data with unequal observation processes. Computational Statistics and Data Analysis, 2014, 73:103-111
+
+[7] Lin D Y, Wei L J, Yang I, Ying Z. Semiparametric regression for the mean and rate functions of recurrent events. Journal of Royal Statistical Society Ser B, 2000, 62(4):711-730
+
+[8] Park D H, Sun J, Zhao X. A class of two-sample nonparametric tests for panel count data. Communication in Statistics: Theory Methods, 2007, 36(8):1611-1625
+
+[9] Sun J, Fang H. B. A nonparametric test for panel count data. Biometrika, 2003, 90(1):199-208
+
+[10] Sun J, Wei L J. Regression analysis of panel count data with covariate-dependent observation and censoring times. Journal Royal Statistical Society: Series B (Statistical Methodology), 2000, 62(2):293-302
+
+[11] Sun J., Zhao X. The statistical analysis of panel count data. New York: Springer, 2013
+
+[12] Tan P L, Ibrahim N A, Arasan J, Adam M B. Nonparametric treatments comparison for panel count data with application in medical follow-up study. In International Conference on Computing Engineering and Mathematical Sciences, 2017, pp. 18-22
+
+[13] Wellner J A, Zhang Y. Two estimators of the mean of a counting process with panel count data. The Annals of Statistics, 2000, 28:779-814
+
+[14] Wellner J A, Zhang Y. Two likelihood-based semiparametric estimation methods for
+---PAGE_BREAK---
+
+panel count data with covariates. The Annals of Statistics, 2007, 35(5):2106-2142
+
+[15] Zhang Y. A semiparametric pseudolikelihood estimation method for panel count data.
+Biometrika, 2002, 89(1):39-48
+
+[16] Zhang Y. Nonparametric K-sample test with panel count data. Biometrika, 2006, 93(4):777-790
+
+[17] Zhao H, Li Y, Sun J. Analyzing panel count data with dependent observation process and terminal event. Canadian Journal of Statistics, 2013, 41(1):174-191
+
+[18] Zhao H, Li Y, Sun J. Semiparametric analysis of multivariate panel count data with dependent observation process and terminal event. Journal of Nonparametric Statistics, 2013, 25(2):379-394
+
+[19] Zhao X, Sun J. Nonparametric comparison for panel count data with unequal observation processes. Biometrics, 2011, 67(3):770-779
+
+[20] Zhao H, Virkler K, Sun J. Nonparametric comparison for multivariate panel count data.
+Communications in Statistics-Theory and Methods, 2014, 43(3):644-655
+
+**How to cite this article:**
+
+Tan P L, Ibrahim N A, Adam M B, Arasan J. Multi-sample nonparametric treatments comparison in medical follow-up study with unequal observation processes through simulation and bladder tumour case study. J. Fundam. Appl. Sci., 2017, 9(6S), 147-161.
\ No newline at end of file
diff --git a/samples/texts_merged/5940089.md b/samples/texts_merged/5940089.md
new file mode 100644
index 0000000000000000000000000000000000000000..a1f1905852dc6bdcb4e84ce2b1b969e72dd019bc
--- /dev/null
+++ b/samples/texts_merged/5940089.md
@@ -0,0 +1,321 @@
+
+---PAGE_BREAK---
+
+Design and Verification of a High Performance LED Driver with an Efficient Current Sensing Architecture*
+
+Jin He¹,², Wing Yan Leung³, Tsz Yin Man³, Lin He¹, Hailang Liang¹, Aixi Zhang¹,³,
+Qingxing He², Caixia Du², Xiaomeng He², Mansun Chan³
+
+¹Peking University, Shenzhen SOC Key Laboratory, PKU HKUST Shenzhen Institution,
+Shenzhen Institute of Peking University, Shenzhen, China
+²Shenzhen Huayue Terascale Chip Ltd. Co., Shenzhen, China
+³Department of Electrical and Electronic Engineering, The Hong Kong University of Science
+and Technology, Clear Water Bay, Kowloon, China
+Email: frankhe@pku.edu.cn
+
+Received January 10, 2013; revised February 10, 2013; accepted February 17, 2013
+
+Copyright © 2013 Jin He et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
+
+ABSTRACT
+
+A high power buck-boost switch-mode LED driver delivering a constant 350 mA with a power efficient current sensing scheme is presented in this paper. The LED current is extracted by differentiating the output capacitor voltage and maintained by a feedback. The circuit has been fabricated in a standard 0.35 µm AMS CMOS process. Measurement results demonstrated a power-conversion efficiency over 90% with a line regulation of 8%/V for input voltage of 3.3 V and current output between 200 mA and 350 mA.
+
+**Keywords:** LED Driver; Current Sensing; Capacitor Voltage; Feedback; Circuit
+
+## 1. Introduction
+
+High-power LED has been widely used as flashlight
+for camera phones and electric torch for night vision. The
+most commonly used high-power LED is driven at 350
+mA, and LED manufacturers are constantly working on
+driving LED at higher output current, so that it can pro-
+vide sufficient light output for broader lighting appli-
+cations [1]. This brings about the need for high-power
+LED driver that can deliver and regulate LED-current.
+Switch Mode Power Converters (SMPCs) are common
+for high-power LED driver as they can deliver high out-
+put-current at high power-conversion efficiency. How-
+ever, SMPCs are usually output-voltage-regulated, and
+thus the output current has to be converted to a voltage
+before it can be regulated. Commercial products use a
+sensing resistor placed in series with the high-power
+LED to sense the current directly and convert it to a
+feedback voltage [2-7]. For applications like flashlights
+
+in camera phones or torches where only one high-power
+LED is used, power dissipation for the sensing circuit
+can take up more than 20% of the total power delivered
+[4]. This seriously compromises power-conversion effi-
+ciency and shortens battery-life.
+
+Li-ion batteries, with a voltage range from 2.7 V to
+4.2 V, are commonly used as the power source for handheld
+devices. A high-power LED driver has to step up or step
+down this supply voltage to drive a 350 mA high-power
+LED of forward voltage ranging from 3.4 V to 3.7 V.
+This makes buck-boost converter one of the most suit-
+able candidates. In this paper, an LED-current sensing
+circuit that is suitable for buck-boost and boost converter
+is presented. In the following sections, the operation
+principle and implementation of the proposed LED-cur-
+rent sensing circuit will be presented together with the
+measurement results.
+
+## 2. Design of Power Efficient Current Sensing Scheme
+
+The most common approach [2-7] for sensing the LED
+current is illustrated in Figure 1 with a sensing resistor in
+series with the LED. In such approach, the power dissi-
+pation in the sensing circuit is given by
+
+¹This is an extended version of “A High-Power-LED Driver with Power-Efficient LED-Current Sensing Circuit”, Proceeding of Solid-State Circuits Conference, 2008, ESSCIRC 2008, 34th European, pp. 354-357, DOI:10.1109/ESSCIRC.2008.4681865.
+More content refers to Wing Yan Leung's graduated thesis from Department of Electronic and Computer Engineering, Hong Kong University of Science and Technology
+---PAGE_BREAK---
+
+Figure 1. Common LED-current sensing scheme with a current sensing resistor.
+
+$$P_{\text{sense}} = I_{\text{LED}}^2 R_{\text{sense}} \quad (1)$$
+
+The typical reference voltage used is between 110 mV and 1.23 V [1-7], so that $R_{\text{sense}}$ ranges from 0.314 Ω to 3.5 Ω. The power consumption can then be estimated to be in the range of 38.5 mW to 430 mW, which is relatively high. At the same time, the power consumption of the sensing circuit increases with the square of the output current, as given in Equation (1), which is undesirable for high-current circuits.
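These figures follow directly from Ohm's law at $I_{LED}$ = 350 mA; a quick numeric check of Equation (1):

```python
def sense_resistor_loss(v_ref, i_led=0.350):
    """R_sense = V_ref / I_LED, and its dissipation P = I^2 * R (Equation (1))."""
    r_sense = v_ref / i_led
    p_sense = i_led ** 2 * r_sense
    return r_sense, p_sense

# The two reference-voltage extremes quoted in the text
for v_ref in (0.110, 1.23):
    r, p = sense_resistor_loss(v_ref)
    print(f"Vref = {v_ref} V -> Rsense = {r:.3f} ohm, Psense = {p * 1e3:.1f} mW")
```

Note that $I^2 R_{\text{sense}} = I \cdot V_{\text{ref}}$ here, so the loss scales linearly with the chosen reference voltage at a fixed LED current.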
+
+The output current of a buck-boost converter, however, can be directly extracted from the capacitor voltage at the output [8]. Figure 2(a) illustrates a buck-boost converter driving a high-power LED and the waveforms at a few important nodes are sketched in Figure 2(b). In the time interval $0 < t \le (1-D)T$ (the “reset” state), switches SP1 and SN1 are open and switches SP2 and SN2 are closed. The driver is detached from the voltage source $V_{in}$. Energy stored in inductor is delivered to output capacitor CO and the high-power LED. In the time interval $(1-D)T < t \le T$ (the “set state”), switches SP1 and SN1 are closed and switches SP2 and SN2 are open. Current flows from voltage source $V_{in}$ to inductor L where the energy is stored. Meanwhile, the high power LED is disconnected from the inductor and output-current is solely provided by output capacitor $C_O$. Therefore, current flowing out of output capacitor is equal to the LED-current in this time period, and this LED-current is equal to the slope of decreasing $V_O$ based on equation (2).
+
+$$I_{CO} = C_O \frac{dV_O}{dt} \qquad (2)$$
+
+By differentiating the output voltage, the current information can be obtained without a sensing resistor network, achieving significant power savings.
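As a numerical illustration of Equation (2): during the "set" interval the LED current equals $-C_O$ times the slope of $V_O$, since the capacitor alone feeds the LED. The component and sample values below are illustrative, not taken from the measured design:

```python
def led_current_from_slope(v_start, v_end, dt, c_out):
    """I_LED = -C_O * dV_O/dt while only the output capacitor supplies the LED."""
    return -c_out * (v_end - v_start) / dt

# Illustrative values: an assumed 10 uF output capacitor sagging 35 mV over 1 us
i_est = led_current_from_slope(3.500, 3.465, 1e-6, 10e-6)
print(round(i_est, 6))
```

Two voltage samples per switching cycle are thus enough, in principle, to recover the 350 mA load current without dissipating anything in a sense resistor.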
+
+## 3. Circuit Implementation
+
+The overall system of a Buck-boost LED driver with proposed LED-current sensing circuit is shown in Figure 3. The proposed LED driver makes use of leading-edge
+
+Figure 2. (a) The current path of a buck-boost High-power LED driver at different phase in a period and (b) sketches of waveforms at some important nodes.
+
+modulation in PWM control and is voltage-programmed. Power transistors $S_{P1}$, $S_{N1}$, $S_{P2}$ and $S_{N2}$, inductor L and output capacitor $C_O$ form the power stage of the buck-boost LED driver. Resistors $R_{fb1}$ and $R_{fb2}$ form the resistive feedback network that feeds a portion of the output voltage back to the system control. A dead-time control block produces a delay between the switching of $S_{P1}$, $S_{N1}$ and $S_{P2}$, $S_{N2}$ during each switching cycle to prevent shoot-through current in the power stage. There are also drivers to turn the power transistors on and off, as they are very large in size. An oscillator OSC generates the ramp and clock signals for both PWM control blocks. A dimming control block is implemented to allow PWM dimming of the LED. There is also a start-up current-limiting block to limit the power transistor and inductor current during system start-up, and a shut-down current-limiting block to perform a similar function when the system is under dimming operation.
+
+### 3.1. LED-Current Sensing Circuit
+
+System diagram of the proposed LED-current sensing circuit is shown in Figure 4. The first amplifier, $R_{diff}$ and $C_{diff}$ form the differentiator to generate $V_{diff}$. Sampling
+---PAGE_BREAK---
+
+Figure 3. The system diagram of the proposed buck-boost LED driver.
+
+Figure 4. The system diagram of proposed LED-current sensing circuit.
+
+switch and capacitor form the sample-and-hold circuit that samples the output of the differentiator during the ‘set’ period and holds it for the remaining time. The error amplifier then compares the sampled voltage $V_{\text{samp}}$ with the reference voltage $V_{\text{ref}}$ to regulate the LED-current.
+
+To simplify the analysis, the sampling switch and voltage buffer are assumed to be ideal. The gain of error amplifier is assumed to be $G_{ma}R_{\alpha}$ and the current drawn by feedback resistor is negligible. The sample voltage of the proposed LED-current sensing circuit during $(1-D)T < t < T$ is given by:
+
+$$ \frac{\Delta V_{sample}}{\Delta I_{LED}} = \left( \frac{R_2}{R_1 + R_2} \right) \left( \frac{G_{ma} R_{\alpha}}{1 + G_{ma} R_{\alpha}} \right) \frac{R_O}{(1 + sC_O R_O)} \cdot \frac{s C_{diff} R_{diff}}{\left( 1 + s \frac{C_{diff} R_{diff}}{1 + G_{ma} R_{\alpha}} \right) (1 + s R_{\alpha} C_{\alpha})} \quad (3) $$
+
+From Equation (3), the relationship that the sampled DC voltage is proportional to the slope of the capacitor voltage only holds at relatively high frequency, such that the "1" in the denominator of (3) becomes negligible. Also, from the pole $1+s\frac{C_{diff}R_{diff}}{1+G_{ma}R_{\alpha}}$, we can see that the bandwidth of the proposed sensing circuit is limited by the differentiating capacitor, the resistor and the gain of the error amplifier. The Bode plot of the proposed LED-current sensing circuit is shown in Figure 5. Such a transfer characteristic between $I_{LED}$ and $V_{samp}$ implies that the DC value and slow changes in LED-current are completely filtered out from the system control loop by the differentiator, and thus are not regulated. Intuitively it seems that $I_{LED}$ is not regulated. However, the information of the LED-current actually appears in two frequency ranges: 1) the DC value of $I_{LED}R_O$, which is filtered out by the proposed LED-current sensing circuit, and 2) the differentiated value of the capacitor voltage over time during each switching cycle. It is the second frequency range that is of interest, because we are not regulating $V_O$; rather, the slope of $V_O$ gives the LED-current. The output capacitor has to be chosen carefully such that the cut-off frequency of the LED-current sensing circuit is not too low; otherwise, the sampled voltage $V_{samp}$ may become output-voltage dependent.
+
+The behavior of the differentiator can be studied by performing Fourier analysis on the output voltage during $(1-D)T < t \leq T$ which is assumed to be triangular with the form
+
+$$ V_O(t) = \frac{I_{LED}}{C_O} DT - \frac{I_{LED}}{C_O} t \quad (4) $$
+
+The result from the Fourier analysis (on the full triangular wave) is given by
+---PAGE_BREAK---
+
+$$
+\begin{align}
+V_O(f) &= \int_{-\infty}^{\infty} \frac{I_{LED}}{C_O} DT \cdot \mathrm{tri}(t) \cdot e^{-j2\pi f t} dt \\
+&= \frac{I_{LED}}{C_O} DT \left[ \pi \delta(f) + \sum_{k \neq 0} \mathrm{sinc}^2(k f_0) \cdot \delta(f - k f_0) \right] \tag{5}
+\end{align}
+$$
+
+Including all harmonics of the fundamental frequency of the output voltage would require a differentiator with unlimited bandwidth, which is not practical. In general, including up to the second harmonic of the signal is enough [9]. Therefore the bandwidth of the differentiator is designed to be 3 times $1/2DT$, and thus the maximum required –3 dB bandwidth is about 3.4 MHz. However, achieving this is very difficult, as parasitic poles would appear very close to the dominant pole even for a single-stage amplifier. In the proposed LED driver, a current-mirror amplifier is used and some gain is traded for stability, resulting in the transfer function given by Equation (3) and the Bode plot in Figure 5. $R_{diff}$ = 1.25 MΩ and $C_{diff}$ = 4.7 pF have been chosen in our circuit implementation. A transient simulation has been performed and the result is shown in Figure 6. In the simulation, a current source is used to emulate the current of $S_{P2}$ during each switching cycle, which is also the current passed to the LED and output capacitor. It is shown that the differentiator gives reasonably accurate sensing results, with $V_{diff}$ about 1.33 V for an LED-current of 350 mA.
+
+As $V_{diff}$ is sampled during $(1-D)T < t \le T$ in every switching cycle, the transfer function of the LED-current sensing circuit should not be directly multiplied with the transfer function of the whole converter. The sampling circuit acts as a continuous-to-discrete converter and at the same time a reconstruction filter that converts the continuous signal $V_{diff}(t)$ into a discrete signal and reconstructs it by zero-order-hold interpolation between samples [9]. The sampled signal $V_{sample}[n]$ can be represented by
+
+$$
+V_{\text{sample}}[n] = V_{\text{diff}}(nT) = V_{\text{bi}} - R_{\text{diff}} C_{\text{diff}} \frac{R_2}{R_1 + R_2} \cdot \frac{d}{dt} V_O(nT) \quad (6)
+$$
+
+Figure 5. Bode plot of proposed LED-current sensing circuit.
+
+As the sampling frequency is equal to the frequency of the differentiator output, $V_{sample}$ extracts the output value of the differentiator when $(1-D)T < t < T$ [9]; hence the transfer function of the overall system can be obtained by simply multiplying in the conversion ratio between $I_{LED}$ and $V_{sample}$. The resulting transfer function is given by [10]
+
+$$
+T(s) = A(s) \left( \frac{R_2}{R_1 + R_2} R_{diff} \frac{C_{diff}}{C_O} \right) \frac{1}{D(1-D)} \frac{I_{LED}}{V_m} \cdot \frac{1 + \frac{s}{\omega_d}}{1 + \frac{1}{Q}\frac{s}{\omega_o} + \frac{s^2}{\omega_o^2}} \quad (7)
+$$
+
+
+where
+
+| Symbol | Definition |
+|---|---|
+| $A(s)$ | transfer function of compensation network |
+| $D$ | duty ratio |
+| $V_m$ | ramp amplitude |
+| $Q$ | $\omega_o/(\omega_c + D\omega_l)$ |
+| $\omega_o$ | $(1-D)/\sqrt{LC}$ |
+| $\omega_c$ | $1/(RC)$ |
+| $\omega_l$ | $R/L$ |
+| $\omega_d$ | $1/\left(C R_c - \frac{DL}{(1-D)^2 R}\right)$ |
+| $R_c$ | equivalent series resistance (ESR) of output capacitor |
+
+
+## 3.2. Compensation and System Stability
+
+Figure 6. Transient simulation of the differentiator at a fixed load.
+
+---PAGE_BREAK---
+
+The proposed LED driver has the same configuration as a typical Buck-boost converter with leading-edge modulation except for the feedback network. From the transfer function given by Equation (7), a right-half-plane (RHP) zero $\omega_d$ exists; it can be moved to the left-half-plane (LHP) when the condition below is fulfilled.
+
+$$
+C R_c \geq \frac{D}{(1-D)^2} \frac{L}{R} \quad (8)
+$$
+
+For the proposed design it is very hard to fulfill such a criterion, and thus the RHP zero is placed after the Unity-Gain Frequency (UGF). The RHP zero $\omega_d$ occurs at around 12.4 kHz, and the natural frequency $\omega_o$ of the LED driver occurs at around 13.3 kHz with Q estimated to be about 1. As the RHP zero is located quite close to the natural frequency, extending the bandwidth with a zero is not very effective; hence dominant-pole compensation is used to ensure stability.
+
+To achieve the compensation, the transfer function A(s) of the compensation network should place the UGF of the system at around 1 kHz. A simple differential amplifier and an off-chip compensation capacitor are used to implement the compensation network. The Bode plots of the overall system and of the power stage alone are shown in Figure 7, indicating that reasonable stability has been achieved.
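The dominant-pole sizing can be sanity-checked numerically. The sketch below is a rough model, not the actual design: the compensator is idealized as a pure integrator $A(s) = \omega_u/s$ and the power-stage DC gain $G_0$ is an assumed value. It uses the quoted zero at 12.4 kHz, the natural frequency of 13.3 kHz and $Q \approx 1$, sizes the integrator for unity loop gain at 1 kHz, and checks that the loop gain is above unity below the UGF and below unity above it.

```python
import math

def power_stage_mag(f, g0, fd, fo, q):
    """|power stage| at frequency f: one zero at fd, complex pole pair at fo."""
    w, wd, wo = 2*math.pi*f, 2*math.pi*fd, 2*math.pi*fo
    num = math.hypot(1.0, w/wd)                  # |1 + jw/wd| (same magnitude for RHP or LHP zero)
    den = math.hypot(1.0 - (w/wo)**2, w/(q*wo))  # |1 + (jw)/(Q*wo) + (jw/wo)**2|
    return g0 * num / den

G0, FD, FO, Q = 10.0, 12.4e3, 13.3e3, 1.0        # G0 is an ASSUMED DC gain
F_UGF = 1e3
# Size the integrator A(s) = wu/s so that |A * power_stage| = 1 at 1 kHz.
WU = 2*math.pi*F_UGF / power_stage_mag(F_UGF, G0, FD, FO, Q)

def loop_mag(f):
    return (WU / (2*math.pi*f)) * power_stage_mag(f, G0, FD, FO, Q)

print(loop_mag(100), loop_mag(1e3), loop_mag(10e3))
```

With the UGF a decade below both the complex pole pair and the zero, neither contributes significant phase at crossover, which is the point of the dominant-pole choice.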
+
+## 4. Measurement Results
+
+The proposed LED-current sensing scheme is applied to a voltage-mode PWM Buck-boost LED driver and implemented in the AMS 0.35 µm process. The chip micrograph is shown in **Figure 8** and the area is 2.4 mm by 2.5 mm. Measurements have been performed on the sensing accuracy, line and load regulation, and power efficiency of the chip.
+
+In the steady-state measurement, the supply voltage is 3.6 V and the preset output current is 350 mA. The switching frequency is 1 MHz. **Figure 9** shows the waveforms of the inductor current and the output current. It can be seen that the driver is stable at the preset output current. The measured inductor current ripple is 400 mA with an average inductor current of about 600 mA. The measured average output voltage is 3.47 V and the ripple is about 50 mV. The corresponding output current is 350 mA with a ripple current of about 5 mA. The discontinuity in the output voltage waveform between the two states of each switching cycle indicates that there is a parasitic resistance in series with the output capacitor. When the inductor current ramps up, current is drawn from the output capacitor to the load, leading to a drop in output voltage. On the other hand, when the inductor current ramps down to charge up the output capacitor, there is a sudden jump in output voltage. There are also spikes during switching instants, which originate from the switching noise coupled from the switching nodes to the output node through the parasitic capacitances of the power MOSFETs.
+
+**Figure 10** shows the buffered output of the differentiator $V_{diff}$ and the buffered sampled voltage $V_{samp}$ with a supply voltage of 3.6 V. Because of the switching noise coupled from the switching nodes of the driver, there is distortion in the waveform of $V_{diff}$. However, as the value of $V_{diff}$ is sampled at instants when the driver is not switching, and the value is held constant thereafter, the switching-noise-induced distortion that appears in $V_{diff}$ is not passed to $V_{samp}$, as shown in the figure. The measured $V_{samp}$ is 1.39 V for an output current of 350 mA.
+
+A plot of output current for different reference voltages $V_{ref}$ is shown in Figure 11. The measured feedback factor $V_{comp}/V_{LED}$ is 1.85 mV/mA. When the supply voltage $V_{in}$ is changed from 2.7 V to 3.6 V, the output current for the same reference voltage $V_{ref}$ does not change significantly.
+
+**Figure 7.** Bode plot of the power stage and overall system of the proposed Buck-boost LED driver.
+
+---PAGE_BREAK---
+
+Figure 8. Micrograph of the Buck-boost LED driver with the proposed LED-current sensing circuit.
+
+Figure 9. Measured inductor current (top) and output voltage ripple (bottom) at 3.6 V $V_{DD}$.
+
+The output current of the circuit is also measured in response to line variation and load variation. As the actual resistance of a high-power LED depends on the manufacturer, two resistors of 10 Ω and 4.7 Ω are used. The preset output current is 350 mA, and the input voltage $V_{DD}$ is swept from 2.7 V to 3.6 V. The measurement result is shown in Figure 12, indicating that the driver provides a relatively constant output current under different loading resistances. The maximum difference in output current between the two resistance values is 6 mA, which occurs at an input voltage of 3.2 V. For line regulation, the driver provides a relatively constant output current under different supply voltages, and the measured line regulation is below 30 mA/V.
+
+
+
+Figure 10. The buffered differentiator output and the sampled voltage of the system with 3.6 V $V_{DD}$.
+
+Figure 11. Output current versus reference voltage at $V_{DD} = 2.7$ V and 3.6 V.
+
+Figure 13 shows the efficiency of the driver at 250 mA and 350 mA output current under different supply voltages. The loading resistor used is 10 Ω in both cases. The minimum efficiency is 87%, measured at an input voltage of 2.7 V and an output current of 350 mA. The maximum efficiency for 350 mA output current is 92%, measured at a supply voltage of 3.6 V. It can be observed that efficiency increases with input voltage. When the input voltage increases, the average inductor current decreases for the same output, and so does the conduction loss. It can be inferred that the main loss of the converter at 350 mA output is the conduction loss. For 250 mA output current, the efficiency is relatively constant and ranges between 90% and 91%. The output voltage for a 250 mA load is about 2.5 V, and the LED driver is in step-down operation over the entire input range. The inductor current is lower compared to the case of 350 mA output current; hence the efficiency is not seriously degraded even when the input voltage is close to its minimum value.
+
+---PAGE_BREAK---
+
+Figure 12. Measured current regulation with different loading resistor values and input voltages.
+
+Figure 14 shows the efficiency for output currents between 150 mA and 350 mA. Three curves are obtained with input voltages of 2.7 V, 3.3 V and 3.6 V. It can be observed that the efficiency first increases and then decreases as the output current increases. At low output current the switching loss is dominant, and it is relatively independent of the output conditions. Efficiency is therefore low at low output power because a nearly constant amount of power is spent on switching regardless of the power delivered. As the output power and output current increase, the efficiency also increases. When the output current increases further, the efficiency becomes dominated by conduction loss and thus decreases with the output current. We can see from the figure that for an input voltage of 2.7 V, the efficiency is highest when the output current is about 200 mA. The highest efficiency is obtained at an output current of 300 mA for $V_{in}$ = 3.3 V, or 350 mA for $V_{in}$ = 3.6 V. When the output current is low, the efficiency is lower for high input voltage due to the increase of switching loss with increasing input voltage.
+
+Figure 13. Measured efficiency at different supply voltages.
+
+## 5. Conclusion
+
+In this paper, an LED-current sensing circuit that consumes minimal power is proposed. The proposed circuit senses the LED current by differentiating the voltage of the output capacitor and consumes less than 420 µW. It is implemented in a Buck-boost LED driver and the design is fabricated in the AMS 0.35 µm process. Measurement results confirm reasonable sensing accuracy and line/load regulation. The maximum power efficiency is measured to be 92%. As the power consumption of the proposed LED-current sensing circuit does not increase proportionally with the LED current, the circuit is particularly useful as LED currents continue to increase in the future.
+
+Figure 14. Measured efficiency at different output currents and supply voltages.
+
+## 6. Acknowledgements
+
+This work is supported in part by the Industry, Education and Research Foundation of the PKU-HKUST Shenzhen-Hong Kong Institution (sgxcy-hzjj-201204), by the Guangdong Natural Science Foundation (S2011040001822) and by the Fundamental Research Project of the Shenzhen Science & Technology Foundation (JCYC20120618163025041). This work is also supported by the National Natural Science Foundation of China (61204033, 61204043).
+
+## REFERENCES
+
+---PAGE_BREAK---
+
+[1] Philips Lumileds Lighting Company, "Philips Lumileds LED Technology Breakthrough Fundamentally Solves Efficiency Losses at High Drive Currents," Press Information, Philips Lumileds Lighting Company, San Jose, 2007.
+
+[2] Linear Technology Corporation, "1A Synchronous Buck-Boost High Current LED Driver," Datasheet LTC3454, Linear Technology Corporation, Milpitas, 2005.
+
+[3] Maxim Integrated Products, "Offline and DC-DC PWM Controllers for High Brightness LED Drivers," Datasheet MAX 16802, Maxim Integrated Products, San Jose, 2006.
+
+[4] Semiconductor Components Industries, LLC, "NCP5030: Buck-Boost Converter to Drive a Single LED from 1 Li-Ion or 3 Alkaline Batteries," Datasheet NCP5030, Rev. 0, Denver, 2006.
+
+[5] Maxim Integrated Products, "1.5MHz, 30A, High-Efficiency LED-Driver with Rapid Current Pulsing," Datasheet MAX 16818, Maxim Integrated Products, San Jose, 2006.
+
+[6] Texas Instruments Incorporated, "Synchronous Boost Converter with Down Mode High Power White LED Driver," Datasheet TPS61058/TPS61059, Texas Instruments Incorporated, Dallas, December 2005.
+
+[7] National Semiconductor, "1.2A High Power White LED Driver, 2MHz Synchronous Boost Converter with I2C Compatible Interface, Constant Current Buck Regulator for Driving High Power LEDs," Datasheet LM 3402, National Semiconductor, Santa Clara, 2006.
+
+[8] W. Y. Leung, T. Y. Man and M. Chan, "A High-Power-LED Driver with Power-Efficient LED-Current Sensing Circuit," 2008 European Solid-State Circuits Conference, Edinburgh, 15-19 September 2008, pp. 354-357.
+
+[9] J. H. McClellan, R. W. Schafer and M. A. Yoder, "Signal Processing First," Prentice Hall, Upper Saddle River, 2003, pp. 71-79.
+
+[10] W. H. Ki, "Signal Flow Graph in Loop Gain Analysis of DC-DC PWM CCM Switching Converters," *IEEE Transactions on Circuits and Systems I: Fundamental Theory and Applications*, Vol. 45, No. 6, 1998, pp. 644-655.
\ No newline at end of file
diff --git a/samples/texts_merged/5999157.md b/samples/texts_merged/5999157.md
new file mode 100644
index 0000000000000000000000000000000000000000..1a551ccb81c8abf0b3d2c2e30ac83dc315e8bec7
--- /dev/null
+++ b/samples/texts_merged/5999157.md
@@ -0,0 +1,1355 @@
+
+---PAGE_BREAK---
+
+Semantic Foundations of Jade
+
+Martin C. Rinard and Monica S. Lam
+Computer Systems Laboratory
+Stanford University, CA 94305
+
+**Abstract**
+
+Jade is a language designed to support coarse-grain parallelism on both shared and distributed address-space machines. Jade is data-oriented: a Jade programmer simply augments a sequential imperative program with declarations specifying how the program accesses data. A Jade implementation dynamically interprets the access specification to execute the program concurrently while enforcing the program's data dependence constraints, thus preserving the sequential semantics.
+
+This paper describes the Jade constructs and defines both a serial and a parallel formal operational semantics for Jade. The paper proves that the two semantics are equivalent.
+
+# 1 Introduction
+
+Over the last decade, research in parallel architectures has led to many new parallel systems. These systems range from multiprocessors with shared address spaces and multicomputers with distributed address spaces to networks of high-performance workstations. Furthermore, the development of high-speed interconnection networks makes it possible to connect these systems together, forming a tremendous computational resource. An effective way to use these machines is to partition a computation into coarse-grain tasks. The current language support for this computing environment is, however, rather primitive: programmers must explicitly manage the hardware resources using low-level communication and synchronization primitives. This paper presents Jade, a new language designed to simplify the expression of coarse-grain parallelism.
+
+Instead of using explicitly parallel constructs to create and synchronize concurrent tasks, Jade programmers use declarative constructs to specify how parts of a sequential program access data. The Jade implementation dynamically interprets these access specifications to determine which operations can execute concurrently without violating the program's sequential semantics. This data-oriented approach simplifies the programming process by preserving the familiar sequential, imperative programming paradigm. Jade programmers need not struggle with phenomena such as data races, deadlock and nondeterministic program behavior.
+
+Jade is a set of extensions to existing sequential languages. Programmers can therefore parallelize large existing applications simply by analyzing how the program uses data and augmenting the source code with Jade extensions. Because Jade hides the low-level coordination of parallel activity from the programmer, these applications are portable across different parallel architectures.
+
+We introduced the basic concepts of the data-oriented approach to concurrency in a previous paper [6]. The previous version of Jade was designed for machines with shared address spaces. We have implemented Jade as an extension to C, C++ and FORTRAN on the Encore Multimax, the Silicon Graphics IRIS 4D/240S, and Stanford DASH multiprocessor [7]. We have found it possible to parallelize sequential programs with a reasonable programming effort. Implemented applications include a parallel sparse Cholesky factorization algorithm due to Rothberg and Gupta [11], the Perfect Club benchmark MDG [1], LocusRoute, a VLSI routing system due to Rose [10], a parallel *make* program, and a program simulating the flow of smog in the Los Angeles basin.
+
+We have revised the definition of Jade so that it can now be implemented on machines with separate address spaces. The same Jade program could be executed, for example, on both a shared address space multiprocessor and a network of workstations. This revision also makes it possible for the Jade implementation to dynamically verify the correctness of the access specifications. If the program does not correctly
+
+This research was supported in part by DARPA contract N00039-91-C-0138.
+---PAGE_BREAK---
+
+declare how it will access data, the Jade implementation will signal an error. This verification guarantees that the parallel and serial executions of a Jade program compute the same result.
+
+This paper presents the revised Jade language, and establishes the semantic foundations for Jade. We first informally present the Jade constructs, and explain how a programmer uses the Jade access declaration statements to specify how parts of the program access data. As we present the Jade constructs, we describe the concurrency patterns that the data usage information generates. We then formally present both a sequential and a parallel operational semantics for Jade. Because Jade is a declarative language, it is not immediately obvious how the Jade implementation generates the parallel execution. The parallel operational semantics therefore provides insight into how to actually implement Jade. Finally, we prove that the sequential and parallel semantics are equivalent.
+
+## 2 Jade Programming Paradigm
+
+A Jade programmer provides the *program* knowledge required for efficient parallelization; the implementation combines its *machine* knowledge with this information to map the computation efficiently onto the underlying hardware. Here are the Jade programmer's responsibilities:
+
+* **Task Decomposition:** The programmer starts with a serial program and uses Jade constructs to identify the program's task decomposition.
+
+* **Data Decomposition:** The programmer determines the granularity at which tasks will access the data, and allocates data at that granularity.
+
+* **Access Specification:** The programmer provides a dynamically determined specification of the data each task accesses.
+
+The Jade implementation performs the following activities:
+
+* **Constraint Extraction:** The implementation uses the program's serial execution order and the tasks' access specifications to extract the dynamic inter-task dependence constraints that the parallel execution must obey.
+
+* **Synchronized Parallel Execution:** The implementation maps the tasks efficiently onto the hardware while enforcing the extracted dependence constraints.
+
+* **Data Distribution:** On machines with multiple address spaces, the implementation generates
+
+the messages required to move data between processors.
+
+### 2.1 Jade Data Model
+
+Each Jade program has a shared memory that all tasks can access; objects (dynamically or statically) allocated in this memory are called *shared objects*. Each task also has a private memory consisting of a stack for procedure parameters and local variables and a heap for dynamically allocated objects accessed only by that task. Objects allocated in private memory are called *private objects*.
+
+The implementation enforces the restriction that no shared object can contain a reference to a private object. This, along with the restriction that no task be directly given a reference to another task's private object, ensures that no task can access any other task's private objects.
+
+Each task has an *access specification* which specifies how the task will access shared objects. The programmer defines a task's access specification using *access declaration statements*. For example, the `rd` (read) statement declares that the task may read the given object, while the `wr` (write) statement declares that the task may write the given object.
+
+Accesses conflict if, to preserve the serial semantics, they must execute in the underlying sequential execution order. Accesses to different objects do not conflict. Writes to the same object conflict, while reads do not conflict. A read and a write to the same object also conflict. The Jade implementation exploits concurrency by relaxing the sequential execution order between tasks that declare no conflicting accesses.
+
+Since the parallelization is based on the access specification, it is important that the access specification be accurate. An undeclared access could introduce a data race and make a parallel execution of the program compute an erroneous result. The Jade implementation precludes this possibility by dynamically checking each task's accesses to shared objects. If a task attempts to perform an undeclared access, the implementation will generate a run-time error.
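The conflict rules and the run-time access check can be summarized in executable form. The sketch below is illustrative Python pseudocode, not the actual Jade runtime (which extends C, C++ and FORTRAN): a declaration is a pair (object, kind), two declarations conflict exactly when they name the same object and at least one is a write, and a task that performs an undeclared access triggers a run-time error.

```python
class UndeclaredAccessError(RuntimeError):
    """Raised when a task accesses a shared object it did not declare."""

def conflicts(a, b):
    # Accesses to different objects never conflict; read/read does not
    # conflict; any pair involving a write on the same object does.
    return a[0] == b[0] and "wr" in (a[1], b[1])

class Task:
    def __init__(self, spec):
        self.spec = set(spec)            # e.g. {("x", "rd"), ("x", "wr")}

    def read(self, obj):
        if (obj, "rd") not in self.spec:
            raise UndeclaredAccessError("undeclared read of " + obj)

    def write(self, obj):
        if (obj, "wr") not in self.spec:
            raise UndeclaredAccessError("undeclared write of " + obj)
```

With this predicate, the serialization rule is simply: two tasks may run concurrently if and only if no pair of their declared accesses conflicts.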
+
+The implementation serializes tasks that declare conflicting accesses to an object even though the tasks may actually access disjoint regions of the object. The programmer must therefore allocate objects at a fine enough granularity to expose the desired amount of concurrency in the program.
+
+### 2.2 Basic Concurrency
+
+Jade programmers use the `withonly-do` construct to identify a task and to specify how that task will access data. Here is the general syntactic form of the
+---PAGE_BREAK---
+
+construct:
+
+```jade
+withonly { access declaration } do
+ (parameters for task body) {
+ task body
+ }
+```
+
+The task body section contains the serial code executed when the task runs. The parameters section contains a list of variables from the enclosing environment. When the task is created, the implementation copies the values of these variables into a new environment; the task will execute in this environment. To ensure that no task can reference another task's private objects, no variable in the parameters section can refer to a private object.
+
+When a task is created, the Jade implementation executes the access declaration section to generate the task's access specification. This section is an arbitrary piece of code containing access declaration statements. Each such statement declares how the task will access a given shared object; the task's access specification is the union of all such declarations. This section may contain dynamically resolved variable references and control flow constructs such as conditionals, loops and function calls. The programmer may therefore use information available only at run time when generating a task's access specification.
+
+The Jade implementation uses the access specification information to execute the program concurrently while preserving the program's sequential semantics. We illustrate this concept by tracing the execution of the following Jade program.
+
+```jade
+x := sh(0);
+y := sh(1);
+if (g(0) > 0) {
+ x := y;
+};
+withonly { wr(x); } do (x) {
+ *x := f(1);
+};
+withonly { rd(y); wr(y); } do (y) {
+ *y := *y + f(2);
+}
+```
+
+This program first uses the `sh` construct to allocate two objects in the shared heap. `x` and `y` refer to these shared objects. The program then computes `g(0)`, making `x` and `y` refer to the same object if `g(0) > 0`. The program then executes the two `withonly-do` constructs. Each `withonly-do` construct creates a task and generates that task's access specification. If `x` and `y` refer to the same object, then the tasks' accesses may conflict because both specifications declare that the tasks will write the same
+
+object. In this case the implementation preserves the serial semantics by executing the first task before the second task. The program therefore generates the following sequential task graph:
+
+If `x` and `y` refer to different objects, the two tasks' access specifications declare that the tasks will access disjoint sets of objects. Therefore, the two tasks' accesses do not conflict. The implementation can execute the tasks concurrently without violating the program's serial semantics. In this case the program generates the following parallel task graph:
+
+Conceptually, the Jade implementation dynamically generates and executes a task graph. As the implementation creates tasks, it inserts each new task into the task graph. To preserve the serial semantics the implementation inserts precedence arcs between tasks whose accesses may conflict. Each such arc goes from the earlier task in the underlying sequential execution order to the later task. When a processor becomes idle it executes one of the initial tasks (a task with no incoming precedence arcs) in the current task graph. When the task completes it is removed from the task graph. By building the task graph incrementally at run time, the Jade implementation can detect and exploit dynamic, data-dependent concurrency available only as the program runs.
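The incremental task-graph construction described above can be sketched as follows. This is an illustrative model, not the actual implementation; `conflicts` encodes the rule that two declared accesses conflict iff they name the same object and at least one is a write.

```python
from collections import defaultdict

def conflicts(a, b):
    return a[0] == b[0] and "wr" in (a[1], b[1])

class TaskGraph:
    """Tasks arrive in serial creation order; a precedence arc is added
    from every earlier task whose declared accesses may conflict."""
    def __init__(self):
        self.tasks = []                  # (task_id, access_spec), serial order
        self.preds = defaultdict(set)    # task_id -> ids of earlier conflicting tasks

    def create_task(self, tid, spec):
        for earlier_id, earlier_spec in self.tasks:
            if any(conflicts(a, b) for a in earlier_spec for b in spec):
                self.preds[tid].add(earlier_id)
        self.tasks.append((tid, spec))

    def runnable(self):
        # Initial tasks: no incoming precedence arcs.
        return [tid for tid, _ in self.tasks if not self.preds[tid]]

    def complete(self, tid):
        self.tasks = [(t, s) for t, s in self.tasks if t != tid]
        for p in self.preds.values():
            p.discard(tid)
```

For the two-task example above, specs {("x","wr")} and {("x","wr")} yield the sequential graph, while {("x","wr")} and {("y","wr")} leave both tasks initially runnable.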
+
+The programmer controls the amount of exploited concurrency by choosing the appropriate task granularity. Because the statements in a task execute sequentially, two pieces of code can execute concurrently only if they are in different tasks. The programmer must therefore make the task decomposition fine enough to expose the desired amount of concurrency.
+
+## 2.3 Advanced Concurrency
+
+In the model of parallel computation presented in section 2.2, a task's access specification is determined once and for all when the task is created. Two tasks may either execute concurrently (if none of their accesses conflict) or sequentially (if their accesses may conflict). Therefore, all synchronization takes place at task boundaries. The following example demonstrates how synchronizing only at task boundaries can waste concurrency.
+---PAGE_BREAK---
+
+```jade
+x := sh(0);
+y := sh(1);
+withonly { wr(x); } do (x) {
+    *x := f(1);
+};
+withonly { rd(y); rd(x); wr(x); } do (y,x) {
+    s := g(*y);
+    *x := h(*x, s);
+};
+withonly { wr(y); } do (y) {
+    *y := f(2);
+}
+```
+
+This program generates three tasks. The tasks must execute sequentially to preserve the serial semantics. However, the second task does not access x until it finishes the statement `s := g(*y)`. Therefore, the first task should be able to execute concurrently with the statement `s := g(*y)` from the second task. Similarly, the second task no longer accesses y after the statement `s := g(*y)` finishes. The statement `*x := h(*x, s)` from the second task should be able to execute concurrently with the third task. This example illustrates how information about when tasks access shared objects can expose concurrency.
+
+To allow programmers to express when a task will access shared objects, Jade provides both a new construct, `with-cont`, and new access declaration statements `df_rd`, `df_wr`, `no_rd` and `no_wr`. The `with-cont` construct allows the programmer to update a task's access specification as the task executes. This construct, in combination with the new access declaration statements, allows the programmer to exploit the kind of inter-task concurrency described above.
+
+Here is the general syntactic form of the `with-cont` construct:
+
+```jade
+with { access declaration } cont;
+```
+
+As in the `withonly-do` construct, the access declaration section is an arbitrary piece of code containing access declaration statements. These statements change the task's access specification so that it more precisely reflects how the rest (or continuation, as the `cont` keyword suggests) of the task will access shared objects.
+
+The `with-cont` construct combines the current access specification with the declarations in the access declaration section to generate a new access specification. Unless the access declaration explicitly declares otherwise, the new access specification has the same declarations as the old access specification. The `withonly-do` construct, on the other hand, builds its access specification from scratch. Unless it explicitly declares an access, the access specification does not contain that declaration. The keywords in the constructs (`with` vs. `withonly`) reflect this difference in the treatment of undeclared accesses.
+
+## 2.4 Deferred Accesses
+
+The `df_rd` and `df_wr` statements declare a deferred access to the shared object. That is, they specify that the task may eventually read or write the object, but that it will not do so immediately. Before the task can access the object, it must execute a `with-cont` construct that uses the `rd` or `wr` access declaration statements to convert the deferred declaration to an immediate declaration. Therefore, a task that initially declares a deferred access to a shared object does not have the right to access that object. It does, however, have the right to convert the deferred declaration to an immediate declaration. This immediate declaration then gives the task the right to access the object.
+
+Deferred declarations allow a task to defer its synchronization for a shared object until just before it actually accesses the object. The following modification to our example illustrates how deferred declarations can increase the amount of exploitable concurrency in a Jade program:
+
+```jade
+x := sh(0);
+y := sh(1);
+withonly { wr(x); } do (x) {
+    *x := f(1);
+};
+withonly { rd(y); df_rd(x); df_wr(x); }
+do (y,x) {
+    s := g(*y);
+    with { rd(x); wr(x); } cont;
+    *x := h(*x, s);
+};
+withonly { wr(y); } do (y) {
+    *y := f(2);
+}
+```
+
+Because the second task declares a deferred read and a deferred write access on x, it cannot access x until it converts the deferred declarations to immediate declarations. The second task can therefore start to execute while the first task is still running. The `with-cont` statement in the second task converts the deferred declarations to immediate declarations. Because the immediate declarations give the second task the right to access x, it must wait until the first task completes before it can proceed. This example demonstrates how deferred declarations allow the Jade implementation to execute an initial segment of one task concurrently with another task even though the second task may eventually carry out an access that conflicts with one of the first task's accesses.
+
+---PAGE_BREAK---
+
+## 2.5 Completed Accesses
+
+Jade programmers use the `no_wr` and `no_rd` access declaration statements to explicitly remove a declaration from a task's access specification. These statements allow the programmer to indicate when a task has completed a specified access. The following example illustrates how the `no_wr` and `no_rd` statements can increase the amount of exploitable concurrency:
+
+```jade
+x := sh(0);
+y := sh(1);
+withonly { wr(x); } do (x) {
+ *x := f(1);
+};
+withonly { rd(y); df_rd(x); df_wr(x); }
+do (y,x) {
+ s := g(*y);
+ with { no_rd(y); rd(x); wr(x); } cont;
+ *x := h(*x, s);
+};
+withonly { wr(y); } do (y) {
+ *y := f(2);
+}
+```
+
+After the second task executes the `with-cont` statement, it no longer declares that it will read y. At this point the two tasks cannot perform conflicting accesses, so the rest of the second task can execute concurrently with the third task.
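The scheduling rules behind deferred and completed accesses can be captured in a small sketch (illustrative Python with assumed helper names, not the actual runtime): a task may start once its immediate declarations are conflict-free with respect to all uncompleted earlier tasks, while converting a deferred declaration to an immediate one must wait until no earlier task still declares a conflicting access. Removing a declaration with `no_rd`/`no_wr` simply shrinks the earlier task's declaration set, which can unblock later tasks immediately.

```python
# A declaration is (object, kind, deferred) with kind in {"rd", "wr"}.
def conflicts(a, b):
    return a[0] == b[0] and "wr" in (a[1], b[1])

def may_start(spec, earlier_decls):
    """Start when no *immediate* declaration conflicts with any declaration
    (immediate or deferred) still held by an earlier task."""
    immediate = [(o, k) for (o, k, deferred) in spec if not deferred]
    return not any(conflicts(d, e[:2]) for d in immediate for e in earlier_decls)

def may_convert(decl, earlier_decls):
    """The with-cont conversion of a deferred declaration blocks until no
    earlier task still declares a conflicting access."""
    return not any(conflicts(decl, e[:2]) for e in earlier_decls)

task1 = [("x", "wr", False)]                                      # first task
task2 = [("y", "rd", False), ("x", "rd", True), ("x", "wr", True)]
```

For the example above, `may_start(task2, task1)` holds (its only immediate access is a read of y), so the second task overlaps the first; `may_convert(("x", "rd"), task1)` fails until the first task completes.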
+
+This example illustrates how programmers can use the `with-cont` construct and the `no_wr` and `no_rd` access declaration statements to eliminate conflicts between the enclosing task and tasks occurring later in the underlying sequential execution order. This conflict elimination may allow later tasks to execute as soon as the enclosing task executes the `with-cont` statement. In the absence of the `with-cont` statement the tasks would have had to wait until the enclosing task terminated.
+
+## 2.6 Summary
+
+Access specifications give the Jade implementation all the information it needs to correctly execute a program in parallel. Programmers generate a task's initial access specification when it is created, and can update the specification as the task runs. At any time, the current access specification must accurately reflect how the rest of the task and its future sub-tasks will access data. It is this a priori restriction - the guarantee that neither the task nor any of its sub-tasks will ever access certain shared objects - that allows the Jade implementation to exploit concurrency between tasks.
+
+Each declaration *enables* a task and its sub-tasks to access a shared object in a certain way. For example, an immediate read declaration enables the task to read the data; it also enables the task's sub-tasks to declare a read access and then read the data. Conversely, if a task has not declared a read access on a given object, a sub-task cannot declare a read access and thus cannot read the object. A declaration therefore allows a sub-task to access certain data by enabling the sub-task to declare certain accesses. Table 1 summarizes the actions that read declarations enable. The table for writes is similar.
+
+| Declaration | Enabled Access | Enabled Declarations |
+|---|---|---|
+| rd | read | rd, df_rd, no_rd |
+| df_rd | none | rd, df_rd, no_rd |
+| no_rd | none | none |
+
+Table 1: Enabled Actions
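Table 1 can be encoded directly as data; a hypothetical helper then answers whether a sub-task's declaration is permitted by its parent's current specification. This is a sketch of the enabling rule only, not of the actual implementation.

```python
# Encoding of Table 1 (read side; the write side is analogous).
ENABLED = {
    "rd":    {"access": {"read"}, "declare": {"rd", "df_rd", "no_rd"}},
    "df_rd": {"access": set(),    "declare": {"rd", "df_rd", "no_rd"}},
    "no_rd": {"access": set(),    "declare": set()},
}

def subtask_may_declare(parent_decls, decl):
    """A sub-task may make a declaration only if some declaration of the
    parent enables it."""
    return any(decl in ENABLED[d]["declare"] for d in parent_decls)
```

Note that `df_rd` enables no access itself but does enable a sub-task (or a later `with-cont`) to declare `rd`, which is exactly the two-step structure of deferred declarations.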
+
When the implementation executes tasks, it must ensure that the parallel execution performs conflicting accesses in the same order as the serial execution. Therefore, a task cannot execute if any of its enabled accesses conflict with an immediate or deferred access declared by an earlier task.
+
+# 3 Operational Semantics
+
Section 2 informally described the Jade language and the concurrent behavior of Jade programs. In this section we develop both a serial and a parallel operational semantics for Jade, and show that the parallel and serial semantics are equivalent.
+
Because Jade is a set of extensions to serial, imperative languages, we base our Jade semantics on a semantics for such a language. In this section we use the language Simple defined in Appendix A as our base language. We made Simple simple for purposes of exposition, but it is powerful enough to illustrate the basic concepts behind the Jade semantics. Although Simple only supports integers and references, our semantic treatment trivially generalizes to languages with more elaborate data structures. Simple also has no sophisticated flow of control constructs such as first-class continuations or closures. Again, the semantic treatment presented in this section trivially generalizes to languages with such constructs. Because Jade only deals with a program's dynamic memory accesses, it does not matter what part of the program happens to generate these accesses.
+
+We first define an operational semantics for Simple. This semantics consists of the definition of expression evaluation and the definition of a transition relation on Simple program states. This transition relation is in effect an interpreter for Simple. We use the transition relation for Simple to define a serial operational semantics for Jade programs. This semantics executes Jade programs in the standard serial execution order and dynamically checks the correspondence between each task's declared and actual accesses to shared objects.
+
+We also use the transition relation for Simple to define the parallel operational semantics for Jade. The parallel and serial semantics use the same mechanism to check that tasks perform no undeclared accesses to shared objects. The parallel semantics, however, runs tasks concurrently if they can perform no conflicting accesses. The parallel semantics maintains a set of active tasks and a set of suspended tasks. Active tasks can execute concurrently; suspended tasks wait for active tasks to complete conflicting accesses. This semantics uses the standard interleaving approach to modelling concurrency in that it models the parallel execution of tasks by interleaving their atomic transitions.
+
+The parallel semantics synchronizes tasks by maintaining, for each shared object, a queue of the declarations of tasks that may access that object. When all of a task's immediate declarations reach the front of their queues, the task is activated and can run. When a task terminates, it removes all of its declarations from the queues, potentially activating other tasks.
+
+When a task is created, the implementation inserts each of its declarations into the appropriate queue just before its parent's declarations. When a task creates several sub-tasks, the sub-tasks' declarations will appear in the queue in their task creation order. This task creation order is also the standard serial execution order. Because a sub-task's declarations appear before its parent's declarations, the sub-task takes precedence over its parent. Therefore, tasks with conflicting declarations will get activated and execute in the standard serial execution order.
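This insertion discipline can be illustrated with a minimal sketch, using a Python list to stand in for a single object's queue (the `spawn` helper is hypothetical):

```python
def spawn(queue, parent, child):
    """Insert a child's entry immediately before its parent's entry,
    as the implementation does for each declaration queue when a
    task is created."""
    queue.insert(queue.index(parent), child)

queue = ["parent"]
spawn(queue, "parent", "child1")
spawn(queue, "parent", "child2")
# Sub-tasks end up in creation order, ahead of the parent:
# ["child1", "child2", "parent"]
```

Each new child lands just before the parent, so children appear in creation order and take precedence over the parent, reproducing the standard serial execution order.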
+
+The main theoretical result of this paper is a theorem which establishes the correspondence between the serial semantics and the parallel semantics. The theorem states that a parallel execution of a Jade program will successfully halt if and only if the serial program successfully halts. Also, all such parallel and serial executions generate the same result.
+
The Appendix contains the complete set of axioms that define the operational semantics. In the paper we illustrate how the operational semantics works by reproducing representative axioms from the Appendix.
+
+## 3.1 Access Specifications
+
In this section we formally define the access specifications that the semantics uses to check the correspondence between a task's declared and actual accesses. Each access specification is a set $s$ of *declarations*. A declaration is a tuple of the form $\langle di \in \{\text{df}, \text{im}, \text{no}\}, rw \in \{\text{rd}, \text{wr}\}, l \rangle$. The first field of the tuple determines whether the tuple represents a deferred ($\text{df}$), an immediate ($\text{im}$) or no ($\text{no}$) access declaration. The second field determines whether the tuple represents a read ($\text{rd}$) or a write ($\text{wr}$) declaration, while the third field identifies the shared object to which the declaration refers. We say that a task with access specification $s$ declares a deferred write access on a shared object $l$ if $\langle \text{df}, \text{wr}, l \rangle \in s$, an immediate write access if $\langle \text{im}, \text{wr}, l \rangle \in s$, and so on.
+
A task with access specification $s$ can read (or write) a shared object $l$ only if $\langle \text{im}, \text{rd}, l \rangle \in s$ (respectively $\langle \text{im}, \text{wr}, l \rangle \in s$). A task can declare an access (or no access) on an object in a **withonly-do** or **with-cont** construct only if $s$ declares either an immediate or a deferred access on that object. We formalize the second concept with the following definition:

**Definition 1**

$s \vdash \langle di, rw, l \rangle \iff \langle \text{df}, rw, l \rangle \in s \text{ or } \langle \text{im}, rw, l \rangle \in s.$
+
+The Jade implementation verifies that a task's access specification is accurate by dynamically checking every access to a shared object that the task declares or performs. If the task attempts to perform or declare an access that its access specification does not enable, the implementation detects the violation at run time and signals an error.
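Representing declarations as tuples and a specification as a set makes both Definition 1 and the dynamic check one-liners. This is an illustrative sketch; the names `enables` and `check_declared` are ours:

```python
# A declaration is a tuple (di, rw, l) with di in {'df', 'im', 'no'}
# and rw in {'rd', 'wr'}; an access specification is a set of them.
def enables(s, rw, l):
    """Definition 1: s |- <di, rw, l> iff s holds a deferred or an
    immediate declaration for access kind rw on object l."""
    return ("df", rw, l) in s or ("im", rw, l) in s

def check_declared(s, rw, l):
    """The dynamic check: actually performing an access requires an
    immediate declaration; anything else is a violation."""
    if ("im", rw, l) not in s:
        raise RuntimeError(f"undeclared {rw} access on {l}")
```

A deferred declaration thus licenses future declarations but not the access itself, which is exactly the distinction the run-time check enforces.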
+
+## 3.2 Expression Evaluation
+
+In this section we define how Jade programs evaluate expressions. We assume an infinite set of shared memory locations ShLoc and an infinite set of private memory locations PrLoc. Each location holds a shared or private object. In Simple, shared locations can hold integers and references to shared objects. Private locations can hold integers and references to either shared or private objects. A memory is a partial function from a finite number of active memory
+locations to objects:
+
+$$
+\begin{align*}
+l & \in \mathit{Loc} = \mathit{ShLoc} \cup \mathit{PrLoc} \\
+v & \in \mathit{ShObj} = \mathit{ShLoc} \cup \mathcal{Z} \\
+v & \in \mathit{PrObj} = \mathit{ShObj} \cup \mathit{PrLoc} \\
+m & \in \mathit{ShMem} = \mathit{ShLoc} \xrightarrow{\text{fin}} \mathit{ShObj} \\
+n & \in \mathit{PrMem} = \mathit{PrLoc} \xrightarrow{\text{fin}} \mathit{PrObj}
+\end{align*}
+$$
+
Programmers use identifiers ($id \in Id$) to refer to variables. An environment $e$ ($\in \text{Env} = \text{Id} \xrightarrow{\text{fin}} \text{PrObj}$) gives the correspondence between variables and values.

Expressions are evaluated in a context containing an environment, a shared and a private memory, and an access specification $s$. As described in Section 3.1, $s$ determines which shared objects a task has the right to read and/or write. The notation $\exp\ \textbf{in}\ \langle e, m, n, s \rangle = v$ should be read as: "expression $\exp$ in context $\langle e, m, n, s \rangle$ evaluates to $v$".
+
When a task evaluates an expression, it may read a shared object. At each such read, the semantics checks that the task declares an immediate read access on that object. This check takes the form of a precondition on the axiom that evaluates reads of shared objects. This axiom (reproduced below from Appendix B) requires that if a task reads a shared object, it must declare the access:
+
$$
\frac{\exp\ \textbf{in}\ \langle e, m, n, s \rangle = l \in \text{Dom } m,\ \langle \text{im}, \text{rd}, l \rangle \in s}{{*}\exp\ \textbf{in}\ \langle e, m, n, s \rangle = m(l)}
$$
+
+Appendix B contains the complete set of axioms that define expression evaluation.
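The precondition on the read axiom translates directly into a guarded lookup. The sketch below is our illustrative encoding (the names are hypothetical, and the semantics' **error** state becomes an exception):

```python
class AccessError(Exception):
    """Stands in for the semantics' special error state."""

def read_shared(m, s, l):
    """Checked read of shared location l under specification s."""
    # Precondition of the read axiom: l is an allocated shared
    # location and the task declares an immediate read access on it.
    if l not in m:
        raise AccessError(f"{l} not in Dom m")
    if ("im", "rd", l) not in s:
        raise AccessError(f"undeclared read of {l}")
    return m[l]
```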
+
+## 3.3 Simple Semantics
+
+We now define the operational semantics for Simple programs. This semantics takes the form of a transition relation $\to$. Intuitively, each transition starts with a sequence of statements and a statement evaluation context and executes the first statement in the sequence. The result is a new statement sequence and a new context. Statement evaluations take place in the same contexts as expression evaluations.
+
A Simple statement may write a shared object. In this case the operational semantics must check that the task declares an immediate write access on that object. We reproduce that axiom here to show how the semantics uses a precondition to perform the check:
+
$$
\frac{\begin{array}{c}
\exp_1\ \textbf{in}\ \langle e, m, n, s \rangle = l \in \text{Dom } m,\ \langle \text{im}, \text{wr}, l \rangle \in s, \\
\exp_2\ \textbf{in}\ \langle e, m, n, s \rangle = v \in \text{ShObj}
\end{array}}{{*}\exp_1 := \exp_2; c\ \textbf{in}\ \langle e, m, n, s \rangle \to c\ \textbf{in}\ \langle e, m[l \mapsto v], n, s \rangle}
$$
+
If the task that generates the write does not declare the access, the Jade implementation must generate an error. Formally, the task takes a transition to the special state **error**. The semantics uses the following axiom to generate this transition:
+
$$
\frac{\exp_1\ \textbf{in}\ \langle e, m, n, s \rangle = l \in \text{Dom } m,\ \langle \text{im}, \text{wr}, l \rangle \notin s}{{*}\exp_1 := \exp_2; c\ \textbf{in}\ \langle e, m, n, s \rangle \to \textbf{error}}
$$
+
Finally, we demonstrate that when a task allocates a shared object, it acquires a deferred read and a deferred write declaration on the new object. The following axiom executes a `sh` statement, which allocates a new object:
+
$$
\frac{\begin{array}{c}
l \in \mathit{ShLoc} \setminus \text{Dom } m,\ \exp\ \textbf{in}\ \langle e, m, n, s \rangle = v \in \mathit{ShObj}, \\
s' = s \cup \{\langle \text{df}, \text{rd}, l \rangle, \langle \text{df}, \text{wr}, l \rangle\}
\end{array}}{id := \text{sh}(\exp); c\ \textbf{in}\ \langle e, m, n, s \rangle \to c\ \textbf{in}\ \langle e[id \mapsto l], m[l \mapsto v], n, s' \rangle}
$$
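The allocation rule can be sketched as follows (the fresh-location counter and the function name `alloc_shared` are our own illustrative choices): the allocating task leaves with deferred read and write declarations on the new location, which it can later promote to immediate declarations.

```python
import itertools

_fresh = itertools.count()  # source of arbitrary fresh locations

def alloc_shared(m, s, v):
    """Allocate a fresh shared location holding v and extend the
    task's specification with deferred rd and wr declarations."""
    l = f"l{next(_fresh)}"  # arbitrary choice, as in the axiom
    m[l] = v
    return m, s | {("df", "rd", l), ("df", "wr", l)}
```

Because the location is chosen arbitrarily, two runs may pick different names for the same object, which is exactly why the equivalence-up-to-bijection machinery of Definition 2 is needed.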
+
This axiom arbitrarily chooses a new location for the allocated object. Therefore, different executions of the serial program may differ in their choice of which locations hold which objects. Such executions represent equivalent computations, but the actual program states are different. To capture this equivalence we define what it means for two contexts to be equivalent:
+
**Definition 2** $\langle e_1, m_1, n_1, s_1 \rangle \equiv^b \langle e_2, m_2, n_2, s_2 \rangle$ iff $\text{Dom } e_1 = \text{Dom } e_2$, and there exist bijections $b_n : \text{Dom } n_1 \to \text{Dom } n_2$, $b_m : \text{Dom } m_1 \to \text{Dom } m_2$ and $b_s : s_1 \to s_2$ such that $b = b_n \cup b_m \cup I$ (where $I$ is the identity function on $\mathcal{Z} \cup \{\textbf{error}\}$) and

1. $\forall l \in \text{Dom } n_1.\ b(n_1(l)) = n_2(b(l)),$

2. $\forall l \in \text{Dom } m_1.\ b(m_1(l)) = m_2(b(l)),$

3. $\forall id \in \text{Dom } e_1.\ b(e_1(id)) = e_2(id),$

4. $\forall \langle di, rw, l \rangle \in s_1.\ b_s(\langle di, rw, l \rangle) = \langle di, rw, b(l) \rangle.$
+
+**Lemma 1** If $\langle e_1, m_1, n_1, s_1 \rangle \equiv^b \langle e_2, m_2, n_2, s_2 \rangle$ then $\forall exp \in Exp.$
+
+$b(\exp\,\mathbf{in}\,\langle e_1, m_1, n_1, s_1\rangle) = \exp\,\mathbf{in}\,\langle e_2, m_2, n_2, s_2\rangle.$
+
+Appendix C contains the rest of the axioms that define the operational semantics for Simple.
+
+## 3.4 Jade Statement Semantics
+
This section defines the transition relation $\to_j$ for Jade statements. This transition relation extends $\to$ to the access declaration sections of Jade constructs. These transitions take place in Jade contexts; a Jade context is a tuple $\langle e, m, n, s, r \rangle$. These contexts are the same as statement evaluation contexts, except that they contain an additional set of declarations $r$. The axioms accumulate declarations from the access declaration sections of Jade constructs into this set. When the semantics has finished executing the access declaration section of a **withonly-do** construct, $r$ becomes the access specification of the new task. For a **with-cont** construct, the semantics uses $r$ to update the current task's access specification.
+
+We first reproduce an axiom that demonstrates how access declaration sections can contain arbitrary code. The following axiom makes all of the Simple transitions valid in the access declaration section of a **with-cont** construct:
+
+$$
+\frac{c_1 \textbf{in} \langle e, m, n, s \rangle \rightarrow c'_1 \textbf{in} \langle e', m', n', s' \rangle}
+{\begin{array}{l} \textit{with} \{c_1\}\textit{cont}; c_2 \textbf{in} \langle e, m, n, s, r \rangle \rightarrow_j \\ \textit{with} \{c'_1\}\textit{cont}; c_2 \textbf{in} \langle e', m', n', s', r \rangle \end{array}}
+$$
+
+We next reproduce an axiom dealing with the accumulation of declarations into the access specification set *r*. To legally declare that a new task will access a given shared object, the parent task's access specification must enable the declaration. The semantics enforces this constraint with a precondition on the axiom which constructs the new task's access specification:
+
$$
\frac{\begin{array}{c}
c = \mathbf{withonly}\ \{\text{di\_rw}(exp); c_1\}\ \text{do}(ids)\ \{c_2\}; c_3, \\
\exp\ \textbf{in}\ \langle e, m, n, s \rangle = l \in \text{Dom } m, \\
r' = r \cup \{\langle di, rw, l \rangle\},\ s \vdash \langle di, rw, l \rangle
\end{array}}{c\ \textbf{in}\ \langle e, m, n, s, r \rangle \to_j \mathbf{withonly}\ \{c_1\}\ \text{do}(ids)\ \{c_2\}; c_3\ \textbf{in}\ \langle e, m, n, s, r' \rangle}
$$
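The accumulation step can be sketched as a single guarded set extension (the name `declare` is ours): it is legal only when the enclosing task's specification $s$ enables the declaration, i.e. $s \vdash \langle di, rw, l \rangle$.

```python
def declare(s, r, di, rw, l):
    """Add <di, rw, l> to the accumulating set r, provided the
    enclosing specification s holds a deferred or an immediate
    declaration for the same access kind on the same object."""
    if ("df", rw, l) not in s and ("im", rw, l) not in s:
        raise RuntimeError(f"specification does not enable {(di, rw, l)}")
    return r | {(di, rw, l)}
```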
+
+Appendix D contains the rest of the axioms which define the $\rightarrow_j$ transition relation, and the definition of the equivalence relation $\equiv_j^b$ for Jade contexts.
+
+## 3.5 Serial Jade Semantics
+
We now define the transition relation $\to_s$ for serial Jade program states. A serial Jade program state $st = \langle m, ts \rangle \in \text{SerState} = \text{ShMem} \times \text{Task}^*$ is a pair consisting of a shared store and a stack of tasks. Each task is a tuple $t = (c, e, n, s, r) \in \text{Task}$ containing the code $c$ for the task, its environment $e$, its private memory $n$, its access specification $s$ and its accumulated declaration set $r$.
+
+The following axiom defines the execution of a **withonly-do** statement. The precondition first checks that none of the task parameters refers to a private object. The new task then becomes the first task in the sequence, with the parent task second. Therefore, the program's statements execute in the standard sequential, depth-first execution order:
+
+**Definition 3**
+
$$
s \uparrow r = \{\langle di, rw, l \rangle \in s \mid \neg\exists \langle di', rw, l \rangle \in r\} \cup \{\langle \text{im}, rw, l \rangle \in r\} \cup \{\langle \text{df}, rw, l \rangle \in r\}.
$$

$$
\frac{\begin{array}{c}
c = \mathbf{withonly}\ \{e\}\ \text{do}(id_1, \ldots, id_n)\ \{c_1\}; c_2, \\
\forall i \le n.\ id_i \in \text{Dom } e \text{ and } e(id_i) \notin \text{PrLoc}, \\
e' = [id_1 \mapsto e(id_1)] \cdots [id_n \mapsto e(id_n)]
\end{array}}{\langle m, (c, e, n, s, r) \circ ts \rangle \to_s \langle m, (c_1, e', \emptyset, \emptyset \uparrow r, \emptyset) \circ (c_2, e, n, s, \emptyset) \circ ts \rangle}
$$
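Definition 3's update operator can be sketched as a set computation (the name `update` is ours; declarations are `(di, rw, l)` tuples): declarations in $r$ supersede declarations in $s$ for the same access kind and object, and `no` declarations in $r$ remove without replacing. Applied to an empty $s$, it also builds a new task's specification $\emptyset \uparrow r$.

```python
def update(s, r):
    """s 'up-arrow' r from Definition 3: keep declarations of s whose
    access kind and object are not re-declared in r, then add the
    immediate and deferred (but not the 'no') declarations of r."""
    kept = {(di, rw, l) for (di, rw, l) in s
            if not any(rw2 == rw and l2 == l for (_, rw2, l2) in r)}
    return kept | {(di, rw, l) for (di, rw, l) in r if di in ("im", "df")}
```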
+
+When a task completes, the computation continues with the rest of its parent task. The next axiom removes completed tasks from the top of the stack of tasks. The completed task's parent is the new first task, so the program's execution continues with the parent task:
+
$$
\frac{}{\langle m, (c, e, n, s, r) \circ ts \rangle \to_s \langle m, ts \rangle}
$$
+
+When a program successfully halts, it takes a transition to the integer that is the program's result:
+
$$
\frac{c = \mathbf{result}(exp),\ \exp\ \textbf{in}\ \langle e, m, n, s \rangle = v \in \mathcal{Z}}{\langle m, (c, e, n, s, r) \rangle \to_s v}
$$
+
+The rest of the axioms that define $\rightarrow_s$ appear in Appendix E. In particular, there is an axiom that takes the serial program state to **error** if there is an error in the execution of one of the tasks.
+
+We now define the notion of equivalence for serial Jade program states, and state a theorem that allows us to treat $\rightarrow_s$ as a transition function between equivalence classes of serial Jade program states.
+
+**Definition 4** $b \downarrow s$ is the restriction of $b$ to $s$. That is, $\text{Dom } b \downarrow s = \text{Dom } b \cap s$ and $b \downarrow s(v) = b(v)$.
+
**Definition 5** $\langle m, t_1 \circ \cdots \circ t_n \rangle =_s \langle m', t'_1 \circ \cdots \circ t'_n \rangle$ iff there exists a bijection $b_m : \text{Dom } m \to \text{Dom } m'$ such that $\forall i \le n$, if $t_i = (c, e, n, s, r)$ and $t'_i = (c', e', n', s', r')$ then $c = c'$ and $\exists b.\ b \downarrow \text{Dom } b_m = b_m$ and $\langle e, m, n, s, r \rangle \equiv_j^b \langle e', m', n', s', r' \rangle$.
+
+**Theorem 1** If $st_1 =_s st_2$ then
+
+1. $st_2 =_s st_1$,
+
+2. $st_1 \to_s st'_1 \Rightarrow \exists st'_2 . st_2 \to_s st'_2,$
+
+3. $st_1 \to_s st'_1$, $st_2 \to_s st'_2 \Rightarrow st'_1 =_s st'_2$ or $st'_1 = st'_2.$
+
+We can now view $\rightarrow_s$ as a program execution function. The value of $\rightarrow_s$ is the unique equivalence class of program states obtained by executing the next step of the program. Our serial semantics is therefore deterministic.
+
+We now define the notion of observation for the serial execution of Jade programs. The basic idea is that we start the program in a start state and run it until it can progress no further. If the program halts with an integer result, we observe the result. If the program halts in error or could only partially execute we observe error. If the program runs forever we observe ⊥:
+
**Definition 6** $sst(c) = \langle \emptyset, (c, \emptyset, \emptyset, \emptyset, \emptyset) \rangle$.
$$
SObs(c) = \left\{
\begin{array}{ll}
v & \text{if } sst(c) \to_s \cdots \to_s v \in \mathcal{Z} \\
\textbf{error} & \text{if } sst(c) \to_s \cdots \to_s \textbf{error} \text{ or} \\
& sst(c) \to_s \cdots \to_s st \not\to_s,\ st \in \text{SerState} \\
\bot & \text{if } sst(c) \to_s \cdots \to_s \cdots
\end{array}
\right.
$$
+
+## 3.6 Parallel Jade Semantics
+
We now define the transition relation $\to_p$ for parallel Jade program states. A parallel program state $pt = \langle m, A, S, \sqsubseteq \rangle \in \text{ParState}$ consists of a shared memory $m$, a set $A$ of active tasks, a set $S$ of suspended tasks and an ordering relation $\sqsubseteq$ on the declarations of the parallel program state.
+
+### 3.6.1 Object Queues
+
+The following definitions impose some consistency requirements on the structure of $\sqsubseteq$:
+
+**Definition 7** *Given a set $T$ of tasks, $\text{decl}(T) = \bigcup_{\langle c,e,n,s,r \rangle \in T} s$.*
+
+**Definition 8** *We say that $\sqsubseteq$ is consistent for $A \cup S$ iff*
+
+1. $\sqsubseteq \subset \text{decl}(A \cup S) \times \text{decl}(A \cup S)$,
+
+2. $d_1 \sqsubseteq d_2$ and $d_2 \sqsubseteq d_3 \Rightarrow d_1 \sqsubseteq d_3$.
+
+3. $\langle di, rw, l \rangle \sqsubseteq \langle di', rw', l' \rangle \Rightarrow l = l'$.
+
4. $\langle di, rw, l \rangle \not\sqsubseteq \langle di', rw', l \rangle$ and $\langle di', rw', l \rangle \not\sqsubseteq \langle di, rw, l \rangle$
$\iff \exists \langle c, e, n, s, r \rangle \in A \cup S.\ \langle di, rw, l \rangle \in s \text{ and } \langle di', rw', l \rangle \in s$.
+
+Given this definition, $\sqsubseteq$ represents a set of queues, one for each shared object. Each declaration $\langle di, rw, l \rangle$ appears in the queue for $l$.
+
+Declarations appear in a queue in their tasks' underlying sequential execution order. So, if task $t_1$ would execute before task $t_2$ if the program executed sequentially, then $t_1$'s declarations appear before the declarations of $t_2$. The operational semantics uses these queues to determine when tasks can execute concurrently. As soon as all of a task's immediate declarations reach the front of their queues, that task can execute. Therefore, if the declarations of two tasks are simultaneously at the front of their respective queues, the two tasks' access specifications do not conflict and the tasks can execute concurrently. We formalize the notion of "front of a queue" with the following definitions. $f(\langle di, rw, l \rangle, \sqsubseteq)$ is true just when $\langle di, rw, l \rangle$ is at the front of its queue.
+
**Definition 9**

$$ succ(s, \sqsubseteq) = \{d' \mid \exists d \in s.\ d \sqsubseteq d'\}. $$

$$ pred(s, \sqsubseteq) = \{d' \mid \exists d \in s.\ d' \sqsubseteq d\}. $$

$$ f(\langle di, \text{wr}, l \rangle, \sqsubseteq) \text{ iff } pred(\{\langle di, \text{wr}, l \rangle\}, \sqsubseteq) = \emptyset. $$

$$ f(\langle di, \text{rd}, l \rangle, \sqsubseteq) \text{ iff } \forall \langle di', rw', l' \rangle \in pred(\{\langle di, \text{rd}, l \rangle\}, \sqsubseteq).\ rw' = \text{rd}. $$
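Definition 9 translates directly into code once $\sqsubseteq$ is represented as a set of pairs $(d, d')$ meaning $d \sqsubseteq d'$ (a representation and function names of our choosing):

```python
def pred(ds, order):
    # Declarations queued before some member of ds.
    return {d1 for (d1, d2) in order if d2 in ds}

def succ(ds, order):
    # Declarations queued after some member of ds.
    return {d2 for (d1, d2) in order if d1 in ds}

def front(d, order):
    """f from Definition 9: a write needs an empty predecessor set;
    a read tolerates read predecessors only."""
    before = pred({d}, order)
    if d[1] == "wr":
        return not before
    return all(rw == "rd" for (_, rw, _) in before)
```

With two reads queued ahead of a write on the same object, both reads are at the front (and may proceed concurrently) while the write is not, matching the discussion below.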
+
+As the definition of $f$ shows, a write declaration is at the front of its queue when there are no declarations before it in the queue; a read declaration is at the front of its queue when there are only other read declarations before it in the queue. This reflects the fact that several tasks can concurrently read a shared object because reads do not change the object's state. Writes, of course, must execute in the underlying sequential execution order.
+
+The definition of $f$ demonstrates that a deferred declaration can prevent an immediate declaration from being at the front of its queue. In this case the deferred declaration prevents the immediate declaration's task from executing. This reflects the fact that a deferred declaration represents a potential access. The immediate declaration's task cannot proceed until there is no possibility that an earlier task can perform an access that conflicts with any of its accesses.
+
+We now discuss the definition of $\rightarrow_p$, the transition relation for parallel Jade program states. We model the parallel execution of a Jade program by interleaving the atomic transitions of that program's parallel tasks. $\rightarrow_p$ therefore arbitrarily picks one of the active tasks and executes the next step of that task's computation. The tasks themselves may change their specifications, create new tasks, or complete their execution. Each of these events changes the program state's set of specifications. The transition relation must therefore modify $\sqsubseteq$ to reflect these changes.
+
+There is a function to perform each kind of modification to $\sqsubseteq$. When the semantics executes a `withonly-do` construct, it uses the `ins` function to insert the new task's declarations into the queues just before its parent's declarations. When the task completes, the semantics uses the `rem` function to remove its declarations from the queues. When the semantics executes a `with-cont` construct, it uses the `upd` function to perform the queue modifications that correspond to the changes in the task's access specification.
+
**Definition 10**

$$ s@l = \{\langle di, rw, l \rangle \in s\}. $$

$$ rpl(s, r, \sqsubseteq) = \bigcup_{\langle di, rw, l \rangle \in r} \left( (pred(s@l, \sqsubseteq) \times r@l) \cup (r@l \times succ(s@l, \sqsubseteq)) \right). $$

$$ ins(r, s, \sqsubseteq) = \sqsubseteq \cup\ rpl(s, r, \sqsubseteq) \cup \bigcup_{\langle di, rw, l \rangle \in r} (r@l \times s@l). $$

$$ rem(s, \sqsubseteq) = \sqsubseteq - \{\langle d, d' \rangle \mid d \in s \text{ or } d' \in s\}. $$

$$ upd(s, r, \sqsubseteq) = rem(s, \sqsubseteq) \cup rpl(s, r, \sqsubseteq). $$
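The two central queue-maintenance functions can be sketched as follows, again with $\sqsubseteq$ as a set of pairs (the pair-set representation and helper names are ours; `ins` and `rem` follow Definition 10). `ins` places a new task's declarations just before its parent's in each per-object queue; `rem` deletes a finished task's declarations.

```python
def _pred(ds, order):
    return {d1 for (d1, d2) in order if d2 in ds}

def _succ(ds, order):
    return {d2 for (d1, d2) in order if d1 in ds}

def _at(ds, l):
    # s@l from Definition 10: declarations of ds on object l.
    return {d for d in ds if d[2] == l}

def ins(r, s, order):
    """Insert child declarations r immediately before the parent
    declarations s in every per-object queue."""
    new = set(order)
    for l in {d[2] for d in r}:
        rl, sl = _at(r, l), _at(s, l)
        new |= {(p, d) for p in _pred(sl, order) for d in rl}
        new |= {(d, q) for d in rl for q in _succ(sl, order)}
        new |= {(d, p) for d in rl for p in sl}
    return new

def rem(s, order):
    """Drop every ordering pair that mentions a completed task's
    declarations."""
    return {(d1, d2) for (d1, d2) in order
            if d1 not in s and d2 not in s}
```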
+
+### 3.6.2 Transition Relation
+
We now present the axioms that define $\to_p$. We first present the axiom that executes a `withonly-do` statement when it creates a new task. This axiom suspends both the new task and its parent. The parent task may be unable to run because its access specification may conflict with the new task's access specification. The new task may be unable to run because its access specification may conflict with those of previously created tasks.
+
$$
\frac{\begin{array}{c}
c = \mathbf{withonly}\ \{e\}\ \text{do}(id_1, \ldots, id_n)\ \{c_1\}; c_2, \\
t = \langle c, e, n, s, r \rangle \in A, \\
\forall i \le n.\ id_i \in \text{Dom } e \text{ and } e(id_i) \notin \text{PrLoc}, \\
e' = [id_1 \mapsto e(id_1)] \cdots [id_n \mapsto e(id_n)], \\
t' = \langle c_1, e', \emptyset, \emptyset \uparrow r, \emptyset \rangle,\ t'' = \langle c_2, e, n, s, \emptyset \rangle
\end{array}}{\langle m, A, S, \sqsubseteq \rangle \to_p \langle m, A \setminus \{t\}, S \uplus \{t', t''\}, ins(\emptyset \uparrow r, s, \sqsubseteq) \rangle}
$$
+
When all of a suspended task's immediate declarations reach the front of their respective queues, the semantics must transfer the task to the set of active tasks so that it can execute. The following axiom activates such suspended tasks:
+
$$
\frac{t = \langle c, e, n, s, r \rangle \in S,\ \forall \langle \text{im}, rw, l \rangle \in s.\ f(\langle \text{im}, rw, l \rangle, \sqsubseteq)}{\langle m, A, S, \sqsubseteq \rangle \to_p \langle m, A \cup \{t\}, S \setminus \{t\}, \sqsubseteq \rangle}
$$
+
When a task completes, it must remove its declarations from the queues. The semantics can then activate tasks whose accesses conflicted with the completed task's accesses.
+
$$
\frac{t = \langle c, e, n, s, r \rangle \in A}{\langle m, A, S, \sqsubseteq \rangle \to_p \langle m, A \setminus \{t\}, S, rem(s, \sqsubseteq) \rangle}
$$
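The activation and completion axioms amount to plain set manipulations over the task sets and the queue ordering. The sketch below is an illustrative encoding (the `Task` tuple and function names are ours):

```python
from collections import namedtuple

Task = namedtuple("Task", ["name", "spec"])  # spec: frozenset of (di, rw, l)

def front(d, order):
    # f from Definition 9 over a pair-set representation of the queues.
    before = {d1 for (d1, d2) in order if d2 == d}
    if d[1] == "wr":
        return not before
    return all(rw == "rd" for (_, rw, _) in before)

def activate(active, suspended, order):
    """Move every suspended task whose immediate declarations are all
    at the front of their queues into the active set."""
    ready = {t for t in suspended
             if all(front(d, order) for d in t.spec if d[0] == "im")}
    return active | ready, suspended - ready

def complete(t, active, order):
    """On termination, drop the task and every ordering pair that
    mentions one of its declarations, unblocking later tasks."""
    order = {(d1, d2) for (d1, d2) in order
             if d1 not in t.spec and d2 not in t.spec}
    return active - {t}, order
```

For example, with a writer queued ahead of a reader on the same object, only the writer activates; completing the writer then lets the reader activate.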
+
+The rest of the axioms that define $\rightarrow_p$ are in Appendix F. In particular, there is an axiom that takes a program to **error** if the program violates some of the execution constraints, and an axiom that computes the program's result.
+
We next define how to observe the parallel execution of a Jade program. If the program successfully halts, we observe the result. If the program has an error, or gets into a state from which it cannot progress, or has one of its active tasks get into a state from which it cannot progress, we observe **error**. If the program runs forever, we observe $\bot$. The parallel observation function $PObs$ observes every parallel execution and takes the union of the resulting observations. In this definition, $PObs$ makes no fairness assumptions about the parallel execution.
+
**Definition 11** $pst(c) = \langle \emptyset, \{\langle c, \emptyset, \emptyset, \emptyset, \emptyset \rangle\}, \emptyset, \emptyset \rangle$.
$hung(\langle m, A, S, \sqsubseteq \rangle)$ *iff*
$\langle m, A, S, \sqsubseteq \rangle \not\to_p$ or $\exists t \in A.\ \langle m, \{t\}, \emptyset, \sqsubseteq \rangle \not\to_p$.
$$
\begin{array}{rl}
PObs(c) = & \{v \mid pst(c) \to_p \cdots \to_p v \in \mathcal{Z}\} \\
\cup & \{\textbf{error} \mid pst(c) \to_p \cdots \to_p \textbf{error} \text{ or } pst(c) \to_p \cdots \to_p pt,\ hung(pt)\} \\
\cup & \{\bot \mid pst(c) \to_p \cdots \to_p \cdots\}
\end{array}
$$
+
We next define a notion of consistency for parallel program states, and prove that $\to_p$ preserves consistency. We use Lemma 2 extensively in the proof of correspondence between the serial and parallel semantics.
+
**Definition 12** *A parallel program state $\langle m, A, S, \sqsubseteq \rangle$ is consistent iff $\sqsubseteq$ is consistent for $A \cup S$ and $\forall \langle c, e, n, s, r \rangle \in A \cup S$:*

1. $\forall \langle di, rw, l \rangle \in r.\ l \in \text{Dom } m,\ s \vdash \langle di, rw, l \rangle$,

2. $\forall \langle di, rw, l \rangle \in s.\ l \in \text{Dom } m,\ di \in \{\text{df}, \text{im}\}$,

3. $\langle c, e, n, s, r \rangle \in A \Rightarrow \forall \langle \text{im}, rw, l \rangle \in s.\ f(\langle \text{im}, rw, l \rangle, \sqsubseteq)$.
**Lemma 2** If $pt \in \text{ParState}$ is consistent, $pt \to_p pt'$ and $pt' \in \text{ParState}$, then $pt'$ is consistent.
+
+*Proof Sketch:* The key aspect of the proof is to show that $\rightarrow_p$ preserves property 3 of definition 12. To show this, we must show that the legal queue updates and insertions do not cause the program state to violate this property.
+
+We first consider the queue insertions caused by a
+parent task spawning a new task. For each queue, the
+new task's declarations appear just before the parent
+task's declarations. Both the parent task and the new
+task are suspended in the new state. All active tasks
+other than the parent task remain active in the new
+state.
+
We now show that any immediate declaration of an active task in the new state remains at the front of its queue. If there is no declaration from the parent task in the active task's declaration's queue, then by property 1 of Definition 12 there is no declaration from the new task in that queue. Otherwise, the active task's declaration must have appeared either before or after the parent's declarations in the old state. If the declaration appeared before those of the parent task, then it will appear before those of the new task in the new state. If the declaration appeared after those of the parent task, then all of the declarations in question must be read declarations. By property 1 of Definition 12 the new task inserted no write declarations in the queue, so the active task's read declaration is still at the front of the queue.
+
+We next consider an update to the queue due to
+the execution of a *with-cont* statement. In the new
+state, the updating task’s declarations appear in the
+same place in the queues as the task’s declarations
+from the old state. We can use a case analysis similar
+to that for the insertion case to determine that the
+transition preserves property 3 of definition 12.
+
+## 3.7 Semantic Correspondence
+
+In this section we present the proof of correspondence between the parallel and serial semantics. We first set up an equivalence between serial program states and parallel program states. We use this definition to show that the serial execution of a Jade program is also one of the legal parallel executions. This is the first step towards proving the correspondence between the parallel and serial semantics.
+
**Definition 13** $\langle m, ts \rangle \equiv_{sp} \langle m, A, S, \sqsubseteq \rangle$ iff there exists an ordering $\langle c'_1, e'_1, n'_1, s'_1, r'_1 \rangle \circ \cdots \circ \langle c'_m, e'_m, n'_m, s'_m, r'_m \rangle$ on $A \cup S$ such that if $ts = \langle c_1, e_1, n_1, s_1, r_1 \rangle \circ \cdots \circ \langle c_n, e_n, n_n, s_n, r_n \rangle$ then $m = n$ and $\forall i \le n$:

1. $c_i = c'_i$, $e_i = e'_i$, $n_i = n'_i$, $s_i = s'_i$, $r_i = r'_i$,

2. $\forall \langle di, rw, l \rangle \in s_i.\ f(\langle di, rw, l \rangle, rem(\bigcup_{j < i} s_j, \sqsubseteq))$.
+
+## B Expression Evaluation
+
+We define expression evaluation with the following axioms.
+
$$ \frac{i \in \mathcal{Z}}{i\ \textbf{in}\ \langle e, m, n, s \rangle = i} $$

$$ \frac{id \in \text{Dom } e}{id\ \textbf{in}\ \langle e, m, n, s \rangle = e(id)} $$

$$ \frac{\exp\ \textbf{in}\ \langle e, m, n, s \rangle = l \in \text{Dom } n}{{*}\exp\ \textbf{in}\ \langle e, m, n, s \rangle = n(l)} $$
+
$$
\frac{\exp\ \textbf{in}\ \langle e, m, n, s \rangle = l \in \text{Dom } m,\ \langle \text{im}, \text{rd}, l \rangle \in s}{{*}\exp\ \textbf{in}\ \langle e, m, n, s \rangle = m(l)}
$$

$$
\frac{\exp_1\ \textbf{in}\ \langle e, m, n, s \rangle = v_1 \in \mathcal{Z},\ \exp_2\ \textbf{in}\ \langle e, m, n, s \rangle = v_2 \in \mathcal{Z}}{\exp_1\ \text{op}\ \exp_2\ \textbf{in}\ \langle e, m, n, s \rangle = v_1\ \text{op}\ v_2}
$$

$$
\frac{\exp\ \textbf{in}\ \langle e, m, n, s \rangle = l \in \text{Dom } n}{\text{is-pr}(\exp)\ \textbf{in}\ \langle e, m, n, s \rangle = 1,\ \text{is-sh}(\exp)\ \textbf{in}\ \langle e, m, n, s \rangle = 0}
$$

$$
\frac{\exp\ \textbf{in}\ \langle e, m, n, s \rangle = l \in \text{Dom } m}{\text{is-pr}(\exp)\ \textbf{in}\ \langle e, m, n, s \rangle = 0,\ \text{is-sh}(\exp)\ \textbf{in}\ \langle e, m, n, s \rangle = 1}
$$

$$
\frac{\exp\ \textbf{in}\ \langle e, m, n, s \rangle = i \in \mathcal{Z}}{\text{is-pr}(\exp)\ \textbf{in}\ \langle e, m, n, s \rangle = 0,\ \text{is-sh}(\exp)\ \textbf{in}\ \langle e, m, n, s \rangle = 0}
$$
+
+These axioms may leave an expression's value undefined if the expression or one of its subexpressions fails to satisfy the preconditions of one of the axioms. In this case the expression evaluates to the special value error indicating an evaluation error.
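Read operationally, these axioms give a recursive evaluator. The sketch below is our own hypothetical Python encoding, not part of any Jade implementation: expression trees are nested tuples, locations are strings such as `"L1"`, and the four state components are passed explicitly.

```python
# A hypothetical executable reading of the evaluation axioms (our own
# encoding): e is the environment, m shared memory, n private memory,
# s the access specification (a set of (di/im, rw, location) triples).
# A failed precondition yields the special value "error".
ERROR = "error"

def evaluate(exp, e, m, n, s):
    """Evaluate exp in <e, m, n, s> following the axioms above."""
    if isinstance(exp, int):                       # integer literal
        return exp
    tag = exp[0]
    if tag == "id":                                # id in Dom e
        return e[exp[1]] if exp[1] in e else ERROR
    if tag == "deref":                             # *exp
        l = evaluate(exp[1], e, m, n, s)
        if l in n:                                 # private location
            return n[l]
        if l in m and ("im", "rd", l) in s:        # shared, read declared
            return m[l]
        return ERROR
    if tag == "op":                                # exp1 op exp2
        v1 = evaluate(exp[2], e, m, n, s)
        v2 = evaluate(exp[3], e, m, n, s)
        if isinstance(v1, int) and isinstance(v2, int):
            ops = {"+": v1 + v2, "-": v1 - v2, "*": v1 * v2}
            return ops.get(exp[1], ERROR)
        return ERROR
    if tag in ("is-pr", "is-sh"):                  # location predicates
        v = evaluate(exp[1], e, m, n, s)
        if v in n:
            return 1 if tag == "is-pr" else 0
        if v in m:
            return 0 if tag == "is-pr" else 1
        if isinstance(v, int):
            return 0
        return ERROR
    return ERROR
```

Dereferencing a shared location without a declared read access, like any other unmet precondition, falls through to `ERROR`, mirroring the remark above.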
+
## C Simple Transition Relation
+
+→ is defined to be the smallest relation satisfying the following axioms.
+
+$$
+\begin{align*}
+& \frac{\exp \textbf{in} (\langle e, m, n, s \rangle = v \in \mathrm{PrObj}}{\mathrm{id} := \exp; c \textbf{in} (\langle e, m, n, s \rangle) \to c \textbf{in} (\langle e[\mathrm{id} \mapsto v], m, n, s\rangle)} \\
+& \frac{\exp_1 \textbf{in} (\langle e, m, n, s \rangle) = l \in \mathrm{Dom} n, \\
+& \phantom{{}\frac{\exp_1 \textbf{in} (\langle e, m, n, s \rangle) = l \in \mathrm{Dom} n, {}}{} \exp_2 \textbf{in} (\langle e, m, n, s \rangle) = v \in \mathrm{PrObj}}}{*\exp_1 := \exp_2; c \textbf{in} (\langle e, m, n, s\rangle) \to c \textbf{in} (\langle e, m, n[l \mapsto v], s\rangle)} \\
+& \frac{\exp_1 \textbf{in} (\langle e, m, n, s\rangle) = l \in \mathrm{Dom} m, (\langle im, wr, l\rangle) \in s, \\
+& \phantom{{}\frac{\exp_1 \textbf{in} (\langle e, m, n, s\rangle) = l \in \mathrm{Dom} m, {}}{} \exp_2 \textbf{in} (\langle e, m, n,s\rangle) = v \in \mathrm{ShObj}}}{*\exp_1 := \exp_2; c \textbf{in} (\langle e,m,n,s\rangle) \to c \textbf{in} (\langle e,m[l\mapsto v],n,s\rangle)} \\
+& \frac{\exp_1 (\langle e,m,n,s\rangle) = l \in Dom m, (\langle im,wr,l\rangle) /s/}{*\exp_1 := \exp_2; c\ in\ (\langle e,m,n,s\rangle)\rightarrow\textit{error}} \\
+& \frac{l\ in\ PrLoc\setminus Dom\ n,\ exp\ in\ (\langle e,m,n,s\rangle)=v\ in\ PrObj}{\overline{\mathrm{id} := pr(\exp);c\ in\ (\langle e,m,n,s\rangle)
+\longrightarrow \\[-0.5em]
+& c\ in\ (\langle e[\mathrm{id}\mapsto l],m,n[l\mapsto v],s\rangle)}} \\
+& l\ in\ ShLoc\setminus Dom\ m,\ exp\ in\ (\langle e,m,n,s\rangle)=v\ in\ ShObj,
+\\[-1em]
+& s' = s\cup\{\langle df ,rd,l\rangle,\langle df ,wr,l\rangle\}
+\\[-1em]
+& id := sh(\exp);c\ in\ (\langle e,m,n,s\rangle)
+\longrightarrow \\[-0.5em]
+& c\ in\ (\langle e[\mathrm{id}\mapsto l],m[l\mapsto v],n,s'\rangle)
+\\[1em]
+& -\frac{\exp\ in\ (\langle e,m,n,s\rangle=1)}{\substack{\text{if }(\exp)\{c_1\}\quad\text{else}\quad\{c_2\};c_3\ in\ (\langle e,m,n,s\rangle)\longrightarrow \\[-0.5em]
+& c_1;c_3\ in\ (\langle e,m,n,s\rangle)}}}
+\end{align*}
+$$
+
+$$
+\begin{array}{@{}l@{}}
+\displaystyle\frac{\exp {\tt in} {\tt }= 0}{\begin{array}[t]{@{}l@{}}
+\textrm{if }(\exp)\{{c}_1\}\textrm{ else }\{{c}_2\};c_3 {\tt in}\ {\tt }\rightarrow \\
+c_2;c_3 {\tt in}\ {\tt }
+\end{array}} \\[4ex]
+\displaystyle\frac{\exp {\tt in}\ {\tt }= 1}{\begin{array}[t]{@{}l@{}}
+\textrm{while }(\exp)\{{c}_1\};c_2 {\tt in}\ {\tt }\rightarrow \\
+c_1;\textrm{while }(\exp)\{{c}_1\};c_2 {\tt in}\ {\tt }
+\end{array}} \\[4ex]
+\displaystyle\frac{\exp {\tt in}\ {\tt }= 0}{\begin{array}[t]{@{}l@{}}
+\textrm{while }(\exp)\{{c}_1\};c_2 {\tt in}\ {\tt }\rightarrow c_2 {\tt in}\ {\tt }
+\end{array}} \\
+\end{array}
+$$
+
**Definition 15** subexp(exp, st) is true iff exp appears in an expression of st.
+
+$$
+\frac{\exp_{\mathbf{in}}(\langle e, m, n, s\rangle = \mathbf{error}, \mathrm{subexp}(exp, st))}{st; c_2 ~~~~{\mathbf{in}} ~~~~(\langle e, m, n, s\rangle) ~~~~\rightarrow ~~~~{\mathbf{error})}
+$$
+
+## D Jade Transition Relation
+
$\to_j$ is defined to be the smallest relation satisfying the following axioms.
+
+$$
+\frac{
+ c_{\mathbf{in}}(\mathfrak{e}, m, n, s) \to c'_{\mathbf{in}}(\mathfrak{e}', m', n', s')
+}{
+ c_{\mathbf{in}}(\mathfrak{e}, m, n, s, r) \to_j c'_{\mathbf{in}}(\mathfrak{e}', m', n', s', r)
+}
+$$
+
+$$
+\frac{
+ c_{\mathbf{in}}(\mathfrak{e}, m, n, s) \to_{error}
+}{
+ c_{\mathbf{in}}(\mathfrak{e}, m, n, s, r) \to_{j} error
+}
+$$
+
+$$
+c_1: in (e,m,n,s) -> c'_1 in (e',m',n',s') \\
+withonly {c_1} do ids {c_2}; c_3 in (e,m,n,s,r) ->_j \\
+withonly {c'_1} do ids {c_2}; c_3 in (e',m',n',s',r)
+$$
+
+$$
+c_1: in (e,m,n,s) -> c'_1 in (e',m',n',s') \\
+with {c_1} cont; c_2 in (e,m,n,s,r) ->_j \\
+with {c'_1} cont; c_2 in (e',m',n',s',r)
+$$
+
+$$
+c = withonly \left\{
+ di\_rw(exp); c_1 do(ids){c_2};
+ exp in (e,m,n,s) = l
+ Dom m,
+ r' = r
+ di\_rw
+ rw
+ l
+ di\_rw
+ l
+ \
+ \
+ \
+ \
+ \
+ \
+ \
+ \
+ \
+ \
+ \
+ \
+ \
+ \
+ \
+ \
+ \
+ \
+ \
+ \
+ \
+ \
+ \
+ \
+ \
+ \
+ \
+ \
+ \
+ \
+ \
+ \
+ \
+ \
+ \
+ \
+ \
+ \
+ \
+ \
+ \
+ \
+ \
+ \
+ \
+ \
+ \
+ \
+ \
+ \
+ \
+ \
+ \
+ \
+ \
+ \
+ \
+ \
+ \
+ \
+ \
+ \
+ \
+ \
+ \
+ \
+ \
+ \
+ \
+ \
+ \
+ \
+ \
+ \
+ \
+ \
+ \
+ \
+ \
+ \
+ \
+ \
+ \
+ \
+ \
+ \
+ \
+ \
+ \
+ \
+ \
+ \
+ \
+ \
+ \
+ \
+ \
+ \
+ \
+ \
+ \
+ \
+ \
+ \
+ \
+ \
+ \
+ \
+ \
+ \
+ \
+ \
+ \
+ \
+ \
+ \
+ \
+ \
+ \
+ \
+ \
+ \
+ \
+ \
+ \
+ \
+ \
+ c in (, , , , ) ->_j
+ withonly {c₁} do(ids){c₂}; c₃ in (, , , , )
+ exp in (, , , ) = l ∈ Dom m,
+ r' = r ∪ {⟨di, rw, l⟩},
+ s ⊢ ⟨di, rw, l⟩
+ ...
+ ...
+ ...
+ ...
+ ...
+ ...
+ ...
+ ...
+ ...
+ ...
+ ...
+ ...
+ ...
+ ...
+ ...
+ ...
+ ...
+ ...
+ ...
+ ...
+ ...
+ ...
+ ...
+ ...
+ ...
+ ...
+ ...
+ ...
+ ...
+ ...
+ ...
+ ...
+ ...
+ ...
+ ...
+ ...
+ ...
+ ...
+ ...
+ ...
+ ...
+ ...
+ ...
+ ...
+ ...
+ ...
+ ...
+ ...
+ ...
+ ...
+ ...
+ ...
+ ...
+ ...
+ ...
+ ...
+ ...
+ ...
+ ...
+ ...
+ ...
+ ...
+ ...
+ ...
+ ...
+
+$$
+c_{1} : in (e,m,n,s) →_{j} c'_{1} in (e',m',n',s') \\
+withonly {c_{1}} do(ids){c_{2}}; c_{3} in (e,m,n,s,r) →_{j} c'_{3} in (e',m',n',s',r)
+$$
+
+$$
+\overline{\exp_{in}(\langle e,m,n,s\rangle = l \in Dom m,s \vdash (\mathrm{di},rw,l)}}
+$$
+
+$$
+with {di\_rw(exp)}; c_1 cont; c_2 in ( 0$);
+If $Rwu$ and $w \in V(p, w)$, then $u \in V(p, v)$ (if $p$ is of arity 0).
+
+We uphold the convention in [9] and assume that for each world $w \in W$, $(D_w)^0 = \{w\}$, so $V(p, w) = \{w\}$ or $V(p, w) = \emptyset$, for a propositional variable $p$.
+
+The distinctive feature of relational frames for IF is the *connected* property, which states that for any $w, u, v \in W$ of a frame $F = (W, R, D)$, if $Rwu$ and $Rwv$, then either $Ruv$ or $Rvu$. Imposing this property on reflexive, transitive, and antisymmetric (i.e. intuitionistic) frames causes the set of worlds to become linearly ordered, thus validating the linearity axiom $(A \supset B) \lor (B \supset A)$ (shown in Fig. 1). Furthermore, the constant domain condition (**CD**) validates the quantifier shift axiom $\forall x(A(x) \lor B) \supset \forall xA(x) \lor B$ (also shown in Fig. 1).
+
+Rather than interpret formulae from $\mathcal{L}$ in relational models, we follow [9] and introduce $D_w$-sentences to be interpreted in relational models. This gives rise to a notion of validity for formulae in $\mathcal{L}$ (see Def. 3). The definition of validity also depends on the *universal closure* of a formula: if a formula $A$ contains only $x_0, \dots, x_m$ as free variables, then the universal closure $\forall A$ is taken to be the formula $\forall x_0 \dots \forall x_m A$.
+
+**Definition 2 (Dw-Sentence).** Let $M$ be a relational model with $w \in W$ of $M$. We define $\mathcal{L}_{D_w}$ to be the language $\mathcal{L}$ expanded with parameters from the set $D_w$. We define a $D_w$-formula to be a formula in $\mathcal{L}_{D_w}$, and we define a $D_w$-sentence to be a $D_w$-formula that does not contain any free variables. Last, we use $a, b, c, \dots$ to denote parameters in a set $D_w$.
+
+**Definition 3 (Semantic Clauses [9]).** Let $M = (W, R, D, V)$ be a relational model with $w \in W$ and $R(w) := \{v \in W | (w, v) \in R\}$. The satisfaction relation $M, w \models A$ between $w \in W$ and a $D_w$-sentence $A$ is inductively defined as follows:
+
+- $M, w \nvDash \bot$
+
- If $p$ is a propositional variable, then $M, w \Vdash p$ iff $w \in V(p, w)$;

- If $p$ is an $n$-ary predicate symbol (with $n > 0$), then $M, w \Vdash p(a_1, \dots, a_n)$ iff $(a_1, \dots, a_n) \in V(p, w)$;

- $M, w \Vdash A \lor B$ iff $M, w \Vdash A$ or $M, w \Vdash B$;

- $M, w \Vdash A \land B$ iff $M, w \Vdash A$ and $M, w \Vdash B$;

- $M, w \Vdash A \supset B$ iff for all $u \in R(w)$, if $M, u \Vdash A$, then $M, u \Vdash B$;

- $M, w \Vdash \forall x A(x)$ iff for all $u \in R(w)$ and all $a \in D_u$, $M, u \Vdash A(a)$;
+---PAGE_BREAK---
+
- $M, w \Vdash \exists x A(x)$ iff there exists an $a \in D_w$ such that $M, w \Vdash A(a)$.
+
We say that a formula $A$ is *globally true* on $M$, written $M \Vdash A$, iff $M, u \Vdash \bar{\forall} A$ for all worlds $u \in W$. A formula $A$ is *valid*, written $\Vdash A$, iff it is globally true on all relational models.
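For intuition, the semantic clauses of Def. 3 can be checked mechanically on a finite model. The Python sketch below is our own encoding, not from [9]: formulae are nested tuples, quantifier bodies are functions over the constant domain, and `V[(p, w)]` is the set of tuples satisfying $p$ at $w$.

```python
# Hypothetical finite-model checker for the semantic clauses of Def. 3.
# A model is (W, R, D, V): worlds, accessibility pairs, a constant
# domain, and a valuation mapping (predicate, world) to tuples.
def forces(M, w, A):
    W, R, D, V = M
    succ = [u for u in W if (w, u) in R]       # R(w)
    op = A[0]
    if op == "bot":                            # falsum is never forced
        return False
    if op == "pred":                           # p(a1, ..., an)
        return tuple(A[2]) in V.get((A[1], w), set())
    if op == "or":
        return forces(M, w, A[1]) or forces(M, w, A[2])
    if op == "and":
        return forces(M, w, A[1]) and forces(M, w, A[2])
    if op == "imp":                            # checked at all successors
        return all(not forces(M, u, A[1]) or forces(M, u, A[2])
                   for u in succ)
    if op == "forall":                         # all successors, all a in D
        return all(forces(M, u, A[1](a)) for u in succ for a in D)
    if op == "exists":                         # witness in the local domain
        return any(forces(M, w, A[1](a)) for a in D)
    raise ValueError(op)
```

On a reflexive two-world chain where $p(a)$ holds only at the later world, this reproduces persistence (Lem. 1): once forced, $p(a)$ stays forced along $R$.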
+
+**Lemma 1 (Persistence).** Let $M$ be a relational model with $w, u \in W$ of $M$. For any $D_w$-sentence $A$, if $M, w \models A$ and $Rwu$, then $M, u \models A$.
+
+*Proof.* See [9, Lem. 3.2.16] for details.
+□
+
+A sound and complete axiomatization for the logic IF is provided in Fig. 1. We define the substitution [$y/x$] of the variable $y$ for the free variable $x$ on a formula $A$ in the standard way as the replacement of all free occurrences of $x$ in $A$ with $y$. The substitution [$a/x$] of the parameter $a$ for the free variable $x$ is defined similarly. Last, the side condition $y$ is free for $x$ (see Fig. 1) is taken to mean that $y$ does not become bound by a quantifier if substituted for $x$.
+
Fig. 1. Axiomatization for the logic IF [9]. The logic IF is the smallest set of formulae from $\mathcal{L}$ closed under substitutions of the axioms and applications of the inference rules mp and gen. We write $\vdash_{\mathrm{IF}} A$ to denote that $A$ is an element, or theorem, of IF.
+
**Theorem 1 (Adequacy of IF).** For any $A \in \mathcal{L}$, $\Vdash A$ iff $\vdash_{\mathrm{IF}} A$.
+
+*Proof.* The forward direction follows from [9, Prop. 7.2.9] and [9, Prop. 7.3.6], and the backwards direction follows from [9, Lem. 3.2.31]. □
+
+### 3 Soundness and Completeness of LNIF
+
+Let us define linear nested sequents (which we will refer to as sequents) to be syntactic objects $\mathcal{G}$ given by the BNF grammar shown below:
+
$$ \mathcal{G} ::= \Gamma \vdash \Gamma \mid \mathcal{G} \,//\, \mathcal{G} \quad \text{where } \Gamma ::= A \mid \Gamma, \Gamma \text{ with } A \in \mathcal{L}. $$
+---PAGE_BREAK---
+
+Each sequent $\mathcal{G}$ is of the form $\Gamma_1 \vdash \Delta_1 // \dots // \Gamma_n \vdash \Delta_n$ with $n \in \mathbb{N}$. We refer to each $\Gamma_i \vdash \Delta_i$ (for $1 \le i \le n$) as a *component* of $\mathcal{G}$ and use $||\mathcal{G}||$ to denote the number of components in $\mathcal{G}$.
+
+We often use $\mathcal{G}$, $\mathcal{H}$, $\mathcal{F}$, and $\mathcal{K}$ to denote sequents, and will use $\Gamma$ and $\Delta$ to denote antecedents and consequents of components. Last, we take the comma operator to be commutative and associative; for example, we identify the sequent $p(x) \vdash q(x), r(y), p(x)$ with $p(x) \vdash r(y), p(x), q(x)$. This interpretation lets us view an antecedent $\Gamma$ or consequent $\Delta$ as a multiset of formulae.
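The multiset reading of antecedents and consequents can be made concrete. A minimal Python sketch, using our own encoding of a component as a pair of `collections.Counter` multisets and a sequent as a list of components:

```python
# Sketch: a component Gamma |- Delta as a pair of multisets, a sequent
# as a list of components.  Counter equality makes "," commutative and
# associative, so reordered consequents are identified.
from collections import Counter

def component(gamma, delta):
    return (Counter(gamma), Counter(delta))

# The two sequents from the example above are the same object:
G = [component(["p(x)"], ["q(x)", "r(y)", "p(x)"])]
H = [component(["p(x)"], ["r(y)", "p(x)", "q(x)"])]
assert G == H
assert len(G) == 1   # ||G||, the number of components
```

Counters also keep duplicate formulae distinct (here `p(x)` occurs once on the left and once on the right), which matters for contraction rules later on.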
+
+To ease the proof of cut-elimination (Thm. 4), we follow [8] and syntactically distinguish between *bound variables* $\{x, y, z, \dots\}$ and *parameters* $\{a, b, c, \dots\}$, which will take the place of free variables occurring in formulae. Thus, our sequents make use of formulae from $\mathcal{L}$ where each free variable has been replaced by a unique parameter. For example, we would use the sequent $p(a) \vdash \forall xq(x, b) // \bot \vdash r(a)$ instead of the sequent $p(x) \vdash \forall xq(x, y) // \bot \vdash r(x)$ in a derivation (where the parameter $a$ has been substituted for the free variable $x$ and $b$ has been substituted for $y$). We also use the notation $A(a_0, \dots, a_n)$ to denote that the parameters $a_0, \dots, a_n$ occur in the formula $A$, and write $A(\vec{a})$ as shorthand for $A(a_0, \dots, a_n)$. This notation extends straightforwardly to sequents as well.
+
+The linear nested calculus LNIF for IF is given in Fig. 2. (NB. The linear nested calculus LNG introduced in [17] is the propositional fragment of LNIF, i.e. LNG is the calculus LNIF without the quantifier rules and where propositional variables are used in place of atomic formulae.) The ($\supset_{r2}$) and ($\forall_{r2}$) rules in LNIF are particularly noteworthy; as will be seen in the next section, the rules play a vital role in ensuring the invertibility and admissibility of certain rules, ultimately permitting the elimination of (cut) (see Thm. 4).
+
+To obtain soundness, we interpret each sequent as a formula in $\mathcal{L}$ and utilize the notion of validity in Def. 3. The following definition specifies how each sequent is interpreted.
+
+**Definition 4 (Interpretation $\iota$).** The interpretation of a sequent is defined inductively as follows:
+
+$$
+\iota(\Gamma \vdash \Delta) := \bigwedge \Gamma \supset \bigvee \Delta
+$$
+
+$$
\iota(\Gamma \vdash \Delta \,//\, \mathcal{G}) := \bigwedge \Gamma \supset (\bigvee \Delta \vee \iota(\mathcal{G}))
+$$
+
+We interpret a sequent $\mathcal{G}$ as a formula in $\mathcal{L}$ by taking the universal closure $\bar{\forall}\iota(\mathcal{G})$ of $\iota(\mathcal{G})$ and we say that $\mathcal{G}$ is valid if and only if $\models \bar{\forall}\iota(\mathcal{G})$.
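The interpretation $\iota$ is directly recursive on the list of components. A small Python sketch over a toy string-based encoding (ours, with `"T"` and `"F"` standing for the empty conjunction and disjunction):

```python
# A sketch of the interpretation iota from Def. 4: a component is a
# pair (gamma, delta) of lists of formula strings, a sequent is a
# nonempty list of components.
def iota(G):
    (gamma, delta), rest = G[0], G[1:]
    ante = " & ".join(gamma) if gamma else "T"   # empty conjunction
    cons = " | ".join(delta) if delta else "F"   # empty disjunction
    if not rest:
        return f"({ante}) -> ({cons})"           # base clause
    return f"({ante}) -> (({cons}) | {iota(rest)})"  # recursive clause
```

For example, `iota([(["A"], ["B"]), (["C"], ["D"])])` yields `"(A) -> ((B) | (C) -> (D))"`, matching the nested-implication shape of the definition.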
+
+**Theorem 2 (Soundness of LNIF).** For any linear nested sequent $\mathcal{G}$, if $\mathcal{G}$ is provable in LNIF, then $\models \bar{\forall}\iota(\mathcal{G})$.
+
+Proof. We prove the result by induction on the height of the derivation of
+
+$$
+\mathcal{G} = \Gamma_1 \vdash \Delta_1 // \dots // \Gamma_n \vdash \Delta_n // \Gamma_{n+1} \vdash \Delta_{n+1} // \dots // \Gamma_m \vdash \Delta_m
+$$
+
+and only present the more interesting $\forall$ quantifier cases in the inductive step. All remaining cases can be found in App. A. Each inference rule considered is of one of the following two forms.
+---PAGE_BREAK---
+
$$ \mathcal{G} // \Gamma, p(\vec{a}) \vdash p(\vec{a}), \Delta // \mathcal{H} \quad (id_1) \qquad \mathcal{G} // \Gamma_1, p(\vec{a}) \vdash \Delta_1 // \mathcal{H} // \Gamma_2 \vdash p(\vec{a}), \Delta_2 // \mathcal{F} \quad (id_2) $$

$$ \mathcal{G} // \Gamma, \perp \vdash \Delta // \mathcal{H} \quad (\perp_l) \qquad \frac{\mathcal{G} // \Gamma, A, B \vdash \Delta // \mathcal{H}}{\mathcal{G} // \Gamma, A \wedge B \vdash \Delta // \mathcal{H}} \ (\wedge_l) $$

[The remaining rules of Fig. 2, namely $(\wedge_r)$, $(\vee_l)$, $(\vee_r)$, $(\supset_l)$, $(\supset_{r1})$, $(\supset_{r2})$, $(\forall_{r1})$, $(\forall_{r2})$, $(\forall_l)$, $(\exists_l)$, $(\exists_r)$, and (lift), are not legible in this copy.]
+
Fig. 2. The Calculus LNIF. The side condition † stipulates that the parameter *a* is an *eigenvariable*, i.e. it does not occur in the conclusion. Occasionally, we write $\vdash_{\mathrm{LNIF}} \mathcal{G}$ to mean that the sequent $\mathcal{G}$ is derivable in LNIF.
+
$$ \frac{\mathcal{G}'}{\mathcal{G}}\ (r_1) \qquad \frac{\mathcal{G}_1 \quad \mathcal{G}_2}{\mathcal{G}}\ (r_2) $$
+
We argue by contraposition and prove that if $\mathcal{G}$ is invalid, then at least one premise is invalid. Assuming $\mathcal{G}$ is invalid (i.e. $\not\models \bar{\forall}\iota(\mathcal{G})$) implies the existence of a model $M = (W, R, D, V)$ with a world $w_0 \in W$ and parameters $\vec{a} \in D_{w_0}$ such that $M, w_0 \not\Vdash \iota(\mathcal{G})(\vec{a})$, where $\vec{a}$ represents all parameters in $\iota(\mathcal{G})$. Hence, there is a sequence of worlds $w_1, \dots, w_m \in W$ such that $Rw_jw_{j+1}$ (for $0 \le j \le m-1$), $M, w_i \Vdash \bigwedge \Gamma_i$, and $M, w_i \not\Vdash \bigvee \Delta_i$ for each $1 \le i \le m$. We assume all parameters in $\Gamma_i$ and $\Delta_i$ are interpreted as elements of the associated domain $D_{w_i}$ (for $1 \le i \le m$).
+
**($\forall_{r1}$)-rule:** By our assumption, $M, w_m \Vdash \bigwedge \Gamma_m$ and $M, w_m \not\Vdash \bigvee \Delta_m, \forall x A$. The latter implies that $M, w_m \not\Vdash \forall x A$, meaning there exists a world $w_{m+1} \in W$ such that $Rw_m w_{m+1}$ and $M, w_{m+1} \not\Vdash A[b/x]$ for some $b \in D_{w_{m+1}}$. If we interpret the eigenvariable of the premise as $b$, then the premise is shown invalid.
+
**($\forall_{r2}$)-rule:** It follows from our assumption that $M, w_n \Vdash \bigwedge \Gamma_n$ and $M, w_n \not\Vdash \bigvee \Delta_n, \forall x A$. The fact that $M, w_n \not\Vdash \forall x A$ implies that there exists a world $w \in W$ such that $Rw_n w$ and $M, w \not\Vdash A[b/x]$ for some $b \in D_w$. Since our frames are connected, there are two cases to consider: (i) $Rww_{n+1}$, or (ii) $Rw_{n+1}w$. Case (i) falsifies the left premise, and case (ii) falsifies the right premise.
+---PAGE_BREAK---
+
+($\forall_l$)-rule: We know that $M, w_n \Vdash \bigwedge \Gamma_n \wedge \forall x A$ and $M, w_n \not\vDash \bigvee \Delta_n$. Hence, for any world $w \in W$, if $Rw_n w$, then $M, w \Vdash A[b/x]$ for all $b \in D_w$. Since $Rw_n w_n$, it follows that $M, w_n \Vdash A[b/x]$ for any $b \in D_{w_n}$. If $a$ occurs in the conclusion $\mathcal{G}$, then by the constant domain condition (CD), we know that $a \in D_{w_n}$, so we may falsify the premise of the rule. If $a$ does not occur in $\mathcal{G}$, then it is an eigenvariable, and assigning $a$ to any element of $D_{w_n}$ will falsify the premise. $\square$
+
**Theorem 3 (Completeness of LNIF).** If $\vdash_{\mathrm{IF}} A$, then $A$ is provable in LNIF.
+
+*Proof.* It is not difficult to show that LNIF can derive each axiom of IF and can simulate each inference rule. We refer the reader to App. A for details. $\square$
+
### 4 Proof-Theoretic Properties of LNIF
+
+In this section, we present the fundamental proof-theoretic properties of LNIF, thus extending the results in [17] from the propositional setting to the first-order setting. (NB. We often leverage results from [17] to simplify our proofs.) With the exception of Lem. 14, Lem. 16, and Thm. 4, all results are proved by induction on the *height* of a given derivation $\Pi$, i.e. on the length (number of sequents) of the longest branch from the end sequent to an initial sequent in $\Pi$. Lemmata whose proofs are omitted can be found in App. A.
+
+$$ \frac{\mathcal{G} \ // \Gamma_1 \vdash \Delta_1 // \mathcal{H}}{\mathcal{G} // \Gamma_1, \Gamma_2 \vdash \Delta_1, \Delta_2 // \mathcal{H}} \quad (\text{iw}) $$
+
+$$ \frac{\mathcal{G} // \Gamma, A, A \vdash \Delta // \mathcal{H}}{\mathcal{G} // \Gamma, A \vdash \Delta // \mathcal{H}} \quad (\text{icl}) $$
+
+$$ \frac{\mathcal{G} // \mathcal{H}}{\mathcal{G} // \vdash // \mathcal{H}} \quad (\text{ew}) $$
+
$$ \frac{\mathcal{G}}{\mathcal{G}[b/a]} \quad (\text{sub}) \qquad \frac{\mathcal{G} // \Gamma_1 \vdash \Delta_1, A // \Gamma_2 \vdash \Delta_2 // \mathcal{H}}{\mathcal{G} // \Gamma_1 \vdash \Delta_1 // \Gamma_2 \vdash \Delta_2, A // \mathcal{H}} \quad (\text{lwr}) $$

$$ \frac{\mathcal{G} // \Gamma \vdash A, A, \Delta // \mathcal{H}}{\mathcal{G} // \Gamma \vdash A, \Delta // \mathcal{H}} \quad (\text{icr}) \qquad \frac{\mathcal{G} // \Gamma_1 \vdash \Delta_1 // \Gamma_2 \vdash \Delta_2 // \mathcal{H}}{\mathcal{G} // \Gamma_1, \Gamma_2 \vdash \Delta_1, \Delta_2 // \mathcal{H}} \quad (\text{mrg}) $$
+
+**Fig. 3.** Admissible rules in LNIF.
+
+We say that a rule is *admissible* in LNIF iff derivability of the premise(s) implies derivability of the conclusion in LNIF. Additionally, a rule is *height preserving (hp-)admissible* in LNIF iff if the premise of the rule has a derivation of a certain height in LNIF, then the conclusion of the rule has a derivation of the same height or less in LNIF. Last, a rule is *invertible (hp-invertible)* iff derivability of the conclusion implies derivability of the premise(s) (with a derivation of the same height or less). Admissible rules of LNIF are given in Fig. 3.
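Admissibility claims have a simple operational reading on a sequent encoding: a rule is a rewrite on sequents, and (hp-)admissibility asserts that the rewrite preserves derivability (without increasing height). As a sketch, the (iw) transformation over our list-of-pairs encoding:

```python
# Sketch: the internal weakening rule (iw) read as a transformation on
# sequents encoded as lists of (antecedent, consequent) list pairs.
# hp-admissibility (Lem. 5) says the output is derivable whenever the
# input is, with no increase in derivation height.
def internal_weaken(G, i, gamma2, delta2):
    """Weaken Gamma_2 / Delta_2 into the i-th component of G."""
    H = list(G)                     # shallow copy; G is left untouched
    gamma, delta = H[i]
    H[i] = (gamma + gamma2, delta + delta2)
    return H
```

The function is purely structural: it says nothing about derivations, which is exactly why the admissibility lemmata below are needed to justify using it inside proofs.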
+
+**Lemma 2.** For any $A, \Gamma, \Delta, G$, and $H$, $\vdash_{\mathrm{LNIF}} G // \Gamma, A \vdash A, \Delta // H$.
+
+**Lemma 3.** The $(\perp_r)$ rule is hp-admissible in LNIF.
+---PAGE_BREAK---
+
+*Proof.* By induction on the height of the given derivation. In the base case, applying ($\perp_r$) to (id₁), (id₂), or ($\perp_l$) gives an initial sequent, and for each case of the inductive step we apply IH followed by the corresponding rule. □
+
+**Lemma 4.** The (sub) rule is hp-admissible in LNIF.
+
+**Lemma 5.** The (iw) rule is hp-admissible in LNIF.
+
+**Lemma 6.** The (ew) rule is admissible in LNIF.
+
+*Proof.* By [17, Lem. 5.6] we know that (ew) is admissible in LNG, thus leaving us to prove the $(\forall_{r1})$, $(\exists_l)$, $(\forall_{r2})$, $(\forall_l)$, and $(\exists_r)$ cases. The $(\exists_l)$, $(\forall_l)$, and $(\exists_r)$ cases are easily shown by applying IH and then the rule. We therefore prove the $(\forall_{r1})$ and $(\forall_{r2})$ cases, beginning with the former, which is split into the two subcases, shown below:
+
$$ \frac{\dfrac{\mathcal{G} // \Gamma \vdash \Delta // \vdash A[a/x]}{\mathcal{G}' // \Gamma \vdash \Delta // \vdash A[a/x]}\ \text{IH}}{\mathcal{G}' // \Gamma \vdash \Delta, \forall xA}\ (\forall_{r1})
\qquad
\frac{\dfrac{\mathcal{G} // \Gamma \vdash \Delta // \vdash A[a/x]}{\mathcal{G} // \Gamma \vdash \Delta // \vdash A[a/x] // \vdash}\ \text{IH} \quad \dfrac{\dfrac{\mathcal{G} // \Gamma \vdash \Delta // \vdash A[a/x]}{\mathcal{G} // \Gamma \vdash \Delta // \vdash // \vdash A[a/x]}\ \text{IH}}{\mathcal{G} // \Gamma \vdash \Delta // \vdash \forall xA}\ (\forall_{r1})}{\mathcal{G} // \Gamma \vdash \Delta, \forall xA // \vdash}\ (\forall_{r2}) $$
+
In the top left case, where we weaken in a component prior to the component $\Gamma \vdash \Delta, \forall xA$, we may freely permute the two rule instances. In the top right case, where the new component is weakened in at the end, both premises of a ($\forall_{r2}$) inference are derived from the original premise.
+
+Suppose now that we have an $(\forall_{r2})$ inference (as in Fig. 2) followed by an (ew) inference. The only nontrivial case (which is resolved as shown below) occurs when a component is weakened in directly after the component $\Gamma_1 \vdash \Delta_1, \forall xA$. All other cases follow by an application of IH followed by an application of the $(\forall_{r2})$ rule.
+
$$ \frac{\dfrac{\mathcal{G} // \Gamma_1 \vdash \Delta_1 // \vdash A[a/x] // \Gamma_2 \vdash \Delta_2 // \mathcal{H}}{\mathcal{G} // \Gamma_1 \vdash \Delta_1 // \vdash A[a/x] // \vdash // \Gamma_2 \vdash \Delta_2 // \mathcal{H}}\ \text{IH} \qquad \Pi}{\mathcal{G} // \Gamma_1 \vdash \Delta_1, \forall xA // \vdash // \Gamma_2 \vdash \Delta_2 // \mathcal{H}}\ (\forall_{r2}) $$

$$ \Pi = \left\{ \frac{\dfrac{\mathcal{G} // \Gamma_1 \vdash \Delta_1 // \vdash A[a/x] // \Gamma_2 \vdash \Delta_2 // \mathcal{H}}{\mathcal{G} // \Gamma_1 \vdash \Delta_1 // \vdash // \vdash A[a/x] // \Gamma_2 \vdash \Delta_2 // \mathcal{H}}\ \text{IH} \qquad \dfrac{\mathcal{G} // \Gamma_1 \vdash \Delta_1 // \Gamma_2 \vdash \Delta_2, \forall xA // \mathcal{H}}{\mathcal{G} // \Gamma_1 \vdash \Delta_1 // \vdash // \Gamma_2 \vdash \Delta_2, \forall xA // \mathcal{H}}\ \text{IH}}{\mathcal{G} // \Gamma_1 \vdash \Delta_1 // \vdash \forall xA // \Gamma_2 \vdash \Delta_2 // \mathcal{H}}\ (\forall_{r2}) \right. $$
+
+**Lemma 7.** The rule (lwr) is hp-admissible in LNIF.
+
*Proof.* By [17, Lem. 5.7] we know that (lwr) is admissible in LNG, and so we may prove the claim by extending it to include the quantifier rules. We have two cases to consider: either (i) the formula moved by (lwr) is a side formula in the quantifier inference, or (ii) it is principal. In case (i), the $(\forall_{r1})$, $(\forall_l)$, $(\exists_l)$, and $(\exists_r)$ cases can be resolved by applying IH followed by an application of the rule. Concerning the
+---PAGE_BREAK---
+
+($\forall_{r2}$) rule, all cases follow by applying IH and then the rule, with the exception
+of the following:
+
+$$
+\frac{\mathcal{G} \parallel \Gamma_1 \vdash \Delta_1, A \parallel \vdash B[a/x] \parallel \Gamma_2 \vdash \Delta_2 \parallel \mathcal{H} \quad \mathcal{G} \parallel \Gamma_1 \vdash \Delta_1, A \parallel \Gamma_2 \vdash \Delta_2, \forall x B \parallel \mathcal{H}}{\mathcal{G} \parallel \Gamma_1 \vdash \Delta_1, A, \forall x B \parallel \Gamma_2 \vdash \Delta_2 \parallel \mathcal{H} \quad (\text{lwr})} (\forall_{r2})
+$$
+
+In the above case, we apply IH twice to the top left premise and apply IH once
+to the top right premise. A single application of (∀r2) gives the desired result.
+
+Let us now consider case (ii). Observe that the principal formulae in ($\forall_{r1}$),
+($\forall_l$), and ($\exists_l$) cannot be principal in the use of (lwr), so we need only consider
+the ($\exists_r$) and ($\forall_{r2}$) cases. The ($\exists_r$) case is shown below top-left and the case is
+resolved as shown below top-right. In the ($\forall_{r2}$) case (shown below bottom), we
+take the derivation of the top right premise as the proof of the desired conclusion.
+
+$$
+\frac{
+ \frac{\mathcal{G} \parallel \Gamma_1 \vdash \Delta_1, A[a/x], \exists x A \parallel \Gamma_2 \vdash \Delta_2 \parallel \mathcal{H}}{\mathcal{G} \parallel \Gamma_1 \vdash \Delta_1, \exists x A \parallel \Gamma_2 \vdash \Delta_2 \parallel \mathcal{H}} (\exists_r)
+ \qquad
+ \frac{\mathcal{G} \parallel \Gamma_1 \vdash \Delta_1, \Gamma_2 \vdash \Delta_2, A[a/x], \exists x A \parallel \mathcal{H}}{\mathcal{G} \parallel \Gamma_1 \vdash \Delta_1, \Gamma_2 \vdash \Delta_2, \exists x A \parallel \mathcal{H}} (\exists_r)
+}{\mathcal{G} \parallel \Gamma_1 \vdash \Delta_1, A[a/x] \parallel \Gamma_2 \vdash \Delta_2 \parallel \mathcal{H}}
+\text{IH } 2
+$$
+
+$$
+\frac{\mathcal{G} // \Gamma_1 \vdash \Delta_1 // A[a/x] // \Gamma_2 \vdash \Delta_2 // H}{\mathcal{G} // \Gamma_1 \vdash \Delta_1 // \Gamma_2 \vdash \Delta_2, \exists x A // H} (\text{lwr}) \\
+\frac{\mathcal{G} // \Gamma_1 \vdash \Delta_1 // A[a/x] // \Gamma_2 \vdash \Delta_2 // H}{\mathcal{G} // \Gamma_1 \vdash \Delta_1 // \Gamma_2 \vdash \Delta_2, \forall x A // H} (\text{lwr})
+$$
+
+$\square$
+
+Our version of the (lift) rule necessitates a stronger form of invertibility,
+called *m-invertibility*, for the ($\wedge_l$), ($\vee_l$), ($\supset_l$), ($\forall_l$), and ($\exists_l$) rules (cf. [17]). We
use $A^{k_i}$ to represent $k_i$ copies of a formula $A$, with $k_i \in \mathbb{N}$.
+
**Lemma 8.** If $\sum_{i=1}^n k_i \ge 1$, then

(i) (1) implies (2); (ii) (3) implies (4) and (5); (iii) (6) implies (7) and (8); (iv) (9) implies (10); (v) (11) implies (12).

$$
\begin{align*}
&\vdash_{\mathrm{LNIF}} \Gamma_1, (A \land B)^{k_1} \vdash \Delta_1 // \dots // \Gamma_n, (A \land B)^{k_n} \vdash \Delta_n && (1) \\
&\vdash_{\mathrm{LNIF}} \Gamma_1, A^{k_1}, B^{k_1} \vdash \Delta_1 // \dots // \Gamma_n, A^{k_n}, B^{k_n} \vdash \Delta_n && (2) \\
&\vdash_{\mathrm{LNIF}} \Gamma_1, (A \lor B)^{k_1} \vdash \Delta_1 // \dots // \Gamma_n, (A \lor B)^{k_n} \vdash \Delta_n && (3) \\
&\vdash_{\mathrm{LNIF}} \Gamma_1, A^{k_1} \vdash \Delta_1 // \dots // \Gamma_n, A^{k_n} \vdash \Delta_n && (4) \\
&\vdash_{\mathrm{LNIF}} \Gamma_1, B^{k_1} \vdash \Delta_1 // \dots // \Gamma_n, B^{k_n} \vdash \Delta_n && (5) \\
&\vdash_{\mathrm{LNIF}} \Gamma_1, (A \supset B)^{k_1} \vdash \Delta_1 // \dots // \Gamma_n, (A \supset B)^{k_n} \vdash \Delta_n && (6) \\
&\vdash_{\mathrm{LNIF}} \Gamma_1, B^{k_1} \vdash \Delta_1 // \dots // \Gamma_n, B^{k_n} \vdash \Delta_n && (7) \\
&\vdash_{\mathrm{LNIF}} \Gamma_1, (A \supset B)^{k_1} \vdash \Delta_1, A^{k_1} // \dots // \Gamma_n, (A \supset B)^{k_n} \vdash \Delta_n, A^{k_n} && (8) \\
&\vdash_{\mathrm{LNIF}} \Gamma_1, (\forall x A)^{k_1} \vdash \Delta_1 // \dots // \Gamma_n, (\forall x A)^{k_n} \vdash \Delta_n && (9) \\
&\vdash_{\mathrm{LNIF}} \Gamma_1, A[a/x]^{k_1}, (\forall x A)^{k_1} \vdash \Delta_1 // \dots // \Gamma_n, A[a/x]^{k_n}, (\forall x A)^{k_n} \vdash \Delta_n && (10) \\
&\vdash_{\mathrm{LNIF}} \Gamma_1, (\exists x A)^{k_1} \vdash \Delta_1 // \dots // \Gamma_n, (\exists x A)^{k_n} \vdash \Delta_n && (11) \\
&\vdash_{\mathrm{LNIF}} \Gamma_1, A[a/x]^{k_1} \vdash \Delta_1 // \dots // \Gamma_n, A[a/x]^{k_n} \vdash \Delta_n && (12)
\end{align*}
$$
+---PAGE_BREAK---
+
+**Lemma 9.** The ($\wedge_r$), ($\vee_r$), and ($\exists_r$) rules are hp-invertible in LNIF.
+
+*Proof.* By [17, Lem. 5.8] we know that the claim holds for the ($\wedge_r$) and ($\vee_r$) rules relative to LNG. The proof may be extended to LNIF by considering the quantifier rules in the inductive step; however, it is quick to verify the claim for the quantifier rules by applying IH and then the corresponding rule. Proving invertibility of the ($\exists_r$) rule is straightforward, and follows from the hp-admissibility of (*iw*) (Lem. 5). $\square$
+
+**Lemma 10.** The ($\supset_{r2}$) rule is invertible in LNIF.
+
+*Proof.* We extend the proof of [17, Lem. 5.10] to include the quantifier rules, and prove the result by induction on the height of the given derivation of $G // \Gamma_1 \vdash \Delta_1$, $A \supset B // \Gamma_2 \vdash \Delta_2 // H$. Derivability of the right premise $G // \Gamma_1 \vdash \Delta_1 // \Gamma_2 \vdash \Delta_2$, $A \supset B // H$ follows from Lem. 7, so we focus on showing that the left premise $G // \Gamma_1 \vdash \Delta_1 // A \vdash B // \Gamma_2 \vdash \Delta_2 // H$ is derivable. For the ($\forall_{r1}$), ($\forall_l$), ($\exists_l$), and ($\exists_r$) rules the desired conclusion is obtained by applying IH, followed by an application of the corresponding rule. The nontrivial ($\forall_{r2}$) case is shown below top and is resolved as shown below bottom. In all other ($\forall_{r2}$) cases, we apply IH followed by the ($\forall_{r2}$) rule.
+
+$$ \frac{G // \Gamma_1 \vdash \Delta_1, A \supset B // \vdash C[a/x] // \Gamma_2 \vdash \Delta_2 // H \qquad G // \Gamma_1 \vdash \Delta_1, A \supset B // \Gamma_2 \vdash \Delta_2, \forall x C // H}{G // \Gamma_1 \vdash \Delta_1, \forall x C, A \supset B // \Gamma_2 \vdash \Delta_2 // H} (\forall_{r2}) $$
+
+$$ \frac{\Pi_1 \qquad \Pi_2}{G // \Gamma_1 \vdash \Delta_1, \forall x C // A \vdash B // \Gamma_2 \vdash \Delta_2 // H} (\forall_{r2}) \qquad \Pi_1 = \left\{ \frac{\dfrac{G // \Gamma_1 \vdash \Delta_1, A \supset B // \vdash C[a/x] // \Gamma_2 \vdash \Delta_2 // H}{G // \Gamma_1 \vdash \Delta_1 // \vdash C[a/x], A \supset B // \Gamma_2 \vdash \Delta_2 // H}\ \text{Lem. 7}}{G // \Gamma_1 \vdash \Delta_1 // \vdash C[a/x] // A \vdash B // \Gamma_2 \vdash \Delta_2 // H}\ \text{IH} \right. $$
+
+$$ \Pi_2 = \left\{ \frac{\dfrac{G // \Gamma_1 \vdash \Delta_1, A \supset B // \vdash C[a/x] // \Gamma_2 \vdash \Delta_2 // H}{G // \Gamma_1 \vdash \Delta_1 // A \vdash B // \vdash C[a/x] // \Gamma_2 \vdash \Delta_2 // H}\ \text{IH} \qquad \dfrac{G // \Gamma_1 \vdash \Delta_1, A \supset B // \Gamma_2 \vdash \Delta_2, \forall x C // H}{G // \Gamma_1 \vdash \Delta_1 // A \vdash B // \Gamma_2 \vdash \Delta_2, \forall x C // H}\ \text{IH}}{G // \Gamma_1 \vdash \Delta_1 // A \vdash B, \forall x C // \Gamma_2 \vdash \Delta_2 // H} (\forall_{r2}) \right. $$
+
+$\square$
+
+**Lemma 11.** The ($\forall_{r2}$) rule is invertible in LNIF.
+
+*Proof.* Let the sequent $G // \Gamma_1 \vdash \Delta_1, \forall x A // \Gamma_2 \vdash \Delta_2 // H$ be derivable in LNIF. Derivability of the right premise $G // \Gamma_1 \vdash \Delta_1 // \Gamma_2 \vdash \Delta_2, \forall x A // H$ follows from the hp-admissibility of (*lwr*) (Lem. 7). We prove that the left premise $G // \Gamma_1 \vdash \Delta_1 // A[a/x] // \Gamma_2 \vdash \Delta_2 // H$ is derivable by induction on the height of the given derivation.
+
+*Base case.* Regardless of whether $G // \Gamma_1 \vdash \Delta_1, \forall x A // \Gamma_2 \vdash \Delta_2 // H$ is derived by an application of (*id*₁), (*id*₂), or ($\perp_l$), the sequent $G // \Gamma_1 \vdash \Delta_1 // A[a/x] // \Gamma_2 \vdash \Delta_2 // H$ is an initial sequent as well.
+
+*Inductive step.* For all rules, with the exception of (*lift*), ($\supset_{r2}$), ($\forall_{r1}$), ($\exists_l$), and ($\forall_{r2}$), we apply IH to the premise(s) followed by the corresponding rule. We consider the aforementioned nontrivial cases below.
+
+If the (*lift*) rule is applied as shown below left, then the desired conclusion may be derived as shown below right. In all other cases, we apply IH and then (*lift*) to achieve the desired result.
+---PAGE_BREAK---
+
+$$
+\frac{\mathcal{G} \parallel \Gamma_1, B \vdash \Delta_1, \forall x A \parallel \Gamma_2, B \vdash \Delta_2 \parallel \mathcal{H}}{\mathcal{G} \parallel \Gamma_1, B \vdash \Delta_1, \forall x A \parallel \Gamma_2 \vdash \Delta_2 \parallel \mathcal{H}} (\text{lift}) = \frac{\dfrac{\dfrac{\dfrac{\mathcal{G} \parallel \Gamma_1, B \vdash \Delta_1, \forall x A \parallel \Gamma_2, B \vdash \Delta_2 \parallel \mathcal{H}}{\mathcal{G} \parallel \Gamma_1, B \vdash \Delta_1 \parallel {\vdash} A[a/x] \parallel \Gamma_2, B \vdash \Delta_2 \parallel \mathcal{H}}\ \text{IH}}{\mathcal{G} \parallel \Gamma_1, B \vdash \Delta_1 \parallel B \vdash A[a/x] \parallel \Gamma_2, B \vdash \Delta_2 \parallel \mathcal{H}}\ \text{Lem. 5}}{\mathcal{G} \parallel \Gamma_1, B \vdash \Delta_1 \parallel B \vdash A[a/x] \parallel \Gamma_2 \vdash \Delta_2 \parallel \mathcal{H}} (\text{lift})}{\mathcal{G} \parallel \Gamma_1, B \vdash \Delta_1 \parallel {\vdash} A[a/x] \parallel \Gamma_2 \vdash \Delta_2 \parallel \mathcal{H}} (\text{lift})
+$$
+
+If the ($\supset r_2$) rule is applied as shown below top, then the desired conclusion may be derived as shown below bottom. In all other cases, we apply IH and then the ($\supset r_2$) rule to obtain the desired result.
+
+$$
+\frac{G // \Gamma_1 \vdash \Delta_1, \forall x A // B \vdash C // \Gamma_2 \vdash \Delta_2 // H \qquad G // \Gamma_1 \vdash \Delta_1, \forall x A // \Gamma_2 \vdash \Delta_2, B \supset C // H}{G // \Gamma_1 \vdash \Delta_1, \forall x A, B \supset C // \Gamma_2 \vdash \Delta_2 // H} (\supset_{r2})
+$$
+
+$$
+\frac{\Pi_1 \qquad \Pi_2}{G // \Gamma_1 \vdash \Delta_1, B \supset C // \vdash A[a/x] // \Gamma_2 \vdash \Delta_2 // H} (\supset_{r2})
+$$
+
+$$
+\Pi_1 = \left\{ \frac{\dfrac{G // \Gamma_1 \vdash \Delta_1, \forall x A // B \vdash C // \Gamma_2 \vdash \Delta_2 // H}{G // \Gamma_1 \vdash \Delta_1 // B \vdash C, \forall x A // \Gamma_2 \vdash \Delta_2 // H}\ \text{Lem. 7}}{G // \Gamma_1 \vdash \Delta_1 // B \vdash C // \vdash A[a/x] // \Gamma_2 \vdash \Delta_2 // H}\ \text{IH} \right.
+$$
+
+$$
+\Pi_2 = \left\{ \frac{\dfrac{G // \Gamma_1 \vdash \Delta_1, \forall x A // B \vdash C // \Gamma_2 \vdash \Delta_2 // H}{G // \Gamma_1 \vdash \Delta_1 // \vdash A[a/x] // B \vdash C // \Gamma_2 \vdash \Delta_2 // H}\ \text{IH} \qquad \dfrac{G // \Gamma_1 \vdash \Delta_1, \forall x A // \Gamma_2 \vdash \Delta_2, B \supset C // H}{G // \Gamma_1 \vdash \Delta_1 // \vdash A[a/x] // \Gamma_2 \vdash \Delta_2, B \supset C // H}\ \text{IH}}{G // \Gamma_1 \vdash \Delta_1 // \vdash A[a/x], B \supset C // \Gamma_2 \vdash \Delta_2 // H} (\supset_{r2}) \right.
+$$
+
+In the (∀r₁) and (∃l) cases, we must ensure that the eigenvariable of the inference is not identical to the parameter a in A[a/x] introduced by IH. However, this can always be ensured by Lem. 4. Therefore, we move on to the last nontrivial case, which concerns the (∀r₂) rule. The only nontrivial case occurs as shown below top and is resolved as shown below bottom. In all other cases, we apply IH followed by the (∀r₂) rule (invoking Lem. 4 if necessary).
+
+$$
+\frac{G // \Gamma_1 \vdash \Delta_1, \forall x A // \vdash B[b/y] // \Gamma_2 \vdash \Delta_2 // H \qquad G // \Gamma_1 \vdash \Delta_1, \forall x A // \Gamma_2 \vdash \Delta_2, \forall y B // H}{G // \Gamma_1 \vdash \Delta_1, \forall x A, \forall y B // \Gamma_2 \vdash \Delta_2 // H} (\forall_{r2})
+$$
+
+$$
+\frac{\Pi_1 \qquad \Pi_2}{G // \Gamma_1 \vdash \Delta_1, \forall y B // \vdash A[a/x] // \Gamma_2 \vdash \Delta_2 // H} (\forall_{r2})
+$$
+
+$$
+\Pi_1 = \left\{ \frac{\dfrac{G // \Gamma_1 \vdash \Delta_1, \forall x A // \vdash B[b/y] // \Gamma_2 \vdash \Delta_2 // H}{G // \Gamma_1 \vdash \Delta_1 // \vdash B[b/y], \forall x A // \Gamma_2 \vdash \Delta_2 // H}\ \text{Lem. 7}}{G // \Gamma_1 \vdash \Delta_1 // \vdash B[b/y] // \vdash A[a/x] // \Gamma_2 \vdash \Delta_2 // H}\ \text{IH} \right.
+$$
+
+$$
+\Pi_2 = \left\{ \frac{\dfrac{G // \Gamma_1 \vdash \Delta_1, \forall x A // \vdash B[b/y] // \Gamma_2 \vdash \Delta_2 // H}{G // \Gamma_1 \vdash \Delta_1 // \vdash A[a/x] // \vdash B[b/y] // \Gamma_2 \vdash \Delta_2 // H}\ \text{IH} \qquad \dfrac{G // \Gamma_1 \vdash \Delta_1, \forall x A // \Gamma_2 \vdash \Delta_2, \forall y B // H}{G // \Gamma_1 \vdash \Delta_1 // \vdash A[a/x] // \Gamma_2 \vdash \Delta_2, \forall y B // H}\ \text{IH}}{G // \Gamma_1 \vdash \Delta_1 // \vdash A[a/x], \forall y B // \Gamma_2 \vdash \Delta_2 // H} (\forall_{r2}) \right.
+$$
+
+$\square$
+
+**Lemma 12.** The ($\supset r_1$) rule is invertible in LNIF.
+
+*Proof.* We extend the proof of [17, Lem. 5.11] to include the quantifier cases. The claim is shown by induction on the height of the given derivation. When the last rule of the derivation is $(\forall_l)$, $(\exists_l)$, $(\exists_r)$, or $(\forall r_2)$ in the inductive step, we apply IH to the premise(s) of the inference followed by an application of the corresponding rule. If the last inference of the derivation is an application of the $(\forall r_1)$ rule (as shown below top), then the case is resolved as shown below bottom.
+
+$$
+\frac{G // \Gamma \vdash \Delta, A \supset B // \vdash C[a/x]}{G // \Gamma \vdash \Delta, A \supset B, \forall x C} (\forall_{r1})
+$$
+
+$$
+\frac{\dfrac{\dfrac{G // \Gamma \vdash \Delta, A \supset B // \vdash C[a/x]}{G // \Gamma \vdash \Delta // \vdash C[a/x], A \supset B}\ \text{Lem. 7}}{G // \Gamma \vdash \Delta // \vdash C[a/x] // A \vdash B}\ \text{IH} \qquad \dfrac{\dfrac{G // \Gamma \vdash \Delta, A \supset B // \vdash C[a/x]}{G // \Gamma \vdash \Delta // A \vdash B // \vdash C[a/x]}\ \text{Lem. 10}}{G // \Gamma \vdash \Delta // A \vdash B, \forall x C} (\forall_{r1})}{G // \Gamma \vdash \Delta, \forall x C // A \vdash B} (\forall_{r2})
+$$
+---PAGE_BREAK---
+
+☐
+
+**Lemma 13.** The $(\forall_{r1})$ rule is invertible in LNIF.
+
+*Proof.* We prove the result by induction on the height of the given derivation of $G // \Gamma \vdash \Delta, \forall xA$ and show that $G // \Gamma \vdash \Delta // \vdash A[a/x]$ is derivable.
+
+*Base case.* If $G // \Gamma \vdash \Delta, \forall xA$ is obtained via $(id_1)$, $(id_2)$, or $(\perp_l)$, then
+$G // \Gamma \vdash \Delta // \vdash A[a/x]$ is an instance of the corresponding rule as well.
+
+*Inductive step.* All cases, with the exception of the $(\supset r_1)$, $(\forall r_1)$, $(\exists l)$, and $(\forall r_2)$ rules, are resolved by applying IH to the premise(s) and then applying the relevant rule. Let us consider each of the additional cases in turn.
+
+The $(\supset r_1)$ case is shown below left and is resolved as shown below right.
+
+$$ \frac{G // \Gamma \vdash \Delta, \forall x A // B \vdash C}{G // \Gamma \vdash \Delta, \forall x A, B \supset C} (\supset_{r1}) \qquad \frac{\dfrac{\dfrac{G // \Gamma \vdash \Delta, \forall x A // B \vdash C}{G // \Gamma \vdash \Delta // B \vdash C, \forall x A}\ \text{Lem. 7}}{G // \Gamma \vdash \Delta // B \vdash C // \vdash A[a/x]}\ \text{IH} \quad \dfrac{\dfrac{G // \Gamma \vdash \Delta, \forall x A // B \vdash C}{G // \Gamma \vdash \Delta // \vdash A[a/x] // B \vdash C}\ \text{Lem. 11}}{G // \Gamma \vdash \Delta // \vdash A[a/x], B \supset C} (\supset_{r1})}{G // \Gamma \vdash \Delta, B \supset C // \vdash A[a/x]} (\supset_{r2}) $$
+
+In the $(\forall r_1)$ case where the relevant formula $\forall xA$ is principal, the premise of the inference is the desired conclusion. If the relevant formula $\forall xA$ is not principal, then the $(\forall r_1)$ inference is of the form shown below left and is resolved as shown below right.
+
+$$ \frac{G // \Gamma \vdash \Delta, \forall x A // \vdash B[b/y]}{G // \Gamma \vdash \Delta, \forall x A, \forall y B} (\forall_{r1}) \qquad \frac{\dfrac{\dfrac{G // \Gamma \vdash \Delta, \forall x A // \vdash B[b/y]}{G // \Gamma \vdash \Delta // \vdash B[b/y], \forall x A}\ \text{Lem. 7}}{G // \Gamma \vdash \Delta // \vdash B[b/y] // \vdash A[a/x]}\ \text{IH} \quad \dfrac{\dfrac{G // \Gamma \vdash \Delta, \forall x A // \vdash B[b/y]}{G // \Gamma \vdash \Delta // \vdash A[a/x] // \vdash B[b/y]}\ \text{Lem. 11}}{G // \Gamma \vdash \Delta // \vdash A[a/x], \forall y B} (\forall_{r1})}{G // \Gamma \vdash \Delta, \forall y B // \vdash A[a/x]} (\forall_{r2}) $$
+
+If the last inference is an instance of the $(\exists l)$ or $(\forall r_2)$ rule, then we must ensure that the eigenvariable of the inference is not identical to the parameter $a$ in $A[a/x]$ introduced by IH, but this can always be ensured due to Lem. 4. ☐
+
+**Lemma 14.** The $(ic_l)$ rule is admissible in LNIF.
+
+*Proof.* We extend the proof of [17, Lem. 5.12] and prove the result by induction on the lexicographic ordering of pairs $(|A|, h)$, where $|A|$ is the complexity of the contraction formula $A$ and $h$ is the height of the derivation. We know the result holds for LNG, and so, we argue the inductive step for the quantifier rules.
+
+With the exception of the $(\exists l)$ case shown below left, all quantifier cases are settled by applying IH followed by an application of the corresponding rule. The only nontrivial case occurs when a contraction is performed on a formula $\exists xA$ with one of the contraction formulae principal in the $(\exists_l)$ inference. The situation is resolved as shown below right.
+
+$$ \frac{\dfrac{G // \Gamma, A[a/x], \exists x A \vdash \Delta // H}{G // \Gamma, \exists x A, \exists x A \vdash \Delta // H} (\exists_l)}{G // \Gamma, \exists x A \vdash \Delta // H} (ic_l) $$
+
+$$ =\frac{\dfrac{\dfrac{G // \Gamma, A[a/x], \exists x A \vdash \Delta // H}{G // \Gamma, A[a/x], A[a/x] \vdash \Delta // H}\ \text{Lem. 8}}{G // \Gamma, A[a/x] \vdash \Delta // H}\ \text{IH}}{G // \Gamma, \exists x A \vdash \Delta // H} (\exists_l) $$
+
+Notice that IH is applicable since we are contracting on a formula of smaller complexity. ☐
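The lexicographic measure driving this induction can be made concrete. The sketch below assumes a simple tuple encoding of formulas and a standard complexity function (connectives and quantifiers each count one); both the encoding and the exact weights are illustrative assumptions, not taken from the calculus. It shows why the IH applies: the instance $A[a/x]$ has strictly smaller complexity than $\exists x A$, so the pair $(|A|, h)$ decreases even when the derivation height grows.

```python
# Hypothetical formula encoding (illustrative only): ("atom", name), ("bot",),
# binary connectives ("and"/"or"/"implies", left, right),
# and quantifiers ("forall"/"exists", var, body).

def complexity(formula):
    """|A|: number of connectives and quantifiers in A (atoms and bot count 0)."""
    tag = formula[0]
    if tag in ("atom", "bot"):
        return 0
    if tag in ("forall", "exists"):
        return 1 + complexity(formula[2])
    return 1 + complexity(formula[1]) + complexity(formula[2])

exists_xA = ("exists", "x", ("and", ("atom", "p"), ("atom", "q")))
instance = ("and", ("atom", "p"), ("atom", "q"))  # A[a/x]: same shape, no quantifier

# Python tuples compare lexicographically, mirroring the ordering on (|A|, h):
assert complexity(instance) < complexity(exists_xA)
assert (complexity(instance), 10_000) < (complexity(exists_xA), 0)
```

Substituting a parameter for a variable leaves the measure unchanged, which is exactly what makes the contraction on $A[a/x]$ available to the IH above.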
+---PAGE_BREAK---
+
+**Lemma 15.** The (mrg) rule is admissible in LNIF.
+
+*Proof.* We extend the proof of [17, Lem. 5.13], which proves that (mrg) is admissible in LNG, and prove the admissibility of (mrg) in LNIF by induction on the height of the given derivation. We need only consider the quantifier rules due to [17, Lem. 5.13]. The ($\forall_{r_1}$), ($\forall_l$), ($\exists_l$), and ($\exists_r$) cases are all resolved by applying IH to the premise of the rule followed by an application of the rule. If (mrg) is applied to the principal components of the ($\forall_{r_2}$) rule as follows:
+
+$$ \frac{\dfrac{G // \Gamma_1 \vdash \Delta_1 // \vdash A[a/x] // \Gamma_2 \vdash \Delta_2 // H \quad G // \Gamma_1 \vdash \Delta_1 // \Gamma_2 \vdash \Delta_2, \forall x A // H}{G // \Gamma_1 \vdash \Delta_1, \forall x A // \Gamma_2 \vdash \Delta_2 // H} (\forall_{r2})}{G // \Gamma_1, \Gamma_2 \vdash \Delta_1, \forall x A, \Delta_2 // H} (\text{mrg}) $$
+
+then the desired conclusion is obtained by applying IH to the top right premise. In all other cases, we apply IH to the premises of ($\forall_{r_2}$) followed by an application of the rule. $\square$
+
+**Lemma 16.** *The $(ic_r)$ rule is admissible in LNIF.*
+
+*Proof.* We extend the proof of [17, Lem. 5.14] to include the quantifier rules and argue the claim by induction on the lexicographic ordering of pairs ($|A|, h$), where $|A|$ is the complexity of the contraction formula $A$ and $h$ is the height of the derivation. The ($\forall_l$) and ($\exists_l$) cases are settled by applying IH to the premise of the inference followed by an application of the rule. For the ($\exists_r$) case, we apply IH to the premise of the rule, followed by the rule. The nontrivial case (occurring when the principal formula is contracted) for the ($\forall_{r_1}$) rule is shown below left, and the desired conclusion is derived as shown below right (where IH is applicable due to the decreased complexity of the contraction formula).
+
+$$ \frac{\dfrac{G // \Gamma \vdash \Delta, \forall x A // \vdash A[a/x]}{G // \Gamma \vdash \Delta, \forall x A, \forall x A} (\forall_{r1})}{G // \Gamma \vdash \Delta, \forall x A} (ic_r) $$
+
+$$ \frac{\dfrac{\dfrac{\dfrac{G // \Gamma \vdash \Delta, \forall x A // \vdash A[a/x]}{G // \Gamma \vdash \Delta // \vdash A[a/x] // \vdash A[a/x]}\ \text{Lem. 11}}{G // \Gamma \vdash \Delta // \vdash A[a/x], A[a/x]}\ \text{Lem. 15}}{G // \Gamma \vdash \Delta // \vdash A[a/x]}\ \text{IH}}{G // \Gamma \vdash \Delta, \forall x A} (\forall_{r1}) $$
+
+When the contracted formulae are both non-principal in a ($\forall_{r1}$) inference, we apply IH to the premise followed by an application of the ($\forall_{r1}$) rule. If the contracted formulae are both non-principal in a ($\forall_{r2}$) inference, then we apply IH to the premises followed by an application of the rule. If one of the contracted formulae is principal in a ($\forall_{r2}$) inference (as shown below top), then the case is settled as shown below bottom.
+
+$$ \frac{\dfrac{G // \Gamma_1 \vdash \Delta_1, \forall x A // \vdash A[a/x] // \Gamma_2 \vdash \Delta_2 // H \quad G // \Gamma_1 \vdash \Delta_1, \forall x A // \Gamma_2 \vdash \Delta_2, \forall x A // H}{G // \Gamma_1 \vdash \Delta_1, \forall x A, \forall x A // \Gamma_2 \vdash \Delta_2 // H} (\forall_{r2})}{G // \Gamma_1 \vdash \Delta_1, \forall x A // \Gamma_2 \vdash \Delta_2 // H} (ic_r) $$
+
+$$ \frac{\dfrac{\dfrac{\dfrac{G // \Gamma_1 \vdash \Delta_1, \forall x A // \vdash A[a/x] // \Gamma_2 \vdash \Delta_2 // H}{G // \Gamma_1 \vdash \Delta_1 // \vdash A[a/x] // \vdash A[a/x] // \Gamma_2 \vdash \Delta_2 // H}\ \text{Lem. 11}}{G // \Gamma_1 \vdash \Delta_1 // \vdash A[a/x], A[a/x] // \Gamma_2 \vdash \Delta_2 // H}\ \text{Lem. 15}}{G // \Gamma_1 \vdash \Delta_1 // \vdash A[a/x] // \Gamma_2 \vdash \Delta_2 // H}\ \text{IH} \quad \dfrac{\dfrac{G // \Gamma_1 \vdash \Delta_1, \forall x A // \Gamma_2 \vdash \Delta_2, \forall x A // H}{G // \Gamma_1 \vdash \Delta_1 // \Gamma_2 \vdash \Delta_2, \forall x A, \forall x A // H}\ \text{Lem. 7}}{G // \Gamma_1 \vdash \Delta_1 // \Gamma_2 \vdash \Delta_2, \forall x A // H}\ \text{IH}}{G // \Gamma_1 \vdash \Delta_1, \forall x A // \Gamma_2 \vdash \Delta_2 // H} (\forall_{r2}) $$
+---PAGE_BREAK---
+
+Note that we may apply IH in the left branch of the derivation since the complexity of the contraction formula is less than $\forall xA$, and we may apply IH in the right branch since the height of the derivation is less than the original. □
+
+Before moving on to the cut-elimination theorem, we present the definition of the splice operation [17,21]. The operation is used to formulate the (cut) rule.
+
+**Definition 5 (Splice [17]).** The splice $\mathcal{G} \oplus \mathcal{H}$ of two linear nested sequents $\mathcal{G}$ and $\mathcal{H}$ is defined as follows:
+
+$$
+\begin{align*}
+(\Gamma_1 \vdash \Delta_1) \oplus (\Gamma_2 \vdash \Delta_2) &:= \Gamma_1, \Gamma_2 \vdash \Delta_1, \Delta_2 \\
+(\Gamma_1 \vdash \Delta_1) \oplus (\Gamma_2 \vdash \Delta_2 \;//\; \mathcal{F}) &:= \Gamma_1, \Gamma_2 \vdash \Delta_1, \Delta_2 \;//\; \mathcal{F} \\
+(\Gamma_1 \vdash \Delta_1 \;//\; \mathcal{F}) \oplus (\Gamma_2 \vdash \Delta_2) &:= \Gamma_1, \Gamma_2 \vdash \Delta_1, \Delta_2 \;//\; \mathcal{F} \\
+(\Gamma_1 \vdash \Delta_1 \;//\; \mathcal{F}) \oplus (\Gamma_2 \vdash \Delta_2 \;//\; \mathcal{K}) &:= \Gamma_1, \Gamma_2 \vdash \Delta_1, \Delta_2 \;//\; (\mathcal{F} \oplus \mathcal{K})
+\end{align*}
+$$
+
+**Theorem 4 (Cut-Elimination).** *The rule*
+
+$$
+\frac{\mathcal{G} // \Gamma \vdash \Delta, A // \mathcal{H} \quad \mathcal{F} // A^{k_1}, \Gamma_1 \vdash \Delta_1 // \dots // A^{k_n}, \Gamma_n \vdash \Delta_n}{(\mathcal{G} \oplus \mathcal{F}) // \Gamma, \Gamma_1 \vdash \Delta, \Delta_1 // (\mathcal{H} \oplus (\Gamma_2 \vdash \Delta_2 // \dots // \Gamma_n \vdash \Delta_n))} (\text{cut})
+$$
+
+where $\|\mathcal{G}\| = \| \mathcal{F} \|$, $\|\mathcal{H}\| = n-1$, and $\sum_{i=1}^n k_i \ge 1$, is eliminable in LNIF.
+
+*Proof.* We extend the proof of [17, Thm. 5.16] and prove the result by induction on the lexicographic ordering of pairs ($|A|, h_1, h_2$), where $|A|$ is the complexity of the cut formula $A$, $h_1$ is the height of the derivation of the right premise of the (cut) rule, and $h_2$ is the height of the derivation of the left premise of the (cut) rule. Moreover, we assume w.l.o.g. that (cut) is used once as the last inference of the derivation (given a derivation with multiple applications of (cut), we may repeatedly apply the elimination algorithm described here to the topmost occurrence of (cut), ultimately resulting in a cut-free derivation). By [17, Thm. 5.16], we know that (cut) is eliminable from any derivation in LNG, and therefore, we need only consider cases which incorporate quantifier rules.
+
+If $h_1 = 0$, then the right premise of (cut) is an instance of $(id_1)$, $(id_2)$, or $(\bot_l)$. If none of the cut formulae $A$ are principal in the right premise, then the conclusion of (cut) is an instance of $(id_1)$, $(id_2)$, or $(\bot_l)$. If, however, one of the cut formulae $A$ is principal in the right premise and is an atomic formula $p(\vec{a})$, then the top right premise of (cut) is of the form
+
+$$
+\mathcal{F} // p(\vec{a})^{k_1}, \Gamma_1 \vdash \Delta_1 // \dots // p(\vec{a})^{k_i}, \Gamma_i \vdash p(\vec{a}), \Delta'_i // \dots // p(\vec{a})^{k_n}, \Gamma_n \vdash \Delta_n
+$$
+
+where $\Delta_i = p(\vec{a}), \Delta'_i$. Observe that since $\Delta_i$ occurs in the conclusion of (cut), so does $p(\vec{a})$. To construct a cut-free derivation of the conclusion of (cut), we apply (lwr) to the left premise $\mathcal{G} // \Gamma \vdash \Delta, p(\vec{a}) // \mathcal{H}$ until $p(\vec{a})$ is in the $i$th component, and then apply hp-admissibility of $(iw)$ (Lem. 5) to add in the missing formulae. Last, if the cut formula $A$ is principal in the right premise and is equal to $\bot$, then the left premise of (cut) is of the form $\mathcal{G} // \Gamma \vdash \Delta, \bot // \mathcal{H}$. We obtain a cut-free derivation of the conclusion of (cut) by first applying hp-admissibility of $(\bot_r)$ (Lem. 3), followed by hp-admissibility of $(iw)$ (Lem. 5) to add in the missing formulae.
+---PAGE_BREAK---
+
+Suppose that $h_1 > 0$. If none of the cut formulae A are principal in the inference (r) of the right premise of (cut), then for all cases (with the exception of the $(\forall r_1)$, $(\supset r_1)$, $(\exists l)$, $(\forall r_2)$, and $(\supset r_2)$ cases) we apply IH with the left premise of (cut) and the premise(s) of (r), followed by an application of (r). Let us now consider the $(\forall r_1)$, $(\exists l)$, $(\forall r_2)$, $(\supset r_1)$, and $(\supset r_2)$ cases when none of the cut formulae A are principal. First, assume that $(\forall r_1)$ is the rule used to derive the right premise of (cut):
+
+$$ \frac{\mathcal{G} // \Gamma \vdash \Delta, A // \mathcal{H} \qquad \dfrac{\mathcal{F} // A^{k_1}, \Gamma_1 \vdash \Delta_1 // \dots // A^{k_n}, \Gamma_n \vdash \Delta_n // \vdash B[a/x]}{\mathcal{F} // A^{k_1}, \Gamma_1 \vdash \Delta_1 // \dots // A^{k_n}, \Gamma_n \vdash \Delta_n, \forall x B} (\forall_{r1})}{(\mathcal{G} \oplus \mathcal{F}) // \Gamma, \Gamma_1 \vdash \Delta, \Delta_1 // (\mathcal{H} \oplus (\Gamma_2 \vdash \Delta_2 // \dots // \Gamma_n \vdash \Delta_n, \forall x B))} (\text{cut}) $$
+
+We invoke hp-admissibility of (sub) (Lem. 4) to substitute the eigenvariable *a* of ($\forall r_1$) with a fresh variable *b* that does not occur in either premise of (cut). We then apply admissibility of (ew) (Lem. 6) to the left premise of (cut), apply IH to the resulting derivations, and last apply the ($\forall r_1$) rule, as shown below:
+
+$$
+\frac{\dfrac{\dfrac{\mathcal{G} // \Gamma \vdash \Delta, A // \mathcal{H}}{\mathcal{G} // \Gamma \vdash \Delta, A // \mathcal{H} // {\vdash}}\ \text{Lem. 6} \qquad \dfrac{\mathcal{F} // A^{k_1}, \Gamma_1 \vdash \Delta_1 // \dots // A^{k_n}, \Gamma_n \vdash \Delta_n // \vdash B[a/x]}{\mathcal{F} // A^{k_1}, \Gamma_1 \vdash \Delta_1 // \dots // A^{k_n}, \Gamma_n \vdash \Delta_n // \vdash B[b/x]}\ \text{Lem. 4}}{(\mathcal{G} \oplus \mathcal{F}) // \Gamma, \Gamma_1 \vdash \Delta, \Delta_1 // (\mathcal{H} \oplus (\Gamma_2 \vdash \Delta_2 // \dots // \Gamma_n \vdash \Delta_n)) // \vdash B[b/x]}\ \text{IH}}{(\mathcal{G} \oplus \mathcal{F}) // \Gamma, \Gamma_1 \vdash \Delta, \Delta_1 // (\mathcal{H} \oplus (\Gamma_2 \vdash \Delta_2 // \dots // \Gamma_n \vdash \Delta_n, \forall x B))} (\forall_{r1})
+$$
+
+In the ($\exists l$) case below
+
+$$
+\frac{\mathcal{G} // \Gamma \vdash \Delta, A // \mathcal{H} \qquad \dfrac{\mathcal{F} // A^{k_1}, \Gamma_1 \vdash \Delta_1 // \dots // A^{k_i}, B[a/x], \Gamma_i \vdash \Delta_i // \dots // A^{k_n}, \Gamma_n \vdash \Delta_n}{\mathcal{F} // A^{k_1}, \Gamma_1 \vdash \Delta_1 // \dots // A^{k_i}, \exists x B, \Gamma_i \vdash \Delta_i // \dots // A^{k_n}, \Gamma_n \vdash \Delta_n} (\exists_l)}{(\mathcal{G} \oplus \mathcal{F}) // \Gamma, \Gamma_1 \vdash \Delta, \Delta_1 // (\mathcal{H} \oplus (\Gamma_2 \vdash \Delta_2 // \dots // \exists x B, \Gamma_i \vdash \Delta_i // \dots // \Gamma_n \vdash \Delta_n))} (\text{cut})
+$$
+
+we also make use of the hp-admissibility of (sub) to ensure that the ($\exists l$) rule can be applied after invoking the inductive hypothesis:
+
+$$
+\frac{\dfrac{\mathcal{G} // \Gamma \vdash \Delta, A // \mathcal{H} \qquad \dfrac{\mathcal{F} // A^{k_1}, \Gamma_1 \vdash \Delta_1 // \dots // A^{k_i}, B[a/x], \Gamma_i \vdash \Delta_i // \dots // A^{k_n}, \Gamma_n \vdash \Delta_n}{\mathcal{F} // A^{k_1}, \Gamma_1 \vdash \Delta_1 // \dots // A^{k_i}, B[b/x], \Gamma_i \vdash \Delta_i // \dots // A^{k_n}, \Gamma_n \vdash \Delta_n}\ \text{Lem. 4}}{(\mathcal{G} \oplus \mathcal{F}) // \Gamma, \Gamma_1 \vdash \Delta, \Delta_1 // (\mathcal{H} \oplus (\Gamma_2 \vdash \Delta_2 // \dots // B[b/x], \Gamma_i \vdash \Delta_i // \dots // \Gamma_n \vdash \Delta_n))}\ \text{IH}}{(\mathcal{G} \oplus \mathcal{F}) // \Gamma, \Gamma_1 \vdash \Delta, \Delta_1 // (\mathcal{H} \oplus (\Gamma_2 \vdash \Delta_2 // \dots // \exists x B, \Gamma_i \vdash \Delta_i // \dots // \Gamma_n \vdash \Delta_n))} (\exists_l)
+$$
+
+Let us consider the ($\forall r_2$) case
+
+$$
+\frac{(1) \qquad \dfrac{(2) \qquad (3)}{\mathcal{F} // A^{k_1}, \Gamma_1 \vdash \Delta_1 // \dots // A^{k_i}, \Gamma_i \vdash \Delta_i, \forall x B // A^{k_{i+1}}, \Gamma_{i+1} \vdash \Delta_{i+1} // \dots // A^{k_n}, \Gamma_n \vdash \Delta_n} (\forall_{r2})}{(\mathcal{G} \oplus \mathcal{F}) // \Gamma, \Gamma_1 \vdash \Delta, \Delta_1 // (\mathcal{H} \oplus (\Gamma_2 \vdash \Delta_2 // \dots // \Gamma_i \vdash \Delta_i, \forall x B // \Gamma_{i+1} \vdash \Delta_{i+1} // \dots // \Gamma_n \vdash \Delta_n))} (\text{cut})
+$$
+
+(1) $\mathcal{G} // \Gamma \vdash \Delta, A // \mathcal{H}$
+
+(2) $\mathcal{F} // A^{k_1}, \Gamma_1 \vdash \Delta_1 // \dots // A^{k_i}, \Gamma_i \vdash \Delta_i // \vdash B[a/x] // A^{k_{i+1}}, \Gamma_{i+1} \vdash \Delta_{i+1} // \dots // A^{k_n}, \Gamma_n \vdash \Delta_n$
+
+(3) $\mathcal{F} // A^{k_1}, \Gamma_1 \vdash \Delta_1 // \dots // A^{k_i}, \Gamma_i \vdash \Delta_i // A^{k_{i+1}}, \Gamma_{i+1} \vdash \Delta_{i+1}, \forall x B // \dots // A^{k_n}, \Gamma_n \vdash \Delta_n$
+---PAGE_BREAK---
+
+where $\mathcal{H} = \mathcal{H}_1 // \Gamma'_i \vdash \Delta'_i // \Gamma'_{i+1} \vdash \Delta'_{i+1} // \mathcal{H}_2$. To resolve the case we invoke admissibility of (ew) (Lem. 6) on (1) to obtain a derivation of
+
+$$ (1)' \quad \mathcal{G} // \Gamma \vdash \Delta, A // \mathcal{H}_1 // \Gamma'_i \vdash \Delta'_i // {\vdash} // \Gamma'_{i+1} \vdash \Delta'_{i+1} // \mathcal{H}_2 $$
+
+Moreover, to ensure that the eigenvariable *a* in (2) does not occur in (1), we apply hp-admissibility of (sub) (Lem. 4) to obtain (2)' where *a* has been replaced by a fresh parameter *b*. Applying IH between (1)' and (2)', and (1) and (3), followed by an application of ($\forall r_2$), gives the desired result. Last, note that the ($\supset r_1$) and ($\supset r_2$) cases are resolved as explained in the proof of [17, Thm. 5.16].
+
+We assume now that one of the cut formulae $A$ is principal in the inference yielding the right premise of (cut). The cases where $A$ is an atomic formula $p(\vec{a})$ or is identical to $\perp$ are resolved as explained above (when $h_1 = 0$). For the case when $A$ is principal in an application of (lift), we simply apply IH between the left premise of (cut) and the premise of the (lift) rule. Also, if $A$ is of the form $B \wedge C$, $B \vee C$, or $B \supset C$, then all such cases can be resolved as explained in the proof of [17, Thm. 5.16]. Thus, we only consider the cases where $A$ is of the form $\exists x B$ and $\forall x B$. We first consider the former case, which can be split into two subcases: either the cut formula $\exists x B$ is principal in the left premise of (cut), or it is not. Let us consider the former case first, where the cut formula $\exists x B$ is principal in the left premise:
+
+$$ \frac{\dfrac{\mathcal{G} // \Gamma \vdash \Delta, B[b/x], \exists x B // \mathcal{H}}{\mathcal{G} // \Gamma \vdash \Delta, \exists x B // \mathcal{H}} (\exists_r) \qquad \dfrac{\mathcal{F} // (\exists x B)^{k_1}, \Gamma_1 \vdash \Delta_1 // \dots // (\exists x B)^{k_i}, B[a/x], \Gamma_i \vdash \Delta_i // \dots // (\exists x B)^{k_n}, \Gamma_n \vdash \Delta_n}{\mathcal{F} // (\exists x B)^{k_1}, \Gamma_1 \vdash \Delta_1 // \dots // (\exists x B)^{k_i + 1}, \Gamma_i \vdash \Delta_i // \dots // (\exists x B)^{k_n}, \Gamma_n \vdash \Delta_n} (\exists_l)}{(\mathcal{G} \oplus \mathcal{F}) // \Gamma, \Gamma_1 \vdash \Delta, \Delta_1 // (\mathcal{H} \oplus (\Gamma_2 \vdash \Delta_2 // \dots // \Gamma_i \vdash \Delta_i // \dots // \Gamma_n \vdash \Delta_n))} (\text{cut}) $$
+
+By IH, the premise of ($\exists_r$) and the right premise of (cut) yield a cut-free derivation of:
+
+$$ (\mathcal{G} \oplus \mathcal{F}) // \Gamma, \Gamma_1 \vdash B[b/x], \Delta, \Delta_1 // (\mathcal{H} \oplus (\Gamma_2 \vdash \Delta_2 // \dots // \Gamma_i \vdash \Delta_i // \dots // \Gamma_n \vdash \Delta_n)) $$
+
+By hp-admissibility of (sub) (Lem. 4), we have a derivation of the premise of ($\exists_l$) where the eigenvariable *a* has been replaced by *b*. Invoking IH between the derivation of this sequent together with the left premise of (cut) gives a cut-free derivation of:
+
+$$ (\mathcal{G} \oplus \mathcal{F}) // \Gamma, \Gamma_1 \vdash \Delta, \Delta_1 // (\mathcal{H} \oplus (\Gamma_2 \vdash \Delta_2 // \dots // B[b/x], \Gamma_i \vdash \Delta_i // \dots // \Gamma_n \vdash \Delta_n)) $$
+
+Since $|B[b/x]| < |\exists x B|$, we can apply IH to the derivations of the above two sequents. After applying admissibility of ($ic_l$) and ($ic_r$) (Lem. 14 and 16), we obtain the desired conclusion.
+
+Let us now suppose that $\exists x B$ is not principal in the left premise of (cut) and that the left premise is obtained by an instance of the rule (*r*), that is, our derivation ends with:
+
+$$ \frac{\dfrac{\vdots}{\mathcal{G} // \Gamma \vdash \Delta, \exists x B // \mathcal{H}}\ (r) \qquad \dfrac{\mathcal{F} // (\exists x B)^{k_1}, \Gamma_1 \vdash \Delta_1 // \dots // (\exists x B)^{k_i}, B[a/x], \Gamma_i \vdash \Delta_i // \dots // (\exists x B)^{k_n}, \Gamma_n \vdash \Delta_n}{\mathcal{F} // (\exists x B)^{k_1}, \Gamma_1 \vdash \Delta_1 // \dots // (\exists x B)^{k_i + 1}, \Gamma_i \vdash \Delta_i // \dots // (\exists x B)^{k_n}, \Gamma_n \vdash \Delta_n} (\exists_l)}{(\mathcal{G} \oplus \mathcal{F}) // \Gamma, \Gamma_1 \vdash \Delta, \Delta_1 // (\mathcal{H} \oplus (\Gamma_2 \vdash \Delta_2 // \dots // \Gamma_i \vdash \Delta_i // \dots // \Gamma_n \vdash \Delta_n))} (\text{cut}) $$
+---PAGE_BREAK---
+
+If (r) is a rule other than ($\supset r_2$) or ($\forall r_2$), then we obtain the desired conclusion by applying a (cut) between the premise(s) of (r) and the right premise of (cut), followed by an application of (r). If (r) is either the ($\supset r_2$) or ($\forall r_2$) rule, then we invoke IH between the right premise of (r) and the right premise of (cut) to obtain $\mathcal{G}_1$, apply admissibility of (ew) (Lem. 6) to the right premise of (cut) to weaken in the empty component $\vdash$ in the appropriate place so that IH can be applied to the resulting sequent and the left premise of (r) giving $\mathcal{G}_2$, and last, we obtain the desired conclusion by applying (r) to $\mathcal{G}_1$ and $\mathcal{G}_2$. (NB. If (r) is a rule with an eigenvariable condition, then it may be necessary to first apply hp-admissibility of (sub) (Lem. 4) to the relevant premise of (r) to ensure the satisfaction of the condition after the (cut) has been applied.)
+
+Last, let us consider the case where A is of the form $\forall x B$:
+
+$$ \frac{\mathcal{G} // \Gamma \vdash \Delta, \forall x B // \mathcal{H} \qquad \dfrac{\mathcal{F} // (\forall x B)^{k_1}, \Gamma_1 \vdash \Delta_1 // \dots // (\forall x B)^{k_i}, B[a/x], \Gamma_i \vdash \Delta_i // \dots // (\forall x B)^{k_n}, \Gamma_n \vdash \Delta_n}{\mathcal{F} // (\forall x B)^{k_1}, \Gamma_1 \vdash \Delta_1 // \dots // (\forall x B)^{k_i}, \Gamma_i \vdash \Delta_i // \dots // (\forall x B)^{k_n}, \Gamma_n \vdash \Delta_n} (\forall_l)}{(\mathcal{G} \oplus \mathcal{F}) // \Gamma, \Gamma_1 \vdash \Delta, \Delta_1 // (\mathcal{H} \oplus (\Gamma_2 \vdash \Delta_2 // \dots // \Gamma_i \vdash \Delta_i // \dots // \Gamma_n \vdash \Delta_n))} (\text{cut}) $$
+
+Applying IH between the left premise of (cut) and the premise of the ($\forall l$) rule,
+we obtain
+
+$$ (\mathcal{G} \oplus \mathcal{F}) // \Gamma, \Gamma_1 \vdash \Delta, \Delta_1 // (\mathcal{H} \oplus (\Gamma_2 \vdash \Delta_2 // \dots // B[a/x], \Gamma_i \vdash \Delta_i // \dots // \Gamma_n \vdash \Delta_n)) $$
+
+Depending on whether $\mathcal{H}$ is empty, we invoke the invertibility of ($\forall r_1$) or ($\forall r_2$)
+(Lem. 13 and 11), admissibility of (mrg) (Lem. 15), and hp-admissibility of
+(sub) (Lem. 4) to obtain a derivation of the sequent $\mathcal{G} // \Gamma \vdash \Delta, B[a/x] // \mathcal{H}$.
+Since $|B[a/x]| < |\forall x B|$ we can apply IH between this sequent and the one above
+to obtain a cut-free derivation of:
+
+$$ (\mathcal{G} \oplus \mathcal{G} \oplus \mathcal{F}) // \Gamma, \Gamma, \Gamma_1 \vdash \Delta, \Delta, \Delta_1 // (\mathcal{H} \oplus \mathcal{H} \oplus (\Gamma_2 \vdash \Delta_2 // \dots // \Gamma_i \vdash \Delta_i // \dots // \Gamma_n \vdash \Delta_n)) $$
+
+Admissibility of $(ic_l)$ and $(ic_r)$ (Lem. 14 and 16) give the desired conclusion. $\square$
+
+# 5 Conclusion
+
+This paper presented the cut-free calculus LNIF for intuitionistic fuzzy logic within the relatively new paradigm of linear nested sequents. The calculus possesses fundamental proof-theoretic properties such as invertibility of all logical rules, admissibility of structural rules, and syntactic cut-elimination.
+
+In future work the author aims to investigate corollaries of the cut-elimination theorem, such as a *midsequent theorem* [4]. In our context, such a theorem states that every derivable sequent containing only prenex formulae is derivable with
+---PAGE_BREAK---
+
+a proof containing quantifier-free sequents, called *midsequents*, which have only propositional inferences (and potentially (*lift*)) above them in the derivation, and only quantifier inferences (and potentially (*lift*)) below them. Moreover, the present formalism could offer insight regarding which fragments interpolate (or if all of IF interpolates) by applying the so-called *proof-theoretic method* of interpolation [17,19]. Additionally, it could be fruitful to adapt linear nested sequents to other first-order Gödel logics and to investigate decidable fragments [2] by providing proof-search algorithms with implementations (e.g. [16] provides an implementation of proof-search in PROLOG for a class of modal logics within the linear nested sequent framework).
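The prenex requirement in the midsequent theorem can be phrased syntactically: a formula is prenex when it is a (possibly empty) block of quantifiers applied to a quantifier-free matrix. A small sketch of this check, using an illustrative tuple encoding of formulae that is not from the paper:

```python
# A formula is prenex when it is a (possibly empty) block of quantifiers applied
# to a quantifier-free matrix -- the shape the midsequent theorem speaks about.
# The tuple encoding of formulae is illustrative, not from the paper.

def quantifier_free(f):
    if f[0] == "atom":
        return True
    if f[0] in ("and", "or", "imp"):
        return quantifier_free(f[1]) and quantifier_free(f[2])
    return False                            # a quantifier occurs inside

def prenex(f):
    while f[0] in ("forall", "exists"):     # strip the quantifier prefix
        f = f[2]                            # f = (kind, variable, body)
    return quantifier_free(f)

assert prenex(("forall", "x", ("exists", "y", ("imp", ("atom", "p"), ("atom", "q")))))
assert not prenex(("imp", ("forall", "x", ("atom", "p")), ("atom", "q")))
```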
+
+Last, [8] introduced both a nested calculus for first-order intuitionistic logic with *constant domains*, and a nested calculus for first-order intuitionistic logic with *non-constant domains*. The fundamental difference between the two calculi involves the imposition of a side condition on the left $\forall$ and right $\exists$ rules. The author aims to investigate whether such a condition can be imposed on quantifier rules in LNIF in order to readily convert the calculus into a sound and cut-free complete calculus for first-order Gödel logic with *non-constant domains*. This would be a further strength of LNIF since switching between the calculi for the constant domain and non-constant domain versions of first-order Gödel logic would result by simply imposing a side condition on a subset of the quantifier rules.
+
+**Acknowledgments.** The author would like to thank his supervisor A. Ciabattoni for her continued support, B. Lellmann for his thought-provoking discussions on linear nested sequents, and K. van Berkel for his helpful comments.
+
+## References
+
+1. A. Avron. Hypersequents, logical consequence and intermediate logics for concurrency. *Annals of Mathematics and Artificial Intelligence*, 4(3):225–248, Sep 1991.
+2. M. Baaz, A. Ciabattoni, and N. Preining. SAT in monadic Gödel logics: A borderline between decidability and undecidability. In H. Ono, M. Kanazawa, and R. de Queiroz, editors, *Logic, Language, Information and Computation*, pages 113–123, Berlin, Heidelberg, 2009. Springer Berlin Heidelberg.
+3. M. Baaz, N. Preining, and R. Zach. First-order Gödel logics. *Annals of Pure and Applied Logic*, 147(1):23–47, 2007.
+4. M. Baaz and R. Zach. Hypersequents and the proof theory of intuitionistic fuzzy logic. In *Computer science logic (Fischbachau, 2000)*, volume 1862 of *Lecture Notes in Comput. Sci.*, pages 187–201. Springer, Berlin, 2000.
+5. N. D. Belnap, Jr. Display logic. *J. Philos. Logic*, 11(4):375–417, 1982.
+6. S. Borgwardt, F. Distel, and R. Peñaloza. Decidable Gödel description logics without the finitely-valued model property. In C. Baral, G. De Giacomo, and T. Eiter, editors, *Proceedings of the 14th International Conference on Principles of Knowledge Representation and Reasoning (KR’14)*, pages 228–237. AAAI Press, 2014.
+7. M. Dummett. A propositional calculus with denumerable matrix. *The Journal of Symbolic Logic*, 24(2):97–106, 1959.
+---PAGE_BREAK---
+
+8. M. Fitting. Nested sequents for intuitionistic logics. *Notre Dame Journal of Formal Logic*, 55(1):41–61, 2014.
+
+9. D. Gabbay, V. Shehtman, and D. Skvortsov. *Quantification in Non-classical Logics*. Studies in Logic and Foundations of Mathematics. Elsevier, 2009.
+
+10. G. Gentzen. Untersuchungen über das logische Schliessen. *Mathematische Zeitschrift*, 39(3):405–431, 1935.
+
+11. K. Gödel. Zum intuitionistischen Aussagenkalkül. *Anzeiger der Akademie der Wissenschaften in Wien*, 69:65–66, 1932.
+
+12. R. Goré, L. Postniece, and A. Tiu. Cut-elimination and proof-search for bi-intuitionistic logic using nested sequents. In C. Areces and R. Goldblatt, editors, *Advances in Modal Logic 7, papers from the seventh conference on "Advances in Modal Logic," held in Nancy, France, 9-12 September 2008*, pages 43–66. College Publications, 2008.
+
+13. P. Hájek. *The Metamathematics of Fuzzy Logic*. Kluwer, 1998.
+
+14. A. Horn. Logic with truth values in a linearly ordered Heyting algebra. *The Journal of Symbolic Logic*, 34(3):395–408, 1969.
+
+15. B. Lellmann. Linear nested sequents, 2-sequents and hypersequents. In H. de Nivelle, editor, *Automated Reasoning with Analytic Tableaux and Related Methods - 24th International Conference, TABLEAUX 2015, Wroclaw, Poland, September 21-24, 2015. Proceedings*, volume 9323 of Lecture Notes in Computer Science, pages 135–150. Springer, 2015.
+
+16. B. Lellmann. LNSprover: modular theorem proving with linear nested sequents. https://www.logic.at/staff/lellmann/lnsprover/, 2016.
+
+17. B. Lellmann and R. Kuznets. Interpolation for intermediate logics via hyper- and linear nested sequents. *Advances in Modal Logic* **12**, pages 473–492, 2018.
+
+18. V. Lifschitz, D. Pearce, and A. Valverde. Strongly equivalent logic programs. *ACM Trans. Comput. Logic*, 2(4):526–541, Oct. 2001.
+
+19. T. Lyon, A. Tiu, R. Goré, and R. Clouston. Syntactic interpolation for tense logics and bi-intuitionistic logic via nested sequents. In M. Fernández and A. Muscholl, editors, *28th EACSL Annual Conference on Computer Science Logic (CSL 2020)*, volume 152 of Leibniz International Proceedings in Informatics (LIPIcs), Dagstuhl, Germany, 2020. Schloss Dagstuhl–Leibniz-Zentrum fuer Informatik. Forthcoming.
+
+20. T. Lyon and K. van Berkel. Automating agential reasoning: Proof-calculi and syntactic decidability for stit logics. In M. Baldoni, M. Dastani, B. Liao, Y. Sakurai, and R. Zalila-Wenkstern, editors, *PRIMA 2019: Principles and Practice of Multi-Agent Systems*. Springer International Publishing, 2019. Forthcoming.
+
+21. A. Masini. 2-sequent calculus: a proof theory of modalities. *Annals of Pure and Applied Logic*, 58(3):229–246, 1992.
+
+22. A. Masini. 2-Sequent Calculus: Intuitionism and Natural Deduction. *Journal of Logic and Computation*, 3(5):533–562, 10 1993.
+
+23. F. Poggiolesi. A cut-free simple sequent calculus for modal logic S5. *The Review of Symbolic Logic*, 1(1):3–15, 2008.
+
+24. G. Takeuti and S. Titani. Intuitionistic fuzzy logic and intuitionistic fuzzy set theory. *Journal of Symbolic Logic*, 49(3):851–866, 1984.
+
+25. L. Viganò. Labelled non-classical logics. Kluwer Academic Publishers, Dordrecht, 2000. With a foreword by Dov M. Gabbay.
+
+26. A. Visser. On the completeness principle: A study of provability in Heyting's arithmetic and extensions. *Annals of Mathematical Logic*, 22(3):263–295, 1982.
+
+27. H. Wansing. Sequent Calculi for Normal Modal Propositional Logics. *Journal of Logic and Computation*, 4(2):125–142, 04 1994.
+---PAGE_BREAK---
+
+**A Proofs**
+
+**Theorem 2. (Soundness of LNIF)** *For any linear nested sequent $\mathcal{G}$, if $\mathcal{G}$ is provable in LNIF, then $\Vdash \bar{\forall}\iota(\mathcal{G})$.*
+
+*Proof.* We prove the result by induction on the height of the derivation of $\mathcal{G}$.
+
+*Base case.* We argue by contradiction that each rule $(r) \in \{(id_1), (id_2), (\bot_l)\}$ produces a valid linear nested sequent under the universal closure of $\iota$. Each such rule is of the form shown below left, with the sequent $\mathcal{G}$ of the form shown below right:
+
+$$ \frac{}{\mathcal{G}}\,(r) \qquad \mathcal{G} = \Gamma_1 \vdash \Delta_1 // \dots // \Gamma_n \vdash \Delta_n // \dots // \Gamma_m \vdash \Delta_m $$
+
+We therefore assume that $\mathcal{G}$ is invalid. This implies that there exists a model $M = (W, R, D, V)$ with world $v$ such that $Rvw_0, \vec{a} \in D_{w_0}$, and $M, w_0 \nVdash \iota(\mathcal{G})(\vec{a})$. It follows that there is a sequence of worlds $w_1, \dots, w_m \in W$ such that $Rw_j w_{j+1}$ (for $0 \le j \le m-1$), $M, w_i \Vdash \bigwedge \Gamma_i$, and $M, w_i \nVdash \bigvee \Delta_i$, for each $1 \le i \le m$. We assume all parameters in $\bigwedge \Gamma_i$ and $\bigvee \Delta_i$ (for $1 \le i \le m$) are interpreted as elements of the associated domain.
+
+(id₁)-**rule:** Let $\mathcal{G}$ be in the form above and assume that $\Gamma_n = \Gamma'_n, p(\vec{b})$ and $\Delta_n = p(\vec{b}), \Delta'_n$. Then $M, w_n \Vdash \bigwedge \Gamma'_n \wedge p(\vec{b})$ and $M, w_n \nVdash \bigvee \Delta'_n \vee p(\vec{b})$. The former implies that $M, w_n \Vdash p(\vec{b})$ and the latter implies that $M, w_n \nVdash p(\vec{b})$, which is a contradiction.
+
+(id₂)-**rule:** Similar to the case above, but uses Lem. 1.
+
+($\bot_l$)-**rule:** Let $\mathcal{G}$ be as above and assume that $\Gamma_n = \Gamma'_n, \bot$. If this is the case, then it follows that $M, w_n \Vdash \bigwedge \Gamma'_n \wedge \bot$, from which the contradiction $M, w_n \Vdash \bot$ follows.
+
+**Inductive step.** Each inference rule considered is of one of the following two forms:
+
+$$ \frac{\mathcal{G}'}{\mathcal{G}}(r_1) \qquad \frac{\mathcal{G}_1 \qquad \mathcal{G}_2}{\mathcal{G}}(r_2) $$
+
+where $\mathcal{G} = \Gamma_1 \vdash \Delta_1 // \dots // \Gamma_n \vdash \Delta_n // \Gamma_{n+1} \vdash \Delta_{n+1} // \dots // \Gamma_m \vdash \Delta_m$.
+
+Assuming that $\mathcal{G}$ is invalid implies the existence of a model $M = (W, R, D, V)$ with world $v$ such that $Rvw_0, \vec{a} \in D_{w_0}$, and $M, w_0 \nVdash \iota(\mathcal{G})(\vec{a})$. Hence, there is a sequence of worlds $w_1, \dots, w_m \in W$ such that $Rw_j w_{j+1}$ (for $0 \le j \le m-1$), $M, w_i \Vdash \bigwedge \Gamma_i$, and $M, w_i \nVdash \bigvee \Delta_i$, for each $1 \le i \le m$. We assume all parameters in $\bigwedge \Gamma_i$ and $\bigvee \Delta_i$ (for $1 \le i \le m$) are interpreted as elements of the associated domain.
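The countermodel reasoning used throughout this inductive step can be replayed concretely on a finite linear frame. The following sketch covers the propositional fragment only; the formula encoding and the `forces` helper are illustrative, not part of the calculus. Implication at a world quantifies over all later worlds of the reflexive linear order:

```python
# Forcing over a finite linear Kripke frame 0 < 1 < ... < n-1 (reflexive),
# mirroring the countermodel reasoning of the soundness proof.  Propositional
# fragment only; the encoding and helper are illustrative, not from the paper.
# val[w] is the set of atoms true at world w (monotone along the order).

def forces(val, w, f):
    kind = f[0]
    if kind == "atom":
        return f[1] in val[w]
    if kind == "and":
        return forces(val, w, f[1]) and forces(val, w, f[2])
    if kind == "or":
        return forces(val, w, f[1]) or forces(val, w, f[2])
    if kind == "imp":   # implication quantifies over all later worlds
        return all(not forces(val, v, f[1]) or forces(val, v, f[2])
                   for v in range(w, len(val)))
    raise ValueError(kind)

val = [set(), {"p"}, {"p", "q"}]            # monotone valuation on 3 worlds
imp = ("imp", ("atom", "p"), ("atom", "q"))
assert forces(val, 2, imp)                  # p -> q holds at the last world
assert not forces(val, 0, imp)              # refuted at 0: world 1 forces p but not q
assert forces(val, 1, ("atom", "p")) and forces(val, 2, ("atom", "p"))  # persistence (cf. Lem. 1)
```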
+
+(lift)-**rule:** Follows from Lem. 1.
+
+($\wedge_l$), ($\wedge_r$), ($\vee_r$), ($\vee_l$)-**rules:** It is not difficult to show that for each of these rules the premise, or at least one of the premises, is invalid if the conclusion is assumed invalid.
+
+($\supset_{r_1}$)-**rule:** It follows from our assumption that $M, w_m \Vdash \bigwedge \Gamma_m$ and $M, w_m \nVdash \bigvee \Delta_m \vee A \supset B$. The latter statement implies that $M, w_m \nVdash A \supset B$, from
+---PAGE_BREAK---
+
+which it follows that there exists a world $w_{m+1} \in W$ such that $Rw_m w_{m+1}$ and $M, w_{m+1} \Vdash A$, but $M, w_{m+1} \not\vDash B$, letting us falsify the premise.
+
+($\supset_{r_2}$)-**rule:** Our assumption implies that $M, w_n \Vdash \bigwedge \Gamma_n$, $M, w_n \not\vDash \bigvee \Delta_n \vee A \supset B$, $M, w_{n+1} \Vdash \bigwedge \Gamma_{n+1}$, and $M, w_{n+1} \not\vDash \bigvee \Delta_{n+1}$. The fact that $M, w_n \not\vDash \bigvee \Delta_n \vee A \supset B$ holds implies that there exists a world $w \in W$ such that $Rw_n w$ and $M, w \Vdash A$ and $M, w \not\vDash B$. Since our frames are connected, we have two cases to consider: (i) $Rww_{n+1}$, or (ii) $Rw_{n+1}w$. In case (i), the left premise is falsified, and in case (ii) the right premise is falsified.
+
+($\supset_l$)-**rule:** Our assumption implies that $M, w_n \Vdash \bigwedge \Gamma_n \wedge A \supset B$ and $M, w_n \not\vDash \bigvee \Delta_n$. Since $R$ is reflexive, we know that $Rw_n w_n$; this fact, in conjunction with the fact that $M, w_n \Vdash A \supset B$, entails that $M, w_n \not\vDash A$ or $M, w_n \Vdash B$, which confirms that one of the premises of the rule is falsified in $M$.
+
+($\exists_l$)-**rule:** Our assumption implies that $M, w_n \Vdash \bigwedge \Gamma_n \wedge \exists x A$ and $M, w_n \not\vDash \bigvee \Delta_n$. Therefore, $M, w_n \Vdash A[b/x]$ for some $b \in D_{w_n}$. Since $a$ is an eigenvariable, this implies that the premise is falsified when we interpret $a$ as $b$ at the world $w_n$.
+
+($\exists_r$)-**rule:** Similar to the $(\forall_l)$ case. $\square$
+
+**Theorem 3. (Completeness of LNIF)** *If $\vdash_{IF} A$, then $A$ is provable in LNIF.*
+
+*Proof.* The claim is proven by showing that LNIF can derive each axiom of IF and simulate each inference rule. We derive the quantifier axioms and all inference rules of IF, referring the reader to [15] for the propositional cases.
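As a sanity check on the quantifier axioms derived below, one can evaluate them in the standard $[0,1]$ Gödel semantics, where conjunction and disjunction are min and max, the implication $u \Rightarrow v$ equals $1$ if $u \le v$ and $v$ otherwise, and the quantifiers are infima and suprema over the domain. The following numerical check of the shift axiom $\forall x(A(x) \lor B) \supset (\forall x A(x) \lor B)$ over random finite interpretations is illustrative only and not part of the calculus:

```python
import random

def g_imp(u, v):
    # Goedel implication on [0,1]: 1 if u <= v, else v
    return 1.0 if u <= v else v

def shift_axiom_value(a_vals, b):
    # value of forall x (A(x) v B) -> (forall x A(x) v B) over a finite domain
    antecedent = min(max(a, b) for a in a_vals)   # inf_x max(A(x), B)
    consequent = max(min(a_vals), b)              # max(inf_x A(x), B)
    return g_imp(antecedent, consequent)

random.seed(0)
for _ in range(1000):
    a_vals = [random.random() for _ in range(5)]
    b = random.random()
    assert shift_axiom_value(a_vals, b) == 1.0    # the axiom evaluates to 1
```

The check always succeeds because, in a linear order, $\inf_x \max(a_x, b) = \max(\inf_x a_x, b)$, which is precisely the semantic content of the constant-domain shift axiom.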
+
+$$
+\dfrac{\dfrac{\vdash \ // \ \forall x A(x), A[a/x] \vdash A[a/x]}{\vdash \ // \ \forall x A(x) \vdash A[a/x]}\,(\forall_l)}{\vdash \forall x A(x) \supset A[a/x]}\,(\supset_{r_1})
+\qquad
+\dfrac{\dfrac{\vdash \ // \ A[a/x] \vdash A[a/x], \exists x A(x)}{\vdash \ // \ A[a/x] \vdash \exists x A(x)}\,(\exists_r)}{\vdash A[a/x] \supset \exists x A(x)}\,(\supset_{r_1})
+$$
+
+The topmost sequents are derivable by Lem. 6.
+
+$$
+\dfrac{
+\dfrac{
+\dfrac{
+\dfrac{
+\dfrac{
+\dfrac{\vdash \ // \ A[a/x], \forall x(A(x) \lor B) \vdash B \ // \ A[a/x] \vdash A[a/x]}{\vdash \ // \ A[a/x], \forall x(A(x) \lor B) \vdash B \ // \ {} \vdash A[a/x]}\,(lift)
+\qquad
+\vdash \ // \ B, \forall x(A(x) \lor B) \vdash B \ // \ {} \vdash A[a/x]
+}{\vdash \ // \ A[a/x] \lor B, \forall x(A(x) \lor B) \vdash B \ // \ {} \vdash A[a/x]}\,(\lor_l)
+}{\vdash \ // \ \forall x(A(x) \lor B) \vdash B \ // \ {} \vdash A[a/x]}\,(\forall_l)
+}{\vdash \ // \ \forall x(A(x) \lor B) \vdash \forall x A(x), B}\,(\forall_{r_1})
+}{\vdash \ // \ \forall x(A(x) \lor B) \vdash \forall x A(x) \lor B}\,(\lor_r)
+}{\vdash \forall x(A(x) \lor B) \supset (\forall x A(x) \lor B)}\,(\supset_{r_1})
+$$
+
+The topmost sequents are derivable by Lem. 6.
+The derivation above additionally makes use of Lem. 6, Lem. 12, Thm. 4, and Lem. 15.
+---PAGE_BREAK---
+
+$$
+\dfrac{
+\dfrac{
+\dfrac{
+\dfrac{
+\dfrac{
+\dfrac{\Pi_1 \qquad \Pi_2}{\vdash \ // \ \forall x(A(x) \supset B) \vdash \ // \ A[a/x], \exists x A(x), \forall x(A(x) \supset B), A[a/x] \supset B \vdash B}\,(\supset_l)
+}{\vdash \ // \ \forall x(A(x) \supset B) \vdash \ // \ A[a/x], \exists x A(x), \forall x(A(x) \supset B) \vdash B}\,(\forall_l)
+}{\vdash \ // \ \forall x(A(x) \supset B) \vdash \ // \ A[a/x], \exists x A(x) \vdash B}\,(lift)
+}{\vdash \ // \ \forall x(A(x) \supset B) \vdash \ // \ \exists x A(x) \vdash B}\,(\exists_l)
+}{\vdash \ // \ \forall x(A(x) \supset B) \vdash \exists x A(x) \supset B}\,(\supset_{r_1})
+}{\vdash \forall x(A(x) \supset B) \supset (\exists x A(x) \supset B)}\,(\supset_{r_1})
+$$
+
+$$
+\Pi_1 = \ \vdash \ // \ \forall x (A(x) \supset B) \vdash \ // \ A[a/x], \exists x A(x), \forall x (A(x) \supset B), B \vdash B
+$$
+
+$$
+\Pi_2 = \ \vdash \ // \ \forall x(A(x) \supset B) \vdash \ // \ A[a/x], \exists x A(x), \forall x(A(x) \supset B), A[a/x] \supset B \vdash B, A[a/x]
+$$
+
+Both sequents are derivable by Lem. 6.
+
+The derivation of the axiom $\forall x(B \supset A(x)) \supset (B \supset \forall x A(x))$ is analogous, ending in an application of $(\supset_l)$ with the premises
+
+$$
+\Pi'_1 = \ \vdash \ // \ \forall x(B \supset A(x)) \vdash \ // \ B, \forall x(B \supset A(x)) \vdash \ // \ \forall x(B \supset A(x)), B, A[a/x] \vdash A[a/x]
+$$
+
+$$
+\Pi'_2 = \ \vdash \ // \ \forall x(B \supset A(x)) \vdash \ // \ B, \forall x(B \supset A(x)) \vdash \ // \ \forall x(B \supset A(x)), B \supset A[a/x], B \vdash B, A[a/x]
+$$
+
+both of which are derivable by Lem. 6.
+
+**Lemma 2.** *For any $A$, $\Gamma$, $\Delta$, $\mathcal{G}$, and $\mathcal{H}$, $\vdash_{LNIF} \mathcal{G} \ // \ \Gamma, A \vdash A, \Delta \ // \ \mathcal{H}$.*
+
+*Proof.* We prove the result by induction on the complexity of *A*.
+
+*Base case.* When $A$ is atomic or $\bot$, the desired result is an instance of $(id_1)$ or $(\bot_l)$, respectively.
+
+*Inductive step.* We consider each case below and use IH to denote the proof given by the inductive hypothesis.
+
+The cases where $A$ is of the form $B \wedge C$, $B \vee C$, or $\exists x B$ are simple, and are shown below.
+
+$$
+\dfrac{\dfrac{\mathcal{G} \ // \ \Gamma, B, C \vdash B, \Delta \ // \ \mathcal{H} \qquad \mathcal{G} \ // \ \Gamma, B, C \vdash C, \Delta \ // \ \mathcal{H}}{\mathcal{G} \ // \ \Gamma, B, C \vdash B \wedge C, \Delta \ // \ \mathcal{H}}\,(\wedge_r)}{\mathcal{G} \ // \ \Gamma, B \wedge C \vdash B \wedge C, \Delta \ // \ \mathcal{H}}\,(\wedge_l)
+$$
+
+$$
+\dfrac{\dfrac{\mathcal{G} \ // \ \Gamma, B \vdash B, C, \Delta \ // \ \mathcal{H} \qquad \mathcal{G} \ // \ \Gamma, C \vdash B, C, \Delta \ // \ \mathcal{H}}{\mathcal{G} \ // \ \Gamma, B \vee C \vdash B, C, \Delta \ // \ \mathcal{H}}\,(\vee_l)}{\mathcal{G} \ // \ \Gamma, B \vee C \vdash B \vee C, \Delta \ // \ \mathcal{H}}\,(\vee_r)
+$$
+
+$$
+\dfrac{\dfrac{\mathcal{G} \ // \ \Gamma, B[a/x] \vdash B[a/x], \Delta \ // \ \mathcal{H}}{\mathcal{G} \ // \ \Gamma, B[a/x] \vdash \exists x B, \Delta \ // \ \mathcal{H}}\,(\exists_r)}{\mathcal{G} \ // \ \Gamma, \exists x B \vdash \exists x B, \Delta \ // \ \mathcal{H}}\,(\exists_l)
+$$
+
+In each case the premises follow from IH.
+---PAGE_BREAK---
+
+The cases where $A$ is of the form $B \supset C$ or $\forall xB$ are a bit more cumbersome, and
+are explained below. We first define the linear nested sequents $\mathcal{G}_i$ (for $0 \le i \le n$)
+and $\mathcal{H}_j$, where $\mathcal{G}_0 = \mathcal{G}$.
+
+$$
+\mathcal{G}_i = \mathcal{G} \ // \ \Gamma_1, B \supset C \vdash \Delta_1 \ // \ \cdots \ // \ \Gamma_i, B \supset C \vdash \Delta_i \qquad \mathcal{H}_j = \Gamma_j \vdash \Delta_j \ // \ \cdots \ // \ \Gamma_n \vdash \Delta_n
+$$
+
+The desired sequent $\mathcal{G} \ // \ \Gamma_1, B \supset C \vdash B \supset C, \Delta_1 \ // \ \mathcal{H}_2$ then follows from the sequents $\mathcal{G} \ // \ \Gamma_1, B \supset C \vdash \Delta_1 \ // \ B, C \vdash C \ // \ \mathcal{H}_2$ and $\mathcal{G} \ // \ \Gamma_1, B \supset C \vdash \Delta_1 \ // \ B, B \supset C \vdash B, C \ // \ \mathcal{H}_2$, both obtained via IH, together with the derivation $\Pi_0$.
+
+The derivations $\Pi_i$, with $0 \le i \le n - 3$, proceed analogously: the sequents $\mathcal{G}_{i+1} \ // \ \Gamma_{i+2}, B \supset C \vdash \Delta_{i+2} \ // \ B, C \vdash C \ // \ \mathcal{H}_{i+3}$ and $\mathcal{G}_{i+1} \ // \ \Gamma_{i+2}, B \supset C \vdash \Delta_{i+2} \ // \ B, B \supset C \vdash B, C \ // \ \mathcal{H}_{i+3}$ are obtained via IH, and the remaining premise is continued as the derivation $\Pi_{i+1}$.
+
+Last, the derivation $\Pi_{n-2}$ closes the final component: the sequents $\mathcal{G}_n \ // \ B, C \vdash C$ and $\mathcal{G}_n \ // \ B, B \supset C \vdash B, C$ are derivable by IH, an application of $(\supset_l)$ yields $\mathcal{G}_n \ // \ B, B \supset C \vdash C$, and (lift) together with the right implication rules then gives $\mathcal{G}_{n-2} \ // \ \Gamma_{n-1}, B \supset C \vdash B \supset C, \Delta_{n-1} \ // \ \Gamma_n, B \supset C \vdash \Delta_n$.
+
+Let us consider the case where $A$ is of the form $\forall xB$. We first define the
+linear nested sequents $\mathcal{G}_i$ (for $0 \le i \le n$) and $\mathcal{H}_j$, where $\mathcal{G}_0 = \mathcal{G}$.
+
+$$
+\mathcal{G}_i = \mathcal{G} \ // \ \Gamma_1, \forall x B \vdash \Delta_1 \ // \ \cdots \ // \ \Gamma_i, \forall x B \vdash \Delta_i \qquad \mathcal{H}_j = \Gamma_j \vdash \Delta_j \ // \ \cdots \ // \ \Gamma_n \vdash \Delta_n
+$$
+
+The derivations $\Pi_i$ for this case follow the same pattern as in the previous case: the sequent $\mathcal{G}_{i+2} \ // \ \forall x B, B[y_i/x] \vdash B[y_i/x] \ // \ \mathcal{H}_{i+3}$ is derivable by IH, applications of $(\forall_l)$, (lift), and the right universal rules (with a fresh eigenvariable $y_i$) reduce the desired sequent to it, and the remaining premise is continued as $\Pi_{i+1}$.
+---PAGE_BREAK---
+
+The last component of the derivation, $\Pi_{n-2}$, is given below:
+
+$$ \frac{\mathcal{G}_{n-2} // \Gamma_{n-1}, \forall x B \vdash \Delta_{n-1} // \Gamma_n, \forall x B \vdash \Delta_n // \forall x B, B[y_n/x] \vdash B[y_n/x]}{\mathcal{G}_{n-2} // \Gamma_{n-1}, \forall x B \vdash \Delta_{n-1} // \Gamma_n, \forall x B \vdash \Delta_n // \forall x B \vdash B[y_n/x]} \,\text{IH} $$
+
+$$ \mathcal{G}_{n-2} // \Gamma_{n-1}, \forall x B \vdash \Delta_{n-1} // \Gamma_n, \forall x B \vdash \Delta_n // B[y_n/x] $$
+
+$$ \mathcal{G}_{n-2} // \Gamma_{n-1}, \forall x B \vdash \Delta_{n-1} // \Gamma_n, \forall x B \vdash B[y_n/x] $$
+
+$$ \mathcal{G}_{n-2} // \Gamma_{n-1}, \forall x B \vdash \Delta_{n-1} // \Gamma_n, \forall x B \vdash \Delta_n \qquad \square $$
+
+**Lemma 4.** *The (sub) rule is hp-admissible in LNIF.*
+
+*Proof.* We prove the result by induction on the height of the given derivation of $\mathcal{G}$.
+
+*Base case.* Any instance of the rule (id₁), (id₂), or (⊥ₗ) is still an instance of the rule under the variable substitution [a/b].
+
+*Inductive step.* For all rules, with the exception of the (∀r₁), (∃ₗ), and (∀r₂) rules, the claim follows straightforwardly by applying IH followed by the corresponding rule. The nontrivial cases occur when the last rule applied is an instance of (∀r₁), (∃ₗ), or (∀r₂), and the variable substituted into the conclusion of the inference is also the eigenvariable of the inference:
+
+$$ \frac{\dfrac{\mathcal{G} // \Gamma \vdash \Delta // A[a/x]}{\mathcal{G} // \Gamma \vdash \Delta, \forall x A}}{(\mathcal{G} // \Gamma \vdash \Delta, \forall x A)[a/b]} \qquad \frac{\dfrac{\mathcal{G} // \Gamma, A[a/x] \vdash \Delta // H}{\mathcal{G} // \Gamma, \exists x A \vdash \Delta // H}}{(\mathcal{G} // \Gamma, \exists x A \vdash \Delta // H)[a/b]} $$
+
+$$ \frac{\frac{\mathcal{G} // \Gamma_1 \vdash \Delta_1 // A[a/x] // \Gamma_2 \vdash \Delta_2 // H}{\mathcal{G} // \Gamma_1 \vdash \Delta_1, \forall x A // \Gamma_2 \vdash \Delta_2 // H}}{(\mathcal{G} // \Gamma_1 \vdash \Delta_1, \forall x A // \Gamma_2 \vdash \Delta_2 // H)[a/b]} $$
+
+In such cases we invoke the inductive hypothesis twice: first, we apply the substitution [c/a] to the premise, where c is a fresh parameter, and then we invoke the inductive hypothesis again and apply the substitution [a/b]. The desired result follows by a single application of each rule. □
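To make the double appeal to the inductive hypothesis concrete, consider a hypothetical instance (not from the source) with $A = P(x)$, eigenvariable $a$, fresh parameter $c$, and target substitution $[a/b]$:

$$
\begin{array}{ll}
\mathcal{G} // \Gamma \vdash \Delta // P(a) & \\
\mathcal{G} // \Gamma \vdash \Delta // P(c) & \text{IH with } [c/a] \text{ ($a$ is an eigenvariable, so $\mathcal{G}, \Gamma, \Delta$ are unchanged)} \\
\mathcal{G}[a/b] // \Gamma[a/b] \vdash \Delta[a/b] // P(c) & \text{IH with } [a/b] \text{ ($c$ is fresh)} \\
(\mathcal{G} // \Gamma \vdash \Delta, \forall x P(x))[a/b] & (\forall r_1) \text{ with eigenvariable } c
\end{array}
$$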
+
+**Lemma 5.** *The (iw) rule is hp-admissible in LNIF.*
+
+*Proof.* We know by [17, Lem. 5.5] that (iw) is admissible in LNG. We extend the argument to LNIF and consider the cases of permuting (iw) past the (∀r₁), (∃ₗ), and (∀r₂) rules; the (lift), (∀l), and (∃r) cases are trivial.
+
+Suppose we have a (∀r₁) inference followed by an instance of (iw):
+
+$$
+\frac{\frac{\mathcal{G} // \Gamma \vdash \Delta // A[a/x]}{\mathcal{G} // \Gamma \vdash \Delta, \forall x A} (\forall r_1)}{\mathcal{G}' // \Gamma' \vdash \Delta', \forall x A} (\text{iw})
+$$
+---PAGE_BREAK---
+
+The nontrivial case occurs when the (iw) rule weakens in a formula containing
+the parameter *a*. If this happens to be the case, then we invoke Lem. 4 and apply
+a substitution [*b*/*a*] where *b* is a fresh parameter not occurring in the derivation
+above. After performing this operation, we may complete the derivation as shown
+below:
+
+$$
+\begin{array}{ll}
+\mathcal{G} // \Gamma \vdash \Delta // A[a/x] & \\
+\mathcal{G}[b/a] // \Gamma[b/a] \vdash \Delta[b/a] // A[a/x][b/a] & \text{Lem. 4} \\
+\mathcal{G} // \Gamma \vdash \Delta // A[b/x] & = \\
+\mathcal{G}' // \Gamma' \vdash \Delta' // A[b/x] & (\mathrm{iw}) \\
+\mathcal{G}' // \Gamma' \vdash \Delta', \forall x A & (\forall r_1)
+\end{array}
+$$
+
+The second and third lines are equal because $a$ does not occur in $\mathcal{G}$, $\Gamma$, or $\Delta$ (i.e., $a$ is an eigenvariable) and also because $A[a/x][b/a] = A[b/x]$ (we may assume w.l.o.g. that $A$ does not contain any occurrences of $a$). Lastly, we may apply the $(\forall r_1)$ rule since we are guaranteed that $b$ is an eigenvariable by choice.
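For instance, for a hypothetical atomic formula $A = P(x, y)$ with $y$ distinct from $a$ and $b$:

$$ A[a/x][b/a] = P(a, y)[b/a] = P(b, y) = A[b/x]. $$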
+
+The cases for $(\exists_l)$ and $(\forall r_2)$ are shown similarly.
+□
+
+**Lemma 8.** If $\sum_{i=1}^{n} k_i \ge 1$, then
+
+(i) (1) implies (2)
+
+(ii) (3) implies (4) and (5)
+
+(iii) (6) implies (7) and (8)
+
+(iv) (9) implies (10)
+
+(v) (11) implies (12)
+
+$$
+\vdash_{\text{LNIF}} \Gamma_1, (A \land B)^{k_1} \vdash \Delta_1 // \dots // \Gamma_n, (A \land B)^{k_n} \vdash \Delta_n \quad (1)
+$$
+
+$$
+\vdash_{\text{LNIF}} \Gamma_1, A^{k_1}, B^{k_1} \vdash \Delta_1 // \dots // \Gamma_n, A^{k_n}, B^{k_n} \vdash \Delta_n \quad (2)
+$$
+
+$$
+\vdash_{\text{LNIF}} \Gamma_1, (A \lor B)^{k_1} \vdash \Delta_1 // \dots // \Gamma_n, (A \lor B)^{k_n} \vdash \Delta_n \quad (3)
+$$
+
+$$
+\vdash_{\text{LNIF}} \Gamma_1, A^{k_1} \vdash \Delta_1 // \dots // \Gamma_n, A^{k_n} \vdash \Delta_n \quad (4)
+$$
+
+$$
+\vdash_{\text{LNIF}} \Gamma_1, B^{k_1} \vdash \Delta_1 // \dots // \Gamma_n, B^{k_n} \vdash \Delta_n
+\quad (5)
+$$
+
+$$
+\vdash_{\text{LNIF}} \Gamma_1, (A \supset B)^{k_1} \vdash \Delta_1 // \dots // \Gamma_n, (A \supset B)^{k_n} \vdash \Delta_n \quad (6)
+$$
+
+$$
+\vdash_{\text{LNIF}} \Gamma_1, B^{k_1} \vdash \Delta_1 // \dots // \Gamma_n, B^{k_n} \vdash \Delta_n
+\qquad (7)
+$$
+
+$$
+\vdash_{\text{LNIF}} \Gamma_1, (A \supset B)^{k_1} \vdash \Delta_1, A^{k_1} // \dots // \Gamma_n, (A \supset B)^{k_n} \vdash \Delta_n, A^{k_n} \quad (8)
+$$
+
+$$
+\vdash_{\text{LNIF}} \Gamma_1, (\forall x A)^{k_1} \vdash \Delta_1 // \dots // \Gamma_n, (\forall x A)^{k_n} \vdash \Delta_n \quad (9)
+$$
+
+$$
+\vdash_{\text{LNIF}} \Gamma_1, A[a/x]^{k_1}, (\forall x A)^{k_1} \vdash \Delta_1 // \dots // \Gamma_n, A[a/x]^{k_n}, (\forall x A)^{k_n} \vdash \Delta_n \quad (10)
+$$
+
+$$
+\vdash_{\text{LNIF}} \Gamma_1, (\exists x A)^{k_1} \vdash \Delta_1 // \dots // \Gamma_n, (\exists x A)^{k_n} \vdash \Delta_n \quad (11)
+$$
+
+$$
+\vdash_{\text{LNIF}} \Gamma_1, A[a/x]^{k_1} \vdash \Delta_1 // \dots // \Gamma_n, A[a/x]^{k_n} \vdash \Delta_n
+\quad (12)
+$$
+
+*Proof.* By [17, Lem. 5.9] we know that claims (i)-(iii) hold for LNG. It is easy to show that the proof of [17, Lem. 5.9] can be extended to the quantifier rules for claims (i) and (ii) by applying the IH and then the rule. We therefore argue that claim (iii) continues to hold in the presence of the quantifier rules, and also argue that claims (iv) and (v) hold.
+
+Claim (iii). We know that (6) implies (8) by Lem. 5. One can prove that (6)
+implies (7) by induction on the height of the given derivation. By [17, Lem. 5.9]
+---PAGE_BREAK---
+
+all cases with the exception of the quantifier rules hold. All of the quantifier cases
+are handled by applying IH followed by an application of the corresponding rule.
+
+*Claim (iv).* Statement (9) implies (10) by Lem. 5.
+
+*Claim (v).* We argue that (11) implies (12) by induction on the height of the given derivation of $\mathcal{G} = \Gamma_1, (\exists x A)^{k_1} \vdash \Delta_1 // \dots // \Gamma_n, (\exists x A)^{k_n} \vdash \Delta_n$.
+
+*Base case.* If $\mathcal{G}$ is the result of $(id_1)$, $(id_2)$, or $(\perp_l)$, then $\Gamma_1, A[a/x]^{k_1} \vdash \Delta_1 // \dots // \Gamma_n, A[a/x]^{k_n} \vdash \Delta_n$ is an instance of the corresponding rule as well.
+
+*Inductive step.* For all rules, with the exception of $(\forall_{r_1})$, $(\exists_l)$, and $(\forall_{r_2})$, we apply IH to the premise(s) followed by the rule. In the $(\forall_{r_1})$, $(\exists_l)$, and $(\forall_{r_2})$ cases we must ensure that the eigenvariable of the inference is not identical to the parameter $a$ occurring in $A[a/x]$; however, due to Lem. 4, this can always be ensured. $\square$
\ No newline at end of file
diff --git a/samples/texts_merged/6061398.md b/samples/texts_merged/6061398.md
new file mode 100644
index 0000000000000000000000000000000000000000..444e05099cb43829fa7a0ce910698b8f189cc242
--- /dev/null
+++ b/samples/texts_merged/6061398.md
@@ -0,0 +1,310 @@
+
+---PAGE_BREAK---
+
+Further decay results on the system of NLS
+equations in lower order Sobolev spaces
+
+Chunhua Li
+
+Abstract.
+
+The initial value problem for a system of nonlinear Schrödinger equations with quadratic nonlinearities in two space dimensions is studied. We show that there exists a unique global solution of this initial value problem which decays like $t^{-1}$ as $t \to +\infty$ in $L^\infty(\mathbb{R}^2)$ for small initial data in lower order Sobolev spaces.
+
+§1. Introduction and main results
+
+We consider global existence of solutions and time decay of the
+solutions to the following system of nonlinear Schrödinger equations
+
+$$
+(1) \quad \left\{
+\begin{array}{ll}
+i \partial_t v_j + \frac{1}{2m_j} \Delta v_j = F_j (v_1, \dots, v_l), & t \in \mathbb{R}, x \in \mathbb{R}^2, \\
+v_j(0, x) = \phi_j(x), & x \in \mathbb{R}^2,
+\end{array}
+\right.
+$$
+
+for $1 \le j \le l$, where $\overline{v}_j$ is the complex conjugate of $v_j$, $m_j$ is the mass of a particle and the quadratic nonlinearity has the form
+
+$$
+F_j(v_1, \dots, v_l) = \sum_{1 0$ for $1 \le j \le l$. In fact, System (1) can be regarded as the nonrelativistic version of a system of nonlinear Klein-Gordon equations
+
+(3) $\frac{1}{2c^2m_j}\partial_t^2 u_j - \frac{1}{2m_j}\Delta u_j + \frac{m_jc^2}{2}u_j = -F_j(u_1, \dots, u_l), \quad j = 1, \dots, l,$
+
+under Condition (A₂), where *c* is the speed of light. The related systems of Klein-Gordon equations were considered in [7], [8] and [10].
+
+In what follows, we use the same notation for both the vector-valued function spaces and the scalar ones. For $m, s \in \mathbb{R}$, the weighted Sobolev space $H^{m,s}(\mathbb{R}^2)$ is defined by
+
+$$ H^{m,s}(\mathbb{R}^2) = \left\{ f = (f_1, \dots, f_l) \in L^2(\mathbb{R}^2);\ \|f\|_{H^{m,s}(\mathbb{R}^2)} = \sum_{j=1}^l \|f_j\|_{H^{m,s}(\mathbb{R}^2)} < \infty \right\}, $$
+---PAGE_BREAK---
+
+where $\|f_j\|_{H^{m,s}(\mathbb{R}^2)} = \| (1-\Delta)^{\frac{m}{2}} (1+|x|^2)^{\frac{s}{2}} f_j \|_{L^2(\mathbb{R}^2)}$. We write
+$\|f_j\|_{L^2(\mathbb{R}^2)} = \|f_j\|$ and $H^m(\mathbb{R}^2) = H^{m,0}(\mathbb{R}^2)$ for simplicity. We denote by the same letter $C$ various positive constants.
+
+Our main theorem is stated as follows:
+
+**Theorem 1.** Assume that $(A_1)$ and $(A_2)$ hold. We also assume that $\phi = (\phi_1, \dots, \phi_l) \in H^{\beta,0}(\mathbb{R}^2) \cap H^{0,\beta}(\mathbb{R}^2)$, where $1 < \beta < 2$. Then for some $\varepsilon > 0$ there exists a unique global solution $v = (v_1, \dots, v_l) \in C(\mathbb{R}; H^{\beta,0}(\mathbb{R}^2) \cap H^{0,\beta}(\mathbb{R}^2))$ and
+
+$$
+\|v(t)\|_{L^\infty(\mathbb{R}^2)} = \sum_{i=1}^{l} \|v_i(t)\|_{L^\infty(\mathbb{R}^2)} \le C(1+|t|)^{-1}
+$$
+
+for any $\phi = (\phi_1, \dots, \phi_l)$ satisfying
+
+$$
+\|\phi\|_{H^{\beta,0}(\mathbb{R}^2)} + \|\phi\|_{H^{0,\beta}(\mathbb{R}^2)} = \sum_{i=1}^{l} \left( \|\phi_i\|_{H^{\beta,0}(\mathbb{R}^2)} + \|\phi_i\|_{H^{0,\beta}(\mathbb{R}^2)} \right) \leq \varepsilon.
+$$
+
+The global existence result for System (1) can be obtained by using
+the methods of [11] and [9]. The $L^\infty(\mathbb{R}^2)$-time decay of small solutions to
+System (1) in $H^{\beta,0}(\mathbb{R}^2) \cap H^{0,\beta}(\mathbb{R}^2)$, where $1 < \beta < 2$, is our main
+result and will be proved by showing a priori estimates of local-in-time
+solutions. This idea was used in [4] and [15].
+
+**Remark 1.** By the same method, we may obtain time decay results similar to Theorem 1 in the case of $\beta > 2$.
+
+§2. A priori estimates of solutions
+
+For any $\phi = (\phi_1, \dots, \phi_l) \in H^{\beta,0}(\mathbb{R}^2) \cap H^{0,\beta}(\mathbb{R}^2)$, where $1 < \beta < 2$, we let $T > 0$ and let $v = (v_1, \dots, v_l)$ be a solution of System (1) in the space $X_T = \{v \in C([0,T]; H^{\beta,0}(\mathbb{R}^2) \cap H^{0,\beta}(\mathbb{R}^2)); \|v\|_{X_T} < \infty\}$ with norm
+
+$$
+\|v\|_{X_T} = \sum_{j=1}^{l} \|v_j\|_{X_T} = \sup_{t \in [0,T]} \sum_{j=1}^{l} (1+t)^{\delta} \left\| U_{\frac{1}{m_j}} (-t) v_j \right\|_{H^{0,\beta}(\mathbb{R}^2)}
+$$
+
+where $0 < \delta < \frac{1}{4}(\beta - 1)$. Existence of local-in-time solutions can be
+obtained by the contraction mapping principle. We state it without proof
+(see [14]).
+
+**Theorem 2.** Let $T > 1$. Then there exists a small $\varepsilon > 0$ such that
+for any $\phi = (\phi_1, \dots, \phi_l) \in H^{\beta,0}(\mathbb{R}^2) \cap H^{0,\beta}(\mathbb{R}^2)$ with $\|\phi\|_{H^{\beta,0}(\mathbb{R}^2)} + \|\phi\|_{H^{0,\beta}(\mathbb{R}^2)} \le \varepsilon$, where $1 < \beta < 2$, System (1) has a unique solution $v = (v_1, \dots, v_l) \in X_T$ such that $\|v\|_{X_T} \le 2\varepsilon$.
+---PAGE_BREAK---
+
+
+Let $U_\delta(t)$ be the Schrödinger evolution group defined by $U_\delta(t) = \mathcal{F}^{-1}E^\delta \mathcal{F}$ with $\delta \neq 0$, where $E = e^{-\frac{it|\xi|^2}{2}}$ for $t \neq 0$. In what follows we let $v$ be a solution given by the above theorem. We define the dilation operator by $(D_\delta \phi)(x) = \frac{1}{i\delta}\phi\left(\frac{x}{\delta}\right)$ for $\delta \neq 0$ and define $E = e^{-\frac{it|\xi|^2}{2}}$, $M = e^{\frac{i|x|^2}{2t}}$ for $t \neq 0$. The evolution operator $U_\delta(t)$ for $t \neq 0$ is written as
+
+$$
+(U_{\delta}(t)\phi)(x) = M^{-\frac{1}{\delta}}(x) \left( D_{\delta t}\, \mathcal{F}\left( M^{\frac{1}{\delta}}(y)\, \phi(y) \right) \right)(x).
+$$
+
+We have
+
+$$
+U_{\delta}(-t)\phi(x) = -M^{\frac{1}{\delta}}(\mathcal{F}^{-1}E^{\delta}D_{\frac{1}{\delta t}}\phi)(x).
+$$
+
+Then the free evolution group is factorized as $U_{\delta}(t) \mathcal{F}^{-1} = M^{-\frac{1}{\delta}} D_{\delta t} M_{-\frac{1}{\delta}}$, where $M_{-\frac{1}{\delta}} = \mathcal{F} M^{\frac{1}{\delta}} \mathcal{F}^{-1}$. Moreover we have $\mathcal{F} U_{\delta}(-t) = -M_{\frac{1}{\delta}} E^{\delta} D_{\frac{1}{\delta t}}$. These formulas were used in [6] first.
+
+We estimate the difference between the free Schrödinger solution and its main term. Lemma 1 is obtained in [4].
+
+**Lemma 1.** Let $f \in H^{0,\beta}(\mathbb{R}^2)$, $\delta \neq 0$. Then
+
+$$
+\| f - M^{-\frac{1}{\delta}} D_{\delta t} F U_{\delta}(-t) f \|_{L^{\infty}(\mathbb{R}^2)} \le C |t|^{-1-\alpha} \| U_{\delta}(-t) f \|_{H^{0,\beta}(\mathbb{R}^2)},
+$$
+
+for $|t| \ge 1$, where $0 < \alpha < 1$ and $\beta > 1 + 2\alpha$.
+
+If we multiply both sides of (1) by $\mathcal{F} U_{\frac{1}{m_j}}(-t)$, then we can divide the nonlinear term into the main term and the remainder term under the gauge condition $(A_1)$. Detailed calculations can be found in [13].
+
+We define
+
+$$
+R_{1,j} \\
+= i (M_{m_j} - 1) \frac{m_j}{t} F_j \left( -D_{\frac{m_j}{m_1}} M_{m_1}^{-1} F U_{\frac{1}{m_1}} (-t) v_1, \dots, \right. \\
+\left. \qquad -D_{\frac{m_j}{m_l}} M_{m_l}^{-1} F U_{\frac{1}{m_l}} (-t) v_l \right)
+$$
+
+and
+
+$$
+R_{2,j} = i \frac{m_j}{t} F_j \left( -D_{\frac{m_j}{m_1}} M_{m_1}^{-1} F U_{\frac{1}{m_1}} (-t) v_1, \dots, -D_{\frac{m_j}{m_l}} M_{m_l}^{-1} F U_{\frac{1}{m_l}} (-t) v_l \right) \\
+\qquad - i \frac{m_j}{t} F_j \left( -D_{\frac{m_j}{m_1}} F U_{\frac{1}{m_1}} (-t) v_1, \dots, -D_{\frac{m_j}{m_l}} F U_{\frac{1}{m_l}} (-t) v_l \right).
+$$
+---PAGE_BREAK---
+
+Then the nonlinear term can be divided into two parts such that
+
+$$ (4) \qquad i\partial_t u_j = \frac{1}{t} F_j(u_1, \dots, u_l) + D_{\frac{1}{m_j}} \sum_{i=1}^{2} R_{i,j}, $$
+
+where
+
+$$ u_j = D_{\frac{1}{m_j}} F U_{\frac{1}{m_j}}(-t) v_j. $$
+
+We multiply both sides of (4) by $c_j \bar{u}_j$, take the imaginary part and use Condition (A₁) to obtain
+
+$$ (5) \quad \partial_t \left( \sum_{j=1}^l c_j |u_j|^2 \right) = 2\operatorname{Im} \left( \sum_{j=1}^l c_j \left( D_{\frac{1}{m_j}} \sum_{i=1}^2 R_{i,j} \right) \bar{u}_j \right), $$
+
+where $c_j > 0$ for $1 \le j \le l$. We prove that the second term on the right-hand side of (4) is a remainder term.
+
+**Lemma 2.** We have
+
+$$ \sum_{j=1}^{l} \sum_{i=1}^{2} \|R_{i,j}\|_{L^{\infty}(\mathbb{R}^2)} \le C|t|^{-1-\alpha} \|U_{\frac{1}{m}}(-t)v\|_{H^{0,\beta}(\mathbb{R}^2)}^2, $$
+
+for $|t| \ge 1$, where
+
+$$ \| U_{\frac{1}{m}}(-t) v \|_{H^{0,\beta}(\mathbb{R}^2)} = \sum_{j=1}^{l} \| U_{\frac{1}{m_j}}(-t) v_j \|_{H^{0,\beta}(\mathbb{R}^2)}, $$
+
+$0 < \alpha < 1$ and $\beta > 1 + 2\alpha$.
+
+*Proof.* By the Schwarz inequality and Lemma X4 in [12], we have
+
+$$
+\begin{align*}
+& \|R_{1,j}\|_{L^\infty(\mathbb{R}^2)} \\
+&\le C|t|^{-1-\alpha} \|F_j\left(-D_{\frac{m_j}{m_1}} M_{m_1}^{-1} F U_{\frac{1}{m_1}}(-t)v_1, \dots, \right. \\
+&\qquad \left. -D_{\frac{m_j}{m_l}} M_{m_l}^{-1} F U_{\frac{1}{m_l}}(-t)v_l\right)\|_{H^{\beta,0}(\mathbb{R}^2)} \\
+&\le C|t|^{-1-\alpha} \sum_{p,q=1}^{l} \|U_{\frac{1}{m_p}}(-t)v_p\|_{L^1(\mathbb{R}^2)} \|U_{\frac{1}{m_q}}(-t)v_q\|_{H^{0,\beta}(\mathbb{R}^2)} \\
+&\le C|t|^{-1-\alpha} \sum_{p,q=1}^{l} \|U_{\frac{1}{m_p}}(-t)v_p\|_{H^{0,\beta}(\mathbb{R}^2)} \|U_{\frac{1}{m_q}}(-t)v_q\|_{H^{0,\beta}(\mathbb{R}^2)} \\
+&\le C|t|^{-1-\alpha} \sum_{j=1}^{l} \|U_{\frac{1}{m_j}}(-t)v_j\|_{H^{0,\beta}(\mathbb{R}^2)}^2,
+\end{align*}
+$$
+---PAGE_BREAK---
+
+where $0 < \alpha < 1$ and $\beta > 1 + 2\alpha$.
+
+We can estimate $\|R_{2,j}\|_{L^\infty(\mathbb{R}^2)}$ by the same method.
+
+Q.E.D.
+
+We define
+
+$$ (6) \qquad \left| J_{\frac{1}{m_j}} \right|^s = U_{\frac{1}{m_j}}(t) |x|^s U_{\frac{1}{m_j}}(-t), $$
+
+where $s > 0$. Then (6) can be presented as (see [4])
+
+$$ \left| J_{\frac{1}{m_j}} \right|^s = \overline{M^{m_j}} \left( -\frac{t^2}{m_j^2} \Delta \right)^{\frac{s}{2}} M^{m_j}. $$
+
+Moreover we have commutation relations between $\left|J_{\frac{1}{m_j}}\right|^s$ and $L_{\frac{1}{m_j}} = i\partial_t + \frac{1}{2m_j}\Delta$ such that
+
+$$ \left[ L_{\frac{1}{m_j}}, \left| J_{\frac{1}{m_j}} \right|^s \right] = 0. $$
+
+We evaluate the derivative of $\left\| U_{\frac{1}{m_j}}(-t)v \right\|_{H^{0,s}(\mathbb{R}^2)}$ with respect to $t$. Then we have
+
+**Lemma 3.** We have
+
+$$ \begin{align*} & \frac{d}{dt} \| U_{\frac{1}{m}}(-t) v \|_{H^{0,s}(\mathbb{R}^2)} \\ & \le Ct^{-1} \| U_{\frac{1}{m}}(-t) v \|_{H^{0,s}(\mathbb{R}^2)} \| \mathcal{F}U_{\frac{1}{m}}(-t) v \|_{L^\infty(\mathbb{R}^2)} \\ & + Ct^{-1-\alpha} \| U_{\frac{1}{m}}(-t) v \|_{H^{0,s}(\mathbb{R}^2)}^2 \end{align*} $$
+
+for any $t \in [1,T]$, where $0 < \alpha < 1$, $2 > \beta > 1+2\alpha$ and
+
+$$ \| \mathcal{F} U_{\frac{1}{m}}(-t) v \|_{L^\infty(\mathbb{R}^2)} = \sum_{j=1}^{l} \| \mathcal{F} U_{\frac{1}{m_j}}(-t) v_j \|_{L^\infty(\mathbb{R}^2)}. $$
+
+By Lemma 3, we have the following desired a priori estimates of local solutions.
+
+**Lemma 4.** There exist a small $\varepsilon > 0$ and a $\delta$ with $\varepsilon^{1/2} < \delta < \alpha/2$ ($\alpha$ is as in Lemma 3) such that
+
+$$ \| U_{\frac{1}{m}}(-t) v \|_{H^{0, s}(\mathbb{R}^2)} (1+t)^{-\delta} + \| \mathcal{F} U_{\frac{1}{m}}(-t) v \|_{L^\infty(\mathbb{R}^2)} < \varepsilon^{1/2}, $$
+---PAGE_BREAK---
+
+and
+
+$$ \sum_{j=1}^{l} \left( \| \phi_j \|_{H^{\beta, 0}(\mathbb{R}^2)} + \| \phi_j \|_{H^{0, \beta}(\mathbb{R}^2)} \right) \leq \varepsilon $$
+
+for any $t \in [1, T]$, where $2 > \beta > 1$.
+
+The proofs of Lemma 3 and Lemma 4 are similar to those in [13]. Because of length limitations, we omit them here.
+
+### §3. Proof of Theorem 1.
+
+*Proof.* We consider the case of $t \ge 1$. From Lemma 1 we have
+
+$$ \|v_j\|_{L^\infty(\mathbb{R}^2)} \le Ct^{-1} \|F U_{\frac{1}{m_j}}(-t)v_j\|_{L^\infty(\mathbb{R}^2)} + Ct^{-1-\alpha} \|U_{\frac{1}{m_j}}(-t)v_j\|_{H^{0,\beta}(\mathbb{R}^2)}, $$
+
+where $0 < \alpha < 1$ and $\beta > 1+2\alpha$. By the standard continuation argument we have a unique global-in-time solution such that
+
+$$ \begin{aligned} & \| U_{\frac{1}{m}} (-t) v \|_{H^{0, \beta}(\mathbb{R}^2)} \le \varepsilon^{\frac{1}{2}} (1+t)^{\delta}, \\ & \| F U_{\frac{1}{m}} (-t) v \|_{L^{\infty}(\mathbb{R}^2)} \le \varepsilon^{\frac{1}{2}}. \end{aligned} $$
+
+for any $t \ge 1$, where $\varepsilon^{\frac{1}{2}} < \delta < \frac{\alpha}{2}$ and $2 > \beta > 1+2\alpha$. Therefore we get the time decay estimates
+
+$$ \begin{align*} \|v\|_{L^{\infty}(\mathbb{R}^2)} &= \sum_{j=1}^{l} \|v_j\|_{L^{\infty}(\mathbb{R}^2)} \\ &\le Ct^{-1} \|F U_{\frac{1}{m}}(-t)v\|_{L^{\infty}(\mathbb{R}^2)} + Ct^{-1-\alpha} \|U_{\frac{1}{m}}(-t)v\|_{H^{0,\beta}(\mathbb{R}^2)} \\ &\le C (\varepsilon^{\frac{1}{2}} t^{-1} + t^{-1-\alpha+\delta}) \le Ct^{-1} \end{align*} $$
+
+for $t \ge 1$. If $t \in [0, 1]$, we have $\|v\|_{L^\infty(\mathbb{R}^2)} \le C\varepsilon$ by $\|v(0)\|_{H^{\beta,0}(\mathbb{R}^2)} \le \varepsilon$ for $2 > \beta > 1$. In the case of $t \le 0$, the theorem follows by the same method. This completes the proof of the theorem. Q.E.D.
+
+**Acknowledgments.** The author would like to express her deep thanks to Professor Nakao Hayashi for his helpful comments. The author would also like to thank the referees for their useful suggestions.
+---PAGE_BREAK---
+
+References
+
+[1] M. Colin, T. Colin and M. Ohta, Stability of solitary waves for a system of nonlinear Schrödinger equations with three wave interaction, Ann. Inst. H. Poincaré Anal. Non Linéaire, **26** (2009), 2211-2226.
+
+[2] N. Hayashi, C. Li and P. I. Naumkin, On a system of nonlinear Schrödinger equations in 2D, Differential Integral Equations, **24** (2011), 417–434.
+
+[3] N. Hayashi, C. Li and P. I. Naumkin, Modified wave operator for a system of nonlinear Schrödinger equations in 2D, Comm. Partial Differential Equations, **37** (2012), 947–968.
+
+[4] N. Hayashi and P. I. Naumkin, Asymptotics for large time of solutions to the nonlinear Schrödinger and Hartree equations, Amer. J. Math., **120** (1998), 369–389.
+
+[5] N. Hayashi, C. Li and T. Ozawa, Small data scattering for a system of nonlinear Schrödinger equations, Differ. Equ. Appl., **3** (2011), 415–426.
+
+[6] N. Hayashi and T. Ozawa, Scattering theory in the weighted $L^2(\mathbb{R}^n)$ spaces for some Schrödinger equations, Ann. Inst. H. Poincaré Phys. Théor., **48** (1988), 17–37.
+
+[7] Y. Kawahara and H. Sunagawa, Global small amplitude solutions for two-dimensional nonlinear Klein-Gordon systems in the presence of mass resonance, J. Differential Equations, **251** (2011), 2549–2567.
+
+[8] S. Katayama, T. Ozawa and H. Sunagawa, A note on the null condition for quadratic nonlinear Klein-Gordon systems in two space dimensions, Comm. Pure Appl. Math., **65** (2012), 1285-1302.
+
+[9] T. Ozawa, Remarks on proofs of conservation laws for nonlinear Schrödinger equations, Calc. Var. Partial Differential Equations, **25** (2006), 403–408.
+
+[10] H. Sunagawa, On global small amplitude solutions to systems of cubic nonlinear Klein-Gordon equations with different mass terms in one space dimension, J. Differential Equations, **192** (2003), 308-325.
+
+[11] Y. Tsutsumi, $L^2$-solutions for nonlinear Schrödinger equations and nonlinear groups, Funkcial. Ekvac., **30** (1987), 115-125.
+
+[12] T. Kato and G. Ponce, Commutator estimates and the Euler and Navier-Stokes equations, Comm. Pure Appl. Math., **41** (1988), 891–907.
+
+[13] C. Li, Decay of solutions for a system of nonlinear Schrödinger equations in 2D, Discrete Contin. Dyn. Syst., **32** (2012), 4265–4285.
+
+[14] T. Cazenave and F. B. Weissler, The Cauchy problem for the critical nonlinear Schrödinger equation in $H^s$, Nonlinear Anal., **14** (1990), 807-836.
+
+[15] A. Shimomura, Asymptotic behavior of solutions for Schrödinger equations with dissipative nonlinearities, Comm. Partial Differential Equations, **31** (2006), 1407-1423.
+
+Department of Mathematics, Graduate School of Science
+Osaka University, Osaka 560-0043, Japan
+and
+Department of Mathematics, College of Science
+Yanbian University, Jilin Province 133002, China
+E-mail address: sxlch@ybu.edu.cn
\ No newline at end of file
diff --git a/samples/texts_merged/6152053.md b/samples/texts_merged/6152053.md
new file mode 100644
index 0000000000000000000000000000000000000000..e9657c5134684b64f6d7e6f6e71cd2e8ccd01c46
--- /dev/null
+++ b/samples/texts_merged/6152053.md
@@ -0,0 +1,2047 @@
+
+---PAGE_BREAK---
+
+Convex and non-convex regularization
+methods for spatial point processes
+intensity estimation
+
+Achmad Choiruddin*
+
+Department of Mathematical Sciences, Aalborg University, Denmark
+e-mail: achmad@math.aau.dk
+
+Jean-François Coeurjolly†
+
+Department of Mathematics, Université du Québec à Montréal (UQAM), Canada
+Univ. Grenoble Alpes, CNRS, Grenoble INP, LJK, 38000 Grenoble, France
+e-mail: coeurjolly.jean-francois@uqam.ca
+and
+Frédérique Letué‡
+
+Univ. Grenoble Alpes, CNRS, Grenoble INP, LJK, 38000 Grenoble, France
+e-mail: frederique.letue@univ-grenoble-alpes.fr
+
+**Abstract:** This paper deals with feature selection procedures for spatial point processes intensity estimation. We consider regularized versions of estimating equations based on Campbell theorem. In particular, we consider two classical functions: the Poisson likelihood and the logistic regression likelihood. We provide general conditions on the spatial point processes and on penalty functions which ensure oracle property, consistency, and asymptotic normality under the increasing domain setting. We discuss the numerical implementation and assess finite sample properties in simulation studies. Finally, an application to tropical forestry datasets illustrates the use of the proposed method.
+
+MSC 2010 subject classifications: 62H11, 60G55, 62J07, 65C60, 97K80.
+Keywords and phrases: Campbell theorem, estimating function, feature selection, logistic regression likelihood, penalized regression, Poisson likelihood.
+
+Received March 2017.
+
+Contents
+
+
+
+1 Introduction ..... 1211
+2 Background and problem formulation ..... 1213
+  2.1 Spatial point processes and intensity functions ..... 1213
+
+
+
+*The Danish Council for Independent Research — Natural Sciences, grant DFF - 7014-00074 "Statistics for point processes in space and beyond" and "Centre for Stochastic Geometry and Advanced Bioimaging" funded by grant 8721 from the Villum Foundation
+
+†Natural Sciences and Engineering Research Council of Canada
+
+‡Institute of Engineering Univ. Grenoble Alpes
+---PAGE_BREAK---
+
+
+
+
+  2.2 Parametric intensity estimation ..... 1214
+  2.3 Regularization techniques ..... 1215
+3 Asymptotic theory ..... 1217
+  3.1 Notation and conditions ..... 1217
+  3.2 Main results ..... 1219
+  3.3 Discussion of the conditions ..... 1220
+4 Simulation study ..... 1221
+  4.1 Simulation set-up ..... 1221
+  4.2 Simulation results ..... 1223
+  4.3 Logistic regression ..... 1229
+5 Application to forestry datasets ..... 1231
+6 Conclusion and discussion ..... 1233
+A Parametric intensity estimation ..... 1234
+  A.1 Maximum likelihood estimation ..... 1234
+  A.2 Poisson likelihood ..... 1235
+  A.3 Weighted Poisson likelihood ..... 1235
+  A.4 Logistic regression likelihood ..... 1236
+B Examples of spatial point processes models with prescribed intensity function ..... 1237
+  B.1 Poisson point process ..... 1237
+  B.2 Cox processes ..... 1238
+C Numerical methods ..... 1239
+  C.1 Weighted Poisson regression ..... 1239
+  C.2 Logistic regression ..... 1240
+  C.3 Coordinate descent algorithm ..... 1241
+    C.3.1 Convex penalty functions ..... 1241
+    C.3.2 Non-convex penalty functions ..... 1243
+  C.4 Selection of regularization or tuning parameter ..... 1243
+D A few references for regularization methods ..... 1244
+E Auxiliary Lemma ..... 1245
+F Proof of Theorem 1 ..... 1246
+G Proof of Theorem 2 ..... 1248
+H Maps of covariates ..... 1251
+Acknowledgements ..... 1252
+References ..... 1252
+
+
+
+
+# 1. Introduction
+
+Spatial point pattern data arise in many contexts where interest lies in describing the distribution of an event in space. Some examples include the locations of trees in a forest, gold deposits mapped in a geological survey, stars in a star cluster, animal sightings, locations of some specific cells in the retina, or road accidents (see e.g. Møller and Waagepetersen, 2004; Illian et al., 2008; Baddeley et al., 2015). Interest in methods for analyzing spatial point pattern data is
+---PAGE_BREAK---
+
+rapidly expanding across many fields of science, notably in ecology, epidemiology, biology, geosciences, astronomy, and econometrics.
+
+One of the main interests when analyzing spatial point pattern data is to estimate the intensity which characterizes the probability that a point (or an event) occurs in an infinitesimal ball around a given location. In practice, the intensity is often assumed to be a parametric function of spatial covariates (e.g. Waagepetersen, 2007; Møller and Waagepetersen, 2007; Waagepetersen, 2008; Waagepetersen and Guan, 2009; Guan and Shen, 2010; Coeurjolly and Møller, 2014). In this paper, we assume that the intensity function $\rho$ is parameterized by a vector $\boldsymbol{\beta}$ and has a log-linear specification
+
+$$ \rho(u; \boldsymbol{\beta}) = \exp(\mathbf{z}(u)^{\top}\boldsymbol{\beta}), u \in D \subset \mathbb{R}^d, \quad (1.1) $$
+
+where $\mathbf{z}(u) = \{z_1(u), \dots, z_p(u)\}^\top$ are the $p$ spatial covariates measured at coordinate $u$, $\boldsymbol{\beta} = \{\beta_1, \dots, \beta_p\}^\top$ is a real $p$-dimensional parameter, $D$ is the domain of observation, and $d$ represents the state space of the spatial point processes (usually $d=2,3$).
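To make the model concrete, here is a minimal sketch (not from the paper) of simulating an inhomogeneous Poisson process with the log-linear intensity (1.1) on the unit square by thinning; the covariate function `z` and coefficients `beta` below are illustrative assumptions.

```python
import math
import random

def rho(u, beta, z):
    """Log-linear intensity rho(u; beta) = exp(z(u)^T beta), cf. (1.1)."""
    return math.exp(sum(b * zk for b, zk in zip(beta, z(u))))

def simulate_thinning(beta, z, rho_max, rng):
    """Lewis-Shedler thinning on the unit square [0, 1]^2: draw a
    homogeneous Poisson process of rate rho_max, then retain each
    candidate point u with probability rho(u; beta) / rho_max."""
    # Knuth's method for a Poisson(rho_max) candidate count (window area is 1)
    threshold, p, k = math.exp(-rho_max), 1.0, 0
    while p > threshold:
        k += 1
        p *= rng.random()
    n = k - 1
    points = []
    for _ in range(n):
        u = (rng.random(), rng.random())
        if rng.random() * rho_max <= rho(u, beta, z):
            points.append(u)
    return points

# Illustrative covariate z(u) = (1, first coordinate) and coefficients beta;
# sup of rho over the window is exp(3 + 1), a valid thinning bound.
z = lambda u: (1.0, u[0])
beta = (3.0, 1.0)
pts = simulate_thinning(beta, z, rho_max=math.exp(4.0), rng=random.Random(0))
```

The bound `rho_max` must dominate the intensity over the whole window, otherwise the thinned process is no longer Poisson with intensity (1.1).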
+
+Methods to estimate $\boldsymbol{\beta}$ when $p$ is reasonable are now quite standard. Instead of the maximum likelihood estimation which is computationally expensive (Møller and Waagepetersen, 2004), standard methods are based on estimating equation derived from the Campbell theorem and include the Poisson likelihood (e.g. Waagepetersen, 2007) and logistic regression likelihood methods (e.g. Baddeley et al., 2014) (see Appendix A for the details on these methods). An important advantage of such methods is their simple implementation. From a numerical point of view, it has been demonstrated (see e.g. Baddeley et al., 2015) that the Poisson likelihood and logistic regression likelihood can be efficiently approximated by a generalized linear model (more precisely a weighted quasi-Poisson regression for the first one and a logistic regression for the second one). GLM software can, therefore, be adapted to accurately estimate $\boldsymbol{\beta}$. This is exactly what is proposed by the R package `spatstat` (Baddeley et al., 2015) devoted to the analysis of spatial point patterns.
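For the log-linear model (1.1), the Poisson likelihood mentioned above is the contrast $\ell(\boldsymbol{\beta}) = \sum_{u \in \mathbf{X}} \mathbf{z}(u)^\top\boldsymbol{\beta} - \int_D \exp(\mathbf{z}(u)^\top\boldsymbol{\beta})\,\mathrm{d}u$. A rough numerical sketch (the integral is replaced by a Riemann sum on a grid of the unit square; the points and covariate are illustrative, not from the paper):

```python
import math

def poisson_loglik(beta, points, z, grid_n=50):
    """Poisson likelihood contrast l(beta) = sum_{u in X} z(u)^T beta
    - int_D exp(z(u)^T beta) du, with the integral approximated by a
    midpoint Riemann sum over a grid_n x grid_n grid of [0, 1]^2."""
    dot = lambda u: sum(b * zk for b, zk in zip(beta, z(u)))
    term1 = sum(dot(u) for u in points)
    h = 1.0 / grid_n
    term2 = sum(
        math.exp(dot(((i + 0.5) * h, (j + 0.5) * h))) * h * h
        for i in range(grid_n)
        for j in range(grid_n)
    )
    return term1 - term2

# Illustrative data: two points, constant covariate z(u) = (1,);
# with beta = 0 the contrast equals 0 - |D| = -1.
ll = poisson_loglik((0.0,), [(0.2, 0.3), (0.7, 0.9)], lambda u: (1.0,))
```

In practice this contrast is maximized via the weighted quasi-Poisson regression approximation described above rather than by direct grid evaluation.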
+
+In recent decades, with advances in technology and substantial investment in data collection, applications requiring the estimation of an intensity function involving a large number of covariates have become increasingly common (e.g. Hubbell et al., 2005; Renner and Warton, 2013; Thurman et al., 2015). When the intensity is a function of many variables, covariate selection becomes inevitable. Variable selection in the context of spatial point processes is a recent topic. Thurman and Zhu (2014) focus on using the adaptive lasso to select variables for inhomogeneous Poisson point processes. This study is extended to clustered spatial point processes by Thurman et al. (2015), who establish asymptotic properties (consistency, sparsity, and asymptotic normality) of the estimates. Yue and Loh (2015) consider modeling spatial point data with Poisson, pairwise interaction point processes, and Neyman-Scott cluster models, incorporating lasso, adaptive lasso, and elastic net regularization methods. The latter work does not provide any theoretical result.
+
+In this paper, we extend the previous papers from a theoretical point of view by considering more estimation methods and more penalty functions. We propose regularized
+---PAGE_BREAK---
+
+versions of either the Poisson or the logistic regression likelihood to estimate the intensity of spatial point processes. The penalty functions we consider are either convex or non-convex. We provide general conditions on the characteristics of the spatial point process (finite moments, mixing conditions) and on the penalty function that ensure an oracle property and a central limit theorem. It is also worth noting that our theoretical results hold under less restrictive assumptions on the model and on the asymptotic covariance matrix than the ones required by Thurman et al. (2015) (see Remark 3). Since we outline the link between the criteria we maximize and penalized generalized linear models, our work is mainly based on the pioneering paper by Fan and Li (2001). Our contribution is to exploit and extend this paper: first, the asymptotic framework we consider is an increasing-domain one, i.e. the domain of observation, say $D_n \subset \mathbb{R}^d$, increases to $\mathbb{R}^d$ with $n$ (so $|D_n|$, the volume of $D_n$, plays the same role as $n$ in the standard literature); second, unlike the work by Fan and Li (2001), which assumes independent observations, our results can be applied to spatial point processes which exhibit dependence (e.g. Neyman-Scott processes, log-Gaussian Cox processes).
+
+From a numerical point of view, we are led to implement regularization methods for generalized linear models. This is quite straightforward since we only need to combine the `spatstat` R package with the two R packages implementing penalized estimation for generalized linear models, `glmnet` (Friedman et al., 2010) and `ncvreg` (Breheny and Huang, 2011).
+
+The rest of the paper is organized as follows. Section 2 gives the necessary background on spatial point processes, briefly details how a parametric intensity function is classically estimated, and formulates the problem we tackle. This section is quite short, but non-expert readers can find more details in Appendices A-D. Our main contribution is to obtain asymptotic properties for various spatial point process models, estimation methods, and penalty functions. These results are detailed in Section 3. Section 4 investigates the finite-sample properties of the proposed method in a simulation study, followed by an application to tropical forestry datasets in Section 5 and a conclusion and discussion in Section 6. Proofs of the main results are postponed to Appendices E-G.
+
+## 2. Background and problem formulation
+
+### 2.1. Spatial point processes and intensity functions
+
+Let $\mathbf{X}$ be a spatial point process on $\mathbb{R}^d$. Let $D \subset \mathbb{R}^d$ be a compact set of Lebesgue measure $|D|$ which will play the role of the observation domain. We view $\mathbf{X}$ as a locally finite random subset of $\mathbb{R}^d$, i.e. the random number of points of $\mathbf{X}$ in $B$, $N(B)$, is almost surely finite whenever $B \subset \mathbb{R}^d$ is a bounded region. A realization of $\mathbf{X}$ in $D$ is thus a set $\mathbf{x} = \{x_1, x_2, \dots, x_m\}$, where $x_i \in D$ and $m$ is the observed number of points in $D$. Note that $m$ is the realization of a random variable and $0 \le m < \infty$.
+---PAGE_BREAK---
+
+Suppose $\mathbf{X}$ has intensity function $\rho$ and second-order product density $\rho^{(2)}$. The Campbell theorem (see e.g. Møller and Waagepetersen, 2004) states that, for any function $k: \mathbb{R}^d \to [0, \infty)$ or $k: \mathbb{R}^d \times \mathbb{R}^d \to [0, \infty)$,
+
+$$ \mathbb{E}\left(\sum_{u \in \mathbf{X}} k(u)\right) = \int_{\mathbb{R}^d} k(u)\rho(u)du \quad (2.1) $$
+
+$$ \mathbb{E}\left(\sum_{u,v \in \mathbf{X}}^{\neq} k(u,v)\right) = \int_{\mathbb{R}^d} \int_{\mathbb{R}^d} k(u,v)\rho^{(2)}(u,v)dudv. \quad (2.2) $$
+
+In particular, the Campbell theorem provides an intuitive interpretation of $\rho$ and $\rho^{(2)}$. We may interpret $\rho(u)du$ as the probability of occurrence of a point in an infinitesimally small ball with center $u$ and volume $du$. In the same way, $\rho^{(2)}(u, v)dudv$ is the probability of observing a pair of distinct points from $\mathbf{X}$ occurring jointly in each of two infinitesimally small balls with centers $u, v$ and volumes $du, dv$. Without entering into details, we can define $\rho^{(k)}$, the $k$-th order intensity function (see Møller and Waagepetersen, 2004, for more details). For further background material on spatial point processes, see for example Møller and Waagepetersen (2004); Illian et al. (2008).
+
+In order to study whether a point process deviates from independence (i.e., Poisson point process), we often consider the pair correlation function given by
+
+$$ g(u, v) = \frac{\rho^{(2)}(u, v)}{\rho(u)\rho(v)} $$
+
+when both $\rho$ and $\rho^{(2)}$ exist with the convention $0/0 = 0$. For a Poisson point process (Appendix B.1), we have $\rho^{(2)}(u, v) = \rho(u)\rho(v)$ so that $g(u, v) = 1$. If, for example, $g(u, v) > 1$ (resp. $g(u, v) < 1$), this indicates that pair of points are more likely (resp. less likely) to occur at locations $u, v$ than for a Poisson point process with the same intensity function as $\mathbf{X}$. If for any $u, v$, $g(u, v)$ depends only on $u-v$, the point process $\mathbf{X}$ is said to be second-order reweighted stationary.
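In code, the definition of $g$ with the convention $0/0 = 0$ reads as follows (a hedged sketch with hypothetical names; `rho` and `rho2` stand for user-supplied first- and second-order densities):

```python
# Sketch of the pair correlation function g(u, v) = rho2(u, v) / {rho(u) rho(v)}
# with the convention 0/0 = 0. Hypothetical names; rho and rho2 are callables.
def pair_correlation(rho, rho2, u, v):
    denom = rho(u) * rho(v)
    if denom == 0.0:
        return 0.0  # convention 0/0 = 0
    return rho2(u, v) / denom

# For a Poisson process rho2(u, v) = rho(u) rho(v), hence g = 1 everywhere.
rho = lambda u: 2.0
rho2_poisson = lambda u, v: rho(u) * rho(v)
g = pair_correlation(rho, rho2_poisson, 0.0, 1.0)   # g = 1.0
```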
+
+### 2.2. Parametric intensity estimation
+
+In our study, we assume that the intensity function depends on a vector of parameters $\boldsymbol{\beta}$, i.e. $\rho(\cdot) = \rho(\cdot; \boldsymbol{\beta})$. As outlined in the introduction, maximum likelihood estimation is almost infeasible for general spatial point process models. Instead, the Campbell formula provides a nice tool for defining estimating-equation-based methods. These methods are now standard in the context of spatial point processes, but we refer the reader to Appendix A for a more detailed presentation. The standard parametric methods for estimating $\boldsymbol{\beta}$ are obtained by maximizing the weighted Poisson likelihood (e.g. Guan and Shen, 2010) or the logistic regression likelihood (e.g. Baddeley et al., 2014), given
+---PAGE_BREAK---
+
+respectively by
+
+$$
+\ell_{\text{PL}}(w; \boldsymbol{\beta}) = \sum_{u \in \mathbf{X} \cap D} w(u) \log \rho(u; \boldsymbol{\beta}) - \int_D w(u) \rho(u; \boldsymbol{\beta}) du, \quad (2.3)
+$$
+
+$$
+\begin{equation}
+\begin{split}
+\ell_{\text{LRL}}(w; \boldsymbol{\beta}) = {}& \sum_{u \in \mathbf{X} \cap D} w(u) \log \left( \frac{\rho(u; \boldsymbol{\beta})}{\delta(u) + \rho(u; \boldsymbol{\beta})} \right) \\
+& - \int_D w(u) \delta(u) \log \left( \frac{\rho(u; \boldsymbol{\beta}) + \delta(u)}{\delta(u)} \right) du,
+\end{split}
+\tag{2.4}
+\end{equation}
+$$
+
+where $w(\cdot)$ is a non-negative weight function depending on the first and the
+second-order characteristics of $\mathbf{X}$ and $\delta(\cdot)$ is a non-negative real-valued function.
+Appendix A recalls the rationale behind (2.3)-(2.4): the Campbell theorem shows that
+the gradient vectors of (2.3)-(2.4) constitute unbiased estimating equations. The
+solution obtained by maximizing (2.3) (resp. (2.4)) is called the Poisson estimator
+(resp. the logistic regression estimator). We refer readers to Appendix A for
+further details on the weight function $w(\cdot)$ and for the role of the function $\delta(\cdot)$.
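To make (2.3) concrete, the following sketch (an assumed setup with hypothetical names, not the paper's implementation) evaluates the weighted Poisson likelihood for the log-linear model on $D = [0,1]^2$, replacing the integral by a simple Riemann sum over a regular grid, a crude stand-in for the quadrature schemes of Appendix C:

```python
import numpy as np

# Hedged sketch of the weighted Poisson likelihood (2.3) on D = [0,1]^2 for the
# log-linear model (1.1), where log rho(u; beta) = z(u)^T beta. The integral is
# approximated by a midpoint Riemann sum; points, z and w are hypothetical inputs.
def poisson_likelihood(points, z, w, beta, grid_n=50):
    beta = np.asarray(beta, float)
    # Sum over observed points of w(u) log rho(u; beta).
    term1 = sum(w(u) * float(z(u) @ beta) for u in points)
    # Riemann-sum approximation of int_D w(u) rho(u; beta) du.
    xs = (np.arange(grid_n) + 0.5) / grid_n
    cell = 1.0 / grid_n**2
    term2 = sum(w((x, y)) * np.exp(float(z((x, y)) @ beta)) * cell
                for x in xs for y in xs)
    return term1 - term2

z = lambda u: np.array([1.0, u[0]])   # intercept plus first coordinate
w = lambda u: 1.0                      # unweighted version
points = [(0.2, 0.3), (0.7, 0.9)]
ll = poisson_likelihood(points, z, w, [0.0, 0.0])   # rho = 1, so ll ~ 0 - |D| = -1
```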
+
+From a numerical point of view, it has been demonstrated that (2.3) and (2.4)
+can be efficiently approximated by a weighted generalized linear model (more
+precisely, a weighted quasi-Poisson regression for the former and a logistic
+regression for the latter). GLM software can therefore be adapted to
+accurately estimate $\boldsymbol{\beta}$. More details about this numerical implementation can be
+found in Appendices C.1 and C.2, respectively.
+
+### 2.3. Regularization techniques
+
+Regularization techniques are introduced as alternatives to stepwise selection for variable selection and parameter estimation. In general, a regularization method attempts to maximize the penalized likelihood function $l(\theta) - \eta \sum_{j=1}^{p} p_{\lambda_j}(|\theta_j|)$, where $l(\theta)$ is the likelihood function of $\theta$, $\eta$ is the number of observations, and $p_\lambda(\cdot)$ is a nonnegative penalty function parameterized by a real number $\lambda \ge 0$. The same general strategy is adopted here in the context of spatial point processes.
+
+Let $l(w; \boldsymbol{\beta})$ be either the weighted Poisson likelihood function (2.3) or the
+weighted logistic regression likelihood function (2.4). In a similar way, we define
+the penalized weighted likelihood function given by
+
+$$
+Q(w; \boldsymbol{\beta}) = l(w; \boldsymbol{\beta}) - |D| \sum_{j=1}^{p} p_{\lambda_j}(|\boldsymbol{\beta}_j|), \quad (2.5)
+$$
+
+where $|D|$ is the volume of the observation domain, which plays the same role
+as the number of observations $\eta$ in our setting, $\lambda_j$ is a nonnegative tuning
+parameter corresponding to $\beta_j$ for $j = 1, \dots, p$, and $p_\lambda$ is a penalty function
+which we now describe. For any $\lambda \ge 0$, we say that $p_\lambda(\cdot) : \mathbb{R}^+ \to \mathbb{R}$ is a penalty
+function if $p_\lambda$ is a nonnegative function with $p_\lambda(0) = 0$. Examples of penalty
+functions are the following:
+---PAGE_BREAK---
+
+* $\ell_2$ norm: $p_\lambda(\theta) = \frac{1}{2}\lambda\theta^2$,
+
+* $\ell_1$ norm: $p_\lambda(\theta) = \lambda\theta$,
+
+* Elastic net: for $0 < \gamma < 1$, $p_\lambda(\theta) = \lambda\{\gamma\theta + \frac{1}{2}(1-\gamma)\theta^2\}$,
+
+* SCAD: for any $\gamma > 2$,
+$$ p_\lambda(\theta) = \begin{cases} \lambda\theta & \text{if } \theta \le \lambda \\ \frac{\gamma\lambda\theta - \frac{1}{2}(\theta^2+\lambda^2)}{\gamma-1} & \text{if } \lambda \le \theta \le \gamma\lambda \\ \frac{\lambda^2(\gamma^2-1)}{2(\gamma-1)} & \text{if } \theta \ge \gamma\lambda, \end{cases} $$
+
+* MC+: for any $\gamma > 1$,
+$$ p_\lambda(\theta) = \begin{cases} \lambda\theta - \frac{\theta^2}{2\gamma} & \text{if } \theta \le \gamma\lambda \\ \frac{1}{2}\gamma\lambda^2 & \text{if } \theta \ge \gamma\lambda. \end{cases} $$
+
+The first and second derivatives of the above functions are given in Table 1. Note that $p'_\lambda$ is not differentiable at $\theta = \lambda, \gamma\lambda$ (resp. $\theta = \gamma\lambda$) for the SCAD (resp. MC+) penalty.
+
+TABLE 1
+The first and the second derivatives of several penalty functions.
+
+| Penalty | $p'_\lambda(\theta)$ | $p''_\lambda(\theta)$ |
|---|---|---|
| $\ell_2$ | $\lambda\theta$ | $\lambda$ |
| $\ell_1$ | $\lambda$ | $0$ |
| Elastic net | $\lambda\{(1-\gamma)\theta + \gamma\}$ | $\lambda(1-\gamma)$ |
| SCAD | $\lambda$ if $\theta \le \lambda$; $\frac{\gamma\lambda-\theta}{\gamma-1}$ if $\lambda \le \theta \le \gamma\lambda$; $0$ if $\theta \ge \gamma\lambda$ | $0$ if $\theta < \lambda$; $-\frac{1}{\gamma-1}$ if $\lambda < \theta < \gamma\lambda$; $0$ if $\theta > \gamma\lambda$ |
| MC+ | $\lambda - \theta/\gamma$ if $\theta \le \gamma\lambda$; $0$ if $\theta \ge \gamma\lambda$ | $-1/\gamma$ if $\theta < \gamma\lambda$; $0$ if $\theta > \gamma\lambda$ |
+
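The SCAD and MC+ definitions translate directly into code; a sketch of the two non-convex penalties and their first derivatives as listed in Table 1 (the default $\gamma$ values are arbitrary choices within the allowed ranges $\gamma > 2$ and $\gamma > 1$):

```python
# Sketch of the SCAD and MC+ penalties and first derivatives (theta >= 0,
# lam >= 0). Default gammas are arbitrary admissible values, not prescribed ones.
def scad(theta, lam, gamma=3.7):
    if theta <= lam:
        return lam * theta
    if theta <= gamma * lam:
        return (gamma * lam * theta - 0.5 * (theta**2 + lam**2)) / (gamma - 1)
    return lam**2 * (gamma**2 - 1) / (2 * (gamma - 1))

def scad_deriv(theta, lam, gamma=3.7):
    if theta <= lam:
        return lam
    if theta <= gamma * lam:
        return (gamma * lam - theta) / (gamma - 1)
    return 0.0

def mcplus(theta, lam, gamma=3.0):
    if theta <= gamma * lam:
        return lam * theta - theta**2 / (2 * gamma)
    return 0.5 * gamma * lam**2  # constant beyond gamma * lam

def mcplus_deriv(theta, lam, gamma=3.0):
    return lam - theta / gamma if theta <= gamma * lam else 0.0
```

Both penalties satisfy $p_\lambda(0) = 0$, grow like $\lambda\theta$ near the origin, and are constant beyond $\gamma\lambda$, which is what makes their derivative vanish on large coefficients.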
+Penalty functions give rise to specific well-known methods which are summarized in Table 2. More details can be found in Appendix D.
+
+The solution obtained by maximizing (2.5) is called either the regularized Poisson or the regularized logistic estimator. From the previous section, the numerical implementation of the maximization of (2.5) can be done using procedures which estimate a penalized weighted generalized linear model. This is now quite standard, for instance in R with the packages `glmnet` and `ncvreg`. More details about this can be found in Appendix C.3.
+
+What is expected from maximizing (2.5) is that the procedure correctly selects the true covariates and that the estimate is consistent and still satisfies a central limit theorem. To obtain such properties when the observation domain increases to $\mathbb{R}^d$, specific conditions on the point process, the covariates, the regularity of the penalty function and most of all on the tuning parameters $\lambda_j$ are required. This is investigated in the next section.
+---PAGE_BREAK---
+
+TABLE 2
+Details of some regularization methods.
+
+| Method | $\sum_{j=1}^{p} p_{\lambda_j}(\|\beta_j\|)$ |
|---|---|
| Ridge (Hoerl and Kennard, 1988) | $\sum_{j=1}^{p} \frac{1}{2}\lambda\beta_j^2$ |
| Lasso (Tibshirani, 1996) | $\sum_{j=1}^{p} \lambda\|\beta_j\|$ |
| Enet (Zou and Hastie, 2005)* | $\sum_{j=1}^{p} \lambda\{\gamma\|\beta_j\| + \frac{1}{2}(1-\gamma)\beta_j^2\}$, $0<\gamma<1$ |
| AL (Zou, 2006)* | $\sum_{j=1}^{p} \lambda_j\|\beta_j\|$ |
| Aenet (Zou and Zhang, 2009)* | $\sum_{j=1}^{p} \lambda_j\{\gamma\|\beta_j\| + \frac{1}{2}(1-\gamma)\beta_j^2\}$, $0<\gamma<1$ |
| SCAD (Fan and Li, 2001) | $\sum_{j=1}^{p} p_\lambda(\|\beta_j\|)$, with $p_\lambda$ the SCAD penalty of Section 2.3, $\gamma>2$ |
| MC+ (Zhang, 2010) | $\sum_{j=1}^{p} \{(\lambda\|\beta_j\| - \frac{\beta_j^2}{2\gamma})I(\|\beta_j\| \le \gamma\lambda) + \frac{1}{2}\gamma\lambda^2 I(\|\beta_j\| \ge \gamma\lambda)\}$, $\gamma>1$ |
+
+* Enet, AL and Aenet, respectively, stand for elastic net, adaptive lasso and adaptive elastic net
+
+## 3. Asymptotic theory
+
+In this section, we present asymptotic results for the regularized Poisson estimator when considering $\mathbf{X}$ as a $d$-dimensional point process observed over a sequence of observation domains $D = D_n$, $n = 1, 2, \dots$, which expands to $\mathbb{R}^d$ as $n \to \infty$. The regularization parameters $\lambda_j = \lambda_{n,j}$ for $j = 1, \dots, p$ are now indexed by $n$. For the sake of conciseness, we do not present the asymptotic results for the regularized logistic estimator. The results are very similar; the main difference lies in the conditions (C.6) and (C.7), for which the matrices $\mathbf{A}_n, \mathbf{B}_n$, and $\mathbf{C}_n$ have a different expression (see Remark 2). So, from now on, the Poisson likelihood $\ell_{\text{PL}}$ and the criterion $Q$ are indexed by $n$ and denoted by $\ell_n$ and $Q_n$.
+
+### 3.1. Notation and conditions
+
+We recall the classical definition of strong mixing coefficients adapted to spatial point processes (e.g. Politis et al., 1998): for $k, l \in \mathbb{N} \cup \{\infty\}$ and $q \ge 1$, define
+
+$$
+\begin{align}
+\alpha_{k,l}(q) &= \sup\{|P(A \cap B) - P(A)P(B)| : A \in \mathcal{F}(\Lambda_1), B \in \mathcal{F}(\Lambda_2), \nonumber \\
+&\phantom{{}= \sup\{|P(A \cap B) - P(A)P(B)| : } \Lambda_1 \in \mathcal{B}(\mathbb{R}^d), \Lambda_2 \in \mathcal{B}(\mathbb{R}^d), |\Lambda_1| \le k, |\Lambda_2| \le l, d(\Lambda_1, \Lambda_2) \ge q\}, \tag{3.1}
+\end{align}
+$$
+
+where $\mathcal{F}(\Lambda_i)$ is the $\sigma$-algebra generated by $\mathbf{X} \cap \Lambda_i$, $i = 1, 2$, $d(\Lambda_1, \Lambda_2)$ is the minimal distance between the sets $\Lambda_1$ and $\Lambda_2$, and $\mathcal{B}(\mathbb{R}^d)$ denotes the class of Borel sets in $\mathbb{R}^d$.
+
+Let $\boldsymbol{\beta}_0 = \{\boldsymbol{\beta}_{01}, \dots, \boldsymbol{\beta}_{0p}\}^\top = \{\boldsymbol{\beta}_{01}^\top, \boldsymbol{\beta}_{02}^\top\}^\top = \{\boldsymbol{\beta}_{01}^\top, \mathbf{0}^\top\}^\top$ be the $p$-dimensional vector of true coefficient values, where $\boldsymbol{\beta}_{01}$ is the $s$-dimensional ($s < p$) vector of nonzero coefficients and $\boldsymbol{\beta}_{02}$ is the $(p-s)$-dimensional vector of zero coefficients.
+---PAGE_BREAK---
+
+We define the $p \times p$ matrices $\mathbf{A}_n(w; \beta_0)$, $\mathbf{B}_n(w; \beta_0)$, and $\mathbf{C}_n(w; \beta_0)$ by
+
+$$ \mathbf{A}_n(w; \beta_0) = \int_{D_n} w(u)\mathbf{z}(u)\mathbf{z}(u)^{\top}\rho(u; \beta_0)du, $$
+
+$$ \mathbf{B}_n(w; \beta_0) = \int_{D_n} w(u)^2 \mathbf{z}(u) \mathbf{z}(u)^{\top} \rho(u; \beta_0) du, \text{ and} $$
+
+$$ \mathbf{C}_n(w; \beta_0) = \int_{D_n} \int_{D_n} w(u)w(v) \mathbf{z}(u)\mathbf{z}(v)^{\top} \{g(u,v) - 1\} \rho(u; \beta_0) \rho(v; \beta_0) dv du. $$
+
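For intuition, these matrices can be approximated numerically; a hedged sketch in Python (assumed one-dimensional setting $D_n = [0, n]$, hypothetical inputs, not the paper's implementation) approximating $\mathbf{A}_n(w; \beta_0)$ by a Riemann sum:

```python
import numpy as np

# Hedged sketch (1-d setting, D_n = [0, n], hypothetical inputs): midpoint
# Riemann-sum approximation of
#   A_n(w; beta_0) = int_{D_n} w(u) z(u) z(u)^T rho(u; beta_0) du.
def A_n_approx(z, w, beta0, n, grid=2000):
    beta0 = np.asarray(beta0, float)
    us = (np.arange(grid) + 0.5) * n / grid      # midpoints of the grid cells
    A = np.zeros((beta0.size, beta0.size))
    for u in us:
        zu = z(u)
        A += w(u) * np.outer(zu, zu) * np.exp(float(zu @ beta0)) * (n / grid)
    return A

z = lambda u: np.array([1.0])    # intercept-only model, rho(u) = exp(0) = 1
w = lambda u: 1.0
A = A_n_approx(z, w, [0.0], n=10.0)   # A[0,0] ~ |D_n| = 10 since rho = 1
```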
+In what follows, for a squared symmetric matrix $\mathbf{M}_n$, $\nu_{\min}(\mathbf{M}_n)$ denotes the smallest eigenvalue of $\mathbf{M}_n$. Consider the following conditions (C.1)-(C.8) which are required to derive our asymptotic results:
+
+(C.1) For every $n \ge 1$, $D_n = nE = \{ne : e \in E\}$, where $E \subset \mathbb{R}^d$ is convex, compact, and contains $o$ (the origin of $\mathbb{R}^d$) in its interior.
+
+(C.2) We assume that the intensity function has the log-linear specification given by (1.1) where $\beta \in \Theta$ and $\Theta$ is an open convex bounded set of $\mathbb{R}^p$.
+
+(C.3) The covariates $\mathbf{z}$ and the weight function $w$ satisfy
+
+$$ \sup_{u \in \mathbb{R}^d} ||\mathbf{z}(u)|| < \infty \quad \text{and} \quad \sup_{u \in \mathbb{R}^d} |w(u)| < \infty. $$
+
+(C.4) There exists an integer $t \ge 1$ such that for $k=2, \dots, 2+t$, the product density $\rho^{(k)}$ exists and is uniformly bounded.
+
+(C.5) For the strong mixing coefficients (3.1), we assume that there exists some $\tilde{t} > d(2+t)/t$ such that $\alpha_{2,\infty}(q) = O(q^{-\tilde{t}})$.
+
+(C.6) $\liminf_{n \to \infty} \nu_{\min}(|D_n|^{-1}\{\mathbf{B}_{n,11}(w; \beta_0) + \mathbf{C}_{n,11}(w; \beta_0)\}) > 0$.
+
+(C.7) $\liminf_{n \to \infty} \nu_{\min}(|D_n|^{-1}\mathbf{A}_n(w; \beta_0)) > 0$.
+
+(C.8) The penalty function $p_\lambda(\cdot)$ is nonnegative on $\mathbb{R}^+$, satisfies $p_\lambda(0) = 0$, and is continuously differentiable on $\mathbb{R}^+\setminus\{0\}$ with derivative $p'_\lambda$ assumed to be a Lipschitz function on $\mathbb{R}^+ \setminus \{0\}$. Furthermore, given $(\lambda_{n,j})_{n\ge1}$, for $j=1, \dots, s$, we assume that there exists $(\tilde{r}_{n,j})_{n\ge1}$, where $|D_n|^{1/2}\tilde{r}_{n,j} \to \infty$ as $n \to \infty$, such that, for $n$ sufficiently large, $p_{\lambda_{n,j}}$ is thrice continuously differentiable in the ball centered at $|\beta_{0j}|$ with radius $\tilde{r}_{n,j}$, and we assume that the third derivative is uniformly bounded.
+
+Under the condition (C.8), we define the sequences $a_n$, $b_n$ and $c_n$ by
+
+$$ a_n = \max_{j=1,\dots,s} |p'_{\lambda_{n,j}}(|\beta_{0j}|)|, \qquad (3.2) $$
+
+$$ b_n = \inf_{j=s+1,\dots,p} \inf_{\substack{\theta \\ |\theta| \le \epsilon_n \\ \theta \ne 0}} p'_{\lambda_{n,j}}(\theta), \quad \text{for } \epsilon_n = K_1 |D_n|^{-1/2}, \qquad (3.3) $$
+
+$$ c_n = \max_{j=1,\dots,s} |p''_{\lambda_{n,j}}(|\beta_{0j}|)|, \qquad (3.4) $$
+
+where $K_1$ is any positive constant. These sequences $a_n$, $b_n$ and $c_n$, detailed in Table 3 for the different methods considered in this paper, play a central role in our results. Although this will be discussed further in Section 3.3, we specify right away that we require $a_n|D_n|^{1/2} \to 0$, $b_n|D_n|^{1/2} \to \infty$ and $c_n \to 0$.
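As a numerical illustration of (3.2)-(3.3), and consistently with the SCAD row of Table 3 (assuming $\lambda_n \to 0$), $a_n$ vanishes exactly once $\gamma\lambda_n$ drops below $\min_j |\beta_{0j}|$, while near zero $p'_{\lambda_n}$ is flat at $\lambda_n$, so $b_n = \lambda_n$; a self-contained sketch:

```python
# Self-contained sketch of the sequences (3.2)-(3.3) for the SCAD penalty:
# a_n = max_{j<=s} p'_{lambda_n}(|beta_0j|) is exactly 0 once gamma * lambda_n
# falls below min_j |beta_0j|, while b_n, an infimum of p' over a shrinking
# neighbourhood of 0, equals lambda_n (p' is constant at lambda_n there).
def scad_deriv(theta, lam, gamma=3.7):
    if theta <= lam:
        return lam
    if theta <= gamma * lam:
        return (gamma * lam - theta) / (gamma - 1)
    return 0.0

beta0_nonzero = [2.0, 0.75]   # the s nonzero true coefficients (illustrative)
lam_n = 0.05                  # a small tuning parameter (lambda_n -> 0)
a_n = max(scad_deriv(abs(b), lam_n) for b in beta0_nonzero)   # -> 0.0
b_n = scad_deriv(1e-6, lam_n)                                 # -> lam_n
```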
+---PAGE_BREAK---
+
+TABLE 3
+Details of the sequences $a_n$, $b_n$ and $c_n$ for a given regularization method.
+
+| Method | $a_n$ | $b_n$ | $c_n$ |
|---|---|---|---|
| Ridge | $\lambda_n \max_{j=1,\dots,s} \|\beta_{0j}\|$ | $0$ | $\lambda_n$ |
| Lasso | $\lambda_n$ | $\lambda_n$ | $0$ |
| Enet | $\lambda_n [(1-\gamma) \max_{j=1,\dots,s} \|\beta_{0j}\| + \gamma]$ | $\gamma\lambda_n$ | $(1-\gamma)\lambda_n$ |
| AL | $\max_{j=1,\dots,s} \lambda_{n,j}$ | $\min_{j=s+1,\dots,p} \lambda_{n,j}$ | $0$ |
| Aenet | $\max_{j=1,\dots,s} \lambda_{n,j}\{(1-\gamma)\|\beta_{0j}\| + \gamma\}$ | $\gamma \min_{j=s+1,\dots,p} \lambda_{n,j}$ | $(1-\gamma) \max_{j=1,\dots,s} \lambda_{n,j}$ |
| SCAD | $0$* | $\lambda_n$** | $0$* |
| MC+ | $0$* | $\lambda_n - K_1/(\gamma\|D_n\|^{1/2})$** | $0$* |
+
+* if $\lambda_n \to 0$ as $n \to \infty$
+
+** if $|D_n|^{1/2}\lambda_n \to \infty$ as $n \to \infty$
+
+### 3.2. Main results
+
+We state our main results here. Proofs are relegated to Appendices E-G.
+
+We first show in Theorem 1 that the regularized Poisson estimator converges in probability and exhibits its rate of convergence.
+
+**Theorem 1.** Assume the conditions (C.1)-(C.8) hold and let $a_n$ and $c_n$ be given by (3.2) and (3.4). If $a_n = O(|D_n|^{-1/2})$ and $c_n = o(1)$, then there exists a local maximizer $\hat{\beta}$ of $Q_n(w;\beta)$ such that $\|\hat{\beta} - \beta_0\| = O_P(|D_n|^{-1/2} + a_n)$.
+
+This implies that, if $a_n = O(|D_n|^{-1/2})$ and $c_n = o(1)$, the regularized Poisson estimator is root-$|D_n|$ consistent. Furthermore, we demonstrate in Theorem 2 that such a root-$|D_n|$ consistent estimator ensures the sparsity of $\hat{\beta}$; that is, the estimate correctly sets $\hat{\beta}_2$ to zero with probability tending to 1 as $n \to \infty$, and $\hat{\beta}_1$ is asymptotically normal.
+
+**Theorem 2.** Assume the conditions (C.1)-(C.8) hold. If $a_n|D_n|^{1/2} \to 0$, $b_n|D_n|^{1/2} \to \infty$ and $c_n \to 0$ as $n \to \infty$, the root-$|D_n|$ consistent local maximizers $\hat{\beta} = (\hat{\beta}_1^\top, \hat{\beta}_2^\top)^\top$ in Theorem 1 satisfy:
+
+(i) Sparsity: $P(\hat{\beta}_2 = 0) \to 1$ as $n \to \infty$,
+
+(ii) Asymptotic Normality: $|D_n|^{1/2}\Sigma_n(w;\beta_0)^{-1/2}(\hat{\beta}_1 - \beta_{01}) \xrightarrow{d} N(0,I_s)$,
+where
+
+$$
+\begin{align}
+\Sigma_n(w; \beta_0) &= |D_n| \{A_{n,11}(w; \beta_0) + |D_n| \Pi_n\}^{-1} \{B_{n,11}(w; \beta_0) + C_{n,11}(w; \beta_0)\} \nonumber \\
+&\quad \{A_{n,11}(w; \beta_0) + |D_n| \Pi_n\}^{-1}, \tag{3.5}
+\end{align}
+$$
+
+$$
+\Pi_n = \text{diag}\{p''_{\lambda_{n,1}}(|\beta_{01}|), \dots, p''_{\lambda_{n,s}}(|\beta_{0s}|)\}, \tag{3.6}
+$$
+
+and where $A_{n,11}(w;\beta_0)$ (resp. $B_{n,11}(w;\beta_0)$, $C_{n,11}(w;\beta_0)$) is the $s \times s$ top-left corner of $A_n(w;\beta_0)$ (resp. $B_n(w;\beta_0)$, $C_n(w;\beta_0)$).
+---PAGE_BREAK---
+
+As a consequence, $\Sigma_n(w; \beta_0)$ is the asymptotic covariance matrix of $\hat{\beta}_1$. Note that $\Sigma_n(w; \beta_0)^{-1/2}$ is the inverse of $\Sigma_n(w; \beta_0)^{1/2}$, where $\Sigma_n(w; \beta_0)^{1/2}$ is any square matrix with $\Sigma_n(w; \beta_0)^{1/2} (\Sigma_n(w; \beta_0)^{1/2})^\top = \Sigma_n(w; \beta_0)$.
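One standard choice of $\Sigma_n(w;\beta_0)^{1/2}$ (and hence of its inverse) uses the symmetric eigendecomposition; a minimal sketch for a symmetric positive definite matrix:

```python
import numpy as np

# Sketch: Sigma^{-1/2} via the symmetric eigendecomposition. For symmetric
# positive definite Sigma, s = V diag(eigvals^{-1/2}) V^T is symmetric and
# standardizes Sigma: s @ Sigma @ s.T = I, so inv(s) is a valid Sigma^{1/2}.
def inv_sqrtm(sigma):
    vals, vecs = np.linalg.eigh(sigma)
    return vecs @ np.diag(vals ** -0.5) @ vecs.T

sigma = np.array([[2.0, 0.5],
                  [0.5, 1.0]])
s = inv_sqrtm(sigma)   # s @ sigma @ s.T is the identity
```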
+
+**Remark 1.** For lasso and adaptive lasso, $\Pi_n = 0$. For the other penalties, since $c_n = o(1)$, we have $\|\Pi_n\| = o(1)$. Since $\|A_{n,11}(w; \beta_0)\| = O(|D_n|)$ from conditions (C.2) and (C.3), $|D_n| \|\Pi_n\|$ is asymptotically negligible with respect to $\|A_{n,11}(w; \beta_0)\|$.
+
+**Remark 2.** Theorems 1 and 2 remain true for the regularized logistic estimator if we replace in the expression of the matrices $A_n$, $B_n$, and $C_n$, $w(u)$ by $w(u)\delta(u)/(\rho(u;\beta_0) + \delta(u))$, $u \in D_n$ and extend the condition (C.3) by adding $\sup_{u \in \mathbb{R}^d} |\delta(u)| < \infty$.
+
+The proofs of Theorems 1 and 2 for this estimator are slightly different, mainly because, unlike the Poisson likelihood for which we have $\ell_n^{(2)}(w; \beta) = -A_n(w; \beta)$, for the regularized logistic regression likelihood $\ell_n^{(2)}(w; \beta)$ is now stochastic and we only have $\mathbb{E}(\ell_n^{(2)}(w; \beta)) = -A_n(w; \beta)$. Despite this additional difficulty, no additional assumption is required.
+
+**Remark 3.** We want to highlight here the main theoretical differences with the work by Thurman et al. (2015). First, our methodology and results are also available for the logistic regression likelihood. Second, we consider very general penalty functions, while Thurman et al. (2015) only consider the adaptive lasso method. Third, Thurman et al. (2015) assume that $|D_n|^{-1}\mathbf{M}_n \to \mathbf{M}$ as $n \to \infty$, where $\mathbf{M}_n$ is $A_n$, $B_n$ or $C_n$, and where $\mathbf{M}$, i.e. either $A$, $B$ or $C$, is a positive definite matrix. Instead, we assume the less restrictive condition $\liminf_{n\to\infty} \nu_{\min}(|D_n|^{-1}\mathbf{M}_n) > 0$, where $\mathbf{M}_n$ is either $A_n$ or $B_n + C_n$. The latter point makes the proofs a little more technical.
+
+### 3.3. Discussion of the conditions
+
+We split the conditions we assume into two different categories: conditions (C.1)-(C.7) and condition (C.8) combined with the assumptions on the behavior of the sequences $a_n, b_n$ and $c_n$.
+
+Conditions (C.1)-(C.7) are standard in the literature, see e.g. Coeurjolly and Møller (2014). Essentially, these assumptions ensure that, when there is no regularization, the estimate $\hat{\beta}$ is consistent and satisfies a central limit theorem. To help the reader, we recall some comments on these assumptions. In condition (C.1), the assumption that $E$ contains $o$ in its interior can be made without loss of generality. If instead $u$ is an interior point of $E$, then condition (C.1) could be modified so that any ball with centre $u$ and radius $r > 0$ is contained in $D_n = nE$ for all sufficiently large $n$. Condition (C.3) is quite standard. Under conditions (C.2)-(C.5), the matrices $A_n(w; \beta_0)$, $B_n(w; \beta_0)$ and $C_n(w; \beta_0)$ are of order $O(|D_n|)$ (see e.g. Coeurjolly and Møller, 2014). As mentioned, conditions (C.1)-(C.6) are used to establish a central limit theorem for $|D_n|^{-1/2}\ell_n^{(1)}(w; \beta_0)$ using a general central limit theorem for triangular arrays of nonstationary random fields obtained by Karácsony (2006), which is an
+---PAGE_BREAK---
+
+extension of a result by Bolthausen (1982), which was extended to nonstationary
+random fields by Guyon (1995). As pointed out by Coeurjolly and Møller (2014),
+condition (C.6) is a spatial average assumption. For linear models, this assumption
+is similar to requiring that $\nu_{\min}(n^{-1}\mathbf{X}^{\top}\mathbf{X})$ is bounded away from zero, where $n$
+would play the role of the number of observations and $\mathbf{X}$ would represent the design matrix.
+
+Conditions (C.6)-(C.7) ensure that the matrix $\Sigma_n(w; \beta_0)$ is invertible for sufficiently large $n$. We refer the reader to e.g. Coeurjolly and Møller (2014), where these conditions are shown to hold for a large class of models including the Poisson and Cox processes discussed in Appendix B.
+
+Condition (C.8) controls the higher-order terms in the Taylor expansion of the penalty function. Roughly speaking, we expect the penalty function to be at least Lipschitz and thrice differentiable in a neighborhood of the true parameter vector. As stated, the condition looks technical; however, it is obviously satisfied for ridge, lasso, and elastic net (and their adaptive versions). Depending on the choice of $\lambda_n$, it is satisfied for SCAD and MC+ when $|\beta_{0j}|$, for $j = 1, \dots, s$, is not equal to $\gamma\lambda_n$ and/or $\lambda_n$.
+
+As a consequence of the previous discussion, the main assumptions we require in this paper are the ones related to the sequences $a_n, b_n$ and $c_n$. We require that $a_n|D_n|^{1/2} \to 0$, $b_n|D_n|^{1/2} \to \infty$ and $c_n \to 0$ as $n \to \infty$ simultaneously. For the ridge regularization method, $b_n = 0$, preventing us from applying Theorem 2 to this penalty. For lasso and elastic net, $a_n = K_2b_n$ for some constant $K_2 > 0$ ($K_2=1$ for lasso), so the two conditions $a_n|D_n|^{1/2} \to 0$ and $b_n|D_n|^{1/2} \to \infty$ as $n \to \infty$ cannot be satisfied simultaneously. This is different for the adaptive versions, where a compromise can be found by adjusting the $\lambda_{n,j}$'s, as well as for the two non-convex penalties SCAD and MC+, for which $\lambda_n$ can be adjusted. For the regularization methods considered in this paper, the condition $c_n \to 0$ is implied by the condition $a_n|D_n|^{1/2} \to 0$ as $n \to \infty$.
+
+## 4. Simulation study
+
+We conduct a simulation study with three different scenarios, described in Section 4.1, to compare the estimates of the regularized Poisson likelihood (PL) with those of the regularized weighted Poisson likelihood (WPL). We also want to explore the behavior of the estimates using different regularization methods. Empirical findings are presented in Section 4.2. Furthermore, we compare, in Section 4.3, the regularized Poisson and logistic estimators.
+
+### 4.1. Simulation set-up
+
+The setting is quite similar to that of Waagepetersen (2007) and Thurman et al. (2015). The spatial domain is $D = [0, 1000] \times [0, 500]$. We center and scale the 201 × 101 pixel images of elevation ($x_1$) and gradient of elevation ($x_2$) contained in the `bei` dataset of the `spatstat` library in R (R Core Team, 2016), and use them as the two true covariates. In addition, we create three different scenarios to define extra covariates:
+---PAGE_BREAK---
+
+Scenario 1. We generate eighteen 201 × 101 pixel images of covariates as standard Gaussian white noise and denote them by $x_3, \dots, x_{20}$. We define $\mathbf{z}(u) = \{1, x_1(u), \dots, x_{20}(u)\}^\top$ as the covariates vector. The regression coefficients for $z_3, \dots, z_{20}$ are set to zero.
+
+Scenario 2. First, we generate eighteen 201 × 101 pixel images of covariates as in Scenario 1. Second, we transform them, together with $x_1$ and $x_2$, to have multicollinearity. In particular, we define $\mathbf{z}(u) = \mathbf{V}^\top\mathbf{x}(u)$, where $\mathbf{x}(u) = \{x_1(u), \dots, x_{20}(u)\}^\top$. More precisely, $\mathbf{V}$ is such that $\Omega = \mathbf{V}^\top\mathbf{V}$, and $(\Omega)_{ij} = (\Omega)_{ji} = 0.7^{|i-j|}$ for $i,j = 1, \dots, 20$, except $(\Omega)_{12} = (\Omega)_{21} = 0$, to preserve the correlation between $x_1$ and $x_2$. The regression coefficients for $z_3, \dots, z_{20}$ are set to zero.
+
+Scenario 3. We consider a more complex situation. We center and scale the 13
+images of soil nutrient covariates (50 × 25 pixels) obtained from the
+study in the tropical forest of Barro Colorado Island (BCI) in central
+Panama (see Condit, 1998; Hubbell et al., 1999, 2005), convert them
+to 201 × 101 pixel images to match $x_1$ and $x_2$, and use them as the extra
+covariates. Together with $x_1$ and $x_2$, we keep the structure of the
+covariance matrix to preserve the complexity of the situation. In
+this setting, we have $\mathbf{z}(u) = \{1, x_1(u), \dots, x_{15}(u)\}^\top$. The regression
+coefficients for $z_3, \dots, z_{15}$ are set to zero.
+
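The Scenario 2 transformation above can be sketched as follows (a Python stand-in for the R construction; indices are 0-based, so the paper's $(\Omega)_{12}$ entry becomes `omega[0, 1]`, and the Cholesky factor is one possible choice of $\mathbf{V}$):

```python
import numpy as np

# Sketch of the Scenario 2 construction: (Omega)_{ij} = 0.7^{|i-j|} except the
# (1,2) entry (0-based: [0,1]) set to 0, and V any matrix with V^T V = Omega;
# a Cholesky factor is one such choice when Omega is positive definite. The
# transformed covariate vector at a location u is then z(u) = V^T x(u).
p = 20
i, j = np.meshgrid(np.arange(p), np.arange(p), indexing="ij")
omega = 0.7 ** np.abs(i - j)
omega[0, 1] = omega[1, 0] = 0.0          # leave the x_1, x_2 pair untouched
V = np.linalg.cholesky(omega).T          # upper triangular, V.T @ V = Omega
x = np.random.default_rng(0).standard_normal(p)   # white-noise covariates at u
z = V.T @ x
```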
+The different maps of the covariates obtained from Scenarios 2 and 3 are depicted in Appendix H. Except for $z_3$, which is highly correlated with $z_2$, the extra covariates obtained from Scenario 2 tend to have a constant value (Figure 3). This is completely different from the ones obtained from Scenario 3 (Figure 4).
+
+The mean number of points over the domain $D$, $\mu$, is chosen to be 1600. We set the true intensity function to be $\rho(u; \beta_0) = \exp\{\beta_0 + \beta_1 z_1(u) + \beta_2 z_2(u)\}$, where $\beta_1 = 2$ represents a relatively large effect of elevation, $\beta_2 = 0.75$ reflects a relatively small effect of gradient, and $\beta_0$ is selected such that each realization has 1600 points on average. Furthermore, we regularly erode the domain $D$ such that, with the same intensity function, the mean number of points over the new domain $D \ominus R$ becomes 400. The erosion is used to observe the convergence of the procedure as the observation domain expands. We consider the default number of dummy points for the Poisson likelihood, denoted by nd², as suggested in the `spatstat` R package, i.e. nd² ≈ 4m, where m is the number of points. With these scenarios, we simulate 2000 spatial point patterns from a Thomas point process (see Appendix B.2) using the rThomas function in the `spatstat` package. We also consider two different $\kappa$ parameters ($\kappa = 5 \times 10^{-4}$ and $\kappa = 5 \times 10^{-5}$) as different levels of spatial interaction and let $\omega = 20$. For each of the four combinations of $\kappa$ and $\mu$, we fit the intensity to the simulated point pattern realizations. We also fit the oracle model, which only uses the two true covariates.
+
+All models are fitted using modified internal functions of `spatstat` (Baddeley et al., 2015), `glmnet` (Friedman et al., 2010), and `ncvreg` (Breheny and Huang, 2011). A modification of the `ncvreg` R package is required to include the penalized weighted Poisson and logistic likelihoods.
+---PAGE_BREAK---
+
+## 4.2. Simulation results
+
To better understand the behavior of the Thomas processes designed in this study, Figure 1 shows four realizations for different values of $\kappa$ and $\mu$. The smaller the value of $\kappa$, the tighter the clusters, since there are fewer parents. When $\mu = 400$, i.e. when considering the realizations observed on $D \ominus R$, the mean number of points over the 2000 replications and the corresponding standard deviation are 396 and 47 (resp. 400 and 137) when $\kappa = 5 \times 10^{-4}$ (resp. $\kappa = 5 \times 10^{-5}$). When $\mu = 1600$, the mean number of points and standard deviation are 1604 and 174 (resp. 1589 and 529) when $\kappa = 5 \times 10^{-4}$ (resp. $\kappa = 5 \times 10^{-5}$).
+
+FIG 1. Realizations of a Thomas process for $\mu = 400$ (row 1), $\mu = 1600$ (row 2), $\kappa = 5 \times 10^{-4}$ (column 1), and $\kappa = 5 \times 10^{-5}$ (column 2).
+
Tables 4 and 5 present the selection properties of the estimates using the penalized PL and the penalized WPL methods. Similarly to Bühlmann and Van De Geer (2011), the indices we consider are the true positive rate (TPR), the false positive rate (FPR), and the positive predictive value (PPV). The TPR is the ratio of the number of selected true covariates to the number of true covariates; it measures how well the model correctly selects both $z_1$ and $z_2$. The FPR is the ratio of the number of selected noisy covariates to the number of noisy covariates; it measures how often the model incorrectly selects among $z_3$ to $z_p$ ($p = 20$ for Scenarios 1 and 2 and $p = 15$ for Scenario 3). The PPV is the ratio of the number of selected true covariates to the total number of selected covariates; it describes how well the model approximates the oracle model in terms of selection. We therefore look for methods with a TPR and a PPV close to 100% and an FPR close to 0.
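The three indices can be computed directly from the set of covariates kept by a fit; a small illustrative helper (the function name and interface are ours, not from any package):

```python
def selection_rates(selected, true_idx, p):
    """TPR, FPR, PPV (in %) for one fitted model.

    selected: indices of the covariates kept by the penalized fit;
    true_idx: indices of the truly active covariates; p: total covariates.
    """
    selected, true_idx = set(selected), set(true_idx)
    noisy = set(range(p)) - true_idx
    tp = len(selected & true_idx)   # true covariates kept
    fp = len(selected & noisy)      # noisy covariates kept
    tpr = 100.0 * tp / len(true_idx)
    fpr = 100.0 * fp / len(noisy)
    ppv = 100.0 * tp / len(selected) if selected else 0.0
    return tpr, fpr, ppv

# e.g. a fit that keeps z1, z2 and one noisy covariate out of p = 20
print(selection_rates({0, 1, 7}, {0, 1}, 20))  # TPR 100, FPR ~5.6, PPV ~66.7
```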
+
Generally, for both the penalized PL and the penalized WPL methods, the best selection properties are obtained for the larger value of $\kappa$, which corresponds to weaker spatial dependence. For a more clustered process, indicated by a smaller value of $\kappa$, it appears more difficult to select the true covariates. As $\mu$ increases from 400 (Table 4) to 1600 (Table 5), the TPR tends to improve, so the model selects both $z_1$ and $z_2$ more frequently.
+
Ridge, lasso, and elastic net are the regularization methods that cannot satisfy our theorems. We first emphasize that all covariates are always selected
+---PAGE_BREAK---
+
**TABLE 4**
Empirical selection properties (TPR, FPR, and PPV in %) based on 2000 replications of Thomas processes on the domain $D \ominus R$ ($\mu = 400$) for different values of $\kappa$ and for the three different scenarios. Different penalty functions are considered as well as two estimating equations, the regularized Poisson likelihood (PL) and the regularized weighted Poisson likelihood (WPL). Each cell reports TPR / FPR / PPV; entries marked — are not recoverable from the source.

| Method | PL, $\kappa = 5 \times 10^{-4}$ | WPL, $\kappa = 5 \times 10^{-4}$ | PL, $\kappa = 5 \times 10^{-5}$ | WPL, $\kappa = 5 \times 10^{-5}$ |
|---|---|---|---|---|
| **Scenario 1** | | | | |
| Ridge | 100 / 100 / 10 | 100 / 100 / 10 | 100 / 100 / 10 | 100 / 100 / 10 |
| Lasso | 100* / 27 / 35 | 56 / 0* / 98 | 89 / 35 / 34 | 33 / 0* / 62 |
| Enet | 100* / 59 / 18 | 39 / 4 / 36 | 91 / 60 / 21 | 31 / 0* / 57 |
| AL | 100* / 1 / 93 | 58 / 0* / 100* | 88 / 7 / 72 | 35 / 0* / 67 |
| Aenet | 100* / 6 / 72 | 59 / 0* / 99 | 89 / 12 / 61 | 34 / 0* / 64 |
| SCAD | 100* / 18 / 41 | 66 / 0* / 98 | 90 / 17 / 46 | 31 / 0* / 56 |
| MC+ | 100* / 21 / 36 | 68 / 0* / 96 | 90 / 21 / 42 | 30 / 0* / 54 |
| **Scenario 2** | | | | |
| Ridge | 100 / 100 / 10 | 100 / 100 / 10 | 100 / 100 / 10 | 100 / 100 / 10 |
| Lasso | 100* / 25 / 35 | 52 / 1 / 88 | 90 / 38 / 29 | 31 / 0* / 55 |
| Enet | 100* / 52 / 19 | 49 / 4 / 62 | 90 / 60 / 20 | 24 / 1 / 38 |
| AL | 99 / 4 / 80 | 52 / 0* / 100* | 87 / 9 / 67 | 36 / 0* / 67 |
| Aenet | 99 / 8 / 65 | 53 / 0* / 99 | 88 / 14 / 54 | 35 / 0* / 65 |
| SCAD | 100* / 17 / 43 | 64 / 0* / 92 | 88 / 17 / 45 | 28 / 0* / 50 |
| MC+ | 100* / 18 / 41 | 59 / 1 / 87 | — | — |
| **Scenario 3** | | | | |
| Ridge | — | — | — | — |
| Lasso | — | — | — | — |
| Enet | — | — | — | — |
| AL | — | — | — | — |
| Aenet | — | — | — | — |
| SCAD | — | — | — | — |
| MC+ | — | — | — | — |

\* Approximate value
+
by the ridge, so that its rates never change whatever the setting. For the penalized PL with lasso and elastic net regularization, the FPR tends to be quite large, meaning that these methods wrongly keep the noisy covariates rather frequently. When the penalized WPL is applied, we obtain a smaller FPR but suffer from a smaller TPR at the same time. This smaller
+---PAGE_BREAK---
+
**TABLE 5**

Empirical selection properties (TPR, FPR, and PPV in %) based on 2000 replications of Thomas processes on the domain $D$ ($\mu = 1600$) for different values of $\kappa$ and for the three different scenarios. Different penalty functions are considered as well as two estimating equations, the regularized Poisson likelihood (PL) and the regularized weighted Poisson likelihood (WPL). Each cell reports TPR / FPR / PPV; entries marked — are not recoverable from the source.

| Method | PL, $\kappa = 5 \times 10^{-4}$ | WPL, $\kappa = 5 \times 10^{-4}$ | PL, $\kappa = 5 \times 10^{-5}$ | WPL, $\kappa = 5 \times 10^{-5}$ |
|---|---|---|---|---|
| **Scenario 1** | | | | |
| Ridge | 100 / 100 / 10 | 100 / 100 / 10 | 100 / 100 / 10 | 100 / 100 / 10 |
| Lasso | 100 / 26 / 35 | 52 / 0* / 100* | 98 / 48 / 22 | 56 / 0* / 96 |
| Enet | 100 / 64 / 16 | 55 / 6 / 50 | 99 / 76 / 14 | 50 / 5 / 45 |
| AL | 100 / 0* / 98 | 50 / 0 / 100 | 96 / 6 / 77 | 55 / 0* / 98 |
| Aenet | 100 / 4 / 79 | 54 / 0* / 100* | 97 / 11 / 60 | 57 / 0* / 96 |
| SCAD | 100 / 17 / 50 | 60 / 0* / 100* | 98 / 18 / 47 | 52 / 0* / 90 |
| MC+ | 100 / 22 / 47 | 60 / 0* / 97 | 98 / 23 / 42 | 44 / 0* / 79 |
| **Scenario 2** | | | | |
| Ridge | 100 / 100 / 10 | 100 / 100 / 10 | 100 / 100 / 10 | 100 / 100 / 10 |
| Lasso | 100 / 26 / 33 | 51 / 0* / 97 | 98 / 43 / 24 | 52 / 1 / 91 |
| Enet | 100 / 56 / 18 | 51 / 5 / 55 | 99 / 69 / 15 | 49 / 4 / 62 |
| AL | 100 / 1 / 92 | 51 / 0 / 100* | 96 / 10 / 67 | 53 / 0* / 99 |
| Aenet | 100 / 4 / 78 | 51 / 0* / 100* | 97 / 15 / 52 | 53 / 0* / 98 |
| SCAD | 100 / 21 / 37 | 53 / 1 / 85 | 96 / 16 / 50 | 45 / 1 / 77 |
| MC+ | 100 / 24 / 35 | 47 / 2 / 76 | 97 / 19 / 47 | 42 / 2 / 72 |
| **Scenario 3** | | | | |
| Ridge | — | — | — | — |
| Lasso | — | — | — | — |
| Enet | — | — | — | — |
| AL | — | — | — | — |
| Aenet | — | — | — | — |
| SCAD | — | — | — | — |
| MC+ | — | — | — | — |

\* Approximate value
+
TPR actually comes from the non-selection of $z_2$, which has a smaller coefficient than $z_1$.
+
When we apply adaptive lasso, adaptive elastic net, SCAD, and MC+, we achieve better performance, especially an FPR closer to zero, which automatically improves the PPV. Adaptive elastic net (resp. elastic net) has
+---PAGE_BREAK---
+
a slightly larger FPR than adaptive lasso (resp. lasso). Among all the regularization methods considered in this paper, adaptive lasso seems to outperform the others.
+
Considering Scenarios 1 and 2, we observe the best selection properties for the penalized PL combined with adaptive lasso. As the design becomes more complex in Scenario 3, the penalized PL suffers from a much larger FPR, indicating that this method may not cope with such a complicated situation. However, when we use the penalized WPL, the properties appear more stable across the different simulation designs. A further advantage of the penalized WPL is that it removes almost all extra covariates. It is worth noticing that the penalized WPL may yield a smaller TPR, but we only lose the less informative covariates. From Tables 4 and 5, in a complex situation we would recommend the penalized WPL with the adaptive lasso penalty if the focus is on selection properties; otherwise, the penalized PL combined with the adaptive lasso penalty is preferable.
+
Tables 6 and 7 give the prediction properties of the estimates in terms of bias, standard deviation (SD), and square root of the mean squared error (RMSE), criteria we define as
+
+$$ \text{Bias} = \left[ \sum_{j=1}^{p} \{\hat{\mathbb{E}}(\hat{\beta}_j) - \beta_j\}^2 \right]^{\frac{1}{2}}, \quad \text{SD} = \left[ \sum_{j=1}^{p} \hat{\sigma}_j^2 \right]^{\frac{1}{2}}, \quad \text{RMSE} = \left[ \sum_{j=1}^{p} \hat{\mathbb{E}}(\hat{\beta}_j - \beta_j)^2 \right]^{\frac{1}{2}}, $$
+
+where $\hat{\mathbb{E}}(\hat{\beta}_j)$ and $\hat{\sigma}_j^2$ are respectively the empirical mean and variance of the estimates $\hat{\beta}_j$, for $j = 1, \dots, p$, where $p = 20$ for Scenarios 1 and 2, and $p = 15$ for Scenario 3.
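These criteria translate directly into numpy; a small sketch, where the empirical variance uses denominator $n$ so that $\text{RMSE}^2 = \text{Bias}^2 + \text{SD}^2$ holds exactly (the data below are synthetic, only to exercise the definitions):

```python
import numpy as np

def prediction_metrics(beta_hat, beta_true):
    """Bias, SD, and RMSE as defined above.

    beta_hat: (n_replications, p) array of estimates; beta_true: (p,) truth.
    """
    mean_hat = beta_hat.mean(axis=0)
    bias = np.sqrt(np.sum((mean_hat - beta_true) ** 2))
    sd = np.sqrt(np.sum(beta_hat.var(axis=0)))  # empirical variances, ddof = 0
    rmse = np.sqrt(np.sum(((beta_hat - beta_true) ** 2).mean(axis=0)))
    return bias, sd, rmse

# synthetic estimates around the true (beta1, beta2) = (2, 0.75), shifted by 0.05
rng = np.random.default_rng(1)
est = rng.normal(loc=[2.0, 0.75], scale=0.1, size=(2000, 2)) + 0.05
b, s, r = prediction_metrics(est, np.array([2.0, 0.75]))
# with these definitions, r**2 == b**2 + s**2 up to floating-point error
```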
+
In general, the properties improve with larger values of $\kappa$ and $\mu$ due to weaker spatial dependence and larger sample size. For the oracle model, which contains only $z_1$ and $z_2$, the WPL estimates are more efficient than the PL estimates, particularly in the more clustered case, in agreement with the findings of Guan and Shen (2010) in the unregularized setting.
+
When the regularization methods are applied, the bias generally increases, especially with the penalized WPL method. The regularized WPL has a larger bias since this method fails to select $z_2$ much more frequently. Furthermore, the weighting itself seems to introduce extra bias, even when no regularization is applied, as seen for the oracle model. For a weakly clustered process, the SD of the penalized WPL is similar to that of the penalized PL, presumably because the weaker dependence (larger $\kappa$) makes the weight surface $w(\cdot)$ closer to 1; nevertheless, a larger RMSE is obtained with the penalized WPL. For the more clustered process, the penalized WPL yields a smaller SD, which explains why in some cases (mainly Scenario 3) its RMSE is smaller.
+
For the ridge method, the bias is closest to that of the oracle model, but the SD is the largest. Among the regularization methods, adaptive lasso has the best performance in terms of prediction.
+---PAGE_BREAK---
+
**TABLE 6**
Empirical prediction properties (Bias, SD, and RMSE) based on 2000 replications of Thomas processes on the domain $D \ominus R$ ($\mu = 400$) for different values of $\kappa$ and for the three different scenarios. Different penalty functions are considered as well as two estimating equations, the regularized Poisson likelihood (PL) and the regularized weighted Poisson likelihood (WPL). Each cell reports Bias / SD / RMSE; entries marked — are not recoverable from the source.

| Method | PL, $\kappa = 5 \times 10^{-4}$ | WPL, $\kappa = 5 \times 10^{-4}$ | PL, $\kappa = 5 \times 10^{-5}$ | WPL, $\kappa = 5 \times 10^{-5}$ |
|---|---|---|---|---|
| **Scenario 1** | | | | |
| Oracle | 0.11 / 0.18 / 0.21 | 0.64 / 0.20 / 0.67 | 0.29 / 0.81 / 0.86 | 0.57 / 0.54 / 0.78 |
| Ridge | 0.11 / 0.38 / 0.40 | 0.72 / 0.69 / 1.00 | 0.28 / 1.26 / 1.29 | 0.98 / 1.03 / 1.42 |
| Lasso | 0.28 / 0.32 / 0.42 | 1.06 / 0.32 / 1.11 | 0.47 / 0.99 / 1.10 | 1.40 / 0.73 / 1.58 |
| Enet | 0.24 / 0.38 / 0.44 | 1.28 / 0.28 / 1.31 | 0.45 / 1.04 / 1.13 | 1.59 / 0.58 / 1.70 |
| AL | 0.10 / 0.29 / 0.31 | 0.87 / 0.32 / 0.92 | 0.38 / 0.96 / 1.03 | 1.18 / 0.93 / 1.50 |
| Aenet | 0.14 / 0.30 / 0.33 | 0.93 / 0.39 / 1.01 | 0.40 / 0.96 / 1.04 | 1.29 / 0.82 / 1.53 |
| SCAD | 0.26 / 0.27 / 0.38 | 1.06 / 0.37 / 1.12 | 0.46 / 0.79 / 0.91 | 1.49 / 0.67 / 1.64 |
| MC+ | 0.28 / 0.28 / 0.39 | 1.04 / 0.38 / 1.11 | 0.47 / 0.78 / 0.92 | 1.48 / 0.70 / 1.64 |
| **Scenario 2** | | | | |
| Oracle | 0.12 / 0.23 / 0.26 | 0.71 / 0.26 / 0.76 | 0.30 / 0.78 / 0.84 | 0.59 / 0.62 / 0.84 |
| Ridge | 0.14 / 0.46 / 0.48 | 0.69 / 0.93 / 1.16 | 0.32 / 1.23 / 1.27 | 0.92 / 1.15 / 1.47 |
| Lasso | 0.34 / 0.33 / 0.48 | 1.20 / 0.37 / 1.26 | 0.45 / 0.96 / 1.06 | 1.50 / 0.69 / 1.65 |
| Enet | 0.38 / 0.40 / 0.55 | 1.40 / 0.35 / 1.44 | 0.44 / 1.03 / 1.12 | 1.78 / 0.49 / 1.85 |
| AL | 0.20 / 0.33 / 0.39 | 0.85 / 0.32 / 0.91 | 0.37 / 0.93 / 1.00 | 1.17 / 0.86 / 1.45 |
| Aenet | 0.25 / 0.33 / 0.42 | 0.96 / 0.34 / 1.02 | 0.40 / 0.94 / 1.02 | 1.29 / 0.78 / 1.51 |
| SCAD | 0.38 / 0.30 / 0.48 | 0.95 / 0.48 / 1.06 | 0.44 / 0.80 / 0.91 | 1.53 / 0.70 / 1.68 |
| MC+ | 0.39 / 0.30 / 0.49 | 1.01 / 0.49 / 1.13 | 0.44 / 0.80 / 0.92 | 1.52 / 0.71 / 1.68 |
| **Scenario 3** | | | | |
| Oracle | 0.12 / 0.46 / 0.48 | 0.70 / 0.26 / 0.75 | 0.65 / 1.14 / 1.31 | 0.87 / 0.88 / 1.24 |
| Ridge | 0.13 / 1.03 / 1.04 | 0.71 / 1.45 / 1.62 | 0.52 / 3.10 / 3.14 | 0.90 / 2.86 / 3.00 |
| Lasso | 0.20 / 0.69 / 0.71 | 1.26 / 0.40 / 1.32 | 0.51 / 2.91 / 2.95 | 1.93 / 0.68 / 2.04 |
| Enet | 0.21 / 0.83 / 0.86 | 1.53 / 0.40 / 1.58 | 0.52 / 2.94 / 2.99 | 2.03 / 0.60 / 2.12 |
| AL | 0.18 / 0.57 / 0.60 | — | — | — |
| Aenet | — | — | — | — |
| SCAD | — | — | — | — |
| MC+ | — | — | — | — |
+
Considering Scenarios 1 and 2, we obtain the best properties when we apply the penalized PL with the adaptive lasso penalty.
+
As the design becomes much more
+---PAGE_BREAK---
+
**TABLE 7**

Empirical prediction properties (Bias, SD, and RMSE) based on 2000 replications of Thomas processes on the domain $D$ ($\mu = 1600$) for different values of $\kappa$ and for the three different scenarios. Different penalty functions are considered as well as two estimating equations, the regularized Poisson likelihood (PL) and the regularized weighted Poisson likelihood (WPL). Each cell reports Bias / SD / RMSE; entries marked — are not recoverable from the source.

| Method | PL, $\kappa = 5 \times 10^{-4}$ | WPL, $\kappa = 5 \times 10^{-4}$ | PL, $\kappa = 5 \times 10^{-5}$ | WPL, $\kappa = 5 \times 10^{-5}$ |
|---|---|---|---|---|
| **Scenario 1** | | | | |
| Oracle | 0.05 / 0.11 / 0.12 | 0.33 / 0.15 / 0.37 | 0.16 / 0.45 / 0.48 | 0.41 / 0.22 / 0.46 |
| Ridge | 0.04 / 0.21 / 0.21 | 0.70 / 0.55 / 0.90 | 0.13 / 0.72 / 0.73 | 0.74 / 0.58 / 0.94 |
| Lasso | 0.14 / 0.19 / 0.24 | 1.03 / 0.20 / 1.05 | 0.23 / 0.60 / 0.64 | 0.99 / 0.43 / 1.08 |
| Enet | 0.11 / 0.22 / 0.24 | 1.14 / 0.29 / 1.17 | 0.20 / 0.62 / 0.65 | 1.12 / 0.43 / 1.20 |
| AL | 0.04 / 0.18 / 0.18 | 0.87 / 0.18 / 0.89 | 0.16 / 0.58 / 0.60 | 0.87 / 0.42 / 0.96 |
| Aenet | 0.05 / 0.18 / 0.18 | 0.96 / 0.22 / 0.99 | 0.17 / 0.58 / 0.60 | 0.90 / 0.48 / 1.02 |
| SCAD | 0.19 / 0.18 / 0.26 | 1.30 / 0.34 / 1.34 | 0.14 / 0.53 / 0.55 | 1.37 / 0.51 / 1.46 |
| MC+ | 0.20 / 0.18 / 0.27 | 1.33 / 0.28 / 1.36 | 0.15 / 0.53 / 0.55 | 1.38 / 0.52 / 1.48 |
| **Scenario 2** | | | | |
| Oracle | 0.05 / 0.15 / 0.16 | 0.36 / 0.17 / 0.40 | 0.18 / 0.46 / 0.49 | 0.39 / 0.26 / 0.47 |
| Ridge | 0.05 / 0.27 / 0.27 | 0.69 / 0.62 / 0.94 | 0.17 / 0.74 / 0.80 | 0.78 / 0.64 / 1.01 |
| Lasso | 0.16 / 0.20 / 0.25 | 1.16 / 0.24 / 1.18 | 0.23 / 0.60 / 0.64 | 1.14 / 0.43 / 1.22 |
| Enet | 0.17 / 0.23 / 0.29 | 1.24 / 0.24 / 1.26 | 0.23 / 0.63 / 0.67 | 1.33 / 0.42 / 1.40 |
| AL | 0.07 / 0.18 / 0.20 | 0.85 / 0.18 / 0.87 | 0.18 / 0.58 / 0.61 | 0.83 / 0.41 / 0.93 |
| Aenet | 0.09 / 0.19 / 0.21 | 0.94 / 0.20 / 0.96 | 0.20 / 0.59 / 0.62 | 0.92 / 0.41 / 1.01 |
| SCAD | 0.26 / 0.20 / 0.33 | 1.26 / 0.51 / 1.36 | 0.19 / 0.51 / 0.55 | 1.31 / 0.60 / 1.44 |
| MC+ | 0.26 / 0.20 / 0.33 | 1.31 / 0.55 / 1.42 | 0.19 / 0.51 / 0.55 | 1.32 / 0.61 / 1.46 |
| **Scenario 3** | | | | |
| Oracle | 0.13 / 0.31 / 0.34 | 0.43 / 0.18 / 0.47 | — | — |
| Ridge | — | — | — | — |
| Lasso | — | — | — | — |
| Enet | — | — | — | — |
| AL | — | — | — | — |
| Aenet | — | — | — | — |
| SCAD | — | — | — | — |
| MC+ | — | — | — | — |
+
complex for Scenario 3, using the penalized PL with adaptive lasso doubles and even quadruples the SD, due to the over-selection of many unimportant covariates.
---PAGE_BREAK---

In particular, for the more clustered process, better properties are even obtained by applying the regularized WPL combined with adaptive lasso. From Tables 6 and 7, when the focus is on prediction properties, we would recommend applying the penalized WPL combined with the adaptive lasso penalty when the observed point pattern is very clustered and the covariates have a complex covariance structure. Otherwise, the penalized PL combined with the adaptive lasso penalty is more favorable. Our recommendations in terms of prediction thus agree with those made in terms of selection.
+
+### 4.3. Logistic regression
+
Our concern here is to compare the regularized Poisson estimator to the regularized logistic estimator for different numbers of dummy points. Recall that the number of dummy points arises when we discretize the integral terms in (2.3) and (2.4).
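The role of the dummy points can be pictured with a toy discretization of the Poisson log-likelihood: the integral of the intensity over the domain is replaced by an average over dummy points. This is a simplified quadrature sketch with names of our own choosing, not the weighting scheme actually used by spatstat:

```python
import numpy as np

def poisson_loglik_approx(beta, data_z, dummy_z, volume):
    """Discretized (unweighted) Poisson log-likelihood for a log-linear
    intensity rho(u) = exp(b0 + b1 * z(u)) on a domain of given volume.

    data_z: covariate values at the observed points;
    dummy_z: covariate values at the dummy points.
    The integral of rho is approximated by the average of rho over the
    dummy points times the domain volume (a crude quadrature rule).
    """
    b0, b1 = beta
    integral = volume * np.mean(np.exp(b0 + b1 * dummy_z))
    return np.sum(b0 + b1 * data_z) - integral

# sanity check of the integral term on [0, 1] with z(u) = u:
# int_0^1 exp(0.5 + 2u) du = (exp(2.5) - exp(0.5)) / 2
dummy_u = np.linspace(0.0, 1.0, 10001)
approx = np.mean(np.exp(0.5 + 2.0 * dummy_u))
exact = (np.exp(2.5) - np.exp(0.5)) / 2.0
```

More dummy points give a better approximation of the integral, which is the trade-off behind the choice of nd² studied below.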
+
+TABLE 8
+Empirical selection properties (TPR, FPR, and PPV in %) based on 2000 replications of Thomas processes on the domain $D$ ($\mu = 1600$) for $\kappa = 5 \times 10^{-5}$, for two different scenarios, and for three different numbers of dummy points. Different estimating equations are considered, the regularized Poisson and logistic regression likelihoods, employing adaptive lasso regularization method.
+
(Each cell reports TPR / FPR / PPV in %.)

| Method | nd | Scenario 2, Unweighted | Scenario 2, Weighted | Scenario 3, Unweighted | Scenario 3, Weighted |
|---|---|---|---|---|---|
| Poisson | 20 | 96 / 35 / 32 | 53 / 0* / 96 | 98 / 82 / 16 | 47 / 2 / 79 |
| | 40 | 95 / 6 / 77 | 52 / 0* / 95 | 98 / 83 / 16 | 46 / 2 / 77 |
| | 80 | 95 / 4 / 83 | 50 / 0* / 94 | 98 / 83 / 16 | 43 / 2 / 74 |
| Logistic | 20 | 94 / 11 / 60 | 49 / 0* / 91 | 98 / 72 / 20 | 41 / 2 / 73 |
| | 40 | 94 / 8 / 67 | 50 / 0* / 93 | 99 / 81 / 16 | 43 / 2 / 74 |
| | 80 | 94 / 5 / 77 | 50 / 0* / 93 | 99 / 83 / 16 | 42 / 2 / 73 |
+
+* Approximate value
+
We consider three different numbers of dummy points nd². These allow us to observe the properties in three different situations: (a) nd² < m, (b) nd² ≈ m, and (c) nd² > m, where m is the number of points. In the following, m ≈ 1600 and nd² = 400, 1600, and 6400 (i.e. nd = 20, 40, 80). Note that the default choice for the Poisson likelihood in spatstat corresponds to case (c). Baddeley et al. (2014) show that for datasets with a very large number of points and for very structured point processes, the logistic likelihood method is clearly preferable, as it requires a smaller number of dummy points to perform quickly and efficiently. We want to investigate a similar comparison when these methods are regularized.
+
We only report the results for $\kappa = 5 \times 10^{-5}$ and $\mu = 1600$, and for Scenarios 2
+---PAGE_BREAK---
+
+TABLE 9
+Empirical prediction properties (Bias, SD, and RMSE) based on 2000 replications of Thomas processes on the domain $D$ ($\mu = 1600$) for $\kappa = 5 \times 10^{-5}$, for two different scenarios, and for three different numbers of dummy points. Different estimating equations are considered, the regularized Poisson and logistic regression likelihoods, employing adaptive lasso regularization method.
+
(Each cell reports Bias / SD / RMSE.)

| Method | nd | Scenario 2, Unweighted | Scenario 2, Weighted | Scenario 3, Unweighted | Scenario 3, Weighted |
|---|---|---|---|---|---|
| **No regularization** | | | | | |
| Poisson | 20 | 0.37 / 0.64 / 0.74 | 0.29 / 0.74 / 0.79 | 0.28 / 2.15 / 2.16 | 0.42 / 2.06 / 2.11 |
| | 40 | 0.14 / 0.63 / 0.65 | 0.16 / 0.73 / 0.75 | 0.33 / 2.47 / 2.50 | 0.42 / 2.32 / 2.35 |
| | 80 | 0.17 / 0.64 / 0.66 | 0.11 / 0.75 / 0.76 | 0.26 / 2.57 / 2.58 | 0.43 / 2.40 / 2.43 |
| Logistic | 20 | 0.03 / 0.69 / 0.69 | 0.32 / 1.34 / 1.37 | 0.20 / 2.31 / 2.32 | 0.36 / 2.95 / 2.97 |
| | 40 | 0.07 / 0.60 / 0.61 | 0.12 / 0.96 / 0.97 | 0.23 / 2.31 / 2.32 | 0.37 / 2.56 / 2.58 |
| | 80 | 0.10 / 0.60 / 0.61 | 0.14 / 0.81 / 0.82 | 0.25 / 2.36 / 2.38 | 0.42 / 2.38 / 2.42 |
| **Adaptive lasso** | | | | | |
| Poisson | 20 | 0.30 / 0.59 / 0.67 | 0.86 / 0.47 / 0.98 | 0.30 / 2.00 / 2.03 | 1.14 / 0.68 / 1.33 |
| | 40 | 0.20 / 0.58 / 0.61 | 0.86 / 0.49 / 0.99 | 0.33 / 2.33 / 2.35 | 1.18 / 0.70 / 1.37 |
| | 80 | 0.18 / 0.59 / 0.62 | 0.88 / 0.51 / 1.02 | 0.28 / 2.41 / 2.43 | 1.22 / 0.71 / 1.41 |
| Logistic | 20 | 0.19 / 0.50 / 0.53 | 0.95 / 0.55 / 1.09 | 0.23 / 2.06 / 2.07 | 1.26 / 0.73 / 1.45 |
| | 40 | 0.18 / 0.52 / 0.55 | 0.89 / 0.52 / 1.03 | 0.23 / 2.15 / 2.16 | 1.22 / 0.72 / 1.42 |
| | 80 | 0.18 / 0.55 / 0.58 | 0.89 / 0.52 / 1.03 | 0.25 / 2.21 / 2.22 | 1.24 / 0.71 / 1.43 |

+
+and 3. We use the same selection and prediction indices examined in Section 4.2 and consider only the adaptive lasso method.
+
Table 8 presents the selection properties of the regularized Poisson and logistic estimators using adaptive lasso regularization. For the unweighted versions of the procedure, the regularized logistic method outperforms the regularized Poisson method when nd = 20, i.e. when the number of dummy points is much smaller than the number of points. When nd² ≈ m or nd² > m, the two methods tend to perform similarly. When we consider the weighted versions of the methods, the results do not change much with nd, and the regularized Poisson likelihood slightly outperforms the regularized logistic likelihood. In addition, for Scenario 3, which considers a more complex situation, the methods tend to select the noisy covariates much more frequently.
+
Empirical biases, standard deviations, and square roots of mean squared errors are presented in Table 9. We include the empirical results for the standard Poisson and logistic estimates (i.e. without regularization). Let us first consider the unweighted methods with no regularization. The logistic method clearly has a smaller bias, especially when nd = 20, which explains why in most situations its RMSE is smaller.
+
+However, for the weighted methods,
+---PAGE_BREAK---
+
although the logistic method has a smaller bias in general, it produces a much larger SD, leading to a larger RMSE in all cases. When we compare the weighted and the unweighted methods for the logistic estimates, in general, not only do we fail to reduce the SD, but we also incur a larger bias. When adaptive lasso regularization is combined with the unweighted methods, we preserve the bias in general while improving the SD, and hence the RMSE; here the logistic likelihood method slightly outperforms the Poisson likelihood method. When the weighted methods are considered, we obtain a smaller SD but a larger bias. For the weighted versions of the Poisson and logistic likelihoods, the results do not change much with nd, and the weighted Poisson method slightly outperforms the weighted logistic method. From Tables 8 and 9, when the number of dummy points can be chosen such that nd² ≈ m or nd² > m, we would recommend the Poisson likelihood method; when it must be chosen such that nd² < m, the logistic likelihood method is more favorable. Our recommendations regarding weighted versus unweighted methods follow those of Section 4.2.
+
+**5. Application to forestry datasets**
+
In a 50-hectare region ($D = 1{,}000\text{ m} \times 500\text{ m}$) of the tropical moist forest of Barro Colorado Island (BCI) in central Panama, censuses have been carried out in which all free-standing woody stems at least 10 mm in diameter at breast height were identified, tagged, and mapped, resulting in maps of over 350,000 individual trees of more than 300 species (see Condit, 1998; Hubbell et al., 1999, 2005). It is of interest to understand how such a high number of different tree species continues to coexist, profiting from different habitats determined by, e.g., topography or soil properties (see e.g. Waagepetersen, 2007; Waagepetersen and Guan, 2009). In particular, the selection of covariates among topographical attributes and soil minerals, as well as the estimation of their coefficients, is our main concern.
+
We are particularly interested in analyzing the locations of 3,604 *Beilschmiedia pendula Lauraceae* (BPL) tree stems. We model the intensity of BPL trees as a log-linear function of two topographical attributes and 13 soil properties as covariates. Figure 2 contains maps of the locations of BPL trees, elevation, slope, and the concentration of Phosphorus. BPL trees seem to appear in greater abundance in areas of high elevation, steep slope, and low concentration of Phosphorus. The covariate maps are depicted in Figure 4.
+
We apply the regularized Poisson and logistic likelihoods, combined with adaptive lasso regularization, to select and estimate parameters. Since this dataset does not have a very large number of points, we can keep the default number of dummy points for the Poisson likelihood as in the spatstat package, i.e. a number of dummy points larger than the number of points, and still perform quickly and efficiently. It is worth emphasizing that we center and scale the 15 covariates to observe which ones have the largest effect on the intensity. The results are presented in Table 10: 12 covariates for
+---PAGE_BREAK---
+
+FIG 2. Maps of locations of BPL trees (top left), elevation (top right), slope (bottom left), and concentration of Phosphorus (bottom right).
+
the Poisson likelihood and 11 for the logistic method are selected out of the 15 covariates using the unweighted methods, while only 5 covariates (for both the Poisson and logistic methods) are selected using the weighted versions. The unweighted methods tend to overfit the model by selecting too many unimportant covariates.
+
The weighted methods tend to discard the uninformative covariates. The Poisson and logistic estimates yield similar selection and estimation results. First, we find some differences in estimation between the unweighted and the weighted methods, especially for slope and Manganese (Mn), for which the weighted methods give roughly twice as large estimates. Second, we may lose some nonzero covariates when we apply the weighted methods, although only those with relatively small coefficients. Boron (B) is highly correlated with many of the other covariates, particularly with those that are not selected. This is possibly why Boron, which is selected and may have a non-negligible coefficient under the unweighted methods, is not chosen by the weighted ones, and it may explain why the weighted methods introduce extra bias. However, since this situation appears quite close to Scenario 3 of the simulation study, the weighted methods are more favorable in terms of both selection and prediction.
+
In this application, we do not face any computational problem. Nevertheless, if we had to model a species of trees with many more points, the default value of *nd* would lead to numerical problems. In such a case, the logistic likelihood would be a good alternative.
+
These results suggest that BPL trees favor living in areas of higher elevation and steeper slope. Further, higher levels of Manganese (Mn) and lower levels of both Phosphorus (P) and Zinc (Zn) concentrations in the soil are associated with a higher occurrence of BPL trees.
+---PAGE_BREAK---
+
+TABLE 10
+Barro Colorado Island data analysis: Parameter estimates of the regression coefficients for Beilschmiedia pendula Lauraceae trees applying regularized Poisson and logistic regression likelihoods with adaptive lasso regularization.
+
| Covariate | Poisson (unweighted) | Logistic (unweighted) | Poisson (weighted) | Logistic (weighted) |
|---|---|---|---|---|
| Elev | 0.39 | 0.40 | 0.41 | 0.45 |
| Slope | 0.26 | 0.32 | 0.51 | 0.60 |
| Al | 0 | 0 | 0 | 0 |
| B | 0.30 | 0.30 | 0 | 0 |
| Ca | 0.10 | 0.15 | 0 | 0 |
| Cu | 0.10 | 0.12 | 0 | 0 |
| Fe | 0.05 | 0 | 0 | 0 |
| K | 0 | 0 | 0 | 0 |
| Mg | -0.17 | -0.18 | 0 | 0 |
| Mn | 0.12 | 0.13 | 0.23 | 0.24 |
| P | -0.60 | -0.60 | -0.50 | -0.52 |
| Zn | -0.43 | -0.46 | -0.35 | -0.37 |
| N | 0 | 0 | 0 | 0 |
| N.min | -0.12 | -0.10 | 0 | 0 |
| pH | -0.14 | -0.14 | 0 | 0 |
| Nb of cov. | 12 | 11 | 5 | 5 |
+
+## 6. Conclusion and discussion
+
We develop regularized versions of estimating equations based on the Campbell theorem, derived from the Poisson and the logistic regression likelihoods. Our procedure is able to perform covariate selection for modeling the intensity of spatial point processes. Furthermore, our procedure is generally easy to implement in R, since we only need to combine the `spatstat` package with the `glmnet` and `ncvreg` packages. We study the asymptotic properties of both the regularized weighted Poisson and logistic estimates in terms of consistency, sparsity, and asymptotic normality. We find that, among the regularization methods considered in this paper, adaptive lasso, adaptive elastic net, SCAD, and MC+ are the methods that can satisfy our theorems.
+
+We carry out several scenarios in the simulation study to assess the selection and prediction properties of the estimates. We compare the penalized Poisson likelihood (PL) and the penalized weighted Poisson likelihood (WPL) with different penalty functions. From the results, when we deal with covariates having a complex covariance matrix and when the point pattern looks quite clustered, we recommend applying the penalized WPL combined with adaptive lasso regularization. Otherwise, the regularized PL with the adaptive lasso is preferable.
+---PAGE_BREAK---
+
+Further and more careful investigation of the choice of the tuning parameters may be needed to improve the selection properties. We note that the bias increases quite significantly when the regularized WPL is applied. When the penalized WPL is considered, a two-step procedure may be needed to improve the prediction properties: (1) use the penalized WPL combined with the adaptive lasso to choose the covariates, then (2) use the selected covariates to obtain the estimates. This post-selection inference procedure has not been investigated in this paper.
+
+We also compare the estimates obtained from the Poisson and the logistic likelihoods. When the number of dummy points can be chosen to be either similar to or larger than the number of points, we recommend the use of the Poisson likelihood method. Nevertheless, when the number of dummy points should be chosen to be smaller than the number of points, the logistic method is more favorable.
+
+Further work would consist in studying the situation where the number of covariates is much larger than the sample size. In such a situation, the coordinate descent algorithm used in this paper may cause some numerical troubles. The Dantzig selector procedure introduced by Candes and Tao (2007) might be a good alternative, as its implementation for linear models (and for generalized linear models) reduces to a linear program. It would be interesting to bring this approach to the spatial point process setting.
+
+Another direction could consist in extending the intensity model itself to gain flexibility, for instance using single-index type models. Such models have already been proposed for spatial point processes by Fang and Loh (2017) with a moderate number of covariates. Following e.g. Zhu et al. (2011), combining such models with regularization techniques for inhomogeneous spatial point processes seems feasible. Kernel-type regression methods could also offer interesting perspectives, and the work by Crawford et al. (2018) could serve as a basis to investigate such methods for the feature selection problem in spatial point processes.
+
+## Appendix A: Parametric intensity estimation
+
+One of the standard ways to fit models to data is by maximizing the likelihood of the model for the data. While the maximum likelihood method is feasible for parametric Poisson point process models (Appendix A.1), computationally intensive Markov chain Monte Carlo (MCMC) methods are needed otherwise (Møller and Waagepetersen, 2004). As MCMC methods are not yet straightforward to implement, estimating equations based on the Campbell theorem have been developed (see e.g. Waagepetersen, 2007; Møller and Waagepetersen, 2007; Waagepetersen, 2008; Guan and Shen, 2010; Baddeley et al., 2014). We review the estimating equations derived from the Poisson likelihood in Appendices A.2-A.3 and from the logistic regression likelihood in Appendix A.4.
+
+### A.1. Maximum likelihood estimation
+
+For an inhomogeneous Poisson point process with intensity function $\rho$ parameterized by $\beta$, the likelihood function is
+---PAGE_BREAK---
+
+$$L(\boldsymbol{\beta}) = \prod_{u \in \mathbf{X} \cap D} \rho(u; \boldsymbol{\beta}) \exp \left( \int_D (1 - \rho(u; \boldsymbol{\beta})) du \right),$$
+
+and the log-likelihood function of $\boldsymbol{\beta}$ is
+
+$$\ell(\boldsymbol{\beta}) = \sum_{u \in \mathbf{X} \cap D} \log \rho(u; \boldsymbol{\beta}) - \int_{D} \rho(u; \boldsymbol{\beta}) du, \quad (\text{A.1})$$
+
+where we have omitted the constant term $\int_{D} 1 du = |D|$. As the intensity function has log-linear form (1.1), (A.1) reduces to
+
+$$\ell(\boldsymbol{\beta}) = \sum_{u \in \mathbf{X} \cap D} \boldsymbol{\beta}^{\top} \mathbf{z}(u) - \int_{D} \exp(\boldsymbol{\beta}^{\top} \mathbf{z}(u)) du.$$
+
+Rathbun and Cressie (1994) show that the maximum likelihood estimator is consistent, asymptotically normal and efficient as the sample region goes to $\mathbb{R}^d$.
+
+### A.2. Poisson likelihood
+
+Let $\boldsymbol{\beta}_0$ be the true parameter vector. By applying the Campbell theorem (2.1) to the score function, i.e. the gradient vector of $\ell(\boldsymbol{\beta})$ denoted by $\ell^{(1)}(\boldsymbol{\beta})$, we have
+
+$$
+\begin{align*}
+\mathbb{E}l^{(1)}(\boldsymbol{\beta}) &= \mathbb{E} \sum_{u \in \mathbf{X} \cap D} \mathbf{z}(u) - \int_{D} \mathbf{z}(u) \exp(\boldsymbol{\beta}^{\top} \mathbf{z}(u)) du \\
+&= \int_{D} \mathbf{z}(u) \exp(\boldsymbol{\beta}_{0}^{\top} \mathbf{z}(u)) du - \int_{D} \mathbf{z}(u) \exp(\boldsymbol{\beta}^{\top} \mathbf{z}(u)) du \\
+&= \int_{D} \mathbf{z}(u) (\exp(\boldsymbol{\beta}_{0}^{\top} \mathbf{z}(u)) - \exp(\boldsymbol{\beta}^{\top} \mathbf{z}(u))) du = 0
+\end{align*}
+$$
+
+when $\boldsymbol{\beta} = \boldsymbol{\beta}_0$. So, the score function of the Poisson log-likelihood is an unbiased estimating equation, even when $\mathbf{X}$ is not a Poisson point process. The estimator maximizing (A.1) is referred to as the Poisson estimator. The properties of the Poisson estimator have been carefully studied. Schoenberg (2005) shows that the Poisson estimator is still consistent for a class of spatio-temporal point process models. The asymptotic normality for a fixed observation domain is obtained by Waagepetersen (2007), while Guan and Loh (2007) establish asymptotic normality under an increasing domain assumption and for suitable mixing point processes.
+
+Regarding the parameter $\psi$ (see Appendix B.2), Waagepetersen and Guan (2009) study a two-step procedure to estimate both $\boldsymbol{\beta}$ and $\psi$, and they prove that, under certain mixing conditions, the parameter estimates $(\hat{\boldsymbol{\beta}}, \hat{\psi})$ enjoy the properties of consistency and asymptotic normality.
+
+### A.3. Weighted Poisson likelihood
+
+Although the estimating equation approach derived from the Poisson likelihood is simpler and faster to implement than maximum likelihood estimation, it
+---PAGE_BREAK---
+
+potentially produces a less efficient estimate than that of maximum likelihood (Waagepetersen, 2007; Guan and Shen, 2010) because information about the interaction of events is ignored. To recover some of this loss of efficiency, Guan and Shen (2010) propose a weighted Poisson log-likelihood function given by
+
+$$ \ell(w; \beta) = \sum_{u \in X \cap D} w(u) \log \rho(u; \beta) - \int_D w(u) \rho(u; \beta) du, \quad (\text{A.2}) $$
+
+where $w(\cdot)$ is a weight surface. Looking at (A.2), we see that a larger weight $w(u)$ makes the observations in the infinitesimal region $du$ more influential. By the Campbell theorem, $\ell^{(1)}(w; \beta)$ is still an unbiased estimating equation. In addition, Guan and Shen (2010) prove that, under some conditions, the parameter estimates are consistent and asymptotically normal.
+
+Guan and Shen (2010) show that a weight surface $w(\cdot)$ minimizing the trace of the asymptotic variance-covariance matrix of the estimates maximizing (A.2) can result in more efficient estimates than the Poisson estimator. In particular, the proposed weight surface is
+
+$$ w(u) = \{1 + \rho(u)f(u)\}^{-1}, $$
+
+where $f(u) = \int_D \{g(\|v-u\|; \psi) - 1\}dv$ and $g(\cdot)$ is the pair correlation function. For a Poisson point process, note that $f(u) = 0$ and hence $w(u) = 1$, which reduces to maximum likelihood estimation. For general point processes, the weight surface depends on both the intensity function and the pair correlation function, and thus incorporates information on both the inhomogeneity and the dependence of the spatial point process. When clustering is present so that $g(v-u) > 1$, then $f(u) > 0$ and hence the weight decreases with $\rho(u)$. The weight surface can be estimated by setting $\hat{w}(u) = \{1+\hat{\rho}(u)\hat{f}(u)\}^{-1}$. To get the estimate $\hat{\rho}(u)$, $\beta$ is substituted by $\tilde{\beta}$ given by the Poisson estimates, that is, $\hat{\rho}(u) = \rho(u; \tilde{\beta})$. Alternatively, $\hat{\rho}(u)$ can also be computed nonparametrically by a kernel method. Furthermore, Guan and Shen (2010) suggest approximating $f(u)$ by $K(r) - \pi r^2$, where $K(\cdot)$ is Ripley's K-function, estimated by
+
+$$ \hat{K}(r) = \sum_{u,v \in X \cap D}^{\neq} \frac{\mathbf{1}[\|u-v\| \leq r]}{\hat{\rho}(u)\hat{\rho}(v)|D \cap D_{u-v}|}. $$
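As a minimal sketch (in Python rather than the paper's R/`spatstat` workflow), the estimated weight surface $\hat{w}(u) = \{1+\hat{\rho}(u)\hat{f}(u)\}^{-1}$ with $\hat f(u) \approx \hat K(r) - \pi r^2$ can be computed as below; the point pattern, the constant fitted intensity, the evaluation distance `r`, and the crude edge correction are all illustrative assumptions.

```python
import numpy as np

def k_hat(points, rho, r, area):
    """Inhomogeneous K-function estimate at distance r; for simplicity the
    edge-correction term |D ∩ D_{u-v}| is approximated by |D| = area."""
    n = len(points)
    total = 0.0
    for i in range(n):
        for j in range(n):
            if i != j and np.linalg.norm(points[i] - points[j]) <= r:
                total += 1.0 / (rho[i] * rho[j])
    return total / area

def weight_surface(rho_u, f_u):
    """Guan-Shen optimal weight w(u) = {1 + rho(u) f(u)}^{-1}."""
    return 1.0 / (1.0 + rho_u * f_u)

# toy pattern on the unit square with a constant fitted intensity
rng = np.random.default_rng(0)
pts = rng.uniform(0.0, 1.0, size=(50, 2))
rho = np.full(50, 50.0)
r = 0.1                                           # illustrative distance
f = k_hat(pts, rho, r, area=1.0) - np.pi * r**2   # f(u) ≈ K(r) - pi r^2
w = weight_surface(rho, f)
```

For a Poisson-like pattern $\hat f(u) \approx 0$, so the weights stay close to 1 and the weighted estimating equation essentially reduces to the ordinary Poisson one.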
+
+Guan et al. (2015) extend the study by Guan and Shen (2010) and consider more complex estimating equations. Specifically, $w(u)z(u)$ is replaced by a function $h(u; \beta)$ in the derivative of (A.2) with respect to $\beta$. The procedure results in a slightly more efficient estimate than the one obtained from (A.2). However, the computational cost is higher and, since we combine estimating equations and penalization methods (see Section 2.3), we have not considered this extension.
+
+### A.4. Logistic regression likelihood
+
+Although the estimating equations discussed in Appendices A.2 and A.3 are unbiased, these methods do not, in general, produce unbiased estimators in practical
+---PAGE_BREAK---
+
+implementations. Waagepetersen (2008) and Baddeley et al. (2014) propose another estimating function which is close to the score of the Poisson log-likelihood but yields a less biased estimator than the Poisson estimates. In addition, their proposed estimating equation is in fact the derivative of the logistic regression likelihood.
+
+Following Baddeley et al. (2014), we define the weighted logistic regression log-likelihood function by
+
+$$
+\begin{equation}
+\begin{split}
+\ell(w; \beta) = & \sum_{u \in X \cap D} w(u) \log \left( \frac{\rho(u; \beta)}{\delta(u) + \rho(u; \beta)} \right) \\
+& - \int_D w(u) \delta(u) \log \left( \frac{\rho(u; \beta) + \delta(u)}{\delta(u)} \right) du,
+\end{split}
+\tag{A.3}
+\end{equation}
+$$
+
+where $\delta(u)$ is a nonnegative real-valued function. Its role, as well as the origin of the name 'logistic method', is explained further in Appendix C.2.
+Note that the score of (A.3) is an unbiased estimating equation. Waagepetersen (2008) shows asymptotic normality, for Poisson and some clustered point processes, of the estimator obtained from a similar procedure. Furthermore, the methodology and results are extended by Baddeley et al. (2014) to spatial Gibbs point processes.
+
+To determine the optimal weight surface $w(\cdot)$ for the logistic method, we follow Guan and Shen (2010) and minimize the trace of the asymptotic covariance matrix of the estimates. We obtain the weight surface defined by
+
+$$
+w(u) = \frac{\rho(u) + \delta(u)}{\delta(u)\{1 + \rho(u)f(u)\}},
+$$
+
+where $\rho(u)$ and $f(u)$ can be estimated as in Appendix A.3.
+
+## Appendix B: Examples of spatial point process models with prescribed intensity function
+
+We discuss spatial point process models specified by a deterministic or random intensity function. In particular, we consider two important model classes, namely Poisson and Cox processes. Poisson point processes serve as a tractable model class for no interaction, or complete spatial randomness. Cox processes form major classes for clustering or aggregation. For conciseness, we focus on these two classes of models. We could also have presented determinantal point processes (e.g. Lavancier et al., 2015), which constitute an interesting class of repulsive point patterns with explicit moments; this has not been further investigated for the sake of brevity. In this paper, we focus on log-linear models of the intensity function given by (1.1).
+
+### B.1. Poisson point process
+
+A point process $\mathbf{X}$ on $D$ is a Poisson point process with intensity function $\rho$, assumed to be locally integrable, if the following conditions are satisfied:
+---PAGE_BREAK---
+
+1. for any $B \subseteq D$ with $\mu(B) = \int_B \rho(u) du < \infty$, $N(B) \sim \text{Poisson}(\mu(B))$,
+
+2. conditionally on $N(B)$, the points in $\mathbf{X} \cap B$ are i.i.d. with joint density proportional to $\rho(u)$, $u \in B$.
+
+A Poisson point process with a log-linear intensity function is also called a modulated Poisson point process (e.g. Møller and Waagepetersen, 2007; Waagepetersen, 2008). In particular, for Poisson point processes, $\rho^{(2)}(u, v) = \rho(u)\rho(v)$, and $g(u, v) = 1, \forall u, v \in D$.
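A Poisson point process with a bounded intensity can be simulated by thinning a dominating homogeneous process. A minimal Python sketch (the covariate $z(u) = (1, x)$, the value of $\boldsymbol{\beta}$, and the window $[0,1]^2$ are toy assumptions, not from the paper):

```python
import numpy as np

def intensity(u, beta):
    """Log-linear intensity rho(u) = exp(beta' z(u)) with toy covariate
    vector z(u) = (1, x-coordinate of u)."""
    z = np.column_stack([np.ones(len(u)), u[:, 0]])
    return np.exp(z @ beta)

def simulate_poisson(beta, rho_max, rng):
    """Lewis-Shedler thinning on D = [0,1]^2: simulate a dominating
    homogeneous Poisson process of rate rho_max, then keep each point u
    independently with probability rho(u) / rho_max."""
    n_dom = rng.poisson(rho_max)
    u = rng.uniform(0.0, 1.0, size=(n_dom, 2))
    keep = rng.uniform(size=n_dom) < intensity(u, beta) / rho_max
    return u[keep]

rng = np.random.default_rng(1)
beta = np.array([4.0, 1.0])
x = simulate_poisson(beta, rho_max=np.exp(5.0), rng=rng)
# E[N(D)] = ∫_D e^{4+x} du = e^4 (e - 1) ≈ 93.8 points on average
```

The thinned pattern has exactly the target intensity, which is the mechanism behind property 1 above restricted to inhomogeneous rates.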
+
+### B.2. Cox processes
+
+A Cox process is a natural extension of a Poisson point process, obtained by considering the intensity function of the Poisson point process as a realization of a random field. Suppose that $\Lambda = \{\Lambda(u) : u \in D\}$ is a nonnegative random field. If the conditional distribution of $\mathbf{X}$ given $\Lambda$ is a Poisson point process on $D$ with intensity function $\Lambda$, then $\mathbf{X}$ is said to be a Cox process driven by $\Lambda$ (see e.g. Møller and Waagepetersen, 2004). There are several types of Cox processes. Here, we consider two types of Cox processes: a Neyman-Scott point process and a log Gaussian Cox process.
+
+**Neyman-Scott point processes.** Let $\mathbf{C}$ be a stationary Poisson process (mother process) with intensity $\kappa > 0$. Given $\mathbf{C}$, let $\mathbf{X}_c, c \in \mathbf{C}$, be independent Poisson processes (offspring processes) with intensity function
+
+$$\rho_c(u; \boldsymbol{\beta}) = \exp(\boldsymbol{\beta}^\top \mathbf{z}(u)) k(u-c; \omega) / \kappa,$$
+
+where $k$ is a probability density function, parameterized by $\omega$, determining the distribution of offspring points around the mother points. Then $\mathbf{X} = \cup_{c \in \mathbf{C}} \mathbf{X}_c$ is a special case of an *inhomogeneous Neyman-Scott point process* with mothers $\mathbf{C}$ and offspring $\mathbf{X}_c$, $c \in \mathbf{C}$. The point process $\mathbf{X}$ is a Cox process driven by $\Lambda(u) = \exp(\boldsymbol{\beta}^\top \mathbf{z}(u)) \sum_{c \in \mathbf{C}} k(u-c; \omega) / \kappa$ (e.g. Waagepetersen, 2007; Coeurjolly and Møller, 2014) and we can verify that the intensity function of $\mathbf{X}$ is indeed
+
+$$\rho(u; \boldsymbol{\beta}) = \exp(\boldsymbol{\beta}^{\top} \mathbf{z}(u)).$$
+
+One example of *Neyman-Scott point process* is the *Thomas process* where
+
+$$k(u) = (2\pi\omega^2)^{-d/2} \exp(-\|u\|^2/(2\omega^2))$$
+
+is the density of $N_d(0, \omega^2 I_d)$. Conditionally on a parent event at location $c$, children events are normally distributed around $c$. Smaller values of $\omega$ correspond to tighter clusters, and smaller values of $\kappa$ correspond to fewer parents. The parameter vector $\psi = (\kappa, \omega)^{\top}$ is referred to as the interaction parameter as it modulates the spatial interaction (or dependence) among events.
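The mother/offspring construction above can be sketched in a few lines of Python. This is the *homogeneous* Thomas process (no covariate term), with toy parameter values and a padded simulation window as hypothetical choices:

```python
import numpy as np

def simulate_thomas(kappa, omega, mu, rng, pad=0.1):
    """Homogeneous Thomas process on the unit square: mothers form a
    Poisson process of rate kappa; each mother gets a Poisson(mu) number
    of children displaced by N(0, omega^2 I_2).  Mothers are simulated on
    a padded window so clusters centred just outside D still contribute."""
    lo, hi = -pad, 1.0 + pad
    n_mothers = rng.poisson(kappa * (hi - lo) ** 2)
    mothers = rng.uniform(lo, hi, size=(n_mothers, 2))
    kids = [c + omega * rng.standard_normal((rng.poisson(mu), 2))
            for c in mothers]
    pts = np.vstack(kids) if kids else np.empty((0, 2))
    inside = np.all((pts >= 0.0) & (pts <= 1.0), axis=1)
    return pts[inside]

rng = np.random.default_rng(2)
x = simulate_thomas(kappa=20.0, omega=0.03, mu=5.0, rng=rng)
# the intensity is kappa * mu = 100; smaller omega gives tighter clusters
```

The inhomogeneous version of the text would additionally thin each child with probability proportional to $\exp(\boldsymbol{\beta}^\top \mathbf{z}(u))$.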
+
+**Log Gaussian Cox process.** Suppose that $\log \mathbf{\Lambda}$ is a Gaussian random field and that, given $\mathbf{\Lambda}$, the point process $\mathbf{X}$ is a Poisson process with intensity function $\mathbf{\Lambda}$. Then $\mathbf{X}$ is said to
+---PAGE_BREAK---
+
+be a log Gaussian Cox process driven by $\mathbf{\Lambda}$ (Møller and Waagepetersen, 2004). If the random intensity function can be written as
+
+$$ \log \mathbf{\Lambda}(u) = \beta^{\top} \mathbf{z}(u) + \phi(u) - \sigma^2/2, $$
+
+where $\phi$ is a zero-mean stationary Gaussian random field with covariance function $c(u, v; \boldsymbol{\psi}) = \sigma^2 R(v-u; \zeta)$, which depends on the parameter $\boldsymbol{\psi} = (\sigma^2, \zeta)^{\top}$ (Møller and Waagepetersen, 2007; Coeurjolly and Møller, 2014). The intensity function of this log Gaussian Cox process is then given by
+
+$$ \rho(u; \boldsymbol{\beta}) = \exp(\boldsymbol{\beta}^{\top} \mathbf{z}(u)). $$
+
+One example of correlation function is the exponential form (e.g. Waagepetersen and Guan, 2009)
+
+$$ R(v-u; \zeta) = \exp(-\|u-v\|/\zeta), \text{ for } \zeta > 0. $$
+
+Here, $\boldsymbol{\psi} = (\sigma^2, \zeta)^{\top}$ constitutes the interaction parameter vector, where $\sigma^2$ is the variance and $\zeta$ is the correlation scale parameter.
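A realization of the random intensity $\mathbf{\Lambda}$ with this exponential correlation can be sketched in Python on a discrete grid; the grid size, $\beta_0$, $\sigma^2$, and $\zeta$ below are toy values, and a simple Cholesky draw stands in for more scalable Gaussian-field samplers:

```python
import numpy as np

def lgcp_intensity(beta0, sigma2, zeta, n=20, rng=None):
    """One realization of Λ(u) = exp(beta0 + φ(u) - σ²/2) on an n×n grid of
    the unit square, where φ is a zero-mean Gaussian field with exponential
    covariance c(u, v) = σ² exp(-||u - v|| / ζ)."""
    rng = rng or np.random.default_rng()
    xs = (np.arange(n) + 0.5) / n
    grid = np.array([(a, b) for a in xs for b in xs])
    d = np.linalg.norm(grid[:, None, :] - grid[None, :, :], axis=2)
    cov = sigma2 * np.exp(-d / zeta)
    # Cholesky draw of φ ~ N(0, cov); jitter keeps the factorization stable
    chol = np.linalg.cholesky(cov + 1e-10 * np.eye(n * n))
    phi = chol @ rng.standard_normal(n * n)
    return np.exp(beta0 + phi - sigma2 / 2.0)

rng = np.random.default_rng(3)
lam = lgcp_intensity(beta0=np.log(100.0), sigma2=0.5, zeta=0.1, rng=rng)
# the -σ²/2 offset makes E[Λ(u)] = exp(beta0) = 100 exactly
```

Given such a field, points can then be generated by an inhomogeneous Poisson draw as in Appendix B.1.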
+
+## Appendix C: Numerical methods
+
+We present numerical aspects in this section. For nonregularized estimation, we consider two approaches: weighted Poisson regression is explained in Appendix C.1, while logistic regression is reviewed in Appendix C.2. The penalized estimation procedure employs a coordinate descent algorithm (Appendix C.3). We treat the convex and non-convex penalties separately in Appendices C.3.1 and C.3.2.
+
+### C.1. Weighted Poisson regression
+
+Berman and Turner (1992) develop a numerical quadrature method to approximate maximum likelihood estimation for an inhomogeneous Poisson point process. They approximate the likelihood by a finite sum that has the same analytical form as the weighted likelihood of a generalized linear model with Poisson response. This method is extended to Gibbs point processes by Baddeley and Turner (2000). Suppose we approximate the integral term in (A.1) by the Riemann sum
+
+$$ \int_D \rho(u; \boldsymbol{\beta}) du \approx \sum_{i=1}^{M} v_i \rho(u_i; \boldsymbol{\beta}), $$
+
+where $u_i, i = 1, \dots, M$ are points in $D$ consisting of the $m$ data points and $M-m$ dummy points. The quadrature weights $v_i > 0$ are such that $\sum_i v_i = |D|$. To implement this method, the domain is first partitioned into $M$ rectangular pixels of equal area, denoted by $a$. Then one dummy point is placed in the
+---PAGE_BREAK---
+
+center of each pixel. Let $\Delta_i$ be an indicator of whether the point $u_i$ is an event of the point process ($\Delta_i = 1$) or a dummy point ($\Delta_i = 0$). Without loss of generality, let $u_1, \dots, u_m$ be the observed events and $u_{m+1}, \dots, u_M$ be the dummy points. Thus, the Poisson log-likelihood function (A.1) can be approximated and rewritten as
+
+$$
+\ell(\boldsymbol{\beta}) \approx \sum_{i=1}^{M} v_i \{y_i \log \rho(u_i; \boldsymbol{\beta}) - \rho(u_i; \boldsymbol{\beta})\}, \quad \text{where } y_i = v_i^{-1} \Delta_i. \tag{C.1}
+$$
+
+Equation (C.1) corresponds to a quasi Poisson log-likelihood function. Maximizing (C.1) is equivalent to fitting a weighted Poisson generalized linear model, which can be performed using standard statistical software. Similarly, we can approximate the weighted Poisson log-likelihood function (A.2) using the numerical quadrature method by
+
+$$
+\ell(w; \boldsymbol{\beta}) \approx \sum_{i=1}^{M} w_i v_i \{y_i \log \rho(u_i; \boldsymbol{\beta}) - \rho(u_i; \boldsymbol{\beta})\}, \quad (\text{C.2})
+$$
+
+where $w_i$ is the value of the weight surface at point $u_i$. The estimate $\hat{w}_i$ is obtained as suggested by Guan and Shen (2010). The similarity between (C.1) and (C.2) allows us to compute the estimates using software for generalized linear models as well. This fact is in particular exploited in the **ppm** function of the **spatstat** **R** package (Baddeley and Turner, 2005; Baddeley et al., 2015) with option **method="mpl"**. To make the presentation more general, the number of dummy points is denoted by $nd^2$ in the next sections.
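The Berman-Turner construction of quadrature points, weights $v_i$, and pseudo-responses $y_i = \Delta_i/v_i$ can be sketched as follows (a Python toy version of what `ppm` does internally; the grid resolution and data are illustrative assumptions):

```python
import numpy as np

def berman_turner(data, n_grid):
    """Berman-Turner quadrature on the unit square: one dummy point at the
    centre of each of the n_grid² pixels; each quadrature point gets weight
    v_i = (pixel area) / (#quadrature points in its pixel), so Σ v_i = |D|."""
    xs = (np.arange(n_grid) + 0.5) / n_grid
    dummy = np.array([(a, b) for a in xs for b in xs])
    quad = np.vstack([data, dummy])
    delta = np.r_[np.ones(len(data)), np.zeros(len(dummy))]   # Δ_i
    ix = np.minimum((quad * n_grid).astype(int), n_grid - 1)  # pixel indices
    pix = ix[:, 0] * n_grid + ix[:, 1]
    counts = np.bincount(pix, minlength=n_grid * n_grid)
    v = (1.0 / n_grid**2) / counts[pix]
    y = delta / v                                             # y_i = Δ_i / v_i
    return quad, delta, v, y

rng = np.random.default_rng(4)
data = rng.uniform(0.0, 1.0, size=(60, 2))
quad, delta, v, y = berman_turner(data, n_grid=10)
```

Maximizing (C.1) then amounts to a weighted Poisson regression of `y` on the covariates at `quad` with prior weights `v` and a log link, which any GLM routine can handle.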
+
+### C.2. Logistic regression
+
+To perform well, the Berman-Turner approximation often requires a quite large number of dummy points. Hence, fitting such generalized linear models can be computationally intensive, especially when dealing with a large number of points. Moreover, when the unbiased estimating equations are approximated using a deterministic numerical approximation as in Appendix C.1, the resulting estimator is not always unbiased. To obtain an unbiased estimator, we estimate (A.3) by
+
+$$
+\ell(w; \beta) \approx \sum_{u \in X \cap D} w(u) \log \left( \frac{\rho(u; \beta)}{\delta(u) + \rho(u; \beta)} \right) + \sum_{u \in \mathcal{D} \cap D} w(u) \log \left( \frac{\delta(u)}{\rho(u; \beta) + \delta(u)} \right), \quad (C.3)
+$$
+
+where $\mathcal{D}$ is a dummy point process, independent of $\mathbf{X}$, with intensity function $\delta$. The form (C.3) is related to the estimating equation defined by Baddeley et al. (2014, eq. 7). Moreover, if we apply the Campbell theorem to the last term of (C.3), we obtain
+
+$$
+\mathbb{E} \sum_{u \in \mathcal{D} \cap D} w(u) \log \left( \frac{\delta(u)}{\rho(u; \beta) + \delta(u)} \right) = -\int_D w(u) \delta(u) \log \left( \frac{\rho(u; \beta) + \delta(u)}{\delta(u)} \right) du,
+$$
+---PAGE_BREAK---
+
+which is exactly the last term of (A.3). In addition, conditionally on $\mathbf{X} \cup \mathcal{D}$, (C.3) is the weighted likelihood function for the Bernoulli trials $y(u) = \mathbf{1}\{u \in \mathbf{X}\}$ for $u \in \mathbf{X} \cup \mathcal{D}$, with
+
+$$
+P\{y(u) = 1\} = \frac{\rho(u; \boldsymbol{\beta})}{\delta(u) + \rho(u; \boldsymbol{\beta})} = \frac{\exp(-\log \delta(u) + \boldsymbol{\beta}^{\top}\mathbf{z}(u))}{1 + \exp(-\log \delta(u) + \boldsymbol{\beta}^{\top}\mathbf{z}(u))}.
+$$
+
+Precisely, (C.3) is a weighted logistic regression with offset term $-\log \delta(u)$. Thus, parameter estimates can be straightforwardly obtained using standard software for generalized linear models. This approach is provided in the spatstat package in R by calling the ppm function with option method="logi" (Baddeley et al., 2014, 2015).
+
+In spatstat, the dummy point process $\mathcal{D}$ generates $nd^2$ points on average in $D$ from a Poisson, binomial, or stratified binomial point process. Baddeley et al. (2014) suggest choosing $\delta(u) = 4m/|D|$, where $m$ is the number of points (so $nd^2 = 4m$). Furthermore, to determine $\delta$, this choice can be considered as a starting point for a data-driven approach (see Baddeley et al., 2014, for further details).
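The identity behind the name 'logistic method', $P\{y(u)=1\} = \rho/(\delta+\rho) = \text{logistic}(\boldsymbol{\beta}^\top\mathbf{z}(u) - \log\delta(u))$, is easy to verify numerically. A small Python check with toy values (the covariate design, $\boldsymbol{\beta}$, and the $m = 60$, $|D| = 1$ behind $\delta = 4m/|D|$ are all assumptions for illustration):

```python
import numpy as np

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

rng = np.random.default_rng(5)
beta = np.array([1.0, -2.0])                              # toy parameters
z = np.column_stack([np.ones(10), rng.uniform(size=10)])  # z(u) = (1, x)
rho = np.exp(z @ beta)                                    # ρ(u; β)
delta = np.full(10, 240.0)                                # δ = 4m/|D|, m = 60
p_direct = rho / (delta + rho)                            # P{y(u) = 1}
p_offset = sigmoid(z @ beta - np.log(delta))              # offset -log δ(u)
assert np.allclose(p_direct, p_offset)
```

This is why any GLM routine that accepts a binomial family with an offset can fit (C.3) directly.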
+
+### C.3. Coordinate descent algorithm
+
+The LARS algorithm (Efron et al., 2004) is a remarkably efficient method for computing an entire path of lasso solutions. For linear models, its computational cost is of order $O(Mp^2)$, which is the same order as a least squares fit. The coordinate descent algorithm (Friedman et al., 2007, 2010) is an even more competitive algorithm, computing the regularization paths at a cost of $O(Mp)$ operations. We therefore adopt cyclical coordinate descent methods, which are very fast on large datasets and can take advantage of sparsity. Coordinate descent algorithms optimize a target function with respect to a single parameter at a time, iteratively cycling through all parameters until a convergence criterion is reached. We detail this for some convex and non-convex penalty functions in the next two sections. Here, we only present the coordinate descent algorithm for fitting generalized weighted Poisson regression; a similar approach is used to fit penalized weighted logistic regression.
+
+### C.3.1. Convex penalty functions
+
+Since $\ell(w; \boldsymbol{\beta})$ given by (C.2) is a concave function of the parameters, the Newton-Raphson maximization of the penalized log-likelihood function can be carried out using the iteratively reweighted least squares (IRLS) method. If the current estimate of the parameters is $\tilde{\boldsymbol{\beta}}$, we construct a quadratic approximation of the weighted Poisson log-likelihood function using a Taylor expansion:
+
+$$
+\ell(w; \boldsymbol{\beta}) \approx \ell_Q(w; \boldsymbol{\beta}) = -\frac{1}{2} \sum_{i=1}^{M} \nu_i (y_i^* - \mathbf{z}_i^\top \boldsymbol{\beta})^2 + C(\tilde{\boldsymbol{\beta}}), \quad (\text{C.4})
+$$
+---PAGE_BREAK---
+
+where $C(\tilde{\beta})$ is a constant, $y_i^*$ are the working response values and $\nu_i$ are the weights,
+
+$$
+\begin{aligned}
+\nu_i &= w_i v_i \exp(\mathbf{z}_i^\top \tilde{\beta}) \\
+y_i^* &= \mathbf{z}_i^\top \tilde{\beta} + \frac{y_i - \exp(\mathbf{z}_i^\top \tilde{\beta})}{\exp(\mathbf{z}_i^\top \tilde{\beta})}.
+\end{aligned}
+ $$
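The working weights and responses above are the core of the IRLS step. As a sketch in Python, with toy Poisson-count data standing in for the quadrature responses $y_i = \Delta_i/v_i$ and an unpenalized fit (repeated weighted least squares) to show the iteration converges:

```python
import numpy as np

def irls_quantities(beta_tilde, Z, y, w, v):
    """Working weights ν_i = w_i v_i exp(z_i'β~) and working responses
    y_i^* = z_i'β~ + (y_i - exp(z_i'β~)) / exp(z_i'β~), as in (C.4)."""
    mu = np.exp(Z @ beta_tilde)
    nu = w * v * mu
    y_star = Z @ beta_tilde + (y - mu) / mu
    return nu, y_star

# toy check: IRLS recovers the parameters of a Poisson regression
rng = np.random.default_rng(6)
Z = np.column_stack([np.ones(200), rng.uniform(size=200)])
v = np.full(200, 1.0 / 200)            # stand-in quadrature weights
w = np.ones(200)                       # unweighted case: w_i = 1
true_beta = np.array([2.0, 1.0])
y = rng.poisson(np.exp(Z @ true_beta))
beta = np.array([np.log(y.mean() + 0.5), 0.0])   # safe starting value
for _ in range(25):
    nu, y_star = irls_quantities(beta, Z, y, w, v)
    A = Z.T @ (nu[:, None] * Z)        # weighted normal equations
    beta = np.linalg.solve(A, Z.T @ (nu * y_star))
```

In the penalized setting, each such weighted least squares problem is replaced by the penalized problem (C.5) solved by coordinate descent.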
+
+The regularized Poisson regression works by first identifying a decreasing sequence $\lambda \in [\lambda_{\min}, \lambda_{\max}]$, starting from the smallest value $\lambda_{\max}$ for which the entire vector $\tilde{\beta} = 0$. For each value of $\lambda$, an outer loop computes $\ell_Q(w; \beta)$ at $\tilde{\beta}$. Second, a coordinate descent method is applied to solve the penalized weighted least squares problem
+
+$$ \min_{\beta \in \mathbb{R}^p} \Omega(\beta) = \min_{\beta \in \mathbb{R}^p} \{-\ell_Q(w; \beta) + |D| \sum_{j=1}^{p} p_{\lambda_j}(|\beta_j|)\}. \quad (C.5) $$
+
+The coordinate descent method is explained as follows. Suppose we have the estimate $\tilde{\beta}_l$ for $l \neq j$, $l, j = 1, \dots, p$. The method consists in partially optimizing (C.5) with respect to $\beta_j$, that is
+
+$$ \min_{\beta_j} \Omega(\tilde{\beta}_1, \dots, \tilde{\beta}_{j-1}, \beta_j, \tilde{\beta}_{j+1}, \dots, \tilde{\beta}_p). $$
+
+Friedman et al. (2007) provide the form of the coordinate-wise update for several penalized regression estimators. For instance, the coordinate-wise update for the elastic net, which embraces the ridge and lasso regularization by setting respectively $\gamma$ to 0 or 1, is
+
+$$ \tilde{\beta}_j \leftarrow \frac{S\left(\sum_{i=1}^{M} \nu_i z_{ij}(y_i^* - \tilde{y}_i^{(j)}), |D|\lambda\gamma\right)}{\sum_{i=1}^{M} \nu_i z_{ij}^2 + |D|\lambda(1-\gamma)}, \quad (C.6) $$
+
+where $\tilde{y}_i^{(j)} = \tilde{\beta}_0 + \sum_{l \neq j} z_{il}\tilde{\beta}_l$ is the fitted value excluding the contribution from covariate $z_{ij}$, and $S(z, \lambda)$ is the soft-thresholding operator with value
+
+$$ S(z, \lambda) = \operatorname{sign}(z)(|z| - \lambda)_+ = \begin{cases} z - \lambda & \text{if } z > 0 \text{ and } \lambda < |z| \\ z + \lambda & \text{if } z < 0 \text{ and } \lambda < |z| \\ 0 & \text{if } \lambda \ge |z|. \end{cases} \quad (C.7) $$
+
+The update (C.6) is repeated for $j = 1, \dots, p$ until convergence. The coordinate descent algorithm for several convex penalties is implemented in the R package `glmnet` (Friedman et al., 2010). In (C.6), we set $\gamma = 0$ to implement ridge, $\gamma = 1$ for lasso, and $0 < \gamma < 1$ for elastic net regularization. For adaptive lasso, we follow Zou (2006), take $\gamma = 1$ and replace $\lambda$ by $\lambda_j = \lambda/|\tilde{\beta}_j|^{\tau}$,
+---PAGE_BREAK---
+
+where $\tilde{\beta}$ is an initial estimate, say $\tilde{\beta}(\text{ols})$ or $\tilde{\beta}(\text{ridge})$, and $\tau$ is a positive tuning parameter. To avoid the computational cost of choosing $\tau$, we follow Zou (2006, Section 3.4) and Wasserman and Roeder (2009), who also consider $\tau = 1$, and choose $\lambda_j = \lambda/|\tilde{\beta}_j(\text{ridge})|$, where $\tilde{\beta}(\text{ridge})$ are the estimates obtained from ridge regression. Implementing adaptive elastic net follows along similar lines.
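The soft-thresholding operator (C.7) and the cyclical update (C.6) can be sketched in Python as below. This is a toy version on a fixed quadratic approximation, without intercept; the design, responses, and weights are illustrative assumptions:

```python
import numpy as np

def soft_threshold(z, lam):
    """S(z, λ) = sign(z) (|z| - λ)_+, the operator in (C.7)."""
    return np.sign(z) * np.maximum(np.abs(z) - lam, 0.0)

def cd_elastic_net(Z, y_star, nu, lam, gamma, area, n_iter=200):
    """Cyclical coordinate descent for the elastic net update (C.6):
    gamma = 1 gives the lasso, gamma = 0 the ridge; `area` stands for |D|."""
    p = Z.shape[1]
    beta = np.zeros(p)
    for _ in range(n_iter):
        for j in range(p):
            fit_wo_j = Z @ beta - Z[:, j] * beta[j]      # ỹ_i^{(j)}
            g = np.sum(nu * Z[:, j] * (y_star - fit_wo_j))
            beta[j] = soft_threshold(g, area * lam * gamma) / (
                np.sum(nu * Z[:, j] ** 2) + area * lam * (1.0 - gamma))
    return beta

rng = np.random.default_rng(7)
Z = rng.standard_normal((100, 3))
y_star = Z @ np.array([1.5, 0.0, -2.0]) + 0.1 * rng.standard_normal(100)
nu = np.full(100, 1.0 / 100)
b = cd_elastic_net(Z, y_star, nu, lam=0.01, gamma=1.0, area=1.0)
# with a small λ the sparse truth (1.5, 0, -2.0) is essentially recovered
```

The adaptive lasso variant simply makes the threshold coordinate-dependent, replacing `lam` by `lam / abs(beta_ridge[j])` inside the update for each `j`.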
+
+### C.3.2. Non-convex penalty functions
+
+Breheny and Huang (2011) investigate the application of coordinate descent algorithm to fit penalized generalized linear model using SCAD and MC+, for which the penalty is non-convex. Mazumder et al. (2011) also study the coordinate-wise optimization algorithm in linear models considering more general non-convex penalties.
+
+Mazumder et al. (2011) conclude that, for a given current estimate $\tilde{\theta}$, the univariate penalized least squares function $Q_u(\theta) = \frac{1}{2}(\theta - \tilde{\theta})^2 + p_\lambda(|\theta|)$ should be convex to ensure that the coordinate-wise procedure converges to a stationary point. They find that this is the case for the SCAD and MC+ penalties, but not for the bridge (or power) penalty and some cases of the log-penalty.
+
+Breheny and Huang (2011) derive the coordinate descent updates for SCAD and MC+ in the generalized linear model case, implemented in the `ncvreg` package of R. Suppose we have the estimates $\tilde{\beta}_l$ for $l \neq j$, $l, j = 1, \dots, p$, and we wish to partially optimize (C.5) with respect to $\beta_j$. If we define $\tilde{g}_j = \sum_{i=1}^M \nu_i z_{ij} (y_i^* - \tilde{y}_i^{(j)})$ and $\tilde{\eta}_j = \sum_{i=1}^M \nu_i z_{ij}^2$, the coordinate-wise update for SCAD is
+
+$$ \tilde{\beta}_j \leftarrow \begin{cases} \frac{S(\tilde{g}_j, |D|\lambda)}{\tilde{\eta}_j} & \text{if } |\tilde{g}_j| \le \lambda(\tilde{\eta}_j + |D|) \\ \frac{S(\tilde{g}_j, |D|\gamma\lambda/(\gamma-1))}{\tilde{\eta}_j - |D|/(\gamma-1)} & \text{if } \lambda(\tilde{\eta}_j + |D|) \le |\tilde{g}_j| \le \tilde{\eta}_j \lambda \gamma \\ \frac{\tilde{g}_j}{\tilde{\eta}_j} & \text{if } |\tilde{g}_j| \ge \tilde{\eta}_j \lambda \gamma, \end{cases} $$
+
+for any $\gamma > \max_j(1 + 1/\tilde{\eta}_j)$. Then, for $\gamma > \max_j(1/\tilde{\eta}_j)$ and the same definition of $\tilde{g}_j$ and $\tilde{\eta}_j$, the coordinate-wise update for MC+ is
+
+$$ \tilde{\beta}_j \leftarrow \begin{cases} \frac{S(\tilde{g}_j, |D|\lambda)}{\tilde{\eta}_j - |D|/\gamma} & \text{if } |\tilde{g}_j| \le \tilde{\eta}_j \lambda \gamma \\ \frac{\tilde{g}_j}{\tilde{\eta}_j} & \text{if } |\tilde{g}_j| \ge \tilde{\eta}_j \lambda \gamma, \end{cases} $$
+
+where $S(z, \lambda)$ is the soft-thresholding operator given by (C.7).
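The two piecewise updates can be sketched directly in Python; `area` stands for $|D|$, and the toy arguments in the usage lines are illustrative, not from the paper:

```python
import numpy as np

def soft_threshold(z, lam):
    return np.sign(z) * np.maximum(np.abs(z) - lam, 0.0)

def scad_update(g, eta, lam, gamma, area):
    """Coordinate-wise SCAD update as in the text; requires
    gamma > 1 + area / eta."""
    if abs(g) <= lam * (eta + area):
        return soft_threshold(g, area * lam) / eta
    if abs(g) <= eta * lam * gamma:
        return soft_threshold(g, area * gamma * lam / (gamma - 1.0)) / (
            eta - area / (gamma - 1.0))
    return g / eta              # large coefficients are left unpenalized

def mcp_update(g, eta, lam, gamma, area):
    """Coordinate-wise MC+ update; requires gamma > area / eta."""
    if abs(g) <= eta * lam * gamma:
        return soft_threshold(g, area * lam) / (eta - area / gamma)
    return g / eta

print(scad_update(0.3, 1.0, 0.5, 3.7, 1.0))   # small |g|: set to 0.0
print(scad_update(10.0, 1.0, 0.5, 3.7, 1.0))  # large |g|: returned as 10.0
```

The last two lines illustrate the defining behavior of both penalties: small signals are thresholded to exactly zero, while large ones are not shrunk at all, unlike the lasso.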
+
+### C.4. Selection of the regularization or tuning parameter
+
+It is worth noticing that coordinate descent procedures (and other procedures computing the penalized likelihood estimates) rely on the tuning parameter $\lambda$, so that the choice of $\lambda$ becomes an important task. The
+---PAGE_BREAK---
+
+estimation using a large value of $\lambda$ tends to have smaller variance but larger bias, whereas estimation using a small value of $\lambda$ has smaller bias but larger variance. The trade-off between bias and variance yields an optimal choice of $\lambda$ (Fan and Lv, 2010).
+
+To select $\lambda$, it is reasonable to identify a range of $\lambda$ values extending from a maximum value for which all penalized coefficients are zero down to $\lambda = 0$ (e.g. Friedman et al., 2010; Breheny and Huang, 2011). We then select the $\lambda$ value which optimizes some criterion. Fixing a path of values $\lambda \ge 0$, we select the tuning parameter $\lambda$ which minimizes WQBIC($\lambda$), a weighted version of the BIC criterion, defined by
+
+$$ \text{WQBIC}(\lambda) = -2\ell(w; \hat{\beta}(\lambda)) + s(\lambda) \log|D|, $$
+
+where $s(\lambda) = \sum_{j=1}^p I\{\hat{\beta}_j(\lambda) \neq 0\}$ is the number of selected covariates with nonzero regression coefficients and $|D|$ is the observation volume, which plays the role of the sample size. For linear regression models, $\mathbf{Y} = \mathbf{X}\boldsymbol{\beta} + \boldsymbol{\epsilon}$, Wang et al. (2007) propose a BIC-type criterion for choosing $\lambda$:
+
+$$ \text{BIC}(\lambda) = \log \frac{\|\mathbf{Y} - \mathbf{X}^\top \hat{\beta}(\lambda)\|^2}{\eta} + \frac{1}{\eta} \log(\eta) \text{DF}(\lambda), $$
+
+where $\eta$ is the number of observations and DF($\lambda$) is the degrees of freedom. This criterion is consistent, meaning that it selects the correct model with probability approaching 1 in large samples when the set of candidate models contains the true model. These findings are in line with the study of Zhang et al. (2010), in which the criterion is presented in a more general way, called the generalized information criterion (GIC). The criterion WQBIC is a specific form of the GIC proposed by Zhang et al. (2010).
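The WQBIC selection over a path of $\lambda$ values amounts to a few lines. A Python sketch with hypothetical log-likelihood values and coefficient vectors (the path, the values, and `area` $= |D|$ are toy assumptions):

```python
import numpy as np

def wqbic(loglik, beta_hat, area):
    """WQBIC(λ) = -2 ℓ(w; β̂(λ)) + s(λ) log|D|, with s(λ) the number of
    nonzero coefficients and `area` standing for |D|."""
    return -2.0 * loglik + np.count_nonzero(beta_hat) * np.log(area)

def select_lambda(lambdas, logliks, betas, area):
    """Return the λ on a fixed path that minimizes WQBIC."""
    crit = [wqbic(ll, b, area) for ll, b in zip(logliks, betas)]
    return lambdas[int(np.argmin(crit))]

# toy path: the denser model fits better; log-likelihoods are hypothetical
lambdas = [0.1, 1.0]
logliks = [-100.0, -105.0]
betas = [np.array([1.0, 2.0, 0.0]), np.array([1.0, 0.0, 0.0])]
best = select_lambda(lambdas, logliks, betas, area=400.0)
# -2(-100) + 2 log 400 ≈ 212.0 beats -2(-105) + log 400 ≈ 216.0
```

Here the better fit outweighs the extra coefficient's $\log|D|$ penalty, so the smaller $\lambda$ is retained; with a poorer fit the penalty would push the selection toward the sparser model.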
+
+The selection of $\gamma$ for SCAD and MC+ is another task, but we fix $\gamma = 3.7$ for SCAD and $\gamma = 3$ for MC+, following Fan and Li (2001) and Breheny and Huang (2011) respectively, to avoid more complexities.
+
+## Appendix D: A few references for regularization methods
+
+As a first penalization technique to improve on ordinary least squares, ridge regression (e.g. Hoerl and Kennard, 1988) minimizes the residual sum of squares subject to a bound on the $\ell_2$ norm of the coefficients. As a continuous shrinkage method, ridge regression achieves its better prediction through a bias-variance trade-off, and it can also be extended to fit generalized linear models. However, ridge cannot reduce model complexity since it always keeps all the predictors in the model. The lasso (Tibshirani, 1996) was then introduced; it employs an $\ell_1$ penalty to perform variable selection and parameter estimation simultaneously. Although the lasso enjoys some attractive statistical properties, it has limitations in several respects (Fan and Li, 2001; Zou and Hastie, 2005; Zou, 2006; Zhang, 2010), which opened up many possibilities for developing other methods. In the scenario where there are high correlations among predictors, Zou and Hastie (2005) propose the elastic net technique, which is a convex
+---PAGE_BREAK---
+
+combination of $\ell_1$ and $\ell_2$ penalties. This method is particularly useful when the number of predictors is much larger than the number of observations, since it can select or eliminate strongly correlated predictors together.
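To make the penalty structure concrete, here is a hedged sketch (the function name and setup are ours, not from the paper) of the elastic net proximal operator, the building block of coordinate-descent solvers: soft-thresholding from the $\ell_1$ part followed by shrinkage from the $\ell_2$ part.

```python
import numpy as np

def enet_prox(x, l1, l2):
    """Proximal operator of b -> l1*|b| + (l2/2)*b**2:
    soft-threshold by l1, then shrink by 1/(1 + l2)."""
    return np.sign(x) * np.maximum(np.abs(x) - l1, 0.0) / (1.0 + l2)

# l2 = 0 recovers the lasso soft-thresholding rule; l1 = 0 recovers ridge shrinkage.
print(enet_prox(3.0, 1.0, 0.0))   # lasso: 2.0
print(enet_prox(3.0, 0.0, 1.0))   # ridge: 1.5
print(enet_prox(0.5, 1.0, 1.0))   # small inputs are set exactly to zero: 0.0
```

The $\ell_1$ step sets small coefficients exactly to zero (selection), while the $\ell_2$ step shrinks correlated coefficients toward each other (grouping).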
+
+The lasso procedure suffers from non-negligible bias and does not satisfy an oracle property asymptotically (Fan and Li, 2001). Fan and Li (2001) and Zhang (2010), among others, introduce non-convex penalties to get around these drawbacks. The idea is to bridge the gap between $\ell_0$ and $\ell_1$ by keeping the estimates of nonzero coefficients approximately unbiased while shrinking the less important variables to exactly zero. The rationale behind non-convex penalties such as SCAD and MC+ can also be understood by considering their first derivatives (see Table 1). They start by applying a rate of penalization similar to the lasso, and then continuously relax that penalization until its rate drops to zero. However, when employing non-convex penalties in regression analysis, the main challenge is often the minimization of a possibly non-convex objective function, arising when the non-convexity of the penalty is no longer dominated by the convexity of the likelihood function. This issue has been carefully studied: Fan and Li (2001) propose the local quadratic approximation (LQA); Zou and Li (2008) propose a local linear approximation (LLA), which yields an objective function that can be optimized using the least angle regression (LARS) algorithm (Efron et al., 2004); finally, Breheny and Huang (2011) and Mazumder et al. (2011) investigate the application of coordinate descent algorithms to non-convex penalties.
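The first-derivative rationale can be checked numerically. The sketch below uses the standard published forms of the SCAD (Fan and Li, 2001) and MC+ (Zhang, 2010) penalty derivatives rather than anything specific to this paper: both equal $\lambda$ near the origin, mimicking the lasso, and vanish beyond $\gamma\lambda$, leaving large coefficients essentially unpenalized.

```python
import numpy as np

def scad_deriv(t, lam, gamma=3.7):
    """Derivative of the SCAD penalty for t >= 0: equals lam on [0, lam],
    decreases linearly on (lam, gamma*lam], and is 0 beyond gamma*lam."""
    t = np.abs(t)
    return np.where(t <= lam, lam,
                    np.maximum(gamma * lam - t, 0.0) / (gamma - 1.0))

def mcp_deriv(t, lam, gamma=3.0):
    """Derivative of the MC+ penalty for t >= 0: decreases linearly
    from lam at t = 0 down to 0 at t = gamma*lam."""
    t = np.abs(t)
    return np.maximum(lam - t / gamma, 0.0)

lam = 1.0
print(scad_deriv(0.5, lam))   # lasso-like rate near zero: 1.0
print(mcp_deriv(0.0, lam))    # 1.0
print(scad_deriv(5.0, lam))   # beyond gamma*lam the penalization stops: 0.0
print(mcp_deriv(5.0, lam))    # 0.0
```

This is exactly the "start like the lasso, then relax" behavior described above, with the default $\gamma$ values matching those fixed in our study.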
+
+In (2.5), it is worth emphasizing that we allow each direction to have a different regularization parameter. By doing this, the $\ell_1$ and elastic net penalty functions are extended to the adaptive lasso (e.g. Zou, 2006) and adaptive elastic net (e.g. Zou and Zhang, 2009). Table 2 details the regularization methods considered in this study.
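The direction-specific parameters can be built from a pilot estimate, as in the adaptive lasso. The following is a small illustrative sketch under our own naming (the `eps` guard is an implementation assumption, not part of the cited definitions):

```python
import numpy as np

def adaptive_weights(beta_pilot, lam, power=1.0, eps=1e-8):
    """Direction-specific regularization parameters lam_j = lam / |pilot_j|^power,
    in the spirit of the adaptive lasso (Zou, 2006); eps guards against
    division by zero for pilot coefficients that are exactly zero."""
    return lam / (np.abs(beta_pilot) + eps) ** power

pilot = np.array([2.0, 0.1, 0.0])
print(adaptive_weights(pilot, lam=1.0))
# Large pilot coefficients receive a small penalty; near-zero pilots a huge one.
```

Strong signals in the pilot fit are thus penalized lightly, while weak directions are penalized heavily and tend to be eliminated.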
+
+## Appendix E: Auxiliary Lemma
+
+The following result is used in the proofs of Theorems 1 and 2. Throughout the proofs, the notation $\mathbf{X}_n = O_P(x_n)$ or $\mathbf{X}_n = o_P(x_n)$ for a random vector $\mathbf{X}_n$ and a sequence of real numbers $x_n$ means that $\|\mathbf{X}_n\| = O_P(x_n)$ or $\|\mathbf{X}_n\| = o_P(x_n)$, respectively. In the same way, for a vector $\mathbf{V}_n$ or a square matrix $\mathbf{M}_n$, the notations $\mathbf{V}_n = O(x_n)$ and $\mathbf{M}_n = O(x_n)$ mean that $\|\mathbf{V}_n\| = O(x_n)$ and $\|\mathbf{M}_n\| = O(x_n)$.
+
+**Lemma 1.** Under the conditions (C.1)-(C.6), the following convergence holds in distribution as $n \to \infty$
+
+$$ \{\mathbf{B}_n(w; \boldsymbol{\beta}_0) + \mathbf{C}_n(w; \boldsymbol{\beta}_0)\}^{-1/2} \ell_n^{(1)}(w; \boldsymbol{\beta}_0) \xrightarrow{d} \mathcal{N}(0, \mathbf{I}_p). \quad (\text{E.1}) $$
+
+Moreover as $n \to \infty$,
+
+$$ |D_n|^{-\frac{1}{2}} \ell_n^{(1)}(w; \beta_0) = O_p(1). \quad (\text{E.2}) $$
+
+*Proof.* Let us first note that using Campbell Theorems (2.1)-(2.2)
+
+$$ \mathrm{Var}[\ell_n^{(1)}(w; \beta_0)] = \mathbf{B}_n(w; \beta_0) + \mathbf{C}_n(w; \beta_0). $$
+---PAGE_BREAK---
+
+The proof of (E.1) follows Coeurjolly and Møller (2014). Let $C_i = i+(-1/2, 1/2]^d$ be the unit box centered at $i \in \mathbb{Z}^d$ and define $\mathcal{I}_n = \{i \in \mathbb{Z}^d, C_i \cap D_n \neq \emptyset\}$. Set $D_n = \bigcup_{i \in \mathcal{I}_n} C_{i,n}$, where $C_{i,n} = C_i \cap D_n$. We have
+
+$$ \ell_n^{(1)}(w; \beta_0) = \sum_{i \in \mathcal{I}_n} Y_{i,n} $$
+
+where
+
+$$ Y_{i,n} = \sum_{u \in \mathbf{X} \cap C_{i,n}} w(u)\mathbf{z}(u) - \int_{C_{i,n}} w(u)\mathbf{z}(u) \exp(\boldsymbol{\beta}_0^\top \mathbf{z}(u)) du. $$
+
+For any $n \ge 1$ and any $i \in \mathcal{I}_n$, $Y_{i,n}$ has zero mean, and by condition (C.4),
+
+$$ \sup_{n \ge 1} \sup_{i \in \mathcal{I}_n} \mathbb{E}(\|Y_{i,n}\|^{2+\delta}) < \infty. \quad (\text{E.3}) $$
+
+If we combine (E.3) with conditions (C.1)-(C.6), we can apply Karácsony (2006, Theorem 4), a central limit theorem for triangular arrays of random fields, to obtain (E.1) which also implies that
+
+$$ \{\mathbf{B}_n(w; \boldsymbol{\beta}_0) + \mathbf{C}_n(w; \boldsymbol{\beta}_0)\}^{-1/2} \ell_n^{(1)}(w; \boldsymbol{\beta}_0) = O_P(1) $$
+
+as $n \to \infty$. The second result (E.2) is deduced from condition (C.6) which in particular implies that $|D_n|^{1/2}\{\mathbf{B}_n(w; \boldsymbol{\beta}_0) + \mathbf{C}_n(w; \boldsymbol{\beta}_0)\}^{-1/2} = O(1)$. $\square$
+
+## Appendix F: Proof of Theorem 1
+
+In the proof of this result and the following ones, the notation $\kappa$ stands for a generic constant which may vary from line to line. In particular this constant is independent of $n$, $\boldsymbol{\beta}_0$ and $\mathbf{k}$.
+
+*Proof*. Let $d_n = |D_n|^{-1/2} + a_n$, and $\mathbf{k} = \{k_1, k_2, \dots, k_p\}^\top \in \mathbb{R}^p$. We remind the reader that the estimate of $\boldsymbol{\beta}_0$ is defined as the maximum of the function $Q$ (given by (2.5)) over $\Theta$, an open convex bounded set of $\mathbb{R}^p$. For any $\mathbf{k}$ such that $\|\mathbf{k}\| \le K < \infty$, $\boldsymbol{\beta}_0 + d_n\mathbf{k} \in \Theta$ for $n$ sufficiently large. Assume this is valid in the following. To prove Theorem 1, we follow the main argument by Fan and Li (2001) and aim at proving that for any given $\epsilon > 0$, there exists $K > 0$ such that for $n$ sufficiently large
+
+$$ P\left( \sup_{\|\mathbf{k}\|=K} \Delta_n(\mathbf{k}) > 0 \right) \le \epsilon, \quad \text{where } \Delta_n(\mathbf{k}) = Q_n(w; \boldsymbol{\beta}_0 + d_n\mathbf{k}) - Q_n(w; \boldsymbol{\beta}_0). \tag{F.1} $$
+
+Equation (F.1) will imply that with probability at least $1-\epsilon$, there exists a local maximum in the ball $\{\boldsymbol{\beta}_0 + d_n\mathbf{k} : \|\mathbf{k}\| \le K\}$, and therefore a local maximizer $\hat{\boldsymbol{\beta}}$ such that $\|\hat{\boldsymbol{\beta}} - \boldsymbol{\beta}_0\| = O_P(d_n)$. We decompose $\Delta_n(\mathbf{k})$ as $\Delta_n(\mathbf{k}) = T_1 + T_2$ where
+
+$$ T_1 = \ell_n(w; \boldsymbol{\beta}_0 + d_n\mathbf{k}) - \ell_n(w; \boldsymbol{\beta}_0) $$
+---PAGE_BREAK---
+
+$$T_2 = |D_n| \sum_{j=1}^{p} (p_{\lambda_{n,j}}(|\beta_{0j}|) - p_{\lambda_{n,j}}(|\beta_{0j} + d_n k_j|)).$$
+
+Since $\rho(u; \cdot)$ is infinitely continuously differentiable and $\ell_n^{(2)}(w; \beta) = -\mathbf{A}_n(w; \beta)$, a second-order Taylor expansion yields that there exists $t \in (0, 1)$ such that
+
+$$T_1 = d_n \mathbf{k}^\top \ell_n^{(1)}(w; \beta_0) - \frac{1}{2} d_n^2 \mathbf{k}^\top A_n(w; \beta_0) \mathbf{k} \\ + \frac{1}{2} d_n^2 \mathbf{k}^\top (\mathbf{A}_n(w; \beta_0) - \mathbf{A}_n(w; \beta_0 + t d_n \mathbf{k})) \mathbf{k}.$$
+
+Since $\Theta$ is convex and bounded and since $w(\cdot)$ and $z(\cdot)$ are uniformly bounded by conditions (C.2)-(C.3), there exists a nonnegative constant $\kappa$ such that
+
+$$\frac{1}{2} \| \mathbf{A}_n(w; \boldsymbol{\beta}_0) - \mathbf{A}_n(w; \boldsymbol{\beta}_0 + t d_n \mathbf{k}) \| \leq \kappa d_n |D_n|.$$
+
+Now, denote $\tilde{\nu} := \liminf_{n \to \infty} \nu_{\min}(|D_n|^{-1} \mathbf{A}_n(w; \boldsymbol{\beta}_0))$. By condition (C.6), we have that for any **k**
+
+$$0 < \tilde{\nu} \leq \frac{\mathbf{k}^\top (|D_n|^{-1} \mathbf{A}_n(w; \boldsymbol{\beta}_0)) \mathbf{k}}{||\mathbf{k}||^2}.$$
+
+Hence
+
+$$T_1 \leq d_n ||\ell_n^{(1)}(w; \boldsymbol{\beta}_0)|| ||\mathbf{k}|| - \frac{\tilde{\nu}}{2} d_n^2 |D_n| ||\mathbf{k}||^2 + \kappa d_n^3 |D_n|.$$
+
+Regarding the term $T_2$,
+
+$$T_2 \le T'_2 := |D_n| \sum_{j=1}^{s} (p_{\lambda_{n,j}}(|\beta_{0j}|) - p_{\lambda_{n,j}}(|\beta_{0j} + d_n k_j|))$$
+
+since for any $j$ the penalty function $p_{\lambda_{n,j}}$ is nonnegative and $p_{\lambda_{n,j}}(|\beta_{0j}|) = 0$ for $j = s+1, \dots, p$.
+
+Since $d_n|D_n|^{1/2} = O(1)$, then by (C.8), for $n$ sufficiently large, $p_{\lambda_{n,j}}$ is twice continuously differentiable for every $\beta_j = \beta_{0j} + t d_n k_j$ with $t \in (0, 1)$. Therefore using a third-order Taylor expansion, there exist $t_j \in (0, 1)$, $j = 1, \dots, s$ such that
+
+$$-T'_2 = d_n |D_n| \sum_{j=1}^{s} k_j p'_{\lambda_{n,j}}(|\beta_{0j}|) \text{sign}(\beta_{0,j}) + \frac{1}{2} d_n^2 |D_n| \sum_{j=1}^{s} k_j^2 p''_{\lambda_{n,j}}(|\beta_{0j}|) \\ + \frac{1}{6} d_n^3 |D_n| \sum_{j=1}^{s} k_j^3 p'''_{\lambda_{n,j}}(|\beta_{0j} + t_j d_n k_j|).$$
+
+Now by definition of $a_n$ and $c_n$ and from condition (C.8), we deduce that there exists $\kappa$ such that
+
+$$T'_2 \le a_n d_n |D_n| |\mathbf{k}^\top \mathbf{1}| + \kappa c_n d_n^2 |D_n| + \kappa d_n^3 |D_n|$$
+---PAGE_BREAK---
+
+$$
+\leq \sqrt{s} a_n d_n |D_n| ||\mathbf{k}|| + c_n d_n^2 |D_n| ||\mathbf{k}||^2 + \kappa d_n^3 |D_n|
+$$
+
+from the Cauchy-Schwarz inequality. Since $c_n = o(1)$, $d_n = o(1)$ and $a_n d_n |D_n| = O(d_n^2 |D_n|)$, for $n$ sufficiently large
+
+$$
+\Delta_n(\mathbf{k}) \leq d_n ||\ell_n^{(1)}(w; \boldsymbol{\beta}_0)|| \|\mathbf{k}\| - \frac{\tilde{\nu}}{4} d_n^2 |D_n| ||\mathbf{k}\|^2 + 2\sqrt{s} d_n^2 |D_n| ||\mathbf{k}\|
+$$
+
+We now return to (F.1): for *n* sufficiently large
+
+$$
+P\left(\sup_{||\mathbf{k}||=K} \Delta_n(\mathbf{k}) > 0\right) \le P\left(||\ell_n^{(1)}(w;\boldsymbol{\beta}_0)|| > \frac{\tilde{\nu}}{4}d_n|D_n|K - 2\sqrt{s}d_n|D_n|\right)
+$$
+
+Since $d_n|D_n| = O(|D_n|^{1/2})$, by choosing $K$ large enough, there exists $\kappa$ such that for $n$ sufficiently large
+
+$$
+P\left(\sup_{||\mathbf{k}||=K} \Delta_n(\mathbf{k}) > 0\right) \le P\left(||\ell_n^{(1)}(w; \boldsymbol{\beta}_0)|| > \kappa |D_n|^{1/2}\right) \le \epsilon
+$$
+
+for any given $\epsilon > 0$ from (E.2). $\square$
+
+## Appendix G: Proof of Theorem 2
+
+To prove Theorem 2(i), we provide Lemma 2 as follows.
+
+**Lemma 2.** Assume the conditions (C.1)-(C.6) and condition (C.8) hold. If $a_n = O(|D_n|^{-1/2})$ and $b_n|D_n|^{1/2} \to \infty$ as $n \to \infty$, then with probability tending to 1, for any $\boldsymbol{\beta}_1$ satisfying $\|\boldsymbol{\beta}_1 - \boldsymbol{\beta}_{01}\| = O_P(|D_n|^{-1/2})$, and for any constant $K_1 > 0$,
+
+$$
+Q_n(w; (\boldsymbol{\beta}_1^\top, \mathbf{0}^\top)^\top) = \max_{\|\boldsymbol{\beta}_2\| \le K_1 |D_n|^{-1/2}} Q_n(w; (\boldsymbol{\beta}_1^\top, \boldsymbol{\beta}_2^\top)^\top).
+$$
+
+*Proof.* It is sufficient to show that with probability tending to 1 as $n \to \infty$, for any $\boldsymbol{\beta}_1$ satisfying $\|\boldsymbol{\beta}_1 - \boldsymbol{\beta}_{01}\| = O_P(|D_n|^{-1/2})$, for some small $\varepsilon_n = K_1|D_n|^{-1/2}$, and for $j = s+1, \dots, p$,
+
+$$
+\frac{\partial Q_n(w; \boldsymbol{\beta})}{\partial \beta_j} < 0 \quad \text{for } 0 < \beta_j < \varepsilon_n, \text{ and} \tag{G.1}
+$$
+
+$$
+\frac{\partial Q_n(w; \boldsymbol{\beta})}{\partial \beta_j} > 0 \quad \text{for } -\varepsilon_n < \beta_j < 0. \tag{G.2}
+$$
+
+First note that by (E.2), we obtain $\|\ell_n^{(1)}(w; \boldsymbol{\beta}_0)\| = O_P(|D_n|^{1/2})$. Second, by conditions (C.2)-(C.3), there exists $t \in (0, 1)$ such that
+
+$$
+\begin{align*}
+\frac{\partial \ell_n(w; \boldsymbol{\beta})}{\partial \beta_j} &= \frac{\partial \ell_n(w; \boldsymbol{\beta}_0)}{\partial \beta_j} + t \sum_{l=1}^{p} \frac{\partial^2 \ell_n(w; \boldsymbol{\beta}_0 + t(\boldsymbol{\beta} - \boldsymbol{\beta}_0))}{\partial \beta_j \partial \beta_l} (\boldsymbol{\beta}_l - \boldsymbol{\beta}_{0l}) \\
+&= O_P(|D_n|^{1/2}) + O_P(|D_n||D_n|^{-1/2}) = O_P(|D_n|^{1/2}).
+\end{align*}
+$$
+---PAGE_BREAK---
+
+Third, let $0 < \beta_j < \varepsilon_n$ and $b_n$ the sequence given by (3.3). By condition (C.8),
+$b_n$ is well-defined and since by assumption $b_n|D_n|^{1/2} \to \infty$, in particular, $b_n > 0$
+for $n$ sufficiently large. Therefore, for $n$ sufficiently large,
+
+$$
+\begin{align*}
+P\left(\frac{\partial Q_n(w; \boldsymbol{\beta})}{\partial \beta_j} < 0\right) &= P\left(\frac{\partial \ell_n(w; \boldsymbol{\beta})}{\partial \beta_j} - |D_n| p'_{\lambda_{n,j}}(|\boldsymbol{\beta}_j|) \operatorname{sign}(\boldsymbol{\beta}_j) < 0\right) \\
+&= P\left(\frac{\partial \ell_n(w; \boldsymbol{\beta})}{\partial \beta_j} < |D_n| p'_{\lambda_{n,j}}(|\boldsymbol{\beta}_j|)\right) \\
+&\geq P\left(\frac{\partial \ell_n(w; \boldsymbol{\beta})}{\partial \beta_j} < |D_n| b_n\right) \\
+&= P\left(\frac{\partial \ell_n(w; \boldsymbol{\beta})}{\partial \beta_j} < |D_n|^{1/2} |D_n|^{1/2} b_n\right).
+\end{align*}
+$$
+
+$P(\partial Q_n(w;\boldsymbol{\beta})/\partial \beta_j < 0) \to 1$ as $n \to \infty$ since $\partial \ell_n(w;\boldsymbol{\beta})/\partial \beta_j = O_P(|D_n|^{1/2})$ and
+$b_n|D_n|^{1/2} \to \infty$. This proves (G.1). We proceed similarly to prove (G.2). $\square$
+
+*Proof.* We now focus on the proof of Theorem 2. Since Theorem 2(i) is proved by Lemma 2, we only need to prove Theorem 2(ii), the asymptotic normality of $\hat{\boldsymbol{\beta}}_1$. As shown in Theorem 1, there is a root-$|D_n|$ consistent local maximizer $\hat{\boldsymbol{\beta}}$ of $Q_n(w;\boldsymbol{\beta})$, and it can be shown that there exists an estimator $\hat{\boldsymbol{\beta}}_1$ in Theorem 1 that is a root-$|D_n|$ consistent local maximizer of $Q_n(w; (\boldsymbol{\beta}_1^\top, \mathbf{0}^\top)^\top)$, regarded as a function of $\boldsymbol{\beta}_1$, and that satisfies
+
+$$
+\frac{\partial Q_n(w; \hat{\beta})}{\partial \beta_j} = 0 \quad \text{for } j = 1, \dots, s, \text{ and } \hat{\beta} = (\hat{\beta}_1^\top, \mathbf{0}^\top)^\top.
+$$
+
+There exists $t \in (0, 1)$ and $\check{\beta} = \hat{\beta} + t(\beta_0 - \hat{\beta})$ such that
+
+$$
+\begin{align*}
+0 &= \frac{\partial \ell_n(w; \hat{\beta})}{\partial \beta_j} - |D_n| p'_{\lambda_{n,j}}(|\hat{\beta}_j|) \operatorname{sign}(\hat{\beta}_j) \\
+&= \frac{\partial \ell_n(w; \beta_0)}{\partial \beta_j} + \sum_{l=1}^{s} \frac{\partial^2 \ell_n(w; \check{\beta})}{\partial \beta_j \partial \beta_l} (\hat{\beta}_l - \beta_{0l}) - |D_n| p'_{\lambda_{n,j}}(|\hat{\beta}_j|) \operatorname{sign}(\hat{\beta}_j) \\
+&= \frac{\partial \ell_n(w; \beta_0)}{\partial \beta_j} + \sum_{l=1}^{s} \frac{\partial^2 \ell_n(w; \beta_0)}{\partial \beta_j \partial \beta_l} (\hat{\beta}_l - \beta_{0l}) + \sum_{l=1}^{s} \Psi_{n,jl}(\hat{\beta}_l - \beta_{0l}) \\
+&\quad - |D_n| p'_{\lambda_{n,j}}(|\beta_{0j}|) \operatorname{sign}(\beta_{0j}) - |D_n| \phi_{n,j}, \tag{G.3}
+\end{align*}
+$$
+
+where
+
+$$
+\Psi_{n,jl} = \frac{\partial^2 \ell_n(w; \check{\beta})}{\partial \beta_j \partial \beta_l} - \frac{\partial^2 \ell_n(w; \beta_0)}{\partial \beta_j \partial \beta_l}
+$$
+
+and $\phi_{n,j} = p'_{\lambda_{n,j}}(|\hat{\beta}_j|) \operatorname{sign}(\hat{\beta}_j) - p'_{\lambda_{n,j}}(|\beta_{0j}|) \operatorname{sign}(\beta_{0j})$. We decompose $\phi_{n,j}$ as
+$\phi_{n,j} = T_1 + T_2$ where
+
+$$
+T_1 = \phi_{n,j}\mathbb{I}(|\hat{\beta}_j - \beta_{0j}| \leq \tilde{r}_{n,j}) \quad \text{and} \quad T_2 = \phi_{n,j}\mathbb{I}(|\hat{\beta}_j - \beta_{0j}| > \tilde{r}_{n,j})
+$$
+---PAGE_BREAK---
+
+and where $\tilde{r}_{n,j}$ is the sequence defined in condition (C.8). Under this condition, the following Taylor expansion can be derived for the term $T_1$: there exist $t \in (0, 1)$ and $\check{\beta}_j = \hat{\beta}_j + t(\beta_{0j} - \hat{\beta}_j)$ such that
+
+$$
+\begin{aligned}
+T_1 &= p''_{\lambda_{n,j}}(|\beta_{0j}|)(\hat{\beta}_j - \beta_{0j})\mathbb{I}(|\hat{\beta}_j - \beta_{0j}| \le \tilde{r}_{n,j}) \\
+&\quad + \frac{1}{2}(\hat{\beta}_j - \beta_{0j})^2 p'''_{\lambda_{n,j}}(|\check{\beta}_j|)\mathrm{sign}(\check{\beta}_j)\mathbb{I}(|\hat{\beta}_j - \beta_{0j}| \le \tilde{r}_{n,j}) \\
+&= p''_{\lambda_{n,j}}(|\beta_{0j}|)(\hat{\beta}_j - \beta_{0j})\mathbb{I}(|\hat{\beta}_j - \beta_{0j}| \le \tilde{r}_{n,j}) + O_P(|D_n|^{-1})
+\end{aligned}
+$$
+
+where the latter equation ensues from Theorem 1 and condition (C.8). Again, from Theorem 1, $\mathbb{I}(|\hat{\beta}_j - \beta_{0j}| \le \tilde{r}_{n,j}) \xrightarrow{L^1} 1$ which implies that $\mathbb{I}(|\hat{\beta}_j - \beta_{0j}| \le \tilde{r}_{n,j}) \xrightarrow{P} 1$, so $T_1 = p''_{\lambda_{n,j}}(|\beta_{0j}|)(\hat{\beta}_j - \beta_{0j})(1 + o_P(1)) + O_P(|D_n|^{-1})$.
+
+Regarding the term $T_2$, since $p'_{\lambda}$ is a Lipschitz function, there exists $\kappa \ge 0$ such that
+
+$$ T_2 \leq \kappa |\hat{\beta}_j - \beta_{0j}| \mathbb{I}(|\hat{\beta}_j - \beta_{0j}| > \tilde{r}_{n,j}). $$
+
+By Theorem 1, $|\hat{\beta}_j - \beta_{0j}| = O_P(|D_n|^{-1/2})$ and $\mathbb{I}(|\hat{\beta}_j - \beta_{0j}| > \tilde{r}_{n,j}) = o_P(1)$, so
+$T_2 = o_P(|D_n|^{-1/2})$ and we deduce that
+
+$$ \phi_{n,j} = p''_{\lambda_{n,j}}(|\beta_{0j}|)(\hat{\beta}_j - \beta_{0j})(1 + o_P(1)) + o_P(|D_n|^{-1/2}). \quad (G.4) $$
+
+Let $\ell_{n,1}^{(1)}(w; \boldsymbol{\beta}_0)$ (resp. $\ell_{n,11}^{(2)}(w; \boldsymbol{\beta}_0)$) be the first $s$ components (resp. $s \times s$ top-left corner) of $\ell_n^{(1)}(w; \boldsymbol{\beta}_0)$ (resp. $\ell_n^{(2)}(w; \boldsymbol{\beta}_0)$). Let also $\boldsymbol{\Psi}_n$ be the $s \times s$ matrix containing $\boldsymbol{\Psi}_{n,jl}, j,l = 1, \dots, s$. Finally, let the vector $\mathbf{p}'_n$, the vector $\boldsymbol{\phi}_n$ and the $s \times s$ matrix $\mathbf{M}_n$ be
+
+$$
+\begin{align*}
+\mathbf{p}'_n &= \{p'_{\lambda_{n,1}}(|\boldsymbol{\beta}_{01}|) \operatorname{sign}(\boldsymbol{\beta}_{01}), \dots, p'_{\lambda_{n,s}}(|\boldsymbol{\beta}_{0s}|) \operatorname{sign}(\boldsymbol{\beta}_{0s})\}^\top, \\
+\boldsymbol{\phi}_n &= \{\phi_{n,1}, \dots, \phi_{n,s}\}^\top, \text{ and} \\
+\mathbf{M}_n &= \{\mathbf{B}_{n,11}(w; \boldsymbol{\beta}_0) + \mathbf{C}_{n,11}(w; \boldsymbol{\beta}_0)\}^{-1/2}.
+\end{align*}
+$$
+
+We rewrite both sides of (G.3) as
+
+$$
+\begin{gather*}
+\ell_{n,1}^{(1)}(w; \boldsymbol{\beta}_0) + \ell_{n,11}^{(2)}(w; \boldsymbol{\beta}_0)(\hat{\boldsymbol{\beta}}_1 - \boldsymbol{\beta}_{01}) + \boldsymbol{\Psi}_n(\hat{\boldsymbol{\beta}}_1 - \boldsymbol{\beta}_{01}) - |D_n|\mathbf{p}'_n - |D_n|\phi_n = 0. \tag{G.5} \\
+|D_n| = O_P(|D_n|^{-1/2}).
+\end{gather*}
+$$
+
+By definition of $\Pi_n$ given by (3.6) and from (G.4), we obtain $\phi_n = \Pi_n(\hat{\boldsymbol{\beta}}_1 - \boldsymbol{\beta}_{01})(1+o_P(1)) + o_P(|D_n|^{-1/2})$. Using this, we deduce, by premultiplying both sides of (G.5) by $\mathbf{M}_n$, that
+
+$$
+\begin{align*}
+& \mathbf{M}_n \ell_{n,1}^{(1)}(w; \boldsymbol{\beta}_0) - \mathbf{M}_n (\mathbf{A}_{n,11}(w; \boldsymbol{\beta}_0) + |D_n| \mathbf{\Pi}_n)(\hat{\boldsymbol{\beta}}_1 - \boldsymbol{\beta}_{01}) \\
+&= O(|D_n| \| \mathbf{M}_n \mathbf{p}'_n \|) + o_P(|D_n| \| \mathbf{M}_n \mathbf{\Pi}_n (\hat{\boldsymbol{\beta}}_1 - \boldsymbol{\beta}_{01}) \|) \\
+&\quad + o_P(\| \mathbf{M}_n \| |D_n|^{1/2}) + O_P(\| \mathbf{M}_n \boldsymbol{\Psi}_n (\hat{\boldsymbol{\beta}}_1 - \boldsymbol{\beta}_{01}) \|).
+\end{align*}
+$$
+
+The condition (C.6) implies that there exists an $s \times s$ positive definite matrix $\mathbf{I}''_0$ such that for all sufficiently large $n$, we have $|D_n|^{-1}(\mathbf{B}_{n,11}(w; \boldsymbol{\beta}_0) + \mathbf{C}_{n,11}(w; \boldsymbol{\beta}_0)) \ge \mathbf{I}''_0$, hence $\|\mathbf{M}_n\| = O(|D_n|^{-1/2})$.
+---PAGE_BREAK---
+
+Now, $\|\Psi_n\| = O_P(|D_n|^{1/2})$ by conditions (C.3)-(C.4) and by Theorem 1, and
+$\|\hat{\beta}_1 - \beta_{01}\| = O_P(|D_n|^{-1/2})$ by Theorem 1 and by Theorem 2(i). Finally, since
+by assumption $a_n = o(|D_n|^{-1/2})$, we deduce that
+
+$$
+\begin{align*}
+\|\mathbf{M}_n \Psi_n (\hat{\boldsymbol{\beta}}_1 - \boldsymbol{\beta}_{01})\| &= O_P(|D_n|^{-1/2}) = o_P(1), \\
+|D_n| \|\mathbf{M}_n \mathbf{\Pi}_n (\hat{\boldsymbol{\beta}}_1 - \boldsymbol{\beta}_{01})\| &= o_P(1), \\
+\|\mathbf{M}_n\| |D_n|^{1/2} &= o(1), \\
+|D_n| \|\mathbf{M}_n \mathbf{p}'_n\| &= O(a_n |D_n|^{1/2}) = o(1).
+\end{align*}
+$$
+
+Therefore, we have that
+
+$$
+\mathbf{M}_n \ell_{n,1}^{(1)}(w; \boldsymbol{\beta}_0) - \mathbf{M}_n (\mathbf{A}_{n,11}(w; \boldsymbol{\beta}_0) + |D_n|\mathbf{\Pi}_n)(\hat{\boldsymbol{\beta}}_1 - \boldsymbol{\beta}_{01}) = o_P(1).
+$$
+
+From (E.1), Theorem 2(i) and by Slutsky’s Theorem, we deduce that
+
+$$
+\left\{ \mathbf{B}_{n,11}(w; \boldsymbol{\beta}_0) + \mathbf{C}_{n,11}(w; \boldsymbol{\beta}_0) \right\}^{-1/2} \left\{ \mathbf{A}_{n,11}(w; \boldsymbol{\beta}_0) + |D_n|\mathbf{\Pi}_n \right\} (\hat{\boldsymbol{\beta}}_1 - \boldsymbol{\beta}_{01})
+$$
+
+as $n \to \infty$, which can be rewritten, in particular under (C.7), as
+
+$$
+|D_n|^{1/2} \Sigma_n (w; \boldsymbol{\beta}_0)^{-1/2} (\hat{\boldsymbol{\beta}}_1 - \boldsymbol{\beta}_{01}) \xrightarrow{d} \mathcal{N}(0, \mathbf{I}_s)
+$$
+
+where $\Sigma_n(w; \boldsymbol{\beta}_0)$ is given by (3.5). $\square$
+
+## Appendix H: Maps of covariates
+
+FIG 3. Maps of covariates designed in Scenario 2. The two top-left images are the elevation and the slope. The other 18 covariates are generated as standard Gaussian white noise, transformed to induce multicollinearity.
+---PAGE_BREAK---
+
+FIG 4. Maps of covariates used in Scenario 3 and in the application. From left to right: Elevation, slope, Aluminium, Boron, and Calcium (1st row), Copper, Iron, Potassium, Magnesium, and Manganese (2nd row), Phosphorus, Zinc, Nitrogen, Nitrogen mineralization, and pH (3rd row).
+
+## Acknowledgements
+
+We thank A. L. Thurman, who kindly shared the R code used for the simulation study in Thurman et al. (2015), and P. Breheny, who kindly provided his code used in the `ncvreg` package. We thank R. Drouilhet for technical help. We thank the editor and associate editor for constructive comments that improved an earlier draft of this article.
+
+The BCI soils data sets were collected and analyzed by J. Dalling, R. John, K. Harms, R. Stallard and J. Yavitt with support from NSF grants DEB 021104, 021115, 0212284, 0212818 and OISE 0314581, the STRI Soils Initiative and CTFS, and with assistance from P. Segre and J. Trani. Datasets are available at the CTFS website http://ctfs.si.edu/webatlas/datasets/bci/soilmaps/BCIsoil.html.
+
+## References
+
+Adrian Baddeley and Rolf Turner. Practical maximum pseudolikelihood for spatial point patterns. *Australian & New Zealand Journal of Statistics*, 42(3):283–322, 2000. MR1794056
+
+Adrian Baddeley and Rolf Turner. Spatstat: An R package for analyzing spatial point patterns. *Journal of Statistical Software*, 12(6):1–42, 2005.
+
+Adrian Baddeley, Jean-François Coeurjolly, Ege Rubak, and Rasmus Plenge Waagepetersen. Logistic regression for spatial Gibbs point processes. *Biometrika*, 101(2):377–392, 2014. MR3215354
+
+Adrian Baddeley, Ege Rubak, and Rolf Turner. *Spatial Point Patterns: Methodology and Applications with R*. CRC Press, 2015. MR3059645
+
+Mark Berman and Rolf Turner. Approximating point process likelihoods with GLIM. *Applied Statistics*, 41(1):31–38, 1992.
+
+Erwin Bolthausen. On the central limit theorem for stationary mixing random fields. *The Annals of Probability*, 10(4):1047–1050, 1982. MR0672305
+---PAGE_BREAK---
+
+Patrick Breheny and Jian Huang. Coordinate descent algorithms for nonconvex penalized regression, with applications to biological feature selection. *The Annals of Applied Statistics*, 5(1):232–253, 2011. MR2810396
+
+Peter Bühlmann and Sara Van De Geer. *Statistics for high-dimensional data: methods, theory and applications*. Springer Science & Business Media, 2011. MR2807761
+
+Emmanuel Candes and Terence Tao. The Dantzig selector: statistical estimation when p is much larger than n. *The Annals of Statistics*, 35(6):2313–2351, 2007. MR2382644
+
+Jean-François Coeurjolly and Jesper Møller. Variational approach to estimate the intensity of spatial point processes. *Bernoulli*, 20(3):1097–1125, 2014. MR3217439
+
+Richard Condit. Tropical forest census plots. *Springer-Verlag and R. G. Landes Company, Berlin, Germany, and Georgetown, Texas*, 1998.
+
+Lorin Crawford, Kris C Wood, Xiang Zhou, and Sayan Mukherjee. Bayesian approximate kernel regression with variable selection. *To appear in Journal of the American Statistical Association*, 2018.
+
+Bradley Efron, Trevor Hastie, Iain Johnstone, Robert Tibshirani, et al. Least angle regression. *The Annals of Statistics*, 32(2):407–499, 2004. MR2060166
+
+Jianqing Fan and Runze Li. Variable selection via nonconcave penalized likelihood and its oracle properties. *Journal of the American Statistical Association*, 96(456):1348–1360, 2001. MR1946581
+
+Jianqing Fan and Jinchi Lv. A selective overview of variable selection in high dimensional feature space. *Statistica Sinica*, 20(1):101–148, 2010. MR2640659
+
+Yixin Fang and Ji Meng Loh. Single-index model for inhomogeneous spatial point processes. *Statistica Sinica*, 27(2):555–574, 2017. MR3674686
+
+Jerome Friedman, Trevor Hastie, Holger Höfling, Robert Tibshirani, et al. Pathwise coordinate optimization. *The Annals of Applied Statistics*, 1(2):302–332, 2007. MR2415737
+
+Jerome Friedman, Trevor Hastie, and Rob Tibshirani. Regularization paths for generalized linear models via coordinate descent. *Journal of Statistical Software*, 33(1):1–22, 2010.
+
+Yongtao Guan and Ji Meng Loh. A thinned block bootstrap variance estimation procedure for inhomogeneous spatial point patterns. *Journal of the American Statistical Association*, 102(480):1377–1386, 2007. MR2412555
+
+Yongtao Guan and Ye Shen. A weighted estimating equation approach for inhomogeneous spatial point processes. *Biometrika*, 97(4):867–880, 2010. MR2746157
+
+Yongtao Guan, Abdollah Jalilian, and Rasmus Plenge Waagepetersen. Quasilikelihood for spatial point processes. *Journal of the Royal Statistical Society: Series B (Statistical Methodology)*, 77(3):677–697, 2015. MR3351450
+
+Xavier Guyon. *Random fields on a network: modeling, statistics, and applications*. Springer Science & Business Media, 1995. MR1344683
+
+Arthur E Hoerl and Robert W Kennard. Ridge regression. *Encyclopedia of statistical sciences*, 1988.
+
+Stephen P Hubbell, Robin B Foster, Sean T O’Brien, KE Harms, Richard Condit,
+---PAGE_BREAK---
+
+B Wechsler, S Joseph Wright, and S Loo De Lao. Light-gap disturbances, recruitment limitation, and tree diversity in a neotropical forest. *Science*, 283(5401):554–557, 1999.
+
+Stephen P Hubbell, Richard Condit, and Robin B Foster. Barro Colorado forest census plot data. 2005. URL http://ctfs.si.edu/datasets/bci.
+
+Janine Illian, Antti Penttinen, Helga Stoyan, and Dietrich Stoyan. *Statistical analysis and modelling of spatial point patterns*, volume 70. John Wiley & Sons, 2008. MR2384630
+
+Zsolt Karácsony. A central limit theorem for mixing random fields. *Miskolc Mathematical Notes*, 7:147–160, 2006. MR2310274
+
+Frédéric Lavancier, Jesper Møller, and Ege Rubak. Determinantal point process models and statistical inference. *Journal of the Royal Statistical Society: Series B (Statistical Methodology)*, 77(4):853–877, 2015. MR3382600
+
+Rahul Mazumder, Jerome H Friedman, and Trevor Hastie. Sparsenet: Coordinate descent with nonconvex penalties. *Journal of the American Statistical Association*, 106(495):1125–1138, 2011. MR2894769
+
+Jesper Møller and Rasmus Plenge Waagepetersen. *Statistical inference and simulation for spatial point processes*. CRC Press, 2004. MR2004226
+
+Jesper Møller and Rasmus Plenge Waagepetersen. Modern statistics for spatial point processes. *Scandinavian Journal of Statistics*, 34(4):643–684, 2007. MR2392447
+
+Dimitris N Politis, Efstathios Paparoditis, and Joseph P Romano. Large sample inference for irregularly spaced dependent observations based on subsampling. *Sankhyā: The Indian Journal of Statistics, Series A*, 60(2):274–292, 1998. MR1711685
+
+R Core Team. *R: A language and environment for statistical computing*. R Foundation for Statistical Computing, Vienna, Austria, 2016. URL https://www.R-project.org/.
+
+Stephen L Rathbun and Noel Cressie. Asymptotic properties of estimators for the parameters of spatial inhomogeneous Poisson point processes. *Advances in Applied Probability*, 26(1):122–154, 1994. MR1260307
+
+Ian W Renner and David I Warton. Equivalence of maxent and poisson point process models for species distribution modeling in ecology. *Biometrics*, 69(1):274–281, 2013. MR3058074
+
+Frederic Paik Schoenberg. Consistent parametric estimation of the intensity of a spatial-temporal point process. *Journal of Statistical Planning and Inference*, 128(1):79–93, 2005. MR2110179
+
+Andrew L Thurman and Jun Zhu. Variable selection for spatial Poisson point processes via a regularization method. *Statistical Methodology*, 17:113–125, 2014. MR3133589
+
+Andrew L Thurman, Rao Fu, Yongtao Guan, and Jun Zhu. Regularized estimating equations for model selection of clustered spatial point processes. *Statistica Sinica*, 25(1):173–188, 2015. MR3328809
+
+Robert Tibshirani. Regression shrinkage and selection via the lasso. *Journal of the Royal Statistical Society: Series B (Statistical Methodology)*, 58(1):267–288, 1996. MR1379242
+---PAGE_BREAK---
+
+Rasmus Plenge Waagepetersen. An estimating function approach to inference for inhomogeneous Neyman-Scott processes. *Biometrics*, 63(1):252–258, 2007. MR2345595
+
+Rasmus Plenge Waagepetersen. Estimating functions for inhomogeneous spatial point processes with incomplete covariate data. *Biometrika*, 95(2):351–363, 2008. MR2422695
+
+Rasmus Plenge Waagepetersen and Yongtao Guan. Two-step estimation for inhomogeneous spatial point processes. *Journal of the Royal Statistical Society: Series B (Statistical Methodology)*, 71(3):685–702, 2009. MR2749914
+
+Hansheng Wang, Runze Li, and Chih-Ling Tsai. Tuning parameter selectors for the smoothly clipped absolute deviation method. *Biometrika*, 94(3):553–568, 2007. MR2410008
+
+Larry Wasserman and Kathryn Roeder. High-dimensional variable selection. *The Annals of Statistics*, 37(5A):2178–2201, 2009. MR2543689
+
+Yu Ryan Yue and Ji Meng Loh. Variable selection for inhomogeneous spatial point process models. *Canadian Journal of Statistics*, 43(2):288–305, 2015. MR3353384
+
+Cun-Hui Zhang. Nearly unbiased variable selection under minimax concave penalty. *The Annals of Statistics*, 38(2):894–942, 2010. MR2604701
+
+Yiyun Zhang, Runze Li, and Chih-Ling Tsai. Regularization parameter selections via generalized information criterion. *Journal of the American Statistical Association*, 105(489):312–323, 2010. MR2656055
+
+Li-Ping Zhu, Lin-Yi Qian, and Jin-Guan Lin. Variable selection in a class of single-index models. *Annals of the Institute of Statistical Mathematics*, 63(6):1277–1293, 2011. MR2830860
+
+Hui Zou. The adaptive lasso and its oracle properties. *Journal of the American Statistical Association*, 101(476):1418–1429, 2006. MR2279469
+
+Hui Zou and Trevor Hastie. Regularization and variable selection via the elastic net. *Journal of the Royal Statistical Society: Series B (Statistical Methodology)*, 67(2):301–320, 2005. MR2137327
+
+Hui Zou and Runze Li. One-step sparse estimates in nonconcave penalized likelihood models. *The Annals of Statistics*, 36(4):1509–1533, 2008. MR2435443
+
+Hui Zou and Hao Helen Zhang. On the adaptive elastic-net with a diverging number of parameters. *The Annals of Statistics*, 37(4):1733–1751, 2009. MR2533470
\ No newline at end of file
diff --git a/samples/texts_merged/6159994.md b/samples/texts_merged/6159994.md
new file mode 100644
index 0000000000000000000000000000000000000000..6202f7193cd6b5ea297198f3c9f9f8b25ed7cc45
--- /dev/null
+++ b/samples/texts_merged/6159994.md
@@ -0,0 +1,2027 @@
+
+---PAGE_BREAK---
+
+# ITERATED INTEGRALS AND ALGEBRAIC CYCLES: EXAMPLES AND PROSPECTS
+
+RICHARD HAIN
+
+The goal of this paper is to produce evidence for a connection between the work of Kuo-Tsai Chen on iterated integrals and de Rham homotopy theory on the one hand, and the work of Wei-Liang Chow on algebraic cycles on the other. Evidence for such a profound link has been emerging steadily since the early 1980s when Carlson, Clemens and Morgan [13] and Bruno Harris [40] gave examples where the periods of non-abelian iterated integrals coincide with the periods of homologically trivial algebraic cycles. Algebraic cycles and the classical Chow groups are nowadays considered in the broader arena of motives, algebraic $K$-theory and higher Chow groups. This putative connection is best viewed in this larger context. Examples relating iterated integrals and motives go back to Bloch's work on the dilogarithm and regulators [9] in the mid 1970s, which was developed further by Beilinson [7] and Deligne (unpublished). Further evidence to support a connection between de Rham homotopy theory and iterated integrals includes [48, 4, 30, 57, 50, 22, 37, 59, 58, 25, 38, 51, 55, 19, 60, 20]. Chen would have been delighted by these developments, as he believed iterated integrals and loopspaces contain non-trivial geometric information and would one day become a useful mathematical tool outside topology.
+
+The paper is largely expository, beginning with an introduction to iterated integrals and Chen’s de Rham theorems for loop spaces and fundamental groups. It does contain some novelties, such as the de Rham theorem for fundamental groups of smooth algebraic curves in terms of “meromorphic iterated integrals of the second kind,” and the treatment of the Hodge and weight filtrations of the algebraic de Rham cohomology of loop spaces of algebraic varieties in characteristic zero. A generalization of the theorem of Carlson-Clemens-Morgan in Section 10 is presented, although the proof is not complete for lack of a rigorous theory of iterated integrals of currents. Even though there is no rigorous theory, iterated integrals of currents are a useful heuristic tool which illuminate the combinatorial and geometric content of iterated integrals. The development of this theory should be extremely useful for applications of de Rham theory to the study of algebraic cycles. The heuristic theory is discussed in Section 6.
+
+A major limitation of iterated integrals and rational homotopy theory of non-simply connected spaces is that they usually only give information about nilpotent completions of topological invariants. This is particularly limiting in many cases, such as when studying knots and moduli spaces of curves. By using iterated integrals of twisted differential forms or certain convergent infinite sums of iterated integrals, one may get beyond nilpotence. Non-nilpotent iterated integrals and their Hodge theory should emerge when studying the periods of extensions of variations
+
+*Date:* May 29, 2018.
+
+Supported in part by grants from the National Science Foundation.
+---PAGE_BREAK---
+
+of Hodge structure associated to algebraic cycles in complex algebraic manifolds, when one spreads the variety and the cycles. Some developments in the de Rham theory, which originate with a suggestion of Deligne, are surveyed in Section 12.
+
+Iterated integrals are the “de Rham realization” of the cosimplicial version of the cobar construction, a construction which goes back to Adams [2]. The paper ends with an exposition of the cobar construction. Logically, the paper could have begun with it, and some readers may prefer to start there. I hope that the examples in the paper will lead the reader to the conclusion, first suggested by Wojtkowiak [57], that the cosimplicial version of the cobar construction is important in algebraic geometry, and that the numerous occurrences of iterated integrals as periods of cycles and motives are not unrelated, but are the de Rham manifestation of a deeper connection between motives and the cobar construction.
+
+This paper complements the survey article [33], which emphasizes the fundamental group. I highly recommend Chen’s Bulletin article [15]; it surveys most of his work, and contains complete proofs of many of his important theorems; it also contains a useful account of the cobar construction. Polylogarithms are discussed from the point of view of iterated integrals in [34].
+
+There is much beautiful mathematics that connects iterated integrals to motives which is not covered in this paper. Most notable are Drinfeld’s work [23], in particular his associator, which appears in the study of the motivic fundamental group of $\mathbb{P}^1 - \{0, 1, \infty\}$, and the Kontsevich integral [45], which appears in the construction of Vassiliev invariants.
+
+*Acknowledgements:* It is a great pleasure to acknowledge all those who have inspired and contributed to my understanding of iterated integrals, most notably Kuo-Tsai Chen, my thesis adviser, who introduced me to them; Pierre Cartier, who influenced the way I think about them; and Dennis Sullivan, who influenced me and many others through his seminal paper [54], which still contains many paths yet to be explored.
+
+# 1. DIFFERENTIAL FORMS ON PATH SPACES
+
+Denote the space of piecewise smooth paths $\gamma : [0, 1] \to X$ in a smooth manifold $X$ by $PX$. Chen’s iterated integrals can be defined using any reasonable definition of differential form on $PX$, such as the one used by Chen (see [15], for example). We shall denote the de Rham complex of $X$, $PX$, etc. by $E^\bullet(X)$, $E^\bullet(PX)$, etc.
+
+We will say that a function $\alpha: N \to PX$ from a smooth manifold into $PX$ is smooth if the mapping
+
+$$ \phi_{\alpha} : [0, 1] \times N \to X $$
+
+defined by $(t,x) \mapsto \alpha(x)(t)$ is piecewise smooth in the sense that there is a partition $0 = t_0 < t_1 < \cdots < t_{n-1} < t_n = 1$ of $[0,1]$ such that the restriction of $\phi_\alpha$ to each $[t_{j-1}, t_j] \times N$ is smooth.¹
+
+The key features that the de Rham complex should satisfy are:
+
+i. $E^\bullet(PX)$ is a differential graded algebra;
+
+ii. if $N$ is a smooth manifold and $\alpha: N \to PX$ is smooth, then there is an induced homomorphism
+
+$$ \alpha^* : E^\bullet(PX) \to E^\bullet(N) $$
+
+¹Recall that a function $f: K \to \mathbb{R}$ from a subset $K$ of $\mathbb{R}^N$ is smooth if there exists an open neighbourhood $U$ of $K$ in $\mathbb{R}^N$ and a smooth function $g: U \to \mathbb{R}$ whose restriction to $K$ is $f$.
+---PAGE_BREAK---
+
+of differential graded algebras;
+
+iii. if $D$ and $Q$ are manifolds and $D \times PX \to Q$ is smooth (that is, $D \times N \to D \times PX \to Q$ is smooth for all smooth $N \to PX$, where $N$ is a manifold), then there is an induced dga homomorphism
+
+$$E^{\bullet}(Q) \to E^{\bullet}(D \times PX).$$
+
+iv. if $D$ is a compact oriented manifold (possibly with boundary) of dimension $n$ and $p: D \times PX \to PX$ is the projection, then one has the integration over the fiber mapping
+
+$$p_* : E^{k+n}(D \times PX) \to E^k(PX)$$
+
+which satisfies
+
+$$p_*d \pm dp_* = (p|_{\partial D})_*$$
+
+Chen's approach is particularly elementary and direct. For him, a smooth $k$-form on $PX$ is a collection $w = (w_\alpha)$ of smooth $k$-forms, indexed by the smooth mappings $\alpha: N_\alpha \to PX$, where $w_\alpha \in E^k(N_\alpha)$. These are required to satisfy the following compatibility condition: if $f: N_\alpha \to N_\beta$ is a smooth mapping satisfying $\beta \circ f = \alpha$, then
+
+$$w_\alpha = f^* w_\beta.$$
+
+Exterior derivatives are defined by setting $d(w_\alpha) = (dw_\alpha)$. Exterior products are defined similarly. The de Rham complex of $PX$ is a differential graded algebra.
+
+This definition generalizes easily to other natural subspaces $W$ of $PX$, such as loop spaces and fixed end point path spaces. Just replace $PX$ by $W$ and consider only those $\alpha: N_\alpha \to PX$ that factor through the inclusion $W \hookrightarrow PX$. It also generalizes to products of such $W$ with a smooth manifold $Q$. To define a smooth form $w$ on $Q \times W$, one need specify only the $w_\alpha$ for those smooth mappings $\alpha$ of the form $\text{id} \times \alpha: Q \times N \to Q \times W$.
+
+Lest this seem ad hoc, I should mention that Chen developed an elementary and efficient theory of “differentiable spaces”, the category of which contains the category of smooth manifolds and smooth maps and is closed under taking path spaces and subspaces. Each differentiable space has a natural de Rham complex which is functorial under smooth maps. The details can be found in his Bulletin article [15].
+
+# 2. ITERATED INTEGRALS
+
+This is a brief sketch of iterated integrals. I have been deliberately vague about the signs as they depend on choices of conventions which do not play a crucial role in the theory. Another reason I have omitted them in this discussion is that, by using different sign conventions from those of Chen, I believe one should be able to make the signs in many formulas conform more to standard homological conventions. Chen’s sign conventions are given in Theorem 7.2 and will be used in all computations in this paper.
+
+Suppose that $w_1, \dots, w_r$ are differential forms on $X$, all of positive degree. The iterated integral
+
+$$\int w_1 w_2 \dots w_r$$
+
+is a differential form on $PX$ of degree $-r + \deg w_1 + \deg w_2 + \dots + \deg w_r$. Up to a sign (which depends on one’s conventions)
+
+$$\int w_1 w_2 \dots w_r = \pi_* \phi^* (p_1^* w_1 \wedge p_2^* w_2 \wedge \dots \wedge p_r^* w_r)$$
+---PAGE_BREAK---
+
+where
+
+i. $p_j : X^r \to X$ is projection onto the $j$th factor,
+
+ii. $\Delta^r = \{(t_1, \dots, t_r) : 0 \le t_1 \le t_2 \le \dots \le t_r \le 1\}$ is the *time ordered* form of the standard $r$-simplex,
+
+iii. $\phi : \Delta^r \times PX \to X^r$ is the *sampling map*
+
+$$\phi(t_1, \dots, t_r, \gamma) = (\gamma(t_1), \gamma(t_2), \dots, \gamma(t_r)),$$
+
+iv. $\pi_*$ denotes integration over the fiber of the projection
+
+$$\pi : \Delta^r \times PX \rightarrow PX.$$
+
+When each $w_j$ is a 1-form, $\int w_1 \dots w_r$ is a function $PX \rightarrow \mathbb{R}$. Its value on the path $\gamma: [0, 1] \rightarrow X$ is the *time ordered integral*
+
+$$ (1) \quad \int_{\gamma} w_1 \dots w_r := \int_{0 \le t_1 \le t_2 \le \dots \le t_r \le 1} f_1(t_1) \dots f_r(t_r) dt_1 \dots dt_r, $$
+
+where $\gamma^* w_j = f_j(t)dt$. Iterated integrals of degree zero are called *iterated line integrals*.
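+Definition (1) lends itself to direct numerical evaluation: the time-ordered integral can be built up as $r$ successive cumulative integrals. The following Python sketch (the function name and discretization are mine, not Chen's) does this with trapezoid sums; taking all $f_j = 1$ recovers the volume $1/r!$ of the time-ordered simplex $\Delta^r$.
+
```python
def iterated_line_integral(fs, n=4000):
    """Time-ordered integral of eq. (1): given gamma*w_j = fs[j](t) dt,
    repeatedly form I_k(t) = integral_0^t f_k(s) I_{k-1}(s) ds by the
    trapezoid rule; the answer is I_r(1)."""
    h = 1.0 / n
    ts = [i * h for i in range(n + 1)]
    prev = [1.0] * (n + 1)                      # empty product: I_0 = 1
    for f in fs:
        cur = [0.0] * (n + 1)
        for i in range(1, n + 1):
            cur[i] = cur[i - 1] + 0.5 * h * (
                f(ts[i - 1]) * prev[i - 1] + f(ts[i]) * prev[i])
        prev = cur
    return prev[-1]

# All f_j = 1 gives the volume 1/r! of the time-ordered simplex.
vol3 = iterated_line_integral([lambda t: 1.0] * 3)
```
+
+The same routine evaluates any iterated line integral once the pullbacks $\gamma^* w_j = f_j(t)\,dt$ are known.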
+
+The space of iterated integrals on $PX$ is the subspace $Ch^{\bullet}(PX)$ of its de Rham complex spanned by all differential forms of the form
+
+$$ (2) \quad p_0^* w' \wedge p_1^* w'' \wedge \int w_1 \dots w_r $$
+
+where for $a \in [0, 1]$, $p_a : PX \to X$ is the evaluation at time $a$ mapping $\gamma \mapsto \gamma(a)$.
+
+If $W$ is a subspace of $PX$ (such as a fixed end point path space, the free loop space, a pointed loop space), we shall denote the subspace of its de Rham complex generated by the restrictions of iterated integrals to it by $Ch^{\bullet}(W)$ and call it the *Chen complex of W*. It is naturally filtered by length:
+
+$$ Ch_0^{\bullet}(W) \subseteq Ch_1^{\bullet}(W) \subseteq Ch_2^{\bullet}(W) \subseteq \dots \subseteq Ch^{\bullet}(W), $$
+
+where $Ch_s^{\bullet}(W)$ consists of all iterated integrals that are sums of terms (2) where $r \le s$.
+
+The standard formula
+
+$$ \pi_* d \pm d \pi_* = (\pi|_{\partial\Delta^r})_* $$
+
+implies that iterated integrals are closed under exterior differentiation and that, with suitable signs (depending on one's conventions),
+
+$$ d \int w_1 \dots w_r = \sum_{j=1}^{r} \pm \int w_1 \dots dw_j \dots w_r \\ + \sum_{j=1}^{r-1} \pm \int w_1 \dots w_{j-1} (w_j \wedge w_{j+1}) w_{j+2} \dots w_r \\ \pm (\int w_1 \dots w_{r-1}) \wedge p_1^* w_r \pm p_0^* w_1 \wedge \int w_2 \dots w_r. $$
+
+This implies that each $Ch_s^{\bullet}(PX)$, and thus each $Ch_s^{\bullet}(W)$, is closed under exterior differentiation.
+---PAGE_BREAK---
+
+The standard triangulation² of $\Delta^r \times \Delta^s$ gives the shuffle product formula
+
+$$ (3) \quad \int w_1 \dots w_r \wedge \int w_{r+1} \dots w_{r+s} = \sum_{\sigma \in sh(r,s)} \pm \int w_{\sigma(1)} w_{\sigma(2)} \dots w_{\sigma(r+s)} $$
+
+where $sh(r, s)$ denotes the set of shuffles of type $(r, s)$ — that is, those permutations $\sigma$ of $\{1, 2, \dots, r + s\}$ such that
+
+$$ \sigma^{-1}(1) < \sigma^{-1}(2) < \dots < \sigma^{-1}(r) \text{ and} \\ \sigma^{-1}(r+1) < \sigma^{-1}(r+2) < \dots < \sigma^{-1}(r+s). $$
+
+With this product, $\mathrm{Ch}^\bullet(W)$ is a differential graded algebra (dga).
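+For pullbacks that are monomials, $f_j(t) = t^{c_j}$, the time-ordered integral has the exact value $\prod_{k=1}^{r}(c_1 + \cdots + c_k + k)^{-1}$, which makes the shuffle formula (3) easy to verify in exact rational arithmetic. A small Python sketch (names mine; all signs are $+1$ here since the $w_j$ are 1-forms):
+
```python
from fractions import Fraction
from itertools import combinations

def mono_iterated(cs):
    """Exact value of the time-ordered integral of t^{c_1}dt ... t^{c_r}dt
    over 0 <= t_1 <= ... <= t_r <= 1."""
    val, total = Fraction(1), 0
    for k, c in enumerate(cs, start=1):
        total += c
        val *= Fraction(1, total + k)
    return val

def shuffles(r, s):
    """All interleavings of (0..r-1) with (r..r+s-1) preserving both orders."""
    for pos in combinations(range(r + s), r):
        word, a, b = [], iter(range(r)), iter(range(r, r + s))
        for i in range(r + s):
            word.append(next(a) if i in pos else next(b))
        yield word

# Verify (3): the product of two iterated integrals equals the sum over
# (r, s)-shuffles of the shuffled iterated integrals.
cs = [1, 2, 0, 3]                       # exponents for w_1..w_4, r = s = 2
lhs = mono_iterated(cs[:2]) * mono_iterated(cs[2:])
rhs = sum(mono_iterated([cs[i] for i in w]) for w in shuffles(2, 2))
assert lhs == rhs
```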
+
+In many applications, one considers the restrictions of iterated integrals to the fixed end-point path spaces
+
+$$ P_{x,y}X := \{\gamma \in PX : \gamma(0) = x, \gamma(1) = y\}. $$
+
+Multiplication of paths
+
+$$ \mu : P_{x,y}X \times P_{y,z}X \to P_{x,z}X $$
+
+induces a map of the complex of iterated integrals:³
+
+$$ \mu^* \int w_1 \dots w_r = \sum_{j=0}^{r} \pi_1^* \int w_1 \dots w_j \wedge \pi_2^* \int w_{j+1} \dots w_r $$
+
+where $\pi_1$ and $\pi_2$ denote the projections onto the first and second factors of $P_{x,y}X \times P_{y,z}X$. The inverse mapping
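+For iterated line integrals, the coproduct formula can be checked numerically on a composite path, using footnote 3's convention that the empty iterated integral is 1. A sketch with my own choice of data: $X = \mathbb{R}$, $w_1 = dx$, $w_2 = x\,dx$, $\alpha(t) = t$ and $\beta(t) = 1 + t$:
+
```python
def it_int(fs, n=4000):
    """Time-ordered integral, given the pullbacks gamma*w_j = fs[j](t) dt,
    via repeated cumulative trapezoid sums."""
    h = 1.0 / n
    prev = [1.0] * (n + 1)
    for f in fs:
        cur = [0.0] * (n + 1)
        for i in range(1, n + 1):
            cur[i] = cur[i - 1] + 0.5 * h * (
                f((i - 1) * h) * prev[i - 1] + f(i * h) * prev[i])
        prev = cur
    return prev[-1]

# w_1 = dx, w_2 = x dx on X = R; alpha runs 0 -> 1, beta runs 1 -> 2.
g = [lambda x: 1.0, lambda x: x]                     # coefficient functions
alpha = [lambda t, gj=gj: gj(t) for gj in g]         # pullbacks along alpha
beta = [lambda t, gj=gj: gj(1.0 + t) for gj in g]    # pullbacks along beta
ab = [lambda t, gj=gj: 2.0 * gj(2.0 * t) for gj in g]  # along alpha*beta (t -> 2t)

lhs = it_int(ab)                                     # integral over alpha*beta
rhs = (it_int([beta[0], beta[1]])                    # empty on alpha, w1 w2 on beta
       + it_int([alpha[0]]) * it_int([beta[1]])      # w1 on alpha, w2 on beta
       + it_int([alpha[0], alpha[1]]))               # w1 w2 on alpha, empty on beta
assert abs(lhs - rhs) < 1e-6
```
+
+Both sides evaluate to $8/3$ for this data, confirming that the three terms (including the two "empty" ones) are all needed.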
+
+$$ P_{x,y}X \to P_{y,x}X, \quad \gamma \mapsto \gamma^{-1} $$
+
+induces the “antipode”
+
+$$ \int w_1 \dots w_r \mapsto \pm \int w_r \dots w_1. $$
+
+The closed iterated line integrals $H^0(\mathrm{Ch}^\bullet(P_{x,y}X))$ are precisely those iterated line integrals that are constant on homotopy classes of paths relative to their endpoints.
+
+When $x=y$, the Chen complex of $P_{x,x}X$ is a differential graded Hopf algebra with diagonal
+
+$$ \int w_1 \dots w_r \mapsto \sum_{j=0}^{r} \int w_1 \dots w_j \otimes \int w_{j+1} \dots w_r. $$
+
+Its cohomology $H^\bullet(\mathrm{Ch}^\bullet(P_{x,x}X))$ is a graded Hopf algebra with antipode. Each element of $H^\bullet(\mathrm{Ch}^\bullet(P_{x,x}X))$ defines a function $\pi_1(X, x) \to \mathbb{R}$.
+
+Restricting elements of $\mathrm{Ch}^\bullet(P_{x,x}X)$ to the constant loop $c_x$ at $x$ defines a natural augmentation
+
+$$ \mathrm{Ch}^\bullet(P_{x,x}X) \to \mathbb{R}. $$
+
+Denote its kernel by $\mathcal{I}\mathrm{Ch}^\bullet(P_{x,x}X)$. These are the iterated integrals on the loop space $P_{x,x}X$ “with trivial constant term.”
+
+²This is
+
+$$ \Delta^r \times \Delta^s = \bigcup_{\sigma \in sh(r,s)} \{(t_{\sigma(1)}, t_{\sigma(2)}, \ldots, t_{\sigma(r+s)}): 0 \le t_1 \le t_2 \le \cdots \le t_{r+s} \le 1\} $$
+
+³Here we use the convention that when $s=0$, $\int \phi_1 \dots \phi_s = 1$.
+---PAGE_BREAK---
+
+Of course, if one takes iterated integrals of complex-valued forms $E^{\bullet}(X)_{\mathbb{C}}$, then
+one obtains complex-valued iterated integrals. We shall denote the Chen complex
+of complex-valued iterated integrals by $\mathrm{Ch}^{\bullet}(PX)_{\mathbb{C}}$, $\mathrm{Ch}^{\bullet}(P_{x,y}X)_{\mathbb{C}}$, etc.
+
+# 3. LOOP SPACE DE RHAM THEOREMS
+
+Chen proved many useful de Rham type theorems. (A comprehensive list can be found in Section 2 of [30].) In this section we present those of most immediate interest.
+
+**Theorem 3.1.** If $X$ is a simply connected manifold, then integration induces a natural Hopf algebra isomorphism
+
+$$H^{\bullet}(\mathrm{Ch}^{\bullet}(P_{x,x}X)) \cong H^{\bullet}(P_{x,x}X; \mathbb{R}).$$
+
+This, combined with standard algebraic topology, gives a de Rham theorem for homotopy groups of simply connected manifolds. First a review of the topology:
+
+Suppose that $(Z, z)$ is a connected, pointed topological space and that $A$ is any coefficient ring. Consider the adjoint
+
+$$h^t : H^{\bullet}(Z; A) \to \operatorname{Hom}_{\mathbb{Z}}(\pi_{\bullet}(Z, z), A)$$
+
+of the Hurewicz homomorphism. An element of $H^{\bullet}(Z; A)$ is *decomposable* if it is in the image of the cup product mapping
+
+$$H^{>0}(Z; A) \otimes H^{>0}(Z; A) \to H^{\bullet}(Z; A).$$
+
+The set of indecomposable elements of the ring $H^{\bullet}(Z; A)$ is defined by
+
+$$\mathrm{QH}^{\bullet}(Z; A) := \mathrm{H}^{\bullet}(Z; A)/\{\text{the decomposable elements}\}.$$
+
+Since the cohomology ring of a sphere has no decomposables, the kernel of $h^t$
+contains the decomposable elements of $H^{\bullet}(Z; A)$; $h^t$ therefore induces a mapping
+
+$$e: \mathrm{QH}^{\bullet}(Z; A) \to \operatorname{Hom}_{\mathbb{Z}}(\pi_{\bullet}(Z, z), A).$$
+
+Typically, this mapping is far from being an isomorphism. However, if $A$ is a field of characteristic zero and $Z$ is a connected $H$-space, then $e$ is an isomorphism. (This is a Theorem of Cartan and Serre — cf. [47].)
+
+When $X$ is simply connected, $P_{x,x}X$ is a connected $H$-space. Chen’s de Rham theorem and the Cartan-Serre Theorem imply that integration induces an isomorphism
+
+$$\mathrm{QH}^j(P_{x,x}X; \mathbb{R}) \stackrel{\sim}{\longrightarrow} \mathrm{Hom}(\pi_j(P_{x,x}X, c_x), \mathbb{R}) \cong \mathrm{Hom}(\pi_{j+1}(X, x), \mathbb{R})$$
+
+for each $j$.
+
+There is a canonical *subcomplex* $\mathrm{QCh}^{\bullet}(P_{x,x}X)$ of the Chen complex of $P_{x,x}X$, which is isomorphic to the indecomposable iterated integrals (see [29]) and whose cohomology is $\mathrm{QH}^{\bullet}(\mathrm{Ch}^{\bullet}(P_{x,x}X))$. This and Chen's de Rham Theorem above then yield the following de Rham Theorem for homotopy groups of simply connected spaces:
+
+**Theorem 3.2.** *Integration induces an isomorphism*
+
+$$H^{\bullet}(\mathrm{QCh}^{\bullet}(P_{x,x}X)) \xrightarrow{\sim} \mathrm{Hom}(\pi_{\bullet}(X, x), \mathbb{R})$$
+
+of degree +1 of graded vector spaces.
+---PAGE_BREAK---
+
+Both sides of the display in this theorem are naturally “Lie coalgebras.” It is
+not difficult to show that the integration isomorphism respects this structure.
+
+We now turn our attention to non-simply connected spaces. The augmentation
+$\epsilon : \mathbb{Z}\pi_1(X, x) \to \mathbb{Z}$ of the integral group ring of $\pi_1(X, x)$ is defined by taking each
+element of the fundamental group to 1. The augmentation ideal $J$ is the kernel of
+the augmentation $\epsilon$. The diagonal mapping $\pi_1(X, x) \to \pi_1(X, x) \times \pi_1(X, x)$ induces
+a coproduct
+
+$$
+\Delta : \mathbb{Z}\pi_1(X, x) \to \mathbb{Z}\pi_1(X, x) \otimes \mathbb{Z}\pi_1(X, x).
+$$
+
+The most direct statement of Chen’s de Rham theorem for the fundamental
+group is:
+
+**Theorem 3.3 (Chen [14]).** *The integration pairing*
+
+$$
+H^0(\mathrm{Ch}^\bullet(P_{x,x}X)) \otimes \mathbb{Z}\pi_1(X,x) \to \mathbb{R}
+$$
+
+is a pairing of Hopf algebras under which $H^0(\mathrm{Ch}_s^\bullet(P_{x,x}X))$ annihilates $J^{s+1}$. The
+induced mapping
+
+$$
+H^0(\mathrm{Ch}_s^\bullet(P_{x,x}X)) \rightarrow \operatorname{Hom}_{\mathbb{Z}}(\mathbb{Z}\pi_1(X,x)/J^{s+1}, \mathbb{R})
+$$
+
+is an isomorphism.
+
+An elementary proof is given in [33]. An equivalent statement, more amenable
+to generalization, will be given in Section 12. A second version uses the $J$-adic
+completion
+
+$$
+\mathbb{R}\pi_1(X, x)^{\wedge} := \varprojlim_s \mathbb{R}\pi_1(X, x)/J^s
+$$
+
+of $\mathbb{R}\pi_1(X, x)$. It is a complete Hopf algebra (cf. [52, Appendix A]), with diagonal
+
+$$
+\Delta : \mathbb{R}\pi_1(X, x)^{\wedge} \to \mathbb{R}\pi_1(X, x)^{\wedge} \hat{\otimes} \mathbb{R}\pi_1(X, x)^{\wedge}
+$$
+
+induced by that of $\mathbb{R}\pi_1(X, x)$.
+
+**Corollary 3.4.** If $H^1(X; \mathbb{R})$ is finite dimensional, then integration induces a natural homomorphism
+
+$$
+\mathbb{R}\pi_1(X, x)^{\wedge} \to \operatorname{Hom}(H^0(\mathrm{Ch}^{\bullet}(P_{x,x}X)), \mathbb{R})
+$$
+
+of complete *Hopf algebras*.
+
+Of course, each of these theorems holds with complex coefficients if we begin
+with complex-valued forms.
+
+*Remark 3.5.* It is tempting to think that one can extend Chen’s loop space de Rham theorem, or its homotopy version, to a de Rham theorem for higher homotopy groups of non-simply connected spaces. While this is true for “nilpotent spaces” (which include Lie groups), it is most definitely not true for most non-simply connected spaces that one meets in day-to-day life. In fact, Example 7.5 shows that there is unlikely to be any reasonable statement. One point we wish to make in this paper, however, is that in arithmetic and algebraic geometry, the cohomology of iterated integrals is intrinsic and may be a more interesting and geometric invariant of a complex algebraic variety than its higher homotopy groups or loop space cohomology.
+---PAGE_BREAK---
+
+# 4. MULTI-VALUED FUNCTIONS
+
+In this section, we give several examples of interesting multi-valued functions that
+can be obtained by integrating closed iterated line integrals. Although elementary,
+these examples are reflections of the relationship between iterated integrals and
+periods of certain canonical variations of mixed Hodge structure.
+
+The following result is easily proved by pulling back to the universal covering of
+X and using the definition (1) of iterated line integrals.
+
+**Proposition 4.1.** All closed iterated line integrals of length $\le 2$ on $P_{x,y}X$ are of the form
+
+$$
+\sum_{j,k} a_{jk} \int \phi_j \phi_k + \int \xi + a \text{ constant}
+$$
+
+where each $\phi_j$ is a closed 1-form on X, the $a_{jk}$ are scalars, and $\xi$ is a 1-form on X satisfying
+
+$$
+d\xi + \sum_{j,k} a_{jk} \phi_j \wedge \phi_k = 0.
+$$
+
+A *relatively closed* iterated integral is an element of $\mathrm{Ch}^{\bullet}(PX)$ that is closed on $P_{x,y}X$ for all $x, y \in X$. The iterated line integrals given by the previous result are relatively closed.
+
+Multi-valued functions can be constructed by integrating relatively closed iterated
+integrals. For example, suppose that $X$ is a Riemann surface and that $w_1$ and
+$w_2$ are holomorphic differentials on $X$. Then
+
+$$
+\int w_1 w_2
+$$
+
+is closed on each $P_{x,y}X$. This means that for any fixed point $x_o \in X$, the function
+
+$$
+x \mapsto \int_{x_o}^{x} w_1 w_2
+$$
+
+is a multi-valued function on X. It is easily seen to be holomorphic.
+
+**Example 4.2.** If $X = \mathbb{P}^1(\mathbb{C}) - \{0, 1, \infty\}$, then
+
+$$
+\int_0^x \frac{dz}{1-z} \frac{dz}{z}
+$$
+
+is a multi-valued holomorphic function on X. In fact, it is Euler’s dilogarithm,
+whose principal branch in the unit disk is defined by
+
+$$
+\ln_2(x) = \sum_{n \ge 1} \frac{x^n}{n^2}.
+$$
+
+More generally, the *k*-logarithm
+
+$$
+\ln_k(x) := \sum_{n \ge 1} \frac{x^n}{n^k} \quad |x| < 1
+$$
+
+can be expressed as the length *k* iterated integral
+
+$$
+\int_0^x \frac{dz}{1-z} \overbrace{\frac{dz}{z} \dots \frac{dz}{z}}^{k-1}
+$$
+---PAGE_BREAK---
+
+From this integral expression, it is clear that $\ln_k$ can be analytically continued to a multi-valued function on $\mathbb{C} - \{0, 1\}$.
+
+Note that $\zeta(k)$, the value of the Riemann zeta function at an integer $k > 1$, is $\ln_k(1)$. More information about iterated integrals and polylogarithms can be found in [34].
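+Example 4.2 is easy to test numerically. Along the straight path $\gamma(t) = xt$, the inner integral is $\int_0^t \gamma^*\frac{dz}{1-z} = -\log(1 - xt)$, so the length-two iterated integral collapses to the ordinary integral $\int_0^1 -\log(1-xt)\,\frac{dt}{t}$. A Python sketch (function names mine) comparing this with the defining series:
+
```python
import math

def dilog_iterated(x, n=20000):
    """ln_2(x) as the length-2 iterated integral along gamma(t) = x*t:
    the inner integral is -log(1 - x*t) in closed form, the outer one is
    done by the trapezoid rule.  The integrand extends continuously to
    t = 0 with value x."""
    def g(t):
        return x if t == 0.0 else -math.log(1.0 - x * t) / t
    h = 1.0 / n
    return h * (0.5 * g(0.0)
                + sum(g(i * h) for i in range(1, n))
                + 0.5 * g(1.0))

def dilog_series(x, terms=200):
    """Principal branch in the unit disk: sum of x^n / n^2."""
    return sum(x**n / n**2 for n in range(1, terms + 1))
```
+
+At $x = 1/2$ both agree with the classical closed form $\ln_2(1/2) = \pi^2/12 - (\log 2)^2/2$.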
+
+More generally, the multiple polylogarithms
+
+$$L_{m_1, \dots, m_n}(x_1, \dots, x_n) := \sum_{0 < k_1 < \dots < k_n} \frac{x_1^{k_1} x_2^{k_2} \dots x_n^{k_n}}{k_1^{m_1} k_2^{m_2} \dots k_n^{m_n}} \quad |x_j| < 1$$
+
+and their special values, Zagier's multiple zeta values $\zeta(n_1, \dots, n_m)$, can be expressed as iterated integrals. For example,
+
+$$L_{1,1}(x,y) = \int_{(0,0)}^{(x,y)} \left( \frac{dy}{1-y} \frac{dx}{1-x} + \frac{d(xy)}{1-xy} \left( \frac{dy}{1-y} - \frac{dx}{1-x} - \frac{dx}{x} \right) \right).$$
+
+This expression defines a well-defined multi-valued function on
+
+$$\mathbb{C}^2 - \{(x, y) : xy(1-x)(1-y)(1-xy) = 0\}$$
+
+as the relation
+
+$$\frac{dy}{1-y} \wedge \frac{dx}{1-x} + \frac{d(xy)}{1-xy} \wedge \left( \frac{dy}{1-y} - \frac{dx}{1-x} - \frac{dx}{x} \right) = 0$$
+
+holds in the rational 2-forms on $\mathbb{C}^2$. Formulas for all multiple polylogarithms and other properties can be found in Zhao's paper [60].
+
+Closed iterated integrals that involve antiholomorphic 1-forms can also yield multi-valued holomorphic functions.
+
+**Proposition 4.3.** Suppose that $X$ is a complex manifold and $w_1, \dots, w_n$ are holomorphic 1-forms. If $\xi$ is a (1,0) form such that
+
+$$\bar{\partial}\xi + \sum_{j,k} a_{jk} \overline{w}_j \wedge w_k = 0,$$
+
+then the multi-valued function
+
+$$F: x \mapsto \sum_{j,k} a_{jk} \int_{x_o}^{x} \overline{w}_j w_k + \int_{x_o}^{x} \xi$$
+
+is well defined and holomorphic.
+
+*Proof.* Since each $w_j$ is holomorphic,
+
+$$dF(x) = \sum_{j,k} a_{jk} \left( \int_{x_o}^{x} \overline{w}_j \right) w_k + \xi.$$
+
+Since $\xi$ has type (1,0), $dF$ also has type (1,0), which implies that $F$ is holomorphic. $\square$
+
+**Example 4.4.** Take $X$ to be a punctured elliptic curve $E = \mathbb{C}/\Lambda - \{0\}$. The point of this example is to show that the logarithm of the associated theta function $\theta(z)$ is a twice iterated integral. We may assume that $\Lambda = \mathbb{Z} + \mathbb{Z}\tau$ where $\tau$ has positive imaginary part. Denote the homology classes of the images of the intervals $[0, 1]$
+---PAGE_BREAK---
+
+and $[0, \tau]$ by $\alpha$ and $\beta$, respectively. These form a symplectic basis of $H_1(E, \mathbb{Z})$. The normalized abelian differential is $dz$ and $dz = \alpha^* + \tau\beta^*$ from which it follows that
+
+$$d\bar{z} \wedge dz = 2i \operatorname{Im} \tau \in H^2(E, \mathbb{C}) \cong \mathbb{C}.$$
+
+The multi-valued differential $\mu = (z - \bar{z})dz$ satisfies
+
+$$\mu(z+1) = \mu(z) \text{ and } \mu(z+\tau) = \mu(z) + 2i \operatorname{Im} \tau dz.$$
+
+On the other hand, the corresponding theta function $\theta(z) := \theta(z, \tau)$ satisfies
+
+$$\theta(z+1) = \theta(z) \text{ and } \theta(z+\tau) = \exp(-i\pi\tau - 2\pi i z)\theta(z).$$
+
+Thus
+
+$$\frac{d\theta}{\theta}(z+1) = \frac{d\theta}{\theta}(z) \text{ and } \frac{d\theta}{\theta}(z+\tau) = \frac{d\theta}{\theta}(z) - 2\pi i dz.$$
+
+It follows that
+
+$$\xi := \frac{\operatorname{Im} \tau}{\pi} \frac{d\theta}{\theta} + \mu(z)$$
+
+is a single-valued differential on $E - \{0\}$ of type $(1,0)$ having a logarithmic singularity at $0$ which satisfies
+
+$$d\bar{z} \wedge dz + d\xi = 0 \text{ in } E^2(E - \{0\}).$$
+
+(In fact, these properties characterize it up to a multiple of $dz$.) It follows that $\int d\bar{z}dz + \xi$ is relatively closed, so that
+
+$$x \mapsto \int_{x_o}^{x} d\bar{z} \, dz + \xi$$
+
+is a multi-valued holomorphic function on $E - \{0\}$. Applying the definition of iterated integrals yields
+
+$$\log \theta(x) = \log \theta(x_o) + \frac{\pi}{\operatorname{Im} \tau} \left( \int_{x_o}^x (d\bar{z} dz + \xi) - \frac{1}{2} (z(x) - \bar{z}(x_o))^2 + \frac{1}{2} (z(x_o) - \bar{z}(x_o))^2 \right),$$
+
+where $z(x) = \int_0^x dz$.⁴ This example generalizes easily to theta functions of several variables by replacing $E$ by a principally polarized abelian variety $A$ and $E - \{0\}$ by $A - \Theta$, where $\Theta$ is its theta divisor.
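+The two functional equations for $\theta$ can be checked directly from the series $\theta(z, \tau) = \sum_{n \in \mathbb{Z}} \exp(\pi i n^2 \tau + 2\pi i n z)$, which is the normalization satisfying the equations above. A quick numerical sketch (the values of $\tau$ and $z$ are arbitrary test points):
+
```python
import cmath

def theta(z, tau, N=40):
    """Jacobi theta: sum over n in [-N, N] of exp(pi*i*(n^2*tau + 2*n*z)).
    Converges rapidly since Im(tau) > 0."""
    return sum(cmath.exp(cmath.pi * 1j * (n * n * tau + 2 * n * z))
               for n in range(-N, N + 1))

tau = 0.3 + 1.1j            # any tau with positive imaginary part
z = 0.17 + 0.05j
# theta(z + 1) = theta(z)
assert abs(theta(z + 1, tau) - theta(z, tau)) < 1e-10
# theta(z + tau) = exp(-pi*i*tau - 2*pi*i*z) * theta(z)
lhs = theta(z + tau, tau)
rhs = cmath.exp(-cmath.pi * 1j * (tau + 2 * z)) * theta(z, tau)
assert abs(lhs - rhs) < 1e-10
```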
+
+**Remark 4.5.** This example can be developed further along the lines of Beilinson [7] and Deligne's approach to the dilogarithm. (An exposition of this, from the point of view of iterated integrals, can be found in [34].) Set
+
+$$G = \begin{pmatrix} 1 & \mathbb{C} & \mathbb{C} \\ 0 & 1 & \mathbb{C} \\ 0 & 0 & 1 \end{pmatrix} \quad \text{and} \quad F^0 G = \begin{pmatrix} 1 & \mathbb{C} & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}.$$
+
+The Lie algebra $\mathfrak{g}$ of $G$ is the Lie algebra of nilpotent upper triangular matrices. Let
+
+$$w = \begin{pmatrix} 0 & d\bar{z} & \xi \\ 0 & 0 & dz \\ 0 & 0 & 0 \end{pmatrix} \in E^1(E - \{0\}) \otimes \mathfrak{g}.$$
+
+⁴In the language of Beilinson and Levin [8], $\int dz$ is the elliptic logarithm for $E$, and $\log \theta$ is the elliptic dilogarithm.
+---PAGE_BREAK---
+
+This form is integrable: $dw + w \wedge w = 0$. It follows that the iterated integral
+
+$$T = \begin{pmatrix} 1 & \int d\bar{z} & \int d\bar{z} dz + \int \xi \\ 0 & 1 & \int dz \\ 0 & 0 & 1 \end{pmatrix}$$
+
+is relatively closed. (See [15] or [33].) It therefore defines a homomorphism
+
+$$\theta : \pi_1(E - \{0\}, x_o) \to G, \quad \gamma \mapsto \langle T, \gamma \rangle$$
+
+which is the monodromy representation of the flat connection on $(E - \{0\}) \times G$
+defined by $w$. There is thus a generalized Abel-Jacobi mapping
+
+$$\nu : E - \{0\} \to \Gamma\backslash G/F^0G$$
+
+that takes $x$ to $\langle T, \gamma \rangle$, where $\gamma$ is any path in $E - \{0\}$ from $x_o$ to $x$. It is holomorphic as can be seen directly using the formulas above.
+
+Denote the center
+
+$$\begin{pmatrix} 1 & 0 & \mathbb{C} \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}$$
+
+of $G$ by $Z$. The quotient $\Gamma\backslash G/(F^0G \cdot Z)$ is naturally isomorphic to $E$ itself and the
+corresponding projection
+
+$$\Gamma\backslash G/F^0G \to E$$
+
+is a holomorphic $\mathbb{C}^*$-bundle. The formulas in the previous example show that the section $\nu$ above is a non-zero multiple of the section $\theta$ of the line bundle $L \to E$ associated to the divisor $0 \in E$.
+
+This has an interpretation in terms of variations of MHS, which can be extracted
+from [39]. The construction given here of the second Albanese mapping is a special
+case of the direct construction given in [32]. There is an analogous construction,
+with a similar interpretation, where the pair $(E, 0)$ is replaced by an abelian variety
+and its theta divisor $(A, \Theta)$.
+
+# 5. HARMONIC VOLUME
+
+Bruno Harris [40] was the first to explicitly combine Hodge theory and (non-abelian)
+iterated integrals to obtain periods of algebraic cycles. Suppose that $C$ is
+a compact Riemann surface of genus 3 or more. Choose any base point $x_o \in C$.
+Suppose that $L_1, L_2, L_3$ are three disjoint, non-separating simple closed curves on
+$C$. Let $\phi_j$ be the harmonic representative of the Poincaré dual of $L_j$, $j = 1, 2, 3$.
+Since the curves are pairwise disjoint, the product of any two of the $\phi_j$ vanishes
+in cohomology. Thus there are 1-forms $\phi_{jk}$ such that
+
+$$d\phi_{jk} + \phi_j \wedge \phi_k = 0$$
+
+and $\phi_{jk}$ is orthogonal to the $d$-closed forms. These two conditions characterize the
+$\phi_{jk}$. By Proposition 4.1, the iterated line integral
+
+$$\int \phi_j \phi_k + \phi_{jk}$$
+---PAGE_BREAK---
+
+is closed in $\mathrm{Ch}^{\bullet}(P_{x_o,x_o}C)$.
+
+Choose loops $\gamma_j$ ($j = 1, 2, 3$) based at $x_o$ that are freely homotopic to the $L_j$. Harris sets
+
+$$I(L_1, L_2, L_3) = \int_{\gamma_3} \phi_1 \phi_2 + \phi_{12}.$$
+
+He shows that this integral is independent of the choice of the base point $x_o$ and
+that
+
+$$I(L_{\sigma(1)}, L_{\sigma(2)}, L_{\sigma(3)}) = \operatorname{sgn}(\sigma) I(L_1, L_2, L_3)$$
+
+for all permutations $\sigma$ of $\{1, 2, 3\}$.
+
+One can also use the $\phi_j$ to imbed $C$ into the three torus $T = \mathbb{R}^3/\mathbb{Z}^3$. Define
+$\Phi: C \to T$ by
+
+$$\Phi(x) = \int_{x_o}^{x} (\phi_1, \phi_2, \phi_3) \quad \mod \mathbb{Z}^3.$$
+
+If the coordinates in $\mathbb{R}^3$ are $(z_1, z_2, z_3)$, then $\Phi^*dz_j = \phi_j$ for $j = 1, 2, 3$. Since $H^2(T, \mathbb{Z})$ is spanned by the $dz_j \wedge dz_k$ and since
+
+$$\int_{\Phi_* C} dz_j \wedge dz_k = \int_C \phi_j \wedge \phi_k = 0,$$
+
+the image of $C$ is homologous to zero in $T$. One can therefore find a 3-chain $\Gamma$ in $T$ such that $\partial\Gamma = \Phi_*C$. Since $\Gamma$ is only well defined up to a 3-cycle, the volume of $\Gamma$ is only well defined mod $\mathbb{Z}$. Harris's first main result is:
+
+**Theorem 5.1.** The volume of $\Gamma$ is congruent to $I(L_1, L_2, L_3)$ mod $\mathbb{Z}$.
+
+By an elementary computation, the span in $\Lambda^3 H_1(C, \mathbb{Z})$ of classes $L_1 \wedge L_2 \wedge L_3$
+is the kernel $K$ of the mapping
+
+$$\Lambda^3 H_1(C, \mathbb{Z}) \to H_1(C, \mathbb{Z})$$
+
+defined by
+
+$$a \wedge b \wedge c \mapsto (a \cdot b)c + (b \cdot c)a + (c \cdot a)b.$$
+
+The harmonic volume $I$ thus determines a point in the compact torus $\operatorname{Hom}(K, \mathbb{R}/\mathbb{Z})$.
+
+There is a lot more to this story — it has a deep relationship to the algebraic cycle $C_x - C_x^-$ in the jacobian of $C$. This is best explained in terms of the Hodge theory of the operator $\bar{\partial}$ rather than $d$. This shall be sketched in Section 9.
+
+# 6. ITERATED INTEGRALS OF CURRENTS
+
+There is no rigorous theory of iterated integrals of currents, although such a theory would be useful provided it is not too technical. The theory of iterated integrals makes essential use of the algebra structure of the de Rham complex. The problem one encounters when trying to develop a theory of iterated integrals of currents is that products of currents are only defined when the currents being multiplied (intersected) are sufficiently smooth (or sufficiently transverse). Nonetheless, this point of view is useful, even if it is not rigorous. The paper [28] was an attempt at making these ideas rigorous and using them to study links.
+
+**Example 6.1.** In this example $X$ is the unit interval. Suppose that $a_1, a_2, \dots, a_r$ are distinct points in the interior of the unit interval. Set $w_j = \delta(t - a_j)dt$, where $\delta(t)$ denotes the Dirac delta function supported at $t = 0$. Let $\gamma : [0, 1] \to X = [0, 1]$ be the identity path. Recall that $\Delta^r$ is the time ordered simplex
+
+$$\Delta^r = \{(t_1, t_2, \ldots, t_r) : 0 \le t_1 \le \cdots \le t_r \le 1\}.$$
+---PAGE_BREAK---
+
+By definition,
+
+$$
+\begin{align*}
+\int_{\gamma} w_1 w_2 \dots w_r &= \int_{\Delta^r} \delta(t_1 - a_1) \delta(t_2 - a_2) \dots \delta(t_r - a_r) dt_1 dt_2 \dots dt_r \\
+&= \int_{\Delta^r} \delta_{(a_1, \dots, a_r)} (t_1, \dots, t_r) dt_1 dt_2 \dots dt_r.
+\end{align*}
+$$
+
+Since the $a_j$ are distinct numbers satisfying $0 < a_j < 1$,
+
+$$
+\int_{\gamma} w_1 w_2 \dots w_r = \begin{cases} 1 & \text{if } a_1 < a_2 < \dots < a_r, \\ 0 & \text{otherwise.} \end{cases}
+$$
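As a sanity check, this computation can be reproduced numerically by mollifying the deltas. In the sketch below (the width `eps` and the helper names are this note's choices, not from the text), each $\delta(t - a_j)\,dt$ becomes a narrow Gaussian and the integral over $\Delta^r$ is accumulated one time variable at a time.

```python
import numpy as np

def bump(t, a, eps=0.01):
    # Narrow Gaussian standing in for the Dirac delta at t = a.
    return np.exp(-0.5 * ((t - a) / eps) ** 2) / (eps * np.sqrt(2.0 * np.pi))

def iterated_integral(a_list, n=20001, eps=0.01):
    # F(t) starts as the constant 1; after the j-th pass it equals the
    # iterated integral of the first j bumps over 0 <= t_1 <= ... <= t_j <= t.
    t = np.linspace(0.0, 1.0, n)
    dt = t[1] - t[0]
    F = np.ones_like(t)
    for a in a_list:
        # F_new(t) = int_0^t bump(s, a) F(s) ds, via a cumulative Riemann sum
        F = np.cumsum(bump(t, a) * F) * dt
    return F[-1]

print(iterated_integral([0.3, 0.7]))  # close to 1: the a_j in increasing order
print(iterated_integral([0.7, 0.3]))  # close to 0: wrong order
```

The successive `cumsum` passes implement the nested integrals over the time-ordered simplex; as `eps` shrinks, the value tends to the 0/1 rule above.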
+
+**Example 6.2.** More generally, suppose that $H_1, \ldots, H_r$ are real hypersurfaces in a manifold $X$, each with oriented (and thus trivial) normal bundle. Suppose that $\gamma \in PX$ is transverse to the union of the $H_j$ — that is, the endpoints of $\gamma$ do not lie in the union of the $H_j$ and $\gamma$ does not pass through any singularity of their union. We can regard each $H_j$ as a current, which we shall denote by $w_j$. For such a path $\gamma$ which is transverse to $H_j$,
+
+$$
+\int_{\gamma} w_j = (H_j \cdot \gamma) := \text{the intersection number of } H_j \text{ with } \gamma.
+$$
+
+For simplicity, suppose that $\gamma$ passes through each $H_j$ at most once, at time $t = a_j$,
+say. Then
+
+$$
+\gamma^* w_j = \epsilon_j \, \delta(t - a_j) \, dt
+$$
+
+where $\epsilon_j$ is 1 if $\gamma$ passes through $H_j$ positively at time $a_j$, and $-1$ if it passes
+through negatively. By the previous example,
+
+$$
+\int_{\gamma} w_1 w_2 \dots w_r = \int_{0}^{1} \gamma^* w_1 \dots \gamma^* w_r = \begin{cases} \epsilon_1 \epsilon_2 \dots \epsilon_r & \text{if } a_1 < a_2 < \dots < a_r, \\ 0 & \text{otherwise.} \end{cases}
+$$
+
+This formula can be used to give heuristic proofs of many basic properties of iterated line integrals, such as the shuffle product formula, the antipode, the co-product and the differential. For example, suppose that $w_1, \dots, w_r$ are 1-currents corresponding to oriented lines in the plane and that $\alpha$ and $\beta$ are composable paths that are transverse to the union of the supports of the $w_j$. (See Figure 1.)
+
+FIGURE 1. Pointwise product of iterated integrals
+---PAGE_BREAK---
+
+Note that $\int w_1 \dots w_r$ is non-zero on $\alpha\beta$ if and only if there is an $i$ such that $\alpha$ passes through $w_1, \dots, w_i$ in order and $\beta$ passes through $w_{i+1}, \dots, w_r$ in order. In this case
+
+$$
+\begin{aligned}
+\int_{\alpha\beta} w_1 \dots w_r &= \int_{\alpha} w_1 \dots w_i \int_{\beta} w_{i+1} \dots w_r \\
+&= \sum_{j=0}^{r} \int_{\alpha} w_1 \dots w_j \int_{\beta} w_{j+1} \dots w_r
+\end{aligned}
+ $$
+
+as all the terms in the sum are zero except when $j = i$.
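The crossing-sequence picture also lends itself to a small combinatorial calculator. In the sketch below (the helper `itint` and the crossing encoding are this note's inventions), a transverse path is recorded as its time-ordered list of crossings `(current index, sign)`; the iterated integral is then the signed count of order-preserving matchings of the word into that list, and the composition formula above can be checked directly.

```python
def itint(crossings, word):
    """Iterated integral of 1-currents along a path, computed from its
    time-ordered transverse crossing data [(current index, sign), ...]:
    a signed count of order-preserving matchings of `word` into the list."""
    total = [0]
    def rec(w_pos, c_pos, sign):
        if w_pos == len(word):
            total[0] += sign
            return
        for k in range(c_pos, len(crossings)):
            idx, s = crossings[k]
            if idx == word[w_pos]:
                rec(w_pos + 1, k + 1, sign * s)
    rec(0, 0, 1)
    return total[0]

# alpha crosses H_1 positively then H_2 negatively; beta crosses H_3 positively
alpha = [(1, +1), (2, -1)]
beta = [(3, +1)]
word = (1, 2, 3)

lhs = itint(alpha + beta, word)   # integral over the composed path
rhs = sum(itint(alpha, word[:j]) * itint(beta, word[j:])
          for j in range(len(word) + 1))
assert lhs == rhs == -1   # only the j = 2 term of the sum survives
```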
+
+Examples using higher iterated integrals also exist. The simplest I know of is a proof of the formula for the Hopf invariant of a mapping $f : S^{4n-1} \to S^{2n}$. It is a nice exercise, using the definition of iterated integrals, to show directly that if $f : S^{4n-1} \to S^{2n}$ is smooth and $p$ and $q$ are distinct regular values of $f$, then
+
+$$ \langle \int \delta_p \delta_q, f \rangle $$
+
+is the linking number of $f^{-1}(p)$ and $f^{-1}(q)$ in $S^{4n-1}$. Here $\delta_x$ denotes the $2n$-current associated to $x \in S^{2n}$. This formula is equivalent to J.H.C. Whitehead's integral formula for the Hopf invariant [56] and Chen's version of it [15, p. 848].
+
+**6.1. First steps.** Suppose that $\Gamma_1, \dots, \Gamma_r$ are closed submanifolds of $X$ (possibly with boundary), where $\Gamma_j$ has codimension $d_j$. Denote the $d_j$-current determined by $\Gamma_j$ by $\delta_j$. Suppose that $N$ is a compact manifold and that $\alpha : N \to PX$ is smooth. We shall say that $\alpha$ is transverse to $\int \delta_1 \dots \delta_r$ if the mapping
+
+$$ \tilde{\alpha} : \Delta^r \times N \to X^r, \quad ((t_1, \dots, t_r), u) \mapsto (\alpha(u)(t_1), \dots, \alpha(u)(t_r)) $$
+
+is transverse to the submanifold $\Gamma := \Gamma_1 \times \dots \times \Gamma_r$ of $X^r$. That is, the restriction of $\tilde{\alpha}$ to each stratum of $\Delta^r \times N$ is transverse to each boundary stratum of $\Gamma$.
+
+If $N$ has dimension $-r+d_1+\dots+d_r$ and $\tilde{\alpha}$ is transverse to $\Gamma$, then we can evaluate the iterated integral
+
+$$ \int \delta_1 \dots \delta_r $$
+
+on $\alpha$. This transversality condition is satisfied in each of the examples above.
+
+## 7. THE REDUCED BAR CONSTRUCTION
+
+Chen discovered that the iterated integrals on a smooth manifold have a purely algebraic description [15, 16]. This algebraic description is an important technical tool: it allows the computation of the various spectral sequences one obtains from iterated integrals, it has applications to Hodge theory, and it facilitates the algebraic de Rham theory of iterated integrals for varieties over arbitrary algebraically closed fields. (Cf. Section 13.) This algebraic description is expressed in terms of the *reduced bar construction*, a variant of the more standard bar construction [24], which is dual to Adams' cobar construction [2]. Chen's version has the useful property that it generates no elements of negative degree when applied to a non-negatively graded dga with elements of degree zero, unlike the standard version of the bar construction.
+
+In this section, we use Chen's conventions for iterated integrals. In particular, our description of the reduced bar construction gives a precise formula for the exterior derivative of iterated integrals.
+---PAGE_BREAK---
+
+Suppose that $A^{\bullet}$ is a differential graded algebra (hereafter denoted dga) and that
+$M^{\bullet}$ and $N^{\bullet}$ are complexes which are modules over $A^{\bullet}$. That is, the structure maps
+
+$$A^{\bullet} \otimes M^{\bullet} \to M^{\bullet} \text{ and } A^{\bullet} \otimes N^{\bullet} \to N^{\bullet}$$
+
+are chain maps. We shall suppose that $A^\bullet$, $M^\bullet$ and $N^\bullet$ are all non-negatively graded. Denote the subcomplex of $A^\bullet$ consisting of elements of positive degree by $A^{>0}$.
+
+The (reduced) bar construction $B(M, A^{\bullet}, N)$ is defined as follows. The underlying
+graded vector space is a quotient of the graded vector space
+
+$$T(M^{\bullet}, A^{\bullet}, N^{\bullet}) := \bigoplus_{r \ge 0} M^{\bullet} \otimes \left(A^{>0}[1]\right)^{\otimes r} \otimes N^{\bullet}.$$
+
+Following convention $m \otimes a_1 \otimes \dots \otimes a_r \otimes n \in T(M^{\bullet}, A^{\bullet}, N^{\bullet})$ will be denoted by
+$m[a_1 | \dots | a_r]n$. To obtain the vector space underlying the bar construction, we
+impose the relations
+
+$$
+\begin{align*}
+m[dg|a_1|\dots|a_r]n &= m[ga_1|\dots|a_r]n - m \cdot g[a_1|\dots|a_r]n; \\
+m[a_1|\dots|a_i|dg|a_{i+1}|\dots|a_r]n &= m[a_1|\dots|a_i g|a_{i+1}|\dots|a_r]n \\
+&\qquad - m[a_1|\dots|a_i|ga_{i+1}|\dots|a_r]n, \quad 1 \le i < r; \\
+m[a_1|\dots|a_r|dg]n &= m[a_1|\dots|a_r]g \cdot n - m[a_1|\dots|a_r g]n; \\
+m[dg]n &= m \otimes g \cdot n - m \cdot g \otimes n
+\end{align*}
+$$
+
+Here each $a_i \in A^{>0}$, $g \in A^0$, $m \in M^{\bullet}$, $n \in N^{\bullet}$, and $r$ is a positive integer.
+
+Define an endomorphism $J$ of each graded vector space by $J : v \mapsto (-1)^{\deg v} v$.
+The differential is defined as
+
+$$d = d_M \otimes 1_T \otimes 1_N + J_M \otimes d_B \otimes 1_N + J_M \otimes J_T \otimes d_N + d_C.$$
+
+Here $T$ denotes the tensor algebra on $A^{>0}[1]$, and $d_B$ is defined by
+
+$$
+\begin{equation}
+\begin{split}
+(4) \quad d_B[a_1 | \dots | a_r] = & \sum_{1 \le i \le r} (-1)^i [Ja_1 | \dots | Ja_{i-1} | da_i | a_{i+1} | \dots | a_r] \\
+& + \sum_{1 \le i < r} (-1)^{i+1} [Ja_1 | \dots | Ja_{i-1} | Ja_i \wedge a_{i+1} | a_{i+2} | \dots | a_r]
+\end{split}
+\end{equation}
+$$
+
+and $d_C$ is defined by
+
+$$d_C m[a_1 | \dots | a_r] n = (-1)^r J m[Ja_1 | \dots | Ja_{r-1}] a_r \cdot n - J m \cdot a_1 [a_2 | \dots | a_r] n.$$
+
+The reduced bar construction $B(M^{\bullet}, A^{\bullet}, N^{\bullet})$ has a standard filtration
+
+$$M^{\bullet} \otimes N^{\bullet} = B_0(M^{\bullet}, A^{\bullet}, N^{\bullet}) \subseteq B_1(M^{\bullet}, A^{\bullet}, N^{\bullet}) \subseteq B_2(M^{\bullet}, A^{\bullet}, N^{\bullet}) \subseteq \cdots$$
+
+by subcomplexes, which is called the *bar filtration*. The subspace
+
+$$B_s(M^{\bullet}, A^{\bullet}, N^{\bullet})$$
+
+is defined to be the span of those $m[a_1|...|a_r]n$ with $r \le s$. When $A^\bullet$ has connected homology (i.e., $H^0(A^\bullet) = \mathbb{R}$), the corresponding (second quadrant) spectral sequence, which is called the *Eilenberg-Moore spectral sequence (EMss)*, has $E_1$ term
+
+$$E_1^{-s,t} = [M^{\bullet} \otimes H^{>0}(A^{\bullet})^{\otimes s} \otimes N^{\bullet}]^t .$$
+
+A proof can be found in [16]. This computation has the following useful consequence:
+---PAGE_BREAK---
+
+**Lemma 7.1.** Suppose that $A_j^\bullet$ is a dga and $M_j^\bullet$ and $N_j^\bullet$ are right and left $A_j^\bullet$-modules, where $j=1,2$. Suppose that $f_A : A_1^\bullet \to A_2^\bullet$ is a dga homomorphism and
+
+$f_M : M_1^\bullet \to M_2^\bullet \text{ and } f_N : N_1^\bullet \to N_2^\bullet$
+
+are chain maps compatible with the module actions via $f_A$. If $f_A$, $f_M$ and $f_N$ induce isomorphisms on homology, then so do the induced mappings
+
+$$B_s(M_1^{\bullet}, A_1^{\bullet}, N_1^{\bullet}) \to B_s(M_2^{\bullet}, A_2^{\bullet}, N_2^{\bullet}) \text{ and } B(M_1^{\bullet}, A_1^{\bullet}, N_1^{\bullet}) \to B(M_2^{\bullet}, A_2^{\bullet}, N_2^{\bullet}).$$
+
+When $A^\bullet$, $M^\bullet$ and $N^\bullet$ are commutative dgas (in the graded sense), and when the $A^\bullet$-module structures on $M^\bullet$ and $N^\bullet$ are determined by dga homomorphisms $A^\bullet \to M^\bullet$ and $A^\bullet \to N^\bullet$, $B(M^\bullet, A^\bullet, N^\bullet)$ is also a commutative dga. The multiplication is given by the shuffle product:
+
+$$
+\begin{align*}
+& (m'[a_1| \dots |a_r]n') \wedge (m''[a_{r+1}| \dots |a_{r+s}]n'') \\
+& \qquad = \sum_{sh(r,s)} \pm (m' \wedge m'') [a_{\sigma(1)}| \dots | a_{\sigma(r+s)}](n' \wedge n'').
+\end{align*}
+$$
+
+It is important to note that the shuffle product does not commute with the differential when $A^\bullet$ is not commutative.
+
+Many complexes of iterated integrals may be described in terms of reduced bar constructions of suitable triples. Here we give just one example — the iterated integrals on $P_{x,y}X$. A more complete list of such descriptions can be found in [15] and [30, §2].
+
+Suppose that $X$ is a manifold and that $x_0$ and $x_1$ are points of $X$. Evaluating at $x_j$, we obtain an augmentation $\epsilon_j : E^\bullet(X) \to \mathbb{R}$ for $j = 0,1$. Suppose that $A^\bullet$ is a sub dga of $E^\bullet(X)$ and that both augmentations restrict to non-trivial homomorphisms $\epsilon_j : A^\bullet \to \mathbb{R}$. We can take $M^\bullet$ and $N^\bullet$ both to be $\mathbb{R}$, where the action is given by $\epsilon_0$ and $\epsilon_1$, respectively. Now form the corresponding bar construction $B(\mathbb{R}, A^\bullet, \mathbb{R})$.
+
+Define $\mathrm{Ch}^\bullet(P_{x_0,x_1} X; A^\bullet)$ to be the subcomplex of $\mathrm{Ch}^\bullet(P_{x_0,x_1} X)$ spanned by those iterated integrals $\int w_1 \dots w_r$ where each $w_j \in A^\bullet$.
+
+**Theorem 7.2.** Suppose that $X$ is connected. If $H^0(A^\bullet) \cong \mathbb{R}$ and the natural map $H^1(A^\bullet) \to H^1(X; \mathbb{R})$ induced by the inclusion of $A^\bullet$ into $E^\bullet(X)$ is injective, then the natural mapping
+
+$$B(\mathbb{R}, A^{\bullet}, \mathbb{R}) \to \mathrm{Ch}^{\bullet}(P_{x_0, x_1} X; A^{\bullet}), \quad [w_1 | \dots | w_r] \mapsto \int w_1 \dots w_r$$
+
+is a well defined isomorphism of differential graded algebras.
+
+This and Adams’ work [2] are the basic ingredients in the proof of the loop space de Rham theorem, Theorem 3.1. The previous result has many useful consequences, such as:
+
+**Corollary 7.3.** If $X$ is connected and $A^\bullet$ is a sub dga of $E^\bullet(X)$ for which the inclusion $A^\bullet \hookrightarrow E^\bullet(X)$ induces an isomorphism on homology, then the inclusion
+
+$$\mathrm{Ch}^{\bullet}(P_{x,y}X; A^{\bullet}) \hookrightarrow \mathrm{Ch}^{\bullet}(P_{x,y}X)$$
+
+induces an isomorphism on homology.
+---PAGE_BREAK---
+
+This is proved using the previous two results. It has many uses, such as in the
+next example, where it simplifies computations, and in Hodge theory, where one
+takes $A^{\bullet}$ to be the subcomplex of $C^{\infty}$ logarithmic forms when $X$ is the complement
+of a normal crossings divisor in a complex projective algebraic manifold.
+
+**Example 7.4.** A nice application of the results so far is to compute the loop space cohomology $H^*(P_{x,x}S^n; \mathbb{R})$ and real homotopy groups $\pi_\bullet(S^n, x) \otimes \mathbb{R}$ of the $n$-sphere ($n \ge 2$). This computation is classical.
+
+The first thing to do is to replace the de Rham complex of $S^n$ by a sub dga $A^\bullet$
+which is as small as possible, but which computes the cohomology of the sphere.
+To do this, choose an $n$-form $w$ whose integral over $S^n$ is 1 and take $A^\bullet$ to be
+the dga consisting of the constant functions and the constant multiples of $w$. By
+Corollary 7.3, the iterated integrals constructed from elements of $A^\bullet$ compute the
+cohomology of $P_{x,x}S^n$. But these are all linear combinations of
+
+$$
+\theta_m := \int \overbrace{w \dots w}^{m}, \quad m \ge 0.
+$$
+
+Each of these is closed, and no linear combination of them is exact. It follows that
+
+$$
+H^j(P_{x,x}S^n; \mathbb{R}) \cong \begin{cases} \mathbb{R} & j = m(n-1); \\ 0 & \text{otherwise.} \end{cases}
+$$
+
+The ring structure is also easily determined using the shuffle product formula (2).
+When $n$ is odd, we have $\theta_1^m = m!\theta_m$; when $n$ is even
+
+$$
+\theta_1 \wedge \theta_{2m} = \theta_{2m+1}, \quad \theta_1 \wedge \theta_{2m+1} = 0, \text{ and } \theta_2 \wedge \theta_{2m} = (m+1)\theta_{2m+2}.
+$$
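These coefficients are signed shuffle counts. When $n$ is even, each $[w]$ has odd degree $n - 1$, so an $(r,s)$-shuffle contributes $(-1)^{\#\text{inversions}}$; when $n$ is odd the count is the plain binomial coefficient, which gives $\theta_1^m = m!\,\theta_m$. A minimal sketch (the helper `shuffle_count` is this note's invention) tabulates the counts:

```python
from itertools import combinations
from math import comb

def shuffle_count(r, s, odd_degree=True):
    """Sum of Koszul signs over all (r,s)-shuffles: with all letters of odd
    degree each inversion contributes -1; with even degrees every sign is +1."""
    if not odd_degree:
        return comb(r + s, r)  # plain binomial: theta_1^m = m! theta_m follows
    total = 0
    for pos in combinations(range(r + s), r):
        # inversions = second-word letters jumped over by the first word
        inv = sum(p - i for i, p in enumerate(pos))
        total += (-1) ** inv
    return total

# n even: theta_1 theta_2 = theta_3, theta_1 theta_1 = 0, theta_2 theta_2 = 2 theta_4
assert shuffle_count(1, 2) == 1
assert shuffle_count(1, 1) == 0
assert shuffle_count(2, 2) == 2
```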
+
+Applying Theorem 3.2 we have:
+
+$$
+\pi_j(S^n, x) \otimes \mathbb{R} =
+\begin{cases}
+\mathbb{R} & j = n; \\
+\mathbb{R} & j = 2n - 1 \text{ and } n \text{ even}; \\
+0 & \text{otherwise.}
+\end{cases}
+\qquad \square
+$$
+
+**Example 7.5.** This example illustrates the limits of the ability of iterated integrals to compute homotopy groups.⁵ The main point is that there are continuous maps $f: X \to Y$ between spaces that induce an isomorphism on cohomology, but not on homotopy. Properties of the bar construction (cf. Lemma 7.1) imply that for such $f$ the mapping
+
+$$
+f^* : H^\bullet(\mathrm{Ch}^\bullet(P_{f(x), f(x)}Y)) \to H^\bullet(\mathrm{Ch}^\bullet(P_{x,x}X))
+$$
+
+is an isomorphism.
+
+The prototype of such a map is the mapping $X \to X^+$ from a connected topological space $X$ whose fundamental group $\pi_1(X, x)$ has perfect commutator subgroup to its Quillen plus construction $X^+$.
+
+⁵Minimal models do no better or worse. If (X, x) is a pointed topological space with minimal model $\mathcal{M}_X^\bullet$, there is a canonical Lie coalgebra isomorphism $\mathcal{Q}\mathcal{M}_X^\bullet \cong H^\bullet(\mathrm{Ch}^\bullet(P_{x,x}X))$. This follows from [17, §3].
+---PAGE_BREAK---
+
+By a standard trick, one can extend de Rham theory (and hence iterated integrals) to arbitrary topological spaces.⁶ In this setting, one can take a perfect group $\Gamma$ and consider the mapping
+
+$$ \phi : B\Gamma \to B\Gamma^+ $$
+
+from the classifying space of $\Gamma$ to its plus construction. This mapping induces an isomorphism on homology, and therefore a quasi-isomorphism
+
+$$ \phi^* : E^\bullet(B\Gamma^+) \to E^\bullet(B\Gamma). $$
+
+This induces, by Lemma 7.1, an isomorphism
+
+$$ H^\bullet(\mathrm{Ch}^\bullet(P_{x,x}B\Gamma^+)) \to H^\bullet(\mathrm{Ch}^\bullet(P_{x,x}B\Gamma)). $$
+
+Since the universal covering of $B\Gamma$ is contractible, $P_{x,x}B\Gamma$ is a disjoint union of contractible sets indexed by the elements of $\Gamma$. On the other hand, $B\Gamma^+$ is a simply connected $H$-space, so the loop space de Rham theorem holds for it. It follows that
+
+$$ QH^j(\mathrm{Ch}^\bullet(P_{x,x}B\Gamma)) \cong \operatorname{Hom}(\pi_{j+1}(B\Gamma^+, x), \mathbb{R}). $$
+
+In particular, take $\Gamma = SL(\mathbb{Z})$, a perfect group. From Borel's work [10], we know that
+
+$$ \pi_j(BSL(\mathbb{Z})^+, x) \otimes \mathbb{R} \cong \begin{cases} \mathbb{R} & j \equiv 1 \bmod 4,\ j \ge 5; \\ 0 & \text{otherwise.} \end{cases} $$
+
+For those who would prefer an example with manifolds, one can approximate $BSL(\mathbb{Z})$ by a finite skeleton of $BSL_n(\mathbb{Z})$ for some $n \ge 3$ or take $\Gamma$ to be a mapping class group in genus $g \ge 3$.
+
+**7.1. An integral version.** Suppose that $X$ is a topological space and that $R$ is a ring. Each point $x$ of $X$ induces an augmentation $\epsilon_x : S^\bullet(X; R) \to R$ on the $R$-valued singular cochain complex of $X$. If $x, y \in X$, we have augmentations
+
+$$ \epsilon_x : S^\bullet(X; R) \to R \quad \text{and} \quad \epsilon_y : S^\bullet(X; R) \to R, $$
+
+which give $R$ two structures as a module over the singular cochains. We can thus form the reduced bar construction $B(R, S^\bullet(X; R), R)$.
+
+The following result, which will be further elaborated in Section 14 and is proved using Adams' cobar construction, is needed to put an integral structure on the cohomology of $\mathrm{Ch}_s^\bullet(P_{x,y}X)$, regardless of whether $X$ is simply connected or not.
+
+**Proposition 7.6 (Chen [15]).** For all $s \ge 0$, there are canonical isomorphisms
+
+$$ H^\bullet(B_s(\mathbb{Z}, S^\bullet(X; \mathbb{Z}), \mathbb{Z})) \otimes_{\mathbb{Z}} \mathbb{R} \cong H^\bullet(B_s(\mathbb{R}, S^\bullet(X; \mathbb{R}), \mathbb{R})) \cong H^\bullet(\mathrm{Ch}_s^\bullet(P_{x,y}X)). $$
+
+It is very important to note that the naïve mapping
+
+$$ B(I) : B(\mathbb{R}, E^\bullet(X), \mathbb{R}) \to B(\mathbb{R}, S^\bullet(X; \mathbb{R}), \mathbb{R}), \quad [w_1|...|w_r] \mapsto [I(w_1)|...|I(w_r)] $$
+
+induced by the integration mapping $I: E^\bullet(X) \to S^\bullet(X; \mathbb{R})$, is not a chain mapping, because $I$ is not an algebra homomorphism (except in trivial cases).
+
+⁶Basically, one replaces a space by the simplicial set consisting of its singular chains. This is canonically weak homotopy equivalent to the original space. One then can work with the Thom-Whitney de Rham complex of this simplicial set. It computes the cohomology of the space and is functorial under continuous maps.
+---PAGE_BREAK---
+
+## 8. EXACT SEQUENCES
+
+The algebraic description of iterated integrals gives rise to several exact sequences useful in topology and Hodge theory. We shall concentrate on iterated integrals of length $\le 2$ as this is the first interesting case — $H^k(I\mathcal{Ch}_1^\bullet(P_{x,x}X))$ is just $H^{k+1}(X; \mathbb{R})$.
+
+**Lemma 8.1.** If $X$ is a connected manifold, then the sequence
+
+$$0 \to QH^{2d-1}(X;\mathbb{R}) \to H^{2d-2}(I\mathcal{Ch}_2^\bullet(P_{x,x}X)) \to [H^{>0}(X;\mathbb{R})^{\otimes 2}]^{2d} \\ \xrightarrow{\text{cup}} H^{2d}(X;\mathbb{R}) \to QH^{2d}(X;\mathbb{R}) \to 0$$
+
+is exact. This sequence has a natural $\mathbb{Z}$-form and exactness holds over $\mathbb{Z}$ as well.
+
+*Sketch of Proof.* By the algebraic description of iterated integrals given in the previous section, the sequence
+
+$$0 \to I\mathcal{Ch}_1^\bullet(P_{x,x}X) \to I\mathcal{Ch}_2^\bullet(P_{x,x}X) \to (E^{>0}(X)/dE^0(X))^{\otimes 2} \to 0$$
+
+is exact. This gives rise to a long exact sequence. The formula for the differential and the identification of $I\mathcal{Ch}_1^\bullet(P_{x,x}X)$ with $E^{>0}(X)/dE^0(X)$ imply that the connecting homomorphism is the cup product
+
+$$[H^{>0}(X;\mathbb{R})^{\otimes 2}]^k \to H^k(X;\mathbb{R}).$$
+
+The integrality statement follows from Prop. 7.6 using the integral version of the reduced bar construction. □
+
+Combining it with the de Rham Theorems yields the following two results. For the first, note that the function
+
+$$\pi_1(X, x) \to J/J^2, \quad \gamma \mapsto (\gamma - 1) + J^2$$
+
+is a homomorphism and induces an isomorphism
+
+$$H_1(X, \mathbb{Z}) \cong J/J^2.$$
+
+Here $J$ denotes the augmentation ideal of $\mathbb{Z}\pi_1(X, x)$.
+
+**Corollary 8.2.** For all connected manifolds $X$, the sequence
+
+$$0 \to H^1(X;\mathbb{Z}) \to \operatorname{Hom}(J/J^3,\mathbb{Z}) \xrightarrow{\psi} H^1(X;\mathbb{Z})^{\otimes 2} \xrightarrow{\text{cup}} H^2(X;\mathbb{Z})$$
+
+is exact. The mapping $\psi$ is dual to the multiplication mapping
+
+$$H_1(X;\mathbb{Z})^{\otimes 2} \cong (J/J^2)^{\otimes 2} \to J/J^3.$$
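Corollary 8.2 can be made concrete for the free group on two generators via the Magnus expansion. The sketch below (the helpers `mult`, `gen`, `magnus` are this note's inventions) works in the tensor algebra $\mathbb{Z}\langle X, Y\rangle$ truncated in degree 3, where the class of $\gamma - 1$ in $J/J^3$ is visible as the part of degree $\le 2$.

```python
def mult(a, b):
    # Multiply in Z<X, Y> truncated in degree 3; elements are dicts
    # mapping words (strings in "x", "y") to integer coefficients.
    out = {}
    for u, cu in a.items():
        for v, cv in b.items():
            if len(u) + len(v) <= 2:
                out[u + v] = out.get(u + v, 0) + cu * cv
    return {w: c for w, c in out.items() if c != 0}

def gen(letter, inverse=False):
    # x -> 1 + X; x^{-1} -> 1 - X + XX (the truncated inverse)
    if inverse:
        return {"": 1, letter: -1, letter * 2: 1}
    return {"": 1, letter: 1}

def magnus(word):
    # word: sequence of (letter, inverted?) pairs in the free group <x, y>
    out = {"": 1}
    for letter, inv in word:
        out = mult(out, gen(letter, inv))
    return out

# gamma = [x, y] = x y x^{-1} y^{-1}
comm = magnus([("x", False), ("y", False), ("x", True), ("y", True)])
assert comm == {"": 1, "xy": 1, "yx": -1}
```

The degree-1 part of the commutator vanishes (it dies in $J/J^2 \cong H_1$), while the degree-2 part $XY - YX$ survives in $J^2/J^3$; this is the pairing of $\operatorname{Hom}(J/J^3, \mathbb{Z})$ with $H^1(X;\mathbb{Z})^{\otimes 2}$ appearing in the corollary.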
+
+The analogue of this in the simply connected case is:
+
+**Corollary 8.3.** If $X$ is simply connected, then the sequences
+
+$$0 \to H^3(X;\mathbb{Q}) \to \operatorname{Hom}(\pi_3(X,x),\mathbb{Q}) \to S^2H^2(X;\mathbb{Q}) \xrightarrow{\text{cup}} H^4(X;\mathbb{Q})$$
+
+and
+
+$$0 \to H^3(X;\mathbb{Z}) \to H^2(P_{x,x}X;\mathbb{Z}) \to H^2(X;\mathbb{Z})^{\otimes 2} \xrightarrow{\text{cup}} H^4(X;\mathbb{Z})$$
+
+are exact.
+---PAGE_BREAK---
+
+## 9. HODGE THEORY
+
+Just as in the case of ordinary cohomology, Chen's de Rham theory is much more powerful when combined with Hodge theory, and is especially fertile when applied to problems in algebraic geometry. The Hodge theory of iterated integrals is best formalized in terms of Deligne's mixed Hodge theory. I will not review Deligne's theory here, but (at the peril of satisfying nobody) will attempt to present the ideas in a way that makes sense to both the novice and the expert. More details can be found in [30, 31, 33, 39].
+
+9.1. **The riemannian case.** In the classical case, the Hodge theorem asserts that every de Rham cohomology class on a compact riemannian manifold has a unique harmonic representative which depends, in general, on the metric. If $X$ is a compact riemannian manifold, then every element of $H^{\bullet}(Ch_s^{\bullet}(P_{x,y}X))$ has a natural representative, which I shall call “harmonic” even though I do not know if it is annihilated by any kind of laplacian on $P_{x,y}X$.
+
+This is illustrated in the case $s = 2$. Every closed iterated integral of length $\le 2$ is of the form
+
+$$ (5) \qquad \sum_{j,k} a_{jk} \int w_j w_k + \xi $$
+
+where
+
+$$ d\xi = \sum_{j,k} (-1)^{\deg w_j} a_{jk} w_j \wedge w_k. $$
+
+The iterated integral (5) is defined to be harmonic if each $w_j$ is harmonic and $\xi$ is co-closed (i.e., orthogonal to the closed forms). This definition generalizes to iterated integrals of arbitrary length. Classical harmonic theory on $X$ and the Eilenberg-Moore spectral sequence imply that every element of $H^{\bullet}(Ch^{\bullet}(P_{x,y}X))$ has a unique harmonic representative.
+
+Harris's work on harmonic volume (Section 5) is a particularly nice application of harmonic iterated integrals.
+
+9.2. **The Kähler case.** This naïve picture generalizes to the case when $X$ is compact Kähler. In classical Hodge theory, certain aspects of the Hodge Theorem, such as the Hodge decomposition of the cohomology, are independent of the metric. Similar statements hold for iterated integrals: specifically, $H^{\bullet}(Ch^{\bullet}(P_{x,y}X))$ has a natural mixed Hodge structure, the key ingredient of which is the *Hodge filtration*, whose definition we now recall.
+
+The Hodge filtration
+
+$$ E^{\bullet}(X)_{\mathbb{C}} = F^{0}E^{\bullet}(X) \supseteq F^{1}E^{\bullet}(X) \supseteq F^{2}E^{\bullet}(X) \supseteq \dots $$
+
+of the de Rham complex of a complex manifold is defined by
+
+$$ F^p E^\bullet(X) := \bigoplus_{r \ge p} E^{r,\bullet}(X) = \left\{ \begin{array}{l} \text{differential forms for which each term of} \\ \text{each local expression has at least } p \text{ dz's} \end{array} \right\} $$
+
+where $E^{p,q}(X)$ denotes the differential forms of type $(p, q)$ on $X$. Each $F^p E^\bullet(X)$ is closed under exterior differentiation.
+
+A fundamental consequence of the Hodge theorem for compact Kähler manifolds is the following:
+---PAGE_BREAK---
+
+**Proposition 9.1.** If $X$ is a compact Kähler manifold, then the mapping
+
+$$H^{\bullet}(F^p E^{\bullet}(X)) \to H^{\bullet}(X; \mathbb{C})$$
+
+is injective and has image
+
+$$F^p H^{m}(X) := \bigoplus_{\substack{s+t=m \\ s \ge p}} H^{s,t}(X).$$
+
+In other words, every class in $F^p H^{\bullet}(X)$ is represented by a closed form in $F^p E^{\bullet}(X)$, and if $w \in F^p E^{\bullet}(X)$ is exact in $E^{\bullet}(X)_{\mathbb{C}}$, then one can find $\psi \in F^p E^{\bullet}(X)$ such that $d\psi = w$.
+
+The Hodge filtration extends naturally to complex-valued iterated integrals: $F^p Ch_s^\bullet(P_{x,y}X)$ is the span of
+
+$$\int w_1 \dots w_r$$
+
+where $r \le s$ and $w_j \in F^{p_j} E^{\bullet}(X)$ with $p_1 + \dots + p_r \ge p$. The weight filtration is simply the filtration by length:
+
+$$W_m Ch^{\bullet}(P_{x,y}X) = Ch_m^{\bullet}(P_{x,y}X).$$
+
+The Hodge theory of iterated integrals for compact Kähler manifolds is summarized in the following result. A sketch of a proof can be found in [31] and a complete proof in [30].
+
+**Theorem 9.2.** If $X$ is a compact Kähler manifold, then $Ch^\bullet(P_{x,y}X)$, endowed with the Hodge and weight filtrations above, is a mixed Hodge complex. In particular:
+
+i. $H^\bullet(F^p Ch^\bullet(P_{x,y}X)) \to H^\bullet(Ch^\bullet(P_{x,y}X)_{\mathbb{C}})$ is injective;
+
+ii. $H^\bullet(Ch^\bullet(P_{x,y}X))$ has a natural mixed Hodge structure with Hodge and weight filtrations defined by
+
+$$F^p H^\bullet(Ch^\bullet(P_{x,y}X)) = H^\bullet(F^p Ch^\bullet(P_{x,y}X))$$
+
+and
+
+$$W_m H^k(Ch^\bullet(P_{x,y}X)) = \operatorname{im} \{ H^k(Ch_{m-k}^\bullet(P_{x,y}X)) \to H^k(Ch^\bullet(P_{x,y}X)) \}$$
+
+If $H^1(X; \mathbb{Q}) = 0$, this mixed Hodge structure is independent of the base point $x$.
+
+This theorem generalizes to all complex algebraic manifolds (using logarithmic forms) and to singular complex algebraic varieties (using simplicial methods). Details can be found in [30].
+
+**Corollary 9.3.** If $X$ is a complex algebraic manifold, then $H^{2d-2}(I Ch_2^\bullet(P_{x,x}X))$ has a canonical mixed Hodge structure defined over $\mathbb{Z}$ and the sequence
+
+$$0 \to QH^{2d-1}(X) \to H^{2d-2}(I Ch_2^\bullet(P_{x,x}X)) \to [H^{>0}(X)^{\otimes 2}]^{2d} \\ \xrightarrow{\text{cup}} H^{2d}(X) \to QH^{2d}(X) \to 0$$
+
+is exact in the category of $\mathbb{Z}$ mixed Hodge structures.
+
+The minimal model approach to the Hodge theory for complex algebraic manifolds was developed by Morgan in [48]. From the point of view of Hodge theory, iterated integrals have the advantage that they provide a rigid invariant on which to do Hodge theory, whereas the minimal model of a manifold is unique only up to a homotopy class of isomorphisms, which makes the task of putting a mixed Hodge structure on a minimal model more difficult. Chen's theory is also better suited to studying the non-trivial role of the base point $x \in X$ in the theory, which is
+---PAGE_BREAK---
+
+particularly important when studying the Hodge theory of the fundamental group. On the other hand, minimal models (and other non-rigid models) are an essential tool in understanding how Hodge theory restricts fundamental groups and homotopy types of complex algebraic varieties, as is illustrated by Morgan's remarkable examples in [48].
+
+## 10. APPLICATIONS TO ALGEBRAIC CYCLES
+
+Recall that a Hodge structure $H$ of weight $m$ consists of a finitely generated abelian group $H_{\mathbb{Z}}$ and a bigrading
+
+$$ H_{\mathbb{C}} = \bigoplus_{p+q=m} H^{p,q} $$
+
+of $H_{\mathbb{C}} = H_{\mathbb{Z}} \otimes \mathbb{C}$ by complex subspaces satisfying $H^{p,q} = \overline{H^{q,p}}$. The standard example of a Hodge structure of weight $m$ is the $m$th cohomology of a compact Kähler manifold. Its dual, $H_m(X)$, is a Hodge structure of weight $-m$.⁷
+
+For an integer $d$, the *Tate twist* $H(d)$ of $H$ is defined to be the Hodge structure with the same underlying lattice $H_{\mathbb{Z}}$ but whose bigrading has been reindexed:
+
+$$ H(d)^{p,q} = H^{p+d,q+d}. $$
+
+Equivalently, $H(d)$ is the tensor product of $H$ with the 1-dimensional Hodge structure $\mathbb{Z}(d)$ of weight $-2d$.
+
+The category of Hodge structures is abelian, and closed under tensor products and taking duals.
+
+10.1. **Intermediate jacobians and Griffiths' construction.** The $d$th intermediate jacobian of a compact Kähler manifold $X$ is defined by
+
+$$ J_d(X) := J(H_{2d+1}(X)(-d)) \cong \operatorname{Hom}(F^{d+1}H^{2d+1}(X), \mathbb{C})/H_{2d+1}(X; \mathbb{Z}). $$
+
+It is a compact, complex torus. For example, $J_0(X)$ is the albanese of $X$ and $J_{\dim X - 1}(X)$ is $\mathrm{Pic}^0 X$, the group of isomorphism classes of topologically trivial holomorphic line bundles over $X$.
+
+Suppose that $Z$ is an algebraic $d$-cycle in $X$ that is trivial in homology. We can write $Z$ as the boundary of a $(2d+1)$-chain $\Gamma$, which determines a point $\int_{\Gamma}$ in
+
+$$ \operatorname{Hom}(F^{d+1}H^{2d+1}(X), \mathbb{C})/H_{2d+1}(X; \mathbb{Z}) \cong J_d(X) $$
+
+by integration:
+
+$$ \int_{\Gamma} : [w] \mapsto \int_{\Gamma} w $$
+
+where $w \in F^{d+1}E^{2d+1}(X)$. This mapping is well defined by Stokes' Theorem, Proposition 9.1, and because $F^{d+1}E^{2d}(Z) = 0$.
+
+It is also convenient to define $J^d(X) = J_{n-d}(X)$, where $n$ is the complex dimension of $X$.
+
+⁷Just define $H_m(X)^{-p,-q}$ to be the dual of $H^{p,q}(X)$.
+---PAGE_BREAK---
+
+10.2. **Extensions of mixed Hodge structures.** In this paragraph, we review some elementary facts about extensions of mixed Hodge structures (MHS). Complete details can be found in [12]. Suppose that $A$ and $B$ are Hodge structures of weights $n$ and $m$, respectively, and that
+
+$$ (6) \qquad 0 \to B \to E \xrightarrow{\pi} A \to 0 $$
+
+is an exact sequence of mixed Hodge structures. In concrete terms, this means:
+
+i. there is an exact sequence
+
+$$ (7) \qquad 0 \to B_{\mathbb{Z}} \to E_{\mathbb{Z}} \xrightarrow{\pi} A_{\mathbb{Z}} \to 0; $$
+
+of finitely generated abelian groups;
+
+ii. $E_{\mathbb{C}} := E_{\mathbb{Z}} \otimes \mathbb{C}$ has a filtration $\cdots \supseteq F^p E \supseteq F^{p+1} E \supseteq \dots$ satisfying
+
+$$ B_{\mathbb{C}} \cap F^p E = \bigoplus_{s \ge p} B^{s,m-s} \quad \text{and} \quad \pi(F^p E) = \bigoplus_{s \ge p} A^{s,n-s}. $$
+
+When $A_{\mathbb{Z}}$ is torsion free, the extension (6) determines an element $\psi$ of the complex torus $J(\operatorname{Hom}(A, B))$. This is done as follows: by property (ii), there is a section $s_F : A_{\mathbb{C}} \to E_{\mathbb{C}}$ of $\pi$ that preserves the Hodge filtration; since $A_{\mathbb{Z}}$ is torsion free, there is an integral section $s_{\mathbb{Z}} : A_{\mathbb{Z}} \to E_{\mathbb{Z}}$ of $\pi$. The coset $\psi$ of $s_F - s_{\mathbb{Z}}$ in $J(\operatorname{Hom}(A, B))$ is independent of the choices of $s_F$ and $s_{\mathbb{Z}}$.
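As a sanity check of this recipe, here is the standard rank-two example (details as in [12]; the basis names $b$, $e$ are this note's choices): extensions of $A = \mathbb{Z}(0)$ by $B = \mathbb{Z}(1)$. Write $E_{\mathbb{Z}} = \mathbb{Z}b \oplus \mathbb{Z}e$ with $b$ generating $B_{\mathbb{Z}}$ and $\pi(e) = 1$. Since $F^0 B_{\mathbb{C}} = 0$, condition (ii) forces $F^0 E$ to be a line mapping isomorphically to $A_{\mathbb{C}}$, so $F^0 E = \mathbb{C}(e + \lambda b)$ for a single number $\lambda \in \mathbb{C}$. Then $s_F(1) = e + \lambda b$ and $s_{\mathbb{Z}}(1) = e + nb$ for some $n \in \mathbb{Z}$, so that

$$ \psi = (s_F - s_{\mathbb{Z}})(1) = (\lambda - n)\, b \in B_{\mathbb{C}}/B_{\mathbb{Z}} \cong \mathbb{C}/\mathbb{Z}(1) \xrightarrow{\exp} \mathbb{C}^{\times}. $$

Thus $\operatorname{Ext}^1_{\mathrm{MHS}}(\mathbb{Z}(0), \mathbb{Z}(1)) \cong \mathbb{C}^{\times}$, independently of the choice of $n$.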
+
+10.3. **The Theorem of Carlson-Clemens-Morgan.** This is the first example in which periods of (non-abelian) homotopy groups were related to algebraic cycles.
+
+Here $X$ is a simply connected projective manifold. By Corollaries 8.3 and 9.3, the sequence
+
+$$ (8) \qquad 0 \to H^3(X;\mathbb{Z})/(\text{torsion}) \to \operatorname{Hom}(\pi_3(X),\mathbb{Z}) \to K \to 0 $$
+
+is an extension of $\mathbb{Z}$-mixed Hodge structures,⁸ where $K$ is the kernel of the cup product
+
+$$ S^2 H^2(X; \mathbb{Z}) \to H^4(X; \mathbb{Z}). $$
+
+Denote the class of a divisor $D$ in the Néron-Severi group
+
+$$ NS(X) := \{\text{group of divisors in } X\}/(\text{homological equivalence}) $$
+
+of $X$ by $[D]$. If the codimension 2 cycle
+
+$$ Z := \sum_{j,k} n_{jk} D_j \cap D_k $$
+
+is homologically trivial, where the $n_{jk}$ are integers and the $D_j$ divisors, then
+
+$$ \tilde{Z} := \sum_{j,k} n_{jk} [D_j][D_k] \in S^2 H^2(X; \mathbb{Z}) $$
+
+is an integral Hodge class of type $(2,2)$ in $K$. Pulling back the extension (8) along the mapping $\mathbb{Z}(-2) \to K$ that takes 1 to $\tilde{Z}$, we obtain an extension
+
+$$ 0 \to H^3(X;\mathbb{Z}(2)) \to E_Z \to \mathbb{Z} \to 0 $$
+
+of mixed Hodge structures. This determines a point
+
+$$ \phi_Z \in J(H^3(X;\mathbb{Z}(2))) = J^2(X). $$
+
+⁸The integral statement is proved in [13] — however, $H^3(X;\mathbb{Z})$ is implicitly assumed to be torsion free.
+---PAGE_BREAK---
+
+On the other hand, the homologically trivial cycle $Z$ determines a point
+
+$$ \Phi(Z) \in J^2(X). $$
+
+**Theorem 10.1 (Carlson-Clemens-Morgan).** The points $\phi_Z$ and $\Phi(Z)$ of $J^2(X)$ are equal.
+
+10.4. **The Harris-Pulte Theorem.** Pulte [50] reworked Harris' work on harmonic volume using the Hodge theory of $\bar{\partial}$ and the language of mixed Hodge theory.
+Suppose that $C$ is a compact Riemann surface and that $x \in C$. Corollaries 8.2 and 9.3 imply that the sequence
+
+$$ 0 \to H^1(C) \to H^0(ICh_2^\bullet(P_{x,x}C)) \to K \to 0 $$
+
+is exact in the category of $\mathbb{Z}$-mixed Hodge structures, where $K$ is the kernel of the cup product $H^1(C) \otimes H^1(C) \to H^2(C)$. It therefore determines an element $m_x$ of
+$$ J(\operatorname{Hom}(K, H^1(C))). $$
+
+An element of $\operatorname{Hom}(K, H^1(C))$ can be computed using the recipe in the previous paragraph. For example, if
+
+$$ u := \sum_{j,k} a_{jk} [w_j] \otimes [\overline{w}_k] \in K $$
+
+where each $w_j$ is holomorphic, then, by Proposition 9.1, there is $\xi \in F^1E^1(C)$ such that
+
+$$ d\xi + \sum_{j,k} a_{jk} w_j \wedge \overline{w}_k = 0. $$
+
+Thus
+
+$$ \int (\sum_{j,k} a_{jk} w_j \overline{w}_k + \xi) \in F^1 H^0(ICh_2^\bullet(P_{x,x}C)). $$
+
+($s_F$ can be chosen so that this is $s_F(u)$.) The value of the extension class $\psi$ on $u$ is represented by the homomorphism $H_1(C) \to \mathbb{C}$ obtained by evaluating this integral on loops based at $x$ representing a basis of $H_1(C; \mathbb{Z})$. (Full details can be found in [33].) These integrals are examples of the $\bar{\partial}$ analogues of those considered by Harris.
+
+On the other hand, one has the algebraic 1-cycles
+
+$$ C_x := \{[z] - [x] : z \in C\} \text{ and } C_x^- := \{[x] - [z] : z \in C\} $$
+
+in the jacobian Jac $C$ of $C$. These share the same homology class, so the algebraic cycle
+
+$$ Z_x := C_x - C_x^- $$
+
+is homologically trivial and determines a point
+
+$$ \nu_x \in J_1(\mathrm{Jac}\,C) = J(\Lambda^3 H_1(C)(-1)). $$
+
+The linear mapping $\Lambda^3 H_1(C) \to K^* \otimes H_1(C)$ defined by
+
+$$ a \wedge b \wedge c \mapsto \{u \mapsto \int_{a \times b} u\} \otimes c + \{u \mapsto \int_{b \times c} u\} \otimes a + \{u \mapsto \int_{c \times a} u\} \otimes b $$
+
+is an injective morphism of Hodge structures, and induces an injection
+
+$$ A : J_1(\mathrm{Jac}\,C) \hookrightarrow J(\operatorname{Hom}(K, H^1(C))). $$
+
+**Theorem 10.2 (Harris-Pulte [40, 50]).** With notation as above, $\nu_x = 2A(m_x)$.
+
+*Remark 10.3.* If $C$ is hyperelliptic and $x$ and $y$ are two distinct Weierstrass points, the mixed Hodge structure on $J(C - \{y\}, x)/J^3$ is of order 2. In this case Colombo [19] constructs an extension of $\mathbb{Z}$ by the primitive part $PH_2(\text{Jac } C; \mathbb{Z})$ of $H_2(\text{Jac } C)$ from the MHS on $J(C - \{y\}, x)/J^4$ and shows that it is the class of the Collino cycle [18], an element of the Bloch higher Chow group $CH^g(\text{Jac } C, 1)$. This example shows that the MHS on $\pi_1(C - \{y\}, x)$ of a hyperelliptic curve contains information about the extensions associated to elements of higher $K$-groups, ($K_1$ in this case), not just $K_0$.
+
+## 11. GREEN'S OBSERVATION AND CONJECTURE
+
+Mark Green (unpublished) has given an interpretation of the Carlson-Clemens-Morgan Theorem. He also suggested a general picture relating the Hodge theory of homotopy groups to intersections of cycles. In this section, we briefly describe Green's ideas, then state and sketch a proof of a modified version.
+
+### 11.1. Green's interpretation.
+If one wants to understand the product
+
+$$CH^a(X) \otimes CH^b(X) \to CH^{a+b}(X)$$
+
+the first thing one may look at is:
+
+$$CH^a(X) \otimes CH^b(X) \to \Gamma H^{2a+2b}(X; \mathbb{Z}(a+b))$$
+
+After this, one may consider:
+
+$$
+\begin{array}{@{}l@{}}
+(9) \quad \ker \{CH^a(X) \otimes CH^b(X) \to \Gamma H^{2a+2b}(X; \mathbb{Z}(a+b))\} \\
+\qquad \to J^{a+b}(X) = \operatorname{Ext}_{\text{Hodge}}^1(\mathbb{Z}, H^{2a+2b-1}(X; \mathbb{Z}(a+b))) .
+\end{array}
+$$
+
+What Green observed is that when $X$ is a simply connected projective manifold and $a = b = 1$, the result of Carlson-Clemens-Morgan implies this mapping is determined by the class
+
+$$\epsilon(X) \in \operatorname{Ext}_{\text{Hodge}}^1(K, H^3(X; \mathbb{Z}(2)))$$
+
+of the extension
+
+$$0 \to H^3(X, \mathbb{Z}(2)) \to \operatorname{Hom}(\pi_3(X), \mathbb{Z}(2)) \to K \to 0,$$
+
+where $K$ is the kernel of the cup product $S^2H^2(X, \mathbb{Z}(1)) \to H^4(X, \mathbb{Z}(2))$. This works as follows: since the evident diagram commutes, there is a natural mapping
+
+$$\ker \{CH^1(X) \otimes CH^1(X) \to \Gamma H^4(X, \mathbb{Z}(2))\} \to \Gamma K(2).$$
+
+The result of Carlson-Clemens-Morgan implies that cupping this homomorphism with $\epsilon(X)$ gives the mapping (9).
+
+He went on to conjecture that all the “crossover mappings” (9) — more generally, all crossover mappings associated to the standard conjectured filtration of the Chow groups of $X$ — are similarly described by cupping with extensions one obtains from the mixed Hodge structure on homotopy groups of $X$. In his thesis [5], Archava proves that a conjecture of Green and Griffiths implies the analogue of Green’s conjecture in the case where the category of mixed Hodge structures is replaced by the category of arithmetic Hodge structures of Green and Griffiths [26].
+
+11.2. **Iterated integrals and crossover mappings.** This section proposes a generalization of the theorem of Carlson-Clemens-Morgan to cycles of all codimensions and also to algebraic manifolds which may be neither compact nor simply connected.
+
+Suppose that $X$ is a complex algebraic manifold. By Corollary 9.3, the sequence
+
+$$ (10) \quad 0 \to QH^{2d-1}(X) \to H^{2d-2}(ICh_2^\bullet(P_{x,x}X)) \to [H^{>0}(X)^{\otimes 2}]^{2d} \\ \xrightarrow{\text{cup}} H^{2d}(X) \to QH^{2d}(X) \to 0 $$
+
+is exact in the category of $\mathbb{Z}$-mixed Hodge structures. Denote by $H^{\text{ev}}(X; \mathbb{Z})$ the sum of the even integral cohomology groups of $X$ of positive degree. Let
+
+$$ K^{\text{ev}} = \ker\{H^{\text{ev}}(X;\mathbb{Z})^{\otimes 2} \to H^{\text{ev}}(X;\mathbb{Z})\}. $$
+
+This underlies a graded $\mathbb{Z}$-Hodge structure. We can pull the extension (10) back along the natural mapping from $K^{\text{ev}}$ to the kernel of the cup product in (10) to obtain a new extension
+
+$$ (11) \qquad 0 \to QH^{2d-1}(X;\mathbb{Z}) \to E \to K^{\text{ev}} \to 0 $$
+
+which underlies an extension of MHS, which can be seen to be independent of the
+choice of the basepoint $x$. There is a natural mapping
+
+$$ \ker \left\{ \sum_{\substack{a+b=d \\ a,b>0}} CH^a(X) \otimes CH^b(X) \to \Gamma H^{2d}(X, \mathbb{Z}(d)) \right\} \to \Gamma K^{2d}(d). $$
+
+This, the quotient mapping $H^*(X) \to QH^*(X)$, and the extension (11) determine
+a homomorphism
+
+$$ \Phi : \ker \left\{ \sum_{\substack{a+b=d \\ a,b>0}} CH^a(X) \otimes CH^b(X) \to H^{2d}(X, \mathbb{Z}(d)) \right\} \\ \to \operatorname{Ext}_{\mathrm{Hodge}}^1(\mathbb{Z}, QH^{2d-1}(X; \mathbb{Z}(d))). $$
+
+The following, if proven, will generalize the theorem of Carlson, Clemens and
+Morgan.
+
+**Conjecture 11.1.** If $X$ is a quasi-projective complex algebraic manifold, the mapping $\Phi$ equals the composition of the crossover mapping (9) with the quotient mapping $J^d(X) \to J(QH^{2d-1}(X)(d))$.
+
+*Heuristic Argument.* By resolution of singularities, we may suppose that the quasi-projective algebraic manifold $X$ is of the form $\bar{X} - D$, where $\bar{X}$ is a complex projective manifold and $D$ is a normal crossings divisor. Suppose that $Z_1, \dots, Z_m$ are proper algebraic subvarieties of $X$ of positive codimensions $c_1, \dots, c_m$, respectively. By the moving lemma, we may move them within their rational equivalence classes so that they all meet properly. Suppose that the $n_{jk}$ are integers and that the cycle
+
+$$ W = \sum_{j,k} n_{jk} Z_j \cdot Z_k $$
+
+is homologically trivial in $X$ and of pure codimension $d$.
+
+The basic idea of the argument is easy. The extension class associated to $W$ is
+the difference $s_F(\tilde{W}) - s_Z(\tilde{W}) \mod F^d$ of Hodge and integral lifts of the class
+
+$$
+\tilde{W} := \sum_{j,k} n_{jk}[Z_j] \otimes [Z_k] \in K^{2d} \text{ to } W_{2d}H^{2d-2}(I Ch_2^\bullet(P_{x,x}X)).
+$$
+
+Suppose that $w_j \in E^{c_j, c_j}(\bar{X})$ is a smooth form representing the Poincaré dual of the closure of $Z_j$ in $\bar{X}$. Since $W$ is homologically trivial, there is a form $\xi \in F^d W_1 E^{2d-1}(\bar{X} \log D)$ satisfying $d\xi = \sum_{j,k} n_{jk} w_j \wedge w_k$ (cf. [30, I.3.2.8]⁹). It follows that
+
+$$
+\sum_{j,k} n_{jk} \int w_j w_k + \int \xi \in F^d W_{2d} H^{2d-2} (I \mathrm{Ch}_2^\bullet (P_{x,x} X)),
+$$
+
+which we take as the Hodge lift $s_F(\tilde{W})$ of $\tilde{W}$.
+
+In the integral version, we shall use King's theory of logarithmic currents [43, 44].
+We would like to take the integral lift of $\tilde{W}$ to be
+
+$$
+(12) \quad s_Z(\tilde{W}) := \int \sum_{j,k} n_{jk} \delta_j \delta_k - \delta_\Gamma \in W_{2d} H^{2d-2} (I Ch_2^\bullet (P_{x,x} X))_\mathbb{Z},
+$$
+
+where $\Gamma$ is a chain of codimension $2d - 1$ whose boundary is $W$, and $\delta_j$ is the integration current defined by $Z_j$. To make this argument precise, one has to show that $s_Z(\tilde{W})$ makes sense. Assume this.
+
+The final task is to compute the extension data. Denote the complex of currents on $\overline{X}$ by $D^\bullet(\overline{X})$, and King's complex of log currents for $(\overline{X}, D)$ by $D^\bullet(\overline{X} \log D)$. These have natural Hodge and weight filtrations. There is a log current
+
+$$
+\psi_j \in F^{c_j} W_1 D^{2c_j} (\bar{X} \log D)
+$$
+
+such that $d\psi_j = w_j - \delta_j$. Using the formula for the differential, we have:
+
+$$
+\begin{align*}
+& \int (w_j w_k - \delta_j \delta_k) \\
+&= -d \int (\psi_j \delta_k - \delta_j \psi_k + \psi_j d\psi_k) - \int (\psi_j \wedge \delta_k + \delta_j \wedge \psi_k + \psi_j \wedge d\psi_k) \\
+&= -\int (\psi_j \wedge \delta_k + \delta_j \wedge \psi_k + \psi_j \wedge d\psi_k) \quad \text{mod exact forms}
+\end{align*}
+$$
+
+Combining this with the relations
+
+$$
+d\xi = \sum_{j,k} n_{jk} w_j \wedge w_k \text{ and } d\delta_{\Gamma} = -\sum_{j,k} n_{jk} \delta_j \wedge \delta_k
+$$
+
+we have, modulo exact forms,
+
+$$
+\begin{align*}
+s_F(\tilde{W}) - s_Z(\tilde{W}) &\equiv \int (\xi + \delta_\Gamma) - \sum_{j,k} n_{jk} \int (\psi_j \wedge \delta_k + \delta_j \wedge \psi_k + \psi_j \wedge d\psi_k) \\
+&\equiv \int_\Gamma \mod F^d + \text{exact forms},
+\end{align*}
+$$
+
+which is the desired result.
+$\square$
+
+⁹The ≤ there should be an equals.
+
+The deficiency in this argument is that the theory of iterated integrals of currents is not rigorous. To make this argument rigorous, it would be sufficient to show that there is a complex of chains whose elements are transverse to $s_Z(\tilde{W})$, on which $s_Z(\tilde{W})$ takes integral values, and that computes the integral structure on $H^*(ICh_2^*(P_{x,x}X))$. One possible way to approach this is to triangulate $\bar{X}$ so that $D$, each $Z_j$ and $\Gamma$ are subcomplexes, and then to obtain the cycles that give the integral structure from some analogue of the Adams-Hilton construction [3] associated to the dual cell decomposition. So far, I have not been able to make this work.
+
+This argument suggests that it is the Hodge theory of iterated integrals (or more generally, the cosimplicial cobar construction) rather than homotopy groups which determines periods associated to algebraic cycles, as this result holds even when the loop space de Rham theorem and rational homotopy theory fail. It would be interesting to have an example of an acyclic complex projective manifold where $\Phi$ is non-trivial to illustrate this point.
+
+This argument also applies in the relative case where the variety $X$ and the cycles are defined and flat over a smooth base $S$. In this case, the map $\Phi$ will take values in
+
+$$ \mathrm{Ext}_{\mathrm{Hodge}(S)}^1(\mathbb{Z}_S, R^{2d-1}f_*\mathbb{Z}_X(d)) $$
+
+where $f: X \to S$ and $\mathrm{Hodge}(S)$ denotes the category of admissible variations of mixed Hodge structure over $S$. This can be seen using results from [30, Part II] and [39]. By combining this with the standard technique of spreading a variety defined over a subfield of $\mathbb{C}$, one should get elements of the Hodge realization of motivic cohomology as considered in [6], for example.
+
+## 12. BEYOND NILPOTENCE
+
+The applicability of Chen’s de Rham theory (equivalently, rational homotopy theory) is limited by nilpotence. Using ordinary iterated line integrals, one can only separate those elements of $\pi_1(X, x)$ that can be separated by homomorphisms from $\pi_1(X, x)$ to a group of unipotent upper triangular matrices. If the first Betti number $b_1(X)$ of $X$ is zero, all such homomorphisms are trivial, and iterated line integrals cannot separate any elements of $\pi_1(X, x)$ from the identity. If $b_1(X) = 1$, then the image of all such homomorphisms is abelian, and iterated line integrals can separate only those elements that are distinct in $H_1(X; \mathbb{R})$. Thus, in order to apply de Rham theory to the study of moduli spaces of curves and mapping class groups ($b_1(X) = 0$) or knot groups ($b_1(X) = 1$), for example, iterated integrals need to be generalized.
+
+Before explaining two ways of doing this we shall restate Chen’s de Rham theorem for the fundamental group in a form suitable for generalization.
+
+First recall the definition of unipotent (also known as Malcev) completion. A unipotent group is a Lie group that can be realized as a closed subgroup of a group of unipotent upper triangular matrices (that is, upper triangular matrices with 1's on the diagonal). Unipotent groups are necessarily algebraic groups, as the exponential map from the Lie algebra of strictly upper triangular matrices to the group of unipotent upper triangular matrices is a polynomial bijection.¹⁰
+
+¹⁰Here and below, I shall be vague about the field $F$ of definition of the group. It will always be either $\mathbb{R}$ or $\mathbb{C}$. Also, I will not distinguish between the algebraic group and its group of $F$-rational points.
+
+Suppose that $\Gamma$ is a discrete group. A homomorphism $\rho$ from $\Gamma$ to a unipotent group $U$ is Zariski dense if there is no proper unipotent subgroup of $U$ that contains the image of $\rho$. The set of Zariski dense unipotent representations $\rho: \Gamma \to U_{\rho}$ forms an inverse system. The *unipotent completion* of $\Gamma$ is the inverse limit of all such representations; it is a homomorphism from $\Gamma$ into the *prounipotent group*
+
+$$\mathcal{U}(\Gamma) := \varprojlim_{\rho} U_{\rho}.$$
+
+Every homomorphism $\Gamma \to U$ from $\Gamma$ to a unipotent group factors through the
+natural homomorphism $\Gamma \to \mathcal{U}(\Gamma)$. The coordinate ring of $\mathcal{U}(\Gamma)$ is, by definition, the
+direct limit of the coordinate rings of the $U_{\rho}$:
+
+$$\mathcal{O}(\mathcal{U}(\Gamma)) = \varinjlim_{\rho} \mathcal{O}(U_{\rho}).$$
+
+It is isomorphic to the Hopf algebra of matrix entries $f: \Gamma \to \mathbb{R}$ of all unipotent
+representations of $\Gamma$.
+
+The following statement is equivalent to the statement of Chen’s de Rham theorem for the fundamental group given in Section 3.
+
+**Theorem 12.1.** If $X$ is a connected manifold, then integration induces a Hopf algebra isomorphism
+
+$$\mathcal{O}(\mathcal{U}(\pi_1(X, x))) \cong H^0(\mathrm{Ch}^\bullet(P_{x,x}X)).$$
+
+One recovers the unipotent completion of $\pi_1(X, x)$ as $\operatorname{Spec} H^0(\mathrm{Ch}^\bullet(P_{x,x}X))$. The homomorphism $\pi_1(X, x) \to \mathcal{U}(\pi_1(X, x))$ takes the homotopy class of a loop $\gamma$ to the maximal ideal of iterated integrals that vanish on it.
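+As a toy illustration of this circle of ideas (my example, not the paper's): in the integral Heisenberg group, the smallest nonabelian group of unipotent upper triangular matrices, a commutator of the generators is invisible to homology (the length-one, abelian data) but is detected by the length-two data. A minimal sketch in plain Python:

```python
# Illustrative sketch (not from the paper): a unipotent representation
# separating a commutator that homology cannot see.  In the 3x3 integral
# Heisenberg group, the (1,2) and (2,3) entries record the image in the
# abelianization, while the (1,3) entry is length-two information.

def mat_mul(A, B):
    """Product of two 3x3 integer matrices given as lists of rows."""
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

I3 = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
a = [[1, 1, 0], [0, 1, 0], [0, 0, 1]]        # image of one generator
b = [[1, 0, 0], [0, 1, 1], [0, 0, 1]]        # image of the other
a_inv = [[1, -1, 0], [0, 1, 0], [0, 0, 1]]
b_inv = [[1, 0, 0], [0, 1, -1], [0, 0, 1]]

# The commutator a b a^{-1} b^{-1} dies in homology (its (1,2) and (2,3)
# entries vanish) yet is a nontrivial unipotent matrix.
c = mat_mul(mat_mul(a, b), mat_mul(a_inv, b_inv))
```

The matrix entries of such representations are exactly the kind of functions that Chen's iterated line integrals of length at most two compute.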
+
+**12.1. Relative unipotent completion.** Deligne suggested the following generalization of unipotent completion, which is itself a generalization of the idea of the algebraic envelope of a discrete group defined by Hochschild and Mostow [41, §4].
+
+Suppose that $S$ is a reductive algebraic group. (That is, an affine algebraic
+group, all of whose finite dimensional representations are completely reducible,
+such as $SL_n$, $GL_n$, $O(n)$, $\mathbb{G}_m$, ...) Suppose that $\Gamma$ is a discrete group as above
+and that $\rho: \Gamma \to S$ is a Zariski dense homomorphism.
+
+Similar to the construction of the unipotent completion of $\Gamma$, one can construct a proalgebraic group $\mathcal{G}(\Gamma, \rho)$, which is an extension
+
+$$1 \to \mathcal{U}(\Gamma, \rho) \to \mathcal{G}(\Gamma, \rho) \xrightarrow{p} S \to 1$$
+
+of $S$ by a prounipotent group, and a homomorphism $\Gamma \to \mathcal{G}(\Gamma, \rho)$ whose composition with $p$ is $\rho$. Every homomorphism from $\Gamma$ into an algebraic group $G$ that is an extension of $S$ by a unipotent group, and for which the composite $\Gamma \to G \to S$ is $\rho$, factors through the natural homomorphism $\Gamma \to \mathcal{G}(\Gamma, \rho)$.
+
+The homomorphism $\Gamma \to \mathcal{G}(\Gamma, \rho)$ is called the *completion of $\Gamma$ relative to $\rho$*.
+When $S$ is trivial, the relative completion reduces to classical unipotent completion
+described above.
+
+The definition of iterated integrals can be generalized to more general forms to compute the coordinate rings of relative completions of fundamental groups. Suppose now that $\Gamma = \pi_1(X, x)$, where $X$ is a connected manifold. The representation $\rho$ determines a flat principal $S$-bundle, $P \to X$, together with an identification of the fiber over $x$ with $S$. One can then consider the corresponding (infinite dimensional) bundle $\mathcal{O}(P) \to X$ whose fiber over $y \in X$ is the coordinate ring of
+
+the fiber of $P$ over $y$. This is a flat bundle of $\mathbb{R}$-algebras. One can then consider the
+dga $E^{\bullet}(X, \mathcal{O}(P))$ of $S$-finite differential forms on $X$ with coefficients in $\mathcal{O}(P)$. In
+[35], Chen’s definition of iterated integrals is extended to such forms. The iterated
+integrals of degree 0 are, as before, functions $P_{x,x}X \to \mathbb{R}$.
+
+Two augmentations
+
+$$\delta : E^{\bullet}(X, \mathcal{O}(P)) \to \mathcal{O}(S) \text{ and } \epsilon : E^{\bullet}(X, \mathcal{O}(P)) \to \mathbb{R}$$
+
+are obtained by restricting forms to the fiber $S$ over $x$ and to the identity $1 \in S$ in
+this fiber. These give $\mathcal{O}(S)$ and $\mathbb{R}$ the structure of modules over $E^{\bullet}(X, \mathcal{O}(P))$. One
+can then form the bar construction
+
+$$B(\mathbb{R}, E^{\bullet}(X, \mathcal{O}(P)), \mathcal{O}(S)).$$
+
+This maps to the complex of iterated integrals of elements of $E^{\bullet}(X, \mathcal{O}(P))$.
+
+**Theorem 12.2.** *Integration of iterated integrals induces a natural isomorphism*
+
+$$H^0(B(\mathbb{R}, E^{\bullet}(X, \mathcal{O}(P)), \mathcal{O}(S))) \cong \mathcal{O}(\mathcal{G}(\pi_1(X, x), \rho)).$$
+
+The corresponding Hodge theory is developed in [35]. It is used in [36] to give an explicit presentation of the completion of mapping class groups $\Gamma_g$ with respect to the standard homomorphism $\Gamma_g \to Sp_g$ to the symplectic group given by the action of $\Gamma_g$ on the first homology of a genus $g$ surface when $g \ge 6$.
+
+One disadvantage of the generalization sketched above is that these generalized iterated integrals, being constructed from differential forms with values in a flat vector bundle, are not so easy to work with. A more direct and concrete approach is possible in the solvable case.
+
+**12.2. Solvable iterated integrals.** In his senior thesis, Carl Miller [46] considers the solvable case. Here it is best to take the ground field to be $\mathbb{C}$. The reductive group is a diagonalizable algebraic group:
+
+$$S = (\mathbb{C}^*)^k \times \mu_{d_1} \times \cdots \times \mu_{d_m}.$$
+
+He defines exponential iterated line integrals, which are certain convergent infinite sums of standard iterated line integrals of the type that occur as matrix entries of solvable representations of fundamental groups. Exponential iterated line integrals are linear combinations of iterated line integrals of the form
+
+$$\int e^{\delta_0} w_1 e^{\delta_1} w_2 e^{\delta_2} \cdots e^{\delta_{n-1}} w_n e^{\delta_n} \\
+:= \sum_{k_0, \dots, k_n \ge 0} \int \overbrace{\delta_0 \cdots \delta_0}^{k_0} w_1 \overbrace{\delta_1 \cdots \delta_1}^{k_1} w_2 \cdots \overbrace{\delta_{n-1} \cdots \delta_{n-1}}^{k_{n-1}} w_n \overbrace{\delta_n \cdots \delta_n}^{k_n}$$
+
+where $\delta_0, \dots, \delta_n, w_1, \dots, w_n$ are all 1-forms. This sum converges absolutely when evaluated on any path. The terminology and notation derive from the easily verified fact that
+
+$$\exp \int_\gamma w = \sum_{k \ge 0} \int_\gamma \overbrace{w \cdots w}^{k}.$$
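+This identity can be checked numerically. The sketch below (my own, with the illustrative choice $w = \cos t \, dt$ pulled back to $[0,1]$) computes the $k$-fold iterated integrals by the recursion $I_0 \equiv 1$, $I_k(t) = \int_0^t I_{k-1}(s) f(s) \, ds$:

```python
import math

# Numerical check (illustrative, not from the paper) of
#   exp(∫_γ w) = Σ_k ∫_γ w ... w   (k-fold iterated integral),
# for a 1-form w = f(t) dt pulled back to the interval [0, 1].

def iterated_powers(f, n_steps=4000, max_k=12):
    """Values I_k(1) of the k-fold iterated integrals of f(t) dt, k = 0..max_k."""
    dt = 1.0 / n_steps
    ts = [(i + 0.5) * dt for i in range(n_steps)]
    I = [1.0] * n_steps          # I_0 ≡ 1
    totals = [1.0]               # I_0(1)
    for _ in range(max_k):
        acc, J = 0.0, []
        for t, prev in zip(ts, I):
            acc += prev * f(t) * dt   # cumulative Riemann sum for I_k
            J.append(acc)
        totals.append(acc)            # I_k(1)
        I = J
    return totals

f = math.cos                          # w = cos(t) dt, so ∫_γ w = sin(1)
lhs = math.exp(math.sin(1.0))
rhs = sum(iterated_powers(f))         # truncated sum of iterated integrals
```

Here the truncation at twelve terms is harmless because $\int_\gamma w \cdots w$ ($k$ times) equals $(\int_\gamma w)^k / k!$.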
+
+**Theorem 12.3 (Miller).** *Suppose that X is a connected manifold and*
+
+$$\rho : \pi_1(X, x) \to S$$
+
+is a Zariski dense representation to a diagonalizable $\mathbb{C}$-algebraic group. If $\rho$ factors through $H_1(X)/\text{torsion}$, then the Hopf algebra of locally constant exponential iterated integrals associated to $\rho$ is isomorphic to the coordinate ring $\mathcal{O}(\mathcal{G}(\pi_1(X, x), \rho))$ of the completion of $\pi_1(X, x)$ relative to $\rho$.
+
+He also shows that for a large class of knots $K$ (which includes all fibered knots), there is a representation $\rho: \pi_1(S^3-K, x) \to S$ into a diagonalizable algebraic group such that $\pi_1(S^3-K, x)$ injects into the corresponding relative completion. In particular, there are enough exponential iterated line integrals to separate elements of $\pi_1(S^3-K, x)$. The representation $\rho$ can be computed from the eigenvalues of the Alexander polynomial of $K$: the group $S$ is the Zariski closure of the image of the "semi-simplification" of the Alexander module of $K$.
+
+## 13. ALGEBRAIC ITERATED INTEGRALS
+
+A standard tool in the study of algebraic varieties over any field is algebraic de Rham theory, which originates in the theory of Riemann surfaces and was generalized by Grothendieck [27] among others. This algebraic de Rham theory extends to iterated integrals and several approaches will be presented in this section. I will begin with the most elementary and progress to the abstract, but powerful, approach of Wojtkowiak [57].
+
+**13.1. Iterated integrals of the second kind.** The historical roots of algebraic de Rham cohomology lie in the classical result regarding differentials of the second kind on a compact Riemann surface. Recall that a meromorphic 1-form $w$ on a compact Riemann surface $X$ is of the *second kind* if it has zero residue at each point. Alternatively, $w$ is of the *second kind* if the value of $\int w$ on each loop in $X - \{\text{singularities of } w\}$ depends only on the class of the loop in $H_1(X)$. A classical result asserts that there is a natural isomorphism
+
+$$H^1(X; \mathbb{C}) \cong \frac{\{\text{meromorphic differentials of the second kind on } X\}}{\{\text{differentials of meromorphic functions}\}}$$
+
+This can be generalized to iterated integrals. Suppose that $X$ is a compact Riemann surface and that $S$ is a finite subset. An *iterated line integral of the second kind* on $X-S$ is an iterated integral
+
+$$\sum_{r \le s} \sum_{|I|=r} a_I \int w_{i_1} \dots w_{i_r},$$
+
+where $a_I \in \mathbb{C}$ and each $w_j$ is a meromorphic differential on $X$, with the property that its value on each path in $X$ that avoids the singularities of all $w_j$ depends only on its homotopy class (relative to its endpoints) in $X-S$.
+
+**Example 13.1 (cf. [33, p. 260]).** We will assume that $S$ is empty (the case where $S$ is non-empty is simpler). Suppose that $w_1, \dots, w_n$ are differentials of the second kind on $X$ and that $a_{jk} \in \mathbb{C}$. Since differentials of the second kind are locally (in the complex topology) the exterior derivative of a meromorphic function, for each point $x \in X$ we can find a function $f_j$, meromorphic at $x$, such that $df_j = w_j$ about $x$. Define
+
+$$r_{jk}(x) = \mathrm{Res}_{z=x} [f_j(z)w_k(z)].$$
+
+Since $w_k$ is of the second kind, changing $f_j$ by a constant will not change $r_{jk}(x)$. If
+
+$$ \sum_{x \in X} \sum_{j,k} a_{jk} r_{jk}(x) = 0 $$
+
+there is a meromorphic differential $u$ on $X$ (which can be taken to be of the third kind) such that
+
+$$ \mathrm{Res}_{z=x} u(z) = - \sum_{j,k} a_{jk} r_{jk}(x). $$
+
+The iterated integral
+
+$$ \sum_{j,k} a_{jk} \int w_j w_k + \int u $$
+
+is of the second kind. This can be seen by noting that the integrand of this integral near $x$ is
+
+$$ \sum_{j,k} a_{jk} f_j(z) w_k(z) + u(z), $$
+
+which has zero residue at $x$. Equivalently, the pullback of the integrand of this iterated integral to the universal covering of $X-S$ is of the second kind.
+
+**Theorem 13.2.** If $X$ is a compact Riemann surface, $S$ a finite subset of $X$, and $1 \le s \le \infty$, then, for all $x, y \in X - S$, integration induces a natural isomorphism
+
+$$ H^0(\mathrm{Ch}_s^\bullet(P_{x,y}(X-S)))_\mathbb{C} \cong \{ \text{iterated integrals of the second kind of length } \le s \text{ on } X-S \} $$
+
+*Proof.* This is just an algebraic version of the proof of Chen’s $\pi_1$ de Rham theorem given in [33, §4]. Familiarity with that proof will be assumed. I will just make those additional points necessary to prove this variant.
+
+Set $U = X - S$. Suppose that $s < \infty$. We consider the truncated group ring $\mathbb{C}\pi_1(U,x)/J^{s+1}$ to be a $\pi_1(U,x)$-module via right multiplication. Let $E_s \to U$ be the corresponding flat bundle. This is a holomorphic vector bundle with a flat holomorphic connection. It is filtered by the flat subbundles corresponding to filtration
+
+$$ \mathbb{C}\pi_1(U, x)/J^{s+1} \supseteq J/J^{s+1} \supseteq \dots \supseteq J^s/J^{s+1} \supseteq 0 $$
+
+of $\mathbb{C}\pi_1(U,x)/J^{s+1}$ by right $\pi_1(U,x)$-submodules. Denote the corresponding filtration of $E_s$ by
+
+$$ E_s = E_s^0 \supseteq E_s^1 \supseteq \dots \supseteq E_s^s \supseteq 0. $$
+
+By the calculation in [33, Prop. 4.2], each of the bundles $E_s^t/E_s^{t+1}$ has trivial monodromy, so that each $E_s^t$ has unipotent monodromy.
+
+By the results of [21], each of the bundles $E_s^t$ has a canonical extension $\bar{E}_s^t$ to $X$. These satisfy:
+
+i. each $\bar{E}_s^t$ is a subbundle of $\bar{E}_s := \bar{E}_s^0$;
+
+ii. the connection on $E_s$ extends to a meromorphic connection on $\bar{E}_s$ which restricts to a meromorphic connection on each of the $\bar{E}_s^t$;
+
+iii. the connection on each of the bundles $\bar{E}_s^t/\bar{E}_s^{t+1}$ is trivial over $X$.
+
+(Take $\bar{E}_s^t = E_s^t$ when $S$ is empty.)
+
+The following lemma implies that there are meromorphic trivializations of each $\bar{E}_s$ compatible with all of the projections
+
+$$ \dots \to \bar{E}_s \to \bar{E}_{s-1} \to \dots \to \bar{E}_0 = \mathcal{O}_X $$
+
+and where the induced trivializations of each graded quotient of each $\overline{E}_s$ are flat.
+Moreover, we can arrange for all of the singularities of the trivialization¹¹ to lie in
+any prescribed non-empty finite subset $T$ of $X$.
+
+The connection form $\omega_s$ of $\overline{E}_s$ with respect to this trivialization thus satisfies
+
+$$ \omega_s \in \{\text{meromorphic 1-forms on } X\} \otimes J^{-1} \text{End}(\mathbb{C}\pi_1(U,x)/J^{s+1}) $$
+
+with values in the linear endomorphisms of $\mathbb{C}\pi_1(U,x)/J^{s+1}$ that preserve the filtration
+
+$$ \mathbb{C}\pi_1(U, x)/J^{s+1} \supseteq J/J^{s+1} \supseteq \dots \supseteq J^s/J^{s+1} \supseteq 0 $$
+
+and act trivially on its graded quotients. This connection is thus nilpotent. Note
+that, even though $\omega_s$ may have poles in $U$, the connection given by $\omega_s$ has trivial
+monodromy about each point of $U$. This is the key point in the proof; it implies
+that the transport [33, §2]
+
+$$ T = 1 + \int \omega_s + \int \omega_s \omega_s + \dots + \int \overbrace{\omega_s \dots \omega_s}^{s} $$
+
+is an $\operatorname{End}(\mathbb{C}\pi_1(U,x)/J^{s+1})$-valued iterated integral of the second kind on $U$. Its
+matrix entries are iterated integrals of the second kind.
+
+The result when $x=y$ now follows as in the proof of [33, §4]. The case when $x \neq y$ is easily deduced from the case $x=y$. The result for $s=\infty$ is obtained by taking direct limits over $s$ using the fact that $\omega_s$ is the image of $\omega_{s+1}$ under the projection
+
+$$ \begin{align*} & \{\text{meromorphic 1-forms on } X\} \otimes J^{-1} \operatorname{End}(\mathbb{C}\pi_1(U,x)/J^{s+2}) \\ & \to \{\text{meromorphic 1-forms on } X\} \otimes J^{-1} \operatorname{End}(\mathbb{C}\pi_1(U,x)/J^{s+1}) \end{align*} \quad \square $$
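+The transport series in the proof can be illustrated numerically: for a constant nilpotent connection form $\omega = N \, dt$ on $[0,1]$ the series terminates, and the transport agrees with the ordered product of $(I + N \, dt)$ over a fine subdivision of the path. A sketch under these simplifying assumptions (scalar-valued, not the bundle situation of the proof):

```python
# Illustrative sketch (not from the paper): Chen's transport
#   T = 1 + ∫ω + ∫ωω + ...
# for a constant nilpotent connection form ω = N dt on [0, 1].
# Since N is constant, the k-fold iterated integral is N^k / k!, so the
# series terminates at k = 2 and T = I + N + N^2 / 2.

def mat_mul(A, B):
    """Product of two square matrices given as lists of rows."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def mat_lin(A, B, s):
    """Entrywise A + s * B."""
    n = len(A)
    return [[A[i][j] + s * B[i][j] for j in range(n)] for i in range(n)]

I3 = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
N = [[0.0, 1.0, 0.5], [0.0, 0.0, 1.0], [0.0, 0.0, 0.0]]   # nilpotent: N^3 = 0

T_series = mat_lin(mat_lin(I3, N, 1.0), mat_mul(N, N), 0.5)

# Ordered-product approximation of the transport along the path.
steps = 2000
dt = 1.0 / steps
T_prod = I3
for _ in range(steps):
    T_prod = mat_mul(T_prod, mat_lin(I3, N, dt))
```

The two computations agree up to the discretization error of the ordered product.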
+
+**Lemma 13.3.** *Suppose that*
+
+$$ 0 \to \mathcal{O}_X^N \to E \xrightarrow{p} \mathcal{F} \to 0 $$
+
+is an extension of holomorphic vector bundles over a compact Riemann surface $X$.
+If $T$ is a non-empty finite subset of $X$, there is a meromorphic splitting of $p$ which is
+holomorphic outside $T$.
+
+*Proof.* Set $\tilde{\mathcal{F}} = \operatorname{Hom}(\mathcal{F}, \mathcal{O}_X)$. Riemann-Roch implies that $H^1(X, \tilde{\mathcal{F}}(*T)) = 0$, where $\tilde{\mathcal{F}}(*T)$ is defined to be the sheaf of meromorphic sections of $\tilde{\mathcal{F}}$ that are holomorphic outside $T$. It follows from obstruction theory for extensions of vector bundles that the sequence has a meromorphic splitting that is holomorphic on $X-T$. $\square$
+
+*Remark 13.4.* Note that if $S$ is non-empty, the proof shows that the algebra of iterated line integrals built out of meromorphic forms that are holomorphic on $X-S$ equals $H^0(\mathrm{Ch}^\bullet(P_{x,y}(X-S)))_\mathbb{C}$. Since $X-S$ is affine, this is a very special case of Theorem 13.5 in the next paragraph, a consequence of Grothendieck's algebraic de Rham Theorem. The result above can also be used to show that if $X$ is a smooth curve defined over a subfield $F$ of $\mathbb{C}$, then $H^0(\mathrm{Ch}^\bullet(P_{x,y}(X-S)))_\mathbb{C}$ has a canonical
+
+¹¹A meromorphic trivialization $\phi : E \to \mathcal{O}_X^N$ is singular at $x$ if either $\phi$ has a pole at $x$ or if the determinant of $\phi$ vanishes at $x$.
+
+$F$-form — namely, the one built from those meromorphic differentials of the second kind on $X - S$ that are defined over $F$.
+
+It would be interesting and useful to have a description of the Hodge and weight filtrations on $H^0(\mathrm{Ch}^\bullet(P_{x,y}(X-S))_\mathbb{C})$, possibly in terms of some kind of pole filtration, as one has for cohomology.
+
+**13.2. Grothendieck's theorem and its analogues for iterated integrals.**
+
+Suppose that $X$ is a variety over a field $F$ of characteristic zero. Denote the complex of sheaves of Kähler differentials of $X$ over $F$ by $\Omega_{X/F}^\bullet$. Denote its complex of global sections over $X$ by $H^0(\Omega_{X/F}^\bullet)$. It is a commutative differential graded algebra over $F$. When $F = \mathbb{C}$ and $X$ is smooth, every algebraic differential $w \in H^0(\Omega_{X/\mathbb{C}})$ is a holomorphic differential on $X$. The corresponding mapping $H^0(\Omega_{X/\mathbb{C}}^\bullet) \to E^\bullet(X)_\mathbb{C}$ is a dga homomorphism.
+
+**Theorem 13.5.** If $X$ is a complex affine manifold, then the natural homomorphism
+
+$$ H^{\bullet}(H^0(\Omega_{X/\mathbb{C}}^\bullet)) \to H^{\bullet}(X; \mathbb{C}) $$
+
+is a ring isomorphism.
+
+Note that Theorem 13.2 is a consequence of this when $S$ is non-empty and $S = T$. If $F \subset \mathbb{C}$ and $X$ is defined over $F$, then $H^0(\Omega_{X/F}^\bullet) \otimes_F \mathbb{C} \cong H^0(\Omega_{X/\mathbb{C}}^\bullet)$. One important consequence of Grothendieck's theorem is that if $F$ is a subfield of $\mathbb{C}$, then
+
+$$ H^{\bullet}(X(\mathbb{C}); \mathbb{C}) \cong H^{\bullet}(H^0(\Omega_{X/F}^\bullet)) \otimes_F \mathbb{C}. $$
+
+That is, the de Rham cohomology of the complex manifold $X(\mathbb{C})$ has a natural $F$-structure which is functorial with respect to morphisms of affine manifolds over $F$.
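+As a toy instance of this $F$-structure (my example, not from the text): for $X = \mathbb{G}_m$ over $\mathbb{Q}$, the cohomology $H^1$ is spanned by the algebraic form $dz/z$, and the comparison with singular cohomology of $X(\mathbb{C}) = \mathbb{C}^*$ is governed by the period $\int_\gamma dz/z = 2\pi i$ over the unit circle $\gamma$. A numerical check:

```python
import cmath
import math

# Illustrative sketch (not from the paper): the class of dz/z spans the
# algebraic de Rham H^1 of G_m, and its period over the unit circle is
# 2πi.  We approximate ∮ dz/z by a Riemann sum along the circle.

def period(n=20000):
    """Riemann-sum approximation of the period of dz/z over the unit circle."""
    total = 0.0 + 0.0j
    for i in range(n):
        t0 = 2.0 * math.pi * i / n
        t1 = 2.0 * math.pi * (i + 1) / n
        z0, z1 = cmath.exp(1j * t0), cmath.exp(1j * t1)
        zm = cmath.exp(1j * 0.5 * (t0 + t1))   # midpoint of the arc
        total += (z1 - z0) / zm                # approximates ∫ dz / z
    return total

p = period()
```

The transcendence of the period $2\pi i$ is exactly what the $F$-structure keeps track of: the form is defined over $\mathbb{Q}$, but its integrals against topological cycles are not.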
+
+This can be generalized to arbitrary smooth varieties over $F$ by taking hypercohomology. Define the *algebraic de Rham cohomology* of $X$ by
+
+$$ H_{DR}^{\bullet}(X) = \mathbb{H}^{\bullet}(X, \Omega_{X/F}^{\bullet}). $$
+
+As above, if $F$ is a subfield of $\mathbb{C}$, then the ordinary de Rham cohomology of $X(\mathbb{C})$ has a natural $F$-structure:
+
+$$ H^{\bullet}(X(\mathbb{C}); \mathbb{C}) \cong H_{DR}^{\bullet}(X) \otimes_F \mathbb{C}. $$
+
+Using the classical Hodge theorem, one can show that if $X$ is also projective, the Hodge filtration
+
+$$ F^p H^m(X(\mathbb{C})) := \bigoplus_{s \ge p} H^{s,m-s}(X(\mathbb{C})) $$
+
+is obtained from a natural Hodge filtration
+
+$$ H_{DR}^{\bullet}(X) = F^0 H_{DR}^{\bullet}(X) \supseteq F^1 H_{DR}^{\bullet}(X) \supseteq F^2 H_{DR}^{\bullet}(X) \supseteq \dots $$
+
+of the algebraic de Rham cohomology by tensoring by $\mathbb{C}$.
+
+This can be extended to iterated integrals on affine manifolds in the obvious way. For an affine manifold $X$ over $F$ and $F$-rational points $x, y \in X(F)$, define the algebraic iterated integrals on $P_{x,y}X$ by
+
+$$ H_{DR}^{\bullet}(P_{x,y}X) = H^{\bullet}(B(F, H^0(\Omega_{X/F}^*), F)) $$
+---PAGE_BREAK---
+
+where $F$ is viewed as a module over $H^0(\Omega_{X/F}^\bullet)$ via the two augmentations induced by $x$ and $y$.¹² It follows from Corollary 7.3 and Grothendieck’s theorem above that if $F$ is a subfield of $\mathbb{C}$, there is a canonical isomorphism
+
+$$ (13) \qquad H_{DR}^\bullet(P_{x,y}X) \otimes_F \mathbb{C} \cong H^\bullet(\mathrm{Ch}^\bullet(P_{x,y}X(\mathbb{C}))). $$
+
+When $X$ is not affine, one can replace $X$ by a smooth affine hypercovering $U_\bullet \to X$ and apply the methods of [30, §5] or Navarro [49] (see below) to construct a commutative dga $A^\bullet(U_\bullet)$ over $F$ with the property that when tensored with $\mathbb{C}$ over $F$, it is naturally quasi-isomorphic to $E^\bullet(X)_\mathbb{C}$. One can then define $H_{DR}^\bullet(P_{x,y}X)$ to be the cohomology of the corresponding bar construction as above. It will give a natural $F$-form of $H^\bullet(\mathrm{Ch}^\bullet(P_{x,y}X(\mathbb{C})))$. However, in this general case, it is better to use Wojtkowiak’s approach, which is explained in the next paragraph.
+
+**13.3. Wojtkowiak’s approach.** The most functorial way to approach algebraic de Rham theory of iterated integrals on varieties is via the works of Navarro [49] and Wojtkowiak [57]. This approach has been used in the works of Shiho [51] and Kim-Hain [42] on the crystalline version of unipotent completion.
+
+Suppose that $D$ is a normal crossing divisor in a smooth complete variety $\overline{X}$, both defined over a field $F$ of characteristic zero. Set $X = \overline{X} - D$ and denote the inclusion $X \hookrightarrow \overline{X}$ by $j$. One then has the sheaf of logarithmic differentials $\Omega_{\overline{X}}^\bullet(\log D)$ on $\overline{X}$, which is quasi-isomorphic to $j_*F$.
+
+For a continuous map $f: U \to V$ between topological spaces, Navarro [49] has constructed a functor $\mathbb{R}_{TW}^\bullet f_*$ from the category of complexes of sheaves on $U$ to the category of complexes of sheaves on $V$ with many wonderful properties. Among them:
+
+i. $\mathbb{R}_{TW}^\bullet f_*$ takes sheaves of commutative dgas on $U$ to sheaves of commutative dgas on $V$;
+
+ii. if $V$ is a point, then the global sections $\Gamma \mathbb{R}_{TW}^\bullet f_* \mathbb{Q}_U$ of $\mathbb{R}_{TW}^\bullet f_* \mathbb{Q}_U$ form Sullivan’s rational de Rham complex of $U$;
+
+iii. $\mathbb{R}_{TW}^\bullet f_*$ takes quasi-isomorphisms to quasi-isomorphisms;
+
+iv. it induces the usual $Rf_*$ from the bounded derived category of sheaves on $U$ to the bounded derived category of sheaves on $V$.
+
+For convenience, we denote the global sections $\Gamma \mathbb{R}_{TW}^\bullet$ of $\mathbb{R}_{TW}^\bullet$ by $R_{TW}^\bullet$. For an arbitrary topological space $Z$, define
+
+$$ A^\bullet(Z) = R_{TW}^\bullet \mathbb{Q}_Z. $$
+
+This is the Thom-Whitney-Sullivan de Rham complex of $Z$. Its cohomology is naturally isomorphic to $H^\bullet(Z; \mathbb{Q})$.
+
+In the present situation, we can assign the commutative differential graded algebra
+
+$$ L^\bullet(\overline{X}, D) := R_{TW}^\bullet \Omega_{\overline{X}}^\bullet (\log D) $$
+
+to $(\overline{X}, D)$, where we are viewing $\overline{X}$ as a topological space in the Zariski topology. This dga is natural in the pair $(\overline{X}, D)$.
+
+If $x, y$ are $F$-rational points of $X$, there are natural augmentations $L^\bullet(\overline{X}, D) \to F$. We can therefore use them to form the bar construction $B(F, L^\bullet(\overline{X}, D), F)$.
+
+¹²This is not an unreasonable definition, but one should recall that when $X$ is not simply connected and $F = \mathbb{C}$, the de Rham theorem may not hold as we have seen in Example 7.5.
+---PAGE_BREAK---
+
+Following Wojtkowiak¹³ [57], we define
+
+$$H_{DR}^{\bullet}(P_{x,y}X) = H^{\bullet}(B(F, L^{\bullet}(\overline{X}, D), F)).$$
+
+This definition agrees with the ones above.
+
+If $F$ is a subfield of $\mathbb{C}$, then the naturality of Navarro's functor implies that there
+is a natural dga quasi-isomorphism
+
+$$A^{\bullet}(X(\mathbb{C})) \otimes_{\mathbb{Q}} \mathbb{C} \leftrightarrow L^{\bullet}(\overline{X}, D) \otimes_F \mathbb{C}$$
+
+where we regard $X(\mathbb{C})$ as a topological space in the complex topology. This quasi-isomorphism respects the augmentations induced by $x$ and $y$. Thus we have:
+
+**Theorem 13.6** (Wojtkowiak). If $F$ is a subfield of $\mathbb{C}$, there is a natural isomorphism
+
+$$ (14) \qquad H_{DR}^{\bullet}(P_{x,y}X) \otimes_F \mathbb{C} \cong H^{\bullet}(Ch^{\bullet}(P_{x,y}(X(\mathbb{C})))) . $$
+
+This result can be extended to the Hodge and weight filtrations. The Hodge
+filtration of $L^{\bullet}(\bar{X}, D)$ is defined by
+
+$$ F^p L^{\bullet}(\bar{X}, D) = R_{TW}^{\bullet} [\Omega_{\bar{X}}^p(\log D) \to \Omega_{\bar{X}}^{p+1}(\log D) \to \dots], $$
+
+where $\Omega_{\bar{X}}^p(\log D)$ is placed in degree $p$. This extends to a Hodge filtration on $B(F, L^{\bullet}(\bar{X}, D), F)$ as described in [30, §3.2]. The Hodge filtration of $H_{DR}^{\bullet}(P_{x,y}X)$ is defined by
+
+$$
+\begin{align*}
+F^p H_{DR}^{\bullet}(P_{x,y}X) &= \operatorname{im} \{ H^{\bullet}(F^p B(F, L^{\bullet}(\bar{X}, D), F)) \to H_{DR}^{\bullet}(P_{x,y}X) \} \\
+&\cong H^{\bullet}(F^p B(F, L^{\bullet}(\bar{X}, D), F)).
+\end{align*}
+$$
+
+Similarly, the weight filtration of $L^{\bullet}(\bar{X}, D)$ is defined by
+
+$$ W_m L^{\bullet}(\bar{X}, D) = R_{TW}^{\bullet}\, \tau_{\le m} \Omega_{\bar{X}}^{\bullet}(\log D). $$
+
+Like the Hodge filtration, this extends to a weight filtration of $B(F, L^{\bullet}(\bar{X}, D), F)$ as in [30, §3.2]. The weight filtration of $H_{DR}^{\bullet}(P_{x,y}X)$ is defined by
+
+$$ W_m H_{DR}^n(P_{x,y}X) = \operatorname{im}\{H^n(W_{m-n}B(F, L^{\bullet}(\bar{X}, D), F)) \to H_{DR}^n(P_{x,y}X)\}. $$
+
+**Theorem 13.7.** Suppose, as above, that $\overline{X}$ is a smooth complete variety and $D$ a normal crossings divisor in $\overline{X}$, both defined over $F$. If $X = \overline{X} - D$, then there is a Hodge filtration
+
+$$ H_{DR}^{\bullet}(P_{x,y}X) = F^0 H_{DR}^{\bullet}(P_{x,y}X) \supseteq F^1 H_{DR}^{\bullet}(P_{x,y}X) \supseteq F^2 H_{DR}^{\bullet}(P_{x,y}X) \supseteq \dots $$
+
+and a weight filtration
+
+$$ \dots \subseteq W_m H_{DR}^{\bullet}(P_{x,y}X) \subseteq W_{m+1} H_{DR}^{\bullet}(P_{x,y}X) \subseteq \dots \subseteq H_{DR}^{\bullet}(P_{x,y}X) $$
+
+which are functorial with respect to morphisms of smooth $F$-varieties and are compatible with the product and, when $x = y$, the coproduct and antipode. These filtrations behave well under extension of scalars; that is, if $K$ is an extension field of $F$, then there are natural isomorphisms
+
+$$ F^p H_{DR}^\bullet (P_{x,y} X \otimes_F K) \cong (F^p H_{DR}^\bullet (P_{x,y} X)) \otimes_F K $$
+
+and
+
+$$ W_m H_{DR}^{\bullet}(P_{x,y}X \otimes_F K) \cong (W_m H_{DR}^{\bullet}(P_{x,y}X)) \otimes_F K. $$
+
+¹³Actually, he does not use logarithmic forms, just algebraic forms on X. However, it is necessary to use logarithmic forms in order to compute the Hodge and weight filtrations.
+---PAGE_BREAK---
+
+When $F = \mathbb{C}$, these filtrations agree with those defined in [30].
+
+*Proof.* The first point is that there is a natural filtered quasi-isomorphism
+
+$$ (E^{\bullet}(\overline{X} \log D), F^{\bullet}) \leftrightarrow (L^{\bullet}(\overline{X}, D), F^{\bullet}). $$
+
+The second is that there are natural quasi-isomorphisms
+
+$$ j_* F_X \hookrightarrow \Omega_{\overline{X}}^{\bullet}(\log D) \hookrightarrow j_* \Omega_X^{\bullet}. $$
+
+## 14. THE COBAR CONSTRUCTION
+
+In this section, we review the cobar construction (a cosimplicial model of loop and path spaces) and explain how iterated integrals are the “de Rham realization” of it. The applications of iterated integrals in earlier sections, and their role in the algebraic de Rham theorems for varieties over arbitrary fields, suggest that the cosimplicial version of the cobar construction plays a direct and deep role in the theory of motives and that the examples presented in this paper are just the Hodge-de Rham realizations of such motivic phenomena. Additional evidence for this view comes from the works of Colombo [19], Cushman [20], Shiho [51] and Terasoma [55].
+
+The original version of the cobar construction, due to Frank Adams [1, 2], grew out of earlier work [3] with Peter Hilton. Adams' cobar construction can be viewed as a functorial construction which associates to a certain singular chain complex $S_{\bullet}^{(0)}(X, x)$ of a pointed space $(X, x)$, a complex $Ad(S_{\bullet}^{(0)}(X, x))$ that maps to the reduced cubical chains on the loopspace $P_{x,x}X$ and which is dual, in some sense, to the bar construction on the dual of $S_{\bullet}^{(0)}(X, x)$. The map from Adams' cobar construction to the reduced cubical chains is a quasi-isomorphism when $X$ is simply connected. In the non-simply connected case, a result of Stallings [53] implies that $H_0(Ad(S_{\bullet}^{(0)}(X, x)))$ is naturally isomorphic to $H_0(P_{x,x}X; \mathbb{Z}) = \mathbb{Z}\pi_1(X, x)$.
+
+We begin with the abstract cobar construction and work back towards the classical one. The abstract approach appears to originate with the book of Bousfield and Kan [11]. Much of what we write here is an elaboration of the first section of Wojtkowiak's paper [57]. Chen has given a nice exposition of the classical cobar construction in the appendix of [15].
+
+**14.1. Simplicial and cosimplicial objects.** Denote the category of finite ordinals by $\Delta$; its objects are the finite ordinals $[n] := \{0, 1, \dots, n\}$ and the morphisms are order preserving functions. Among these, the face maps
+
+$$ d^j : [n-1] \to [n], \quad 0 \le j \le n $$
+
+play a special role; $d^j$ is the unique order preserving injection that omits the value $j$.
+
+A contravariant functor $\Delta \to C$ is called a *simplicial object* in the category *C*. A *cosimplicial object* of *C* is a covariant functor $\Delta \to C$.
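As a quick concrete check (our own illustration, not part of the text), the coface maps and the cosimplicial identity $d^j \circ d^i = d^i \circ d^{j-1}$ for $i < j$ can be verified mechanically:

```python
def coface(j):
    """The order-preserving injection d^j : [n-1] -> [n] omitting the value j."""
    return lambda k: k if k < j else k + 1

def compose(f, g):
    return lambda k: f(g(k))

# cosimplicial identity: d^j . d^i = d^i . d^(j-1) whenever i < j
n = 6
for j in range(n + 1):
    for i in range(j):
        lhs = compose(coface(j), coface(i))
        rhs = compose(coface(i), coface(j - 1))
        assert all(lhs(k) == rhs(k) for k in range(n - 1))
```

Dually, these identities govern the face maps of any simplicial object.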
+
+**Example 14.1.** Denote the standard $n$-simplex by $\Delta^n$. We can regard its vertices as being the ordinal $[n]$. Each order preserving mapping $f : [n] \to [m]$ induces a
+---PAGE_BREAK---
+
+linear mapping $|f| : \Delta^n \to \Delta^m$. These assemble to give the cosimplicial space $\Delta^\bullet$
+
+$$ \Delta^0 \xrightarrow{\ d^0,\ d^1\ } \Delta^1 \xrightarrow{\ d^0,\ d^1,\ d^2\ } \Delta^2 \xrightarrow{\ d^0,\ \dots,\ d^3\ } \Delta^3 \ \cdots $$
+
+whose value on $[n]$ is $\Delta^n$.
+
+**Example 14.2.** Suppose that $K$ is an ordered finite simplicial complex (that is, there is a total order on the vertices of each simplex). Then one has the simplicial set $K_\bullet$, whose set of $n$-simplices $K_n$ is the set of order preserving mappings $\phi : [n] \to K$ (not necessarily injective) such that the $\phi(j)$ span a simplex of $K$. In particular, we have the simplicial set $\Delta^n_\bullet$, whose set of $m$-simplices is the set of all order preserving mappings from $[m]$ to $[n]$.
+
+If one has a simplicial or cosimplicial abelian group, one obtains a chain complex simply by defining the differential to be the alternating sum of the (co)face maps. Likewise, if one has a simplicial or cosimplicial chain complex, one obtains a double complex.
+
+**14.2. Cosimplicial models of path and loop spaces.** Suppose that $X$ is a topological space. Denote the simplicial model $\Delta^1_\bullet$ of the unit interval by $I_\bullet$. Let
+
+$$X^{I\bullet} = \operatorname{Hom}(I_{\bullet}, X).$$
+
+This is a cosimplicial space which models the full path space $PX$. Its space of $n$-cosimplices is $\operatorname{Hom}(I_n, X)$. Since there are $n+2$ order preserving mappings $[n] \to \{0,1\}$, this is just $X^{n+2}$. The $j$th coface mapping $d^j: X^{I_{n-1}} \to X^{I_n}$ is
+
+$$
+\overbrace{\text{id} \times \dots \times \text{id}}^{j} \times (\text{diagonal}) \times \overbrace{\text{id} \times \dots \times \text{id}}^{n-j} : X^{n+1} \to X^{n+2}
+$$
+
+We shall denote it by $P^\bullet X$ and its set of $n$-cosimplices by $P^n X$.
+
+The simplicial set $\partial I_\bullet$ is the simplicial set associated to the discrete set $\{0,1\}$. Since $(\partial I)_n$ consists of just the two constant maps $[n] \to \{0,1\}$, the cosimplicial space $X^{\partial I_\bullet}$ consists of $X \times X$ in each degree. The mapping $X^{I_\bullet} \to X^{\partial I_\bullet}$ corresponds to the projection $PX \to X \times X$ that takes a path $\gamma$ to its endpoints $(\gamma(0), \gamma(1))$.
+
+One obtains a cosimplicial model $P_{x,y}^\bullet X$ for $P_{x,y}X$ by taking the fiber of $X^{I_\bullet} \to X^{\partial I_\bullet}$ over $(x, y)$. Specifically, $P_{x,y}^n X = X^n$, with coface maps $d^j : P_{x,y}^{n-1} X \to P_{x,y}^n X$ given by
+
+$$
+d^j(x_1, \dots, x_{n-1}) = \begin{cases} (x, x_1, \dots, x_{n-1}) & j=0; \\ (x_1, \dots, x_j, x_j, x_{j+1}, \dots, x_{n-1}) & 0 < j < n; \\ (x_1, \dots, x_{n-1}, y) & j=n. \end{cases}
+$$
+
+**14.3. Geometric realization.** As is well known, each simplicial topological space $X_\bullet$ has a geometric realization $|X_\bullet|$, which is a quotient space
+
+$$
+|x_{\bullet}| = \left( \prod_{n \ge 0} X_n \times \Delta^n \right) / \sim
+$$
+
+where $\sim$ is a natural equivalence relation generated by identifications, one for each morphism $f: [n] \to [m]$ of $\mathbf{\Delta}$. If $K$ is an ordered simplicial complex and $K_\bullet$ the associated simplicial set, then $|K_\bullet|$ is homeomorphic to the topological space associated to $K$.
+---PAGE_BREAK---
+
+Dually, each cosimplicial space $X[\bullet]$ has a kind of geometric realization $\|X[\bullet]\|$, which is called the *total space associated to $X[\bullet]$* (cf. [11]). This is exactly the categorical dual of the geometric realization of a simplicial space. It is simply the subspace of
+
+$$ \prod_{n \ge 0} X[n]^{\Delta^n} $$
+
+consisting of all sequences compatible with all morphisms $f : [n] \to [m]$ in $\mathbf{\Delta}$, where $X[n]^{\Delta^n}$ denotes the set of continuous mappings from $\Delta^n$ to $X[n]$ endowed with the compact-open topology. Continuous mappings from a topological space $Z$ to $\|X[\bullet]\|$ correspond naturally to continuous mappings
+
+$$ \Delta^{\bullet} \times Z \to X[\bullet] $$
+
+of cosimplicial spaces.
+
+As in Section 1, we regard $\Delta^n$ as the time ordered simplex
+
+$$ \Delta^n = \{(t_1, \dots, t_n) : 0 \le t_1 \le \dots \le t_n \le 1\}. $$
+
+There are continuous mappings
+
+$$ PX \to \|P^{\bullet}X\| \text{ and } P_{x,y}X \to \|P_{x,y}^{\bullet}X\| $$
+
+defined by
+
+$$ \gamma \mapsto \{(t_1, \dots, t_n) \mapsto (\gamma(0), \gamma(t_1), \dots, \gamma(t_n), \gamma(1))\} $$
+
+and
+
+$$ \gamma \mapsto \{(t_1, \dots, t_n) \mapsto (\gamma(t_1), \dots, \gamma(t_n))\} $$
+
+These correspond to the adjoint mappings
+
+$$ \Delta^{\bullet} \times PX \to P^{\bullet}X \text{ and } \Delta^{\bullet} \times P_{x,y}X \to P_{x,y}^{\bullet}X, $$
+
+which are the continuous mappings of cosimplicial spaces used when defining iterated integrals in Section 1.
+
+**14.4. Cochains.** Applying the singular cochain functor to a cosimplicial space $X[\bullet]$ yields a simplicial cochain complex. Taking alternating sums of the face maps, we get a double complex $S^{\bullet}(X[\bullet]; R)$ where
+
+$$ S^{t}(X[s]; R) $$
+
+sits in bidegree $(-s, t)$ and total degree $t - s$.¹⁴ The associated second quadrant spectral sequence is the Eilenberg-Moore spectral sequence.
+
+Elements of the corresponding total complex can be evaluated on singular chains
+$\sigma : \Delta^t \to \|X[\bullet]\|$ by replacing $\sigma$ by its adjoint
+
+$$ \hat{\sigma} : \Delta^{\bullet} \times \Delta^t \to X[\bullet]. $$
+
+To evaluate $c \in S^{\bullet}(X[s]; R)$ on $\sigma$, first subdivide $\Delta^s \times \Delta^t$ into simplices in the standard way and then evaluate $c$ on this subdivision of $\Delta^s \times \Delta^t \to X[s]$.
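The “standard way” referred to here is the shuffle (staircase) triangulation of $\Delta^s \times \Delta^t$, with one $(s+t)$-simplex per monotone lattice path from $(0,0)$ to $(s,t)$. A sketch of the enumeration (our own illustration, not from the text):

```python
from itertools import combinations
from math import comb

def shuffle_triangulation(s, t):
    """Vertex lists of the top-dimensional simplices in the staircase
    triangulation of Delta^s x Delta^t: one (s+t)-simplex per (s,t)-shuffle,
    i.e. per monotone lattice path from (0,0) to (s,t)."""
    simplices = []
    for rises in combinations(range(s + t), s):
        a = b = 0
        path = [(0, 0)]
        for step in range(s + t):
            if step in rises:
                a += 1  # a step in the Delta^s direction
            else:
                b += 1  # a step in the Delta^t direction
            path.append((a, b))
        simplices.append(path)
    return simplices

tri = shuffle_triangulation(2, 1)
assert len(tri) == comb(3, 2)           # 3 top simplices for Delta^2 x Delta^1
assert all(len(sx) == 4 for sx in tri)  # each has s + t + 1 = 4 vertices
```

Evaluating $c$ on the subdivision means summing its values over these $\binom{s+t}{s}$ simplices.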
+
+When $X$ is a manifold we can apply the de Rham complex, as above, to obtain a double complex $E^{\bullet}(X[\bullet])$, where $E^t(X[s])$ is placed in bidegree $(-s, t)$. Integration induces a map of double complexes
+
+$$ E^{\bullet}(X[\bullet]) \to S^{\bullet}(X[\bullet]; \mathbb{R}). $$
+
+This is a quasi-isomorphism as is easily seen using the Eilenberg-Moore spectral sequence.
+
+¹⁴Note that this has many elements of negative total degree.
+---PAGE_BREAK---
+
+When $X[\bullet]$ is a cosimplicial model of a path space, we can say more. I will treat the case of $P^{\bullet}X$; the case of $P_{x,y}^{\bullet}X$ is obtained from it by restriction.
+
+The first thing to observe is that $E^{\bullet}(X)^{\otimes(s+2)}$ can be used in place of
+
+$$
+E^{\bullet}(X^{s+2}) = E^{\bullet}(P^s X).
+$$
+
+The corresponding double complex has
+
+$$
+[E^{\bullet}(X)^{\otimes(s+2)}]^{t}
+$$
+
+in bidegree $(-s,t)$. The associated total complex is (essentially by definition) the
+unreduced bar construction $\hat{B}(E^{\bullet}(X), E^{\bullet}(X), E^{\bullet}(X))$ on $E^{\bullet}(X)$. Here $E^{\bullet}(X)$ is
+considered as a module over itself by multiplication. The chain maps
+
+$$
+\hat{B}(E^{\bullet}(X), E^{\bullet}(X), E^{\bullet}(X)) \to E^{\bullet}(P^{\bullet}X) \to S^{\bullet}(P^{\bullet}X; \mathbb{R})
+$$
+
+are quasi-isomorphisms (use the Eilenberg-Moore spectral sequence). Similarly, in the case of $P_{x,y}^{\bullet}X$,
+
+$$
+(15) \qquad \hat{B}(\mathbb{R}, E^{\bullet}(X), \mathbb{R}) \to E^{\bullet}(P_{x,y}^{\bullet}X) \to S^{\bullet}(P_{x,y}^{\bullet}X; \mathbb{R})
+$$
+
+are quasi-isomorphisms.
+
+We can get cochains on $PX$ by pulling back these along the inclusion $PX \hookrightarrow \|P^\bullet X\|$, which allows us to evaluate elements of $S^\bullet(P^\bullet X; R)$ on singular simplices $\sigma : \Delta^t \to PX$ as above. In particular, if $\sigma$ is smooth and
+
+$$
+w' \otimes w_1 \otimes \cdots \otimes w_s \otimes w'' \in E^{\bullet}(X)^{\otimes(s+2)},
+$$
+
+then
+
+$$
+\begin{align*}
+\langle \sigma, w' \times w_1 \times w_2 \times \dots \times w_s \times w'' \rangle &= \int_{\hat{\sigma}} (w' \times w_1 \times w_2 \times \dots \times w_s \times w'') \\
+&= \pm \langle \sigma, p_0^* w' \wedge (\int w_1 \dots w_r) \wedge p_1^* w'' \rangle
+\end{align*}
+$$
+
+where the sign depends on one's conventions. Thus the cosimplicial constructions
+naturally lead to Chen's iterated integrals.
+
+In the case of $P_{x,y}X$, the chain mapping $B(\mathbb{R}, E^{\bullet}(X), \mathbb{R}) \to \hat{B}(\mathbb{R}, E^{\bullet}(X), \mathbb{R})$ is a quasi-isomorphism. The cohomology of $S^{\bullet}(P_{x,y}^{\bullet}X; \mathbb{Z})$ then provides the cohomology of iterated integrals with the integral structure described in Paragraph 7.1 via (15).
+
+**14.5. Back to Adams.** What is missing from the story so far is chains, which are useful, if not essential, for computing periods of iterated integrals and mixed Hodge structures. They are especially useful in situations where the de Rham theorem is not true for loop spaces, but where the cohomology of iterated integrals has geometric meaning. Adams’ original work constructs cubical chains on $P_{x,x}X$ from certain singular chains on $X$.
+
+Denote the unit interval by $I$ and let $e_j^0, e_j^1: I^{n-1} \to I^n$ be the $j$th bottom and top face maps of the unit $n$-cube:
+
+$$
+e_j^\epsilon : (t_1, \dots, t_{n-1}) \mapsto (t_1, \dots, t_{j-1}, \epsilon, t_j, \dots, t_{n-1}).
+$$
+
+For $0 \le j \le n$, let $f_j : \Delta^j \to \Delta^n$ and $r_j : \Delta^j \to \Delta^n$ denote the front and rear $j$-faces of $\Delta^n$. These correspond to the order preserving injections $[j] \to [n]$ uniquely determined by $f_j(j) = j$ and $r_j(0) = n-j$.
+---PAGE_BREAK---
+
+The starting point is to construct continuous maps¹⁵
+
+$$ \theta_n : I^{n-1} \to P_{0,n} \Delta^n $$
+
+with the property that when $0 < j < n$,¹⁶
+
+$$ (16) \quad \theta_n \circ e_j^0 = P(d^j) \circ \theta_{n-1} \quad \text{and} \quad \theta_n \circ e_j^1 = (P(f_j) \circ \theta_j) * (P(r_{n-j}) \circ \theta_{n-j}). $$
+
+These are easily constructed by induction on $n$ using the elementary fact that $P_{0,n}\Delta^n$ is contractible. When $n=1$, the unique point of $I^0$ goes to any path from 0 to 1 in $\Delta^1$. The cases $n=2$ and $3$ are illustrated in Figure 2.
+
+FIGURE 2. $\theta_2$ and $\theta_3$
+
+For a pointed topological space $(X, x)$, let $S^{(0)}(X, x)$ be the subcomplex of the singular chain complex generated by those singular simplices $\sigma : \Delta^n \to X$ that map all vertices of $\Delta^n$ to $x$. If $X$ is path connected, this computes the integral homology of $X$.
+
+For each such singular simplex $\sigma : \Delta^n \to X$, we have the singular cube $P(\sigma) \circ \theta_n : I^{n-1} \to P_{x,x}X$. Set
+
+$$ (17) \qquad [\sigma] = \begin{cases} P(\sigma) \circ \theta_n - c_x & n=1; \\ P(\sigma) \circ \theta_n & n > 1, \end{cases} $$
+
+where $c_x$ denotes the constant loop at $x$. Set¹⁷
+
+$$ [\sigma_1 | \sigma_2 | \dots | \sigma_s] = [\sigma_1] * [\sigma_2] * \dots * [\sigma_s]. $$
+
+This extends to an algebra mapping
+
+$$ \bigoplus_{s \ge 0} (S^{(0)}_{>0}(X, x))^{\otimes s} \to \{\text{reduced cubical chains on } P_{x,x}X\} $$
+
+$$ \sigma_1 \otimes \cdots \otimes \sigma_s \mapsto [\sigma_1] \cdots [\sigma_s], $$
+
+which is easily seen to be injective. The formula (16) implies that
+
+$$ (18) \qquad \partial[\sigma] = -[\partial\sigma] + \sum_{1 \le j < n} (-1)^j [\sigma_{(j)}|\sigma^{(n-j)}], $$
+
+where $\sigma_{(j)}$ denotes the front $j$th face and $\sigma^{(n-j)}$ the rear $(n-j)$th face of $\sigma$.
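Representing a singular simplex schematically by its ordered vertex list, the terms $[\sigma_{(j)}|\sigma^{(n-j)}]$ in the differential are easy to enumerate. A toy sketch (our own code; genuine singular simplices are maps, not vertex lists):

```python
def front_face(v, j):
    """Front j-th face: restriction to the first j+1 vertices."""
    return v[:j + 1]

def rear_face(v, j):
    """Rear j-th face: restriction to the last j+1 vertices."""
    return v[len(v) - (j + 1):]

sigma = list(range(5))  # schematic 4-simplex with ordered vertices 0..4
n = len(sigma) - 1
# the terms sigma_(j) | sigma^(n-j) appearing in the differential, 1 <= j < n
terms = [(front_face(sigma, j), rear_face(sigma, n - j)) for j in range(1, n)]
# front and rear faces always share the middle vertex j
assert all(f[-1] == r[0] for f, r in terms)
```

The shared vertex reflects the fact that the two paths in (16) are concatenated at the vertex $j$ of $\Delta^n$.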
+
+¹⁵With care, these can be made smooth; details can be found in [15].
+
+¹⁶For $\alpha: U \to P_{x,y}X$ and $\beta: V \to P_{y,z}X$, define $\alpha*\beta: U \times V \to P_{x,z}X$ by $(u,v) \mapsto \alpha(u)\beta(v)$.
+
+¹⁷Strictly speaking, we need to use Moore paths as we need path multiplication to be associative.
+---PAGE_BREAK---
+
+Adams' cobar construction is, by definition, the free associative algebra
+
+$$
+\mathrm{Ad}(S_{\bullet}^{(0)}(X, x)) = \bigoplus_{s \ge 0} (S_{>0}^{(0)}(X, x))^{\otimes s}
+$$
+
+on $S_{\bullet}^{(0)}(X, x)$ with the differential (18), where $\sigma_1 \otimes \cdots \otimes \sigma_s$ has degree $-s + \sum \deg \sigma_j$.
+This is an augmented associative dga, whose augmentation ideal is generated
+by the $[\sigma]$. Adams' main result may be stated by saying that the chain mapping
+
+$$
+\mathrm{Ad}(S_{\bullet}^{(0)}(X, x)) \to \{\text{reduced cubical chains on } P_{x,x}X\}
+$$
+
+is a quasi-isomorphism when $X$ is simply connected. Stallings' result [53] for $H_0$ is
+more elementary.
+
+**Proposition 14.3.** If *X* is path connected, then there are natural augmentation preserving algebra isomorphisms
+
+$$
+H_0(\mathrm{Ad}(S_{\bullet}^{(0)}(X, x))) \cong H_0(P_{x,x}X; \mathbb{Z}) \cong \mathbb{Z}\pi_1(X, x).
+$$
+
+*Sketch of proof.* The second isomorphism follows directly from the definitions. We will show that $H_0(\mathrm{Ad}(S_\bullet^{(0)}(X, x)))$ is isomorphic to $\mathbb{Z}\pi_1(X, x)$. Let $\mathrm{Simp}_\bullet(X, x)$ denote the simplicial set whose $k$-simplices consist of all singular simplices $\sigma : \Delta^k \to X$ that map all vertices of $\Delta^k$ to $x$. After unraveling the definitions (17) and (18), we see that $H_0(\mathrm{Ad}(S_\bullet^{(0)}(X, x)))$ is the algebra generated by the 1-simplices $\mathrm{Simp}_1(X, x)$ (augmented by taking the generator corresponding to each 1-simplex to 1) divided out by the ideal generated by $\sigma_{01} - \sigma_{02} + \sigma_{12}$, where $\sigma \in \mathrm{Simp}_2(X, x)$ and $\sigma_{jk}$ is the singular 1-simplex obtained by restricting $\sigma$ to the edge $jk$ of $\Delta^2$. It follows from van Kampen's Theorem that $H_0(\mathrm{Ad}(S_\bullet^{(0)}(X, x)))$ is naturally isomorphic to the integral group ring of the fundamental group of the geometric realization of $\mathrm{Simp}_\bullet(X, x)$. The result follows as the tautological mapping $|\mathrm{Simp}_\bullet(X, x)| \to X$ is a weak homotopy equivalence. □
+
+With the standard diagonal mapping
+
+$$
+\Delta : S_{\bullet}^{(0)}(X, x) \rightarrow S_{\bullet}^{(0)}(X, x) \otimes S_{\bullet}^{(0)}(X, x), \quad \sigma \mapsto \sum_{0 0$.
+
+Putting it all together now:
+
+$$
+\exp(\lambda_0 - 1) = \left( \frac{p}{1-p} + 1 \right)^{-M} = \left( \frac{1}{1-p} \right)^{-M} = (1-p)^M \quad (109)
+$$
+
+and
+
+$$
+\exp(\lambda_1)^N = \left( \frac{p}{1-p} \right)^N = \left( \frac{1-p}{p} \right)^{-N} = \left( \frac{1}{p}-1 \right)^{-N} \quad (110)
+$$
+
+which lets us finally show that
+
+$$
+p_N = \frac{M!}{N!(M-N)!} \cdot (1-p)^M \cdot \left(\frac{1}{p}-1\right)^{-N} \quad (111)
+$$
+
+or, simply,
+
+$$
+p_N = \frac{M!}{N!(M-N)!}\, p^N (1-p)^{M-N} \qquad (112)
+$$
+
+which is the binomial distribution.
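As a numerical sanity check (our own code, not part of the derivation), the intermediate form (111) and the final binomial form (112) agree, and (112) is normalized:

```python
from math import comb, isclose

def p_N_111(M, N, p):
    # intermediate form (111): binom(M,N) * (1-p)^M * (1/p - 1)^(-N)
    return comb(M, N) * (1 - p) ** M * (1 / p - 1) ** (-N)

def p_N_112(M, N, p):
    # binomial distribution (112)
    return comb(M, N) * p ** N * (1 - p) ** (M - N)

M, p = 10, 0.3
assert all(isclose(p_N_111(M, N, p), p_N_112(M, N, p)) for N in range(M + 1))
assert isclose(sum(p_N_112(M, N, p) for N in range(M + 1)), 1.0)
```

The equivalence is just $(1/p - 1)^{-N} = \big((1-p)/p\big)^{-N} = p^N (1-p)^{-N}$.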
+
+---PAGE_BREAK---
+
+# Multidimensional dark space and its underlying symmetries: towards dissipation-protected qubits
+
+Raul A. Santos¹*, Fernando Iemini²,³, Alex Kamenev⁴,⁵ and Yuval Gefen⁶
+
+¹T.C.M. Group, Cavendish Laboratory, University of Cambridge,
+J.J. Thomson Avenue, Cambridge, CB3 0HE, United Kingdom
+
+²Instituto de Física, Universidade Federal Fluminense, 24210-346 Niterói, Brazil
+
+³Abdus Salam ICTP, Strada Costiera 11, I-34151 Trieste, Italy
+
+⁴School of Physics and Astronomy, University of Minnesota, Minneapolis, Minnesota 55455, USA
+
+⁵William I. Fine Theoretical Physics Institute, University of Minnesota, Minneapolis, Minnesota 55455, USA and
+
+⁶Department of Condensed Matter Physics, The Weizmann Institute of Science, Rehovot 76100, Israel
+
+(Dated: January 2020)
+
+Quantum systems are always subject to interactions with an environment, typically resulting in decoherence and distortion of quantum correlations. It has been recently shown that a controlled interaction with the environment may actually help to create a state, dubbed “dark”, which is immune to decoherence. To encode quantum information in the dark states, they need to span a space with a dimensionality larger than one, so that different orthogonal states can act as a computational basis. We devise a symmetry-based conceptual framework to engineer such degenerate dark spaces (DDS), protected from decoherence by the environment. We illustrate this construction with a model protocol, inspired by the fractional quantum Hall effect, where the DDS basis is isomorphic to a set of degenerate Laughlin states. The long-time steady state of our driven-dissipative model thus exhibits all the characteristics of degenerate vacua of a unitary topological system. This approach offers new possibilities for storing, protecting and manipulating quantum information in open systems.
+
+It is believed that dissipation conspires against the coherence of quantum states, rendering them close to a classical ensemble. This belief was recently challenged by approaches aimed at incorporating both drive and dissipation to reach a correlated coherent steady state¹⁻¹¹. One remarkable example has been the idea of harnessing dissipation to purify non-trivial topological states¹²⁻¹⁴. This is achieved by a careful interplay between a radiation-induced drive and coupling to an external bath that provides a desired relaxation channel. A sequence of excitations and relaxations generates, in the long-time limit, a non-equilibrium steady state that decouples from the external drive, creating a decoherence-free subspace¹⁵,¹⁶ dubbed a *dark* state. This idea opens a way to engineer a rich variety of non-trivial stationary states, going well beyond thermal states of equilibrium systems.
+
+In order to use this approach to design (and ultimately manipulate) qubits, it is necessary to engineer a non-equilibrium steady space, which is at least two-dimensional¹⁴,¹⁷. Here we develop a framework to construct driven dissipative schemes with degenerate dark
+
+spaces (DDS). We achieve this goal by analyzing the role of symmetries in dissipative dynamics. Specifically, we claim that the dimensionality of DDS is given by the period of the projective symmetry representation¹⁸, inherent to the system’s evolution (which is considered to be Lindbladian¹⁹,²⁰). To this end we extend the discussion of Lindbladian symmetries²¹⁻²³ to include those that are realized projectively, providing a link between the projective representations and the dimensionality of the DDS density operator.
+
+We illustrate this framework by studying driven dissipative evolution of a correlated one-dimensional (1D) system, inspired by Laughlin quantum Hall states with $\nu = 1/m$ filling fractions ($m$ is an odd integer) in a quasi-1D strip (the so-called “thin torus limit”). This evolution is described by a Lindbladian master equation that possesses a DDS. The latter is spanned by $m$ orthogonal vectors, isomorphic to the set of many-body Laughlin ground states on the torus. This correlated DDS has an extra advantage of being exactly described by computationally convenient matrix product states (MPS). We design a systematic protocol, based on adiabatically varied Lindbladians, that maximizes the purity and fidelity (overlap with the dark space) of its ultimate steady states.
+
+As a warm-up for the symmetry discussion, let us consider the Hamiltonian case. If a Hamiltonian is invariant under the action of a symmetry group $G$, then the action of the group elements $g \in G$ on a state is implemented by a unitary representation¹⁸. In particular, the eigenstates of the Hamiltonian can be labeled by eigenvalues of the symmetry operator. As quantum states form rays in the Hilbert space, such that the states $|\psi\rangle$ and $e^{i\phi}|\psi\rangle$ are equivalent, it is natural to consider representations that satisfy the group multiplication rule only up to a phase, i.e. projective representations¹⁸, defined as
+
+$$D(g_1)D(g_2) = e^{i\phi(g_1,g_2)}D(g_1g_2), \quad (1)$$
+
+where $D(g)$ is a representation of a group element $g \in G$. Every projective representation is characterized by the set of phases $\omega_2(g_1,g_2) = e^{i\phi(g_1,g_2)}$, known as a 2-cocycle, which are strongly constrained by associativity¹⁸. For a quantum system invariant under the projective representation (1), the period of the 2-cocycle (i.e. the minimum
+---PAGE_BREAK---
+
+number $m$ such that $[\omega_2(g_1, g_2)]^m = 1$ for all $g_1, g_2$) determines the dimension of the degenerate space. Given a non-trivial 2-cocycle, representations of at least some group elements do not commute, even for an Abelian group $\mathcal{G}$, i.e. for some $g, h \in \mathcal{G}$, $[D(g), D(h)] \neq 0$. One can thus label the eigenstates by eigenvalues of, say, $D(g)$ and generate a different state with the same energy by acting on it with $D(h)$. Notice that this argument implies degeneracy of the entire spectrum.
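A minimal concrete instance of such a projective representation (our own sketch, not from the paper): the $m \times m$ clock and shift matrices represent $\mathbb{Z}_m \times \mathbb{Z}_m$ projectively, with commutator phase $\omega = e^{2\pi i/m}$ of period $m$, here for $m = 3$:

```python
import cmath

m = 3
w = cmath.exp(2j * cmath.pi / m)  # the 2-cocycle phase, of period m

# clock Z: e_j -> w^j e_j ;  shift X: e_j -> e_{(j+1) mod m}
Zc = [[w ** i if i == j else 0 for j in range(m)] for i in range(m)]
Xs = [[1 if (i - j) % m == 1 else 0 for j in range(m)] for i in range(m)]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(m)) for j in range(m)]
            for i in range(m)]

# non-commuting projective representation: Z X = w * X Z
ZX, XZ = matmul(Zc, Xs), matmul(Xs, Zc)
assert all(abs(ZX[i][j] - w * XZ[i][j]) < 1e-12
           for i in range(m) for j in range(m))
assert abs(w ** m - 1) < 1e-12  # cocycle phase has period m
```

The pair plays exactly the role described above: labeling eigenstates by the clock operator and permuting them with the shift operator produces an $m$-fold degenerate multiplet.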
+
+**Degenerate dark states in Lindbladian evolution.-** Symmetry operators act on the Lindbladian quite differently from the Hamiltonian case. In a system with combined unitary and dissipative dynamics, the most general Markovian evolution of the density matrix $\rho$ is described by the quantum master equation, $\dot{\rho} = \mathcal{L}(\rho)$, with
+
+$$ \mathcal{L}(\rho) = -i[H, \rho] + \sum_i \left( \ell_i \rho \ell_i^\dagger - \frac{1}{2} (\ell_i^\dagger \ell_i \rho + \rho \ell_i^\dagger \ell_i) \right). \quad (2) $$
+
+Here *H* is an effective Hamiltonian that generates the unitary part of the evolution. The quantum jump operators $\ell_i$, which are generically non-Hermitian, describe environment-induced dissipation¹⁹,²⁰,²⁴.
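Eq. (2) translates directly into code. The sketch below (our construction; the function name and the single-qubit example are not from the text) evaluates $\mathcal{L}(\rho)$ for matrix-valued inputs and checks, for a decaying qubit, that a dark state is stationary and that the evolution is trace preserving:

```python
import numpy as np

def lindblad_rhs(rho, H, jumps):
    """Evaluate L(rho) of Eq. (2): commutator plus dissipator."""
    drho = -1j * (H @ rho - rho @ H)
    for l in jumps:
        ld = l.conj().T
        drho += l @ rho @ ld - 0.5 * (ld @ l @ rho + rho @ ld @ l)
    return drho

# Single qubit with amplitude damping: H = 0, jump operator sigma_minus.
sm = np.array([[0., 1.], [0., 0.]])        # maps |1> to |0>
H = np.zeros((2, 2))

# The empty state is dark (sm @ ground = 0), hence stationary.
ground = np.diag([1.0, 0.0])
assert np.allclose(lindblad_rhs(ground, H, [sm]), 0)

# The generator is trace preserving for an arbitrary density matrix.
rng = np.random.default_rng(0)
A = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
rho = A @ A.conj().T
rho /= np.trace(rho).real
assert abs(np.trace(lindblad_rhs(rho, H, [sm]))) < 1e-12
```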
+
+A Lindbladian is invariant under an irreducible unitary representation $D(g)$ with $g$ an element of $\mathcal{G}$, if the Hamiltonian and the quantum jump operators satisfy²¹,²²
+
+$$ D(g)HD^{\dagger}(g) = H; \quad D(g)\ell_i D^{\dagger}(g) = \sum_j U_{ij}^{(g)} \ell_j, \quad (3) $$
+
+(also known as weak symmetry²¹), where $U^{(g)}$ is a unitary matrix that depends on $g$. In particular, if $\sigma$ is an eigenmatrix of $\mathcal{L}$ with an eigenvalue $\lambda_\sigma$, i.e. $\mathcal{L}(\sigma) = \lambda_\sigma\sigma$, then $D(g)\sigma D^\dagger(g)$ is an eigenmatrix with the same eigenvalue. The eigenmatrices obtained from $\sigma$ by conjugation can represent either the same or different states. For a projective representation $D(g)$ satisfying (1), $D(g)\sigma D^\dagger(g)$ and $\sigma$ are necessarily different for some element $g$. Indeed, if for a particular element $h \in \mathcal{G}$, $D(h)\sigma D^\dagger(h) = \sigma$, then $\sigma$ and $D(h)$ share eigenvectors, so one can take an element $g$ such that $D(g)$ does not commute with $D(h)$. Given that $[D(g), D(h)] \neq 0$, $D(g)\sigma D^\dagger(g)$ and $\sigma$ do not share eigenvectors, meaning that they are different. By virtue of Schur's lemma¹⁸, the only case where this logic fails is for $\sigma$ a fully mixed state, proportional to the identity matrix.
+
+Focusing on the case of projective representations of Abelian groups, where $D(h)D(g) = e^{i\tilde{\phi}(h,g)}D(g)D(h)$ (here $\tilde{\phi}(g_1,g_2) = \phi(g_1,g_2) - \phi(g_2,g_1)$ is again a cocycle), one can determine the dimension of the degenerate subspace in terms of the factor $e^{i\tilde{\phi}(h,g)}$. Consider an eigenvector $|e\rangle$ of $D(h)$ with eigenvalue $e^{i\alpha}$; the operator $D(g)$ acts on this state as a cyclic raising operator, since $D(h)D(g)|e\rangle = e^{i(\alpha+\tilde{\phi}(h,g))}D(g)|e\rangle$. For $\tilde{\phi}(h,g) = 2\pi a/m$, with *a*, *m* co-prime integers, one can raise a state *m* times before it returns to itself. This implies that all eigenspaces of $\mathcal{L}$ are *m*-fold degenerate. In particular,
+
+the stationary subspace, defined by the eigenvalue $\lambda = 0$, is *m*-dimensional. Note that the degeneracies of the eigenspaces of $\mathcal{L}$ do not, in general, translate into degeneracies of the density matrix, as the eigenmatrices of $\mathcal{L}$ need not be self-adjoint, but can come in pairs $\{\sigma, \sigma^\dagger\}$ with eigenvalues $\{\lambda_\sigma, \lambda_\sigma^*\}$. For the DDS this is not a problem, as $\lambda = 0$ is real.
+
+Although the conditions presented above are sufficient for the existence of a DDS, they are overly restrictive and impractical. Indeed, the symmetry operators $D(g)$ split the entire Hilbert space into sectors of different quantum numbers that do not mix during the evolution. In other words, there is a number of conservation laws, which confine the long-time evolution of any initial state to only a limited fraction of the DDS. To access the entire DDS, this needs to be avoided. To achieve this, we consider states $\rho$ in the DDS that satisfy
+
+$$ \ell_i \rho = 0 \text{ for all } i. \quad (4) $$
+
+We call such a DDS frustration free, as $\rho$ is annihilated individually by each quantum jump operator $\ell_i$. For systems that satisfy Eq. (4), one can deform the quantum jump operators in such a way that the symmetry is broken in all the decaying subspaces, while it is maintained within the DDS. With this in mind, we define dressed quantum jump operators, which do not satisfy Eq. (3), as $\tilde{\ell}_i = R_i^\dagger \ell_i$, where $R_i^\dagger$ are for now arbitrary local operators. The dynamics generated by the dressed operators does not, in general, obey any conservation laws. Yet $\tilde{\ell}_i$ still satisfy Eq. (4), which preserves the DDS and its degenerate multidimensional nature. For properly dressed quantum jump operators, a generic initial state evolves into a state within the DDS that has projections on all of its basis eigenmatrices.
+
+Below we demonstrate these considerations on a 1D model, borrowing intuition from the well-studied physics of the Laughlin states on a torus²⁵,²⁶. In particular, we demonstrate that the system is driven to the DDS regardless of the nature of the initial state (pure or mixed). We also devise adiabatic time-dependent Lindbladians that guarantee that the initial state is fully driven into the DDS, resulting in a state with maximized purity.
+
+**Laughlin states in a narrow torus geometry.-**
+
+A quantum Hall droplet of *N* electrons subject to a magnetic flux $N_\Phi = mN$ (in units of the flux quantum) and filling fraction $N/N_\Phi = \nu = 1/m$ (with odd integer *m*) develops nontrivial correlations that are reproduced by the Laughlin wavefunctions²⁷. On a torus with periods $L_x$ and $L_y$ (with distances measured in units of the magnetic length), the area is related to the flux as $L_x L_y = 2\pi N_\Phi$. The Laughlin states at filling $\nu = 1/m$ correspond to exact zero energy states of a local Hamiltonian²⁸, which, after projection onto the lowest Landau level (LLL), takes the form $\mathcal{H} = \sum_n (\ell_{0,n}^\dagger \ell_{0,n} + \ell_{1,n}^\dagger \ell_{1,n})$, where the operators $\ell_{s,n}$ ($s = 0,1$) are²⁹ (see supplemental material)
+
+$$ \ell_{s,n} = \sum_{l \ge 0} \eta \left(l + \frac{s}{2}\right) c_{n-l-s} c_{n+l}. \quad (5) $$
+
+Here $c_n$ destroys an electron at orbital $n$; $\eta(l) \propto e^{-(\kappa l)^2}$ is a fast decaying function in the narrow torus limit, $\kappa^2 = \frac{2\pi}{N_\Phi} \frac{L_x}{L_y} \gg 1$ (see supplemental material). The crucial property of the Laughlin states $|\Psi\rangle$, which makes them useful for our discussion of the Lindbladian evolution, is that they satisfy $\ell_{s,n}|\Psi\rangle = 0$ for $s=0,1$ and all $n$.
+
+In the quantum Hall context, the operators $D(g)$ of Eq. (3) correspond to inserting fluxes through the two periods of the torus, Fig. 1. In the 1D representation they are the translation operator $T$ and the operator $U$, which measures the center of mass of the particles in orbital space. They are given by
+
+$$ T = \exp \left\{ \frac{2\pi i}{N_{\Phi}} \sum_{k=0}^{N_{\Phi}-1} k \hat{\tilde{n}}_k \right\}; \quad U = \exp \left\{ \frac{2\pi i}{N_{\Phi}} \sum_{l=1}^{N_{\Phi}} l \hat{n}_l \right\}, \qquad (6) $$
+
+where $\hat{n}_l$ and $\hat{\tilde{n}}_k$ are the number operators at position $l$ and at momentum $k$, respectively. Note that $T$ and $U$ satisfy $TUT^\dagger = e^{-2\pi i\nu}U$. They thus provide a projective representation of the group $G = \mathbb{Z}_m \times \mathbb{Z}_m$ with $m=1/\nu$. This group is represented by $D(g) = U^aT^b$, where $a,b=0,\dots,m-1$, while $U^m = T^m = \mathbf{1}$ and $UT = \omega TU$ with $\omega = \exp(2\pi i/m)$.
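The projective relation $TUT^\dagger = e^{-2\pi i \nu} U$ can be checked by brute force on a small system. In the sketch below (our construction) the fermionic signs drop out of the conjugation, so $T$ is implemented as a plain permutation of occupation configurations in the fixed-$N$ sector:

```python
import itertools
import numpy as np

Nphi, N = 6, 2                    # 6 orbitals, 2 fermions: filling nu = 1/3
nu = N / Nphi

# Occupation basis of the fixed-N sector.
basis = [s for s in itertools.product((0, 1), repeat=Nphi) if sum(s) == N]
index = {s: i for i, s in enumerate(basis)}
dim = len(basis)

# U of Eq. (6): diagonal, measures the centre of mass in orbital space.
U = np.diag([np.exp(2j * np.pi / Nphi * sum((l + 1) * n for l, n in enumerate(s)))
             for s in basis])

# T: translation of all occupations by one orbital. Any fermionic string sign
# would appear twice in T U T^dag and cancel, so a permutation suffices here.
T = np.zeros((dim, dim), dtype=complex)
for s in basis:
    t = s[-1:] + s[:-1]           # shift site l -> l + 1 (mod Nphi)
    T[index[t], index[s]] = 1.0

# Projective commutation relation between the two flux insertions.
assert np.allclose(T @ U @ T.conj().T, np.exp(-2j * np.pi * nu) * U)
```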
+
+One can choose a basis $\{|\Psi_a\rangle\}$, such that e.g. operator $U$ is diagonal, with its eigenvalues, $\omega^a$, along the diagonal. In this basis, the operator $T$ acts as a raising operator, because, if $U|\Psi_a\rangle = e^{i\frac{2\pi a}{m}}|\Psi_a\rangle$, then $UT|\Psi_a\rangle = e^{i\frac{2\pi(a+1)}{m}}T|\Psi_a\rangle$. In the context of the quantum Hall effect, states $|\Psi_a\rangle$ are the $m$-fold degenerate Laughlin ground states on the torus$^{25,30}$.
+
+Note that, although the construction of the Lindbladian using the projective symmetry ensures that the eigenvalue $\lambda = 0$ of $\mathcal{L}$ is $m$-fold degenerate, the frustration-free condition Eq. (4) enlarges the degeneracy of $\lambda = 0$ to $m^2$. This is seen as follows. The symmetry operators $U$ and $T$ ensure that the density matrices $|\Psi_a\rangle\langle\Psi_a|$ for $a=0,\dots,m-1$ share the same eigenvalue (they are related by conjugation with $T$). More generally, the matrices $|\Psi_a\rangle\langle\Psi_{a+p}|$ are related, for a fixed $p$, by conjugation with $T$. These $m$ different families (each labeled by $p=0,\dots,m-1$) then share the same eigenvalue among them due to the frustration-free condition, which ensures that if the states $|\Psi_a\rangle$ are annihilated by the quantum jump operators, then so are all the matrices $|\Psi_a\rangle\langle\Psi_b|$.
+
+**Lindblad operators from quantum Hall physics.-** In the narrow torus limit, $\kappa \gg 1$, one can truncate the expressions for the operators $\ell_{s,n}$ in Eq. (5), which become short-range in $n$. In this limit we have$^{31}$
+
+$$ \ell_{0,i} = c_i c_{i+2} \quad \text{and} \quad \ell_{1,i} = c_i c_{i+1} + \beta c_{i-1} c_{i+2}, \qquad (7) $$
+
+where $\beta = \eta(\frac{3}{2})/\eta(\frac{1}{2}) = 3e^{-2\kappa^2}$. These operators transform as $U\ell_{s,j}U^\dagger = e^{\frac{4\pi i}{N_\Phi}(j+1-\frac{s}{2})}\ell_{s,j}$ and $T\ell_{s,j}T^\dagger = \ell_{s,j+1}$. Hereafter we regard $\beta$ as an arbitrary parameter. We now employ these results to construct quantum jump operators that drive the system into the frustration free DDS. In contrast with the $\ell_{s,i}$ operators in the previous section,
+
+FIG. 1. LLL projection and flux insertion in the quantum Hall liquid. The LLL projection of a quantum Hall liquid on a torus maps the two dimensional state into a one-dimensional ring of particles in orbital space. Inserting a flux quantum through one of the two cycles of the torus (depicted in red and blue) corresponds to a unitary operation that acts between the different ground states on the torus. In the one dimensional representation, these unitary operations correspond to translation of the guiding centers by one orbital ($T$), or multiplication by a phase ($U$), depicted by blue and red arrows, respectively.
+
+here these operators represent processes in a real lattice of $N_\Phi$ sites, where $c_i$ destroys a fermion at the lattice site $i$. We assume $m=3$ and the fermion density $\nu=1/3$.
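That the truncated operators of Eq. (7) stabilize the CDW configurations can be made explicit with Jordan-Wigner fermions on a short chain. In this sketch (our own check, on an $L=6$ chain), every $\ell_{0,i}$ and $\ell_{1,i}$ annihilates the CDW state $|\bullet\circ\circ\bullet\circ\circ\rangle$ at $\beta = 0$, while a nonzero $\beta$ spoils this:

```python
import numpy as np
from functools import reduce

L = 6
I2, Z = np.eye(2), np.diag([1.0, -1.0])
a = np.array([[0., 1.], [0., 0.]])           # on-site annihilation |occupied> -> |empty>

def c(i):
    """Jordan-Wigner annihilation operator c_i on an L-site chain (0-based)."""
    return reduce(np.kron, [Z] * i + [a] + [I2] * (L - i - 1))

def l0(i):
    return c(i) @ c((i + 2) % L)             # first operator of Eq. (7)

def l1(i, beta=0.0):
    return c(i) @ c((i + 1) % L) + beta * c((i - 1) % L) @ c((i + 2) % L)

# CDW state at filling 1/3: occupied sites 0 and 3.
e0, e1 = np.array([1., 0.]), np.array([0., 1.])
cdw = reduce(np.kron, [e1, e0, e0, e1, e0, e0])

# At beta = 0 the CDW is annihilated by every l_{0,i} and l_{1,i} ...
assert all(np.allclose(l0(i) @ cdw, 0) for i in range(L))
assert all(np.allclose(l1(i) @ cdw, 0) for i in range(L))
# ... but not for beta != 0: the pair (i-1, i+2) = (0, 3) is occupied for i = 1.
assert not np.allclose(l1(1, beta=1.0) @ cdw, 0)
```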
+
+We also assume a purely dissipative evolution $\dot{\rho} = \mathcal{L}(\rho)$, Eq. (2) with $H=0$, and the quantum jump operators $\tilde{\ell}_i = R_i^\dagger Q_i$ with $R_i^\dagger = c_i^\dagger c_{i+1}^\dagger + t(\ell_{0,i}^\dagger + \ell_{0,i+1}^\dagger)$, and
+
+$$ Q_i = \ell_{1,i} + A(\ell_{0,i-1} + \ell_{0,i+1}) + B(\ell_{1,i+1} + \ell_{1,i-1}). \qquad (8) $$
+
+Here the operators $Q_i$ are linear combinations of the operators that annihilate the Laughlin states. The parameters $t$, $A$, $B$ and $\beta$ are determined by the realization of the dissipative dynamics and are non-universal. The DDS depends only on $\beta$. For the purposes of this work we take them as free parameters.
+
+**Realization of Lindblad operators $\tilde{\ell}_i$.-** The dissipative evolution dictated by the $Q_i$ operators can be obtained by coupling two systems: a target fermion chain (where $c_i$ destroys a fermion at site $i$) with no intrinsic dynamics ($H_0=0$) and a fermionic chain with Hubbard interactions described by the Hamiltonian
+$$ H_1 = \sum_i \left[ J_1(f_i^\dagger f_{i+1} + \text{h.c.}) - U n_i n_{i+1} + E_1 n_i \right], $$
+where $f_i$ destroys a fermion at position $i$ in this chain and $n_i = f_i^\dagger f_i$. We assume that the chemical potential $E_1$ is the largest energy scale in the system. The two chains interact through external classical radiation, with the coupling Hamiltonian $H_{\text{rad}} = \Omega \cos(\omega t) \sum_i f_i^\dagger (c_i + \alpha(c_{i-1} + c_{i+1})) + \text{h.c.}$, where $\Omega$ is the intensity of the radiation, $\alpha$ parameterizes the spatial laser envelope, and $\omega$ is the frequency of the monochromatic light. The role of the driving is to excite particles from the target chain to the interacting chain. The particles then relax to the lower energy state by interacting with the bath, which provides dissipation. These components are shown in Fig. 2a.
+
+As a motivation for this construction, let us consider how a charge-density-wave (CDW) state decouples from the dynamics in this setting, in the limit $J_1 \ll U$ and $\alpha = 0$, becoming a dark state. At first order in $\Omega$, exciting a single particle from the c to the f chain is strongly suppressed, as $E_1$ is large. At second order (amplitude $\sim \Omega^2/(2E_1-U-\omega)$), however, the radiation Hamiltonian can transfer two particles to the f chain, which can bind into a doublon, a tightly bound pair of fermions (due to the Hubbard attraction). The wavefunction of the doublon decays exponentially with the distance $d$ between its two constituents as $t^d$, with $t \sim J_1/U \ll 1$. This means that, for the laser to create a doublon, the particles in the c chain should be near each other, as the laser acts locally. In particular, a state that locally contains nearby fermions, $c_i^\dagger c_{i+1}^\dagger|0\rangle$, will be affected by the radiation, as will $c_i^\dagger c_{i+2}^\dagger|0\rangle$, where $|0\rangle$ is the state with no particles. The first local configuration that is not affected by the radiation, becoming dark, is $c_i^\dagger c_{i+3}^\dagger|0\rangle$, as the fermions are too far apart to be excited into a configuration with a non-vanishing matrix element with the doublon. The whole system thus decouples once it reaches one of the CDW states. Increasing the range of the laser (by letting $\alpha \neq 0$) creates superpositions of such configurations.
+
+The Lindblad operators (8) are obtained by considering transitions between the doublon band and the low energy band of the system (more details in the supplemental information). After performing the rotating wave approximation to account for the time dependence of the radiation Hamiltonian, the dynamics of the system occurs between the lower (c) band and the doublon band (see Fig. 2b, upper panel). Using second order perturbation theory in $\Omega$, we obtain the matrix elements of the transitions between the doublon states $|d_i\rangle = f_i^\dagger f_{i+1}^\dagger|0\rangle$ and the lower band states $|i,j\rangle = c_i^\dagger c_j^\dagger|0\rangle$. The transition process from the lower band to the doublon reads (after adiabatic elimination of the doublon) $\tilde{\ell}_i = A_\Omega^2 c_i^\dagger c_{i+1}^\dagger Q_i$, valid for $t = \alpha^3 \ll 1$, with $Q_i$ given in (8). The prefactor is $A_\Omega \sim \Omega^2/(2E_1 - U - \omega)$, while $A$, $B$ and $\beta$ entering the definition of $Q_i$ satisfy $A = \alpha$, $B = \alpha A_\Omega$ and $\beta = \alpha^2$. The fermion operators in $Q_i$ all act on the chain c. Finally, taking into account the transitions from the doublon back to the c chain, which are mainly mediated by the dissipation into the bath, and integrating out the doublon states, we arrive at the Lindblad operators (8).
+
+**Structure of the DDS.-** Heuristically, one can understand the roles of $R_i^\dagger$ and $Q_i$ as follows. The operator $Q_i$ checks whether, at site $i$, the state matches the local configuration of one of the Laughlin-like states. If it does, the operator gives zero and the system stops evolving locally; if not, the operator $R_i^\dagger Q_i$ scrambles the particles. As long as this process can efficiently mix the particles locally, all states in the Hilbert space eventually evolve into the DDS, spanned by the three Laughlin states. Crucially, the decay into these states is a consequence of the projective symmetry, enforcing the existence of the degenerate space with $Q_i\rho = 0$ and thus $\dot{\rho} = \mathcal{L}(\rho) = 0$.
+
+The basis of such DDS is formed by the Laughlin states
+
+FIG. 2. Implementation of Lindblad operators. a.- Two chains, c and f, filled with fermions, are immersed in a bath, realized as a Bose-Einstein condensate (BEC). Transitions between the two chains, which have different chemical potentials, are mediated by the absorption of a photon from an external laser (blue wavy lines), or by the relaxation induced by the bath (black arrows). Particles in the upper band are subject to nearest neighbor attractive interactions of magnitude $U$ (red line). b.- Two-particle excitation spectrum, consisting of three free-particle bands (cc, cf and ff) and the doublon band, as a function of the total momentum of the pair. The laser frequency is red detuned from the transition energy $2E_1 - U$, so that the laser mainly creates doublon excitations. These excitations can decay to the lower band by emitting phonons into the bath (lower right panel).
+
+$|\Psi_a\rangle$, where $a=0,1,2$, which are annihilated by all the composite operators, $\ell_{s,i}|\Psi_a\rangle = 0$ for all $s,i$ (and thus by the quantum jump operators). Assuming periodic boundary conditions, these states are given by$^{29,32}$ the MPS$^{33,34}$
+
+$$|\Psi_a\rangle = N \operatorname{tr}\{g_1^a g_2^a \dots g_L^a\}, \quad (9)$$
+
+where $N$ is a normalization factor, $a=0,1,2$ and
+
+$$
+g_i^0 = \begin{pmatrix} |\circ \circ \bullet\rangle_i & |\circ \circ \circ\rangle_i \\ -\beta|\bullet \bullet \circ\rangle_i & 0 \end{pmatrix}; \quad
+g_i^1 = \begin{pmatrix} |\bullet \circ \circ\rangle_i & |\circ \bullet \bullet\rangle_i \\ -\beta|\circ \circ \circ\rangle_i & 0 \end{pmatrix}; \quad
+g_i^2 = \begin{pmatrix} |\circ \bullet \circ\rangle_i & |\circ \circ \bullet\rangle_i \\ -\beta|\bullet \circ \circ\rangle_i & 0 \end{pmatrix}. \quad (10)
+$$
+
+The state $|\circ \circ \circ\rangle_i$ represents three consecutive empty sites at positions $(3i-2, 3i-1, 3i)$, while a filled dot represents an occupied site, e.g. $|\bullet \circ \circ\rangle_i = c_{3i-2}^\dagger |\circ \circ \circ\rangle_i$, etc. The dark space basis vectors $|\Psi_a\rangle$ are related by the translation $T$ by one site, $T|\Psi_a\rangle = |\Psi_{a\oplus1}\rangle$, where $\oplus$ denotes addition modulo 3. In the basis $|\Psi_a\rangle$, the operators $T$ and $U$ are represented by the $3 \times 3$ matrices
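The role of the $-\beta$ entries in Eq. (10) can be seen on a single pair of blocks: the combination generated by $g^0_j g^0_{j+1}$, namely $|\circ\circ\bullet\rangle|\circ\circ\bullet\rangle - \beta\,|\circ\circ\circ\rangle|\bullet\bullet\circ\rangle$, lies in the kernel of the jump operator acting across the block boundary. A minimal Jordan-Wigner check (our construction, six sites, open indexing):

```python
import numpy as np
from functools import reduce

L, beta = 6, 0.7
I2, Z = np.eye(2), np.diag([1.0, -1.0])
a = np.array([[0., 1.], [0., 0.]])

def c(i):
    """Jordan-Wigner annihilation operator on site i (0-based)."""
    return reduce(np.kron, [Z] * i + [a] + [I2] * (L - i - 1))

def ket(occ):
    """Occupation-basis state, e.g. ket([0, 0, 1, 0, 0, 1])."""
    e = [np.array([1., 0.]), np.array([0., 1.])]
    return reduce(np.kron, [e[n] for n in occ])

# Two blocks of g^0: the "stay" path minus the squeezed excursion with weight beta.
psi = ket([0, 0, 1, 0, 0, 1]) - beta * ket([0, 0, 0, 1, 1, 0])

# l_{1,i} = c_i c_{i+1} + beta c_{i-1} c_{i+2}, centred between the two blocks.
l1 = c(3) @ c(4) + beta * c(2) @ c(5)
assert np.allclose(l1 @ psi, 0)
```

The fermionic signs produced by the Jordan-Wigner strings are exactly what the relative $-\beta$ coefficient compensates, so the two-block combination is annihilated.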
+
+$$T = \begin{pmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ 1 & 0 & 0 \end{pmatrix}; \quad U = \begin{pmatrix} 1 & 0 & 0 \\ 0 & \omega^2 & 0 \\ 0 & 0 & \omega \end{pmatrix}; \quad \omega = e^{\frac{2\pi i}{3}}. \tag{11}$$
+
+Within the DDS the density matrix is $\rho = \sum_{a,b=0}^{2} \varrho_{ab} |\Psi_a\rangle\langle\Psi_b|$, with $\varrho_{ab}$ a $3 \times 3$ positive semidefinite, Hermitian matrix with unit trace. In general, the structure of the density matrix within the DDS depends on the initial conditions, which determine the parameters $\varrho_{ab}$.
+
+The basis vectors that define the DDS depend explicitly on the parameter $\beta$. For $\beta = 0$ the DDS is spanned by the three different classical CDW configurations$^{30}$, e.g. $|\circ\bullet\circ\circ\bullet\circ\ldots\rangle$, with a periodic density and a sharp structure factor in each basis vector. Changing this parameter modifies the average local density. The latter is given by
+
+$$ \mathrm{Tr}(\rho \hat{n}_{3i+a}) = \frac{3\varrho_{aa} - 1}{2\sqrt{1+4|\beta|^2}} + \frac{1}{2}(1-\varrho_{aa}), \quad (12) $$
+
+indicating that, if $\beta$ is known, local density measurements are enough to determine the probabilities $\varrho_{aa}$.
+
+FIG. 3. Characteristics of the DDS basis states. a.- Expectation value $\mathrm{Tr}(\rho \hat{n}_x)$ for different positions $x$. Starting from a CDW configuration, the system is evolved using a protocol with a given $\beta$. This generates a DDS state that depends on $\beta$. The top row represents the pure state $\varrho_{00} = 1$, while the second row represents the mixed state $\varrho_{00} = 0.5$, $\varrho_{11} = 0.3$, $\varrho_{22} = 0.2$. Different colors are used to help track the changes in average occupation at each site and to highlight the 3-site periodicity of the density. b.- Static structure factor $S_1(k)$ for different values of $\beta$, for a pure state with $\varrho_{11} = 1$. At $\beta = 0$ the system is in the CDW state, with a definite spatial periodicity, indicated by the peaks in $S_1(k)$ at $k = 0, \frac{2\pi}{3}, \frac{4\pi}{3}$. Increasing $|\beta|$, the system becomes more homogeneous. c.- Entanglement entropy of the DDS as a function of $\beta$.
+
+An alternative way of characterizing the DDS is through the correlation function of local observables. To highlight the relation with the CDW states, we focus on the static structure factor
+
+$$ S_a(k) = \frac{3}{L} \sum_{i=1}^{L} \mathrm{Tr}(\rho \hat{n}_a \hat{n}_{a+i-1}) e^{ik(i-1)}, \quad (13) $$
+
+shown in Fig. 3b. As $\beta$ increases, the system transitions from a crystal-like state with a well defined three-site periodicity into a more homogeneous state, where the density is uniform across the system.
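As a sanity check on Eq. (13), one can evaluate $S_a(k)$ on a classical CDW configuration, which corresponds to the $\beta = 0$ basis states; the peaks at $k = 0, \frac{2\pi}{3}, \frac{4\pi}{3}$ then follow immediately (our illustration, in Python):

```python
import numpy as np

L = 15
occ = np.array([0, 1, 0] * (L // 3))   # classical CDW at beta = 0, occupied sublattice a = 1
a = 1                                   # reference occupied site (0-based)

def S(k):
    """Eq. (13) on a classical configuration: Tr(rho n_a n_{a+i-1}) -> occ[a] * occ[(a+i-1) % L]."""
    return 3 / L * sum(occ[a] * occ[(a + i - 1) % L] * np.exp(1j * k * (i - 1))
                       for i in range(1, L + 1))

# Sharp peaks at the CDW wavevectors k = 0 and 2*pi/3 ...
assert abs(S(0) - 1) < 1e-9 and abs(S(2 * np.pi / 3) - 1) < 1e-9
# ... and no weight at other momenta compatible with the ring.
assert abs(S(2 * np.pi / L)) < 1e-9
```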
+
+Finally, to describe the quantum nature of the DDS basis states, we compute their entanglement entropy. We separate the degrees of freedom of the system into two complementary macroscopic regions $A$ and $A^c$ and define the reduced density matrix $\rho_A = \mathrm{Tr}_{A^c}(|\Psi_a\rangle\langle\Psi_a|)$. The entanglement entropy is then $S(\beta) = -\mathrm{Tr}(\rho_A \ln(\rho_A))$. The result is shown in Fig. 3c. We observe that the entanglement entropy is monotonic in $\beta$. It reaches $2 \ln(2)$, the maximum value for an MPS of bond dimension 2, as $|\beta| \to \infty$.
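For a pure state given as a dense vector, the entanglement entropy follows from the Schmidt (singular value) decomposition across the cut. A generic sketch (ours, not the MPS-based computation used for Fig. 3c):

```python
import numpy as np

def entanglement_entropy(psi, dim_a, dim_b):
    """von Neumann entropy S = -Tr(rho_A ln rho_A) of a pure state on H_A (x) H_B."""
    s = np.linalg.svd(psi.reshape(dim_a, dim_b), compute_uv=False)
    p = s**2                        # Schmidt weights = eigenvalues of rho_A
    p = p[p > 1e-12]
    return float(-np.sum(p * np.log(p)))

# Sanity checks: a product state carries no entanglement, a Bell pair carries ln 2.
product = np.kron([1., 0.], [0., 1.])
bell = np.array([1., 0., 0., 1.]) / np.sqrt(2)
assert abs(entanglement_entropy(product, 2, 2)) < 1e-12
assert abs(entanglement_entropy(bell, 2, 2) - np.log(2)) < 1e-12
```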
+
+**Time evolution and global diagnostics.-** Now that we have constructed a dissipative evolution that drives the system into the DDS, we discuss how the system approaches the DDS. We analyze the Lindbladian evolution with the quantum jump operators (8) numerically. The decay into the DDS is evaluated using a quench protocol: the system is initiated in the CDW state $|\bullet\circ\circ\bullet\circ\circ\ldots\rangle$, which is one of the dark states of the Lindbladian at $\beta = 0$. This state is then evolved using the Lindblad operators (8) with $A=B=t=1$ and $\beta \in [0, 1]$ for simplicity (the results are qualitatively similar under slight variations of these parameters). To obtain the evolution of the system we integrate the master equation in the full Hilbert space, using Runge-Kutta (RK) integration$^{35}$, for systems of sizes up to $L=15$.
+
+To characterize the steady state mixture, we compute the purity of the state, defined as $\gamma(t) = \mathrm{Tr}\{\rho^2(t)\}$. From Fig. 4a, we find that the purity approaches $1/3$ for larger system sizes. This is indeed the case for a sudden quench from $\beta(t \le 0) = 0$ to $\beta(t > 0) = 1$. In this scenario, the system explores an extensive portion of the Hilbert space, becoming highly mixed, as seen in the intermediate region of Fig. 4a, where the purity plateaus at a minimum. Only after the system is sufficiently mixed does it start approaching the DDS, and its purity increases. The information about the initial state is practically lost in the intermediate mixing process, and the eventual steady state is a highly mixed state within the DDS.
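The RK integration and the purity diagnostic can be sketched as follows. The toy example (ours, a dephasing qubit rather than the 15-site model) shows the purity relaxing from 1 to its fully mixed value, the qubit analogue of the behavior in Fig. 4a:

```python
import numpy as np

def rk4_step(rho, dt, rhs):
    """One fourth-order Runge-Kutta step for the master equation."""
    k1 = rhs(rho)
    k2 = rhs(rho + 0.5 * dt * k1)
    k3 = rhs(rho + 0.5 * dt * k2)
    k4 = rhs(rho + dt * k3)
    return rho + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

def purity(rho):
    return np.trace(rho @ rho).real

# Pure dephasing of a qubit: l = sqrt(g) sigma_z, H = 0; since sz^2 = 1 the
# dissipator of Eq. (2) reduces to g (sz rho sz - rho).
g = 1.0
sz = np.diag([1.0, -1.0])

def rhs(rho):
    return g * (sz @ rho @ sz - rho)

rho = 0.5 * np.ones((2, 2), dtype=complex)   # |+><+|, purity 1
for _ in range(2000):                         # evolve to t = 20
    rho = rk4_step(rho, 0.01, rhs)

# The coherences decay and the purity relaxes to 1/2 (maximally mixed qubit).
assert abs(purity(rho) - 0.5) < 1e-6
```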
+
+Convergence to DDS, spanned by Laughlin-like dark states $\{|\Psi_a\rangle\}_{a=0,1,2}$, may be visualized by
+
+$$ D_{\text{DDS}}(t) = \mathrm{Tr}\{\rho(t)\mathcal{P}_{\text{DDS}}\}, \quad (14) $$
+
+where $\mathcal{P}_{\text{DDS}} = \sum_{a=0}^2 |\Psi_a\rangle\langle\Psi_a|$ is a projector onto the DDS. Figure 4b shows that the system indeed evolves towards the Laughlin-like DDS, demonstrating that this is the only non-decaying subspace of the Lindbladian evolution. At large times one finds $1 - D_{\text{DDS}}(t) \propto e^{-\lambda_0 t}$, where the rate $\lambda_0$ is set by the non-zero eigenvalue of the Lindblad operator closest to zero.
+
+One notices that $\lambda_0$ slowly decreases upon increasing the system size. We note that local observables, like the particle density, approach their steady state values quickly, in a way that is independent of the system size, Fig. 4a. This separation of scales indicates that, while locally the system reaches a configuration close to the dark states that span the DDS, globally it takes much longer to fully reach the DDS.
+
+If instead of quenching the system into $\beta = 1$, we quench it into $\beta \ll 1$, we observe a very different behavior.
+
+FIG. 4. Time evolution and decay into the DDS. a.- Evolution of the purity $\gamma(t)$ for different sizes. Starting from a pure CDW state, the system becomes highly mixed before it starts leaking into the DDS. The dotted line shows the minimum purity possible in the DDS, corresponding to a fully mixed state. Inset.- Evolution of a local observable (the density). The local density relaxes to its value in the DDS on a shorter timescale than the time required for the system to enter the DDS. This happens independently of the system size. b.- Approach to the DDS, spanned by the Laughlin-like states on the narrow torus, for different system lengths $L$. c.- Purity in the DDS as a function of $\beta$. The purity of the DDS depends strongly on $\beta$, indicating that the purity can be maintained if the final state is close to the initial one. The dotted lines for the $L = 15$ case for the purity and density evolution are obtained from an approximation of the Lindbladian on a smaller subspace$^{36}$.
+
+Here the system does not have to explore an extensive part of the Hilbert space before it reaches the DDS. As a result, the purity remains close to 1 at all times, as can be seen in Fig. 4c.
+
+**Adiabatic evolution.**- Although the previous analysis shows that the system does not generically end up in a pure state, it is possible to increase the purity of the final state by performing an adiabatic evolution from a pure state$^{37-39}$. To illustrate this, we evolve the system from an initial state given by a superposition of the three CDW configurations. This allows us to characterize the coherences in the MPS basis throughout the adiabatic evolution (Sec. III in SI). Individual CDW states can be created using existing experimental techniques$^{40}$. We then evolve this state with the Lindblad operators (8) using a time-dependent $\beta$ parameter: $\beta(t) = \Delta \cdot t$ for $0 < t \le 1/\Delta$ and $\beta(t) = 1$ for $t > 1/\Delta$, where $\Delta$ is the ramp velocity. The purity of the final state depends on $\Delta$, as shown in Fig. 5a. For small enough $\Delta$, the system does not explore the whole Hilbert space, but instead remains
+
+almost pure throughout its entire evolution. This mechanism can be used to achieve a purity arbitrarily close to unity. For larger ramp velocities, the system rapidly departs from the initial state, exploring the many-body Hilbert space, as shown for intermediate times in Fig. 5b, before leaking back to the DDS, which remains the only attractor of the dynamics. This increases the departure of the steady state from a pure state and erases the information about the initial state. The steady state purity as a function of the ramp velocity is shown in Fig. 5c.
+
+FIG. 5. Adiabatic evolution of the system. a.- The purity of the final state for different ramp velocities $\Delta$ in a system with $L=9$ sites. A slower variation in the adiabatic protocol leads to a higher purity. b.- The system ends up in the DDS regardless of the ramp velocity. Different $\Delta$'s control how much of the Hilbert space is explored. c.- The purity of the final state can be manipulated via the ramp velocity.
+
+**Conclusions.**- We have shown that to achieve a DDS the Lindbladian evolution should have an underlying symmetry admitting a projective representation. The period of its 2-cocycle determines the dimensionality of the dark space. Reaching a DDS, protected against environmental influence, offers a way of maintaining quantum information. To manipulate this information, it is necessary to have access to high purity states within the DDS. We found that an adiabatically varied Lindblad operator allows one to reach such nearly-pure, entangled states. We have demonstrated these ideas by studying the thin torus limit of the $\nu = 1/3$ fractional quantum Hall state. The ability to generate and manipulate states within a DDS may be utilized for quantum information processing platforms. The many-body nature of the states renders them less fragile against local disturbances.
+
+**Acknowledgments.**- We are indebted to R. Fazio for valuable discussions. R.S. acknowledges funding from EPSRC grant EP/M02444X/1 and the ERC Starting Grant No. 678795 TopInSy. F.I. acknowledges the financial support of the Brazilian funding agencies National Council for Scientific and Technological Development - CNPq (Grant No. 308205/2019-7) and FAPERJ (Grant No. E-26/211.318/2019). A.K. was supported by NSF grant DMR-1608238. Y.G. was supported by the Deutsche Forschungsgemeinschaft (DFG) TRR 183 (project B02), and EG 96/13-1, and by the Israel Science Foundation.
+
+## SUPPLEMENTAL INFORMATION FOR: MULTIDIMENSIONAL DARK SPACE AND ITS UNDERLYING SYMMETRIES: TOWARDS DISSIPATION-PROTECTED QUBITS
+
+In this section we provide supplemental information not included in the main text. We discuss the analysis of the Lindbladian gap of the dissipative model with and without perturbations, as well as an expanded analysis of the adiabatic evolution. We revisit the map of the Laughlin state from two dimensions to the orbital basis, where it becomes effectively one dimensional. Finally, we include an extended discussion of the experimental realization of the Lindblad operators.
+
+The dissipative evolution $\dot{\rho} = \mathcal{L}(\rho)$ with $\hat{H} = 0$ studied in the main text is defined by the quantum jump operators $\tilde{\ell}_i = R_i^\dagger Q_i$ for $i = 1, \dots, L$, with $R_i^\dagger = c_i^\dagger c_{i+1}^\dagger + t(\ell_{0,i}^\dagger + \ell_{0,i+1}^\dagger)$ and
+
+$$
+\begin{aligned}
+Q_i &= \ell_{1,i} + A(\ell_{0,i-1} + \ell_{0,i+1}) + B(\ell_{1,i+1} + \ell_{1,i-1}) \\
+\ell_{0,i} &= c_i c_{i+2}, \quad \ell_{1,i} = c_i c_{i+1} + \beta c_{i-1} c_{i+2}.
+\end{aligned}
+\quad (\text{S15})
+$$
+
+In Sec. I we show our results for the Lindbladian gap in finite system sizes with no perturbations, while the case of the dissipative evolution with imperfections is presented in Sec. II. In Sec. III we show details of the adiabatic evolution analysis.
+
+# I. LINDBLADIAN GAP IN FINITE SYSTEM SIZES
+
+In this Section we show our analysis of the dissipative gap of the Lindbladian for finite system sizes. We obtain the gap in two different ways: (i) directly, by exact diagonalization of the Lindbladian superoperator, or (ii) indirectly, from the asymptotic decay rate (ADR) of the quantum state dynamics. While exact diagonalization allows us to study the Lindbladian gap for sizes up to $L \sim 12$, the asymptotic decay rate analysis gives access to larger system sizes, $L \sim 15$.
+
+The exact diagonalization analysis is performed by first describing the Lindbladian superoperator as a linear operator in the extended Hilbert space,
+
+$$ \mathcal{L} \rightarrow \hat{\mathcal{L}} = -i(\mathbb{I} \otimes \hat{H} - \hat{H}^T \otimes \mathbb{I}) + \sum_i \left( \hat{l}_i^* \otimes \hat{l}_i - \frac{1}{2}\mathbb{I} \otimes \hat{l}_i^\dagger \hat{l}_i - \frac{1}{2}(\hat{l}_i^\dagger \hat{l}_i)^T \otimes \mathbb{I} \right) \quad (\text{S16}) $$
+
+and then diagonalizing the vectorized Lindbladian $\hat{\mathcal{L}}$. The eigenvalues $\lambda_j$ of the Lindbladian have non-positive real parts, with the DDS described by those with zero eigenvalue. The gap of the Lindbladian is then given by the eigenvalue with the largest nonzero real part. Ordering the eigenvalues according to their real parts, $\Re(\lambda_j) \ge \Re(\lambda_{j+1})$, with $j$ running from 1 to the extended Hilbert space dimension, the dissipative model of this work has $\lambda_j = 0$ for $j = 1, \dots, 9$, and the gap is given by the eigenvalue $\lambda_{10}$.
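Eq. (S16) can be implemented with Kronecker products in a few lines. The sketch below (our construction) builds the vectorized Lindbladian and reproduces the known spectrum of single-qubit amplitude damping, for which the gap is $1/2$:

```python
import numpy as np

def vectorized_lindbladian(H, jumps):
    """Matrix of Eq. (S16), column-stacking convention vec(A rho B) = (B^T kron A) vec(rho)."""
    d = H.shape[0]
    Id = np.eye(d)
    Lv = -1j * (np.kron(Id, H) - np.kron(H.T, Id))
    for l in jumps:
        ldl = l.conj().T @ l
        Lv = Lv + (np.kron(l.conj(), l)
                   - 0.5 * np.kron(Id, ldl)
                   - 0.5 * np.kron(ldl.T, Id))
    return Lv

# Check on single-qubit amplitude damping: spectrum {0, -1/2, -1/2, -1},
# i.e. a unique steady state and a Lindbladian gap of 1/2.
sm = np.array([[0., 1.], [0., 0.]])
ev = np.sort(np.linalg.eigvals(vectorized_lindbladian(np.zeros((2, 2)), [sm])).real)
assert np.allclose(ev, [-1.0, -0.5, -0.5, 0.0])
```

For the model of the main text one would insert the jump operators of Eq. (S15); the eigenvalue ordered as $\lambda_{10}$ then gives the gap.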
+
+On the other hand, one can also extract the gap by studying the dynamics of the quantum state. In the long-time limit only the slowest-decaying modes of the Lindbladian are relevant to the dynamics, and the expectation value of any observable $O(t) = \text{Tr}(\hat{O}\hat{\rho}(t))$ is approximated by
+
+$$ O(t) - O(t \to \infty) \approx e^{\lambda_{\text{ADR}} t} \quad (\text{S17}) $$
+
+where $\lambda_{\text{ADR}} < 0$ corresponds to the slowest-decaying mode (the asymptotic decay rate) for the observable $\hat{O}$.
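The ADR extraction of Eq. (S17) can be sketched on the same illustrative single-qubit decay channel (not the fermionic model): propagate the vectorized state and fit the late-time log-linear slope of $O(t) - O(t \to \infty)$:

```python
import numpy as np

# Toy Lindbladian (decay channel only): vec(rho) evolves as exp(L t) vec(rho0).
g = 0.4
sm = np.array([[0.0, 1.0], [0.0, 0.0]], dtype=complex)
I = np.eye(2)
l = np.sqrt(g) * sm
ldl = l.conj().T @ l
L = np.kron(l.conj(), l) - 0.5 * np.kron(I, ldl) - 0.5 * np.kron(ldl.T, I)

vals, V = np.linalg.eig(L)
Vinv = np.linalg.inv(V)
rho0 = np.diag([0.0, 1.0]).astype(complex)   # start in the decaying state
v0 = Vinv @ rho0.reshape(-1, order='F')      # column-stacked vec(rho0)

sz = np.diag([1.0, -1.0])
ts = np.linspace(2.0, 10.0, 50)
O = np.array([np.real(np.trace(sz @ (V @ (np.exp(vals * t) * v0)).reshape(2, 2, order='F')))
              for t in ts])
O_inf = 1.0   # steady state of this channel has <sigma_z> = +1

# Log-linear fit of |O(t) - O_inf|, cf. Eq. (S17): the slope is lambda_ADR.
slope = np.polyfit(ts, np.log(np.abs(O - O_inf)), 1)[0]
```

For this observable the populations set the ADR, so the fitted slope is $-\gamma$ (here $-0.4$); a coherence-type observable would instead decay with the envelope rate $\gamma/2$.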
+
+In Fig.(S6) we show our results for the Lindbladian gap and the ADR analysis of the $D_{\text{DDS}}(t)$ dynamics. We observe that the gap from exact diagonalization matches the one obtained from the ADR. Furthermore, we see that the dissipative gap increases going from 9 to 12 sites, and decreases again for 15 sites (to roughly the same value as for 9 sites), which is the largest size that we can reach. Although instructive, these numerical results for small systems do not allow us to unequivocally determine the nature of the dissipative gap in the thermodynamic limit. They are suggestive, however, of either (i) a gapped Lindbladian in the thermodynamic limit, or (ii) a gapless Lindbladian with
+---PAGE_BREAK---
+
+FIG. S6. Lindbladian gap for finite system sizes. In the left panel we show our results for the Lindbladian gap ($\text{gap} = \lambda_{10}$) obtained from exact diagonalization of the vectorized Lindbladian (Eq.(S16)), and the asymptotic decay rate extracted from the dynamics of $D_{\text{DDS}}(t)$, for varying system sizes. In the right panel we show the dynamics of $D_{\text{DDS}}(t)$ for a quench starting from an initial CDW state. We show the dynamics on a log-linear scale, highlighting the exponential asymptotic approach to the steady-state value $D_{\text{DDS}}(t \to \infty) = 1$. The dashed line corresponds to the expected long-time dynamics for the $L=15$ case, according to its ADR coefficient. The Lindbladian parameters in all panels are $A=B=t=\beta=1$.
+
+a slow decay of the gap with system size, excluding, e.g., the possibility of an exponential closing of the gap with system size, which would be detrimental for the preparation and manipulation of the DDS in quantum information tasks.
+
+In the main text, we showed that the existence of the symmetry operators $U$ and $T$ ensures that all of the eigenvalues of the Lindbladian are at least $m$-fold degenerate. It is worth stressing that the eigenmatrices of the Lindbladian do not translate directly into density matrices. In particular, this means that the symmetry operators $U$ and $T$ do not directly generate a symmetry on the density matrices. The Lindbladian is a linear operator acting on the extended Hilbert space $V = \mathcal{H} \otimes \mathcal{H}$ of Eq. (S16), which for concreteness we can take to be spanned by the vector states $|i\rangle|j\rangle$, each factor defined within a copy of $\mathcal{H}$. The vectorized density matrices form a subset $S \subset V$ which is not a vector space, as a linear combination of two elements of $S$ does not generically belong to $S$ (e.g., given two positive semidefinite operators $\rho_{1,2}$ with unit trace, the combination $\alpha\rho_1 + \beta\rho_2$ with $\alpha, \beta$ complex numbers is not necessarily a positive semidefinite operator with unit trace).
+
+# II. LINDBLADIAN PERTURBATIONS
+
+In this Section we study the effects of Lindbladian imperfections on the DDS. We consider the following different imperfections:
+
+* Additional set of dephasing dissipative channels:
+
+$$ \hat{\ell}_{j,\text{dephasing}} = \sqrt{\epsilon_{\text{deph}}} (2\hat{c}_j^\dagger\hat{c}_j - 1), \quad j = 1, \dots, L \qquad (\text{S18}) $$
+
+describing fluctuations of on-site energies, which tend to suppress coherences between the classical particle-number basis states of the fermionic system.
+
+* Imperfections in the current Lindblad operators of the form of extra hopping terms:
+
+$$ \tilde{\ell}_j \rightarrow \hat{\ell}_{j,\epsilon} = \tilde{\ell}_j + \sqrt{\epsilon_\ell} (c_j^\dagger c_{j+1} + \text{h.c.}), \quad j = 1, \dots, L \qquad (\text{S19}) $$
+
+* Coherent Hamiltonian competing with the dissipative dynamics:
+
+$$ \hat{H} = -\epsilon_H \sum_j (c_j^\dagger c_{j+1} + \text{h.c.}), \qquad (\text{S20}) $$
+
+describing the case in which despite the dissipation being the leading term, there are still some non-negligible coherent local dynamics within the fermionic system. We consider the simplest form of hopping terms, but small local coherent perturbations lead to the same conclusions.
+
+* Additional set of decay dissipative channels:
+
+$$ \hat{\ell}_{j,\text{decay}} = \sqrt{\epsilon_{\text{decay}}} \hat{c}_j, \quad j = 1, \dots, L \qquad (\text{S21}) $$
+---PAGE_BREAK---
+
+FIG. S7. Spectral properties of the Lindbladian with imperfections. We consider the Lindbladian of the manuscript with parameters $A = B = \beta = t = 1$ for a system with $L = 9$ sites and analyse the effects of the imperfections described in Eqs.(S18),(S19),(S20) on the spectral properties of the Lindbladian. We see that in the presence of an imperfection the DDS is no longer completely degenerate, with a splitting of the degeneracy proportional to the imperfection strength. We obtain that $\lambda_1 = 0$ (by definition the Lindbladian always has a zero eigenvalue) while $\lambda_j < 0$ for $j = 2, 3, \dots, 9$, all approximately equal to each other. For clarity we also show the eigenvalue $\lambda_{10}$ for $\epsilon = 0$, corresponding to the Lindbladian gap in the unperturbed case, which is also approximately unchanged by the imperfections in the considered range of $\epsilon$ strengths.
+
+corresponding to losses of particles from the optical lattice through an extra channel, driving the fermionic system towards the empty vacuum state.
+
+Our results for the first three imperfections above are shown in Fig.(S7). We see that perturbation theory provides a qualitative picture: in the regime of small perturbations (in units of the rate of the original Lindbladian, $\epsilon \ll 1$), imperfections in the jump operators lead to a linear splitting of the DDS, $\lambda_2 \sim \epsilon$, while a Hamiltonian perturbation leads to a quadratic dependence, $\lambda_2 \sim \epsilon^2$. Thus, as long as the perturbation is small compared to the unperturbed gap, there is a time window between the system entering the DDS and the imperfections destroying the characteristics of the state, which makes it possible to effectively use these states for quantum information tasks.
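The linear-versus-quadratic splitting can be reproduced in a minimal toy model: a single qubit whose dephasing channel leaves all diagonal states steady (a stand-in for the degenerate DDS, not the fermionic Lindbladian), perturbed either by an extra jump operator or by a Hamiltonian term:

```python
import numpy as np

sx = np.array([[0.0, 1.0], [1.0, 0.0]], dtype=complex)
sz = np.diag([1.0, -1.0]).astype(complex)
sm = np.array([[0.0, 1.0], [0.0, 0.0]], dtype=complex)
I = np.eye(2)

def vec_lindblad(H, jumps):
    """Vectorized Lindbladian, column-stacking convention as in Eq. (S16)."""
    L = -1j * (np.kron(I, H) - np.kron(H.T, I))
    for l in jumps:
        ldl = l.conj().T @ l
        L += np.kron(l.conj(), l) - 0.5 * np.kron(I, ldl) - 0.5 * np.kron(ldl.T, I)
    return L

def lam2(L):
    """Second-largest real part: splitting of the degenerate steady subspace."""
    ev = np.sort(np.linalg.eigvals(L).real)[::-1]
    return -ev[1]

g, eps = 1.0, 1e-3
# dephasing sqrt(g) sz alone leaves a doubly degenerate zero eigenvalue
split_jump = lam2(vec_lindblad(0 * I, [np.sqrt(g) * sz, np.sqrt(eps) * sm]))
split_ham = lam2(vec_lindblad(eps * sx, [np.sqrt(g) * sz]))
# split_jump ~ eps (linear); split_ham ~ 2 eps^2 / g (quadratic)
```

The jump imperfection splits the degeneracy at first order in $\epsilon$, while the Hamiltonian perturbation enters only at order $\epsilon^2/\gamma$, matching the scaling seen in Fig.(S7).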
+
+The case of an additional set of decay dissipative channels follows similarly. In this case the steady state of the evolution is the vacuum state for any $\epsilon_{\text{decay}} > 0$. However, as above, if the perturbation is small there is a time window over which the effects on the DDS are negligible. One may obtain the characteristic time of the dissipative decay effects from the dynamics of the total number of particles $\hat{N}$ in the system, which in the Heisenberg picture is described by $\mathcal{L}^\dagger[\hat{N}] = -\epsilon_{\text{decay}}\hat{N}$, i.e., $N(t) \sim e^{-\epsilon_{\text{decay}}t}$. Thus the effects of particle losses become relevant for times of the order $t \sim 1/\epsilon_{\text{decay}}$, similarly to the other imperfections in the quantum jump operators considered above.
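The Heisenberg-picture relation $\mathcal{L}^\dagger[\hat{N}] = -\epsilon_{\text{decay}}\hat{N}$ can be checked directly; a one-mode sketch on a $2\times2$ Fock space (illustrative only):

```python
import numpy as np

# Check L^dag[N] = -eps * N for a decay channel l = sqrt(eps) c on one mode.
eps = 0.3
c = np.array([[0.0, 1.0], [0.0, 0.0]])   # annihilation operator, c|1> = |0>
n = c.conj().T @ c                       # number operator
l = np.sqrt(eps) * c

# adjoint Lindbladian: L^dag[O] = l^dag O l - (1/2){l^dag l, O}
adj = l.conj().T @ n @ l - 0.5 * (l.conj().T @ l @ n + n @ l.conj().T @ l)
ok = np.allclose(adj, -eps * n)
```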
+
+# III. ADIABATIC EVOLUTION
+
+In this Section we expand the discussion of the adiabatic evolution in the dissipative dynamics. Using the same protocol as in the main manuscript, we evolve the system from an initial state given by a superposition of the three charge density wave configurations, namely $|\psi(t=0)\rangle = (|\Psi_{a=0}\rangle + \sqrt{2}|\Psi_{a=1}\rangle + \sqrt{3}|\Psi_{a=2}\rangle)/\sqrt{6}$, where $|\Psi_a\rangle$ are the Laughlin states for $\beta=0$ (i.e., product charge density wave configurations). We then evolve this state with the Lindblad operators (Eq.(S15)) using a time-dependent $\beta$ parameter: $\beta(t) = \Delta \cdot t$ for $0 < t \le 1/\Delta$ and $\beta(t) = 1$ for $t > 1/\Delta$, where $\Delta$ is the ramp velocity.
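The ramp protocol is a capped linear function; a one-line sketch (the function name is ours):

```python
def beta(t, Delta):
    """Ramp of the main text: beta(t) = Delta * t for t <= 1/Delta, then held at 1."""
    return min(Delta * t, 1.0)
```

For $\Delta \ll 1$ the ramp is slow and the evolution approaches the adiabatic limit.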
+
+We analyse the purity $\gamma(t)$ of the quantum state during the dynamics, as well as its overlap with the DDS. For the latter we first construct the $3 \times 3$ matrix $\hat{\rho}_{\text{DDS}}$ representing the projection of the quantum state onto the Laughlin dark states,
+
+$$ (\hat{\rho}_{\text{DDS}}(t))_{a,a'} \equiv \langle\Psi_a(t)|\hat{\rho}(t)|\Psi_{a'}(t)\rangle \quad (\text{S22}) $$
+
+for $a, a' = 0, 1, 2$, where $|\Psi_a(t)\rangle$ are the Laughlin dark states for the Lindblad parameters at time $t$. We study (i) the projection of the quantum state onto the DDS, given by the sum of the diagonal terms of the matrix, $D_{\text{DDS}}(t) = \sum_a \rho_{\text{DDS}}(t)_{a,a}$;
+---PAGE_BREAK---
+
+(ii) how the coherences of the initial state evolve under the adiabatic evolution, quantified by
+
+$$
+C(t) = \sqrt{\sum_{a \neq a'} \left( \frac{\rho_{\text{DDS}}(t)_{a,a'} - \rho_{\text{DDS}}(0)_{a,a'}}{\rho_{\text{DDS}}(0)_{a,a'}} \right)^2}, \quad (S23)
+$$
+
+and (iii) the distinguishability of the full matrix with the initial state, quantified by the trace norm as follows,
+
+$$
+\mathcal{D}(t) = ||\hat{\rho}_{\text{DDS}}(t)_{a,a'} - \hat{\rho}_{\text{DDS}}(0)_{a,a'}||_1 \tag{S24}
+$$
+
+Both the coherence $C(t)$ and the distinguishability $\mathcal{D}(t)$ should be small if the evolved quantum state does not differ significantly from the initial state.
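The diagnostics of Eqs. (S22)-(S24) can be sketched for generic $3\times3$ projected matrices (random positive matrices stand in for $\hat{\rho}_{\text{DDS}}(0)$ and $\hat{\rho}_{\text{DDS}}(t)$; illustrative only):

```python
import numpy as np

rng = np.random.default_rng(0)

def random_projected_rho(n):
    """Random positive, unit-trace matrix standing in for rho_DDS."""
    A = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    rho = A @ A.conj().T
    return rho / np.trace(rho)

rho_0 = random_projected_rho(3)
rho_t = random_projected_rho(3)

# (i) projection onto the DDS: sum of diagonal entries, cf. Eq. (S22)
D_dds = np.real(np.trace(rho_t))

# (ii) relative change of the off-diagonal (coherence) entries, Eq. (S23)
off = ~np.eye(3, dtype=bool)
C = np.sqrt(np.sum(np.abs((rho_t[off] - rho_0[off]) / rho_0[off]) ** 2))

# (iii) trace-norm distinguishability (sum of singular values), Eq. (S24)
D = np.sum(np.linalg.svd(rho_t - rho_0, compute_uv=False))
```

Here the matrices are normalized to unit trace, i.e., the state lies entirely in the DDS, so $D_{\text{DDS}} = 1$; in the actual dynamics $D_{\text{DDS}}(t) \le 1$.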
+
+We show our results in Fig.(S8). We see that for slow ramp rates the characteristics of the initial quantum state (purity and coherences) are preserved. We find in particular that for very slow rates $\Delta \ll 1$ the steady-state properties have polynomial corrections compared to their initial values, i.e., $\gamma(t \to \infty) - \gamma(0)$, $C(t \to \infty)$ and $\mathcal{D}(t \to \infty) \sim \Delta^{c}$ for a constant exponent $c$. It is also interesting to notice that the coherence is approximately constant after an initial time $t \sim O(1/\Delta)$ (i.e., once $\beta(t) \sim 1$), while the diagonal part of the DDS subspace, $D_{\text{DDS}}(t)$, still shows non-trivial dynamics after this initial transient.
+
+# IV. MAPPING THE FRACTIONAL QUANTUM HALL GROUND STATE TO A ONE-DIMENSIONAL MODEL
+
+In this section we revisit the exact mapping of the Laughlin state²⁷ into a one-dimensional state⁴¹,⁴². We will be interested in filling fractions $\nu < 1$. Recalling that a 2DEG in a strong magnetic field displays Landau levels, we assume that the relevant physics occurs in the lowest Landau level (LLL). We place the system on a 2D torus with linear sizes $L_x$ and $L_y$ and area $A = L_x L_y \sin \theta$, defined by the region in the upper half complex plane enclosed by the points $w = (0, L_x, L_y \tau, L_x + L_y \tau)$. This torus is characterized by the modular parameter $\tau = (L_y/L_x) e^{i\theta} = \tau_1 + i\tau_2$ ($\text{Im}(\tau) > 0$, $\theta \in [0, \pi]$). Following²⁵ we introduce the translation operators $t(\mathbf{L}) = \exp\big(\mathbf{L} \cdot (\nabla - ie\mathbf{A}) - iL_x y + iL_y x\big)$ (here $\mathbf{L} = (L_x, L_y)$ is measured in units of the magnetic length), which correspond to the usual translation operators in terms of the canonical momentum, together with an extra space-dependent phase. The single-particle wavefunction satisfies the boundary conditions $t(\mathbf{L}_a)\Psi = e^{i\phi_a}\Psi$, with $\mathbf{L}_a$ a translation over the lattice vectors $\mathbf{L}_1 = (L_x, 0)$ and $\mathbf{L}_2 = L_y(\cos\theta, \sin\theta)$. Both conditions can be satisfied if the flux through the torus, $\frac{L_x L_y \sin\theta}{2\pi} = N_\Phi$, is an integer. We parameterize the coordinates on the torus by $z = \tilde{z}/L_x$ with $\tilde{z} = L_x(x + y\tau)$, where $x \in [0, 1]$ and $y \in [0, 1]$.
+
+The relation with the usual Cartesian coordinates is $x_1 = L_x(x + \tau_1 y)$ and $x_2 = L_y\tau_2 y$. The single-particle wavefunction has the form $\Psi = e^{-\frac{1}{2}(\operatorname{Im}(\tau)L_x y)^2} f(z)$, where $f(z)$ is an entire (holomorphic) function in the complex plane. We use units where $\sqrt{\hbar/eB} = 1$. In the Landau gauge $\mathbf{A} = -By\hat{x}$, the boundary conditions read
+
+$$
+f(z+1) = f(z)e^{i\phi_1}, \quad f(z+\tau) = f(z)e^{i\phi_2}e^{-i\pi N_{\Phi}(2z+\tau)}, \qquad (S25)
+$$
+
+where the phases $\phi_a$ correspond to the fluxes piercing the torus in the two orthogonal directions $a = 1, 2$. From these relations it follows that $\oint dz\, \frac{d}{dz} \ln f(z) = 2\pi i N_{\Phi}$, which implies that the function $f(z)$ has $N_{\Phi}$ zeroes. The single-particle wavefunction that satisfies the boundary conditions (S25) and has $N_{\Phi}$ zeroes is given by the generalized theta function
+
+$$
+\varphi_n(z; \tau, \phi_1, \phi_2) = e^{-\frac{1}{2}(\mathrm{Im}(\tau)L_x y)^2}\, \vartheta\!\left(z - z_n \left|\frac{\tau}{N_{\Phi}}\right.\right) e^{i\phi_1(z-z_n)} \quad \text{with} \quad \vartheta(z|\tau) = \sum_{m=-\infty}^{\infty} \left(e^{i\pi\tau}\right)^{m^2} e^{2\pi imz} \quad (S26)
+$$
+
+and $z_n = \frac{2\pi n + \phi_2 - \tau \phi_1}{2\pi N_\Phi}$. This corresponds to a normalizable wavefunction for $\text{Im}(\tau) > 0$. The zeroes of $\varphi_n(z; \tau, \phi_1, \phi_2)$ are located at $z = z_n + \frac{1}{2} + m + (\frac{1}{2} + n) \frac{\tau}{N_\Phi}$.
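The quasi-periodicity underlying the boundary conditions (S25) can be verified numerically from the series definition of $\vartheta(z|\tau)$ in Eq. (S26); a truncated-sum sketch with illustrative parameter values:

```python
import numpy as np

def theta(z, tau, M=40):
    """Jacobi theta function, truncated series from Eq. (S26)."""
    m = np.arange(-M, M + 1)
    return np.sum(np.exp(1j * np.pi * tau * m**2 + 2j * np.pi * m * z))

tau = 0.3 + 1.2j   # any modular parameter with Im(tau) > 0
z = 0.17 + 0.05j

# quasi-periodicity: theta(z+1|tau) = theta(z|tau)
#                    theta(z+tau|tau) = e^{-i pi tau - 2 pi i z} theta(z|tau)
ok_1 = np.isclose(theta(z + 1, tau), theta(z, tau))
ok_tau = np.isclose(theta(z + tau, tau),
                    np.exp(-1j * np.pi * tau - 2j * np.pi * z) * theta(z, tau))
```

These two relations are exactly the structure of the twisted boundary conditions in Eq. (S25), up to the flux phases $\phi_a$ and the rescaling $\tau \to \tau/N_\Phi$.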
+
+As shown in Ref. 28, the Laughlin state at filling $\nu = 1/3$ is the zero-energy exact ground state of the Landau problem with the interaction $\mathcal{H} = V_0 \int d\mathbf{r}\, |\nabla\rho(\mathbf{r})|^2$, where $\rho(\mathbf{r}) = \psi^\dagger(\mathbf{r})\psi(\mathbf{r})$ and $\mathbf{r} = (x_1, x_2)$. The projection of the electron operator onto the lowest Landau level is $\psi = \sum_n \varphi_n(\mathbf{r})c_n$, where $c_n$ annihilates a particle in the orbital $n$. The interaction Hamiltonian projected onto the lowest Landau level becomes
+
+$$
+\mathcal{H} = \frac{32 N_{\Phi} V_0}{|\tau|^3 L_1^2} \sum_j Q_j^{\dagger} Q_j \quad \text{with} \quad Q_j^{\dagger} = \sum_{k} (j-N_{\Phi}k)\, e^{-\frac{2\pi i}{N_{\Phi}\tau} (j-kN_{\Phi})^2} c_{j+k}^{\dagger} c_{j-k}^{\dagger}. \tag{S27}
+$$
+
+In this sum the pair of numbers $(j,k)$ are either both integers or both half-integers and satisfy $0 < j < N_\Phi$, $0 < k < N_\Phi/2$.
+Separating both cases, and defining $\kappa^2 = \frac{2\pi}{N_\Phi} \frac{L_x}{L_y}$ gives the operators $\ell_{s,n}$ (Eq. 5) in the main text.
+---PAGE_BREAK---
+
+FIG. S8. **Adiabatic evolution.** We consider a system with $L = 9$ sites, $A = B = t = 1$ and varying $\beta(t)$ adiabatically according to the ramp rate $\Delta$. The initial state of the system is given by the superposition $|\psi(t=0)\rangle = (|\Psi_{a=0}\rangle + \sqrt{2}|\Psi_{a=1}\rangle + \sqrt{3}|\Psi_{a=2}\rangle)/\sqrt{6}$. We show the dynamics of the (**top-left**) purity, (**top-right**) $D_{\text{DDS}}(t)$, (**middle-left**) coherence $C(t)$ and (**middle-right**) distinguishability $\mathcal{D}(t)$. In the **bottom panel** we show their steady-state values.
+
+# V. PHYSICAL REALIZATION
+
+We consider a one-dimensional optical lattice (the system) immersed in a condensate that acts as a bath for the system, providing dissipation. Each site in the optical lattice consists of a potential well accommodating two single-particle levels, denoted by $c$ and $f$. The system Hamiltonian is
+
+$$
+H_{\text{sys}} = - \sum_{i,\sigma}^{N} (J_{\sigma} a_{i,\sigma}^{\dagger} a_{i+1,\sigma} + \text{h.c.}) - U \sum_{i} n_{i,1} n_{i+1,1} + \sum_{i,\sigma} E_{\sigma} n_{i\sigma}, \quad (\text{S28})
+$$
+---PAGE_BREAK---
+
+Here $a_{i,\sigma}$ is a fermionic annihilation operator at site $i=1,\dots,N_\Phi$ and level $\sigma \in \{0,1\}$. The relation with the main text operators is $a_{i,0} = c_i$ and $a_{i,1} = f_i$. The Hamiltonian includes an attractive ($U>0$) interaction between neighbouring particles in the level $\sigma = 1$. The operator $n_{i\sigma} = a_{i\sigma}^\dagger a_{i\sigma}$ measures the occupation at site $i$ and level $\sigma$. We assume that the number of particles in the system is $N_e = N_\Phi/3$.
+
+To capture the essential physics generated by the interaction, we first study the two-particle problem. Defining the two-particle state $|\sigma\sigma'\rangle = \sum_{j,l} \chi_{jl}^{\sigma\sigma'} a_{j,\sigma}^{\dagger} a_{l,\sigma'}^{\dagger} |0\rangle$, with $|0\rangle$ the state with no particles (vacuum), the Schrödinger equation for the wavefunction $\chi_{jl}^s = \frac{1}{\sqrt{2}}(\chi_{jl}^{11}-\chi_{lj}^{11})$ reads $-J_1(\Delta_j+\Delta_l)\chi_{jl}^s - U(\delta_{j,l+1}+\delta_{j+1,l})\chi_{jl}^s = (E-2E_1+4J_1)\chi_{jl}^s$, with $\Delta_j\chi_{jl} = \chi_{j+1,l} - 2\chi_{jl} + \chi_{j-1,l}$ the discrete Laplace operator. Introducing the centre-of-mass and relative coordinates $R=a(j+l)/2$ and $r=a(j-l)$, the wavefunction can be written as $\chi_{jl} = e^{iRK}\chi(K)_r = e^{iRK} \sum_q e^{irq} \tilde{\chi}_q^s$, where we have introduced the total and relative momenta $K=k_1+k_2$ and $2q=k_1-k_2$. The Schrödinger equation for $\tilde{\chi}_q^s$ becomes
+
+$$ \tilde{\chi}_q^s = \frac{2U \sin q\, \bar{\chi}}{N_\Phi \left(E - 2E_1 + 4J_1 \cos \frac{K}{2} \cos q\right)} \;\rightarrow\; E_d(K) = 2E_1 - U - \frac{4J_1^2}{U} \cos^2 \frac{K}{2} \quad (S29) $$
+
+where $\bar{\chi} = \sum_q 2 \sin q\, \tilde{\chi}_q^s$. For fixed centre-of-mass momentum, the bound-state energy $E_d(K)$ is found by solving Eq. (S29) self-consistently. We consider the regime $J_0 \sim 0$, along with $|U|, E_1 \gg J_1$ (i.e., $J_1/E_1 \ll 1$ and $J_1/|U| \ll 1$). In this case, the bound-state energy $E_d(K \sim 0)$ is far below the bottom of the (1,0)-pair band, but still above the (0,0) two-particle band. The amplitude for tunneling between two $\sigma = 0$ states is taken to be negligible compared to all other energy scales ($J_0 \sim 0$), which implies a flat band for the (0,0)-pair. The two-particle energies $E_{\sigma\sigma'}$ of the continuous Bloch bands are
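As a numerical cross-check of the bound-state energy in Eq. (S29), one can diagonalize the relative-coordinate problem directly: odd-parity wavefunction $\chi(r)$, effective relative hopping $J_K = 2J_1\cos(K/2)$, and attraction $-U$ at unit separation (parameter values are illustrative):

```python
import numpy as np

J1, U, K, E1 = 1.0, 8.0, 0.6, 0.0
R = 200                             # cutoff for the relative coordinate r = 1..R
JK = 2 * J1 * np.cos(K / 2)         # effective relative hopping at momentum K

H = np.zeros((R, R))
for r in range(R - 1):
    H[r, r + 1] = H[r + 1, r] = -JK  # relative-motion hopping (chi(0) = 0 via open edge)
H[0, 0] = -U                         # attraction when the particles are neighbours

E_num = 2 * E1 + np.linalg.eigvalsh(H)[0]               # doublon bound-state energy
E_S29 = 2 * E1 - U - 4 * J1**2 / U * np.cos(K / 2)**2   # Eq. (S29)
```

For the infinite chain the odd-parity bound state has exactly $E = 2E_1 - U - J_K^2/U$, which coincides with Eq. (S29), so the finite-lattice result agrees up to exponentially small boundary corrections.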
+
+$$ E_{\sigma\sigma'}(K,q) = (\sigma + \sigma')E_1 - 2(J_\sigma + J_{\sigma'}) \cos \frac{Ka}{2} \cos qa + 2(J_\sigma - J_{\sigma'}) \sin \frac{Ka}{2} \sin qa. \quad (S30) $$
+
+A doublon state of definite momentum is created by the combination $d_K^\dagger = \sum_R e^{iKR} \sum_l e^{-l/\xi(K)} f_{R+\frac{l}{2}}^\dagger f_{R-\frac{l}{2}}^\dagger$, with $\xi^{-1}(K) = \ln\!\left(\frac{|U|}{J_1 \cos \frac{K}{2}}\right)$. The state $|d_K\rangle = d_K^\dagger |0\rangle$ is normalized such that $\langle d_{K'}|d_K\rangle = \delta_{K'K}$, with $|0\rangle$ the vacuum state.
+
+# VI. LASER DRIVING AND COUPLING TO A BATH
+
+We are interested in the dynamics generated between the low-lying doublon states and the (0,0) lower energy band. We can induce Raman transitions between states in these bands using an external driving laser with Raman detuning $\Delta = 2E_1 - U - \omega$. The interaction between the (classical) radiation of amplitude $\Omega$ and frequency $\omega$ and the system is $H_{\text{rad}} = \Omega \cos(\omega t) \sum_i f_i^\dagger (c_i + \alpha(c_{i-1} + c_{i+1})) + \text{h.c.}$, with $\alpha = A_1/A_0 \ll 1$. The amplitudes $A_m$ decay quickly with $m$, as they represent matrix elements of the different Wannier functions and the laser radiation profile.
+
+We couple our system to a three-dimensional (3D) Bose-Einstein condensate (BEC). This coupling is realized by immersing the system in the BEC, which acts as a dissipative, memoryless bath (valid when the spectral function of the bath is almost constant around the laser frequency $\omega$). The bath is very efficient at de-exciting the system and does not induce excitations to higher energy bands, since its temperature $T$ satisfies $T \ll \omega$, so thermal excitations in the bath cannot excite the system. This type of bath has been used to obtain superconducting states realizing Majorana zero modes¹². The quasiparticle excitations of this bath are described by the Hamiltonian $H_{\text{bath}} = \sum_k E_k b_k^\dagger b_k$, where the Bogoliubov quasiparticles $b_k$ have mass $m_b$ and propagate with velocity $c_b$. These bosonic quasiparticles have energy $E_k = k(c_b^2 + k^2/4m_b^2)^{1/2}$, with $k = |\mathbf{k}| = (k_x^2 + k_y^2 + k_z^2)^{1/2}$ the magnitude of the three-dimensional momentum. Here $b_k$ ($b_k^\dagger$) destroys (creates) a bosonic Bogoliubov excitation of momentum $\mathbf{k}$. We work in the regime where the excitation induced by the laser is relaxed by the bath immediately, such that states with multiple excitations are not created. This approximation corresponds to the limit of a weak, far-detuned laser, $E_d \gg |\Delta| \gg |g|, |\Omega|$.
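The two limits of the Bogoliubov dispersion $E_k = k(c_b^2 + k^2/4m_b^2)^{1/2}$, phonon-like at small $k$ and free-particle-like at large $k$, are easy to check numerically (parameter values are illustrative):

```python
import numpy as np

# Bogoliubov dispersion: linear (phonon) for k << m_b c_b,
# quadratic (free particle) for k >> m_b c_b.
m_b, c_b = 1.0, 2.0

def E(k):
    return k * np.sqrt(c_b**2 + k**2 / (4 * m_b**2))

phonon_ratio = E(1e-4) / (c_b * 1e-4)        # -> 1 in the phonon limit
free_ratio = E(1e4) / (1e4**2 / (2 * m_b))   # -> 1 in the free-particle limit
```

The crossover occurs at $k \sim m_b c_b$, i.e., at the inverse healing length of the condensate.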
+
+The interaction between the system and the condensate is described by the density-density coupling $H_{\text{bath/sys}} = g \int d\mathbf{r}\, \delta\rho_s(\mathbf{r})\,\delta\rho_{\text{bath}}(\mathbf{r})$, where $g$ is the strength of the system/bath coupling, which is assumed to be small. The density $\delta\rho_s(\mathbf{r})$ is the density of fermions at the point $\mathbf{r}$ in 3D, so it involves the Wannier wavefunctions of the fermions. The density $\delta\rho_{\text{bath}}$ corresponds to the phonon waves around the equilibrium BEC density induced by the interaction with the fermions in the optical lattice.² We move to a rotating frame of reference (the rotating wave approximation, RWA), where the problem, in the reduced Hilbert space consisting of the two lowest bands and the bath, looks static. This is achieved by introducing the time-dependent unitary transformation $\mathcal{U}_t = \exp(iH_{\text{sys}}t)$. The penalty is that terms involving higher bands of the extended Hilbert space are fast oscillating. By coarse graining in time we
+---PAGE_BREAK---
+
+are allowed to discard these terms, which amounts to projecting out the higher bands. The radiation Hamiltonian is modified by the rotating wave approximation accordingly. In this rotating basis, the interaction Hamiltonian with the bath $H_{\text{bath/sys}}$ is also modified, as $H'_{\text{int}} = \mathcal{U}_t H_{\text{int}} \mathcal{U}_t^{\dagger}$. Retaining only the slowly oscillating terms (with frequency $\sim \omega$) in the system-bath interaction amounts to considering only the transitions between the doublon and the (0,0) band. Each of these transitions is accompanied by the creation (or destruction) of a phonon excitation in the bath.
+
+Tracing over the bath, using the Born-Markov approximation, results in a quantum master equation for the effective density matrix in the reduced Hilbert space involving the bottom and the doublon bands. For small amplitude of the external drive $\Omega$, such that $|\Delta| \gg |g|, |\Omega|$, no more than one doublon is excited at any given time. We may then trace out the doublon band, and obtain a closed dissipative equation of motion within the Hilbert space of the lowest band. After this adiabatic elimination of the doublon band, we find the quantum master equation
+
+$$ \dot{\rho} = \mathcal{L}(\rho) = \gamma \sum_{i=1}^{N_\Phi} \left( \tilde{l}_i \rho \tilde{l}_i^\dagger - \frac{1}{2} \{\tilde{l}_i^\dagger \tilde{l}_i, \rho\} \right), \quad \text{with } \gamma = \frac{\Omega^2 \epsilon_i^2}{\Delta^2} \Gamma_0, $$
+
+where $\Gamma_0$ parameterizes details of the bath/system coupling.
+
+The quantum jump operator is in turn $\tilde{l}_i = R_i^\dagger Q_i$ with $R_i^\dagger = c_i^\dagger c_{i+1}^\dagger + t(c_i^\dagger c_{i+2}^\dagger + c_{i+1}^\dagger c_{i+3}^\dagger)$, and $Q_i = \ell_{1,i} + A(\ell_{0,i-1} + \ell_{0,i+1}) + B(\ell_{1,i+1} + \ell_{1,i-1})$.
+
+¹ A. Griessner, A. J. Daley, S. R. Clark, D. Jaksch, and P. Zoller, “Dark-state cooling of atoms by superfluid immersion,” Phys. Rev. Lett. 97, 220403 (2006).
+
+² S. Diehl, A. Micheli, A. Kantian, B. Kraus, H. P. Büchler, and P. Zoller, “Quantum states and phases in driven open quantum systems with cold atoms,” Nat. Phys. 4, 878 (2008).
+
+³ S. Diehl, W. Yi, A. J. Daley, and P. Zoller, “Dissipation-induced d-wave pairing of fermionic atoms in an optical lattice,” Phys. Rev. Lett. 105, 227001 (2010).
+
+⁴ M. Roncaglia, M. Rizzi, and J. I. Cirac, “Pfaffian state generation by strong three-body dissipation,” Phys. Rev. Lett. 104, 096803 (2010).
+
+⁵ W. Yi, S. Diehl, A. J. Daley, and P. Zoller, “Driven-dissipative many-body pairing states for cold fermionic atoms in an optical lattice,” New J. Phys. 14, 055002 (2012).
+
+⁶ A. C. Berceanu, H. M. Price, T. Ozawa, and I. Carusotto, “Momentum-space landau levels in driven-dissipative cavity arrays,” Phys. Rev. A 93, 013827 (2016).
+
+⁷ L. Zhou, S. Choi, and M. D. Lukin, “Symmetry-protected dissipative preparation of matrix product states,” Preprint at https://arxiv.org/abs/1706.01995 (2017).
+
+⁸ Z. Leghtas, U. Vool, S. Shankar, M. Hatridge, S. M. Girvin, M. H. Devoret, and M. Mirrahimi, “Stabilizing a bell state of two superconducting qubits by dissipation engineering,” Phys. Rev. A 88, 023849 (2013).
+
+⁹ Y. Liu, S. Shankar, N. Ofek, M. Hatridge, A. Narla, K. M. Sliwa, L. Frunzio, R. J. Schoelkopf, and M. H. Devoret, “Comparing and combining measurement-based and driven-dissipative entanglement stabilization,” Phys. Rev. X 6, 011022 (2016).
+
+¹⁰ M. E. Kimchi-Schwartz, L. Martin, E. Flurin, C. Aron, M. Kulkarni, H. E. Tureci, and I. Siddiqi, “Stabilizing entanglement via symmetry-selective bath engineering in superconducting qubits,” Phys. Rev. Lett. 116, 240503 (2016).
+
+¹¹ M. Naghiloo, M. Abbasi, Y. N. Joglekar, and K. W. Murch, “Quantum state tomography across the exceptional point in a single dissipative qubit,” Nat. Phys. 15, 1232–1236 (2019).
+
+¹² S. Diehl, E. Rico, M. A. Baranov, and P. Zoller, “Topology by dissipation in atomic quantum wires,” Nat Phys 7, 971–977 (2011).
+
+¹³ C-E. Bardyn, M. A. Baranov, C. V. Kraus, E. Rico, A. Imamoglu, P. Zoller, and S. Diehl, “Topology by dissipation,” New J. Phys. 15, 085001 (2013).
+
+¹⁴ F. Iemini, D. Rossini, R. Fazio, S. Diehl, and L. Mazza, “Dissipative topological superconductors in number-conserving systems,” Phys. Rev. B 93, 115113 (2016).
+
+¹⁵ D. A. Lidar, I. L. Chuang, and K. B. Whaley, “Decoherence-free subspaces for quantum computation,” Phys. Rev. Lett. 81, 2594–2597 (1998).
+
+¹⁶ Emanuel Knill, Raymond Laflamme, and Lorenza Viola, “Theory of quantum error correction for general noise,” Phys. Rev. Lett. 84, 2525–2528 (2000).
+
+¹⁷ S. Touzard, A. Grimm, Z. Leghtas, S. O. Mundhada, P. Reinhold, C. Axline, M. Reagor, K. Chou, J. Blumoff, K. M. Sliwa, S. Shankar, L. Frunzio, R. J. Schoelkopf, M. Mirrahimi, and M. H. Devoret, “Coherent oscillations inside a quantum manifold stabilized by dissipation,” Phys. Rev. X 8, 021005 (2018).
+
+¹⁸ M. Hamermesh, *Group Theory and Its Application to Physical Problems*, Addison Wesley Series in Physics (Dover Publications, 1989).
+
+¹⁹ G. Lindblad, “On the generators of quantum dynamical semigroups,” Commun. Math. Phys. 48, 119–130 (1976).
+
+²⁰ V. Gorini, A. Kossakowski, and E. C. G. Sudarshan, “Completely positive dynamical semigroups of N-level systems,” J. Math. Phys. 17, 821–825 (1976).
+
+²¹ B. Buča and T. Prosen, “A note on symmetry reductions of the Lindblad equation: transport in constrained open spin chains,” New J. Phys. 14, 073007 (2012).
+
+²² V. Albert and L. Jiang, “Symmetries and conserved quantities in lindblad master equations,” Phys. Rev. A 89, 022118
+---PAGE_BREAK---
+
+(2014).
+
+23 Z. Zhang, J. Tindall, J. Mur-Petit, D. Jaksch, and B. Buča, “Stationary state degeneracy of open quantum systems with non-Abelian symmetries,” arXiv:1912.12185 [quant-ph].
+
+24 H.P. Breuer and F. Petruccione, *The Theory of Open Quantum Systems* (Oxford University Press, 2002).
+
+25 F. D. M. Haldane and E. H. Rezayi, “Periodic Laughlin-Jastrow wave functions for the fractional quantized Hall effect,” Phys. Rev. B **31**, 2529–2531 (1985).
+
+26 M. Hermanns, J. Suorsa, E. J. Bergholtz, T. H. Hansson, and A. Karlhede, “Quantum Hall wave functions on the torus,” Phys. Rev. B **77**, 125321 (2008).
+
+27 R. B. Laughlin, “Quantized Hall conductivity in two dimensions,” Phys. Rev. B **23**, 5632–5633 (1981).
+
+28 S. A. Trugman and S. Kivelson, “Exact results for the fractional quantum Hall effect with general interactions,” Phys. Rev. B **31**, 5280–5284 (1985).
+
+29 G. Ortiz, Z. Nussinov, J. Dukelsky, and A. Seidel, “Repulsive interactions in quantum Hall systems as a pairing problem,” Phys. Rev. B **88**, 165303 (2013).
+
+30 R. Tao and D. J. Thouless, “Fractional quantization of Hall conductance,” Phys. Rev. B **28**, 1142–1144 (1983).
+
+31 Hereafter we switch from the orbital guiding center index $n$ to the real space site index $i$.
+
+32 M. Nakamura, Z-Y. Wang, and E. J. Bergholtz, “Exactly solvable fermion chain describing a $\nu = 1/3$ fractional quantum Hall state,” Phys. Rev. Lett. **109**, 016401 (2012).
+
+33 M. Fannes, B. Nachtergaele, and R. F. Werner, “Finitely correlated states on quantum spin chains,” Commun. Math. Phys. **144**, 443–490 (1992).
+
+34 A. Klümper, A. Schadschneider, and J. Zittartz, “Matrix product ground states for one-dimensional spin-1 quantum antiferromagnets,” EPL **24**, 293 (1993).
+
+35 W. H. Press, S. A. Teukolsky, W. T. Vetterling, and B. P. Flannery, *Numerical Recipes in C*, 2nd ed. (Cambridge University Press, Cambridge, USA, 1992).
+
+36 In the approximation we neglect the terms of the evolved state $\rho(t)$ which are smaller than a threshold $\epsilon = 10^{-5}$ and construct the effective Lindbladian for the remaining subspace, which has a smaller dimension. The dotted lines for $L = 15$ for the $D_{DDS}$ are obtained from the knowledge of the ADR, and simply performing a continuation of the dynamics.
+
+37 J. E. Avron, M. Fraas, and G. M. Graf, “Adiabatic response for Lindblad dynamics,” J. Stat. Phys. **148**, 800–823 (2012).
+
+38 M. S. Sarandy and D. A. Lidar, “Adiabatic approximation in open quantum systems,” Phys. Rev. A **71**, 012331 (2005).
+
+39 V. Albert, B. Bradlyn, M. Fraas, and L. Jiang, “Geometry and response of Lindbladians,” Phys. Rev. X **6**, 041031 (2016).
+
+40 M. Schreiber, S. S. Hodgman, P. Bordia, H. P. Lüschen, M. H. Fischer, R. Vosk, E. Altman, U. Schneider, and I. Bloch, “Observation of many-body localization of interacting fermions in a quasirandom optical lattice,” Science **349**, 842–845 (2015).
+
+41 D-H. Lee and J. M. Leinaas, “Mott insulators without symmetry breaking,” Phys. Rev. Lett. **92**, 096401 (2004).
+
+42 A. Seidel, H. Fu, D-H. Lee, J. M. Leinaas, and J. Moore, “Incompressible quantum liquids and new conservation laws,” Phys. Rev. Lett. **95**, 266405 (2005).
\ No newline at end of file
diff --git a/samples/texts_merged/6284605.md b/samples/texts_merged/6284605.md
new file mode 100644
index 0000000000000000000000000000000000000000..7ddd83f20ad170415770ddac87cabc11c50ab16d
--- /dev/null
+++ b/samples/texts_merged/6284605.md
@@ -0,0 +1,768 @@
+
+---PAGE_BREAK---
+
+# BIG INDECOMPOSABLE MODULES AND DIRECT-SUM RELATIONS
+
+WOLFGANG HASSLER, RYAN KARR, LEE KLINGLER, AND ROGER WIEGAND
+
+*Dedicated to Phillip Griffith, the master of syzygies*
+
+**ABSTRACT.** A commutative Noetherian local ring $(R, \mathfrak{m})$ is said to be *Dedekind-like* provided $R$ has Krull-dimension one, $R$ has no non-zero nilpotent elements, the integral closure $\bar{R}$ of $R$ is generated by two elements as an $R$-module, and $\mathfrak{m}$ is the Jacobson radical of $\bar{R}$. A classification theorem due to Klingler and Levy implies that if $M$ is a finitely generated indecomposable module over a Dedekind-like ring, then, for each minimal prime ideal $P$ of $R$, the vector space $M_P$ has dimension 0, 1 or 2 over the field $R_P$. The main theorem in the present paper states that if $R$ (commutative, Noetherian and local) has non-zero Krull dimension and is not a homomorphic image of a Dedekind-like ring, then there are indecomposable modules that are free of any prescribed rank at each minimal prime ideal.
+
+## 1. Introduction
+
+In a series of papers [14]–[16] Klingler and Levy proved the existence of tame-wild dichotomy for commutative Noetherian rings. They gave a complete classification of all finitely generated modules over Dedekind-like rings (cf. Definition 1.1) and showed that, over any ring that is not a homomorphic image of a Dedekind-like ring, the category of finite-length modules has wild representation type. A consequence of their classification is that if $M$ is an indecomposable finitely generated module over a Dedekind-like ring $R$, then $M_P$ is free of rank 0, 1 or 2 at each minimal prime ideal $P$ of $R$. The main theorem of the present paper complements this result of Klingler and Levy. We prove that if $(R, \mathfrak{m}, k)$ is a commutative local Noetherian ring of non-zero Krull dimension and $R$ is not a homomorphic image of a Dedekind-like ring,
then there are indecomposable modules that are free of any prescribed rank at each minimal prime.

Received August 2, 2006; received in final form November 20, 2006.

2000 Mathematics Subject Classification. Primary 13C05, 13E05, 13D07.

The research of W. Hassler was supported by the Fonds zur Förderung der Wissenschaftlichen Forschung, project number P18779-N13. Wiegand's research was partially supported by NSA Grant H98230-05-1-0243.

This result was obtained in [9] for the case of a Cohen-Macaulay ring, using a direct but highly intricate construction. In [10] we gave a much simpler argument that handles all rings—Cohen-Macaulay or not—for which some power of the maximal ideal requires at least 3 generators. The remaining case, when $(R, \mathfrak{m}, k)$ is not Cohen-Macaulay and each power of $\mathfrak{m}$ is two-generated, was treated via an indirect argument using the bimodule structure of certain Ext modules. In this paper we apply the Ext argument, together with periodicity of resolutions over hypersurface rings, to give a unified treatment of the case when each power of $\mathfrak{m}$ is two-generated. Thus this paper does not rely on the technical construction in [9]. Our goal is to make the paper essentially self-contained, though we do refer without proof to some of the results of [6], [10] and [14]–[16].
+
We actually obtain $\max\{|R/\mathfrak{m}|, \aleph_0\}$ pairwise non-isomorphic indecomposable modules of each rank. This refinement allows us, in dimension one, to obtain precise defining equations for the monoid of isomorphism classes of finitely generated modules that are free on the punctured spectrum. This generalizes the results of [6], which apply only to the Cohen-Macaulay case.
+
+Our main theorem provides indecomposable modules that are free of specified rank at each prime $P$ in a given finite set $\mathcal{P} \subseteq \text{Spec}(R) - \{\mathfrak{m}\}$. In dimension greater than one we have to allow for the fact that if $M_P \cong R_P^{(n)}$ and $Q$ is a prime ideal contained in $P$, then $M_Q \cong R_Q^{(n)}$. For $P_1, P_2 \in \mathcal{P}$ we write $P_1 \sim P_2$ if $P_1 \cap P_2$ contains a prime ideal of $R$ (not necessarily in $\mathcal{P}$). (Of course “$\sim$” is not necessarily a transitive relation.)
+
+**DEFINITION 1.1.** The commutative, Noetherian local ring $(R, \mathfrak{m}, k)$ is *Dedekind-like* [14, Definition 2.5] provided $R$ is one-dimensional and reduced, the integral closure $\bar{R}$ of $R$ in the total quotient ring of $R$ is generated by at most 2 elements as an $R$-module, and $\mathfrak{m}$ is the Jacobson radical of $\bar{R}$. We call $(R, \mathfrak{m}, k)$ an *exceptional Dedekind-like ring* provided, in addition, $\bar{R}/\mathfrak{m}$ is a purely inseparable field extension of $k$ of degree 2.
+
+There is a global notion of Dedekind-like, which is equivalent to Noetherian and locally Dedekind-like [16, Corollary 10.7]. In this article, “Dedekind-like” always means Dedekind-like and local, except in the last section, where we take up the question of the size of finitely generated indecomposable modules over arbitrary commutative Noetherian rings.
+
+The classification of finitely generated modules in [15] and [16] does not apply to exceptional Dedekind-like rings. The details in the exceptional case are extremely complicated and are currently being worked out by L. Klingler, G. Piepmeyer and S. Wiegand. It appears that the indecomposable modules over an exceptional Dedekind-like ring have torsion-free rank 0, 1 or 2, as in
+the non-exceptional case. Thus everything in this paper would hold without
+the “non-exceptional” proviso. Nonetheless, since the classification of modules
+in the exceptional case is still a work in progress, we have decided to restrict
+to non-exceptional Dedekind-like rings in the second part of our main theorem
+below.
+
+**THEOREM 1.2 (Main Theorem).** Let $(R, \mathfrak{m}, k)$ be a commutative Noetherian local ring.
+
(i) Suppose $R$ is not a homomorphic image of a Dedekind-like ring. Let $\mathcal{P}$ be a finite set of non-maximal prime ideals of $R$, and let $n_P$ be a non-negative integer for each $P \in \mathcal{P}$. Assume that $n_P = n_Q$ whenever $P \sim Q$. Then there exist $|k| \cdot \aleph_0$ pairwise non-isomorphic indecomposable finitely generated $R$-modules $X$ such that, for each $P \in \mathcal{P}$, the localization $X_P$ is a free $R_P$-module of rank $n_P$.
+
+(ii) Conversely, assume $R$ is not an exceptional Dedekind-like ring, but that $R$ is a homomorphic image of some Dedekind-like ring. If $X$ is an indecomposable finitely generated $R$-module and $P$ is a non-maximal prime, then $X_P$ either is 0 or is isomorphic to $R_P$ or $R_P^{(2)}$.
+
+It is tempting to conjecture a substantial improvement of this result in
+higher dimensions. Let $(R, \mathfrak{m}, k)$ be a local ring of dimension at least two,
+and let $C_1, \dots, C_t$ be the connected components of the punctured spectrum
+$\text{Spec}(R) - \{\mathfrak{m}\}$. Given any sequence $n_1, \dots, n_t$ of non-negative integers, is
+there necessarily an indecomposable $R$-module $M$ such that $M_P \cong R_P^{(n_i)}$ for
+each $i$ and each $P \in C_i$? Our methods do not seem to yield modules that are
+free on the entire punctured spectrum.
+
Part (ii) of the Main Theorem is an easy consequence of the classification theorem in [15]: Since the assertion is vacuous if $\dim(R) = 0$ and the hypotheses fail if $\dim(R) > 1$, we assume $\dim(R) = 1$. Let $R = D/J$, where $D$ is a Dedekind-like ring. If $D$ were an exceptional Dedekind-like ring, then, by assumption, $J$ would have to be non-zero. But then $R$ would be zero-dimensional, since exceptional Dedekind-like rings are domains. Therefore $D$ is not exceptional, and we can apply the results in [15] and [16]. Write $P = Q/J$, where $Q$ is a non-maximal, hence minimal, prime ideal of $D$. Viewing $X$ as a $D$-module, we see, using [16, Corollary 16.4], that $X_Q$ is either 0 or is isomorphic to $D_Q$ or $D_Q^{(2)}$. Since the natural map $D_Q \to R_P$ is an isomorphism, the desired conclusion follows.
+
## 2. When some power of $\mathfrak{m}$ requires 3 or more generators
+
**PROPOSITION 2.1.** Let $(R, \mathfrak{m}, k)$ be a commutative, Noetherian local ring for which some power $\mathfrak{m}^r$ of the maximal ideal requires at least three generators. Let $\mathcal{P}$ be a finite subset of $\text{Spec}(R) - \{\mathfrak{m}\}$, and let $n_P$ be a non-negative integer for each $P \in \mathcal{P}$. Assume that $n_P = n_Q$ whenever $P \sim Q$.
Let $n_1 < \cdots < n_t$ be the distinct integers in $\{n_P \mid P \in \mathcal{P}\}$, and put $n := n_1 + \cdots + n_t$. Given any integer $q > n$, there are $|k|$ pairwise non-isomorphic indecomposable finitely generated $R$-modules $M$ such that
+
(i) $M$ needs exactly $2q$ generators, and
+
+(ii) $M_P \cong R_P^{(n_P)}$ for each $P \in \mathcal{P}$.
+
+*Proof.* Choose $x \in \mathfrak{m}^r - (\mathfrak{m}^{r+1} \cup (\bigcup \mathcal{P}))$, $y \in \mathfrak{m}^r - ((\mathfrak{m}^{r+1} + Rx) \cup (\bigcup \mathcal{P}))$ and $z \in \mathfrak{m}^r - ((\mathfrak{m}^{r+1} + Rx + Ry) \cup (\bigcup \mathcal{P}))$. Thus $x, y$ and $z$ are outside the union of the primes in $\mathcal{P}$, and their images in $\mathfrak{m}^r/\mathfrak{m}^{r+1}$ are linearly independent.
+
For $i = 1, \dots, t$, let $\mathcal{P}_i = \{P \in \mathcal{P} \mid n_P = n_i\}$. Put $S_i = R - \bigcup \mathcal{P}_i$, and let $K_i$ be the kernel of the natural map $R \to S_i^{-1}R$. We claim that $0 \in S_i S_j$ if $i \neq j$. If not, there would be a prime ideal $Q$ disjoint from the multiplicative set $S_i S_j$. But then $Q$ would be contained in $P_i \cap P_j$ for some $P_i \in \mathcal{P}_i$ and $P_j \in \mathcal{P}_j$; thus $P_i \sim P_j$, forcing $n_i = n_j$, a contradiction. It follows that $S_i^{-1}S_j^{-1}R = 0$ if $i \neq j$, that is, $K_i S_j^{-1}R = S_j^{-1}R$ if $i \neq j$. Therefore we can choose, for each $i = 1, \dots, t$, an element
+
+$$
+\xi_i \in K_i \mathfrak{m}^{r+1} - \bigcup_{j \neq i} (\bigcup \mathcal{P}_j).
+$$
+
+The image of $\xi_i$ in $S_j^{-1}R$ is 0 if $i=j$ and a unit if $i \neq j$.
+
+Let $I_l$ denote the $l \times l$ identity matrix and $0_l$ the $l \times l$ zero matrix. Let $H = H_q$ be the $q \times q$ nilpotent Jordan block with 1's on the superdiagonal and 0's elsewhere. Given any element $u \in R$, put
+
+$$
+\Delta = \Delta_{q,u} := (z + uy)I_q + yH_q.
+$$
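
Since the construction hinges on $H_q$, it may help to recall its two key properties: $H_q^q = 0$ while $H_q^{q-1} \neq 0$, so the minimal polynomial of $H_q$ is $X^q$ and $H_q$ is non-derogatory. A quick sketch checking this with sympy (purely illustrative; the paper itself involves no computation):

```python
import sympy as sp

def jordan_nilpotent(q):
    """q x q matrix with 1's on the superdiagonal and 0's elsewhere."""
    return sp.Matrix(q, q, lambda i, j: 1 if j == i + 1 else 0)

q = 4
H = jordan_nilpotent(q)

# H^q = 0 but H^(q-1) != 0, so the minimal polynomial of H is X^q.
# Its degree equals the size of H, which is what "non-derogatory" means.
assert H**q == sp.zeros(q, q)
assert H**(q - 1) != sp.zeros(q, q)
```
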
+
+Consider the following matrix:
+
+$$
+(1) \quad A = A_{q,u} := \begin{bmatrix} \Xi & \Delta \\ 0_q & xI_q \end{bmatrix} \in \mathrm{Mat}_{2q \times 2q}(R),
+$$
+
+where
+
$$
\Xi := \begin{bmatrix}
\xi_1 I_{n_1} & 0 & \cdots & 0 & 0 \\
0 & \xi_2 I_{n_2} & \cdots & 0 & 0 \\
\vdots & \vdots & \ddots & \vdots & \vdots \\
0 & 0 & \cdots & \xi_t I_{n_t} & 0 \\
0 & 0 & \cdots & 0 & x^2 I_{q-n}
\end{bmatrix} \in \mathrm{Mat}_{q \times q}(R).
$$
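
For orientation, here is the smallest non-trivial instance: with $t = 1$, $n_1 = 1$ and $q = 2$ (so that $n = n_1 = 1$), the matrices specialize to

$$
\Xi = \begin{bmatrix} \xi_1 & 0 \\ 0 & x^2 \end{bmatrix}, \qquad
\Delta = \begin{bmatrix} z + uy & y \\ 0 & z + uy \end{bmatrix}, \qquad
A = \begin{bmatrix} \xi_1 & 0 & z + uy & y \\ 0 & x^2 & 0 & z + uy \\ 0 & 0 & x & 0 \\ 0 & 0 & 0 & x \end{bmatrix}.
$$
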
+
We let $A$ operate on $R^{(q)} \oplus R^{(q)}$ by left multiplication, and we put $M = M_{q,u} := \operatorname{coker}(A)$. Since the entries of $A$ are in $\mathfrak{m}$, $M_{q,u}$ requires exactly $2q$ generators.
+
We now show that $M_{q,u}$ is indecomposable, and that $M_{q,u'} \not\cong M_{q,u}$ if $u, u' \in R$ and $u \not\equiv u' \pmod{\mathfrak{m}}$. Fix $q$, let $u, u' \in R$, and put $A' := A_{q,u'}$, $M' := M_{q,u'}$ and $\Delta' := \Delta_{q,u'}$. Let $f$ be an arbitrary $R$-homomorphism from $M_{q,u}$ to
+$M_{q,u'}$. We lift $f$ to homomorphisms $F$ and $G$ making the following diagram commutative:
+
$$
\begin{tikzcd}
R^{(q)} \oplus R^{(q)} \arrow[r, "A"] \arrow[d, "G"'] & R^{(q)} \oplus R^{(q)} \arrow[r] \arrow[d, "F"] & M \arrow[r] \arrow[d, "f"] & 0 \\
R^{(q)} \oplus R^{(q)} \arrow[r, "A'"] & R^{(q)} \oplus R^{(q)} \arrow[r] & M' \arrow[r] & 0
\end{tikzcd}
$$
+
+When we write $F$ and $G$ as $2 \times 2$ block matrices, this diagram yields the equation
+
+$$
+\begin{equation} \tag{2}
+\begin{aligned}
+\left[ \begin{array}{cc}
+F_{11} \Xi & F_{11} \Delta + F_{12} x \\
+F_{21} \Xi & F_{21} \Delta + F_{22} x
+\end{array} \right] &= FA = A'G \\
+&= \begin{bmatrix} \Xi G_{11} + \Delta' G_{21} & \Xi G_{12} + \Delta' G_{22} \\ G_{21}x & G_{22}x \end{bmatrix}.
+\end{aligned}
+\end{equation}
+$$
+
+Let stars denote the images, in $\mathfrak{m}^r/\mathfrak{m}^{r+1}$, of elements of $\mathfrak{m}^r$. Thus $\xi_i^* = 0$ for each $i$, $(x^2)^* = 0$, and $x^*, y^*$ and $z^*$ are linearly independent over $k$. Let bars denote reductions modulo $\mathfrak{m}$ of elements of $R$ and of matrices over $R$. Comparing the 2,2 entries of the matrix equation (2), we obtain the following equation:
+
+$$
+\overline{F}_{21}(\overline{u}\overline{I}_q + \overline{H})y^* + \overline{F}_{21}z^* + \overline{F}_{22}x^* = \overline{G}_{22}x^*.
+$$
+
+It follows that
+
+$$
+\overline{F}_{21} = 0 \text{ and } \overline{F}_{22} = \overline{G}_{22}.
+$$
+
+An examination of the 1,2 entries in (2) yields the following equation:
+
$$
\overline{F}_{11}(\overline{u}\overline{I}_q + \overline{H})y^* + \overline{F}_{11}z^* + \overline{F}_{12}x^* = \overline{G}_{22}z^* + (\overline{u}'\overline{I}_q + \overline{H})\overline{G}_{22}y^*.
$$
+
+It follows that
+
+$$
+(3) \quad \bar{F}_{12} = 0, \quad \bar{F}_{11} = \bar{G}_{22} \quad \text{and} \quad \bar{F}_{11}(\bar{u}\bar{I}_q + \bar{H}) = (\bar{u}'\bar{I}_q + \bar{H})\bar{G}_{22}.
+$$
+
+The last two equations in (3) show that
+
+$$
+(4) \quad (\bar{u} - \bar{u}') \bar{F}_{11} = \bar{H} \bar{F}_{11} - \bar{F}_{11} \bar{H}.
+$$
+
+Suppose now that $u \not\equiv u'$ (mod $\mathfrak{m}$). Then $\bar{u}-\bar{u}' \in k^\times$, and since $\bar{H}^q = 0$ we see, by descending induction, that $\bar{H}^i \bar{F}_{11} \bar{H}^j = 0$ for $i,j=0,\dots,q$. Setting $i=j=0$, we get $\bar{F}_{11}=0$. Since $\bar{F}_{12}=0$ too, $\bar{F}$ is not surjective, and now Nakayama's lemma implies that $f$ is not surjective. Since $f$ was an arbitrary element of $\operatorname{Hom}_R(M_{q,u}, M_{q,u'})$, this shows that $M_{q,u} \not\cong M_{q,u'}$.
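
The descending induction rests on a standard linear-algebra fact: for a unit $c$ and nilpotent $\bar{H}$, the only solution of $c\bar{F}_{11} = \bar{H}\bar{F}_{11} - \bar{F}_{11}\bar{H}$ is $\bar{F}_{11} = 0$, since the commutator map $X \mapsto \bar{H}X - X\bar{H}$ is nilpotent. A sympy sketch verifying this for $q = 3$ over $\mathbb{Q}$ (illustrative only; not part of the argument):

```python
import sympy as sp

q = 3
H = sp.Matrix(q, q, lambda i, j: 1 if j == i + 1 else 0)  # nilpotent Jordan block
syms = sp.symbols(f'f0:{q*q}')
F = sp.Matrix(q, q, syms)
c = sp.Integer(1)  # stands in for the unit u-bar minus u'-bar

# Solve c*F = H*F - F*H as a linear system in the entries of F.
eqs = list((c * F - (H * F - F * H)).vec())
sol = sp.solve(eqs, list(syms), dict=True)

# The system is square and invertible, so F = 0 is the only solution.
assert len(sol) == 1 and all(sol[0][s] == 0 for s in syms)
```
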
+
To prove that $M = M_{q,u}$ is indecomposable, we let $u' = u$, and we assume $f \in \operatorname{End}_R(M)$ is idempotent but not surjective. We will show that $f = 0$. Since $\bar{H}$ is non-derogatory, (4) implies that $\bar{F}_{11} \in k[\bar{H}]$. In particular, $\bar{F}_{11}$ is upper triangular with constant diagonal. Recall that $\bar{F}_{11} = \bar{G}_{22} = \bar{F}_{22}$ and $\bar{F}_{21} = 0$, so that $\bar{F}$ is upper triangular with constant diagonal. Since $\bar{F}$ is not surjective, it must be strictly upper-triangular. Therefore $\bar{F}^q = 0$. Then
$\operatorname{im}(f) = \operatorname{im}(f^q) \subseteq \mathfrak{m}M$, whence $1 - f$ is surjective. Since $f$ is idempotent, $f = 0$.
+
+It remains to prove that $S_i^{-1}M \cong (S_i^{-1}R)^{(n_i)}$ for all $i$. Fix an index $i \le t$, and consider the image $\tilde{A}$ in $\operatorname{Mat}_{2q \times 2q}(S_i^{-1}R)$ of the matrix $A$. We recall that the $\xi_j, j \ne i$, become units in $\tilde{A}$, while $\xi_i$ maps to 0. Also, $x, y$ and $z$ map to units. Using these facts, one can easily do elementary row and column operations over $S_i^{-1}R$ to show that $\tilde{A}$ is equivalent to the $2q \times 2q$ matrix $B$ with $I_{2q-n_i}$ in the top left corner and zeros elsewhere. Thus $S_i^{-1}M \cong \operatorname{coker}(\tilde{A}) \cong \operatorname{coker}(B) \cong (S_i^{-1}R)^{(n_i)}$ as desired. $\square$
+
By item (i) in the statement of the proposition, $M_{q,u} \not\cong M_{q',u'}$ if $q \ne q'$. Thus the Main Theorem is true if some power of $\mathfrak{m}$ requires at least three generators.
+
## 3. Bimodules and extensions
+
+In this section we concoct some homological machinery to handle the more difficult case of the Main Theorem—when each power of the maximal ideal is generated by two elements.
+
+Throughout this section let $R$ be a commutative Noetherian ring, not necessarily local, and let $A$ and $B$ be module-finite $R$-algebras (not necessarily commutative). Let $A E_B$ be an $A-B$-bimodule. We assume $E$ is $R$-symmetric, that is, $re = er$ for $r \in R$ and $e \in E$. (Equivalently, $E$ is a left $A \otimes_R B^{op}$-module.) Furthermore we assume that $E$ is finitely generated as an $R$-module. The Jacobson radical of a (not necessarily commutative) ring $C$ is denoted by $J(C)$, and the ring $C$ is said to be *local* provided $C/J(C)$ is a division ring; equivalently [7, Proposition 1.10], the set of non-units of $C$ is closed under addition. The next result is [10, Theorem 3.2], and we refer the interested reader to [10] for its elementary proof.
+
**THEOREM 3.1.** With notation above, let $\alpha : _A A \to _A E$ and $\beta : B_B \to E_B$ be module homomorphisms such that $\alpha(1_A) = \beta(1_B) \ne 0$. Assume $A$ is local and $\ker(\beta) \subseteq J(B)$. Then $C := \beta^{-1}(\alpha(A))$ is an $R$-subalgebra of $B$ and is a local ring.
+
+Now we specialize the notation above. Still assuming that $R$ is a commutative Noetherian ring, let $M$ and $N$ be finitely generated $R$-modules. Put $A := \operatorname{End}_R(M)$ and $B := \operatorname{End}_R(N)$. Note that each of the $R$-modules $\operatorname{Ext}_R^n(N, M)$ has a natural $A-B$-bimodule structure. Indeed, any $f \in B$ induces an $R$-module homomorphism $f^* : \operatorname{Ext}_R^n(N, M) \to \operatorname{Ext}_R^n(N, M)$. For $x \in \operatorname{Ext}_R^n(N, M)$ put $x \cdot f = f^*(x)$. The left $A$-module structure is defined similarly, and the fact that $\operatorname{Ext}_R^n(N, M)$ is a bimodule follows from the
+fact that $\text{Ext}_R^n(-, -)$ is an additive bifunctor. Note that $\text{Ext}_R^n(N, M)$ is $R$-symmetric, since, for $r \in R$, multiplications by $r$ on $N$ and on $M$ induce the same endomorphism of $\text{Ext}_R^n(N, M)$.
+
+Put $E := \text{Ext}_R^1(N, M)$, regarded as the set of equivalence classes of extensions $0 \to M \to X \to N \to 0$. Let $\alpha : _A A \to _A E$ and $\beta : B_B \to E_B$ be module homomorphisms satisfying $\alpha(1_A) = \beta(1_B) =: [\sigma]$. Then $\alpha$ and $\beta$ are, up to signs, the connecting homomorphisms in the long exact sequences of $\text{Ext}$ obtained by applying $\text{Hom}_R(-, M)$ and $\text{Hom}_R(N, -)$, respectively, to the short exact sequence $\sigma$. (When one computes $\text{Ext}$ via resolutions one must adorn maps with appropriate $\pm$ signs, in order to ensure naturality of the connecting homomorphisms. In what follows, the choice of sign will not be important.)
+
+Since it causes no extra effort, we phrase Lemma 3.2 and Theorem 3.3 in terms of a general *torsion theory* ($\mathcal{T}, \mathcal{F}$) (cf., e.g., [8]). Then, in Corollary 3.4, we apply Theorem 3.3 with $\mathcal{T} = \{\text{modules of finite length}\}$ and $\mathcal{F} = \{\text{modules of positive depth}\} = \{\text{modules with zero socle}\}$.
+
+An easy diagram chase establishes the following lemma, which is [10, Lemma 4.1]:
+
+**LEMMA 3.2.** Let $R$ be a commutative Noetherian ring, let $M$ and $N$ be finitely generated $R$-modules, with $M$ torsion and $N$ torsion-free (with respect to some torsion theory). Let $A, B$ and $E$ be as above, and let $\alpha : A \to E$ and $\beta : B \to E$ be module homomorphisms with $\alpha(1_A) = \beta(1_B) = [\sigma] \neq 0$. Choose a short exact sequence representing $\sigma$:
+
+$$
+(\sigma) \qquad 0 \to M \xrightarrow{i} X \xrightarrow{\pi} N \to 0
+$$
+
+Let $\rho : \text{End}_R(X) \to \text{End}_R(N) = B$ be the canonical homomorphism (reduction modulo torsion). Then the image of $\rho$ is exactly the ring $C := \beta^{-1}\alpha(A) \subseteq B$.
+
+The next result, which is [10, Theorem 4.2], follows easily from Theorem
+3.1 and Lemma 3.2:
+
+**THEOREM 3.3.** Keep the notation and hypotheses of Lemma 3.2.
+
+(i) Suppose $C$ has no idempotents other than $0$ and $1$. If $X = U \oplus V$ (a decomposition as $R$-modules), then either $U$ or $V$ is a torsion module.
+
+(ii) Suppose $A$ is local and $\ker(\beta)$ is contained in the Jacobson radical of $B$. Then $X$ is indecomposable.
+
**COROLLARY 3.4.** Let $(R, \mathfrak{m}, k)$ be a commutative, Noetherian local ring, and let $M$ be an indecomposable finitely generated $R$-module of finite length. Let $N$ be a finitely generated $R$-module with $\operatorname{depth}(N) > 0$. Put $A := \text{End}_R(M)$ and $B := \text{End}_R(N)$. Suppose there exists a right $B$-module homomorphism $\beta : B_B \to E_B := \text{Ext}_R^1(N, M)$ such that $\ker(\beta) \subseteq J(B)$ (equivalently, assume there is an element $\xi \in E$ with $(0 :_B \xi) \subseteq J(B)$). Let
+$0 \to M \to X \to N \to 0$ represent $\xi = \beta(1_B) \in E$. Then $X$ is indecomposable.
+
+## 4. Building suitable finite-length modules
+
To prove the Main Theorem in the remaining case, when each power of $\mathfrak{m}$ is two-generated, we need to build a sufficiently complicated indecomposable finite-length module $M$ and then choose a suitable module $N$ of positive depth. In this section we build the requisite finite-length modules.
+
+The following proposition is a slightly jazzed-up version of the “Warmup” in [10]. This construction is far from new. See, for example, the papers of Higman [12], Heller and Reiner [11], and Warfield [23]. Similar constructions can be found in the classification, up to simultaneous equivalence, of pairs of matrices. (Cf. Dieudonné’s discussion [3] of the work of Kronecker [17] and Weierstrass [24].)
+
**PROPOSITION 4.1.** Let $(\Lambda, \mathfrak{m}, k)$ be a commutative Noetherian local ring with $\mathfrak{m}^2 = 0$, let $q$ be a positive integer and let $u$ be a unit of $\Lambda$. Let $I_q$ denote the $q \times q$ identity matrix and $H_q$ the $q \times q$ nilpotent Jordan block (with 1's on the superdiagonal and 0's elsewhere). Assume $\mathfrak{m}$ is minimally generated by two elements $x$ and $y$, let $\Psi_{q,u} := yI_q + x(uI_q + H_q)$ and put $M_{q,u} := \text{coker}(\Psi_{q,u})$.
+
+(i) $M_{q,u}$ is an indecomposable $\Lambda$-module requiring exactly $q$ generators.
+
(ii) For every non-zero element $t \in \mathfrak{m}$, $\operatorname{socle}(M_{q,u}/tM_{q,u}) \cong k^{(q)}$.
+
(iii) $M_{q,u} \cong M_{q',u'}$ if and only if $q = q'$ and $u \equiv u' \pmod{\mathfrak{m}}$.
+
*Proof.* Clearly $M_{q,u}$ requires exactly $q$ generators, whence $M_{q,u} \not\cong M_{q',u'}$ if $q \neq q'$. Therefore we drop the subscripts $q$ from now on. The “if” assertion in (iii) is clear, since $\mathfrak{m}^2 = 0$. The proofs of the “only if” assertion in (iii) and of the indecomposability of the $M_u$ are similar to (but easier than) the proofs of the analogous assertions in Proposition 2.1. Alternatively, one can note that the associated graded modules $\text{gr}_{\mathfrak{m}}(M_u)$ are among the indecomposable modules in the classification of $k[X, Y]/(X^2, XY, Y^2)$-modules, found in the references above.
+
+To prove (ii), we drop the index $u$ and note that $M/tM = \text{coker}(\Phi)$, where $\Phi = [\Psi \ tI]$. Suppose first that $t = by$, where $b$ is a unit of $\Lambda$. Elementary column operations transform $\Phi$ to the matrix $[xH \ yI]$. Therefore $M/tM \cong k^{(q-1)} \oplus \Lambda/(y)$, and (ii) follows. The other possibility is that $t = ax+by$, where $a$ is a unit. In this case we can do elementary column operations to replace the superdiagonal elements of $\Psi$ by multiples of $y$. Further column operations transform the matrix to the form $[yI \ xI]$, and we have $M/tM \cong k^{(q)}$. $\square$
+
+If $(R, m, k)$ is Artinian and $m$ is principal, the zero-dimensional case of Cohen's Structure Theorem implies that $R$ is a homomorphic image of a complete discrete valuation ring. Thus, if $(R, m, k)$, as in the Main Theorem, is
zero-dimensional, we can apply Proposition 4.1 to the ring $R/\mathfrak{m}^2$ to get $|k| \cdot \aleph_0$
+pairwise non-isomorphic indecomposable modules. Next, suppose $\dim(R) \ge$
+2. Then $\mathfrak{m}$ needs three generators unless $R$ is a two-dimensional regular local
+ring; and in that case $\mathfrak{m}^2$ needs three generators. By Proposition 2.1, the
+Main Theorem holds if $\dim(R) \ge 2$. Therefore it remains to prove the Main
+Theorem under the assumptions that $(R, \mathfrak{m}, k)$ is one-dimensional and each
+power of $\mathfrak{m}$ is at most two-generated.
+
**DEFINITION 4.2.** A commutative, Artinian local ring $(\Lambda, \mathfrak{m}, k)$ is a *Drozd ring* provided its associated graded ring satisfies $\text{gr}_{\mathfrak{m}}(\Lambda) \cong k[X, Y]/(X^2, XY^2, Y^3)$ as $k$-algebras. (Equivalently, $\mathfrak{m}^3 = 0$, $\mathfrak{m}$ and $\mathfrak{m}^2$ each require exactly two generators, and there is an element $t \in \mathfrak{m} - \mathfrak{m}^2$ with $t^2 = 0$.)
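
Concretely, the relations $X^2 = XY^2 = Y^3 = 0$ leave the monomial $k$-basis $\{1, x, y, xy, y^2\}$, so such a ring has length 5 and every product of three elements of $\mathfrak{m}$ vanishes. A brute-force sketch of this bookkeeping (illustrative; not from the paper):

```python
from itertools import product

def reduce_monomial(i, j):
    """Reduce x^i y^j modulo (x^2, x*y^2, y^3); return None if it dies."""
    if i >= 2 or j >= 3 or (i >= 1 and j >= 2):
        return None
    return (i, j)

# Surviving monomials = k-basis of k[X,Y]/(X^2, XY^2, Y^3).
basis = {m for i, j in product(range(5), repeat=2)
         if (m := reduce_monomial(i, j)) is not None}
assert basis == {(0, 0), (1, 0), (0, 1), (1, 1), (0, 2)}  # {1, x, y, xy, y^2}

# Every monomial of total degree 3 dies, i.e. m^3 = 0.
assert all(reduce_monomial(i, 3 - i) is None for i in range(4))
```
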
+
+The main result in this section is a construction, in Proposition 4.4, of suitably complex indecomposable modules over Drozd rings. The idea of the construction below originated in work of Drozd [4] and Ringel [21]. The construction was adapted by Klingler and Levy [14] to show that the category of finite-length modules over a Drozd ring has wild representation type. Drozd rings enter the picture here because of the following result, a special case of the “Ring-theoretic Dichotomy” of Klingler and Levy [16, Theorem 14.3]:
+
**THEOREM 4.3.** Let $(\Lambda, \mathfrak{m}, k)$ be a one-dimensional local ring whose maximal ideal $\mathfrak{m}$ is generated by at most two elements. Then exactly one of the following possibilities occurs:
+
+(i) $\Lambda$ is a homomorphic image of a Dedekind-like ring.
+
+(ii) $\Lambda$ has a Drozd ring as a homomorphic image.
+
+Proposition 4.4 is a slight generalization of [10, Proposition 5.3]. The modification is needed to treat the case of a Cohen-Macaulay ring with multiplicity 2.
+
+**PROPOSITION 4.4.** Let $(\Lambda, \mathfrak{m}, k)$ be a Drozd ring, and let $t, y \in \Lambda$ with $(t, y) = \mathfrak{m}$ and $t^2 = 0$. There exists a family $(M_{q,\kappa})_{q \in \mathbb{N}, \kappa \in k^\times}$ of pairwise non-isomorphic indecomposable $\Lambda$-modules having the following properties:
+
+(i) For all $q \in \mathbb{N}$ and $\kappa \in k^\times$ we have
+
+$$
+\frac{(0 :_{M_{q,\kappa}} (t, y^2))}{t M_{q,\kappa}} \cong k^{(q)}.
+$$
+
+(ii) For every $\xi \in \mathfrak{m}$, all $\kappa \in k^\times$, and all $q \ge 1$ the $k$-vectorspace
+
+$$
+\frac{(0 :_{M_{q,\kappa}} \xi) + \mathfrak{m} M_{q,\kappa}}{\mathfrak{m} M_{q,\kappa}}
+$$
+
has dimension at least $q$.
+
*Proof.* Given $q \in \mathbb{N}$ and $\kappa \in k^\times$, choose $u \in \Lambda^\times$ with $u + \mathfrak{m} = \kappa$. Since $\mathfrak{m}^3 = 0$, $uy^2$ depends only on the coset $u + \mathfrak{m}$. Therefore we can define $M_{q,\kappa}$ to be the cokernel of the $3q \times 4q$ matrix
+
+$$
+\Psi_{q,\kappa} := \begin{bmatrix}
+yI_q & tI_q & 0 & 0 \\
+0 & -y^2I_q & tI_q & -yI_q \\
+0 & 0 & -(uI_q + H_q)y^2 & tI_q
+\end{bmatrix}
+$$
+
+with $H_q$ and $I_q$ as in Proposition 4.1. We let $\Lambda^{(3q)} \xrightarrow{\varepsilon_{q,\kappa}} M_{q,\kappa}$ denote the quotient map.
+
+To show that $M_{q,\kappa}$ is indecomposable and that $M_{q,\kappa} \not\cong M_{q,\kappa'}$ if $\kappa \neq \kappa'$, suppose $f: M_{q,\kappa} \to M_{q,\kappa'}$ is a $\Lambda$-homomorphism. Lift $\kappa'$ to $u' \in \Lambda^\times$. As in the proof of Proposition 2.1 we obtain a commutative diagram:
+
$$
\begin{tikzcd}[column sep=2.8em, row sep=2.8em]
\Lambda^{(4q)} \arrow[r, "\Psi_{q,\kappa}"] \arrow[d, "G"'] & \Lambda^{(3q)} \arrow[r] \arrow[d, "F"] & M_{q,\kappa} \arrow[r] \arrow[d, "f"] & 0 \\
\Lambda^{(4q)} \arrow[r, "\Psi_{q,\kappa'}"] & \Lambda^{(3q)} \arrow[r] & M_{q,\kappa'} \arrow[r] & 0
\end{tikzcd}
$$
+
+In principle we could proceed as in Proposition 2.1 and derive restrictions for
+the entries of $F$ from the equation $F \cdot \Psi_{q,\kappa} = \Psi_{q,\kappa'} \cdot G$; instead, we consult
+[14, Lemma 4.8] to shorten the argument. If we let bars denote reductions
modulo $\mathfrak{m}$, this lemma implies that
+
+$$
+\overline{F} = \begin{bmatrix}
+\overline{F}_{11} & * & * \\
+0 & \overline{F}_{11} & * \\
+0 & 0 & \overline{F}_{11}
+\end{bmatrix},
+$$
+
where each block is a $q \times q$ matrix and $\overline{F}_{11} \cdot (\overline{u}\,\overline{I}_q + \overline{H}_q) = (\overline{u}'\,\overline{I}_q + \overline{H}_q) \cdot \overline{F}_{11}$.
+
+If $\kappa \neq \kappa'$, the argument following (4) in the proof of Proposition 2.1 shows
that $M_{q,\kappa} \not\cong M_{q,\kappa'}$. Of course $M_{q,\kappa}$ requires exactly $3q$ generators, so $M_{q,\kappa} \cong M_{q',\kappa'}$ implies $q = q'$. Thus we assume from now on that $\kappa = \kappa'$ and omit the
+subscripts $q$ and $\kappa$. The proof that $M$ is indecomposable is essentially the
+same as the proof of indecomposability in Proposition 2.1.
+
+We claim that $(0 :_M (t, y^2))$ is generated by the images, under $\varepsilon$, of the
+columns of the matrix
+
+$$
+\varphi := \begin{bmatrix}
+tI & 0 & 0 & 0 & 0 & I \\
+0 & tI & 0 & yI & 0 & 0 \\
+0 & 0 & tI & 0 & y^2I & -yI
+\end{bmatrix}
+$$
+
+(where each block is $q \times q$). An easy calculation shows that both $t$ and $y^2$
+knock the column space of $\varphi$ into the column space of $\Psi$, so the purported
+generators are, at least, in $(0 :_M (t, y^2))$.
+
+To prove the claim, suppose $\alpha \in \Lambda^{(3q)}$ and $t\alpha$ and $y^2\alpha$ are both in the image of $\Psi$. We will show that $\alpha \in \text{im}(\varphi)$.
+
+We have
+
+$$ (5) \qquad t\alpha = \Psi \cdot \beta \text{ and } y^2\alpha = \Psi \cdot \gamma $$
+
+with $\beta, \gamma \in \Lambda^{(4q)}$. Write
+
+$$ \alpha = \begin{bmatrix} \alpha_1 \\ \alpha_2 \\ \alpha_3 \end{bmatrix}, \quad \beta = \begin{bmatrix} \beta_1 \\ \beta_2 \\ \beta_3 \\ \beta_4 \end{bmatrix}, $$
+
+where the $\alpha_i$ and $\beta_j$ are in $\Lambda^{(q)}$. The first equation in (5) yields
+
+$$ \begin{bmatrix} t\alpha_1 \\ t\alpha_2 \\ t\alpha_3 \end{bmatrix} = \begin{bmatrix} y\beta_1 + t\beta_2 \\ -y^2\beta_2 + t\beta_3 - y\beta_4 \\ -y^2(uI + H) \cdot \beta_3 + t\beta_4 \end{bmatrix}. $$
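
The displayed identity is just the product $\Psi \cdot \beta$ written out block by block; for $q = 1$ (where $H_1 = 0$ and every block is a scalar) it can be checked mechanically. A sympy sketch (the symbols stand in for $t$, $y$, $u$ and the entries of $\beta$; illustrative only):

```python
import sympy as sp

t, y, u = sp.symbols('t y u')
b1, b2, b3, b4 = sp.symbols('b1 b2 b3 b4')

# Psi_{q,kappa} for q = 1: each block is a scalar and H_1 = 0.
Psi = sp.Matrix([
    [y,  t,        0,  0],
    [0, -y**2,     t, -y],
    [0,  0, -u*y**2,  t],
])
beta = sp.Matrix([b1, b2, b3, b4])

expected = sp.Matrix([
    y*b1 + t*b2,
    -y**2*b2 + t*b3 - y*b4,
    -y**2*u*b3 + t*b4,
])
assert sp.expand(Psi * beta - expected) == sp.zeros(3, 1)
```
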
+
+We can write the $\alpha_i$ and $\beta_i$ in the form
+
+$$ \alpha_i = u_{i,0} + u_{i,1}t + u_{i,2}y + u_{i,3}ty + u_{i,4}y^2, $$
+
+$$ \beta_i = v_{i,0} + v_{i,1}t + v_{i,2}y + v_{i,3}ty + v_{i,4}y^2, $$
+
+where the entries of $u_{i,j}$ and $v_{i,j}$ are either units or 0. (Cf. [14, Lemma 4.2].) Since the images of $t$ and $y$ in $m/m^2$ are linearly independent over $k$, the equation $t\alpha_1 = y\beta_1 + t\beta_2$ yields $v_{1,0} = 0$ and $\bar{u}_{1,0} = \bar{v}_{2,0}$, where bars denote reduction modulo $m$. From $t\alpha_2 = -y^2\beta_2 + t\beta_3 - y\beta_4$, it follows that $\bar{u}_{2,0} = \bar{v}_{3,0}$ and $v_{4,0} = 0$ and, since the socle elements $ty$ and $y^2$ are linearly independent over $k$, that $\bar{v}_{2,0} = -\bar{v}_{4,2}$. From $t\alpha_3 = -y^2(uI+H) \cdot \beta_3 + t\beta_4$, it follows that $\bar{u}_{3,0} = \bar{v}_{4,0}$ and hence that $u_{3,0} = 0$.
+
+Using the equation $t\alpha_3 = -y^2(uI+H) \cdot \beta_3 + t\beta_4$ again, we see that $\bar{u}_{3,2} = \bar{v}_{4,2}$. Further, since $uI+H$ is invertible, it follows that $v_{3,0} = 0$ and hence that $u_{2,0} = 0$.
+
+To summarize, we have $\bar{u}_{3,2} = \bar{v}_{4,2} = -\bar{v}_{2,0} = -\bar{u}_{1,0}$, and $u_{2,0} = u_{3,0} = 0$. Putting $w := u_{1,0}$, we have $u_{3,2} = -w + t\mu + y\nu$ for suitable $\mu, \nu \in \Lambda^{(q)}$. Then
+
+$$ (6) \qquad \alpha = \begin{bmatrix} w & + & t u_{1,1} & + & y u_{1,2} & + & t y u_{1,3} & + & y^2 u_{1,4} \\ 0 & + & t u_{2,1} & + & y u_{2,2} & + & t y u_{2,3} & + & y^2 u_{2,4} \\ -y w & + & t u_{3,1} & + & 0 & + & t y(u_{3,3} + \mu) & + & y^2(u_{3,4} + \nu) \end{bmatrix}. $$
+
+From (6) it follows that $\alpha \in \text{im}(\varphi)$, as desired. This completes the proof of our claim.
+
+It is easy to see, using the invertibility of $uI+H$, that the image of the leftmost $3q \times 5q$ submatrix of $\varphi$ is contained in $t\Lambda^{(3q)}+\text{im}(\Psi)$. Letting $\gamma_1, \dots, \gamma_q$ be the last $q$ columns of $\varphi$, we see that $(0 :_M (t,y^2)) / tM$ is generated by $\zeta_1 := \varepsilon(\gamma_1) + tM, \dots, \zeta_q := \varepsilon(\gamma_q) + tM$. Since $t\gamma_i$, $y\gamma_i \in t\Lambda^{(3q)} + \text{im}(\Psi)$ for each $i$, we see that $(0 :_M (t,y^2)) / tM$ is a $k$-vector space of dimension at most $q$. To complete the proof of (i), we need only show that $\zeta_1, \dots, \zeta_q$ are linearly independent. Given a relation $\sum_{i=1}^q \lambda_i \zeta_i = 0$, with $\lambda_i \in \Lambda$, we have
$\sum_{i=1}^{q} \lambda_i \gamma_i \in \operatorname{im}(\Psi) + t\Lambda^{(3q)} \subseteq \mathfrak{m}\Lambda^{(3q)}$. This relation obviously forces $\lambda_i \in \mathfrak{m}$ for all $i$, as desired.
+
It remains to prove assertion (ii) of the proposition. Given $\xi \in \mathfrak{m}$, write $\xi = at + by$. Suppose first that $b$ is a unit of $\Lambda$. For each unit vector $e_i \in \Lambda^{(q)}$,
+put
+
$$ \sigma_i := \begin{bmatrix} e_i \\ \left(\frac{a^2}{b^2}t - \frac{a}{b}y\right)e_i \\ 0 \end{bmatrix}, $$
+
+and check that
+
+$$ \xi \sigma_i = b \begin{bmatrix} y e_i \\ 0 \\ 0 \end{bmatrix} + a \begin{bmatrix} t e_i \\ -y^2 e_i \\ 0 \end{bmatrix} \in \operatorname{im}(\Psi). $$
+
+This shows that $\varepsilon(\sigma_i) \in (0 :_M \xi)$ for each $i$, and the assertion follows easily in this case.
+
If $b$ is not a unit, then $by \in \mathfrak{m}^2 = \Lambda ty + \Lambda y^2$, and therefore $\xi$ has the form $\xi = ct + dy^2$. With $e_i$ as above, put
+
+$$ \tau_i := \begin{bmatrix} e_i \\ 0 \\ -y e_i \end{bmatrix}. $$
+
+Then
+
+$$ \xi \tau_i = dy \begin{bmatrix} y e_i \\ 0 \\ 0 \end{bmatrix} + c \begin{bmatrix} t e_i \\ -y^2 e_i \\ 0 \end{bmatrix} - cy \begin{bmatrix} 0 \\ -y e_i \\ te_i \end{bmatrix} \in \operatorname{im}(\Psi). $$
+
+As before, the assertion follows easily. $\square$
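
Both membership claims ($\xi\sigma_i \in \operatorname{im}(\Psi)$ and $\xi\tau_i \in \operatorname{im}(\Psi)$) can be checked by direct arithmetic in a Drozd ring. A sketch for $q = 1$ in $\mathbb{Q}[t, y]/(t^2, ty^2, y^3)$, with numeric stand-ins for $a, b, c, d$ (an illustration under these assumptions, not part of the proof):

```python
import sympy as sp

t, y = sp.symbols('t y')

def red(expr):
    """Normal form modulo the monomial relations t^2 = t*y^2 = y^3 = 0."""
    e = sp.expand(expr)
    if e == 0:
        return e
    return sp.reduced(e, [t**2, t*y**2, y**3], t, y)[1]

# Columns of Psi for q = 1 (so H_1 = 0); see the 3 x 4 matrix in the proof.
col1 = sp.Matrix([y, 0, 0])
col2 = sp.Matrix([t, -y**2, 0])
col4 = sp.Matrix([0, -y, t])

# Case b a unit: xi = a*t + b*y and sigma = [1, (a^2/b^2)t - (a/b)y, 0].
a, b = 2, 3
xi = a*t + b*y
sigma = sp.Matrix([1, sp.Rational(a**2, b**2)*t - sp.Rational(a, b)*y, 0])
assert (xi * sigma).applyfunc(red) == (b*col1 + a*col2).applyfunc(red)

# Case b not a unit: xi = c*t + d*y^2 and tau = [1, 0, -y].
c, d = 2, 7
xi2 = c*t + d*y**2
tau = sp.Matrix([1, 0, -y])
assert (xi2 * tau).applyfunc(red) == (d*y*col1 + c*col2 - c*y*col4).applyfunc(red)
```
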
+
## 5. When all powers of $\mathfrak{m}$ are at most 2-generated
+
In this section we complete the proof of the Main Theorem in the remaining case—each power of $\mathfrak{m}$ is generated by at most two elements. Recall that, by Theorem 4.3, $R$ then maps onto a Drozd ring. We refer the reader to [10, Lemma 6.2] for the proof of the next result (note that $e(R) = e(\mathfrak{m}, R)$ denotes the multiplicity of $R$):
+
+LEMMA 5.1. Let $(R, \mathfrak{m}, k)$ be a one-dimensional local ring. Assume that $\mathfrak{m}$ and $\mathfrak{m}^2$ are two-generated and $R/L$ is a Drozd ring for some ideal $L$. Write $\mathfrak{m} = Rt + Ry$, with $t^2 \in L$. Then $L = \mathfrak{m}^3$, and $\mathfrak{m}^r = y^{r-1}\mathfrak{m} = Rty^{r-1} + Ry^r$ for each $r \ge 1$. If, further, $R$ is not Cohen-Macaulay, then the following also hold:
+
+(i) $\mathfrak{m}^r = Ry^r$ for all $r \gg 1$. In particular, $e(R) = 1$.
+
+(ii) $R$ has exactly one minimal prime ideal $P$. Moreover, $R_P$ is a field and $R/P$ is a discrete valuation ring.
+
+(iii) $P$ is a principal ideal, and $P \not\subseteq m^2$.
+---PAGE_BREAK---
+
+PROPOSITION 5.2. Let $(R, \mathfrak{m}, k)$ be a commutative local Noetherian ring, let $P$ be a non-maximal prime ideal of $R$, and let $n$ be any non-negative integer. Suppose there is an indecomposable finite-length $R$-module $M$ such that $\dim_k(\text{socle}_R(\text{Ext}_R^1(R/P, M))) \ge n$. Then there is a short exact sequence
+
+$$ (7) \qquad 0 \to M \to X \to (R/P)^{(n)} \to 0, $$
+
+in which $X$ is indecomposable.
+
+*Proof.* Put $E_1 := \text{Ext}_R^1(R/P, M)$, $N := (R/P)^{(n)}$, $A := \text{End}_R(M)$, $B := \text{End}_R(N) = \text{Mat}_{n \times n}(R/P)$ and $E := \text{Ext}_R^1(N, M) = E_1^{(n)}$. If we write elements of $E$ as $1 \times n$ row vectors with entries in $E_1$, then the right $B$-module structure is given by matrix multiplication. Since $M$ has finite length, $A$ is a local ring [7, Lemmas 2.20 and 2.21].
+
+Let $e_1, \dots, e_n$ be linearly independent elements of $\text{socle}_R(E_1)$, and put $\xi := [e_1, \dots, e_n] \in E$. We claim that $(0 :_B \xi) \subseteq J(B)$. For, suppose $\varphi := [a_{ij}] \in B$ with $\xi\varphi = 0$. Then $e_1 a_{1j} + \dots + e_n a_{nj} = 0$ for each $j = 1, \dots, n$. Linear independence of the $e_i$ now implies that $a_{ij} \in \mathfrak{m}/P$ for each $i, j$. Then $\varphi \in J(B)$, and the claim is proved.
+
+To complete the proof, we let (7) represent the element $\xi \in E$ and apply Corollary 3.4. □
+
+We will divide the proof of the Main Theorem into three cases.
+
+**5.1. Case 1: $R$ is not Cohen-Macaulay.** Suppose now that $(R, \mathfrak{m}, k)$ is one-dimensional and not Cohen-Macaulay, as in the Main Theorem, and assume also that each power of $\mathfrak{m}$ is generated by at most two elements. By Theorem 4.3 and Lemma 5.1, $R$ has a unique minimal prime ideal $P$; moreover, $P$ is principal, say, $P = Rt$. Given a non-negative integer $n$, we seek $|k| \cdot \aleph_0$ pairwise non-isomorphic indecomposable modules $X$ such that $X_P \cong (R/P)^{(n)}$. The proof is a slight modification of the corresponding case in [10]; we give a sketch of the argument. (See [10, Proposition 6.3 and the succeeding paragraphs] for details.)
+
+Suppose, first, that $(0 :_R t) \subseteq \mathfrak{m}^2$. Given an arbitrary integer $q \ge \max\{1, n\}$, we apply Proposition 4.1 to $R/\mathfrak{m}^2$, getting $|k| - 1$ pairwise non-isomorphic indecomposable finite-length modules $M$ satisfying
+
+$$ \operatorname{socle}_R(M/tM) \cong k^{(q)} \quad\text{and}\quad \mathfrak{m}^2M = 0. $$
+
+Applying $\operatorname{Hom}_R(-, M)$ to the short exact sequence
+
+$$ 0 \to Rt \to R \to R/(t) \to 0, $$
+
+we obtain an exact sequence
+
+$$ (8) \qquad \operatorname{Hom}_R(R, M) \to \operatorname{Hom}_R(Rt, M) \to E_1 \to 0, $$
+
+where $E_1 := \text{Ext}_R^1(R/P, M)$. Since $Rt \cong R/(0 :_R t)$ and $(0 :_R t)M = (0)$, the map $f \mapsto f(t)$ provides an isomorphism $\operatorname{Hom}_R(Rt, M) \cong M$. Combining this
+---PAGE_BREAK---
+
+isomorphism with the usual isomorphism $\operatorname{Hom}_R(R, M) \cong M$ (via $g \mapsto g(1)$), we transform (8) to the exact sequence $M \xrightarrow{t} M \to E_1 \to 0$. Thus $E_1 \cong M/tM$. Now Proposition 5.2 provides, for each $M$, an indecomposable module $X$ and a short exact sequence (7). Then $X_P \cong R_P^{(n)}$. Also, since $M \cong \operatorname{H}^0_{\mathfrak{m}}(X)$ (the finite-length part of $X$), we see that non-isomorphic $M$'s yield non-isomorphic $X$'s, and the proof is complete in this case.
+
+Next, we consider the more difficult case, when $(0 :_R t) \not\subseteq \mathfrak{m}^2$. Since $R$ maps onto a Drozd ring by Theorem 4.3, one can show easily that $t^2 \in \mathfrak{m}^3$. Also, $t \notin \mathfrak{m}^2$ by (iii) of Lemma 5.1, so we can choose $y$ such that $\mathfrak{m} = Rt+Ry$. To summarize, we have
+
+$$ (9) \qquad P = Rt, \quad \mathfrak{m} = Rt + Ry, \quad\text{and}\quad t^2 \in \mathfrak{m}^3. $$
+
+We now complete the proof under the additional assumption that
+
+$$ (10) \qquad t^2 = ty^2 = 0. $$
+
+In this case, one checks easily that $(0 :_R t) = (t, y^2)$. Applying Proposition 4.4 to the Drozd ring $\Lambda := R/\mathfrak{m}^3$, we get $|k| \cdot \aleph_0$ indecomposable $R$-modules $M$ such that $\mathfrak{m}^3 M = 0$ and the $k$-vector space $(0 :_M (t, y^2))/tM$ has dimension $n$. Again, we obtain the exact sequence (8), and since $Rt \cong R/(t, y^2)$, we see that $\operatorname{Hom}_R(Rt, M) \cong (0 :_M (t, y^2))$, and hence $E_1 \cong (0 :_M (t, y^2))/tM$. Thus $E_1 = \operatorname{socle}_R(E_1)$ has dimension $n$. As before, we can use Proposition 5.2 to produce $|k| \cdot \aleph_0$ pairwise non-isomorphic indecomposable modules $X$ such that $X_P \cong R_P^{(n)}$.
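For orientation, a minimal example of a ring satisfying (9) and (10) (our illustration, not part of the argument):

```latex
R = k[[t, y]]/(t^2,\; ty^2), \qquad \mathfrak{m} = (t, y), \qquad P = Rt,
\qquad (0 :_R t) = (t, y^2) \not\subseteq \mathfrak{m}^2 .
```

Here $ty \ne 0$ while $t \cdot ty = y \cdot ty = 0$, so $\operatorname{depth} R = 0$ and $R$ is not Cohen-Macaulay; moreover $R/\mathfrak{m}^3 = k[t, y]/(t^2, ty^2, y^3)$ has a two-generated square of the maximal ideal, as required of a Drozd ring.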
+
+Finally, we complete the proof when (10) is not necessarily satisfied. Since $t^2 \in \mathfrak{m}^3$ by (9), $S := R/(t^2, ty^2)$ maps onto the Drozd ring $R/\mathfrak{m}^3$. Therefore, by Theorem 4.3, $S$ is not a homomorphic image of a Dedekind-like ring. Moreover, $S$ is not Cohen-Macaulay, since $ty \notin Rt^2 + Rty^2$ (else $\mathfrak{m}^2$ would be principal) but $\mathfrak{m}ty \subseteq Rt^2 + Rty^2$. By the argument in the previous paragraph, we obtain $|k| \cdot \aleph_0$ pairwise non-isomorphic $S$-modules $X$ such that $X_Q \cong S_Q^{(n)}$, where $Q = P/(t^2, ty^2)$. Now view these modules as $R$-modules and note that the natural map $R_P \to S_Q$ is an isomorphism. This completes the proof of Theorem 1.2 when $R$ is not Cohen-Macaulay.
+
+For the rest of Section 5, we assume that $(R, \mathfrak{m}, k)$ is a one-dimensional Noetherian local Cohen-Macaulay ring such that each power of $\mathfrak{m}$ is generated by two elements, and we assume that $R$ is not a homomorphic image of a Dedekind-like ring, equivalently (Theorem 4.3), $R$ has a Drozd ring as a homomorphic image. By Lemma 5.1, $\Lambda := R/\mathfrak{m}^3$ is a Drozd ring. Moreover, we have the “associativity formula” (cf. [20, Theorem 14.7] or [2, Corollary 4.6.8]):
+
+$$ (11) \qquad 2 = e(R) = \sum_i e(R/P_i)\ell(R_{P_i}), $$
+---PAGE_BREAK---
+
+where the sum ranges over all minimal prime ideals $P_i$ of $R$, and $\ell(R_{P_i})$ is the length of $R_{P_i}$ as an $R_{P_i}$-module. Thus, $R$ has either one or two minimal prime ideals.
+
+**5.2. Case 2: R is Cohen-Macaulay with two minimal prime ideals.**
+
+Let $P_1$ and $P_2$ denote the minimal primes of $R$. We are given two non-negative integers $n_1$ and $n_2$, and we want to find $|k| \cdot \aleph_0$ indecomposable modules $X$ such that $X_{P_i} \cong R_{P_i}^{(n_i)}$ for $i=1,2$. By (11), each $R/P_i$ is a discrete valuation domain and $R_{P_i}$ is a field. Since $\mathfrak{m}$ needs two generators, it follows that each $P_i \not\subseteq \mathfrak{m}^2$, so we can choose $t_i \in P_i - \mathfrak{m}^2$. Then $R/(t_i)$ is one-dimensional with principal maximal ideal, i.e. a discrete valuation ring; hence $P_i = Rt_i$. Suppose $r$ is in the kernel of the diagonal map $R \to R_{P_1} \times R_{P_2}$. Then $(0 :_R r) \not\subseteq P_1 \cup P_2$, so $(0 :_R r)$ contains a non-zerodivisor, whence $r = 0$. It follows that $R$ is reduced, with total quotient ring $R_{P_1} \times R_{P_2}$ and normalization $R/P_1 \times R/P_2$. Moreover, $(0 :_R t_1) = Rt_2$ and $(0 :_R t_2) = Rt_1$.
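A concrete ring fitting this case (our illustration, assuming the characterization in Theorem 4.3) is the tacnode:

```latex
R = k[[x, y]]/\bigl(y(y - x^2)\bigr), \qquad
P_1 = Ry, \quad P_2 = R(y - x^2), \qquad t_1 = y, \quad t_2 = y - x^2 .
```

Here $t_1 t_2 = 0$, each $R/P_i \cong k[[x]]$ is a discrete valuation ring, the normalization is $k[[x]] \times k[[x]]$, and $R/\mathfrak{m}^3 \cong k[t, y]/(t^2, ty^2, y^3)$, so $R$ maps onto a Drozd ring.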
+
+Given any integer $n \ge \max\{n_1, n_2\}$, let $M := M_{n,\kappa}$ be one of the indecomposable $\Lambda$-modules from Proposition 4.4. Applying $\operatorname{Hom}_R(-, M)$ to the short exact sequence
+
+$$
+0 \to Rt_1 \to R \to R/(t_1) \to 0,
+$$
+
+we obtain an exact sequence
+
+$$
+\operatorname{Hom}_R(R, M) \to \operatorname{Hom}_R(Rt_1, M) \to E_1 \to 0,
+$$
+
+where $E_1 = \operatorname{Ext}_R^1(R/P_1, M)$. Now $\operatorname{Hom}_R(Rt_1, M) \cong (0 :_M (0 :_R t_1)) = (0 :_M t_2)$. Therefore $E_1 \cong (0 :_M t_2)/t_1 M$, and, by symmetry, $E_2 := \operatorname{Ext}_R^1(R/P_2, M) \cong (0 :_M t_1)/t_2 M$. By (ii) of Proposition 4.4, $E_1$ and $E_2$ each need at least $n$ generators.
+
+The rest of the proof is very similar to that of Case 1. Let $N = (R/P_1)^{(n_1)} \oplus (R/P_2)^{(n_2)}$. By the annihilator relations above, $\operatorname{Hom}_R(R/P_i, R/P_j) = 0$ if $i \ne j$. Therefore $B := \operatorname{End}_R(N) = \operatorname{Mat}_{n_1 \times n_1}(R/P_1) \times \operatorname{Mat}_{n_2 \times n_2}(R/P_2)$. Put $E := \operatorname{Ext}_R^1(N, M) = E_1^{(n_1)} \times E_2^{(n_2)}$. We regard elements of $E$ as ordered pairs $(\xi_1, \xi_2)$, where $\xi_i$ is a $1 \times n_i$ row vector with entries in $E_i$. The right action of $B$ on $E$ is matrix multiplication on each of the two coordinates.
+
+Let $e_1, \dots, e_n \in E_1$ map to linearly independent elements of $E_1/\mathfrak{m}E_1$, and let $f_1, \dots, f_n \in E_2$ map to linearly independent elements of $E_2/\mathfrak{m}E_2$. Consider the elements $\xi_1 := [e_1 \dots e_{n_1}] \in E_1^{(n_1)}$ and $\xi_2 := [f_1 \dots f_{n_2}] \in E_2^{(n_2)}$, and put $\xi := (\xi_1, \xi_2) \in E$. One checks easily that $(0 :_B \xi) \subseteq \mathfrak{m}B \subseteq J(B)$ (cf., e.g., [10, Lemma 4.4]). Corollary 3.4 now provides a short exact sequence $0 \to M \to X \to N \to 0$ with $X$ indecomposable. This completes the proof of Theorem 1.2 when $R$ is Cohen-Macaulay and has two minimal prime ideals.
+
+There is one remaining case, for which we will use a very different approach.
+---PAGE_BREAK---
+
+**5.3. Case 3: $R$ is Cohen-Macaulay with one minimal prime ideal $P$.**
+
+Given a non-negative integer $n$, we seek $|k| \cdot \aleph_0$ indecomposable modules $X$ with $X_P \cong R_P^{(n)}$.
+
+Obviously no power of $\mathfrak{m}$ can be principal, so the multiplicity of $R$ is two. Cohen's Structure Theorem implies that $R$ is an abstract hypersurface, that is, the completion $\hat{R}$ has the form $S/(f)$, where $(S, n, k)$ is a two-dimensional regular local ring and $f \in n - \{0\}$.
+
+Again, we consider the indecomposable $\Lambda$-modules $M := M_{n,\kappa}$ provided by Proposition 4.4. This time we will take $N$, the torsion-free part of the desired module $X$, to be a suitable direct summand of the first syzygy of $M$.
+
+The next three results apply more generally to any one-dimensional abstract hypersurface.
+
+**THEOREM 5.3.** *Suppose $(D, n, k)$ is an abstract hypersurface of dimension 1. Let $M$ be an indecomposable finite-length $D$-module whose first syzygy is isomorphic to $D^{(r)} \oplus F$, where $F$ has no non-zero free direct summand. Let $F'$ be an arbitrary direct summand of $F$. Then there is a short exact sequence*
+
+$$
+(12) \qquad 0 \to M \to X \to F' \to 0,
+$$
+
+*in which $X$ is indecomposable.*
+
+*Proof.* We may assume $F' \neq 0$. Put $A := \text{End}_D(M)$ and $B := \text{End}_D(F')$.
+We have a short exact sequence
+
+$$
+(13) \qquad 0 \to D^{(r)} \oplus F \to D^{(r+s)} \to M \to 0,
+$$
+
+where $s = \operatorname{rank}(F)$. Since $F'$ is maximal Cohen-Macaulay over the Gorenstein
+ring $D$, we have $\operatorname{Ext}_D^i(F', D) = 0$ for $i > 0$ (cf. [2, Theorems 3.3.7 and
+3.3.10]). Therefore, on applying the functor $\operatorname{Hom}_D(F', \cdot)$ to (13), we obtain
+an isomorphism
+
+$$
+(14) \qquad \operatorname{Ext}_D^1(F', M) \cong \operatorname{Ext}_D^2(F', F).
+$$
+
+By Eisenbud's theory of matrix factorizations [5] (cf. also [26, Chapter 7]),
+$F'$ has a periodic resolution with period at most 2 and with constant Betti
+numbers. Thus we have short exact sequences
+
+$$
+(15) \qquad 0 \to G \to D^{(t)} \xrightarrow{\psi} F' \to 0
+$$
+
+and
+
+$$
+(16) \qquad 0 \to F' \to D^{(t)} \to G \to 0.
+$$
+
+Applying $\operatorname{Hom}_D(F', -)$ to (16), we get an isomorphism
+
+$$
+(17) \qquad \mathrm{Ext}_D^1(F', G) \cong \mathrm{Ext}_D^2(F', F').
+$$
+
+Moreover, naturality of the connecting homomorphisms in the long exact se-
+quences of $\mathrm{Ext}$ implies that the isomorphisms in (14) and (17) are actually
+isomorphisms of right $B$-modules.
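A standard illustration of this 2-periodicity, not specific to the present paper: over the hypersurface $D = k[[x, y]]/(xy)$, the maximal Cohen-Macaulay module $F' = D/(x)$ arises from the matrix factorization $(x, y)$ of $f = xy$, and its resolution alternates between multiplication by $x$ and by $y$:

```latex
\cdots \longrightarrow D \xrightarrow{\;x\;} D \xrightarrow{\;y\;}
D \xrightarrow{\;x\;} D \longrightarrow D/(x) \longrightarrow 0 .
```

In this example $t = 1$ in (15) and (16), and $G = \operatorname{syz}_D^1(F') \cong D/(y)$, since $(0 :_D x) = (y)$.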
+---PAGE_BREAK---
+
+Next, applying $\operatorname{Hom}_D(F', \_)$ to (15), we get an exact sequence of right $B$-modules
+
+$$ \operatorname{Hom}_D(F', D^{(t)}) \xrightarrow{\psi_*} B \xrightarrow{\eta} \operatorname{Ext}_D^1(F', G) \to 0. $$
+
+Since $F'$ is a direct summand of $F$, there is an injection of right $B$-modules $\operatorname{Ext}_D^2(F', F') \hookrightarrow \operatorname{Ext}_D^2(F', F)$. Composing this injection with the isomorphisms in (14) and (17), we get an injection of right $B$-modules $j : \operatorname{Ext}_D^1(F', G) \hookrightarrow \operatorname{Ext}_D^1(F', M)$. Putting $\beta = j\eta$, we obtain an exact sequence of right $B$-modules
+
+$$ \operatorname{Hom}_D(F', D^{(t)}) \xrightarrow{\psi_*} B \xrightarrow{\beta} \operatorname{Ext}_D^1(F', M). $$
+
+We claim that $\ker(\beta)$ is contained in the Jacobson radical $J(B)$ of $B$. To prove this, let $g \in \ker(\beta) = \operatorname{im}(\psi_*)$. Then $g$ lifts to a map $h : F' \to D^{(t)}$, with $\psi h = g$. Since $F'$ has no non-zero free summand, $h(F') \subseteq nD^{(t)}$. This shows that $g(F') \subseteq nF'$, and the claim follows easily (cf., e.g., [10, Lemma 4.4]). The existence of the short exact sequence (12) now follows from Corollary 3.4. $\square$
+
+In the following, we say that a $D$-module $M$ has rank $s$ provided $M_P \cong D_P^{(s)}$ for every associated prime $P$ of $D$.
+
+**PROPOSITION 5.4.** Let $(D, n, k)$ be an abstract hypersurface of dimension 1, and assume that $D$ has a Drozd ring $\Lambda$ as a homomorphic image. Let $M := M_{n,\kappa}$ be the indecomposable $\Lambda$-module built in Proposition 4.4, and let $L := \operatorname{syz}_D^1(M)$ be the first syzygy of the $D$-module $M$. Write $L = D^{(r)} \oplus F$, where $F$ has no non-zero free direct summand. Then $\operatorname{rank}(F) \ge \frac{n}{e-1}$, where $e = e(D)$ is the multiplicity of $D$.
+
+*Proof.* Obviously $F$ has a rank. Put $s := \text{rank}(F)$ and $m := \mu_D(F)$ ($\mu$ = minimal number of generators required). It follows, e.g., from [13, (1.6)], that $m \le es$. (The statement of [13, (1.6)] assumes that $k$ is infinite. This is not a problem, since none of $m, e, s$ is changed by the flat local base change $D \to D(X) := D[X]_{n[X]}$.) Now $\mu_D(L) = r + m = 3n - s + m$, whence $\mu_D(L) - 3n \le (e-1)s$. Therefore it will suffice to show that $\mu_D(L) \ge 4n$. Since $\mu_D(n) = 2$, the following lemma completes the proof: $\square$
+
+**LEMMA 5.5.** Keep the notation above. There is a surjective $D$-homomorphism from $L = \text{syz}_D^1(M)$ onto $n^{(2n)}$.
+
+*Proof.* Let $\chi$ denote the composition $D^{(3n)} \twoheadrightarrow \Lambda^{(3n)} \xrightarrow{\varepsilon} M$, so that $\ker \chi = L$, and let $\pi : D^{(3n)} \to D^{(2n)}$ be the projection onto the first $2n$ coordinates. We will show that $\pi(L) = n^{(n)} \oplus n^{(n)}$. The inclusion $\pi(L) \subseteq n^{(n)} \oplus n^{(n)}$ is obvious. For the reverse inclusion, fix $i$, $1 \le i \le n$, and let $\mathbf{e}_i \in D^{(n)}$ be the $i$th unit vector. Let $\tilde{t}, \tilde{y} \in n$ lift the elements $t, y \in \Lambda$ (notation as in Proposition 4.4). It will suffice to show that the four elements
+---PAGE_BREAK---
+
+$(\tilde{t}\mathbf{e}_i, 0)$, $(\tilde{y}\mathbf{e}_i, 0)$, $(0, \tilde{t}\mathbf{e}_i)$ and $(0, \tilde{y}\mathbf{e}_i)$ are all in $\pi(L)$. But this follows easily from the definition of the matrix $\Psi_{n,\kappa}$. $\square$
+
+We now return to our special ring $(R, \mathfrak{m}, k)$ and the modules $M = M_{n,\kappa}$. As
+in Theorem 5.3, we write the first syzygy of $M$ in the form $R^{(r)} \oplus F$, where $F$
+has no non-zero free summand. To complete the proof of the Main Theorem,
+it will suffice, by Theorem 5.3, to show that $F$ has a direct summand $F'$ of
+rank $n$. By Proposition 5.4 we know that $F$ has rank at least $n$. By [22] $F$
+is isomorphic to a direct sum of ideals of $R$. (Cf. also [18, Theorem 2.1] for a
+more general statement and [1] for the analytically unramified case.) Each of
+these ideals must have rank 0 or 1. Therefore the desired module $F'$ can be
+obtained from $F$ by throwing out a few rank-one summands, if necessary.
+
+**6. The monoid of vector bundles**
+
+Let $(R, m, k)$ be a commutative Noetherian local ring. By a *vector bundle* we mean a finitely generated module $M$ such that $M_P$ is a free $R_P$-module for each prime ideal $P \neq m$. We denote by $R$-mod the category of finitely generated $R$-modules and by $\mathcal{F}(R)$ the full subcategory of vector bundles. Our goal is to obtain, in Theorem 6.3, a complete set of invariants for the monoid $\mathcal{V}(\mathcal{F}(R))$ of isomorphism classes of modules in $\mathcal{F}(R)$ when $\dim(R) = 1$, where the monoid operation is given by the direct sum. (Of course $\mathcal{F}(R) = R$-mod if each $R_P$ is a field, e.g., if $R$ is reduced and one-dimensional, or if $R$ is the non-Cohen-Macaulay ring given in Lemma 5.1.) The description of these monoids was worked out in [6] for the case of a one-dimensional Cohen-Macaulay ring. We will see here that the same results hold in the one-dimensional non-Cohen-Macaulay case, thanks to our Main Theorem. We refer the reader to [6, Section 1] for the relevant terminology and basic results concerning Krull monoids, divisor homomorphisms, and the class group $\text{Cl}(H)$ of a Krull monoid $H$.
+
+Suppose now that $(R, m, k)$ is a one-dimensional commutative Noetherian
+local ring. Let $\hat{R}$ denote the $m$-adic completion of $R$. The Krull-Schmidt
+theorem implies that $\mathcal{V}(\hat{R}\text{-mod})$ and $\mathcal{V}(\mathcal{F}(\hat{R}))$ are free monoids, with bases
+consisting of the isomorphism classes of the indecomposables. In other words,
+$\mathcal{V}(\mathcal{F}(\hat{R})) \cong \mathbb{N}^{(\tau)}$, the direct sum of $\tau$ copies of the additive monoid $\mathbb{N}$ of non-
+negative integers, where $\tau$ is the number of isomorphism classes of indecom-
+posable vector bundles over $\hat{R}$. It is easy to see that if $M$ is a finitely generated
+$R$-module, then $M$ is a vector bundle if and only if $\hat{R} \otimes_R M$ is a vector bundle.
+(This follows from the faithful flatness of $R_P \to \hat{R}_{Q_1} \times \cdots \times \hat{R}_{Q_t}$, where $P$ is
+a minimal prime of $R$ and the $Q_j$ are the primes of $\hat{R}$ lying over $P$.) Thus the
+divisor homomorphism [6, Section 1.1] $\mathcal{V}(R\text{-mod}) \to \mathcal{V}(\hat{R}\text{-mod})$ taking $[M]$
+to $[\hat{R} \otimes_R M]$ restricts to a divisor homomorphism $\mathcal{V}(\mathcal{F}(R)) \to \mathcal{V}(\mathcal{F}(\hat{R}))$. In
+particular, we can regard $\mathcal{V}(\mathcal{F}(R))$ as a submonoid of $\mathcal{V}(\mathcal{F}(\hat{R}))$. The key is to
+---PAGE_BREAK---
+
+understand exactly how $\mathcal{V}(\mathcal{F}(R))$ sits inside $\mathcal{V}(\mathcal{F}(\hat{R}))$, that is, which modules over the $\mathfrak{m}$-adic completion $\hat{R}$ are extended from $R$-modules.
+
+**PROPOSITION 6.1.** Let $(R, \mathfrak{m}, k)$ be a one-dimensional commutative Noetherian local ring with $\mathfrak{m}$-adic completion $\hat{R}$, and let $N$ be a vector bundle over $\hat{R}$. Then $N \cong \hat{R} \otimes_R M$ for some $R$-module $M$ (necessarily a vector bundle) if and only if $\operatorname{rank}_{\hat{R}_P}(N_P) = \operatorname{rank}_{\hat{R}_Q}(N_Q)$ whenever $P$ and $Q$ are minimal prime ideals of $\hat{R}$ with $P \cap R = Q \cap R$.
+
+*Proof.* The “only if” direction is clear. For the converse, let $P_1, \dots, P_s$ be the minimal prime ideals of $R$, and, for each $i$, let $n_i$ be the rank of $N$ at the primes lying over $P_i$. Let $K = R_{P_1} \times \cdots \times R_{P_s}$, and let $V$ be the projective $K$-module having rank $n_i$ at $P_i$. The $K \otimes_R \hat{R}$-module $K \otimes_R N$ is extended from the $K$-module $V$, and now [19, Theorem 3.4] implies that $N$ is extended from an $R$-module. $\square$
+
+The next result puts an upper bound on the number of non-isomorphic vector bundles. The case of a Cohen-Macaulay ring is [6, Lemma 2.3].
+
+**PROPOSITION 6.2.** Let $(R, \mathfrak{m}, k)$ be a one-dimensional commutative Noetherian local ring. Then $\left|\mathcal{V}(\mathcal{F}(R))\right| \leq |k| \cdot \aleph_0$.
+
+*Proof.* We observe, as in the first paragraph of the proof of [6, Lemma 2.3], that each finite-length module has cardinality at most $\tau := |k| \cdot \aleph_0$, and that there are at most $\tau$ isomorphism classes of finite-length $R$-modules.
+
+Let $P_1, \dots, P_s$ be the minimal prime ideals of $R$. Fix a vector bundle $M$, and let $n_i$ be the rank of $M$ at $P_i$. Since there are only countably many sequences $(n_1, \dots, n_s)$, it will suffice to show that there are at most $\tau$ non-isomorphic vector bundles with the same ranks as $M$ at the minimal primes.
+
+Let $K = R_{P_1} \times \cdots \times R_{P_s}$, the localization of $R$ at the complement of the union of the minimal prime ideals. Given a vector bundle $N$ with $\operatorname{rank}_{R_{P_i}}(N_{P_i}) = n_i$ for each $i$, one can choose a homomorphism $\varphi : M \to N$ such that $1_K \otimes_R \varphi$ is an isomorphism. Then $U := \ker(\varphi)$ and $V := \operatorname{coker}(\varphi)$ have finite length. Put $W := \operatorname{im}(\varphi) \cong \operatorname{coker}(U \hookrightarrow M)$. By the first paragraph, there are at most $\tau$ choices for $U$ and $V$. Also, for each $U$, $\operatorname{Hom}_R(U, M)$ has finite length and therefore has cardinality at most $\tau$. Therefore there are at most $\tau$ possibilities for $W$. Finally, the exact sequence $0 \to W \to N \to V \to 0$ and the fact that $\operatorname{Ext}_R^1(V, W)$ has finite length, and hence cardinality bounded by $\tau$, show that there are at most $\tau$ possibilities for $N$. $\square$
+
+Fix a positive integer $q$ and an infinite cardinal $\tau$. Let $B$ be any $q \times \tau$ integer matrix such that each element of $\mathbb{Z}^{(q)}$ occurs $\tau$ times as a column of $B$. We let $\mathfrak{F}(q, \tau) := \mathbb{N}^{(\tau)} \cap \text{ker}(B : \mathbb{Z}^{(\tau)} \to \mathbb{Z}^{(q)})$, where $\mathbb{N}$ denotes the set of non-negative integers. Finally, we put $\mathfrak{F}(0, \tau) = \mathbb{N}^{(\tau)}$. These are the monoids
+---PAGE_BREAK---
+
+we will obtain as $\mathcal{V}(\mathcal{F}(R))$ for the rings that are not Dedekind-like. Not surprisingly, the isomorphism class of the monoid $\mathfrak{F}(q, \tau)$ does not depend on how the columns of $B$ are arranged, as long as each column is repeated $\tau$ times. (Cf. [6, Lemmas 1.1 and 2.1].)
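To make the definition concrete, here is a small sketch in code (illustrative only: we take $q = 1$ and truncate $B$ to five columns of our own choosing, whereas the actual matrix has infinitely many columns):

```python
# Sketch of membership in F(q, tau) = N^(tau) ∩ ker(B), truncated to q = 1
# and five columns; the row below is an invented example, not from the paper.
B = [[1, -1, 2, 0, -2]]

def in_F(v):
    """Membership test: v is componentwise non-negative and B @ v = 0."""
    return all(x >= 0 for x in v) and all(
        sum(r * x for r, x in zip(row, v)) == 0 for row in B
    )

a = [1, 1, 0, 0, 0]           # 1 - 1 = 0, so a is in the monoid
b = [0, 2, 1, 0, 0]           # -2 + 2 = 0
print(in_F(a), in_F(b))        # True True
print(in_F([1, 0, 0, 0, 0]))   # False: B @ v = [1]
# Closure under addition -- the defining conditions are linear:
print(in_F([x + y for x, y in zip(a, b)]))  # True
```

The monoid $\mathfrak{S}_1$ below can be sketched the same way, using the alternating-sign row $E$ in place of $B$.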
+
+For some Dedekind-like rings, we will obtain a different monoid. Let $E$ be the $1 \times \aleph_0$ matrix $[1 \ {-1} \ 1 \ {-1} \ 1 \ {-1} \ \cdots]$, and put $\mathfrak{S}_1 := \mathbb{N}^{(\aleph_0)} \cap \ker(E : \mathbb{Z}^{(\aleph_0)} \to \mathbb{Z})$.
+
+For a one-dimensional local ring $(R, \mathfrak{m}, k)$, we define the *splitting number* $\operatorname{spl}(R)$ to be the difference $|\operatorname{Spec}(\hat{R})| - |\operatorname{Spec}(R)|$. Thus, for example, $\operatorname{spl}(R) = 0$ means that the natural map $\operatorname{Spec}(\hat{R}) \to \operatorname{Spec}(R)$ is bijective.
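A standard example (our addition): let $R = \mathbb{Q}[x, y]_{(x, y)}/(y^2 - x^2 - x^3)$, the local ring of a nodal cubic at its node. $R$ is a domain, but $1 + x$ has a square root in $\mathbb{Q}[[x]]$, so the defining equation factors in the completion:

```latex
\hat{R} \;\cong\; \mathbb{Q}[[x, y]]\,\big/\,
\bigl(y - x\sqrt{1 + x}\bigr)\bigl(y + x\sqrt{1 + x}\bigr).
```

Thus $\hat{R}$ has two minimal primes while $R$ has one, and $\operatorname{spl}(R) = 1$.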
+
+We can now state the main theorem of this section. For the proof, we refer
+the reader to the proof of [6, Theorem 2.2]. The only modification needed to
+eliminate the Cohen-Macaulay hypothesis is to replace Lemmas 2.3, 2.4 and
+2.5 in [6] by, respectively, Proposition 6.2, Theorem 1.2 and Proposition 6.1
+of this paper.
+
+**THEOREM 6.3.** Suppose $(R, \mathfrak{m}, k)$ is a one-dimensional commutative Noetherian local ring. Let $q := \operatorname{spl}(R)$ be the splitting number of $R$, and let $\tau = \tau(R) = |k| \cdot \aleph_0$.
+
+(1) If $R$ is not Dedekind-like, then $\mathcal{V}(\mathcal{F}(R)) \cong \mathfrak{F}(q, \tau)$.
+
+(2) If $R$ is a discrete valuation ring, then $\mathcal{V}(\mathcal{F}(R)) = \mathcal{V}(R\text{-mod}) \cong \mathbb{N}^{(\aleph_0)}$.
+
+(3) If $R$ is Dedekind-like but not a discrete valuation ring, and if $q = 0$, then $\mathcal{V}(\mathcal{F}(R)) = \mathcal{V}(R\text{-mod}) \cong \mathbb{N}^{(\tau)}$.
+
+(4) If $R$ is Dedekind-like and $q > 0$, then $q = 1$ and $\mathcal{V}(\mathcal{F}(R)) = \mathcal{V}(R\text{-mod}) \cong \mathbb{N}^{(\tau)} \oplus \mathfrak{S}_1$.
+
+In every case, $\operatorname{Cl}(\mathcal{V}(\mathcal{F}(R))) \cong \mathbb{Z}^{(q)}$.
+
+We remark that the yet-unpublished results on the structure of modules over exceptional Dedekind-like rings have no bearing on the validity of this theorem: If $R$ is an exceptional Dedekind-like ring, then $\mathrm{spl}(R) = 0$, and hence all that is needed is the straightforward construction of $\tau(R)$ indecomposable modules over $R$, given in [6, Lemma 2.6].
+
+**7. Non-local rings**
+
+In this section only, we do not assume that Dedekind-like rings are necessarily local, calling the commutative Noetherian ring $R$ a (*global*) *Dedekind-like ring* if, for each maximal ideal $\mathfrak{m}$ of $R$, the localization $R_{\mathfrak{m}}$ is a (local) Dedekind-like ring [16, Corollary 10.7]. If $R$ is a (global) Dedekind-like ring such that none of the localizations of $R$ is exceptional, and if $M$ is a finitely generated indecomposable $R$-module, then the rank of $M_P$ is at most two for every minimal prime $P$ of $R$ [16, Corollary 16.9]. In this section, we prove that this result fails if at least one of the localizations of $R$ is not a homomorphic image of a Dedekind-like ring.
+---PAGE_BREAK---
+
+**THEOREM 7.1.** Let $R$ be a connected, commutative, Noetherian ring, and suppose that $R$ is not a homomorphic image of a (global) Dedekind-like ring. Then, for every integer $n \ge 1$, there exist infinitely many indecomposable finitely generated $R$-modules $M$ such that $M_P \cong R_P^{(n)}$ for each minimal prime $P$ of $R$.
+
+*Proof.* We begin by fixing a maximal ideal $\mathfrak{m}$ of $R$ such that $R_{\mathfrak{m}}$ is not a homomorphic image of a (local) Dedekind-like ring. If $R$ has dimension greater than one, then we can take $\mathfrak{m}$ to be any maximal ideal of height greater than one, since (local) Dedekind-like rings have dimension one. If $R$ has dimension one, then the existence of such a maximal ideal $\mathfrak{m}$ follows immediately from [16, Proposition 14.1 and Corollary 13.6].
+
+Note that a Noetherian ring $A$ is connected if and only if, for every non-empty, proper subset $\mathcal{V}$ of the set of minimal prime ideals of $A$, there exist a maximal ideal $\mathfrak{m}_{\mathcal{V}}$ of $A$ and minimal primes $P \in \mathcal{V}$ and $Q \notin \mathcal{V}$ such that $P+Q \subseteq \mathfrak{m}_{\mathcal{V}}$. Thus we can find a finite list $\mathfrak{m}_1 = \mathfrak{m}, \mathfrak{m}_2, \dots, \mathfrak{m}_t$ of maximal ideals of $R$ such that each minimal prime of $R$ is contained in at least one maximal ideal in the list, and such that, for every non-empty, proper subset $\mathcal{V}$ of the set of minimal prime ideals of $R$, there are minimal primes $P \in \mathcal{V}$ and $Q \notin \mathcal{V}$ with $P, Q \subseteq \mathfrak{m}_i$ for some index $i$. Therefore, if we set $S := R - \bigcup_{i=1}^t \mathfrak{m}_i$, it follows that the localization $S^{-1}R$ is connected, with minimal primes precisely the localizations of the minimal primes of $R$.
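The combinatorial content of this connectedness criterion is graph connectivity: take the minimal primes as vertices and join two of them whenever they lie in a common maximal ideal. A small sketch under that reading (the incidence data below is invented for illustration, and `primes_connected` is our own helper, not anything from the paper):

```python
def primes_connected(incidence):
    """incidence: one set of minimal-prime labels per maximal ideal,
    listing the minimal primes contained in that maximal ideal.
    Returns True iff the graph joining primes that share a maximal
    ideal is connected (union-find with path halving)."""
    parent = {}

    def find(p):
        while parent.setdefault(p, p) != p:
            parent[p] = parent[parent[p]]   # path halving
            p = parent[p]
        return p

    for fiber in incidence:
        fiber = list(fiber)
        for q in fiber[1:]:
            parent[find(q)] = find(fiber[0])   # union the whole fiber
    roots = {find(p) for fiber in incidence for p in fiber}
    return len(roots) <= 1

# For example, P0, P1 in m1 and P1, P2 in m2 chains all three primes:
print(primes_connected([{"P0", "P1"}, {"P1", "P2"}]))  # True
# With no maximal ideal meeting both groups, Spec would be disconnected:
print(primes_connected([{"P0"}, {"P1", "P2"}]))        # False
```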
+
+Suppose that we can find a finitely generated indecomposable $S^{-1}R$-module $M$ such that $M_{S^{-1}P} \cong (S^{-1}R)_{S^{-1}P}^{(n)}$ for each minimal prime $P$ of $R$. Let $N$ be a finitely generated $R$-module such that $S^{-1}N \cong M$, and let $N = N_1 \oplus \cdots \oplus N_k$ be a decomposition of $N$ into indecomposable $R$-modules. Since $S^{-1}N$ is indecomposable, we have $S^{-1}N_j = 0$ for all indices $j$ except one, say $i$, and $S^{-1}N_i \cong M$. Then $N_i$ is an indecomposable $R$-module such that $(N_i)_P \cong M_{S^{-1}P} \cong (S^{-1}R)_{S^{-1}P}^{(n)} \cong R_P^{(n)}$ for each minimal prime $P$ of $R$, and the theorem is proved.
+
+Therefore it suffices to prove the theorem under the additional hypothesis that $R$ be semilocal. Let $\mathfrak{m}_1, \dots, \mathfrak{m}_t$ be the maximal ideals of $R$, where $R_{\mathfrak{m}_1}$ is not a homomorphic image of a Dedekind-like ring. Further, suppose $\mathfrak{m}_1$ has height greater than one if $\dim R > 1$. We distinguish the two cases in which $R$ has dimension one or dimension greater than one.
+
+Suppose first that $R$ has dimension one. Let $M_1$ be an indecomposable $R_{\mathfrak{m}_1}$-module with constant rank $n$ at the minimal primes of $R$ contained in $\mathfrak{m}_1$ (Theorem 1.2); for $2 \le j \le t$, let $M_j = R_{\mathfrak{m}_j}^{(n)}$. Since $R$ has only finitely many prime ideals, there exists, by [25, Lemma 1.11], an $R$-module $M$ such that $M_{\mathfrak{m}_j} \cong M_j$ for all $j = 1, \dots, t$. If $M = U \oplus V$, then, since $M_{\mathfrak{m}_1}$ is indecomposable, we can assume that $U_{\mathfrak{m}_1} = 0$. Since $M_{\mathfrak{m}_j}$ is $R_{\mathfrak{m}_j}$-free for $2 \le j \le t$, $U_{\mathfrak{m}_j}$ is $R_{\mathfrak{m}_j}$-free for all $j = 1, \dots, t$, and it follows that $U$ is $R$-projective.
+---PAGE_BREAK---
+
+Since $R$ is connected and $U_{m_1} = 0$, it follows that $U = 0$. This shows that $M$ is indecomposable. Since Theorem 1.2 produces infinitely many pairwise non-isomorphic indecomposable $R_{m_1}$-modules locally of constant rank $n$ at the minimal primes of $R_{m_1}$, the theorem is proved in case $R$ has dimension one.
+
+Suppose instead that $R$ has dimension greater than one, so that $\mathfrak{m}_1$ is a maximal ideal of height greater than one. Thus, either the maximal ideal of $R_{\mathfrak{m}_1}$ requires three or more generators, or $R_{\mathfrak{m}_1}$ is a regular local ring of dimension two, and the square of its maximal ideal requires three generators. Either way, let $r$ be a positive integer such that $\mathfrak{m}_1^r/\mathfrak{m}_1^{r+1}$ is a vector space of dimension at least three over the residue field $R/\mathfrak{m}_1$. We adapt Proposition 2.1 to construct $R$-modules directly.
+
+Let $\mathcal{P}$ be the set consisting of the minimal primes of $R$ together with the remaining maximal ideals $\mathfrak{m}_2, \dots, \mathfrak{m}_t$, and choose $x, y$, and $z$ as in the first sentence of the proof of Proposition 2.1, where $\mathfrak{m} = \mathfrak{m}_1$. As in that proof, given any integer $q > n$, set $\Delta := (z+y)I_q + yH_q$, and let
+
+$$ \Xi := \begin{bmatrix} 0_n & 0 \\ 0 & x^2 I_{q-n} \end{bmatrix} \in \operatorname{Mat}_{q \times q}(R). $$
+
+Let $A$ be the $2q \times 2q$ matrix over $R$ defined by (1), and set $M := \operatorname{coker}(A)$. Since the images of $x, y$ and $z$, in $\mathfrak{m}_1^r/\mathfrak{m}_1^{r+1}$, are linearly independent over $R/\mathfrak{m}_1$, while the image of $x^2$ in $\mathfrak{m}_1^r/\mathfrak{m}_1^{r+1}$ is 0, the proof of Proposition 2.1 shows that $M_{\mathfrak{m}_1}$ is indecomposable. Moreover, for $P \in \mathcal{P}$, localizing at $P$ yields a matrix $\tilde{A}$ which is equivalent to $I_{2q-n} \oplus 0_n$ (because $x, y$, and $z$ become units in $R_P$), and hence $M_P \cong R_P^{(n)}$.
+
+To show that $M$ is indecomposable, suppose $M = U \oplus V$. Since $M_{\mathfrak{m}_1}$ is indecomposable, we can assume that $U_{\mathfrak{m}_1} = 0$. For $2 \le j \le t$, $U_{\mathfrak{m}_j}$ is a direct summand of the free $R_{\mathfrak{m}_j}$-module $M_{\mathfrak{m}_j}$ and thus is free. Therefore $U$ is $R$-projective; since $U_{\mathfrak{m}_1} = 0$ and $R$ is connected, $U$ must be zero. Thus $M$ is indecomposable. As noted in the proof of Proposition 2.1, the localization $M_{\mathfrak{m}_1}$ of the $R$-module $M$ just constructed requires exactly $2q$ generators as an $R_{\mathfrak{m}_1}$-module. Thus, by varying $q > n$, we get infinitely many pairwise non-isomorphic indecomposable $R$-modules locally of constant rank $n$ at the minimal primes of $R$. $\square$
+
+We leave to the reader the minor adjustments required to obtain $|k| \cdot \aleph_0$ pairwise non-isomorphic indecomposable $R$-modules of constant rank $n$, where $k$ is the residue field at the maximal ideal $\mathfrak{m}_1$. One might be able to extend Theorem 7.1 to allow for some non-constant ranks at the minimal primes, but it is doubtful that one can obtain arbitrary ranks at the minimal primes. For example, if $R$ has dimension one, two maximal ideals $\mathfrak{m}_1$ and $\mathfrak{m}_2$, and three minimal primes $P_0, P_1$, and $P_2$, such that $P_0, P_1 \subseteq \mathfrak{m}_1$ and $P_1, P_2 \subseteq \mathfrak{m}_2$, but $P_0 \not\subseteq \mathfrak{m}_2$ and $P_2 \not\subseteq \mathfrak{m}_1$, then it is not clear that there exists an indecomposable
+---PAGE_BREAK---
+
+module *M* of rank one at $P_0$ and $P_2$ but rank zero at $P_1$. Moreover, in dimension greater than one, R. Wiegand's "gluing lemma" [25, Lemma 1.11] does not apply, and it is difficult to imagine how to construct a module with arbitrary localizations at finitely many maximal ideals.
+
+## REFERENCES
+
+[1] H. Bass, *On the ubiquity of Gorenstein rings*, Math. Z. **82** (1963), 8-28. MR 0153708 (27 #3669)
+
+[2] W. Bruns and J. Herzog, *Cohen-Macaulay rings*, Cambridge Studies in Advanced Mathematics, vol. 39, Cambridge University Press, Cambridge, 1993. MR 1251956 (95h:13020)
+
+[3] J. Dieudonné, *Sur la réduction canonique des couples de matrices*, Bull. Soc. Math. France **74** (1946), 130-146. MR 0022826 (9,264f)
+
+[4] Yu. A. Drozd, *Representations of commutative algebras (Russian)*, Funktsional. Anal. i Priložen. **6** (1972), 41-43; English transl., Funct. Anal. Appl. **6** (1972), 286-288. MR 0311718 (47 #280)
+
+[5] D. Eisenbud, *Homological algebra on a complete intersection, with an application to group representations*, Trans. Amer. Math. Soc. **260** (1980), 35-64. MR 570778 (82d:13013)
+
+[6] A. Facchini, W. Hassler, L. Klingler and R. Wiegand, *Direct-sum decompositions over one-dimensional Cohen-Macaulay rings*, Multiplicative ideal theory in commutative algebra: a tribute to the work of Robert Gilmer (J. Brewer, S. Glaz, W. Heinzer, B. Olberding, eds.), Springer, New York, 2006, pp. 153-168. MR 2265807
+
+[7] A. Facchini, *Module theory. Endomorphism rings and direct sum decompositions in some classes of modules.*, Progress in Mathematics, vol. 167, Birkhäuser Verlag, Basel, 1998. MR 1634015 (99h:16004)
+
+[8] J. S. Golan, *Torsion theories, Pitman Monographs and Surveys in Pure and Applied Mathematics*, vol. 29, Longman Scientific & Technical, Harlow, 1986. MR 880019 (88c:16034)
+
+[9] W. Hassler, R. Karr, L. Klingler, and R. Wiegand, *Indecomposable modules of large rank over Cohen-Macaulay local rings*, Trans. Amer. Math. Soc., to appear.
+
+[10] __________, *Large indecomposable modules over local rings*, J. Algebra **303** (2006), 202-215. MR 2253659
+
+[11] A. Heller and I. Reiner, *Indecomposable representations*, Illinois J. Math. **5** (1961), 314-323. MR 0122890 (23 #A222)
+
+[12] D. G. Higman, *Indecomposable representations at characteristic p*, Duke Math. J. **21** (1954), 377-381. MR 0067896 (16,794c)
+
+[13] C. Huneke and R. Wiegand, *Tensor products of modules and the rigidity of Tor*, Math. Ann. **299** (1994), 449-476. MR 1282227 (95m:13008)
+
+[14] L. Klingler and L. S. Levy, *Representation type of commutative Noetherian rings. I. Local wildness*, Pacific J. Math. **200** (2001), 345-386. MR 1868696 (2002i:13008a)
+
+[15] __________, *Representation type of commutative Noetherian rings. II. Local tameness*, Pacific J. Math. **200** (2001), 387-483. MR 1868697 (2002i:13008b)
+
+[16] __________, *Representation type of commutative Noetherian rings. III. Global wildness and tameness*, Mem. Amer. Math. Soc. **176** (2005). MR 2147090 (2006g:13037)
+
+[17] L. Kronecker, *Über die congruenten Transformationen der bilinearen Formen*, Monatsberichte Königl. Preuß. Akad. Wiss. Berlin (1874), 397-447; reprinted in: Leopold Kronecker's Werke (K. Hensel, Ed.), Vol. 1, pp. 423-483, Chelsea, New York, 1968.
+
+[18] G. J. Leuschke and R. Wiegand, *Hypersurfaces of bounded Cohen-Macaulay type*, J. Pure Appl. Algebra **201** (2005), 204-217. MR 2158755 (2006c:13014)
+---PAGE_BREAK---
+
+[19] L. S. Levy and C. J. Odenthal, *Package deal theorems and splitting orders in dimension 1*, Trans. Amer. Math. Soc. **348** (1996), 3457–3503. MR 1351493 (96m:16006b)
+
+[20] H. Matsumura, *Commutative ring theory*, Cambridge Studies in Advanced Mathematics, vol. 8, Cambridge University Press, Cambridge, 1986. MR 879273 (88h:13001)
+
+[21] C. M. Ringel, *The representation type of local algebras*, Lecture Notes in Mathematics, vol. 488, Springer-Verlag, New York, 1975, pp. 282–305.
+
+[22] D. E. Rush, *Rings with two-generated ideals*, J. Pure Appl. Algebra **73** (1991), 257–275. MR 1124788 (92j:13008)
+
+[23] R. B. Warfield, Jr., *Decomposability of finitely presented modules*, Proc. Amer. Math. Soc. **25** (1970), 167–172. MR 0254030 (40 #7243)
+
+[24] K. Weierstrass, *Zur Theorie der bilinearen und quadratischen Formen*, Monatsberichte Königl. Preuß. Akad. Wiss. Berlin (1868), 310–338.
+
+[25] R. Wiegand, *Noetherian rings of bounded representation type*, Commutative algebra (Berkeley, CA, 1987), Math. Sci. Res. Inst. Publ., vol. 15, Springer, New York, 1989, pp. 497–516. MR 1015536 (90i:13010)
+
+[26] Y. Yoshino, *Cohen-Macaulay modules over Cohen-Macaulay rings*, London Mathematical Society Lecture Note Series, vol. 146, Cambridge University Press, Cambridge, 1990. MR 1079937 (92b:13016)
+
+WOLFGANG HASSLER, INSTITUT FÜR MATHEMATIK UND WISSENSCHAFTLICHES RECHNEN, KARL-FRANZENS-UNIVERSITÄT GRAZ, HEINRICHSTRASSE 36/IV, A-8010 GRAZ, AUSTRIA
+
+*E-mail address: wolfgang.hassler@uni-graz.at*
+
+RYAN KARR, HONORS COLLEGE, FLORIDA ATLANTIC UNIVERSITY, JUPITER, FL 33458, USA
+
+*E-mail address: rkarr@fau.edu*
+
+LEE KLINGLER, DEPARTMENT OF MATHEMATICAL SCIENCES, FLORIDA ATLANTIC UNIVERSITY, BOCA RATON, FL 33431-6498, USA
+
+*E-mail address: klingler@fau.edu*
+
+ROGER WIEGAND, DEPARTMENT OF MATHEMATICS, UNIVERSITY OF NEBRASKA, LINCOLN, NE 68588-0323, USA
+
+*E-mail address: rwiegand@math.unl.edu*
\ No newline at end of file
diff --git a/samples/texts_merged/633594.md b/samples/texts_merged/633594.md
new file mode 100644
index 0000000000000000000000000000000000000000..28b0d8ebe48c4f72c2ca48e451b77d53b3fa9603
--- /dev/null
+++ b/samples/texts_merged/633594.md
@@ -0,0 +1,992 @@
+
+---PAGE_BREAK---
+
+Optimal equity infusions in interbank networks
+
+Hamed Amini, Andreea Minca, Agnès Sulem
+
+► To cite this version:
+
+Hamed Amini, Andreea Minca, Agnès Sulem. Optimal equity infusions in interbank networks. Journal of Financial Stability, Elsevier, 2017, 31, pp.1-17. 10.1016/j.jfs.2017.05.008. hal-01614759
+
+HAL Id: hal-01614759
+
+https://hal.inria.fr/hal-01614759
+
+Submitted on 23 Oct 2017
+
+**HAL** is a multi-disciplinary open access archive for the deposit and dissemination of scientific research documents, whether they are published or not. The documents may come from teaching and research institutions in France or abroad, or from public or private research centers.
+
+---PAGE_BREAK---
+
+Optimal equity infusions in interbank networks*
+
+Hamed Amini,† Andreea Minca,‡ Agnès Sulem§
+
+*Acknowledgements. We are grateful to the editors and the anonymous referees for very helpful comments. Andreea Minca is partially funded under NSF grant CMMI 1638230.
+
+†Corresponding author. University of Miami, Coral Gables, FL, 33146, USA, email: h.amini@math.miami.edu
+
+‡School of Operations Research and Information Engineering, Cornell University, Ithaca, NY 14850, USA, email: acm299@cornell.edu
+
+§INRIA Paris, Mathrisk research group, 2 rue Simone Iff, CS 42112, 75589 Paris Cedex 12, France, email: agnes.sulem@inria.fr
+---PAGE_BREAK---
+
+Optimal equity infusions in interbank networks
+
+Hamed Amini, Andreea Minca, Agnès Sulem
+
+Abstract
+
+We study optimal equity infusions into a financial network prone to the risk of contagious failures, which may be due to insolvency or to bank runs by short term creditors. Bank runs can be triggered by failures of connected banks.
+
+Under complete information on interbank linkages, we show that the problem reduces to a combinatorial optimization problem. Subject to budget constraints, the government chooses the set of banks of minimal intervention cost whose survival induces the maximum network stability. Our results demonstrate that the optimal equity infusion might significantly mitigate failure contagion risk and stabilize the system. In the case of partial information on the network, the controller's focus swiftly changes from preventing insolvencies to preventing runs by short term creditors.
+
+Keywords: Systemic risk, Markov decision process, Liquidity risk, Rollover risk,
+Financial contagion.
+
+# 1 Introduction
+
+During the financial crisis, systemic risk has emerged as a major concern for governments, financial regulators and risk managers. By contrast with the traditional approach in risk management, the focus is no longer on modeling and managing the risks faced by a single financial institution, but on taking into account the interrelations between financial institutions and the complex mechanisms of distress propagation.
+
+Limiting systemic risk requires new analytical and computational tools. Most research in this area focuses on systemic risk measurement and attribution. Recent works propose systemic risk measures within the framework of network models for insolvency risk in banking systems, see e.g. Amini et al. (2016), Battiston et al. (2012), Cont et al. (2012), Lehar (2005). The related issue of systemic risk attribution is addressed in Liu and Staum (2011).
+
+---PAGE_BREAK---
+
+Avram and Minca (2017), Blanchet and Shi (2012), Kley et al. (2016) extend the application field of the network framework to insurance-reinsurance markets. Network models for systemic risk have the advantage of being both structural (they integrate explicitly the details of banks’ balance sheets and the interbank exposures) and tractable. Such features are crucial for determining best responses to a systemic crisis. An example in this sense is Rogers and Veraart (2012), who use networks as decision tools for the establishment of rescue consortia.
+
+Our contribution is within the area of systemic risk management. We use network models to investigate a government’s problem of optimal intervention in the form of equity infusions. This problem is motivated by the government interventions during the recent crisis, which took the form of recapitalizations, see e.g. Swagel (2009), Veronesi and Zingales (2010). The rationale and the net gain from these equity infusion programs are an important topic in the finance literature, both empirical and theoretical, see Bayazitova and Shivdasani (2012), Philippon and Schnabl (2009), Philippon and Skreta (2010), Veronesi and Zingales (2010). Several possible reasons for equity infusion programs have been advanced in this literature (see Veronesi and Zingales (2010)). One theory is that the government recapitalized a banking sector that restricted lending because of debt overhang. The resulting optimization program has been investigated in Philippon and Schnabl (2009).
+
+The second possible reason is that the government intervened in order to prevent runs by short term creditors, since runs destroy value and are inefficient. Runs by short term creditors played a central role during the crisis, see Gorton and Metrick (2012a). In this paper we investigate this rationale.
+
+Some of the questions we ask are: Is an equity infusion program such as the British equity infusion program or the US CPP (Capital Purchase Program) of 2008 justified in terms of the net gains when we take into account the runs of short term creditors? How does the optimal decision depend on the percentage of banks that use short term funding? Is the intervention budget constraint saturated at the optimal solution?¹ To answer these questions, we build a model in which runs by short term creditors can be triggered by failures of connected banks. We set up an optimal equity infusion program by a government with a constrained budget who aims at minimizing the total loss in the system. The loss is defined as the total capital of the failed banks, plus the write-downs recorded by surviving banks. The general model of failure propagation incorporates a bank run component on top of an insolvency component. The bank run component lowers the contagion threshold. The underlying network of exposures transmits losses among financial institutions, which then see their capital depleted and consequently may face bank runs even if they are solvent. The failure due to bank runs of some participants prompts more write-downs to their counterparties, leading to more contagion. Importantly, our model features a channel of *indirect contagion*: increases in the cost of funding and possible runs can happen even in the absence of exposures to failed banks, as short-term lenders may become skeptical of all short-term borrowers. Our perspective on the relation between insolvency risk and liquidity risk differs from and thus complements the recent literature on funding liquidity. While, similarly to the recent single-bank funding liquidity risk models such as He and Xiong (2012), Liang et al. (2013), Minca and Wissel (2015), Morris and Shin (2009), the risk of insolvency prompts creditors to withdraw
+
+¹In the US, the intervention budget is imposed on the government by a vote of Congress, and therefore it is important to determine in a quantitative model whether this constraint is saturated.
+---PAGE_BREAK---
+
+funding, in our paper the insolvency risk is carried through the network. Previous models are single-bank models in which the focus is on the interaction of the short term creditors and in which the illiquidity barrier as a function of the capital is endogenous. Here, this function is exogenous, while insolvency risk comes from “far-away” in the network.
+
+We assume that banks cannot refuse the decision of the government².
+
+We analyze two information settings. The first setting provides computational tools for the choice of a set of banks of minimal intervention cost that can stabilize the network. The second, partial information setting is highly stylized and intended to provide qualitative answers to the questions we stated above.
+
+First, in the complete information setting, the controller observes the entire interbank exposure network. We show in Proposition 1 that the optimal equity infusion problem becomes a combinatorial optimization problem. This problem is tractable for networks of realistic size (several dozen nodes) and arbitrary structure. The idea is that complete information on the network reveals the causal structure of failures, that is, which banks fail because of bankruptcies of others. Therefore, we explore the intervention cost of insuring subsets of nodes, with the understanding that this leads to a modified causal structure of failures.
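+
+To make the combinatorial reformulation concrete, the following minimal sketch (not from the paper; the function names, the zero-recovery cascade, and the stability criterion are illustrative assumptions) enumerates subsets of banks to insure, returning a cheapest subset within budget whose guaranteed survival keeps the number of failures below a tolerance:
+
+```python
+from itertools import combinations
+
+def cascade_survivors(n, exposures, capital, saved):
+    """Zero-recovery cascade: a bank fails when its capital, net of
+    write-downs on failed counterparties, drops below zero, unless its
+    survival is guaranteed by an infusion (it belongs to `saved`)."""
+    failed = {i for i in range(n) if capital[i] < 0 and i not in saved}
+    while True:
+        new_failed = {
+            i for i in range(n)
+            if i not in failed and i not in saved
+            and capital[i] - sum(exposures[i][j] for j in failed) < 0
+        }
+        if not new_failed:
+            return failed
+        failed |= new_failed
+
+def cheapest_stabilizing_set(n, exposures, capital, cost, budget, max_failures):
+    """Naive enumeration over subsets of banks to insure; with pruning
+    this kind of search is feasible for a few dozen nodes."""
+    best, best_cost = None, float("inf")
+    for k in range(n + 1):
+        for subset in combinations(range(n), k):
+            c = sum(cost[i] for i in subset)
+            if c > budget or c >= best_cost:
+                continue
+            failures = cascade_survivors(n, exposures, capital, set(subset))
+            if len(failures) <= max_failures:
+                best, best_cost = set(subset), c
+    return best, best_cost
+```
+
+For realistic instances one would prune the search (e.g. by cost lower bounds) rather than enumerate all subsets, but the underlying idea is the same: each candidate set of insured banks induces a modified causal structure of failures, which the cascade recomputes.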
+
+In our numerical examples, we find that the solution does not saturate the budget constraint. This result indicates that there is a tipping point, beyond which intervention is no longer optimal. It would, in effect, transfer the losses from the existing creditors and shareholders of failed banks to the taxpayer. The results in the complete information setting are given in the case of zero recovery rates, but can be extended to other recovery rate models such as Eisenberg and Noe (2001), Rogers and Veraart (2012). The combinatorial aspect becomes relevant when the continuum of losses of the seminal paper by Eisenberg and Noe (2001) is lost. This continuum is lost as soon as there are fixed and important bankruptcy costs, see e.g. Amini et al. (2016), Rogers and Veraart (2012). This aspect is highly relevant to systemic crises which have an illiquidity component (such as the crisis of 2007-2009).
+
+In the second part of the paper we propose a partial information setting, where the controller observes the interbank exposure network in continuous time. Information about balance sheets is revealed progressively, when banks record their exposures to failed banks. We stylize the network model to obtain numerical tractability. The network is regular in structure, all nodes having the same degrees. The inhomogeneity in the model stems from the stability of the funding across banks. When intervention is adapted to the information given by the partial observation of the contagion cluster, and under certain assumptions on the distribution of the times at which new information is learned, the cascade model is shown to become a search model. The recording of an exposure to a failed bank appears to a partial network observer as the result of a search process: A failed bank (belonging to the observed set of failed banks) will “search” for a counterparty, with a probability depending on banks’ connectivities and their number of already recorded exposures. The search process
+
+²During the 2008 crisis, amid the Capital Purchase Program, the US banks that received equity infusions did not have the option to refuse the government decision, see e.g. Landler and Dash (2008), Veronesi and Zingales (2010). The Capital Assistance Program (CAP) replaced the Capital Purchase Program in 2009, offering participating banks redemption options. CAP securities valuation is investigated in Glasserman and Wang (2011).
+---PAGE_BREAK---
+
+is a Markov process, which consists of bilateral interactions between a failed bank and one of its counterparties. The successive interaction times correspond to the times when write-downs are recorded. To prove the equivalence in law of the cascade and the search process, we need to make an extension of the coupling argument in Amini et al. (2016) to the case of continuous time and adapted control. Our result in this setting, Theorem 1, allows us to study the cascade dynamics using a Markov Decision Process.
+
+Our results point to a strong dependence of the optimal intervention policy on the proportion of banks that use unstable short-term funding. The main takeaway message within the partial information setting is that the presence of even a small proportion of banks using unstable funding will prompt the optimizing government to inject equity in banks that are otherwise solvent, in order to prevent a run by short term creditors.
+
+We define the net gain from intervention as the difference between the magnitude of losses due to financial contagion with and without intervention, net of the cost to the government. We find that optimal intervention significantly mitigates financial contagion, both in the complete and partial information setting. We finally assess the value of information, namely the difference in the net gain from intervention between the complete and the partial information case. We find that the value of information is limited, as the net gain under partial information compares well with the net gain under complete information.
+
+We show that the net gain under partial information is due to a large extent to the availability of an adapted intervention strategy. Indeed, we further compare the net gain from intervention in two cases: First, the government observes the spread of distress continuously and adapts its strategy to the flow of information. Second, we restrict the government to optimally injecting equity only once, after the exogenous shock applied to the system. We find that the net gain is significantly larger in the first case, thereby justifying multiple infusions in the same bank, as was the case in 2008.
+
+In recent work, Amini et al. (2015) have investigated other types of interventions, namely those by a lender of last resort. The results in Amini et al. (2015) clearly point to the relation between the value of the financial system (defined there as the sum of all external projects), connectivity and optimal intervention. Up to a certain connectivity, the value of the financial system increases with connectivity. This implies that a connected system prone to contagion and thus depending on intervention is preferable to a disconnected system. However, this is no longer the case if connectivity becomes too large: even with intervention, the value of the system may fall below the value of the disconnected system. In Minca and Sulem (2014) the problem of optimal intervention is treated in the context of Eisenberg and Noe (2001), and the authors show that it may be optimal to restrict intervention to a subset of the banks.
+
+In contrast, here we compare the two settings - partial and complete, and the main driver of the equity infusions is the existence of inefficient bank runs. It is the probability of bank runs that plays an important role in the case of incomplete information and leads the controller to make preventive infusions in otherwise well capitalized banks.
+
+Our paper is organized as follows. In the next section we present a model for distress propagation in a network of financial institutions that are prone both to bank runs and to insolvency. The cascade dynamics in presence of the two channels of distress propagation and under intervention of a government is defined in Section 2.2. In Section 3, we study the complete information setting. We study the partial information setting in Section 4. The
+---PAGE_BREAK---
+
+| Assets | Liabilities |
+| --- | --- |
+| External assets (illiquid) | Short-term debt (net of liquid reserves) $s(i)$ |
+| Interbank assets $\sum_j e(i, j)$ | Interbank liabilities $\sum_j e(j, i)$ |
+| | Capital $c(i)$ |
+
+Table 1: Stylized balance sheet of a bank.
+
+numerical results in Section 5 compare the different information settings and the respective net gains from intervention of a government. Section 6 concludes. The paper finishes with a technical appendix providing the list of notations and the proofs.
+
+# 2 Distress propagation in a financial network
+
+## 2.1 Interlinked balance sheets
+
+At a fixed time, say time 0, a financial system is represented as a network $(\mathcal{N}, e)$, with $\mathcal{N} := \{1, ..., n\}$ the set of financial institutions (banks). For any two financial institutions $i$ and $j$, $e(i, j)$ represents the **exposure** of $i$ to $j$, i.e., the write-down of $i$'s capital if $j$ were to fail. The exposures $e(i, j)$ may encompass several types of liabilities and contracts, including interbank loans or financial derivatives. Taking into account all possible liabilities between the two counterparties $i$ and $j$ would involve a multi-layer analysis of the financial network, see e.g. Bookstaber and Kenett (2016). This is not the aim of the present paper and we focus on one representative layer.
+
+The asset of one party is the liability of another party. If $e(i, j) > 0$, we also say that $j$ has a liability to $i$. The total **interbank assets** of $i$ are given by $\Sigma_j e(i, j)$, while the total **interbank liabilities** of $i$ are given by $\Sigma_j e(j, i)$. We denote by $s(i)$ the total **short term debt** on the balance sheet, net of the bank's liquid reserves. If $s(i)$ is positive, then the bank depends on refinancing this net short term debt, and consequently it is prone to bank runs. In this case we let $f(i)$ be the short term funding capacity of bank $i$, i.e., the amount of short term debt that it can refinance on the market. If $s(i)$ is negative, then the bank does not need to refinance this short term debt, and consequently it is not prone to bank runs. It can only fail due to insolvency.
+
+We assume that the bank's external assets are illiquid. It is for simplicity and without loss of generality that we assume zero external debt, and such debt could easily be added to the liability side of a bank's balance sheet.
+
+We let $c(i)$ be the capital of bank $i$, defined as the total value of assets minus the total value of liabilities. Table 1 represents a snapshot of the balance sheet of bank $i$.
+
+**Definition 1.** A bank *i* is said to **fail** if either one of the following two conditions holds:
+
+(i) **Balance-sheet insolvency:** The total value of its assets is smaller than the total value of its liabilities, i.e. $c(i) < 0$;
+---PAGE_BREAK---
+
+(ii) **Illiquidity:** The net short term debt cannot be refinanced, i.e., $f(i) < s(i)$.
+
+When setting the above illiquidity condition, we assume that the bank cannot liquidate its assets to repay this debt. We stress that $s(i)$ is short term debt net of the liquid assets of the bank. Therefore, having more liquid assets amounts to lower net short term debt, and thus lower chances that the inequality $f(i) < s(i)$ holds.
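+
+Definition 1 can be checked directly from three balance-sheet quantities. A minimal sketch (the function and variable names are illustrative, not the paper's notation):
+
+```python
+def fails(capital, net_short_debt, funding_capacity):
+    """Definition 1: a bank fails if it is balance-sheet insolvent
+    (capital < 0) or illiquid, i.e. it depends on refinancing positive
+    net short term debt s(i) but its funding capacity f(i) falls short
+    (f(i) < s(i)).  A bank with s(i) <= 0 can fail only by insolvency."""
+    insolvent = capital < 0
+    illiquid = net_short_debt > 0 and funding_capacity < net_short_debt
+    return insolvent or illiquid
+```
+
+Note how raising liquid reserves lowers `net_short_debt` and so makes the illiquidity condition harder to trigger, as remarked above.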
+
+**Assumption 1.** We make the following assumptions:
+
+(i) *The bank does not sell the illiquid assets;*
+
+(ii) *The bank cannot claim back its long-term interbank assets;*
+
+(iii) *The bank does not act at the same time as a short term lender and a short term borrower.*
+
+The first assumption states that there is no market for the illiquid asset.³ This is a conservative setting in which banks avoid fire sales in order to avoid further asset declines and, therefore, equity depletion. In times of distress many institutions do not sell at fire-sale prices to avoid the obligation to record a loss, even if they are close to failure, see e.g. Diamond and Rajan (2011).
+
+The second assumption states that long-term debtors do not repay their debt prior to maturity. Indeed, during a crisis, banks have been shown to hoard on liquidity, see e.g. Acharya and Skeie (2011), Gale and Yorulmazer (2011), so it is reasonable to assume that they have no incentive to pay back debt before it is due.
+
+The third assumption is a tractability assumption. If banks acted at the same time as both short term lenders and short term borrowers, then, if unable to refinance the short term debt they could withdraw funding from their own short term borrowers. Our assumption insulates our analysis from modeling the decision of each short term lender whether or not to withdraw funding. The assumption that short term borrowers and lenders are disjoint is in line with e.g. Geanakoplos (2010), Simsek (2012) where short term debt is taken on by firms with limited wealth but who are optimistic about the future prospects of the illiquid asset, while it is provided by firms who are pessimistic about the value of the illiquid asset.
+
+## 2.2 Distress propagation with intervention
+
+We consider that there exists a government that makes equity infusions in the form of cash. The government has a constrained budget $M$ and its objective is to minimize the magnitude of contagion (expressed as total loss) in the financial system.
+
+Our model is of a short term contagion, in which case recovery rates are low because assets cannot be liquidated fast. For simplicity, we set recovery rates to zero. The cascade of failures is to be interpreted as a causal structure of failures. Under complete information on exposures, the cascade is understood as instantaneous, whereas under incomplete information the causal structure will be revealed over time.
+
+³ The Troubled Asset Relief Program, before becoming an equity infusion program, was set up as an asset purchase program to avoid fire-sales of illiquid assets.
+---PAGE_BREAK---
+
+Distress propagation begins with an exogenous set of banks $\mathcal{D}_0$, called **fundamental failures**, that are either insolvent or illiquid following a shock.
+
+$$ \mathcal{D}_0 = \{i \in \mathcal{N} \mid c(i) < 0 \text{ or } f(i) < s(i)\}. \quad (1) $$
+
+We consider that the government injects equity $\xi(i) \in [0, M]$ in each bank that has not failed after the exogenous shock, i.e., $i \in \mathcal{D}_0^c$, where $^c$ denotes set complement. In this section it is helpful to think of $\xi(i) \ge 0$ as a deterministic quantity that is injected instantaneously.⁴
+
+After intervention, bank $i \in \mathcal{D}_0^c$ holds capital $c(i) + \xi(i)$. We now describe the cascading failures triggered by the set $\mathcal{D}_0$ of fundamentally failed banks. Let us first describe the contagion mechanism. There are three sources of contagion in the model. First, a creditor of a failed bank is affected by the direct balance-sheet loss (write-down). The failure of an institution $j$ leads to a loss equal to $e(i, j)$ to its counterparty $i$. If the new capital of $i$ reaches the failure barrier, then $i$ fails. Second, any creditor $i$ of a failed bank will also be affected by the changes in funding conditions: a fraction of its own short-term creditors, concerned by bank $i$'s decreased capital, withdraw funds. Third, all banks depending on short term borrowing, irrespective of being a creditor of the failed bank or not, are affected by changes in the funding conditions, due to the depletion of the risk bearing capital of the financial system.
+
+We capture the bank run component of the contagion mechanism by assuming that the short term funding capacity is a function of the bank's capital and of the number of failed banks: $f(i) = f(c(i), |\mathcal{D}|)$ gives the short term funding capacity of bank $i$ when the set of failed banks is $\mathcal{D} \subseteq \mathcal{N}$. This assumption is consistent with structural models of bank runs such as He and Xiong (2012), Liang et al. (2013), Minca and Wissel (2015), Morris and Shin (2009), where liquidity is a function of capital and bank runs happen when the capital (usually modeled as a stochastic process) reaches a barrier.
+
+We assume that the function $f$ is non-negative, increasing in the first argument (the capital), and decreasing in the second argument (the number of failures in the system). The condition of illiquidity (given a failure set $\mathcal{D}$) in Definition 1 can then be written as
+
+$$ c(i) < f^{-1}(s(i), |\mathcal{D}|), $$
+
+where $f^{-1}$ represents the inverse (in the first argument) of the debt capacity function $f$. The parallel of the insolvency condition and the illiquidity condition is now apparent: insolvency happens when capital drops below zero, while illiquidity happens when capital drops below a barrier $f^{-1}(s(i), |\mathcal{D}|)$ (given a failure set $\mathcal{D}$). The higher the barrier, the more likely the bank is to fail. The larger the failure set, the higher the barrier. Equity infusions, in turn, decrease the failure barrier: a cash infusion lowers the net short term debt $s(i)$, and the function $f^{-1}$ is increasing in its first argument.
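+
+As an illustration of these monotonicity properties, here is a toy specification of the debt capacity function $f$ and the induced barrier $f^{-1}$; the linear form and the parameters `a`, `b` are assumptions made for the sketch, not part of the model:
+
+```python
+def funding_capacity(capital, n_failed, a=10.0, b=0.5):
+    """Toy debt capacity f(c, |D|): increasing in capital, decreasing in
+    the number of failed banks.  The linear form is an illustrative choice."""
+    return max(0.0, a * capital - b * n_failed)
+
+def failure_barrier(net_short_debt, n_failed, a=10.0, b=0.5):
+    """Inverse of f in its first argument: the capital level below which
+    net short term debt s cannot be refinanced.  Zero when s <= 0 (the
+    bank is not run-prone); increasing in the number of failures."""
+    if net_short_debt <= 0:
+        return 0.0
+    return (net_short_debt + b * n_failed) / a
+```
+
+One checks that `funding_capacity(failure_barrier(s, d), d)` returns `s` for `s > 0`, and that the barrier rises as failures accumulate, exactly the comparative statics described above.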
+
+If $s(i) \le 0$, i.e., the liquid reserves are larger than the short term debt, then the bank cannot fail due to illiquidity, and thus $f^{-1}(s(i), |\mathcal{D}|) = 0$, for any failure set $\mathcal{D}$. Otherwise if $s(i) > 0$, the illiquidity barrier is higher than the insolvency barrier, i.e., $f^{-1}(s(i), |\mathcal{D}|) > 0$. We generically call $f^{-1}$ the **failure barrier**, which is either an insolvency barrier if $s(i) \le 0$
+
+⁴In contrast to the complete information case, in the partial information setting of Section 4, $\xi(i)$ will not represent an instantaneous investment, but the cumulative injection over an entire time horizon in which information is revealed.
+---PAGE_BREAK---
+
+or an illiquidity barrier if $s(i) > 0$, in which case the failure barrier increases with the number of failures in the system.
+
+Because the bank fails when its capital reaches the failure barrier, its loss absorbing capacity is $c - f^{-1}$. This quantity is related to the well-known distance to default of Merton (1974), where it determines the probability of bankruptcy. Since assets here are deterministic, we can think of the remaining loss absorbing capacity as a threshold to contagion.
+
+For a bank $i \in \mathcal{N}$ and a set of failed banks $\mathcal{D} \subseteq \mathcal{N}$, we define the **remaining capital** of bank $i$ by
+
+$$ \theta(i, \mathcal{D}) := c(i) - \sum_{j \in \mathcal{D}} e(i, j). \quad (2) $$
+
+We are now able to specify the first round of contagious failures:
+
+$$ \mathcal{D}_1(\xi) = \left\{ i \in \mathcal{N} \mid c(i) + \xi(i) - \sum_{j \in \mathcal{D}_0} e(i, j) < f^{-1}(s(i), |\mathcal{D}_0|) \right\}, \quad (3) $$
+
+which represents the set of banks whose failure is triggered by the fundamentally failed banks, either due to direct balance sheet exposures or because of the changes in funding conditions. Their remaining capital reaches the failure barrier following the fundamental failures.
+
+We have the following definition of cascading failures:
+
+**Definition 2 (Cascading failures).** Starting from the set of fundamental failures $\mathcal{D}_0$, define $\mathcal{D}_k(\xi)$ for $k = 1, \dots, n-1$ as the set of institutions whose remaining capital reaches the failure barrier following the failures in $\mathcal{D}_{k-1}(\xi)$:
+
+$$ \mathcal{D}_k(\xi) = \left\{ i \in \mathcal{N} \mid c(i) + \xi(i) - \sum_{j \in \mathcal{D}_{k-1}(\xi)} e(i,j) < f^{-1}(s(i), |\mathcal{D}_{k-1}(\xi)|) \right\}, \quad (4) $$
+
+We note that this is a joint cascade of insolvencies and bank runs. We expect that under realistic scenarios the cascade will not pick up in the absence of the bank run component. This is in line with Glasserman and Young (2015), who find that cascades of insolvencies do not pick up in the absence of an additional channel of contagion, but are likely to do so when one is present. Other works that analyze multiple channels of contagion, generally cross-exposures in a network context combined with fire sale externalities, include Aldasoro et al. (2016), Battiston et al. (2016) and Cifuentes et al. (2005); see also Laux and Leuz (2010). The focus of our work is quite different, as we study the control of this cascade.
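The cascade of Definition 2 can be sketched as a fixed-point iteration. The following is an illustrative transcription, not the paper's implementation; the function names are our assumptions, and `barrier` stands in for the failure barrier $f^{-1}(s(i), |\mathcal{D}|)$:

```python
def cascade(c, s, e, xi, D0, barrier):
    """Iterate the failure sets D_k of Definition 2 until they stabilize.

    c, s, xi -- dicts: bank -> capital, short-term shortfall, equity infusion
    e        -- dict: (i, j) -> exposure of bank i to bank j
    D0       -- set of fundamental failures
    barrier  -- stand-in for f^{-1}(s(i), |D|), the failure barrier
    """
    D = set(D0)
    while True:
        nxt = set(D0)
        for i in c:
            if i in D0:
                continue
            # remaining capital after write-downs on exposures to failed banks
            remaining = c[i] + xi.get(i, 0) - sum(e.get((i, j), 0) for j in D)
            if remaining < barrier(s[i], len(D)):
                nxt.add(i)
        if nxt == D:  # monotone, so a fixed point is reached in at most n-1 rounds
            return D
        D = nxt
```

For a three-bank chain with capitals equal to 3, exposures $e(1,0)=5$ and $e(2,1)=2$, and no run-prone banks, the cascade without intervention returns the failure set `{0, 1}`.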
+
+**Example 1.** Suppose that the initial failure set is comprised of two banks and one bank has an exposure of 2 (units of numéraire) to these banks. If its capital is initially equal to 3, then after the write-down it becomes 1. If the bank does not depend on short term funding, then the bank does not fail. However, if the bank depends on short term funding, there may be a bank run following the write-downs, in which case the bank fails and there is contagion.
+---PAGE_BREAK---
+
+The previous example shows that a cascade can pick up if some of the banks have small initial distances to failure, which is equivalent to them having low contagion thresholds. In particular, inhomogeneous networks may exhibit phase transitions when sufficiently many nodes have low contagion thresholds, see Amini and Minca (2016). A cascade as defined above may pick up in particular when the exposures aggregate several sources (loans, derivatives). More importantly, the fact that thresholds for contagion decrease with the number of failures in the system is a very powerful source of indirect contagion on top of the direct cascade.
+
+**Final set of failures.** It is easy to see that the cascade is monotonic, i.e. $\mathcal{D}_{k-1}(\xi) \subseteq \mathcal{D}_k(\xi)$. Moreover, if the size of the network is $n$, the cascade finishes in at most $n-1$ rounds. The final set of failures is given by
+
+$$\mathcal{D}(\xi) := \mathcal{D}_{n-1}(\xi).$$
+
+The final set of failures in the absence of intervention is $\mathcal{D}(0)$, and we clearly have $\mathcal{D}(\xi) \subseteq \mathcal{D}(0)$ for all $\xi \ge 0$.
+
+If a bank $i$ belongs to the final set of failures, i.e. $i \in \mathcal{D}(\xi)$, we identify the following losses:
+
+(i) Loss absorbed by shareholders: $c(i) + \xi(i)$;
+
+(ii) Loss absorbed by counterparties: $\sum_{j \in \mathcal{D}(\xi)^c} e(j, i)$.
+
+Since the loss is absorbed by the capital cushion after infusions, it is understood that the loss includes the cost borne by the government.
+
+**Definition 3.** We define the loss in the system
+
+$$L(\xi) := \sum_{i \in \mathcal{D}(\xi)} c(i) + \sum_{i \in \mathcal{D}(\xi)} \xi(i) + \sum_{i \in \mathcal{D}(\xi)} \sum_{j \in \mathcal{D}(\xi)^c} e(j, i). \quad (5)$$
+
+It is reasonable to assume that the government recapitalizes only banks that have not failed after the initial shock, i.e., $\xi(i) = 0$ for all $i \in \mathcal{D}_0$. In the next section we will see that if the government has complete information on balance sheets, then its optimization program will yield $\xi(i) = 0$ for all $i \in \mathcal{D}(\xi)$. It is important to note that the loss expressed above includes any loss that is absorbed by the government cushion of capital injected in the banks.
+
+In the sequel, we refer to time 0 as the time *after the fundamental failures*, and we consider that the controller knows the set of fundamental failures at time 0.
+
+We will analyze two specifications for the equity infusion problem. In the complete information case the controller minimizes the loss in the system, when the network structure is known (this is a deterministic optimization). In the partial information case, the controller minimizes the expected loss, when the network structure is only partially observed (this is a stochastic control problem). The expected value in the latter case is over the linkages in the network, revealed over time.
+---PAGE_BREAK---
+
+Figure 1: Left: Causal structure of failures without intervention (arrows represent the direction of causality). Right: Structure of failures after “insuring” a set of nodes. Chains of causal failures that pass through insured nodes are removed, and banks are indirectly saved.
+
+# 3 Optimal intervention with complete network information
+
+We consider a deterministic and static setting for the equity infusion problem in which the
+complete information of the interbank structure is available to the government at time 0.
+In this case, the cascade (under any equity infusion) can be seen as instantaneous and the
+idea is to transform the optimization over the equity infusions into an optimization over
+sets of banks that are insured against contagion by equity infusions. The government has
+the following optimization problem.
+
+**Problem 1** (Optimal equity infusion). We define the problem of optimal equity infusion with maximum budget *M*
+
+$$
+\begin{array}{ll}
+\underset{\xi}{\text{Minimize}} & L(\xi) \\
+\text{s.t.} & \displaystyle\sum_{i=1}^{n} \xi(i) \leq M.
+\end{array}
+\tag{6}
+$$
+
+The controller will seek a set of banks that maximize stability in the network, in the
+sense that insuring these banks removes most causal chains of failures. Figure 1 illustrates
+the idea. On the left, a set of fundamental failures leads by causal failure chains to a
+final set of failures. On the right, some banks are "insured" and the final set of failures is
+smaller. Not only are the "insured" banks removed from the final set of failures, but so are
+some indirectly saved banks that would have failed due to causal chains. The "insured"
+banks act as buffers and cut the causal failure chains that pass through them.
+
+The equity infusion problem may have multiple solutions. We make the following assumption.
+
+**Assumption 2.** If there are multiple solutions to the above problem, the government prefers the solution(s) with minimal total equity infusion.
+
+The rationale of this assumption is apparent in the following example.
+---PAGE_BREAK---
+
+**Example 2.** We assume a causal chain of failures: the initial failure of 0 leads to the failure of 1, which leads to the failure of 2. The solution of minimal cost that ensures the survival of both 1 and 2 is to make an equity infusion in 1 that covers the shortfall of 1's capital relative to its exposure to 0. This infusion automatically saves 2. Any solution that makes higher equity infusions achieves the same gain, namely the capital of banks 1 and 2. This gain is the maximum that can be achieved in this network. This example demonstrates why, beyond a certain point, equity infusions may not be efficient. All banks survive here following the intervention on a strict subset of banks. We conclude that, even under an infinite budget, it is in general not optimal to make equity infusions in all banks.
+
+A more subtle point is that it may also be optimal to let a bank fail, even if there is
+infinite budget. The idea is that it may be better to let a bank fail (and have the loss
+bounded by the capital of this bank) than to save the bank and absorb (with taxpayer money)
+the exposures of this bank to the failed banks.
+
+**Example 3.** Assume that bank 0 is the initial failure, with $c(0) = 3$. We let $c(1) = 3$ be the capital of bank 1 and $e(1,0) = 5$ the exposure of bank 1 to the failed bank. Further down the chain, bank 2 has a capital $c(2) = 3$ and $e(2,1) = 2$, the exposure of bank 2 to bank 1. In absence of intervention, the failure of 0 leads to the failure of 1, while bank 2 survives. From (5), the total loss in the system is $c(0) + c(1) + e(2,1) = 8$, i.e. the capital of the failed banks plus the exposure of the surviving bank to the failed banks. To save bank 1, the government needs to inject sufficient funds so that it survives the write-down of $e(1,0)$. It thus needs to inject $(e(1,0) - c(1))^{+} = 2$. The loss in the system is now $c(0) + e(1,0) = 8$, i.e. the capital of the failed bank plus the exposure of surviving banks to the failed banks. The loss is the same as in the case without intervention, but at a cost for the government. Because we give preference to the minimal equity infusion, the optimal solution is to let 1 fail and 2 absorb the write-down.
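The numbers in Example 3 can be checked mechanically; the following is a plain transcription of the arithmetic above:

```python
# Example 3: bank 0 fails fundamentally; 1 is exposed to 0, and 2 to 1.
c0, c1, c2 = 3, 3, 3          # capitals
e10, e21 = 5, 2               # e(1,0) and e(2,1)

# Without intervention: 0 and 1 fail, 2 survives and absorbs e(2,1).
loss_without = c0 + c1 + e21
assert loss_without == 8

# Saving bank 1 requires injecting (e(1,0) - c(1))^+.
injection = max(e10 - c1, 0)
assert injection == 2

# With the intervention: only 0 fails; survivor 1 absorbs e(1,0).
loss_with = c0 + e10
assert loss_with == 8          # same loss, at a cost of 2 to the government
```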
+
+If there are multiple solutions of minimal equity infusion, we may add criteria to distinguish
+between the solutions. Such criteria may include in particular the quality and riskiness
+of the external assets. Preference towards saving banks with less risky assets can reduce
+moral hazard. However, we expect multiple solutions only in presence of rather unnatural
+symmetries in the network, for example in presence of banks with an indistinguishable path
+of exposures to the initial set of failures. As we will see, in the partial information setting
+with linkages revealed over time the problem will have a unique solution because infusions
+are made sequentially.
+
+We now show how to choose the set of minimal cost that reduces the contagion loss the
+most. We start with the following lemma, which states that either a node is left to fail, in
+which case no equity is injected in it, or, if a node is saved, then the equity infusion is the
+minimal amount needed to keep the node above the failure barrier $f^{-1}$.
+
+**Lemma 1.** Suppose that $\tilde{\xi}$ is a solution to Problem 1. Then the following properties hold
+
+(i) If node $i$ does not fail during the cascade, i.e., $i \in \mathcal{D}(\tilde{\xi})^c$, we have
+
+$$
+\tilde{\xi}(i) = \left( \sum_{j \in \mathcal{D}(\tilde{\xi})} e(i,j) - c(i) + f^{-1}(s(i), |\mathcal{D}(\tilde{\xi})|) \right)^{+}. \quad (7)
+$$
+---PAGE_BREAK---
+
+(ii) If bank $i$ fails under infusion $\tilde{\xi}$, i.e., $i \in \mathcal{D}(\tilde{\xi})$, then $\tilde{\xi}(i) = 0$.
+
+The proof is immediate, by considering the basic tradeoff in the loss function in (5): a higher equity infusion $\xi$ may decrease the set of failures $\mathcal{D}(\tilde{\xi})$, but increases the exposure of the government. Therefore, for fixed $\mathcal{D}(\tilde{\xi})$, a bank $i \in \mathcal{D}(\tilde{\xi})^c$ receives the minimum infusion that guarantees that it does not fail, i.e., the infusion “insures” it. If a bank $i \in \mathcal{D}(\tilde{\xi})$ fails under infusion $\tilde{\xi}$, then $\tilde{\xi}(i) = 0$ (otherwise $\tilde{\xi}$ would not be an optimum: by setting $\tilde{\xi}(i) = 0$, the exposure of the government to $i$'s failure can be decreased to zero, while $\mathcal{D}(\tilde{\xi})$ would be unchanged).
+
+We now further characterize those infusions $\tilde{\xi}$ (not necessarily solutions of Problem 1) which satisfy conditions (i) and (ii) of Lemma 1. Let $\mathcal{V} \subseteq \mathcal{N}$ and consider the following algorithm, which gives the minimal infusions that insure the set $\mathcal{V}$ when only nodes in $\mathcal{V}$ may receive infusions.
+
+**Algorithm 1 (Insurance of a set of banks).** (i) Let $\mathcal{D}^\mathcal{V}$ be the final set of failures, as in Definition 2, when all banks in $\mathcal{V}$ receive infinite equity infusions and all other banks receive none;
+
+$$
+\text{(ii) Let now } \xi^\mathcal{V}(i) := \left( \sum_{j \in \mathcal{D}^\mathcal{V}} e(i,j) - c(i) + f^{-1}(s(i), |\mathcal{D}^\mathcal{V}|) \right)^+.
+$$
+
+In step one of the algorithm we compute the failure set when banks in $\mathcal{V}$ receive infinite equity infusions. The causal chains of failure are modified, since insured nodes act as stabilizers, see Figure 1. The fictitious infinite equity infusions serve only to determine the modified final set of failures $\mathcal{D}^\mathcal{V}$ after we remove the causal chains of failures going through insured banks. The actual amount of equity infusions needed to insure $\mathcal{V}$ is of course not infinite. This amount is computed in step two of the algorithm, such that the capital of insured banks survives the exposures to the modified final set of failures.
+
+We will thus use the failure set $\mathcal{D}^{\mathcal{V}}$ to compute the actual equity infusions in $\mathcal{V}$. The critical observation is that $\mathcal{D}^{\mathcal{V}}$ is equal to $\mathcal{D}(\xi^{\mathcal{V}})$, i.e., the cascade with infinite equity infusions in $\mathcal{V}$ is the same as the cascade with equity infusions $\xi^{\mathcal{V}}$. We also have that $\xi^{\mathcal{V}}$ is the minimal infusion that insures the set $\mathcal{V}$ when only nodes in $\mathcal{V}$ receive infusions.
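Algorithm 1 admits a direct transcription in the same toy conventions as the cascade above. This is an illustrative sketch with assumed names; `barrier` stands in for $f^{-1}$ and the fictitious infinite infusions of step (i) are represented by `math.inf`:

```python
import math

def insure_set(c, s, e, D0, V, barrier):
    """Sketch of Algorithm 1: minimal infusions insuring the banks in V.

    Step (i): final failure set D^V when banks in V get 'infinite' infusions.
    Step (ii): xi^V(i) = (sum_{j in D^V} e(i,j) - c(i) + barrier)^+.
    """
    inf_xi = {i: math.inf for i in V}
    D = set(D0)
    while True:  # the cascade of Definition 2 under the fictitious infusions
        nxt = set(D0)
        for i in c:
            if i in D0:
                continue
            rem = c[i] + inf_xi.get(i, 0) - sum(e.get((i, j), 0) for j in D)
            if rem < barrier(s[i], len(D)):
                nxt.add(i)
        if nxt == D:
            break
        D = nxt
    xi = {i: max(sum(e.get((i, j), 0) for j in D) - c[i]
                 + barrier(s[i], len(D)), 0) for i in V}
    return D, xi
```

Insuring $\mathcal{V} = \{1\}$ in the three-bank chain of Example 3 yields $\mathcal{D}^{\mathcal{V}} = \{0\}$ and the minimal infusion $\xi^{\mathcal{V}}(1) = 2$.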
+
+We now can recast the optimal equity infusion problem as a combinatorial optimization problem.
+
+**Proposition 1.** *Problem 1 can be stated as the following combinatorial optimization problem, over the set $\mathcal{V} \subseteq \mathcal{N} \setminus \mathcal{D}_0$ of banks to receive equity infusions:*
+
+$$
+\begin{align*}
+& \underset{\mathcal{V}}{\text{Minimize}} \quad \sum_{i \in \mathcal{D}(\xi^{\mathcal{V}})} c(i) + \sum_{i \in \mathcal{D}(\xi^{\mathcal{V}})^c} \sum_{j \in \mathcal{D}(\xi^{\mathcal{V}})} e(i, j) \\
+& \text{subject to} \quad \sum_{i \in \mathcal{V}} \xi^{\mathcal{V}}(i) \leq M.
+\end{align*}
+$$
+
+The proof of Proposition 1 is given in the appendix.
+
+Since the set of banks is finite, the combinatorial optimization problem has a solution.
+Finding the optimal solution requires, however, exploring all possible sets $\mathcal{V}$ that satisfy the
+budget constraint.
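This exhaustive search might be sketched as follows, in the same toy conventions as before and with the tie-break of Assumption 2; being exponential in $n$, it is only feasible for very small networks, and all names are our assumptions:

```python
import math
from itertools import chain, combinations

def best_insured_set(c, s, e, D0, M, barrier):
    """Exhaustive version of the combinatorial problem in Proposition 1."""
    def final_failures(xi):
        D = set(D0)
        while True:
            nxt = set(D0) | {i for i in c if i not in D0
                             and c[i] + xi.get(i, 0)
                                 - sum(e.get((i, j), 0) for j in D)
                                 < barrier(s[i], len(D))}
            if nxt == D:
                return D
            D = nxt

    candidates = [i for i in c if i not in D0]
    best = None
    for V in chain.from_iterable(combinations(candidates, r)
                                 for r in range(len(candidates) + 1)):
        DV = final_failures({i: math.inf for i in V})
        xi = {i: max(sum(e.get((i, j), 0) for j in DV) - c[i]
                     + barrier(s[i], len(DV)), 0) for i in V}
        if sum(xi.values()) > M:
            continue  # budget constraint violated
        loss = (sum(c[i] for i in DV)
                + sum(e.get((i, j), 0) for i in c if i not in DV for j in DV))
        key = (loss, sum(xi.values()))  # Assumption 2: prefer minimal infusion
        if best is None or key < best[0]:
            best = (key, set(V), xi)
    return best
```

On the chain of Example 3 with an ample budget, the search confirms that the empty set is optimal: the loss is 8 either way, and letting bank 1 fail costs the government nothing.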
+---PAGE_BREAK---
+
+Figure 2
+
+Concerning the uniqueness of the solution, we can construct an example where there are multiple solutions to the optimization problem. We expect that such examples feature symmetries in the network structure. In the next example, banks 1 and 2 have an indistinguishable path to the initial failure, node 0.
+
+**Example 4.** Consider the network in Figure 2. Node 0 is the initial failure. It causes the failure of nodes 1 and 2, because each of these nodes has an exposure of 1 to node 0, equal to its capital. These two nodes, in turn, cause the failure of node 3. Finally, node 3 causes the failure of node 4. One can note that there are two solutions with minimal cost: one can inject 1 in node 1 and save all nodes except for 2, or symmetrically one can inject 1 in node 2 and save all nodes except for 1. The cost in both cases is equal to 1. According to the criteria we have considered so far, nodes 1 and 2 are exchangeable from the point of view of the controller. In practice, this poses a fairness problem and additional criteria need to be added. For example, the government can have a preference for the type of assets of the banks. In the case of multiple solutions, the government could inject equity in banks with the preferred type of assets. Additional criteria on the quality and riskiness of the assets of banks that have access to equity infusions will also mitigate the moral hazard problem associated with equity infusions.
+
+Full information on the network structure allows the government to select the solution of minimal cost. Because the objective is to minimize losses, the banks that receive equity infusions are those that would otherwise lead to the most failures. This in turn creates an obvious moral hazard problem. Banks would seek to position themselves high in the causal failure chains, by borrowing from several large counterparties, particularly those with small distances to failure, e.g., those who fund themselves through short term debt. This creates fragile networks. We expect that the existence of multiple solutions to the equity infusion problem would actually lead to less moral hazard, since there could be ambiguity about the preferred solution.
+
+Data collection on exposures has thus important tradeoffs. In time of distress, it allows the government to achieve the minimal loss possible given the network. However, because of the structure of the solution, data collection could turn out to be detrimental to financial
+---PAGE_BREAK---
+
+stability, because banks may create fragile network structures. It is thus imperative that data collection and the availability of equity infusion programs be accompanied by capital requirements and charges that disincentivize banks from creating network structures with potentially large causal default chains. In the next section, we consider intervention in the presence of partial information. Partial information makes the equity infusions less predictable for the banks and thus mitigates moral hazard.
+
+# 4 Optimal intervention with partial network information
+
+In this section, we consider that the complete interbank network is unknown to the controller from the beginning, but it is revealed over time. At each time $t$, the controller can decide (subject to budget constraints) to inject equity in any bank so as to minimize the total expected loss in the system until the cascade ends.
+
+Contagion begins with the set of fundamental failures $\mathcal{D}_0$, which are assumed to be recorded. Contrary to the full information case, where all write-downs are instantaneous, there is now a time lag between a bank's failure and the time when an exposed counterparty records its write-down.
+
+The government observes all failures and the recorded exposures to failed banks. The providers of short term debt learn the same information as the government, but with a delay that would allow intervention before a bank run. To motivate this assumption, consider that bank $i$ is about to reach the illiquidity barrier $f^{-1} = 1$ after a write-down. If the market observed the write-down before the government, then a run ensues and the bank fails. It cannot then be saved by intervention, even if the government increases its capital to 2 (above the failure barrier). However, if the government observes the write-down before the market, then it can make the capital infusion and the bank run does not occur, because the providers of short term debt integrate the information that the government has already made the equity infusion and that the new capital is 2, above the failure barrier.
+
+## 4.1 A random network model
+
+We assume now a regular network, in which all nodes have the same number of linkages. Heterogeneity in degrees can be introduced, and all theoretical results in this section would still hold under mild conditions on the degrees. At the same time, heterogeneity in degrees would increase the dimension of the Markov processes. The problem would remain tractable for more realistic two-tiered network structures, such as core-periphery networks. We illustrate the theory on the regular case, and defer the discussion of these extensions to the end of Section 5. We will retain heterogeneity in funding stability, and thus the model has the important feature that the failure barrier is uncertain: zero for banks that do not depend on short term funding and thus can only fail due to insolvency, and equal to one for banks that do depend on short term funding. As we will see, this heterogeneity will drive the results.
+
+We endow each node with $\lambda$ incoming half-links and $\lambda$ outgoing half-links. Each link represents one unit of exposure (in terms of the numéraire). The network results from the uniform matching of all incoming half-links with all outgoing half-links. We allow
+---PAGE_BREAK---
+
+for parallel links, with the meaning that the exposures between the same pair of nodes add up.
+
+We further simplify the problem by setting the failure barrier $f^{-1}$ constant and equal to 1 for those banks prone to bank runs. We let $\alpha$ be the fraction of banks that are prone to bank runs. The other banks (representing a fraction $1 - \alpha$) have a failure barrier equal to 0. We thus have
+
+$$f^{-1} = \begin{cases} 1 & \text{with probability } \alpha \\ 0 & \text{with probability } 1-\alpha. \end{cases}$$
+
+**Maximum capital.** From now on, we assume that the initial capital is constant across banks, $c(i) = c$ for all $i \in \mathcal{N}$, for $c \le \lambda$ (the capital is smaller than the interbank assets). Throughout contagion, the remaining capital decreases, so $c$ also represents the maximum remaining capital.
+
+We consider that, after a recorded failure $i$, any node $j$ exposed to $i$ will record the write-down related to its exposure after a random time span, independent of everything else and distributed as an exponential random variable Exp(1) with mean 1 (time unit).⁵ Contagion stops at a random time $T$, which is finite almost surely (a.s.). Indeed, there will be at most $m$ (the total number of edges) recorded write-downs due to exposures to failed banks, and therefore $T$ is smaller than a sum of $m$ i.i.d. Exp(1) random variables.
+
+We denote by $T_k$, for $k \le m$, the arrival times of the superposed recorded write-downs (we take by convention $T_0 = 0$). We denote $(i_k, j_k)$ the pair of banks whose exposure is recorded at time $T_k$, with the usual direction that $j_k$ is exposed to $i_k$.
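This arrival structure can be simulated directly. Below is a simplified sketch (all names are our assumptions) in which every exposure to an already-failed bank receives an independent Exp(1) recording delay measured from time 0:

```python
import random

def writedown_arrivals(edges_to_failed, seed=0):
    """Sample the recording times of write-downs on exposures to failed banks.

    edges_to_failed -- list of pairs (i, j): bank j exposed to failed bank i
    Returns a list [(T_k, i_k, j_k)] sorted by arrival time T_1 < T_2 < ...
    """
    rng = random.Random(seed)
    events = [(rng.expovariate(1.0), i, j) for (i, j) in edges_to_failed]
    return sorted(events)
```

In a full simulation the delay of each edge would be measured from the failure time of bank $i$ rather than from time 0; the sketch only illustrates the superposition of Exp(1) delays into the ordered event times $T_k$.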
+
+## 4.2 Optimal intervention
+
+Unlike in the complete information case where all intervention happened at once, here we assume that the cascading failures and the intervention occur over time.
+
+We let $\xi_t(i)$ denote the cumulative equity infusion in bank $i$ up to time $t$, with $\xi_T$ the cumulative equity infusion up to the end of the cascade. We let $\mathcal{D}(\xi)$ be the final failure set under the equity infusion $\xi_t, t \le T$. The optimal equity infusion Problem 1 now becomes
+
+**Problem 2** (Optimal equity infusion under partial information).
+
+$$
+\begin{align}
+\underset{\xi}{\text{Minimize}} \quad & \mathbb{E}\left[\sum_{i \in \mathcal{D}(\xi)} c(i) + \sum_{i \in \mathcal{D}(\xi)} \xi_T(i) + \sum_{i \in \mathcal{D}(\xi)^c} \sum_{j \in \mathcal{D}(\xi)} e(i, j)\right], \tag{8} \\
+\text{subject to} \quad & \sum_{i=1}^{n} \xi_T(i) \le M. \tag{9}
+\end{align}
+$$
+
+Note that, contrary to the complete information setting, the second term is not necessarily
+zero, since it is possible that the government injects equity in a bank that afterwards
+could record additional write-downs and fail. In the numerical results, we consider the
+difference between the loss without intervention and the loss under the optimal intervention,
+which is our criterion for the assessment of the equity infusion program. The “wasted
+
+⁵The exponential distribution of the random time span is a tractability assumption.
+---PAGE_BREAK---
+
+government money” term $\mathbb{E}(\sum_{i \in D(\xi)} \xi_T(i))$ is part of our criterion, allowing us to capture the tradeoff associated with intervention: potentially wasted money versus less capital depletion.
+
+The solution to this problem can be given in terms of a Markov Decision Process for the embedded Markov chain, i.e., it is sufficient to determine the optimal intervention only at the times when the write-downs are recorded. The Markov Decision Process is based on partitioning the set of banks according to their remaining capital.
+
+The next proposition states that intervention only occurs at the random times $T_k$ when write-downs are recorded. A bank $i$ may receive infusions only when it records an exposure to a failed bank and when the government decides to reinforce the capital of the exposed bank.
+
+**Proposition 2.** *The optimal solution to Problem 2 is, for all $i \in \mathcal{N}$,*
+
+$$ \tilde{\xi}_t(i) = \sum_{k, T_k \le t, i_k=i} u_k, $$
+
+where $u_k \in \{0,1\}$ represents the equity infusion at time $T_k$ in the exposed bank $i_k$. We have the budget constraint $\sum_{k : T_k \le T} u_k \le M$.
+
+The proof of this proposition is given as part of the proof of the technical Lemma 2 in the Appendix.
+
+We can describe contagion in our model using a Markov Decision Process. A bank fails when either its distance to the illiquidity barrier reaches zero (if the bank uses unstable sources of funding) or its distance to insolvency reaches zero (if it does not).
+
+Recall that contagion starts from fundamental failures and that the counterparties of failed nodes record their exposures after random exponential times. Thanks to the uniform matching, a recorded exposure belongs to a given counterparty of a failed node with probability proportional to the number of unrecorded exposures of that counterparty. A node with an exposure to a recorded failure will have its remaining capital decrease.
+
+Finally, remark that, from the point of view of the government, nodes that have the same connectivity and remaining capital represent the same type. Therefore, we only need to keep track of their number during the cascade, rather than their individual states, which is a significant reduction in the dimension of the Markov process.
+
+We now make this intuition mathematically precise. At time $t \le T$, the state variables are
+
+* $X_t(\theta)$, $\theta = 1, \dots, c$: the number of banks with remaining capital equal to $\theta$ at time $t$,
+
+* $N_t(\theta)$, $\theta = 1, \dots, c$: the number of interventions up to time $t$ on banks with remaining capital $\theta$ (at the time of intervention).
+
+Thanks to Proposition 2, the control variables are
+
+$$ u_k := (u_k(\theta))_{\theta=1,\dots,c}, $$
+---PAGE_BREAK---
+
+giving the 0/1 infusion decision at the $k$-th event (at time $T_k$) for banks with remaining capital $\theta$.
+
+We let $X_t := (X_t(\theta))_{\theta=1,...,c}$ and $N_t := (N_t(\theta))_{\theta=1,...,c}$. By abuse of notation, we will from now on use the embedded controlled Markov chain (indexed not by time but by the recorded exposures)
+
+$$X_k := X_{T_k}, \quad \text{and} \quad N_k := N_{T_k}.$$
+
+The transition probabilities of this embedded Markov chain are given in Appendix A.2.2.
+
+Note that, by Proposition 2, the optimal strategy consists in injecting equity at the random times $T_k$, i.e., as soon as exposures are recorded, and, moreover, the amounts of these interventions are either 0 or 1.
+
+By virtue of Theorem 1 below, the optimal intervention problem in continuous time (Problem 2) reduces to the following optimization problem over the discrete steps $k = 0, 1, \dots, m$.
+
+**Problem 3 (Optimization program in discrete time).**
+
+$$
+\begin{array}{@{}l@{}}
+V_k(x, \ell) := \min_{u_k(\theta) \in \{0,1\}} L_k(x, \ell) \\
+\qquad \text{subject to} \quad \sum_{\theta=1}^{c} N_m(\theta) \le M,
+\end{array}
+$$
+
+with
+
+$$L_k(x, \ell) := \mathbb{E} \left[ c \left( n - \sum_{\theta=1}^{c} X_m(\theta) \right) + \sum_{\theta=1}^{c-1} (c-\theta)X_m(\theta) + \sum_{\theta=1}^{c-1} N_m(\theta) \mid X_k = x, N_k = \ell \right].$$
+
+The quantity $L_k(x, \ell)$ represents the expected loss at the end of the cascade, assuming that at step $k$ the interventions so far are given by $\ell$ and the system is in state $x$, i.e., $X_k(\theta) = x(\theta)$ and $N_k(\theta) = \ell(\theta)$ for all $\theta = 1, \dots, c$. Just as in Equation (5), the loss consists of the capital of the failed banks, $c(n - \sum_{\theta=1}^c X_m(\theta))$, as well as the loss absorbed by surviving banks (including the loss absorbed by the new equity), $\sum_{\theta=1}^{c-1}(c-\theta)X_m(\theta) + \sum_{\theta=1}^{c-1} N_m(\theta)$. For example, each bank at distance $c-1$ at the end of the cascade has absorbed one unit of loss from its own capital.
+
+A solution to Problem 3 exists, since the state space is discrete and finite, as is the time horizon. Uniqueness follows from the uniqueness of the solution to the associated dynamic programming equation, given in Section A.2.4. Formally, we have:
+
+**Proposition 3.** There exists a unique solution to the stochastic control Problem 3, given by a Markovian optimal control $\tilde{u}$:
+
+$$
+\begin{align*}
+V_k(x, \ell) &:= \min_{u_k(\theta) \in \{0,1\}} L_k(x, \ell), \\
+&= L_k^{(\tilde{u})}(x, \ell).
+\end{align*}
+$$
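The dynamic programming principle behind Proposition 3 is the standard finite-horizon backward induction. The following generic sketch illustrates it on an arbitrary finite state space; the transition kernel of Appendix A.2.2 is not reproduced here, and `transition`, like all names below, is an assumption for illustration:

```python
def backward_induction(horizon, states, actions, transition, terminal_cost):
    """Finite-horizon backward induction: V_horizon(x) = terminal cost and
    V_k(x) = min_u sum_{x'} P(x' | x, u) * V_{k+1}(x').

    transition(x, u) -> list of (prob, next_state); terminal_cost(x) -> float.
    Returns the value functions V[k][x] and a Markovian policy pi[k][x].
    """
    V = [dict() for _ in range(horizon + 1)]
    pi = [dict() for _ in range(horizon)]
    for x in states:
        V[horizon][x] = terminal_cost(x)
    for k in range(horizon - 1, -1, -1):
        for x in states:
            best_u, best_v = None, float("inf")
            for u in actions:
                v = sum(p * V[k + 1][y] for p, y in transition(x, u))
                if v < best_v:
                    best_u, best_v = u, v
            V[k][x], pi[k][x] = best_v, best_u
    return V, pi
```

In Problem 3 the state $x$ would be the vector $(X_k(\theta), N_k(\theta))_\theta$, the horizon $m$ the number of recorded exposures, and the terminal cost the loss $L_m$; here the sketch is kept generic.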
+
+We now state the main technical result of this section, which links the Markovian optimal control $\tilde{u}$ and the equity infusion Problem 2.
+---PAGE_BREAK---
+
+**Theorem 1.** The optimal solution to Problem 2 is given by $V_0(X_0, 0)$ and the optimal strategy is determined by the Markovian optimal control $(\tilde{u})$ as
+
+$$ \tilde{\xi}_t(i) = \sum_{k, T_k \le t, i_k=i} \tilde{u}_k(\theta_k(i)), $$
+
+for $i \in \mathcal{N}$, where $\theta_k(i)$ is the remaining capital of node i at step k.
+
+The proof is given in Appendix A.2.3. This result will allow us to study the optimal solution numerically, using dynamic programming.
+
+## 4.3 The value of adapted intervention in the partial information setting
+
+We end this section with a technical result that allows us to compute the optimal intervention in the case when the entire budget is used at time 0 to increase the capital of certain banks. We denote by $\hat{V}_0$ the value function when we restrict intervention to be made at time 0. As before, the objective is to minimize the expected loss during the cascade. The **value of adapted intervention** is defined as the difference $V_0(X_0, 0) - \hat{V}_0$.
+
+It is easy to see that the value $\hat{V}_0$ results from an optimization problem in dimension $c$, over the increases in the initial distances to insolvency, under budget constraints. Let us denote by $\Delta X$ the increase in the initial number of banks at the different distances to insolvency. Remark that the risk bearing capacity of the system is given by the total capitalization of the financial system: $\sum_{\theta=1}^{c} X(\theta)\theta$.
+
+The capital infusion will increase the risk bearing capacity of the system. After intervention, individual banks will have increased their capital; therefore, the increases in the initial numbers of banks at different distances to insolvency satisfy $\sum_{k \ge 1} \Delta X(k) = 0$, since the number of fundamentally failed banks does not change. Moreover, banks can only increase their distances to failure after intervention: for all $2 \le \theta \le c$, $\sum_{k \ge \theta} \Delta X(k) \ge 0$.
+
+Thus, we have the following technical result, which allows us to study numerically the value of adapted intervention.
+
+**Proposition 4.** *The solution of the optimization Problem 2 when we restrict intervention to be made at time 0, i.e., $\xi_T(i) = \xi_0(i)$, for all i, is given by*
+
+$$
+\begin{array}{rl}
+\hat{V}_0 = \min_{\Delta X} & \mathbb{E}\left(L \mid X_0 + \Delta X\right) \\
+\text{subject to:} & \sum_{\theta=1}^{c} \Delta X(\theta)\,\theta \le M, \\
+ & \sum_{k \ge 1} \Delta X(k) = 0, \quad \text{and} \quad \sum_{k \ge \theta} \Delta X(k) \ge 0 \quad \text{for all } 2 \le \theta \le c.
+\end{array}
+$$
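Proposition 4 reduces the one-shot problem to a small integer program over capital shifts. The following Python sketch (our own illustration, not the paper's code) enumerates the feasible shifts $\Delta X$ by brute force for small $c$ and $M$, assuming the budget constraint charges $\theta$ units of capital per bank moved up to level $\theta$ and taking the expected loss $\mathbb{E}(L \mid X_0 + \Delta X)$ as a caller-supplied black box:

```python
import itertools

def feasible_shifts(X0, M, c=3):
    """Enumerate candidate shifts Delta X (indexed theta = 1..c) satisfying:
    budget sum_theta theta * DeltaX(theta) <= M, conservation
    sum_k DeltaX(k) == 0, and monotonicity sum_{k>=theta} DeltaX(k) >= 0."""
    for dx in itertools.product(range(-M, M + 1), repeat=c):
        if sum(dx) != 0:
            continue                                    # bank count unchanged
        if any(X0[t] + dx[t] < 0 for t in range(c)):
            continue                                    # counts stay nonnegative
        if sum((t + 1) * d for t, d in enumerate(dx)) > M:
            continue                                    # budget (assumed cost)
        if any(sum(dx[t:]) < 0 for t in range(1, c)):
            continue                                    # capital only increases
        yield dx

def v0_hat(X0, M, expected_loss):
    """Minimise a caller-supplied expected loss over the feasible shifts."""
    return min(expected_loss(tuple(x + d for x, d in zip(X0, dx)))
               for dx in feasible_shifts(X0, M))
```

For instance, with the toy loss `toy = lambda x: -sum((t + 1) * v for t, v in enumerate(x))` (loss decreasing one-for-one with total capitalization), `v0_hat((2, 2, 2), 2, toy)` saturates the budget and returns `-14`.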
+
+# 5 Numerical results
+
+In this section we compare the net gain from intervention in various settings, where the net gain is understood as the difference between the loss without intervention and the loss with intervention. The net gain from intervention is expressed as a percentage of the total value of the financial system, where we define **the total value of the financial system**
+---PAGE_BREAK---
+
+Figure 3: Configuration model: links are formed by uniformly matching half edges.
+
+| Parameter | Value |
+| --- | --- |
+| Number of banks | $n = 20$ |
+| Connectivity | $\lambda = 12$ |
+| Initial capital (also maximum remaining capital) | $c = 3$ |
+| Fraction of banks depending on unstable short term debt | $\alpha = 0.2$ |
+| Intervention budget | $M = 6$ |
+
+Table 2: Baseline parameters.
+
+as the sum over all banks of their capital, i.e., $\sum_{i \in \mathcal{N}} c(i)$. We are particularly interested in comparing the gain from intervention in the partial information and complete information cases. The difference in the net gain is the value of information. A cautionary point is that this value of information does not account for moral hazard, which we expect to be higher in the complete information case.
+
+In the complete information case, we choose 2 banks uniformly at random as initial failures. We check the robustness of the results by generating both a regular network and an inhomogeneous network. By means of the configuration model, see e.g. Molloy and Reed (1995), we obtain sample networks in which the nodes have prescribed degrees. Nodes are assigned in-coming and out-going half edges, and the sample graph results from a uniform matching of the in-coming and out-going half edges, as shown in Figure 3. The regular network is a network in which all nodes have the same degree $\lambda = 12$.
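The uniform matching just described can be sketched in a few lines of Python (names and representation are ours; self-loops and multi-edges are not removed in this minimal version):

```python
import random

def configuration_model(degrees, seed=None):
    """Directed configuration model: node i receives degrees[i] in-coming and
    degrees[i] out-going half edges; edges result from a uniform matching."""
    rng = random.Random(seed)
    out_stubs = [i for i, d in enumerate(degrees) for _ in range(d)]
    in_stubs = [i for i, d in enumerate(degrees) for _ in range(d)]
    rng.shuffle(in_stubs)                    # uniform matching of half edges
    return list(zip(out_stubs, in_stubs))    # directed edges (i, j)

# regular network: all n = 20 nodes have degree lambda = 12
edges = configuration_model([12] * 20, seed=1)
assert len(edges) == 20 * 12
```

The inhomogeneous network considered below is obtained by the same routine with a degree sequence mixing the values 6 and 18.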
+
+Likewise, an inhomogeneous sample network is obtained by the configuration model, but the degrees are not all equal to $\lambda$: they are chosen randomly, equal to $\lambda_{min} = 6$ or $\lambda_{max} = 18$ with probability 0.5 each, so that the average degree remains $\lambda = 12$. In a regular network, each bank can be reached from $\lambda$ other banks and can reach $\lambda$ other banks. When $\lambda$ is large in comparison to $n$, we can expect that the graph is strongly connected and that it contains cycles (with sufficiently high probability). The same holds in the case of the inhomogeneous graph when $\lambda_{min}$ is sufficiently large in comparison to $n$. The numerical results of this section depend on the strong connectivity of the networks we consider.
+
+In the partial information setting we do not need to choose the banks that fail initially, but only their number. We will consider the cases of 1 and 2 initial failures (out of 20 banks).
+
+The partition of non-failed banks according to the remaining capital can be interpreted in the following way:
+---PAGE_BREAK---
+
+• $\theta = 3$: Banks which are not prone to bank runs in case they record a write-down;
+
+• $\theta = 2$: Banks that recorded one write-down. Among these banks, those who depend on short term funding will face a bank run as soon as they record a new write-down;
+
+• $\theta = 1$: Banks that recorded two write-downs. They do not depend on short term debt (A non-failed bank that depends on short term debt cannot have remaining capital $\theta = 1$, since it fails due to illiquidity as soon as its remaining capital reaches the failure barrier $f^{-1} = 1$).
+
+Before contagion (after the initial shock), non-failed banks have remaining capital $\theta = 3$. If there is additional depletion of capital beyond the initial failures, we will allow non-failed banks to start with $\theta = 2$ in order to illustrate the effect of contagion.
+
+## 5.1 Optimal intervention in the complete information setting
+
+We generate several samples from our random network model with the parameters in Table 2. Figure 4 plots the net gain from intervention versus the total equity infusion. Each point represents one of the sets of “insured” banks. We plot the cost versus the net gain for all possible choices of insured sets of banks, for which the infusion is smaller than $M$.
+
+More precisely, we are in the context of Proposition 1 and we plot for each set of “insured” banks $\mathcal{V} \subseteq \mathcal{N} \setminus \mathcal{D}_0$ with $\sum_{i \in \mathcal{V}} \xi^\mathcal{V}(i) \le M$ the cost $\sum_{i \in \mathcal{V}} \xi^\mathcal{V}(i)$ versus the net gain
+
+$$ \sum_{i \in \mathcal{D}(0)} c(i) + \sum_{i \in \mathcal{D}^c(0)} \sum_{j \in \mathcal{D}(0)} e(i,j) - \left( \sum_{i \in \mathcal{D}(\xi^{\mathcal{V}})} c(i) + \sum_{i \in \mathcal{D}^c(\xi^{\mathcal{V}})} \sum_{j \in \mathcal{D}(\xi^{\mathcal{V}})} e(i,j) \right), $$
+
+where $\mathcal{D}(0)$ represents the final set of failed banks under no intervention (0 equity infusions).
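As a concrete (and heavily simplified) illustration of this comparison, the Python sketch below runs a cascade with unit exposures and one write-down per failed counterparty, on a toy chain of five banks; the function names and the simplification to unit exposures are ours, not the paper's exact model:

```python
def cascade(n, exposures, capital, insured, initial):
    """Stylized cascade: a non-insured bank fails once its write-downs
    (one unit per failed counterparty it is exposed to) reach its capital."""
    failed, changed = set(initial), True
    while changed:
        changed = False
        for i in range(n):
            if i in failed or i in insured:
                continue
            if sum(1 for j in exposures[i] if j in failed) >= capital[i]:
                failed.add(i)
                changed = True
    return failed

def loss(n, exposures, capital, failed):
    # capital of failed banks + exposures of surviving banks to failed banks
    return (sum(capital[i] for i in failed)
            + sum(1 for i in range(n) if i not in failed
                  for j in exposures[i] if j in failed))

# toy chain 0 -> 1 -> 2 -> 3 -> 4: bank i is exposed to bank i + 1
n = 5
expo = {i: {i + 1} for i in range(n - 1)}
expo[n - 1] = set()
cap = [1] * n
base = loss(n, expo, cap, cascade(n, expo, cap, set(), {4}))
ins3 = loss(n, expo, cap, cascade(n, expo, cap, {3}, {4}))
net_gain = base - ins3   # insuring bank 3 cuts the chain
```

Here insuring bank 3 cuts the chain after the initial failure of bank 4: the loss drops from 5 to 2, a net gain of 3.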
+
+If the net gain from insuring a set of banks is positive, then it is efficient to intervene on that set. However, the government will choose to insure the set with the maximal net gain. If there are multiple optimal solutions, we choose the one with minimal cost.
+
+We make three plots, for the same network but for different values of $\alpha$ (the probability that a node depends on short term funding). For each of these plots, we note that the net gain generally decreases with the cost of insurance. To understand this counterintuitive result, consider the example of a long chain of causal defaults: 1 leads to the default of 2, which leads to the default of 3, and so on, until the default of the last bank, 20. In the absence of intervention all 20 banks default. However, the causal structure of defaults is particularly simple here, and complete knowledge of it lets the controller identify that there is only one bank, 1, with a critical position in the network. Saving this bank has the smallest intervention cost, 1, and leads to the maximum net gain, equal to the capital of the 20 banks. When the cost to insure a set of banks is small, this means that most causal chains would be removed if only a few critical banks received infusions.
+
+The second observation from Figure 4 is that the optimal solution is not very sensitive to the fraction of banks prone to bank runs. Only when the fraction of such banks increases significantly does the cost of the optimal solution increase from 4 to 5; the net gain is the same. Clearly, because in the complete information case the controller can identify the critical nodes in the network that need to be insured, the default chains are cut "early", before an illiquidity crisis can unfold.
+---PAGE_BREAK---
+
+Figure 4: Net gain from intervention vs. total equity infusion, $\alpha = 0, 0.2, 0.4$ and no indirect contagion. Each point represents a set $\mathcal{V}$ of insured banks. The net gain is represented as a percentage of the total value of the financial system.
+
+Third, as expected, the general cost to insure sets of nodes increases with the fraction $\alpha$ of banks prone to bank runs. This is explained by the fact that there are fewer choices of insurable sets that can satisfy the budget constraint. However, because the optimal solution is interior and not very sensitive to the fraction $\alpha$ of banks prone to bank runs, increasing the intervention budget will not lead to a better solution.
+
+Further, we let $\alpha = 0.2$ and plot the cases in which the illiquidity barrier depends on the number of defaults in the system. The first plot is the case of no dependence, the second is the case in which the illiquidity barrier increases by 1 with every 5 defaults in the system, and the last is the case in which it increases by 1 with every 2 defaults. The optimal solution now has a higher cost, 6. As the indirect contagion channel becomes more powerful, far fewer sets are “insurable”. The solution is much more sensitive to the indirect contagion channel than to an increase in the fraction $\alpha$ of banks prone to bank runs.
+
+In all considered cases, beyond a certain value for the total capital infusion, it is not optimal for the government to inject more equity in the system. The budget constraint is not saturated in general. This suggests that injecting equity beyond a certain limit would, in effect, not serve to mitigate the contagious losses, but to transfer losses from the creditors and existing shareholders of failed banks to the taxpayers.
+---PAGE_BREAK---
+
+Figure 5: Net gain from intervention vs. total equity infusion; $\alpha = 0.2$: In the first plot there is no indirect contagion; in the second and third plots there is indirect contagion: the barrier increases by 1 every 5 and respectively 2 defaults. Each point represents a set $\mathcal{V}$ of insured banks. The net gain is represented as a percentage of the total value of the financial system.
+---PAGE_BREAK---
+
+Figure 6: Heterogeneous network. Net gain from intervention vs. total equity infusion, $\alpha = 0, 0.2, 0.4$, indirect contagion in the last plot. Each point represents a set $\mathcal{V}$ of insured banks. The net gain is represented as a percentage of the total value of the financial system.
+
+We now check the robustness of these results with respect to the network structure. We have so far considered a regular network; in Figure 6 we consider intervention in the inhomogeneous network described above, obtained by the configuration model with degrees chosen randomly, equal to $\lambda_{min} = 6$ or $\lambda_{max} = 18$ with probability 0.5 each, so that the average degree remains $\lambda = 12$.
+
+The same results hold qualitatively, with two important distinctions. Unlike in homogeneous networks, in heterogeneous networks there is no contagion in the absence of the bank run component. The maximal gain that can be achieved is 0.4 (compared to 0.5) of the value of the financial system. This is because there is less contagion in the first place. The second distinction is that in heterogeneous networks there are more cases in which the net gain from intervention is negative. This is because in such networks exposures are heterogeneous. Indeed, as seen in Example 3, in the presence of large exposures the government will not save a bank and absorb a large exposure with taxpayer money, according to our optimization criterion.
+
+Given that the networks we consider here have $n = 20$ nodes, it is reasonable to interpret these nodes as core banks. While the entire financial system is highly heterogeneous, the subset of core banks is more homogeneous in terms of connectivity. In the sequel, we consider the partial information setting, in which the network is homogeneous and in fact regular.
+---PAGE_BREAK---
+
+## 5.2 Optimal intervention in the partial information setting
+
+We compare in Figure 7 the net gain from intervention in the complete and partial information settings. Given complete information on the network, loss mitigation is more effective. Here we study numerically the value of information, defined as the difference between the value functions in the complete and incomplete information cases. Policymakers would need to balance the value of information against the costs associated with the complete information case: the costs of data collection and the moral hazard costs.
+
+The complete information case is shown as before, and the optimal solution is easy to identify on the plots as the maximum net gain. In the incomplete information case we plot the expected net gain as a function of $\alpha$ and the number of banks with remaining capital $\theta = 2$. Because we consider that non-failed banks have either $\theta = 3$ or $\theta = 2$ remaining capital, it is understood that the larger the number of banks with remaining capital $\theta = 2$, the smaller the initial capitalization of the non-failed banks.
+
+We note that optimal intervention significantly reduces the magnitude of contagion, in both the complete and partial information settings, whenever $\alpha$ is small and there are few banks with remaining capital $\theta = 2$. The resulting net gain in the partial information setting compares well to the case when the government observes the whole network: in both the complete and the partial information settings the gain is close to 0.5 of the value of the system. Importantly, however, in the partial information case this large gain holds only for small values of $\alpha$, whereas in the previous section the dependence of the optimal solution on $\alpha$ was not as significant.
+
+This points to the fact that an optimal intervention by a government “one step ahead” of the market yields good results in terms of mitigating contagion when the fraction of banks prone to bank runs is small and most non-failed banks are well capitalized initially. The fact that the value of information is small is particularly important given that the government needs to weigh the value of information against the increase in moral hazard when the network structure is known (full information on who would receive infusions). Moreover, data collection is costly; see e.g. Abad et al. (2016) for details on the data available and the costs this data collection entails.
+
+We further compare the optimal solution under partial information with the solution when we constrain equity infusions to take place only at time 0. The difference is the value of an adaptive strategy. Figure 8 shows that the adapted intervention strategy performs *significantly* better than an infusion constrained to take place at time 0.
+
+## 5.3 Short term debt and interbank contagion
+
+In this section we assess the intervention policy in the presence of banks that depend on unstable sources of funding. We vary the proportion $\alpha$ of banks that fund themselves via short term debt. In Figure 9 we plot the net gain from intervention as a function of the number of banks at remaining capital $\theta = 2$ and as a function of $\alpha$. The more banks at remaining capital 2 (as opposed to 3), the lower the aggregate capital of non-failed banks. We consider two cases, with one initial failure and with two initial failures. For each case, we compare the net gain under two intervention budgets, $M = 10$ and $M = 4$.
+
+Consider the case of one initial default and $M = 4$. For low $\alpha$, the net gain increases with the number of banks at remaining capital $\theta = 2$; this is because contagion can be
+---PAGE_BREAK---
+
+Figure 7: Net gain from intervention: complete vs. partial information.
+
+Figure 8: Difference between value functions in the case with all equity infusions at time 0 and the case with adaptive intervention. $M = 10$ and there is one initial failure.
+---PAGE_BREAK---
+
+Figure 9: Expected net gain from intervention, for one and respectively two initial failures. For each plot, we consider the intervention budget $M = 10$ (surface above) vs. $M = 4$ (surface below).
+
+contained with the allocated budget. When $\alpha$ is large, the net gain quickly decreases with the number of banks at remaining capital $\theta = 2$; when there is a large number of such banks, the net gain is close to zero, meaning that contagion under intervention is almost the same as without intervention.
+
+For $M = 10$, contagion can be better contained, so the expected net gain remains large even for a large $\alpha$ and a large number of banks with remaining capital $\theta = 2$.
+
+This is no longer true in the case of two initial defaults. Then the expected net gain is sharply decreasing with $\alpha$ and the number of banks at remaining capital $\theta = 2$ even under the larger budget $M = 10$.
+
+In Figure 10 we plot the difference between the expected number of interventions on banks with remaining capital $\theta = 2$ and interventions on banks with remaining capital $\theta = 1$, as a function of $\alpha$ and time. More interventions on banks with remaining capital $\theta = 2$ means that the focus is on avoiding runs. More interventions on banks with remaining capital $\theta = 1$ means that the focus is on avoiding insolvencies. Whenever banks are not prone to illiquidity ($\alpha = 0$) the government has no incentive to inject equity before the bank is in actual danger of insolvency.
+
+All plots in Figure 10 have a jump at $\alpha = 0$. This means that as soon as there is even a small fraction of banks that depend on short term funding, there is a threat that short term creditors can withdraw funding and the banks could fail due to illiquidity. In this case the government cannot wait for the banks to reach $\theta = 1$ to make equity infusions, because it fears failures due to bank runs on banks that are still solvent. The government chooses instead to make equity infusions in banks that have $\theta = 2$. These are banks that are adequately capitalized but which can fail due to runs, and not insolvency.
+
+When there is one initial default, the strategies are indistinguishable for $M = 10$ and $M = 6$. When there are two initial defaults, the switch (the aforementioned jump at $\alpha = 0$) towards interventions that prevent bank runs is much swifter when the intervention budget is larger. We conclude that under partial information, and in contrast with the complete information case, a larger intervention budget allows the government to take better decisions
+---PAGE_BREAK---
+
+and avoid illiquidity crises. Whenever there are banks that depend on short term funding, the primary reason for equity infusions is to eliminate the frictions associated with inefficient bank runs, not to prevent insolvency. An important observation is that an arbitrarily small $\alpha$ suffices to observe this sharp jump in the rationale of the intervention policy.
+
+Figure 10: Difference between the expected number of interventions on banks with remaining capital $\theta = 2$ and $\theta = 1$ for budget $M = 10$ (surface above), $M = 6$ (surface in the middle) and $M = 4$ (surface below). With one initial default (left), the strategies are indistinguishable for $M = 10$ and $M = 6$. The negative region of each surface represents the case when the controller's focus is on preventing insolvencies, and the positive region of each surface represents the case when the controller's focus is on preventing runs.
+
+All results of this section have been given in the regular network case, in which all banks have the same connectivity. The regular network is to be thought of as a network of core banks, which have access to repo funding and are prone to bank runs. We expect that our results would go through qualitatively if we considered a tiered system, the core and the periphery, as long as the core is strongly connected with sufficiently high probability. In such an extended model, the peripheral banks may have a smaller connectivity than the core banks and only core-peripheral directed paths.
+
+The dimension of the problem would double, because the value function in (3) would depend, in addition to the state of the core banks and the number of interventions on the core banks, on the state of the peripheral banks and the number of interventions on the peripheral banks. Let us compute the size of the state space in the homogeneous and tiered cases, for the working example of Table 2. Consider the homogeneous case. There are $n = 20$ banks, and the solvent banks have remaining capital $\theta = 1, \dots, c$ ($c = 3$), giving in total $\binom{n+c}{c} = 1771$ capital configurations. There are $\binom{M+2}{2} = 28$ possible ways to intervene on core banks at levels 1 and 2, for $M = 6$. For our example, the size of the state space is $1771 \times 28$. Suppose now that we introduce a periphery of $n = 20$ peripheral banks, and suppose that the intervention budget is $M = 12$. Then there are $\binom{M+3}{3} = 455$ possible ways to intervene on core banks at levels 1 and 2 and on peripheral banks at level 1 (assuming that peripheral banks are not prone to repo runs). The size of the state space becomes $1771 \times 1771 \times 455 \sim 10^9$. Computing the optimal solution for a two-tiered system under partial information remains tractable (and also for
+---PAGE_BREAK---
+
+some three-tiered systems, using today's computational power). By comparison, note that keeping track of the state of each bank would yield, in the case $n = 40$, a state space of size $3^{40}$. It is thus remarkable that the solution for systems with dozens of banks remains tractable and yields insights on the role of solvency vs. runs. It is clear that such tractability comes at the expense of imposing a tiered structure with a small number of levels, with banks within each level having the same connectivity.
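The counts used above can be checked directly with Python's `math.comb`:

```python
from math import comb

n, c = 20, 3
print(comb(n + c, c))      # 1771 capital configurations of the core
print(comb(6 + 2, 2))      # 28 ways to intervene at levels 1 and 2 with M = 6
print(comb(12 + 3, 3))     # 455 ways at three levels with M = 12

# two-tier state space: core states x periphery states x interventions
print(comb(n + c, c) ** 2 * comb(12 + 3, 3))   # 1427080655, about 1.4e9
```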
+
+The dependence of the government strategy on funding capacity has implications for data collection. From data, one must estimate the probability of bank runs. A large fraction of short term lending is done through repo markets. However, because of a lack of data on bilateral repos in the United States, "the full picture on repo is yet to be assembled", Gorton and Metrick (2012b), Krishnamurthy et al. (2014). This is in contrast to European markets, where repo lending is much more transparent as it goes through a CCP (Eurex), see Mancini et al. (2015). Currently, for US markets only data on money market fund lending is available, see e.g. Gorton and Metrick (2012b), Krishnamurthy et al. (2014). It is critical to estimate the cost of funding across a variety of types of repo lenders, e.g., money market funds, securities lenders, insurance companies. This would lead to a better estimation of bank run probabilities, which we have shown to be central in intervention programs.
+
+# 6 Conclusions
+
+We analyzed the optimal equity infusions of a government with a constrained budget, in two different information settings. The optimal strategy depends on the government's information and is highly sensitive to the presence of banks that are susceptible to bank runs. From this perspective, to minimize loss it is critical to estimate the banks' funding capacity and to acknowledge the role of indirect contagion from capital loss to funding withdrawal across the system. Collecting data that allows estimating funding capacity is as important as (if not more important than) collecting data on exposures.
+
+In the presence of banks that are susceptible to bank runs, the intervention must be swifter and directed at banks that are otherwise not close to insolvency. The threshold to contagion is smaller when banks are prone to illiquidity. Under incomplete information, the government does not know the threshold to contagion. Even for a small fraction of banks that depend on unstable short term funding, there is a discontinuity in the optimal strategy, from "patient" intervention only on banks with low capital to "immediate" reinforcement of banks with medium capital (which could be in danger of illiquidity).
+
+We have further compared the reduction in the magnitude of contagion in the partial and complete information settings. We found that even in the case of partial information, contagion is significantly mitigated by intervention, provided the government uses an adapted strategy. The mitigating effect of intervention is in this case comparable to the case when the government has complete information. This has significant implications for moral hazard. Under complete information, the government will inject equity into banks that are higher in the causal chain of failures. This in turn creates incentives for banks to have many creditors, and can potentially turn them into "super"-spreaders of contagion or "too-interconnected to fail". In the incomplete information case, the creditors of the initially failed banks will receive infusions more equitably, since there is no information as to which of them is higher in the causal chain of failure. Less information may decrease moral hazard
+---PAGE_BREAK---
+
+and have comparable mitigation.
+
+There exist, of course, other kinds of interventions for supporting liquidity which do not take the form of equity infusions, but that of loans or credit guarantees. These are important topics for future research. Equity infusions in 2008, on the other hand, constituted the largest ever U.S. government intervention in the financial sector, and our results are intended to be used as tools for the decision process in this type of interventions.
+
+# A Appendix
+
+## A.1 Preliminary list of notations
+
+* $n$ - the number of banks;
+
+* $\mathcal{N}$ - the set of banks;
+
+* $\mathcal{D}$ - generic set of failed banks;
+
+* $\mathcal{D}_0, \mathcal{D}_1, \mathcal{D}_k, \dots$ - cascading failures;
+
+* $^c$ - set complement
+
+* $\xi(i)$ - arbitrary equity infusion;
+
+* $\mathcal{D}(\xi)$ - the final set of defaults under equity infusion $\xi$
+
+* $\mathcal{D}^{\mathcal{V}}$ - the final set of defaults when all banks in $\mathcal{V}$ receive infinite equity infusions, and all other banks do not receive any equity infusions;
+
+* $(e(i,j))$ - the exposure matrix (bank $i$ is exposed to bank $j$);
+
+* $c(i)$ - initial capital;
+
+* $c$ - initial capital, assumed constant across banks in the second part; it gives the maximum remaining capital throughout the contagion;
+
+* $s(i)$ - short term debt, net of liquid reserves;
+
+* $f(i)$ - the funding capacity;
+By abuse of notation, $f$ denotes also the map that gives debt capacity as a function of capital and number of failed banks,
+
+$$f(i) = f(c(i), |\mathcal{D}|).$$
+
+* $f^{-1}(s(i), |\mathcal{D}|)$ - the failure barrier for the capital (as a function of the net short term debt and number of failed banks), the inverse of the debt capacity function;
+
+* $\theta(i, \mathcal{D})$ - remaining capital (after recorded losses);
+
+* $\theta_k(i)$ - in the second part, remaining capital after there are $k$ recorded losses in the system;
+---PAGE_BREAK---
+
+* $\alpha$ - fraction of banks depending on unstable short term debt;
+
+* $L$ - loss in the system, the optimization criterion;
+
+* $M$ - the intervention budget;
+
+* $\tilde{\xi}(i)$ - optimal equity infusion;
+
+* $\xi^\nu$ - the minimal equity infusion that "insures" the set $\mathcal{V}$;
+
+* $\lambda$ - the connectivity, assumed constant across banks in the second part; also the number of exposures;
+
+* $(T_k, i_k, j_k)$, $k=1, \dots, m$ - the recording of exposure $k$ consists of a triple: the time $T_k$ when this exposure is recorded, and the pair of banks, with the convention that $i_k$ is exposed to $j_k$;
+
+* $u_k \in \{0,1\}$ - decision to inject equity in the exposed bank at time $T_k$;
+
+* $u_k(\theta) \in \{0,1\}$ - decision to inject equity in the exposed bank at time $T_k$ given that the bank has remaining capital $\theta$;
+
+* $\tilde{u}_k(\theta)$ - the optimal decision to inject equity in the exposed bank at time $T_k$;
+
+* $X_k(\theta)$ - state variables, the number of banks with remaining capital $\theta$;
+
+* $N_k(\theta)$ - control variables, the number of interventions on banks at remaining capital $\theta$;
+
+* $L_k(x, \ell)$ - the expected loss, starting at step $k$ from state $x$ and a number of interventions $\ell$;
+
+* $V_k(x, \ell)$ - the minimal expected loss; this is the value function in the dynamic programming equation;
+
+* $V_0(X_0, 0)$ - the optimal solution under partial information;
+
+* $\hat{V}_0$ - the solution with partial information and one-time infusions;
+
+## A.2 Proofs
+
+### A.2.1 Proof of Proposition 1
+
+The set of infusions $\tilde{\xi}$ satisfying conditions (i) and (ii) of Lemma 1 is equal to the set $\{\xi^{\mathcal{V}} \mid \mathcal{V} \subseteq \mathcal{N}\}$. The equality of the two sets is easy to check. Let $\xi^{\mathcal{V}}$ be defined as above for a set $\mathcal{V}$. It is clear that $\mathcal{V} \cap \mathcal{D}(\xi^{\mathcal{V}}) = \emptyset$, so failed banks do not receive infusions. Therefore $\xi^{\mathcal{V}}$ satisfies condition (ii) of Lemma 1, and it satisfies condition (i) by construction. Conversely, for any $\tilde{\xi}$ satisfying conditions (i) and (ii) of Lemma 1, we have $\tilde{\xi} = \xi^{\mathcal{V}}$ for $\mathcal{V} := \{i \mid \tilde{\xi}(i) > 0\}$.
+
+Lemma 1 combined with the budget constraint implies that the solutions to Problem 1 form a subset of $\{\xi^{\mathcal{V}} \mid \mathcal{V} \subseteq \mathcal{N}, \sum_i \xi^{\mathcal{V}}(i) \le M\}$. Therefore, in Problem 1 it suffices to
+---PAGE_BREAK---
+
+minimize loss over the restricted equity infusion set $\{\xi^\nu | \nu \subseteq \mathcal{N}, \sum_i \xi^\nu(i) \le M\}$ (and not $\{\xi | \sum_i \xi(i) \le M\}$). We conclude that the optimal equity infusion problem can be recast as the combinatorial optimization problem in the statement of Proposition 1.
+
+### A.2.2 Transition probabilities of the controlled Markov chain $(X, N)$
+
+We define the stopping time
+
+$$ \bar{k} = \inf \left\{ k \mid \left( n - \sum_{\theta=1}^{c} X_k(\theta) \right) \lambda = k \right\}, $$
+
+with $\bar{k} \le m$, representing the time at which the cascading failures stop, meaning that all exposures to defaulted banks (there are $(n - \sum_{\theta=1}^c X_k(\theta))\lambda$ such exposures) have been recorded (the number of recorded exposures is $k$).
+
+The transition probabilities of the Markov chain are as follows. For all $\theta \in \{1, \dots, c\}$, with probability $\frac{(\lambda-c+\theta)X_k(\theta)-N_k(\theta)}{m-k}$ the recorded exposure belongs to a bank with remaining capital $\theta$, and the next state is determined by the decision $u_k(\theta)$.
+
+If $u_k(\theta) = 1$ (the government injects equity, offsetting the write-down):
+$$X_{k+1}(\theta) = X_k(\theta), \qquad N_{k+1}(\theta) = N_k(\theta) + 1.$$
+
+If $u_k(\theta) = 0$ (the bank records the write-down):
+$$X_{k+1}(\theta) = X_k(\theta) - 1, \qquad N_{k+1}(\theta) = N_k(\theta),$$
+and, in addition:
+
+* if $\theta > 2$: $X_{k+1}(\theta-1) = X_k(\theta-1) + 1$;
+* if $\theta = 2$: with probability $\alpha$, $X_{k+1}(\theta-1) = X_k(\theta-1)$ (the bank fails due to a run), and with probability $1-\alpha$, $X_{k+1}(\theta-1) = X_k(\theta-1) + 1$.
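A single step of this transition can be coded directly (a Python sketch under our own naming, with the sampled level $\theta$ of the hit bank passed in by the caller):

```python
import random

def transition(X, N, theta, u_theta, alpha, rng):
    """Apply one recorded exposure to a bank at remaining capital `theta`.
    X[theta]: number of banks at level theta; N[theta]: past interventions."""
    X, N = dict(X), dict(N)
    if u_theta == 1:            # intervene: the write-down is offset
        N[theta] += 1
        return X, N
    X[theta] -= 1               # the bank records a write-down
    if theta > 2:
        X[theta - 1] += 1
    elif theta == 2:
        # with probability alpha the bank fails due to a run;
        # otherwise it survives at level 1
        if rng.random() >= alpha:
            X[theta - 1] += 1
    # theta == 1: the bank fails due to insolvency
    return X, N
```

For example, `transition({1: 2, 2: 3, 3: 5}, {1: 0, 2: 0, 3: 0}, 3, 0, 0.2, random.Random(0))` moves one bank from level 3 to level 2.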
+
+### A.2.3 Proofs of Proposition 2 and Theorem 1
+
+For the proofs of Proposition 2 and Theorem 1 we need the following steps:
+
+(i) We introduce a sequential algorithm that generates a failure cluster which will be shown to have the same conditional law as the failure cluster in the random financial network.
+
+(ii) We then show by a coupling argument the equivalence of the conditional laws of the failure clusters in the two algorithms.
+
+In Amini et al. (2016) it has been shown that in the case without intervention, the cluster of failures on the random network at the end of the cascade process has the same law as a random graph constructed sequentially. The intuition is as follows. A random matching can be constructed sequentially: At any step, choose an in-coming half edge **according to any rule based on the history of the matching**, and then choose its pair uniformly among all unmatched out-going half edges.
+
+We now show that this is also the case when the controller has partial information.
+---PAGE_BREAK---
+
+**Step 1:** An algorithm for sequential construction of the failure cluster. We describe the construction of the failure cluster in the random financial network, as introduced in Section 4.1, in the form of the following algorithm.
+
+**Algorithm 2.** (i) Choose randomly a network from the Configuration Model.
+
+(ii) Let $Q$ be the set of unrecorded exposures to failed banks. The set $Q$ initially contains exposures to nodes in $\mathcal{D}_0$. All exposures in $Q$ are assigned a clock which rings after a random time with law $\text{Exp}(1)$, independent of everything else.
+
+(iii) While the queue $Q$ is non-empty, let $(i_k, j_k)$ be the exposure whose clock rings first. Remove $k$ from $Q$: $Q \leftarrow Q \setminus \{k\}$, and record the exposure. The government may intervene by injecting equity in the node $i_k$ (according to its optimization program under partial information). We denote by $u_k^1$ the optimal decision.
+
+If $i_k$ is left to fail, the exposures of other nodes to $i_k$ are added to the queue $Q$ and assigned a clock of law Exp(1), independent of everything else.
+
+(iv) Repeat until $Q = \emptyset$. We denote by $C_1 = (\mathcal{D}^1, E^1)$ the failure cluster observed by the government at the end of the contagion process, with $\mathcal{D}^1$ the set of failures, and $E^1$ the set of recorded exposures.
+
+We now introduce a second algorithm, which has the advantage of being sequential, and which will construct a failure cluster with the same conditional law as Algorithm 2.
+
+**Algorithm 3.** Let $Q$ be the queue of unexplored in-coming half edges belonging to failed banks.
+
+(i) Assign to each node $i \in \mathcal{N}$, $\lambda$ out-going half edges and $\lambda$ in-coming half edges.
+
+(ii) Add the set of in-coming half edges of fundamentally failed banks to the queue $Q$ and assign them a clock of law Exp(1), independent of everything else.
+
+(iii) While $Q \neq \emptyset$, let $k$ be the half edge whose clock rings first. Remove $k$ from its queue: $Q \leftarrow Q \setminus \{k\}$. Choose uniformly a matching out-going half edge among all unmatched out-going half edges and form an edge. Let $i_k$ be the node to which the chosen out-going half-edge belongs and record the exposure.
+
+The government may intervene by injecting equity in node $i_k$ (according to their optimization program under partial information). We denote by $u_k^2$ the optimal decision. If $i_k$ is left to fail, then the $\lambda$ in-coming half edges of $i_k$ are added to the queue $Q$ and assigned a clock of law Exp(1), independent of everything else.
+
+(iv) Repeat until $Q = \emptyset$. We denote by $C_2 = (\mathcal{D}^2, E^2)$ the failure cluster observed by the government at the end of the contagion process.
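+The sequential construction of Algorithm 3 is easy to simulate. Below is a minimal Python sketch under two simplifying assumptions not made in the text: a node fails upon its first recorded exposure to a failed bank (unit capital), and the government's decision rule is a hypothetical placeholder `intervene`.

```python
import random

def algorithm3(n, lam, fundamentally_failed, intervene=lambda node, edges: False):
    """Sketch of Algorithm 3 (simplified: a node is left to fail upon its
    first recorded exposure, i.e. unit capital).

    `intervene(node, edges)` is a hypothetical placeholder for the
    government's optimal decision rule; returning True saves the node.
    """
    # every node carries `lam` out-going half edges, kept as (node, index) pairs
    out_stubs = [(i, a) for i in range(n) for a in range(lam)]
    failed = set(fundamentally_failed)
    # queue of unexplored in-coming half edges of failed banks
    queue = [(i, a) for i in sorted(failed) for a in range(lam)]
    edges = []  # recorded exposures (creditor, failed debtor)

    while queue:
        # i.i.d. Exp(1) clocks: the first clock to ring is uniform on the queue
        in_stub = queue.pop(random.randrange(len(queue)))
        # match with a uniformly chosen unmatched out-going half edge
        creditor, _ = out_stubs.pop(random.randrange(len(out_stubs)))
        edges.append((creditor, in_stub[0]))  # record the exposure
        if creditor not in failed and not intervene(creditor, edges):
            failed.add(creditor)              # left to fail: explore its in-edges
            queue.extend((creditor, a) for a in range(lam))
    return failed, edges
```

+Note that the Exp(1) clocks never need to be simulated explicitly: by memorylessness, the half edge whose clock rings first is a uniform draw from the current queue.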
+
+Note that the difference between Algorithms 2 and 3 is that, in the first case, the network is drawn from the configuration model in the first step of the algorithm and then the failure cluster is generated using a supplementary queue which is unobserved by the government, whereas, in the second case, the network is generated at the same time as the failure cluster.
+---PAGE_BREAK---
+
+**Step 2.** We now need to show that, at any time $t$, the law of the failure cluster is the same for the two algorithms, conditional on observing at time $t$ the same partial failure cluster and number of interventions.
+
+We denote by $C_t^1 := (\mathcal{D}_t^1, E_t^1)$ the failure cluster observed at time $t$ in Algorithm 2, and by $C_t^2 := (\mathcal{D}_t^2, E_t^2)$ the one in Algorithm 3. We denote by $N_t^1$ and $N_t^2$ the respective numbers of interventions. In case a clock rings at time $t$, the failure clusters $C_t^1$ and $C_t^2$ are observed *before* the exposure is recorded and the intervention decision is made. We denote by $\mathcal{H}_t^1$ and $\mathcal{H}_t^2$, respectively, the history of the cluster observation in the two algorithms.
+
+**Lemma 2.** For any network function $F$ and any time $t$, there exists a deterministic function $G_t$ (depending on $F$) such that:
+
+$$ \mathbb{E}(F(C^1)|\mathcal{H}_t^1) = G_t(C_t^1, N_t^1), $$
+
+and
+
+$$ \mathbb{E}(F(C^2)|\mathcal{H}_t^2) = G_t(C_t^2, N_t^2). $$
+
+*Proof.* We prove this claim by backward induction on the number of links in the observed failure cluster.
+
+Let $t$ be such that $|E_t^1| = m$, i.e., all $m$ write-downs have been recorded. Then we must be after the end of the cascade, so $C^1 = C_t^1$, and therefore
+
+$$ \mathbb{E}(F(C^1)|\mathcal{H}_t^1) = F(C_t^1) =: G_t(C_t^1, N_t^1). $$
+
+Likewise, for $t$ such that $|E_t^2| = m$,
+
+$$ \mathbb{E}(F(C^2)|\mathcal{H}_t^2) = F(C_t^2) =: G_t(C_t^2, N_t^2). $$
+
+Suppose now that the two claims hold for observed failure clusters with a number of links greater than $k$. We now show that the two claims hold when the observed failure clusters have $k$ links.
+
+Let us consider first the case when a clock rings precisely at time $t$. In this case, the history $\mathcal{H}_{t+}^1$ is strictly larger than $\mathcal{H}_t^1$, since it contains the information about the newly recorded exposure and the controller's decision. More importantly, because at time $t+$ there is an additional exposure added to the observed cluster, we are under the induction hypothesis. Using iterated conditioning, we have
+
+$$
+\begin{align*}
+\mathbb{E}(F(C^1)|\mathcal{H}_t^1) &= \mathbb{E}(\mathbb{E}(F(C^1)|\mathcal{H}_{t+}^1)|\mathcal{H}_t^1) \\
+&= \mathbb{E}(G_{t+}^1(C_{t+}^1, N_{t+}^1)|\mathcal{H}_t^1) \\
+&:= G_t^1(C_t^1, N_t^1),
+\end{align*} $$
+
+where in the second equality, we have used the induction step. The last equality (and implicit definition) holds because, conditional on a clock ringing, $C_{t+}^1$ results from $C_t^1$ in two steps:
+---PAGE_BREAK---
+
+(i) First, the newly recorded exposure must have been formed at Step (i) of Algorithm 2 by matching an in-coming half-edge belonging to a recorded failure with an out-going half edge. Due to the independence of the clocks of everything else and the memoryless property of the exponential distribution, this matching is conditionally uniform over all such matchings of unexplored half-edges entering the failure cluster $C_t^1$.
+
+(ii) The induction step can be applied to the total loss $L$: there exists a function $G_{t+}^L$ such that the optimization in (iii) of Algorithm 2 reads
+
+$$ \min_u \mathbb{E}(G_{t+}^L(C_{t+}^1, N_{t+}^1) | \mathcal{H}_t), $$
+
+with the understanding that $C_{t+}^1$ and $N_{t+}^1$ depend on the decision $u_t^1$. Since $u_t^1$ is decided after the exposure is recorded, and since nodes with the same remaining capital $\theta$ are indistinguishable, $u_t^1$ depends on the new information (the new exposure) only through the exposed node's remaining capital $\theta$. Moreover, because the loss criterion depends on the history of the default cluster only through the current state $C_t^1$ and $N_t^1$, it follows that $u_t^1 = u_t^1(\theta, C_t^1, N_t^1)$.
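+The conditional uniformity used in (i) rests on the memorylessness of the Exp(1) clocks: among i.i.d. exponential clocks, the index of the first one to ring is uniformly distributed. A small Monte Carlo illustration (not part of the proof):

```python
import random
from collections import Counter

# With i.i.d. Exp(1) clocks, the index of the first clock to ring is
# uniform over the queue; by memorylessness this remains true after
# conditioning on the past rings.
random.seed(0)
n, trials = 5, 100_000
counts = Counter(min(range(n), key=lambda i: random.expovariate(1.0))
                 for _ in range(trials))
freqs = [counts[i] / trials for i in range(n)]
assert all(abs(f - 1 / n) < 0.01 for f in freqs)
```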
+
+Take now a time $t$ at which no clock rings, and let $T_1$ be the next time a clock rings. In this case
+
+$$
+\begin{align*}
+\mathbb{E}(F(C^1)|\mathcal{H}_t^1) &= \mathbb{E}(\mathbb{E}(F(C^1)|\mathcal{H}_{T_1}^1)|\mathcal{H}_t^1) \\
+&= \mathbb{E}(G_{T_1}^1(C_{T_1}^1, N_{T_1}^1)|\mathcal{H}_t^1) \\
+&= G_t^1(C_t^1, N_t^1),
+\end{align*}
+$$
+
+where the second equality holds since there is a jump at time $T_1$, and $C_{T_1}^1 = C_t^1$: the failure cluster does not change between $t$ and $T_1$. Moreover, because the loss criterion depends only on the failure cluster and is time independent, there are no interventions between $t$ and $T_1$, so $N_{T_1}^1 = N_t^1$, and we obtain the last equality with $G_t^1 = G_{T_1}^1$.
+
+Note that this point concludes the proof of the statement of Proposition 2, that interventions only occur when a clock rings.
+
+For the second algorithm, we consider a time $t$ when a clock rings (the case when no clock rings is treated similarly to the first algorithm):
+
+$$
+\begin{align*}
+\mathbb{E}(F(C^2)|\mathcal{H}_t^2) &= \mathbb{E}(\mathbb{E}(F(C^2)|\mathcal{H}_{t+}^2)|\mathcal{H}_t^2) \\
+&= \mathbb{E}(G_{t+}^2(C_{t+}^2, N_{t+}^2)|\mathcal{H}_t^2) \\
+&:= G_t^2(C_t^2, N_t^2),
+\end{align*}
+$$
+
+where $C_{t+}^2$ denotes the failure cluster resulting from $C_t^2$ using the procedure at Step (iii) of Algorithm 3. We obtain that the control used by the government at this point is Markovian and depends on the new information only through the remaining capital of the exposed node: $u_t^2 = u_t^2(\theta, C_t^2, N_t^2) \in \{0, 1\}$.
+
+We are only left to show that $G_t^1 = G_t^2$, for which it is enough to show that the conditional law of $C_{t+}^1$ is the same as the conditional law of $C_{t+}^2$, for any control choice $u \in \{0, 1\}$. For this, remark that in Algorithm 3 the out-going half edge is chosen uniformly among all unmatched out-going half edges, and the in-coming half edge belongs to a recorded failure.
+---PAGE_BREAK---
+
+Since the past controls $(u_s)_{s<t}$ depend on the history only through the observed cluster and the number of interventions, and since, by the argument in (i) above, the matching in Algorithm 2 is also conditionally uniform over all unmatched out-going half edges, the conditional laws of $C_{t+}^1$ and $C_{t+}^2$ coincide. This gives $G_t^1 = G_t^2$ and concludes the proof. $\Box$
+---PAGE_BREAK---
+
+## 2 Remarks on the equation $\text{div } v = f$
+
+Let $G \subset \mathbb{R}^n$ be a bounded domain, star-shaped with respect to a ball $B_R$. It is well known that for all $f \in L^q(G)$ with $(f)_G = 0$ ¹) the equation $\text{div } v = f$ has a solution $v \in W_0^{1,q}(G)$ such that
+
+$$ \|\nabla v\|_{L^q(G)} \le c \|f\|_{L^q(G)} $$
+
+with $c = \text{const} > 0$, depending on $n, q$ and $G$ (cf. [2], [8]). In fact, the constant $c$ depends on the geometry of $G$ only through the ratio of $G$, which is defined by
+
+$$ \text{ratio}(G) := \frac{R_a(G)}{R_i(G)}, $$
+
+where
+
+$$ R_a(G) = \inf\{R > 0 \mid \exists B_R(x_0) : G \subset B_R(x_0)\}, $$
+
+$$ R_i(G) = \sup\{r > 0 \mid \exists B_r(x_0) : G \text{ is star-shaped w.r.t } B_r(x_0)\}. $$
+
+For instance $\text{ratio}(G) = 1$ if $G$ is a ball, and $\text{ratio}(G) = \sqrt{n}$ if $G$ is a cube. Moreover, the ratio is invariant under translation and scaling, i. e.
+
+$$ \text{ratio}(\lambda G) = \text{ratio}(G) \quad \forall \lambda > 0. $$
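+As a quick check of the values quoted above, consider the unit cube $Q_n = (0,1)^n$. Being convex, $Q_n$ is star-shaped with respect to its inscribed ball, so $R_i(Q_n) = \frac{1}{2}$, while the smallest enclosing ball is the circumscribed one, $R_a(Q_n) = \frac{\sqrt{n}}{2}$. Hence
+
+$$ \text{ratio}(Q_n) = \frac{R_a(Q_n)}{R_i(Q_n)} = \frac{\sqrt{n}/2}{1/2} = \sqrt{n}. $$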
+
+Now, let $G$ be such that $2 < R_i(G) < 3$. In particular, $G$ is star-shaped with respect to a ball $B_2 = B_2(x_0)$. Without loss of generality we may assume that $x_0 = 0$. Let $\phi \in C_0^\infty(B_2)$. We define
+
+$$ \mathcal{B}_\phi f(x) = \int_{\mathbb{R}^n} f(x-y) K_\phi(x,y) dy, \quad x \in \mathbb{R}^n, \quad f \in C_0^\infty(G), $$
+
+where
+
+$$ K_\phi(x,y) = \frac{y}{|y|^n} \int_0^\infty \phi\left(x+r\frac{y}{|y|}\right) (|y|+r)^{n-1} dr, \quad (x,y) \in \mathbb{R}^n \times (\mathbb{R}^n \setminus \{0\}). $$
+
+As in [2], [8] one proves that $\mathcal{B}_\phi f \in C_0^\infty(G)$ for all $f \in C_0^\infty(G)$. In addition, there holds
+
+$$ (2.1) \qquad \|\nabla^k \mathcal{B}_\phi f\|_{L^q(G)} \le c \|\nabla^{k-1} f\|_{L^q(G)} \quad \forall f \in C_0^\infty(G) $$
+
+with a constant depending on $n, k, q, \phi$ and $\text{ratio}(G)$ only. Furthermore, there holds
+
+$$ (2.2) \qquad \text{div} \, \mathcal{B}_\phi f = f \int_{B_2} \phi(y) dy - \phi \int_G f(y) dy \quad \text{in } G. $$
+
+In particular, if $\int_{B_2} \phi(y)dy = 1$ and $\int_G f(y)dy = 0$, then $v = \mathcal{B}_\phi f$ solves the equation $\text{div } v = f$.
+
+Finally, by (2.1) we may extend $\mathcal{B}_\phi$ to an operator in $\mathcal{L}(W_0^{k-1,q}(G), W_0^{k,q}(G))$, denoted again by $\mathcal{B}_\phi$.
+
+¹) Let $A \subset \mathbb{R}^n$ be a measurable set with $\text{mes}(A) > 0$. Given $v \in L^1(A)$, by $(v)_A$ we denote the mean value $\frac{1}{\text{mes}(A)} \int_A v(x)dx$.
+---PAGE_BREAK---
+
+Let $i, j \in \{1, \dots, n\}$. Observing that
+
+$$
+\begin{align*}
+\partial_j \mathcal{B}_\phi (\partial_i f) &= \partial_i \partial_j \mathcal{B}_\phi (f) - \partial_j \mathcal{B}_{\partial_i \phi} (f) && \text{in } G, \\
+\partial_i \partial_j \mathcal{B}_\phi (f) &= \partial_i \mathcal{B}_\phi (\partial_j f) + \partial_i \mathcal{B}_{\partial_j \phi} (f) && \text{in } G,
+\end{align*}
+$$
+
+we see that
+
+$$
+\partial_j \mathcal{B}_\phi (\partial_i f) = \partial_i \mathcal{B}_\phi (\partial_j f) + \partial_i \mathcal{B}_{\partial_j \phi} (f) - \partial_j \mathcal{B}_{\partial_i \phi} (f) \quad \text{in } G.
+$$
+
+By the aid of (2.1), and Poincaré's inequality, using the above identity, we get
+
+$$
+(2.3) \quad \| \partial_j \mathcal{B}_\phi (\partial_i f) \|_{L^q(G)} \le c ( \| \partial_j f \|_{L^q(G)} + \| f \|_{L^q(G)} ) \quad \forall f \in W_0^{1,q}(G),
+$$
+
+$$
+(2.4) \quad \left\{
+\begin{array}{ll}
+\|\nabla^2 \partial_j \mathcal{B}_\phi(\partial_i f)\|_{L^q(G)} & \le c(\|\partial_i \nabla_* \nabla f\|_{L^q(G)} + \|\partial_j \partial_n \partial_n f\|_{L^q(G)} + \|\nabla^2 f\|_{L^q(G)}) \\[0.5ex]
+\forall f \in W_0^{3,q}(G),
+\end{array}
+\right.
+$$
+
+where $c = \text{const} > 0$, depending on $n, q$ and $\text{ratio}(G)$.
+
+Now, let $G$ be a bounded domain, star-shaped with respect to a ball $B$. Let $R := \frac{1}{2}R_i(G)$.
+Thus, there exists a ball $B_R(x_0)$ such that $G$ is star-shaped with respect to $B_R(x_0)$. Without loss of
+generality we may assume that $x_0 = 0$. Let $\phi \in C_0^\infty(B_1)$ with $\int_{B_1} \phi(y)dy = 1$. We define
+
+$\mathcal{B} : W_0^{k-1,q}(G) \to W_0^{k,q}(G)$ by setting
+
+$$
+\mathcal{B}(f)(x) = R\mathcal{B}_{\phi}(\tilde{f})\left(\frac{x}{R}\right), \quad x \in G, \quad f \in W_0^{k-1,q}(G),
+$$
+
+where $\tilde{f}(y) = f(Ry)$ ($y \in R^{-1}G$). Using the transformation formula of the Lebesgue integral, in view of (2.1), we see that
+
+$$
+\begin{equation}
+\begin{aligned}
+\|\nabla^k \mathcal{B}(f)\|_{L^q(G)} &= R^{n/q-k+1} \|\nabla^k \mathcal{B}_\phi(\tilde{f})\|_{L^q(R^{-1}G)} \le c R^{n/q-k+1} \|\nabla^{k-1} \tilde{f}\|_{L^q(R^{-1}G)} \\
+&= c \|\nabla^{k-1} f\|_{L^q(G)},
+\end{aligned}
+\tag{2.5}
+\end{equation}
+$$
+
+where $c = \text{const} > 0$ depends on $n, q$ and $\text{ratio}(R^{-1}G) = \text{ratio}(G)$. In addition, from (2.3), and
+(2.4) we deduce
+
+$$
+(2.6) \quad \| \partial_j \mathcal{B}(\partial_i f) \|_{L^q(G)} \le c ( \| \partial_j f \|_{L^q(G)} + R^{-1} \| f \|_{L^q(G)} ) \quad \forall f \in W_0^{1,q}(G),
+$$
+
+$$
+(2.7) \quad \left\{
+\begin{array}{@{}l@{}}
+\displaystyle \| \nabla^2 \partial_j \mathcal{B}(\partial_i f) \|_{L^q(G)} \le c ( \| \partial_i \nabla_* \nabla f \|_{L^q(G)} + \| \partial_j \partial_n \partial_n f \|_{L^q(G)} + R^{-1} \| \nabla^2 f \|_{L^q(G)} ) \\
+\\
+\forall f \in W_0^{3,q}(G),
+\end{array}
+\right.
+$$
+
+$(i, j = 1, \dots, n)$ with a constant $c$, depending on $n, q$ and $\text{ratio}(G)$ only. Furthermore, from (2.2)
+we get
+
+$$
+(2.8) \quad \operatorname{div} \mathcal{B}(f)(x) = f(x) - \phi\left(\frac{x}{R}\right) R^{-n} \int_G f(y) dy \quad \text{for a.e. } x \in G.
+$$
+
+²) Here $\nabla_*$ denotes the reduced gradient $(\partial_1, \ldots, \partial_{n-1})$.
+---PAGE_BREAK---
+
+## 3 Proof of Theorem 1
+
+**Proof** 1° By decomposing the right-hand side into a solenoidal field and a gradient field, we reduce the problem to the case div **f** = 0. Let **E**: $\mathbf{W}^{k,q}(\Omega) \rightarrow \mathbf{W}^{k,q}(\mathbb{R}^n)$ denote an extension operator such that
+
+$$
+\|Ev\|_{W^{k,q}(\mathbb{R}^n)} \leq c\|v\|_{W^{k,q}(\Omega)} \quad \forall v \in W^{k,q}(\Omega).
+$$
+
+Let $P : W^{k,q}(\mathbb{R}^n) \to W_{0,\sigma}^{k,q}(\mathbb{R}^n)$ denote the Helmholtz-Leray projection. Given $v \in W^{k,q}(\Omega)$ we have
+
+$$
+v = PEv + (I - P)Ev \quad \text{a. e. in } \Omega.
+$$
+
+In addition, there exists a constant $c > 0$ depending only on $n, q, k$ and $\Omega$ such that
+
+$$
+(3.1) \quad \|PEv\|_{W^{k,q}(\Omega)} + \|(I-P)Ev\|_{W^{k,q}(\Omega)} \le c\|v\|_{W^{k,q}(\Omega)} \quad \forall v \in W^{k,q}(\Omega).
+$$
+
+Now, for $f \in L^s(0,T; W^{k,q}(\Omega))$ let $(u,p)$ be a strong solution to (1.1)–(1.4). Observing $I-P = \nabla(\Delta^{-1}\operatorname{div})$ and recalling the definition of $E$, we get
+
+$$
+Ef = PEf + (I - P)Ef = PEf + \nabla(\Delta^{-1} \operatorname{div} Ef) \quad \text{a. e. in } Q.
+$$
+
+Since $\nabla(\Delta^{-1} \operatorname{div} Ef) = \Delta^{-1} \nabla \operatorname{div} Ef \in L^s(0,T; W^{k,q}(\mathbb{R}^n))$, we see that $PEf \in L^s(0,T; W^{k,q}(\mathbb{R}^n))$. Thus, we may replace $f$ by the restriction of $PEf$ to $Q$, and $p$ by the restriction of $p - \Delta^{-1} \operatorname{div} Ef$ to $Q$. Hence, in what follows, without loss of generality we may assume that
+
+$$
+(3.2) \quad \operatorname{div} f = 0, \quad \text{and} \quad \Delta p = 0 \quad \text{a. e. in } Q.
+$$
+
+2° Secondly, we recall the following well-known result due to Giga and Sohr [7].
+
+**Lemma 3.1** Let $\Omega = \mathbb{R}^n$, $\Omega = \mathbb{R}_+^n$, $\Omega$ bounded, or $\Omega$ an exterior domain with $\partial\Omega \in C^2$. For every $g \in L^s(0, T; L_\sigma^q(\Omega))$ ($1 < s, q < +\infty$) there exists a unique solution $(v, \pi) \in L^s(0, T; W_{\text{loc}}^{2,q}(\Omega)) \times L^s(0, T; W_{\text{loc}}^{1,q}(\Omega))$ to the Stokes problem
+
+$$
+\begin{align*}
+& \partial_t v - \Delta v = -\nabla\pi + g \quad \text{and} \quad \operatorname{div} v = 0 \quad \text{in} \quad \Omega \times (0, T), \\
+& v = 0 \quad \text{on} \quad \partial\Omega \times (0, T), \\
+& v(0) = 0 \quad \text{on} \quad \Omega \times \{0\},
+\end{align*}
+$$
+
+such that $\partial_t v$, $\partial_i \partial_j v$, $\nabla \pi \in L^s(0, T; L^q(\Omega))$ ($i, j = 1, \dots, n$), and there holds,
+
+$$
+(3.3) \quad \| \partial_t v \|_{L^s(0,T; L^q(\Omega))} + \| \nabla^2 v \|_{L^s(0,T; L^q(\Omega))} + \| \nabla \pi \|_{L^s(0,T; L^q(\Omega))} \le c \| g \|_{L^s(0,T; L^q(\Omega))},
+$$
+
+where the constant $c$ depends only on $n, s, q$ and $\Omega$.
+
+As a consequence of Lemma 3.1 we get the existence of a unique solution $(u, p) \in L^s(0, T; W_{\text{loc}}^{2,q}(\Omega)) \times L^s(0, T; W_{\text{loc}}^{1,q}(\Omega))$ to the Stokes system (1.1)–(1.4), such that
+
+$$
+(3.4) \quad \| \partial_t u \|_{L^s(0,T; L^q(\Omega))} + \| \nabla^2 u \|_{L^s(0,T; L^q(\Omega))} + \| \nabla p \|_{L^s(0,T; L^q(\Omega))} \le c \| f \|_{L^s(0,T; L^q(\Omega))}.
+$$
+
+3° Local estimates. We restrict ourselves to the case that $\Omega$ is an exterior domain; the case of a bounded domain can be treated in a similar way. Clearly, $G := \mathbb{R}^n \setminus \Omega$ is a bounded domain. Let $G'$, $G''$ be
+---PAGE_BREAK---
+
+bounded open sets such that $\bar{G} \subset G'$ and $\bar{G}' \subset G''$. Set $\Omega'' = \mathbb{R}^n \setminus \bar{G}''$ and $\Omega' = \mathbb{R}^n \setminus \bar{G}'$. Then, let $\zeta \in C^\infty(\mathbb{R}^n)$ denote a cut-off function such that $\zeta \equiv 1$ on $\Omega''$, and $\zeta \equiv 0$ in $G'$. In particular, supp($\nabla \zeta$) $\subset G'' \setminus G'$. Observing $\text{div}(\mathbf{u}(t)\zeta) = \mathbf{u}(t) \cdot \nabla \zeta$, it follows that $\text{supp}(\mathbf{u}(t) \cdot \nabla \zeta) \subset \subset G'' \setminus G'$ for a. e. $t \in (0,T)$.
+
+Next, let $1 < R < +\infty$ be such that $G'' \subset B_R$. By $\mathcal{B}: W_0^{k-1,q}(B_R) \to W_0^{k,q}(B_R)$ we denote the Bogowskii operator defined in Section 2. We now define
+
+$$ z(t) = \mathcal{B}(\mathbf{u}(t) \cdot \nabla \zeta), \quad t \in [0, T). $$
+
+Let $t \in (0, T)$. Since $\int_{B_R} \mathbf{u}(t) \cdot \nabla \zeta \, dx = 0$, in view of (2.8) we have
+
+$$ \operatorname{div} z(t) = \mathbf{u}(t) \cdot \nabla \zeta \quad \text{a. e. in } B_R. $$
+
+Thanks to (2.5), recalling that $\text{ratio}(B_R) = 1$, there exists a constant $c > 0$ depending only on $q$ and $n$ such that
+
+$$ \|z(t)\|_{W^{3,q}(B_R)} \le c\|\mathbf{u}(t) \cdot \nabla \zeta\|_{W^{2,q}(B_R)} \quad \text{for a. e. } t \in (0, T). $$
+
+Making use of the embedding $W_0^{3,q}(B_R) \hookrightarrow W^{3,q}(\mathbb{R}^n)$ the above inequality implies that $z \in L^s(0,T; W^{3,q}(\mathbb{R}^n))$. Together with (3.4), and the Sobolev-Poincaré inequality we obtain
+
+$$ (3.5) \qquad \|z\|_{L^s(0,T; W^{3,q}(\mathbb{R}^n))} \le c\|\mathbf{u}\|_{L^s(0,T; W^{2,q}(\Omega \cap B_R))} \le c\|\mathbf{f}\|_{L^s(0,T; L^q(\Omega))}. $$
+
+By an analogous reasoning taking into account $\partial_t z = \mathcal{B}(\partial_t u \cdot \nabla \zeta)$ a. e. in $\mathbb{R}^n \times (0, T)$ we see
+that $\partial_t z \in L^s(0, T; W^{1,q}(\mathbb{R}^n))$. In addition, by virtue of (3.4) we obtain
+
+$$ (3.6) \qquad \| \partial_t z \|_{L^s(0,T; W^{1,q}(\mathbb{R}^n))} \le c \| \partial_t u \|_{L^s(0,T; L^q(\Omega))} \le c \| f \|_{L^s(0,T; L^q(\Omega))}. $$
+
+Next, let $k \in \{1, \dots, n\}$ be fixed. We define
+
+$$
+\left\{
+\begin{array}{ll}
+v(x,t) = \partial_k (\boldsymbol{u}(x,t)\zeta(x) - z(x,t)), & (x,t) \in \Omega \times (0,T), \\
+v(x,t) = -\partial_k z(x,t), & (x,t) \in (\mathbb{R}^n \setminus \Omega) \times (0,T),
+\end{array}
+\right.
+$$
+
+and
+
+$$
+\left\{
+\begin{array}{ll}
+\pi(x,t) = \partial_k(p(x,t)\zeta(x)), & (x,t) \in \Omega \times (0,T), \\
+\pi(x,t) = 0, & (x,t) \in (\mathbb{R}^n \setminus \Omega) \times (0,T).
+\end{array}
+\right.
+$$
+
+Then the pair $(v, \pi)$ solves the Stokes system
+
+$$
+\begin{align*}
+& \operatorname{div} v = 0 && \text{in } \mathbb{R}^n \times (0, T), \\
+& \partial_t v - \Delta v = -\nabla \pi + g && \text{in } \mathbb{R}^n \times (0, T), \\
+& v = 0 && \text{on } \mathbb{R}^n \times \{0\},
+\end{align*}
+$$
+
+where
+
+$$
+\begin{aligned}
+g ={}& \partial_k\big((p - p_{B_R})\nabla\zeta\big) - 2\partial_k(\nabla u \cdot \nabla\zeta) - \partial_k(u\Delta\zeta) \\
+ & - \partial_k\partial_t z + \partial_k\Delta z + \partial_k(f\zeta)
+\end{aligned}
+\quad
+\text{a. e. in } \mathbb{R}^n \times (0, T).
+$$
+---PAGE_BREAK---
+
+In view of (3.3), (3.5), and (3.6) we see that $\mathbf{g} \in L^s(0,T; L^q(\mathbb{R}^n))$. In addition, there holds
+
+$$
+\|\mathbf{g}\|_{L^s(0,T; L^q(\mathbb{R}^n))} \le c \|\mathbf{f}\|_{L^s(0,T; W^{1,q}(\Omega))}.
+$$
+
+Thus, applying Lemma 3.1 with $\Omega = \mathbb{R}^n$, and using the last inequality we see that
+
+$$
+\begin{align*}
+& \| \partial_t \boldsymbol{v} \|_{L^s(0,T; L^q(\mathbb{R}^n))} + \| \nabla^2 \boldsymbol{v} \|_{L^s(0,T; L^q(\mathbb{R}^n))} + \| \nabla \pi \|_{L^s(0,T; L^q(\mathbb{R}^n))} \\
+&\le c \| \boldsymbol{g} \|_{L^s(0,T; L^q(\mathbb{R}^n))} \\
+&\le c \| \boldsymbol{f} \|_{L^s(0,T; W^{1,q}(\Omega))}.
+\end{align*}
+$$
+
+Recalling the definition of **v**, making use of (3.5), (3.6), and (3.4), we infer from above
+
+$$
+\begin{aligned}
+& \| \zeta \partial_t \partial_k \boldsymbol{u} \|_{L^s(0,T; L^q(\Omega))} + \| \zeta \nabla^2 \partial_k \boldsymbol{u} \|_{L^s(0,T; L^q(\Omega))} + \| \zeta \nabla \partial_k p \|_{L^s(0,T; L^q(\Omega))} \\
+& \qquad \le c \| \boldsymbol{f} \|_{L^s(0,T; W^{1,q}(\Omega))}.
+\end{aligned}
+$$
+
+Iterating the above argument $k$ times, we get
+
+$$
+\begin{equation}
+\begin{aligned}
+& \| \partial_t \boldsymbol{u} \|_{L^s(0,T; \boldsymbol{W}^{k,q}(\Omega'))} + \| \boldsymbol{u} \|_{L^s(0,T; \boldsymbol{W}^{k+2,q}(\Omega'))} + \| \nabla p \|_{L^s(0,T; \boldsymbol{W}^{k,q}(\Omega'))} \\
+& \quad \le c \| \boldsymbol{f} \|_{L^s(0,T; \boldsymbol{W}^{k,q}(\Omega))}
+\end{aligned}
+\tag{3.7}
+\end{equation}
+$$
+
+($k \in \mathbb{N}$), where $c = \text{const} > 0$, depending on $s, q, k$, and $\Omega$ only.
+
+4° Boundary regularity Let $x_0 \in \partial\Omega$. Up to translation and rotation we may assume that $x_0 = 0$
+and $\mathbf{n}(0) = -e_n$, where $\mathbf{n}(0)$ denotes the outward unit normal on $\Omega$ at $x_0$. According to our
+assumption on the boundary of $\Omega$ there exist $0 < R < +\infty$ and $h \in C^{2+k}(B_R')$ such that
+
+(i) $\partial\Omega \cap (B'_R \times (-R, R)) = \{(y', h(y')) ; y' \in B'_R\}$;
+
+(ii) $\{(y', y_n); y' \in B'_R, h(y') < y_n < h(y') + R\} \subset \Omega$;
+
+(iii) $\{(y', y_n); y' \in B'_R, -R + h(y') < y_n < h(y')\} \subset \Omega^c$ ³).
+
+Set $U_R = B'_R \times (-R, R)$, $U_R^+ = B'_R \times (0, R)$, and define $\Phi : U_R \to \Phi(U_R)$ by
+
+$$
+\Phi(y) = (y', h(y') + y_n)^{\top}, \quad y \in U_R.
+$$
+
+An elementary computation gives
+
+$$
+D\Phi(y) =
+\begin{pmatrix}
+1 & 0 & \dots & 0 & 0 \\
+0 & 1 & \dots & 0 & 0 \\
+\vdots & \vdots & \ddots & \vdots & \vdots \\
+0 & 0 & \dots & 1 & 0 \\
+\partial_1 h(y) & \partial_2 h(y) & \dots & \partial_{n-1} h(y) & 1
+\end{pmatrix},
+$$
+
+$$
+(D\Phi(y))^{-1} =
+\begin{pmatrix}
+1 & 0 & \dots & 0 & 0 \\
+0 & 1 & \dots & 0 & 0 \\
+\vdots & \vdots & \ddots & \vdots & \vdots \\
+0 & 0 & \dots & 1 & 0 \\
+-\partial_1 h(y) & -\partial_2 h(y) & \dots & -\partial_{n-1} h(y) & 1
+\end{pmatrix}.
+$$
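+As a quick consistency check, the two matrices above are mutually inverse and $\det D\Phi = 1$. The following SymPy sketch (illustrative only, with symbols `dh1, dh2, dh3` standing in for the entries $\partial_i h(y)$) verifies this for $n = 4$:

```python
import sympy as sp

n = 4                                  # any fixed dimension works
dh = sp.symbols('dh1:%d' % n)          # stand-ins for d_1 h, ..., d_{n-1} h

# D Phi: identity matrix with last row (d_1 h, ..., d_{n-1} h, 1)
DPhi = sp.eye(n)
DPhi[n - 1, : n - 1] = sp.Matrix([list(dh)])

# claimed inverse: same shape with negated entries in the last row
DPhiInv = sp.eye(n)
DPhiInv[n - 1, : n - 1] = sp.Matrix([[-d for d in dh]])

assert DPhi * DPhiInv == sp.eye(n)     # mutually inverse
assert DPhi.det() == 1                 # Phi is volume preserving
```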
+
+³) Here $y' = (y_1, \dots, y_{n-1}) \in \mathbb{R}^{n-1}$, and $B'_R$ denotes the $(n-1)$-dimensional ball $\{(y_1, \dots, y_{n-1}) : y_1^2 + \dots + y_{n-1}^2 < R^2\}$.
+---PAGE_BREAK---
+
+For the outward unit normal at $x = \mathbf{\Phi}(y)$ we have
+
+$$
+\boldsymbol{n}(x) = \boldsymbol{N}(y) = \frac{(\partial_1 h(y), \dots, \partial_{n-1} h(y), -1)}{\sqrt{1 + |\nabla h(y)|^2}}, \quad y \in B'_{R} \times \{0\}.
+$$
+
+In addition, one calculates
+
+$$
+(3.8) \qquad \partial_{x_i} \circ \mathbf{\Phi} = \partial_{y_i} - (\partial_{y_i} h) \partial_{y_n} \quad \text{in } U_R, \quad i = 1, \ldots, n.\,{}^{4)}
+$$
+
+We set $\boldsymbol{U} = \boldsymbol{u} \circ \boldsymbol{\Phi}$, $P = p \circ \boldsymbol{\Phi}$ and $\boldsymbol{F} = \boldsymbol{f} \circ \boldsymbol{\Phi}$ a. e. in $U_R^+ \times (0, T)$. By the aid of (3.8) we easily get
+
+$$
+(3.9) \qquad (\operatorname{div}_x u) \circ \mathbf{\Phi} = \operatorname{div}_y U - \nabla h \cdot \partial_{y_n} U = 0,
+$$
+
+$$
+(3.10) \qquad (\Delta_x u) \circ \mathbf{\Phi} = \Delta_y U - 2\nabla h \cdot \nabla_y \partial_{y_n} U + |\nabla h|^2 \partial_{y_n} \partial_{y_n} U - (\Delta h) \partial_{y_n} U,
+$$
+
+$$
+(3.11) \qquad (\nabla_x p) \circ \mathbf{\Phi} = \nabla_y P - (\nabla h) \partial_{y_n} P,
+$$
+
+a. e. in $U_R^+ \times (0, T)$. Firstly, owing to (3.9) from the equation (1.1) we get
+
+$$
+(3.12) \qquad \operatorname{div}_y U = \nabla h \cdot \partial_{y_n} U \quad \text{a.e. in } U_R^+ \times (0, T),
+$$
+
+and with help of (3.10) and (3.11) the equation (1.2) turns into
+
+$$
+\begin{equation}
+\begin{aligned}
+\partial_t U - \Delta U &= -\nabla P + (\partial_{y_n} P)\nabla h - 2\nabla h \cdot \nabla \partial_{y_n} U + |\nabla h|^2 \partial_{y_n} \partial_{y_n} U \\
+&\quad - (\Delta h)\partial_{y_n} U + F
+\end{aligned}
+\tag{3.13}
+\end{equation}
+$$
+
+a. e. in $U_R^+ \times (0, T)$.
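+The transformation identities above can be checked symbolically. The following SymPy sketch (illustrative only) verifies (3.8) and (3.10) in dimension $n = 2$ for the sample choices $u(x) = \sin x_1 \, e^{x_2}$ and $h(y_1) = \cos y_1$:

```python
import sympy as sp

# Check (3.8) and (3.10) in dimension n = 2 for concrete sample data:
# u is a scalar field in x-coordinates, h the boundary graph (independent of y2).
y1, y2, x1, x2 = sp.symbols('y1 y2 x1 x2')
h = sp.cos(y1)
u = sp.sin(x1) * sp.exp(x2)

subs_phi = {x1: y1, x2: h + y2}   # x = Phi(y) = (y1, h(y1) + y2)
U = u.subs(subs_phi)              # U = u o Phi

# (3.8): (d_{x1} u) o Phi = d_{y1} U - (d_{y1} h) d_{y2} U
lhs = sp.diff(u, x1).subs(subs_phi)
rhs = sp.diff(U, y1) - sp.diff(h, y1) * sp.diff(U, y2)
assert sp.simplify(lhs - rhs) == 0

# (3.10): (Delta_x u) o Phi
#       = Delta_y U - 2 h' d_{y1} d_{y2} U + (h')^2 d_{y2}^2 U - h'' d_{y2} U
lap = (sp.diff(u, x1, 2) + sp.diff(u, x2, 2)).subs(subs_phi)
hp = sp.diff(h, y1)
rhs2 = (sp.diff(U, y1, 2) + sp.diff(U, y2, 2)
        - 2 * hp * sp.diff(U, y1, y2)
        + hp**2 * sp.diff(U, y2, 2)
        - sp.diff(h, y1, 2) * sp.diff(U, y2))
assert sp.simplify(lap - rhs2) == 0
```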
+
+Note that the assumption $n(0) = -e_n$ implies $\nabla h(0) = 0$. We now choose $0 < \delta < +\infty$ sufficiently small, which will be specified later. Since $\nabla h \in C^0(U_R)$, there exists $0 < \rho < \frac{R}{2}$ such that
+
+$$
+(3.14) \qquad |\nabla h(y)| \leq \delta \quad \forall y \in U_{2\rho}.
+$$
+
+Let $\zeta \in C_0^\infty(U_{2\rho})$ denote a cut-off function such that $0 \le \zeta \le 1$ in $U_{2\rho}$, and $\zeta \equiv 1$ on $U_\rho$. We define $\tilde{U}: \mathbb{R}_+^n \times (0,T) \to \mathbb{R}^n$ by
+
+$$
+\tilde{U}(y,t) = \zeta(y)U(y,t), \quad (y,t) \in U_{2\rho}^{+} \times (0,T), \qquad \tilde{U}(y,t) = 0, \quad (y,t) \in (\mathbb{R}_{+}^{n} \setminus U_{2\rho}^{+}) \times (0,T).
+$$
+
+Let $\mathcal{B} : W_0^{k-1,q}(U_{2\rho}^+) \to W^{k,q}(\mathbb{R}_+^n)$ denote the Bogowskii operator defined in Section 2. We set
+
+$$
+\begin{align*}
+z_1(y,t) &= \mathcal{B}(\zeta \nabla h \cdot \partial_{y_n} U)(y,t), \\
+z_2(y,t) &= \mathcal{B}(\nabla \zeta \cdot U)(y,t), && (y,t) \in \mathbb{R}_+^n \times (0,T).
+\end{align*}
+$$
+
+Let $k \in \{1, \dots, n-1\}$ be fixed. We define
+
+$$
+\begin{align*}
+V(y,t) &= \partial_k (\tilde{U}(y,t) - z_1(y,t) - z_2(y,t)), \\
+\Pi(y,t) &= \partial_k (\zeta(y)P(y,t)),
+\end{align*}
+$$
+
+⁴) Since $h$ is independent of $y_n$, there holds $\partial_{x_n} \circ \mathbf{\Phi} = \partial_{y_n}$.
+---PAGE_BREAK---
+
+$(y, t) \in \mathbb{R}_+^n \times (0, T).$ Observing that
+
+$$
+\int_{U_{2\rho}^{+}} \zeta \nabla h \cdot \partial_n U(t) + \nabla \zeta \cdot U(t) dy = \int_{U_{2\rho}^{+}} \text{div}_y \tilde{U}(t) dy = 0 \quad \text{for a. e. } t \in (0, T),
+$$
+
+by the aid of (2.8) we calculate
+
+$$
+(3.15) \quad \operatorname{div}_y \mathbf{V} = \partial_k \left( \zeta \nabla h \cdot \partial_n U + \nabla \zeta \cdot U - \zeta \nabla h \cdot \partial_n U - \nabla \zeta \cdot U \right) = 0
+$$
+
+a. e. in $\mathbb{R}_+^n \times (0,T)$. In addition, taking into account (3.13), we find
+
+$$
+\begin{align*}
+\partial_t V - \Delta V &= \partial_k (\zeta \partial_t U - \zeta \Delta U - 2\nabla\zeta \cdot \nabla U - (\Delta\zeta)U) \\
+&\quad - \partial_k (\partial_t z_1 - \Delta z_1) - \partial_k (\partial_t z_2 - \Delta z_2) \\
+&= -\nabla\Pi + \partial_k((P - P_{U_{2\rho}^+})\nabla\zeta) - \partial_k (2\nabla\zeta \cdot \nabla U + (\Delta\zeta)U) \\
+&\quad - \partial_k (\partial_t z_1 - \Delta z_1) - \partial_k (\partial_t z_2 - \Delta z_2) \\
+&\quad + \partial_k (\zeta(\partial_n P)\nabla h - 2\zeta\nabla h \cdot \nabla\partial_n U + \zeta|\nabla h|^2\partial_n\partial_n U \\
+&\qquad - \zeta(\Delta h)\partial_n U + \zeta F).
+\end{align*}
+$$
+
+Thus, $(V, \Pi)$ solves the following Stokes system
+
+$$
+\begin{array}{lll@{\hspace{4em}}ll}
+\operatorname{div} V = 0 & \text{in} & \mathbb{R}_+^n \times (0, T), & \\
+\partial_t V - \Delta V = -\nabla\Pi + G & \text{in} & \mathbb{R}_+^n \times (0, T), & \\
+V = 0 & \text{on} & \partial\mathbb{R}_+^n \times (0, T),
+\end{array}
+$$
+
+where $G = G_1 + \dots + G_6$ with
+
+$$
+\begin{align*}
+G_1 &= \partial_k ((P - P_{U_{2\rho}^+}) \nabla \zeta), \\
+G_2 &= -\partial_k (2 \nabla \zeta \cdot \nabla U + (\Delta \zeta) U), \\
+G_3 &= -\partial_k (\partial_t z_1 - \Delta z_1), \\
+G_4 &= -\partial_k (\partial_t z_2 - \Delta z_2), \\
+G_5 &= \partial_k (\zeta (\partial_n P) \nabla h - 2 \zeta \nabla h \cdot \nabla \partial_n U + \zeta |\nabla h|^2 \partial_n \partial_n U), \\
+G_6 &= \partial_k (-\zeta (\Delta h) \partial_n U + \zeta F).
+\end{align*}
+$$
+
+In what follows we shall establish some important estimates of $z_1$ and $z_2$, where we will make
+essential use of the properties of $\mathcal{B}$ (cf. Section 2). Starting with $z_1$, we write $z_1 = z_{1,1} + z_{1,2}$,
+where
+
+$$
+z_{1,1} = \mathcal{B}(\partial_n(\zeta\nabla h \cdot U)), \quad z_{1,2} = -\mathcal{B}((\partial_n\zeta)\nabla h \cdot U).
+$$
+
+Let $t \in (0, T)$ be fixed. Using (2.5), (2.6) with $j=k$, $i=n$ and $f = \zeta\nabla h \cdot U$, and observing $\partial_t \mathcal{B} = \mathcal{B}\partial_t$, we see that
+
+$$
+\begin{align*}
+\| \partial_t \partial_k z_1(t) \|_{L^q(\mathbb{R}_+^n)} &\le \| \partial_t \partial_k z_{1,1}(t) \|_{L^q(\mathbb{R}_+^n)} + \| \partial_t \partial_k z_{1,2}(t) \|_{L^q(\mathbb{R}_+^n)} \\
+&\le c \| \partial_t \partial_k (\zeta \nabla h \cdot U)(t) \|_{L^q(\mathbb{R}_+^n)} + c\rho^{-1} \| \partial_t U(t) \|_{L^q(\mathbb{R}_+^n)} \\
+&\le c\delta \| \partial_t \partial_k \tilde{U}(t) \|_{L^q(\mathbb{R}_+^n)} + c (\|h\|_{C^2} + \rho^{-1}) \| \partial_t U(t) \|_{L^q(U_{2\rho}^+)}.
+\end{align*}
+$$
+---PAGE_BREAK---
+
+Raising the above inequality to the $s$-th power and integrating the resulting inequality in time over $(0, T)$, we get
+
+$$
+\begin{equation}
+\begin{aligned}
+& \|\partial_t \partial_k z_1\|_{L^s(0,T; L^q(\mathbb{R}_+^n))} \\
+& \le c\delta \|\partial_t \partial_k \tilde{U}\|_{L^s(0,T; L^q(\mathbb{R}_+^n))} + c(\|h\|_{C^2} + \rho^{-1}) \|\partial_t U\|_{L^s(0,T; L^q(U_R^+))}.
+\end{aligned}
+\tag{3.16}
+\end{equation}
+$$
+
+On the other hand, using (2.5), (2.7) with $j=k$, $i=n$, and $f = \zeta \nabla h \cdot U(t)$, we see that
+
+$$
+\begin{align*}
+\|\nabla^2 \partial_k z_1(t)\|_{L^q(\mathbb{R}_+^n)} &\le c \|\partial_n \nabla_* \nabla(\zeta \nabla h \cdot U)(t)\|_{L^q(\mathbb{R}_+^n)} + c \|\partial_n \partial_n \partial_k (\zeta \nabla h \cdot U)(t)\|_{L^q(\mathbb{R}_+^n)} \\
+&\quad + c\rho^{-1} \|\nabla^2 (\zeta \nabla h \cdot U)(t)\|_{L^q(\mathbb{R}_+^n)} + c \|\nabla^2 ((\partial_n \zeta) \nabla h \cdot U)(t)\|_{L^q(\mathbb{R}_+^n)}.
+\end{align*}
+$$
+
+By means of the product rule and Poincaré's inequality we find
+
+$$
+\|\nabla^2 \partial_k z_1(t) \|_{L^q(\mathbb{R}_+^n)} \le c\delta \|\nabla^2 \nabla_* \tilde{U}(t) \|_{L^q(\mathbb{R}_+^n)} + c (\|h\|_{C^3} + \rho^{-1}) \|\nabla^2 U(t) \|_{L^q(U_R^+)}.
+$$
+
+Raising the above inequality to the $s$-th power and integrating the result in time over $(0, T)$, we obtain
+
+$$
+\begin{equation}
+\begin{aligned}
+\|\nabla^2 \partial_k z_1\|_{L^s(0,T; L^q(\mathbb{R}_+^n))} &\le c\delta \|\nabla^2 \nabla_* \tilde{U}\|_{L^s(0,T; L^q(\mathbb{R}_+^n))} \\
+&\quad + c(\|h\|_{C^3} + \rho^{-1}) \|\nabla^2 U\|_{L^s(0,T; L^q(U_R^+))}.
+\end{aligned}
+\tag{3.17}
+\end{equation}
+$$
+
+By an analogous reasoning, making use of (2.5), and Poincaré’s inequality, we infer
+
+$$
+\begin{equation}
+\begin{aligned}
+& \| \partial_t z_2 \|_{L^s(0,T; L^q(\mathbb{R}_+^n))} + \| \nabla^2 z_2 \|_{L^s(0,T; L^q(\mathbb{R}_+^n))} \\
+& \le c \rho^{-1} \left( \| \partial_t U \|_{L^s(0,T; L^q(U_R^+))} + \| \nabla^2 U \|_{L^s(0,T; L^q(U_R^+))} \right).
+\end{aligned}
+\tag{3.18}
+\end{equation}
+$$
+
+We are now in a position to estimate $\mathbf{G}_1, \dots, \mathbf{G}_6$. First, by virtue of Poincaré's inequality, we easily estimate
+
+$$
+\|\mathbf{G}_1\|_{L^s(0,T; L^q(\mathbb{R}_+^n))} \leq c\rho^{-1} \|\nabla P\|_{L^s(0,T; L^q(U_R^+))}.
+$$
+
+Analogously,
+
+$$
+\|\mathbf{G}_2\|_{L^s(0,T; L^q(\mathbb{R}_+^n))} \le c\rho^{-1} \|\nabla^2 \mathbf{U}\|_{L^s(0,T; L^q(U_R^+))}.
+$$
+
+Next, with the help of (3.16), (3.17), and (3.18) we see that
+
+$$
+\begin{align*}
+& \|\mathbf{G}_3\|_{L^s(0,T; L^q(\mathbb{R}_+^n))} + \|\mathbf{G}_4\|_{L^s(0,T; L^q(\mathbb{R}_+^n))} \\
+&\le c\delta (\| \partial_t \partial_k \tilde{U} \|_{L^s(0,T; L^q(\mathbb{R}_+^n))} + \| \nabla^2 \nabla_* \tilde{U} \|_{L^s(0,T; L^q(\mathbb{R}_+^n))}) \\
+&\quad + c (\|h\|_{C^3} + \rho^{-1}) (\| \partial_t U \|_{L^s(0,T; L^q(U_R^+))} + \| \nabla^2 U \|_{L^s(0,T; L^q(U_R^+))}).
+\end{align*}
+$$
+
+Then applying the product rule, and using Poincaré’s inequality, we get
+
+$$
+\begin{split}
+\|\mathbf{G}_5\|_{L^s(0,T; L^q(\mathbb{R}_+^n))} &\le c\delta (\|\nabla\Pi\|_{L^s(0,T; L^q(\mathbb{R}_+^n))} + \|\nabla^2 \nabla_* \tilde{U}\|_{L^s(0,T; L^q(\mathbb{R}_+^n))}) \\
+&\quad + c(\|h\|_{C^2} + \rho^{-1})(\|\nabla P\|_{L^s(0,T; L^q(U_R^+))} + \|\nabla^2 U\|_{L^s(0,T; L^q(U_R^+))}).
+\end{split}
+$$
+---PAGE_BREAK---
+
+Finally, we estimate
+
+$$
+\begin{align*}
+\|\mathbf{G}_6\|_{L^s(0,T; L^q(\mathbb{R}_+^n))} &\le c (\|h\|_{C^3} + \rho^{-1}) (\|\nabla P\|_{L^s(0,T; L^q(U_R^+))} + \|\nabla^2 \mathbf{U}\|_{L^s(0,T; L^q(U_R^+))}) \\
+&\quad + c\rho^{-1} \|\mathbf{F}\|_{L^s(0,T; L^q(U_R^+))} + c \|\partial_k \mathbf{F}\|_{L^s(0,T; L^q(U_R^+))}.
+\end{align*}
+$$
+
+Appealing to Lemma 3.1 (cf. [7]) for the case $\Omega = \mathbb{R}_+^n$ and using the above estimates for $\mathbf{G}_1, \dots, \mathbf{G}_6$, we obtain
+
+$$
+\begin{align*}
+& \| \partial_t \mathbf{V} \|_{L^s(0,T; L^q(\mathbb{R}_+^n))} + \| \nabla^2 \mathbf{V} \|_{L^s(0,T; L^q(\mathbb{R}_+^n))} + \| \nabla \Pi \|_{L^s(0,T; L^q(\mathbb{R}_+^n))} \\
+&\le c \| \mathbf{G}_1 + \dots + \mathbf{G}_6 \|_{L^s(0,T; L^q(\mathbb{R}_+^n))} \\
+&\le c\delta (\| \partial_t \nabla_* \tilde{\mathbf{U}} \|_{L^s(0,T; L^q(\mathbb{R}_+^n))} + \| \nabla^2\nabla_* \tilde{\mathbf{U}} \|_{L^s(0,T; L^q(\mathbb{R}_+^n))} + \| \nabla\Pi \|_{L^s(0,T; L^q(\mathbb{R}_+^n))}) \\
+&\quad + c (\|h\|_{C^3} + \rho^{-1}) (\| \partial_t \mathbf{U} \|_{L^s(0,T; L^q(U_R^+))} + \| \nabla^2 \mathbf{U} \|_{L^s(0,T; L^q(U_R^+))} \\
+&\quad\quad + \| \nabla P \|_{L^s(0,T; L^q(U_R^+))} + \| \mathbf{F} \|_{L^s(0,T; W^{1,q}(U_R^+))}).
+\end{align*}
+$$
+
+Recalling that $\mathbf{V} = \partial_k(\tilde{\mathbf{U}} - z_1 - z_2)$ and making use of (3.16), (3.17), and (3.18), we infer from the last inequality
+
+$$
+\begin{equation}
+\begin{aligned}
+& \| \partial_t \nabla_* \tilde{\mathbf{U}} \|_{L^s(0,T; L^q(\mathbb{R}_+^n))} + \| \nabla^2 \nabla_* \tilde{\mathbf{U}} \|_{L^s(0,T; L^q(\mathbb{R}_+^n))} + \| \nabla \Pi \|_{L^s(0,T; L^q(\mathbb{R}_+^n))} \\
+&\le c_0 \delta ( \| \partial_t \nabla_* \tilde{\mathbf{U}} \|_{L^s(0,T; L^q(\mathbb{R}_+^n))} + \| \nabla^2 \nabla_* \tilde{\mathbf{U}} \|_{L^s(0,T; L^q(\mathbb{R}_+^n))} + \| \nabla \Pi \|_{L^s(0,T; L^q(\mathbb{R}_+^n))}) \\
+&\quad + c_1 ( \| \partial_t \mathbf{U} \|_{L^s(0,T; L^q(U_R^+))} + \| \nabla^2 \mathbf{U} \|_{L^s(0,T; L^q(U_R^+))} \\
+&\quad\quad + \| \nabla P \|_{L^s(0,T; L^q(U_R^+))} + \| \mathbf{F} \|_{L^s(0,T; W^{1,q}(U_R^+))} ),
+\end{aligned}
+\tag{3.19}
+\end{equation}
+$$
+
+where $c_0 = c_0(n, q, s)$ and $c_1 = c_1(n, q, s, \|h\|_{C^3}, \rho)$. On the other hand, recalling the definition of $\mathbf{U}, P$, and $\mathbf{F}$, with the help of (3.10), (3.11), and (3.7) we find
+
+$$
+\begin{equation}
+\begin{split}
+& \| \partial_t \mathbf{U} \|_{L^s(0,T; L^q(U_R^+))} + \| \nabla^2 \mathbf{U} \|_{L^s(0,T; L^q(U_R^+))} \\
+&\quad + \| \nabla P \|_{L^s(0,T; L^q(U_R^+))} + \| \mathbf{F} \|_{L^s(0,T; W^{1,q}(U_R^+))}
+\le c \| \mathbf{f} \|_{L^s(0,T; W^{1,q}(\Omega))}
+\end{split}
+\tag{3.20}
+\end{equation}
+$$
+
+with a constant $c$ depending on $n, q, s$, and $h$. Now, in (3.19) we take $\delta = \frac{1}{2c_0}$ and estimate the right-hand side of (3.19) with the aid of (3.20). This leads to
+
+$$
+\begin{align*}
+& \| \partial_t \nabla_* \tilde{\mathbf{U}} \|_{L^s(0,T; L^q(\mathbb{R}_+^n))} + \| \nabla^2 \nabla_* \tilde{\mathbf{U}} \|_{L^s(0,T; L^q(\mathbb{R}_+^n))} + \| \nabla \Pi \|_{L^s(0,T; L^q(\mathbb{R}_+^n))} \\
+&\le c_2 \| \mathbf{f} \|_{L^s(0,T; W^{1,q}(\Omega))},
+\end{align*}
+$$
+
+where $c_2 = c_2(n, q, s, \|h\|_{C^3}, \rho)$.
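For completeness, the absorption step just performed can be written out explicitly: denoting by $X$ the left-hand side of (3.19) and by $Y$ the bracket of data norms over $U_R^+$, inequality (3.19) reads $X \le c_0\delta X + c_1 Y$, and

$$
\delta = \frac{1}{2c_0} \quad\Longrightarrow\quad X \le \tfrac{1}{2}X + c_1 Y \quad\Longrightarrow\quad X \le 2c_1 Y .
$$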
+
+By a standard iteration argument we obtain
+
+$$
+\begin{equation}
+\begin{split}
+& \| \partial_t \nabla_*^k U \|_{L^{s}(0,T; L^{q}(U_\rho^{+}))} + \| \nabla^2 \nabla_*^k U \|_{L^{s}(0,T; L^{q}(U_\rho^{+}))} + \| \nabla\nabla_*^k P \|_{L^{s}(0,T; L^{q}(U_\rho^{+}))} \\
+&\le c \| f \|_{L^{s}(0,T; W^{1,q}(\Omega))},
+\end{split}
+\tag{3.21}
+\end{equation}
+$$
+
+where $c$ is a constant depending only on $n, q, s, k, \|h\|_{C^{k+2}}$, and $\rho$.
+
+5° **Estimation of the full pressure gradient.** Recalling that $\Delta_x p = 0$, with the help of (3.10) we calculate
+
+$$
+\begin{align*}
+0 &= (\Delta_x p) \circ \Phi = \Delta_y P - 2\nabla h \cdot \nabla_* \partial_{y_n} P + |\nabla h|^2 \partial_{y_n} \partial_{y_n} P - (\Delta h) \partial_{y_n} P \\
+ &= (1 + |\nabla h|^2) \partial_{y_n} \partial_{y_n} P + \Delta_y' P - 2\nabla h \cdot \nabla_* \partial_{y_n} P - (\Delta h) \partial_{y_n} P \,{}^{5)}
+\end{align*}
+$$
+---PAGE_BREAK---
+
+a.e. in $U_R^+$. Thus,
+
+$$ (1 + |\nabla h|^2) \partial_{y_n} \partial_{y_n} P = -\Delta_y' P + 2\nabla h \cdot \nabla_* \partial_{y_n} P + (\Delta h) \partial_{y_n} P $$
+
+a.e. in $U_R^+$. From this identity, along with (3.21) for $k=1$, it follows that
+
+$$
+\begin{aligned}
+\|\nabla_y^2 P\|_{L^s(0,T; L^q(U_\rho^+))} &\le c (\|\nabla_* \nabla_y P\|_{L^s(0,T; L^q(U_\rho^+))} + \|\nabla_y P\|_{L^s(0,T; L^q(U_\rho^+))}) \\
+&\le c \|f\|_{L^s(0,T; W^{1,q}(\Omega))}.
+\end{aligned}
+$$
+
+Choosing $\rho \in (0, \frac{R}{2})$ sufficiently small and applying the above argument $k$ times, we get
+
+$$ (3.22) \quad \|\nabla_y^{k+1} P\|_{L^s(0,T; L^q(U_\rho^+))} \le c \|f\|_{L^s(0,T; W^{k,q}(\Omega))} $$
+
+with a constant $c$ depending on $n, q, s, k$, $\|h\|_{C^{k+2}}$, and $\rho$.
+
+Finally, a standard covering argument, together with (3.22) and (3.7), gives the estimate (1.5), which completes the proof of Theorem 1. ■
+
+**Acknowledgements** The present research has been supported by the German Research Foundation (DFG) through the project WO1988/1-1; 612414.
+
+**References**
+
+[1] R. A. Adams, *Sobolev Spaces*, Academic Press, Boston, 1978.
+
+[2] M. E. Bogowskii, *Solution to the First Boundary Value Problem for the Equation of Continuity of an Incompressible Medium*, Soviet Math. Dokl. **20** (1979), 1094–1098.
+
+[3] W. Borchers, H. Sohr, *On the semigroup of the Stokes operator for exterior domains*, Math. Z. **196** (1987), 415–425.
+
+[4] R. Farwig, H. Kozono, H. Sohr, *An $L^q$-approach to Stokes and Navier-Stokes equations in general domains*, Acta Math. **195** (2005), 21–53.
+
+[5] Y. Giga, *Analyticity of the semigroup generated by the Stokes operator in $L_r$ spaces*, Math. Z. **178** (1981), 297–329.
+
+[6] Y. Giga, *Domains of fractional powers of the Stokes operator in $L_r$ spaces*, Arch. Rational Mech. Anal. **89** (1985), 251–265.
+
+[7] Y. Giga, H. Sohr, *Abstract $L^p$ estimates for the Cauchy problem with applications to the Navier-Stokes equations in exterior domains*, J. Funct. Anal. **102** (1991), 72–94.
+
+[8] G. P. Galdi, *An Introduction to the Mathematical Theory of the Navier-Stokes Equations, Linearized Steady Problems*, Vol. 38, Springer, New York, 1994.
+
+⁵) Here $\Delta_y'$ stands for the differential operator $\partial_{y_1}\partial_{y_1} + \dots + \partial_{y_{n-1}}\partial_{y_{n-1}}$.
+---PAGE_BREAK---
+
+[9] J. G. Heywood, O. D. Walsh, *A counter-example concerning the pressure in the Navier-Stokes equations, as $t \to 0^+$*, Pacific J. Math. **164** (1994), No. 2.
+
+[10] E. Hopf, *Über die Anfangswertaufgabe für die hydrodynamischen Grundgleichungen*, Math. Nachr. **4** (1950/1951), 213.
+
+[11] J. Leray, *Sur le mouvement d'un liquide visqueux emplissant l'espace*, Acta Math. **63** (1934), 193–248.
+
+[12] H. Sohr, *The Navier-Stokes Equations. An Elementary Functional Analytic Approach*, Birkhäuser Advanced Texts, Birkhäuser Verlag, Basel, 2001.
+
+[13] H. Sohr, W. von Wahl, *On the regularity of the pressure of weak solutions of Navier-Stokes equations*, Arch. Math. **46** (1986), 428–439.
+
+[14] V. A. Solonnikov, *Estimates for solutions of non-stationary Navier-Stokes equations*, J. Soviet Math. **8** (1977), 467–523.
+
+---PAGE_BREAK---
+
+HUANG Liang, LAI Ying-cheng, Kwangho Park, WANG Xin-gang,
+LAI Choy Heng, Robert A. Gatenby
+
+Synchronization in complex clustered networks
+
+© Higher Education Press and Springer-Verlag 2007
+
+**Abstract** Synchronization in complex networks has been an active area of research in recent years. While much effort has been devoted to networks with the small-world and scale-free topologies, structurally they are often assumed to have a single, densely connected component. Recently it has also become apparent that many networks in social, biological, and technological systems are clustered, as characterized by a number (or a hierarchy) of sparsely linked clusters, each with dense and complex internal connections. Synchronization is fundamental to the dynamics and functions of complex clustered networks, but this problem has just begun to be addressed. This paper reviews some progress in this direction by focusing on the interplay between the clustered topology and network synchronizability. In particular, there are two parameters characterizing a clustered network: the intra-cluster and the inter-cluster link density. Our goal is to clarify the roles of these parameters in shaping network synchronizability. By using theoretical analysis and direct numerical simulations of oscillator networks, it is demonstrated that clustered networks with random inter-cluster links are more synchronizable, and synchronization can be optimized when inter-cluster and intra-cluster links match. The latter result has one counterintuitive implication: more links, if placed improperly, can actually lead to destruction of synchronization, even though such links tend to decrease the average network distance. It is hoped that this review will help attract attention to the fundamental problem of clustered structures/synchronization in network science.
+
+HUANG Liang, LAI Ying-cheng(✉), Kwangho Park
+Department of Electrical Engineering, Arizona State University, Tempe,
+Arizona 85287, USA
+E-mail: yclai2@chaos1.la.asu.edu
+
+**Keywords** clustered networks, network analysis, synchronization, oscillators
+
+**PACS numbers** 05.45.Xt, 05.45.Ra, 89.75.Hc
+
+LAI Ying-cheng
+Department of Physics and Astronomy, Arizona State University,
+Tempe, Arizona 85287, USA
+
+WANG Xin-gang
+Temasek Laboratories, National University of Singapore, Singapore,
+117508
+
+WANG Xin-gang, LAI Choy Heng
+Beijing-Hong Kong-Singapore Joint Centre for Nonlinear & Complex
+Systems (Singapore), National University of Singapore, Kent Ridge,
+Singapore, 119260
+
+LAI Choy Heng
+Department of Physics, National University of Singapore, Singapore,
+117542
+
+Robert A Gatenby
+Department of Radiology and Applied Mathematics, University of Arizona,
+Tucson, Arizona 85721, USA
+
+Received July 24, 2007
+
+# 1 Introduction
+
+Recent years have witnessed a growing interest in the synchronizability of complex networks [1–21]. Earlier works [1–10] suggest that small-world [22], random [23] and scale-free [24] networks, due to their short network distances, are generally more synchronizable than regular networks. It has been found, however, that the heterogeneous degree distributions typically seen in scale-free networks can inhibit their synchronizability [11], but adding suitable weights to the network elements, i.e., assigning large weights to nodes with large degrees (the degree being the number of links), can enhance synchronization [12–21]. Modifying local connecting structures, if done properly, could also change the
+---PAGE_BREAK---
+
+synchronizability significantly [25–32]. Synchronizability of complex clustered networks has not been investigated until recently [33–36]. The purpose of this paper is to review recent progress in this new area of research in network science.
+
+In general, a clustered network consists of a number of groups, in which nodes within each group are densely connected, but the linkages among the groups are sparse [37–45]. Clustered networks were first described by Zachary in 1977 as a model of social networks with group structures [46]. In particular, he examined the organization of martial-art clubs in a city and found that a number of clubs had actually originated from one root club, where the owners of those clubs had been students in the root club. As a result, members within each club are close to each other, but interactions with members from a different club are much less likely. This type of organization can also be found commonly in the business world, where restructuring and recombination are routine practices. The clustered structure can explain familiar social experiences such as quick identification of acquaintances. For example, once two people are introduced, they describe themselves in terms of their social characteristics (e.g., professions, places of work, and leisure activities, etc.). Next, each of them cites friends with social characteristics “close” to those of the other person. This is actually a second step in the process of introduction, but can be effectively seen as an attempt to find chains of acquaintances linking them. The success of this attempt depends on the clustered structure of the social network. Clustered networks have recently been systematically studied and analyzed in social science [37–39].
+
+Besides in social science, clustered networks can arise in biological situations [40–42], as a key feature in a biological system is the tendency to form a clustered structure. For example, proteins with a common function are usually physically associated via stable protein-protein interactions to form larger macromolecular assemblies. These protein complexes (or clusters) are often linked by extended networks of weaker, transient protein-protein interactions to form interaction networks that integrate pathways mediating major cellular processes[40]. As a result, the protein-protein interaction network can naturally be viewed as an assembly of interconnected functional modules, or a clustered network. Furthermore, macroscopic tissues, a network of intercellular communication, typically exhibit clustered organizational structures in which organs largely consist of repeating, densely-connected, functional subunits such as the glomeruli in kidneys, liver lobules, and colon crypts. Interestingly, these organizational structures are greatly diminished in a cancer but still often persist as non-functional caricatures of the tissue of
+
+origin. The organizational structure of biological networks in organs represents an optimal strategy in the sense that synchronization must be maintained over a wide range of environments, e.g., an organ needs to be able to adapt itself to the environment and continue to function in the face of perturbations such as injury or infection. In addition, the strategy for optimal system dynamics within the cluster will probably be different from that for connecting the clusters into an organ. The clustered structure has also been identified in technological networks such as electronic circuits and computer networks [43–45].
+
+A complex clustered network is typically small-world in that its average network distance is short. Moreover, its degree distribution can be made quite homogeneous. For the synchronization problem, an interesting question regarding the clustered structure is that for a given number of inter-cluster links, how would their distributions affect the synchronizability? By Use of linear stability analysis [47] and its generalization [48], the dependence of synchronizability on the number of clusters in a network has been studied [34], and it has been found that the network can become more synchronizable as the number of clusters is increased if the inter-cluster links are random. If those links of the clusters are mostly local or diametrical in a topological ring structure, the synchronizability would deteriorate continuously as more clusters appear in the network. Therefore, how links distribute among clusters has a significant influence on network synchronizability. A relevant question is that for a given distribution of the inter-cluster links, say random distribution, how does the number of links influence synchronizability? Given a complex network with a fixed (large) number of nodes, intuitively, its synchronizability can be improved by increasing the number of links, as a denser linkage makes the network more tightly coupled or, “smaller,” thereby facilitating synchronization. Our recent work [35, 36] on this problem has revealed a phenomenon that apparently contradicts this intuition. Namely, more links, which makes the network smaller, do not necessarily lead to stronger synchronizability. There can be situations where more links can even suppress synchronization if placed improperly. In particular, it is found that the synchronizability of a clustered network is determined by the interplay between the inter-cluster and intra-cluster links in the network. Strong synchronizability requires that the numbers of the two types of links be approximately matched. 
+In this case, increasing the number of links can indeed enhance synchronizability. However, if the matching deteriorates, synchronization can be severely suppressed or even totally destroyed.
+
+Our finding can have a potential impact on real network dynamics.
+---PAGE_BREAK---
+
+In biology, synchronization is fundamental, and many biological functions rely on it. Our result implies that, in order to achieve robust synchronization for a clustered biological network, the characteristics of the links are more important than their number. Simply counting links may not be enough to determine the synchronizability of such a network; instead, links should be distinguished and classified to predict synchronization-based functions of the network. In technological applications, suppose that a large-scale, parallel computational task is to be accomplished by a computer network, for which synchronous timing is of paramount importance. Our result can provide clues as to how to design the network to achieve the best possible synchronization and consequently optimal computational efficiency.
+
+In Section 2, a general linear-stability analysis is described for solving the synchronization problem in both continuous-time oscillator networks and discrete-time coupled-map networks. In Section 3, a theory is developed and numerical results are presented to demonstrate the effects of the distribution of inter-cluster links. In Section 4, the emphasis is placed on clustered networks with random inter-cluster links, and how the number of links affects the synchronizability of the oscillator network is examined. Two types of coupling schemes are studied in detail, theoretically and numerically. Extensive discussions of the main results and their broader implications are offered in Section 5.
+
+# 2 Linear-stability analysis for synchronization
+
+The approach taken to establish the result is to introduce nonlinear dynamics to each node in the network and then perform stability and eigenvalue analyses [48, 49].
+
+## 2.1 Continuous-time oscillator networks
+
+The goal is to establish synchronization conditions of clustered networks in a proper network-parameter space. Each oscillator, when isolated, is described by
+
+$$
+\frac{dx}{dt} = F(x) \tag{1}
+$$
+
+where $x$ is a $d$-dimensional vector and $F(x)$ is the velocity field. Without loss of generality, a prototype oscillator model is chosen: the Rössler oscillator, for which $x = [x, y, z]^T$ ($[*]^T$ denotes the transpose), and
+
+$$
+F(x) = [-(y+z), x+ay, b+z(x-c)]^T \quad (2)
+$$
+
+The parameters of the Rössler oscillator are chosen such that it exhibits chaotic oscillations. The network dynamics is described by
+
+$$
+\frac{dx_i}{dt} = F(x_i) - \epsilon \sum_{j=1}^{N} G_{ij} H(x_j) \quad (3)
+$$
+
+where $H(x) = [x, 0, 0]^T$ is a linear coupling function, $\epsilon$ is a global coupling parameter, and $G$ is the coupling matrix determined by the network topology. For theoretical convenience, $G$ is assumed to satisfy the condition $\sum_{j=1}^{N} G_{ij} = 0$ for any $i$, where $N$ is the network size; therefore, the system permits an exact synchronized solution: $x_1 = x_2 = \dots = x_N = s$, where $\frac{ds}{dt} = F(s)$.
+
+For the system described by Eq. (3), the variational equations governing the time evolution of the set of infinitesimal vectors $\delta x_i(t) \equiv x_i(t) - s(t)$ are
+
+$$
+\frac{d\delta x_i}{dt} = DF(s) \cdot \delta x_i - \epsilon \sum_{j=1}^{N} G_{ij} DH(s) \cdot \delta x_j \quad (4)
+$$
+
+where $DF(s)$ and $DH(s)$ are the Jacobian matrices of the corresponding vector functions evaluated at $s(t)$. Diagonalizing the coupling matrix $G$ yields a set of eigenvalues $\{\lambda_i\}$ ($i = 1, \dots, N$); the corresponding normalized eigenvectors are denoted by $e_1, e_2, \dots, e_N$. Generally, the eigenvalues are real and non-negative [49] and thus can be sorted as $0 = \lambda_1 < \lambda_2 \le \dots \le \lambda_N$. The smaller the ratio $\lambda_N/\lambda_2$, the stronger the synchronizability of the network (to be discussed below) [11–14, 17]. The transform $\delta y = O^{-1} \cdot \delta x$, where $O$ is a matrix whose columns are the set of eigenvectors, leads to the following block-diagonally decoupled form of Eq. (4):
+
+$$
+\frac{d\delta y_i}{dt} = [DF(s) - \epsilon \lambda_i DH(s)] \cdot \delta y_i
+$$
+
+Letting $K = \epsilon\lambda_i$ ($i = 2, \dots, N$) denote the normalized coupling parameter, the above equation can be written as follows:
+
+$$
+\frac{d\delta y}{dt} = [DF(s) - KDH(s)] \cdot \delta y \quad (5)
+$$
+
+The largest Lyapunov exponent from Eq. (5) is the master stability function $\Psi(K)$ [48]. If $\Psi(K)$ is negative, a small disturbance from the synchronization state will be diminished exponentially, and the system is stable and can be synchronized; if $\Psi(K)$ is positive, a small disturbance will be magnified and the system cannot be synchronized.
+
+The function $\Psi(K)$ is negative in an interval $[K_1, K_2]$, where $K_1$ and $K_2$ depend solely on $F(x)$, which governs the node dynamics, and on the output function $H(x)$. Thus, for $K_1 < K < K_2$, all the eigenvectors (eigenmodes) are transversely stable and the network can be synchronized, which gives the condition for the boundary of the synchronization region:
+---PAGE_BREAK---
+
+
+$$
+\lambda_2 \ge \frac{K_1}{\epsilon} \tag{6}
+$$
+
+$$
+\lambda_N \le \frac{K_2}{\epsilon} \qquad (7)
+$$
+
+For the Rössler oscillators used in the simulations in Section 4.2, $a = b = 0.2$ and $c = 9$ are chosen; the master stability function is shown in Fig. 1, where $K_1 \approx 0.2$ and $K_2 \approx 4.62$.
+
+Fig. 1 For the Rössler oscillator network, an example of the master stability function $\Psi(K)$ calculated from Eq. (5).
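As a rough numerical sketch of how a master stability function like the one in Fig. 1 can be obtained, one can integrate a Rössler trajectory together with the variational equation (5) and average the logarithmic growth rate of the tangent vector. The implementation below is our own minimal illustration (the RK4 integrator, step size, and iteration counts are arbitrary choices, not taken from the paper):

```python
import math

def rossler_msf(K, a=0.2, b=0.2, c=9.0, dt=0.02, n_steps=120_000, n_transient=20_000):
    """Estimate the master stability function Psi(K): the largest Lyapunov
    exponent of d(dy)/dt = [DF(s) - K*DH(s)] dy along a trajectory s(t) of
    the Rossler system, with H(x) = [x, 0, 0]^T."""
    def deriv(w):
        # w = (x, y, z, vx, vy, vz): trajectory state + tangent vector
        x, y, z, vx, vy, vz = w
        return (-(y + z), x + a * y, b + z * (x - c),
                -K * vx - vy - vz, vx + a * vy, z * vx + (x - c) * vz)

    w = [1.0, 1.0, 1.0, 1.0, 0.0, 0.0]
    acc = 0.0
    for i in range(n_steps):
        # one RK4 step for the combined (trajectory, tangent) system
        k1 = deriv(w)
        k2 = deriv([u + 0.5 * dt * d for u, d in zip(w, k1)])
        k3 = deriv([u + 0.5 * dt * d for u, d in zip(w, k2)])
        k4 = deriv([u + dt * d for u, d in zip(w, k3)])
        w = [u + dt * (d1 + 2 * d2 + 2 * d3 + d4) / 6
             for u, d1, d2, d3, d4 in zip(w, k1, k2, k3, k4)]
        # renormalise the tangent vector, recording its growth rate
        norm = math.sqrt(w[3] ** 2 + w[4] ** 2 + w[5] ** 2)
        w[3] /= norm; w[4] /= norm; w[5] /= norm
        if i >= n_transient:
            acc += math.log(norm)
    return acc / ((n_steps - n_transient) * dt)
```

At $K = 0$ the routine reduces to the largest Lyapunov exponent of the uncoupled chaotic oscillator (positive), while for $K$ inside $(K_1, K_2) \approx (0.2, 4.62)$ it should return a negative value.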
+
+## 2.2 Discrete-time coupled-map networks
+
+The following general class of coupled-map networks is considered:
+
+$$
+x_{m+1}^i = f(x_m^i) - \epsilon \sum_j G_{ij} H[f(x_m^j)] \quad (8)
+$$
+
+where $x_{m+1} = f(x_m)$ is a $d$-dimensional map, $\epsilon$ is a global coupling parameter, $G$ is the coupling matrix, and $H$ is a coupling function. If the rows of the coupling matrix $G$ have zero sum, Eq. (8) permits an exact synchronized solution: $x_m^1 = x_m^2 = \cdots = x_m^N = s_m$, where $s_{m+1} = f(s_m)$.
+
+For the system described by Eq. (8), the variational equation governing the time evolution of the set of infinitesimal vectors $\delta x^i \equiv x^i - s$ is
+
+$$
+\delta x_{m+1}^i = Df(s_m) \cdot \delta x_m^i - \epsilon \sum_j G_{ij} DH[f(s_m)] \\
+\cdot Df(s_m) \cdot \delta x_m^j \quad (9)
+$$
+
+where $Df$ and $DH$ are the Jacobian matrices of the corresponding vector functions evaluated at $s_m$ and $f(s_m)$, respectively. Diagonalizing the coupling matrix $G$ yields a set of eigenvalues $\{\lambda_i\}$ ($i = 1, \dots, N$), where $0 = \lambda_1 < \lambda_2 \le \dots \le \lambda_N$. The transform $\delta y = O^{-1} \cdot \delta x$, where $O$ is a matrix whose columns are the set of eigenvectors, leads to the following block-diagonally decoupled form of Eq. (9):
+
+$$
+\delta y_{m+1}^{i} = \{\mathbf{I} - \epsilon \lambda_i \mathbf{D} \mathbf{H}[\mathbf{f}(s_m)]\} \cdot \mathbf{D} \mathbf{f}(s_m) \cdot \delta y_m^i \quad (10)
+$$
+
+Compared with Eq. (5), and letting $K = \epsilon\lambda_i$, the largest Lyapunov exponent of Eq. (10) can be called its master stability function, denoted again by $\Psi(K)$. Generally, there exist $K_1$ and $K_2$ such that $\Psi(K)$ is negative in the interval $[K_1, K_2]$, for which the synchronization solution of the coupled system is linearly stable. For a special class of coupled-map networks, $K_1$ and $K_2$ can be obtained explicitly as follows.
+
+The system is stable if, for every $i$ with $2 \le i \le N$, the following holds:
+
+$$
+\lim_{m \to \infty} \frac{1}{m} \ln \frac{|\delta \mathbf{y}_m^i|}{|\delta \mathbf{y}_0^i|} =
+\\
+\lim_{m \to \infty} \frac{1}{m} \ln \prod_{j=0}^{m-1} \frac{|\delta \mathbf{y}_{j+1}^i|}{|\delta \mathbf{y}_j^i|} < 0
+\quad (11)
+$$
+
+For a linear coupling function $H$, $DH$ is a constant matrix. If the system is one-dimensional, $DH$ is simply a constant, say $\gamma$. Equation (11) becomes
+
+$$
+\ln |1 - \epsilon \lambda_i \gamma| + \lim_{m \to \infty} \frac{1}{m} \ln \prod_{j=0}^{m-1} |f'(s_j)| < 0 \quad (12)
+$$
+
+Recognizing that the second term in the above equation is the Lyapunov exponent $\mu$ of a single map, one obtains
+
+$$
+\ln |1 - \epsilon \lambda_i \gamma| + \mu < 0
+\quad (13)
+$$
+
+which is
+
+$$
+|e^\mu(1 - \epsilon\lambda_i\gamma)| < 1, \quad i = 2, \dots, N
+\quad (14)
+$$
+
+To gain insight, $f(x)$ is set to be the one-dimensional logistic map $f(x) = 1 - ax^2$ and $H(f) = f$ is chosen. Choosing $\sum_{j \neq i} G_{ij} = -1$ gives $\gamma = 1$, and Eq. (14) becomes [49]
+
+$$
+|e^\mu (1 - \epsilon\lambda_i)| < 1, \quad i = 2, \dots, N
+\quad (15)
+$$
+
+Because of the ordering of the eigenvalues, the above inequality will hold for all $i$ if it holds for $i = 2$ and $i = N$. Therefore, condition (15) can be further simplified as
+
+$$
+\lambda_2 > \frac{1}{\epsilon}(1 - e^{-\mu})
+\quad (16)
+$$
+
+$$
+\lambda_N < \frac{1}{\epsilon}(1 + e^{-\mu}) \tag{17}
+$$
+---PAGE_BREAK---
+
+Compared with Eqs. (6) and (7), it can be seen that for the coupled logistic-map network, $K_1 = 1 - e^{-\mu}$ and $K_2 = 1 + e^{-\mu}$. The boundary of the synchronization region in the phase diagram can be determined by setting $\lambda_2 = (1 - e^{-\mu})/\epsilon$ and $\lambda_N = (1 + e^{-\mu})/\epsilon$. In the simulations in Section 4.3, $a = 1.9$ is used; the corresponding Lyapunov exponent is $\mu = 0.55$, so $\lambda_2 = 0.423/\epsilon$ and $\lambda_N = 1.577/\epsilon$. If the coupling function $H$ is nonlinear, $DH[f(s_m)]$ will depend on the value of $f(s_m)$ and generally Eq. (11) cannot be simplified further.
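The synchronization window (16)–(17) can be checked by directly iterating Eq. (8). The sketch below uses our own toy configuration (not from the paper): $N = 4$ logistic maps coupled all-to-all with $G_{ii} = 1$ and $G_{ij} = -1/(N-1)$, so that every transverse eigenvalue equals $\lambda_i = N/(N-1) = 4/3$ and, for $a = 1.9$, the window becomes $\epsilon \in (0.317, 1.183)$:

```python
import random

def spreads(eps, a=1.9, n_nodes=4, n_steps=600, seed=1):
    """Iterate Eq. (8) for an all-to-all coupled logistic-map network with
    H(f) = f, G_ii = 1 and G_ij = -1/(N-1); return the spread
    max_i x_i - min_i x_i after every step (0 means complete sync)."""
    rng = random.Random(seed)
    x = [0.1 + 0.2 * rng.random() for _ in range(n_nodes)]
    f = lambda u: 1.0 - a * u * u
    history = []
    for _ in range(n_steps):
        fx = [f(u) for u in x]
        total = sum(fx)
        # x_i <- f(x_i) - eps * [f(x_i) - mean_{j != i} f(x_j)]
        x = [(1.0 - eps) * fx[i] + eps * (total - fx[i]) / (n_nodes - 1)
             for i in range(n_nodes)]
        history.append(max(x) - min(x))
    return history
```

With $\epsilon = 0.6$ (so $K = \epsilon\lambda_i = 0.8$, inside the window) the spread contracts by at least the factor $|1 - K| \max|f'| = 0.76$ per step and the maps synchronize, while for $\epsilon = 0.1$ ($K \approx 0.13$, below the window) they remain desynchronized.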
+
+## 2.3 Physical understanding of synchronization boundaries
+
+Now it can be seen that the synchronization conditions for both continuous-time oscillators and discrete-time coupled-map networks have the same form of Eqs. (6) and (7). Physically, the existence of the $K_1$ and $K_2$ boundaries can be understood by the roles of the coupling terms: (1) they serve to establish coherence among oscillators, and (2) they are effective perturbations to the dynamics of individual oscillators. Whether synchronization can occur depends on the interplay between these two factors. In particular, for small coupling, synchronization may not occur because of the fact that although the perturbing effect of the coupling terms is small, the amount of coherence provided by them is also small. For very large coupling, although the coupling terms can provide strong coherence, the effective perturbations are also large. As a large perturbation requires longer time for the system to reach an equilibrium state (e.g., synchronization), the system will have no time to respond to the perturbations, which consequently makes it unable to synchronize. Thus, synchronization may not occur if the coupling is too strong. In general, there exists a finite interval of the coupling parameter for which synchronization can occur [48]. These considerations are demonstrated in Fig. 1, the master stability function of the coupled Rössler system versus the generalized coupling parameter $K$. We see that synchronization can occur only when the coupling parameter $K$ falls in the interval $(K_1, K_2)$. Indeed, this behavior appears to be typical for a large class of coupled chaotic oscillators [48, 50].
+
+The synchronizability of a complex network of oscillators for any linear coupling scheme can be inferred from Fig. 1. A given network can be characterized by the set of eigenvalues ($\lambda_2$ through $\lambda_N$) of the corresponding coupling matrix. For a fixed value of $\epsilon$, the spread of the eigenvalues determines the range of possible variations in $K$. This suggests a general quantity that determines the synchronizability of a complex network, regardless of detailed oscillator models: the spread of the eigenvalues of the coupling matrix. From Fig. 1 it can be seen that, in order to achieve synchronization, the spread of the eigenvalues must not be too large, so that the generalized coupling parameter $K$ fits in the interval $(K_1, K_2)$. That is, only if
+
+$$ \frac{\lambda_N}{\lambda_2} < \frac{K_2}{K_1} \equiv \beta \qquad (18) $$
+
+can there exist a set of $\epsilon$ values of positive Lebesgue measure such that the synchronized state is linearly stable. Otherwise, the system is unstable for any value of $\epsilon$. For various chaotic oscillators, the value of $\beta$ ranges from 5 to 100 [17]. The left-hand side of inequality (18), the ratio $\lambda_N/\lambda_2$, depends only on the topology of interactions among the oscillators. Hence, the impact of a particular coupling topology on the network's ability to synchronize is represented by the single quantity $\lambda_N/\lambda_2$: the larger the ratio, the more difficult it is to synchronize the oscillators, and vice versa [7].
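Since the left-hand side of (18) depends only on the coupling topology, the eigenratio can be computed directly from the Laplacian spectrum of any candidate network. A minimal sketch (the example graphs are our own choices):

```python
import numpy as np

def eigenratio(adj):
    """Eigenratio lambda_N / lambda_2 of the Laplacian G = D - A for an
    undirected graph given by its adjacency matrix A."""
    adj = np.asarray(adj, dtype=float)
    lap = np.diag(adj.sum(axis=1)) - adj
    eig = np.sort(np.linalg.eigvalsh(lap))  # 0 = lambda_1 <= lambda_2 <= ... <= lambda_N
    return eig[-1] / eig[1]

def ring(n):
    """Adjacency matrix of an n-node ring (nearest-neighbour cycle)."""
    a = np.zeros((n, n))
    for i in range(n):
        a[i, (i + 1) % n] = a[(i + 1) % n, i] = 1.0
    return a

complete = np.ones((10, 10)) - np.eye(10)  # all-to-all graph on 10 nodes
```

For an all-to-all network, all non-zero Laplacian eigenvalues equal $N$, giving the ideal eigenratio 1, whereas a sparse ring of the same size has a much larger ratio and is therefore harder to synchronize.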
+
+# 3 Synchronizability transition phenomena in clustered networks
+
+To address the synchronizability of a clustered network, it is insightful to explore the relationship between the eigenratio $\lambda_N/\lambda_2$ and the number of clusters. To construct an analyzable model of a clustered network, imagine a network of $N$ nodes grouped into $M$ clusters located on a ring, each cluster being connected to its nearest-neighboring clusters. The Laplacian matrix of the network is similar to that of a regular, “ring” network of $M$ nodes. The eigenvector corresponding to the first non-zero eigenvalue of the ring network is
+
+$$ \sqrt{2/N} \left[ \sin \left( \frac{2\pi j}{N} \right) \right]_{j=1}^N $$
+
+which can be considered as an envelope function of the eigenvectors in the clustered network, as shown in Fig. 2. Since the variance of the components within a cluster is small compared with the difference between the means of two consecutive clusters, the components within a cluster are further approximated as having the same value and, as a result, the eigenvector of the network with $M$ clusters is given by a piecewise-constant step function:
+
+$$ e_2^T = \Big[ \overbrace{h_M^1, \dots, h_M^1}^{N/M},\ \overbrace{h_M^2, \dots, h_M^2}^{N/M},\ \dots,\ \overbrace{h_M^M, \dots, h_M^M}^{N/M} \Big] \qquad (19) $$
+
+where all $N/M$ components of the $i$th cluster have the same value $h_M^i = \sqrt{2/N} \sin(2\pi i/M)$. The first non-zero eigenvalue of the clustered network is
+---PAGE_BREAK---
+
+Fig. 2 Numerically obtained eigenvectors $e_2$ of the first non-zero eigenvalue for clustered networks with different numbers of clusters (Reused with permission from K. Park et al., Chaos, 2006, 16: 015105. Copyright 2006, American Institute of Physics). Eigenvectors can be encapsulated by a single curve (thick solid line), $\sqrt{2/N}[\sin(2\pi j/N)]_{j=1}^N$, except for some small phase differences (note that the eigenvectors are shifted vertically).
+
+$$ \lambda_2 = e_2^T G e_2, \quad (20) $$
+
+where $G$ is the Laplacian matrix of the network. Inserting Eq. (19) into Eq. (20), and assuming there are only two nearest-neighbor connections per cluster, the following equation can be obtained
+
+$$ \lambda_2 = \frac{2}{N} \sum_{i=1}^{M} \sin \frac{i}{\theta_0} \times \left[ 2 \sin \frac{i}{\theta_0} - \sin \frac{i-1}{\theta_0} - \sin \frac{i+1}{\theta_0} \right] \quad (21) $$
+
+where $\theta_0 = M/(2\pi)$. For $M \gg 1$, we have
+
+$$ \lambda_2 \approx \frac{4\pi}{NM} \int_0^{2\pi} \sin\theta(-\nabla^2\sin\theta)d\theta = \frac{4\pi^2}{NM} \quad (22) $$
+
+Although in general no approximation to the largest eigenvalue can be obtained in a similar way, approximations are known in some special cases. For example, when each cluster is a regular “ring” network with $2k$ nearest connections, the largest eigenvalue of the cluster is $\lambda_N \approx (2k+1)(1 + 2/(3\pi))$ [51]. Since $\lambda_N$ depends only on the number of nearest neighbors, the same $\lambda_N$ may be used as the largest eigenvalue of the whole network. Therefore, the eigenratio is
+
+$$ \frac{\lambda_N}{\lambda_2} \approx \frac{(2k + 1)\left(1 + \frac{2}{3\pi}\right)NM}{4\pi^2} \quad (23) $$
+
+which indicates that the eigenratio increases linearly with $M$, as shown in Fig. 3(a). That is, with only local connections between clusters, it becomes more difficult to achieve synchronization as the number of clusters increases.
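The scaling predictions of Eqs. (22) and (23) can be checked numerically. The construction below is our own illustrative sketch (a ring of fully connected clusters joined by single links between neighbors), not code from Ref. [34]; it compares the computed $\lambda_2$ with the estimate $4\pi^2/(NM)$ of Eq. (22):

```python
import numpy as np

def clustered_ring_adj(M, n):
    """Ring of M clusters: each cluster is a complete graph of n nodes,
    and node 0 of cluster c is linked to node 0 of cluster c+1 (mod M)."""
    N = M * n
    A = np.zeros((N, N))
    for c in range(M):
        s = c * n
        A[s:s + n, s:s + n] = 1.0          # complete cluster
    np.fill_diagonal(A, 0.0)
    for c in range(M):
        i, j = c * n, ((c + 1) % M) * n    # one inter-cluster link
        A[i, j] = A[j, i] = 1.0
    return A

def lambda_2(A):
    L = np.diag(A.sum(axis=1)) - A
    return np.sort(np.linalg.eigvalsh(L))[1]

M, n = 20, 10
N = M * n
l2 = lambda_2(clustered_ring_adj(M, n))
pred = 4 * np.pi**2 / (N * M)              # Eq. (22)
```

With $\lambda_N$ essentially fixed by the intra-cluster topology, this $\lambda_2 \propto 1/(NM)$ scaling is what makes the eigenratio grow linearly with $M$.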
+
+Now a more general clustered network model that allows long-range interactions can be considered. That is, two arbitrary clusters can make a connection with a probability $p(l)$,
+
+Fig. 3 Eigenratio versus the number of clusters: (a) “ring” network where only the two nearest-neighbor clusters are connected and (b) “globally” connected network where each cluster is connected to all the other clusters. Circles and squares are numerically obtained eigenratios, while solid lines are from analytic solutions. The parameters are N = 4086 and the number of nearest-neighbor connections is 2k = 10 (Reused with permission from Ref. [34]. Copyright 2006, American Institute of Physics).
+
+where *l* is the distance between the clusters. It is reasonable to assume that *p*(*l*) has an exponential dependence on *l*,
+
+$$ p(l) = \frac{\exp(-\alpha l)}{N_l} \quad (24) $$
+
+where $\alpha$ is a parameter and $N_l$ is a proper normalization constant. Inserting this $p(l)$ into Eq. (21) and again replacing the sum with an integral, it can be obtained that
+
+$$ \lambda_2 \approx \frac{4\pi^2 K_0}{NM} \frac{\int_1^{M/2} l^2 e^{-\alpha l} dl}{\int_1^{M/2} e^{-\alpha l} dl} \quad (25) $$
+
+where the factor $K_0$ comes from the fact that each cluster can have multiple ($2K_0$) connections. As an example, for the case of a “globally” connected clustered network, $\alpha \to 0$ and $K_0 \sim M/2$ can be obtained. The ratio of integrals in Eq. (25) then scales as $M^2$, and hence $\lambda_2 \sim M^2$. Thus, the eigenratio decreases as $M$ increases:
+
+$$ \frac{\lambda_N}{\lambda_2} \sim M^{-2} \qquad (26) $$
+
+as shown in Fig. 3(b). This is opposite to the result in Fig. 3(a), where no long-range links among clusters are allowed. It is noted that for $\alpha \to \infty$ and $K_0 = 1$, Eq. (25) becomes Eq. (22).
+
+An interesting consequence of Eq. (25) is that a transition phenomenon may occur in the synchronizability of clustered networks. For $\alpha \approx 0$, connections between clusters are *decentralized*, the integral in Eq. (25) is proportional to $M^2$ and, as a result, the eigenratio decreases as $M$ increases. However, for $\alpha \gg 0$, connections between clusters are *centralized* among nearest-neighboring clusters, the integral becomes independent of $M$ for sufficiently large $M \gg \alpha^{-1}$, and the eigenratio eventually increases as $M$ increases. A similar transition phenomenon is also expected when $K_0$ is regarded as a function of $M$ (i.e., $K_0 \sim M^\eta$). For example, suppose $\alpha \to 0$ and the total number of connections between clusters is fixed. Then $K_0$ is inversely proportional to $M$ and, as a result, the eigenratio remains constant. Distinct synchronization behaviors can thus arise depending on the value of $\eta$.
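The distance distribution of Eq. (24) is easy to tabulate for a ring of $M$ clusters, where ring distances run from 1 to $M/2$. A minimal sketch (the function name is ours):

```python
import numpy as np

def link_distance_pmf(M, alpha):
    """p(l) = exp(-alpha * l) / N_l for ring distances l = 1, ..., M/2,
    with N_l chosen so that the probabilities sum to one (Eq. 24)."""
    l = np.arange(1, M // 2 + 1)
    w = np.exp(-alpha * l)
    return l, w / w.sum()

l, p_pos = link_distance_pmf(40, alpha=0.5)    # centralized: short links favored
_, p_zero = link_distance_pmf(40, alpha=0.0)   # all distances equally probable
_, p_neg = link_distance_pmf(40, alpha=-0.5)   # long links across the ring favored
```

For $\alpha > 0$ short distances dominate, for $\alpha = 0$ all distances are equally probable, and for $\alpha < 0$ links across the ring are favored, matching the three regimes illustrated in Fig. 4.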
+
+Our considerations so far have been limited to the case where the parameter $\alpha$ is positive. From Eq. (24), it can be seen that a relatively large, positive value of $\alpha$ stipulates smaller probabilities for long-range links among clusters as compared with those for short-range links. Thus, as $\alpha$ is increased from zero, long-range links become increasingly improbable, reducing the network synchronizability. An interesting question is what happens if $\alpha$ is negative. Intuitively, it is expected that for a negative value of $\alpha$, long-range links among clusters become much more probable than short-range links; as a result, network synchronizability should improve as $\alpha$ is decreased from zero. But is this really the case?
+
+Insight has been provided in Ref. [34], where a ring clustered-network model is employed and networks are generated for visualization with several different values of $\alpha$, as shown in Fig. 4(a-c). For Fig. 4(a), the value of $\alpha$ is positive, so short-range links among the clusters are favored. In this case, the average network distance can be large. For $\alpha = 0$ [Fig. 4(b)], short-range and long-range links are equally probable, making the connections among the clusters small-world like, with a small average network distance. For $\alpha < 0$, long-range links are favored, but the most favorable links are those that run squarely across the ring configuration. For example, for a circular ring, the inter-cluster links close to the diameter of the circle are the most favorable ones, as shown in Fig. 4(c). This, in fact, makes the average network distance large.
+
+**Fig. 4** Clustered network configurations with different values of $\alpha$: (a) $\alpha = 1.0$, (b) $\alpha = 0.0$, (c) $\alpha = -1.0$. Every network consists of clusters with 5 nodes, in which one-to-all connections are assumed. Neighboring clusters are connected to each other to give the ring topology. The link ratio (the ratio of the number of links between the clusters to the total number of links in the network) is kept constant at $p = 0.1$ for the purpose of clear visualization (Reused with permission from Ref. [34]. Copyright 2006, American Institute of Physics).
+
+In Ref. [11], it is shown that scale-free networks are generally more difficult to synchronize, despite their smaller average network distances as compared with small-world networks. Intuitively, this is mainly because the highly heterogeneous degree distribution of scale-free networks stipulates the existence of a small subset of nodes with extraordinarily larger numbers of links as compared with most nodes in the network. It is speculated in Ref. [11] that communication can be blocked at these nodes, significantly reducing the synchronizability of the whole network as compared with networks with more homogeneous degree distributions, such as small-world networks or random networks. However, for networks with similar characteristics, either homogeneous or heterogeneous, the average network distance (or diameter) is the determining factor for synchronizability [3]. These considerations suggest that, quite counterintuitively, as $\alpha$ becomes negative from a positive value, the network synchronizability is expected to increase and reach its maximum at $\alpha = 0$, and then to decrease as $\alpha$ is decreased from zero.
+
+For the ring clustered-network configuration, estimates of the average network distance can be obtained in some limiting cases. In particular, for $\alpha \to \infty$, there is a strong tendency for a cluster to connect only to its nearest-neighboring clusters. For such a configuration with a large number ($M$) of clusters, the average network distance is $d \sim M/4$. For $\alpha = 0$, the probabilities for a cluster to connect to other clusters are equal, so the inter-cluster links appear random. In this case, the average network distance is $d \sim \ln M$, as for random networks [23]. In the limiting case $\alpha \to -\infty$, there is a tendency for a cluster to connect to diametrically opposite clusters, as seen in Fig. 4(c), making most links pass nearly through the center of the ring configuration. In this case, the average network distance is $d \sim M/8$. For reasonably large values of $M$,
+
+$$ \ln M < \frac{M}{8} < \frac{M}{4} $$
+
+suggesting that ring clustered networks with $\alpha$ near zero are the most synchronizable.
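The $d \sim M/4$ estimate for the nearest-neighbor ring can be confirmed with a breadth-first search over the cluster-level graph; the sketch below is our own, using only the standard library:

```python
from collections import deque

def avg_distance(adj_list):
    """Average shortest-path length over all node pairs (BFS from each node)."""
    n = len(adj_list)
    total, cnt = 0, 0
    for s in range(n):
        dist = [-1] * n
        dist[s] = 0
        queue = deque([s])
        while queue:
            u = queue.popleft()
            for v in adj_list[u]:
                if dist[v] < 0:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        total += sum(d for d in dist if d > 0)
        cnt += n - 1
    return total / cnt

M = 100
ring = [[(i - 1) % M, (i + 1) % M] for i in range(M)]
d_ring = avg_distance(ring)   # close to M/4 = 25
```

Replacing the ring's adjacency lists with random or diameter-crossing inter-cluster links reproduces the $\ln M$ and $M/8$ regimes discussed above.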
+
+For the ring clustered-network model, the eigenratio and the average network distance versus $\alpha$ are shown in Fig. 5. The network consists of $M$ clusters, each containing five nodes that are connected in a one-to-all manner. For comparison, the link ratio (the ratio between the number of links among the clusters and the total number of links in the network) is fixed at a small value (0.01 for Fig. 5). In Fig. 5(a), the eigenratio versus $\alpha$ is shown for three values of $M$. It is seen that for relatively large values of $M$, the eigenratio exhibits the expected behavior: it decreases as $\alpha$ is decreased from a positive value to zero, and increases as $\alpha$ becomes negative from zero. Figure 5(b) shows the behavior of the average network distance, which is consistent with that exhibited by the eigenratio, as expected.
+
+**Fig. 5** Effect of $\alpha$ on the synchronizability of the ring clustered network: (a) eigenratio versus $\alpha$ for $M = 20, 40,$ and $80$, and (b) the corresponding average network distances versus $\alpha$. All data points are averaged over 100 network realizations with 5 nodes per cluster and a link ratio of $p = 0.01$ (Reused with permission from Ref. [34]. Copyright 2006, American Institute of Physics).
+
+Similar transition behavior persists in internally scale-free clustered networks, hierarchical clustered networks and Zachary networks [34].
+
+# 4 Quantitative analysis of synchronization in clustered networks
+
+The preceding section discusses how the distribution of inter-cluster links affects the synchronizability of the network. Here, it will be examined, for a given distribution of links, how the number of links affects synchronizability. The emphasis will be placed on a random clustered network model, for which a theory is developed together with numerical support showing that the theoretical predictions are typical for clustered networks.
+
+To be concrete, the following clustered network model is considered: $N$ nodes are classified into $M$ groups, where each group has $n = N/M$ nodes. Within a group, a pair of nodes is connected with probability $p_s$, and nodes of different groups are connected with probability $p_l$. This forms a clustered random network. The number of inter-cluster connections is typically far less than the number of intra-cluster connections. As a result, the parameter region of small $p_l$ values is the more relevant one.
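One realization of this two-probability model can be drawn as follows (a sketch with our own function name and illustrative sizes):

```python
import numpy as np

def clustered_random_network(N, M, p_s, p_l, rng):
    """Adjacency matrix of the model: N nodes in M equal clusters;
    intra-cluster pairs are linked with probability p_s and
    inter-cluster pairs with probability p_l."""
    n = N // M
    cluster = np.arange(N) // n
    same = cluster[:, None] == cluster[None, :]
    prob = np.where(same, p_s, p_l)
    upper = np.triu(rng.random((N, N)) < prob, k=1)  # sample each pair once
    return (upper | upper.T).astype(float)

rng = np.random.default_rng(0)
A = clustered_random_network(500, 5, p_s=0.8, p_l=0.01, rng=rng)
```

Measured link densities inside and between the clusters concentrate tightly around $p_s$ and $p_l$ for these sizes.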
+
+A theoretical analysis of synchronization can be developed, which yields the stability regions for synchronization in the two-dimensional parameter space defined by the probabilities of the two types of links. The analytical predictions are verified by direct numerical simulations of the corresponding dynamical networks. The following coupling scheme is considered: for any $i$ ($1 \le i \le N$), $G_{ii} = 1$, $G_{ij} = -1/k_i$ if there is a link between nodes $i$ and $j$, and $G_{ij} = 0$ otherwise, where $k_i$ is the degree of node $i$. The coupling matrix $G$ is not symmetric, since $G_{ij} = -1/k_i$ while $G_{ji} = -1/k_j$. The Gerschgorin theorem stipulates that all the eigenvalues be located within a disc centered at 1 with radius 1; thus $\lambda_N \le 2$. One of the synchronization conditions, $\lambda_N < K_2/\epsilon$, can therefore usually be satisfied, and the synchronizability of the system is determined by $\lambda_2$. In the following, a theoretical formula will be derived to understand the dependence of $\lambda_2$ on $p_l$ and $p_s$ for small values of $p_l$, the typical parameter regime for realistic clustered networks.
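The spectral claims for this coupling scheme (zero row sums, a real spectrum confined to $[0, 2]$ by the Gerschgorin argument) can be checked numerically. The graph below is our own construction: a ring backbone plus random shortcuts, which guarantees $k_i \ge 2$ so that the row normalization is well defined:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 60
A = np.zeros((N, N))
idx = np.arange(N)
A[idx, (idx + 1) % N] = A[(idx + 1) % N, idx] = 1.0   # ring backbone, k_i >= 2
U = np.triu(rng.random((N, N)) < 0.1, 1)              # random shortcuts
A = np.clip(A + (U | U.T), 0.0, 1.0)

k = A.sum(axis=1)
G = np.eye(N) - A / k[:, None]      # G_ii = 1, G_ij = -1/k_i, zero row sums
lam = np.sort(np.linalg.eigvals(G).real)
# G is similar to the symmetric matrix I - D^{-1/2} A D^{-1/2}, so its
# spectrum is real and, by Gerschgorin (disc centered at 1, radius 1),
# lies in [0, 2].
```

The similarity transformation is the standard argument for why the asymmetric $G$ still has a real, non-negative spectrum.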
+
+Fig. 6 A typical profile of the components of the eigenvector $e_2$. Parameters are N = 500, M = 5, p_l = 0.01, and p_s = 0.8 [Reprinted with permission from L. Huang et al., Phys. Rev. Lett., 2006, 97: 164101 (http://link.aps.org/abstract/PRL/v97/p164101). Copyright 2006, American Physical Society].
+
+## 4.1 Dependence of $\lambda_2$ on $p_l$ and $p_s$
+
+For a clustered network, the components of the eigenvector $e_2$ have approximately the same value within any cluster, while they can be quite different among different clusters, as illustrated in Fig. 6. One can thus write $e_2 \approx [\tilde{e}_1, \dots, \tilde{e}_1, \tilde{e}_2, \dots, \tilde{e}_2, \dots, \tilde{e}_M, \dots, \tilde{e}_M]^T$, where for each $I$, $1 \le I \le M$, there are $n$ copies of $\tilde{e}_I$ in $e_2$. Since, by definition, $G \cdot e_2 = \lambda_2 e_2$ and $e_2 \cdot e_2 = 1$, one obtains $\lambda_2 = e_2^T \cdot G \cdot e_2 = \sum_{i,j=1}^{N} e_{2i} G_{ij} e_{2j}$, where $e_{2i}$ is the $i$th component of $e_2$. Expanding the summation in $j$ gives
+
+$$ \lambda_2 = \sum_{i=1}^{N} e_{2i} \left\{ G_{i,1} \tilde{e}_1 + G_{i,2} \tilde{e}_1 + \cdots + G_{i,n} \tilde{e}_1 + G_{i,n+1} \tilde{e}_2 + \cdots + G_{i,N} \tilde{e}_M \right\} \quad (27) $$
+
+It is recalled that $G_{ii} = 1$; if $i$ and $j$ belong to the same cluster, $G_{ij}$ equals $-1/k_i$ with probability $p_s$ and $0$ with probability $1-p_s$, while if $i$ and $j$ belong to different clusters, $G_{ij}$ equals $-1/k_i$ with probability $p_l$ and $0$ with probability $1-p_l$, where $k_i$ is the degree of node $i$. Thus, the following equation can be obtained:
+
+$$ \lambda_2 = \sum_{i=1}^{N} e_{2i} \left\{ -n \frac{p_l}{k_i} \tilde{e}_1 - n \frac{p_l}{k_i} \tilde{e}_2 + \dots + \tilde{e}_I - n \frac{p_s}{k_i} \tilde{e}_I + \dots - n \frac{p_l}{k_i} \tilde{e}_M \right\} $$
+
+where $\tilde{e}_I$ is the eigenvector component value corresponding to the cluster that contains node $i$. Noting that $1 - np_s/k_i = (N-n)p_l/k_i$, the following equation can be obtained
+
+$$
+\begin{aligned}
+\lambda_2 &= \sum_{i=1}^{N} e_{2i} \left\{ (N-n) \frac{p_l}{k_i} \tilde{e}_I - n \frac{p_l}{k_i} \sum_{J \neq I}^{M} \tilde{e}_J \right\} \\
+&= \sum_{i=1}^{N} e_{2i} \left\{ N \frac{p_l}{k_i} \tilde{e}_I - n \frac{p_l}{k_i} \sum_{J=1}^{M} \tilde{e}_J \right\}
+\end{aligned}
+$$
+
+For the clustered random network models, the degree distribution has a narrow peak centered at $k = np_s + (N-n)p_l$, thus $k_i \approx k$. The summation over $i$ can be carried out as follows:
+
+$$
+\begin{aligned}
+\lambda_2 &\approx \sum_{I=1}^{M} n \tilde{e}_I \left\{ N \frac{p_l}{k} \tilde{e}_I - n \frac{p_l}{k} \sum_{J=1}^{M} \tilde{e}_J \right\} \\
+&= N \frac{p_l}{k} \sum_{I=1}^{M} n \tilde{e}_I^2 - \left( n \sum_{J=1}^{M} \tilde{e}_J \right)^2 \frac{p_l}{k}
+\end{aligned}
+$$
+
+Since $\sum_{I=1}^{M} n\tilde{e}_I^2 \approx \sum_{i=1}^{N} e_{2i}^2 = 1$, and $\sum_{J=1}^{M} \tilde{e}_J = \sum_{i=1}^{N} e_{2i}$, it is obtained
+---PAGE_BREAK---
+
+$$
+\lambda_2 \approx \frac{N p_l}{n p_s + (N-n)p_l} - \left( \sum_{i=1}^{N} e_{2i} \right)^2 \frac{p_l}{k} \quad (28)
+$$
+
+The normalized eigenvector $e_1$ of $\lambda_1$ corresponds to the synchronized state, so its components have a constant value: $e_1 = [1/\sqrt{N}, \dots, 1/\sqrt{N}]^T$. If $G$ is symmetric, eigenvectors associated with different eigenvalues are orthogonal: $e_i \cdot e_j = \delta_{ij}$, where $\delta_{ij} = 1$ for $i = j$ and $0$ otherwise. Setting $i = 1$ and $j = 2$ gives $\sum_{i=1}^N e_{2i} = 0$. If the coupling matrix $G$ is slightly asymmetric, $\sum_{i=1}^N e_{2i}$ is nonzero but small, and the second term in Eq. (28) can be omitted, thus obtaining
+
+$$
+\lambda_2 \approx \frac{N p_l}{n p_s + (N - n) p_l} \qquad (29)
+$$
+
+For fixed $p_l$ and large $p_s$, $\lambda_2$ decreases as $p_s$ increases, indicating that the network becomes more difficult to synchronize. This is an abnormal behavior of the network synchronizability, which will be verified numerically. Furthermore, since $\lambda_2$ in Eq. (29) depends only on the ratio $p_l/p_s$, the synchronization-desynchronization boundaries in the $(p_s, p_l)$ parameter plane should consist of straight-line segments.
+
+The above analysis can be extended to more general clustered networks, i.e., those with different cluster sizes or heterogeneous degree distributions in each cluster, by replacing $n$ with $n_I$ (the size of the $I$th cluster) for each $I$, and using the degree distribution $P_I(k)$ in the summation over $1/k$. In this case, $p_s$ and $p_l$ can be regarded as effective parameters and may vary among clusters. A formula similar to Eq. (29) can be obtained, because even in such a case the contribution of the second term in Eq. (28) to $\lambda_2$ is small. Thus, the abnormal synchronization phenomenon is due to the clustered network structure; it does not depend on the details of the dynamics.
+
+## 4.2 Numerical support with coupled Rössler networks
+
+Depending on the initial conditions and the network realization, the Rössler system may have desynchronization bursts [52, 53]. It is thus necessary to characterize the network synchronizability statistically. $P_{\text{syn}}$ is defined as the probability that the fluctuation width $W(t)$ of the system is smaller than a small number $\delta$ (chosen somewhat arbitrarily) at all times during a long observational period $T_0$ in the steady state, say, from $T_1$ to $T_1 + T_0$, where $W(t) = \langle |\mathbf{x}_i(t) - \langle \mathbf{x}(t) \rangle| \rangle$ and $\langle \cdot \rangle$ denotes the average over the nodes of the network. If $\delta$ is small enough, the system can be deemed as being synchronized in the period $T_0$, so $P_{\text{syn}}$ is the probability of synchronization of the system in the period $T_0$, with $P_{\text{syn}} = 1$ if the network for the given parameters can always synchronize. Practically, $P_{\text{syn}}$ can be calculated as an ensemble average, i.e., the ratio of the number of synchronized cases to the number of all random network realizations. Since $P_{\text{syn}}$ can change drastically from 0 to 1 in a small region of the parameter space, it is possible to define the boundary between the synchronizable and unsynchronizable regions as follows: for a fixed $p_s$, the boundary value $p_l^b$ is such that the quantity
+
+$$
+\|\nabla P_{\text{syn}}(p_s, p_l)\| = \sqrt{\left(\frac{\partial P_{\text{syn}}}{\partial p_s}\right)^2 + \left(\frac{\partial P_{\text{syn}}}{\partial p_l}\right)^2} \Bigg|_{(p_s, p_l)}
+$$
+
+is maximized at $p_l = p_l^b$. Figure 7 shows the synchronization boundary in the parameter space $(p_s, p_l)$ from both the numerical calculation and the theoretical prediction of Eqs. (6) and (7). The two results agree with each other. If the number of inter-cluster connections is fixed, say, $p_l = 0.2$, then as the intra-cluster link probability $p_s$ exceeds a certain value (about 0.78), the system becomes desynchronized. When $p_s$ is small, the number of inter-cluster connections and the number of intra-cluster connections are approximately matched, and the networks are synchronizable. As $p_s$ becomes larger, the matching condition deteriorates and the network loses its synchronizability, even though its average distance becomes smaller. That is, too many intra-cluster links tend to destroy the global synchronization. The same phenomenon persists for different parameter values. This is precisely the abnormal synchronization phenomenon predicted by the theory.
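The boundary-extraction rule (locating, for each fixed $p_s$, the $p_l$ at which $\|\nabla P_{\text{syn}}\|$ is largest) can be sketched with a discrete gradient. The sigmoid grid below is synthetic stand-in data of ours, not the simulation results of Ref. [35]:

```python
import numpy as np

def sync_boundary(P, p_s_vals, p_l_vals):
    """For each fixed p_s (rows of P), return the p_l at which the
    discrete gradient magnitude of P_syn is largest."""
    dPs, dPl = np.gradient(P, p_s_vals, p_l_vals)
    return p_l_vals[np.argmax(np.hypot(dPs, dPl), axis=1)]

p_s_vals = np.linspace(0.1, 1.0, 19)
p_l_vals = np.linspace(0.0, 1.0, 101)
# Synthetic P_syn: a sharp 0 -> 1 transition at p_l = 0.3 for every p_s.
P = 1.0 / (1.0 + np.exp(-(p_l_vals[None, :] - 0.3) / 0.02))
P = np.repeat(P, len(p_s_vals), axis=0)
boundary = sync_boundary(P, p_s_vals, p_l_vals)
```

On real simulation data the recovered boundary traces the straight-line segments predicted by the $p_l/p_s$ dependence of Eq. (29).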
+
+**Fig. 7** (Color online) Contour plot of the synchronization probability of a clustered network of Rössler oscillators with $N = 100$ and $M = 2$; $T_0 = 10^4$ and $\epsilon = 0.5$. Each data point is the result of averaging over 1000 network realizations. The boundary is obtained by theoretical analysis (From Ref. [35]).
+---PAGE_BREAK---
+
+## 4.3 Numerical support with coupled logistic-map networks
+
+For the coupled logistic-map network, if the system is synchronizable, then starting from a random initial condition it will approach the synchronization state. In the simulations, synchronization is defined as $\langle |x_i - \langle x \rangle| \rangle < 10^{-10}$, where $\langle \cdot \rangle$ denotes the average over the network. The average time $T$ required for the system to become synchronized can be conveniently used to characterize the ability of the system to synchronize. If the system is unsynchronizable, the time $T$ is infinite. Figure 8 shows the behavior of $T$ in the two-dimensional parameter space $(p_l, p_s)$ for networks with two clusters (a) and ten clusters (b). This gives the synchronizable region (grey regions in Fig. 8), in which the system is able to synchronize within a certain time, and the unsynchronizable region (white regions in Fig. 8). The shape of the figure depends on the coupling strength $\epsilon$ and the contour lines of $\lambda_2$ and $\lambda_N$. For the two-cluster network, if $\epsilon = 1$, the shape appears to be symmetric, while if $\epsilon < 1$, the boundary is asymmetric. Figure 8(a) demonstrates that for a given $p_l$ (e.g., 0.2), as $p_s$ is increased from 0.2, the synchronization time $T$ also increases, and at a certain point (about 0.75 in this case) the system becomes unsynchronizable. The same phenomenon persists for different networks and dynamical parameters. Again, when the number of inter-cluster links is fixed, too many intra-cluster links violate the matching condition and thus tend to destroy the global synchronization.
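The synchronization-time diagnostic can be reproduced in miniature. The sketch below couples logistic maps $f(x) = 1 - ax^2$ with $a = 1.9$ (as in Fig. 8) on a small complete graph with $\epsilon = 1$; the graph, sizes and seed are our toy choices, not those of Ref. [35]:

```python
import numpy as np

def sync_time(N, a=1.9, eps=1.0, tol=1e-10, tmax=5000, seed=3):
    """Steps until <|x_i - <x>|> < tol for logistic maps f(x) = 1 - a*x^2
    coupled on a complete graph:
        x_i(t+1) = (1 - eps) f(x_i) + (eps / (N-1)) * sum_{j != i} f(x_j).
    Returns None if the network fails to synchronize within tmax steps."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-1.0, 1.0, N)
    for t in range(tmax):
        if np.mean(np.abs(x - x.mean())) < tol:
            return t
        fx = 1.0 - a * x**2
        x = (1.0 - eps) * fx + eps * (fx.sum() - fx) / (N - 1)
    return None

T = sync_time(10)
```

For the complete graph the spread between any two maps contracts by roughly $|f'|/(N-1)$ per step, so synchronization is reached quickly; sparse clustered topologies with mismatched link counts can instead return `None`.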
+
+One remark concerning the physical meaning of the result, as exemplified by Figs. 7 and 8, is in order. Consider two clustered networks where (A) the two types of links are approximately matched and (B) there is a substantial mismatch. Theory predicts that network A is more synchronizable than network B. This statement is meaningful in a probabilistic sense, as whether or not a specific system achieves synchronization is also determined by many other factors, such as the choice of the initial condition, the possible existence of multiple synchronized states, and noise. Our result means that, under the influence of these random factors, there is a higher probability for network A to be synchronized than for network B.
+
+Fig. 8 (Color online) Contour plot of the synchronization time $T$ (on a logarithmic scale, $\lg T$) in the $(p_l, p_s)$ space for coupled logistic-map networks with (a) $N = 100$, $M = 2$, and (b) $N = 500$, $M = 10$; $\epsilon = 1$, $a = 1.9$. The line segments defining the boundaries between the synchronizable and unsynchronizable regions are determined by the theory. Each data point is the result of averaging over 100 network realizations (From Ref. [35]).
+
+# 5 Discussions
+
+In summary, some recent results on the synchronizability of complex networks with a clustered structure have been reviewed. The first result is that *random*, long-range couplings among clusters can enhance the synchronizability, while connections among nodes within individual clusters have little impact on the network's ability to synchronize. In terms of the relationship between the synchronizability and the number of clusters in the network, an interesting transition phenomenon is uncovered, where the network synchronizability exhibits different behaviors depending on the parameter that controls the probability of random, long-range links among the clusters. In particular, when these links are less probable, the synchronizability tends to deteriorate as the number of clusters is increased; the opposite occurs when the links are more probable. There has been some theoretical understanding of this phenomenon based on the analysis of a class of simplified networks with clusters distributed according to a ring topology. These findings imply that, in the context of social networks, a viable strategy to achieve synchronization is to devote resources to establishing and enhancing connections among distant communities.
+
+The second result concerns theoretical and numerical evidence that the optimal synchronization of complex clustered networks can be achieved by matching the probabilities of inter-cluster and intra-cluster links. That is, at a global level, the network has the strongest synchronizability when these probabilities are approximately equal. Overwhelmingly strong intra-cluster connections can counterintuitively weaken the network synchronizability. This phenomenon persists for another typical coupling scheme, i.e., for any $i$ ($1 \le i \le N$), $G_{ii} = k_i$, $G_{ij} = -1$ if there is a link between nodes $i$ and $j$, and $G_{ij} = 0$ otherwise. A new set of analyses and numerical justification has been provided in Ref. [36]. While the network model used to arrive at this result is somewhat idealized, it can be argued that similar phenomena should persist in more general clustered networks. In real systems with a clustered structure, if global synchronization is the desired performance of the system, special attention needs to be devoted to distinguishing the inter-cluster and intra-cluster connections, as a proper distribution of the links is more efficient than adding links blindly.
+
+For biological networks, such as metabolic networks and protein-protein interaction networks, certain nodes may have many more links than others, which forms a hierarchical cluster structure [58]. This indicates a power-law distribution of the degree $k$: $P(k) \sim k^{-\gamma}$, and the network is scale-free. Therefore, it is interesting to study clustered scale-free networks, in which each cluster contains a scale-free subnetwork. The synchronizability of such clustered networks has been studied. In particular, for each cluster, the subnetwork is generated via the preferential attachment rule [24]. Initially, there is a fully connected small subset of size $m_0$, then a new node is added with $m$ links, and the probability that a previous node $i$ is connected to this new node is proportional to its current degree $k_i$. New nodes are continuously added until a prescribed network size $n$ is reached. In our simulation, $m_0 = 2m + 1$ is set so that the average degree of this network is $2m$. Given $M$ such scale-free subnetworks, each pair of nodes in different clusters are connected with probability $p_l$. For this model, $p_l$ controls the number of inter-clustered links, and $m$ controls the number of intra-clustered links. Numerical simulations have been carried out, and it is found that the patterns for the eigenvalues $\lambda_N$ and $\lambda_2$ are essentially the same as that for the clustered network where each cluster contains a random subnetwork. This indicates that optimization of synchronization by matching different types of links is a general phenomenon, regardless of the detailed topology in each individual cluster.
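A minimal sketch of the preferential-attachment construction described above (our own implementation of the standard growth rule; names and sizes are illustrative):

```python
import numpy as np

def scale_free_cluster(n, m, rng):
    """Grow one intra-cluster subnetwork by preferential attachment:
    start from a complete core of m0 = 2m + 1 nodes, then attach each new
    node with m links chosen proportionally to the current degrees."""
    m0 = 2 * m + 1
    edges = [(i, j) for i in range(m0) for j in range(i + 1, m0)]
    deg = np.zeros(n)
    deg[:m0] = m0 - 1
    for v in range(m0, n):
        p = deg[:v] / deg[:v].sum()
        targets = rng.choice(v, size=m, replace=False, p=p)
        edges.extend((int(t), v) for t in targets)
        deg[v] = m
        deg[targets] += 1
    return edges

rng = np.random.default_rng(4)
E = scale_free_cluster(200, 3, rng)
# Total edges: C(7, 2) + 193 * 3 = 600, so the mean degree is 2*600/200 = 6 = 2m.
```

With $m_0 = 2m + 1$ the edge count is exactly $\binom{m_0}{2} + (n - m_0)m$, so the mean degree approaches $2m$ for large $n$; inter-cluster links can then be added between $M$ such subnetworks with probability $p_l$.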
+
+Some networks, e.g., intercellular communication networks, may have a locally regular structure. For example, a tissue network can be defined such that the nodes are cells and the links are the interactions between cells, i.e., the transmission of signal molecules. These interactions mainly occur between adjacent cells, which form a locally regular linkage structure. In addition, larger diffusing growth factors provide long-range links. A clustered network with each cluster having a regular backbone is thus a plausible model for biological tissue organization. The synchronizability of such clustered regular networks has been studied. First, for each cluster, a one-dimensional regular lattice with the periodic boundary condition is generated, i.e., each node is connected with $2m$ of its neighbors. For example, node $i$ connects with nodes $i-m, i-m+1, \dots, i-1, i+1, \dots, i+m$. Then each pair of nodes in different clusters is connected with probability $p_l$. Thus $p_l$ and $m$ are the control parameters for the numbers of inter- and intra-cluster links, respectively. Numerical simulations show that for large $m$ and small $p_l$, which is typical for clustered networks, optimization of synchronization can also be achieved by constraining $m$ such that the numbers of inter- and intra-cluster links are approximately matched. For intermediate $m$ values, an interesting synchronization phenomenon is uncovered: for locally regular clustered networks the synchronizability exhibits an alternating, highly non-monotonic behavior as a function of the intra-cluster link density. In fact, there are distinct regions of the density for which the network synchronizability is maximized, but there are also parameter regions in between for which the synchronizability is diminished [59].
+
+A basic assumption in the existing works is that all the clusters in a network are on an equal footing, in the sense that their sizes are identical and the interactions between any pair of clusters are symmetric. In realistic applications the distribution of the cluster size can be highly uneven. For example, in a clustered network with a hierarchical structure, the size of a cluster can in general depend on the particular hierarchy to which it belongs. More importantly, the interactions between clusters in different hierarchies can be highly asymmetric. For instance, the coupling from a cluster at the top of the hierarchy to a cluster in a lower hierarchy can be much stronger than the other way around. An asymmetrically interacting network can in general be regarded as the superposition of a symmetrically coupled network and a directed network, both being weighted. A weighted, directed network is actually a *gradient network* [60, 61], a class of networks for which the interactions or couplings among nodes are governed by some gradient field on the network. For a complex gradient network, a key parameter is the strength of the gradient field (the extent of the directedness of the links), denoted by $g$. A central issue is how the network synchronizability depends on $g$. As $g$ is increased, the interactions among the various clusters in the network become more directed. From a dynamical-systems point of view, uni-directionally coupled systems often possess
+---PAGE_BREAK---
+
+strong synchronizability [62]. Thus, intuitively, we expect to observe enhancement of the network synchronizability with the increase of $g$. The question is whether there exists an optimal value of $g$ for which the network synchronizability can be maximized. This is in fact the problem of optimizing synchronization in clustered gradient networks, and our recent findings [63] suggest an affirmative answer to the question. In particular, we are able to obtain solid analytic insights into a key quantity that determines the network synchronizability. The theoretical formulas are verified by both numerical eigenvalue analysis and direct simulation of oscillatory dynamics on the network. The existence of an optimal state for gradient clustered networks to achieve synchronization may have broad implications for fundamental issues such as the evolution of biological networks and for practical applications such as the design of efficient computer networks.
+
+Overall, the general observation is that the synchronizability of clustered networks is mainly determined by the underlying clustered structure. Insofar as there is a clustered structure, details such as the link topology within each cluster, node dynamics and parameters, etc. do not appear to have a significant influence on the synchronization of the coupled oscillator networks supported by the clustered backbone. A practical implication is that, even if the details about the dynamics of a biological system are not available, insofar as the underlying network has a clustered structure, it is possible to make predictions about synchronization of the network. Such insights may begin to provide, for example, first principles for the organizational dynamics of normal and abnormal (i.e. cancer) tissue, which currently remain largely unknown.
+
+The clustered topology has also been identified in technological networks such as certain electronic circuit networks and computer networks [43–45]. For a computer network, the main functions include executing sophisticated codes to carry out extensive computations. Suppose a large-scale, parallel computational task is to be accomplished by the network, for which synchronous timing is of paramount importance. Our results can provide useful clues as to how to design the network to achieve the best possible synchronization and consequently optimal computational efficiency.
+
+**Acknowledgements** This work was supported by an ASU-UA Collaborative Program on Biomedical Research, by AFOSR under Grant No. FA9550-07-1-0045, and by NSF under Grant No. ITR-0312131.
+
+## References
+
+1. Lago-Fernandez L. F., Huerta R., Corbacho F., and Siguenza J. A., Phys. Rev. Lett., 2000, 84(12): 2758
+
+2. Gade P. M. and Hu C.-K., Phys. Rev. E, 2000, 62(5): 6409
+
+3. Wang X. F. and Chen G., Int. J. of Bifur. Chaos, 2002, 12: 187
+
+4. Wang X. F. and Chen G., IEEE Trans. on Circ. Sys., Part I, 2002, 49: 54
+
+5. Hong H., Choi M. Y., and Kim B. J., Phys. Rev. E, 2002, 65(2): 026139
+
+6. Jalan S. and Amritkar R. E., Phys. Rev. Lett., 2003, 90(1): 014101
+
+7. Barahona M. and Pecora L. M., Phys. Rev. Lett., 2002, 89: 054101
+
+8. Lv J., Yu X., Chen G., and Cheng D., IEEE Trans. on Circ. Sys., Part I, 2004, 51: 787
+
+9. Fan J. and Wang X. F., Physica A, 2005, 349: 443
+
+10. Gong B., Yang L., and Yang K., Phys. Rev. E, 2005, 72: 037101
+
+11. Nishikawa T., Motter A. E., Lai Y.-C., and Hoppensteadt F. C., Phys. Rev. Lett., 2003, 91: 014101
+
+12. Motter A. E., Zhou C., and Kurths J., Europhys. Lett., 2005, 69(3): 334
+
+13. Motter A. E., Zhou C., and Kurths J., Phys. Rev. E, 2005, 71: 016116
+
+14. Zhou C., Motter A. E., and Kurths J., Phys. Rev. Lett., 2006, 96: 034101
+
+15. Zhou C. and Kurths J., Phys. Rev. Lett., 2006, 96: 164102
+
+16. Hwang D.-U., Chavez M., Amann A., and Boccaletti S., Phys. Rev. Lett., 2005, 94: 138701
+
+17. Chavez M., Hwang D.-U., Amann A., Hentschel H. G. E., and Boccaletti S., Phys. Rev. Lett., 2005, 94: 218701
+
+18. Huang D., Phys. Rev. E, 2006, 74: 046208
+
+19. Chavez M., Hwang D.-U., Martinerie J., and Boccaletti S., Phys. Rev. E, 2006, 74: 066107
+
+20. Wang X., Lai Y.-C., and Lai C.-H., Phys. Rev. E, 2007, 75: 056205
+
+21. Nishikawa T. and Motter A. E., Phys. Rev. E, 2006, 73: 065106
+
+22. Watts D. J. and Strogatz S. H., Nature, 1998, 393: 440
+
+23. Erdös P. and Rényi A., Publ. Math. Inst Hung Acad Sci, 1960, 5: 17; Bollobás B., Random Graphs (Academic, London, 1985)
+
+24. Barabási A.-L. and Albert R., Science, 1999, 286: 509
+
+25. Zhao M., Zhou T., Wang B.-H., and Wang W.-X., Phys. Rev. E, 2005, 72: 057102
+
+26. Yin C.-Y., Wang W.-X., Chen G., and Wang B.-H., Phys. Rev. E, 2006, 74: 047102
+
+27. Atay F. M. and Bıyıkoğlu T., Phys. Rev. E, 2005, 72: 016217
+
+28. Donetti L., Hurtado I., and Muñoz M.A., Phys. Rev. Lett., 2005, 95: 188701
+
+29. Atay F. M., Bıyıkoğlu T., and Jost J., IEEE Trans. Circuits Syst. I: Regular Papers, 2006, 53: 92
+
+30. Wang X. F. and Chen G., J. Systems Science and Complexity, 2003, 16: 1
+
+31. Boccaletti S., Hwang D.-U., Chavez M., Amann A., Kurths J., and Pecora L. M., Phys. Rev. E, 2006, 74: 016102
+
+32. Restrepo J. G., Ott E., and Hunt B. R., Phys. Rev. Lett., 2006, 97: 094102
+
+33. Oh E., Rho K., Hong H., and Kahng B., Phys. Rev. E, 2005, 72: 047101
+
+34. Park K., Lai Y.-C., Gupte S., and Kim J.-W., Chaos, 2006, 16: 015105
+
+35. Huang L., Park K., Lai Y.-C., Yang L., and Yang K., Phys. Rev. Lett., 2006, 97: 164101
+
+36. Huang L., Lai Y.-C., Park K., and Gatenby R.A., submitted
+---PAGE_BREAK---
+
+37. Watts D J, Dodds S., and Newman M E J., Science, 2002, 296: 1302
+
+38. Girvan M. and Newman M E J., Proc. Natl. Acad Sci. U.S.A, 2002, 99: 7821
+
+39. Motter A E, Nishikawa T, and Lai Y-C., Phys. Rev. E, 2003, 68: 036105
+
+40. Spirin V and Mirny L A., Proc. Natl. Acad Sci. USA, 2003, 100: 12123
+
+41. Ravasz E., Somera A L., Mongru D A., Oltvai Z., and Barabási A-L., Science, 2002, 297: 1551
+
+42. Palla G., Derényi I., Farkas I., and Vicsek T., Nature, 2005, 435: 814
+
+43. Milo R., Shen-Orr S., Itzkovitz S., Kashtan N., Chklovskii D., and Alon U., Science, 2002, 298(5594): 824
+
+44. Vázquez A., Pastor-Satorras R., and Vespignani A., Phys. Rev. E, 2002, 65(6): 066130
+
+45. Eriksen K A, Simonsen I, Maslov S, and Sneppen K, Phys. Rev. Lett., 2003, 90: 148701
+
+46. Zachary W W, J. Anthropol. Res., 1977, 33: 452
+
+47. Fujisaka H. and Yamada T., Prog. Theor. Phys., 1983, 69: 32
+
+48. Pecora L M. and Carroll T L., Phys. Rev. Lett., 1998, 80: 2109
+
+49. Jost J. and Joy M P., Phys. Rev. E, 2002, 65: 016201
+
+50. Fink K S., Johnson G., Carroll T., Mar D., and Pecora L., Phys. Rev. E, 2000, 61: 5080
+
+51. Monasson R., Eur. Phys. J. B, 1999, 12: 555
+
+52. Restrepo J G., Ott E., and Hunt B R., Phys. Rev. Lett., 2004, 93: 114101
+
+53. Restrepo J G., Ott E., and Hunt B R., Phys. Rev. E, 2004, 69: 66215
+
+54. Wigner E P., Ann. Math., 1955, 62: 548
+
+55. Wigner E P., Ann. Math., 1957, 65: 203
+
+56. Mehta M L., Random Matrices, 2nd ed., New York: Academic, 1991
+
+57. Farkas I J., Derényi I., Barabási A-L., and Vicsek T., Phys. Rev. E, 2001, 64: 026704
+
+58. Barabási A-L. and Oltvai Z N., Nature Reviews—Genetics, 2004, 5: 101
+
+59. Huang L., Lai Y-C., and Gatenby R A., submitted
+
+60. Toroczkai Z. and Bassler K E., Nature, 2004, 428: 716
+
+61. Park K, Lai Y-C., Zhao L., and Ye N., Phys. Rev. E, 2005, 71: 065105
+
+62. Kocarev L. and Parlitz U., Phys. Rev. Lett., 1996, 76: 1816
+
+63. Wang X., Huang L., Lai Y-C., and Lai C-H., submitted
\ No newline at end of file
diff --git a/samples/texts_merged/6379889.md b/samples/texts_merged/6379889.md
new file mode 100644
index 0000000000000000000000000000000000000000..43416d63853e3af74e95d6fc918781cd379d0eef
--- /dev/null
+++ b/samples/texts_merged/6379889.md
@@ -0,0 +1,730 @@
+
+---PAGE_BREAK---
+
+# A phase transition in the distribution of the length of integer partitions
+
+Dimbinaina Ralaivaosaona
+
+► To cite this version:
+
+Dimbinaina Ralaivaosaona. A phase transition in the distribution of the length of integer partitions. 23rd International Meeting on Probabilistic, Combinatorial, and Asymptotic Methods in the Analysis of Algorithms (AofA'12), 2012, Montreal, Canada. pp.265-282. hal-01197255
+
+HAL Id: hal-01197255
+
+https://hal.inria.fr/hal-01197255
+
+Submitted on 11 Sep 2015
+
+**HAL** is a multi-disciplinary open access archive for the deposit and dissemination of scientific research documents, whether they are published or not. The documents may come from teaching and research institutions in France or abroad, or from public or private research centers.
+
+---PAGE_BREAK---
+
+# A phase transition in the distribution of the length of integer partitions
+
+Dimbinaina Ralaivaosaona†
+
+Stellenbosch University, Department of Mathematical Sciences, Mathematics Division, Private Bag X1, Matieland 7602, South Africa
+
+We assign a uniform probability to the set consisting of partitions of a positive integer $n$ such that the multiplicity of each summand is less than a given number $d$ and we study the limiting distribution of the number of summands in a random partition. It is known from a result by Erdős and Lehner published in 1941 that the distributions of the length in random restricted ($d=2$) and random unrestricted ($d \ge n+1$) partitions behave very differently. In this paper we show that as the bound $d$ increases we observe a phase transition in which the distribution goes from the Gaussian distribution of the restricted case to the Gumbel distribution of the unrestricted case.
+
+**Keywords:** Asymptotic expansions, integer partitions, multiplicities, limit distribution.
+
+## 1 Introduction and statement of the results
+
+The distribution of the number of summands in a random partition of an integer $n$ was first studied by Erdős and Lehner [2] and later by many other mathematicians. They showed that it follows a Gaussian distribution for restricted partitions (all parts distinct) and a Gumbel distribution for unrestricted partitions (arbitrary multiplicities). Their results were generalised and extended in many directions: for instance, analogous limit theorems were proved for general $\lambda$-partitions; see Haselgrove and Temperley [4], Richmond [8], and Lee [6] for unrestricted partitions, and Hwang [5] for restricted partitions. We will closely follow the ideas of Hwang, who proved that the distribution of the length of a random restricted $\lambda$-partition is asymptotically Gaussian.
+
+In this paper, we consider partitions with no parts of multiplicity greater than $d$ which has already been studied by Mutafchiev in [7], among others. Mutafchiev's result states that if $d \sim \alpha\sqrt{n}$ then among all partitions of $n$ the set of partitions with no parts of multiplicity greater than $d$ has a positive density asymptotically equal to
+
+$$ \prod_{\lambda} (1 - e^{-\alpha\lambda})^{-1}. \qquad (1) $$
+
+Here we are interested in the number of summands of such a partition, and we show that when $d$ is asymptotically equal to $\sqrt{n}$, then we observe a phase transition in the distribution of the number of summands. More precisely we prove the following theorem:
+
+†Email: naina@sun.ac.za. This project is supported by the German Academic Exchange Service (DAAD), in association with the African Institute for Mathematical Sciences (AIMS). Code No.:A/09/04406.
+---PAGE_BREAK---
+
+**Theorem 1** Let $S_{d,n}$ be the set of partitions of an integer $n$ with no parts of multiplicity greater than $d$ ($d$ may be a function of $n$) and assume that all partitions in $S_{d,n}$ are equally likely. Then we have the following behaviour for the limit distribution of the number of summands in a random partition:
+
+• if $d = o(\sqrt{n})$ then it is asymptotically Gaussian,
+
+• if $dn^{-1/2}$ is unbounded then the distribution is asymptotically Gumbel,
+
+• if $d \sim b\sqrt{n}$ where $b$ is a positive constant, then when normalized, the distribution of the number of summands converges to a distribution with moment generating function given by
+
+$$M(x) = \prod_{\lambda} \frac{e^{-\frac{a}{\lambda}}}{1 - \frac{a}{\lambda}} \prod_{\lambda} \left( \frac{1 - e^{-(\lambda-a)\vartheta}}{1 - e^{-\lambda\vartheta}} \right) e^{\frac{a\vartheta}{e^{\lambda\vartheta} - 1}}$$
+
+where the product is taken over the set of positive integers, and
+
+$$a = \frac{x}{\sqrt{\frac{\pi^2}{6} - \kappa}}, \quad \vartheta = \frac{\pi}{\sqrt{6}}b, \quad \text{and} \quad \kappa = \sum_{\lambda} \frac{\vartheta^2 e^{-\lambda\vartheta}}{(1 - e^{-\lambda\vartheta})^2}.$$
+
+The mean and variance satisfy the following asymptotic formulae: for $d = o(\sqrt{n})$,
+
+$$\mu_{d,n} \sim \log d \sqrt{\frac{6dn}{\pi^2(d-1)}}, \quad \sigma_{d,n}^2 \sim \left( \frac{d-1}{2} - \frac{3d \log^2 d}{\pi^2(d-1)} \right) \sqrt{\frac{6dn}{\pi^2(d-1)}}$$
+
+and for $d \gg \sqrt{n}$,
+
+$$\mu_{d,n} \sim \frac{\sqrt{6n}}{2\pi} \log n, \quad \sigma_{d,n}^2 \sim \left(\frac{\pi^2}{6} - \kappa\right) \frac{6n}{\pi^2}$$
+
+as $n \to \infty$ ($\kappa = 0$ if $dn^{-1/2} \to \infty$).
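The parameter $\kappa$ interpolates continuously between the two extreme regimes of the theorem: it tends to $\pi^2/6$ as $b \to 0^+$ (so the variance factor $\pi^2/6 - \kappa$ collapses) and to $0$ as $b \to \infty$ (the Gumbel case). This is easy to check numerically; the following Python sketch (ours, purely illustrative — the function name and truncation parameters are not from the paper) evaluates the truncated sum defining $\kappa$:

```python
import math

def kappa(b, max_terms=200000):
    """Truncated kappa = sum_{lam>=1} theta^2 e^{-lam*theta}/(1-e^{-lam*theta})^2
    with theta = pi*b/sqrt(6); the loop stops once terms are negligible."""
    theta = math.pi * b / math.sqrt(6)
    s = 0.0
    for lam in range(1, max_terms + 1):
        q = math.exp(-lam * theta)
        if q < 1e-18:
            break
        s += theta * theta * q / (1.0 - q) ** 2
    return s
```

For instance, `kappa(20.0)` is already numerically indistinguishable from zero, while `kappa(0.01)` is close to $\pi^2/6 \approx 1.6449$, consistent with the expansion $\kappa = \pi^2/6 - \vartheta/2 + O(\vartheta^2)$ obtained from the asymptotics of $h$.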
+
+These results are obtained by analysing the corresponding generating function. We are interested in the number of summands, and so the generating function for our problem is the following: for a positive integer $d$,
+
+$$Q(d, u, z) = \prod_{\lambda} \sum_{j=0}^{d-1} u^j z^{j\lambda}, \qquad (2)$$
+
+where the product is taken over the set of positive integers, the second variable $u$ counts the number of summands. Let $Q_{d,n}(u)$ be the coefficient of $z^n$ in $Q(d, u, z)$ and let $\varpi_{d,n}$ be the random variable counting the number of summands in a random partition. Denote by $\mu_{d,n}$ and $\sigma_{d,n}$ its mean and its standard deviation, as in Theorem 1. Then we have the following immediate consequences:
+
+$$E(u^{\varpi_{d,n}}) = \frac{Q_{d,n}(u)}{Q_{d,n}(1)}, \qquad (3)$$
+
+$$\mu_{d,n} = \frac{1}{Q_{d,n}(1)} \left. \frac{\partial Q_{d,n}(u)}{\partial u} \right|_{u=1}, \qquad (4)$$
+
+$$\sigma_{d,n}^2 = \frac{1}{Q_{d,n}(1)} \left. \frac{\partial^2 Q_{d,n}(u)}{\partial u^2} \right|_{u=1} + \mu_{d,n} - \mu_{d,n}^2. \qquad (5)$$
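These quantities can be checked on small cases by expanding the product (2) directly. The following Python sketch (ours; the function names are illustrative, not from the paper) computes the coefficients of $u^m$ in $Q_{d,n}(u)$ by dynamic programming over the parts $\lambda = 1, \dots, n$:

```python
from fractions import Fraction

def length_distribution(d, n):
    """Expand Q(d, u, z) = prod_lam sum_{j=0}^{d-1} u^j z^{j*lam} and return,
    for each m, the number of partitions of n with m summands and every
    multiplicity at most d-1."""
    # dp[k][m] = partitions of k with m summands using the parts processed so far
    dp = [[0] * (n + 1) for _ in range(n + 1)]
    dp[0][0] = 1
    for lam in range(1, n + 1):
        new = [[0] * (n + 1) for _ in range(n + 1)]
        for k in range(n + 1):
            for m in range(n + 1):
                c = dp[k][m]
                if c == 0:
                    continue
                j = 0
                while j <= d - 1 and k + j * lam <= n and m + j <= n:
                    new[k + j * lam][m + j] += c
                    j += 1
        dp = new
    return dp[n]

def mean_length(d, n):
    """Mean number of summands, i.e. equation (4) evaluated exactly."""
    coeffs = length_distribution(d, n)
    return Fraction(sum(m * c for m, c in enumerate(coeffs)), sum(coeffs))
```

For example, with $d = 2$ (distinct parts) and $n = 5$ the distribution counts the three partitions $\{5\}$, $\{4,1\}$, $\{3,2\}$, while any $d > n$ recovers the unrestricted count $p(5) = 7$.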
+---PAGE_BREAK---
+
+Let us first define some useful functions that we will use throughout this paper:
+
+$$
+\begin{align*}
+F(u, \tau) &:= \log Q(d, u, e^{-\tau}), \\
+f(u, \tau) &:= -\sum_{\lambda} \log(1 - ue^{-\lambda\tau}), \\
+g(\tau) &:= \sum_{\lambda} \frac{e^{-\lambda\tau}}{1 - e^{-\lambda\tau}}, \\
+h(\tau) &:= \sum_{\lambda} \frac{e^{-\lambda\tau}}{(1 - e^{-\lambda\tau})^2}, \\
+G(\tau) &:= g(\tau) - dg(d\tau), \\
+H(\tau) &:= h(\tau) - d^2h(d\tau).
+\end{align*}
+$$
+
+Note that we are omitting the parameter *d* in these definitions for simplicity. We denote by $F_\tau$ the partial derivative of $F$ with respect to the second variable, $F_{\tau\tau}$, $F_u$, ... are defined similarly. The same notations apply to the other functions.
+
+It is possible to compute asymptotic formulae for the mean and variance by means of the saddle point method using equations (4) and (5), but we decided to not include these computations here explicitly since they do not differ much from those for the moment generating function that will be presented in more detail. Let us only state these asymptotic formulae, in which mean and variance are expressed in terms of the saddle point $r_0$, defined by the equation
+
+$$
+n = -F_{\tau}(1, r_0). \tag{6}
+$$
+
+For $d = o(\sqrt{n})$, we have
+
+$$
+\mu_{d,n} = \frac{\log d}{r_0} + O(d), \quad (7)
+$$
+
+$$
+\sigma_{d,n}^2 = \left( \frac{d-1}{2} - \frac{3d \log^2 d}{\pi^2 (d-1)} \right) r_0^{-1} + O \left( \frac{dr_0^c + r_0^{7c-3} \log^2 \frac{1}{r_0}}{r_0} \right), \quad (8)
+$$
+
+and for $d \gg \sqrt{n}$, we have
+
+$$
+\mu_{d,n} = G(r_0) + O\left(\log \frac{1}{r_0}\right) = \left(\log \frac{1}{r_0} + \gamma\right)r_0^{-1} + O\left(\log \frac{1}{r_0}\right), \quad (9)
+$$
+
+$$
+\sigma_{d,n}^2 = \left( \frac{\pi^2}{6} - (dr_0)^2 h(dr_0) \right) r_0^{-2} + O(r_0^{-1} \log^2 \frac{1}{r_0}). \quad (10)
+$$
+
+These asymptotic formulae for the mean and variance imply the formulae in Theorem 1 by using the Mellin transform method on Equation (6) (see the appendix for details and [3] for a nicely presented overview of the Mellin transform technique). We shall now prove the rest of Theorem 1 in a series of lemmas. Since the proofs of some of these lemmas are quite technical, they are mostly deferred to the appendix. The first ingredient is the following lemma, which plays an important role as we shall see in the next sections.
+---PAGE_BREAK---
+
+**Lemma 2** Let $2 \le d \le n$, and suppose that there are positive constants $c_1$ and $c_2$ such that $\frac{c_1}{\sqrt{n}} \le r \le \frac{c_2}{\sqrt{n}}$. If furthermore $\tau = r + iy$ with $\pi \ge |y| \ge r^{1+c}$, where $c$ is any number in $(\frac{1}{3}, \frac{1}{2})$, and $\frac{1}{2} \le u \le 2$, then there are positive constants $c_3$ and $\delta$, depending only on $c$, $c_1$ and $c_2$, such that
+
+$$ \frac{|Q(d, u, e^{-\tau})|}{Q(d, u, e^{-r})} \le e^{-c_3 n^{\delta}} $$
+
+for sufficiently large $n$.
+
+**Proof:** See appendix. $\square$
+
+## 2 The Case $d \gg \sqrt{n}$
+
+Throughout this section, we assume that $d \gg \sqrt{n}$. To get the limit distribution we consider the normalized random variable
+
+$$ X_n = \frac{\varpi_{d,n} - \mu_{d,n}}{\sigma_{d,n}}, $$
+
+and we want to estimate the moment generating function
+
+$$ M_n(x) = \mathbb{E}(e^{xX_n}) = e^{-x\mu_{d,n}/\sigma_{d,n}} \frac{Q_{d,n}(e^{x/\sigma_{d,n}})}{Q_{d,n}(1)}. \quad (11) $$
+
+It remains to determine an asymptotic formula for the coefficient $Q_{d,n}(u)$ for certain values of $u$. So we use the following integral representation:
+
+$$ Q_{d,n}(u) = \frac{e^{nr}}{2\pi} \int_{-\pi}^{\pi} \exp\left(nit + F(u, r + it)\right) dt. \quad (12) $$
+
+From now on we set $u = e^{ar}$ where $a$ is within some fixed interval around zero; $a$ is always as such until the end of this section. We use the saddle point method and we choose $r = r(a, n)$ as the positive solution of the equation
+
+$$ n = -F_{\tau}(u, r). \quad (13) $$
+
+It is not hard to check that the function on the right hand side is a monotone decreasing function of $r$ for $r > 0$. So the solution exists, and it is unique. To obtain the asymptotic behaviour of the solution in terms of $n$ we need the next result.
+
+**Lemma 3** We have the estimates
+
+$$ F_{\tau}(u, r) = -\frac{\pi^2}{6}r^{-2} + O(r^{-1}\log \frac{1}{r}) \quad \text{and} \quad F_{\tau\tau}(u, r) = \frac{\pi^2}{3}r^{-3} + O(r^{-2}\log \frac{1}{r}) $$
+
+as $r \to 0^+$ uniformly in $a$.
+
+**Proof:** See appendix. $\square$
+---PAGE_BREAK---
+
+As a direct corollary of this lemma, we find that the solution $r$ admits the asymptotic expansion
+
+$$r = \frac{\pi}{\sqrt{6n}} (1 + \mathcal{O}(n^{-1/2} \log n))$$
+
+as $n \to \infty$, uniformly in $a$ and $d \gg \sqrt{n}$.
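This expansion is easy to verify numerically. The sketch below (ours; the truncation and bisection parameters are illustrative choices, not from the paper) evaluates $-F_\tau(1, r)$ from the product form $Q(d,1,e^{-\tau}) = \prod_\lambda (1-e^{-d\lambda\tau})/(1-e^{-\lambda\tau})$, solves the saddle-point equation by bisection, and compares the root with $\pi/\sqrt{6n}$:

```python
import math

def minus_F_tau(d, r, max_lam=10**6):
    """-F_tau(1, r) = sum_lam [ lam e^{-lam r}/(1-e^{-lam r})
                                - d lam e^{-d lam r}/(1-e^{-d lam r}) ]."""
    s = 0.0
    for lam in range(1, max_lam + 1):
        x, y = lam * r, d * lam * r
        t = lam * math.exp(-x) / (1.0 - math.exp(-x))
        if y < 700:  # avoid needless underflow work; the term is 0 beyond this
            t -= d * lam * math.exp(-y) / (1.0 - math.exp(-y))
        s += t
        if x > 50 and abs(t) < 1e-15:
            break
    return s

def saddle_point(d, n):
    """Solve n = -F_tau(1, r) by bisection; the right-hand side decreases in r."""
    lo, hi = 1e-6, 10.0
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if minus_F_tau(d, mid) > n:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

For large $d$ the root is close to $\pi/\sqrt{6n}$ as stated, while for fixed $d$ it matches the prediction of (21), $r_0 \approx \pi\sqrt{(d-1)/(6dn)}$.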
+
+Then the next step is to split the integral (12) into three parts, namely the central part $|t| \le r^{1+c}$, from where the main term will come, and the tails. Here $c$ is an arbitrary constant within the range $(1/3, 1/2)$. For $|t| \le r^{1+c}$, we have
+
+$$nit + F(u, r + it) = F(u, r) - \frac{t^2}{2} F_{\tau\tau}(u, r) + O\left(|t|^3 \max_{|\eta| \le r^{1+c}} |F_{\tau\tau\tau}(u, r + i\eta)|\right).$$
+
+One can use a similar approach as in the proof of Lemma 3 to obtain the following bound:
+
+$$\max_{|\eta| \le r^{1+c}} |F_{\tau\tau\tau}(u, r + i\eta)| \ll r^{-4}.$$
+
+Therefore, the central part of the integral can be estimated as:
+
+$$e^{F(u,r)} \int_{-r^{1+c}}^{r^{1+c}} e^{-F_{\tau\tau}(u,r)\frac{t^2}{2}}\, dt \,(1 + O(r^{3c-1})). \quad (14)$$
+
+Note that we can extend the range of integration in (14) to $(-\infty, \infty)$ at the cost of an exponentially small error:
+
+$$2 \left| \int_{r^{1+c}}^{\infty} e^{-F_{\tau\tau}(u,r)\frac{t^2}{2}}\, dt \right| \ll \int_{r^{1+c}}^{\infty} e^{-r^{c-2}t} dt = r^{2-c}e^{-r^{2c-1}}.$$
+
+
+
+Thus the asymptotic formula for the central part of the integral follows:
+
+$$e^{F(u,r)} \int_{-\infty}^{\infty} e^{-F_{\tau\tau}(u,r)\frac{t^2}{2}}\, dt \,(1 + O(r^{3c-1})) = Q(d,u,e^{-r})\sqrt{\frac{2\pi}{F_{\tau\tau}(u,r)}} \left(1 + O\left(n^{-(3c-1)/2}\right)\right)$$
+
+as $n \to \infty$. For the tails, we make use of Lemma 2. Indeed, we have
+
+$$\frac{1}{Q(d, u, e^{-r})} \left| \int_{r^{1+c} < |t| \le \pi} e^{nit + F(u, r+it)} dt \right| \ll \int_{r^{1+c} < |t| \le \pi} \frac{Q(d, u, e^{-(r+it)})}{Q(d, u, e^{-r})} dt \ll e^{-c_3 n^\delta}.$$
+
+Finally we obtain the following asymptotic formula:
+
+$$Q_{d,n}(u) = \frac{e^{nr} Q(d, u, e^{-r})}{\sqrt{2\pi F_{\tau\tau}(u,r)}} \left(1 + O\left(n^{-(3c-1)/2}\right)\right) \quad (15)$$
+
+as $n \to \infty$, uniformly in $a$, where $u = e^{ar}$. Now we use the latter asymptotic formula to derive an estimate for the moment generating function $M_n(x)$. For a fixed value of $x$, we define $a$ and $r$ such that $r$ is the solution of
+
+$$n = -F_{\tau}(e^{ar}, r) \text{ and } ar = \frac{x}{\sigma_{d,n}}.$$
+---PAGE_BREAK---
+
+This equation has a solution when $x$ is within some appropriate fixed interval containing zero since $\sigma_{d,n}$ is of order $\sqrt{n}$, and so $a$ is a bounded function of $x$, $d$ and $n$. Before we continue our calculations, we call $r_0$ the value of $r$ when $a=0$ ($u=1$). Then we deduce from (15) that
+
+$$ \frac{Q_{d,n}(e^{ar})}{Q_{d,n}(1)} = \exp(n(r-r_0) + F(e^{ar}, r) - F(1, r_0))(1+o(1)) \quad (16) $$
+
+as $n \to \infty$, uniformly in $a$.
+
+The rest of the section is devoted to estimating the exponent of (16) and applying the result to determine the behaviour of (11). We first need to estimate the difference $|r - r_0|$.
+
+**Lemma 4** *We have*
+
+$$ |r - r_0| \ll \frac{\log n}{n} $$
+
+as $n \to \infty$, uniformly in $a$.
+
+**Proof:** See appendix. $\square$
+
+We can approximate $F(1, r)$ by means of the Taylor expansion around $r_0$. From Lemma 4, we get
+
+$$ F(1, r_0) = F(1, r) + n(r - r_0) + O(n^{-1/2} \log^2 n). \quad (17) $$
+
+Note here that $F_\tau(1, r_0) = -n$ by our choice of $r_0$. Hence the exponent of (16) is reduced to
+
+$$ F(e^{ar}, r) - F(1, r) + O(n^{-1/2} \log^2 n), $$
+
+and this estimate is uniform in $a$. We also have
+
+$$
+\begin{aligned}
+F(e^{ar}, r) - F(1, r) = & a r G(r) + \sum_{\lambda} \left(-\log\left(1-\frac{a}{\lambda}\right) - \frac{a}{\lambda}\right) \\
+& + \underbrace{f(1, dr) - f(e^{adr}, dr) + adr \cdot g(dr)}_{(\text{at most of constant order})} + o(1).
+\end{aligned}
+ $$
+
+To see this, one only needs to take the Mellin transform of the left hand side, see the Appendix section for more details on this calculation.
+
+Now we are going to use the latter equation to estimate (11). From the estimate (9) we have
+
+$$
+\begin{aligned}
+\frac{x\mu_{d,n}}{\sigma_{d,n}} &= arG(r_0) + O\left(r \log \frac{1}{r}\right) \\
+&= arG(r) + O\left(r \log^2 \frac{1}{r}\right),
+\end{aligned}
+ $$
+
+since
+
+$$ |G(r) - G(r_0)| \ll |G_\tau(r)|\,|r - r_0| \ll \log^2 \frac{1}{r}. $$
+
+Furthermore if we set $\vartheta := dr$, which is a function of $x$, $d$ and $n$, then we finally have
+
+$$ M_n(x) \sim \prod_{\lambda} \frac{e^{-\frac{a}{\lambda}}}{1 - \frac{a}{\lambda}} \cdot \prod_{\lambda} \left( \frac{1 - e^{-(\lambda-a)\vartheta}}{1 - e^{-\lambda\vartheta}} \right) e^{\frac{a\vartheta}{e^{\lambda\vartheta}-1}} \quad (18) $$
+---PAGE_BREAK---
+
+as $n \to \infty$ and $d \gg \sqrt{n}$.
+
+Let us remark here that if $dn^{-1/2}$ goes to infinity then $\vartheta$, which is a function of $n$, also goes to infinity, therefore
+
+$$M_n(x) \rightarrow \prod_{\lambda} \frac{e^{-\frac{a}{\lambda}}}{1 - \frac{a}{\lambda}} \quad \text{and} \quad a \sim \frac{\sqrt{6}}{\pi} x$$
+
+as $n \to \infty$, which is the moment generating function of the Gumbel distribution.
+
+By Curtiss's theorem [1], the normalised random variable $X_n$ converges in distribution to the Gumbel distribution as $n \to \infty$ just like in the case of unrestricted partitions ($d = n + 1$). This is not surprising since almost all partitions are covered in this case.
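The identification with the Gumbel distribution rests on the Weierstrass product for the Gamma function, which gives $\prod_{\lambda} e^{-a/\lambda}/(1 - a/\lambda) = e^{-\gamma a}\Gamma(1-a)$ for $a < 1$, the moment generating function of a Gumbel variable centred at its mean. A quick numerical sketch (ours, not from the paper) of this identity:

```python
import math

EULER_GAMMA = 0.5772156649015329  # Euler-Mascheroni constant

def product_mgf(a, terms=200000):
    """Truncated prod_{lam=1}^{terms} e^{-a/lam}/(1 - a/lam), computed in log
    space; each factor contributes ~ a^2/(2 lam^2), so the tail is O(a^2/terms)."""
    log_p = 0.0
    for lam in range(1, terms + 1):
        log_p += -a / lam - math.log(1.0 - a / lam)
    return math.exp(log_p)

def gumbel_mgf(a):
    """e^{-gamma a} Gamma(1 - a): MGF of a mean-centred Gumbel variable (a < 1)."""
    return math.exp(-EULER_GAMMA * a) * math.gamma(1.0 - a)
```

The two agree to within the truncation error for any fixed $a < 1$.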
+
+If now $dn^{-1/2}$ converges to some positive number $b$, then $\vartheta$ is asymptotically constant, more precisely $\vartheta \sim \frac{\pi}{\sqrt{6}}b$. These observations prove the second and the third part of our main theorem.
+
+## 3 The Case $d = o(\sqrt{n})$
+
+We will follow the same lines as in the previous section, though there are several points where other techniques are required. The main goal is again to compute the moment generating function of the normalized random variable $X_n$. We need an estimate of $Q_{d,n}(u)$ for $u$ in an interval containing 1 in order to understand the limit behaviour of $X_n$. Let $r = r(u, d, n)$ be the unique positive solution of the equation
+
+$$n = -F_{\tau}(u, r). \qquad (19)$$
+
+The right hand side of (19) is a decreasing function of $r$ if $r > 0$, and it tends to $\infty$ as $r \to 0^+$. This confirms the existence and the uniqueness of the solution $r$. Furthermore, the solution $r$ goes to zero as $n$ goes to infinity. We shall now find the asymptotic relation between $r$ and $n$.
+
+**Lemma 5** If $u = e^{x/\sigma_{d,n}}$, where $x$ is a fixed real number, then
+
+$$F_{\tau}(u, y) = F_{\tau}(1, y)\left(1 + O\left(\sqrt{d}n^{-1/4}\right)\right) \quad (20)$$
+
+as $n \to \infty$, uniformly for $y > 0$.
+
+**Proof:** Since $\sigma_{d,n}$ is of order $\sqrt{d}\,n^{1/4}$, the result follows from the fact that
+
+$$u^j = 1 + O(\sqrt{d}\,n^{-1/4}),$$
+
+uniformly for $0 \le j < d$ by replacing all powers of $u$ in the expression of $F_\tau(u, y)$. $\square$
+
+This lemma implies that the solution of (19) is also of order $n^{-1/2}$ by estimating $F_\tau(1, y)$ (now that the parameter $u$ is no longer present, the asymptotic behaviour of this function can be determined easily). Therefore, $dr$ tends to zero as $n$ tends to infinity. Furthermore, we have
+
+$$n = \frac{\pi^2(d-1)}{6d}r^{-2} + O(\sqrt{d}\,n^{3/4}). \quad (21)$$
+
+We also need to estimate $F_{\tau\tau}(u, r)$ and $|F_{\tau\tau\tau}(u, r + it)|$ for $|t| \le r^{1+c}$.
+---PAGE_BREAK---
+
+**Lemma 6** If $u = e^{x/\sigma_{d,n}}$, where $x$ is a fixed real number, then we have the estimates
+
+$$F_{\tau\tau}(u,r) \sim \frac{\pi^2(d-1)}{3d}r^{-3} \quad (22)$$
+
+and
+
+$$|F_{\tau\tau\tau}(u, r + it)| \ll r^{-4} \quad (23)$$
+
+uniformly for $|t| \le r^{1+c}$.
+
+**Proof:** See appendix. $\square$
+
+Now again the saddle point method applies and we get that if $u = e^{x/\sigma_{d,n}}$ for a fixed real number $x$, then
+
+$$Q_{d,n}(u) \sim \frac{1}{\sqrt{2\pi F_{\tau\tau}(u,r)}} \exp\left(nr + F(u,r)\right) \quad (24)$$
+
+as $n \to \infty$. This immediately implies that
+
+$$\frac{Q_{d,n}(u)}{Q_{d,n}(1)} \sim \exp\left(n(r-r_0) + F(u,r) - F(1,r_0)\right) \quad (25)$$
+
+as $n \to \infty$, since $F_{\tau\tau}(u,r)$ and $F_{\tau\tau}(1,r)$ are asymptotically equal, uniformly in $u$. It now remains to estimate the exponent of the right hand side of (25). As before at this stage we let $r_0$ be $r(1, d, n)$.
+
+**Lemma 7** We have
+
+$$F(u, r) = F(1, r) + \frac{x}{r\sigma_{d,n}} \log d + \frac{(d-1)x^2}{4r\sigma_{d,n}^2} + o(1). \quad (26)$$
+
+**Proof:** See appendix. $\square$
+
+On the other hand, we have
+
+$$n(r - r_0) - F(1, r_0) = -F(1, r) + F_{\tau\tau}(1, r_0) \frac{(r - r_0)^2}{2} + O(r_0^{-4}|r - r_0|^3)$$
+
+since $F_\tau(1, r) = -n$ by definition, so we need an estimate of the difference $|r - r_0|$.
+
+**Lemma 8** We have
+
+$$r - r_0 \sim (u-1) \frac{3d \log d}{\pi^2 (d-1)} r_0 \quad (27)$$
+
+if *d* is fixed, and
+
+$$|r - r_0| = O\left(\frac{\log d}{\sqrt{d}} n^{-3/4}\right). \quad (28)$$
+
+if *d* goes to infinity with *n*.
+---PAGE_BREAK---
+
+**Proof:** See appendix.
+
+We deduce that
+
+$$
+\begin{align*}
+n(r - r_0) + F(1, r) - F(1, r_0) &= F_{\tau\tau}(1, r_0) \frac{(r - r_0)^2}{2} + \mathcal{O}(d^{-3/2} (\log d)^3 \sqrt{r}) \\
+&= \frac{3d(\log d)^2}{\pi^2(d-1)} \times \frac{x^2}{2r_0\sigma_{d,n}^2} + o(1)
+\end{align*}
+$$
+
+in either case (if $d \to \infty$, then the first summand is also $o(1)$). By the latter equation combined with Lemma 7, we obtain the following formula for the exponent on the right hand side of (25):
+
+$$
+n(r - r_0) + F(u, r) - F(1, r_0) = \frac{\log d}{r} \frac{x}{\sigma_{d,n}} + \left( \frac{d-1}{2} + \frac{3d(\log d)^2}{\pi^2(d-1)} \right) \frac{x^2}{2r_0 \sigma_{d,n}^2} + o(1).
+$$
+
+Hence, by using the estimates for $\mu_{d,n}$ and $\sigma_{d,n}^2$ in equations (7) and (8), respectively, we have, for a fixed real number $x$,
+
+$$
+\begin{align*}
+M_{d,n}(x) &= \mathbb{E}\left(e^{\frac{x(\varpi_{d,n}-\mu_{d,n})}{\sigma_{d,n}}}\right) \\
+&= e^{-x\mu_{d,n}/\sigma_{d,n}} \frac{Q_{d,n}(e^{x/\sigma_{d,n}})}{Q_{d,n}(1)} \\
+&= \exp\left(-\frac{x}{\sigma_{d,n}}\left(\frac{1}{r_0} - \frac{1}{r}\right)\log d + \left(\frac{d-1}{2} + \frac{3d(\log d)^2}{\pi^2(d-1)}\right)\frac{x^2}{2r_0\sigma_{d,n}^2} + o(1)\right) \\
+&= \exp\left(\frac{x^2}{2r_0\sigma_{d,n}^2}\left(\frac{d-1}{2} - \frac{3d(\log d)^2}{\pi^2(d-1)}\right) + o(1)\right) \\
+&= e^{\frac{x^2}{2}(1+o(1))}
+\end{align*}
+$$
+
+as $n \to \infty$. This and Curtiss's theorem in [1] prove that if $d = o(n^{1/2})$ then we have convergence in law to the Gaussian distribution. That completes the proof of our main theorem.
+
+## References
+
+[1] J. H. Curtiss. A note on the theory of moment generating functions. *Ann. Math. Statistics*, 13:430–433, 1942.
+
+[2] Paul Erdős and Joseph Lehner. The distribution of the number of summands in the partitions of a positive integer. *Duke Math. J.*, 8:335–345, 1941.
+
+[3] Philippe Flajolet, Xavier Gourdon, and Philippe Dumas. Mellin transforms and asymptotics: harmonic sums. *Theoret. Comput. Sci.*, 144(1-2):3–58, 1995. Special volume on mathematical analysis of algorithms.
+
+[4] C. B. Haselgrove and H. N. V. Temperley. Asymptotic formulae in the theory of partitions. *Proc. Cambridge Philos. Soc.*, 50:225–241, 1954.
+---PAGE_BREAK---
+
+[5] Hsien-Kuei Hwang. Limit theorems for the number of summands in integer partitions. *J. Combin. Theory Ser. A*, 96(1):89–126, 2001.
+
+[6] D. V. Lee. The asymptotic distribution of the number of summands in unrestricted Λ-partitions. *Acta Arith.*, 65(1):29–43, 1993.
+
+[7] Ljuben R. Mutafchiev. On the maximal multiplicity of parts in a random integer partition. *Ramanujan J.*, 9(3):305–316, 2005.
+
+[8] L. B. Richmond. Some general problems on the number of parts in partitions. *Acta Arith.*, 66(4):297–313, 1994.
+
+# Appendix
+
+## Mellin transforms
+
+Most of our functions are expressed in the form of harmonic sums, and we use the Mellin transform method to estimate them. More precisely, we are using the following result from [3]:
+
+**Theorem 9** Let $\phi(x)$ be a continuous function on $(0, \infty)$ with Mellin transform $\phi^*(s)$ having a non-empty fundamental strip $\langle\alpha, \beta\rangle$. Assume that $\phi^*(s)$ admits a meromorphic continuation to the strip $\langle\gamma, \beta\rangle$ for some $\gamma < \alpha$ with a finite number of poles there, and is analytic on $\text{Re}(s) = \gamma$. Assume also that there exists a real number $\eta \in (\alpha, \beta)$ such that
+
+$$ \phi^*(s) = O(|s|^{-c}) \quad (29) $$
+
+with $c > 1$ as $|s| \to \infty$ in the strip $\gamma \le \text{Re}(s) \le \eta$. If $\phi^*(s)$ admits the singular expansion for $s \in \langle\gamma, \alpha\rangle$
+
+$$ \phi^*(s) \approx \sum_{(\xi,k) \in A} \frac{d_{\xi,k}}{(s-\xi)^k}, $$
+
+then an asymptotic expansion of $\phi(x)$ at 0, $x > 0$, is
+
+$$ \phi(x) = \sum_{(\xi,k) \in A} \frac{(-1)^{k-1} d_{\xi,k}}{(k-1)!} x^{-\xi} (\log x)^{k-1} + O(x^{-\gamma}). $$
+
+The advantage that we have is that most of our functions have a nicely behaved Mellin transform, for example:
+
+$$
+\begin{aligned}
+\mathcal{M}(f(1,r), s) &= \zeta(s+1)\Gamma(s)\zeta(s), \\
+\mathcal{M}(g(r), s) &= \zeta^2(s)\Gamma(s), \\
+\mathcal{M}(h(r), s) &= \zeta(s-1)\Gamma(s)\zeta(s).
+\end{aligned}
+ $$
+
+The above functions are all expressed in terms of the Riemann zeta function $\zeta(s)$ and the gamma function $\Gamma(s)$. We know that $\zeta(s)$ admits a simple pole at $s = 1$ with residue 1 and is analytic everywhere else
+---PAGE_BREAK---
+
+in the complex plane, also $\Gamma(s)$ is analytic everywhere except for simple poles at $s = 0, -1, -2, \dots$. Furthermore, all the above Mellin transforms satisfy the hypothesis of Theorem 9 therefore one has
+
+$$f(1, r) = \frac{\pi^2}{6} r^{-1} - \frac{1}{2} \log \frac{1}{r} + \mathcal{O}(1), \quad (30)$$
+
+$$g(r) = \left(\log \frac{1}{r} + \gamma\right)r^{-1} + \mathcal{O}(1), \qquad (31)$$
+
+$$h(r) = \frac{\pi^2}{6} r^{-2} - \frac{1}{2} r^{-1} + \mathcal{O}(1), \qquad (32)$$
+
+where $\gamma$ is the Euler-Mascheroni constant. In order to estimate $f(e^{ar}, r)$ for fixed $a$ within the interval $(-1, 1)$, we also need the Hurwitz zeta function
+
+$$\zeta(s, 1-a) = \sum_{\lambda} \frac{1}{(\lambda - a)^s}.$$
+
+Note that the Mellin transform of the difference $f(e^{ar}, r) - f(1, r)$ is
+
+$$\mathcal{M}(f(e^{ar}, r) - f(1, r), s) = \zeta(s+1)\Gamma(s)(\zeta(s, 1-a) - \zeta(s)).$$
+
+The Hurwitz zeta function admits a simple pole at $s=1$ with residue 1, therefore the pole $s=1$ of the Mellin transform cancels out, and the pole at $s=0$ becomes important. All we need to know for our purposes is that
+
+$$\lim_{s \to 0} \frac{(\zeta(s, 1-a) - \zeta(s) - a\zeta(s+1))}{s} = \sum_{\lambda} \left( -\log\left(1-\frac{a}{\lambda}\right) - \frac{a}{\lambda} \right),$$
+
+and so we have the equation
+
+$$f(e^{ar}, r) - f(1, r) = a \log \frac{1}{r} + \sum_{\lambda} \left( -\log \left( 1 - \frac{a}{\lambda} \right) - \frac{a}{\lambda} \right) + o(1) \quad (33)$$
+
+as $r \to 0^+$; the term $a \log \frac{1}{r}$ is the inverse Mellin transform of $a\zeta(s+1)\Gamma(s+1)\zeta(s+1)$.
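+Before turning to the proofs, the expansions (30)–(32) can be spot-checked numerically. The sketch below assumes the sums behind the stated Mellin transforms, $f(1,r)=\sum_{\lambda\ge1}\log(1-e^{-\lambda r})^{-1}$, $g(r)=\sum_{\lambda\ge1}(e^{\lambda r}-1)^{-1}$ and $h(r)=\sum_{\lambda\ge1}\lambda(e^{\lambda r}-1)^{-1}$ (an assumption read off from the transforms, not stated explicitly in this section):
+
+```python
+import math
+
+GAMMA = 0.5772156649015329  # Euler-Mascheroni constant
+
+def f1(r):
+    # f(1, r) = sum_{lam >= 1} -log(1 - e^{-lam r}); tail beyond lam*r ~ 60 is negligible
+    top = int(60 / r)
+    return -sum(math.log1p(-math.exp(-lam * r)) for lam in range(1, top))
+
+def g(r):
+    top = int(60 / r)
+    return sum(1.0 / math.expm1(lam * r) for lam in range(1, top))
+
+def h(r):
+    top = int(60 / r)
+    return sum(lam / math.expm1(lam * r) for lam in range(1, top))
+
+r = 0.01
+print(f1(r) - (math.pi ** 2 / (6 * r) - 0.5 * math.log(1 / r)))  # bounded as r -> 0+
+# coefficient of r^{-1}: log(1/r) + gamma, from the double pole of zeta(s)^2 Gamma(s) at s = 1
+print(g(r) - (math.log(1 / r) + GAMMA) / r)                      # bounded as r -> 0+
+print(h(r) - (math.pi ** 2 / (6 * r * r) - 0.5 / r))             # bounded as r -> 0+
+```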
+
+## Proofs of intermediate results
+
+In the following, we give proofs of all the lemmas that are used in the proof of our main theorem.
+
+**Proof of Lemma 2:** First we are going to estimate the quantity
+
+$$\mathrm{Re} \left( \sum_{\lambda} (e^{-\lambda r} - e^{-\lambda \tau}) \right),$$
+
+which can be written in the following form:
+
+$$\begin{align*}
+\frac{1}{1-e^{-r}} - \operatorname{Re} \left( \frac{1}{1-e^{-\tau}} \right)
+&= \frac{e^{-r}(1+e^{-r})(1-\cos y)}{(1-e^{-r})(1-2e^{-r}\cos y + e^{-2r})} \\
+&\gg \frac{|y|^2}{r(\max\{r, |y|\})^2} \gg r^{2c-1}
+\end{align*}$$
+---PAGE_BREAK---
+
+as $r \to 0^+$. Now if $|z| \le 2$ then we claim that there are positive constants $c_4$ and $c_5$ such that
+
+$$ \frac{|1+z|}{1+|z|} \le e^{-c_4(|z|-{\rm Re}(z))} $$
+
+and
+
+$$ \frac{|1+z+z^2|}{1+|z|+|z|^2} \le e^{-c_5(|z|-{\rm Re}(z))}. $$
+
+Indeed for $|z| \le 2$ we have
+
+$$ \begin{aligned} \frac{|1+z|^2}{(1+|z|)^2} &= 1 - 2\frac{|z|-{\rm Re}(z)}{(1+|z|)^2} \\ &\le 1 - \frac{2}{9}(|z|-{\rm Re}(z)) \\ &\le e^{-\frac{2}{9}(|z|-{\rm Re}(z))}. \end{aligned} $$
+
+Similarly,
+
+$$ \begin{aligned} \frac{|1+z+z^2|^2}{(1+|z|+|z|^2)^2} &= 1 - 2(|z|-{\rm Re}(z)) \frac{1+|z|^2 + (2-{\rm Re}(z)){\rm Re}(z) + |z|}{(1+|z|+|z|^2)^2} \\ &\le 1 - \frac{2}{49}(|z|-{\rm Re}(z)) \\ &\le e^{-\frac{2}{49}(|z|-{\rm Re}(z))}. \end{aligned} $$
+
+Hence for any $2 \le d \le n$
+
+$$ |1+z+z^2+z^3+\cdots+z^{d-1}| \le |1+z| + |z|^2|1+z| + \cdots, $$
+
+where the last term is either $|z|^{d-2}|1+z|$ or $|z|^{d-3}|1+z+z^2|$ depending on the parity of $d$. Therefore by the claim we have
+
+$$ |1+z+z^2+\cdots+z^{d-1}| \le e^{-c_6(|z|-{\rm Re}(z))}(1+|z|+|z|^2+\cdots+|z|^{d-1}), $$
+
+where $c_6 = \min\{c_4, c_5\}$. Now we set $z = ue^{-\lambda\tau}$ and take the product over all $\lambda \ge 1$ to obtain
+
+$$ \frac{|Q(u, e^\tau)|}{Q(u, e^r)} \le \exp\left(-c_6 \sum_\lambda (e^{-\lambda r} - {\rm Re}(e^{-\lambda \tau}))\right). $$
+
+This completes the proof. $\square$
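+The two inequalities claimed above, with the explicit constants $c_4 = 1/9$ and $c_5 = 1/49$ obtained by taking square roots of the displayed bounds, can be spot-checked by sampling the disc $|z| \le 2$ (an illustration, not part of the proof):
+
+```python
+import math
+import random
+
+random.seed(0)
+for _ in range(10000):
+    # sample points of the disc |z| <= 2 by rejection from the enclosing square
+    z = complex(random.uniform(-2, 2), random.uniform(-2, 2))
+    if abs(z) > 2:
+        continue
+    gap = abs(z) - z.real  # |z| - Re(z) >= 0
+    # |1+z|/(1+|z|) <= e^{-(1/9)(|z|-Re z)}  (c_4 = 1/9)
+    assert abs(1 + z) / (1 + abs(z)) <= math.exp(-gap / 9) + 1e-12
+    # |1+z+z^2|/(1+|z|+|z|^2) <= e^{-(1/49)(|z|-Re z)}  (c_5 = 1/49)
+    assert abs(1 + z + z * z) / (1 + abs(z) + abs(z) ** 2) <= math.exp(-gap / 49) + 1e-12
+print("both inequalities hold on all sampled points")
+```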
+
+**Proof of Lemma 3:**
+
+We start with $F_\tau(e^{ar}, r)$, which can be written as a difference of two sums:
+
+$$ d \sum_{\lambda} \frac{\lambda}{e^{(\lambda-a)dr} - 1} - \sum_{\lambda} \frac{\lambda}{e^{(\lambda-a)r} - 1}. $$
+---PAGE_BREAK---
+
+We estimate these sums separately. First we have
+
+$$
+\sum_{\lambda} \frac{\lambda}{e^{(\lambda-a)r}-1} = \sum_{\lambda} \frac{\lambda-a}{e^{(\lambda-a)r}-1} + a \sum_{\lambda} \frac{1}{e^{(\lambda-a)r}-1},
+$$
+
+and hence the Mellin transform can be computed as
+
+$$
+\zeta(s)\Gamma(s)(\zeta(s-1, 1-a) + a\zeta(s, 1-a)).
+$$
+
+The dominant singularity is at $s=2$ which is a simple pole, and the next singularity is at $s=1$ which is
+a double pole, therefore by Theorem 9 we have
+
+$$
+\sum_{\lambda} \frac{\lambda}{e^{(\lambda - a)r} - 1} = \frac{\pi^2}{6} r^{-2} + O(r^{-1} \log \frac{1}{r})
+$$
+
+as $r \to 0^+$. It also follows that
+
+$$
+\begin{align*}
+d \sum_{\lambda} \frac{\lambda}{e^{(\lambda - a)dr} - 1} &= O(d(dr)^{-2}) \\
+&= O(d^{-1}r^{-2}).
+\end{align*}
+$$
+
+Therefore, $r$ is of order $n^{-1/2}$, and by the assumption that $d \gg \sqrt{n}$, $dr$ is bounded below. Hence
+
+$$
+d \sum_{\lambda} \frac{\lambda}{e^{(\lambda - a)dr} - 1} = O(r^{-1})
+$$
+
+and the first part of the lemma follows. The second part is proved analogously.
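+As a numerical illustration of the estimate used above, the leading term $\frac{\pi^2}{6}r^{-2}$ of $\sum_\lambda \lambda/(e^{(\lambda-a)r}-1)$ can be confirmed for a fixed shift, say $a = 0.3$ (an illustration, not part of the proof):
+
+```python
+import math
+
+def S(r, a=0.3):
+    # sum over lambda >= 1 of lambda / (e^{(lambda - a) r} - 1),
+    # truncated once the exponent makes the terms negligible
+    top = int(60 / r)
+    return sum(lam / math.expm1((lam - a) * r) for lam in range(1, top))
+
+r = 0.001
+print(S(r) * r * r)  # approaches pi^2/6 ~ 1.6449 as r -> 0+, up to O(r log(1/r))
+```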
+
+**Proof of Lemma 4:** Since $r=r(a):=r(a, d, n)$ is uniquely determined by $a, d$, and $n$, we can apply implicit differentiation on the equation
+
+$$
+n = -F_{\tau}(e^{ar}, r).
+$$
+
+We get
+
+$$
+\left. \frac{\partial}{\partial a} r(a) \right|_{a=a_1} = - \frac{\left. \frac{\partial}{\partial a} F_{\tau}(e^{ar(a_1)}, r(a_1)) \right|_{a=a_1}}{\left. \frac{\partial}{\partial r} F_{\tau}(e^{a_1 r}, r) \right|_{r=r(a_1)}} . \quad (34)
+$$
+
+We can compute the numerator:
+
+$$
+\left. \frac{\partial}{\partial a} F_{\tau}(e^{ar}, r) \right|_{a=a_1} = r \left( \sum_{\lambda} \frac{\lambda e^{-(\lambda-a)r}}{(1-e^{-(\lambda-a)r})^2} - d^2 \sum_{\lambda} \frac{\lambda e^{-(\lambda-a)dr}}{(1-e^{-(\lambda-a)dr})^2} \right).
+$$
+
+By using the Mellin transform we can show that
+
+$$
+\sum_{\lambda} \frac{\lambda e^{-(\lambda - a)r}}{(1 - e^{-(\lambda - a)r})^2} \ll r^{-2} \log \frac{1}{r}.
+$$
+---PAGE_BREAK---
+
+For the second term, we know that $dr \gg 1$, therefore we have
+
+$$d^2 \sum_{\lambda} \frac{\lambda e^{-(\lambda-a)dr}}{(1 - e^{-(\lambda-a)dr})^2} \ll r^{-2} \sum_{\lambda} (\lambda dr)^2 e^{-\lambda dr} \ll r^{-2}.$$
+
+The denominator can also be estimated in the same way and we have
+
+$$\left|\frac{\partial}{\partial r} F_{\tau}(e^{a_1 r}, r)\right|_{r=r(a_1)} \gg r^{-3}.$$
+
+Therefore,
+
+$$|r - r_0| \ll \sup_{a_1} \left| \frac{\partial}{\partial a} r(a) \Big|_{a=a_1} \right| \ll r^2 \log \frac{1}{r},$$
+
+which completes the proof.
+
+**Proof of Lemma 6:**
+
+Let
+
+$$A := \left[ \frac{x}{r\sigma_{d,n}} \right] \text{ and } a := \frac{x}{r\sigma_{d,n}} - A,$$
+
+where [.] denotes the nearest integer. For $\lambda \le A$ and for a fixed non-negative integer $k$, there are positive constants $K_1$ and $K_2$ depending only on $k$ such that
+
+$$K_1 d^{k+1} \le \sum_{j=0}^{d-1} j^k u^j e^{-\lambda j r} \le K_2 d^{k+1}, \quad (35)$$
+
+since $u^j = 1 + O(\sqrt{d}\,n^{-1/4})$ and $\lambda j r \ll \sqrt{d}\,n^{-1/4}$ as well. Now we split the series $F_{\tau\tau}(u, r)$ into two parts and we denote by $S_1$ the sum over $\lambda \le A$ and by $S_2$ the sum over $\lambda > A$. We are going to estimate them separately: we have
+
+$$S_1 = \sum_{\lambda \le A} \lambda^2 \frac{\sum_{j=0}^{d-1} j^2 u^j e^{-\lambda j r} \sum_{j=0}^{d-1} u^j e^{-\lambda j r} - \left( \sum_{j=0}^{d-1} j u^j e^{-\lambda j r} \right)^2}{\left( \sum_{j=0}^{d-1} u^j e^{-\lambda j r} \right)^2}$$
+
+and so
+
+$$S_1 \le \sum_{\lambda \le A} \lambda^2 \frac{\sum_{j=0}^{d-1} j^2 u^j e^{-\lambda j r}}{\sum_{j=0}^{d-1} u^j e^{-\lambda j r}} \ll A^3 d^2 \ll n$$
+
+by (35). For $S_2$ we shift the summation so that we can write the sum as
+
+$$S_2 = \sum_{\lambda \ge 1} (\lambda + A)^2 \left( \frac{e^{-(\lambda-a)r}}{(1 - e^{-(\lambda-a)r})^2} - \frac{d^2 e^{-(\lambda-a)dr}}{(1 - e^{-(\lambda-a)dr})^2} \right).$$
+
+Now we expand $(\lambda + A)^2$. Then the term with $\lambda^2$ is equal to $F_{\tau\tau}(e^{ar}, r)$, and the term with $A^2$ is almost the same as $H(r)$: the difference is that the sum is taken over a slightly shifted sequence, where the shift
+---PAGE_BREAK---
+
+$a$ is at most $\frac{1}{2}$ in absolute value. Since the Dirichlet series of the shifted sequence is $\zeta(s, 1-a)$, the term with $A^2$ contributes only $O(A^2H(r))$. The term with $2A\lambda$ can be written as
+
+$$2A \sum_{\lambda} (\lambda - a) \left( \frac{e^{-(\lambda-a)r}}{(1-e^{-(\lambda-a)r})^2} - \frac{d^2 e^{-(\lambda-a)dr}}{(1-e^{-(\lambda-a)dr})^2} \right) + O(AH(r)).$$
+
+The sum can be estimated by using the Mellin transform method, and we have
+
+$$\sum_{\lambda} (\lambda - a) \frac{e^{-(\lambda-a)r}}{(1 - e^{-(\lambda-a)r})^2} = \frac{\log \frac{1}{r}}{r^2} + O(r^{-2})$$
+
+and
+
+$$d^2 \sum_{\lambda} (\lambda - a) \frac{e^{-(\lambda-a)dr}}{(1 - e^{-(\lambda-a)dr})^2} = \frac{\log \frac{1}{dr}}{r^2} + O(r^{-2}).$$
+
+Putting everything together we get
+
+$$F_{\tau\tau}(u,r) = F_{\tau\tau}(e^{ar}, r) + \mathcal{O}\left(\frac{n^{5/4} \log d}{\sqrt{d}}\right),$$
+
+and we can estimate $F_{\tau\tau}(e^{ar}, r)$ again by Theorem 9 to get the estimate in (22). The estimate in (23) is done in a similar manner. $\square$
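+The two-sided bound (35) that the proof starts from can be sanity-checked with hypothetical small parameters, chosen so that $j \log u$ and $\lambda j r$ are both small, as in the regime $\lambda \le A$ (an illustration only; the parameter values are arbitrary):
+
+```python
+import math
+
+def weighted_sum(k, lam, d, u, r):
+    # sum_{j=0}^{d-1} j^k u^j e^{-lambda j r}
+    return sum(j ** k * u ** j * math.exp(-lam * j * r) for j in range(d))
+
+# hypothetical parameters: j*log(u) <= 0.01 and lambda*j*r <= 0.05 stay small
+d, r, u = 500, 1e-5, math.exp(2e-5)
+for k in (0, 1, 2):
+    for lam in (1, 10):
+        ratio = weighted_sum(k, lam, d, u, r) / d ** (k + 1)
+        # (35): the ratio is bounded between positive constants K1 and K2;
+        # here it is close to 1/(k+1), since the exponential factors are near 1
+        assert 0.2 < ratio < 1.2, (k, lam, ratio)
+print("bounds (35) verified for the sampled parameters")
+```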
+
+**Proof of Lemma 7:** Let $v$ be $\frac{x}{\sigma_{d,n}}$, so that $u = e^v$, and set $A = [v/r]$. Moreover, we set
+
+$$S_1' = \sum_{\lambda \le A} \log \left( \sum_{j=0}^{d-1} e^{j(v-\lambda r)} \right) \quad \text{and} \quad S_2' = \sum_{\lambda > A} \log \left( \sum_{j=0}^{d-1} e^{j(v-\lambda r)} \right),$$
+
+so that $F(u,r) = S'_1 + S'_2$. We estimate $S'_1$ and $S'_2$ separately. We know that $v$ is of order $d^{-1/2}r^{1/2}$ and that $A = [\frac{v}{r}]$. Hence
+
+$$
+\begin{align*}
+S_1' &= \sum_{\lambda \le A} \log \left( d + \frac{d(d-1)}{2}(v - \lambda r) + \mathcal{O}(d^2 r) \right) \\
+&= A \log d + \frac{(d-1)vA}{2} - \frac{r(d-1)A(A+1)}{4} + \mathcal{O}(\sqrt{dr}) \\
+&= A \log d + \frac{(d-1)x^2}{2r\sigma_{d,n}^2} - \frac{(d-1)x^2}{4r\sigma_{d,n}^2} + \mathcal{O}(\sqrt{dr}) \\
+&= A \log d + \frac{(d-1)x^2}{4r\sigma_{d,n}^2} + \mathcal{O}(\sqrt{dr}).
+\end{align*}
+$$
+
+To estimate $S'_2$ we use the same trick as in the proof of Lemma 6 by shifting the sum and we get
+
+$$
+\begin{align*}
+S_2' - F(1, r) &= F(e^{ar}, r) - F(1, r) \\
+&= f(e^{ar}, r) - f(1, r) - (f(e^{adr}, dr) - f(1, dr)) \\
+&= a \log d + o(1),
+\end{align*}
+$$
+---PAGE_BREAK---
+
+where $a = \frac{v}{r} - A$. Here we used Equation (33) to derive the last line from the second line. Combining the two, we get
+
+$$
+\begin{aligned}
+F(u,r) &= S'_{1} + S'_{2} = F(1,r) + (A+a) \log d + \frac{(d-1)x^2}{4r\sigma_{d,n}^2} + o(1) \\
+&= F(1,r) + \frac{v}{r} \log d + \frac{(d-1)x^2}{4r\sigma_{d,n}^2} + o(1)
+\end{aligned}
+$$
+
+which completes the proof.
+
+**Proof of Lemma 8:** Let us assume first that $d$ goes to infinity with $n$. As in the proof of Lemma 4 we use implicit differentiation and we get
+
+$$ \frac{\partial}{\partial u} r = - \frac{\frac{\partial}{\partial u} F_{\tau}(u, r)}{\frac{\partial}{\partial r} F_{\tau}(u, r)}. \quad (36) $$
+
+Then we apply our routine calculation to estimate the numerator and the denominator. For the numerator we split the sum at $A = [\frac{\log u}{r}]$, and the first sum is
+
+$$ \sum_{\lambda \le A} \lambda \, \frac{\sum_{j=0}^{d-1} j^2 u^j e^{-\lambda j r} \sum_{j=0}^{d-1} u^j e^{-\lambda j r} - \left(\sum_{j=0}^{d-1} j u^j e^{-\lambda j r}\right)^2}{\left(\sum_{j=0}^{d-1} u^j e^{-\lambda j r}\right)^2} $$
+
+which is $\mathcal{O}(A^2d^2)$ by the same argument that we used in Lemma 6. After shifting the summation, the sum over $\lambda > A$ can be written as
+
+$$ \sum_{\lambda} \frac{\lambda + A}{u} \left( \frac{e^{-(\lambda-a)r}}{(1-e^{-(\lambda-a)r})^2} - \frac{d^2 e^{-(\lambda-a)dr}}{(1-e^{-(\lambda-a)dr})^2} \right). $$
+
+Here we can see that this sum can be Mellin-transformed, and we can use Theorem 9 to prove that it is $\mathcal{O}(r^{-2} \log d)$. We have already seen that the denominator admits the asymptotic estimate
+
+$$ F_{\tau\tau}(u, r) \gg r^{-3}. $$
+
+This completes the case where $d$ tends to infinity, since
+
+$$ |r - r_0| \ll |u - 1|(r \log d) \ll \frac{r \log d}{\sigma_{d,n}}. $$
+
+If $d$ is fixed then we have
+
+$$ -n = F_{\tau}(u, r) = F_{\tau}(1, r_0) $$
+
+which implies that
+
+$$ F_{\tau}(u, r) - F_{\tau}(1, r) = -(F_{\tau}(1, r) - F_{\tau}(1, r_0)). \quad (37) $$
+
+We estimate both sides of Equation (37). The right hand side is easier, and we get
+
+$$ -(F_{\tau}(1, r) - F_{\tau}(1, r_0)) = -F_{\tau\tau}(1, r_0)(r - r_0) + \mathcal{O}(r_0^{-4}|r - r_0|^2). $$
+---PAGE_BREAK---
+
+To estimate the left hand side, note that for any $0 \le j < d$
+
+$$u^j = 1 + j(u-1) + \mathcal{O}(dr),$$
+
+and so for any positive integer $\lambda$ we have
+
+$$
+\begin{align*}
+& \frac{\sum_{j=0}^{d-1} j u^j e^{-\lambda j r}}{\sum_{j=0}^{d-1} u^j e^{-\lambda j r}} - \frac{\sum_{j=0}^{d-1} j e^{-\lambda j r}}{\sum_{j=0}^{d-1} e^{-\lambda j r}} \\
+&= \frac{\sum_{j=0}^{d-1} j u^j e^{-\lambda j r} \sum_{j=0}^{d-1} e^{-\lambda j r} - \sum_{j=0}^{d-1} j e^{-\lambda j r} \sum_{j=0}^{d-1} u^j e^{-\lambda j r}}{\left(\sum_{j=0}^{d-1} e^{-\lambda j r}\right)^2 \left(1 + \mathcal{O}(\sqrt{dr})\right)} \\
+&= (u-1) \frac{\sum_{j=0}^{d-1} j^2 e^{-\lambda j r} \sum_{j=0}^{d-1} e^{-\lambda j r} - \left(\sum_{j=0}^{d-1} j e^{-\lambda j r}\right)^2}{\left(\sum_{j=0}^{d-1} e^{-\lambda j r}\right)^2 \left(1 + \mathcal{O}(\sqrt{dr})\right)} \\
+&\quad + \mathcal{O}\left(dr \frac{\sum_{j=0}^{d-1} j e^{-\lambda j r}}{\sum_{j=0}^{d-1} e^{-\lambda j r}}\right).
+\end{align*}
+$$
+
+Summing over all positive integers we have
+
+$$
+F_{\tau}(u, r) - F_{\tau}(1, r) = (u-1)F_{u\tau}(1, r) + \mathcal{O}\left(\sqrt{dr}|u-1||F_{u\tau}(1,r)| + dr|F_{\tau}(1,r)|\right).
+$$
+
+Since $r$ and $r_0$ are asymptotically equal, we have the asymptotic formulae
+
+$$
+\begin{align*}
+& u - 1 \sim \frac{x}{\sigma_{d,n}}, \\
+& F_{\tau}(1, r) \sim \frac{-\pi^2(d-1)}{6d} r^{-2}, \\
+& F_{u\tau}(1, r) = G_{\tau}(r) \sim -(\log d)r^{-2}, \\
+& F_{\tau\tau}(1, r_0) \sim \frac{\pi^2(d-1)}{3d} r^{-3}.
+\end{align*}
+$$
+
+Finally we obtain
+
+$$r - r_0 = \frac{-(u-1)F_{u\tau}(1,r)}{F_{\tau\tau}(1,r_0)} + \mathcal{O}(r^2 + r^{-1}|r-r_0|^2).$$
+
+This gives the asymptotic formula in the statement of the lemma since $r$ is asymptotically equal to $r_0$. $\square$
+---PAGE_BREAK---
+
diff --git a/samples/texts_merged/6469735.md b/samples/texts_merged/6469735.md
new file mode 100644
index 0000000000000000000000000000000000000000..d8af2b337dfb8a86d98dc073a4a97681e6476f7e
--- /dev/null
+++ b/samples/texts_merged/6469735.md
@@ -0,0 +1,741 @@
+
+---PAGE_BREAK---
+
+Noncommutative Solitons of Gravity
+
+TSUGUHIKO ASAKAWA ¹ and SHINPEI KOBAYASHI ²
+
+¹ Department of Physics, Graduate School of Science,
+Tohoku University,
+Sendai 980-8578, JAPAN
+
+² Department of Physics,
+Gunma National College of Technology,
+580 Toribacho, Maebashi, 371-8530, JAPAN
+
+Abstract
+
+We investigate a three-dimensional gravitational theory on a noncommutative space which has a cosmological constant term only. We find various kinds of nontrivial solutions by applying a technique similar to the one used to find noncommutative solitons in noncommutative scalar field theories. Some of those solutions correspond to bubbles of spacetime or represent dimensional reduction. The solution which interpolates between $G_{\mu\nu} = 0$ and the Minkowski metric is also found. All solutions we obtain are non-perturbative in the noncommutative parameter $\theta$; they are therefore different from solutions found in other contexts of noncommutative theories of gravity and would have a close relation to quantum gravity.
+
+¹E-mail: asakawa@tuhep.phys.tohoku.ac.jp
+
+²E-mail: shimpei@nat.gunma-ct.ac.jp
+---PAGE_BREAK---
+
+# 1 Introduction
+
+The construction of a consistent theory of spacetime at the Planck scale is one of the main issues in fundamental physics. There is an expectation that a noncommutativity among spacetime coordinates,
+
+$$[x^\mu, x^\nu] = i\theta^{\mu\nu}, \qquad (1.1)$$
+
+emerges at such a scale. In fact, there have been many attempts to make this idea manifest, that is, to construct a consistent theory of gravity with a noncommutativity taken into account. For example, a noncommutative extension of the gauge theory of gravitation has been investigated [1, 2]. This formalism is based on gauging the noncommutative $SO(1,4)$ de Sitter group [3] and using the Seiberg-Witten map [4] with subsequent contraction to the Poincaré group $ISO(1,3)$. In that theory, corrections to cosmological and black hole solutions due to the noncommutativity have been found [5, 6, 7]. Application of the Seiberg-Witten map to Chern-Simons theories has been carried out in [8, 9]. Utilizing the correspondence between three-dimensional Einstein gravity and three-dimensional Chern-Simons theory, the noncommutative gauge theory of gravitation is considered in [10, 11, 12, 13]. Another approach to noncommutative spacetimes is to consider noncommutative effects on gravitational sources [14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24]. The authors of these works found solutions of the Einstein equation with gravitational sources of Gaussian type whose widths are related to the noncommutativity. This approach is directly connected to the expectation that the curvature singularities appearing in Einstein gravity get smeared. Also, the authors of [25, 26] proposed a theory of gravity on noncommutative spaces from the viewpoint of twisting the diffeomorphisms. This theory has been extended to a theory which includes fermionic terms, i.e., a supergravity on noncommutative spaces [27, 28]. There have been attempts to find classical solutions of those theories, and a few solutions have indeed been found [29, 30, 31, 32]. Other approaches to noncommutative gravity can also be found in [33, 34, 35].
+
+Although these approaches differ in their basic hypotheses, they all aim to construct a consistent noncommutative gravitational theory by deforming the Einstein-Hilbert action with the noncommutative parameter $\theta$, such that it comes back to ordinary Einstein gravity in the commutative limit $\theta \to 0$. Moreover, the solutions found so far are also deformations of solutions of ordinary Einstein gravity; in other words, no nontrivial solutions particular to gravitational theories on noncommutative spaces have been known so far.
+
+In this paper we take a rather different approach to investigate the effect of a noncommutativity, by finding classical solutions that cannot be obtained by deformations of solutions of
+---PAGE_BREAK---
+
+commutative theories but are non-perturbative in the noncommutativity $\theta$. To this end, we would like to work with a three-dimensional noncommutative gravitational theory that consists of a cosmological constant term only, that is to say, a noncommutative gravitational theory without the Ricci scalar. We adopt the first-order (vielbein) formalism, and the action of our theory reduces to just the three-dimensional determinant of the vielbein. One reason to work with this situation is that the cosmological constant term is built from the $\star$-multiplication only, and would therefore be common to many approaches to noncommutative gravity, i.e., it is model independent.
+
+This set-up is also motivated by the idea of [36], where some noncommutative solitons have been derived in noncommutative scalar field theories. The authors of [36] take a limit in which the space noncommutativity is very large, which makes the kinetic term negligible compared with the potential term of the scalar field theory. Since all derivatives disappear, one might naively expect that no nontrivial solutions can be found, but this is not the case, due to the noncommutativity. Actually they found some classically stable solutions called noncommutative solitons. Soon after [36], their theory was extended to one that includes the kinetic term, and by a solution-generating technique, solutions which solve the equation of motion including the kinetic term have been explicitly constructed [37]. Our case, as the determinant is made of the $\star$-multiplication of the vielbein, is analogous to the noncommutative $\phi^3$-theory investigated in [36], and we can apply a similar technique to find nontrivial solutions, namely, by switching to the operator formulation and using projection operators or their generalization. One of the purposes of this paper is to construct such noncommutative solitons of gravity.
+
+By comparing to the noncommutative scalar solitons, our theory is naturally regarded as describing the situation where the scalar curvature is negligible in comparison with the cosmological constant, but we will argue that there is another possibility in which the theory would be interpreted in a more radical way, as the emergence of spacetime from the cosmological constant term alone, without the scalar curvature. In any case, the solutions we found suggest a close connection to quantum gravity, where degenerate metrics play important roles. Such a degenerate metric, satisfying $\det E_\mu^a = 0$ or $\det G_{\mu\nu} = 0$, represents a non-classical phase of the theory and contributes to the path integral. In particular, the diffeomorphism invariant phase $E_\mu^a = 0$ is considered as the unbroken vacuum, while the metricity condition does not restrict the spin connection $\omega_\mu^{ab}$ to be the Christoffel symbol, which becomes a completely independent variable. This implies that the first-order formalism using the vielbein is not equivalent to the ordinary second-order formalism using metrics. Another characteristic feature of quantum gravity is that topology- and signature-changing solutions are allowed [38]. The solutions obtained in this paper
+---PAGE_BREAK---
+
+share the same features as above.
+
+The organization of this paper is as follows: in the following section, we give our action and
+derive the equation of motion. In section 3, we will construct solutions for the equation of motion
+by using projection operators. We first give examples to show typical structures of the solutions
+(bubbles of spacetime, dimensional reduction). Then the most general solutions of this class
+are presented. In section 4, we construct another class of solutions by using Gamma matrices,
+which are closer to conventional spacetimes. The final section is devoted to discussion and
+future directions.
+
+# 2 The Noncommutative Gravity of Cosmological Constant
+
+## 2.1 Action and Equation of Motion
+
+Let us start with a three-dimensional noncommutative plane $\mathbb{R}^3$ with coordinates $x^\mu (\mu = 0, 1, 2)$
+or $(t, x, y)$. The star product is defined for any functions on $\mathbb{R}^3$ as
+
+$$
+(f \star g)(x) = \exp \left( \frac{i}{2} \frac{\partial}{\partial x^\mu} \theta^{\mu\nu} \frac{\partial}{\partial y^\nu} \right) f(x) g(y) \bigg|_{y \to x}, \quad (2.1)
+$$
+
+where $\theta^{\mu\nu}$ is a constant, anti-symmetric matrix which represents a noncommutativity. In this
+paper, for simplicity, we introduce the noncommutativity purely in the spatial coordinates¹
+
+$$
+[x, y]_{*} \equiv x \star y - y \star x = i\theta, \tag{2.2}
+$$
+
+by choosing $\theta^{0i} = 0$ ($i = 1, 2$) and $\theta^{12} \equiv \theta$.
+
+We exploit the first-order formulation of a three-dimensional theory of gravity on a noncom-
+mutative $\mathbb{R}^3$ which has a cosmological constant term only,
+
+$$
+S = -\frac{\Lambda}{\kappa^2} \int dt d^2 x E^*, \qquad (2.3)
+$$
+
+where $\Lambda$ is a cosmological constant. Here $E^*$ is the $\star$-determinant defined by
+
+$$
+E^* = \det_* E = \frac{1}{3!} \epsilon^{\mu\nu\rho} \epsilon_{abc} E_\mu^a \star E_\nu^b \star E_\rho^c, \quad (2.4)
+$$
+
+where $E_\mu^a(x)$ is a vielbein. We denote spacetime indices by $\mu, \nu, \rho$ and tangent space indices by $a, b, c$. All indices run from 0 to 2. The metric is also defined through the star product in a similar way [11, 25]:
+
+$$
+G_{\mu\nu} = \frac{1}{2} (E_{\mu}^{a} \star E_{\nu}^{b} + E_{\nu}^{b} \star E_{\mu}^{a}) \eta_{ab}, \quad (2.5)
+$$
+
+¹ In general, we can choose one of the coordinates to remain commutative by changing $\theta^{\mu\nu}$ to the Jordan form. The time direction is usually chosen in order to avoid infinite numbers of time derivatives. However, for *static* classical solutions, (2.1) automatically reduces to the spatial noncommutativity (2.2) only.
+---PAGE_BREAK---
+
+where $\eta_{ab}$ is an $SO(1, 2)$ invariant metric of the local Lorentz frame. We do not assume that $E_{\mu}^{a}$ or $G_{\mu\nu}$ are invertible as $3 \times 3$ matrices, that is, we allow degenerate metrics. Throughout this paper, $G_{\mu\nu}$ is assumed to be real for simplicity. The solutions we discuss later will not contradict this assumption, but complex metrics can also be treated in a similar manner.
+
+Here we would like to point out that there are two possibilities, (a) and (b), for viewing this simple setting within a full gravitational theory on the noncommutative space: (a) The action given in (2.3) is part of a full theory, that is, we need to add a noncommutative generalization of the Einstein-Hilbert term to (2.3). This is of course the common belief. In this case, our theory (2.3) is considered to be valid when the scalar curvature term is negligible compared with the cosmological constant. However, as opposed to the noncommutative scalar field theory, this is not achieved by taking the large noncommutativity limit $\theta \to \infty$ of a certain full theory $^2$
+
+$$S = \frac{1}{2\kappa^2} \int dt d^2x E^* \star (R_* - 2\Lambda), \qquad (2.6)$$
+
+where $R_*$ is a suitably defined scalar curvature, which may be model dependent.
+
+On the other hand, we propose another possibility in this paper: (b) The action given in (2.3) is already a full theory. In this case, the metric and other quantities like a scalar curvature are considered to be composite quantities made from the vielbein. For a quantity in ordinary Einstein theory we can define several different quantities in the noncommutative case. For example, another metric rather than (2.5) can be defined by $g_{\mu\nu} = E_{\mu}^{a} E_{\nu}^{b} \eta_{ab}$ using the ordinary product$^3$. We call the latter a “commutative” metric in this paper, but note that it is still a quantity of the noncommutative theory. Both “commutative” and “noncommutative” quantities are used for capturing the spacetime structures given by a classical solution of the vielbein. In this paper, we will use two kinds of determinants $\det G$ and $\det_* G$ of the metric (2.5), and “commutative” scalar curvatures. In this way, we switch effectively from the first-order (vielbein) formalism to the second-order (metric) formalism without introducing a spin connection. We emphasize that the noncommutativity makes this possible. Such a prescription has not appeared in the literature to our knowledge. This is motivated by the disagreement between the first and the second-order formalisms in phases with degenerate metrics in quantum gravity. Of
+
+$^2$ We recall the argument in [36]: by rescaling the coordinates $x \to x/\sqrt{\theta}$, $y \to y/\sqrt{\theta}$, all $\theta$ in the star product disappear, while any other derivatives (and also gauge fields) acquire $1/\sqrt{\theta}$. Then all the derivative terms become negligible in the large $\theta$ limit. However, as opposed to the scalar theory, the vielbein and other quantities in (2.6) are also transformed under the rescaling keeping the action invariant. Thus, the simple rescaling argument cannot be directly applied to our case. Note also that we should take a static or a slowly time-varying approximation as well, in order to drop time derivatives.
+
+$^3$ This is possible because a product $f \cdot g$ of two functions is also written by the $\star$-product [25]. To this end, first apply the bi-differential exponential operator inverse to that appears in (2.1) to $f$ and $g$, then take the $\star$-product.
+---PAGE_BREAK---
+
+course this possibility itself should be justified, but we will see in this paper that the solutions
+in this interpretation possess interesting properties, suggesting a connection to the very notion
+of quantum gravity.
+
+Now let us derive the equation of motion of our theory (2.3). We use the fact that the cyclic permutation of the star product is allowed in the integral:
+
+$$
+\begin{align}
+\int f \star g \star h &= \int f(g \star h) = \int (g \star h)f \\
+&= \int g \star h \star f, \tag{2.7}
+\end{align}
+$$
+
+which comes from a property of the star product
+
+$$
+\int f \star g = \int g \star f = \int fg. \tag{2.8}
+$$
+
+Taking this into account, the action (2.3) is rewritten as
+
+$$
+S = -\lambda \int dt d^2 x \epsilon_{abc} E_0^a \{E_1^b, E_2^c\}_\star, \quad (2.9)
+$$
+
+where $\lambda = \frac{\Lambda}{\kappa^2}$. Here we used the star-anti-commutator defined by $\{f,g\}_\star = f\star g + g\star f$. Varying the action (2.9) with respect to $E_\mu^a$ and using the cyclic symmetry of the star product, we have nine equations of motion, one for each $\mu$ and $a$,
+
+$$
+\epsilon^{\mu\nu\rho}\epsilon_{abc}\{E_{\nu}^{b}, E_{\rho}^{c}\}_{\star} = 0. \qquad (2.10)
+$$
+
+Clearly the action (2.9) will be zero if the vielbein solves (2.10), that is, all classical solutions give a degenerate vielbein satisfying $\det_* E = 0$. Nevertheless, as we will explicitly show, there are in fact nontrivial solutions other than $\det G = 0$. This is in contrast to the theory with only the cosmological constant term defined on a commutative space, where only $\det G = 0$ is allowed due to the absence of the kinetic term. The difference arises because the star product contains an infinite number of derivatives, which act as an effective kinetic term.
+
+## 2.2 Star Product and Operator Formulation
+
+In the following sections we explicitly give solutions of Eq.(2.10). For simplicity, we will consider
+static or stationary solutions there. In order to find solutions, we exploit the recipe used in [36],
+i.e., the usage of the connection between the star product and the operator formulation, an
+analogue of the Weyl-Wigner correspondence in quantum mechanics (see also [39] for a review).
+The vielbein $E_\mu^a(x, y)$ is a function on $\mathbb{R}^2$ if it is static. Recall that, given a (suitably defined)
+function $f(x, y)$ on $\mathbb{R}^2$, there is a map which uniquely assigns to it an operator $O_f(\hat{x}, \hat{y})$ that
+---PAGE_BREAK---
+
+acts on the corresponding one-dimensional quantum mechanical Hilbert space $\mathcal{H} = L^2(\mathbb{R})$ with
+$[\hat{x}, \hat{y}] = i\theta$. By choosing the Weyl ordering prescription, the Weyl map is given by
+
+$$
+O_f(\hat{x}, \hat{y}) = \frac{1}{(2\pi)^2} \int d^2k \tilde{f}(k) e^{i(k_x\hat{x} + k_y\hat{y})}, \qquad (2.11)
+$$
+
+where
+
+$$
+\tilde{f}(k) = \int d^2 x e^{-i(k_x x + k_y y)} f(x, y) \tag{2.12}
+$$
+
+is the Fourier transformation. Then the algebra of functions with the ⋆-multiplication is isomor-
+phic to the operator algebra with relations
+
+$$
+O_f \cdot O_g = O_{f \star g}. \tag{2.13}
+$$
+
+$$
+\operatorname{Tr} O_f = \int \frac{d^2 x}{2\pi\theta} f. \qquad (2.14)
+$$
+
+The creation and the annihilation operator are defined by
+
+$$
+\hat{a} = \frac{\hat{x} + i\hat{y}}{\sqrt{2\theta}}, \quad \hat{a}^{\dagger} = \frac{\hat{x} - i\hat{y}}{\sqrt{2\theta}}. \tag{2.15}
+$$
+
+The Hilbert space $\mathcal{H}$ is now spanned by the orthonormal basis $|n\rangle$ ($n = 0, 1, 2, \dots$) of energy eigenstates of the one-dimensional harmonic oscillator associated with (2.15). Thus a general operator $O$ acting on $\mathcal{H}$ can be written as a linear combination of matrix elements of the form
+
+$$
+O = \sum_{i,j=0}^{\infty} O_j^i |i\rangle \langle j|. \tag{2.16}
+$$
+
+In particular, the projection operator $|i\rangle\langle i|$ will be important to construct solutions in the following sections. The function (symbol) $\phi_i$ corresponding to the projection operator (that is $O_{\phi_i} = |i\rangle\langle i|$) can be expressed as [36, 39]
+
+$$
+\phi_i(x, y) = 2(-1)^i e^{-r^2/\theta} L_i \left( \frac{2r^2}{\theta} \right), \qquad (2.17)
+$$
+
+where $L_i(x)$ is the $i$th Laguerre polynomial and $r^2 = x^2 + y^2$. By construction, $\phi_i$ is the
+orthogonal projection
+
+$$
+\phi_i \star \phi_j = \delta_{ij} \phi_i
+\quad (2.18)
+$$
+---PAGE_BREAK---
+
+and the $\phi_i$ satisfy the completeness relation⁴
+
+$$ \sum_{i=0}^{\infty} \phi_i = 1. \qquad (2.21) $$
+
+In the following, we sometimes use a loose notation not to distinguish $O_f$ and $f$.
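+As a consistency check of the symbol (2.17) and the trace formula (2.14), the trace of the projection $|i\rangle\langle i|$ can be recovered as a radial integral of $\phi_i$; the sketch below evaluates $\int d^2x\, \phi_i/(2\pi\theta) = 1$ numerically (the quadrature parameters are arbitrary):
+
+```python
+import math
+
+def laguerre(i, x):
+    # L_i(x) via the three-term recurrence (k+1) L_{k+1} = (2k+1-x) L_k - k L_{k-1}
+    if i == 0:
+        return 1.0
+    prev, cur = 1.0, 1.0 - x
+    for k in range(1, i):
+        prev, cur = cur, ((2 * k + 1 - x) * cur - k * prev) / (k + 1)
+    return cur
+
+def trace_phi(i, theta=1.0, steps=20000, R=12.0):
+    # Tr |i><i| = int d^2x phi_i / (2 pi theta), reduced to the radial
+    # midpoint-rule integral of 2(-1)^i e^{-r^2/theta} L_i(2r^2/theta) r / theta
+    dr = R / steps
+    total = 0.0
+    for k in range(steps):
+        r = (k + 0.5) * dr
+        total += 2 * (-1) ** i * math.exp(-r * r / theta) * laguerre(i, 2 * r * r / theta) * r
+    return total * dr / theta
+
+for i in (0, 1, 2, 5):
+    print(i, trace_phi(i))  # each value should be close to 1
+```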
+
+# 3 Noncommutative Solitons by Projection Operators
+
+In this section, we will give various static solutions of the equation of motion (2.10) using projection operators. We begin with simple solutions of two types and then move on to more general solutions.
+
+## 3.1 Diagonal Solution
+
+As a warm-up, let us first consider the case with a diagonal vielbein, namely, we take an ansatz
+
+$$ E_{\mu}^{a} = \begin{pmatrix} E_{0}^{0} & 0 & 0 \\ 0 & E_{1}^{1} & 0 \\ 0 & 0 & E_{2}^{2} \end{pmatrix} \qquad (3.1) $$
+
+as a 3 × 3 matrix. In this case, the equation of motion (2.10) reduces to three equations
+
+$$ \begin{aligned} E_0^0 &: \quad 0 = \{E_1^1, E_2^2\}_\star, \\ E_1^1 &: \quad 0 = \{E_2^2, E_0^0\}_\star, \\ E_2^2 &: \quad 0 = \{E_0^0, E_1^1\}_\star. \end{aligned} \qquad (3.2) $$
+
+Therefore, if each component of the vielbein is given by a projection operator and these projections are mutually orthogonal, then they solve (3.2). The simplest choice is
+
+$$ E_{\nu}^{b} = \begin{pmatrix} \alpha_0 \phi_0 & 0 & 0 \\ 0 & \alpha_1 \phi_1 & 0 \\ 0 & 0 & \alpha_2 \phi_2 \end{pmatrix}, \qquad (3.3) $$
+
+⁴ By using the generating function for the Laguerre polynomials
+
+$$ \sum_{i=0}^{\infty} L_i^{(\alpha)}(x) t^i = \frac{1}{(1-t)^{\alpha+1}} \exp \left(-\frac{xt}{1-t}\right), \qquad (2.19) $$
+
+this is shown explicitly:
+
+$$ \sum_{i=0}^{\infty} \phi_i = 2e^{-r^2/\theta} \sum_{i=0}^{\infty} (-1)^i L_i \left(\frac{2r^2}{\theta}\right) = 1. \qquad (2.20) $$
+---PAGE_BREAK---
+
+where $\alpha_0, \alpha_1$ and $\alpha_2$ are arbitrary complex numbers. Of course, any other choice of three different projection operators (say $\phi_3, \phi_{16}$ and $\phi_{51}$) is also a solution. More generally, three arbitrary, mutually orthogonal groups of projection operators are allowed.
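In the operator picture, the statement that mutually orthogonal projections solve (3.2) reduces to the vanishing of ordinary matrix anticommutators. A minimal numerical sketch (the values of the $\alpha$'s are hypothetical, and the oscillator Hilbert space is truncated):

```python
import numpy as np

N = 8  # truncation of the harmonic-oscillator Hilbert space

def proj(i):
    """Operator |i><i| as an N x N matrix."""
    P = np.zeros((N, N), dtype=complex)
    P[i, i] = 1.0
    return P

# Ansatz (3.3): E^0_0 = a0*phi_0, E^1_1 = a1*phi_1, E^2_2 = a2*phi_2
# (the alpha's are arbitrary complex numbers; these values are illustrative)
a0, a1, a2 = 0.7, 1.3 + 0.5j, -2.0
E = [a0 * proj(0), a1 * proj(1), a2 * proj(2)]

# Equations (3.2): all mutual star-anticommutators must vanish
for m in range(3):
    for n in range(m + 1, 3):
        assert np.allclose(E[m] @ E[n] + E[n] @ E[m], 0)
print("the orthogonal projections solve (3.2)")
```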
+
+This simple example already possesses some interesting features, as we will see below. In order to gain insight into the solution (3.3), we apply the prescription announced in the previous section to this example. First, note that all solutions of this type give a non-zero metric $G_{\mu\nu}$. In fact, (3.3) gives the following line element:
+
+$$
+\begin{aligned}
+ds^2 &= G_{\mu\nu}dx^\mu dx^\nu \\
+&= \frac{1}{2}(E_\mu^a \star E_\nu^b + E_\nu^b \star E_\mu^a) \eta_{ab} dx^\mu dx^\nu \\
+&= -\alpha_0^2 \phi_0^{\star 2} dt^2 + \alpha_1^2 \phi_1^{\star 2} dx^2 + \alpha_2^2 \phi_2^{\star 2} dy^2 \\
+&= -\alpha_0^2 \phi_0 dt^2 + \alpha_1^2 \phi_1 dx^2 + \alpha_2^2 \phi_2 dy^2 \\
+&= 2e^{-r^2/\theta} \left( -\alpha_0^2 L_0(2r^2/\theta) dt^2 - \alpha_1^2 L_1(2r^2/\theta) dx^2 + \alpha_2^2 L_2(2r^2/\theta) dy^2 \right) \\
+&= 2e^{-r^2/\theta} \left( -\alpha_0^2 dt^2 - \alpha_1^2 \left(1 - \frac{2r^2}{\theta}\right) dx^2 + \alpha_2^2 \left(1 - \frac{4r^2}{\theta} + \frac{2r^4}{\theta^2}\right) dy^2 \right),
+\end{aligned}
+\quad (3.4) $$
+
+where we used the property (2.18) of the projection operators. We clearly see that the metric becomes singular in the commutative limit $\theta \to 0$. This means that this solution cannot exist if we start from the commutative theory. Furthermore, for finite $\theta$, other “commutative” quantities defined from this metric (3.4) become computable, because the metric is non-degenerate in the sense that $\det G \neq 0$ except at some points. Of course, as mentioned in the previous section, this treatment itself should be justified.
+
+Adopting the remark above, we can evaluate the Ricci scalar $R$ and the Kretschmann invariant $R_{\mu\nu\rho\sigma} R^{\mu\nu\rho\sigma}$ from (3.4) by the standard analysis. The results are shown in figures 1 and 2, and their explicit forms are given in Appendix A. Both diverge at $r = \infty$, owing to the overall factor $e^{-r^2/\theta}$ in (3.4), and also at several values of $r$ corresponding to the zeros of the Laguerre polynomials. As seen from these figures, the spacetime is divided into several radial regions by walls of curvature singularities. The divergent points agree with those satisfying $\det G = 0$. In each region, the Ricci scalar evaluated by the ordinary GR method is meaningful because $\det G \neq 0$ there. The result is not exactly zero but very close to it, and moreover is almost constant for finite $\theta$. Because $\theta$ is a free parameter, we can take the commutative limit $\theta \to 0$. Then all of the walls shrink to $r = 0$, and the space measured by the metric concentrates to one point with a curvature singularity. Conversely, the metric at finite $\theta$ can be viewed as a resolution of such a “one-point space”. This solution suggests that bubbles of several spacetimes with small cosmological constants would emerge
+---PAGE_BREAK---
+
+Figure 1: The value of the Ricci scalar $R$ of spacetime (3.4). The left and right graphs are the $y = 0$ and the $x = 0$ section of $R$, respectively. Here we set $\theta = 1$ and $\alpha_0 = \alpha_2 = 1/\sqrt{2}$ and $\alpha_1 = i/\sqrt{2}$ as an example.
+
+Figure 2: The value of the Kretschmann invariant $R_{\mu\nu\rho\sigma} R^{\mu\nu\rho\sigma}$ of spacetime (3.4). The left and right graphs are the $y = 0$ and the $x = 0$ section of $R_{\mu\nu\rho\sigma} R^{\mu\nu\rho\sigma}$, respectively. We set $\theta = 1$ and $\alpha_0 = \alpha_2 = 1/\sqrt{2}$ and $\alpha_1 = i/\sqrt{2}$ as well as Figure 1.
+
+as a fine structure of a single point. This fact might give a new direction for the cosmological constant problem.⁵
+
+⁵ Note also the signature of the metric. Due to the nature of the Laguerre polynomials, the sign of each component of the metric oscillates as $r$ increases. This is not surprising, because such sign-changing behavior is also typical of black hole spacetimes, where inside the event horizon $dt^2$ becomes spacelike while $dr^2$ becomes timelike. In our solution, the signs of the coefficients of $dt^2$ and $dr^2$ change independently.
+---PAGE_BREAK---
+
+## 3.2 Nondiagonal Solutions and Dimensional Reduction
+
+Next, let us slightly generalize the above and take a non-diagonal ansatz for the vielbein of the form
+
+$$ \begin{pmatrix} E_0^0 & 0 & 0 \\ 0 & E_1^1 & E_1^2 \\ 0 & E_2^1 & E_2^2 \end{pmatrix}. \qquad (3.5) $$
+
+The equation of motion (2.10) reduces to five equations
+
+$$ 0 = \{E_1^1, E_2^2\}_\star - \{E_1^2, E_2^1\}_\star, \qquad (3.6) $$
+
+$$ 0 = \{E_0^0, E_\mu^a\}_\star \quad (a, \mu = 1, 2). \qquad (3.7) $$
+
+We will give solutions that represent effectively two-dimensional spacetime.
+
+For example, we can easily find a solution which consists of the two projections $\phi_0$ and $\phi_1$ as
+
+$$ E_{\nu}^{b} = \begin{pmatrix} \alpha_0 \phi_0 & 0 & 0 \\ 0 & \alpha_1 \phi_1 & \alpha_1 \phi_1 \\ 0 & \alpha_1 \phi_1 & \alpha_1 \phi_1 \end{pmatrix}, \qquad (3.8) $$
+
+where $\alpha_0$ and $\alpha_1$ are arbitrary constants as before. This implies the metric[^6]
+
+$$ ds^2 = -\alpha_0^2 \phi_0 dt^2 + 2\alpha_1^2 \phi_1 (dx^2 + 2dxdy + dy^2) \qquad (3.9) $$
+
+$$ = 2e^{-r^2/\theta} \left( -\alpha_0^2 dt^2 - 2\alpha_1^2 \left(1 - \frac{2r^2}{\theta}\right) (dx + dy)^2 \right). \qquad (3.10) $$
+
+As seen in the second term of the metric, the line element effectively consists only of $dt$ and $dx + dy$. In other words, the metrical dimension of this metric is 2. The disagreement between the naive (manifold) dimension and the metrical dimension would again be a sign of quantum gravity [38]. In particular, it would be interesting to compare it with the results obtained in the analysis by causal dynamical triangulation [40, 41, 42, 43] or spontaneous dimensional reduction in short-distance quantum gravity [44].
+
+A similar solution only with a single projection operator $\phi_0$ is obtained from the above solution by replacing $\phi_0 \to 1 - \phi_0$ and $\phi_1 \to \phi_0$. Its metric is given by
+
+$$ ds^2 = -(1-\phi_0)dt^2 + 2\phi_0(dx^2 + 2dxdy + dy^2) \qquad (3.11) $$
+
+$$ = -\left(1-2e^{-r^2/\theta}\right)dt^2 + 4e^{-r^2/\theta}(dx+dy)^2. \qquad (3.12) $$
+
+[^6]: We assume $dxdy = dydx$.
+---PAGE_BREAK---
+
+This is again an effectively two-dimensional metric.
+
+On the other hand, effectively one-dimensional solutions are obtained in the most general ansatz and the corresponding equation of motion (2.10). For example, by using a single projection operator $\phi_0$, the vielbein
+
+$$E_{\nu}^{b} = \begin{pmatrix} \phi_0 & \phi_0 & \phi_0 \\ \phi_0 & \phi_0 & \phi_0 \\ \phi_0 & \phi_0 & \phi_0 \end{pmatrix} \qquad (3.13)$$
+
+solves Eq.(2.10). The line element of this solution
+
+$$ds^2 = \phi_0(dt + dx + dy)^2 \qquad (3.14)$$
+
+$$= 2e^{-r^2/\theta}(dt + dx + dy)^2. \qquad (3.15)$$
+
+shows that the metric effectively reduces to a one-dimensional one. The disagreement between the naive dimension and the metrical dimension appears again. Clearly this happens because of the degeneracy of the vielbein. In other words, the rank, or the invertibility, of the vielbein determines whether such a dimensional reduction occurs. We discuss this point again in the following subsection.
+
+## 3.3 General Solutions by Projection Operators
+
+The structure of the dimensional reduction in the above examples suggests a systematic construction of solutions. In general, each component of the vielbein is a function on noncommutative $\mathbb{R}^2$ (we consider time-independent metrics only) and is written as an operator acting on the Hilbert space of a harmonic oscillator. Therefore, the most general expression of the vielbein is
+
+$$E_{\mu}^{a} = \sum_{i,j=0}^{\infty} (C_{\mu}^{a})_{j}^{i} |i\rangle \langle j|, \qquad (3.16)$$
+
+where the $(C_\mu^a)_j^i$ are (complex) numbers. A (star) product of two components is then written, using $\langle j | k \rangle = \delta_k^j$, as a matrix multiplication in $i, j$:
+
+$$E_{\mu}^{a} \star E_{\nu}^{b} = \sum_{i,j=0}^{\infty} (C_{\mu}^{a} C_{\nu}^{b})_{j}^{i} |i\rangle \langle j|. \qquad (3.17)$$
+
+Thus, the metric is given by using the anti-commutator as
+
+$$G_{\mu\nu} = \frac{1}{2}\eta_{ab}\sum_{i,j=0}^{\infty}\{C_{\mu}^{a}, C_{\nu}^{b}\}_{j}^{i}|i\rangle\langle j|. \qquad (3.18)$$
+---PAGE_BREAK---
+
+Similarly, the determinant (for $\mu$ and $a$)
+
+$$ \det(E_{\mu}^{a}) = \sum_{i,j=0}^{\infty} [\det(C_{\mu}^{a})]_{j}^{i} |i\rangle\langle j| \quad (3.19) $$
+
+reduces to the determinant of the matrix $C_\mu^a$. Correspondingly, the equation of motion (2.10) reduces to the following constraint on $(C_\mu^a)_j^i$:
+
+$$ \epsilon^{\mu\nu\rho}\epsilon_{abc}(C_\nu^b C_\rho^c)_j^i = 0. \quad (3.20) $$
+
+As a particular situation, let us assume the diagonality in $i, j$, that is, each vielbein is written in the linear combination of the projection operators as
+
+$$ E_{\nu}^{b} = \sum_{j=0}^{\infty} C(j)_{\nu}^{b} \phi_{j}, \quad C(j)_{\nu}^{b} \equiv (C_{\nu}^{b})_{j}^{j} \;\text{(no summation)}. \qquad (3.21) $$
+
+Then, (3.20) becomes
+
+$$ \epsilon^{\mu\nu\rho}\epsilon_{abc}C(j)^{b}_{\nu}C(j)^{c}_{\rho} = 0, \quad (3.22) $$
+
+for an arbitrary $j$ (no summation). For a fixed $j$, this is an ordinary (commutative) matrix equation, and $C(j)_\nu^b$ is regarded as a $3 \times 3$ matrix in $\nu$ and $b$,⁷
+
+$$ C(j) = \begin{pmatrix} C(j)_0^0 & C(j)_1^0 & C(j)_2^0 \\ C(j)_0^1 & C(j)_1^1 & C(j)_2^1 \\ C(j)_0^2 & C(j)_1^2 & C(j)_2^2 \end{pmatrix}. \quad (3.23) $$
+
+This constraint simply states that all minors (the determinants of the cofactor matrices) of the matrix $C(j)$ should vanish, i.e., that $C(j)$ is degenerate with rank at most 1. The most general form of such a matrix is given by
+
+$$ C(j) = \begin{pmatrix} \alpha_j \\ \beta_j \\ \gamma_j \end{pmatrix} \begin{pmatrix} s_j & t_j & u_j \end{pmatrix} = \begin{pmatrix} \alpha_j s_j & \alpha_j t_j & \alpha_j u_j \\ \beta_j s_j & \beta_j t_j & \beta_j u_j \\ \gamma_j s_j & \gamma_j t_j & \gamma_j u_j \end{pmatrix} \quad (3.24) $$
+
+where $\alpha_j, \beta_j, \gamma_j, s_j, t_j$ and $u_j$ are arbitrary constants. This means that $C(j)$ is a matrix of rank 1, parametrized by $\mathbb{C}^6$. The remarkable fact here is that any linear combination (3.21), with each $C(j)$ given by (3.24), is also a solution, owing to the orthogonality of the $\phi_j$'s. Therefore we can in fact easily generate an infinite number of classical solutions by assigning a set of degenerate matrices $\{C(j)\}_{j \geq 0}$. We conclude that the most general solution of the vielbein and
+
+⁷ It is also equivalent to the equation of motion for the vielbein $e_\nu^b$ in the commutative theory that has only one matrix $C$.
+---PAGE_BREAK---
+
+the corresponding metric written by the projection operators are as follows:
+
+$$
+E_{\mu}^{a} = \begin{pmatrix}
+E_0^0 & E_0^1 & E_0^2 \\
+E_1^0 & E_1^1 & E_1^2 \\
+E_2^0 & E_2^1 & E_2^2
+\end{pmatrix} = \sum_{j=0}^{\infty} \begin{pmatrix}
+\alpha_j s_j & \alpha_j t_j & \alpha_j u_j \\
+\beta_j s_j & \beta_j t_j & \beta_j u_j \\
+\gamma_j s_j & \gamma_j t_j & \gamma_j u_j
+\end{pmatrix} \phi_j, \quad (3.25)
+$$
+
+$$
+G_{\mu\nu} = \begin{pmatrix} G_{00} & G_{01} & G_{02} \\ G_{10} & G_{11} & G_{12} \\ G_{20} & G_{21} & G_{22} \end{pmatrix} = \sum_{j=0}^{\infty} (-\alpha_j^2 + \beta_j^2 + \gamma_j^2) \begin{pmatrix} s_j^2 & s_j t_j & s_j u_j \\ s_j t_j & t_j^2 & t_j u_j \\ s_j u_j & t_j u_j & u_j^2 \end{pmatrix} \phi_j. \quad (3.26)
+$$
+
+It is immediately shown that any metric of the form (3.26) satisfies $\det_\star G = 0$.
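The key step above, that rank-1 matrices of the form (3.24) solve the constraint (3.22), can be spot-checked numerically. The sketch below builds a rank-1 matrix $C^b_\nu = v_b w_\nu$ with randomly chosen (hypothetical) entries and contracts it with two Levi-Civita symbols:

```python
import numpy as np

# 3d Levi-Civita symbol via the product formula eps_{ijk} = (i-j)(j-k)(k-i)/2
i, j, k = np.meshgrid(np.arange(3), np.arange(3), np.arange(3), indexing="ij")
eps = ((i - j) * (j - k) * (k - i)) / 2.0

# A rank-1 matrix C^b_nu = v_b w_nu as in (3.24); the entries are arbitrary
rng = np.random.default_rng(0)
v, w = rng.standard_normal(3), rng.standard_normal(3)
C = np.outer(v, w)

# Constraint (3.22): eps^{mu nu rho} eps_{abc} C^b_nu C^c_rho = 0 for every free (mu, a).
# The contraction eps_{abc} v_b v_c pairs an antisymmetric symbol with a
# symmetric product, hence vanishes identically.
lhs = np.einsum("mnr,abc,bn,cr->ma", eps, eps, C, C)
assert np.allclose(lhs, 0)
print("rank-1 C(j) satisfies (3.22)")
```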
+
+Of course, all the solutions obtained so far are characterized in this way. In fact, the first
+example, giving (3.4), is characterized by the following three degenerate matrices:
+
+$$
+C(0) = \begin{pmatrix} \alpha_0 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix}, \quad C(1) = \begin{pmatrix} 0 & 0 & 0 \\ 0 & \alpha_1 & 0 \\ 0 & 0 & 0 \end{pmatrix}, \quad C(2) = \begin{pmatrix} 0 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & \alpha_2 \end{pmatrix}. \tag{3.27}
+$$
+
+As we mentioned before, the fact that some solutions show a discrepancy between the dimension
+of the manifold and that of the metric can be explained by the degeneracy of these matrices.
+The examples (3.8) and (3.13) given in the previous subsection are characterized by
+
+$$
+C(0) = \begin{pmatrix} \alpha_0 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix}, \quad C(1) = \begin{pmatrix} 0 & 0 & 0 \\ 0 & \alpha_1 & \alpha_1 \\ 0 & \alpha_1 & \alpha_1 \end{pmatrix}, \quad (3.28)
+$$
+
+and
+
+$$
+C(0) = \begin{pmatrix} \alpha_0 & \alpha_0 & \alpha_0 \\ \alpha_0 & \alpha_0 & \alpha_0 \\ \alpha_0 & \alpha_0 & \alpha_0 \end{pmatrix}, \qquad (3.29)
+$$
+
+respectively. Since each matrix $C(j)$ has rank 1, the sum of two such terms in the former
+gives an effectively two-dimensional metric, while the latter gives a one-dimensional one. In
+other words, we need at least three non-zero matrices $C(j)$ in order to construct a
+three-dimensional solution such as (3.27). As noted above, the commutative theory corresponds
+to a single matrix $C$; by the argument here, it is clear that the metric in the commutative
+theory is at most one-dimensional.
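The rank counting in this paragraph can be made concrete: summing the ranks of the nonzero matrices $C(j)$ reproduces the metrical dimension of each example. A small sketch (all $\alpha$'s set to 1 purely for illustration):

```python
import numpy as np

# Coefficient matrices characterizing the earlier examples, cf. (3.27)-(3.29);
# all alpha's are set to 1 purely for illustration
C_327 = [np.diag([1.0, 0.0, 0.0]), np.diag([0.0, 1.0, 0.0]), np.diag([0.0, 0.0, 1.0])]
C_328 = [np.diag([1.0, 0.0, 0.0]),
         np.array([[0.0, 0.0, 0.0], [0.0, 1.0, 1.0], [0.0, 1.0, 1.0]])]
C_329 = [np.ones((3, 3))]

for label, Cs in [("(3.27)", C_327), ("(3.28)", C_328), ("(3.29)", C_329)]:
    ranks = [int(np.linalg.matrix_rank(C)) for C in Cs]
    # each C(j) has rank 1; the total rank counts the metrical dimension
    print(label, "ranks =", ranks, "-> metrical dimension", sum(ranks))
```

The totals come out as 3, 2 and 1, matching the three-, two- and one-dimensional metrics found above.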
+
+In summary, even when restricting to operators diagonal in $i, j$ (projection operators), we
+have found infinitely many solutions, characterized by the infinite set of degenerate matrices
+$C(j)$. Dividing by the symmetry, we would obtain the vacuum moduli space of the theory in
+this diagonal sector.
+---PAGE_BREAK---
+
+We close this section with a remark. Although we consider the three-dimensional theory in this paper, the extension to the $(2n+1)$-dimensional theory is straightforward, and the construction of the solutions in this section applies to the higher-dimensional case as well. To be more precise, the vielbein is represented by operators on the $n$-dimensional harmonic oscillator basis $|j_1, j_2, \cdots, j_n\rangle$. The equation of motion $\epsilon^{\mu\mu_1 \cdots \mu_{2n}} \epsilon_{a a_1 \cdots a_{2n}} E_{\mu_1}^{a_1} \star \cdots \star E_{\mu_{2n}}^{a_{2n}} = 0$ is solved in the same way as (3.25), but now the sum runs over the projection operators $\phi_{j_1, \cdots, j_n} = |j_1, j_2, \cdots, j_n\rangle \langle j_1, j_2, \cdots, j_n|$, because each matrix $C(j_1, j_2, \cdots, j_n)$ is solved independently, similarly to (3.24). The structure of the dimensional reduction is also the same, which means that the invertibility of the vielbein might be a key to the mechanism of compactification of higher-dimensional theories.
+
+# 4 Noncommutative Solitons by Clifford Algebras
+
+In this section, we will give another class of solutions represented by various dimensional Clifford algebras. Here all solutions are proportional to the Minkowski metric and satisfy $\det_* G \neq 0$, as opposed to the solutions in §3.
+
+## 4.1 First Solution
+
+Let us come back to the ansatz (3.1) for the vielbein. In the previous section, we found solutions using projection operators, which correspond to the diagonal matrix elements $|i\rangle\langle i|$ in the harmonic oscillator basis. However, because the equations of motion (3.2) state that the components of the vielbein should mutually anti-commute with respect to the $\star$-product, a vielbein obeying the Clifford algebra relations also solves (3.2). Such a solution is generally represented by non-diagonal matrix elements $|i\rangle\langle j|$ in that basis.
+
+To be more precise, let us for example focus on the indices $i=0,1$ and define the $SO(3)$ gamma matrices (Pauli matrices) as
+
+$$
+\begin{align}
+\gamma^0 &= \sigma^3 = |0\rangle\langle 0| - |1\rangle\langle 1|, \\
+\gamma^1 &= \sigma^1 = |1\rangle\langle 0| + |0\rangle\langle 1|, \\
+\gamma^2 &= \sigma^2 = i|1\rangle\langle 0| - i|0\rangle\langle 1|.
+\end{align}
+\tag{4.1} $$
+
+They satisfy the Clifford algebra relation $\{\gamma^\mu, \gamma^\nu\} = 2\delta^{\mu\nu}\mathbf{1}_2$. Here, $\mathbf{1}_2 = |0\rangle\langle 0| + |1\rangle\langle 1|$ is the unit matrix in the two-dimensional subspace spanned by $|0\rangle$ and $|1\rangle$, which is equivalent to the
+---PAGE_BREAK---
+
+projection operator $\phi_0 + \phi_1$ in the full Hilbert space. Then the vielbein of the form
+
+$$E_{\mu}^{a} = \begin{pmatrix} \gamma^0 & 0 & 0 \\ 0 & \gamma^1 & 0 \\ 0 & 0 & \gamma^2 \end{pmatrix} \qquad (4.2)$$
+
+is evidently a solution for (3.2). The metric for this vielbein is
+
+$$G_{\mu\nu} = \eta_{\mu\nu} (|0\rangle\langle 0| + |1\rangle\langle 1|)$$
+
+$$= \eta_{\mu\nu} (\phi_0 + \phi_1) \qquad (4.3)$$
+
+$$= \frac{4r^2}{\theta} e^{-r^2/\theta} \eta_{\mu\nu}. \qquad (4.4)$$
+
+In the last line, we rewrote $\phi_0$ and $\phi_1$ in terms of the Laguerre polynomials.
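The computation leading to (4.3) is easy to verify in the two-dimensional subspace: the Pauli matrices (4.1) satisfy the Clifford relation, and the anticommutator definition of the metric then collapses to $\eta_{\mu\nu}\mathbf{1}_2$. A minimal check:

```python
import numpy as np

# Pauli matrices as operators on span{|0>, |1>}, Eq. (4.1)
g = [np.array([[1, 0], [0, -1]], dtype=complex),    # gamma^0 = sigma^3
     np.array([[0, 1], [1, 0]], dtype=complex),     # gamma^1 = sigma^1
     np.array([[0, -1j], [1j, 0]], dtype=complex)]  # gamma^2 = sigma^2
eta = np.diag([-1.0, 1.0, 1.0])
one2 = np.eye(2)

# Clifford relation {gamma^mu, gamma^nu} = 2 delta^{mu nu} 1_2,
# so the diagonal ansatz (4.2) solves the anticommutator equations (3.2)
for m in range(3):
    for n in range(3):
        assert np.allclose(g[m] @ g[n] + g[n] @ g[m], 2 * (m == n) * one2)

# Metric: G_{mu nu} = (1/2) eta_{ab} {E^a_mu, E^b_nu} with E^a_mu = delta^a_mu gamma^mu
# reproduces eta_{mu nu} 1_2, i.e. Eq. (4.3)
for m in range(3):
    for n in range(3):
        G = 0.5 * eta[m, n] * (g[m] @ g[n] + g[n] @ g[m])
        assert np.allclose(G, eta[m, n] * one2)
print("G = eta (phi_0 + phi_1) verified on the two-dimensional subspace")
```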
+
+Remarkably, this metric is proportional to the three-dimensional Minkowski metric, so it is natural to regard this solution as a soliton interpolating between the two vacua $G_{\mu\nu} = 0$ and $G_{\mu\nu} = \eta_{\mu\nu}$. The overall factor of the projection operators means that the (noncommutative) Minkowski space exists only in the region where $\phi_0 + \phi_1$ has non-zero support, in analogy with the interpretation of noncommutative scalar solitons: on the noncommutative plane, each projection $\phi_i$ occupies a region of minimal area $2\pi\theta$, determined by the uncertainty relation. This is indeed seen by noting $\det_\star G = (\det \eta)(\phi_0 + \phi_1)_\star^3 = -(\phi_0 + \phi_1)$ and $\text{Tr}(\phi_0 + \phi_1) = 2$. This implies that an effective cosmological constant term defined through $\det_\star G$ (a composite quantity different from our action) is given by
+
+$$S_{eff} = -\lambda \int dt d^2 x \sqrt{-\det_* G} = -2\pi\theta\lambda \int dt \text{Tr}(\phi_0 + \phi_1) = -4\pi\theta\lambda \int dt, \qquad (4.5)$$
+
+which corresponds to the finite volume $4\pi\theta$ in the spatial directions. Because now $\det_\star G \neq 0$, it is in principle possible to compute the noncommutative scalar curvature $R_\star$ to capture the structure further. This, however, requires a proper definition of $R_\star$, which we do not attempt in this paper.
+
+Nevertheless, the same qualitative feature can be observed by analyzing “commutative” quantities. In this treatment, the support of $\phi_0 + \phi_1$ (the non-degenerate region) as a function is $0 < r < \infty$. Note, however, that $r$ is the radial coordinate of the isotropic coordinates
+
+$$ds^2 = \frac{4r^2}{\theta} e^{-r^2/\theta} (-dt^2 + dr^2 + r^2 d\varphi^2), \qquad (4.6)$$
+
+which differs from the physical radial coordinate usually defined by $R = (2r^2/\sqrt{\theta})e^{-r^2/2\theta}$. Both $r=0$ and $r=\infty$ correspond to $R=0$, which implies that the physical distance between $r=0$ and $r=\infty$ is finite. This is consistent with the finite spatial integral in (4.5). With this in mind,
+---PAGE_BREAK---
+
+Figure 3: The Ricci scalar (left) and the Kretschmann invariant (right) for the metric (4.6).
+Here we set $\theta = 1$.
+
+we now check invariant scalars $R$ and $R_{\mu\nu\rho\sigma}R^{\mu\nu\rho\sigma}$ for this metric, and the results are given as
+(see also Fig.3)
+
+$$R = - \frac{e^{r^2 / \theta}}{2r^4 \theta} (\theta^2 - 6r^2 \theta + r^4), \qquad (4.7)$$
+
+$$R_{\mu\nu\rho\sigma} R^{\mu\nu\rho\sigma} = \frac{e^{2r^2/\theta}}{4r^8\theta^2} (5\theta^4 - 10r^2\theta^3 + 18r^4\theta^2 - 6r^6\theta + r^8). \qquad (4.8)$$
+
+Both diverge at $r=0$ and at $r=\infty$, where $\det G(r) = 0$. From Figure 3 we see that the “interior” region is roughly $1.0 < r/\sqrt{\theta} < 1.5$, and in this region the spacetime appears to be a “warped” Minkowski space, in the sense that the scalar curvature is almost constant and the metric has the overall scaling factor $(4r^2/\theta)e^{-r^2/\theta}$. These analyses indicate that this spacetime can be seen as a small bubble of ordinary space surrounded by empty space (the nothing state). The size of this bubble is approximately $\sqrt{\theta}$.
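Equation (4.7) can be double-checked symbolically. Since (4.6) is conformally flat, one may use the standard conformal transformation formula for the Ricci scalar in three dimensions (this formula is an assumption of the check, not part of the text above):

```python
import sympy as sp

r, th = sp.symbols("r theta", positive=True)

# Conformal factor of the isotropic form (4.6): ds^2 = f(r) (-dt^2 + dr^2 + r^2 dphi^2)
f = 4 * r**2 / th * sp.exp(-r**2 / th)
w = sp.log(f) / 2  # g = e^{2w} x (flat metric)

# Conformal transformation of the Ricci scalar in n = 3 dimensions, flat base:
#   R = e^{-2w} [ -2(n-1) box(w) - (n-1)(n-2) (dw)^2 ],
# where box(w) = w'' + w'/r for a static radial w(r)
box_w = sp.diff(w, r, 2) + sp.diff(w, r) / r
R = sp.exp(-2 * w) * (-4 * box_w - 2 * sp.diff(w, r) ** 2)

# The claimed result, Eq. (4.7)
R_paper = -sp.exp(r**2 / th) / (2 * r**4 * th) * (th**2 - 6 * r**2 * th + r**4)
print(sp.simplify(R - R_paper))  # 0
```

The difference simplifies to zero, confirming (4.7) term by term.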
+
+## 4.2 Generalizations
+
+The above solution can be immediately generalized in two ways.
+
+First, we can change the choice of the two indices from $i=0,1$ to any other pair, because the particular choice is not important for constructing a solution. In particular, this generalization is related to the solution-generating technique [37]: if $E_\mu^a$ is a solution, then $SE_\mu^a S^\dagger$ is also a solution$^8$.
+Here $S$ is a shift operator defined by
+
+$$S = \sum_{i=0}^{\infty} |i+1\rangle \langle i| \qquad (4.9)$$
+
+⁸ Note that the unitary transformation $E_{\mu}^{a} \rightarrow U E_{\mu}^{a} U^{-1}$ is a symmetry of the cosmological action.
+---PAGE_BREAK---
+
+and satisfies $S^\dagger S = 1$ but $SS^\dagger = 1 - \phi_0$. Thus, the metric
+
+$$
+\begin{align}
+G_{\mu\nu} &= \eta_{\mu\nu} S (|0\rangle\langle0| + |1\rangle\langle1|) S^\dagger \\
+&= \eta_{\mu\nu} (|1\rangle\langle1| + |2\rangle\langle2|) \\
+&= \eta_{\mu\nu} (\phi_1 + \phi_2) \tag{4.10}
+\end{align}
+$$
+
+is also a solution. The choice of the two indices does not affect the size of the “bubble” on the noncommutative space because $\text{Tr}(\phi_1 + \phi_2) = 2$ is the same as above.
+
+The next generalization is to enlarge the size of the gamma matrices. To do this, choose the indices $i = 0, 1, \dots, q$, where $q = 2^{[d/2]} - 1$, and define $SO(d)$ gamma matrices $\gamma^0, \dots, \gamma^{d-1}$ in the harmonic oscillator space as above. Then, selecting three of them, say,
+
+$$ E_0^0 = \gamma^0, \quad E_1^1 = \gamma^1, \quad E_2^2 = \gamma^2, \tag{4.11} $$
+
+they also solve the equation of motion. Because now the size of gamma matrices is $2^{[d/2]}$, the corresponding metric is proportional to the rank $2^{[d/2]}$ projection operators as
+
+$$
+\begin{align}
+G_{\mu\nu} &= \eta_{\mu\nu} (|0\rangle\langle0| + \cdots + |q\rangle\langle q|) \\
+&= \eta_{\mu\nu} (\phi_0 + \cdots + \phi_q). \tag{4.12}
+\end{align}
+$$
+
+The volume of the support contributing to the effective cosmological constant term is now $\text{Tr}(\phi_0 + \cdots + \phi_q) = q+1$, i.e., $(q+1)/2$ times larger than for the previous solutions, as expected. In particular, by taking the large matrix-size limit $q \to \infty$, we find that (4.12) actually reduces to the Minkowski metric because of the completeness relation $\sum_{i=0}^{\infty} \phi_i = 1$ (2.21). The resulting second-order cosmological constant term in this limit
+
+$$ S = -2\lambda \int dt d^2 x \tag{4.13} $$
+
+is in fact divergent. It is surprising that the Minkowski spacetime can emerge from the cosmological constant term alone. It may seem confusing that the Minkowski metric carries a divergent cosmological constant, because that spacetime is a classical vacuum for vanishing cosmological constant in the ordinary sense. The point is that here we view the spacetime from the nothing state $G_{\mu\nu} = 0$ as the ground state, relative to which the Minkowski space has infinite volume, whereas in the ordinary Einstein equation it is implicitly assumed that the Minkowski space is the ground state. Therefore there is no contradiction. In summary, we have found a sequence of solutions that interpolates between $G_{\mu\nu} = 0$ ($q=0$) and the Minkowski space ($q \to \infty$).
+
+Another interesting application is to choose the indices starting from 1, i.e., $i = 1, \dots, q$ with $q = 2^{[d/2]}$, and to take the $q \to \infty$ limit. Then $E_0^0 = \gamma^0$, $E_1^1 = \gamma^1$, $E_2^2 = \gamma^2$ in this basis define
+---PAGE_BREAK---
+
+Figure 4: The profile of $1 - 2e^{-r^2/\theta}$. Here we set $\theta = 1$.
+
+a solution as above but now the metric becomes
+
+$$
+\begin{aligned}
+G_{\mu\nu} &= \eta_{\mu\nu} (|1\rangle \langle 1| + \dots + |q\rangle \langle q|) \\
+&= \eta_{\mu\nu} (\phi_1 + \dots + \phi_q) \\
+&\xrightarrow{q \to \infty} \eta_{\mu\nu} (1 - \phi_0).
+\end{aligned}
+\quad (4.14)
+$$
+
+Thus, the metric approaches $G_{\mu\nu} = (1 - 2e^{-r^2/\theta})\eta_{\mu\nu}$. More generally, by choosing the indices $i = k, \dots, \infty$ for some $k$, we obtain a class of solutions
+
+$$ G_{\mu\nu} \xrightarrow{q \to \infty} \eta_{\mu\nu} (1 - \phi_0 - \dots - \phi_k). \quad (4.15) $$
+
+As opposed to the solutions above, the metric (4.14) has support over all of the spacetime except for that of $\phi_0$. As seen from the “noncommutative” determinant, this spacetime is then a “hole” of minimal size $\sim \sqrt{\theta}$ in the Minkowski space. Similarly, the metric (4.15) has a hole of radius $k\sqrt{\theta}$ in the Minkowski spacetime.
+
+The same is seen from the analysis of “commutative” quantities. Indeed, the “hole” in the metric (4.14) is roughly visible in its “step function” profile (see Fig.4), which jumps at $r = \sqrt{(\ln 2)\theta} \sim 0.833\sqrt{\theta}$, the zero point of $\det G$. Computing the invariant scalars of this spacetime (4.14), we find their concrete forms:
+
+$$ R = - \frac{8e^{r^2/\theta}}{(e^{r^2/\theta} - 2)^3 \theta^2} \left\{ (1 - 2e^{r^2/\theta}) r^2 + 2(-2 + e^{r^2/\theta}) \theta \right\} \quad (4.16) $$
+
+$$
+\begin{aligned}
+R_{\mu\nu\rho\sigma} R^{\mu\nu\rho\sigma} &= \frac{32e^{2r^2/\theta}}{(e^{r^2/\theta} - 2)^6 \theta^4} \\
+&\quad \times \left\{ (2 + 4e^{2r^2/\theta}) r^4 - 2(2 - 7e^{r^2/\theta} + 3e^{2r^2/\theta}) r^2 \theta + 3(-2 + e^{r^2/\theta}) \theta^2 \right\}
+\end{aligned}
+\quad (4.17)
+$$
+
+Both of them are finite at the origin and diverge at $r \sim 0.833\sqrt{\theta}$. As shown in Fig.5, the
+spacetime is divided into two regions by a wall where the curvature diverges. Therefore, two
+
+Figure 5: The Ricci scalar (left) and the Kretschmann invariant (right) of the spacetime (4.14) for $\theta = 1$.
+
+bubbles of universes with different curvatures seem to be glued at the curvature wall. The outer region has an almost zero scalar curvature, so it is expected to be the Minkowski spacetime outside the “hole”, consistent with the “noncommutative” quantities. On the other hand, the interior region has a negative scalar curvature. This naively indicates an AdS spacetime, which is not expected from the noncommutative viewpoint. However, we should not take the precise value of the scalar curvature in this “commutative” evaluation too seriously.
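Incidentally, the finiteness at the origin can be read off directly from (4.16): substituting $r = 0$ gives $R(0) = -16/\theta$, whose magnitude matches the $16/\theta$ quoted in the commutative-limit discussion below. A short symbolic check:

```python
import sympy as sp

r, th = sp.symbols("r theta", positive=True)
x = sp.exp(r**2 / th)

# Ricci scalar (4.16) of the spacetime (4.14), G_{mu nu} = (1 - 2 e^{-r^2/theta}) eta_{mu nu}
R = -8 * x / ((x - 2) ** 3 * th**2) * ((1 - 2 * x) * r**2 + 2 * (x - 2) * th)

R0 = sp.simplify(R.subs(r, 0))
print(R0)  # -16/theta: finite at the origin

wall = sp.sqrt(th * sp.log(2))  # r = sqrt(theta ln 2), approx 0.833 sqrt(theta)
print(sp.simplify(x.subs(r, wall) - 2))  # 0: the curvature wall sits at the zero of det G
```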
+
+A remarkable feature of this class of solutions, (4.12), (4.14) and (4.15), is that they do not shrink to a point in the commutative limit. In the limit, all the projection operators $\phi_i$ reduce to sharp, delta-function-type distributions, and all the degenerate points with $\det_\star G = 0$ and $\det G = 0$ concentrate at the origin. The metrics (4.14) and (4.15) then approach the Minkowski spacetime everywhere except at the origin, which is a point-like curvature singularity⁹. Indeed, it is easily shown that $R \to 0$ for $r \neq 0$ while $|R| \sim 16/\theta \to \infty$ at $r = 0$. It is again interesting to see this as a resolution of singularities. When the spacetime is highly curved at a point in the second-order formulation of gravity, one would expect strong quantum gravity effects to appear there. In our case, it is simply a degenerate point of the metric, where the second-order formulation becomes meaningless but the first-order noncommutative formulation remains well defined. This scenario is analogous to the stringy resolution of singularities in the instanton moduli space, where the small instanton singularity is actually not an end of the moduli space but is connected to other branches of vacua. Moreover, each singularity carries a kind of index ($k$ in (4.15)), which has a definite meaning in the noncommutative space as the size of the singularity. This is reminiscent of black hole microstates.
+
+⁹ On the other hand, the metric (4.12) approaches a spacetime of a single point surrounded by the nothing.
+---PAGE_BREAK---
+
+# 5 Conclusions and Discussions
+
+In this paper, we investigated the three-dimensional gravitational theory on noncommutative space, in the setting where the action consists of the cosmological constant term only. Although the action has no kinetic term, we found infinitely many nontrivial classical solutions owing to the noncommutativity. In order to construct the solutions, we applied the recipe developed in [36], i.e., we used the connection between the star product and the operator formulation.
+
+To understand the solutions, we proposed a new point of view for the cosmological constant term: the action given here is already a full theory, without introducing the scalar curvature term. When we adopt this idea, the metric, the Ricci scalar and other physical quantities can be constructed after the vielbein is obtained by solving the equation of motion (2.10). In other words, we effectively switch to the second-order formalism. In this case, the vielbein which solves (2.10) can be regarded as a “meta” spacetime, or a seed vielbein, that serves as a source for the commutative Ricci scalar $R$ or the noncommutative one $R_\star$.
+
+We would like to emphasize that this point of view has not appeared before. One reason is that on commutative spaces a cosmological constant term by itself cannot give a nontrivial solution; a kinetic term is needed.
+
+Let us now summarize the solutions, which are classified into two classes. The solutions of the first class are constructed using projection operators, and we constructed the general solutions of this class. All of them satisfy $\det_\star G = 0$ but $\det G \neq 0$, so we calculated the commutative scalar curvatures produced by the metrics based on these vielbein solutions. We found that the spacetimes are divided into several regions by walls of curvature singularities where $\det G$ becomes zero. In that sense, they have the structure of bubbles of spacetimes with various cosmological constants. Another feature of this class is dimensional reduction: some solutions are effectively one- or two-dimensional because of the degeneracy $\det_\star G = 0$. In the context of quantum gravity, the possibility of dimensional reduction has been intensively discussed [38, 41], and a similar issue has been reported in another gravitational theory [45]. It would be interesting to investigate the relation of our theory to them.
+
+The solutions of the second class are constructed by applying the Clifford algebra and the gamma matrices. They are noncommutative solitons interpolating between $G_{\mu\nu} = 0$ and $G_{\mu\nu} = \eta_{\mu\nu}$. They satisfy $\det_\star G \neq 0$ and $\det G \neq 0$, so both noncommutative and commutative quantities can be derived from the vielbein. This analysis indicates that the solutions can be regarded as either
+---PAGE_BREAK---
+
+a bubble of ordinary spacetime around the nothing $G_{\mu\nu} = 0$, or a hole (a bubble of nothing) in the Minkowski space, with regions of different scalar curvature partitioned by a wall of curvature singularity. Interestingly, the Minkowski metric is included in this class of solutions; for it, the curvature singularities are absent in the large-size limit of the gamma matrices. We also argued a possible mechanism for the resolution of point-like curvature singularities in the commutative limit.
+
+Thus we found many nontrivial solutions which can be expected to capture effects of quantum gravity, but there are many open questions to be investigated. We note again that they depend on the two possible interpretations of the model discussed in §2.
+
+The first possibility is to regard the action we have used in this paper as part of a full theory. In other words, we need to add a (noncommutative) spin connection to our theory. Under this interpretation, the solutions in this paper would not be exact solutions of the full theory. However, they should be valid in a certain limit where the spin connection term is negligible compared with the cosmological constant term. It would be interesting if the existence of our solutions restricted the possible noncommutative extensions of the first-order formulation of gravity. Note that for noncommutative scalar field theories, the solutions obtained in the large noncommutativity limit can be extended to the so-called exact noncommutative solitons in the full theory with a kinetic term by adding noncommutative gauge fields. The spin connection would play a role similar to that of the gauge fields. It would also be useful to focus on the symmetry of our solutions for that purpose. We note that the $E_\mu^a = 0$ solution preserves the full (twisted) diffeomorphism symmetry, while the Minkowski metric preserves the twisted Poincaré symmetry. What is the corresponding twisted symmetry in our case? Because of the static, rotationally symmetric ansatz, a naive guess is the twisted version of $\mathbb{R} \times U(1) \times SO(2)$.
+
+Looking at our model from the observational point of view is very interesting as well. In this connection, we note that there is an argument that $G_{\mu\nu} = 0$ is an origin of the dark matter [46]. Here the $E_\mu^a = 0$ solution does not constrain the spin connection, and thus the equation of motion for the fluctuation contains an extra integration constant, which behaves as dark matter. If such a possibility applies to our solutions as well, we might be able to see noncommutative effects in cosmological observations. In that sense, we need more “realistic” solutions, e.g., a four-dimensional, time-dependent solution. The application of our model to black holes on noncommutative spaces is also an interesting direction. In the commutative limit $\theta \to 0$ of (4.14), there appears a sharp, delta-function-like singularity at the origin which behaves as a point-like source. There are black hole solutions on noncommutative space with that kind of source term (smeared by the noncommutativity) [14, 17, 18, 19, 20]. It is interesting to investigate the relation to our solutions.
+---PAGE_BREAK---
+
+In the weakly noncommutative case, the $\theta$-expansion works, so that we can approximately use the ordinary Einstein-Hilbert action. Note that for any finite $k$ the metric (4.15) also represents a point-like source, but now with $k$ internal degrees of freedom. There might be a relation to black hole microstates.
+
+On the other hand, when we regard our theory as a full theory, the most important issue is to establish the validity of this approach, in other words, to show the relation to the second-order formalism without a spin connection. Concerning this, we recall that a similar situation already exists in string field theory and in the context of quantum gravity [38]: the pre-geometrical action $S \sim \int \Phi * \Phi * \Phi$ admits a solution $\Phi_0$ satisfying $\Phi_0 * \Phi_0 = 0$, which defines a BRST charge, and the theory of fluctuations around it becomes Witten's SFT. It would be a very interesting scenario if there were an analogous mechanism by which gravity emerges from our solutions, starting from the cosmological constant only.
+
+This is the first paper to suggest the emergence of “meta” spacetimes from a cosmological constant and noncommutativity alone. Beyond the ordinary expectation that noncommutativity becomes important at the Planck scale, our model may suggest the more radical scenario that noncommutativity is also crucial for spacetimes even at large scales. In this respect, the scenario also gives a new direction on the cosmological constant problem, namely that the cosmological constant is necessary for spacetimes to emerge. Both fundamental and phenomenological questions on this model have to be investigated further.
+
+## Acknowledgements
+
+The authors would like to thank Y. Sasai, J. Soda, H. Ujino, H. Usui and S. Watamura for helpful discussions. The work of S. K. is supported by a JSPS Grant-in-Aid for Young Scientists (B), No. 21740198.
+---PAGE_BREAK---
+
+## A The explicit forms of the Ricci scalar and the Kretschmann invariant for the metric (3.4)
+
+We give the explicit forms of the Ricci scalar and the Kretschmann invariant for the metric (3.4). They are given by
+
+$$
+\begin{equation}
+R = 2e^{\frac{r^2}{\theta}} \left\{
+\begin{aligned}
+& 8x^{12} + x^{10}(40y^2 - 68\theta) + 8x^8(10y^4 - 37y^2\theta + 24\theta^2) \\
+& + 4x^6(20y^6 - 126y^4\theta + 172y^2\theta^2 - 65\theta^3) \\
+& + 2x^4(20y^8 - 208y^6\theta + 456y^4\theta^2 - 359y^2\theta^3 + 89\theta^4) \\
+& + x^2(8y^{10} - 164y^8\theta + 528y^6\theta^2 - 656y^4\theta^3 + 330y^2\theta^4 - 65\theta^5) \\
+& + \theta(-24y^{10} + 112y^8\theta - 198y^6\theta^2 + 152y^4\theta^3 - 61y^2\theta^4 + 10\theta^5)
+\end{aligned}
+\right\} / \left\{ \theta (-2r^2 + \theta)^2 (2r^4 - 4r^2\theta + \theta^2)^2 \right\}, \tag{A.1}
+\end{equation}
+$$
+---PAGE_BREAK---
+
+$$
+\begin{aligned}
+R_{\mu\nu\rho\sigma} R^{\mu\nu\rho\sigma} &= 4e^{\frac{2r^2}{\theta}} \Biggl\{
+\begin{aligned}[t]
+& 64x^{24} + 64x^{22}(10y^2 - 13\theta) + 16x^{20}(180y^4 - 476y^2\theta + 305\theta^2) \\
+& + 64x^{18}(120y^6 - 486y^4\theta + 639y^2\theta^2 - 260\theta^3) \\
+& + 32x^{16}(420y^8 - 2328y^6\theta + 4744y^4\theta^2 - 3989y^2\theta^3 + 1148\theta^4) \\
+& + 16x^{14}(1008y^{10} - 7224y^8\theta + 20488y^6\theta^2 - 26914y^4\theta^3 + 16049y^2\theta^4 - 3487\theta^5) \\
+& + 8x^{12}(1680y^{12} - 15120y^{10}\theta + 56812y^8\theta^2 - 104748y^6\theta^3 + 97688y^4\theta^4 - 43900y^2\theta^5 + 7587\theta^6) \\
+& + 8x^{10}(960y^{14} - 10752y^{12}\theta + 52640y^{10}\theta^2 - 129556y^8\theta^3 + 169222y^6\theta^4 \\
+& \qquad \qquad \qquad \qquad \quad - 118388y^4\theta^5 + 42087y^2\theta^6 - 6071\theta^7) \\
+& + 4x^8(720y^{16} - 10176y^{14}\theta + 65744y^{12}\theta^2 - 211400y^{10}\theta^3 \\
+& \qquad \qquad \qquad \qquad \quad + 365560y^8\theta^4 - 355060y^6\theta^5 + 194909y^4\theta^6 - 57396y^2\theta^7 + 7245\theta^8) \\
+& + 4x^6(160y^{18} - 3024y^{16}\theta + 27232y^{14}\theta^2 - 114072y^{12}\theta^3 + 252764y^{10}\theta^4 \\
+& \qquad \qquad \qquad \qquad \quad - 320300y^8\theta^5 + 241376y^6\theta^6 - 108758y^4\theta^7 + 27825y^2\theta^8 - 3176\theta^9) \\
+& + x^4(64y^{20} - 1984y^{18}\theta + 28688y^{16}\theta^2 - 157984y^{14}\theta^3 + 438784y^{12}\theta^4 \\
+& \qquad \qquad \qquad \qquad \quad - 696832y^{10}\theta^5 + 675216y^8\theta^6 - 413272y^6\theta^7 + 160576y^4\theta^8 - 36972y^2\theta^9 + 3897\theta^{10}) \\
+& - 2x^2\theta(64y^{20} - 2208y^{18}\theta + 16112y^{16}\theta^2 - 54952y^{14}\theta^3 + 106080y^{12}\theta^4 \\
+& \qquad \qquad \qquad \qquad \quad - 126580y^{10}\theta^5 + 98472y^8\theta^6 - 51586y^6\theta^7 + 17960y^4\theta^8 - 3815y^2\theta^9 + 373\theta^{10}) \\
+& + \theta^2(320y^{20} - 3008y^{18}\theta + 12256y^{16}\theta^2 - 27984y^{14}\theta^3 + 39812y^{12}\theta^4 \\
+& \qquad \qquad \qquad \qquad \quad - 37688y^{10}\theta^5 + 24916y^8\theta^6 - 11652y^6\theta^7 + 3741y^4\theta^8 - 738y^2\theta^9 + 66\theta^{10}) \Biggr\}
+\end{aligned}
+/ \left\{ \theta^2 (-2r^2 + \theta)^4 (2r^4 - 4r^2\theta + \theta^2)^4 \right\}, \quad (\text{A.2})
+$$
+
+respectively.
+
+## References
+
+[1] A. H. Chamseddine, *Deforming Einstein's gravity*, Phys. Lett. B504 (2001) 33–37 [hep-th/0009153].
+
+[2] J. W. Moffat, *Noncommutative quantum gravity*, Phys. Lett. B491 (2000) 345–352 [hep-th/0007181].
+
+[3] G. Zet, V. Manta and S. Babeti, *De Sitter gauge theory of gravitation*, Int. J. Mod. Phys. C14 (2003) 41–48.
+---PAGE_BREAK---
+
+[4] N. Seiberg and E. Witten, *String theory and noncommutative geometry*, JHEP 09 (1999) 032 [hep-th/9908142].
+
+[5] P. Mukherjee and A. Saha, *Reissner-Nordstrom solutions in noncommutative gravity*, Phys. Rev. D77 (2008) 064014 [0710.5847].
+
+[6] M. Chaichian, A. Tureanu and G. Zet, *Corrections to Schwarzschild Solution in Noncommutative Gauge Theory of Gravity*, Phys. Lett. B660 (2008) 573-578 [0710.2075].
+
+[7] M. Chaichian, M. R. Setare, A. Tureanu and G. Zet, *On Black Holes and Cosmological Constant in Noncommutative Gauge Theory of Gravity*, JHEP 04 (2008) 064 [0711.4546].
+
+[8] P. Mukherjee and A. Saha, *A new approach to the analysis of a noncommutative Chern-Simons theory*, Mod. Phys. Lett. A21 (2006) 821-830 [hep-th/0409248].
+
+[9] P. Mukherjee and A. Saha, *On the question of regular solitons in a Noncommutative Maxwell-Chern-Simons-Higgs model*, Mod. Phys. Lett. A22 (2007) 1113 [hep-th/0605123].
+
+[10] E. Witten, *(2+1)-Dimensional Gravity as an Exactly Soluble System*, Nucl. Phys. B311 (1988) 46.
+
+[11] M. Banados, O. Chandia, N. E. Grandi, F. A. Schaposnik and G. A. Silva, *Three-dimensional noncommutative gravity*, Phys. Rev. D64 (2001) 084012 [hep-th/0104264].
+
+[12] E. Chang-Young, D. Lee and Y. Lee, *Noncommutative BTZ Black Hole in Polar Coordinates*, Class. Quant. Grav. 26 (2009) 185001 [0808.2330].
+
+[13] H.-C. Kim, M.-I. Park, C. Rim and J. H. Yee, *Smeared BTZ Black Hole from Space Noncommutativity*, JHEP 10 (2008) 060 [0710.1362].
+
+[14] P. Nicolini and E. Spallucci, *Noncommutative geometry inspired wormholes and dirty black holes*, Class. Quant. Grav. 27 (2010) 015010 [0902.4654].
+
+[15] P. Nicolini, *Noncommutative Black Holes, The Final Appeal To Quantum Gravity: A Review*, Int. J. Mod. Phys. A24 (2009) 1229-1308 [0807.1939].
+
+[16] P. Nicolini, *A model of radiating black hole in noncommutative geometry*, J. Phys. A38 (2005) L631-L638 [hep-th/0507266].
+---PAGE_BREAK---
+
+[17] P. Nicolini, A. Smailagic and E. Spallucci, *Noncommutative geometry inspired Schwarzschild black hole*, Phys. Lett. B632 (2006) 547-551 [gr-qc/0510112].
+
+[18] S. Ansoldi, P. Nicolini, A. Smailagic and E. Spallucci, *Noncommutative geometry inspired charged black holes*, Phys. Lett. B645 (2007) 261-266 [gr-qc/0612035].
+
+[19] E. Spallucci, A. Smailagic and P. Nicolini, *Pair creation by higher dimensional, regular, charged, micro black holes*, Phys. Lett. B670 (2009) 449-454 [0801.3519].
+
+[20] Y. S. Myung and M. Yoon, *Regular black hole in three dimensions*, Eur. Phys. J. C62 (2009) 405-411 [0810.0078].
+
+[21] R. Banerjee, B. R. Majhi and S. K. Modak, *Area Law in Noncommutative Schwarzschild Black Hole*, Class. Quant. Grav. 26 (2009) 085010 [0802.2176].
+
+[22] R. Banerjee, B. R. Majhi and S. Samanta, *Noncommutative Black Hole Thermodynamics*, Phys. Rev. D77 (2008) 124035 [0801.3583].
+
+[23] R. Banerjee, B. Chakraborty, S. Ghosh, P. Mukherjee and S. Samanta, *Topics in Noncommutative Geometry Inspired Physics*, Found. Phys. 39 (2009) 1297-1345 [0909.1000].
+
+[24] R. Banerjee, S. Gangopadhyay and S. K. Modak, *Voros product, Noncommutative Schwarzschild Black Hole and Corrected Area Law*, Phys. Lett. B686 (2010) 181-187 [0911.2123].
+
+[25] P. Aschieri, M. Dimitrijevic, F. Meyer and J. Wess, *Noncommutative geometry and gravity*, Class. Quant. Grav. 23 (2006) 1883-1912 [hep-th/0510059].
+
+[26] P. Aschieri et al., *A gravity theory on noncommutative spaces*, Class. Quant. Grav. 22 (2005) 3511-3532 [hep-th/0504183].
+
+[27] P. Aschieri and L. Castellani, *Noncommutative $D=4$ gravity coupled to fermions*, JHEP 06 (2009) 086 [0902.3817].
+
+[28] P. Aschieri and L. Castellani, *Noncommutative supergravity in $D=3$ and $D=4$*, JHEP 06 (2009) 087 [0902.3823].
+
+[29] P. Aschieri and L. Castellani, *Noncommutative Gravity Solutions*, J. Geom. Phys. 60 (2010) 375-393 [0906.2774].
+---PAGE_BREAK---
+
+[30] P. Schupp and S. Solodukhin, *Exact Black Hole Solutions in Noncommutative Gravity*, 0906.2724.
+
+[31] A. Schenkel, *Symmetry Reduction and Exact Solutions in Twisted Noncommutative Gravity*, 0908.0434.
+
+[32] T. Ohl and A. Schenkel, *Cosmological and Black Hole Spacetimes in Twisted Noncommutative Gravity*, JHEP 10 (2009) 052 [0906.2730].
+
+[33] P. Mukherjee and A. Saha, *Comment on the first order noncommutative correction to gravity*, Phys. Rev. D74 (2006) 027702 [hep-th/0605287].
+
+[34] R. Banerjee, P. Mukherjee and S. Samanta, *Lie algebraic Noncommutative Gravity*, Phys. Rev. D75 (2007) 125020 [hep-th/0703128].
+
+[35] D. V. Vassilevich, *Diffeomorphism covariant star products and noncommutative gravity*, Class. Quant. Grav. 26 (2009) 145010 [0904.3079].
+
+[36] R. Gopakumar, S. Minwalla and A. Strominger, *Noncommutative solitons*, JHEP 05 (2000) 020 [hep-th/0003160].
+
+[37] J. A. Harvey, P. Kraus and F. Larsen, *Exact noncommutative solitons*, JHEP 12 (2000) 024 [hep-th/0010060].
+
+[38] G. T. Horowitz, *Topology change in classical and quantum gravity*, Class. Quant. Grav. 8 (1991) 587-602.
+
+[39] J. A. Harvey, *Komaba lectures on noncommutative solitons and D-branes*, hep-th/0102076.
+
+[40] J. Ambjorn, J. Jurkiewicz and R. Loll, *Emergence of a 4D world from causal quantum gravity*, Phys. Rev. Lett. 93 (2004) 131301 [hep-th/0404156].
+
+[41] J. Ambjorn, J. Jurkiewicz and R. Loll, *Spectral dimension of the universe*, Phys. Rev. Lett. 95 (2005) 171301 [hep-th/0505113].
+
+[42] J. Ambjorn, J. Jurkiewicz and R. Loll, *Reconstructing the universe*, Phys. Rev. D72 (2005) 064014 [hep-th/0505154].
+
+[43] J. Ambjorn, J. Jurkiewicz and R. Loll, *The universe from scratch*, Contemp. Phys. 47 (2006) 103-117 [hep-th/0509010].
+---PAGE_BREAK---
+
+[44] S. Carlip, *Spontaneous Dimensional Reduction in Short-Distance Quantum Gravity?*, 0909.3329.
+
+[45] P. Horava, *Spectral Dimension of the Universe in Quantum Gravity at a Lifshitz Point*, Phys. Rev. Lett. 102 (2009) 161301 [0902.3657].
+
+[46] M. Banados, *The ground-state of General Relativity, Topological Theories and Dark Matter*, Class. Quant. Grav. 24 (2007) 5911-5916 [hep-th/0701169].
\ No newline at end of file
diff --git a/samples/texts_merged/6476246.md b/samples/texts_merged/6476246.md
new file mode 100644
index 0000000000000000000000000000000000000000..467a267b736c13a504175f31052c554a2598b294
--- /dev/null
+++ b/samples/texts_merged/6476246.md
@@ -0,0 +1,508 @@
+
+---PAGE_BREAK---
+
+# Coding for Interactive Communication*
+
+Leonard J. Schulman
+Computer Science Division
+U. C. Berkeley
+
+## Abstract
+
+Let the input to a computation problem be split between two processors connected by a communication link; and let an interactive protocol $\pi$ be known by which, on any input, the processors can solve the problem using no more than $T$ transmissions of bits between them, provided the channel is noiseless in each direction. We study the following question: if in fact the channel is noisy, what is the effect upon the number of transmissions needed in order to solve the computation problem reliably?
+
+Technologically this concern is motivated by the increasing importance of communication as a resource in computing, and by the tradeoff in communications equipment between bandwidth, reliability and expense.
+
+We treat a model with random channel noise. We describe a deterministic method for simulating noiseless-channel protocols on noisy channels, with only a constant slow-down. This is an analog for general interactive protocols of Shannon's coding theorem, which deals only with data transmission, i.e. one-way protocols.
+
+We cannot use Shannon's block coding method because the bits exchanged in the protocol are determined only one at a time, dynamically, in the course of the interaction. Instead we describe a simulation protocol using a new kind of code, explicit tree codes.
+
+Key words: Interactive Communication, Coding Theorem, Tree Code, Distributed Computing, Reliable Communication, Error Correction.
+
+*Special issue on Codes and Complexity of the IEEE Transactions on Information Theory, 42(6) Part 1, 1745-1756, Nov. 1996.
+---PAGE_BREAK---
+
+# 1 Introduction
+
+Let the input to a computation problem be split between two processors connected by a communication link; and let an interactive protocol $\pi$ be known by which, on any input, the processors can solve the problem using no more than $T$ transmissions of bits between them, provided the channel is noiseless in each direction. We study the following question: if in fact the channel is noisy, what is the effect upon the number of transmissions needed in order to solve the computation problem reliably?
+
+We focus on a model in which the channel suffers random noise. In this case an upper bound on the number of transmissions necessary is provided by the simple protocol which repeats each transmission of the noiseless-channel protocol many times (and decodes each transmission by a vote). For this simulation to succeed, each transmission must be repeated enough times so that no simulated transmission goes wrong; hence the length of the simulation increases as a superlinear function of $T$, even if only a constant probability of error is desired. Said another way, the “rate” of the simulation (number of protocol steps simulated per transmission) goes to zero as $T$ increases.
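The naive repetition scheme described above can be sketched as follows. This is an illustrative sketch, not a construction from the paper; the crossover probability `eps` and repetition count `reps` are free parameters of the sketch.

```python
import random

def bsc(bit, eps, rng):
    # Binary symmetric channel: flip the bit with probability eps.
    return bit ^ (rng.random() < eps)

def send_repeated(bit, reps, eps, rng):
    # Naive simulation of one protocol step: transmit the bit `reps`
    # times over the BSC and let the receiver decode by majority vote.
    received = [bsc(bit, eps, rng) for _ in range(reps)]
    return int(sum(received) > reps / 2)
```

By a Chernoff bound each simulated step fails with probability exponentially small in `reps`; a union bound over all $T$ steps then forces `reps` to grow with $T$, which is the superlinear blowup the text refers to.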
+
+Shannon considered this matter in 1948 in his seminal study of communication [29]. Shannon studied the case of “one-way” communication problems, i.e. data transmission. His fundamental observation was that coding schemes which did not treat each bit separately, but jointly encoded large blocks of data into long codewords, could achieve very small error probability (exponentially small in $T$), while slowing down by only a constant factor relative to the $T$ transmissions required by the noiseless-channel protocol (which can simply send the bits one by one). The constant (ratio of noiseless to noisy communication time) is a property of the channel, and is known as the Shannon capacity of the channel. The improvement in communication rate provided by Shannon’s insight is dramatic: the naive protocol can only achieve the same error probability by repeating each bit a number of times proportional to the length of the entire original protocol (for a total of $\Omega(T^2)$ communications¹).
+
+A precise statement is captured in Shannon’s coding theorem:
+
+**Theorem 1 (Shannon)** Let a binary symmetric channel of capacity $C$ be given. For every $T$ and every $\gamma > 0$ there exists a code $\chi : \{0,1\}^T \to \{0,1\}^{T\frac{1+\gamma}{C}}$ and a decoding map $\chi' : \{0,1\}^{T\frac{1+\gamma}{C}} \to \{0,1\}^T$ such that every codeword transmitted across the channel is decoded correctly with probability $1 - e^{-\Omega(T)}$.
+
+The original proof of such a theorem appears in [29] (formulated already for the more general class of Markov channels). The ramifications of this insight for data transmission have been explored in coding theory and information theory.
+
+Recently, in computer science, communication has come to be critical to distributed computing, parallel computing, and the performance of VLSI chips. In these contexts interaction is an essential part of the communication process. If the environment is noisy, it is necessary to be able to sustain interactive communication in the presence of that noise. Noise afflicts interactive communications just as it does the one-way communications considered by Shannon, and for much the same reasons: physical devices are by nature noisy, and there is often a significant cost associated with making them so reliable that the noise can be ignored (such as by providing very strong transmitters, or using circuits cooled to very low temperatures). In order to mitigate such costs we must design our systems to operate reliably even in the presence of some noise. The ability to transmit data in the presence of noise, the subject of Shannon’s and subsequent work, is a necessary but far from sufficient condition for sustained interaction and computation.
+
+For this reason we will be concerned with the problem of achieving simultaneously high communication rate and high reliability, in arbitrary interactive protocols.
+
+¹The following notation will be used in this paper. If $g: \mathbb{N} \to \mathbb{N}$ is a nondecreasing function, then $\Omega(g)$ denotes the set of functions $f: \mathbb{N} \to \mathbb{N}$ such that $\liminf f(i)/g(i) > 0$; $O(g)$ denotes the set of functions $f: \mathbb{N} \to \mathbb{N}$ such that $\limsup f(i)/g(i) < \infty$; and $\theta(g) = O(g) \cap \Omega(g)$, the set of functions which, asymptotically, grow proportionately to $g$.
+---PAGE_BREAK---
+
+Observe that in the case of an interactive protocol, the processors generally do not know what they want to transmit more than one bit ahead, and therefore cannot use a block code as in the one-way case. Another difficulty that arises in our situation, but not in data transmission, is that once an error has occurred, subsequent exchanges on the channel are affected. Such exchanges cannot be counted on to be of any use either to the simulation of the original protocol, or to the detection of the error condition. Yet the processors must be able to recover, and resume synchronized execution of the intended protocol, following any sequence of errors, although these may cause them to have very different views of the history of their interaction.
+
+We describe in this paper how, in spite of these new difficulties, it is possible to simulate noiseless-channel protocols on noisy channels with exponentially small error and with an increase by only a constant factor in the number of transmissions. First we state a version of our result for stochastic noise. In this theorem, as elsewhere in the paper except section 5, we assume that in both the noisy and noiseless scenarios, the processors are connected by a pair of unidirectional channels which, in every unit of time, transmit one bit in each direction. The run time of a protocol is the number of bit exchanges required for it to terminate on any input.
+
+**Theorem 2** In each direction between a pair of processors let a binary symmetric channel of capacity $C$ be given. There is a deterministic communication protocol which, given any noiseless channel protocol $\pi$ running in time $T$, simulates $\pi$ on the noisy channel in time $\theta(T/C)$ and with error probability $e^{-\Omega(T)}$.
+
+Note that if $\pi$ is deterministic, so is the simulation; if $\pi$ takes advantage of some randomness then exactly the same randomness resources are used by the simulation, and the probability of error will be bounded by the sum of the probability of error of $\pi$ and the probability of error of the simulation.
+
+In all but a constant factor in the rate, this is an exact analog, for the general case of interactive communication problems, of the Shannon coding theorem. We focus in this paper on binary channels as the canonical case; but we do not restrict ourselves to memoryless channels or even to stochastic noise, as will be seen below.
+
+We will also show that if the "tree codes" introduced in this paper can be efficiently constructed, then the encoding and decoding procedures of the protocol can also be implemented efficiently:
+
+**Theorem 3** Given an oracle for a tree code, the expected computation time of each of the processors implementing our protocol, when the communication channels are binary symmetric, is polynomial in $T$.
+
+Tree codes will be described below. Following standard usage in complexity theory, an "oracle" is any procedure which, on request, produces a local description of the tree code; one unit of time is charged for this service. Thus any polynomial time algorithm for this task will result in polynomial time encoding and decoding procedures.
+
+It is worth noting that if all one wants is to transmit one bit (or a fixed number of bits) but a very small probability of error is desired, then there is only one thing to be done: the redundancy of the transmission must be increased, in order to drive down the error probability. (By redundancy we mean the number of transmissions per bit of the message.) Shannon's theorem says that this is not necessary if one may assume that the message to be transmitted is long. In that case a code can be selected in such a manner that when any codeword (an image of a $T$-bit input block via $\chi$, as above) is transmitted over the channel, the probability that it is changed by the channel noise to a string which is mistaken by the receiver for another codeword, is exponentially small in $T$. Since the exponential error is gained from the block size, $T$, which may be chosen as large as desired, it is not necessary to invest extra redundancy for this purpose. It is enough just to bring the redundancy above the threshold value $1/C$.
+
+For a noiseless environment, the interactive model for communication problems, in which the input to a problem is split between two processors linked by a noiseless channel, was introduced by A. Yao in
+---PAGE_BREAK---
+
+1979 [36] to measure the cost incurred in departing from the single-processor model of computation. It has been intensively investigated (e.g. [37, 19, 32, 17]; and see [18] for a survey). The present work can be viewed as guaranteeing that every upper bound in that model, yields an upper bound in the noisy model.
+
+A weaker interactive analog of Shannon's coding theorem, which made use of randomization in the simulation process, was given by the author in [26] and [27]. The present result was presented in preliminary form in [28].
+
+The binary symmetric channel (BSC) is a simple noise model which nevertheless captures the chief difficulties of the problem considered in this paper; therefore we have focussed on it. However with little additional effort our result can be extended to far more general channels.
+
+**Theorem 4 (Adversarial Channel)** There is a deterministic communication protocol which, given any noiseless channel protocol $\pi$ running in time $T$, runs in time $N = \theta(T)$, and successfully simulates $\pi$ provided the number of incorrectly transmitted bits is at most $N/240$.
+
+(Note that due to the limit on the tolerable noise level this result does not dominate theorem 2.)
+
+We also consider in this paper what happens in a noise model that is much milder than the BSC: binary erasure channels with free feedback. In this case a particularly clean result can be obtained, which will be described in section 5.
+
+## The model
+
+In the standard (noiseless) communication complexity model, the argument (input) $z = (z_A, z_B)$ of a function $f(z)$ is split between two processors A and B, with A receiving $z_A$ and B receiving $z_B$; the processors compute $f(z)$ (solve the communication problem $f$) by exchanging bits over a noiseless link between them. Each processor has unbounded time and space resources. Randomness may be allowed as a resource, and is typically considered at one of three levels: none (deterministic protocols), private coins (each processor has its own unbounded source of random coins, but cannot see the other processor's coins), and public coins (the processors can see a common unbounded source of random coins). The procedure which determines the processors' actions, based on their local input and past receptions, is called a communication protocol. We will be concerned in this paper with a deterministic simulation of one protocol by another. In case the original protocol was deterministic, so is the simulation. If a randomized protocol is to be simulated, the same method applies; simply choose the random string in advance, and then treat the protocol being simulated as if it were deterministic.
+
+In this paper the noiseless link is replaced (in each direction) by a noisy one. We will mostly focus on the following noise model. The *binary symmetric channel* is a synchronous communication link which, in every unit of time, accepts at one end either a 0 or 1; and in response produces either a 0 or a 1 at the other end. With some fixed probability $\epsilon$ the output differs from the input, while with probability $1-\epsilon$ they are the same. This random event for a particular transmission is statistically independent of the bits sent and received in all other channel transmissions. (Thus the channel is "memoryless".) Weaker and stronger assumptions regarding the channel noise will be considered in sections 3.1 and 5.
+
+The average mutual information between two ensembles $\{P(x)\}_{x \in X}$ and $\{P(y)\}_{y \in Y}$, which have a joint distribution $\{P(xy)\}_{x \in X, y \in Y}$, is defined as $\sum_{xy} P(xy) \log \frac{P(xy)}{P(x)P(y)}$. This is a measure of how much information is provided about one ensemble by the specification of a point (chosen at random from the joint distribution) of the other ensemble.
+
+The capacity of a channel was defined by Shannon as the maximum, ranging over probability distributions on the inputs, of the average mutual information between the input and output distributions. In the case of the binary symmetric channel the capacity (in base 2) is $C = \epsilon \lg(2\epsilon) + (1-\epsilon) \lg(2(1-\epsilon))$. Observe that $0 \le C \le 1$; and that a noiseless channel has capacity $C=1$.
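The two definitions above can be checked against each other numerically. The sketch below is our own illustration (the dictionary representation of a joint distribution is an assumption of the sketch, not notation from the paper):

```python
from math import log2

def mutual_information(joint):
    # Average mutual information sum_{xy} P(xy) lg( P(xy) / (P(x)P(y)) ),
    # in bits, for a joint distribution given as {(x, y): probability}.
    px, py = {}, {}
    for (x, y), p in joint.items():
        px[x] = px.get(x, 0.0) + p
        py[y] = py.get(y, 0.0) + p
    return sum(p * log2(p / (px[x] * py[y]))
               for (x, y), p in joint.items() if p > 0)

def bsc_capacity(eps):
    # Closed form from the text: C = eps*lg(2 eps) + (1-eps)*lg(2(1-eps)).
    return eps * log2(2 * eps) + (1 - eps) * log2(2 * (1 - eps))

def bsc_joint(eps):
    # Joint input/output distribution of a BSC with crossover probability
    # eps under the uniform input distribution.
    return {(0, 0): (1 - eps) / 2, (0, 1): eps / 2,
            (1, 0): eps / 2, (1, 1): (1 - eps) / 2}
```

For the BSC the uniform input distribution attains the maximum, so `mutual_information(bsc_joint(eps))` agrees with the closed form, and the capacity vanishes at `eps = 1/2`.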
+---PAGE_BREAK---
+
+## 2 Coding Theorem
+
+A key tool in our protocol is the tree code. Random tree codes — essentially, distributions over labelled trees — were introduced by Wozencraft and used by him and others for the purpose of sequential decoding ([34, 24, 7]; see [12] §6.9). However, random tree codes are not sufficient for our purpose. We introduce below the stronger notion of tree code which we use in our work.
+
+### 2.1 Tree codes
+
+Let $S$ be a finite alphabet. If $s = (s_1...s_m)$ and $r = (r_1...r_m)$ are words of the same length over $S$, say that the distance $\Delta(s,r)$ between $s$ and $r$ is the number of positions $i$ in which $s_i \neq r_i$ (Hamming distance). A $d$-ary tree of depth $n$ is a rooted tree in which every internal node has $d$ children, and every leaf is at depth $n$ (the root is at depth 0).
+
+**Definition** A $d$-ary tree code over alphabet $S$, of distance parameter $\alpha$ and depth $n$, is a $d$-ary tree of depth $n$ in which every arc of the tree is labeled with a character from the alphabet $S$ subject to the following condition. Let $v_1$ and $v_2$ be any two nodes at some common depth $h$ in the tree. Let $h - \ell$ be the depth of their least common ancestor. Let $W(v_1)$ and $W(v_2)$ be the concatenation of the letters on the arcs leading from the root to $v_1$ and $v_2$ respectively. Then $\Delta(W(v_1), W(v_2)) \ge \alpha\ell$.
+
+In the codes we will use in this paper we fix the distance parameter $\alpha$ to 1/2.
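The distance condition in the definition can be checked by brute force on small examples. In this sketch (our own representation, not from the paper) a tree code is a dictionary mapping each arc's address string to its label:

```python
from itertools import product

def check_tree_code(labels, d, depth, alpha):
    # Brute-force check of the tree-code condition: for every pair of
    # distinct nodes at a common depth h whose least common ancestor is
    # at depth h - l, the Hamming distance between the two root-to-node
    # label words must be at least alpha * l.
    def word(addr):
        # Concatenation of labels along the path from the root.
        return [labels[addr[:i + 1]] for i in range(len(addr))]

    digits = ''.join(str(i) for i in range(d))
    for h in range(1, depth + 1):
        for ta, tb in product(product(digits, repeat=h), repeat=2):
            a, b = ''.join(ta), ''.join(tb)
            if a >= b:
                continue
            lca = 0
            while lca < h and a[lca] == b[lca]:
                lca += 1
            ell = h - lca
            # Labels above the LCA coincide, so the full-word distance
            # equals the distance below the LCA.
            dist = sum(x != y for x, y in zip(word(a), word(b)))
            if dist < alpha * ell:
                return False
    return True
```

For instance, the depth-2 binary labeling that puts `a` on every 0-arc and `b` on every 1-arc passes the check with α = 1/2, while a constant labeling fails it.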
+
+The key point regarding tree codes is that the alphabet size required to guarantee their existence does not depend on $n$. In fact we will show that for any fixed $d$ there exists an infinite tree code with a constant size alphabet (i.e. a complete, infinite $d$-ary tree in which the labels satisfy the distance condition).
+
+We begin with the case of $d=2$ and finite $n$; then treat arbitrary $d$ and an infinite tree.
+
+**Lemma 1** For any fixed degree $d$, there exists a finite alphabet which suffices to label the arcs of a $d$-ary tree code. Specifically:
+
+(A) An alphabet $S$ of size $|S| = 99$ suffices to label the arcs of a binary tree code of distance parameter $1/2$ and any depth $n$.
+
+$$ \text{Let } \eta(\alpha) = \frac{1}{(1-\alpha)^{1-\alpha}} \frac{1}{\alpha^{\alpha}}. \text{ Note that } \eta(\alpha) \le 2. $$
+
+(B) An alphabet $S$ of size $|S| = 2[(2\eta(\alpha)d)^{\frac{1}{1-\alpha}}] - 1$ suffices to label the arcs of an infinite $d$-ary tree code of distance parameter $\alpha$.
+
+**Proof:**
+
+(A) By induction on $n$. The base case $n=0$ is trivial. Now take a tree code of depth $n-1$, and put two copies of it side by side in a tree of depth $n$, each rooted at a child of the root. Choose a random string $\sigma = (\sigma_1...\sigma_n)$, where the variables $\{\sigma_h\}$ are i.i.d. random variables, each chosen uniformly from the set of permutations of the letters of the alphabet. Modify the right-hand subtree by replacing each letter $x$ at depth $h$ in the subtree by the letter $\sigma_h(x)$. (Also place the letters $1$ and $\sigma_1(1)$ on the arcs incident to the root.) This operation does not affect the tree-code conditions within each subtree. Now consider any pair of nodes $v_1, v_2$ at depth $h$, with $v_1$ in the left subtree and $v_2$ in the right subtree. The probability that they violate the tree-code condition is bounded by $\exp(-hD(1-\alpha \,\|\, 1/|S|))$, where $D$ is the Kullback-Leibler divergence, $D(x \,\|\, y) = x \log \frac{x}{y} + (1-x)\log\frac{1-x}{1-y}$. The number of such pairs at depth $h$ is $4^{h-1}$. Thus the probability of a violation of the tree-code condition at any depth is bounded by $\sum_{h=1}^{\infty} \exp\left((h-1)2\log 2 - hD(1-\alpha \,\|\, 1/|S|)\right)$. For the case $\alpha = 1/2$ and $|S| = 99$ this is strictly less than 1 (approximately $0.99975$). Hence some string $\sigma$ produces a tree code of depth $n$.
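+The numerical claim at the end of the proof of (A) can be verified directly: the bound is a geometric series with first term $e^{-D}$ and ratio $4e^{-D}$, with $D$ taken in nats. A quick check (variable names are ours):

```python
import math

def kl(x, y):
    """Binary Kullback-Leibler divergence D(x || y), in nats."""
    return x * math.log(x / y) + (1 - x) * math.log((1 - x) / (1 - y))

alpha, size = 0.5, 99                 # distance parameter and |S| from part (A)
D = kl(1 - alpha, 1 / size)
ratio = 4 * math.exp(-D)              # 4**(h-1) pairs per depth, exp(-h*D) each
assert ratio < 1                      # the series converges
total = math.exp(-D) / (1 - ratio)    # sum over h >= 1 of 4**(h-1) * exp(-h*D)
assert 0.99 < total < 1.0             # approximately 0.99975
```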
+---PAGE_BREAK---
+
+(B) What follows is the best currently known existence argument for tree codes². Let $|S|$ be the smallest prime greater than or equal to $(2\eta(\alpha)d)^{\frac{1}{1-\alpha}}$. By Bertrand's postulate [14], there is a prime strictly between $n$ and $2n$ for any integer $n > 1$. Hence $|S| < 2\left\lceil(2\eta(\alpha)d)^{\frac{1}{1-\alpha}}\right\rceil$.
+
+Identify $S$ with the set $\{0, 1, \dots, |S| - 1\}$. Associate with every arc of the tree its address, which for an arc at depth $h$, is the string in $\{0, \dots, d-1\}^h$ describing the path from the root to that arc. (Thus the arcs below the root are labelled $\{0, \dots, d-1\}$; the arcs below the 0-child of the root are labelled $\{00, \dots, 0(d-1)\}$; and so forth.)
+
+Let $a = a_1 a_2 a_3 \dots$ be an infinite string with each $a_i$ chosen uniformly, independently, in $\{0, \dots, |S|-1\}$. Label the tree code by convolving the arc addresses with the random string, as follows: if the address of an arc (at depth $h$) is $\varepsilon = \varepsilon_1 \varepsilon_2 \dots \varepsilon_h$, then the arc is labelled $\sum_{i=1}^h \varepsilon_i a_{h+1-i}$, the arithmetic being carried out modulo $|S|$.
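+The convolution labeling, and the cancellation of the common prefix used in the next step, can be checked mechanically. A sketch (our helper `label`, with $|S| = 11$ and $d = 3$ as arbitrary choices):

```python
import random

def label(address, a, q):
    """Label of the arc with the given address (a list over {0..d-1}):
    the sum of address[i-1] * a_{h+1-i}, taken modulo q (a[0] plays a_1)."""
    h = len(address)
    return sum(address[i] * a[h - 1 - i] for i in range(h)) % q

q = 11                                     # a prime alphabet size (arbitrary)
rng = random.Random(0)
a = [rng.randrange(q) for _ in range(50)]  # the random string a_1 a_2 ...
# two arcs below the common prefix x, diverging at letters i = 0 vs j = 1
x, y, z = [1, 2], [0, 2, 1], [2, 0, 0]
r = len(y)
lhs = (label(x + [0] + y, a, q) - label(x + [1] + z, a, q)) % q
rhs = ((0 - 1) * a[r] + sum((y[k] - z[k]) * a[r - 1 - k] for k in range(r))) % q
assert lhs == rhs                          # the prefix x cancels, as claimed
```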
+
+Let $i, j \in \{0, \dots, d-1\}, i \neq j$; and let $x, y, z$ be strings over $\{0, \dots, d-1\}$, with $|y| = |z| = r$. Observe that
+
+$$ \text{label}(xiy) - \text{label}(xjz) = (i-j)a_{r+1} + \sum_{k=1}^{r} (y_k - z_k)a_{r+1-k}. $$
+
+Therefore the event that $xiy$ and $xjz$ fail the distance condition depends only on the sequence $(i-j), (y_1-z_1), \dots, (y_r-z_r)$. The probability that $a$ causes a failure of the distance condition on this sequence is bounded by $\exp(-hD(1-\alpha \,\|\, 1/|S|))$ (where $h=r+1$); summing over all such sequences, we bound the probability that $a$ fails to provide a tree code by $\sum_{h \ge 1} d^h \exp(-hD(1-\alpha \,\|\, 1/|S|))$. This in turn is strictly less than $\sum_{h \ge 1} \exp(h(\log d - (1-\alpha)\log|S| - (1-\alpha)\log(1-\alpha) - \alpha \log \alpha))$ which, with the stated choice of $|S|$, is bounded by 1. Therefore there is a positive probability that the string $a$ defines an infinite tree code. $\square$
+
+What is needed for efficient implementation of our protocol is a tree code in which the label on any edge can be computed in time polynomial in the depth of the tree, or (for an infinite tree) in the depth of the edge.
+
+An "oracle", as referred to in theorem 3, is an abstract machine which provides, on request, labels of edges in the tree code; we are charged one unit of computation time per request. Therefore theorem 3 guarantees that, should we be able to come up with a polynomial computation-time implementation of a tree code, this would result in a polynomial computation-time coding theorem for noisy channels.
+
+## 2.2 Preliminaries
+
+### 2.2.1 The role of channel capacity
+
+A standard calculation shows:
+
+**Lemma 2 (standard)** Let a binary symmetric channel *M* of capacity *C*, and an alphabet *S*, be given. Then there is a transmission code for the alphabet, i.e. a pair consisting of an encoding function $\kappa: S \rightarrow \{0,1\}^n$ and a decoding function $\kappa': \{0,1\}^n \rightarrow S$, such that $n = O(\log|S|/C)$, while the probability of error in any transmission over the channel (i.e. the probability that $\kappa'(M(\kappa(s))) \neq s$) is no more than $2^{-208} \cdot 3^{-40}$. $\square$
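+A crude repetition code already illustrates the content of lemma 2 (error probability decaying exponentially with block length), though repetition is far from rate-optimal. The following sketch is an illustration only, not the code asserted by the lemma; all names are ours:

```python
import random

def bsc(bits, p, rng):
    """Binary symmetric channel: flip each bit independently with probability p."""
    return [b ^ (rng.random() < p) for b in bits]

def encode(letter, k, reps):
    """Toy transmission code kappa: k-bit binary expansion, each bit repeated."""
    bits = [(letter >> i) & 1 for i in range(k)]
    return [b for b in bits for _ in range(reps)]

def decode(received, k, reps):
    """Toy decoder kappa': majority vote within each block of repetitions."""
    bits = [int(2 * sum(received[i * reps:(i + 1) * reps]) > reps)
            for i in range(k)]
    return sum(b << i for i, b in enumerate(bits))

rng = random.Random(1)
k, reps, p = 7, 31, 0.05                  # a 128-letter alphabet S
errs = sum(decode(bsc(encode(s, k, reps), p, rng), k, reps) != s
           for s in range(2 ** k) for _ in range(5))
assert errs == 0                          # majority voting makes failure very rare
```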
+
+In the protocol we make use of a 12-ary tree code with distance parameter $\alpha = 1/2$. The protocol as we describe it will involve the processors transmitting, in each round, a letter inscribed on some arc of the
+
+²The present form of this argument is due to discussions with Yuval Rabani (for the convolution) and Oded Goldreich (for the $1/(1-\alpha)$ exponent).
+---PAGE_BREAK---
+
+tree; this will be implemented by the transmission of the codeword for that letter, in a transmission code
+as described above. (Thus, due to lemma 1, we will be applying lemma 2 with a constant size alphabet.)
+This presentation slightly simplifies our later description of the protocol, and also makes clear how the
+channel capacity enters the slow-down factor of theorem 2.
+
+The role of the channel capacity in our result should not be overemphasized. In contrast to the
+situation for one-way communication, where the capacity provides a tight characterization of the rate at
+which communication can be maintained, in the interactive case we will be able to characterize this rate
+only to within a constant factor of capacity. Moreover the positive results of this paper use only crude
+properties of the capacity: its quadratic basin (as a function of the channel error probability) in the very
+noisy regime, and its boundedness elsewhere. The interesting problem remains, whether there are protocols
+whose simulation in the presence of noise must slow down by a factor worse than the capacity.
+
+### 2.2.2 Noiseless Protocol $\pi$
+
+Let $\pi$ be the noiseless-channel protocol of length $T$ to be simulated. We assume the “hardest” case, in which every transmission of $\pi$ is only one bit long; and for simplicity we suppose that in every round both processors send a bit (this affects the number of transmissions by no more than a factor of 2).
+
+The history of $\pi$ on any particular input is described by a path from the root to a leaf, in a 4-ary tree (which we denote $\mathcal{T}$, see figure 1) in which each arc is labelled by one of the pairs 00, 01, 10 or 11 — referring to the pair of messages that can be exchanged by the processors in that round. (Thus, e.g., 10 refers to the transmission of a 1 by processor A and a 0 by processor B.) If $x = x_A x_B$ is the problem input ($x_A$ at processor A and $x_B$ at processor B) then let $\pi_A(x_A, \phi)$ denote the first bit sent by processor A; let $\pi_B(x_B, \phi)$ denote the first bit sent by processor B; and let $\pi(x, \phi) \in \{00, 01, 10, 11\}$ denote the pair of these first-round bits. In the second round of $\pi$ both processors are aware of the bits exchanged in the first round, and so we denote their transmissions by $\pi_A(x_A, \pi(x, \phi))$ and $\pi_B(x_B, \pi(x, \phi))$ respectively, and the pair of bits by $\pi(x, \pi(x, \phi))$. In general if messages $m_1, \dots, m_t$ (each $m_i \in \{00, 01, 10, 11\}$) have been exchanged in the first $t$ rounds, then we denote their transmissions in round $t+1$ by $\pi_A(x_A, m_1 \dots m_t)$ and $\pi_B(x_B, m_1 \dots m_t)$, and the pair of bits by $\pi(x, m_1 \dots m_t)$. The vertex of $\mathcal{T}$ labelled by a sequence of exchanges $m_1 \dots m_t$ is denoted $\mathcal{T}[m_1:\dots:m_t]$. For example the root is $\mathcal{T}[]$.
+
+Figure 1: Noiseless protocol tree $\mathcal{T}$.
+
+On input $x$ the protocol $\pi$ specifies a path from the root to a leaf in $\mathcal{T}$, namely the succession $\mathcal{T}[]$, $\mathcal{T}[\pi(x, \phi)]$, $\mathcal{T}[\pi(x, \phi): \pi(x, \pi(x, \phi))]$, etc., at the end of which the outcome of the protocol is determined. We call this path $\gamma_x$. On a noiseless channel the processors simply extend $\gamma_x$ by one arc in every round. In the noisy channel simulation the processors must develop $\gamma_x$ without expending too much time pursuing other branches of $\mathcal{T}$.
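+The development of $\gamma_x$ can be phrased operationally: given the two next-bit functions, the path is produced round by round. A minimal sketch, where `run_pi`, `pi_A` and `pi_B` are illustrative stand-ins rather than anything defined in the paper:

```python
def run_pi(pi_A, pi_B, xA, xB, T):
    """Develop gamma_x: the sequence of exchanged bit pairs over T rounds."""
    history = []
    for _ in range(T):
        a = pi_A(xA, tuple(history))      # A's bit, given input and history
        b = pi_B(xB, tuple(history))      # B's bit, likewise
        history.append(f"{a}{b}")
    return history

# toy protocol: A sends the parity of its input plus the round number;
# B echoes A's previous bit (and its own input bit in round 1)
pi_A = lambda x, h: (x + len(h)) % 2
pi_B = lambda x, h: int(h[-1][0]) if h else x
path = run_pi(pi_A, pi_B, 1, 0, 3)
assert path == ["10", "01", "10"]
```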
+---PAGE_BREAK---
+
+## 2.3 Simulation Protocol
+
+The simulation protocol attempts, on input $x = x_A x_B$, to recreate the development by $\pi$ of the path $\gamma_x$ in $\mathcal{T}$. We equip each processor with a pebble which it moves about on $\mathcal{T}$, in accordance with its “best guess” to date regarding $\gamma_x$. In every round, each processor sends the bit it would transmit in $\pi$ upon reaching the current pebble position in $\mathcal{T}$ (among other information). When the pebbles coincide, this exchange is a useful simulation of a step in $\pi$ (unless the exchange fails). However, due to channel errors, the processors’ pebbles will sometimes diverge. The simulation embeds $\pi$ within a larger protocol which provides a “restoring force” that tends to drive the pebbles together when they separate. The guarantee that (with high probability) the pebbles coincide through most of the simulation is what ensures the progress of the simulation.
+
+We begin by describing the mechanics of the protocol: how the pebbles can be moved about, what information the processors send each other as they move the pebbles, and how they use a tree code to do so. Then we explain how the processors choose their pebble moves based upon their local input and their history of the interaction.
+
+One way to think about our protocol is that it tries to make the sequence of transmissions produced by each processor, mimic the sequence that might be produced by a source transmitting a large block of data (and thus subject to efficient coding), in spite of the fact that the protocol is interactive and processors do not know the future “data” they will be transmitting.
+
+### 2.3.1 The state tree: mechanics of the protocol
+
+In any round of the protocol, a processor can move its pebble only to a neighbor of the current position in $\mathcal{T}$, or leave it put. Thus any move is described by one of the six possibilities 00, 01, 10, 11, $\mathcal{H}$ (“hold”) and $\mathcal{B}$ (“back”, i.e. toward the root.) Following such a move, as mentioned above, the simulation involves transmitting either a 0 or a 1 according to the value transmitted in $\pi$ at that point. (E.g., upon reaching $v \in \mathcal{T}$, A needs to inform B of $\pi_A(x_A, v)$.)
+
+The entire history of any one processor up through time $t$ can therefore be described as a sequence of $t$ “tracks”, each an integer between 1 and 12, indicating the pebble move made by the processor in some round, and the value $\pi_A(x_A, v)$ (or correspondingly $\pi_B(x_B, v)$) at the vertex $v$ reached by the pebble after that move. In particular, the position of the processor’s pebble can be inferred from this sequence of tracks. Let us say that the sequence of tracks taken by a processor is its “state”: thus the universe of possible states for a processor is described by a 12-ary tree which we call $\mathcal{Y}$ (see figure 2), and in each round, each processor moves to a child of its previous state in this “state tree”. (It will suffice to take $\mathcal{Y}$ of depth $5T$.) For a state $s \in \mathcal{Y}$ let pebble($s$) denote the pebble position of a processor which has reached state $s$.
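+Recovering the pebble position from a track sequence is a simple stack computation; a sketch (the helper `pebble` is our name, with tracks given as (move, bit) pairs):

```python
def pebble(tracks):
    """Pebble position in the protocol tree implied by a track sequence:
    the path from the root, as a list of arc labels."""
    path = []
    for move, _bit in tracks:             # each track: (move, bit sent at the new vertex)
        if move == 'B':                   # back toward the root
            if path:
                path.pop()
        elif move != 'H':                 # '00', '01', '10', '11' descend
            path.append(move)
    return path

assert pebble([('H', 0), ('01', 1), ('10', 0), ('B', 1)]) == ['01']
```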
+
+Figure 2: State tree $\mathcal{Y}$ and associated tree code
+---PAGE_BREAK---
+
+The processors' strategy will be founded on a "total information" approach: they will try to keep each other informed of their exact state, and thus of their entire history. Naturally, they cannot do this simply by encoding each track with a transmission code and sending it across the channel; in order to achieve the desired constant slow-down, such a code could have only constant length, hence channel errors would cause occasional errors in these track transmissions in the course of the simulation. (These would in fact occur at a constant frequency.) In the absence of any further indication of the correctness or incorrectness of the decoding of any track transmission, each processor would be entirely incapable of recovering the other's sequence of tracks.
+
+Instead it is necessary to transmit the tracks using constant length encodings which, in spite of their brevity, incorporate global information about the track sequence, and thus enable recovery from errors in individual track transmissions. We accomplish this with the help of tree codes. Following lemma 1, identify with $\mathcal{Y}$ a 12-ary tree code of depth $5T$ over a suitable alphabet $S$. If $\tau_1...\tau_t$ is a sequence of $t$ tracks (indicating a processor's actions in the first $t$ rounds) then $\mathcal{Y}[\tau_1:\dots:\tau_t]$ denotes the vertex of $\mathcal{Y}$ reached from the root by the sequence of arcs $\tau_1...\tau_t$. Denote by $w(\tau_1...\tau_t) \in S$ the letter on arc $\mathcal{Y}[\tau_1:\dots:\tau_{t-1}]\mathcal{Y}[\tau_1:\dots:\tau_t]$ of the tree code, and by $W(\tau_1...\tau_t) \in S^t$ the concatenation $w(\tau_1)w(\tau_1\tau_2)...w(\tau_1...\tau_t)$.
+
+The prescription for transmitting messages in the protocol is now this: suppose processor A in round $t$ decides on a pebble move $\alpha \in \{00, 01, 10, 11, \mathcal{H}, \mathcal{B}\}$, and reaches vertex $v$ of $\mathcal{T}$ with that move; thus its track is $\tau_t = \alpha \times \pi_A(x_A, v)$, corresponding to an integer between 1 and 12. Further suppose its past tracks were $\tau_1, ..., \tau_{t-1}$. Then it transmits in round $t$ the letter $w(\tau_1... \tau_t)$.
+
+If $\tau_1...\tau_t$ is a sequence of tracks (a state), and $Z = Z_1...Z_t \in S^t$ is a sequence of letters, then let $P(Z|W(\tau_1... \tau_t))$ denote the probability that $Z$ is received over the channel given that the sequence $W(\tau_1... \tau_t)$ was transmitted. More generally for $r \ge 1$ let $P(Z_r...Z_t|W(\tau_1... \tau_t))$ denote the probability that $Z_r...Z_t$ are the last $t-r+1$ characters received in a transmission of $W(\tau_1... \tau_t)$. Observe that these probabilities are the product of terms each equal to the probability that a particular letter is received, given that some other was transmitted; each term depends only on the channel characteristics and the transmission code (given by $\kappa, \kappa'$). Due to the memorylessness of the channel and lemma 2 we have:
+
+**Lemma 3** *$P(Z|W(\tau_1...\tau_t))$ is bounded above by $\left(2^{-208}3^{-40}\right)^{\Delta(W(\tau_1...\tau_t),\, Z)}$. More generally $P(Z_r...Z_t|W(\tau_1...\tau_t)) \le \left(2^{-208}3^{-40}\right)^{\Delta(w(\tau_1...\tau_r)\dots w(\tau_1...\tau_t),\, Z_r...Z_t)}$.*
+
+### 2.3.2 Pebble Moves
+
+We now specify how the processors choose their pebble moves. Our description is from the perspective of processor A; everything for B is symmetric. Let $Z \in S^{t-1}$ be the sequence of letters received by A up to and including round $t-1$. A then determines which state $\tau_1...\tau_{t-1}$ of processor B at depth $t-1$ minimizes the distance $\Delta(W(\tau_1... \tau_{t-1}), Z)$, and chooses that state $g$ as its best guess for B's current state. Observe that a correct guess implies correct determination of B's pebble position.
+
+(We have used here a minimum-distance condition; max-likelihood would work, as well.)
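+Minimum-distance guessing can be written down directly, though the naive search is exponential in the depth (section 4 addresses the cost). A sketch, with `W` standing in for the map from a state to its tree-code codeword; here `W` is instantiated by the identity labeling purely for illustration, which is not an actual tree code:

```python
from itertools import product

def hamming(u, v):
    return sum(a != b for a, b in zip(u, v))

def best_guess(W, Z, tracks=range(12)):
    """Exhaustive minimum-distance decoding: return the depth-len(Z) state
    whose codeword W(state) is closest to the received string Z.
    Exponential in len(Z); for illustration only."""
    return min(product(tracks, repeat=len(Z)), key=lambda s: hamming(W(s), Z))

# identity labeling as a stand-in for W (not a genuine tree code)
assert best_guess(lambda s: s, (3, 1, 2)) == (3, 1, 2)
```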
+
+Suppose A's pebble is at vertex $v \in \mathcal{T}$, and that following the most recent exchange it has guessed that B's state is $g \in \mathcal{Y}$. Recall that A has just sent the bit $\pi_A(x_A, v)$ to B; and let $b$ be the corresponding bit which A has decoded as B's proposed message in $\pi$. Processor A now compares the positions of $v$ and pebble($g$) in $\mathcal{T}$, and, acting on the presumption that its guess $g$ is correct, chooses its next pebble move as follows.
+
+If $v$ and pebble($g$) are different then A tries to close the gap by making the pebbles meet at their least common ancestor in $\mathcal{T}$. There are two cases. If $v$ is not an ancestor of pebble($g$) then A moves its pebble toward the root in $\mathcal{T}$ ("back"). If $v$ is an ancestor of pebble($g$) then A cannot close the gap itself, and instead keeps its pebble put in the expectation that B will bring its pebble back to $v$.
+---PAGE_BREAK---
+
+If on the other hand $v = \text{pebble}(g)$ then A interprets $b$ as the bit B would transmit at $v$ in protocol $\pi$. Therefore it moves its pebble down in $\mathcal{T}$ on the arc labelled by the pair of bits given by $b$, and its own choice $\pi_A(x_A, v)$.
+
+The focus upon the least common ancestor of the pebbles is explained by the following.
+
+**Lemma 4** *The least common ancestor of the two pebbles lies on $\gamma_x$.*
+
+*Proof:* A vertex $z$ of $\mathcal{T}$ is on $\gamma_x$ if and only if for every strict ancestor $z'$ of $z$, the arc leading from $z'$ toward $z$ is that specified by both processors' actions in $\pi$ at $z'$, namely the arc labeled by the pair of bits $\pi_A(x_A, z')$ and $\pi_B(x_B, z')$. The pebble moves specified above ensure that the arcs leading toward a pebble are always correctly labelled in the bit corresponding to that processor's actions in $\pi$. $\square$
+
+We collect the above in a concise description of the protocol, from the perspective of processor A.
+
+### 2.3.3 Summary: Simulation Protocol
+
+Repeat the following $N = 5T$ times (where $T$ is the length of protocol $\pi$). Begin with own state $s_A$ at $\mathcal{Y}[\mathcal{H} \times \pi_A(x_A, \phi)]$, and own pebble at the root of $\mathcal{T}$.
+
+1. Transmit $w(s_A)$ to processor B.
+
+2. Given the sequence of messages $Z$ received to date from processor B, guess the current state $g$ of $B$ as that minimizing $\Delta(W(g), Z)$. Compute from $g$ both pebble($g$), and the bit $b$ (representing a message of $B$ in $\pi$).
+
+3. Depending on the relation of own pebble position $v$ to pebble($g$), do one of the following:
+
+(a) If $v$ is a strict ancestor of pebble($g$), own pebble move is $\mathcal{H}$. Own next track is $\tau = \mathcal{H} \times \pi_A(x_A, v)$. Reset $s_A$ to $s_A\tau$.
+
+(b) If the least common ancestor of $v$ and pebble($g$) in $\mathcal{T}$ is a strict ancestor of $v$, then own pebble move is $\mathcal{B}$. Own next track is $\tau = \mathcal{B} \times \pi_A(x_A, v:\mathcal{B})$. Reset $s_A$ to $s_A\tau$. (Here $v:\mathcal{B}$ denotes the parent of $v$ in $\mathcal{T}$, i.e. the vertex reached after a "back" move.)
+
+(c) If $v = \text{pebble}(g)$ then move own pebble according to the pair of bits $(\pi_A(x_A, v), b)$. Own next track is $\tau = (\pi_A(x_A, v), b) \times \pi_A(x_A, v: (\pi_A(x_A, v), b))$. Reset $s_A$ to $s_A\tau$. (Here $v: (\pi_A(x_A, v), b)$ denotes the child of $v$ in $\mathcal{T}$ reached by the move $(\pi_A(x_A, v), b)$.)
+
+A technical point: for the purpose of the simulation we modify $\pi$ so that, once the computation is completed, each processor continues sending 0's until time $5T$. Thus we consider $\mathcal{T}$ as having depth $5T$, with the protocol tree for the unmodified $\pi$ embedded within the first $T$ levels. This ensures that the above process is meaningful for any state of processor A. In view of lemma 4, the simulation is successful if both pebbles terminate at descendants of the same "embedded leaf".
+
+## 3 Analysis
+
+We prove the following bound on the performance of the protocol, for binary symmetric channels. Theorem 2 follows in consideration of the comments in section 2.2.1 on the coding of individual rounds.
+
+**Proposition 1** *The simulation protocol, when run for $5T$ rounds on any noiseless-channel protocol $\pi$ of length $T$, will correctly simulate $\pi$ except for an error probability no greater than $2^{-5T}$.*
+---PAGE_BREAK---
+
+**Proof:** Let $v_A$ and $v_B$ denote the positions of the two pebbles in $\mathcal{T}$ after some round of the simulation. The least common ancestor of $v_A$ and $v_B$ is written $\bar{v}$.
+
+**Definition** The current mark of the protocol is defined as the depth of $\bar{v}$, minus the distance from $\bar{v}$ to the further of $v_A$ and $v_B$.
+
+Observe that $\pi$ is successfully simulated if the mark at termination is at least $T$.
+
+**Definition** Say that a round is good if both processors guess the other's state correctly. Otherwise say that the round is bad.
+
+These definitions are related through the following:
+
+**Lemma 5**
+
+1. The pebble moves in a good round increase the mark by 1.
+
+2. The pebble moves in a bad round decrease the mark by at most 3.
+
+**Proof:**
+
+1. After a good round the processors are using the correct information regarding both the other processor's pebble position, and the bit the other processor transmits in $\pi$ at that position.
+
+If the pebbles were not located at the same vertex, then each pebble not located at $\bar{v}$ moves one arc closer to it. If the pebbles were located at the same vertex then each progresses to the same child of that vertex.
+
+2. Each pebble (and hence also $\bar{v}$) moves by at most one arc. $\square$
+
+Recall that the simulation is run for $N = 5T$ rounds.
+
+**Corollary 1** *The simulation is successful provided the fraction of good rounds is at least 4/5.* $\square$
+
+The task of bounding the number of bad rounds is complicated by the fact that the goodness of rounds is not independent across rounds.
+
+For any round $t$ let $\ell(t)$ be the greater, among the two processors, of the magnitude of their error regarding the other's state; this magnitude is measured as the difference between $t$ and the level of the least common ancestor of the true state and the guessed state. Thus a round is good precisely if $\ell(t) = 0$. Due to the tree code condition, any wrong guess of magnitude $\ell$ is at Hamming distance at least $\ell/2$ from the correct guess; in order to be preferred to the correct guess, at least $\ell/4$ character decoding errors are required within the last $\ell$ rounds. There are at most $2^{\ell}$ ways in which these errors can be distributed. Applying lemma 3 — and noting that this application relies only on the transmissions and receptions during the $\ell$ rounds in question, and that the channel is memoryless — we have:
+
+**Corollary 2** *Conditioned on any pair of states the processors may be in at time $t-\ell$, the probability of a processor making a particular wrong guess of magnitude $\ell$ at time $t$ is at most $2^{\ell}\left(2^{-208}3^{-40}\right)^{\ell/4} = \left(2^{-51}3^{-10}\right)^{\ell}$.* $\square$
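+Taking the exponents in corollary 2 to be $2^{\ell}\,(2^{-208}3^{-40})^{\ell/4} = (2^{-51}3^{-10})^{\ell}$ (a reading consistent with lemma 6 below), the collapse of constants is exact and can be confirmed in integer arithmetic:

```python
# (2**-208 * 3**-40) ** (1/4) equals 2**-52 * 3**-10 exactly:
assert (2 ** 52 * 3 ** 10) ** 4 == 2 ** 208 * 3 ** 40
# hence 2**l * (2**-208 * 3**-40)**(l/4) == (2**-51 * 3**-10)**l for every l
```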
+
+Now define the error interval corresponding to a bad round at time $t$ as the sequence of $\ell(t)$ rounds $t - \ell(t) + 1, \dots, t$. Every bad round is contained inside an error interval, hence the size of the union of the error intervals is a bound on the number of bad rounds. The remainder of the proof rests on the following pair of observations:
+---PAGE_BREAK---
+
+**Lemma 6** If a set of erroneous guesses define a disjoint set of error intervals, of lengths $l_1, ..., l_k$, then
+the probability that these erroneous guesses occur is at most $(2^{-50}3^{-10})^{\sum l_i}$.
+
+**Proof:** The argument is by induction on the number of error intervals. Let the last error interval be due to a wrong guess at round $t$, of magnitude $l_k$. Let $Z$ be the received sequence through round $t$. Let $s$ be the true state of the other processor at time $t$, and let $r$ be the erroneous guess. Then the transmitted strings $W(s)$ and $W(r)$ differ only in the last $l_k$ rounds. Corollary 2 implies that the bound on the probability of occurrence of the erroneous guesses is multiplied by a factor of $2(2^{-51}3^{-10})^{l_k}$ (the factor of 2 allows for either processor to be in error). $\square$
+
+**Lemma 7** In any finite set of intervals on the real line whose union $J$ is of total length $s$ there is a subset of disjoint intervals whose union is of total length at least $s/2$.
+
+**Proof:** We show that $J$ can be written as the union of two sequences of disjoint intervals.
+
+The question reduces to the case in which the intervals of the family are closed and their union $J$ is an interval. In the first step put into the first sequence that interval which reaches the left endpoint of $J$, and which extends furthest to the right. In each successive step select the interval which intersects the union of those selected so far, and which extends furthest to the right; adjoin the new interval to one of the sequences in alternation. $\square$
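+The greedy selection in the proof can be sketched as code; `disjoint_half` is our illustrative name, and the input is assumed (as in the proof) to be closed intervals whose union is a single interval:

```python
def disjoint_half(intervals):
    """Split a family of closed intervals (whose union is an interval) into
    two sequences of pairwise disjoint intervals, greedily; return the
    sequence of larger total length (at least half that of the union)."""
    ivs = sorted(intervals)
    seqs = ([], [])
    right = min(a for a, _ in ivs)        # left endpoint of the union
    turn = 0
    while True:
        # intervals meeting the covered part and extending past its frontier
        cands = [iv for iv in ivs if iv[0] <= right < iv[1]]
        if not cands:
            break
        best = max(cands, key=lambda iv: iv[1])   # reaches furthest right
        seqs[turn].append(best)
        right = best[1]
        turn ^= 1                          # alternate between the two sequences
    return max(seqs, key=lambda s: sum(b - a for a, b in s))

picked = disjoint_half([(0, 3), (2, 6), (5, 9), (8, 10)])
assert picked == [(0, 3), (5, 9)]          # disjoint, total length 7 out of 10
```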
+
+Lemma 7 and the sufficiency condition of corollary 1 reduce our problem to the task of bounding the probability of occurrence of some set of erroneous guesses which define disjoint error intervals spanning at least $N/10$ rounds. First consider all ways in which these erroneous guesses can arise. There are at most $2^{2N}$ ways in which the disjoint error intervals can be distributed in the $N$ rounds available. For each error of magnitude $\ell$, the erroneous guess can be at one of $12^\ell - 1$ states; hence for a given set of disjoint error intervals there are, in all, at most $12^N$ ways in which the guesses can arise.
+
+If a set of erroneous guesses defines a set of disjoint error intervals of lengths $l_1, \dots, l_k$ then by corollary 2 and lemma 6 the probability of all these guesses occurring is at most $\left(2^{-50}3^{-10}\right)^{\sum l_i}$. In particular if these error intervals span at least $N/10$ rounds then the probability of this event is at most $96^{-N}$. Ranging over all such events we find that the probability of the protocol failing to simulate $\pi$ is at most $2^{2N} 12^N 96^{-N} = 2^{-N} = 2^{-5T}$. $\square$
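+The arithmetic in this final count is exact: $(2^{-50}3^{-10})^{1/10} = 96^{-1}$, and $2^{2N}\,12^N\,96^{-N} = 2^{-N}$. A quick check in rational arithmetic ($N$ is arbitrary):

```python
from fractions import Fraction

# spanning N/10 rounds turns the per-length bound into 96**(-N):
assert 96 ** 10 == 2 ** 50 * 3 ** 10
# final count: 2**(2N) interval placements * 12**N guesses * 96**(-N) probability
N = 7                                      # any N works
assert Fraction(2 ** (2 * N) * 12 ** N, 96 ** N) == Fraction(1, 2 ** N)
```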
+
+## 3.1 Adversarial Channel
+
+**Proof of Theorem 4:** For the transmission code used to exchange individual tracks between the processors (thus individual letters of S, the tree code alphabet), we use a code with a minimum-distance property, so that any two codewords differ in at least 1/3 of their length. (Thus the protocol here differs slightly from that for the BSC, where we use a code that achieves small error probability, not necessarily a minimum-distance code.)
+
+In order that one of the processors make a wrong guess of magnitude $\ell$, at least $\ell/4$ of the transmitted characters must have been received incorrectly, since we are using a minimum-distance condition and the number of differing characters between the correct and incorrect branches is at least $\ell/2$. Since for the character transmissions we are using the above minimum-distance code, it follows that the fraction of incorrectly transmitted bits during this period of length $\ell$, must be at least $1/24$.
+
+To avoid double-counting these errors we again resort to lemma 7, and wish to bound the total length of disjoint error intervals by $N/10$. For this it therefore suffices that the fraction of bit errors during the protocol be bounded by $1/240$. $\square$
+---PAGE_BREAK---
+
+# 4 Computational Overhead of the Protocol
+
+**Proof of Theorem 3:** The computationally critical aspect of the simulation protocol is the determination, in each round, of the branch of the tree which minimizes the distance to the received sequence of messages (step 2 of the protocol). A priori this requires time exponential in the number of elapsed rounds, but we will show that it can be done in time exponential in the amount of "current error." Furthermore the frequency with which changes of various distances are made falls off exponentially in the distance, and by ensuring that this exponential dominates the previous one, we achieve constant expected computation time per round.
+
+The idea of such a probabilistically constrained search was introduced in the work of Wozencraft, Reiffen, Fano and others [34, 24, 7] on sequential decoding.
+
+We allot constant time to elementary operations such as pointer following and integer arithmetic; in more conservative models the time allotted would depend on the size of the database and the size of the integers, but in any standard model our time analysis would be affected by a factor of no more than $\log T$.
+
+Some attention to data structures is necessary in the analysis.
+
+As is common in the algorithms literature, we use the term suffix to indicate any terminal interval $g_j...g_i$ of a sequence $g_1...g_i$ ($1 \le j \le i$).
+
+The procedure for determining the new guess is as follows. If the newly received character matches that on one of the arcs exiting the current guessed state, then extend the guess along that arc; otherwise extend it arbitrarily along one of the 12 arcs. Now (in either case) determine the longest suffix of this interim guess, such that the fraction of tree code characters in the suffix which differ from those received over the channel, is at least 1/4.
+
+Observe that if this fraction is less than 1/4 in all suffixes then, since every pair of diverging branches differs in at least half of the characters following the divergence, it is certain that the current guess is the desired (i.e. minimum distance) one. Moreover, for the same reason, if for all suffixes beyond some length $l$ the fractions are less than 1/4, then we need not examine branches that diverged from our current guess more than $l$ rounds ago. Therefore, if there exist suffixes which violate the 1/4 condition, we find the longest such suffix, say of length $l$, and determine our new guess as that state which minimizes the distance to the received sequence, among all states which agree with the interim guess in all but the last $l$ rounds. This computation is performed by exhaustively considering all such states.
+
+We now describe how to implement the above computations efficiently. Let the current guess be $g = g_1...g_t$ and let the received sequence be $Z = Z_1...Z_t$. Let $\epsilon_i = 0$ for all $i \le 0$, and for $i > 0$, let $\epsilon_i = 1$ if $w(g_1...g_i) \neq Z_i$, otherwise $\epsilon_i = 0$. Now define $\psi(i) = \sum_{j=1}^{i} (4\epsilon_j - 1)$ for $i \ge 1$; and by extension, $\psi(i) = -i$ for $i \le 0$.
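+With $\psi(i) = \sum_{j=1}^{i}(4\epsilon_j - 1)$ (the reading we adopt: the potential gains 3 on a mismatch and loses 1 on a match), the characterization of the bad suffixes used below can be sanity-checked; helper names are ours:

```python
def psi_seq(eps):
    """psi(i) = sum over j <= i of (4*eps_j - 1): +3 per mismatch, -1 per match.
    Returns [psi(0), psi(1), ..., psi(len(eps))]."""
    out = [0]
    for e in eps:
        out.append(out[-1] + 4 * e - 1)
    return out

eps = [0, 1, 1, 0, 0, 1, 0, 0]            # sample mismatch indicators
p = psi_seq(eps)
t = len(eps)
# the suffixes (t'+1 .. t) with mismatch fraction >= 1/4 are exactly those
# whose starting point t' satisfies psi(t') <= psi(t)
for tp in range(t):
    frac_ok = 4 * sum(eps[tp:]) >= (t - tp)
    assert frac_ok == (p[tp] <= p[t])
```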
+
+We maintain the following data: for every integer $r$ for which there exists an integer $j \ge 0$ s.t. $\psi(j) = r$, we maintain the pair $(r, J_r)$. Here $J_r$ is a doubly linked list consisting of all integers $j$ for which $\psi(j) = r$, arranged in order of increasing size. We also maintain external pointers to the smallest and largest elements of $J_r$. Note that if $r < 0$, $J_r$ contains the list $(j_1, ..., j_k)$, while if $r \ge 0$, $J_r$ contains the list $(-r, j_1, ..., j_k)$. Let $m(r)$ be the least integer in $J_r$.
+
+Observe that at time $t$, the suffixes of $g$ in which the relative distance is at least 1/4, are precisely those which begin at a time $t'$ for which $\psi(t') \le \psi(t)$. Since $\psi$ decreases by at most 1 per round, the greatest such suffix begins at time $m(\psi(t))$.
+
+The records $(r, J_r)$ are themselves stored in a doubly linked list, sorted by $r$.
+
+The procedure is now implemented as follows. At time $t$, after the previous guess has been extended by one arc, we determine $\psi(t)$ for the new guess $g$ by simply adding $3$ or $-1$ to $\psi(t-1)$, according as $\epsilon_t$ is 1 or 0. Now if no entry $(\psi(t), J_{\psi(t)})$ exists in the database, a new one is created (with $J_{\psi(t)}$ being either $(t)$ or $(-\psi(t), t)$ depending on the sign of $\psi(t)$). If an entry $(\psi(t), J_{\psi(t)})$ does exist in the database, we append $t$ to the end of the list.
+---PAGE_BREAK---
+
+Note that since at time $t-1$ we already had a pointer to the record $(\psi(t-1), J_{\psi(t-1)})$, a constant number of steps suffice to locate the record $(\psi(t), J_{\psi(t)})$, or determine its absence and insert it.
+
+Now if $m(\psi(t)) = t$ then there is no suffix of the current guess in which the distance between $g$ and $Z$ is at least a quarter of the length of the suffix, and we are done with this round. If $m(\psi(t)) < t$ then the greatest such suffix begins at $m(\psi(t))$, and we proceed to exhaustively compute the new suffix which minimizes the distance to $Z$. This dictates a new guess $g'$. We update the data structure by first, deleting the entries corresponding to the suffix of $g$ (in reverse order, so the first entry to go is that corresponding to $g_t$); and second, inserting the entries corresponding to $g'$ (in the usual order, starting with $g'_{m(\psi(t))+1}$).
+
+Let $L(t)$ be the length of the recomputed suffix, $L(t) = t - m(\psi(t))$. The amount of time expended on computation is proportional to $\sum_{t=1}^{5T} \left(12^{L(t)} + L(t)\right)$. The expectation of this quantity (allowing for all possible ways in which the erroneous messages can arrive, and using lemma 3) is $\sum_t E(12^{L(t)} + L(t)) \le \sum_t \sum_{L \ge 1} 2^L (2^{-2083} 3^{-40})^{L/4} (12^L + L) = O(T)$.
+
+# 5 Binary Erasure Channel with Feedback
+
+We have focused in this paper on the binary symmetric channel (BSC) as representative of channels with random noise. Some justification for using the BSC is to be found in the fact that the method developed in order to solve the case of the BSC, extended with hardly any modification to the extreme case of an “adversarial” channel. There are however even “tamer” models of error than the BSC; we consider one of these here. In this case we obtain relatively simple and elegant results; but the method involved is not useful for more difficult error models.
+
+We focus in this section upon protocols in which only one processor transmits at any time, as opposed to the work in previous sections, in which both processors were allowed to transmit simultaneously.
+
+Consider the binary erasure channel (BEC). This is a channel with two inputs 0, 1, and three outputs 0, Err, 1. For some $p$, inputs are transformed into Err with probability $p$; transmitted successfully with probability $1-p$; and never transformed into the opposite character.
+
+Let us assume the BEC is additionally equipped with perfect feedback, which means that the transmitter can see every character as it arrives at the receiver. (Call this a BECF channel.) Consider the following channel-specific complexity for a communication problem $f$: the maximum over distributions on input pairs to $f$, of the minimum over protocols, of the expected number of transmissions until the problem is solved with zero error probability on the BECF. Now define the complexity $C_{BECF}$ of the problem as the product of the previous quantity with the capacity of the channel. Let $C_{BECF}^R$ denote the corresponding quantity on a specified distribution $R$ on input pairs.
+
+Recall the distributional complexity $D$ for noiseless channels [35], which is defined as the maximum over distributions on input pairs, of the minimum over protocols, of the expected number of transmissions until the problem is solved. Let $D^R$ denote the corresponding quantity on a specified distribution $R$ on input pairs.
+
+The following result is a zero-error or “Las Vegas” analogue of Shannon’s theorem and its converse (and incidentally shows that the definition of $C_{BECF}$ is meaningful).
+
+**Theorem 5** *The $C_{BECF}$ complexity of a communication problem $f$ is equal to its distributional complexity $D$.*
+
+*Proof:* First we show that $C_{BECF}(f) \le D(f)$. The capacity of the BECF (as well as the BEC) is $1-p$. Observe that due to the perfect feedback, both processors are always aware of the entire history of their communication session. Fix any distribution $R = \{r(z)\}$ on input pairs. Now, simulate a noiseless-channel protocol on the BECF by simply repeating every transmission until it is received successfully. The length of the BECF protocol on a particular input pair is the sum, over all noiseless steps, of the number of BECF
+---PAGE_BREAK---
+
+transmissions required to get that noiseless step across; the expected number of BECF transmissions per noiseless step is $1/(1-p)$. Thus an input pair solved in $n$ steps by the noiseless protocol will be solved in an expected $n/(1-p)$ transmissions on the BECF. Hence the complexity of the problem is preserved on every input pair, and in particular the average over the distribution $\mathcal{R}$ is preserved.
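The repeat-until-success simulation can be checked empirically. The following Monte Carlo sketch (the function name and parameters are our own, not from the paper) confirms that each noiseless step costs an expected $1/(1-p)$ BECF transmissions:

```python
import random

def transmissions_until_success(p, rng):
    """Number of BECF channel uses needed to get one noiseless-protocol
    step across, retrying after each erasure (probability p); the perfect
    feedback is what lets the transmitter know when to retry."""
    uses = 1
    while rng.random() < p:  # transmission erased; retry
        uses += 1
    return uses

# Empirical check that the expected cost per noiseless step is 1/(1-p).
rng = random.Random(0)
p = 0.25
n = 200_000
avg = sum(transmissions_until_success(p, rng) for _ in range(n)) / n
```

The count is geometrically distributed with success probability $1-p$, so its mean is exactly $1/(1-p)$; the simulation should agree to within sampling error.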
+
+Next we show that $C_{\text{BECF}}(f) \ge D(f)$. Fix any distribution $\mathcal{R} = \{r(z)\}$ on input pairs. Pick a BECF protocol $\pi$ achieving complexity $C_{\text{BECF}}^{\mathcal{R}}(f)+\epsilon$ for some $\epsilon > 0$ smaller than any positive $r(z)$. Now represent $\pi$ with a binary tree whose root $v_0$ corresponds to the start of the protocol, while the children of a vertex $v$ correspond to the possible states of the protocol after an additional transmission. Denote by $v^S$ the vertex the protocol proceeds to if the transmission was successful, and $v^{\text{Err}}$ the next vertex if the transmission was unsuccessful. Define the following functions at each vertex:
+
+$$h_{\pi,z}(v) = E(\text{number of remaining transmissions until } \pi \text{ terminates on input pair } z)$$
+
+and
+
+$$h_{\pi}(v) = \sum_{z} r(z) h_{\pi,z}(v).$$
+
+Observe that $h_{\pi}(v_0) = C_{\text{BECF}}^{\mathcal{R}}(f) + \epsilon$.
+
+The performance of the channel is such that for any $\pi, z$ and $v$, $h_{\pi,z}(v) = 1+(1-p)h_{\pi,z}(v^S)+ph_{\pi,z}(v^{\text{Err}})$.
+Therefore also
+
+$$ (1) \quad h_{\pi}(v) = 1 + (1-p)h_{\pi}(v^S) + ph_{\pi}(v^{\text{Err}}). $$
+
+We claim that we may as well assume that $h_{\pi}(v) = h_{\pi}(v^{\text{Err}})$ for any vertex $v$ of the tree. Suppose this is false at some $v$. Consider the chain of vertices descended from $v$ strictly through errors: i.e. $v^{\text{Err}}$, $(v^{\text{Err}})^{\text{Err}}$ (which we may write $v^{\text{Err}^2}$), etc. Exactly the same information is available to the processors at all these vertices, and so $\pi$ may be edited by replacing the subtree rooted at the “success” child of $v$ (which we denote $v^S$), by that rooted at the “success” child of any other vertex $v^{\text{Err}^k}$ on this chain. (This includes replacing the protocol’s action on each input at $v$, by that specified at $v^{\text{Err}^k}$.) Such a replacement may be made not only at $v$, but at any vertex $v^{\text{Err}^i}$ on the chain. Using (1) we find that
+
+$$ h_{\pi}(v) = \sum_{i=0}^{\infty} p^i \left( (1-p)\, h_{\pi}(v^{\text{Err}^i S}) + 1 \right) $$
+
+(where $v^{\text{Err}^i S}$ is the "success" child of $v^{\text{Err}^i}$). If there is a vertex $v^{\text{Err}^k}$ on the error-chain descended from $v$ whose "success" child $v^{\text{Err}^k S}$ attains the infimum among all the values $\{h_{\pi}(v^{\text{Err}^i S})\}_{i=0}^{\infty}$, then we may reduce $h_{\pi}(v)$ by replacing all of the "success" children $\{v^{\text{Err}^i S}\}$ by copies of $v^{\text{Err}^k S}$. Even if the infimum is not achieved by any $v^{\text{Err}^k S}$, $h_{\pi}(v)$ can be reduced by choosing a $v^{\text{Err}^k S}$ with $h_{\pi}(v^{\text{Err}^k S})$ sufficiently close to the infimum, and making the same replacement.
+
+We transform $\pi$ into a protocol $\pi'$ satisfying $h_{\pi'}(v) = h_{\pi'}(v^{\text{Err}})$ at all its vertices, by first editing $\pi$ in this way at the root, then editing the resulting protocol at the vertices one level away from the root, and so forth. (This process is infinite but it gives a well-defined $\pi'$.) Note that $C_{\text{BECF}}^{\mathcal{R}}(f) \le h_{\pi'}(v_0) \le C_{\text{BECF}}^{\mathcal{R}}(f) + \epsilon$.
+
+From (1) and the fact that $h_{\pi'}(v) = h_{\pi'}(v^{\text{Err}})$, we now have at every vertex:
+
+$$ h_{\pi'}(v) = 1 + (1-p)h_{\pi'}(v^S) + ph_{\pi'}(v). $$
+
+Therefore $h_{\pi'}(v^S) = h_{\pi'}(v) - 1/(1-p)$.
+
+Now we use $\pi'$ to solve $f$ on a noiseless channel. Since no errors occur, the protocol simply proceeds to "success" children in every transmission. After $(1-p)C_{\text{BECF}}^{\mathcal{R}}(f)$ levels of descent, we will reach a vertex $u$ such that $h_{\pi'}(u) \le \epsilon$. If there were any input pair $z$ such that $\pi'$ had not yet terminated on $z$ at $u$, then $h_{\pi',z}(u)$ would have had to be at least 1, and therefore $h_{\pi'}(u)$ would have had to be at least $r(z)$. However
+---PAGE_BREAK---
+
+this is a contradiction by our assumption on $\epsilon$. Therefore $\pi'$ terminates on all input pairs by the time it reaches $u$. The theorem follows because $u$ is at level $(1-p)C_{\text{BECF}}^R(f)$. $\square$
+
+It is evident from this section that the coding problem for interactive communication is much simpler on the BECF than on more general channels, even among discrete memoryless channels. This is a phenomenon familiar already from data transmission, where the BECF model and variants of it have been studied in work beginning with Berlekamp [1].
+
+# 6 Discussion
+
+Insofar as interactive protocols model the operation of a computer whose inputs and processing power are not localized, this paper may be regarded as presenting a coding theorem for computation.
+
+Our theorem suffers, however, a drawback similar to one which afflicted Shannon’s original work: namely that a good code has only been shown to exist, and has not been explicitly exhibited. From a practical standpoint this was not necessarily a severe problem for data transmission, since a randomly constructed code almost surely has good properties. Even so this was hardly satisfactory. Explicit codes achieving arbitrarily low error at a positive asymptotic rate were not exhibited until Justesen’s work of 1972 [15]. This drawback is even greater for implementation of the present work, since the existence arguments for the required tree codes do not even show that a random construction yields one with high probability.
+
+Explicit construction of a tree code can reasonably be interpreted as meaning that each label must be computable in time polynomial in the depth of the tree. (Or, for an infinite tree, polynomial in the depth of the label.) The current state of knowledge on this very interesting problem is as follows. First, a construction is known with alphabet size polynomial in the depth of the tree [5]. Second, if the definition of a tree code is relaxed so that the Hamming distance condition is required only for pairs of vertices whose distance from their least common ancestor is at most logarithmic in the depth of the tree, then the existence proof can be adapted to provide an explicit construction. These constructions provide, respectively, an explicit protocol with logarithmic communication overhead and exponentially small probability of error; or, with constant communication overhead and polynomially small probability of error.
+
+A word on the constant $2^{-2083}3^{-40}$ in lemma 2. External events will intervene long before the first interesting event happens in an implementation of the protocol, if the error probability in each “track” transmission is truly amplified to this degree. Correspondingly, the rate achieved by the protocol run with this parameter, although positive, will be absurdly low. However this appears to be primarily an artifact of the analysis and not of the protocol; and we have endeavored to make the presentation as simple as possible at the expense of optimization of the parameters. In practice an optimized version of the protocol will likely achieve higher rate than provided in the bounds.
+
+The present work shows that interactive protocols can be deterministically simulated on a BSC at a rate within a constant factor of channel capacity, but in the data transmission case it is possible to communicate as close to capacity as is desired. In the view of the author it is very likely that there is indeed a multiplicative gap between the channel capacity, and the rate which can always be guaranteed in the interactive case. It appears that (much as for the quantity $R_0$ in the sequential decoding literature) a certain amount of “backtracking” has to be allowed for. It would be of great interest to demonstrate this gap, or else close it by providing a better simulation.
+
+Even if the above is true, this does not rule out the possibility that some problems can be simulated with relatively greater efficiency in the noisy case. Shannon’s theorem is accompanied by its converse: that at any transmission rate beyond channel capacity, a code must suffer high error probability on at least some codewords. Since n-bit transmissions are a special case of n-bit noiseless communication protocols, this provides a restricted sort of converse for the coding theorem for interactive protocols. However, this does not mean that any particular noiseless-channel protocol does not in fact have a much more efficient
+---PAGE_BREAK---
+
+noisy-channel protocol (after factoring in the channel capacity). For instance, it may be that a new protocol can be devised for the case of noisy channels, that is not at all a simulation of a noiseless-channel protocol, and that takes advantage of the greater number of (albeit noisy) rounds available. Furthermore, it may be that the best noiseless protocol does not have use for the two bits being exchanged in each round in our model, but only one of them; while they might be of use to a noisy protocol. In this regard one may also want to keep in mind the work of Shannon on two-way channels with a joint constraint on capacity [30].
+
+In the above context, we mention that the use of randomness as a resource must be considered carefully. If it is allowed, then one should compare randomized complexities in both the noiseless and noisy settings. However if a comparison of deterministic complexities is preferred, then one must be wary of the processors gaining random bits from the noise on the channel. (For instance, to avert this, an adversary might be allowed some control over the error probability of the channel in each transmission.)
+
+Theorem 4 indicates that a coding theorem can be obtained for most channels of interest, but it would of course be desirable to study what rate can be obtained on various channels. We should note however that a complicating factor in this question, is that the capacity of a channel with memory, unlike that of one without memory, can be increased by feedback. Hence there may be competition for use of each direction of the channel, between its role in transmitting messages in that direction, and its role in amplifying the capacity of the channel in the opposing direction.
+
+In earlier work of the author in this area it was proposed that the problem of interactive communication in the presence of noise, also be studied for networks of more than two processors [26]. In particular one would ask to what extent the coding theorem might be extended to the efficient simulation of noiseless distributed protocols, on networks with noise. This question has been answered by Rajagopalan and the author in a forthcoming publication.
+
+We draw attention also to the work of Gallager [13], who has previously considered a different problem concerning noise in a distributed computation. He considered a complete network of $n$ processors, each of which in a single transmission can broadcast one bit, which arrives at each of the other processors subject to independent noise. He studied a specific problem on this network: supposing that each processor receives a single input bit, he showed how to quickly and reliably compute the combined parity of all the inputs. (There is as yet a gap between the upper and lower bounds in this interesting problem, however.)
+
+Karchmer and Wigderson [16] observed a certain equivalence between communication complexity and circuit complexity, and thereby stimulated great recent interest in communication complexity. While noisy circuits have been studied in the literature (e.g. [33, 21, 23, 22, 2, 3, 8, 11, 25, 6]), (as have noisy cellular automata, [31, 9, 10]) the correspondence between circuits and communication protocols does not appear to extend to the noisy cases of each. Elias [4] and later Peterson and Rabin [20] investigated the possibility of encoding data for computation on noisy gates, and the extent to which these gates might be said to have a finite (nonzero) capacity for computation. (Here, as for channel transmission, noiseless encoding and decoding of the data before and after the computation are allowed.)
+
+## Acknowledgments
+
+This research was conducted at MIT (see [26, 27] for preliminary presentations and related work) and at U. C. Berkeley (see [28]). Thanks to my advisor Mike Sipser whose oversight and encouragement in the early stages of this work were invaluable. For consultations and comments thanks also to Dan Abramovich, Manuel Blum, Peter Elias, Will Evans, Robert Gallager, Wayne Goddard, Oded Goldreich, Mauricio Karchmer, Richard Karp, Claire Kenyon, Dan Kleitman, Mike Klugerman, Nati Linial, Mike Luby, Moni Naor, Yuval Peres, Nicholas Pippenger, Yuval Rabani, Sridhar Rajagopalan, Andrew Sutherland, Umesh Vazirani, Boban Velickovic and David Zuckerman. Thanks to the editor, Rene Cruz, and the anonymous referees, whose careful comments substantially improved the paper.
+---PAGE_BREAK---
+
+Funding at MIT was provided by an ONR graduate fellowship, an MIT Applied Mathematics graduate fellowship, and grants NSF 8912586 CCR and AFOSR 89-0271. Funding at Berkeley was provided by an NSF postdoctoral fellowship.
+
+Current address: College of Computing, Georgia Institute of Technology, Atlanta GA 30332-0280, USA.
+
+References
+
+[1] E. R. Berlekamp. Block coding for the binary symmetric channel with noiseless, delayless feedback. In H. B. Mann, editor, *Error Correcting Codes*, pages 61–85. Wiley, 1968.
+
+[2] R. L. Dobrushin and S. I. Ortyukov. Lower bound for the redundancy of self-correcting arrangements of unreliable functional elements. *Prob. Inf. Trans.*, 13:59–65, 1977.
+
+[3] R. L. Dobrushin and S. I. Ortyukov. Upper bound for the redundancy of self-correcting arrangements of unreliable functional elements. *Prob. Inf. Trans.*, 13:203–218, 1977.
+
+[4] P. Elias. Computation in the presence of noise. *IBM Journal of Research and Development*, 2(4):346-353, October 1958.
+
+[5] W. Evans, M. Klugerman, and L. J. Schulman. Constructive tree codes with polynomial size alphabet. Manuscript.
+
+[6] W. Evans and L. J. Schulman. Signal propagation, with application to a lower bound on the depth of noisy formulas. In *Proceedings of the 34th Annual Symposium on Foundations of Computer Science*, pages 594–603, 1993.
+
+[7] R. M. Fano. A heuristic discussion of probabilistic decoding. *IEEE Transactions on Information Theory*, pages 64–74, 1963.
+
+[8] T. Feder. Reliable computation by networks in the presence of noise. *IEEE Transactions on Information Theory*, 35(3):569–571, May 1989.
+
+[9] P. Gács. Reliable computation with cellular automata. *J. Computer and System Sciences*, 32:15–78, 1986.
+
+[10] P. Gács and J. Reif. A simple three-dimensional real-time reliable cellular array. *J. Computer and System Sciences*, 36:125–147, 1988.
+
+[11] A. Gál. Lower bounds for the complexity of reliable boolean circuits with noisy gates. In *Proceedings of the 32nd Annual Symposium on Foundations of Computer Science*, pages 594–601, 1991.
+
+[12] R. G. Gallager. *Information Theory and Reliable Communication*. Wiley, 1968.
+
+[13] R. G. Gallager. Finding parity in a simple broadcast network. *IEEE Trans. Inform. Theory*, 34(2):176-180, March 1988.
+
+[14] G. H. Hardy and E. M. Wright. *An Introduction to the Theory of Numbers*. Oxford, fifth edition, 1979.
+
+[15] J. Justesen. A class of constructive, asymptotically good algebraic codes. *IEEE Transactions on Information Theory*, IT-18:652–656, September 1972.
+---PAGE_BREAK---
+
+[16] M. Karchmer and A. Wigderson. Monotone circuits for connectivity require super-logarithmic depth. In *Proceedings of the 20th Annual Symposium on Theory of Computing*, pages 539–550, 1988.
+
+[17] R. J. Lipton and R. Sedgewick. Lower bounds for VLSI. In *Proceedings of the 13th Annual Symposium on Theory of Computing*, pages 300–307, 1981.
+
+[18] L. Lovász. Communication complexity: A survey. In B. Korte et al., editors, *Algorithms and Combinatorics*. Springer-Verlag, 1990.
+
+[19] C. H. Papadimitriou and M. Sipser. Communication complexity. In *Proceedings of the 14th Annual Symposium on Theory of Computing*, pages 196–200, 1982.
+
+[20] W. W. Peterson and M. O. Rabin. On codes for checking logical operations. *IBM Journal of Research and Development*, 3:163–168, April 1959.
+
+[21] N. Pippenger. On networks of noisy gates. In *Proceedings of the 26th Annual Symposium on Foundations of Computer Science*, pages 30–36, 1985.
+
+[22] N. Pippenger. Reliable computation by formulas in the presence of noise. *IEEE Transactions on Information Theory*, 34(2):194–197, March 1988.
+
+[23] N. Pippenger. Invariance of complexity measures for networks with unreliable gates. *J. ACM*, 36:531–539, 1989.
+
+[24] B. Reiffen. Sequential encoding and decoding for the discrete memoryless channel. *Res. Lab. of Electronics, M.I.T. Technical Report*, 374, 1960.
+
+[25] R. Reischuk and B. Schmeltz. Reliable computation with noisy circuits and decision trees — a general $n \log n$ lower bound. In *Proceedings of the 32nd Annual Symposium on Foundations of Computer Science*, pages 602–611, 1991.
+
+[26] L. J. Schulman. *Communication in the Presence of Noise*. PhD thesis, Massachusetts Institute of Technology, 1992.
+
+[27] L. J. Schulman. Communication on noisy channels: A coding theorem for computation. In *Proceedings of the 33rd Annual Symposium on Foundations of Computer Science*, pages 724–733, 1992.
+
+[28] L. J. Schulman. Deterministic coding for interactive communication. In *Proceedings of the 25th Annual Symposium on Theory of Computing*, pages 747–756, 1993.
+
+[29] C. E. Shannon. A mathematical theory of communication. *Bell System Tech. J.*, 27:379–423; 623–656, 1948.
+
+[30] C. E. Shannon. Two-way communication channels. Proc. 4th Berkeley Symp. Math. Stat. and Prob. (reprinted in *Key Papers in Information Theory, D. Slepian, ed.*, IEEE Press, 1974), 1:611–644, 1961.
+
+[31] M. C. Taylor. Reliable information storage in memories designed from unreliable components. *Bell System Tech. J.*, 47(10):2299–2337, 1968.
+
+[32] C. D. Thompson. Area-time complexity for VLSI. In *Proceedings of the 11th Annual Symposium on Theory of Computing*, pages 81–88, 1979.
+
+[33] J. von Neumann. Probabilistic logics and the synthesis of reliable organisms from unreliable components. In C. E. Shannon and J. McCarthy, editors, *Automata Studies*, pages 43–98. Princeton University Press, 1956.
+---PAGE_BREAK---
+
+[34] J. M. Wozencraft. Sequential decoding for reliable communications. *Res. Lab. of Electronics, M.I.T. Technical Report*, 325, 1957.
+
+[35] A. C. Yao. Probabilistic computations: Toward a unified measure of complexity. In *Proceedings of the 18th Annual Symposium on Foundations of Computer Science*, pages 222–227, 1977.
+
+[36] A. C. Yao. Some complexity questions related to distributive computing. In *Proceedings of the 11th Annual Symposium on Theory of Computing*, pages 209–213, 1979.
+
+[37] A. C. Yao. The entropic limitations on VLSI computations. In *Proceedings of the 13th Annual Symposium on Theory of Computing*, pages 308–311, 1981.
\ No newline at end of file
diff --git a/samples/texts_merged/647655.md b/samples/texts_merged/647655.md
new file mode 100644
index 0000000000000000000000000000000000000000..6975e7649713669245003050f64cd04bc341a0fe
--- /dev/null
+++ b/samples/texts_merged/647655.md
@@ -0,0 +1,847 @@
+
+---PAGE_BREAK---
+
+A COMPRESSED SENSING FRAMEWORK FOR MAGNETIC
+RESONANCE FINGERPRINTING
+
+MIKE DAVIES, GILLES PUY, PIERRE VANDERGHEYNST AND YVES WIAUX
+
+**Abstract.** Inspired by the recently proposed Magnetic Resonance Fingerprinting (MRF) technique, we develop a principled compressed sensing framework for quantitative MRI. The three key components are: a random pulse excitation sequence following the MRF technique; a random EPI subsampling strategy and an iterative projection algorithm that imposes consistency with the Bloch equations. We show that theoretically, as long as the excitation sequence possesses an appropriate form of persistent excitation, we are able to accurately recover the proton density, T1, T2 and off-resonance maps simultaneously from a limited number of samples. These results are further supported through extensive simulations using a brain phantom.
+
+**Key words.** Compressed sensing, MRI, Bloch equations, manifolds, Johnson-Lindenstrauss embedding
+
+1. Introduction. Inspired by the recently proposed procedure of Magnetic Resonance Fingerprinting (MRF), which gives a new technique for quantitative MRI, we investigate this idea from a compressed sensing perspective. While MRF itself was inspired by the recent growth of compressed sensing (CS) techniques in MRI [29], the exact link to CS was not made explicit, and the paper does not consider a full CS formulation. Indeed, the roles of sparsity, random excitation and sampling are not clarified. The goal of the current paper is to make the links with CS explicit, to shed light on the appropriate acquisition and reconstruction procedures, and hence to develop a full compressed sensing strategy for quantitative MRI.
+
+In particular, we identify separate roles for the pulse excitation and the subsampling of *k*-space. We identify the Bloch response manifold as the appropriate low dimensional signal model on which the CS acquisition is performed, and interpret the “model-based” dictionary of [29] as a natural discretization of this response manifold. We also discuss what is necessary in order to have an appropriate CS-type acquisition scheme.
+
+Having identified the underlying signal model we next turn to the reconstruction process. In [29] this was performed through pattern matching using a matched filter based on the model-based dictionary. However, this does not offer the opportunity for exact reconstruction, even if the signal is hypothesised to be 1-sparse in this dictionary, due to the undersampling of *k*-space. This suggests that we should look to a model-based CS framework that directly supports such manifold models [4]. Recent algorithmic work in this direction has been presented by Iwen and Maggioni [23]; however, their approach is not practical in the present context as the computational cost of their scheme grows exponentially with the dimension of the manifold. Instead, we leverage recent results from [11] and develop a recovery algorithm based on the Projected Landweber Algorithm (PLA). This method also has the appealing interpretation of an iterated refinement of the original MRF scheme.
+
+The remainder of the paper is set out as follows. We begin by giving a brief overview of MRI acquisition. Then we discuss the challenges of quantitative imaging in MRI and review the recently proposed MRF scheme [29]. We next develop the detailed mathematical model associated with MRF acquisition which leads us to the voxel-wise Bloch response manifold model observed through a sequence of partially sampled *k*-space measurements. In §4, using the MRF acquisition model, we set out a framework for a compressed sensing solution to the quantitative MRI problem
+---PAGE_BREAK---
+
+followed by a simple extension that provides a degree of spatial regularization.
+
+In the simulation section we demonstrate the efficacy of our methods on an anatomical brain phantom [15], available at the BrainWeb repository [12]. Our results show that our CS method offers substantial gains in reconstruction accuracy over the original MRF matched filter scheme [29]. We also demonstrate the efficiency of the proposed algorithm in terms of speed of convergence and the empirical trade-off between undersampling in *k*-space and excitation sequence length.
+
+Finally, we summarize what we have learnt by placing the MRF procedure within a CS framework and highlight a number of open questions and research challenges.
+
+**2. Magnetic Resonance Imaging Principles.** MRI, with its ability to image soft tissue, provides a very powerful imaging tool for medicine. The basic principles of MRI lie in the interaction of proton spins with applied magnetic fields. While a full review of these principles is beyond the scope of this paper, following [37], we now introduce the basics required in order to understand the motivation for the proposed acquisition scheme and the subsequent mathematical models. For a more detailed treatment of MRI from a signal processing perspective we refer the reader to one of the excellent reviews on the subject, such as [37, 20].
+
+**2.1. Bloch Equations.** The main source of the measured signal in MRI comes from the magnetic moments of the proton spins. In a single volume element (voxel) the net magnetization **m** = (m^x, m^y, m^z)^T is the vector sum of all the individual dipole moments within the voxel. If there is no magnetic field then at equilibrium the net magnetization is zero.
+
+If a static magnetic field, B₀ (usually considered to lie in the [0, 0, 1]ᵀ direction), is then applied, the spins align with this field and the net magnetization at equilibrium, $m_{\text{eq}}$, is proportional to the proton density ρ within the volume. However, equilibrium is not achieved immediately after the field is applied, but is controlled by the longitudinal relaxation time, T₁, such that the net magnetization at time t is given by: $m^z(t) = m_{\text{eq}}(1 - \exp(-t/T_1))$.
+
+If there is magnetization in the plane orthogonal to B₀ then the magnetization, {m^x, m^y}, precesses about the z axis at a frequency called the Larmor frequency, ω_L = γ|B₀| (approximately 42.6 MHz per Tesla), where the quantity γ is called the gyromagnetic ratio. This in turn emits an electromagnetic signal, which is the signal that is measured. As the individual dipoles dephase, the net transverse magnetization decays exponentially with time constant T2, called the transverse relaxation time.
+
+In MRI the magnetic field is composed of a static magnetic field and a dynamic
+component which is manipulated through a radio frequency (RF) coil aligned with the
+x direction. When a transverse magnetic field is applied via an RF pulse the proton
+dipoles rotate about the applied magnetic field. The overall macroscopic dynamics
+of the net magnetization can be summarized by a set of linear differential equations
+called the Bloch equations [37]:
+
+$$
+(2.1) \quad \frac{\partial \mathbf{m}(t)}{\partial t} = \mathbf{m}(t) \times \gamma \mathbf{B}(t) - \begin{pmatrix} \frac{m^x(t)}{T2} \\ \frac{m^y(t)}{T2} \\ \frac{(m^z(t) - m_{\text{eq}})}{T1} \end{pmatrix}
+$$
+
+The response at a given readout time (TE) from an initial RF pulse can be determined
+by integrating these equations over time. When a specific sequence of pulses is applied
+(assuming pulse length ≪ T1, T2) then the dynamics of the magnetization from pulse
+to pulse or readout to readout can be described simply by a three dimensional discrete
+time linear dynamical system [24].
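A minimal numerical sketch of (2.1), assuming normalized units (γ = 1) and a simple explicit-Euler integrator chosen for clarity rather than accuracy; the function names and parameter choices are our own, not from the paper:

```python
import math

def bloch_rhs(m, B, T1, T2, m_eq, gamma=1.0):
    """Right-hand side of the Bloch equations (2.1): precession of m about
    gamma*B plus T1/T2 relaxation (normalized units, gamma = 1 here)."""
    bx, by, bz = (gamma * b for b in B)
    mx, my, mz = m
    # cross product m x (gamma B)
    cross = (my * bz - mz * by, mz * bx - mx * bz, mx * by - my * bx)
    relax = (mx / T2, my / T2, (mz - m_eq) / T1)
    return tuple(c - r for c, r in zip(cross, relax))

def integrate(m, B, T1, T2, m_eq, dt, steps):
    """Explicit-Euler integration of the Bloch equations over steps*dt."""
    for _ in range(steps):
        d = bloch_rhs(m, B, T1, T2, m_eq)
        m = tuple(mi + dt * di for mi, di in zip(m, d))
    return m
```

With B = 0 the precession term vanishes and the solution reduces to the pure relaxation curves quoted above: the transverse components decay as exp(-t/T2) and the longitudinal component recovers as m_eq(1 - exp(-t/T1)).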
+---PAGE_BREAK---
+
+**2.2. Spatial Encoding and Image Formation.** In order to produce an image it is necessary to spatially encode the magnetization in the received signal. This is done through the application of various magnetic gradients. First, a slice can be selected through the application of a magnetic gradient along the *z* direction, while appropriately restricting the frequency band of the excitation pulses. The gradient changes the Larmor frequency as a function of *z*, and only those positions that are excited by the pulses generate a magnetization in the transverse plane.
+
+In order to encode the transverse magnetization spatially at the acquisition time (called the echo time (TE)) the magnetic field can be modified further to have gradients $G_x$ and $G_y$ in the *x* and *y* directions. For example, if a linear gradient is applied along the *x* direction so that $B^z = (B_0 + G_x x)$, then the spatial variation of the transverse magnetization is encoded in the Larmor frequency and hence in the frequency of the received signal (it is assumed that the duration of the signal read out time is sufficiently short such that the magnetization can be treated as stationary). The received signal therefore corresponds to a line in the spatial Fourier transform, known as *k*-space, of the transverse magnetization. By careful selection of $G_x$ and $G_y$ it is possible to sample different lines of *k*-space until it is adequately sampled. The most popular technique is to take measurements, which we denote **y**, that sample *k*-space on a Cartesian grid, so that the image can be formed by the application of the inverse 2D Discrete Fourier transform (DFT), $F$. Thus we can generate a discrete image **x** (represented here in vector form) by using $\mathbf{x} = F^H \mathbf{y}$, where $H$ denotes the conjugate transpose. For simplicity, unless stated otherwise, we will work with this discrete representation and assume that samples in *k*-space have been taken on the Cartesian grid.
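The image-formation step $\mathbf{x} = F^H \mathbf{y}$ can be illustrated on a small, fully sampled Cartesian grid. This sketch uses a naive $O(n^2)$ DFT per dimension for transparency (a real implementation would use the FFT), and the inverse is obtained via the standard conjugation trick:

```python
import cmath

def dft1(v):
    """Unnormalized 1-D DFT of a list of (possibly complex) samples."""
    n = len(v)
    return [sum(v[j] * cmath.exp(-2j * cmath.pi * k * j / n)
                for j in range(n))
            for k in range(n)]

def dft2(x):
    """2-D DFT: the Cartesian k-space samples y = F x of a small image x."""
    rows = [dft1(r) for r in x]                 # transform each row
    cols = [dft1(list(c)) for c in zip(*rows)]  # then each column
    return [list(r) for r in zip(*cols)]        # transpose back

def image_from_kspace(y):
    """Image formation x = F^H y: inverse 2-D DFT via conjugation,
    with the 1/n^2 normalization for an n x n grid."""
    n = len(y)
    conj = [[v.conjugate() for v in r] for r in y]
    z = dft2(conj)
    return [[v.conjugate() / (n * n) for v in r] for r in z]
```

Round-tripping an image through `dft2` and `image_from_kspace` recovers it exactly (up to floating-point error), which is precisely the statement that fully sampled Cartesian k-space determines the image.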
+
+**2.3. Rapid Imaging.** A key challenge in MRI is acquiring the signals in a reasonably short time. Long scan times are costly, unpopular with patients and can introduce additional complications such as motion artefacts. However, the setup described so far for MRI requires the application of repeated excitation pulses and gradients to acquire the multiple lines of *k*-space. Furthermore, after each acquisition sufficient time must be left in order for the magnetization to achieve equilibrium once again.
+
+One way to accelerate the imaging is to acquire more samples from *k*-space per acquisition. By varying the transverse gradients $G_x$ and $G_y$ as a function of time it is possible to generate more sophisticated sampling patterns. For example, in echo-planar imaging (EPI) [30] multiple lines of *k*-space are acquired at each pulse. Another strategy is to generate spiral trajectories in *k*-space. However, in both cases, as the readout time gets longer, artefacts are introduced by variation in the transverse magnetization over the readout time. Furthermore, in the case of spiral and other non-Cartesian trajectories there is the added complication of requiring more complicated image formation algorithms, such as gridding techniques [13], that attempt to approximate the pseudo-inverse of the non-uniform Fourier transform [19].
+
+A second approach to rapid imaging is to take fewer samples. Since the emergence of compressed sensing in MRI [28], the idea of subsampling *k*-space has become very popular. Compressed sensing exploits the fact that the image being acquired can be approximated by a low dimensional model, e.g. sparse in the spatial or wavelet domain. Then, under certain circumstances, the image can be recovered from a subsampled *k*-space using an appropriate iterative reconstruction algorithm.
+
+Parallel imaging techniques can also be used in conjunction with the above strategies to provide further acceleration. However, these are outside the scope of the current work.
+
+**2.4. Quantitative MRI.** Rather than simply forming an image that measures the transverse magnetization response from a single excitation, the aim of quantitative imaging is to provide additional physiological information by estimating the spatial variation of one or more of the physical parameters that control the Bloch equations, specifically: T1, T2 and proton density. These can help in the discrimination of different tissue types and provide useful information in numerous application areas, such as diffusion and perfusion imaging.
+
+The standard approach to parameter estimation is to acquire a large sequence of images in such a way that for each voxel the sequence of values is dependent on either T1 and/or T2 as well as certain nuisance parameters. For example, the most common techniques acquire a sequence of images at different echo times from an initial excitation pulse. For T1 estimation, this is typically an inversion recovery pulse (a full 180° rotation of the magnetization) and for T2 it is a spin-echo pulse (90° rotation). The image sequences encode the exponential relaxation and the parameter of interest can be estimated by fitting an exponential curve to each voxel sequence. Another approach [16] uses a set of well tailored steady state sequences, such that each voxel sequence encodes the relevant parameter values. Such techniques require the acquisition of multiple lines for multiple images, which is very challenging to achieve within a reasonable time and with an acceptable signal-to-noise ratio (SNR) and resolution.
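The per-voxel exponential fit described above can be sketched as follows (a hypothetical single-voxel example of ours; the echo times and ground-truth values are made up, and a log-linear least-squares fit stands in for a generic curve fit):

```python
import numpy as np

def fit_exponential(te, s):
    """Fit s = rho * exp(-te / T2) by least squares on log(s); returns (rho, T2)."""
    slope, intercept = np.polyfit(te, np.log(s), 1)
    return np.exp(intercept), -1.0 / slope

te = np.array([10.0, 30.0, 60.0, 100.0])   # echo times (ms), illustrative
rho_true, t2_true = 0.9, 80.0              # hypothetical ground truth
s = rho_true * np.exp(-te / t2_true)       # noiseless spin-echo magnitudes

rho_hat, t2_hat = fit_exponential(te, s)
```

On noiseless data the log-linear fit recovers $\rho$ and T2 exactly; with noise, a nonlinear least-squares fit would normally be preferred.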
+
+Recently there have been a number of papers attempting to address this problem taking a compressed sensing approach [5, 16, 33, 35]. All these techniques accelerate the parameter acquisition using an exponential fitting model combined with only partially sampling *k*-space for each image. Model-based optimization algorithms [20] are then used to retrieve the parameter values. However, while such approaches take their inspiration from compressed sensing and exploit sparse signal models, these techniques mainly focus on the development of novel reconstruction algorithms, and do not tackle the fundamental issue of how to design the acquisition in such a way as to meet the compressed sensing sampling criteria.
+
+In contrast to this previous body of work, here we will set out a principled compressed sensing approach to the simultaneous determination of all parameters of interest. That is, we will develop an acquisition framework that can be shown to satisfy the compressed sensing criteria thereby enabling us to develop a model based parameter estimation algorithm with exact recovery guarantees. The basis of our acquisition scheme is the recently proposed ‘magnetic resonance fingerprinting’ technique [29] which we describe next.
+
+**3. Magnetic Resonance Fingerprinting.** In the recent paper [29] a new type of MRI acquisition scheme is presented that enables the quantification of multiple tissue properties simultaneously through a single acquisition process. The procedure is composed of 4 key ingredients:
+
+1. The material magnetization is excited through a sequence of random RF pulses. There is no need to wait for the signal to return to equilibrium between pulses or for the response to reach a steady state condition as in other techniques.
+
+2. After each pulse the response is recorded through measurements in *k*-space. Due to the time constraints only a proportion of *k*-space can be sampled between each pulse. In [29] this is achieved through Variable Density Spiral (VDS) sampling.
+
+3. A sequence of magnetization response images is formed using gridding to approximate the least squares solution. These images suffer from significant aliasing due to the high level of undersampling.
+
+4. Parameter maps (proton density, $\rho$, T1, T2 and off-resonance, $\delta f$¹) are formed through a pattern matching algorithm that matches the alias-distorted magnetization response sequences per voxel to the response predicted from the Bloch equations.
+
+Below we develop the relevant mathematical models for the MRF acquisition system that will allow us to develop a full CS strategy for quantitative MRI.
+
+**3.1. Pulse excitation and the Bloch response manifold.** The MRF process is based upon an Inversion Recovery Steady State Free Precession (IR-SSFP) pulse sequence.² The dynamics of the magnetization for each voxel, assuming a single chemical composition, are described by the response of the Bloch equations when 'driven' by the excitation parameters.
+
+Let $i = 1, \dots, N$ index the voxels of the imaged slice. The MRF excitation generates a magnetization response that can be observed (or at least partially observed) at each excitation pulse. The magnetization at a given voxel at the $l$th echo time is then a function of the excitation parameters of the $l$th excitation pulse, the magnetization at the $(l-1)$th echo time, the overall magnetic field and the unknown parameters associated with the given voxel. The overall dynamics can be described by a parametrically excited linear system and are summarized in appendix A.
+
+The magnetization dynamics at voxel $i$ are parameterized by the voxel's parameter set $\theta_i = \{\text{T1}_i, \text{T2}_i, \delta f_i\} \in \mathcal{M}$, where $\mathcal{M} \subset \mathbb{R}^3$ denotes the set of feasible values for $\theta_i$, and the voxel's proton density, $\rho_i$. The magnetization response dynamics are also characterized by the excitation parameters of the $l$th pulse, namely the flip angle, $\alpha_l$, and the repetition time, TR$_l$.
+
+Now and subsequently we will denote the magnetization image sequence by the matrix $X$, with $X_{i,l}$ denoting the magnetization for voxel $i$ at the $l$th readout time. Note we are representing the response image at the $l$th readout by a column vector which we denote as $X_{:,l}$, using a Matlab style notation for indexing. Similarly, we will denote the magnetization response sequence for a given voxel $i$ as $X_{i,:}$.
+
+Given the initial magnetic field, the initial magnetization of any voxel is known up to the unknown scaling by its proton density $\rho_i$. Thus the magnetization response at any voxel can be written as a parametric nonlinear mapping from $\{\rho_i, \theta_i\}$ to the sequence, $X_{i,:}$:
+
+$$ (3.1) \qquad X_{i,:} = \rho_i B(\theta_i; \alpha, \text{TR}) \in \mathbb{C}^{1 \times L}. $$
+
+Here $\rho_i \in \mathbb{R}_+$ is the proton density at voxel $i$, $L$ is the excitation sequence length and $B$ is a smooth mapping induced by the Bloch equation dynamics: $B: \mathcal{M} \to \mathbb{C}^{1 \times L}$, where its smoothness can be deduced by the smooth dependence of the dynamics (A.1) and (A.3) with respect to $\theta_i$.
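For intuition, a deliberately simplified stand-in for the mapping $\theta_i \mapsto B(\theta_i; \alpha, \text{TR})$ might look as follows (the true dynamics are those of appendix A, which is not reproduced here; this sketch of ours assumes rotations about the x-axis, readout immediately after each pulse, and elementary relaxation and precession between pulses — all simplifying assumptions):

```python
import numpy as np

def bloch_response(t1, t2, df, alphas, trs):
    """Schematic magnetization response for one voxel (one row of X, up to rho)."""
    mx, my, mz = 0.0, 0.0, -1.0            # start from an inversion pulse
    out = []
    for a, tr in zip(alphas, trs):
        # RF pulse: rotation by flip angle a (radians) about the x-axis.
        my, mz = my * np.cos(a) + mz * np.sin(a), -my * np.sin(a) + mz * np.cos(a)
        out.append(mx + 1j * my)           # observe the transverse magnetization
        # Free precession over TR: T2 decay and off-resonance rotation ...
        m_xy = (mx + 1j * my) * np.exp(-tr / t2) * np.exp(2j * np.pi * df * tr)
        mx, my = m_xy.real, m_xy.imag
        # ... and T1 recovery of the longitudinal component towards equilibrium.
        mz = 1.0 + (mz - 1.0) * np.exp(-tr / t1)
    return np.array(out)

L = 10
resp = bloch_response(t1=800.0, t2=80.0, df=0.0,
                      alphas=np.full(L, np.pi / 4), trs=np.full(L, 12.0))
```

Even this toy model exhibits the property the text requires: changing the parameters (e.g. T1) changes the response sequence, which is what makes $\theta_i$ identifiable from $X_{i,:}$.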
+
+In order to be able to retrieve the Bloch parameters $\theta_i$ and proton density from $X_{i,:}$, it is necessary that the excitation sequence is “sufficiently rich” such that the voxel’s magnetization response (3.1) can be distinguished from a response with different parameters. Mathematically this means that there is an embedding of $\mathbb{R}_+ \times \mathcal{M}$
+
+¹The off-resonance frequency is an additional parameter that can be incorporated into the Bloch equations and measures local field inhomogeneity and chemical shift effects [25].
+
+²As the excitation pulses in MRF are random the term steady state is now somewhat of a misnomer and we should possibly call these Inversion Recovery Randomly Excited Free Precession.
+
+into $\mathbb{C}^L$.³ We will call $\mathcal{B} = B(\mathcal{M}; \alpha, \text{TR}) \subset \mathbb{C}^L$ the Bloch response manifold and denote its cone by $\mathbb{R}_+\mathcal{B}$.
+
+**REMARK 1.** Note that this component of the MRF procedure is not compressive, as the mapping (3.1) will typically need to map to a higher dimension than $\dim(\mathbb{R}_+ \times \mathcal{M})$ in order to induce an embedding. The primary role of the excitation sequence is therefore to ensure identifiability and this can typically be achieved through random excitation as is commonly used in system identification. We will see, however, that the excitation sequence will also need to induce a sufficiently persistent excitation in order for it to be observed in a compressive manner.
+
+**REMARK 2.** The aim of a good excitation sequence should be to minimize the time taken to acquire the necessary data rather than minimizing the total number of samples. To this end, the total acquisition time for the sequences, $\sum_l \text{TR}_l$, is the relevant cost. While more samples may be taken in MRF in comparison with other quantitative techniques, the benefit comes from not having to wait for the magnetization to relax to its equilibrium state between samples.
+
+**REMARK 3.** While it is clear that the proton density, $\rho_i$, will necessarily be real valued and non-negative, it is common practice in MRI to allow this quantity to absorb additional phase terms due to, for example, coil sensitivity or timing errors. Therefore $\rho_i$ is often allowed to take a complex value. In this work we will retain the idealized model, treating it as non-negative real; however, we note that the subsequent theory presented here can typically be easily modified to work with $\rho_i \in \mathbb{C}$ instead of $\rho_i \in \mathbb{R}_+$, albeit with an increase in the dimensionality of the unknown parameter set. We will highlight specific differences along the way.
+
+**3.2. MRF imaging and k-space sampling.** So far we have considered the signal model for a single voxel. For a complete spatial image, assuming a discretization into $N$ voxels and treating each voxel as independent we have $\theta \in \mathcal{M}^N$ and $\rho \in \mathbb{R}_+^N$. Similarly $X \in \mathbb{C}^{N \times L}$. We can therefore define the full response mapping, $X = f(\rho, \theta)$, $f: \mathbb{R}_+^N \times \mathcal{M}^N \to (\mathbb{R}_+B)^N \subset \mathbb{C}^{N \times L}$, as:
+
+$$ (3.2) \qquad X = f(\rho, \theta) = [\rho_1 B(\theta_1; \alpha, \text{TR}), \dots, \rho_N B(\theta_N; \alpha, \text{TR})]^T. $$
+
+Unfortunately, it is impractical to observe the full spatial magnetization (via k-space) at each repetition time within a sufficiently small time for the magnetization to remain approximately constant. It is therefore necessary to resort to some form of undersampling. Let us denote the observed sequence of k-space samples as $Y \in \mathbb{C}^{M \times L}$, such that the samples taken at the $l$th read out, $Y_{:,l} \in \mathbb{C}^M$ are given by:
+
+$$ (3.3) \qquad Y_{:,l} = P(l)FX_{:,l} $$
+
+where $F$ again denotes the 2D discrete Fourier transform and $P(l)$ is the projection onto a subset of coefficients measured at the $l$th read out (although the original MRF scheme used a sequence of spiral trajectories, for simplicity we will assume that the Fourier samples are only taken from a Cartesian grid). We can finally define the full linear observation map from the spatial magnetization sequence to the observation sequence as $Y = h(X)$ where $h$ is given by:
+
+$$ (3.4) \qquad Y = h(X) = [P(1)FX_{:,1}, \dots, P(L)FX_{:,L}]. $$
+
+Together (3.2) and (3.4) define the full MRF acquisition model from the parameter maps T1, T2, $\delta f$ and $\rho$ to the observed data $Y$.
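The observation map (3.3)–(3.4) and its adjoint (used below for gridding) can be sketched as follows (a minimal implementation of ours; random Cartesian masks stand in for the projections $P(l)$, which in the paper come from specific sampling trajectories):

```python
import numpy as np

n, L = 8, 3                                         # sqrt(N) and sequence length
rng = np.random.default_rng(1)
masks = [rng.random((n, n)) < 0.25 for _ in range(L)]  # P(l) as boolean masks

def h(X):
    """(3.3): sample the unitary 2D DFT of each frame at the masked locations."""
    return [np.fft.fft2(X[:, :, l], norm="ortho")[masks[l]] for l in range(L)]

def h_adj(Y):
    """Adjoint of h: zero-fill the unsampled locations and invert the DFT."""
    X = np.zeros((n, n, L), dtype=complex)
    for l in range(L):
        K = np.zeros((n, n), dtype=complex)
        K[masks[l]] = Y[l]                          # P(l)^T: zero-filling
        X[:, :, l] = np.fft.ifft2(K, norm="ortho")
    return X

X = rng.standard_normal((n, n, L)) + 1j * rng.standard_normal((n, n, L))
Y = h(X)
X_grid = h_adj(Y)                                   # the "gridding" estimate
```

Because $F$ is unitary and masking/zero-filling are adjoint operations, `h_adj` is the exact adjoint of `h`, which is the identity $\langle h(X), Y\rangle = \langle X, h^H(Y)\rangle$ used implicitly throughout the paper.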
+
+³Strictly speaking we can only consider this to be an embedding for $\rho_i > 0$ otherwise $\theta_i$ is not observable.
+
+**3.3. MRF matched filter reconstruction.** In [29] the image sequence is first reconstructed using the regridding method [13] which approximates the least squares estimate for $X_{:,l}$ given $Y_{:,l}$:
+
+$$ (3.5) \qquad \hat{X}_{:,l} = F^H P(l)^T Y_{:,l} $$
+
+or equivalently $\hat{X} = h^H(Y)$. Due to the high level of undersampling, each reconstructed image contains significant aliasing. However, it is argued in [29] that accurate estimates of the parameter maps can still be obtained by matching each voxel sequence to a predicted Bloch response sequence using a set of matched filters. This essentially averages the aliasing across the sequence, treating the aliasing as noise. While the technique provides impressive results, it ignores the main tenet of compressed sensing - that aliasing is interference and under the right circumstances can be completely removed (we explore this idea in detail in §4).
+
+Mathematically, it will be convenient to view the matched filter solution as the projection of the voxel sequence onto a discretization of the Bloch response manifold as follows.
+
+**3.3.1. Sampling the Bloch response manifold.** Suppose that we wished to approximate the projection of the sequence $X_{i,:}$ onto the cone of the Bloch response manifold. One way to do this is to first take a discrete set of samples of the parameter space, $\mathcal{M}$: $\theta^{(k)} = \{\text{T1}^{(k)}, \text{T2}^{(k)}, \delta f^{(k)}\}$, $k = 1, \dots, P$, and construct a 'dictionary' of magnetization responses, $D = \{D_k\}$, $D_k = B(\theta^{(k)}; \alpha, \text{TR})$, $k = 1, \dots, P$. The density of such samples controls the accuracy of the final approximation of the projection operator.
+
+We can similarly construct a look-up table (LUT) to provide an inverse for $B(\theta; \alpha, \text{TR})$ on the discrete samples such that $\theta^{(k)} = \text{LUT}_B(k)$.
+
+The projection onto the cone of the discretized response manifold, $D$, can then be calculated using:
+
+$$ (3.6) \qquad \hat{k}_i = \underset{k}{\operatorname{argmax}} \frac{\operatorname{real}\langle D_k, X_{i,:} \rangle}{\|D_k\|_2} $$
+
+to select the closest sample $D_{\hat{k}_i}$ and
+
+$$ (3.7) \qquad \hat{\rho}_i = \max\{\operatorname{real}\langle D_{\hat{k}_i}, X_{i,:} \rangle / \|D_{\hat{k}_i}\|_2^2, 0\} $$
+
+for the proton density, where the real and max operations are necessary to select only positive correlations since negative $\rho_i$ are not admissible.
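Equations (3.6) and (3.7) translate directly into code (a sketch of ours in which synthetic random atoms stand in for the Bloch dictionary):

```python
import numpy as np

rng = np.random.default_rng(2)
P, L = 50, 20
D = rng.standard_normal((P, L)) + 1j * rng.standard_normal((P, L))  # atoms D_k

def project_onto_cone(x, D):
    """Project the voxel sequence x onto the cone of the dictionary D."""
    norms = np.linalg.norm(D, axis=1)
    corr = (D.conj() @ x).real                 # real <D_k, x> for each atom
    k = int(np.argmax(corr / norms))           # (3.6): best-matching atom
    rho = max(corr[k] / norms[k] ** 2, 0.0)    # (3.7): non-negative scale
    return k, rho, rho * D[k]

k_true = 7
x = 2.5 * D[k_true]                            # noiseless voxel sequence
k_hat, rho_hat, x_proj = project_onto_cone(x, D)
```

By the Cauchy–Schwarz inequality the normalized correlation in (3.6) is maximized by the atom the sequence was generated from, so on noiseless model data the projection returns the sequence itself.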
+
+If we allow $\rho_i$ to be complex valued (see Remark 3) then the projection equations become:
+
+$$ (3.8) \qquad \hat{k}_i = \underset{k}{\operatorname{argmax}} \frac{|\langle D_k, X_{i,:} \rangle|}{\|D_k\|_2} $$
+
+and
+
+$$ (3.9) \qquad \hat{\rho}_i = \frac{\langle D_{\hat{k}_i}, X_{i,:} \rangle}{\|D_{\hat{k}_i}\|_2^2} $$
+
+Equations (3.8) and (3.9) are precisely the matched filter equations used in [29], applied to the distorted voxel sequences. We therefore see that one interpretation of matched filtering with the MRF dictionary model is to provide an approximate projection onto the cone of the Bloch response manifold for each voxel sequence.
+
+A summary of the full MRF parameter map recovery algorithm (with a real valued proton density model) is given in Algorithm 1.
+
+**Algorithm 1** MRF reconstruction
+
+Given: $Y$
+
+Reconstruct $X$:
+
+$$ \hat{X} = h^H(Y) $$
+
+MF parameter estimation:
+
+**for** i = 1 : N **do**
+
$$ \hat{k}_i = \underset{k}{\operatorname{argmax}} \operatorname{real}\langle D_k, \hat{X}_{i,:} \rangle / \|D_k\|_2 $$
+
+$$ \hat{\theta}_i = \operatorname{LUT}_B(\hat{k}_i) $$
+
+$$ \hat{\rho}_i = \max\{0, \operatorname{real}\langle D_{\hat{k}_i}, \hat{X}_{i,:} \rangle / \|D_{\hat{k}_i}\|_2^2\} $$
+
+**end for**
+
+**Return:** $\hat{\theta}, \hat{\rho}$
+
+**Computational cost and accuracy.** Given that the discretized MRF dictionary can be very large (≈ 500,000 samples in [29]), it is useful to consider the computational complexity of the above calculations as a function of parameter accuracy as this is the major computational bottleneck that we will encounter.
+
+The accuracy with which we can estimate the parameters for a given voxel will depend on the accuracy of the approximate projection operator and the Lipschitz constants of the inverse mapping, $LUT_B$. We can achieve an approximate projection by generating an $\epsilon$-cover of $B$ with $D_k$. As the dimension of $B$ is 3, this requires choosing $P \sim C\epsilon^{-3}$ atoms in our dictionary. Furthermore, as the projection operation described in (3.6) takes the form of a nearest neighbour search, we can use fast nearest neighbour search strategies, such as the cover tree method [5], to quickly solve (3.6) in $O(L \ln(1/\epsilon))$ computations per voxel, instead of the $O(L\epsilon^{-3})$ necessary for exhaustive search. This effectively makes the speed of each application of $D$ on a par with that of a traditional fast transform. Similarly, the approximate inverse using $LUT_B$ can also be computed in $O(\ln(1/\epsilon))$.
+
+We could also consider enhancing such an estimate by exploiting the smoothness of the response manifold, either by using local linear approximations of the manifold [23] or by further locally optimizing the projection numerically around the selected parameter set, once we are assured global convergence. Such an enhancement could allow either for increased accuracy or reduced computation through the use of fewer parameter samples, however, we do not pursue these ideas further here.
+
+**4. Compressed Quantitative Imaging.** In order to generate a full compressed sensing framework for MRF we will identify sufficient conditions on the excitation pulse sequences and the *k*-space sampling, along with a suitable reconstruction algorithm, to guarantee recovery of the parameter maps from the observed *k*-space samples. As the dimension of our problem is large, $\dim((\mathbb{R}_+ \times \mathcal{M})^N) = 4N$, we do not consider the manifold reconstruction algorithms in [23] as these scale poorly with the dimension of the manifold. Instead, we propose a CS solution based around the iterative projection algorithm of Blumensath [11] which we will see has computational cost that is linear in the voxel dimension. Our approach, which we call BLIP (BLoch response recovery via Iterated Projection), has three key ingredients: a random pulse excitation sequence following the original MRF technique; a random subsampling strategy that can be shown to induce a low distortion embedding of $\mathbb{R}_+^N \times \mathcal{M}^N$; and an efficient iterated projection algorithm [11] that imposes consistency with the Bloch equations. Moreover, the projection operation is the same nearest neighbour search described in section 3.3.1.
+
+We first describe the iterative projection method and then consider the implications for the appropriate excitation and sampling strategies.
+
+**4.1. Reconstruction by Iterated Projection.** In [11] a general reconstruction algorithm, the Projected Landweber Algorithm (PLA) was proposed as an extension of the popular Iterated Hard Thresholding Algorithm [7, 9]. PLA is applicable to arbitrary union of subspace models as long as we have access to a computationally tractable projection operator onto the union of subspace model within the complete signal space. The algorithm is given by:
+
+$$
(4.1) \qquad X^{(n+1)} = \mathcal{P}_A(X^{(n)} + \mu h^H (Y - h(X^{(n)})))
+$$
+
+where $\mathcal{P}_A$ is the orthogonal projection onto the signal model $\mathcal{A}$ such that
+
+$$
+(4.2) \qquad \mathcal{P}_A(X) = \underset{\tilde{X} \in \mathcal{A}}{\operatorname{argmin}} \|X - \tilde{X}\|_F
+$$
+
+and $\mu$ is the step size.
+
+The current theory for PLA [11] states that a sufficient condition for stable recovery of $X$ given $Y$ is that $h$ is a stable embedding - a so-called Restricted Isometry Property (RIP) or bi-Lipschitz embedding - for the signal model, $\mathcal{A}$. A mapping, $h$, is said to have the RIP (be a bi-Lipschitz embedding) for the signal model $\mathcal{A}$ if there exists a sufficiently small constant $\delta > 0$ such that:
+
+$$
+(4.3) \quad (1-\delta)\|X - \tilde{X}\|_2^2 \le \frac{N}{M} \|h(X - \tilde{X})\|_2^2 \le (1+\delta)\|X - \tilde{X}\|_2^2
+$$
+
+for all pairs $X$ and $\tilde{X}$ in $\mathcal{A}$. How to achieve such an embedding will be considered
+later in section 4.2.
+
+The theory [11] states that it is sufficient that $h$ satisfies the RIP with $\frac{M}{N}(1+\delta) < 1/\mu < \frac{3M}{2N}(1-\delta)$ for guaranteed recovery. If $h$ is essentially ‘optimal’, e.g. a random ortho-projector, then we should set the step size $\mu \approx N/M$ since in the large system limit $\delta \to 0$.
+
+For our compressed sensing scenario the signal model $\mathcal{A}$ is the product set $(\mathbb{R}_{+} \mathcal{B})^N$ or, more precisely, its discrete approximation $(\mathbb{R}_{+} D)^N$, and the projection operator $\mathcal{P}_{\mathcal{A}}$ can be realized by separately projecting the individual voxel sequences $X_{i,:}$ onto the cone of the Bloch response manifold using equations (3.6) and (3.7). Although $(\mathbb{R}_{+} \mathcal{B})^N$ is not itself a union of subspace model it can easily be extended to $(\mathbb{R} \mathcal{B})^N$, which forms an uncountably infinite union of lines (1D subspaces). In fact, the theory of [11] does not require $\mathcal{A}$ to be a union of subspaces and is directly applicable to $\mathcal{A} = (\mathbb{R}_{+} \mathcal{B})^N$. We therefore have all the ingredients for a full compressed sensing recovery algorithm, summarized in Algorithm 2.
+
+**REMARK 4.** Note that the above procedure has separated out the parameter map estimation (by inverting the estimated Bloch responses) and the reconstruction of the magnetization image sequence (via the PLA). Indeed, as long as the partial k-space sampling provides a bi-Lipschitz embedding for all possible magnetization responses then the CS component of the imaging is well defined even if the Bloch response is not invertible.
+
+**4.1.1. Step size selection.** Selection of the correct step size is crucial in order to attain good performance from these iterative projection based algorithms [10, 11]. Note that the original parameter estimation in [29] can be interpreted as an application of a single iteration of PLA with a step size $\mu = 1$, and iterating PLA with this step size tends to deliver only a modest improvement over the matched filter (single iteration). The matched filter also has the effect of underestimating the magnitude of $X$, and hence also the proton density map, as $h$ tends to shrink vectors uniformly (when it provides a stable embedding).
+
+**Algorithm 2** BLoch response recovery via Iterative Projection (BLIP)
+
+**Given:** $Y$
+
+**Initialization:** $X^{(0)} = 0$, $\mu = N/M$
+
+Image sequence reconstruction:
+
+**for** $n = 1; n := n + 1$ **until** stopping criterion **do**
+
+Gradient step:
+
+**for** $l = 1 : L$ **do**
+
+$$X_{:,l}^{(n+1/2)} = X_{:,l}^{(n)} + \mu F^H P(l)^T (Y_{:,l} - P(l)F X_{:,l}^{(n)});$$
+
+**end for**
+
+Projection step:
+
+**for** $i = 1 : N$ **do**
+
+$$\hat{k}_i = \underset{k}{\operatorname{argmax}} \operatorname{real}\langle D_k, X_{i,:}^{(n+1/2)} \rangle / \|D_k\|_2$$
+
+$$\hat{\rho}_i = \max\{0, \operatorname{real}\langle D_{\hat{k}_i}, X_{i,:}^{(n+1/2)} \rangle / \|D_{\hat{k}_i}\|_2^2\}$$
+
+$$X_{i,:}^{(n+1)} = \hat{\rho}_i D_{\hat{k}_i}$$
+
+**end for**
+
+**end for**
+
+Parameter map estimation:
+
+**for** $i = 1 : N$ **do**
+
+$$\hat{\theta}_i = \mathrm{LUT}_{\mathcal{B}}(\hat{k}_i)$$
+
+**end for**
+
+**Return:** $\hat{\theta}, \hat{\rho}$
+
+In contrast, when using the substantially more aggressive step size proposed by the theory we will see that significant improvements are observed in signal recovery and often in a very small number of iterations.
+
+In practice, it is also beneficial to select the step size for PLA adaptively to ensure stability. Following the work on adaptive step size selection for IHT [10] we adopt the following heuristic. We begin each iteration by choosing $\mu = N/M$ as is suggested from the CS theory. Then after calculating a new proposed value for $X^{n+1}$ we calculate the quantity:
+
+$$ (4.4) \qquad \omega = \kappa \frac{\|X^{n+1} - X^n\|_2^2}{\|h(X^{n+1} - X^n)\|_2^2} $$
+
+for some $\kappa < 1$. If $\mu > \omega$ we reject this update, shrink the step size, $\mu \mapsto \mu/2$ and calculate a new proposed value for $X^{n+1}$. As with the Normalized IHT algorithm [10], this form of line search is sufficient to ensure convergence of the algorithm irrespective of conditions on the measurement operator, and we will use this form of step size selection in all subsequent experiments.
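The safeguard (4.4) in isolation might look as follows (a toy stand-in of ours for $h$; in the full algorithm a new proposal $X^{n+1}$ is computed after each shrink, which we omit here for brevity):

```python
import numpy as np

rng = np.random.default_rng(5)
A = rng.standard_normal((20, 50))        # stand-in for the measurement map h

def omega(x_new, x_old, kappa=0.9):
    """The quantity (4.4) with kappa < 1."""
    d = x_new - x_old
    return kappa * np.dot(d, d) / np.dot(A @ d, A @ d)

mu = 50 / 20                             # initial step mu = N/M from the theory
x_old = rng.standard_normal(50)
x_new = rng.standard_normal(50)          # a proposed update
while mu > omega(x_new, x_old):
    mu /= 2                              # reject and shrink until mu <= omega
```

Since $\omega > 0$ for any nonzero update direction, the halving loop always terminates, which is what guarantees the line search cannot stall.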
+
+**4.2. Strategies for subsampling k-space.** We now consider what properties of the excitation response sequences and the k-space sampling pattern will ensure that the sufficient RIP conditions in the PLA theory are satisfied.
+
+First note that, as the signal model treats each voxel as independent, we need to take at least $N \dim(\mathbb{R}_+ \times \mathcal{M})$ measurements as this is the dimension of our model.
+
+Furthermore, since we only take a small number of measurements at each repetition time, we cannot expect to achieve a stable embedding without imposing further constraints on the excitation response. For example, if the embedding was induced in the first few repetition times and all further responses were non-informative we would not have taken sufficient measurements from the informative portion of the response. Therefore we consider responses that somehow spread the information across the repetition times. We will assume that the excitation sequence induces an embedding for the response map (3.2) (here random sequences seem to suffice), and identify additional conditions that enable us to develop a random $k$-space subsampling strategy with an appropriate RIP condition. Our approach will follow the technique of random sampling as is common in compressed sensing measurement design, along with a pre-conditioning technique that has been used in the Fast Johnson-Lindenstrauss Transform [2] and in spread spectrum compressed sensing [33]. It is also reminiscent of Rauhut's bounded orthonormal systems [34] and has a similar aim of ensuring that information is sufficiently spread within the measurement domain.
+
+The key vectors of interest are those that discriminate between pairs of possible signals within our model, namely the chords of $\mathbb{R}_+\mathcal{B}$, which are the vectors of the form $u = X_{i,:} - \tilde{X}_{i,:}$ with $X_{i,:}, \tilde{X}_{i,:} \in \mathbb{R}_+\mathcal{B}$ and $X_{i,:} \neq \tilde{X}_{i,:}$. We will quantify the pre-conditioning requirement for the excitation response through the flatness of such vectors which we define as follows.
+
+**DEFINITION 1.** Let $U$ be a collection of vectors {$u$} in $\mathbb{C}^L$. We denote the flatness, $\lambda$, of these vectors by:
+
+$$ (4.5) \qquad \lambda := \max_{u \in U} \frac{\|u\|_{\infty}}{\|u\|_2}. $$
+
+Note that from standard norm inequalities $L^{-1/2} \le \lambda \le 1$.
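Computing the flatness (4.5) directly on two extreme vectors illustrates both bounds (a synthetic example of ours):

```python
import numpy as np

def flatness(U):
    """The flatness (4.5): max over U of ||u||_inf / ||u||_2."""
    return max(np.linalg.norm(u, np.inf) / np.linalg.norm(u, 2) for u in U)

L = 64
flat = np.exp(2j * np.pi * np.arange(L) / L)   # constant modulus: maximally flat
spiky = np.zeros(L)
spiky[0] = 1.0                                  # all energy in one entry

lam_flat = flatness([flat])                     # attains the lower bound 1/sqrt(L)
lam_spiky = flatness([spiky])                   # attains the upper bound 1
```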
+
+We will consider the chords of an excitation response to be sufficiently flat up to a log penalty if $\lambda \sim L^{-1/2} \log^{\alpha} L$ for $U = \{\mathbb{R}_+\mathcal{B} - \mathbb{R}_+\mathcal{B}\} \setminus \{0\}$.
+
+In constructing our measurement function we also note that the signal model contains no spatial structure, and therefore we should expect to have to uniformly sample $k$-space in order to achieve a sufficient RIP. Note this is in contrast with the variable density sampling strategy proposed by [29] which concentrated samples at the centre of $k$-space. It turns out that we can achieve this using a remarkably simple random subsampling pattern based on multi-shot Echo-planar Imaging (EPI) [30].
+
+Let $F \in \mathbb{C}^{N \times N}$ denote the 2D discrete Fourier transform (assuming an image size of $\sqrt{N} \times \sqrt{N}$) with $F_{i,:}, i = 1, \dots, N$ denoting the $N$ 2D discrete Fourier basis vectors associated with the spatial frequencies $k_x(i), k_y(i) \in \{0, \dots, \sqrt{N}-1\}$. Without loss of generality we assume that the vectors are ordered such that $k_x(i) = (i-1) \mod \sqrt{N}$, and $k_y(i) = \lfloor (i-1)/\sqrt{N} \rfloor$. We can now define a random Echo-Planar Imaging measurement operator by $Y_{:,l} = P(\zeta_l) F X_{:,l}$, where $\zeta_l$ is a sequence of independent random variables uniformly drawn from $\{0, \dots, p-1\}$ and $P(\zeta) \in \mathbb{R}^{M \times N}$ is defined as follows:
+
+$$ (4.6) \qquad P_{i,j} = \begin{cases} 1 & \text{if } j = i + \sqrt{N}(\zeta + (p-1)\lfloor(i-1)/\sqrt{N}\rfloor) \\ 0 & \text{otherwise.} \end{cases} $$
+
+where for convenience we have assumed that $N$ is exactly divisible by $p$ so that $M = N/p$ is an integer. In words, we uniformly subsample $k_y$ by a factor of $p$ with random shifts across time in $k_y$ of the set of $k$-space samples. This is illustrated in figure 4.1.
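The subsampling pattern (4.6) reduces to a simple mask construction (our own sketch over an $n \times n$ grid with $n = \sqrt{N}$; variable names are illustrative):

```python
import numpy as np

def epi_mask(n, p, zeta):
    """Boolean (n, n) mask over (k_y, k_x): keep rows zeta, zeta + p, ..."""
    mask = np.zeros((n, n), dtype=bool)
    mask[zeta::p, :] = True          # every p-th k_y line, fully sampled in k_x
    return mask

n, p = 16, 4                         # sqrt(N) and the undersampling factor
rng = np.random.default_rng(4)
zetas = rng.integers(0, p, size=3)   # one random shift zeta_l per readout
masks = [epi_mask(n, int(z), p) if False else epi_mask(n, p, int(z)) for z in zetas]
```

Each readout thus keeps exactly $N/p$ samples, with the sampled $k_y$ lines shifting randomly from one repetition time to the next.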
+
+FIG. 4.1. The plot shows an instance of random EPI k-space sampling for three time frames: red, green and blue respectively. A colored pixel indicates that the $(k_x, k_y)$ frequency is sampled at the associated time frame through the projection operator, $P(\zeta_l)$. In this instance $p = 16$.
+
+Random EPI, along with an excitation response with appropriate chord flatness, $\lambda$, is then sufficient to provide us with a measurement operator, $h$, that is a bi-Lipschitz embedding on our signal model. In appendix B we prove the following theorem:
+
+**THEOREM 1 (RIP for random EPI).** Given an excitation response cone, $\mathbb{R}_+\mathcal{B}$, of dimension $d_B$, whose chords have a flatness $\lambda$ and a random EPI operator $h$: $(\mathbb{R}_+\mathcal{B})^N \to \mathbb{C}^{M\times L}$, then, with probability at least $1 - \eta$, $h$ is a restricted isometry on $(\mathbb{R}_+\mathcal{B})^N - (\mathbb{R}_+\mathcal{B})^N$ with constant $\delta$ as long as:
+
+$$ (4.7) \qquad \lambda^{-2} \ge C \delta^{-2} p^2 d_B \log(N/\delta\eta) $$
+
+for some constant $C$ independent of $p, N, d_B, \delta$ and $\eta$.
+
+Specifically, if $\lambda = O(L^{-1/2} \log^\alpha L)$ then we require:
+
+$$ (4.8) \qquad L = O(\delta^{-2} p^2 d_B \log(N/\delta\eta) \log^\alpha(L)) $$
+
+excitation pulses. While we might hope to get $L$ of the order of $pd_B$, it appears that this is not possible, at least for a worst case RIP analysis based on the flatness criterion alone. Indeed, in the experimental section we will provide evidence to suggest that $L \sim p^2$ is the scaling behaviour that we empirically observe.
+
+**REMARK 5.** It might seem surprising that the proposed scheme uses uniform random sampling in k-space, whereas it is usually advisable to use a variable density sampling strategy in compressed sensing for MRI. Indeed, there is good theoretical justification for variable density sampling patterns [1, 32]. Our theory above is not inconsistent with such results. Variable density sampling is advantageous because the underlying signal model (sparsity in the wavelet domain) is not incoherent
+---PAGE_BREAK---
+
+with the Fourier basis [32, 1]. However, the Fourier basis is incoherent with a voxel-wise signal model as used above. This is not to say that spatial structure cannot be effectively exploited within a compressed quantitative imaging scheme or that variable density sampling would not then be of benefit. However, as the basic MRF based model does not exploit spatial structure we argue that uniform random sampling is appropriate here.
+
+The challenge of incorporating spatial regularity into the signal model is discussed next.
+
+**4.3. Extending the Bloch response model.** Our current compressed sensing model takes no account of additional structure within the parameter maps. This structure could, for example, be the piecewise smoothness of the parameter maps or the magnetization response maps, or an imposed segmentation of the image into different material compositions. In general, it is not clear how such additional regularization can be included in a principled manner, although many heuristic approaches could of course be adopted, as for example in [17]. This is because the parameter values are encoded within the samples of the Bloch response manifold, and therefore the spatial regularity would need to be mapped through the Bloch response leading to a non-separable high dimensional nonlinear signal model.
+
+The one exception, which we consider here, is the regularization of the proton density map, or at least a close relative. We note, however, that in this instance the theory relies on the real non-negative proton density model and does not directly extend to the complex case.
+
+Let us define the *pseudo-density*, $\tilde{\rho}$, as the proton density map scaled by the norm of the Bloch response vector, so that:
+
+$$ (4.9) \qquad \tilde{\rho}_i = \rho_i \|B(\theta_i; \alpha, \text{TR})\|_2. $$
+
+Similarly we can define the normalized Bloch response as:
+
+$$ (4.10) \qquad \eta_{i,:} = \tilde{B}(\theta_i; \alpha, \text{TR}) \triangleq B(\theta_i; \alpha, \text{TR}) / \|B(\theta_i; \alpha, \text{TR})\|_2 $$
+
+and the normalized Bloch response manifold, $\tilde{\mathcal{B}}$ as:
+
+$$ (4.11) \qquad \tilde{\mathcal{B}} = \{ \eta_{i,:} = \tilde{B}(\theta_i; \alpha, \text{TR}) \text{ for some } \theta_i \in \mathcal{M} \} $$
+
+The pseudo-density will be roughly the same as the density, as long as the Bloch response sequences are all of approximately the same magnitude. The transform to $\{\tilde{\rho}, \eta\}$ normalizes the manifold $\tilde{\mathcal{B}}$ so that we can more easily calculate projections onto product signal models of the form $\{\tilde{\rho}, \eta\} \in \Sigma \times \tilde{\mathcal{B}}^N$, where $\Sigma$ denotes the set of spatially regularized pseudo-density maps. To do this we will find the following proposition useful:
+
+PROPOSITION 1. Given $X \in \mathbb{C}^{N \times L}$, suppose that the projection onto the signal model $\Sigma \times \tilde{\mathcal{B}}^N$ is given by $\hat{\rho} \in \Sigma$ and $\hat{\eta}_{i,:} \in \tilde{\mathcal{B}}$, and that it results in $\hat{\rho}_i \ge 0$ for all $i$; then:
+
+$$ (4.12) \qquad \hat{\eta}_{i,:} = \underset{\eta_{i,:} \in \tilde{\mathcal{B}}}{\operatorname{argmax}} z_i $$
+
+and
+
+$$ (4.13) \qquad \hat{\rho} = \underset{\tilde{\rho} \in \Sigma}{\operatorname{argmin}} \| \tilde{\rho} - z \|_2^2 $$
+---PAGE_BREAK---
+
+where $z_i = \text{real}\langle\eta_{i,:}, X_{i,:}\rangle$.
+
+*Proof.* By definition of the orthogonal projection we have:
+
+$$ (4.14) \qquad \{\hat{\eta}, \hat{\rho}\} = \operatorname*{argmin}_{\eta, \tilde{\rho}} \sum_i \sum_j |X_{i,j} - \tilde{\rho}_i \eta_{i,j}|^2 $$
+
+Expanding (4.14), substituting in $z_i$ and noting that $\|\eta_{i,:}\|_2 = 1$, we have:
+
+$$ (4.15) \qquad \{\hat{\eta}, \hat{\rho}\} = \underset{\eta_{i,:} \in \tilde{\mathcal{B}}, \tilde{\rho} \in \Sigma}{\operatorname{argmin}} \sum_i (\tilde{\rho}_i^2 - 2\tilde{\rho}_i z_i). $$
+
+By assumption $\hat{\rho}_i$ is non-negative, so the expression is minimized with respect to $\eta_{i,:}$ by (4.12) independently of $\tilde{\rho}_i$. Finally we note that (4.13) holds since:
+
+$$ (4.16) \qquad \sum_i (\tilde{\rho}_i^2 - 2\tilde{\rho}_i z_i) = \| \tilde{\rho} - z \|_2^2 + \text{const.} $$
+
+□
+
+One way to impose spatial regularity on $\tilde{\rho}$ is to force it to be sparse in the wavelet domain for some appropriate orthogonal wavelet representation, $W$, such that $c = W\tilde{\rho}$. In this case, the projection (4.13) can be written as $\hat{\rho} = W^T\hat{c}$ with:
+
+$$ (4.17) \qquad \hat{c} = H_k(Wz) $$
+
+where $H_k$ denotes an element-wise hard thresholding [7, 9] that retains only the largest $k$ elements.
+
+Under the non-negativity assumption the projection operator can be formed by applying (4.12) followed by (4.17). This results in a simple algorithm for incorporating a degree of spatial regularization within the compressed quantitative imaging framework. In the next section we will see, however, that the inclusion of this additional spatial constraint adds little to the performance of the compressed sensing approach, suggesting that the Bloch equation constraint dominates the performance.
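The two-step projection, (4.12) followed by (4.17), can be sketched in numpy as follows. This is an illustrative sketch under our own simplifications: the dictionary `D` holds unit-norm atoms as rows, and the spatial transform `W` is passed in as a dense orthonormal matrix (the paper uses a Haar wavelet transform); function and variable names are ours.

```python
import numpy as np

def project_model(X, D, W, k):
    """Two-step projection onto Sigma x B~^N: matched filter (4.12)
    followed by hard thresholding (4.17) of the pseudo-density.
    D: (n_atoms, L) unit-norm rows; W: (N, N) orthonormal basis."""
    Z = np.real(X @ D.conj().T)                 # z_i for every atom
    best = np.argmax(Z, axis=1)                 # (4.12): best atom per voxel
    z = np.maximum(Z[np.arange(len(X)), best], 0.0)  # non-negative correlations
    c = W @ z                                   # transform coefficients
    idx = np.argsort(np.abs(c))[:-k]            # all but the k largest
    c[idx] = 0.0                                # hard threshold H_k
    rho = W.T @ c                               # (4.13): regularized pseudo-density
    return rho, best
```

With `W` the identity and `k = N` this reduces to the plain voxel-wise projection of section 4.1.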
+
+**REMARK 6.** The above calculation is only guaranteed to be valid when the resulting pseudo-density map is non-negative. In theory, applying such an operator when we incur negative values of pseudo-density could give a projection that is not optimal. However, in practice we have found that this is not a problem as we always impose non-negativity on both the pseudo-density and the correlations with the Bloch response, $z_i$, in order to ensure that the projection is physically meaningful.
+
+**5. Experiments.** In order to test the efficacy of BLIP for compressed quantitative imaging we performed a set of simulations using an anatomical brain phantom, segmented into various material compositions. This provided a well defined ground truth and enabled us to demonstrate image sequence recovery and parameter map estimation as a function of the $k$-space subsampling factor and the excitation sequence lengths.
+
+**5.1. Experimental Set up.** The key ingredients of the experimental set up are described below.
+
+**Anatomical Brain Phantom.** To develop realistic simulations that also provide a solid ground truth we have adapted the anatomical brain phantom of [15], available at the BrainWeb repository [12]. A 217 × 181 slice (slice 40) of the crisp
+---PAGE_BREAK---
+
+TABLE 5.1
+Tissue types used from MNI segmented brain phantom
+
+| Tissue | index | proton density | T1 (ms) | T2 (ms) |
+|---|---|---|---|---|
+| Background | 0 | 0 | - | - |
+| CSF | 1 | 100 | 5012 | 512 |
+| Grey matter | 2 | 100 | 1545 | 83 |
+| White matter | 3 | 80 | 811 | 77 |
+| Adipose | 4 | 80 | 530 | 77 |
+| Skin/Muscle | 5/6 | 80 | 1425 | 41 |
+
+segmented anatomical brain was used and restricted to contain only 6 material components, listed in table 5.1. The phantom was further zero-padded to make a 256 × 256 image to simplify the computations. Since we are using the crisp segmentation the model is somewhat idealized and does not address inaccuracies associated with partial volume effects or many of the other issues with real MRI. However, it serves as a useful test bed to provide a good proof-of-concept for our proposed techniques.
+
+The material properties were chosen to be both representative of the correct tissue type [21] and challenging: the proton densities were fixed to give little discrimination for individual parameters and were set so that there is not an exact match to the sampling of the Bloch response manifold.
+
+The segmented brain is shown, colored by index, in figure 5.1.
+
+FIG. 5.1. The MNI segmented anatomical brain phantom [15] colored by index: 0 = Background, 1 = CSF, 2 = Grey Matter, 3 = White Matter, 4 = Fat, 5 = Muscle/Skin, 6 = Skin.
+---PAGE_BREAK---
+
+FIG. 5.2. Left: examples of the response differences for pairs of tissue types given in table 5.1 when using IR-SSFP pulse sequence excitation with random flip angles. Right: $λ^{-2}/L$ as a function of sequence length for the response differences plotted on the left. From this plot it can be deduced that $λ^{-2}$ grows roughly proportionally to $L$.
+
+**Pulse excitation.** For the excitation sequences we use IR-SSFP sequences (exemplar code can be found in the supplementary material of [29]) with random flip angles drawn from an independent and identically distributed Gaussian distribution:
+
+$$ (5.1) \qquad \alpha_l \sim N(0, \sigma_\alpha^2) $$
+
+with a standard deviation, $\sigma_\alpha = 10$ degrees. The repetition times were uniformly spaced at an interval of 10 ms. While we also experimented with randomizing repetition times, we did not find that these significantly changed the performance of the techniques. Constant repetition time intervals also mean that we can directly assess the imaging speed in terms of the sequence length, $L$.
+
+The value of $\sigma_\alpha$ was chosen empirically to provide reasonable persistence of excitation for the expected T1 and T2 responses. Figure 5.2 (left) shows the magnitude of the response differences for the set of tissue types listed in table 5.1. It can be seen that the difference in the responses does indeed persist over time. Using these differences we can also estimate their flatness. Figure 5.2 (right) shows how the flatness varies as a function of sequence length. We see that $λ^{-2}$ roughly scales proportionally to $L$, as desired, with a slight downward sublinear trend.
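The excitation schedule described above can be sketched directly; the function name and the seeding are our own illustrative choices.

```python
import numpy as np

def random_flip_angles(L, sigma_deg=10.0, rng=None):
    """i.i.d. Gaussian flip angles, cf. (5.1), with sigma_alpha in degrees."""
    rng = np.random.default_rng(rng)
    return np.deg2rad(rng.normal(0.0, sigma_deg, size=L))

alphas = random_flip_angles(500, rng=0)   # sigma_alpha = 10 degrees
TRs = np.full(500, 10e-3)                 # constant 10 ms repetition interval
```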
+
+**Discretized Bloch response.** The Bloch response manifold was sampled in a similar manner to [29]; however, to simplify things we have only considered variation in T1 and T2 here, assuming the off-resonance frequency is equal to zero. Similar to [29], discrete samples for T1 were selected between 100 ms and 2000 ms in increments of 20 ms, and from 2300 ms to 6000 ms in increments of 300 ms. T2 was sampled between 20 ms and 100 ms in increments of 5 ms, from 110 ms to 200 ms in increments of 20 ms, and from 400 ms to 1000 ms in increments of 200 ms. This results in a dictionary of size 3379 × L. This range of T1 and T2 values clearly spans the anticipated range for the tissue types listed in table 5.1.
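The quoted sampling grid can be written out directly. Note this is only a sketch of the stated ranges: the exact endpoint conventions, and hence the total dictionary size, may differ from the original implementation.

```python
import numpy as np

# T1/T2 sample grids (in ms) following the ranges quoted above
t1_grid = np.concatenate([np.arange(100, 2001, 20),
                          np.arange(2300, 6001, 300)])
t2_grid = np.concatenate([np.arange(20, 101, 5),
                          np.arange(110, 201, 20),
                          np.arange(400, 1001, 200)])

# one (T1, T2) pair per dictionary atom
pairs = np.array([(t1, t2) for t1 in t1_grid for t2 in t2_grid])
```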
+
+**Subsampling strategy.** For the *k*-space subsampling we use the random EPI sampling scheme detailed in section 4.2. Specifically, we fully sample the *k*-space in the $k_x$ direction while regularly subsampling the $k_y$ direction by a factor of *p*. This deterministic sampling pattern was then cyclically shifted by a random number of $k_y$ lines at each repetition time. In most experiments *p* is set to 16 (sampling at 6.25% of Nyquist).
+---PAGE_BREAK---
+
+**5.1.1. Reconstruction algorithms.** In the experiments below we compare three distinct algorithms for reconstructing the magnetization image sequences. These are: (1) the original MRF algorithm; (2) the BLIP algorithm presented in Algorithm 2; and (3) BLIP with spatial regularization as detailed in section 4.3. For both iterative algorithms we use the adaptive step size strategy set out in section 4.1.1 with $\kappa = 0.99$. For the spatial regularization we use a Haar wavelet representation with hard thresholding as detailed in section 4.3, retaining only the largest 12000 wavelet coefficients at each iteration.
+
+As the MRF algorithm (with step size equal to 1) underestimates the value of the image sequence (and also the proton density) we include in the appropriate plots the performance of a rescaled MRF algorithm where the step size is $\mu = N/M$.
+
+Finally, in some of the plots we also include the performance for an oracle estimator. This oracle is given the fully sampled image sequence data as an input and then projects each voxel sequence onto the discretized Bloch response. In this way we can differentiate between errors associated with the Bloch response discretization and the image sequence reconstruction.
+
+**5.2. Results.** All the experiments were evaluated using a signal-to-error ratio (SER) in decibels (dB), calculated as $20 \log_{10} \frac{\|x\|_2}{\|x - \hat{x}\|_2}$ for a target signal $x$ with estimate $\hat{x}$. For T1 and T2 this corresponds to the measures $T_1$NR and $T_2$NR that have been used to gauge the efficiency of relaxation time acquisition schemes [16]. To avoid spurious estimates associated with empty voxels, the errors are calculated only over regions with a non-zero proton density value.
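The SER metric is straightforward to compute; below is a small helper with an optional mask restricting the error to non-empty voxels, as described above. The function name is ours.

```python
import numpy as np

def ser_db(x, x_hat, mask=None):
    """Signal-to-error ratio 20*log10(||x|| / ||x - x_hat||) in dB;
    an optional boolean mask restricts the computation to a region."""
    x, x_hat = np.asarray(x, float), np.asarray(x_hat, float)
    if mask is not None:
        x, x_hat = x[mask], x_hat[mask]
    return 20.0 * np.log10(np.linalg.norm(x) / np.linalg.norm(x - x_hat))
```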
+
+In all experiments, unless stated otherwise, the following parameters were used: the undersampling ratio for the operator $h(\cdot)$ was fixed at 1/16 and for both the iterative algorithms a maximum of 20 iterations was allowed, though in many cases fewer iterations would have sufficed.
+
+**5.2.1. Performance as a function of excitation sequence length.** Our first experiment evaluates the performance of the algorithms in terms of the sequence length, which was varied between 10 and 1000 pulses. Here we can separately evaluate the performance of the compressed sensing component and the recovery of the parameter maps.
+
+The compressed sensing recovery performance, evaluated by the SER of the image sequence reconstruction, $X$, is shown in figure 5.3 (a).
+
+First, note that the strange behaviour of the oracle estimator for small sequence lengths is probably due to the failure of $f(\cdot)$ to achieve a low-distortion embedding. This would make it easier to approximate voxel sequences with a given element of the discretized Bloch response. Beyond this the performance reaches a plateau at approximately SER = 27 dB, which can be considered to be the error associated with the discretization of the Bloch response.
+
+The performance of both BLIP algorithms is roughly equivalent. They both sharply increase in performance at a sequence length of 100 and then tend to a plateau beyond this with an SER of about 0.5 dB below that of the oracle estimator. This suggests that we can achieve near perfect compressed sensing reconstruction with a sequence containing as few as 100 pulses. In this simulation there was no significant gain from the additional inclusion of the spatial regularization.
+
+The performance of MRF is significantly worse. We first highlight that the non-rescaled MRF performance is terrible; however, as noted earlier, this is mainly due to the shrinkage effect of the subsampling operator, $h(\cdot)$. Correcting for this with
+---PAGE_BREAK---
+
+appropriate rescaling leads to significantly improved estimation. However, we see
+that the SER increases slowly as a function of sequence length, which is consistent
+with the argument that the matched filter is averaging over the aliasing rather than
+cancelling it, as presented in section 3.3. Furthermore, even for a sequence length of
+1000 the SER still only reaches 12dB.
+
+Subfigures 5.3 (b), (c) and (d) show the SER for the estimation of the parameter maps, proton density, T1 and T2 respectively; these reflect the combined performance of inverting both $h(\cdot)$ and $f(\cdot)$. In each case the two iterative algorithms approach the oracle performance for sequence lengths of $L \ge 200$, indicating successful parameter map recovery. Furthermore, the performance of the $\rho$ and T2 estimates does not improve substantially beyond $L = 200$ as $L$ is increased, reaching a plateau at approximately 16 dB, which for T2 corresponds to a root mean squared (rms) error of approximately 30 ms. In contrast, the T1 estimation performance does increase from roughly 20 dB (213 ms rms error) at $L = 200$ to 30 dB (67 ms rms error) at $L = 1000$. This may be a function of the isometry properties (in the T1 direction) of the Bloch response embedding, and is possibly related to the longer time constants of T1. It is an open question whether a better excitation sequence can be designed to improve the T1 estimates for small $L$.
+
+**5.2.2. Visual Comparison.** To get a visual indication of the performance of the BLIP approach over the MRF reconstruction at low sequence lengths, images of the 3 different parameter estimates for $L = 300$ are given in figures 5.4, 5.5 and 5.6. The left hand column shows the ground truth parameter maps, the middle column shows the MRF reconstruction (scaled) and the right hand column shows the BLIP estimates (with spatial regularization). While the main aspects of the parameter maps are visible in the MRF reconstructions, there are still substantial aliasing artefacts. These are most prominent in the T1 and T2 estimates. In contrast, the BLIP estimates are virtually distortion-free, indicating that good spatial parameter estimates can be obtained with as few as 300 excitation pulses.
+
+**5.2.3. Convergence rates for BLIP.** The convergence of the iterative algorithms is shown in figure 5.7 as a function of the relative data consistency error at each iteration *k*, which we define as $\|Y - h(X^k)\|_2^2 / \|Y\|_2^2$. Results for three different sequence lengths, 100, 200 and 500, are shown in the figure. It is clear that in all cases the algorithms converge rapidly, and for sequence lengths of 200 or more they have effectively converged within 20 iterations (note the log scale along the y-axis). Indeed, this is predicted by the compressed sensing theory for IPA: as the sequence length increases, the compressed sensing task becomes easier (smaller isometry constant) and the rate of convergence also increases. Thus BLIP can be considered to be reasonably computationally efficient.
+
+**5.3. Subsampling versus sequence length.** In our next experiment we investigate the dependence of the reconstruction performance on the undersampling ratio and the sequence length. In this experiment we evaluate the image sequence SER as a function of *L* and *p*. Recall that the theory presented in section 4.2 suggested that this performance might degrade roughly as a function of *p*²/*L*. However, as we noted earlier, the analysis in that section is of a 'worst case' type and may be highly conservative. Figure 5.8 shows a plot of the image sequence SER as a function of *L*/*p*² for three different subsampling rates: *p* = 16 (green), *p* = 32 (red) and *p* = 64 (blue). From the plot we can see that the rapid growth of the SER that we associate with successful recovery occurs in each case at roughly the same value of *L*/*p*². This
+---PAGE_BREAK---
+
+FIG. 5.3. Reconstruction performance as a function of sequence length. (a) SER for image sequence reconstruction; (b) SER for density map estimation; (c) SER for T1 map estimation; and (d) SER for T2 map estimation. Results are shown for the following algorithms: MRF, BLIP, BLIP with spatial regularization. Also shown is the performance of an oracle estimator given the full image sequence data. Finally subfigures (a) and (b) also include the performance of a rescaled MRF estimator.
+
+seems to suggest that the predicted scaling behaviour for *L* and *p* in random EPI to achieve RIP is of the right order. This in turn suggests that to maximize efficiency we should attempt to minimize *p* (all other design criteria being equal).
+
+**5.4. Using a complex density model.** The simulations, so far, have used the somewhat idealized model that the density map is real and non-negative. In this experiment we demonstrate that the algorithm works just as well when the density map is allowed to be complex and to absorb sensitivity maps and other phase terms. Here we repeat the first experiment but modify the density map to have a quadratic phase that is zero at the centre of the image and $\pi/4$ at the corners. A plot of the phase is shown on the left hand side in figure 5.9.
+
+We then ran the MRF reconstruction algorithm and BLIP with equations (3.6) and (3.7) replaced by (3.8) and (3.9) in both algorithms. The resulting performance was very similar to that in the real valued case. For brevity we only show a plot of the T2 SER in figure 5.9. We see that the parameter estimation behaves identically to that in the first experiment. Similar behaviour can be observed for the other
+---PAGE_BREAK---
+
+FIG. 5.4. A visual comparison of the density map estimates from a sequence of length L = 300.
+The top plot shows the original density map. The middle image is the MRF estimate and the bottom
+image is the BLIP estimate.
+---PAGE_BREAK---
+
+FIG. 5.5. A visual comparison of the T1 map estimates from a sequence of length L = 300.
+The top plot shows the original T1 map. The middle image is the MRF estimate and the bottom
+image is the BLIP estimate.
+---PAGE_BREAK---
+
+FIG. 5.6. A visual comparison of the T2 map estimates from a sequence of length L = 300.
+The top plot shows the original T2 map. The middle image is the MRF estimate and the bottom
+image is the BLIP estimate.
+---PAGE_BREAK---
+
+FIG. 5.7. Plots of the data consistency error at each iteration for BLIP using a varying sequence length. The convergence rate increases as the sequence length increases. This is consistent with theory as the increased sequence length is likely to reduce the isometry constant.
+
+parameters. Therefore, it seems that there is no significant difference in using the real
+or complex model for proton density.
+
+**5.5. Uniform versus non-uniform sampling.** In §4.2 we asserted that as the Bloch response model does not include any spatial structure it is preferable to take uniformly random samples of *k*-space in order to achieve the RIP rather than use a variable density scheme. In this final experiment we examine the effect of replacing the (uniform) random EPI sampling with a sampling pattern that weights the lower frequencies more, as is common in compressed sensing schemes for MRI [28]. Specifically, we choose a non-uniform sampling pattern with an equivalent undersampling ratio, $M/N = 1/16$, that always samples $k_y = 0, 1, 2, \sqrt{N}-3, \sqrt{N}-2$ and $\sqrt{N}-1$ (the centre of $k_y$-space), and then samples the remainder of *k*-space uniformly at random (with the remaining 10 samples). While we have not tried to optimize this non-uniform sampling strategy we have found that other variable density sampling strategies performed similarly.
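The non-uniform pattern described above can be sketched as follows, assuming FFT ordering of the $k_y$ lines so that indices $0, 1, 2, \sqrt{N}-3, \sqrt{N}-2, \sqrt{N}-1$ are the centre of $k_y$-space. The function name is ours.

```python
import numpy as np

def variable_density_mask(n_ky, n_samples, rng=None):
    """Always sample the 6 centre k_y lines (FFT ordering), then draw
    the remaining lines uniformly at random without replacement."""
    rng = np.random.default_rng(rng)
    centre = np.r_[np.arange(3), np.arange(n_ky - 3, n_ky)]
    rest = np.setdiff1d(np.arange(n_ky), centre)
    extra = rng.choice(rest, size=n_samples - len(centre), replace=False)
    mask = np.zeros(n_ky, dtype=bool)
    mask[np.r_[centre, extra]] = True
    return mask
```

For $\sqrt{N} = 256$ and $M/N = 1/16$ this gives the 6 fixed centre lines plus 10 random lines per frame.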
+
+We repeated the first experiment and compared the random EPI sampling to using
+non-uniform sampling with the sequence length varied between 10 and 300. Again
+we focus on the T2 reconstruction, although similar behaviour was observed for the
+density and T1 estimation (not shown). The T2 results are plotted in figure 5.10. It is
+clear from the figure that BLIP does not perform well with the non-uniform sampling
+of k-space, and it never achieves the near oracle performance that we observe with
+the random EPI sampling strategy. Indeed, no non-uniform sampling strategy that we
+tried achieved this. Other simulations (not shown) have indicated that uniform i.i.d.
+undersampling in k_y also performs well, although we have yet to prove this has the
+RIP.
+
+Interestingly, the MRF reconstruction does benefit from the non-uniform sampling;
+however, the reconstruction quality is still very poor. We believe that this can
+---PAGE_BREAK---
+
+FIG. 5.8. A plot of the Image sequence SER (dB) against L/p² for three different levels of undersampling: p = 16 (green), p = 32 (red) and p = 64 (blue). The rapid increase in SER appears to occur at roughly the same value of L/p² in each case suggesting that the RIP result in Theorem 1 is of the right order.
+
+FIG. 5.9. Reconstruction performance for the T2 map using a complex density model. (a) The quadratic phase applied to the density map; (b) SER for T2 map estimation as a function of sequence length. Results are shown for the following algorithms: complex MRF, complex BLIP and the complex oracle estimate.
+
+be explained by the fact that in both cases the MRF reconstructions exhibit significant aliasing. However, in the non-uniform case the aliasing is concentrated more in the high frequencies where the signal has less energy and therefore introduces less distortion.
+
+**6. Conclusions and open questions.** We have presented a principled mathematical framework for compressed quantitative MRI based around the recently proposed technique of Magnetic Resonance Fingerprinting [29]. The sensing process can
+---PAGE_BREAK---
+
+FIG. 5.10. A plot of the T2 estimate SER (dB) against L for reconstruction algorithms MRF and BLIP using uniform random (EPI) sampling with $p = 16$ and a non-uniform random sampling with an equivalent undersampling ratio $M/N = 1/16$. Only in the case of BLIP with uniform random sampling does the T2 estimate performance approach that of the oracle estimator.
+
+be considered in two separate stages. First, the embedding of the parameter information into the magnetization response sequences through the mapping $f(\cdot)$. Second, the compressive imaging of the induced magnetization image sequence. The key elements of our approach have been: the characterization of the signal model through the Bloch response manifold; the identification of a provably good image sequence reconstruction algorithm based on iterative projection; an excitation response condition based on a newly introduced measure of flatness to quantify the persistence of the excitation; and a random EPI $k$-space sampling scheme that can be shown to have the necessary RIP condition when the excitation is suitably flat.
+
+The simulations presented in §5 show that the proposed technique is capable of achieving good parameter map reconstruction with very short pulse sequences. The next step will be to make a thorough comparison on an MRI scanner with MRF and other existing quantitative MRI techniques such as [16].
+
+While the current work is specifically targeted at a compressed sensing framework for MRF, we believe that many elements of it should be more broadly applicable. Specifically, the RIP condition for randomized EPI may well have applications in other MR imaging strategies and the characterization of excitation response in terms of flatness could prove a useful tool for the analysis of other compressed sensing schemes involving some form of active sensing.
+
+Finally, the use of parametric physical models (through appropriate discretisation) could be applicable to many areas of compressed sensing beyond MRI. The experience we have gained here suggests that such models can be more powerful than traditional spatial image models, such as wavelet sparsity, that are often found in compressive imaging.
+
+**6.1. Open Questions.** In setting out this compressed sensing framework a number of questions have arisen that we feel should be addressed. We conclude by briefly
+---PAGE_BREAK---
+
+describing these below.
+
+**Excitation sequences.** What are the key requirements for the excitation sequences? We have introduced the flatness condition; however, we have so far not exploited randomness in the excitation. This raises the question: does the excitation sequence need to be random? Although randomness seems a natural way to obtain flat responses, it is not clear that it is necessary or even preferable. Random excitations may also admit less stringent sampling conditions for achieving the RIP. Furthermore, whether deterministic or random, how should we optimize the excitation sequences in order to maximize the performance of the parameter map estimation? This seems to be very much a system identification problem.
+
+**Improved signal models.** A key question for the Bloch response model is: how densely do we need to sample $\mathcal{M}$? This will depend on the response mapping $f$, the undersampling operator $h$ and the performance of the recovery algorithm. It would be interesting to try to quantify these errors using the existing union of subspace compressed sensing theory [8, 11].
+
+A second question is: how should we best include additional modelling information? It is clearly desirable to include spatial regularization. However, we have seen in §5 that the inclusion of our limited spatial regularization within the signal model did not significantly improve performance. On the other hand, this only regularized the density map, whereas, ideally we would like to impose spatial regularity on each of the parameter maps. Unfortunately, a naive construction of such a model would lead to a complex non-separable representation that we cannot easily project onto. Alternatively, we might try to impose block spatial regularity on the image sequence on top of the Bloch response model. This form of spatial regularization was used in [17] and appears to have only provided modest performance improvements. Therefore the question is how to best combine these models to maximize reconstruction performance and can we back this up theoretically?
+
+The current signal model is also somewhat idealised. We have treated the proton density values, $\rho_i$, as nonnegative, following the physics. However, in MRI it is more common to treat $\rho_i$ as a complex value, absorbing various phase factors into the quantity. While our framework easily extends to the complex case as highlighted in §3.3.1 and evaluated in §5.4, it would be interesting to see whether there was a more principled way to deal with such additional phase factors.
+
+Another idealization that is made both here and in the original MRF is that the readout time is assumed negligible with respect to the relaxation times. Depending on the level of undersampling this may not be true, and might introduce significant artefacts. If so, can we modify the signal model to account for this?
+
+Finally, our model does not account for partial volume effects. These were briefly touched on in the supplementary material of [29], where it was proposed to model individual voxels as a composition of different material components. Such a model is reminiscent of the spatial abundance maps used in hyperspectral imaging. In such a case we are in the realms of compressive source separation [26]. Can we formulate a compressive MRF problem that accounts for partial volume effects in a similar manner?
+
+**Subsampling k-space.** We have identified certain conditions that guarantee the RIP for random EPI sampling. This allows us to trade off the k-space subsampling factor $p = N/M$ with the length of the excitation sequence, $L$. Unfortunately the trade off scales as $L \sim p^2$. It is not clear whether similar guarantees could be achieved from
+---PAGE_BREAK---
+
+a deterministic sampling sequence or whether this is indeed optimal. It would be more desirable to have a proportional trade off $L \sim p$. Is such a scaling possible? If so, what is the appropriate combination of excitation sequence and sampling strategy?
+
+Finally, if we can successfully incorporate spatial structure into our signal model, as suggested above, it is very likely that a variable density sampling would be preferable. If so, can we leverage existing theory for variable density sampling [1, 32] to develop principled designs for variable density sampling for compressive MRF?
+
+**Appendix A. Dynamics of balanced SSFP sequences.** Balanced SSFP sequences are popular in MRI and were the basis of the excitation sequences used in MRF [29], although the term ‘steady state’ is somewhat of a misnomer as this refers to the steady state conditions arrived at following periodic excitation with constant $\alpha$ and TR [35].
+
+In fact, here we are explicitly interested in the transient dynamics of a non-periodic excitation sequence. This is in contrast with traditional SSFP sequences where transient oscillations are seen as undesirable as they can introduce imaging artefacts [27]. In this work, as in [29], we will regard the transient behaviour as essential in enabling us to distinguish between different quantitative behaviour.
+
+The transient response can be formally described in terms of a 3-dimensional linear discrete time dynamical system that we summarize below, see [27, 25, 35] for further details. To keep things simple we will assume there is no phase increment between pulses and also that the $l$th echo time, $TE_l$, is half the $l$th repetition time $TR_l$.
+
+Following [27], let $\mathbf{m}_l = (m_l^x, m_l^y, m_l^z)^T \in \mathbb{R}^3$ represent the 3-dimensional magnetization vector for a voxel at the $l$th excitation pulse. In Inversion Recovery SSFP sequences the equilibrium magnetization, $\mathbf{m}_{\text{eq}} = [0, 0, 1]^T$, is initially inverted so that $\mathbf{m}_0 = [0, 0, -1]^T$. Then the magnetization after the $l$th RF-excitation is given by the following linear discrete time dynamical system:
+
+$$ (A.1) \qquad \mathbf{m}_{l+1} = R_x(\alpha_l)R_z(\phi_l)E_l\mathbf{m}_l + R_x(\alpha_l)(\mathrm{Id} - E_l)\mathbf{m}_{\mathrm{eq}} $$
+
+where $R_u(\phi)$ denotes a rotation about the $u \in \{x,y,z\}$ axis by an angle $\phi$, $\phi_l = 2\pi\delta f \operatorname{TR}_l$ is the off-resonance phase associated with local field variations and chemical shift effects [25] and $E_l$ is the diagonal matrix characterizing the relaxation process:
+
+$$ (A.2) \qquad E_l := \begin{pmatrix} e^{-\operatorname{TR}_l / T2} & & \\ & e^{-\operatorname{TR}_l / T2} & \\ & & e^{-\operatorname{TR}_l / T1} \end{pmatrix} $$
+
+where the T1 relaxation time controls the rate of relaxation along the z-axis, while the T2 relaxation time controls the decay of the transverse components, i.e. the relaxation onto the z-axis.
+
+Finally let $\hat{\mathbf{m}}_l$ denote the magnetization at the echo time, $TE_l$. Then this is given by [27]:
+
+$$ (A.3) \qquad \hat{\mathbf{m}}_l = R_z(\phi_l/2)E_l^{1/2}\mathbf{m}_l + (\mathrm{Id} - E_l^{1/2})\mathbf{m}_{\mathrm{eq}}, $$
+
+with the readout coil measuring $\hat{m}_l^x + j\hat{m}_l^y$. Thus the magnetization dynamics in response to a sequence of RF pulses with flip angles, $\alpha_l$, and repetition times, $TR_l$, is given by (A.1) and (A.3) which apart from the input parameters is solely a function of the tissue parameters T1, T2, and the off-resonance frequency, $\delta f$.
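The recursion (A.1) together with the readout (A.3) is straightforward to simulate. The sketch below is a minimal NumPy implementation; the rotation-matrix sign conventions and the choice to read out from the post-pulse state are our assumptions, so treat it as illustrative rather than as the reference implementation of [29].

```python
import numpy as np

def rot_x(a):
    # Rotation about the x-axis by angle a (radians); sign convention assumed.
    c, s = np.cos(a), np.sin(a)
    return np.array([[1.0, 0.0, 0.0], [0.0, c, -s], [0.0, s, c]])

def rot_z(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def simulate_ir_ssfp(alphas, trs, t1, t2, df):
    """Transient IR-SSFP response per (A.1) and (A.3).

    alphas : flip angles (rad); trs : repetition times (s);
    t1, t2 : relaxation times (s); df : off-resonance frequency (Hz).
    Returns the complex readout m_x + j*m_y at each echo time TE_l = TR_l/2.
    """
    m_eq = np.array([0.0, 0.0, 1.0])
    m = -m_eq                                  # inversion recovery: m_0 = [0, 0, -1]
    echoes = []
    for a, tr in zip(alphas, trs):
        phi = 2.0 * np.pi * df * tr            # off-resonance phase over one TR
        E = np.diag([np.exp(-tr / t2), np.exp(-tr / t2), np.exp(-tr / t1)])
        # (A.1): relaxation and precession over TR, then the RF tip
        m = rot_x(a) @ (rot_z(phi) @ (E @ m) + (np.eye(3) - E) @ m_eq)
        # (A.3): state at the echo time TE_l = TR_l / 2
        Eh = np.sqrt(E)
        m_hat = rot_z(phi / 2.0) @ (Eh @ m) + (np.eye(3) - Eh) @ m_eq
        echoes.append(m_hat[0] + 1j * m_hat[1])
    return np.array(echoes)
```

For example, a constant flip-angle train with $\delta f = 0$ keeps the magnetization in the y–z plane, so the readout is purely imaginary.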
+
+**Appendix B. Proof of Theorem 1.** We first introduce the key lemmas that form the main ingredients of the proof. Our approach will follow the standard route
+---PAGE_BREAK---
+
+of concentration of measure, $\epsilon$-net and union bound. To this end we will need the
+following well known Chernoff bound [18]:
+
+LEMMA 1. Let $X = X_1 + X_2 + \dots + X_n$, where the $X_i$ are independent and $0 \le X_i \le 1$, with $\mu = \mathbb{E}(X)$. Then, for $0 < \epsilon < 1$,
+
+$$
+(B.1) \qquad \mathbb{P}(|X - \mu| > \epsilon\mu) \le 2 \exp \left( -\frac{\epsilon^2 \mu}{3} \right)
+$$
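This bound is easy to probe numerically. The sketch below uses Bernoulli summands with arbitrarily chosen parameters; the empirical tail frequency falls well below the right-hand side of (B.1):

```python
import numpy as np

rng = np.random.default_rng(0)
n, q, eps = 200, 0.3, 0.3
mu = n * q                                                 # E(X) for Bernoulli(q) summands
trials = rng.binomial(1, q, size=(20000, n)).sum(axis=1)   # draws of X = X_1 + ... + X_n
empirical = np.mean(np.abs(trials - mu) > eps * mu)        # empirical tail frequency
bound = 2.0 * np.exp(-eps**2 * mu / 3.0)                   # Chernoff bound (B.1)
```

As expected for a non-tight Chernoff estimate, the empirical tail is far smaller than the bound.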
+
+The next lemma establishes a near isometry for a single aliased voxel sequence.
+
+LEMMA 2. Let $z \in \mathbb{C}^L$ be a random vector given by:
+
+$$
+(B.2) \qquad z_i = \frac{1}{p} \sum_k U_{k,i} e^{-j2\pi\zeta_i k/p}
+$$
+
+where $\zeta_i$ are independent random variables drawn uniformly from $\{0, \dots, p-1\}$ and
+$U \in \mathbb{C}^{p \times L}$ is a matrix whose rows have flatness $\lambda$. Then, with probability at least
+$1 - 2e^{-\epsilon^2/(3p\lambda^2)}$, $z$ satisfies:
+
+$$
+(B.3) \qquad (1-\epsilon)\|U\|_F^2 \le p^2 \|z\|_2^2 \le (1+\epsilon)\|U\|_F^2
+$$
+
+Proof. We first show that $\mathbb{E}\|z\|_2^2 = \frac{1}{p^2}\|U\|_F^2$ and then derive the necessary tail bounds.
+
+Let $W_{a,k} = \frac{1}{\sqrt{p}}e^{-j2\pi ak/p}$, $a, k = 0, \dots, p-1$, denote the unitary Discrete Fourier transform in $\mathbb{C}^p$. We can then write
+
+$$
+(B.4) \qquad \mathbb{E} \|z\|_2^2 = \sum_{a=0}^{p-1} \frac{1}{p} \left( \sum_{i=1}^{L} \frac{1}{p} |W_{a,:} U_{:,i}|^2 \right)
+$$
+
+$$
+(B.5) \qquad = \frac{1}{p^2} \sum_i \sum_a |W_{a,:} U_{:,i}|^2
+$$
+
+$$
+(B.6) \qquad = \frac{1}{p^2}\sum_i \|U_{:,i}\|_2^2
+$$
+
+$$
+(B.7) \qquad = \frac{1}{p^2}\|U\|_F^2.
+$$
+
+Now note that $\|z\|_2^2$ is the sum of $L$ independent random variables, $\|z\|_2^2 = \sum_i \xi_i$
+with $\xi_i = \frac{1}{p} |W_{\zeta_i,:} U_{:,i}|^2$. Furthermore the $\xi_i$ satisfy:
+
+$$
+\begin{align*}
+0 \le \xi_i &\le \frac{1}{p} \|U_{:,i}\|_2^2 \\
+&\le \frac{1}{p} \sum_k \max_i |U_{k,i}|^2 \\
+(B.8) \qquad &\le \frac{1}{p} \sum_k \lambda^2 \|U_{k,:}\|_2^2 \\
+&= \frac{\lambda^2}{p} \|U\|_F^2
+\end{align*}
+$$
+
+We can therefore apply the Chernoff bound from Lemma 1 to $\sum_i \xi_i$ rescaled by
+$\frac{\lambda^2}{p} \|U\|_F^2$ to give:
+
+$$
+(B.9) \quad \mathbb{P}\left(\left|\|z\|_2^2 - \frac{1}{p^2} \|U\|_F^2\right| > \epsilon \frac{1}{p^2} \|U\|_F^2\right) \le 2 \exp\left(-\frac{\epsilon^2}{3p\lambda^2}\right)
+$$
+---PAGE_BREAK---
+
+Rearranging this expression completes the proof. □
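The expectation identity (B.4)–(B.7) underlying the lemma can be checked by simulation. The sketch below draws $z$ according to (B.2) for a random Gaussian $U$ (sizes arbitrary) and compares the empirical mean of $p^2\|z\|_2^2$ with $\|U\|_F^2$:

```python
import numpy as np

rng = np.random.default_rng(1)
p, L = 8, 64
U = rng.standard_normal((p, L)) + 1j * rng.standard_normal((p, L))

def sample_z(U, rng):
    # Draw z per (B.2): one uniform random shift zeta_i per time point i.
    p, L = U.shape
    zeta = rng.integers(0, p, size=L)
    k = np.arange(p)
    phases = np.exp(-2j * np.pi * zeta[None, :] * k[:, None] / p)
    return (U * phases).sum(axis=0) / p

vals = np.array([p**2 * np.linalg.norm(sample_z(U, rng))**2 for _ in range(4000)])
fro2 = np.linalg.norm(U, 'fro')**2   # target: E[p^2 ||z||^2] = ||U||_F^2
```

The empirical mean of `vals` matches `fro2` to within Monte Carlo error, consistent with (B.7).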
+
+Next we extend Lemma 2 to a near isometry for groups of aliased voxels under the action of $h$. Since $h$ is an ortho-projector, $\|h(X)\|_2^2 = \|h^H h(X)\|_2^2$ and so we can equivalently consider the isometry properties of $h^H h$.
+
+Let us denote $Z = h^H(h(X))$ such that $Z_{:,l} = F^H P(\zeta_l)^T Y_{:,l}$. Recall that $h$ is a partially sampled 2D discrete Fourier transform that is fully sampled in the $k_x$ direction and periodically subsampled by a factor of $p = N/M$ in the $k_y$ direction. Therefore each $Z_{i,l}$ is the sum of $p$ aliases taken from $X_{:,l}$:
+
+$$ (B.10) \qquad Z_{i,l} = \frac{1}{p} \sum_{k=0}^{p-1} X_{\tau_i(k),l} e^{-j2\pi\zeta_l k/p} $$
+
+where $\tau_i(k)$ gives the index of the $k$th alias for the $i$th voxel (with $\tau_i(0) = i$). We can therefore partition the set $\{1, \dots, N\}$ into $M$ disjoint index sets $\Lambda_1, \dots, \Lambda_M$ with each set associated with $p$ aliases, such that $h^H h$ is separable over $\{\Lambda_i\}$ and $Z_{\Lambda_i,:} = [h^H h]_{\Lambda_i} X_{\Lambda_i,:}$. Since each $Z_{\Lambda_i,:}$ contains $p$ copies of the same combination of aliases (up to a phase shift) we can conclude that:
+
+$$ (B.11) \qquad \|Z_{\Lambda_i,:}\|_F^2 = p \|Z_{k,:}\|_2^2, \forall k \in \Lambda_i $$
+
+Applying Lemma 2 then gives us:
+
+LEMMA 3. Let $Z_{\Lambda_i,:} = [h^H h]_{\Lambda_i} X_{\Lambda_i,:}$ for some $X_{\Lambda_i,:} \in \mathbb{C}^{p \times L}$ whose rows have a flatness $\lambda$ where $[h^H h]_{\Lambda_i}$ is defined above. Then with probability at least $1-2e^{-\epsilon^2/(3p\lambda^2)}$ we have
+
+$$ (B.12) \qquad (1-\epsilon)\|X_{\Lambda_i,:}\|_F^2 \le p\|Z_{\Lambda_i,:}\|_F^2 \le (1+\epsilon)\|X_{\Lambda_i,:}\|_F^2 $$
+
+The final ingredient guarantees a near isometry for low dimensional subsets of the unit sphere (for a more sophisticated but slightly different result in this direction see [14]).
+
+LEMMA 4. Let $S \subset \mathbb{S}^{n-1}$ have box counting dimension $d$ such that for any $\epsilon > 0$ there exists an $\epsilon$-cover of $S$ of size $C_S \epsilon^{-d}$. Let $P: \mathbb{C}^n \to \mathbb{C}^k$ be a random projection such that for any $\delta > 0$ and a fixed $x \in S$,
+
+$$ (B.13) \qquad 1 - \delta \le \frac{n}{k} \|Px\|_2^2 \le 1 + \delta $$
+
+holds with probability at least $1 - c_0 e^{-c_1 \delta^2}$. Then $P$ satisfies (B.13) for all $x \in S$ with probability at least $1 - \eta$ as long as:
+
+$$ (B.14) \qquad c_1 \ge 72\delta^{-2} \left(d \log(36n/\delta k) + \log C_S c_0 / \eta\right) $$
+
+*Proof.* Consider an $\epsilon$-cover $S_\epsilon$ of $S$ with $\epsilon = \delta'/(2\sqrt{n/k})$ and suppose that $P$ satisfies
+
+$$ (B.15) \qquad 1 - \delta'/2 \le \frac{n}{k} \|Px\|_2^2 \le 1 + \delta'/2 $$
+---PAGE_BREAK---
+
+for all $x \in S_\epsilon$ with a constant $0 < \delta' < 1$. Then there exists a $u \in S_\epsilon$ such that:
+
+$$ (B.16) \qquad \sqrt{\frac{n}{k}} \|Px\|_2 \le \sqrt{\frac{n}{k}} \|Pu\|_2 + \sqrt{\frac{n}{k}} \|P(x-u)\|_2 $$
+
+$$ (B.17) \qquad \le 1 + \delta'/2 + \sqrt{\frac{n}{k}}\epsilon $$
+
+$$ (B.18) \qquad = 1 + \delta' $$
+
+where in (B.17) we have used the hypothesis (B.15) together with the fact that $\sqrt{1 + \delta'/2} \le 1 + \delta'/2$.
+
+We can similarly show that $\sqrt{\frac{n}{k}}\|Px\|_2 \ge 1 - \delta'$. Then finally noting that the "non-squared" RIP implies the squared RIP in (B.13) with $\delta = 3\delta'$ gives us the required isometry.
+
+It only remains to bound the probability of failure. Let $p_f$ be the probability that $P$ fails to satisfy (B.13) on $S$. By the union bound:
+
+$$ (B.19) \qquad p_f \le |S_\epsilon| c_0 e^{-c_1(\delta'/2)^2} $$
+
+$$ (B.20) \qquad \le C_S c_0 \left( \frac{\delta'}{2\sqrt{n/k}} \right)^{-d} e^{-c_1(\delta'/2)^2} $$
+
+Therefore it is sufficient to choose $\eta$ so that:
+
+$$ (B.21) \qquad \frac{\eta}{C_S c_0} \ge \left( \frac{\delta}{6\sqrt{n/k}} \right)^{-d} e^{-c_1(\delta/6)^2} $$
+
+Re-arranging the above gives:
+
+$$ (B.22) \qquad c_1 \ge 72\delta^{-2} \left(d \log\left(\frac{36n}{\delta k}\right) + \log C_S c_0 / \eta\right) $$
+
+as required. □
+
+We are now ready to prove the main theorem.
+
+*Proof of Theorem 1.*
+
+First, note that $\mathbb{R}_+B \subset RB$, which is an infinite union of subspaces model, as is its $p$-product, $(RB)^p$, associated with a group of aliased voxels, $\Lambda_i$. To guarantee that $h_{\Lambda_i}$ possesses the necessary RIP on $(RB)^p - (RB)^p$ it is sufficient to consider the RIP on the normalized difference set $S$ given by:
+
+$$ (B.23) \qquad S = \{x \in ((RB)^p - (RB)^p), \|x\|_2 = 1\}, $$
+
+due to the linearity of $h$.
+
+By construction we have $\dim(S) = 2pd_B - 1$ and we can therefore apply Lemma 4 to $S$ together with Lemma 3. This guarantees the RIP for a single group of aliased voxels, $\Lambda_i$, as long as:
+
+$$ (B.24) \qquad \lambda^{-2} \ge (3p) \times 72\delta^{-2} ((2pd_B - 1)\log(36p/\delta) + \log C_S c_0 / \eta) $$
+
+To ensure this holds for all aliased voxel groups $\Lambda_i$, $i = 1, \dots, M$ we can again apply the union bound and replace $\eta$ by $M\eta$. Noting that $p, \delta^{-1}, \eta^{-1} > 1$ we can collect together the constants and simplify to finally give:
+
+$$ (B.25) \qquad \lambda^{-2} \ge C\delta^{-2} p^2 d_B \log(N/\delta\eta) $$
+
+for some constant $C$ independent of $p, N, d_B, \delta$ and $\eta$ which gives the required conditions of the theorem. □
+---PAGE_BREAK---
+
+REFERENCES
+
+[1] B. Adcock and A. C. Hansen, Generalized sampling and infinite-dimensional compressed sensing. DAMTP Tech. Rep. 2011/NA12, 2011.
+
+[2] N. Ailon and B. Chazelle, The Fast Johnson-Lindenstrauss Transform and Approximate Nearest Neighbors. SIAM J. Computing, vol. 39, No. 1, pp. 302-322, 2009.
+
+[3] R. Baraniuk, M. Davenport, R. De Vore, and M. Wakin, A simple proof of the restricted isometry property for random matrices, Constructive Approx., vol. 28, pp. 253–263, 2008.
+
+[4] R. G. Baraniuk and M. B. Wakin, Random Projections of Smooth Manifolds. Foundations of Computational Mathematics, vol. 9(1), pp. 51-77, 2009.
+
+[5] A Beygelzimer, S. Kakade and J. Langford, Cover Trees for Nearest Neighbor. In Proceedings of the 23rd International Conference on Machine Learning (ICML), Pittsburgh, PA, pp. 97–104, 2006.
+
+[6] K. T. Block, M. Uecker, and J. Frahm, Model-based Iterative Reconstruction for Radial Fast Spin-Echo MRI. IEEE Trans. Med. Imag., vol. 28(11), pp. 1759–1769, 2009.
+
+[7] T. Blumensath and M. E. Davies, Iterative Hard Thresholding for Sparse Approximation, J. Fourier Analysis and Applications, vol. 14, no. 5, pp. 629–654, 2008.
+
+[8] T. Blumensath and M. E. Davies, Sampling Theorems for Signals From the Union of Finite-Dimensional Linear Subspaces. IEEE Trans. Inf. Theory, vol. 55(4), pp. 1872–1882, 2009.
+
+[9] T. Blumensath and M. E. Davies, Iterative Hard thresholding for Compressed sensing. Applied Computational Harmonic Analysis, vol. 27, no. 3, pp. 265-274, 2009.
+
+[10] T. Blumensath, M. E. Davies, Normalised Iterative Hard Thresholding; guaranteed stability and performance, IEEE Journal of Selected Topics in Signal Processing, vol. 4(2), pp. 298-309, 2010.
+
+[11] T. Blumensath, Sampling and Reconstructing Signals From a Union of Linear Subspaces.
+IEEE Trans. Inf. Theory, vol. 57(7), pp. 4660–4671, 2011.
+
+[12] Brainweb data repository, available at: http://brainweb.bic.mni.mcgill.ca/brainweb/
+
+[13] M. Bydder, A. A. Samsonov, and J. Du, Evaluation of optimal density weighting for regridding.
+Mag. Res. Im., vol. 25(5), pp. 695-702, 2007.
+
+[14] K. Clarkson, Tighter Bounds for Random Projections of Manifolds. Proceedings of the 24th annual symposium on Computational geometry (SCG’08), pp. 39-48, 2008.
+
+[15] D.L. Collins, A.P. Zijdenbos, V. Kollokian, J.G. Sled, N.J. Kabani, C.J. Holmes and A.C.
+Evans, Design and Construction of a Realistic Digital Brain Phantom. IEEE Trans. on
+Medical Imaging, vol.17(3), pp.463–468, 1998.
+
+[16] S.C.L. Deoni, B.K. Rutt, and T.M. Peters, Rapid Combined T1 and T2 Mapping Using Gradient Recalled Acquisition in the Steady State. Magn. Reson. Med. vol. 49, pp. 515-526, 2003.
+
+[17] M. Doneva, P. Bornert, H. Eggers, C. Stehning, J. Senegas and A. Mertins, Compressed sensing reconstruction for magnetic resonance parameter mapping, Magn. Reson. Med., vol 64, pp. 1114-1120, 2010.
+
+[18] D.P. Dubhashi and A. Panconesi, *Concentration of Measure for the Analysis of Randomized Algorithms*. Cambridge University Press, 2009.
+
+[19] J. A. Fessler and B. P. Sutton, Nonuniform fast Fourier transform using min-max interpolation.
+IEEE Trans. Sig. Proc. vol. 51(2) pp. 560–574, 2003.
+
+[20] J. A. Fessler, Model-based image reconstruction for MRI. IEEE Sig. Proc. Mag., vol. 27(4),
+pp. 81–89, 2010.
+
+[21] J.P. Hornak, *The Basics of MRI*. Webbook, available on-line at:
+http://www.cis.rit.edu/htbooks/mri/.
+
+[22] C. Huang, A. Bilgin, T. Barr and M. I. Altbach, T2 relaxometry with indirect echo compensation from highly undersampled data. Magn. Reson. Med., vol. 70, pp. 1026-1037, 2013.
+
+[23] M. Iwen and M. Maggioni, Approximation of Points on low-dimensional manifolds via random linear projections. arXiv:1204.3337.
+
+[24] E. T. Jaynes, Matrix treatment of nuclear induction. Physical Review, vol. 98(4), pp. 1099-1105, 1955.
+
+[25] C. Ganter, Off-Resonance Effects in the Transient Response of SSFP Sequences. Magn. Reson.
+Med. vol. 52, pp. 368-375, 2004.
+
+[26] M. Golbabaee, S. Arberet and P. Vandergheynst, Compressive Source Separation: Theory and Methods for Hyperspectral Imaging. IEEE Trans. Image Proc., vol. 22(12), pp. 5096–5110,
+2013.
+
+[27] B.A. Hargreaves, S.S. Vasanawala, J.M. Pauly and D.G. Nishimura, Characterization and Reduction of the Transient Response in Steady-State MR Imaging. Magn. Reson. Med.
+
+
+---PAGE_BREAK---
+
+vol 46, pp. 149–158, 2001.
+
+[28] M. Lustig, D. L. Donoho, J.M. Santos, and J.M. Pauly, Compressed sensing MRI. IEEE Sig. Proc. Mag., vol. 25(2), pp. 72-82, 2008.
+
+[29] D. Ma, V. Gulani, N. Seiberlich, K. Liu, J. L. Sunshine, J. L. Duerk and M. A. Griswold, Magnetic Resonance Fingerprinting. Nature, vol. 495, pp. 187–192, 2013.
+
+[30] G.C. McKinnon, Ultrafast interleaved gradient-echo-planar imaging on a standard scanner. Magn. Reson. Med. vol. 30, pp. 609–616, 1993.
+
+[31] P. Niyogi, S. Smale, and S. Weinberger, Finding the Homology of Submanifolds with High Confidence from Random Samples. Discrete Comput. Geom. vol. 39(1), pp. 419–441, 2008.
+
+[32] G. Puy, P. Vandergheynst and Y. Wiaux, On Variable Density Compressive Sampling. IEEE Sig. Proc. Lett., vol. 18(10), pp. 595–598, 2011.
+
+[33] G. Puy, P. Vandergheynst, R. Gribonval and Y. Wiaux, Universal and efficient compressed sensing by spread spectrum and application to realistic Fourier imaging techniques. EURASIP Journal on Advances in Signal Processing, 2012, 2012:6.
+
+[34] H. Rauhut, Compressive sensing and structured random matrices. Radon Series Comp. Appl. Math., vol. 9, pp. 1–92, 2010.
+
+[35] K. Scheffler and S. Lehnhardt, Principles and applications of balanced SSFP techniques. Eur. Radiol., vol. 13, pp. 2409-2418, 2003.
+
+[36] J. Tran-Gia, D. Stab, T. Wech, D. Hahn, and H. Kostler, Model-based Acceleration of Parameter mapping (MAP) for saturation prepared radially acquired data. Magn Reson Med., vol. 70, pp. 1524-1534, 2013.
+
+[37] G. A. Wright, Magnetic Resonance Imaging. IEEE Sig. Proc. Mag., vol.14(1), pp. 56–66, 1997.
+
+[38] B. Zhao, F. Lam, W. Luy and Z.-P. Liang, Model-based MR parameter mapping with sparsity constraint. IEEE Int. Symp. Biomed. Imag. (ISBI), pp. 1–4, 2013.
\ No newline at end of file
diff --git a/samples/texts_merged/6518189.md b/samples/texts_merged/6518189.md
new file mode 100644
index 0000000000000000000000000000000000000000..96b9ebea7571daca345eed890d69e2470de1f10a
--- /dev/null
+++ b/samples/texts_merged/6518189.md
@@ -0,0 +1,546 @@
+
+---PAGE_BREAK---
+
+# An energy method for computing the use of fossil fuel energy
+
+## Un método energético para calcular el uso de energía de combustibles fósiles
+
+Timur B. Temukuyev
+
+Russian Presidential Academy of National Economy and Public Administration under the President of
+the Russian Federation. Moscow, Russia.
+
+energoconsul@mail.ru
+
+*(recibido/received: 28-octubre-2020; aceptado/accepted: 15-enero-2021)*
+
+### ABSTRACT
+
+This article considers an energy method for computing the use of fossil fuel energy. On the world market, the fuel price depends on supply and demand and does not reflect the energy costs of fuel production. An energy analysis of economic activity was suggested by the American scientist Charles Hall, who introduced the notion of Energy Returned on Energy Invested, the ratio between returned and invested energy, into scientific discourse. This method, however, takes no account of the depreciation of invested energy. All losses are fully incorporated when the coefficient of beneficial primary energy use (CBPEU) is defined as the ratio between the energy used beneficially in all process flow chains, from fuel deposit exploration to energy utilisation, and the considered amount of natural fuel primary energy. When CBPEU is determined, allowance is made for all potential energy losses, and the degree to which the energy contained in the fuel is depreciated on its way from the deposit to a consumer is defined. When the energy of renewable sources is utilised, a coefficient of renewable sources energy conversion, defined as the ratio between the energy delivered by a power unit throughout its entire operation period and the energy invested over the same period (taking CBPEU into account), represents an objective criterion of power unit efficiency.
+
+**Keywords:** coefficient of beneficial primary energy use; fuel reprocessing; fuel transportation; energy breeding gain.
+
+### RESUMEN
+
+En el artículo se ha considerado un método energético para calcular el uso de energía de combustibles fósiles. En el mercado mundial, el precio del combustible depende de la oferta y la demanda y no implica costos de energía para la producción de combustible. El científico estadounidense Charles Hall sugirió un análisis energético de la actividad económica e introdujo la noción de energía devuelta sobre energía invertida, como relación entre la energía devuelta y la invertida, en el discurso científico. No se ha tenido en cuenta la depreciación de la energía invertida en este método. Todas las pérdidas se incorporan por completo cuando la relación entre la energía beneficiosa usada en todas las cadenas de flujo del proceso, desde la exploración de depósitos de combustible hasta la utilización de energía, y la cantidad considerada de energía primaria de combustible natural se toma como el coeficiente de uso beneficioso de energía primaria (CBPEU). Cuando se determina el CBPEU, se tienen en cuenta todas las pérdidas de energía potenciales; se define el grado de depreciación de la energía contenida en el combustible, desde su depósito hasta el consumidor. Cuando se utiliza energía de fuentes renovables, un coeficiente de conversión de energía de fuentes renovables, definido como la relación entre la energía entregada por una unidad de potencia durante todo el período de operación y la energía invertida teniendo en cuenta el CBPEU durante el mismo período, representará un criterio objetivo de eficiencia de la unidad de potencia.
+---PAGE_BREAK---
+
+
+**Palabras clave**: coeficiente de uso beneficioso de energía primaria; reprocesamiento de combustible; transporte de combustible; ganancia de generación de energía.
+
+# 1. INTRODUCTION
+
+On the worldwide market, the price of fossil fuel, as the price of any product, mostly depends on its quality, supply, and demand. Since market competition principles underlie the international pricing policy, the goods production costs, which are generally defined by one of the world currencies or national currency unit, constitute solely a seller's problem.
+
+If the processes of producing and selling goods are considered in terms of energy, evaluative computations become more complex. Under the existing international commerce system, the energy computation method, which can be implemented only within a particular country with unified laws and regulations, may be of interest mainly to the government institutions that define technology-related policy.
+
+# 2. MATERIALS AND METHODS
+
+A deposit most commonly contains two types of fuel: for instance, coal and methane, natural gas and gas condensate, or petroleum and associated petroleum gas. Here, only one component is principal for deposit developers.
+
+The total amount of primary energy within a fossil fuel deposit, found from the geological exploration carried out, is determined by the following formula, J:
+
+$$ \Sigma Q_o = \sum_{i=1}^{n} B_i \cdot Q_{Hi}^p, \quad (1) $$
+
+where $B_i$ – estimated reserves of the $i$th fuel, kg, m³;
+$Q_{Hi}^p$ – higher heating value of the $i$th fuel, J/kg, J/m³;
+$n$– number of components.
+
+The stage of geological exploration initiates the technical processes that reduce the fuel's energy value.
+
+The first of these losses are the geological exploration energy costs, $\Sigma Q_{g.s}$, J. They can be deducted from the total amount of primary energy, or taken into account as other losses on conversion to the design unit of the main fuel component, J/kg, J/m³, by the formula:
+
+$$ Q_{g.s} = \frac{\Sigma Q_{g.s.}}{\sum_{i=1}^{m} B_i}, \quad (2) $$
+
+where **B** – main component of fuels.
+
+When computing, the heating value of the main fuel component will decrease by the corresponding value.
+
+If there are two in-place components, and only one is used, its reduced heating value will be greater than the intrinsic value; it will be defined by the following formula:
+
+$$ Q_{\Pi}^{o} = \frac{\Sigma Q_{o}}{B}, \quad (3) $$
+---PAGE_BREAK---
+
+where B – design amount of main natural fuel situated deep in the earth, kg, m³.
+
+Unused energy of extracted subterranean fuel should be classified as extraction losses. These are determined as the difference between the heating value of the natural fuel composition and the higher heating value of the operating composition of the main fuel component:
+
+$$ \Delta Q_2 = Q_{\Pi}^{0} - Q_{H}^{p}. \quad (4) $$
+
+Associated petroleum gas and flammable gas condensate are typically not used.
+
+If the Earth's interior contains only one component without combustible admixtures, which are lost during fuel extraction, then $Q_{\Pi}^{0} = Q_{H}^{p}$, and formula (1) will be as follows:
+
+$$ \Sigma Q_o = B \cdot Q_H^p, \quad (5) $$
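As a worked example of formulas (1), (3) and (4), consider a hypothetical deposit whose main component is petroleum with some associated gas; all figures below are assumed for illustration only:

```python
# Hypothetical two-component deposit; all figures are illustrative.
B_oil, q_oil = 1000.0, 42.0e6    # main component: kg, J/kg
B_gas, q_gas = 150.0, 35.0e6     # associated gas: m^3, J/m^3

total_q = B_oil * q_oil + B_gas * q_gas   # formula (1): total primary energy, J
q_reduced = total_q / B_oil               # formula (3): reduced heating value, J/kg
delta_q2 = q_reduced - q_oil              # formula (4): extraction losses, J/kg
```

If the associated gas is flared rather than used, the 5.25 MJ/kg difference is an extraction loss chargeable against the main fuel.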
+
+Energy efficiency indicators:
+
+1. EROEI (energy returned on energy invested) is the ratio between obtained and invested energy:
+
+$$ Er = \frac{E_2}{E_1}, \quad (6) $$
+
+where *Er* – EROEI;
+
+*E₂* – energy obtained from fuel or a device transforming the Earth’s, solar, etc. energy, J;
+
+*E₁* – energy expended to extract (produce) energy *E₂*, J.
+
+2. EROI (energy return on investment) is the ratio between obtained energy and investments, J/rub.:
+
+$$ Ei = \frac{E_2}{C}, \quad (7) $$
+
+where *Ei* – EROI;
+
+*E₂* – the same as in formula (6);
+
+*C* – means spent to extract (produce) energy *E₂*, rub.
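Both indicators reduce to simple ratios; a minimal sketch with hypothetical figures:

```python
def eroei(e_obtained, e_invested):
    """Er, formula (6): obtained energy over invested energy (dimensionless)."""
    return e_obtained / e_invested

def eroi(e_obtained, cost_rub):
    """Ei, formula (7): obtained energy per rouble invested, J/rub."""
    return e_obtained / cost_rub

# Hypothetical field: 5e13 J extracted for 1e12 J and 2e9 rub. invested.
er = eroei(5.0e13, 1.0e12)   # dimensionless
ei = eroi(5.0e13, 2.0e9)     # J/rub
```

Comparing `er` across fields is exactly Hall's procedure described in the literature review below.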
+
+Let the coefficient of beneficial primary energy use (CBPEU) be the ratio between beneficially used energy at a certain stage of the process flow from exploring fuel deposit to utilising energy, $Q_i$ (in mass or volume unit equivalent), and the considered amount of natural fuel primary energy, $Q_{\Pi}^{0}$:
+
+$$ \mu_{oi} = \frac{Q_i}{Q_{\Pi}^{0}}. \quad (8) $$
+
+When determining CBPEU, all possible energy losses should be taken into account: not only energy losses during operation, but also losses from the unit's creation, assembly, subsequent dismantling, etc. In effect, these are computations of how subterranean fuel energy is depreciated by the time it reaches consumers in the form of electricity or heat and is used by them.
+
+With such a method of estimation, CBPEU of the entire system from fuel extraction to energy use will be defined by the following formula:
+
+$$ \mu_{o1} = \frac{Q_1}{Q_{\Pi}^0}, \quad (9) $$
+
+where *Q*₁ – beneficially used energy, J/kg, J/m³.
+---PAGE_BREAK---
+
+In general, total primary energy costs, J/kg, J/m³, for a particular stage, can be presented as follows:
+
+$$ \Sigma Q_{ci} = \sum_{i=1}^{n} Q_{ci}^{c} + \sum_{i=1}^{n} Q_{ci}^{ut} + \sum_{i=1}^{n} Q_{ci}^{op} + \sum_{i=1}^{n} Q_{ci}^{sal} + \sum_{i=1}^{n} Q_{ci}^{oth}, \quad (10) $$
+
+where $\sum_{i=1}^{n} Q_{ci}^{c}$ – total primary energy expended on the object capital construction, equipment assembly and disassembly;
+
+$\sum_{i=1}^{n} Q_{ci}^{ut}$ – total primary energy expended on the equipment fabrication and utilisation;
+
+$\sum_{i=1}^{n} Q_{ci}^{op}$ – total operational primary energy costs;
+
+$\sum_{i=1}^{n} Q_{ci}^{sal}$ – total primary energy costs related to people's work and their salary paid;
+
+$\sum_{i=1}^{n} Q_{ci}^{oth}$ – other total primary energy costs.
+
+Only operational costs will be defined rather easily; the costs related to people's work are very complicated to calculate, since it is fairly difficult to transform them into heat or energy. It will require the development of a special methodology.
+
+Technological system from fuel extraction to electrical and heat energy use can be divided into several
+processes: extraction, reprocessing, and transportation of fuel, generation, transportation, distribution,
+consumption of electrical and heat energy. When needed, any of these processes can be divided into
+parts.
+
+Let the total energy costs be denoted at:
+
+$\Sigma Q_{c2}$ – fuel extraction and preparation;
+
+$\Sigma Q_{c3}$ – fuel transportation and storage;
+
+$\Sigma Q_{c4}$ – fuel reprocessing;
+
+$\Sigma Q_{c5}$ – conversion into other types of energy;
+
+$\Sigma Q_{c6}$ – transmission of electrical and heat energy;
+
+$\Sigma Q_{c7}$ – distribution of electrical and heat energy;
+
+$\Sigma Q_{c8}$ – use (consumption) of electrical and heat energy.
+
+Then the absolute amount of the fuel energy available will accordingly be determined by formulas
+(11-13). For fuel after:
+
+extraction and preparation
+
+$$ Q_2 = Q_n^o - \sum Q_{c2}; \qquad (11) $$
+
+transportation and storage
+
+$$ Q_3 = Q_n^o - (\sum Q_{c2} + \sum Q_{c3}); \qquad (12) $$
+
+reprocessing
+
+$$ Q_4 = Q_n^o - (\sum Q_{c2} + \sum Q_{c3} + \sum Q_{c4}). \qquad (13) $$
+
+The fuel, with such available energy $Q_4$, is delivered for conversion into other types of energy – electrical and heat. The absolute amount of primary energy, allowing for previous costs, will be determined by formulas (14 – 17):
+
+after conversion into other types of energy
+
+$$ Q_5 = Q_n^o - (\sum Q_{c2} + \sum Q_{c3} + \sum Q_{c4} + \sum Q_{c5}); \qquad (14) $$
+
+after transportation by trunk transmission lines (pipelines)
+
+$$ Q_6 = Q_n^o - (\sum Q_{c2} + \sum Q_{c3} + \sum Q_{c4} + \sum Q_{c5} + \sum Q_{c6}); \qquad (15) $$
+---PAGE_BREAK---
+
+after transportation by distribution electrical transmission lines (pipelines)
+
+$$Q_7 = Q_n^0 - (\sum Q_{c2} + \sum Q_{c3} + \sum Q_{c4} + \sum Q_{c5} + \sum Q_{c6} + \sum Q_{c7}); \quad (16)$$
+
+beneficially used
+
+$$Q_1 = Q_n^0 - (\sum Q_{c2} + \sum Q_{c3} + \sum Q_{c4} + \sum Q_{c5} + \sum Q_{c6} + \sum Q_{c7} + \sum Q_{c8}). \quad (17)$$
+
+To obtain dimensionless values $q_i$, denoting the respective shares of losses, both sides of equation (17) should be divided by $Q_n^0$; formula (17) will then be written as:
+
+$$ q_{o1} = 1 - (q_2 + q_3 + q_4 + q_5 + q_6 + q_7 + q_8), \quad (18) $$
+
+or
+
+$$ \mu_{o1} = 1 - (q_2 + q_3 + q_4 + q_5 + q_6 + q_7 + q_8), $$
+
+i.e. CBPEU for a consumer of electrical (heat) energy is obtained.
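A numerical illustration of formula (18): with hypothetical loss shares $q_2, \dots, q_8$ (the values below are assumed, with conversion dominating as is typical for thermal generation), the consumer-level CBPEU follows directly:

```python
# Hypothetical loss shares for stages 2..8, as fractions of primary energy.
losses = {
    'extraction':     0.05,   # q2
    'fuel transport': 0.03,   # q3
    'reprocessing':   0.07,   # q4
    'conversion':     0.55,   # q5: dominant for thermal power plants
    'transmission':   0.04,   # q6
    'distribution':   0.03,   # q7
    'consumption':    0.08,   # q8
}
mu_o1 = 1.0 - sum(losses.values())   # formula (18): consumer-level CBPEU
```

With these shares, only 15% of the primary energy in the deposit is used beneficially by the end consumer.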
+
+Formulas (11 – 16) can be rewritten correspondingly.
+
+CBPEU is an indicator that reflects the effectiveness of using energy resources at all stages of the process flow, from fuel extraction to energy consumer inclusive. If needed, it can be calculated for several consecutive stages of energy consumption.
+
+### 3. LITERATURE REVIEW
+
+In the late XIX century, the Russian scientist S.A. Podolinsky was the first to examine life problems in the energy-related context (Podolinsky, 1880). Commenting on his work, F. Engels noted the problems scientists can face when examining this matter (Marx, 1955-1981). In 1886, L. Boltzmann proposed a thermodynamic analysis of life phenomena (Boltzmann, 1970). Russian academics had their own ideas and suggestions. In 1901, N.A. Umov put forward the idea of a third law of thermodynamics that would determine the specifics of energy processes in life phenomena (Umov, 1916). K.A. Timiryazev (1948) analysed specific thermodynamic functions of the chlorophyll apparatus in plants. V.I. Vernadsky (1928) offered to introduce a general unit 'to quantitatively compare all natural productive forces'. N.M. Fedorovsky (1935) suggested that mineral resources be classified on an energy basis. A.E. Fersman (1937) employed the energy method in his research studies. The issues of economic activity energy analysis interested P.G. Kuznetsov (1994).
+
+In 1956, King Hubbert, an American scientist, derived a formula for petroleum extraction in the USA. Extraction first rises, then remains unchanged for a while, and then starts to decline. At the first and second stages petroleum is cheap, and at the third stage its price begins to rise (King Hubbert, 1956).
+
+The American biologist Charles Hall proposed a theory of economic activity energy analysis. He introduced the concept of Energy Returned On Energy Invested (EROEI) into scientific use, asserting that predators cannot expend more energy than they receive while hunting. Hall then transferred this idea to petroleum extraction: he divided the amount of energy contained in extracted petroleum by the amount of energy expended on its extraction. By comparing such indicators across various fields, Hall identified the most promising of them in terms of energy (Hall, 2008).
+
+In Russia, researchers under the guidance of A.F. Safronov computed EROEI (ratio between obtained and invested energy) of a specific gas condensate field (Safronov, 2010, 2011), and examined its influence on the CBPEU value (Temukuyev, 2014); the EROI value for coal was determined in Ukraine (Cherevatskyi, 2017).
+---PAGE_BREAK---
+
+To date, closer attention has been given to using alternative energy sources, which is largely due to their increased efficiency. Hence, research on increasing the efficiency of unconventional power sources has become highly relevant. Among the most appropriate trends for economies is the use of such unconventional sources as energy obtained through biotechnologies (Fiapshev *et al.*, 2017, 2018). A representative example is the comparison of EROI for wind and solar photovoltaic power systems (Raugei *et al.*, 2017). As time passes, the attitude towards alternative fuel types changes (IPCC, 2018). It is becoming apparent that in the future the EROI values of fossil fuels and of most renewable energy sources will decrease (Järvensivu *et al.*, 2018), and the future energy balance can change significantly (Moriarty & Honnery, 2019). A new methodology for estimating EROI (Capellán-Pérez *et al.*, 2019) and a standard (De Castro and Capellán-Pérez, 2020) have been developed. It is not certain that EROI will be the main decisive factor in the future (Hall, 2017); at the same time, a new approach to calculating "corporate" EROI is under review (Celi *et al.*, 2018).
+
+# 4. RESULTS
+
+## 4.1. Method for determining a coefficient of renewable sources energy conversion (CRSEC)
+
+When utilising the energy of renewable resources, CRSEC should be taken as an objective power plant (power unit) efficiency criterion, defined by formula:
+
+$$ \pi = \frac{Q_1}{Q_{pec}}, \qquad (19) $$
+
+where $Q_1$ – energy, supplied by power plant (power unit) over the entire operation period;
+
+$Q_{pec}$ – primary energy costs (imported energy allowing for CBPEU), obtained from an external source over the entire period of its operation.
+
+They are defined as follows:
+
+$$ Q_{pec} = Q_{eq} + Q_{cap} + Q_{op} + Q_{oth}, \qquad (20) $$
+
+where $Q_{eq}$ – primary energy costs to manufacture the equipment;
+
+$Q_{cap}$ – primary energy expended on capital construction, object equipment assembly and disassembly;
+
+$Q_{op}$ – operational primary energy costs;
+
+$Q_{oth}$ – other total primary energy costs.
+
+Other costs should also include labour costs.
+
+If the total energy costs $Q_i$ are referred to the power unit operation time $\tau$, then the specific primary energy costs are
+$q_i = \frac{Q_i}{\tau}$, and $\pi = \frac{q_1}{q_{pec}}$.
+
+Power unit efficiency factor will be defined by formula:
+
+$$ \eta = \frac{Q_{un}}{Q_{max}}, \qquad (21) $$
+
+where $Q_{un}$ – energy supplied by the power unit to an external consumer;
+
+$Q_{max}$ – maximum energy theoretically obtainable when there is an ideal power unit over the same period.
+
+In terms of energy, the use of the system is justified when $\pi > 1$; if $\pi < 1$, it serves no purpose to invest in its development.
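+
+As a minimal sketch (with purely hypothetical energy values, chosen only to illustrate the computation), formulas (19) and (20) combine into a single viability check:
+
+```python
+# Sketch of the CRSEC criterion, formulas (19)-(20).
+# All energy values are hypothetical, in joules.
+
+def crsec(q_supplied, q_eq, q_cap, q_op, q_oth):
+    """pi = Q1 / Q_pec, with Q_pec = Q_eq + Q_cap + Q_op + Q_oth."""
+    q_pec = q_eq + q_cap + q_op + q_oth  # formula (20): total primary energy costs
+    return q_supplied / q_pec            # formula (19)
+
+pi = crsec(q_supplied=9.0e15, q_eq=1.2e15, q_cap=0.8e15,
+           q_op=0.7e15, q_oth=0.3e15)
+print(f"pi = {pi:.2f}")  # pi = 3.00 > 1: the plant is justified in energy terms
+```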
+
+Let the above suggested method be considered for specific energy sources.
+---PAGE_BREAK---
+
+## 4.2 Hydroelectric power plants (HPPs)
+
+The interval from the start of operation to overhaul should be taken as the design HPP operation period, and during further operation the time should be counted from the overhaul. In addition, determination of CRSEC during subsequent operation should involve overhaul costs rather than primary costs. For HPPs, with certain data correction, CRSEC can also be determined using formula (19), which will take the following form:
+
+$$ \pi^{HPP} = \frac{Q_1^{HPP}}{Q_{pec}^{HPP}}, \qquad (22) $$
+
+where $Q_1^{HPP}$ – energy supplied by HPP over the design period;
+
+$Q_{pec}^{HPP}$ – costs of primary energy obtained from an external source over the design period.
+
+They are determined according to formula:
+
+$$ Q_{pec}^{HPP} = Q_{eq}^{HPP} + Q_{cap}^{HPP} + Q_{op}^{HPP} + Q_{oth}^{HPP}, \qquad (23) $$
+
+where $Q_{eq}^{HPP}$ – primary energy costs to manufacture the equipment;
+
+$Q_{cap}^{HPP}$ – primary energy costs on capital construction, object equipment assembly and disassembly;
+
+$Q_{op}^{HPP}$ – primary energy operational costs;
+
+$Q_{oth}^{HPP}$ – other primary energy costs.
+
+## 4.3. Solar energy
+
+For helioplants, CRSEC is defined by formula:
+
+$$ \pi^{Hel} = \frac{Q_1^{Hel}}{Q_{pec}^{Hel}}, \qquad (24) $$
+
+where $Q_1^{Hel}$ – energy supplied by helioplant over the entire period of its operation;
+
+$Q_{pec}^{Hel}$ – costs of primary energy obtained from an external source over the entire period of its operation. They are defined as follows:
+
+$$ Q_{pec}^{Hel} = Q_{eq}^{Hel} + Q_{cap}^{Hel} + Q_{op}^{Hel} + Q_{oth}^{Hel}, \qquad (25) $$
+
+where $Q_{eq}^{Hel}$ – primary energy expended on manufacturing the equipment;
+
+$Q_{cap}^{Hel}$ – primary energy expended on construction, assembly and disassembly of helioplant;
+
+$Q_{op}^{Hel}$ – operational primary energy costs;
+
+$Q_{oth}^{Hel}$ – other primary energy costs.
+
+## 4.4. Wind energy
+
+For wind turbines, CRSEC is determined using the same method as described in the previous cases:
+
+$$ \pi^W = \frac{Q_1^W}{Q_{pec}^W}, \qquad (26) $$
+
+where $Q_1^W$ – energy obtained from wind turbine over its entire operation period;
+
+$Q_{pec}^W$ – costs of primary energy from an external source for wind turbine manufacture, construction, and operation. They are determined according to expression:
+---PAGE_BREAK---
+
+$$Q_{pec}^{W} = Q_{eq}^{W} + Q_{cap}^{W} + Q_{op}^{W} + Q_{oth}^{W}, \quad (27)$$
+
+where $Q_{eq}^W$ – primary energy expended on manufacturing the equipment;
+
+$Q_{cap}^W$ – primary energy costs for equipment assembly and disassembly;
+
+$Q_{op}^W$ – operational primary energy costs;
+
+$Q_{oth}^W$ – other primary energy costs.
+
+### 4.5. Geothermal energy
+
+For geothermal power plants, CRSEC is determined using the same method as described in the previous cases:
+
+$$\pi^{Geo} = \frac{Q_1^{Geo}}{Q_{pec}^{Geo}}, \quad (28)$$
+
+where $Q_1^{Geo}$ – energy obtained from geothermal power station over its entire operation period;
+
+$Q_{pec}^{Geo}$ – costs of primary energy obtained from an external source over the entire period of geo-thermal power station operation. They are defined as follows:
+
+$$Q_{pec}^{Geo} = Q_{eq}^{Geo} + Q_{cap}^{Geo} + Q_{op}^{Geo} + Q_{oth}^{Geo}, \quad (29)$$
+
+where $Q_{eq}^{Geo}$ – primary energy expended on manufacturing the equipment;
+
+$Q_{cap}^{Geo}$ – primary energy costs for drilling a borehole, equipment assembly and disassembly;
+
+$Q_{op}^{Geo}$ – operational primary energy costs;
+
+$Q_{oth}^{Geo}$ – other primary energy costs.
+
+It is difficult to switch over fully and immediately to an energy method for evaluating the cost of geothermal energy, but even a stepwise transition can provide, in a first approximation, a clear impression of how effective a particular system is.
+
+### 4.6. Waste recycling
+
+For waste recycling, CRSEC is determined using the same methods as those described in the previous cases:
+
+$$\pi^{waste} = \frac{Q_1^{waste}}{Q_{pec}^{waste}}, \quad (30)$$
+
+where $Q_1^{waste}$ – energy obtained from recycled waste over the entire period of recycling facility operation (when calculating it, the energy should also be taken into account, expended on removing metal, glass, and other materials from domestic waste);
+
+$Q_{pec}^{waste}$ – actual costs of primary energy obtained from an external source over its entire operation period. They are defined as follows:
+
+$$Q_{pec}^{waste} = Q_{eq}^{waste} + Q_{cap}^{waste} + Q_{op}^{waste} + Q_{oth}^{waste}, \quad (31)$$
+
+where $Q_{eq}^{waste}$ – primary energy expended on manufacturing the equipment;
+
+$Q_{cap}^{waste}$ – primary energy costs for construction, assembly and disassembly of recycling facility;
+
+$Q_{op}^{waste}$ – operational primary energy costs;
+
+$Q_{oth}^{waste}$ – other primary energy costs.
+---PAGE_BREAK---
+
+## 4.7. Biofuel production
+
+The formula to determine CRSEC as applied to biofuel production per 1 ha of land over a design period of 1 year is as follows:
+
+$$ \pi^{bio} = \frac{q_{1}^{bio}}{q_{pec}^{bio}}, \quad (32) $$
+
+where $q_1^{bio}$ – specific energy obtained from biofuel over the design period of production, J/(ha year);
+$q_{pec}^{bio}$ – costs of specific primary energy to produce biofuel over the design period, J/(ha year).
+
+Actual energy costs are defined by the following formula:
+
+$$ q_{pec}^{bio} = q_{eq}^{bio} + q_{cap}^{bio} + q_{op}^{bio} + q_{oth}^{bio}, \quad (33) $$
+
+where $q_{eq}^{bio}$ – specific primary energy expended on manufacturing the equipment;
+$q_{cap}^{bio}$ – costs of specific primary energy for constructing the object to reprocess biomass, assemble and dismantle its equipment;
+$q_{op}^{bio}$ – operational costs of specific primary energy for machinery, fertilizers, pesticides, etc. over the design period;
+$q_{oth}^{bio}$ – other specific primary energy costs.
+
+Other costs should also include labour costs.
+
+For a heat pump installation (HPI), CRSEC is determined using the same method as described in the previous cases:
+
+$$ \pi^{HPI} = \frac{Q_{1}^{HPI}}{Q_{pec}^{HPI}}, \quad (34) $$
+
+where $Q_1^{HPI}$ – energy delivered to HPI consumer over the entire period of HPI operation;
+$Q_{pec}^{HPI}$ – costs of primary energy obtained from an external source over the entire operation period. They are determined as follows:
+
+$$ Q_{pec}^{HPI} = Q_{eq}^{HPI} + Q_{cap}^{HPI} + Q_{op}^{HPI} + Q_{oth}^{HPI}, \quad (35) $$
+
+where $Q_{eq}^{HPI}$ – primary energy expended on manufacturing the equipment;
+$Q_{cap}^{HPI}$ – costs of primary energy over the entire operation period from assembling to disassembling HPI;
+$Q_{op}^{HPI}$ – operational primary energy costs;
+$Q_{oth}^{HPI}$ – other primary energy costs.
+
+## 4.8. Nuclear power plants (NPPs)
+
+For NPPs that utilise limited reserves of nuclear fuel, it is possible to define an energy breeding gain through formula (19):
+
+$$ \pi^{NPP} = \frac{Q_{1}^{NPP}}{Q_{pec}^{NPP}}, \quad (36) $$
+
+where $Q_1^{NPP}$ – energy, obtained from NPP over the entire period of its operation;
+---PAGE_BREAK---
+
+$Q_{pec}^{NPP}$ – costs of primary energy obtained from an external source over the whole operation period. They are determined as follows:
+
+$$Q_{pec}^{NPP} = Q_{eq} + Q_{cap}^{NPP} + Q_{op}^{NPP} + Q_{oth}^{NPP}, \quad (37)$$
+
+where $Q_{eq}$ – primary energy expended on extraction and preparation, transportation and storage, milling of uranium ore;
+
+$Q_{cap}^{NPP}$ – primary energy expended on construction of power plant and repository, assembly and disassembly of NPP equipment;
+
+$Q_{op}^{NPP}$ – operational primary energy costs for power plant and repository, including costs for extraction, transportation, ore preparation, and storage of radioactive production waste;
+
+$Q_{oth}^{NPP}$ – other primary energy costs.
+
+Energy breeding gain largely depends on uranium content in ore and trouble-free NPP operation.
+
+## 5. DISCUSSIONS
+
+In the energy analysis of economic activity suggested by Charles Hall, the abbreviation has two spelling variants in English with two modern meanings: EROEI (energy returned on energy invested) denotes the ratio between obtained and invested energy, while EROI (energy return on investment) denotes the ratio between obtained energy and monetary investment. These ratios can accordingly be written as the following formulas:
+
+$$Er = \frac{E_2}{E_1}, \qquad (38)$$
+
+and
+
+$$Ei = \frac{E_2}{C}, \quad \text{J/rub.,} \qquad (39)$$
+
+where $Er$ – EROEI;
+
+$E_2$ – energy obtained from fuel or a device transforming the Earth's, solar, etc. energy, J;
+
+$E_1$ – energy expended to extract (produce) energy $E_2$, J;
+
+$Ei$ – EROI;
+
+$C$ – means spent to extract (produce) energy $E_2$, rub.
+
+A predator, like all living organisms (salmon being no exception), is unable to expend more energy than its food provides, i.e. all processes here occur according to the second law of thermodynamics. Any living organism can be regarded as an ordinary heat engine operating with an efficiency factor below 1, since no organism can extract all the energy from its food.
+
+In a general case, the efficiency factor for predator over a fixed period will be defined by formula:
+
+$$\eta = \frac{E_{1(\tau)}}{E_{2(\tau)}},$$
+
+where $E_{1(\tau)}$ – energy expended by predator over time $\tau$;
+
+$E_{2(\tau)}$ – energy obtained by predator from food over time $\tau$.
+
+A predator, while eating food, can move, grow, and breed, but it cannot generate energy. Hence the concept of EROEI, when extended from a predator to fuel extraction, must be considered differently. In the first case the energy expended by the predator is always less than the energy it obtains by eating food; in the second case, that of fuel extraction, the energy obtained from the fuel is always greater than the energy expended on its extraction. If the total energy expended on fuel
+---PAGE_BREAK---
+
+extraction is equal to the energy contained in the extracted fuel, it makes no sense, in energy terms, to extract fuel at a given field. In fact, EROEI thus interpreted is nothing but CRSEC. So, when utilising fossil fuel, CRSEC can be defined by formula:
+
+$$ \pi = \frac{Q_2}{Q_1}, \qquad (40) $$
+
+where $Q_2$ – amount of all energy obtained at the field, J;
+$Q_1$ – amount of energy expended over the period of carrying out all works at the field, J.
+
+It should be noted that $\pi$ is greater than 1, since $Q_2$ is not related to $Q_1$ through the second law of thermodynamics: the fuel extracted at the field is itself an energy source, i.e. an energy carrier. If $\pi$ is less than or equal to 1, the technological process becomes meaningless in energy terms, regardless of whether it relates to a helioplant or a fossil fuel deposit.
+
+In Russia, researchers under the guidance of A.F. Safronov address the problem of computing EROEI. In particular, they present EROEI computation data for a number of energy resources, mostly for American conditions, obtained by Hall and revised by Richard Heinberg as of 2009. Thus, EROEI for worldwide petroleum extraction amounts to 19, for natural gas – 10, coal – 50, bituminous sands – 5.2-5.8, shale oil – 1.5-1.4, nuclear energy – 1.1-1.5, hydropower – 11-267, wind energy – 18, photovoltaics – 3.75-10, sugar-cane ethanol – 0.8-1.7 (8-10 in Brazil), corn ethanol – 1.1-1.8, biodiesel – 1.9-9 (Safronov, 2010).
+
+As fossil fuels are extracted, the value of their EROEI decreases due to various reasons, since prolific and accessible deposits are usually developed first. This trend is also seen in Russia, for which the numerical value of EROEI was determined for three years: 31.7 in 2005, 29.9 in 2007 and 29.5 in 2008, based on the data on direct joint energy expenditure when extracting oil and gas (Safronov, 2010).
+
+For particular EROEI cases, the numerical values of $Er$ will largely depend on how accurately $E_1$ is determined. Since the development of some fields uses energy obtained at other fields, an inaccurately defined energy cost may create an illusion of a field's energy cost-effectiveness. For example, energy costs associated with works at oil and gas extraction objects are provisionally divided into groups: capital, current, and closure (Safronov, 2010). The respective work stages are construction, operation, and closure of objects within a field.
+
+EROEI will decrease with distance from a borehole and be defined by formula:
+
+$$ Er_i = \mu_{2i} \cdot Er_{uc}, \qquad (41) $$
+
+At the point of the process cycle, from extraction to the use of heat and electrical energy, where $Er_i$ becomes equal to 1, the energy value of this energy carrier ceases to exist. When energy carriers are exported, $Er_i$ can be defined up to the country's frontiers.
+
+EROEI has no significance for a seller of energy carriers, since their price is set by the market, where cheap fuel sets the pace. However, the seller's profit will be determined by the EROEI value, i.e. the higher the costs, the lower the benefit.
+
+Since both $\mu_{1i}$ and $\mu_{2i}$ are always less than 1, the value of $Er_i$, computed taking CBPEU into account, will be considerably lower than that obtained by formula (38). Allowing for CBPEU, formula (38) will be written as follows:
+
+$$ Er = \frac{E_2}{\frac{E_1}{\mu_{1i}}} \text{ or } Er = \frac{\mu_{1i} \cdot E_2}{E_1}. \qquad (42) $$
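+
+As a hypothetical numerical illustration of formula (42) (all values invented for the example): with $E_2 = 30$ units obtained per $E_1 = 1$ unit invested, a CBPEU of $\mu_{1i} = 0.4$ reduces the uncorrected EROEI of 30 to an effective value of 12:
+
+```python
+# Formula (42): EROEI corrected for CBPEU (mu_1i < 1). Hypothetical values.
+
+def eroei_cbpeu(e2, e1, mu):
+    """Er = mu * E2 / E1; since mu < 1, this is below the raw ratio E2 / E1."""
+    return mu * e2 / e1
+
+raw = 30.0 / 1.0                                # uncorrected EROEI, E2 / E1
+corrected = eroei_cbpeu(e2=30.0, e1=1.0, mu=0.4)
+print(raw, corrected)                           # 30.0 12.0
+```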
+---PAGE_BREAK---
+
+The time factor should be taken into account when computing EROEI for hydropower plants, helioplants, geothermal power stations, etc. That is, the ratio should be taken between the energy obtained, or expected to be obtained, over the whole operation period of a power unit or facility and the energy expended over the entire period from the commencement of works to the complete decommissioning of the unit or facility. CBPEU should be taken into consideration in both cases (Safronov, 2011).
+
+In the absence of the corresponding data, there is no way to account fully for all energy losses; at a minimum, energy consumption during the main process stages must be allowed for.
+
+To gain full knowledge of the energy value of energy resources when computing EROEI, data on total energy costs need to be used, with allowance for their depreciation while moving along the process flow, i.e. from fuel deposit to consumer, as defined by CBPEU.
+
+It is important to change over to such calculation methods, which would allow determination of energy cost effectiveness of using a given fuel.
+
+## 6. CONCLUSIONS
+
+A comprehensive method for determining the CBPEU of power units will make it possible to reveal the particular processes where energy losses are considerable and where it is essential to enhance the quality of energy use first. To attain the stated objective, various economic measures need to be taken, which will result in an increasing CBPEU, i.e. in improving the system's capability without increasing fuel consumption, solely by decreasing the energy losses in irreversible processes occurring in the system.
+
+Determination of energy breeding gain taking primary energy costs into consideration will enable comprehensive evaluation of a project's implementation potential regardless of its estimated parameters.
+
+## REFERENCES
+
+Boltzmann, L. (1970). Articles and speeches. Moscow: Nauka.
+
+Capellán-Pérez, I., Miguel, L.J., and de Castro, C. (2019). Dynamic Energy Return on Energy Investment (EROI) and material requirements in scenarios of global transition to renewable energies. Energy Strategy Reviews, 26, 100399.
+
+Celi, L., Della Volpe, C., Pardi, L., and Siboni, S. (2018). A new approach to calculating the «corporate» EROI. Biophysical Economics and Resource Quality, 3, 15.
+
+Cherevatskyi, D. (2017). EROI the Ukrainian coal. *Ekonomicheskiy vestnik Donbassa (Economic Bulletin of Donbass)*, 4 (50), 20-31.
+
+De Castro, C., and Capellán-Pérez, I. (2020). Standard, Point of Use, and Extended Energy Return on Energy Invested (EROI) from Comprehensive Material Requirements of Present Global Wind, Solar, and Hydro Power Technologies. *Energies*, 13 (12), 3036.
+
+Fedorovsky, N.M. (1935). *Classification of mineral resources according to their energy indices*. Moscow-Leningrad.
+
+Fersman, A.E. (1937). *Geochemistry*. Volume III. Leningrad: Khimteoret.
+
+Fiapshev, A., Kilchukova, O., and Khamokov, M. (2017). Biogas unit for agricultural enterprises. *Energy security and energy saving*, 2, 27-29.
+---PAGE_BREAK---
+
+Fiapshev, A., Kilchukova, O., Shekikhachev, Y., Khamokov, M., and Khazhmetov, L. (2018). Mathematical model of thermal processes in a biogas plant. In: ICRE 2018 International Scientific Conference 'Investment, Construction, Real Estate: New Technologies and Special-Purpose Development Priorities. MATEC Web of Conferences, 212, 010032, 1-13. https://www.matec-conferences.org
+
+Global Warming of 1.5°C. (2018). Special Report. Intergovernmental Panel on Climate Change, Geneva, Switzerland. https://www.ipcc.ch/sr15/
+
+Hall, A.S. (2017). Will EROI be the primary determinant of our economic future? The view of the natural scientist versus the economist. *Joule*, **1**, 635-638.
+
+Hall, Ch. (2008). Why EROI matters. *The Oil Drum*. http://www.theoildrum.com/node/3786
+
+Järvensivu, P., Toivanen, T., Vadén, T., Lähde, V., Majava, A., and Eronen, J.T. (2018). Governance of Economic Transition. Global Sustainable Development Report 2019. https://bios.fi/bios-governance_of_economic_transition.pdf
+
+King Hubbert, M. (1956). *Nuclear Energy and the Fossil Fuels. Drilling and Production Practice.* Washington: American Petroleum Institute.
+
+Kuznetsov, P.G. (1994). System of nutrition: common sense against genocide. *Journal of Interregional Statesmanship*, **5**, 182-184.
+
+Marx, K. (1955-1981). Written works. Vol. 35. Moscow: Politizdat.
+
+Moriarty, P. and Honnery, D. (2019). Energy Accounting for a Renewable Energy Future. *Energies*, **12** (22), 4280.
+
+Podolinsky, S.A. (1880). *Human work and its relation to energy distribution*. Vol. IV-V. Moscow: Slovo.
+
+Raugei, M., Sgouridis, S., Murphy, D., and Fthenakis, V. (2017). Energy Return on Energy Invested (EROEI) for photovoltaic solar systems in regions of moderate insolation: A comprehensive response. *Energy Policy*, **102**, 377-384.
+
+Safronov, A.F. (2010). EROEI as an indicator of the effectiveness of energy resources extraction and production. *Drilling and Oil*, **12**, 48-51.
+
+Safronov, A.F. (2011). Methodology for calculating EROEI by the example of developing the Sredneviluy gas-condensate field. *Oil and Gas Engineering*, **6**, 197-209. http://www.ogbus.ru
+
+Temukuyev, T.B. (2014). On the method of calculating EROEI allowing for the coefficient of beneficial energy use. *Economic Sciences*, **3** (112), 62-66.
+
+Timiryazev, K.A. (1948). Sun, Life, and Chlorophyll. Selected works in 4 volumes. Public lectures, speeches, and scientific research. Moscow: Ogiz-selhozgiz.
+
+Umov, N.A. (1916). Physico-chemical model of living matter. Collected works. Vol. III. Moscow: Typolithography of I.N. Kushnerev and K. http://eritage.ru/ras/view/publication/general.html?id=46906367
+
+Vernadsky, V.I. (1928). On objectives and organization of applied research work of the USSR Academy of Sciences. Leningrad: Publishing House of the USSR Academy of Sciences.
\ No newline at end of file
diff --git a/samples/texts_merged/6545431.md b/samples/texts_merged/6545431.md
new file mode 100644
index 0000000000000000000000000000000000000000..4165edfc0a33a09622d7a04e81f3cc4c5f6df946
--- /dev/null
+++ b/samples/texts_merged/6545431.md
@@ -0,0 +1,1497 @@
+
+---PAGE_BREAK---
+
+Wirtschaftswissenschaftliche Fakultät
+Faculty of Economics and Business
+Administration
+
+Working Paper, No. 69
+
+Christian Groth /
+Karl-Josef Koch /
+Thomas M. Steger
+
+When economic growth is
+less than exponential
+
+February 2008
+
+ISSN 1437-9384
+---PAGE_BREAK---
+
+# When economic growth is less than exponential†
+
+Christian Groth (University of Copenhagen and EPRU)‡
+
+Karl-Josef Koch (University of Siegen)
+
+Thomas M. Steger (University of Leipzig and CESifo Munich)
+
+This version: May 6, 2009
+
+forthcoming in: *Economic Theory*
+
+## Abstract
+
+This paper argues that growth theory needs a more general notion of “regularity” than that of exponential growth. We suggest that paths along which the rate of decline of the growth rate is proportional to the growth rate itself deserve attention. This opens up for considering a richer set of parameter combinations than in standard growth models. And it avoids the usual oversimplistic dichotomy of either exponential growth or stagnation. Allowing zero population growth in three different growth models (the Jones R&D-based model, a learning-by-doing model, and an embodied technical change model) serves as an illustration that a continuum of “regular” growth processes fills the whole range between exponential growth and complete stagnation.
+
+*Keywords:* Quasi-arithmetic growth; Regular growth; Semi-endogenous growth; Knife-edge restrictions; Learning by doing; Embodied technical change.
+
+*JEL Classification:* O31; O40; O41.
+
+†For helpful comments and suggestions we would like to thank three anonymous referees, Carl-Johan Dalgaard, Hannes Egli, Jakub Growiec, Chad Jones, Sebastian Krautheim, Ingmar Schumacher, Robert Solow, Holger Strulik and participants in the Sustainable Resource Use and Economic Dynamics (SURED) Conference, Ascona, June 2006, and an EPRU seminar, University of Copenhagen, April 2007.
+
+‡Corresponding author, University of Copenhagen, Department of Economics, Studiestraede 6, DK-1455 Copenhagen, Denmark, Tel. +45 35323028, Fax +45 35323000, chr.groth@econ.ku.dk. The activities of EPRU (Economic Policy Research Unit) are financed by a grant from The National Research Foundation of Denmark.
+---PAGE_BREAK---
+
+# 1 Introduction
+
+The notion of balanced growth, generally synonymous with exponential growth, has proved extremely useful in the theory of economic growth. This is not only because of the historical evidence (Kaldor's "stylized facts"), but also because of its convenient simplicity. Yet there may be a deceptive temptation to oversimplify and ignore other possible growth patterns. We argue there is a need to allow for a richer set of parameter constellations than in standard growth models and to look for a more general regularity concept than that of exponential growth. The motivation is the following:
+
+First, when setting up growth models researchers place severe restrictions on preferences and technology such that the resulting model is compatible with balanced growth (as pointed out by Solow, 2000, Chapters 8-9). In addition, population is either assumed to grow exponentially or to be constant. This paper demonstrates that regular long-run growth, in a sense specified below, can arise even when some of the prototype restrictions are left out.
+
+Second, standard R&D-based semi-endogenous growth models imply that the long-run per-capita growth rate is proportional to the growth rate of the labor force (Jones, 2005).¹ This class of models is frequently used for positive and normative analysis since it appears empirically plausible in many respects. And the models are consistent with more than a century of approximately exponential growth. If we employ this framework to evaluate the prospect of growth in the future, then we end up with the assertion that the growth *rate* will converge to zero. This is simply due to the fact that there must be limits to population growth, hence also to growth of human capital. The open question is then what this really implies for economic development in the future and thereby, for example, for the warranted discount rate for long-term environmental projects. This issue has not received much attention so far and the answer is not that clear at first glance. Of course, there is an alternative to the semi-endogenous growth framework, namely that of fully endogenous growth as in the first-generation R&D-based growth models of Romer (1990), Grossman and Helpman (1991), and Aghion and Howitt (1992). This approach allows of exponential growth with zero population growth. However, in spite of their path-breaking nature these models rely on the simplifying knife-edge assumption of constant returns to scale (either exactly or asymptotically) with respect to producible factors in the invention pro-
+
+¹Of course, if one digs a little deeper, it is not growth in population as such that matters. Rather, as Jones (2005) suggests, it is growth in human capital, but this ultimately depends on population growth.
+---PAGE_BREAK---
+
+duction function.² As argued, for instance by McCallum (1996), the knife-edge assumption of constant returns to scale to producible inputs should be interpreted as a simplifying approximation to the case of slightly decreasing returns (increasing returns can be ruled out because they have the nonsensical implication of infinite output in finite time, see Solow 1994). But the case of decreasing returns to producible inputs is exactly the semi-endogenous growth case.
+
+A third reason for thinking about less than exponential growth is to open up for a perspective of *sustained* growth (in the sense of output per capita going to infinity for time going to infinity) in spite of the growth *rate* approaching zero. Everything less than exponential growth often seems interpreted as a fairly bad outcome and associated with economic stagnation. For instance, in the context of the Jones (1995) model with constant population, Young (1998, n. 10) states “*Thus, even if there are intertemporal spillovers, if they are not large enough to allow for constant growth, the development of the economy grinds to a halt.*” However, to our knowledge, the case of zero population growth in the Jones model has not really been explored yet. We take the opportunity to let an analysis of this case serve as one of our illustrations that the usual dichotomy between either exponential growth or complete stagnation is too narrow. The analysis suggests that paths along which the rate of decline of the growth rate is proportional to the growth rate itself deserve attention. Indeed, this criterion will define our concept of *regular growth*. It turns out that exponential growth is the limiting case where the factor of proportionality, the “damping coefficient”, is zero. And the “opposite” limiting case is stagnation which occurs when the “damping coefficient” is infinite.
+
+To show the usefulness of this generalized regularity concept two further examples are provided. One of these is motivated by what seems to be a gap in the theoretical learning-by-doing literature. With the perspective of exponential growth, existing models either assume a very specific value of the learning parameter combined with zero population growth in order to avoid growth explosion (Barro and Sala-i-Martin, 2004, Section 4.3) or allow for a range of values for the learning parameter below that specific value, but then combined with exponential population growth (Arrow, 1962). There is an intermediate case, which to our knowledge has not been systematically explored. And this case leads to less-than-exponential, but sustained regular growth.
+
+Our third example of regular growth is intended to show that the framework is easily applicable also to more realistic and complex models. As Greenwood et
+
+²By “knife-edge assumption” is meant a condition imposed on a parameter value such that the set of values satisfying this condition has an empty interior in the space of all possible values for this parameter (see Growiec, 2007).
+---PAGE_BREAK---
+
+al. (1997) document, since World War II there has been a steady decline in the relative price of capital equipment and a secular rise in the ratio of new equipment investment to GNP. On this background we consider a model with investment-specific learning and embodied technical change, implying a persistent decline in the relative price of capital. When conditions do not allow of exponential growth, the same regularity emerges as in the two previous examples. We further sort out how and why the source of learning – be it gross or net investment – is decisive for this result.
+
+The paper is structured as follows. Section 2 introduces proportionality of the rate of decline of the growth rate and the growth rate itself as defining “regular growth”. It is shown that this regularity concept nests, inter alia, exponential growth, arithmetic growth, and stagnation as special cases. Sections 3, 4, and 5 present our three economic examples which, by allowing for a richer set of parameter constellations than in standard growth models, give rise to growth patterns satisfying our regularity criterion, yet being non-exponential. Asymptotic stability of the regular growth pattern is established in all three examples. Finally, Section 6 summarizes the findings.
+
+## 2 Regular Growth
+
+Growth theory explains long-run economic development as some pattern of regular growth. The most common regularity concept is that of exponential growth. Occasionally another regularity pattern turns up, namely that of arithmetic growth. Indeed, a Ramsey growth model with AK technology and CARA preferences features arithmetic GDP per capita growth (e.g., Blanchard and Fischer, 1989, pp. 44-45). Similarly, under Hartwick's rule, a model with essential, non-renewable resources (but without population growth, technical change, and capital depreciation) features arithmetic growth of capital (Solow, 1974; Hartwick, 1977). In similar settings, Mitra (1983), Pezzey (2004), and Asheim et al. (2007) consider growth paths of the form $x(t) = x(0)(1 + \mu t)^{\omega}$, $\mu, \omega > 0$, which, by the last-mentioned authors, is called “quasi-arithmetic growth”. In these analyses the quasi-arithmetic growth pattern is associated with exogenous quasi-arithmetic growth in either population or technology. In this way results by Dasgupta and Heal (1979, pp. 303-308) on optimal growth within a classical utilitarian framework with non-renewable resources, constant population, and constant technology are extended. Hakenes and Irmen (2007) also study exogenous quasi-arithmetic growth paths. Their angle is to evaluate the plausibility of equations of motion for technology on the basis of the ultimate forward-looking as well as backward-looking behavior of the implied path.
+---PAGE_BREAK---
+
+In our view there is a rationale for a concept of regular growth, subsuming exponential growth and arithmetic growth as well as the range between these two. Also some kind of less-than-arithmetic growth should be included. We label this general concept *regular growth*, for reasons that will become clear below. The example we consider in Section 3 illustrates that by varying one parameter (the elasticity of knowledge creation with respect to the level of existing knowledge), the whole range between complete stagnation and exponential growth of the knowledge stock is spanned. Furthermore, the example shows how a quasi-arithmetic growth pattern for knowledge, capital, output, and consumption may arise *endogenously* in a two-sector, knowledge-driven growth model. The second and third examples, discussed in Sections 4 and 5, respectively, show that models of learning by doing and learning by investing may also endogenously generate quasi-arithmetic growth.
+
+To describe our suggested concept of regular growth, a few definitions are
+needed. Let the variable $x(t)$ be a positively-valued differentiable function of time
+$t$. Then the growth rate of $x(t)$ at time $t$ is:
+
+$$g_1(t) \equiv \frac{\dot{x}(t)}{x(t)},$$
+
+where $\dot{x}(t) = dx(t)/dt$. We call $g_1(t)$ the first-order growth rate. Since we seek a more general concept of regular growth than exponential growth, we allow $g_1(t)$ to be time-variant. Indeed, the regularity we look for relates precisely to the way growth rates change over time. Presupposing $g_1(t)$ is strictly positive within the time range considered, let $g_2(t)$ denote the second-order growth rate of $x(t)$ at time $t$, i.e.,
+
+$$g_2(t) \equiv \frac{\dot{g}_1(t)}{g_1(t)}.$$
+
+We suggest the following criterion as defining *regular growth*:
+
+$$g_2(t) = -\beta g_1(t) \quad \text{for all } t \ge 0, \qquad (1)$$
+
+where $\beta \ge 0$. That is, the second-order growth rate is proportional to the first-order growth rate with a non-positive factor of proportionality. The coefficient $\beta$ is called the *damping coefficient*, since it indicates the rate of damping in the growth process.
+
+Let $x_0$ and $\alpha$ denote the initial values $x(0) > 0$ and $g_1(0) > 0$, respectively. The unique solution of the second-order differential equation (1) may then be expressed as:
+
+$$x(t) = x_0 (1 + \alpha \beta t)^{\frac{1}{\beta}}. \qquad (2)$$
+---PAGE_BREAK---
+
+Note that this solution has at least one well-known special case, namely $x(t) = x_0e^{\alpha t}$ for $\beta = 0$.³ Moreover, it should be observed that, given $x_0$, (2) is also the unique solution of the first-order equation:
+
+$$ \dot{x}(t) = \alpha x_0^\beta x(t)^{1-\beta}, \quad \alpha > 0, \beta \ge 0, \qquad (3) $$
+
+which is an autonomous Bernoulli equation. This gives an alternative and equivalent characterization of regular growth. The feature that $x(t)$ here has a constant exponent fits well with economists' preference for constant elasticity functional forms.
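As a numerical cross-check (an illustrative sketch, not part of the paper's analysis; all parameter values here are made up), one can integrate the Bernoulli equation (3) with a standard Runge-Kutta scheme and confirm that it reproduces the closed-form regular growth path (2):

```python
# Sketch: verify numerically that the Bernoulli equation (3),
#   dx/dt = alpha * x0**beta * x**(1 - beta),
# has the regular growth path (2) as its solution. Illustrative values only.

def closed_form(t, x0, alpha, beta):
    """Regular growth path (2): x(t) = x0 * (1 + alpha*beta*t)**(1/beta)."""
    return x0 * (1.0 + alpha * beta * t) ** (1.0 / beta)

def integrate_bernoulli(x0, alpha, beta, t_end, n_steps=10_000):
    """Fourth-order Runge-Kutta integration of equation (3)."""
    f = lambda x: alpha * x0 ** beta * x ** (1.0 - beta)
    h = t_end / n_steps
    x = x0
    for _ in range(n_steps):
        k1 = f(x)
        k2 = f(x + 0.5 * h * k1)
        k3 = f(x + 0.5 * h * k2)
        k4 = f(x + h * k3)
        x += (h / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)
    return x

x0, alpha = 1.0, 0.05
for beta in (0.5, 1.0, 2.0):   # more-than-, exactly, and less-than-arithmetic
    numeric = integrate_bernoulli(x0, alpha, beta, t_end=100.0)
    exact = closed_form(100.0, x0, alpha, beta)
    assert abs(numeric - exact) / exact < 1e-6
```

The same check works for any $\beta > 0$; the case $\beta = 0$ has to be handled separately as the exponential path $x(t) = x_0e^{\alpha t}$.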
+
+The simple formula (2) describes a family of growth paths, the members of which are indexed by the damping coefficient $\beta$. Figure 1 illustrates this family of regular growth paths.⁴ There are three well-known special cases. For $\beta = 0$, we have $g_1(t) = \alpha$, a positive constant. This is the case of exponential growth. At the other extreme we have complete stagnation, i.e., the constant path $x(t) = x_0$. This can be interpreted as the limiting case $\beta \to \infty$.⁵ Arithmetic growth, i.e., $\dot{x}(t) = \alpha x_0$ for all $t \ge 0$, is the special case $\beta = 1$.
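The three special cases can also be verified directly from formula (2); a small numerical sketch (parameter values illustrative only):

```python
# Sketch: limiting behaviour of the family (2).
#   beta -> 0   : exponential growth x0*exp(alpha*t)
#   beta  = 1   : arithmetic growth with constant increment alpha*x0
#   beta -> inf : stagnation at x0
import math

def x_path(t, x0=1.0, alpha=0.05, beta=1.0):
    """Regular growth path (2)."""
    return x0 * (1.0 + alpha * beta * t) ** (1.0 / beta)

t = 10.0
assert abs(x_path(t, beta=1e-8) - math.exp(0.05 * t)) < 1e-6   # exponential limit
assert abs(x_path(t, beta=1.0) - (1.0 + 0.05 * t)) < 1e-12     # arithmetic growth
assert abs(x_path(t, beta=1e6) - 1.0) < 1e-3                   # near-stagnation
```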
+
+Figure 1: A family of growth paths indexed by $\beta$.
+
+Table 1 lists these three cases and gives labels also to the intermediate ranges for the value of the damping coefficient $\beta$. Apart from being written in another
+
+³To see this, use L'Hôpital's rule for "0/0" on $\ln(x(t)) = \ln(x_0) + \frac{1}{\beta}\ln(1+\alpha\beta t)$.
+
+⁴Figure 1 is based on $\alpha = 0.05$ and $x_0 = 1$. In this case, the time paths do not intersect. Intersections occur for $x_0 < 1$. However, for large $t$ the picture always is as shown in Figure 1.
+
+⁵Use L'Hôpital's rule for "$\infty/\infty$" on $\ln x(t)$. If we allow $g_1(0) = 0$, stagnation can of course also be seen as the case $\alpha = 0$.
+---PAGE_BREAK---
+
+(and perhaps less “family-oriented”) way, the “quasi-arithmetic growth” formula
+in Asheim et al. (2007) mentioned above, is subsumed under these intermediate
+ranges.
+
+Table 1: Regular growth paths: $g_2(t) = -\beta g_1(t) \quad \forall t \ge 0$, $\beta \ge 0$, $g_1(0) = \alpha > 0$.
+
+| Label | Damping coefficient | Time path |
+| --- | --- | --- |
+| Limiting case 1: exponential growth | $\beta = 0$ | $x(t) = x_0 e^{\alpha t}$, $\alpha > 0$ |
+| More-than-arithmetic growth | $0 < \beta < 1$ | $x(t) = x_0 (1 + \alpha\beta t)^{1/\beta}$, $\alpha > 0$ |
+| Arithmetic growth | $\beta = 1$ | $x(t) = x_0 (1 + \alpha t)$, $\alpha > 0$ |
+| Less-than-arithmetic growth | $1 < \beta < \infty$ | $x(t) = x_0 (1 + \alpha\beta t)^{1/\beta}$, $\alpha > 0$ |
+| Limiting case 2: stagnation | $\beta = \infty$ | $x(t) = x_0$ |
+
+As to the case $β > 1$, notice that though the increase in $x$ per time unit is
+falling over time, it remains positive; there is sustained growth in the sense that
+$x(t) \to \infty$ for $t \to \infty$.⁶ Formally, also the case of $β < 0$ (more-than-exponential
+growth) could be included in the family of regular growth paths. However, this
+case should be considered as only relevant for a description of possible phases of
+transitional dynamics. A growth path (for, say, GDP per capita) with $β < 0$ is
+explosive in a very dramatic sense: it leads to infinite output in finite time (Solow,
+1994).
+
+It is clear that with $0 < \beta < \infty$, the solution formula (2) cannot be extended, without bound, *backward* in time. For $t = -(\alpha\beta)^{-1} \equiv \bar{t}$, we get $x(\bar{t}) = 0$, and thus, according to (3), $x(t) = 0$ for all $t \le \bar{t}$. This should not, however, be considered a necessarily problematic feature. A certain growth regularity need not be applicable to all periods in history. It may apply only to specific historical epochs characterized by a particular institutional environment.⁷
+
+By adding one parameter (the damping coefficient $\beta$), we have succeeded in spanning the whole range of sustained growth patterns between exponential growth and complete stagnation. Our conjecture is that there are no other one-parameter extensions of exponential growth with this property (but we have no proof). In any case, as witnessed by the examples in the next sections, the extension has
+
+⁶Empirical investigation of post-WWII GDP per-capita data of a sample of OECD countries yields positive damping coefficients between 0.17 (UK) and 1.43 (Germany). The associated initial (annual) growth rates in 1951 are 2.3% (UK) and 12.4% (Germany), respectively. The fit of the regular growth formula is remarkable. This is not a claim, of course, that this data is better described as regular growth with damping than as transition to exponential growth. Yet, discriminating between the two should be possible in principle.
+
+⁷Here we disagree with Hakenes and Irmen (2007) who find a growth formula (for technical knowledge) implausible, if its unbounded extension backward in time implies a point where knowledge vanishes.
+---PAGE_BREAK---
+
+relevance for real-world economic problems. It is of course possible – and likely – that one will come across economic growth problems that will motivate adding a second parameter or introducing other functional forms. Exploring such extensions is beyond the scope of this paper.⁸
+
+Before we discuss our economic examples of regular growth, a word on terminology is appropriate. Our reason for introducing the term “regular growth” for the described class of growth paths is that we want an inclusive name, whereas for example “quasi-arithmetic growth” will probably in general be taken to exclude the limiting cases of exponential growth and complete stagnation.
+
+## 3 Example 1: R&D-based growth
+
+As our first example of the regularity described above we consider an optimal growth problem within the Romer (1990)-Jones (1995) framework. The labor force (= population), $L$, is governed by $L = L_0e^{nt}$, where $n \ge 0$ is constant (a standard assumption, whether $n = 0$, as in Romer, or $n > 0$, as in Jones). The idea of the example is to follow Jones' relaxation of Romer's assumption on the elasticity of knowledge creation with respect to existing knowledge, but in contrast to Jones to allow $n = 0$ as well as a vanishing pure rate of time preference. We believe the case $n = 0$ is pertinent not only for theoretical reasons, but also because it is of practical interest in view of the projected stationarity of the population of developed countries as a whole from 2005 onwards (United Nations, 2005).
+
+The technology of the economy is described by constant elasticity functional forms:⁹
+
+$$Y = A^{\sigma} K^{\alpha} (uL)^{1-\alpha}, \quad \sigma > 0, 0 < \alpha < 1, \qquad (4)$$
+
+$$\dot{K} = Y - cL, \quad K(0) = K_0 > 0 \text{ given,} \qquad (5)$$
+
+$$\dot{A} = \gamma A^{\varphi} (1-u)L, \quad \gamma > 0, \varphi \le 1, A(0) = A_0 > 0 \text{ given,} \qquad (6)$$
+
+where $Y$ is aggregate manufacturing output (net of capital depreciation), $A$ society's stock of "knowledge", $K$ society's capital, $u$ the fraction of the labor force employed in manufacturing, and $c$ per-capita consumption; $\sigma, \alpha, \gamma$ and $\varphi$ are con-
+
+⁸However, an interesting paper by Growiec (2008) takes steps in this direction. We may add that this paper, as well as the constructive comments by its author on the working paper version of the present article, has taught us that *reducing* the number of problematic knife-edge restrictions is not the same as “getting rid of” knife-edge assumptions concerning parameter values and/or functional forms.
+
+⁹From now, the explicit timing of the variables is suppressed when not needed for clarity.
+---PAGE_BREAK---
+
+stant parameters. The criterion functional of the social planner is:
+
+$$U_0 = \int_0^\infty \frac{c^{1-\theta} - 1}{1-\theta} L e^{-\rho t} dt,$$
+
+where $\theta > 0$ and $\rho \ge n$. In the spirit of Ramsey (1928) we include the case $\rho = 0$, since giving less weight to future than to current generations might be deemed “ethically indefensible”. When $\rho = n$, there exist feasible paths for which the integral $U_0$ does not converge. In that case our optimality criterion is the catching-up criterion, see Case 4 below. The social planner chooses a plan $(c(t), u(t))_{t=0}^\infty$, where $c(t) > 0$ and $u(t) \in [0, 1]$, to optimize $U_0$ under the constraints (4), (5) and (6) as well as $K \ge 0$ and $A \ge 0$, for all $t \ge 0$. From now, the (first-order) growth rate of any positive-valued variable $v$ will be denoted $g_v$.
+
+*Case 1:* $\varphi = 1$, $\rho > n = 0$. This is the fully-endogenous growth case considered by Romer (1990).¹⁰ An interior optimal solution converges to exponential growth with growth rate $g_c = (1/\theta)[\sigma\gamma L/(1-\alpha) - \rho]$ and $u = 1 - (1-\alpha)g_c/(\sigma\gamma L)$.¹¹
+
+*Case 2:* $\varphi < 1$, $\rho > n > 0$. This is the semi-endogenous growth case considered by Jones (1995). An interior optimal solution converges to exponential growth with growth rate $g_c = \sigma n/[(1-\alpha)(1-\varphi)]$ and $u = [(\theta-1)\sigma n + (1-\alpha)(1-\varphi)\rho]/[\theta\sigma n + (1-\alpha)(1-\varphi)\rho]$.¹²
+
+*Case 3:* $\varphi < 1$, $\rho > n = 0$. In this case the economy ends up in complete stagnation (constant $c$) with all labor in the manufacturing sector, as is indicated by setting $n=0$ in the formula for $u$ in Case 2. The explanation is the combination of a) no population growth to counteract the diminishing marginal returns to knowledge ($\partial \dot{A}/\partial A \to 0$ for $A \to \infty$), and b) a positive constant rate of time preference.
+
+*Case 4:* $\varphi < 1$, $\rho = n = 0$. This is the canonical Ramsey case. Depending on the values of $\varphi$, $\sigma$, $\alpha$ and $\theta$, a continuum of dynamic processes for $A, K, Y$, and $c$ emerges which fill the whole range between stagnation and exponential growth. Since this case does not seem investigated in the literature, we shall spell it out here. The optimality criterion is the *catching-up criterion*: a feasible path $(\hat{K}, \hat{A}, \hat{c}, \hat{u})_{t=0}^\infty$ is catching-up optimal if
+
+$$ \liminf_{t \to \infty} \left( \int_0^t \frac{\hat{c}^{1-\theta}-1}{1-\theta} d\tau - \int_0^t \frac{c^{1-\theta}-1}{1-\theta} d\tau \right) \geq 0 $$
+
+¹⁰ Contrary to Romer (1990), though, we permit $\sigma \neq 1 - \alpha$ since that still allows stable fully endogenous growth and, in addition, avoids blurring countervailing effects (see Alvarez-Pelaez and Groth, 2005).
+
+¹¹ With $\varphi = 1$, an $n > 0$ would generate an implausible ever-increasing growth rate.
+
+¹² The Jones (1995) model also includes a negative duplication externality in R&D, which is not of importance for our discussion. Convergence of this model is shown in Arnold (2006). In both Case 1 and Case 2 boundedness of the utility integral $U_0$ requires that parameters are such that $(1-\theta)g_c < \rho - n$.
+---PAGE_BREAK---
+
+for all feasible paths $(K, A, c, u)_{t=0}^{\infty}$.
+
+Let $p$ be the shadow price of knowledge in terms of the capital good. Then, the value ratio $x \equiv pA/K$ is capable of being stationary in the long run. Indeed, as shown in Appendix A, the first-order conditions of the problem lead to:
+
+$$ \dot{x} = \frac{\gamma LA^{\varphi-1}}{1-\alpha} \left\{ (\alpha-s)xu - [\sigma + (1-\alpha)(1-\varphi)]u + (1-\alpha)(1-\varphi) \right\} x, \quad (7) $$
+
+where $s = 1 - cL/Y$ is the saving rate; further,
+
+$$ \dot{u} = \frac{\gamma LA^{\varphi-1}}{1-\alpha} \left[ -(1-s)xu + \sigma u + \frac{1-\alpha}{\alpha}\sigma \right] u, \quad \text{and} \quad (8) $$
+
+$$ \dot{s} = \frac{\gamma LA^{\varphi-1}}{1-\alpha} \left[ -\left(\frac{1-\theta}{\theta}\alpha + 1-s\right)xu + \frac{1-\alpha}{\alpha}\sigma \right] (1-s). \quad (9) $$
+
+Provided $\theta > 1$, this dynamic system has a unique steady state:
+
+$$ x^* = \frac{\sigma\theta}{\alpha(\theta-1)} > \frac{\sigma}{\alpha}, \quad u^* = \frac{(\theta-1)[\sigma+\alpha(1-\varphi)]}{\theta\sigma + (\theta-1)\alpha(1-\varphi)} \in (0,1), \quad (10) $$
+
+$$ s^* = \frac{\alpha(\sigma+1-\varphi)}{\theta[\sigma+\alpha(1-\varphi)]} \in (\frac{\alpha}{\theta}, \frac{1}{\theta}). $$
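As a sanity check on (10), the steady state can be evaluated numerically; a minimal sketch with illustrative parameter values of our own choosing (they satisfy $\theta > 1$):

```python
# Sketch: evaluate the steady state (10) and verify the stated bounds.
# Parameter values are illustrative only; phi stands for varphi in the text.
alpha, sigma, theta, phi = 0.3, 0.7, 2.0, 0.5

x_star = sigma * theta / (alpha * (theta - 1.0))
u_star = ((theta - 1.0) * (sigma + alpha * (1.0 - phi))
          / (theta * sigma + (theta - 1.0) * alpha * (1.0 - phi)))
s_star = alpha * (sigma + 1.0 - phi) / (theta * (sigma + alpha * (1.0 - phi)))

assert x_star > sigma / alpha                  # x* > sigma/alpha
assert 0.0 < u_star < 1.0                      # interior labor allocation
assert alpha / theta < s_star < 1.0 / theta    # saving-rate bounds
```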
+
+The resulting paths for $A, K, Y$, and $c$ feature regular growth with positive damping. This is seen in the following way. First, given $u = u^*$, the innovation equation (6) is a Bernoulli equation of form (3) and has the solution:
+
+$$ A(t) = [A_0^{1-\varphi} + (1-\varphi)\gamma(1-u^*)Lt]^{1/(1-\varphi)} = A_0 (1+\mu t)^{1/(1-\varphi)}, \quad (11) $$
+
+where $\mu \equiv (1-\varphi)\gamma(1-u^*)LA_0^{\varphi-1} > 0$. Second, the optimality condition saying that, at the margin, labor must be equally valuable in its two uses implies the same value of the marginal product of labor in the two sectors, that is, $p\gamma A^\varphi = (1-\alpha)Y/(uL)$. Substituting (4) into this equation, we see that
+
+$$ x \equiv \frac{pA}{K} = \frac{(1-\alpha)A^{\sigma+1-\varphi}}{\gamma K^{1-\alpha}(uL)^{\alpha}}. \quad (12) $$
+
+Thus, solving for $K$ yields, in the steady state,
+
+$$ K(t) = (u^*L)^{\frac{-\alpha}{1-\alpha}} \left(\frac{1-\alpha}{\gamma x^*}\right)^{\frac{1}{1-\alpha}} A_0^{\frac{\sigma+1-\varphi}{1-\alpha}} (1+\mu t)^{\frac{\sigma+1-\varphi}{(1-\alpha)(1-\varphi)}}. \quad (13) $$
+
+The resultant path for $Y$ is:
+
+$$ \begin{aligned} Y(t) &= A(t)^{\sigma} K(t)^{\alpha} (u^*L)^{1-\alpha} \\ &= (u^*L)^{\frac{1-2\alpha}{1-\alpha}} \left(\frac{1-\alpha}{\gamma x^*}\right)^{\frac{\alpha}{1-\alpha}} A_0^{\frac{\sigma+\alpha(1-\varphi)}{1-\alpha}} (1+\mu t)^{\frac{\sigma+\alpha(1-\varphi)}{(1-\alpha)(1-\varphi)}}. \end{aligned} \quad (14) $$
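The closed-form paths can be checked for mutual consistency: plugging (11) and (13) into the production function (4) must reproduce (14). A numerical sketch, with all parameter values chosen by us for illustration only:

```python
# Sketch: consistency of the steady-state paths (11), (13), (14).
# All parameter values are illustrative; phi stands for varphi in the text.
alpha, sigma, theta, phi = 0.3, 0.7, 2.0, 0.5
gamma, L, A0 = 0.8, 2.0, 1.3

x_star = sigma * theta / (alpha * (theta - 1.0))
u_star = ((theta - 1.0) * (sigma + alpha * (1.0 - phi))
          / (theta * sigma + (theta - 1.0) * alpha * (1.0 - phi)))
mu = (1.0 - phi) * gamma * (1.0 - u_star) * L * A0 ** (phi - 1.0)

def A(t):        # knowledge path, eq. (11)
    return A0 * (1.0 + mu * t) ** (1.0 / (1.0 - phi))

def K(t):        # capital path, eq. (13)
    return ((u_star * L) ** (-alpha / (1.0 - alpha))
            * ((1.0 - alpha) / (gamma * x_star)) ** (1.0 / (1.0 - alpha))
            * A0 ** ((sigma + 1.0 - phi) / (1.0 - alpha))
            * (1.0 + mu * t) ** ((sigma + 1.0 - phi) / ((1.0 - alpha) * (1.0 - phi))))

def Y_prod(t):   # production function (4) evaluated on the path
    return A(t) ** sigma * K(t) ** alpha * (u_star * L) ** (1.0 - alpha)

def Y_closed(t): # closed form, eq. (14)
    return ((u_star * L) ** ((1.0 - 2.0 * alpha) / (1.0 - alpha))
            * ((1.0 - alpha) / (gamma * x_star)) ** (alpha / (1.0 - alpha))
            * A0 ** ((sigma + alpha * (1.0 - phi)) / (1.0 - alpha))
            * (1.0 + mu * t) ** ((sigma + alpha * (1.0 - phi))
                                 / ((1.0 - alpha) * (1.0 - phi))))

for t in (0.0, 5.0, 50.0):
    assert abs(Y_prod(t) - Y_closed(t)) / Y_closed(t) < 1e-12

# K/Y = [K(0)/Y(0)] * (1 + mu*t): the capital-output ratio grows arithmetically
ratio = lambda t: K(t) / Y_prod(t)
assert abs(ratio(10.0) - ratio(0.0) * (1.0 + mu * 10.0)) / ratio(10.0) < 1e-12
```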
+---PAGE_BREAK---
+
+Finally, per capita consumption is given by $c(t) = (1-s^*)Y(t)/L$. The assumption that $\theta > 1$ (which seems to be consistent with the microeconometric evidence, see Attanasio and Weber, 1995) is needed to avoid postponement *forever* of the consumption return to R&D.¹³
+
+When $0 < \varphi < 1$ (the “standing on the shoulders” case), the damping coefficient for knowledge growth equals $1 - \varphi < 1$, i.e., knowledge features more-than-arithmetic growth. When $\varphi < 0$ (the “fishing out” case), the damping coefficient is $1 - \varphi > 1$, and knowledge features less-than-arithmetic growth. In the intermediate case, $\varphi = 0$, knowledge features arithmetic growth. The coefficient $\mu$, which equals the initial growth rate times the damping coefficient, could be called the *growth momentum*. It is seen to incorporate a *scale effect* from $L$. This is as expected, in view of the non-rival character of technical knowledge.
+
+The time paths of $K$ and $Y$ also feature regular growth, though with a damping coefficient different from that of technology. The time path of $Y$, to which the path of $c$ is proportional, features more-than-arithmetic growth if and only if $\sigma > (1-2\alpha)(1-\varphi)$. A sufficient condition for this is that $\frac{1}{2} \le \alpha < 1$. It is interesting that $\varphi > 0$ is not needed; the reason is that even if knowledge exhibits less-than-arithmetic growth ($\varphi < 0$), this may be compensated by high enough production elasticities with respect to knowledge or capital in the manufacturing sector. Notice also that the capital-output ratio always features exactly arithmetic growth along the regular growth path of the economy, i.e., independently of the size relation between the parameters. Indeed, $K/Y = [K(0)/Y(0)](1+\mu t)$. This parallels Hartwick's rule (Solow, 1974). A mirror image of this is that the marginal product of capital always approaches zero for $t \to \infty$, a property not surprising in view of $\rho = 0$.
+
+Is the regular growth path robust to small disturbances in the initial conditions? The answer is yes: the regular growth path is locally saddle-point stable. That is, if the pre-determined initial value of the ratio, $A^{\sigma+1-\varphi}/K^{1-\alpha}$, is in a small neighborhood of its steady state value (which is $\gamma L^{\alpha}x^*u^{*\alpha}/(1-\alpha)$), then the dynamic system (7), (8), and (9) has a unique solution $(x_t, u_t, s_t)_{t=0}^{\infty}$ and this solution converges to the steady state $(x^*, u^*, s^*)$ for $t \to \infty$ (see Appendix A). Thus, the time paths of A, K, Y, and c approach regular growth in the long run.
+
+Of course, exactly constant population is an abstraction but, for example, logistic population growth should over time lead to approximately the same pattern. Admittedly, also the nil time-preference rate is a particular case, but in our opinion not the least interesting one in view of its benchmark character as an
+
+¹³ The conjectured necessary and sufficient transversality conditions (see Appendix A) require $\theta > (\sigma+1-\varphi)/[\sigma + \alpha(1-\varphi)]$, which we assume to be satisfied. This condition is a little stronger than the requirement $\theta > 1$.
+---PAGE_BREAK---
+
+expression of a canonical ethical principle.¹⁴
+
+## 4 Example 2: Learning by doing
+
+In the first example regular non-exponential growth arose in the Ramsey case with a zero rate of time preference. Are there examples with a positive rate of time preference? This question was raised by Chad Jones (private correspondence), who kindly suggested that we look at learning by doing. The answer turns out to be yes.
+
+Assume there is learning by doing in the following form:
+
+$$ \dot{A} = \gamma A^{\varphi} L, \quad \gamma > 0, \varphi < 1, \quad A(0) = A_0 > 0 \text{ given,} \qquad (15) $$
+
+where, as before, $A$ is an index of productivity at time $t$ and $L$ is the labor force (= population).¹⁵ As noted in the introduction, the case $\varphi = 1$, combined with constant $L$, and the case $\varphi < 1$ combined with exponential growth in $L$, are well understood. And the case $\varphi > 1$ leads to explosive growth. But the remaining case, $\varphi < 1$, combined with constant $L$, has to our knowledge not received much attention, possibly because of the absence of a conceptual framework for the kind of regularity which arises in this case. Moreover, this case is also of interest because its dynamics turn out to reappear as a sub-system of the more elaborate example with embodied technical change in the next section.
+
+The Bernoulli equation (15) has the solution
+
+$$ A(t) = \left[ A_0^{1-\varphi} + (1-\varphi)\gamma L t \right]^{1/(1-\varphi)}. \qquad (16) $$
+
+Thus, $A$ features regular growth. We wish to see whether, in the problem below, also $Y, K$, and $c$ feature regular growth when $\rho > 0$.¹⁶
+
+The social planner chooses a plan $(c(t))_{t=0}^{\infty}$ so as to maximize
+
+$$ U_0 = \int_0^{\infty} \frac{c^{1-\theta} - 1}{1-\theta} L e^{-\rho t} dt \quad \text{s.t.} $$
+
+$$ \dot{K} = Y - cL - \delta K, \quad \delta \ge 0, \quad K(0) = K_0 > 0 \text{ given,} \qquad (17) $$
+
+where
+
+$$ Y = A^{\sigma} K^{\alpha} L^{1-\alpha}, \quad \sigma > 0, \ 0 < \alpha < 1, \qquad (18) $$
+
+¹⁴The entire spectrum of regular growth patterns can also be obtained in an elementary version of the Jones (1995) model with no capital, but two types of (immobile) labor, i.e., unskilled labor in final goods production and skilled labor in R&D.
+
+¹⁵As an alternative to our “learning-by-doing” interpretation of (15), one might invoke a “population-breeds-ideas” hypothesis. In his study of the very-long run history of population Kremer (1993) combines such an interpretation of (15) with a Malthusian story of population dynamics.
+
+¹⁶In order to allow potential scale effects to be visible, we do not normalize $L$ to 1.
+---PAGE_BREAK---
+
+with the time path of A given by (16). Whereas the previous example assumed
+that *net* output was described by a Cobb-Douglas production function, here it
+can be *gross* output as well. The current-value Hamiltonian is
+
+$$
+H(K, c, \lambda, t) = \frac{c^{1-\theta} - 1}{1-\theta} L + \lambda(A^{\sigma}K^{\alpha}L^{1-\alpha} - cL - \delta K),
+$$
+
+where $\lambda$ is the co-state variable associated with physical capital. Necessary first-
+order conditions for an interior solution are:
+
+$$
+\begin{align}
+\frac{\partial H}{\partial c} &= c^{-\theta}L - \lambda L = 0, \tag{19} \\
+\frac{\partial H}{\partial K} &= \lambda \left( \alpha \frac{Y}{K} - \delta \right) = -\dot{\lambda} + \rho \lambda. \tag{20}
+\end{align}
+$$
+
+These conditions, combined with the transversality condition,
+
+$$
+\lim_{t \to \infty} \lambda(t)e^{-\rho t} K(t) = 0, \quad (21)
+$$
+
+are sufficient for an optimal solution. Owing to strict concavity of the Hamiltonian
+with respect to (K, c) this solution will be unique, if it exists (see Appendix B).
+
+It remains to show existence of such a path. Combining (19) and (20) gives
+the Keynes-Ramsey rule
+
+$$
+g_c = \frac{1}{\theta} \left( \alpha \frac{Y}{K} - \delta - \rho \right). \qquad (22)
+$$
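The combining step can be spelled out: from (19), $c^{-\theta} = \lambda$, so log-differentiation with respect to time gives $-\theta g_c = \dot{\lambda}/\lambda$, while (20) rearranges to $\dot{\lambda}/\lambda = \rho + \delta - \alpha Y/K$. Equating the two expressions for $\dot{\lambda}/\lambda$,

$$ -\theta g_c = \rho + \delta - \alpha\frac{Y}{K}, $$

which, solved for $g_c$, is (22).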
+
+Let $v = cL/K$ and log-differentiate $v$ with respect to time to get
+
+$$
+g_v = \frac{1}{\theta}(\alpha z - \delta - \rho) - (z - v - \delta),
+$$
+
+where
+
+$$
+z \equiv \frac{Y}{K} = A^{\sigma} K^{\alpha-1} L^{1-\alpha}.
+$$
+
+Log-differentiating *z* with respect to time gives
+
+$$
+g_z = \sigma\gamma A^{\varphi-1} L + (\alpha - 1)(z - v - \delta).
+$$
+
+Thus we have a system in $v$ and $z$ :
+
+$$
+\begin{align*}
+\dot{v} &= \left[ \frac{1}{\theta}(\alpha z - \delta - \rho) - (z - v - \delta) \right] v, \\
+\dot{z} &= \left[ \sigma\gamma A^{\varphi-1} L - (1-\alpha)(z-v-\delta) \right] z,
+\end{align*}
+$$
+
+where $v$ is a jump variable and $z$ a pre-determined variable. We have $\sigma\gamma A^{\varphi-1}L \rightarrow 0$ for $t \rightarrow \infty$. There is an asymptotic steady state, $(v^*, z^*)$, where
+
+$$
+v^* = \frac{\rho}{\alpha} + \frac{1-\alpha}{\alpha}\delta,
+$$
+
+$$
+z^* = v^* + \delta = \frac{\rho + \delta}{\alpha}.
+$$
+---PAGE_BREAK---
+
+The investment-capital ratio, $(Y-cL)/K \equiv z-v$, in this asymptotic steady state is $z^* - v^* = \delta$. The associated Jacobian is
+
+$$J = \begin{bmatrix} v^* & (\frac{\alpha}{\theta} - 1)v^* \\ (1-\alpha)z^* & -(1-\alpha)z^* \end{bmatrix},$$
+
+with determinant $\det J = -(1-\alpha)v^*z^* - (\frac{\alpha}{\theta}-1)(1-\alpha)v^*z^* = -\frac{\alpha}{\theta}(1-\alpha)v^*z^* < 0$. The eigenvalues of $J$ are thus of opposite sign.
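A numerical sketch (parameter values illustrative, chosen by us) confirming the saddle-point configuration:

```python
# Sketch: asymptotic steady state (v*, z*) and eigenvalues of the Jacobian J.
# Parameter values are illustrative only.
import math

alpha, theta, rho, delta = 0.3, 2.0, 0.02, 0.05

v_star = rho / alpha + (1.0 - alpha) / alpha * delta
z_star = v_star + delta
assert abs(z_star - (rho + delta) / alpha) < 1e-12   # z* = (rho+delta)/alpha

# J = [[v*, (alpha/theta - 1) v*], [(1-alpha) z*, -(1-alpha) z*]]
a, b = v_star, (alpha / theta - 1.0) * v_star
c, d = (1.0 - alpha) * z_star, -(1.0 - alpha) * z_star
tr, det = a + d, a * d - b * c
assert abs(det + alpha / theta * (1.0 - alpha) * v_star * z_star) < 1e-12

disc = math.sqrt(tr * tr - 4.0 * det)   # real, since det < 0
eig1, eig2 = 0.5 * (tr + disc), 0.5 * (tr - disc)
assert eig1 > 0.0 > eig2                # opposite signs: saddle point
```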
+
+Figure 2 contains an illustrative phase diagram. The line marked "$\dot{z}=0$" is the locus for $\dot{z}=0$ only in the long run. The path (with arrows) through the point $E$ is the "long-run saddle path". If the level of $A^{\varphi-1}$ remained at its initial value, $A_0^{\varphi-1}$, the point $E'$ would be a steady state and have a saddle path going through it (as illustrated by the dashed line through $E'$). But over time, $A^{\varphi-1}$ decreases and approaches zero. Hence, the point $E'$ shifts and approaches the long-run steady state, $E$.¹⁷
+
+Figure 2: Phase diagram for the learning-by-doing model.
+
+¹⁷ We shall not here pursue the potentially interesting dynamics going on *temporarily*, if $z_0$ is above $z^*$ but below the value associated with the point $E'$.
+---PAGE_BREAK---
+
+The following relations must hold asymptotically:
+
+$$ \frac{Y}{K} = \frac{A^{\sigma} K^{\alpha} L^{1-\alpha}}{K} = z^{*} \quad \text{so that} $$
+
+$$ K^{1-\alpha} = \frac{A^{\sigma} L^{1-\alpha}}{z^{*}} \quad \text{or} $$
+
+$$
+\begin{aligned}
+K(t) &= (z^*)^{-\frac{1}{1-\alpha}} A(t)^{\frac{\sigma}{1-\alpha}} L = \left(\frac{\alpha}{\delta + \rho}\right)^{\frac{1}{1-\alpha}} L \left[ A_0^{1-\varphi} + (1-\varphi)\gamma L t \right]^{\frac{\sigma}{(1-\alpha)(1-\varphi)}} \\
+&= \left(\frac{\alpha}{\delta + \rho}\right)^{\frac{1}{1-\alpha}} L A_0^{\frac{\sigma}{1-\alpha}} (1+\mu t)^{\frac{\sigma}{(1-\alpha)(1-\varphi)}}, \quad \text{where } \mu \equiv (1-\varphi)\gamma A_0^{\varphi-1} L > 0.
+\end{aligned}
+$$
+
+Thus, in the long run $K$ features regular growth with positive damping. The damping coefficient is $\frac{(1-\alpha)(1-\varphi)}{\sigma}$, which may be above or below one, depending on $\sigma$. In the often considered benchmark case, $\sigma = 1 - \alpha$, the damping coefficient is less than one if $\varphi > 0$. Then $K$ features more-than-arithmetic growth. The growth momentum is $\mu$ and is seen to incorporate a scale effect (reflecting the non-rival character of learning). Although $K$ is growing, the growth rate of $K$ tends to zero. The investment-capital ratio, $(Y - cL)/K$, tends to $\delta$; thus, the saving rate, $s = 1 - cL/Y$, tends to $\delta K/Y = \delta/z^*$.
+
+As to manufacturing output we have in the long run
+
+$$ Y(t) = z^* K(t) = \left( \frac{\alpha}{\delta + \rho} \right)^{\frac{\alpha}{1-\alpha}} L A_0^{\frac{\sigma}{1-\alpha}} (1 + \mu t)^{\frac{\sigma}{(1-\alpha)(1-\varphi)}}, $$
+
+which is, of course, also regular growth with positive damping. A similar pattern is then true for the marginal product of labor $w(t) = (1-\alpha)Y(t)/L$. The output-capital ratio tends to a constant in the long run. Per capita consumption, $c(t) = (1-s(t))Y(t)/L$, tends to $(1-\delta/z^*)Y(t)/L$. Finally, the net marginal product of capital, $\alpha Y(t)/K(t) - \delta$, tends to
+
+$$ \alpha z^* - \delta = \rho. $$
+
+This explains why the growth rate of consumption tends to zero.
+
+Although the asymptotic steady state is never reached, the conclusion is that $K$, $Y$, and $c$ in the long run are arbitrarily close to a regular growth pattern with a damping coefficient, $\frac{(1-\alpha)(1-\varphi)}{\sigma}$, and a growth momentum, $\mu$, the same for all three variables. In spite of the absence of exponential growth, key ratios such as $Y/K$ and $wL/Y$ tend to be constant in the long run.
+
+The purpose of this example was to show that a positive rate of time preference, $\rho$, is no hindrance to such an outcome.¹⁸ Given that the regular growth
+
+¹⁸Presupposing $\delta > 0$, qualitatively the same outcome – asymptotic regular growth – emerges for $\rho = 0$ (although in this case we have to use catching-up as optimality criterion).
+---PAGE_BREAK---
+
+pattern was inherited from the independent technology path described by (16),
+this conclusion is perhaps no surprise. In the next section we consider an example
+where there is mutual dependence between the development of technology and
+the remainder of the economy.
+
+## 5 Example 3: Investment-specific learning and embodied technical change
+
+Motivated by the steady decline of the relative price of capital equipment and
+the secular rise in the ratio of new equipment investment to GNP, Greenwood
+et al. (1997) developed a tractable model with *embodied* technical change. The
+framework has afterwards been applied and extended in different directions. One
+such application is that of Boucekkine et al. (2003).¹⁹ They show that a relative
+shift from general to investment-specific learning externalities may explain the
+simultaneous occurrence of a faster decline in the price of capital equipment and
+a productivity slowdown in the 1970s after the first oil price shock.
+
+In this section we present a related model and show that regular, but less-than-
+exponential growth may arise. To begin with we allow for population growth in
+order to clarify the role of this aspect for the long-run results. Notation is as
+above, unless otherwise indicated. The technology of the economy is described by
+
+$$
+Y = K^{\alpha} L^{1-\alpha}, \quad 0 < \alpha < 1, \tag{23}
+$$
+
+$$
+\dot{K} = qI - \delta K, \quad \delta > 0, K(0) = K_0 \text{ given}, \tag{24}
+$$
+
+$$
+q = \tilde{\gamma} \left( \int_{-\infty}^{t} I(\tau)\, d\tau \right)^{\beta}, \quad \tilde{\gamma} > 0,\ 0 < \beta < (1-\alpha)/\alpha,\ q(0) = q_0 \text{ given}, \tag{25}
+$$
+
+where $L = L_0 e^{nt}$, $n \ge 0$, and $K_0, q_0$, and $L_0$ are positive. The new variables are
+$I \equiv Y - cL$, i.e., gross investment, and $q$ which denotes the quality (productivity)
+of newly produced investment goods. There is learning by investing, but new
+learning is incorporated only in newly produced investment goods (this is the
+embodiment hypothesis). Thus, over time each new investment good gives rise to
+a greater and greater addition to the capital stock, $K$, measured in constant
+efficiency units. The quality $q$ of investment goods of the current vintage is
+determined by cumulative aggregate gross investment as indicated by (25). The
+parameter $\beta$ is named the “learning parameter”. The upper bound on $\beta$ is brought
+in to avoid explosive growth (infinite output in finite time). We assume capital
+
+¹⁹ We are thankful to Solow for suggesting that embodied technical change might fit our approach and to a referee for suggesting in particular a look at the Boucekkine et al. (2003) paper.
+---PAGE_BREAK---
+
+goods cannot be converted back into consumption goods. So gross investment, *I*, is always non-negative.
+
+As we will see, with this technology and the same preferences as in the previous example, including a positive rate of time preference, the following holds. (a) If $n > 0$, the social planner's solution features exponential growth. (b) If $n = 0$, the solution features asymptotic quasi-arithmetic growth; in the limiting case $\beta = (1-\alpha)/\alpha$, asymptotic exponential growth arises, whereas the case $\beta > (1-\alpha)/\alpha$ implies explosive growth. Before proceeding it is worth pointing out two key differences between the present model and that of Boucekkine et al. (2003). In their paper *q* is determined by cumulative *net* investment. We find it more plausible to have learning associated with *gross* investment. And in fact this difference turns out to be crucial for whether $n = 0$ leads to quasi-arithmetic growth or merely stagnation. Another difference is that in the spirit of our general endeavor we impose no knife-edge condition on the learning parameter.²⁰
+
+Since not even the exponential growth case of this model seems to have been explored in the literature, our exposition will cover that case as well as the less-than-exponential growth case. Many of the basic formulas are common to the two cases but imply different conclusions depending on the value of *n*.
+
+## 5.1 The general context
+
+By taking the time derivative on both sides of (25) we get the more convenient differential form
+
+$$ \dot{q} = \gamma q^{(\beta-1)/\beta} I = \gamma q^{(\beta-1)/\beta} (Y - cL), \quad \gamma = \tilde{\gamma}^{1/\beta} \beta. \tag{26} $$
+
+Given $\rho > n$ and initial positive $K(0)$ and $q(0)$, the social planner chooses a plan $(c(t))_{t=0}^{\infty}$, where $0 < c(t) \le Y(t)/L(t)$, so as to maximize
+
+$$ U_0 = \int_0^\infty \frac{c^{1-\theta} - 1}{1-\theta} L e^{-\rho t} dt $$
+
+subject to (24), (26), and non-negativity of $K$ for all $t$. From the first-order conditions for an interior solution we find (see Appendix C) that the Keynes-Ramsey rule takes the form
+
+$$ g_c = \frac{1}{\theta}(\alpha z - m\delta - \rho), \tag{27} $$
+
+where $z \equiv qY/K$ (the modified output-capital ratio) and $m \equiv pq$ with $p$ denoting the shadow price of the capital good in terms of the consumption good. Thus, $z$
+
+²⁰Differences of minor importance from our perspective include, first, that Boucekkine et al. (2003) let the embodied learning effect come from accumulated (net) investment *per capita* (presumably to avoid any kind of scale effect), second, that they combine this effect with a disembodied learning effect.
+---PAGE_BREAK---
+
+is a modified output-capital ratio and $m$ is the shadow price of newly produced investment goods in terms of the consumption good. Let $v \equiv qcL/K$ (the modified consumption-capital ratio), so that, by (24), the growth rate of $K$ is $g_K = z-v-\delta$. Further, let $h \equiv \gamma Y/q^{1/\beta}$, so that, by (26), the growth rate of $q$ is $g_q = (1-v/z)h$; that is, $(1-v/z)$ is the saving rate, which we will denote $s$, and $h$ is the highest possible growth rate of the quality of newly produced investment goods. Then, combining the first-order conditions and the dynamic constraints (24) and (26) yields the dynamic system:
+
+$$
+\begin{aligned}
+\dot{m} &= \left[ \frac{1-m}{m}(\delta m - \alpha z) + \left(1 - \frac{v}{z}\right)h \right] m, && (28) \\
+\dot{v} &= \left[ \frac{1}{\theta}(\alpha z - \delta m - \rho) - (z - v - \delta - n) + \left(1 - \frac{v}{z}\right)h \right] v, && (29) \\
+\dot{z} &= \left[ -(1-\alpha)(z - v - \delta - n) + \left(1 - \frac{v}{z}\right)h \right] z, && (30) \\
+\dot{h} &= \left[ \alpha(z - v - \delta - n) + n - \frac{1}{\beta}\left(1 - \frac{v}{z}\right)h \right] h. && (31)
+\end{aligned}
+$$
+
+Consider a steady state, $(m^*, v^*, z^*, h^*)$, of this system. In steady state, if $n > 0$, the economy follows a balanced growth path (BGP for short) with constant growth rates of $K$, $q$, $Y$, and $c$. Indeed, from (30) and (31) we find the growth rate of $K$ to be
+
+$$ g_K^* = z^* - v^* - \delta = \frac{(1-\alpha)(1+\beta)}{1-\alpha(1+\beta)} n > n \quad \text{iff } n > 0. \quad (32) $$
+
+The inequality is due to the parameter condition
+
+$$ \alpha < 1/(1+\beta) \quad (33) $$
+
+which is equivalent to $\beta < (1-\alpha)/\alpha$, the condition assumed in (25). Then, from (30),
+
+$$ g_q^* = s^*h^* = \left(1 - \frac{v^*}{z^*}\right)h^* = \frac{(1-\alpha)\beta}{1-\alpha(1+\beta)}n = \frac{\beta}{1+\beta}g_K^*. \quad (34) $$
+
+In view of constancy of $h \equiv \gamma Y/q^{1/\beta}$,
+
+$$ g_Y^* = \frac{1}{\beta}g_q^* = \frac{1}{1+\beta}g_K^*. \quad (35) $$
+
+That is, owing to the embodiment of technical progress $Y$ does not grow as fast as $K$. This is in line with the empirical evidence mentioned above. Inserting (27), (32), and (34) into (29) we find
+
+$$ g_c^* = \frac{1}{\theta}(\alpha z^* - m^*\delta - \rho) = \frac{\alpha\beta}{1-\alpha(1+\beta)}n > 0 \quad \text{iff } n > 0. \quad (36) $$
+---PAGE_BREAK---
+
+This result is of course also obtained if we use constancy of $v^*/z^*$ to conclude that
+$g_c^* = g_Y^* - n$. To ensure boundedness of the discounted utility integral we impose
+the parameter restriction
+
+$$
+(1-\theta) \frac{\alpha\beta}{1-\alpha(1+\beta)} n < \rho - n, \quad (37)
+$$
+
+which is equivalent to $(1 - \theta)g_c^* < \rho - n$.
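+
+As a numerical sanity check, the balanced-growth relations in (32) and (34)-(36) and the restriction (37) can be verified for an illustrative parameter constellation (the values below are ours, not a calibration taken from the paper):
+
```python
# Illustrative parameters (not from the paper); they satisfy condition (33),
# i.e. alpha < 1/(1+beta), as well as rho > n.
alpha, beta, n = 1/3, 1.0, 0.01
theta, rho = 2.0, 0.04

D = 1 - alpha*(1 + beta)                 # common denominator, positive by (33)
g_K = (1 - alpha)*(1 + beta)/D * n       # (32)
g_q = (1 - alpha)*beta/D * n             # (34)
g_Y = g_q/beta                           # (35)
g_c = alpha*beta/D * n                   # (36)

assert g_K > n > 0                                  # inequality in (32)
assert abs(g_q - beta/(1 + beta)*g_K) < 1e-12       # (34)
assert abs(g_Y - g_K/(1 + beta)) < 1e-12            # (35)
assert abs(g_c - (g_Y - n)) < 1e-12                 # g_c* = g_Y* - n
assert (1 - theta)*g_c < rho - n                    # restriction (37)
```
+
+Any parameter combination satisfying (33) and $\rho > n$ should pass the same checks.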
+
+With these findings we get from (28)
+
+$$
+m^* = \frac{\alpha(\theta g_c^* + \rho)}{(1 - \alpha + \alpha\theta)g_c^* + \alpha\rho} = \frac{\theta\alpha\beta n + [1 - \alpha(1 + \beta)]\rho}{(1 - \alpha + \alpha\theta)\beta n + [1 - \alpha(1 + \beta)]\rho} \le 1, \quad (38)
+$$
+
+if $n \ge 0$, respectively. The parameter restriction (37) implies $m^* > \alpha$. Next, from
+(36),
+
+$$
+z^* = \frac{\theta\beta}{1 - \alpha(1 + \beta)} n + \frac{\rho + \delta m^*}{\alpha} > 0, \quad (39)
+$$
+
+so that, from (32),
+
+$$
+v^* = \frac{\theta\beta - (1-\alpha)(1+\beta)}{1-\alpha(1+\beta)}n + \frac{\rho+\delta m^*}{\alpha} - \delta, \quad (40)
+$$
+
+and
+
+$$
+s^* = 1 - \frac{v^*}{z^*} = \alpha \frac{(1-\alpha)(1+\beta)n + [1-\alpha(1+\beta)]\delta}{\theta\alpha\beta n + [1-\alpha(1+\beta)](\rho+\delta m^*)} \in (0,1). \quad (41)
+$$
+
+That $s^* > 0$ is immediate from the formula, since both its numerator and denominator
+are positive. And $s^* < 1$ is equivalent to $v^* > 0$, which is ensured by the
+parameter restriction (37). Finally, we have from
+(34)
+
+$$
+h^* = \frac{g_q^*}{s^*} = \frac{(1-\alpha)\beta n}{[1-\alpha(1+\beta)]s^*} \ge 0 \quad \text{for } n \ge 0, \qquad (42)
+$$
+
+respectively.
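+
+The steady-state formulas (38)-(41) can likewise be checked numerically; the sketch below uses the same kind of illustrative parameter values (ours, not the paper's) and verifies the internal consistency of the expressions:
+
```python
# Illustrative parameter values (not from the paper), chosen to satisfy
# conditions (33) and (37).
alpha, beta, n = 1/3, 1.0, 0.01
theta, rho, delta = 2.0, 0.04, 0.05

D = 1 - alpha*(1 + beta)                 # positive by (33)
g_c = alpha*beta/D * n                   # (36)
g_K = (1 - alpha)*(1 + beta)/D * n       # (32)

m = alpha*(theta*g_c + rho)/((1 - alpha + alpha*theta)*g_c + alpha*rho)  # (38)
assert alpha < m <= 1

z = theta*beta/D * n + (rho + delta*m)/alpha                              # (39)
v = (theta*beta - (1 - alpha)*(1 + beta))/D * n \
    + (rho + delta*m)/alpha - delta                                       # (40)
assert abs(v - (z - g_K - delta)) < 1e-12   # consistent with g_K* = z*-v*-delta

s = alpha*((1 - alpha)*(1 + beta)*n + D*delta) \
    / (theta*alpha*beta*n + D*(rho + delta*m))                            # (41)
assert abs(s - (1 - v/z)) < 1e-12 and 0 < s < 1
assert abs((alpha*z - m*delta - rho)/theta - g_c) < 1e-12                 # (36)
```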
+
+In a BGP the shadow price $p$ ($\equiv m/q$) of the capital good in terms of the
+consumption good is falling since $m$ is constant while $q$ is rising. Indeed,
+
+$$
+g_p^* = -g_q^* = - \frac{(1-\alpha)\beta}{1-\alpha(1+\beta)} n = - \frac{\beta}{1+\beta} g_K^*. \tag{43}
+$$
+
+Thus, at the same time as $Y/K$ is falling, the *value* capital-output ratio $Y/(pK)$ stays constant in a BGP. If $r$ denotes the social planner's marginal net rate of return in terms of the consumption good, we have $r = [\partial Y / \partial K - (p\delta - \dot{p})] / p$. Since $p \equiv m/q$ and $z \equiv qY/K$, we have $(\partial Y / \partial K) / p = \alpha Y / (pK) = \alpha z^* / m^*$. Along the BGP, therefore,
+
+$$
+r^* = \alpha \frac{z^*}{m^*} - (\delta - g_p^*) = \frac{\theta \alpha \beta}{1 - \alpha(1 + \beta)} n + \rho = \theta g_c^* + \rho, \quad (44)
+$$
+---PAGE_BREAK---
+
+as expected. Since the investment good and the consumption good are produced by the same technology, we can alternatively calculate $r$ as the marginal net rate of return to investment: $r = (\partial Y/\partial K - p\delta)\partial \dot{K}/\partial I = (\alpha Y/K - p\delta)q$. In the BGP we then get $r^* = \alpha z^* - m^*\delta$, which according to (36) amounts to the same as (44).
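+
+That the two routes to the rate of return coincide can be confirmed numerically (illustrative parameters, ours):
+
```python
# Illustrative parameters (not from the paper), satisfying (33) and (37).
alpha, beta, n = 1/3, 1.0, 0.01
theta, rho, delta = 2.0, 0.04, 0.05

D = 1 - alpha*(1 + beta)
g_c = alpha*beta/D * n                       # (36)
g_q = (1 - alpha)*beta/D * n                 # (34)
m = alpha*(theta*g_c + rho)/((1 - alpha + alpha*theta)*g_c + alpha*rho)  # (38)
z = theta*beta/D * n + (rho + delta*m)/alpha # (39)

# (44): return on capital valued at p, using g_p* = -g_q* from (43) ...
r_value = alpha*z/m - (delta + g_q)
# ... versus the marginal net rate of return to investment:
r_invest = alpha*z - m*delta
assert abs(r_value - r_invest) < 1e-12
assert abs(r_invest - (theta*g_c + rho)) < 1e-12   # equals theta*g_c* + rho
```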
+
+We have hereby shown that if the learning parameter satisfies (33), a steady state of the dynamic system is feasible and features exponential semi-endogenous growth if $n > 0$.²¹ On the other hand, violation of (33) combined with a positive $n$ implies a growth potential so enormous that a steady state of the system is infeasible and growth tends to be explosive. But what if $n = 0$?
+
+## 5.2 The case with zero population growth
+
+With $n = 0$ the formulas above are still valid. As a result the growth rates $g_K^*, g_q^*, g_c^*$, and $g_p^*$ are all zero, whereas $m^* = 1$, $z^* = (\rho+\delta)/\alpha$, $v^* = (\rho+\delta)/\alpha - \delta$, $s^* = \alpha\delta/(\rho+\delta) = \delta/z^*$, and $h^* = 0$. By definition we have $h = \gamma Y/q^{1/\beta} > 0$ for all $t$. So the vanishing value of $h^*$ tells us that the economic system can never attain the steady state. We will now show, however, that the system converges towards this steady state, which is therefore an asymptotic steady state.
+
+When $n=0$ and $\alpha < 1/(1+\beta)$, it follows for purely technological reasons that $\lim_{t\to\infty} h = 0$ (for details, see Appendix C). This implies that for $t \to \infty$ the dynamics of $m$, $v$, and $z$ approach the simpler form
+
+$$
+\begin{aligned}
+\dot{m} &= (1-m)(\delta m - \alpha z), \\
+\dot{v} &= \left[ \frac{1}{\theta}(\alpha z - \delta m - \rho) - (z-v-\delta) \right] v, \\
+\dot{z} &= -(1-\alpha)(z-v-\delta)z.
+\end{aligned}
+ $$
+
+The associated Jacobian is
+
+$$ J = \begin{bmatrix} \rho & 0 & 0 \\ -\frac{\delta}{\theta}v^* & v^* & (\frac{\alpha}{\theta}-1)v^* \\ 0 & (1-\alpha)z^* & -(1-\alpha)z^* \end{bmatrix}. $$
+
+This is block-triangular and so the eigenvalues are $\rho$ and those of the lower right $2 \times 2$ sub-matrix of $J$. Note that this sub-matrix is identical to the Jacobian in the learning-by-doing example of Section 4. Accordingly, its eigenvalues are of opposite sign. Since $m$ and $v$ are jump variables and $z$ is pre-determined, it follows that the asymptotic steady state is locally saddle-point stable.²²
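+
+The saddle-point structure of this limiting system can be illustrated numerically (the parameter values below are ours, purely illustrative):
+
```python
import numpy as np

# Illustrative parameters (not from the paper); the n = 0 case.
alpha, theta, rho, delta = 1/3, 2.0, 0.04, 0.05

z = (rho + delta)/alpha      # z* = (rho + delta)/alpha
v = z - delta                # v* follows from g_K* = z* - v* - delta = 0

# Jacobian of the limiting (m, v, z) system at the asymptotic steady state:
J = np.array([
    [rho,             0.0,            0.0],
    [-delta/theta*v,  v,              (alpha/theta - 1.0)*v],
    [0.0,             (1 - alpha)*z,  -(1 - alpha)*z],
])
eig = np.linalg.eigvals(J)

assert np.any(np.isclose(eig, rho))       # rho is an eigenvalue (block structure)
assert sum(e.real < 0 for e in eig) == 1  # exactly one stable root, matching the
assert sum(e.real > 0 for e in eig) == 2  # single pre-determined variable z
```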
+
+²¹The standard transversality conditions are satisfied at least if $\theta \ge 1$ (see Appendix C). Owing to non-concavity of the maximized Hamiltonian, however, we have not been able to establish sufficient conditions for optimality.
+
+²²The unique converging path unconditionally satisfies the standard transversality conditions, see Appendix C.
+---PAGE_BREAK---
+
+For $t \to \infty$ we therefore have $s \equiv 1 - v/z \to 1 - v^*/z^* \equiv s^*$ and $K \to L(q/z^*)^{1/(1-\alpha)}$ (from the definition of $z$). So, from (26) and (23) it follows that ultimately
+
+$$ \dot{q} = \gamma q^{\frac{\beta-1}{\beta}} s^* K^{\alpha} L^{1-\alpha} = \gamma L s^* z^{*\frac{-\alpha}{1-\alpha}} q^{1-\frac{1-\alpha(1+\beta)}{(1-\alpha)\beta}} \equiv C q^{1-\xi}, \quad (45) $$
+
+where $C$ and $\xi$ are implicitly defined constants. This Bernoulli equation has the solution
+
+$$ q(t) = (q_0^\xi + \xi Ct)^{\frac{1}{\xi}} = q_0(1 + \mu t)^{\frac{1}{\xi}}, \quad \text{where } \mu = \xi C q_0^{-\xi} = \xi L \gamma \delta \left(\frac{\alpha}{\rho + \delta}\right)^{\frac{1}{1-\alpha}} q_0^{-\xi}, $$
+
+using the solutions for $s^*$ and $z^*$ above. This shows that in the long run the productivity of newly produced investment goods features regular growth with damping coefficient $\xi = [1 - \alpha(1 + \beta)] / [(1 - \alpha)\beta] > 0$ and growth momentum $\mu$ (which, as expected, is seen to incorporate a scale effect reflecting the non-rival character of learning). The corresponding long-run path for capital is
+
+$$ K(t) = L \left( \frac{q}{z^*} \right)^{\frac{1}{1-\alpha}} = L \left( \frac{\alpha}{\rho + \delta} \right)^{\frac{1}{1-\alpha}} q_0^{\frac{1}{1-\alpha}} (1 + \mu t)^{\frac{1}{(1-\alpha)\xi}} $$
+
+and for output
+
+$$ Y(t) = K(t)^\alpha L^{1-\alpha} = L \left( \frac{\alpha}{\rho + \delta} \right)^{\frac{\alpha}{1-\alpha}} q_0^{\frac{\alpha}{1-\alpha}} (1 + \mu t)^{\frac{\alpha}{(1-\alpha)\xi}}. $$
+
+The damping coefficient for $Y$ is thus $(1-\alpha)\xi/\alpha = [1-\alpha(1+\beta)]/(\alpha\beta)$, so that more-than-arithmetic growth arises if $\frac{1}{2}(1-\alpha)/\alpha < \beta < (1-\alpha)/\alpha$ and less-than-arithmetic growth if $\beta$ is beneath the lower end of this interval. The same is then true for the marginal product of labor, $w(t) = (1-\alpha)Y(t)/L$, and for per capita consumption, $c(t) = (1-s(t))Y(t)/L$, which tends to $(1-\delta/z^*)Y(t)/L$. For the capital-output ratio we ultimately have $K(t)/Y(t) = q(t)/z^*$, which implies more-than-arithmetic growth if $\beta > 1-\alpha$ and less-than-arithmetic growth if $\beta < 1-\alpha$.
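+
+The quasi-arithmetic path can also be checked by direct integration. The sketch below (illustrative parameters; $L$, $\gamma$, and $q_0$ normalised to 1, so the $q_0$ factor in $\mu$ drops out) integrates the Bernoulli equation $\dot{q} = Cq^{1-\xi}$ from (45) with a Runge-Kutta scheme and compares the result with the closed-form solution:
+
```python
# Illustrative parameters (not from the paper); n = 0 and condition (33) holds.
alpha, beta = 1/3, 1.0
rho, delta = 0.04, 0.05
L, gamma, q0 = 1.0, 1.0, 1.0     # scale factors normalised to 1

xi = (1 - alpha*(1 + beta))/((1 - alpha)*beta)   # damping coefficient of q
s_star = alpha*delta/(rho + delta)               # saving rate in the n = 0 case
z_star = (rho + delta)/alpha
C = gamma*L*s_star*z_star**(-alpha/(1 - alpha))  # constant in (45)
mu = xi*C*q0**(-xi)                              # growth momentum (q0 = 1 here)

def f(q):
    # right-hand side of the Bernoulli equation (45)
    return C*q**(1 - xi)

# Classical RK4 integration up to time T:
q, t, dt, T = q0, 0.0, 0.01, 50.0
while t < T - 1e-9:
    k1 = f(q); k2 = f(q + dt/2*k1); k3 = f(q + dt/2*k2); k4 = f(q + dt*k3)
    q += dt/6*(k1 + 2*k2 + 2*k3 + k4)
    t += dt

q_closed = q0*(1 + mu*T)**(1/xi)    # closed-form solution
assert abs(q - q_closed)/q_closed < 1e-6
```
+
+With $\alpha = 1/3$ and $\beta = 1$ we get $\xi = 1/2$, so $q$ grows asymptotically like $t^2$, $K$ like $t^3$, and $Y$ like $t$.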
+
+A new interesting facet compared with the learning-by-doing example of Section 4 is that the shadow price, $p$, of capital goods remains falling, although at a decreasing rate. This follows from the fact that the shadow price, $m \equiv pq$, of newly produced investment goods in terms of the consumption good tends to a constant at the same time as $q$ is growing, although at a decreasing rate. Finally, the value output-capital ratio $Y/(pK)$ tends to the constant $(qY/K)/m = z^*/m^* = z^* = (\rho + \delta)/\alpha$ and the marginal net rate of return to investment tends to $r^* = \alpha Y/(pK) - \delta = \rho$.
+
+These results hold when, in addition to $n=0$, we have $\alpha < 1/(1+\beta)$. In the limiting case, $\alpha = 1/(1+\beta)$, the growth formulas above no longer hold and
+---PAGE_BREAK---
+
+instead exponential growth arises. Indeed, the system (28), (29), (30), and (31)
+is still valid and so is (45) in a steady state of the system. But now $\xi = 0$. We
+therefore have in a steady state that $\dot{q} = Cq$, which has the solution $q(t) = q_0 e^{Ct}$,
+where $C \equiv \gamma L s^* z^{*\frac{-\alpha}{1-\alpha}} > 0$. By constancy of $h$ in the steady state, $g_Y = \alpha g_K$
+$= g_q/\beta = C/\beta = \alpha C/(1-\alpha)$, so that $Y$ and $K$ also grow exponentially. This is
+the fully-endogenous growth case of the model. If instead $\alpha > 1/(1+\beta)$, we get
+$\xi < 0$ in (45), implying explosive growth, an implausible scenario.
+
+We conclude this section with a remark on why, when exponential growth can-
+not be sustained in a model, sometimes quasi-arithmetic growth results and some-
+times complete stagnation. In the present context, where we focus on learning, it
+is the source of learning that matters. Suppose that, contrary to our assumption
+above, learning is associated with *net* investment, as in Boucekkine et al. (2003).
+If with respect to the value of the learning parameter we rule out both the knife-
+edge case leading to exponential growth and the explosive case, then $n = 0$ will
+lead to complete stagnation. Even if there is an incentive to maintain the capital
+stock, this requires no net investment and so learning tends to stop. When learn-
+ing is associated with *gross* investment, however, maintaining the capital stock
+implies sustained learning. In turn, this induces more investment than needed to
+replace wear and tear and so capital accumulates, although at a declining rate.
+Even if there are diminishing marginal returns to capital, this is countervailed
+by the rising productivity of investment goods due to learning. Similarly, in the
+learning-by-doing example of Section 4, where learning is simply associated with
+working, learning occurs even if the capital stock is just maintained. Therefore,
+instead of mere stagnation we get quasi-arithmetic growth.
+
+# 6 Conclusion
+
+The search for exponential growth paths can be justified by analytical simplicity
+and the approximate constancy of the long-run growth rate for more than a cen-
+tury in, for example, the US. Yet this paper argues that growth theory needs a
+more general notion of regularity than that of exponential growth. We suggest
+that paths along which the rate of decline of the growth rate is proportional to the
+growth rate itself deserve attention; this criterion defines our concept of regular
+growth. Exponential growth is the limiting case where the factor of proportional-
+ity, the "damping coefficient", is zero. When the damping coefficient is positive,
+there is less-than-exponential growth, yet this growth exhibits a certain regularity
+and is sustained in the sense that $Y/L \to \infty$ for $t \to \infty$. We believe that such
+a broader perspective on growth will prove particularly useful for discussions of
+the prospects of economic growth in the future, where population growth (and
+---PAGE_BREAK---
+
+thereby the expansion of the ultimate source of new ideas) is likely to come to an end.
+
+The main advantages of the generalized regularity concept are as follows: (1) The concept allows researchers to reduce the number of problematic parameter restrictions, which underlie both standard neoclassical and endogenous growth models. (2) Since the resulting dynamic process has one more degree of freedom compared to exponential growth, it is at least as plausible in empirical terms. (3) The concept covers a continuum of sustained growth processes which fill the whole range between exponential growth and complete stagnation, a range which may deserve more attention in view of the likely future demographic development in the world. (4) As our analyses of zero population growth in the Jones (1995) model, a learning-by-doing model, and an embodied technical change model show, falling growth rates need not mean that economic development grinds to a halt. (5) Finally, at least for these three examples we have demonstrated not only the presence of the generalized regularity pattern, but also the asymptotic stability of this pattern.
+
+The examples considered are based on a representative agent framework. Our conjecture is that with heterogeneous agents the generalized notion of regular growth could be of use as well. Likewise, an elaboration of the embodied technical change approach of Section 5 might be of empirical interest. For example, Solow (1996) indicates that vintage effects tend to be more visible against a background of less-than-exponential growth. As Solow has also suggested,²³ there is an array of “behavioral” assumptions waiting for application within growth theory, in particular growth theory without the straitjacket of exponential growth.
+
+# 7 Appendix
+
+**A. The canonical Ramsey example** This appendix derives the results reported for Case 4 in Section 3. The Hamiltonian for the optimal control problem is:
+
+$$H(K, A, c, u, \lambda_1, \lambda_2, t) = \frac{c^{1-\theta}-1}{1-\theta}L + \lambda_1(Y-cL) + \lambda_2\gamma A^{\varphi}(1-u)L,$$
+
+where $Y = A^{\sigma}K^{\alpha}(uL)^{1-\alpha}$ and $\lambda_1$ and $\lambda_2$ are the co-state variables associated with physical capital and knowledge, respectively. Applying the catching-up optimality criterion, necessary first-order conditions (see Seierstad and Sydsaeter, 1987, p.
+
+²³Private communication.
+---PAGE_BREAK---
+
+232-34) for an interior solution are:
+
+$$ \frac{\partial H}{\partial c} = c^{-\theta}L - \lambda_1 L = 0, \quad (46) $$
+
+$$ \frac{\partial H}{\partial u} = \lambda_1(1-\alpha)\frac{Y}{u} - \lambda_2\gamma A^{\varphi}L = 0, \quad (47) $$
+
+$$ \frac{\partial H}{\partial K} = \lambda_1 \alpha \frac{Y}{K} = -\dot{\lambda}_1, \quad (48) $$
+
+$$ \frac{\partial H}{\partial A} = \lambda_1 \sigma \frac{Y}{A} + \lambda_2 \varphi \gamma A^{\varphi-1} (1-u)L = -\dot{\lambda}_2. \quad (49) $$
+
+Combining (46) and (48) gives the Keynes-Ramsey rule
+
+$$ g_c = \frac{1}{\theta} \alpha A^{\sigma} K^{\alpha-1} (uL)^{1-\alpha}. \quad (50) $$
+
+Given the definition $p = \lambda_2/\lambda_1$, (47), (48), and (49) yield
+
+$$ g_p = \alpha A^\sigma K^{\alpha-1} (uL)^{1-\alpha} - \frac{\sigma \gamma A^{\varphi-1} uL}{1-\alpha} - \varphi \gamma A^{\varphi-1} (1-u)L. \quad (51) $$
+
+Let $x \equiv pA/K$. Log-differentiating $x$ w.r.t. time and using (47), (6), (5), and (4) gives (7). Log-differentiating (47) w.r.t. time, using (51), (5), (4) and (6), gives (8). Finally, log-differentiating $1-s \equiv cL/Y$, using (50), (4), (6) and (5), gives (9).
+
+In the text we defined $\mu \equiv (1-\varphi)\gamma(1-u^*)LA_0^{\varphi-1}$.
+
+**Lemma 1.** In a steady state of the system (7), (8), and (9)
+
+$$ \lambda_1(t)K(t) = \lambda_1(0)K_0(1+\mu t)^{\omega}, \quad \text{and} \quad (52) $$
+
+$$ \lambda_2(t)A(t) = \lambda_2(0)A_0(1+\mu t)^{\omega}, \quad (53) $$
+
+where
+
+$$ \omega = \frac{\sigma + 1 - \varphi - \theta [\sigma + \alpha(1-\varphi)]}{(1-\alpha)(1-\varphi)}. $$
+
+*Proof.* As shown in the text, in a steady state of the system we have $Y(t)/K(t) = (Y(0)/K_0)(1+\mu t)^{-1}$ so that
+
+$$ \int_0^t \frac{Y(\tau)}{K(\tau)} d\tau = \frac{Y(0)}{K_0} \mu^{-1} \ln(1+\mu t) = \frac{\theta [\sigma + \alpha(1-\varphi)]}{\alpha(1-\alpha)(1-\varphi)} \ln(1+\mu t), $$
+
+where the latter equality follows from (13), (11), (10), and the definition of $\mu$. Therefore, by (48) and (13),
+
+$$ \begin{aligned} \lambda_1(t)K(t) &= \lambda_1(0)e^{-\alpha \int_0^t \frac{Y(\tau)}{K(\tau)} d\tau} K_0(1+\mu t)^{\frac{\sigma+1-\varphi}{(1-\alpha)(1-\varphi)}} \\ &= \lambda_1(0)K_0(1+\mu t)^{\frac{\sigma+1-\varphi}{(1-\alpha)(1-\varphi)} - \frac{\theta[\sigma+\alpha(1-\varphi)]}{(1-\alpha)(1-\varphi)}}, \end{aligned} $$
+---PAGE_BREAK---
+
+which proves (52).
+
+From (49) and $p \equiv \lambda_2/\lambda_1$ follows that in steady state
+
+$$ \frac{\dot{\lambda}_2}{\lambda_2} = -\frac{\sigma Y}{pA} - \varphi \gamma A^{\varphi-1} (1-u^*) L = -\gamma \left( \frac{\sigma u^*}{1-\alpha} + \varphi (1-u^*) \right) L A_0^{\varphi-1} (1+\mu t)^{-1}, $$
+
+where the latter equality follows from (4), (12), and (11). Hence,
+
+$$ \int_0^t \frac{\dot{\lambda}_2(\tau)}{\lambda_2(\tau)} d\tau = - \frac{\varphi - \sigma - \alpha + \theta [\sigma + \alpha(1-\varphi)]}{(1-\alpha)(1-\varphi)} \ln(1+\mu t), $$
+
+by (10) and the definition of $\mu$. Therefore,
+
+$$ \lambda_2(t) A(t) = \lambda_2(0) e^{\int_0^t \frac{\dot{\lambda}_2(\tau)}{\lambda_2(\tau)} d\tau} A_0(1+\mu t)^{\frac{1}{1-\varphi}} = \lambda_2(0) A_0(1+\mu t)^{\frac{1}{1-\varphi} - \frac{\varphi - \sigma - \alpha + \theta [\sigma + \alpha(1-\varphi)]}{(1-\alpha)(1-\varphi)}}, $$
+
+which proves (53). $\square$
+
+We have $\omega < 0$ if and only if
+
+$$ \theta > (\sigma + 1 - \varphi) / [\sigma + \alpha(1 - \varphi)]. \quad (54) $$
+
+Hence, it follows from Lemma 1 that the “standard” transversality conditions, $\lim_{t \to \infty} \lambda_1(t)K(t) = 0$ and $\lim_{t \to \infty} \lambda_2(t)A(t) = 0$, hold along the unique regular growth path if and only if (54) is satisfied. This condition is a little stronger than $\theta > 1$. Our conjecture is that these transversality conditions together with the first-order conditions are necessary and sufficient for an optimal solution. This conjecture is based on the saddle-point stability of the steady state (see below), but we have so far no proof. The maximized Hamiltonian is not jointly concave in $(K, A)$ unless $\sigma = \varphi(1-\alpha)$. Thus the Arrow sufficiency theorem does not apply; hence, neither does the Mangasarian sufficiency theorem (see Seierstad and Sydsaeter, 1987). So we only have a conjecture. (This is of course not a satisfactory situation, but we might add that it is quite common in the semi-endogenous growth literature, although authors are often silent about the issue.)
+
+As to the stability question it is convenient to transform the dynamic system. We do that in two steps. First, let $z \equiv xu$ and $q \equiv (1-s)xu$. Then the system (7), (8), and (9) becomes:
+
+$$
+\begin{aligned}
+\dot{z} &= \gamma LA^{\varphi-1} \left(1 - \varphi + \frac{\sigma}{\alpha} - z - (1-\varphi)u\right) z, \\
+\dot{u} &= \gamma LA^{\varphi-1} \left(\frac{\sigma}{\alpha} + \frac{\sigma}{1-\alpha}u - \frac{q}{1-\alpha}\right) u, \\
+\dot{q} &= \gamma LA^{\varphi-1} \left(1 - \varphi + \frac{\alpha-\theta}{(1-\alpha)\theta}z - (1-\varphi)u + \frac{1}{1-\alpha}q\right) q.
+\end{aligned}
+ $$
+---PAGE_BREAK---
+
+The steady state of this system is $(z^*, u^*, q^*) = (x^*u^*, u^*, (1-s^*)x^*u^*)$. Second, this system can be converted into an autonomous system in "transformed time" $\tau = \ln A(t) \equiv f(t)$. With $u(t) < 1$, $f'(t) = \gamma A(t)^{\varphi-1}(1-u(t))L > 0$ and we have $t = f^{-1}(\tau)$. Thus, considering $\tilde{z}(\tau) \equiv z(f^{-1}(\tau))$, $\tilde{u}(\tau) \equiv u(f^{-1}(\tau))$ and $\tilde{q}(\tau) \equiv q(f^{-1}(\tau))$, the above system is converted into:
+
+$$
+\begin{align*}
+\frac{d\tilde{z}}{d\tau} &= \left(1 - \varphi + \frac{\sigma}{\alpha} - \tilde{z} - (1-\varphi)\tilde{u}\right) \frac{\tilde{z}}{1-\tilde{u}}, \\
+\frac{d\tilde{u}}{d\tau} &= \left(\frac{\sigma}{\alpha} + \frac{\sigma}{1-\alpha}\tilde{u} - \frac{\tilde{q}}{1-\alpha}\right) \frac{\tilde{u}}{1-\tilde{u}}, \\
+\frac{d\tilde{q}}{d\tau} &= \left(1 - \varphi + \frac{\alpha-\theta}{(1-\alpha)\theta}\tilde{z} - (1-\varphi)\tilde{u} + \frac{1}{1-\alpha}\tilde{q}\right) \frac{\tilde{q}}{1-\tilde{u}}.
+\end{align*}
+$$
+
+The Jacobian of this system, evaluated in steady state, is
+
+$$
+J = \begin{bmatrix}
+-z^* & -(1-\varphi)z^* & 0 \\
+0 & \frac{\sigma}{1-\alpha}u^* & -\frac{1}{1-\alpha}u^* \\
+\frac{\alpha-\theta}{(1-\alpha)\theta}q^* & -(1-\varphi)q^* & \frac{1}{1-\alpha}q^*
+\end{bmatrix}
+\cdot
+\frac{1}{1-u^*}.
+$$
+
+The determinant is
+
+$$
+\det J = - \frac{\sigma\theta + (\theta - 1)(1 - \varphi)\alpha}{(1 - \alpha)^2\theta} z^* u^* q^* < 0,
+$$
+
+in view of $\theta > 1$. The trace is
+
+$$
+\operatorname{tr} J = \frac{(\alpha - s^*)x^* + \sigma}{1 - \alpha} \frac{u^*}{1 - u^*} = \frac{[\sigma + \alpha(1 - \varphi)](2\theta - 1) - \sigma - 1 + \varphi}{(1 - \alpha)(\theta - 1)} \frac{\sigma u^*}{1 - u^*} > 0,
+$$
+
+in view of the transversality condition (54). Thus, *J* has one negative eigenvalue, $\eta_1$, and two eigenvalues with positive real part. All three variables, $\tilde{z}$, $\tilde{u}$, and $\tilde{q}$, are jump variables, but $\tilde{z}$ and $\tilde{u}$ are linked through
+
+$$
+\tilde{z} = \frac{1-\alpha}{\gamma L^\alpha} A^{\sigma+1-\varphi} (\frac{\tilde{u}}{K})^{1-\alpha} \equiv h(\tilde{u}, A, K). \quad (55)
+$$
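+
+The determinant formula for $J$ above can be verified symbolically. A sketch with sympy follows; it drops the common positive factor $1/(1-u^*)$, which affects neither the determinant's sign nor the signs of the eigenvalues:
+
```python
import sympy as sp

alpha, theta, varphi, sigma = sp.symbols('alpha theta varphi sigma', positive=True)
z, u, q = sp.symbols('z u q', positive=True)   # stand for z*, u*, q*

# Jacobian of the transformed system at the steady state, without the
# common positive factor 1/(1-u*):
J = sp.Matrix([
    [-z,                                     -(1 - varphi)*z,       0],
    [0,                                       sigma/(1 - alpha)*u,  -u/(1 - alpha)],
    [(alpha - theta)/((1 - alpha)*theta)*q,  -(1 - varphi)*q,        q/(1 - alpha)],
])

det_claimed = -(sigma*theta + (theta - 1)*(1 - varphi)*alpha) \
              / ((1 - alpha)**2*theta) * z*u*q
assert sp.simplify(J.det() - det_claimed) == 0
```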
+
+In order to check existence and uniqueness of a convergent solution, let $x = (x_1, x_2, x_3) \equiv (\tilde{z}, \tilde{u}, \tilde{q})$ and $\bar{x} = (\bar{x}_1, \bar{x}_2, \bar{x}_3) \equiv (z^*, u^*, q^*)$. Then, in a small neighborhood of $\bar{x}$ any convergent solution is of the form $x(\tau) = Cve^{\eta_1\tau} + \bar{x}$, where $C$ is a constant, depending on initial $A$ and $K$, and $v = (v_1, v_2, v_3)$ is an eigenvector associated with $\eta_1$ so that
+
+$$
+(-z^* - \eta_1)v_1 - (1 - \varphi)z^*v_2 = 0, \quad (56a)
+$$
+
+$$
+0 + \left(\frac{\sigma}{1-\alpha}u^* - \eta_1\right)v_2 - \frac{1}{1-\alpha}u^*v_3 = 0, \quad (56b)
+$$
+
+$$
+\frac{\alpha - \theta}{(1 - \alpha)\theta} q^* v_1 - (1 - \varphi)q^* v_2 + \left(\frac{1}{1 - \alpha}q^* - \eta_1\right)v_3 = 0. \quad (56c)
+$$
+---PAGE_BREAK---
+
+We see that $v_i \neq 0$, $i = 1,2,3$. Initial transformed time is $\tau_0 = \ln A_0$ and we have $x(\tau_0) = (h(u(0),A_0,K_0),u(0),q(0))$ for $A(0)=A_0$ and $K(0)=K_0$ (both pre-determined), where we have used (55) for $t=0$. Hence, coordinate-wise,
+
+$$x_1(\tau_0) = Cv_1e^{\eta_1\tau_0} + z^* = h(u(0), A_0, K_0), \quad (57)$$
+
+$$x_2(\tau_0) = Cv_2e^{\eta_1\tau_0} + u^* = u(0), \quad (58)$$
+
+$$x_3(\tau_0) = Cv_3e^{\eta_1\tau_0} + q^* = q(0). \quad (59)$$
+
+This system has a unique solution in $(C, u(0), q(0))$; indeed, substituting (58) and (59) into (57), setting $v_1 = 1$ and using $z^* = x^*u^*$, gives
+
+$$\frac{1}{v_2}u(0) + u^*(x^* - \frac{1}{v_2}) = h(u(0), A_0, K_0). \quad (60)$$
+
+It follows from Lemma 2 that, given $\theta > 1$, (60) has a unique solution in $u(0)$. With the pre-determined initial value of the ratio, $A^{\sigma+1-\varphi}/K^{1-\alpha}$, in a small neighborhood of its steady state value (which is $\gamma L^\alpha x^* u^{*\alpha} / (1-\alpha)$), the solution for $u(0)$ is close to $u^*$, hence it belongs to the open interval $(0,1)$.
+
+**Lemma 2.** Assume $\theta > 1$. Then $1/v_2 > x^*$.
+
+*Proof.* From (56a),
+
+$$v_2 = \frac{-z^* - \eta_1}{(1 - \varphi)z^*}. \quad (61)$$
+
+Substituting $v_1 = 1$ together with (56b) into (56c) gives
+
+$$\frac{\alpha - \theta}{(1 - \alpha)\theta}q^* - (1 - \varphi)q^*v_2 + (\frac{1}{1 - \alpha}q^* - \eta_1)(\sigma - \frac{(1 - \alpha)\eta_1}{u^*})v_2 \equiv Q(v_2, \eta_1) = 0.$$
+
+Replacing $\eta_1$ and $v_2$ in (61) by $\eta$ and $w(\eta)$, respectively, we see that $P(\eta) \equiv Q(w(\eta),\eta)$ is the characteristic polynomial of degree 3 corresponding to $J$. Now,
+
+$$P(-z^*) = \frac{\alpha - \theta}{(1 - \alpha)\theta}q^* < 0,$$
+
+as $\theta > 1$. Consider $\eta_0 = -(1-\varphi)z^*/x^* - z^* < -z^*$. Clearly, $w(\eta_0) = 1/x^*$. If $P(\eta_0) > 0$, then the unique negative eigenvalue $\eta_1$ satisfies $\eta_0 < \eta_1 < -z^*$, implying that $v_2 = w(\eta_1) < 1/x^*$, in view of $w'(\eta) < 0$; hence $1/v_2 > x^*$. It remains to prove that $P(\eta_0) > 0$. We have
+
+$$\begin{align*}
+P(\eta_0) &= \frac{\alpha - \theta}{(1 - \alpha)\theta}q^* - (1 - \varphi)q^*w(\eta_0) + (\frac{1}{1 - \alpha}q^* - \eta_0)(\sigma - \frac{(1 - \alpha)\eta_0}{u^*})w(\eta_0) \\
+&= \frac{\alpha - \theta}{(1 - \alpha)\theta}q^* - \frac{(1 - \varphi)q^*}{x^*} + (\frac{1}{1 - \alpha}q^* - \eta_0)(\sigma - \frac{(1 - \alpha)\eta_0}{u^*})\frac{1}{x^*}
+\end{align*}$$
+---PAGE_BREAK---
+
+$$
+\begin{aligned}
+&= \frac{\alpha(1-\theta)[(1-\alpha)(1-\varphi) + \sigma](1-s^*)x^*u^*}{(1-\alpha)\theta\sigma} \\
+&\quad + \frac{[(1-\alpha)(1-\varphi) + \sigma](1-s^*) + (1-\alpha)\sigma}{1-\alpha}u^* + \frac{1-\varphi}{x^*}\sigma u^* + \frac{1-\alpha}{u^*x^*}\eta_0^2 \\
+&= \frac{\theta-1}{\theta}[\sigma + \alpha(1-\varphi)] + \frac{1-\alpha}{u^*x^*}\eta_0^2 > 0,
+\end{aligned}
+$$
+
+where the third equality is based on reordering and the definition of $q^*$, whereas
+the last equality is based on the formulas for $x^*, u^*,$ and $s^*$ in (10); finally, the
+inequality is due to $\theta > 1$. $\square$
+
+**B. The learning-by-doing example** By (19), the transversality condition (21) can be written
+
+$$ \lim_{t \to \infty} c(t)^{-\theta} e^{-\rho t} K(t) = 0, $$
+
+which is obviously satisfied along the asymptotic regular growth path, since $\rho > 0$,
+and $c$ and $K$ feature *less* than exponential growth. In the text we claimed that
+the first-order conditions together with the transversality condition are sufficient
+for an optimal solution. Indeed, this follows from the Mangasarian sufficiency
+theorem, since $H$ is jointly concave in $(K, c)$ and the state and co-state variables
+are non-negative for all $t \ge 0$, cf. Seierstad and Sydsaeter (1987, p. 234-35).
+Uniqueness of the solution follows because $H$ is *strictly* concave in $(K, c)$ for all
+$t \ge 0$.
+
+**C. The investment-specific learning example** The current-value Hamiltonian for the optimal control problem is:
+
+$$ H(K, q, c, \lambda_1, \lambda_2, t) = \frac{c^{1-\theta}-1}{1-\theta}L + \lambda_1 [q(Y - cL) - \delta K] + \lambda_2 \gamma q^{\frac{\beta-1}{\beta}} (Y - cL), $$
+
+where $Y = K^\alpha L^{1-\alpha}$ and $\lambda_1$ and $\lambda_2$ are the co-state variables associated with physical capital and the quality of newly produced investment goods, respectively. An interior solution will satisfy the first-order conditions
+
+$$ \frac{\partial H}{\partial c} = c^{-\theta} L - \lambda_1 q L - \lambda_2 \gamma q^{\frac{\beta-1}{\beta}} L = 0, \quad (62) $$
+
+$$ \frac{\partial H}{\partial K} = \lambda_1(q\alpha\frac{Y}{K} - \delta) + \lambda_2\gamma q^{\frac{\beta-1}{\beta}}\alpha\frac{Y}{K} = \rho\lambda_1 - \dot{\lambda}_1, \quad (63) $$
+
+$$ \frac{\partial H}{\partial q} = \lambda_1(Y - cL) + \lambda_2\gamma\frac{\beta-1}{\beta}q^{-\frac{1}{\beta}}(Y - cL) = \rho\lambda_2 - \dot{\lambda}_2. \quad (64) $$
+
+The first-order conditions imply:
+
+**Lemma 3.** $\frac{d}{dt}(c^{-\theta}) = c^{-\theta}(\rho - \alpha q \frac{Y}{K}) + \lambda_1 q \delta.$
+---PAGE_BREAK---
+
+*Proof.* Let
+
+$$u \equiv c^{-\theta} - \lambda_1 q = \lambda_2 \gamma q^{\frac{\beta-1}{\beta}} = \lambda_2 \frac{\dot{q}}{I}, \quad (65)$$
+
+by (62) and (26), respectively. Then, using (64) and $I \equiv Y - cL$,
+
+$$g_u = g_{\lambda_2} + \frac{\beta-1}{\beta}g_q = \rho - \left(\frac{\lambda_1}{\lambda_2} + \gamma \frac{\beta-1}{\beta} q^{-\frac{1}{\beta}}\right)I + \frac{\beta-1}{\beta}\gamma q^{-\frac{1}{\beta}}I = \rho - \frac{\lambda_1}{\lambda_2}I, \quad (66)$$
+
+so that
+
+$$\dot{u} = \rho u - \frac{\lambda_1}{\lambda_2} I u = \rho u - \lambda_1 \dot{q}, \quad (67)$$
+
+by (65). Rewriting (65) as $c^{-\theta} = \lambda_1 q + u$, we find
+
+$$
+\begin{align*}
+\frac{d}{dt}(c^{-\theta}) &= \lambda_1 \dot{q} + \dot{\lambda}_1 q + \dot{u} = \rho u + \dot{\lambda}_1 q = \rho c^{-\theta} - (\rho \lambda_1 - \dot{\lambda}_1) q \quad (\text{from (67) and (65)}) \\
+&= \rho c^{-\theta} - \left[ (\lambda_1 q + \lambda_2 \gamma q^{\frac{\beta-1}{\beta}}) \alpha \frac{Y}{K} - \lambda_1 \delta \right] q = \rho c^{-\theta} - c^{-\theta} \alpha q \frac{Y}{K} + \lambda_1 q \delta,
+\end{align*}
+$$
+
+where the two latter equalities come from (63) and (62), respectively. $\square$
+
+From Lemma 3 follows
+
+$$g_c = -\frac{1}{\theta} \frac{\frac{d}{dt}(c^{-\theta})}{c^{-\theta}} = \frac{1}{\theta}(\alpha z - m\delta - \rho),$$
+
+using that $z \equiv qY/K$ and
+
+$$m \equiv pq \equiv (\lambda_1/c^{-\theta})q = \frac{\lambda_1}{\lambda_1 q + \lambda_2 \gamma q^{\frac{\beta-1}{\beta}}}q, \quad (68)$$
+
+by (62). This proves (27).
+
+The conjectured necessary and sufficient transversality conditions are $\lim_{t \to \infty} \lambda_1(t)e^{-\rho t}K(t) = 0$ and $\lim_{t \to \infty} \lambda_2(t)e^{-\rho t}q(t) = 0$. We now check whether these conditions hold in the steady state. First, note that (63) and (65) give
+
+$$
+\begin{align*}
+g_{\lambda_1} &= \rho + \delta - \frac{c^{-\theta}}{\lambda_1} \alpha \frac{Y}{K} = \rho + \delta - \alpha \frac{Y}{pK} = \rho + \frac{1}{m}(m\delta - \alpha z) \\
+&= \rho - \frac{1}{m^*}(\theta g_c^* + \rho) = -\frac{(1 - \alpha + \alpha\theta)g_c^*}{\alpha}
+\end{align*}
+$$
+
+in steady state, by (38). Further, we have in steady state $g_K^* = g_c^*/\alpha + n$. Hence, $g_{\lambda_1}^* + g_K^* - \rho = (1-\theta)g_c^* + n - \rho < 0$, by the parameter restriction (37). Thus the first transversality condition holds for all $\theta > 0$.
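The steady-state arithmetic of this step can be verified symbolically; a small sympy check (symbols ours) of $g_{\lambda_1}^* + g_K^* - \rho = (1-\theta)g_c^* + n - \rho$:

```python
import sympy as sp

alpha, theta, n, rho, g_c = sp.symbols('alpha theta n rho g_c', positive=True)

g_lam1 = -(1 - alpha + alpha*theta)*g_c/alpha   # steady-state growth rate of lambda_1 (note the sign)
g_K = g_c/alpha + n                             # steady-state growth rate of K

# exponent appearing in the first transversality condition
expr = g_lam1 + g_K - rho
assert sp.simplify(expr - ((1 - theta)*g_c + n - rho)) == 0
```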
+
+From (66)
+
+$$
+\begin{align*}
+g_{\lambda_2} + g_q - \rho &= \rho - \frac{\lambda_1}{\lambda_2} I - \frac{\beta-1}{\beta} g_q + g_q - \rho = -\frac{m}{1-m}\gamma q^{-\frac{1}{\beta}} I + \frac{1}{\beta} g_q && (\text{by (68)}) \\
+&= -\frac{m}{1-m} g_q + \frac{1}{\beta} g_q = -(\theta g_c^* + \rho) + \frac{1-\alpha}{\alpha\beta} g_c^* = \frac{(1-\alpha)(1+\theta\beta)}{1-\alpha(1+\beta)} n - \rho,
+\end{align*}
+$$
+---PAGE_BREAK---
+
+in steady state, by (38), (34), and (36). It follows that $\theta \ge 1$ is sufficient for the
+second transversality condition to hold. If $n = 0$, no particular condition on $\theta$ is
+needed to ensure this transversality condition.
+
+It remains to show:
+
+**Lemma 4.** If $n = 0$ and $\alpha < 1/(1 + \beta)$, then for purely technological reasons $\lim_{t \to \infty} h = 0$.
+
+*Proof.* Let $n = 0$ and $\alpha < 1/(1 + \beta)$. We have $h \equiv Y/q^{1/\beta} = K^{\alpha}L^{1-\alpha}/q^{1/\beta}$,
+where *L* is constant and *q* is always non-decreasing, by (26). There are two cases
+to consider. *Case 1*: $q \not\to \infty$ for $t \to \infty$. Then $q$ converges, so that, by (25), for $t \to \infty$, $I \to 0$,
+hence $\dot{K} \to -\delta K$, and so $K \to 0$, whereby $h \to 0$. *Case 2*: $q \to \infty$ for $t \to \infty$.
+If $K \not\to \infty$ for $t \to \infty$, we are finished. Suppose $K \to \infty$ for $t \to \infty$. Then, for
+$t \to \infty$ we must have $g_K = sz - \delta \ge 0$, so that $z \not\to 0$, in view of $\delta > 0$. In addition,
+defining $x \equiv zh^\beta$, we get $x = K^{\alpha(1+\beta)-1}L^{(1-\alpha)(1+\beta)} \to 0$ for $t \to \infty$. It follows
+that $h = (x/z)^{1/\beta} \to 0$ for $t \to \infty$, since $\alpha < 1/(1+\beta)$. $\square$
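The identity $x \equiv zh^\beta = K^{\alpha(1+\beta)-1}L^{(1-\alpha)(1+\beta)}$ used in Case 2 follows directly from the definitions; a sympy check (symbols ours):

```python
import sympy as sp

K, L, q, alpha, beta = sp.symbols('K L q alpha beta', positive=True)

Y = K**alpha * L**(1 - alpha)    # production function
z = q*Y/K                        # z = qY/K
h = Y / q**(1/beta)              # h = Y/q^(1/beta)

x = sp.powsimp(z * h**beta, force=True)
target = K**(alpha*(1 + beta) - 1) * L**((1 - alpha)*(1 + beta))
assert sp.simplify(x/target) == 1
```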
+
+References
+
+[1] Aghion, P., and P. Howitt, 1992. A model of growth through creative destruction. Econometrica 60, 323-351.
+
+[2] Alvarez-Pelaez, M. J., and C. Groth, 2005. Too Little or Too Much R&D? European Economic Review 49, 437-456.
+
+[3] Arnold, L., 2006. The Dynamics of the Jones R&D Growth Model. Review of Economic Dynamics 9, 143-152.
+
+[4] Arrow, K. J., 1962. The economic implications of learning by doing. Review of Economic Studies 29, 153-173.
+
+[5] Asheim, G.B., Buchholz, W., Hartwick, J. M., Mitra, T., Withagen, C. A., 2007. Constant Savings Rates and Quasi-Arithmetic Population Growth under Exhaustible Resource Constraints. Journal of Environmental Economics and Management, 53, 213-229.
+
+[6] Attanasio, O., and G. Weber, 1995. Is Consumption Growth Consistent with Intertemporal Optimization? Journal of Political Economy 103, 1121-1157.
+
+[7] Barro, R. J., and X. Sala-i-Martin, 2004. Economic Growth, 2. ed. MIT Press, Cambridge (Mass.).
+
+[8] Blanchard, O. J., 1985. Debts, Deficits, and Finite Horizons. Journal of Political Economy 93, 223-247.
+---PAGE_BREAK---
+
+[9] Blanchard, O. J., Fischer, S., 1989. Lectures on Macroeconomics. MIT Press, Cambridge MA.
+
+[10] Boucekkine, R., F. del Rio, and O. Licandro, 2003. Embodied Technological Change, Learning-by-doing and the Productivity Slowdown. Scandinavian Journal of Economics 105 (1), 87-97.
+
+[11] Dasgupta, P., Heal G., 1979. Economic Theory and Exhaustible Resources. Cambridge University Press, Cambridge.
+
+[12] Greenwood, J., Z. Hercowitz, and P. Krusell, 1997. Long-Run Implications of Investment-Specific Technological Change. American Economic Review 87 (3), 342-362.
+
+[13] Grossman, G. M., and E. Helpman, 1991. Innovation and Growth in the Global Economy. MIT Press, Cambridge (Mass.).
+
+[14] Growiec, J., 2007. Beyond the linearity critique: The knife-edge assumption of steady-state growth. Economic Theory 31 (3), 489-499.
+
+[15] Growiec, J., 2008, Knife-edge conditions in the modeling of long-run growth regularities, Working Paper, Warsaw School of Economics.
+
+[16] Hakenes, H., and A. Irmen, 2007, On the Long-run Evolution of Technological Knowledge, Economic Theory 30, 171-180.
+
+[17] Hartwick, J. M., 1977. Intergenerational Equity and the Investing of Rents from Exhaustible Resources. American Economic Review 67, 972-974.
+
+[18] Jones, C. I., 1995. R&D-based Models of Economic Growth. Journal of Political Economy 103, 759-784.
+
+[19] Jones, C. I., 2005. Growth and Ideas. In: Handbook of Economic Growth, vol. I.B, ed. by P. Aghion and S. N. Durlauf, Elsevier, Amsterdam, 1063-1111.
+
+[20] Kremer, M., 1993. Population growth and technological change: One million B.C. to 1990. Quarterly Journal of Economics 108, 681-716.
+
+[21] McCallum, B. T., 1996. Neoclassical vs. endogenous growth analysis: An overview. Federal Reserve Bank of Richmond Economic Quarterly Review 82, Fall, 41-71.
+
+[22] Mitra, T., 1983. Limits on Population Growth under Exhaustible Resource Constraints, International Economic Review 24, 155-168.
+---PAGE_BREAK---
+
+[23] Pezzey, J., 2004. Exact Measures of Income in a Hyperbolic Economy. Environment and Development Economics 9, 473-484.
+
+[24] Ramsey, F. P., 1928. A Mathematical Theory of Saving. The Economic Journal 38, 543-559.
+
+[25] Romer, P. M., 1990. Endogenous Technological Change. Journal of Political Economy 98, 71-101.
+
+[26] Seierstad, A., Sydsaeter, K., 1987. Optimal Control Theory with Economic Applications. North Holland, Amsterdam.
+
+[27] Solow, R. M., 1974. Intergenerational Equity and Exhaustible Resources. Review of Economic Studies, Symposium Issue, 29-45.
+
+[28] Solow, R. M., 1994. Perspectives on Growth Theory. Journal of Economic Perspectives 8, 45-54.
+
+[29] Solow, R. M., 1996. Growth Theory without “Growth” – Notes Inspired by Rereading Oelgaard. Nationaloekonomisk Tidsskrift – Festskrift til Anders Oelgaard, 87-93.
+
+[30] Solow, R. M., 2000. Growth Theory. An Exposition. Oxford University Press, Oxford.
+
+[31] United Nations, 2005. World Population Prospects. The 2004 Revision. New York.
+
+[32] Young, A., 1998. Growth without Scale Effects, Journal of Political Economy 106, 41-63.
+---PAGE_BREAK---
+
+Universität Leipzig
+Wirtschaftswissenschaftliche Fakultät
+
+Nr. 1 Wolfgang Bernhardt
+
+Stock Options wegen oder gegen Shareholder Value?
+Vergütungsmodelle für Vorstände und Führungskräfte
+04/98
+
+Nr. 2 Thomas Lenk / Volkmar Teichmann
+
+Bei der Reform der Finanzverfassung die neuen Bundesländer nicht vergessen!
+10/98
+
+Nr. 3 Wolfgang Bernhardt
+
+Gedanken über Führen – Dienen – Verantworten
+11/98
+
+Nr. 4 Kristin Wellner
+
+Möglichkeiten und Grenzen kooperativer Standortgestaltung zur Revitalisierung von Innenstädten
+12/98
+
+Nr. 5 Gerhardt Wolff
+
+Brauchen wir eine weitere Internationalisierung der Betriebswirtschaftslehre?
+01/99
+
+Nr. 6 Thomas Lenk / Friedrich Schneider
+
+Zurück zu mehr Föderalismus: Ein Vorschlag zur Neugestaltung des Finanzausgleichs in der Bundesrepublik Deutschland unter besonderer Berücksichtigung der neuen Bundesländer
+12/98
+
+Nr. 7 Thomas Lenk
+
+Kooperativer Föderalismus – Wettbewerbsorientierter Föderalismus
+03/99
+
+Nr. 8 Thomas Lenk / Andreas Mathes
+
+EU – Osterweiterung – Finanzierbar?
+03/99
+
+Nr. 9 Thomas Lenk / Volkmar Teichmann
+
+Die fiskalischen Wirkungen verschiedener Forderungen zur Neugestaltung des Länderfinanzausgleichs in der Bundesrepublik Deutschland
+Eine empirische Analyse unter Einbeziehung der Normenkontrollanträge der Länder Baden-Württemberg, Bayern und Hessen sowie der Stellungnahmen verschiedener Bundesländer
+09/99
+
+Nr. 10 Kai-Uwe Graw
+
+Gedanken zur Entwicklung der Strukturen im Bereich der Wasserversorgung unter besonderer Berücksichtigung kleiner und mittlerer Unternehmen
+10/99
+
+Nr. 11 Adolf Wagner
+
+Materialien zur Konjunkturforschung
+12/99
+
+Nr. 12 Anja Birke
+
+Die Übertragung westdeutscher Institutionen auf die ostdeutsche Wirklichkeit – erfolgversprechendes Zusammenspiel oder Aufdeckung systematischer Mängel?
+Ein empirischer Bericht für den kommunalen Finanzausgleich am Beispiel Sach
+02/00
+
+Nr. 13 Rolf H. Hasse
+
+Internationaler Kapitalverkehr in den letzten 40 Jahren – Wohlstandsmotor oder Krisenursache?
+03/00
+
+Nr. 14 Wolfgang Bernhardt
+
+Unternehmensführung (Corporate Governance) und Hauptversammlung
+04/00
+
+Nr. 15 Adolf Wagner
+
+Materialien zur Wachstumsforschung
+03/00
+
+Nr. 16 Thomas Lenk / Anja Birke
+
+Determinanten des kommunalen Gebührenaufkommens unter besonderer Berücksichtigung der neuen Bundesländer
+04/00
+
+Nr. 17 Thomas Lenk
+
+Finanzwirtschaftliche Auswirkungen des Bundesverfassungsgerichtsurteils zum Länderfinanzausgleich vom 11.11.1999
+04/00
+
+Nr. 18 Dirk Bültei
+
+Continuous linear utility for preferences on convex sets in normed real vector
+05/00
+
+Nr. 19 Stefan Dierkes / Stephanie Hanrath
+
+Steuerung dezentraler Investitionsentscheidungen bei nutzungsabhängigem und nutzungsunabhängigem Verschleiß des Anlagenvermögens
+06/00
+
+Nr. 20 Thomas Lenk / Andreas Mathes / Olaf Hirschefeld
+
+Zur Trennung von Bundes- und Landeskompetenzen in der Finanzverfassung Deutschlands
+07/00
+
+Nr. 21 Stefan Dierkes
+
+Marktwerte, Kapitalkosten und Betafaktoren bei wertabhängiger Finanzierung
+10/00
+
+Nr. 22 Thomas Lenk
+
+Intergovernmental Fiscal Relationships in Germany: Requirement for Regulations?
+03/01
+
+Nr. 23 Wolfgang Bernhardt
+
+Stock Options – Aktuelle Fragen Besteuerung, Bewertung, Offenlegung
+03/01
+
+Nr. 24 Thomas Lenk
+
+Die „kleine Reform“ des Länderfinanzausgleichs als Nukleus für die Finanzverfassungsreform?
+10/01
+
+Nr. 25 Wolfgang Bernhardt
+
+Biotechnologie im Spannungsfeld von Menschenwürde, Forschung, Markt und Moral
+Wirtschaftsethik zwischen Beredsamkeit und Schweigen
+11/01
+---PAGE_BREAK---
+
+
+
+Nr. 26 Thomas Lenk
+
+Finanzwirtschaftliche Bedeutung der Neuregelung des bundesstaatlichen Finanzausgleichs - Eine allokative und distributive Wirkungsanalyse für das Jahr 2005
+11/01
+
+Nr. 27 Sören Bär
+
+Grundzüge eines Tourismusmarketing, untersucht für den Südraum Leipzig
+05/02
+
+Nr. 28 Wolfgang Bernhardt
+
+Der Deutsche Corporate Governance Kodex: Zuwahl (comply) oder Abwahl (explain)?
+06/02
+
+Nr. 29 Adolf Wagner
+
+Konjunkturtheorie, Globalisierung und Evolutionsökonomik
+08/02
+
+Nr. 30 Adolf Wagner
+
+Zur Profilbildung der Universitäten
+08/02
+
+Nr. 31 Sabine Klinger / Jens Ulrich / Hans-Joachim Rudolph
+
+Konjunktur als Determinante des Erdgasverbrauchs in der ostdeutschen Industrie
+10/02
+
+Nr. 32 Thomas Lenk / Anja Birke
+
+The Measurement of Expenditure Needs in the Fiscal Equalization at the Local Level: Empirical Evidence from German Municipalities
+10/02
+
+Nr. 33 Wolfgang Bernhardt
+
+Die Lust am Fliegen. Eine Parabel auf viel Corporate Governance und wenig Unternehmensführung
+11/02
+
+Nr. 34 Udo Hielscher
+
+Wie reich waren die reichsten Amerikaner wirklich? (US-Vermögensbewertungsindex 1800 – 2000)
+12/02
+
+Nr. 35 Uwe Haubold / Michael Nowak
+
+Risikoanalyse für Langfrist-Investments - Eine simulationsbasierte Studie -
+12/02
+
+Nr. 36 Thomas Lenk
+
+Die Neuregelung des bundesstaatlichen Finanzausgleichs - auf Basis der Steuerschätzung Mai 2002 und einer aktualisierten Bevölkerungsstatistik -
+12/02
+
+Nr. 37 Uwe Haubold / Michael Nowak
+
+Auswirkungen der Renditeverteilungsannahme auf Anlageentscheidungen - Eine simulationsbasierte Studie -
+02/03
+
+Nr. 38 Wolfgang Bernhardt
+
+Corporate Governance Kodex für den Mittelstand?
+06/03
+
+Nr. 39 Hermut Kormann
+
+Familienunternehmen: Grundfragen mit finanzwirtschaftlichem Bezug
+10/03
+
+Nr. 40 Matthias Folk
+
+Launhardtsche Trichter
+11/03
+
+Nr. 41 Wolfgang Bernhardt
+
+Corporate Governance statt Unternehmensführung
+11/03
+
+Nr. 42 Thomas Lenk / Karolina Kaiser
+
+Das Prämienmodell im Länderfinanzausgleich - Anreiz- und Verteilungswirkungen
+11/03
+
+Nr. 43 Sabine Klinger
+
+Die Volkswirtschaftliche Gesamtrechnung des Haushaltssektors in einer Matrix
+03/04
+
+Nr. 44 Thomas Lenk / Heide Köpping
+
+Strategien zur Armutsbekämpfung und -vermeidung in Ostdeutschland
+05/04
+
+Nr. 45 Wolfgang Bernhardt
+
+Sommernachtsfantasien. Corporate Governance im Land der Träume.
+07/04
+
+Nr. 46 Thomas Lenk / Karolina Kaiser
+
+The Premium Model in the German Fiscal Equalization System
+12/04
+
+Nr. 47 Thomas Lenk / Christine Falken
+
+Komparative Analyse ausgewählter Indikatoren des Kommunalwirtschaftlichen Gesamtergebnisses
+05/05
+
+Nr. 48 Michael Nowak / Stephan Barth
+
+Immobilienanlagen im Portfolio institutioneller Investoren am Beispiel von Versicherungsunternehmen - Auswirkungen auf die Risikosituation -
+08/05
+
+Nr. 49 Wolfgang Bernhardt
+
+Familiengesellschaften - Quo Vadis? Vorsicht vor zu viel „Professionalisierung“ und Ver-Fremdung
+11/05
+
+Nr. 50 Christian Milow
+
+Der Griff des Staates nach dem Währungsgold
+12/05
+
+Nr. 51 Anja Eichhorst / Karolina Kaiser
+
+The Institutional Design of Bailouts and Its Role in Hardening Budget Constraints in Federations
+03/06
+
+Nr. 52 Ullrich Heilemann / Nancy Beck
+
+Die Mühen der Ebene - Regionale Wirtschaftsförderung in Leipzig 1991 bis 2004
+08/06
+
+Nr. 53 Gunther Schnabl
+
+Die Grenzen der monetären Integration in Europa
+08/06
+
+Nr. 54 Hermut Kormann
+
+Gibt es so etwas wie typisch mittelständische Strategien?
+11/06
+
+
+---PAGE_BREAK---
+
+Nr. 55 Wolfgang Bernhardt
+(Miss-)Stimmung, Bestimmung und Mitbestimmung
+Zwischen Juristentag und Biedenkopf-Kommission
+11/06
+
+Nr. 56 Ullrich Heilemann / Annika Blaschzik
+Indicators and the German Business Cycle - A Multivariate Perspective on
+Indicators of Ifo, OECD, and ZEW
+01/07
+
+Nr. 57 Ullrich Heilemann
+"THE SOUL OF A NEW MACHINE" ZU DEN ANFÄNGEN DES RWI-
+KONJUNKTURMODELLS
+12/06
+
+Nr. 58 Ullrich Heilemann / Roland Schuhr / Annika
+Blaschzik
+ZUR EVOLUTION DES DEUTSCHEN KONJUNKTURZYKLUS 1958 BIS
+2004 - ERGEBNISSE EINER DYNAMISCHEN DISKRIMINANZANALYSE
+01/07
+
+Nr. 59 Christine Falken / Mario Schmidt
+Kameralistik versus Doppik
+Zur Informationsfunktion des alten und neuen Rechnungswesens der
+Kommunen
+Teil I. Einführende und Erläuternde Betrachtungen zum Systemwechsel im
+kommunalen Rechnungswesen
+01/07
+
+Nr. 60 Christine Falken / Mario Schmidt
+Kameralistik versus Doppik
+Zur Informationsfunktion des alten und neuen Rechnungswesens der
+Kommunen
+Teil II
+Bewertung der Informationsfunktion im Vergleich
+01/07
+
+Nr. 61 Udo Hielscher
+MONTI DELLA CITTÀ DI FIRENZE
+Innovative Finanzierungen im Zeitalter der Medici. Wurzeln der modernen
+Finanzmärkte
+03/07
+
+Nr. 62 Ullrich Heilemann / Stefan Wappler
+SACHSEN WÄCHST ANDERS - KONJUNKTURELLE, SEKTORALE
+UND REGIONALE BESTIMMUNGSGRÜNDE DER ENTWICKLUNG
+DER BRUTTOWERTSCHÖPFUNG 1992 BIS 2006
+07/2007
+
+Nr. 63 Adolf Wagner
+REGIONALÖKONOMIK
+KONVERGIERENDE ODER DIVERGIERENDE REGIONALENTWICKLUNGEN
+08/2007
+
+Nr. 64 Ullrich Heilemann / Jens Ulrich
+GOOD BYE, PROFESSOR PHILLIPS?
+ZUM WANDEL DER TARIFLOHNDETERMINANTEN IN DER
+BUNDESREPUBLIK 1952 – 2004
+08/2007
+
+Nr. 65 Gunther Schnabl / Franziska Schobert
+Monetary Policy Operations of Debtor Central Banks in MENA Countries
+10/07
+
+Nr. 66 Andreas Schäfer / Simone Valente
+Habit Formation, Dynastic Altruism, and Population Dynamics
+11/07
+
+Nr. 67 Wolfgang Bernhardt
+5 Jahre Deutscher Corporate Governance Kodex
+Eine Erfolgsgeschichte?
+01/2008
+
+Nr. 68 Ullrich Heilemann / Jens Ulrich
+Viel Lärm um wenig? Zur Empirie von Lohnformeln in der Bundesrepublik
+01/2008
+
+Nr. 69 Christian Groth / Karl-Josef Koch /
+Thomas M. Steger
+When economic growth is less than exponential
+02/2008
\ No newline at end of file
diff --git a/samples/texts_merged/6548147.md b/samples/texts_merged/6548147.md
new file mode 100644
index 0000000000000000000000000000000000000000..83c8d0e2f56d15f1a59e05e303d76bd75fc39faf
--- /dev/null
+++ b/samples/texts_merged/6548147.md
@@ -0,0 +1,334 @@
+
+---PAGE_BREAK---
+
+# Bayesian and Hybrid Cramer-Rao Bounds for QAM Dynamical Phase Estimation
+
+J. Yang, B. Geller, and A. Wei¹
+
+**Abstract**—In this paper, we study Bayesian and hybrid Cramer-Rao bounds for the dynamical phase estimation of QAM modulated signals. We present analytical expressions for the various CRBs; this avoids the calculation of any matrix inversion and thus greatly reduces the computation complexity. Through simulations, we also illustrate the behavior of the BCRB and of the HCRB as a function of the signal-to-noise ratio.
+
+**Index Terms**—Bayesian Cramer-Rao Bound (BCRB), Hybrid Cramer-Rao Bound (HCRB), Synchronization Performance
+
+## I. INTRODUCTION
+
+There are three types of estimators widely used in communication systems [1]: data-aided (DA), code-aided (CA) and non-data-aided (NDA) estimators. DA estimation techniques obtain the best performance but may lead to unacceptable losses in power and spectral efficiency. CA synchronization allows an improved data efficiency but requires additional interactions between the decoding and synchronization units. Finally, NDA synchronization algorithms may sometimes lead to poorer results but they exhibit the highest transmission efficiency and are thus still attractive.
+
+To know whether a proposed algorithm is good or not, one often compares its performance with lower bounds. Although there exist many lower bounds, the Cramer-Rao bound (CRB) is the most commonly used and the easiest to determine [2], [3].
+
+Many works [4]-[10] concern the CRBs for carrier phase and frequency estimation in DA and NDA scenarios. But all these papers refer to an idealized situation in which the phase offset is constant. However, in modern burst-mode communications, a time-varying phase noise due to oscillator instabilities has to be considered [11], [12]. Taking such a phase noise into account, [13] considered the DA CRB for phase estimation and [14] derived a BCRB for NDA BPSK signals; [15] added the consideration of a deterministic parameter to obtain the HCRB for BPSK signals. However, the synchronization of BPSK signals is relatively simpler than that of QAM signals. The goal of this paper is then twofold. First, we detail the derivation of the BCRB and of the HCRB for QAM modulated signals. Second, we want to provide benchmarks for QAM dynamical phase estimation, since in [17] we present a generalization of the off-line synchronizing scheme [16] to QAM modulated signals whose performance can exactly reach the bounds derived in this paper. For lack of space, this study is extended to the code-aided case in [18].
+
+¹ This work was partially funded by the ANR LURGA program.
+J. Yang is with SATIE, ENS Cachan (email: yang@satie.ens-cachan.fr);
+B. Geller is with LEI, ENSTA PARISTECH (email: geller@ensta.fr);
+A. Wei is with LATTIS, Université Toulouse II (email: anne.wei@lattis.Univ-toulouse.fr).
+
+This paper is organized as follows. In section II, we recall the various kinds of Cramer-Rao bounds. After describing the system model in section III, we derive the BCRBs and the HCRBs for both on-line and off-line estimations in section IV. We also present the analytical expressions for the various CRBs; this avoids computing the inverse of any information matrix. A discussion about the various CRBs and a conclusion are respectively provided in sections V and VI.²
+
+## II. CRAMER-RAO BOUNDS (CRBS) REVIEW
+
+It is known that the parameters to be estimated can be categorized as deterministic or random parameters. Denote this parameter vector as $\mathbf{u} = (\mathbf{u}_r^T, \mathbf{u}_d^T)^T$, where $\mathbf{u}_d$ is a $(n-m) \times 1$ deterministic vector and $\mathbf{u}_r$ is a $m \times 1$ random vector with an a priori probability density function (pdf) $p(\mathbf{u}_r)$. The true value of $\mathbf{u}_d$ will be denoted $\mathbf{u}_d^\Delta$ and $\hat{\mathbf{u}}(\mathbf{y})$ is the estimator of $\mathbf{u}$ where $\mathbf{y}$ is the observation vector. The HCRB satisfies the following inequality [15] on the MSE:
+
+$$E_{\mathbf{y},\mathbf{u}_r|\mathbf{u}_d=\mathbf{u}_d^\Delta} \left[ (\hat{\mathbf{u}}(\mathbf{y}) - \mathbf{u})(\hat{\mathbf{u}}(\mathbf{y}) - \mathbf{u})^T \right] \geq \mathbf{H}^{-1}(\mathbf{u}_d^\Delta), \quad (1)$$
+
+where $\mathbf{H}(\mathbf{u}_d^\Delta)$ is the so-called hybrid information matrix (HIM) and is defined as:
+
+$$\mathbf{H}(\mathbf{u}_d^\Delta) = E_{\mathbf{y},\mathbf{u}_r|\mathbf{u}_d=\mathbf{u}_d^\Delta} \left[ -\Delta_{\mathbf{u}}^{\mathbf{u}} \log p(\mathbf{y},\mathbf{u}_r|\mathbf{u}_d)\big|_{\mathbf{u}_d=\mathbf{u}_d^\Delta} \right]. \quad (2)$$
+
+It is shown in [16] that inequality (1) still holds when the deterministic and the random parts of the parameter vector are dependent. By expanding $\log p(\mathbf{y},\mathbf{u}_r|\mathbf{u}_d)$ as $\log p(\mathbf{y}|\mathbf{u}_r,\mathbf{u}_d) + \log p(\mathbf{u}_r|\mathbf{u}_d)$, the HIM can be rewritten as:
+
+$$\begin{gather*} \mathbf{H}(\mathbf{u}_d^\Delta) = E_{\mathbf{u}_r|\mathbf{u}_d=\mathbf{u}_d^\Delta} [\mathbf{F}(\mathbf{u}_d^\Delta, \mathbf{u}_r)] + E_{\mathbf{u}_r|\mathbf{u}_d=\mathbf{u}_d^\Delta} [-\Delta_{\mathbf{u}}^{\mathbf{u}} \log p(\mathbf{u}_r|\mathbf{u}_d)\big|_{\mathbf{u}_d=\mathbf{u}_d^\Delta}], \quad (3) \\ \text{where } \mathbf{F}(\mathbf{u}_d^\Delta, \mathbf{u}_r) = E_{\mathbf{y}|\mathbf{u}_r,\mathbf{u}_d=\mathbf{u}_d^\Delta} [-\Delta_{\mathbf{u}}^{\mathbf{u}} \log p(\mathbf{y}|\mathbf{u}_d, \mathbf{u}_r)\big|_{\mathbf{u}_d=\mathbf{u}_d^\Delta}] \quad (4) \end{gather*}$$
+
+is the Fisher information matrix (FIM).
+
+In particular, if $\mathbf{u} = \mathbf{u}_d$, (3) reduces to:
+
+$$\mathbf{H}(\mathbf{u}_d^\Delta) = \mathbf{F}(\mathbf{u}_d^\Delta) = E_{\mathbf{y}|\mathbf{u}_d=\mathbf{u}_d^\Delta} \left[ -\Delta_{\mathbf{u}_d}^{\mathbf{u}_d} \log p(\mathbf{y}\mid\mathbf{u}_d)\big|_{\mathbf{u}_d=\mathbf{u}_d^\Delta} \right]. \quad (5)$$
+
+The inverse of $\mathbf{H}(\mathbf{u}_d^\Delta)$ in (5) is just the standard CRB [2].
+
+On the contrary, if $\mathbf{u} = \mathbf{u}_r$, (3) becomes:
+
+² The notational convention adopted is as follows: italic indicates a scalar quantity, as in $a$; boldface indicates a vector quantity, as in $\mathbf{a}$, and capital boldface indicates a matrix quantity, as in $\mathbf{A}$. The $(m,n)$th entry of matrix $\mathbf{A}$ is denoted $[\mathbf{A}]_{m,n}$. The transpose of $\mathbf{A}$ is indicated by the superscript $^T$, and $|\mathbf{A}|$ is the determinant of $\mathbf{A}$. $\mathbf{a}_m^n$ represents the vector $[a_m, a_{m+1}, \ldots, a_n]^T$, where $m$ and $n$ are positive integers ($m < n$). $\operatorname{Re}\{a\}$ and $\operatorname{Im}\{a\}$ are respectively the real and imaginary parts of $a$. $E_{x,y}[\cdot]$ denotes the expectation over $x$ and $y$. $\nabla_{\mathbf{u}}$ and $\Delta_{\mathbf{u}}^{\mathbf{v}}$ represent the first and second-order derivative operators.
+---PAGE_BREAK---
+
+$$
+\mathbf{H} = E_{\mathbf{u}_r}[\mathbf{F}(\mathbf{u}_r)] + E_{\mathbf{u}_r}[-\Delta_{\mathbf{u}_r}^{\mathbf{u}_r} \log p(\mathbf{u}_r)], \quad (6)
+$$
+
+$$
+\text{where } \mathbf{F}(\mathbf{u}_r) = E_{\mathbf{y}|\mathbf{u}_r} [-\Delta_{\mathbf{u}_r}^{\mathbf{u}_r} \log p(\mathbf{y}|\mathbf{u}_r)], \quad (7)
+$$
+
+and the inverse of **H** in (6) is the Bayesian CRB (BCRB) [3].
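As a quick sanity check of (6)-(7), consider a toy linear-Gaussian problem (our own example, not taken from the paper) in which the BCRB is attained exactly by the posterior-mean estimator:

```python
import numpy as np

rng = np.random.default_rng(4)

# toy linear-Gaussian problem: u ~ N(0, sig_p2), y_k = u + n_k, n_k ~ N(0, sig2)
N, sig2, sig_p2, trials = 10, 1.0, 4.0, 200_000

F = N/sig2            # E_u[F(u)]: Fisher information of y given u, cf. (7)
prior = 1/sig_p2      # E_u[-d^2 log p(u)/du^2], cf. (6)
bcrb = 1/(F + prior)  # scalar BCRB

u = rng.normal(0.0, np.sqrt(sig_p2), trials)
y = u[:, None] + rng.normal(0.0, np.sqrt(sig2), (trials, N))
u_hat = y.sum(axis=1)/(N + sig2/sig_p2)   # posterior mean (MMSE estimator)
mse = np.mean((u_hat - u)**2)

assert abs(mse - bcrb)/bcrb < 0.02
```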
+
+III. SYSTEM MODEL
+
+We consider the transmission of a sequence $s = [s_1, \dots, s_L]^T$ of M-QAM signals from a set $\mathcal{S}_M$, rotated by some random carrier phases $\theta = [\theta_1, \dots, \theta_L]^T$, over an additive white Gaussian noise (AWGN) channel. Assuming that the timing recovery is perfect without inter-symbol interference (ISI), the sampled baseband signal $\mathbf{y} = [y_1, \dots, y_L]^T$ can be written as:
+
+$$
+y_l = s_l e^{j\theta_l} + n_l = (a_l + j b_l) e^{j\theta_l} + n_l, \quad (8)
+$$
+
+where $s_l$, $\theta_l$ and $n_l$ are respectively the l-th transmitted complex symbol ($s_l = a_l + j b_l$), the residual phase distortion and the zero mean circular Gaussian noise with variance $\sigma_n^2$.
+
+For the data aided (DA) scenario, the transmitted symbols are independent and identically distributed (i.i.d.) and the conditional probability based on the known phase is:
+
+$$
+p(\mathbf{y} | \boldsymbol{\theta}) = \prod_{l=1}^{L} p(y_l | \theta_l) = \left( \frac{1}{\pi \sigma_n^2} \right)^L \prod_{l=1}^{L} \exp \left\{ -\frac{|s_l|^2 + |y_l|^2}{\sigma_n^2} \right\} \exp \left\{ 2 \frac{\operatorname{Re}\{y_l s_l^* e^{-j\theta_l}\}}{\sigma_n^2} \right\} \quad (9)
+$$
+
+For the non-data aided (NDA) case, the transmitted symbols are also i.i.d. and thus $p(\mathbf{y}|\boldsymbol{\theta})$ has a similar form:
+
+$$
+\begin{align}
+p(\mathbf{y} | \boldsymbol{\theta}) &= \prod_{l=1}^{L} p(y_l | \theta_l) \nonumber \\
+&= \left( \frac{1}{\pi \sigma_n^2} \right)^L \prod_{l=1}^{L} \sum_{s_l \in S_M} \frac{1}{M} \exp \left\{ -\frac{|s_l|^2 + |y_l|^2 - 2 \operatorname{Re}\{y_l s_l^* e^{-j\theta_l}\}}{\sigma_n^2} \right\}. \tag{10}
+\end{align}
+$$
+
+In practice, the oscillators are never perfect and suffer from jitters. [11] and [12] have provided a mathematical model which has been widely used to describe the oscillator behavior:
+
+$$
+\theta_l = \theta_{l-1} + \xi + w_l, \quad (11)
+$$
+
+where $\theta_l$ is the unknown phase offset at time $l$, $\xi$ is the unknown constant frequency offset (linear drift), and $w_l$ is a white Gaussian noise with zero mean and variance $\sigma_w^2$. The corresponding conditional probability can be expressed as:
+
+$$
+p(\theta_l | \theta_{l-1}, \xi) = \frac{1}{\sqrt{2\pi}\sigma_w} \exp\left\{-\frac{(\theta_l - \theta_{l-1} - \xi)^2}{2\sigma_w^2}\right\}. \quad (12)
+$$
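For illustration, the observation model (8) driven by the phase evolution (11) can be simulated in a few lines. This is a sketch only; the 16-QAM alphabet, seed and parameter values below are arbitrary choices, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

L = 1000          # frame length
sigma_w = 0.05    # phase-noise standard deviation (rad), arbitrary
xi = 1e-3         # constant frequency offset (rad/symbol), arbitrary
sigma_n2 = 0.1    # AWGN variance E|n_l|^2, arbitrary

# unit-energy 16-QAM alphabet
pts = np.array([a + 1j*b for a in (-3, -1, 1, 3) for b in (-3, -1, 1, 3)])
pts = pts / np.sqrt(np.mean(np.abs(pts)**2))

s = rng.choice(pts, size=L)                           # i.i.d. symbols
theta = np.cumsum(xi + rng.normal(0.0, sigma_w, L))   # (11): theta_l = theta_{l-1} + xi + w_l
n = np.sqrt(sigma_n2/2)*(rng.normal(size=L) + 1j*rng.normal(size=L))
y = s*np.exp(1j*theta) + n                            # (8)
```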
+
+IV. CRBS FOR THE DYNAMICAL PHASE ESTIMATION
+
+In practice, phase estimation can be considered using the off-line scenario and the on-line scenario. The former uses the whole observation frame and only begins estimation once the whole frame has been received, while the latter directly uses the current and previous observations. In the following, we will give both the on-line and the off-line lower bounds.
+
+The parameters of the phase model (11) include the random parameters $\boldsymbol{\theta}=[\theta_1,\cdots,\theta_L]^T$ (i.e. the dynamical phase) and a deterministic parameter $\xi$ (i.e. the scalar linear drift). So the parameter vector can be written as:
+
+$$
+\mathbf{u} = \begin{bmatrix} \mathbf{u}_r \\ \mathbf{u}_d \end{bmatrix} = \begin{bmatrix} \boldsymbol{\theta} \\ \xi \end{bmatrix} \qquad (13)
+$$
+
+Equation (3) thus becomes:
+
+$$
+\mathbf{H}(\xi^\Delta) = E_{\boldsymbol{\theta}|\xi^\Delta} [\mathbf{F}(\xi^\Delta, \boldsymbol{\theta})] + E_{\boldsymbol{\theta}|\xi^\Delta} \begin{pmatrix} -\Delta_{\boldsymbol{\theta}}^{\boldsymbol{\theta}} \log p(\boldsymbol{\theta} | \xi) & -\Delta_{\boldsymbol{\theta}}^{\xi} \log p(\boldsymbol{\theta} | \xi) \\ \left( -\Delta_{\boldsymbol{\theta}}^{\xi} \log p(\boldsymbol{\theta} | \xi) \right)^{T} & -\Delta_{\xi}^{\xi} \log p(\boldsymbol{\theta} | \xi) \end{pmatrix} \Bigg|_{\xi=\xi^\Delta} \quad (14)
+$$
+
+where $\mathbf{F}(\xi^\Delta, \boldsymbol{\theta}) = E_{\mathbf{y}|\boldsymbol{\theta},\xi^\Delta}\big[-\Delta_{\mathbf{u}}^{\mathbf{u}} \log p(\mathbf{y}|\xi, \boldsymbol{\theta})\big|_{\xi=\xi^\Delta}\big]$. We then decompose the HIM $\mathbf{H}$ into smaller matrices that will be useful in the sequel:
+
+$$
+\mathbf{H} = \begin{bmatrix}
+\mathbf{H}_{11} & \mathbf{H}_{12} \\
+\mathbf{H}_{21} & \mathbf{H}_{22}
+\end{bmatrix}
+=
+\begin{bmatrix}
+\mathbf{H}_{11} & \mathbf{H}_{12} \\
+\mathbf{H}_{12}^T & \mathbf{H}_{22}
+\end{bmatrix},
+\quad (15)
+$$
+
+where
+
+$$
+\left\{
+\begin{aligned}
+\mathbf{H}_{11} &= E_{\mathbf{y},\boldsymbol{\theta}|\xi^\Delta}[-\Delta_{\boldsymbol{\theta}}^{\boldsymbol{\theta}} \log p(\mathbf{y}|\boldsymbol{\theta},\xi)\big|_{\xi=\xi^\Delta}] + E_{\boldsymbol{\theta}|\xi^\Delta}[-\Delta_{\boldsymbol{\theta}}^{\boldsymbol{\theta}} \log p(\boldsymbol{\theta}|\xi)\big|_{\xi=\xi^\Delta}] \\
+\mathbf{H}_{12} &= E_{\mathbf{y},\boldsymbol{\theta}|\xi^\Delta}[-\Delta_{\boldsymbol{\theta}}^{\xi} \log p(\mathbf{y}|\boldsymbol{\theta},\xi)\big|_{\xi=\xi^\Delta}] + E_{\boldsymbol{\theta}|\xi^\Delta}[-\Delta_{\boldsymbol{\theta}}^{\xi} \log p(\boldsymbol{\theta}|\xi)\big|_{\xi=\xi^\Delta}] \\
+\mathbf{H}_{22} &= E_{\mathbf{y},\boldsymbol{\theta}|\xi^\Delta}[-\Delta_{\xi}^{\xi} \log p(\mathbf{y}|\boldsymbol{\theta},\xi)\big|_{\xi=\xi^\Delta}] + E_{\boldsymbol{\theta}|\xi^\Delta}[-\Delta_{\xi}^{\xi} \log p(\boldsymbol{\theta}|\xi)\big|_{\xi=\xi^\Delta}]
+\end{aligned}
+\right. \quad (16)
+$$
+
+A. Computation of $E_{\boldsymbol{\theta}|\boldsymbol{\xi}}[\mathbf{F}(\boldsymbol{\xi}^{\Delta}, \boldsymbol{\theta})]$
+
+From (9) (resp. (10)) in the DA (resp. NDA) scenario,
+$\ln p(\mathbf{y} | \xi, \boldsymbol{\theta})$ can be expanded as:
+
+$$
+\ln p(\mathbf{y} | \xi, \boldsymbol{\theta}) = \ln \sum_{\mathbf{s}} p(\mathbf{y} | \xi, \boldsymbol{\theta}, \mathbf{s})\, p(\mathbf{s}). \quad (17)
+$$
+
+Using the i.i.d condition among data, the FIM can be written as:
+
+$$
+E_{\boldsymbol{\theta}}[\mathbf{F}(\boldsymbol{\theta})] = J_D\, \mathbf{I}_L, \quad (18)
+$$
+
+where $\mathbf{I}_L$ is the $L \times L$ identity matrix and $J_D$ is defined as:
+
+$$
+J_D \triangleq E_{y_l,\theta_l} \left[ - \frac{\partial^2 \log p(y_l | \xi, \theta_l)}{\partial \theta_l^2} \right]. \quad (19)
+$$
+
+Starting with the DA scenario, from (9) we have that:
+
+$$
+E\left\{\frac{\partial^2 \ln p(y_l | \theta_l)}{\partial \theta_l^2}\right\} = -2 \cdot \mathrm{SNR} = -\frac{2\sigma_s^2}{\sigma_n^2}, \quad (20)
+$$
+
+where $\sigma_s^2 \triangleq E[|s_l|^2]$, so that $J_D = 2\sigma_s^2/\sigma_n^2$ in the DA case.
+
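The DA Fisher information is easy to verify by Monte Carlo: the sample mean of the log-likelihood curvature $2\operatorname{Re}\{y_l s_l^* e^{-j\theta_l}\}/\sigma_n^2$ should approach $2\sigma_s^2/\sigma_n^2$ (with $\sigma_s^2 = E[|s_l|^2]$). A sketch with an arbitrary unit-energy QPSK alphabet and noise level (not the paper's simulation setup):

```python
import numpy as np

rng = np.random.default_rng(1)

N = 200_000
sigma_n2 = 0.5                                   # arbitrary noise variance
pts = np.exp(1j*np.pi*(2*np.arange(4) + 1)/4)    # unit-energy QPSK

s = rng.choice(pts, size=N)
theta = rng.uniform(-np.pi, np.pi, size=N)
y = s*np.exp(1j*theta) + np.sqrt(sigma_n2/2)*(rng.normal(size=N) + 1j*rng.normal(size=N))

# curvature of the DA log-likelihood, symbol by symbol
J_samples = 2*np.real(y*np.conj(s)*np.exp(-1j*theta))/sigma_n2
J_D_mc = J_samples.mean()
J_D_th = 2*1.0/sigma_n2      # 2*sigma_s^2/sigma_n^2 with sigma_s^2 = 1

assert abs(J_D_mc - J_D_th)/J_D_th < 0.02
```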
+We now turn to the NDA scenario, and from (10), one has that:
+
+$$
+\frac{\partial^2 \ln p(\mathbf{y} | \boldsymbol{\theta}, \xi)}{\partial \theta_l^2} = \frac{\sum_{s_j \in S_M} \frac{\partial^2 p(y_l | s_j, \theta_l, \xi)}{\partial \theta_l^2}}{\sum_{s_j \in S_M} p(y_l | s_j, \theta_l, \xi)} - \left( \frac{\sum_{s_j \in S_M} \frac{\partial p(y_l | s_j, \theta_l, \xi)}{\partial \theta_l}}{\sum_{s_j \in S_M} p(y_l | s_j, \theta_l, \xi)} \right)^2 , \quad (21)
+$$
+
+where $\frac{\partial p(y_l | s_j, \theta_l, \xi)}{\partial \theta_l} = p(y_l | s_j, \theta_l, \xi) \left(2 \operatorname{Im}\{y_l s_j^* e^{-j\theta_l}\}/\sigma_n^2\right)$, (22)
+
+$$
+\frac{\partial^2 p(y_l | s_j, \theta_l, \xi)}{\partial \theta_l^2} = p(y_l | s_j, \theta_l, \xi) \left( \left(2 \operatorname{Im}\{y_l s_j^* e^{-j\theta_l}\}/\sigma_n^2\right)^2 - 2 \operatorname{Re}\{y_l s_j^* e^{-j\theta_l}\}/\sigma_n^2 \right), \quad (23)
+$$
+
+and $p(y_l|s_j, \theta_l, \xi)$ has been shown in (10). Unfortunately, the expectation of (21) has no simple analytical solution and one must resort to numerical methods.
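A standard numerical route is Monte-Carlo integration of (21): draw samples of $y_l$, evaluate (22)-(23) over the symbol alphabet, and average. A sketch for QPSK at an arbitrary SNR (not the paper's code; normalization factors of $p(y_l|s_j,\theta_l,\xi)$ common to all $s_j$ cancel in the ratios and are dropped):

```python
import numpy as np

rng = np.random.default_rng(2)

N = 100_000
sigma_n2 = 0.5
pts = np.exp(1j*np.pi*(2*np.arange(4) + 1)/4)    # unit-energy QPSK alphabet

s = rng.choice(pts, size=N)
theta = rng.uniform(-np.pi, np.pi, size=N)
y = s*np.exp(1j*theta) + np.sqrt(sigma_n2/2)*(rng.normal(size=N) + 1j*rng.normal(size=N))

# z[i, j] = y_i * s_j^* * e^{-j theta_i} for every alphabet point
z = y[:, None]*np.conj(pts)[None, :]*np.exp(-1j*theta)[:, None]
p = np.exp((2*np.real(z) - np.abs(pts)[None, :]**2)/sigma_n2)   # likelihoods, common factors dropped
dp = p*(2*np.imag(z)/sigma_n2)                                  # (22)
d2p = p*((2*np.imag(z)/sigma_n2)**2 - 2*np.real(z)/sigma_n2)    # (23)

f, f1, f2 = p.sum(1), dp.sum(1), d2p.sum(1)
J_nda = np.mean(-(f2/f - (f1/f)**2))   # minus the expectation of (21)
J_da = 2/sigma_n2                      # DA reference value

assert 0 < J_nda < J_da
```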
+
+B. Analytical Expressions of HCRBs
+
+From (16), due to (12) and without any a priori knowledge
+---PAGE_BREAK---
+
+on $\theta_1$ (i.e. $E_{\theta_1}[\Delta_{\theta_1}^{\theta_1} \log p(\theta_1)] = 0$), we obtain matrix $\mathbf{H}_{11}$:
+
+$$ \mathbf{H}_{11} = b \begin{bmatrix} A+1 & 1 & 0 & \cdots & 0 \\ 1 & A & 1 & \ddots & \vdots \\ 0 & \ddots & \ddots & \ddots & 0 \\ \vdots & \ddots & 1 & A & 1 \\ 0 & \cdots & 0 & 1 & A+1 \end{bmatrix}, \quad (24) $$
+
+where $A = -\sigma_w^2 J_D - 2$ and $b = -1/\sigma_w^2$. (25)
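As a concrete illustration, $\mathbf{H}_{11}$ in (24) can be assembled directly from $A$ and $b$ in (25). This is a sketch with arbitrary parameter values, where `jd` stands for $J_D$ and `sigma_w2` for $\sigma_w^2$.

```python
import numpy as np

def build_h11(L, jd, sigma_w2):
    """Tridiagonal matrix H11 of (24), with A and b as defined in (25)."""
    A = -sigma_w2 * jd - 2
    b = -1.0 / sigma_w2
    # b times a tridiagonal matrix with A on the diagonal, 1 off-diagonal
    h = b * (np.diag(np.full(L, A)) + np.eye(L, k=1) + np.eye(L, k=-1))
    h[0, 0] = h[-1, -1] = b * (A + 1)   # corner entries are b*(A+1)
    return h

h11 = build_h11(L=6, jd=20.0, sigma_w2=0.01)
```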
+
+For both the DA and NDA scenarios (see (9) and (10)), $\log p(y_i | \xi, \theta_l)$ is independent of $\xi$, i.e. the partial derivatives $\Delta_{\xi}^{\xi} \log p(\mathbf{y} | \boldsymbol{\theta}, \xi)$ and $\Delta_{\boldsymbol{\theta}}^{\xi} \log p(\mathbf{y} | \boldsymbol{\theta}, \xi)$ are equal to 0. $\mathbf{H}_{12}$ and $\mathbf{H}_{22}$ thus reduce to:
+
+$$ \mathbf{H}_{12} = [1/\sigma_w^2, \mathbf{0}_{1 \times (L-2)}, -1/\sigma_w^2]^T \quad (26) $$
+
+$$ \mathbf{H}_{22} = (L-1)/\sigma_w^2. \quad (27) $$
+
+We now invert the HIM. Starting with $\mathbf{H}_{11}^{-1}$ and proceeding similarly to Appendix I of [15], we find:
+
+$$ [\mathbf{H}_{11}^{-1}]_{l,1} = \frac{b^{l-1}}{|\mathbf{H}_{11}|} (\rho_1 r_1^{L-l-1} (r_1 + b) + \rho_2 r_2^{L-l-1} (r_2 + b)), \quad (28) $$
+
+$$ [\mathbf{H}_{11}^{-1}]_{l,l} = \frac{1}{|\mathbf{H}_{11}|} \left\{ \begin{aligned} & \rho_1^2 (b+r_1)^2 r_1^{L-3} + \rho_2^2 (b+r_2)^2 r_2^{L-3} \\ & - b^2 (r_1^{L-2} r_2^{L-l-1} + r_1^{L-l-1} r_2^{L-2}) (A-2)^{-1} \end{aligned} \right\}, \quad (29) $$
+
+where
+
+$$ \begin{cases} r_1 = 1/\sigma_w^2 + (1 - \sqrt{1 + 4(J_D\sigma_w^2)^{-1}})J_D/2, \\ r_2 = 1/\sigma_w^2 + (1 + \sqrt{1 + 4(J_D\sigma_w^2)^{-1}})J_D/2 \end{cases}, \quad (30) $$
+
+and
+
+$$ \begin{cases} \rho_1 \triangleq (\sqrt{1+4(J_D\sigma_w^2)^{-1}} - 1 - 2(J_D\sigma_w^2)^{-1}) / (\sqrt{1+4(J_D\sigma_w^2)^{-1}}). \\ \rho_2 \triangleq (\sqrt{1+4(J_D\sigma_w^2)^{-1}} + 1 + 2(J_D\sigma_w^2)^{-1}) / (\sqrt{1+4(J_D\sigma_w^2)^{-1}}). \end{cases} \quad (31) $$
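The quantities in (30) and (31) are straightforward to compute. The sketch below also checks two identities that follow directly from (30)-(31), namely $r_1 r_2 = 1/\sigma_w^4 = b^2$ and $\rho_1 + \rho_2 = 2$; the parameter values are arbitrary.

```python
import numpy as np

def roots_and_weights(jd, sigma_w2):
    """r1, r2 of (30) and rho1, rho2 of (31); jd stands for J_D."""
    d = np.sqrt(1 + 4 / (jd * sigma_w2))
    r1 = 1 / sigma_w2 + (1 - d) * jd / 2
    r2 = 1 / sigma_w2 + (1 + d) * jd / 2
    rho1 = (d - 1 - 2 / (jd * sigma_w2)) / d
    rho2 = (d + 1 + 2 / (jd * sigma_w2)) / d
    return r1, r2, rho1, rho2

r1, r2, rho1, rho2 = roots_and_weights(jd=20.0, sigma_w2=0.01)
```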
+
+Thanks to the block-matrix inversion formula [3], we have:
+
+$$ \mathbf{H}^{-1} = \begin{bmatrix} \mathbf{H}_{11}^{-1} + \mathbf{V}_L & -\lambda^{-1}\mathbf{H}_{11}^{-1}\mathbf{H}_{12} \\ -\lambda^{-1}\mathbf{H}_{12}^{T}\mathbf{H}_{11}^{-1} & \lambda^{-1} \end{bmatrix}, \quad (32) $$
+
+where we define $\lambda \triangleq \frac{L-1}{\sigma_w^2} - \mathbf{H}_{12}^T\mathbf{H}_{11}^{-1}\mathbf{H}_{12}$ and $\mathbf{V}_L \triangleq \lambda^{-1}\mathbf{H}_{11}^{-1}\mathbf{H}_{12}\mathbf{H}_{12}^{T}\mathbf{H}_{11}^{-1}$.
+
+Due to the particularly simple structures of $\mathbf{H}_{11}$ and $\mathbf{H}_{12}$, we find:
+
+$$ \lambda = \frac{L-1}{\sigma_w^2} - \frac{2b^2}{|\mathbf{H}_{11}|} \{\rho_1 r_1^{L-2} (r_1+b) + \rho_2 r_2^{L-2} (r_2+b) + b^{L-1}\}. \quad (33) $$
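The block-inversion route can be cross-checked numerically: assembling the HIM from (24), (26) and (27), computing $\lambda$ as the Schur complement and adding $\mathbf{V}_L$ as in (32) must reproduce the diagonal of a direct inverse of the full matrix. The sketch below does exactly that; the parameter values are arbitrary.

```python
import numpy as np

# Assemble H11 (24), H12 (26) and H22 (27) for a small block.
L, jd, sigma_w2 = 8, 20.0, 0.01
A = -sigma_w2 * jd - 2
b = -1.0 / sigma_w2
h11 = b * (np.diag(np.full(L, A)) + np.eye(L, k=1) + np.eye(L, k=-1))
h11[0, 0] = h11[-1, -1] = b * (A + 1)
h12 = np.zeros((L, 1))
h12[0, 0], h12[-1, 0] = 1 / sigma_w2, -1 / sigma_w2
h22 = (L - 1) / sigma_w2

w = np.linalg.solve(h11, h12)              # H11^{-1} H12
lam = (h22 - h12.T @ w).item()             # Schur complement, cf. (33)
hcrb_diag = np.diag(np.linalg.inv(h11) + w @ w.T / lam)  # V_L added per (32)

# Reference: direct inversion of the full hybrid information matrix.
full = np.block([[h11, h12], [h12.T, np.array([[h22]])]])
direct = np.diag(np.linalg.inv(full))[:L]
```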
+
+From the definition of $\mathbf{V}_L$, the diagonal elements $[\mathbf{V}_L]_{l,l}$ can be written as:
+
+$$ [\mathbf{V}_L]_{l,l} = ([\mathbf{H}_{11}^{-1}]_{l,1} - [\mathbf{H}_{11}^{-1}]_{l,L})^2 / (\lambda \sigma_w^4). \quad (34) $$
+
+Using (29) and (34), we can then obtain the analytical expression of the upper diagonal elements $[\mathbf{H}^{-1}]_{l,l}$ in (32), i.e. the off-line HCRB associated with the estimation of $\theta_l$:
+
+$$ [\mathbf{H}^{-1}]_{l,l} = \frac{1}{|\mathbf{H}_{11}|} \left\{ \begin{aligned} & \rho_1^2 (b+r_1)^2 r_1^{L-3} + \rho_2^2 (b+r_2)^2 r_2^{L-3} \\ & - b^2 (r_1^{L-2} r_2^{L-l-1} + r_1^{L-l-1} r_2^{L-2}) (A-2)^{-1} \\ & + \frac{b^2}{\lambda |\mathbf{H}_{11}|^2} \left\{ b^{l-1} (\rho_1 r_1^{L-l-1} (b+r_1) + \rho_2 r_2^{L-l-1} (b+r_2)) + b^{L-1} (\rho_1 r_1^{L-2} (b+r_1) + \rho_2 r_2^{L-2} (b+r_2)) \right\}^2 \end{aligned} \right\}. \quad (35) $$
+
+Finally, by replacing $|\mathbf{H}_{11}|$ with $|\mathbf{H}_{11}(l)|$ in (35), one also obtains the analytical expression of the on-line HCRB associated with the estimation of $\theta_l$ ($l \ge 3$):
+
+$$ C_{\mathbf{H}_l} = \frac{1}{|\mathbf{H}_{11}(l)|} \left\{ \begin{aligned} & \rho_1^2 (b+r_1)^2 r_1^{l-3} + \rho_2^2 (b+r_2)^2 r_2^{l-3} \\ & - b^2 (r_1^{l-2} r_2^{l-1} + r_1^{l-1} r_2^{l-2}) (A-2)^{-1} \\ & + \frac{b^2}{\lambda |\mathbf{H}_{11}(l)|^2} \left\{ b^{l-1} (\rho_1 r_1^{l-1} (b+r_1) + \rho_2 r_2^{l-1} (b+r_2)) + \rho_1 r_1^{l-2} (b+r_1) + \rho_2 r_2^{l-2} (b+r_2) \right\}^2 \end{aligned} \right\}, \quad (36) $$
+
+where $\mathbf{H}_{11}(l)$ is the upper-left $l \times l$ submatrix of $\mathbf{H}_{11}$ (see (24)). Note that (35) and (36) do not depend on the value of the parameter $\xi$.
+
+### C. Analytical Expressions of BCRBs
+
+When there is no linear drift, i.e. $\xi = 0$, the parameter vector $\mathbf{u}$ contains only the random parameters $\boldsymbol{\theta}$, i.e. $\mathbf{u} = \mathbf{u}_r = \boldsymbol{\theta}$, and the BCRB is the lower bound of the MSE. Moreover, the Bayesian information matrix (BIM) $\mathbf{B}_L$ is equal to the upper-left sub-matrix of the hybrid information matrix $\mathbf{H}$:
+
+$$ \mathbf{B}_L = \mathbf{H}_{11}. \quad (37) $$
+
+The diagonal element $[\mathbf{B}_L^{-1}]_{l,l}$ is the off-line BCRB associated with the estimation of $\theta_l$. The corresponding analytical expressions for the off-line and on-line BCRBs are, respectively:
+
+$$ [\mathbf{B}_L^{-1}]_{l,l} = \frac{1}{|\mathbf{B}_L|} \left\{ \begin{aligned} & \rho_1^2 (b+r_1)^2 r_1^{L-3} + \rho_2^2 (b+r_2)^2 r_2^{L-3} \\ & - b^2 (r_1^{L-2} r_2^{L-l-1} + r_1^{L-l-1} r_2^{L-2}) (A-2)^{-1} \end{aligned} \right\}, \quad (38) $$
+
+and
+
+$$ C_{\mathbf{B}_l} = \frac{1}{|\mathbf{B}_L(l)|} \left\{ \rho_1^2 (b+r_1)^2 r_1^{l-3} + \rho_2^2 (b+r_2)^2 r_2^{l-3} - b^2 (r_1^{l-2} r_2^{l-1} + r_1^{l-1} r_2^{l-2}) (A-2)^{-1} \right\}, \quad (39) $$
+
+where $\mathbf{B}_L(l)$ denotes the upper-left $l \times l$ submatrix of $\mathbf{B}_L$.
+
+Note that (38) (resp. (39)) is the first term on the right side of (35) (resp. (36)). The second terms in (35) and (36) represent the additional positive uncertainty brought by $\xi$; the HCRB is thus always lower bounded by the BCRB.
+
+## V. DISCUSSION
+
+Owing to space constraints, only the HCRB is discussed in the following. First, Fig. 1 readily shows the superiority of the off-line approach over the on-line approach at the different positions of the block. Also, there is little improvement for the DA scenario (compared to the NDA scenario) when using a BPSK modulation, but the gain becomes obvious for larger constellations. Moreover, as the number of observations increases, both the on-line and off-line CRBs decrease and tend to their corresponding asymptotic values.
+
+We now illustrate the behavior of the M-QAM ($M=4$, 16, 64 and 256) HCRB of $\theta_l$ as a function of the SNR in Fig. 2.
+
+At high SNR (above 30 dB), we notice that the various off-line CRBs logically merge and do not depend on the constellation. In this range of SNR, the received constellations are reliable enough to make correct decisions and it is sufficient to take only the present observation $y_l$ into account to estimate $\theta_l$; thus the estimation problem tends to a deterministic phase
+---PAGE_BREAK---
+
+estimation problem where we estimate *L* independent phases θᵢ with *L* independent observations.
+
+Fig. 1 HCRBs in the various block positions (L = 30 and L = 60).
+
+Fig. 2 HCRBs in the center of the block (*l* = 30) for various constellations.
+
+In mid-range SNRs, one observation is not sufficient to estimate the phase offset and a block of observations can improve the estimation performance. This explains why the NDA CRBs no longer merge with the DA CRBs. Moreover, we notice that every time the constellation size is multiplied by 4, the threshold where the NDA bounds leave the DA bound increases by 6 dB.
+
+At low SNR, the AWGN has more influence than the phase noise, which results in many decision errors. This is why the NDA CRBs increase more quickly at low SNR than at high SNR, particularly for the two-dimensional QAM signals (compared to the real BPSK signals).
+
+## VI. CONCLUSION
+
+In this paper, we have applied the general analytical form of the BCRBs and HCRBs to evaluate the performance of dynamical phase estimation. We have illustrated the phase estimation performance for arbitrary constellations. In particular, we can measure the advantage of using an off-line scenario. We also point out the difference between the QAM bounds and the traditionally studied BPSK bounds.
+
+## REFERENCES
+
+[1] C. Herzet, N. Noels, V. Lottici, H. Wymeersch, M. Luise, M. Moeneclaey, L. Vandendorpe "Code-aided turbo synchronization," Proceedings of the IEEE, vol. 95, pp. 1255-1271, June 2007.
+
+[2] H. L. V. Trees, Detection, Estimation and Modulation Theory. New York: Wiley, 1968, vol. 1.
+
+[3] S. M. Kay, Fundamentals of statistical signal processing: estimation theory. Upper Saddle River, NJ, USA: Prentice-Hall, Inc., 1993.
+
+[4] H. Meyr, M. Moeneclaey, and S. Fechtel, Digital Communication Receivers: Synchronization, Channel Estimation and Signal Processing, ser. Telecommunications and Signal Processing. New York: Wiley, 1998.
+
+[5] U. Mengali and A. N. D'Andrea, Synchronization Techniques for Digital Receivers. New York: Plenum, 1997.
+
+[6] W.G. Cowley, "Phase and frequency estimation for PSK packets: Bounds and algorithms," IEEE Trans. Commun., vol. 44, pp. 26-28, Jan. 1996.
+
+[7] F. Rice, B. Cowley, B. Moran, M. Rice, "Cramer-Rao lower bounds for QAM phase and frequency estimation," IEEE Trans. Commun., vol. 49, pp 1582-1591, Sep. 2001.
+
+[8] D.C. Rife and R.R. Boorstyn, "Single-tone parameter estimation from discrete-time observations," IEEE Trans. Inf. Theory, vol. IT-20, No. 5, pp. 591-597, Sept. 1974.
+
+[9] H. Steendam and M. Moeneclaey, "Low-SNR limit of the Cramer-Rao bound for estimating the carrier phase and frequency of a PSK or QAM waveform," IEEE Commun. Letters, vol. 5, pp. 215-217, May 2001.
+
+[10] N. Noels, H. Steendam, and M. Moeneclaey, "The true Cramer-Rao bound for carrier frequency estimation from a PSK signal," IEEE Trans. Commun., vol. 52, pp. 834-844, May 2004.
+
+[11] J. A. McNeill, "Jitter in ring oscillators," Ph.D. dissertation, Boston University, 1994.
+
+[12] A. Demir, A. Mehrotra, and J. Roychowdhury, "Phase noise in oscillators: a unifying theory and numerical methods for characterization," IEEE Trans. Circuits Syst. I, vol. 47, pp. 655-674, May 2000.
+
+[13] A. Barbieri, D. Bolletta, and G. Colavolpe, "On the Cramer-Rao bound for carrier frequency estimation in the presence of phase noise," IEEE GLOBECOM 05, vol. 3, Issue 28, Nov.-Dec. 2005.
+
+[14] S. Bay, C. Herzet, J.P. Barbot, J. M. Brossier and B. Geller, "Analytic and Asymptotic Analysis of Bayesian Cramér-Rao Bound for Dynamical Phase Offset Estimation," IEEE Trans. on Signal Processing, vol. 56, pp. 61-70, Jan. 2008.
+
+[15] S. Bay, B. Geller, A. Renaux, J. P. Barbot and J. M. Brossier, "A General Form of the Hybrid CRB Application to the Dynamical Phase Estimation," IEEE Signal Processing Letters, vol. 15, pp. 453-456, 2008.
+
+[16] J. Yang, and B. Geller, "Near-optimum Low-Complexity Smoothing Loops for Dynamical Phase Estimation," to appear in IEEE Trans. on Signal Processing.
+
+[17] J. Yang, B. Geller, C. Herzet, and J.M. Brossier, "Smoothing PLLs for QAM Dynamical Phase Estimation," in Proc. IEEE Inter. Conf. on Commun. 2009, ICC'09, Dresden, June 14-18, 2009.
+
+[18] J. Yang, B. Geller, and A. Wei, "Approximate Expressions for Cramer-Rao Bounds of Coded Aided QAM Dynamical Phase Estimation," in Proc. IEEE Inter. Conf. on Commun. 2009, ICC'09, Dresden, June 14-18, 2009.
\ No newline at end of file
diff --git a/samples/texts_merged/6555042.md b/samples/texts_merged/6555042.md
new file mode 100644
index 0000000000000000000000000000000000000000..e730780fe9f96c3488805771aab1717cd2979153
--- /dev/null
+++ b/samples/texts_merged/6555042.md
@@ -0,0 +1,506 @@
+
+---PAGE_BREAK---
+
+Decomposing Alignment-based Conformance Checking
+of Data-aware Process Models
+
+Massimiliano de Leoni¹*, Jorge Munoz-Gama²**, Josep Carmona², and
+Wil M.P. van der Aalst¹
+
+¹ Eindhoven University of Technology, Eindhoven (The Netherlands)
+² Universitat Politècnica de Catalunya, Barcelona (Spain)
+
+m.d.leoni@tue.nl, jmunoz@cs.upc.edu, jcarmona@cs.upc.edu,
+w.m.p.v.d.aalst@tue.nl
+
+**Abstract.** Process mining techniques relate observed behavior to modeled be-
+havior, e.g., the automatic discovery of a Petri net based on an event log. Process
+mining is not limited to process discovery and also includes conformance check-
+ing. Conformance checking techniques are used for evaluating the quality of dis-
+covered process models and to diagnose deviations from some normative model
+(e.g., to check compliance). Existing conformance checking approaches typically
+focus on the control-flow, thus being unable to diagnose deviations concerning
+data. This paper proposes a technique to check the conformance of data-aware
+process models. We use so-called *Petri nets with Data* to model data variables,
+guards, and read/write actions. Data-aware conformance checking problem may
+be very time consuming and sometimes even intractable when there are many
+transitions and data variables. Therefore, we propose a technique to decompose
+large data-aware conformance checking problems into smaller problems that can
+be solved more efficiently. We provide a general correctness result showing that
+decomposition does not influence the outcome of conformance checking. The
+approach is supported through ProM plug-ins and experimental results show sig-
+nificant performance improvements. Experiments have also been conducted with
+a real-life case study, thus showing that the approach is also relevant in real busi-
+ness settings.
+
+**Keywords:** Process Mining, Conformance Checking, Divide-and-Conquer Tech-
+niques, Multi-Perspective Process Modelling
+
+# 1 Introduction
+
+Nowadays, most organizations document and analyze their processes in some form,
+and with it, the practical relevance of process mining is increasing as more and more
+event data becomes available. Process mining techniques aim to discover, monitor and
+improve real processes by extracting knowledge from event logs. The two most promi-
+nent process mining tasks are: (i) *process discovery*: learning a process model from
+example behavior recorded in an event log, and (ii) *conformance checking*: diagnosing
+
+* When conducting most of this research work, Dr. de Leoni was also affiliated with University of Padua and financially supported by the Eurostars - Eureka project PROMPT (E!6696).
+** Supported by FPU Grant (AP2009-4959) and project FORMALISM (TIN-2007-66523)
+---PAGE_BREAK---
+
+Fig. 1: Example of a (simplified) process to request loans. The dotted arcs going from a transition to a variable denote write operations; the arcs towards a transition denote read operations, i.e. the transition requires accessing the current variables' value. In the paper, each transition is abbreviated into a lower-case letter (e.g. *a*) and each variable is represented as an upper-case letter (e.g. *A*). The abbreviations are shown in brackets after the name of the transitions or variable names.
+
+and quantifying discrepancies between observed behavior and modeled behavior [1]. Models that faithfully conform to reality are necessary to obtain trustworthy analysis and simulation results, for certification and regulation purposes, or simply to gain insight into the process.
+
+Most of the work done in conformance checking in the literature focuses on the control-flow of the underlying process, i.e. the ordering of activities. There are various approaches to compute the fraction of events or traces in the log that can be replayed by the model [2,3].
+
+In a data-aware process model, each case, i.e. a process instance, is characterized by its case variables. Paths taken during the execution may be governed by guards and conditions defined over such variables. A process model specifies the set of variables and their possible values, guards, and write/read actions. Since existing conformance checking techniques typically completely abstract from data, resources, and time, many deviations remain undetected. Therefore, the event log may record executions of process instances that appear fully conforming, even when it is not the case. Rigorous analysis of the data perspective is needed to reveal such deviations.
+
+Let us consider the process that is modeled as BPMN diagram in Figure 1. It models the handling of loans requests from customers. It is deliberately oversimplified to be able to explain the concepts more easily. The process starts with a credit request where the requestor provides some documents to demonstrate the capability of paying the loan back. These documents are verified and the interest amount is also computed. If the verification step is negative, a negative decision is made, the requestor is informed and, finally, the negative outcome of the request is stored in the system. If verification
+---PAGE_BREAK---
+
+is positive, an assessment is made to take a final decision. Independently of the assess-
+ment's decision, the requestor is informed. Moreover, even if the verification is negative,
+the requestor can renegotiate the loan (e.g. to have lower interests) by providing further
+documents or by asking for a smaller amount. In this case, the verification-assessment
+part is repeated. If both the decision and verification are positive and the requestor is
+not willing to renegotiate, the credit is opened. Let us consider the following trace:³
+
+$$
+\sigma_{ex} = \left(
+ \begin{array}{@{}l@{}}
+ (\mathbf{a}, \emptyset, \{(A, 4000)\}), (\mathbf{b}, \{(A, 4000)\}, \{(I, 450), (V, \text{false})\}), (\mathbf{c}, \{(V, \text{false})\}, \\
+ \quad \{(D, \text{true})\}), (\mathbf{e}, \emptyset, \emptyset), (\mathbf{f}, \{(A, 4000)\}, \{(A, 5000)\}), (\mathbf{b}, \{(A, 5000)\}, \{(I, 450), \\
+ \quad (V, \text{false})\}), (\mathbf{d}, \{(V, \text{false})\}, \{(D, \text{false})\}), (\mathbf{e}, \emptyset, \emptyset), (\mathbf{h}, \{(D, \text{true})\}, \emptyset)
+ \end{array}
+\right)
+$$
+
+Seen from a control-flow perspective only (i.e. only considering the activities' order-
+ing), the trace seems to be fully conforming. Nonetheless, a number of deviations can
+be noticed if the data perspective is considered. First of all, if activity *c* is executed,
+previously activity *b* could not have resulted in a negative verification, i.e. *V* is set to
+*false*. Second, activity *f* cannot write value 5000 to variable *A*, as this new value is
+larger than the previous value, i.e. 4000. Furthermore, if the decision and verification
+are both negative, i.e. both *V* and *D* are set to *false*, then *h* cannot be executed at the
+end.
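The three deviations above can be detected by a naive data-aware replay of $\sigma_{ex}$. The toy sketch below hard-codes the three guards paraphrased from the text, with events encoded as $(act, r, w)$ triples as in footnote 3; it only illustrates why the data perspective matters and is not the alignment technique developed in the paper.

```python
# The running example's trace; read-maps are carried along but, in this
# toy sketch, only the written values and the hard-coded guards matter.
trace = [
    ("a", {}, {"A": 4000}),
    ("b", {"A": 4000}, {"I": 450, "V": False}),
    ("c", {"V": False}, {"D": True}),
    ("e", {}, {}),
    ("f", {"A": 4000}, {"A": 5000}),
    ("b", {"A": 5000}, {"I": 450, "V": False}),
    ("d", {"V": False}, {"D": False}),
    ("e", {}, {}),
    ("h", {"D": True}, {}),
]

def replay(trace):
    state, deviations = {}, []
    for i, (act, read, write) in enumerate(trace):
        if act == "c" and not state.get("V"):
            deviations.append((i, "c requires a positive verification"))
        if act == "f" and write.get("A", 0) > state.get("A", float("inf")):
            deviations.append((i, "renegotiated amount may not increase"))
        if act == "h" and not (state.get("V") and state.get("D")):
            deviations.append((i, "h requires V and D to be true"))
        state.update(write)
    return deviations

devs = replay(trace)
```

Running the replay flags exactly the three events discussed in the text (positions 2, 4 and 8 of the trace), while a purely control-flow check would flag nothing.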
+
+The identification of non-conforming traces clearly has value in itself. Nonetheless,
+organizations are often interested in explanations that can steer measures to improve
+the quality of the process. *Alignments* aim to support more refined conformance check-
+ing. An alignment aligns a case in the event log with an execution path of the process
+model as good as possible. If the case deviates from the model, then it is not possible
+to perfectly align with the model and a best matching scenario is selected. Note that
+for the same deviation, multiple explanations can be given. For instance, the problem
+that *h* was executed when it was not supposed to happen can be explained in two ways:
+(1) *h* should not have occurred because *V* and *D* are both set to **false** ("control-flow is
+wrong") and (2) *V* and *D* should both have been set to **true** because *h* occurs ("data-flow
+is wrong"). In order to decide for the most reasonable explanation, costs are assigned
+to deviations and we aim to find the explanation with the lowest cost. For instance, if
+assigning a wrong value to *V* and *D* is less severe than executing *h* wrongly, the sec-
+ond explanation is preferred. The seminal work in [3] only considers alignments in the
+control-flow part, thus ignoring the data-perspective aspect of conformance.
+
+As we detail in Section 2.4, finding an alignment of an event log and a data-aware
+process model is undecidable in the general case. However, to make the problem decid-
+able, works [4,5] put forward the limitation that guards need to be linear (in)equations.
+Readers are also referred to them for a state-of-the-art analysis of data-aware confor-
+mance checking. These works also show that, even with that limitation, the problem
+of finding an alignment of an event log can become intractable since the problem's
+complexity is exponential on the size of the model, i.e. the number of activities and data
+variables. In this paper, while keeping the limitations mentioned above, we aim to speed
+
+³ Notation (**act**, *r*, *w*) is used to denote the occurrence of activity *act* that writes and reads variables according to functions *w* and *r*, e.g., (b, {(A, 4000)}, {(I, 450), (V, false)}) is an event corresponding to the occurrence of activity **b** while reading value 4000 for variable **A** and writing values 450 and **false** to variables **I** and **V** respectively. (e, ∅, ∅) corresponds to the occurrence of activity **e** without reading/writing any variables.
+---PAGE_BREAK---
+
+Fig. 2: Positioning the contribution of this paper with respect to the state of the art: the gray area identifies the novelty of the proposed technique.
+
+up the computation of alignments by using a divide-and-conquer approach. The data-aware process model is split into smaller partly overlapping model fragments. For each model fragment a sublog is created by projecting the initial event log onto the activities used in the fragment. Given the exponential nature of conformance checking, this may significantly reduce the computation time. If the decomposition is done properly, then any trace that fits into the overall model also fits all of the smaller model fragments and vice versa. Figure 2 positions the contribution of this paper with respect to the state-of-the-art alignment-based techniques. The top part identifies the alignment-based conformance checking that considers the control flow, only. The bottom part refers to the alignment-based conformance checking techniques that account for data, as well. Regarding the control-flow only, several approaches have been proposed to decompose process mining problems, both for discovery and conformance checking. As described in [6], it is possible to decompose process mining problems in a variety of ways. Special cases of this more general theory are passages [7] and SESE-based decomposition [8]. However, these approaches are limited to control-flow. Indeed, some techniques exist that also consider the data aspects (i.e. [4,5]) but without exploiting the possibility of decomposing the data-aware model. In this paper, we extend the control-flow approaches mentioned above to also take data into account, which coincides with the gray area in Figure 2. Finally, the work in [9] (and similar) for data-aware conformance on declarative models is orthogonal to the contributions listed in Figure 2, that are focused on procedural models.
+
+The decomposed data-aware conformance checking approach presented in this paper has been implemented as plug-ins for the ProM framework. We conducted experiments related to a real-life case study as well as with several synthetic event logs. Experimental results show that data-aware decomposition may indeed be used to significantly reduce the time needed for conformance checking and that the problem is practically relevant since models of real processes can actually be decomposed.
+
+Preliminaries are presented in Section 2. Section 3 introduces our approach for data-aware decomposition. Section 4 describes different algorithms for instantiating the gen-
+---PAGE_BREAK---
+
+eral results presented in Section 3. Section 5 reports on experimental results. Section 6
+concludes the paper.
+
+# 2 Preliminaries
+
+## 2.1 System Nets
+
+Petri nets and their semantics are defined as usual: a Petri net is a tuple (P, T, F) with P the set of places, T the set of transitions, $P \cap T = \emptyset$, and $F \subseteq (P \times T) \cup (T \times P)$ the flow relation. A place *p* is an input place of a transition *t* iff $(p, t) \in F$; similarly, *p* is an output place of *t* iff $(t, p) \in F$. The marking of a Petri net is a multiset $M \in \mathbb{B}(P)$, where $M(p)$ denotes the number of times element *p* appears in *M*. The standard set operators can be extended to multisets; $M_1 \uplus M_2$ denotes the union of two multisets.
+
+Firing a transition *t* in a marking *M* consumes one token from each of its input
+places and produces one token in each of its output places. Furthermore, transition *t*
+is enabled and may fire in *M* if there are enough tokens in its input places for the
+consumptions to be possible, i.e. iff for each input place *s* of *t*, *M*(*s*) ≥ 1. Some of
+the transitions corresponds to piece of work in the process; each of those transitions are
+associated with a label that indicates the activity that it represents.
+
+**Definition 1 (Labeled Petri net).** A labeled Petri net $\mathcal{PN} = (P, T, F, l)$ is a Petri net $(P, T, F)$ with labeling function $l \in T \to \mathcal{U}_A$ where $\mathcal{U}_A$ is some universe of activity labels.⁴
+
+Transitions without a label are invisible transitions, also known as τ-transitions. They
+are introduced for routing purposes but they do not represent actual pieces of work. As
+such, their execution is not recorded in the event logs.
+
+**Definition 2 (System Net).** A system net $\mathit{SN} = (\mathit{PN}, \mathit{M}_{\text{init}}, \mathit{M}_{\text{final}})$ is a triplet where $\mathit{PN} = (\mathit{P}, \mathit{T}, \mathit{F}, l)$ is a labeled Petri net, $\mathit{M}_{\text{init}} \in \mathbb{B}(\mathit{P})$ is the initial marking, and $\mathit{M}_{\text{final}} \in \mathbb{B}(\mathit{P})$ is the final marking. $\mathcal{U}_{\mathit{SN}}$ is the universe of system nets.
+
+**Definition 3 (System Net Notations).** Let $\mathit{SN} = (\mathit{PN}, \mathit{M}_{\text{init}}, \mathit{M}_{\text{final}}) \in \mathcal{U}_{\mathit{SN}}$ be a system net with $\mathit{PN} = (\mathit{P}, \mathit{T}, \mathit{F}, l)$.
+
+- $T_v(\mathit{SN}) = \mathrm{dom}(l)$ is the set of visible transitions in $\mathit{SN}$,
+
+- $A_v(\mathit{SN}) = \mathrm{rng}(l)$ is the set of corresponding observable activities in $\mathit{SN}$,
+
+- $T_v^u(\mathit{SN}) = \{t \in T_v(\mathit{SN}) | \forall t' \in T_v(\mathit{SN}) l(t) = l(t') \Rightarrow t = t'\}$ is the set of unique visible transitions in $\mathit{SN}$ (i.e., there are no other transitions having the same visible label), and
+
+- $A_v^u(\mathit{SN}) = \{\mathit{l}(t) | t \in T_v^u(\mathit{SN})\}$ is the set of corresponding unique observable activities in $\mathit{SN}$.
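With the labeling function represented as a dictionary (invisible transitions simply absent from it), the four notations of Definition 3 become one-liners; the small net below is ad hoc.

```python
# Hypothetical labeling: t4 is a tau-transition, so it does not appear in l;
# t2 and t3 share the label "b" and are therefore visible but not unique.
l = {"t1": "a", "t2": "b", "t3": "b", "t5": "c"}

Tv = set(l)                                          # visible transitions
Av = set(l.values())                                 # observable activities
Tvu = {t for t in Tv if list(l.values()).count(l[t]) == 1}  # unique visible
Avu = {l[t] for t in Tvu}                            # unique observable
```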
+
+In the remainder, for the formal definitions and proved theorems in Section 3, we
+need to introduce the concept of union of two nets. For this, we need to merge labeling
+functions. For any two partial functions $f_1 \in X_1 \nrightarrow Y_1$ and $f_2 \in X_2 \nrightarrow Y_2$: $f_3 =$
+
+⁴ Symbol $\nrightarrow$ is used to denote partial functions.
+---PAGE_BREAK---
+
+Fig. 3: Pictorial representation of a Petri net with Data that models the process earlier described in terms of BPMN diagram (cf. Figure 1). Places, transitions and variables are represented as circles, rectangles and triangles, respectively. The dotted arcs going from a transition to a variable denote the writing operations; the reverse arcs denote the read operations, i.e. the transition requires accessing the current variables' value.
+
+$f_1 \oplus f_2$ is the union of the two functions. $f_3 \in (X_1 \cup X_2) \nrightarrow (Y_1 \cup Y_2)$, $\text{dom}(f_3) = \text{dom}(f_1) \cup \text{dom}(f_2)$, $f_3(x) = f_2(x)$ if $x \in \text{dom}(f_2)$, and $f_3(x) = f_1(x)$ if $x \in \text{dom}(f_1) \setminus \text{dom}(f_2)$.
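Note that for partial functions represented as dictionaries, $f_1 \oplus f_2$ is exactly Python's right-biased dict merge:

```python
# f2 takes precedence on the shared part of the domain, matching
# f3(x) = f2(x) if x in dom(f2), else f1(x).
f1 = {"x": 1, "y": 2}
f2 = {"y": 20, "z": 30}
f3 = {**f1, **f2}
```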
+
+**Definition 4 (Union of Nets).** Let $SN^1 = (N^1, M_{init}^1, M_{final}^1) \in \mathcal{U}_{SN}$ with $N^1 = (P^1, T^1, F^1, l^1)$ and $SN^2 = (N^2, M_{init}^2, M_{final}^2) \in \mathcal{U}_{SN}$ with $N^2 = (P^2, T^2, F^2, l^2)$ be two system nets.
+
+- $l^3 = l^1 \oplus l^2$ is the union of $l^1$ and $l^2$,
+
+- $N^1 \cup N^2 = (P^1 \cup P^2, T^1 \cup T^2, F^1 \cup F^2, l^3)$ is the union of $N^1$ and $N^2$, and
+
+- $SN^1 \cup SN^2 = (N^1 \cup N^2, M_{init}^1 \uplus M_{init}^2, M_{final}^1 \uplus M_{final}^2)$ is the union of system nets $SN^1$ and $SN^2$.
+
+## 2.2 Petri nets with Data
+
+A Petri net with Data is a Petri net with any number of variables (see Definitions 5 and 6 below). Petri nets with data can be seen as an abstracted version of high-level/colored Petri nets [10]. Colored Petri nets are extremely expressive; however, many of their features are unimportant in our setting. Petri nets with data provide precisely the information needed for conformance checking of data-aware models and logs.
+
+**Definition 5 (Variables and Values).** $\mathcal{U}_{VN}$ is the universe of variable names. $\mathcal{U}_{VV}$ is the universe of values. $\mathcal{U}_{VM} = \mathcal{U}_{VN} \nrightarrow \mathcal{U}_{VV}$ is the universe of variable mappings.
+
+In this type of nets, transitions may read from and/or write to variables. Moreover, transitions are associated with guards over these variables, which define when they can fire. A guard can be any formula over the process variables using relational operators ($<$, $>$, $=$) as well as logical operators such as conjunction ($\wedge$), disjunction ($\vee$),
+---PAGE_BREAK---
+
+and negation ($\neg$). A variable $v$ can appear as $v_r$ or $v_w$, denoting respectively the value read and the value written by the transition for $v$. We denote with *Formulas*(V) the universe of such formulas defined over a set V of variables. In the remainder, given a set $V \subset \mathcal{U}_{VN}$ of variable names, we denote $V_R = \{v_r : v \in V\}$ and $V_W = \{v_w : v \in V\}$.
+
+Formally, a Petri net with Data (DPN) is defined as follows:
+
+**Definition 6 (Petri net with Data).** A Petri net with Data DPN = $(SN, V, val, init, read, write, guard)$ consists of
+
+- a system net $SN = (PN, M_{init}, M_{final})$ with $PN = (P, T, F, l)$,
+
+- a set $V \subseteq \mathcal{U}_{VN}$ of data variables,
+
+- a function $val \in V \to \mathcal{P}(\mathcal{U}_{VV})$ that defines the values admissible for each variable, i.e., $val(v)$ is the set of values that variable $v$ can have,⁵
+
+- a function $init \in V \to \mathcal{U}_{VV}$ that defines the initial value for each variable $v$ such that $init(v) \in val(v)$ (initial values are admissible),
+
+- a read function $read \in T \to \mathcal{P}(V)$ that labels each transition with the set of variables that it reads,
+
+- a write function $write \in T \to \mathcal{P}(V)$ that labels each transition with the set of variables that it writes,
+
+- a guard function $guard \in T \to \text{Formulas}(V_W \cup V_R)$ that associates a guard with each transition such that, for any $t \in T$ and for any $v \in V$, if $v_r$ appears in guard($t$) then $v \in read(t)$ and if $v_w$ appears in guard($t$) then $v \in write(t)$.
+
+$\mathcal{U}_{DPN}$ is the universe of Petri nets with data.
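To make Definition 6 concrete, a DPN can be encoded as a simple container; the following sketch is our own illustrative encoding (all names are hypothetical, not an API of any existing tool), with guards represented as Python predicates:

```python
from dataclasses import dataclass
from typing import Any, Callable, Dict, Set, Tuple

@dataclass
class DataPetriNet:
    places: Set[str]
    transitions: Set[str]
    arcs: Set[Tuple[str, str]]
    label: Dict[str, str]                  # partial labeling l (dom = visible transitions)
    variables: Set[str]                    # V
    val: Dict[str, Set[Any]]               # admissible values per variable
    init: Dict[str, Any]                   # initial value per variable
    read: Dict[str, Set[str]]              # read(t)
    write: Dict[str, Set[str]]             # write(t)
    guard: Dict[str, Callable[..., bool]]  # guard(t) as a predicate

    def __post_init__(self) -> None:
        # Definition 6 requires initial values to be admissible: init(v) in val(v)
        for v in self.variables:
            assert self.init[v] in self.val[v], f"init({v}) not admissible"

# Tiny example net: p1 --t--> p2 with one variable A
dpn = DataPetriNet(
    places={"p1", "p2"},
    transitions={"t"},
    arcs={("p1", "t"), ("t", "p2")},
    label={"t": "a"},
    variables={"A"},
    val={"A": {0, 1, 2}},
    init={"A": 0},
    read={"t": set()},
    write={"t": {"A"}},
    guard={"t": lambda *args: True},
)
```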
+
+The notion of bindings is essential for the remainder. A binding is a triplet $(t, r, w)$ describing the execution of transition $t$ while reading values $r$ and writing values $w$. A binding is valid if:
+
+1. $r \in read(t) \rightarrow \mathcal{U}_{VV}$ and $w \in write(t) \rightarrow \mathcal{U}_{VV}$
+
+2. for any $v \in read(t): r(v) \in val(v)$, i.e., all values read should be admissible,
+
+3. for any $v \in write(t): w(v) \in val(v)$, i.e., all values written should be admissible.
+
+4. the guard $guard(t)$ evaluates to true.
+
+More specifically, let us introduce the variable assignment $\chi_b \in (V_R \cup V_W) \nrightarrow \mathcal{U}_{VV}$, defined as follows: for any $v \in read(t)$, $\chi_b(v_r) = r(v)$ and, for any $v \in write(t)$, $\chi_b(v_w) = w(v)$. A binding $(t, r, w)$ makes **guard(t)** evaluate to true if the evaluation of **guard(t)** w.r.t. $\chi_b$ returns true.
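The construction of $\chi_b$ and the resulting guard check can be sketched as follows, using the guard of transition Verify from Table 1 as the example (function names are ours):

```python
def chi_b(t, r, w, read, write):
    """Variable assignment chi_b over V_R and V_W induced by binding (t, r, w)."""
    assignment = {v + "_r": r[v] for v in read[t]}
    assignment.update({v + "_w": w[v] for v in write[t]})
    return assignment

def guard_true(guard, t, r, w, read, write):
    """A binding (t, r, w) makes guard(t) evaluate to true iff
    guard(t) holds under chi_b."""
    return guard[t](chi_b(t, r, w, read, write))

# Guard of "Verify" from Table 1: 0.1 * A_r < I_w < 0.2 * A_r
read = {"Verify": {"A"}}
write = {"Verify": {"I"}}
guard = {"Verify": lambda a: 0.1 * a["A_r"] < a["I_w"] < 0.2 * a["A_r"]}

ok = guard_true(guard, "Verify", {"A": 4000}, {"I": 500}, read, write)   # 400 < 500 < 800
bad = guard_true(guard, "Verify", {"A": 4000}, {"I": 900}, read, write)  # 900 >= 800
```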
+
+A marking $(M, s)$ of a Petri net with Data DPN has two components: $M \in \mathbf{B}(P)$ is the control-flow marking and $s \in \mathcal{U}_{VM}$ with $dom(s) = V$ and $s(v) \in val(v)$ for all $v \in V$ is the data marking. The initial marking of a Petri net with Data DPN is $(M_{init}, init)$. Recall that *init* is a function that defines the initial value for each variable.
+
+$(DPN, (M, s))[b\rangle$ denotes that a binding $b = (t, r, w)$ is enabled in marking $(M, s)$, which requires that each of the input places $\bullet t$ contains at least one token (control-flow enabled), that $b$ is valid, and that $s|_{read(t)} = r$ (the values read match the current data marking).⁶
+
+⁵ $\mathcal{P}(X)$ is the powerset of X, i.e., $Y \in \mathcal{P}(X)$ if and only if $Y \subseteq X$.
+
+⁶ $f|_Q$ is the function $f$ projected on $Q$: $dom(f|_Q) = dom(f) \cap Q$ and $f|_Q(x) = f(x)$ for $x \in dom(f|_Q)$. Projection can also be used for bags and sequences, e.g., $[x^3, y, z^2]|_{\{x,y\}} = [x^3, y]$ and $\langle y, z, y \rangle|_{\{x,y\}} = \langle y, y \rangle$.
+---PAGE_BREAK---
+
+Table 1: Definitions of the guards of the transitions in Fig. 3. Variables and transition names are abbreviated as described in Figure 1. Subscripts *r* and *w* refer to, respectively, the values read and written for that given variable.
+
+| Transition | Guard |
+|---|---|
+| Credit Request | true |
+| Verify | $0.1 \cdot A_r < I_w < 0.2 \cdot A_r$ |
+| Assessment | $V_r = \text{true}$ |
+| Register Negative Verification | $V_r = \text{false} \wedge D_w = \text{false}$ |
+| Inform Requester | true |
+| Renegotiate Request | $V_r = \text{false} \wedge A_w < A_r$ |
+| Register Negative Request | $D_r = \text{false}$ |
+| Open Credit | $D_r = \text{true}$ |
+
+An enabled binding $b = (t, r, w)$ may occur, i.e., one token is removed from each of the input places $\bullet t$ and one token is produced for each of the output places $t \bullet$. Moreover, the variables are updated as specified by $w$. Formally: $M' = (M \setminus \bullet t) \uplus t \bullet$ is the control-flow marking resulting from firing enabled transition $t$ in marking $M$ (abstracting from data) and $s' = s \oplus w$ is the data marking where $s'(v) = w(v)$ for all $v \in write(t)$ and $s'(v) = s(v)$ for all $v \in V \setminus write(t)$. $(DPN, (M, s))[b\rangle(DPN, (M', s'))$ denotes that $b$ is enabled in $(M, s)$ and that the occurrence of $b$ results in marking $(M', s')$.
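The enabling condition and the firing rule can be sketched with multisets (`collections.Counter`) for control-flow markings; this is our own minimal encoding, with guards taken as predicates over the read and write maps:

```python
from collections import Counter

def is_enabled(M, s, t, r, w, pre, read, val, guard):
    """(DPN, (M, s))[b> : t is control-flow enabled, (t, r, w) is valid,
    and the values read match the current data marking (s|read(t) = r)."""
    if any(M[p] < n for p, n in pre[t].items()):      # tokens in all input places
        return False
    if any(x not in val[v] for v, x in r.items()):    # values read admissible
        return False
    if any(x not in val[v] for v, x in w.items()):    # values written admissible
        return False
    if not guard[t](r, w):                            # guard(t) evaluates to true
        return False
    return all(s[v] == r[v] for v in read[t])         # s|read(t) = r

def fire(M, s, t, w, pre, post):
    """M' = (M minus pre(t)) plus post(t), and s' = s (+) w."""
    M2 = Counter(M)
    M2.subtract(pre[t])
    M2.update(post[t])
    s2 = dict(s)
    s2.update(w)
    return +M2, s2  # unary + drops places with zero tokens

# A one-transition net: p1 --t--> p2, variable x written by t.
pre, post = {"t": Counter({"p1": 1})}, {"t": Counter({"p2": 1})}
read, val = {"t": set()}, {"x": {0, 1}}
guard = {"t": lambda r, w: True}
M, s = Counter({"p1": 1}), {"x": 0}
b = ("t", {}, {"x": 1})  # a binding writing x := 1
```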
+
+Figure 3 shows a Petri net with Data $DPN_{ex}$ that models the same process as the BPMN diagram in Figure 1, and Table 1 shows the guards of the transitions of $DPN_{ex}$. The labeling function $l$ is such that its domain is the set of transitions of $DPN_{ex}$ and, for each transition $t$ of $DPN_{ex}$, $l(t) = t$. In other words, the set of activity labels coincides with the set of transitions.
+
+Let $\sigma_b = \langle b_1, b_2, \dots, b_n \rangle$ be a sequence of bindings. $(DPN, (M, s))[\sigma_b\rangle(DPN, (M', s'))$ denotes that there is a sequence of markings $(M_0, s_0), (M_1, s_1), \dots, (M_n, s_n)$ such that $(M_0, s_0) = (M, s)$, $(M_n, s_n) = (M', s')$, and $(DPN, (M_i, s_i))[b_{i+1}\rangle(DPN, (M_{i+1}, s_{i+1}))$ for $0 \le i < n$. A marking $(M', s')$ is reachable from $(M, s)$ if there exists a $\sigma_b$ such that $(DPN, (M, s))[\sigma_b\rangle(DPN, (M', s'))$.
+
+$$\phi_f(DPN) = \{\sigma_b \mid \exists_s\, (DPN, (M_{init}, init))[\sigma_b\rangle(DPN, (M_{final}, s))\}$$ is the set of complete binding sequences, thus describing the behavior of the DPN.
+
+**Definition 7 (Union of Petri nets with Data).** Let $DPN^1 = (SN^1, V^1, val^1, init^1, read^1, write^1, guard^1)$ and $DPN^2 = (SN^2, V^2, val^2, init^2, read^2, write^2, guard^2)$ with $V^1 \cap V^2 = \emptyset$. $DPN^1 \cup DPN^2 = (SN^1 \cup SN^2, V^1 \cup V^2, val^1 \oplus val^2, init^1 \oplus init^2, read^3, write^3, guard^3)$ is the union such that
+
+- $read^3(t) = read^1(t), write^3(t) = write^1(t)$, and $guard^3(t) = guard^1(t)$ if $t \in T^1 \setminus T^2$,
+
+- $read^3(t) = read^2(t), write^3(t) = write^2(t)$, and $guard^3(t) = guard^2(t)$ if $t \in T^2 \setminus T^1$, and
+
+- $read^3(t) = read^1(t) \cup read^2(t), write^3(t) = write^1(t) \cup write^2(t)$, and $guard^3(t) = guard^1(t) \cdot guard^2(t)$ if $t \in T^1 \cap T^2$.
+---PAGE_BREAK---
+
+## 2.3 Event Logs and Relating Models to Event Logs
+
+Next we introduce *event logs* and relate them to the *observable* behavior of a DPN.
+
+**Definition 8 (Trace, Event Log with Data).** A trace $\sigma \in (\mathcal{U}_A \times \mathcal{U}_{\text{VM}} \times \mathcal{U}_{\text{VM}})^*$ is a sequence of activities with input and output data. $L \in \mathcal{B}((\mathcal{U}_A \times \mathcal{U}_{\text{VM}} \times \mathcal{U}_{\text{VM}})^*)$ is an event log with read and write information, i.e., a multiset of traces with data.
+
+**Definition 9 (From Bindings to Traces).** Consider a Petri net with Data with transitions $T$ and labeling function $l \in T \nrightarrow \mathcal{U}_A$. A binding sequence $\sigma_b \in (T \times \mathcal{U}_{\text{VM}} \times \mathcal{U}_{\text{VM}})^*$ can be converted into a trace $\sigma_v \in (\mathcal{U}_A \times \mathcal{U}_{\text{VM}} \times \mathcal{U}_{\text{VM}})^*$ by removing the bindings that correspond to unlabeled transitions and by mapping the labeled transitions onto their corresponding label. $l(\sigma_b)$ denotes the corresponding trace $\sigma_v$.
+
+Note that we overload the labeling function to binding sequences, $\sigma_v = l(\sigma_b)$. This is used to define $\phi(DPN)$: the set of all visible traces.
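Definition 9 amounts to a filter-and-map over the binding sequence; a minimal sketch (transition names and values are illustrative):

```python
def to_visible_trace(sigma_b, label):
    """l(sigma_b): drop bindings of unlabeled (invisible) transitions and
    map each remaining transition onto its label, keeping reads/writes."""
    return [(label[t], r, w) for (t, r, w) in sigma_b if t in label]

label = {"t1": "a", "t3": "b"}  # t2 is invisible: not in dom(l)
sigma_b = [("t1", {}, {"x": 1}), ("t2", {}, {}), ("t3", {"x": 1}, {})]
sigma_v = to_visible_trace(sigma_b, label)
# sigma_v == [("a", {}, {"x": 1}), ("b", {"x": 1}, {})]
```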
+
+**Definition 10 (Observable Behavior of a Petri net with Data).** Let DPN be a Petri net with Data. $(DPN, (M, s))[\sigma_v \triangleright (DPN, (M', s'))$ if and only if there is a sequence $\sigma_b$ such that $(DPN, (M, s))[\sigma_b\rangle(DPN, (M', s'))$ and $\sigma_v = l(\sigma_b)$. $\phi(DPN) = \{l(\sigma_b) \mid \sigma_b \in \phi_f(DPN)\}$ is the set of visible traces starting in $(M_{init}, init)$ and ending in $(M_{final}, s)$ for some data marking $s$.
+
+**Definition 11 (Perfectly Fitting with Data).** A trace $\sigma \in (\mathcal{U}_A \times \mathcal{U}_{\text{VM}} \times \mathcal{U}_{\text{VM}})^*$ is perfectly fitting DPN $\in \mathcal{U}_{DPN}$ if $\sigma \in \phi(DPN)$. An event log $L \in \mathcal{B}((\mathcal{U}_A \times \mathcal{U}_{\text{VM}} \times \mathcal{U}_{\text{VM}})^*)$ is perfectly fitting DPN if all of its traces are perfectly fitting.
+
+Later, we will need to project binding sequences and traces onto subsets of transitions/activities and variables. Therefore, we introduce a generic projection operator $\Pi_{Y,V}(\sigma)$ that removes transitions/activities not in Y and variables not in V.
+
+**Definition 12 (Projection).** Let $X$ be a set of transitions or activities (i.e., $X \subseteq T$ or $X \subseteq \mathcal{U}_A$). Let $Y \subseteq X$ be a subset and $V \subseteq \mathcal{U}_{\text{VN}}$ a subset of variable names. Let $\sigma \in (X \times \mathcal{U}_{\text{VM}} \times \mathcal{U}_{\text{VM}})^*$ be a binding sequence or a trace with data. $\Pi_{Y,V}(\sigma) \in (Y \times (V \nrightarrow \mathcal{U}_{VV}) \times (V \nrightarrow \mathcal{U}_{VV}))^*$ is the projection of $\sigma$ onto transitions/activities Y and variables V. Bindings/events unrelated to transitions/activities in Y are removed completely. Moreover, for the remaining bindings/events all read and write variables not in V are removed. $\Pi_{Y,V}(L) = [\Pi_{Y,V}(\sigma) | \sigma \in L]$ lifts the projection operator to the level of logs.
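The projection of Definition 12 is straightforward to sketch (function names are ours; a Python list stands in for the multiset of traces):

```python
def project(sigma, Y, V):
    """Pi_{Y,V}(sigma): remove events whose activity/transition is not in Y and
    restrict the read and write maps of the remaining events to variables in V."""
    return [
        (a, {v: x for v, x in r.items() if v in V},
            {v: x for v, x in w.items() if v in V})
        for (a, r, w) in sigma if a in Y
    ]

def project_log(log, Y, V):
    """Pi_{Y,V}(L): lift the projection to an entire log of traces."""
    return [project(sigma, Y, V) for sigma in log]

trace = [("a", {"x": 1, "y": 5}, {}), ("b", {}, {"z": 3})]
projected = project(trace, {"a"}, {"x"})
# projected == [("a", {"x": 1}, {})]
```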
+
+## 2.4 Alignments
+
+Conformance checking requires an alignment of event log $L$ and process model DPN, that is, an alignment of each single trace $\sigma \in L$ with the process model DPN.
+
+The events in the event log need to be related to transitions in the model, and vice versa. Such an alignment shows how the event log can be replayed on the process model. Building this alignment is far from trivial, since the log may deviate from the model at an arbitrary number of places. We need to relate “moves” in the log to “moves” in the model in order to establish an alignment between a process model and an event log. It may be that some of the moves in the log cannot be mimicked by the model and vice versa. We denote such “no moves” by $\gg$. An alignment is a sequence of moves:
+---PAGE_BREAK---
+
+Table 2: Examples of complete alignments of $\sigma_{example}$ and N. For readability, the read operations are omitted. Of course, the read operations for any variable must match the most recent value of that variable. A move is highlighted in gray if it contains a deviation, i.e. it is not a move in both without incorrect read/write operations.
+
+(a)
+
+(b)
+
+**Definition 13 (Legal alignment moves).** Let $DPN = (SN, V, val, init, read, write, guard)$ be a Petri net with Data, with $SN = (PN, M_{init}, M_{final})$ and $PN = (P, T, F, l)$. Let $S_L = \mathcal{U}_A \times \mathcal{U}_{VM} \times \mathcal{U}_{VM}$ be the universe of events and $S_{DPN} = T \times \mathcal{U}_{VM} \times \mathcal{U}_{VM}$ the universe of bindings of DPN. Let $S^{\gg}_{DPN} = S_{DPN} \cup \{\gg\}$ and $S^{\gg}_L = S_L \cup \{\gg\}$.
+
+A legal move in an alignment is represented by a pair $(s_L, s_M) \in (S^{\gg}_L \times S^{\gg}_{DPN}) \setminus \{(\gg, \gg)\}$ such that
+
+- $(s_L, s_M)$ is a move in log if $s_L \in S_L$ and $s_M = \gg$,
+
+- $(s_L, s_M)$ is a move in model if $s_L = \gg$ and $s_M \in S_{DPN}$,
+
+- $(s_L, s_M)$ is a move in both without incorrect read/write operations if $s_M = (t, r, w) \in S_{DPN}$ and $s_L = (l(t), r, w) \in S_L$,
+
+- $(s_L, s_M)$ is a move in both with incorrect read/write operations if $s_M = (t, r, w) \in S_{DPN}$, $s_L = (l(t), r', w') \in S_L$, and $r \neq r'$ or $w \neq w'$.
+
+All other moves are considered as illegal.
+
+**Definition 14 (Alignments).** Let $DPN = (SN, V, val, init, read, write, guard)$ be a Petri net with Data and $\sigma_L \in (S_L)^*$ be an event-log trace. Let $A_{DPN}$ be the set of legal moves for DPN. A complete alignment of $\sigma_L$ and DPN is a sequence $\gamma \in A_{DPN}^*$ such that, ignoring all occurrences of $\gg$, the projection on the first element yields $\sigma_L$ and the projection on the second yields some $\sigma_P \in \phi_f(DPN)$.
+
+Table 2 shows two complete alignments of the process model in Figure 3 and the log trace $\sigma_{ex}$ from Section 1.
+
+In order to define the severity of a deviation, we introduce a cost function on legal moves: $\kappa \in A_{DPN} \to \mathbb{R}_+^n$. This cost function can be used to favor one type of explanation for deviations over others. The cost of each legal move depends on the specific model and process domain and, hence, the cost function $\kappa$ needs to be defined specifically for each setting. The cost of an alignment $\gamma$ is the sum of the cost of all individual moves composing it: $K(\gamma) = \sum_{(s_L, s_M) \in \gamma} \kappa(s_L, s_M)$.
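A cost function of this kind and the induced alignment cost $K$ can be sketched as follows; the concrete costs shown are merely one example assignment, in the spirit of the $\kappa^s$ discussed below:

```python
SKIP = ">>"  # the "no move" symbol

def move_cost(move, visible):
    """Example cost: 1 for a move in log, a move in model on a visible
    transition, or a move in both with incorrect read/write; 0 otherwise."""
    s_l, s_m = move
    if s_m == SKIP:                                 # move in log only
        return 1
    if s_l == SKIP:                                 # move in model only
        return 1 if s_m[0] in visible else 0        # invisible model moves are free
    _, r_l, w_l = s_l
    _, r_m, w_m = s_m
    return 0 if (r_l == r_m and w_l == w_m) else 1  # incorrect read/write?

def alignment_cost(gamma, visible):
    """K(gamma): sum of the costs of all individual moves in the alignment."""
    return sum(move_cost(m, visible) for m in gamma)

gamma = [
    (("a", {}, {}), ("t_a", {}, {})),              # synchronous, correct: 0
    (SKIP, ("tau", {}, {})),                       # invisible model move: 0
    (("b", {}, {"x": 1}), ("t_b", {}, {"x": 2})),  # incorrect write: 1
    (("c", {}, {}), SKIP),                         # move in log: 1
]
cost = alignment_cost(gamma, visible={"t_a", "t_b"})  # 2
```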
+
+However, we do not aim to find just any complete alignment. Our goal is to find a complete alignment of $\sigma_L$ and DPN which minimizes the cost: an optimal alignment.
+---PAGE_BREAK---
+
+Let $\Gamma_{\sigma_L, DPN}$ be the (infinite) set of all complete alignments of $\sigma_L$ and DPN. An alignment $\gamma \in \Gamma_{\sigma_L, DPN}$ is an *optimal alignment* if, for all $\gamma' \in \Gamma_{\sigma_L, DPN}$, $K(\gamma) \le K(\gamma')$. Note that an optimal alignment does not need to be unique, i.e. multiple complete alignments with the same minimal cost may exist.
+
+Let us consider again the example introduced above and assume a cost function $\kappa^s$ such that $\kappa^s(s_L, s_M) = 1$ if $(s_L, s_M)$ is a move in model for a labeled transition (i.e. $s_L = \gg$ and $s_M$ corresponds to a visible transition), a move in log (i.e. $s_M = \gg$), or a move in both with incorrect read/write operations, and $\kappa^s(s_L, s_M) = 0$ in case of a move in both without incorrect read/write operations or a move in model for an unlabeled transition. The alignment in Table 2a has a cost of 6 whereas the alignment in Table 2b has a cost of 8.⁷ It follows that the former is a better alignment; in fact, it is an optimal alignment, although not the only one. For instance, any variation of that alignment in which the move for *f* has the form (now including read operations) $\big((f, \{(A, 4000)\}, \{(A, 5000)\}),\ (f, \{(A, 4000)\}, \{(A, x)\})\big)$ with $2250 < x < 4000$ is an optimal alignment as well.
+
+In Section 1, we mentioned that data-aware conformance checking is undecidable in the general case. This is caused by the fact that Petri nets with Data are Turing-complete: it is undecidable whether a sequence of valid bindings exists that leads from the initial marking to some final marking $(M_{final}, s)$. As a consequence, it is, for instance, not even possible to find an alignment of a Petri net with Data and the empty log trace. As mentioned in Section 1, the problem becomes decidable (with an exponential complexity) if guards are restricted to linear (in)equalities.
+
+# 3 Valid Decomposition of Data-aware Models
+
+In [6] the author defines valid decomposition in terms of Petri nets: the overall system net *SN* is decomposed into a collection of subnets {$SN^1, SN^2, ..., SN^n$} such that the union of these subnets yields the original system net. A decomposition is valid if the subnets “agree” on the original labeling function (i.e., the same transition always has the same label), each place resides in just one subnet, and also each invisible transition resides in just one subnet. Moreover, if there are multiple transitions with the same label, they should reside in the same subnet. Only unique visible transitions can be shared among different subnets.
+
+**Definition 15 (Valid Decomposition for Petri nets [6]).** Let $SN \in \mathcal{U}_{SN}$ be a system net with labeling function $l$. $D = \{SN^1, SN^2, ..., SN^n\} \subseteq \mathcal{U}_{SN}$ is a valid decomposition if and only if:
+
+- $SN^i = (N^i, M_{init}^i, M_{final}^i)$ is a system net with $N^i = (P^i, T^i, F^i, l^i)$ for all $1 \le i \le n,$
+
+- $l^i = l|_{T^i}$ for all $1 \le i \le n,$
+
+- $P^i \cap P^j = \emptyset$ for $1 \le i < j \le n,$
+
+- $T^i \cap T^j \subseteq T_v^u(SN)$ for $1 \le i < j \le n,$
+
+- $\mathrm{rng}(l^i) \cap \mathrm{rng}(l^j) \subseteq A_v^u(SN)$ for $1 \le i < j \le n,$ and
+
+⁷ They also include a cost of two that is accounted for incorrect read operations, not shown in the alignments, which are caused by incorrect write operations.
+---PAGE_BREAK---
+
+- $SN = \bigcup_{1 \le i \le n} SN^i$.
+
+$\mathcal{D}(SN)$ is the set of all valid decompositions of $SN$.
+
+From the definition the following properties follow:
+
+1. each place appears in precisely one of the subnets, i.e., for any $p \in P$: $\left| \{1 \le i \le n \mid p \in P^i\} \right| = 1$,
+
+2. each invisible transition appears in precisely one of the subnets, i.e., for any $t \in T \setminus T_v(SN)$: $\left| \{1 \le i \le n \mid t \in T^i\} \right| = 1$,
+
+3. all visible transitions with the same label (i.e. the label is not unique) appear in the same subnet, i.e., for any $a \in A_v(SN) \setminus A_v^u(SN)$: $\left| \{1 \le i \le n \mid \exists t \in T_v(SN) \cap T^i\colon l(t) = a\} \right| = 1$,
+
+4. visible transitions having a unique label may appear in multiple subnets, i.e., for any $t \in T_v^u(SN)$: $\left| \{1 \le i \le n \mid t \in T^i\} \right| \ge 1$, and
+
+5. each edge appears in precisely one of the subnets, i.e., for any $(x, y) \in F$: $\left| \{1 \le i \le n \mid (x, y) \in F^i\} \right| = 1$.
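These structural conditions are directly checkable; the following is our own helper sketch (not part of [6]) that verifies the pairwise conditions over subnets given as (places, transitions) pairs:

```python
from collections import Counter

def is_valid_decomposition(subnets, label):
    """Check the pairwise conditions of Definition 15; `label` is the
    overall partial labeling l as a dict transition -> activity."""
    counts = Counter(label.values())
    unique_t = {t for t in label if counts[label[t]] == 1}  # T_v^u
    unique_a = {label[t] for t in unique_t}                 # A_v^u
    for i, (P_i, T_i) in enumerate(subnets):
        for P_j, T_j in subnets[i + 1:]:
            if P_i & P_j:                     # a place resides in two subnets
                return False
            if (T_i & T_j) - unique_t:        # shared transition not unique visible
                return False
            A_i = {label[t] for t in T_i if t in label}
            A_j = {label[t] for t in T_j if t in label}
            if (A_i & A_j) - unique_a:        # a duplicate label is split
                return False
    return True

label = {"t1": "a", "t2": "b", "t3": "b"}               # label "b" is not unique
good = [({"p1"}, {"t1", "t2", "t3"}), ({"p2"}, {"t1"})]  # shares only unique t1
bad = [({"p1"}, {"t2"}), ({"p2"}, {"t3"})]               # splits label "b"
```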
+
+As shown in [6], these observations imply that conformance checking can be decomposed. Any trace that fits the overall process model can be decomposed into smaller traces that fit the individual model fragments. Moreover, if the smaller traces fit the individual fragments, then they can be composed into a trace that fits into the overall process model. This result is the basis for decomposing process mining problems.
+
+**Theorem 1 (Conformance Checking Can be Decomposed [6]).** Let $L \in \mathcal{B}(A^*)$ be an event log with $A \subseteq \mathcal{U}_A$ and let $SN \in \mathcal{U}_{SN}$ be a system net. For any valid decomposition $D = \{SN^1, SN^2, ..., SN^n\} \in \mathcal{D}(SN)$: $L$ is perfectly fitting system net $SN$ if and only if, for all $1 \le i \le n$, the projection of $L$ onto $A_v(SN^i)$ is perfectly fitting $SN^i$.
+
+In this paper, the definition of valid decomposition is extended to cover Petri nets with data.
+
+**Definition 16 (Valid Decomposition for Petri nets with Data).** Let $DPN \in U_{DPN}$ be a Petri net with Data. $D = \{DPN^1, DPN^2, ..., DPN^n\} \subseteq U_{DPN}$ is a valid decomposition if and only if:
+
+- for all $1 \le i \le n$: $DPN^i = (SN^i, V^i, val^i, init^i, read^i, write^i, guard^i)$ is a Petri net with Data, $SN^i = (PN^i, M_{init}^i, M_{final}^i) \in U_{SN}$ is a system net, and $PN^i = (P^i, T^i, F^i, l^i)$ is a labeled Petri net,
+
+- $D' = \{SN^1, SN^2, ..., SN^n\} \subseteq U_{SN}$ is a valid decomposition of $\bigcup_{1 \le i \le n} SN^i$,
+
+- $V^i \cap V^j = \emptyset$ for $1 \le i < j \le n$,
+
+- $DPN = \bigcup_{1 \le i \le n} DPN^i$.
+
+$\mathcal{D}(DPN)$ is the set of all valid decompositions of DPN.
+
+Each variable appears in precisely one of the subnets. Therefore, no two fragments can read or write the same data variables: $\big(\bigcup_{t \in T^i} read^i(t) \cup write^i(t)\big) \cap \big(\bigcup_{t \in T^j} read^j(t) \cup write^j(t)\big) = \emptyset$ for $1 \le i < j \le n$. Moreover, two guards in different fragments cannot refer to the same variable. If a transition $t$ appears in multiple fragments, then it needs to have a unique visible label, as shown in [6]. Such a uniquely labeled transition $t$ shared among fragments may read or write different variables in different fragments. Since $DPN = \bigcup_{1 \le i \le n} DPN^i$, we know that, for all $t$ in $DPN$, $\mathit{guard}(t)$ is the product of all $\mathit{guard}^i(t)$ such that $t \in T^i$. Without loss of generality
+---PAGE_BREAK---
+
+we can assume that the first $k$ fragments share $t$. Hence, $\mathit{guard}(t) = \mathit{guard}^1(t) \cdot \ldots \cdot \mathit{guard}^k(t)$. Consequently, in a valid decomposition, the guard of a shared transition can only be split if the different parts do not depend on one another. Notice that the splitting of the data variables is limited by how the variables are used throughout the process; in the worst case, all the data variables are used in all the steps of the process.
+
+Based on these observations, we prove that we can decompose conformance checking also for Petri nets with data.
+
+**Theorem 2 (Conformance Checking With Data Can be Decomposed).** Let $L \in \mathcal{B}((\mathcal{U}_A \times \mathcal{U}_{VM} \times \mathcal{U}_{VM})^*)$ be an event log with information about reads and writes and let $DPN \in \mathcal{U}_{DPN}$ be a Petri net with Data. For any valid decomposition $D = \{DPN^1, DPN^2, \dots, DPN^n\} \in \mathcal{D}(DPN)$: $L$ is perfectly fitting Petri net with Data $DPN$ if and only if, for all $1 \le i \le n$, $\Pi_{A_v(SN^i), V^i}(L)$ is perfectly fitting $DPN^i$.
+
+*Proof.* Let $DPN = (SN, V, val, init, read, write, guard)$ be a Petri net with Data with $SN = (PN, M_{init}, M_{final})$ and $PN = (P, T, F, l)$. Let $D = \{DPN^1, DPN^2, \dots, DPN^n\}$ be a valid decomposition of $DPN$ with $DPN^i = (SN^i, V^i, val^i, init^i, read^i, write^i, guard^i)$, $SN^i = (PN^i, M_{init}^i, M_{final}^i) \in \mathcal{U}_{SN}$, and $PN^i = (P^i, T^i, F^i, l^i)$. ($\Rightarrow$) Let $\sigma_v \in L$ be such that there exists a data marking $s$ such that $(DPN, (M_{init}, init))[\sigma_v \triangleright (DPN, (M_{final}, s))$. This implies that there exists a corresponding $\sigma_b$ with $(DPN, (M_{init}, init))[\sigma_b\rangle(DPN, (M_{final}, s))$ and $l(\sigma_b) = \sigma_v$. For all $1 \le i \le n$, we need to prove that there is a $\sigma_b^i$ with $(DPN^i, (M_{init}^i, init^i))[\sigma_b^i\rangle(DPN^i, (M_{final}^i, s^i))$ for some $s^i$. This follows directly because $DPN^i$ can mimic any move of $DPN$ with respect to transitions $T^i$: just take $\sigma_b^i = \Pi_{T^i, V^i}(\sigma_b)$. Note that guards can only become weaker by projection.
+
+($\Leftarrow$) Let $\sigma_v \in L$. For all $1 \le i \le n$, let $\sigma_b^i$ be such that $(DPN^i, (M_{init}^i, init^i))[\sigma_b^i\rangle(DPN^i, (M_{final}^i, s^i))$ and $l^i(\sigma_b^i) = \Pi_{A_v(SN^i), V^i}(\sigma_v)$. The different $\sigma_b^i$ sequences can be stitched together into an overall $\sigma_b$ such that $(DPN, (M_{init}, init))[\sigma_b\rangle(DPN, (M_{final}, s))$ with $s = s^1 \oplus s^2 \oplus \dots \oplus s^n$. This is possible because transitions in one subnet can only influence other subnets through unique visible transitions, and these can only move synchronously as defined by $\sigma_v$. Moreover, guards can only be split into independent parts (see Definition 16): if $t$ appears in $T^i$ and $T^j$, then $\mathit{guard}(t) = \mathit{guard}^i(t) \cdot \mathit{guard}^j(t)$, so a read/write in subnet $i$ cannot limit a read/write in subnet $j$. Therefore, we can construct $\sigma_b$ with $l(\sigma_b) = \sigma_v$. $\square$
+
+# 4 SESE-based Strategy for Realizing a Valid Decomposition
+
+In this section we present a concrete strategy to instantiate the valid decomposition definition for Petri nets with data introduced in the previous section (cf. Definition 16). The proposed strategy decomposes the Petri net with data into a number of Single-Entry Single-Exit (SESE) components, which have recently been shown to create meaningful fragments of a process model [11,8]. SESE decomposition is indicated for well-structured models, whereas for unstructured models some automatic transformation techniques can be considered as a pre-processing step [12].
+
+We will now informally describe the necessary notions for understanding the proposed data-oriented SESE-based valid decomposition strategies described below. For
+---PAGE_BREAK---
+
+Fig. 4: A Petri net modeling the control-flow of the running example, its workflow graph and the RPST and SESE decomposition.
+
+Fig. 5: SESE-based decomposition for the running example, with 2-decomposition.
+
+the sake of clarity, we will focus on the control flow to illustrate the concepts, although
+the definitions will be extended at the end to also consider data.
+
+Given Petri net PN = (P, T, F, l), its *workflow graph* is the structural graph
+WG = (S, E) with no distinctions between places and transitions, i.e., S = P ∪ T and
+E = F. For instance, Fig. 4(b) shows the workflow graph of the Petri net of Fig. 4(a)
+(corresponding with the control-flow part of the running example). Given a subset of
+edges E' ⊆ E of WG, the nodes $S \upharpoonright_{E'} = \{s \in S : \exists s' \in S. (s, s') \in E' \lor (s', s) \in E'\}$ can be partitioned into interior and boundary. Interior nodes have no connection with
+nodes outside $S \upharpoonright_{E'}$, while boundary nodes do. Furthermore, boundary nodes can be
+partitioned into entry (no incoming edge belongs to $E'$) or exit (no outgoing edge
+belongs to $E'$). $E' \subseteq E$ is a *SESE* of WG iff the subgraph derived from $E'$ has exactly two
+boundary nodes: one entry and one exit. Fig. 4(b) shows all non-trivial SESEs⁸ of the
+Petri net of Fig. 4(a). For a formal definition we refer to [11].
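The entry/exit classification above can be sketched directly on edge sets; this is a naive check of our own, ignoring the canonical-fragment machinery of [11]:

```python
def classify(E, E_sub):
    """Split the nodes touched by edge subset E_sub into boundary/entry/exit:
    a boundary node also has edges outside E_sub; an entry has no incoming
    edge in E_sub, an exit has no outgoing edge in E_sub."""
    touched = {n for e in E_sub for n in e}
    outside = E - E_sub
    boundary = {n for n in touched if any(n in e for e in outside)}
    targets = {t for (_, t) in E_sub}
    sources = {s for (s, _) in E_sub}
    entries = {n for n in boundary if n not in targets}
    exits = {n for n in boundary if n not in sources}
    return boundary, entries, exits

def is_sese(E, E_sub):
    """E_sub is a SESE iff its boundary consists of exactly one entry and one exit."""
    boundary, entries, exits = classify(E, E_sub)
    return len(entries) == 1 and len(exits) == 1 and boundary == entries | exits

# A small chain a -> b -> c -> d; the single edge (b, c) is a SESE.
E = {("a", "b"), ("b", "c"), ("c", "d")}
```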
+
+The decomposition based on SESEs is a well-studied problem in the literature and
+can be computed in linear time. In [13,14], efficient algorithms for constructing the
+
+⁸ Note that by definition, a single edge is a SESE.
+---PAGE_BREAK---
+
+**Algorithm 1 SESE-based Decomposition**
+
+1: Build data workflow graph *DWG* from *F*, *R*, *W*
+2: Compute *RPST* from *DWG*
+3: Compute SESE decomposition *D* from the *RPST*
+4: Compute the subnets and merge them where necessary to preserve a valid decomposition
+5: **return** valid decomposition where perspectives are decomposed altogether
+
+Refined Process Structure Tree (RPST), i.e., a hierarchical structure containing all the canonical SESEs of a model, were presented. Informally, an RPST is a tree where the nodes are canonical SESEs, such that the parent of a SESE *S* is the smallest SESE that contains *S*. Fig. 4(c) shows the RPST of the workflow graph depicted in Fig. 4(b). By selecting a particular set of SESEs in the RPST (e.g., k-decomposition [8]), it is possible to obtain a partitioning of the arcs. We refer the reader to the aforementioned work for a formal description of the SESE-based decomposition.
+
+To extend the previous definitions to also account for data, one simply has to incorporate
+in the workflow graph the variables and the read/write arcs. That is, the data workflow
+graph of a Petri net with Data $(((P, T, F, l), M_{init}, M_{final}), V, val, init, read, write, guard)$
+with data arcs $R = \{(v, t) \mid v \in read(t)\}$ and $W = \{(t, v) \mid v \in write(t)\}$ is $DWG =
+(S, E)$ with $S = P \cup T \cup V$ and $E = F \cup R \cup W$. The subsequent definitions
+(SESE, RPST) carry over analogously after this extension.
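The data workflow graph construction is mechanical; a short sketch (function name ours):

```python
def data_workflow_graph(P, T, F, V, read, write):
    """DWG = (S, E) with S = P | T | V and E = F | R | W, where R are the
    read arcs (v, t) and W the write arcs (t, v)."""
    R = {(v, t) for t in T for v in read.get(t, ())}
    W = {(t, v) for t in T for v in write.get(t, ())}
    return set(P) | set(T) | set(V), set(F) | R | W

# Tiny example: one transition t reading and writing variable x.
S, E = data_workflow_graph(
    P={"p1", "p2"}, T={"t"}, F={("p1", "t"), ("t", "p2")},
    V={"x"}, read={"t": {"x"}}, write={"t": {"x"}},
)
```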
+
+Similar to [8], we propose a SESE decomposition to analyze the conformance of
+Petri nets with data, but considering the data workflow graph instead. Algorithm 1
+describes the steps necessary to construct a SESE decomposition. The arcs are partitioned
+into SESEs by creating the RPST of the data workflow graph and selecting
+a particular set of SESEs over it. Once the partitioning is done, a subnet is created
+for each part. Subnets contradicting some of the requirements of Def. 16 (e.g. sharing
+places, invisible or duplicate transitions, variables, or transitions with non-splittable
+guards) are merged to preserve the valid decomposition definition.
+
+Figure 5 shows the decomposition of the example of Fig. 3, where the RPST is partitioned using the 2-decomposition algorithm [8], i.e., into SESEs of at most 2 arcs⁹. To ensure that a valid decomposition is obtained, step 4 of Algorithm 1 combines multiple SESE fragments into larger fragments, which are not necessarily SESEs anymore.
+
+# 5 Implementation and Experimental Results
+
+The approach discussed in this paper has been implemented as a plug-in for the open-source *ProM* framework for process mining.¹⁰ Our plug-in takes a Petri net with Data and an event log as input and returns as many bags of alignments as the number of fragments into which the Petri net with Data has been decomposed. Each bag refers to a different fragment and shows the alignments of each log trace with that fragment. A second type of output is also produced in which the alignments' information is projected onto the Petri net with Data. Transitions are colored according to the number of
+
+⁹ Although the SESEs have at most two arcs, this is not guaranteed for the final subnets, i.e., some subnets are merged to preserve the valid decomposition definition.
+
+¹⁰ http://www.promtools.org
+---PAGE_BREAK---
+
+Fig. 6: Computation time for checking the conformance of the Petri net with Data in Figure 3 and event logs of different size. The Y axis is on a logarithmic scale.
+
+deviations: if no deviation occurs for a given transition, the respective box in the model is colored white. The filling color of a box shades towards red as a larger fraction of deviations occurs for the corresponding transition. The same applies to variables: the more incorrect read/write operations occur for a variable, the closer its color is to red. This output is very useful from an end-user viewpoint as it allows for gaining a helicopter view of the main causes of deviations. Space limitations prevent us from giving more details and showing screenshots of the output. Interested readers can refer to [15] for further information.
+
+The plug-in has been evaluated using a number of synthetic event logs as well as a real-life process. For the synthetic evaluation, we used the model in Figure 3 and a number of artificially generated event logs. In particular, we generated different event logs with the same number of traces (5000) but an increasing number of events, meaning that, on average, traces were of different length. To obtain this, for each simulated process execution, an increasing number of renegotiations was enforced to happen. Traces were also generated so as to contain deviations: the event logs were generated in such a way that 25% of the transitions fired violating their guards.
+
+Figure 6 shows the results of checking the conformance of the different event logs against the process model, comparing the SESE-based decomposition with $k = 2$ to the case in which no decomposition is made. To check the conformance of each fragment, we used the technique reported in [4]. Each dot in the chart indicates a different event log with traces of a different size. The computation time refers to the conformance checking of the whole event log (i.e., 5000 traces). The decomposed net is the same as in Figure 5. Regarding the cost function, we assign cost 1 to any deviation; this could, however, be customized based on domain knowledge. The results show that, for every combination of event log and process model, the decomposition significantly reduces the computation time, and the improvement grows exponentially with the size of the event log.
+
+To assess the practical relevance of the approach, we also performed an evaluation with a Dutch financial institute. The process model was provided by a process analyst of the institute and consists of 21 transitions: 13 transitions with unique labels, 3 activity labels each shared between 2 transitions (i.e., 6 transitions in total), plus 3 invisible
+---PAGE_BREAK---
+
+transitions. The model contains twelve process variables, which are read and written by the activities when they are executed. The process model is omitted for space reasons; it is shown in [15]. We were also provided with an event log that recorded the execution of 111 real instances of this process; overall, the 111 log traces contained 3285 events, i.e., roughly 29.6 events per trace. We checked the conformance of this process model and this event log, comparing the results obtained with and without decomposing the model into smaller fragments. For conformance checking, here we used the technique reported in [5], since the provided process model breaks the soundness assumptions required by [4]. For this round of experiments, the additional optimizations proposed in [5] were deactivated to allow for a fair comparison.
+
+The application of the decomposition approach to this real-life case study has shown remarkable results: conformance checking required 52.94 seconds when the process model was decomposed using the SESE-based technique presented in Section 4; conversely, it required 52891 seconds when the model was not decomposed. This indicates that decomposing the process model allowed us to save 99.9% of the computation time. We also tried different values of the SESE parameter *k*, but obtained similar results: the computation time never varied by more than 1 second. The reason is that every decomposition, for any value of *k*, always contained one particular fragment that could not be decomposed any further. Since the computation time was mostly due to constructing alignments with that fragment, no significant difference in computation time could be observed when varying *k*.
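The magnitude of the saving follows directly from the two measured times; a quick sanity check in Python (illustrative variable names, not part of the plug-in):

```python
# Measured conformance-checking times from the case study (seconds).
t_decomposed = 52.94    # SESE-based decomposition
t_monolithic = 52891.0  # no decomposition

saving = 1 - t_decomposed / t_monolithic
speedup = t_monolithic / t_decomposed
print(f"saving = {saving:.1%}, speedup = {speedup:.0f}x")
```
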
+
+# 6 Conclusions and Future Work
+
+Conformance checking is becoming more important for two reasons: (1) the volume of event data available for checking normative models is rapidly growing (the topic of “Big Data” is on the radar of all larger organizations) and (2) a variety of regulations creates the need to check compliance. Moreover, conformance checking is also used for the evaluation of process discovery algorithms. Genetic process mining algorithms, in particular, heavily rely on the efficiency of conformance checking techniques.
+
+Thus far, the lion's share of conformance checking techniques has focused on control-flow and relatively small event logs. As shown in this paper, abstracting from other perspectives may lead to misleading conformance results that are too optimistic. Moreover, as process models and event logs grow in size, divide-and-conquer approaches are needed to still be able to check conformance and diagnose problems. Perspectives such as work distribution, resource allocation, quality of service, temporal constraints, etc. can all be encoded as data constraints. Hence, there is an urgent need to support data-aware conformance checking in-the-large.
+
+This paper demonstrates that data-aware decompositions can be used to speed up conformance checking significantly. The evaluation with a real-life case study has shown that real data-aware process models can indeed be decomposed, yielding tremendous savings in computation time. As future work, we would like to extend our experimental evaluation with real-life process models of larger sizes. Moreover, we would like to explore alternative decomposition strategies using properties of the underlying data, and to analyze the impact of different component sizes. This paper only
+---PAGE_BREAK---
+
+focuses on the fitness aspect of conformance, namely whether a trace can be replayed on a process model. However, research has recently also addressed other conformance dimensions [16,17], such as whether the model is precise enough not to allow for much more behavior than what is observed in reality in the event log. We plan to use data-aware decomposition approaches to speed up the assessment of the quality of process models with respect to these other conformance dimensions as well.
+
+## References
+
+1. van der Aalst, W.M.P.: Process Mining: Discovery, Conformance and Enhancement of Business Processes. Springer (2011)
+
+2. Rozinat, A., van der Aalst, W.M.P.: Conformance checking of processes based on monitoring real behavior. Information Systems **33**(1) (2008) 64–95
+
+3. Adriansyah, A., van Dongen, B.F., van der Aalst, W.M.P.: Conformance checking using cost-based fitness analysis. In: Proceedings of the 15th IEEE International Enterprise Distributed Object Computing Conference, (EDOC 2011), IEEE Computer Society (2011) 55–64
+
+4. de Leoni, M., van der Aalst, W.M.P.: Aligning event logs and process models for multi-perspective conformance checking: An approach based on integer linear programming. In: Proceedings of the 11th International Conference on Business Process Management, (BPM 2013). Volume 8094 of LNCS., Springer (2013) 113–129
+
+5. Mannhardt, F., de Leoni, M., Reijers, H.A., van der Aalst, W.M.P.: Balanced Multi-Perspective Checking of Process Conformance (2014) BPM Center Report BPM-14-07.
+
+6. van der Aalst, W.M.P.: Decomposing Petri nets for process mining: A generic approach. Distributed and Parallel Databases **31**(4) (2013) 471–507
+
+7. van der Aalst, W.M.P.: Decomposing process mining problems using passages. In: Proceedings of the 33rd International Conference on Application and Theory of Petri Nets (PETRI NETS 2012). Volume 7347 of LNCS., Springer (2012) 72–91
+
+8. Munoz-Gama, J., Carmona, J., van der Aalst, W.M.P.: Single-entry single-exit decomposed conformance checking. Information Systems **46** (2014) 102–122
+
+9. Montali, M., Chesani, F., Mello, P., Maggi, F.M.: Towards data-aware constraints in declare. In Shin, S.Y., Maldonado, J.C., eds.: SAC, ACM (2013) 1391–1396
+
+10. Jensen, K., Kristensen, L.: Coloured Petri Nets. Springer Verlag (2009)
+
+11. Polyvyanyy, A.: Structuring process models. PhD thesis, University of Potsdam (2012)
+
+12. Dumas, M., García-Bañuelos, L., Polyvyanyy, A.: Unraveling unstructured process models. In Mendling, J., Weidlich, M., Weske, M., eds.: BPMN. Volume 67 of LNBIP., Springer (2010) 1–7
+
+13. Vanhatalo, J., Völzer, H., Koehler, J.: The refined process structure tree. Data Knowl. Eng. **68**(9) (2009) 793–818
+
+14. Polyvyanyy, A., Vanhatalo, J., Völzer, H.: Simplified computation and generalization of the refined process structure tree. In: 7th International Workshop on Web Services and Formal Methods. Revised Selected Papers. Volume 6551 of LNCS., Springer (2011) 25–41
+
+15. de Leoni, M., Munoz-Gama, J., Carmona, J., van der Aalst, W.M.P.: Decomposing Conformance Checking on Petri Nets with Data. (2014) BPM Center Report BPM-14-06.
+
+16. Munoz-Gama, J., Carmona, J.: A General Framework for Precision Checking. International Journal of Innovative Computing, Information and Control (IJICIC) **8**(7B) (July 2012) 5317–5339
+
+17. De Weerdt, J., De Backer, M., Vanthienen, J., Baesens, B.: A multi-dimensional quality assessment of state-of-the-art process discovery algorithms using real-life event logs. Information Systems **37**(7) (2012) 654–676
\ No newline at end of file
diff --git a/samples/texts_merged/668834.md b/samples/texts_merged/668834.md
new file mode 100644
index 0000000000000000000000000000000000000000..06895a78af192439b5928c9c77c272794f24f1a4
--- /dev/null
+++ b/samples/texts_merged/668834.md
@@ -0,0 +1,44 @@
+
+---PAGE_BREAK---
+
+## Area of Plane Figures
+
+### Area of a Rectangle:
+
+$$A_{\text{rectangle}} = l \times w$$
+
+Where $l$ is length and $w$ is width.
+
+$w = 2 \text{ cm}$
+
+$l = 3 \text{ cm}$
+
+### Area of a Triangle:
+
+$$A_{\text{triangle}} = \frac{b \times h}{2}$$
+
+Where $b$ is the base of the triangle and $h$ is the height.
+
+### Area of a Parallelogram:
+
+$$A_{\text{parallelogram}} = b \times h$$
+
+Where $b$ is the base of the parallelogram and $h$ is the height.
+---PAGE_BREAK---
+
+## Area of a Trapezoid:
+
+$$A_{\text{trapezoid}} = \frac{(a + b) \times h}{2}$$
+
+Where *a* and *b* are the bases of the trapezoid and *h* is the height.
+
+a = 8 cm, b = 12 cm, and h = 10 cm
+
+## Area of a Circle:
+
+$$A = \pi r^2$$
+
+Where *r* is the radius and $\pi$ is approximately 3.14
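The formulas above translate directly into code; a short Python sketch (illustrative helper names, not part of the handout) using the sample dimensions given here:

```python
import math

def area_rectangle(l, w):
    return l * w                 # A = l * w

def area_triangle(b, h):
    return b * h / 2             # A = (b * h) / 2

def area_parallelogram(b, h):
    return b * h                 # A = b * h

def area_trapezoid(a, b, h):
    return (a + b) * h / 2       # A = ((a + b) * h) / 2

def area_circle(r):
    return math.pi * r ** 2      # A = pi * r^2

# Rectangle with l = 3 cm, w = 2 cm:
print(area_rectangle(3, 2))        # 6 (cm^2)
# Trapezoid with a = 8 cm, b = 12 cm, h = 10 cm:
print(area_trapezoid(8, 12, 10))   # 100.0 (cm^2)
```
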
+---PAGE_BREAK---
+
+Examples: Determine the area of each figure to 1 decimal place:
\ No newline at end of file
diff --git a/samples/texts_merged/6729477.md b/samples/texts_merged/6729477.md
new file mode 100644
index 0000000000000000000000000000000000000000..62c0a1a3a0aa9ed9f2988d520a554acc0356f215
--- /dev/null
+++ b/samples/texts_merged/6729477.md
@@ -0,0 +1,511 @@
+
+---PAGE_BREAK---
+
+# Unifying Guilt-by-Association Approaches:
+Theorems and Fast Algorithms
+
+Danai Koutra¹, Tai-You Ke², U Kang¹, Duen Horng (Polo) Chau¹, Hsing-Kuo Kenneth Pao², and Christos Faloutsos¹
+
+¹ School of Computer Science, Carnegie Mellon University
+{danai, ukang, dchau, christos}@cs.cmu.edu
+
+² Dept. of Computer Science & Information Engineering
+National Taiwan Univ. of Science & Technology
+{M9815071, pao}@mail.ntust.edu.tw
+
+**Abstract.** If several friends of *Smith* have committed petty thefts, what would you say about *Smith*? Most people would not be surprised if *Smith* is a hardened criminal. **Guilt-by-association** methods combine weak signals to derive stronger ones, and have been extensively used for anomaly detection and classification in numerous settings (e.g., accounting fraud, cyber-security, calling-card fraud).
+
+The focus of this paper is to compare and contrast several very successful, *guilt-by-association* methods: *Random Walk with Restarts*, Semi-Supervised Learning, and *Belief Propagation* (BP).
+
+Our main contributions are two-fold: (a) theoretically, we prove that all the methods result in a similar matrix inversion problem; (b) for practical applications, we developed FABP, a fast algorithm that yields 2× speedup, equal or higher accuracy than BP, and is guaranteed to converge. We demonstrate these benefits using synthetic and real datasets, including YahooWeb, one of the largest graphs ever studied with BP.
+
+**Keywords:** Belief Propagation, Random Walk with Restart, Semi-Supervised Learning, probabilistic graphical models, inference
+
+## 1 Introduction
+
+Network effects are very powerful, resulting even in popular proverbs like “birds of a feather flock together”. In social networks, obese people tend to have obese friends [5], happy people tend to make their friends happy too [7], and in general, people usually associate with like-minded friends with respect to politics, hobbies, religion etc. Thus, knowing the types of a few nodes in a network, (say, “honest” vs “dishonest”), we would have good chances to guess the types of the rest of the nodes.
+
+Informally, the guilt-by-association problem (or label propagation in graphs) is defined as follows:
+
+Given: a graph with *N* nodes and *M* edges; *n*+ and *n*- nodes labeled as members of the positive and negative class respectively
+---PAGE_BREAK---
+
+**Find:** the class memberships of the rest of the nodes, assuming that neighbors influence each other
+
+The influence can be “homophily”, meaning that nearby nodes have similar labels, or “heterophily”, meaning the reverse (e.g., talkative people tend to prefer silent friends, and vice-versa). Homophily appears in numerous settings, for example: (a) *Personalized PageRank*: if a user likes some pages, she would probably like other pages that are heavily connected to her favorites. (b) *Recommendation systems*: if a user likes some products (i.e., members of *positive* class), which other products should get *positive* scores? (c) *Accounting and calling-card fraud*: if a user is dishonest, his/her contacts are probably dishonest too.
+
+There are several closely related methods that address the homophily problem, and some that address both homophily and heterophily, among which is our proposed FABP method, which builds on *Belief Propagation*. We focus on three of them: Personalized PageRank (or “Personalized Random Walk with Restarts”, or just RWR), Semi-Supervised Learning (SSL), and Belief Propagation (BP). How are these methods related? Are they identical? If not, which method gives the best accuracy? Which method has the best scalability?
+
+These questions are exactly the focus of this work. In a nutshell, we contribute by answering the above questions, and providing a fast algorithm inspired by our theoretical analysis:
+
+* *Theory & Correspondences:* the three methods are closely related, but not identical.
+* *Algorithm & Convergence:* we propose FABP, a fast, accurate and scalable algorithm, and provide the conditions under which it converges.
+* *Implementation & Experiments:* finally, we propose a HADOOP-based algorithm that scales to billion-node graphs, and we report experiments on one of the largest graphs ever studied in the open literature. Our FABP method achieves about 2× better runtime.
+
+## 2 Related Work
+
+RWR, SSL and BP are very popular techniques, with numerous papers using or improving them. Here, we survey the related work for each method.
+
+RWR is the method underlying Google's classic PageRank algorithm [2]. RWR's many variations include *Personalized PageRank* [10], *lazy random walks* [20], and more [24, 21]. Related methods for node-to-node distance (but not necessarily *guilt-by-association*) include Pegasus [15], parameterized by *escape probability* and *round-trip probability*.
+
+According to conventional categorization, SSL approaches are classified into four categories [28]: low-density separation methods, graph-based methods, methods for changing the representation, and co-training methods. The principle behind SSL is that unlabeled data can help us decide the “metric” between data points and improve the models’ performance. A very recent use of SSL for
+---PAGE_BREAK---
+
+multi-class settings has been proposed in [12]. In this work, we mainly study the graph-based SSL methods.
+
+BP [23], being an efficient inference algorithm on probabilistic graphical models, has been successfully applied to numerous domains, including error-correcting codes [16], stereo imaging in computer vision [6], fraud detection [19, 22], and malware detection [3]. Extensions of BP include *Generalized Belief Propagation* (GBP), which takes a multi-resolution viewpoint, grouping nodes into regions [27]; however, how to construct good regions is still an open research problem. Thus, we focus on standard BP, which is better understood. Here, we study how the parameter choices for BP help accelerate the algorithms, and how to implement the method on top of HADOOP [1] (the open-source MapReduce implementation). This focus differentiates our work from existing research, which speeds up BP by exploiting the graph structure [4, 22] or the order of message propagation [9].
+
+Summary: None of the above papers show the relationships between the three methods, or discuss the parameter choices (e.g., homophily factor). Table 1 qualitatively compares the methods. BP supports heterophily, but there is no guarantee on convergence. Our FABP algorithm improves on it to provide convergence.
+
+Table 1: Qualitative comparison of ‘guilt-by-association’ (GBA) methods.
+
+| GBA Method | Heterophily | Scalability | Convergence |
+|---|---|---|---|
+| RWR | No | Yes | Yes |
+| SSL | No | Yes | Yes |
+| BP | Yes | Yes | ? |
+| FABP | Yes | Yes | Yes |
+## 3 Theorems and Correspondences
+
+In this section we present the three main formulas that show the similarity of the following methods: binary BP, and specifically our proposed approximation, the linearized BP (FABP); Gaussian BP (GAUSSIANBP); Personalized RWR (RWR); and Semi-Supervised Learning (SSL).
+
+For the homophily case, all the above methods are similar in spirit, and closely related to diffusion processes: the $n_+$ nodes that belong to class “+” (say, “green”) act as if they taint their neighbors (diffusion) with green color, and similarly the negative nodes taint theirs with, say, red color. Depending on the strength of homophily, or equivalently the speed of diffusion of the color, eventually we have green-ish neighborhoods, red-ish neighborhoods, and bridge-nodes (half-red, half-green).
+
+The solution vectors for each method obey very similar equations: they all involve a matrix inversion, where the matrix consists of a diagonal matrix and a weighted or normalized version of the adjacency matrix. Table 2 has the definitions of the symbols that are used in the following discussion, and Table 3 shows the resulting equations, carefully aligned to highlight the correspondences.
+---PAGE_BREAK---
+
+Table 2: Major Symbols and Definitions. (matrices in bold capital, vectors in bold lowercase, and scalars in plain font)
+
+| Symbols | Definitions | Explanations |
+|---|---|---|
+| $n$ | number of nodes in the graph | |
+| $\mathbf{A}$ | $n \times n$ symmetric adjacency matrix | |
+| $\mathbf{D}$ | $n \times n$ diagonal matrix of degrees | $D_{ii} = \sum_j A_{ij}$ and $D_{ij} = 0$ for $i \neq j$ |
+| $\mathbf{I}$ | $n \times n$ identity matrix | |
+| $\mathbf{b}_h$ | “about-half” final beliefs, $\mathbf{b} - 0.5$ | $\mathbf{b}$ = $n \times 1$ vector of the BP final beliefs; $b(i) > 0.5$ means $i \in$ “+”, $b(i) < 0.5$ means $i \in$ “$-$”, $b_h(i) = 0$ means $i$ is unclassified (neutral) |
+| $\boldsymbol{\phi}_h$ | “about-half” prior beliefs, $\boldsymbol{\phi} - 0.5$ | $\boldsymbol{\phi}$ = $n \times 1$ vector of the BP prior beliefs |
+| $h_h$ | “about-half” homophily factor, $h - 0.5$ | $h = \psi(\text{“+”}, \text{“+”})$: BP propagation matrix entry; $h \to 0$ means strong heterophily, $h \to 1$ means strong homophily |
+
+Table 3: Main results, to illustrate correspondence: $n \times n$ matrices in bold capital, $n \times 1$ vectors in bold lowercase, and scalars in plain font.
+
+| Method | matrix | unknown | known |
+|---|---|---|---|
+| RWR | $[\mathbf{I} - c\mathbf{A}\mathbf{D}^{-1}]$ | $\mathbf{x}$ | $(1-c)\mathbf{y}$ |
+| SSL | $[\mathbf{I} + \alpha(\mathbf{D} - \mathbf{A})]$ | $\mathbf{x}$ | $\mathbf{y}$ |
+| Gaussian BP = SSL | $[\mathbf{I} + \alpha(\mathbf{D} - \mathbf{A})]$ | $\mathbf{x}$ | $\mathbf{y}$ |
+| FABP | $[\mathbf{I} + a\mathbf{D} - c'\mathbf{A}]$ | $\mathbf{b}_h$ | $\boldsymbol{\phi}_h$ |
+
+**Theorem 1 (FABP).** The solution to Belief Propagation can be approximated by the linear system
+
+$$[\mathbf{I} + a\mathbf{D} - c'\mathbf{A}]\mathbf{b}_h = \boldsymbol{\phi}_h \quad (1)$$
+
+where $a = 4h_h^2/(1 - 4h_h^2)$ and $c' = 2h_h/(1 - 4h_h^2)$. The definitions of $h_h$, $\boldsymbol{\phi}_h$ and $\mathbf{b}_h$ are given in Table 2. Specifically, $\boldsymbol{\phi}_h$ corresponds to the prior beliefs of the nodes, and node $i$, about which we have no information, has $\phi_h(i) = 0$; $\mathbf{b}_h$ corresponds to the vector of our final beliefs for each node.
+
+*Proof.* The goal behind the “about-half” quantities is the linearization of BP using Maclaurin expansions. The preliminary analysis of FABP, and the necessary lemmas for the linearization of the original BP equations, are given in Appendix A. For the detailed proof of this theorem see Appendix B. □
+
+**Lemma 1 (Personalized RWR).** The linear system for RWR, given an observation $\mathbf{y}$, is described by the following equation:
+
+$$[\mathbf{I} - c\mathbf{A}\mathbf{D}^{-1}]\mathbf{x} = (1 - c)\mathbf{y} \quad (2)$$
+
+where $1-c$ is the restart probability, $c \in [0,1]$. Similarly to the BP case above, $\mathbf{y}$ corresponds to the prior beliefs for each node, with the small difference that $y_i = 0$ means that we know nothing about node $i$, while a positive score $y_i > 0$ means that the node belongs to the positive class (with the corresponding strength).
+
+*Proof.* See [11], [24]. □
+
+**Lemma 2 (SSL and Gaussian BP).** Suppose we are given $l$ labeled nodes $(x_i, y_i)$, $i = 1, \dots, l$, $y_i \in \{0, 1\}$, and $u$ unlabeled nodes $(x_{l+1}, \dots, x_{l+u})$. The solution to a Gaussian BP and SSL problem is given by the linear system:
+
+$$[\alpha(\mathbf{D} - \mathbf{A}) + \mathbf{I}]\mathbf{x} = \mathbf{y} \quad (3)$$
+---PAGE_BREAK---
+
+where $\alpha$ is related to the coupling strength (homophily) of neighboring nodes, $\mathbf{y}$ represents the labels of the labeled nodes and, thus, it is related to the prior beliefs in BP, and $\mathbf{x}$ corresponds to the labels of all the nodes or equivalently the final beliefs in BP.
+
+*Proof.* See Appendix B and [25], [28]. □
+
+**Lemma 3 (RWR-SSL correspondence).** On a regular graph (i.e., all nodes have the same degree $d$), RWR and SSL can produce identical results if
+
+$$ \alpha = \frac{c}{(1-c)d} \qquad (4) $$
+
+That is, we need to align carefully the homophily strengths $\alpha$ and $c$.
+
+*Proof.* See Appendix B. □
+
+In an arbitrary graph the degrees are different, but we can still make the two methods give the same results if each node has a different $\alpha_i$ instead of $\alpha$. Specifically, for node $i$ with degree $d_i$, the quantity $\alpha_i$ should be $\frac{c}{(1-c)d_i}$. The following section illustrates the correspondence between RWR and SSL.
+
+## 3.1 Arithmetic Examples
+
+Here we illustrate that SSL and RWR result in closely related solutions. We study both the case with variable $\alpha_i$ for each node $i$, and the case with fixed $\alpha = c/((1-c)\bar{d})$, where $\bar{d}$ is the average degree.
+
+We generated a random graph using the Erdős-Rényi model, $G(n,p) = G(100, 0.3)$. Figure 1 shows the scatter-plot: each node $i$ has a corresponding blue circle ($x_{1i}, y_{1i}$) for variable $\alpha_i$, and also a red star ($x_{2i}, y_{2i}$) for fixed $\alpha$. The coordinates of the points are the RWR and SSL scores, respectively. Figure 1(b) shows a magnification of the central part of Fig. 1(a). Notice that the red stars (fixed $\alpha$) are close to the 45-degree line, while the blue circles (variable $\alpha_i$) are exactly on the 45-degree line. The conclusion is that (a) the SSL and RWR scores are similar, and (b) the rankings are the same: whichever node is labeled as “positive” by SSL, gets a high score by RWR, and conversely.
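The fixed-$\alpha$ correspondence can also be verified numerically; below is a minimal NumPy sketch (our own illustration, not the authors' code) on a 2-regular cycle graph, where Lemma 3 predicts exact equality of the RWR and SSL scores:

```python
import numpy as np

n, d, c = 6, 2, 0.85             # cycle graph: every node has degree d = 2
A = np.zeros((n, n))
for i in range(n):
    A[i, (i + 1) % n] = A[i, (i - 1) % n] = 1
D = np.diag(A.sum(axis=1))
y = np.zeros(n)
y[0] = 1.0                       # one labeled "positive" node

# RWR: [I - c A D^{-1}] x = (1 - c) y   (Eq. (2))
x_rwr = np.linalg.solve(np.eye(n) - c * A @ np.linalg.inv(D), (1 - c) * y)

# SSL: [I + alpha (D - A)] x = y        (Eq. (3)), with alpha from Eq. (4)
alpha = c / ((1 - c) * d)
x_ssl = np.linalg.solve(np.eye(n) + alpha * (D - A), y)

print(np.allclose(x_rwr, x_ssl))  # True: identical scores on a regular graph
```

On a regular graph $\mathbf{D} = d\mathbf{I}$, so substituting $\alpha = c/((1-c)d)$ into the SSL system and multiplying by $(1-c)$ recovers the RWR system exactly, which is what the `allclose` check confirms.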
+
+## 4 Analysis of Convergence
+
+In this section we provide the sufficient, but not necessary conditions for which our method, FABP, converges. The implementation details of FABP are described in the upcoming Section 5. Lemmas 4, 5, and 8 give the convergence conditions. At this point we should mention that work on the convergence of a variant of BP, Gaussian BP, is done in [18] and [25]. The reasons that we focus on BP are that (a) it has a solid, Bayesian foundation, and (b) it is more general than the rest, being able to handle heterophily (as well as multiple-classes, that we don't elaborate here).
+---PAGE_BREAK---
+
+Fig. 1: Scatter plot showing the similarities between SSL and RWR. Scores of SSL and RWR for the nodes of a random graph: blue circles (perfect equality – variable $\alpha_i$) and red stars (fixed $\alpha$). The right figure is a zoom-in of the left. Most red stars are on or close to the diagonal: the two methods give similar scores, and identical assignments to positive/negative classes.
+
+All our results are based on the power expansion required to compute the inverse of a matrix of the form $I - W$; all the methods undergo this process, as we show in Table 3. Specifically, we need the inverse of the matrix $I + aD - c'A = I - W$, which is given by the expansion:
+
+$$ (I - W)^{-1} = I + W + W^2 + W^3 + \dots \quad (5) $$
+
+and the solution of the linear system is given by the formula
+
+$$ (I - W)^{-1} \phi_h = \phi_h + W \cdot \phi_h + W \cdot (W \cdot \phi_h) + \dots \quad (6) $$
+
+This method, also referred to as the Power Method, is fast since the computation can be done in iterations, each one of which consists of a sparse-matrix/vector multiplication. In this section we examine its convergence conditions.
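As an illustration (our own dense-matrix sketch, not the authors' HADOOP code), the iteration of Eq. (6) applied to the FABP system of Eq. (1), with $\mathbf{W} = c'\mathbf{A} - a\mathbf{D}$:

```python
import numpy as np

def fabp(A, phi_h, h_h, iters=100):
    """Solve [I + aD - c'A] b_h = phi_h by the Power Method:
    b <- phi_h + W b, with W = c'A - aD (Eqs. (5)-(6))."""
    a = 4 * h_h**2 / (1 - 4 * h_h**2)
    c_prime = 2 * h_h / (1 - 4 * h_h**2)
    W = c_prime * A - a * np.diag(A.sum(axis=1))
    b = phi_h.copy()
    for _ in range(iters):
        b = phi_h + W @ b   # one matrix-vector product per step (sparse in practice)
    return b
```

Each iteration is a single matrix-vector product, which is what makes the method amenable to a MapReduce implementation.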
+
+**Lemma 4 (Largest eigenvalue).** The series $\sum_{k=0}^{\infty} |c'A - aD|^k$ converges iff $\lambda(W) < 1$, where $\lambda(W)$ is the magnitude of the largest eigenvalue of $W$.
+
+Given that the computation of the largest eigenvalue is non-trivial, we suggest using one of the following lemmas, which give a closed form for computing the “about-half” homophily factor, $h_h$.
+
+**Lemma 5 (1-norm).** The series $\sum_{k=0}^{\infty} |c'A - aD|^k$ converges if
+
+$$ h_h < \frac{1}{2 + 2 \max_j d_{jj}} \quad (7) $$
+
+where $d_{jj}$ are the elements of the diagonal matrix $D$.
+
+*Proof.* The proof is based on the fact that the power series converges if the 1-norm, or equivalently the $\infty$-norm, of the symmetric matrix $W$ is smaller than 1. The detailed proof is shown in Appendix C. □
+---PAGE_BREAK---
+
+**Lemma 6 (Frobenius norm).** The series $\sum_{k=0}^{\infty} |c'A - aD|^k$ converges if
+
+$$h_h < \sqrt{\frac{-c_1 + \sqrt{c_1^2 + 4c_2}}{8c_2}} \quad (8)$$
+
+where $c_1 = 2 + \sum_i d_{ii}$ and $c_2 = \sum_i d_{ii}^2 - 1$.
+
+*Proof.* This upper bound for $h_h$ is obtained by considering the Frobenius norm of matrix $\mathbf{W}$ and solving the inequality $\|\mathbf{W}\|_F = \sqrt{\sum_{i=1}^n \sum_{j=1}^n |\mathbf{W}_{ij}|^2} < 1$ with respect to $h_h$. We omit the detailed proof. □
+
+Equation (8) is preferable over (7) when the degrees of the graph's nodes show considerable standard deviation: the 1-norm bound yields a small $h_h$ when the highest degree is very big, while the Frobenius norm gives a higher upper bound for $h_h$. Nevertheless, $h_h$ should still be a sufficiently small number for the “about-half” approximations made in the analysis of FABP to hold.
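Both closed-form bounds are cheap to evaluate from the degree sequence alone; a small sketch (hypothetical helper names) that also illustrates the comparison on a skewed-degree graph:

```python
import numpy as np

def hh_bound_1norm(A):
    """Eq. (7): h_h < 1 / (2 + 2 * max_j d_jj)."""
    d = A.sum(axis=1)
    return 1.0 / (2 + 2 * d.max())

def hh_bound_frobenius(A):
    """Eq. (8): h_h < sqrt((-c1 + sqrt(c1^2 + 4 c2)) / (8 c2)),
    with c1 = 2 + sum_i d_ii and c2 = sum_i d_ii^2 - 1."""
    d = A.sum(axis=1)
    c1 = 2 + d.sum()
    c2 = (d**2).sum() - 1
    return np.sqrt((-c1 + np.sqrt(c1**2 + 4 * c2)) / (8 * c2))

# Star graph K_{1,5}: one hub of degree 5, five leaves of degree 1.
A = np.zeros((6, 6))
A[0, 1:] = A[1:, 0] = 1
print(hh_bound_1norm(A), hh_bound_frobenius(A))  # Frobenius bound is larger here
```
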
+
+# 5 Proposed Algorithm: FaBP
+
+Based on the analysis in Sections 3 and 4, we propose the FABP algorithm:
+
+• **Step 1:** Pick $h_h$ to achieve convergence: $h_h = \max\{(7), (8)\}$ and compute the parameters $a$ and $c'$ as described in Theorem 1.
+
+• **Step 2:** Solve the linear system (1). Notice that all the quantities involved in this equation are close to zero.
+
+• **Step 3 (optional):** If the achieved accuracy is not sufficient, run a few iterations of BP using the values computed in Step 2 as the prior node beliefs.
+
+In the datasets we studied, the optional step was not required, as FABP achieved equal or higher accuracy than BP, while running in less time.
+
+# 6 Experiments
+
+We present experimental results to answer the following questions:
+
+**Q1:** How accurate is FABP?
+
+**Q2:** Under what conditions does FABP converge?
+
+**Q3:** How sensitive is FABP to the values of $h$ and $\phi$?
+
+**Q4:** How does FABP scale on very large graphs with billions of nodes and edges?
+---PAGE_BREAK---
+
+The graphs we used in our experiments are summarized in Table 4. To answer the first three questions, we used the DBLP dataset [8], which consists of 14,376 papers, 14,475 authors, 20 conferences, and 8,920 terms. Each paper is connected to its authors, the conference in which it appeared, and the terms in its title. Only a small portion of the nodes are labeled: 4,057 authors, 100 papers, and all the conferences. We adapted the labels of the nodes to two classes: AI (Artificial Intelligence) and not AI (= Databases, Data Mining and Information Retrieval). In each trial, we ran FABP on the DBLP network with all but $p$% of the labels of the papers and the authors discarded, for $p \in \{0.1\%, 0.2\%, 0.3\%, 0.4\%, 0.5\%, 5\%\}$. Then, we tested the classification accuracy on the nodes whose labels had been discarded. To avoid combinatorial explosion in the presentation of the results, we consider $\{h_h, priors\} = \{0.002, \pm 0.001\}$ as the anchor values and then vary one parameter at a time. When the results are the same for different values of $p$%, due to lack of space, we randomly pick the plots to present.
+
+To answer the last question, we used the YahooWeb dataset, as well as Kronecker graphs – synthetic graphs generated by the Kronecker generator [17]. YahooWeb is a Web graph containing 1.4 billion web pages and 6.6 billion edges; we automatically labeled 11 million educational and 11 million adult web pages. We used 90% of these labeled data to set the node priors, and the remaining 10% to evaluate the accuracy. For parameters, we set $h_h$ to 0.001 using Lemma 6 (Frobenius norm), and the magnitude of the prior beliefs to $\pm 0.001$.
+
+Table 4: Order and size of graphs.
+
+| Dataset | YahooWeb | Kronecker 1 | Kronecker 2 | Kronecker 3 | Kronecker 4 | DBLP |
+|---|---|---|---|---|---|---|
+| # nodes | 1,413,511,390 | 177,147 | 120,552 | 59,049 | 19,683 | 37,791 |
+| # edges | 6,636,600,779 | 1,977,149,596 | 1,145,744,786 | 282,416,200 | 40,333,924 | 170,794 |
+
+## 6.1 Q1: Accuracy
+
+Figure 2 shows the scatter plots of beliefs (FABP vs BP) for each node of the DBLP data. We observe that FABP and BP result in practically the same beliefs for all the nodes in the graph when run with the same parameters, and thus they yield the same accuracy. The conclusions are identical for any labeled set-size we tried (0.1% and 0.3% shown in Fig. 2).
+
+**Observation 1.** FABP and BP agree on the classification of the nodes when run with the same parameters.
+
+## 6.2 Q2: Convergence
+
+We examine how the value of the “about-half” homophily factor affects the convergence of FABP. In Fig. 3 the red line annotated with “max $|eval|$ = 1” splits the plots into two regions; (a) on the left, the Power Method converges and FABP is accurate, (b) on the right, the Power Method diverges resulting
+---PAGE_BREAK---
+
+Fig. 2: The quality of scores of FABP is near-identical to BP, i.e. all the points are on the 45-degree line in the scatter plot of the DBLP sub-network node beliefs (FABP vs BP); red/green points correspond to nodes classified as “AI/not-AI” respectively.
+
+in a significant drop in classification accuracy. We annotate the number of classified nodes for the values of $h_h$ that leave some nodes unclassified due to numerical representation issues. The low accuracy scores for the smallest values of $h_h$ are due to these unclassified nodes, which are counted as misclassifications. The Frobenius norm-based method yields a greater upper bound for $h_h$ than the 1-norm-based method, preventing any numerical representation problems.
+
+**Observation 2.** Our convergence bounds consistently coincide with high-accuracy regions. Thus, we recommend choosing the homophily factor based on the Frobenius norm using (8).
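The Frobenius-norm bound of (8) is not reproduced in this excerpt, so as a minimal illustrative sketch we compute the simpler 1-norm bound of Lemma 5 (proved in Appendix C), $h_h < \frac{1}{2(1+\max_j d_{jj})}$, for a hypothetical toy graph (not one of the paper's datasets):

```python
# Minimal sketch: choose the "about-half" homophily factor h_h below the
# 1-norm convergence bound of Lemma 5: h_h < 1 / (2 * (1 + max_degree)).
# The toy edge list below is illustrative only.
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (1, 3)]

degree = {}
for u, v in edges:
    degree[u] = degree.get(u, 0) + 1
    degree[v] = degree.get(v, 0) + 1

max_degree = max(degree.values())
h_h_bound = 1.0 / (2 * (1 + max_degree))

# Any h_h strictly below the bound guarantees the Power Method converges.
h_h = 0.9 * h_h_bound
print(max_degree, h_h_bound)
```

The Frobenius-norm bound of (8) would admit a larger $h_h$, as noted above; the 1-norm bound is used here only because it is stated explicitly in this excerpt.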
+
+Fig. 3: FABP achieves maximum accuracy within the convergence bounds. The annotated red numbers correspond to the classified nodes when not all nodes were classified by FABP.
+---PAGE_BREAK---
+
+## 6.3 Q3: Sensitivity to parameters
+
+Figure 3 shows that FABP is insensitive to the “about-half” homophily factor, $h_h$, as long as the latter is within the convergence bounds. Moreover, in Fig. 4 we observe that the accuracy score is insensitive to the *magnitude* of the prior beliefs. For brevity, we show only the cases $p \in \{0.1\%, 0.3\%, 0.5\% \}$, since for all values except $p = 5.0\%$ the accuracy is practically identical. Similar results were found for different “about-half” homophily factors, but the plots are omitted due to lack of space.
+
+**Observation 3.** The accuracy results are insensitive to the magnitude of the prior beliefs and to the homophily factor, as long as the latter is within the convergence bounds given in Section 4.
+
+Fig. 4: Insensitivity of FABP to the magnitude of the prior beliefs.
+
+Fig. 5: FABP runtime vs \# edges of Kronecker graphs for 10 and 30 machines on HADOOP.
+
+## 6.4 Q4: Scalability
+
+To demonstrate the scalability of FABP, we implemented it on HADOOP, an open-source MAPREDUCE framework which has been successfully used for large-scale graph analysis [14]. We first show how FABP scales with the number of edges of the Kronecker graphs. As seen in Fig. 5, the runtime of FABP is linear in the number of edges. Next, we compare the HADOOP implementations of FABP and BP [13] in terms of running time and accuracy on the YahooWeb graph. Figures 6(a-c) show that FABP achieves the maximum accuracy level after two iterations of the Power Method and is ~2x faster than BP. This is explained as follows: for large graphs, BP needs to store the updated messages for 2 states on disk, i.e., $2|E|$ records in total, where $|E|$ is the number of edges. In contrast, FABP stores $n$ records per iteration, where $n$ is the number of nodes. Given that $n < 2|E|$, FABP is faster than BP.
+
+**Observation 4.** FABP scales linearly with the number of edges, with a ~2x faster running time than BP on HADOOP.
+---PAGE_BREAK---
+
+Fig. 6: Performance on the YahooWeb graph (best viewed in color): FABP wins on speed and wins/ties on accuracy. In (c), each method has 4 points, corresponding to steps 1 through 4. FABP achieves the maximum accuracy after 84 minutes, while BP achieves the same accuracy after 151 minutes.
+
+# 7 Conclusions
+
+Which of the many *guilt-by-association* methods should one use? We answered this question and developed FABP, a new, fast algorithm for such computations. The contributions of our work are the following:
+
+* **Theory & Correspondences:** We showed that successful, major *guilt-by-association* approaches (RWR, SSL, and BP variants) are closely related, and we proved that some are even equivalent under certain conditions (Theorem 1, Lemmas 1, 2, and 3).
+
+* **Algorithms & Convergence:** Thanks to our analysis, we designed FABP, a fast and accurate approximation to the standard belief propagation (BP), which has convergence guarantee (Lemmas 5 and 6).
+
+* **Implementation & Experiments:** We showed that FABP is significantly faster than BP, by about $2 \times$, with the same or better accuracy (AUC). Moreover, we showed how to parallelize it with MAPREDUCE (HADOOP), operating on billion-node graphs.
+
+Thanks to our analysis, our guide to practitioners is the following: among the 3 *guilt-by-association* methods, we recommend belief propagation, for two reasons: (1) it has solid, Bayesian underpinnings and (2) it can naturally handle heterophily, as well as multiple class-labels. With respect to parameter setting, we recommend choosing the homophily score, $h_h$, according to the Frobenius bound in (8).
+
+Future work could focus on time-evolving graphs, and label-tracking over time. For instance, in a call-graph, we would like to spot nodes that change behavior, e.g. from “telemarketer” type to “normal user” type.
+
+**Acknowledgments** This work is partially supported by an IBM Faculty Award, by the National Science Foundation under Grants No. CNS-0721736 IIS0970179, under the project No. NSC 98-2221-E-011-105, NSC 99-2218-E-011-019, under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under contract No. DE-AC52-07NA27344, and by the Army Research Laboratory under Cooperative Agreement Number W911NF-09-2-0053. The views and conclusions contained in this document are those of the authors and should not be interpreted as
+---PAGE_BREAK---
+
+representing the official policies, either expressed or implied, of the Army Research Laboratory, the U.S. Government, NSF, or any other funding parties. The U.S. Government is authorized to reproduce and distribute reprints for Government purposes notwithstanding any copyright notation here on.
+
+## References
+
+[1] Hadoop information. http://hadoop.apache.org/.
+
+[2] S. Brin and L. Page. The anatomy of a large-scale hypertextual web search engine. *Computer Networks*, 30(1-7), 1998.
+
+[3] D. Chau, C. Nachenberg, J. Wilhelm, A. Wright, and C. Faloutsos. Polonium: Tera-scale graph mining and inference for malware detection. *SDM*, 2011.
+
+[4] A. Chechetka and C. Guestrin. Focused belief propagation for query-specific inference. In *International Conference on Artificial Intelligence and Statistics (AISTATS)*, May 2010.
+
+[5] N. A. Christakis and J. H. Fowler. The spread of obesity in a large social network over 32 years. *New England Journal of Medicine*, 357(4):370-379, 2007.
+
+[6] P. Felzenszwalb and D. Huttenlocher. Efficient belief propagation for early vision. *International Journal of Computer Vision*, 70(1):41-54, 2006.
+
+[7] J. H. Fowler and N. A. Christakis. Dynamic spread of happiness in a large social network: longitudinal analysis over 20 years in the Framingham Heart Study. *BMJ*, 2008.
+
+[8] J. Gao, F. Liang, W. Fan, Y. Sun, and J. Han. Graph-based Consensus Maximization among Multiple Supervised and Unsupervised Models. In *NIPS*, 2009.
+
+[9] J. Gonzalez, Y. Low, and C. Guestrin. Residual splash for optimally parallelizing belief propagation. In *AISTAT*, 2009.
+
+[10] T. Haveliwala. Topic-sensitive pagerank: A context-sensitive ranking algorithm for web search. *IEEE Transactions on Knowledge and Data Engineering*, pages 784-796, 2003.
+
+[11] T. Haveliwala, S. Kamvar, and G. Jeh. An analytical comparison of approaches to personalizing pagerank. Technical report, Stanford University, 2003.
+
+[12] M. Ji, Y. Sun, M. Danilevsky, J. Han, and J. Gao. Graph regularized transductive classification on heterogeneous information networks. *ECML PKDD*, 2010.
+
+[13] U. Kang, D. H. Chau, and C. Faloutsos. Mining large graphs: Algorithms, inference, and discoveries. In *ICDE*, pages 243-254, 2011.
+
+[14] U. Kang, C. Tsourakakis, and C. Faloutsos. Pegasus: A peta-scale graph mining system - implementation and observations. *IEEE International Conference on Data Mining*, 2009.
+
+[15] Y. Koren, S. C. North, and C. Volinsky. Measuring and extracting proximity in networks. In *KDD*, pages 245-255. ACM, 2006.
+
+[16] F. Kschischang, B. Frey, and H. Loeliger. Factor graphs and the sum-product algorithm. *IEEE Transactions on Information Theory*, 47(2):498-519, 2001.
+
+[17] J. Leskovec, D. Chakrabarti, J. M. Kleinberg, and C. Faloutsos. Realistic, mathematically tractable graph generation and evolution, using kronecker multiplication. *PKDD*, 2005.
+
+[18] D. M. Malioutov, J. K. Johnson, and A. S. Willsky. Walk-sums and belief propagation in gaussian graphical models. *Journal of Machine Learning Research*, 7:2031-2064, 2006.
+---PAGE_BREAK---
+
+[19] M. McGlohon, S. Bay, M. G. Anderle, D. M. Steier, and C. Faloutsos. Snare: a link analytic system for graph labeling and risk detection. *KDD*, 2009.
+
+[20] E. Minkov and W. Cohen. Learning to rank typed graph walks: Local and global approaches. In *Proceedings of the 9th WebKDD and 1st SNA-KDD 2007 workshop on Web mining and social network analysis*, pages 1–8. ACM, 2007.
+
+[21] J. Pan, H. Yang, C. Faloutsos, and P. Duygulu. Gcap: Graph-based automatic image captioning. In *MDDE*, 2004.
+
+[22] S. Pandit, D. Chau, S. Wang, and C. Faloutsos. Netprobe: a fast and scalable system for fraud detection in online auction networks. In *WWW*, 2007.
+
+[23] J. Pearl. Reverend Bayes on inference engines: A distributed hierarchical approach. In *Proceedings of the AAAI National Conference on AI*, pages 133–136, 1982.
+
+[24] H. Tong, C. Faloutsos, and J. Pan. Fast random walk with restart and its applications. In *ICDM*, 2006.
+
+[25] Y. Weiss. Correctness of local probability propagation in graphical models with loops. *Neural Computation*, 12(1):1–41, 2000.
+
+[26] J. Yedidia, W. Freeman, and Y. Weiss. Understanding belief propagation and its generalizations. *Exploring artificial intelligence in the new millennium*, 8:236–239, 2003.
+
+[27] J. Yedidia, W. Freeman, and Y. Weiss. Constructing free-energy approximations and generalized belief propagation algorithms. *IEEE Transactions on Information Theory*, 51(7):2282–2312, 2005.
+
+[28] X. Zhu. Semi-supervised learning literature survey, 2006.
+
+## Appendix A: Preliminaries - Analysis of FaBP
+
+In this appendix we present the lemmas that are needed to prove Theorem 1 (FABP), which gives the linearized version of BP. We start with the original BP equations, and we derive the proof by:
+
+* using the *odds ratio* $p_r = p/(1-p)$, instead of probabilities. The advantage is that we have only one value for each node, $p_r(i)$, instead of two, $p_+(i)$ and $p_-(i)$; also, the normalization factor is not needed. Moreover, working with the *odds ratios* results in the substitution of the propagation matrix entries by a scalar homophily factor.
+
+* assuming that all the parameters are close to 1/2, using Maclaurin expansions to linearize the equations, and keeping only the first order terms. By doing so we avoid the sigmoid/non-linear equations of BP.
+
+*Traditional BP equations:* In [26], Yedidia derives the following update formulas for the messages sent from node *i* to node *j* and the belief of each node that it is in state $x_i$
+
+$$m_{ij}(x_j) = \sum_{x_i} \phi_i(x_i) \cdot \psi_{ij}(x_i, x_j) \cdot \prod_{n \in N(i) \setminus j} m_{ni}(x_i) \quad (9)$$
+
+$$b_i(x_i) = \eta \cdot \phi_i(x_i) \cdot \prod_{j \in N(i)} m_{ij}(x_j) \quad (10)$$
+---PAGE_BREAK---
+
+where the message from node *i* to node *j* is computed based on all the messages sent by all its neighbors in the previous step except for the previous message sent from node *j* to node *i*. $N(i)$ denotes the neighbors of *i* and $\eta$ is a normalization constant that guarantees that the beliefs sum to 1.
+
+Table 5: Additional Symbols and Definitions
+
+| Symbols | Definitions |
+|---|---|
+| $p$ | $P(\text{node in positive class}) = P(+)$ |
+| $m$ | message |
+| $v_r$ | odds ratio $v_r = \frac{v}{1-v}$, where $v \in \{b, \phi, m, h\}$ |
+| $B(a, b)$ | blending function of the variables $a$ and $b$: $B(a, b) = \frac{a \cdot b + 1}{a + b}$ |
+
+**Lemma 7.** Expressed as ratios, the BP equations become:
+
+$$m_r(i, j) \leftarrow B[h_r, b_{r, \text{adjusted}}(i, j)] \quad (11)$$
+
+$$b_r(i) \leftarrow \phi_r(i) \cdot \prod_{j \in N(i)} m_r(j, i) \quad (12)$$
+
+where $b_{r,adjusted}(i,j)$ is defined as $b_{r,adjusted}(i,j) = b_r(i)/m_r(j,i)$. The division by $m_r(j,i)$ subtracts the influence of node j when preparing the message $m_r(i,j)$.
+
+*Proof.* The proof is straightforward. Notice that $1 - v_+(i) = v_-(i)$ for $v \in \{b, \phi, m\}$, e.g., $b_-(i) = 1 - b_+(i) = \eta \cdot (1 - \phi_+(i)) \cdot \prod_{j \in N(i)} (1 - m_+(j, i))$. $\square$
+
+**Lemma 8 (Approximations).** Fundamental approximations for all the variables of interest, {$m, b, \phi, h$}, are:
+
+$$v_r = \frac{v}{1-v} = \frac{1/2 + v_h}{1/2 - v_h} \approx 1 + 4v_h \quad (13)$$
+
+$$B(a_r, b_r) \approx 1 + 8a_h b_h \quad (14)$$
+
+where $B(a_r, b_r)$ is the blending function for any variables $a_r, b_r$.
+
+*Sketch of proof.* Use the definition of “about-half” approximations, apply the appropriate Maclaurin series expansions and keep only the first order terms. $\square$
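A quick numeric sanity check of Lemma 8 (a sketch, not part of the paper): we evaluate the exact odds ratio and blending function, using $B(a, b) = \frac{a \cdot b + 1}{a + b}$ from Table 5, against the first-order approximations (13) and (14) for small "about-half" deviations.

```python
# Sanity check of the "about-half" approximations (13) and (14).
# v_r = v/(1-v) with v = 1/2 + v_h; B(a, b) = (a*b + 1)/(a + b) as in Table 5.
def odds(v):
    return v / (1.0 - v)

def blend(a, b):
    return (a * b + 1.0) / (a + b)

v_h = 0.01                      # small deviation from 1/2
v_r = odds(0.5 + v_h)
approx_13 = 1 + 4 * v_h         # eq. (13)

a_h, b_h = 0.01, 0.02
exact_14 = blend(odds(0.5 + a_h), odds(0.5 + b_h))
approx_14 = 1 + 8 * a_h * b_h   # eq. (14)

print(abs(v_r - approx_13), abs(exact_14 - approx_14))
```

Both errors are of second order in the deviations, consistent with keeping only first-order Maclaurin terms.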
+
+Lemmas 9-11 are useful in order to derive the linear equation of FABP. Note that we apply several approximations, but omit the “≈” symbol to make the proofs more readable.
+
+**Lemma 9.** The “about-half” version of the belief equation becomes, for small deviations from the half-point:
+
+$$b_h(i) \approx \phi_h(i) + \sum_{j \in N(i)} m_h(j, i). \quad (15)$$
+---PAGE_BREAK---
+
+*Proof.* We use (12) and (13) and apply the appropriate Maclaurin series expansions:
+
+$$
+\begin{align*}
+b_r(i) &= \phi_r(i) \prod_{j \in N(i)} m_r(j, i) \Rightarrow \\
+\log(1 + 4b_h(i)) &= \log(1 + 4\phi_h(i)) + \sum_{j \in N(i)} \log(1 + 4m_h(j, i)) \Rightarrow \\
+b_h(i) &= \phi_h(i) + \sum_{j \in N(i)} m_h(j, i). \quad \square
+\end{align*}
+$$
+
+**Lemma 10.** The “about-half” version of the message equation becomes:
+
+$$ m_h(i, j) \approx 2h_h[b_h(i) - m_h(j, i)]. \tag{16} $$
+
+*Proof.* We combine (11), (13) and (14) to deduce
+
+$$ m_r(i, j) = B[h_r, b_{r,adjusted}(i, j)] \Rightarrow m_h(i, j) = 2h_h b_{h,adjusted}(i, j). \quad (17) $$
+
+In order to derive $b_{h,adjusted}(i,j)$ we use (13) and the approximation of the Maclaurin expansion $\frac{1}{1+\epsilon} \approx 1-\epsilon$ for a small quantity $\epsilon$:
+
+$$
+\begin{align}
+b_{r,adjusted}(i,j) &= b_r(i)/m_r(j,i) \Rightarrow \nonumber \\
+1+4b_{h,adjusted}(i,j) &= (1+4b_h(i))(1-4m_h(j,i)) \Rightarrow \nonumber \\
+b_{h,adjusted}(i,j) &= b_h(i) - m_h(j,i) - 4b_h(i)m_h(j,i). \tag{18}
+\end{align}
+$$
+
+Substituting (18) to (17) and ignoring the terms of second order, leads to the about-half version of the message equation. $\square$
+
+**Lemma 11.** At steady state, the messages can be expressed in terms of the beliefs:
+
+$$ m_h(i, j) \approx \frac{2h_h}{1 - 4h_h^2} [b_h(i) - 2h_h b_h(j)]. \quad (19) $$
+
+*Proof.* We apply Lemma 10 both for $m_h(i, j)$ and $m_h(j, i)$ and we solve for $m_h(i, j)$. $\square$
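As an illustrative check of Lemma 11 (values chosen arbitrarily), iterating the pair of message equations (16) to their fixed point reproduces the closed form (19):

```python
# Numeric check of Lemma 11: iterate the pair of message equations (16),
#   m_h(i,j) = 2*h*(b_i - m_h(j,i)),   m_h(j,i) = 2*h*(b_j - m_h(i,j)),
# to their fixed point and compare with the closed form (19).
# h and the beliefs below are arbitrary small "about-half" deviations.
h, b_i, b_j = 0.05, 0.02, -0.01

m_ij, m_ji = 0.0, 0.0
for _ in range(100):
    m_ij, m_ji = 2 * h * (b_i - m_ji), 2 * h * (b_j - m_ij)

closed_form = 2 * h / (1 - 4 * h * h) * (b_i - 2 * h * b_j)
print(abs(m_ij - closed_form))
```

The iteration contracts by a factor $4h_h^2$ per round trip, so it converges quickly whenever $|h_h| < 1/2$.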
+
+# Appendix B: Proofs of Section 3 (Theorems)
+
+Here we give the proofs of the theorems and lemmas presented in Section 3.
+
+**Proof of Theorem 1.** We substitute (16) to (15) and we obtain:
+
+$$
+\begin{align*}
+& b_h(i) - \sum_{j \in N(i)} m_h(j, i) = \phi_h(i) \Rightarrow \\
+& b_h(i) + \sum_{j \in N(i)} \frac{4h_h^2 b_h(i)}{1 - 4h_h^2} - \sum_{j \in N(i)} \frac{2h_h}{1 - 4h_h^2} b_h(j) = \phi_h(i) \Rightarrow \\
+& (\mathbf{I} + a\mathbf{D} - c'\mathbf{A})\mathbf{b}_h = \mathbf{\Phi}_h. \quad \square
+\end{align*}
+$$
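A minimal sketch of the resulting linear system $(\mathbf{I} + a\mathbf{D} - c'\mathbf{A})\mathbf{b}_h = \mathbf{\Phi}_h$: the coefficients $a = 4h_h^2/(1-4h_h^2)$ and $c' = 2h_h/(1-4h_h^2)$ are read off the derivation above. The toy path graph, priors, and plain Jacobi iteration are illustrative assumptions (the paper solves the system with the Power Method on HADOOP).

```python
# Sketch of the FABP linear system from Theorem 1:
#   (I + a*D - c'*A) b_h = phi_h,
# with a = 4*h^2/(1-4*h^2) and c' = 2*h/(1-4*h^2) read off the derivation.
# Toy graph and priors are hypothetical; Jacobi iteration stands in for
# the paper's Power Method.
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}   # path graph 0-1-2-3
phi = {0: 0.002, 1: 0.0, 2: 0.0, 3: -0.002}    # "about-half" prior beliefs
h = 0.1

a = 4 * h * h / (1 - 4 * h * h)
c = 2 * h / (1 - 4 * h * h)

b = {i: 0.0 for i in adj}
for _ in range(200):
    b = {i: (phi[i] + c * sum(b[j] for j in adj[i])) / (1 + a * len(adj[i]))
         for i in adj}

# Residual of the linear system should be ~0 at the fixed point.
residual = max(abs((1 + a * len(adj[i])) * b[i]
                   - c * sum(b[j] for j in adj[i]) - phi[i]) for i in adj)
print(residual, b[0] > 0, b[3] < 0)
```

The positive prior at node 0 yields a positive final belief there, and symmetrically a negative one at node 3, as expected from homophily.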
+---PAGE_BREAK---
+
+**Proof of Lemma 2.** Given $l$ labeled points $(x_i, y_i)$, $i = 1, \dots, l$, and $u$ unlabeled points $x_{l+1}, \dots, x_{l+u}$ for a semi-supervised learning problem, based on an energy minimization formulation, we find the labels $x_i$ by minimizing the following function $E$
+
+$$E(\mathbf{x}) = \frac{\alpha}{2} \sum_{i} \sum_{j \in N(i)} a_{ij} (x_i - x_j)^2 + \sum_{1 \le i \le l} (y_i - x_i)^2, \quad (20)$$
+
+where $\alpha$ is related to the coupling strength (homophily) of neighboring nodes, and $N(i)$ denotes the neighbors of $i$. If all points are labeled, (20) becomes, in matrix form,
+
+$$\begin{aligned} E(\mathbf{x}) &= \mathbf{x}^T[\mathbf{I} + \alpha(\mathbf{D} - \mathbf{A})]\mathbf{x} - 2\mathbf{x}^T\mathbf{y} + K(\mathbf{y}) \\ &= (\mathbf{x} - \mathbf{x}^*)^T[\mathbf{I} + \alpha(\mathbf{D} - \mathbf{A})](\mathbf{x} - \mathbf{x}^*) + K'(\mathbf{y}), \end{aligned}$$
+
+where $\mathbf{x}^* = [\mathbf{I} + \alpha(\mathbf{D} - \mathbf{A})]^{-1}\mathbf{y}$, and $K, K'$ are some constant terms which depend only on $\mathbf{y}$. Clearly, $E$ achieves the minimum when
+
+$$\mathbf{x} = \mathbf{x}^* = [\mathbf{I} + \alpha(\mathbf{D} - \mathbf{A})]^{-1}\mathbf{y}.$$
+
+The equivalence of SSL and Gaussian BP can be found in [25]. □
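A sketch checking the minimizer on the smallest nontrivial case: two labeled nodes joined by one edge, so that $\mathbf{x}^* = [\mathbf{I} + \alpha(\mathbf{D} - \mathbf{A})]^{-1}\mathbf{y}$ can be computed by hand. The coupling term equals $\alpha(x_0 - x_1)^2$, matching the quadratic form $\mathbf{x}^T[\mathbf{I} + \alpha(\mathbf{D} - \mathbf{A})]\mathbf{x}$. All numbers are illustrative.

```python
# Check of Lemma 2 on two labeled nodes joined by one edge:
# the minimizer of E should be x* = [I + alpha*(D - A)]^{-1} y.
alpha = 0.5
y = (1.0, -1.0)

# M = I + alpha*(D - A) = [[1+alpha, -alpha], [-alpha, 1+alpha]]; invert 2x2.
det = (1 + alpha) ** 2 - alpha ** 2
x_star = (((1 + alpha) * y[0] + alpha * y[1]) / det,
          (alpha * y[0] + (1 + alpha) * y[1]) / det)

def energy(x):
    # coupling = alpha * x^T (D - A) x for a single edge, plus the label fit
    coupling = alpha * (x[0] - x[1]) ** 2
    fit = (y[0] - x[0]) ** 2 + (y[1] - x[1]) ** 2
    return coupling + fit

e_star = energy(x_star)
perturbed = min(energy((x_star[0] + dx, x_star[1] + dy))
                for dx in (-0.01, 0.01) for dy in (-0.01, 0.01))
print(x_star, e_star < perturbed)
```

Since $E$ is strictly convex, every perturbation of $\mathbf{x}^*$ increases the energy, confirming the closed-form minimizer.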
+
+**Proof of Lemma 3.** Based on (2) and (3), the two methods will give identical results if
+
+$$\begin{gathered} (1-c)[\mathbf{I} - c\mathbf{D}^{-1}\mathbf{A}]^{-1} = [\mathbf{I} + \alpha(\mathbf{D} - \mathbf{A})]^{-1} \Leftrightarrow \\ \left(\frac{c}{1-c}\right) [\mathbf{I} - \mathbf{D}^{-1}\mathbf{A}] = \alpha(\mathbf{D} - \mathbf{A}) \Leftrightarrow \\ \left(\frac{c}{1-c}\right) \mathbf{D}^{-1} = \alpha\mathbf{I}. \end{gathered}$$
+
+This cannot hold in general, unless the graph is “regular”: $d_i = d$ ($i = 1, \dots, n$), or $\mathbf{D} = d \cdot \mathbf{I}$, in which case the condition becomes
+
+$$\alpha = \frac{c}{(1-c)d} \Rightarrow c = \frac{\alpha d}{1+\alpha d} \quad (21)$$
+
+where $d$ is the common degree of all the nodes. □
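As an illustrative check of (21), on a $d$-regular toy graph (a 4-cycle, $d = 2$) the matrices $\frac{1}{1-c}[\mathbf{I} - c\mathbf{D}^{-1}\mathbf{A}]$ and $\mathbf{I} + \alpha(\mathbf{D} - \mathbf{A})$ coincide entrywise when $c = \alpha d/(1+\alpha d)$, which is equivalent to the equality of their inverses in the lemma:

```python
# Check of (21) on a d-regular toy graph (a 4-cycle, d = 2): with
# c = alpha*d/(1 + alpha*d), the matrices (1/(1-c))*(I - c*D^{-1}A) and
# I + alpha*(D - A) coincide entrywise, so their inverses do too.
n, d, alpha = 4, 2, 0.3
A = [[0, 1, 0, 1], [1, 0, 1, 0], [0, 1, 0, 1], [1, 0, 1, 0]]
c = alpha * d / (1 + alpha * d)

lhs = [[(1.0 / (1 - c)) * ((1.0 if i == j else 0.0) - c * A[i][j] / d)
        for j in range(n)] for i in range(n)]
rhs = [[(1.0 if i == j else 0.0) + alpha * ((d if i == j else 0) - A[i][j])
        for j in range(n)] for i in range(n)]

max_diff = max(abs(lhs[i][j] - rhs[i][j]) for i in range(n) for j in range(n))
print(max_diff)
```

The choice of $\alpha$ and of the 4-cycle is arbitrary; any regular graph works, while an irregular graph breaks the identity, as the proof shows.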
+
+## Appendix C: Proofs of Section 4 (Convergence)
+
+**Proof of Lemma 5.** In order for the power series to converge, a sub-multiplicative norm of matrix $\mathbf{W} = c\mathbf{A} - a\mathbf{D}$ should be smaller than 1. In this analysis we use the 1-norm (or equivalently the $\infty$-norm). The elements of matrix $\mathbf{W}$ are either $c = \frac{2h_h}{1-4h_h^2}$ or $-ad_{ii} = \frac{-4h_h^2 d_{ii}}{1-4h_h^2}$. Thus, we require
+
+$$\max_j \left(\sum_{i=1}^{n} |\mathbf{W}_{ij}| \right) < 1 \Rightarrow (c+a) \cdot \max_j d_{jj} < 1 \Rightarrow \\ \frac{2h_h}{1-2h_h} \max_j d_{jj} < 1 \Rightarrow h_h < \frac{1}{2(1+\max_j d_{jj})}. \quad \square$$
\ No newline at end of file
diff --git a/samples/texts_merged/675668.md b/samples/texts_merged/675668.md
new file mode 100644
index 0000000000000000000000000000000000000000..c91ba4ba2201165bec53bf388d0000c0ee9e033d
--- /dev/null
+++ b/samples/texts_merged/675668.md
@@ -0,0 +1,2299 @@
+
+---PAGE_BREAK---
+
+PROGRAMMING SEMANTICS TO
+TOPOLOGICAL SYSTEMS TO
+LATTICE-VALUED TOPOLOGY
+
+by
+
+JEFFREY T. DENNISTON, AUSTIN MELTON,
+AND STEPHEN E. RODABAUGH
+
+Electronically published on July 18, 2013
+
+Topology Proceedings
+
+**Web:** http://topology.auburn.edu/tp/
+
+**Mail:** Topology Proceedings
+Department of Mathematics & Statistics
+Auburn University, Alabama 36849, USA
+
+**E-mail:** topolog@auburn.edu
+
+**ISSN:** 0146-4124
+
+COPYRIGHT © by Topology Proceedings. All rights reserved.
+---PAGE_BREAK---
+
+PROGRAMMING SEMANTICS TO TOPOLOGICAL
+SYSTEMS TO LATTICE-VALUED TOPOLOGY
+
+JEFFREY T. DENNISTON, AUSTIN MELTON,
+AND STEPHEN E. RODABAUGH
+
+ABSTRACT. This paper examines the synergism emerging from three historically distinctive traditions: theory of locales; programming semantics and topological systems; and point-set lattice-theoretic (poslat) topology, both fixed-basis and variable-basis. Many gaps are discovered and filled with new results; and open questions are posed.
+
+# 1. INTRODUCTION AND PLAN OF PAPER
+
+This paper extends the presentation [67] made by the third author and traces the emerging synergism of these three historically distinct developments:
+
+(1) the study of locales, motivated in part by the Stone representation theorems and the subsequent and underlying sobriety-spatiality representation theorem based upon D. Papert and S. Papert [46] and J. R. Isbell [28];
+
+(2) the study of programming as a discipline begun in 1976 by E. W. Dijkstra [14] and further developed in a topology related direction by M. Smyth [70], culminating in the topological systems of S. J. Vickers [75]; and
+
+2010 Mathematics Subject Classification. Primary 03E72, 06A06, 06F30, 18D20, 54A40, 54B30, 54D10; Secondary 03B52, 03B70, 06D22, 54E25, 54F05, 94D05.
+
+**Key words and phrases.** Postcondition predicates, precondition predicates, deterministic programs, open predicates, topological spaces, topological systems, spectrum, sober/spatial spaces/systems, adjunctions, lattice/quantale-valued topological spaces/systems, lattice/quantale-valued spectra, essentially algebraic categories, topological categories, noncommutative topologies, data mining, pattern matching, enriched category and many-valued preorders, preordered topological spaces/systems.
+
+©2013 Topology Proceedings.
+---PAGE_BREAK---
+
+(3) the study of fuzzy sets as introduced by L. A. Zadeh [77], motivated in part by control theory in engineering, followed by lattice-valued topology in the sense of C. L. Chang, J. A. Goguen, and R. Lowen [6, 18, 41], the openness predicate motivated lattice-valued fuzzy topology in the sense of U. Höhle, T. Kubiak, and A. P. Šostak [21, 34, 74], and culminating in the first variable-basis categories for both lattice-valued topology and lattice-valued fuzzy topology in S. E. Rodabaugh [62].
+
+What is striking is that these three developments did not know of each other initially. The programming development became aware of the localic tradition fairly early in its history, at least by the 1980's as judged by [70, 75], while the "fuzzy" development took relatively longer in its history to become aware of locales (and similar structures such as MV-algebras, quantales, residuated lattices), doing so in the 1980's [22, 53] and culminating in [25, 32, 33, 48, 49, 50, 62] in the 1990's and beyond. What is most striking is that (2) and (3) above were apparently unaware of each other until J. T. Denniston and Rodabaugh [7] in 2009 showed how both fixed-basis and variable-basis topology are intimately connected to topological systems, which seemed to spark developments in several directions. These include: attachment relations in C. Guido [19, 20]; algebraic varieties and powerset theories in S. A. Solovjov [71, 72, 73]; and nondeterminism, formal concept analysis, and enriched categories in Denniston, A. Melton, and Rodabaugh [9, 10, 11, 12]. Not only are (2) and (3) invigorating each other, but this linkage is reshaping the relationship between (1) and (3) as well.
+
+It is the purpose of this paper to extend the presentation of [67], update and broaden the linkage of (1), (2), (3) above given in [9], and generally examine the growing synergism of (1), (2), (3). While many of the results of this paper are known, our examination uncovers and fills many gaps with new results. Known constructions and results are cited without proof, while new constructions and results are given proofs. Several open questions are posed.
+
+Unless stated otherwise, categorical notions are from [1]. Also, a frame $L$ is *consistent* if it has at least two elements, in which case $\perp \neq \top$; otherwise, $L$ is *inconsistent*. We find the following notation, commonly attributed to P. Halmos, to be frequently convenient: if $f : X \to Y$ is a function and $P$ is a possible predicate of members of $Y$, then
+
+$$[f \text{ has } P] := \{x \in X : f(x) \text{ has } P\}.$$
+---PAGE_BREAK---
+
+An outline of the rest of this paper is as follows:
+
+Section 2: From programming to topological systems and TopSys
+
+Section 3: Categorical behavior of TopSys
+
+Section 4: Topological systems and fixed-basis lattice-valued topology
+
+Section 5: Topological systems and variable-basis lattice-valued
+topology
+
+Section 6: Generalizations and future directions
+
+Section 7: Acknowledgements
+
+## 2. FROM PROGRAMMING TO TOPOLOGICAL SYSTEMS AND TOPSYS
+
+2.1. **Dijkstra's programming principles and adjointness of programs.** In 1976 [14], E. W. Dijkstra laid down principles the goal of which was to improve programming methodology. Two key ideas are the following:
+
+**Out.** Focus more on outputs than inputs.
+
+**Pred.** Focus on predicates or properties of outputs (*postcondition predicates*) and predicates or properties of inputs (*precondition predicates*).
+
+Letting *X* and *Y* be the sets, respectively, of inputs and outputs, the above principles can be illustrated in the special case in which precondition predicates comprise a family *P* of subsets of *X* and postcondition predicates a family *Q* of subsets of *Y*. In this case, we say that input *x* satisfies *P* ∈ *P*, output *y* satisfies *Q* ∈ *Q* if *x* ∈ *P*, *y* ∈ *Q*; so that inputs and outputs are related to, and satisfy, predicates via the membership relation.
+
+These notions can be packaged as a “system”, which, in the case of the input side, comprises an ordered triple $(X, \mathcal{P}, \in)$, where $X$ is a set (perhaps of bitstrings), $\mathcal{P} \subset \wp(X)$ is a family of predicates, and $\in \subset X \times \mathcal{P}$ acts as a “satisfaction relation” indicating when a given string satisfies a given predicate. Later we shall consider generalizations of the notions of predicates and satisfaction, but for now we continue to work within the special case of predicates as subsets and satisfaction as membership.
+
+In addition to the above notions, one can distinguish *deterministic* from
+*nondeterministic* relationships between inputs of *X* and outputs of *Y*:
+
+• In the deterministic case, the input-to-output correspondence is a well-defined (partial) function $f : X \to Y$.
+
+• In the nondeterministic case, the input-to-output correspondence is a relation $R \subset X \times Y$.
+---PAGE_BREAK---
+
+The nondeterministic case is carefully studied in [11]. Focusing now on the deterministic case with a total function $f$ and continuing to use the special case of predicates and satisfaction as above, a *deterministic program* $(f, \varphi) : (X, \mathcal{P}, \in) \to (Y, \mathcal{Q}, \in)$ comprises:
+
+• a “forward” map $f: X \to Y$ converting inputs to outputs, and
+
+• a “backwards” map $\varphi^{op}: \mathcal{Q} \to \mathcal{P}$ converting postcondition predicates to precondition predicates.
+
+Additionally, Dijkstra goes on to describe an “optimal” deterministic program ($f, \varphi$) by imposing the following axiom which some call *adjointness*:
+
+$$ \forall Q \in \mathcal{Q}, \forall x \in X, x \in \varphi^{op}(Q) \Leftrightarrow f(x) \in Q. $$
+
+Each deterministic program in the sequel is assumed to have adjointness and may be said to be “adjoint”. It is instructive to motivate each direction of the biconditional predicate of the adjointness axiom in our special case, a biconditionality with far-reaching categorical consequences (cf. Proposition 3.1.1 below):
+
+(1) A desirable postcondition predicate $Q$ for outputs is chosen. Next, the map $\varphi^{op} : \mathcal{Q} \to \mathcal{P}$ is applied to pull $Q$ back to a precondition predicate $\varphi^{op}(Q)$ for inputs. Then for each input $x$ satisfying this precondition predicate, it is mandated that the corresponding output $f(x)$ satisfies the originally chosen postcondition predicate $Q$. This approach to improved program quality implements Dijkstra's Out and Pred conditions above and motivates the “only if” direction of adjointness.
+
+(2) To have the most applicable program possible, Dijkstra also wants each $\varphi^{op}(Q)$ to be the optimal pullback of $Q$, namely that it should be the weakest or largest possible pullback to a precondition predicate. This means that if an input $x$ does not satisfy the pullback $\varphi^{op}(Q)$, then the program output $f(x)$ does not satisfy $Q$. This motivates (the contrapositive of) the “if” direction of adjointness and further implements Dijkstra's philosophy.
+
+It is worthwhile to motivate the term “adjointness” describing Dijkstra’s philosophy for high-quality, optimal programs. It should be recalled [29] that if $f: L \to M, g: L \leftarrow M$ are isotone maps of preordered sets, then $f \dashv g$ whenever
+
+$$ \forall b \in M, \forall a \in L, a \leq g(b) \Leftrightarrow f(a) \leq b. $$
+
+The relationship $f \dashv g$ is termed *adjunction* and the displayed condition is dubbed *adjointness*, and its similarity to the adjointness condition for deterministic programs should be apparent—see the first display above and Definition 2.3.2(2) below.
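A minimal illustration of this order-theoretic adjointness, not drawn from the paper: the classic doubling/halving pair on the integers, $f(a) = 2a$ and $g(b) = \lfloor b/2 \rfloor$, satisfies $f(a) \leq b \Leftrightarrow a \leq g(b)$, i.e. $f \dashv g$.

```python
# Exhaustive check on a finite range that f(a) = 2a and g(b) = floor(b/2)
# form an adjunction between (Z, <=) and (Z, <=):
#   f(a) <= b  <=>  a <= g(b).
def f(a):
    return 2 * a

def g(b):
    return b // 2   # Python's floor division (floors toward -infinity)

adjoint = all((f(a) <= b) == (a <= g(b))
              for a in range(-20, 21) for b in range(-20, 21))
print(adjoint)
```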
+---PAGE_BREAK---
+
+It is also worthwhile to note that the adjointness condition for Dijkstra's programs is the same condition required for Chu transforms as morphisms between Chu spaces (or Chu systems) introduced by M. Barr [2] in 1979.
+
+Finally, we note that the special case we have been tracing above determines uniquely for each input-output function $f$ a compatible $\varphi^{op}$ predicate map so that the program $(f, \varphi)$ satisfies adjointness, as indicated in the following proposition.
+
+**Proposition 2.1.1.** *Assume predicates are subsets and that inputs/out-puts relate to predicates by membership. Then $(f, \varphi)$ satisfies adjointness if and only if*
+
+$$ \varphi^{op} = f^{\leftarrow}|_{\mathcal{Q}}, $$
+
+where $f^{\leftarrow} : \wp(Y) \to \wp(X)$ is the usual preimage operator for the mapping $f: X \to Y$.
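A sketch of Proposition 2.1.1 in the subsets-and-membership setting, with hypothetical toy input/output sets: taking $\varphi^{op}$ to be the preimage operator restricted to $\mathcal{Q}$ makes the program adjoint.

```python
# With predicates as subsets and satisfaction as membership, the program
# (f, phi) with phi^op = preimage restricted to Q satisfies adjointness:
#   x in phi^op(Q)  <=>  f(x) in Q.
# X, Y, f, and Q below are illustrative toy data.
X = {0, 1, 2, 3}
Y = frozenset({"a", "b"})
f = {0: "a", 1: "a", 2: "b", 3: "b"}          # forward map: inputs -> outputs
Q = [frozenset(), frozenset({"a"}), Y]        # postcondition predicates

def preimage(S):
    return frozenset(x for x in X if f[x] in S)

phi_op = {S: preimage(S) for S in Q}

adjoint = all((x in phi_op[S]) == (f[x] in S) for x in X for S in Q)
print(adjoint)
```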
+
+Throughout this paper we adopt—or modify as appropriate—T. S. Blyth’s arrow notation [3] for the image and preimage operators of a function.
+
+## 2.2. Program semantics with open predicates.
+M. Smyth [70] in 1983 advocated viewing predicates as open sets. Continuing with predicates as subsets and satisfaction of predicates as membership, as in the previous subsection, the application of finite observational logic, described by S. J. Vickers [75] and J. T. Denniston, A. Melton, and S. E. Rodabaugh [9], to this special case forces the precondition predicates $\mathcal{P}$ and the postcondition predicates $\mathcal{Q}$ to respectively form topologies on the input set $X$ and the output set $Y$. Thus, finite observational logic gives us what we heuristically call “topological” systems $(X, \mathcal{P}, \in) , (Y, \mathcal{Q}, \in)$, where $(X, \mathcal{P}) , (Y, \mathcal{Q})$ are topological spaces. We point out that “topological system” will be formally defined later. The relationship of programs to continuous maps is given by the following proposition:
+
+**Proposition 2.2.1.** $(f, \varphi) : (X, \mathcal{P}, \in) \to (Y, \mathcal{Q}, \in)$ is a deterministic program if and only if $f : (X, \mathcal{P}) \to (Y, \mathcal{Q})$ is continuous.
+
+Up to this point, “topological” systems are simply a repackaging of topological spaces—literally a rewriting of spaces with the membership relation; and in that setting, Proposition 2.2.1 points out that deterministic programs correspondingly become repackaged continuous maps. We now construct some example classes which motivate consideration of “topological” systems which cannot be repackaged topological spaces. Such example classes justify the formal and general definition of topological systems and the category **TopSys** given in [75].
+---PAGE_BREAK---
+
+**Example 2.2.2** (restriction examples). Let $(Z, \mathfrak{T}, \in)$ be a “topological”
+system as considered above, but with $\mathfrak{T}$ an infinite topology on $Z$—many
+such systems exist. Now let $X \subset Z$ be any finite subset of $Z$, and let
+“satisfaction” relation $\models \subset X \times \mathfrak{T}$ be the restriction $\in_{|X \times \mathfrak{T}}$ of the mem-
+bership relation $\in \subset Z \times \mathfrak{T}$. From the standpoint of programming, the
+system $(X, \mathfrak{T}, \models)$ makes good sense as either an input or output system:
+this is true, in part, because there will in fact be a finite set of inputs
+or outputs with a potentially unlimited family of predicates; this is con-
+sistent with the “finite-unlimited” paradox existing in computer science.
+Since the predicates of $(X, \mathfrak{T}, \models)$ are open sets (from the space $(Z, \mathfrak{T})$), it
+is reasonable to speak of $(X, \mathfrak{T}, \models)$ as a “topological” system in the sense
+of [70]; however, $(X, \mathfrak{T}, \models)$ cannot be a repackaged topological space—a
+finite set cannot have an infinite topology. Further, it is important to
+note for such systems that the following “interchange” laws hold:
+
+$$ x \models \bigcup_{\gamma \in \Gamma} U_{\gamma} \Leftrightarrow \exists \beta \in \Gamma,\, x \models U_{\beta}; \qquad x \models \bigcap_{\gamma \in \Gamma} U_{\gamma} \Leftrightarrow \forall \gamma \in \Gamma,\, x \models U_{\gamma} \quad (\Gamma \text{ finite}) $$
+
+Borrowing from the formalism to come later in this paper, $(X, \mathfrak{T}, \models)$ is a “non-spatial” topological system, which is rather remarkable since its family $\mathfrak{T}$ of predicates is a spatial locale. Finally, a deterministic program $(f, \varphi)$ between such restricted “topological” systems cannot be the repackaging of a continuous map between topological spaces; and these programs are more general than those covered by Proposition 2.1.1—the backwards map $\varphi^{op}$ cannot be a restriction to the codomain topology of the preimage operator $f^\leftarrow$ of the forward map $f$ of the deterministic program between restricted systems. Thus the simple notion of restricted systems generates a huge example class of systems we want to regard as “topological”systems, but each of which is not the rewriting of a topological space as a system.
+
+A second, huge class of systems which should be regarded as “topo-
+logical” systems, but each of which is not the rewriting of a topological
+space as a system, can be constructed, analogously to the above class,
+by beginning with a “topological” system $(Z, \mathfrak{T}, \in)$ as above, but with $\mathfrak{T}$
+having cardinality greater than the continuum, and then letting $X$ be
+any countable subset of $Z$. Both of these example classes are included by
+assuming
+
+$$ |\wp(X)| < |\mathfrak{T}|. $$
+
+**Example 2.2.3 (prefix ordering examples).** We give a subclass of the
+example class of Example 2.2.2 above which is directly related to pro-
+gramming. Let $\mathbf{2}^*$ be the set of all finite (including the empty) strings with values
+from $\{0, 1\}$.
+
+(1) Put $s \sqsubseteq t$ in $\mathbf{2}^*$ if $\exists r \in \mathbf{2}^*$, $s :: r = t$, where $s :: r$ is the concatenation $s$ followed by $r$. Then $(\mathbf{2}^*, \sqsubseteq)$ is a poset.
+
+(2) For $s \in \mathbf{2}^*$ and $U \subset \mathbf{2}^*$, put
+
+$$
+\mathbf{starts}(s) = \uparrow(s) = \{t \in \mathbf{2}^* : s \sqsubseteq t\},
+$$
+
+$$
+\mathbf{starts}(U) = \bigcup \{\mathbf{starts}(s) : s \in U\}.
+$$
+
+Then
+
+$$
+\mathcal{A}(\mathbf{2}^*) := \{U \subset \mathbf{2}^* : U = \mathbf{starts}(U)\}
+$$
+
+is an Alexandrov topology on $\mathbf{2}^*$ with basis $\{\mathbf{starts}(s) : s \in \mathbf{2}^*\}$.
+
+(3) We now have a “topological” system $(\mathbf{2}^*, \mathcal{A}(\mathbf{2}^*), \in)$ in the sense
+used just above 2.2.1.
+
+(4) For $s \in \mathbf{2}^*$, $\neg s$ is defined to be the bitstring in $\mathbf{2}^*$ formed by interchanging all 0's and 1's; and for $U \subset \mathbf{2}^*$,
+
+$$
+\neg U := \{\neg s \in \mathbf{2}^* : s \in U\}.
+$$
+
+Now consider
+
+$$
+(f, \varphi) : (\mathbf{2}^*, \mathcal{A}(\mathbf{2}^*), \in) \to (\mathbf{2}^*, \mathcal{A}(\mathbf{2}^*), \in)
+$$
+
+given by
+
+$$
+f(s) = \neg s, \quad \varphi^{op}(U) = \neg U.
+$$
+
+Then $(f, \varphi) : (\mathbf{2}^*, \mathcal{A}(\mathbf{2}^*), \in) \to (\mathbf{2}^*, \mathcal{A}(\mathbf{2}^*), \in)$ is a deterministic program as defined in Subsection 2.1 above and can also be called a *complementation* program or morphism.
+
+(5) Continuing with the complementation morphism $(f, \varphi)$ from (4) above, let $X$ be a finite subset of $\mathbf{2}^*$, let $Y$ be the corresponding finite subset $f^\rightarrow(X)$ of $\mathbf{2}^*$, and put
+
+$$
+\vdash_1 \; = \; \in_{|X \times \mathcal{A}(\mathbf{2}^*)}, \quad \vdash_2 \; = \; \in_{|Y \times \mathcal{A}(\mathbf{2}^*)}.
+$$
+
+Then each of $(X, \mathcal{A}(\mathbf{2}^*), \vdash_1)$ and $(Y, \mathcal{A}(\mathbf{2}^*), \vdash_2)$ is a (restricted)
+"topological" system as in 2.2.2 above which cannot be a repack-
+aged topological space, and
+
+$$
+(f|_X, \varphi) : (X, \mathcal{A}(\mathbf{2}^*), \vdash_1) \to (Y, \mathcal{A}(\mathbf{2}^*), \vdash_2)
+$$
+
+is a deterministic program which is not the repackaging of a
+continuous map between topological spaces. This "morphism"
+$(f|_X, \varphi)$ can also be called a *complementation* program or mor-
+phism.
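Since the pieces of this example are finitary, the adjointness condition from Subsection 2.1 can be checked mechanically. The following Python sketch (a hypothetical encoding, with $\mathbf{2}^*$ truncated to strings of length at most 3) implements $\mathbf{starts}$, the complementation program $(f, \varphi)$, and verifies adjointness on basic opens:

```python
from itertools import product

# A finite fragment of 2*: all bitstrings of length <= 3 (a hypothetical
# truncation for illustration; 2* itself is infinite).
def strings(n):
    return [''.join(bits) for k in range(n + 1) for bits in product('01', repeat=k)]

TWO_STAR = strings(3)

def starts(s):
    # up-set of s under the prefix order: all strings having s as a prefix
    return {t for t in TWO_STAR if t.startswith(s)}

def neg(s):
    # interchange all 0's and 1's
    return s.translate(str.maketrans('01', '10'))

# forward map f and backward predicate transformer phi_op of the
# complementation program
f = neg
def phi_op(U):
    return {neg(s) for s in U}

# adjointness: x in phi_op(U)  iff  f(x) in U, for basic opens U = starts(s)
for x in TWO_STAR:
    for s in TWO_STAR:
        U = starts(s)
        assert (x in phi_op(U)) == (f(x) in U)
print("adjointness holds on the length-<=3 fragment")
```

The check succeeds because $\varphi^{op}(\mathbf{starts}(s)) = \mathbf{starts}(\neg s)$: $t$ has prefix $s$ exactly when $\neg t$ has prefix $\neg s$.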
+
+2.3. **Topological systems and TopSys.** A few preliminary notions are needed before topological systems are formally defined.
+
+We first recall that a complete lattice is a poset closed under arbitrary $\vee$ and $\wedge$, including those of the empty set, which means that each complete lattice has a universal lower bound $\perp$ and a universal upper bound $\top$. A frame or locale $A$ is a complete lattice satisfying the first infinite distributive law: $\forall a \in A, \forall \{b_\gamma\}_{\gamma \in \Gamma} \subset A$,
+
+$$a \wedge \left( \bigvee_{\gamma \in \Gamma} b_\gamma \right) = \bigvee_{\gamma \in \Gamma} (a \wedge b_\gamma).$$
+
+Frame morphisms are mappings between frames which preserve arbitrary $\vee$ and finite $\wedge$; and localic morphisms are morphisms between locales which are in a bijection with, and in the opposite direction of, corresponding frame morphisms between the same locales. This information about frames and locales and their associated morphisms is packaged, respectively, as the categories **Frm** and **Loc** $\equiv \mathbf{Frm}^{op}$. We point out that frames and locales are appropriate for computer science because of the role of finite observational logic [75, 9] referred to in Subsection 2.2.
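For intuition, the motivating example of a frame is a powerset ordered by inclusion, with union as $\vee$ and intersection as $\wedge$; the first infinite distributive law can then be checked directly. A minimal Python sketch (the set `X` and the random sampling are illustrative assumptions):

```python
from itertools import combinations
import random

# The powerset of a finite set, ordered by inclusion, is a frame:
# join = union, meet = intersection.
X = {0, 1, 2}

def powerset(s):
    s = list(s)
    return [frozenset(c) for r in range(len(s) + 1) for c in combinations(s, r)]

A = powerset(X)

# first infinite distributive law: a /\ (\/ b_g)  ==  \/ (a /\ b_g)
random.seed(0)
for _ in range(100):
    a = random.choice(A)
    bs = random.sample(A, k=3)
    lhs = a & frozenset().union(*bs)
    rhs = frozenset().union(*(a & b for b in bs))
    assert lhs == rhs
print("frame law verified on the powerset of", X)
```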
+
+**Definition 2.3.1** (ground category **Set** × **Loc**). The category **Set** × **Loc** is a product category and comprises the following data:
+
+(1) *Objects*: $(X, A)$, where $X$ is a set, $A$ is a locale.
+
+(2) *Morphisms*: $(f, \varphi) : (X, A) \to (Y, B)$, where $f: X \to Y$ is in **Set** and $\varphi: A \to B$ is in **Loc**, i.e., $\varphi^{op}: B \to A$ is in **Frm**.
+
+(3) *Composition, identities*: component-wise from **Set** and **Loc**.
+
+It can be shown that **Set** × **Loc** is both complete and cocomplete.
+
+**Definition 2.3.2** (category of topological systems). The category **Top-Sys** of topological systems and continuous mappings has ground category **Set** × **Loc** and comprises data subject to axioms as follows:
+
+(1) *Objects*: $(X, A, \models)$, where $(X, A) \in |\mathbf{Set} \times \mathbf{Loc}|$ and $\models \subset X \times A$ is a satisfaction relation possessing the arbitrary $\vee$ and finite $\wedge$ interchange laws:
+
+$$
+\begin{align*}
+& \forall x \in X, \forall \{a_\gamma\}_{\gamma \in \Gamma} \subset A, x \models \bigvee_{\gamma \in \Gamma} a_\gamma &\Leftrightarrow& \exists \beta \in \Gamma, x \models a_\beta; \\
+& \forall x \in X, \forall \{a_\gamma\}_{\gamma \in \Gamma} \subset A, x \models \bigwedge_{\gamma \in \Gamma} a_\gamma &\Leftrightarrow& \forall \gamma \in \Gamma, x \models a_\gamma (\Gamma \text{ finite})
+\end{align*}
+$$
+
+The set $X$ in some examples could be interpreted as bitstrings; $A$ may be interpreted as a locale of (open) predicates; and if $x \models a$, then it may be said that (bitstring) $x$ satisfies (predicate) $a$.
+
+(2) *Morphisms*: $(f, \varphi) : (X, A, \vDash_1) \to (Y, B, \vDash_2)$, where $(f, \varphi) : (X, A) \to (Y, B)$ in **Set** $\times$ **Loc** and $(f, \varphi)$ satisfies *adjointness*:
+
+$$ \forall b \in B, \forall x \in X, x \vDash_1 \varphi^{\text{op}}(b) \Leftrightarrow f(x) \vDash_2 b. $$
+
+(3) *Composition, identities:* from **Set** $\times$ **Loc**.
+
+The reader can check that **TopSys** is indeed a category. The categorical isomorphisms of **TopSys** are called homeomorphisms. A ground morphism $(f, \varphi)$ is a homeomorphism if and only if $f$ and $\varphi^{op}$ are bijections, $(f, \varphi)$ is continuous, and $(f^{-1}, ((\varphi^{op})^{-1})^{op})$ is continuous.
+
+As discussed in the previous subsection, the objects of **TopSys** are more general than topological spaces rewritten as “topological” systems and include all the examples in Example 2.2.2 and Example 2.2.3; and topological systems as defined in Definition 2.3.2 include much more than all the systems considered in Subsection 2.2, as will be seen in the next section. Similar comments may be made for morphisms: those defined in 2.3.2 include all those considered in Subsection 2.2 and much more besides.
+
+Given topological systems $(X, A, \vDash_1)$, $(Y, B, \vDash_2)$ respectively interpreted as input and output systems, a **TopSys** morphism $(f, \varphi) : (X, A, \vDash_1) \to (Y, B, \vDash_2)$ is then a deterministic program as discussed in Subsection 2.1. Thus, **TopSys** may be viewed as the category of all systems having open predicates in a generalized way (from locales and not just topologies) and satisfaction relations generalizing (restricted) membership relations, together with all deterministic programs between them.
+
+This completes our trajectory from Dijkstra's programming principles [14] through Smyth's topological point of view [70] to topological systems in the sense of [75], aided by the example classes of [75, 9] and Examples 2.2.2–2.2.3 above.
+
+### 3. CATEGORICAL BEHAVIOR OF TOPSYS
+
+3.1. Basic categorical properties of TopSys. It is helpful to specify the forgetful functor $T: \textbf{TopSys} \to \textbf{Set} \times \textbf{Loc}$ given by
+
+$$ T(X, A, \vDash) = (X, A), \quad T(f, \varphi) = (f, \varphi). $$
+
+In many cases, the categorical behavior of **TopSys** reduces to the behavior of $T$.
+
+**Proposition 3.1.1** [7, 71]. **TopSys** is quasi-algebraic over **Set** × **Loc** w.r.t. **T**, i.e., **T** reflects isomorphisms: thus, if $(f, \varphi)$ is a **TopSys** morphism, then $(f, \varphi)$ is a **Set** × **Loc** isomorphism if and only if it is a **TopSys** homeomorphism.
+
+The term “quasi-algebraic” stems from [12]. The proof of 3.1.1 [7, 71] hinges on the fact that, given a ground isomorphism $(f, \varphi)$ with $(f, \varphi)$ continuous, the biconditionality defining adjointness implies that $$ (f^{-1}, ((\varphi^{op})^{-1})^{op}) $$ is also continuous, thus simplifying the definition of homeomorphism given in Subsection 2.3 above. And there is more along this line.
+
+**Lemma 3.1.2** [71]. *T* is transportable and (generating, mono-source)-factorizable in the sense of [1].
+
+**Theorem 3.1.3** [71]. *TopSys* is essentially algebraic over *Set* × *Loc* w.r.t. *T*.
+
+**Corollary 3.1.4.** *TopSys* is complete and cocomplete.
+
+When we investigate the topological behavior of **TopSys**, a very different story emerges.
+
+**Lemma 3.1.5** [7]. *T*-structured sources [sinks] need not have unique initial [final] lifts; not even singleton *T*-structured sources need have lifts.
+
+This lemma means that topological systems lack the initial and final structures typical of classical topological spaces and catalogued in [5] and [30]. In fact, we have the following theorem:
+
+**Theorem 3.1.6** [9]. *TopSys* is not topological over *Set* × *Loc* w.r.t. *T*. Further, *TopSys* is neither mono-, nor epi-, nor (small) existentially, nor (small) essentially topological over *Set* × *Loc* w.r.t. *T*.
+
+The proof of Theorem 3.1.6 relies on Lemma 3.1.5; but we note that the first statement of Theorem 3.1.6 follows independently from Example 23.6(4) of [1]: if **TopSys** were topological over **Set** × **Loc** w.r.t. **T**, then **T** would be an isomorphism, which is manifestly not the case.
+
+The rather surprising bottom line is that topological systems are algebraic and not topological. On the other hand, topological systems are intimately related to topology in various ways which will be seen below.
+
+## 3.2. TopSys as supercategory of Top and Loc.
+The categorical relationships presented below detail how topological systems are related to topological spaces and locales, often in ways which illumine the insights of Dijkstra and Smyth discussed in Section 1 above. We recall that **Top** is the category of topological spaces and continuous maps. Results stated without proof are a blend of [75, 7, 9, 72]; but new results are given proofs.
+
+**Theorem 3.2.1.** *TopSys* is a supercategory up to isomorphism of **Top**
+via the full functorial embedding $E_V : \mathbf{Top} \to \mathbf{TopSys}$ given by
+
+$$
+E_V (X, \mathfrak{T}) = (X, \mathfrak{T}, \in),
+$$
+
+$$
+E_V(f : (X, \mathfrak{T}) \to (Y, \mathfrak{G})) = (f, ((f^\leftarrow)|_{\mathfrak{G}})^{\text{op}}) : (X, \mathfrak{T}, \in) \to (Y, \mathfrak{G}, \in).
+$$
+
+This embedding is essentially Smyth's insight in a systems setting.
+Of course, as documented by the Examples 2.2.2, 2.2.3, there is much
+more in **TopSys** than the image $E_V^\rightarrow (\mathbf{Top})$ as a subcategory of **TopSys**;
+indeed, these examples are not even homeomorphic in **TopSys** to systems
+in $E_V^\rightarrow (\mathbf{Top})$. Thus the subcategory $E_V^\rightarrow (\mathbf{Top})$ of **TopSys** is distinctive
+and, as it turns out, in a manner parallel to the distinctiveness of the
+subcategory **SpatLoc** of **Loc**; and this leads to the next discussion and
+definition used throughout the sequel.
+
+We recall the first Stone comparison map associated with spectra of locales—see [29] and below. Recording that
+
+$$
+\mathbf{2} = \{\perp, \top\} = \{\text{false, true}\}
+$$
+
+is a frame, and given locale A, put
+
+$$
+pt(A) = \mathbf{Frm}(A, \mathbf{2}) = \{ p : A \to \mathbf{2} \mid p \text{ preserves arbitrary } \vee, \text{ finite } \wedge \},
+$$
+
+$$
+\Phi : A \to \wp(pt(A)) \quad \text{by} \quad \Phi(a) = \{p \in pt(A) : p(a) = \top\}.
+$$
+
+Then $\Phi$ is a frame map, $\Phi^{\rightarrow}(A)$ is a topology on $pt(A)$, and $Pt(A) := (pt(A), \Phi^{\rightarrow}(A))$ is a topological space which is the spectrum of $A$. A locale $A$ is spatial if $\Phi$ is injective, in which case $A$ is order isomorphic to the topology $\Phi^{\rightarrow}(A)$. It can be shown from the (co)universality of $\Phi$ that $A$ is spatial if and only if it is order-isomorphic to some topology. This suggests the following definition for topological systems.
+
+**Definition 3.2.1.1 (spatial topological systems).** A topological system is *spatial* if it is homeomorphic (in **TopSys**) to some system in $E_V^\rightarrow$ (**Top**). Equivalently, a topological system $(X, A, \models)$ is spatial if and only if there exists a topological space $(Y, \mathfrak{T})$ such that $(X, A, \models)$ is homeomorphic (in **TopSys**) to $(Y, \mathfrak{T}, \in)$.
+
+**Proposition 3.2.1.2.** Let $(X, A, \models)$ be a topological system.
+
+(1) If $(X, A, \models)$ is spatial, then the locale $A$ is spatial.
+
+(2) The converse to (1) need not hold.
+
+*Proof.* If $(f, \varphi) : (X, A, \models) \to (Y, \mathfrak{T}, \in)$ is a **TopSys** homeomorphism for some topological space $(Y, \mathfrak{T})$, then $\varphi^{\text{op}} : \mathfrak{T} \to A$ is a bijective frame map, and hence a frame isomorphism. Hence $A$ is a spatial locale, and (1) is confirmed. Examples 2.2.2 and 2.2.3 verify (2). □
+
+There is more to the relationship between **Top** and **TopSys** than the embedding $E_V$; indeed, $E_V$ is part of a well-behaved, two-way relationship between topological spaces and topological systems using our next functor *Ext*. The functor *Ext* is built using a variation of the (first) Stone comparison map $\Phi$ recalled above.
+
+**Theorem 3.2.2.** *TopSys* functorially maps to *Top* via *Ext : TopSys → Top* constructed as follows:
+
+(1) Given $(X, A, \vdash) \in |\mathbf{TopSys}|$, put $\operatorname{ext} : A \to \wp(X)$ by
+
+$$
+\operatorname{ext}(a) = \{x \in X : x \models a\}.
+$$
+
+Then $\operatorname{ext}$ is a frame map and $(X, \operatorname{ext}^\rightarrow(A)) \in |\mathbf{Top}|.$
+
+(2) $\operatorname{Ext} : \mathbf{TopSys} \to \mathbf{Top}$, defined by
+
+$$
+\operatorname{Ext}(X, A, \vdash) = (X, \operatorname{ext}^{\rightarrow}(A)), \quad \operatorname{Ext}(f, \varphi) = f,
+$$
+
+is a functor.
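The action of $\operatorname{ext}$, and hence of $\operatorname{Ext}$, on a finite system can be computed directly from the definition. A Python sketch (the system below is a hypothetical example, chosen so that the interchange laws hold):

```python
# A tiny topological system (X, A, |=), with A a four-element locale of
# predicates named by strings and SAT the satisfaction relation.
X = {'x1', 'x2', 'x3'}
A = ['bot', 'a', 'b', 'top']
SAT = {('x1', 'a'), ('x2', 'b'), ('x3', 'b'),
       ('x1', 'top'), ('x2', 'top'), ('x3', 'top')}

def ext(a):
    # ext(a) = {x in X : x |= a}
    return frozenset(x for x in X if (x, a) in SAT)

# Ext(X, A, |=) = (X, ext->(A)): the image of ext is a topology on X
ext_image = {ext(a) for a in A}
print(sorted(map(sorted, ext_image)))  # -> [[], ['x1'], ['x1', 'x2', 'x3'], ['x2', 'x3']]
```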
+
+**Theorem 3.2.3.** *Ext* is both the right adjoint and left inverse of $E_V$, i.e.,
+
+$$
+E_V \dashv \mathrm{Ext}, \quad \mathrm{Ext} \circ E_V = \mathrm{Id}_{\mathbf{Top}}.
+$$
+
+The sense in which $E_V \circ \mathrm{Ext}$ is essentially the identity (i.e., up to homeomorphism) characterizes spatiality of systems, as seen in the next result.
+
+**Theorem 3.2.3.1.** *A topological system $(X, A, \vdash)$ is spatial if and only if it is homeomorphic (in **TopSys**) to $E_V(\mathrm{Ext}((X, A, \vdash)))$ via a homeomorphism of the form $(\mathrm{id}_X, \varphi)$.*
+
+*Proof.* Sufficiency follows by the definition of spatial systems. For necessity, suppose $(g, \psi) : (X, A, \vdash) \to (Y, \mathcal{T}, \in)$ is a **TopSys** homeomorphism for some topological space $(Y, \mathcal{T})$. We are to find $\varphi^{op} : \operatorname{ext}^{\rightarrow}(A) \to A$ so that $(id_X, \varphi) : (X, A, \vdash) \to (X, \operatorname{ext}^{\rightarrow}(A), \in)$ is a homeomorphism. We already have that $\operatorname{ext}|^{\operatorname{ext}^{\rightarrow}(A)} : A \to \operatorname{ext}^{\rightarrow}(A)$ is surjective, so we now show that $\operatorname{ext}|^{\operatorname{ext}^{\rightarrow}(A)}$ is injective. Let $a \neq b$ in $A$. Then by the bijectivity of $\psi^{op}$, it follows that $U \neq V$ in $\mathcal{T}$, where
+
+$$
+U := (\psi^{op})^{-1}(a), \quad V := (\psi^{op})^{-1}(b).
+$$
+
+Now let $x \in X$. Then the adjointness of $(g, \psi)$ implies
+
+$$
+\begin{align*}
+x \models a &\Leftrightarrow x \models \psi^{op}(U) \Leftrightarrow g(x) \in U, \\
+x \models b &\Leftrightarrow x \models \psi^{op}(V) \Leftrightarrow g(x) \in V.
+\end{align*}
+$$
+
+It follows from the bijectivity of $g$ that
+
+$$
+\begin{align*}
+ext(a) = ext(b) & \Leftrightarrow [\forall x \in X, x \models a \Leftrightarrow x \models b] \\
+& \Leftrightarrow [\forall x \in X, g(x) \in U \Leftrightarrow g(x) \in V] \\
+& \Leftrightarrow U = V,
+\end{align*}
+$$
+
+Since $U \neq V$, it follows that $\operatorname{ext}(a) \neq \operatorname{ext}(b)$. Hence $\operatorname{ext}|^{\operatorname{ext}^{\rightarrow}(A)}$ is injective, and hence an order-isomorphism, whose inverse we now denote $\varphi^{op} : \operatorname{ext}^{\rightarrow}(A) \to A$. The proof is finished by checking the adjointness of $(id_X, \varphi) : (X, A, \models) \to (X, \operatorname{ext}^{\rightarrow}(A), \in)$; and this is trivial since it amounts to saying that for $x \in X$ and $\operatorname{ext}(a) \in \operatorname{ext}^{\rightarrow}(A)$, $x \models a$ iff $x \in \operatorname{ext}(a)$. □
+
+The difference **TopSys** – $E_V^\rightarrow$ (**Top**) is much larger than documented in Subsection 2.2, and this claim rests in part on the two-way relationship now outlined between locales and topological systems. We begin by seeing how **TopSys** is a supercategory up to isomorphism of **Loc**.
+
+For a locale *A*, recall the carrier set *pt*(*A*) of its spectrum *Pt*(*A*) discussed above, and put
+
+$$
+\vdash_A \subset pt(A) \times A \text{ by } p \vdash_A a \Leftrightarrow p(a) = \top.
+$$
+
+It can be shown that $\vdash_A$ satisfies the arbitrary join and finite meet interchange laws, yielding a topological system $(pt(A), A, \vdash_A)$ similar to the spectrum $Pt(A)$ of $A$, a similarity which we later examine in more detail.
+
+**Theorem 3.2.4.** **TopSys** is a supercategory up to isomorphism of **Loc** via the full functorial embedding $E_{\text{Loc}} : \mathbf{Loc} \to \mathbf{TopSys}$ given by
+
+$$
+\begin{gather*}
+E_{\text{Loc}}(A) = (pt(A), A, \vdash_A), \\
+E_{\text{Loc}}(\varphi : A \to B) = ((\quad) \circ \varphi^{\text{op}}, \varphi) : (pt(A), A, \vdash_A) \to (pt(B), B, \vdash_B).
+\end{gather*}
+$$
+
+We now construct counterparts to Examples 2.2.2, 2.2.3 above, namely
+an example class of topological systems which are not in $E_{\text{Loc}}^\rightarrow$ (Loc).
+
+**Example 3.2.4.1 (restriction examples).** Let $A$ be a locale and choose $X \subset pt(A)$ such that $|X| < |pt(A)|$. Take the restriction $\vdash_X$ of the satisfaction relation $\vdash_A$ given by
+
+$$
+\vdash_X = (\vdash_A)|_{X \times A}.
+$$
+
+Then $(X, A, \vdash_X)$ is a topological system which is not homeomorphic to any
+system in $E_{\text{Loc}}^\rightarrow$ (Loc). To see the latter part of this claim, suppose there
+is a locale $B$ such that $(X, A, \vdash_X)$ is homeomorphic to $(pt(B), B, \vdash_B)$.
+Then $X$ is bijective with $pt(B)$ and $A$ is order-isomorphic to $B$; and this
+order-isomorphism implies that $pt(A)$ is bijective with $pt(B)$, making $X$
+bijective with $pt(A)$, a contradiction to $|X| < |pt(A)|$. It remains to see
+that the assumption $X \subset pt(A)$ such that $|X| < |pt(A)|$ is frequently
+satisfied. We indicate two examples, which the reader can easily expand:
+
+(1) Let $A$ be the four-element Boolean algebra $\{\perp, a, b, \top\}$ with $\perp$ the universal lower bound, $\top$ the universal upper bound, and $a, b$ unrelated. Then it can be shown that $pt(A)$ has precisely two members, call them $p, q$, in which case, setting $X = \{p\}$ or $X = \{q\}$, $(X, A, \vdash_X)$ is a topological system as claimed above.
+
+(2) Let $A = [0, 1]$ with the usual ordering. Then it can be shown that $pt(A)$ is bijective with $[0, 1)$ and so is uncountable. In this case, $X$ can be chosen as any finite or countable subset of $pt(A)$, and then $(X, A, \vdash_X)$ is a topological system as claimed above.
+
+Example 3.2.4.1 documents that there is much more in **TopSys** than the image $E_{\text{Loc}}^\rightarrow (\text{Loc})$ as a subcategory of **TopSys**; indeed, these examples are not homeomorphic in **TopSys** to systems in $E_{\text{Loc}}^\rightarrow (\text{Loc})$. Thus the subcategory $E_{\text{Loc}}^\rightarrow (\text{Loc})$ of **TopSys** is distinctive and, as it turns out, in a manner parallel to the distinctiveness of the subcategory **SobTop** of **Top**; and this leads to the next discussion and definition.
+
+For a topological space $(X, \mathfrak{T})$, we consider the second Stone comparison map $\Psi: X \to pt(\mathfrak{T})$ constructed as follows—see [29]:
+
+$$
+\begin{align*}
+x & \mapsto \text{irreducible closed } \overline{\{x\}} \\
+& \mapsto \text{prime open } X - \overline{\{x\}} \\
+& \mapsto \text{prime principal ideal } \downarrow (X - \overline{\{x\}}) \\
+& \mapsto \text{frame map } \chi_{\mathfrak{T} - \downarrow(X - \overline{\{x\}})} : \mathfrak{T} \to \mathbf{2}.
+\end{align*} $$
+
+Then $(X, \mathfrak{T})$ is sober if $\Psi: X \to pt(\mathfrak{T})$ is a bijection—injectivity is equivalent to $(X, \mathfrak{T})$ being $T_0$, and sobriety is unrelated to $T_1$ and implied by Hausdorff separation. Considered as a map between spaces, $\Psi: (X, \mathfrak{T}) \to Pt(\mathfrak{T})$—the latter space being the spectrum of the topology of the first space—is continuous and relatively open; hence $(X, \mathfrak{T})$ is $T_0$ if and only if $\Psi$ is a homeomorphic embedding, and sober if and only if $\Psi$ is a homeomorphism. Finally, it can be shown, using the universality of $\Psi$, that a space $(X, \mathfrak{T})$ is sober if and only if it is homeomorphic to the spectrum of some locale. This suggests the following definition [75] for topological systems.
+
+**Definition 3.2.4.2** (sober topological systems). A topological system is localic or sober if it is homeomorphic (in **TopSys**) to some system in $E_{\text{Loc}}^\rightarrow (\text{Loc})$. Equivalently, a topological system $(X, A, \vdash)$ is sober if and only if there exists a locale $B$ such that $(X, A, \vdash)$ is homeomorphic (in **TopSys**) to $(pt(B), B, \vdash_B)$.
+
+**Proposition 3.2.4.3.** A topological system $(X, A, \vdash)$ is sober if and only if $(X, A, \vdash)$ is homeomorphic (in **TopSys**) to $(pt(A), A, \vdash_A)$ via some $(f, \varphi)$ with $\varphi^{op} = id_A$.
+
+*Proof.* Sufficiency is immediate, and necessity is implicit in the first part of Example 3.2.4.1. To make necessity explicit, suppose there is a locale $B$ such that $(X, A, \vdash)$ is homeomorphic to $(pt(B), B, \vdash_B)$ via $(g, \psi)$. Then $g : X \to pt(B)$ is a bijection and $\psi^{op} : B \to A$ is an order isomorphism. Put $f : X \to pt(A)$ by
+
+$$
+f(x) = g(x) \circ (\psi^{op})^{-1} : A \to \mathbf{2}.
+$$
+
+The claim is that $(f, id_A^{op}) : (X, A, \vdash) \to (pt(A), A, \vdash_A)$ is the needed **TopSys** homeomorphism. The bijectivity of $f$ follows from that of $g$ and $\psi^{op}$. To check the adjointness of $(f, id_A^{op})$, let $x \in X$, $a \in A$. Then $\exists! b \in B$, $\psi^{op}(b) = a$. It follows that
+
+$$
+\begin{align*}
+x \models a &\Leftrightarrow x \models \psi^{op}(b) \\
+&\Leftrightarrow g(x) \models_B b \\
+&\Leftrightarrow g(x)(b) = \top \\
+&\Leftrightarrow g(x) ((\psi^{op})^{-1}(a)) = \top \\
+&\Leftrightarrow f(x)(a) = \top \\
+&\Leftrightarrow f(x) \models_A a.
+\end{align*}
+$$
+
+□
+
+We are now in a position to give further results and characterizations of spatial and sober systems.
+
+**Lemma 3.2.4.4.** Let $(X, A, \vdash)$ be a topological system.
+
+(1) If $(X, A, \vdash)$ is sober, then the space $\mathrm{Ext}(X, A, \vdash)$ is sober.
+
+(2) The converse to (1) need not hold.
+
+*Proof.* Ad(1). Applying Proposition 3.2.4.3, it may be assumed that $(f, id_A^{op}): (X, A, \vdash) \rightarrow (pt(A), A, \vdash_A)$ is a **TopSys** homeomorphism; in particular, $f$ is a bijection. We are to show $(X, \operatorname{ext}^{\rightarrow}(A))$ is sober, i.e., that $\Psi: X \rightarrow pt(\operatorname{ext}^{\rightarrow}(A))$ is bijective. It can be shown that the action of $\Psi: X \rightarrow pt(\operatorname{ext}^{\rightarrow}(A))$ may be summarized as follows:
+
+$$
+\Psi(x)(ext(a)) = \chi_{ext(a)}(x) = \top \Leftrightarrow x \models a.
+$$
+
+Now applying the adjointness of $(f, id_A^{op})$, we have, for $x \in X$, $a \in A$,
+that
+
+$$
+x \models a \Leftrightarrow f(x) \models_A a \Leftrightarrow f(x)(a) = \top.
+$$
+
+Hence, $\forall x \in X, a \in A,$
+
+$$
+\Psi(x)(ext(a)) = f(x)(a).
+$$
+
+Now if $x \neq y$, then the injectivity of $f$ implies that $\exists a \in A, f(x)(a) \neq f(y)(a)$, so that $\Psi(x)(ext(a)) \neq \Psi(y)(ext(a))$, so $\Psi$ is injective.
+To show $\Psi$ is onto, let $p \in pt(\operatorname{ext}^{\rightarrow}(A))$. Then $p \circ \operatorname{ext} \in pt(A)$.
+
+Invoking the surjectivity of $f$, there is $x \in X$ with $f(x) = p \circ \operatorname{ext}$. Now let
+$\operatorname{ext}(a) \in \operatorname{ext}^{\rightarrow}(A)$. Then
+
+$$p(\mathrm{ext}(a)) = \top \Leftrightarrow f(x)(a) = \top \Leftrightarrow \Psi(x)(\mathrm{ext}(a)) = \top,$$
+
+and so $\Psi(x) = p$.
+
+Ad(2). The example of Example 3.2.4.1(1) suffices to confirm (2). In that example, $pt(A) = \{p, q\}$, and, setting $X = \{p\}$, we have $(X, A, \vdash_X)$ is a topological system using the restricted satisfaction $\vdash_X$. But, as shown in 3.2.4.1, this system is not sober. Now the action of $p$ must be either
+
+$$p(a) = p(\top) = \top, \quad p(b) = p(\bot) = \bot$$
+
+or
+
+$$p(b) = p(\top) = \top, \quad p(a) = p(\bot) = \bot.$$
+
+W.L.O.G. assume the former. Then the restricted satisfaction $\vdash_X$ is as
+follows: $p$ satisfies precisely $a$ and $\top$. This implies that
+
+$$ext(a) = ext(\top) = \{p\}, \quad ext(b) = ext(\bot) = \emptyset.$$
+
+Hence
+
+$$\operatorname{ext}^{\rightarrow}(A) = \{\emptyset, \{p\}\},$$
+
+so that
+
+$$\mathrm{Ext}(X, A, \vdash_X) = (X, \{\emptyset, \{p\}\}),$$
+
+which is a sober topological space.
+□
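The Ad(2) computation can be replayed mechanically. A Python sketch (with the elements of the four-element Boolean algebra encoded by name, a hypothetical encoding):

```python
# The four-element Boolean algebra of Example 3.2.4.1(1) and the restricted
# system of Ad(2): X = {p}, with p satisfying exactly a and top.
A = ['bot', 'a', 'b', 'top']
X = {'p'}
SAT = {('p', 'a'), ('p', 'top')}

def ext(u):
    # ext(u) = {x in X : x |= u}
    return frozenset(x for x in X if (x, u) in SAT)

ext_image = {ext(u) for u in A}
assert ext_image == {frozenset(), frozenset({'p'})}
print("Ext(X, A, |=_X) is the one-point space, which is sober")
```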
+
+**Theorem 3.2.4.5.** *The following hold:*
+
+(1) A locale $A$ is spatial if and only if the system $E_{\text{Loc}}(A)$ is spatial.
+
+(2) A space $(X, \mathfrak{T})$ is sober if and only if the system $E_V(X, \mathfrak{T})$ is sober.
+
+*Proof.* Ad(1). Sufficiency follows from Proposition 3.2.1.2 above. As for necessity, assume that $A$ is spatial, which means that $\Phi|^{\Phi^{\rightarrow}(A)} : A \to \Phi^{\rightarrow}(A)$ is an order-isomorphism. To finish the proof that
+
+$$
+\left(id_{pt(A)}, \left(\left(\Phi|^{\Phi^{\rightarrow}(A)}\right)^{-1}\right)^{op}\right) : (pt(A), A, \vdash_A) \to (pt(A), \Phi^{\rightarrow}(A), \in)
+$$
+
+is a **TopSys** morphism, we need only check adjointness. Given $p \in pt(A)$
+and $\Phi(a) \in \Phi^\to(A)$, it is trivially the case that
+
+$$p \models_A a \Leftrightarrow p(a) = \top \Leftrightarrow p \in \Phi(a).$$
+
+Ad(2). Assume a space $(X, \mathfrak{T})$ is sober, i.e., $\Psi : X \to pt(\mathfrak{T})$ is a bijection. For $(\Psi, id_{\mathfrak{T}}^{op}) : (X, \mathfrak{T}, \in) \to (pt(\mathfrak{T}), \mathfrak{T}, \vdash_{\mathfrak{T}})$ to be a **TopSys** homeomorphism, adjointness should be checked: given $x \in X$, $U \in \mathfrak{T}$,
+
+$x \in U \Leftrightarrow \Psi(x)(U) = \top \Leftrightarrow \Psi(x) \vdash_{\mathfrak{T}} U,$
+
+finishing necessity. Sufficiency follows from Lemma 3.2.4.4(1) by applying
+the identity $\operatorname{Ext} \circ E_V = \operatorname{Id}_{\operatorname{Top}}$ from Theorem 3.2.3.
+□
+
+$E_{\text{Loc}}$ is part of a well-behaved, two-way relationship between locales and topological systems, as seen in Theorem 3.2.6, using the functor $\Omega_V$ introduced in Theorem 3.2.5.
+
+**Theorem 3.2.5.** TopSys functorially maps to Loc via $\Omega_V : \text{TopSys} \to \text{Loc}$ by
+
+$$ \Omega_V (X, A, \vdash) = A, \quad \Omega_V (f, \varphi) = \varphi. $$
+
+**Theorem 3.2.6.** $\Omega_V$ is both a left adjoint and left inverse of $E_{\text{Loc}}$; i.e.,
+
+$$ \Omega_V \dashv E_{\text{Loc}}, \quad \Omega_V \circ E_{\text{Loc}} = \text{Id}_{\text{Loc}}. $$
+
+We have presented **TopSys** as a supercategory (up to categorical isomorphism) of both **Top** and **Loc**. But **TopSys** is much more than a supercategory of these categories. In the following discussion, we give a rather complete description of the internalization within **TopSys** of fundamental representation theorems.
+
+**Discussion 3.2.7** (internalization of sobriety-spatiality representation). To internalize representation *within TopSys*, we need two functors from Stone representation theory:
+
+$$ \Omega : \mathbf{Top} \to \mathbf{Loc} \quad \text{by} \quad \Omega(X, \mathcal{T}) = \mathcal{T}, $$
+
+$$ \Omega[f : (X, \mathcal{T}) \to (Y, \mathcal{S})] = [((f^\leftarrow)|_{\mathcal{S}})^{op} : \mathcal{T} \to \mathcal{S}], $$
+
+$$ Pt : \mathbf{Loc} \to \mathbf{Top} \quad \text{by} \quad
+\begin{aligned}
+Pt(A) &= (pt(A), \Phi^{\rightarrow}(A)), \\
+Pt[\varphi : A \to B] &= [(\quad) \circ \varphi^{op} : Pt(A) \to Pt(B)].
+\end{aligned}
+$$
+
+The most important functorial relationship in representations à la Stone is the Dowker-Papert-Isbell adjunction $\Omega \dashv Pt$ [46, 15, 28, 29], which when restricted on both sides to the category **SobTop** of sober spaces and continuous maps and the category **SpatLoc** of spatial locales and localic morphisms, respectively, yields the foundational representation theorem
+
+**SobTop** ~ **SpatLoc**,
+
+namely, that **SobTop** and **SpatLoc** are categorically equivalent and hence behave similarly in categorical terms (but they are not isomorphic). When this categorical equivalence is suitably restricted, it eventually yields the Stone representation theorems for distributive lattices and Boolean algebras [29]. The adjunction $\Omega \dashv Pt$ is also important because it converts a choice-free and point-free Čech-Stone compactification into the point-set version (using AC) [29], as well as a choice-free and point-free Hahn-Banach Theorem into its point-set analogue (again using AC) [45]. Now $\Omega \dashv Pt$ and **SobTop** ~ **SpatLoc** are relationships between categories, indeed between categories and their subcategories which can
+be embedded into **TopSys**. What is nice about **TopSys** is that these
+relationships are internalized within **TopSys** as well. In particular, using
+the functors of this subsection (cf. [75]), it can be shown that
+
+$$
+[\Omega \dashv Pt] = [(\Omega_V \circ E_V) \dashv (\mathrm{Ext} \circ E_{\text{Loc}})]. \quad \square
+$$
+
+4. TOPOLOGICAL SYSTEMS AND FIXED-BASIS
+LATTICE-VALUED TOPOLOGY
+
+Having motivated and positioned topological systems with respect to
+programming semantics, traditional topological spaces, and locales, we
+shift our attention to many-valued topologies and show that topological
+systems are intimately connected to this part of topology as well. This
+section looks at the relationship between topological systems and “fixed-
+basis” lattice-valued topology, while the next section examines the rela-
+tionship between topological systems and “variable-basis” lattice-valued
+topology. The terms “basis” and “base” in these contexts refer to a lattice
+of truth or membership values which occurs in the base of an expression
+of the form $L^X$. And in this and the next sections, lattices of truth values
+are assumed to be frames; generalizations are briefly considered in the
+last section.
+
+4.1. Fixed-basis lattice-valued powerset monad. This subsection comes from [77, 18, 60, 61, 65]. Fix a set $X$ and a frame $L$. Then the $L$-powerset of $X$, comprising all its $L$-subsets, is
+
+$$
+L^X = \{a \mid a : X \to L\},
+$$
+
+equipped with the order lifted pointwise from that of $L$, and hence equipped
+also with all least upper bounds and greatest lower bounds lifted pointwise
+from $L$; and so $L^X$ is a frame. For $a \in L^X$ and $x \in X$, $a(x)$ is interpreted
+as the degree of membership of $x$ in the $L$-subset $a$.
+
+Let $f: X \rightarrow Y$ be a function. Then the $(L-)$image and $(L-)$preimage and $(L-)$lower image operators of $f$ are as follows:
+
+$$
+f_L^\rightarrow : L^X \to L^Y \quad \text{by} \quad f_L^\rightarrow(a)(y) = \bigvee_{f(x)=y} a(x) = \bigvee_{x \in f^\leftarrow\{y\}} a(x),
+$$
+
+$$
+f_L^\leftarrow : L^X \leftarrow L^Y \quad \text{by} \quad f_L^\leftarrow (b) = b \circ f,
+$$
+
+$$
+f_{L\rightarrow} : L^X \rightarrow L^Y \quad \text{by} \quad f_{L\rightarrow}(a) = \bigvee_{f_L^{\leftarrow}(b) \le a} b.
+$$
+
+**Theorem 4.1.1.** $(L^{(\,)}, f, f_L^{\rightarrow} \dashv f_L^{\leftarrow} \dashv f_{L\rightarrow})$ is the $L$-valued powerset monad associated with $f : X \to Y$. More precisely, the following hold:
+
+(1) The adjunctions $f_L^\rightarrow \dashv f_L^\leftarrow \dashv f_{L\rightarrow}$ hold.
+
+(2) $f_L^\rightarrow$ preserves order, arbitrary $\vee$, and universal lower bounds; $f_L^\leftarrow$ preserves order, arbitrary $\vee$, arbitrary $\wedge$, and universal upper and lower bounds; and $f_{L\rightarrow}$ preserves order, arbitrary $\wedge$, and universal upper bounds.
+
+(3) $(L^{(\,)}, f, f_L^\rightarrow \dashv f_L^\leftarrow \dashv f_{L\rightarrow})$ lifts the traditional powerset monad $(\wp(\,), f, f^\rightarrow \dashv f^\leftarrow \dashv f_\rightarrow)$. In particular:
+
+(a) $\forall A \in \wp(X)$,
+
+$$f_L^\rightarrow (\chi_A) = \chi_{f^\rightarrow(A)}; \quad f_L^\rightarrow \circ \chi = \chi \circ f^\rightarrow.$$
+
+(b) $\forall B \in \wp(Y)$,
+
+$$f_L^{\leftarrow}(\chi_B) = \chi_{f^{\leftarrow}(B)}; \quad f_L^{\leftarrow} \circ \chi = \chi \circ f^{\leftarrow}.$$
+
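For a finite chain $L$, the three operators and the Galois conditions of the adjunction chain of Theorem 4.1.1 can be verified by brute-force enumeration. The following Python sketch illustrates this under that finiteness assumption (the sets, the map, and all function names are hypothetical, chosen only for illustration):

```python
from itertools import product

L = [0, 1, 2]                     # three-element chain as a frame: join = max, meet = min
X, Y = ["x0", "x1"], ["y0", "y1"]
f = {"x0": "y0", "x1": "y0"}      # a non-surjective map X -> Y

def leq(a, b):                    # pointwise order on L-subsets (dicts point -> degree)
    return all(a[p] <= b[p] for p in a)

def image(a):                     # f_L^->(a)(y) = sup{ a(x) : f(x) = y }; empty sup is bottom 0
    return {y: max([a[x] for x in X if f[x] == y], default=0) for y in Y}

def preimage(b):                  # f_L^<-(b) = b o f
    return {x: b[f[x]] for x in X}

def lower(a):                     # f_{L->}(a) = sup{ b : f_L^<-(b) <= a }
    cands = [b for v in product(L, repeat=len(Y)) if leq(preimage(b := dict(zip(Y, v))), a)]
    return {y: max(b[y] for b in cands) for y in Y}

# brute-force check of the Galois conditions for f_L^-> -| f_L^<- -| f_{L->}
for va in product(L, repeat=len(X)):
    a = dict(zip(X, va))
    for vb in product(L, repeat=len(Y)):
        b = dict(zip(Y, vb))
        assert leq(image(a), b) == leq(a, preimage(b))
        assert leq(preimage(b), a) == leq(b, lower(a))
print("adjunction chain verified")
```

The check exercises exactly the two Galois conditions: $f_L^\rightarrow(a) \le b \iff a \le f_L^\leftarrow(b)$ and $f_L^\leftarrow(b) \le a \iff b \le f_{L\rightarrow}(a)$.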
+4.2. *L-topological spaces, the categories L-Top, and their behavior.* The main definition below is essentially from [6, 18], while the notation and results are from [25, 62].
+
+**Definition 4.2.1.** The category *L-Top* of (*L*-)topological spaces and (*L*-)continuous mappings has ground category **Set** and comprises data subject to axioms as follows:
+
+(1) *Objects*: $(X, \tau)$, where $X$ is a set and $\tau$ is a subframe of $L^X$, i.e., $\tau \subset L^X$ is closed under arbitrary $\vee$ and finite $\wedge$.
+
+(2) *Morphisms*: $f : (X, \tau) \to (Y, \sigma)$, where $f: X \to Y$ is a function and $(f_L^\leftarrow)^\to (\sigma) \subset \tau$, i.e.,
+
+$$\forall v \in \sigma, f_L^\leftarrow(v) \in \tau.$$
+
+(3) *Composition, Identities:* from **Set**.
+Let the forgetful functor $V_L : L\text{-Top} \to \mathbf{Set}$ be given by
+
+$$V_L(X, \tau) = X, \quad V_L(f) = f.$$
+
+**Theorem 4.2.2.** Each $V_L$-structured source [sink] in **Set** has a unique initial [final] lift to L-Top. So L-Top is topological over **Set** w.r.t. $V_L$.
+
+**Corollary 4.2.3.** For each $L \in |\mathrm{Frm}|$, L-Top is complete and cocomplete.
+
+4.3. Example classes of L-topological spaces.
+
+**Example 4.3.1** (first class of examples—characteristic functors). These are functor-generated examples. For $L$ consistent, $G_\chi : \mathbf{Top} \to L\text{-}\mathbf{Top}$, defined by
+
+$$G_{\chi}(X, \mathfrak{T}) = (X, \{\chi_U : U \in \mathfrak{T}\}), \quad G_{\chi}(f) = f,$$
+
+is a concrete embedding; and it is an isomorphism when $L = 2$, i.e., **Top** $\approx$ **2-Top**. Hence for each consistent $L$, $G_\chi$ generated examples include all traditional topological spaces and continuous mappings as a subcategory.
+
+This is part of a two-way relationship, the other direction being $M_{\chi} : \mathbf{Top} \leftarrow L\text{-}\mathbf{Top}$ [43], given by
+
+$$M_{\chi}(X, \tau) = (X, \{U \subset X : \chi_U \in \tau\}), \quad M_{\chi}(f) = f$$
+
+which is a concrete functor. Further,
+
+$$M_{\chi} \dashv G_{\chi},$$
+
+and this adjunction is a monocoreflection, but not an equivalence. $G_{\chi}$ will be referenced throughout the sequel.
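The two-way relationship between $G_\chi$ and $M_\chi$ can be illustrated on a tiny finite example; the Python sketch below (all names hypothetical) checks that $M_\chi$ recovers a topology from its $G_\chi$ image, in line with the monocoreflection above:

```python
from itertools import chain, combinations

# G_chi sends a topology to its characteristic functions; M_chi keeps exactly the
# subsets whose characteristic function is open. Minimal sketch; names hypothetical.
L = [0, 1, 2]                    # a consistent frame (bottom 0 != top 2), as a chain
X = ["a", "b"]
T = [frozenset(), frozenset({"a"}), frozenset({"a", "b"})]   # a topology on X

def chi(U):                      # characteristic function of U, valued in {bottom, top}
    return tuple(2 if x in U else 0 for x in X)

tau = {chi(U) for U in T}        # the opens of G_chi(X, T)

powerset = chain.from_iterable(combinations(X, r) for r in range(len(X) + 1))
M_chi_tau = [frozenset(U) for U in powerset if chi(U) in tau]

assert set(M_chi_tau) == set(T)  # M_chi o G_chi recovers the original topology
print("M_chi(G_chi(T)) = T")
```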
+
+**Example 4.3.2** (second class of examples—lower semi-continuity). These are functor-generated examples [41, 55, 36]. The correspondence $\omega_L : \mathbf{Top} \to L-\mathbf{Top}$, defined by
+
+$$\omega_L(X, \mathfrak{T}) := (X, \omega_L(\mathfrak{T})) := (X, \langle\langle\{u : X \to L \mid \forall \alpha \in L, [u \not\leq \alpha] \in \mathfrak{T}\}\rangle\rangle),$$
+
+$$\omega_L(f) = f,$$
+
+is a concrete functor for which the *L*-topology of the image is generated by
+the (*L*-valued) subbase written within the double brackets $\langle\langle\rangle\rangle$. It should
+be noted that this subbasis comprises all continuous maps with respect to
+the given topology $\mathfrak{T}$ on $X$ and the upper topology on $L$. Now $\omega_L$ is part
+of a two-way relationship, the other direction being $\iota_L : L-\mathbf{Top} \to \mathbf{Top}$,
+given by
+
+$$\iota_L(X, \tau) = (X, \langle\langle\{[u \not\leq \alpha] : u \in \tau, \alpha \in L\}\rangle\rangle), \quad \iota_L(f) = f,$$
+
+which is also a concrete functor. It is the case that
+
+$$\omega_L \dashv \iota_L,$$
+
+both functors reflect lifted morphisms [9], and $\omega_L$ preserves products (unusual for a left-adjoint). Further, if $L$ is completely distributive, then the following improvements result [36]:
+
+(1) The *L*-subbases of the $\omega_L$ image objects given above are *L*-topologies.
+
+(2) $\omega_L: \mathbf{Top} \to L-\mathbf{Top}$ is a categorical embedding.
+
+(3) $\omega_L \dashv \iota_L$ is also an epicoreflection with $\iota_L$ as left-inverse of $\omega_L$.
+
+(4) The action of $\omega_L$ is “stratification” in the following sense: for each topological space $(X, \mathfrak{T})$, the *L*-topology $\omega_L(\mathfrak{T})$ of $\omega_L(X, \mathfrak{T})$ is given by
+
+$$\omega_L(\mathfrak{T}) = G_\chi(\mathfrak{T}) \vee \{\underline{\alpha} : X \to L \mid \alpha \in L, (\forall x \in X, \underline{\alpha}(x) = \alpha)\},$$
+
+where “$\vee$” indicates the smallest *L*-topology containing both fam-
+ilies of mappings. Restated,
+
+$$\omega_L(\mathfrak{T}) = \langle\langle\{\chi_U : U \in \mathfrak{T}\} \cup \{\underline{\alpha} : X \to L \mid \alpha \in L\}\rangle\rangle.$$
+
+Any *L*-topology $\tau$ on $X$ which contains $\{\underline{\alpha} : X \to L \mid \alpha \in L\}$ is
+said to be *stratified*.
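For a completely distributive $L$ such as a finite chain, item (1) says the double-bracketed subbasis of $\omega_L$ is itself an $L$-topology. On the two-point Sierpiński space the subbasis can simply be enumerated; a Python sketch, with the space and all names hypothetical:

```python
from itertools import product

# omega_L on the Sierpinski space: X = {a, b}, opens {}, {a}, {a, b}; L = 3-chain
L = [0, 1, 2]
X = ["a", "b"]
opens = [frozenset(), frozenset({"a"}), frozenset({"a", "b"})]

def in_subbasis(u):  # u : X -> L with [u not<= alpha] open for every alpha; on a chain, not<= is >
    return all(frozenset(x for x in X if u[x] > alpha) in opens for alpha in L)

subbasis = [u for v in product(L, repeat=len(X)) if in_subbasis(u := dict(zip(X, v)))]

# for this topology, the subbasis is exactly the maps with u(a) >= u(b)
assert all(u["a"] >= u["b"] for u in subbasis)
print(len(subbasis), "of", len(L) ** len(X), "maps lie in the subbasis")
```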
+
+It should be noted that the right hand side of the preceding display defines an *L*-topology for any given *L*, in which case it is denoted $G_\omega(\mathfrak{T})$, defining yet another concrete functor $G_\omega : \mathbf{Top} \to L\text{-}\mathbf{Top}$ which is a categorical embedding and coincides with $\omega_L$ when *L* is completely distributive [36]. It remains an open question, after more than 25 years [55], whether in general $\omega_L = G_\omega$. The functor $G_\omega$ plays its own critical role in Subsection 5.4 below.
+
+**Example 4.3.3** (third class of examples—the *L*-spectrum). These examples include all *L*-spectra of locales and even complete lattices—a generalization of traditional spectra—and these are generated by the *L*-spectrum functor [53, 22, 56, 32, 57, 58, 59, 63, 48, 33, 50]. Put
+
+$$ L\Omega : \mathbf{L-Top} \to \mathbf{Loc} \quad \text{by} \quad L\Omega(X, \tau) = \tau, $$
+
+$$ L\Omega[f : (X, \tau) \to (Y, \sigma)] = [((f_L^{\leftarrow})|_{\sigma})^{\mathrm{op}} : \tau \to \sigma]; $$
+
+$$ Lpt(A) = \mathbf{Frm}(A, L) = \{p : A \to L \mid p \text{ preserves arbitrary } \bigvee, \text{ finite } \wedge\}; $$
+
+$$ \Phi_L : A \to L^{Lpt(A)} \quad \text{by} \quad \Phi_L(a)(p) = p(a); $$
+
+$$ LPt : \mathbf{L-Top} \leftarrow \mathbf{Loc} \quad \text{by} \quad LPt(A) := (Lpt(A), (\Phi_L)^{\to}(A)), $$
+
+$$ LPt[\varphi : A \to B] = [( ) \circ \varphi^{\text{op}} : LPt(A) \to LPt(B)]. $$
+
+Then the following hold:
+
+(1) $L\Omega$ and $LPt$ are functors.
+
+(2) $L\Omega \dashv LPt$, with counits $\left((\Phi_L)|^{(\Phi_L)^{\to}(A)}\right)^{\mathrm{op}} : (\Phi_L)^{\to}(A) \to A$ in **Loc**, and units $\Psi_L : (X, \tau) \to LPt(\tau)$ defined by
+$$ \Psi_L : X \to LPt(\tau) \text{ by } \Psi_L(x) : \tau \to L \text{ by } \Psi_L(x)(u) = u(x). $$
+
+(3) $\Phi_L : A \to L^{Lpt(A)}$ is a frame map—so that $(Lpt(A), (\Phi_L)^{\to}(A))$ is an *L*-topological space—and $(\Phi_L)|^{(\Phi_L)^{\to}(A)} : A \to (\Phi_L)^{\to}(A)$ is injective if and only if it is an order-isomorphism—in which case $A$ is *L*-spatial.
+
+(4) $\Psi_L : (X, \tau) \to LPt(\tau)$ is *L*-continuous and relatively *L*-open, an *L*-homeomorphic embedding if and only if $\Psi_L$ is injective—in which case $(X, \tau)$ is *L*-*$T_0$, and an *L*-homeomorphism if and only if $\Psi_L$ is bijective—in which case $(X, \tau)$ is *L*-sober.
+
+(5) The restriction of $L\Omega\dashv LPt$, respectively, to *L*-sober topological spaces and *L*-spatial locales yields a categorical equivalence between *L-SobTop* and *L-SpatLoc*, i.e.,
+$$ L\text{-SobTop} \sim L\text{-SpatLoc}. $$
+
+(6) There are schemata of “Stone” representation theorems and compactifications based upon (5) indexed by categories for *L*.
+
+(7) The above functors give more relationships between **Top** and L-**Top**: $LPt \circ \Omega : \mathbf{Top} \to \mathbf{L-Top}$; $Pt \circ L\Omega : \mathbf{L-Top} \to \mathbf{Top}$; $LPt \circ \Omega \dashv Pt \circ L\Omega$; and when restricted respectively to $\mathbf{L-SobTop}$ and $\mathbf{SobTop}$, $LPt \circ \Omega \dashv Pt \circ L\Omega$ restricts to a categorical equivalence, in which case the reverse adjunction $Pt \circ L\Omega \dashv LPt \circ \Omega$ also holds.
+
+An alternative approach to the *L*-spectrum is developed in [48, 49, 50].
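For finite frames every join is finite, so $Lpt(A) = \mathbf{Frm}(A, L)$ and the map $\Phi_L$ of Example 4.3.3 can be computed exhaustively. A minimal Python sketch under this finiteness assumption (chains stand in for $A$ and $L$; all names hypothetical):

```python
from itertools import product

# finite frames as chains: join = max, meet = min; A plays the locale, L the value frame
A = [0, 1, 2, 3]
L = [0, 1, 2]

def is_frame_map(p):  # on finite chains: preserve endpoints plus binary joins and meets
    if p[A[0]] != L[0] or p[A[-1]] != L[-1]:
        return False
    return all(p[max(a, b)] == max(p[a], p[b]) and p[min(a, b)] == min(p[a], p[b])
               for a in A for b in A)

Lpt_A = [p for v in product(L, repeat=len(A)) if is_frame_map(p := dict(zip(A, v)))]

def Phi(a):  # Phi_L(a)(p) = p(a), written as a tuple indexed by the points of Lpt(A)
    return tuple(p[a] for p in Lpt_A)

print(len(Lpt_A), "L-points; Phi_L injective:", len({Phi(a) for a in A}) == len(A))
```

Here $\Phi_L$ turns out injective, so this particular $A$ is $L$-spatial in the sense of item (3).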
+
+**Example 4.3.4** (fourth class of examples—fuzzy real lines from probability distributions). We now outline the fuzzy real lines and fuzzy unit intervals from the standpoint of *L*-probability distributions [26, 17, 64]. Fix *L* a DeMorgan frame—a frame equipped with an order-reversing involution $(\,)'$. We begin with a series of notations and definitions, where $\mathbb{R}$ denotes the traditional real line. Letting $\lambda: \mathbb{R} \to L$ be an antitone map and $t \in \mathbb{R}$, we adopt this series of notations:
+
+$$
+\begin{align*}
+\lambda(t-) &= \bigwedge_{s<t} \lambda(s), & \lambda(t+) &= \bigvee_{s>t} \lambda(s), \\
+\lambda((-\infty)+) &= \bigvee_{s \in \mathbb{R}} \lambda(s), & \lambda((+\infty)-) &= \bigwedge_{s \in \mathbb{R}} \lambda(s).
+\end{align*}
+$$
+
+These notations allow us to state the following series of definitions:
+
+$$
+\begin{align*}
+\operatorname{Real}(L) &:= \{\lambda : \mathbb{R} \to L \mid \lambda \text{ antitone}, \lambda((+\infty)-) = \bot, \lambda((- \infty) +) = \top\}, \\
+\lambda \sim \mu &\Leftrightarrow [\forall t \in \mathbb{R}, \lambda(t+) = \mu(t+)] \Leftrightarrow [\forall t \in \mathbb{R}, \lambda(t-) = \mu(t-)], \\
+[\lambda] &:= \{\mu \in \operatorname{Real}(L) : \lambda \sim \mu\}, \\
+\mathbb{R}(L) &:= \operatorname{Real}(L) / \sim, \\
+\forall t \in \mathbb{R}, L_t, R_t & : \mathbb{R}(L) \to L \quad \text{by} \quad L_t[\lambda] = (\lambda(t-))', R_t[\lambda] = \lambda(t+), \\
+\tau(L) &:= \langle\langle\{L_t, R_t : t \in \mathbb{R}\}\rangle\rangle.
+\end{align*}
+$$
+
+From all these notions come the following comments and results:
+
+(1) $(\mathbb{R}(L), \tau(L))$ is an $L$-topological space, called the $L$-(fuzzy) real line and also denoted simply $\mathbb{R}(L)$.
+
+(2) $(\mathbb{R}(2), \tau(2))$ is 2-homeomorphic to $G_{\chi}(\mathbb{R}, \mathfrak{T})$, where $\mathbb{R}$ has the usual topology $\mathfrak{T}$.
+
+(3) For $r \in \mathbb{R}$, define $\lambda_r : \mathbb{R} \to L$ by $\lambda_r(t) = \begin{cases} \top, & t < r \\ \bot, & t \geq r \end{cases}$. Then $r \mapsto [\lambda_r]$ is an $L$-embedding of $G_\chi(\mathbb{R}, \mathfrak{T})$ into $(\mathbb{R}(L), \tau(L))$ if $L$ is consistent, in which case it also follows that $\mathbb{R}(\mathbf{2}) = \{[\lambda_r] : r \in \mathbb{R}\}$ and that
+
+$$
+L_t[\lambda_r] = \chi_{(-\infty, t)}(r), \quad R_t[\lambda_r] = \chi_{(t, +\infty)}(r).
+$$
+
+Thus, for *L* consistent, and identifying $\mathbb{R}(2)$ with $\mathbb{R}$, $L_t$ extends the left-handed, subbasic open interval $(-\infty, t)$ and $R_t$ extends the right-handed, subbasic open interval $(t, +\infty)$: this justifies the notation “$L_t$” and “$R_t$” for the subbasic *L*-open subsets of the *L*-topology $\tau(L)$ on $\mathbb{R}(L)$.
+
+(4) For $L$ a complete DeMorgan chain, there are jointly $L$-continuous addition $\oplus$ and multiplication $\otimes$ extending the usual addition and multiplication. For example, $(\mathbb{R}(L), \oplus, \tau(L))$ is an abelian, cancellation, $L$-topological semigroup.
+
+(5) $\mathbb{R}(L)$ is $L$-$T_0$ (Example 4.3.3(4)); $\mathbb{R}(L)$ is $L$-sober (Example 4.3.3(4)) if and only if $L$ is a complete Boolean algebra; and, for $L$ completely distributive, $\mathbb{R}(L)$ is Hutton-uniformizable, metrizable via $d: L^{\mathbb{R}(L)} \times L^{\mathbb{R}(L)} \to [0, +\infty)$ extending the Euclidean metric in the sense that $d(\chi_{\{[\lambda_r]\}}, \chi_{\{[\lambda_s]\}}) = |r-s|$, and hence possesses all Hutton-Reilly separation axioms [27, 52, 35, 59, 37, 63].
+
+(6) The subspace of $\mathbb{R}(L)$ resulting from restricting *Real*($L$) to
+
+$$ \{\lambda \in \text{Real}(L) \mid \lambda(t) = \perp \text{ for } t > 1, \lambda(t) = \top \text{ for } t < 0\} $$
+
+is called the *L-(fuzzy) unit interval* and denoted $\mathbb{I}(L)$; and analogues of (1-3,5) above hold for $\mathbb{I}(L)$.
+
+The fuzzy real lines and fuzzy unit intervals are among the most important examples of many-valued topology and possess an extensive literature, a brief sample of which comprises [26, 17, 52, 44, 36, 64, 38]. With these examples, lattice valued topology has Urysohn Lemmas, Tietze Extension Theorems, Tihonov cubes, etc.
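The claim in item (3) above, that $L_t$ and $R_t$ restricted to the classes $[\lambda_r]$ behave like the crisp subbasic intervals, can be spot-checked numerically. In the Python sketch below the one-sided limits of the step function $\lambda_r$ are approximated by sampling just beside $t$ (an approximation valid only for step functions); all names are hypothetical:

```python
# three-element DeMorgan chain {0, 1, 2}; the involution ' swaps 0 and 2
inv = {0: 2, 1: 1, 2: 0}

def lam(r):                        # the antitone step function lambda_r representing the crisp real r
    return lambda t: 2 if t < r else 0

def L_t(t, l, eps=1e-9):           # L_t[lambda] = (lambda(t-))'; left limit sampled just below t
    return inv[l(t - eps)]

def R_t(t, l, eps=1e-9):           # R_t[lambda] = lambda(t+); right limit sampled just above t
    return l(t + eps)

# L_t and R_t on [lambda_r] agree with the crisp subbasic intervals (-inf, t) and (t, +inf)
for r, t in [(0.0, 1.0), (1.0, 0.0), (2.0, 2.5), (1.0, 1.0)]:
    assert L_t(t, lam(r)) == (2 if r < t else 0)
    assert R_t(t, lam(r)) == (2 if t < r else 0)
print("L_t, R_t extend the crisp open intervals")
```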
+
+**Example 4.3.5** (fifth class of examples—fuzzy real lines from *L*-spectra). Fix *L* a frame. Given the usual real line $(\mathbb{R}, \mathfrak{T})$ and unit interval $(\mathbb{I}, \mathfrak{T}(\mathbb{I}))$, recall the functor $LPt$ from Example 4.3.3 and put
+
+$$ \begin{align*} \mathbb{R}^*(L) &:= LPt(\mathfrak{T}) = (Lpt(\mathfrak{T}), (\Phi_L)^\rightarrow(\mathfrak{T})), \\ \mathbb{I}^*(L) &:= LPt(\mathfrak{T}(\mathbb{I})) = (Lpt(\mathfrak{T}(\mathbb{I})), (\Phi_L)^\rightarrow(\mathfrak{T}(\mathbb{I}))). \end{align*} $$
+
+Then the following hold [63, 64]:
+
+(1) For $L = \mathbf{2}$, $\mathbb{R}^*(\mathbf{2})$ is homeomorphic to $(\mathbb{R}, \mathfrak{T})$ and $\mathbb{I}^*(\mathbf{2})$ is homeomorphic to $(\mathbb{I}, \mathfrak{T}(\mathbb{I}))$.
+
+(2) $G_\chi(\mathbb{R}, \mathfrak{T})$ $L$-embeds into $\mathbb{R}^*(L)$; $G_\chi(\mathbb{I}, \mathfrak{T}(\mathbb{I}))$ $L$-embeds into $\mathbb{I}^*(L)$.
+
+(3) There are jointly $L$-continuous addition $\boxplus$ and multiplication $\otimes$ extending the usual addition and multiplication.
+
+(4) Assume $L$ a DeMorgan frame. Then the following are equivalent:
+
+(a) $\mathbb{R}(L)$ and $\mathbb{R}^*(L)$ are $L$-homeomorphic.
+
+(b) $\mathbb{I}(L)$ and $\mathbb{I}^*(L)$ are $L$-homeomorphic.
+
+(c) $L$ is a complete Boolean algebra.
+
+(5) $\mathbb{I}^*(L)$ is the $L$-Čech-Stone compactification of $G_\chi([0, 1])$; and for $L$ a complete Boolean algebra, $\mathbb{I}(L)$ is the $L$-Čech-Stone compactification of $G_\chi([0, 1])$.
+
+This class of examples gives the frame maps and prime principal ideal approach to fuzzy numbers. The results of (4) seem rather astounding: for $L$ a complete Boolean algebra, the probability distribution approach to fuzzy numbers is equivalent to the frame maps approach to fuzzy numbers. And (5) is also surprising: the traditional unit interval is not as compact as it could be if $L$ is bigger than 2; and, for example, for the Boolean algebra 4, $\mathbb{I}(4)$ has extra points of "density" and "closure", is more compact than $[0, 1]$, and $[0, 1]$ in fact 4-embeds into $\mathbb{I}(4)$ as a "4-dense" subset.
+
+**Example 4.3.6** (sixth class of examples—dual L-topologies on $\mathbb{R}$). Fix $L$ a DeMorgan frame and recall the subbasis $\{L_t, R_t : t \in \mathbb{R}\}$ for the L-topology $\tau(L)$ on $\mathbb{R}(L)$ from Example 4.3.4 above. Put [54, 55]
+
+$$
+\begin{align*}
+L_{[\lambda]}, R_{[\lambda]} : \mathbb{R} &\to L && \text{by } L_{[\lambda]}(t) = L_t[\lambda], R_{[\lambda]}(t) = R_t[\lambda], \\
+\tau[L] &= \langle\langle \{L_{[\lambda]}, R_{[\lambda]} : [\lambda] \in \mathbb{R}(L)\} \rangle\rangle, \\
+\mathbb{R}_L &:= (\mathbb{R}, \tau[L]).
+\end{align*}
+$$
+
+The following hold:
+
+(1) $\mathbb{R}_L$ is an $L$-topological space, called the dual $L$-real line.
+
+(2) $\mathbb{R}_L$ is $L$-homeomorphic to $G_\chi(\mathbb{R}, \mathfrak{T})$ if $L = \mathbf{2}$.
+
+(3) $\mathbb{R}_L$ is $L$-homeomorphic to $\omega_L(\mathbb{R}, \mathfrak{T})$ if $L$ is completely distributive (where $\omega_L$ is given in Example 4.3.2).
+
+(4) The usual addition and multiplication are both jointly $L$-continuous in $\mathbb{R}_L$.
+
+(5) $\mathbb{R}_L$ is $L-T_0$; and, for $L$ completely distributive, $\mathbb{R}_L$ is Hutton-uniformizable, metrizable via $d: L^\mathbb{R} \times L^\mathbb{R} \to [0, +\infty)$ extending the Euclidean metric in the sense that $d(\chi_{\{r\}}, \chi_{\{s\}}) = |r - s|$, and hence possesses all Hutton-Reilly separation axioms.
+
+(6) $\mathbb{I}_L$, as the subspace of $\mathbb{R}_L$ on $[0, 1]$, has analogues of (1–3,5).
+
+4.4. Embeddings of L-Top's into TopSys.
+This subsection begins the process of connecting topological systems and *L*-topological spaces, a process continuing into the next section (see Subsection 5.4 below). It constructs a class of embeddings of the schema {$L$-**Top** $: L \in |\mathrm{Frm}|$} into **TopSys** [7], embeddings which place all the example classes of Subsection 4.3 into **TopSys**.
+
+Fix a frame $L$, and put
+
+$$ L^{\bullet} = L - \{\top\}, \quad \Pr(L^{\bullet}) = \{\alpha \in L^{\bullet} : \alpha \text{ prime}\}. $$
+
+**Lemma 4.4.1.** For each $\alpha \in \Pr(L^{\bullet})$, there is a functorial embedding $F_{\alpha} : L\text{-Top} \to \text{TopSys}$ constructed as follows:
+
+$$ F_{\alpha}(X, \tau) = (X, \tau, \models_{\alpha}) \quad \text{with} \quad x \models_{\alpha} u \iff u(x) \not\leq \alpha, \\ F_{\alpha}[f : (X, \tau) \to (Y, \sigma)] = (f, ((f_L^{\leftarrow})|_{\sigma})^{\mathrm{op}}) : (X, \tau, \models_{\alpha}) \to (Y, \sigma, \models_{\alpha}). $$
+
+This lemma gives many-valued extensions of the Vickers embedding $E_V : \mathbf{Top} \to \mathbf{TopSys}$ discussed in Section 2. Viewing satisfaction as a kind of generalized membership relation and $u(x)$ as a degree of membership, the satisfaction relation constructed here is the restriction that the degree of membership cannot lie at or below a prechosen prime $\alpha$ in the frame of membership values.
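For $L = \mathbf{2}$ and $\alpha = \bot$ (the only prime of $\mathbf{2}$), this satisfaction relation on characteristic opens reduces to ordinary membership, which is the sense in which $E_V$ is recovered (cf. Corollary 4.4.3 below). A minimal Python sketch, all names hypothetical:

```python
# L = 2 = {0, 1}, alpha = 0 (bottom, the only prime): satisfaction on characteristic
# opens reduces to ordinary membership, recovering the Vickers-style embedding E_V
alpha = 0
X = ["p", "q", "r"]
U = {"p", "r"}                              # an open of some topology on X
chi_U = {x: 1 if x in U else 0 for x in X}  # its characteristic function, G_chi-style

def sat(x, u):                              # x |=_alpha u  iff  u(x) not<= alpha
    return not (u[x] <= alpha)

assert {x for x in X if sat(x, chi_U)} == U
print("satisfaction = membership for L = 2, alpha = bottom")
```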
+
+The question arises: what if $L$ has no primes? For examples: $L$ could be the family of all regular open subsets of $\mathbb{R}$ (with the usual topology) ordered by inclusion; $L$ could be the localic product of the subspace topology of the rational numbers with itself; or $L$ could be the family of all Lebesgue measurable subsets of $[0,1]$ with the measure 0 subsets identified and the measure 1 subsets identified. For such “atomless” frames $L$, how is $\mathbf{L-Top}$ to be embedded into $\mathbf{TopSys}$?
+
+We proceed as follows.
+
+(1) For any given frame $L$, adjoin a new bottom $\perp^*$ to $L$ which is required to be strictly below the bottom $\perp$ of $L$. This yields a new frame $L_{\perp*}$ having the property that
+
+$$ \perp^* \in \Pr(L_{\perp*}). $$
+
+(2) By Lemma 4.4.1, there exists a functorial embedding $F_{\perp*} : L_{\perp*} - \mathbf{Top} \to \mathbf{TopSys}$.
+
+(3) Construct the concrete functorial embedding $\hookrightarrow : \mathbf{L-Top} \to \mathbf{L}_{\perp*}-\mathbf{Top}$ by
+
+$$ (X, \tau) \mapsto (X, \tau_{\perp}) \quad \text{with} \quad \tau_{\perp} = \tau \cup \{\underline{\bot^*}\}, \quad f \mapsto f, $$
+
+where $\underline{\bot^*} : X \to L_{\perp*}$ is the constant $L_{\perp*}$-subset given by $\underline{\bot^*}(x) = \bot^*$.
+
+**Theorem 4.4.2.** For each frame $L$, there is a functorial embedding $E_{\perp} : \mathbf{L-Top} \to \mathbf{TopSys}$ given by
+
+$$ E_{\perp} = F_{\perp*} \circ \hookrightarrow. $$
+
+**Corollary 4.4.3.** The embedding $E_V : \mathbf{Top} \to \mathbf{TopSys}$ is recovered from Lemma 4.4.1 by choosing $L = 2$ and noting that
+
+$$ E_V = F_{\perp} \circ G_{\chi}, $$
+
+where $G_{\chi}$ is from Example 4.3.1.
+
+**Corollary 4.4.4.** *TopSys* is a supercategory, up to categorical isomorphisms, of **Top**, **Loc**, and the schema {$\mathbf{L-Top} : L \in |\mathrm{Frm}|$}.
+
+The Corollary assures us that the rich inventories of examples in Subsection 4.3 are included in **TopSys**.
+
+5. TOPOLOGICAL SYSTEMS AND VARIABLE-BASIS
+LATTICE-VALUED TOPOLOGY
+
+Heuristically, “variable-basis” topology means spaces with different
+lattice-theoretic bases are all accommodated within the same category;
+morphisms change both underlying carrier sets and underlying lattices
+(or bases) of truth/membership values. See a history of variable-basis
+thinking in many-valued mathematics in Section 1 of [62], a history that
+begins in a veiled form in 1967 [18] and becomes more explicit in 1981 and
+1983 [51]; and a fairly recent update packaged in the language of power-
+set theories and topological theories may be found in [65]. From a more
+formal point of view, variable-basis topology is a schema of categories
+C-Top, each with ground category Set × C, where $\mathbf{C}^{op}$ is a concrete
+category of lattice-theoretic structures. By convention, this schema is of the
+form
+
+$$
+\{\text{C-Top} : \mathbf{C} \hookrightarrow \mathbf{SQuant}^{op}\},
+$$
+
+where $\mathbf{SQuant}$ [65] is the category of semiquantales (complete lattices
+with a binary operation $\otimes$) and semiquantale morphisms (mappings pre-
+serving arbitrary $\vee$ and $\otimes$).
+
+To both simplify this section and facilitate relationships with **TopSys**, which has ground category **Set** × **Loc**, we choose **C** = **Loc** in the above schema and focus on the category **Loc-Top** in the sequel; but we recognize that most of what follows works with other, more general choices of **C** [62, 65].
+
+5.1. Motivations and powerset monad for Loc-Top.
+
+Discussion 5.1.1. It would be desirable to have a topological category satisfying the following conditions:
+
+(1) it is a supercategory up to isomorphism of both **Top** and **Loc**;
+
+(2) it is a supercategory of the fixed-basis schema {$L-\mathbf{Top}:L \in |\mathrm{Frm}|$},
+thus allowing for "internalized" change of basis *vis-a-vis* the "ex-
+ternalized" change of basis given in [25]—whereby conditions are
+given under which there is a functor from *L*-Top to *M*-Top for
+different *L* and *M*; and
+
+(3) it is a framework for asking and answering the following “comparison” questions concerning several of the example classes given in Subsection 4.3 above:
+
+(a) how do $\mathbb{R}(L)$ and $\mathbb{R}(M)$ compare and when are they homeomorphic;
+
+(b) how do $LPt(A)$ and $Mpt(A)$ compare and when are they homeomorphic;
+
+(c) how do $\mathbb{R}^*(L)$ and $\mathbb{R}^*(M)$ compare and when are they homeomorphic; and
+
+(d) how do $\omega_L(X, \mathfrak{T})$ and $\omega_M(X, \mathfrak{T})$ compare and when are they homeomorphic?
+
+We note that **TopSys** satisfies 5.1.1(1,2); however, **TopSys** is not a topological category. Also, it is unexplored in what sense (3) could be asked and answered in **TopSys**.
+
+**Definition 5.1.2** [53, 60, 61, 65]. The powerset monad for **Loc-Top** has ground category **Set** × **Loc**. Let $(f, \varphi) : (X, L) \to (Y, M)$ be in **Set**×**Loc**.
+
+(1) Put $(f, \varphi)^{\leftarrow} : L^X \leftarrow M^Y$ by
+
+$$ (f, \varphi)^{\leftarrow}(b) = \varphi^{op} \circ b \circ f, \quad \text{i.e.,} \quad \varphi^{op} \circ f_M^{\leftarrow}(b). $$
+
+(2) Put $(f, \varphi)^{\rightarrow} : L^X \rightarrow M^Y$ by
+
+$$ (f, \varphi)^{\rightarrow}(a) = \bigwedge_{a \leq (f, \varphi)^{\leftarrow}(b)} b. $$
+
+In the language of [65], $(f, \varphi)^{\rightarrow}$ is the pseudo-left adjoint of $(f, \varphi)^{\leftarrow}$.
+
+(3) Put $(f, \varphi)_{\rightarrow} : L^X \rightarrow M^Y$ by
+
+$$ (f, \varphi)_{\rightarrow}(a) = \bigvee_{(f, \varphi)^{\leftarrow}(b) \leq a} b. $$
+
+For terminology, $(f, \varphi)^{\rightarrow}$, $(f, \varphi)^{\leftarrow}$, $(f, \varphi)_{\rightarrow}$ are respectively called the *image*, *preimage*, and *lower image operators* of ground morphism $(f, \varphi)$.
+
+**Theorem 5.1.3.** *The following hold:*
+
+(1) $(f, \varphi)^{\rightarrow}$ is isotone.
+
+(2) $(f, \varphi)^{\leftarrow}$ is a frame map and $(f, \varphi)^{\leftarrow} \dashv (f, \varphi)_{\rightarrow}$.
+
+(3) $(f, \varphi)^{\rightarrow} \dashv (f, \varphi)^{\leftarrow} \dashv (f, \varphi)_{\rightarrow}$ if $\varphi^{op}$ preserves arbitrary $\bigwedge$ (which is the case if $L, M$ are DeMorgan).
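The Galois condition underlying the preimage/lower-image adjunction can be checked exhaustively when $L$ and $M$ are finite chains. A Python sketch (the frame map `phi_op` and all other names are hypothetical):

```python
from itertools import product

L, M = [0, 1, 2], [0, 1]            # chains as frames; join = max, meet = min
X, Y = ["x0", "x1"], ["y0"]
f = {"x0": "y0", "x1": "y0"}
phi_op = {0: 0, 1: 2}               # a frame map M -> L, i.e., phi^op for some phi in Loc(L, M)

def leq(a, b):
    return all(a[p] <= b[p] for p in a)

def pre(b):                          # (f, phi)^<-(b) = phi^op o b o f, an L-subset of X
    return {x: phi_op[b[f[x]]] for x in X}

def low(a):                          # (f, phi)_->(a) = sup{ b in M^Y : (f, phi)^<-(b) <= a }
    cands = [b for v in product(M, repeat=len(Y)) if leq(pre(b := dict(zip(Y, v))), a)]
    return {y: max(b[y] for b in cands) for y in Y}

# Galois condition for the adjunction of preimage and lower image
for va in product(L, repeat=len(X)):
    a = dict(zip(X, va))
    for vb in product(M, repeat=len(Y)):
        b = dict(zip(Y, vb))
        assert leq(pre(b), a) == leq(b, low(a))
print("preimage -| lower image verified")
```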
+
+5.2. **Loc-Top** and its basic categorical properties.
+
+**Definition 5.2.1** (variable-basis topology). The category **Loc-Top** of topological spaces and continuous morphisms/mappings has ground category **Set** × **Loc** and comprises data subject to axioms as follows:
+
+(1) *Objects*: $(X, L, \tau)$, where $(X, \tau) \in |L-\text{Top}|$, i.e., $\tau \subset L^X$ is closed under arbitrary $\bigvee$ and finite $\wedge$. In this case, $\tau$ is a *topology* on the ground object $(X, L)$.
+
+(2) *Morphisms*: $(f, \varphi) : (X, L, \tau) \to (Y, M, \sigma)$ satisfies
+
+$$ [(f, \varphi)^{\leftarrow}]^{\rightarrow}(\sigma) \subset \tau, \text{ i.e., } \forall v \in \sigma, (f, \varphi)^{\leftarrow}(v) \in \tau. $$
+
+(3) *Composition, identities*: from **Set** × **Loc**.
+
+Homeomorphisms are the categorical isomorphisms of **Loc-Top** and are ground morphisms $(f, \varphi)$ such that each of $f, \varphi^{op}$ is a bijection (and so $\varphi^{op}$ is an order isomorphism), $(f, \varphi)$ is continuous, and $(f^{-1}, ((\varphi^{op})^{-1})^{op})$ is continuous.
+
+**Comparison Theorems 5.2.2.** *The following statements hold:*
+
+(1) Let $L, M$ be DeMorgan frames. The following are equivalent:
+
+(a) $\mathbb{R}(L)$ is homeomorphic in **Loc-Top** to $\mathbb{R}(M)$.
+
+(b) $\mathbb{R}^*(L)$ is homeomorphic in **Loc-Top** to $\mathbb{R}^*(M)$.
+
+(c) $L$ is order isomorphic to $M$.
+
+(2) Let $L, M$ be frames. The following are equivalent:
+
+(a) $\forall (X, \mathfrak{T}) \in |\textbf{Top}|$, $(X, L, \omega_L(\mathfrak{T}))$ is homeomorphic in **Loc-Top** to $(X, M, \omega_M(\mathfrak{T}))$.
+
+(b) $\forall (X, \mathfrak{T}) \in |\textbf{Top}|$, $(X, L, G_\omega(\mathfrak{T}))$ is homeomorphic in **Loc-Top** to $(X, M, G_\omega(\mathfrak{T}))$.
+
+(c) $\forall A \in |\textbf{Loc}|$, $LPt(A)$ is homeomorphic to $MPt(A)$.
+
+(d) $L$ is order isomorphic to $M$.
+
+*Proof.* The equivalence of (1)(c) with each of (1)(a) and (1)(b) follows respectively from Corollary 7.1.7.1 and Corollary 7.3.7.1 of [62]; and the equivalence of (2)(d) with (2)(c) follows from Corollary 7.4.6.1(2) of [62]. Further, it is immediate that each of (2)(a) and (2)(b) implies (2)(d) using the definition of homeomorphism. We now show that (2)(d) implies each of (2)(a) and (2)(b). To this end, assume that $L$ is order isomorphic to $M$ via some $\varphi^{op}: L \leftarrow M$, and let $(X, \mathfrak{T})$ be a topological space. It follows that $L^X$ is bijective with $M^X$ via the “left-hand” and “right-hand” correspondences
+
+$$ a \in L^X \mapsto (\varphi^{op})^{-1} \circ a \in M^X, \quad b \in M^X \mapsto \varphi^{op} \circ b \in L^X, $$
+
+and each of these correspondences is isotone; hence $L^X$ and $M^X$ are order-isomorphic.
+
+To show (2)(a), it is claimed that
+
+$$ (id_X, \varphi) : (X, L, \omega_L(\mathfrak{T})) \to (X, M, \omega_M(\mathfrak{T})) $$
+
+is a homeomorphism. Recall that the subbases of the two topologies are respectively as follows:
+
+$$ \{u : X \to L \mid \forall \alpha \in L, [u \not\leq \alpha] \in \mathfrak{T}\}, \quad \{v : X \to M \mid \forall \beta \in M, [v \not\leq \beta] \in \mathfrak{T}\}. $$
+
+Let $u$ be a member of the left-hand subbasis, and let $x \in [u \not\leq \alpha]$. Then
+$u(x) \not\leq \alpha$, and hence
+
+$$
+(\varphi^{op})^{-1} (u(x)) \not\leq (\varphi^{op})^{-1} (\alpha)
+$$
+
+by the right-hand correspondence being order-preserving. It follows that
+$x \in [(\varphi^{op})^{-1} \circ u \not\leq (\varphi^{op})^{-1} (\alpha)]$, so that
+
+$$
+[u \not\leq \alpha] \subset [(\varphi^{op})^{-1} \circ u \not\leq (\varphi^{op})^{-1} (\alpha)].
+$$
+
+A symmetric argument using the isotonicity of the left-hand correspon-
+dence above establishes that
+
+$$
+[u \not\leq \alpha] \supset [(\varphi^{op})^{-1} \circ u \not\leq (\varphi^{op})^{-1} (\alpha)],
+$$
+
+and hence that
+
+$$
+[u \not\leq \alpha] = [(\varphi^{op})^{-1} \circ u \not\leq (\varphi^{op})^{-1} (\alpha)].
+$$
+
+The displayed equality shows, via the left-hand correspondence, that
+$(\varphi^{op})^{-1} \circ u$ satisfies the defining predicate of the right-hand
+subbasis; hence the left-hand correspondence maps the left-hand subbasis
+into the right-hand subbasis. A symmetric argument shows that the
+right-hand correspondence maps the right-hand subbasis into the left-hand
+subbasis. Since the two correspondences are mutually inverse, they restrict
+to mutually inverse bijections between the two subbases, and hence between
+the generated topologies. It follows that the correspondences identify
+
+$$
+\omega_L(\mathfrak{T}) \text{ and } \omega_M(\mathfrak{T}).
+$$
+
+Now the action of the preimage operator $(id_X, \varphi)^{\leftarrow}$ on the right-hand subbase is exactly that of the right-hand correspondence above, and so $(id_X, \varphi)$ is subbasic continuous; and the action of the preimage operator $(id_X, ((\varphi^{op})^{-1})^{op})^{\leftarrow}$ on the left-hand subbase is exactly that of the left-hand correspondence above, and so $(id_X, ((\varphi^{op})^{-1})^{op})$ is subbasic continuous. But Theorem 3.2.6 of [62] now implies that each of $(id_X, \varphi)$ and $(id_X, ((\varphi^{op})^{-1})^{op})$ is continuous. Hence $(id_X, \varphi)$ is a homeomorphism and (2)(a) holds.
+
+Now to show (2)(b), it is claimed that $(id_X, \varphi) : (X, L, G_\omega(\mathfrak{T})) \to (X, M, G_\omega(\mathfrak{T}))$ is a homeomorphism. Recall that the subbases of the two topologies are respectively as follows:
+
+$$
+\{\chi_U : U \in \mathfrak{T}\} \cup \{\underline{\alpha} : \alpha \in L\}, \quad \{\chi_U : U \in \mathfrak{T}\} \cup \{\underline{\beta} : \beta \in M\}.
+$$
+
+It is straightforward to show that $\forall U \in \mathfrak{T}, \forall \alpha \in L, \forall \beta \in M$, the following preimages are obtained:
+
+$$ (id_X, \varphi)^\leftarrow (\chi_U) = \chi_U = (id_X, ((\varphi^{op})^{-1})^{op})^\leftarrow (\chi_U), $$
+
+$$ (id_X, \varphi)^\leftarrow (\underline{\beta}) = \underline{\varphi^{op}(\beta)} \in \{\underline{\alpha} : \alpha \in L\}, $$
+
+$$ (id_X, ((\varphi^{op})^{-1})^{op})^{\leftarrow} (\underline{\alpha}) = \underline{(\varphi^{op})^{-1}(\alpha)} \in \{\underline{\beta} : \beta \in M\}. $$
+
+All of this shows that each of $(id_X, \varphi)$ and $(id_X, ((\varphi^{op})^{-1})^{op})$ is subbasic continuous and hence continuous (Theorem 3.2.6 of [62]). It follows that $(id_X, \varphi)$ is a homeomorphism and (2)(b) holds. $\square$
+
+It is striking that (2)(a) and (2)(b) above are equivalent, even without the assumption that any of the lattices is completely distributive, though this does not address the open question stated just above Example 4.3.3. Also, given that stratification is incompatible with $L$-sobriety for most locales [57, 59, 63, 66], that both $\omega_L$ and $G_\omega$ produce only stratified spaces (this follows from the definition of “stratified” in Example 4.3.2(4) above), and that $LPt$ produces only $L$-sober spaces [53, 57, 58, 59, 63], it is even more striking that each of (2)(a) and (2)(b) is equivalent to (2)(c).
+
+Now **Loc-Top** contains many more morphisms "across" different bases than just homeomorphisms. This richness of morphisms is detailed at length in Section 7 of [62], including classes of non-homeomorphisms between objects in each of the example classes of Subsection 4.3 above having different bases; e.g., conditions are given under which **Loc-Top**($\mathbb{R}(L)$, $\mathbb{R}(M)$) contains a class of non-homeomorphisms. It should be pointed out that **Loc-Top** has far more morphisms than all the morphisms of the schema $\{L\text{-Top} : L \in |\mathrm{Frm}|\}$ put together. Restated, "variable-basis" is much richer with respect to morphisms than "external" change of basis, even when the external changes are brought "inside". For now we content ourselves with the following general proposition:
+
+**Proposition 5.2.3.** *Fix a frame $L$ and a localic non-isomorphism $\varphi \in \text{Loc}(L, L)$, let $(X, L, \sigma)$ be a topological space (in **Loc-Top**), and put*
+
+$$ \tau := \{\varphi^{op} \circ v : v \in \sigma\}. $$
+
+*Then $(id_X, \varphi) : (X, L, \tau) \rightarrow (X, L, \sigma)$ is continuous, not a homeomorphism, and not in any $M$-Top.*
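+On a small finite model the proposition can be sketched directly. The following Python sketch is illustrative only: the frame $L$ (powerset of a two-element set), the non-isomorphic frame endomorphism, and all names are assumptions of the sketch, not data from the source.
+
+```python
+from itertools import product
+
+# Finite model: frame L = powerset of S = {0, 1}, ordered by inclusion.
+S = frozenset({0, 1})
+L = [frozenset(c) for c in [(), (0,), (1,), (0, 1)]]
+
+# A frame endomorphism phi^op that is not an isomorphism: preimage along
+# the constant map g(s) = 0, i.e. U |-> S if 0 in U, else empty.
+def phi_op(U):
+    return S if 0 in U else frozenset()
+
+X = ('p', 'q')
+# sigma: the discrete L-topology on X -- every map X -> L is open.
+sigma = [dict(zip(X, vals)) for vals in product(L, repeat=len(X))]
+
+# tau := { phi^op o v : v in sigma }, deduplicated.
+tau = [dict(t) for t in {tuple(sorted({x: phi_op(v[x]) for x in X}.items()))
+                         for v in sigma}]
+
+# (id_X, phi) : (X, L, tau) -> (X, L, sigma) is continuous: the preimage
+# phi^op o v of every open v in sigma is open in tau by construction.
+assert all({x: phi_op(v[x]) for x in X} in tau for v in sigma)
+
+# ...but phi^op is not injective, so (id_X, phi) cannot be a
+# homeomorphism, and tau is strictly coarser than sigma:
+assert phi_op(frozenset({0})) == phi_op(S) and frozenset({0}) != S
+assert len(tau) < len(sigma)
+```
+
+Here $\sigma$ has $4^2 = 16$ opens while $\tau$ collapses to the four maps valued in $\{\bot, \top\}$, exhibiting the strict coarsening.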
+
+The forgetful functor $V : \textbf{Loc-Top} \rightarrow \textbf{Set} \times \textbf{Loc}$ is given by
+
+$$ V(X, L, \tau) = (X, L), \quad V(f, \varphi) = (f, \varphi). $$
+---PAGE_BREAK---
+
+**Theorem 5.2.4.** Each $V$-structured source [sink] in **Set** × **Loc** has a unique initial [final] lift to **Loc-Top**. Hence, **Loc-Top** is topological over **Set** × **Loc** w.r.t. $V$.
+
+**Corollary 5.2.5.** **Loc-Top** *is complete and cocomplete.*
+
+5.3. **Loc-Top** as supercategory of **Top**, **Loc**, **L-Top's**. The embeddings of **Top**, **Loc**, and the schema {$L-\text{Top} : L \in |\text{Frm}|$} into **Loc-Top** come from [53, 62] and are now described.
+
+Fix a frame $L$ and put $L\text{-Top} \to \textbf{Loc-Top}$ by
+
+$$ (X, \tau) \mapsto (X, L, \tau), \\[1em] [f : (X, \tau) \to (Y, \sigma)] \mapsto [(f, (id_L)^{op}) : (X, L, \tau) \to (Y, L, \sigma)]. $$
+
+This reinforces the richness of morphisms in **Loc-Top** vis-a-vis those morphisms occurring in the schema {$L\text{-Top} : L \in |\text{Frm}|$}: even for fixed basis $L$, if $|\text{Frm}(L, L)| > 1$, i.e., $L$ admits a frame endomorphism other than $id_L$, then **Loc-Top** will have continuous morphisms between spaces, both having lattice-theoretic basis $L$, which cannot come from morphisms in **L-Top**. Cf. Proposition 5.2.3 above.
+
+It should also be noted that **Top** ≈ **2-Top** $\to$ **Loc-Top**. Restated, **Top** embeds into **Loc-Top** by $G_\chi$ of Example 4.3.1 followed by the embedding above for $L = 2$.
+
+We consider two functorial embeddings of **Loc** into **Loc-Top**:
+
+(1) For the empty embedding, put **Loc** $\to$ **Loc-Top** by
+
+$$ A \mapsto (\emptyset, A, A^\emptyset \equiv 1), \\[1em] [\varphi : A \to B] \mapsto [(id_\emptyset \equiv \emptyset, \varphi) : (\emptyset, A, A^\emptyset) \to (\emptyset, B, B^\emptyset)]. $$
+
+Since **Loc-Top** is variable-basis topology by Theorem 5.2.4 above, this embedding justifies thinking of **Loc** as “point-free” or “pointless” topology.
+
+(2) For the singleton embedding, put **Loc** $\to$ **Loc-Top** by
+
+$$ A \mapsto (\mathbf{1}, A, A^1), \\[1em] [\varphi : A \to B] \mapsto [(id_1, \varphi) : (\mathbf{1}, A, A^1) \to (\mathbf{1}, B, B^1)]. $$
+
+Again, given that **Loc-Top** is variable-basis topology by 5.2.4, this justifies thinking of **Loc** as the “topology of singleton spaces”.
+
+Taking the embeddings of **Top** and the schema {$L\text{-Top} : L \in |\text{Frm}|$} into **Loc-Top** on one hand, and the embeddings of **Loc** into **Loc-Top** on the other hand, it emerges that **Top** and the schema {$L\text{-Top} : L \in |\text{Frm}|$} represent “variable carrier set and fixed lattice-theoretic basis” topology, while **Loc** represents “fixed carrier set and variable lattice-theoretic basis” topology; and hence, neither **Top** nor **Loc** is more general. All these embeddings also justify speaking of **Loc-Top** (and the other **C-Top's** of [62]) as “point-set lattice-theoretic” or “poslat” topology—see [56].
+---PAGE_BREAK---
+
+Finally, it is from the standpoint of **Loc-Top** that it may be fairly said
+that **Loc** is part of “point-set” topology, namely as part of point-set lattice-
+theoretic topology.
+
+5.4. **Loc-Top** as supercategory of **TopSys**. This subsection constructs two embeddings of **TopSys** into **Loc-Top** [7] and thereby continues the theme, begun in Subsection 4.4 above, that topological systems and many-valued topological spaces are intertwined, in multiple ways, and that this intertwining is essential to understanding both spaces and systems. Embedding variable-basis spaces into a systems context is more delicate, needs more ideas, and is discussed later in this paper.
+
+Developing relationships between **TopSys** and **Loc-Top** is mandated by these considerations:
+
+* **TopSys** and **Loc-Top** are both supercategories up to isomorphism of **Top**, **Loc**, and the schema {$L\text{-Top} : L \in |\mathrm{Frm}|$};
+* **TopSys** is essentially algebraic and **Loc-Top** is topological; and
+* both **TopSys** and **Loc-Top** have the same ground category $\mathbf{Set} \times \mathbf{Loc}$.
+
+It is helpful to note ab initio what relationships cannot exist between
+**TopSys** and **Loc-Top**, as seen in our first result.
+
+**Theorem 5.4.1.** *There cannot exist a concrete isomorphism $J: \mathbf{TopSys} \to \mathbf{Loc-Top}$.*
+
+*Proof.* Recall the forgetful functors $T : \mathbf{TopSys} \to \mathbf{Set} \times \mathbf{Loc}$ and $V : \mathbf{Loc-Top} \to \mathbf{Set} \times \mathbf{Loc}$ given earlier (above Proposition 3.1.1 and Theorem 5.2.4, respectively). Now suppose a concrete isomorphism $J : \mathbf{TopSys} \to \mathbf{Loc-Top}$ exists. Then it is the case that
+
+$$
+T = V \circ J, \quad V = T \circ J^{-1}.
+$$
+
+Now let
+
+$$
+(T(f_{\gamma}, \varphi_{\gamma}) : (X, A) \rightarrow T(X_{\gamma}, A_{\gamma}, \vdash_{\gamma}))_{\gamma \in \Gamma}
+$$
+
+be a *T*-structured source in **Set** × **Loc**. Then
+
+$$
+(VJ(f_{\gamma}, \varphi_{\gamma}) : (X, A) \rightarrow VJ(X_{\gamma}, A_{\gamma}, \vdash_{\gamma}))_{\gamma \in \Gamma}
+$$
+
+may be regarded as a *V*-structured source in **Set** × **Loc**. Now the topologicity (Theorem 5.2.4) of **Loc-Top** implies that this second source in **Set** × **Loc** has a unique, initial lift
+
+$$
+(J(f_{\gamma}, \varphi_{\gamma}) : (X, A, \tau) \to J(X_{\gamma}, A_{\gamma}, \vdash_{\gamma}))_{\gamma \in \Gamma},
+$$
+
+and it follows from $J : \mathbf{TopSys} \to \mathbf{Loc-Top}$ being a concrete isomorphism
+that
+
+$$
+((f_{\gamma}, \varphi_{\gamma}) : J^{-1}(X, A, \tau) \to (X_{\gamma}, A_{\gamma}, \vdash_{\gamma}))_{\gamma \in \Gamma}
+$$
+---PAGE_BREAK---
+
+is a unique, initial lift of the first source in **Set** × **Loc**. This implies that
+**TopSys** is topological over **Set** × **Loc** w.r.t. T, a contradiction (Theorem
+3.1.6 above). □
+
+Even though concrete isomorphisms between **TopSys** and **Loc-Top**
+do not exist, other functorial relationships do in fact exist. Now con-
+structing a space from a system or a system from a space over the same
+ground object from **Set** × **Loc** requires a fundamental paradigm shift in
+the interpretation of that ground object:
+
+* Given $(X, A, \vdash) \in |\mathbf{TopSys}|$, the ground object $(X, A)$ in some examples may be interpreted as follows: $X$ is a set of bitstrings and $A$ is a locale of predicates; and in this case $\vdash$ is a satisfaction relation on ground object $(X, A) \in |\mathbf{Set} \times \mathbf{Loc}|$.
+
+* Given $(X, L, \tau) \in |\mathbf{Loc-Top}|$, the ground object $(X, L)$ in some examples may be interpreted as follows: $X$ is a set of points and $L$ is a frame of membership or truth values; and in this case $\tau$ is a topology on ground object $(X, L) \in |\mathbf{Set} \times \mathbf{Loc}|$.
+
+**Theorem 5.4.2** (satisfaction embedding $F_\vdash$). *Construct* $F_\vdash : \mathbf{TopSys} \to \mathbf{Loc-Top}$ *by*
+
+$$F_{\vdash} (X, A, \vdash) = (X, A, \tau_{\vdash}),$$
+
+where
+
+$$\tau_{\vdash} = \{ u \in A^X : u = \bot \text{ or } [\forall x \in X, x \vdash u(x)] \},$$
+
+$$F_{\vdash}(f, \varphi) = (f, \varphi).$$
+
+Then $F_{\vdash}$ concretely embeds $\mathbf{TopSys}$ into $\mathbf{Loc-Top}$.
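+A minimal sketch of the $\tau_{\vdash}$ construction, under the illustrative assumptions (not from the source) that $A$ is the powerset of a small set of bitstrings and that satisfaction is membership, $x \vdash a$ iff $x \in a$:
+
+```python
+from itertools import product, chain, combinations
+
+# Illustrative finite system: X = four bitstrings, A = powerset of X,
+# and satisfaction x |- a iff x in a.  (Assumptions for this sketch.)
+X = ('00', '01', '10', '11')
+A = [frozenset(c) for c in chain.from_iterable(
+        combinations(X, r) for r in range(len(X) + 1))]
+
+def sat(x, a):                       # x |- a
+    return x in a
+
+bottom = {x: frozenset() for x in X}
+
+# tau_|- = { u in A^X : u = bottom  or  [for all x in X, x |- u(x)] }
+tau = []
+for vals in product(A, repeat=len(X)):
+    u = dict(zip(X, vals))
+    if u == bottom or all(sat(x, u[x]) for x in X):
+        tau.append(u)
+
+# Each x lies in 2^(|X|-1) = 8 subsets of X, giving 8^4 pointwise-
+# satisfied maps, plus the bottom map:
+assert len(tau) == 8 ** len(X) + 1
+```
+
+The brute-force enumeration over $A^X$ ($16^4$ maps) makes the defining condition of $\tau_{\vdash}$ completely explicit.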
+
+As shown above (5.4.1), $F_{\vdash}$ cannot be an isomorphism. On the other hand, being an embedding ensures that $F_{\vdash}^{\rightarrow}(\mathbf{TopSys})$ is a subcategory of **Loc-Top**, albeit a proper subcategory. Given the reinterpretation issues noted above for making spaces from systems, the subcategory $F_{\vdash}^{\rightarrow}(\mathbf{TopSys})$ is of interest and it is important to better understand it. The following characterization theorem resolves this issue.
+
+**Theorem 5.4.3** ($F_{\vdash}$ characterization theorem). For $(X, L, \tau) \in |\mathbf{Loc-Top}|$, let
+
+$$\tau^* = \{u \in \tau : u \neq \perp\}.$$
+
+Assume $X \neq \emptyset$ and $L$ to be consistent. Then $(X, L, \tau) \in F_{\vdash}^{\rightarrow}(\mathbf{TopSys})$ if and only if all of the following hold:
+
+(1) $\tau^*$ is a filter;
+
+(2) $\forall x \in X, \forall u \in \tau^*, u(x) > \bot$;
+
+(3) $\forall x \in X, \forall u \in \tau^*, \forall \{\alpha_\gamma : \gamma \in \Gamma\} \subset L$ with $u(x)=\bigvee_{\gamma \in \Gamma} \alpha_\gamma, \exists \gamma_0 \in \Gamma, \exists v \in \tau^*, v(x) = \alpha_{\gamma_0};$
+
+(4) $\forall \{u_x : x \in X\} \subset \tau^*, \exists u \in \tau^*, \forall x \in X, u(x) = u_x(x).$
+---PAGE_BREAK---
+
+Since spaces can be easily constructed which do not satisfy all the conditions of Theorem 5.4.3, it necessarily follows that $F_{\vdash}$ cannot be an isomorphism, confirming Theorem 5.4.1 above.
+
+A second and strikingly different functor embeds **TopSys** into **Loc-Top**, and this second functor is closely tied to the example classes catalogued in Subsection 4.3 above.
+
+**Theorem 5.4.4 (truncation embedding $F_k$).** *Construct $F_k : \textbf{TopSys} \to \textbf{Loc-Top}$ by*
+
+$$F_k(X, A, \vdash) = (X, A, \tau_k),$$
+
+where
+
+$$\tau_k = \langle\langle \{\underline{a} \wedge \chi_{ext(a)} : a \in A\} \rangle\rangle$$
+
+and the notion of extent is from Theorem 3.2.2 above, and
+
+$$F_k(f, \varphi) = (f, \varphi).$$
+
+Then $F_k$ concretely embeds **TopSys** into **Loc-Top**.
+
+For the same reasons given above for $F_{\vdash}$, $F_k^{\rightarrow}(\mathbf{TopSys})$ is a proper subcategory of **Loc-Top**, and it is therefore important to better understand this subcategory. The following characterization theorem resolves the corresponding issue for $F_k$.
+
+**Theorem 5.4.5 ($F_k$ characterization theorem).** *Let $(X, L, \tau) \in |\textbf{Loc-Top}|$. Then $(X, L, \tau) \in F_k^{\rightarrow}(\textbf{TopSys})$ if and only if $\exists \{U_\alpha : \alpha \in L\} \subset \wp(X)$ satisfying all of the following:*
+
+(1) $\forall \{\alpha_\gamma : \gamma \in \Gamma\} \subset L$, $\bigcup_{\gamma \in \Gamma} U_{\alpha_\gamma} = U_{\bigvee_{\gamma \in \Gamma} \alpha_\gamma}$;
+
+(2) $\forall \{\alpha_\gamma : \gamma \in \Gamma\} \subset L$ ($\Gamma$ finite), $\bigcap_{\gamma \in \Gamma} U_{\alpha_\gamma} = U_{\bigwedge_{\gamma \in \Gamma} \alpha_\gamma}$;
+
+(3) $\tau = \langle\{\underline{\alpha} \wedge \chi_{U_\alpha} : \alpha \in L\}\rangle$, where the single brackets indicate a topology generated from a basis.
+
+As with $F_{\vdash}$, Theorem 5.4.5 shows that $F_k$ is not an isomorphism since spaces can be constructed not satisfying all the conditions of 5.4.5, thus confirming Theorem 5.4.1 above.
+
+With respect to objects, the truncation functor $F_k$ is essentially built from the *Ext* functor of Theorem 3.2.2(2) followed by the schema of the $G_\omega$ functors introduced in the paragraph between Example 4.3.2 and Example 4.3.3. To see this more precisely, fix $(X, A, \vdash)$ in **TopSys**. Applying *Ext* : **TopSys** → **Top** yields the topological space $(X, ext^{->}(A))$. Now applying $G_\omega : \mathbf{Top} \to \mathbf{L-Top}$ yields the L-topological space $(X, G_\omega(ext^{->}(A)))$, where
+
+$$G_\omega(\mathrm{ext}^\rightarrow(A)) = \langle\langle \{\underline{a} : a \in A\} \cup \{\chi_{ext(b)} : b \in A\} \rangle\rangle.$$
+---PAGE_BREAK---
+
+But the first infinite distributive law implies the subbasis $\{\underline{a} : a \in A\} \cup \{\chi_{ext(b)} : b \in A\}$ generates a basis
+
+$$ \{\underline{a} \wedge \chi_{ext(b)} : (a, b) \in A \times A\} $$
+
+for the *L*-topology $G_\omega(\text{ext}^\rightarrow(A))$. This basis is actually finer than what
+is needed for the *A*-topology $\tau_k$. To address this problem, put
+
+$$ \langle G_\omega \circ \text{ext}^\rightarrow \rangle : A \times A \rightarrow G_\omega (\text{ext}^\rightarrow(A)) \text{ by } \langle G_\omega \circ \text{ext}^\rightarrow \rangle (a, b) = \underline{a} \wedge \chi_{ext(b)}, $$
+
+recall the inclusion map
+
+$$ \iota_{\Delta}: \Delta(A \times A) \to A \times A, $$
+
+of the diagonal $\Delta(A \times A)$ into $A \times A$, and note the usual bijection
+
+$$ j: A \to \Delta(A \times A). $$
+
+Then
+
+$$ (\langle G_{\omega} \circ ext^{\rightarrow} \rangle \circ \iota_{\Delta} \circ j)^{\rightarrow}(A) = \{\underline{a} \wedge \chi_{ext(a)} : a \in A\}, \quad (\mathbf{K}) $$
+
+which can be shown using the first infinite distributive law to be a basis for $\tau_k$, actually improving the statement of Theorem 5.4.4 above, which is in terms of subbases. Now put
+
+$$ [\langle G_{\omega} \circ ext^{\rightarrow} \rangle \circ \iota_{\Delta} \circ j](A) := (X, \langle\langle (\langle G_{\omega} \circ ext^{\rightarrow} \rangle \circ \iota_{\Delta} \circ j)^{\rightarrow}(A) \rangle\rangle), $$
+
+an $A$-topological space, where “[ ]” denotes that a space using $X$ as the underlying carrier set is being created. The last step in constructing $F_k(X, A, \vdash)$ applies the embedding $A$-Top $\to$ **Loc-Top** recorded in Subsection 5.3 above. So the sequence of actions in forming $F_k(X, A, \vdash)$ may be seen as
+
+$$ [( )\text{-Top} \to \text{Loc-Top}] \circ [\langle G_{\omega} \circ ext^{\to} \rangle \circ \iota_{\Delta} \circ j]. $$
+
+It is remarkable that this composite action is injective on objects and is
+compatible with the action $F_k(f, \varphi) = (f, \varphi)$ on morphisms to produce
+$F_k$ as a concrete embedding. Line (K) above explains the "k" in $F_k$
+as standing for "truncation" or "cuts", since the basic open sets of the
+topology $\tau_k$ are truncations or cuts of characteristics by constant maps.
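+Under the same illustrative membership assumptions used earlier (so that $ext(a) = a$; all names below are assumptions of the sketch), the truncation opens of line (K) can be computed and checked to be closed under binary meets, i.e., to form a basis rather than merely a subbasis:
+
+```python
+from itertools import chain, combinations
+
+# Illustrative membership system: A = powerset of X, x |- a iff x in a,
+# hence ext(a) = {x : x |- a} = a.
+X = ('00', '01', '10', '11')
+A = [frozenset(c) for c in chain.from_iterable(
+        combinations(X, r) for r in range(len(X) + 1))]
+
+def truncation(a):
+    # basic open a_ ^ chi_ext(a): the constant map a cut down by ext(a)
+    return {x: (a if x in a else frozenset()) for x in X}
+
+basis = [truncation(a) for a in A]
+
+def meet(u, v):
+    return {x: u[x] & v[x] for x in X}
+
+# The family is closed under binary meets, so line (K) yields a basis,
+# not merely a subbasis:
+#   (a_ ^ chi_ext(a)) ^ (b_ ^ chi_ext(b)) = (a^b)_ ^ chi_ext(a^b).
+assert all(meet(truncation(a), truncation(b)) == truncation(a & b)
+           for a in A for b in A)
+```
+
+The final assertion is exactly the first-infinite-distributive-law computation mentioned in the text, verified exhaustively on this finite model.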
+
+We observe that the preceding paragraph adds to what is known in [7]
+and proves the following new factorization of $F_k$ w.r.t. objects:
+
+**Theorem 5.4.6 (factorization of $F_k$).** *As an object-level mapping, $F_k : |\textbf{TopSys}| \to |\textbf{Loc-Top}|$ factors as follows:*
+
+$$ F_k = [( )\text{-Top} \to \text{Loc-Top}] \circ [\langle G_\omega \circ ext^\to \rangle \circ \iota_{\Delta} \circ j]. $$
+
+We note in the above discussion and proposition that $\omega_L$ may be sub-
+stituted for $G_\omega$ when the locale $A$ of predicates is completely distributive
+(Example 4.3.2), bringing us back almost full circle to the beginning of
+---PAGE_BREAK---
+
+the many-valued topology literature. Finally, this discussion suggests
+a possibly new construction of spaces from systems, namely—for fixed
+(X, A, ⊢)—that given by
+
+$$
+[A\text{-}\mathbf{Top} \to \mathbf{Loc\text{-}Top}] \circ [\langle\omega_A \circ ext^{\to}\rangle \circ \iota_{\Delta} \circ j],
+$$
+
+and possibly a new bi-level mapping, say, $F_{\omega} : \mathbf{TopSys} \to \mathbf{Loc-Top}$ by
+
+$$
+F_{\omega}(X, A, \vdash) = (X, A, \tau_{\omega}),
+$$
+
+where
+
+$$
+\tau_{\omega} = \langle (\langle \omega_A \circ ext^{\rightarrow} \rangle \circ \iota_{\Delta} \circ j)^{\rightarrow} (A) \rangle
+$$
+
+and the notion of extent is from Theorem 3.2.2 above, along with
+
+$$
+F_{\omega}(f, \varphi) = (f, \varphi).
+$$
+
+If $F_\omega$ is restricted to the full subcategory of **TopSys** in which all sets of predicates are completely distributive, then $F_\omega$ coincides with $F_k$ given above and is in that instance a concrete embedding.
+
+Yet other ways exist to generate spaces from systems. There is the
+concrete functor $F^k$ already given in [7], constructed using the (sub)basis
+
+$$
+\{\underline{a} \wedge \chi_{ext(b)} : (a, b) \in A \times A\},
+$$
+
+and which can be factored at the object level as
+
+$$
+F^k = [( )\text{-}\mathbf{Top} \to \mathbf{Loc\text{-}Top}] \circ G_\omega \circ Ext.
+$$
+
+Also one might define yet another concrete functor $F^\omega$ by choosing
+
+$$
+\tau^{\omega} = \langle\langle\{u : X \to A \mid \forall a \in A, [u \not\leq a] \in ext^{\to}(A)\}\rangle\rangle,
+$$
+
+which can be factored at the object level as
+
+$$
+F^{\omega} = [() - \mathbf{Top} \to \mathbf{Loc-Top}] \circ \omega_{( )} \circ \mathrm{Ext}.
+$$
+
+Then $F^\omega$ coincides with $F^k$ when restricted to the full subcategory of **TopSys** in which the sets of predicates are completely distributive. The behaviors of $F_\omega$ and $F^\omega$ are open questions perhaps related to the open question stated immediately above Example 4.3.3.
+
+**Discussion 5.4.7** (possible applications). There could be potential applicability of $F_{\vdash}$, $F_k$ to topological systems. It is known that **TopSys** lacks initial and final structures (Lemma 3.1.5, Theorem 3.1.6), and these embeddings may help mitigate this lack, given that **Loc-Top** has all initial and final structures (Theorem 5.2.4). The case for initial structures is now discussed; the final structures case is dual and left to the reader.
+Recall the forgetful functors $T: \textbf{TopSys} \to \textbf{Set} \times \textbf{Loc}$ and $V: \textbf{Loc-Top} \to \textbf{Set} \times \textbf{Loc}$, and take a $T$-structured source
+
+$$
+((f_{\gamma}, \varphi_{\gamma}) : (X, A) \to T(X_{\gamma}, A_{\gamma}, \vdash_{\gamma}))_{\gamma \in \Gamma}
+$$
+---PAGE_BREAK---
+
+in **Set** × **Loc** lacking a unique initial lift in **TopSys**. Because both $F_{\vdash}, F_k$ are concrete—
+
+$$T = V \circ F_{\vdash}, \quad T = V \circ F_k,$$
+
+then this *T*-structured source may be interpreted as both of these *V*-structured sources:
+
+$$((f_{\gamma}, \varphi_{\gamma}) : (X, A) \rightarrow V (X_{\gamma}, A_{\gamma}, \tau_{\vdash_{\gamma}}))_{\gamma \in \Gamma},$$
+
+$$((f_{\gamma}, \varphi_{\gamma}) : (X, A) \rightarrow V (X_{\gamma}, A_{\gamma}, \tau_{k,\gamma}))_{\gamma \in \Gamma}.$$
+
+In each case there is, respectively, a unique, initial lift in **Loc-Top**:
+
+$$((f_{\gamma}, \varphi_{\gamma}) : (X, A, \tau_s) \rightarrow (X_{\gamma}, A_{\gamma}, \tau_{\vdash_{\gamma}}))_{\gamma \in \Gamma},$$
+
+$$((f_{\gamma}, \varphi_{\gamma}) : (X, A, \tau_t) \rightarrow (X_{\gamma}, A_{\gamma}, \tau_{k,\gamma}))_{\gamma \in \Gamma}.$$
+
+It remains to be seen whether programmers find these initial structures in **Loc-Top**, with **Loc-Top** filling in these structural “gaps” of **TopSys**, to be useful.
+
+## 6. GENERALIZATIONS AND FUTURE DIRECTIONS
+
+This section briefly outlines some generalizations of the ideas previously considered in this paper as well as indicating new directions of current research. These are considered in three subsections as separate themes for the sake of clarity; but we emphasize that ideas from these three subsections can and should be combined, and to some degree that is currently taking place.
+
+### 6.1. Lattice-valued satisfaction relations, Loc-TopSys, Loc-F²Top.
+
+For programming flexibility, it is appropriate to consider satisfaction relations for which a given bitstring satisfies a given predicate to a certain degree. Staying within the traditional finite observational logic of frames, this means introducing an additional frame as the lattice of “satisfaction degrees”. One question to be considered is whether this additional frame is to be common to all the systems under consideration—essentially a fixed-basis approach to degrees of satisfaction, or whether this additional frame should vary from system to system—essentially a variable-basis approach to degrees of satisfaction. In the interests of brevity, we take the second approach and proceed straightaway to the variable-basis approach, an approach which has some advantages regarding the potential applications of Discussion 5.4.7 above and blends results from [9, 71, 72, 73].
+
+The variable-basis approach to degrees of satisfaction necessitates a fundamental change in the ground category for systems as well as in the overlying category for topological systems.
+---PAGE_BREAK---
+
+**Definition 6.1.1.** The category **Set** × **Loc**² comprises the following data:
+
+(1) *Objects:* $(X, L, A)$, where $X$ is a set, $L$ is a frame, $A$ is a locale.
+
+(2) *Morphisms:* $(f, \varphi, \psi) : (X, L, A) \to (Y, M, B)$, where $f: X \to Y$ is in **Set** and $\varphi: L \to M$, $\psi: A \to B$ are in **Loc**, i.e., $\varphi^{op}: L \leftarrow M$, $\psi^{op}: A \leftarrow B$ are in **Frm**.
+
+(3) *Composition, identities:* component-wise from **Set** and **Loc**.
+
+It can be shown that **Set** × **Loc**² is both complete and cocomplete.
+
+**Definition 6.1.2 (variable-basis topological systems).** The category **Loc-TopSys** of topological systems and continuous mappings has ground category **Set** × **Loc**² and comprises data subject to axioms as follows:
+
+(1) *Objects:* $(X, L, A, \vdash)$, where $(X, L, A) \in |\mathbf{Set} \times \mathbf{Loc}^2|$ and $\vdash : X \times A \to L$ is an (*L*-valued) satisfaction relation possessing the arbitrary $\vee$ and finite $\wedge$ interchange laws:
+
+$$
+\forall x \in X, \forall \{a_\gamma\}_{\gamma \in \Gamma} \subset A, \vdash \left( x, \bigvee_{\gamma \in \Gamma} a_\gamma \right) = \bigvee_{\gamma \in \Gamma} \vdash (x, a_\gamma);
+$$
+
+$$
+\forall x \in X, \forall \{a_\gamma\}_{\gamma \in \Gamma} \subset A, \vdash \left( x, \bigwedge_{\gamma \in \Gamma} a_\gamma \right) = \bigwedge_{\gamma \in \Gamma} \vdash (x, a_\gamma) (\Gamma \text{ finite}).
+$$
+
+The set $X$ in some examples could be interpreted as bitstrings; $L$ may be interpreted as a frame of satisfaction values; $A$ may be interpreted as a locale of (open) predicates; and $\vdash (x,a)$ may be said to be the degree to which (bitstring) $x$ satisfies (predicate) $a$.
+
+(2) *Morphisms:* $(f, \varphi, \psi) : (X, L, A, \vdash_1) \to (Y, M, B, \vdash_2)$, where $(f, \varphi, \psi) : (X, L, A) \to (Y, M, B)$ in **Set** × **Loc**² and $(f, \varphi, \psi)$ satisfies *adjointness*:
+
+$$
+\forall b \in B, \forall x \in X, \vdash_1 (x, \psi^{op}(b)) = \varphi^{op}[\vdash_2 (f(x), b)].
+$$
+
+(3) *Composition, identities:* from **Set** × **Loc**².
+
+The adjointness condition in this new setting is saying that the degree
+$\vdash_1 (x, \psi^{op}(b))$ to which an input satisfies the pullback of a postcondition
+predicate is the same as the shift by $\varphi^{op}$ of the degree $\vdash_2 (f(x), b)$ to
+which the corresponding output satisfies the postcondition predicate. If
+it were the case that $L = M$ and $\varphi^{op} = id_L$, then it would be the case
+that adjointness would be insisting that the two degrees of satisfaction be
+the same.
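+The interchange laws of Definition 6.1.2(1) can be brute-force checked on a finite model. The frames, the bitstrings, and the maps $h_x$ below are assumptions for this sketch: taking each $\vdash(x, -)$ to be a preimage map makes it a frame map, so the laws should hold.
+
+```python
+from itertools import chain, combinations
+
+def powerset(s):
+    s = tuple(s)
+    return [frozenset(c) for c in chain.from_iterable(
+        combinations(s, r) for r in range(len(s) + 1))]
+
+# Illustrative frames: predicates A = P({'p','q'}), satisfaction values
+# L = P({0,1}).  For each bitstring x, |-(x, -) : A -> L is taken to be
+# the preimage along a chosen map h_x : {0,1} -> {'p','q'}; preimage
+# maps preserve arbitrary unions and intersections.
+A = powerset({'p', 'q'})
+H = {'x0': {0: 'p', 1: 'p'},       # h_x0 (constant)
+     'x1': {0: 'p', 1: 'q'}}      # h_x1 (bijective)
+
+def sat(x, a):                     # |-(x, a) = h_x^{-1}(a)
+    return frozenset(s for s, t in H[x].items() if t in a)
+
+for x in H:
+    # empty join: |-(x, bottom) = bottom
+    assert sat(x, frozenset()) == frozenset()
+    # join and (finite) meet interchange, on all pairs of predicates
+    for a in A:
+        for b in A:
+            assert sat(x, a | b) == sat(x, a) | sat(x, b)
+            assert sat(x, a & b) == sat(x, a) & sat(x, b)
+```
+
+Joins and meets in both powerset frames are unions and intersections, so the two interchange laws reduce to the familiar preservation properties of preimages.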
+
+To describe the behavior of **Loc-TopSys**, the appropriate forgetful
+functor needs to be recorded. Put $W : \mathbf{Loc-TopSys} \to \mathbf{Set} \times \mathbf{Loc}^2$ by
+
+$$
+W(X, L, A, \vdash) = (X, L, A), \quad W(f, \varphi, \psi) = (f, \varphi, \psi).
+$$
+---PAGE_BREAK---
+
+**Theorem 6.1.3** (categorical properties of **Loc-TopSys**) [9].
+
+(1) *The functor W reflects isomorphisms and is transportable and (generating, mono-source)-factorizable.*
+
+(2) **Loc-TopSys** is essentially algebraic over **Set** × **Loc**² w.r.t. W.
+
+(3) **Loc-TopSys** *is complete and cocomplete.*
+
+(4) W-structured sources [sinks] need not have unique initial [final] lifts, not even singleton W-structured sources.
+
+(5) **Loc-TopSys** is not topological over **Set** × **Loc**² w.r.t. W. In fact, **Loc-TopSys** is neither mono-, nor epi-, nor (small) existentially, nor (small) essentially topological over **Set** × **Loc**² w.r.t. W.
+
+So as with TopSys, Loc-TopSys is algebraic and non-topological in
+nature. However, it is closely related to topology in various ways, as we
+see in the sequel.
+
+**Theorem 6.1.4.** *Loc-TopSys* is a supercategory up to isomorphism of both *TopSys* and *Loc-Top*. More precisely, the following hold:
+
+$$
+\begin{array}{l@{\quad}c@{\quad}l}
+\text{(1)} & E_{\text{TopSys}} : \text{TopSys} \to \text{Loc-TopSys}, & \text{defined by} \\
+& E_{\text{TopSys}} (X, A, \vdash) = (X, \mathbf{2}, A, \vdash_\mathbf{2}), & \\
+& \text{where} \\
+& \vdash_\mathbf{2} (x, a) = \left\{ \begin{array}{ll} \top, & x \vdash a \\ \bot, & x \nvdash a \end{array} \right., & \\
+& \text{and} \\
+& E_{\text{TopSys}} (f, \varphi) = (f, \mathrm{id}_\mathbf{2}, \varphi), & \\
+& \text{is a functorial embedding.}
+\end{array}
+$$
+
+$$
+\begin{array}{l}
+(2) \quad E_{\text{Loc-Top}} : \text{Loc-Top} \rightarrow \text{Loc-TopSys}, \text{ defined by} \\
+\qquad E_{\text{Loc-Top}} (X, L, \tau) = (X, L, \tau, \vdash), \\
+\text{where} \\
+\qquad \vdash : X \times \tau \rightarrow L \quad \text{by} \quad \vdash (x, u) = u(x), \\
+\text{and} \\
+\qquad E_{\text{Loc-Top}} ((f, \varphi) : (X, L, \tau) \rightarrow (Y, M, \sigma)) = \\
+\qquad \quad [(f, \varphi, ((f, \varphi)^{\leftarrow}|_{\sigma})^{op}) : (X, L, \tau, \vdash) \rightarrow (Y, M, \sigma, \vdash)] ,
+\end{array}
+$$
+
+where $(f, \varphi)^{\leftarrow}$ is given in Definition 5.1.2(1), is a functorial embedding.
+
+The first embedding interprets each traditional satisfaction relation as a “crisp” relation; and the second embedding is exactly a many-valued version of $E_V$ in which degrees of membership in an open set are reinterpreted as degrees of satisfaction with respect to that open set viewed
+---PAGE_BREAK---
+
+as a predicate. Both of these embeddings are part of adjoint relation-
+ships which parallel and extend those discussed in Theorems 3.2.3 and
+3.2.6 above; see [9] for more details—the adjoint of $E_{\text{Loc-Top}}$ involves a
+lattice-valued version of extent parallel to that used in Discussion 6.2.6
+below.
+
+**Discussion 6.1.5** (Discussion 5.4.7 revisited). The potential utility of the embedding $E_{\text{Loc-Top}}$ involves giving a systems solution to the problem posed in Discussion 5.4.7 above. Recall that $T$-structured sources [sinks] in **Set** × **Loc** lacking unique initial [final] lifts in **TopSys** can be furnished these lifts in **Loc-Top** as spaces via both $F_{\vdash}$, $F_k$. Note these lifts are space solutions to a systems problem. Now these space solutions can be moved into **Loc-TopSys** as systems via $E_{\text{Loc-Top}}$. More precisely, these latter systems are solutions of the original $T$-lifting problem with respect to the functor $P: \textbf{Loc-TopSys} \to \textbf{Set} \times \textbf{Loc}$, given by
+
+$$P(X, L, A, \vdash) = (X, L), \quad P(f, \varphi, \psi) = (f, \varphi),$$
+
+and the following commutativities:
+
+$$T = V \circ F_{\vdash}, \quad T = V \circ F_k, \quad P \circ E_{\text{Loc-Top}} = V.$$
+
+We now shift our attention to Kubiak-Šostak topologies [34, 39, 74] and their variable-basis generalizations first given in [62], one such generalization being the category **Loc-FTop**. Our purpose here is to give a category **Loc-F²Top** [9] which is a variation of **Loc-FTop**, but which is more explicitly and conveniently tied to topological systems.
+
+The category **Loc-F²Top** has ground category **Set** × **Loc**²; but in order to motivate **Loc-F²Top**, each ground object (X, L, A) from **Set** × **Loc**² needs reinterpretation:
+
+* For systems, X in some examples may be interpreted as bitstrings, L as satisfaction values, and A as (open) predicates.
+
+* For spaces, X should be interpreted as points, L as degrees of openness, A as degrees of membership—the latter two comprising variable bases (hence the exponent “2” in **Loc-F²Top**).
+
+**Definition 6.1.6** (topologies as openness operators). The category **Loc-F²Top** of fuzzy topological spaces and fuzzy continuous mappings has ground category **Set** × **Loc**² and comprises data subject to axioms as follows:
+
+(1) *Objects*: $(X, L, A, \mathcal{T})$, where $\mathcal{T}: A^X \to L$ is an $(A, L)$-topology,
+cf. [21, 39, 62]; i.e., $\mathcal{T}$ satisfies:
+
+* $\forall \{u_\gamma\}_{\gamma \in \Gamma} \subset A^X$, $\mathcal{T}(\bigvee_{\gamma \in \Gamma} u_\gamma) \ge \bigwedge_{\gamma \in \Gamma} \mathcal{T}(u_\gamma)$;
+
+* $\forall u, v \in A^X$, $\mathcal{T}(u \wedge v) \geq \mathcal{T}(u) \wedge \mathcal{T}(v)$; and
+---PAGE_BREAK---
+
+• $\mathcal{T}(\top) = \top$.
+
+It is a fact that $\mathcal{T}(\bot) = \top$ (take $\Gamma = \emptyset$ in the first axiom). $\mathcal{T}$ is called an $(A, L)$-fuzzy topology on $(X, L, A)$, and $(X, L, A, \mathcal{T})$ is an $(A, L)$-fuzzy topological space; $\mathcal{T}(u)$ is the degree of openness of $u$ in $\mathcal{T}$; and $\mathcal{T}$ is an openness predicate/operator.
+
+(2) *Morphisms*: $(f, \varphi, \psi) : (X, L, A, \mathcal{T}) \to (Y, M, B, S)$ satisfies
+$$ \mathcal{T} \circ (f, \psi)^\leftarrow \geq \varphi^{op} \circ S; $$
+i.e., $\forall v \in B^Y$,
+$$ \mathcal{T}[(f, \psi)^\leftarrow(v)] \geq \varphi^{op}(S(v)), $$
+cf. [62, 9]. It is a fact that $\mathcal{T}[(f, \psi)^\leftarrow(\top_B)] = \mathcal{T}[(f, \psi)^\leftarrow(\bot_B)] = \top_L$.
+
+(3) *Composition, identities*: from **Set** × **Loc**².
+
+Note the following: the openness of a union [binary intersection] is no less than the meet of the opennesses of its constituents; the “whole carrier set” and the “empty set” are fully open; $\mathcal{T} \in L^{(A^X)}$, consistent with the exponent in **Loc-F**²**Top**; and the preimage of a subset from the codomain is at least as open in the domain as the original subset was in the codomain, *modulo* the shift by $\varphi^{op}$.
+
+The forgetful functor $F : \mathbf{Loc-F}^2\mathbf{Top} \to \mathbf{Set} \times \mathbf{Loc}^2$, given by
+
+$$ F(X, L, A, \mathcal{T}) = (X, L, A), \quad F(f, \varphi, \psi) = (f, \varphi, \psi), $$
+
+is used to describe the categorical behavior of **Loc-F**²**Top**.
+
+**Theorem 6.1.7** (categorical properties of **Loc-F**²**Top**) [9].
+
+(1) Each F-structured source [sink] in Set × Loc² has a unique initial
+[final] lift to Loc-F²Top.
+
+(2) **Loc-F**²**Top** is topological over **Set** × **Loc**² w.r.t. F.
+
+(3) **Loc-F**²**Top** is complete and cocomplete.
+
+We now come to the climax of this subsection: embedding **Loc-TopSys**
+into **Loc-F**²**Top**; it will then follow that **Loc-Top** embeds into **Loc-
+F**²**Top**.
+
+**Theorem 6.1.8.** **Loc-F**²**Top** *is a supercategory up to isomorphism of* **Loc-TopSys**. *Specifically,* $F_{\vdash}^* : \textbf{Loc-TopSys} \to \textbf{Loc-F}^2\textbf{Top}$, *defined by*
+
+$$ F_{\vdash}^{*}(X, L, A, \vdash) = (X, L, A, \mathcal{T}_{\vdash}), $$
+
+where $\mathcal{T}_{\vdash}: A^X \rightarrow L$ by
+
+$$ \mathcal{T}_{\vdash}(u) = \begin{cases} \bigwedge_{x \in X} \vdash (x, u(x)), & u \neq \bot \\ \top, & u = \bot \end{cases}, $$
+
+and
+
+$$ F_{\vdash}^{*}(f, \varphi, \psi) = (f, \varphi, \psi), $$
+
+is a concrete functorial embedding.
+---PAGE_BREAK---
+
+Note the degree of openness of $u$ is the least degree to which every point in $X$ satisfies its degree of membership in $u$, showing that $\mathcal{T}_{\vdash}$ is an extension of the $\tau_{\vdash}$ constructed by the embedding $F_{\vdash}$ in Theorem 5.4.2 above, and therefore that $F_{\vdash}^*$ is an extension of $F_{\vdash}$, explaining the notation " $F_{\vdash}^*$". Thus we have in this situation a new satisfaction embedding. The issue of $F_{\vdash}^*$ extending $F_{\vdash}$ can be made more precise—namely it is the case, using Theorems 6.1.4 and 6.1.8, that
+
+$$F_{\vdash}^* \circ E_{\text{TopSys}} = E_{\text{Loc-Top}}^* \circ F_{\vdash},$$
+
+where $E_{\text{Loc-Top}}^* : \text{Loc-Top} \to \text{Loc-F}^2\text{Top}$ by
+
+$$(X, L, \tau) \mapsto (X, \mathbf{2}, L, \chi_{\tau}), \quad (f, \varphi) \mapsto (f, id_\mathbf{2}, \varphi).$$
+
+Is there a corresponding extension of the embedding $F_k$ to a new embedding of **Loc-TopSys** into **Loc-F**²**Top**? This is a question currently under investigation by the authors.
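+The openness operator $\mathcal{T}_{\vdash}$ of Theorem 6.1.8 can be exercised on a small finite model and the fuzzy-topology axioms of Definition 6.1.6 checked on binary families. All frames and the satisfaction relation below are illustrative assumptions of the sketch.
+
+```python
+from itertools import product, chain, combinations
+
+def powerset(s):
+    s = tuple(s)
+    return [frozenset(c) for c in chain.from_iterable(
+        combinations(s, r) for r in range(len(s) + 1))]
+
+# Illustrative data: membership locale A = P({'p','q'}), satisfaction
+# frame L = P({0,1}), and |-(x, -) a preimage map, as in the earlier
+# interchange-law sketch.
+A = powerset({'p', 'q'})
+TOP_L = frozenset({0, 1})
+H = {'x0': {0: 'p', 1: 'p'}, 'x1': {0: 'p', 1: 'q'}}
+X = tuple(H)
+
+def sat(x, a):                     # |-(x, a) = h_x^{-1}(a)
+    return frozenset(s for s, t in H[x].items() if t in a)
+
+def T(u):
+    # T_|-(u): the meet over x of |-(x, u(x)); top when u is bottom
+    if all(u[x] == frozenset() for x in X):
+        return TOP_L
+    out = TOP_L
+    for x in X:
+        out = out & sat(x, u[x])
+    return out
+
+maps = [dict(zip(X, vals)) for vals in product(A, repeat=len(X))]
+top_u = {x: frozenset({'p', 'q'}) for x in X}
+
+assert T(top_u) == TOP_L                   # T(top) = top
+for u in maps:                             # fuzzy-topology axioms,
+    for v in maps:                         # checked on binary families
+        join = {x: u[x] | v[x] for x in X}
+        meet = {x: u[x] & v[x] for x in X}
+        assert T(join) >= T(u) & T(v)
+        assert T(meet) >= T(u) & T(v)
+```
+
+Here `>=` on frozensets is the superset test, i.e., the order of the satisfaction frame $L$, so the asserted inequalities are exactly the two axioms restricted to two-element families.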
+
+**Discussion 6.1.9** (cf. Discussion 5.4.7). Since **Loc-TopSys** and **Loc-F**²**Top** share the same ground category **Set** × **Loc**², and since $F_{\vdash}^*$ factors the forgetful functor $W$ through the forgetful functor $F$, in the same manner as in Discussion 5.4.7 the initial/final structures missing in **Loc-TopSys** can be supplied in **Loc-F**²**Top**.
+
+**Discussion 6.1.10** (interweaving of algebra and topology). Hermann Weyl is claimed to have said, “The house of mathematics is built upon the twin pillars of algebra and topology.” Systems and spaces as considered in this paper illustrate how these two pillars interweave and reinforce each other. Categorically, we have from above:
+
+$$\text{Top}, \text{L-Top}'s \to \text{TopSys} \to \text{Loc-Top} \to \text{Loc-TopSys} \to \text{Loc-F}^2\text{Top}.$$
+
+Using the categorical properties catalogued above, we metamathematically have
+
+$$\text{topology} \to \text{algebra} \to \text{topology} \to \text{algebra} \to \text{topology}. $$
+
+It is conjectured that this pattern continues indefinitely to the right.
+
+## 6.2. Non-commutativity and generalized structures.
+In programming applications, the conjunction of predicates is frequently not commutative. This section briefly explores this issue. We begin with a motivating example.
+
+**Example 6.2.1.** Suppose there are two numerical variables $x$ and $y$ in use. Traditional truth values are used, and the program in question operates as follows:
+
+* The variable $y$ tells how many times a website has been looked at, i.e., $y$ holds a copy of the value in the website's counter.
+---PAGE_BREAK---
+
+* Whenever $y$ is used in an expression, the website (which $y$ counts) is accessed or looked at *before* the value in $y$ is used or read so that $y$ is always kept updated.
+
+* Each time $y$ is read, its value increases by one.
+
+* The website is not accessed when conjunctions of predicates are formed.
+
+* Conjunctions of predicates are read in the order left-to-right; i.e., the conjunction $P \land Q$ is read in the order $P, Q$.
+
+Now assume that the current value in $x$ is 9 and in $y$ is 8; i.e., $x = 9, y = 8$. Now form the predicates $P$ and $Q$ as follows:
+
+$$ P : [x = y]; \quad Q : [y \geq 10]. $$
+
+(1) What is the truth value of $P \land Q$? When $P$ is read, the value of $y$ changes from 8 to 9; when $Q$ is read, the value of $y$ changes from 9 to 10; so that when $P \land Q$ is read, both $P$ and $Q$ are true, and hence $P \land Q$ is **true**.
+
+(2) Now what is the truth value of $Q \land P$, again assuming that $x = 9, y = 8$? When $Q$ is read, the value of $y$ changes from 8 to 9; when $P$ is read, the value of $y$ changes from 9 to 10; so that when $Q \land P$ is read, both $Q$ and $P$ are false, and hence $Q \land P$ is **false**.
+
+(3) So in this example, $P \land Q$ and $Q \land P$ are not equivalent—the conjunction is sensitive to the order in which the predicates are read.
+
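The stateful behaviour in this example can be simulated directly; the sketch below (Python; the names `Website` and `conj` are ours, purely illustrative) models the counter-backed variable $y$ and replays cases (1) and (2).

```python
class Website:
    """Hypothetical model of the counter-backed variable y in Example 6.2.1."""
    def __init__(self, count):
        self.count = count

    def read(self):
        # The website is accessed *before* the value of y is used,
        # so each read first increments the counter.
        self.count += 1
        return self.count

def conj(preds, y):
    # Conjunction is read left to right; each predicate reads y once.
    # (Python's `all` short-circuits, but here the truth value agrees
    # with reading every predicate, as the example shows.)
    return all(p(y.read()) for p in preds)

x = 9
P = lambda yv: x == yv     # P : [x = y]
Q = lambda yv: yv >= 10    # Q : [y >= 10]

print(conj([P, Q], Website(8)))  # case (1): P and Q -> True
print(conj([Q, P], Website(8)))  # case (2): Q and P -> False
```

Read in one order the conjunction holds; in the other it fails, so the operation modeling conjunction here cannot be commutative.
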
+In the preceding sections the “Lindenbaum algebra” of finite observational logic has been taken to be a frame; and in a frame, the conjunction is modeled by the binary $\land$, which is commutative. The above example shows there is justification in having a structure in which the operation modeling conjunction is not commutative; and hence, there is justification in having a structure allowing for non-commutative cases.
+
+There are several non-commutative approaches to systems: topological systems as motivated by attachment relations and in which predicates form residuated lattices [19, 20]; topological systems with predicates forming algebraic varieties [71, 72, 73]; and Chu systems with predicates forming “flat” sets [11].
+
+The approach to topological systems outlined below is based on the notion of a unital quantale [68, 24]; but it is only one of several generalizations. Associated with such systems are “extent” topological spaces with non-commutative topologies—topologies in which the intersection of open sets need not be commutative. What recommends unital quantales is that they need not have commutative tensor products to express conjunctions, but they retain the infinite distributive laws which are part
+---PAGE_BREAK---
+
+of finite observational logic, and hence are appropriate models of non-commutative finite observational logic.
+
+**Definition 6.2.2** (unital quantales). A unital quantale $(L, \leq, \otimes, e)$ comprises data subject to axioms as follows:
+
+(1) $(L, \leq)$ is a complete lattice.
+
+(2) $\otimes : L \times L \rightarrow L$ is an associative binary operation, called tensor product, satisfying both left and right infinite distributive laws: $\forall a \in L, \forall \{b_\gamma\}_{\gamma \in \Gamma} \subset L$,
+
+$$a \otimes \left( \bigvee_{\gamma \in \Gamma} b_\gamma \right) = \bigvee_{\gamma \in \Gamma} (a \otimes b_\gamma),$$
+
+$$\left( \bigvee_{\gamma \in \Gamma} b_\gamma \right) \otimes a = \bigvee_{\gamma \in \Gamma} (b_\gamma \otimes a).$$
+
+(3) $e$ is a two-sided identity or unit for $\otimes$: $\forall a \in L, a \otimes e = a = e \otimes a$.
+
+It should be noted that $\perp$ is a two-sided annihilator or zero for $\otimes$: $\forall a \in L, a \otimes \perp = \perp = \perp \otimes a$.
+
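On a finite lattice the axioms of Definition 6.2.2 can be verified exhaustively; the sketch below (Python; a brute-force check, not part of the theory) does so for the Łukasiewicz tensor of Example 6.2.4(2) on a five-element dyadic chain, where floating-point arithmetic is exact.

```python
from itertools import chain, combinations, product

# A finite model of Definition 6.2.2: a dyadic chain L with the
# Lukasiewicz t-norm as tensor and e = 1 as unit.
L = [0.0, 0.25, 0.5, 0.75, 1.0]
tensor = lambda a, b: max(0.0, a + b - 1.0)
e = 1.0
join = lambda s: max(s, default=0.0)  # the empty join is bottom

subsets = lambda xs: chain.from_iterable(
    combinations(xs, r) for r in range(len(xs) + 1))

# (2) associativity and both infinite (here: finite) distributive laws
assert all(tensor(a, tensor(b, c)) == tensor(tensor(a, b), c)
           for a, b, c in product(L, repeat=3))
assert all(tensor(a, join(S)) == join([tensor(a, b) for b in S]) and
           tensor(join(S), a) == join([tensor(b, a) for b in S])
           for a in L for S in map(list, subsets(L)))

# (3) e is a two-sided unit; bottom is a two-sided annihilator, as noted
assert all(tensor(a, e) == a == tensor(e, a) for a in L)
assert all(tensor(a, 0.0) == 0.0 == tensor(0.0, a) for a in L)
print("the Lukasiewicz chain is a (commutative) unital quantale")
```
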
+**Definition 6.2.3** (category of unital quantales). The category **UQuant** of unital quantales and unital quantalic mappings comprises data subject to axioms as follows:
+
+(1) *Objects*: Unital quantales as defined in 6.2.2 above.
+
+(2) *Morphisms*: $f: (L, \leq, \otimes_1, e_1) \to (M, \leq, \otimes_2, e_2)$, where $f: L \to M$ is a mapping which preserves arbitrary $\vee$ and the tensors—$f \circ \otimes_1 = \otimes_2 \circ (f \times f)$ and the units—$f(e_1) = e_2$.
+
+(3) *Composition, identities*: from **Set**.
+
+For convenience below, the opposite category **UQuant**$^{op}$ is denoted **LoUQuant**, the prefix “Lo” motivated by **Loc** being the opposite category of **Frm**.
+
+**Examples 6.2.4.** Some example classes of unital quantales include the following:
+
+(1) $L$ any frame with $\otimes$ the binary meet and $e = \top$. These are commutative, unital quantales.
+
+(2) $L = [0,1]$ with $\otimes$ any t-norm (e.g., $\otimes = \wedge$, multiplication, or Łukasiewicz conjunction) and $e = 1$. These are also commutative, unital quantales.
+
+(3) Let $L$ be any (join-)complete lattice, put
+
+$$S(L) = \{ g : L \to L \mid g \text{ preserves arbitrary } \vee \},$$
+---PAGE_BREAK---
+
+and equip $S(L)$ with the point-wise order, with $\otimes$ as functional composition $\circ$, and with $e$ as $id_L$. It should be noted that $e$ is not the top element **1** of $S(L)$ defined as
+
+$$ \mathbf{1}(a) = \left\{ \begin{array}{ll} \top, & a > \bot \\ \bot, & a = \bot \end{array} \right. . $$
+
+It can be shown that each $(S(L), \leq, \circ, id_L)$ is a unital quantale, which is usually *non-commutative*. It is also the case that $L$ order-embeds into $S(L)$ via $\eta_L : L \to S(L)$ defined by
+
+$$ \eta_L (a) (b) = \begin{cases} a, & b > \bot \\ \bot, & b = \bot \end{cases} . $$
+
+Detailed analysis of $\eta_L$ and its role in relating **UQuant** to **CS-Lat**($\mathcal{V}$) is given in [65].
+
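For a concrete feel of class (3), the sketch below (Python; representations and names are ours) enumerates $S(L)$ for the three-element chain and exhibits the non-commutativity of $\otimes = \circ$.

```python
from itertools import product

# Example 6.2.4(3) on the three-element chain L = {0, 1, 2}: a map g is
# represented by the tuple (g(0), g(1), g(2)).
L = (0, 1, 2)

def preserves_sups(g):
    # On a finite chain, preserving arbitrary joins amounts to monotonicity
    # together with g(bottom) = bottom (the empty join).
    return g[0] == 0 and all(g[a] <= g[b] for a in L for b in L if a <= b)

S = [g for g in product(L, repeat=3) if preserves_sups(g)]
compose = lambda g, h: tuple(g[h[a]] for a in L)  # the tensor of S(L)
ident = (0, 1, 2)                                 # the unit e = id_L

assert all(compose(g, ident) == g == compose(ident, g) for g in S)
noncommuting = [(g, h) for g in S for h in S if compose(g, h) != compose(h, g)]
print(len(S), "maps in S(L);", len(noncommuting), "non-commuting pairs")
```

So $S(L)$ is unital with unit $id_L$, but its tensor already fails commutativity for a three-element chain.
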
+A category of topological systems based upon unital quantales can now
+be given.
+
+**Definition 6.2.5** (quantale-based topological systems). The category **LoUQuant-TopSys** has ground category **Set** × **LoUQuant**² and comprises data subject to axioms as follows:
+
+(1) *Objects*: $(X, L, A, \vdash)$, where $(X, L, A) \in |\text{Set} \times \text{LoUQuant}^2|$ and $\vdash : X \times A \rightarrow L$ is an (*L*-valued) satisfaction relation possessing the arbitrary $\vee$ and $\otimes$ and unitary interchange laws:
+
+$$
+\begin{align*}
+& \forall x \in X, \forall \{a_\gamma\}_{\gamma \in \Gamma} \subset A, \quad \vdash \left( x, \bigvee_{\gamma \in \Gamma} a_\gamma \right) = \bigvee_{\gamma \in \Gamma} \vdash (x, a_\gamma); \\
+& \forall x \in X, \forall a, b \in A, \quad \vdash (x, a \otimes b) = \vdash (x, a) \otimes \vdash (x, b), \\
+& \forall x \in X, \quad \vdash (x, e_A) = e_L,
+\end{align*}
+$$
+
+where $\otimes$ stands for the tensors on $A$ and $L$, and $e_A$ is the unit for
+the tensor on $A$.
+
+(2) *Morphisms*: $(f, \varphi, \psi) : (X, L, A, \vdash_1) \to (Y, M, B, \vdash_2)$, where $(f, \varphi, \psi) : (X, L, A) \to (Y, M, B)$ in **Set** × **LoUQuant**² and $(f, \varphi, \psi)$ satisfies *adjointness*:
+
+$$ \forall b \in B, \forall x \in X, \vdash_1 (x, \psi^{op}(b)) = \varphi^{op}[\vdash_2 (f(x), b)]. $$
+
+(3) *Composition, identities:* from **Set** × **LoUQuant**².
+
+**Discussion 6.2.6.** The type of topology associated with topological
+systems based on unital quantales can be explored by constructing the
+appropriate notion of “extent” to mimic the relationship between **TopSys** and **Top**. Such a notion of extent for systems from **Loc-TopSys** is
+already available [9] which constructs the (adjoint) relationship between
+---PAGE_BREAK---
+
+**Loc-TopSys** and **Loc-Top**. Let $(X, L, A, \vdash) \in |\text{LoUQuant-TopSys}|$, and put
+
+$$ext : A \to L^X \quad \text{by} \quad ext(a) : X \to L \quad \text{by} \quad ext(a)(x) = \vdash (x, a).$$
+
+Then it can be shown that $ext^\rightarrow(A)$ is a subset of $L^X$ closed under arbitrary $\vee$ (a “union” condition) and closed under tensor products as lifted to $L^X$ from $L$ (an “intersection” condition); it is also the case that $e_L \in ext^\rightarrow(A)$, where
+
+$$e_L : X \to L \quad \text{by} \quad e_L(x) = e_L.$$
+
+It is also a consequence of the union condition that $\perp \in ext^\rightarrow(A)$. To summarize, $ext^\rightarrow(A)$ may be viewed as closed under arbitrary “unions” and binary “intersections” and containing the “whole space” and “empty set”, and hence it should be viewed as a kind of topology and $(X, L, ext^\rightarrow(A))$ as a kind of topological space. But these are topologies and spaces in which the intersection *need not be commutative*. Such spaces have been seen before, e.g., in [25, 62], in which are found spaces over complete quasi-monoidal lattices, structures which need not have commutative tensor products; but such structures need not be residuated and hence lack the structure appropriate for topological systems.
+
+**Definition 6.2.7** (quantale-based topologies). The category **LoUQuant-Top** has ground category **Set** × **LoUQuant** and comprises data subject to axioms as follows:
+
+(1) *Objects*: $(X, L, \tau)$, where $(X, L) \in |\mathbf{Set} \times \mathbf{LoUQuant}|$ and $\tau \subset L^X$ is closed under arbitrary $\vee$ and tensor products and contains $e_L$.
+
+(2) *Morphisms*: $(f, \varphi) : (X, L, \tau) \to (Y, M, \sigma)$, where $(f, \varphi) : (X, L) \to (Y, M)$ in **Set** × **LoUQuant** and $((f, \varphi)^\leftarrow)^\rightarrow (\sigma) \subset \tau$, i.e.,
+
+$$\forall v \in \sigma, (f, \varphi)^\leftarrow(v) \in \tau.$$
+
+(3) *Composition, identities*: from **Set** × **LoUQuant**.
+
+The proofs of [62] can be adapted to show that **LoUQuant-Top** is topological over **Set** × **LoUQuant** w.r.t. the expected forgetful functor. Next, **LoUQuant-Top** embeds into **LoUQuant-TopSys** by a functor analogous to $E_{\text{Loc-Top}}$ given in Theorem 6.1.4(2) above; and an adjoint functor can be given which is based on the extent topologies and spaces of Discussion 6.2.6 above. Finally, it is possible to again follow the pattern begun in [62] of building variable-basis frameworks for Kubiak-Šostak topologies and define a category **LoUQuant-F**²**Top**, analogous to **Loc-F**²**Top** in Definition 6.1.6 above, into which **LoUQuant-TopSys** concretely embeds.
+---PAGE_BREAK---
+
+This subsection has focused on unital quantales; but much work remains to be done to pin down the generalizations with greatest potential for applications of topological systems.
+
+## 6.3. Lattice-valued preorders and enriched/preordered topological systems and topological spaces.
+
+A question from programming arises: if bitstring $x$ compares with bitstring $y$ to some degree $\alpha$, and if bitstring $y$ satisfies predicate $a$ to some degree $\beta$, then how should the possibility be mathematically modeled that bitstring $x$ satisfies predicate $a$ to at least some degree related to both $\alpha$ and $\beta$? Many applications of (partial) responses to this question might exist in data-mining, a field in which pattern-matching is an important and commonly used method.
+
+As discussed in [8, 12, 13], ideas from enriched categories over monoidal categories address this question and enable pattern-matching techniques to be extended to many-valued contexts. In particular, notions of enriched category theory naturally lead to the notion of *L*-valued preorders, and then to topological systems enriched with frame-valued preorders and associated extent spaces as enriched (or preordered) many-valued topological spaces—the compatibility axioms for such systems and spaces allow us to answer the programming question posed above. We summarize these notions below and give an extensive inventory of example classes, including programming examples, and an extensive discussion of examples based on the *L*-spectrum of a locale outlined in Example 4.3.3 above.
+
+A partially ordered set in which each finite subset has a greatest lower bound, or meet, is a *meet semilattice*—it follows that such a poset has a greatest or top element $\top$.
+
+**Definition 6.3.1.** Let $L$ be a meet semilattice. Then a set $X$ has an *L-enrichment relation* or *L-(valued) preorder* $P$ on $X$ if $P$ satisfies:
+
+P1. $P: X \times X \rightarrow L$ is a mapping (*degrees of comparison*).
+
+P2. $\forall x \in X, P(x,x) = \top$ (*total existence* or *reflexivity*).
+
+P3. $\forall x,y,z \in X, P(x,y) \wedge P(y,z) \leq P(x,z)$ (*transitivity*).
+
+We may speak of $(X,P)$ as an *L-enriched set*—since it is an enriched category over the monoidal category $L$—or more often as an *L-preordered set*; and $(X,L,P)$ is an *enriched set* or *preordered set*, setting the stage for subsequent variable-basis settings in which the base $L$ may change from set to set.
+
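Axioms P1–P3 are easy to check mechanically on finite examples; the following sketch (Python; the helper name `is_L_preorder` is ours) validates them for a three-point instance of the ultrametric class appearing in Example 6.3.8(3) below.

```python
from itertools import product

# A finite validator for Definition 6.3.1: `meet` is the binary meet of L,
# `leq` its order, `top` its greatest element.
def is_L_preorder(X, top, meet, leq, P):
    reflexive = all(P(x, x) == top for x in X)                 # P2
    transitive = all(leq(meet(P(x, y), P(y, z)), P(x, z))      # P3
                     for x, y, z in product(X, repeat=3))
    return reflexive and transitive

# A 1-bounded ultrametric on three points (cf. Example 6.3.8(3)):
d = {(a, b): 0.0 if a == b else (0.5 if {a, b} == {"x", "y"} else 1.0)
     for a in "xyz" for b in "xyz"}
P = lambda a, b: 1.0 - d[(a, b)]

assert is_L_preorder("xyz", 1.0, min, lambda u, v: u <= v, P)
print("P = 1 - d is an L-preorder for L = [0,1]")
```
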
+**Definition 6.3.2 (category of frame-valued preordered sets).** The category **Loc-PreSet** has **Set** × **Loc** as a ground category and comprises data subject to axioms as follows:
+
+(1) *Objects*: $(X, L, P)$, where $L$ is a frame and $(X, L, P)$ is a preordered set.
+---PAGE_BREAK---
+
+(2) *Morphisms*: $(f, \varphi) : (X, L, P) \to (Y, M, Q)$, where $f: X \to Y$ is a mapping, $\varphi^{op} : L \leftarrow M$ is a frame morphism, and $\forall x, y \in X$,
+
+$$P(x,y) \leq \varphi^{op}[Q(f(x), f(y))].$$
+
+Such morphisms are said to be enriched or order-preserving or isotone.
+
+(3) *Compositions, identities:* from **Set** × **Loc**.
+
+Denote by **Set** × **Loc**($\bigwedge$) the subcategory of **Set** × **Loc** in which, for each morphism $(f, \varphi)$, $\varphi^{op}$ preserves arbitrary $\bigwedge$ as well as arbitrary $\bigvee$, and denote by **Loc**($\bigwedge$)-**PreSet** the subcategory of **Loc-PreSet** having ground category **Set** × **Loc**($\bigwedge$) and in which, for each morphism $(f, \varphi)$, $\varphi^{op}$ preserves arbitrary $\bigwedge$ as well as arbitrary $\bigvee$. It is shown in [12] that **Loc**($\bigwedge$)-**PreSet** is topological over **Set** × **Loc**($\bigwedge$) with respect to the expected forgetful functor, a result implying that **Loc**($\bigwedge$)-**PreSet** has no known degree of algebraicity over **Set** × **Loc**($\bigwedge$), as well as generalizing the fact [1] that the traditional category **PreSet** of preordered sets is topological over **Set** with respect to the expected forgetful functor.
+
+The next definition combines the notion of frame-valued preorders with
+the variable-basis notion of topological systems embodied in **Loc-TopSys**
+discussed in Subsection 6.1 above.
+
+**Definition 6.3.3** (enriched/preordered topological systems). **PreTopSys** has ground category **Loc-PreSet** × **Loc** and comprises the following data and axioms:
+
+(1) *Objects*: $((X, L, P), A, \vdash)$, or $(X, L, P, A, \vdash)$, called *enriched* or *preordered topological systems*, where:
+
+(a) $(X, L, P)$ is a *preordered set*, $A$ is a locale (*ground condition*);
+
+(b) $(X, L, A, \vdash)$ is a topological system in **Loc-TopSys**, i.e., $\vdash$ is an $L$-satisfaction relation on $(X, A)$ satisfying both arbitrary $\vee$ and finite $\wedge$ interchange laws (*topological system condition*);
+
+(c) $P$ and $\vdash$ are *compatible*, i.e., $\forall x, y \in X, \forall a \in A$,
+
+$$P(x,y) \wedge \vdash (y,a) \leq \vdash (x,a)$$
+
+(*compatibility condition*).
+
+(2) *Morphisms*: $(f, \varphi, \psi) : ((X, L, P), A, \vdash_1) \to ((Y, M, Q), B, \vdash_2)$,
+called *isotone continuous functions*, where:
+
+(a) $(f, \varphi) : (X, L, P) \to (Y, M, Q)$ is an isotone mapping, $\psi : A \to B$ is a localic morphism (*ground condition*)
+
+(b) $(f, \varphi, \psi) : (X, L, A, \vdash_1) \to (Y, M, B, \vdash_2)$ is a **Loc-TopSys** morphism (*continuity condition*).
+
+(3) *Composition, identities*: from **Loc-PreSet** × **Loc**.
+---PAGE_BREAK---
+
+It is the compatibility condition in the above definition which addresses
+the programming question posed at the beginning of this subsection.
+
+Now let $U : \mathbf{PreTopSys} \to \mathbf{Loc-PreSet} \times \mathbf{Loc}$ be the forgetful functor
+given by
+
+$$U(X, L, P, A, \vdash) = (X, L, P, A),$$
+
+$$U[(f, \varphi, \psi) : (X, L, P, A, \vdash) \to (Y, M, Q, B, \vdash)] =$$
+
+$$(f, \varphi, \psi) : (X, L, P, A) \to (Y, M, Q, B).$$
+
+**Theorem 6.3.4.** *PreTopSys* is neither quasi-algebraic [12] nor essentially topological nor existentially topological in the sense of [9] over **Loc-PreSet** × **Loc** w.r.t. *U*; and hence **PreTopSys** is neither essentially algebraic nor topological over **Loc-PreSet** × **Loc** w.r.t. *U*.
+
+This theorem suggests a comparison of **PreTopSys** with **TopGrp**, which is neither algebraic nor topological over **Set**; but it is known that **TopGrp** is essentially algebraic over **Top** and topological over **Grp**. It is ongoing work of the authors to resolve the question of whether **PreTopSys** can have a degree of algebraicity over one ground and a degree of topologicity over another ground, and to identify pairs of such grounds; e.g., the authors are studying the behavior of **PreTopSys** over **Loc-PreSet** and over **Loc-TopSys**. With regard to the latter category, **PreTopSys** has an adjoint relationship given by the expected concrete forgetful functor $G$: **PreTopSys** $\to$ **Loc-TopSys** and its left adjoint and embedding $H$: **Loc-TopSys** $\to$ **PreTopSys** constructed using the crisp equality relation
+
+$$E(x,y) = \begin{cases} \top, & x = y \\ \bot, & x \neq y \end{cases}$$
+
+as the needed lattice-valued preorders; and $H \dashv G$ is an isoreflection.
+
+**Discussion 6.3.5** (motivation of lattice-valued preordered topologies).
+As seen in Theorem 3.2.2 and Discussion 6.2.6, a type of topological sys-
+tem is generally matched with a corresponding type of topological space
+via the notion of extent. Recalling the notion of extent used in 6.2.6 for
+unital quantales, we use this notion instantiated for frames and $\otimes = \wedge$,
+the binary meet, to discover the kind of topological spaces associated
+with preordered topological systems. Let $(X, L, P, A, \vdash)$ be a preordered
+topological system, and recall $\text{ext}: A \to L^X$ given by
+
+$$\text{ext}(a) : X \to L \quad \text{by} \quad \text{ext}(a)(x) = \vdash (x, a).$$
+
+Now let $x, y \in X$. Then the compatibility axiom for $(X, L, P, A, \vdash)$ states
+that
+
+$$P(x,y) \wedge \vdash (y,a) \leq \vdash (x,a),$$
+---PAGE_BREAK---
+
+and hence
+
+$$P(x, y) \wedge ext(a)(y) \leq ext(a)(x).$$
+
+Finally, it is noted that $ext^{\rightarrow}(A)$ is an $L$-topology on $X$ and that its members $ext(a)$ are open sets. This leads to the next definition.
+
+**Definition 6.3.6 (category of preordered topological spaces).** The category **PreTop** of preordered topological spaces and isotone continuous mappings has ground category **Loc-PreSet** and comprises the following data and axioms:
+
+(1) *Objects:* $(X, L, P, \tau)$, where $(X, L, P)$ is a preordered set, $(X, L, \tau)$ is a topological space in **Loc-Top** (5.2.1), and $\tau$ satisfies the compatibility axiom:
+
+$$\forall x, y \in X, \forall u \in \tau, P(x, y) \wedge u(y) \leq u(x).$$
+
+(2) *Morphisms:* $(f, \varphi) : (X, L, P, \tau) \to (Y, M, Q, \sigma)$, where $(f, \varphi) : (X, L, P) \to (Y, M, Q)$ is a **Loc-PreSet** morphism and $(f, \varphi) : (X, L, \tau) \to (Y, M, \sigma)$ is a **Loc-Top** morphism.
+
+(3) *Composition, identities:* from **Loc-PreSet**.
+
+**Theorem 6.3.7 [12].** *PreTop* is a topological category over **Loc-PreSet** w.r.t. the expected forgetful functor.
+
+In [12], many-valued specializations are considered in the context of preordered topological spaces, and it is shown that these preorders satisfy a certain antisymmetry axiom if and only if the space is $L$-$T_0$, a separation axiom which is intrinsic to $L$-sobriety and the study of $L$-spectra (4.3.3 above) and which is the many-valued generalization of the traditional $T_0$ axiom.
+
+An inventory of example classes of preordered topological systems and
+topological spaces closes out this subsection, preceded by an inventory of
+enriched or preordered sets.
+
+**Example 6.3.8 (classes of many-valued preordered sets).**
+
+(1) Indiscrete Preordered Sets. Let $X$ be a set and $L$ be a meet semilattice. Put $P: X \times X \rightarrow L$ by $P(x,y) = \top$. Then $(X,L,P)$ is a preordered set which we call an *indiscrete* preordered set.
+
+(2) Discrete Preordered Sets. Let $X$ be a set and $L$ be a meet semilattice with $|L| \ge 2$. Choose $\alpha \in L - \{\top\}$ and put $P: X \times X \rightarrow L$ by
+
+$$P(x,y) = \begin{cases} \alpha, & x \neq y \\ \top, & x = y \end{cases}.$$
+
+Then $(X, L, P)$ is a preordered set which we call the $\alpha$-discrete
+preordered set.
+---PAGE_BREAK---
+
+(3) Ultrametric Spaces. Let $(X, d)$ be an ultrametric space (with strengthened triangle inequality $d(x, z) \le d(x, y) \vee d(y, z)$) bounded by 1, and put $P: X \times X \to [0, 1]$ by
+
+$$P(x,y) = 1 - d(x,y).$$
+
+Then $(X, [0,1], P)$ is a preordered set.
+
+(4) Bitstring Based Examples. Consider two (countably) infinite binary bitstrings $\sigma_1, \sigma_2$ (of 0's, 1's), where, for $n \in \mathbb{N}$,
+
+$$\sigma_i(n) = \text{bit in } n^{\text{th}} \text{ place,}$$
+
+and define a *comparison bitstring* $P(\sigma_1, \sigma_2)$ as follows:
+
+$$P(\sigma_1, \sigma_2)(n) = \begin{cases} 1, & \sigma_1(n) = \sigma_2(n) \\ 0, & \sigma_1(n) \neq \sigma_2(n) \end{cases} .$$
+
+Let $X = \mathbf{B} = \mathbf{2}^\omega = \{\sigma : \sigma \text{ is a countably infinite bitstring}\}$ equipped with the pointwise ordering. Then $\mathbf{B}$ is a complete Boolean algebra, and $P: X \times X \to \mathbf{B}$ as constructed above is a well-defined mapping. Also, noting that $\top$ in $\mathbf{B}$ is the bitstring of all 1's, reflexivity follows since
+
+$$\forall \sigma \in X, P(\sigma, \sigma) = \top.$$
+
+Further, letting $\sigma_1, \sigma_2, \sigma_3 \in X$ and $n \in \mathbb{N}$, assume
+
+$$P(\sigma_1, \sigma_2)(n) \wedge P(\sigma_2, \sigma_3)(n) = 1.$$
+
+Then
+
+$$\sigma_1(n) = \sigma_2(n) = \sigma_3(n).$$
+
+Hence
+
+$$P(\sigma_1, \sigma_3)(n) = 1.$$
+---PAGE_BREAK---
+
+Therefore transitivity follows since
+
+$$P(\sigma_1, \sigma_2) \wedge P(\sigma_2, \sigma_3) \leq P(\sigma_1, \sigma_3).$$
+
+And so $(X, \mathbf{B}, P)$ is a preordered set.
+
+(a) There are a number of variations on the construction of the preceding class, each of which yields a preordered set. Keeping $\mathbf{B}$ as before, $X$ could be the set $\mathbf{2}^k$ of all strings of some specified finite length $k$, or the set $\mathbf{2}^*$ of all finite-length strings, or the set $\mathbf{2}^{*\omega}$ of all countable (finite or infinite) strings. To illustrate, suppose $X$ is the set of all $k$-length bitstrings, for some fixed $k \in \mathbb{N}$; and put $P: X \times X \to \mathbf{B}$ by
+
+$$P(\sigma_1, \sigma_2)(n) = \begin{cases} 1, & \sigma_1(n) = \sigma_2(n) \text{ and } n \le k \\ 0, & \sigma_1(n) \ne \sigma_2(n) \text{ and } n \le k \\ 1, & n > k \end{cases}.$$
+
+Then $(X, \mathbf{B}, P)$ is a preordered set.
+
+(b) The previous two classes can be generalized to non-binary string induced examples. Let $\Sigma$ be an alphabet with $|\Sigma| \ge 2$, and let $\Sigma^{*\omega}$ be the set of all countable strings (both finite and infinite) on $\Sigma$. Now let $\mathbf{B}$ be appropriately generalized from above and put $P: \Sigma^{*\omega} \times \Sigma^{*\omega} \to \mathbf{B}$ as above. Then $(\Sigma^{*\omega}, \mathbf{B}, P)$ is a preordered set.
+
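Reflexivity and transitivity for these string-based classes can be checked exhaustively in small cases; below is a sketch (Python; the finite truncation to $k$-bit strings is ours) with $\mathbf{B} = \mathbf{2}^k$ ordered pointwise.

```python
from itertools import product

# Example 6.3.8(4) truncated to k-bit strings, with B = 2^k ordered pointwise.
k = 4
X = list(product((0, 1), repeat=k))

def P(s1, s2):
    # the comparison bitstring: 1 exactly where the two strings agree
    return tuple(int(a == b) for a, b in zip(s1, s2))

top = (1,) * k
meet = lambda u, v: tuple(a & b for a, b in zip(u, v))
leq = lambda u, v: all(a <= b for a, b in zip(u, v))

assert all(P(s, s) == top for s in X)               # reflexivity (P2)
assert all(leq(meet(P(a, b), P(b, c)), P(a, c))     # transitivity (P3)
           for a, b, c in product(X, repeat=3))
print("(X, B, P) is a B-preordered set on", len(X), "bitstrings")
```
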
+(5) Locale Based Examples. Let $L$ be a frame. For each locale $A$, put
+
+$$P_A : Lpt(A) \times Lpt(A) \rightarrow L \text{ by } P_A(p,q) = \bigwedge_{a \in A} (q(a) \rightarrow p(a)),$$
+
+where we recall the definition of the carrier set of the *L*-spectrum
+from Example 4.3.3 above, namely, that
+
+$$Lpt(A) = \{p : A \to L \mid p \text{ preserves arbitrary } \bigvee \text{ and finite } \wedge\},$$
+
+and where $\to$ refers to Heyting residuation ($\alpha \to \beta \ge \gamma$ $\Leftrightarrow$ $\alpha \wedge \gamma \le \beta$).
+
+(a) $(Lpt(A), L, P_A)$ is a preordered set. Reflexivity follows since
+
+$$P_A (p, p) = \bigwedge_{a \in A} (p(a) \to p(a)) = \bigwedge_{a \in A} \top = \top$$
+---PAGE_BREAK---
+
+and transitivity follows since
+
+$$
+\begin{align*}
+P_A(p, q) \wedge P_A(q, r) &= \bigwedge_{a \in A} (q(a) \rightarrow p(a)) \wedge \bigwedge_{b \in A} (r(b) \rightarrow q(b)) \\
+&= \bigwedge_{(a,b) \in A \times A} [(q(a) \rightarrow p(a)) \wedge (r(b) \rightarrow q(b))] \\
+&\le \bigwedge_{a \in A} [(q(a) \rightarrow p(a)) \wedge (r(a) \rightarrow q(a))] \\
+&= \bigwedge_{a \in A} [(r(a) \rightarrow q(a)) \wedge (q(a) \rightarrow p(a))] \\
+&\le \bigwedge_{a \in A} (r(a) \rightarrow p(a)) = P_A(p, r),
+\end{align*}
+$$
+
+where the transitivity of $\to$ is used in the next to last line.
+
+(b) The question arises as to why the order of implication $q(a) \to p(a)$ was chosen in the definition of $P_A$ and not $p(a) \to q(a)$. The chosen order is forced by the “compatibility condition” imposed in subsequent sections on preordered topological systems and preordered topological spaces, a condition which specifically answers the programming question stated at the beginning of this subsection.
+
+(c) This example is of particular interest since it is fundamentally related to specialization orders related to L-spectra of locales.
+
+(d) This example is also of particular interest when *L* is spatial and *A* is non-spatial, since it is an *L*-preordered set not generated from (preordered) topological spaces.
+
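Class (5) can be instantiated concretely; the sketch below (Python; a brute-force search with illustrative names) takes $L = \mathbf{2}$ and $A$ the powerset frame of a two-point set, recovers the two $L$-points, and verifies that $P_A$ is reflexive and transitive.

```python
from itertools import chain, combinations, product

# L = 2 = {0, 1}; A = powerset of a 2-point set, listed in a fixed order.
pts = (0, 1)
A = [frozenset(s)
     for s in chain.from_iterable(combinations(pts, r) for r in range(3))]
idx = {u: i for i, u in enumerate(A)}
imp = lambda a, b: max(1 - a, b)  # Heyting implication in 2

def is_frame_map(p):
    # preserves finite meets (incl. top) and arbitrary joins (incl. bottom)
    return (p[idx[frozenset(pts)]] == 1 and p[idx[frozenset()]] == 0 and
            all(p[idx[u & v]] == min(p[idx[u]], p[idx[v]]) and
                p[idx[u | v]] == max(p[idx[u]], p[idx[v]])
                for u, v in product(A, repeat=2)))

Lpt = [p for p in product((0, 1), repeat=len(A)) if is_frame_map(p)]
P_A = lambda p, q: min(imp(q[i], p[i]) for i in range(len(A)))

assert len(Lpt) == 2  # exactly the two evaluation points of the 2-point space
assert all(P_A(p, p) == 1 for p in Lpt)                    # reflexivity
assert all(min(P_A(p, q), P_A(q, r)) <= P_A(p, r)
           for p, q, r in product(Lpt, repeat=3))          # transitivity
print("Lpt(A) has", len(Lpt), "points")
```
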
+**Example 6.3.9** (preordered topological systems and topological spaces).
+
+(1) Fibres of Preordered Topologies. Let $(X, L, P)$ be a preordered set with $L$ a frame, and consider the fibre $\mathcal{T}$ of all preordered topologies on $(X, L, P)$ ordered by inclusion. Further, consider the following families of $L$-valued subsets of $X$:
+
+$$
+\tau_{\max} = \{u \in L^X : \forall x, y \in X, P(x,y) \wedge u(y) \le u(x)\},
+$$
+
+$$
+\tau_l = \{ u \in L^X : \forall x, y \in X, P(x,y) \wedge u(y) = P(x,y) \wedge u(x) \},
+$$
+
+$$
+\tau_r = \{ u \in L^X : \forall x, y \in X, P(x, y) \wedge u(y) = P(y, x) \wedge u(x) \},
+$$
+
+$$
+\tau_{const} = \{ u \in L^X : \forall x, y \in X, P(x, y) \wedge u(y) = u(x) \},
+$$
+
+$$
+\tau_{\min} = \{\underline{\bot}, \underline{\top}\}.
+$$
+---PAGE_BREAK---
+
+Then the following hold:
+
+(a) $\mathcal{T}$ is a complete lattice ordered by inclusion.
+
+(b) $\tau_{\max}$ is the largest member of $\mathcal{T}$ and $\tau_{\min}$ is the smallest member of $\mathcal{T}$.
+
+(c) $\forall y \in X$, put $P_y : X \to L$ by $P_y(x) = P(x, y)$. Then
+$\langle\langle\{P_y : y \in X\}\rangle\rangle \subset \tau_{\max}$.
+
+(d) $\tau_{const} = \{\underline{\alpha} : \alpha \in L\}; \tau_{const}, \tau_l, \tau_r \in \mathcal{T}; \tau_{const} \subset \tau_l, \tau_{const} \subset \tau_r$, so that each of $\tau_{const}$, $\tau_l$, $\tau_r$, $\tau_{max}$ is a stratified $L$-topology.
+
+(e) Put $\tau = \{u \in L^X : \forall x, y \in X, \Phi(u, x, y, R)\}$, where the condition $\Phi(u, x, y, R)$ is of the form
+
+$$P(x,y) \wedge u(y) \mathrel{R} \mathit{rhs},$$
+
+where the binary relation $R$ on $L$ is either $=$ or $\leq$, and $\mathit{rhs}$ is an expression comprising any combination of $u(x)$, $P(x,y)$, $P(y,x)$, and $\wedge$. Then $\tau$ is one of $\tau_{const}$, $\tau_l$, $\tau_r$, $\tau_{\max}$.
+
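The inclusions among these families can be confirmed by brute force on a tiny instance; the sketch below (Python; meet computed as `min` on a chain, an illustrative choice of ours) uses the $\alpha$-discrete preorder of Example 6.3.8(2).

```python
from itertools import product

# Tiny check of Example 6.3.9(1): X = {0,1}, L = the chain {0,1,2} with
# meet = min, and P the alpha-discrete preorder (alpha = 1, top = 2).
X = (0, 1)
L = (0, 1, 2)
P = lambda x, y: 2 if x == y else 1
U = list(product(L, repeat=len(X)))  # all L-valued subsets u : X -> L

tau_max = [u for u in U
           if all(min(P(x, y), u[y]) <= u[x]
                  for x, y in product(X, repeat=2))]
tau_l = [u for u in U
         if all(min(P(x, y), u[y]) == min(P(x, y), u[x])
                for x, y in product(X, repeat=2))]
tau_c = [u for u in U
         if all(min(P(x, y), u[y]) == u[x]
                for x, y in product(X, repeat=2))]

assert set(tau_c) <= set(tau_l) <= set(tau_max)              # claimed inclusions
assert all(tuple(P(x, y) for x in X) in tau_max for y in X)  # each P_y is open, cf. (c)
print(len(tau_c), len(tau_l), len(tau_max))
```
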
+(2) Preordered Spaces to Preordered Systems. Let (X, L, P, τ) be a preordered topological space. Then (X, L, P, τ, ⊢τ) is a preordered topological system, where
+
+$$\vdash_{\tau} (x, u) = u(x).$$
+
+(3) Preordered Systems from *L*-Spectra—see Example 6.3.8(5) above.
+Let *L* be a frame and *A* be a locale. Then
+
+$$ (Lpt(A), L, P_A, A, \models_A) $$
+
+is a preordered topological system, where $Lpt(A) = \mathbf{Frm}(A, L)$,
+$P_A : Lpt(A) \times Lpt(A) \rightarrow L$ by $P_A(p, q) = \bigwedge_{a \in A} (q(a) \rightarrow p(a))$,
+and $\models_A : Lpt(A) \times A \rightarrow L$ is defined by
+
+$$ \models_A (p, a) = p(a). $$
+
+(4) Bitstring Based Preordered Topologies and Topological Systems Not in (1) and (2) Above. For the preordered sets $(\Sigma^{*\omega}, \mathbf{B}, P)$ of Example 6.3.8(4), it is possible to construct preordered topologies not listed in (1) above. For $\alpha \in \Sigma$, put $p^\alpha : \Sigma^{*\omega} \to \mathbf{B}$ by
+
+$$ p^{\alpha}(\sigma)(n) = \begin{cases} 1, & \sigma(n) = \alpha \\ 0, & \sigma(n) \neq \alpha \end{cases}. $$
+
+Define $Q \subset \mathbf{B}^{\Sigma^{*\omega}}$ by
+
+$$ Q = \{ p^{\alpha_1} \wedge \dots \wedge p^{\alpha_k} : k \in \mathbb{N}, \{\alpha_i\}_{i=1}^k \subset \Sigma \}, $$
+
+let
+
+$$ \mathcal{Q} = \{\bigvee \hat{Q} : \hat{Q} \subset Q\}. $$
+---PAGE_BREAK---
+
+Then $\mathcal{Q}$ is a subframe of $\mathbf{B}^{\Sigma^{*\omega}}$, i.e., $\mathcal{Q}$ is a $\mathbf{B}$-topology on $\Sigma^{*\omega}$;
+in fact,
+
+$$ \mathcal{Q} = \left< \left< \left\{ p^{\alpha} : \alpha \in \Sigma \right\} \right> \right>. $$
+
+Finally, put $\models_{\mathcal{Q}}: \Sigma^{*\omega} \times \mathcal{Q} \to \mathbf{B}$ by
+
+$$ \models_{\mathcal{Q}} (\sigma, q) = q(\sigma). $$
+
+It follows from Theorem 7.4 of [12] that $(\Sigma^{*\omega}, \mathbf{B}, P, \mathcal{Q}, \models_{\mathcal{Q}})$ is a preordered topological system.
+
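On a finite truncation the generated $\mathbf{B}$-topology $\mathcal{Q}$ can be computed outright; the sketch below (Python; the truncation to $\Sigma = \{a, b\}$ and length-3 strings, and all names, are illustrative) enumerates it.

```python
from itertools import chain, combinations, product

# Example 6.3.9(4) truncated: Sigma = {'a','b'}, strings of length 3,
# B = 2^3 ordered pointwise; a B-valued subset is a dict string -> bit tuple.
Sigma, k = "ab", 3
X = ["".join(s) for s in product(Sigma, repeat=k)]

def p(alpha):  # the map p^alpha of the example
    return {s: tuple(int(c == alpha) for c in s) for s in X}

meet = lambda u, v: {s: tuple(a & b for a, b in zip(u[s], v[s])) for s in X}
join = lambda us: {s: tuple(max((u[s][i] for u in us), default=0)
                            for i in range(k)) for s in X}
frozen = lambda u: tuple(sorted(u.items()))  # hashable form of a B-subset

Q0 = [p("a"), p("b"), meet(p("a"), p("b"))]  # the finite meets of the p^alpha
Q = {frozen(join([Q0[i] for i in idx]))
     for idx in chain.from_iterable(combinations(range(len(Q0)), r)
                                    for r in range(len(Q0) + 1))}

# here Q consists of bottom, p^a, p^b, and top: a 4-element B-topology on X
assert frozen(join([])) in Q and frozen(join(Q0)) in Q
print(len(Q), "open B-valued subsets")
```
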
## 7. Acknowledgements
+
Appreciation is expressed to the organizers of the 27th Summer Conference on General Topology and Its Applications (25–28 July 2012) held on the campus of Minnesota State University (Mankato), especially to Profs. L. M. Brown and S. Matthews and the asymmetric topology group for their generous invitation to the third author of this paper to give the plenary lecture listed in [67] to which this paper is related. Thanks are extended to Prof. Brian Martensen, the Department of Mathematics at MSU, and his team for accommodations and generous hospitality. Finally, we thank the National Science Foundation, *Topology Proceedings*, and the latter’s editor, Prof. M. Tuncali, for their help and support, as well as the referee for his/her comments which improved the paper.
+
## References
+
+[1] J. Adámek, H. Herrlich, G. E. Strecker, Abstract and Concrete Categories, second edition, Dover Publications (New York, 2009).
+
+[2] M. Barr, *-Autonomous Categories, Lecture Notes in Mathematics **752** (1979), Springer-Verlag (Berlin/Heidelberg/New York).
+
+[3] T. S. Blyth, *Sur certaines images homomorphes des demi-groupes ordonnés*, Bull. Soc. Math. France **94** (1970), 101–111.
+
+[4] F. Bayoumi, S. E. Rodabaugh, *Overview and comparison of localic and fixed-basis topological products*, Fuzzy Sets and Systems **161** (2010), 2397–2439 (Elsevier B.V., doi:10.1016/j.fss.2010.05.013).
+
+[5] N. Bourbaki, Topologie Générale, Actualités Sci. Ind. **1084** (Paris, 1949) / Hermann Press (Paris, 1965).
+
+[6] C. L. Chang, *Fuzzy topological spaces*, J. Math. Anal. Appl. **24** (1968), 182–190.
+
+[7] J. T. Denniston, S. E. Rodabaugh, *Functorial relationships between lattice-valued topology and topological systems*, Quaestiones Mathematicae **32:2** (2009), 139–186.
+
+[8] J. T. Denniston, A. Melton, S. E. Rodabaugh, *Enriched topological systems and variable-basis enriched functors*, in U. Höhle, L. N. Stout, E. P. Klement, Enriched Category Theory and Related Topics: Abstracts of 33rd Linz Seminar (14–18 February 2012), Universitätsdirektion Johannes Kepler Universität (Linz, Austria), 16–20.
+---PAGE_BREAK---
+
+[9] __________, *Interweaving algebra and topology: Lattice-valued topological systems*, Fuzzy Sets and Systems **192** (2012), 58–103.
+
+[10] __________, *Formal concept analysis and lattice-valued Chu systems*, Fuzzy Sets and Systems, **216** (2013), 52–90.
+
+[11] __________, *Lattice-valued predicate transformers, Chu games, and lattice-valued transformer systems*, in submission.
+
+[12] __________, *Enriched topological systems, enriched topologies, and variable-basis enriched functors*, in submission.
+
+[13] J. T. Denniston, A. Melton, S. E. Rodabaugh, S. A. Solovjovs, *Lattice-valued preordered sets as lattice-valued topological systems*, in Radko Mesiar, Endre Pap, E. P. Klement, Non-Classical Measures and Integrals, Abstracts of 34th Linz Seminar (26 February – 2 March 2013), Universitätsdirektion Johannes Kepler Universität (Linz, Austria), 28–34.
+
+[14] E. W. Dijkstra, *A Discipline of Programming*, Prentice-Hall (Englewood Cliffs, New Jersey, 1976).
+
+[15] C. H. Dowker, D. Papert, *On Urysohn's lemma*, General Topology and Its Relation to Modern Analysis and Algebra II (1967), 111–114, Academia (Prague).
+
+[16] M. Fourman, D. S. Scott, *Sheaves and logic*, Applications of Sheaves: Lecture Notes in Mathematics **753** (1979), 302–401, Springer-Verlag (Berlin, Heidelberg, New York).
+
+[17] T. E. Gantner, R. C. Steinlage, and R. H. Warren, *Compactness in fuzzy topological spaces*, J. Math. Anal. Appl. **62** (1978), 547–562.
+
+[18] J. A. Goguen, *L-fuzzy sets*, J. Math. Anal. Appl. **18** (1967), 145–167.
+
+[19] C. Guido, *Attachment between fuzzy points and fuzzy sets*, in U. Bodenhofer, B. De Baets, E. P. Klement, Abstracts of the 30th Linz Seminar, Universitätsdirektion Johannes Kepler Universität, Linz, Austria, 3–7 February 2009, pp. 52–54.
+
+[20] __________, *Fuzzy points and attachment*, Fuzzy Sets and Systems **161**:16 (2010), 2150–2165.
+
+[21] U. Höhle, *Upper semicontinuous fuzzy sets and applications*, J. Math. Anal. Appl. **78** (1980), 659–673.
+
+[22] __________, *Fuzzy topologies and topological space objects in a topos*, Fuzzy Sets and Systems **19** (1986), 299–304.
+
+[23] __________, *Presheaves over GL-monoids*, in U. Höhle, E. P. Klement, Non-Classical Logics and their Applications to Fuzzy Subsets: Theory and Decision Library: Series B: Mathematical and Statistical Methods **32** (1995), Kluwer Academic Publishers (Boston, Dordrecht, London), pp. 127–158.
+
+[24] U. Höhle, T. Kubiak, *A non-commutative and non-idempotent theory of quantale sets*, Fuzzy Sets and Systems **166** (2011), 1–43.
+
+[25] U. Höhle and A. Šostak, *Axiomatic foundations of fixed-basis fuzzy topology*, in: U. Höhle, S. E. Rodabaugh, *Mathematics Of Fuzzy Sets: Logic, Topology, And Measure Theory*, The Handbooks of Fuzzy Sets Series **3**(1999), 123–272 (Chapter 3), Springer Verlag / Kluwer Academic Publishers.
+
+[26] B. Hutton, *Normality in fuzzy topological spaces*, J. Math. Anal. Appl. **50** (1975), 74–79.
+
+[27] B. Hutton and I. Reilly, *Separation axioms in fuzzy topological spaces*, Fuzzy Sets and Systems **3** (1980), 93–104.
+
+[28] J. R. Isbell, *Atomless parts of spaces*, Math. Scand. **31** (1972), 5–32.
+
+[29] P. T. Johnstone, *Stone Spaces*, Cambridge University Press (Cambridge, 1982).
+
+[30] J. L. Kelley, *General Topology*, Van Nostrand (New York, 1955).
+---PAGE_BREAK---
+
+[31] G. M. Kelly, Basic Concepts of Enriched Category Theory, Reprints in Theory and Applications of Categories **10** (2005).
+
+[32] W. Kotzé, *Lattice morphisms, sobriety, and Urysohn Lemmas*, in: S. E. Rodabaugh, E. P. Klement, and U. Höhle, Applications of Category Theory to Fuzzy Subsets: Series B: Mathematical and Statistical Methods **14** (1992), 257–274 (Chapter 10), Kluwer Academic Publishers (Boston/Dordrecht/London).
+
+[33] __________, *Lifting of sobriety concepts with particular reference to (L, M)-topological spaces*, in: S. E. Rodabaugh, E. P. Klement, Topological And Algebraic Structures in Fuzzy Sets: A Handbook of Recent Developments in the Mathematics of Fuzzy Sets, Trends in Logic **20** (2003), 415–426 (Chapter 16), Kluwer Academic Publishers (Boston, Dordrecht, London).
+
+[34] T. Kubiak, On Fuzzy Topologies, Ph.D. dissertation, Adam Mickiewicz University, Poznan (Poland), 1985.
+
+[35] __________, *L-fuzzy normal spaces and Tietze extension theorem*, J. Math. Anal. Appl. **125** (1987), 141–153.
+
+[36] __________, *The topological modification of the L-fuzzy unit interval*, in: S. E. Rodabaugh, E. P. Klement, and U. Höhle, Applications of Category Theory to Fuzzy Subsets: Series B: Mathematical and Statistical Methods **14** (1992), 275–305 (Chapter 11), Kluwer Academic Publishers (Boston/Dordrecht/London).
+
+[37] __________, *Separation axioms: extension of mappings and embedding of spaces*, in: U. Höhle, S. E. Rodabaugh, Mathematics Of Fuzzy Sets: Logic, Topology, And Measure Theory: The Handbooks of Fuzzy Sets Series **3** (1999), 433–480 (Chapter 6), Springer Verlag / Kluwer Academic Publishers.
+
+[38] __________, *Fuzzy reals: topological results surveyed, Brouwer fixed point theorem, open questions*, in: S. E. Rodabaugh, E. P. Klement, Topological And Algebraic Structures in Fuzzy Sets: A Handbook of Recent Developments in the Mathematics of Fuzzy Sets, Trends in Logic **20** (2003), 137–151 (Chapter 5), Kluwer Academic Publishers (Boston, Dordrecht, London).
+
+[39] T. Kubiak, A. Šostak, *Lower set-valued fuzzy topologies*, Quaestiones Mathematicae **20:3** (1997), 423–429.
+
+[40] H. Lai, D. Zhang, *Fuzzy preorder and fuzzy topology*, Fuzzy Sets and Systems **157** (2006), 1865–1885.
+
+[41] R. Lowen, *Fuzzy topological spaces and fuzzy compactness*, J. Math. Anal. Appl. **56** (1976), 621–633.
+
+[42] S. Mac Lane, Categories for the Working Mathematician, second edition, Graduate Texts in Mathematics **5** (1998), Springer Verlag (Berlin, Heidelberg, New York).
+
+[43] H. W. Martin, *Weakly induced fuzzy topological spaces*, J. Math. Anal. Appl. **78** (1980), 634–639.
+
+[44] G. H. J. Meßner, *Sobriety of $\mathbb{R}(L)$*, chapter in draft of Ph.D. thesis, Johannes Kepler Universität (Linz, Austria).
+
+[45] C. J. Mulvey, *On the geometry of choice*, in: S. E. Rodabaugh, E. P. Klement, Topological And Algebraic Structures in Fuzzy Sets: A Handbook of Recent Developments in the Mathematics of Fuzzy Sets, Trends in Logic **20** (2003), 309–336 (Chapter 11), Kluwer Academic Publishers (Boston, Dordrecht, London).
+
+[46] D. Papert and S. Papert, *Sur les treillis des ouverts et les paratopologies*, Séminaire Ehresmann (topologie et géométrie différentielle), 1re année (1957–1958), exposé 1.
+
+[47] Q. Pu, D. Zhang, *Preordered sets valued in a GL-monoid*, Fuzzy Sets and Systems **187** (2012), 1–32.
+---PAGE_BREAK---
+
+[48] A. Pultr, S. E. Rodabaugh, *Lattice-valued frames, functor categories, and classes of sober spaces*, in: S. E. Rodabaugh, E. P. Klement, Topological And Algebraic Structures in Fuzzy Sets: A Handbook of Recent Developments in the Mathematics of Fuzzy Sets, Trends in Logic **20** (2003), 153–187 (Chapter 6), Kluwer Academic Publishers (Boston, Dordrecht, London).
+
+[49] ——, *Examples for different sobrieties in fixed-basis topology*, in: S. E. Rodabaugh, E. P. Klement, Topological And Algebraic Structures in Fuzzy Sets: A Handbook of Recent Developments in the Mathematics of Fuzzy Sets, Trends in Logic **20** (2003), 427–440 (Chapter 17), Kluwer Academic Publishers (Boston, Dordrecht, London).
+
+[50] ——, *Category theoretic aspects of chain-valued frames: Part I: Categorical and presheaf theoretic foundations / Part II: Applications to lattice-valued topology*, Fuzzy Sets and Systems **159:5** (2008), 501–528 / 529–558.
+
+[51] S. E. Rodabaugh, *A categorical accommodation of various notions of fuzzy topology*, Fuzzy Sets and Systems **9** (1983), 241–265. Preliminary report, in: E. P. Klement, Proceedings of the Third International Seminar on Fuzzy Set Theory **3** (1981), 119–152, Johannes Kepler Universitätsdirektion (Linz, Austria).
+
+[52] ——, *Separation axioms and the L-fuzzy real lines*, Fuzzy Sets and Systems **11** (1983), 163–183.
+
+[53] ——, *A point-set lattice-theoretic framework T for topology which contains Loc as a subcategory of singleton subspaces and in which there are general classes of Stone Representation and Compactification Theorems*, First printing February 1986, Second printing April 1987, Youngstown State University Printing Office (Youngstown, Ohio, USA).
+
+[54] ——, *Dynamic topologies and their applications to crisp topologies, fuzzification of crisp topologies, and fuzzy topologies on the crisp real line*, J. Math. Anal. Appl. **131** (1988), 25–66.
+
+[55] ——, *Lowen, para-Lowen, and α-level functors and fuzzy topologies on the crisp real line*, J. Math. Anal. Appl. **131** (1988), 157–169.
+
+[56] ——, *Point-set lattice-theoretic topology*, Fuzzy Sets and Systems **40** (1991), 297–345.
+
+[57] ——, *Necessity of Chang-Goguen topologies*, Rend. Circolo Mat. Palermo (Suppl: Ser. II) **29** (1992), 299–314.
+
+[58] ——, *Categorical frameworks for Stone representation theories*, in: S. E. Rodabaugh, E. P. Klement, and U. Höhle, Applications of Category Theory to Fuzzy Subsets: Series B: Mathematical and Statistical Methods **14** (1992), 177–231 (Chapter 7), Kluwer Academic Publishers (Boston/Dordrecht/London).
+
+[59] ——, *Applications of localic separation axioms, compactness axioms, representations, and compactifications to poslat topological spaces*, Fuzzy Sets and Systems **73** (1995), 55–87.
+
+[60] ——, *Powerset operator based foundation for point-set lattice-theoretic (poslat) fuzzy set theories and topologies*, Quaestiones Mathematicae **20:3** (1997), 463–530.
+
+[61] ——, *Powerset operator foundations for poslat fuzzy set theories and topologies*, in: U. Höhle, S. E. Rodabaugh, *Mathematics Of Fuzzy Sets: Logic, Topology, And Measure Theory*, The Handbooks of Fuzzy Sets Series **3** (1999), 91–116 (Chapter 2), Springer Verlag / Kluwer Academic Publishers.
+
+[62] ——, *Categorical foundations of variable-basis fuzzy topology*, in: U. Höhle, S. E. Rodabaugh, Mathematics Of Fuzzy Sets: Logic, Topology, And Measure Theory: The Handbooks of Fuzzy Sets Series **3** (1999), 273–388 (Chapter 4), Springer Verlag / Kluwer Academic Publishers.
+---PAGE_BREAK---
+
+[63] __________, *Separation axioms: representation theorems, compactness, compactifications* in: U. Höhle, S. E. Rodabaugh, Mathematics Of Fuzzy Sets: Logic, Topology, And Measure Theory: The Handbooks of Fuzzy Sets Series **3** (1999), 481–552 (Chapter 7), Springer Verlag / Kluwer Academic Publishers.
+
+[64] __________, *Fuzzy real lines and dual real lines as poslat topological, uniform, and metric ordered semirings with unity*, in: U. Höhle, S. E. Rodabaugh, Mathematics Of Fuzzy Sets: Logic, Topology, And Measure Theory: The Handbooks of Fuzzy Sets Series **3** (1999), 607–632 (Chapter 10), Springer Verlag / Kluwer Academic Publishers.
+
+[65] __________, *Relationship of algebraic theories to powerset theories and fuzzy topological theories for lattice-valued mathematics*, International Journal of Mathematics and Mathematical Sciences **2007:3**, Article ID 43645, 71 pp., doi: 10.1155/2007/43645.
+
+[66] __________, *Necessity of non-stratified and anti-stratified spaces in lattice-valued topology*, Fuzzy Sets and Systems **161** (2010), 1253–1269.
+
+[67] __________, *Programming semantics, topological systems, and lattice-valued topology*, 27th Summer Conference on General Topology and Its Applications, 25–28 July 2012, Minnesota State University (Mankato, Minnesota).
+
+[68] K. I. Rosenthal, Quantales and Their Applications, Pitman Research Notes in Mathematics **234** (1990), Pitman (Longman/Burnt Mill/Harlow).
+
+[69] S. Shenoi, A. Melton, *Proximity relations in the fuzzy relational database model*, Fuzzy Sets and Systems **31** (1989), 285–296.
+
+[70] M. Smyth, *Powerdomains and predicate transformers: a topological view*, Automata, Languages and Programming, Lecture Notes in Computer Science **154** (1983), 662–675, Springer-Verlag (Berlin/Heidelberg/New York).
+
+[71] S. A. Solovjovs, *Embedding topology into algebra*, in U. Bodenhofer, B. De Baets, E. P. Klement, Abstracts of the 30th Linz Seminar (3–7 February 2009), Universitätsdirektion Johannes Kepler Universität (Linz, Austria), 106–110.
+
+[72] __________, *Variable-basis topological systems versus variable-basis topological spaces*, Soft Computing **14:10** (2010), 1059–1068.
+
+[73] __________, *Categorical foundations of variety-based topology and topological systems*, Fuzzy Sets and Systems, to appear.
+
+[74] A. P. Šostak, *On a fuzzy topological structure*, Rendiconti Circolo Matematico Palermo (Suppl: Serie II) **11** (1985) 89–103.
+
+[75] S. J. Vickers, Topology Via Logic, Cambridge University Press (Cambridge, 1989).
+
+[76] W. Yao, F.-G. Shi, *A note on specialization L-preorder of L-topological spaces, L-fuzzifying topological spaces, and L-fuzzy topological spaces*, Fuzzy Sets and Systems **159** (2008), 2586–2595.
+
+[77] L. A. Zadeh, *Fuzzy sets*, Information and Control **8** (1965), 338–353.
+
+(DENNISTON) DEPARTMENT OF MATHEMATICAL SCIENCES,
+(MELTON) DEPARTMENTS OF COMPUTER SCIENCE AND MATHEMATICAL SCIENCES,
+KENT STATE UNIVERSITY, KENT, OHIO, USA 44242
+
+E-mail address: jdennist@kent.edu
+
+E-mail address: amelton@kent.edu
+
+(RODABAUGH) COLLEGE OF SCIENCE, TECHNOLOGY, ENGINEERING, MATHEMATICS (STEM), YOUNGSTOWN STATE UNIVERSITY, YOUNGSTOWN, OH, USA, 44555-3347
+
+E-mail address: serodabaugh@ysu.edu
\ No newline at end of file
diff --git a/samples/texts_merged/6759244.md b/samples/texts_merged/6759244.md
new file mode 100644
index 0000000000000000000000000000000000000000..93de3aaa159aaff90e06a3462dc36a843fe21961
--- /dev/null
+++ b/samples/texts_merged/6759244.md
@@ -0,0 +1,676 @@
+
+---PAGE_BREAK---
+
+# Bit-parallel Witnesses and their Applications to Approximate String Matching
+
+Heikki Hyyrö * Gonzalo Navarro †
+
+## Abstract
+
+We present a new bit-parallel technique for approximate string matching. We build on two previous techniques. The first one, BPM [Myers, J. of the ACM, 1999], searches for a pattern of length $m$ in a text of length $n$ permitting $k$ differences in $O(\lceil m/w \rceil n)$ time, where $w$ is the width of the computer word. The second one, ABNDM [Navarro and Raffinot, ACM JEA, 2000], extends a sublinear-time exact algorithm to approximate searching. ABNDM relies on a third algorithm, BPA [Wu and Manber, Comm. ACM, 1992], which runs in $O(k \lceil m/w \rceil n)$ time. BPA is slow but flexible enough to support all operations required by ABNDM. We improve previous ABNDM analyses, showing that it is average-optimal in the number of inspected characters, although the overall complexity is higher because of the $O(k \lceil m/w \rceil)$ work done per inspected character. We then show that the faster BPM can be adapted to support all the operations required by ABNDM. This involves extending it to compute edit distance, to search for any pattern suffix, and to detect in advance the impossibility of a later match. The solution to those challenges is based on the concept of a witness, which permits sampling some dynamic programming matrix values so as to bound, deduce, or compute others fast. The resulting algorithm is average-optimal for $m \le w$, assuming the alphabet size is constant. In practice, it performs better than the original ABNDM and is the fastest algorithm for several combinations of $m$, $k$ and alphabet sizes that are useful, for example, in natural language searching and computational biology. To show that the concept of witnesses can be used in further scenarios, we also improve a recent bit-parallel algorithm based on Myers [Fredriksson, SPIRE 2003]. The use of witnesses greatly improves the running time of this algorithm too.
+
+## 1 Introduction
+
+Approximate string matching is one of the main problems in classical string algorithms, with applications to text searching, computational biology, pattern recognition, etc. Given a text of length $n$, a pattern of length $m$, and a maximal number of differences permitted, $k$, we want to find all the text positions where the pattern matches the text up to $k$ differences. The differences can be substituting, deleting or inserting a character. We call $\alpha = k/m$ the *difference ratio*, and $\sigma$ the size of the alphabet $\Sigma$. All the average case figures in this paper assume random text and uniformly distributed alphabet.
+
+In this paper we consider online searching, that is, the pattern can be preprocessed but the text cannot. The classical solution to the problem is based on filling a dynamic programming matrix
+
+*Dept. of Computer Sciences, University of Tampere, Finland. Supported by the Academy of Finland and Tampere Graduate School in Information Science and Engineering.
+
+†Dept. of Computer Science, University of Chile. Partially supported by Fondecyt Project 1-020831.
+---PAGE_BREAK---
+
+and needs $O(mn)$ time [18]. Since then, many improvements have been proposed (see [13] for a complete survey). These can be divided into four types.
+
+The first type is based on dynamic programming and has achieved $O(kn)$ worst case time [8, 11]. These algorithms are not really practical, but there exist also practical solutions that achieve, on the average, $O(kn)$ [22] and even $O(kn/\sqrt{\sigma})$ time [4].
+
+The second type reduces the problem to an automaton search, since approximate searching can be expressed in that way. A deterministic finite automaton (DFA) is used in [22] so as to obtain $O(n)$ search time, which is worst-case optimal. The problem is that the preprocessing time and the space is $O(\min(3^m, (m\sigma)^k))$ in the worst case, which makes the approach practical only for very small patterns. In [25] they trade time for space using a Four Russians approach, achieving $O(kn/\log s)$ time on average and $O(mn/\log s)$ in the worst case, assuming that $O(s)$ space is available for the DFAs.
+
+The third approach filters the text to quickly discard large text areas, using a necessary condition for an approximate occurrence that is easier to check than the full condition. The areas that cannot be discarded are verified with a classical algorithm [20, 19, 5, 14, 16]. These algorithms achieve "sublinear" expected time in many cases for low difference ratios, that is, not all text characters are inspected. However, the filtration is not effective for higher ratios. The typical average complexity is $O(kn \log_\sigma m/m)$ for $\alpha = O(1/\log_\sigma m)$. The optimal average complexity is $O((k + \log_\sigma m)n/m)$ for $\alpha < 1 - O(1/\sqrt{\sigma})$ [5], which is achieved in the same paper. The algorithm, however, is not the fastest in practice.
+
+Finally, the fourth approach is bit-parallelism [1, 24], which consists in packing several values in the bits of the same computer word and managing to update them all in a single operation. The idea is to simulate another algorithm using bit-parallelism. The first bit-parallel algorithm for approximate searching [24] parallelized an automaton-based algorithm: a nondeterministic finite automaton (NFA) was simulated in $O(k \lceil m/w \rceil n)$ time, where $w$ is the number of bits in the computer word. We call this algorithm BPA (for Bit-Parallel Automaton) in this paper. BPA was improved to $O(\lceil km/w \rceil n)$ [3] and finally to $O(\lceil m/w \rceil n)$ time [12]. The latter simulates the classical dynamic programming algorithm using bit-parallelism, and we call it BPM (for Bit-Parallel Matrix) in this paper.
+
+Currently the most successful approaches in practice are filtering and bit-parallelism. A promising approach combining both [16] will be called ABNDM in this paper (for Approximate BNDM, where BNDM stands for Backward Nondeterministic DAWG Matching). The original ABNDM was built on BPA because the latter is the most flexible for the particular operations needed. The faster BPM was not used at that time yet because of the difficulty in modifying it to be suitable for ABNDM.
+
+In this paper we extend BPM in several ways so as to permit it to be used in the framework of ABNDM. The result is a competitive approximate string matching algorithm. We show that, for $m \le w$, the algorithm has average-optimal complexity $O((k + \log m)n/m)$ for $\alpha < 1/2 - O(1/\sqrt{\sigma})$. Note that optimality holds provided we assume $\sigma$ is a constant. For longer patterns it becomes $O((k+\log m)n/w)$. In practice, the algorithm turns out to be the fastest for a range of *m* and *k* that includes interesting cases of natural language searching and computational biology applications. For our analysis, we prove that ABNDM inspects a (truly) optimal number of characters, despite not having an optimal overall complexity.
+---PAGE_BREAK---
+
+Among the extensions needed by BPM, the most challenging one is making it detect whether or not the characters read up to now can lead to a match. Under the automaton approach (BPA) this is easy because it is equivalent to the automaton having run out of active states. BPM, however, does not simulate an automaton but rather a dynamic programming matrix. In this case, the condition sought is that all matrix values in the last column exceed *k*. Since BPM handles differential rather than absolute matrix values, this kind of check is difficult and has prevented using BPM instead of BPA for ABNDM.
+
+We solve the problem by introducing the witness concept. A witness is a matrix cell whose absolute value is known. Together with the differential values, we update one or more witness values in parallel. Those witnesses are used to deduce, bound or compute all the other matrix values.
+
+The usefulness of the witness concept goes well beyond the application we developed it for. To demonstrate this, we show how it can be used to improve a recently proposed algorithm [7] where the main idea is to compute the dynamic programming matrix, using BPM, in row-wise rather than the usual column-wise fashion. One of the subproblems addressed in [7] is how to determine that it is not necessary to compute more rows. Again, the condition is that all current values exceed *k*. We show that our witness technique yields large improvements over the solution presented in [7].
+
+The structure of the paper is as follows. Section 2 presents the background necessary to follow the paper. Section 3 analyzes the classical ABNDM algorithm, because previous analyses [13, 10] are pessimistic. Section 4 shows how BPM algorithm can be adapted to meet the requirements of ABNDM verification. Section 5 gives the changes to BPM that are necessary for ABNDM scanning. Section 6 gives experimental results on the improved ABNDM algorithm. Section 7 shows how the witness technique can be used to improve the row-wise BPM algorithm. Finally, Section 8 gives our conclusions and future work directions.
+
+An earlier partial version of this work appeared in [10].
+
+# 2 Basic Concepts
+
+## 2.1 Notation
+
+We will use the following notation on strings: $|x|$ will be the length of string $x$; $\varepsilon$ will be the only string of length zero; string positions will start at 1; substrings will be denoted as $x_{i..j}$, meaning the $i$-th through $j$-th characters of $x$, both inclusive; $x_i$ will denote the single character at position $i$ in $x$. We say that $x$ is a prefix of $xy$, a suffix of $yx$, and a substring or a factor of $yxz$.
+
+Bit-parallel algorithms will be described using C-like notation for the operations: bitwise “and” (`&`), bitwise “or” (`|`), bitwise “xor” (`^`), bit complementation (`~`), and shifts to the left (`<<`) and to the right (`>>`), which are assumed to enter zero bits both ways. We also perform normal arithmetic operations (`+`, `-`, etc.) on the bit masks, which are treated as numbers in this case. Constant bit masks are expressed as sequences of bits, the first to the right, using exponentiation to denote bit repetition, for example $10^3 = 1000$ has a 1 at the 4-th position.
+---PAGE_BREAK---
+
+## 2.2 Problem Description
+
+The problem of approximate string matching can be stated as follows: given a (long) text $T$ of length $n$, and a (short) pattern $P$ of length $m$, both being sequences of characters from an alphabet $\Sigma$ of size $\sigma$, and a maximum number of differences permitted, $k$, find all the segments of $T$ whose *edit distance* to $P$ is at most $k$. Those segments are called “occurrences”, and it is common to report only their start or end points.
+
+The *edit distance* between two strings $x$ and $y$ is the minimum number of *differences* that would transform $x$ into $y$ or vice versa. The allowed differences are deletion, insertion and substitution of characters. The problem is non-trivial for $0 < k < m$. The *difference ratio* is defined as $\alpha = k/m$.
+
+Formally, if $ed()$ denotes the edit distance, we may want to report start points (i.e., $\{|x| : T = xP'y,\ ed(P, P') \le k\}$) or end points (i.e., $\{|xP'| : T = xP'y,\ ed(P, P') \le k\}$) of occurrences.
+
+## 2.3 Dynamic Programming
+
+The oldest and still most flexible (albeit slowest) algorithm to solve the problem is based on dynamic programming [18]. We first show how to compute the edit distance between two strings $x$ and $y$. To compute $ed(x, y)$, a $(|x| + 1) \times (|y| + 1)$ dynamic programming matrix $M_{0..|x|,0..|y|}$ is filled so that eventually $M_{i,j} = ed(x_{1..i}, y_{1..j})$. The desired solution is then obtained as $M_{|x|,|y|} = ed(x, y)$. Matrix $M$ can be filled by using the well-known dynamic programming recurrence
+
+$$
+\begin{aligned}
+M_{i,0} &\leftarrow i, \qquad M_{0,j} \leftarrow j, \\
+M_{i,j} &\leftarrow \text{if } (x_i = y_j) \text{ then } M_{i-1,j-1} \text{ else } 1 + \min(M_{i-1,j}, M_{i,j-1}, M_{i-1,j-1}),
+\end{aligned}
+$$
+
+where the formula accounts for the three allowed operations. After setting the boundary conditions (first line of the recurrence), it is common to fill the remaining cells of $M$ in a column-wise manner from left to right: The columns are processed in the order $j = 1...|y|$, and column $j$ is filled from top to bottom in the order $i = 1...|x|$ before moving to the next column $j+1$. Dynamic programming requires $O(|x||y|)$ time to compute $ed(x,y)$. The space can be reduced to $O(m)$ by storing only one column of $M$ at a time, namely, the one corresponding to the current character of $y$ (going left to right means examining $y$ sequentially).
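As a concrete illustration (our own sketch, not code from the paper), the column-wise computation of $ed(x,y)$ can be written as follows; the function name `edit_distance` and the fixed buffer size are assumptions made for the example:

```c
#include <assert.h>
#include <string.h>

/* Compute ed(x, y) by filling the dynamic programming matrix M
   column by column, keeping only the current column C and the
   updated column Cp. A sketch of the classical O(|x||y|) method;
   the fixed buffers assume |x| < 64. */
static int edit_distance(const char *x, const char *y)
{
    int lx = (int)strlen(x), ly = (int)strlen(y);
    int C[64], Cp[64];
    for (int i = 0; i <= lx; i++) C[i] = i;          /* M_{i,0} = i */
    for (int j = 1; j <= ly; j++) {
        Cp[0] = j;                                   /* M_{0,j} = j */
        for (int i = 1; i <= lx; i++) {
            if (x[i-1] == y[j-1]) Cp[i] = C[i-1];    /* match: copy diagonal */
            else {
                int min = C[i-1];                    /* substitution */
                if (C[i]    < min) min = C[i];       /* insertion    */
                if (Cp[i-1] < min) min = Cp[i-1];    /* deletion     */
                Cp[i] = 1 + min;
            }
        }
        memcpy(C, Cp, (lx + 1) * sizeof(int));       /* C' becomes C */
    }
    return C[lx];                                    /* M_{|x|,|y|}  */
}
```

For example, `edit_distance("survey", "surgery")` yields 2, matching the left table of Figure 1.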
+
+The preceding method is easily extended to approximate searching, where $x = P$ and $y = T$, by letting the comparison between $P$ and $T$ start anywhere in $T$. The only change is the initial condition $M_{0,j} \leftarrow 0$. The time is still $O(|x||y|) = O(mn)$.
+
+Throughout this paper we assume that $M$ is processed in column-wise manner. Let the vector $C_{0...m}$ correspond to the values in the currently processed column of $M$. Then the equality $C_i = M_{i,j}$ holds whenever we have just processed the text character $T_j$. Initially $C_i \leftarrow M_{i,0} = i$. When we move on to process the next text character $T_j$, vector $C$ first corresponds to column $j-1$. Let $C'$ denote its updated version that corresponds to the values in column $j$. When we move to the next column $j+1$, $C'$ becomes $C$, and the new $C'$ will correspond to the updated values in column $j+1$, and so on. Following the recurrence for $M$, the update formula for $C$ is
+
+$$ C'_i \leftarrow \text{if } (P_i = T_j) \text{ then } C_{i-1} \text{ else } 1 + \min(C'_{i-1}, C_i, C_{i-1}) $$
+
+for all $i > 0$. We report an occurrence ending at text position $j$ whenever $C'_m \le k$ immediately after processing the column corresponding to $T_j$.
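The search loop just described admits a similarly direct sketch (again our own illustration; the function `approx_search`, its buffers and its output convention are assumptions):

```c
#include <assert.h>
#include <string.h>

/* Report the end positions j (1-based) where P matches a substring
   of T with at most k differences, maintaining one column vector of
   the dynamic programming matrix. End points are stored in out[];
   the return value is their number. A sketch of the classical O(mn)
   search; the fixed buffers assume m < 64. */
static int approx_search(const char *P, const char *T, int k, int *out)
{
    int m = (int)strlen(P), n = (int)strlen(T), cnt = 0;
    int C[64], Cp[64];
    for (int i = 0; i <= m; i++) C[i] = i;
    for (int j = 1; j <= n; j++) {
        Cp[0] = 0;                 /* M_{0,j} = 0: match starts anywhere */
        for (int i = 1; i <= m; i++) {
            if (P[i-1] == T[j-1]) Cp[i] = C[i-1];
            else {
                int min = C[i-1];
                if (C[i]    < min) min = C[i];
                if (Cp[i-1] < min) min = Cp[i-1];
                Cp[i] = 1 + min;
            }
        }
        if (Cp[m] <= k) out[cnt++] = j;   /* occurrence ends at T_j */
        memcpy(C, Cp, (m + 1) * sizeof(int));
    }
    return cnt;
}
```

On the example of Figure 1 (searching "survey" in "surgery" with $k = 2$) this reports the end positions 5, 6 and 7.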
+---PAGE_BREAK---
+
+Several properties of matrix *M* are discussed in [21]. The most important for us is that adjacent cells in *M* differ at most by 1, that is, both $M_{i,j} - M_{i\pm1,j}$ and $M_{i,j} - M_{i,j\pm1}$ are in the set $\{-1, 0, +1\}$. Also, $M_{i+1,j+1} - M_{i,j}$ is in the set $\{0, 1\}$.
+
+Figure 1 shows examples of edit distance computation and approximate string matching.
+
+| | | s | u | r | g | e | r | y |
+|---|---|---|---|---|---|---|---|---|
+| | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 |
+| s | 1 | 0 | 1 | 2 | 3 | 4 | 5 | 6 |
+| u | 2 | 1 | 0 | 1 | 2 | 3 | 4 | 5 |
+| r | 3 | 2 | 1 | 0 | 1 | 2 | 3 | 4 |
+| v | 4 | 3 | 2 | 1 | 1 | 2 | 3 | 4 |
+| e | 5 | 4 | 3 | 2 | 2 | 1 | 2 | 3 |
+| y | 6 | 5 | 4 | 3 | 3 | 2 | 2 | **2** |
+
+| | | s | u | r | g | e | r | y |
+|---|---|---|---|---|---|---|---|---|
+| | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
+| s | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 1 |
+| u | 2 | 1 | 0 | 1 | 2 | 2 | 2 | 2 |
+| r | 3 | 2 | 1 | 0 | 1 | 2 | 2 | 3 |
+| v | 4 | 3 | 2 | 1 | 1 | 2 | 3 | 3 |
+| e | 5 | 4 | 3 | 2 | 2 | 1 | 2 | 3 |
+| y | 6 | 5 | 4 | 3 | 3 | **2** | **2** | **2** |
+
+Figure 1: The dynamic programming algorithm. On the left, to compute the edit distance between "survey" and "surgery". On the right, to search for "survey" in the text "surgery". The bold entries show the cell with the edit distance (left) and the end positions of occurrences for $k = 2$ (right).
+
+## 2.4 The Cutoff Improvement
+
+In [22] Ukkonen observed that dynamic programming matrix values larger than $k$ can be assumed to be $k+1$ without affecting the output of the computation. Moreover, once $M_{i,j} > k$, it is known that $M_{i+1,j+1} > k$. Cells of $C$ with value not exceeding $k$ are called active. In the algorithm, the row index $\ell$ of the last active cell (i.e., the largest $i$ such that $C_i \le k$) is maintained (let us assume $\ell = -1$ if $C_i > k$ for all $i$). All the values $C_{\ell+1..m}$ are assumed to be $k+1$, and we know that also the updated values $C'_{\ell+2..m}$ will be larger than $k$. So $C'$ needs to be updated only in the range $C'_{1..\ell+1}$.
+
+The value $\ell$ has to be updated throughout the computation. Initially $\ell = k$ because $C_i = M_{i,0} = i$. The row index of the last active cell can increase by at most one when moving to the next column. So we first check whether $C'_{\ell+1} \le k$, and in such a case we increment $\ell$. Otherwise we search upwards for the new last active cell by decrementing $\ell$ as long as $C'_{\ell} > k$. Although this search can take $O(m)$ time at a given column, the total work over the whole text is $O(n)$: there are at most $n$ increments of $\ell$ in the whole process, and hence there cannot be more than $n+k$ decrements. Thus the row index $\ell$ of the last active cell is maintained at $O(1)$ amortized cost per column.
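The maintenance of $\ell$ can be sketched as follows (our own illustration; the function name and the convention $\ell = -1$ for an all-inactive column follow the text above):

```c
#include <assert.h>

/* Given the new column Cp[0..m], update the row index ell of its
   last active cell (largest i with Cp[i] <= k, or -1 if none).
   Since ell grows by at most one per column, the total number of
   decrements over n columns is bounded by n + k, giving O(1)
   amortized cost per column. */
static int update_last_active(const int *Cp, int ell, int m, int k)
{
    if (ell < m && Cp[ell + 1] <= k)
        ell++;                       /* last active cell moved down one row */
    else
        while (ell >= 0 && Cp[ell] > k)
            ell--;                   /* search upwards for an active cell   */
    return ell;
}
```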
+
+In [4] it was shown that on average $\ell = O(k)$, and therefore Ukkonen's cutoff scheme runs in $O(kn)$ expected time.
+
+## 2.5 An Automaton View
+
+An alternative approach is to model the search with a non-deterministic automaton (NFA) [2]. Consider the NFA for $k=2$ differences shown in Figure 2. Each of the $k+1$ rows denotes the
+---PAGE_BREAK---
+
+number of differences seen (the first row zero, the second row one, etc.). Every column represents
+matching a pattern prefix. Horizontal arrows represent matching a character. All the others
+increment the number of differences (i.e., move to the next row): vertical arrows insert a character
+in the pattern, solid diagonal arrows substitute a character, and dashed diagonal arrows delete a
+character of the pattern. The initial self-loop allows an occurrence to start anywhere in the text.
+The automaton signals (the end of) a match whenever a rightmost state is active.
+
+It is not hard to see that once a state in the automaton is active, all the states of the same
+column and higher-numbered rows are active too. Moreover, at a given text position, *if we collect
+the smallest active rows at each column, we obtain the vector $C$ of the dynamic programming* (in
+this case [0, 1, 2, 3, 3, 3, 2], compare to the last column of the right table in Figure 1).
+
+Figure 2: An NFA for approximate string matching of the pattern "survey" with two differences.
+The shaded states are those active after reading the text "surgery".
+
+Note that the NFA can be used to compute edit distance by simply removing the self-loop,
+although it cannot distinguish among different values larger than *k*.
+
+## 2.6 A Bit-Parallel Automaton Simulation (BPA)
+
+The idea of BPA [24] is to simulate the NFA of Figure 2 using bit-parallelism, so that each row *i* of the automaton fits in a computer word *R*ᵢ (each state is represented by a bit). For each new text character, all the transitions of the automaton are simulated using bit operations among the *k* + 1 computer words.
+
+The update formula to obtain the new $R'_i$ values at text position $j$ from the current $R_i$ values
+is as follows:
+
+$$
+\begin{align*}
+R'_{0} & \leftarrow ((R_0 << 1) | 0^m 1) \& B[T_j], \\
+R'_{i+1} & \leftarrow ((R_{i+1} << 1) \& B[T_j]) | R_i | (R_i << 1) | (R'_i << 1),
+\end{align*}
+$$
+
where $B[c]$ is a precomputed table of $\sigma$ entries such that the first bit of $B[c]$ is always set and the $(r+1)$-th bit is set whenever $P_r = c$. We start the search with $R_i = 0^{m-i}1^{i+1}$. The formula for $R'_{i+1}$ expresses, in that order, the horizontal, vertical, diagonal, and dashed diagonal arrows.
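A direct transcription of these formulas (ours), representing each $R_i$ as an unbounded Python integer and assuming the $m+1$ bits fit in a computer word:

```python
def bpa_search(P, T, k):
    """Report the ending positions (1-based) of approximate occurrences of P in T."""
    m = len(P)
    full = (1 << (m + 1)) - 1
    B = {}
    for c in set(P) | set(T):
        v = 1                                   # the first bit of B[c] is always set
        for r, pc in enumerate(P, start=1):
            if pc == c:
                v |= 1 << r                     # (r+1)-th bit set whenever P_r = c
        B[c] = v
    R = [(1 << (i + 1)) - 1 for i in range(k + 1)]   # R_i = 0^{m-i} 1^{i+1}
    matches = []
    for j, c in enumerate(T, start=1):
        nR = [((R[0] << 1) | 1) & B[c]]
        for i in range(k):
            nR.append((((R[i + 1] << 1) & B[c])
                       | R[i] | (R[i] << 1) | (nR[i] << 1)) & full)
        R = nR
        if (R[k] >> m) & 1:                     # a rightmost state is active
            matches.append(j)
    return matches
```

On the running example, searching "survey" in "surgery" with $k=2$ reports matches ending at text positions 5, 6 and 7.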
+
If $m + 1 > w$, we need $\lceil (m+1)/w \rceil$ computer words to simulate every $R_i$ mask¹ and have to update
+
+¹With slightly more complicated formulas, the simulation can be done using *m* bits, instead of *m* + 1.
+---PAGE_BREAK---
+
them one by one. The cost of this simulation is thus $O(k \lceil m/w \rceil n)$. The algorithm is flexible. For example, the initial self-loop can be removed by changing the update formula into:
+
+$$R'_{0} \leftarrow (R_{0} << 1) \& B[T_{j}],$$
+
+$$R'_{i+1} \leftarrow ((R_{i+1} << 1) \& B[T_j]) | R_i | (R_i << 1) | (R'_i << 1).$$
+
+## 2.7 Myers’ Bit-Parallel Matrix Simulation (BPM)
+
+A better way to parallelize the computation [12] is to represent the differences between consecutive rows or columns of the dynamic programming matrix instead of the NFA states. Let us call
+
+$$\Delta h_{i,j} = M_{i,j} - M_{i,j-1} \in \{-1, 0, +1\},$$
+
+$$\Delta v_{i,j} = M_{i,j} - M_{i-1,j} \in \{-1, 0, +1\},$$
+
+$$\Delta d_{i,j} = M_{i,j} - M_{i-1,j-1} \in \{0, 1\},$$
+
the horizontal, vertical, and diagonal differences among consecutive cells. Their ranges of values follow from the properties of the dynamic programming matrix [21].
+
+We present a version [9] that differs slightly from that of [12]: Although both perform the same number of operations per text character, the one we present is easier to understand and more convenient for our purposes.
+
+Let us introduce the following boolean variables. The first four refer to horizontal/vertical positive/negative differences and the last to the diagonal difference being zero:
+
+$$VP_{i,j} \equiv \Delta v_{i,j} = +1, \quad VN_{i,j} \equiv \Delta v_{i,j} = -1,$$
+
+$$HP_{i,j} \equiv \Delta h_{i,j} = +1, \quad HN_{i,j} \equiv \Delta h_{i,j} = -1,$$
+
+$$D0_{i,j} \equiv \Delta d_{i,j} = 0.$$
+
+Note that $\Delta v_{i,j} = VP_{i,j} - VN_{i,j}$, $\Delta h_{i,j} = HP_{i,j} - HN_{i,j}$, and $\Delta d_{i,j} = 1 - D0_{i,j}$. It is clear that these values completely define $M_{i,j} = \sum_{r=1..i} \Delta v_{r,j}$.
+
The boolean matrices *HN*, *VN*, *HP*, *VP*, and *D0* can be seen as vectors indexed by *i*, which change their value for each new text position *j* as we traverse the text. These vectors are kept in bit masks with the same name. Hence, for example, the *i*-th bit of the bit mask *HN* will correspond to the value $HN_{i,j}$. The index *j* − 1 refers to the previous value of the bit mask (before processing $T_j$), whereas *j* refers to the new value, after processing $T_j$. By noticing some dependencies among the five variables [9, 17], one can arrive at identities that permit computing their new values (at *j*) from their old values (at *j* − 1) fast.
+
Figure 3 gives the pseudo-code. The value *diff* stores $C_m = M_{m,j}$ explicitly and is updated using $HP_{m,j}$ and $HN_{m,j}$.
+
+This algorithm uses the bits of the computer word better than previous bit-parallel algorithms, with a worst case of $O(\lceil m/w \rceil n)$ time. However, the algorithm is more difficult to adapt to other related problems, and this has prevented it from being used as an internal tool of other algorithms.
+---PAGE_BREAK---
+
+Figure 3: BPM bit-parallel simulation of the dynamic programming matrix.
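One way to realize the simulation in code (our transcription, using the variable names of Section 2.7 and assuming $m \le w$; it tracks $diff = C_m$ as described):

```python
def bpm_search(P, T, k):
    """Search P in T; returns (per-position values of C_m, match end positions)."""
    m = len(P)
    mask = (1 << m) - 1
    B = {c: 0 for c in set(P) | set(T)}
    for i, c in enumerate(P):
        B[c] |= 1 << i
    VP, VN, diff = mask, 0, m                  # column 0: C_i = i, hence diff = m
    scores, matches = [], []
    for j, c in enumerate(T, start=1):
        X = B[c] | VN
        D0 = (((VP + (X & VP)) ^ VP) | X) & mask
        HN = VP & D0
        HP = (VN | ~(VP | D0)) & mask
        if (HP >> (m - 1)) & 1:                # HP_{m,j} / HN_{m,j} update diff
            diff += 1
        elif (HN >> (m - 1)) & 1:
            diff -= 1
        X = (HP << 1) & mask                   # searching: Delta h_{0,j} = 0
        VN = X & D0
        VP = ((HN << 1) | ~(X | D0)) & mask
        scores.append(diff)
        if diff <= k:
            matches.append(j)
    return scores, matches
```

For "survey" against "surgery" with $k=2$ the successive values of $diff$ are 5, 4, 3, 3, 2, 2, 2, in agreement with the last rows of the dynamic programming columns.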
+
+## 2.8 The ABNDM Algorithm
+
Given a pattern $P$, a *suffix automaton* is an automaton that recognizes every suffix of $P$. This is used in [6] to design a simple exact pattern matching algorithm called BDM, which is optimal on average ($O(n \log_\sigma m/m)$ time). To search for a pattern $P$ in a text $T$, the suffix automaton of $P^r = P_m P_{m-1} \dots P_1$ (i.e., the pattern read backwards) is built. A window of length $m$ is slid along the text, from left to right. The algorithm scans the window backwards, using the suffix automaton to recognize a factor of $P$. During this scan, if a final state is reached that does not correspond to the entire pattern $P$, the window position is recorded in a variable *last*. This corresponds to finding a *prefix* of the pattern starting at position *last* inside the window and ending at the end of the window, because the suffixes of $P^r$ are the reverse prefixes of $P$. This backward search can end in two possible ways:
+
+1. We fail to recognize a factor, that is, we reach a letter $a$ that does not correspond to a transition in the suffix automaton (Figure 4). In this case we shift the window to the right so as to align its starting position to the position *last*.
+
+2. We reach the beginning of the window, and hence recognize $P$ and report the occurrence. Then, we shift the window exactly as in case 1 (to the previous *last* value).
+
+In BNDM [16] this scheme is combined with bit-parallelism so as to replace the construction of the deterministic suffix automaton by the bit-parallel simulation of a nondeterministic one. The scheme turns out to be flexible and powerful, and permits other types of search, in particular approximate search. The resulting algorithm is ABNDM.
+---PAGE_BREAK---
+
+Figure 4: BDM search scheme.
+
+We modify the NFA of Figure 2 so that it recognizes not only the whole pattern but also any
+suffix thereof, allowing up to *k* differences. Figure 5 illustrates the modified NFA. Note that we
+have removed the initial self-loop, so it does not search for the pattern but recognizes strings at
+edit distance *k* or less from the pattern. Moreover, we have built it on the reverse pattern. We
+have also added an initial state "I", with ε-transitions leaving it. These allow the automaton to
+recognize, with up to *k* differences, any suffix of the pattern.
+
+Figure 5: An NFA to recognize suffixes of the pattern "survey" reversed.
+
+In the case of approximate searching, the length of a pattern occurrence ranges from *m* − *k* to
*m* + *k*. To avoid missing any occurrence, we slide a window of length *m* − *k* along the text, and scan
the window backwards using the NFA described above.
+
+Each time we move the window to a new position, we start the automaton with all its states
+active, which represents setting the initial state to active and letting the $\epsilon$-transitions propagate
+this activation to all the automaton (the states in the lower-left triangle are also activated to allow
+initial insertions). Then we start reading the window characters backward.
+
+We recognize a prefix and update *last* whenever the final NFA state is activated. We stop the
+backward scan when the NFA is out of active states.
+
+If the automaton recognizes a pattern prefix at the initial window position, then it is possible
+(but not necessary) that the window starts an occurrence. The reason is that strings of different
+length match the pattern with *k* differences, and all we know is that we have matched a prefix of
+---PAGE_BREAK---
+
+the pattern of length $m-k$.
+
+Therefore, in this case we need to *verify* whether there is a complete pattern occurrence starting at the beginning of the window. For this sake, we run the traditional automaton that computes edit distance (i.e., that of Figure 2 without initial self-loop) from the initial window position in the text. After reading at most $m+k$ characters, we have either found a match starting at the window position (that is, the final state becomes active) or determined that no match starts at the window beginning (that is, the automaton runs out of active states).
+
+So we need two different automata in this algorithm. The first one makes the *backward scanning*, recognizing suffixes of $P^r$. The second one makes the *forward scanning*, recognizing $P$.
+
+The automata can be simulated in a number of ways. In [16] they choose BPA [24] because it is easy to adapt to the new scenario. To recognize all the suffixes, we just need to initialize $R_i \leftarrow 1^{m+1}$. To make it compute edit distance, we remove the self-loop as explained in Section 2.6. The final state is active when $R_k \& 10^m \neq 0^{m+1}$. The NFA is out of active states whenever $R_k = 0^{m+1}$. Other approaches were discarded: an alternative NFA simulation [3] is not practical to compute edit distance, and BPM [12] cannot easily tell when the corresponding automaton is out of active states, or similarly, when all the cells of the current dynamic programming column are larger than $k$.
+
+Figure 6 shows the algorithm.
+
**ABNDM** ($P_{1 \ldots m}$, $T_{1 \ldots n}$, $k$)
1. **Preprocessing**
2. Build forward and backward NFA simulations (*fNFA* and *bNFA*)
3. **Searching**
4. $pos \leftarrow 0$
5. **While** $pos \le n - (m - k)$ **Do**
6. $j \leftarrow m - k$, $last \leftarrow m - k$
7. Initialize *bNFA*
8. **While** $j \ne 0$ AND *bNFA* has active states **Do**
9. Feed *bNFA* with $T_{pos+j}$
10. $j \leftarrow j - 1$
11. **If** *bNFA*'s final state is active **Then** /* prefix recognized */
12. **If** $j > 0$ **Then** $last \leftarrow j$
13. **Else** check with *fNFA* a possible occurrence starting at $pos + 1$
14. $pos \leftarrow pos + last$
+
+Figure 6: The generic ABNDM algorithm.
+
+The algorithm is shown to be good for moderate *m*, low *k* and small *σ*, which is an interesting case, for example, in DNA searching. However, the use of BPA for the NFA simulation limits its usefulness to very small *k* values. Our purpose in this paper is to show that BPM can be extended for this task, so as to obtain a faster version of ABNDM that works with larger *k*.
+---PAGE_BREAK---
+
+### 3 Average Case Analysis of ABNDM
+
+The best previous analysis of ABNDM [10] (which improved the first one [13]) has shown that the algorithm inspects on average $O(kn \log_\sigma(m)/m)$ text positions. We show now that those analyses are pessimistic, and that the number of character inspections made by ABNDM is indeed the optimal $O((k + \log_\sigma m)n/m)$, and this holds for $\alpha < 1/2 - O(1/\sqrt{\sigma})$. This will be essential to analyze the new algorithms we present in the following sections.
+
We analyze a simplified algorithm that never inspects fewer characters than the real ABNDM algorithm, and show that even this simplified algorithm is optimal. The simplified algorithm always inspects $\ell$ characters from the window ($\ell$ will be determined later), and only then checks whether the string read matches inside $P$ with $k$ errors or less. If the string does not match, the window is shifted by $m-k-\ell$ characters. If it matches, the whole window is verified and the window is shifted by one position. It is clear that this algorithm can never perform better than the original in any possible text window. If the original algorithm stops the scanning before reading $\ell$ characters (and hence shifts by more than $m-k-\ell$ positions), the simplified algorithm still reads $\ell$ characters and shifts by $m-k-\ell$ positions. Otherwise, the simplified algorithm falls into the worst possible situation: it checks the whole window and shifts by one position.
+
+Let us consider the $n-(m-k)+1 \le n$ text windows of length $m-k$. We divide them into good and bad windows. A window is good if its last $\ell$ characters do not match inside $P$ with $k$ errors or less, otherwise it is bad. We will consider separately the cost to process good and bad windows.
+
When the search encounters a good window, by definition, it inspects $\ell$ characters and shifts by $m-k-\ell$ positions. Hence we cannot process more than $\lfloor n/(m-k-\ell) \rfloor$ good windows, at $O(\ell)$ cost each, and the overall number of inspected characters inside good windows is $O(\ell n/(m-k-\ell))$.
+
+In order to handle the bad windows, we must bound the probability of a window being bad. In [3, 13] it is shown that the probability that a given string of length $\ell$ matches at a given (final) position inside a longer string is $a^\ell/\ell$, where $a < 1$ whenever $k/\ell < 1-e/\sqrt{\sigma}$, that is, we need at least $\ell > k/(1-e/\sqrt{\sigma})$. An upper bound to the probability of the string of length $\ell$ matching inside $P$ is obtained by adding up the $m$ possible final match positions of the string inside $P$, as if the events of matching at different final positions were disjoint and the first $\ell+k-1$ positions did not have lower probability of finishing a match. Hence, an upper bound to the probability of a window being bad is $ma^\ell/\ell$.
+
+When the window is bad, we pay at most $(m-k)+(m+k)=2m$ character inspections for scanning and verifications, and then shift by one. Since there are at most $n$ bad windows in the text, an upper bound to the overall average number of characters inspected on bad windows is $n \cdot ma^\ell/\ell \cdot 2m = O(m^2na^\ell/\ell)$. This upper bound is obtained by assuming that we will consider all the text windows, and pay $2m$ for all the bad ones.
+
+We choose $\ell$ large enough so that the cost of bad windows does not exceed $O(n/m)$, so as to ensure that the cost on good windows dominates. For this to hold, we need $a^\ell/\ell \le 1/m^3$, or more strictly, $a^\ell \le 1/m^3$. This is equivalent to $\ell \ge 3\log_{1/a} m$. Since $a \ge 1/\sigma$ [3], a sufficient condition is $\ell \ge 3\log_\sigma m$.
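The arithmetic behind this choice is immediate: with $a^\ell \le 1/m^3$, the bad-window cost becomes

$$m^2 n \, \frac{a^\ell}{\ell} \;\le\; m^2 n \, \frac{1}{m^3 \ell} \;=\; \frac{n}{m \ell} \;=\; O(n/m).$$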
+
Therefore, we have that the overall number of characters inspected is $O(\ell n/(m-k-\ell))$ provided $\ell > k/(1-e/\sqrt{\sigma})$ and also $\ell \ge 3\log_\sigma m$. The complexity is $O(\ell n/m)$ provided $m-k-\ell \ge cm$ for some constant $0 < c < 1$, that is, $\ell \le (1-c)m-k$. So we have lower and upper bounds
+---PAGE_BREAK---
+
on $\ell$. The first condition we can derive from both is $k/(1 - e/\sqrt{\sigma}) < (1 - c)m - k$, that is, $\alpha < (1 - e/\sqrt{\sigma})/(2 - e/\sqrt{\sigma}) = 1/2 - O(1/\sqrt{\sigma})$. Since $k < m/2$, $(1-c)m-k > (1/2-c)m$ and therefore this upper bound on $\ell$ does not clash, asymptotically, with the lower bound $\ell \ge 3\log_{\sigma} m$.
+
So we have that the complexity is $O(\ell n/m)$ provided $\alpha < 1/2 - O(1/\sqrt{\sigma})$ and $\ell > \max(k/(1-e/\sqrt{\sigma}), 3\log_{\sigma} m)$. Choosing an appropriate $\ell$ we obtain complexity $O(\max(k, \log_{\sigma} m)n/m) = O((k+\log_{\sigma} m)n/m)$, which is optimal [5]. This shows that our pessimistic analysis is tight and that ABNDM inspects an optimal (on average) number of characters.
+
+ABNDM, however, is not optimal in terms of overall complexity. The reason is that, for each character inspected, the BPA automaton needs time $O(k)$ to process it if $m \le w$, and $O(mk/w)$ in general. This gives an overall complexity of $O((k + \log_{\sigma} m)kn/m)$ if $m \le w$, and $O((k + \log_{\sigma} m)kn/w)$ in general.
+
+In this paper we manage to use BPM instead of BPA. This simulation takes $O(1)$ per inspected character if $m \le w$, and $O(m/w)$ in general. In this case the complexity would be the optimal $O((k + \log_{\sigma} m)n/m)$ for $m \le w$ and $O((k + \log_{\sigma} m)n/w)$ in general. However, as we show later, different complications make the real complexities $O((k + \log m)n/m)$ and $O((k + \log m)n/w)$. These are optimal for constant alphabet size $\sigma$.
+
+# 4 Forward Scanning with the BPM Simulation
+
+We first focus on how to adapt the BPM algorithm to perform the forward scanning required by the ABNDM algorithm. Two modifications are necessary. The first is to make the algorithm compute edit distance instead of performing text searching. The second is making it able to determine when it is not possible to obtain edit distance $\le k$ by reading more characters.
+
+## 4.1 Computing Edit Distance
+
+We recall that BPM implements the dynamic programming algorithm of Section 2.3 in such a way that differential values, rather than absolute ones, are stored. Therefore, we must consider which is the change required in the dynamic programming matrix in order to compute edit distance. As explained in Section 2.3, the only change is that $M_{0,j} = j$. In differential terms (Section 2.7), this means $\Delta h_{0,j} = 1$ instead of zero.
+
When $\Delta h_{0,j} = 0$, its value does not need to be explicitly present in the BPM algorithm. The value makes a difference only when *HP* or *HN* is shifted left, which happens on lines 12 and 14 of the algorithm (Figure 3). On these occasions the assumed bit zero enters automatically from the right, thereby implicitly using a value $\Delta h_{0,j} = 0$. To use a value $\Delta h_{0,j} = 1$ instead, we change line 12 of the algorithm to $X \leftarrow (HP \ll 1) \,|\, 0^{m-1}1$.
+
+Since we will use this technique several times from now on, we give in Figure 7 the code for a single step of edit distance computation.
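In code, the step of Figure 7 plus a small wrapper (ours) yields a full edit distance computation, assuming $m \le w$:

```python
def edit_distance_bpm(P, T):
    """Edit distance between P and T, following the steps of Figure 7."""
    m = len(P)
    mask = (1 << m) - 1
    B = {c: 0 for c in set(P) | set(T)}
    for i, c in enumerate(P):
        B[c] |= 1 << i
    VP, VN, diff = mask, 0, m                    # column 0: M[i][0] = i
    for c in T:
        X = B[c] | VN                            # line 1
        D0 = (((VP + (X & VP)) ^ VP) | X) & mask # line 2
        HN = VP & D0                             # line 3
        HP = (VN | ~(VP | D0)) & mask            # line 4
        if (HP >> (m - 1)) & 1:                  # diff = M[m][j], kept explicitly
            diff += 1
        elif (HN >> (m - 1)) & 1:
            diff -= 1
        X = (HP << 1) | 1                        # line 5: shift in Delta h_{0,j} = 1
        VN = X & D0                              # line 6
        VP = ((HN << 1) | ~(X | D0)) & mask      # line 7
    return diff
```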
+
+## 4.2 Preempting the Computation
+
Although in the forward scan we could simply run the automaton through $m+k$ text characters, stopping as soon as $diff \le k$ to signal a match, it is also possible to determine beforehand that $diff$ will always be larger than $k$ in the characters to come. This happens when all the cells of the vector $C$ are larger
+---PAGE_BREAK---
+
+**BPMStep (Bc)**
+
+1. $X \leftarrow B_c | VN$
+
2. $D0 \leftarrow ((VP + (X \& VP)) \oplus VP) | X$
+
+3. $HN \leftarrow VP \& D0$
+
+4. $HP \leftarrow VN | \sim (VP | D0)$
+
+5. $X \leftarrow (HP << 1) | 0^{m-1}1$
+
+6. $VN \leftarrow X \& D0$
+
+7. $VP \leftarrow (HN << 1) | \sim (X | D0)$
+
+Figure 7: The procedure used for performing the variable updates per scanned character in the adaptation of BPM to edit distance computation. It receives the bit mask *Bc* of the current text character and shares all the other variables with the calling process.
+
+than *k*, because there is no way in the recurrence to introduce a value smaller than the current ones. In the automaton view, this is the same as the NFA running out of active states (since an active state at column *i* and row *r* would mean $C_i = r \le k$).
+
+This is more difficult in the dynamic programming matrix simulation of BPM. The only column value that is explicitly stored is *diff* = $C_m$. The others are implicitly represented as $C_i = \sum_{r=1..i} (VP_r - VN_r)$. Using this incremental representation, it is not easy to check whether $C_i > k$ for all *i*.
+
Our solution is inspired by the cutoff algorithm of Section 2.4. This algorithm permits knowing at all times the largest $\ell$ such that $C_\ell \le k$, at constant amortized time per text position. Although designed for text searching, the technique can be applied without any change to the edit distance computation algorithm. Clearly $\ell \ge 0$ holds if and only if $C_i \le k$ for some $i$. Hence we will maintain a witness in the current column of the dynamic programming matrix that tells which is the last cell not exceeding $k$.
+
+So we have to figure out how to maintain $\ell$ using BPM. Initially, since $C_i = M_{i,0} = i$, we set $\ell \leftarrow k$. Later, we have to update $\ell$ for each new text character read. Recall that neighboring cells in $M$ (and hence in $C$) differ by at most one. Since, by definition of $\ell$, $C_{\ell+1} > k$ and $C_\ell \le k$, we have that $C_\ell = M_{\ell,j-1} = k$ as long as $\ell < m$. We may assume that $k < m$, so the condition $\ell < m$ holds initially. We consider now how to move from column $j-1$ to column $j$ in $M$.
+
+Since $\ell$ can increase at most by one at the new text position, we start by effectively increasing it. This increment is correct when $M_{\ell+1,j} \le k$ before doing the increment. Since $M_{\ell+1,j} - M_{\ell,j-1} = \Delta d_{\ell+1,j} \in \{0,1\}$, we have that it was correct to increase $\ell$ if and only if the bit $D0_{\ell,j}$ is set after the increment. If it was not correct to increase $\ell$, we decrease it as much as necessary to obtain $M_{\ell,j} \le k$. In this case we know that $M_{\ell,j} = k+1$, which enables us to obtain the cell values $M_{\ell-1,j} = M_{\ell,j} - VP_{\ell,j} + VN_{\ell,j}$, and so on with $\ell-2, \ell-3, \dots$. If we reach $\ell=0$ and still $M_{\ell,j} > k$, then all the rows are larger than $k$ and we stop the scanning process.
+
+The above procedure assumed that $\ell < m$. Note that, as soon as $\ell = m$, we have $C_m \le k$, and the forward scan will terminate because we have found an occurrence.
+
+Figure 8 shows the forward scanning algorithm. It scans from text position *j* and determines whether there is an occurrence starting at *j*. Instead of *P*, the routine receives the mask table *B*
+---PAGE_BREAK---
+
+already computed (see Figure 3). Note that for efficiency $\ell$ is maintained in unary.
+
+Figure 8: Adaptation of BPM to perform a forward scan from text position $j$ and return whether there is an occurrence starting at $j$.
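A sketch (ours) of the forward scan just described, combining the step of Figure 7 with the maintenance of $\ell$; it assumes $k < m \le w$ and reads individual bits of $D0$, $VP$ and $VN$ when $\ell$ must be corrected:

```python
def bpm_fwd(P, T, j, k):
    """Forward scan: does an occurrence of P with <= k differences start at T_j (1-based)?"""
    m = len(P)
    mask = (1 << m) - 1
    B = {c: 0 for c in set(P) | set(T)}
    for i, c in enumerate(P):
        B[c] |= 1 << i
    VP, VN = mask, 0                 # column 0 of the edit distance matrix: C_i = i
    ell = k                          # last row with C_ell <= k; invariant: C_ell = k
    for x in range(j, min(len(T), j + m + k - 1) + 1):   # at most m + k characters
        c = T[x - 1]
        X = B[c] | VN                # BPMStep, edit distance variant
        D0 = (((VP + (X & VP)) ^ VP) | X) & mask
        HN = VP & D0
        HP = (VN | ~(VP | D0)) & mask
        X = (HP << 1) | 1
        VN = X & D0
        VP = ((HN << 1) | ~(X | D0)) & mask
        ell += 1                     # tentative increment of the witness
        if ell == m and (D0 >> (m - 1)) & 1:
            return True              # C_m <= k: occurrence found
        if not (D0 >> (ell - 1)) & 1:
            val = k + 1              # increment was wrong: M[ell][x] = k + 1
            while ell > 0 and val > k:   # walk upwards using the VP/VN bits
                val -= ((VP >> (ell - 1)) & 1) - ((VN >> (ell - 1)) & 1)
                ell -= 1
            if val > k:
                return False         # every cell exceeds k: abandon the scan
    return False
```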
+
+# 5 Backward Scanning with the BPM Simulation
+
In this section we address the main obstacle to using BPM instead of BPA inside the ABNDM algorithm: the backward scanning. As explained, the problem is that the backward scanning algorithm should be able to tell, as early as possible, that the string read up to now cannot be contained in any pattern occurrence, so as to shift the window as early as possible. Under BPA simulation it turns out that the condition is equivalent to the simulated NFA not having any active state, and this can be directly checked because the NFA states are explicitly represented in BPA. BPM, on the other hand, does not simulate an automaton, but the dynamic programming matrix. In this case, the condition is equivalent to all matrix values in the last column exceeding $k$. The problem is that BPM does not store absolute matrix values, but differential ones, and this makes it difficult to tell fast whether all cell values exceed some threshold.
+
+We solve the problem by introducing the witness concept. A witness is a matrix cell whose absolute value is known. Several witnesses are spread along the current matrix column. All the witness values are maintained in a single computer word and updated in bit-parallel fashion. By knowing the absolute values of some column cells, we can efficiently compute, bound, or deduce all the other column values. When all the values can be proven to exceed $k$, we know that the current window can be abandoned.
+
+We first develop a naive solution, based on the forward scanning developed in Section 4. This method uses one witness to stop the scanning, and we will show why this cannot be efficient in
+---PAGE_BREAK---
+
+this scenario. This fact will motivate the use of several witnesses, which will be developed in depth
+next.
+
+## 5.1 A Naive Solution
+
The backward scan has the particularity that all the NFA states start active. This is equivalent to initializing $C$ as $C_i = 0$ for all $i$. The place where this initialization is expressed in BPM is on line 4 of Figure 3: $VP = 1^m$ corresponds to $C_i = i$. We change it to $VP \leftarrow 0^m$ and obtain the desired effect. Also, as in forward scanning, $M_{0,j} = j$, so we apply to line 12 the same change as before in order to use the value $\Delta h_{0,j} = 1$.
+
+With these tools at hand, we could simply apply the forward scan algorithm with $B$ built on $P^r$ and read the window backwards. We could use witness $\ell$ to determine when the NFA is out of active states. Every time $\ell = m$, we know that we have recognized a prefix and hence update last. There are a few changes, though: (i) we start with $\ell = m$ because $M_{i,0} = 0$; and (ii) we have to deal with the case $\ell = m$ when updating $\ell$, because now we do not stop the backward scanning in that case but just update last.
+
+The latter problem is solved as follows. As soon as $\ell = m$, we stop tracking $\ell$ and initialize $diff \leftarrow k$ as the known value for $C_m$. We keep updating $diff$ using *HP* and *HN* just as in Figure 3, until $diff > k$. At this moment we switch to updating $\ell$ again, moving it upwards as necessary.
+
+The above scheme works correctly but is terribly slow. The reason is that $\ell$ starts at $m$, and it has to reach zero before we can leave the window. This requires $m$ shifting operations $\ell \leftarrow \ell \gg 1$, which is a lot considering that on average one traverses $O(k + \log_\sigma m)$ characters in the window. The $O(k + n)$ complexity to maintain the last active cell, given in Section 2.4, becomes here $O(m + k + \log_\sigma m)$, since now $\ell$ starts at $m$ instead of $k$ and the “text” length is $O(k + \log_\sigma m)$. Hence, all the column cells reach a value larger than $k$ quite soon, and $\ell$ goes down to zero, correspondingly. The problem is that $\ell$ needs too much time to go down to zero. That is, our witness has to traverse all the $m$ cells to determine that all of them exceed $k$.
+
+We present two solutions to determine fast that all the $C_i$ values have surpassed $k$. Both solutions rely on maintaining several witnesses at the same time along the matrix column. The general idea is to maintain a denser sample of the absolute values in order to reduce the time needed to traverse all the non-sampled cells. In the first version we develop, it might be that we inspect more window characters than necessary in order to determine that we can shift the window. In the second, we will examine the minimum number of characters required, but will have to work more per character, in order to make use of the witnesses. Both will obtain the same search complexity by different means.
+
+## 5.2 Bit-Parallel Witnesses
+
+In the original BPM algorithm, the integer value $diff = C_m$ (a witness) is explicitly maintained in order to determine which text positions match. This is accomplished by using the $m$-th bit of *HP* and *HN* to keep track of $C_m$. This part of the algorithm is not bit-parallel, so in principle one cannot do the same with all the $C_i$ values and still hope to update all of them in a single operation. However, it is possible to store several such witnesses in the same computer word *MC* and use them to bound the others.
+---PAGE_BREAK---
+
**General mechanism.** Let $Q$ denote the space, in bits, that will be reserved for a single witness. We set up $t = \lfloor m/Q \rfloor$ consecutive witnesses into $MC$. The witnesses keep track of the values $C_m, C_{m-Q}, C_{m-2Q}, \dots, C_{m-(t-1)Q}$, and the witness for $C_{m-rQ}$ uses the bits $m-rQ \dots m-rQ+Q-1$ in $MC$². We discuss later how to determine a suitable value $Q$. For now we assume that such a $Q$ has already been determined.
+
+The witnesses can be used as follows. We note that every cell is at most $\lfloor Q/2 \rfloor$ positions away from some represented witness, and it is known that the difference between consecutive cell values is at most 1. Thus we can be sure that all the cell values of $C$ exceed $k$ when all the witness values are larger than $k' = k + \lfloor Q/2 \rfloor$.
+
+The preceding assumption that every cell in $C$ is at a distance of at most $\lfloor Q/2 \rfloor$ to a represented cell may not be true for the first $\lfloor Q/2 \rfloor$ cells. But we know that $C_0 = j$ at the $j$-th iteration, and so we may assume there is an implicit witness at row zero. Moreover, since this witness is always incremented, it is at least as large as any other witness, and so it will surely surpass $k'$ when the other witnesses do. The initial $\lfloor Q/2 \rfloor$ cells are close enough to this implicit witness.
+
+So the idea is to traverse the window until all the witnesses exceed $k'$, and then shift the window. We will examine a few more cells than if we had controlled exactly all the $C$ values. We analyze later the resulting complexity.
+
Figure 9 shows the pseudocode of the algorithm. The algorithm owes its name to the fact that the witness positions are fixed, in contrast to the algorithm of the next section.
+
**Implementation.** In implementing this idea we face two problems. The first one is how to update all the witnesses in a single operation. This is not hard because each witness $C_{m-rQ}$ can be updated from its old to its new value by considering the $(m-rQ)$-th bits of HP and HN. That is, we define a mask $sMask = (0^{Q-1}1)^t 0^{m+Q-1-tQ}$ and update all witnesses in parallel by setting $MC \leftarrow MC + (HP \& sMask) - (HN \& sMask)$ (lines 10 and 20 in Figure 9).
+
+The second problem is how to determine that all the witnesses have exceeded $k'$. For this sake we store each witness with excess $b = 2^{Q-1} - 1 - k'$. That is, when $C_{m-rQ} = x$, the corresponding witness holds the value $x+b$. This way the $Q$-th bit of a witness is activated when the cell value it represents exceeds $k'$. Thus if we define $eMask = (10^{Q-1})^t 0^{m+Q-1-tQ}$, then we can stop the scanning whenever $MC \& eMask = eMask$, that is, when all witnesses have their $Q$-th bits activated (lines 11 and 18 in Figure 9).
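The witness machinery can be exercised against the plain dynamic programming values (a test harness of ours; with $m = 7$ and $k = 1$, as in the example of Figure 10, we have $Q = 3$, $k' = 2$ and $b = 1$):

```python
def witness_demo(P, window, k, Q):
    """Feed `window` and check every witness field of MC against plain DP values.
    Assumes b + (m - k) < 2**Q (no field overflow) and len(window) <= m - k."""
    m = len(P)
    mask = (1 << m) - 1
    kp = k + Q // 2
    b = 2 ** (Q - 1) - kp - 1
    t = m // Q
    sMask = sum(1 << (m - r * Q - 1) for r in range(t))  # lowest bit of each witness
    MC = sum(b << (m - r * Q - 1) for r in range(t))     # every witness starts at b + 0
    B = {c: 0 for c in set(P) | set(window)}
    for i, c in enumerate(P):
        B[c] |= 1 << i
    VP, VN = 0, 0                    # backward scan: C_i = 0 for all i
    C = [0] * (m + 1)                # reference dynamic programming column
    for j, c in enumerate(window, start=1):
        X = B[c] | VN                # BPMStep, edit distance variant
        D0 = (((VP + (X & VP)) ^ VP) | X) & mask
        HN = VP & D0
        HP = (VN | ~(VP | D0)) & mask
        MC += (HP & sMask) - (HN & sMask)    # all witnesses updated at once
        X = (HP << 1) | 1                    # M[0][j] = j
        VN = X & D0
        VP = ((HN << 1) | ~(X | D0)) & mask
        new = [j]
        for i in range(1, m + 1):
            new.append(min(new[i - 1] + 1, C[i] + 1, C[i - 1] + (P[i - 1] != c)))
        C = new
        for r in range(t):           # field r must hold b + C_{m-rQ}
            assert (MC >> (m - r * Q - 1)) & (2 ** Q - 1) == b + C[m - r * Q]
    return True
```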
+
**Determining $Q$.** Let us now explain how to determine the value $Q$ for the number of bits reserved for each witness. Clearly $Q$ should be as small as possible. The criteria for $Q$ are as follows. First of all we need that (i) $b+k'+1=2^{Q-1}$, where $b$ is the excess value. Initializing the witnesses to $b$ allows us to determine, from their $Q$-th bits, that a witness has exceeded $k'=k+\lfloor Q/2 \rfloor$. On the other hand, we have to ensure that the $Q$-th bit remains set for any witness value larger than $k'$, and that $Q$ bits are still enough to represent the witness. Since the upper limit for a cell value in the window is $m-k$ (a cell value cannot exceed the number of characters read), the preceding is guaranteed by the condition (ii) $b+m-k < 2^Q$. Finally, the excess cannot be negative, and so we need (iii) $b \ge 0$.
+
+²If sticking to *m* bits is necessary we can store $C_m$ separately in the *diff* variable, at the same complexity but more cost in practice.
+---PAGE_BREAK---
+
**ABNDMFixedWitnesses** ($P_{1 \ldots m}$, $T_{1 \ldots n}$, $k$)
1. **Preprocessing**
2. **For** $c \in \Sigma$ **Do** $Bf[c] \leftarrow 0^m$, $Bb[c] \leftarrow 0^m$
3. **For** $i = 1 \ldots m$ **Do**
4. $Bf[P_i] \leftarrow Bf[P_i] \mid 0^{m-i}10^{i-1}$
5. $Bb[P_i] \leftarrow Bb[P_i] \mid 0^{i-1}10^{m-i}$
6. $Q \leftarrow \lceil \log_2(m - k + 1) \rceil$
7. **If** $2^{Q-1} < \max(m - 2k - \lfloor Q/2 \rfloor,\ k + 1 + \lfloor Q/2 \rfloor)$ **Then** $Q \leftarrow Q + 1$
8. $b \leftarrow 2^{Q-1} - k - \lfloor Q/2 \rfloor - 1$
9. $t \leftarrow \lfloor m/Q \rfloor$
10. $sMask \leftarrow (0^{Q-1}1)^t 0^{m+Q-1-tQ}$
11. $eMask \leftarrow (10^{Q-1})^t 0^{m+Q-1-tQ}$
12. **Searching**
13. $pos \leftarrow 0$
14. **While** $pos \le n - (m - k)$ **Do**
15. $j \leftarrow m - k$, $last \leftarrow m - k$
16. $VP \leftarrow 0^m$, $VN \leftarrow 0^m$
17. $MC \leftarrow ([b]_Q)^t 0^{m+Q-1-tQ}$
18. **While** $j \ne 0$ AND $MC \,\&\, eMask \ne eMask$ **Do**
19. **BPMStep** ($Bb[T_{pos+j}]$)
20. $MC \leftarrow MC + (HP \,\&\, sMask) - (HN \,\&\, sMask)$
21. $j \leftarrow j - 1$
22. **If** $MC \,\&\, 10^{m+Q-2} = 0^{m+Q-1}$ **Then** /* prefix recognized */
23. **If** $j > 0$ **Then** $last \leftarrow j$
24. **Else If** **BPMFwd** ($Bf$, $T_{pos+1 \ldots n}$) **Then**
25. Report an occurrence at $pos + 1$
26. $pos \leftarrow pos + last$
+
Figure 9: The ABNDM algorithm using bit-parallel witnesses. The expression $[b]_Q$ denotes the number $b$ seen as a bit mask of length $Q$. Note that BPMFwd can share its variables with the calling code because these are not needed any more at that point.
+
+Substituting (i) into (ii) we get (i') $b = 2^{Q-1} - k' - 1$ and (ii') $m - k - k' \le 2^{Q-1}$. By (iii) and (i') we get (iii') $k' + 1 \le 2^{Q-1}$. Hence the solution to the new system of inequalities is $Q = 1 + \lceil \log_2(\max(m-k-k', k'+1)) \rceil$, and $b = 2^{Q-1} - k' - 1$.
+
+The problem with the above solution is that $k' = k + \lfloor Q/2 \rfloor$, so the solution is in fact a recurrence for $Q$. Fortunately, it is easy to solve. Since $(X+Y)/2 \le \max(X,Y) \le X+Y$ for any nonnegative $X$ and $Y$, if we call $X = m-k-k'$ and $Y = k'+1$, we have that $X+Y = m-k+1$. So $Q \le 1 + \lceil \log_2(m-k+1) \rceil$, and $Q \ge 1 + \lceil \log_2((m-k+1)/2) \rceil = \lceil \log_2(m-k+1) \rceil$. This gives a 2-integer range for the actual $Q$ value. If $Q = \lceil \log_2(m-k+1) \rceil$ does not satisfy (ii') and (iii'), we use $Q+1$ (lines 6-8 in Figure 9).
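The resolution of the recurrence can be sketched as follows; the function name is ours, and the code simply mirrors lines 6-8 of Figure 9:

```python
import math

def fixed_witness_params(m, k):
    # Candidate from the 2-integer range: Q = ceil(log2(m - k + 1)).
    Q = math.ceil(math.log2(m - k + 1))
    # If (ii') m - k - k' <= 2^(Q-1) or (iii') k' + 1 <= 2^(Q-1) fails,
    # with k' = k + floor(Q/2), move to the next candidate Q + 1.
    if 2 ** (Q - 1) < max(m - 2 * k - Q // 2, k + 1 + Q // 2):
        Q += 1
    b = 2 ** (Q - 1) - k - Q // 2 - 1  # excess: b = 2^(Q-1) - k' - 1
    return Q, b
```

For $m = 7$ and $k = 1$ this yields $Q = 3$ and $b = 1$, matching the example of Figure 10.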
+
+This scheme works correctly as long as $X, Y \ge 0$, that is, $\lfloor Q/2 \rfloor \le m - 2k$, or $m - k \ge k'$. If this does not hold, our method is useless anyway, since in that case it would have to verify every text window.
+---PAGE_BREAK---
+
+**Example.** Figure 10 shows an example of how the vector *MC* is set up. All the bit masks are of length *m*, except *sMask*, *eMask* and *MC*, which are of length *m* + *Q* − 1.
+
+Figure 10: An example of vectors sMask, eMask and MC when $m = 7$ and $k = 1$. In this case $Q = 3$, $k' = 2$ and $b = 1$. In the middle we show a possible column *C* at position *j*, on the left the vectors *sMask* and *eMask*, and on the right the corresponding composition of the vector *MC* at column *j*. The curly braces point out the bit-regions of the witnesses. The only witness whose Q-th bit is not activated corresponds to value $C_7 = 2 \le k'$.
+
+**Complexity.** Let us analyze the complexity of the resulting algorithm. The backward scan will behave as if we permitted $k' = k + \lfloor Q/2 \rfloor$ differences, so the number of characters inspected is $\Theta(n(k + \log m + \log_\sigma m)/m) = \Theta(n(k + \log m)/m)$. Note that we have only $m/Q$ suffixes to test, but this does not affect the complexity. Note also that the amount of shifting is not affected because we have $C_m$ correctly represented.
+
+In case our upper bound $k' = k + \lfloor Q/2 \rfloor$ turns out to be too loose, we can use several interleaved sets of witnesses, each set in its own bit-parallel mask. For example, we could use two interleaved *MC* masks, and hence the limit would be $k + \lfloor Q/4 \rfloor$. In general we could use $c$ masks and have a limit of the form $k + \lfloor Q/2^c \rfloor$. The cost would be $O(c(k + \log(m)/2^c + \log_\sigma m)n/m)$, which is optimized for $c^* = \log_2(\log(m)/(k+\log_\sigma m))$. Using this optimum, the complexity is $O((k+\log_\sigma m)c^*n/m)$, which means the almost optimal $O((k+\log_\sigma m)\log\log(\sigma)\,n/m)$ when $k = O(\log_\sigma m)$, the almost optimal $O((k+\log_\sigma m)\log(\log(m)/k)\,n/m)$ when $\Omega(\log_\sigma m) = k = O(\log m)$, and the optimal $O(kn/m)$ for $k = \Omega(\log m)$. Hence we are very close to optimal under this scheme. Indeed, the algorithm is optimal if we assume that $\sigma$ is constant.
+
+## 5.3 Bit-Parallel Cutoff
+
+The previous technique, although simple, has the problem of inspecting more characters than necessary. We can instead produce, using a similar approach, an algorithm that inspects the optimal number of characters. This time the idea is to mix the bit-parallel witnesses with a bit-parallel version of the cutoff algorithm (Section 2.4). The final complexity, however, will be the same as for the previous technique, for reasons that will be clear soon.
+
+**General mechanism.** Consider the regions $m - rQ - Q + 1 \ldots m - rQ$ of length $Q$, for $r \in \{0 \ldots t-1\}$. Instead of having
+the witnesses fixed at the end of each region (as in the previous section), we let the witnesses “float”
+---PAGE_BREAK---
+
+inside their region. The distance between consecutive witnesses is still $Q$, so they all float together and all are at the same distance $\delta$ to the end of their regions. We use $sMask$ and $eMask$ with the same meanings as before, but they are displaced so as to be all the time aligned to the witnesses.
+
+The invariant is that the witnesses will be as close as possible to the end of their regions, as long as all the cells past the witnesses exceed $k$. That is,
+
+$$ \delta = \max\{d \in 0 \ldots Q \mid \forall r \in \{0 \ldots t-1\},\ \forall \gamma \in \{0 \ldots d-1\}:\ C_{m-rQ-\gamma} > k\}, $$
+
+where we assume that $C$ yields values larger than $k$ when accessed at negative indexes. When $\delta$ reaches $Q$, this means that all the cell values are larger than $k$ and we can suspend the scanning. Prefix reporting is easy since no prefix can match unless $\delta = 0$, as otherwise $C_m = C_{m-0 \cdot Q} > k$, and if $\delta = 0$ then the last floating witness has exactly the value $C_m$.
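In cell-by-cell terms the invariant can be mimicked as follows (a reference model with our names, not the bit-parallel computation; the column is passed as a plain list):

```python
def float_delta(C, k, Q, t):
    # C[1..m] is the current column (C[0] is a dummy entry); cells at
    # indices <= 0 are treated as exceeding k, as stated in the text.
    m = len(C) - 1
    d = 0
    # Grow d while every cell at distance d from a region end exceeds k.
    while d < Q and all(m - r * Q - d <= 0 or C[m - r * Q - d] > k
                        for r in range(t)):
        d += 1
    return d
```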
+
+In the following we present the details of the witness processing after each text window character is read. Figure 11 shows the pseudocode for the whole algorithm.
+
+**Implementation.** The floating witnesses are a bit-parallel version of the cutoff technique, where each witness takes care of its region. Consequently the way of moving the witnesses up and down resembles the cutoff technique (Section 2.4). We first move down and use $D0$ to update $MC$ accordingly (lines 22–23 in Figure 11). But maybe we should not have moved down. Moreover, maybe we should move up several times. So, after having moved down, we move up as much as necessary by using $VP$ and $VN$ (lines 24–26 in Figure 11). To determine whether we should move up further, we need to know whether there is a witness that exceeds $k$. We proceed as in Section 5.2, using $eMask$ to determine whether some witness exceeds $k$. We also use $sMask$ to increment and decrement the witness values. $Q$ is computed as in Section 5.2, except that $k' = k$ and hence no recurrence arises (lines 6–10 in Figure 11).
+
+Note that we have to deal with the case where the witnesses are at the end of their region and hence cannot move down further. In this case we update them using $HP$ and $HN$ (line 20 in Figure 11).
+
+Finally, it is also possible that the upmost witness goes out of bounds while shifting the witnesses, which in effect results in that witness being removed. For this to happen, however, all the area in $C$ covered by the upmost witness must have values larger than $k$, and it is not possible that a cell in this area gets a value $\le k$ later. So this witness can be safely removed from the set, and hence we remove it from $eMask$ as soon as it gets out of bounds for the first time (line 27 in Figure 11). Note that ignoring this fact leads to inspecting slightly more characters (an almost negligible amount) but one instruction is saved, which in practice is convenient.
+
+**Example.** Figure 12 shows an example of floating the witnesses upwards in vector $MC$.
+
+**Complexity.** Let us consider the resulting time complexity. As for the case of a single witness, we work $O(1)$ amortized time per text position. More specifically, if we read $u$ window characters then we work $O(u+Q)$ because we have to move from $\delta = 0$ to $\delta = Q$. But $O(u+Q) = O(k + \log m)$ on average because $Q = O(\log m)$, and therefore we obtain the same complexity of Section 5.2 (without the possibility of tuning $c$).
+---PAGE_BREAK---
+
+**ABNDMFloatingWitnesses** (*P*₁...ₘ, *T*₁...ₙ, *k*)
+1. **Preprocessing**
+2. **For** *c* ∈ Σ **Do** *Bf*[*c*] ← 0^m, *Bb*[*c*] ← 0^m
+3. **For** *i* = 1...*m* **Do**
+4. *Bf*[*P*ᵢ] ← *Bf*[*P*ᵢ] | 0^{m−i} 1 0^{i−1}
+5. *Bb*[*P*ᵢ] ← *Bb*[*P*ᵢ] | 0^{i−1} 1 0^{m−i}
+6. *Q* ← 1 + ⌈log₂(max(*m* − 2*k*, *k* + 1))⌉
+7. *b* ← 2^{Q−1} − *k* − 1
+8. *t* ← ⌊*m*/*Q*⌋
+9. *sMask* ← (0^{Q−1} 1)^t 0^{m+Q−1−tQ}
+10. *eMask* ← (1 0^{Q−1})^t 0^{m+Q−1−tQ}
+11. **Searching**
+12. *pos* ← 0
+13. **While** *pos* ≤ *n* − (*m* − *k*) **Do**
+14. *j* ← *m* − *k*, *last* ← *m* − *k*
+15. *VP* ← 0^m, *VN* ← 0^m
+16. *MC* ← ([*b*]_Q)^t 0^{m+Q−1−tQ}
+17. *δ* ← 0
+18. **While** *j* ≠ 0 AND *δ* < *Q* **Do**
+19. **BPMStep** (*Bb*[*T*_{pos+j}])
+20. **If** *δ* = 0 **Then** *MC* ← *MC* + (*HP* & *sMask*) − (*HN* & *sMask*)
+21. **Else**
+22. *δ* ← *δ* − 1
+23. *MC* ← *MC* + (~(*D0* << *δ*) & *sMask*)
+24. **While** *δ* < *Q* AND *MC* & *eMask* = *eMask* **Do**
+25. *MC* ← *MC* − ((*VP* << *δ*) & *sMask*) + ((*VN* << *δ*) & *sMask*)
+26. *δ* ← *δ* + 1
+27. **If** *δ* = *m* − (*t* − 1)*Q* **Then** *eMask* ← *eMask* & 1^{(t−1)Q} 0^{m+2Q−1−tQ}
+28. *j* ← *j* − 1
+29. **If** *δ* = 0 AND *MC* & 1 0^{m+Q−2} = 0^{m+Q−1} **Then** /* prefix recognized */
+30. **If** *j* > 0 **Then** *last* ← *j*
+31. **Else If** **BPMFwd** (*Bf*, *T*_{pos+1...n}) **Then**
+32. Report an occurrence at *pos* + 1
+33. *pos* ← *pos* + *last*
+
+Figure 11: The ABNDM algorithm using bit-parallel cutoff. The same comments as for Figure 9 apply. For efficiency, the witnesses are not physically shifted; instead we shift *D0*, *VN* and *VP* by *δ*.
+---PAGE_BREAK---
+
+Figure 12: The left side shows a situation where the witnesses are in their original position ($\delta = 0$), and the equality $eMask \& MC = eMask$ indicates that all witnesses have exceeded $k = 2$. Now we let the witnesses float upwards by incrementing $\delta$ as long as $\delta < Q$ and no witness $\le k$ has been found. When $\delta$ is incremented, the witnesses in $MC$ get their above-neighbor values. We show the new situation on the right. Then the new witness values are evaluated by again checking whether $eMask \& MC' = eMask$. In this example only one increment of $\delta$ was needed, as the last witness found the value $C_7 = 2 \le k$.
+
+We also tried a different version of this algorithm, in which the witnesses are not shifted. Instead, they are updated in a similar fashion to the algorithm of Figure 9, and when all witnesses have a value > $k$, we try to shift a *copy* of them up until either a cell with value $\le k$ is found or $Q - 1$ consecutive shifts are made. In the latter case we can stop the search, since then we have covered checking the whole column $C$. This version has a worse complexity, $O(Q(k + \log_{\sigma} m)) = O(\log m(k + \log_{\sigma} m))$ per window, as at each processed character it is possible to make $O(Q)$ shifts. But in practice it turned out to be very similar to our original cutoff algorithm.
+
+# 6 Experimental Results
+
+We compared our BPM-based ABNDM against the original BPA-based ABNDM, as well as against the other algorithms that, according to a recent survey [13], are the best for moderate pattern lengths. We tested with random patterns and text over uniformly distributed alphabets. Each individual test run consisted of searching for 100 patterns in a text of size 10 MB. We measured total elapsed times.
+
+The computer used in the tests was a 64-bit AlphaServer ES45 with four 1 GHz Alpha EV68 processors, 4 GB of RAM and the Tru64 UNIX 5.1A operating system. All test programs were compiled with the DEC CC compiler and the maximum optimization switch. No other significant processes were running on the computer during the tests. All algorithms were set to use a 64 KB text buffer. The tested algorithms were:
+
+**ABNDM/BPA(regular):** ABNDM implemented on BPA [24], using a generic implementation for any $k$.
+---PAGE_BREAK---
+
+**ABNDM/BPA(special code):** Same as above, but especially coded for each value of *k* to avoid using an array of bit masks.
+
+**ABNDM/BPM(fixed):** ABNDM implemented using BPM and fixed-position witnesses, without the interleaving mentioned at the end of Section 5.2. The implementation differed slightly from Figure 9 due to optimizations.
+
+**ABNDM/BPM(floating):** ABNDM implemented using BPM and cutoff, with floating-position witnesses (Section 5.3). The implementation differed slightly from Figure 11 due to optimizations.
+
+**BPM:** The sequential BPM algorithm [12]. The implementation was by us and used the slightly different (but practically equivalent in terms of performance) formulation from [9].
+
+**BPP:** A combined heuristic using pattern partitioning, superimposition and hierarchical verification, together with a diagonally bit-parallelized NFA [3, 15]. The implementation was by the original authors.
+
+**EXP:** Partitioning the pattern into $k+1$ pieces and using hierarchical verification with a diagonally bit-parallelized NFA in the checking phase [14]. The implementation was by the original authors.
+
+Figure 13 shows the test results for $\sigma = 4$, 13 and 52 and $m = 30$ and 55. This is only a small part of our complete tests, which included $\sigma = 4, 13, 20, 26$ and 52, and $m = 10, 15, 20, \dots, 55$. We chose $\sigma = 4$ because it behaves like DNA, $\sigma = 13$ because it behaves like English³, and $\sigma = 52$ to show that our algorithms are useful even on large alphabets.
+
+First of all it can be seen that ABNDM/BPM(floating) is always faster than ABNDM/BPM(fixed) by a nonnegligible margin.
+
+It can be seen that our ABNDM/BPM versions are often faster than ABNDM/BPA(special code) when $k=4$, and always when $k > 4$. Compared to ABNDM/BPA(regular), our version is always faster for $k > 1$. We note that writing down a different procedure for every possible $k$ value, as done for ABNDM/BPA(special code), is hardly a real alternative in practice.
+
+With moderate pattern length $m = 30$, our ABNDM/BPM versions are competitive for low error levels. However, BPP is better for small alphabets and EXP is better for large alphabets. In the intermediate area $\sigma = 13$, we are the best for $k = 4...6$. This area is interesting when searching natural language text, in particular when searching for phrases.
+
+When $m = 55$, our ABNDM/BPM versions become much more competitive, being the fastest in many cases: For $k = 5...9$ with $\sigma = 4$, and for $k = 4...11$ both with $\sigma = 13$ and $\sigma = 52$, with the single exception of the case $\sigma = 52$ and $k = 9$, where EXP is faster (this seems to be a variance problem, however).
+
+³On biased texts, most sequential string matching algorithms behave as on random texts over an alphabet of size $\sigma$, where $1/\sigma$ is the probability that two characters randomly chosen from the text match. On English texts this probability is usually between 1/12 and 1/15.
+---PAGE_BREAK---
+
+Figure 13: Comparison between algorithms, showing total elapsed time as a function of the number of differences permitted, *k*. From top to bottom row we show $\sigma = 4$, 13 and 52. On the left we show $m = 30$ and on the right $m = 55$.
+---PAGE_BREAK---
+
+# 7 Using Bit-Parallel Cutoff in Row-wise BPM
+
+In this section we demonstrate that the idea of using witnesses can be applied to other scenarios. We consider a recent work [7], where the basic BPM algorithm is modified so that the dynamic programming matrix is filled row-wise rather than column-wise. This means that the text $T_{1...n}$ is cut into consecutive chunks of $w$ characters, and for each chunk $T_{bw+1...bw+w}$ we compute the $m$ rows of the corresponding part of the dynamic programming matrix, so that each row of each chunk is computed in $O(1)$ time using a variant of BPM.
+
+Figure 14 illustrates the idea. The shaded area represents the real area of the dynamic programming matrix $C$ that must be filled. Classical BPM fills it column-wise. If $m \bmod w$ is not small, a significant amount of work is wasted. This corresponds to work that is carried out inside the bit masks anyway. It is represented by the non-shaded area that is covered by the vertical rectangles. If the same matrix is filled row-wise, far fewer rectangles (bit-parallel steps) are needed to cover the same matrix.
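A back-of-the-envelope count of the $w$-bit steps needed to cover the matrix illustrates the saving (the function is ours; it ignores preprocessing, cutoff and the cost per step):

```python
import math

def bitparallel_steps(m, n, w=64):
    # Column-wise: n columns, each needing ceil(m/w) machine words.
    column_wise = n * math.ceil(m / w)
    # Row-wise: ceil(n/w) chunks of w text positions, m rows per chunk.
    row_wise = m * math.ceil(n / w)
    return column_wise, row_wise
```

For instance, with $w = 64$, $m = 70$ and $n = 1000$, column-wise filling performs 2000 steps against 1120 for row-wise filling; the waste of column-wise filling is largest when $m \bmod w$ is small but nonzero.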
+
+Figure 14: Column-wise versus row-wise bit-parallel filling of dynamic programming matrix $C$.
+
+Modifying BPM to work by rows instead of by columns is rather easy, because the rules to compute $C$ are symmetric, so the formula for the transposed matrix is exactly the same. The only difference is that now the first row must start with all zeros, while the first value of row $i$ is $i$. The changes are simple and have already been made in this paper for other purposes (the first to recognize any suffix of $P$, the second to compute edit distance using BPM). The really challenging part is how to preprocess the characters of the current text chunk efficiently, because the $B$ table of a text chunk will be used for just $m$ bit-parallel steps, unlike the pattern preprocessing of column-wise filling, which is used for all $n$ steps. The details are given in [7]. In particular, it is shown there how to build the $B$ table efficiently in the case of searching DNA.
+
+To take much more advantage of row-wise tiling, the cutoff technique is used in [7], so that only the necessary rows of each chunk are filled. For this sake, it is necessary to determine whether all the current row values exceed $k$, so that no more rows need to be evaluated in the current chunk. The approach of [7] is to use precomputed tables $Sum(q)$ and $Min(q)$ that are two-dimensional and of size $2^q \times 2^q$, where the parameter $q$ is chosen so that the word size $w$ is a multiple of $q$. Let $I$ be a length-$q$ vector in which a set bit denotes an increment by one at that position, and in similar fashion let $D$ be a length-$q$ vector in which a set bit denotes a decrement by one. The value
+---PAGE_BREAK---
+
+$Sum(q)_{I,D}$ gives the combined increment of $I$ and $D$, that is, $Sum(q)_{I,D} = \sum_{i=1...q} (I[i] - D[i])$. The value $Min(q)_{I,D}$ gives the minimum combined increment over equally long prefixes of $I$ and $D$, that is, $Min(q)_{I,D} = \min\{\sum_{i=1...h} (I[i] - D[i]) \mid 1 \le h \le q\}$. In the case of using BPM in a row-wise manner, the roles of the vertical and the horizontal difference bit masks are reversed. Consider a situation where the length-$w$ vertical bit masks $VP$ and $VN$ encode the horizontal differences $\Delta h_{i,j+1}, \Delta h_{i,j+2}, ..., \Delta h_{i,j+w}$ and the cell value $M_{i,j}$ of the dynamic programming matrix is known. Let the superscript $h$ denote the $h$th length-$q$ segment of a length-$w$ bit mask. Now, for example, $VP = VP^1...VP^{w/q}$, and $VP^h$ contains the bits $VP_{hq-q+1}...VP_{hq}$. The definition of $Sum(q)$ means that $M_{i,j+q} = M_{i,j} + Sum(q)_{VP^1,VN^1}$. From the definition of $Min(q)$ we have that $M_{i,j+x} \le k$ for some $1 \le x \le q$ if and only if $M_{i,j} + Min(q)_{VP^1,VN^1} \le k$. Repeating the preceding $w/q$ times is enough to check whether the whole region $M_{i,j+1}...M_{i,j+w}$ contains a cell value not greater than $k$: First check the segment $M_{i,j+1}...M_{i,j+q}$ by using the value $M_{i,j} + Min(q)_{VP^1,VN^1}$. Then compute the value $M_{i,j+q} = M_{i,j} + Sum(q)_{VP^1,VN^1}$. After checking the $h$th segment, the $(h+1)$th segment $M_{i,j+hq+1}...M_{i,j+(h+1)q}$ can be checked by using the value $M_{i,j+hq} + Min(q)_{VP^{h+1},VN^{h+1}}$, and one can also compute the value $M_{i,j+(h+1)q} = M_{i,j+hq} + Sum(q)_{VP^{h+1},VN^{h+1}}$ for subsequent use in the checking process.
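The tables can be precomputed as in the following sketch (names are ours; we map position $i$ of the length-$q$ vectors to bit $i-1$, an indexing choice the original code may not share):

```python
def build_sum_min(q):
    size = 1 << q
    Sum = [[0] * size for _ in range(size)]
    Min = [[0] * size for _ in range(size)]
    for I in range(size):
        for D in range(size):
            s, mn = 0, None
            for i in range(q):  # position i+1: +1 if I-bit set, -1 if D-bit set
                s += ((I >> i) & 1) - ((D >> i) & 1)
                mn = s if mn is None else min(mn, s)
            Sum[I][D] = s       # combined increment over all q positions
            Min[I][D] = mn      # minimum over the prefixes 1..h, h = 1..q
    return Sum, Min
```

Each table has $2^q \times 2^q$ entries, which is why $q$ is kept small (and $w$ a multiple of $q$).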
+
+Consider filling the chunk of rows that corresponds to $T_{j+1...j+w}$. Our proposal is to use bit-parallel witnesses (Section 5.3) to implement the cutoff in row-wise BPM. This is quite straightforward, as filling a single chunk of rows in row-wise BPM is very similar to backward scanning in ABNDM. The only differences are that the roles of the text chunk and the pattern are reversed, the boundary values $\Delta h_{0,j}$ depend on the previous chunk of rows, the "window length" is $m$ instead of $m-k$, and the witness size $Q$ is determined so that the maximum value a witness needs to be able to hold is $\min(m, k+w)$ instead of $m-k$. The last part comes from the fact that the minimum value within a row that needs to be computed is at most $k+1$, and thus the maximum value within a length-$w$ chunk is at most $k+1 + (w-1) = k+w$.
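Under our reading, the witness width for the row-wise case can be sketched as follows (a hypothetical helper; it reuses the conditions of Section 5.2 with $k' = k$ and the maximum representable value $v_{max} = \min(m, k+w)$):

```python
import math

def rowwise_witness_width(m, k, w=64):
    vmax = min(m, k + w)  # largest value a witness must be able to hold
    # Smallest Q with vmax - k <= 2^(Q-1) and k + 1 <= 2^(Q-1).
    Q = 1 + math.ceil(math.log2(max(vmax - k, k + 1)))
    b = 2 ** (Q - 1) - k - 1  # excess, as in Section 5.3 (k' = k)
    return Q, b
```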
+
+We have tested modifying row-wise BPM to use the variant of our bit-parallel cutoff that was fastest in the previous section. The modification was built on the original code from [7], which builds the *B* table efficiently in the case of DNA searching. A prerequisite for this is that the DNA text has to be packed in a special way. The packed DNA takes 2 bits per character (see [7] for details of the packing scheme). We used the roughly 10 MB genome of baker's yeast as the text, and the patterns were selected at random from the text. The tested pattern lengths were $m=16, 32$ and $64$, and for each combination of $m$ and $k$ we measured the average over searching 100 different patterns. The computer used in this test was a Sparc Ultra 2 with no other significant processes running during the tests. The computer was set up in 64-bit mode and thus the chunks were of length $w=64$.
+
+We compared our modified version against the original, and for each version we measured both the elapsed run time and the average number of rows filled within the chunks. As mentioned in [7], the original row-wise BPM used a cutoff that requires the cell values to be larger than $k+1$ in order for them to become irrelevant. This was claimed to be more convenient and also faster in practice. Our modified version uses the strict limit $k$.
+
+The test also included row-wise BPM without cutoff and the regular BPM. The latter was an optimized version taking 75% of the time needed by the original version [12]. The former also used our bit-parallel witnesses to check quickly whether row *m* in the current chunk contains a cell with
+---PAGE_BREAK---
+
+a value less than *k*, that is, whether we need to process the lowest row in the chunk cell-by-cell in order to report the pattern occurrences that end inside it. Note that this type of occurrence checking is extra work in comparison to the regular BPM. To take into account the possible gain from less I/O cost when the text is packed, we modified the regular BPM to search in a packed DNA where each character is encoded by two successive bits. Even though the size of the packed text is the same, this simple way of packing is not the same that the row-wise methods use.
+
+Fig. 15 shows the results. It can be seen that row-wise BPM with our bit-parallel cutoff is considerably faster than the original row-wise method. This is true even if we compare our method with $k + 1$ against the original with $k$ to have a comparable number of filled rows due to the difference in the cutoff strategies. The plots also show that, when $m = 64$, the run time graphs of the two algorithms meet when $k = 21$. In that situation our row-wise BPM computes on average roughly 52 rows per chunk, and the time for checking/reporting occurrences is not a large factor. The original row-wise method starts being worse than plain BPM already from $k = 8$. With lower values of $m$ the meeting point is before $k = 21$. This is because in those cases the row-wise methods begin to suffer from the cost of reporting occurrences at lower $k$ values. This effect is also evident in how the graphs for row-wise BPM without cutoff have a distinctive step. The comparison among our two row-wise variants shows that the burden of using the cutoff is reasonably small: the version without cutoff is never much faster even when the cutoff method has to compute all or almost all of the $m$ rows in each chunk.
+
+Note that it is not possible to directly compare these results against those of Section 6, because here we use packed text and there we use standard text encoding. Packed text is necessary for the success of the algorithms of this section, while it is very cumbersome to handle by ABNDM. Yet, regular BPM can be used as a comparison ground between both algorithms. It can be seen that, on DNA text, this method is beaten by ABNDM variants only on $m = 64$ for rather low $k$ values (which, however, are rather common in some applications).
+
+# 8 Conclusions
+
+The most successful approaches to approximate string matching are bit-parallelism and filtering. A promising algorithm combining both is ABNDM [16]. However, the original ABNDM uses a slow $O(k \lceil m/w \rceil n)$ time bit-parallel algorithm (BPA [24]) for its internal working because of its straightforward flexibility. In this paper we have shown how to extend BPM [12] to replace BPA. Since BPM is $O(\lceil m/w \rceil n)$ time, we obtain a much faster version of ABNDM.
+
+For this sake, BPM was extended to permit backward scanning of the window and forward verification. The extensions involved making it compute edit distance, making it able to recognize any suffix of the pattern with $k$ differences, and, most complicated, making it able to tell in advance that a match cannot occur ahead, both for backward and forward scanning. We presented two alternatives for the backward scanning: a simple one that may read more characters than necessary, and a more complicated one (costlier per processed character) that reads exactly the required characters.
+
+The main challenge faced was that we needed to act upon absolute values of the matrix cells, while BPM stores the information differentially. Our solution relies on a new concept called a *witness*. A witness is a matrix cell whose absolute value is known. Together with the differential
+---PAGE_BREAK---
+
+Figure 15: The three rows show the test results with the pattern lengths 16, 32 and 64. The left column shows the average time for searching for a pattern from baker's yeast, and the right column shows the corresponding average number of rows filled during the computation.
+---PAGE_BREAK---
+
+values, we update one or more witness values in parallel. Those witnesses are used to deduce,
+bound or compute all the other matrix values.
+
+We present an improved average analysis of ABNDM that shows that it inspects the optimal
+number of characters, $O((k + \log_{\sigma} m)n/m)$. While the original ABNDM [16] is far from this opti-
+mality when its overall complexity is considered (not only inspected characters), our new ABNDM
+versions are much closer to the optimum, reaching average complexity $O((k + \log m)n/m)$. Indeed,
+this is optimal if we regard $\sigma$ as a constant.
+
+The experimental results show that our new algorithm beats the original ABNDM, even when BPA is especially coded with a different procedure for every possible *k* value, often for *k* = 4 and always for *k* > 4, and that it beats a general BPA implementation for *k* ≥ 2. Moreover it was seen that our version of ABNDM becomes the fastest algorithm for many cases with moderately long pattern and fairly low error level, provided the witnesses fit in a single computer word. This includes several interesting cases in searching DNA, natural language text, protein sequences, etc.
+
+To demonstrate that the concept of witness can be applied to other scenarios, we apply it to a recent work that improves upon BPM by filling the matrix row-wise instead of column-wise [7]. A key part of the improved algorithm is the ability to stop when all the matrix cells exceed some value. We show that the use of witnesses provides a much faster solution than the original in [7].
+
+Finally, we notice that the witness concept helps to solve the main problem that arises when trying to compute local score matrices [23] in a bit-parallel fashion. The formula that computes the score permits increments and decrements, but the score is never allowed to run below zero. A reasonable simplification, useful for bit-parallel computation, is as follows:
+
+$$
+M_{i,0} \leftarrow 0, \qquad M_{0,j} \leftarrow 0,
+$$
+$$
+M_{i,j} \leftarrow \begin{cases} \max(0,\; M_{i-1,j-1} + 1) & \text{if } x_i = y_j \\ \max(0,\; M_{i-1,j-1} - 1,\; M_{i-1,j} - 1,\; M_{i,j-1} - 1) & \text{otherwise.} \end{cases}
+$$
+
+An important obstacle preventing the bit-parallel computation of *M* is that we have to know when a cell value has become negative in order to make it zero. Therefore, we need to know the absolute cell values, a scenario where witnesses are the ideal solution. We are currently pursuing this idea.
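A cell-by-cell reference for this simplification can be written directly (our reading: a match scores +1, a mismatch or indel scores -1, and everything is clamped at zero; this is the model to be bit-parallelized, not a bit-parallel algorithm):

```python
def local_score(x, y):
    m, n = len(x), len(y)
    # M has a zero first row and first column, as in the boundary conditions.
    M = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if x[i - 1] == y[j - 1]:
                M[i][j] = max(0, M[i - 1][j - 1] + 1)
            else:
                M[i][j] = max(0, M[i - 1][j - 1] - 1,
                              M[i - 1][j] - 1, M[i][j - 1] - 1)
    return M
```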
+
+Acknowledgements
+
+We thank the reviewers for their comments, which helped make the paper more readable.
+
+References
+
+[1] R. Baeza-Yates. Text retrieval: Theory and practice. In *12th IFIP World Computer Congress*, volume I, pages 465–476. Elsevier Science, 1992.
+
+[2] R. Baeza-Yates. A unified view of string matching algorithms. In *Proc. Theory and Practice of Informatics (SOFSEM'96)*, LNCS 1175, pages 1–15, 1996.
+
+[3] R. Baeza-Yates and G. Navarro. Faster approximate string matching. *Algorithmica*, 23(2):127–158, 1999.
+---PAGE_BREAK---
+
+[4] W. Chang and J. Lampe. Theoretical and empirical comparisons of approximate string matching algorithms. In *Proc. 3rd Annual Symposium on Combinatorial Pattern Matching (CPM'92)*, LNCS 644, pages 172–181, 1992.
+
+[5] W. Chang and T. Marr. Approximate string matching and local similarity. In *Proc. 5th Annual Symposium on Combinatorial Pattern Matching (CPM'94)*, LNCS 807, pages 259–273, 1994.
+
+[6] M. Crochemore and W. Rytter. *Text Algorithms*. Oxford University Press, 1994.
+
+[7] K. Fredriksson. Row-wise tiling for the Myers' bit-parallel dynamic programming algorithm. In *Proc. 10th International Symposium on String Processing and Information Retrieval (SPIRE'03)*, LNCS 2857, pages 66–79, 2003.
+
+[8] Z. Galil and K. Park. An improved algorithm for approximate string matching. *SIAM Journal on Computing*, 19(6):989–999, 1990.
+
+[9] H. Hyyrö. Explaining and extending the bit-parallel algorithm of Myers. Technical Report A-2001-10, University of Tampere, Finland, 2001.
+
+[10] H. Hyyrö and G. Navarro. Faster bit-parallel approximate string matching. In *Proc. 13th Annual Symposium on Combinatorial Pattern Matching (CPM'02)*, LNCS 2373, pages 203–224, 2002.
+
+[11] G. Landau and U. Vishkin. Fast parallel and serial approximate string matching. *Journal of Algorithms*, 10:157–169, 1989.
+
+[12] G. Myers. A fast bit-vector algorithm for approximate string matching based on dynamic programming. *Journal of the ACM*, 46(3):395–415, 1999.
+
+[13] G. Navarro. A guided tour to approximate string matching. *ACM Computing Surveys*, 33(1):31–88, 2001.
+
+[14] G. Navarro and R. Baeza-Yates. Very fast and simple approximate string matching. *Information Processing Letters*, 72:65–70, 1999.
+
+[15] G. Navarro and R. Baeza-Yates. Improving an algorithm for approximate string matching. *Algorithmica*, 30(4):473–502, 2001.
+
+[16] G. Navarro and M. Raffinot. Fast and flexible string matching by combining bit-parallelism and suffix automata. *ACM Journal of Experimental Algorithmics (JEA)*, 5(4), 2000.
+
+[17] G. Navarro and M. Raffinot. *Flexible Pattern Matching in Strings - Practical On-line Search Algorithms for Texts and Biological Sequences*. Cambridge University Press, 2002.
+
+[18] P. Sellers. The theory and computation of evolutionary distances: pattern recognition. *Journal of Algorithms*, 1:359–373, 1980.
+
+[19] E. Sutinen and J. Tarhio. On using q-gram locations in approximate string matching. In *Proc. European Symposium on Algorithms (ESA'95)*, LNCS 979, pages 327–340, 1995.
+---PAGE_BREAK---
+
+[20] J. Tarhio and E. Ukkonen. Approximate Boyer-Moore string matching. *SIAM Journal on Computing*, 22(2):243–260, 1993.
+
+[21] E. Ukkonen. Algorithms for approximate string matching. *Information and Control*, 64:100–118, 1985.
+
+[22] E. Ukkonen. Finding approximate patterns in strings. *Journal of Algorithms*, 6:132–137, 1985.
+
+[23] M. Waterman. *Introduction to Computational Biology*. Chapman and Hall, 1995.
+
+[24] S. Wu and U. Manber. Fast text searching allowing errors. *Comm. of the ACM*, 35(10):83–91, 1992.
+
+[25] S. Wu, U. Manber, and G. Myers. A sub-quadratic algorithm for approximate limited expression matching. *Algorithmica*, 15(1):50–67, 1996.
\ No newline at end of file
diff --git a/samples/texts_merged/6762772.md b/samples/texts_merged/6762772.md
new file mode 100644
index 0000000000000000000000000000000000000000..d4a5871d704b4657eeea0614bdb45b19d9f16b61
--- /dev/null
+++ b/samples/texts_merged/6762772.md
@@ -0,0 +1,122 @@
+
+---PAGE_BREAK---
+
+H14-276
+ESTIMATION AND PARAMETERIZATION OF THE INFLUENCE OF SYNOPTIC CONDITIONS ON
+POLLUTION CHARACTERISTICS IN THE PBL
+
+Evgeni Syrakov¹, Kostadin Ganev², Milen Tsankov¹, Emil Cholakov¹
+
+¹University of Sofia, Faculty of Physics, Sofia, Bulgaria
+
+²National Institute of Geophysics, Geodesy and Geography, Bulgarian Academy of Sciences, Sofia, Bulgaria
+
+**Abstract:** An approach has been developed that combines resistance laws, PBL models and diffusion models, coordinated with the method of moments. On this basis, the influence of a set of turbulent regimes, parameterized by external parameters in a similarity format, on the diffusion processes in the PBL has been studied.
+
+**Key words:** resistance laws, turbulent regimes, diffusion and statistical moments, inversion-slope-baroclinic effects.
+
+INTRODUCTION
+
+The combination of aerologic-synoptic parameters external to the PBL, which are easy to obtain in a prognostic/diagnostic sense from synoptic maps, produces a wide range of turbulent regimes in the PBL that strongly influence the pollution characteristics. A qualitative categorization of these complex, mutually interacting processes has to be made so that they can be studied in detail, and on this basis appropriate instruments for studying the processes have to be developed; this is the basic goal of the present work.
+
+METHODOLOGY
+
+The studies in the present work are based on a methodological procedure that includes several consecutive, synchronized steps:
+
+* The PBL resistance laws (RL-method) over flat terrain are applied:
+
+$$
+\frac{\aleph \cos \alpha}{C_g} = \ln(R_0 C_g) - A, \quad \frac{\aleph \sin \alpha}{C_g} = -B, \quad \alpha_0 \frac{\aleph^2 S}{C_g \mu} = \ln(R_0 C_g) - C \qquad (1)
+$$
+
+and the respective more complex version of (1) over sloped terrain (which, due to the limited paper volume, will not be shown here), where $C_g = U_*/G_0$ and $R_0 = G_0/fz_0$ are the geostrophic drag coefficient and the geostrophic Rossby number, $\aleph = 0.4$ is the von Karman constant, $G_0$ is the surface geostrophic wind, $U_*$ is the friction velocity, $\alpha$ is the full cross-isobaric angle, $\mu = (\aleph U_*/f)/L$ and $S = \beta \delta \theta / fG_0$ are the inner and integral dimensionless PBL stratification parameters, $L$ is the Monin-Obukhov length, and $A, B, C$ are universal functions. These functions are determined in sufficiently general form in (Syrakov, E., 1990, 2011), and on this basis a numerical solution of the system of transcendental equations (1) is obtained, giving the dependence of the basic transfer-interaction parameters $C_g, \alpha, \mu$ on the dimensionless external parameters:
+
+$$
+R_0; S; R_{0i}; S_x; S_y; (\text{or } M, \phi); \psi; \mu_N. \tag{2}
+$$
+
+where $R_{0i} = G_0 / fH_i$ is the inversion Rossby number, $H_i$ is the inversion height, $S_x = (\aleph^2/f)du_g/dz$ and $S_y = (\aleph^2/f)dv_g/dz$ are external dimensionless baroclinic parameters, which can also be expressed through the equivalent parameters $M = (S_x^2 + S_y^2)^{1/2}$ and $\phi$, the angle between the surface geostrophic wind and the thermal wind; $\psi$ is the terrain slope angle, $\mu_N = N/f$ is the free-flow stability parameter, $N$ is the free-flow Brunt-Väisälä frequency, and $f$ is the Coriolis parameter.
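The numerical solution of the resistance-law system (1) amounts to a small root-finding problem in $(C_g, \alpha, \mu)$ for given external parameters. The Python sketch below shows such a solve; the linear forms of $A$, $B$, $C$ and the constant $\alpha_0$ are illustrative placeholders, not the much more general forms from Syrakov (1990, 2011) used in the paper, so the numbers are not the paper's.

```python
import numpy as np
from scipy.optimize import fsolve

KAPPA = 0.4    # von Karman constant
ALPHA_0 = 1.0  # constant in the third relation; an assumed placeholder

# Placeholder universal functions A(mu), B(mu), C(mu); the paper takes
# these in much more general form from Syrakov (1990, 2011).
def A(mu): return 1.7 + 0.2 * mu
def B(mu): return 4.5 + 0.3 * mu
def C(mu): return 3.0 + 0.1 * mu

def residuals(unknowns, R0, S):
    """Residuals of the resistance-law equations (1) in the unknowns
    Cg (geostrophic drag coefficient), alpha (full cross-isobaric
    angle, rad) and mu (inner stratification parameter)."""
    Cg, alpha, mu = unknowns
    ln_term = np.log(R0 * Cg)
    return [KAPPA * np.cos(alpha) / Cg - (ln_term - A(mu)),
            KAPPA * np.sin(alpha) / Cg + B(mu),
            ALPHA_0 * KAPPA**2 * S / (Cg * mu) - (ln_term - C(mu))]

# Build a self-consistent case: fix (Cg, alpha, mu) and recover the
# external parameters (R0, S) for which equations (1) hold exactly.
cg_ref, mu_ref = 0.03, 1.0
alpha_ref = np.arcsin(-B(mu_ref) * cg_ref / KAPPA)
R0 = np.exp(KAPPA * np.cos(alpha_ref) / cg_ref + A(mu_ref)) / cg_ref
S = (np.log(R0 * cg_ref) - C(mu_ref)) * cg_ref * mu_ref / (ALPHA_0 * KAPPA**2)

# Solving from a perturbed initial guess recovers the reference state.
Cg, alpha, mu = fsolve(residuals, x0=(0.025, -0.3, 0.8), args=(R0, S))
print(Cg, np.degrees(alpha), mu)
```

In the methodology itself an analogous solve is repeated over the whole external-parameter set (2) to map out the turbulent regimes.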
+
+* Coordinated with the so determined parameters $C_g$, $\alpha$, $\mu$, a PBL model (Syrakov, E. and Ganev, K., 2003, Syrakov et al, 2007)) is realized, taking into account the factors (2), and the velocity components $u, v$, the coefficients of vertical turbulent exchange $k_z$ and $k_{zg}$ are determined.
+
+* Under these dynamic conditions a diffusion plume-MM model is realized, based on the following construction (Syrakov, E. and Ganev, K., 2003):
+
+$$
+c(x, y, z, t) = \frac{c_0(x, z, t)}{\sqrt{2\pi}\,\sigma_y} \exp\left(-\frac{(y-Y)^2}{2\sigma_y^2}\right) \quad (3)
+$$
+
+where the wind rotation effect is accounted for by the mean displacement $Y$. The parameters $Y$ and the dispersion $\sigma_y$ are calculated from the definition formulae:
+---PAGE_BREAK---
+
+$$Y(x, z, t) = c_1/c_0, \sigma_y^2 = c_2/c_0 - Y^2, \quad (4)$$
+
+where the first and second moments $c_1(x, z, t)$ and $c_2(x, z, t)$ are calculated numerically on the basis of the method of moments (MM), and the zeroth moment $c_0$ is determined from the equation for a linear source. The respective puff-MM model for an instantaneous point source is constructed in a similar way. These diffusion models are based on splitting the diffusion into horizontal and vertical parts and are coordinated with the statistical method of moments, which allows the trajectory-dispersion parameters to be determined in the process of solving the problem, i.e., without specifying them a priori. The MM models are generalizations of the conventional plume and puff models.
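A minimal numeric sketch of the construction (3)-(4), assuming the standard Gaussian normalisation $c_0/(\sqrt{2\pi}\,\sigma_y)$ in (3); the moment values below are synthetic, not taken from the paper's model runs.

```python
import numpy as np

def plume_parameters(c0, c1, c2):
    """Trajectory and dispersion from the cross-wind moments, Eq. (4):
    Y = c1/c0 and sigma_y^2 = c2/c0 - Y^2."""
    Y = c1 / c0
    sigma_y = np.sqrt(c2 / c0 - Y**2)
    return Y, sigma_y

def plume_concentration(y, c0, Y, sigma_y):
    """Gaussian cross-wind profile of Eq. (3)."""
    return c0 / (np.sqrt(2 * np.pi) * sigma_y) * np.exp(
        -(y - Y) ** 2 / (2 * sigma_y**2))

# Synthetic moments at one (x, z, t): c1 = c0*Y, c2 = c0*(sigma_y^2 + Y^2).
c0, Y_true, sy_true = 2.0, 150.0, 40.0
c1 = c0 * Y_true
c2 = c0 * (sy_true**2 + Y_true**2)

Y, sy = plume_parameters(c0, c1, c2)   # recovers (150.0, 40.0)
y = np.linspace(-200.0, 500.0, 2001)
c = plume_concentration(y, c0, Y, sy)

# The reconstructed profile integrates back to the zeroth moment c0.
print(np.sum(c) * (y[1] - y[0]))
```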
+
+## RESULTS AND DISCUSSION
+
+A wide range of turbulent regimes, which have a significant influence on the diffusion processes in the PBL, can be studied by varying the parameters (2). A number of cases with $G_0=8$ m/s and $\mu_N=0$, with parameters given in Table 1, are chosen as examples.
+
+Table 1. Basic PBL characteristics for the studied turbulent regimes
+
+| Case / Input parameters | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 | 13 |
+|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
+| PBL type | barotropic | barotropic | barotropic | baroclinic | baroclinic | baroclinic | baroclinic | inversions | inversions | inversions | terrain slope | terrain slope | terrain slope |
+| $R_0$ | $10^7$ | $10^7$ | $10^7$ | $10^7$ | $10^7$ | $10^7$ | $10^7$ | $10^7$ | $10^7$ | $10^7$ | $10^7$ | $10^7$ | $10^7$ |
+| $S$ | -500 | 0 | 500 | -500 | -500 | -500 | -500 | -500 | 0 | 500 | -500 | 0 | 500 |
+| $R_{0i}$ | ~1 | ~1 | ~1 | ~1 | ~1 | ~1 | ~1 | 400 | 400 | 400 | ~1 | ~1 | ~1 |
+| $M$ | 0 | 0 | 0 | 10 | 10 | 10 | 10 | 0 | 0 | 0 | 0 | 0 | 0 |
+| $\phi$ [°] | 0 | 0 | 0 | 0 | 180 | 220 | 270 | 0 | 0 | 0 | 0 | 0 | 0 |
+| $\psi$ [rad] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.1 | 0.1 | 0.1 |
+| $H$ or $H_i$ [m] | 1400 | 850 | 350 | 1600 | 1250 | | | | | | | | |
+
+From the input parameters in Table 1, applying the methodological procedure described above, first $C_g$, $\alpha$, $\mu$ are obtained (some results for $\alpha$ are demonstrated in Figure 1), and then the dynamic characteristics are obtained by the PBL model and the diffusion characteristics in the PBL by the plume-MM model (3), (4) for cases 1–13, which are considered in the present paper. The examples are for a stationary point source with height $h_s=150$ m. The $x$ axis is oriented along the wind at source height.
+
+Figure 1. Dependence of the full cross-isobaric angle $\alpha$ on the external dimensionless parameters (2) for different PBL types
+
+A specific turbulent regime in the PBL corresponds to each set of external parameters (2). On the other hand, the parameters (2) for a given region can be determined from aerologic-synoptic parameters taken from synoptic maps (in forecast/diagnostic mode) for each specific synoptic situation. This makes it possible to study the influence of different synoptic situations on the turbulent regime, and hence on the basic pollution characteristics in the PBL, by applying the methodology described above.
+
+These possibilities will be demonstrated for a chosen number of typical cases with input parameters (2) given in Table 1, which characterize basic PBL factors such as baroclinicity, stratification, inversions and terrain slope effects. The pollution characteristics for cases 1–13 from Table 1 will be analyzed at $\mu_N=0$. The cases of free-flow stability effects $\mu_N > 0$ are considered in Syrakov et al (2011), in the present proceedings. For all the considered cases the $Ox$ axis is oriented along the wind at source height, which for all demonstrated experiments is $h_{source} = 150$ m. The cases are divided and
+---PAGE_BREAK---
+
+demonstrated in all the figures in two groups: unstable conditions ($S < 0$), cases 1, 4–8 and 11, and stable/neutral conditions ($S \ge 0$), cases 2, 3, 9, 10, 12 and 13.
+
+Figure 2. Surface horizontal displacement $Y(x)$ for turbulent regime cases 1–13: unstable cases 1,4,5,6,7,8,11 and stable/neutral cases 2,3,9,10,12,13, with input external parameters shown on Table 1
+
+Figure 3. Surface concentrations along the plume axis for turbulent regime cases 1–13 with input external parameters shown on Table 1
+
+The surface horizontal displacements $Y(x)$, which describe the rotation of the plume axis with respect to the $Ox$ axis, are shown in Figure 2. The rotation effect as a whole is better manifested in the stable/neutral cases, which can be explained by the bigger angle $\alpha$ and the less intensive vertical exchange. It should be noted that in the terrain slope cases 11–13, for all stratifications, the dynamic channelling slope effect dominates, which does not allow big displacements $Y(x)$. The behaviour of $Y(x)$ for the baroclinic cases 4–7 is also interesting. For the angles $\phi$ chosen for these cases, the thermal wind opposes the geostrophic wind, which leads to a decrease of the wind speed gradient. Combined with the significant turbulent exchange ($S = -500$), this leads to a relatively small right-hand rotation of the plume axis. For all the other cases, as can be seen from Figure 2, the displacement of the plume axis is to the left of the $Ox$ axis, qualitatively corresponding to the Ekman spiral. Obviously the behaviour of $Y(x)$ is formed by a complex balance of the joint effects of different factors. The same is valid for all the other pollution characteristics.
+---PAGE_BREAK---
+
+Figure 4. Skewness $Sk(x)$ for turbulent regime cases 1 – 13 with input external parameters shown on Table 1
+
+Figure 5. Ratios $c_0^{plume-MM}(x)/c_0^{plume-cl}(x)$ for turbulent regime cases 1 – 13 with input external parameters shown on Table 1
+
+The normalised surface concentrations along the surface plume axis for cases 1–13 are shown in Figure 3. Under unstable conditions the biggest concentration occurs for the inversion case 8, and under stable/neutral conditions for the stable terrain slope case 13. The maximal surface concentrations for stable/neutral conditions occur significantly further from the source than for the unstable cases.
+
+The vertical skewness $Sk(x)$ is shown in Figure 4. The deviation from a Gaussian vertical concentration distribution ($Sk(x) \neq 0$) is best manifested at distances of 4000–5000 m from the source. Maximal deviations can be observed for cases 5, 8 and 2.
+
+The ratios $c_0^{plume-MM}(x)/c_0^{plume-cl}(x)$, where the surface plume concentration along the axis $c_0^{plume-cl}(x)$ is again simulated by the plume-MM model but with $Y(x)=0$ and a wind speed constant with height and equal to the wind speed at source level, are shown in Figure 5 in order to demonstrate the relatively independent influence of the wind rotation. As can be seen, the effects are strongest for cases 9, 3 and 13.
+
+---PAGE_BREAK---
+
+CONCLUSIONS
+
+The obtained results demonstrate the significant influence of the turbulent regimes (and the respective synoptic situations) on the diffusion processes in the PBL. The suggested methodology makes possible the detailed study, in a similarity format, of the influence of wind shear and rotation, roughness, stratification, inversions, baroclinicity, terrain slope, etc., on the basic pollution characteristics: trajectories, dispersions, concentration and the shape of the concentration field (skewness, kurtosis, etc.). The approach can be used for applied tasks, including the estimation of extreme and critical pollution parameters, regulatory procedures and optimization, sub-grid parameterization procedures, etc.
+
+REFERENCES
+
+Syrakov, E., 1990: Process of thermal convection, dynamics, parameterization and diffusion at different scales and atmospheric conditions with application to environmental task, Dr. of Sci. Thesis, University of Sofia.
+
+Syrakov E., K.Ganev, 2003: Accounting for effects of wind rotation in the PBL on the plume characteristics. *Int. J. Environment & Pollution*, vol. 20, No.1-6, 154-164.
+
+Syrakov E., E. Cholakov, M. Tsankov, K. Ganev, 2007: On the influence of PBL conventional and nonlocal turbulent factors on pollution characteristics in local to meso scale. in Carruthers, D., and McHugh, C., (Eds.), Proc. of 11th Intern. Conf. on Harmonization within Atmospheric Dispersion Modelling for Regulatory Purposes, Vol. 2, Cambridge, UK, 2-5 July 2007, 286-290.
+
+Syrakov E., 2011: Atmospheric boundary layer: Structure, Parameterization, Interactions, Heron Press, Sofia, 388pp.
+
+Syrakov E., K. Ganev, M. Tsankov, E. Cholakov, 2011: A comparative analysis of different conventional and non-local stable/neutral PBL regimes and their parameterization in air pollution problems, in the present Proceedings.
\ No newline at end of file
diff --git a/samples/texts_merged/7087559.md b/samples/texts_merged/7087559.md
new file mode 100644
index 0000000000000000000000000000000000000000..10fbdc7d593d5f543c92eaf490684fb2cfd0bc17
--- /dev/null
+++ b/samples/texts_merged/7087559.md
@@ -0,0 +1,978 @@
+
+---PAGE_BREAK---
+
+SOCLE DEGREES OF FROBENIUS POWERS
+
+ANDREW R. KUSTIN AND ADELA N. VRACIU
+
+*In honor of Phillip Griffith, on the occasion of his retirement*
+
+**ABSTRACT.** Let $k$ be a field of positive characteristic $p$, $R$ be a Gorenstein graded $k$-algebra, and $S = R/J$ be an artinian quotient of $R$ by a homogeneous ideal. We ask how the socle degrees of $S$ are related to the socle degrees of $F_R^e(S) = R/J^{[q]}$. If $S$ has finite projective dimension as an $R$-module, then the socles of $S$ and $F_R^e(S)$ have the same dimension and the socle degrees are related by the formula $D_i = qd_i - (q-1)a(R)$, where $d_1 \le \cdots \le d_\ell$ and $D_1 \le \cdots \le D_\ell$ are the socle degrees of $S$ and $F_R^e(S)$, respectively, and $a(R)$ is the $a$-invariant of the graded ring $R$, as introduced by Goto and Watanabe. We prove the converse when $R$ is a complete intersection.
+
+Let $(R, m)$ be a Noetherian graded algebra over a field of positive characteristic $p$, with irrelevant ideal $m$. We usually let $R = P/C$ with $P$ a polynomial ring, and $C$ a homogeneous ideal. Let $J$ be an $m$-primary homogeneous ideal in $R$. Recall that if $q = p^e$, then the $e^{\text{th}}$ Frobenius power of $J$ is the ideal $J^{[q]}$ generated by all $j^q$ with $j \in J$. The basic question is:
+
+**QUESTION.** How do the degrees of the minimal generators of $(J^{[q]} : m)/J^{[q]}$ vary with $q$?
+
+The largest of the degrees of a generator of the socle $(J:m)/J$ will be called
+the top socle degree of $R/J$. The question of finding a linear bound for the top
+socle degree of $R/J^{[q]}$ has been considered by Brenner in [3] from a different
+point of view; his main motivation there is finding inclusion-exclusion criteria
+for tight closure.
+
+The answer to the Question is well-known (although not explicitly stated
+in the existing literature) in the case when $J$ has finite projective dimension;
+see Observation 2.1. We prove that the converse holds when $R = P/C$ is a
+complete intersection.
+
+Received June 13, 2006; received in final form January 3, 2007.
+2000 Mathematics Subject Classification. 13A35.
+The second author was supported in part by a Research And Productive Scholarship
+Award from the University of South Carolina.
+---PAGE_BREAK---
+
+THEOREM A. Let $k$ be a field of positive characteristic $p$, $q = p^e$ for some positive integer $e$, $P$ be a positively graded polynomial ring over $k$, and $R = P/C$ be a complete intersection ring with $C$ generated by a homogeneous regular sequence. Let $\mathfrak{m}$ be the maximal homogeneous ideal of $R$, $J$ be a homogeneous $\mathfrak{m}$-primary ideal in $R$, and $I$ be a lifting of $J$ to $P$. Let $\ell$ be the dimension of the socle $(J : \mathfrak{m})/J$ of $R/J$ and $d_1, \dots, d_\ell$ be the degrees of the generators of the socle. Then the following statements are equivalent:
+
+(a) $\mathrm{pd}_R R/J < \infty$.
+
+(b) The socle $(J^{[q]} : \mathfrak{m})/J^{[q]}$ of $R/J^{[q]}$ has dimension $\ell$ and the degrees of the generators are $qd_i - (q-1)a(R)$, for $1 \le i \le \ell$, where $a(R)$ is the $a$-invariant of $R$.
+
+(c) $(C+I)^{[q]} : (C^{[q]} : C) = C + I^{[q]}$.
+
+(d) $I^{[q]} \cap C = (I \cap C)^{[q]} + CI^{[q]}$.
+
+Of course, the following general question remains wide open and very compelling:
+
+**QUESTION.** How do the socle degrees of Frobenius powers $J^{[q]}$ encode homological information about the ideals $J^{[q]}$?
+
+The proof of Theorem A appears in Section 2.
+
+# 1. Preliminary notions
+
+In this paper, ring means commutative noetherian ring with one. Let $k$ be a field of positive characteristic $p$. We say that the ring $R$ is a graded $k$-algebra if
+
+(1.1) $R$ is non-negatively graded, $R_0 = k$, and $R$ is finitely generated as a ring over $k$.
+
+Every ring that we study in this paper is a graded $k$-algebra. In particular, “Let $P$ be a polynomial ring” means $P = k[x_1, \dots, x_n]$, for some $n$, and each variable has positive degree. Every calculation in this paper is homogeneous: all elements and ideals that we consider are homogeneous, all ring or module homomorphisms that we consider are homogeneous of degree zero. If $r$ is a homogeneous element of the ring $R$, then $|r|$ is the degree of $r$. The graded $k$-algebra $R$ has a unique homogeneous maximal ideal
+
+$$ \mathfrak{m} = \mathfrak{m}_R = R_+ = \bigoplus_{i>0} R_i; $$
+
+furthermore, $R$ has a unique graded canonical module $K_R$, which is equal to the graded dual of the graded local cohomology module $\mathrm{H}_{\mathfrak{m}}^d(R)$, where $d$ is the Krull dimension of $R$; that is,
+
+$$ K_R = \operatorname{Hom}_R(\mathrm{H}_{\mathfrak{m}}^d(R), E_R), $$
+---PAGE_BREAK---
+
+for $E_R = \operatorname{Hom}_k(R, R/\mathfrak{m})$ the injective envelope of $R/\mathfrak{m}$ as a graded $R$-module.
+(See, for example, [7, Def. 2.1.2].) The *a*-invariant of $R$ is defined to be
+
+$$a(R) = -\min\{m \mid (K_R)_m \neq 0\} = \max\{m \mid (\mathrm{H}_\mathfrak{m}^d(R))_m \neq 0\}.$$
+
+The definition of the *a*-invariant is rigged so that if $R$ is a Gorenstein graded $k$-algebra, then $K_R = R(a(R))$. When the ring $R$ is Cohen-Macaulay, there are many ways to compute $a(R)$. The main tool for these calculations, Proposition 1.2 below, may be found as Proposition 2.2.9 in [7] or Proposition 3.6.12 in [4].
+
+PROPOSITION 1.2. If $R \to S$ is a graded surjection of graded $k$-algebras, and $R$ is Cohen-Macaulay, then
+
+$$K_S = \operatorname{Ext}_R^c(S, K_R),$$
+
+where $c = \dim R - \dim S$. In particular, if $S = R/C$ and the ideal $C$ is generated by the homogeneous regular sequence $f_1, \dots, f_c$, then
+
+$$K_{R/C} = (K_R/CK_R) \left( \sum_{i=1}^{c} |f_i| \right).$$
+
+COROLLARY 1.3.
+
+(a) If $P$ is the polynomial ring $k[x_1, \dots, x_n]$, then $a(P) = -\sum_{i=1}^n |x_i|$.
+
+(b) If $R$ is the complete intersection ring $P/C$, where $P$ is the polynomial ring $k[x_1, \dots, x_n]$ and $C$ is the ideal in $P$ generated by the homogeneous regular sequence $f_1, \dots, f_c$, then $a(R) = \sum_{i=1}^c |f_i| - \sum_{i=1}^n |x_i|$.
+
+(c) If $R \to S$ is a surjection of graded Cohen-Macaulay $k$-algebras, and $S$ has finite projective dimension as an $R$-module, then $a(S) = a(R) + N$, where $N$ is the largest back twist in the minimal homogeneous resolution of $S$ by free $R$-modules. In other words, if
+
+$$0 \rightarrow \bigoplus_i R(-b_{c,i}) \rightarrow \dots \rightarrow \bigoplus_i R(-b_{1,i}) \rightarrow R \rightarrow S \rightarrow 0$$
+
+is the minimal homogeneous resolution of $S$ by free $R$-modules, then
+$N = \max_i\{b_{c,i}\}$.
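For an artinian complete intersection, part (b) can be cross-checked against the Hilbert series $\prod_i (1-t^{|f_i|}) / \prod_j (1-t^{|x_j|})$, which in that case is a polynomial whose top degree is the socle degree and hence equals $a(R)$. A small sympy sketch of this check (the helper names are ours):

```python
import sympy as sp

t = sp.symbols('t')

def a_invariant_ci(gen_degrees, var_degrees):
    # Corollary 1.3(b): a(R) = sum |f_i| - sum |x_j| for R = P/C.
    return sum(gen_degrees) - sum(var_degrees)

def top_socle_degree_artinian_ci(gen_degrees, var_degrees):
    # In the artinian case the Hilbert series
    #   prod(1 - t^{|f_i|}) / prod(1 - t^{|x_j|})
    # is a polynomial; its top degree is the socle degree of R.
    num = sp.prod([1 - t**d for d in gen_degrees])
    den = sp.prod([1 - t**e for e in var_degrees])
    return sp.degree(sp.cancel(num / den), t)

# R = k[x, y]/(x^2, y^3): Hilbert series (1 + t)(1 + t + t^2),
# socle degree 3 = a(R) = (2 + 3) - (1 + 1).
print(a_invariant_ci([2, 3], [1, 1]),
      top_socle_degree_artinian_ci([2, 3], [1, 1]))  # 3 3
```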
+
+**DEFINITION.** If $S$ is an artinian graded $k$-algebra, then the socle of $S$,
+
+$$\text{soc } S = 0 : m_S = \{s \in S \mid sm_S = 0\},$$
+
+is a finite dimensional graded $k$-vector space: $\text{soc } S = \bigoplus_{i=1}^\ell k(-d_i)$. We refer to
+the numbers $d_1 \le d_2 \le \cdots \le d_\ell$ as the *socle degrees* of *S*.
+
+OBSERVATION 1.4. Let $R$ be an artinian Gorenstein graded $k$-algebra with socle degree $\delta$, and let $J$ be a homogeneous ideal of $R$. If the socle degrees of $R/J$ are $\{d_i\}$, then the minimal generators of ann $J$ have degrees $\{\delta - d_i\}$.
+---PAGE_BREAK---
+
+*Proof.* Choose minimal generators $g_1, \dots, g_s$ of $\text{ann}\,J$. Gorenstein duality (see Lemma 1.11) implies that
+
+$$ \operatorname{ann}(g_1, \dots, \hat{g}_i, \dots, g_s) \not\subseteq \operatorname{ann}(g_i); $$
+
+and thus, for each $i$, we can choose an element $u_i \in \operatorname{ann}(g_1, \dots, \hat{g}_i, \dots, g_s)$, which represents a generator for the socle of $R/\operatorname{ann}(g_i)$. The ideals $J$ and $\operatorname{ann}(g_1, \dots, g_s)$ are equal and the socle of $R/\operatorname{ann}(g_1, \dots, g_s)$ is minimally generated by $u_1, \dots, u_s$. On the other hand, $u_i g_i$ generates the socle of $R$, so the degree of $u_i$ is equal to $\delta - |g_i|$. $\square$
+
+**PROPOSITION 1.5.** If $S$ is an artinian graded k-algebra and $d_1 \le \cdots \le d_\ell$ are the socle degrees of $S$, then the minimal generators of the canonical module $K_S$ have degrees $-d_\ell \le \cdots \le -d_1$.
+
+*Proof.* Let $P = k[x_1, \dots, x_n]$ be a polynomial ring which maps onto $S$. One may compute the degrees of the generators of $K_S$, as well as the socle degrees of $S$, in terms of the back twists in the minimal homogeneous resolution of $S$ as a $P$-module:
+
+$$ 0 \to \bigoplus_i P(-b_{n,i}) \to \dots \to \bigoplus_i P(-b_{1,i}) \to P \to S \to 0. $$
+
+The canonical module $K_S$ is equal to $\text{Ext}_P^n(S, K_P)$, where $K_P = P(a(P))$ and $a(P) = -\sum_{i=1}^n |x_i|$. It follows that the minimal homogeneous resolution of $K_S$ is
+
+$$ (1.6) \quad 0 \to P(a(P)) \to \dots \to \bigoplus_i P(a(P) + b_{n,i}) \to K_S \to 0; $$
+
+therefore, the minimal generators of $K_S$ (over either $S$ or $P$) have degrees $\{-a(P) - b_{n,i}\}$. On the other hand, one may compute $\text{Tor}_n^P(S, P/m_P)$ in each coordinate (see, for example, [9, Lemma 1.3]) in order to conclude that
+
+$$ \bigoplus_i k(-b_{n,i}) = \text{Tor}_n^P(S, k) = \text{soc } S(a(P)). $$
+
+Thus, the socle degrees of $S$ are equal to $\{a(P) + b_{n,i}\}$. $\square$
+
+**COROLLARY 1.7.** Let $R \to S$ be a surjection of graded $k$-algebras with $S$ artinian and $R$ Gorenstein. If $\mathrm{pd}_R S$ is finite, then the socle degrees of $S$ are $\{b_i + a(R)\}$, where $\{b_i\}$ are the back twists in the minimal homogeneous resolution of $S$ by free $R$-modules.
+
+*Proof.* We know, from Proposition 1.2, that $K_S = \text{Ext}_R^{\dim R}(S, K_R)$, with $K_R = R(a(R))$; therefore,
+
+$$ 0 \to R(a(R)) \to \dots \to \bigoplus_i R(a(R) + b_i) \to K_S \to 0 $$
+---PAGE_BREAK---
+
+is a minimal resolution of $K_S$ and the minimal generators of $K_S$ as an $R$-module, or as an $S$-module, have degrees $\{-a(R) - b_i\}$. Apply Proposition 1.5. □
+
+Let $R$ be a graded $k$-algebra. We write $eR$ to represent the ring $R$ endowed with an $R$-module structure given by the $e^{\text{th}}$ iteration of the Frobenius endomorphism $\phi_R: R \to R$. (If $r$ is a scalar in $R$ and $s$ is a ring element in $eR$, then $r \cdot s$ is equal to $r^q s \in eR$, for $q = p^e$.) The Frobenius functor $F_R^e(\mathbb{N}) = \mathbb{N} \otimes_R eR$ is base change along the homomorphism $\phi_R^e$.
+
+**NOTATION 1.8.** Let $R$ be a graded $k$-algebra. We use the notation $(\mathbb{N})^{[q]}$ in three ways.
+
+(a) If $\mathfrak{g}$ is a matrix with entries in $R$, then $\mathfrak{g}^{[q]}$ is the matrix in which each entry of $\mathfrak{g}$ is raised to the power $q$. In particular, if $z$ is an element of the free module $\bigoplus_{i=1}^m R(-b_i)$, then $z$ is an $m \times 1$ matrix and $z^{[q]}$ is the matrix in which each entry of $z$ is raised to the power $q$.
+
+(b) If $\mathbb{G}_1$ is the free module $\bigoplus_{i=1}^m R(-b_i)$, then $\mathbb{G}_1^{[q]}$ is the free module $\bigoplus_{i=1}^m R(-qb_i)$.
+
+(c) If $J$ is the $R$-ideal $(a_1, \dots, a_m)$, then $J^{[q]}$ is the $R$-ideal $(a_1^q, \dots, a_m^q)$.
+In particular, if $\mathbb{N}$ is the homogeneous complex of graded free $R$-modules
+
+$$ \dots \to \mathbb{N}_3 \xrightarrow{\mathbf{n}_3} \mathbb{N}_2 \xrightarrow{\mathbf{n}_2} \mathbb{N}_1 \xrightarrow{\mathbf{n}_1} \dots, $$
+
+then $\mathbb{N}^{[q]}$ is a very clean way to write the homogeneous complex
+
+$$ (1.9) \qquad F_R^e(\mathbb{N}): \quad \dots \to \mathbb{N}_3^{[q]} \xrightarrow{\mathbf{n}_3^{[q]}} \mathbb{N}_2^{[q]} \xrightarrow{\mathbf{n}_2^{[q]}} \mathbb{N}_1^{[q]} \xrightarrow{\mathbf{n}_1^{[q]}} \dots. $$
+
+Furthermore, the Frobenius functor is always right exact; so, $F_R^e(R/J) = R/J^{[q]}$.
+
+We conclude this section by gathering a few properties of Gorenstein ideals, Gorenstein duality, and linkage. All of these tricks appear elsewhere in the literature, usually in more generality. We are likely to use them at any moment, without any further ado. Theorem 1.10 is due to Bass [2].
+
+**THEOREM 1.10.** Let $R$ be a local artinian ring. The following statements are equivalent:
+
+(1) $R$ is a Gorenstein ring.
+
+(2) The socle of $R$ is principal.
+
+(3) The ideal $(0)$ is irreducible (in the sense that $(0)$ is not equal to the intersection of two non-zero ideals).
+
+(4) The ring $R$ is self-injective.
+
+When the conditions of Theorem 1.10 are in effect, then the functor $\operatorname{Hom}_R(\mathfrak{A}, R) = (\mathfrak{A})^*$ is exact and if $M$ is a finitely generated $R$-module, then the modules $M$ and $M^*$ have the same length.
+---PAGE_BREAK---
+
+LEMMA 1.11. Let $M$ be an ideal in the artinian local Gorenstein ring $(R, \mathfrak{m})$.
+
+(1) The ideals $M$ and $\text{ann}(\text{ann}(M))$ are equal.
+
+(2) If $N$ is an ideal of $R$ with $M \subseteq N$, then $\text{ann}(M)/\text{ann}(N) \cong \text{Hom}(N/M, R)$.
+
+(3) If $R/M$ is a Gorenstein ring, then there exists an element $y$ of $R$ with $\text{ann}(y) = M$ and $\text{ann}(M) = (y)$.
+
+(4) If $y$ is a non-zero element of $R$, then $R/\text{ann}(y)$ is a Gorenstein ring.
+
+*Proof.* The ideal $M$ is contained in $\text{ann}(\text{ann}(M))$ and the two ideals have the same length because $M^* = R/\text{ann}(M)$. Assertion (1) follows. Apply $\text{Hom}(\_, R)$ to the short exact sequence
+
+$$0 \to N/M \to R/M \to R/N \to 0$$
+
+to obtain (2). If $\text{ann}(M) = (y_1, \dots, y_s)$, then (1) shows that
+
+$$M = \text{ann}(\text{ann}(M)) = \text{ann}(y_1) \cap \cdots \cap \text{ann}(y_s).$$
+
+Under the hypothesis of (3), the ideal $M$ is irreducible and $M = \text{ann}(y_i)$, for some $i$. Apply (1) again to complete the proof of (3). For (4), it is not difficult to see that the socle of $R/\text{ann}(y)$ is a principal ideal. $\square$
+
+If the conditions of (1) in Proposition 1.12 hold, then the ideal $C$ is called a *Gorenstein ideal*. Notice that when we use this term, we automatically assume the ideal $C$ to have finite projective dimension.
+
+PROPOSITION 1.12. Let $R$ be a graded $k$-algebra. Assume that $R$ is a Gorenstein ring. Let $C$ be a homogeneous ideal of $R$ of grade $c$ and finite projective dimension, and $(\mathbb{G}, g_\bullet)$ be the minimal homogeneous resolution of $R/C$ by free $R$-modules.
+
+(1) The following statements are equivalent:
+
+(a) The ring $R/C$ is Gorenstein.
+
+(b) The ring $R/C$ is Cohen-Macaulay and $\text{Ext}_R^c(R/C, R)$ is a cyclic $R/C$-module.
+
+(c) The ring $R/C$ is Cohen-Macaulay and $\text{Ext}_R^c(R/C, R)$ is isomorphic to a shift of $R/C$.
+
+(d) The complex $\text{Hom}_R(\mathbb{G}, R)$ is isomorphic to a shift of $\mathbb{G}$.
+
+(2) If $C$ is a Gorenstein ideal, then the entries of any matrix representation for $g_c$ form a minimal generating set for $C$.
+
+(3) If the field $k$ has characteristic $p$ and the ideal $C$ is a Gorenstein ideal, then $C^{[p]}$ is a Gorenstein ideal.
+
+*Proof.* The equivalence of (a), (b), and (c) is established in Section 5 of [2]. The ring $R/C$ is Cohen-Macaulay if and only if $R/C$ is a perfect $R$-module of projective dimension equal to $c$; see, for example, Section 16.C of [5]. It is now obvious
+---PAGE_BREAK---
+
+that (c) and (d) are equivalent. Assertion (2) follows from (d). Assertion (3)
+follows from Theorem 1.7 of [10], which guarantees that $F_R(\mathbb{G})$ is the minimal
+homogeneous resolution of $R/C^{[p]}$. $\square$
+
+Peskine and Szpiro popularized the concept of linkage by complete intersection ideals. Only slight modifications need be made to [11, Proposition 2.6] in order to prove assertion (1) in the following result about linkage by Gorenstein ideals; see, for example, Section 1 of [8].
+
+PROPOSITION 1.13. Let $R$ be a Gorenstein graded $k$-algebra, and let $L \subseteq M$ be homogeneous Gorenstein ideals (in the sense of Proposition 1.12) of $R$ of grade $c$.
+
+(1) Let $\mathbb{F}_{\bullet}$ and $\mathbb{G}_{\bullet}$ be the minimal homogeneous resolutions of $R/L$ and $R/M$, respectively, and let $\alpha_{\bullet} : \mathbb{F}_{\bullet} \to \mathbb{G}_{\bullet}$ be a homogeneous map of resolutions which extends the natural map $R/L \to R/M$. The map $\alpha_c : \mathbb{F}_c \to \mathbb{G}_c$ is multiplication by a homogeneous element $y$ of $R$. Then
+
+$$ L : M = (L, y) \quad \text{and} \quad L : y = M. $$
+
+(2) If $M$ is generated by the homogeneous regular sequence $f_1, \dots, f_c$ and $L$ is generated by $f_1^{r_1}, \dots, f_c^{r_c}$, then the conclusion of (1) holds for the product $y = f_1^{r_1-1} \cdots f_c^{r_c-1}$.
+
+*Proof*. One can prove (2) directly, or deduce it from (1). $\square$
+
+If $L$, $M$, and $N$ are ideals in a ring $R$, then a quick calculation yields the remarkably useful formula
+
+$$ (1.14) \qquad L : MN = (L : M) : N. $$
+
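In $\mathbb{Z}$, where every ideal is principal and $(a) : (b)$ is generated by $a/\gcd(a,b)$, formula (1.14) can be checked on concrete generators; a toy Python sketch:

```python
from math import gcd

# For principal ideals of Z, the colon ideal (a) : (b) is generated
# by a // gcd(a, b), so (1.14) reduces to an arithmetic identity.
def colon(a, b):
    """Generator of (a) : (b) in Z."""
    return a // gcd(a, b)

L, M, N = 12, 2, 3
lhs = colon(L, M * N)         # L : MN
rhs = colon(colon(L, M), N)   # (L : M) : N
print(lhs, rhs)               # 2 2
```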
+The proof of the next result may be read from the proof of Proposition 2 in
+[13].
+
+PROPOSITION 1.15. Let $L$ and $M$ be homogeneous ideals of a ring $R$ of positive prime characteristic $p$. If $L$ and $L:M$ have finite projective dimension, then $L^{[p]}:M^{[p]} = (L:M)^{[p]}$. $\square$
+
+## 2. The plan of attack and a few examples
+
+We first establish "(a) implies (b)" from Theorem A.
+
+OBSERVATION 2.1. Let $k$ be a field of positive characteristic $p$, $R \to S$
+be a surjection of graded k-algebras in the sense of (1.1), with $R$ Gorenstein
+and $S$ artinian. If $S$ has finite projective dimension as an $R$-module, then
+the socles of $S$ and $F_R^e(S)$ have the same dimension; furthermore, if the socle
+degrees of $S$ and $F_R^e(S)$ are given by
+
+$$ d_1 \le d_2 \le \cdots \le d_\ell \quad \text{and} \quad D_1 \le D_2 \le \cdots \le D_\ell, $$
+---PAGE_BREAK---
+
+respectively, then
+
+$$
+(2.2) \qquad D_i = qd_i - (q-1)a(R),
+$$
+
+for all $i$.
+
+*Proof.* Consider the minimal homogeneous resolution $\mathbb{F}$ of $S$ by free $R$-modules. We know from [10] that $F_R^e(\mathbb{F}) = \mathbb{F}^{[q]}$ is the minimal homogeneous resolution of $F_R^e(S)$. If the back twists of $\mathbb{F}$ are $\{b_i \mid 1 \le i \le L\}$, then the back twists of $\mathbb{F}^{[q]}$ are $\{qb_i\}$. Use Corollary 1.7 to see that $L = \ell$, $d_i = b_i + a(R)$, and $D_i = qb_i + a(R)$, for all $i$. $\square$
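Formula (2.2) is easy to check by hand for monomial complete intersections $J = (x_1^{a_1}, \dots, x_n^{a_n})$ in a standard graded polynomial ring $P$, where $P/J$ has a one-dimensional socle in degree $\sum_i (a_i - 1)$ and $a(P) = -n$. A small Python sketch of this sanity check (the helper names are ours):

```python
def socle_degree(exponents):
    """Socle degree of P/(x1^a1, ..., xn^an) with the standard grading."""
    return sum(a - 1 for a in exponents)

def satisfies_formula(exponents, q):
    """Check (2.2): D = q*d - (q - 1)*a(R), here with R = P, a(P) = -n."""
    n = len(exponents)
    d = socle_degree(exponents)                    # socle degree of P/J
    D = socle_degree([q * a for a in exponents])   # socle degree of P/J^[q]
    return D == q * d - (q - 1) * (-n)

print(satisfies_formula([2, 3, 5], q=7))  # True
```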
+
+**REMARK.** The hypothesis that $R$ is Gorenstein is necessary in Observation 2.1. Let $R=S$ be a non-Gorenstein artinian graded $k$-algebra whose socle lives in at least two distinct degrees. The ring $S$ has finite projective dimension as an $R$-module, and $d_i = D_i$ for all $i$, since $F_R^e(R) = R$. The $a$-invariant of $R$ is equal to the top socle degree of $R$; so $a(R) = d_\ell$, and (2.2) holds for $d_i = d_\ell$ but fails for $d_i < d_\ell$.
+
+We prove the converse of Observation 2.1 under the assumption that $R$ is a complete intersection. Our main result is the following statement.
+
+**THEOREM 2.3.** Let $k$ be a field of positive characteristic $p$, and let $R \to S$ be a surjection of graded $k$-algebras in the sense of (1.1), with $R$ a complete intersection and $S$ artinian. Let $e$ be a positive integer, $q = p^e$, and $d_1 \le \cdots \le d_\ell$ be the socle degrees of $S$. If the socle of $F_R^e(S)$ has the same dimension as the socle of $S$, and the socle degrees of $F_R^e(S)$ are given by $D_1 \le D_2 \le \cdots \le D_\ell$ as in (2.2), then $\text{Tor}_1^R(S, eR) = 0$.
+
+**STANDING NOTATION 2.4.** We express $R = P/C$, where $P$ is the polynomial ring $P = k[x_1, \dots, x_n]$, each variable has positive degree, and $C$ is a homogeneous Gorenstein ideal in $P$ of grade $c$. Let $I$ be a homogeneous $m_P$-primary ideal in $P$, $S = P/(I+C)$, $T = P/I$, and let $Z$ be the $(c-1)$-syzygy of the $P$-module $K_T(-a(P))$.
+
+*Proof of Theorem 2.3.* Adopt the notation of 2.4. In Corollary 3.2, we convert numerical information about the socle degrees of $S$ and $F_R^e(S)$ into numerical information about $\operatorname{Tor}_c^P(K_T, R)$ and $\operatorname{Tor}_c^P(K_{F_P^e(T)}, R)$. In Proposition 4.1, the numerical information about $\operatorname{Tor}_c$'s is converted into the statement
+
+$$
+\mathrm{Tor}_1^R (Z \otimes_P R, eR) = 0.
+$$
+
+This homological statement is expressed as a statement about ideals:
+
+$$
+(C^{[q]} + I^{[q]}) : (C^{[q]} : C) = C + I^{[q]}
+$$
+
+in Proposition 5.1. In Proposition 6.1 we deduce
+
+$$
+I^{[q]} \cap C = (I \cap C)^{[q]} + CI^{[q]}.
+$$
+---PAGE_BREAK---
+
+This result is equivalent to $\text{Tor}_1^R(S, eR) = 0$, as is recorded in Proposition 3.5. $\square$
+
+We would like to prove that the conclusion of Theorem 2.3 continues to hold after one replaces the hypothesis that $R$ is a complete intersection with the weaker hypothesis that $R$ is Gorenstein. Three of our five steps (3.2, 5.1, and 3.5) work when $R$ is Gorenstein. The arguments that we use in the other two steps (4.1 and 6.1) require that $R$ be a complete intersection, although in Proposition 7.4 we prove the ideal theoretic version of (4.1) under the hypothesis that $R$ is Gorenstein and F-pure. At any rate, if $R$ is a complete intersection and the conclusion of Theorem 2.3 holds, then the Theorem of Avramov and Miller [1] (see also [6]) guarantees that $S$ has finite projective dimension as an $R$-module. We are very curious to know if some form of the Avramov-Miller result,
+
+$$ \text{Tor}_1^R(M, eR) = 0 \implies \text{pd}_R M < \infty, $$
+
+for finitely generated $R$-modules $M$, can be proven when $R$ is Gorenstein, but not necessarily a complete intersection.
+
+*Proof of Theorem A.* Adopt the notation of 2.4.
+
+(a) $\implies$ (b): This is Observation 2.1.
+
+(b) $\implies$ (c): Assume (b). In Corollary 3.2 and Observation 3.3, we show that if the generator degrees of $\text{Tor}_1^P(Z, R)$ are $\{\gamma_i\}$, then the generator degrees of $\text{Tor}_1^P(F_P^e(Z), R)$ are $\{q\gamma_i\}$. Proposition 4.1 shows that $\text{Tor}_1^R(Z \otimes_P R, eR) = 0$. Proposition 5.1 yields (c).
+
+(c) $\implies$ (d): This is Proposition 6.1.
+
+(d) $\implies$ (a): If (d) holds, then Proposition 3.5 shows that $\text{Tor}_1^R(T \otimes_P R, eR) = 0$. The Theorem of Avramov and Miller [1] guarantees that $T \otimes_P R$ has finite projective dimension as an $R$-module. $\square$
+
+If the hypothesis of Theorem 2.3 is weakened to say only that the socles of $S$ and $F_R^e(S)$ have the same dimension (with no mention about how the socle degrees are related), then the conclusion of Theorem 2.3 fails to hold; see Example 2.9. It is curious, however, that if $S$ is Gorenstein, and the defining ideal of $S$ is contained in a proper ideal of $R$ of finite projective dimension (for example, a parameter ideal of $R$), then one need only verify that the socles of $F_R^e(S)$ and $S$ both have dimension one in order to conclude that $S$ has finite projective dimension over $R$.
+
+**THEOREM 2.5.** Let $(R, \mathfrak{m})$ be a local $k$-algebra, with $R$ a complete intersection, and let $J \subset R$ be an $\mathfrak{m}$-primary ideal. Assume that $J \subseteq \mathfrak{b}$ for some proper ideal $\mathfrak{b}$ with $\text{pd}_R \mathfrak{b} < \infty$. If $R/J$ and $R/J^{[q]}$ are both Gorenstein, then $\text{pd}_R J < \infty$.
+---PAGE_BREAK---
+
+*Proof.* Choose $\mathfrak{a} \subseteq J$ a parameter ideal. Use Lemma 1.11 and (1.14) to write $J = \mathfrak{a} : f$, and write $\mathfrak{b} = J : K = \mathfrak{a} : fK$. Since $\mathfrak{b}$ has finite projective dimension, we know, from Proposition 1.15, that
+
+$$ (2.6) \qquad \mathfrak{b}^{[q]} = \mathfrak{a}^{[q]} : f^q K^{[q]}. $$
+
+On the other hand, we have
+
+$$ (2.7) \qquad \mathfrak{b}^{[q]} \subseteq J^{[q]} : K^{[q]}. $$
+
+Since $J^{[q]}$ and $\mathfrak{a}^{[q]} : f^q$ are both irreducible and $J^{[q]} \subseteq \mathfrak{a}^{[q]} : f^q$, we can write $\mathfrak{a}^{[q]} : f^q = J^{[q]} : b$ for some $b \in R$. Plugging this into (2.6) yields
+
+$$ \mathfrak{b}^{[q]} = J^{[q]} : bK^{[q]}. $$
+
+Comparing this with (2.7) we get $J^{[q]} : bK^{[q]} = J^{[q]} : K^{[q]}$ (equation (2.7) gives one inclusion; the other inclusion is always true), which is equivalent to $K^{[q]} \subseteq J^{[q]} + bK^{[q]}$. If $b$ is not a unit, this means that $K^{[q]} \subseteq J^{[q]}$, and so $\mathfrak{b}^{[q]} = J^{[q]} : K^{[q]} = R$, which is a contradiction.
+
+Thus, $b$ must be a unit, so
+
+$$ (2.8) \qquad J^{[q]} = \mathfrak{a}^{[q]} : f^q. $$
+
+Consider the short exact sequence
+
+$$ 0 \to R/J \to R/\mathfrak{a} \to R/(\mathfrak{a}, f) \to 0. $$
+
+Equation (2.8) implies that its tensorization with $eR$ is exact, and thus
+
+$$ \operatorname{Tor}_1^R(R/(\mathfrak{a}, f), eR) = 0, $$
+
+and $(\mathfrak{a}, f)$ has finite projective dimension by the Avramov-Miller result. Consequently, $J$ has finite projective dimension as well. $\square$
+
+**EXAMPLE 2.9.** Note that the conclusion of Theorem 2.5 no longer holds without the assumption that $J$ is contained in a proper ideal of finite projective dimension. Consider, for instance, $R = k[x, y, z]/(x^3+y^3+z^3)$ and $J = (x, y, z^2)$, where $k$ is a field of characteristic $p \equiv 2 \pmod 3$. Clearly $J$ is irreducible, $J^{[p]} = (x^p, y^p, z^{2p})$ is also irreducible, but $J$ does not have finite projective dimension.
+
+**3. Convert socle degrees into $\text{Tor}_1^R(Z \otimes_P R, eR)$**
+
+Adopt the notation of 2.4. The first three results in this section convert hypothesis (2.2) about the socle degrees of $S$ and $F_R^e(S)$ into a statement about the generator degrees of $\text{Tor}_1^P(Z, R)$ and $\text{Tor}_1^P(F_P^e(Z), R)$. Observation 3.4 shows that $\text{Tor}_1^R(Z \otimes_P R, eR)$ is a quotient of $\text{Tor}_1^P(F_P^e(Z), R)$ by a submodule which is built from the generators of $\text{Tor}_1^P(Z, R)$. The notation that is introduced in the proof of Observation 3.4 will be used again in the proofs of Propositions 4.1 and 5.1.
+---PAGE_BREAK---
+
+All of the calculations in Section 3 which we have described so far pertain to the proof of (b)$\implies$(c) in Theorem A. It is curious, however, that Observation 3.4 also may be applied (see Proposition 3.5) to give an ideal theoretic interpretation of $\operatorname{Tor}_1^R(T \otimes_P R, eR)$, which is our contribution to the proof of (d)$\implies$(a) in Theorem A.
+
+LEMMA 3.1. *Adopt the notation of 2.4. If the socle degrees of S are*
+
+$$
+\{d_i \mid 1 \le i \le \ell\},
+$$
+
+then the minimal generators of $\operatorname{Tor}_c^P(K_T(-a(P)), R)$ have degrees
+
+$\{a(R) - d_i \mid 1 \le i \le \ell\}.$
+
+*Proof.* Let $\mathbb{G}$ be the minimal homogeneous resolution of $R$ by free $P$-modules. Corollary 1.3 (c) tells us that $\mathbb{G}_c = P(a(P) - a(R))$. It follows that
+
+$$
+\begin{align*}
+\operatorname{Tor}_c^P(K_T(-a(P)), R) &= \operatorname{H}_c(K_T(-a(P)) \otimes_P \mathbb{G}) \\
+&= \{\alpha \in K_T(-a(R)) \mid C\alpha = 0\} \\
+&= \operatorname{Hom}_P(R, K_T(-a(R))).
+\end{align*}
+$$
+
+On the other hand, we have a surjection $T \to S$; so Proposition 1.2 guarantees
+
+$$
+K_S = \operatorname{Hom}_T(S, K_T) = \operatorname{Hom}_P(R, K_T).
+$$
+
+Thus,
+
+$$
+K_S(-a(R)) = \operatorname{Hom}_P(R, K_T(-a(R))) = \operatorname{Tor}_c^P(K_T(-a(P)), R).
+$$
+
+Apply Proposition 1.5. $\square$
+
+Lemma 3.1 also applies when the ideal $I$ is replaced by the ideal $I^{[q]}$; consequently, if the socle degrees of $F_R^e(S)$ are $\{D_i \mid 1 \le i \le L\}$, then the minimal generators of $\operatorname{Tor}_c^P(K_{F_P^e(T)}(-a(P)), R)$ have degrees $\{a(R) - D_i \mid 1 \le i \le L\}$. We have established the following conversion of the original hypothesis about socle degrees into a statement about generator degrees of $\operatorname{Tor}_c$.
+
+COROLLARY 3.2. *Retain the notation of 2.4. Assume that the socles of S and* $F_R^e(S)$ *have the same dimension. Let*
+
+$$
+d_1 \le \dots \le d_\ell \quad \text{and} \quad D_1 \le \dots \le D_\ell
+$$
+
+be the socle degrees of S and $F_R^e(S)$, respectively, and
+
+$\gamma_1 \le \cdots \le \gamma_\ell \text{ and } \Gamma_1 \le \cdots \le \Gamma_\ell,$
+
+be the minimal generator degrees of
+
+$\mathrm{Tor}_c^P(K_T(-a(P)), R)$ and $\mathrm{Tor}_c^P(K_{F_P^e(T)}(-a(P)), R),$
+---PAGE_BREAK---
+
+respectively. Then
+
+$$D_i = qd_i - (q-1)a(R) \text{ for all } i \iff \Gamma_i = q\gamma_i \text{ for all } i. \quad \square$$
+
+We interpret the homological objects of Corollary 3.2 as $\operatorname{Tor}_1$ of the appropriate modules. This process involves index shifting and keeping careful track of the twists.
+
+**OBSERVATION 3.3.** In the notation of 2.4:
+
+(a) $\operatorname{Tor}_c^P(K_T(-a(P)), R) = \operatorname{Tor}_1^P(Z, R)$, and
+
+(b) $\operatorname{Tor}_c^P(K_{F_P^e(T)}(-a(P)), R) = \operatorname{Tor}_1^P(F_P^e(Z), R)$.
+
+*Proof.* We prove (b). Let $\mathbb{F}$ be the minimal homogeneous resolution of $K_T(-a(P))$ by free $P$-modules. The functor $F_P^e(\cdot)$ is exact; so, $\mathbb{F}^{[q]}$, which is equal to $F_P^e(\mathbb{F})$ (see (1.9)), resolves $F_P^e(K_T(-a(P)))$. On the other hand, it is not hard to see that $\mathbb{F}^{[q]}$ resolves some twist of $K_{F_P^e(T)}$. Indeed, if $(\cdot)^* = \operatorname{Hom}_P(\cdot, P)$, then it is clear that $(F_P^e(\mathbb{F}^*))^*$ is equal to a shift of $\mathbb{F}^{[q]}$; furthermore, it is also clear that $(F_P^e(\mathbb{F}^*))^*$ resolves some shift of $K_{F_P^e(T)}$; see, for example, Proposition 1.12. There are many ways to keep track of the shifts. One pain free approach is to apply the technique of (1.6) to $K_T$ and to $K_{F_P^e(T)}$ in order to nail down the fact that $\mathbb{F}^{[q]}$ is the minimal homogeneous resolution of $K_{F_P^e(T)}(-a(P))$; hence,
+
+$$
+\begin{align*}
+F_P^e(K_T(-a(P))) &= K_{F_P^e(T)}(-a(P)), \\
+\operatorname{Tor}_i^P(K_{F_P^e(T)}(-a(P)), R) &= H_i(\mathbb{F}^{[q]} \otimes_P R),
+\end{align*}
+$$
+
+for all $i$. The beginning of the minimal homogeneous resolution of $Z$ is
+
+$$ \dots \to \mathbb{F}_{c+1} \to \mathbb{F}_c \to \mathbb{F}_{c-1} \to Z \to 0. $$
+
+The functor $F_P^e(\cdot)$ is exact; so,
+
+$$ \operatorname{Tor}_1^P(F_P^e(Z), R) = H_c(\mathbb{F}^{[q]} \otimes_P R) = \operatorname{Tor}_c^P(K_{F_P^e(T)}(-a(P)), R). \quad \square $$
+
+**OBSERVATION 3.4.** Let $P \to R$ be a surjection of graded $k$-algebras, with $P$ a polynomial ring, and let $M$ be a finitely generated graded $P$-module. Then there is an exact sequence of graded $R$-modules:
+
+$$ F_R^e (\operatorname{Tor}_1^P(M, R)) \to \operatorname{Tor}_1^P(F_P^e(M), R) \to \operatorname{Tor}_1^R(M \otimes_P R, eR) \to 0. $$
+
+*Proof.* Let $(\mathbb{N}, \mathbf{n})$ be the minimal homogeneous resolution of $M$ by free $P$-modules. The functor $F_P^e(\cdot)$ is exact; so, $\mathbb{N}^{[q]}$ is the minimal homogeneous resolution of $F_P^e(M)$ by free $P$-modules, and $\operatorname{Tor}_1^P(F_P^e(M), R)$ is equal to
+
+$$ H_1(F_P^e(\mathbb{N}) \otimes_P R) = H_1(F_R^e(\mathbb{N} \otimes_P R)). $$
+---PAGE_BREAK---
+
+The complexes $F_P^e(\mathbb{N}) \otimes_P R$ and $F_R^e(\mathbb{N} \otimes_P R)$ are equal because the Frobenius and quotient homomorphisms in the square
+
+$$
+\begin{tikzcd}
+P \arrow[r, "x \,\mapsto\, x^q"] \arrow[d, "\text{quot. map}"'] & P \arrow[d, "\text{quot. map}"] \\
+R \arrow[r, "x \,\mapsto\, x^q"'] & R
+\end{tikzcd}
+$$
+
+commute. Let $\bar{\phantom{x}}$ denote the functor $\mathbb{N} \otimes_P R$. Select elements $z_1, \dots, z_\ell$ of $\mathbb{N}_1$ so that $\bar{z}_1, \dots, \bar{z}_\ell$ are cycles in $\mathbb{N} \otimes_P R$ and the homology classes $[\bar{z}_1], \dots, [\bar{z}_\ell]$ form a minimal generating set for $H_1(\mathbb{N} \otimes_P R) = \text{Tor}_1^P(M, R)$. It is clear that $z_i^{[q]}$ is an element of $F_P^e(\mathbb{N}_1)$ with $\overline{z_i^{[q]}} = \bar{z}_i^{[q]}$ a cycle in $F_P^e(\mathbb{N}) \otimes_P R = F_R^e(\mathbb{N} \otimes_P R)$, for each $i$.
+
+The technique of killing cycles (see, for example, Section 2 of [12]) tells us
+that
+
+$$
+\overline{\mathbb{M}} : \quad \overline{\mathbb{N}}_2 \oplus \bigoplus_{i=1}^\ell R(-|z_i|) \xrightarrow{[\bar{n}_2\ \bar{z}_1 \dots \bar{z}_\ell]} \overline{\mathbb{N}}_1 \xrightarrow{\bar{n}_1} \overline{\mathbb{N}}_0 \to \overline{M} \to 0
+$$
+
+is the beginning of a homogeneous resolution of $\overline{M}$ by free $R$-modules. It
+follows that
+
+$$
+\operatorname{Tor}_1^R(\overline{M}, eR) = H_1(\overline{\mathbb{M}} \otimes_R eR) = \frac{H_1(F_R^e(\overline{\mathbb{N}}))}{([\bar{z}_1^{[q]}], \dots, [\bar{z}_\ell^{[q]}])} = \frac{\operatorname{Tor}_1^P(F_P^e(M), R)}{([\bar{z}_1^{[q]}], \dots, [\bar{z}_\ell^{[q]}])}. \quad \square
+$$
+
+The following result is an application of the technique of Observation 3.4.
+
+PROPOSITION 3.5. Let $R = P/C$ and $T = P/I$, where $I$ and $C$ are homogeneous ideals in the polynomial ring $P$. Then
+
+$$
+\mathrm{Tor}_1^R(T \otimes_P R, eR) = \frac{I^{[q]} \cap C}{(I \cap C)^{[q]} + I^{[q]}C}.
+$$
+
+*Proof.* Apply Observation 3.4 to see that
+
+$$
+\mathrm{Tor}_1^R(T \otimes_P R, eR) = \frac{\mathrm{H}_1(\mathbb{N}^{[q]} \otimes_P R)}{([\bar{z}_1^{[q]}], \dots, [\bar{z}_\ell^{[q]}])},
+$$
+
+where $(\mathbb{N}, \boldsymbol{n})$ is the minimal homogeneous resolution of $T$ by free $P$-modules,
+$-$ is the functor $\mathbb{N} \otimes_P R$, and $z_1, \dots, z_\ell$ are homogeneous elements of $\mathbb{N}_1$ with
+$[\bar{z}_1], \dots, [\bar{z}_\ell]$ a minimal generating set for
+
+$$
+H_1(\mathbb{N} \otimes_P R) = \text{Tor}_1^P (P/I, P/C) = (I \cap C)/IC.
+$$
+
+Observe that $I \cap C = (\boldsymbol{n}_1(z_1), \dots, \boldsymbol{n}_1(z_\ell)) + IC$. Observe also, that
+
+$$
+H_1(\mathbb{N}^{[q]} \otimes_P R) = \text{Tor}_1^P (P/I^{[q]}, P/C) = (I^{[q]} \cap C)/I^{[q]}C.
+$$
+
+The isomorphism $H_1(\mathbb{N}^{[q]} \otimes_P R) \to (I^{[q]} \cap C)/I^{[q]}C$ carries $[\bar{z}_i^{[q]}]$ to the class
+of $(\boldsymbol{n}_1(z_i))^q$. $\square$
+---PAGE_BREAK---
+
+**4. Degree considerations concerning Tor₁**
+
+Proposition 4.1, followed by an application of [1], is a general statement which says that if the degrees of the minimal generators of
+
+$\operatorname{Tor}_1^P(M, R)$ and $\operatorname{Tor}_1^P(F_P^e(M), R)$
+
+are related in the appropriate manner, then $M \otimes_P R$ has finite projective dimension as an $R$-module. When the notation of 2.4 and the hypothesis of Theorem 2.3 are in effect, then Corollary 3.2 and Observation 3.3 show that Proposition 4.1 may be applied with $M = Z$.
+
+PROPOSITION 4.1. Let $P \to R$ be a surjection of graded $k$-algebras, with $P$ a polynomial ring and $R$ a complete intersection, and let $M$ be a finitely generated graded $P$-module. Suppose that the minimal generators of $\operatorname{Tor}_1^P(M, R)$ have degrees $\{\gamma_i \mid 1 \le i \le \ell\}$. If the minimal generators of $\operatorname{Tor}_1^P(F_P^e(M), R)$ have degrees $\{q\gamma_i \mid 1 \le i \le \ell\}$, then $\operatorname{Tor}_1^R(M \otimes_P R, eR) = 0$.
+
+*Proof.* Inflation of the base field $k \to K$ gives rise to faithfully flat extensions $P \to P \otimes_k K$ and $R \to R \otimes_k K$. Consequently, we may assume that $k$ is a perfect field. Let $C$ be the ideal in $P$ with $R = P/C$, and let $f_1, \dots, f_c$ be a homogeneous regular sequence in $P$ that generates $C$. We retain the notation from the proof of Observation 3.4. So,
+
+$$\mathbb{N} : \mathbb{N}_2 \xrightarrow{n_2} \mathbb{N}_1 \xrightarrow{n_1} \mathbb{N}_0 \to M \to 0$$
+
+is the beginning of the minimal homogeneous resolution of $M$ by free $P$-modules, $-$ is the functor $\cdot \otimes_P R$, and $z_1, \dots, z_\ell$ are homogeneous elements of $\mathbb{N}_1$ with $[\bar{z}_1], \dots, [\bar{z}_\ell]$ a minimal generating set for $H_1(\overline{\mathbb{N}}) = \operatorname{Tor}_1^P(M, R)$. Let $Y$ be the subset $\{[\bar{z}_1^{[q]}], \dots, [\bar{z}_\ell^{[q]}]\}$ of $H_1(\overline{\mathbb{N}}^{[q]}) = \operatorname{Tor}_1^P(F_P^e(M), R)$. We will prove that
+
+$$ (4.2) \qquad Y \text{ generates } H_1(\overline{\mathbb{N}}^{[q]}). $$
+
+As soon as (4.2) is established, then the proof is complete by Observation 3.4.
+
+Fix an integer $\delta$. Define $W'_{\delta}$ and $W''_{\delta}$ to be the $P$-submodules of $H_1(\overline{\mathbb{N}}^{[q]})$
+which are generated by the homogeneous elements of $Y$ of degree at most $\delta$ and by the homogeneous elements of $H_1(\overline{\mathbb{N}}^{[q]})$ of degree at most $\delta$, respectively.
+
+---PAGE_BREAK---
+
+The preceding sections show that the hypothesis of Theorem 2.3 implies $\operatorname{Tor}_1^R(Z \otimes_P R, eR) = 0$. Our goal in Theorem 2.3 is to prove that $\operatorname{Tor}_1^R(T \otimes_P R, eR) = 0$. Homological arguments in Sections 3 and 5 connect these Tor-modules to quotients of ideals. In the present section we show how information about the Tor-module of $Z$ gives information about the Tor-module of $T$, when $R$ is a complete intersection.
+
+PROPOSITION 6.1. Let $P$ be a regular ring of positive characteristic $p$, and let $C$ and $I$ be ideals in $P$. Assume that $C$ is generated by the regular sequence $f_1, \dots, f_c$ and that
+
+$$
+(6.2) \qquad (C + I)^{[q]} : y = C + I^{[q]},
+$$
+---PAGE_BREAK---
+
+where $y = (f_1 \cdots f_c)^{q-1}$. Then
+
+$$I^{[q]} \cap C = (I \cap C)^{[q]} + CI^{[q]}.$$
+
+*Proof.* Recall, from Proposition 1.13, that $C^{[q]} : C = (y, C^{[q]})$. Take $\xi \in I^{[q]} \cap C$. We prove that if $1 \le t \le c(q-1)$, then
+
+$$ (6.3) \qquad \xi \in C^t + C^{[q]} + CI^{[q]} \implies \xi \in C^{t+1} + C^{[q]} + CI^{[q]}. $$
+
+Of course, we know that the hypothesis of (6.3) holds for $t=1$. Once we have established (6.3), then, since $C^{c(q-1)+1} \subseteq C^{[q]}$, we know that
+
+$$ \xi \in (I^{[q]} \cap C^{[q]}) + CI^{[q]} = (I \cap C)^{[q]} + CI^{[q]}, $$
+
+because the Frobenius functor on $P$ is flat.
+
+Now we prove (6.3). Write $\xi$ as an element of $C^{[q]} + CI^{[q]}$ plus
+
+$$ \sum_{\alpha} b_{\alpha} f_{1}^{\alpha_{1}} \cdots f_{c}^{\alpha_{c}}, $$
+
+where $\alpha = (\alpha_1, \dots, \alpha_c)$ varies over all $c$-tuples of non-negative integers with $\alpha_i < q$ for all $i$ and $\sum_{i=1}^c \alpha_i = t$. Fix an index $\alpha$. Observe that
+
+$$ f_1^{q-1-\alpha_1} \cdots f_c^{q-1-\alpha_c} \xi $$
+
+is equal to $b_\alpha y$ plus an element of $C^{[q]} + I^{[q]}$. Hypothesis (6.2) tells us that $b_\alpha$ is in $C + I^{[q]}$; (6.3) is established, and the proof is complete. $\square$
+
+**7. The Gorenstein F-pure case**
+
+The question of whether the conclusion of Theorem 2.3 holds when $R$ is Gorenstein is still open. In this section, we include partial results in this direction. Recall that the ring $R$ of positive prime characteristic $p$ is F-pure if whenever $J$ is an ideal of $R$ and $x$ is an element of $R$ with $x \notin J$, then $x^q \notin J^{[q]}$ for all $q = p^e$.
+
+First note that the top socle degree (tsd) of a Frobenius power is always at least equal to the “expected” top socle degree, in the sense of Observation 2.1:
+
+PROPOSITION 7.1. Let $k$ be a field of positive characteristic $p$, $R \to S$ be a surjection of graded $k$-algebras with $S$ artinian. Assume that either $R$ is a complete intersection or $R$ is Gorenstein and F-pure. If $d$ is the top socle degree of $S$, then the top socle degree of $F_R^e(S)$ is at least $qd - (q-1)a(R)$.
+
+*Proof.* Write $S = R/J$, with $J \subset R$ an $m$-primary ideal, where $m$ is the unique homogeneous maximal ideal of $R$.
+
+We first assume that $R$ is Gorenstein and F-pure. Let $\mathfrak{a}$ be an $m$-primary ideal of $R$, generated by a regular sequence, with $\mathfrak{a} \subset J$. Let $g_1, \dots, g_s$, with $|g_1| \le \dots \le |g_s|$, be elements in $R$ which represent a minimal generating set for $(\mathfrak{a} : J)/\mathfrak{a}$. The hypothesis that $R$ is F-pure ensures that
+---PAGE_BREAK---
+
+$g_i^q \notin (g_1^q, \dots, \hat{g}_i^q, \dots, g_s^q, \mathfrak{a}^{[q]})$ for all $i$; therefore, $g_1^q, \dots, g_s^q$ represents a minimal generating set for $(g_1^q, \dots, g_s^q, \mathfrak{a}^{[q]})/\mathfrak{a}^{[q]}$. It follows that the minimum generator degree (min gen degree) of $(g_1^q, \dots, g_s^q, \mathfrak{a}^{[q]})/\mathfrak{a}^{[q]}$ is $q|g_1|$. Apply Observation 1.4 to the ideal $J/\mathfrak{a}$ of the ring $R/\mathfrak{a}$ to see that
+
+$$
+\begin{align*}
+\mathrm{tsd}\, R/J &= \mathrm{tsd}\left(\frac{R/\mathfrak{a}}{J/\mathfrak{a}}\right) \\
+&= \text{socle degree}\, R/\mathfrak{a} - \text{min gen degree}\,(\operatorname{ann}(J/\mathfrak{a})) \\
+&= \text{socle degree}\, R/\mathfrak{a} - \text{min gen degree}\,((\mathfrak{a}:J)/\mathfrak{a}) \\
+&= \text{socle degree}\, R/\mathfrak{a} - |g_1|.
+\end{align*}
+$$
+
+Duality (see Lemma 1.11) gives
+
+$$
+\mathfrak{a}^{[q]} : (\mathfrak{a}^{[q]} : (g_1^q, \ldots, g_s^q)) = (g_1^q, \ldots, g_s^q, \mathfrak{a}^{[q]})
+$$
+
+in $R$; so the annihilator of the ideal
+
+$$
+(7.2) \qquad \frac{\mathfrak{a}^{[q]} : (g_1^q, \dots, g_s^q)}{\mathfrak{a}^{[q]}}
+$$
+
+in the ring $R/\mathfrak{a}^{[q]}$ is $(g_1^q, \dots, g_s^q, \mathfrak{a}^{[q]})/\mathfrak{a}^{[q]}$. Apply Observation 1.4 to the ideal (7.2) to see that
+
+$$
+\begin{align*}
+\mathrm{tsd} \frac{R}{\mathfrak{a}^{[q]} : (g_1^q, \dots, g_s^q)} &= \mathrm{tsd} \frac{R/\mathfrak{a}^{[q]}}{\left(\mathfrak{a}^{[q]} : (g_1^q, \dots, g_s^q)\right)/\mathfrak{a}^{[q]}} \\
+&= \text{socle degree } \frac{R}{\mathfrak{a}^{[q]}} - \text{min gen degree} \left( \text{ann} \frac{\mathfrak{a}^{[q]} : (g_1^q, \dots, g_s^q)}{\mathfrak{a}^{[q]}} \right) \\
+&= \text{socle degree } \frac{R}{\mathfrak{a}^{[q]}} - \text{min gen degree} \left( \frac{(g_1^q, \dots, g_s^q, \mathfrak{a}^{[q]})}{\mathfrak{a}^{[q]}} \right) \\
+&= \text{socle degree } \frac{R}{\mathfrak{a}^{[q]}} - q|g_1|.
+\end{align*}
+$$
+
+The $R$-module $R/\mathfrak{a}$ has finite projective dimension; so Observation 2.1 yields
+
+$$
+\text{socle degree}\, \frac{R}{\mathfrak{a}^{[q]}} = q\, \text{socle degree}\, \frac{R}{\mathfrak{a}} - (q-1)a(R).
+$$
+
+Duality gives $J = \mathfrak{a} : (g_1, \dots, g_s)$. It follows that $J^{[q]} \subseteq \mathfrak{a}^{[q]} : (g_1^q, \dots, g_s^q)$; therefore,
+---PAGE_BREAK---
+
+$$
+\begin{align*}
+\textrm{tsd} \frac{R}{J^{[q]}} &\ge \textrm{tsd} \frac{R}{\mathfrak{a}^{[q]} : (g_1^q, \dots, g_s^q)} = \text{socle degree} \frac{R}{\mathfrak{a}^{[q]}} - q|g_1| \\
+&= \text{socle degree} \frac{R}{\mathfrak{a}^{[q]}} - q \left( \text{socle degree} \frac{R}{\mathfrak{a}} - \text{tsd} \frac{R}{J} \right) \\
+&= q \text{tsd} \frac{R}{J} + \left( \text{socle degree} \frac{R}{\mathfrak{a}^{[q]}} - q \text{socle degree} \frac{R}{\mathfrak{a}} \right) \\
+&= q \text{tsd} \frac{R}{J} - (q-1)a(R).
+\end{align*}
+$$
+
+The proof is complete if $R$ is Gorenstein and $F$-pure. Throughout the rest of the argument, $R$ is a complete intersection. We begin by reducing to the case where $J$ is an irreducible ideal. Assume, for the time being, that the result has been established for irreducible ideals. Let $J = J_1 \cap \cdots \cap J_n$, with each $J_i$ irreducible. Recall that $\text{tsd}\, R/J$ is the largest integer $d$ with $R_d \not\subseteq J$. It follows that $\text{tsd}\, R/J$ is equal to the maximum of the set $\{\text{tsd}\, R/J_k\}$. Fix a subscript $k$ with $\text{tsd}\, R/J = \text{tsd}\, R/J_k$. We know that $J^{[q]} \subseteq J_k^{[q]}$; therefore,
+
+$$
+q \operatorname{tsd} \frac{R}{J} - (q-1)a(R) = q \operatorname{tsd} \frac{R}{J_k} - (q-1)a(R) \le \operatorname{tsd} \frac{R}{J_k^{[q]}} \le \operatorname{tsd} \frac{R}{J^{[q]}}.
+$$
+
+Henceforth, the ideal $J$ is irreducible. Write $R = P/C$, where $P$ is a polynomial ring and the ideal $C$ is generated by the homogeneous regular sequence $f_1, \dots, f_c$. Let $I$ be the pre-image of $J$ in $P$. In particular, $C \subseteq I$. The rings $R/J = P/I$ and $P/I^{[q]}$ are Gorenstein, so Observation 1.4 gives
+
+$$
+\text{socle degree} \frac{P}{I^{[q]}} - M = \text{tsd} \frac{P}{I^{[q]} + C},
+$$
+
+where $M$ is the least degree among homogeneous non-zero elements of $(I^{[q]} : C)/I^{[q]}$. The $P$-module $P/I$ has finite projective dimension; so Observation 2.1 yields
+
+$$
+q\, \text{socle degree}(P/I) - (q-1)a(P) = \text{socle degree}(P/I^{[q]}).
+$$
+
+Recall the formula $a(P/C) = a(P) + \sum_{i=1}^{c} |f_i|$. The inequality
+
+$$
+q\, \text{socle degree}(P/I) - (q-1)a(P/C) \le \operatorname{tsd}\left( \frac{P}{I^{[q]} + C} \right)
+$$
+
+is equivalent to the inequality
+
+$$
+(7.3) \qquad M \le (q-1)(|f_1| + \dots + |f_c|).
+$$
+
+We establish (7.3). There exists an integer $t$, with $0 \le t \le c(q-1)$, such that
+$C^t \not\subseteq I^{[q]}$; but $C^{t+1} \subseteq I^{[q]}$. Thus, some element $f_1^{t_1} \cdots f_c^{t_c}$ of $C^t$, with $\sum t_i = t$
+and $0 \le t_i \le q-1$ for all $i$, represents a non-zero element of $(I^{[q]} : C)/I^{[q]}$;
+therefore,
+
+$$
+M \le |f_1^{t_1} \cdots f_c^{t_c}| \le (q-1)(|f_1| + \cdots + |f_c|). \quad \square
+$$
+---PAGE_BREAK---
+
+The next result shows that we can get most of the way through the proof
+of Theorem 2.3 under the assumption that $R$ is Gorenstein and F-pure. The
+conclusion of Proposition 7.4 is exactly the same as the conclusion that is
+obtained when Proposition 5.1 is used in the proof of Theorem 2.3. Only the
+last step (the analogue of Proposition 6.1) is still missing.
+
+PROPOSITION 7.4. Let $k$ be a field of positive characteristic $p$, $R \to S$ be a surjection of graded $k$-algebras with $R$ Gorenstein and $S$ artinian. Assume, in addition, that $R$ is F-pure. Assume that $d_1 \le \cdots \le d_\ell$ are the socle degrees of $S$ and the socle of $F_R^e(S)$ has the same dimension as the socle of $S$, with degrees of the generators given by $D_1 \le D_2 \le \cdots \le D_\ell$, with
+
+$$D_i = qd_i - (q-1)a(R),$$
+
+for all i. Let $R = P/C$, and $S = R/IR$, with $P$ a polynomial ring, $I \subset P$. Then we have
+
+$$(C^{[q]} + I^{[q]}) : (C^{[q]} : C) = C + I^{[q]}.$$
+
+*Proof*. Let $A = C + (x_1, \dots, x_d)$, where the images of $x_1, \dots, x_d$ are a system of parameters in $R$. Let $K = A : (I + C)$, so that we also have $I + C = A : K$. We have
+
+$$ (I^{[q]} + C^{[q]}) : (C^{[q]} : C) = (A^{[q]} : K^{[q]}) : (C^{[q]} : C) = (A^{[q]} : (C^{[q]} : C)) : K^{[q]}. $$
+
+We claim that $A^{[q]} : (C^{[q]} : C) = A^{[q]} + C$. We see this by looking at the comparison map of resolutions induced by the projection $P/A^{[q]} \to P/(A^{[q]} + C)$. If $\mathbb{F}$ is the resolution of $P/C$, and $\mathbb{K}$ is the Koszul complex on $x_1, \dots, x_d$, then the resolution of $P/A^{[q]}$ is given by $\mathbb{F}^{[q]} \otimes \mathbb{K}^{[q]}$, the resolution of $P/(A^{[q]} + C)$ is given by $\mathbb{F} \otimes \mathbb{K}^{[q]}$, and the comparison map between them is given by the comparison map $\mathbb{F}^{[q]} \to \mathbb{F}$, tensored with $\mathbb{K}^{[q]}$. Thus, the last map is multiplication by an element of $P$ which represents the generator of $(C^{[q]} : C)/C^{[q]}$.
+
+It follows that $(I^{[q]} + C^{[q]}) : (C^{[q]} : C) = (A^{[q]} + C) : K^{[q]}$. It is clear that
+
+$$(A^{[q]} + C) : K^{[q]} \supseteq I^{[q]} + C.$$
+
+We next show that the rings defined by these ideals have the same socle de-
+grees. Let $\delta$ and $\Delta$ be the socle degrees of the Gorenstein rings $P/A$ and
+$P/(A^{[q]} + C)$, respectively. The $P/C$-module $P/A$ has finite projective di-
+mension; so
+
+$$\Delta = q\delta - (q-1)a(P/C).$$
+
+Let $g_1, \dots, g_s$ be elements of $K$ which represent a minimal generating set for $K/A$. Observation 1.4 gives that the socle degrees of $P/(I+C)$ are $\{\delta - |g_i|\}$. So, our hypothesis tells us that the socle degrees of $P/(I^{[q]}+C)$ are
+
+$$\{q(\delta - |g_i|) - (q-1)a(P/C)\} = \{\Delta - q|g_i|\}.$$
+---PAGE_BREAK---
+
+It is clear that $g_1^q, \dots, g_s^q$ represents a generating set for $(K^{[q]} + C)/(A^{[q]} + C)$.
+Apply the hypothesis that $P/C$ is F-pure to each of the ideals $(g_1^q, \dots, \hat{g}_i^q, \dots, g_s^q, A^{[q]})$ of $P/C$ to conclude that $(K^{[q]} + C)/(A^{[q]} + C)$ is minimally generated by $g_1^q, \dots, g_s^q$. Observation 1.4 yields that the socle degrees of
+
+$$ \frac{P}{(A^{[q]} + C) : K^{[q]}} $$
+
+are exactly the same as the socle degrees of $P/(I^{[q]} + C)$; therefore, Lemma 1.1 of [9] shows that
+
+$$ I^{[q]} + C = (A^{[q]} + C) : K^{[q]} = (I^{[q]} + C^{[q]}) : (C^{[q]} : C). \quad \square $$
+
+## REFERENCES
+
+[1] L. L. Avramov and C. Miller, *Frobenius powers of complete intersections*, Math. Res. Lett. **8** (2001), 225–232. MR 1825272 (2002b:13022)
+
+[2] H. Bass, *On the ubiquity of Gorenstein rings*, Math. Z. **82** (1963), 8–28. MR 0153708 (27 #3669)
+
+[3] H. Brenner, *A linear bound for Frobenius powers and an inclusion bound for tight closure*, Michigan Math. J. **53** (2005), 585–596. MR 2207210 (2006k:13010)
+
+[4] W. Bruns and J. Herzog, *Cohen-Macaulay rings*, Cambridge Studies in Advanced Mathematics, vol. 39, Cambridge University Press, Cambridge, 1993. MR 1251956 (95h:13020)
+
+[5] W. Bruns and U. Vetter, *Determinantal rings*, Lecture Notes in Mathematics, vol. 1327, Springer-Verlag, Berlin, 1988. MR 953963 (89i:13001)
+
+[6] S. P. Dutta, *On modules of finite projective dimension over complete intersections*, Proc. Amer. Math. Soc. **131** (2003), 113–116 (electronic). MR 1929030 (2003j:13016)
+
+[7] S. Goto and K. Watanabe, *On graded rings*. I, J. Math. Soc. Japan **30** (1978), 179–213. MR 494707 (81m:13021)
+
+[8] A. R. Kustin and M. Miller, *Deformation and linkage of Gorenstein algebras*, Trans. Amer. Math. Soc. **284** (1984), 501–534. MR 743730 (85k:13015)
+
+[9] A. R. Kustin and B. Ulrich, *If the socle fits*, J. Algebra **147** (1992), 63–80. MR 1154674 (93e:13017)
+
+[10] C. Peskine and L. Szpiro, *Dimension projective finie et cohomologie locale. Applications à la démonstration de conjectures de M. Auslander, H. Bass et A. Grothendieck*, Inst. Hautes Études Sci. Publ. Math. **42** (1973), 47–119. MR 0374130 (51 #10330)
+
+[11] __________, *Liaison des variétés algébriques*. I, Invent. Math. **26** (1974), 271–302. MR 0364271 (51 #526)
+
+[12] J. Tate, *Homology of Noetherian rings and local rings*, Illinois J. Math. **1** (1957), 14–27. MR 0086072 (19,119b)
+
+[13] A. Vraciu, *Tight closure and linkage classes in Gorenstein rings*, Math. Z. **244** (2003), 873–885. MR 2000463 (2004k:13008)
+
+ANDREW R. KUSTIN, MATHEMATICS DEPARTMENT, UNIVERSITY OF SOUTH CAROLINA,
+COLUMBIA, SC 29208, USA
+
+*E-mail address: kustin@math.sc.edu*
+
+ADELA N. VRACIU, MATHEMATICS DEPARTMENT, UNIVERSITY OF SOUTH CAROLINA,
+COLUMBIA, SC 29208, USA
+
+*E-mail address: vraciu@math.sc.edu*
\ No newline at end of file
diff --git a/samples/texts_merged/7122526.md b/samples/texts_merged/7122526.md
new file mode 100644
index 0000000000000000000000000000000000000000..a5adf867fb58123a388132ba5889ffbc11cdb5cb
--- /dev/null
+++ b/samples/texts_merged/7122526.md
@@ -0,0 +1,169 @@
+
+---PAGE_BREAK---
+
+# Parameterization of Muon Production Profiles in the Atmosphere
+
+Stef Verpoest$^{a,*}$ and Thomas K. Gaisser$^b$
+
+$^a$Dept. of Physics and Astronomy, University of Gent, B-9000 Gent, Belgium
+
+$^b$Bartol Research Institute and Dept. of Physics and Astronomy, University of Delaware, Newark, DE 19716, USA
+
+E-mail: stef.verpoest@ugent.be, gaisser@udel.edu
+
+Production of high-energy muons in cosmic-ray air showers, relevant for underground detectors, depends on the properties of the primary cosmic ray as well as the atmospheric temperature through the competition between decay and re-interaction of charged pions and kaons. We present a parameterization of muon production profiles based on simulations as a function of the primary cosmic-ray energy, mass and zenith angle, the minimum energy for a muon to reach the detector and an atmospheric temperature profile. We illustrate how this can be used to calculate muon bundle properties such as multiplicity and transverse size and their seasonal variations in the context of underground measurements in coincidence with a surface detector which fixes the primary cosmic-ray energy.
+
+37th International Cosmic Ray Conference (ICRC 2021)
+July 12th – 23rd, 2021
+Online – Berlin, Germany
+
+*Presenter
+---PAGE_BREAK---
+
+# 1. Introduction
+
+The yield of high-energy muons in air showers induced by cosmic rays interacting near the top of the atmosphere is relevant for understanding event rates and properties of muon bundles in underground detectors. The following formula, originally proposed by Elbert [1], has been used to estimate the average multiplicity $\langle N_{\mu} \rangle$ of muons above a certain energy $E_{\mu}$:
+
+$$ \langle N_{\mu}(> E_{\mu}, E_0, A, \theta) \rangle \approx A \times \frac{K}{E_{\mu} \cos \theta} \left( \frac{E_0}{A E_{\mu}} \right)^{\alpha_1} \left( 1 - \frac{A E_{\mu}}{E_0} \right)^{\alpha_2}, \quad (1) $$
+
+where $E_0$, $A$, and $\theta$ are respectively the energy, mass number, and zenith of the primary cosmic ray nucleus. The normalization constant $K$ and exponents $\alpha_1$ and $\alpha_2$ are to be derived from simulations. The scaling with $AE_{\mu}/E_0$ follows from the superposition approximation, which assumes that an incident nucleus of mass $A$ and energy $E_0$ can be treated as $A$ independent nucleons of energy $E_0/A$.
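Eq. (1) is straightforward to evaluate once $K$, $\alpha_1$, and $\alpha_2$ are fixed; a minimal sketch follows, where the function name and the numerical constants used in any example call are illustrative placeholders, not fit results from this work.

```python
import math

def elbert_multiplicity(e0, a, theta, e_mu, k, alpha1, alpha2):
    """Average muon multiplicity above e_mu from the Elbert formula, Eq. (1).

    e0: primary energy, a: mass number, theta: zenith angle [rad],
    e_mu: muon energy threshold; k, alpha1, alpha2 must be derived
    from simulations.
    """
    x = a * e_mu / e0                # superposition variable A*E_mu / E_0
    if x >= 1.0:                     # energy per nucleon below threshold
        return 0.0
    return a * k / (e_mu * math.cos(theta)) * x**(-alpha1) * (1.0 - x)**alpha2
```

The multiplicity grows with primary energy and is suppressed as $E_0 \to A E_{\mu}$ through the last factor.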
+
+The Elbert formula does not describe the fact that muon production depends on the density (or temperature) of the atmosphere through the competition between re-interaction and decay of the parent mesons. In summer, when the atmosphere is warmer and less dense, more mesons will decay to muons rather than interact, and the number of high-energy muons in the shower will be larger. In this work, we present a generalization of the Elbert formula describing the production of muons above some energy threshold as a function of slant depth in the atmosphere, based on a parameterization of simulations, and including factors taking into account the atmospheric temperature (Section 2). This parameterization allows one to estimate not only the multiplicity of muon bundles in air showers but also its transverse size and the seasonal variations of both these properties, which we illustrate for the case of IceCube [2] in Section 3.
+
+Other applications include the calculation of event rates of single- and multiple-muon events in underground detectors, where one integrates over the spectrum of primary nucleons [3], but are not discussed here.
+
+# 2. Muon production profiles
+
+The production of muons above a certain energy threshold, differential in slant depth throughout the atmosphere along the shower axis, is referred to as the longitudinal production profile. The idea is to perform a large number of air-shower simulations and to obtain the average muon production profile for primary cosmic rays with energy $E_0$, mass number $A$, zenith angle $\theta$, and for muons with energy above $E_{\mu}$. We have used CORSIKA v7.7100 [4] with Sibyll 2.3c [5] as the high-energy interaction model, and an atmospheric profile describing the average South Pole atmosphere in April between 2007 and 2011 [6]. To the average profiles obtained from simulation, we fit a function of
+---PAGE_BREAK---
+
+the following form, which we explain below,
+
+$$
+\begin{equation}
+\begin{aligned}
+\left\langle \frac{dN}{dX} (> E_{\mu}, X, T, E_0, A, \theta) \right\rangle &= N_{max} \times \exp\left(\frac{(X_{max} - X)}{\lambda}\right) \times \left(\frac{X_0 - X}{X_0 - X_{max}}\right)^{\frac{(X_{max} - X_0)}{\lambda}} \times \frac{X_{max} - X}{\lambda(X - X_0)} \\
+&\quad \times \left[ 0.92 \times \frac{r_{\pi}\lambda_{\pi}\epsilon_{\pi}}{fE_{\mu}\cos(\theta)X} \times \frac{1}{1 + \frac{r_{\pi}\lambda_{\pi}\epsilon_{\pi}}{fE_{\mu}\cos(\theta)X}} + 0.08 \times \frac{r_K\lambda_K\epsilon_K}{fE_{\mu}\cos(\theta)X} \times \frac{1}{1 + \frac{r_K\lambda_K\epsilon_K}{fE_{\mu}\cos(\theta)X}} \right] \\
+&\quad \times \left(1 - \frac{AE_{\mu}}{E_0}\right)^{5.99},
+\end{aligned}
+\tag{2}
+\end{equation}
+$$
+
+where $T$ is the temperature at slant depth $X$.
+
+The first line on the right-hand side is the derivative of the Gaisser-Hillas (G-H) function [7], which we interpret as the rate of production of charged mesons per slant depth interval $dX$ (in g/cm²). The number of particles at shower maximum ($N_{max}$), the depth of shower maximum ($X_{max}$), the depth of first interaction ($X_0$), and the interaction length ($\lambda$) are the free parameters of the fit; as they are applied here to the charged mesons in the hadronic cascade, their numerical values differ considerably from those of the original G-H function.
+
+In the second line of Eq. (2), we multiply by the probability for mesons to decay to a muon relative to the total rate of decay and re-interaction. We consider two channels for muon production, namely decay of charged pions and kaons $\pi^+/K^+ \rightarrow \mu^+ + \nu_\mu$, with branching ratios of 100% and 63.5% respectively. The decay fraction for charged pions with interaction length $\lambda_\pi$ and decay length $d_\pi$ is
+
+$$
+\frac{1/d_{\pi}}{1/d_{\pi} + 1/\lambda_{\pi}}. \qquad (3)
+$$
+
+The decay length is given by [8]
+
+$$
+\frac{1}{d_{\pi}} = \frac{\epsilon_{\pi}}{E_{\pi} \cos \theta X}, \quad (4)
+$$
+
+where $E_\pi$ is the energy of the pion and $\epsilon_\pi$ the pion critical energy given by
+
+$$
+\epsilon_{\pi} = \frac{m_{\pi}c^2 RT}{c \tau_{\pi} Mg} \approx 115 \text{ GeV} \times \frac{T}{220 \text{ K}}, \quad (5)
+$$
+
+with $c$ the speed of light in vacuum, $m_\pi$ and $\tau_\pi$ the pion mass and lifetime, $R$ the molar gas constant, $M$ the molar mass of the atmosphere, and $g$ the gravitational acceleration. On average, the muon that results from pion decay has an energy $E_\mu = r_\pi \times E_\pi$ with $r_\pi \approx 0.79$. For kaons, the critical energy is larger by a factor of 7.45 because of their larger mass and shorter lifetime, and the muon energy in this case is defined by $r_K \approx 0.52$. The factors of 0.92 and 0.08 preceding the pion and kaon terms are the relative fractions of momentum carried by charged pions and kaons after taking into account the branching ratios. The momentum fraction carried by charged pions and kaons in p-air interactions is given by Fig. 5.2 of Ref. [8] as 0.29 and 0.040 respectively. Combined with the branching ratios, this gives $0.29/(1 \times 0.29 + 0.635 \times 0.04) = 0.92$ for charged pions and 0.08
+---PAGE_BREAK---
+
+**Figure 1:** Ratio between the mean energy of muons above the threshold and the muon threshold energy $E_μ$. Markers are derived from vertical proton and iron shower simulations. Our approximation of $f$ used in Eq. (2) is given by the black line.
+
+**Figure 2:** Normalized muon production profiles for vertical showers with $E_μ > 300$ GeV. The markers show the average profiles obtained from simulations, the lines are the fits of Eq. (2) to the simulation results.
+
+for charged kaons. To take into account the fact that the mean energy of muons is larger than the threshold muon energy itself, we replace $E_μ$ by $fE_μ$, where the factor $f$ gives the ratio between the mean energy of muons above the threshold energy and the threshold energy $E_μ$. Its behaviour can be derived from simulations and is shown in Fig. 1 for the muon energy range we consider. It has a piecewise behaviour parametrized by the black line, with the parameters included in Table 1.
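The temperature-dependent decay factor in the second line of Eq. (2) can be sketched in a few lines; the interaction length, $f$ value, and function names in the example below are illustrative inputs of our own choosing, not values fitted in this work.

```python
import math

def critical_energy(temp_k, eps_220):
    """Meson critical energy of Eq. (5): linear in temperature T [K],
    normalised to eps_220 at T = 220 K (about 115 GeV for pions)."""
    return eps_220 * temp_k / 220.0

def decay_weight(e_mu, theta, x_slant, temp_k, r, lam, eps_220, f):
    """Relative decay probability q / (1 + q) appearing in Eq. (2),
    with q = r * lam * eps / (f * E_mu * cos(theta) * X)."""
    eps = critical_energy(temp_k, eps_220)
    q = r * lam * eps / (f * e_mu * math.cos(theta) * x_slant)
    return q / (1.0 + q)
```

Because the critical energy grows with temperature, the weight increases in a warmer atmosphere, reproducing the qualitative seasonal effect discussed in Section 1.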
+
+The third line of Eq. (2) is the threshold factor from the Elbert formula Eq. (1), with an exponent fitted to our simulations. It describes the suppression in muon multiplicity when the energy per nucleon is close to the minimum muon energy.
+
+Examples of the formula of Eq. (2) fit to production profiles derived from CORSIKA simulations are shown in Fig. 2 for $E_\mu > 300$ GeV. We have repeated this procedure for muon threshold energies of 300, 400, 500, 700, and 1000 GeV and a large range of primary energies. The optimized values of $N_{max}$, $X_{max}$, $\lambda$, and $X_0$ for vertical proton showers are shown in Fig. 3. We observe that their behaviour depends to leading order on $E_0/(AE_\mu)$, and parametrize it with the following
+---PAGE_BREAK---
+
+**Figure 3:** Optimal values of the fit parameters $N_{max}$, $X_{max}$, $\lambda$, and $X_0$ of Eq. 2, as obtained from fits to vertical proton showers for various minimum muon energies $E_\mu$ over a large range of primary energies. The black lines are fits to these results with the functions of Eq. (6), resulting in the parameters given in Table 1.
+
+functions,
+
+$$
+\begin{align}
+N_{\text{max}} &= c_i \times A \times \left( \frac{E_0}{A E_{\mu}} \right)^{p_i} \tag{6} \\
+X_{\text{max}, \lambda, X_0} &= a_i + b_i \times \log_{10} \left( \frac{E_0}{A E_{\mu}} \right),
+\end{align}
+$$
+
+where $c_i$, $p_i$, $a_i$, and $b_i$ are defined for each function separately and have two regimes with a break at $R_b = \frac{E_0}{A E_\mu} = 10^q$ and parameters ($a_i$, $b_i$) with $i=1$ below the break and $i=2$ above. The resulting parameters are listed in Table 1. A simple Python implementation of this parameterization is made available on Github¹. Note that the scaling with $E_0/AE_\mu$ is not perfect; a remaining dependence on $E_\mu$ can be observed in Fig. 3. It is therefore recommended to optimize the simulations and fits to the energy regime relevant for the application or detector that is studied.
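The two-regime structure of Eq. (6) can be coded directly; the sketch below is our own illustration (function names are not from the released implementation), and the example values in the test are taken from Table 1.

```python
import math

def gh_parameter(e0, a, e_mu, a1, b1, a2, b2, q):
    """X_max, lambda or X_0 of Eq. (6): linear in log10(E0 / (A * E_mu)),
    with the (a2, b2) pair taking over above the break at 10**q."""
    lg = math.log10(e0 / (a * e_mu))
    a_i, b_i = (a1, b1) if lg < q else (a2, b2)
    return a_i + b_i * lg

def n_max(e0, a, e_mu, c1, p1, c2, p2, q):
    """Normalisation N_max of Eq. (6): a power law in E0 / (A * E_mu)
    with the same two-regime structure."""
    ratio = e0 / (a * e_mu)
    c_i, p_i = (c1, p1) if math.log10(ratio) < q else (c2, p2)
    return c_i * a * ratio**p_i
```

For example, with the $X_{max}$ parameters of Table 1 and $E_0/(AE_\mu) = 100$ (below the break at $10^{3.117}$), the first regime applies.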
+
+### 3. Seasonal variations of muon bundle properties
+
+As an example of the application of the parameterization of Section 2, we will examine the case where muon bundles are observed in an underground detector and the primary cosmic-ray energy is determined independently by a surface detector. We will use values relevant for air showers detected in coincidence by the surface array IceTop [9], located at the South Pole, which detects air showers with primary energies between 1 PeV and 1 EeV at an atmospheric depth of roughly 700 g/cm², and IceCube [2], which sits vertically below IceTop, buried under 1.5 km of ice, and allows for the detection of muons above approximately 400 GeV. The calculations are performed using atmospheric data for the South Pole obtained from the AIRS satellite [10], which provides the temperature at different, unevenly spaced, atmospheric pressure levels between 1 hPa and 700 hPa. These pressures are converted to atmospheric depth and interpolated to a regular grid.
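The pressure-to-depth conversion follows from the flat-atmosphere relation $P = gX_v$ used in Section 3; a minimal sketch, with constant names of our own choosing:

```python
G_STD = 9.80665  # standard gravitational acceleration [m/s^2]

def pressure_to_depth(p_hpa):
    """Vertical atmospheric depth [g/cm^2] at pressure p_hpa [hPa],
    using X_v = P / g (1 hPa = 100 Pa; 1 kg/m^2 = 0.1 g/cm^2)."""
    return p_hpa * 100.0 / G_STD * 0.1
```

The 700 hPa AIRS level then corresponds to roughly 714 g/cm², close to the atmospheric depth quoted above for IceTop.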
+
+¹https://github.com/verpoest/muon-profile-parameterization
+---PAGE_BREAK---
+
+**Table 1:** Parameter values for Eq. (6) for $300 \text{ GeV} \le E_\mu \le 1 \text{ TeV}$.
+
+| | $i$ | $c_i$ | $p_i$ | $q$ |
+|---|---|---|---|---|
+| $N_{max}$ | 1 | 0.124 | 1.012 | 2.677 |
+| | 2 | 0.244 | 0.902 | |
+
+| | $i$ | $a_i$ (g/cm²) | $b_i$ (g/cm²) | $q$ |
+|---|---|---|---|---|
+| $X_{max}$ | 1 | 366.2 | 139.5 | 3.117 |
+| | 2 | 642.2 | 51.0 | |
+| $\lambda$ | 1 | 266.0 | 42.1 | 2.074 |
+| | 2 | 398.8 | -21.9 | |
+| $X_0$ | 1 | -2.9 | -2.6 | 4.025 |
+| | 2 | -15.8 | 0.6 | |
+| $f$ | 1 | 1 | 0.53 | 2.72 |
+| | 2 | 2.45 | - | |
+
+**Figure 4:** Left: Integral production profiles for 10 PeV proton and iron showers for three days in 2017 at the South Pole where the expected muon multiplicity is approximately minimal, maximal and average. Right: Variation of the expected multiplicity throughout the year for vertical 10 PeV showers from five primary mass groups.
+
+Using the temperature profiles together with the parameterization, we obtain muon production profiles which can be integrated to find the expected muon multiplicity. Fig. 4 shows the expected multiplicity of muons above 400 GeV in 10 PeV vertical showers throughout the year 2017, as well as the integral profiles for three days corresponding roughly to the days with minimal, average, and maximal multiplicity. It can be seen that the multiplicity is maximal in the austral summer, when temperatures are highest. The calculation predicts a seasonal variation of about 6% around the mean. This may be an important uncertainty to consider in cosmic-ray composition analyses based on muon bundle measurements [11].
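Integrating a production profile to obtain the expected multiplicity requires nothing more than a quadrature over slant depth; a minimal trapezoidal sketch (function name and sample values are our own placeholders):

```python
def expected_multiplicity(depths, profile):
    """Trapezoidal integral of a sampled production profile dN/dX
    over slant depth X [g/cm^2], giving the mean multiplicity <N_mu>."""
    total = 0.0
    for i in range(1, len(depths)):
        total += 0.5 * (profile[i - 1] + profile[i]) * (depths[i] - depths[i - 1])
    return total
```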
+
+Because the parameterization describes the muon production as a function of slant depth in the atmosphere, it is also possible to extract information about the altitude of production and to estimate the transverse size of a muon bundle. A muon with energy $E_\mu$ produced at an altitude $h$ with a transverse momentum $p_T$ will have a transverse distance from the shower axis given by
+---PAGE_BREAK---
+
+**Figure 5:** Left: Differential muon production versus altitude for three different days at the South Pole, measured relative to the surface above IceCube, in vertical 10 PeV showers. Right: Seasonal variation of the estimated transverse size of the muon bundle (altitude effect only) for five mass groups.
+
+$$r_T = \frac{p_T}{E_\mu} \times \frac{h}{\cos \theta}, \qquad (7)$$
+
+where $\theta$ is the zenith angle of the primary. At a vertical depth $X_v$, the atmospheric pressure is $P = gX_v$ and the density is given by $\rho = -dX_v/dh$. Assuming the ideal gas law, one can calculate the altitude corresponding to vertical depth $X_v$ as
+
+$$h(X_v) = \frac{RT}{Mg} \ln \frac{X_0}{X_v}, \qquad (8)$$
+
+where $X_0$ is the vertical depth at $h=0$. Using this, we will perform a simple estimate of the expected bundle size, assuming a mean value of transverse momentum for the muons of $\langle p_T \rangle \approx 350$ MeV [12]. As the zero point $h=0$ of the altitude we use the surface above the IceCube detector, located at an elevation of 2835 m with an atmospheric depth $X_0 \approx 700$ g/cm². The left panel of Fig. 5 shows the differential muon production as a function of altitude for vertical proton and iron showers on three different days corresponding again roughly to the yearly average and two extremal days. It is clear that muons are produced higher in the atmosphere for heavier nuclei. For a given primary mass, production happens at higher altitude in the summer compared to colder days because of the thermal expansion of the atmosphere. An estimate of the expected bundle size $\langle r_T \rangle$ is obtained by taking the weighted average of the transverse distance for a muon with $\langle p_T \rangle$ at a depth $X$ using Eq. (7), multiplying it with the production profile and integrating over depth. The result is shown in the right panel of Fig. 5, where we see that the muon bundle has the largest spread in the warmest months, corresponding to the higher production altitude. The magnitude of the seasonal variations is roughly 10% around the average value.
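Eqs. (7) and (8) translate directly into code; the sketch below is our own, and the isothermal temperature and $\langle p_T \rangle$ used in any example call are illustrative.

```python
import math

R_GAS = 8.314      # molar gas constant [J/(mol K)]
M_AIR = 0.02896    # molar mass of air [kg/mol]
G_STD = 9.80665    # gravitational acceleration [m/s^2]

def altitude(x_v, x_surface, temp_k):
    """Isothermal altitude [m] above the level of depth x_surface
    for a vertical depth x_v [g/cm^2], following Eq. (8)."""
    return R_GAS * temp_k / (M_AIR * G_STD) * math.log(x_surface / x_v)

def transverse_distance(p_t, e_mu, h, theta):
    """Lateral displacement of a muon at the surface, Eq. (7);
    p_t and e_mu in the same units, h in metres, theta in radians."""
    return p_t / e_mu * h / math.cos(theta)
```

A vertical 400 GeV muon with $\langle p_T \rangle = 350$ MeV produced 10 km above the surface lands about 8.75 m from the shower axis, of the order of the IceCube string spacing.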
+
+Note that we report the multiplicity and transverse size of the muon bundle at the surface above the IceCube detector. The estimate of the transverse size is also limited to the geometrical effect. A full estimate of the muon bundle properties in the detector needs to take into account propagation through the overburden, where multiple scattering of the muons will further increase the spread of the muons [13]. Also separation of muons by bending in the geomagnetic field before they reach
+---PAGE_BREAK---
+
+the surface can be important, especially for inclined showers [14].
+
+## 4. Summary
+
+We have presented a parameterization of muon production profiles in cosmic-ray air showers based on fits to air-shower simulations. The production profile for a certain primary cosmic ray and a muon energy threshold can be obtained for realistic atmospheres to estimate the muon multiplicity and the transverse size of the muon bundle caused by the geometrical separation related to the muon production altitude. Because the temperature dependence of the decay probability of parent mesons is included in Eq. (2), the seasonal variations of these quantities can be determined. An estimate performed at fixed primary energy relevant for the case of IceCube shows that the multiplicity and the transverse size are maximal when the atmosphere is at its warmest, consistent with the increased decay rate and higher muon production altitude resulting from the thermal expansion of the atmosphere, and vice-versa when the atmosphere is colder. Because the parameterization does not scale perfectly with the ratio of the muon energy and primary nucleon energy, it should be optimized for detectors with different conditions, e.g. the overburden.
+
+Further applications of the parameterization exist but are not included here. One example is the calculation of rates of events of single and multiple muons in underground detectors, as discussed in Ref. [3].
+
+## References
+
+[1] J. W. Elbert, "Multiple muons produced by cosmic ray interactions..." in *Proceedings of the DUMAND Summer Workshop*, pp. 101-121. Scripps Institution of Oceanography, La Jolla CA, 1979.
+
+[2] **IceCube** Collaboration, M. G. Aartsen et al. JINST 12 no. 03, (2017) P03012.
+
+[3] T. K. Gaisser and S. Verpoest. arXiv:2106.12247.
+
+[4] D. Heck et al., CORSIKA: A Monte Carlo code to simulate extensive air showers, *Report FZKA 6019*, Forschungszentrum Karlsruhe, 1998.
+
+[5] F. Riehn, R. Engel, A. Fedynitch, T. K. Gaisser, and T. Stanev Phys. Rev. D 102 no. 6, (2020) 063002.
+
+[6] S. De Ridder, *Sensitivity of IceCube cosmic ray measurements to the hadronic interaction models*. PhD thesis, Ghent University, 2019.
+
+[7] T. Gaisser and A. Hillas *Proceedings, 15th International Cosmic Ray Conference (ICRC1977): Plovdiv, Bulgaria 8* (1977) 353-357.
+
+[8] T. K. Gaisser, R. Engel, and E. Resconi, *Cosmic Rays and Particle Physics*. Cambridge University Press, 2016.
+
+[9] **IceCube** Collaboration, R. Abbasi et al. Nucl. Instrum. Meth. A 700 (2013) 188-220.
+
+[10] NASA-AIRS. https://airs.jpl.nasa.gov/data/get-data.
+
+[11] **IceCube** Collaboration, M. G. Aartsen et al. Phys. Rev. D 100 no. 8, (2019) 082002.
+
+[12] B. Alper et al. Phys. Lett. B 47 (1973) 275-280.
+
+[13] P. Lipari and T. Stanev Phys. Rev. D 44 (1991) 3543-3554.
+
+[14] **Pierre Auger** Collaboration, P. Abreu et al. JCAP 11 (2011) 022.
\ No newline at end of file
diff --git a/samples/texts_merged/7127238.md b/samples/texts_merged/7127238.md
new file mode 100644
index 0000000000000000000000000000000000000000..c1c38a922baaa9d1b99d2bc0acd735fc2b8053d0
--- /dev/null
+++ b/samples/texts_merged/7127238.md
@@ -0,0 +1,1324 @@
+
+---PAGE_BREAK---
+
+Large Minors in Expanders
+
+Julia Chuzhoy*
+
+Rachit Nimavat†
+
+January 30, 2019
+
+Abstract
+
+In this paper we study expander graphs and their minors. Specifically, we attempt to answer the following question: what is the largest function $f(n, \alpha, d)$, such that every $n$-vertex $\alpha$-expander with maximum vertex degree at most $d$ contains **every** graph $H$ with at most $f(n, \alpha, d)$ edges and vertices as a minor? Our main result is that there is some universal constant $c$, such that $f(n, \alpha, d) \ge \frac{n}{c \log n} \cdot (\frac{\alpha}{d})^c$. This bound achieves a tight dependence on $n$: it is well known that there are bounded-degree $n$-vertex expanders, that do not contain any grid with $\Omega(n/\log n)$ vertices and edges as a minor. The best previous result showed that $f(n, \alpha, d) \ge \Omega(n/\log^\kappa n)$, where $\kappa$ depends on both $\alpha$ and $d$. Additionally, we provide a randomized algorithm, that, given an $n$-vertex $\alpha$-expander with maximum vertex degree at most $d$, and another graph $H$ containing at most $\frac{n}{c \log n} \cdot (\frac{\alpha}{d})^c$ vertices and edges, with high probability finds a model of $H$ in $G$, in time poly($n \cdot (\frac{d}{\alpha})^{O(\log(d/\alpha))}$). We also show a simple randomized algorithm with running time poly($n, d/\alpha$), that obtains a similar result with slightly weaker dependence on $n$ but a better dependence on $d$ and $\alpha$, namely: if $G$ is an $n$-vertex $\alpha$-expander with maximum vertex degree at most $d$, and $H$ contains at most $\frac{\alpha^3 n}{c' d^5 \log^2 n}$ edges and vertices, where $c'$ is an absolute constant, then our algorithm with high probability finds a model of $H$ in $G$.
+
+We note that similar but stronger results were independently obtained by Krivelevich and Nenadov: they show that $f(n, \alpha, d) = \Omega(\frac{n\alpha^2}{d^2\log n})$, and provide an efficient algorithm, that, given an $n$-vertex $\alpha$-expander of maximum vertex degree at most $d$, and a graph $H$ with $O(\frac{n\alpha^2}{d^2\log n})$ vertices and edges, finds a model of $H$ in $G$.
+
+Finally, we observe that expanders are the ‘most minor-rich’ family of graphs in the following
+sense: for every $n$-vertex and $m$-edge graph $G$, there exists a graph $H$ with $O(\frac{n+m}{\log n})$ vertices and
+edges, such that $H$ is not a minor of $G$.
+---PAGE_BREAK---
+
+# 1 Introduction
+
+In this paper we study large minors in expander graphs. A graph $G$ is an $\alpha$-expander, if, for every partition $(A, B)$ of its vertices into non-empty subsets, the number of edges connecting vertices of $A$ to vertices of $B$ is at least $\alpha \cdot \min\{|A|, |B|\}$. We say that $G$ is an expander, if it is an $\alpha$-expander for some constant $0 < \alpha < 1$, that is independent of the graph size. A graph $H$ is a minor of a given graph $G$, if one can obtain a graph isomorphic to $H$ from $G$, via a sequence of edge- and vertex-deletions and edge-contractions.
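A model of $H$ in $G$, the object the algorithms below output, can be verified mechanically: it assigns to every vertex of $H$ a branch set of vertices of $G$ such that the branch sets are pairwise disjoint, each induces a connected subgraph of $G$, and every edge of $H$ is supported by an edge of $G$ between the corresponding branch sets. The following self-contained checker, written by us for illustration with graphs as adjacency dicts, tests exactly these three conditions.

```python
from collections import deque

def is_minor_model(g_adj, h_edges, model):
    """Check that `model` (vertex of H -> set of vertices of G) is a
    valid model of H in G: non-empty, pairwise-disjoint branch sets,
    each connected in G, with a G-edge for every edge of H."""
    seen = set()
    for s in model.values():
        if not s or (seen & s):
            return False              # empty or overlapping branch set
        seen |= s
        start = next(iter(s))         # BFS within the branch set
        reached, queue = {start}, deque([start])
        while queue:
            u = queue.popleft()
            for w in g_adj[u]:
                if w in s and w not in reached:
                    reached.add(w)
                    queue.append(w)
        if reached != s:
            return False              # branch set not connected in G
    for x, y in h_edges:
        if not any(w in g_adj[u] for u in model[x] for w in model[y]):
            return False              # H-edge has no supporting G-edge
    return True
```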
+
+Bounded-degree expanders are graphs that are simultaneously extremely well connected, while being sparse. Expanders are ubiquitous in discrete mathematics, theoretical computer science and beyond, arising in a wide variety of fields ranging from computational complexity to designing robust computer networks (see [HLW06] for a survey on expanders and their applications). In this paper we study an extremal problem about expanders: what is the largest function $f(n, \alpha, d)$, such that every $n$-vertex $\alpha$-expander with maximum vertex degree at most $d$ contains every graph with at most $f(n, \alpha, d)$ vertices and edges as a minor?
+
+Our main result is that there is a constant $c$, such that $f(n, \alpha, d) \ge \frac{n}{c \log n} \cdot (\frac{\alpha}{d})^c$. As we discuss below, this result achieves an optimal dependence on $n$. We also provide a randomized algorithm that, given an $n$-vertex $\alpha$-expander with maximum vertex degree at most $d$, and another graph $H$ containing at most $\frac{n}{c \log n} \cdot (\frac{\alpha}{d})^c$ edges and vertices, with high probability finds a model of $H$ in $G$, in time poly($n$) $\cdot (d/\alpha)^{O(\log(d/\alpha))}$. Additionally, we show a simple randomized algorithm with running time poly($n$, $d/\alpha$), that achieves a bound that has a slightly worse dependence on $n$ but a better dependence on $d$ and $\alpha$: if $G$ is an $n$-vertex $\alpha$-expander with maximum vertex degree at most $d$, and $H$ is any graph with at most $\frac{\alpha^3 n}{c'd^5 \log^2 n}$ edges and vertices, for some universal constant $c'$, the algorithm finds a model of $H$ in $G$ with high probability.
+
+Independently from our work, Krivelevich and Nenadov (see Theorem 8.1 in [Kri18a]) provide an elegant proof of a similar but stronger result: namely, they show that $f(n, \alpha, d) = \Omega(\frac{n\alpha^2}{d^2\log n})$, and provide an efficient algorithm, that, given an $n$-vertex $\alpha$-expander of maximum vertex degree at most $d$, and a graph $H$ with $O(\frac{n\alpha^2}{d^2\log n})$ vertices and edges, finds a model of $H$ in $G$.
+
+One of our main motivations for studying this question is the Excluded Grid Theorem of Robertson and Seymour. This is a fundamental result in graph theory, that was proved by Robertson and Seymour [RS86] as part of their Graph Minors series. The theorem states that there is a function $t: \mathbb{Z}^+ \to \mathbb{Z}^+$, such that for every integer $g > 0$, every graph of treewidth at least $t(g)$ contains the $(g \times g)$-grid as a minor. The theorem has found many applications in graph theory and algorithms, including routing problems [RS95], fixed-parameter tractability [DH08, DH07], and Erdős-Pósa-type results [RS86, Car88, Ree97, FST11]. For an integer $g > 0$, let $t(g)$ be the smallest value, such that every graph of treewidth at least $t(g)$ contains the $(g \times g)$-grid as a minor. An important open question is establishing tight bounds on the function $t$. Besides being a fundamental graph-theoretic question in its own right, improved upper bounds on $t$ directly affect the running times of numerous algorithms that rely on the theorem, as well as parameters in various graph-theoretic results, such as, for example, Erdős-Pósa-type results.
+
+In a series of works [RS86, RST94, KK12, LS15, CC16, Chu15, Chu16a, CT], it was shown that $t(g) = \tilde{O}(g^9)$ holds. The best currently known negative result, due to Robertson et al. [RST94], is that $t(g) = \Omega(g^2 \log g)$. This is shown by employing a family of bounded-degree expander graphs of large girth. Specifically, consider an $n$-vertex expander $G$ whose maximum vertex degree is bounded by a constant independent of $n$, and whose girth is $\Omega(\log n)$. It is not hard to show that the treewidth of $G$ is $\Omega(n)$. Assume now that $G$ contains the $(g \times g)$-grid as a minor, for some value $g$. Such a grid contains $\Omega(g^2)$
+---PAGE_BREAK---
+
+disjoint cycles, each of which must consume $\Omega(\log n)$ vertices of $G$, and so $g \le O(\sqrt{n/\log n})$. This
+simple argument is the best negative result that is currently known for the Excluded Grid Theorem.
+In fact, Robertson and Seymour conjecture that this bound is tight, that is, $t(g) = \Theta(g^2 \log g)$ must
+hold. A natural question therefore is whether this analysis is tight, and in particular, whether every
+$n$-vertex bounded-degree expander must contain a $(g \times g)$-grid as a minor, for $g = O(\sqrt{n/\log n})$. In
+this paper we answer this question in the affirmative, and moreover, we show that every graph with
+at most $O(n/\log n)$ vertices and edges is a minor of such an expander.
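Spelled out, the counting step reads: the cycles of the grid minor occupy disjoint vertex sets of $G$, and each cycle of $G$ has length $\Omega(\log n)$ by the girth bound, so

$$
\Omega(g^2) \cdot \Omega(\log n) \;\le\; n \quad \Longrightarrow \quad g \;\le\; O\!\left(\sqrt{n/\log n}\right).
$$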
+
+The problem of finding large minors in bounded-degree expanders was first considered by Kleinberg and Rubinfeld [KR96]. Building on the random-walk-based techniques of Broder et al. [BFU94], they showed that every expander $G$ on $n$ vertices contains every graph with $O(n/\log^\kappa n)$ vertices and edges as a minor. The exponent $\kappa$ depends on the expansion $\alpha$ and the maximum degree $d$ of the expander; we estimate it to be at least $\Theta(\log^2 d / \log^2(1/\alpha))$. They also give an efficient algorithm for finding a model of such a graph in $G$.
+
+Another related direction of research is the existence of large clique minors in graphs. The study of the
+size of the largest clique minor in a graph is motivated by Hadwiger's conjecture, which states that, if
+the chromatic number of a graph is at least $k$, then it contains a clique with $k$ vertices as a minor. One
+well-known result in this area, due to Kawarabayashi and Reed [KR10], shows that every $\alpha$-expander $G$
+with $n$ vertices and maximum vertex-degree bounded by $d$ contains a clique with $\Omega(\alpha\sqrt{n}/d)$ vertices
+as a minor. Recently, Krivelevich and Nenadov [KN18] improved the dependence on the expansion $\alpha$
+and the maximum vertex degree $d$ under a somewhat stronger definition of expansion. We note that
+both these bounds have tight dependence on $n$, since $G$ contains only $O(n)$ edges. Our results imply
+a weaker bound of $\Omega\left((\frac{\alpha}{d})^{c'} \sqrt{n/\log n}\right)$ on the size of the clique minor, for some absolute constant $c'$.
+
+The existence of large clique minors was also studied in the context of random graphs. Recall that $G \sim \mathcal{G}(n,p)$ is a random graph on $n$ vertices, whose edges are added independently with probability $p$ each. Bollobás, Catlin and Erdős [BCE80] showed that Hadwiger’s conjecture is true for almost all graphs $\mathcal{G}(n,p)$ for every constant $p > 0$. Fountoulakis et al. [FKO09] later showed that for every $\epsilon > 0$, there is a constant $\hat{c}_{\epsilon}$ such that the following is true: if $q(n,\epsilon)$ is the probability that the graph $G \sim \mathcal{G}(n, \frac{1+\epsilon}{n})$ does not contain a clique minor on $\lceil \hat{c}_{\epsilon} \sqrt{n} \rceil$ vertices, then $\lim_{n \to \infty} q(n,\epsilon) = 0$. Using a theorem from [Kri18b], our results imply a slightly weaker bound of $\Omega(\sqrt{n/\log n})$ on the clique minor size.
+
+**Our Results and Techniques.** All graphs that we consider are finite; they do not have loops or parallel edges. Given a graph $H$, we define its *size* to be $|V(H)| + |E(H)|$. Our main result is summarized in the following theorem:
+
+**Theorem 1.1** There is a constant $c^*$, such that for all $0 < \alpha < 1$ and $d \ge 1$, if $G$ is an $n$-vertex $\alpha$-expander with maximum vertex degree at most $d$, and $H$ is any graph of size at most $\frac{n}{c^* \log n} \cdot (\frac{\alpha}{d})^{c^*}$, then $H$ is a minor of $G$. Moreover, there is a randomized algorithm, whose running time is poly$(n) \cdot (d/\alpha)^{O(\log(d/\alpha))}$, that, given $G$ and $H$ as above, with high probability, finds a model of $H$ in $G$.
+
+As discussed above, the theorem implies that we cannot get stronger negative results for the Excluded Grid Theorem using bounded-degree $\alpha$-expanders, where $\alpha$ is independent of the graph size. But this leaves open the possibility of obtaining stronger negative results when $\alpha$ is a function of $n$, such as, for example, $\alpha = 1/\text{poly} \log n$, or $\alpha = 1/n^\epsilon$ for some small constant $\epsilon$. Our next result provides a simpler algorithm, with better running time and a better dependence on $d$ and $\alpha$, at the cost of slightly weaker dependence on $n$ in the minor size.
+---PAGE_BREAK---
+
+**Theorem 1.2** There is a constant $\tilde{c}^*$ and a randomized algorithm, that, given an $n$-vertex $\alpha$-expander with maximum vertex degree at most $d$, where $0 < \alpha < 1$, and another graph $H$ of size at most $\frac{\alpha^3 n}{\tilde{c}^* d^5 \log^2 n}$, with high probability computes a model of $H$ in $G$, in time poly($n, d/\alpha$).
+
+The following corollary easily follows from Theorem 1.1 and a result of [Kri18b].
+
+**Corollary 1.3** For every $\epsilon > 0$, there is a constant $c_\epsilon$ depending only on $\epsilon$, such that a random graph $G \sim G(n, \frac{1+\epsilon}{n})$ with high probability contains every graph of size at most $c_\epsilon n / \log n$ as a minor.
+
+As mentioned earlier, similar but somewhat stronger results were obtained independently by Krivelevich and Nenadov (see Theorem 8.1 in [Kri18a]).
+
+As a final comment, we show in Appendix B that expanders are the ‘most minor-rich’ family of graphs:
+
+**Observation 1.4** For every graph $G$ of size $s \ge 2$, there is a graph $H_G$ of size at most $20s/\log s$ such that $G$ does not contain $H_G$ as a minor.
+
+We now turn to describe our techniques, starting with the simpler result: Theorem 1.2. Given an $n$-vertex $\alpha$-expander $G$ with maximum vertex degree at most $d$, we compute a partition of $G$ into two disjoint subgraphs, $G_1$ and $G_2$, such that $G_1$ is a connected graph, $G_2$ is an $\alpha'$-expander for a somewhat weaker parameter $\alpha'$, and there is a large matching $\mathcal{M}$ connecting vertices of $G_1$ to vertices of $G_2$. We refer to the edges of $\mathcal{M}$, and to their endpoints, as terminals. Assume now that we are given a graph $H$, containing at most $\frac{\alpha^3 n}{\tilde{c}^* d^5 \log^2 n}$ vertices and edges. We can assume w.l.o.g. that the maximum vertex degree in $H$ is at most 3, as we can compute a graph $H'$ of size at most twice the size of $H$, such that the maximum vertex degree of $H'$ is at most 3, and $H$ is a minor of $H'$. Using the transitivity of the minor relation, it is now sufficient to show that $H'$ is a minor of $G$. Therefore, we assume that the maximum vertex degree in $H$ is at most 3, and we denote $|V(H)| = n'$. Using the standard grouping technique, we partition the graph $G_1$ into connected subgraphs $S_1, \dots, S_{n'}$, each of which contains $\Theta(d^2 \log^2 n / \alpha^2)$ terminals. Assume that $V(H) = \{v_1, \dots, v_{n'}\}$. We map the vertex $v_i$ of $H$ to the graph $S_i$. Let $E_i \subseteq \mathcal{M}$ be the set of edges of $\mathcal{M}$ incident to the vertices of $S_i$. Every edge $(v_i, v_j) \in E(H)$ is embedded into a path in the expander $G_2$, that connects some edge of $E_i$ to some edge of $E_j$.
+The paths are found using standard techniques: we use the classical result of Leighton and Rao [LR99] to show that for every edge $e = (v_i, v_j)$ of $H$, there is a large set $\mathcal{P}_e$ of paths in $G_2$, connecting edges of $E_i$ to edges of $E_j$, such that all resulting paths in $\mathcal{P} = \bigcup_{e \in E(H)} \mathcal{P}_e$ are short, and cause a small vertex-congestion in $G_2$. We then use the constructive proof of the Lovász Local Lemma by Moser and Tardos [MT10] to select a single path $P_e$ from each such set $\mathcal{P}_e$, so that the resulting paths are disjoint in their vertices.
+
The proof of Theorem 1.1 is somewhat more complex. As before, we assume w.l.o.g. that the maximum vertex degree in the graph $H$ is at most 3. We define a new combinatorial object called a Path-of-Expanders System (see Figure 1). At a high level, a Path-of-Expanders System of width $w$ and expansion $\alpha'$ consists of 12 graphs: graphs $T_1, \dots, T_6$ that are $\alpha'$-expanders, and graphs $S_1, \dots, S_6$ that are connected graphs. For each $1 \le i \le 6$, we are also given a matching $\mathcal{M}'_i$ of cardinality $w$ connecting vertices of $S_i$ to vertices of $T_i$; the endpoints of the edges of $\mathcal{M}'_i$ in $S_i$ and $T_i$ are denoted by $B_i$ and $C_i$, respectively. For each $1 \le i < 6$, we are given a matching $\mathcal{M}_i$ connecting every vertex of $B_i$ to some vertex of $S_{i+1}$; the endpoints of the edges of $\mathcal{M}_i$ that lie in $S_{i+1}$ are denoted by $A_{i+1}$. We show that an $n$-vertex $\alpha$-expander with maximum vertex degree at most $d$ must contain a Path-of-Expanders System of width $w \ge n(\alpha/d)^c$ and expansion $\alpha' = (\alpha/d)^{c'}$ for some constants $c$ and $c'$, and we provide an algorithm with running time $\mathrm{poly}(n) \cdot (d/\alpha)^{O(\log(d/\alpha))}$ to compute it. Next, we split the Path-of-Expanders System into three parts. The first part is the union of the graphs $S_2, T_2$ and the
matching $\mathcal{M}'_2$. We view the vertices of $B_2$ as terminals, and we use the graph $T_2$ and the matching $\mathcal{M}'_2$ in order to partition them into large enough groups, and to define a connected subgraph of $T_2 \cup \mathcal{M}'_2$ spanning each such group, as in the proof of Theorem 1.2. We ensure that the number of groups is equal to the number of vertices in the graph $H$ that we are trying to embed into $G$. Every vertex of $H$ is then embedded into a separate group, together with the corresponding connected subgraph of $T_2 \cup \mathcal{M}'_2$ spanning the group.
+
We use the graphs $S_3, \dots, S_6, T_3, \dots, T_6$ in order to route all but a small fraction of the edges of $H$. The algorithm in this part is inspired by the algorithm of Frieze [Fri01] for routing a large set of demand pairs in an expander graph via edge-disjoint paths. Lastly, the remaining edges of $H$ are routed in graph $S_1 \cup T_1 \cup \mathcal{M}'_1$, using essentially the same algorithm as the one in the proof of Theorem 1.2.
+
+Figure 1: An illustration of the Path-of-Expanders System $\Pi = (\mathcal{S}, \mathcal{M}, A_1, B_6, \mathcal{T}, \mathcal{M}')$. For each $1 \le i \le 6$, the vertices of $A_i$, $B_i$ and $C_i$ are shown in red, blue and green, respectively.
+
+**Organization.** We start with Preliminaries in Section 2. The proof of Theorem 1.1 is provided in Section 3, with some of the technical details deferred to Sections 4 and 5. Section 6 contains an algorithm for constructing a Path-of-Expanders System. The proof of Theorem 1.2 appears in Section 7, and the proofs of Corollary 1.3 and Observation 1.4 appear in Sections A and B of the Appendix, respectively.
+
+# 2 Preliminaries
+
Throughout the paper, for an integer $\ell \ge 1$, we denote $[\ell] = \{1, \dots, \ell\}$. All logarithms in the paper are to the base 2.
+
+All graphs that we consider are finite; they do not have loops or parallel edges.
+
+We will use the following simple observation, whose proof is deferred to the Appendix.
+
+**Observation 2.1** There is an efficient algorithm, that, given a set $\{x_1, \dots, x_r\}$ of non-negative integers, with $\sum_i x_i = N$, and $x_i \le 3N/4$ for all $i$, computes a partition $(A, B)$ of $\{1, \dots, r\}$, such that $\sum_{i \in A} x_i \ge N/4$ and $\sum_{i \in B} x_i \ge N/4$.
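One natural way to realize Observation 2.1 is a greedy balancing pass (a sketch of my own, not necessarily the proof given in the Appendix): process the numbers from largest to smallest, always adding to the currently lighter side.

```python
def partition(xs):
    """Greedily split the indices of xs into two sides, largest items
    first, always assigning to the currently lighter side."""
    N = sum(xs)
    assert all(x <= 3 * N / 4 for x in xs)   # the hypothesis of Observation 2.1
    A, B, sum_a, sum_b = [], [], 0, 0
    for i in sorted(range(len(xs)), key=lambda i: -xs[i]):
        if sum_a <= sum_b:
            A.append(i); sum_a += xs[i]
        else:
            B.append(i); sum_b += xs[i]
    return A, B
```

Why this meets the $N/4$ bound: if the heavier final side holds a single item, that item has size at most $3N/4$, so the other side holds at least $N/4$; otherwise the final imbalance is at most the size of the last item placed on the heavier side, which is at most $N/2$, again leaving at least $N/4$ on each side.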
+
+Given a graph $G = (V, E)$ and a subset $V' \subseteq V$ of its vertices, we denote by $\delta_G(V')$ the set of all edges that have exactly one endpoint in $V'$, and by $E_G[V']$ the set of all edges with both endpoints in
+$V'$. For readability, we write $\delta_G(v)$ instead of $\delta_G(\{v\})$. Given a pair $V', V'' \subseteq V$ of disjoint subsets of vertices, we denote by $E_G(V', V'')$ the set of all the edges with one endpoint in $V'$ and another in $V''$. We will omit the subscript $G$ when the underlying graph is clear from context. For a subset $V' \subseteq V$ of vertices of $G$, we denote by $G[V']$ the subgraph of $G$ induced by $V'$.
+
+Given a path $P$ in a graph $G$, we denote by $V_P$ and $E_P$ the sets of all its vertices and edges, respectively. Given a path $P$ and a set $V'$ of vertices of $G$, we say that $P$ is *disjoint* from $V'$ iff $V_P \cap V' = \emptyset$. We say that $P$ is *internally disjoint* from $V'$ iff every vertex of $V' \cap V_P$ is an endpoint of $P$.
+
+Similarly, suppose we are given two paths $P, P'$ in a graph $G$. We say that the two paths are *disjoint* iff $V_P \cap V_{P'} = \emptyset$, and we say that they are *internally disjoint* iff all vertices in $V_P \cap V_{P'}$ serve as endpoints of both these paths.
+
+Let $\mathcal{P}$ be any set of paths in a graph $G$. We say that $\mathcal{P}$ is a set of *disjoint* paths iff every pair $P, P' \in \mathcal{P}$ of distinct paths are disjoint. We say that $\mathcal{P}$ is a set of *internally disjoint* paths iff every pair $P, P' \in \mathcal{P}$ of distinct paths are internally disjoint. We denote by $V(\mathcal{P}) = \bigcup_{P \in \mathcal{P}} V_P$ the set of all vertices participating in the paths of $\mathcal{P}$. Given a pair $V', V''$ of subsets of vertices of $V$ (that are not necessarily disjoint), we say that a path $P \in \mathcal{P}$ connects $V'$ to $V''$ iff one of its endpoints is in $V'$ and the other endpoint is in $V''$. We use a shorthand $\mathcal{P}: V' \rightsquigarrow V''$ to indicate that $\mathcal{P}$ is a collection of disjoint paths, where each path $P \in \mathcal{P}$ connects $V'$ to $V''$. Notice that each path in $\mathcal{P}$ must originate at a distinct vertex of $V'$ and terminate at a distinct vertex of $V''$.
+
+Finally, assume that we are given a (partial) matching $\mathcal{M}$ over the vertices of $G$, and a set $\mathcal{P}$ of $|\mathcal{M}|$ paths. We say that $\mathcal{P}$ routes $\mathcal{M}$ iff for every pair of vertices $(v', v'') \in \mathcal{M}$, there is a path $P \in \mathcal{P}$, whose endpoints are $v'$ and $v''$.
+
+**Sparsest Cut and Expansion.** A cut in $G$ is a bipartition $(S, S')$ of its vertices, that is, $S \cup S' = V$, $S \cap S' = \emptyset$ and $S, S' \neq \emptyset$. The sparsity of the cut $(S, S')$ is $|E(S, S')| / \min\{|S|, |S'|\}$. The expansion of a graph $G$, denoted by $\varphi(G)$, is the minimum sparsity of any cut in $G$.
+
**Definition 1** Given a parameter $\alpha > 0$, we say that a graph $G$ is an $\alpha$-expander iff $\varphi(G) \ge \alpha$. Equivalently, for every subset $S$ of at most $|V(G)|/2$ vertices of $G$, $|\delta_G(S)| \ge \alpha|S|$.
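For intuition, both definitions can be checked directly on small graphs by enumerating all cuts (a brute-force sketch of my own, exponential time, for illustration only):

```python
from itertools import combinations

def expansion(vertices, edges):
    """Minimum sparsity |E(S, S')| / min(|S|, |S'|) over all cuts (S, S')."""
    vertices = list(vertices)
    best = float("inf")
    # every cut is seen from its smaller side, so sizes up to n/2 suffice
    for k in range(1, len(vertices) // 2 + 1):
        for S in combinations(vertices, k):
            S = set(S)
            cut = sum(1 for (u, v) in edges if (u in S) != (v in S))
            best = min(best, cut / min(len(S), len(vertices) - len(S)))
    return best
```

For example, the 4-cycle has expansion 1 (the sparsest cut splits it into two adjacent pairs), while $K_4$ has expansion 2.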
+
The following theorem follows from the standard Cheeger inequality, which shows that for any graph $G$ whose maximum vertex degree is bounded by $d$, $\frac{\lambda(G)}{2} \le \varphi(G) \le \sqrt{2d\lambda(G)}$, where $\lambda(G)$ is the second smallest eigenvalue of the Laplacian of $G$, and from the algorithm of [Fie73] (see also [AM84, Alo86, Alo98]).
+
**Theorem 2.2** *There is an efficient algorithm, that, given an $n$-vertex graph $G$ with maximum vertex degree at most $d$, computes a cut $(A, B)$ in $G$ of sparsity $O(\sqrt{d \cdot \varphi(G)})$.*
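The algorithm behind Theorem 2.2 is spectral. The toy sketch below (my own illustration, with power iteration standing in for an exact eigensolver) approximates the Fiedler vector of the Laplacian and performs the standard Cheeger sweep over prefixes of the induced vertex ordering:

```python
import math, random

def fiedler_sweep(n, edges):
    """Approximate the Fiedler vector of the Laplacian L by power
    iteration on cI - L (with the all-ones direction projected out),
    then return the best sparsity found by a sweep cut."""
    deg = [0] * n
    adj = [[] for _ in range(n)]
    for u, v in edges:
        deg[u] += 1; deg[v] += 1
        adj[u].append(v); adj[v].append(u)
    c = 2 * max(deg) + 1                      # shift so cI - L is positive definite
    x = [random.random() for _ in range(n)]
    for _ in range(2000):
        m = sum(x) / n                        # project out the all-ones eigenvector
        x = [xi - m for xi in x]
        y = [c * x[i] - deg[i] * x[i] + sum(x[j] for j in adj[i])
             for i in range(n)]               # y = (cI - L) x
        norm = math.sqrt(sum(v * v for v in y)) or 1.0
        x = [v / norm for v in y]
    order = sorted(range(n), key=lambda i: x[i])
    best, S = math.inf, set()
    for i in order[:-1]:                      # sweep: best prefix cut
        S.add(i)
        cut = sum(1 for (u, v) in edges if (u in S) != (v in S))
        best = min(best, cut / min(len(S), n - len(S)))
    return best
```

On a "barbell" of two triangles joined by one edge, the sweep recovers the optimal cut of sparsity $1/3$.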
+
Finally, we use the following simple claim several times; the claim allows one to “fix” an expander after a small number of edges were deleted from it. The proof appears in the Appendix.
+
+**Claim 2.3** Let $T$ be an $\alpha$-expander, and let $E'$ be any subset of edges of $T$. Then there is an $\alpha/4$-expander $T' \subseteq T \setminus E'$, with $|V(T')| \ge |V(T)| - \frac{4|E'|}{\alpha}$.
+
**Graph Minors.**
+
+**Definition 2 (Graph Minors)** We say that a graph $H = (U, F)$ is a minor of a graph $G = (V, E)$ iff there is a map $f$, called a model of $H$ in $G$, mapping every vertex $u \in U$ to a subset $X_u \subseteq V$ of vertices, and mapping every edge $e \in F$ to a path $P_e$ in $G$, such that:
+
+* For every vertex $u \in U$, $G[X_u]$ is connected;
+
+* For every edge $e = (u, v) \in F$, the path $P_e$ connects $X_u$ to $X_v$;
+
+* For every pair $u, v \in U$ of distinct vertices, $X_u \cap X_v = \emptyset$; and
+
+* Paths $\{P_e \mid e \in F\}$ are internally disjoint from each other and they are internally disjoint from the set $\bigcup_{u \in U} X_u$ of vertices.
+
+For a vertex $u \in U$ we sometimes call $G[X_u]$ the embedding of $u$ into $G$, and for an edge $e \in F$, we sometimes refer to $P_e$ as the embedding of $e$ into $G$.
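The four conditions of Definition 2 are mechanical to verify. The following small checker (my own illustration; `X` and `P` together encode the model $f$) tests a candidate model:

```python
def is_model(G_edges, H_edges, X, P):
    """Check the four conditions of Definition 2.  X maps each vertex u of H
    to a set X_u of vertices of G; P maps each edge (u, v) of H to a path of
    G, given as a list of vertices."""
    Gset = {frozenset(e) for e in G_edges}

    def connected(S):
        S = set(S)
        seen, stack = set(), [next(iter(S))]
        while stack:
            v = stack.pop()
            if v in seen:
                continue
            seen.add(v)
            stack += [w for w in S if frozenset((v, w)) in Gset]
        return seen == S

    blobs = [set(b) for b in X.values()]
    if not all(connected(b) for b in blobs):            # every G[X_u] connected
        return False
    if len(set().union(*blobs)) != sum(len(b) for b in blobs):
        return False                                    # X_u's pairwise disjoint
    for (u, v) in H_edges:                              # P_e connects X_u to X_v
        path = P[(u, v)]
        if any(frozenset((a, b)) not in Gset for a, b in zip(path, path[1:])):
            return False
        if path[0] not in X[u] or path[-1] not in X[v]:
            return False
    interiors = [set(P[(u, v)][1:-1]) for (u, v) in H_edges]
    allX = set().union(*blobs)                          # internal disjointness
    if any(i & allX for i in interiors):
        return False
    for a in range(len(interiors)):
        for b in range(a + 1, len(interiors)):
            if interiors[a] & interiors[b]:
                return False
    return True
```

Since path endpoints must lie in the sets $X_u$ and path interiors must avoid $\bigcup_u X_u$, checking pairwise disjointness of the interiors suffices for the last condition.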
+
+**Well-Linkedness and Path-of-Sets System.** We use a slight variation of the standard definition of (node)-well-linkedness.
+
**Definition 3 (Well-Linkedness)** We say that a set $A$ of vertices in a graph $G$ is well-linked iff for every pair $A', A''$ of disjoint equal-cardinality subsets of $A$, there is a set $\mathcal{P}: A' \rightsquigarrow A''$ of $|A'|$ paths in $G$, that are internally disjoint from $A$. (Note that the paths in $\mathcal{P}$ must be disjoint.)
+
+Next, we define a Path-of-Sets system, that was first introduced in [CC16] (a somewhat similar object called *grill* was introduced by [LS15]), and was used since then in a number of graph theoretic results.
+
**Definition 4 (Path-of-Sets System)** Given integers $w, \ell > 0$, a Path-of-Sets System of width $w$ and length $\ell$ (see Figure 2) consists of:
+
+* a sequence $S = (S_1, \dots, S_\ell)$ of $\ell$ disjoint connected graphs, that we refer to as clusters;
+
+* for each $1 \le i \le \ell$, two disjoint subsets, $A_i, B_i \subseteq V(S_i)$ of $w$ vertices each; and
+
* for each $1 \le i < \ell$, a collection $\mathcal{M}_i$ of edges, connecting every vertex of $B_i$ to a distinct vertex of $A_{i+1}$.
+
+We denote the Path-of-Sets System by $\Sigma = (S, \mathcal{M}, A_1, B_\ell)$, where $\mathcal{M} = \bigcup_i \mathcal{M}_i$. We also denote by $G_\Sigma$ the graph defined by the Path-of-Sets System, that is, $G_\Sigma = (\bigcup_{i=1}^\ell S_i) \cup \mathcal{M}$.
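As a sanity check of Definition 4, a minimal constructor (my own sketch; clusters are given as vertex-set/edge-set pairs) can assemble $G_\Sigma$ while asserting the required properties:

```python
def build_path_of_sets(clusters, A, B, matchings):
    """clusters[i] = (V_i, E_i); A[i], B[i] are disjoint w-subsets of V_i
    (as Python sets); matchings[i] matches every vertex of B[i] to a
    distinct vertex of A[i + 1].  Returns (V, E) of the graph G_Sigma."""
    w, ell = len(A[0]), len(clusters)
    for i in range(ell):
        assert len(A[i]) == len(B[i]) == w and not (A[i] & B[i])
    for i in range(ell - 1):
        assert {u for u, _ in matchings[i]} == B[i]       # covers all of B_i
        heads = {v for _, v in matchings[i]}
        assert len(heads) == w and heads <= A[i + 1]      # distinct, inside A_{i+1}
    V = set().union(*(c[0] for c in clusters))
    E = set().union(*(set(c[1]) for c in clusters), *map(set, matchings))
    return V, E
```

For instance, two 4-vertex path clusters of width 2 joined by one matching yield a graph on 8 vertices and 8 edges.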
+
We say that a given Path-of-Sets System is a Strong Path-of-Sets System iff for all $1 \le i \le \ell$, the vertices of $A_i \cup B_i$ are well-linked in $S_i$. We say that it is $\alpha$-expanding iff for all $1 \le i \le \ell$, graph $S_i$ is an $\alpha$-expander. Note that a Strong Path-of-Sets System is not necessarily $\alpha$-expanding and vice versa.
+
+Figure 2: An illustration of a Path-of-Sets System $(\mathcal{S}, \mathcal{M}, A_1, B_\ell)$. For each $i \in [\ell]$, the vertices of $A_i$ and $B_i$ are shown in red and blue respectively.
+
+Figure 3: An illustration of the subgraphs $G_{\Pi}'$ and $G_{\Pi}''$ of $G_{\Pi}$.
+
+## 2.1 Path-of-Expanders System
+
+Path-of-Expanders System is the main new structural object that we use.
+
+**Definition 5 (Path-of-Expanders System)** *Given an integer $w > 0$ and a parameter $0 < \alpha < 1$, a Path-of-Expanders System of width $w$ and expansion $\alpha$ (see Figure 1) consists of:*
+
+* a Strong Path-of-Sets System $\Sigma = (\mathcal{S}, \mathcal{M}, A_1, B_6)$ of width $w$ and length $6$;
+
+* a sequence $\mathcal{T} = (T_1, \dots, T_6)$ of 6 disjoint connected graphs, such that for each $1 \le i \le 6$, $T_i$ is disjoint from $S_1, \dots, S_6$, and it is an $\alpha$-expander; and
+
+* for each $1 \le i \le 6$, a perfect matching $\mathcal{M}'_i$ between $B_i$ and some subset $C_i$ of $w$ vertices of $T_i$.
+
We denote the Path-of-Expanders System by $\Pi = (\mathcal{S}, \mathcal{M}, A_1, B_6, \mathcal{T}, \mathcal{M}')$, where $\mathcal{M}' = \bigcup_i \mathcal{M}'_i$. For convenience, for each $1 \le i \le 6$, we denote by $W_i$ the graph obtained from the union of the graphs $S_i$ and $T_i$, and the matching $\mathcal{M}'_i$.
+
+Similarly to the Path-of-Sets System, we associate with the Path-of-Expanders System $\Pi$ a graph $G_{\Pi}$, obtained by taking the union of the graphs $S_1, \dots, S_6, T_1, \dots, T_6$ and the sets $\mathcal{M}, \mathcal{M}'$ of edges.
+
+We will be interested in three subgraphs of $G_{\Pi}$ (see Figure 3): (i) Graph $W_1$, that we denote by $G'_{\Pi}$; (ii) Graph $W_2$; and (iii) Graph $G''_{\Pi}$, obtained by taking the union of $W_3 \cup W_4 \cup W_5 \cup W_6$ and the edges of $\mathcal{M}_3 \cup \mathcal{M}_4 \cup \mathcal{M}_5$.
+
+**Definition 6** We say that a graph $G$ contains a Path-of-Sets System of width $w$ and length $\ell$ as a minor iff there is a Path-of-Sets System $\Sigma$ of width $w$ and length $\ell$, such that its corresponding graph $G_\Sigma$ is a minor of $G$. Similarly, we say that a graph $G$ contains a Path-of-Expanders System of width $w$ and expansion $\alpha$ as a minor iff there is a Path-of-Expanders System $\Pi$ of width $w$ and expansion $\alpha$, such that its corresponding graph $G_\Pi$ is a minor of $G$.
+
+The following theorem, that we prove in Section 6, shows that an expander must contain a Path-of-Expanders System with suitably chosen parameters, and provides an algorithm to compute its model in the expander.
+
**Theorem 2.4** There are constants $\hat{c}_1, \hat{c}_2$, and an algorithm, that, given an $\alpha$-expander $G$, for $0 < \alpha < 1$, with $|V(G)| = n$ and maximum vertex degree at most $d$, constructs a Path-of-Expanders System $\Pi$ of expansion $\tilde{\alpha} \ge (\frac{\alpha}{d})^{\hat{c}_1}$ and width $w \ge n \cdot (\frac{\alpha}{d})^{\hat{c}_2}$, such that the corresponding graph $G_{\Pi}$ has maximum vertex degree at most $d+1$ and is a minor of $G$. Moreover, the algorithm computes a model of $G_{\Pi}$ in $G$. The running time of the algorithm is $\mathrm{poly}(n) \cdot (d/\alpha)^{O(\log(d/\alpha))}$.
+
+# 3 Proof of Theorem 1.1
+
+The goal of this section is to prove Theorem 1.1. We prove it by using the following theorem.
+
**Theorem 3.1** There is a constant $c_0$ and a randomized algorithm, that, given a Path-of-Expanders System $\Pi$ with expansion $\alpha$ and width $w$, such that the maximum vertex degree in $G_{\Pi}$ is at most $d$ and $|V(G_{\Pi})| \le n$ for some $n > c_0$, together with a graph $H$ of maximum vertex degree at most 3 and $|V(H)| \le \frac{w^2\alpha^2}{2^{19}d^4n\log n}$, with high probability, in time $\mathrm{poly}(n)$, finds a model of $H$ in $G_{\Pi}$.
+
Before proving Theorem 3.1, we complete the proof of Theorem 1.1 using it. Let $G$ be the given $\alpha$-expander with $|V(G)|=n$, and maximum vertex degree at most $d$. Recall that $0 < \alpha < 1$. By letting $c^*$ be a sufficiently large constant, we can assume that $n$ is sufficiently large, so that, for example, $n > c_0$, where $c_0$ is the constant from Theorem 3.1. Indeed, otherwise, it is enough to show that the graph with 1 vertex is a minor of $G$, which is trivially true. Therefore, we assume from now on that $n$ is sufficiently large. From Theorem 2.4, $G$ contains as a minor a Path-of-Expanders System $\Pi$ of width $w \ge n \cdot (\frac{\alpha}{d})^{\hat{c}_2}$ and expansion $\tilde{\alpha} \ge (\frac{\alpha}{d})^{\hat{c}_1}$, such that the maximum vertex degree in $G_{\Pi}$ is at most $d+1$. Using these bounds, we get that:
+
+$$
+\begin{aligned}
+\frac{w^2 \tilde{\alpha}^2}{2^{19}(d+1)^4 n \log n} &\ge n^2 \cdot \left(\frac{\alpha}{d}\right)^{2\hat{c}_2} \cdot \left(\frac{\alpha}{d}\right)^{2\hat{c}_1} \cdot \frac{1}{2^{23} d^4 n \log n} \\
+&= \frac{n \alpha^{2(\hat{c}_1+\hat{c}_2)}}{2^{23} d^{4+2(\hat{c}_1+\hat{c}_2)} \log n} \\
+&\ge \frac{3n}{c^* \log n} \cdot \left(\frac{\alpha}{d}\right)^{c^*},
+\end{aligned}
+$$
+
+for $c^* \ge \max\{4 + 2(\hat{c}_1 + \hat{c}_2), c_0, 2^{25}\}$. Therefore, if $H'$ is a graph with maximum vertex degree at most 3, and $|V(H')| \le \frac{3n}{c^*\log n} \cdot (\frac{\alpha}{d})^{c^*}$, then, from Theorem 3.1, $G$ contains $H'$ as a minor, and from Theorems 2.4 and 3.1, its model in $G$ can be computed with high probability by a randomized algorithm, in time poly($n$) $\cdot (d/\alpha)^{O(\log(d/\alpha))}$.
+
Consider now any graph $H = (U,F)$ of size at most $\frac{n}{c^*\log n} \cdot (\frac{\alpha}{d})^{c^*}$. Let $n' = |U|$ and $m' = |F|$, so $n' + m' \le \frac{n}{c^*\log n} \cdot (\frac{\alpha}{d})^{c^*}$. We construct another graph $H'$, whose maximum vertex degree is at most 3
+and $|V(H')| \le n' + 2m'$, such that $H$ is a minor of $H'$. Since $H'$ must be a minor of $G$, it follows that $H$ is a minor of $G$. In order to construct graph $H'$ from $H$, we consider every vertex $u \in U$ of degree $d_u > 3$ in turn, and replace it with a cycle $C_u$ on $d_u$ vertices, such that every edge incident to $u$ in $H$ is incident to a distinct vertex of $C_u$. It is easy to verify that the resulting graph $H'$ has maximum vertex degree at most 3, that $H$ is a minor of $H'$, and that $|V(H')| \le 2m' + n'$, completing the proof of Theorem 1.1. Notice that this proof is constructive, that is, there is a randomized algorithm that constructs a model of $H$ in $G$ in time poly($n$) $\cdot (d/\alpha)^{O(\log(d/\alpha))}$. The remainder of this section is dedicated to proving Theorem 3.1, with some details deferred to subsequent sections.
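The cycle-replacement step described above can be sketched as follows (a hypothetical implementation of mine; every vertex of degree greater than 3 is replaced by a cycle, with each formerly incident edge attached to a distinct cycle vertex):

```python
def reduce_degree(n, edges):
    """Return (n', edges') of a max-degree-3 graph H' that contains the
    input graph H as a minor (contract each cycle back to one vertex)."""
    inc = [[] for _ in range(n)]
    for idx, (u, v) in enumerate(edges):
        inc[u].append(idx); inc[v].append(idx)
    new_id, ports, new_edges = 0, {}, []
    for u in range(n):
        if len(inc[u]) <= 3:                  # low-degree vertices are kept
            vid = new_id; new_id += 1
            for idx in inc[u]:
                ports[(u, idx)] = vid
        else:                                 # replace u by a cycle C_u
            cyc = list(range(new_id, new_id + len(inc[u])))
            new_id += len(inc[u])
            for pos, idx in enumerate(inc[u]):
                ports[(u, idx)] = cyc[pos]    # one cycle vertex per edge
            new_edges += [(cyc[i], cyc[(i + 1) % len(cyc)])
                          for i in range(len(cyc))]
    for idx, (u, v) in enumerate(edges):      # re-attach the original edges
        new_edges.append((ports[(u, idx)], ports[(v, idx)]))
    return new_id, new_edges
```

For the star $K_{1,4}$, the degree-4 center becomes a 4-cycle, giving 8 vertices, each of degree at most 3, matching the $|V(H')| \le n' + 2m'$ bound.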
+
+## 3.1 Large Minors in Path-of-Expanders System
+
This subsection is devoted to the proof of Theorem 3.1. We assume that we are given a Path-of-Expanders System $\Pi = (\mathcal{S}, \mathcal{M}, A_1, B_6, \mathcal{T}, \mathcal{M}')$ of width $w$ and expansion $\alpha$, whose corresponding graph $G_\Pi$ contains at most $n$ vertices, where $n > c_0$ for some large enough constant $c_0$, and its maximum vertex degree is bounded by $d$. In order to simplify the notation, we denote $G_\Pi$ by $G$. We also use the following parameter: $\rho = 2^{16} \lfloor \frac{d^3 n \log n}{\alpha^2 w} \rfloor$.
+
+We are also given a graph $H = (U, F)$ of maximum degree 3, with $|U| \le \frac{w^2 \alpha^2}{2^{19} d^4 n \log n} \le \frac{w}{8d\rho}$.
+
+Our goal is to find a model of $H$ in $G$. Our algorithm consists of three steps. In the first step, we associate with each vertex $u \in U$, a subset $X_u$ of vertices of $W_2$, such that $W_2[X_u]$ is a connected graph. This defines the embeddings of the vertices of $H$ into $G$ for the model of $H$ that we are computing. In the second step, we embed all but a small fraction of the edges of $H$ into $G''_{\Pi}$, and in the last step, we embed the remaining edges of $H$ into $G'_{\Pi}$. We now describe each step in detail.
+
+**Step 1: Embedding the Vertices of H.** In this step we compute an embedding of every vertex of $H$ into a connected subgraph of $W_2$. Recall that graph $W_2$ is the union of the graphs $S_2$ and $T_2$, and the matching $\mathcal{M}'_2$, connecting the vertices of $B_2 \subseteq V(S_2)$ to the vertices of $C_2 \subseteq V(T_2)$, where $|B_2| = |C_2| = w$. We use the following simple observation, that was used extensively in the literature (often under the name of “grouping technique”) (see e.g. [CKS05, RZ10, And10, Chu16b]). The proof is deferred to Section D of the Appendix.
+
+**Observation 3.2** There is an efficient algorithm that, given a connected graph $\hat{G}$ with maximum vertex degree at most $d$, an integer $r \ge 1$, and a subset $R \subseteq V(\hat{G})$ of vertices of $\hat{G}$ with $|R| \ge r$, computes a collection $\{V_1, \dots, V_r\}$ of $r$ mutually disjoint subsets of $V(\hat{G})$, such that:
+
+* For each $i \in [r]$, the induced graph $\hat{G}[V_i]$ is connected; and
+
+* For each $i \in [r]$, $|V_i \cap R| \ge \lfloor|R|/(dr)\rfloor$.
+
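The grouping behind Observation 3.2 can be illustrated on a spanning tree (my own sketch of the standard technique): process the tree in post-order and cut off the pending subtree as a connected group whenever it has accumulated at least $t = \lfloor |R|/(dr) \rfloor$ terminals; since every vertex has fewer than $d$ children, each group collects at most about $dt$ terminals, so at least $r$ groups are produced.

```python
def group_terminals(n, tree_edges, R, t):
    """Partition the vertices of a tree into connected groups, each holding
    at least t terminals of R (assumes |R| is large enough for one group;
    the leftover at the root is merged into the last group, which stays
    connected because everything above the last cut is still pending)."""
    adj = [[] for _ in range(n)]
    for u, v in tree_edges:
        adj[u].append(v); adj[v].append(u)
    order, parent, seen, stack = [], [None] * n, {0}, [0]
    while stack:                              # iterative DFS from the root 0
        v = stack.pop(); order.append(v)
        for w in adj[v]:
            if w not in seen:
                seen.add(w); parent[w] = v; stack.append(w)
    groups = []
    pending = [[] for _ in range(n)]          # not-yet-grouped subtree vertices
    for v in reversed(order):                 # children before parents
        pending[v].append(v)
        if sum(1 for x in pending[v] if x in R) >= t:
            groups.append(pending[v])         # cut off a connected group
            pending[v] = []
        if parent[v] is not None:
            pending[parent[v]] += pending[v]
    if pending[0]:
        if groups:
            groups[-1] += pending[0]
        else:
            groups.append(pending[0])
    return groups
```

On a path with all 8 vertices as terminals and $t = 3$, this produces two connected groups with at least 3 terminals each.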
We apply the above observation to the graph $T_2$, together with vertex set $R = C_2$ and parameter $r = \lfloor \frac{w}{8d\rho} \rfloor$. Let $\mathcal{U}$ be the resulting collection of $r$ subsets of vertices of $T_2$. Recall that for each set $V_i \in \mathcal{U}$, $|V_i \cap C_2| \ge \left\lfloor \frac{|C_2|}{dr} \right\rfloor \ge \left\lfloor \frac{w}{d \lfloor w/(8d\rho) \rfloor} \right\rfloor \ge 3\rho$. Since $|U| \le \frac{w}{8d\rho}$, we can choose $|U|$ distinct sets $V_1, \dots, V_{|U|} \in \mathcal{U}$. We also denote $U = \{u_1, \dots, u_{|U|}\}$. Finally, for each $1 \le i \le |U|$, we let $E^i \subseteq \mathcal{M}'_2$ be the subset of edges that have an endpoint in $V_i$, and we let $B_2^i$ be the subset of vertices of $B_2$ that serve as endpoints of the edges in $E^i$. Since $|V_i \cap C_2| \ge 3\rho$, $|B_2^i| \ge 3\rho$ for all $i$. We are now ready to define the embeddings of the vertices of $H$ into $G$. For each $1 \le i \le |U|$, we let $f(u_i) = G[B_2^i \cup V_i]$. Notice that for all $1 \le i \le |U|$, $f(u_i)$ is a connected graph, and for all $1 \le i < j \le |U|$, $f(u_i) \cap f(u_j) = \emptyset$. In
+the remaining steps, we focus on embedding the edges of $H$ into $G$, such that the resulting paths are internally disjoint from $B_2 \cup T_2$.
+
+Figure 4: A sketch of the partition of $T_2$ and $B_2$. Vertices of $B_2$ and $C_2$ are shown in blue and red respectively.
+
+**Step 2: Routing in $G_{\Pi}''$.** Consider some vertex $u_i \in U$, its corresponding graph $f(u_i)$, and the set $B_2^i \subseteq B_2$ of vertices that lie in $f(u_i)$; recall that $|B_2^i| \ge 3\rho$. Recall that the maximum vertex degree in $H$ is at most 3. For every edge $e \in \delta_H(u_i)$, we select an arbitrary subset $B_2^i(e) \subseteq B_2^i$ of $\rho$ vertices, so that all resulting sets $\{B_2^i(e)\}_{e \in \delta_H(u_i)}$ are mutually disjoint.
+
Recall that graph $G_{\Pi}$ contains a perfect matching $\mathcal{M}_2$ between the vertices of $B_2$ and the vertices of $A_3$. We let $\tilde{E}^i \subseteq \mathcal{M}_2$ be the subset of edges whose endpoints lie in $B_2^i$, and denote by $A_3^i \subseteq A_3$ the set of endpoints of the edges of $\tilde{E}^i$ that lie in $A_3$. For every edge $e \in \delta_H(u_i)$, we let $A_3^i(e) \subseteq A_3^i$ be the set of $\rho$ vertices that are connected to the vertices of $B_2^i(e)$ with an edge of $\mathcal{M}_2$. Clearly, all resulting vertex sets $\{A_3^i(e)\}_{e \in \delta_H(u_i)}$ are mutually disjoint. Let $A'_3 = \bigcup_{u_i \in U} \bigcup_{e \in \delta_H(u_i)} A_3^i(e)$, and notice that
+
+$$ |A'_3| \le 3\rho \cdot |U| \le 3\rho \cdot \frac{w}{8d\rho} = \frac{3w}{8d} \le \frac{w}{2}. $$
+
+The following lemma, whose proof is deferred to Section 4, allows us to embed a large number of edges of $H$ in $G_{\Pi}''$.
+
+**Lemma 3.3** *There is an efficient algorithm, that, given a Path-of-Expanders System $\Pi = (S, \mathcal{M}, A_1, B_6, \mathcal{T}, \mathcal{M}')$ of expansion $\alpha$ and width $w$, where $0 < \alpha < 1$ and $w$ is an integral multiple of 4, whose corresponding graph $G_{\Pi}$ contains at most $n$ vertices and has maximum vertex degree at most $d$, together with a subset $A'_3 \subseteq A_3$ of at most $w/2$ vertices, and a collection $\{A_3^1, \dots, A_3^{2r}\}$ of mutually disjoint subsets of $A'_3$ of cardinality $\rho = 2^{16} \left\lfloor \frac{d^3 n \log n}{\alpha^2 w} \right\rfloor$ each, where $r > \frac{w \alpha^2 (\log \log n)^2}{d^3 (\log^3 n)}$, returns a partition $\mathcal{I}', \mathcal{I}''$ of $\{1, \dots, r\}$, and a set $\mathcal{P}^* = \{P_j^* | j \in \mathcal{I}'\}$ of disjoint paths in $G_{\Pi}''$, such that for each $j \in \mathcal{I}'$, path $P_j^*$ connects $A_3^j$ to $A_3^{j+r}$, and $|\mathcal{I}''| \le \frac{w \alpha^2 (\log \log n)^2}{d^3 (\log^3 n)}$.*
+
+We obtain the following immediate corollary of the lemma.
+
**Corollary 3.4** *There is an efficient algorithm to compute a partition $(F_1, F_2)$ of the set $F$ of edges of $H$, and for each edge $e = (u_i, u_j) \in F_1$, a path $P_e^*$ in graph $G_{\Pi}''$, connecting a vertex of $A_3^i(e)$ to a vertex of $A_3^j(e)$, such that all paths in set $\mathcal{P}_1^* = \{P_e^* \mid e \in F_1\}$ are disjoint, and $|F_2| \le \frac{w\alpha^2(\log\log n)^2}{d^3\log^3 n}$.*
+
**Proof:** By appropriately ordering the collection $\{A_3^i(e) \mid u_i \in U, e \in \delta_H(u_i)\}$ of vertex subsets, and applying Lemma 3.3 to the resulting sequence of subsets of $A'_3$, we obtain a set $F_1 \subseteq F$ of edges of
$H$, and for each edge $e = (u_i, u_j) \in F_1$, a path $P_e^*$, connecting a vertex of $A_3^i(e)$ to a vertex of $A_3^j(e)$ in graph $G_{\Pi}''$, such that all paths in set $\mathcal{P}_1^* = \{P_e^* \mid e \in F_1\}$ are disjoint. Let $F_2 = F \setminus F_1$. From Lemma 3.3, $|F_2| \le \frac{w\alpha^2(\log \log n)^2}{d^3 \log^3 n}$. $\square$
+
+For each edge $e = (u_i, u_j) \in F_1$, we extend the path $P_e^*$ to include the two edges of $\mathcal{M}_2$ incident to its endpoints, so that $P_e^*$ now connects a vertex of $B_2^i$ to a vertex of $B_2^j$. Path $P_e^*$ becomes the embedding $f(e)$ of $e$ in the model $f$ of $H$ that we are constructing. For convenience, the resulting set of paths $\{P_e^* | e \in F_1\}$ is still denoted by $\mathcal{P}_1^*$. The paths in $\mathcal{P}_1^*$ remain disjoint from each other; they are internally disjoint from $W_2$, and completely disjoint from $W_1$ (see Figure 5).
+
+**Step 3: Routing in $G_{\Pi'}$.** In this step we complete the construction of a minor of $H$ in $G$, by embedding the edges of $F_2$. The main tool that we use is the following lemma, whose proof is deferred to Section 5.
+
**Lemma 3.5** *There is a universal constant $c$, and an efficient algorithm that, given a Path-of-Expanders System $\Pi = (\mathcal{S}, \mathcal{M}, A_1, B_6, \mathcal{T}, \mathcal{M}')$ of expansion $\alpha$ and width $w$, such that the corresponding graph $G_{\Pi}$ contains at most $n$ vertices and has maximum vertex degree at most $d$, computes a subset $B'_1 \subseteq B_1$ of at least $\frac{cw\alpha^2}{d^3 \log^2 n}$ vertices, such that the following holds. There is an efficient randomized algorithm, that, given any matching $\mathcal{M}^*$ over the vertices of $B'_1$, with high probability returns a set $\mathcal{P}$ of disjoint paths in $W_1$, routing $\mathcal{M}^*$.*
+
We now conclude the last step using the above lemma. Let $B'_1 \subseteq B_1$ be the subset of at least $\frac{cw\alpha^2}{d^3 \log^2 n}$ vertices, computed by the algorithm from Lemma 3.5. Let $A'_2 \subseteq A_2$ be the set of all the vertices connected to the vertices of $B'_1$ by the edges of the matching $\mathcal{M}_1$. Observe that $|A'_2| \ge 2|F_2|$, since:
+
+$$
2|F_2| \le \frac{2w\alpha^2(\log \log n)^2}{d^3 \log^3 n} \le \frac{cw\alpha^2}{d^3 \log^2 n} \le |B'_1| = |A'_2|,
+$$
+
+since we have assumed that $n$ is sufficiently large. We let $A_2''$ be an arbitrary subset of $2|F_2|$ vertices of $A_2'$.
+
Recall that for every vertex $u_i \in U$ and every edge $e \in \delta_H(u_i)$, we have defined a subset $B_2^i(e) \subseteq B_2^i$ of vertices. We select an arbitrary representative vertex $b_2^i(e) \in B_2^i(e)$, and we let $B_2' = \{b_2^i(e) \mid u_i \in U, e \in \delta_H(u_i) \cap F_2\}$ be the resulting set of representative vertices, so that $|B_2'| = 2|F_2|$.
+
Since the vertices of $A_2 \cup B_2$ are well-linked in $S_2$, there is a set $Q_2$ of $2|F_2|$ disjoint paths in $S_2$, connecting every vertex of $B'_2$ to some vertex of $A_2''$, such that the paths in $Q_2$ are internally disjoint from $A_2 \cup B_2$. For each vertex $b_2^i(e) \in B'_2$, let $a_2^i(e) \in A_2''$ be the endpoint of the path of $Q_2$ that originates at $b_2^i(e)$ (see Figure 6), and let $b_1^i(e) \in B'_1$ be the vertex of $B_1$ that is connected to $a_2^i(e)$ with an edge of $\mathcal{M}_1$. We can now naturally define a matching $\mathcal{M}^*$ over the vertices of $B'_1$: for every edge $e = (u_i, u_j) \in F_2$, we add the pair $(b_1^i(e), b_1^j(e))$ of vertices to the matching. From Lemma 3.5, with high probability we obtain a collection $\mathcal{P}_2^* = \{P_e^* \mid e \in F_2\}$ of disjoint paths in $W_1$, such that, for every edge $e = (u_i, u_j) \in F_2$, the corresponding path $P_e^*$ connects $b_1^i(e)$ to $b_1^j(e)$. We extend this path, so that it connects $b_2^i(e)$ to $b_2^j(e)$, by using the edges of $\mathcal{M}_1$ incident to $b_1^i(e)$ and $b_1^j(e)$, and the paths of $Q_2$ terminating at $a_2^i(e)$ and $a_2^j(e)$. Notice that the resulting extended paths are internally disjoint from $B_2$, and are completely disjoint from $T_2 \cup G_{\Pi}''$. We now embed each edge $e \in F_2$ into the path $P_e^*$, that is, we set $f(e) = P_e^*$. This completes the construction of the model of $H$ in $G_{\Pi}$, except for the proofs of Lemmas 3.3 and 3.5, that are provided in Sections 4 and 5, respectively.
+
+Figure 5: An illustration of a path $P_e^* \in P_1^*$ routing an edge $e = (u_i, u_j) \in F_1$. Dashed boundaries represent the labeled subsets.
+
+Figure 6: An illustration of the path $P_e^* \in P_2^*$ connecting $e = (u_i, u_j) \in F_2$.
+
# 4 Routing in $G''_{\Pi}$
+
+This section is dedicated to the proof of Lemma 3.3. We define a new combinatorial object, called a Duo-of-Expanders System.
+
**Definition 7** A Duo-of-Expanders System of width $w$ and expansion $\alpha$ (see Figure 7) consists of:
+
+• two disjoint graphs $T_1, T_2$, each of which is an $\alpha$-expander;
+
• a set $X$ of $w$ vertices that are disjoint from $T_1 \cup T_2$, and three mutually disjoint subsets $D_0, D_1 \subseteq V(T_1)$ and $D_2 \subseteq V(T_2)$ of $w$ vertices each; and
+
+• a complete matching $\tilde{\mathcal{M}}$ between the vertices of $X$ and the vertices of $D_0$, and a complete matching $\tilde{\mathcal{M}}'$ between the vertices of $D_1$ and the vertices of $D_2$, so $|\tilde{\mathcal{M}}| = |\tilde{\mathcal{M}}'| = w$.
+
We denote the Duo-of-Expanders System by $\mathcal{D} = (T_1, T_2, X, \tilde{\mathcal{M}}, \tilde{\mathcal{M}}')$. The set $X$ of vertices is called the backbone of $\mathcal{D}$. Let $G_{\mathcal{D}}$ be the graph corresponding to the Duo-of-Expanders System $\mathcal{D}$, so $G_{\mathcal{D}}$ is the union of graphs $T_1, T_2$, the set $X$ of vertices, and the set $\tilde{\mathcal{M}} \cup \tilde{\mathcal{M}}'$ of edges.
+
+Figure 7: An illustration of the Duo-of-Expanders System.
+
+Similarly to Path-of-Expanders System, given a graph $G$, we say that it contains a Duo-of-Expanders System $\mathcal{D}$ as a minor iff $G_{\mathcal{D}}$ is a minor of $G$.
+
+The following lemma is central to the proof of Lemma 3.3.
+
**Lemma 4.1** *There is an efficient algorithm that, given a Duo-of-Expanders System $\mathcal{D}$ of width $w/4$ and expansion $\alpha$, for some $0 < \alpha < 1$, such that the corresponding graph $G_{\mathcal{D}}$ contains at most $n$ vertices and has maximum vertex degree at most $d$, together with a collection $\{X_1, \dots, X_{2r}\}$ of mutually disjoint subsets of the backbone $X$ of cardinality $\sigma = 2^{15} \lfloor \frac{d^3 n \log n}{\alpha^2 w} \rfloor$ each, where $r > \frac{w \alpha^2 (\log \log n)^2}{d^3 \log^3 n}$, returns a partition $\mathcal{I}', \mathcal{I}''$ of $\{1, \dots, r\}$, and for each $j \in \mathcal{I}'$, a path $\bar{P}_j$ connecting a vertex of $X_j$ to a vertex of $X_{j+r}$ in $G_{\mathcal{D}}$, such that the paths in set $\bar{\mathcal{P}} = \{\bar{P}_j \mid j \in \mathcal{I}'\}$ are disjoint, and $|\mathcal{I}''| \le r \cdot \frac{\log \log n}{\log n}$.*
+
+We defer the proof of Lemma 4.1 to Section 4.1, after we complete the proof of Lemma 3.3 using it. Recall that we are given a Path-of-Expanders System $\Pi = (S, \mathcal{M}, A_1, B_6, \mathcal{T}, \mathcal{M}')$, together with its corresponding graph $G_{\Pi}$. Recall that we are also given a subset $A'_3 \subseteq A_3$ of at most $w/2$ vertices, and a partition of $A'_3$ into $2r$ disjoint subsets $A_3^1, \dots, A_3^{2r}$, of cardinality $\rho = 2^{16} \lfloor \frac{d^3 n \log n}{\alpha^2 w} \rfloor$ each, where
+$$r \ge \frac{w\alpha^2(\log \log n)^2}{d^3 \log^3 n}.$$
For each $1 \le i \le 2r$, we arbitrarily partition $A_3^i$ into two subsets, $W_1^i, W_2^i$, of cardinality $\rho/2$ each (note that $\rho$ is an even integer). Let $W_1 = \bigcup_{i=1}^{2r} W_1^i$ and let $W_2 = \bigcup_{i=1}^{2r} W_2^i$. Note that $|W_1|, |W_2| \le |A'_3|/2 \le w/4$. We add arbitrary vertices of $A_3 \setminus A'_3$ to $W_1$ and $W_2$, until each of them contains $w/4$ vertices (recall that $w/4$ is an integer), while keeping them disjoint. The vertices of $A_3 \setminus (W_1 \cup W_2)$ are then arbitrarily partitioned into two subsets, $Y_1$ and $Y_2$, of cardinality $w/4$ each.
+Next, we show that graph $G_{\Pi}''$ contains two disjoint Duo-of-Expanders Systems as minors. We will then use Lemma 4.1 in each of the two Duo-of-Expanders Systems in turn in order to obtain the desired routing.
+
+**Claim 4.2** *There is an efficient algorithm to compute two disjoint subgraphs, $G^{(1)}$ and $G^{(2)}$ of $G_{\Pi}''$, and for each $z \in \{1, 2\}$, to compute a model $f^{(z)}$ of a Duo-of-Expanders System $\mathcal{D}^{(z)} = (T_1^{(z)}, T_2^{(z)}, X^{(z)}, \tilde{\mathcal{M}}^{(z)}, (\tilde{\mathcal{M}}')^{(z)})$ of width $w/4$ and expansion $\alpha$ in $G^{(z)}$, such that the corresponding graph $G_{\mathcal{D}^{(z)}}$ has maximum vertex degree at most $d$, and for every vertex $w \in W_z$, there is a distinct vertex $v(w)$ in the backbone $X^{(z)}$, such that $w \in f^{(z)}(v(w))$.*
+
+**Proof of Claim 4.2.** From the definition of the Path-of-Expanders System, for $3 \le j \le 6$, the set $A_j \cup B_j$ of vertices is well-linked in $S_j$. Therefore, there is a set $\mathcal{P}_j$ of $w$ node-disjoint paths in $S_j$, connecting $A_j$ to $B_j$. By concatenating the path sets $\mathcal{P}_3, \mathcal{P}_4, \mathcal{P}_5, \mathcal{P}_6$, and the edge sets $\mathcal{M}_3, \mathcal{M}_4, \mathcal{M}_5$, we obtain a collection $\mathcal{P}$ of $w$ node-disjoint paths in $G_{\Pi}''$, connecting $A_3$ to $B_6$. We partition $\mathcal{P}$ into two subsets: set $\mathcal{P}^{(1)}$ contains all paths originating at the vertices of $W_1 \cup Y_1$, and set $\mathcal{P}^{(2)}$ contains all paths originating at the vertices of $W_2 \cup Y_2$.
+
We are now ready to define the two graphs $G^{(1)}$ and $G^{(2)}$. Graph $G^{(1)}$ is obtained from the union of the expanders $T_3$ and $T_4$, the paths of $\mathcal{P}^{(1)}$, and the edges of $\mathcal{M}'_3 \cup \mathcal{M}'_4$ that have an endpoint lying on the paths of $\mathcal{P}^{(1)}$. Graph $G^{(2)}$ is defined similarly, using $T_5, T_6$, the paths of $\mathcal{P}^{(2)}$, and the edges of $\mathcal{M}'_5 \cup \mathcal{M}'_6$ that have an endpoint lying on the paths of $\mathcal{P}^{(2)}$. It is immediate to verify that the graphs $G^{(1)}$ and $G^{(2)}$ are disjoint.
+
It now remains to show that each of the resulting graphs contains a Duo-of-Expanders System as a minor, with the required properties. We show this for $G^{(1)}$; the proof for $G^{(2)}$ is symmetric. Our first step is to contract every path of $\mathcal{P}^{(1)}$ into a single vertex. For each such path $P \in \mathcal{P}^{(1)}$, let $w \in W_1 \cup Y_1$ be the first vertex of $P$. We denote the new vertex obtained by contracting $P$ by $v(w)$. We let the backbone $X^{(1)}$ of the new Duo-of-Expanders System $\mathcal{D}^{(1)}$ be $X^{(1)} = \{v(w) \mid w \in W_1\}$, so $|X^{(1)}| = w/4$. In the model of $G_{\mathcal{D}^{(1)}}$ that we are constructing in $G^{(1)}$, we let $f^{(1)}(v(w))$ be the set of vertices of the contracted path $P$, so that, in particular, $w \in f^{(1)}(v(w))$. We also map the two expanders $T_1^{(1)}, T_2^{(1)}$ of $\mathcal{D}^{(1)}$ to $T_3$ and $T_4$, respectively, by setting $T_1^{(1)} = T_3$ and $T_2^{(1)} = T_4$.
+
Consider some vertex $w \in W_1 \cup Y_1$ and the path $P \in \mathcal{P}^{(1)}$ originating from $w$. Let $w'$ be the unique vertex of $P$ that belongs to $B_3$, and let $w''$ be the unique vertex of $P$ that belongs to $B_4$, in the original Path-of-Expanders System $\Pi$. Recall that there is an edge of $\mathcal{M}'_3$, connecting $w'$ to some vertex $u_w \in C_3$, and there is an edge of $\mathcal{M}'_4$, connecting $w''$ to some vertex $u'_w \in C_4$. Therefore, there are edges $(v(w), u_w)$ and $(v(w), u'_w)$ in the new contracted graph.
+
We set $D_0^{(1)} = \{u_w \mid w \in W_1\}$, and we let $\tilde{\mathcal{M}}^{(1)} = \{(v(w), u_w) \mid w \in W_1\}$. We also set $D_1^{(1)} = \{u_y \mid y \in Y_1\}$, and $D_2^{(1)} = \{u'_y \mid y \in Y_1\}$. Observe that all three sets $D_0^{(1)}, D_1^{(1)}, D_2^{(1)}$ of vertices are disjoint, and they contain $w/4$ vertices each. It now remains to define the set $(\tilde{\mathcal{M}}')^{(1)}$ of edges that connect vertices of $D_1^{(1)}$ and $D_2^{(1)}$. In order to do so, for every vertex $y \in Y_1$, we merge the two edges $(v(y), u_y)$ and $(v(y), u'_y)$ into a single edge, by contracting one of these two edges. The resulting edge is added to $(\tilde{\mathcal{M}}')^{(1)}$. It is easy to see that we have obtained a Duo-of-Expanders System $\mathcal{D}^{(1)}$, whose width is $w/4$ and expansion $\alpha$, and it is easy to verify that the maximum vertex degree in the corresponding graph $G_{\mathcal{D}^{(1)}}$ is bounded by $d$. Notice that for every vertex $w \in W_1$, there is a distinct vertex $v(w) \in X^{(1)}$, such that $w \in f^{(1)}(v(w))$. $\square$
+
We apply Lemma 4.1 to $\mathcal{D}^{(1)}$, together with the vertex sets $W_1^1, \dots, W_1^{2r}$, each of which now contains $\rho/2 = 2^{15} \left\lfloor \frac{d^3 n \log n}{\alpha^2 w} \right\rfloor$ vertices, obtaining a partition $(\mathcal{I}', \mathcal{I}'')$ of $\{1, \dots, r\}$, together with a set $\mathcal{P}_1 = \{P_j \mid j \in \mathcal{I}'\}$ of disjoint paths in $G_{\mathcal{D}^{(1)}}$, such that for all $j \in \mathcal{I}'$, path $P_j$ connects a vertex of $W_1^j$ to a vertex of $W_1^{r+j}$, and $\lvert\mathcal{I}''\rvert \le r \cdot \frac{\log \log n}{\log n}$. Since $G_{\mathcal{D}^{(1)}}$ is a minor of $G^{(1)}$, it is immediate to obtain a collection $\mathcal{P}'_1 = \{P'_j \mid j \in \mathcal{I}'\}$ of disjoint paths in $G^{(1)}$, such that for all $j \in \mathcal{I}'$, path $P'_j$ connects a vertex of $A_3^j$ to a vertex of $A_3^{j+r}$.
+
If $\lvert\mathcal{I}''\rvert \le \frac{w\alpha^2(\log \log n)^2}{d^3 \log^3 n}$, then we terminate the algorithm, and return the set $\mathcal{P}'_1$ of paths, together with the partition $(\mathcal{I}', \mathcal{I}'')$ of $\{1, \dots, r\}$. Otherwise, we denote $\lvert\mathcal{I}''\rvert = r'$, and we assume from now on that $r' > \frac{w\alpha^2(\log \log n)^2}{d^3 \log^3 n}$.
+
We apply Lemma 4.1 to $\mathcal{D}^{(2)}$, together with the vertex sets $\{W_2^j, W_2^{j+r} \mid j \in \mathcal{I}''\}$, appropriately ordered. We then obtain a partition $(\mathcal{I}_1, \mathcal{I}_2)$ of $\mathcal{I}''$, and a set $\mathcal{P}_2 = \{P_j \mid j \in \mathcal{I}_1\}$ of disjoint paths in $G_{\mathcal{D}^{(2)}}$, such that for each $j \in \mathcal{I}_1$, path $P_j$ connects a vertex of $W_2^j$ to a vertex of $W_2^{j+r}$, and $\lvert\mathcal{I}_2\rvert \le r' \cdot \frac{\log \log n}{\log n} \le r \cdot \frac{(\log \log n)^2}{\log^2 n}$. As before, since $G_{\mathcal{D}^{(2)}}$ is a minor of $G^{(2)}$, it is immediate to obtain a collection $\mathcal{P}'_2 = \{P'_j \mid j \in \mathcal{I}_1\}$ of disjoint paths in $G^{(2)}$, such that for all $j \in \mathcal{I}_1$, path $P'_j$ connects a vertex of $A_3^j$ to a vertex of $A_3^{j+r}$.
+
We return the partition $(\mathcal{I}' \cup \mathcal{I}_1, \mathcal{I}_2)$ of $\{1, \dots, r\}$, together with the set $\mathcal{P}'_1 \cup \mathcal{P}'_2$ of paths. Since the graphs $G^{(1)}$ and $G^{(2)}$ are disjoint, all paths in $\mathcal{P}'_1 \cup \mathcal{P}'_2$ are disjoint. It now only remains to show that $|\mathcal{I}_2| \le \frac{w\alpha^2(\log \log n)^2}{d^3 \log^3 n}$.
+
Recall that the set $A'_3$ of at most $w/2$ vertices is partitioned into $2r$ subsets of cardinality $\rho = 2^{16} \lfloor \frac{d^3 n \log n}{\alpha^2 w} \rfloor$ each. Therefore, using the facts that $\lfloor x \rfloor \ge x/2$ for all $x \ge 1$, and that $w \le n$:
+
+$$
+\begin{align*}
+r &\le \frac{w}{4\rho} \\
+&= \frac{w}{2^{18} \lfloor d^3 n \log n / (\alpha^2 w) \rfloor} \\
+&\le \frac{w^2 \alpha^2}{2^{17} d^3 n \log n} \\
+&\le \frac{w \alpha^2}{2^{17} d^3 \log n}.
+\end{align*}
+$$
+
+Therefore, $|\mathcal{I}_2| \le r \cdot \frac{(\log \log n)^2}{\log^2 n} \le \frac{w\alpha^2(\log \log n)^2}{d^3 \log^3 n}$, as required.
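The chain of inequalities above hides two elementary steps: $\lfloor x \rfloor \ge x/2$ for all $x \ge 1$, and $w \le n$. A quick numerical spot-check of the chain (illustrative only; the parameter values are arbitrary samples, not values taken from the text):

```python
import math

# Sample parameters satisfying w <= n, d >= 1, 0 < alpha < 1,
# and d^3 n log n / (alpha^2 w) >= 1.
n, w, d, alpha = 2**20, 2**15, 4, 0.5

x = d**3 * n * math.log2(n) / (alpha**2 * w)
rho = 2**16 * math.floor(x)                 # cardinality of each subset of A'_3

lhs = w / (4 * rho)                         # upper bound r <= w / (4 rho)
mid = w**2 * alpha**2 / (2**17 * d**3 * n * math.log2(n))
rhs = w * alpha**2 / (2**17 * d**3 * math.log2(n))

# floor(x) >= x/2 (valid since x >= 1) gives lhs <= mid; w <= n gives mid <= rhs.
assert x >= 1
assert lhs <= mid <= rhs
```

The same two elementary steps reappear in the later bounds on $|R_1|, |R_2|$ and on $|Z|$.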
+
## 4.1 Routing in a Duo-of-Expanders System — Proof of Lemma 4.1
+
+The goal of this section is to prove Lemma 4.1. The proof is inspired by the algorithm of Frieze [Fri01] for routing a large set of demand pairs in an expander graph via edge-disjoint paths. Recall that we are given a Duo-of-Expanders System $\mathcal{D}$ of width $w/4$ and expansion $\alpha$, for some $0 < \alpha < 1$, such that the maximum vertex degree in the corresponding graph $G_{\mathcal{D}} = (V, E)$ is at most $d$, and $|V| \le n$. We are also given mutually disjoint subsets $\{X_1, \dots, X_{2r}\}$ of the backbone $X$, of cardinality $\sigma = 2^{15} \lfloor \frac{d^3 n \log n}{w\alpha^2} \rfloor$ each, where $r > \frac{w\alpha^2(\log \log n)^2}{d^3 \log^3 n}$. In particular, since $|X| = w/4$, we get that $2r\sigma \le w/4$, and so $r \le \frac{w}{8\sigma} \le \frac{w}{8 \cdot 2^{15} \lfloor d^3 n \log n / (w\alpha^2) \rfloor} \le \frac{w^2 \alpha^2}{2^{17} d^3 n \log n}$. Therefore, we obtain the following bounds on $r$ that we will use throughout the proof:
+
+$$
+\frac{w\alpha^2}{d^3} \cdot \frac{(\log \log n)^2}{\log^3 n} < r \le \frac{w^2\alpha^2}{2^{17}d^3n\log n}. \quad (1)
+$$
+
For convenience, we will denote $G_{\mathcal{D}}$ by $G$ for the rest of this subsection.
+
We will iteratively construct the set $\mathcal{P}$ of disjoint paths in $G$, where for each path $P \in \mathcal{P}$, there is some index $j \in \{1, \dots, r\}$, such that $P$ connects $X_j$ to $X_{j+r}$. Whenever a path $P$ is added to $\mathcal{P}$, we delete all vertices of $P$ from $G$. Throughout the algorithm, we say that an index $j \in [r]$ is *settled* iff there is a path $P_j \in \mathcal{P}$ connecting $X_j$ to $X_{j+r}$, and otherwise we say that it is *not settled*. We use a parameter $\gamma = 512nd^2/(w\alpha)$. We say that a path $P$ in $G$ is *permissible* iff $P$ contains at most $\gamma \log \log n$ nodes of $T_1$ and at most $\gamma \log n$ nodes of $T_2$.
+
**The Algorithm.** Start with $\mathcal{P} = \emptyset$. While there is an index $j \in [r]$ and a permissible path $P_j^*$ in the current graph $G$ such that:

• $j$ is not settled;

• $P_j^*$ connects $X_j$ to $X_{j+r}$; and

• $P_j^*$ is internally disjoint from $X$:

add $P_j^*$ to $\mathcal{P}$ and delete all vertices of $P_j^*$ from $G$.
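The greedy loop can be sketched in code. This is a simplified illustration, not the actual procedure: here a candidate path is found by a plain BFS in the surviving graph and discarded if it violates the budgets, whereas the analysis assumes a search for a permissible path directly, and it also requires paths internally disjoint from $X$, which the sketch omits. All names (`adj`, `X_sets`, `T1`, `T2`, the budgets) are assumptions of the sketch.

```python
from collections import deque

def bfs_path(adj, sources, targets, removed):
    """Return some path from `sources` to `targets` avoiding `removed`, or None."""
    targets = set(targets) - removed
    parent = {s: None for s in sources if s not in removed}
    queue = deque(parent)
    while queue:
        v = queue.popleft()
        if v in targets:                  # reconstruct the path backwards
            path = []
            while v is not None:
                path.append(v)
                v = parent[v]
            return path[::-1]
        for u in adj[v]:
            if u not in removed and u not in parent:
                parent[u] = v
                queue.append(u)
    return None

def greedy_route(adj, X_sets, r, T1, T2, budget1, budget2):
    """Greedily connect X_sets[j] to X_sets[j + r] by vertex-disjoint paths,
    keeping only paths within the permissibility budgets on T1/T2 vertices."""
    removed, paths = set(), {}
    progress = True
    while progress:
        progress = False
        for j in range(r):
            if j in paths:                # index j is already settled
                continue
            path = bfs_path(adj, set(X_sets[j]) - removed, X_sets[j + r], removed)
            if path is None:
                continue
            n1 = sum(1 for v in path if v in T1)
            n2 = sum(1 for v in path if v in T2)
            if n1 <= budget1 and n2 <= budget2:   # path is "permissible"
                paths[j] = path
                removed.update(path)      # delete the path's vertices from G
                progress = True
    return paths
```

On a tiny example, a 4-vertex path graph with `X_sets = [{0}, {3}]`, `T1 = {1}`, `T2 = {2}`, and both budgets equal to 1, the loop settles index 0 with the path `0-1-2-3`.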
+
+In order to complete the proof of Lemma 4.1, it is enough to show that, when the algorithm terminates, at most $\frac{r \log \log n}{\log n}$ indices $j \in [r]$ are not settled. Assume for contradiction that this is not true. Let $\mathcal{P}$ be the path set obtained at the end of the algorithm, and let $\tilde{V} = V(\mathcal{P})$ be the set of vertices participating in the paths of $\mathcal{P}$. We further partition $\tilde{V}$ into three subsets: $\tilde{V}_1 = \tilde{V} \cap V(T_1)$; $\tilde{V}_2 = \tilde{V} \cap V(T_2)$; and $\tilde{X} = \tilde{V} \cap X$. Note that, since $|\mathcal{P}| \le r$, we are guaranteed that $|\tilde{V}_1| \le \gamma r \log \log n$; $|\tilde{V}_2| \le \gamma r \log n$, and, since we have assumed that $|\mathcal{P}| \le r(1 - \log \log n / \log n)$, and all paths in $\mathcal{P}$ are internally disjoint from X, we get that $|\tilde{X}| \le 2r - \frac{2r \log \log n}{\log n}$.
+
+We now proceed as follows. First, we show that $T_1 \setminus \tilde{V}_1$ and $T_2 \setminus \tilde{V}_2$ both contain very large $\alpha/4$-expanders. We also show that there is a large number of edges in $\tilde{\mathcal{M}}'$ that connect these two expanders. This will be used to show that there must still be a permissible path $P_j^*$, connecting two sets $X_j$ and $X_{j+r}$ for some index $j$ that is not settled yet, leading to a contradiction. We start with the following claim that allows us to find large expanders in $T_1 \setminus \tilde{V}_1$ and $T_2 \setminus \tilde{V}_2$.
+
+**Claim 4.3** Let $T$ be an $\alpha$-expander with maximum vertex degree at most $d$, and let $Z$ be any subset of vertices of $T$. Then there is an $\alpha/4$-expander $T' \subseteq T \setminus Z$, with $|V(T')| \ge |V(T)| - \frac{4d|Z|}{\alpha}$.
+
+The proof of Claim 4.3 follows immediately from Claim 2.3, by letting $E'$ be the set of all edges incident to the vertices of $Z$. The following corollary follows immediately from Claim 4.3.
+
+**Corollary 4.4** There is a subgraph $T'_1 \subseteq T_1 \setminus \tilde{V}_1$ that is an $\alpha/4$-expander, and $|V(T_1) \setminus V(T'_1)| \le 4dr\gamma \log \log n/\alpha$. Similarly, there is a subgraph $T'_2 \subseteq T_2 \setminus \tilde{V}_2$ that is an $\alpha/4$-expander, and $|V(T_2) \setminus V(T'_2)| \le 4dr\gamma \log n/\alpha$.
+
+Let $R_1 = V(T_1) \setminus V(T'_1)$ and let $R_2 = V(T_2) \setminus V(T'_2)$. We refer to the vertices of $R_1$ and $R_2$ as the vertices that were discarded from $T_1$ and $T_2$, respectively. The vertices that belong to $T'_1$ and $T'_2$ are called surviving vertices. It is easy to verify that $|R_1|, |R_2| \le w/64$. Indeed, observe that $|R_1|, |R_2| \le 4dr\gamma \log n/\alpha$. Since, from Equation (1), $r \le \frac{w^2\alpha^2}{2^{17}d^3n\log n}$, we get that altogether:
+
+$$ |R_1|, |R_2| \le \frac{4dr\gamma \log n}{\alpha} \le \frac{\gamma w^2 \alpha}{2^{15} d^2 n} \le \frac{w}{64}, $$
+
since $\gamma = 512nd^2/(w\alpha)$.
+
Recall that the Duo-of-Expanders System $\mathcal{D}$ contains a matching $\tilde{\mathcal{M}}'$ between the set $D_1 \subseteq V(T_1)$ of $w/4$ vertices and the set $D_2 \subseteq V(T_2)$ of $w/4$ vertices. Next, we show that there are large subsets $D'_1 \subseteq D_1$ and $D'_2 \subseteq D_2$ of surviving vertices, such that a subset of $\tilde{\mathcal{M}}'$ defines a complete matching between them.

**Observation 4.5** *There are two sets $D'_1 \subseteq D_1$ and $D'_2 \subseteq D_2$ containing at least $w/16$ vertices each, and a subset $\hat{\mathcal{M}} \subseteq \tilde{\mathcal{M}}'$ of edges, such that $\hat{\mathcal{M}}$ is a complete matching between $D'_1$ and $D'_2$.*
+
**Proof:** Let $\hat{D}_1 = D_1 \setminus R_1$. Since $|R_1| \le w/64$, $|\hat{D}_1| \ge w/8$. Let $\hat{\mathcal{M}}' \subseteq \tilde{\mathcal{M}}'$ be the set of edges with an endpoint in $\hat{D}_1$, and let $\hat{D}_2 \subseteq D_2$ be the set of vertices that serve as endpoints for the edges in $\hat{\mathcal{M}}'$, so $|\hat{D}_2| \ge w/8$. Finally, let $D'_2 = \hat{D}_2 \setminus R_2$, so $|D'_2| \ge w/8 - |R_2| \ge w/16$. We let $\hat{\mathcal{M}} \subseteq \hat{\mathcal{M}}'$ be the set of all edges incident to the vertices of $D'_2$, and we let $D'_1$ be the set of endpoints of these edges that lie in $\hat{D}_1$. $\square$
+
Our second main tool is the following claim, which shows that for any pair of large enough sets of vertices in an expander, there is a short path connecting them. The proof uses standard methods and is deferred to the Appendix.
+
+**Claim 4.6** Let $T$ be an $\alpha'$-expander for some $0 < \alpha' < 1$, such that $|V(T)| \le n$, and the maximum vertex degree in $T$ is at most $d$. Let $Z, Z' \subseteq V(T)$ be two vertex subsets, with $|Z| = z$ and $|Z'| = z'$. Then there is a path in $T$, connecting a vertex of $Z$ to a vertex of $Z'$, whose length is at most $\frac{8d}{\alpha'}(\log(n/z) + \log(n/z'))$. In particular, for every pair $v, v'$ of vertices in $T$, there is a path of length at most $16d \log n/\alpha'$ connecting $v$ to $v'$ in $T$.
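Claim 4.6 concerns general bounded-degree expanders; as a concrete, easily checkable illustration of the phenomenon (not a proof, and with degree $\log_2 n$ rather than a constant $d$), consider the Boolean hypercube, a standard well-expanding graph: single vertices are within distance $\log_2 n$ of each other, and large sets are markedly closer. The helper names below are our own.

```python
from collections import deque

def hypercube_adj(k):
    """Adjacency lists of the k-dimensional Boolean hypercube (n = 2^k vertices)."""
    return {v: [v ^ (1 << i) for i in range(k)] for v in range(1 << k)}

def set_distance(adj, A, B):
    """Multi-source BFS: length of a shortest path from vertex set A to set B."""
    B = set(B)
    dist = {a: 0 for a in A}
    queue = deque(dist)
    while queue:
        v = queue.popleft()
        if v in B:
            return dist[v]
        for u in adj[v]:
            if u not in dist:
                dist[u] = dist[v] + 1
                queue.append(u)
    return None  # B is unreachable from A

k = 10
adj = hypercube_adj(k)                  # n = 1024 vertices, degree 10
# Antipodal single vertices sit at the worst-case distance log2(n) = 10 ...
assert set_distance(adj, [0], [1023]) == 10
# ... but two sets of 32 vertices each are already within distance 5.
assert set_distance(adj, range(32), range(992, 1024)) == 5
```

This matches the qualitative shape of the claim: the set-to-set distance bound shrinks as $\log(n/z) + \log(n/z')$ when the sets grow.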
+
Let $J \subseteq \{1, \dots, r\}$ be the set of indices that are not settled yet. From our assumption, $|J| \ge \frac{r \log \log n}{\log n}$. For every index $j \in J$, consider the corresponding sets $X_j, X_{j+r}$ of vertices of $X$, and let $Y_j, Y_{j+r}$ be the sets of vertices of $D_0$ that are connected to $X_j$ and $X_{j+r}$ via the matching $\tilde{\mathcal{M}}$. Let $Y'_j = Y_j \setminus R_1$ and let $Y'_{j+r} = Y_{j+r} \setminus R_1$ be the subsets of surviving vertices in $Y_j$ and $Y_{j+r}$ respectively. We say that index $j$ is bad iff $|Y'_j| < \sigma/2$ or $|Y'_{j+r}| < \sigma/2$; otherwise we say that it is a good index. Recall that $|R_1| \le 4dr\gamma \log \log n/\alpha$. Therefore, the total number of bad indices is at most:
+
+$$
+\begin{align*}
\frac{2|R_1|}{\sigma} &\le \frac{8dr\gamma \log \log n}{\alpha \cdot 2^{15} \lfloor d^3n \log n/(w\alpha^2)\rfloor} \\
+&\le \frac{w\alpha r\gamma \log \log n}{2^{11} d^2 n \log n} \\
+&\le \frac{r \log \log n}{4 \log n} \cdot \frac{w\alpha\gamma}{512d^2n} \\
+&\le \frac{r \log \log n}{4 \log n},
+\end{align*}
+$$
+
since $\gamma = 512nd^2/(w\alpha)$.
+
Let $J' \subseteq J$ be the set of all good indices, so $|J'| \ge \frac{r \log \log n}{2 \log n}$. We say that an index $j \in J'$ is *happy* iff there is a path $P_1(j)$ in $T_1'$, of length at most $(\gamma \log \log n)/4$, connecting a vertex of $Y'_j$ to a vertex of $D'_1$, and there is a path $P_2(j)$ in $T_1'$, of length at most $(\gamma \log \log n)/4$, connecting a vertex of $Y'_{j+r}$ to a vertex of $D'_1$. The following claim will finish the proof of Lemma 4.1.
+
+**Claim 4.7** At least one index of $J'$ is happy.
+
Assume first that the claim is correct. Consider the paths $P_1(j)$ and $P_2(j)$ in $T_1'$, given by Claim 4.7, and assume that path $P_1(j)$ connects a vertex $v \in Y'_j$ to a vertex $v' \in D'_1$. Let $v'' \in D'_2$ be the vertex connected to $v'$ by an edge of $\hat{\mathcal{M}}$, that we denote by $e_v$. Similarly, assume that path $P_2(j)$ connects a vertex $u \in Y'_{j+r}$ to a vertex $u' \in D'_1$. Let $u'' \in D'_2$ be the vertex connected to $u'$ by an edge of $\hat{\mathcal{M}}$, that we denote by $e_u$. From Claim 4.6, there is a path $P$ in $T_2'$, of length at most $64d \log n/\alpha < \gamma \log n$, connecting $v''$ to $u''$. By combining $P_1(j), e_v, P, e_u, P_2(j)$, together with the edges of $\tilde{\mathcal{M}}$ incident to $u$ and $v$, we obtain a permissible path, connecting a vertex of $X_j$ to a vertex of $X_{j+r}$, a contradiction. It now remains to prove Claim 4.7.
+
**Proof of Claim 4.7.** We say that a vertex $v$ of $D_0 \cap V(T_1')$ is *happy* iff there is a path in $T_1'$, of length at most $(\gamma \log \log n)/4$, connecting $v$ to a vertex of $D'_1$. Assume for contradiction that the claim is false. Then for each good index $j \in J'$, either all vertices of $Y'_j$ are unhappy, or all vertices of $Y'_{j+r}$ are unhappy.
+
Let $Z \subseteq D_0 \cap V(T'_1)$ be the set of all unhappy vertices. Since $|Y'_j|, |Y'_{j+r}| \ge \sigma/2$ for every good index $j$, and $|J'| \ge \frac{r \log \log n}{2 \log n}$, we get that:
+
+$$
+\begin{align*}
+|Z| &\ge \frac{r \log \log n}{2 \log n} \cdot \frac{\sigma}{2} \\
+&\ge \frac{w\alpha^2(\log \log n)^3}{2d^3 \log^4 n} \cdot 2^{14} \cdot \left\lfloor \frac{d^3 n \log n}{w\alpha^2} \right\rfloor \\
+&\ge \frac{2^{12}n(\log \log n)^3}{\log^3 n}.
+\end{align*}
+$$
+
Let $Z' = D'_1$, so $|Z'| \ge w/16$. From Claim 4.6, there is a path in $T_1'$, connecting a vertex of $Z$ to a vertex of $Z'$, of length at most: $\frac{32d}{\alpha} (\log(n/|Z|) + \log(n/|Z'|)) \le \frac{32d}{\alpha} \left( \log\left(\frac{\log^3 n}{2^{12}(\log \log n)^3}\right) + \log\left(\frac{16n}{w}\right) \right) \le \frac{32d}{\alpha} \left(3\log \log n + \log\left(\frac{16n}{w}\right)\right) \le (\gamma \log \log n)/4$, since $\gamma = 512nd^2/(w\alpha)$. But then the endpoint of this path that lies in $Z$ is a happy vertex, contradicting the definition of $Z$. $\square$
+
# 5 Routing in $G'_{\Pi}$
+
The goal of this section is to prove Lemma 3.5. We use the following lemma, whose proof uses standard techniques and is deferred to Section F of the Appendix.
+
**Lemma 5.1** There is a universal constant $c$, and an efficient randomized algorithm, that, given a graph $G = (V, E)$ with $|V| \le n$, such that the maximum vertex degree in $G$ is at most $d$, and a parameter $0 < \alpha < 1$, together with a collection $\{C_1, \dots, C_{2r}\}$ of mutually disjoint subsets of $V$ of cardinality $q = \lceil cd^2 \log^2 n / \alpha^2 \rceil$ each, computes one of the following:
+
+• either a collection $Q = \{Q_1, \dots, Q_r\}$ of paths in $G$, where for each $1 \le j \le r$, path $Q_j$ connects a vertex of $C_j$ to a vertex of $C_{r+j}$, and with high probability the paths in $Q$ are disjoint; or
+
+• a cut $(S, S')$ in $G$ of sparsity less than $\alpha$.
+
+Consider the subgraph $W'$ of $G_{\Pi}$; recall that it consists of two graphs, $S_1$ and $T_1$, where $S_1$ is a connected graph and $T_1$ is an $\alpha$-expander. Recall that $S_1$ contains a set $B_1$ of $w$ vertices; $T_1$ contains a set $C_1$ of $w$ vertices, and $\mathcal{M}'_1$ is a perfect matching between these two sets.
+
We let $q = \lceil cd^2 \log^2 n/\alpha^2 \rceil$, where $c$ is the constant from Lemma 5.1, and we let $r = \lfloor w/(2dq) \rfloor = \Omega(w\alpha^2/(d^3 \log^2 n))$. Observe that $q \le \lfloor w/(2dr) \rfloor$. We use Observation 3.2 to compute $2r$ connected subgraphs $S^1, \dots, S^{2r}$ of $S_1$, each of which contains at least $\lfloor w/(2dr) \rfloor \ge q$ vertices of $B_1$. For $1 \le i \le 2r$, we denote $B^i = B_1 \cap V(S^i)$. We also let $\mathcal{M}^i \subseteq \mathcal{M}'_1$ be the set of edges incident to the vertices of $B^i$ in $\mathcal{M}'_1$, and we let $C^i \subseteq C_1$ be the set of the endpoints of the edges of $\mathcal{M}^i$ that lie in $C_1$. Observe that for all $1 \le i \le 2r$, $|C^i| \ge q$. For each $1 \le i \le 2r$, we select an arbitrary vertex $b_i \in B^i$, and we let $B' = \{b_i \mid 1 \le i \le 2r\}$, so that $|B'| = 2r = \Omega(w\alpha^2/(d^3 \log^2 n))$, as required.
+
Assume now that we are given an arbitrary matching $\mathcal{M}^*$ over the vertices of $B'$. By appropriately re-indexing the sets $B^i$, we can assume w.l.o.g. that $\mathcal{M}^* = \{(b_i, b_{r+i})\}_{i=1}^r$. Since $T_1$ is an $\alpha$-expander, it contains no cut of sparsity less than $\alpha$, and so the algorithm of Lemma 5.1, applied to the sets $C^1, \dots, C^{2r}$, must compute a collection $\mathcal{Q} = \{Q_1, \dots, Q_r\}$ of paths in $T_1$, where for each $1 \le j \le r$, path $Q_j$ connects some vertex $c_j^* \in C^j$ to some vertex $c_{j+r}^* \in C^{j+r}$, and with high probability the paths in $\mathcal{Q}$ are disjoint.
+
+Consider now some index $1 \le j \le 2r$. We let $e_j$ be the unique edge of the matching $\mathcal{M}'_1$ incident to $c_j^*$, and we let $b_j^* \in B^j$ be the other endpoint of this edge. Since graph $S^j$ is connected, and it contains both $b_j$ and $b_j^*$, we can find a path $P_j$ in $S^j$, connecting $b_j$ to $b_j^*$. For each $1 \le j \le r$, let $P_j^*$ be the path obtained by concatenating $P_j, e_j, Q_j, e_{j+r}, P_{j+r}$, and let $\mathcal{P}^* = \{P_j^* | 1 \le j \le r\}$. It is immediate to verify that, if the paths in $\mathcal{Q}$ are disjoint from each other, then so are the paths in $\mathcal{P}^*$, since all graphs in $\{S^j | 1 \le j \le 2r\}$ are disjoint from each other and from $T_1$. Moreover, for each $1 \le j \le r$, path $P_j^*$ connects $b_j$ to $b_{j+r}$, as required.
+
+# 6 Constructing a Path-of-Expanders System
+
+The goal of this section is to prove Theorem 2.4. The proof consists of three parts. In the first part, we construct an $\alpha'$-expanding Path-of-Sets System of length 24 in $G$, for some $\alpha'$. In the second part, we transform it into a Strong Path-of-Sets System of the same length. In the third and the final part, we turn the Strong Path-of-Sets System into a Path-of-Expanders System.
+
+## 6.1 Part 1: Constructing an Expanding Path-of-Sets System
+
+The main technical result of this section is the following theorem.
+
**Theorem 6.1** There is a constant $c_x > 3$, and a deterministic algorithm, that, given an $n$-vertex $\alpha$-expander $G$ with maximum vertex degree at most $d$, where $0 < \alpha < 1$, computes, in time poly($n$) $\cdot \left(\frac{d}{\alpha}\right)^{\mathcal{O}(\log(d/\alpha))}$, a partition $(V', V'')$ of $V(G)$, such that $|V'|, |V''| \ge \frac{\alpha|V(G)|}{256d}$, and each graph $G[V'], G[V'']$ is an $\alpha^*$-expander, for $\alpha^* \ge (\frac{\alpha}{d})^{c_x}$.
+
+The main tool that we use in the proof of the theorem is the following lemma.
+
**Lemma 6.2** There is a constant $c'_x$, and a deterministic algorithm, that, given an $n$-vertex $\alpha$-expander $G$ with maximum vertex degree at most $d$, where $0 < \alpha < 1$, computes, in time poly($n$) $\cdot \left(\frac{d}{\alpha}\right)^{\mathcal{O}(\log(d/\alpha))}$, a subset $V' \subseteq V(G)$ of vertices, such that $\frac{\alpha|V(G)|}{256d} \le |V'| \le \frac{\alpha|V(G)|}{8d}$, and $G[V']$ is an $\hat{\alpha}^*$-expander, for $\hat{\alpha}^* \ge (\frac{\alpha}{d})^{c'_x}$.
+
+**Proof:** Given a graph $G$, we say that a partition $(U', U'')$ of $V(G)$ is a balanced cut iff $|U'|, |U''| \ge |V(G)|/4$.
+
+Our starting point is the following claim.
+
+**Claim 6.3** There is an efficient algorithm that, given an $n$-vertex graph $G = (V, E)$, and a parameter $\beta$, returns one of the following:
+
• either a subset $V' \subseteq V$ of vertices, such that $n/2 \le |V'| \le 3n/4$ and $G[V']$ is an $\Omega(\frac{\beta^2}{d})$-expander;
+
+• or a partition $(S,T)$ of $V$ with $|E_G(S,T)| < \beta \cdot \min\{|S|, |T|\}$.
+
**Proof:** We start with an arbitrary balanced cut $(U', U'')$ in $G$ with $|U'| \ge |U''|$, and perform a number of iterations. In every iteration, we will either establish that $G[U']$ is an $\Omega(\frac{\beta^2}{d})$-expander, or compute the desired partition $(S,T)$ of $V$, or find a new balanced cut $(J', J'')$ in $G$ with $|E(J', J'')| < |E(U', U'')|$. In the first two cases, we terminate the algorithm and return either $V' = U'$ (in the first case), or the cut $(S, T)$ (in the second case). In the last case, we replace $(U', U'')$ with $(J', J'')$, and continue to the next iteration.
+
+We now describe the execution of an iteration. Recall that we are given a balanced cut $(U', U'')$ of $G$
+with $|U'| \ge |U''|$. If $|E(U', U'')| < \beta \cdot \min\{|U'|, |U''|\}$, then we return the cut $(S, T) = (U', U'')$ and
+terminate the algorithm. Therefore, we assume that $|E(U', U'')| \ge \beta \cdot \min\{|U'|, |U''|\}$. We apply the
+algorithm from Theorem 2.2 to graph $G[U']$, and consider the cut $(S, T)$ of $G[U']$ computed by the
algorithm. We then consider two cases. First, if $|E(S, T)| \ge \frac{\beta}{4} \min\{|S|, |T|\}$, then from Theorem 2.2,
we are guaranteed that $G[U']$ is an $\Omega(\frac{\beta^2}{d})$-expander. We terminate the algorithm and return $V' = U'$.
We assume that $|E(S, T)| < \frac{\beta}{4} \min\{|S|, |T|\}$ from now on, and we assume w.l.o.g. that $|T| \le |S|$.
+We consider again two cases. First, if $|E(T, U'')| \le \frac{\beta}{2}|T|$, we define a new cut $(S', T)$ in $G$, where
+$S' = S \cup U''$. We then get that $|T| \le |S'|$, and moreover, $|E_G(S', T)| = |E_G(S, T)| + |E_G(U'', T)| < \beta|T|$.
+We return the cut $(S', T)$ and terminate the algorithm.
+
+The final case is when $|E(T, U'')| > \frac{\beta}{2}|T|$. In this case, we are guaranteed that $|E(T, U'')| > |E(S, T)|$.
+Therefore, if we consider the cut $(J', J'')$, where $J' = S$ and $J'' = T \cup U''$, then $(J', J'')$ is a balanced
+cut in $G$, and moreover:
+
+$$
+|E(J', J'')| = |E(S, U'')| + |E(S, T)| < |E(S, U'')| + |E(T, U'')| = |E(U', U'')|.
+$$
+
+We then replace $(U', U'')$ with the new cut $(J', J'')$, and continue to the next iteration. It is easy to verify that every iteration can be executed in time poly($n$). Since the number of the edges in the set $E(U', U'')$ decreases in every iteration, the number of iterations is also bounded by poly($n$). $\square$
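The iteration in the proof of Claim 6.3 can be sketched directly. In the sketch below we substitute a brute-force exact sparsest-cut search for the approximation algorithm of Theorem 2.2 (so it only runs on tiny graphs); with an exact subroutine, the "expander" outcome certifies that every cut of $G[U_1]$ has sparsity at least $\beta/4$. All function names are our own.

```python
from itertools import combinations

def edges_between(edges, S, T):
    S, T = set(S), set(T)
    return sum(1 for u, v in edges if (u in S and v in T) or (u in T and v in S))

def exact_sparsest_cut(vertices, edges):
    """Brute-force stand-in for the approximate sparsest-cut subroutine
    (Theorem 2.2 in the text); exponential time, so tiny graphs only."""
    vertices = list(vertices)
    best = None
    for k in range(1, len(vertices) // 2 + 1):
        for S in combinations(vertices, k):
            T = [v for v in vertices if v not in S]
            sparsity = edges_between(edges, S, T) / min(len(S), len(T))
            if best is None or sparsity < best[0]:
                best = (sparsity, set(S), set(T))
    return best

def cut_or_expander(vertices, edges, beta):
    """Return ('cut', (A, B)) with |E(A,B)| < beta * min(|A|,|B|), or
    ('expander', U1) where every cut of G[U1] has sparsity >= beta/4."""
    vertices = list(vertices)
    n = len(vertices)
    U1, U2 = set(vertices[:(n + 1) // 2]), set(vertices[(n + 1) // 2:])
    while True:
        if len(U1) < len(U2):                 # keep U1 as the larger side
            U1, U2 = U2, U1
        if edges_between(edges, U1, U2) < beta * min(len(U1), len(U2)):
            return ("cut", (U1, U2))
        sub_edges = [(u, v) for u, v in edges if u in U1 and v in U1]
        sparsity, S, T = exact_sparsest_cut(U1, sub_edges)
        if sparsity >= beta / 4:
            return ("expander", U1)
        if len(T) > len(S):                   # ensure |T| <= |S|
            S, T = T, S
        if edges_between(edges, T, U2) <= (beta / 2) * len(T):
            return ("cut", (S | U2, T))       # |E(S ∪ U2, T)| < beta |T|
        U1, U2 = S, T | U2                    # strictly fewer crossing edges
```

On two triangles joined by a single edge the algorithm reports the sparse cut; on $K_4$ it certifies an expander on the larger side of the initial cut.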
+
+By combining Claim 6.3 with Observation 2.1, we obtain the following simple corollary.
+
+**Corollary 6.4** *There is an efficient algorithm that, given an n-vertex graph G = (V, E) with maximum vertex degree at most d, and a parameter β, returns one of the following:*
+
+• either a subset $V' \subseteq V$ of vertices, such that $n/4 \le |V'| \le 3n/4$ and $G[V']$ is an $\Omega(\frac{\beta^2}{d})$-expander;
+
+• or a balanced partition $(S,T)$ of $V$ with $|E_G(S,T)| < \beta \cdot \min\{|S|, |T|\}$.
+
*Proof:* Throughout the algorithm, we maintain a set $E'$ of edges of $G$ that we remove from the graph, starting with $E' = \emptyset$, and a collection $\mathcal{G}$ of disjoint induced subgraphs of $G \setminus E'$, starting with $\mathcal{G} = \{G\}$. The algorithm continues as long as there is some graph $H \in \mathcal{G}$ with $|V(H)| > 3|V(G)|/4$. In every iteration, we select the unique graph $H \in \mathcal{G}$ with $|V(H)| > 3|V(G)|/4$, and apply Claim 6.3 to it, with the parameter $\beta/4$. If the outcome is a subset $V' \subseteq V(H)$ of vertices, such that $|V(H)|/2 \le |V'| \le 3|V(H)|/4$, and $H[V']$ is an $\Omega(\frac{\beta^2}{d})$-expander, then we return $V'$: it is easy to verify that $n/4 \le |V'| \le 3n/4$, so $V'$ is a valid output. Otherwise, we obtain a partition $(S', T')$ of $V(H)$ with $|E(S', T')| < \frac{\beta}{4} \cdot \min\{|S'|, |T'|\}$. We add the edges of $E(S', T')$ to $E'$, remove $H$ from $\mathcal{G}$, and add $H[S']$ and $H[T']$ to $\mathcal{G}$ instead. Assuming w.l.o.g. that $|S'| \le |T'|$, our algorithm will never attempt to process the graph $H[S']$ again (as $|S'| \le n/2 < 3n/4$), so we charge the edges of $E(S', T')$ to the vertices of $S'$, where every vertex of $S'$ is charged fewer than $\beta/4$ units. The algorithm terminates when every graph $H \in \mathcal{G}$ has $|V(H)| \le 3n/4$ (unless it terminates earlier with an expander). Notice that from our charging scheme, at the end of the algorithm, $|E'| < n\beta/4$. Moreover, using Observation 2.1, we can partition the final collection $\mathcal{G}$ of graphs into two subsets, $\mathcal{G}', \mathcal{G}''$, such that $\sum_{H \in \mathcal{G}'} |V(H)|, \sum_{H \in \mathcal{G}''} |V(H)| \ge n/4$. Letting $S = \bigcup_{H \in \mathcal{G}'} V(H)$ and $T = \bigcup_{H \in \mathcal{G}''} V(H)$, we obtain a balanced partition $(S,T)$ of $V(G)$. Since $E(S,T) \subseteq E'$, we get that $|E(S,T)| < \frac{\beta n}{4} \le \beta \cdot \min\{|S|, |T|\}$. $\square$
+
We now turn to complete the proof of Lemma 6.2. We denote $|V(G)| = n$, and we let $n^* = \alpha|V(G)|/(8d)$. Our goal now is to compute a subset $V' \subseteq V(G)$ of vertices, with $n^*/32 \le |V'| \le n^*$, such that $G[V']$ is an $\hat{\alpha}^*$-expander, where $\hat{\alpha}^* \ge (\frac{\alpha}{d})^{c_x'}$ for some constant $c_x'$. Our algorithm is recursive. Over the course of the algorithm, we will consider smaller and smaller subgraphs of $G$, containing at least $n^*/4$ vertices each. For each such subgraph $G' \subseteq G$, we define its level $L(G')$ as follows. Let $n' = |V(G')|$. If $n' \le 4n^*/3$, then $L(G') = 0$; otherwise, $L(G') = \lceil \log_{4/3}(n'/n^*) \rceil$. Intuitively, $L(G')$ is the number of recursive levels that we will use for processing $G'$. Notice that, from the definition of $n^*$, $L(G) \le O(\log(d/\alpha))$. We use the following claim.
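The level function driving the recursion can be written out explicitly; `level` below is our own name for $L(\cdot)$, following the definition verbatim.

```python
import math

def level(n_prime: int, n_star: float) -> int:
    """L(G') for a subgraph on n' vertices: 0 if n' <= 4n*/3,
    and ceil(log_{4/3}(n'/n*)) otherwise."""
    if n_prime <= 4 * n_star / 3:
        return 0
    return math.ceil(math.log(n_prime / n_star, 4 / 3))

# A subgraph within a 4/3 factor of the target size needs no further recursion;
# each additional factor of 4/3 in size adds one level.
assert level(13, 10) == 0
assert level(14, 10) == 2
assert level(100, 10) == 9
```

In particular, with $n^* = \alpha n/(8d)$, the top level $L(G)$ is $\lceil \log_{4/3}(8d/\alpha) \rceil = O(\log(d/\alpha))$, matching the bound in the text.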
+
+**Claim 6.5** There is a deterministic algorithm, that, given a subgraph $G' \subseteq G$, such that $|V(G')| \ge n^*/4$, and a parameter $0 < \beta < 1$, returns one of the following:
+
+• Either a balanced cut $(S,T)$ in $G'$ with $|E_{G'}(S,T)| < \beta \cdot \min\{|S|,|T|\}$; or
+
+• A subset $V' \subseteq V(G')$ of vertices of $G'$, such that $n^*/32 \le |V'| \le n^*$, and $G'[V']$ is an $\hat{\beta}$-expander, for $\hat{\beta} \ge \Omega\left(\frac{\beta^2}{d \cdot 2^{10 L(G')}}\right)$.
+
+The running time of the algorithm is poly(n) · $\left(\frac{256d}{\beta}\right)^{L(G')}.$
+
We prove the claim below, after we complete the proof of Lemma 6.2 using it. We apply Claim 6.5 to the input graph $G$ and the parameter $\alpha$. Since $G$ is an $\alpha$-expander, we cannot obtain a cut $(S,T)$ in $G$ with $|E(S,T)| < \alpha \min\{|S|,|T|\}$. Therefore, the outcome of the algorithm is a subset $V' \subseteq V$ of vertices of $G$, with $n^*/32 \le |V'| \le n^*$, such that $G[V']$ is an $\hat{\alpha}$-expander, for $\hat{\alpha} = \Omega\left(\frac{\alpha^2}{d \cdot 2^{10 L(G)}}\right)$, computed in time poly(n) · $(\frac{256d}{\alpha})^{L(G)}$. Recall that $L(G) \le O(\log(d/\alpha))$. Therefore, we get that $\hat{\alpha} = \Omega\left(\frac{\alpha^2}{d \cdot 2^{O(\log(d/\alpha))}}\right) \ge (\alpha/d)^{c_x'}$ for some constant $c_x'$, and the running time of the algorithm is poly(n) · $(\frac{d}{\alpha})^{O(\log(d/\alpha))}$. It now remains to prove Claim 6.5.
+
+**Proof of Claim 6.5.** We denote $|V(G')| = n'$. We let $c$ be a large enough constant. We prove by induction on $L(G')$ that the claim is true, with the running time of the algorithm bounded by $n^c \cdot (256d/\beta)^{L(G')}$. The base of the recursion is when $L(G') = 0$, and so $n^*/4 \le n' \le 4n^*/3$. We apply Corollary 6.4 to graph $G'$ with the parameter $\beta$. If the outcome of the corollary is a subset $V' \subseteq V(G')$ of vertices with $n'/4 \le |V'| \le 3n'/4$, such that $G'[V']$ is an $\Omega(\beta^2/d)$-expander, then we terminate the algorithm and return $V'$. Notice that in this case, we are guaranteed that $n^*/16 \le |V'| \le n^*$. Otherwise, the algorithm returns a balanced cut $(S,T)$ in $G'$, with $|E_{G'}(S,T)| < \beta \cdot \min\{|S|,|T|\}$. We then return this cut. The running time of the algorithm is poly(n).
+
+We now assume that the theorem holds for all graphs $G'$ with $L(G') < i$, for some integer $i > 0$, and prove it for a given graph $G'$ with $L(G') = i$. Let $n' = |V(G')|$. The proof is somewhat similar to the proof of Corollary 6.4. Throughout the algorithm, we maintain a balanced cut $(U', U'')$ of $G'$, with $|U'| \ge |U''|$. Initially, we start with an arbitrary such balanced cut. Notice that $|E(U', U'')| \le |E(G')| \le n'd$. While $|E(U', U'')| \ge \beta n'/4$, we perform iterations (that we call phases for convenience, since each of them consists of a number of iterations). At the end of every phase, we either compute a subset $V' \subseteq V(G')$ of vertices of $G'$, such that $n^*/32 \le |V'| \le n^*$, and $G'[V']$ is an $\hat{\beta}$-expander, in which case we terminate the algorithm and return $V'$; or we compute a new balanced cut $(J', J'')$ in $G'$, such that $|E(J', J'')| \le |E(U', U'')| - \frac{\beta n'}{32}$. If $|E(J', J'')| < \beta n'/4$, then we return this cut; it is easy to verify that $|E(J', J'')| < \beta \cdot \min\{|J'|, |J''|\}$. Otherwise, we replace $(U', U'')$ with the new cut $(J', J'')$, and continue to the next iteration. Since initially $|E(U', U'')| \le n'd$, and since $|E(U', U'')|$ decreases by at least $\frac{\beta n'}{32}$ in every phase, the number of phases is bounded by $\frac{32d}{\beta}$. We now proceed to describe a single phase.
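+The outer loop of this phase-based scheme can be sketched as follows. This is a minimal illustration, not the paper's algorithm: `run_phase` is a hypothetical stand-in for the phase procedure described below, and all names are illustrative.
+
+```python
+def improve_balanced_cut(n, d, beta, cut_size, run_phase):
+    """Outer loop of the algorithm of Claim 6.5 (sketch).
+
+    While the current balanced cut crosses at least beta*n/4 edges, run a
+    phase.  A phase either returns the desired expander subset V' (and we
+    stop), or it improves the cut by at least beta*n/32 edges, so the
+    number of phases is at most 32*d/beta, since the initial cut has at
+    most n*d crossing edges.
+    """
+    phases = 0
+    while cut_size >= beta * n / 4:
+        result = run_phase()
+        if result is not None:          # found an expander subset V'
+            return ("expander", result)
+        cut_size -= beta * n / 32       # the phase improved the cut
+        phases += 1
+        assert phases <= 32 * d / beta  # phase-count bound from the proof
+    return ("cut", cut_size)
+```
+
+Running this with a phase that never finds an expander terminates with a cut of fewer than $\beta n'/4$ crossing edges, within the stated phase bound.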
+
+**An execution of a phase.** We assume that we are given a balanced cut $(U', U'')$ in $G'$, with $|U'| \ge |U''|$, and $|E(U', U'')| \ge \beta n'/4$. Our goal is to either compute a subset $V'$ of vertices of $G'$ such that $n^*/32 \le |V'| \le n^*$ and $G'[V']$ is a $\hat{\beta}$-expander, or return another balanced cut $(J', J'')$ in $G'$, with $|E(J', J'')| \le |E(U', U'')| - \frac{\beta n'}{32}$. Let $\beta' = \beta/32$. Over the course of the algorithm, we will maintain a set $E'$ of edges that we remove from the graph, starting with $E' = \emptyset$, and a collection $\mathcal{H}$ of subgraphs of $G'[U']$ (that will contain at most 4 such subgraphs). As each graph $H \in \mathcal{H}$ is a subgraph of $G'[U']$, we are guaranteed that $|V(H)| \le 3n'/4$, and so $L(H) \le L(G') - 1$. We start with $\mathcal{H}$ containing a single graph, the graph $G'[U']$. We then iterate, while there is a graph $H \in \mathcal{H}$ with $|V(H)| > |U'|/2$.
+
+In every iteration, we let $H \in \mathcal{H}$ be the unique graph with $|V(H)| > |U'|/2$. Notice that $|V(H)| \ge n'/4 \ge n^*/3$, since we have assumed that $L(G') > 0$ and so $n' \ge 4n^*/3$. We apply the algorithm from the induction hypothesis to $H$, with the parameter $\beta' = \beta/32$. If the outcome is a subset $V' \subseteq V(H)$ of vertices of $G'$, such that $n^*/32 \le |V'| \le n^*$ and $H[V']$ is a $\hat{\beta}'$-expander, for $\hat{\beta}' \ge \Omega\left(\frac{(\beta')^2}{d \cdot 2^{10L(H)}}\right)$, then we terminate the algorithm and return $V'$. Notice that, since $L(H) \le L(G')-1$ and $\beta' = \beta/32$, we get that $\frac{(\beta')^2}{d \cdot 2^{10L(H)}} \ge \frac{\beta^2}{d \cdot 2^{10L(G')}}$, so $G'[V']$ is a $\hat{\beta}$-expander. Otherwise, the algorithm returns a balanced cut $(S,T)$ of $H$, such that $|E(S,T)| < \beta' \cdot \min\{|S|, |T|\}$. We add the edges of $E(S,T)$ to $E'$, remove $H$ from $\mathcal{H}$, and add $H[S]$ and $H[T]$ to $\mathcal{H}$. The algorithm terminates once every graph $H \in \mathcal{H}$ has $|V(H)| \le |U'|/2$. Let $r = |\mathcal{H}|$ at the end of the algorithm. Since the cuts $(S,T)$ that we compute are balanced, in every iteration the size of the largest graph in $\mathcal{H}$ decreases by a factor of at least $3/4$; as $(3/4)^3 < 1/2$, it is easy to verify that we run the algorithm from the induction hypothesis at most 3 times, and that $r \le 4$. Denote $\mathcal{H} = \{H_1, \dots, H_r\}$, and for each $1 \le j \le r$, let $V_j = V(H_j)$, and let $m_j = |E(V_j, U'')|$. Since $|E(U', U'')| \ge \beta n'/4$ and $r \le 4$, there is some index $1 \le j \le r$, such that $m_j \ge \beta n'/16$. We define a new balanced cut $(J', J'')$, by setting $J' = U' \setminus V_j$ and $J'' = U'' \cup V_j$. Since $|V_j| \le |U'|/2$, it is immediate to verify that this is a balanced cut. Moreover, it is immediate to verify that $|E'| \le \beta'|U'| \le 3\beta'n'/4 \le \beta n'/32$, and so:
+
+$$|E(J', J'')| \le |E(U', U'')| - |E(V_j, U'')| + |E'| \le |E(U', U'')| - \frac{\beta n'}{16} + \frac{\beta n'}{32} \le |E(U', U'')| - \frac{\beta n'}{32}.$$
+
+Finally, we bound the running time of the algorithm. The running time is at most poly($n$) plus the time required for the recursive calls to the same procedure. Recall that the number of phases in the algorithm is at most $32d/\beta$, and every phase requires up to 3 recursive calls. Therefore, the total number of recursive calls is bounded by $100d/\beta$. Each recursive call is to a graph $H$ with $L(H) \le L(G')-1$. From the induction hypothesis, the running time of each recursive call is bounded by $n^c \cdot (256d/\hat{\beta}')^{L(G')-1} \le n^c \cdot (256d/\hat{\beta})^{L(G')-1}$, and so the total running time of the algorithm is bounded by:
+
+$$n^c + \frac{100d}{\beta} \cdot n^c \cdot \left(\frac{256d}{\hat{\beta}}\right)^{L(G')-1} \le n^c \cdot \left(\frac{256d}{\hat{\beta}}\right)^{L(G')},$$
+
+since $\beta > \hat{\beta}$. $\square$
+
+We are now ready to complete the proof of Theorem 6.1.
+
+**Proof of Theorem 6.1.** We start with the input $n$-vertex $\alpha$-expander $G$ and apply Lemma 6.2 to it, obtaining a subset $V_1 \subseteq V(G)$ of vertices, such that $G[V_1]$ is an $\hat{\alpha}^*$-expander and $\frac{\alpha n}{256d} \le |V_1| \le \frac{\alpha n}{8d}$.
+Let $E' = \delta_G(V_1)$. Since the maximum vertex degree in $G$ is at most $d$, $|E'| \le \frac{\alpha n}{8}$.
+
+We use the following claim, which is similar to Claim 2.3, except that it provides an efficient algorithm instead of the existential result of Claim 2.3, at the expense of obtaining somewhat weaker parameters. The proof appears in the Appendix.
+
+**Claim 6.6** There is an efficient algorithm, that, given an $\alpha$-expander $G = (V, E)$ with maximum vertex degree at most $d$ and a subset $E' \subseteq E$ of its edges, computes a subgraph $H \subseteq G \setminus E'$ that is an $\Omega(\frac{\alpha^2}{d})$-expander, with $|V(H)| \ge |V| - \frac{4|E'|}{\alpha}$.
+
+We apply Claim 6.6 to graph $G$ and the set $E'$ of edges computed above. Let $H \subseteq G \setminus E'$ be the resulting graph, and let $V_2 = V(H)$. From Claim 6.6, $|V_2| \ge n - \frac{4|E'|}{\alpha} \ge n/2$. Since $|V_1| < n/2$ and the set $E'$ of edges disconnects the vertices of $V_1$ from the rest of the graph, while $H$ is an $\Omega(\frac{\alpha^2}{d})$-expander and therefore a connected graph, $V_1 \cap V_2 = \emptyset$.
+
+We are now ready to define the final partition $(V', V'')$ of $V(G)$, by letting it be the minimum cut separating the vertices of $V_1$ from the vertices of $V_2$ in $G$: that is, we require that $V_1 \subseteq V'$, $V_2 \subseteq V''$, and among all such partitions $(V', V'')$ of $V(G)$, we select the one minimizing $|E(V', V'')|$. The partition $(V', V'')$ can be computed efficiently using standard techniques: we construct a new graph $\tilde{G}$ by starting with $G$, contracting all vertices of $V_1$ into a source $s$, contracting all vertices of $V_2$ into a destination $t$, and computing a minimum $s-t$ cut in the resulting graph. The resulting cut naturally defines the partition $(V', V'')$ of $V(G)$. Let $E'' = E(V', V'')$, and denote $|E''| = z$. From Menger's theorem, there is a set $\mathcal{P}$ of $z$ edge-disjoint paths in $G$, connecting $V_1$ to $V_2$. Therefore, there is a set $\mathcal{P}_1$ of $z$ edge-disjoint paths in $G[V'] \cup E''$, where each path in $\mathcal{P}_1$ connects a distinct edge of $E''$ to a vertex of $V_1$, and similarly, there is a set $\mathcal{P}_2$ of $z$ edge-disjoint paths in $G[V''] \cup E''$, where each path in $\mathcal{P}_2$ connects a distinct edge of $E''$ to a vertex of $V_2$.
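+The contraction step can be implemented with any maximum-flow routine. Below is a minimal pure-Python sketch (Edmonds–Karp on unit capacities; the function and variable names are illustrative, not from the paper), which contracts $V_1$ into a source, $V_2$ into a sink, and reads the cut off the residual graph.
+
+```python
+from collections import defaultdict, deque
+
+def min_cut_separating(edges, V1, V2):
+    """Minimum edge cut separating V1 from V2 in an undirected graph:
+    contract V1 into a source s and V2 into a sink t, run Edmonds-Karp
+    with unit capacities, and return the crossing edges (the s-side of
+    the cut is the set of vertices reachable from s in the residual)."""
+    S, T = "s", "t"
+    def rep(v):
+        if v in V1:
+            return S
+        if v in V2:
+            return T
+        return v
+    cap = defaultdict(int)
+    adj = defaultdict(set)
+    for u, v in edges:
+        u, v = rep(u), rep(v)
+        if u == v:
+            continue
+        cap[(u, v)] += 1
+        cap[(v, u)] += 1
+        adj[u].add(v)
+        adj[v].add(u)
+    flow = defaultdict(int)
+
+    def augmenting_path():
+        parent = {S: None}
+        queue = deque([S])
+        while queue:
+            u = queue.popleft()
+            for v in adj[u]:
+                if v not in parent and cap[(u, v)] - flow[(u, v)] > 0:
+                    parent[v] = u
+                    if v == T:
+                        return parent
+                    queue.append(v)
+        return None
+
+    while True:
+        parent = augmenting_path()
+        if parent is None:
+            break
+        v = T
+        while v != S:               # unit capacities: push one unit
+            u = parent[v]
+            flow[(u, v)] += 1
+            flow[(v, u)] -= 1
+            v = u
+
+    reachable = {S}
+    queue = deque([S])
+    while queue:
+        u = queue.popleft()
+        for v in adj[u]:
+            if v not in reachable and cap[(u, v)] - flow[(u, v)] > 0:
+                reachable.add(v)
+                queue.append(v)
+    return [(u, v) for (u, v) in edges
+            if (rep(u) in reachable) != (rep(v) in reachable)]
+```
+
+On a path $a{-}b{-}c{-}d$ with $V_1 = \{a\}$ and $V_2 = \{d\}$, this returns a single crossing edge, as expected for a minimum $s$-$t$ cut.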
+
+We claim that each of the graphs $G[V']$, $G[V'']$ is an $\alpha^*$-expander, for $\alpha^* = \frac{\alpha \hat{\alpha}^*}{512d}$. We prove this for $G[V']$; the proof for $G[V'']$ is similar. Assume for contradiction that $G[V']$ is not an $\alpha^*$-expander. Then there is a cut $(X, Y)$ in $G[V']$, such that $|E(X, Y)| < \alpha^* \cdot \min\{|X|, |Y|\}$. Assume w.l.o.g. that $|X \cap V_1| \le |Y \cap V_1|$. We now consider two cases.
+
+The first case happens when $|X \cap V_1| \ge \frac{\alpha|X|}{512d}$. In that case, since $G[V_1]$ is an $\hat{\alpha}^*$-expander, there are at least $\hat{\alpha}^* \cdot |X \cap V_1| \ge \frac{\hat{\alpha}^* \cdot \alpha|X|}{512d} \ge \alpha^*|X|$ edges connecting $X \cap V_1$ to $Y \cap V_1$, and so $|E(X, Y)| \ge \alpha^* \cdot \min\{|X|, |Y|\}$, a contradiction. Therefore, we assume from now on that $|X \cap V_1| < \frac{\alpha|X|}{512d}$.
+
+Figure 8: An illustration for the proof of Theorem 6.1
+
+We partition the edges of $\delta_G(X)$ into two subsets: set $E_1$ contains all edges that lie in $E(V', V'')$, and set $E_2$ contains all remaining edges, so $E_2 = E(X, Y)$ (see Figure 8). Note that from the definition of the cut $(X, Y)$, $|E_2| < \alpha^*|X|$. Recall that for every edge $e \in E(V', V'')$, there is a path $P_e \in \mathcal{P}_1$ contained in $G[V'] \cup E(V', V'')$, connecting $e$ to a vertex of $V_1$, such that all paths in $\mathcal{P}_1$ are edge-disjoint. Let $\tilde{\mathcal{P}} \subseteq \mathcal{P}_1$ be the set of paths originating at the edges of $E_1$. We further partition $\tilde{\mathcal{P}}$ into two subsets: set $\tilde{\mathcal{P}}'$ contains all paths $P_e$ that contain an edge of $E_2$, and $\tilde{\mathcal{P}}''$ contains all remaining paths. Notice that $|\tilde{\mathcal{P}}'| \le |E_2| < \alpha^*|X|$. On the other hand, every path $P_e \in \tilde{\mathcal{P}}''$ is contained in $G[X] \cup E_1$, and contains a vertex of $V_1 \cap X$ – the endpoint of $P_e$. Since we have assumed that $|V_1 \cap X| < \frac{\alpha|X|}{512d}$, and since the maximum vertex degree in $G$ is at most $d$, while the paths in $\tilde{\mathcal{P}}''$ are edge-disjoint, we get that $|\tilde{\mathcal{P}}''| < \frac{\alpha|X|}{512}$. Altogether, we get that $|E_1| = |\tilde{\mathcal{P}}| \le \alpha^*|X| + \frac{\alpha|X|}{512}$, and $|\delta_G(X)| = |E_1| + |E_2| \le 2\alpha^*|X| + \frac{\alpha|X|}{512} \le \frac{\alpha|X|}{256} < \alpha \cdot \min\{|X|, |V(G) \setminus X|\},$ since $|V(G) \setminus X| \ge |V_2| \ge n/2$, as $V_2 \cap X = \emptyset$. This contradicts the fact that $G$ is an $\alpha$-expander. $\square$
+
+**Corollary 6.7** There is an algorithm, that, given an $n$-vertex $\alpha$-expander $G$ with maximum vertex degree at most $d$ and an integer $\ell \ge 1$, where $0 < \alpha < 1/3$, computes an $\alpha_\ell$-expanding Path-of-Sets system $\Sigma$ of length $\ell$ and width $w_\ell = \lceil \alpha_\ell n \rceil$, together with a subgraph $G_\Sigma$ of $G$, where $\alpha_\ell = \alpha^{c_x^{\ell-1}}/d^{c_x^{2\ell-2}}$, and $c_x \ge 3$ is the constant from Theorem 6.1. The running time of the algorithm is poly($n$) $\cdot (\frac{d}{\alpha_\ell})^{O(\log(d/\alpha_\ell))}$.
+
+We note that we will use the corollary with $\ell = 48$, and so the resulting Path-of-Sets System will have expansion $(\alpha/d)^{O(1)}$, and the running time of the algorithm from Corollary 6.7 is $\text{poly}(n) \cdot (\frac{d}{\alpha})^{O(\log(d/\alpha))}$.
+
+**Proof:** The proof is by induction on $\ell$. The base case is when $\ell = 1$. We choose two arbitrary disjoint subsets $A_1, B_1 \subseteq V(G)$ of cardinality $w_1 = \lceil \alpha n \rceil < n/2$ each, and we let $S_1 = G$. This defines an $\alpha$-expanding Path-of-Sets System of length 1 and width $w_1$.
+
+We now assume that we are given an integer $\ell > 1$, and an $\alpha_{\ell-1}$-expanding Path-of-Sets System $\Sigma = (S, M, A_1, B_{\ell-1})$ of length $\ell - 1$ and width $w_{\ell-1}$, where $G_\Sigma \subseteq G$. We assume that $S = (S_1, \dots, S_{\ell-1})$. We compute an $\alpha_\ell$-expanding Path-of-Sets System $\Sigma' = (S', M', A'_1, B'_\ell)$ of length $\ell$ and width $w_\ell$. We will denote $S' = (S'_1, \dots, S'_\ell)$, and for each $1 \le i \le \ell$, the corresponding vertex sets $A_i$ and $B_i$ in $S'_i$ are denoted by $A'_i$ and $B'_i$, respectively.
+
+For all $1 \le i < \ell - 1$, we set $S'_i = S_i$. We also let $A'_1 \subseteq A_1$ be any subset of $w_\ell$ vertices, and for $1 \le i < \ell - 2$, we let $M'_i \subseteq M_i$ be any subset of $w_\ell$ edges; the endpoints of these edges lying in $B_i$ and $A_{i+1}$ are denoted by $B'_i$ and $A'_{i+1}$ respectively. It remains to define $S'_{\ell-1}, S'_\ell$, the matchings $M'_{\ell-2}$ and $M'_{\ell-1}$ (that implicitly define the sets $B'_{\ell-2}, A'_{\ell-1}, B'_{\ell-1}, A'_\ell$ of vertices), and the set $B'_\ell$ of vertices.
+
+We apply Theorem 6.1 to graph $S_{\ell-1}$, and compute, in time $\text{poly}(n) \cdot (\frac{d}{\alpha_{\ell-1}})^{O(\log(d/\alpha_{\ell-1}))}$ a partition $(V', V'')$ of $V(S_{\ell-1})$, such that $|V'|, |V''| \ge \frac{\alpha_{\ell-1}|V(S_{\ell-1})|}{256d}$, and each graph $G[V'], G[V'']$ is an $\alpha^*$-expander, for $\alpha^* \ge (\frac{\alpha_{\ell-1}}{d})^{c_x}$.
+
+One of the two subsets, say $V'$, must contain at least half of the vertices of $A_{\ell-1}$. We set $S'_{\ell-1} = S_{\ell-1}[V']$ and $S'_\ell = S_{\ell-1}[V'']$. Recall that: $|V'|, |V''| \ge \frac{\alpha_{\ell-1}|V(S_{\ell-1})|}{256d} \ge \frac{\alpha_{\ell-1}w_{\ell-1}}{128d}$. Since graph $S_{\ell-1}$ is an $\alpha_{\ell-1}$-expander, there are at least $\frac{\alpha_{\ell-1}^2 w_{\ell-1}}{128d}$ edges connecting $V'$ to $V''$. Since maximum vertex degree in $G$ is at most $d$, there is a matching $\mathcal{M}$, between vertices of $V'$ and vertices of $V''$, with $|\mathcal{M}| \ge \frac{\alpha_{\ell-1}^2 w_{\ell-1}}{128d^2}$. We claim that $|\mathcal{M}| \ge w_\ell$. In order to see this, it is enough to prove that $w_\ell \le \frac{\alpha_{\ell-1}^2 w_{\ell-1}}{128d^2}$. Since $w_\ell = [\alpha_\ell n]$, this is equivalent to proving that:
+
+$$ \alpha_{\ell} \leq \frac{\alpha_{\ell-1}^3}{256d^2}. $$
+
+This is easy to verify from the definition of $\alpha_\ell$ and the fact that $c_x \ge 3$. We let $\mathcal{M}'_{\ell-1}$ be any subset of $\mathcal{M}$ containing $w_\ell$ edges. The endpoints of the edges of $\mathcal{M}'_{\ell-1}$ lying in $V'$ and $V''$ are denoted by $B'_{\ell-1}$ and $A'_\ell$, respectively. We let $B'_\ell$ be any subset of $w_\ell$ vertices of $V'' \setminus A'_\ell$. Finally, we let $A'_{\ell-1}$ be any subset of $w_\ell$ vertices of $(V' \cap A_{\ell-1}) \setminus B'_{\ell-1}$; we let $\mathcal{M}'_{\ell-2} \subseteq \mathcal{M}_{\ell-2}$ be the subset of edges with an endpoint in $A'_{\ell-1}$; and we let $B'_{\ell-2}$ be the set of endpoints of the edges of $\mathcal{M}'_{\ell-2}$ lying in $B_{\ell-2}$. This completes the construction of the Path-of-Sets System $\Sigma'$. It is immediate to verify that it has length $\ell$, width $w_\ell$, and that $G_{\Sigma'} \subseteq G$. It remains to prove that it is $\alpha_\ell$-expanding, or equivalently, that $S'_{\ell-1}$ and $S'_\ell$ are $\alpha_\ell$-expanders. Recall that Theorem 6.1 guarantees that both these graphs are $\alpha^*$-expanders, where $\alpha^* \ge (\frac{\alpha_{\ell-1}}{d})^{c_x}$. It is now enough to verify that $\alpha^* \ge \alpha_\ell$, which is immediate to do from the definition of $\alpha_\ell$:
+
+$$ \alpha^* \ge \left(\frac{\alpha_{\ell-1}}{d}\right)^{c_x} = \frac{\left(\alpha^{c_x^{\ell-2}}/d^{c_x^{2\ell-4}}\right)^{c_x}}{d^{c_x}} = \frac{\alpha^{c_x^{\ell-1}}}{d^{c_x^{2\ell-3}} \cdot d^{c_x}} \ge \frac{\alpha^{c_x^{\ell-1}}}{d^{c_x^{2\ell-2}}} = \alpha_\ell. $$
+
+Lastly, the running time of the algorithm is dominated by partitioning $S_{\ell-1}$, and is bounded by $\text{poly}(n) \cdot (\frac{d}{\alpha_{\ell-1}})^{O(\log(d/\alpha_{\ell-1}))} \le \text{poly}(n) \cdot (\frac{d}{\alpha_\ell})^{O(\log(d/\alpha_\ell))}$, as required. $\square$
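+The matching $\mathcal{M}$ between $V'$ and $V''$ used in the proof above can be extracted greedily from the crossing edges; a minimal sketch (illustrative names), relying only on the fact that in a graph of maximum degree $d$, every edge shares an endpoint with at most $2(d-1)$ others:
+
+```python
+def greedy_cut_matching(cut_edges, d):
+    """Greedily build a matching out of the edges crossing a cut.
+
+    Each selected edge rules out at most 2*(d-1) other crossing edges
+    (those sharing one of its endpoints), so the returned matching has
+    size at least len(cut_edges) / (2*d - 1)."""
+    matched_vertices = set()
+    matching = []
+    for u, v in cut_edges:
+        if u not in matched_vertices and v not in matched_vertices:
+            matching.append((u, v))
+            matched_vertices.update((u, v))
+    assert (2 * d - 1) * len(matching) >= len(cut_edges)
+    return matching
+```
+
+This greedy bound is slightly weaker than the $m/d$-type bound invoked in the proof, but suffices for the same asymptotics.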
+
+We apply Corollary 6.7 to the input graph $G$, with the parameter $\ell = 48$, obtaining a subgraph $G_\Sigma \subseteq G$, and an $\alpha'$-expanding Path-of-Sets System $\Sigma$ of length 48 and width $w' = \lceil \alpha'n \rceil$, where $\alpha' = (\alpha/d)^{O(1)}$. The running time of the algorithm is $\text{poly}(n) \cdot (\frac{d}{\alpha})^{O(\log(d/\alpha))}$.
+
+## 6.2 Part 2: From Expanding to Strong Path-of-Sets System
+
+The goal of this subsection is to prove the following theorem:
+
+**Theorem 6.8** There is an efficient algorithm, that, given a parameter $\ell > 0$, and an $\alpha$-Expanding Path-of-Sets System $\Sigma$ of width $w$ and length $4\ell$, where $0 < \alpha < 1$, such that the corresponding graph $G_\Sigma$ has maximum vertex-degree at most $d$, computes a Strong Path-of-Sets System $\Sigma'$, of width $w' = \Omega(\alpha^3 w / d^4)$ and length $\ell$, such that the maximum vertex degree in the corresponding graph $G_{\Sigma'}$ is at most $d$, and $G_{\Sigma'}$ is a minor of $G_\Sigma$. Moreover, the algorithm computes a model of $G_{\Sigma'}$ in $G_\Sigma$.
+
+We use the following simple claim, whose proof is deferred to the Appendix.
+
+**Claim 6.9** There is an efficient algorithm, that, given an $\alpha$-expander $G$, whose maximum vertex degree is at most $d$, where $0 < \alpha < 1$, together with two disjoint subsets $A, B$ of its vertices of cardinality $z$ each, computes a collection $\mathcal{P}$ of $\lceil \alpha z / d \rceil$ disjoint paths, connecting vertices of $A$ to vertices of $B$ in $G$.
+
+We will also use the following theorem, whose proof is similar to some arguments that appeared in [CC16], and is deferred to the Appendix.
+
+**Theorem 6.10** There is an efficient algorithm, that, given an $\alpha$-Expanding Path-of-Sets System $\Sigma = (S, M, A_1, B_3)$ of width $w$ and length $3$, where $0 < \alpha < 1$, and the corresponding graph $G_\Sigma$ has maximum vertex degree at most $d$, computes subsets $\hat{A}_1 \subseteq A_1, \hat{B}_3 \subseteq B_3$ of $\Omega(\alpha^2 w / d^3)$ vertices each, such that $\hat{A}_1 \cup \hat{B}_3$ is well-linked in $G_\Sigma$.
+
+We are now ready to complete the proof of Theorem 6.8.
+
+**Proof of Theorem 6.8.** We construct a Strong Path-of-Sets System $\Sigma' = (\mathcal{S}', \mathcal{M}', A_1', B_\ell')$ of length $\ell$ and width $w'$, denoting $\mathcal{S}' = (S_1', \dots, S_\ell')$. For all $1 \le i \le \ell$, the corresponding vertex sets $A_i$ and $B_i$ are denoted by $A'_i$ and $B'_i$, respectively.
+
+For all $1 \le i \le \ell$, we let $\Sigma_i$ be the $\alpha$-expanding Path-of-Sets System of width $w$ and length 3 obtained by using the clusters $S_{4i-3}$, $S_{4i-2}$, $S_{4i-1}$, and the matchings $\mathcal{M}_{4i-3}$ and $\mathcal{M}_{4i-2}$. In order to define the new Path-of-Sets System, for each $1 \le i \le \ell$, we set $S'_i = G_{\Sigma_i}$. We apply Theorem 6.10 to $\Sigma_i$, to obtain subsets $\hat{A}_i \subseteq A_{4i-3}$, $\hat{B}_i \subseteq B_{4i-1}$ of $\Omega(\alpha^2 w/d^3)$ vertices each, such that $\hat{A}_i \cup \hat{B}_i$ is well-linked in $S'_i$.
+
+In order to complete the construction of the Path-of-Sets System $\Sigma'$, we let $A'_1 \subseteq \hat{A}_1$ be any subset of $w'$ vertices, and we define $B'_\ell \subseteq \hat{B}_\ell$ similarly. It remains to define, for each $1 \le i < \ell$, the matching $\mathcal{M}'_i$. We will ensure that the endpoints of the resulting matching are contained in $\hat{B}_i$ and $\hat{A}_{i+1}$, respectively, ensuring that the resulting Path-of-Sets System is strong.
+
+Consider some index $1 \le i < \ell$. Recall that we have computed the sets $\hat{B}_i \subseteq B_{4i-1}$, $\hat{A}_{i+1} \subseteq A_{4i+1}$ of vertices. We let $E'_i \subseteq \mathcal{M}_{4i-1}$ be the set of edges incident to the vertices of $\hat{B}_i$, and we denote by $\tilde{A}_{4i} \subseteq A_{4i}$ the set of vertices in $A_{4i}$ that serve as their endpoints. Similarly, we let $E''_i \subseteq \mathcal{M}_{4i}$ be the set of edges incident to the vertices of $\hat{A}_{i+1}$, and we denote by $\tilde{B}_{4i} \subseteq B_{4i}$ the set of vertices in $B_{4i}$ that serve as their endpoints. From Claim 6.9, there is a set $\mathcal{Q}_i$ of disjoint paths in $S_{4i}$, connecting vertices of $\tilde{A}_{4i}$ to vertices of $\tilde{B}_{4i}$, of cardinality $w' = \Omega(\alpha^3 w/d^4)$. By extending the paths in $\mathcal{Q}_i$ to include the edges of $E'_i \cup E''_i$ incident to them, we obtain a collection $\mathcal{Q}'_i$ of $w'$ disjoint paths in $S_{4i} \cup \mathcal{M}_{4i-1} \cup \mathcal{M}_{4i}$, connecting vertices of $\hat{B}_i$ to vertices of $\hat{A}_{i+1}$. We denote the endpoints of the paths in $\mathcal{Q}'_i$ lying in $\hat{B}_i$ by $B'_i$, and the endpoints of the paths in $\mathcal{Q}'_i$ lying in $\hat{A}_{i+1}$ by $A'_{i+1}$. The paths in $\mathcal{Q}'_i$ naturally define the matching $\mathcal{M}'_i$ between the vertices of $B'_i$ and the vertices of $A'_{i+1}$. This concludes the definition of the Path-of-Sets System $\Sigma'$. It is immediate to verify that it is a strong Path-of-Sets System of length $\ell$ and width $w'$, and to obtain a model of $G_{\Sigma'}$ in $G_\Sigma$. Note that graph $G_{\Sigma'}$ has maximum vertex degree at most *d*.
+
+Recall that in Part 1 of the algorithm, we have obtained a sub-graph $G_\Sigma \subseteq G$, and an $\alpha'$-expanding Path-of-Sets System $\Sigma$ of length 48 and width $w' = \lceil \alpha'n \rceil$, where $\alpha' = (\alpha/d)^{O(1)}$. Applying Theorem 6.8 to $\Sigma$, we obtain a Strong Path-of-Sets System $\Sigma'$ of length 12 and width $w'' = \Omega\left(\frac{(\alpha')^3 w'}{d^4}\right) = \Omega\left(\frac{(\alpha')^4}{d^4}n\right) = \left(\frac{\alpha}{d}\right)^{O(1)} \cdot n$. We have also computed a model of $G_{\Sigma'}$ in $G$, and established that the maximum vertex degree in $G_{\Sigma'}$ is at most *d*. For convenience, we let *c'* be a constant, such that $w'' \ge \frac{\alpha^{c'}}{d^{c'}}n$.
+
+## 6.3 From Strong Path-of-Sets System to Path-of-Expanders System
+
+The goal of this subsection is to prove the following theorem:
+
+**Theorem 6.11** There is an efficient algorithm, that, given a Strong Path-of-Sets System $\Sigma$ of width $w$ and length 12, such that the corresponding graph $G_\Sigma$ has at most *n* vertices and has maximum vertex degree at most *d*, computes a Path-of-Expanders System $\Pi$ of width $\hat{w} = \Omega\left(\frac{w^4}{d^2 n^3}\right)$ and expansion $\hat{\alpha} \ge \Omega\left(\frac{w^2}{n^2 d}\right)$, whose corresponding graph $G_{\Pi}$ has maximum vertex degree at most *d* + 1 and is a minor of $G_\Sigma$. Moreover, the algorithm computes a model of $G_{\Pi}$ in $G_\Sigma$.
+
+Before we prove Theorem 6.11, we complete the proof of Theorem 2.4 using it. Recall that our input is an $\alpha$-expander $G$, for some $0 < \alpha < 1$, with $|V(G)| = n$, such that the maximum vertex degree in $G$ is at most $d$. Our goal is to provide an algorithm that computes a Path-of-Expanders System $\Pi$ of expansion $\tilde{\alpha} \ge (\frac{\alpha}{d})^{\hat{c}_1}$ and width $\tilde{w} \ge n \cdot (\frac{\alpha}{d})^{\hat{c}_2}$, such that the maximum vertex degree in $G_{\Pi}$ is at most $d+1$, and to compute a model of $G_{\Pi}$ in $G$.
+
+Recall that in Step 2 we have constructed a Strong Path-of-Sets System $\Sigma'$ of length 12 and width $w'' \ge \frac{\alpha^{c'}}{d^{c'}}n$, for some constant $c'$, such that $G_{\Sigma'}$ has maximum vertex degree at most $d$. We have also computed a model of $G_{\Sigma'}$ in $G$. Our last step is to apply Theorem 6.11 to $\Sigma'$. As a result, we obtain a Path-of-Expanders System $\Pi$ of width $\hat{w} = \Omega(\frac{(w'')^4}{d^2n^3})$ and expansion $\hat{\alpha} \ge \Omega(\frac{(w'')^2}{n^2d})$, whose corresponding graph $G_{\Pi}$ has maximum vertex degree at most $d+1$. We also obtain a model of $G_{\Pi}$ in $G_{\Sigma'}$, and hence a model of $G_{\Pi}$ in $G$.
+
+Substituting the value $w'' \ge \frac{\alpha^{c'}}{d^{c'}}n$, we get that the width of the Path-of-Expanders System is $\Omega(\frac{\alpha^{4c'}}{d^{4c'+2}}) \cdot n$, and that its expansion is $\Omega(\frac{\alpha^{2c'}}{d^{2c'+1}})$. By appropriately setting the constants $\hat{c}_1$ and $\hat{c}_2$, we ensure that the width of the Path-of-Expanders System is at least $n \cdot (\frac{\alpha}{d})^{\hat{c}_2}$ and its expansion is at least $(\frac{\alpha}{d})^{\hat{c}_1}$.
+
+In the remainder of this section, we prove Theorem 6.11. We can assume w.l.o.g. that $w^4 \ge 2^{14}n^3d^2$, since otherwise it is sufficient to produce a Path-of-Expanders System of width 1, which is trivial to do. We denote the input Strong Path-of-Sets System by $\Sigma = (\mathcal{S}, \mathcal{M}, A_1, B_{12})$, where $\mathcal{S} = (S_1, \dots, S_{12})$, and we let $G_{\Sigma}$ be its corresponding graph. For convenience, we denote by $\mathcal{I}_{\text{even}}$ and $\mathcal{I}_{\text{odd}}$ the sets of all even and all odd indices in $\{1, \dots, 12\}$, respectively. The algorithm consists of three steps. In the first step, for every index $i \in \mathcal{I}_{\text{even}}$, we find a large set $\mathcal{P}_i$ of disjoint paths connecting $A_i$ to $B_i$ in $S_i$, and a subgraph $T_i \subseteq S_i$ that is an $\hat{\alpha}$-expander, such that the paths in $\mathcal{P}_i$ are disjoint from $T_i$. In the second step, for each such index $i \in \mathcal{I}_{\text{even}}$, we compute another set $\mathcal{Q}_i$ of disjoint paths in $S_i$, and a large enough subset $\mathcal{P}'_i \subseteq \mathcal{P}_i$ of paths, such that every path in $\mathcal{Q}_i$ connects a vertex on a distinct path of $\mathcal{P}'_i$ to a distinct vertex of $T_i$. In the third and the final step we compute the Path-of-Expanders System $\Pi$ and a model of $G_{\Pi}$ in $G_{\Sigma}$.
+
+**Step 1.** In this step, we prove the following lemma.
+
+**Lemma 6.12** *There is an efficient algorithm, that, given an index $i \in \mathcal{I}_{\text{even}}$, computes a set $\mathcal{P}_i$ of $\lfloor \frac{w^2}{16nd} \rfloor$ paths in $S_i$, and a subgraph $T_i \subseteq S_i$, such that:*
+
+* graph $T_i$ is an $\hat{\alpha}$-expander, and it contains at least $w/2$ vertices of $A_i$;
+
+* the paths in $\mathcal{P}_i$ are disjoint from each other; they are also disjoint from $T_i$ and internally disjoint from $A_i \cup B_i$;
+
+* every path in $\mathcal{P}_i$ connects a vertex of $A_i$ to a vertex of $B_i$; and
+
+* every path in $\mathcal{P}_i$ has length at most $2n/w$.
+
+**Proof:** For convenience, we omit the subscript $i$ in this proof. We are given a graph $S$ that contains at most $n$ vertices and has maximum vertex degree at most $d$, and two disjoint subsets $A, B$ of $V(S)$ of cardinality $w$ each, such that each of $A, B$ is well-linked in $S$. Therefore, there is a set $\mathcal{P}$ of $w$ disjoint paths in $S$, connecting vertices of $A$ to vertices of $B$, such that the paths in $\mathcal{P}$ are internally disjoint from $A \cup B$. We say that a path in $\mathcal{P}$ is short if it contains at most $2n/w$ vertices, and otherwise it is long. Since $|V(S)| \le n$, at most $w/2$ paths in $\mathcal{P}$ can be long, and the remaining paths must be short. Let $\mathcal{P}' \subseteq \mathcal{P}$ be any subset of $\lfloor \frac{w^2}{16nd} \rfloor$ short paths in $\mathcal{P}$. It is now sufficient to show an algorithm that computes an $\hat{\alpha}$-expander $T \subseteq S$, such that $T$ is disjoint from the paths in $\mathcal{P}'$. In order to do so, we let $E'$ be the set of all edges lying on the paths in $\mathcal{P}'$, so $|E'| \le |{\mathcal{P}'}| \cdot \frac{2n}{w} \le \left\lfloor \frac{w^2}{16nd} \right\rfloor \cdot \frac{2n}{w} \le \frac{w}{8}$.
+
+We start with $T = S \setminus E'$, and then iteratively remove edges from $T$, until we obtain a connected component of the resulting graph that is an $\hat{\alpha}$-expander, containing at least $w/2$ vertices of $A$. Notice that the original graph $T$ is not necessarily connected. We also maintain a set $E''$ of edges that we remove from $T$, initialized to $E'' = \emptyset$. Our algorithm is iterative. In every iteration, we apply Theorem 2.2 to the current graph $T$, to obtain a cut $(Z, Z')$ in $T$. If the sparsity of the cut is at least $\frac{w}{16n}$, that is, $|E_T(Z, Z')| \ge \frac{w}{16n} \min\{|Z|, |Z'|\}$, then we terminate the algorithm. Theorem 2.2 then guarantees that the expansion of $T$ is $\Omega(\frac{w^2}{n^2d})$, that is, $T$ is an $\hat{\alpha}$-expander. Otherwise, $|E_T(Z, Z')| < \frac{w}{16n} \min\{|Z|, |Z'|\}$. Assume w.l.o.g. that $|Z \cap A| \ge |Z' \cap A|$. We then add the edges of $E_T(Z, Z')$ to $E''$, set $T = T[Z]$, and continue to the next iteration. Note that the number of edges added to $E''$ during this iteration is at most $\frac{|Z'|w}{16n}$.
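+A toy version of this pruning loop is sketched below, with the approximate sparsest-cut procedure of Theorem 2.2 replaced by brute force (so it is only suitable for tiny graphs), and with the simplifying assumption that we keep the larger side of each cut rather than the side with more vertices of $A$; all names are illustrative.
+
+```python
+from itertools import combinations
+
+def cut_edges(edges, side):
+    side = set(side)
+    return [e for e in edges if (e[0] in side) != (e[1] in side)]
+
+def sparsest_cut(vertices, edges):
+    """Brute-force sparsest cut: minimize |E(Z,Z')| / min(|Z|,|Z'|).
+    Exponential time; stands in for the approximation of Theorem 2.2."""
+    vs = sorted(vertices)
+    best_value, best_side = float("inf"), None
+    for k in range(1, len(vs) // 2 + 1):
+        for side in combinations(vs, k):
+            value = len(cut_edges(edges, side)) / len(side)
+            if value < best_value:
+                best_value, best_side = value, set(side)
+    return best_value, best_side
+
+def prune_to_expander(vertices, edges, beta):
+    """Sketch of the pruning loop: while the current graph has a cut of
+    sparsity below beta, remove its edges and keep one side (here the
+    larger side; the proof keeps the side with more vertices of A)."""
+    V, E = set(vertices), list(edges)
+    while len(V) > 1:
+        value, side = sparsest_cut(V, E)
+        if value >= beta:
+            break
+        V = side if len(side) > len(V) - len(side) else V - side
+        E = [e for e in E if e[0] in V and e[1] in V]
+    return V, E
+```
+
+On two triangles joined by a single bridge edge, one iteration removes the bridge and keeps one triangle, which is already a good expander.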
+
+Clearly, the graph $T$ we obtain at the end of the algorithm is an $\hat{\alpha}$-expander, and it is disjoint from all paths in $\mathcal{P}'$. It now only remains to show that $T$ contains at least $w/2$ vertices of $A$. Assume for contradiction that this is false.
+
+Assume that the algorithm performs $r$ iterations, and for each $1 \le j \le r$, let $(Z_j, Z'_j)$ be the cut computed by the algorithm in iteration $j$, where $|Z_j \cap A| \ge |Z'_j \cap A|$. Then for all $1 \le j \le r$, $|Z'_j \cap A| \le w/2$ must hold. Let $n_j = |Z'_j \cap A|$. Since the vertices of $A$ are well-linked in $S$, $|\delta_S(Z'_j)| \ge n_j$. Therefore:
+
+$$
+\sum_{j=1}^{r} |\delta_S(Z'_j)| \geq \sum_{j=1}^{r} n_j \geq w/2,
+$$
+
+since we have assumed that the final graph $T$ has fewer than $w/2$ vertices of $A$. On the other hand,
+all edges in $\bigcup_{j=1}^r \delta_S(Z'_j)$ are contained in $E' \cup E''$, and so:
+
+$$
+\sum_{j=1}^{r} |\delta_S(Z'_j)| \leq 2|E' \cup E''|.
+$$
+
+Recall that $|E'| \le \frac{w}{8}$, and, since the sets $Z'_j$ are disjoint and so $\sum_{j=1}^r |Z'_j| \le n$, we get that $|E''| \le \frac{w}{16n} \cdot n = \frac{w}{16}$. Therefore, $\sum_{j=1}^r |\delta_S(Z'_j)| \le 2|E' \cup E''| \le \frac{3w}{8} < \frac{w}{2}$,
+a contradiction. $\square$
+
+**Step 2.** For every index $i \in I_{\text{even}}$, let $A'_i \subseteq A_i$ be the subset of vertices that serve as endpoints for the paths in $\mathcal{P}_i$. The goal of this step is to prove the following lemma.
+
+**Lemma 6.13** There is an efficient algorithm, that, given an index $i \in I_{\text{even}}$, computes a subset $\mathcal{P}_i' \subseteq \mathcal{P}_i$ of $\hat{w}$ paths, and, for each path $P \in \mathcal{P}_i'$, a path $Q_P$ in $S_i$, that connects a vertex of $P$ to a vertex of $T_i$, such that the paths in set $\mathcal{Q}_i = \{Q_P \mid P \in \mathcal{P}_i'\}$ are disjoint from each other, internally disjoint from $T_i$, and internally disjoint from the paths in $\mathcal{P}_i'$.
+
+**Proof:** We fix an index $i \in I_{\text{even}}$, and for convenience omit the subscript $i$ for the remainder of the proof. Recall that we are given a set $A' \subseteq A$ of $\left\lfloor \frac{w^2}{16nd} \right\rfloor$ vertices, that serve as endpoints of the paths in $\mathcal{P}$. Recall that $T$ contains at least $w/2$ vertices of $A$. We let $A'' \subseteq A$ be any set of $\left\lfloor \frac{w^2}{16nd} \right\rfloor$ vertices of $A$ lying in $T$. Since the set $A$ of vertices is well-linked in $S$, there is a set $\mathcal{Q}$ of $\left\lfloor \frac{w^2}{16nd} \right\rfloor$ node-disjoint paths, connecting the vertices of $A'$ to the vertices of $A''$ in $S$. We say that a path in $\mathcal{Q}$ is short if it contains fewer than $\frac{64n^2d}{w^2}$ vertices, and otherwise we say that it is long. Since $S$ contains at most $n$ vertices, and the paths in $\mathcal{Q}$ are disjoint, at most $\frac{w^2}{64nd}$ paths of $\mathcal{Q}$ are long. We let $\hat{\mathcal{Q}} \subseteq \mathcal{Q}$ be the set of all short paths, so $|\hat{\mathcal{Q}}| \ge \frac{w^2}{64nd}$, and we let $\hat{A} \subseteq A'$ be the set of vertices that serve as endpoints of the paths in $\hat{\mathcal{Q}}$. We also let $\hat{\mathcal{P}} \subseteq \mathcal{P}$ be the set of paths originating from the vertices in $\hat{A}$. We are now ready to compute the set $\mathcal{P}'$ of paths, and the corresponding paths $Q_P$ for all $P \in \mathcal{P}'$.
+
We start with $\mathcal{P}' = \emptyset$, and then iterate. While $\hat{\mathcal{P}} \neq \emptyset$, let $P$ be any path in $\hat{\mathcal{P}}$, and let $a \in \hat{A}$ be the vertex from which it originates. Let $Q$ be the path of $\hat{\mathcal{Q}}$ originating at $a$. We prune the path $Q$ as needed, so that it connects a vertex of $P$ to a vertex of $T$, but is internally disjoint from $P$ and $T$. Let $Q'$ be the resulting path. We then add $P$ to $\mathcal{P}'$, and we let $Q_P = Q'$. Next, we delete from $\hat{\mathcal{P}}$ all paths that intersect $Q'$ (since the length of $Q'$ is at most $\frac{64n^2d}{w^2}$, we delete at most $\frac{64n^2d}{w^2}$ paths from $\hat{\mathcal{P}}$), and for every path $P^*$ that we delete from $\hat{\mathcal{P}}$, we delete from $\hat{\mathcal{Q}}$ the path sharing an endpoint with $P^*$ (so at most $\frac{64n^2d}{w^2}$ paths are deleted from $\hat{\mathcal{Q}}$). Similarly, we delete from $\hat{\mathcal{Q}}$ every path that intersects $P$ (since the length of $P$ is at most $2n/w$, we delete at most $\frac{2n}{w} \le \frac{64n^2d}{w^2}$ paths from $\hat{\mathcal{Q}}$), and for every path $Q^*$ that we delete from $\hat{\mathcal{Q}}$, we delete from $\hat{\mathcal{P}}$ the path sharing an endpoint with $Q^*$ (again, at most $\frac{64n^2d}{w^2}$ paths are deleted from $\hat{\mathcal{P}}$). Overall, in a single iteration we delete at most $\frac{128n^2d}{w^2}$ paths from $\hat{\mathcal{P}}$, and at most $\frac{128n^2d}{w^2}$ paths from $\hat{\mathcal{Q}}$. The paths that remain in the two sets form pairs: for every path $P^* \in \hat{\mathcal{P}}$, there is a path $Q^* \in \hat{\mathcal{Q}}$ originating at the same vertex of $A$, and vice versa. Furthermore, all paths in $\hat{\mathcal{P}} \cup \hat{\mathcal{Q}}$ are disjoint from the paths in $\mathcal{P}' \cup \{Q_P \mid P \in \mathcal{P}'\}$.

At the end of the algorithm, we obtain a subset $\mathcal{P}' \subseteq \mathcal{P}$ of paths, and for each path $P \in \mathcal{P}'$, a path $Q_P$ in $S$, connecting a vertex of $P$ to a vertex of $T$, such that the paths in the set $\mathcal{Q}' = \{Q_P \mid P \in \mathcal{P}'\}$ are disjoint from each other, internally disjoint from $T$, and internally disjoint from the paths in $\mathcal{P}'$. It now only remains to show that $|\mathcal{P}'| \ge \hat{w}$.

Recall that we start with $|\hat{\mathcal{P}}| \ge \frac{w^2}{64nd}$. In every iteration, we add one path to $\mathcal{P}'$, and delete at most $\frac{128n^2d}{w^2}$ paths from $\hat{\mathcal{P}}$. Since we have assumed that $w^4 \ge 2^{14}n^3d^2$, we get that $\frac{256n^2d}{w^2} \le \frac{w^2}{64nd}$. It is then easy to verify that at the end of the algorithm, $|\mathcal{P}'| \ge \left\lfloor \frac{|\hat{\mathcal{P}}|}{256n^2d/w^2} \right\rfloor \ge \Omega\left(\frac{w^4}{n^3d^2}\right) = \hat{w}$. $\square$

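The iterative pruning in the proof above can be sketched in code; this is a minimal illustration on hypothetical toy data (paths are plain vertex lists, and `P_paths[i]`, `Q_paths[i]` share an origin vertex), not the construction itself.

```python
# Sketch of the greedy pruning loop from the proof of Lemma 6.13:
# pick any surviving pair (P, Q), keep it, then drop every other pair
# whose P-path meets the kept Q, or whose Q-path meets the kept P.
def select_disjoint_pairs(P_paths, Q_paths):
    alive = set(range(len(P_paths)))          # indices still in P-hat / Q-hat
    chosen = []                               # pairs (P, Q_P) added to P', Q'
    while alive:
        i = min(alive)                        # any surviving pair works
        P, Q = P_paths[i], Q_paths[i]
        chosen.append((P, Q))
        alive.discard(i)
        conflict = {j for j in alive
                    if set(P_paths[j]) & set(Q) or set(Q_paths[j]) & set(P)}
        alive -= conflict
    return chosen

pairs = select_disjoint_pairs(
    [[1, 2], [3, 4], [2, 5]],                 # P-path of pair 2 meets Q of pair 0
    [[2, 9], [4, 8], [5, 7]])
```

Each kept pair eliminates only a bounded number of other pairs, which is exactly the counting used to lower-bound $|\mathcal{P}'|$.
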
**Step 3.** In this step we complete the construction of the Path-of-Expanders System $\Pi$. We will also define a minor $G'$ of $G_\Sigma$ and compute a model of $G_\Pi$ in $G'$; it is then easy to obtain a model of $G_\Pi$ in $G_\Sigma$.

Consider some index $i \in I_{\text{even}}$, and the sets $\mathcal{P}'_i, \mathcal{Q}_i$ of paths computed in Step 2. Let $P \in \mathcal{P}'_i$ be any such path, and assume that it connects a vertex $a_P \in A_i$ to a vertex $b_P \in B_i$. Let $v_P \in P$ be the endpoint of $Q_P$ lying on $P$, and let $c_P$ be its other endpoint. Finally, let $e_P$ be the edge of $M_{i-1}$ incident to $a_P$, and let $b'_P \in B_{i-1}$ be its other endpoint. Similarly, if $i \neq 12$, let $e'_P$ be the edge of $M_i$ incident to $b_P$, and let $a'_P \in A_{i+1}$ be its other endpoint (see Figure 9(a)).

We contract the edge $e_P$ and all edges lying on the sub-path of $P$ between $a_P$ and $v_P$, so that $v_P$ and $b'_P$ merge; the resulting vertex is denoted by $b'_P$. We also suppress all inner vertices on the path $Q_P$, obtaining an edge $\hat{e}_P$ connecting $b'_P$ to $c_P$. Finally, if $i \neq 12$, then we contract all edges on the sub-path of $P$ between $v_P$ and $b_P$, obtaining an edge $\hat{e}'_P = (b'_P, a'_P)$. We let $\hat{E}_i = \{\hat{e}_P \mid P \in \mathcal{P}'_i\}$ and $\hat{E}'_i = \{\hat{e}'_P \mid P \in \mathcal{P}'_i\}$ be the sets of these newly defined edges. Notice that the edges of $\hat{E}_i$ connect a subset of $\hat{w}$ vertices of $B_{i-1}$ (that we denote by $\hat{B}_{i-1}$) to a subset of $\hat{w}$ vertices of $T_i$ (that we denote by $\hat{C}_i$), and, for $i \neq 12$, the edges of $\hat{E}'_i$ connect every vertex of $\hat{B}_{i-1}$ to some vertex of $A_{i+1}$; we denote the set of endpoints of these edges lying in $A_{i+1}$ by $\hat{A}_{i+1}$.

Figure 9: The contractions of the edges on paths $P$ and $Q_P$.

Once we perform this procedure for every path $P \in \mathcal{P}'_i$, for all $i \in I_{\text{even}}$, we delete from the resulting graph all edges and vertices, except those lying in the graphs $S_i$ for $i \in I_{\text{odd}}$, the graphs $T_i$ for $i \in I_{\text{even}}$, and the edges in $\hat{E}_i \cup \hat{E}'_i$ for $i \in I_{\text{even}}$. The resulting graph, denoted by $G'$, is a minor of $G$, and it is easy to verify that its maximum vertex degree is at most $d+1$.

We now define a Path-of-Expanders System $\Pi = (\Sigma, \mathcal{M}, A_1, B_6, \mathcal{T}, \mathcal{M}')$, where the clusters of $\Sigma$ are denoted by $\tilde{S}_1, \ldots, \tilde{S}_6$; for each $1 \le i \le 6$, the corresponding sets of vertices are denoted by $\tilde{A}_i, \tilde{B}_i$ and $\tilde{C}_i$, respectively; the corresponding matching of $\mathcal{M}'$ is denoted by $\tilde{\mathcal{M}}'_i$, and the corresponding expander of $\mathcal{T}$ by $\tilde{T}_i$. For all $1 \le i < 6$, we also denote the corresponding matching of $\mathcal{M}$ by $\tilde{\mathcal{M}}_i$.

For each $1 \le i \le 6$, we let the cluster $\tilde{S}_i$ of $\Sigma$ be $S_{2i-1}$, and we let the expander $\tilde{T}_i$ be $T_{2i}$. We also set $\tilde{C}_i = \hat{C}_{2i}$ and $\tilde{\mathcal{M}}'_i = \hat{E}_{2i}$. If $i > 1$, then we let $\tilde{A}_i = \hat{A}_{2i-1}$, and we let $\tilde{A}_1$ be any subset of $\hat{w}$ vertices of $A_1$. Similarly, if $i < 6$, then we let $\tilde{B}_i = \hat{B}_{2i-1}$, and we let $\tilde{B}_6$ be any subset of $\hat{w}$ vertices of $B_{11}$. Finally, for $i < 6$, we let $\tilde{\mathcal{M}}_i = \hat{E}'_{2i}$. It is immediate to verify that we have obtained a Path-of-Expanders System of width $\hat{w}$ and expansion $\alpha$, and a model of $G_\Pi$ in $G'$. It is now immediate to obtain a model of $G_\Pi$ in $G_\Sigma$.

# 7 Proof of Theorem 1.2

The goal of this section is to provide the proof of Theorem 1.2. Notice that Theorem 1.2 provides a slightly weaker dependence on $n$ in the minor size than Theorem 1.1, but it has several advantages: its proof is much simpler, the algorithm's running time is polynomial in $n$, $d$ and $\alpha$, and it provides a better dependence on $\alpha$ and $d$ in the bound on the minor size. Our algorithm also has an additional useful property: if it fails to find the required model, then with high probability it certifies that the input graph is not an $\alpha$-expander, by exhibiting a cut of sparsity less than $\alpha$.

Let $G = (V, E)$ be the given $n$-vertex $\alpha$-expander with maximum vertex degree at most $d$. As in the proof of Theorem 1.1, given a graph $H$ with $n'$ vertices and $m'$ edges, we can construct another graph $H'$, whose maximum vertex degree is at most 3, with $|V(H')| \le n' + 2m' \le 2\left\lfloor \frac{n}{c^* \log^2 n} \cdot \frac{\alpha^3}{d^5} \right\rfloor$, such that $H$ is a minor of $H'$. It is now enough to provide an efficient algorithm that computes a model of $H'$ in $G$. For convenience of notation, we denote $H'$ by $H = (U, F)$, and we denote $U = \{u_1, \ldots, u_{|U|}\}$. We can assume that $n > c_0$ for a large enough constant $c_0$, by appropriately setting the constant $c^*$: otherwise, it is enough to show that every graph of size 1 is a minor of $G$, which is trivial.

Our algorithm consists of a number of iterations. We say that a partition $(V', V'')$ of $V$ is good iff $|V'|, |V''| \ge n/(4d)$, and $G[V']$, $G[V'']$ are both connected graphs. We start with an arbitrary good partition $(V_1, V_2)$ of $V$, obtained by using the algorithm from Observation 3.2 with $r=2$. Assume without loss of generality that $|V_1| \ge |V_2|$. We now try to compute a model of $H$ in $G$, by first embedding the vertices of $H$ into connected sub-graphs of $G[V_2]$, and then routing the edges of $H$ in $G[V_1]$. We show an efficient algorithm that, with high probability, returns one of the following:

* either a good partition $(V'_1, V'_2)$ such that $|E(V'_1, V'_2)| < |E(V_1, V_2)|$ (in this case, we proceed to the next iteration); or

* a model of $H$ in $G$ (in this case, we terminate the algorithm and return the model).

Clearly, we terminate after at most $|E|$ iterations, succeeding with high probability. We now describe a single iteration in detail. Recall that we are given a good partition $(V_1, V_2)$ of $V$ with $|V_1| \ge |V_2|$. Since $G$ is an $\alpha$-expander, we have $|E(V_1, V_2)| \ge \alpha n/(4d)$ (note that, if this is not the case, we have found a cut $(V_1, V_2)$ of sparsity less than $\alpha$). Since the maximum vertex degree in $G$ is bounded by $d$, we can efficiently find a matching $\mathcal{M} \subseteq E(V_1, V_2)$ of cardinality at least $\alpha n/(8d^2)$. We denote the endpoints of the edges in $\mathcal{M}$ lying in $V_1$ and $V_2$ by $Z$ and $Z'$, respectively. Let $\rho := 3 \cdot \lceil 4cd^2 \log^2 n/\alpha^2 \rceil$, where $c$ is the constant from Lemma 5.1.

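The greedy extraction of the matching $\mathcal{M}$ from the cut edges can be sketched as follows (toy input; `greedy_matching` is an illustrative name). With maximum vertex degree $d$, every kept edge blocks at most $2d$ cut edges, which gives the $\alpha n/(8d^2)$ cardinality bound.

```python
# Greedy matching sketch: keep a cut edge whenever both of its
# endpoints are still unmatched.
def greedy_matching(cut_edges):
    used, matching = set(), []
    for u, v in cut_edges:
        if u not in used and v not in used:
            matching.append((u, v))
            used.update((u, v))
    return matching

# Toy cut edges between V1 = {1, 2, 3} and V2 = {10, 11, 12, 13}.
M = greedy_matching([(1, 10), (1, 11), (2, 10), (2, 12), (3, 13)])
```
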
Recall that $U$ is the set of vertices of the graph $H$. We apply Observation 3.2 to the graph $G[V_2]$, together with $R = Z'$ and parameter $r = |U|$, to obtain a collection $\mathcal{W} = \{W_1, \ldots, W_{|U|}\}$ of disjoint connected subgraphs of $G[V_2]$, such that for all $1 \le i \le |U|$:

$$ |V(W_i) \cap Z'| \ge \left\lfloor \frac{|Z'|}{d|U|} \right\rfloor \ge \left\lfloor \frac{\alpha n}{8d^3|U|} \right\rfloor \ge \left\lfloor \frac{\alpha n}{8d^3} \cdot \frac{c^* d^5 \log^2 n}{2n\alpha^3} \right\rfloor = \left\lfloor \frac{c^* d^2 \log^2 n}{16\alpha^2} \right\rfloor $$

Here, we have used the fact that $|U| \le 2 \left\lfloor \frac{n}{c^* \log^2 n} \cdot \frac{\alpha^3}{d^5} \right\rfloor$. By appropriately setting the constant $c^*$ in the bound on $|U|$, we can ensure that for all $1 \le i \le |U|$, $|V(W_i) \cap Z'| \ge 3\rho$.

Recall that we are given a graph $H = (U, F)$ with maximum vertex degree 3, and that we have denoted $U = \{u_1, \ldots, u_{|U|}\}$. For $1 \le i \le |U|$, we think of the graph $W_i$ as representing the vertex $u_i$ of $H$. For each $1 \le i \le |U|$, and for each edge $e \in \delta_H(u_i)$, we select an arbitrary subset $Z'_i(e) \subseteq V(W_i) \cap Z'$ of $\rho$ vertices, such that all resulting sets $\{Z'_i(e) \mid e \in \delta_H(u_i)\}$ of vertices are mutually disjoint. Let $E_i(e) \subseteq \mathcal{M}$ be the subset of edges of $\mathcal{M}$ that have an endpoint in $Z'_i(e)$, so $|E_i(e)| = \rho$. We let $Z_i(e)$ be the set of vertices of $Z$ that serve as endpoints of the edges in $E_i(e)$. Notice that all resulting sets $\{Z_i(e) \mid 1 \le i \le |U|, e \in \delta_H(u_i)\}$ are mutually disjoint, and each of them contains $\rho$ vertices.

We apply the algorithm of Lemma 5.1 to the graph $G[V_1]$, together with the parameter $\alpha/2$ and the family $\{Z_i(e) \mid 1 \le i \le |U|, e \in \delta_H(u_i)\}$ of vertex subsets, that we order appropriately.

**Case 1. The algorithm returns a cut.** In this case, we obtain a cut $(X, Y)$ in $G[V_1]$ of sparsity less than $\alpha/2$. We will compute a good partition $(V'_1, V'_2)$ of $V$ with $|E(V'_1, V'_2)| < |E(V_1, V_2)|$. We need the following simple observation, whose proof appears in the Appendix.

**Observation 7.1** There is an efficient algorithm, that, given a connected graph $G = (V, E)$ and a cut $(X, Y)$ in $G$, produces a cut $(X^*, Y^*)$, whose sparsity is less than or equal to that of $(X, Y)$, such that both $G[X^*]$ and $G[Y^*]$ are connected.

We apply Observation 7.1 to the graph $G[V_1]$ and the cut $(X, Y)$, obtaining a new cut $(X^*, Y^*)$ of sparsity less than $\alpha/2$, such that both $G[X^*]$ and $G[Y^*]$ are connected. For convenience, we denote the cut $(X^*, Y^*)$ by $(X, Y)$, and we assume without loss of generality that $|Y| \le |X|$. Notice that $|Y| \le |V_1|/2 \le |V|/2$.

Since $G$ is an $\alpha$-expander, $|\delta_G(Y)| \ge \alpha|Y|$ (note that, if this is not the case, then we have found a cut $(Y, V \setminus Y)$ of sparsity less than $\alpha$).

Since $\delta_G(Y) = E(X, Y) \cup E(Y, V_2)$, we get that $|E(Y, V_2)| \ge \alpha|Y|/2$, and $|E(X, Y)| < |E(Y, V_2)|$. In particular, $E(Y, V_2) \ne \emptyset$. We now define a new cut $(V'_1, V'_2)$ of $G$, where $V'_2 = V_2 \cup Y$ and $V'_1 = X$. We claim that $(V'_1, V'_2)$ is a good partition of $V(G)$. It is immediate to verify that $|V'_1|, |V'_2| \ge n/(4d)$, and that $G[V'_1] = G[X]$ is connected. Moreover, since $G[Y]$ is connected and $E(Y, V_2) \ne \emptyset$, $G[V'_2] = G[V_2 \cup Y]$ is also connected. Lastly, we claim that $|E(V'_1, V'_2)| < |E(V_1, V_2)|$. Indeed, since $|E(X, Y)| < |E(V_2, Y)|$:

$$|E(V'_1, V'_2)| = |E(V_1, V_2)| - |E(V_2, Y)| + |E(Y, X)| < |E(V_1, V_2)|$$

Therefore, we have computed a good partition $(V'_1, V'_2)$ of $V(G)$ with $|E(V'_1, V'_2)| < |E(V_1, V_2)|$, as required.

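The edge-accounting identity used above can be checked numerically on a toy instance (all names and the graph below are illustrative):

```python
# Moving Y from the V1 side to the V2 side exchanges the E(V2, Y)
# edges for the E(Y, X) edges in the cut.
def cut_size(edges, A, B):
    return sum(1 for u, v in edges
               if (u in A and v in B) or (u in B and v in A))

edges = [(1, 2), (2, 3), (3, 4), (1, 5), (5, 6), (2, 6), (4, 6)]
V1, V2 = {1, 2, 3, 4}, {5, 6}
Y = {3, 4}                       # the Y side of the sparse cut inside G[V1]
X = V1 - Y
lhs = cut_size(edges, X, V2 | Y)
rhs = cut_size(edges, V1, V2) - cut_size(edges, V2, Y) + cut_size(edges, Y, X)
```
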
**Case 2. The algorithm returns paths.** In this case, we have obtained, for every edge $e = (u_i, u_j) \in F$, a path $Q(e)$ in $G[V_1]$, connecting a vertex of $Z_i(e)$ to a vertex of $Z_j(e)$, such that, with high probability, the paths in $\{Q(e) \mid e \in F\}$ are mutually disjoint. If they are not, the algorithm fails; we assume from now on that the paths in $\{Q(e) \mid e \in F\}$ are mutually disjoint. We extend each path $Q(e)$ to include the two edges of $\mathcal{M}$ that are incident to its endpoints, so that $Q(e)$ now connects a vertex of $Z'_i(e)$ to a vertex of $Z'_j(e)$.

We are now ready to define the model of $H$ in $G$. For every $1 \le i \le |U|$, we let $f(u_i) = W_i$, and for every edge $e \in F$, we let $f(e) = Q(e)$. It is immediate to verify that this mapping indeed defines a valid model of $H$ in $G$. This completes the proof of Theorem 1.2.

# A Proof of Corollary 1.3.

In this section we prove Corollary 1.3. We use the following result of Krivelevich [Kri18b]:

**Theorem A.1 (Corollary 1 of [Kri18b])** For every $\epsilon > 0$, there exists $\gamma > 0$, such that for every $n > 0$, a random graph $G \sim G(n, \frac{1+\epsilon}{n})$ w.h.p. contains an induced bounded-degree $\gamma$-expander $G'$ on at least $\gamma n$ vertices.

Let $G \sim G(n, \frac{1+\epsilon}{n})$. From the above theorem, w.h.p., there is an induced bounded-degree $\gamma$-expander $G' \subseteq G$ on at least $\gamma n$ vertices, for some $\gamma$ depending only on $\epsilon$. From Theorem 1.1, every graph $H$ of size at most $c_\epsilon n/\log n$ is a minor of $G'$, where $c_\epsilon$ is some constant depending on $\epsilon$ only. Corollary 1.3 now follows. $\square$

# B Proof of Observation 1.4.

Recall that we are given an integer $s$ and a graph $G = (V, E)$ of size $s$. Assume for now that $2 \le s < 2^{20}$. Let $H_G$ be the graph with $s+1$ vertices and no edges. Notice that the number of vertices in $H_G$ is strictly greater than that in $G$, and hence $H_G$ is not a minor of $G$. The observation now follows, since $20s/\log s \ge s+1$ for all such $s$. Thus, from now on, we assume that $s \ge 2^{20}$, and hence $20s/\log s \ge 2^{20}$.

We denote $\mu(G) = |\{H \mid H \text{ is a minor of } G\}|$. For an integer $r$, let $\mathcal{F}_r$ be the set of all graphs of size at most $r$. The following two observations complete the proof of Observation 1.4.

**Observation B.1** $\mu(G) \le 3^s$.

**Proof:** From the definition of minors, every minor $H$ of $G$ can be identified with a subset $V_H^{\text{del}} \subseteq V$ of deleted vertices, a subset $E_H^{\text{del}} \subseteq E$ of deleted edges, and a subset $E_H^{\text{cont}} \subseteq E$ of contracted edges. Thus:

$$
\mu(G) \leq 2^{|V|} \cdot 3^{|E|} \leq 3^{|V|+|E|} \leq 3^s
\quad \square
$$

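The counting step can be made concrete on a toy graph: each minor is determined by a pair consisting of a subset of deleted vertices and a keep/delete/contract label per edge, giving at most $2^{|V|} \cdot 3^{|E|}$ encodings. A small brute-force sketch (the triangle graph is an arbitrary example):

```python
# Enumerate all minor encodings of a triangle and compare with the bound.
from itertools import product

V = [0, 1, 2]
E = [(0, 1), (1, 2), (0, 2)]
encodings = set(product(product([0, 1], repeat=len(V)),          # vertex deleted?
                        product(('keep', 'delete', 'contract'),  # per-edge label
                                repeat=len(E))))
```

Distinct encodings may of course produce isomorphic minors, so this only upper-bounds $\mu(G)$.
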
**Observation B.2** For every even integer $r \ge 2^{10}$, $|\mathcal{F}_r| \ge r^{r/10}$.

**Proof:** Let $k = \lfloor r^{0.9} \rfloor$. We lower-bound the number of graphs containing exactly $k$ vertices and exactly $r/2$ edges. Notice that, since $r \ge 2^{10}$, $k + r/2 \le r$, so every such graph has size at most $r$. For convenience, assume that the set $V^* = \{1, \ldots, k\}$ of vertices and their indices are fixed. We will first lower-bound the number of vertex-labeled graphs with the set $V^*$ of vertices that contain exactly $r/2$ edges. Since there are $\binom{k}{2}$ 'edge slots', this number is at least:

$$
\binom{\binom{k}{2}}{r/2} \geq \left(\frac{\binom{k}{2}}{r/2}\right)^{r/2} \geq \left(\frac{r^{1.6}}{r}\right)^{r/2} = \left(r^{0.6}\right)^{r/2} = r^{0.3r}
$$

Here, the inequalities hold for all $r \ge 2^{10}$. Notice that two graphs $G_1 = (V^*, E_1)$ and $G_2 = (V^*, E_2)$ with labeled vertices are isomorphic to each other iff there is a permutation of the vertices mapping $E_1$ to $E_2$. Thus, the number of non-isomorphic graphs on $k$ vertices with $r/2$ edges is at least:

$$
\frac{r^{0.3r}}{k!} \geq \frac{r^{0.3r}}{(r^{0.9})!} > \frac{r^{0.3r}}{r^{0.9r^{0.9}}} = r^{r^{0.9}(0.3r^{0.1}-0.9)} \geq r^{r/10}
\quad \square
$$

We are now ready to complete the proof of Observation 1.4. Assume for contradiction that $G$ contains every graph in the family $\mathcal{F}^* = \mathcal{F}_{20s/\log s}$ as a minor. Recall that $20s/\log s \ge 2^{20}$. From the above two observations, $|\mathcal{F}^*| \ge (20s/\log s)^{2s/\log s}$, while $\mu(G) \le 3^s$. It is immediate to verify that $|\mathcal{F}^*| > \mu(G)$, a contradiction. $\square$

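As a single-point sanity check of the final comparison (an illustration, not part of the proof), the logarithms of the two bounds can be compared at $s = 2^{20}$:

```python
# Compare log2 of the family-size lower bound with log2 of the 3^s
# upper bound on the number of minors, at the smallest relevant s.
import math

s = 2 ** 20
r = 20 * s / math.log2(s)            # the size bound 20s / log s
log2_family = (r / 10) * math.log2(r)  # log2 of r^(r/10) <= log2 |F*|
log2_minors = s * math.log2(3)         # log2 of 3^s >= log2 mu(G)
```
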
# C Proofs Omitted from Section 2

## C.1 Proof of Observation 2.1

We assume without loss of generality that $x_1 \ge x_2 \ge \cdots \ge x_r$, and process the integers in this order. When $x_i$ is processed, we add $i$ to $A$ if $\sum_{j \in A} x_j \le \sum_{j \in B} x_j$, and we add it to $B$ otherwise. We claim that at the end of this process, $\sum_{i \in A} x_i, \sum_{i \in B} x_i \ge N/4$ must hold. Indeed, the index 1 is always added to $A$. If $x_1 \ge N/4$ then, since $x_1 \le 3N/4$, it is easy to see that both subsets of integers sum up to at least $N/4$. Otherwise, $|\sum_{i \in A} x_i - \sum_{i \in B} x_i| \le \max_i\{x_i\} = x_1 \le N/4$, and so $\sum_{i \in A} x_i, \sum_{i \in B} x_i \ge N/4$.

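The greedy process above can be sketched directly (illustrative names; the instance below is a toy example with $N = 16$, where every $x_i \le 3N/4$):

```python
# Greedy two-bucket partition: each integer, in non-increasing order,
# goes to the currently lighter bucket.
def balanced_partition(xs):
    A, B = [], []
    for x in sorted(xs, reverse=True):
        (A if sum(A) <= sum(B) else B).append(x)
    return A, B

A, B = balanced_partition([5, 4, 3, 3, 1])   # N = 16
```
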
## C.2 Proof of Claim 2.3.

Our algorithm iteratively removes edges from $T \setminus E'$, until we obtain a connected component of the resulting graph that is an $\alpha/4$-expander. We start with $T' = T \setminus E'$ (notice that $T'$ is not necessarily connected). We also maintain a set $E''$ of edges that we remove from $T'$, initialized to $E'' = \emptyset$. While $T'$ is not an $\alpha/4$-expander, let $(X, Y)$ be a cut of sparsity less than $\alpha/4$ in $T'$, that is, $|E_{T'}(X, Y)| < \alpha \min(|X|, |Y|)/4$. Assume w.l.o.g. that $|X| \ge |Y|$. Update $T'$ to be $T'[X]$, add the edges of $E(X, Y)$ to $E''$, and continue to the next iteration.

Assume that the algorithm performs $r$ iterations, and for each $1 \le i \le r$, let $(X_i, Y_i)$ be the cut computed by the algorithm in iteration $i$. Since $|X_i| \ge |Y_i|$, we have $|Y_i| \le |V(T')|/2$. At the same time, if we denote $E_i = E'' \cap E(X_i, Y_i)$, then $|E_i| < \alpha |Y_i|/4$. Therefore:

$$|E''| = \sum_{i=1}^{r} |E_i| \le \frac{\alpha}{4} \sum_{i=1}^{r} |Y_i|$$

On the other hand, since $T$ is an $\alpha$-expander, the total number of edges leaving each set $Y_i$ in $T$ is at least $\alpha |Y_i|$, and all such edges lie in $E' \cup E''$. Since every such edge may be counted for at most two of the sets $Y_i$:

$$|E'| + |E''| \ge \frac{\alpha}{2} \sum_{i=1}^{r} |Y_i|$$

Combining both bounds, we get that $|E'| \ge \frac{\alpha}{4} \sum_{i=1}^{r} |Y_i|$, so $\sum_{i=1}^{r} |Y_i| \le \frac{4|E'|}{\alpha}$, and therefore $|V(T')| \ge |V(T)| - \frac{4|E'|}{\alpha}$. $\square$

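The pruning loop can be illustrated on tiny graphs, with exhaustive subset search standing in for a sparse-cut oracle; this is a sketch of the control flow only, not an efficient algorithm (the two-triangles instance is an arbitrary example):

```python
# While the current graph has a cut of sparsity < alpha/4, keep its
# larger side (the enumeration only tries sets of at most half the
# vertices, so removing `found` keeps the larger side).
from itertools import combinations

def cut_edges(edges, S):
    return [e for e in edges if (e[0] in S) != (e[1] in S)]

def prune_to_expander(vertices, edges, alpha):
    V, E = set(vertices), list(edges)
    while True:
        found = None
        for k in range(1, len(V) // 2 + 1):
            for S in combinations(sorted(V), k):
                S = set(S)
                if len(cut_edges(E, S)) < alpha * len(S) / 4:
                    found = S
                    break
            if found:
                break
        if not found:
            return V, E
        V -= found
        E = [e for e in E if e[0] in V and e[1] in V]

# Two triangles joined by a single (sparse) edge.
V2, E2 = prune_to_expander(
    range(6), [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5), (2, 3)], 2)
```
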
# D Proof of Observation 3.2.

Let $\tau$ be any spanning tree of $G$, rooted at an arbitrary degree-1 vertex of $\tau$. We start with $\mathcal{U} = \emptyset$. Our algorithm performs a number of iterations, where in each iteration we add one new set $U \subseteq V(G)$ of vertices to $\mathcal{U}$, such that $G[U]$ is connected and $\lfloor|R|/(dr)\rfloor \le |U \cap R| \le |R|/r$, and we remove the vertices of $U$ from $\tau$. We execute the iterations as long as $|V(\tau) \cap R| \ge \lfloor|R|/(dr)\rfloor$, after which we terminate the algorithm and return the current collection $\mathcal{U}$ of vertex subsets.

In order to execute an iteration, we let $v$ be the lowest vertex of $\tau$ such that the subtree $\tau_v$ of $\tau$ rooted at $v$ contains at least $\lfloor|R|/(dr)\rfloor$ vertices of $R$. Since the maximum vertex degree in $G$ is bounded by $d$, tree $\tau_v$ contains fewer than $d \cdot \lfloor|R|/(dr)\rfloor \le |R|/r$ vertices of $R$. We add the new set $U = V(\tau_v)$ of vertices to $\mathcal{U}$, delete the vertices of $U$ from $\tau$, and continue to the next iteration.

Let $\mathcal{U}$ be the final collection of vertex subsets obtained at the end of the algorithm. It is immediate to verify that for every set $U \in \mathcal{U}$, $G[U]$ is connected and, from the above discussion, $\lfloor|R|/(dr)\rfloor \le |U \cap R| \le |R|/r$. Therefore, $|\mathcal{U}| \ge r$. $\square$

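The subtree-carving iteration can be sketched as a single bottom-up pass (illustrative names; `children` encodes the rooted spanning tree and `R` the terminal set):

```python
# Carve off the lowest subtree containing at least `threshold`
# terminals; carved subtrees are recorded by their root and their
# terminals no longer count toward ancestors.
def split_tree(children, root, R, threshold):
    pieces = []
    def count(v):
        total = 1 if v in R else 0
        for c in children.get(v, []):
            total += count(c)
        if total >= threshold and v != root:
            pieces.append(v)        # subtree of v becomes one set U
            return 0
        return total
    count(root)
    return pieces

# A path-shaped tree 0 - 1 - 2 - 3 - 4 with terminals R = {1, 2, 3, 4}.
pieces = split_tree({0: [1], 1: [2], 2: [3], 3: [4]}, 0, {1, 2, 3, 4}, 2)
```
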
# E Proof of Claim 4.6.

Consider the following sequence of vertex subsets. Let $S_0 = Z$, and for all $i > 0$, let $S_i$ contain all vertices of $S_{i-1}$, together with all neighbors of vertices in $S_{i-1}$. Notice that, if $|S_{i-1}| \le |V(T)|/2$, then, since $T$ is an $\alpha'$-expander, there are at least $\alpha'|S_{i-1}|$ edges leaving the set $S_{i-1}$, and, since the maximum vertex degree in $T$ is at most $d$, there are at least $\frac{\alpha'|S_{i-1}|}{d}$ vertices that do not belong to $S_{i-1}$ but are neighbors of vertices in $S_{i-1}$. Therefore, $|S_i| \ge |S_{i-1}|\left(1 + \frac{\alpha'}{d}\right)$. We claim that there must be an index $i^* \le \frac{8d}{\alpha'} \log(n/z)$, such that $|S_{i^*}| > |V(T)|/2$. Indeed, otherwise we would get that, for $i = \lceil \frac{8d}{\alpha'} \log(n/z) \rceil$:

$$
|S_{i}| \geq |S_0| \left(1 + \frac{\alpha'}{d}\right)^i \geq z \cdot e^{i\alpha'/(2d)} \geq z \cdot e^{4\log(n/z)} > \frac{n}{2}
$$

Here, the second inequality follows from the fact that $(1 + 1/x)^{2x} > e$ for all $x > 1$. We construct a similar sequence $S'_0, S'_1, \ldots$ for $Z'$. By the same argument, there is an index $i^{**} \le \frac{8d}{\alpha'} \log(n/z')$, such that $S'_{i^{**}}$ contains more than half the vertices of $T$. Therefore, there is a path connecting a vertex of $Z$ to a vertex of $Z'$, whose length is at most $\frac{8d}{\alpha'}\left(\log(n/z) + \log(n/z')\right)$. $\square$

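The ball-growing argument can be illustrated in code; the toy graph below (a 16-vertex cycle with long-range chords) is an assumption made for the example, not the graph $T$ from the claim:

```python
# Grow the neighborhood ball around Z until it covers more than half
# the graph; in an expander this takes O((d / alpha') * log n) rounds.
def grow_ball(adj, Z):
    ball, rounds = set(Z), 0
    while len(ball) <= len(adj) // 2:
        ball |= {w for v in ball for w in adj[v]}
        rounds += 1
    return ball, rounds

n = 16
adj = {v: {(v - 1) % n, (v + 1) % n, (v + n // 2) % n} for v in range(n)}
ball, rounds = grow_ball(adj, {0})
```
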
# F Proof of Lemma 5.1

Recall that we are given a graph $G = (V, E)$, with $|V| \le n$ and maximum vertex degree at most $d$, and a parameter $0 < \alpha < 1$. We are also given a collection $\{C_1, \ldots, C_{2r}\}$ of disjoint subsets of $V$, each containing $q = \lceil cd^2 \log^2 n/\alpha^2 \rceil$ vertices, for some constant $c$ to be fixed later. Our goal is to either find a set $\mathcal{Q} = \{Q_1, \ldots, Q_r\}$ of disjoint paths, such that for each $1 \le j \le r$, path $Q_j$ connects $C_j$ to $C_{j+r}$; or to compute a cut $(S, S')$ in $G$ of sparsity less than $\alpha$.

We use a standard definition of multicommodity flow. A flow $f$ consists of a collection $\mathcal{P}$ of paths in $G$, called flow-paths, and, for each path $P \in \mathcal{P}$, an associated flow value $f(P) > 0$. The edge-congestion of $f$ is the maximum amount of flow through any edge, that is, $\max_{e \in E} \left\{ \sum_{P \in \mathcal{P}: e \in P} f(P) \right\}$. We say that the flow $f$ causes no edge-congestion iff the edge-congestion of $f$ is at most 1. Similarly, the vertex-congestion of $f$ is the maximum amount of flow through any vertex, that is, $\max_{v \in V} \left\{ \sum_{P \in \mathcal{P}: v \in P} f(P) \right\}$. If a path $P$ does not lie in $\mathcal{P}$, then we implicitly set $f(P) = 0$. For any pair $s, t \in V$ of vertices, let $\mathcal{P}(s,t)$ be the set of all paths connecting $s$ to $t$ in $G$. We say that $f$ transfers $z$ flow units between $s$ and $t$ iff $\sum_{P \in \mathcal{P}(s,t)} f(P) \ge z$.

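The definitions above translate directly into code; the sketch below uses a hypothetical flow representation (a dict mapping a flow-path, given as a vertex tuple, to its flow value):

```python
# Edge- and vertex-congestion of a flow, exactly as defined above.
from collections import defaultdict

def vertex_congestion(flow):
    load = defaultdict(float)
    for path, f in flow.items():
        for v in path:
            load[v] += f
    return max(load.values())

def edge_congestion(flow):
    load = defaultdict(float)
    for path, f in flow.items():
        for e in zip(path, path[1:]):
            load[frozenset(e)] += f          # undirected edge
    return max(load.values())

flow = {(1, 2, 3): 0.5, (1, 4, 3): 0.5, (2, 3): 0.75}
```
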
The following theorem is a consequence of Theorem 18 from [LR99], which we prove after completing the proof of Lemma 5.1.

**Theorem F.1** There is an efficient randomized algorithm, that, given a graph $G = (V, E)$ with $|V| = n$ and maximum vertex degree at most $d$, and a parameter $0 < \alpha < 1$, together with a (possibly partial) matching $\mathcal{M}$ over the vertices of $G$, computes one of the following:

• either a collection $\mathcal{Q}' = \{Q(u, v) \mid (u, v) \in \mathcal{M}\}$ of paths, such that for all $(u, v) \in \mathcal{M}$, path $Q(u, v)$ connects $u$ to $v$; with high probability, the paths in $\mathcal{Q}'$ cause vertex-congestion at most $\eta = O(d \log n/\alpha)$, and the length of every path in $\mathcal{Q}'$ is at most $L = O(d \log n/\alpha)$; or

• a cut $(S, S')$ in $G$ of sparsity less than $\alpha$.

We are now ready to complete the proof of Lemma 5.1. We construct a matching $\mathcal{M}$ over the vertices of $V$, as follows. For each $1 \le j \le r$, we add an arbitrary matching $\mathcal{M}_j$, containing $q$ edges, between the vertices of $C_j$ and the vertices of $C_{j+r}$. We then set $\mathcal{M} = \bigcup_{j=1}^r \mathcal{M}_j$. We apply the algorithm from Theorem F.1 to the graph $G$, the parameter $\alpha$ and the matching $\mathcal{M}$. If the algorithm returns a cut of sparsity less than $\alpha$, we terminate the algorithm and return the cut. Therefore, we assume from now on that the algorithm returns a set $\mathcal{Q}'$ of paths with the following properties:

* For each $j \in [r]$, there is a subset $\mathcal{Q}'_j \subseteq \mathcal{Q}'$ of $q$ paths connecting vertices of $C_j$ to vertices of $C_{j+r}$;

* All paths in $\mathcal{Q}'$ have length at most $L = O(d \log n/\alpha)$; and

* With high probability, every vertex of $G$ participates in at most $\eta = O(d \log n/\alpha)$ paths of $\mathcal{Q}'$.

If the vertex-congestion caused by the paths in $\mathcal{Q}'$ is greater than $\eta$, the algorithm terminates with a failure. Therefore, we assume from now on that the paths in $\mathcal{Q}'$ cause vertex-congestion at most $\eta$. We use the constructive version of the Lovász Local Lemma by Moser and Tardos [MT10] in order to select one path from each set $\mathcal{Q}'_j$, so that the resulting paths are node-disjoint with high probability. The next theorem summarizes the symmetric version of the result of [MT10].

**Theorem F.2 ([MT10])** Let $X$ be a finite set of mutually independent random variables in some probability space. Let $\mathcal{A}$ be a finite set of bad events determined by these variables. For each event $A \in \mathcal{A}$, let $\text{vbl}(A) \subseteq X$ be the unique minimal subset of variables determining $A$, and let $\Gamma(A) \subseteq \mathcal{A}$ be the set of all bad events $B$ with $A \neq B$ and $\text{vbl}(A) \cap \text{vbl}(B) \neq \emptyset$. Assume further that for each $A \in \mathcal{A}$, $|\Gamma(A)| \le D$ and $\text{Pr}[A] \le p$, with $ep(D+1) \le 1$. Then there is an efficient randomized algorithm that computes an assignment to the variables of $X$, such that with high probability none of the events in $\mathcal{A}$ holds.

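A toy resampling loop in the spirit of Theorem F.2 (illustrative only, not the algorithm of [MT10] verbatim): each variable $z_i$ selects one path of its group, the bad events are pairs of intersecting selected paths, and a violated event is fixed by resampling its two variables.

```python
# Moser-Tardos style resampling on hypothetical path groups.
import random

def resample_disjoint(groups, rng):
    choice = [rng.randrange(len(g)) for g in groups]
    def violated():
        for i in range(len(groups)):
            for j in range(i + 1, len(groups)):
                if set(groups[i][choice[i]]) & set(groups[j][choice[j]]):
                    return i, j
        return None
    while (bad := violated()) is not None:
        for i in bad:                       # resample vbl(A) = {z_i, z_j}
            choice[i] = rng.randrange(len(groups[i]))
    return [g[c] for g, c in zip(groups, choice)]

groups = [[(1, 2), (3, 4)], [(2, 5), (6, 7)], [(4, 8), (9, 10)]]
paths = resample_disjoint(groups, random.Random(0))
```
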
For each $1 \le i \le r$, we choose one path $Q_i \in \mathcal{Q}'_i$ independently and uniformly at random. We let $z_i$ be the random variable indicating which path has been chosen. For every pair $Q, Q' \in \mathcal{Q}'$ of intersecting paths, such that $Q$ and $Q'$ belong to distinct sets $\mathcal{Q}'_i, \mathcal{Q}'_j$, let $\mathcal{E}(Q, Q')$ be the bad event that both these paths were selected. Notice that the probability of $\mathcal{E}(Q, Q')$ is $1/q^2$, and that $\text{vbl}(\mathcal{E}(Q, Q')) = \{z_i, z_j\}$, where $Q \in \mathcal{Q}'_i$ and $Q' \in \mathcal{Q}'_j$. There are at most $qL\eta$ events $\mathcal{E}(Q, Q')$ with $z_i \in \text{vbl}(\mathcal{E}(Q, Q'))$: set $\mathcal{Q}'_i$ contains $q$ paths; each of these paths has length at most $L$, so at most $qL$ vertices participate in the paths of $\mathcal{Q}'_i$; and each such vertex may be shared with at most $\eta$ other paths. Similarly, there are at most $qL\eta$ events $\mathcal{E}(Q, Q')$ with $z_j \in \text{vbl}(\mathcal{E}(Q, Q'))$. Therefore, $|\Gamma(\mathcal{E}(Q, Q'))| \le 2qL\eta$. Let $D = 2qL\eta$. It now only remains to show that $(D+1)ep \le 1$. Indeed,

$$ (D+1)ep = \frac{O(qL\eta)}{q^2} = \frac{O(L\eta)}{q} = O\left(\frac{d^2 \log^2 n}{\alpha^2 q}\right) $$

By choosing the constant $c$ in the definition of $q$ to be large enough, we can ensure that $(D+1)ep \le 1$ holds. Using the algorithm from Theorem F.2, we obtain a collection $\mathcal{Q} = \{Q_1, \dots, Q_r\}$ of paths in $G$, where for each $j \in [r]$, path $Q_j$ connects a vertex of $C_j$ to a vertex of $C_{j+r}$, and with high probability the resulting paths are disjoint. This completes the proof of Lemma 5.1, except for the proof of Theorem F.1, which we provide next.

## F.1 Proof of Theorem F.1.

We use a slight adaptation of Theorem 18 from [LR99].

**Theorem F.3 (Adaptation of Theorem 18 from [LR99])** There is an efficient algorithm, that, given an $n$-vertex graph $G$ with maximum vertex degree at most $d$, together with a parameter $0 < \alpha < 1$, computes one of the following:

* either a flow $f$ in $G$, in which every pair of vertices of $G$ transfers $\frac{\alpha}{64n \log n}$ flow units to each other with no edge-congestion, such that every flow-path has length at most $\frac{64d \log n}{\alpha}$; or

* a cut $(S, S')$ in $G$ of sparsity less than $\alpha$.


We provide the proof of Theorem F.3 below, after completing the proof of Theorem F.1 using it.

We apply Theorem F.3 to the graph $G$ and the parameter $\alpha$. If the algorithm returns a cut $(S, S')$ of sparsity less than $\alpha$, then we terminate the algorithm and return this cut. Therefore, we assume from now on that the algorithm returns the flow $f$. Let $f'$ be the flow obtained from $f$ by scaling it up by a factor of $64 \log n/\alpha$, so that every pair of vertices in $G$ now sends $1/n$ flow units to each other, with total edge-congestion at most $64 \log n/\alpha$.

We start by showing that there is a multicommodity flow $f^*$, in which every pair $(u, v) \in \mathcal{M}$ of vertices sends one flow unit to each other simultaneously, on flow-paths of length at most $128d \log n/\alpha$, with total vertex-congestion at most $128d \log n/\alpha$. Let $(u, v) \in \mathcal{M}$ be any pair of vertices. The new flow between $u$ and $v$ is defined as follows: $u$ sends $1/n$ flow units to every vertex of $G$, using the flow $f'$, and $v$ collects $1/n$ flow units from every vertex of $G$, using the flow $f'$. In other words, the flow $f^*$ between $u$ and $v$ is obtained by concatenating all flow-paths in $f'$ originating at $u$ with all flow-paths in $f'$ terminating at $v$. It is easy to see that every flow-path of $f'$ is used at most twice: once by each of its endpoints. Therefore, all flow-paths in $f^*$ have length at most $128d \log n/\alpha$, and the total edge-congestion due to the flow $f^*$ is at most $128 \log n/\alpha$. Since the maximum vertex degree in $G$ is at most $d$, flow $f^*$ causes vertex-congestion at most $128d \log n/\alpha$.

Next, for every pair $(u, v) \in \mathcal{M}$, we select one path $Q(u, v) \in \mathcal{P}(u, v)$ at random, where a path $P \in \mathcal{P}(u, v)$ is selected with probability $f^*(P)$, that is, the amount of flow sent on $P$ by $f^*$. We then let $\mathcal{Q}' = \{Q(u, v) \mid (u, v) \in \mathcal{M}\}$. Notice that the length of every path in $\mathcal{Q}'$ is at most $128d \log n/\alpha$. It remains to show that the vertex-congestion due to the paths in $\mathcal{Q}'$ is at most $O(d \log n/\alpha)$ with high probability. This is done by standard techniques. Consider some vertex $x \in V$. We say that the bad event $\mathcal{E}(x)$ happens if more than $8 \cdot 128d \log n/\alpha$ paths of $\mathcal{Q}'$ use the vertex $x$. We use the following variation of the Chernoff bound (see [DP09]):

**Theorem F.4** Let $X_1, \ldots, X_n$ be independent random variables taking values in $[0, 1]$, let $X = \sum_i X_i$, and let $\mu = \mathbf{E}[X]$. Then for all $t > 2e\mu$, $\mathrm{Pr}[X > t] \le 2^{-t}$.

+It is easy to see that the expected number of paths in $Q'$ that contain $x$ is at most $\frac{128d \log n}{\alpha}$, and so the probability of $\mathcal{E}(x)$ is bounded by $1/n^4$. From the Union Bound, the probability that any such event happens for any vertex $x \in V$ is bounded by $1/n^3$. Therefore, with high probability, every vertex of $G$ belongs to at most $\frac{2^{10}d \log n}{\alpha} = O\left(\frac{d \log n}{\alpha}\right)$ paths in $Q'$. This finishes the proof of Theorem F.1, except for the proof of Theorem F.3, which we provide in the next subsection. $\square$
+
+## F.2 Proof of Theorem F.3
+
+The proof follows closely that of [LR99]; we provide it here for completeness. Recall that we are given a graph $G = (V, E)$ with maximum vertex-degree at most $d$, $|V| = n$, and a parameter $0 < \alpha < 1$. We let $L = \frac{64d \log n}{\alpha}$. For every pair $u, v$ of vertices in $V$, let $\mathcal{P}^{\le L}(u,v)$ be the set of all paths in $G$ between $u$ and $v$ that contain at most $L$ vertices. We employ the standard linear program for uniform multicommodity flow:
+
+$$
+\begin{array}{lll}
+\text{(LP-1)} & \max & f^* \\
+\text{s.t.} & & \\
+& \sum_{P \in \mathcal{P}^{\le L}(u,v)} f(P) \ge f^* & \forall u, v \in V \\
+& \sum_{\substack{u,v \in V \\ e \in P}} \sum_{\substack{P \in \mathcal{P}^{\le L}(u,v) \\ e \in P}} f(P) \le 1 & \forall e \in E \\
+& f(P) \ge 0 & \forall u, v \in V; \forall P \in \mathcal{P}^{\le L}(u, v)
+\end{array}
+$$
+
+In general, the dual of the standard relaxation of the uniform multicommodity flow problem is the problem of assigning lengths $\ell(e)$ to the edges $e \in E$, so as to minimize $\sum_e \ell(e)$, subject to the constraint that the total sum of all pairwise distances between pairs of vertices is at least 1, where the distance between pairs of vertices is defined with respect to the edge lengths $\ell$.
+
+In our setting, given lengths $\ell(e)$ on edges $e \in E$, we need to use $L$-hop bounded distances between vertices, defined as follows: for all $u, v \in V$, if $\mathcal{P}^{\le L}(u,v) \neq \emptyset$, then we let:
+
+$$D_{\ell}^{\le L}(u,v) = \min_{P \in \mathcal{P}^{\le L}(u,v)} \left\{ \sum_{e \in P} \ell(e) \right\};$$
+
+otherwise, we set $D_{\ell}^{\le L}(u,v) = \infty$. The dual of (LP-1) can now be written as follows:
+
+$$
+\begin{array}{lll}
+\text{(LP-2)} & \min & \sum_{e \in E} \ell(e) \\
+\text{s.t.} & & \\
+& \sum_{u,v \in V} D_{\ell}^{\le L}(u,v) \ge 1 \\
+& \ell(e) \ge 0 & \quad \forall e \in E
+\end{array}
+$$
+
+Even though Linear Programs (LP-1) and (LP-2) are of exponential size, they can be solved efficiently using standard techniques (that is, via an equivalent edge-based flow formulation). Let $f_{\text{OPT}}^{*}$ be the value of the optimal solution to (LP-1). We let $W^{*} = \frac{d}{nL} = \frac{\alpha}{64n \log n}$. If $f_{\text{OPT}}^{*} \ge W^{*}$, then we return the flow $f$ corresponding to the optimal solution of (LP-1); it is immediate to verify that it satisfies all requirements. Therefore, we assume from now on that $f_{\text{OPT}}^{*} < W^{*}$. We will provide an efficient algorithm to compute a cut $(S, S')$ in $G$ of sparsity less than $\alpha$.
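The hop-bounded distances $D_\ell^{\le L}$ used in (LP-1) and (LP-2) can be computed by a Bellman-Ford-style dynamic program that bounds the number of edges on a path; a minimal sketch (function names are ours, and the hop budget here counts edges, a slight variation on the at-most-$L$-vertices convention above):

```python
import math

def hop_bounded_distance(n, edges, ell, s, t, hops):
    """Cheapest s-t path under edge lengths `ell`, using at most
    `hops` edges; returns math.inf if no such path exists."""
    dist = [math.inf] * n
    dist[s] = 0.0
    for _ in range(hops):
        nxt = list(dist)  # keep paths that use fewer hops
        for (u, v), w in zip(edges, ell):
            nxt[v] = min(nxt[v], dist[u] + w)
            nxt[u] = min(nxt[u], dist[v] + w)
        dist = nxt
    return dist[t]

# A cheap 3-hop path 0-1-2-3 next to an expensive direct edge 0-3:
edges = [(0, 1), (1, 2), (2, 3), (0, 3)]
ell = [1.0, 1.0, 1.0, 10.0]
print(hop_bounded_distance(4, edges, ell, 0, 3, hops=1))  # 10.0
print(hop_bounded_distance(4, edges, ell, 0, 3, hops=3))  # 3.0
```

As the example shows, the hop budget matters: with a single hop only the expensive direct edge is available.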
+
+Given a length function $\ell : E \mapsto \mathbb{R}_{\ge 0}$, we denote by $W(\ell) = \sum_{e \in E} \ell(e)$ the total weight of $\ell$. We need the following definition.
+
+**Definition 8** *Given an integer $r$ and a length function $\ell(e)$ on edges $e \in E$, the $r$-hop bounded diameter of $G$ is $\max_{u,v \in V} \left\{ D_{\ell}^{\le r}(u,v) \right\}$.*
+
+Consider the optimal solution $\ell_{\mathbf{OPT}}: E \to \mathbb{R}^{+}$ to (LP-2). Observe that, by strong duality, the value of the solution is $W(\ell_{\mathbf{OPT}}) = f_{\text{OPT}}^{*}$, and so $W(\ell_{\mathbf{OPT}}) < W^{*}$ holds.
+
+We define a new solution to (LP-2) as follows: for each edge $e$, we let $\ell(e) = \ell_{\mathbf{OPT}}(e) \cdot \frac{W^{*}}{W(\ell_{\mathbf{OPT}})}$. Since $W^{*} > W(\ell_{\mathbf{OPT}})$, it is immediate to verify that we obtain a valid solution to (LP-2), of value $W(\ell) = W^{*}$.
+
+Moreover, the constraint governing the sum of pairwise *L*-hop bounded distances is now satisfied with strict inequality:
+
+$$
+\sum_{u,v} D_{\ell}^{\le L}(u,v) > 1 \qquad (2)
+$$
+
+The lengths $\ell(e)$ on edges are fixed from now on. We denote $D_{\ell}^{\le L}$ by $D^{\le L}$, and we will also use the distance function $D_{\ell}^{\le L/4}$, which we denote by $D^{\le L/4}$.
+
+We use the following lemma.
+
+**Lemma F.5 (Adaptation of Corollary 20 from [LR99])** There is an efficient algorithm, that, given a graph $G = (V, E)$, a parameter $0 < \alpha < 1$ and any edge length function $\ell : E \mapsto \mathbb{R}_{\ge 0}$ of total weight $W(\ell) = \sum_{e \in E} \ell(e) \le \frac{\alpha}{64n \log n}$, returns one of the following:
+
+* either a subset $T \subseteq V$ of at least $\left\lceil \frac{2|V|}{3} \right\rceil$ vertices, such that, for $r = \frac{|E|}{2n^2 W(\ell)}$, the r-hop bounded diameter of $G[T]$ is at most $\frac{1}{2n^2}$; or
+
+* a cut $(S, S')$ in $G$ of sparsity less than $\alpha$.
+
+We complete the proof of Lemma F.5 later, after we complete the proof of Theorem F.3 using it. Recall that $W(\ell) = W^* = \frac{\alpha}{64n\log n}$. We apply the algorithm from Lemma F.5 to graph $G$, with parameter $\alpha$ and edge length function $\ell$.
+
+If the algorithm returns a cut $(S, S')$ of sparsity less than $\alpha$, we terminate the algorithm and return this cut. Therefore, we assume from now on that the algorithm from Lemma F.5 returns a subset $T \subseteq V$ of at least $\left\lceil \frac{2|V|}{3} \right\rceil$ vertices such that $G[T]$ has $r$-hop bounded diameter at most $\frac{1}{2n^2}$, where $r = \frac{|E|}{2n^2W(\ell)}$.
+Observe that for all $r' > r$, for every pair $u, v$ of vertices, $D^{\le r'}(u,v) \le D^{\le r}(u,v)$. Observe also that:
+
+$$
+r = \frac{|E|}{2n^2 W(\ell)} \le \frac{\frac{dn}{2}}{2n^2 \cdot \frac{d}{nL}} = \frac{L}{4}.
+$$
+
+Therefore, the $L/4$-hop bounded diameter of $G[T]$ is at most $\frac{1}{2n^2}$.
+
+For convenience, for a subset $S \subseteq V$ of vertices and a vertex $u \in V$, we denote $D^{\le L/4}(u, S) := \min_{v \in S} \left\{ D^{\le L/4}(u,v) \right\}$. We use the following lemma.
+
+**Lemma F.6 (Adaptation of Lemma 21 from [LR99])** There is an efficient algorithm, that, given a graph $G = (V, E)$, a parameter $0 < \alpha < 1$, any edge length function $\ell : E \mapsto \mathbb{R}_{\ge 0}$, a length parameter $L \ge \frac{2d \ln n}{\alpha}$ and a subset $T \subseteq V$ of at least $\left\lceil \frac{2|V|}{3} \right\rceil$ vertices, such that $\sum_{v \in V} D^{\le L}(v, T) > \frac{4W(\ell)}{\alpha}$, returns a cut $(S, S')$ of $V$ with sparsity less than $\alpha$.
+
+We prove Lemma F.6 later, after we complete the proof of Theorem F.3 using it.
+
+First, we claim that $\sum_{v \in V} D^{\le L/4}(v, T) > \frac{4W^*}{\alpha}$. Indeed, assume for contradiction that this is not the case, that is:
+
+$$
+\sum_{v \in V} D^{\le L/4}(v, T) \le \frac{4W^*}{\alpha} = \frac{4}{\alpha} \cdot \frac{\alpha}{64n \log n} = \frac{1}{16n \log n}.
+$$
+
+Recall that the $L/4$-hop bounded diameter of $G[T]$ is at most $\frac{1}{2n^2}$. From the triangle inequality, for any pair $u, v \in V$ of vertices:
+
+$$D^{\le L}(u, v) \le D^{\le L/4}(u, T) + D^{\le L/4}(v, T) + \frac{1}{2n^2}.$$
+
+Hence,
+
+$$
+\begin{align*}
+\sum_{u,v \in V} D^{\le L}(u,v) &\le \sum_{u,v \in V} \left( D^{\le L/4}(u,T) + D^{\le L/4}(v,T) + \frac{1}{2n^2} \right) \\
+&\le \frac{1}{2} + 2n \sum_{u \in V} D^{\le L/4}(u,T) \\
+&\le \frac{1}{2} + 2n \frac{1}{16n \log n} = \frac{1}{2} + \frac{1}{8 \log n} < 1,
+\end{align*}
+$$
+
+contradicting the fact that $\ell$ is a valid solution to (LP-2). Therefore, $\sum_{v \in V} D^{\le L/4}(v, T) > \frac{4W^*}{\alpha}$ must hold. Moreover, notice that $\frac{L}{4} = \frac{16d \log n}{\alpha} \ge \frac{2d \ln n}{\alpha}$ holds. We now apply the algorithm from Lemma F.6 to $G$, with parameters $\alpha$ and $L/4$, edge length function $\ell$ and the subset $T$ of vertices, to obtain a cut $(S, S')$ of $V$ with sparsity less than $\alpha$. This completes the proof of Theorem F.3, except for the proofs of Lemma F.5 and Lemma F.6 that we provide in the next subsections.
+
+## F.3 Proof of Lemma F.5
+
+We start with the following definition:
+
+**Definition 9** Given a graph $G = (V, E)$, a partition of $G$ into components is a collection $\mathcal{G} = \{G[V_1], \dots, G[V_z]\}$ of vertex-induced subgraphs such that $\cup_{i \in [z]} V_i = V$ and for every $i \neq j$, $V_i \cap V_j = \emptyset$.
+
+We use the following lemma, that we prove later for completeness after completing the proof of Lemma F.5 using it.
+
+**Lemma F.7 (Adaptation of Lemma 19 from [LR99])** There is an efficient algorithm, that, given a graph $G = (V, E)$, a parameter $\Delta > 0$, and any edge length function $\ell : E \to \mathbb{R}_{\ge 0}$, partitions $G$ into components $\mathcal{G} = \{G[V_1], \dots, G[V_z]\}$ such that the following holds:
+
+• For each $G[V_i] \in \mathcal{G}$, the $r'$-hop bounded diameter of $G[V_i]$ is at most $\Delta$, for $r' = \Delta|E|/W(\ell)$; and
+
+• $\sum_{i < j} |E(V_i, V_j)| \le \frac{8W(\ell) \log n}{\Delta}$.
+
+We first complete the proof of Lemma F.5 using Lemma F.7. We apply the algorithm from Lemma F.7 to $G$, with the length function $\ell$ and the parameter $\Delta = \frac{1}{2n^2}$, obtaining a partition $\mathcal{G} = \{G[V_1], \dots, G[V_z]\}$ of $G$ into components. Each component $G[V_i]$ has $r$-hop bounded diameter at most $\frac{1}{2n^2}$, for $r = \frac{|E|}{2n^2 W(\ell)}$, and the total number of edges connecting distinct components is at most $\frac{8W(\ell) \log n}{\Delta} = 16n^2 W(\ell) \log n \le \frac{\alpha n}{4}$, since $W(\ell) \le \frac{\alpha}{64n \log n}$. If some component $G[V_i]$ contains at least $\left\lceil \frac{2|V|}{3} \right\rceil$ vertices, we return $T = V_i$. Otherwise, we greedily group the sets $V_1, \dots, V_z$ into a set $S$ with $\frac{n}{3} \le |S| \le \frac{2n}{3}$; since every edge of $E(S, V \setminus S)$ connects distinct components, $|E(S, V \setminus S)| \le \frac{\alpha n}{4} < \alpha \cdot \min\{|S|, |V \setminus S|\}$, and we return the cut $(S, V \setminus S)$, whose sparsity is less than $\alpha$.
+
+**Proof of Lemma F.7:** We may assume that $\Delta > \frac{8W(\ell) \ln n}{|E|}$, since otherwise the partition of $G$ into singleton components trivially satisfies both requirements. For convenience, we denote $\epsilon := \frac{2W(\ell) \ln n}{\Delta |E|}$. Notice that $\epsilon \le 1/4$ holds.
+
+Consider an auxiliary graph $G^+ = (V^+, E^+)$ obtained from $G$ by replacing each edge $e$ with a path consisting of $\lceil \lvert E \rvert \ell(e)/W(\ell) \rceil$ edges. Notice that $\lvert E^+ \rvert \le 2\lvert E \rvert$. For simplicity, we identify the common vertices of $G$ and $G^+$. The following observation is now immediate:
+
+**Observation F.8** For any path of length $\gamma$ in $G^+$, the corresponding path in $G$ has length at most $\frac{W(\ell)\gamma}{|E|}$.
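The bound $|E^+| \le 2|E|$ holds because each $\lceil |E|\ell(e)/W(\ell) \rceil$ exceeds $|E|\ell(e)/W(\ell)$ by less than 1, and the fractional terms sum to exactly $|E|$. A small sketch of the edge count under this subdivision (illustrative lengths, helper name is ours):

```python
from math import ceil

def subdivided_edge_count(lengths):
    """|E^+| when each edge e of G is replaced by a path of
    ceil(|E| * ell(e) / W(ell)) edges, with W(ell) = sum of lengths."""
    m = len(lengths)
    W = sum(lengths)
    return sum(ceil(m * l / W) for l in lengths)

print(subdivided_edge_count([0.5, 0.25, 0.25]))  # 4, and 4 <= 2 * 3
```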
+
+Next, we iteratively partition vertices of $G^+$ into $V_0^+, V_1^+, \dots$, and the required partition of $G$ into components will be given by $G[V_1] = G[V_1^+ \cap V]$, $G[V_2] = G[V_2^+ \cap V]$, $\dots$. We start with $V_0^+ = \emptyset$ and then iterate. We now show how to compute $V_{i+1}^+$ given $V_0^+, \dots, V_i^+$.
+
+We denote $V_i^* := V^+ \setminus \bigcup_{j \le i} V_j^+$. If $V \cap V_i^* = \emptyset$, we have computed the desired partition and the algorithm terminates. Thus, we assume from now on that there is a vertex $v_{i+1} \in V \cap V_i^*$. For every integer $j \ge 0$, we denote by $B_j^{i+1}$ the subset of vertices $u \in V_i^*$, such that there is some path of length at most $j$ connecting $v_{i+1}$ and $u$ in $G^+[V_i^*]$.
+
+We let $C_j := \frac{2|E|}{n} + |E_{G^+}[B_j^{i+1}]|$ for every integer $j \ge 0$. Let $j_{i+1}^*$ be the smallest $j \ge 0$ such that $C_{j+1} < (1+\epsilon)C_j$. Notice that some such $j_{i+1}^*$ must exist, since $\epsilon > 0$ and $C_{j+1} = C_j$ for all sufficiently large $j$. We set $V_{i+1}^+ = B_{j_{i+1}^*}^{i+1}$ and proceed to the next iteration. The following observation is now immediate:
+
+**Observation F.9** For every index $i > 0$, $V_i^+ \cap V \neq \emptyset$ and $\lvert E(V_i^+, V_i^*) \rvert < \epsilon \left( \frac{2|E|}{n} + \lvert E[V_i^+] \rvert \right)$.
+
+**Proof:** Notice that for every index $i > 0$ and $j \ge 0$, we have $v_i \in B_j^i$. Thus, $v_i \in V_i^+ \cap V$, and hence $V_i^+ \cap V \neq \emptyset$. From our construction, we have
+
+$$ \frac{2|E|}{n} + |E[B_{j_i^*+1}^i]| < (1+\epsilon) \left( \frac{2|E|}{n} + |E[B_{j_i^*}^i]| \right). $$
+
+Equivalently:
+
+$$ |E[B_{j_i^*+1}^i]| - |E[B_{j_i^*}^i]| < \epsilon \left( \frac{2|E|}{n} + |E[B_{j_i^*}^i]| \right). $$
+
+Therefore,
+
+$$ |E(V_i^+, V_i^*)| \le |E[B_{j_i^*+1}^i]| - |E[B_{j_i^*}^i]| < \epsilon \left( \frac{2|E|}{n} + |E[B_{j_i^*}^i]| \right) = \epsilon \left( \frac{2|E|}{n} + |E[V_i^+]| \right). $$
+
+□
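The ball-growing construction above can be sketched concretely; this is a simplified stand-in (names are ours, the potential $C_j$ is simplified to $1$ plus the number of internal edges of the ball, and balls are grown in $G$ directly rather than in $G^+$):

```python
def region_grow_partition(adj, eps):
    """Partition the vertices by repeatedly growing a BFS ball from an
    arbitrary uncovered vertex, stopping as soon as adding one more BFS
    layer fails to increase the potential by a (1 + eps) factor."""
    n = len(adj)
    uncovered = set(range(n))

    def potential(ball):
        internal = sum(1 for x in ball for y in adj[x] if y in ball) // 2
        return 1 + internal

    parts = []
    while uncovered:
        ball = {min(uncovered)}
        while True:
            layer = {y for x in ball for y in adj[x] if y in uncovered} - ball
            if not layer or potential(ball | layer) < (1 + eps) * potential(ball):
                break
            ball |= layer
        parts.append(sorted(ball))
        uncovered -= ball
    return parts

# Two triangles joined by the single edge (2, 3): the growth stops at
# the sparse joining edge, so each triangle becomes its own component.
adj = [[1, 2], [0, 2], [0, 1, 3], [2, 4, 5], [3, 5], [3, 4]]
print(region_grow_partition(adj, eps=0.5))   # [[0, 1, 2], [3, 4, 5]]
```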
+
+The following two claims will complete the proof of Lemma F.7.
+
+**Claim F.10** $\sum_{i < j} |E(V_i, V_j)| < \frac{8W(\ell) \log n}{\Delta}$.
+
+**Proof:**
+
+$$ \sum_{i < j} |E(V_i, V_j)| = \sum_{i > 0} \left| E\Big(V_i, \bigcup_{j>i} V_j\Big) \right| \le \sum_{i > 0} |E(V_i^+, V_i^*)| < \sum_{i > 0} \epsilon \left( \frac{2|E|}{n} + |E[V_i^+]| \right) $$
+
+$$
+\le \epsilon (2|E| + |E^+|) \le 4|E|\epsilon = \frac{8W(\ell) \ln n}{\Delta} < \frac{8W(\ell) \log n}{\Delta}.
+$$
+
+Here, the second inequality follows from Observation F.9 and the penultimate inequality follows from
+the fact that $|E^+| \le 2|E|$.
+
+**Claim F.11** For each $G[V_i]$, the $r'$-hop bounded diameter of $G[V_i]$ is at most $\Delta$, for $r' = \frac{\Delta|E|}{W(\ell)}$.
+
+**Proof:** We claim that it suffices to show that, for each $G[V_i]$, the diameter of $G^+[V_i^+]$ is at most $r' = \frac{\Delta|E|}{W(\ell)}$. Indeed, if this is the case, Observation F.8 implies that the $r'$-hop bounded diameter of $G[V_i]$ is at most $\frac{W(\ell)r'}{|E|} = \Delta$. Notice that, in order to show that the diameter of $G^+[V_i^+]$ is at most $r'$, it suffices to show that $j_i^* \le \frac{r'}{2} = \frac{\Delta|E|}{2W(\ell)}$. Fix any index $i$ and the corresponding graph $G^+[V_i^+]$. If $j_i^* \ne 0$, we must have:
+
+$$
+2|E| \ge |E^+| \ge |E(V_i^+)| > (1+\epsilon)^{j_i^*} \frac{2|E|}{n}.
+$$
+
+Therefore, $(1 + \epsilon)^{j_i^*} < n$ must hold, and so:
+
+$$
+j_i^* < \frac{\ln n}{\epsilon} = \frac{\Delta |E|}{2W(\ell)}.
+$$
+
+(We have used the fact that $\epsilon \le 1/4$.) $\square$
+
+## F.4 Proof of Lemma F.6
+
+Similarly to the proof of Lemma F.7, consider an auxiliary graph $G^+ = (V^+, E^+)$ obtained from $G$ by
+replacing each edge $e$ with a path consisting of $\lceil |E|\ell(e)/W(\ell) \rceil$ edges. Notice that $|E^+| \le 2|E|$. For
+simplicity, we identify the common vertices of $G$ and $G^+$. Given a subset $S \subseteq V(G^+)$ of vertices, we
+denote by $N(S)$ the set of all vertices $v \in V(G^+)$ such that $v \notin S$, but $v$ has a neighbor in $S$.
+
+Next, we iteratively partition the vertices of $G^+$ into layers, $V_0^+, V_1^+, \dots$, and for each $i \ge 0$, we define
+the corresponding graph $G_i^+ = G^+[V_i^+]$, as follows. We start with $V_0^+ = T$, $G_0^+ = G^+[T]$ and then
+iterate. We now show how to compute $V_{i+1}^+$ and $G_{i+1}^+$, given $V_i^+$ and $G_i^+$, assuming that $V_i^+ \ne V^+$
+(otherwise, the algorithm terminates).
+
+Let $E_i := \delta_{G^+}(V_i^+)$ and $C_i := |E_i|$. We partition $E_i$ into two subsets: set $E'_i$ containing all edges $(u,v)$ with $u \in V_i^+$, such that $v$ is a vertex of the original graph $G$; and set $E''_i$ containing all remaining edges. Let $C'_i = |E'_i|$, and let $C''_i = |E''_i|$. We distinguish between the following two cases:
+
+• **Case 1:** $C'_i \ge C_i/2$. In this case, we let $V_{i+1}^{+}$ contain all vertices of $V_i^{+} \cup N(V_i^{+})$. We also set
+$G_{i+1}^{+} = G^{+}[V_{i+1}^{+}]$. Notice that in this case, $|E[G_{i+1}^{+}] \setminus E[G_i^{+}]| \ge C_i$.
+
+* **Case 2:** $C_i'' > C_i/2$. In this case, we let $V_{i+1}^{+}$ only contain the vertices of $V_i^{+}$, and those vertices of $N(V_i^{+})$ that do not lie in the original graph $G$, that is:
+
+$$
+V_{i+1}^{+} = V_{i}^{+} \cup (N(V_{i}^{+}) \setminus V(G)).
+$$
+
+As before, we set $G_{i+1}^+ = G^+[V_{i+1}^+]$. Notice that in this case, $E[G_{i+1}^+] \setminus E[G_i^+]$ contains all edges of $E_i''$, and so $|E[G_{i+1}^+] \setminus E[G_i^+]| \ge C_i'' > C_i/2$.
+
+From the above discussion we obtain the following observation:
+
+**Observation F.12** For each level $i$, $|E(G_{i+1}^+) \setminus E(G_i^+)| \ge \frac{C_i}{2}$, and in particular $\sum_i C_i \le 2|E^+|$.
+
+For each level $i$, let $n_i = |V(G) \setminus V_i^+|$ be the number of vertices of the original graph $G$ that do not lie in $V_i^+$. Recall that $|T| \ge \lceil 2|V|/3 \rceil$, and so for all $i$, $n_i \le |V|/3 < |V|/2$. Moreover, $C_i = |\delta_{G^+}(V_i^+)| \ge |\delta_G(V \cap V_i^+)|$.
+
+If, for any level $i$, $C_i < \alpha n_i$, then we return the cut $(V \cap V_i^+, V \setminus V_i^+)$; it is immediate to see that its sparsity is less than $\alpha$. Therefore, we assume from now on that for all $i$, $C_i \ge \alpha n_i$. We will reach a contradiction by showing that $\sum_{v \in V} D^{\le L}(v,T) \le \frac{4W(\ell)}{\alpha}$ must hold. In order to do so, we use the following two claims.
+
+**Claim F.13** *The number of indices i for which Case 1 is invoked is at most L.*
+
+**Proof:** Let $i$ be an index for which Case 1 is invoked, so $C'_i \ge C_i/2$. Recall that we have assumed that $C_i \ge \alpha n_i$. Since the maximum vertex-degree of $G$ is bounded by $d$, the number of new vertices of $V$ that are added to $V_{i+1}^+$ is at least $\frac{C'_i}{d} \ge \frac{\alpha n_i}{2d}$. Therefore, $n_{i+1} \le n_i(1 - \frac{\alpha}{2d})$, and the total number of indices $i$ in which Case 1 is invoked must be bounded by $\frac{2d \ln n}{\alpha} \le L$. $\square$
+
+**Claim F.14** $\sum_i n_i \le \frac{4|E|}{\alpha}$.
+
+**Proof:** Recall that we have assumed $C_i \ge \alpha n_i$ for all $i$. Thus,
+
+$$
+\sum_i n_i \le \sum_i \frac{C_i}{\alpha} = \frac{\sum_i C_i}{\alpha} \le \frac{2|E^+|}{\alpha} \le \frac{4|E|}{\alpha}.
+$$
+
+Here, the second-last inequality follows from Observation F.12 and the last inequality follows from the fact that $|E^+| \le 2|E|$.
+
+$\square$
+
+For each vertex $v \in V \setminus T$, let $i_v$ be the unique index such that $v \in V(G_{i_v}^+)$ and $v \notin V(G_{i_v-1}^+)$. For the remaining vertices $v \in T$, we set $i_v = 0$. Notice that $v$ must be connected by an edge of $G^+$ to a vertex $u$ with $i_u < i_v$. Therefore, we can construct a path $P_v^+ = (v_0, v_1, \ldots, v_r)$ in $G^+$, where $v_0 \in T$, $v_r = v$, and for all $1 \le j \le r$, $i_{v_{j-1}} < i_{v_j}$.
+
+Let $P_v$ be the path corresponding to $P_v^+$ in the original graph $G$. Since we invoke Case 1 at most $L$ times, it is easy to verify that $P_v$ contains at most $L$ edges. Moreover:
+
+$$
+D^{\le L}(v,T) \leq \sum_{e \in P_v} \ell(e) \leq \sum_{e \in P_v} \frac{W(\ell)}{|E|} \left\lceil \frac{|E|\ell(e)}{W(\ell)} \right\rceil = \frac{W(\ell)}{|E|} |E(P_v^+)| \leq i_v \cdot \frac{W(\ell)}{|E|}.
+$$
+
+Altogether:
+
+$$
+\sum_v D^{\le L}(v,T) \leq \frac{W(\ell)}{|E|} \sum_v i_v = \frac{W(\ell)}{|E|} \sum_i n_i \leq \frac{4W(\ell)}{\alpha},
+$$
+
+where the last inequality follows from Claim F.14. This contradicts the assumption that $\sum_v D^{\le L}(v,T) > \frac{4W(\ell)}{\alpha}$,
+
+completing the proof of Lemma F.6.
+
+# G Proofs Omitted from Section 6
+
+## G.1 Proof of Claim 6.6
+
+The proof is very similar to the proof of Claim 2.3. The algorithm iteratively removes edges from $G\setminus E'$, until we obtain a connected component of the resulting graph that is an $\Omega(\frac{\alpha^2}{d})$-expander. We start with $G' = G\setminus E'$ (notice that $G'$ is not necessarily connected). We also maintain a set $E''$ of edges that we remove from $G'$, initialized to $E'' = \emptyset$. We then perform a number of iterations. In every iteration, we apply Theorem 2.2 to $G'$, and obtain a cut $(X, Y)$ in $G'$. If $|E_{G'}(X, Y)| \ge \alpha \cdot \min(|X|, |Y|)/4$, then, from Theorem 2.2, $G'$ is an $\Omega(\frac{\alpha^2}{d})$-expander. We terminate the algorithm and return $G'$. We later show that $|V(G')| \ge |V| - \frac{4|E'|}{\alpha}$. Assume now that $|E_{G'}(X, Y)| < \alpha \cdot \min(|X|, |Y|)/4$, and assume w.l.o.g. that $|X| \ge |Y|$. Update $G'$ to be $G'[X]$, add the edges of $E(X, Y)$ to $E''$, and continue to the next iteration. Clearly, at the end of the algorithm, we obtain a graph $G'$ that is an $\Omega(\frac{\alpha^2}{d})$-expander. It only remains to show that $|V(G')| \ge |V| - \frac{4|E'|}{\alpha}$. The remainder of the analysis is identical to the analysis of Claim 2.3.
+
+Assume that the algorithm performs $r$ iterations, and for each $1 \le i \le r$, let $(X_i, Y_i)$ be the cut computed by the algorithm in iteration $i$. Since $|X_i| \ge |Y_i|$, $|Y_i| \le |V(G)|/2$. At the same time, if we denote $E_i = E'' \cap E(X_i, Y_i)$, then $|E_i| < \alpha|Y_i|/4$. Therefore:
+
+$$ |E''| = \sum_{i=1}^{r} |E_i| < \alpha \sum_{i=1}^{r} |Y_i|/4. $$
+
+On the other hand, since $G$ is an $\alpha$-expander, the total number of edges leaving each set $Y_i$ in $G$ is at least $\alpha|Y_i|$, and all such edges lie in $E' \cup E''$. Since each edge of $E' \cup E''$ has two endpoints and can therefore leave at most two of the sets $Y_i$:
+
+$$ |E'| + |E''| \ge \alpha \sum_{i=1}^{r} |Y_i|/2. $$
+
+Combining both bounds, we get that $|E'| > \alpha \sum_{i=1}^{r} |Y_i|/4$, and so $\sum_{i=1}^{r} |Y_i| \le \frac{4|E'|}{\alpha}$. Therefore,
+
+$$ |V(G')| = |V| - \sum_{i=1}^{r} |Y_i| \ge |V| - \frac{4|E'|}{\alpha}. $$
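The iterative trimming above can be sketched concretely on small graphs, with a brute-force sparsest cut standing in for the cut returned by Theorem 2.2 (all names are ours, the $\alpha/4$ threshold follows the text, and the brute-force routine is illustrative only, not efficient):

```python
from itertools import combinations

def sparsest_cut(n, edges):
    """Brute-force sparsest cut of an n-vertex graph (tiny n only):
    minimizes |E(S, V \\ S)| / min(|S|, n - |S|)."""
    best_phi, best_S = float("inf"), None
    for k in range(1, n // 2 + 1):
        for S in combinations(range(n), k):
            S = set(S)
            cut = sum(1 for (u, v) in edges if (u in S) != (v in S))
            phi = cut / len(S)          # |S| is the smaller side (k <= n // 2)
            if phi < best_phi:
                best_phi, best_S = phi, S
    return best_phi, best_S

def trim_to_expander(n, edges, alpha):
    """Repeatedly remove the smaller side of any cut of sparsity
    < alpha / 4 until no such cut remains in the surviving graph."""
    live = set(range(n))
    while len(live) > 1:
        order = sorted(live)
        idx = {v: i for i, v in enumerate(order)}
        sub = [(idx[u], idx[v]) for (u, v) in edges if u in live and v in live]
        phi, S = sparsest_cut(len(live), sub)
        if phi >= alpha / 4:
            break
        live -= {order[i] for i in S}
    return live

# Two triangles joined by one edge: with alpha = 2 the joining cut has
# sparsity 1/3 < alpha/4, so one triangle is trimmed away.
edges = [(0, 1), (0, 2), (1, 2), (2, 3), (3, 4), (3, 5), (4, 5)]
print(sorted(trim_to_expander(6, edges, alpha=2.0)))   # [3, 4, 5]
```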
+
+## G.2 Proof of Claim 6.9
+
+We can compute the largest-cardinality set of disjoint paths connecting vertices of $A$ to vertices of $B$ in $G$ using a standard maximum $s$-$t$ flow computation and the integrality of flow. Therefore, it is sufficient to show that there exists a set of $\lceil \alpha z/d \rceil$ disjoint paths connecting $A$ to $B$ in $G$.
+
+Assume otherwise. Then, from Menger's theorem, there is a set $Z$ of fewer than $\alpha z/d$ vertices in $G$, such that $G \setminus Z$ contains no path from a vertex of $A \setminus Z$ to a vertex of $B \setminus Z$. Let $E'$ be the set of all edges of $G$ incident to the vertices of $Z$. Since the maximum vertex degree in $G$ is at most $d$, $|E'| < \alpha z$. Therefore, graph $G\setminus E'$ contains no path connecting a vertex of $A$ to a vertex of $B$. Let $X$ be the union of all connected components of $G\setminus E'$ containing the vertices of $A$, and let $Y = V(G)\setminus X$. Then $|E(X, Y)| \le |E'| < \alpha z \le \alpha \cdot \min\{|X|, |Y|\}$, contradicting the fact that $G$ is an $\alpha$-expander.
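The flow computation referred to in this proof can be sketched with the standard vertex-splitting reduction and a unit-capacity Edmonds-Karp max-flow; a self-contained sketch (function name is ours):

```python
from collections import deque

def max_vertex_disjoint_paths(n, edges, A, B):
    """Maximum number of vertex-disjoint A-B paths: split each vertex v
    into v_in -> v_out with capacity 1 and run unit-capacity max-flow
    (Edmonds-Karp) from a super-source to a super-sink."""
    S, T = 2 * n, 2 * n + 1                 # v_in = 2v, v_out = 2v + 1
    cap = {}
    adj = [[] for _ in range(2 * n + 2)]

    def link(u, v, c):
        cap[(u, v)] = cap.get((u, v), 0) + c
        cap.setdefault((v, u), 0)           # residual arc
        adj[u].append(v)
        adj[v].append(u)

    for v in range(n):
        link(2 * v, 2 * v + 1, 1)           # vertex capacity 1
    for (u, v) in edges:
        link(2 * u + 1, 2 * v, 1)
        link(2 * v + 1, 2 * u, 1)
    for a in A:
        link(S, 2 * a, 1)
    for b in B:
        link(2 * b + 1, T, 1)

    flow = 0
    while True:
        parent = {S: None}                  # BFS for an augmenting path
        queue = deque([S])
        while queue and T not in parent:
            x = queue.popleft()
            for y in adj[x]:
                if y not in parent and cap[(x, y)] > 0:
                    parent[y] = x
                    queue.append(y)
        if T not in parent:
            return flow
        y = T
        while parent[y] is not None:        # augment by one unit
            x = parent[y]
            cap[(x, y)] -= 1
            cap[(y, x)] += 1
            y = x
        flow += 1

# A star through vertex 2: both A-B paths must pass through it.
print(max_vertex_disjoint_paths(5, [(0, 2), (1, 2), (2, 3), (2, 4)], [0, 1], [3, 4]))  # 1
```

By Menger's theorem (as used above), the returned value also equals the size of a minimum vertex cut separating $A$ from $B$.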
+
+## G.3 Proof of Theorem 6.10
+
+The main tool that we use for the proof of Theorem 6.10 is the following theorem, whose proof appeared
+in [CC16]; we include the proof here for completeness.
+
+**Theorem G.1 (Restatement of Theorem A.4 in [CC16])** There is an efficient algorithm, that,
+given a graph $G$ with maximum vertex degree at most $d$, an integer $q \ge 1$, and a set $\mathcal{P}$ of at least $16dq$
+disjoint paths in $G$, computes a subset $\mathcal{P}' \subseteq \mathcal{P}$ of at least $|\mathcal{P}|/2$ paths, and a collection $\mathcal{C}$ of disjoint
+connected subgraphs of $G$, such that each path $P \in \mathcal{P}'$ is completely contained in some subgraph $C \in \mathcal{C}$,
+and each such subgraph contains at least $q$ and at most $4dq$ paths of $\mathcal{P}'$.
+
+**Proof:** Starting from $G$, we construct a new graph $H$, by contracting every path $P \in \mathcal{P}$ into a super-node $u_P$. Let $U = \{u_P \mid P \in \mathcal{P}\}$ be the resulting set of super-nodes. Let $\tau$ be any spanning tree of $H$, rooted at an arbitrary vertex $r$. Given a vertex $v \in V(\tau)$, let $\tau_v$ be the sub-tree of $\tau$ rooted at $v$. Let $J_v' \subseteq V(G)$ be the set of all vertices of $\tau_v$ that belong to the original graph $G$ (that is, they are not super-nodes), and let $J_v''$ be the set of all vertices of $G$ that lie on paths $P \in \mathcal{P}$ with $u_P \in \tau_v$. We then let $J_v = J_v' \cup J_v''$. We also denote $G_v = G[J_v]$; observe that it must be a connected graph. Over the course of the algorithm, we will delete some vertices from $\tau$. The notation $\tau_v$ and $G_v$ is always computed with respect to the most current tree $\tau$. We start with $\mathcal{C} = \emptyset, \mathcal{P}' = \emptyset$, and then iterate.
+
+Each iteration is performed as follows. If $q \le |V(\tau) \cap U| \le 4dq$, then we add the graph $G_r$ corresponding to the root $r$ of $\tau$ to $\mathcal{C}$, add all paths in $\{P \mid u_P \in V(\tau)\}$ to $\mathcal{P}'$, and terminate the algorithm. If $|V(\tau) \cap U| < q$, then we also terminate the algorithm (we will show later that $|\mathcal{P}'| \ge |\mathcal{P}|/2$ at this point). Otherwise, let $v$ be the lowest vertex of $\tau$ with $|V(\tau_v) \cap U| \ge q$. If $v \notin U$, then, since the degree of every vertex in $G$ is at most $d$, $|V(\tau_v) \cap U| \le dq$. We add $G_v$ to $\mathcal{C}$, and all paths in $\{P \mid u_P \in V(\tau_v)\}$ to $\mathcal{P}'$. We then delete all vertices of $\tau_v$ from $\tau$, and continue to the next iteration.
+
+Assume now that $v = u_P$ for some path $P \in \mathcal{P}$. If $|V(\tau_v) \cap U| \le 4dq$, then we add $G_v$ to $\mathcal{C}$, and all paths in $\{P' \mid u_{P'} \in V(\tau_v)\}$ to $\mathcal{P}'$, delete all vertices of $\tau_v$ from $\tau$, and continue to the next iteration. So we assume that $|V(\tau_v) \cap U| > 4dq$.
+
+Let $v_1, \dots, v_z$ be the children of $v$ in $\tau$. Build a new tree $\tau'$ as follows. Start with the path $P$, and add the vertices $v_1, \dots, v_z$ to $\tau'$. For each $1 \le i \le z$, let $(x_i, y_i) \in E(G)$ be any edge connecting some vertex $x_i \in V(P)$ to some vertex $y_i \in V(G_{v_i})$; such an edge must exist from the definition of $G_{v_i}$ and $\tau$. Add the edge $(v_i, x_i)$ to $\tau'$. Therefore, $\tau'$ is the union of the path $P$, and a number of disjoint stars whose centers lie on the path $P$, and whose leaves are the vertices $v_1, \dots, v_z$. The degree of every vertex of $P$ is at most $d$. We define the weight of the vertex $v_i$ as the number of the paths in $\mathcal{P}$ contained in $G_{v_i}$ (equivalently, it is $|U \cap \tau_{v_i}|$). Recall that the weight of each vertex $v_i$ is at most $q$, by the choice of $v$. For each vertex $x \in P$, the weight of $x$ is the total weight of its children in $\tau'$. Recall that the total weight of all vertices of $P$ is at least $4dq$, and the weight of every vertex is at most $dq$. We partition $P$ into a number of disjoint segments $\Sigma = (\sigma_1, \dots, \sigma_l)$ of weight at least $q$ and at most $4dq$ each, as follows. Start with $\Sigma = \emptyset$, and then iterate. If the total weight of the vertices of $P$ is at most $4dq$, we build a single segment, containing the whole path. Otherwise, find the shortest segment $\sigma$ starting from the first vertex of $P$, whose weight is at least $q$. Since the weight of every vertex is at most $dq$, the weight of $\sigma$ is at most $2dq$. We then add $\sigma$ to $\Sigma$, delete it from $P$ and continue. Consider the final set $\Sigma$ of segments. For each segment $\sigma$, we add a new graph $C_\sigma$ to $\mathcal{C}$. Graph $C_\sigma$ consists of the union of $\sigma$, the graphs $G_{v_i}$ for each $v_i$ that is connected to a vertex of $\sigma$ with an edge in $\tau'$, and the corresponding edge $(x_i, y_i)$. 
+Clearly, $C_\sigma$ is a connected subgraph of $G$, containing at least $q$ and at most $4dq$ paths of $\mathcal{P}$. We add all those paths to $\mathcal{P}'$, delete all vertices of $\tau_v$ from $\tau$, and continue to the next iteration. We note that path $P$ itself is not added to $\mathcal{P}'$, but all paths $P'$ with $u_{P'} \in V(\tau_v)$ are added to $\mathcal{P}'$.
+
+At the end of this procedure, we obtain a collection $\mathcal{P}'$ of paths, and a collection $\mathcal{C}$ of disjoint connected
+
+subgraphs of $G$, such that each path $P \in \mathcal{P}'$ is contained in some $C \in \mathcal{C}$, and each $C \in \mathcal{C}$ contains at least $q$ and at most $4dq$ paths from $\mathcal{P}'$. It now remains to show that $|\mathcal{P}'| \geq |\mathcal{P}|/2$. We discard at most $q$ paths in the last iteration of the algorithm. Additionally, when $v = u_P$ is processed, if $|V(\tau_v) \cap U| > 4dq$, then path $P$ is also discarded, but at least $4dq$ paths are added to $\mathcal{P}'$. Therefore, overall, $|\mathcal{P}'| \geq |\mathcal{P}| - \frac{|\mathcal{P}|}{4dq+1} - q \geq |\mathcal{P}|/2$, since $|\mathcal{P}| \geq 16dq$. $\square$
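The greedy segmentation of the path $P$ by vertex weights, used in the proof above, can be sketched as follows (function name is ours; as in the text, each weight is assumed at most $dq$ and the total at least $4dq$):

```python
def partition_by_weight(weights, q, d):
    """Split a sequence of vertex weights (each <= d * q) into
    consecutive segments: while more than 4*d*q weight remains, strip
    the shortest prefix of weight >= q (hence of weight < q + d*q);
    the final remainder forms one segment of weight <= 4*d*q."""
    segments, i = [], 0
    remaining = sum(weights)
    while i < len(weights):
        if remaining <= 4 * d * q:
            segments.append(weights[i:])
            break
        seg, w = [], 0
        while w < q:
            seg.append(weights[i])
            w += weights[i]
            remaining -= weights[i]
            i += 1
        segments.append(seg)
    return segments

print(partition_by_weight([3] * 8, q=5, d=1))  # [[3, 3], [3, 3, 3, 3, 3, 3]]
```

Every produced segment has weight at least $q$: the stripped prefixes by construction, and the final remainder because at least $4dq - (q + dq) \ge q$ weight is left after each stripping step.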
+
+We now turn to prove Theorem 6.10. Recall that we are given an $\alpha$-Expanding Path-of-Sets System $\Sigma = (\mathcal{S}, \mathcal{M}, A_1, B_3)$ of width $w$ and length $3$, where $0 < \alpha < 1$, and the corresponding graph $G_\Sigma$ has maximum vertex degree at most $d$. Our goal is to compute subsets $\hat{A}_1 \subseteq A_1$, $\hat{B}_3 \subseteq B_3$ of $\Omega(\alpha^2 w/d^3)$ vertices each, such that $\hat{A}_1 \cup \hat{B}_3$ is well-linked in $G_\Sigma$. Notice that we can assume w.l.o.g. that $w \ge 256d^3/\alpha^2$, as otherwise it is sufficient that each set $\hat{A}_1, \hat{B}_3$ contains a single vertex, which is trivial to ensure.
+
+We apply Claim 6.9 to graph $S_1$, together with the sets $A_1, B_1$ of vertices, to compute a set $\mathcal{P}_1$ of $\lceil \alpha w/d \rceil$ node-disjoint paths in $S_1$, connecting vertices of $A_1$ to vertices of $B_1$. We then set $q = \lfloor 16d/\alpha \rfloor$, and use Theorem G.1, to compute a subset $\mathcal{P}'_1 \subseteq \mathcal{P}_1$ of at least $|\mathcal{P}_1|/2 \ge \alpha w/(2d)$ paths, and a collection $\mathcal{C}$ of disjoint connected subgraphs of $S_1$, such that each path $P \in \mathcal{P}'_1$ is completely contained in some subgraph $C \in \mathcal{C}$, and each such subgraph contains at least $q$ and at most $4dq$ paths of $\mathcal{P}'_1$. (Note that from our assumption that $w \ge 256d^3/\alpha^2$, $|\mathcal{P}_1| \ge 16dq$.) Clearly, $|\mathcal{C}| \ge \frac{|\mathcal{P}'_1|}{4dq} \ge \frac{\alpha^2 w}{256d^3}$. We select one representative path $P \in \mathcal{P}'_1$ from each subgraph $C \in \mathcal{C}$, so that $P \subseteq C$, and we let $\mathcal{P}^*_1 \subseteq \mathcal{P}'_1$ be the resulting set of paths. We are now ready to define the set $\hat{A}_1 \subseteq A_1$ of vertices: set $\hat{A}_1$ contains, for every path $P \in \mathcal{P}^*_1$, the endpoint of $P$ that lies in $A_1$. Note that $|\hat{A}_1| = |\mathcal{P}^*_1| = |\mathcal{C}| \ge \frac{\alpha^2 w}{256d^3}$. For convenience, for every vertex $a \in \hat{A}_1$, we denote by $P_a \in \mathcal{P}^*_1$ the unique path originating at $a$, and we denote by $C_a \in \mathcal{C}$ the unique subgraph of $S_1$ containing $P_a$.
+
+We select a subset $\hat{B}_3 \subseteq B_3$ of at least $\frac{\alpha^2 w}{256d^3}$ vertices similarly, by running the same algorithm in $S_3$. The set of paths obtained as the outcome of Theorem G.1 is denoted by $\mathcal{P}'_3$, and the set of connected subgraphs of $S_3$ by $\mathcal{C}'$. We also denote by $\mathcal{P}^*_3 \subseteq \mathcal{P}'_3$ the set of representative paths that we select from each subgraph of $\mathcal{C}'$. For every vertex $b \in \hat{B}_3$, we denote by $P_b \in \mathcal{P}^*_3$ the unique path originating at $b$, and we denote by $C_b \in \mathcal{C}'$ the unique subgraph containing $P_b$.
+
+It remains to show that $\hat{A}_1 \cup \hat{B}_3$ is well-linked in $G_\Sigma$. We show this using the same arguments as in [CC16]. Let $X, Y \subseteq \hat{A}_1 \cup \hat{B}_3$ be two equal-cardinality sets of vertices. We need to show that there is a set $\mathcal{Q}$ of $|X| = |Y|$ disjoint paths connecting them in $G_\Sigma$, such that the paths in $\mathcal{Q}$ are internally disjoint from $\hat{A}_1 \cup \hat{B}_3$. We define a new subgraph $H \subseteq G_\Sigma$ as follows: graph $H$ is the union of the graph $S_2$ and the matchings $\mathcal{M}_1$ and $\mathcal{M}_2$; additionally, for every vertex $v \in X \cup Y$, we add the graph $C_v$ to $H$. It is now enough to show that there is a set $\mathcal{Q}$ of $|X| = |Y|$ disjoint paths connecting $X$ to $Y$ in $H$; such paths are guaranteed to be internally disjoint from $\hat{A}_1 \cup \hat{B}_3$. From the integrality of flow, it is sufficient to show a flow $F$ in $H$, where every vertex in $X$ sends one flow unit, every vertex in $Y$ receives one flow unit, and every vertex of $H$ carries at most one flow unit. We now construct such a flow. This flow will be a concatenation of three flows, $F_1, F_2, F_3$.
+
+We start by defining the flows $F_1$ and $F_3$. Consider some vertex $v \in X \cup Y$, and assume w.l.o.g. that $v \in \hat{A}_1$. We select an arbitrary subset $U_v \subseteq B_1$ of $q = \lfloor 16d/\alpha \rfloor$ vertices that serve as endpoints of paths $P \in \mathcal{P}'_1$ that are contained in $C_v$. Since $C_v$ is a connected graph, vertex $v$ can send $1/q$ flow units to every vertex in $U_v$ simultaneously, inside the graph $C_v$, so that the flow on every vertex is at most 1. We denote the resulting flow by $F^v$.
+
+We obtain the flow $F_1$ by taking the union of all flows $F^v$ for $v \in X$, and we obtain the flow $F_3$ by taking the union of all flows $F^v$ for $v \in Y$ (we reverse the direction of the flow $F^v$ in the latter case).
+
+Let $R_1 = \bigcup_{v \in X} U_v$, and let $R_2 = \bigcup_{v \in Y} U_v$. Note that $R_1 \cup R_2 \subseteq B_1 \cup A_3$. For every vertex $x \in R_1 \cup R_2$ that lies in $B_1$, we let $x'$ be the vertex of $A_2$ that is connected to $x$ by an edge of $\mathcal{M}_1$. Similarly, for
+---PAGE_BREAK---
+
+every vertex $x \in R_1 \cup R_2$ that lies in $A_3$, we let $x'$ be the vertex of $B_2$ that is connected to $x$ by an edge of $\mathcal{M}_2$. Let $R'_1 = \{x' | x \in R_1\}$ and $R'_2 = \{x' | x \in R_2\}$. Note that $R'_1, R'_2$ are disjoint sets of vertices in $S_2$. Since graph $S_2$ is an $\alpha$-expander, there is a flow $F'_2$ in $S_2$, where every vertex in $R'_1$ sends one flow unit, every vertex in $R'_2$ receives one flow unit, and every edge carries at most $1/\alpha$ flow units. Scaling this flow down by factor $q = \lfloor 16d/\alpha \rfloor$, we obtain a new flow $F_2$ in $S_2$, where every vertex of $R'_1$ sends $1/q$ flow units, every vertex of $R'_2$ receives $1/q$ flow units, and every vertex of $S_2$ carries at most one flow unit.
+
+The final flow $F$ is obtained by concatenating the flows $F_1, F_2$ and $F_3$, and sending $1/q$ flow units on every edge of $\mathcal{M}_1 \cup \mathcal{M}_2$ that is incident to a vertex of $R_1 \cup R_2$. The flow in $F$ guarantees that every vertex of $X$ sends one flow unit, every vertex in $Y$ receives one flow unit, and every vertex of $G_\Sigma$ carries at most one flow unit.
+
+# H Proof of Observation 7.1
+
+We start with the cut $(X, Y)$ and perform a number of iterations. In every iteration, we modify the cut $(X, Y)$ so that the number of connected components in $G \setminus E(X, Y)$ strictly decreases, while ensuring that the cut sparsity does not increase. We now describe the execution of an iteration. Let $(X, Y)$ be the current cut. Let $\mathcal{C}_X$ and $\mathcal{C}_Y$ be the sets of all connected components of $G[X]$ and $G[Y]$ respectively. If $|\mathcal{C}_X| = |\mathcal{C}_Y| = 1$, then we return the cut $(X, Y)$, and terminate the algorithm. We assume from now on that this is not the case.
+
+Assume w.l.o.g. that $|X| \le |Y|$. Let $\rho_X := \frac{|E(X,Y)|}{|X|}$ and $\rho_Y := \frac{|E(X,Y)|}{|Y|}$. We consider the following two cases.
+
+**Case 1:** The first case happens when $|\mathcal{C}_X| > 1$. Recall that $|E(X,Y)| = \rho_X|X|$. Thus, there is a connected component $C \in \mathcal{C}_X$ such that $|E(C,Y)| \ge \rho_X|C|$. Consider a new partition $(X', Y')$, obtained by setting $X' = X \setminus C$ and $Y' = Y \cup C$. Notice that the number of connected components in $G \setminus E(X', Y')$ decreases by at least one. The sparsity of the new cut is:
+
+$$ \frac{|E(X', Y')|}{\min\{|X'|, |Y'|\}} = \frac{|E(X', Y')|}{|X'|} = \frac{|E(X, Y)| - |E(C, Y)|}{|X| - |C|} \le \frac{\rho_X|X| - \rho_X|C|}{|X| - |C|} = \rho_X. $$
+
+**Case 2:** If Case 1 does not happen, then $|\mathcal{C}_Y| > 1$ must hold. As before, there is a connected component $C \in \mathcal{C}_Y$ such that $|E(C,X)| \ge \rho_Y|C|$. Consider the new partition $(X', Y')$ obtained by setting $X' = X \cup C$ and $Y' = Y \setminus C$. Notice that the number of connected components in $G \setminus E(X', Y')$ decreases by at least one. In order to bound the sparsity of the new cut, we consider two cases. If $|X'| \ge |Y'|$, then the sparsity of the new cut is:
+
+$$ \frac{|E(X', Y')|}{|Y'|} = \frac{|E(X, Y)| - |E(C, X)|}{|Y| - |C|} \le \frac{\rho_Y|Y| - \rho_Y|C|}{|Y| - |C|} = \rho_Y \le \rho_X. $$
+
+Otherwise, the sparsity of the new cut is:
+
+$$ \frac{|E(X', Y')|}{|X'|} = \frac{|E(X, Y)| - |E(C, X)|}{|X| + |C|} < \frac{|E(X, Y)|}{|X|} = \rho_X. $$
+
+It is immediate to verify that the algorithm is efficient, and that it produces the cut $(X^*, Y^*)$ with the required properties.
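The uncrossing procedure above translates almost line by line into code. The sketch below (stdlib only, small graphs) greedily moves the component with the largest edge-to-size ratio, which by the averaging argument is at least the current sparsity, and returns a cut whose sides induce connected subgraphs without increasing the sparsity:

```python
def build(edges):
    """Adjacency map of an undirected graph from an edge list."""
    adj = {}
    for u, v in edges:
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
    return adj

def components(adj, nodes):
    """Connected components of the subgraph induced by `nodes` (DFS)."""
    nodes, seen, comps = set(nodes), set(), []
    for s in nodes:
        if s in seen:
            continue
        stack, comp = [s], set()
        while stack:
            u = stack.pop()
            if u in comp:
                continue
            comp.add(u)
            stack.extend(v for v in adj.get(u, ())
                         if v in nodes and v not in comp)
        seen |= comp
        comps.append(comp)
    return comps

def cut_edges(adj, A, B):
    """Number of edges with one endpoint in A and one in B."""
    return sum(1 for u in A for v in adj.get(u, ()) if v in B)

def sparsity(adj, X, Y):
    return cut_edges(adj, X, Y) / min(len(X), len(Y))

def uncross(adj, X, Y):
    """Move whole components across the cut until both sides induce
    connected subgraphs; the sparsity never increases (Observation 7.1)."""
    X, Y = set(X), set(Y)
    while True:
        if len(X) > len(Y):
            X, Y = Y, X                      # keep |X| <= |Y|
        cX, cY = components(adj, X), components(adj, Y)
        if len(cX) == 1 and len(cY) == 1:
            return X, Y
        if len(cX) > 1:                      # Case 1: shrink X
            C = max(cX, key=lambda c: cut_edges(adj, c, Y) / len(c))
            X, Y = X - C, Y | C
        else:                                # Case 2: shrink Y
            C = max(cY, key=lambda c: cut_edges(adj, c, X) / len(c))
            X, Y = X | C, Y - C
```

On two triangles joined by a single edge, starting from a cut that splits both triangles, the procedure ends with the natural cut along the bridge.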
+---PAGE_BREAK---
+
+# References
+
+[Alo86] Noga Alon. Eigenvalues and expanders. *Combinatorica*, 6(2):83–96, 1986.
+
+[Alo98] Noga Alon. Spectral techniques in graph algorithms. In *Latin American Symposium on Theoretical Informatics*, pages 206–215. Springer, 1998.
+
+[AM84] Noga Alon and Vitali D. Milman. Eigenvalues, expanders and superconcentrators. In *25th Annual Symposium on Foundations of Computer Science*, pages 320–322. IEEE, 1984.
+
+[And10] Matthew Andrews. Approximation algorithms for the edge-disjoint paths problem via Raecke decompositions. In *Proceedings of IEEE FOCS*, pages 277–286, 2010.
+
+[BCE80] Béla Bollobás, Paul Catlin, and Paul Erdős. Hadwiger’s conjecture is true for almost every graph. *European Journal of Combinatorics*, 1(3):195–199, 1980.
+
+[BFU94] Andrei Broder, Alan Frieze, and Eli Upfal. Existence and construction of edge-disjoint paths on expander graphs. *SIAM Journal on Computing*, 23(5):976–989, 1994.
+
+[Car88] Carsten Thomassen. On the presence of disjoint subgraphs of a specified type. *Journal of Graph Theory*, 12(1):101–111, 1988.
+
+[CC16] Chandra Chekuri and Julia Chuzhoy. Polynomial bounds for the grid-minor theorem. *J. ACM*, 63(5):40:1–40:65, December 2016.
+
+[Chu15] Julia Chuzhoy. Excluded grid theorem: Improved and simplified. In *Proceedings of the Forty-seventh Annual ACM Symposium on Theory of Computing*, STOC ’15, pages 645–654, New York, NY, USA, 2015. ACM.
+
+[Chu16a] Julia Chuzhoy. Improved Bounds for the Excluded Grid Theorem. *ArXiv e-prints*, February 2016.
+
+[Chu16b] Julia Chuzhoy. Routing in undirected graphs with constant congestion. *SIAM J. Comput.*, 45(4):1490–1532, 2016.
+
+[CKS05] Chandra Chekuri, Sanjeev Khanna, and F. Bruce Shepherd. Multicommodity flow, well-linked terminals, and routing problems. In *Proc. of ACM STOC*, pages 183–192, 2005.
+
+[CT] Julia Chuzhoy and Zihan Tan. Towards tight(er) bounds for the excluded grid theorem. SODA 2019, to appear.
+
+[DH07] Erik D Demaine and MohammadTaghi Hajiaghayi. Quickly deciding minor-closed parameters in general graphs. *European Journal of Combinatorics*, 28(1):311–314, January 2007.
+
+[DH08] Erik D. Demaine and MohammadTaghi Hajiaghayi. The bidimensionality theory and its algorithmic applications. *The Computer Journal*, 51(3):292–302, 2008.
+
+[DP09] Devdatt Dubhashi and Alessandro Panconesi. *Concentration of Measure for the Analysis of Randomized Algorithms*. Cambridge University Press, New York, NY, USA, 1st edition, 2009.
+
+[Fie73] Miroslav Fiedler. Algebraic connectivity of graphs. *Czechoslovak mathematical journal*, 23(2):298–305, 1973.
+
+[FKO09] Nikolaos Fountoulakis, Daniela Kühn, and Deryk Osthus. The order of the largest complete minor in a random graph. *Random Structures & Algorithms*, 33(2):127–141, 2009.
+---PAGE_BREAK---
+
+[Fri01] Alan M. Frieze. Edge-disjoint paths in expander graphs. *SIAM Journal on Computing*, 30(6):1790–1801, 2001.
+
+[FST11] Fedor V. Fomin, Saket Saurabh, and Dimitrios M. Thilikos. Strengthening Erdős–Pósa property for minor-closed graph classes. *Journal of Graph Theory*, 66(3):235–240, 2011.
+
+[HLW06] Shlomo Hoory, Nathan Linial, and Avi Wigderson. Expander graphs and their applications. *Bull. Amer. Math. Soc. (N.S.)*, 43(4):439–561, 2006.
+
+[KK12] Ken-ichi Kawarabayashi and Yusuke Kobayashi. Linear min-max relation between the treewidth of H-minor-free graphs and its largest grid. In *29th International Symposium on Theoretical Aspects of Computer Science (STACS 2012)*, volume 14 of Leibniz International Proceedings in Informatics (LIPIcs), pages 278–289, Dagstuhl, Germany, 2012.
+
+[KN18] Michael Krivelevich and Rajko Nenadov. Complete minors in graphs without sparse cuts. *arXiv preprint arXiv:1812.01961*, 2018.
+
+[KR96] Jon Kleinberg and Ronitt Rubinfeld. Short paths in expander graphs. In *Proceedings of the 37th Annual Symposium on Foundations of Computer Science*, FOCS '96, pages 86–, Washington, DC, USA, 1996. IEEE Computer Society.
+
+[KR10] Ken-ichi Kawarabayashi and Bruce Reed. A separator theorem in minor-closed classes. In *2010 IEEE 51st Annual Symposium on Foundations of Computer Science*, pages 153–162, Oct 2010.
+
+[Kri18a] Michael Krivelevich. Expanders - how to find them, and what to find in them. *arXiv preprint arXiv:1812.11562*, 2018.
+
+[Kri18b] Michael Krivelevich. Finding and using expanders in locally sparse graphs. *SIAM Journal on Discrete Mathematics*, 32(1):611–623, 2018.
+
+[LR99] Tom Leighton and Satish Rao. Multicommodity max-flow min-cut theorems and their use in designing approximation algorithms. *J. ACM*, 46(6):787–832, November 1999.
+
+[LS15] Alexander Leaf and Paul Seymour. Tree-width and planar minors. *Journal of Combinatorial Theory, Series B*, 111:38–53, 2015.
+
+[MT10] Robin A. Moser and Gábor Tardos. A constructive proof of the general Lovász local lemma. *J. ACM*, 57:11:1–11:15, February 2010.
+
+[Ree97] Bruce Reed. *Surveys in Combinatorics*, chapter Treewidth and Tangles: A New Connectivity Measure and Some Applications. London Mathematical Society Lecture Note Series. Cambridge University Press, 1997.
+
+[RS86] Neil Robertson and Paul Seymour. Graph minors. V. Excluding a planar graph. *J. Comb. Theory Ser. B*, 41(1):92–114, August 1986.
+
+[RS95] Neil Robertson and Paul D. Seymour. Graph minors. XIII. The disjoint paths problem. *J. Comb. Theory Ser. B*, 63(1):65–110, January 1995.
+
+[RST94] Neil Robertson, Paul Seymour, and Robin Thomas. Quickly excluding a planar graph. *J. Comb. Theory, Ser. B*, 62(2):323–348, 1994.
+
+[RZ10] Satish Rao and Shuheng Zhou. Edge disjoint paths in moderately connected graphs. *SIAM J. Comput.*, 39(5):1856–1887, 2010.
+
+---PAGE_BREAK---
+
+# Mixed-Integer Nonlinear Programming Models and Algorithms for Large-Scale Supply Chain Design with Stochastic Inventory Management
+
+Fengqi You, Ignacio E. Grossmann*
+
+Department of Chemical Engineering, Carnegie Mellon University
+Pittsburgh, PA 15213, USA
+
+## Abstract
+
+An important challenge for most chemical companies is to simultaneously consider inventory optimization and supply chain network design under demand uncertainty. This leads to a problem that requires integrating a stochastic inventory model with the supply chain network design model. This problem can be formulated as a large scale combinatorial optimization model that includes nonlinear terms. Since these models are very difficult to solve, they require exploiting their properties and developing special solution techniques to reduce the computational effort. In this work, we analyze the properties of the basic model and develop solution techniques for a joint supply chain network design and inventory management model for a given product. The model is formulated as a nonlinear integer programming problem. By reformulating it as a mixed-integer nonlinear programming (MINLP) problem and using an associated convex relaxation model for initialization, we first propose a heuristic method to quickly obtain good quality solutions. Further, a decomposition algorithm based on Lagrangean relaxation is developed for obtaining global or near-global optimal solutions. Extensive computational examples with up to 150 distribution centers and 150 retailers are presented to illustrate the performance of the algorithms and to compare them with the full-space solution.
+
+* To whom all correspondence should be addressed. E-mail: grossmann@cmu.edu
+---PAGE_BREAK---
+
+# 1. Introduction
+
+Due to increasing pressure for remaining competitive in the global market place, an emerging challenge for the process industries is how to manage inventories at the enterprise level so as to reduce costs and improve customer service.¹, ² A key step toward this goal is to integrate inventory management with supply chain network design, so that decisions such as the number of inventory stocking locations and the associated amount of inventory can be determined simultaneously for lower costs and a higher customer service level.
+
+Although supply chain network design problems and the inventory management problems have been studied extensively in recent years,³⁻⁸ most of the models consider inventory management and supply chain network design separately. On the other hand, there are related works on supply chain optimization that take into account the inventory costs, but consider inventory issues without detailed inventory management policies. In these models the safety stock level is given as a parameter, and usually treated as a lower bound of the total inventory level,⁹⁻¹¹ or considered as the inventory targets that would lead to some penalty costs if violated.¹²⁻¹⁴ This approach cannot optimize the safety stock levels, especially when considering demand uncertainty.¹⁵⁻¹⁷ Thus, it can only provide an approximation of the inventory cost, and therefore lead to suboptimal solutions. Jung et al.¹⁸ introduce a simulation-optimization framework to estimate the optimal safety stock levels, but the supply chain design decisions are not jointly optimized.
+
+Recently, Shen et al.¹⁹ proposed a joint location-inventory model that integrates supply chain network design model with inventory management under demand uncertainty. In their work, the management of working inventory and safety stock are taken into account besides the distribution center location decisions. To solve the resulting nonlinear integer programming problem, the authors simplified the model by assuming that the uncertain demand in each retailer has the same variance-to-mean ratio. Based on this assumption, they reformulated the model as a set-covering problem and solved it with a branch-and-price algorithm. The proposed algorithm performs well
+---PAGE_BREAK---
+
+for large scale problems. However, the assumption of an identical variance-to-mean
+ratio might not provide a good approximation to real-world problems, because the
+demand uncertainties may vary significantly across retailers. Thus, to accommodate
+more general cases, an efficient algorithm is needed for the model without this
+simplification.
+
+Lagrangean relaxation and Lagrangean decomposition methods are recognized as efficient tools for solving large-scale optimization problems with “special” structures. The Lagrangean relaxation and subgradient optimization are discussed by Fisher.²⁰, ²¹ Later, Guignard and Kim²² proposed the well-known Lagrangean decomposition method that yields stronger bounds than the Lagrangean relaxation algorithm. A large number of applications of Lagrangean-based algorithms for supply chain optimization and related problems have been reported in the past. Various Lagrangean-based heuristic algorithms for large scale facility location problems are discussed by Beasley.²³ Based on this work, Holmberg and Ling²⁴ proposed a novel Lagrangean heuristic method for location problems with staircase costs. Sridharan²⁵ implemented the Lagrangean relaxation method for the plant location problem with consideration of capacity issues. A Lagrangean relaxation and decomposition method for multiproduct tri-echelon supply chain design problem is proposed by Pirkul and Jayaraman.²⁶ Later, Klose²⁷ developed a relax-and-cut algorithm for the capacitated facility location problems. The proposed method yielded significant improvements in computational efficiency. van den Heever et al.²⁸ developed a Lagrangean heuristic method for the design and planning of offshore oil fields. A Lagrangean based temporal decomposition algorithm for supply chain planning was proposed by Jackson and Grossmann.¹² Recently, Neiro and Pinto²⁹ applied the Lagrangean based method to a petroleum supply chain planning model. The results showed that significant improvement in computational efficiency can be achieved by using Lagrangean decomposition.
+
+The objective of this work is to develop effective algorithms for large-scale joint
+supply chain network design and inventory management problem for a given product.
+This work relies on the integer nonlinear programming model proposed by Shen et al.¹⁹.
+We first reformulate the model as a mixed-integer nonlinear programming (MINLP)
+---PAGE_BREAK---
+
+model, and then solve it with different solution approaches, including a proposed heuristic method that relies on initialization from convex relaxations and a Lagrangean relaxation algorithm. The results from the full-scale solution and those from the various solution strategies are then compared and analyzed.
+
+The rest of this paper is organized as follows. Some basic concepts of inventory management with risk pooling are discussed in Section 2. Section 3 presents the problem statement, while Section 4 provides a detailed description of the joint supply chain network design and inventory management model proposed by Shen et al.,¹⁹ stating and defining the objective function and constraints of the model. The solution strategies including the MINLP reformulation and the Lagrangean relaxation algorithm are proposed in Section 5. Two small illustrative examples on a supply chain for liquid oxygen (LOX) are given in Section 6. Section 7 presents the comparison between different solution strategies and the full scale model, along with an analysis of the solution quality. Finally, Section 8 concludes on the performance of the proposed algorithm and the overall results.
+
+# 2. Inventory Management Model with Risk Pooling
+
+In this section, we briefly review some inventory management models that are related to the problem addressed in this work. Detailed discussion about inventory management models are given by Zipkin.⁵
+
+Figure 1 shows the inventory profile in a distribution center (or any stocking facility) for a given product. As we can see, the inventory level decreases due to the customer demand, and increases when replenishments arrive. The reorder point is a specific inventory level. It means that each time when the inventory level goes down to the reorder point, a replenishment order will be placed. The time it requires from placing an order until the replenishment arrives at the distribution center is defined as the ordering lead time. Typically, the total inventory consists of two parts, working inventory and safety stock. The working inventory represents products that have been ordered from the supplier due to replenishment, but not yet shipped out of the distribution center to satisfy the demand. The safety stock is the inventory for buffering
+---PAGE_BREAK---
+
+the system against stockouts due to the uncertain demands during the ordering lead time.
+
+**Figure 1 Inventory profile changing with time**
+
+**Figure 2 Inventory profile for deterministic demand with (Q, r) policy**
+
+A popular inventory control policy widely used in practice is the order quantity/reorder point (Q, r) policy: each time the inventory level depletes to the reorder point r, a fixed order quantity Q is placed for replenishment. When the demand is deterministic with a constant rate, the inventory profile is a sawtooth, a series of identical right triangles, as shown in Figure 2. Each of
+---PAGE_BREAK---
+
+these triangles has the same height (the order quantity Q) and the same width,
+the replenishment interval. The optimal order quantity and replenishment
+interval for this deterministic demand case can be determined by using an economic
+order quantity (EOQ) model, which takes into account the trade-off between fixed
+ordering costs, transportation costs and working inventory holding costs (the EOQ
+formulation for our model is given in equation (3) in Section 4.1). Although the EOQ
+model uses deterministic demands, it has been shown to provide very good
+approximations of the working inventory costs of systems operating a (Q, r) policy
+under demand uncertainty.³⁰,³¹ A common approach for the (Q, r) inventory model, as
+pointed out by Axsater,³⁰ is to first replace the stochastic demand with its mean value,
+then determine the optimal order quantity Q with the deterministic EOQ model, and
+finally determine the optimal reorder point under uncertain demand based on that
+order quantity.
+
+**Figure 3 Safety stock and service level under normally distributed demand**
+
+A distribution center under demand uncertainty may not always have sufficient
+stock to handle the changing demand. If the reorder point (inventory level) is less than
+the demand during the order lead time, a stockout may occur. The *Type I service level* is
+defined as the probability that the total inventory on hand is more than the demand (as
+shown in Figure 3). If the demand is normally distributed with mean $\mu$ and standard
+deviation $\sigma$ and the ordering lead time is $L$, the optimal safety stock level that
+guarantees a service level $\alpha$ is $z_{\alpha}\sigma\sqrt{L}$, where $z_{\alpha}$ is a standard normal deviate such
+---PAGE_BREAK---
+
+that $Pr(z \le z_{\alpha}) = \alpha$. We should note that the acceptable practice in this field is to assume a normal distribution of the demand, although of course other distribution functions can be specified.
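For a single stocking location, the safety stock formula can be evaluated directly with the standard library's normal quantile function; the parameter values below are purely illustrative:

```python
from math import sqrt
from statistics import NormalDist

def safety_stock(alpha, sigma, lead_time):
    """Safety stock z_alpha * sigma * sqrt(L) for a Type I service level
    alpha, given N(mu, sigma^2) daily demand and lead time L (in days)."""
    z = NormalDist().inv_cdf(alpha)  # standard normal deviate z_alpha
    return z * sigma * sqrt(lead_time)

# Illustrative numbers: sigma = 20 units/day, L = 4 days, alpha = 0.95
ss = safety_stock(0.95, 20, 4)
```

For $\alpha = 0.95$, $z_{\alpha} \approx 1.645$, so with $\sigma = 20$ units/day and $L = 4$ days the safety stock is roughly 66 units; at $\alpha = 0.5$ no safety stock is needed, since the reorder point already covers the mean lead time demand.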
+
+To consider the total safety stock of an inventory system, Eppen³² proposed the “risk pooling effect”, which states that significant safety stock cost can be saved by grouping retailers. In particular, Eppen considered a single period problem with *N* retailers and one supplier. Each retailer *i* has normally distributed demand with mean $\mu_i$ and standard deviation $\sigma_i$, and the correlation coefficient of demand at retailers *i* and *j* is $\rho_{ij}$. The order lead time from the supplier to all these retailers is the same and given as *L*. Eppen compared two operational modes of the retailer supply chain: the decentralized mode and the centralized mode. In the decentralized mode, each retailer orders independently to minimize its own expected cost. Since in this mode the optimal safety stock at retailer *i* is $z_{\alpha}\sqrt{L}\sigma_i$, the total safety stock in the system is given by,
+
+$$z_{\alpha}\sqrt{L}\sum_{i=1}^{N}\sigma_{i}.$$
+
+In the centralized mode, all the retailers are considered as a whole and a single quantity is ordered for replenishment, so as to minimize the total expected cost of the entire system. Since in the centralized mode all the retailers are grouped, and the demand at each retailer follows a normal distribution $N(\mu_i, \sigma_i^2)$, the total uncertain demand of the entire system during the order lead time also follows a normal distribution, with mean $L\sum_{i=1}^{N}\mu_i$ and standard deviation $\sqrt{L\left(\sum_{i=1}^{N}\sigma_i^2+2\sum_{i=1}^{N-1}\sum_{j=i+1}^{N}\sigma_i\sigma_j\rho_{ij}\right)}$. Therefore, the total safety stock of the system in the centralized mode is,
+
+$$z_{\alpha}\sqrt{L\left(\sum_{i=1}^{N}\sigma_{i}^{2}+2\sum_{i=1}^{N-1}\sum_{j=i+1}^{N}\sigma_{i}\sigma_{j}\rho_{ij}\right)}$$
+
+Thus, if the demands of all the *N* retailers are independent, the optimal safety stock can be expressed by $z_{\alpha}\sqrt{L\sum_{i=1}^{N}\sigma_{i}^{2}}$, which is less than $z_{\alpha}\sqrt{L}\sum_{i=1}^{N}\sigma_{i}$. Eppen's simple model illustrates the potential saving in safety stock costs due to risk pooling.
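Eppen's decentralized/centralized comparison is easy to reproduce. The sketch below assumes, for compactness, a single common pairwise correlation `rho` (the formulas above allow general $\rho_{ij}$); with independent demands the pooled safety stock is strictly smaller, and with perfectly correlated demands ($\rho = 1$) pooling gives no saving:

```python
from math import sqrt

def decentralized_ss(z, L, sigmas):
    """Total safety stock when each retailer holds its own buffer:
    z * sqrt(L) * sum_i sigma_i."""
    return z * sqrt(L) * sum(sigmas)

def centralized_ss(z, L, sigmas, rho=0.0):
    """Pooled safety stock with common pairwise correlation rho:
    z * sqrt(L * (sum_i sigma_i^2 + 2*rho*sum_{i<j} sigma_i*sigma_j))."""
    n = len(sigmas)
    var = sum(s * s for s in sigmas)
    var += 2 * rho * sum(sigmas[i] * sigmas[j]
                         for i in range(n) for j in range(i + 1, n))
    return z * sqrt(L * var)
```

With four identical retailers ($\sigma_i = 10$) and independent demands, pooling halves the safety stock ($\sqrt{N}$ instead of $N$ in the sum).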
+
+In summary, for an inventory system including multiple distribution centers operating with (Q, r) policy and Type I service level under demand uncertainty, the total
+---PAGE_BREAK---
+
+inventory cost consists of working inventory costs and safety stock costs. The optimal working inventory costs can be estimated with a deterministic EOQ model, and the safety stock costs can be reduced by risk pooling.
+
+# 3. Problem Statement
+
+We assume we are given a supply chain consisting of one or more suppliers and a set of retailers $i \in I$ (these can also be customers or markets; for convenience, we use “retailer” in the remainder of this paper unless otherwise specified), together with a number of candidate sites for distribution centers $j \in J$ (for an example of the network structure, see Figure 4). The locations of the supplier(s), potential distribution centers and retailers are known and the distances between them are given. The replenishment lead time $L$ is assumed to be the same for all the candidate distribution centers. This in turn means that the suppliers can be treated implicitly and lumped into one supplier. There is a fixed setup cost $f_j$ incurred when a distribution center is installed at candidate site $j$. Each retailer $i$ has a normally distributed demand with mean $\mu_i$ and variance $\sigma_i^2$, which is independent of the other retailers’ demands (the model can be easily extended to consider correlated demands at the retailers by modifying the safety stock terms as discussed at the end of Section 2). Each distribution center can serve more than one retailer, but each retailer must be assigned to exactly one distribution center to satisfy its demand. Linear transportation costs are incurred for shipments from the supplier to distribution center $j$, with fixed cost $g_j$ and unit cost $a_j$, and from distribution center $j$ to retailer $i$, with unit cost $d_{ij}$.
+
+Most of the inventory in the network is held in the distribution centers, where the inventory is managed with a $(Q, r)$ policy with Type I service.⁵ Inventory costs are incurred at each distribution center, and consist of both working inventory and safety stock costs. The retailers only maintain a very small amount of inventory, whose costs are ignored.
+
+The problem is to determine how many distribution centers (DCs) to install, where
+---PAGE_BREAK---
+
+to locate them, which DC to assign to each retailer, how often to reorder for
+replenishment at each DC, and what level of safety stock to maintain so as to minimize
+the total location, transportation, and inventory costs, while ensuring a specified level
+of service.
+
+**Figure 4 Supply chain network structure (three echelons)**
+
+# 4. Model Formulation
+
+The joint supply chain network design and inventory management model of Shen et al.¹⁹ is used as the basis for the present work, in which we do not rely on the assumption that each retailer has the same variance-to-mean ratio. The joint location-inventory model is a nonlinear integer program that deals with the supply chain network design for a given product, and considers its detailed inventory management. The definitions of the sets, parameters, and variables of the model are as follows:
+
+**Sets/Indices**
+
+$I$ Set of retailers indexed by $i$
+
+$J$ Set of candidate DC sites indexed by $j$
+
+**Parameters**
+
+$f_j$ Fixed cost (annual) of locating a DC at candidate site $j$
+---PAGE_BREAK---
+
+$d_{ij}$ Unit transportation cost from DC $j$ to retailer $i$
+
+$\chi$ Days per year (to convert daily demand and variance values to annual costs)
+
+$\mu_i$ Mean demand at retailer i (daily)
+
+$\sigma_i^2$ Variance of demand at retailer i (daily)
+
+$F_j$ Fixed cost of placing an order from the supplier to the DC at candidate site j
+
+$g_j$ Fixed transportation cost from the supplier to the DC at candidate site j
+
+$a_j$ Unit transportation cost from the supplier to the DC at candidate site j
+
+$L$ Lead time from the supplier to the candidate DC sites (in days)
+
+$h$ Unit inventory holding cost
+
+$\alpha$ Desired probability of retailer orders satisfied
+
+$\beta$ Weight factor assigned to transportation costs
+
+$\theta$ Weight factor assigned to inventory costs
+
+$z_{\alpha}$ Standard normal deviate such that $\text{Pr}(z \le z_{\alpha}) = \alpha$
+
+**Decision Variables (0-1)**
+
+$X_j$ 1 if we locate a DC in candidate site j, and 0 otherwise
+
+$Y_{ij}$ 1 if retailer i is served by the DC at candidate site j, and 0 otherwise
+
+## 4.1. Objective Function
+
+The objective of this model is to minimize the total weighted cost of the following
+items:
+
+* fixed cost for locating facilities,
+
+* transportation costs from DCs to retailers,
+
+* fixed order placing costs, transportation costs from the supplier to DCs and the expected working inventory costs in the DCs,
+
+* safety stock costs in DCs.
+
+The facility location cost is given by,
+
+$$
+\sum_{j \in J} f_j X_j \tag{1}
+$$
+---PAGE_BREAK---
+
+The product of yearly expected mean demand ($\chi\mu_i$) and the unit transportation cost ($d_{ij}$) leads to the annual DC to retailer transportation costs. If the retailer $i$ is not served by the DC in candidate location $j$, the transportation cost is zero. Hence, the total expected transportation costs from DCs to retailers can be expressed as:
+
+$$ \sum_{j \in J} \sum_{i \in I} \chi d_{ij} \mu_i Y_{ij} \quad (2) $$
+
+As all the retailers have stochastic demands and all the DCs manage the inventory using a $(Q, r)$ policy with *Type I Service* constraint, the working inventory cost can be approximated with an economic order quantity model (EOQ) with very small error bound.$^{30, 31}$ Let $n$ be the number of replenishments per year and $D$ be the annual demand for the product. Thus, the annual costs of ordering, shipping and working inventory from the supplier to the DCs are approximated by:
+
+$$ Fn + \beta \left( g + a \frac{D}{n} \right) n + \theta \frac{hD}{2n} \quad (3) $$
+
+The first term $Fn$ is the total ordering cost per year. The second term is the annual transportation cost times its weight factor ($\beta$), where $(D/n)$ is the expected shipment size, and the shipping cost is given by the linear function $v(x) = g + ax$. The third term is the annual working inventory cost times its weight factor ($\theta$), where $D/(2n)$ is the average inventory level on hand. Considering (3) as a function of the annual number of orders $n$ and setting its first derivative with respect to $n$ to zero, we obtain the optimal order number $n = \sqrt{\theta h D / (2(F + \beta g))}$. Therefore, by substituting into (3), the total optimal cost for replenishments, including ordering, transportation and working inventory holding costs, is given by,
+
+$$ \beta aD + \sqrt{2\theta h(F + \beta g)D} \quad (4) $$
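The closed form can be verified numerically against equation (3); the parameter values below are illustrative, not data from the paper:

```python
from math import sqrt

def annual_cost(n, F, beta, g, a, theta, h, D):
    """Equation (3): ordering + weighted transportation + weighted
    working inventory cost, as a function of the order number n."""
    return F * n + beta * (g + a * D / n) * n + theta * h * D / (2 * n)

def optimal_n(F, beta, g, theta, h, D):
    """Stationary point of (3): n* = sqrt(theta*h*D / (2*(F + beta*g)))."""
    return sqrt(theta * h * D / (2 * (F + beta * g)))

def optimal_cost(F, beta, g, a, theta, h, D):
    """Equation (4): beta*a*D + sqrt(2*theta*h*(F + beta*g)*D)."""
    return beta * a * D + sqrt(2 * theta * h * (F + beta * g) * D)
```

Evaluating (3) at $n^*$ reproduces (4) exactly, and perturbing $n$ in either direction raises the cost, confirming that $n^*$ is a minimizer.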
+
+Substituting the demand $D$ with the annual expected demand of the product in each DC ($\sum_{i \in I} \chi\mu_i Y_{ij}$), the total replenishment costs for all the DCs can be expressed by,
+---PAGE_BREAK---
+
+$$ \beta \sum_{j \in J} a_j \sum_{i \in I} \chi \mu_i Y_{ij} + \sum_{j \in J} \sqrt{2\theta h(F_j + \beta g_j) \sum_{i \in I} \chi \mu_i Y_{ij}} \quad (5) $$
+
+As the demand at each retailer follows a given normal distribution, let $\mu_i$ and $\sigma_i^2$ be the mean and variance of demand of the product at retailer $i$. Due to the risk-pooling effect,³² the lead time demand at the DC in candidate location $j$ is also normally distributed, with mean $L \sum_{i \in I} \mu_i Y_{ij}$ and variance $L \sum_{i \in I} \sigma_i^2 Y_{ij}$. Thus, the safety stock required in the DC at candidate location $j$ to ensure that stockouts occur with a probability of $1-\alpha$ or less is,
+
+$$ z_{\alpha} \sqrt{L \sum_{i \in I} \sigma_i^2 Y_{ij}}. \quad (6) $$
+
+Therefore, the objective function of this model is given by
+
+$$
+\begin{aligned}
+\text{Min:} & \quad \sum_{j \in J} f_j X_j + \beta \sum_{j \in J} \sum_{i \in I} \chi d_{ij} \mu_i Y_{ij} + \beta \sum_{j \in J} a_j \sum_{i \in I} \chi \mu_i Y_{ij} \\
+& \quad + \sum_{j \in J} \sqrt{2\theta h(F_j + \beta g_j) \sum_{i \in I} \chi \mu_i Y_{ij}} + \theta h z_{\alpha} \sum_{j \in J} \sqrt{\sum_{i \in I} L \sigma_i^2 Y_{ij}}
+\end{aligned}
+\quad (7)
+$$
+
+where each term accounts for the fixed facility location cost, DC to retailer transportation costs, replenishment costs (including supplier to DC transportation costs, fixed ordering costs and working inventory costs) and safety stock costs.
+
+## 4.2. Network Constraints
+
+Two constraints are used to define the network structure. The first one is that each retailer $i$ should be served by only one DC,
+
+$$ \sum_{j \in J} Y_{ij} = 1, \quad \forall i \in I. \quad (8) $$
+
+The second constraint states that if a retailer $i$ is served by the DC in candidate location $j$, the DC must exist,
+
+$$ Y_{ij} \le X_j, \quad \forall i \in I, \forall j \in J. \quad (9) $$
+
+Finally, all the decision variables are binary variables in this model:
+
+$$ X_j \in \{0,1\}, \quad \forall j \in J \quad (10) $$
+
+$$ Y_{ij} \in \{0,1\}, \quad \forall i \in I, \forall j \in J \quad (11) $$
+---PAGE_BREAK---
+
+## 4.3. INLP Model
+
+Grouping the parameters, we can rearrange the objective function and formulate the problem as the following integer nonlinear programming (INLP) problem (P0):
+
+$$
+\begin{align}
+\textbf{(P0)} \quad & \text{Min:} \quad \sum_{j \in J} (f_j X_j + \sum_{i \in I} \hat{d}_{ij} Y_{ij} + K_j \sqrt{\sum_{i \in I} \mu_i Y_{ij}} + q \sqrt{\sum_{i \in I} \hat{\sigma}_i^2 Y_{ij}}) \tag{12} \\
+& \text{s.t.} \quad \sum_{j \in J} Y_{ij} = 1, \qquad \forall i \in I \tag{8} \\
+& \qquad Y_{ij} \le X_j, \qquad \forall i \in I, \forall j \in J \tag{9} \\
+& \qquad X_j \in \{0,1\}, \qquad \forall j \in J \tag{10} \\
+& \qquad Y_{ij} \in \{0,1\}, \qquad \forall i \in I, \forall j \in J \tag{11}
+\end{align}
+$$
+
+where
+
+$$
+\begin{align*}
+\hat{d}_{ij} &= \beta \chi \mu_i (d_{ij} + a_j) \\
+K_j &= \sqrt{2\theta h \chi (F_j + \beta g_j)} \\
+q &= \theta h z_{\alpha} \\
+\hat{\sigma}_i^2 &= L \sigma_i^2
+\end{align*}
+$$
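A minimal sketch of evaluating objective (12) for a given design, using the grouped parameters $\hat{d}_{ij}$, $K_j$ and $q$ defined above. All instance data below are illustrative assumptions, not values from the paper.

```python
import math

# Evaluate objective (12) of (P0) for a fixed set of open DCs and a fixed
# retailer-to-DC assignment, assembling the grouped parameters on the fly.
beta, theta, chi, h, L, z_alpha = 0.01, 0.01, 365.0, 0.01, 7.0, 1.96

retailers = {0: (10.0, 4.0), 1: (20.0, 9.0)}   # i -> (mu_i, sigma_i^2)
dcs = {0: (10_000.0, 100.0, 0.02, 50.0)}       # j -> (f_j, F_j, a_j, g_j)
d = {(0, 0): 1.5, (1, 0): 2.0}                 # DC-to-retailer unit cost d_ij

def objective(open_dcs, assign):
    """assign maps retailer i -> DC j; open_dcs is the set of installed DCs."""
    q = theta * h * z_alpha
    total = 0.0
    for j in open_dcs:
        f_j, F_j, a_j, g_j = dcs[j]
        served = [i for i in assign if assign[i] == j]
        K_j = math.sqrt(2 * theta * h * chi * (F_j + beta * g_j))
        total += f_j                                                    # location
        total += sum(beta * chi * retailers[i][0] * (d[i, j] + a_j)     # d_hat_ij
                     for i in served)
        total += K_j * math.sqrt(sum(retailers[i][0] for i in served))  # K_j term
        total += q * math.sqrt(L * sum(retailers[i][1] for i in served))  # q term
    return total

cost = objective({0}, {0: 0, 1: 0})
assert cost > 10_000.0   # at least the fixed installation cost of the open DC
```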
+
+## 5. Solution Approach
+
+The joint supply chain network design and inventory management model ((8)-(12)) is a nonlinear integer program in which all the decision variables are binary. Besides its combinatorial nature, the nonlinear terms are nonconvex, which makes the optimization model very difficult to solve. To address this problem, previous researchers¹⁹, ³³ have simplified the model by assuming that the variance-to-mean ratio is identical at all the retailers; in the real world, however, this ratio may vary from retailer to retailer, so an efficient algorithm is required to solve the model (P0) without this assumption and thereby provide a good approximation for real cases. In the next section, we reformulate the INLP model (P0) as a mixed-integer nonlinear programming (MINLP) problem with fewer 0-1 variables and solve it with different solution approaches, including a heuristic method
+---PAGE_BREAK---
+
+to obtain “good quality” solutions very quickly, and a Lagrangean relaxation algorithm
+for obtaining global or near-global optimal solutions.
+
+## 5.1. MINLP Formulation
+
+The original INLP model (P0) is very difficult to solve for large instances due to
+the potentially large number of binary variables (see Table 2 in Section 7 for
+examples). As shown in the proposition below, the assignment variables ($Y_{ij}$) in the model can be relaxed to continuous variables without changing the optimal integer solution. This allows us to reformulate (P0) as an MINLP problem with far fewer 0-1 variables, most of which appear in linear form.
+
+**Proposition 1.** When relaxed to continuous variables, the assignment variables $Y_{ij}$ take 0-1 integer values whenever the resulting problem is globally optimized, or locally optimized for fixed 0-1 values of $X_j$.
+
+Proposition 1 means that the following problem (P1) yields integer values for the assignment variables $Y_{ij}$ when it is globally optimized, or locally optimized for fixed 0-1 integer values of $X_j$.
+
+$$
+\begin{align}
+& \text{(P1)} \quad \text{Min:} \quad \sum_{j \in J} \left( f_j X_j + \sum_{i \in I} \hat{d}_{ij} Y_{ij} + K_j \sqrt{\sum_{i \in I} \mu_i Y_{ij}} + q \sqrt{\sum_{i \in I} \hat{\sigma}_i^2 Y_{ij}} \right) \tag{13} \\
+& \text{s.t.} \quad \sum_{j \in J} Y_{ij} = 1, \qquad \forall i \in I. \tag{8} \\
+& \qquad Y_{ij} \le X_j, \qquad \forall i \in I, \forall j \in J. \tag{9} \\
+& \qquad X_j \in \{0,1\}, \qquad \forall j \in J \tag{10} \\
+& \qquad Y_{ij} \ge 0, \qquad \forall i \in I, \forall j \in J. \tag{14}
+\end{align}
+$$
+
+The proof, given in Appendix A, is based on the fact that for fixed $X_j$ problem (P1) is a concave minimization problem over a polyhedron, for which both local and global solutions yield integer values for the continuous variables $Y_{ij}$.
+---PAGE_BREAK---
+
+Proposition 1 allows us to solve the MINLP model (P1) instead of the INLP model (P0), significantly reducing the computational effort. It is interesting to note that if we set the unit inventory holding cost $h = 0$, the square-root terms in the objective function (13) vanish and problem (P1) reduces to the widely studied “Uncapacitated Facility Location” (UFL) problem,³, ⁴, ³⁴, ³⁵ which is known to exhibit integer solutions for the relaxed variables $Y_{ij}$. Furthermore, this problem is also known to be solvable through its LP relaxation for most instances.
+
+(P1) is an MINLP problem with linear constraints and a nonlinear objective function that includes nonconvexities in the continuous variables. Optimization methods that can be used for obtaining the global optimal solution of problem (P1) include the branch and reduce method,³⁶, ³⁷ the α-BB method,³⁸ the spatial branch and bound search method for bilinear and linear fractional terms³⁹, ⁴⁰ and the outer-approximation method by Kesavan et al.⁴¹ All these methods rely on a branch and bound solution procedure. The differences among these methods lie in the definition of the convex envelopes used for computing the lower bound, and in how the branching is performed on the discrete and continuous variables. A commercially available global optimization solver is BARON,⁴² which implements a branch-and-reduce solution method.
+
+Since a global optimization algorithm can be expensive, another alternative is to use an MINLP method that relies on the assumption that the functions are convex. Although global optimality cannot then be guaranteed, optimal or near-optimal solutions can be obtained much faster, because a local optimal solution can be found efficiently for fixed values of the integer variables. A general review of these MINLP methods is given in Grossmann.⁴³ Methods include the branch and bound method,⁴⁴ Generalized Benders Decomposition,⁴⁵ Outer-Approximation,⁴⁶, ⁴⁷ LP/NLP based branch and bound,⁴⁸ and the Extended Cutting Plane method.⁴⁹ A number of computer codes are available that implement these methods. The program DICOPT⁴⁷ is an MINLP solver that is based on the Outer Approximation
+---PAGE_BREAK---
+
+algorithm,⁴⁶ and is available in the modeling system GAMS.⁵⁰ It should be noted that
+this code has a heuristic termination criterion for nonconvex problems. The code
+α-ECP implements the extended cutting plane method by Westerlund and
+Pettersson.⁴⁹ Codes that implement the branch and bound method include the code
+MINLP_BB⁴⁴ available in AMPL, and the program SBB which is also available in
+GAMS. Recently, the open-source MINLP solver Bonmin,⁵¹ which is part of the COIN-OR project,⁵² was released; it implements an extension of the branch-and-cut outer-approximation algorithm proposed by Quesada and Grossmann,⁴⁸ as well as the branch and bound and outer-approximation methods.
+
+## 5.2. MINLP Reformulation
+
+In order to improve the computational efficiency of solving the MINLP model
+(P1) with the above cited solvers, we present in this section a reformulation of (P1).
+
+The square-root terms in the objective function of (P1) can give rise to difficulties in the optimization procedure. When the DC in location $j$ is not selected, both square-root terms take a value of zero, which leads to unbounded gradients in the NLP optimization and hence numerical difficulties. Thus, we reformulate the model to eliminate the square-root terms. We first introduce two sets of non-negative continuous variables, $Z1_j$ and $Z2_j$, to represent the square-root terms in the objective function:
+
+$$
+Z1_j^2 = \sum_{i \in I} \mu_i Y_{ij}, \quad \forall j \in J \tag{15}
+$$
+
+$$
+Z2_j^2 = \sum_{i \in I} \hat{\sigma}_i^2 Y_{ij}, \quad \forall j \in J \tag{16}
+$$
+
+$$
+Z1_j \ge 0, Z2_j \ge 0, \quad \forall j \in J \tag{17}
+$$
+
+Because the non-negative variables $Z1_j$ and $Z2_j$ enter the objective function with positive coefficients, and this problem is a minimization problem, (15) and (16) can be relaxed to the following inequalities,
+
+$$
+-Z1_j^2 + \sum_{i \in I} \mu_i Y_{ij} \le 0, \quad \forall j \in J \qquad (18)
+$$
+---PAGE_BREAK---
+
+$$
+-Z2_j^2 + \sum_{i \in I} \hat{\sigma}_i^2 Y_{ij} \le 0, \quad \forall j \in J \tag{19}
+$$
+
+Thus, the reformulated model can be expressed as the following MINLP problem, denoted (P2),
+
+$$
+\textbf{(P2)} \quad \text{Min:} \quad \sum_{j \in J} \left( f_j X_j + \sum_{i \in I} \hat{d}_{ij} Y_{ij} + K_j Z1_j + q Z2_j \right) \qquad (20)
+$$
+
+$$
+\text{s.t.} \quad \sum_{j \in J} Y_{ij} = 1, \quad \forall i \in I. \tag{8}
+$$
+
+$$
+Y_{ij} \le X_j, \quad \forall i \in I, \forall j \in J. \tag{9}
+$$
+
+$$
+-Z1_j^2 + \sum_{i \in I} \mu_i Y_{ij} \le 0, \quad \forall j \in J \tag{18}
+$$
+
+$$
+-Z2_j^2 + \sum_{i \in I} \hat{\sigma}_i^2 Y_{ij} \le 0, \quad \forall j \in J \qquad (19)
+$$
+
+$$
+X_j \in \{0,1\}, \quad \forall j \in J
+\quad (10)
+$$
+
+$$
+Y_{ij} \ge 0, \quad \forall i \in I, \forall j \in J. \tag{14}
+$$
+
+$$
+Z1_j \ge 0, Z2_j \ge 0, \quad \forall j \in J
+\quad (17)
+$$
+
+(P2) can be trivially shown to be equivalent to (P1), but with a linear objective function and quadratic terms in the constraints (18) and (19). As shown in Appendix A, the following property can be established for problem (P2).
+
+**Proposition 2.** In the global optimal solution of problem (P2), or in a local optimal solution with fixed 0-1 values for $X_j$, all the continuous variables $Y_{ij}$ take integer values (0 or 1).
+
+## 5.3. Heuristic Algorithm
+
+Since problem (P2) is nonconvex, the solution is highly dependent on the starting point when using an MINLP solver that relies on convexity assumptions. To obtain a “good” feasible starting point, we first relax the nonconvex nonlinear constraints (18) and (19) in (P2) by replacing the concave terms with their corresponding secants, which represent the convex envelopes⁵³ of these functions.
+---PAGE_BREAK---
+
+From (15) and (16), it is easy to see that the lower bounds of $Z1_j$ and $Z2_j$ are both 0, and their upper bounds are $\sqrt{\sum_{i \in I} \mu_i}$ and $\sqrt{\sum_{i \in I} \hat{\sigma}_i^2}$, respectively.
+
+Therefore, the secant of (18) is given by,
+
+$$ -\sqrt{\sum_{i \in I} \mu_i} \cdot Z1_j + \sum_{i \in I} \mu_i Y_{ij} \leq 0, \quad \forall j \in J \quad (21) $$
+
+Similarly, the secant of (19) is given by,
+
+$$ -\sqrt{\sum_{i \in I} \hat{\sigma}_i^2} \cdot Z2_j + \sum_{i \in I} \hat{\sigma}_i^2 Y_{ij} \leq 0, \quad \forall j \in J \quad (22) $$
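A quick numerical check of the relaxation direction: on $[0, U]$ the secant $-U \cdot z$ lies below the concave term $-z^2$, so replacing $-z^2$ by $-Uz$ in (18)-(19) only shrinks the left-hand side and enlarges the feasible region. The value of $U$ below is an illustrative assumption.

```python
import math

# Verify on a grid that the secant -U*z never exceeds -z^2 on [0, U],
# confirming that (21)-(22) are valid relaxations of (18)-(19).
U = math.sqrt(45.0)   # e.g. U = sqrt(sum of mu_i) for some illustrative demands

for k in range(101):
    z = U * k / 100.0
    assert -U * z <= -z * z + 1e-9   # secant is an underestimator of -z^2
```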
+
+In this way the convex relaxation of model (P2) can be formulated as Problem (P3):
+
+$$
+\begin{align}
+\textbf{(P3)} \quad \text{Min:} \quad & \sum_{j \in J} \left( f_j X_j + \sum_{i \in I} \hat{d}_{ij} Y_{ij} + K_j Z1_j + q Z2_j \right) \tag{20} \\
+\text{s.t.} \quad & \sum_{j \in J} Y_{ij} = 1, \quad \forall i \in I \tag{8} \\
+& Y_{ij} \le X_j, \quad \forall i \in I, \forall j \in J \tag{9} \\
+& -\sqrt{\sum_{i \in I} \mu_i} \cdot Z1_j + \sum_{i \in I} \mu_i Y_{ij} \le 0, \quad \forall j \in J \tag{21} \\
+& -\sqrt{\sum_{i \in I} \hat{\sigma}_i^2} \cdot Z2_j + \sum_{i \in I} \hat{\sigma}_i^2 Y_{ij} \le 0, \quad \forall j \in J \tag{22} \\
+& X_j \in \{0,1\}, \quad \forall j \in J \tag{10} \\
+& Y_{ij} \ge 0, \quad \forall i \in I, \forall j \in J \tag{14} \\
+& Z1_j \ge 0, \ Z2_j \ge 0, \quad \forall j \in J \tag{17}
+\end{align}
+$$
+
+(P3) is a mixed-integer linear programming (MILP) problem which is the convex relaxation of problem (P2). The optimal solution of variables $X_j$ and $Y_{ij}$ of problem (P3) is a feasible solution of problem (P2) due to the linear constraints (8) and (9), and it can provide an initial point before solving (P2) with an MINLP solver. In this way, we can greatly speed up the computation and enhance the likelihood of obtaining a near-optimal solution of model (P2). In summary, the heuristic algorithm for obtaining a good quality solution with reasonable computational effort by using MINLP solvers that rely on convexity assumptions is as follows:
+---PAGE_BREAK---
+
+**Algorithm 1:** (Heuristic Algorithm)
+
+**Step 1:** Solve the MILP model (P3).
+
+**Step 2:** Use the optimal values of variables $X_j$ and $Y_{ij}$ obtained from Step 1 as the starting point, and solve problem (P2) with an MINLP solver that relies on convexity assumptions (such as DICOPT, SBB, $\alpha$-ECP, MINLP_BB, Bonmin, etc.) for obtaining a near-optimal solution.
+
+Note that if we solve problem (P2) with Algorithm 1 by using an MINLP solver that relies on convexity assumptions, the optimal solution may not be globally optimal. However, the optimal solution still has all the $Y_{ij}$ variables at integer values based on Proposition 1 (see Appendix A for details). Furthermore, the solution obtained by using heuristic Algorithm 1 for problem (P2) is also a feasible solution of problem (P1).
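The intuition behind Proposition 1 can be checked by brute force on a tiny illustrative instance: for fixed open DCs, the objective is concave in the assignment variables (keeping only the concave square-root terms here), so no fractional split of a retailer across DCs can beat the best all-or-nothing assignment. All data below are assumptions for illustration.

```python
import math

# Two retailers, two open DCs; cost keeps only the concave sqrt terms of (13).
mu = [10.0, 20.0]          # retailer demand means
K = [3.0, 5.0]             # K_j coefficients for the two open DCs

def cost(y):               # y[i][j] = fraction of retailer i assigned to DC j
    return sum(K[j] * math.sqrt(sum(mu[i] * y[i][j] for i in range(2)))
               for j in range(2))

# Best integer (vertex) assignment vs. every fractional split on a coarse grid.
best_int = min(cost([[a, 1 - a], [b, 1 - b]])
               for a in (0.0, 1.0) for b in (0.0, 1.0))
for a in (0.25, 0.5, 0.75):
    for b in (0.25, 0.5, 0.75):
        assert cost([[a, 1 - a], [b, 1 - b]]) >= best_int - 1e-9
```

A concave function over a polytope attains its minimum at a vertex, which is exactly why the relaxed $Y_{ij}$ come out integral.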
+
+## 5.4. A Lagrangean Relaxation Algorithm
+
+In order to obtain potentially better solutions, we propose a Lagrangean relaxation algorithm for obtaining global optimal or near global optimal solutions of model (P2).
+
+### 5.4.1. The Decomposition Procedure
+
+In the Lagrangean relaxation algorithm, we use a “spatial” decomposition scheme by dualizing the assignment constraints (8) in (P2) using the Lagrangean multipliers $\lambda_i$, which is similar to the works by Beasley²³ and Daskin et al.³³ As a result, we obtain the following relaxed problem (denoted by $\mathbf{P}(\boldsymbol{\lambda}))$,
+
+$$ (\mathbf{P}(\boldsymbol{\lambda})) \quad V = \text{Min:} \quad \sum_{j \in J} \left( f_j X_j + \sum_{i \in I} (\hat{d}_{ij} - \lambda_i) Y_{ij} + K_j Z1_j + q Z2_j \right) + \sum_{i \in I} \lambda_i \quad (23) $$
+
+$$ \text{s.t.} \quad Y_{ij} \le X_j, \quad \forall i \in I, \forall j \in J \quad (9) $$
+
+$$ -Z1_j^2 + \sum_{i \in I} \mu_i Y_{ij} \le 0, \quad \forall j \in J \quad (18) $$
+---PAGE_BREAK---
+
+$$
+\begin{align}
+& -Z2_j^2 + \sum_{i \in I} \hat{\sigma}_i^2 Y_{ij} \le 0, \quad \forall j \in J \tag{19} \\
+& X_j \in \{0,1\}, \quad \forall j \in J \tag{10} \\
+& Y_{ij} \ge 0, \quad \forall i \in I, \forall j \in J. \tag{14} \\
+& Z1_j \ge 0, Z2_j \ge 0, \quad \forall j \in J \tag{17}
+\end{align}
+$$
+
+where $V$ is the objective function value. Next, we observe that $(\mathbf{P}(\boldsymbol{\lambda}))$ can be decomposed into $|J|$ subproblems, one for each candidate DC site $j \in J$. Each subproblem is denoted by $(\mathbf{P}_j(\boldsymbol{\lambda}))$ and is shown below for a specific candidate DC site $j^*$:
+
+$$
+\begin{align}
+(\mathbf{P}_{j^*}(\boldsymbol{\lambda})) \quad V_{j^*} = \text{Min:} \quad & f_{j^*} X_{j^*} + \sum_{i \in I} (\hat{d}_{ij^*} - \lambda_i) Y_{ij^*} + K_{j^*} Z1_{j^*} + q Z2_{j^*} \tag{24} \\
+\text{s.t.} \quad & Y_{ij^*} \le X_{j^*}, \quad \forall i \in I \\
+& -Z1_{j^*}^2 + \sum_{i \in I} \mu_i Y_{ij^*} \le 0 \\
+& -Z2_{j^*}^2 + \sum_{i \in I} \hat{\sigma}_i^2 Y_{ij^*} \le 0 \\
+& Y_{ij^*} \ge 0, \quad \forall i \in I \\
+& X_{j^*} \in \{0,1\} \\
+& Z1_{j^*} \ge 0, \ Z2_{j^*} \ge 0
+\end{align}
+$$
+
+Subproblem $(\mathbf{P}_{j^*}(\boldsymbol{\lambda}))$ has one binary variable ($X_{j^*}$), $|I|+2$ continuous variables ($Z1_{j^*}$, $Z2_{j^*}$, $Y_{ij^*}$) and $2|I|+2$ constraints. Because there are $|J|$ subproblems $(\mathbf{P}_j(\boldsymbol{\lambda}))$, one for each candidate DC site $j \in J$, we call this a "spatial" decomposition scheme, i.e. decomposition by the spatial structure of the supply chain network.¹²,²⁶ Let $V_j$ denote the globally optimal objective function value of subproblem $(\mathbf{P}_j(\boldsymbol{\lambda}))$. As a result of the decomposition, the globally optimal objective function value of $(\mathbf{P}(\boldsymbol{\lambda}))$, which is a lower bound for problem (P2), can be calculated by:
+---PAGE_BREAK---
+
+$$
+V = \sum_{j \in J} V_j + \sum_{i \in I} \lambda_i . \tag{25}
+$$
+
+For each fixed value of the Lagrangean multipliers $\lambda_i$, we solve problem $(\mathbf{P}_j(\boldsymbol{\lambda}))$ by globally minimizing (24) for each candidate DC location $j$ (e.g. using BARON). Then, based on (25), the optimal objective function value of problem $(\mathbf{P}(\boldsymbol{\lambda}))$ can be calculated for each fixed value of $\lambda_i$. Using a standard subgradient method²⁰, ²¹ to update the Lagrangean multipliers $\lambda_i$, the algorithm iterates until a preset optimality tolerance is reached.
+
+### 5.4.2. Lagrangean Relaxation Subproblems
+
+In each iteration with fixed values of the Lagrange multipliers $\lambda_i$, the design variables ($X_j$) are optimized separately in each subproblem $(\mathbf{P}_j(\boldsymbol{\lambda}))$ of the aforementioned decomposition procedure. For each subproblem $(\mathbf{P}_j(\boldsymbol{\lambda}))$, setting $X_j = 0$ (i.e. not selecting DC $j$) forces all the remaining variables to zero and yields an objective function value of 0. In other words, there is always a feasible solution with an objective function value of 0, so the globally minimum objective function value of subproblem $(\mathbf{P}_j(\boldsymbol{\lambda}))$ is less than or equal to zero. Given this observation, it is possible that for some values of $\lambda_i$ (such as $\lambda_i = 0$, $i \in I$) the optimal objective function values of all the subproblems $(\mathbf{P}_j(\boldsymbol{\lambda}))$ are 0 (i.e., $X_j = 0$ for all $j \in J$, and no DC is selected). However, the original assignment constraint (8) implies a constraint, redundant in the original model, that at least one DC should be selected to meet the demands, i.e.
+
+$$
+\sum_{j \in J} X_j \geq 1. \tag{26}
+$$
+
+Once constraint (8) is relaxed, constraint (26) is no longer redundant and should be taken into account in the algorithm.²³,²⁶ To satisfy constraint (26) in the Lagrangean relaxation procedure, we make the following modifications to the aforementioned step
+---PAGE_BREAK---
+
+of solving problem $\mathbf{P}_j(\boldsymbol{\lambda})$ for each candidate DC location $j$.
+
+First, consider the problem $(\mathbf{PR}_j(\boldsymbol{\lambda}))$, which is actually a special case of $(\mathbf{P}_j(\boldsymbol{\lambda}))$
+when $X_j = 1$. The formulation for a specific $j^*$ is given as:
+
+$$
+\begin{equation}
+\begin{array}{@{}l@{\quad}l@{}}
+(\mathbf{PR}_{j^*}(\boldsymbol{\lambda})) & \hat{V}_{j^*} = \text{Min: } f_{j^*} + \sum_{i \in I} (\hat{d}_{ij^*} - \lambda_i) Y_{ij^*} + K_{j^*} Z1_{j^*} + q Z2_{j^*} \quad (27) \\
+& \text{s.t. } Y_{ij^*} \le 1, \quad \forall i \in I \\
+& -Z1_{j^*}^2 + \sum_{i \in I} \mu_i Y_{ij^*} \le 0 \\
+& -Z2_{j^*}^2 + \sum_{i \in I} \hat{\sigma}_i^2 Y_{ij^*} \le 0 \\
+& Y_{ij^*} \ge 0, \quad \forall i \in I \\
+& Z1_{j^*} \ge 0, \ Z2_{j^*} \ge 0
+\end{array}
+\end{equation}
+$$
+
+where $\hat{V}_j$ denotes the globally optimal objective function value of problem $(\mathbf{PR}_j(\boldsymbol{\lambda}))$.
+
+Note that the $X_j$ variable does not appear in subproblem $(\mathbf{PR}_j(\boldsymbol{\lambda}))$. Therefore, the minimum objective function value of subproblem $(\mathbf{PR}_j(\boldsymbol{\lambda}))$ equals the minimum objective function value of problem $(\mathbf{P}_j(\boldsymbol{\lambda}))$ when $X_j = 1$. However, it is not always the same as the globally minimum objective function value of problem $(\mathbf{P}_j(\boldsymbol{\lambda}))$, because $(\mathbf{P}_j(\boldsymbol{\lambda}))$ may attain its global optimum at $X_j = 0$.
+
+For each fixed value of the Lagrange multiplier $\lambda_i$, if the globally minimum
+objective function value of the Lagrange subproblem $(\mathbf{PR}_j(\boldsymbol{\lambda}))$ is negative, it means
+that when $X_j = 1$ the minimum objective function value of problem $(\mathbf{P}_j(\boldsymbol{\lambda}))$ is
+negative. Because we know when $X_j = 0$ the objective function value of problem
+$(\mathbf{P}_j(\boldsymbol{\lambda}))$ is 0, it follows that under this value of the Lagrange multiplier, the globally
+minimum objective function value of problem $(\mathbf{P}_j(\boldsymbol{\lambda}))$ is the same as the minimum
+---PAGE_BREAK---
+
+objective function value of problem (PRj(λ)), which is a negative value. Therefore, it
+is optimal to have Xj = 1 under this value of the Lagrange multiplier.
+
+On the other hand, if the minimum objective function value of problem $(\mathbf{PR}_j(\boldsymbol{\lambda}))$ is positive, then when $X_j = 1$ the optimal objective function value of problem $(\mathbf{P}_j(\boldsymbol{\lambda}))$ cannot be negative. Thus the optimal objective function value of problem $(\mathbf{P}_j(\boldsymbol{\lambda}))$ is 0, attained with $X_j = 0$ (because with $X_j = 1$ the minimum objective function value would be positive, as given by problem $(\mathbf{PR}_j(\boldsymbol{\lambda}))$).
+
+A possible extreme case is that the minimum objective function values of all the Lagrangean subproblems $(\mathbf{PR}_j(\boldsymbol{\lambda}))$ are positive (for example, when $\lambda_i = 0$, $i \in I$). In this case, the globally minimum objective function values of the problems $(\mathbf{P}_j(\boldsymbol{\lambda}))$ are all 0, i.e. no DC is selected. To satisfy the implied constraint (26) that at least one DC must be selected, we simply install the DC $j$ with the smallest objective function value, even though this value is positive. By using this relationship between problems $(\mathbf{PR}_j(\boldsymbol{\lambda}))$ and $(\mathbf{P}_j(\boldsymbol{\lambda}))$, we can solve $(\mathbf{PR}_j(\boldsymbol{\lambda}))$ instead of $(\mathbf{P}_j(\boldsymbol{\lambda}))$ without loss of optimality.
+
+Therefore, the algorithm for solving the Lagrangean relaxation subproblems is as follows. For each fixed value of $\lambda_i$, we solve $(\mathbf{PR}_j(\boldsymbol{\lambda}))$ for every candidate DC location $j$. We then select the DCs at the candidate locations $j$ for which $\hat{V}_j \le 0$ (i.e. set $X_j = 1$), and set $X_j = 0$ for all the remaining DCs with $\hat{V}_j > 0$. If instead $\hat{V}_j > 0$ for all $j \in J$, we select only the single DC with the minimum $\hat{V}_j$, i.e. $X_{j^*} = 1$ for the $j^*$ such that $\hat{V}_{j^*} = \min_{j \in J}\{\hat{V}_j\}$.
+
+By doing this at each iteration of the Lagrangean relaxation (for each value of the multipliers $\lambda_i$), we ensure that the solution always satisfies $\sum_{j \in J} X_j \ge 1$. Thus the globally optimal objective function value of $(\mathbf{P}(\boldsymbol{\lambda}))$ can be recalculated as:
+---PAGE_BREAK---
+
+$$V = \sum_{j \in J: X_j = 1} \hat{V}_j + \sum_{i \in I} \lambda_i . \quad (28)$$
+
+### 5.4.3. Obtaining Feasible Solutions
+
+As the original model (P2) has very few constraints, feasible solutions for the problem are easy to construct.
+
+The initial feasible solution can be obtained with the following two methods:
+
+The first method is to select a single DC and assign all the retailers to it, i.e. pick a $j^* \in J$ and let $X_{j^*} = 1$, $Y_{ij^*} = 1, \forall i \in I$. This provides a simple way of obtaining a feasible solution, and the resulting objective function value is a valid upper bound on the global optimal objective function value.
+
+The second method is to solve the problem (P2) with an MINLP solver, possibly using Algorithm 1 to obtain a near optimal solution, which is also a feasible solution of the original problem. This usually provides a “tighter” upper bound than the one obtained with the first method.
+
+To obtain a feasible solution during the iterations, we first fix the values of the design variables ($X_j$) at the optimal values of the Lagrangean relaxation subproblems, and then solve the original model (P2) with a nonlinear programming (NLP) solver (not necessarily a global solver). The optimal values of the assignment variables ($Y_{ij}$) in the Lagrangean relaxation subproblems are used as the initial values in the nonlinear optimization procedure. Nonlinear solvers such as MINOS, CONOPT, SNOPT, KNITRO, IPOPT can be used in this step.
+
+Note that solving (P2) with fixed values of $X_j$ variables using a local or global NLP solver guarantees that the feasible solutions generated in this step have all the $Y_{ij}$ variables at integer values based on Proposition 2 (see Appendix A for details).
+
+### 5.4.4. The Solution Algorithm
+
+To summarize, the solution algorithm is as follows:
+
+**Algorithm 2: (Lagrangean Relaxation Algorithm)**
+---PAGE_BREAK---
+
+**Step 1:** (Initialization) For the initial value of the multiplier $\lambda^1$ of constraint (8), use an arbitrary guess or the multiplier values corresponding to a local optimum of the NLP relaxation of model (P2). Let the incumbent upper bound be $UB = +\infty$, the lower bound be $LB = -\infty$ and the iteration number be $t=1$. Set the step length parameter $\theta = 2$ (this subgradient step parameter is distinct from the inventory cost weight factor $\theta$).
+
+**Step 2:** Solve the modified Lagrangean relaxation program ($PR_j(\lambda^t)$) with fixed Lagrangean multiplier vector $\lambda^t$ for all the $j$ using a global optimization solver (e.g. BARON). Denote the optimal objective function value as $\hat{V}_j(\lambda^t)$ and the optimal solutions as $\hat{Y}_{ij}(\lambda^t)$.
+
+If $\hat{V}_j(\lambda^t) > 0, \forall j \in J$, let $X_{j*}(\lambda^t) = 1$ for the $j^*$ such that $\hat{V}_{j*}(\lambda^t) = \min_{j \in J}\{\hat{V}_j(\lambda^t)\}$.
+Else, let $X_j(\lambda^t) = 1$ for all $j$ with $\hat{V}_j(\lambda^t) \le 0$, and $X_j(\lambda^t) = 0$ for all $j$ such that
+$\hat{V}_j(\lambda^t) > 0$. Calculate $V(\lambda^t) = \sum_{j \in J, X_j(\lambda^t)=1} \hat{V}_j + \sum_{i \in I} \lambda_i^t$.
+
+If $V(\lambda^t) > LB$, update the lower bound by setting $LB = V(\lambda^t)$.
+
+If more than 2 iterations of the subgradient procedure²⁰ are performed without an increment of $LB$, then halve the step length parameter by setting $\theta = \frac{\theta}{2}$.
+
+**Step 3:** Fixing the design variable values as $X_j = X_j(\lambda^t)$ and using $\hat{Y}_{ij}(\lambda^t)$ as the initial values of the assignment variables $Y_{ij}$, solve problem ($P(\lambda^t)$) in the reduced space with fixed $\lambda^t$ and $X_j(\lambda^t)$ using an NLP solver (local or global). Denote the optimal solution as $Y_{ij}(\lambda^t)$ and the optimal objective function value as $\bar{V}(\lambda^t)$.
+
+If $\bar{V}(\lambda^t) < UB$, update the upper bound by setting $UB = \bar{V}(\lambda^t)$.
+
+**Step 4:** Calculate the subgradient ($G_i$) using
+
+$$ G_i^t = 1 - \sum_{j \in J} \hat{Y}_{ij}(\lambda^t), \quad i \in I \qquad (29) $$
+
+Compute the step size $T^t$,²⁰, ²¹
+---PAGE_BREAK---
+
+$$T^t = \frac{\theta \cdot (UB - LB)}{\sum_{i \in I} (G_i^t)^2} \quad (30)$$
+
+Update the multipliers:
+
+$$\lambda^{t+1} = \max\{0, \lambda^t + T^t \cdot G^t\} \quad (31)$$
+
+**Step 5:** If $gap = \frac{UB - LB}{UB} < tol$ (e.g. $10^{-5}$), or $\|\lambda^{t+1} - \lambda^t\|^2 < tol$ (e.g. $10^{-3}$) or the maximum number of iterations has been reached, set $UB$ as the optimal objective function value, and set $X_j(\lambda^t)$ and $Y_{ij}(\lambda^t)$ as the optimal solution.
+
+Else, increment $t$ as $t+1$, go to Step 2.
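One iteration of the multiplier update (29)-(31) can be sketched as follows; all numerical values are illustrative assumptions.

```python
# Subgradient update of the Lagrangean multipliers, following (29)-(31).
theta_step = 2.0                   # step length parameter of Step 1
UB, LB = 100.0, 90.0               # incumbent bounds (illustrative)
lam = [1.0, 2.0, 0.5]              # current multipliers lambda^t
Y_hat = [[1, 1], [0, 0], [1, 0]]   # subproblem assignments: retailer 1 is
                                   # unassigned, retailer 0 doubly assigned

G = [1 - sum(row) for row in Y_hat]                      # subgradient (29)
T = theta_step * (UB - LB) / sum(g * g for g in G)       # step size (30)
lam_next = [max(0.0, l + T * g) for l, g in zip(lam, G)] # update (31)

assert G == [-1, 1, 0]
# Multipliers move up where a retailer is unassigned, down where over-assigned:
assert lam_next[1] > lam[1] and lam_next[0] < lam[0]
```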
+
+We should note that the above algorithm is guaranteed to provide rigorous lower bounds in Step 2, since the subproblems are globally optimized. Also, the feasible solution generated in Step 3 has all the $Y_{ij}$ variables at integer values, as noted in Section 5.4.3. Thus, the solution obtained by this algorithm for problem (P2) is also a feasible solution of problem (P1). Due to the duality gap, the algorithm must be stopped after a finite number of iterations. As will be shown in the computational results, the duality gaps are quite small.
+
+## 6. Illustrative Example
+
+To illustrate the application of this model, we consider a small illustrative example for the supply chain of liquid oxygen (LOX), consisting of one plant, three potential DCs and six customers, as given in Figure 5. The tri-echelon “plant-DC-customer” supply chain is analogous to the “supplier-DC-retailer” network discussed above, and the joint supply chain design and inventory management model can also be used to minimize the total network design, transportation and inventory costs.
+---PAGE_BREAK---
+
+Figure 5 LOX supply chain network superstructure for the illustrative example
+
+## 6.1. Illustrative Example for Industrial Application
+
+We first consider an instance to illustrate the application of the joint supply chain design and inventory management model. In this instance, the fixed cost to install a DC ($f_j$) is $100,000/year, and the fixed cost for ordering from the supplier ($F_j$) is $100/replenishment. The order lead time for all the DCs is 7 days, and we consider a 97.5% service level, so the associated service level parameter $z_{\alpha}$ is 1.96. We consider 365 days in a year, and the annual inventory holding cost for LOX is $3.65/liter, i.e. the daily inventory holding cost is $0.01/liter. Both weight parameters $\beta$ and $\theta$ are set to 1. The remaining data for demand uncertainty and transportation costs are given in Tables B1, B2 and B3 in Appendix B.
+
+We solve model (P2) directly to obtain the global optimum by using the BARON solver with GAMS,⁵⁰ because the problem only includes 3 binary variables, 24 continuous variables and 30 constraints. The resulting optimal supply chain is given in Figure 6. We can see that only two DCs are installed, and each serves its nearest three customers. The optimal replenishment number is around 44 orders per year for DC1 and around 57 for DC3. This means that 108,770 liters of LOX are shipped from the
+---PAGE_BREAK---
+
+plant to DC1 in 44 shipments, i.e. roughly one shipment every 8 days, and 182,865 liters of LOX are shipped from the plant to DC3 in 57 shipments, i.e. roughly one shipment every 6 days. The yearly expected flows on the corresponding transportation links are given in Figure 6. The optimal total cost is $366,624.27/year, which includes $200,000/year for installing DC1 and DC3, $65,320.40/year for transportation from the DCs to customers, $77,444.86/year for transportation from the plant to the DCs, $10,163.62/year for fixed ordering costs, $10,301.48/year for working inventory and $3,393.91/year for safety stocks in the two installed DCs. The major trade-off in this instance is between DC installation, transportation and inventory costs.
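As a consistency check, the reported cost components indeed sum to the reported total:

```python
# Cost components of the optimal solution, in $/year, as reported above.
components = [200_000.00,   # DC installation (DC1 and DC3)
              65_320.40,    # DC-to-customer transportation
              77_444.86,    # plant-to-DC transportation
              10_163.62,    # fixed ordering
              10_301.48,    # working inventory
              3_393.91]     # safety stock
assert abs(sum(components) - 366_624.27) < 0.01
```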
+
+**Figure 6** Optimal network structure for the LOX supply chain
+
+## 6.2. Illustrative Example for the Key Trade-offs
+
+To better illustrate the trade-offs in this problem, we consider different weight parameters for the transportation and inventory costs. All the data for demand uncertainty and transportation costs are the same as in the previous example and are given in Tables B1-B3 in Appendix B. Other important model coefficients for the instances discussed in this section are given in Table B4 in Appendix B. Note that to reveal the
+---PAGE_BREAK---
+
+trade-offs under different weight parameters, the units of some parameters are removed for scaling purposes.
+
+**Table 1** Comparison result for the illustrative example
+
+| Transportation cost weight factor (β) | Inventory cost weight factor (θ) | Objective function (cost) | No. of DCs | Network structure |
+|---|---|---|---|---|
+| 0.01 | 0.01 | 2260.26 | 2 | Figure 7a |
+| 0.1 | 0.01 | 8122.93 | 3 | Figure 7b |
+| 0.001 | 0.01 | 1099.25 | 1 | Figure 7c |
+| 0.01 | 0.1 | 5359.18 | 1 | Figure 7d |
+| 0.01 | 0.001 | 1341.04 | 3 | Figure 7e |
+
+GAMS/BARON is also used to solve model (P2) directly to obtain global optimal solutions. We first consider a base case with transportation and inventory cost weight factors $β = 0.01$, $θ = 0.01$, and then consider different values of the weights.
+
+(a) $β = 0.01, θ = 0.01, Cost = 2260.26$ (base case)
+
+(b) $β = 0.1, θ = 0.01, Cost = 8122.93$
+
+(c) $β = 0.001, θ = 0.01, Cost = 1099.25$
+---PAGE_BREAK---
+
+(d) $\beta = 0.01, \theta = 0.1, \text{Cost} = 5359.18$
+
+(e) $\beta = 0.01, \theta = 0.001, \text{Cost} = 1341.04$
+
+**Figure 7** Optimal LOX supply chain network structure of the illustrative example for different transportation cost and inventory cost weighted parameters
+
+The results for the different instances are given in Table 1, and the associated optimal supply chain network structures are given in Figure 7. We can see that in the base case, only two DCs are installed, each connected to three retailers. When we increase the transportation cost factor to $\beta = 0.1$, all three DCs are installed and each of them serves two retailers (Figure 7b). If we decrease the transportation cost factor to $\beta = 0.001$, only DC3 is installed and it serves all the retailers. Thus, the larger the weight factor for transportation costs $\beta$, the more DCs are installed. On the other hand, when we fix the transportation cost factor at $\beta = 0.01$ and consider different values of the inventory cost factor $\theta$, we can similarly see from Figures 7d and 7e that the larger the weight factor for inventory costs, the fewer DCs are installed.
+
+Based on this analysis, we reach a conclusion similar to that of Shen et al.:¹⁹ the more DCs are installed, the lower the potential transportation costs, but the smaller the inventory cost savings. The main reason is that, from an inventory cost perspective, the more retailers are pooled at a DC, the larger the cost saving, whereas from a transportation cost perspective, installing more DCs to serve different retailers may reduce the total transportation cost. Thus, a trade-off between inventory and transportation costs is established and is reflected in the number of DCs, in addition to the trade-off between supply chain design costs and operating costs.
+---PAGE_BREAK---
+
+## 7. Computational Results
+
+In order to illustrate the applicability of the proposed solution strategies, we carry out computational experiments on instances with 33, 88 and 150 retailer locations and different weight parameters $\beta$ and $\theta$. In all cases, each retailer location is also a candidate DC location, i.e. there are as many candidate DC locations as retailer locations in each instance. Note that, in analogy to the industrial gases supply chain introduced in Section 6, the “retailers” correspond to the “customers”.
+
+The model sizes for an instance with $n$ retailer locations (and $n$ candidate DC locations) are given in Table 2. For the largest problem, with 150 retailers and 150 candidate distribution centers, the original INLP problem (P0) includes 22,650 binary variables and 22,650 constraints, while the reformulated problem (P2) includes only 150 binary variables, 22,800 continuous variables and 22,950 constraints.
+
+**Table 2** Model statistics for an instance with $n$ retailer locations (and $n$ candidate distribution center locations)
+
+| Model | (P0) | (P1) | (P2) | (P3) | (PRj(λ)) |
+|---|---|---|---|---|---|
+| No. of discrete variables | $n^2 + n$ | $n$ | $n$ | $n$ | 0 |
+| No. of continuous variables | 0 | $n^2$ | $n^2 + 2n$ | $n^2 + 2n$ | $n + 2$ |
+| No. of constraints | $n^2 + n$ | $n^2 + n$ | $n^2 + 3n$ | $n^2 + 3n$ | 2 |
+
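The counts in Table 2 can be reproduced with a short script. This is an illustrative sketch (the function name is mine, not from the paper) that encodes the formulas of Table 2 for (P0) and (P2):

```python
# Variable and constraint counts from Table 2, as functions of the number of
# retailer locations n (each retailer is also a candidate DC location).
def model_sizes(n: int) -> dict:
    return {
        "P0": {"binary": n**2 + n, "continuous": 0, "constraints": n**2 + n},
        "P2": {"binary": n, "continuous": n**2 + 2 * n, "constraints": n**2 + 3 * n},
    }

# The largest instance in Section 7 has 150 retailers.
sizes = model_sizes(150)
print(sizes["P0"]["binary"])        # 22650 binary variables in (P0)
print(sizes["P2"]["continuous"])    # 22800 continuous variables in (P2)
print(sizes["P2"]["constraints"])   # 22950 constraints in (P2)
```

The output matches the sizes quoted in the text for the 150-retailer problem.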
+For every retailer $i$, we set the annual inventory holding cost $h=1$, the service level parameter $z_{\alpha} = 1.96$ (97.5% service level), the order lead time $L = 7$ days, the fixed order cost $F_i = 10$, the unit shipping cost (from supplier to retailer) $a_i = 5$, and the fixed shipping cost (from supplier to retailer) $g_i = 10$. Each retailer location
+---PAGE_BREAK---
+
+represents a city in the U.S., with mean demand $\mu_i$ equal to the city population divided by 2000, based on the data from U.S. Census 2000.⁵⁴ The standard deviation-to-mean ratio of each customer demand $\sigma_i / \mu_i$ is generated uniformly on $U[0, 0.3]$, and the fixed DC installation cost $f_i$ is generated uniformly on $U[90, 110]$.
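The instance data described above can be generated with a short script. A sketch, assuming the sampling scheme stated in the text; the population list and function name are illustrative, whereas the paper uses U.S. Census 2000 city populations:

```python
import random

def generate_instance(populations, seed=0):
    """Generate one test instance following the sampling scheme of Section 7.
    `populations` holds city populations (hypothetical values here)."""
    rng = random.Random(seed)
    instance = []
    for pop in populations:
        mu = pop / 2000.0                   # mean demand = population / 2000
        sigma = mu * rng.uniform(0.0, 0.3)  # sigma_i / mu_i ~ U[0, 0.3]
        f = rng.uniform(90.0, 110.0)        # DC installation cost ~ U[90, 110]
        instance.append({"mu": mu, "sigma": sigma, "f": f})
    return instance

inst = generate_instance([500000, 120000, 80000])  # three hypothetical cities
```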
+
+All the instances are modeled with GAMS⁵⁰ and solved with solvers including DICOPT, SBB, Bonmin and BARON on an Intel 3.2 GHz machine with 512 MB RAM.
+
+Table 3 shows the optimal objective function values obtained by solving problems (P1) and (P2) directly and with Algorithm 1, using different solvers: the outer-approximation algorithm in DICOPT, the branch & bound method in SBB, and the global optimization solver BARON. For all the instances considered, problems (P1) and (P2) cannot be solved directly with MINLP solvers such as DICOPT and SBB, presumably due to the unbounded gradients encountered when solving the NLP relaxations of (P1) and (P2). In contrast, with the proposed heuristic Algorithm 1, which involves the solution of (P3) and (P2), we obtain good quality solutions with all the solvers. It is interesting to note that for all the computational instances, the optimal solutions are found in the NLP relaxation step. This shows that the NLP relaxations of the MINLP problems (P1) and (P2) are quite effective in practice, although they are not theoretically guaranteed to yield integer solutions. Similar conclusions are reported by Shen et al.,¹⁹ Cornuejols et al.³⁴ and Conn and Cornuejols.³⁵
+
+Table 4 shows the detailed computational times and objective function values for instances ranging from 33 to 150 retailer locations with different weight parameters, solved with different algorithms and solvers. All the instances are solved with Algorithm 1, using the global optimization solver BARON and MINLP solvers that rely on convexity assumptions (Bonmin, DICOPT, SBB) with default options, and with the proposed Lagrangean heuristic (Algorithm 2), in which the Lagrangean subproblems (for lower bounds) are solved with the global optimization solver BARON and the feasibility subproblem (for upper
+---PAGE_BREAK---
+
+bounds) is solved with the NLP solver CONOPT. For comparison purposes, all the instances are also solved with the global optimization solver BARON to obtain global optimal solutions, although BARON failed to terminate the search after more than 10 hours for the large instances. By comparing the solutions and the associated computational times, we can see that both Algorithms 1 and 2 obtain good quality solutions that are equal to, or very close to, the global optimum.
+
+Algorithm 1 requires much less computational time, and for the smaller instances the solutions obtained are quite close to the global optimum. Note that for all the instances where we can obtain an exact global optimum, the solutions from Algorithm 1 are within 5% of the global optimal solution.
+
+The Lagrangean relaxation algorithm requires longer computational times than Algorithm 1, but the quality of the solutions is significantly improved. The detailed computational results of the Lagrangean relaxation algorithm and the global optimization solver BARON are given in Table 5, where the solutions and computational times are compared. For all instances, the Lagrangean relaxation algorithm requires much shorter computational times than BARON to obtain solutions of the same quality. The instances that BARON can solve to global optimality in less than 10 hours are solved more efficiently by the Lagrangean based algorithm, in shorter computational times and with 0% optimality gap. For the remaining large-scale instances, for which BARON cannot close the gaps in 10 hours, the optimality gaps of the Lagrangean based algorithm are much smaller (usually less than 1.2%) than those obtained by running BARON for 10 hours.
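The optimality gaps reported in Table 5 are consistent with defining the gap as (UB - LB)/LB. A minimal check against two entries of the table (the helper name is mine):

```python
def optimality_gap(upper_bound: float, lower_bound: float) -> float:
    """Relative optimality gap in percent, measured against the lower bound."""
    return 100.0 * (upper_bound - lower_bound) / lower_bound

# Row (33 retailers, beta = 0.005, theta = 0.1) of Table 5:
print(round(optimality_gap(1006.01, 1004.53), 3))  # Lagrangean bounds: 0.147 %
print(round(optimality_gap(1007.31, 965.29), 3))   # BARON after 10 h: 4.353 %
```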
+
+The results show that good solutions without excessive computational times can be obtained with the proposed Algorithm 1, and near-global solutions can be obtained with the proposed Lagrangean relaxation method (Algorithm 2).
+---PAGE_BREAK---
+
+Table 3 Comparison of the optimal objective function values for solving different problems with different solvers and algorithms
+
+| No. Retailers | β | θ | (P1) direct: DICOPT | (P1) direct: SBB | (P2) direct: DICOPT | (P2) direct: SBB | Alg. 1: MILP Relaxation (P3) | Alg. 1: SBB | Alg. 1: DICOPT | Alg. 1: BARON |
+|---|---|---|---|---|---|---|---|---|---|---|
+| 33 | 0.001 | 0.1 | Loc. Infeas. | Loc. Infeas. | Loc. Infeas. | Loc. Infeas. | 398.64 | 398.64 | 398.64 | 398.64 |
+| 33 | 0.001 | 0.5 | Loc. Infeas. | Loc. Infeas. | Loc. Infeas. | Loc. Infeas. | 580.46 | 580.46 | 580.46 | 580.46 |
+| 33 | 0.005 | 0.1 | Loc. Infeas. | Loc. Infeas. | Loc. Infeas. | Loc. Infeas. | 890.44 | 1023.00 | 1023.00 | 1023.00 |
+| 33 | 0.005 | 0.5 | Loc. Infeas. | Loc. Infeas. | Loc. Infeas. | Loc. Infeas. | 1072.25 | 1384.03 | 1384.03 | 1384.03 |
+| 88 | 0.001 | 0.1 | Loc. Infeas. | Loc. Infeas. | Loc. Infeas. | Loc. Infeas. | 837.68 | 935.02 | 935.02 | 867.55* |
+| 88 | 0.001 | 0.5 | Loc. Infeas. | Loc. Infeas. | Loc. Infeas. | Loc. Infeas. | 1161.12 | 1386.28 | 1386.28 | 1295.02* |
+| 88 | 0.005 | 0.1 | Loc. Infeas. | Loc. Infeas. | Loc. Infeas. | Loc. Infeas. | 1956.30 | 2297.74 | 2297.74 | 2297.80* |
+| 88 | 0.005 | 0.5 | Loc. Infeas. | Loc. Infeas. | Loc. Infeas. | Loc. Infeas. | 2279.74 | 3082.19 | 3082.19 | 3022.67* |
+| 150 | 0.001 | 0.5 | Loc. Infeas. | Loc. Infeas. | Loc. Infeas. | Loc. Infeas. | 1674.08 | 2205.37 | 2205.37 | 1847.93* |
+| 150 | 0.005 | 0.1 | Loc. Infeas. | Loc. Infeas. | Loc. Infeas. | Loc. Infeas. | 3107.87 | 4069.09 | 4069.09 | 3689.71* |
+
+* Suboptimal solution obtained with BARON within the 10-hour limit. Detailed upper and lower bounds are reported in Table 5.
+---PAGE_BREAK---
+
+**Table 4** Comparison of computational results for solving model (P2) with different algorithms and solvers
+
+| No. Retailers | β | θ | Alg. 1 Bonmin (Ipopt): Obj. | Time (s) | Alg. 1 DICOPT (CONOPT): Obj. | Time (s) | Alg. 1 SBB (CONOPT): Obj. | Time (s) | BARON (global optimum): Obj. | Time (s) | Alg. 2: Obj. | Time (s) |
+|---|---|---|---|---|---|---|---|---|---|---|---|---|
+| 33 | 0.001 | 0.1 | 398.64 | 10.125 | 398.64 | 0.22 | 398.64 | 0.20 | 398.64 | 53.31 | 398.64 | 15.8 |
+| 33 | 0.001 | 0.2 | 457.61 | 366.97 | 457.61 | 0.22 | 457.61 | 0.25 | 457.61 | 54.12 | 457.61 | 16.8 |
+| 33 | 0.001 | 0.5 | 580.46 | 496.281 | 580.46 | 0.23 | 580.46 | 0.22 | 580.46 | 74.27 | 580.46 | 17.9 |
+| 33 | 0.001 | 1.0 | 728.21 | 227.828 | 728.21 | 0.17 | 728.21 | 0.17 | 728.21 | 39.14 | 728.21 | 15.85 |
+| 33 | 0.001 | 5.0 | 1460.40 | 235.765 | 1460.40 | 0.18 | 1460.40 | 0.20 | 1460.40 | 93.22 | 1460.40 | 37.28 |
+| 33 | 0.001 | 0.1 | 398.64 | 10.125 | 398.64 | 0.22 | 398.64 | 0.20 | 398.64 | 53.31 | 398.64 | 15.8 |
+| 33 | 0.003 | 0.1 | 770.26 | 274.078 | 770.26 | 0.23 | 770.26 | 0.27 | 734.60 | 75.67 | 734.60 | 42.79 |
+| 33 | 0.005 | 0.1 | 1023.00 | 244.641 | 1023.00 | 0.72 | 1023.00 | 0.72 | 1007.31* | > 10 hr | 1006.01 | 90.85 |
+| 33 | 0.008 | 0.1 | 1248.59 | 333.953 | 1248.59 | 0.80 | 1248.59 | 0.80 | 1249.37* | > 10 hr | 1248.59 | 53.13 |
+| 33 | 0.010 | 0.1 | 1418.57 | 201.610 | 1418.57 | 0.80 | 1418.57 | 0.76 | 1398.54* | > 10 hr | 1398.39 | 92.74 |
+| 88 | 0.001 | 0.1 | --** | --** | 935.02 | 20.89 | 935.02 | 20.94 | 867.55* | > 10 hr | 867.55 | 356.1 |
+| 88 | 0.001 | 0.5 | --** | --** | 1386.28 | 42.16 | 1386.28 | 38.50 | 1295.02* | > 10 hr | 1230.99 | 322.54 |
+| 88 | 0.005 | 0.1 | --** | --** | 2297.74 | 42.11 | 2297.74 | 42.16 | 2297.80* | > 10 hr | 2284.06 | 840.28 |
+| 88 | 0.005 | 0.5 | --** | --** | | | | | | | | |
+
+* Suboptimal solution obtained with BARON within the 10-hour limit. Detailed upper and lower bounds are reported in Table 5.
+
+** Data not available due to solver failure.
+---PAGE_BREAK---
+
+**Table 5** Comparison of bounds by using the Lagrangean heuristic algorithm and global optimizer BARON
+
+| No. Retailers | β | θ | Lagrangean Relaxation (Alg. 2): Upper Bound | Lower Bound | Gap | Iterations | Time (s) | BARON (global optimum): Upper Bound | Lower Bound | Optimality Gap | Time (s) |
+|---|---|---|---|---|---|---|---|---|---|---|---|
+| 33 | 0.001 | 0.1 | 398.64 | 398.64 | 0 % | 10 | 15.8 | 398.64 | 398.64 | 0 % | 53.31 |
+| 33 | 0.001 | 0.2 | 457.61 | 457.61 | 0 % | 6 | 16.8 | 457.61 | 457.61 | 0 % | 54.12 |
+| 33 | 0.001 | 0.5 | 580.46 | 580.46 | 0 % | 6 | 17.9 | 580.46 | 580.46 | 0 % | 74.27 |
+| 33 | 0.001 | 1.0 | 728.21 | 728.21 | 0 % | 6 | 15.85 | 728.21 | 728.21 | 0 % | 39.14 |
+| 33 | 0.001 | 5.0 | 1460.40 | 1460.40 | 0 % | 13 | 37.28 | 1460.40 | 1460.40 | 0 % | 93.22 |
+| 33 | 0.001 | 0.1 | 398.64 | 398.64 | 0 % | 10 | 15.80 | 398.64 | 398.64 | 0 % | 53.31 |
+| 33 | 0.003 | 0.1 | 734.60 | 734.60 | 0 % | 16 | 42.79 | 734.60 | 734.60 | 0 % | 75.67 |
+| 33 | 0.005 | 0.1 | 1006.01 | 1004.53 | 0.147 % | 32 | 90.85 | 1007.31* | 965.29 | 4.353 % | 36000 |
+| 33 | 0.008 | 0.1 | 1248.59 | 1248.59 | 0 % | 19 | 53.13 | 1249.37* | 1215.12 | 2.819 % | 36000 |
+| 33 | 0.010 | 0.1 | 1398.39 | 1397.7 | 0.049 % | 33 | 92.74 | 1398.54* | 1364.82 | 2.471 % | 36000 |
+| 88 | 0.001 | 0.1 | 867.55 | 867.54 | 0.001 % | 21 | 356.1 | 867.55* | 837.68 | 3.566 % | 36000 |
+| 88 | 0.001 | 0.5 | 1230.99 | 1223.46 | 0.615 % | 24 | 322.54 | 1295.02* | 1165.15 | 11.146 % | 36000 |
+| 88 | 0.005 | 0.1 | 2284.06 | 2280.74 | 0.146 % | 55 | 840.28 | 2297.80* | 2075.51 | 10.710 % | 36000 |
+
+* Suboptimal solution obtained with BARON for 10 hour limit.
+
+---PAGE_BREAK---
+
+**Figure 8** Network structure for the case of 33 retailers with $\beta=0.001$, $\theta=0.1$
+---PAGE_BREAK---
+
+(a) Objective function value
+
+(b) Computational time
+
+**Figure 9** Results with different $\theta$ values for the case of 33 retailers, $\beta=0.001$
+---PAGE_BREAK---
+
+(b) Computational time
+
+**Figure 10** Results with different $\beta$ values for the case of 33 retailers, $\theta=0.1$
+
+The network structure for the 33 retailer case with $\beta = 0.001$, $\theta = 0.1$ is given in Figure 8. Figures 9 and 10 show how the objective function values and computational times change as the weights for transportation costs ($\beta$) and inventory costs ($\theta$) change. Larger weights lead to higher objective function values in both cases. From Figure 9, we can see that global optimal solutions can be obtained with either Algorithm 1 or Algorithm 2, but Algorithm 1 requires less computational time. From Figure 10, we can see that although Algorithm 1 with MINLP solvers that rely on convexity assumptions always converges more quickly, the optimal objective function values are often higher. Compared with the global optimizer BARON, the proposed Lagrangean relaxation algorithm converges to the global optimum in much shorter times for 33 retailers, $\theta = 0.1$ and different $\beta$ values.
+
+# 8. Conclusion
+
+This paper has proposed two algorithms for solving the joint supply chain network design and stochastic inventory management model presented by Shen et al.¹⁹ The first algorithm is a heuristic method based on using MINLP optimization methods that rely
+---PAGE_BREAK---
+
+on assuming convexity in the functions. Computational experiments show that this heuristic algorithm, which includes an initialization scheme, can obtain good quality solutions (typically within 5% of the global optimum). The second algorithm is a heuristic Lagrangean relaxation and decomposition algorithm for obtaining global or near-global optimal solutions. Although there are duality gaps due to the nonconvex nature of the model, extensive numerical examples suggest that the solutions obtained with this algorithm are typically within 1.2% of the global optimum. Moreover, the second algorithm requires much less computational effort than the global optimization solver BARON.
+
+This research can be extended to consider capacity constraints, as both the supplier(s) and the distribution centers are assumed to have unlimited capacity in this model. Proposition 1 will likely not hold for joint capacitated facility location and inventory management models because of the additional capacity constraints. However, the proposed algorithms can still be used to solve the relaxed problems when branching on the assignment variables.
+
+Another possible extension is to consider different lead times for the distribution centers. In this model the lead times from the supplier(s) to all the distribution centers are assumed to be the same, so cost savings can be achieved by risk pooling. If the replenishment lead times of the distribution centers differ, pooling the customers may or may not save costs. It would also be interesting to see how the supply chain network structure and the associated inventory levels change as the lead time at each distribution center changes, especially when addressing responsiveness issues in supply chain network design.
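The risk-pooling effect referred to above can be made concrete with the safety stock expression $z_\alpha \sqrt{L \sum_i \sigma_i^2}$ used in the model (independent demands). A small numeric sketch with made-up demand standard deviations:

```python
import math

def safety_stock(sigmas, z=1.96, lead_time=7):
    """Safety stock for one DC serving customers with the given demand std
    deviations, assuming independent demands: z * sqrt(L * sum(sigma_i^2))."""
    return z * math.sqrt(lead_time * sum(s * s for s in sigmas))

sigmas = [30.0, 50.0, 25.0]  # hypothetical customer demand std deviations
pooled = safety_stock(sigmas)                       # one DC serves all three
separate = sum(safety_stock([s]) for s in sigmas)   # one DC per customer
assert pooled <= separate    # pooling never increases total safety stock
```

The inequality follows from the subadditivity of the square root; it holds for any nonnegative standard deviations, which is why pooling saves inventory cost when lead times are equal.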
+
+## Acknowledgment
+
+The authors acknowledge financial support from the Pennsylvania Infrastructure Technology Alliance (PITA) and the National Science Foundation under Grant No. DMI-0556090.
+
+## Appendix A: Properties of Model (P0)
+
+In this section we present some properties of the INLP model (P0). In particular, we show in Proposition 1 that the binary variables for the assignment decisions ($Y_{ij}$) in
+---PAGE_BREAK---
+
+the model (P0) can be relaxed as continuous variables while treating the $X_j$ variables as integer without changing the global optimal integer solution or a local optimal solution for fixed 0-1 values for $X_j$.
+
+Let us first consider the following relaxation problem (P1) with the assignment variables $Y_{ij}$ in (P0) relaxed as continuous variables as in constraint (14).
+
+$$
+\begin{align}
+\text{(P1)} \quad & \text{Min:} && \sum_{j \in J} \left( f_j X_j + \sum_{i \in I} \hat{d}_{ij} Y_{ij} + K_j \sqrt{\sum_{i \in I} \mu_i Y_{ij}} + q \sqrt{\sum_{i \in I} \hat{\sigma}_i^2 Y_{ij}} \right) \tag{13} \\
+& \text{s.t.} && \sum_{j \in J} Y_{ij} = 1, && \forall i \in I. \tag{8} \\
+& && Y_{ij} \le X_j, && \forall i \in I, \forall j \in J. \tag{9} \\
+& && X_j \in \{0,1\}, && \forall j \in J \tag{10} \\
+& && Y_{ij} \ge 0, && \forall i \in I, \forall j \in J. \tag{14}
+\end{align}
+$$
+
+We also consider problem (P1) in the reduced space where all the binary variables $X_j$ are fixed to be $X_j^* = 0$ or 1, $\forall j \in J$. We denote this problem as (AP1).
+
+$$
+\begin{align}
+\text{(AP1)} \quad & \text{Min:} && \sum_{j \in J} \left( f_j X_j^* + \sum_{i \in I} \hat{d}_{ij} Y_{ij} + K_j \sqrt{\sum_{i \in I} \mu_i Y_{ij}} + q \sqrt{\sum_{i \in I} \hat{\sigma}_i^2 Y_{ij}} \right) \tag{A1} \\
+& \text{s.t.} && \sum_{j \in J} Y_{ij} = 1, && \forall i \in I. \tag{A2} \\
+& && Y_{ij} \le X_j^*, && \forall i \in I, \forall j \in J. \tag{A3} \\
+& && Y_{ij} \ge 0, && \forall i \in I, \forall j \in J. \tag{A4}
+\end{align}
+$$
+
+Problem (P1) is an MINLP problem, and problem (AP1) is a nonlinear programming (NLP) problem with all the binary variables $X_j$ in (P1) fixed to certain values. Note that problem (AP1) has the following properties given in Lemma 1 and Lemma 2.
+
+**Lemma 1.** *Problem (AP1) is a concave minimization problem defined over a polyhedron.*
+
+**Proof:**
+
+It is trivial to see that all the constraints of problem (AP1) are linear; hence the feasible region is a polyhedron.
+---PAGE_BREAK---
+
+Next, we prove that the objective function given in (A1) is concave. Let us assume $\mathbf{Y}^1 = \{Y_{ij}^1 | i \in I, j \in J\}$ and $\mathbf{Y}^2 = \{Y_{ij}^2 | i \in I, j \in J\}$ are two feasible solutions satisfying the constraints of problem (AP1). Let $V^1$ and $V^2$ be the associated objective function values, we have:
+
+$$V^1 = \sum_{j \in J} \left( f_j X_j^* + \sum_{i \in I} \hat{d}_{ij} Y_{ij}^1 + K_j \sqrt{\sum_{i \in I} \mu_i Y_{ij}^1} + q \sqrt{\sum_{i \in I} \hat{\sigma}_i^2 Y_{ij}^1} \right) \quad (A5)$$
+
+$$V^2 = \sum_{j \in J} \left( f_j X_j^* + \sum_{i \in I} \hat{d}_{ij} Y_{ij}^2 + K_j \sqrt{\sum_{i \in I} \mu_i Y_{ij}^2} + q \sqrt{\sum_{i \in I} \hat{\sigma}_i^2 Y_{ij}^2} \right) \quad (A6)$$
+
+Let $0 \le t \le 1$, and $\mathbf{Y}^0 = t \cdot \mathbf{Y}^1 + (1-t) \cdot \mathbf{Y}^2 = \{Y_{ij}^0 = t \cdot Y_{ij}^1 + (1-t) \cdot Y_{ij}^2 | i \in I, j \in J\}$. Since all the constraints of problem (AP1) are linear, it is trivial to show $\mathbf{Y}^0$ is also a feasible solution of problem (AP1). Let $V^0$ be the associated objective function value.
+
+We then have,
+
+$$
+\begin{aligned}
+V^0 &= \sum_{j \in J} \left( f_j X_j^* + \sum_{i \in I} \hat{d}_{ij} Y_{ij}^0 + K_j \sqrt{\sum_{i \in I} \mu_i Y_{ij}^0} + q \sqrt{\sum_{i \in I} \hat{\sigma}_i^2 Y_{ij}^0} \right) \\
+&= \sum_{j \in J} \left( f_j X_j^* + \sum_{i \in I} \hat{d}_{ij} [t \cdot Y_{ij}^1 + (1-t) \cdot Y_{ij}^2] + K_j \sqrt{\sum_{i \in I} \mu_i [t \cdot Y_{ij}^1 + (1-t) \cdot Y_{ij}^2]} + q \sqrt{\sum_{i \in I} \hat{\sigma}_i^2 [t \cdot Y_{ij}^1 + (1-t) \cdot Y_{ij}^2]} \right)
+\end{aligned}
+\quad (A7) $$
+
+Thus we have:
+
+$$
+\begin{aligned}
+& V^0 - [t \cdot V^1 + (1-t) \cdot V^2] \\
+&= \sum_{j \in J} K_j \left( \sqrt{t \sum_{i \in I} \mu_i Y_{ij}^1 + (1-t) \sum_{i \in I} \mu_i Y_{ij}^2} - \left[ t \sqrt{\sum_{i \in I} \mu_i Y_{ij}^1} + (1-t) \sqrt{\sum_{i \in I} \mu_i Y_{ij}^2} \right] \right) \\
+&\quad + \sum_{j \in J} q \left( \sqrt{t \sum_{i \in I} \hat{\sigma}_i^2 Y_{ij}^1 + (1-t) \sum_{i \in I} \hat{\sigma}_i^2 Y_{ij}^2} - \left[ t \sqrt{\sum_{i \in I} \hat{\sigma}_i^2 Y_{ij}^1} + (1-t) \sqrt{\sum_{i \in I} \hat{\sigma}_i^2 Y_{ij}^2} \right] \right)
+\end{aligned}
+\quad (A8) $$
+
+Now consider the following function:
+
+$$f(t) = \sqrt{t \cdot a + (1-t) \cdot b} - [t \cdot \sqrt{a} + (1-t) \cdot \sqrt{b}] \quad (A9)$$
+
+where $0 \le t \le 1$ and $a, b \ge 0$. Thus, we have:
+---PAGE_BREAK---
+
+$$
+\begin{align}
+f(t) &= \sqrt{t \cdot a + (1-t) \cdot b} - [t \cdot \sqrt{a} + (1-t) \cdot \sqrt{b}] = \frac{t \cdot a + (1-t) \cdot b - [t \cdot \sqrt{a} + (1-t) \cdot \sqrt{b}]^2}{\sqrt{t \cdot a + (1-t) \cdot b} + [t \cdot \sqrt{a} + (1-t) \cdot \sqrt{b}]} \nonumber \\
+&= \frac{t \cdot (1-t) \cdot (\sqrt{a} - \sqrt{b})^2}{\sqrt{t \cdot a + (1-t) \cdot b} + [t \cdot \sqrt{a} + (1-t) \cdot \sqrt{b}]} \ge 0 \tag{A10}
+\end{align}
+$$
+
+Comparing (A9), (A10) with each term in (A8), it follows that
+
+$$V^0 - [t \cdot V^1 + (1-t) \cdot V^2] \geq 0$$
+
+Therefore, the objective function of problem (AP1) is a concave function. ■
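The key inequality (A10), $f(t) \ge 0$ for $t \in [0,1]$ and $a, b \ge 0$, can be spot-checked numerically. A minimal sketch:

```python
import math

def f(t: float, a: float, b: float) -> float:
    """f(t) from (A9): the square root of a convex combination minus the
    convex combination of the square roots."""
    return math.sqrt(t * a + (1 - t) * b) - (t * math.sqrt(a) + (1 - t) * math.sqrt(b))

# Spot-check (A10): f(t) >= 0 on a grid of t values and several (a, b) pairs.
for a, b in [(0.0, 4.0), (1.0, 9.0), (2.5, 2.5), (100.0, 0.01)]:
    for k in range(11):
        t = k / 10.0
        assert f(t, a, b) >= -1e-12  # nonnegative up to floating point error
```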
+
+**Lemma 2.** *The $Y_{ij}$ variables take on integer values (0 or 1) at any local or global optimal solution of problem (AP1).*
+
+**Proof:**
+
+Based on Lemma 1, we know that problem (AP1) corresponds to the minimization of a concave function over a polyhedron. As proved by Falk and Hoffmann,⁵⁵ any local or global optimal solution of this problem always lies on a vertex of the polyhedron.
+
+Furthermore, it is trivial to see that the coefficient matrix of constraint (A2) is totally unimodular,⁵⁶ while constraints (A3) and (A4) only provide integer bounds on the $Y_{ij}$ variables. Thus, constraints (A2), (A3) and (A4) define an integral polyhedron whose extreme points have the $Y_{ij}$ variables at integer values (0 or 1).⁵⁶ Therefore, all the $Y_{ij}$ variables are equal to 0 or 1 at any local or global optimal solution of problem (AP1). ■
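A toy illustration of Lemma 2: with one customer split fractionally between two open DCs, the concave square root cost is minimized at an integer vertex of the feasible interval. The instance below is made up:

```python
import math

# One customer with mean demand mu assigned fractionally between two open DCs
# (fractions y and 1 - y). The concave cost sqrt(mu*y) + sqrt(mu*(1-y)) is
# minimized at a vertex, i.e. y = 0 or y = 1, mirroring Lemma 2.
mu = 100.0

def cost(y: float) -> float:
    return math.sqrt(mu * y) + math.sqrt(mu * (1.0 - y))

grid = [k / 100.0 for k in range(101)]
best = min(grid, key=cost)
assert best in (0.0, 1.0)      # the minimizer is an integer assignment
assert cost(0.5) > cost(0.0)   # splitting the customer costs strictly more
```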
+
+**Proposition 1.** *The continuous variables $Y_{ij}$ yield 0-1 integer values when (P1) is globally optimized, or locally optimized for fixed 0-1 values of $X_j$.*
+
+**Proof:**
+
+The MINLP problem (**P1**) is a relaxation of the INLP problem (**P0**) by treating all the $Y_{ij}$ as continuous variables, while problem (**AP1**) is an NLP subproblem of (**P1**) with a
+---PAGE_BREAK---
+
+fixed value of the integer variables $X_j$. Based on Lemma 2, we can then conclude that
+when (P1) is globally optimized or locally optimized for fixed 0-1 values for $X_j$, all
+the $Y_{ij}$ variables take integer values (0 or 1).
+
+**Proposition 2.** *At the global optimal solution of problem (P2), or at a local optimal solution with fixed 0-1 values of $X_j$, all the continuous variables $Y_{ij}$ take on integer values (0 or 1).*
+
+**Proof:**
+
+Similarly, we consider problem (P2) in the reduced space where all the binary
+variables $X_j$ are fixed to be $X_j^* = 0$ or 1, $\forall j \in J$. We denote this problem as (AP2).
+
+$$
+\begin{align}
+\text{(AP2)} \quad \text{Min:} \quad & \sum_{j \in J} \left( f_j X_j^* + \sum_{i \in I} \hat{d}_{ij} Y_{ij} + K_j Z1_j + q Z2_j \right) \tag{A11} \\
+\text{s.t.} \quad & \sum_{j \in J} Y_{ij} = 1, \qquad \forall i \in I. \tag{A2} \\
+& Y_{ij} \le X_j^*, \qquad \forall i \in I, \forall j \in J. \tag{A3} \\
+& Y_{ij} \ge 0, \qquad \forall i \in I, \forall j \in J. \tag{A4} \\
+& -Z1_j^2 + \sum_{i \in I} \mu_i Y_{ij} \le 0, \qquad \forall j \in J \tag{A12} \\
+& -Z2_j^2 + \sum_{i \in I} \hat{\sigma}_i^2 Y_{ij} \le 0, \qquad \forall j \in J \tag{A13} \\
+& Z1_j \ge 0, \qquad \forall j \in J \tag{A14} \\
+& Z2_j \ge 0, \qquad \forall j \in J \tag{A15}
+\end{align}
+$$
+
+We associate the Lagrange multiplier $\lambda_i$ with constraint (A2), $\mu_{ij}$ with constraint (A3), $t_{ij}$ with constraint (A4), $\rho_j$ with constraint (A12), $\gamma_j$ with constraint (A13), $\zeta_j$ with constraint (A14) and $\xi_j$ with constraint (A15). The Lagrange function for this problem is:
+
+$$
+\begin{aligned}
+L = & \sum_{j \in J} \left( f_j X_j^* + \sum_{i \in I} \hat{d}_{ij} Y_{ij} + K_j Z1_j + q Z2_j \right) + \sum_{i \in I} \lambda_i \left( 1 - \sum_{j \in J} Y_{ij} \right) + \sum_{i \in I} \sum_{j \in J} \mu_{ij} (Y_{ij} - X_j^*) \\
+& - \sum_{i \in I} \sum_{j \in J} t_{ij} Y_{ij} + \sum_{j \in J} \rho_j \left( -Z1_j^2 + \sum_{i \in I} \mu_i Y_{ij} \right) + \sum_{j \in J} \gamma_j \left( -Z2_j^2 + \sum_{i \in I} \hat{\sigma}_i^2 Y_{ij} \right) - \sum_{j \in J} \zeta_j Z1_j - \sum_{j \in J} \xi_j Z2_j
+\end{aligned}
+$$
+---PAGE_BREAK---
+
+Then the KKT conditions for this problem are:
+
+$$ \frac{\partial L}{\partial Y_{ij}} = \hat{d}_{ij} - \lambda_i + \mu_{ij} - t_{ij} + \rho_j \mu_i + \gamma_j \hat{\sigma}_i^2 = 0, \forall i \in I, j \in J \quad (A16) $$
+
+$$ \frac{\partial L}{\partial Z1_j} = K_j - 2 \cdot \rho_j \cdot Z1_j - \zeta_j = 0, \forall j \in J \quad (A17) $$
+
+$$ \frac{\partial L}{\partial Z2_j} = q - 2 \cdot \gamma_j \cdot Z2_j - \xi_j = 0, \forall j \in J \quad (A18) $$
+
+$$ \mu_{ij}(Y_{ij} - X_j^*) = 0, \forall i \in I, \forall j \in J \quad (A19) $$
+
+$$ t_{ij}Y_{ij} = 0, \forall i \in I, \forall j \in J \quad (A20) $$
+
+$$ \rho_j(-Z1_j^2 + \sum_{i \in I} \mu_i Y_{ij}) = 0, \forall j \in J \quad (A21) $$
+
+$$ \gamma_j(-Z2_j^2 + \sum_{i \in I} \hat{\sigma}_i^2 Y_{ij}) = 0, \forall j \in J \quad (A22) $$
+
+$$ \zeta_j \cdot Z1_j = 0, \forall j \in J \quad (A23) $$
+
+$$ \xi_j \cdot Z2_j = 0, \forall j \in J \quad (A24) $$
+
+$$ \sum_{j \in J} Y_{ij} = 1, \quad \forall i \in I. \quad (A2) $$
+
+$$ Y_{ij} \le X_j^*, \quad \forall i \in I, \forall j \in J. \quad (A3) $$
+
+$$ Y_{ij} \ge 0, \quad \forall i \in I, \forall j \in J. \quad (A4) $$
+
+$$ -Z1_j^2 + \sum_{i \in I} \mu_i Y_{ij} \le 0, \quad \forall j \in J \quad (A12) $$
+
+$$ -Z2_j^2 + \sum_{i \in I} \hat{\sigma}_i^2 Y_{ij} \le 0, \quad \forall j \in J \quad (A13) $$
+
+$$ Z1_j \ge 0, \quad \forall j \in J \quad (A14) $$
+
+$$ Z2_j \ge 0, \quad \forall j \in J \quad (A15) $$
+
+$$ \lambda_i \ge 0, \mu_{ij} \ge 0, t_{ij} \ge 0, \forall i \in I, \forall j \in J. \quad (A25) $$
+
+$$ \rho_j \ge 0, \gamma_j \ge 0, \zeta_j \ge 0, \xi_j \ge 0, \forall j \in J. \quad (A26) $$
+
+On the other hand, for problem (AP1), we similarly associate the Lagrange multiplier $\lambda_i$ with constraint (A2), $\mu_{ij}$ with constraint (A3) and $t_{ij}$ with constraint (A4). The Lagrange function for problem (AP1) is:
+---PAGE_BREAK---
+
+$$L' = \sum_{j \in J} \left( f_j X_j^* + \sum_{i \in I} \hat{d}_{ij} Y_{ij} + K_j \sqrt{\sum_{i \in I} \mu_i Y_{ij}} + q \sqrt{\sum_{i \in I} \hat{\sigma}_i^2 Y_{ij}} \right) + \sum_{i \in I} \lambda_i \left( 1 - \sum_{j \in J} Y_{ij} \right) + \sum_{i \in I} \sum_{j \in J} \mu_{ij} (Y_{ij} - X_j^*) - \sum_{i \in I} \sum_{j \in J} t_{ij} Y_{ij}$$
+
+Then the KKT conditions for this problem are:
+
+$$\frac{\partial L'}{\partial Y_{ij}} = \hat{d}_{ij} + \frac{K_j \mu_i}{2 \sqrt{\sum_{i \in I} \mu_i Y_{ij}}} + \frac{q \hat{\sigma}_i^2}{2 \sqrt{\sum_{i \in I} \hat{\sigma}_i^2 Y_{ij}}} - \lambda_i + \mu_{ij} - t_{ij} = 0, \quad \forall i \in I, j \in J \quad (A27)$$
+
+$$\mu_{ij}(Y_{ij} - X_j^*) = 0, \quad \forall i \in I, \forall j \in J \quad (A19)$$
+
+$$t_{ij}Y_{ij} = 0, \quad \forall i \in I, \forall j \in J \quad (A20)$$
+
+$$\sum_{j \in J} Y_{ij} = 1, \quad \forall i \in I. \quad (A2)$$
+
+$$Y_{ij} \le X_j^*, \quad \forall i \in I, \forall j \in J. \quad (A3)$$
+
+$$Y_{ij} \ge 0, \quad \forall i \in I, \forall j \in J. \quad (A4)$$
+
+$$\lambda_i \ge 0, \mu_{ij} \ge 0, t_{ij} \ge 0, \forall i \in I, \forall j \in J. \quad (A25)$$
+
+By substituting equations (A17), (A18), (A21), (A22), (A23) and (A24) into (A27), we obtain the following. For $Z1_j > 0$, (A23) gives $\zeta_j = 0$, so (A17) yields $\rho_j = K_j/(2 Z1_j)$ and (A21) gives $Z1_j = \sqrt{\sum_{i \in I} \mu_i Y_{ij}}$; the terms in $Z2_j$ follow analogously from (A18), (A22) and (A24). Hence
+
+$$\frac{\partial L'}{\partial Y_{ij}} = \hat{d}_{ij} + \frac{K_j \mu_i}{2\sqrt{\sum_{i\in I} \mu_i Y_{ij}}} + \frac{q\hat{\sigma}_i^2}{2\sqrt{\sum_{i\in I} \hat{\sigma}_i^2 Y_{ij}}} - \lambda_i + \mu_{ij} - t_{ij} = \hat{d}_{ij} - \lambda_i + \mu_{ij} - t_{ij} + \rho_j \mu_i + \gamma_j \hat{\sigma}_i^2 = \frac{\partial L}{\partial Y_{ij}}$$
+
+which is the same as (A16).
+
+This shows that the KKT conditions of problems (AP1) and (AP2) are equivalent, although, due to the square root terms in (A27), problem (AP1) may have unbounded gradients. Because all the optimal solutions of problem (AP1) have $Y_{ij}$ taking integer values 0 or 1, we conclude that the $Y_{ij}$ variables also take integer values when (AP2) is (locally or globally) optimized. As in Proposition 1, we conclude that problem (P2) has all the $Y_{ij}$ take on integer values when it is globally optimized, or locally optimized for fixed $X_j$. ■
+---PAGE_BREAK---
+
+# Appendix B: Data for the Illustrative Examples
+
+**Table B1** Parameters for demand uncertainty for the illustrative example
+
+| | Mean demand $\mu_i$ (Liters/day) | Standard deviation $\sigma_i$ (Liters/day) |
+|---|---|---|
+| Customer 1 | 95 | 30 |
+| Customer 2 | 157 | 50 |
+| Customer 3 | 46 | 25 |
+| Customer 4 | 234 | 80 |
+| Customer 5 | 75 | 25 |
+| Customer 6 | 192 | 80 |
+
+**Table B2** Parameters for unit transportation cost ($d_{ij}$) between DCs and customers ($/Liter)
+
+| | DC 1 | DC 2 | DC 3 |
+|---|---|---|---|
+| Customer 1 | 0.04 | 2.00 | 2.88 |
+| Customer 2 | 0.08 | 1.36 | 1.32 |
+| Customer 3 | 0.36 | 0.08 | 1.04 |
+| Customer 4 | 0.88 | 0.10 | 0.52 |
+| Customer 5 | 1.52 | 1.80 | 0.12 |
+| Customer 6 | 3.36 | 2.28 | 0.08 |
+
+**Table B3** Parameters for shipping cost between Plant and DCs
+
+| | Fixed shipping cost from plant to DC $g_j$ ($/shipment) | Unit shipping cost $a_j$ ($/Liter) |
+|---|---|---|
+| DC 1 | 13 | 0.24 |
+| DC 2 | 10 | 0.20 |
+| DC 3 | 14 | 0.28 |
+
+**Table B4** Model coefficients for the illustrative example in Section 6.2
+
+| Symbol | Description | Value |
+|---|---|---|
+| $F_j$ | Fixed order cost per replenishment | 10 |
+| $f_j$ | Fixed cost to install a DC (annual) | 100 |
+| $z_\alpha$ | Service level parameter | 1.96 |
+| $L$ | Order lead time (days) | 7 |
+| $h$ | Unit inventory holding cost (annual) | 12 |
+---PAGE_BREAK---
+
+## References
+
+1. Grossmann, I. E., Enterprise-wide Optimization: A New Frontier in Process Systems Engineering. *AIChE Journal* **2005**, *51*, 1846-1857.
+
+2. Chopra, S.; Meindl, P., *Supply Chain Management: Strategy, Planning and Operation*. Prentice Hall: Saddle River, NJ, 2003.
+
+3. Daskin, M. S., *Network and Discrete Location: Models, Algorithms, and Applications*. Wiley: New York, 1995.
+
+4. Owen, S. H.; Daskin, M. S., Strategic facility location: A review. *European Journal of Operations Research* **1998**, *111*, 423-447.
+
+5. Zipkin, P. H., *Foundations of Inventory Management*. McGraw-Hill: Boston, MA, 2000.
+
+6. Cachon, G. P.; Fisher, M., Supply Chain Inventory Management and the Value of Shared Information. *Management Science* **2000**, *46*, (8), 1032-1048.
+
+7. Kok, A. G.; Graves, S. C., *Supply Chain Management: Design, Coordination and Operation*. Elsevier: 2003.
+
+8. Tsiakis, P.; Shah, N.; Pantelides, C. C., Design of Multi-echelon Supply Chain Networks under Demand Uncertainty. *Industrial & Engineering Chemistry Research* **2001**, *40*, 3585-3604.
+
+9. Bok, J.-K.; Grossmann, I. E.; Park, S., Supply Chain Optimization in Continuous Flexible Process Networks. *Industrial & Engineering Chemistry Research* **2000**, *39*, (5), 1279-1290.
+
+10. Relvas, S.; Matos, H. A.; Barbosa-Póvoa, A. P. F. D.; Fialho, J.; Pinheiro, A. S., Pipeline Scheduling and Inventory Management of a Multiproduct Distribution Oil System. *Industrial & Engineering Chemistry Research* **2006**, *45*, 7841-7855.
+
+11. Schulz, E. P.; Diaz, M. S.; Bandoni, J. A., Supply chain optimization of large-scale continuous process. *Computers & Chemical Engineering* **2006**, *29*, 1305-1316.
+
+12. Jackson, J. R.; Grossmann, I. E., Temporal decomposition scheme for nonlinear multisite production planning and distribution models. *Industrial & Engineering Chemistry Research* **2003**, *42*, (13), 3045-3055.
+
+13. Lim, M.-F.; Karimi, I. A., Resource-Constrained Scheduling of Parallel Production Lines Using Asynchronous Slots. *Industrial & Engineering Chemistry Research* **2003**, *42*, 6832-6842.
+
+14. Varma, V. A.; Reklaitis, G. V.; Blau, G. E.; Pekny, J. F., Enterprise-wide modeling & optimization – An overview of emerging research challenges and opportunities. *Computers & Chemical Engineering* **2007**, *31*, 692-711.
+
+15. Chen, C.-L.; Lee, W.-C., Multi-objective optimization of multi-echelon supply chain networks with uncertain product demands and prices. *Computers & Chemical Engineering* **2004**, *28*, 1131-1144.
+
+16. Gupta, A.; Maranas, C. D., Managing demand uncertainty in supply chain planning.
+---PAGE_BREAK---
+
+Computers & Chemical Engineering **2003**, *27*, 1219-1227.
+
+17. Gupta, A.; Maranas, C. D.; McDonald, C. M., Mid-term supply chain planning under demand uncertainty: customer demand satisfaction and inventory management. *Computers & Chemical Engineering* **2000**, *24*, 2613-2621.
+
+18. Jung, J. Y.; Blau, G.; Pekny, J. F.; Reklaitis, G. V.; Eversdyk, D., Integrated safety stock management for multi-stage supply chains under production capacity constraints. In *Computers & Chemical Engineering*, Submitted: 2007.
+
+19. Shen, Z.-J. M.; Coullard, C.; Daskin, M. S., A joint location-inventory model. *Transportation Science* **2003**, *37*, 40-55.
+
+20. Fisher, M. L., The Lagrangean relaxation method for solving integer programming problems. *Management Science* **1981**, *27*, (1), 18.
+
+21. Fisher, M. L., An application oriented guide to Lagrangian relaxation. *Interfaces* **1985**, *15*, (2), 2-21.
+
+22. Guignard, M.; Kim, S., Lagrangean decomposition: A model yielding stronger Lagrangean bounds. *Mathematical Programming* **1987**, *39*, 215-228.
+
+23. Beasley, J. E., Lagrangean heuristics for location problems. *European Journal of Operations Research* **1993**, *65*, (3), 383-399.
+
+24. Holmberg, K.; Ling, J., A Lagrangean heuristic for the facility location problem with staircase costs. *European Journal of Operations Research* **1997**, *97*, (1), 63-74.
+
+25. Sridharan, R., The capacitated plant location problem. *European Journal of Operations Research* **1993**, *87*, 203-213.
+
+26. Pirkul, H.; Jayaraman, V., Production, transportation, and distribution planning in a multi-commodity tri-echelon system. *Transportation Science* **1996**, *30*, 291-302.
+
+27. Klose, A., An Lagrangean relax-and-cut approach for two-stage capacitated facility location problems. *Journal of the Operational Research Society* **2000**, *126*, 408-421.
+
+28. van den Heever, S. A.; Grossmann, I. E.; Vasantharajan, S., A Lagrangean decomposition heuristic for the design and planning of offshore hydrocarbon field infrastructures with complex economic objectives. *Industrial & Engineering Chemistry Research* **2001**, *40*, (13), 2857-2875.
+
+29. Neiro, S. M. S.; Pinto, J. M., A general modeling framework for the operational planning of petroleum supply chains. *Computers & Chemical Engineering* **2004**, *28*, 871-896.
+
+30. Axsater, S., Using the Deterministic EOQ Formula in Stochastic Inventory Control. *Management Science* **1996**, *42*, (6), 830-834.
+
+31. Zheng, Y. S., On properties of stochastic inventory systems. *Management Science* **1992**, *38*, 87-103.
+
+32. Eppen, G., Effects of centralization on expected costs in a multi-echelon newsboy problem. *Management Science* **1979**, *25*, (5), 498-501.
+
+33. Daskin, M. S.; Coullard, C.; Shen, Z.-J. M., An inventory-location model: formulation, solution algorithm and computational results. *Annals of Operations Research* **2002**, *110*, 83-106.
+---PAGE_BREAK---
+
+34. Cornuejols, G.; Nemhauser, G. L.; Wolsey, L. A., The uncapacitated facility location problem. In *Discrete Location Theory*, Mirchandani, P.; Francis, R., Eds. John Wiley and Son Inc.: New York, 1990; pp 119-171.
+
+35. Conn, A. R.; Cornuejols, G., A Projection Method for the Uncapacitated Facility Location Problem. *Mathematical Programming* **1990**, *46*, 273-298.
+
+36. Ryoo, H. S.; Sahinidis, N. V., A branch-and-reduce approach to global optimization. *Journal of Global Optimization* **1996**, *8*, (2), 107-138.
+
+37. Tawarmalani, M.; Sahinidis, N. V., Global optimization of mixed-integer nonlinear programs: A theoretical and computational study. *Mathematical Programming* **2004**, *99*, 563-591.
+
+38. Adjiman, C. S.; Androulakis, I. P.; Floudas, C. A., Global optimization of mixed-integer nonlinear problems. *AIChE Journal* **2000**, *16*, (9), 1769-1797.
+
+39. Quesada, I.; Grossmann, I. E., A Global Optimization Algorithm for Linear Fractional and Bilinear Programs. *Journal of Global Optimization* **1995**, *6*, 39-76.
+
+40. Smith, E. M. B.; Pantelides, C. C., A symbolic reformulation/spatial branch and bound algorithm for the global optimization of nonconvex MINLPs. *Computers & Chemical Engineering* **1999**, *23*, 457-478.
+
+41. Kesavan, P.; Allgor, R. J.; Gatzke, E. P.; Barton, P. I., Outer Approximation Algorithms for Separable Nonconvex Mixed-Integer Nonlinear Programs. *Mathematical Programming* **2004**, *100*, (3), 517-535.
+
+42. Sahinidis, N. V., BARON: A general purpose global optimization software package. *Journal of Global Optimization* **1996**, *8*, (2), 201-205.
+
+43. Grossmann, I. E., Review of Nonlinear Mixed-Integer and Disjunctive Programming Techniques. *Optimization and Engineering* **2002**, *3*, 227-252.
+
+44. Leyffer, S., Integrating SQP and branch and bound for mixed integer nonlinear programming. *Computational Optimization and Applications* **2001**, *18*, 295-309.
+
+45. Geoffrion, A. M., Generalized Benders Decomposition. *Journal of Optimization Theory and Applications* **1972**, *10*, (4), 237-260.
+
+46. Duran, M. A.; Grossmann, I. E., An outer-approximation algorithm for a class of mixed-integer nonlinear programs. *Mathematical Programming* **1986**, *36*, (3), 307.
+
+47. Viswanathan, J.; Grossmann, I. E., A combined penalty-function and outer-approximation method for MINLP optimization. *Computers & Chemical Engineering* **1990**, *14*, (7), 769-782.
+
+48. Quesada, I.; Grossmann, I. E., An LP/NLP based branch and bound algorithm for convex MINLP optimization problems. *Computers & Chemical Engineering* **1992**, *16*, 937-947.
+
+49. Westerlund, T.; Pettersson, F., A cutting plane method for solving convex MINLP problems. *Computers & Chemical Engineering* **1995**, *19*, S131-S136.
+
+50. Brooke, A.; Kendrick, D.; Meeraus, A.; Raman, R., GAMS- A User's Manual. In GAMS Development Corp.: 1998.
+
+51. Bonami, P.; Waechter, A.; Biegler, L. T.; Conn, A. R.; Cornuéjols, G.; Grossmann, I. E.; Laird,
+---PAGE_BREAK---
+
+C. D.; Lee, J.; Lodi, A.; Margot, F.; Sawaya, N., An Algorithmic Framework for Convex Mixed Integer Nonlinear Programs. In *Technical Report RC23771*, IBM Thomas J. Watson Research Center: Yorktown Heights, NY, 2005.
+
+52. http://www.coin-or.org
+
+53. Falk, J. E.; Soland, R. M., An Algorithm for separable nonconvex programming problems. *Management Science* **1969**, *15*, 550-569.
+
+54. http://www.census.gov/main/www/cen2000.html
+
+55. Falk, J. E.; Hoffman, K. R., A Successive Underestimation Method for Concave Minimization Problems. *Mathematics of Operations Research* **1976**, *1*, (3), 251-259.
+
+56. Nemhauser, G. L.; Wolsey, L. A., *Integer and Combinatorial Optimization*. John Wiley and Son Inc.: New York, NY, 1988.
+
+---PAGE_BREAK---
+
+# Fractional Zero Forcing via Three-color Forcing Games
+
+Leslie Hogben* Kevin F. Palmowski† David E. Roberson‡ Michael Young§
+
+May 13, 2015
+
+**Abstract**
+
+An *r*-fold analogue of the positive semidefinite zero forcing process that is carried out on the *r*-blowup of a graph is introduced and used to define the fractional positive semidefinite forcing number. Properties of the graph blowup when colored with a fractional positive semidefinite forcing set are examined and used to define a three-color forcing game that directly computes the fractional positive semidefinite forcing number of a graph. We also present a three-color interpretation of the skew zero forcing game. The treatment of fractional positive semidefinite forcing number is paralleled to develop a fractional parameter based on the standard zero forcing process and it is shown that this parameter is exactly the skew zero forcing number. The three-color approach and an algorithm are used to characterize graphs whose skew zero forcing number equals zero.
+
+**Key words.** zero forcing, fractional, positive semidefinite, graph
+
+**Subject classifications.** 05C72, 05C50, 05C57, 05C85
+
+## 1 Introduction
+
+This paper studies fractional versions (in the spirit of [9]) of the standard and positive semidefinite zero forcing numbers and introduces three-color forcing games to compute these parameters.
+
+## 1.1 Zero forcing games
+
+The zero forcing process was introduced independently in [1], as a method of forcing zeros in a null vector of a matrix described by a graph in order to upper bound the nullity of the matrix, and in [4], for control of quantum systems. The original process has since spawned numerous variants. In this section, we introduce zero forcing games and the terminology used therein.
+
+*Department of Mathematics, Iowa State University, Ames, IA 50011, USA (LHogben@iastate.edu) and American Institute of Mathematics, 600 E. Brokaw Rd., San Jose, CA 95112, USA (hogben@aimath.org).
+
+†Department of Mathematics, Iowa State University, Ames, IA 50011, USA (kpalmow@iastate.edu).
+
+‡Division of Mathematical Sciences, Nanyang Technological University, SPMS-MAS-03-01, 21 Nanyang Link, Singapore 637371 (droberson@ntu.edu.sg).
+
+§Department of Mathematics, Iowa State University, Ames, IA 50011, USA (myoung@iastate.edu).
+---PAGE_BREAK---
+
+Abstractly, a *forcing game* is a type of coloring game that is played on a simple graph *G*. First, a “target color,” typically blue or dark blue, is designated. Each vertex of the graph is then colored the target color, white, or possibly some other color (in prior work, only white and the target color have been used). A *forcing rule* is chosen: this is a rule that describes the conditions under which some vertex can cause another vertex to change to the target color. If vertex *u* causes a neighboring vertex *w* to change color, we say that *u* forces *w* and write *u* → *w*. The forcing rule is repeatedly applied until no more forces can be performed, at which point the game ends; the coloring at the end is called the *final coloring*. An ordered list of the forces performed is referred to as a *chronological list of forces*. Note that there is usually some choice as to which forces are performed, as well as the order in which these forces occur. As such, a single forcing set may generate many different chronological lists of forces; however, the final coloring is unique for all of the games discussed herein. If the graph is totally colored with the target color at the end of the game, then we say that *G* has been *forced*. The goal of the game is to force *G*. If this is possible, then the initial set of non-white vertices is called a *forcing set*.
+
+The (standard) zero forcing game uses only the colors blue (the target color) and white. The (standard) zero forcing rule is as follows:
+
+If $w$ is the only white neighbor of a blue vertex $u$, then $u$ can force $w$.
+
+A (standard) zero forcing set is an initial set of blue vertices that can force $G$ using this rule. The (standard) zero forcing number of $G$, denoted $Z(G)$, is the minimum cardinality of a zero forcing set for $G$. We present an illustrative example.
+
+Figure 1: Standard zero forcing game example
+
+**Example 1.1.** Let $G$ be as in Figure 1 and choose the initial set of blue vertices $B = \{a, f, g\}$ (Figure 1a). Since each vertex in $B$ has only one white neighbor, we are able to perform the forces $a \to c$, $f \to d$, and $g \to e$; Figure 1b shows the state of the system after these first forces are performed. After this, the only white vertex remaining in the graph is $b$, which is then forced by $c$ (Figure 1c). Thus we have forced $G$ and conclude that $B$ is a (standard) zero forcing set; it is left as an exercise to verify that $B$ is a minimum (standard) zero forcing set, so $Z(G) = |B| = 3$.
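The forcing process in Example 1.1 is easy to automate. Below is a minimal sketch of the standard zero forcing rule in Python; the path graph used to exercise it is an illustrative assumption, not the graph of Figure 1.

```python
def zero_forcing_closure(adj, blue):
    """Repeatedly apply the standard rule: a blue vertex with exactly one
    white neighbor forces that neighbor; stop when no force is possible."""
    blue = set(blue)
    changed = True
    while changed:
        changed = False
        for u in list(blue):
            white = [w for w in adj[u] if w not in blue]
            if len(white) == 1:       # u has a unique white neighbor
                blue.add(white[0])    # u forces it
                changed = True
    return blue

# On the path 1-2-3-4, the single endpoint {1} forces the whole graph,
# so Z(P4) = 1.
path = {1: [2], 2: [1, 3], 3: [2, 4], 4: [3]}
assert zero_forcing_closure(path, {1}) == {1, 2, 3, 4}
```

A set $B$ is a zero forcing set exactly when this closure equals $V(G)$, so $Z(G)$ can be computed (for small graphs) by testing subsets in order of increasing size.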
+
+From this point forward, we will omit the word “standard” when referring to the standard zero
+forcing game, its forcing rule, or zero forcing sets whenever there is no risk of ambiguity.
+
+The positive semidefinite zero forcing game is a modification of the zero forcing game used to force zeros in a null vector of a positive semidefinite matrix described by a graph [3]. Like the zero
+---PAGE_BREAK---
+
+forcing game, positive semidefinite zero forcing uses only the colors blue (target) and white. The *positive semidefinite zero forcing rule* is the same as the standard zero forcing rule, except that this rule also features a *disconnect rule*:
+
+Remove all blue vertices from the graph, leaving a set of connected components. To each connected component (of white vertices) in turn, add the blue vertices, the edges among the blue vertices, and any edges between the blue vertices and that component, and perform forces via the standard rule: If $w$ is the only white neighbor of a blue vertex $u$ in this induced subgraph, then $u$ can force $w$.
+
+It is not assumed that disconnection occurs; if there is only one component, then we simply force via the standard forcing rule. If disconnection does occur, then after the force the graph is “reassembled” prior to applying the rule again. As one would expect, a *positive semidefinite zero forcing set* is an initial set of blue vertices that can force $G$ using this rule, and the *positive semidefinite zero forcing number* of $G$, denoted $Z^+(G)$, is the minimum cardinality of a positive semidefinite zero forcing set for $G$. As in the standard zero forcing case, we examine an illustrative example.
+
+Figure 2: Positive semidefinite zero forcing game example
+
+**Example 1.2.** Let $G$ be as in Figure 2 and choose the initial set of blue vertices $B = \{c, d\}$ (Figure 2a). This is clearly not a standard zero forcing set, since no initial force can be made using the standard zero forcing rule; however, the positive semidefinite zero forcing game allows us to use the disconnect rule, and this example reveals its power. Applying the disconnect rule yields the connected components shown in Figure 2b. $B$ is then connected to each component and one force is performed in each component (Figure 2c). After forcing, the graph is reassembled (Figure 2d).
+---PAGE_BREAK---
+
+The final force in the process, $e \to g$, does not require the disconnect rule (Figure 2e). As before, we were able to force $G$, so the initial set $B$ is a positive semidefinite zero forcing set; it is left as an exercise to verify that $B$ is also minimum and $Z^+(G) = 2$.
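The disconnect rule is likewise mechanical: delete the blue vertices, split the white subgraph into connected components, and apply the standard rule inside each component. A minimal sketch follows; the path graph in the example is an illustrative assumption, not the graph of Figure 2.

```python
def psd_closure(adj, blue):
    """Positive semidefinite zero forcing: apply the standard rule inside
    each connected component of the white subgraph (the disconnect rule)."""
    blue = set(blue)
    while True:
        white = set(adj) - blue
        # connected components of the subgraph induced by the white vertices
        comps, seen = [], set()
        for s in white:
            if s in seen:
                continue
            comp, stack = set(), [s]
            while stack:
                v = stack.pop()
                if v in comp:
                    continue
                comp.add(v)
                seen.add(v)
                stack.extend(x for x in adj[v] if x in white)
            comps.append(comp)
        # find a blue vertex with a unique white neighbor in some component
        force = next(((u, wn) for comp in comps for u in blue
                      for wn in [set(adj[u]) & comp] if len(wn) == 1), None)
        if force is None:
            return blue
        blue |= force[1]

# On the path 1-2-3-4 the middle vertex {2} is a PSD forcing set: deleting
# it leaves components {1} and {3,4}, and 2 forces into each, so Z+(P4) = 1.
path = {1: [2], 2: [1, 3], 3: [2, 4], 4: [3]}
assert psd_closure(path, {2}) == {1, 2, 3, 4}
```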
+
+The skew zero forcing game, another variant on zero forcing that uses the colors white and blue (target), was first considered in [8] to force zeros in a null vector of a skew symmetric matrix described by a graph. The skew zero forcing rule is as follows:
+
+If $w$ is the only white neighbor of any vertex $u$, then $u$ can force $w$.
+
+Skew zero forcing removes the standard requirement that the forcing vertex $u$ be blue; as a result, skew zero forcing allows white vertex forcing, i.e., a white vertex is allowed to force its only white neighbor. A skew zero forcing set is an initial set of blue vertices that can force $G$ using this rule, and the skew zero forcing number of $G$, denoted $Z^-(G)$, is the minimum cardinality of a skew zero forcing set for $G$. We return for a final time to our illustrative example.
+
+Figure 3: Skew zero forcing game example
+
+**Example 1.3.** Let $G$ be as in Figure 3 and choose the initial blue vertex $B = \{a\}$ (Figure 3a). The vertex $a$ is able to perform a standard force on its neighbor $c$, and vertices $f$ and $g$ are able to perform white vertex forces on their neighbors $d$ and $e$, respectively (Figure 3b). At this point, the standard forces $c \to b$, $d \to f$, and $e \to g$ can be performed, which forces $G$ (Figure 3c). $B$ is thus a skew zero forcing set; as before, it is an exercise to show that $B$ is minimum and $Z^-(G) = 1$.
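Dropping the requirement that the forcing vertex be blue changes a single line relative to the standard game. A minimal sketch; the path graphs used below are illustrative assumptions, not the graph of Figure 3.

```python
def skew_closure(adj, blue):
    """Skew zero forcing: ANY vertex (blue or white) with exactly one
    white neighbor forces that neighbor blue."""
    blue = set(blue)
    changed = True
    while changed:
        changed = False
        for u in adj:                        # u need not be blue
            white = [w for w in adj[u] if w not in blue]
            if len(white) == 1:
                blue.add(white[0])
                changed = True
    return blue

# White vertex forcing lets the empty set force the path 1-2-3-4:
# 1 forces 2 and 4 forces 3 (white vertex forces), then 2 forces 1 and
# 3 forces 4, so Z-(P4) = 0.
path = {1: [2], 2: [1, 3], 3: [2, 4], 4: [3]}
assert skew_closure(path, set()) == {1, 2, 3, 4}
```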
+
+## 1.2 Motivation and method
+
+This paper focuses on fractional versions of the standard and positive semidefinite zero forcing numbers. We first present the construction of fractional chromatic number found in [9] as an example of the method used to define a fractional graph parameter. A proper coloring of a graph $G$ is an assignment of colors to the vertices of $G$ such that adjacent vertices receive different colors. The chromatic number of $G$, denoted $\chi(G)$, is the least number of colors required to properly color $G$. We can generalize a proper coloring of $G$ using $c$ colors to a proper $r$-fold coloring with $c$ colors, or a $c:r$-coloring: from a total of $c$ colors, we assign $r$ colors to each vertex of $G$ such that adjacent vertices receive disjoint sets of colors. The $r$-fold chromatic number of $G$, denoted $\chi_r(G)$, is the smallest value of $c$ such that $G$ has a $c:r$-coloring; we emphasize that to compute $\chi_r(G)$ we fix $r$ and minimize the value of $c$. The fractional chromatic number of $G$ is then defined as
+
+$$ \chi_f(G) = \inf_{r \in \mathbb{N}} \left\{ \frac{\chi_r(G)}{r} \right\}. $$
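As a concrete instance, the 5-cycle satisfies $\chi(C_5) = 3$ but admits a 5:2-coloring, giving $\chi_2(C_5) = 5$ and hence $\chi_f(C_5) \le 5/2$ (equality in fact holds). The short check below verifies such a coloring; the color assignment is one standard choice.

```python
# A proper 2-fold coloring of the 5-cycle using 5 colors (a 5:2-coloring):
# vertex i receives the color pair {2i mod 5, (2i + 1) mod 5}.
coloring = {i: {(2 * i) % 5, (2 * i + 1) % 5} for i in range(5)}
edges = [(i, (i + 1) % 5) for i in range(5)]

# adjacent vertices receive disjoint color sets, each vertex gets 2 colors
assert all(coloring[u].isdisjoint(coloring[v]) for u, v in edges)
assert all(len(coloring[i]) == 2 for i in range(5))
```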
+---PAGE_BREAK---
+
+The interested reader is referred to [9] for an in-depth treatment of fractional chromatic number, as well as other fractional graph parameters. For this paper, defining an *r*-fold version of a graph parameter and then defining the fractional parameter as the infimum of the ratios of the *r*-fold parameter to *r* are key ideas.
+
+Suppose that $G$ is a simple graph on $n$ vertices with $V(G) = \{1, 2, \dots, n\}$. We say that a matrix $A \in \mathbb{C}^{nr \times nr}$ *r*-fits $G$ if, after partitioning $A$ as a block $n \times n$ matrix, block $A_{ii} = I_r$ for each $i$ and for all $i, j$ with $i \neq j$, block $A_{ij} = 0_{r \times r}$ if and only if $ij \notin E(G)$ [7]. While there may be many such matrices for a given graph, the following result shows that certain structure can be assumed.
+
+**Proposition 1.4.** Suppose that $A \in \mathbb{C}^{nr \times nr}$ *r*-fits a graph $G$ on $n$ vertices. We can construct a unitary matrix $U$ such that $U^*AU$ *r*-fits $G$ and if $ij \in E(G)$, then every entry of block $(U^*AU)_{ij}$ is nonzero.
+
+*Proof.* Assume that $V(G) = \{1, 2, \dots, n\}$ and partition $A = [A_{ij}]$ as an $n \times n$ block matrix with $A_{ij} \in \mathbb{C}^{r \times r}$. By definition, we have $A_{ii} = I_r$ for each $i \in [1:n]$ and for $i, j \in [1:n]$ with $i \neq j$ we have $A_{ij} = 0_{r \times r}$ if and only if $ij \notin E(G)$.
+
+For each $i \in [1:n]$, let $U_i \in \mathbb{C}^{r \times r}$ be a random unitary matrix with $U_i$ and $U_j$ chosen independently if $i \neq j$. Define $U = \text{blockdiag}(U_1, \dots, U_n)$ and let $C = U^*AU$. Partitioning $C$ conformally with $A$, we have $C_{ij} = U_i^* A_{ij} U_j$. Notice that $C_{ii} = U_i^* I_r U_i = I_r$ and for $i \neq j$ if $ij \notin E(G)$, then $C_{ij} = U_i^* 0_{r \times r} U_j = 0_{r \times r}$.
+
+Suppose $ij \in E(G)$ and consider the product $A_{ij}U_j$; note that $A_{ij} \neq 0_{r \times r}$. Since $U_j$ is random, with high probability no column of $U_j$ lies in ker $A_{ij}$, so no column of $A_{ij}U_j$ is a zero vector. Let $\mathbf{z}$ be any column of $A_{ij}U_j$ (so with high probability $\mathbf{z} \neq \mathbf{0}$) and consider $(U_i^*\mathbf{z})_k$. If $(U_i^*\mathbf{z})_k = 0$, then $\mathbf{z}$ is orthogonal to the $k^{th}$ column of $U_i$. Since $U_i$ is a random unitary matrix, with high probability this does not happen. We conclude that if $ij \in E(G)$, then with high probability no entry of $C_{ij}$ is zero. Thus $C$ *r*-fits $G$ and has the desired structure. $\square$
+
+Let $G$ be a graph and choose $r \in \mathbb{N}$. The *r*-blowup of $G$ is the graph $G^{(r)}$ constructed by replacing each vertex $u \in V(G)$ with an independent set of $r$ vertices, denoted $R_u$, and replacing each edge $uw \in E(G)$ by the edges of a complete bipartite graph on partite sets $R_u$ and $R_w$.¹ We call the set $R_u$ a *cluster*. Note that $V(G^{(r)}) = \bigcup_{u \in V(G)} R_u$ and if $uw \in E(G)$ then every vertex of $R_u$ is adjacent to every vertex of $R_w$ in $G^{(r)}$.
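The blowup construction translates directly into code; a minimal sketch, with the vertices of $G^{(r)}$ represented as pairs $(u, i)$:

```python
def blowup(adj, r):
    """Return the r-blowup of a graph given by an adjacency-list dict:
    each vertex u becomes the cluster R_u = {(u,0), ..., (u,r-1)} of
    independent vertices, and each edge uw becomes a complete bipartite
    graph between R_u and R_w."""
    big = {(u, i): [] for u in adj for i in range(r)}
    for u in adj:
        for w in adj[u]:                 # for each neighbor w of u
            for i in range(r):
                big[(u, i)].extend((w, j) for j in range(r))
    return big

# The 2-blowup of a single edge K2 is the complete bipartite graph K_{2,2}.
k2 = {0: [1], 1: [0]}
b = blowup(k2, 2)
assert len(b) == 4
assert set(b[(0, 0)]) == {(1, 0), (1, 1)}   # cluster R_0 is independent
```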
+
+Suppose that $A \in \mathbb{C}^{nr \times nr}$ is positive semidefinite and $r$-fits a graph $G$ on $n$ vertices with $V(G) = \{1, 2, \dots, n\}$. By Proposition 1.4, without loss of generality we can assume that if $ij \in E(G)$, then block $A_{ij}$ has no zero entries. Consider the graph of such a matrix $A$, namely, the simple graph with vertex set $\{1, 2, \dots, nr\}$ and with an edge between vertices $k$ and $l$ if $k \neq l$ and the entry in row $k$ and column $l$ of $A$ is nonzero. Since $A_{ii} = I_r$, the vertices of $G$ will map to independent sets (clusters) of size $r$; let $R_i$ denote the cluster associated with vertex $i \in V(G)$. Since each entry of $A_{ij}$ is nonzero for $ij \in E(G)$, every vertex in $R_i$ will be adjacent to every vertex in $R_j$, and vice versa. Hence the graph of $A$ is exactly $G^{(r)}$, the $r$-blowup of $G$.
+
+¹Given graphs $G$ and $H$, the *lexicographic product* of $G$ with $H$, denoted $G \times_L H$, is the graph with $V(G \times_L H) = V(G) \times V(H)$ and $(g,h)(i,j) \in E(G \times_L H)$ if $gi \in E(G)$ or if $g = i$ and $hj \in E(H)$. We can also define the $r$-blowup of $G$ as $G^{(r)} = G \times_L \overline{K_r}$, where $\overline{K_r}$ denotes the empty graph on $r$ vertices.
+---PAGE_BREAK---
+
+The positive semidefinite zero forcing number of a graph is an upper bound on the maximum positive semidefinite nullity of the graph, which equals the order of the graph minus its minimum positive semidefinite rank [3, 5]. The authors of [7] define an *r*-fold analogue of minimum positive semidefinite rank and use this new parameter to define fractional minimum positive semidefinite rank. A key element of this treatment is that the *r*-fold minimum positive semidefinite rank of a graph can be expressed as the rank of a positive semidefinite matrix that *r*-fits the graph [7, Theorem 3.9]. Our previous discussion allows us to assume that the graph of such a matrix is $G^{(r)}$.
+
+As mentioned in Section 1.1, playing the positive semidefinite zero forcing game can be interpreted as forcing zeros in a null vector of a positive semidefinite matrix whose graph is $G$, hence the connection to maximum positive semidefinite nullity and minimum positive semidefinite rank. Since the *r*-fold minimum positive semidefinite rank is defined in terms of matrices that *r*-fit the original graph, an *r*-fold analogue of positive semidefinite zero forcing number would naturally be associated with a game played on the graph of a positive semidefinite matrix that *r*-fits $G$. To this end, our *r*-fold forcing parameters will be defined in terms of forcing games played on $G^{(r)}$; while the interpretation of forcing zeros in a null vector is no longer valid, the spirit of the process remains.
+
+## 1.3 Definitions and notation
+
+Throughout this paper, all graphs are simple. We use $|G|$ to denote the order of a graph $G$, i.e., $|G| = |V(G)|$. If $G$ is a graph and $S \subseteq V(G)$, then $G[S]$ denotes the subgraph of $G$ induced by $S$, namely, the graph with $V(G[S]) = S$ and $E(G[S]) = \{uv \in E(G) : u, v \in S\}$. We use $G-S$ as shorthand for the induced subgraph $G[V(G) \setminus S]$. The neighborhood of a vertex $u \in V(G)$, denoted $N(u)$, is the set of vertices adjacent to $u$. The degree of a vertex $u$ is the number of neighbors of $u$, i.e., $|N(u)|$. A leaf is a vertex of degree one. We use $\delta(G)$ to denote the minimum of the degrees of the vertices of $G$.
+
+If $S$ and $T$ are disjoint sets, then $S \dot{\cup} T$ denotes the *disjoint union* of the sets. Note that $S \dot{\cup} T = S \cup T$; we use the $\dot{\cup}$ notation to emphasize that the sets are disjoint.
+
+Throughout, $B$ will be used to denote a set of blue vertices associated with a two-color forcing game. We emphasize that in a two-color forcing game the target color is blue. For three-color forcing games, we use two non-white colors: dark blue, which is our target color, and light blue. $B$ will be used to denote a set of colored vertices associated with a three-color forcing game. Given such a set $B$, we let $\mathcal{D}$ be the set of dark blue vertices and $\mathcal{L}$ be the set of light blue vertices. Since $\mathcal{D} \cap \mathcal{L} = \emptyset$, we have $B = \mathcal{D} \dot{\cup} \mathcal{L}$. While $B$ is a set, we will abuse notation and write $B = (\mathcal{D}, \mathcal{L})$ to emphasize the decomposition of $B$ into its component sets.
+
+## 1.4 Contribution and organization of the paper
+
+In Section 2 we introduce and examine the fractional positive semidefinite forcing number of a graph. An *r*-fold extension of the positive semidefinite zero forcing number, based on graph blowups, is introduced and used to define the fractional positive semidefinite forcing number of a graph $G$, denoted $Z_f^+(G)$. We also introduce a three-color forcing game played on $G$ called the fractional positive semidefinite forcing game and prove a main result of that section (cf. Theorem 2.21):
+---PAGE_BREAK---
+
+**Theorem.** For any graph $G$, $Z_f^+(G)$ is the minimum number of dark blue vertices in a (three-color) fractional positive semidefinite forcing set for $G$.
+
+This result allows us to determine the fractional positive semidefinite forcing number of a graph by playing the fractional positive semidefinite forcing game, as opposed to computation via the $r$-fold approach. We prove numerous results pertaining to fractional positive semidefinite forcing number and the structure of optimal fractional positive semidefinite forcing sets and apply these results to compute the fractional positive semidefinite forcing number for some common graph families. We also prove that any graph has an ordinary (two-color) minimum positive semidefinite zero forcing set such that the first force in the forcing process can be done without using the disconnect rule.
+
+In Section 3 we introduce a three-color forcing game that is equivalent to the skew zero forcing game. The three-color approach is used to prove numerous results pertaining to skew zero forcing. We define an $r$-fold analogue of the (standard) zero forcing game and use it to define the fractional forcing number of a graph, denoted $Z_f(G)$. A main result of that section shows that the skew zero forcing number and the fractional zero forcing number of a graph are the same (cf. Theorem 3.21):
+
+**Theorem.** For any graph $G$, $Z_f(G) = Z^-(G)$.
+
+We conclude the section by introducing an algorithm that is used to characterize graphs that satisfy $Z^-(G) = 0$.
+
+## 2 Fractional positive semidefinite forcing
+
+In this section, we define an $r$-fold analogue of the positive semidefinite zero forcing game and use this to define the $r$-fold and fractional positive semidefinite forcing numbers of a graph $G$. We investigate structural properties of $r$-fold positive semidefinite forcing sets and use these properties to develop a simple three-color game to directly compute the fractional positive semidefinite forcing number of a graph. Properties of the fractional positive semidefinite forcing number are also investigated.
+
+### 2.1 The $r$-fold positive semidefinite forcing game and fractional positive semidefinite forcing number
+
+Let $G$ be a graph and for some $r \in \mathbb{N}$ consider the following $r$-fold positive semidefinite forcing game, which is a two-color forcing game played on $G^{(r)}$. As in any forcing game, we initially color some set $B \subseteq V(G^{(r)})$ blue and then try to force $G^{(r)}$ through repeated application of the following $r$-fold positive semidefinite forcing rule:
+
+**Definition 2.1** ($r$-fold positive semidefinite forcing rule). Let $B_t$ denote the set of blue vertices of $G^{(r)}$ at some step $t$ of the $r$-fold positive semidefinite forcing process² and let $W_1, \dots, W_h$ denote
+
+²We caution the reader that a chronological list of forces is not a propagating process and $B_t$ here has different meaning than that used in the study of propagation.
+---PAGE_BREAK---
+
+the sets of vertices of the connected components of $G^{(r)} - B_t$. If $u \in B_t$ and $|N(u) \cap W_i| \le r$, then $u$ can force $N(u) \cap W_i$, i.e., all white neighbors of $u$ in $G^{(r)}[B_t \cup W_i]$ can be simultaneously colored blue.
+
+The *r*-fold positive semidefinite forcing game can be thought of as a generalization of the positive semidefinite zero forcing game: instead of forcing one white neighbor in a component after applying the disconnect rule, a vertex forces up to *r* white neighbors in a component. This is a positive semidefinite analog of the *r*-forcing process described in [2], but we apply this process only to the blowup of the graph.
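Combining the disconnect rule with the bound of at most $r$ white neighbors per component gives a direct sketch of Definition 2.1. The test graph below is the 2-blowup of a single edge, an illustrative assumption.

```python
def r_fold_psd_closure(adj, blue, r):
    """Definition 2.1 sketch: a blue vertex with at most r white neighbors
    inside one component of the white subgraph forces all of them at once."""
    blue = set(blue)
    while True:
        white = set(adj) - blue
        comps, seen = [], set()
        for s in white:                  # components of the white subgraph
            if s in seen:
                continue
            comp, stack = set(), [s]
            while stack:
                v = stack.pop()
                if v in comp:
                    continue
                comp.add(v)
                seen.add(v)
                stack.extend(x for x in adj[v] if x in white)
            comps.append(comp)
        force = next((set(adj[u]) & comp for comp in comps for u in blue
                      if 0 < len(set(adj[u]) & comp) <= r), None)
        if force is None:
            return blue
        blue |= force

# G = K2, so G^(2) = K_{2,2} with clusters {a1,a2} and {b1,b2}; the single
# blue vertex a1 (a One cluster) forces the whole blowup when r = 2.
g2 = {"a1": ["b1", "b2"], "a2": ["b1", "b2"],
      "b1": ["a1", "a2"], "b2": ["a1", "a2"]}
assert r_fold_psd_closure(g2, {"a1"}, 2) == {"a1", "a2", "b1", "b2"}
```

With $r = 1$ this reduces to the positive semidefinite zero forcing game played on $G$ itself.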
+
+If $G^{(r)}$ can be forced, then the initial set of blue vertices is called an *r*-fold positive semidefinite (PSD) forcing set for $G$. An *r*-fold PSD forcing set $B$ is minimum if there is no *r*-fold PSD forcing set of smaller cardinality than $B$. The cardinality of a minimum *r*-fold PSD forcing set is called the *r*-fold positive semidefinite forcing number of $G$ and is denoted $Z_{[r]}^{+}(G)$. We define the fractional positive semidefinite forcing number of a graph $G$ as
+
+$$ Z_f^+(G) = \inf_{r \in \mathbb{N}} \left\{ \frac{Z_{[r]}^+(G)}{r} \right\}. $$
+
+Note that $G^{(1)} = G$ and a 1-fold PSD forcing set is exactly a positive semidefinite zero forcing set. Any positive semidefinite zero forcing set $B$ can be converted into an *r*-fold PSD forcing set (for $r \ge 2$) by the following rule: if $u \in B$, then color every vertex in $R_u \in V(G^{(r)})$ blue. This creates an *r*-fold PSD forcing set that contains $r \cdot Z^+(G)$ blue vertices, so $Z_{[r]}^+(G) \le r \cdot Z^+(G) = r \cdot Z_{[1]}^+(G)$. We conclude that
+
+$$ Z_f^+(G) = \inf_{r \in \mathbb{N}} \left\{ \frac{Z_{[r]}^+(G)}{r} \right\} = \inf_{r \ge 2} \left\{ \frac{Z_{[r]}^+(G)}{r} \right\}. $$
+
+## 2.2 Global interpretation of *r*-fold positive semidefinite forcing
+
+Suppose that we are playing the *r*-fold positive semidefinite forcing game on $G^{(r)}$, where $r \ge 2$. So far, we have viewed the game from a local perspective while generally ignoring the global structure of the blowup, namely, clusters joined by edges. Shifting to a global view gives insight into the mechanics of the forcing game. In this section, we assume that $r \ge 2$.
+
+Three specific types of cluster are of particular interest. An *All cluster* is a cluster in which all vertices are colored blue. A *One cluster* is a cluster in which exactly one vertex is colored blue and the rest are colored white. A *None cluster* is a cluster in which all vertices are colored white. We define an *All-One-None (minimum) r-fold positive semidefinite forcing set* $B$ for a graph $G$ to be a (minimum) *r*-fold PSD forcing set in which each cluster of $G^{(r)}$ is an All, One, or None cluster when $G^{(r)}$ is colored with $B$. For the sake of brevity, we will hereafter shorten All-One-None to AON.
+
+We say that a cluster $R_u$ is forced into when any vertex in $R_u$ is forced. Once a cluster changes from a non-All to an All cluster, we say that the cluster has been forced.
+
+**Observation 2.2.** Any cluster that is forced into becomes an All cluster after the forcing operation, so forcing into a cluster and forcing the cluster are equivalent.
+---PAGE_BREAK---
+
+**Remark 2.3.** At some stage of the $r$-fold positive semidefinite forcing process using a particular chronological list of forces, let $B_t$ denote the set of blue vertices in $G^{(r)}$. Assume that $R_u \not\subseteq B_t$ for some $u \in V(G)$. Suppose that the next force in the process is done by $x \in R_u$, so $x$ has at most $r$ white neighbors. Since $R_u \not\subseteq B_t$, there exists at least one white vertex $w \in R_u$. Because $x$ and $w$ have the same neighbors and $w$ is white, all white neighbors of $x$ are connected through $w$ and lie in the same connected component. Hence, after $x$ forces, all neighbors of every vertex in $R_u$ must be blue, so without loss of generality $R_u$ can be forced in the next step of the forcing process.
+
+This remark yields a new definition.
+
+**Definition 2.4.** If at any stage of the $r$-fold positive semidefinite forcing process a vertex in any partially-filled cluster performs a force, then that cluster can itself be forced at the next forcing step. We refer to this process as *backforcing*.
+
+Remark 2.3 asserts that requiring backforcing does not affect whether or not a set is an $r$-fold PSD forcing set, so we will always assume that backforcing is used when performing the $r$-fold positive semidefinite forcing process. As we will see, this assumption is quite powerful.
+
+**Definition 2.5.** Let $R_{u_1}, R_{u_2}, \dots, R_{u_m}$ be “partially-filled” clusters (i.e., each is neither an All nor a None cluster) in $G^{(r)}$ that together contain $pr + q$ blue vertices for some $0 \le p < m$ and $0 \le q < r$. We define the process of *consolidation* as follows: use $pr$ of the blue vertices to convert $R_{u_1}, \dots, R_{u_p}$ into All clusters and move the remaining $q$ blue vertices into $R_{u_{p+1}}$.
+
+Consolidation allows us to gather blue vertices spread among many clusters into as few clusters as possible.
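
As a concrete illustration (with hypothetical counts), suppose $r = 4$ and that $m = 3$ partially-filled clusters contain $2$, $3$, and $3$ blue vertices, respectively. The total is

$$ 2 + 3 + 3 = 8 = 2 \cdot 4 + 0, $$

so $p = 2$ and $q = 0$: consolidation converts $R_{u_1}$ and $R_{u_2}$ into All clusters and leaves $R_{u_3}$ completely white, i.e., a None cluster.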
+
+Our goal for the remainder of this section is to use these tools and definitions to develop an equivalent characterization of the $r$-fold positive semidefinite forcing game that relies only upon a particular type of AON $r$-fold PSD forcing set.
+
+**Remark 2.6.** Suppose that $r \ge 3$. If an $r$-fold PSD forcing set $B$ creates a global AON structure in $G^{(r)}$, then from a global perspective exactly one cluster is forced at each step of the forcing process. This is because the vertex that performs the force can only force into One or None clusters, and if this vertex were adjacent to more than one of these (in any combination), then it would have more than $r$ white neighbors and could not actually perform a force.
+
+The case when $r = 2$ is slightly different. In this case, it is possible for a vertex to force two
+One clusters at the same forcing step (cf. Example 2.11 below). Every 2-fold PSD forcing set is
+automatically an AON set, so we cannot claim that if $G^{(r)}$ has a global AON structure, then exactly
+one cluster will be forced at the next forcing step. However, Theorem 2.7 uses consolidation to
+show that even though not every AON PSD forcing set has this property, there always exist an
+AON minimum PSD forcing set and a forcing process that do.
+
+**Theorem 2.7.** Let $G$ be a graph and suppose $r \ge 2$. Then there exists an AON minimum $r$-fold
+PSD forcing set for $G$. For all $r \ge 3$, exactly one cluster of $G^{(r)}$ will be forced at each step of any
+forcing process that begins with any such set. For $r = 2$, there exists a forcing process for the set
+constructed such that exactly one cluster of $G^{(r)}$ is forced at each step.
+---PAGE_BREAK---
+
+*Proof.* We first consider the case where $r \ge 3$. Let $B$ be a minimum $r$-fold PSD forcing set for $G$ and assume that $B$ is not AON. Write a chronological list of the forces performed using the forcing set $B$, assuming the use of backforcing, and let $B_t$, $t \ge 0$, denote the set of blue vertices after step $t$ of this forcing process, where $B_0 = B$.
+
+Suppose that a vertex $x \in R_u$ performs a force at step $\ell \ge 1$ of the forcing process and $R_u \not\subseteq B_{\ell-1}$. By Observation 2.2, $R_u$ was not forced into at any step prior to step $\ell$. Since we assume backforcing and $R_u$ contains at least one white vertex, $R_u$ was not used to force any other cluster prior to step $\ell$, and $R_u$ will be forced in step $\ell+1$. Thus if $R_u$ is not a One cluster, we can uncolor every blue vertex in $R_u$ except for $x$ without changing the ability of $x$ to force or the ability of $R_u$ to be backforced at step $\ell+1$; since $R_u$ is not involved in any forces prior to step $\ell$, we can make this change in the original set $B$ and obtain a forcing set with fewer blue vertices, contradicting the assumption that $B$ was a minimum forcing set. Thus every cluster in a minimum $r$-fold PSD forcing set that is not an All cluster and contains a vertex that performs a force must be a One cluster.
+
+Now, suppose that at step $\ell \ge 1$ we have $x \to W \subseteq (R_{u_1} \cup R_{u_2} \cup \cdots \cup R_{u_m})$ for some $m \ge 2$, where each $R_{u_j}$ contains at least one white vertex. Since $x$ is performing a force, it has at most $r$ white neighbors in the component containing $\cup_{j=1}^m R_{u_j}$, so there are at least $r(m-1)$ blue vertices in $\cup_{j=1}^m R_{u_j}$. By Observation 2.2, each cluster $R_{u_j}$ is an All cluster after step $\ell$, and no $R_{u_j}$ was forced into prior to step $\ell$. Since we assume backforcing and each of the $R_{u_j}$ clusters contains at least one white vertex, none of the $R_{u_j}$ clusters contains a vertex that was used to force at a step prior to step $\ell$. Analogous to Remark 2.3, removing blue vertices from any of the $R_{u_j}$ will not affect the application of the disconnect property, as each $R_{u_j}$ contains at least one white vertex. Similarly, adding blue vertices to convert an $R_{u_j}$ into an All cluster may make available additional disconnects (which we do not use), but these would not affect any previous forces. Therefore, we can consolidate the (at least $r(m-1)$) blue vertices in $\cup_{j=1}^m R_{u_j}$ without affecting the ability to perform any previous force.
+
+Without loss of generality, suppose that $R_{u_1}, \dots, R_{u_{m-1}}$ become All clusters after the consolidation and any remaining blue vertices are left in $R_{u_m}$. After consolidation, the new force at step $\ell$ will be $x \to R_{u_m}$; after this point, the state of the system is the same as it would have been had we not consolidated (i.e., every $R_{u_j}$ is an All cluster), so future forces are unaffected by consolidation. Furthermore, after consolidation, exactly one cluster ($R_{u_m}$) is forced at step $\ell$. Since the consolidation process does not affect any of the forces before or after the force at step $\ell$, we are free to perform the consolidation on the original set $B$ to obtain a new minimum $r$-fold PSD forcing set $\tilde{B}$ and the sequence of vertices that perform forces remains unchanged.
+
+Note that since $\tilde{B}$ is minimum, $R_{u_m}$ must necessarily be a None cluster: if not, then we could remove the blue vertices in $R_{u_m}$ and obtain a valid forcing set with fewer blue vertices, contradicting that $\tilde{B}$ is minimum.
+
+By repeated application of the consolidation process, we are able to convert every non-One cluster into an All cluster or a None cluster. By Remark 2.6, any AON forcing process for $r \ge 3$ must necessarily consist of forcing only one cluster at each step, which proves the claim for $r \ge 3$.
+
+Now, suppose that $r=2$. Every minimum 2-fold PSD forcing set for $G$ is automatically an AON set. Suppose that, at step $\ell \ge 1$ of the forcing process, more than one cluster must be forced. Since any vertex can force at most 2 of its neighbors, it must be the case that two One clusters are
+---PAGE_BREAK---
+
+forced at this step. For the reasons described in the $r \ge 3$ case, we can consolidate these two One clusters into one All cluster and one None cluster without affecting any previous or future forces; after this consolidation, only one cluster (the None) is forced at step $\ell$. Thus we can modify our original minimum forcing set (as before) and the result follows for the $r = 2$ case (using the forcing process to which consolidation was applied). □
+
+We call the type of AON minimum *r*-fold PSD forcing set guaranteed to exist by Theorem 2.7 an *optimal AON r-fold PSD forcing set*. We emphasize that an optimal AON *r*-fold PSD forcing set is minimum by definition, and given an optimal AON *r*-fold PSD forcing set there is a corresponding forcing process in which exactly one cluster is forced at each step. Further, the set of blue vertices at each step of the forcing process associated with an optimal AON *r*-fold PSD forcing set will always create a global AON structure in $G^{(r)}$.
+
+Suppose that $B$ is an AON $r$-fold PSD forcing set for a graph $G$ and color $G^{(r)}$ with $B$. We use $a(B)$ to denote the number of All clusters in $G^{(r)}$ and $\ell(B)$ to denote the number of One clusters in $G^{(r)}$. This implies that $|B| = r \cdot a(B) + \ell(B)$. This new terminology yields a corollary to Theorem 2.7.
+
+**Corollary 2.8.** For every graph $G$ and $r \ge 2$, if $B$ is any optimal AON $r$-fold PSD forcing set for $G$, then $Z_{[r]}^+(G) = |B| = r \cdot a(B) + \ell(B)$.
+
+**Definition 2.9.** Let $r, s \ge 2$ with $s \neq r$ and suppose that $B$ is an AON $r$-fold PSD forcing set for $G$. Copy the AON structure of $G^{(r)}$ when colored with $B$ onto $G^{(s)}$ to create a new AON set of blue vertices of cardinality $s \cdot a(B) + \ell(B)$. This process is called *replication*.
+
+**Remark 2.10.** It is possible that replicating a 2-fold PSD forcing set $B$ for $G$ onto $G^{(s)}$ for some $s > 2$ may not yield a valid forcing set; this would occur when, at some step of the forcing process on $G^{(2)}$, two One clusters are forced simultaneously (see Example 2.11). However, if $B$ is an optimal AON 2-fold PSD forcing set, then Theorem 2.7 guarantees that there is a forcing process in which exactly one force occurs at each step, so replication will yield a valid forcing set. Thus if $B'$ is obtained by replicating an optimal AON $r$-fold PSD forcing set onto $G^{(s)}$ for some $r, s \ge 2$ with $s \neq r$, then $B'$ is an AON $s$-fold PSD forcing set for $G$ and the same forcing process used on $G^{(r)}$ will work with $B'$. As we see in Example 2.12, however, $B'$ may not be minimum and hence not optimal.
+
+Figure 4: AON 2-fold PSD forcing sets for $K_3$
+
+**Example 2.11.** Consider the (minimum) 2-fold PSD forcing sets for $K_3$ shown in Figure 4. For simplicity, the edges in the figure represent the complete bipartite graphs between the clusters at their endpoints. The first forcing step in Figure 4a would consist of forcing two of the One clusters simultaneously. This set is no longer a forcing set when replicated onto $K_3^{(s)}$ for $s \ge 3$, as each of
+---PAGE_BREAK---
+
+the blue vertices will have too many white neighbors to perform a force. The optimal AON PSD forcing set shown in Figure 4b, however, can be replicated successfully, as only one cluster must be forced at any step of the forcing process.
+
+Figure 5: Optimal AON *r*-fold PSD forcing sets
+
+**Example 2.12.** Suppose that we have the complete bipartite graph $K_{5,2}$ and let $G$ be the graph formed by attaching one leaf to each of the vertices in the partite set containing five vertices (Figure 5a). Consider the optimal AON *r*-fold PSD forcing sets for $G$ shown in Figure 5. When $r=2$ (Figure 5b), the (unique) minimum PSD forcing set creates two All clusters, so $Z_{[2]}^{+}(G) = 4$. When $r=3$ (Figure 5c), the (unique) minimum PSD forcing set creates five One clusters, so $Z_{[3]}^{+}(G) = 5$. In this case, replicating either set onto the other blowup will generate a forcing set that is not minimum, hence not optimal.
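
The failure of minimality under replication in Example 2.12 can be read off from the cardinalities alone: replicating the $r = 2$ set (two All clusters, no One clusters) onto $G^{(3)}$, and the $r = 3$ set (no All clusters, five One clusters) onto $G^{(2)}$, gives

$$ 3 \cdot 2 + 0 = 6 > 5 = Z_{[3]}^{+}(G) \qquad \text{and} \qquad 2 \cdot 0 + 5 = 5 > 4 = Z_{[2]}^{+}(G). $$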
+
+We have shown that the *r*-fold PSD forcing number of a graph can be computed using an optimal AON *r*-fold PSD forcing set. We now prove further properties of AON *r*-fold PSD forcing sets and use these results to provide an alternate definition of the fractional PSD forcing number.
+
+**Lemma 2.13.** Let $G$ be a graph on $n$ vertices and choose $r \ge n$. For any AON $r$-fold PSD forcing set $B$ there exists an AON $r$-fold PSD forcing set $\tilde{B}$ with $|\tilde{B}| \le |B|$, $a(\tilde{B}) = a(B)$, and $\ell(\tilde{B}) < n$.
+
+*Proof.* If $a(B) = n$, then clearly $\ell(B) < n$, so choose $\tilde{B} = B$. Now, assume that $a(B) < n$. If $r \ge 3$, then by Remark 2.6 only one cluster is forced at each step of any forcing process. If $r = 2$, then $n \le r$ gives $n \le 2$; since $a(B) < n$ we must have $r = n = 2$, and with only two clusters again one cluster is forced at each forcing step. For any $r$, if the first cluster forced with some forcing process is completely white, then $\ell(B) < n$ and we let $\tilde{B} = B$. If not, let $\tilde{B}$ be the set obtained by replacing the first cluster forced with a None cluster. $\square$
+
+**Lemma 2.14.** Let $G$ be a graph on $n$ vertices and fix $r \ge n$. Let $B$ be an optimal AON $r$-fold PSD forcing set for $G$ and let $B'$ be an AON $r$-fold PSD forcing set for $G$. Then $a(B) \le a(B')$.
+
+*Proof.* By Lemma 2.13, we can assume without loss of generality that $\ell(B') < n$. Since $B$ is optimal, it is minimum, so $r \cdot a(B) + \ell(B) = |B| \le |B'| = r \cdot a(B') + \ell(B')$. Dividing through by $r$ and manipulating this inequality yields
+
+$$a(B) - a(B') \le \frac{\ell(B') - \ell(B)}{r} < \frac{n}{r} \le 1.$$
+
+---PAGE_BREAK---
+
+Since $a(B) - a(B')$ is an integer, we must have $a(B) - a(B') \le 0$, which proves the claim. □
+
+**Corollary 2.15.** Let $G$ be a graph on $n$ vertices and fix $r \ge n$. If $B$ and $B'$ are optimal AON $r$-fold PSD forcing sets for $G$, then $a(B) = a(B')$.
+
+Thus for a fixed “large enough” $r$, every optimal AON $r$-fold PSD forcing set for $G$ must contain the same number of All clusters (and, consequently, One clusters). Of particular interest is the case $r=n=|G|$. We define $a_{\star}^{+}(G)$ to be the unique number of All clusters created in $G^{(n)}$ by any optimal AON $n$-fold PSD forcing set for $G$, and define $\ell_{\star}^{+}(G)$ to be the unique number of One clusters created in this manner. Our next result shows that once $r \ge n$, increasing $r$ will not change the number of All clusters created by an optimal AON $r$-fold PSD forcing set (i.e., the number will remain the constant $a_{\star}^{+}(G)$).
+
+**Proposition 2.16.** Let $G$ be a graph on $n$ vertices. For all $r \ge n$, if $B$ is an optimal AON $r$-fold PSD forcing set for $G$, then $a(B) = a_{\star}^{+}(G)$.
+
+*Proof.* Let $\tilde{B}$ be the AON $n$-fold PSD forcing set formed by replicating $B$ onto $G^{(n)}$. By Lemma 2.14, $a_{\star}^{+}(G) \le a(\tilde{B}) = a(B)$.
+
+Similarly, let $B'$ be the AON $r$-fold PSD forcing set formed by replicating any optimal AON $n$-fold PSD forcing set onto $G^{(r)}$. By Lemma 2.14, $a(B) \le a(B') = a_{\star}^{+}(G)$, and thus equality holds. □
+
+Proposition 2.16 yields an elegant description of the $r$-fold positive semidefinite forcing number for $r \ge n$, which we state as a corollary.
+
+**Corollary 2.17.** Let $G$ be a graph on $n$ vertices. For all $r \ge n$, $Z_{[r]}^{+}(G) = r \cdot a_{\star}^{+}(G) + \ell_{\star}^{+}(G)$. Additionally,
+
+$$ \lim_{r \to \infty} \frac{Z_{[r]}^{+}(G)}{r} = a_{\star}^{+}(G). $$
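
The limit follows directly from the first statement, since the One-cluster contribution is fixed:

$$ \frac{Z_{[r]}^{+}(G)}{r} = a_{\star}^{+}(G) + \frac{\ell_{\star}^{+}(G)}{r} \longrightarrow a_{\star}^{+}(G) \quad \text{as } r \to \infty. $$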
+
+Before we can prove the final result of this section, which ties the fractional positive semidefinite forcing number into the machinery developed in this section, we require one final utility result.
+
+**Lemma 2.18.** Let $G$ be a graph on $n$ vertices and choose $r \ge 2$. Then for any optimal AON $r$-fold PSD forcing set $B$, $\frac{|B|}{r} \ge a_{\star}^{+}(G)$.
+
+*Proof.* First, suppose that $2 \le r < n$. Let $\tilde{B}$ be the AON $n$-fold PSD forcing set obtained by replicating $B$ onto $G^{(n)}$. Then $a(B) = a(\tilde{B})$ and $\ell(B) = \ell(\tilde{B})$, so
+
+$$ \frac{|B|}{r} = a(B) + \frac{\ell(B)}{r} = a(\tilde{B}) + \frac{\ell(\tilde{B})}{r} \geq a(\tilde{B}) + \frac{\ell(\tilde{B})}{n} = \frac{|\tilde{B}|}{n}. $$
+
+Let $B'$ be any optimal AON $n$-fold PSD forcing set for $G$. Since $B'$ is optimal, it is minimum, hence $|\tilde{B}| \ge |B'|$. Therefore,
+
+$$ \frac{|B|}{r} \ge \frac{|\tilde{B}|}{n} \ge \frac{|B'|}{n} = a_{\star}^{+}(G) + \frac{\ell_{\star}^{+}(G)}{n} \ge a_{\star}^{+}(G), $$
+---PAGE_BREAK---
+
+which proves the claim for $r < n$.
+
+If $r \ge n$, then Proposition 2.16 shows that $|B| = r \cdot a_{\star}^{+}(G) + \ell_{\star}^{+}(G)$ and the conclusion follows. $\square$
+
+We conclude this section with an alternate characterization of the fractional positive semidefinite forcing number.
+
+**Theorem 2.19.** For every graph $G$,
+
+$$Z_f^+(G) = a_\star^+(G).$$
+
+*Proof.* Recall that $Z_f^+(G) = \inf_{r \ge 2} \left\{ \frac{Z_{[r]}^+(G)}{r} \right\}$. By Corollary 2.17, $Z_f^+(G) \le a_\star^+(G)$.
+
+Now fix any $r \ge 2$ and let $B$ be an optimal AON $r$-fold PSD forcing set for $G$. Then by Corollary 2.8 and Lemma 2.18,
+
+$$\frac{Z_{[r]}^{+}(G)}{r} = \frac{|B|}{r} \geq a_{\star}^{+}(G),$$
+
+and thus equality holds. $\square$
+
+This result shows that the fractional positive semidefinite forcing number of a graph is always a nonnegative integer – hence, it is fractional in name (and construction) only.
+
+## 2.3 Three-color interpretation of fractional positive semidefinite forcing
+
+Motivated by the AON interpretation of the $r$-fold positive semidefinite forcing game, we consider a three-color forcing game that allows us to compute the fractional positive semidefinite forcing number for any graph without playing the $r$-fold game.
+
+Let $G$ be a graph and consider the following *fractional positive semidefinite forcing game*, which is a three-color forcing game that uses the colors dark blue (target), light blue, and white. Assign to each vertex of $G$ one of these colors and let $\mathcal{B} = (\mathcal{D}, \mathcal{L})$, where $\mathcal{D}$ denotes the set of dark blue vertices and $\mathcal{L}$ denotes the set of light blue vertices.³ We repeatedly apply the following *fractional positive semidefinite forcing rule*:
+
+**Definition 2.20** (fractional positive semidefinite forcing rule). Let $\mathcal{B}_t = (\mathcal{D}_t, \mathcal{L}_t)$ denote the set of colored vertices of a graph $G$ at some step of the fractional positive semidefinite forcing process and let $W_1, \dots, W_h$ denote the sets of vertices of the connected components of $G - \mathcal{D}_t$. If $u \in (\mathcal{D}_t \cup (\mathcal{L}_t \cap W_i))$ and $w \in W_i$ is the only light blue or white neighbor of $u$ in $G[\mathcal{D}_t \cup W_i]$, then $u$ can force $w$, i.e., $w$ can be colored dark blue.
+
+Loosely speaking, we apply the disconnect rule from positive semidefinite zero forcing using the dark blue vertices of $G$, and then in each reconstructed component any dark or light blue vertex can force its only light blue or white neighbor. As usual, the goal of this forcing game is to choose the initial set $\mathcal{B}$ in such a way that by repeated application of this rule the entire graph can be
+
+³Recall that this is equivalent to writing $\mathcal{B} = \mathcal{D} \dot{\cup} \mathcal{L}$; see also Section 1.3.
+---PAGE_BREAK---
+
+forced (i.e., turned dark blue). If $G$ can be forced, then we say that the initial set $\mathcal{B}$ is a fractional positive semidefinite (PSD) forcing set for $G$. The (three-color) fractional positive semidefinite forcing number of $G$, denoted $\hat{Z}_f^+(G)$, is then defined as
+
+$$ \hat{Z}_f^+(G) = \min \{|\mathcal{D}| : (\mathcal{D}, \mathcal{L}) \text{ is a fractional PSD forcing set for } G, \text{ for some } \mathcal{L}\}. $$
+
+We say that a fractional PSD forcing set $\mathcal{B} = (\mathcal{D}, \mathcal{L})$ for $G$ is *optimal* if $|\mathcal{D}| = \hat{Z}_f^+(G)$ and no fractional PSD forcing set for $G$ with $|\mathcal{D}| = \hat{Z}_f^+(G)$ has fewer than $|\mathcal{L}|$ light blue vertices. We use $\hat{\ell}_\star^+(G)$ to denote the number of light blue vertices in any optimal fractional PSD forcing set for $G$, i.e., $\hat{\ell}_\star^+(G) = |\mathcal{L}|$.
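
Definition 2.20 can be simulated directly. The sketch below (a rough illustration; all function and variable names are ours, not from the text) runs the fractional positive semidefinite forcing process to a fixed point and reports whether a given pair $(\mathcal{D}, \mathcal{L})$ forces the whole graph:

```python
# A rough sketch (names are ours, not the paper's) of the fractional PSD
# forcing game from Definition 2.20.  The graph is a dict mapping each
# vertex to the set of its neighbors; `dark` and `light` play the roles
# of the sets D and L.

def components_minus(adj, removed):
    """Vertex sets of the connected components of G - removed."""
    seen, comps = set(removed), []
    for start in adj:
        if start in seen:
            continue
        comp, stack = set(), [start]
        while stack:
            v = stack.pop()
            if v in seen:
                continue
            seen.add(v)
            comp.add(v)
            stack.extend(u for u in adj[v] if u not in seen)
        comps.append(comp)
    return comps

def fractional_psd_forces(adj, dark, light):
    """Apply the fractional PSD forcing rule until no force is possible."""
    dark, light = set(dark), set(light)
    changed = True
    while changed:
        changed = False
        for W in components_minus(adj, dark):
            # u may force if it is dark blue, or light blue and inside W
            for u in list(dark) + [v for v in light if v in W]:
                targets = adj[u] & W  # light blue or white neighbors of u in G[D ∪ W]
                if len(targets) == 1:
                    w = targets.pop()
                    dark.add(w)       # w is forced: color it dark blue
                    light.discard(w)
                    changed = True
                    break             # recompute components after each force
            if changed:
                break
    return dark

def is_fractional_psd_forcing_set(adj, dark, light):
    """True iff repeated forcing turns every vertex of G dark blue."""
    return fractional_psd_forces(adj, dark, light) == set(adj)
```

For instance, on $K_3$ with vertex set $\{0, 1, 2\}$, the pair $(\{0\}, \{1\})$ forces the whole graph, while on a path an empty dark set together with one light blue endpoint suffices (cf. Examples 2.37 and 2.38).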
+
+The process of backforcing described for the $r$-fold positive semidefinite forcing game applies to the fractional positive semidefinite forcing game, albeit with a three-color modification. After a light blue vertex $u$ performs a force, all of its neighbors must necessarily be dark blue, and so we can backforce $u$ at the next forcing step. As in the $r$-fold case, backforcing is a powerful technique: once a light blue vertex is able to force its only non-dark-blue neighbor, it can itself then be forced in the next step.
+
+The observant reader will notice that we have defined “fractional positive semidefinite forcing number” twice: here, and in Section 2.1. The final result of this section shows that this is not an error: the parameter $Z_f^+$, defined via an $r$-fold two-color game, is equal to the parameter $\hat{Z}_f^+$, which is defined via a three-color game.
+
+**Theorem 2.21.** For any graph $G$ on $n$ vertices,
+
+$$ Z_f^+(G) = \hat{Z}_f^+(G). $$
+
+*Proof.* Let $B$ be an optimal AON $n$-fold PSD forcing set for $G$. By Proposition 2.16 and Theorem 2.19, we have $a(B) = a_\star^+(G) = Z_f^+(G)$. Let $\mathcal{B} = (\mathcal{D}, \mathcal{L})$ be an optimal fractional PSD forcing set for $G$, and note that optimality implies that $|\mathcal{D}| = \hat{Z}_f^+(G)$.
+
+Color $G^{(n)}$ with $B$. Color $G$ with $\tilde{\mathcal{B}} = (\tilde{\mathcal{D}}, \tilde{\mathcal{L}})$, defined as follows: let $\tilde{\mathcal{D}} = \{u : R_u$ is an All cluster in $G^{(n)}\}$ and let $\tilde{\mathcal{L}} = \{u : R_u$ is a One cluster in $G^{(n)}\}$. Since $B$ is an optimal AON $n$-fold PSD forcing set, exactly one cluster is forced at each step of the forcing process using $B$, and $G^{(n)}$ can be forced. Further, backforcing is applied to One clusters in $G^{(n)}$, and One clusters correspond to light blue vertices, to which backforcing can also be applied. Therefore, the forcing process (from a global viewpoint) used on $G^{(n)}$ can be used to force $G$, so $\tilde{\mathcal{B}}$ is a fractional PSD forcing set for $G$ and $\hat{Z}_f^+(G) \le |\tilde{\mathcal{D}}| = a(B) = Z_f^+(G)$.
+
+Now, color $G$ with $\mathcal{B}$. Color $G^{(n)}$ as follows, and let $B'$ be the resulting set of blue vertices: if $u \in \mathcal{D}$, then let $R_u$ be an All cluster, and if $u \in \mathcal{L}$, then let $R_u$ be a One cluster. Since $\mathcal{B}$ is a fractional PSD forcing set, $B'$ is an AON $n$-fold PSD forcing set for $G$ (with essentially the same forcing process). By Lemma 2.14, we have $a(B) \le a(B')$, so $Z_f^+(G) = a(B) \le a(B') = |\mathcal{D}| = \hat{Z}_f^+(G)$ and thus equality holds. $\square$
+
+**Corollary 2.22.** For any graph $G$, $\hat{\ell}_\star^+(G) = \ell_\star^+(G)$.
+
+As a consequence of these results, the $\hat{Z}_f^+$ and $\hat{\ell}_\star^+$ notations will be suppressed in favor of the simpler $Z_f^+$ and $\ell_\star^+$.
+---PAGE_BREAK---
+
+In contrast to the process of computing the values of fractional versions of general graph pa-
+rameters, computing the fractional positive semidefinite forcing number of a graph does not require
+any explicit knowledge of the *r*-fold analog. If knowledge of $Z_f^+$ is all that is of interest, one can
+bypass the *r*-fold game entirely and opt to play the fractional positive semidefinite forcing game
+instead.
+
+## 2.4 Results for fractional positive semidefinite forcing number
+
+The fractional positive semidefinite forcing game allows us to easily prove many interesting properties of the fractional positive semidefinite forcing number.
+
+**Remark 2.23.** Any isolated vertex in $G$ must be colored dark blue. Thus if $\delta(G) = 0$, then $Z_f^+(G) \ge 1$.
+
+**Observation 2.24.** If $G$ has connected components $\{G_i\}_{i=1}^m$, then $Z_f^+(G) = \sum_{i=1}^m Z_f^+(G_i)$ and $\ell_\star^+(G) = \sum_{i=1}^m \ell_\star^+(G_i)$.
+
+In light of this observation, we are able to focus on connected graphs.
+
+**Remark 2.25.** $Z_f^+(G) \le Z^+(G) \le Z(G)$. The first inequality holds because any positive semidefinite zero forcing set for $G$ can be thought of as a fractional PSD forcing set for $G$ with $Z^+(G)$ dark blue vertices, and the second inequality is well-known (cf. [5]).
+
+**Proposition 2.26.** Let $G$ be a graph and let $\mathcal{B} = (\mathcal{D}, \mathcal{L})$ be a fractional PSD forcing set for $G$. Then $Z^+(G) \le |\mathcal{B}| = |\mathcal{D}| + |\mathcal{L}|$. Further, $Z^+(G) \le Z_f^+(G) + \ell_\star^+(G)$.
+
+*Proof.* $\mathcal{B} = \mathcal{D} \cup \mathcal{L}$ is a positive semidefinite zero forcing set for $G$, so $Z^+(G) \le |\mathcal{B}| = |\mathcal{D}| + |\mathcal{L}|$. The second claim follows by choosing $\mathcal{B}$ to be optimal. $\square$
+
+A natural question is whether $Z^+(G) = Z_f^+(G) + \ell_\star^+(G)$ in general. By taking a minimum positive semidefinite zero forcing set for $G$ and changing some vertices to light blue, it may be possible to obtain an optimal fractional PSD forcing set for $G$. While this technique does work for many natural examples, the result does not hold for every graph, as the next example shows.
+
+Figure 6: Graph from Example 2.27 with $p = 5$, $q = 2$
+---PAGE_BREAK---
+
+**Example 2.27.** Let $G$ be the generalization of the graph from Example 2.12: instead of $K_{5,2}$, take $K_{p,q}$ with partite sets $P$ and $Q$ satisfying $|P| = p > q = |Q| \ge 2$, and attach one leaf to each vertex of $P$. By coloring each of these leaves light blue, we can force all of $P$, and using the disconnect rule we can subsequently backforce the leaves and force all of $Q$. Thus $Z_f^+(G) = 0$ and $\ell_\star^+(G) = p$, but it is known that $Z^+(G) = q < 0 + p = Z_f^+(G) + \ell_\star^+(G)$.
+
+The key to this example is that the set $B = \mathcal{L}$ is a minimal positive semidefinite zero forcing set for $G$, but it is not a minimum positive semidefinite zero forcing set.
+
+**Remark 2.28.** If $\mathcal{B} = (\mathcal{D}, \mathcal{L})$ is an optimal fractional PSD forcing set for a connected graph $G$, then any vertex that is colored light blue must perform a force before it is itself forced; if not, then that vertex could be colored white to obtain a fractional PSD forcing set with the same number of dark blue vertices and fewer light blue vertices, contradicting the optimality of $\mathcal{B}$. Additionally, no two light blue vertices in an optimal fractional PSD forcing set can be adjacent, as otherwise one of them would be forced before performing a force of its own. Therefore, $\mathcal{L}$ is an independent set in $G$, so $\ell_\star^+(G) \le \alpha(G)$.
+
+We now present two results from positive semidefinite zero forcing.
+
+**Remark 2.29.** Let $G$ be a graph. At some step $t$ of the positive semidefinite zero forcing process, let $B_t$ denote the set of blue vertices and $W_1, \dots, W_h$ denote the sets of vertices of the connected components of $G - B_t$. Then by Remark 2.1.14 in [11] we may select any $i$ and perform the next force in $G[B_t \cup W_i]$.
+
+**Lemma 2.30 ([10], Lemma 2.1.1).** Let $G$ be a graph and let $B$ be a positive semidefinite zero forcing set of $G$. If $v \in B$ is the vertex that performs the first force, $v \to w$, where $w$ is a white neighbor of $v$, then $(B \setminus \{v\}) \cup \{w\}$ is a positive semidefinite zero forcing set of $G$.
+
+The following result is a three-color version of Lemma 2.30. The proof is similar to the proof of the two-color version found in [10] and is omitted.
+
+**Lemma 2.31.** Let $G$ be a graph and let $\mathcal{B} = (\mathcal{D}, \mathcal{L})$ be a fractional PSD forcing set for $G$. Suppose that the first force, $v \to w$, is performed by some $v \in \mathcal{D}$ on some $w \notin \mathcal{L}$. Let $\tilde{\mathcal{D}} = (\mathcal{D} \setminus \{v\}) \cup \{w\}$. Then $\tilde{\mathcal{B}} = (\tilde{\mathcal{D}}, \mathcal{L})$ is also a fractional PSD forcing set for $G$.
+
+Notice that if $\mathcal{B}$ is an optimal fractional PSD forcing set for $G$, then the first vertex forced in $G$ must necessarily be white. This observation lets us apply Lemma 2.31 to any optimal fractional PSD forcing set, provided that the first force is done by a dark blue vertex.
+
+**Theorem 2.32.** If $G$ is a graph with at least one edge, then $G$ has an optimal fractional PSD forcing set with which the first force can be performed by a light blue vertex.
+
+*Proof.* The result is trivially true for any optimal fractional PSD forcing set with which the first force can be performed by a light blue vertex. Note that if the first force with an optimal set can be done without using the disconnect rule, then this force must be done by a light blue vertex, else the set is not optimal.
+---PAGE_BREAK---
+
+Suppose for the sake of contradiction that $G$ does not have an optimal fractional PSD forcing set with which the first force can be performed by a light blue vertex. By the previous argument, the disconnect rule must be applied to perform the first force with any optimal fractional PSD forcing set. Let $\mathcal{B} = (\mathcal{D}, \mathcal{L})$ be an optimal fractional PSD forcing set such that $|W_1|$ is minimum, where $W_1, W_2, \dots, W_h$ are the sets of vertices of the connected components of $G - \mathcal{D}$ and $|W_1| \le |W_2| \le \dots \le |W_h|$. By Remark 2.29 we can assume that the first vertex forced lies in $W_1$. Let $v \to w$ be the first force, where $v \in \mathcal{D}$ by assumption and $w \in W_1$.
+
+By Lemma 2.31, the set $\tilde{\mathcal{B}} = (\tilde{\mathcal{D}}, \mathcal{L})$ with $\tilde{\mathcal{D}} = (\mathcal{D} \setminus \{v\}) \cup \{w\}$ is also an optimal fractional
+PSD forcing set for $G$. Since $w$ must be the only non-dark-blue neighbor of $v$ in $W_1$, it must be the
+case that $v$ joins a component other than $W_1$ in $G - \tilde{\mathcal{D}}$; further, in $G - \tilde{\mathcal{D}}$, the component $W_1$ will
+not contain the vertex $w$, and may split into multiple smaller components. If $W_1 \neq \{w\}$, then this
+argument shows that there must be a component with fewer than $|W_1|$ vertices in $G - \tilde{\mathcal{D}}$, which
+contradicts the choice of $\mathcal{B}$; thus we must have $W_1 = \{w\}$. However, the first force in $G$ using $\tilde{\mathcal{B}}$
+can therefore be chosen as $w \to v$, which can be done without applying the disconnect rule; by the
+comments above, $w$ can thus be light blue, contradicting optimality of $\mathcal{B}$.
+
+We conclude that $G$ must have an optimal fractional PSD forcing set with which the first force
+can be performed by a light blue vertex. □
+
+Theorem 2.32 yields a lower bound on $Z_f^+(G)$ as a corollary.
+
+**Corollary 2.33.** For any graph $G$, $\delta(G) - 1 \le Z_f^+(G)$.
+
+*Proof.* The result is trivial for $\delta(G) \le 1$. If $\delta(G) \ge 2$, then $G$ has an edge, so by Theorem 2.32 there exists some optimal fractional PSD forcing set $\mathcal{B} = (\mathcal{D}, \mathcal{L})$ such that the first force in $G$ can be done by some $u \in \mathcal{L}$. Remark 2.28 asserts that $u$ has no light blue neighbors, and all white neighbors of $u$ must be in the same component of $G - \mathcal{D}$. Since $u$ can force, all but one of its neighbors must be dark blue. Thus $|\mathcal{D}| \ge |N(u)| - 1 \ge \delta(G) - 1$. $\square$
+
+An additional corollary to Theorem 2.32 gives a lower bound on $\ell_\star^+(G)$ in the case where $G$ has at least one edge.
+
+**Corollary 2.34.** If $G$ is a graph with at least one edge, then $\ell_\star^+(G) \ge 1$.
+
+The following result is a two-color analogue of Theorem 2.32 that applies to the positive semidefinite forcing game. The proof is similar to that of Theorem 2.32 and is omitted.
+
+**Theorem 2.35.** If $G$ is a graph with at least one edge, then there exists a minimum positive semidefinite zero forcing set for $G$ such that the first force can be done without using the disconnect rule.
+
+With Theorem 2.35, we can obtain an upper bound on $Z_f^+(G)$.
+
+**Corollary 2.36.** For any graph G with at least one edge, $Z_f^+(G) \le Z^+(G) - 1$.
+
+*Proof.* Theorem 2.35 ensures that there is some minimum positive semidefinite zero forcing set $B$ such that the first force using $B$ can be done without using the disconnect rule. If $\mathcal{B}$ is obtained from $B$ by coloring the vertex that performs this first force light blue and all of the other vertices of $B$ dark blue, then $\mathcal{B}$ is a fractional PSD forcing set with $Z^+(G) - 1$ dark blue vertices. $\square$
+
+## 2.5 Fractional positive semidefinite forcing numbers for graph families
+
+In this section, we determine the fractional PSD forcing numbers for common graph families, illustrating the utility of some of the results in Section 2.4.
+
+**Example 2.37.** Let $n \ge 2$ and let $V(K_n) = \{v_1, v_2, \dots, v_n\}$. Note that $Z^+(K_n) = n-1$ (cf. [5, Example 46.4.2]). Applying Corollaries 2.33 and 2.36, $\delta(K_n)-1 = n-2 \le Z_f^+(K_n) \le Z^+(K_n)-1 = n-2$ and thus equality holds. By Corollary 2.34, $\ell_\star^+(K_n) \ge 1$. The set $\mathcal{B} = (\{v_1, v_2, \dots, v_{n-2}\}, \{v_{n-1}\})$ is thus an optimal fractional PSD forcing set for $K_n$, so $Z_f^+(K_n) = n-2$ and $\ell_\star^+(K_n) = 1$.
+
+In each of the next four examples, optimality of the exhibited fractional PSD forcing sets is obtained by application of Corollaries 2.33 and 2.34.
+
+**Example 2.38.** For any $n \ge 2$, the set $\mathcal{B} = (\emptyset, \{v_1\})$ is an optimal fractional PSD forcing set for $P_n$, where $V(P_n) = \{v_1, v_2, \dots, v_n\}$ in path order, so $Z_f^+(P_n) = 0$ and $\ell_\star^+(P_n) = 1$.
+
+**Example 2.39.** For any $n \ge 3$, the set $\mathcal{B} = (\{v_1\}, \{v_2\})$ is an optimal fractional PSD forcing set for $C_n$, where $V(C_n) = \{v_1, v_2, \dots, v_n\}$ in cycle order, so $Z_f^+(C_n) = 1$ and $\ell_\star^+(C_n) = 1$.
+
+**Example 2.40.** Let $n \ge 4$ and consider the wheel on $n$ vertices, $W_n$, which is obtained by adding a vertex $w$ adjacent to every vertex of $C_{n-1}$. If $\mathcal{B} = (\mathcal{D}, \mathcal{L})$ is any optimal fractional PSD forcing set for $C_{n-1}$, then $\tilde{\mathcal{B}} = (\mathcal{D} \cup \{w\}, \mathcal{L})$ is an optimal fractional PSD forcing set for $W_n$, so $Z_f^+(W_n) = 2$ and $\ell_\star^+(W_n) = 1$.
+
+**Example 2.41.** Let $p \ge q \ge 1$ and consider $K_{p,q}$, the complete bipartite graph on partite sets $P$ and $Q$ with $|P| = p$ and $|Q| = q$. Let $\mathcal{D}$ be a set containing any $(q-1)$ elements of $Q$ and let $\mathcal{L}$ be a set containing any one element of $P$; then $\mathcal{B} = (\mathcal{D}, \mathcal{L})$ is an optimal fractional PSD forcing set for $K_{p,q}$, so $Z_f^+(K_{p,q}) = q-1$ and $\ell_\star^+(K_{p,q}) = 1$.
+
+As a final example, we consider the fractional PSD forcing number of a tree.
+
+**Example 2.42.** Suppose that $T$ is a tree of order at least 2. We have $Z^+(T) = 1$ (cf. [5, Example 46.4.3]), so Corollary 2.36 implies that $0 \le Z_f^+(T) \le Z^+(T) - 1 = 0$ and hence equality holds. Corollary 2.34 implies that $\ell_\star^+(T) \ge 1$; if we let $\mathcal{L}$ be any leaf of $T$, then $\mathcal{B} = (\emptyset, \mathcal{L})$ is an optimal fractional PSD forcing set, so $Z_f^+(T) = 0$ and $\ell_\star^+(T) = 1$.
+
+# 3 Three-color interpretation of skew zero forcing
+
+In this section, we introduce a three-color interpretation of the skew zero forcing game and use this to show that the skew zero forcing number and "fractional (zero) forcing number" of a graph are equal. Using the three-color interpretation, we derive new results pertaining to skew zero forcing number and the associated coloring process.
+
+## 3.1 The three-color skew zero forcing game
+
+Consider the following three-color forcing game played on a graph $G$. Choose an initial set of dark blue vertices, $\mathcal{D}$, and a set of light blue vertices, $\mathcal{L}$, and let $\mathcal{B} = (\mathcal{D}, \mathcal{L})$; color all other vertices of $G$ white. The forcing rule is as follows:
+
+**Definition 3.1** (three-color skew zero forcing rule). If $w$ is the only non-dark-blue neighbor of a dark blue or light blue vertex $u$, then $u$ can force $w$.
+
+The set $\mathcal{B}$ is a *three-color skew zero forcing set* if $G$ can be forced after repeated application of the three-color skew zero forcing rule. We define
+
+$$ \hat{Z}^{-}(G) = \min \{|\mathcal{D}| : (\mathcal{D}, \mathcal{L}) \text{ is a three-color skew zero forcing set for } G \text{ for some } \mathcal{L}\}. $$
+
+A three-color skew zero forcing set $\mathcal{B} = (\mathcal{D}, \mathcal{L})$ is *optimal* if $|\mathcal{D}| = \hat{Z}^{-}(G)$ and no such forcing set for $G$ has fewer light blue vertices than $\mathcal{B}$. Let $\ell_{\star}^{-}(G)$ denote the number of light blue vertices in any optimal three-color skew zero forcing set for $G$, i.e., $\ell_{\star}^{-}(G) = |\mathcal{L}|$.
+
+The inclusion of the word “skew” in the development of $\hat{Z}^{-}(G)$ is not an accident. In fact, it is easy to see that the three-color skew zero forcing game is equivalent to the (two-color) skew zero forcing game described in Section 1.1: dark blue vertices correspond to (regular) blue vertices in two-color skew zero forcing, light blue vertices correspond to white vertices that perform white vertex forcing, and white vertices that do not perform a white vertex force are the same in both cases. Therefore, $\hat{Z}^{-}(G) = Z^{-}(G)$, and we are free to use the more familiar notation (i.e., $Z^{-}(G)$) when discussing the three-color game.
+
+The only difference between a three-color skew zero forcing set and a (two-color) skew zero forcing set is that three-color skew zero forcing sets include light blue vertices; as these vertices are white in the two-color game, they are not included in forcing sets.
+
+**Remark 3.2.** Notice that any three-color skew zero forcing set for a graph $G$ is also a fractional PSD forcing set for $G$: playing the three-color skew zero forcing game is equivalent to playing the fractional PSD zero forcing game without using the disconnect rule. Therefore, $Z_{f}^{+}(G) \leq Z^{-}(G)$.
+
+From this point forward, since they give more information than their two-color counterparts, we will focus on three-color skew zero forcing sets, and typically omit the “three-color” descriptor for the sake of brevity.
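The three-color game of Definition 3.1 is easy to simulate. The sketch below is ours, not from the literature: assuming graphs are stored as dictionaries mapping each vertex to its set of neighbors, it plays the game and computes $Z^{-}(G)$ and $\ell_{\star}^{-}(G)$ by brute force over initial colorings.

```python
from itertools import combinations

def skew_forces(G, dark, light):
    """Play the three-color skew zero forcing game (Definition 3.1).

    G maps each vertex to its set of neighbors.  A dark or light blue
    vertex with exactly one non-dark-blue neighbor forces that neighbor
    (it becomes dark blue).  Returns True iff all of G ends dark blue.
    """
    dark, light = set(dark), set(light)
    changed = True
    while changed:
        changed = False
        for u in list(dark | light):
            non_dark = [w for w in G[u] if w not in dark]
            if len(non_dark) == 1:
                w = non_dark[0]
                dark.add(w)
                light.discard(w)
                changed = True
    return dark == set(G)

def skew_numbers(G):
    """Brute-force (Z^-(G), ell*^-(G)): minimize |D| first, then |L|."""
    V = list(G)
    for d in range(len(V) + 1):
        for l in range(len(V) - d + 1):
            for D in combinations(V, d):
                rest = [v for v in V if v not in D]
                for L in combinations(rest, l):
                    if skew_forces(G, D, L):
                        return d, l
```

For example, for the path $P_4$ the search returns $(0, 2)$, and for the 3-star $K_{1,3}$ of Figure 7 it returns $(2, 0)$, consistent with $\ell_{\star}^{-}(K_{1,3}) = 0$.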
+
+## 3.2 General results for skew zero forcing
+
+The three-color interpretation easily lends itself to making observations about skew zero forcing number of a graph.
+
+The next two results are well-known for $Z^{-}(G)$ using the two-color interpretation.
+
+**Remark 3.3.** Any isolated vertex in $G$ must be colored dark blue, so if $\delta(G) = 0$, then $Z^{-}(G) \geq 1$.
+
+**Observation 3.4.** If $G$ has connected components $\{G_i\}_{i=1}^m$, then $Z^{-}(G) = \sum_{i=1}^m Z^{-}(G_i)$ and $\ell_{\star}^{-}(G) = \sum_{i=1}^m \ell_{\star}^{-}(G_i)$.
+
+In light of this observation, we are able to focus our attention on connected graphs.
+
+**Remark 3.5.** For every connected graph $G$, $\delta(G) - 1 \le Z^{-}(G)$. This is because if a candidate skew zero forcing set does not contain at least $\delta(G) - 1$ dark blue vertices, then every dark blue or light blue vertex has at least two white or light blue neighbors, so the forcing process cannot start.
+
+**Remark 3.6.** Suppose that $G$ is a connected graph on 2 or more vertices and color each of its vertices dark blue. Any one adjacent pair can then be re-colored white and light blue (in either order), so $Z^{-}(G) \le |G| - 2$.
+
+**Remark 3.7.** For every connected graph $G$, we have $Z^{-}(G) \le Z(G) \le Z^{-}(G) + \ell_{*}^{-}(G)$. The first inequality is well-known and follows because every zero forcing set for a graph $G$ is also a skew zero forcing set for $G$. For the second, note that if $\mathcal{B} = (\mathcal{D}, \mathcal{L})$ is an optimal skew zero forcing set, then $\mathcal{D} \cup \mathcal{L}$ is a (standard) zero forcing set.
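The second inequality of Remark 3.7 can be checked mechanically. The sketch below (function name ours) assumes graphs are stored as dictionaries mapping each vertex to its set of neighbors and plays the standard two-color zero forcing game; coloring $\mathcal{D} \cup \mathcal{L}$ blue then suffices to force $G$ whenever $(\mathcal{D}, \mathcal{L})$ is an optimal skew zero forcing set.

```python
def zero_forces(G, blue):
    """Standard (two-color) zero forcing: a blue vertex with exactly
    one white neighbor forces it; repeat until no force applies.
    Returns True iff all of G is eventually blue."""
    blue = set(blue)
    changed = True
    while changed:
        changed = False
        for u in list(blue):
            white = [w for w in G[u] if w not in blue]
            if len(white) == 1:
                blue.add(white[0])
                changed = True
    return blue == set(G)
```

For instance, $P_4$ has the optimal skew zero forcing set $(\emptyset, \{v_1, v_3\})$, and $\{v_1, v_3\}$ is indeed a standard zero forcing set, illustrating $Z(P_4) \le Z^{-}(P_4) + \ell_{*}^{-}(P_4) = 0 + 2$.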
+
+As a consequence of Remark 3.7, any graph for which $Z^{-}(G) = Z(G)$ must have $\ell_{*}^{-}(G) = 0$. Figure 7 shows such a graph.
+
+Figure 7: 3-star $K_{1,3}$ with optimal skew zero forcing set showing $\ell_{*}^{-}(G) = 0$
+
+The justification for the next observation is the same as that given in Remark 2.28.
+
+**Observation 3.8.** If $\mathcal{B} = (\mathcal{D}, \mathcal{L})$ is an optimal skew zero forcing set for a connected graph $G$, then any vertex that is colored light blue must perform a force before it is itself forced. No two light blue vertices in an optimal skew zero forcing set can be adjacent. The set $\mathcal{L}$ is an independent set in $G$, and $\ell_{*}^{-}(G) \le \alpha(G)$.
+
+**Remark 3.9.** For any connected graph $G$, we have $0 \le \ell_{*}^{-}(G) \le \lfloor \frac{|G|-Z^{-}(G)}{2} \rfloor$. The numerator is the number of non-dark-blue vertices in an optimal skew zero forcing set, and in the worst case, half of these vertices would need to be colored light blue to force their white neighbors. Observe also that $\ell_{*}^{-}(G) < |G|$. If $G$ contains any isolated vertices, then they must be dark blue, and this claim is trivial. Otherwise, each connected component of $G$ has order at least two, so $\ell_{*}^{-}(G) \le \lfloor \frac{|G|-Z^{-}(G)}{2} \rfloor \le \lfloor \frac{|G|}{2} \rfloor < |G|$.
+
+## 3.3 Skew zero forcing as fractional zero forcing
+
+In this section, we develop an r-fold version of the standard zero forcing game and use it to prove that the “fractional (zero) forcing number” of a graph is equal to the skew zero forcing number of the graph. This treatment is similar to the positive semidefinite case discussed in Sections 2.1 and 2.2.
+
+Let $G$ be a graph and for some $r \in \mathbb{N}$ consider the following $r$-fold forcing game, which is a two-color forcing game played on $G^{(r)}$, the $r$-blowup of $G$. As in any zero forcing game, we initially color some set $B \subseteq V(G^{(r)})$ blue and then try to force $G^{(r)}$ through repeated application of the following $r$-fold forcing rule:
+
+**Definition 3.10** ($r$-fold forcing rule). At some step $t$ of the forcing process, let $B_t$ denote the set of blue vertices in $G^{(r)}$. If $u \in B_t$ and $|N(u) \setminus B_t| \le r$, then $u$ can force $N(u) \setminus B_t$, i.e., all white neighbors of $u$ can be colored blue simultaneously.
+
+The observant reader will notice that the *r*-fold forcing rule is exactly the *r*-forcing rule found in [2], although applied to $G^{(r)}$ instead of $G$. The *r*-fold forcing game was developed in the spirit of fractional graph theory [9], while the *r*-forcing process described in [2] is more general. We have chosen to use different terminology with our treatment to emphasize this key difference.
+
+If $G^{(r)}$ can be forced, then the initial set of blue vertices is called an *r*-fold forcing set for $G$. A minimum *r*-fold forcing set is an *r*-fold forcing set of minimum cardinality. The *r*-fold forcing number of $G$, $Z_{[r]}(G)$, is the cardinality of a minimum *r*-fold forcing set.⁴ We define the fractional forcing number of $G$ as
+
+$$Z_f(G) = \inf_{r \in \mathbb{N}} \left\{ \frac{Z_{[r]}(G)}{r} \right\}.$$
+
+Clearly, $Z_{[1]}(G) = Z(G)$. By an argument similar to that used in Section 2.1, it is easy to see that for $r \ge 2$, we have $Z_{[r]}(G) \le r \cdot Z(G)$, so we can equivalently define fractional forcing number as
+
+$$Z_f(G) = \inf_{r \ge 2} \left\{ \frac{Z_{[r]}(G)}{r} \right\}.$$
+
+Our goal in this section is to prove that $Z_f(G) = Z^-(G)$ for any graph $G$. In order to do this, we will follow an approach similar to that used in Section 2.2, with the noted difference that we have a three-color interpretation of skew zero forcing that can be used to simplify some of our arguments.
+
+The global view of the *r*-fold forcing game, analogous to that of the *r*-fold positive semidefinite forcing game, will also be considered. Since backforcing does not apply to this game, in addition to All, One, and None clusters in $G^{(r)}$, we consider one other type of cluster: a *Most cluster* is a cluster in which all but one vertex is colored blue. We consider Most clusters only for $r \ge 3$, as when $r=2$ a Most cluster is equivalent to a One cluster. An *All-Most-One-None (AMON) r-fold forcing set* is an *r*-fold forcing set for $G$ that creates All, Most, One, and None clusters in $G^{(r)}$. As before, we let $a(B)$ denote the number of All clusters and $\ell(B)$ denote the number of One clusters created in $G^{(r)}$ by an AMON *r*-fold forcing set *B*; we introduce $m(B)$ to denote the number of Most clusters created by *B*. If *B* is an AMON *r*-fold forcing set, then $|B| = r \cdot a(B) + (r-1) \cdot m(B) + \ell(B) = r(a(B)+m(B)) + \ell(B) - m(B)$.
+
+Many of the remarks and observations from Section 2.2 apply to the global interpretation of the *r*-fold forcing game, so we present these results together without justification.
+
+**Observation 3.11.** Forcing into a cluster $R_u$ is equivalent to forcing $R_u$. Once a cluster is forced, it becomes an All cluster. Each cluster performs at most one force.
+
+⁴Note that $Z_{[r]}(G) = F_r(G^{(r)})$, where $F_k(H)$ is the $k$-forcing number of a graph $H$; see [2].
+
+**Theorem 3.12.** For any graph $G$ and any $r \ge 2$, an AMON minimum $r$-fold forcing set for $G$ exists, as does a forcing process in which at each step either exactly one cluster is forced or a One cluster and a Most cluster (or, when $r = 2$, two One clusters) are forced simultaneously.
+
+*Proof.* If $r=2$, then every minimum forcing set is an AON forcing set, which is a specific type of AMON forcing set. Further, at most two clusters are forced at each step of the forcing process, and if two clusters are forced simultaneously, then both must be One clusters. Hence the result is trivially true.
+
+Assume that $r \ge 3$. Let $B$ be a minimum $r$-fold forcing set for $G$ and suppose that $B$ is not AMON. Create a chronological list of forces in $G^{(r)}$.
+
+Suppose that at step $\ell \ge 1$ we have $x \to R_u$ for some $u$, and $R_u$ is the only cluster forced at this step. If $R_u$ is not a One or a None cluster, then consider the set $B'$ obtained by replacing $R_u$ with a One cluster. Note that each vertex in $R_u$ has the same neighborhood, so if $R_u$ performs a force, then that force can be done by any blue vertex in $R_u$; thus it does not matter which vertex in $R_u$ is colored blue after this replacement. No future force is affected by this change, as $R_u$ will become an All cluster at step $\ell$, and since each cluster performs at most one force and $R_u$ contains a blue vertex, no previous force (if there was one) is affected. Thus $B'$ is a forcing set with fewer blue vertices than $B$, which contradicts that $B$ is minimum. As such, if a single cluster is forced at some step of the forcing process, then it is either a One or a None cluster.
+
+Now, suppose that at step $\ell \ge 1$ we have $x \to W \subseteq (R_{u_1} \cup R_{u_2} \cup \cdots \cup R_{u_m})$ for some $m \ge 2$, where each $R_{u_j}$ contains at least one white vertex. Since $x$ is performing a force, it has at most $r$ white neighbors. Thus we can perform a partial consolidation on the blue vertices spread among the $R_{u_j}$ as follows: convert $R_{u_1}, R_{u_2}, \ldots, R_{u_{m-2}}$ into All clusters (when $m=2$, create zero All clusters this way), convert $R_{u_{m-1}}$ into a Most cluster, and leave the remaining blue vertices in $R_{u_m}$. Consolidation does not affect the ability of $x$ to force at step $\ell$, and after this force the state of the graph is the same as it would have been had we not consolidated, so no future force is disabled by this technique. Further, each of the $R_{u_j}$ contains at least one blue vertex after the partial consolidation, so if any $R_{u_j}$ had been used to perform a force prior to step $\ell$, then the same force can still be performed; thus past forces are also not affected by the partial consolidation. If we let $\tilde{B}$ be the set obtained by performing this particular partial consolidation on $B$, then these arguments show that $\tilde{B}$ is also a minimum $r$-fold forcing set for $G$.
+
+Notice that after partial consolidation, $R_{u_m}$ must necessarily be a One cluster: if not, then we can replace $R_{u_m}$ with a One cluster to obtain a valid forcing set with fewer blue vertices, contradicting the minimality of $B$ (and $\tilde{B}$). Therefore, after partial consolidation, $x$ will force exactly two clusters, simultaneously – a Most cluster and a One cluster.
+
+By performing partial consolidation, each cluster will become an All, Most, One, or None cluster, and a forcing process exists with which at each step either a single One or None cluster will be forced, or a Most and a One cluster will be forced simultaneously. □
+
+The type of AMON minimum *r*-fold forcing set guaranteed by Theorem 3.12 is called an *optimal AMON r-fold forcing set for G*; we emphasize that optimal AMON forcing sets are minimum and that there is a corresponding forcing process in which at most two clusters are forced simultaneously.
+Using such a set, $G^{(r)}$ will always have a global AMON structure.
+
+**Corollary 3.13.** For any optimal AMON *r*-fold forcing set *B*, $\ell(B) \geq m(B)$.
+
+*Proof.* For each Most cluster in an optimal AMON *r*-fold forcing set there exists a corresponding One cluster that is forced simultaneously using the forcing process guaranteed by Theorem 3.12. Thus the number of Most clusters cannot exceed the number of One clusters. $\square$
+
+**Corollary 3.14.** For any graph *G* with optimal AMON *r*-fold forcing set *B*,
+
+$$Z_{[r]}(G) = r(a(B) + m(B)) + \ell(B) - m(B).$$
+
+To obtain our main results of this section, we require a way to convert an AMON *r*-fold forcing set for *G* into a (three-color) skew zero forcing set for *G*, and vice-versa.
+
+**Remark 3.15.** For $r \ge 2$, let $B$ be an optimal AMON $r$-fold forcing set for a graph $G$. Color $G^{(r)}$ with $B$ and let $\tilde{\mathcal{B}} = (\tilde{\mathcal{D}}, \tilde{\mathcal{L}})$, where $\tilde{\mathcal{D}} = \{u : R_u$ is an All or Most cluster$\}$ and $\tilde{\mathcal{L}} = \{u : R_u$ is a One cluster$\}$. It is easy to see that $\tilde{\mathcal{B}}$ is a skew zero forcing set for $G$. Similarly, let $\mathcal{B} = (\mathcal{D}, \mathcal{L})$ be a skew zero forcing set for $G$. Color $G^{(r)}$ according to the following rule: if $u \in \mathcal{D}$, then make $R_u$ an All cluster, and if $u \in \mathcal{L}$, then make $R_u$ a One cluster. The set $\tilde{B}$ of blue vertices is an AMON $r$-fold forcing set for $G$ (with $m(\tilde{B}) = 0$).
+
+**Definition 3.16.** Regardless of whether we transform an *r*-fold forcing set into a three-color skew zero forcing set or a three-color skew zero forcing set into an *r*-fold forcing set, we call the process described in Remark 3.15 *conversion*.
+
+When performing conversion, we will always specify which type of set is being converted.
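The skew-to-$r$-fold direction of conversion can be sketched concretely. The code below (function names ours) assumes graphs are stored as dictionaries mapping vertices to neighbor sets; it builds the blowup $G^{(r)}$, plays the $r$-fold game of Definition 3.10, and colors All clusters for dark blue vertices and One clusters for light blue vertices as in Remark 3.15.

```python
def blowup(G, r):
    """The r-blowup G^{(r)}: each vertex u of G becomes an independent
    cluster R_u = {(u,0),...,(u,r-1)}, with (u,i) ~ (v,j) iff u ~ v."""
    H = {(u, i): set() for u in G for i in range(r)}
    for u in G:
        for v in G[u]:
            for i in range(r):
                H[(u, i)].update((v, j) for j in range(r))
    return H

def r_fold_forces(H, blue, r):
    """Definition 3.10: a blue vertex with at most r white neighbors
    forces all of them at once.  Returns True iff H is fully forced."""
    blue = set(blue)
    changed = True
    while changed:
        changed = False
        for u in list(blue):
            white = set(H[u]) - blue
            if 0 < len(white) <= r:
                blue |= white
                changed = True
    return blue == set(H)

def convert(D, L, r):
    """Remark 3.15: All clusters for u in D, One clusters for u in L."""
    return [(u, i) for u in D for i in range(r)] + [(u, 0) for u in L]
```

For example, the skew zero forcing set $(\emptyset, \{v_1, v_3\})$ for $P_4$ converts into a $3$-fold forcing set of size $2$, so $Z_{[3]}(P_4) \le 2$.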
+
+**Proposition 3.17.** Let $G$ be a graph on $n$ vertices and fix $r \ge n$. If $B$ is an optimal AMON $r$-fold forcing set for $G$, then $\ell(B) < n$.
+
+*Proof.* Since $a(B) + m(B) + \ell(B) \le n$, we have $\ell(B) \le n - (a(B) + m(B))$. If $a(B) + m(B) > 0$, then the claim follows trivially.
+
+Suppose that $a(B) = m(B) = 0$. If $r \ge 3$, then exactly one cluster of $G^{(r)}$ is forced at each step of the forcing process, since there are no Most clusters. Optimality of $B$ implies that the first cluster forced must be a None cluster, so $\ell(B) < n$.
+
+Now suppose that $r=2$, in which case $n=1$ or $n=2$. Since $a(B)=0$, $G$ cannot contain any isolated vertices, so we must have $n=2$ and $G=K_2$. In this case, the optimal AMON $r$-fold forcing sets for $G$ create a single One cluster, so $\ell(B) = 1 < 2 = n$. $\square$
+
+**Proposition 3.18.** Let $G$ be a graph on $n$ vertices. If $r \ge n$ and $B$ is an optimal AMON $r$-fold forcing set, then $a(B) + m(B) = Z^{-}(G)$.
+
+*Proof.* Assume the hypotheses. Converting $B$ into a skew zero forcing set $\tilde{B} = (\tilde{D}, \tilde{\mathcal{L}})$ yields $Z^{-}(G) \le |\tilde{D}| = a(B) + m(B)$.
+
+Now, let $\mathcal{B} = (\mathcal{D}, \mathcal{L})$ be an optimal skew zero forcing set for $G$ and convert $\mathcal{B}$ into an AMON $r$-fold forcing set $\tilde{B}$. Since $B$ is a minimum $r$-fold forcing set, we have $|B| \le |\tilde{B}|$. Thus
+
+$$a(B) + m(B) + \frac{\ell(B) - m(B)}{r} = \frac{|B|}{r} \le \frac{|\tilde{B}|}{r} = |\mathcal{D}| + \frac{|\mathcal{L}|}{r} = Z^{-}(G) + \frac{\ell_{\star}^{-}(G)}{r}.$$
+
+Since $\ell(B) - m(B) \le \ell(B) < n$ by Proposition 3.17, $\ell_\star^-(G) < n$ by Remark 3.9, and $n \le r$ by assumption, applying the floor function through the above inequality yields $a(B)+m(B) \le Z^-(G)$, and thus equality holds. $\square$
+
+**Corollary 3.19.** If $G$ is a graph on $n$ vertices, then
+
+$$ \lim_{r \to \infty} \frac{Z_{[r]}(G)}{r} = Z^{-}(G). $$
+
+*Proof.* Let $r \ge n$ and suppose that $B$ is an optimal AMON $r$-fold forcing set. By Proposition 3.18, $a(B) + m(B) = Z^{-}(G)$. Therefore
+
+$$ \frac{Z_{[r]}(G)}{r} = \frac{|B|}{r} = a(B) + m(B) + \frac{\ell(B)}{r} = Z^{-}(G) + \frac{\ell(B)}{r}. $$
+
+Since $0 \le \ell(B) < n$ (independent of the choice of $B$ and for all $r \ge n$), taking the limit as $r$ approaches $\infty$ proves the result. $\square$
+
+**Proposition 3.20.** For any $r \ge 2$ and any graph $G$, if $B$ is an optimal AMON $r$-fold forcing set, then $\frac{|B|}{r} \ge Z^{-}(G)$.
+
+*Proof.* Let $\tilde{\mathcal{B}} = (\tilde{\mathcal{D}}, \tilde{\mathcal{L}})$ be obtained by converting $B$ into a skew zero forcing set. Since $\ell(B) - m(B) \ge 0$ by Corollary 3.13, we have
+
+$$ \frac{|B|}{r} = a(B) + m(B) + \frac{\ell(B) - m(B)}{r} \geq a(B) + m(B) = |\tilde{\mathcal{D}}| \geq Z^{-}(G). \quad \square $$
+
+**Theorem 3.21.** For any graph $G$,
+
+$$ Z_f(G) = Z^{-}(G). $$
+
+*Proof.* Since $Z_{[r]}(G) = |B|$ for any optimal AMON $r$-fold forcing set $B$, Proposition 3.20 yields $\frac{Z_{[r]}(G)}{r} \ge Z^{-}(G)$ for all $r \ge 2$, so
+
+$$ Z_f(G) = \inf_{r \ge 2} \left\{ \frac{Z_{[r]}(G)}{r} \right\} \ge Z^{-}(G). $$
+
+Further, equality holds by Corollary 3.19. $\square$
+
+## 3.4 Leaf-stripping and skew zero forcing number
+
+In this section, we prove results about graphs with leaves and show that skew zero forcing number is unchanged by removing leaves and their neighbors. A leaf-stripping algorithm, based on an algorithm described in [6], is introduced and used to characterize graphs $G$ that have $Z^{-}(G) = 0$. For convenience, we define $Z^{-}(\emptyset) = 0$.
+
+**Lemma 3.22.** Let $G$ be a graph with leaf $u \in V(G)$ and let $v \in V(G)$ be the neighbor of $u$. Let $\mathcal{B} = (\mathcal{D}, \mathcal{L})$ be an optimal skew zero forcing set for $G$.
+
+i. If $u \in \mathcal{B}$, then $v \notin \mathcal{B}$.
+
+ii. If $u \notin \mathcal{B}$, then $v \notin \mathcal{D}$.
+
+*Proof.* For the first claim, since $u \in \mathcal{B}$ and $v$ is the only neighbor of $u$, we can choose $u \to v$ as the first step in the forcing process. Because $\mathcal{B}$ is optimal, $v$ must be white, regardless of whether $u$ is dark or light blue.
+
+For the second claim, if $v \in \mathcal{D}$, then the set $\tilde{\mathcal{B}} = (\mathcal{D} \setminus \{v\}, \mathcal{L} \cup \{u\})$ has fewer dark blue vertices than $\mathcal{B}$ but is a skew zero forcing set for $G$, contradicting the optimality of $\mathcal{B}$. □
+
+**Theorem 3.23.** If $G$ is a graph with leaf $u \in V(G)$ and $v \in V(G)$ is the neighbor of $u$, then
+$Z^{-}(G - \{u, v\}) = Z^{-}(G)$.
+
+*Proof.* Suppose that $\tilde{\mathcal{B}} = (\tilde{\mathcal{D}}, \tilde{\mathcal{L}})$ is an optimal skew zero forcing set for $\tilde{\mathcal{G}} = G - \{u, v\}$ and let $\mathcal{D} = \tilde{\mathcal{D}}, \mathcal{L} = \tilde{\mathcal{L}} \cup \{u\}$, and $\mathcal{B} = (\mathcal{D}, \mathcal{L})$. Carry out the forcing process on $G$ using $\mathcal{B}$ for the initial coloring, starting with $u \to v$. Since $v$ is then dark blue, it does not affect the ability of its neighbors to force, so the forcing process on $G$ can be continued until $\tilde{\mathcal{G}}$ is colored dark blue (since $\tilde{\mathcal{B}} = \mathcal{B} \setminus \{u\}$ is a skew zero forcing set for $\tilde{\mathcal{G}}$). The final force can then be $v \to u$, turning $G$ completely dark blue, so $\mathcal{B}$ is a skew zero forcing set for $G$ with $Z^{-}(\tilde{\mathcal{G}})$ dark blue vertices. Thus $Z^{-}(G) \le Z^{-}(\tilde{\mathcal{G}})$.
+
+Now suppose that $\mathcal{B} = (\mathcal{D}, \mathcal{L})$ is an optimal skew zero forcing set for $G$; we consider three cases.
+As before, $\tilde{\mathcal{G}}$ will denote $G - \{u, v\}$.
+
+First, if $u \in \mathcal{L}$, then $v$ is white by Lemma 3.22 and $u \to v$ can be taken as the first step of the forcing process. Without loss of generality, we can assume that $v \to u$ is the last step of the forcing process. By continuing the forcing process, we will color $\tilde{\mathcal{G}}$ completely dark blue, since $\mathcal{B}$ is a skew zero forcing set for $G$ and $v$ cannot force any vertex in $\tilde{\mathcal{G}}$; thus $\mathcal{B} \setminus \{u\}$ is a skew zero forcing set for $\tilde{\mathcal{G}}$ with $Z^{-}(G)$ dark blue vertices, so $Z^{-}(\tilde{\mathcal{G}}) \le Z^{-}(G)$.
+
+Next, suppose that $u \in \mathcal{D}$; again, by Lemma 3.22, $v$ is white and $u \to v$ can be chosen as the first step of the forcing process. If $v$ never subsequently forces any of its other neighbors, then $\mathcal{B}$ is not optimal, since $u$ could have been chosen as a light blue vertex instead of a dark blue vertex (and then $v \to u$ could be the final step in the new forcing process). Thus $v$ must eventually force one of its neighbors, say $w$. It must be the case that at that stage all neighbors of $v$ (except $w$) are colored dark blue, and since $v$ is itself dark blue it did not affect any of the forces that led to this state. Therefore, if we let $\tilde{\mathcal{D}} = (\mathcal{D} \setminus \{u\}) \cup \{w\}$ and $\tilde{\mathcal{B}} = (\tilde{\mathcal{D}}, \mathcal{L})$, we will have a set containing $Z^{-}(G)$ dark blue vertices that can color all of $\tilde{\mathcal{G}}$ dark blue. We see that $Z^{-}(\tilde{\mathcal{G}}) \le Z^{-}(G)$.
+
+Lastly, suppose that $u \notin \mathcal{B}$, i.e., $u$ is white. By Lemma 3.22, $v \notin \mathcal{D}$. There is a point in time after which $v$ will be dark blue; all forces prior to this time (except possibly $v \to u$ in the case where $v$ is light blue) do not involve $v$ in any way, and all forces after this time (except possibly $v \to u$) can be performed regardless of the presence of $v$, as it is dark blue. Let $\tilde{\mathcal{B}} = (\mathcal{D}, \tilde{\mathcal{L}})$, where $\tilde{\mathcal{L}} = \mathcal{L} \setminus \{v\}$ if $v \in \mathcal{L}$ and $\tilde{\mathcal{L}} = \mathcal{L}$ otherwise. Then $\tilde{\mathcal{B}}$ can completely force $\tilde{\mathcal{G}}$, so $Z^{-}(\tilde{\mathcal{G}}) \le Z^{-}(G)$. □
+
+Motivated by this result, we introduce a *leaf-stripping algorithm* that can be used to reduce
+a graph *G* to a smaller graph with the same skew zero forcing number. This algorithm is a
+modification of Algorithm 3.16 in [6].
+
+**Algorithm 3.24 (Leaf-stripping).**
+
+**Input:** Graph $G$
+
+**Output:** Graph $\hat{G}$ with $\delta(\hat{G}) \neq 1$, or $\hat{G} = \emptyset$
+
+**BEGIN**
+
+$\hat{G} \leftarrow G$
+
+**WHILE** $\hat{G}$ has a leaf $u$ with neighbor $v$ **DO**
+
+$\hat{G} \leftarrow \hat{G} - \{u, v\}$
+
+**RETURN** $\hat{G}$.
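Algorithm 3.24 admits a direct implementation. The sketch below (function name ours) assumes graphs are stored as dictionaries mapping each vertex to its set of neighbors, and repeatedly strips a leaf and its neighbor until no leaf remains.

```python
def leaf_strip(G):
    """Algorithm 3.24: while the graph has a leaf u with neighbor v,
    delete both u and v.  Returns the stripped graph (possibly empty)."""
    G = {u: set(N) for u, N in G.items()}    # work on a copy
    while True:
        u = next((x for x in G if len(G[x]) == 1), None)
        if u is None:                         # no leaf: min degree != 1
            return G
        v = next(iter(G[u]))                  # the unique neighbor of u
        for x in (u, v):
            for w in G.pop(x):
                if w in G:
                    G[w].discard(x)
```

For a tree the surviving graph is a set of isolated vertices whose count is the skew zero forcing number: the path $P_4$ strips to the empty graph, $P_5$ strips to a single isolated vertex, and the 3-star $K_{1,3}$ strips to $2K_1$, giving $Z^{-}$ values $0$, $1$, and $2$, respectively.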
+
+**Theorem 3.25.** Let $G$ be a graph and let $\hat{G}$ be the graph returned by Algorithm 3.24. Then
+
+i. $Z^{-}(G) = Z^{-}(\hat{G});$ and
+
+ii. $Z^{-}(G) = 0$ if and only if $\hat{G} = \emptyset$.
+
+*Proof.* The first claim follows by repeated application of Theorem 3.23, and hence if $\hat{G} = \emptyset$, then $Z^{-}(G) = 0$. For the converse of the second claim, suppose that Algorithm 3.24 does not return the empty set. If $\delta(\hat{G}) = 0$, then $1 \le Z^{-}(\hat{G})$. If $\delta(\hat{G}) \ge 2$, then
+
+$$1 \le \delta(\hat{G}) - 1 \le Z^{-}(\hat{G}).$$
+
+In either case, $1 \le Z^{-}(\hat{G}) = Z^{-}(G)$, which completes the proof. $\square$
+
+This result immediately yields the following corollaries.
+
+**Corollary 3.26.** If $G$ is a graph on an odd number of vertices, then $Z^{-}(G) > 0$.
+
+**Corollary 3.27.** If $G$ is a graph and $Z^{-}(G) = 0$, then $G$ has a unique perfect matching. Further, by applying the leaf-stripping algorithm to $G$, each removed leaf and its neighbor contribute an edge to this perfect matching.
+
+Note that the converse of Corollary 3.27 is false, as the next example shows.
+
+Figure 8: Graph $G$ with unique perfect matching and $Z^{-}(G) > 0$
+
+**Example 3.28.** Consider the graph $G$ shown in Figure 8. The thick edges in the figure show the unique perfect matching for $G$, but since $\delta(G) = 2$, we have $Z^{-}(G) \ge 2-1=1$. In fact, $Z^{-}(G) = 1$, and the forcing set shown is optimal.
+
+**Remark 3.29.** If $G$ is a graph on $n$ vertices, then Algorithm 3.24 returns the graph $\hat{G}$ in at most $\lfloor \frac{n}{2} \rfloor$ leaf-stripping steps. Theorem 3.25 asserts that if $Z^{-}(\hat{G})$ is known, then the algorithm has computed $Z^{-}(G) = Z^{-}(\hat{G})$. In particular, if $T$ is a tree, then necessarily $\hat{G} = pK_1$ for some $p \ge 0$ and $Z^{-}(T) = p$.
+
+Theorem 3.25 also yields yet another upper bound on $\ell_{*}^{-}(G)$ when $Z^{-}(G) = 0$.
+
+**Remark 3.30.** For any graph $G$ on $n$ vertices with $Z^{-}(G) = 0$, we have $\ell_{*}^{-}(G) \le \frac{n}{2}$. The leaf-stripping algorithm identifies a set of vertices (the “leaves”) that can all be colored light blue to carry out the forcing process.
+
+A natural question is whether we can prove a version of Theorem 3.25 that applies to the fractional positive semidefinite forcing game. If Algorithm 3.24 returns the empty set when applied to a graph $G$, then by Remark 3.2 and Theorem 3.25 we have $0 \le Z_f^+(G) \le Z^-(G) = 0$, and so $Z_f^+(G) = 0$. The converse may fail, however: the graph $G$ from Example 2.27 satisfies $Z_f^+(G) = 0$, but applying the algorithm to $G$ would return the nonempty partite set $Q$. Thus, while we cannot obtain a full positive semidefinite analogue of Theorem 3.25, the result can still be a useful tool when Algorithm 3.24 returns the empty set.
+
+To demonstrate, let $p \ge 2$ and consider the “fuzzy orange” $G = K_p \circ K_1$ obtained by attaching one leaf to each vertex of $K_p$ (this is the corona of $K_p$ with $K_1$). In [1], it is shown that $\text{mr}(G) = \text{mr}^{+}(G) = p+1$, so $M^{+}(G) = |G| - \text{mr}^{+}(G) = p-1 \le Z^{+}(G)$. It is easy to see that any $(p-1)$ leaves form a positive semidefinite zero forcing set for $G$, so $Z^{+}(G) = p-1$.
+
+**Example 3.31.** Let $G = K_p \circ K_1$ for some $p \ge 2$. Algorithm 3.24 returns the empty set when applied to $G$, so by Remark 3.2 and Theorem 3.25 we have $0 \le Z_f^+(G) \le Z^-(G) = 0$ and thus equality holds. By Proposition 2.26, we have $p-1 = Z^+(G) \le Z_f^+(G) + \ell_*^+(G) = \ell_*^+(G)$. If $\mathcal{L}$ is a set consisting of any $p-1$ of the leaves, then $\mathcal{B} = (\emptyset, \mathcal{L})$ is an optimal fractional PSD forcing set for $G$, so $Z_f^+(G) = 0$ and $\ell_*^+(G) = p-1$.
+
+As an aside, this example also demonstrates that the positive semidefinite zero forcing number of a graph can be quite different from the value of the fractional PSD forcing number.
+
+## Acknowledgements
+
+This research has been supported in part by Iowa State University Holl funds. Kevin F. Palmowski is supported by an Iowa State University Department of Mathematics Lambert Graduate Research Assistantship. David E. Roberson is supported in part by the Singapore National Research Foundation under NRF RF Award No. NRF-NRFF2013-13.
+
+Some of this work was done while Leslie Hogben and Kevin F. Palmowski were visiting the Institute for Mathematics and its Applications (IMA); they thank IMA both for financial support (from NSF funds) and for providing a wonderful collaborative research environment. The authors also extend many thanks to Steve Butler for help with computations.
+---PAGE_BREAK---
+
+## References
+
+[1] AIM Minimum Rank – Special Graphs Work Group (F. Barioli, W. Barrett, S. Butler, S. M. Cioaba, D. Cvetković, S. M. Fallat, C. Godsil, W. Haemers, L. Hogben, R. Mikkelson, S. Narayan, O. Pryporova, I. Sciriha, W. So, D. Stevanović, H. van der Holst, K. Vander Meulen, and A. Wangsness). Zero forcing sets and the minimum rank of graphs. *Linear Algebra Appl.*, 428: 1628–1648, 2008.
+
+[2] D. Amos, Y. Caro, R. Davila, and R. Pepper. Upper bounds on the $k$-forcing number of a graph. arXiv:1401.6206v1 [math.CO], 2014.
+
+[3] F. Barioli, W. Barrett, S. M. Fallat, H. T. Hall, L. Hogben, B. Shader, P. van den Driessche, and H. van der Holst. Zero forcing parameters and minimum rank problems. *Linear Algebra Appl.*, 433: 401–411, 2010.
+
+[4] D. Burgarth and V. Giovannetti. Full control by locally induced relaxation. *Phys. Rev. Lett.*, 99: 100501, 2007.
+
+[5] S. Fallat and L. Hogben. Minimum Rank, Maximum Nullity, and Zero Forcing Number of Graphs. In *Handbook of Linear Algebra*, 2nd ed., L. Hogben, ed., CRC Press, Boca Raton, FL, 2013.
+
+[6] C. Grood, J. Harmse, L. Hogben, T. J. Hunter, B. Jacob, A. Klimas, and S. McCathern. Minimum rank with zero diagonal. *Electron. J. Linear Algebra*, 27: 458–477, 2014.
+
+[7] L. Hogben, K. Palmowski, D. Roberson, and S. Severini. Orthogonal Representations, Projective Rank, and Fractional Minimum Positive Semidefinite Rank: Connections and New Directions. arXiv:1502.00016 [math.CO], 2015.
+
+[8] IMA-ISU research group on minimum rank (M. Allison, E. Bodine, L. M. DeAlba, J. Debnath, L. DeLoss, C. Garnett, J. Grout, L. Hogben, B. Im, H. Kim, R. Nair, O. Pryporova, K. Savage, B. Shader, A. Wangsness Wehe). Minimum rank of skew-symmetric matrices described by a graph. *Linear Algebra Appl.*, 432: 2457–2472, 2010.
+
+[9] E. Scheinerman and D. Ullman. *Fractional Graph Theory*, Dover, Mineola, NY, 2011; also available online from http://www.ams.jhu.edu/~ers/fgt/.
+
+[10] T. A. Peters. Positive semidefinite maximum nullity and zero forcing number. Ph.D. thesis, Iowa State University, 2012.
+
+[11] N. J. Warnberg. Positive semidefinite propagation time. Ph.D. thesis, Iowa State University, 2014.
\ No newline at end of file
diff --git a/samples/texts_merged/7173360.md b/samples/texts_merged/7173360.md
new file mode 100644
index 0000000000000000000000000000000000000000..931be6c4b43ebc24fbbec3d81a47c1ba47c97044
--- /dev/null
+++ b/samples/texts_merged/7173360.md
@@ -0,0 +1,993 @@
+
+---PAGE_BREAK---
+
+INDIVIDUAL BASED MODEL WITH COMPETITION
+IN SPATIAL ECOLOGY*
+
+DMITRI FINKELSHTEIN†, YURI KONDRATIEV‡, AND OLEKSANDR KUTOVIY§
+
+**Abstract.**
+
+We analyze an interacting particle system with a Markov evolution of birth-and-death type. We show that a local competition mechanism (realized via a density dependent mortality) leads to a globally regular behavior of the population in the course of the stochastic evolution.
+
+**Key words.** Continuous systems, spatial birth-and-death processes, correlation functions, individual based models, spatial plant ecology
+
+**AMS subject classifications.** 60J80; 60K35; 82C21; 82C22
+
+**1. Introduction.** Complex systems theory is a quickly growing interdisciplinary area with a very broad spectrum of motivations and applications. Having in mind biological applications, S. Levin (see [26]) characterized complex adaptive systems by such properties as diversity and individuality of components, localized interactions among components, and the outcomes of interactions used for replication or enhancement of components. In the study of these systems, proper language and techniques are delivered by the interacting particle models which form a rich and powerful direction in modern stochastic and infinite dimensional analysis. Interacting particle systems have a wide use as models in condensed matter physics, chemical kinetics, population biology, ecology (individual based models), sociology and economics (agent based models).
+
+In this paper we consider an individual based model (IBM) in spatial ecology introduced by Bolker and Pacala [4, 5] and Dieckmann and Law [6] (the BDLP model). A population in this model is represented by a configuration of motionless organisms (plants) located in an infinite habitat (a Euclidean space in our considerations). The habitat is taken to be a continuous space, as opposed to the discrete spatial lattices used in most mathematical models of interacting particle systems. We need the infinite habitat to avoid boundary effects in the population evolution; this point is quite similar to the necessity of working in the thermodynamic limit for models of statistical physics. Let us also mention a recent paper [2] in which a modification of the BDLP model for the case of moving organisms (e.g., branching diffusion of the plankton) was considered.
+
+A general IBM in plant ecology is a stochastic Markov process on the configuration space whose events comprise birth and death of the configuration points, i.e., we are dealing with a birth-and-death process in the continuum. In the particular case of the BDLP model, each plant produces seeds independently of the others, and these seeds are then distributed in space according to a dispersion kernel $a^+$. This part of the process may be considered as a kind of spatial branching. At the same time, the model also includes a mortality mechanism. The mortality intensity consists of
+
+*This work was supported by DFG through SFB-701, Bielefeld University, Germany.
+†Institute of Mathematics, National Academy of Sciences of Ukraine, Kyiv, Ukraine(fdl@imath.kiev.ua).
+‡Fakultät für Mathematik, Universität Bielefeld, 33615 Bielefeld, Germany (kondrat@math.uni-bielefeld.de); Reading University.
+§Fakultät für Mathematik, Universität Bielefeld, 33615 Bielefeld, Germany (kutoviy@math.uni-bielefeld.de).
+---PAGE_BREAK---
+
+two parts. The first one corresponds to a constant intrinsic mortality $m > 0$, such that any plant dies, independently of the others, after a random time (exponentially distributed with parameter $m$). The second part of the mortality rate is density dependent. The latter is expressed in terms of a competition kernel $a^-$ which describes an additional mortality rate, for any given point of the configuration, coming from the rest of the population; see Section 3 for the precise description of the model, in particular, (3.6). The latter formula gives the heuristic form of the Markov generator in the BDLP model.
+
+Assuming the existence of the corresponding Markov process, we derive in Section 5 an evolution equation for the correlation functions $k_t^{(n)}$, $n \ge 1$, of the considered model. In [4, 5], [6] this system was called the system of spatial moment equations for plant competition and, actually, this system itself was taken as the definition of the dynamics in the BDLP model. The mathematical structure of the evolution equation for the correlation functions is close to other well-known hierarchical systems in mathematical physics, e.g., the BBGKY hierarchy for Hamiltonian dynamics (see, e.g., [3]) or the diffusion hierarchy for the gradient stochastic dynamics in the continuum (see, e.g., [21]). As in all hierarchical chains of equations, we cannot expect an explicit form of the solution; moreover, even the existence problem for these equations is a highly delicate question.
+
+There is an approximative approach to producing information about the behavior of solutions to such hierarchical chains. This approach is called the closure procedure and consists of the following steps. The first step is to cut off all correlation functions of higher orders, and the second one is to replace the remaining higher-order correlation functions by properly factorized products of the lower-order ones. As a result, one obtains a finite system of non-linear equations instead of the original linear but infinite hierarchical system. This closure procedure is essentially non-unique, see [7].
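+
+For illustration (in the notation of Section 3 below), the first of the moment equations for the BDLP model, in the translation invariant case, reads
+
+$$ \frac{d}{dt} k_t^{(1)} = (\varkappa^{+} - m)\, k_t^{(1)} - \varkappa^{-} \int_{\mathbb{R}^d} a^{-}(x - y)\, k_t^{(2)}(x, y)\, dy, $$
+
+and the simplest (mean-field) closure $k_t^{(2)}(x, y) \approx k_t^{(1)} \cdot k_t^{(1)}$ turns it into the logistic equation $\frac{d}{dt} k_t^{(1)} = (\varkappa^{+} - m) k_t^{(1)} - \varkappa^{-} \bigl(k_t^{(1)}\bigr)^2$; other factorizations of $k_t^{(2)}$ lead to different finite systems.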
+
+The aim of this paper is to study the moment equations for the BDLP model by methods of functional analysis and analysis on configuration spaces developed in [13], [14], [15] and already applied to non-equilibrium birth-and-death type continuous space stochastic dynamics in [16], [18]. We obtain rigorous results concerning the existence and properties of the solution for different classes of initial conditions. One of the main questions we clarify in the paper concerns the role of the competition mechanism in the regulation of the spatial structure of an evolving population. More precisely, considering the model without competition, i.e., the case $a^- = 0$, we arrive at the situation of the so-called continuous contact model [9], [17], [22]. In the ecological framework, this model describes free growth of a plant population with a given constant mortality. We note that (independently of the value of the mortality $m > 0$) the considered contact model exhibits very strong clustering, which is reflected in the bound (3.5) on the correlation functions at any moment of time $t > 0$. Note that this effect was discovered on the level of computer simulations already in [2], and now it has a rigorous mathematical formulation and clarification. A direct consequence of the competition in the model is the suppression of such clustering. Namely, assuming strong enough competition and a large intrinsic mortality $m$, we prove a sub-Poissonian bound for the solution to the moment equations provided such a bound was true for the initial state. Moreover, we clarify the specific influences of the constant and the density dependent mortality intensities separately. More precisely, a large enough intrinsic mortality $m$ gives a uniform-in-time bound for each correlation function, and strong competition ensures a regular spatial distribution of the typical configuration at any moment of
+---PAGE_BREAK---
+
+time that is reflected in the sub-Poissonian bound. Joint influence of the intrinsic
+mortality and the competition leads to the existence of the unique invariant measure
+for our model, which is just the Dirac measure concentrated on the empty configuration.
+The latter means that the corresponding stochastic evolution of the population is
+asymptotically exhausting.
+
+We would like to mention also the work [10], in which the BDLP model was studied in the case of a bounded habitat in the stochastic analysis framework. That case differs essentially from the model we consider in the present paper, as do the main problems studied in [10], which are related to scaling limits for the considered processes.
+
+**2. General facts and notations.** Let $\mathcal{B}(\mathbb{R}^d)$ be the family of all Borel sets in $\mathbb{R}^d$. $\mathcal{B}_b(\mathbb{R}^d)$ denotes the system of all bounded sets in $\mathcal{B}(\mathbb{R}^d)$.
+
+The space of $n$-point configurations is
+
+$$ \Gamma_0^{(n)} = \Gamma_{0,\mathbb{R}^d}^{(n)} := \{\eta \subset \mathbb{R}^d \mid |\eta| = n\}, \quad n \in \mathbb{N}_0 := \mathbb{N} \cup \{0\}, $$
+
+where $|A|$ denotes the cardinality of the set $A$. The space $\Gamma_{\Lambda}^{(n)} := \Gamma_{0, \Lambda}^{(n)}$ for $\Lambda \in \mathcal{B}_b(\mathbb{R}^d)$ is defined analogously to the space $\Gamma_0^{(n)}$. As a set, $\Gamma_0^{(n)}$ is equivalent to the symmetrization of
+
+$$ \widetilde{(\mathbb{R}^d)^n} = \{(x_1, \dots, x_n) \in (\mathbb{R}^d)^n \mid x_k \neq x_l \text{ if } k \neq l\}, $$
+
+i.e. $\widetilde{(\mathbb{R}^d)^n}/S_n$, where $S_n$ is the permutation group over $\{1, \dots, n\}$. Hence one can introduce the corresponding topology and Borel $\sigma$-algebra, which we denote by $O(\Gamma_0^{(n)})$ and $\mathcal{B}(\Gamma_0^{(n)})$, respectively. One can also define a measure $m^{(n)}$ as the image of the product of $n$ copies of the Lebesgue measure $dm(x) = dx$ on $(\mathbb{R}^d, \mathcal{B}(\mathbb{R}^d))$.
+
+The space of finite configurations
+
+$$ \Gamma_0 := \bigsqcup_{n \in \mathbb{N}_0} \Gamma_0^{(n)} $$
+
+is equipped with the topology of the disjoint union. Therefore, one can define the corresponding Borel $\sigma$-algebra $\mathcal{B}(\Gamma_0)$.
+
+A set $B \in \mathcal{B}(\Gamma_0)$ is called bounded if there exist $\Lambda \in \mathcal{B}_b(\mathbb{R}^d)$ and $N \in \mathbb{N}$ such that $B \subset \bigsqcup_{n=0}^N \Gamma_\Lambda^{(n)}$. The Lebesgue-Poisson measure $\lambda_z$ on $\Gamma_0$ is defined as
+
+$$ \lambda_z := \sum_{n=0}^{\infty} \frac{z^n}{n!} m^{(n)}. $$
+
+Here $z > 0$ is the so-called activity parameter. The restriction of $\lambda_z$ to $\Gamma_\Lambda$ will also be denoted by $\lambda_z$.
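+
+As a simple illustration of how one computes with $\lambda_z$: for a measurable function $f \in L^1(\mathbb{R}^d, dm)$, the definition of $\lambda_z$ yields
+
+$$ \int_{\Gamma_0} \prod_{x \in \eta} f(x)\, d\lambda_z(\eta) = \sum_{n=0}^{\infty} \frac{z^n}{n!} \left( \int_{\mathbb{R}^d} f(x)\, dx \right)^n = \exp\left( z \int_{\mathbb{R}^d} f(x)\, dx \right), $$
+
+with the convention that the empty product (corresponding to $\eta = \emptyset$) equals $1$.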
+
+The configuration space
+
+$$ \Gamma := \{\gamma \subset \mathbb{R}^d \mid |\gamma \cap \Lambda| < \infty, \text{ for all } \Lambda \in \mathcal{B}_b(\mathbb{R}^d)\} $$
+
+is equipped with the vague topology. It is a Polish space (see e.g. [15]). The corresponding Borel $\sigma$-algebra $\mathcal{B}(\Gamma)$ is defined as the smallest $\sigma$-algebra for which all mappings $N_{\Lambda}: \Gamma \to \mathbb{N}_{0}$, $N_{\Lambda}(\gamma) := |\gamma \cap \Lambda|$, are measurable, i.e.,
+
+$$ \mathcal{B}(\Gamma) = \sigma(N_{\Lambda}|\Lambda \in \mathcal{B}_b(\mathbb{R}^d)). $$
+---PAGE_BREAK---
+
+One can also show that $\Gamma$ is the projective limit of the spaces $\{\Gamma_\Lambda\}_{\Lambda \in \mathcal{B}_b(\mathbb{R}^d)}$ w.r.t. the projections $p_\Lambda : \Gamma \to \Gamma_\Lambda$, $p_\Lambda(\gamma) := \gamma_\Lambda$, $\Lambda \in \mathcal{B}_b(\mathbb{R}^d)$.
+
+The Poisson measure $\pi_z$ on $(\Gamma, \mathcal{B}(\Gamma))$ is given as the projective limit of the family of measures $\{\pi_z^\Lambda\}_{\Lambda \in \mathcal{B}_b(\mathbb{R}^d)}$, where $\pi_z^\Lambda$ is the measure on $\Gamma_\Lambda$ defined by $\pi_z^\Lambda := e^{-zm(\Lambda)}\lambda_z$.
+
+We will use the following classes of functions: $L_{\text{ls}}^0(\Gamma_0)$ is the set of all measurable functions on $\Gamma_0$ which have a local support, i.e. $G \in L_{\text{ls}}^0(\Gamma_0)$ if there exists $\Lambda \in \mathcal{B}_b(\mathbb{R}^d)$ such that $G|_{\Gamma_0 \setminus \Gamma_\Lambda} = 0$; $\mathcal{B}_{\text{bs}}(\Gamma_0)$ is the set of bounded measurable functions with bounded support, i.e. $G|_{\Gamma_0 \setminus B} = 0$ for some bounded $B \in \mathcal{B}(\Gamma_0)$.
+
+On $\Gamma$ we consider the set of cylinder functions $\mathcal{FL}^0(\Gamma)$, i.e. the set of all measurable functions $F$ on $(\Gamma, \mathcal{B}(\Gamma))$ which are measurable w.r.t. $\mathcal{B}_\Lambda(\Gamma)$ for some $\Lambda \in \mathcal{B}_b(\mathbb{R}^d)$. These functions are characterized by the relation $F(\gamma) = F|_{\Gamma_\Lambda}(\gamma_\Lambda)$.
+
+The following mapping between functions on $\Gamma_0$, e.g. $L_{\text{ls}}^0(\Gamma_0)$, and functions on $\Gamma$, e.g. $\mathcal{FL}^0(\Gamma)$, plays the key role in our further considerations:
+
+$$KG(\gamma) := \sum_{\eta \Subset \gamma} G(\eta), \quad \gamma \in \Gamma, \qquad (2.1)$$
+
+where $G \in L_{\text{ls}}^0(\Gamma_0)$, see e.g. [13, 24, 25]. The summation in the latter expression is taken over all finite subconfigurations of $\gamma$, which is denoted by the symbol $\eta \Subset \gamma$. The mapping $K$ is linear, positivity preserving, and invertible, with
+
+$$K^{-1}F(\eta) := \sum_{\xi \subset \eta} (-1)^{|\eta \setminus \xi|} F(\xi), \quad \eta \in \Gamma_0. \qquad (2.2)$$
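+
+To see how (2.2) inverts (2.1), consider a two-point configuration $\eta = \{x, y\}$. Writing $F = KG$, formula (2.2) gives
+
+$$ (K^{-1}F)(\{x,y\}) = F(\{x,y\}) - F(\{x\}) - F(\{y\}) + F(\emptyset), $$
+
+and substituting $F(\{x,y\}) = G(\emptyset) + G(\{x\}) + G(\{y\}) + G(\{x,y\})$, $F(\{x\}) = G(\emptyset) + G(\{x\})$, $F(\{y\}) = G(\emptyset) + G(\{y\})$ and $F(\emptyset) = G(\emptyset)$, all terms except $G(\{x,y\})$ cancel; this inclusion-exclusion cancellation is exactly the mechanism behind (2.2).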
+
+Let $\mathcal{M}_{\text{fm}}^{1}(\Gamma)$ be the set of all probability measures $\mu$ on $(\Gamma, \mathcal{B}(\Gamma))$ which have finite local moments of all orders, i.e. $\int_{\Gamma} |\gamma_{\Lambda}|^n \mu(d\gamma) < +\infty$ for all $\Lambda \in \mathcal{B}_b(\mathbb{R}^d)$ and $n \in \mathbb{N}_0$. A measure $\rho$ on $(\Gamma_0, \mathcal{B}(\Gamma_0))$ is called locally finite iff $\rho(A) < \infty$ for all bounded sets $A$ from $\mathcal{B}(\Gamma_0)$. The set of such measures is denoted by $\mathcal{M}_{\text{lf}}(\Gamma_0)$.
+
+One can define a transform $K^* : \mathcal{M}_{\text{fm}}^{1}(\Gamma) \to \mathcal{M}_{\text{lf}}(\Gamma_0)$, which is dual to the $K$-transform, i.e., for every $\mu \in \mathcal{M}_{\text{fm}}^{1}(\Gamma)$, $G \in \mathcal{B}_{\text{bs}}(\Gamma_0)$ we have
+
+$$\int_{\Gamma} KG(\gamma)\mu(d\gamma) = \int_{\Gamma_0} G(\eta)(K^*\mu)(d\eta).$$
+
+The measure $\rho_\mu := K^*\mu$ is called the correlation measure of $\mu$.
+
+As shown in [13] for $\mu \in \mathcal{M}_{\text{fm}}^{1}(\Gamma)$ and any $G \in L^1(\Gamma_0, \rho_\mu)$ the series (2.1) is $\mu$-a.s. absolutely convergent. Furthermore, $KG \in L^1(\Gamma, \mu)$ and
+
+$$\int_{\Gamma_0} G(\eta) \rho_{\mu}(d\eta) = \int_{\Gamma} (KG)(\gamma) \mu(d\gamma). \qquad (2.3)$$
+
+A measure $\mu \in \mathcal{M}_{\text{fm}}^{1}(\Gamma)$ is called locally absolutely continuous w.r.t. $\pi_z$ iff $\mu_\Lambda := \mu \circ p_\Lambda^{-1}$ is absolutely continuous with respect to $\pi_z^\Lambda$ for all $\Lambda \in \mathcal{B}_b(\mathbb{R}^d)$. In this case $\rho_\mu := K^*\mu$ is absolutely continuous w.r.t. $\lambda_z$. We denote
+
+$$k_\mu(\eta) := \frac{d\rho_\mu}{d\lambda_z}(\eta), \quad \eta \in \Gamma_0.$$
+
+The functions
+
+$$k_{\mu}^{(n)} : (\mathbb{R}^d)^n \longrightarrow \mathbb{R}_+ \qquad (2.4)$$
+---PAGE_BREAK---
+
+$$k_{\mu}^{(n)}(x_1, \dots, x_n) := \begin{cases} k_{\mu}(\{x_1, \dots, x_n\}), & \text{if } (x_1, \dots, x_n) \in \widetilde{(\mathbb{R}^d)^n} \\ 0, & \text{otherwise} \end{cases}$$
+
+are the correlation functions well known in statistical physics, see e.g. [28], [29].
+
+We recall now the so-called Minlos lemma, which plays a very important role in our calculations (cf. [20]).
+
+LEMMA 2.1. Let $n \in \mathbb{N}$, $n \ge 2$, and $z > 0$ be given. Then
+
+$$\int_{\Gamma_0} \dotsi \int_{\Gamma_0} G(\eta_1 \cup \dots \cup \eta_n) H(\eta_1, \dots, \eta_n) d\lambda_z(\eta_1) \dotsi d\lambda_z(\eta_n) \\
+= \int_{\Gamma_0} G(\eta) \sum_{(\eta_1, \dots, \eta_n) \in \mathcal{P}_n(\eta)} H(\eta_1, \dots, \eta_n) d\lambda_z(\eta)$$
+
+for all measurable functions $G: \Gamma_0 \to \mathbb{R}$ and $H: \Gamma_0 \times \dots \times \Gamma_0 \to \mathbb{R}$ for which both sides of the equality make sense. Here $\mathcal{P}_n(\eta)$ denotes the set of all ordered partitions of $\eta$ into $n$ parts, some of which may be empty.
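+
+For example, for $n = 2$ the lemma states that
+
+$$ \int_{\Gamma_0} \int_{\Gamma_0} G(\eta_1 \cup \eta_2) H(\eta_1, \eta_2)\, d\lambda_z(\eta_1)\, d\lambda_z(\eta_2) = \int_{\Gamma_0} G(\eta) \sum_{\xi \subset \eta} H(\xi, \eta \setminus \xi)\, d\lambda_z(\eta), $$
+
+since an ordered partition of $\eta$ into two (possibly empty) parts is the same as a choice of a subset $\xi \subset \eta$. Taking, e.g., $H(\eta_1, \eta_2) = \prod_{x \in \eta_1} f(x) \prod_{y \in \eta_2} g(y)$, the inner sum collapses to $\prod_{x \in \eta} \bigl(f(x) + g(x)\bigr)$.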
+
+**3. Description of the model.** In the present paper we study a special case of general birth-and-death processes in the continuum. Spatial birth-and-death processes describe the evolution of configurations in $\mathbb{R}^d$, in which points of a configuration (particles, individuals, elements) randomly appear (are born) and disappear (die) in the space. Among all birth-and-death processes we distinguish those in which new particles appear only from existing ones. These processes correspond to models of spatial ecology.
+
+The simplest example of such a process is the so-called “free growth” dynamics. During this stochastic evolution the points of a configuration independently create new ones, distributed in space according to a dispersion probability kernel $0 \le a^+ \in L^1(\mathbb{R}^d)$ which is an even function. Any existing point has an infinite life time, i.e., points do not die. Heuristically, the Markov pre-generator of this birth process has the following form:
+
+$$(L_+F)(\gamma) = \varkappa^+ \sum_{y \in \gamma} \int_{\mathbb{R}^d} a^+(x-y) D_x^+ F(\gamma) dx,$$
+
+where
+
+$$D_x^+ F(\gamma) = F(\gamma \cup x) - F(\gamma),$$
+
+and $\varkappa^+ > 0$ is some positive constant.
+
+The existence of the process associated with $L_+$ can be shown using the same technique as in [9], [22]. Let $\mu_t$ be the corresponding evolution of measures in time on $\mathcal{M}_{\text{fm}}^1(\Gamma)$. By $k_t^{(n)}$, $n \ge 0$, we denote the dynamics of the corresponding $n$-th order correlation functions (provided they exist). Note that $k_t^{(1)}$ describes the density of the system at the moment $t$.
+
+Then, using (2.3), for any continuous $\varphi$ on $\mathbb{R}^d$ with bounded support, we obtain
+
+$$\begin{align*}
+\frac{d}{dt} \int_{\mathbb{R}^d} \varphi(x) k_t^{(1)}(x) dx &= \frac{d}{dt} \int_{\Gamma} \langle \varphi, \gamma \rangle d\mu_t(\gamma) = \int_{\Gamma} L_+ \langle \varphi, \gamma \rangle d\mu_t(\gamma) \\
+&= \varkappa^+ \int_{\Gamma} \langle a^+ * \varphi, \gamma \rangle d\mu_t(\gamma) = \varkappa^+ \int_{\mathbb{R}^d} (a^+ * \varphi)(x) k_t^{(1)}(x) dx \\
+&= \varkappa^+ \int_{\mathbb{R}^d} \varphi(x) (a^+ * k_t^{(1)})(x) dx,
+\end{align*}$$
+---PAGE_BREAK---
+
+where * denotes the classical convolution on $\mathbb{R}^d$. Hence, $k_t^{(1)}$ grows exponentially in $t$. In particular, for the translation invariant case one has $k_0^{(1)}(x) \equiv k_0^{(1)} > 0$ and as a result
+
+$$k_t^{(1)} = e^{\varkappa^+ t} k_0^{(1)}. \quad (3.1)$$
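+
+Indeed, since $a^+$ is a probability kernel, convolution with a spatially constant density acts as the identity,
+
+$$ (a^+ * k_t^{(1)})(x) = k_t^{(1)} \int_{\mathbb{R}^d} a^+(y)\, dy = k_t^{(1)}, $$
+
+so the computation above reduces to the linear equation $\frac{d}{dt} k_t^{(1)} = \varkappa^+ k_t^{(1)}$, whose solution is (3.1).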
+
+One of the possibilities to prevent the density growth of the system is to include a death mechanism. The simplest one is described by an independent death rate (mortality) $m > 0$. This means that any element of the population has an independent random life time, exponentially distributed with parameter $m$. Independent death together with the independent creation of new particles by already existing ones describes the so-called contact model in the continuum, see e.g. [22]. The pre-generator of this model is given by the following expression:
+
+$$\begin{align*}
+(L_{\text{CM}} F)(\gamma) &= m \sum_{x \in \gamma} D_x^{-} F(\gamma) + (L_{+} F)(\gamma) \\
+&= m \sum_{x \in \gamma} D_x^{-} F(\gamma) + \varkappa^{+} \sum_{y \in \gamma} \int_{\mathbb{R}^d} a^{+}(x-y) D_x^{+} F(\gamma) dx,
+\end{align*}$$
+
+where
+
+$$D_x^{-} F(\gamma) = F(\gamma \setminus x) - F(\gamma).$$
+
+The Markov process associated with the generator $L_{\text{CM}}$ was constructed in [22]. This construction was generalized in [9] to more general classes of functions $a^+$. Let us note that the contact model in the continuum may be used in epidemiology to model the spread of infection; the values of this process represent the states of the infected population. This is an analog of the contact process on a lattice. Of course, such an interpretation lies outside the spatial ecology setting. On the other hand, the contact process is a spatial branching process with a given mortality rate.
+
+The dynamics of correlation functions in the contact model was considered in [17]. Namely, taking $m = 1$ for concreteness, for any $n \ge 1$ and $t > 0$ the correlation function of $n$-th order has the following form
+
+$$\begin{align}
+k_t^{(n)}(x_1, \dots, x_n) &= e^{n(\varkappa^+-1)t} \left[ \bigotimes_{i=1}^n e^{tL_{a+}^i} \right] k_0^{(n)}(x_1, \dots, x_n) \tag{3.2} \\
+&\quad + \varkappa^+ \int_0^t e^{n(\varkappa^+-1)(t-s)} \left[ \bigotimes_{i=1}^n e^{(t-s)L_{a+}^i} \right] \nonumber \\
+&\quad \times \sum_{i=1}^n k_s^{(n-1)}(x_1, \dots, \tilde{x}_i, \dots, x_n) \sum_{j: j \neq i} a^+(x_i - x_j) ds, \nonumber
+\end{align}$$
+
+where
+
+$$L_{a+}^i k^{(n)}(x_1, \dots, x_n) = \varkappa^+ \int_{\mathbb{R}^d} a^+(x_i - y) [k^{(n)}(x_1, \dots, x_{i-1}, y, x_{i+1}, \dots, x_n) - k^{(n)}(x_1, \dots, x_n)] dy$$
+
+and the symbol $\tilde{x}_i$ means that the $i$-th coordinate is omitted. Note that $L_{a+}^i$ is a Markov generator and the corresponding semigroup (in $L^\infty$ space) preserves positivity.
+---PAGE_BREAK---
+
+It was also shown in [17] that if there exists a constant $C > 0$ (independent of $n$) such that for any $n \ge 0$ and $(x_1, \dots, x_n) \in (\mathbb{R}^d)^n$
+
+$$k_0^{(n)}(x_1, \dots, x_n) \le n! C^n,$$
+
+then for any $t \ge 0$ and almost all (a.a.) $(x_1, \dots, x_n) \in (\mathbb{R}^d)^n$ (w.r.t. Lebesgue measure) the following estimate holds for all $n \ge 0$
+
+$$k_t^{(n)}(x_1, \dots, x_n) \le \left(\varkappa^+(t)\right)^n (1+a_0)^n e^{n(\varkappa^+-1)t} (C+t)^n\, n!. \quad (3.3)$$
+
+Here
+
+$$a_0 = \|a^+\|_{L^\infty(\mathbb{R}^d)}, \quad \varkappa^+(t) := \max[1, \varkappa^+, \varkappa^+e^{-(\varkappa^+-1)t}].$$
+
+For the translation invariant case the value $\varkappa^+ = 1$ is critical. Namely, from (3.2) we deduce that
+
+$$k_t^{(1)} = e^{(\varkappa^+-1)t} k_0^{(1)}. \quad (3.4)$$
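+
+This can be read off directly from (3.2) with $n = 1$: the sum $\sum_{j:\, j \ne i} a^+(x_i - x_j)$ is then empty, so the inhomogeneous term vanishes, and for a spatially constant initial condition the generator $L_{a^+}$ annihilates constants, so that
+
+$$ k_t^{(1)} = e^{(\varkappa^+ - 1)t} e^{tL_{a^+}} k_0^{(1)} = e^{(\varkappa^+ - 1)t} k_0^{(1)}, $$
+
+which is exactly (3.4).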
+
+Therefore, for $\varkappa^+ < 1$ the density decreases exponentially to 0 (as $t \to \infty$), for $\varkappa^+ > 1$ the density increases exponentially to $\infty$, and for $\varkappa^+ = 1$ the density stays constant. One can easily see from the estimate (3.3) that, in the case $\varkappa^+ < 1$, the correlation functions of all orders decrease to 0 as $t \to \infty$. On the other hand, for fixed $t$, the estimate (3.3) implies a factorial bound in $n$ for $k_t^{(n)}$. As a result, we may expect clustering in our system. To show clustering, we start from a Poisson distribution of particles and obtain a lower bound for the time evolution of correlations between particles in a small region.
+
+Hence, let $\varkappa^+ < 1$ and $k_0^{(n)} = C^n$. Let $B$ be a bounded domain of $\mathbb{R}^d$ such that
+
+$$\alpha := \inf_{x,y \in B} a^+(x-y) > 0.$$
+
+Let $\beta = \min\{\alpha\varkappa^+, C\}$. For any $\{x_1, x_2\} \subset B$, formula (3.2) implies
+
+$$k_t^{(2)}(x_1, x_2) \ge 2C\varkappa^+\alpha \int_0^t e^{2(\varkappa^+-1)(t-s)}ds \ge 2\beta^2te^{2(\varkappa^+-1)t}.$$
+
+We consider $t \ge 1$. One can prove by induction that for any $\{x_1, \dots, x_n\} \subset B$, $n \ge 2$
+
+$$k_t^{(n)}(x_1, \dots, x_n) \ge \beta^n e^{n(\varkappa^+-1)t} n! \quad (3.5)$$
+
+Indeed, for $n=2$ this statement has been proved. Suppose that (3.5) holds for $n-1$. Then, by (3.2), one has
+
+$$\begin{align*}
+k_t^{(n)}(x_1, \dots, x_n) &\ge \varkappa^+ \int_0^t e^{n(\varkappa^+-1)(t-s)} n\beta^{n-1} e^{(n-1)(\varkappa^+-1)s} (n-1)!(n-1)\alpha ds \\
+&\ge \beta^n n! e^{n(\varkappa^+-1)t} \int_0^t e^{-(\varkappa^+-1)s} ds \ge \beta^n e^{n(\varkappa^+-1)t} n!.
+\end{align*}$$
+
+As was mentioned before, the latter bound shows the clustering in the contact model. All of the previous considerations may be extended to the case $m \ne 1$: we should only replace 1 by $m$ in the previous calculations.
+---PAGE_BREAK---
+
+As a conclusion we have: the presence of mortality ($m > \varkappa^+$) in the free growth model prevents the growth of the density, i.e., the correlation functions of all orders decay in time. But it does not affect the clustering in the system. One of the possibilities to prevent such clustering is to consider a so-called density dependent death rate. Namely, let us consider the following pre-generator:
+
+$$ (LF)(\gamma) = \sum_{x \in \gamma} \left[ m + \varkappa^{-} \sum_{y \in \gamma \setminus x} a^{-}(x-y) \right] D_{x}^{-} F(\gamma) \quad (3.6) \\ + \varkappa^{+} \int_{\mathbb{R}^{d}} \sum_{y \in \gamma} a^{+}(x-y) D_{x}^{+} F(\gamma) dx. $$
+
+Here $0 \le a^- \in L^1(\mathbb{R}^d)$ is an arbitrary even function such that
+
+$$ \int_{\mathbb{R}^d} a^-(x)dx = 1 $$
+
+(in other words, $a^-$ is a probability density) and $\varkappa^- > 0$ is some positive constant. It is easy to see that the operator $L$ is well-defined, for example, on $\mathcal{FL}^0(\Gamma)$.
+
+The generator (3.6) describes the Bolker-Dieckmann-Law-Pacala (BDLP) model, which was introduced in [4, 5, 6]. During the corresponding stochastic evolution the birth of individuals occurs independently, while the death is governed not only by the global regulation (mortality) but also by the local regulation with kernel $\varkappa^{-}a^{-}$. This regulation may be described as a competition (e.g., for resources) between individuals in the population.
+
+The main result of this article is presented in Section 5, Theorem 5.1. It may be informally stated in the following way:
+
+If the mortality $m$ and the competition kernel $\varkappa^{-}a^{-}$ are large enough, then the dynamics of correlation functions associated with the pre-generator (3.6) preserves (sub-)Poissonian bound for correlation functions for all times.
+
+In particular, it prevents clustering in the model.
+
+In the next sections we explain how to prove this fact. In the last section of the present paper we discuss the necessity of considering a “large enough” death rate.
+
+**4. Semigroup for the symbol of the generator.** The problem of the construction of the corresponding process in $\Gamma$ concerns the possibility of constructing the semigroup associated with $L$. This semigroup determines the solution to the Kolmogorov equation, which formally (i.e., only in the sense of the action of the operator) has the following form:
+
+$$ \frac{dF_t}{dt} = LF_t, \quad F_t|_{t=0} = F_0. \qquad (4.1) $$
+
+Showing that $L$ generates a semigroup in some reasonable functional space on $\Gamma$ seems to be a difficult problem. The difficulty is hidden in the complex structure of the non-linear infinite dimensional space $\Gamma$.
+
+In various applications the evolution of the corresponding correlation functions (or measures) already helps to understand the behavior of the process and gives candidates for invariant states. The evolution of correlation functions of the process is related heuristically to the evolution of states of our IPS. The latter evolution
+---PAGE_BREAK---
+
+is formally given as a solution to the dual Kolmogorov equation (Fokker-Planck equation):
+
+$$ \frac{d\mu_t}{dt} = L^*\mu_t, \quad \mu_t|_{t=0} = \mu_0, \qquad (4.2) $$
+
+where $L^*$ is the adjoint operator to $L$ on $\mathcal{M}_{\text{fm}}^{1}(\Gamma)$, provided, of course, that it exists.
+
+In the recent paper [16], the authors proposed an analytic approach to the construction of a non-equilibrium process on $\Gamma$ which deeply uses the technique of harmonic analysis on configuration spaces. In the present paper we follow the scheme proposed in [16] in order to construct the evolution of correlation functions. The existence problem for the evolution of states in $\mathcal{M}_{\text{fm}}^{1}(\Gamma)$, and hence for the corresponding process on $\Gamma$, is not treated in this paper. It seems to be a very technical question and remains open.
+
+Following the general scheme, we first construct the evolution of functions corresponding to the symbol ($K$-image) $\hat{L} = K^{-1}LK$ of the operator $L$ in an $L^1$-space on $\Gamma_0$ w.r.t. a weighted Lebesgue-Poisson measure. This weight is crucial for the corresponding evolution of correlation functions: it determines the growth of the correlation functions in time and space. Below we carry out a detailed realization of this scheme.
+
+Let us set for $\eta \in \Gamma_0$
+
+$$ E^{a^\#}(\eta) := \sum_{x \in \eta} \sum_{y \in \eta \setminus x} a^\#(x-y), $$
+
+where $a^\#$ denotes either $a^-$ or $a^+$.
+
+**PROPOSITION 4.1.** *The image of $L$ under the $K$-transform (the symbol of the operator $L$) on functions $G \in \mathcal{B}_{\text{bs}}(\Gamma_0)$ has the following form*
+
+$$
+\begin{aligned}
+(\hat{L}G)(\eta) &:= (K^{-1}LKG)(\eta) \\
+&= -\left(m|\eta| + \varkappa^- E^{a^-}(\eta)\right) G(\eta) - \varkappa^- \sum_{x \in \eta} \sum_{y \in \eta \setminus x} a^-(x-y)G(\eta \setminus y) \\
+&\quad + \varkappa^+ \int_{\mathbb{R}^d} \sum_{y \in \eta} a^+(x-y)G((\eta \setminus y) \cup x)dx \\
+&\quad + \varkappa^+ \int_{\mathbb{R}^d} \sum_{y \in \eta} a^+(x-y)G(\eta \cup x)dx.
+\end{aligned}
+ $$
+
+For the proof see [8].
+
+With the help of Proposition 4.1, we derive the evolution equation for quasi-observables (functions on $\Gamma_0$) corresponding to the Kolmogorov equation (4.1). It has the following form
+
+$$ \frac{dG_t}{dt} = \hat{L}G_t, \quad G_t|_{t=0} = G_0. \qquad (4.3) $$
+
+Then, in a way analogous to that in which the Fokker-Planck equation (4.2) was obtained from (4.1), we get the evolution equation for the correlation functions corresponding to equation (4.3):
+
+$$ \frac{dk_t}{dt} = \hat{L}^*k_t, \quad k_t|_{t=0} = k_0. \qquad (4.4) $$
+---PAGE_BREAK---
+
+The precise form of the adjoint operator $\hat{L}^*$ will be given in Section 5. It is very
+important to emphasize that in the papers [4, 5] the equation (4.4) was obtained from
+rather heuristic arguments and, moreover, it was taken as the definition of the
+evolution of the BDLP model.
+
+Let $\lambda$ be the Lebesgue-Poisson measure on $\Gamma_0$ with activity parameter equal to 1.
+
+For an arbitrary and fixed $C > 0$ we consider the operator $\hat{L}$ as a pre-generator of a semigroup in the functional space
+
+$$ \mathcal{L}_C := L^1(\Gamma_0, C^{|\eta|}\lambda(d\eta)). \quad (4.5) $$
+
+In this section, the symbol $\|\cdot\|_C$ stands for the norm of the space (4.5).
+
+For any $\omega > 0$ we introduce the set $\mathcal{H}(\omega, 0)$ of all densely defined closed operators $T$ on $\mathcal{L}_C$, the resolvent set $\rho(T)$ of which contains the sector
+
+$$ \text{Sect} \left( \frac{\pi}{2} + \omega \right) := \left\{ \zeta \in \mathbb{C} \mid \left| \arg \zeta \right| < \frac{\pi}{2} + \omega \right\}, \quad \omega > 0 $$
+
+and for any $\varepsilon > 0$
+
+$$ ||(T - \zeta 1)^{-1}|| \leq \frac{M_\varepsilon}{|\zeta|}, \quad |\arg \zeta| \leq \frac{\pi}{2} + \omega - \varepsilon, $$
+
+where $M_\varepsilon$ does not depend on $\zeta$.
+
+Let $\mathcal{H}(\omega, \theta)$, $\theta \in \mathbb{R}$, denote the set of all operators of the form $T = T_0 + \theta$ with $T_0 \in \mathcal{H}(\omega, 0)$.
+
+**REMARK 4.1.** It is well-known (see e.g., [12]), that any $T \in \mathcal{H}(\omega, \theta)$ is a generator of a semigroup $U(t)$ which is holomorphic in the sector $|\arg t| < \omega$. The function $U(t)$ is not necessarily uniformly bounded, but it is quasi-bounded, i.e.
+
+$$ ||U(t)|| \leq \text{const } |e^{\theta t}| $$
+
+in any sector of the form $|\arg t| \leq \omega - \varepsilon$.
+
+**PROPOSITION 4.2.** *For any $C > 0$, $m > 0$, and $\varkappa^- > 0$ the operator*
+
+$$ (L_0 G)(\eta) := - \left(m|\eta| + \varkappa^- E^{a^-}(\eta)\right) G(\eta), $$
+
+$$ D(L_0) = \left\{ G \in \mathcal{L}_C \mid \left(m|\eta| + \varkappa^- E^{a^-}(\eta)\right) G(\eta) \in \mathcal{L}_C \right\} $$
+
+is a generator of a contraction semigroup on $\mathcal{L}_C$. Moreover, $L_0 \in \mathcal{H}(\omega, 0)$ for all $\omega \in (0, \frac{\pi}{2})$.
+
+*Proof.* It is not difficult to show that $L_0$ is a densely defined and closed operator in $\mathcal{L}_C$.
+
+Let $0 < \omega < \frac{\pi}{2}$ be arbitrary and fixed. It is clear that for all $\zeta \in \text{Sect}(\frac{\pi}{2} + \omega)$
+
+$$ |m|\eta| + \varkappa^{-} E^{a^{-}}(\eta) + \zeta| > 0, \quad \eta \in \Gamma_{0}. $$
+
+Therefore, for any $\zeta \in \text{Sect}(\frac{\pi}{2} + \omega)$ the inverse operator $(L_0 - \zeta 1)^{-1}$, the action of which is given by
+
+$$ [(L_0 - \zeta 1)^{-1} G](\eta) = - \frac{1}{m|\eta| + \varkappa^{-} E^{a^{-}}(\eta) + \zeta} G(\eta), \quad (4.6) $$
+---PAGE_BREAK---
+
+is well defined on the whole space $\mathcal{L}_C$. Moreover, it is a bounded operator in this
+space and
+
+$$
+\| (L_0 - \zeta \mathbf{1})^{-1} \| \le \begin{cases} \frac{1}{|\zeta|}, & \text{if } \operatorname{Re} \zeta \ge 0, \\ \frac{M}{|\zeta|}, & \text{if } \operatorname{Re} \zeta < 0, \end{cases} \tag{4.7}
+$$
+
+where the constant $M$ does not depend on $\zeta$.
+
+The case Re ζ ≥ 0 is a direct consequence of (4.6) and inequality
+
+$$
+m|\eta| + \varkappa^{-} E^{a^{-}}(\eta) + \mathrm{Re}\,\zeta \geq \mathrm{Re}\,\zeta \geq 0.
+$$
+
+We prove now the bound (4.7) in the case Re ζ < 0. Using (4.6), we have
+
+$$
+\begin{align*}
+||(L_0 - \zeta \mathbf{1})^{-1} G||_C &= \left\| \frac{1}{\left| m|\cdot| + \varkappa^{-}E^{a^{-}}(\cdot) + \zeta \right|} G(\cdot) \right\|_C \\
+&= \frac{1}{|\zeta|} \left\| \frac{|\zeta|}{\left| m|\cdot| + \varkappa^{-}E^{a^{-}}(\cdot) + \zeta \right|} G(\cdot) \right\|_C .
+\end{align*}
+$$
+
+Since $\zeta \in \text{Sect} (\frac{\pi}{2} + \omega)$,
+
+$$
+|\operatorname{Im} \zeta| \geq |\zeta| |\sin(\frac{\pi}{2} + \omega)| = |\zeta| \cos \omega.
+$$
+
+Hence,
+
+$$
+\frac{|\zeta|}{|m|\eta| + \varkappa^{-}E^{a^{-}}(\eta) + \zeta|} \leq \frac{|\zeta|}{|\operatorname{Im}\zeta|} \leq \frac{1}{\cos\omega} =: M
+$$
+
+and (4.7) is fulfilled.
+
+The rest of the statement of the proposition follows directly from the Hille–Yosida
+theorem (see e.g., [12]). □
+
+We define now
+
+$$
+(L_1 G)(\eta) := \varkappa^{-} \sum_{x \in \eta} \sum_{y \in \eta \setminus x} a^{-}(x-y)G(\eta \setminus y), \quad G \in D(L_1) := D(L_0).
+$$
+
+The lemma below implies that the operator $L_1$ is well-defined.
+
+**LEMMA 4.3.** *For any $\delta > 0$ there exists $C_0 := C_0(\delta) > 0$ such that for all $C < C_0$
+the following estimate holds*
+
+$$
+\Vert L_1 G \Vert_C \le a \Vert L_0 G \Vert_C, \quad G \in D(L_1), \tag{4.8}
+$$
+
+with $a = a(C) < \delta$.
+
+*Proof.* Since the modulus of a sum is dominated by the sum of the moduli,
+
+$$
+||L_1 G||_C = \int_{\Gamma_0} |L_1 G(\eta)| C^{\vert\eta\vert} \lambda(d\eta) \quad (4.9)
+$$
+
+can be estimated by
+
+$$
+\varkappa^{-} \int_{\Gamma_0} \sum_{x \in \eta} \sum_{y \in \eta \setminus x} a^{-}(x-y) |G(\eta \setminus y)| C^{\vert\eta\vert} \lambda(d\eta). \quad (4.10)
+$$
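+Since the Minlos lemma is used repeatedly below, we record here, as an aid to the reader, the form of this standard identity for the Lebesgue-Poisson measure that the computations rely on (see, e.g., [13]), together with the normalization $\int_{\mathbb{R}^d} a^{\pm}(x)dx = 1$ which they implicitly assume:
+
+$$
+\int_{\Gamma_0} \sum_{x \in \eta} H(x, \eta \setminus x)\, \lambda(d\eta) = \int_{\Gamma_0} \int_{\mathbb{R}^d} H(x, \eta)\, dx\, \lambda(d\eta)
+$$
+
+for any measurable $H \geq 0$.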
+---PAGE_BREAK---
+
+By the Minlos lemma, (4.10) is equal to
+
+$$
+\begin{align*}
+& \varkappa^{-} \int_{\Gamma_0} \int_{\mathbb{R}^d} \sum_{x \in \eta} a^{-}(x-y) |G(\eta)| C^{|\eta|+1} dy \lambda(d\eta) = \\
+&= \varkappa^{-} \int_{\Gamma_0} C |\eta| |G(\eta)| C^{|\eta|} \lambda(d\eta) \leq \frac{\varkappa^{-}}{m} C ||L_0 G||_C.
+\end{align*}
+$$
+
+Therefore, (4.8) holds with
+
+$$
+a = \frac{\varkappa^{-} C}{m}.
+$$
+
+Clearly, taking
+
+$$
+C_0 = \frac{\delta m}{\varkappa^{-}}
+$$
+
+we obtain that $a < \delta$ for $C < C_0$. $\square$
+
+We set now
+
+$$
+(L_2 G)(\eta) := (L_{2, \varkappa^+} G)(\eta) = \varkappa^+ \int_{\mathbb{R}^d} \sum_{y \in \eta} a^+(x-y) G((\eta \setminus y) \cup x) dx,
+$$
+
+$$
+G \in D(L_2) := D(L_0).
+$$
+
+The operator $(L_2, D(L_2))$ is well defined due to the lemma below:
+
+**LEMMA 4.4.** *For any $\delta > 0$ there exists $\varkappa_0^+ := \varkappa_0^+(\delta) > 0$ such that for all $\varkappa^+ < \varkappa_0^+$ the following estimate holds*
+
+$$
+||L_2 G||_C \leq a ||L_0 G||_C, \quad G \in D(L_2), \tag{4.11}
+$$
+
+with $a = a(\varkappa^+) < \delta$.
+
+*Proof.* Analogously to the previous lemma we estimate
+
+$$
+||L_2 G||_C = \int_{\Gamma_0} |L_2 G(\eta)| C^{|\eta|} \lambda(d\eta) \quad (4.12)
+$$
+
+by
+
+$$
+\varkappa^+ \int_{\Gamma_0} \int_{\mathbb{R}^d} \sum_{y \in \eta} a^+(x-y) |G((\eta \setminus y) \cup x)| C^{\vert\eta\vert} dx \lambda(d\eta). \quad (4.13)
+$$
+
+By the Minlos lemma, (4.13) is equal to
+
+$$
+\varkappa^+ \int_{\Gamma_0} \sum_{x \in \eta} \int_{\mathbb{R}^d} a^+(x-y) |G(\eta)| C^{|\eta|} dy \lambda(d\eta) \leq \frac{\varkappa^+}{m} ||L_0 G||_C.
+$$
+
+Taking $\varkappa_0^+ = \delta m$ we prove the lemma. $\square$
+
+The operator defined by
+
+$$
+(NG)(\eta) = |\eta|G(\eta), \quad G \in D(L_0) \tag{4.14}
+$$
+
+is called the number operator.
+---PAGE_BREAK---
+
+**REMARK 4.2.** We proved, in particular, that for $G \in D(L_0) = D(L_1) = D(L_2)$
+
+$$
+\begin{align*}
+||L_1 G||_C &\le \varkappa^{-} C ||NG||_C, \\
+||L_2 G||_C &\le \varkappa^{+} ||NG||_C.
+\end{align*}
+$$
+
+Finally, we consider the last part of the operator $\hat{L}$:
+
+$$
+(L_3 G)(\eta) := \varkappa^+ \int_{\mathbb{R}^d} \sum_{y \in \eta} a^+(x-y)G(\eta \cup x)dx, \quad D(L_3) := D(L_0).
+$$
+
+**LEMMA 4.5.** *For any $\delta > 0$ and any $\varkappa^+ > 0$, $C > 0$ such that*
+
+$$
+\varkappa^+ E^{a^+}(\eta) < \delta C (\varkappa^- E^{a^-}(\eta) + m|\eta|) \quad (4.15)
+$$
+
+the following estimate holds
+
+$$
+||L_3 G||_C \leq a ||L_0 G||_C, \quad G \in D(L_3), \tag{4.16}
+$$
+
+with $a = a(\varkappa^+, C) < \delta$.
+
+*Proof.* Using the same tricks as in the two previous lemmas we have
+
+$$
+\begin{align}
+||L_3 G||_C &= \int_{\Gamma_0} |L_3 G(\eta)| C^{\vert\eta\vert} \lambda(d\eta) \nonumber \\
+&\leq \varkappa^+ \int_{\Gamma_0} \int_{\mathbb{R}^d} \sum_{y \in \eta} a^+(x-y) |G(\eta \cup x)| C^{\vert\eta\vert} dx \lambda(d\eta). \tag{4.17}
+\end{align}
+$$
+
+By the Minlos lemma, (4.17) is equal to
+
+$$
+\frac{\varkappa^+}{C} \int_{\Gamma_0} E^{a^+}(\eta) |G(\eta)| C^{|\eta|} \lambda(d\eta).
+$$
+
+The assertion of the lemma is now trivial. □
+
+**THEOREM 4.6.** *Assume that the functions $a^-$, $a^+$ and the constants $\varkappa^-, \varkappa^+ > 0$, $m > 0$ and $C > 0$ satisfy*
+
+$$
+C \varkappa^{-} a^{-} \geq 2 \varkappa^{+} a^{+}, \quad (4.18)
+$$
+
+$$
+m > 2 (\varkappa^{-} C + \varkappa^{+}).
+$$
+
+Then, the operator $\hat{L}$ is a generator of a holomorphic semigroup $\hat{U}_t$, $t \ge 0$ in $\mathcal{L}_C$.
+
+*Proof.* The statement of the theorem follows directly from Remark 4.2, Lemma 4.5 and the theorem about the perturbation of holomorphic semigroup (see, e.g. [12]). For the reader's convenience, below we give its formulation:
+
+For any $T \in \mathcal{H}(\omega, \theta)$ and for any $\varepsilon > 0$ there exist positive constants $\alpha, \delta$ such that if the operator $A$ satisfies
+
+$$
+||Au|| \leq a||Tu|| + b||u||, \quad u \in D(T) \subset D(A),
+$$
+
+with $a < \delta$, $b < \delta$, then $T + A \in \mathcal{H}(\omega - \varepsilon, \alpha)$.
+
+In particular, if $\theta = 0$ and $b = 0$, then $T + A \in \mathcal{H}(\omega - \varepsilon, 0)$.
+
+Following the proof of this theorem (see, e.g. [12]) and taking into account the fact that $L_0 \in \mathcal{H}(\omega, 0)$ for any $\omega \in (0, \frac{\pi}{2})$, one concludes that in our case $\delta$ can be chosen equal to $\frac{1}{2}$. This is exactly the reason for the factor 2 appearing in (4.18). □
+---PAGE_BREAK---
+
+**5. Evolution of correlation functions.** Let us consider the evolution equation (4.4), which corresponds to the operator $\hat{L}^*$
+
+$$ \frac{dk_t}{dt} = \hat{L}^* k_t, \quad k_t|_{t=0} = k_0. $$
+
+Using the general scheme proposed in [8], we find the precise form of $\hat{L}^*$:
+
+$$ \begin{aligned} \hat{L}^* k(\eta) = & - (m|\eta| + \varkappa^- E^{a^-}(\eta)) k(\eta) + \varkappa^+ \sum_{x \in \eta} \sum_{y \in \eta \setminus x} a^+(x-y)k(\eta \setminus x) \\ & + \varkappa^+ \int_{\mathbb{R}^d} \sum_{y \in \eta} a^+(x-y)k((\eta \setminus y) \cup x)dx \\ & - \varkappa^- \int_{\mathbb{R}^d} \sum_{y \in \eta} a^-(x-y)k(\eta \cup x)dx. \end{aligned} $$
+
+The main questions which we would like to study now are the existence and properties of the solution to the hierarchical system of equations (4.4). The answers to these questions are given in the following theorem.
+
+**THEOREM 5.1.** *Suppose that all assumptions of Theorem 4.6 are fulfilled. Then for any initial function $k_0$ from the class*
+
+$$ K_C := \{ k : \Gamma_0 \to \mathbb{R} \mid k \cdot C^{-|\eta|} \in L^\infty(\Gamma_0, \lambda) \} $$
+
+the corresponding solution $k_t$ to (4.4) exists and remains a function from $K_C$
+for any moment of time $t \ge 0$.
+
+*Proof.* Following the scheme proposed in [16], we construct the corresponding evolution of the locally finite measures on $\Gamma_0$. In order to realize this construction we consider the dual space $\mathcal{K}_C$ to the Banach space $\mathcal{L}_C$. The duality is given by the following expression
+
+$$ \langle\langle G, k \rangle\rangle := \int_{\Gamma_0} G \cdot k \, d\lambda, \quad G \in \mathcal{L}_C. \qquad (5.1) $$
+
+It is clear that $\mathcal{K}_C$ is the Banach space with the norm
+
+$$ ||k|| := ||C^{-|\cdot|}k(\cdot)||_{L^{\infty}(\Gamma_0, \lambda)}. $$
+
+Note also, that $k \cdot C^{-|\cdot|} \in L^{\infty}(\Gamma_0, \lambda)$ means that the function $k$ satisfies the bound
+
+$$ |k(\eta)| \leq \text{const } C^{|\eta|} \quad \text{for } \lambda\text{-a.a. } \eta \in \Gamma_0. $$
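+As a simple illustration (not part of the original argument): the correlation functions of the Poisson measure $\pi_z$ with constant intensity $z > 0$ are
+
+$$ k_{\pi_z}(\eta) = z^{|\eta|}, \quad \eta \in \Gamma_0, $$
+
+so $\pi_z$ corresponds to an element of $\mathcal{K}_C$ with $\|k_{\pi_z}\| = \sup_{n \geq 0}(z/C)^n \leq 1$ whenever $z \leq C$. Thus the bound above may be read as a comparison with a Poisson state of intensity $C$.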
+
+The evolution on $\mathcal{K}_C$, which corresponds to $\hat{U}_t$, $t \ge 0$ constructed in Theorem 4.6, may
+be determined in the following way:
+
+$$ \langle\langle G, k_t \rangle\rangle := \langle\langle \hat{U}_t G, k_0 \rangle\rangle. $$
+
+We denote
+
+$$ \hat{U}_t^* k_0 := k_t. $$
+
+Using the same arguments as in [16], it becomes clear that $k_t = \hat{U}_t^* k_0$ is the solution
+to (4.4) in the Banach space $\mathcal{K}_C$. $\square$
+
+It is important to emphasize that in the case of $a^{-} \equiv 0$ and $\varkappa^{+} < m$
+
+$$ k_t^{(n)} \to 0, \quad t \to \infty, \quad \text{for any } n \ge 1, $$
+
+see e.g. [17]. Therefore, we may expect that the correlation functions of our model
+satisfy this property as well.
+---PAGE_BREAK---
+
+**6. Stationary equation for the system of correlation functions.** Let us consider for any $\alpha \in \mathbb{R}$ the following Banach subspace of $K_C$:
+
+$$ K_C^\alpha := \{k \in K_C \mid k^{(0)}(\emptyset) = \alpha\}. $$
+
+In this section we study the existence problem for the solutions to the stationary equation
+
+$$ \hat{L}^*k = 0 \quad (6.1) $$
+
+in $K_C^1$. The main result is formulated in the following way:
+
+**THEOREM 6.1.** *Suppose that*
+
+$$ \frac{C\varkappa^{-}}{m} + \frac{\varkappa^{+}}{m} + \frac{1}{C} < 1 \quad (6.2) $$
+
+*and*
+
+$$ \varkappa^{-}a^{-} \geq \varkappa^{+}a^{+}. $$
+
+*Then the solution $k = (k^{(n)})_{n \geq 0}$ to (6.1) is unique in $K_C^1$ and satisfies*
+
+$$ k^{(n)} = 0, \quad n \geq 1. $$
+
+*Proof*. Let
+
+$$ (\hat{L}^*k)(\eta) = 0. $$
+
+The latter means that
+
+$$ \begin{aligned} \left(m|\eta| + \varkappa^{-}E^{a^{-}}(\eta)\right) k(\eta) = & -\varkappa^{-}\sum_{x \in \eta} \int_{\mathbb{R}^d} k(y \cup \eta)a^{-}(x-y)dy + \\ & +\varkappa^{+}\sum_{x \in \eta} \sum_{y \in \eta \setminus x} a^{+}(x-y)k(\eta \setminus x) + \varkappa^{+}\int_{\mathbb{R}^d} \sum_{y \in \eta} a^{+}(x-y)k((\eta \setminus y) \cup x)dx. \end{aligned} $$
+
+The last relation holds for any $k \in K_C^1$ at the point $\eta = \emptyset$. Hence, one can consider it on $K_C^0$.
+
+Let us denote for $\eta \neq \emptyset$
+
+$$ \begin{aligned} (Sk)(\eta) = & -\frac{\varkappa^{-}}{m|\eta| + \varkappa^{-}E^{a^{-}}(\eta)} \sum_{x \in \eta} \int_{\mathbb{R}^d} k(y \cup \eta)a^{-}(x-y)dy + \\ & +\frac{\varkappa^{+}}{m|\eta| + \varkappa^{-}E^{a^{-}}(\eta)} \sum_{x \in \eta} \sum_{y \in \eta \setminus x} a^{+}(x-y)k(\eta \setminus x) + \\ & +\frac{\varkappa^{+}}{m|\eta| + \varkappa^{-}E^{a^{-}}(\eta)} \int_{\mathbb{R}^d} \sum_{y \in \eta} a^{+}(x-y)k((\eta \setminus y) \cup x)dx \end{aligned} $$
+
+and
+
+$$ (Sk)(\emptyset) = 0. $$
+---PAGE_BREAK---
+
+Let
+
+$$
+\|k\|_C = \operatorname*{ess\,sup}_{\eta \in \Gamma_0} \frac{|k(\eta)|}{C^{|\eta|}},
+$$
+
+then
+
+$$
+\begin{align*}
+& \|Sk\|_C \\
+&\le \|k\|_C \operatorname*{ess\,sup}_{\eta \in \Gamma_0 \setminus \{\emptyset\}} \frac{\varkappa^{-} C}{m |\eta| + \varkappa^{-} E^{a^{-}}(\eta)} \sum_{x \in \eta} \int_{\mathbb{R}^d} a^{-} (x-y) dy \\
+&\quad + \frac{\|k\|_C}{C} \operatorname*{ess\,sup}_{\eta \in \Gamma_0 \setminus \{\emptyset\}} \frac{\varkappa^{+}}{m |\eta| + \varkappa^{-} E^{a^{-}}(\eta)} \sum_{x \in \eta} \sum_{y \in \eta \setminus x} a^{+}(x-y) \\
+&\quad + \|k\|_C \operatorname*{ess\,sup}_{\eta \in \Gamma_0 \setminus \{\emptyset\}} \frac{\varkappa^{+}}{m |\eta| + \varkappa^{-} E^{a^{-}}(\eta)} \int_{\mathbb{R}^d} \sum_{y \in \eta} a^{+}(x-y) dx \\
+&\le \|k\|_C \frac{C\varkappa^{-}}{m} + \|k\|_C \frac{\varkappa^{+}}{m} + \|k\|_C \frac{1}{C} = \|k\|_C \left( \frac{C\varkappa^{-}}{m} + \frac{\varkappa^{+}}{m} + \frac{1}{C} \right),
+\end{align*}
+$$
+
+if
+
+$$
+\varkappa^{+} E^{a^{+}} (\eta) \leq \varkappa^{-} E^{a^{-}} (\eta) + m |\eta|.
+$$
+
+As a result,
+
+$$
+\|S\| \leq \frac{1}{m} C \varkappa^{-} + \frac{1}{m} \varkappa^{+} + \frac{1}{C} < 1.
+$$
+
+Since $\|S\| < 1$, the equation $k = Sk$ has only the trivial solution $k = 0$ in $K_C^0$, and the assertion of the theorem follows. □
+
+**REMARK 6.1.** For any $C > 1$ one may choose $\varkappa^{-} > 0$ and $m > 0$ such that (6.2) is satisfied. The latter means that, asymptotically, our system degenerates to the one with the stationary state $\delta_0(d\gamma)$ (the Dirac measure concentrated on the empty configuration $\emptyset$). In other words, the population evolving under the BDLP dynamics asymptotically dies out.
+
+**7. Further developments.** In Theorem 5.1 we have shown that the functions $k_t^{(n)}$
+are bounded by $C^n$ for all $t > 0$, provided that $k_0$ initially satisfies a bound of the
+same type. Using approximation arguments (see e.g. [16], [18]) one may prove that
+the corresponding time evolution of the correlation function is again the correlation
+function of some probability measure on $\Gamma$. We plan to discuss this problem,
+as well as other probabilistic aspects of the BDLP model, in a forthcoming paper.
+The main aim of the present paper is to analyze the evolution of correlation functions.
+Namely, we have shown that the dynamics of correlation functions stays in the space $K_C$.
+This property seems to be very strong. Showing that a system of correlation functions
+evolving in time stays in the same space is already difficult even for the contact model.
+Namely, (3.3) implies that the evolution of correlation functions at some moment of
+time $t$ may leave the space
+
+$$
+\{ k : \Gamma_0 \to \mathbb{R} | k \cdot C^{-|\eta|} \cdot |\eta|! \in L^\infty(\Gamma_0, \lambda) \}.
+$$
+
+The reason is that $C$ may depend on $t$, which is the case at least for $\varkappa^+ \ge 1$
+(we set $m=1$ for the moment). Hence, we may expect that the dynamics of correlation
+---PAGE_BREAK---
+
+functions for the contact process lives in some bigger space. Of course, this is possible
+only for $\varkappa^+ \le 1$, since for $\varkappa^+ > 1$ the density tends to infinity. Hence, let us consider the
+case $\varkappa^+ = 1$. One candidate for such a bigger space is
+
+$$
+\mathcal{R}_C := \{ k : \Gamma_0 \to \mathbb{R} \mid k \cdot C^{-|\eta|} \cdot (|\eta|!)^2 \in L^\infty(\Gamma_0, \lambda) \}.
+$$
+
+Note, that the invariant measure of the contact process belongs to this space (see [17, Theorem 4.2]), provided that $d \ge 3$, $a^+$ has finite second moment w.r.t. the Lebesgue measure and the Fourier transform of $a^+$ is integrable on $\mathbb{R}^d$. Below we show that the evolution of correlation functions at any moment of time $t$ is a function from $\mathcal{R}_C$.
+
+Indeed, let $\varkappa^+ = 1$ and suppose that there exists $C > 0$ such that for any $n \ge 1$
+and for any $x_1, \dots, x_n \in \mathbb{R}^d$
+
+$$
+k_0^{(n)}(x_1, \dots, x_n) \leq \frac{1}{2} C^n (n!)^2.
+$$
+
+Then, it is clear that $k_0 \in \mathcal{R}_C$. We prove the bound $k_t^{(n)} \leq C^n (n!)^2$ by induction on $n$; suppose that $k_t^{(n-1)} \le C^{n-1}((n-1)!)^2$. By (3.3) we have for any $x_1, \dots, x_n \in \mathbb{R}^d$
+
+$$
+\begin{align}
+& k_t^{(n)}(x_1, \ldots, x_n) \tag{7.1} \\
+& \le \frac{1}{2} C^n (n!)^2 \nonumber \\
+& \quad + \int_0^t \left[ \bigotimes_{i=1}^n e^{(t-s)L_{a^+}^i} \right] \sum_{i=1}^n C^{n-1}((n-1)!)^2 \sum_{j:j \ne i} a^+(x_i - x_j) ds \nonumber \\
+& = \frac{1}{2} C^n (n!)^2 + C^{n-1}((n-1)!)^2 \sum_{i=1}^n \sum_{j:j \ne i} \int_0^t \left( e^{2(t-s)L_a^+} a^+ \right) (x_i - x_j) ds, \nonumber
+\end{align}
+$$
+
+where for $f \in L^1(\mathbb{R}^d)$
+
+$$
+L_{a^{+}} f(x) = \int_{\mathbb{R}^{d}} a^{+}(x-y)[f(y)-f(x)]dy, \quad x \in \mathbb{R}^{d}.
+$$
+
+For the bound above we have used the fact, that for any $1 \le i \ne j \le n$
+
+$$
+L_{a^+}^i a^+(x_i - x_j) = L_{a^+}^j a^+(x_i - x_j) = (L_{a^+} a^+) (x_i - x_j), \quad x_i, x_j \in \mathbb{R}^d.
+$$
+
+This relation can be easily checked by simple computations.
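+For the reader's convenience, here is the computation (assuming, as usual, that $a^+$ is symmetric, $a^+(-x) = a^+(x)$, and normalized): since $L_{a^+} f = a^+ * f - f$ for $f \in L^1(\mathbb{R}^d)$, acting with $L_{a^+}^i$ on the variable $x_i$ gives
+
+$$
+L_{a^+}^i a^+(x_i - x_j) = \int_{\mathbb{R}^d} a^+(x_i - y)\bigl[a^+(y - x_j) - a^+(x_i - x_j)\bigr] dy = (a^+ * a^+)(x_i - x_j) - a^+(x_i - x_j),
+$$
+
+which equals $(L_{a^+} a^+)(x_i - x_j)$; the computation for $L_{a^+}^j$ is identical by the symmetry of $a^+$.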
+
+Note, that $L_{a^+}$ is a generator of a Markov semigroup which preserves positivity
+in $L^1(\mathbb{R}^d)$. Hence,
+
+$$
+g_t(x) := \int_0^t (e^{2(t-s)L_{a^+}} a^+) (x) ds \ge 0, \quad x \in \mathbb{R}^d, t \ge 0,
+$$
+
+and $g_t \in L^1(\mathbb{R}^d)$. Then we have
+
+$$
+g_t(x) = |g_t(x)| = \frac{1}{(2\pi)^d} \left| \int_{\mathbb{R}^d} e^{ipx} \hat{g}_t(p) dp \right| \le \frac{1}{(2\pi)^d} \int_{\mathbb{R}^d} \int_0^t e^{2(t-s)(\hat{a}^+(p)-1)} |\hat{a}^+(p)| ds dp,
+$$
+
+where symbol $\hat{f}$ denotes the Fourier transform of the function $f \in L^1(\mathbb{R}^d)$. Therefore,
+
+$$
+g_t(x) \leq \frac{1}{(2\pi)^d} \int_{\mathbb{R}^d} \frac{1 - e^{2t(\hat{a}^+(p)-1)}}{2(1 - \hat{a}^+(p))} |\hat{a}^+(p)| dp \leq \frac{1}{2(2\pi)^d} \int_{\mathbb{R}^d} \frac{|\hat{a}^+(p)|}{1 - \hat{a}^+(p)} dp.
+$$
+---PAGE_BREAK---
+
+It was shown in [17] that, under the conditions imposed on the function $a^+$ in the case
+of the invariant measure,
+
+$$D := \int_{\mathbb{R}^d} \frac{|\hat{a}^+(p)|}{1 - \hat{a}^+(p)} dp < \infty.$$
+
+Finally, if additionally
+
+$$C \geq \frac{D}{(2\pi)^d},$$
+
+then we obtain from (7.1)
+
+$$k_t^{(n)}(x_1, \dots, x_n) \leq \frac{1}{2} C^n (n!)^2 + \frac{1}{2} C^{n-1} ((n-1)!)^2 n(n-1) \frac{D}{(2\pi)^d} \leq C^n (n!)^2.$$
+
+As a result, $k_t \in \mathcal{R}_C$ for all $t \ge 0$.
+
+Therefore, the dynamics of correlation functions for the contact model stays in $\mathcal{R}_C$; hence, this dynamics exhibits very strong clustering for $\varkappa^+ = m = 1$. As before, we may extend our considerations to the case $m \ne 1$.
+
+Summarizing the previous results of this section, we conclude that the presence of a large mortality and a large competition kernel prevents clustering in the system, making its distribution sub-Poissonian. But is a “large” mortality and competition kernel really necessary? Below we discuss this problem.
+
+If we want to study a quasi-bounded semigroup with the generator $\hat{L}$ on $\mathcal{L}_C$ for some $C > 0$ then, naturally, this generator should be an accretive operator in $\mathcal{L}_C$. Hence, for some $b \ge 0$ the following bound should hold
+
+$$\int_{\Gamma_0} \operatorname{sgn}(G(\eta)) \cdot (\hat{L} - b \mathbb{1}) G(\eta) d\lambda_C(\eta) \le 0, \quad \forall G \in D(\hat{L}),$$
+
+since
+
+$$C^{|\eta|} d\lambda(\eta) = d\lambda_C(\eta).$$
+
+Let us define the “diagonal” part of the operator $\hat{L}$:
+
+$$ (\hat{L}_{diag} G)(\eta) := -m|\eta|G(\eta) - \varkappa^- E^{a^-}(\eta)G(\eta) + \varkappa^+ \int_{\mathbb{R}^d} \sum_{y \in \eta} a^+(x-y)G((\eta \setminus y) \cup x)dx $$
+
+and consider for some $n \ge 1$
+
+$$G = (0, \dots, 0, G^{(n)}, 0, \dots), \quad G^{(n)} \in L^1((\mathbb{R}^d)^n).$$
+
+Then
+
+$$ (\hat{L}G)(\eta) = \begin{cases} \varkappa^+ \int_{\mathbb{R}^d} \sum_{y \in \eta} a^+(x-y)G^{(n)}(\eta \cup x)dx, & |\eta| = n-1 \\ -\varkappa^- \sum_{x \in \eta} \sum_{y \in \eta \setminus x} a^-(x-y)G^{(n)}(\eta \setminus y), & |\eta| = n+1 \\ (\hat{L}_{diag}G^{(n)})(\eta), & |\eta| = n \\ 0, & \text{otherwise} \end{cases}. $$
+
+Note that $\operatorname{sgn}(G(\eta)) = 0$ if $|\eta| \neq n$.
+---PAGE_BREAK---
+
+Therefore, for arbitrary $n \ge 1$
+
+$$
+\begin{align*}
+0 \ge I_n &:= \int_{\Gamma_0} \operatorname{sgn}(G(\eta)) \cdot \bigl((\hat{L} - b\mathbf{1}) G\bigr)(\eta) d\lambda_C(\eta) \\
+&= \int_{\Gamma_0^{(n)}} \operatorname{sgn}(G(\eta)) \cdot \bigl((\hat{L}_{diag} - b\mathbf{1}) G^{(n)}\bigr)(\eta) d\lambda_C(\eta) \\
+&= \frac{C^n}{n!} \int_{(\mathbb{R}^d)^n} \operatorname{sgn}\bigl(G^{(n)}(x^{(n)})\bigr) \cdot \bigl((\hat{L}_{diag} - b\mathbf{1}) G^{(n)}\bigr)(x^{(n)}) dx^{(n)}.
+\end{align*}
+$$
+
+Let us fix some $t > 0$ and $\Lambda \in \mathcal{B}_b(\mathbb{R}^d)$. Set for $n \ge 1$
+
+$$
+G^{(n)}(x^{(n)}) = t^n \prod_{k=1}^{n} \chi_{\Lambda}(x_k) = t^n \mathbb{1}_{\Gamma_{\Lambda}^{(n)}}(\{x^{(n)}\}) \in L^1((\mathbb{R}^d)^n).
+$$
+
+Then, the equality
+
+$$
+\operatorname{sgn}\left(G^{(n)}(x^{(n)})\right) = \prod_{k=1}^{n} \chi_{\Lambda}(x_k)
+$$
+
+implies
+
+$$
+\begin{align*}
+0 &\geq \frac{n!}{t^n C^n} I_n \\
+&= \int_{\Lambda^n} \left( -mn \prod_{k=1}^{n} \chi_{\Lambda}(x_k) - \varkappa^{-} E^{a^{-}}(x^{(n)}) \prod_{k=1}^{n} \chi_{\Lambda}(x_k) \right. \\
+&\qquad \left. + \varkappa^{+} \int_{\mathbb{R}^d} \sum_{j=1}^{n} a^{+}(x-x_j) \prod_{k \neq j} \chi_{\Lambda}(x_k) \chi_{\Lambda}(x) dx \right) dx^{(n)} - b \int_{\Lambda^n} \prod_{k=1}^{n} \chi_{\Lambda}(x_k) dx^{(n)} \\
+&= -\varkappa^{-} \int_{\Lambda^n} E^{a^{-}}(x^{(n)}) dx^{(n)} + \varkappa^{+} \sum_{j=1}^{n} \prod_{k \neq j} \int_{\Lambda^{n-1}} dx_k \int_{\Lambda} a^{+}(x-x_j) dx dx_j \\
+&\qquad - (b+mn) |\Lambda|^{n} \\
+&= -\varkappa^{-} \int_{\Lambda^n} E^{a^{-}}(x^{(n)}) dx^{(n)} + \varkappa^{+} n |\Lambda|^{n-1} \int_{\Lambda} \int_{\Lambda} a^{+}(x-y) dx dy - (b+mn) |\Lambda|^{n}.
+\end{align*}
+$$
+
+We suppose, in fact, that for any $n \ge 1$
+
+$$
+I_n \le 0.
+$$
+
+Since $E^{a^{-}}(\eta) = 0$ for $|\eta| \le 1$ we get
+
+$$
+\begin{align*}
+0 \geq \sum_{n=1}^{\infty} I_n &= -m \sum_{n=1}^{\infty} n \frac{t^n C^n}{n!} |\Lambda|^n - \varkappa^{-} \sum_{n=1}^{\infty} \frac{t^n C^n}{n!} \int_{\Lambda^n} E^{a^{-}}(x^{(n)}) dx^{(n)} \\
+&\quad + \varkappa^{+} \sum_{n=1}^{\infty} \frac{t^n C^n}{n!} n |\Lambda|^{n-1} \int_{\Lambda} \int_{\Lambda} a^{+}(x-y)dxdy - b \sum_{n=1}^{\infty} \frac{t^n C^n}{n!} |\Lambda|^n \\
+&= -mtC |\Lambda| e^{Ct|\Lambda|} - \varkappa^{-} \int_{\Gamma_{\Lambda}} E^{a^{-}}(\eta) d\lambda_{Ct}(\eta) + \varkappa^{+}Cte^{Ct|\Lambda|} \int_{\Lambda} \int_{\Lambda} a^{+}(x-y)dxdy
+\end{align*}
+$$
+---PAGE_BREAK---
+
+$$
+\begin{align*}
+& -b(e^{Ct|\Lambda|} - 1) \\
+&= -mtC |\Lambda| e^{Ct|\Lambda|} - \varkappa^{-} C^2 t^2 \int_{\Gamma_{\Lambda}} \int_{\Lambda} \int_{\Lambda} a^{-}(x-y) dxdy d\lambda_{Ct}(\eta) \\
+&\quad + \varkappa^{+} Cte^{Ct|\Lambda|} \int_{\Lambda} \int_{\Lambda} a^{+}(x-y) dxdy - b(e^{Ct|\Lambda|} - 1) \\
+&= e^{Ct|\Lambda|} \left[ Ct \left( \varkappa^{+} \int_{\Lambda} \int_{\Lambda} a^{+}(x-y)dxdy - \varkappa^{-} Ct \int_{\Lambda} \int_{\Lambda} a^{-}(x-y) dxdy - m|\Lambda| \right) \right. \\
+&\qquad \left. - b(1-e^{-Ct|\Lambda|}) \right].
+\end{align*}
+$$
+
+Therefore, for any $t > 0$ and any $\Lambda \in \mathcal{B}_b(\mathbb{R}^d)$
+
+$$
+\begin{align*}
+0 \ge \varkappa^+ \int_\Lambda \int_\Lambda a^+(x-y)dxdy - \varkappa^- Ct \int_\Lambda \int_\Lambda a^-(x-y)dxdy - m|\Lambda| \\
+\qquad - b \frac{(1-e^{-Ct|\Lambda|})}{Ct} =: B.
+\end{align*}
+$$
+
+Suppose that there exists $z > 0$ such that
+
+$$a^{+}(x) \geq z a^{-}(x), \quad x \in \mathbb{R}^{d},$$
+
+then taking for some $\epsilon > 0$
+
+$$t = \epsilon \frac{z\varkappa^{+}}{\varkappa^{-}C} > 0$$
+
+we obtain
+
+$$
+B \ge (1-\epsilon)\varkappa^+ z \int_{\Lambda} \int_{\Lambda} a^{-}(x-y)dxdy - m|\Lambda| \\
+- \frac{b\varkappa^{-}}{\epsilon z \varkappa^{+}} \left(1 - \exp\left(-\frac{\epsilon z\varkappa^{+}}{\varkappa^{-}}|\Lambda|\right)\right) \sim ((1-\epsilon)\varkappa^{+}z - m)|\Lambda|, \quad \Lambda \uparrow \mathbb{R}^d,
+$$
+
+which contradicts $B \le 0$ provided $m < (1-\epsilon)\varkappa^{+}z$. As a result, $m$ cannot be arbitrarily small.
+
+**Acknowledgments.** The financial support of DFG through the SFB 701 (Bielefeld University) and German-Ukrainian Project 436 UKR 113/80 and 436 UKR 113/94 is gratefully acknowledged. This work was partially supported by FCT, POCI2010, FEDER. Yu.K. is very thankful to R. Law for fruitful and stimulating discussions.
+
+REFERENCES
+
+[1] YU. M. BEREZANSKY, YU. G. KONDRATIEV, T. KUNA, AND E. V. LYTVYNOV, *On a spectral representation for correlation measures in configuration space analysis*, Methods Funct. Anal. Topology, 5, no.4 (1999), pp. 87–100.
+
+[2] D. A. BIRCH AND W. R. YOUNG, *A master equation for a spatial population model with pair interactions*, Theoretical Population Biology, 70, (2006), pp. 26–42.
+
+[3] N. N. BOGOLIUBOV, *Problems of a dynamical theory in statistical physics*, Studies in Statistical Mechanics, 1, (1962), pp. 1–118.
+
+[4] B. M. BOLKER AND S. W. PACALA, *Using Moment Equations to Understand Stochastically Driven Spatial Pattern Formation in Ecological Systems*, Theoretical Population Biology, 52, (1997), pp. 179–197.
+---PAGE_BREAK---
+
+[5] B. M. BOLKER AND S. W. PACALA, *Spatial moment equations for plant competi-
+tions: Understanding Spatial Strategies and the Advantages of Short Dispersal*,
+The American Naturalist, 153, no.6 (1999), pp. 575-602.
+
+[6] U. DIECKMANN AND R. LAW, *Relaxation projections and the method of moments*, The Geometry of Ecological Interactions, Cambridge Univ. Press. (2000), pp. 412-455.
+
+[7] U. DIECKMANN, R. LAW, AND D. J. MURRELL, *On moment closures for population dy-
+namics in continuous space*, Journal of Theoretical Biology, 229, (2004), pp. 421-432.
+
+[8] D. L. FINKELSHTEIN, YU. G. KONDRATIEV, AND M. J. OLIVEIRA, *Markov evolu-
+tions and hierarchical equations in the continuum I. One-component systems*,
+http://arxiv.org/abs/0707.0619.
+
+[9] D. L. FINKELSHTEIN, YU. G. KONDRATIEV, AND A. V. SKOROKHOD, *One- and Two-
+component Contact Process with Long Range Interaction in Continuum*, In prepara-
+tion.
+
+[10] N. FOURNIER AND S. MELEARD, *A microscopic probabilistic description of a locally
+regulated population and macroscopic approximations*, The Annals of Applied
+Probability, 14, no.4 (2004), pp. 1880-1919.
+
+[11] H. O. GEORGII, *Gibbs measures and Phase Transitions*, Walter de Gruyter, 1982.
+
+[12] T. KATO, *Perturbation theory for linear operators*, Berlin, Heidelberg, New York:
+Springer Verlag, 1966.
+
+[13] YU. G. KONDRATIEV AND T. KUNA, *Harmonic analysis on configuration space. I. Gen-
+eral theory*, Infin. Dimens. Anal. Quantum Probab. Relat. Top., 5, no.2 (2002),
+pp. 201-233.
+
+[14] YU. G. KONDRATIEV, T. KUNA, AND O. KUTOVIY, *On relations between a priori bounds
+for measures on configuration spaces*, Infin. Dimens. Anal. Quantum Probab.
+Relat. Top., 7, no. 2 (2004), pp. 195-213.
+
+[15] YU. G. KONDRATIEV AND O. V. KUTOVIY, *On the metrical properties of the configura-
+tion space*, Math. Nachr., 279, no.7 (2006), pp. 774-783.
+
+[16] YU. G. KONDRATIEV, O. V. KUTOVIY, AND R. A. MINLOS, *On non-equilibrium stochas-
+tic dynamics for interacting particle systems in continuum*, to appear in: J. Funct.
+Anal.
+
+[17] YU. G. KONDRATIEV, O. V. KUTOVIY, AND S. PIROGOV, *Correlation functions and
+invariant measures in continuous contact model*, SFB-701 Preprint, University of
+Bielefeld, Bielefeld, Germany (2007).
+
+[18] YU. G. KONDRATIEV, O. V. KUTOVIY, AND E. ZHIZHINA, *Nonequilibrium Glauber-type
+dynamics in continuum*, J. Math. Phys. 47 no. 11 (2006) 17 pp.
+
+[19] YU. G. KONDRATIEV AND E. LYTVYNOV, *Glauber dynamics of continuous particle sys-
+tems*, Ann. Inst. H. Poincaré, Ser. A, Probab. Statist., 41 (2005), pp. 685-702.
+
+[20] YU. G. KONDRATIEV, R. MINLOS, AND E. ZHIZHINA, *One-particle subspaces of the gen-
+erator of Glauber dynamics of continuous particle systems*, Rev. Math. Phys., 16,
+no. 9 (2004), pp. 1-42.
+
+[21] YU. G. KONDRATIEV, A. L. REBENKO, AND M. RÖCKNER, *On diffusion dynamics for
+continuous systems with singular superstable interaction*, J. Math. Phys., 45, no. 5
+(2004), pp. 1826-1848.
+
+[22] YU. G. KONDRATIEV AND A. V. SKOROKHOD, *On contact processes in Continuum*, Infin.
+Dimens. Anal. Quantum Probab. Relat. Top. 9, no. 2 (2006), pp. 187-198.
+
+[23] T. KUNA, *Studies in Configuration Space Analysis and Applications*, Ph. D. thesis,
+Bonn University, 1999.
+
+[24] A. LENARD, *States of classical statistical mechanical systems of infinitely many parti-
+cles*. I, Arch. Rational Mech. Anal., 59 (1975), pp. 219-239.
+
+[25] A. LENARD, *States of classical statistical mechanical systems of infinitely many parti-
+cles*. II, Arch. Rational Mech. Anal., 59 (1975), pp. 241-256.
+
+[26] S. A. LEVIN, *Complex adaptive systems: exploring the known, the unknown and the
+unknowable*, Bulletin of the AMS, 40, no. 1, pp. 3-19.
+
+[27] C. PRESTON, *Random Fields*, Lecture notes in Mathematics, Vol. 534, Berlin Heidel-
+berg, New York: Springer, 1976.
+
+[28] D. RUELLE, *Statistical Mechanics*, New York, Benjamin, 1969.
+
+[29] D. RUELLE, *Superstable interactions in classical statistical mechanics*,
+Commun. Math. Phys., 18 (1970), pp. 127-159.
\ No newline at end of file
diff --git a/samples/texts_merged/7222038.md b/samples/texts_merged/7222038.md
new file mode 100644
index 0000000000000000000000000000000000000000..acb8c290d66b638be5bf044644167c0ce15b3240
--- /dev/null
+++ b/samples/texts_merged/7222038.md
@@ -0,0 +1,318 @@
+
+---PAGE_BREAK---
+
+Role of Interaction in the Binding of Two Spin-Orbit Coupled Fermions
+
+Chong Ye,¹,² Jie Liu,²,³ Li-Bin Fu¹,*
+
+¹Graduate School, China Academy of Engineering Physics, Beijing 100193, China
+
+²National Laboratory of Science and Technology on Computational Physics,
+Institute of Applied Physics and Computational Mathematics, Beijing 100088, China
+
+³HEDPS, CAPT, and CICIFSA MoE, Peking University, Beijing 100871, China
+
+We investigate the role of an attractive s-wave interaction with positive scattering length in the
+binding of two spin-orbit coupled fermions, both in the vacuum and on top of a Fermi sea in a
+single-impurity system, motivated by current interest in exploring exotic binding properties in the
+presence of spin-orbit couplings. For weak spin-orbit couplings where the density of states is
+not significantly altered, we analytically show that the high-energy states become more important
+in determining the binding energy as the scattering length decreases. Consequently, tuning the
+interaction gives rise to a rich behavior, including a zigzag of the momentum of the bound state
+and transitions among meta-stable states. By exactly solving the two-body quantum
+mechanics for a spin-orbit coupled Fermi mixture of $^{40}$K-$^{40}$K-$^6$Li, we demonstrate that our analysis
+also applies to the case where the density of states is significantly modified by the spin-orbit
+coupling. Our findings pave the way for understanding and controlling the binding of fermions in the
+presence of spin-orbit couplings.
+
+PACS numbers: 03.65.Ge, 71.70.Ej, 67.85.Lm
+
+## I. INTRODUCTION
+
+In ultracold physics, many schemes have been proposed to generate various types of synthetic spin-orbit coupling (SOC) by controlling the atom-light interaction [1]. In 2011, I. B. Spielman's group at NIST generated an equal-weight combination of Rashba-type and Dresselhaus-type SOC in $^{87}$Rb [2]. Since then, SOC has triggered a great amount of experimental interest [3-5]. In the presence of SOC, the properties of ultracold atomic gases are altered dramatically [6-8].
+
+One basic issue is the binding of two spin-orbit cou-
+pled fermions in the vacuum [9–16] where SOC has given
+rise to the change of binding energy and the appearance
+of finite-momentum dimer bound states. Another rele-
+vant issue is the binding of two fermions on the top of
+a Fermi sea (the molecular state) for the case where a
+single impurity is immersed in a noninteracting Fermi
+gas [16–21]. In the appearance of SOC, the center-of-
+mass (c.m.) momentum of the molecular state becomes
+finite [16, 21]. All of these can be understood from the
+perspective of two-body quantum mechanics, which in general
+contains three components: the threshold energy associ-
+ated with the c.m. momentum, the density of states, and
+the interaction strength. For an extremely weak attractive
+interaction, changes of the two-body properties under SOC
+stem from the different threshold behavior of the density
+of states [9, 10, 14]. However, in the strongly interacting
+regime, the binding of two fermions presents a rich behav-
+ior [13, 16] such as the variation of the c.m. momentum
+and the competition between two meta-stable states with
+the tuning of the interaction strength. These phenomena
+cannot simply be attributed to the threshold behavior of the
+density of states. Therefore, an understanding of how all
+these three components cooperate with each other in deter-
+mining the novel two-body properties is urgently needed.
+The establishment of such a comprehensive picture will
+shed light on ongoing explorations of the intriguing be-
+havior of spin-orbit coupled Fermi gases [9–16]. Below,
+we report a theoretical contribution to address this issue,
+which also allows predictions of new phenomena.
+
+We investigate the two-body quantum mechanisms of
+the binding of two spin-orbit coupled fermions in the vac-
+uum and on the top of a Fermi sea in the single impurity
+Fermi gas. We consider an attractive s-wave interaction
+with positive scattering length, the strength of which can
+be tuned in a wide range via a Feshbach resonance [22].
+From Sec. II to Sec. IV, we give analyses which do
+not depend on the concrete type of SOC. In Sec. II,
+by decomposing the two-body energy (molecular energy)
+into the threshold energy and the binding energy, both
+of which depend on the c.m. momentum of two fermions,
+we establish a direct relation between the interaction, the
+density of states, and the binding energy. In Sec. III,
+with the first-order perturbation analysis in the weak
+SOC limit, we reveal that the low-energy states play
+a decisive role in determining the binding energy when
+the scattering length is large, in contrast to the small
+scattering length case where the high-energy states can
+dominate. This allows us to elucidate the mechanism
+underlying interesting phenomena such as a zigzag be-
+havior of the two-body ground state momentum and the
+competition between two meta-stable states in Sec. IV.
+In Sec. V, we illustrate our analysis with an interact-
+ing Fermi mixture of $^{40}$K-$^{40}$K-$^6$Li with $^{40}$K containing
+an $(\alpha k_x \sigma_z + h \sigma_x)$-type SOC, which can be realized by
+the state of the art experimental techniques using cold
+atoms [4, 23]. Remarkably, by exactly solving the two-
+
+* lbfu@gscaep.ac.cn
+---PAGE_BREAK---
+
+body problem for this system, we show that our analysis affords insights into the main properties of the binding of two spin-orbit coupled fermions, even when the density of states is significantly altered by SOC. Our findings reveal the role of interaction in the binding of two spin-orbit coupled fermions and allow a deep physical understanding of the rich two-body properties in the presence of SOC.
+
+## II. BINDING OF TWO FERMIONS WITH SOC
+
+We consider two different spin-orbit coupled fermionic species in three dimensions (3D) at zero temperature: atom A (B) has $N_a$ ($N_b$) components, with the corresponding non-interacting Hamiltonian $H_a$ ($H_b$). We consider an attractive s-wave contact interaction with positive scattering length between the two fermionic species as described by
+
+$$H_{\text{int}} = \frac{U}{V} \sum_{\mathbf{Q},\mathbf{k},\mathbf{k}'} a_{\mathbf{k},l_0}^{\dagger} b_{\mathbf{Q}-\mathbf{k},m_0}^{\dagger} a_{\mathbf{k}',l_0} b_{\mathbf{Q}-\mathbf{k}',m_0}, \quad (1)$$
+
+with $\mathbf{Q}$ the c.m. momentum of two scattering fermions. Here $a_{\mathbf{k},l_0}^{\dagger}$ ($b_{\mathbf{k},m_0}^{\dagger}$) denotes the creation operator of a SOC-free atom A (B) in the $l_0$-th ($m_0$-th) spin component with momentum $\mathbf{k}$, U is the bare interaction, and V is the quantization volume. The total Hamiltonian is thus $H = H_a + H_b + H_{\text{int}}$.
+
+With this Hamiltonian, we address the binding of two fermionic atoms A and B (i) in the vacuum [9–16] and (ii) on top of a non-interacting Fermi sea of atoms A in the situation where a single impurity of B is immersed in a non-interacting Fermi gas of A [17–21]. The ansatz wave function of the two-body bound state and the molecular state can be expressed in the general form
+
+$$|\Psi_Q\rangle = \sum_{i,j} \sum_k' \psi_{Q,k}^{i,j} \alpha_{k,i}^{\dagger} \beta_{Q-k,j}^{\dagger} |\phi\rangle, \quad (2)$$
+
+where $\mathbf{Q}$ is the c.m. momentum of two particles. For (i), $|\phi\rangle$ is the vacuum state and the summation $\sum_k'$ includes all the states. For (ii), $|\phi\rangle$ is the non-interacting spin-orbit coupled Fermi sea of A and the summation $\sum_k'$ excludes the states below the Fermi surfaces, reflecting the effect of Pauli blocking. Here, $\alpha_{\mathbf{k},i}^{\dagger} = \sum_{i'} \lambda_{\mathbf{k}}^{i,i'} a_{\mathbf{k},i'}^{\dagger}$ ($\beta_{\mathbf{k},i}^{\dagger} = \sum_{i'} \eta_{\mathbf{k}}^{i,i'} b_{\mathbf{k},i'}^{\dagger}$) is the creation operator of an atom A (B) in the i-th eigen-state of Hamiltonian $H_a$ ($H_b$) with momentum $\mathbf{k}$ and energy $\varepsilon_{\mathbf{k},i}^a$ ($\varepsilon_{\mathbf{k},i}^b$), and $\psi_{Q,k}^{i,j}$ denotes the variational coefficient. The coefficients $\lambda_{\mathbf{k}}^{i,i'}$ and $\eta_{\mathbf{k}}^{i,i'}$ are fixed by SOC. Solving the eigen-equation $H|\Psi_Q\rangle = E_Q|\Psi_Q\rangle$ gives
+
+$$\psi_{Q,k}^{i,j} = \frac{(\lambda_{k}^{i,l_0} \eta_{Q-k}^{j,m_0})^*}{E_Q - E_{Q,k}^{ij}} \frac{U}{V} \sum_{k',i',j'} \psi_{Q,k'}^{i',j'} \lambda_{k'}^{i',l_0} \eta_{Q-k'}^{j',m_0}, \quad (3)$$
+
+with $E_{Q,k}^{ij} = \varepsilon_{k,i}^a + \varepsilon_{Q-k,j}^b$. Rearranging Eq. (3), we obtain a self-consistent equation for two-body energy
+
+(molecular energy) $E_Q$ in the momentum-space representation, i.e.,
+
+$$\frac{1}{U} = \frac{1}{V} \sum_{i,j} \sum_k' \frac{|\lambda_k^{i,l_0}|^2 |\eta_{Q-k}^{j,m_0}|^2}{E_Q - E_{Q,k}^{ij}}. \quad (4)$$
+
+A key step of our treatment next constitutes a decomposition of $E_Q$: Defining the threshold energy associated with the c.m. momentum $\mathbf{Q}$ by $E_{th}^Q = \min_{i,j,k}\{E_{Q,k}^{ij}\}$, we write $E_{Q,k}^{ij} = E_{th}^Q + \varepsilon$. The rest of the two-body energy (molecular energy) is therefore $E_Q^{sc} = E_Q - E_{th}^Q$. While $E_{th}^Q$ is only affected by SOC, $E_Q^{sc}$ encodes the effect of interaction. Such a decomposition of $E_Q$ in terms of $E_Q^{sc}$ and $E_{th}^Q$, as we shall see, allows a transparent correspondence to the SOC-free counterpart. Following a standard procedure, we obtain the self-consistent equation for $E_Q^{sc}$ in the energy domain of $\varepsilon$ as in Ref. [24]
+
+$$\int_0^\infty \frac{\gamma_Q^\varepsilon d\varepsilon}{E_Q^{sc} - \varepsilon} = \frac{1}{U}. \quad (5)$$
+
+Here, $\gamma_Q^\varepsilon$ is defined by
+
+$$\gamma_Q^\varepsilon = \sum_i \sum_j \int' |\lambda_k^{i,l_0}|^2 |\eta_{Q-k}^{j,m_0}|^2 |J|\, d\nu\, d\mu, \quad (6)$$
+
+which describes the density of states in 3D [25]. For (i), the integration $\int' d\nu\, d\mu$ includes all the states. For (ii), the integration $\int' d\nu\, d\mu$ excludes the states below the Fermi surfaces. In Eq. (6), $\mu$ and $\nu$ label the degrees of freedom other than $\varepsilon$, and $J$ denotes the standard Jacobian. These formulas can also be easily adapted to describe the binding of two homo-nuclear fermions, where A and B are the same fermionic species.
+
+Equation (5) establishes a direct relation between the interaction U, the density of states $\gamma_Q^\varepsilon$, and $E_Q^{sc}$. Intuition behind it can be gained in the limit of vanishing SOC in case (i), where $E_{th}^Q = Q^2/(2m_\mu)$ with $m_\mu$ the reduced mass of two fermions, and $\gamma_Q^\varepsilon = \gamma_0^\varepsilon = 2\sqrt{2m_\mu\varepsilon}$. Then, $E_Q^{sc}$ is independent of Q as ensured by Eq. (5), and can be identified as $E_Q^{sc} = \varepsilon_b = -1/(2m_\mu a_s^2)$ [$\hbar \equiv 1$] with $a_s > 0$ the s-wave scattering length, i.e., the binding energy at rest. In this case, Eq. (5) reduces to, in the momentum space representation, the well known renormalization equation for two scattering particles, i.e.,
+
+$$\frac{1}{U} = \frac{m_\mu}{2\pi a_s} - \frac{1}{V} \sum_k \frac{2m_\mu}{k^2}. \quad (7)$$
+
+Equation (5) thus extends the standard prescription for two interacting fermions to the presence of SOC, where $E_Q^{sc}$ is the counterpart of the binding energy $\varepsilon_b$.
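As a cross-check of this SOC-free limit, the relation between Eq. (5) and Eq. (7) can be verified numerically. The sketch below is an illustration (not part of the original paper); it assumes $\hbar = 1$ and $m_\mu = 1$ and solves the regularized equation $m_\mu/(2\pi a_s) = \int_0^\infty \rho(\varepsilon)\,[1/(E-\varepsilon) + 1/\varepsilon]\,d\varepsilon$ with the 3D density of states $\rho(\varepsilon) = (2m_\mu)^{3/2}\sqrt{\varepsilon}/(4\pi^2)$, recovering $\varepsilon_b = -1/(2 m_\mu a_s^2)$.

```python
# Numerical sketch (assumed units: hbar = 1, reduced mass m_mu = 1).
# Combining Eq. (5) with the renormalization relation Eq. (7) gives
#   m_mu/(2*pi*a_s) = Int_0^inf rho(eps) * [1/(E - eps) + 1/eps] d(eps),
# whose root E is the SOC-free binding energy.
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

m_mu = 1.0  # reduced mass of the two fermions

def gap_equation(E, a_s):
    """Regularized self-consistent equation for the binding energy E < 0."""
    rho = lambda eps: (2.0 * m_mu) ** 1.5 * np.sqrt(eps) / (4.0 * np.pi ** 2)
    f = lambda eps: rho(eps) * (1.0 / (E - eps) + 1.0 / eps)
    # split at eps = 1 so the integrable 1/sqrt(eps) endpoint and the tail
    # are handled separately
    low, _ = quad(f, 0.0, 1.0)
    high, _ = quad(f, 1.0, np.inf)
    return low + high - m_mu / (2.0 * np.pi * a_s)

a_s = 1.0
E_b = brentq(lambda E: gap_equation(E, a_s), -10.0, -1e-6)
print(E_b)  # close to -1/(2*m_mu*a_s**2) = -0.5
```

For $a_s = 1$ the numerical root agrees with the analytic value $-0.5$ to the integration tolerance.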
+
+## III. ROLE OF INTERACTION
+
+Based on the above treatment, below we elucidate how the interaction cooperates with the effect of SOC in deter-
+---PAGE_BREAK---
+
+mining the behavior of $E_Q^{sc}$, when the interaction strength $a_s^{-1}$ is tuned in a wide range via a Feshbach resonance [22]. To compare with the SOC-free case, we introduce the quantity $\xi_Q \equiv E_Q^{sc} - \epsilon_b$. For weak SOC that does not significantly alter the density of states, the leading term of $\xi_Q$ can be derived from Eq. (5) as [26]:
+
+$$ \xi_Q = - \left[ \int_0^\infty \frac{\gamma_Q^\epsilon}{(\epsilon - \epsilon_b)^2} d\epsilon \right]^{-1} \int_0^\infty \frac{\gamma_Q^\epsilon - \gamma_0^\epsilon}{\epsilon - \epsilon_b} d\epsilon. \quad (8) $$
+
+Here we have ignored the modification of the renormalization relation by SOC [27–31]. In discussing the effect of interaction on $\xi_Q$, we will be interested in (i) $\frac{\partial\xi_Q}{\partial a_s^{-1}}$ and (ii) $\Delta_{QQ'} \equiv \xi_{Q'} - \xi_Q$: The sign of the former reflects how $\xi_Q$ for fixed Q changes with interaction, while that of the latter tells whether a large or small Q is energetically favored for a given interaction. Using Eq. (8), we find $\Delta_{QQ'} \simeq -[\int_0^\infty \frac{\gamma_Q^\epsilon}{(\epsilon-\epsilon_b)^2}d\epsilon]^{-1}\int_0^\infty \frac{\gamma_{Q'}^\epsilon-\gamma_Q^\epsilon}{\epsilon-\epsilon_b}d\epsilon$. Both $\xi_Q$ and $\Delta_{QQ'}$ rely crucially on $\gamma_Q^\epsilon$. Thus, while the form of $\gamma_Q^\epsilon$ varies with specific setups [see Eq. (6)], its qualitative analysis affords insights into the generic behavior of $\xi_Q$, as we elaborate next. To proceed analytically, we apply the further approximation
+
+$$ \begin{aligned} \xi_Q &\approx -\left[\int_0^\infty \frac{\gamma_0^\epsilon}{(\epsilon - \epsilon_b)^2} d\epsilon\right]^{-1} \int_0^\infty \frac{\gamma_Q^\epsilon - \gamma_0^\epsilon}{\epsilon - \epsilon_b} d\epsilon \\ &\propto \sqrt{-\epsilon_b} \int_0^\infty \frac{\gamma_Q^\epsilon - \gamma_0^\epsilon}{\epsilon - \epsilon_b} d\epsilon. \end{aligned} \quad (9) $$
+
+Consider first the simplest case where $\gamma_Q^\epsilon - \gamma_0^\epsilon > 0$ for all energy levels $\epsilon$ [32], i.e., SOC induces an increase in the number of available scattering states at all energies. From Eq. (9), we see $\xi_Q < 0$; hence binding with finite Q leads to an energy decrease compared to the SOC-free case, irrespective of the interaction strength. Such an energy drop, following from $\frac{\partial\xi_Q}{\partial a_s^{-1}} > 0$, can be further enhanced by increasing $a_s^{-1}$. If, moreover, $\gamma_Q^\epsilon$ increases monotonically with Q, we have $\Delta_{QQ'} < 0$, i.e., $\xi_Q$ decreases with increasing Q for a fixed scattering length. The amplitude of this decrease can be controlled by tuning the scattering length, and it is enhanced as $a_s^{-1}$ increases.
+
+In contrast, if the effect of SOC is such that $\gamma_Q^\epsilon - \gamma_0^\epsilon$ changes sign depending on the energy $\epsilon$ of the state, $\xi_Q$ can exhibit a very rich behavior. To demonstrate this, consider the case where $\gamma_Q^\epsilon - \gamma_0^\epsilon$ has opposite signs in the low- and high-energy regimes, with a sign flip occurring at the energy $\epsilon_0$. Applying the mean value theorem to Eq. (9), we find
+
+$$ \int_0^\infty \frac{\gamma_Q^\epsilon - \gamma_0^\epsilon}{\epsilon - \epsilon_b} d\epsilon = f_l / (\epsilon_1 - \epsilon_b) + f_h / (\epsilon_2 - \epsilon_b), \quad (10) $$
+
+with $\epsilon_1 \in (0, \epsilon_0)$, and $\epsilon_2 \in (\epsilon_0, \infty)$. Here $f_l = \int_0^{\epsilon_0} (\gamma_Q^\epsilon - \gamma_0^\epsilon)d\epsilon$ and $f_h = \int_{\epsilon_0}^\infty (\gamma_Q^\epsilon - \gamma_0^\epsilon)d\epsilon$ are the SOC-induced changes in the number of scattering states in the low- and high-energy regimes, respectively. Since $f_l$ and $f_h$ have opposite signs, the contribution from the high-energy states to $\xi_Q$ is suppressed relative to that of the low-energy states by the smaller prefactor $1/(\epsilon_2 - \epsilon_b)$.
+
+Yet, such suppression becomes less significant when $a_s^{-1}$ increases, by reasoning similar to the above. We thus expect the sign of $\xi_Q$ to be mainly determined by the low-energy states for large $a_s$, whereas the high-energy states can become decisive for small $a_s$. This has interesting physical implications: by tuning the scattering length, and hence the signs of $\xi_Q$ and $\Delta_{QQ'}$, we can control whether a bound pair favors nonzero Q, and even the specific choice of Q.
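This competition can be made concrete with a toy model (made-up numbers, purely illustrative and not from the paper): take $\gamma_Q^\epsilon - \gamma_0^\epsilon = -1$ for $0 < \epsilon < 1$ and $+0.5$ for $1 < \epsilon < 5$, so that $f_l = -1$ and $f_h = +2$ have opposite signs, and evaluate the weighted integral of Eq. (10) for a shallow and a deep $\epsilon_b$.

```python
# Toy illustration (made-up delta-gamma): the sign of the weighted integral
# in Eq. (10) flips as eps_b (and hence the scattering length) is tuned.
from scipy.integrate import quad

def delta_gamma(eps):
    """Low-energy deficit (f_l = -1), high-energy surplus (f_h = +2)."""
    return -1.0 if eps < 1.0 else (0.5 if eps < 5.0 else 0.0)

def weighted_integral(eps_b):
    # points=[1.0] tells quad about the discontinuity of delta_gamma
    val, _ = quad(lambda eps: delta_gamma(eps) / (eps - eps_b),
                  0.0, 5.0, points=[1.0])
    return val

print(weighted_integral(-0.05))  # shallow eps_b (large a_s): negative
print(weighted_integral(-50.0))  # deep eps_b (small a_s): positive
```

For shallow $\epsilon_b$ (large $a_s$) the $1/(\epsilon - \epsilon_b)$ weight strongly favors the low-energy deficit and the integral is negative; for deep $\epsilon_b$ (small $a_s$) the weight flattens and the larger high-energy surplus dominates, flipping the sign.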
+
+## IV. TYPICAL BEHAVIORS OF TWO-BODY GROUND STATES
+
+We now show that, when combined with $E_{th}^Q$, the above insights into the cooperative effects of interaction and SOC on $E_Q^{sc}$ allow predictions of generic features of the dispersion $E_Q$. This is best illustrated in the following two cases.
+
+(i) If $E_{th}^Q$ has only one minimum, then without interaction the two-body (molecular) ground-state c.m. momentum $Q_g$ will be located at $Q_1$, where $E_{th}^Q$ is minimized. By contrast, according to the previous analysis, adding interaction can strongly modify $E_Q^{sc}$ and thus $E_Q$, causing $Q_g$ to deviate from $Q_1$. Such deviation intimately depends on the behavior of $E_Q^{sc}$: if $E_Q^{sc}$ varies monotonically with Q for a fixed scattering length, $Q_g$ shifts from $Q_1$ in such a way that a smaller $E_Q^{sc}$ can be reached. This shift can be further enhanced by increasing $a_s^{-1}$, provided it does not qualitatively alter the behavior of $E_Q^{sc}$, i.e., $E_Q^{sc}$ stays increasing (or decreasing) with Q when varying $a_s^{-1}$ [cf. inset of Fig. 1(d)]. If, instead, the behavior of $E_Q^{sc}$ undergoes a qualitative change when $a_s^{-1}$ increases, e.g. from increasing to decreasing with Q [see inset of Fig. 2(c)], $Q_g$ will first exhibit a zigzag away from $Q_1$ before increasing above $Q_1$ monotonically [see inset of Fig. 2(d)].
+
+(ii) In general $E_{th}^Q$ can have multiple local minima, each corresponding to a meta-stable state. For each individual meta-stable state, the associated c.m. momentum exhibits behavior similar to that in (i). An interesting question then concerns how the two-body (molecular) ground state transitions among multiple meta-stable states when the interaction is tuned. To address this, suppose for simplicity that $E_{th}^Q$ has two degenerate local minima at $Q_1$ and $Q_2$, respectively, and that $E_Q^{sc}$ varies monotonically with Q for a fixed scattering length. The two-body (molecular) ground state c.m. momentum Q is expected to be close to $Q_1$ or $Q_2$, depending on which corresponds to a smaller $E_Q^{sc}$. If the behavior of $E_Q^{sc}$ can be changed qualitatively by tuning $a_s^{-1}$, say from increasing to decreasing with Q, a transition of the system between the two meta-stable states can be induced. This phenomenon also occurs when the two local minima of $E_{th}^Q$ become non-degenerate, due to the competition between $E_{th}^Q$ and $E_Q^{sc}$, which is the origin of the transition discussed in Ref. [16]. In addition, with the increase of $a_s^{-1}$, $E_Q^{sc}$ will dominate over $E_{th}^Q$ in
+---PAGE_BREAK---
+
+Figure 1. Binding of spin-orbit coupled fermions in the vacuum. (a) The distribution of $\gamma_Q^\epsilon - \gamma_0^\epsilon$. (b) $\xi_Q$ as a function of $Q$ with different $(k_0 a_s)^{-1}$ according to Eq. (8). (c) The helicity-dependent threshold energy $E_{th,+}^Q$ ($E_{th,-}^Q$) is the minimum energy of two particles with A in the upper (lower) helicity branch and a c.m. momentum $Q$. (d) The two-body energy with different interacting strengths by exactly solving Eq. (4). The inset shows the variation of the ground state c.m. momentum. Here $Q_0 = -1.5k_0e_x$.
+
+determining the dispersion of $E_Q$. This may qualitatively
+change the dispersion of the two-body (molecular) energy,
+say from a double-well type with two meta-stable states
+to a single-well type with one meta-stable state, which
+may cause the disappearance of the transition.
+
+## V. SPIN-ORBIT COUPLED THREE-COMPONENT FERMI MIXTURE
+
+The previous discussions from Sec. II to Sec. IV do not depend on the concrete type of the SOC. To give an example, below we present concrete calculations by solving Eq. (4) for an interacting Fermi mixture of $^{40}$K-$^{40}$K-$^{6}$Li (A-A-B), where the atom $^{40}$K is spin-orbit coupled and the atom $^{6}$Li is spinless. Here, we choose an $(\alpha k_x \sigma_z + h \sigma_x)$-type SOC which can be readily realized experimentally in $^{40}$K [4]. In this three-component mixture, the $^{6}$Li fermions are tuned close to a wide Feshbach resonance with the spin-up species of $^{40}$K [23]. The Hamil-
+
+tonian for the system reads
+
+$$
+\begin{align}
+H = & \sum_{\mathbf{k},\sigma} \varepsilon_{\mathbf{k}}^a a_{\mathbf{k},\sigma}^{\dagger} a_{\mathbf{k},\sigma} + \sum_{\mathbf{k}} (h a_{\mathbf{k},\uparrow}^{\dagger}a_{\mathbf{k},\downarrow} + h a_{\mathbf{k},\downarrow}^{\dagger}a_{\mathbf{k},\uparrow}) \nonumber \\
+& + \sum_{\mathbf{k}} \varepsilon_{\mathbf{k}}^b b_{\mathbf{k}}^{\dagger}b_{\mathbf{k}} + \frac{U}{V} \sum_{\mathbf{k},\mathbf{k}',\mathbf{q}} a_{\frac{\mathbf{q}}{2}+\mathbf{k},\uparrow}^{\dagger} b_{\frac{\mathbf{q}}{2}-\mathbf{k}}^{\dagger} a_{\frac{\mathbf{q}}{2}+\mathbf{k}',\uparrow} b_{\frac{\mathbf{q}}{2}-\mathbf{k}'} \nonumber \\
+& + \sum_{\mathbf{k}} \alpha k_x (a_{\mathbf{k},\uparrow}^{\dagger} a_{\mathbf{k},\uparrow} - a_{\mathbf{k},\downarrow}^{\dagger} a_{\mathbf{k},\downarrow}) . \tag{11}
+\end{align}
+$$
+
+Here $a_{\mathbf{k},\sigma}$ ($\sigma=\uparrow,\downarrow$) denotes the annihilation operator of a SOC-free particle $A$ with spin $\sigma$ and momentum $\mathbf{k}$, while the operator $b_\mathbf{k}$ annihilates a particle $B$ with momentum $\mathbf{k}$. In addition, $\varepsilon_\mathbf{k}^{a(b)} = k^2/(2m_{a(b)})$ is the kinetic energy of particle $A(B)$. The SOC parameters $h$ and $\alpha$ are respectively proportional to the Raman coupling strength and the momentum transfer in the Raman process generating the SOC [4]. We also note that via a global pseudo-spin rotation such SOC can be transformed into an equal-weight combination of Rashba-type and Dresselhaus-type SOC ($\alpha k_x \sigma_y + h \sigma_z$), which was the first SOC generated in ultra-cold atomic gases [2]. Therefore, the ($\alpha k_x \sigma_z + h \sigma_x$)-type SOC can be interpreted as an equal-weight combination
+---PAGE_BREAK---
+
+Figure 2. Binding of spin-orbit coupled fermions on top of a Fermi sea in single impurity system. (a) The distribution of $\gamma_Q^\epsilon - \gamma_0^\epsilon$. (b) $\xi_Q$ as a function of $Q$ with different $(k_0 a_s)^{-1}$ according to Eq.(8). The inset shows $\xi_Q$ in the region near $Q_1$. (c) The threshold $E_{th}^Q = \min\{E_{th,-}^Q, E_{th,+}^Q\}$. The helicity-dependent threshold energy $E_{th,+}^Q$ ($E_{th,-}^Q$) is the minimum energy of two particles with A in the upper (lower) helicity branch and a c.m. momentum Q. (d) The two-body energy with different interacting strengths by exactly solving Eq. (4). The inset shows the variation of the ground state c.m. momentum. Here $Q_0 = -2k_0 e_x$.
+
+of Rashba-type and Dresselhaus-type SOC [4].
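The stated unitary equivalence can be checked directly; the snippet below (an independent numerical check, not from the paper) verifies that a single global pseudo-spin rotation maps $(\sigma_z, \sigma_x)$ to $(\sigma_y, \sigma_z)$, i.e., $\alpha k_x \sigma_z + h\sigma_x \to \alpha k_x \sigma_y + h\sigma_z$.

```python
# Numerical check that one global spin rotation maps sigma_z -> sigma_y and
# sigma_x -> sigma_z simultaneously (a 120-degree rotation about (1,1,1)/sqrt(3)).
import numpy as np
from scipy.linalg import expm

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

n_sigma = (sx + sy + sz) / np.sqrt(3)
found = False
for theta in (2 * np.pi / 3, -2 * np.pi / 3):  # one of the two senses works
    U = expm(-1j * theta / 2 * n_sigma)
    if (np.allclose(U @ sz @ U.conj().T, sy)
            and np.allclose(U @ sx @ U.conj().T, sz)):
        found = True
print(found)  # True
```

The cyclic permutation of the Pauli matrices is a proper rotation, so a single unitary achieves both replacements at once; the two SOC forms therefore share the same spectrum.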
+
+In the presence of SOC, the single-particle eigenstates of A in the helicity basis are created by the operators $a_{\mathbf{k},\pm}^\dagger = \lambda_\mathbf{k}^{\pm,\uparrow} a_{\mathbf{k},\uparrow}^\dagger + \lambda_\mathbf{k}^{\pm,\downarrow} a_{\mathbf{k},\downarrow}^\dagger$, with $\lambda_\mathbf{k}^{\pm,\uparrow} = \pm \zeta_\mathbf{k}^\pm$, $\lambda_\mathbf{k}^{\pm,\downarrow} = \zeta_\mathbf{k}^\mp$, and $\zeta_\mathbf{k}^\pm = \left[\sqrt{h^2 + \alpha^2 k_x^2} \pm \alpha k_x\right]^{1/2}/\left\{\sqrt{2}\,[h^2 + \alpha^2 k_x^2]^{1/4}\right\}$, with $+$ ($-$) labelling the upper (lower) helicity branch. The single-particle dispersions of the two helicity branches are $\varepsilon_{\mathbf{k},\pm}^a = \varepsilon_\mathbf{k}^a \pm \sqrt{h^2 + \alpha^2 k_x^2}$. Here we measure energy in units of $E_0 = 2\alpha^2 m_a / \hbar^2$, momentum in units of $k_0 = 2\alpha m_a / \hbar^2$, and take $h = 0.4 E_0$.
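For this SOC, the degenerate minima of the lower helicity branch can be located explicitly. The sketch below is illustrative (dimensionless toy units $m_a = \alpha = \hbar = 1$ with $h = 0.4$, chosen only for this example): it minimizes $\varepsilon_{\mathbf{k},-}^a$ numerically and compares with the closed form $k_x = \pm\sqrt{(\alpha^2 m_a)^2 - h^2}/\alpha$, valid when $\alpha^2 m_a > h$.

```python
# Illustrative toy units: m_a = alpha = hbar = 1, Raman coupling h = 0.4.
# The lower helicity branch eps_-(k_x) = k_x^2/(2 m_a) - sqrt(h^2 + (alpha k_x)^2)
# develops two degenerate minima at finite momentum when alpha^2 m_a > h.
import numpy as np
from scipy.optimize import minimize_scalar

m_a, alpha, h = 1.0, 1.0, 0.4

def eps_lower(kx):
    """Lower helicity branch of the single-particle dispersion."""
    return kx ** 2 / (2.0 * m_a) - np.sqrt(h ** 2 + (alpha * kx) ** 2)

res = minimize_scalar(eps_lower, bounds=(0.0, 3.0), method="bounded")
kx_analytic = np.sqrt((alpha ** 2 * m_a) ** 2 - h ** 2) / alpha
print(res.x, kx_analytic)  # numerical minimum matches the analytic location
```

The two minima at $\pm k_x$ are the single-particle origin of the two competing finite-momentum bound states discussed above.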
+
+We first present our results for the binding of A and B in the vacuum, as summarized in Fig. 1. The density of states [see Fig. 1(a)] exhibits a monotonic decrease with both $\mathbf{Q}$ and $\epsilon$. As expected, $E_Q^{sc}$ changes monotonically with respect to both $\mathbf{Q}$ and $a_s^{-1}$ [see Fig. 1(b)]. Together with $E_{th}^Q$ [see Fig. 1(c)], we see that the actual ground-state c.m. momentum is pulled toward a magnitude smaller than $Q_1$, and the increase of $a_s^{-1}$ enhances this tendency [see Fig. 1(d)].
+
+We now turn to the binding of A and B on top of the Fermi sea of A in the situation where a single impurity of B is immersed in a non-interacting Fermi sea of spin-orbit coupled A with the Fermi energy $E_h = -1.5E_0$, as illustrated in Fig. 2. There, both the density of states [see Fig. 2(a)] and $E_Q^{sc}$ [see Fig. 2(b)] exhibit a rich behavior. In addition, from $E_{th}^Q$ in Fig. 2(c), we see that there exist two meta-stable states near $\mathbf{Q}_1$ and $\mathbf{Q}_2$, respectively. Let us first analyze the c.m. momenta associated with the meta-stable states, e.g., the one formed near $\mathbf{Q}_1$. As seen from Fig. 2(a), $\gamma_Q^\epsilon$ for c.m. momenta near $\mathbf{Q}_1$ decreases with $\mathbf{Q}$ in the low-energy region (e.g. $0 < \epsilon < 2E_0$), but increases in the high-energy region (e.g. $6E_0 < \epsilon < 10E_0$). In addition, near $\mathbf{Q}_1$, $E_Q^{sc}$ [see the inset of Fig. 2(b)] shows a qualitative change with increasing $a_s^{-1}$. From the earlier discussions we thus expect a zigzag behavior of the c.m. momentum of the meta-stable state, as confirmed by our results plotted in the inset of Fig. 2(d). Next, we discuss which of the two meta-stable states is energetically favored. Due to the degeneracy of the two local minima of $E_{th}^Q$, this is determined by the density of states, which is larger near $\mathbf{Q}_1$ than near $\mathbf{Q}_2$ [see Fig. 2(a)]. Hence the meta-stable state near $\mathbf{Q}_1$
+---PAGE_BREAK---
+
+Figure 3. (a) The distribution of $\gamma_Q^\epsilon - \gamma_0^\epsilon$. (b) $\xi_Q$ as a function of $Q$ with different $(k_0 a_s)^{-1}$ according to Eq. (8). (c) The threshold energy $E_{th}^Q = \min\{E_{th,-}^Q, E_{th,+}^Q\}$. The helicity-dependent threshold energy $E_{th,+}^Q$ ($E_{th,-}^Q$) is the minimum energy of two particles with A in the upper (lower) helicity branch and a c.m. momentum $Q$. (d) The two-body energy with different interacting strengths by exactly solving Eq. (4). Here $Q_0 = -3k_0e_x$.
+
+is energetically favored by $E_Q^{sc}$ [see Fig. 2(b)]. We thus expect the molecular ground state c.m. momentum to be near $Q_1$, in good agreement with Fig. 2(d).
+
+Comparing the binding of A and B in the vacuum and on top of the filled Fermi sea, we observe that the presence of the Fermi sea not only elevates $E_{th}^Q$ in the regime $Q_1 < Q < Q_2$, giving rise to two minima, but also enhances the density of states there. Consequently, the minimum of $E_Q^{sc}$ occurs at $Q_3$, and the two meta-stable states merge together [see Fig. 2(d)], following from the previous analysis. We remark that, while the SOC here has dramatically modified the density of states compared to the SOC-free case, our analyses based on the perturbative treatment agree remarkably well with the exact numerical results.
+
+## VI. CONCLUDING DISCUSSIONS AND SUMMARY
+
+When the Fermi sea has only one Fermi surface, the two meta-stable states formed near $Q_1$ and $Q_2$ are favored by the threshold energy and the density of states, respectively, see Fig. 3. In Ref. [16] with high Fermi
+
+energy, tuning the interaction can induce a transition between the two meta-stable states. In contrast, as illustrated in Fig. 3 where the Fermi energy $E_h = 0$, such a transition is absent, and the increase of $a_s^{-1}$ will eventually cause a merging of the two meta-stable states. With an increase of the Fermi energy, our case crosses over to that discussed in Ref. [16]. In addition, we note that for the single-impurity Fermi system we only consider the lowest-energy state within our ansatz; the ground state of the system should be given by connecting the molecular ground state to the polaron ground state, which describes the particle-hole excitations above the Fermi sea.
+
+Summarizing, we have investigated how tuning the strength of an attractive s-wave interaction affects the two-body energy for a given distribution of the density of states. Combined with the dispersion of the threshold energy, we can predict the typical behavior of the two-body bound state when tuning the scattering length and hence the interaction, including the change of the c.m. momentum of the two-body ground state and the competition between multiple meta-stable states. Our perturbation analyses do not depend on the concrete type of SOC and are corroborated by the exact numerical solution of the two-body problem for a spin-orbit coupled
+---PAGE_BREAK---
+
+Fermi mixture of $^{40}$K-$^{40}$K-$^{6}$Li, even though the density
+of states is significantly altered by the effect of SOC.
+
+## VII. ACKNOWLEDGMENTS
+
+We thank Ying Hu for helpful discussion. The work
+is supported by the National Basic Research Program of
+China (973 Program) (Grants No. 2013CBA01502 and
+No. 2013CB834100), the National Natural Science Foun-
+dation of China (Grants No. 11374040, No. 11475027,
+No. 11575027, and No. 11274051).
+
+[1] Jean Dalibard, Fabrice Gerbier, Gediminas Juzeliūnas, and Patrik Öhberg, Rev. Mod. Phys. **83**, 1523 (2011).
+
+[2] Y.-J. Lin, K. Jimenéz-García, and I. B. Spielman, Nature (London) **471**, 83 (2011).
+
+[3] Jin-Yi Zhang, Si-Cong Ji, Zhu Chen, Long Zhang, Zhi-Dong Du, Bo Yan, Ge-Sheng Pan, Bo Zhao, YouJin Deng, Hui Zhai, Shuai Chen, and Jian-Wei Pan, Phys. Rev. Lett. **109**, 115301 (2012).
+
+[4] Pengjun Wang, Zeng-Qiang Yu, Zhengkun Fu, Jiao Miao, Lianghui Huang, Shijie Chai, Hui Zhai, and Jing Zhang, Phys. Rev. Lett. **109**, 095301 (2012).
+
+[5] Lawrence W. Cheuk, Ariel T. Sommer, Zoran Hadzibabic, Tarik Yefsah, Waseem S. Bakr, and Martin W. Zwierlein, Phys. Rev. Lett. **109**, 095302 (2012); Lianghui Huang, Zengming Meng, Pengjun Wang, Peng Peng, Shao-Liang Zhang, Liangchao Chen, Donghao Li, Qi Zhou and Jing Zhang, Nat. Phys. **10**, 1038 (2016).
+
+[6] H. Zhai, Int. J. Mod. Phys. B **26**, 1230001 (2012).
+
+[7] V. Galitski and I. B. Spielman, Nature, **494**, 49 (2013).
+
+[8] H. Zhai, Rep. Prog. Phys., **78**, 026001 (2015).
+
+[9] Jayantha P. Vyasanakere, and Vijay B. Shenoy, Phys. Rev. B **83**, 094515 (2011).
+
+[10] Jayantha P. Vyasanakere, Shizhong Zhang, and Vijay B. Shenoy, Phys. Rev. B **84**, 014512 (2011).
+
+[11] Hui Hu, Lei Jiang, Xia-Ji Liu, and Han Pu, Phys. Rev. Lett. **107**, 195304 (2011).
+
+[12] Zeng-Qiang Yu and Hui Zhai, Phys. Rev. Lett. **107**, 195305 (2011).
+
+[13] Ren Zhang, Fan Wu, Jun-Rong Tang, Guang-Can Guo, Wei Yi, and Wei Zhang, Phys. Rev. A **87**, 033629 (2013).
+
+[14] Vijay B. Shenoy, Phys. Rev. A **88**, 033609 (2013).
+
+[15] Fan Wu, Ren Zhang, Tian-Shu Deng, Wei Zhang, Wei Yi, and Guang-Can Guo, Phys. Rev. A **89**, 063610 (2014).
+
+[16] Lihong Zhou, Xiaoling Cui, and Wei Yi, Phys. Rev. Lett. **112**, 195301 (2014).
+
+[17] F. Chevy, Phys. Rev. A **74**, 063628 (2006).
+
+[18] R. Combescot, A. Recati, C. Lobo and F. Chevy, Phys. Rev. Lett., **98**, 180402 (2007).
+
+[19] Sascha Zollner, G. M. Bruun, and C. J. Pethick, Phys. Rev. A **83**, 021603(R) (2011).
+
+[20] Marco Koschorreck, Daniel Pertot, Enrico Vogt, Bernd Frohlich, Michael Feld and Michael Kohl, Nature **485**, 619 (2012).
+
+[21] Wei Yi and Wei Zhang, Phys. Rev. Lett. **109**, 140402 (2012).
+
+[22] Cheng Chin, Rudolf Grimm, Paul Julienne, and Eite Tiesinga, Rev. Mod. Phys. **82**, 1225 (2010).
+
+[23] André Schirotzek, Cheng-Hsun Wu, Ariel Sommer, and Martin W. Zwierlein, Phys. Rev. Lett. **102**, 230402 (2009); M. Koschorreck, D. Pertot, E. Vogt, B. Fröhlich, M. Feld, and M. Köhl, Nature (London) **485**, 619 (2012).
+
+[24] W. Ketterle and M. W. Zwierlein, Rivista del Nuovo Cimento **31**, 247-422 (2008).
+
+[25] We note that for homo-nuclear systems, our derivation for $\gamma_Q^e$ reduces to the singlet density of states as discussed in Ref. [14].
+
+[26] With Eq. (5) and the renormalization equation, we have $\int_0^\infty \frac{\gamma_0^\epsilon}{\epsilon_b - \epsilon} d\epsilon = \int_0^\infty \frac{\gamma_Q^\epsilon}{\xi_Q + \epsilon_b - \epsilon} d\epsilon$. The right-hand side of the equation can be written as $\int_0^\infty \frac{\gamma_Q^\epsilon}{\xi_Q + \epsilon_b - \epsilon} d\epsilon = \int_0^\infty [\frac{\gamma_Q^\epsilon}{\epsilon_b - \epsilon} - \frac{\xi_Q \gamma_Q^\epsilon}{(\epsilon_b - \epsilon)^2} + \frac{\xi_Q^2 \gamma_Q^\epsilon}{(\epsilon_b - \epsilon)^3} + \cdots] d\epsilon$. To first order in $\xi_Q$ ($|\frac{\xi_Q}{\epsilon_b}| \ll 1$), we arrive at Eq. (8).
+
+[27] Xiaoling Cui, Phys. Rev. A **85**, 022705 (2012).
+
+[28] Peng Zhang, Long Zhang, and Wei Zhang, Phys. Rev. A **86**, 042707 (2012).
+
+[29] Peng Zhang, Long Zhang, and Youjin Deng, Phys. Rev. A **86**, 053608 (2012).
+
+[30] Yuxiao Wu and Zhenhua Yu, Phys. Rev. A **87**, 032703 (2013).
+
+[31] Hao Duan, Li You, and Bo Gao, Phys. Rev. A **87**, 052708 (2013).
+
+[32] The analysis for the case with $\gamma_Q^e - \gamma_0^e > 0$ is similar.
\ No newline at end of file
diff --git a/samples/texts_merged/7259202.md b/samples/texts_merged/7259202.md
new file mode 100644
index 0000000000000000000000000000000000000000..01c5470d3f884b748b0150b3a6f97d5a4b4ed3e8
--- /dev/null
+++ b/samples/texts_merged/7259202.md
@@ -0,0 +1,174 @@
+
+---PAGE_BREAK---
+
+MATHEMATICAL ANALYSIS OF A
+THREE COMPARTMENT CATENARY
+MODEL
+
+Dr.V.Anand
+
+Assistant Professor in Mathematics
+
+Kakatiya Institute of Technology & Science, Warangal
+
+Abstract - Mathematical biology is a modern research area at the interface of the biology and mathematics disciplines. Mathematical modelling shows how to translate a physical situation into mathematical language in the form of differential equations. This paper is on pharmacokinetics, a part of mathematical biology that deals with the distribution of drugs or tracers among the various compartments (parts) of the human body. In this model I explain the formation of the differential equations and compute the distribution of drug concentration in three different compartments with respect to time.
+
+**Keywords – Pharmacokinetics, Catenary model, Eigenvalues, Routh-Hurwitz condition.**
+
+# 1: INTRODUCTION
+
+In pharmacokinetics, i.e., drug kinetics or the distribution of a drug between various parts of the body, each part of the body is treated as one compartment [1]. These compartments consist of cells, intestinal fluids, blood vessels, etc. [2,3]. Models of the type discussed here are called compartment models; they are used extensively in medical, biological [4] and ecological studies. Mathematical models in pharmacokinetics [5,6,7,8] are also considered biomathematical models. Because of the chain-like appearance of the graph (Fig 1), such a compartment model is called a catenary model.
+
+In this paper, I consider three compartments: the first is the central compartment (blood vessels), the second is the intestinal fluid and the third is the tissue compartment (cells). In this compartment model the drug distributes most rapidly into the first or central compartment, less rapidly into the second and very slowly into the third compartment. In practice, an initial injection of labeled material into the blood would not be mixed instantaneously; it circulates and recirculates among the various compartments of the body.
+
+Fig1: A Three-compartment catenary model
+---PAGE_BREAK---
+
+# 2: MODEL BLOCK DIAGRAMS AND MODEL EQUATIONS
+
+The rate of change of drug in compartment I:
+
+$$ \underbrace{\dot{x}_1(t)}_{\substack{\text{rate of change of drug}\\ \text{in compartment I}}} = - \underbrace{(k_{12} + k_{10})\,x_1}_{\substack{\text{rate of transfer of drug from compartment I}\\ \text{to compartment II, plus elimination}}} + \underbrace{k_{21}\,x_2}_{\substack{\text{rate of reentry of drug from}\\ \text{compartment II to compartment I}}} $$
+
+The rate of change of drug in compartment II:
+
+$$ \underbrace{\dot{x}_2(t)}_{\substack{\text{rate of change of drug}\\ \text{in compartment II}}} = \underbrace{k_{12}\,x_1}_{\substack{\text{rate of entry of drug into}\\ \text{compartment II from compartment I}}} - \underbrace{k_{21}\,x_2}_{\substack{\text{rate of reentry of drug from compartment II}\\ \text{to compartment I for recycling}}} - \underbrace{k_{23}\,x_2}_{\substack{\text{rate of entry of drug from}\\ \text{compartment II to compartment III}}} + \underbrace{k_{32}\,x_3}_{\substack{\text{rate of reentry of drug from}\\ \text{compartment III to compartment II}}} $$
+
+The rate of change of drug in compartment III:
+
+$$ \underbrace{\dot{x}_3(t)}_{\substack{\text{rate of change of drug}\\ \text{in compartment III}}} = \underbrace{k_{23}\,x_2}_{\substack{\text{rate of transfer of drug from}\\ \text{compartment II to compartment III}}} - \underbrace{k_{32}\,x_3}_{\substack{\text{rate of reentry of drug from}\\ \text{compartment III to compartment II}}} $$
+---PAGE_BREAK---
+
+The model equations for a three compartment model are given by the following set of three linear ordinary differential equations:
+
+$$
+\frac{dx_1}{dt} = -(k_{12} + k_{10})x_1 + k_{21}x_2 \quad (1)
+$$
+
+$$
+\frac{dx_2}{dt} = k_{12}x_1 - (k_{21} + k_{23})x_2 + k_{32}x_3 \quad (2)
+$$
+
+$$
+\frac{dx_3}{dt} = k_{23}x_2 - k_{32}x_3 \tag{3}
+$$
+
+Where $k_{10}, k_{12}, k_{21}, k_{23}$ and $k_{32}$ are positive transfer coefficients.
+
+Equations (1), (2) and (3) can be written in the matrix form $\frac{dX}{dt} = AX$:
+
+$$
+\frac{d}{dt} \begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} = \begin{bmatrix} -(k_{12} + k_{10}) & k_{21} & 0 \\ k_{12} & -(k_{21} + k_{23}) & k_{32} \\ 0 & k_{23} & -k_{32} \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} \quad (4)
+$$
+
+$$
+\text{Where } X=\begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} \text{ and } A=\begin{bmatrix} -(k_{12}+k_{10}) & k_{21} & 0 \\ k_{12} & -(k_{21}+k_{23}) & k_{32} \\ 0 & k_{23} & -k_{32} \end{bmatrix}
+$$
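The linear system above can be integrated directly. The following is a minimal pure-Python sketch (an illustration, not part of the paper) using a classical fourth-order Runge-Kutta step and, for concreteness, the Fig 1 transfer coefficients and initial values quoted in the numerical-computation section:

```python
# Minimal RK4 integration of the catenary model dX/dt = AX.
# Transfer coefficients and initial values are the Fig 1 values from Section 3.
k10, k12, k21, k23, k32 = 0.185, 0.175, 0.15, 0.01, 0.001

A = [[-(k12 + k10), k21,           0.0],
     [k12,          -(k21 + k23),  k32],
     [0.0,          k23,          -k32]]

def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(3)) for i in range(3)]

def rk4_step(x, h):
    k1 = matvec(A, x)
    k2 = matvec(A, [x[i] + 0.5 * h * k1[i] for i in range(3)])
    k3 = matvec(A, [x[i] + 0.5 * h * k2[i] for i in range(3)])
    k4 = matvec(A, [x[i] + h * k3[i] for i in range(3)])
    return [x[i] + h / 6.0 * (k1[i] + 2 * k2[i] + 2 * k3[i] + k4[i])
            for i in range(3)]

x = [50.0, 30.0, 10.0]   # initial amounts x1(0), x2(0), x3(0)
h = 0.1
for _ in range(1000):    # integrate up to t = 100
    x = rk4_step(x, h)
# Elimination occurs only through k10, so the total amount decays over time.
```

Since all off-diagonal entries of $A$ are non-negative and the initial amounts are non-negative, the computed amounts stay non-negative while the total drug decays through the elimination rate $k_{10}x_1$.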
+
+Let $X = X_0 e^{\lambda t}$ be a trial solution with initial conditions $X(0) = [x_{10} \ x_{20} \ x_{30}]^T$
+
+The exponent ‘λ’ satisfies the characteristic equation of A :
+
+$$
+\det[A - \lambda I] = 0
+$$
+
+The characteristic equation for the system is:
+
+$$
+\lambda^3 + (k_{10} + k_{12} + k_{21} + k_{23} + k_{32})\lambda^2 + [k_{12}k_{23} + (k_{21} + k_{23})k_{10} + (k_{12} + k_{21} + k_{10})k_{32}] \lambda + k_{10}k_{21}k_{32} = 0 \quad (6)
+$$
+
+Equation (6) is a cubic polynomial of the form
+
+$$
+\lambda^3 + a_1 \lambda^2 + a_2 \lambda + a_3 = 0
+\qquad
+(7)
+$$
+
+It is evident that $a_1 > 0$ and $a_3 = k_{10}k_{21}k_{32} > 0$.
+
+Since all the transfer coefficients are positive, the relation $a_1 a_2 > a_3$ holds, and therefore $a_3(a_1 a_2 - a_3) > 0$.
+---PAGE_BREAK---
+
+The conditions $a_1 > 0$, $a_3 > 0$ and $a_3(a_1 a_2 - a_3) > 0$ are necessary and sufficient for the Routh-Hurwitz criterion to hold.
+
+Hence equation (7) satisfies the Routh-Hurwitz criterion.
+
+Therefore all three eigenvalues $\lambda_1, \lambda_2, \lambda_3$ of the characteristic equation (7) have negative real parts, and the system is asymptotically stable.
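As a numerical cross-check (a sketch, not part of the paper), the coefficients $a_1$, $a_2$, $a_3$ of the characteristic polynomial can be read directly off the matrix $A$ (negative trace, sum of the principal $2 \times 2$ minors, negative determinant), and the Routh-Hurwitz conditions then verified for any positive set of transfer coefficients; here the Fig 1 values from the numerical-computation section are used:

```python
# Coefficients of det(lambda*I - A) = lambda^3 + a1*lambda^2 + a2*lambda + a3,
# computed directly from A, followed by the Routh-Hurwitz check.
k10, k12, k21, k23, k32 = 0.185, 0.175, 0.15, 0.01, 0.001

A = [[-(k12 + k10), k21,           0.0],
     [k12,          -(k21 + k23),  k32],
     [0.0,          k23,          -k32]]

def det3(M):
    return (M[0][0] * (M[1][1] * M[2][2] - M[1][2] * M[2][1])
            - M[0][1] * (M[1][0] * M[2][2] - M[1][2] * M[2][0])
            + M[0][2] * (M[1][0] * M[2][1] - M[1][1] * M[2][0]))

a1 = -(A[0][0] + A[1][1] + A[2][2])              # -trace(A)
a2 = (A[0][0] * A[1][1] - A[0][1] * A[1][0]      # sum of principal 2x2 minors
      + A[0][0] * A[2][2] - A[0][2] * A[2][0]
      + A[1][1] * A[2][2] - A[1][2] * A[2][1])
a3 = -det3(A)

routh_hurwitz = a1 > 0 and a3 > 0 and a3 * (a1 * a2 - a3) > 0
```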
+
+Therefore, let the solution for $x_1(t)$ be
+
+$$x_1(t) = c_1 e^{\lambda_1 t} + c_2 e^{\lambda_2 t} + c_3 e^{\lambda_3 t} \quad (8)$$
+
+$$\dot{x}_1(t) = c_1 \lambda_1 e^{\lambda_1 t} + c_2 \lambda_2 e^{\lambda_2 t} + c_3 \lambda_3 e^{\lambda_3 t} \quad (9)$$
+
+Now from equation (1), we can write
+
+$$
+\begin{aligned}
+x_2(t) &= \frac{1}{k_{21}} [\dot{x}_1(t) + (k_{12} + k_{10})x_1] \\
+&= \frac{1}{k_{21}} \left[ c_1 \lambda_1 e^{\lambda_1 t} + c_2 \lambda_2 e^{\lambda_2 t} + c_3 \lambda_3 e^{\lambda_3 t} \right. \\
+& \qquad \left. + (c_1 e^{\lambda_1 t} + c_2 e^{\lambda_2 t} + c_3 e^{\lambda_3 t})(k_{12} + k_{10}) \right] \\
+&= \frac{1}{k_{21}} \left[ (k_{12} + k_{10} + \lambda_1)c_1 e^{\lambda_1 t} + (k_{12} + k_{10} + \lambda_2)c_2 e^{\lambda_2 t} \right. \\
+& \qquad \left. + (k_{12} + k_{10} + \lambda_3)c_3 e^{\lambda_3 t} \right]
+\end{aligned}
+\quad (10) $$
+
+Differentiating equation (10) gives
+
+$$\dot{x}_2(t) = \frac{1}{k_{21}} \left[ (k_{12} + k_{10} + \lambda_1)c_1\lambda_1 e^{\lambda_1 t} + (k_{12} + k_{10} + \lambda_2)c_2\lambda_2 e^{\lambda_2 t} + (k_{12} + k_{10} + \lambda_3)c_3\lambda_3 e^{\lambda_3 t} \right] \quad (11)$$
+
+Now from equation (2), we can write
+
+$$x_3(t) = \frac{1}{k_{32}} [\dot{x}_2(t) - k_{12}x_1 + (k_{23} + k_{21})x_2]$$
+---PAGE_BREAK---
+
+$$ = \frac{1}{k_{21} k_{32}} \left[ (k_{12} + k_{10} + \lambda_1) c_1 \lambda_1 e^{\lambda_1 t} + (k_{12} + k_{10} + \lambda_2) c_2 \lambda_2 e^{\lambda_2 t} + (k_{12} + k_{10} + \lambda_3) c_3 \lambda_3 e^{\lambda_3 t} - k_{21} k_{12} (c_1 e^{\lambda_1 t} + c_2 e^{\lambda_2 t} + c_3 e^{\lambda_3 t}) + (k_{23} + k_{21}) \left( (k_{12} + k_{10} + \lambda_1) c_1 e^{\lambda_1 t} + (k_{12} + k_{10} + \lambda_2) c_2 e^{\lambda_2 t} + (k_{12} + k_{10} + \lambda_3) c_3 e^{\lambda_3 t} \right) \right] \quad (12) $$
+
+# 3: NUMERICAL COMPUTATION
+
+Let the initial values be $x_{10} = 50$, $x_{20} = 30$, $x_{30} = 10$.
+
+Consider the following representative values of the transfer coefficients $(k_{10}, k_{12}, k_{21}, k_{23}, k_{32})$:
+
+Fig1: $K_{10}=0.185, K_{12}=0.175, K_{21}=0.15, K_{23}=0.01, K_{32}=0.001$
+---PAGE_BREAK---
+
+Fig2: K₁₀=0.233, K₁₂=0.166, K₂₁=0.136, K₂₃=0.123, K₃₂=0.05
+
+Fig3:K₁₀=0.12, K₁₂=0.075, K₂₁=0.055, K₂₃=0.127, K₃₂=0.052
+
+# 4: CONCLUSIONS
+
+From Figure 1, it is observed that the drug concentration in the second compartment rises suddenly, then decreases slowly and settles to a stable level over a period of time. From Figure 2, it is seen that the drug concentration in the third compartment increases gradually, then decreases slowly and settles over time. From Figure 3, it is noticed that the elimination processes in the compartments have a common meeting time.
+---PAGE_BREAK---
+
+# REFERENCES
+
+[1] J. N. Kapur, "Mathematical Models in Biology and Medicine (1985)", Affiliated East-West Press Pvt Ltd, p. 316-317.
+
+[2] Geoffrey Gordon, "System Simulation (1999)", Prentice-Hall India, p. 34-36.
+
+[3] Evert, C. F., and M. F. Randal, "Formulation and computation of compartment models", J. Pharm. Sci., IX, no. 3 (1970), 102-114.
+
+[4] Jacquez, John A., "Compartmental Analysis in Biology and Medicine", New York: Elsevier Scientific Publishing Company, 1972.
+
+[5] J. M. Watt and Andrew Young, "An attempt to simulate the liver on a computer", Computer Journal 5, pp. 221-227, 1962.
+
+[6] Mayersohn, M. and Gibaldi, M., "Mathematical methods in pharmacokinetics I. Use of the Laplace transform for solving differential rate equations", Am. J. Pharm. Educ. 1970; 34: 608-614.
+
+[7] Mayersohn, M. and Gibaldi, M., "Mathematical methods in pharmacokinetics II. Solution of the two compartment open model", Am. J. Pharm. Educ. 1971; 35: 19-28.
+
+[8] V. Anand, Dr. N. Ch. Pattabhi Ramacharyulu, Dr. B. Ravindra Reddy, "Mathematical Model of Glucose and Insulin Kinetics", International Journal of Scientific and Innovative Mathematical Research, Volume 3, Issue 1, January 2015, pp. 60-66.
\ No newline at end of file
diff --git a/samples/texts_merged/7332466.md b/samples/texts_merged/7332466.md
new file mode 100644
index 0000000000000000000000000000000000000000..1f5c430a7bf186fe6bf74743915b159d697b1afd
--- /dev/null
+++ b/samples/texts_merged/7332466.md
@@ -0,0 +1,548 @@
+
+---PAGE_BREAK---
+
+Accelerated Stochastic Gradient-free and Projection-free Methods
+
+Feihu Huang ¹² Lue Tao ¹² Songcan Chen ¹²
+
+Abstract
+
+In the paper, we propose a class of accelerated stochastic gradient-free and projection-free (a.k.a., zeroth-order Frank-Wolfe) methods to solve the constrained stochastic and finite-sum nonconvex optimization. Specifically, we propose an accelerated stochastic zeroth-order Frank-Wolfe (Acc-SZOFW) method based on the variance reduced technique of SPIDER/SpiderBoost and a novel momentum accelerated technique. Moreover, under some mild conditions, we prove that the Acc-SZOFW has the function query complexity of $O(d\sqrt{n}\epsilon^{-2})$ for finding an $\epsilon$-stationary point in the finite-sum problem, which improves the existing best result by a factor of $O(\sqrt{n}\epsilon^{-2})$, and has the function query complexity of $O(d\epsilon^{-3})$ in the stochastic problem, which improves the existing best result by a factor of $O(\epsilon^{-1})$. To relax the large batches required in the Acc-SZOFW, we further propose a novel accelerated stochastic zeroth-order Frank-Wolfe (Acc-SZOFW*) based on a new variance reduced technique of STORM, which still reaches the function query complexity of $O(d\epsilon^{-3})$ in the stochastic problem without relying on any large batches. In particular, we present an accelerated framework of the Frank-Wolfe methods based on the proposed momentum accelerated technique. The extensive experimental results on black-box adversarial attack and robust black-box classification demonstrate the efficiency of our algorithms.
+
+¹College of Computer Science & Technology, Nanjing University of Aeronautics and Astronautics, Nanjing 211106, China
+²MIIT Key Laboratory of Pattern Analysis & Machine Intelligence. Correspondence to: Feihu Huang .
+
+# 1. Introduction
+
+In the paper, we focus on solving the following constrained stochastic and finite-sum optimization problems
+
+$$ \min_{x \in \mathcal{X}} f(x) = \begin{cases} \mathbb{E}_{\xi}[f(x; \xi)] & (\text{stochastic}) \\ \frac{1}{n} \sum_{i=1}^{n} f_i(x) & (\text{finite-sum}) \end{cases} \quad (1) $$
+
+where $f(x) : \mathbb{R}^d \to \mathbb{R}$ is a nonconvex and smooth loss function, the restricted domain $\mathcal{X} \subseteq \mathbb{R}^d$ is supposed to be convex and compact, and $\xi$ is a random variable following an unknown distribution. When $f(x)$ denotes the expected risk function, problem (1) is viewed as a stochastic problem; when $f(x)$ denotes the empirical risk function, it is viewed as a finite-sum problem. In fact, problem (1) appears in many machine learning models such as multitask learning, recommendation systems and structured prediction (Jaggi, 2013; Lacoste-Julien et al., 2013; Hazan & Luo, 2016). For solving the constrained problem (1), one common approach is the projected gradient method (Iusem, 2003), which alternates between optimizing in the unconstrained space and projecting onto the constraint set $\mathcal{X}$. However, the projection is quite expensive to compute for many constraint sets, such as the set of all bounded nuclear norm matrices. The Frank-Wolfe algorithm (i.e., conditional gradient) (Frank & Wolfe, 1956; Jaggi, 2013) is a good candidate for solving problem (1), since it only needs to compute a linear operator instead of a projection operator at each iteration. Following (Jaggi, 2013), the linear optimization over $\mathcal{X}$ is much faster than the projection onto $\mathcal{X}$ in many problems, such as over the set of all bounded nuclear norm matrices.
+
+Due to its projection-free property and ability to handle structured constraints, the Frank-Wolfe algorithm has recently regained popularity in many machine learning applications, and its variants have been widely studied. For example, several convex variants of Frank-Wolfe algorithm (Jaggi, 2013; Lacoste-Julien & Jaggi, 2015; Lan & Zhou, 2016; Xu & Yang, 2018) have been studied. In the big data setting, the corresponding online and stochastic Frank-Wolfe algorithms (Hazan & Kale, 2012; Hazan & Luo, 2016; Hassani et al., 2019; Xie et al., 2019) have been developed, and their convergence rates were studied. The above Frank-Wolfe algorithms were mainly studied in the convex setting.
+
+Proceedings of the 37th International Conference on Machine Learning, Online, PMLR 119, 2020. Copyright 2020 by the author(s).
+---PAGE_BREAK---
+
+Table 1. Function query complexity comparison of the representative non-convex zeroth-order Frank-Wolfe methods for finding an $\epsilon$-stationary point of the problem (1), i.e., $\mathbb{E}[\mathcal{G}(x)] \le \epsilon$. $T$ denotes the total number of iterations. GauGE, UniGE and CooGE are abbreviations of the Gaussian-distribution, uniform-distribution and coordinate-wise smoothing gradient estimators, respectively. Note that FW-Black and Acc-ZO-FW are deterministic algorithms; the others are stochastic algorithms. Here **query-size** denotes the number of function queries required to estimate one zeroth-order gradient in these algorithms. Note that these query-sizes are only used in the theoretical analysis.
+
+| Problem | Algorithm | Reference | Gradient Estimator | Query Complexity | Query-Size |
+|---|---|---|---|---|---|
+| Finite-Sum | FW-Black | Chen et al. (2018) | GauGE or UniGE | $O(dn\epsilon^{-4})$ | $O(ndT)$ |
+| | Acc-ZO-FW | Ours | CooGE | $O(dn\epsilon^{-2})$ | $O(nd)$ |
+| | Acc-SZOFW | Ours | CooGE | $O(dn^{1/2}\epsilon^{-2})$ | $O(n^{1/2}d)$ |
+| Stochastic | ZO-SFW | Sahu et al. (2019) | GauGE | $O(d^{2/3}\epsilon^{-4})$ | $O(1)$ |
+| | ZSCG | Balasubramanian & Ghadimi (2018) | GauGE | $O(d\epsilon^{-4})$ | $O(dT)$ |
+| | Acc-SZOFW | Ours | CooGE | $O(d\epsilon^{-3})$ | $O(dT^{1/2})$ |
+| | Acc-SZOFW* | Ours | UniGE | $O(d\epsilon^{-3})$ | $O(d^{-1/2}T^{1/2})$ |
+| | Acc-SZOFW* | Ours | CooGE | $O(d\epsilon^{-3})$ | $O(d)$ |
+| | Acc-SZOFW* | Ours | UniGE | $O(d^{1/2}\epsilon^{-3})$ | $O(1)$ |
+
+In fact, the Frank-Wolfe algorithm and its variants are also successful in solving nonconvex problems such as adversarial attacks (Chen et al., 2018). Recently, some nonconvex variants of Frank-Wolfe algorithm (Lacoste-Julien, 2016; Reddi et al., 2016; Qu et al., 2018; Shen et al., 2019; Yurtsever et al., 2019; Hassani et al., 2019; Zhang et al., 2019) have been developed.
+
+Until now, the above Frank-Wolfe algorithm and its variants need to compute the gradients of objective functions at each iteration. However, in many complex machine learning problems, the explicit gradients of the objective functions are difficult or infeasible to obtain. For example, in the reinforcement learning (Malik et al., 2019; Huang et al., 2020), some complex graphical model inference (Wainwright et al., 2008) and metric learning (Chen et al., 2019a) problems, it is difficult to compute the explicit gradients of objective functions. Even worse, in the black-box adversarial attack problems (Liu et al., 2018b; Chen et al., 2018), only function values (i.e., prediction labels) are accessible. Clearly, the above Frank-Wolfe methods will fail in dealing with these problems. Since it only uses the function values in optimization, the gradient-free (zeroth-order) optimization method (Duchi et al., 2015; Nesterov & Spokoiny, 2017) is a promising choice to address these problems. More recently, some zeroth-order Frank-Wolfe methods (Balasubramanian & Ghadimi, 2018; Chen et al., 2018; Sahu et al., 2019) have been proposed and studied. However, these zeroth-order Frank-Wolfe methods suffer from high function query complexity in solving the problem (1) (please see Table 1).
+
+Thus, in this paper, we propose a class of accelerated zeroth-order Frank-Wolfe methods to solve the problem (1), where $f(x)$ is possibly black-box. Specifically, we propose an accelerated stochastic zeroth-order Frank-Wolfe (Acc-SZOFW) method based on the variance reduced technique of SPIDER/SpiderBoost (Fang et al., 2018; Wang et al., 2018) and a novel momentum accelerated technique. Further, we propose a novel accelerated stochastic zeroth-order Frank-Wolfe (Acc-SZOFW*) to relax the large mini-batch size required in the Acc-SZOFW.
+
+## Contributions
+
+In summary, our main contributions are given as follows:
+
+1) We propose an accelerated stochastic zeroth-order Frank-Wolfe (Acc-SZOFW) method based on the variance reduced technique of SPIDER/SpiderBoost and a novel momentum accelerated technique.
+
+2) Moreover, under some mild conditions, we prove that the Acc-SZOFW has the function query complexity of $O(d\sqrt{n}\epsilon^{-2})$ for finding an $\epsilon$-stationary point in the finite-sum problem (1), which improves the existing best result by a factor of $O(\sqrt{n}\epsilon^{-2})$, and has the function query complexity of $O(d\epsilon^{-3})$ in the stochastic problem (1), which improves the existing best result by a factor of $O(\epsilon^{-1})$.
+
+3) We further propose a novel accelerated stochastic zeroth-order Frank-Wolfe (Acc-SZOFW*) to relax the large mini-batch size required in the Acc-SZOFW. We prove that the Acc-SZOFW* still has the function query complexity of $O(d\epsilon^{-3})$ without relying on large batches.
+
+4) In particular, we propose an accelerated framework of the Frank-Wolfe methods based on the proposed momentum accelerated technique.
+
+# 2. Related Works
+
+## 2.1. Zeroth-Order Methods
+
+Zeroth-order (gradient-free) methods can be effectively used to solve many machine learning problems, where the explicit gradient is difficult or infeasible to obtain. Recently, the zeroth-order methods have been widely studied in machine learning community. For example, several zeroth-order methods (Ghadimi & Lan, 2013; Duchi et al., 2015;
+---PAGE_BREAK---
+
+Nesterov & Spokoiny, 2017) have been proposed by using the Gaussian smoothing technique. Subsequently, (Liu et al., 2018b; Ji et al., 2019) recently proposed accelerated zeroth-order stochastic gradient methods based on the variance reduced techniques. To deal with nonsmooth optimization, some zeroth-order proximal gradient methods (Ghadimi et al., 2016; Huang et al., 2019c; Ji et al., 2019) and zeroth-order ADMM-based methods (Gao et al., 2018; Liu et al., 2018a; Huang et al., 2019a;b) have been proposed. In addition, more recently, (Chen et al., 2019b) has proposed a zeroth-order adaptive momentum method. To solve the constrained optimization, the zeroth-order Frank-Wolfe methods (Balasubramanian & Ghadimi, 2018; Chen et al., 2018; Sahu et al., 2019) and the zeroth-order projected gradient methods (Liu et al., 2018c) have been recently proposed and studied.
+
+## 2.2. Variance-Reduced and Momentum Methods
+
+To accelerate the stochastic gradient descent (SGD) algorithm, various variance-reduced algorithms such as SAG (Roux et al., 2012), SAGA (Defazio et al., 2014), SVRG (Johnson & Zhang, 2013) and SARAH (Nguyen et al., 2017a) have been presented and studied. Due to the popularity of deep learning, large-scale nonconvex learning problems have recently received wide interest in the machine learning community. Thus, many corresponding variance-reduced algorithms for nonconvex SGD have also been proposed and studied, e.g., SVRG (Allen-Zhu & Hazan, 2016; Reddi et al., 2016), SCSG (Lei et al., 2017), SARAH (Nguyen et al., 2017b), SPIDER (Fang et al., 2018), SpiderBoost (Wang et al., 2018; 2019) and SNVRG (Zhou et al., 2018).
+
+Another effective alternative is to use momentum-based methods to accelerate SGD. Recently, various momentum-based stochastic algorithms for convex optimization have been proposed and studied, e.g., APCG (Lin et al., 2014), AccProxSVRG (Nitanda, 2014) and Katyusha (Allen-Zhu, 2017). At the same time, for nonconvex optimization, some momentum-based stochastic algorithms have also been studied, e.g., RSAG (Ghadimi & Lan, 2016), Prox-SpiderBoost-M (Wang et al., 2019), STORM (Cutkosky & Orabona, 2019) and Hybrid-SGD (Tran-Dinh et al., 2019).
+
+# 3. Preliminaries
+
+## 3.1. Zeroth-Order Gradient Estimators
+
+In this subsection, we introduce two useful zeroth-order gradient estimators, i.e., the uniform smoothing gradient estimator (UniGE) and the coordinate smoothing gradient estimator (CooGE) (Liu et al., 2018b; Ji et al., 2019). Given any function $f_i(x) : \mathbb{R}^d \to \mathbb{R}$, the UniGE can generate an approximated gradient as follows:
+
+$$
+\hat{\nabla}_{\text{unif}} f_i(x) = \frac{d(f_i(x + \beta u) - f_i(x))}{\beta} u, \quad (2)
+$$
+
+where $u \in \mathbb{R}^d$ is a vector generated from the uniform distribution over the unit sphere, and $\beta$ is a smoothing parameter.
+While the CooGE can generate an approximated gradient:
+
+$$
+\hat{\nabla}_{\text{coo}} f_i(x) = \sum_{j=1}^{d} \frac{f_i(x + \mu_j e_j) - f_i(x - \mu_j e_j)}{2\mu_j} e_j, \quad (3)
+$$
+
+where $\mu_j$ is a coordinate-wise smoothing parameter, and $e_j$
+is a basis vector with 1 at its $j$-th coordinate, and 0 otherwise.
+Without loss of generality, let $\mu = \mu_1 = \cdots = \mu_d$.
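For concreteness, the two estimators can be sketched in a few lines of pure Python (an illustration, not the authors' code; the black-box function `f`, the dimension and the smoothing parameters are placeholders):

```python
# Pure-Python sketch of the UniGE (Eq. (2)) and CooGE (Eq. (3)) estimators.
# f takes a list of floats and returns a float; beta and mu are user-chosen.
import math
import random

def unige(f, x, beta):
    """Uniform smoothing gradient estimator, Eq. (2): two function queries."""
    d = len(x)
    g = [random.gauss(0.0, 1.0) for _ in range(d)]
    norm = math.sqrt(sum(v * v for v in g))
    u = [v / norm for v in g]      # uniformly distributed on the unit sphere
    scale = d * (f([x[i] + beta * u[i] for i in range(d)]) - f(x)) / beta
    return [scale * u[i] for i in range(d)]

def cooge(f, x, mu):
    """Coordinate smoothing gradient estimator, Eq. (3): 2d function queries."""
    grad = []
    for j in range(len(x)):
        xp, xm = list(x), list(x)
        xp[j] += mu
        xm[j] -= mu
        grad.append((f(xp) - f(xm)) / (2.0 * mu))
    return grad
```

CooGE is a deterministic central-difference approximation at a cost of $2d$ queries per gradient, while UniGE needs only two queries but returns a random estimate.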
+
+## 3.2. Standard Frank-Wolfe Algorithm and Assumptions
+
+The standard Frank-Wolfe (i.e., conditional gradient) algorithm solves the above problem (1) by the following iteration: at the $(t+1)$-th iteration,
+
+$$
+\begin{cases}
+w_{t+1} = \arg \max_{w \in \mathcal{X}} \langle w, -\nabla f(x_t) \rangle, \\
+x_{t+1} = (1 - \gamma_{t+1}) x_t + \gamma_{t+1} w_{t+1},
+\end{cases}
+\tag{4}
+$$
+
+where $\gamma_{t+1} \in (0, 1)$ is a step size. For the nonconvex optimization, we apply the following duality gap (i.e., Frank-Wolfe gap (Jaggi, 2013))
+
+$$
+\mathcal{G}(x) = \max_{w \in \mathcal{X}} \langle w - x, -\nabla f(x) \rangle, \quad (5)
+$$
+
+to give the standard criterion of convergence $\mathcal{G}(x) \le \epsilon$ (or $\mathbb{E}[\mathcal{G}(x)] \le \epsilon$) for finding an $\epsilon$-stationary point, as in (Reddi et al., 2016).
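A minimal sketch of one such iteration, assuming for illustration that $\mathcal{X}$ is an $\ell_1$-ball of radius $r$ (a set whose linear maximization has a closed-form vertex solution; this choice and all names below are ours, not the paper's):

```python
# One standard Frank-Wolfe iteration, Eq. (4), plus the Frank-Wolfe gap, Eq. (5).

def lmo_l1(s, r):
    """argmax_{||w||_1 <= r} <w, s>: all mass on the largest-magnitude coordinate."""
    j = max(range(len(s)), key=lambda i: abs(s[i]))
    w = [0.0] * len(s)
    w[j] = r if s[j] > 0 else -r
    return w

def fw_step(x, grad, r, gamma):
    s = [-g for g in grad]                   # maximize <w, -grad f(x)>
    w = lmo_l1(s, r)
    gap = sum((w[i] - x[i]) * s[i] for i in range(len(x)))   # Eq. (5), >= 0
    x_new = [(1 - gamma) * x[i] + gamma * w[i] for i in range(len(x))]
    return x_new, gap
```

For example, minimizing $f(x) = \frac{1}{2}\|x - c\|^2$ over the unit $\ell_1$-ball with the classical step size $\gamma_t = 2/(t+2)$ drives the gap to zero and the iterate to the ball's vertex nearest $c$.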
+
+Next, we give some standard assumptions regarding problem (1) as follows:
+
+**Assumption 1.** Let $f_i(x) = f(x; \xi_i)$, where $\xi_i$ samples from the distribution of random variable $\xi$. Each loss function $f_i(x)$ is L-smooth such that
+
+$$
+\begin{align*}
+& \| \nabla f_i(x) - \nabla f_i(y) \| \leq L \| x - y \| , && \forall x, y \in \mathcal{X} , \\
+& f_i(x) \leq f_i(y) + \nabla f_i(y)^T (x - y) + \frac{L}{2} \| x - y \|^2 .
+\end{align*}
+$$
+
+Let $f_\beta(x) = E_{u \sim U_B}[f(x+\beta u)]$ be a smooth approximation of $f(x)$, where $U_B$ is the uniform distribution over the $d$-dimensional unit Euclidean ball $B$. Following (Ji et al., 2019), we have $E_{(u,\xi)}[\hat{\nabla}_{uni}f_\xi(x)] = \nabla f_\beta(x)$.
+
+**Assumption 2.** The variance of stochastic (zeroth-order) gradient is bounded, i.e., there exists a constant $\sigma_1 > 0$ such that for all $x$, it follows $\mathbb{E}\|\nabla f_\xi(x) - \nabla f(x)\|^2 \le \sigma_1^2$; There exists a constant $\sigma_2 > 0$ such that for all $x$, it follows $\mathbb{E}\|\hat{\nabla}_{uni}f_\xi(x) - \nabla f_\beta(x)\|^2 \le \sigma_2^2$.
+---PAGE_BREAK---
+
+**Assumption 3.** The constraint set $\mathcal{X} \subseteq \mathbb{R}^d$ is compact with the diameter: $\max_{x,y \in \mathcal{X}} \|x - y\| \le D$.
+
+**Assumption 4.** The objective function $f(x)$ is bounded from below on $\mathcal{X}$, i.e., there exists a non-negative constant $\Delta$ such that $f(x) - \inf_{y \in \mathcal{X}} f(y) \le \Delta$ for all $x \in \mathcal{X}$.
+
+Assumption 1 imposes smoothness on each loss function $f_i(x)$ or $f(x; \xi_i)$, which is commonly used in the convergence analysis of nonconvex algorithms (Ghadimi et al., 2016). Assumption 2 states that the variance of the stochastic or zeroth-order gradient is bounded in norm, which is a common assumption in the convergence analysis of stochastic zeroth-order algorithms (Gao et al., 2018; Ji et al., 2019). Assumptions 3 and 4 are standard for the convergence analysis of Frank-Wolfe algorithms (Jaggi, 2013; Shen et al., 2019; Yurtsever et al., 2019).
+
+# 4. Accelerated Stochastic Zeroth-Order Frank-Wolfe Algorithms
+
+In the section, we first propose an accelerated stochastic zeroth-order Frank-Wolfe (Acc-SZOFW) algorithm based on the variance reduced technique of SPIDER/SpiderBoost and a novel momentum accelerated technique. We then further propose a novel accelerated stochastic zeroth-order Frank-Wolfe (Acc-SZOFW*) algorithm to relax the large mini-batch size required in the Acc-SZOFW.
+
+## 4.1. Acc-SZOFW Algorithm
+
+In the subsection, we propose an Acc-SZOFW algorithm to solve the problem (1), where the loss function is possibly black-box. The Acc-SZOFW algorithm is given in Algorithm 1.
+
+We first propose an accelerated **deterministic** zeroth-order Frank-Wolfe (Acc-ZO-FW) algorithm to solve the finite-sum problem (1) as a baseline by using the zeroth-order gradient $v_t = \frac{1}{n} \sum_{i=1}^{n} \hat{\nabla}_{coo} f_i(z_t)$ in Algorithm 1. Although Chen et al. (2018) has proposed a **deterministic** zeroth-order Frank-Wolfe (FW-Black) algorithm by using the momentum-based accelerated zeroth-order gradients, our Acc-ZO-FW algorithm still has lower query complexity than the FW-Black algorithm (see Table 1).
+
+When the sample size $n$ is very large in the finite-sum optimization problem (1), obtaining the estimated full gradient of $f(x)$ is very time-consuming, which in turn makes the whole algorithm very slow. Even worse, for the stochastic optimization problem (1), we can never obtain the estimated full gradient of $f(x)$. As a result, a stochastic optimization method is a good choice. Specifically, we can draw a mini-batch $\mathcal{B} \subseteq \{1, 2, \dots, n\}$ ($b = |\mathcal{B}|$) or $\mathcal{B} = \{\xi_1, \dots, \xi_b\}$ from the distribution of the random variable $\xi$, and obtain the following stochastic zeroth-order gradient:
+
+$$\hat{\nabla} f_{\mathcal{B}}(x) = \frac{1}{b} \sum_{j \in \mathcal{B}} \hat{\nabla} f_j(x),$$
+
+where $\hat{\nabla} f_j(\cdot)$ denotes either $\hat{\nabla}_{coo} f_j(\cdot)$ or $\hat{\nabla}_{uni} f_j(\cdot)$.
+
+**Algorithm 1 Acc-SZOFW Algorithm**
+
+1: **Input:** Total iteration $T$, step-sizes $\{\eta_t, \gamma_t \in (0, 1)\}_{t=0}^{T-1}$, weighted parameters $\{\alpha_t \in [0, 1]\}_{t=1}^{T-1}$, epoch-size $q$, mini-batch sizes $b$ or $b_1$, $b_2$;
+
+2: **Initialize:** $x_0 = y_0 = z_0 \in \mathcal{X}$;
+
+3: **for** $t = 0, 1, \dots, T-1$ **do**
+
+4: **if** $\operatorname{mod}(t, q) = 0$ **then**
+
+5: For the **finite-sum** setting, compute $v_t = \hat{\nabla}_{coo} f(z_t) = \frac{1}{n} \sum_{i=1}^{n} \hat{\nabla}_{coo} f_i(z_t)$;
+
+6: For the **stochastic** setting, randomly select $b_1$ samples $\mathcal{B}_1 = \{\xi_1, \dots, \xi_{b_1}\}$, and compute $v_t = \hat{\nabla}_{coo} f_{\mathcal{B}_1}(z_t)$, or draw i.i.d. $\{u_1, \dots, u_{b_1}\}$ from the uniform distribution over the unit sphere, then compute $v_t = \hat{\nabla}_{uni} f_{\mathcal{B}_1}(z_t)$;
+
+7: **else**
+
+8: For the **finite-sum** setting, randomly select $b = |\mathcal{B}|$ samples $\mathcal{B} \subseteq \{1, \dots, n\}$, and compute $v_t = \frac{1}{b} \sum_{j \in \mathcal{B}} [\hat{\nabla}_{coo} f_j(z_t) - \hat{\nabla}_{coo} f_j(z_{t-1})] + v_{t-1}$;
+
+9: For the **stochastic** setting, randomly select $b_2$ samples $\mathcal{B}_2 = \{\xi_1, \dots, \xi_{b_2}\}$, and compute $v_t = \frac{1}{b_2} \sum_{j \in \mathcal{B}_2} [\hat{\nabla}_{coo} f_j(z_t) - \hat{\nabla}_{coo} f_j(z_{t-1})] + v_{t-1}$, or draw i.i.d. $\{u_1, \dots, u_{b_2}\}$ from the uniform distribution over the unit sphere, then $v_t = \frac{1}{b_2} \sum_{j \in \mathcal{B}_2} [\hat{\nabla}_{uni} f_j(z_t) - \hat{\nabla}_{uni} f_j(z_{t-1})] + v_{t-1}$;
+
+10: **end if**
+
+11: Optimize $w_t = \arg\max_{w \in \mathcal{X}} \langle w, -v_t \rangle$;
+
+12: Update $x_{t+1} = x_t + \gamma_t (w_t - x_t)$;
+
+13: Update $y_{t+1} = z_t + \eta_t (w_t - z_t)$;
+
+14: Update $z_{t+1} = (1 - \alpha_{t+1}) y_{t+1} + \alpha_{t+1} x_{t+1}$;
+
+15: **end for**
+
+16: **Output:** $z_\zeta$ chosen uniformly at random from $\{z_t\}_{t=1}^T$.
+
+However, this standard zeroth-order stochastic Frank-Wolfe algorithm suffers from a large variance in the zeroth-order stochastic gradient. Following (Balasubramanian & Ghadimi, 2018; Sahu et al., 2019), this variance results in a high function query complexity. Thus, we use the variance reduced technique of SPIDER/SpiderBoost as in (Ji et al., 2019) to reduce the variance of the stochastic gradients. Specifically, in Algorithm 1, we use the following semi-stochastic gradient for solving the stochastic problem:
+
+$$v_t =
+\begin{cases}
+\frac{1}{b_1} \sum_{i \in \mathcal{B}_1} \hat{\nabla} f_i(z_t), & \text{if } \operatorname{mod}(t,q) = 0, \\
+\frac{1}{b_2} \sum_{i \in \mathcal{B}_2} \big[\hat{\nabla} f_i(z_t) - \hat{\nabla} f_i(z_{t-1})\big] + v_{t-1}, & \text{otherwise.}
+\end{cases}
+$$
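This SPIDER-style recursion can be sketched in pure Python as follows (illustrative only; plain per-sample gradient callables stand in for the zeroth-order estimates, and names such as `spider_v` are ours, not the paper's):

```python
# SPIDER/SpiderBoost recursion for v_t: a large-batch checkpoint every q steps,
# and a small-batch difference correction in between.
import random

def avg(vs):
    return [sum(v[j] for v in vs) / len(vs) for j in range(len(vs[0]))]

def add(a, b):
    return [a[j] + b[j] for j in range(len(a))]

def sub(a, b):
    return [a[j] - b[j] for j in range(len(a))]

def spider_v(grads, z_seq, q, b1, b2):
    """grads: per-sample gradient callables; z_seq: iterates z_0, z_1, ..."""
    n = len(grads)
    v, v_hist = None, []
    for t, z in enumerate(z_seq):
        if t % q == 0:
            batch = random.sample(range(n), b1)   # checkpoint: large batch B_1
            v = avg([grads[i](z) for i in batch])
        else:
            batch = random.sample(range(n), b2)   # correction: small batch B_2
            diff = avg([sub(grads[i](z), grads[i](z_seq[t - 1])) for i in batch])
            v = add(diff, v)
        v_hist.append(v)
    return v_hist
```

When the batches cover all samples, the recursion tracks the full gradient exactly; with small batches it tracks it up to a variance that the analysis controls.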
+---PAGE_BREAK---
+
+**Algorithm 2 Acc-SZOFW* Algorithm**
+
+1: **Input:** Total iteration $T$, step-sizes $\{\eta_t, \gamma_t \in (0, 1)\}_{t=0}^{T-1}$, weighted parameters $\{\alpha_t \in [0, 1]\}_{t=1}^{T-1}$ and the parameters $\{\rho_t\}_{t=1}^{T-1}$;
+2: **Initialize:** $x_0 = y_0 = z_0 \in \mathcal{X}$;
+3: **for** $t = 0, 1, \dots, T-1$ **do**
+4: **if** $t = 0$ **then**
+5: Sample a point $\xi_0$, and compute $v_0 = \hat{\nabla}_{coo}f_{\xi_0}(z_0)$, or draw a vector $u \in \mathbb{R}^d$ from the uniform distribution over the unit sphere, then compute $v_0 = \hat{\nabla}_{uni}f_{\xi_0}(z_0)$;
+6: **else**
+7: Sample a point $\xi_t$, and compute $v_t = \nabla_{coo}f_{\xi_t}(z_t) + (1 - \rho_t)(v_{t-1} - \nabla_{coo}f_{\xi_t}(z_{t-1}))$, or draw a vector $u \in \mathbb{R}^d$ from uniform distribution over unit sphere, then compute $v_t = \nabla_{uni}f_{\xi_t}(z_t) + (1 - \rho_t)(v_{t-1} - \nabla_{uni}f_{\xi_t}(z_{t-1}))$;
+8: **end if**
+9: Optimize $w_t = \arg\max_{w \in \mathcal{X}} \langle w, -v_t \rangle$;
+10: Update $x_{t+1} = x_t + \gamma_t(w_t - x_t)$;
+11: Update $y_{t+1} = z_t + \eta_t(w_t - z_t)$;
+12: Update $z_{t+1} = (1 - \alpha_{t+1})y_{t+1} + \alpha_{t+1}x_{t+1}$;
+13: **end for**
+14: **Output:** $z_\zeta$ chosen uniformly at random from $\{z_t\}_{t=1}^T$.
+
+Moreover, we propose a novel momentum-accelerated framework for the Frank-Wolfe algorithm. Specifically, we introduce two intermediate variables $x$ and $y$, as in (Wang et al., 2019), and our algorithm keeps all variables $\{x, y, z\}$ in the constraint set $\mathcal{X}$. In Algorithm 1, when we set $\alpha_{t+1} = 0$ or $\alpha_{t+1} = 1$, our algorithm reduces to the zeroth-order Frank-Wolfe algorithm with the variance-reduced technique of SPIDER/SpiderBoost. When $\alpha_{t+1} \in (0, 1)$, our algorithm generates the following iterations:
+
+$$
+\begin{align*}
+z_1 &= z_0 + ((1-\alpha_1)\eta_0 + \alpha_1\gamma_0)(w_0 - z_0), \\
+z_2 &= z_1 + ((1-\alpha_2)\eta_1 + \alpha_2\gamma_1)(w_1 - z_1) \\
+&\quad + \alpha_2(1-\gamma_1)(1-\alpha_1)(\gamma_0 - \eta_0)(w_0 - z_0), \\
+z_3 &= z_2 + ((1-\alpha_3)\eta_2 + \alpha_3\gamma_2)(w_2 - z_2) \\
+&\quad + \alpha_3(1-\gamma_2)(1-\alpha_2)(\gamma_1 - \eta_1)(w_1 - z_1) \\
+&\quad + \alpha_3(1-\gamma_2)(1-\alpha_2)(1-\gamma_1)(1-\alpha_1)(\gamma_0 - \eta_0)(w_0 - z_0), \\
+&\vdots
+\end{align*}
+$$
+
+From the above iterations, the updated parameter $z_t$ is a linear combination of the previous terms $w_i - z_i$ ($i \le t$), which coincides with the aim of momentum acceleration techniques (Nesterov, 2004; Allen-Zhu, 2017). In fact, our momentum acceleration technique does not rely on the specific form of the gradient estimate $v_t$. In other words, it can be applied in zeroth-order, first-order, deterministic or stochastic Frank-Wolfe algorithms.
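A minimal sketch of one round of this framework (steps 11-14 of Algorithm 1, plus the linear minimization step), with a hypothetical `l1_lmo` helper written for the $\ell_1$-ball constraint that appears later in problem (10):

```python
import numpy as np

def l1_lmo(g, theta=1.0):
    """Hypothetical helper: argmax_{||w||_1 <= theta} <w, g> is attained at
    a signed vertex of the l1 ball (the coordinate with largest |g_i|)."""
    i = int(np.argmax(np.abs(g)))
    w = np.zeros_like(g)
    w[i] = theta * np.sign(g[i])
    return w

def momentum_fw_step(x, y, z, v, lmo, gamma, eta, alpha_next):
    """One round of steps 11-14 of Algorithm 1. Since w lies in X and each
    update is a convex combination, x, y, z all stay feasible."""
    w = lmo(-v)                      # step 11: argmax_w <w, -v>
    x_new = x + gamma * (w - x)      # step 12
    y_new = z + eta * (w - z)        # step 13
    z_new = (1.0 - alpha_next) * y_new + alpha_next * x_new  # step 14
    return x_new, y_new, z_new
```

Because every update is a convex combination of feasible points, no projection is ever needed, which is the defining advantage of the Frank-Wolfe family.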
+
+## 4.2. Acc-SZOFW* Algorithm
+
+In this subsection, we propose a novel Acc-SZOFW* algorithm based on the momentum-based variance-reduced technique of STORM/Hybrid-SGD (Cutkosky & Orabona, 2019; Tran-Dinh et al., 2019). Although the above Acc-SZOFW algorithm reaches a lower function query complexity, it requires large batches (please see Table 1). Clearly, the Acc-SZOFW algorithm is not well suited to very large-scale problems or streaming-data problems. Thus, we further propose the Acc-SZOFW* algorithm to relax the large batches required in Acc-SZOFW. Algorithm 2 details the Acc-SZOFW* algorithm.
+
+In Algorithm 2, we apply the variance-reduced technique of STORM to estimate the zeroth-order stochastic gradients, and update the parameters $\{x, y, z\}$ as in Algorithm 1. Specifically, we use the zeroth-order stochastic gradients as follows:
+
+$$
+\begin{align}
+v_t ={}& \rho_t \underbrace{\hat{\nabla} f_{\xi_t}(z_t)}_{\text{SGD}} + (1 - \rho_t) (\underbrace{\hat{\nabla} f_{\xi_t}(z_t) - \hat{\nabla} f_{\xi_t}(z_{t-1})}_{\text{SPIDER}} + v_{t-1}) \nonumber \\
+ ={}& \hat{\nabla} f_{\xi_t}(z_t) + (1 - \rho_t) (v_{t-1} - \hat{\nabla} f_{\xi_t}(z_{t-1})), \tag{6}
+\end{align}
+$$
+
+where $\rho_t \in (0, 1]$. Recently, (Zhang et al., 2019; Xie et al., 2019) have applied this variance-reduced technique of STORM to Frank-Wolfe algorithms. However, these algorithms strictly rely on unbiased stochastic gradients. To the best of our knowledge, we are the first to apply STORM to a zeroth-order algorithm, which does not rely on an unbiased stochastic gradient.
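The recursion (6) is a one-line update; the sketch below states it in both its definitional and simplified forms (here `g_now` and `g_prev` stand for the zeroth-order estimates $\hat{\nabla} f_{\xi_t}(z_t)$ and $\hat{\nabla} f_{\xi_t}(z_{t-1})$ computed on the same sample $\xi_t$):

```python
import numpy as np

def storm_v(rho, g_now, g_prev, v_prev):
    """STORM-style estimator of Eq. (6):
    v_t = rho * g_t + (1 - rho) * (v_{t-1} + g_t - g_{t-1})
        = g_t + (1 - rho) * (v_{t-1} - g_{t-1}).
    rho = 1 recovers plain SGD; rho = 0 recovers the SPIDER correction."""
    return g_now + (1.0 - rho) * (v_prev - g_prev)
```

Interpolating between the two extremes with a decaying $\rho_t$ is what removes the need for the periodic large-batch refresh of Algorithm 1.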
+
+# 5. Convergence Analysis
+
+In this section, we study the convergence properties of both the Acc-SZOFW and Acc-SZOFW* algorithms. All related proofs are provided in the supplementary document. Throughout the paper, $\|\cdot\|$ denotes the $\ell_2$ norm for vectors and the spectral norm for matrices. Without loss of generality, let $\alpha_t = \frac{1}{t+1}$, $\gamma_t = (1+\theta_t)\eta_t$ with $\theta_t = \frac{1}{(t+1)(t+2)}$ in our algorithms.
+
+## 5.1. Convergence Properties of Acc-SZOFW Algorithm
+
+In this subsection, we study the convergence properties of the Acc-SZOFW algorithm based on the CooGE and UniGE zeroth-order gradient estimators, respectively. The detailed proofs are provided in Appendix A.1.
+
+We first study the convergence properties of the deterministic Acc-ZO-FW algorithm as a baseline, which is Algorithm 1 using the deterministic zeroth-order gradient $v_t = \frac{1}{n} \sum_{i=1}^{n} \nabla_{coo} f_i(z_t)$ for solving the finite-sum problem (1).
+---PAGE_BREAK---
+
+### 5.1.1. DETERMINISTIC ACC-ZO-FW ALGORITHM
+
+**Theorem 1.** Let $\{x_t, y_t, z_t\}_{t=0}^{T-1}$ be generated from Algorithm 1 by using the **deterministic** zeroth-order gradient $v_t = \frac{1}{n}\sum_{i=1}^n \nabla_{coo} f_i(z_t)$, and let $\alpha_t = \frac{1}{t+1}$, $\theta_t = \frac{1}{(t+1)(t+2)}$, $\gamma_t = (1+\theta_t)\eta_t$, $\eta = \eta_t = T^{-\frac{1}{2}}$, $\mu = d^{-\frac{3}{2}}T^{-\frac{1}{2}}$. Then we have
+
+$$ \mathbb{E}[\mathcal{G}(z_\zeta)] = \frac{1}{T} \sum_{t=1}^{T-1} \mathbb{E}[\mathcal{G}(z_t)] \le O(\frac{1}{T^{\frac{1}{2}}}) + O(\frac{\ln(T)}{T^{\frac{3}{2}}}), $$
+
+where $z_\zeta$ is chosen uniformly randomly from $\{z_t\}_{t=0}^{T-1}$.
+
+**Remark 1.** Theorem 1 shows that the deterministic Acc-ZO-FW algorithm under the CooGE has an $O(T^{-\frac{1}{2}})$ convergence rate. The Acc-ZO-FW algorithm needs $nd$ samples to estimate the zeroth-order gradient $v_t$ at each iteration. For finding an $\epsilon$-stationary point, i.e., $\mathbb{E}[\mathcal{G}(z_\zeta)] \le \epsilon$, by $T^{-\frac{1}{2}} \le \epsilon$, we choose $T = \epsilon^{-2}$. Thus the **deterministic** Acc-ZO-FW has a function query complexity of $ndT = O(dn\epsilon^{-2})$. Compared with the existing deterministic zeroth-order Frank-Wolfe algorithm, i.e., FW-Black (Chen et al., 2018), our Acc-ZO-FW algorithm has a lower query complexity of $O(dn\epsilon^{-2})$, which improves the existing result by a factor of $O(\epsilon^{-2})$ (please see Table 1).
+
+### 5.1.2. ACC-SZOFW (COOGE) ALGORITHM
+
+**Lemma 1.** Suppose the zeroth-order stochastic gradient $v_t$ is generated from Algorithm 1 by using the CooGE zeroth-order gradient estimator. Let $\alpha_t = \frac{1}{t+1}$, $\theta_t = \frac{1}{(t+1)(t+2)}$ and $\gamma_t = (1+\theta_t)\eta_t$ in Algorithm 1. For the finite-sum setting, we have
+
+$$ \mathbb{E}\|\nabla f(z_t) - v_t\| \le L\sqrt{d}\mu + \frac{L(\sqrt{6d}\mu + 2\sqrt{3D}\eta)}{\sqrt{b/q}}. $$
+
+For the stochastic setting, we have
+
+$$ \begin{aligned} \mathbb{E}\|\nabla f(z_t) - v_t\| &\le L\sqrt{d}\mu + \frac{L(\sqrt{6d}\mu + 2\sqrt{3D}\eta)}{\sqrt{b_2/q}} \\ &\quad + \frac{\sqrt{3}\sigma_1}{\sqrt{b_1}} + \sqrt{6d}L\mu. \end{aligned} $$
+
+**Theorem 2.** Let $\{x_t, y_t, z_t\}_{t=0}^{T-1}$ be generated from Algorithm 1 by using the CooGE zeroth-order gradient estimator, and let $\alpha_t = \frac{1}{t+1}$, $\theta_t = \frac{1}{(t+1)(t+2)}$, $\gamma_t = (1+\theta_t)\eta_t$, $\eta = \eta_t = T^{-\frac{1}{2}}$, $\mu = d^{-\frac{3}{2}}T^{-\frac{1}{2}}$, $b = q$, or $b_2 = q$ and $b_1 = T$. Then we have
+
+$$ \mathbb{E}[\mathcal{G}(z_\zeta)] = \frac{1}{T} \sum_{t=1}^{T-1} \mathbb{E}[\mathcal{G}(z_t)] \le O(\frac{1}{T^{\frac{1}{2}}}) + O(\frac{\ln(T)}{T^{\frac{3}{2}}}), $$
+
+where $z_\zeta$ is chosen uniformly randomly from $\{z_t\}_{t=0}^{T-1}$.
+
+**Remark 2.** Theorem 2 shows that the Acc-SZOFW (CooGE) algorithm has convergence rate of $O(T^{-\frac{1}{2}})$. When mod $(t,q)=0$, the Acc-SZOFW algorithm needs nd or $b_1d$ samples to estimate the zeroth-order gradient $v_t$ at each iteration and needs $T/q$ iterations, otherwise it needs $2bd$ or $2b_2d$ samples to estimate $v_t$ at each iteration and needs $T$ iterations. In the finite-sum setting, by $T^{-\frac{1}{2}} \le \epsilon$, we choose $T = \epsilon^{-2}$, and let $b=q=\sqrt{n}$, the Acc-SZOFW has the function query complexity of $dnT/q + 2dbT = O(d\sqrt{n}\epsilon^{-2})$ for finding an $\epsilon$-stationary point. In the stochastic setting, let $b_2=q=\epsilon^{-1}$ and $b_1=T=\epsilon^{-2}$, the Acc-SZOFW has the function query complexity of $db_1T/q + 2db_2T = O(d\epsilon^{-3})$ for finding an $\epsilon$-stationary point.
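The finite-sum query count in Remark 2 can be checked directly:

$$ \frac{dnT}{q} + 2dbT \,\Big|_{\,b=q=\sqrt{n},\; T=\epsilon^{-2}} = \frac{dn\,\epsilon^{-2}}{\sqrt{n}} + 2d\sqrt{n}\,\epsilon^{-2} = 3d\sqrt{n}\,\epsilon^{-2} = O(d\sqrt{n}\,\epsilon^{-2}). $$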
+
+### 5.1.3. ACC-SZOFW (UNIGE) ALGORITHM
+
+**Lemma 2.** Suppose the zeroth-order stochastic gradient $v_t$ is generated from Algorithm 1 by using the UniGE zeroth-order gradient estimator. Let $\alpha_t = \frac{1}{t+1}$, $\theta_t = \frac{1}{(t+1)(t+2)}$ and $\gamma_t = (1+\theta_t)\eta_t$ in Algorithm 1. For the stochastic setting, we have
+
+$$ \mathbb{E}\|\nabla f(z_t) - v_t\| \le \frac{\beta L d}{2} + \frac{L(\sqrt{3d\beta} + 2\sqrt{6Dd\eta})}{\sqrt{2b_2/q}} + \frac{\sigma_2}{\sqrt{b_1}}. $$
+
+**Theorem 3.** Let $\{x_t, y_t, z_t\}_{t=0}^{T-1}$ be generated from Algorithm 1 by using the UniGE zeroth-order gradient estimator, and let $\alpha_t = \frac{1}{t+1}$, $\theta_t = \frac{1}{(t+1)(t+2)}$, $\gamma_t = (1+\theta_t)\eta_t$, $\eta = \eta_t = T^{-\frac{1}{2}}$, $\beta = d^{-1}T^{-\frac{1}{2}}$, $b_2 = q$, and $b_1 = T/d$. Then we have
+
+$$ \mathbb{E}[\mathcal{G}(z_\zeta)] = \frac{1}{T} \sum_{t=1}^{T-1} \mathbb{E}[\mathcal{G}(z_t)] \le O(\frac{\sqrt{d}}{T^{\frac{1}{2}}}) + O(\frac{\sqrt{d}\ln(T)}{T^{\frac{3}{2}}}), $$
+
+where $z_\zeta$ is chosen uniformly randomly from $\{z_t\}_{t=0}^{T-1}$.
+
+**Remark 3.** Theorem 3 shows that the Acc-SZOFW (UniGE) algorithm has an $O(\sqrt{d}T^{-\frac{1}{2}})$ convergence rate. When $\mathrm{mod}(t,q)=0$, the Acc-SZOFW (UniGE) algorithm needs $b_1$ samples to estimate the zeroth-order gradient $v_t$ at each iteration and needs $T/q$ such iterations; otherwise it needs $2b_2$ samples to estimate $v_t$ at each iteration and needs $T$ iterations. By $\sqrt{d}T^{-\frac{1}{2}} \le \epsilon$, we choose $T = d\epsilon^{-2}$, and let $b_2=q=\epsilon^{-1}$ and $b_1=\epsilon^{-2}$; then the Acc-SZOFW has a function query complexity of $b_1T/q + 2b_2T = O(d\epsilon^{-3})$ for finding an $\epsilon$-stationary point.
+
+## 5.2. Convergence Properties of Acc-SZOFW* Algorithm
+
+In this subsection, we study the convergence properties of the Acc-SZOFW* algorithm based on the CooGE and UniGE, respectively. The detailed proofs are provided in Appendix A.2.
+---PAGE_BREAK---
+
+### 5.2.1. ACC-SZOFW* (COOGE) ALGORITHM
+
+**Lemma 3.** Suppose the zeroth-order gradient $v_t = \nabla_{\text{coo}} f_{\xi_t}(z_t) + (1 - \rho_t)(v_{t-1} - \nabla_{\text{coo}} f_{\xi_t}(z_{t-1}))$ is generated from Algorithm 2. Let $\alpha_t = \frac{1}{t+1}$, $\theta_t = \frac{1}{(t+1)(t+2)}$, $\gamma_t = (1+\theta_t)\eta_t$, $\eta = \eta_t \le (t+1)^{-a}$ and $\rho_t = t^{-a}$ for some $a \in (0, 1]$, and let the smoothing parameter satisfy $\mu = \mu_t \le d^{-\frac{1}{2}}(t+1)^{-a}$. Then we have
+
+$$ \mathbb{E}\|v_t - \nabla f(z_t)\| \le L\sqrt{d}\mu + \sqrt{C}(t+1)^{-\frac{a}{2}}, \quad (7) $$
+
+where $C = \frac{2(12L^2D^2+12L^2+3\sigma_1^2)}{2-2^{-a}-a}$ for some $a \in (0, 1]$.
+
+**Theorem 4.** Let $\{x_t, y_t, z_t\}_{t=0}^{T-1}$ be generated from Algorithm 2 by using the CooGE zeroth-order gradient estimator, and let $\alpha_t = \frac{1}{t+1}$, $\theta_t = \frac{1}{(t+1)(t+2)}$, $\eta = \eta_t = T^{-\frac{2}{3}}$, $\gamma_t = (1+\theta_t)\eta_t$, $\rho_t = t^{-\frac{2}{3}}$ for $t \ge 1$ and $\mu = d^{-\frac{1}{2}}T^{-\frac{2}{3}}$. Then we have
+
+$$ \mathbb{E}[\mathcal{G}(z_\zeta)] = \frac{1}{T} \sum_{t=1}^{T-1} \mathbb{E}[\mathcal{G}(z_t)] \le O(\frac{1}{T^{\frac{1}{3}}}) + O(\frac{\ln(T)}{T^{\frac{4}{3}}}), $$
+
+where $z_\zeta$ is chosen uniformly randomly from $\{z_t\}_{t=0}^{T-1}$.
+
+**Remark 4.** Theorem 4 shows that the Acc-SZOFW* (CooGE) algorithm has an $O(T^{-\frac{1}{3}})$ convergence rate. It needs $2d$ samples to estimate the zeroth-order gradient $v_t$ at each iteration, and needs $T$ iterations. For finding an $\epsilon$-stationary point, i.e., ensuring $\mathbb{E}[\mathcal{G}(z_\zeta)] \le \epsilon$, by $T^{-\frac{1}{3}} \le \epsilon$, we choose $T = \epsilon^{-3}$. Thus the Acc-SZOFW* has a function query complexity of $2dT = O(d\epsilon^{-3})$. Note that the Acc-SZOFW* algorithm only requires a small mini-batch size such as 2, yet reaches the same function query complexity as the Acc-SZOFW algorithm, which requires large batch sizes $b_2 = \epsilon^{-1}$ and $b_1 = \epsilon^{-2}$. For clarity, we emphasize that the mini-batch size denotes the sample size required at each iteration, while the query size (in Table 1) denotes the number of function queries required to estimate one zeroth-order gradient in these algorithms. In fact, there is a positive correlation between them. For example, in the Acc-SZOFW* algorithm, the mini-batch size is 2, and the corresponding query size is $2d$.
+
+### 5.2.2. ACC-SZOFW* (UNIGE) ALGORITHM
+
+**Lemma 4.** Suppose the zeroth-order gradient $v_t = \nabla_{\text{uni}} f_{\xi_t}(z_t) + (1 - \rho_t)(v_{t-1} - \nabla_{\text{uni}} f_{\xi_t}(z_{t-1}))$ is generated from Algorithm 2. Let $\alpha_t = \frac{1}{t+1}$, $\theta_t = \frac{1}{(t+1)(t+2)}$, $\gamma_t = (1+\theta_t)\eta_t$, $\eta = \eta_t \le (t+1)^{-a}$ and $\rho_t = t^{-a}$ for some $a \in (0, 1]$, and let the smoothing parameter satisfy $\beta = \beta_t \le d^{-1}(t+1)^{-a}$. Then we have
+
+$$ \mathbb{E}\|v_t - \nabla f(z_t)\| \le \frac{\beta L d}{2} + \sqrt{C}(t+1)^{-\frac{a}{2}}, \quad (8) $$
+
+where $C = \frac{24dL^2D^2+3L^2+2\sigma_2^2}{2-2^{-a}-a}$ for some $a \in (0, 1]$.
+
+**Theorem 5.** Let $\{x_t, y_t, z_t\}_{t=0}^{T-1}$ be generated from Algorithm 2 by using the UniGE zeroth-order gradient estimator, and let $\alpha_t = \frac{1}{t+1}$, $\theta_t = \frac{1}{(t+1)(t+2)}$, $\eta = \eta_t = T^{-\frac{2}{3}}$, $\gamma_t = (1+\theta_t)\eta_t$, $\rho_t = t^{-\frac{2}{3}}$ for $t \ge 1$ and $\beta = d^{-1}T^{-\frac{2}{3}}$. Then we have
+
+$$ \mathbb{E}[\mathcal{G}(z_\zeta)] = \frac{1}{T} \sum_{t=1}^{T-1} \mathbb{E}[\mathcal{G}(z_t)] \le O(\frac{\sqrt{d}}{T^{\frac{1}{3}}}) + O(\frac{\sqrt{d} \ln(T)}{T^{\frac{4}{3}}}), $$
+
+where $z_\zeta$ is chosen uniformly randomly from $\{z_t\}_{t=0}^{T-1}$.
+
+**Remark 5.** Theorem 5 states that the Acc-SZOFW* (UniGE) algorithm has an $O(\sqrt{d}T^{-\frac{1}{3}})$ convergence rate. It needs 2 samples to estimate the zeroth-order gradient $v_t$ at each iteration, and needs $T$ iterations. By $\sqrt{d}T^{-\frac{1}{3}} \le \epsilon$, we choose $T = d^{\frac{3}{2}}\epsilon^{-3}$. Thus, the Acc-SZOFW* has a function query complexity of $2T = O(d^{\frac{3}{2}}\epsilon^{-3})$ for finding an $\epsilon$-stationary point.
+
+Figure 1. The convergence of attack loss against iterations of three algorithms on the SAP problem.
+
+## 6. Experiments
+
+In this section, we evaluate the performance of our proposed algorithms on two applications: 1) generating adversarial examples from black-box deep neural networks (DNNs) and 2) robust black-box binary classification with an $\ell_1$-norm constraint. In the first application, we focus on two types of black-box adversarial attacks: a single adversarial perturbation (SAP) against one image and a universal adversarial perturbation (UAP) against multiple images. Specifically, we apply the SAP to demonstrate the efficiency of our deterministic Acc-ZO-FW algorithm and compare it with the FW-Black (Chen et al., 2018) algorithm, while we apply the UAP and robust black-box binary classification to verify the efficiency of our stochastic algorithms (i.e., Acc-SZOFW and Acc-SZOFW*) and compare them with the ZO-SFW (Sahu et al., 2019) and ZSCG (Balasubramanian & Ghadimi, 2018) algorithms. All of our experiments are conducted on a server with an Intel Xeon 2.60GHz CPU and an NVIDIA Titan Xp GPU. Our implementation is based on PyTorch and the code to reproduce our results is publicly available at https://github.com/TLMichael/Acc-SZOFW.
+---PAGE_BREAK---
+
+Figure 2. Comparison of six algorithms for the UAP problem. Above: the convergence of attack loss against iterations. Below: the convergence of attack loss against queries.
+
+## 6.1. Black-box Adversarial Attack
+
+In this subsection, we apply the zeroth-order algorithms to generate adversarial perturbations that attack pre-trained black-box DNNs, whose parameters are hidden and whose outputs alone are accessible. Let $(a, b)$ denote an image $a$ with its true label $b \in \{1, 2, \dots, K\}$, where $K$ is the total number of image classes. For the SAP, we design a perturbation $x$ for a single image $(a, b)$; for the UAP, we design a universal perturbation $x$ for multiple images $\{(a_i, b_i)\}_{i=1}^n$. Following (Guo et al., 2019), we solve the untargeted attack problem as follows:
+
+$$ \min_{x \in \mathbb{R}^d} \frac{1}{n} \sum_{i=1}^{n} p(b_i | a_i + x), \quad \text{s.t.} \ \|x\|_{\infty} \le \varepsilon \quad (9) $$
+
+where $p(\cdot \mid a)$ represents the probability associated with each class, that is, the final output of the neural network after the softmax. In problem (9), we normalize the pixel values to $[0, 1]^d$.
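A sketch of how problem (9) is evaluated with only black-box access, together with the Frank-Wolfe linear minimization oracle for the $\ell_\infty$ constraint (the clipping to $[0,1]$ and the helper names are our assumptions, not the paper's code):

```python
import numpy as np

def attack_loss(model_probs, images, labels, x):
    """Objective of problem (9): average probability of the true class
    after adding the perturbation x. model_probs(img) -> softmax vector
    is the only access to the black-box DNN (assumed interface)."""
    return float(np.mean([model_probs(np.clip(a + x, 0.0, 1.0))[b]
                          for a, b in zip(images, labels)]))

def linf_lmo(v, eps):
    """Frank-Wolfe linear step for the constraint ||x||_inf <= eps:
    argmax_{||w||_inf <= eps} <w, -v> = -eps * sign(v)."""
    return -eps * np.sign(v)
```

Driving this loss toward zero makes the network assign a vanishing probability to the true class, i.e., a successful untargeted attack.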
+
+In the experiment, we use the pre-trained DNN models on MNIST (LeCun et al., 2010) and CIFAR10 (Krizhevsky et al., 2009) datasets as the target black-box models, which can attain 99.16% and 93.07% test accuracy, respectively. In the SAP experiment, we choose $\varepsilon = 0.3$ for MNIST and $\varepsilon = 0.1$ for CIFAR10. In the UAP experiment, we choose $\varepsilon = 0.3$ for both MNIST dataset and CIFAR10 dataset. For fair comparison, we choose the mini-batch size $b = 20$ for all stochastic zeroth-order methods. We refer readers to Appendix A.3 for more details of the experimental setups and the generated adversarial examples by our proposed algorithms.
+
+Figure 1 shows the convergence behaviors of the three algorithms on the SAP problem; for each curve, we generate 1000 adversarial perturbations on MNIST and 100 adversarial perturbations on CIFAR10, plot the mean loss, and show the range of one standard deviation as a shaded overlay. For both datasets, the results show that the attack loss values of our Acc-ZO-FW algorithm decrease faster than those of the FW-Black algorithm as the iterations increase, which demonstrates the superiority of our novel momentum technique and the CooGE used in the Acc-ZO-FW algorithm.
+
+Figure 2 shows the convergence of the six algorithms on the UAP problem. For both datasets, all of our accelerated zeroth-order algorithms have faster convergence speeds (i.e., lower iteration complexity) than the existing algorithms, while the Acc-SZOFW (UniGE) and Acc-SZOFW* (UniGE) algorithms have faster convergence speeds (i.e., lower function query complexity) than the other algorithms (especially ZSCG and ZO-SFW), which verifies the effectiveness of the variance-reduced technique and the novel momentum technique in our accelerated algorithms. We notice periodic jitter in the curve of Acc-SZOFW (UniGE), which is due to the gradient refresh period of the variance-reduced technique; the imprecise estimation of the uniform smoothing gradient estimator makes the jitter more significant. The jitter is less obvious for Acc-SZOFW (CooGE). Figures 2(c) and 2(d) show the attack loss against the number of function queries. We observe that the performance of our CooGE-based algorithms degrades because of the large number of queries needed to construct coordinate-wise gradient estimates. From these results, we also find that the CooGE-based methods are not well suited to high-dimensional datasets, since estimating each coordinate-wise gradient requires at least $d$ queries. In addition, the performance of the Acc-SZOFW algorithms is better than that of the Acc-SZOFW* algorithms in most cases, which is due to the considerable mini-batch size used in the Acc-SZOFW algorithms.
+
+## 6.2. Robust Black-box Binary Classification
+
+In this subsection, we apply the proposed algorithms to solve a robust black-box binary classification task. Given a set of training samples $\{(a_i, l_i)\}_{i=1}^n$, where $a_i \in \mathbb{R}^d$ and $l_i \in \{-1, +1\}$, we find the optimal parameter $x \in \mathbb{R}^d$ by solving the problem:
+
+$$ \min_{x \in \mathbb{R}^d} \frac{1}{n} \sum_{i=1}^{n} f_i(x), \quad \text{s.t.} \ \|x\|_1 \le \theta, \quad (10) $$
+
+where $f_i(x)$ is a black-box loss function that only returns the function value for a given input. Here, we specify the loss function $f_i(x) = \frac{\sigma^2}{2}(1 - \exp(-\frac{(l_i - a_i^T x)^2}{\sigma^2}))$, which is the nonconvex robust correntropy-induced loss. In the
+---PAGE_BREAK---
+
+Figure 3. Comparison of six algorithms for robust black-box binary classification. Above: the convergence of train loss against iterations.
+Below: the convergence of train loss against queries.
+
+experiment, we use four public real datasets¹, which are summarized in Table 2. We set $\sigma = 10$ and $\theta = 10$. For fair comparison, we choose the mini-batch size $b = 100$ for all stochastic zeroth-order methods. For each dataset, we use half of the samples as training data and the rest as testing data. We elaborate the details of the parameter settings in Appendix A.4.
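A minimal sketch of this loss (with the default $\sigma = 10$ used in the experiment):

```python
import numpy as np

def correntropy_loss(x, a, l, sigma=10.0):
    """Robust correntropy-induced loss from problem (10):
    f_i(x) = (sigma^2 / 2) * (1 - exp(-(l_i - a_i^T x)^2 / sigma^2)).
    It behaves like squared error for small residuals and saturates at
    sigma^2 / 2 for outliers, which is what makes it robust (and nonconvex)."""
    r = l - a @ x
    return 0.5 * sigma ** 2 * (1.0 - np.exp(-(r * r) / sigma ** 2))
```

The saturation at $\sigma^2/2$ caps the influence of any single mislabeled sample, at the cost of convexity, which is why the nonconvex guarantees of Section 5 are needed here.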
+
+Table 2. Real datasets for black-box binary classification.
+
+| DATA SET | #SAMPLES | #FEATURES | #CLASSES |
+| --- | --- | --- | --- |
+| phishing | 11,055 | 68 | 2 |
+| a9a | 32,561 | 123 | 2 |
+| w8a | 49,749 | 300 | 2 |
+| covtype.binary | 581,012 | 54 | 2 |
+
+Figure 3 shows the convergence of the six algorithms on the black-box binary classification problem. We see that the results are similar to those for the UAP problem. For all datasets, all of our accelerated algorithms have faster convergence speeds (i.e., lower iteration complexity) than the existing algorithms, while the Acc-SZOFW (UniGE) and Acc-SZOFW* (UniGE) algorithms have faster convergence speeds (i.e., lower function query complexity) than the other algorithms (especially ZSCG and ZO-SFW), which further demonstrates the efficiency of our accelerated algorithms. Similar to Figure 2, periodic jitter also appears in the curve of Acc-SZOFW (UniGE), and seems to be more intense on the covtype.binary dataset. We speculate that this is because the variance of the random gradient estimator is too high in this situation. We also provide the convergence of the test loss in Appendix A.4, which is analogous to that of the train loss.
+
+## 7. Conclusions
+
+In this paper, we proposed a class of accelerated stochastic gradient-free and projection-free (i.e., zeroth-order Frank-Wolfe) methods. In particular, we proposed a momentum acceleration framework for Frank-Wolfe methods. Specifically, we presented an accelerated stochastic zeroth-order Frank-Wolfe (Acc-SZOFW) method based on the variance-reduced technique of SPIDER and the proposed momentum acceleration technique. Further, we proposed a novel accelerated stochastic zeroth-order Frank-Wolfe method (Acc-SZOFW*) to relax the large mini-batch sizes required in Acc-SZOFW. Both the Acc-SZOFW and Acc-SZOFW* methods obtain a lower query complexity, improving the state-of-the-art query complexity in both the finite-sum and stochastic settings.
+
+## Acknowledgements
+
+We thank the anonymous reviewers for their valuable comments. This paper was partially supported by the Natural Science Foundation of China (NSFC) under Grant No. 61806093 and No. 61682281, and the Key Program of NSFC under Grant No. 61732006, and Jiangsu Postdoctoral Research Grant Program No. 2018K004A.
+
+¹These data are from the website https://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/
+---PAGE_BREAK---
+
+## References
+
+Allen-Zhu, Z. Katyusha: The first direct acceleration of stochastic gradient methods. *The Journal of Machine Learning Research*, 18(1):8194–8244, 2017.
+
+Allen-Zhu, Z. and Hazan, E. Variance reduction for faster non-convex optimization. In *International Conference on Machine Learning*, pp. 699–707, 2016.
+
+Balasubramanian, K. and Ghadimi, S. Zeroth-order (non)-convex stochastic optimization via conditional gradient and gradient updates. In *Advances in Neural Information Processing Systems*, pp. 3455–3464, 2018.
+
+Chen, J., Zhou, D., Yi, J., and Gu, Q. A frank-wolfe framework for efficient and effective adversarial attacks. *arXiv preprint arXiv:1811.10828*, 2018.
+
+Chen, S., Luo, L., Yang, J., Gong, C., Li, J., and Huang, H. Curvilinear distance metric learning. In *Advances in Neural Information Processing Systems*, pp. 4223–4232, 2019a.
+
+Chen, X., Liu, S., Xu, K., Li, X., Lin, X., Hong, M., and Cox, D. Zo-adamm: Zeroth-order adaptive momentum method for black-box optimization. In *Advances in Neural Information Processing Systems*, pp. 7202–7213, 2019b.
+
+Cutkosky, A. and Orabona, F. Momentum-based variance reduction in non-convex sgd. In *Advances in Neural Information Processing Systems*, pp. 15210–15219, 2019.
+
+Defazio, A., Bach, F., and Lacoste-Julien, S. Saga: A fast incremental gradient method with support for non-strongly convex composite objectives. In *Advances in neural information processing systems*, pp. 1646–1654, 2014.
+
+Duchi, J. C., Jordan, M. I., Wainwright, M. J., and Wibisono, A. Optimal rates for zero-order convex optimization: The power of two function evaluations. *IEEE Transactions on Information Theory*, 61(5):2788–2806, 2015.
+
+Fang, C., Li, C. J., Lin, Z., and Zhang, T. Spider: Near-optimal non-convex optimization via stochastic path-integrated differential estimator. In *Advances in Neural Information Processing Systems*, pp. 689–699, 2018.
+
+Frank, M. and Wolfe, P. An algorithm for quadratic programming. *Naval research logistics quarterly*, 3(1-2): 95–110, 1956.
+
+Gao, X., Jiang, B., and Zhang, S. On the information-adaptive variants of the admm: an iteration complexity perspective. *Journal of Scientific Computing*, 76(1):327–363, 2018.
+
+Ghadimi, S. and Lan, G. Stochastic first-and zeroth-order methods for nonconvex stochastic programming. *SIAM Journal on Optimization*, 23(4):2341–2368, 2013.
+
+Ghadimi, S. and Lan, G. Accelerated gradient methods for nonconvex nonlinear and stochastic programming. *Mathematical Programming*, 156(1-2):59–99, 2016.
+
+Ghadimi, S., Lan, G., and Zhang, H. Mini-batch stochastic approximation methods for nonconvex stochastic composite optimization. *Mathematical Programming*, 155 (1-2):267–305, 2016.
+
+Guo, C., Gardner, J., You, Y., Wilson, A. G., and Weinberger, K. Simple black-box adversarial attacks. In *International Conference on Machine Learning*, pp. 2484–2493, 2019.
+
+Hassani, H., Karbasi, A., Mokhtari, A., and Shen, Z. Stochastic conditional gradient++. *arXiv preprint arXiv:1902.06992*, 2019.
+
+Hazan, E. and Kale, S. Projection-free online learning. *arXiv preprint arXiv:1206.4657*, 2012.
+
+Hazan, E. and Luo, H. Variance-reduced and projection-free stochastic optimization. In *ICML*, pp. 1263–1271, 2016.
+
+Huang, F., Gao, S., Chen, S., and Huang, H. Zeroth-order stochastic alternating direction method of multipliers for nonconvex nonsmooth optimization. In *Proceedings of the 28th International Joint Conference on Artificial Intelligence*, pp. 2549–2555. AAAI Press, 2019a.
+
+Huang, F., Gao, S., Pei, J., and Huang, H. Nonconvex zeroth-order stochastic admm methods with lower function query complexity. *arXiv preprint arXiv:1907.13463*, 2019b.
+
+Huang, F., Gu, B., Huo, Z., Chen, S., and Huang, H. Faster gradient-free proximal stochastic methods for nonconvex nonsmooth optimization. In *AAAI*, pp. 1503–1510, 2019c.
+
+Huang, F., Gao, S., Pei, J., and Huang, H. Momentum-based policy gradient methods. In *Proceedings of the 37th International Conference on Machine Learning*, pp. 3996–4007, 2020.
+
+Iusem, A. On the convergence properties of the projected gradient method for convex optimization. *Computational & Applied Mathematics*, 22(1):37–52, 2003.
+
+Jaggi, M. Revisiting frank-wolfe: Projection-free sparse convex optimization. In *ICML*, pp. 427–435, 2013.
+
+Ji, K., Wang, Z., Zhou, Y., and Liang, Y. Improved zeroth-order variance reduced algorithms and analysis for non-convex optimization. In *International Conference on Machine Learning*, pp. 3100–3109, 2019.
+---PAGE_BREAK---
+
+Johnson, R. and Zhang, T. Accelerating stochastic gradient descent using predictive variance reduction. In *NIPS*, pp. 315–323, 2013.
+
+Krizhevsky, A., Hinton, G., et al. Learning multiple layers of features from tiny images. 2009.
+
+Lacoste-Julien, S. Convergence rate of frank-wolfe for non-convex objectives. *arXiv preprint arXiv:1607.00345*, 2016.
+
+Lacoste-Julien, S. and Jaggi, M. On the global linear convergence of frank-wolfe optimization variants. In *NeurIPS*, pp. 496–504, 2015.
+
+Lacoste-Julien, S., Jaggi, M., Schmidt, M., and Pletscher, P. Block-coordinate frank-wolfe optimization for structural svms. In *ICML*, pp. 53–61, 2013.
+
+Lan, G. and Zhou, Y. Conditional gradient sliding for convex optimization. *SIAM Journal on Optimization*, 26(2):1379–1409, 2016.
+
+LeCun, Y., Cortes, C., and Burges, C. Mnist handwritten digit database. *ATT Labs [Online]*. Available: http://yann.lecun.com/exdb/mnist, 2, 2010.
+
+Lei, L., Ju, C., Chen, J., and Jordan, M. I. Non-convex finite-sum optimization via scsg methods. In *Advances in Neural Information Processing Systems*, pp. 2348–2358, 2017.
+
+Lin, Q., Lu, Z., and Xiao, L. An accelerated proximal coordinate gradient method. In *Advances in Neural Information Processing Systems*, pp. 3059–3067, 2014.
+
+Liu, S., Chen, J., Chen, P.-Y., and Hero, A. Zeroth-order online alternating direction method of multipliers: Convergence analysis and applications. In *The Twenty-First International Conference on Artificial Intelligence and Statistics*, volume 84, pp. 288–297, 2018a.
+
\ No newline at end of file
diff --git a/samples/texts_merged/7336068.md b/samples/texts_merged/7336068.md
new file mode 100644
index 0000000000000000000000000000000000000000..cc5e0f81f2205db50c4774dfacfaad974e34953d
--- /dev/null
+++ b/samples/texts_merged/7336068.md
@@ -0,0 +1,501 @@
+
+---PAGE_BREAK---
+
+# Magnetoelastic properties of a spin-1/2 Ising-Heisenberg diamond chain in vicinity of a triple coexistence point
+
+N. Ferreira¹, J. Torrico², S.M. de Souza¹, O. Rojas¹, J. Strečka³
+
+¹ Departamento de Física, Universidade Federal de Lavras CP 3037, 37200-900, Lavras — MG, Brazil
+
+² Departamento de Física, Universidade Federal de Minas Gerais, C. P. 702, 30123-970, Belo Horizonte, MG, Brazil
+
+³ Department of Theoretical Physics and Astrophysics, Faculty of Science, P. J. Šafárik University, Park Angelinum 9, 040 01 Košice, Slovakia
+
+Received June 23, 2020, in final form September 9, 2020
+
+We study magnetoelastic properties of a spin-1/2 Ising-Heisenberg diamond chain, whose elementary unit cell consists of two decorating Heisenberg spins and one nodal Ising spin. It is assumed that each couple of the decorating atoms carrying the Heisenberg spins harmonically vibrates perpendicularly to the chain axis, while the nodal atoms carrying the Ising spins are placed at rigid positions, i.e., their lattice vibrations are ignored. The effect of the magnetoelastic coupling on the ground state and finite-temperature properties is investigated close to a triple coexistence point in dependence on the spring-stiffness constant ascribed to the Heisenberg interaction. The magnetoelastic nature of the Heisenberg dimers is reflected in a nonzero plateau of the entropy emerging in the low-temperature region, whereas the specific heat displays an anomalous peak slightly below the temperature region corresponding to the entropy plateau. The magnetization also exhibits a plateau in the same temperature region at an almost saturated value before it gradually tends to zero upon increasing temperature. Within the plateau region the magnetic susceptibility follows an inverse temperature dependence, drops slightly just above this plateau, and recovers the inverse temperature dependence at high enough temperatures.
+
+**Key words:** magnetoelastic chain, spin magnetization, thermodynamics
+
+## 1. Introduction
+
+Atomic vibrations of the crystalline materials may influence the magnetic ordering and vice versa. This effect usually has peculiar manifestations especially in a close vicinity of phase transitions related to a breakdown of spontaneous long-range order of two- and three-dimensional magnetic crystals [1–3]. The magnetoelastic interaction produces deformation of a lattice structure (magnetostriction) when applying the external magnetic field and, consequently, magneto-thermodynamic properties of ferromagnetic materials are also altered. A rigorous thermodynamic study of the magnetic crystals involving their magnetic, vibrational, and elastic properties still remains a challenging problem of current research interest owing to computational difficulties arising from a mutual coupling of the magnetic and lattice degrees of freedom through the magnetoelastic interaction. The magnetoelastic changes are typically quite small, the measured strain is for instance of the order of $10^{-5} – 10^{-4}$ for Fe-, Ni- and Co-based alloys although some specific materials like Tb$_{0.3}$Dy$_{0.7}$Fe$_{1.9}$ may exhibit giant magnetoelastic changes with the measured strain of the order ~ $10^{-3}$ [4]. A new type of magnetostriction was found in the materials later referred to as ferromagnetic shape-memory alloys such as Ni$_{2}$MnGa [5, 6].
+
+An effect of the magnetoelastic coupling was previously investigated in a class of the mixed spin-(1/2,S) Ising models on decorated planar lattices [7, 8]. Magnetic and lattice degrees of freedom were in this particular case decoupled from each other through the local canonical transformation [9], which
+---PAGE_BREAK---
+
+either gives rise to an effective next-nearest-neighbor interaction for the spin case $S = 1/2$ [7] or an
+effective three-site four-spin interaction and uniaxial single-ion anisotropy for the spin case $S > 1/2$ [8].
+It was evidenced that the magnetoelastic coupling enforces a remarkable spin frustration of the decorating
+atoms, which was comprehensively studied in the mixed-spin Ising model with the three-site four-spin
+interaction on decorated planar lattices [10, 11] with the help of exact mapping transformations [12–16].
+
+On the other hand, the magnetic behavior of one-dimensional Ising systems has drawn attention owing to
+the influence of lattice compressibility. In the early 1960s, Mattis and Schultz [17] reported an exact
+solution for the compressible Ising chain with free boundary condition and concluded that there is no
+effect due to the spin-lattice coupling. Later Enting [9] considered the periodic boundary condition and
+verified that the effective spin Hamiltonian is equivalent to a rigid Ising chain with first- and second-
+neighbor interactions. Salinas [18] obtained the free energy of the compressible Ising chain subjected to
+fixed forces by a standard Legendre transformation, which relates it to the free energy of the compressible
+Ising chain confined to a fixed length. In the early 1980s, Kneževič and Miloševič [19] considered the
+compressible Ising chain with higher spin values $S = 1$ and $S = 3/2$, which can be mapped to an effective
+rigid spin Hamiltonian with an additional biquadratic interaction.
+
+More recently, several aspects of compressible spin chains were investigated such as the effect of
+two independent fields on the compressible Ising chain [20], thermodynamic properties of the Ising-
+chain model accounting both for elastic and vibrational degrees of freedom [21] and the magnetocaloric
+properties of the Ising chain [22]. The seminal contribution in this field of study was by Derzhko and co-
+workers when rigorously solving a set of four deformable spin-chain models [23]. Among other matters,
+Derzhko et al. proved that the (inverse) compressibility of the Ising chain in a longitudinal field and the
+quantum XX chain in a transverse field shows a sudden jump at field-driven quantum phase transition,
+while it gradually diminishes near quantum critical points of the Ising chain in a transverse field and the
+Heisenberg-Ising chain [23].
+
+Lately, different versions of the Ising-Heisenberg diamond chains have provided a useful playground
+full of intriguing features and unexpected findings such as the existence of intermediate magnetiza-
+tion plateaus [24–26], Lyapunov exponent and superstability [27], the non-conserved magnetization
+and “fire-and-ice” ground states [28], the enhanced magnetocaloric effect [29], the pseudo-critical be-
+havior mimicking a temperature-driven phase transition [30–33] or the pseudo-universality [34]. Most
+importantly, Derzhko and co-workers [35] convincingly evidenced that the exact solution for the Ising-
+Heisenberg diamond chain may be used as a useful starting point for the perturbative treatment of the
+full Heisenberg counterpart model. It was shown that this type of many-body perturbation theory may
+even bring insight into exotic quantum states such as a quantum spin liquid not captured by the original
+Ising-Heisenberg model [35].
+
+This article is organized as follows. Section 2 is devoted to a definition and solution of the spin-1/2
+Ising-Heisenberg diamond chain with the vibrating character of the Heisenberg dimers. The ground-state
+phase diagram as a function of the spring stiffness, magnetoelastic constant and geometric structure is
+explored in section 3. In section 4 we present the thermodynamics of the model, where the magnetization,
+entropy and specific heat are analyzed in detail. Finally, our conclusions are reported in section 5.
+
+## 2. Ising-Heisenberg diamond chain with magnetoelastic coupling
+
+Let us consider the spin-1/2 Ising-Heisenberg diamond chain schematically depicted in figure 1,
+which in an elementary unit cell involves two Heisenberg spins $S_{a,j}$ and $S_{b,j}$ and one nodal Ising spin
+$\sigma_j$. It will be further assumed that the decorating atoms involving the Heisenberg spins harmonically
+vibrate perpendicularly to the chain axis, while the nodal atoms involving the Ising spins are rather rigid
+when neglecting their lattice vibrations.
+
+Under this condition, the spin-1/2 Ising-Heisenberg diamond chain can be defined through the
+Hamiltonian
+
+$$ \mathcal{H} = \sum_{j=1}^{N} \mathcal{H}_j = \sum_{j=1}^{N} \left( \mathcal{H}_{j}^{(\text{p})} + \mathcal{H}_{j}^{\text{(me)}} \right), \quad (2.1) $$
+---PAGE_BREAK---
+
+**Figure 1.** (Colour online) A schematic representation of the spin-1/2 Ising-Heisenberg diamond chain with the magnetoelastic coupling. The Ising spins $\sigma_j$ are placed at rigid lattice positions, while the Heisenberg spins $S_{a,j}$ and $S_{b,j}$ harmonically vibrate in a direction perpendicular with respect to the chain axis.
+
+where $\mathcal{H}_j^{(p)}$ corresponds to the pure phonon part and $\mathcal{H}_j^{(\text{me})}$ stands for the magnetoelastic part of the Hamiltonian $\mathcal{H}_j$ explicitly given in what follows.
+
+A specification of the displacements within the diamond unit cell is depicted in figure 2 (a), where $x_0$ is the equilibrium distance between the Ising spins, $y_0$ corresponds to the equilibrium distance between the Heisenberg spins, and $d_0$ is the equilibrium distance between the Ising and Heisenberg spins. It is supposed that the decorating atoms a and b carrying the Heisenberg spins perform harmonic oscillations around their equilibrium lattice positions, which can be characterized through the small displacements $y_a$ and $y_b$, while the nodal Ising spins are considered to occupy rigid positions (heavy atoms); this assumption is reasonable because there is no direct interaction between the Ising spins. Consequently, the instantaneous distances between the Heisenberg and Ising spins change to $d_a = d_0 + y_a \sin(\theta/2)$ and $d_b = d_0 + y_b \sin(\theta/2)$, though the distance $x_0$ between the Ising spins remains unaltered. Note that $y_a$ and $y_b$ are taken as positive when the dimer expands and as negative when it compresses.
+
+**Figure 2.** (Colour online) A specification of the diamond unit cell under the geometric deformation through the displacements (a) and the exchange interactions (b).
+
+Under the linear approximation, the magnetoelastic part of the bond Hamiltonian (2.1) can be written in this form
+
+$$
+\begin{align}
+\mathcal{H}_j^{(\text{me})} = & - \mathcal{J}_{x,j} (S_{a,j}^x S_{b,j}^x + S_{a,j}^y S_{b,j}^y) - \mathcal{J}_{z,j} S_{a,j}^z S_{b,j}^z - (\mathcal{J}_{a,j} S_{a,j}^z + \mathcal{J}_{b,j} S_{b,j}^z) (\sigma_j + \sigma_{j+1}) \nonumber \\
+& - h_H (S_{a,j}^z + S_{b,j}^z) - \frac{h_1}{2} (\sigma_j + \sigma_{j+1}), \tag{2.2}
+\end{align}
+$$
+---PAGE_BREAK---
+
+where the exchange interactions [see figure 2 (b)] are given by
+
+$$
+\begin{align}
+\mathcal{J}_{x,j} &= J_x \left[1 - \kappa(y_{a,j} + y_{b,j})\right], & \mathcal{J}_{z,j} &= J_z \left[1 - \kappa(y_{a,j} + y_{b,j})\right], \nonumber \\
+\mathcal{J}_{a,j} &= J_0 \left[1 - \eta y_{a,j} \sin\left(\frac{\theta}{2}\right)\right], & \mathcal{J}_{b,j} &= J_0 \left[1 - \eta y_{b,j} \sin\left(\frac{\theta}{2}\right)\right]. \tag{2.3}
+\end{align}
+$$
+
+Here, $J_x$ and $J_z$ correspond to the xy- and z-components of the Heisenberg exchange interaction when the decorating atoms are at their equilibrium positions. Similarly, $J_0$ corresponds to the Ising exchange interaction between the Heisenberg and Ising spins at equilibrium positions. Finally, $\kappa$ is the linear expansion coefficient of the magnetoelastic coupling within the Heisenberg dimers and $\eta$ is the linear expansion coefficient of the magnetoelastic coupling between the Ising and Heisenberg spins. Second-order contributions are not included, because we assume a simple linear dependence as considered in reference [8] and references therein.
+
+For simplicity, from now on we will use standard atomic units (au), in which the Planck constant is $\hbar = 1$, the Boltzmann constant is $k_B = 1$, the Bohr magneton is $\mu_B = 1/2$ and the gyromagnetic ratio is $\gamma \approx 2$, so that $\mu_B\gamma = 1$. The exchange interaction parameters, the external magnetic field, the displacements and the spring-stiffness constants are therefore all given in atomic units.
+
+The purely elastic part of the bond Hamiltonian (2.1) can be defined as follows:
+
+$$
+\mathcal{H}_j^{(p)} = \frac{\mathbf{p}_{a,j}^2}{2m} + \frac{\mathbf{p}_{b,j}^2}{2m} + \frac{\bar{K}}{2} (y_{a,j}^2 + y_{b,j}^2) + \frac{k_H}{2} (y_{a,j} + y_{b,j})^2, \quad (2.4)
+$$
+
+where $p_{a,j}$ and $p_{b,j}$ are momenta of the decorating atoms with the mass $m$, $y_{a,j}$ and $y_{b,j}$ denote their displacements from equilibrium positions, $k_H$ is the “spring-stiffness” constant ascribed to the Heisenberg coupling, and $\bar{K} = 2k_I \sin^2(\theta/2)$ is the effective “spring-stiffness” constant of the Ising coupling, $k_I$ being the bare “spring-stiffness” constant ascribed to the Ising coupling.
+
+**2.1. Local canonical transformation**
+
+The Hamiltonian (2.1) involves magnetoelastic and pure elastic (phonon) contributions, which are cou-
+pled together through the linear expansion coefficients $\kappa$ and $\eta$ pertinent to the magnetoelastic couplings.
+However, both contributions can be decoupled through the local canonical coordinate transformation
+
+$$
+\boldsymbol{q}_j = \frac{1}{\sqrt{2}} (\boldsymbol{y}_{a,j} + \boldsymbol{y}_{b,j}) \quad \text{and} \quad \bar{\boldsymbol{q}}_j = \frac{1}{\sqrt{2}} (\boldsymbol{y}_{a,j} - \boldsymbol{y}_{b,j}), \tag{2.5}
+$$
+
+which defines the positions of two fictitious particles. Analogously, the momenta in the canonical
+coordinates take the form
+
+$$
+\boldsymbol{p}_j = \frac{1}{\sqrt{2}} (\boldsymbol{p}_{a,j} + \boldsymbol{p}_{b,j}) \quad \text{and} \quad \bar{\boldsymbol{p}}_j = \frac{1}{\sqrt{2}} (\boldsymbol{p}_{a,j} - \boldsymbol{p}_{b,j}). \tag{2.6}
+$$
+
+Thus, the Hamiltonian (2.4) in the canonical coordinates can be rewritten as follows:
+
+$$
+\mathcal{H}_j^{(p)} = \frac{\boldsymbol{p}_j^2}{2m} + \frac{\bar{\boldsymbol{p}}_j^2}{2m} + \frac{\bar{K}}{2} (\boldsymbol{q}_j^2 + \bar{\boldsymbol{q}}_j^2) + k_H \boldsymbol{q}_j^2 . \quad (2.7)
+$$
+
+**2.2. Diagonalization of the magnetoelastic part**
+
+Since the commutation relation $[\mathcal{H}_i^{(\text{me})}, \mathcal{H}_j^{(\text{me})}] = 0$ is satisfied, the magnetoelastic part of the bond Hamiltonian (2.2) can be diagonalized separately for each unit cell and the respective eigenvalues can be expressed as a function of the canonical coordinate for a position $\mathbf{q}_j$ in the following form
+
+$$
+\epsilon_{k,j} = e_{k,j}^{(0)} + e_{k,j}^{(1)} q_j,
+\quad (2.8)
+$$
+---PAGE_BREAK---
+
+with the first index being $k = \{1, 2, 3, 4\}$ and the second index specifying the unit cell. The eigenvalues (2.8) are decomposed into two terms, where the first terms $e_{k,j}^{(0)}$ correspond to a fully rigid diamond chain
+
+$$
+\begin{aligned}
+e_{1,j}^{(0)} &= -\frac{J_z}{4} - h_H - \left(\frac{h_1}{2} + J_0\right) \mu_j, &
+e_{2,j}^{(0)} &= -\frac{J_z}{4} + h_H - \left(\frac{h_1}{2} - J_0\right) \mu_j, \\
+e_{3,j}^{(0)} &= \frac{J_z}{4} + \frac{J_x}{2} - \frac{h_1}{2} \mu_j, &
+e_{4,j}^{(0)} &= \frac{J_z}{4} - \frac{J_x}{2} - \frac{h_1}{2} \mu_j,
+\end{aligned}
+\quad (2.9) $$
+
+which can be alternatively interpreted as the energy eigenvalues when the decorating atoms are at
+equilibrium positions $y_{a,j} = y_{b,j} = 0$ and for simplicity, we denoted $\mu_j = (\sigma_j + \sigma_{j+1})$. The second terms
+$e_{k,j}^{(1)}$ determine a vibrational contribution to the overall energy, which is given by
+
+$$
+\begin{aligned}
+e_{1,j}^{(1)} &= \frac{\sqrt{2}}{4} \left[ J_z \kappa + 2 J_0 \eta \mu_j \sin\left(\frac{\theta}{2}\right) \right], & e_{2,j}^{(1)} &= \frac{\sqrt{2}}{4} \left[ J_z \kappa - 2 J_0 \eta \mu_j \sin\left(\frac{\theta}{2}\right) \right], \\
+e_{3,j}^{(1)} &= -\frac{\sqrt{2}}{4} \kappa (2 J_x + J_z), & e_{4,j}^{(1)} &= \frac{\sqrt{2}}{4} \kappa (2 J_x - J_z).
+\end{aligned}
+\quad (2.10) $$
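+
+As a cross-check, the rigid-limit eigenvalues (2.9) can be reproduced by numerically diagonalizing the $4\times4$ bond Hamiltonian (2.2) at the equilibrium positions $y_{a,j} = y_{b,j} = 0$. A minimal sketch (the parameter values are arbitrary and serve only for the check):
+
+```python
+import numpy as np
+
+# spin-1/2 operators for a single site
+sx = np.array([[0, 0.5], [0.5, 0]], dtype=complex)
+sy = np.array([[0, -0.5j], [0.5j, 0]])
+sz = np.diag([0.5, -0.5]).astype(complex)
+I2 = np.eye(2, dtype=complex)
+
+def bond_eigs(Jx, Jz, J0, hH, h1, mu):
+    """Eigenvalues of the rigid bond Hamiltonian (2.2), mu = sigma_j + sigma_{j+1}."""
+    Sz_tot = np.kron(sz, I2) + np.kron(I2, sz)
+    H = (-Jx * (np.kron(sx, sx) + np.kron(sy, sy)) - Jz * np.kron(sz, sz)
+         - J0 * mu * Sz_tot - hH * Sz_tot - 0.5 * h1 * mu * np.eye(4))
+    return np.sort(np.linalg.eigvalsh(H))
+
+def analytic_eigs(Jx, Jz, J0, hH, h1, mu):
+    """Closed-form eigenvalues e_k^(0) of equation (2.9)."""
+    return np.sort([-Jz/4 - hH - (h1/2 + J0)*mu, -Jz/4 + hH - (h1/2 - J0)*mu,
+                    Jz/4 + Jx/2 - h1/2*mu,       Jz/4 - Jx/2 - h1/2*mu])
+
+for mu in (1.0, 0.0, -1.0):   # mu = sigma_j + sigma_{j+1} with sigma = +/- 1/2
+    assert np.allclose(bond_eigs(1.0, -1.0, -1.0, 2.0, 2.0, mu),
+                       analytic_eigs(1.0, -1.0, -1.0, 2.0, 2.0, mu))
+```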
+
+The eigenvectors corresponding to the energy eigenvalues (2.8) can be expressed in the standard basis
+$\{|\uparrow\uparrow\rangle_j, |\uparrow\downarrow\rangle_j, |\downarrow\uparrow\rangle_j, |\downarrow\downarrow\rangle_j\}$ of the Heisenberg dimer as follows
+
+$$
+\begin{aligned}
+|\varphi_1\rangle_j &= |\uparrow\uparrow\rangle_j, & |\varphi_2\rangle_j &= |\downarrow\downarrow\rangle_j, & |\varphi_3\rangle_j &= \sin(\vartheta_j)|\uparrow\downarrow\rangle_j - \cos(\vartheta_j)|\downarrow\uparrow\rangle_j, \\
+|\varphi_4\rangle_j &= \cos(\vartheta_j)|\uparrow\downarrow\rangle_j + \sin(\vartheta_j)|\downarrow\uparrow\rangle_j,
+\end{aligned}
+$$
+
+whereas the relevant mixing angle $\vartheta_j$ entering the last two eigenvectors is defined as follows:
+
+$$ \tan(2\vartheta_j) = \frac{J_x(1 - \sqrt{2}\,\kappa q_j)}{\sqrt{2}J_0\eta\sin\left(\frac{\theta}{2}\right)\bar{q}_j\mu_j}. $$
+
+It is worth mentioning that the last two eigenvectors depend on the positions $q_j$ and $\bar{q}_j$ of the
+decorating atoms.
+
+**2.3. Diagonalization of the phonon part**
+
+After performing the canonical coordinate transformation and diagonalizing, the magnetoelastic part
+for each eigenvalue of bond Hamiltonian takes the form
+
+$$ H_{k,j} = e_{k,j}^{(0)} + e_{k,j}^{(1)} q_j + \frac{\mathbf{p}_j^2}{2m} + \frac{\bar{\mathbf{p}}_j^2}{2m} + \frac{\bar{K}}{2} (\mathbf{q}_j^2 + \bar{\mathbf{q}}_j^2) + k_H \mathbf{q}_j^2. \quad (2.11) $$
+
+The above result can be subsequently fully decoupled and diagonalized by completing square through an
+additional local transformation for relative position of the Heisenberg dimers
+
+$$ q_j = q'_j - \frac{e_{k,j}^{(1)}}{K}, \quad (2.12) $$
+
+which is defined through the effective spring-stiffness constants $K = \bar{K} + 2k_H$ and $\bar{K} = 2k_I \sin^2(\theta/2)$.
+Substituting a shift of the canonical coordinate for position (2.12) into equation (2.11), one actually
+achieves a decoupling of the magnetoelastic and pure phonon parts of the bond Hamiltonian, whereas
+the effective phonon part becomes
+
+$$ H_j^{(p)} = \frac{\mathbf{p}_j^2}{2m} + \frac{\bar{\mathbf{p}}_j^2}{2m} + \frac{K}{2} (\mathbf{q}'_j)^2 + \frac{\bar{K}}{2} \bar{\mathbf{q}}_j^2, \quad (2.13) $$
+---PAGE_BREAK---
+
+while the effective magnetoelastic part reads
+
+$$
+\mathcal{H}_{k,j}^{(\text{me})} = e_{k,j}^{(0)} - \frac{(e_{k,j}^{(1)})^2}{2K}. \qquad (2.14)
+$$
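+
+The shift (2.12) is nothing but completing the square: substituting $q_j = q'_j - e^{(1)}_{k,j}/K$ into $e^{(1)}_{k,j} q_j + \frac{K}{2} q_j^2$ removes the linear term at the cost of the constant $-(e^{(1)}_{k,j})^2/2K$ appearing in (2.14). A quick numerical sanity check with arbitrary values of $e^{(1)}$ and $K$:
+
+```python
+import numpy as np
+
+K, e1 = 150.0, -0.7                 # arbitrary effective stiffness and linear coefficient
+q = np.linspace(-1.0, 1.0, 11)      # sample positions
+qp = q + e1 / K                     # shifted coordinate, i.e. q = q' - e1/K
+
+lhs = e1 * q + 0.5 * K * q**2               # linear-plus-quadratic form entering (2.11)
+rhs = 0.5 * K * qp**2 - e1**2 / (2 * K)     # decoupled form split between (2.13) and (2.14)
+assert np.allclose(lhs, rhs)
+```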
+
+To proceed further, let us introduce the annihilation $b_j$ and creation $b_j^\dagger$ bosonic operators describing
+the phonon positions, which satisfy the standard bosonic commutation relations $[b_j, b_{j'}^\dagger] = \delta_{j,j'}$ and
+$[b_j, b_{j'}] = 0$. The shifted position $q'_j$ and the momentum operator $p_j$ of the Heisenberg dimer can be
+consequently rewritten in terms of the newly defined creation $b_j^\dagger$ and annihilation $b_j$ bosonic operators
+(in units of $\hbar = 1$)
+
+$$
+\boldsymbol{q}'_j = \frac{1}{\sqrt{2m\omega}} (\boldsymbol{b}_j^\dagger + \boldsymbol{b}_j) \quad \text{and} \quad \boldsymbol{p}_j = i \sqrt{\frac{m\omega}{2}} (\boldsymbol{b}_j^\dagger - \boldsymbol{b}_j), \tag{2.15}
+$$
+
+where $\omega = \sqrt{K/m}$ is the respective phonon frequency. Similarly, one may also introduce the annihilation $\bar{b}_j$ and creation $\bar{b}_j^\dagger$ bosonic operators describing the corresponding phonon term, which also satisfy the standard bosonic commutation relations $[\bar{b}_j, \bar{b}_{j'}^\dagger] = \delta_{j,j'}$ and $[\bar{b}_j, \bar{b}_{j'}] = 0$. Therefore, the position $\bar{q}_j$ and momentum $\bar{p}_j$ can be also defined as
+
+$$
+\bar{q}_j = \frac{1}{\sqrt{2m\bar{\omega}}} (\bar{b}_j^\dagger + \bar{b}_j) \quad \text{and} \quad \bar{p}_j = i \sqrt{\frac{m\bar{\omega}}{2}} (\bar{b}_j^\dagger - \bar{b}_j), \qquad (2.16)
+$$
+
+where $\bar{\omega} = \sqrt{\bar{K}/m}$ is the respective phonon frequency. The phonon part of the Hamiltonian (2.13) can be
+subsequently expressed in terms of the number operators $\hat{n}_j$ and $\hat{\bar{n}}_j$ of the two kinds of phonons described above
+
+$$
+\mathcal{H}_j^{(\mathrm{p})} = \left(\frac{1}{2} + \mathbf{b}_j^\dagger \mathbf{b}_j\right) \omega + \left(\frac{1}{2} + \bar{\mathbf{b}}_j^\dagger \bar{\mathbf{b}}_j\right) \bar{\omega} = \left(\frac{1}{2} + \hat{n}_j\right) \omega + \left(\frac{1}{2} + \bar{\hat{n}}_j\right) \bar{\omega}. \quad (2.17)
+$$
+
+In this way, we have achieved diagonalization of the phonon part of the Hamiltonian (2.17), as the number
+operators $\hat{n}_j$ and $\hat{\bar{n}}_j$ acquire the eigenvalues $n_j, \bar{n}_j \in \{0, 1, 2, \ldots\}$ owing to
+the bosonic character of the underlying operators. It is worthwhile to remark that the bond Hamiltonian
+$\mathcal{H}_{k,j} = \mathcal{H}_{k,j}^{(\text{me})} + \mathcal{H}_j^{(\text{p})}$ is now expressed in a fully diagonal form thanks to the diagonal character
+of the magnetoelastic (2.14) and phonon (2.17) parts, which in addition mutually commute; this
+will be of principal importance for the calculation of the partition function presented in
+section 4.
+
+## 3. Ground-state phase diagram
+
+Before exploring the magnetoelastic properties, let us first analyze the ground-state phase diagram of
+the spin-1/2 Ising-Heisenberg diamond chain supplemented with the magnetoelastic coupling, which
+exhibits three different ground states on the assumption that $k_H = k_I$ and $\kappa = \eta$. The first ground state
+can be classified as the saturated paramagnetic state (SA) given by the eigenvector
+
+$$
+|\text{SA}\rangle = \prod_{j=1}^{N} |\uparrow\uparrow\rangle_j \, |\uparrow\rangle_j, \qquad (3.1)
+$$
+
+where the former (latter) state vector defines the spin state of the Heisenberg dimer (the Ising spin) from
+the $j$th unit cell. The saturated paramagnetic state has the following eigenenergy
+
+$$
+E_{\text{SA}} = -\frac{J_z}{4} - h_{\text{H}} - \frac{h_1}{2} - J_0 - \frac{(J_z \kappa + 2J_0 \eta \sin \frac{\theta}{2})^2}{16K}. \quad (3.2)
+$$
+---PAGE_BREAK---
+
+The second ground state can be viewed as the classical ferrimagnetic phase (FI₁) given by the eigenvector
+
+$$
+|FI_1\rangle = \prod_{j=1}^{N} |\uparrow\uparrow\rangle_j \, |\downarrow\rangle_j, \quad (3.3)
+$$
+
+whereas the corresponding energy becomes
+
+$$
+E_{\mathrm{FI}_1} = -\frac{J_z}{4} - h_H + \frac{h_1}{2} + J_0 - \frac{(J_z \kappa - 2J_0 \eta \sin \frac{\theta}{2})^2}{16K}. \quad (3.4)
+$$
+
+The sublattice magnetization of the Ising spins is $M_I = -1/2$ per unit cell and the sublattice magnetization of the Heisenberg spins is $M_H = 1$ per unit cell, so that the total magnetization per unit cell is $M_t = 1/2$. The third ground state can be classified as the quantum ferrimagnetic phase (FI₂) given by the eigenvector
+
+$$
+|\text{FI}_2\rangle = \prod_{j=1}^{N} \left(\cos(\vartheta_j)|\uparrow\downarrow\rangle_j + \sin(\vartheta_j)|\downarrow\uparrow\rangle_j\right) |\uparrow\rangle_j, \quad (3.5)
+$$
+
+whose corresponding eigenenergy reads as follows:
+
+$$
+E_{\text{FI}_2} = \frac{J_z}{4} - \frac{J_x}{2} - \frac{h_1}{2} - \frac{\kappa^2(2J_x - J_z)^2}{16K}. \quad (3.6)
+$$
+
+The sublattice magnetization of the Heisenberg spins is $M_H = 0$ due to a singlet-like character of the Heisenberg dimers and hence, the sublattice magnetization of the Ising spins $M_I = 1/2$ provides the only nonzero contribution to the total magnetization per unit cell $M_t = 1/2$.
+
+**Figure 3.** (Colour online) Ground-state phase diagram in the $J_z-h$ plane for $J_0 = -1$, $J_x = 1$ and: (a)–(b) $\kappa = 0.5$, $\theta = \pi/2$ and $k_H = \{25, 50, 200, 1000\}$; (c) $\kappa = 0.5$, $k_H = 50$ and $\theta = \{2\pi/3, \pi/2, \pi/3, \pi/4, \pi/8\}$; (d) $\theta = \pi/3$, $k_H = 50$ and $\kappa = \{0.1, 0.2, 0.3, 0.4, 0.5\}$.
+
+Here, we consider the local magnetic fields $h = h_1 = h_H$ acting on the Ising and Heisenberg spins, which physically corresponds to the equality of their Landé g-factors. A few typical ground-state phase diagrams are plotted in figure 3 in $J_z-h$ plane for the fixed values of the coupling constants $J_0 = -1$ and $J_x = 1$. It is evident that the ground-state phase boundaries are only gradually shifted with respect to a perfectly rigid limit when assuming realistic (rather high) values of the spring-stiffness constants (see [36] for a comparison). As a matter of fact, the changes in the ground-state phase diagram shown in figure 3 (a) for the fixed values of $\kappa = 0.5$ and $\theta = \pi/2$ due to variation of the spring-stiffness constant $k_H = \{25, 50, 200, 1000\}$ are almost indistinguishable within the displayed scale, while they become evident just in a magnified scale as illustrated in figure 3 (b). Note that similar effects also result from the changes of other interaction parameters [see figure 3 (c)–(d)]. The role of lattice geometry can be traced back in figure 3 (c), where the ground-state phase diagrams are plotted for fixed values of $\kappa = 0.5$, $k_H = 50$ and several values of the angle $\theta = \{2\pi/3, \pi/2, \pi/3, \pi/4, \pi/8\}$. Finally, the effect of magnetoelastic constant on the ground-state phase diagrams is illustrated in figure 3 (d) when assuming a fixed value of $\theta = \pi/3$, $k_H = 50$ upon variation of $\kappa = \{0.1, 0.2, 0.3, 0.4, 0.5\}$.
+
+The phase boundary between two ferrimagnetic phases FI₁ and FI₂ is given by
+
+$$
+J_z = \frac{[4(2J_0 + J_x)k_H - J_0^2 \kappa^2](\sin^2 \frac{\theta}{2} + 1) + (J_0^2 + J_x^2)\kappa^2}{\kappa^2(J_x - J_0 \sin \frac{\theta}{2}) + 4k_H(\sin^2 \frac{\theta}{2} + 1)}, \quad (3.7)
+$$
+---PAGE_BREAK---
+
+which is independent of the magnetic field $h$ as evidenced by a vertical character of the relevant phase boundaries in figure 3. The phase boundary between the phases FI₁ and SA follows from the formula
+
+$$h = -2J_0 - \frac{J_0 J_z \kappa^2 \sin \frac{\theta}{2}}{4k_H \left(\sin^2 \frac{\theta}{2} + 1\right)}. \quad (3.8)$$
+
+While this phase boundary is, for a perfectly rigid model ($\kappa = 0$), completely independent of $J_z$, the model with a nonzero magnetoelastic coupling constant $\kappa \neq 0$ shows a relatively weak dependence on the coupling constant $J_z$, because $\kappa$ is in general a very small quantity ($\kappa \ll 1$) while the spring-stiffness constant is large, $k_H \gg e^{(1)}$ [i.e., $k_H \gg J_0 J_z$; see figure 3 (b)–(d) for illustration]. A similar finding concerns the phase boundary between the FI₂ and SA phases, which is given by
+
+$$h = \frac{1}{2}(J_x - 2J_0 - J_z) + \kappa^2 \frac{(J_x + J_0 \sin \frac{\theta}{2})(J_x - J_z - J_0 \sin \frac{\theta}{2})}{8k_H (\sin^2 \frac{\theta}{2} + 1)}. \quad (3.9)$$
+
+This phase boundary depends on the coupling constant $J_z$ even in the perfectly rigid limit ($\kappa = 0$), but there appears a small correction to this dependence once nonzero magnetoelastic coupling constant ($\kappa \neq 0$) is taken into consideration.
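+
+The closed-form boundaries (3.8) and (3.9) can be double-checked by evaluating the ground-state energies (3.2), (3.4) and (3.6) directly along them. A sketch (the effective stiffness $K = 2k_H(\sin^2\frac{\theta}{2} + 1)$ follows from the assumptions $k_I = k_H$ and $\eta = \kappa$; the parameter values are arbitrary):
+
+```python
+import math
+
+def energies(Jx, Jz, J0, h, kappa, theta, kH):
+    """Ground-state energies (3.2), (3.4), (3.6) for kI = kH, eta = kappa, h1 = hH = h."""
+    s = math.sin(theta / 2)
+    K = 2 * kH * (s**2 + 1)          # K = Kbar + 2 kH with Kbar = 2 kH sin^2(theta/2)
+    E_SA  = -Jz/4 - 1.5*h - J0 - (Jz*kappa + 2*J0*kappa*s)**2 / (16*K)
+    E_FI1 = -Jz/4 - 0.5*h + J0 - (Jz*kappa - 2*J0*kappa*s)**2 / (16*K)
+    E_FI2 =  Jz/4 - Jx/2 - 0.5*h - kappa**2 * (2*Jx - Jz)**2 / (16*K)
+    return E_SA, E_FI1, E_FI2
+
+Jx, Jz, J0, kappa, theta, kH = 1.0, -1.0, -1.0, 0.5, math.pi/2, 50.0
+s = math.sin(theta / 2)
+
+# FI1-SA boundary, equation (3.8): E_SA and E_FI1 must be degenerate there
+h38 = -2*J0 - J0*Jz*kappa**2*s / (4*kH*(s**2 + 1))
+E_SA, E_FI1, _ = energies(Jx, Jz, J0, h38, kappa, theta, kH)
+assert abs(E_SA - E_FI1) < 1e-10
+
+# FI2-SA boundary, equation (3.9): E_SA and E_FI2 must be degenerate there
+h39 = 0.5*(Jx - 2*J0 - Jz) + kappa**2*(Jx + J0*s)*(Jx - Jz - J0*s) / (8*kH*(s**2 + 1))
+E_SA, _, E_FI2 = energies(Jx, Jz, J0, h39, kappa, theta, kH)
+assert abs(E_SA - E_FI2) < 1e-10
+```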
+
+## 4. Thermodynamics
+
+As previously commented, the phonon $\mathcal{H}_j^{(\text{p})}$ and magnetoelastic $\mathcal{H}_{k,j}^{(\text{me})}$ parts of the bond Hamiltonian [(2.14) and (2.17)] commute with each other. For this reason, the bond Hamiltonians corresponding to two different unit cells also satisfy the commutation relation $[\mathcal{H}_j, \mathcal{H}_{j'}] = 0$. The partition function of the spin-1/2 Ising-Heisenberg diamond chain accounting for the magnetoelastic interaction can accordingly be obtained using the transfer-matrix approach. The decoupled character of the phonon and magnetoelastic parts of the Hamiltonian allows one to factorize the partition function into a product of two terms
+
+$$Z_N = Z_N^{(p)} Z_N^{(me)}, \quad (4.1)$$
+
+whereas the former one $Z_N^{(p)}$ denotes the phonon contribution and the latter one $Z_N^{(me)}$ corresponds to the magnetoelastic contribution. It is noteworthy that the phonons corresponding to $(p, q)$ and $(\bar{p}, \bar{q})$ of the Heisenberg dimers are independent of each other and hence, the phonon part of the partition function can be expressed more explicitly as follows
+
+$$Z_N^{(p)} = (u\bar{u})^N, \quad (4.2)$$
+
+where individual contributions stemming from two kinds of phonons involved in the Hamiltonian (2.14) follow from a summation over all accessible values of the quantum numbers $n_j$ and $\bar{n}_j$
+
+$$u = \sum_{n_j=0}^{\infty} e^{-\beta(\frac{1}{2}+n_j)\omega} = \frac{1}{2\sinh\left(\frac{\beta\omega}{2}\right)}, \quad \bar{u} = \sum_{\bar{n}_j=0}^{\infty} e^{-\beta(\frac{1}{2}+\bar{n}_j)\bar{\omega}} = \frac{1}{2\sinh\left(\frac{\beta\bar{\omega}}{2}\right)}. \quad (4.3)$$
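+
+Both closed forms in (4.3) are simple geometric series, $\sum_{n\geqslant0} \mathrm{e}^{-\beta(n+\frac{1}{2})\omega} = 1/[2\sinh(\beta\omega/2)]$, which is easily verified by truncating the sum at a large cutoff (arbitrary $\beta$ and $\omega$):
+
+```python
+import math
+
+beta, omega = 2.0, 1.5   # arbitrary inverse temperature and phonon frequency
+
+# partial sum over the Boltzmann weights of the oscillator levels (n + 1/2) * omega
+u_sum = sum(math.exp(-beta * (0.5 + n) * omega) for n in range(200))
+u_closed = 1.0 / (2.0 * math.sinh(beta * omega / 2.0))   # closed form of equation (4.3)
+assert abs(u_sum - u_closed) < 1e-12
+```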
+
+The magnetoelastic part of the partition function can be calculated using the transfer matrix
+
+$$W = \begin{pmatrix} w_1 & w_0 \\ w_0 & w_{-1} \end{pmatrix}, \quad (4.4)$$
+
+which involves the Boltzmann factors of the jth Heisenberg dimer defined as follows:
+
+$$w_{\mu_j} = \sum_{k=1}^{4} e^{-\beta[e_{k,j}^{(0)} - \frac{1}{2K}(e_{k,j}^{(1)})^2]} . \quad (4.5)$$
+---PAGE_BREAK---
+
+In the above, the energy contributions $e_{k,j}^{(0)}$ and $e_{k,j}^{(1)}$ are given by equations (2.9) and (2.10), respectively. After a straightforward diagonalization of the transfer matrix (4.4), one gets two eigenvalues
+
+$$ \lambda_{\pm} = \frac{1}{2} \left( w_1 + w_{-1} \pm \sqrt{(w_1 - w_{-1})^2 + 4w_0^2} \right). \quad (4.6) $$
+
+The magnetoelastic part of the partition function can be expressed via the transfer-matrix eigenvalues
+
+$$ Z_N^{(\text{me})} = \lambda_+^N + \lambda_-^N. \quad (4.7) $$
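+
+The transfer-matrix result (4.7) can be validated against a brute-force sum over all Ising configurations of a small periodic chain. A minimal sketch, taken for brevity in the rigid limit $\kappa = \eta = 0$ so that only the energies $e^{(0)}_{k}$ of (2.9) enter the Boltzmann factors (4.5); the parameter values are arbitrary:
+
+```python
+import itertools
+import math
+import numpy as np
+
+def w(mu, beta, Jx=1.0, Jz=-1.0, J0=-1.0, hH=2.0, h1=2.0):
+    """Boltzmann factor (4.5) in the rigid limit, where the e^(1) terms vanish."""
+    e = [-Jz/4 - hH - (h1/2 + J0)*mu, -Jz/4 + hH - (h1/2 - J0)*mu,
+          Jz/4 + Jx/2 - h1/2*mu,       Jz/4 - Jx/2 - h1/2*mu]
+    return sum(math.exp(-beta * ek) for ek in e)
+
+beta, N = 1.0, 6
+
+# transfer-matrix route: Z = lambda_+^N + lambda_-^N, equations (4.4)-(4.7)
+W = np.array([[w(1.0, beta), w(0.0, beta)],
+              [w(0.0, beta), w(-1.0, beta)]])
+lam = np.linalg.eigvalsh(W)
+Z_tm = lam[0]**N + lam[1]**N
+
+# brute-force route: sum the product of bond factors over all 2^N Ising configurations
+Z_bf = sum(math.prod(w(conf[j] + conf[(j + 1) % N], beta) for j in range(N))
+           for conf in itertools.product((0.5, -0.5), repeat=N))
+
+assert abs(Z_tm - Z_bf) / Z_bf < 1e-12
+```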
+
+At this stage, one may substitute the phonon and magnetoelastic parts of the partition function [(4.2) and (4.7)] into equation (4.1) in order to get the final result for the partition function and the associated free energy, which in the thermodynamic limit reduces to the form
+
+$$ f = -\frac{1}{\beta} \lim_{N \to \infty} \frac{1}{N} \ln Z_N = -\frac{1}{\beta} \ln(u\bar{u}) - \frac{1}{\beta} \ln(\lambda_+) . \quad (4.8) $$
+
+With the free energy in hand, we are able to discuss magnetoelastic properties of the spin-1/2 Ising-Heisenberg diamond chain at nonzero temperatures.
+
+In what follows, our particular attention will be devoted to the magnetoelastic behavior at and near a triple point, where all three phases SA, FI₁ and FI₂ coexist at zero temperature. In the case of a completely rigid model ($\kappa = \eta = 0$), the three phases coexist for the fixed parameters $J_0 = -1$, $J_x = 1$, $J_z = -1$ when the magnetic field $h \to 2$ [see figure 3 (a)]. After some algebraic manipulations, one obtains the following asymptotic expression for the free energy of the rigid model $f_0 = -T \ln(\lambda_+)$ in the zero-temperature limit $T \to 0$
+
+$$ f_0 = E_0 - T \ln \left( \frac{3 + \sqrt{5}}{2} \right) - \frac{\sqrt{5}}{10} (h_1 - 2) - \frac{1}{2} \left( 1 + \frac{\sqrt{5}}{5} \right) (h_H - 2), \quad (4.9) $$
+
+where $E_0 = E_{FI_2} = E_{FI_1} = E_{SA}$ defines the corresponding ground-state energy at a triple point. Now, one may differentiate the free energy (4.9) in order to calculate the entropy
+
+$$ S_0 = - \left( \frac{\partial f_0}{\partial T} \right)_h = \ln \left( \frac{3 + \sqrt{5}}{2} \right) \approx 0.962423, \quad (4.10) $$
+
+the sublattice magnetization of the Ising spins $M_{I,0} = -(\partial f_0 / \partial h_1)_T = \sqrt{5}/10$, the sublattice magnetization of the Heisenberg spins $M_{H,0} = -(\partial f_0 / \partial h_H)_T = 1/2 \cdot (1 + \sqrt{5}/5)$, while the total magnetization per unit cell equals
+
+$$ M_{t,0} = M_{I,0} + M_{H,0} = \frac{1}{2} + \frac{\sqrt{5}}{5} \approx 0.9472136. \quad (4.11) $$
+
+This exact result will be confirmed later by numerical computation at finite temperatures.
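The decimal values quoted in (4.10) and (4.11) follow directly from the closed forms; a minimal arithmetic check (ours):

```python
import math

# Residual entropy (4.10) and magnetizations at the triple point
S0 = math.log((3 + math.sqrt(5)) / 2)   # ~ 0.962423
M_I = math.sqrt(5) / 10                 # Ising sublattice magnetization
M_H = 0.5 * (1 + math.sqrt(5) / 5)      # Heisenberg sublattice magnetization
M_t = M_I + M_H                         # total magnetization (4.11), ~ 0.9472136
print(S0, M_t)
```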
+
+Now, let us compare the magnetic behavior of the model accounting for the magnetoelastic coupling in the vicinity of the triple point with that of the fully rigid model in order to find out differences arising from the magnetoelastic coupling. To this end, the entropy is plotted in figure 4 (a) as a function of temperature in semi-logarithmic scale, whereas a blue line corresponds to the fully rigid model, a green line describes the phonon contribution and a red line reports the overall entropy for the fixed values $k_H = 100$, $\kappa = 0.5$, $J_0 = -1$, $J_x = 1$, $J_z = -1$, $\theta = \pi/2$ and $h = 2$. It can be seen that the entropy closely follows the entropy of the rigid model at low enough temperatures, whereas it tends to the phonon contribution at a sufficiently high temperature. Figure 4 (c) depicts the temperature dependencies of the specific heat corresponding to figure 4 (a). Figure 4 (b) illustrates the overall entropy of the model accounting for the magnetoelastic coupling for the fixed values of $\kappa = 0.5$, $J_0 = -1$, $J_x = 1$, $J_z = -1$, $\theta = \pi/2$, $h = 2$ and four different values of the spring-stiffness constant $k_H = \{25, 50, 100, 200\}$. It is evident from this figure that the entropy displays a plateau at $S_0 = \ln[(3 + \sqrt{5})/2]$ regardless of the
+---PAGE_BREAK---
+
+**Figure 4.** (Colour online) (a) The entropy as a function of temperature in a semi-logarithmic scale for fixed values of $k_H = 100$, $\kappa = 0.5$, $J_0 = -1$, $J_x = 1$, $J_z = -1$, $\theta = \pi/2$ and $h = 2$. A blue line corresponds to the rigid model, a green line describes the phonon contribution and a red line reports the overall entropy of the model with the magnetoelastic coupling; (b) The overall entropy as a function of temperature in a semi-logarithmic scale for fixed values of $\kappa = 0.5$, $J_0 = -1$, $J_x = 1$, $J_z = -1$, $\theta = \pi/2$, $h = 2$ and four different values of $k_H = \{25, 50, 100, 200\}$; (c) Temperature variations of the specific heat corresponding to the panel (a); (d) Temperature variations of the specific heat corresponding to the panel (b).
+
+**Figure 5.** (Colour online) (a) The total magnetization per unit cell as a function of temperature in a semi-logarithmic scale for fixed values of $\kappa = 0.5$, $J_0 = -1$, $J_x = 1$, $J_z = -1$, $\theta = \pi/2$, $h = 2$ and four different values of $k_H = \{25, 50, 100, 200\}$; (b) A semi-logarithmic plot of the magnetic susceptibility times temperature ($\chi T$) product for the same parameter set as used in figure 5 (a) for the magnetization.
+
+spring-stiffness constant in the range of moderate temperatures $10^{-3} \le T \le 10^{-1}$ before it tends to zero in the asymptotic limit $T \to 0$. Note that the fully rigid model ($k_H \to \infty$) exhibits, for the considered set of parameters, the residual entropy $S_0 = \ln[(3 + \sqrt{5})/2]$, which is however lifted for finite values of the spring-stiffness constant due to the shift of the ground-state phase boundaries. Finally, figure 4 (d) depicts temperature variations of the specific heat corresponding to figure 4 (b), where the formation of the additional small peak can be observed in a low-temperature region due to the respective changes of the entropy from a nonzero plateau to null.
+
+The total magnetization is depicted in figure 5 (a) against the temperature in a semi-logarithmic scale by assuming fixed values of $\kappa = 0.5$, $J_0 = -1$, $J_x = 1$, $J_z = -1$, $\theta = \pi/2$, $h = 2$ and four different values of the spring-stiffness constant $k_H = \{25, 50, 100, 200\}$. It turns out that the total magnetization per unit cell tends in the zero-temperature limit to the initial value $M_t = 0.5$ irrespective of the spring-stiffness constant. Then it increases in agreement with the formula (4.11) to the value $M_t \sim 0.947$ in the range of moderate temperatures $10^{-3} \le T \le 10^{-1}$ before it finally tends to zero in the high-temperature region. Figure 5 (b) illustrates the respective temperature variations of the magnetic susceptibility times temperature ($\chi T$) product for the same set of parameters as used in figure 5 (a) for the magnetization. It is obvious that the product vanishes $\chi T \to 0$ as temperature tends to zero, an intermediate plateau around the value $\chi T \sim 0.18$ is found at moderate temperatures $10^{-3} \le T \le 10^{-1}$ and the product reaches the asymptotic value $\chi T \sim 0.8$ in the high-temperature limit.
+---PAGE_BREAK---
+
+## 5. Conclusions
+
+In the present paper we have examined in detail the magnetoelastic properties of the spin-1/2 Ising-Heisenberg diamond chain, which involves two Heisenberg spins and one nodal Ising spin in each unit cell. It is supposed that the decorating atoms involving the Heisenberg spins harmonically vibrate perpendicular to the chain axis, while the nodal atoms involving the Ising spins are placed at rigid lattice positions when completely neglecting their lattice vibrations. In particular, we have first investigated the ground-state phase diagram depending on the magnetoelastic constant and the spring-stiffness constant ascribed to the Heisenberg coupling with the main emphasis laid on an investigation of the parameter region close to a triple coexistence point of two ferrimagnetic phases and one saturated paramagnetic phase. Next, we have also examined in detail the thermodynamic properties at nonzero temperatures.
+
+It has been found that the magnetoelastic nature of the Heisenberg dimers is reflected through a nonzero plateau of the entropy in a low-temperature region, whereas the specific heat displays an anomalous peak slightly below the temperature region corresponding to the entropy plateau. It also turns out that the magnetization exhibits a plateau at almost saturated value in the same temperature region as the entropy before it gradually tends to zero upon further increase of temperature. The magnetic susceptibility displays within the plateau region an inverse temperature dependence, which slightly drops above this plateau, whereas an inverse temperature dependence is repeatedly recovered at high enough temperatures.
+
+## Acknowledgments
+
+N. F. thanks the Brazilian agency CAPES for full financial support. J. T. thanks the Brazilian agency CNPq (grant No. 159792/2019-3) for financial support. O. R. and S. M. de S. thank CNPq and FAPEMIG for partial financial support. J. S. thanks the Slovak Research and Development Agency for financial support provided under grant No. APVV-18-0197 and The Ministry of Education, Science, Research and Sport of the Slovak Republic for financial support provided under grant No. VEGA 1/0531/19.
+
+
+Magnetoelastic properties of a spin-1/2 Ising-Heisenberg
+diamond chain near a triple coexistence point
+
+N. Ferreira¹, J. Torrico², S. M. de Souza¹, O. Rojas¹, J. Strečka³
+
+¹ Department of Physics, Federal University of Lavras, C.P. 3037, 37200-900, Lavras, Minas Gerais,
+Brazil
+
+² Department of Physics, Federal University of Minas Gerais, C.P. 702, 30123-970, Belo Horizonte,
+Minas Gerais, Brazil
+
+³ Institute of Physics, Faculty of Science, P. J. Šafárik University, Park Angelinum 9,
+Košice 04001, Slovakia
+
+We study the magnetoelastic properties of a spin-1/2 Ising-Heisenberg diamond chain whose unit cells
+consist of two decorating Heisenberg spins and one nodal Ising spin. It is assumed that each pair of
+decorating atoms carrying the Heisenberg spins vibrates perpendicular to the chain axis, while the
+Ising spins reside at fixed positions owing to the absence of any other lattice vibrations. The
+influence of the magnetoelastic coupling on the ground-state and finite-temperature properties is
+investigated near a triple coexistence point as a function of the spring-stiffness constant ascribed
+to the Heisenberg coupling. The magnetoelastic nature of the Heisenberg dimers is reflected in a
+nonzero entropy plateau in the low-temperature region, whereas the specific heat displays an
+anomalous peak slightly below the temperature region corresponding to the entropy plateau. The
+magnetization likewise exhibits a plateau close to the saturation value in the same temperature
+region before it gradually tends to zero with increasing temperature. Within the plateau region, the
+magnetic susceptibility shows an inverse temperature dependence, which breaks down above this
+plateau, while the inverse temperature dependence is recovered again at sufficiently high
+temperatures.
+
+Keywords: magnetoelastic chain, spin magnetization, thermodynamics
\ No newline at end of file
diff --git a/samples/texts_merged/7430557.md b/samples/texts_merged/7430557.md
new file mode 100644
index 0000000000000000000000000000000000000000..dc2ea8d7bf7c88a6023c1bb74c5a6d3832f6d537
--- /dev/null
+++ b/samples/texts_merged/7430557.md
@@ -0,0 +1,520 @@
+
+---PAGE_BREAK---
+
+# GENERALIZATIONS OF THE FAMILYWISE ERROR RATE
+
+BY E. L. LEHMANN AND JOSEPH P. ROMANO
+
+*University of California, Berkeley and Stanford University*
+
+Consider the problem of simultaneously testing null hypotheses $H_1, \dots, H_s$. The usual approach to dealing with the multiplicity problem is to restrict attention to procedures that control the familywise error rate (FWER), the probability of even one false rejection. In many applications, particularly if $s$ is large, one might be willing to tolerate more than one false rejection provided the number of such cases is controlled, thereby increasing the ability of the procedure to detect false null hypotheses. This suggests replacing control of the FWER by controlling the probability of $k$ or more false rejections, which we call the $k$-FWER. We derive both single-step and stepdown procedures that control the $k$-FWER, without making any assumptions concerning the dependence structure of the $p$-values of the individual tests. In particular, we derive a stepdown procedure that is quite simple to apply, and prove that it cannot be improved without violation of control of the $k$-FWER. We also consider the false discovery proportion (FDP) defined by the number of false rejections divided by the total number of rejections (defined to be 0 if there are no rejections). The false discovery rate proposed by Benjamini and Hochberg [*J. Roy. Statist. Soc. Ser. B* **57** (1995) 289–300] controls $E(\text{FDP})$. Here, we construct methods such that, for any $\gamma$ and $\alpha$, $P\{\text{FDP} > \gamma\} \le \alpha$. Two stepdown methods are proposed. The first holds under mild conditions on the dependence structure of $p$-values, while the second is more conservative but holds without any dependence assumptions.
+
+**1. Introduction.** In this paper, we will consider the general problem of simultaneously testing a finite number of null hypotheses $H_i$, $i = 1, \dots, s$. We shall assume that tests for the individual hypotheses are available and the problem is how to combine them into a simultaneous test procedure. The easiest approach is to disregard the multiplicity and simply test each hypothesis at level $\alpha$. However, with such a procedure the probability of one or more false rejections increases with $s$. When the number of true hypotheses is large, we shall be nearly certain to reject some of them.
+
+A classical approach to dealing with this problem is to restrict attention to procedures that control the probability of one or more false rejections. This probability is called the familywise error rate (FWER). Here the term “family” refers to the collection of hypotheses $H_1, \dots, H_s$ that is being considered for joint testing. Which tests are to be treated jointly as a family depends on the situation.
+
+Received December 2003; revised June 2004.
+AMS 2000 subject classifications. Primary 62J15; secondary 62G10.
+Key words and phrases. Familywise error rate, multiple testing, p-value, stepdown procedure.
+---PAGE_BREAK---
+
+Once the family has been defined, control of the FWER (at joint level $\alpha$) requires that
+
+$$ (1) \qquad \text{FWER} \le \alpha $$
+
+for all possible constellations of true and false hypotheses. A quite broad treatment of methods that control the FWER is presented in [4].
+
+Safeguards against false rejections are of course not the only concern of multiple testing procedures. Corresponding to the power of a single test, one must also consider the ability of a procedure to detect departures from the hypothesis when they do occur. When the number of tests is in the tens or hundreds of thousands, control of the FWER at conventional levels becomes so stringent that individual departures from the hypothesis have little chance of being detected. For this reason, we shall consider an alternative to the FWER that controls false rejections less severely and consequently provides better power.
+
+Specifically, we shall consider the *k*-FWER, the probability of rejecting at least *k* true null hypotheses. Such an error rate with $k > 1$ is appropriate when one is willing to tolerate one or more false rejections, provided the number of false rejections is controlled.
+
+More formally, suppose data $X$ is available from some model $P \in \Omega$. A general hypothesis $H$ can be viewed as a subset $\omega$ of $\Omega$. For testing $H_i: P \in \omega_i$, $i = 1, \dots, s$, let $I(P)$ denote the set of true null hypotheses when $P$ is the true probability distribution; that is, $i \in I(P)$ if and only if $P \in \omega_i$. Then, the *k*-FWER, which depends on $P$, is defined to be
+
+$$ (2) \qquad \text{k-FWER} = P\{\text{reject at least } k \text{ hypotheses } H_i \text{ with } i \in I(P)\}. $$
+
+Control of the *k*-FWER requires that $\text{k-FWER} \le \alpha$ for all $P$, that is,
+
+$$ (3) \qquad P\{\text{reject at least } k \text{ hypotheses } H_i \text{ with } i \in I(P)\} \le \alpha \qquad \text{for all } P. $$
+
+Evidently, the case $k=1$ reduces to control of the usual FWER.
+
+We will also consider control of the *false discovery proportion* (FDP), defined as the number of false rejections divided by the total number of rejections (and equal to 0 if there are no rejections). Given a user-specified value $\gamma \in (0, 1)$, we wish to control $P\{\text{FDP} > \gamma\}$, and we derive methods for which this probability is bounded by $\alpha$.
+
+Recently, there has been a flurry of activity in finding methods that control error rates that are less stringent than the FWER, which is no doubt inspired by the FDR controlling method of Benjamini and Hochberg [1] and applications such as genomic studies where $s$ is so large that control of the FWER is too stringent. For example, Genovese and Wasserman [3] study asymptotic procedures that control the FDP (and the FDR) in the framework of a random effects mixture model. These ideas are extended in [9], where in the context of random fields the number of null hypotheses is uncountable. Korn, Troendle, McShane and Simon [8] provide
+---PAGE_BREAK---
+
+methods that control both the k-FWER and FDP; they provide some justification for their methods, but they are limited to a multivariate permutation model. Alternative methods of control of the k-FWER and FDP are given in van der Laan, Dudoit and Pollard [13]; they include both finite sample and asymptotic results. Surprisingly, the methods presented here are distinct from the above techniques. Our methods are not asymptotic and hold under either mild or no assumptions, as long as p-values are available for testing each individual hypothesis.
+
+Before describing methods that provide control of the k-FWER and FDP, we first recall the notion of a p-value, since multiple testing methods are often described by the p-values of the individual tests. Consider a single null hypothesis $H: P \in \omega$. Assume a family of tests of $H$, indexed by $\alpha$, with level $\alpha$ rejection regions $S_\alpha$ satisfying
+
+$$ (4) \qquad P\{X \in S_\alpha\} \le \alpha \quad \text{for all } 0 < \alpha < 1, P \in \omega, $$
+
+and
+
+$$ (5) \qquad S_{\alpha} \subset S_{\alpha'} \quad \text{whenever } \alpha < \alpha'. $$
+
+Then the p-value is defined by
+
+$$ (6) \qquad \hat{p} = \hat{p}(X) = \inf\{\alpha : X \in S_{\alpha}\}. $$
+
+The important property of a p-value that will be used later is the following.
+
+LEMMA 1.1. Assume $\hat{p}$ is defined as above.
+
+(i) If $P \in \omega$, then
+
+$$ (7) \qquad P\{\hat{p} \le u\} \le u. $$
+
+(ii) Furthermore,
+
+$$ (8) \qquad P\{\hat{p} \le u\} \ge P\{X \in S_u\}. $$
+
+Therefore, if the $S_\alpha$ are such that equality holds in (4), then $\hat{p}$ is uniformly distributed on (0, 1) when $P \in \omega$.
+
+PROOF. Assume $P \in \omega$. To prove (i), note that the event $\{\hat{p} \le u\}$ implies $\{X \in S_{u+\epsilon}\}$ for any small $\epsilon > 0$. Therefore,
+
+$$ P\{\hat{p} \le u\} \le P\{X \in S_{u+\epsilon}\} \le u + \epsilon $$
+
+by assumption (4). Now let $\epsilon \to 0$. To prove (ii), the event $\{X \in S_u\}$ implies $\{\hat{p} \le u\}$, and so (8) follows. $\square$
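Lemma 1.1(i) is easy to verify for a concrete discrete test. The binomial example below is our own illustration, not taken from the paper; because of discreteness, the inequality in (7) is strict for most $u$.

```python
from math import comb

# H: X ~ Binomial(10, 1/2); one-sided p-value p_hat(x) = P{X >= x} under H.
n = 10
pmf = [comb(n, x) * 0.5 ** n for x in range(n + 1)]

def p_hat(x):
    return sum(pmf[x:])

# Check P{p_hat <= u} <= u on a grid of u, as asserted by (7).
for u in (0.01, 0.05, 0.1, 0.25, 0.5, 0.9):
    prob = sum(pmf[x] for x in range(n + 1) if p_hat(x) <= u)
    assert prob <= u + 1e-12
```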
+
+Two classic procedures that control the FWER are the Bonferroni procedure and the Holm procedure. The Bonferroni procedure rejects $H_i$ if its corresponding p-value satisfies $\hat{p}_i \le \alpha/s$. Assuming $\hat{p}_i$ satisfies
+
+$$ (9) \qquad P\{\hat{p}_i \le u\} \le u \quad \text{for any } u \in (0, 1) \text{ and any } P \in \omega_i, $$
+---PAGE_BREAK---
+
+the Bonferroni procedure provides strong control of the FWER. Unfortunately, the ability of the Bonferroni procedure to detect cases in which $H_i$ is false will typically be very low since $H_i$ is tested at level $\alpha/s$ which—particularly if $s$ is large—is orders smaller than the conventional $\alpha$ levels.
+
+For this reason procedures are prized for which the levels of the individual tests are increased over $\alpha/s$ without an increase in the FWER. It turns out that such a procedure due to Holm [5] is available under the present minimal assumptions.
+
+The Holm procedure can conveniently be stated in terms of the *p*-values $\hat{p}_1, \dots, \hat{p}_s$ of the $s$ individual tests. Let the ordered *p*-values be denoted by $\hat{p}_{(1)} \le \dots \le \hat{p}_{(s)}$, and the associated hypotheses by $H_{(1)}, \dots, H_{(s)}$. Then the Holm procedure is defined stepwise as follows:
+
+*Step 0.* Let $k=0$.
+
+*Step 1.* If $\hat{p}_{(k+1)} > \alpha/(s-k)$, go to step 2. Otherwise set $k = k + 1$ and repeat step 1.
+
+*Step 2.* Reject $H_{(j)}$ for $j \le k$ and accept $H_{(j)}$ for $j > k$.
+
+The Bonferroni method is an example of a single-step procedure, meaning any null hypothesis is rejected if its corresponding *p*-value is less than or equal to a common cutoff value (which in the Bonferroni case is $\alpha/s$). The Holm procedure is a special case of a class of stepdown procedures, which we now briefly describe.
+Let
+
+$$ (10) \qquad \alpha_1 \le \alpha_2 \le \dots \le \alpha_s $$
+
+be constants. If $\hat{p}_{(1)} > \alpha_1$, reject no null hypotheses. Otherwise, if
+
+$$ (11) \qquad \hat{p}_{(1)} \le \alpha_1, \dots, \hat{p}_{(r)} \le \alpha_r, $$
+
+reject hypotheses $H_{(1)}, \dots, H_{(r)}$ where the largest $r$ satisfying (11) is used. That is, a stepdown procedure starts with the most significant *p*-value and continues rejecting hypotheses as long as their corresponding *p*-values are small. The Holm procedure uses $\alpha_i = \alpha/(s-i+1)$.
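The generic stepdown rule (11) translates directly into code. The sketch below is ours (function names and example $p$-values are hypothetical); the Holm constants are those stated in the text.

```python
def stepdown(pvalues, alphas):
    """Stepdown rule (11): keep rejecting while the i-th smallest
    p-value is <= alpha_i; return the indices of rejected hypotheses."""
    order = sorted(range(len(pvalues)), key=lambda i: pvalues[i])
    rejected = []
    for step, i in enumerate(order):
        if pvalues[i] > alphas[step]:
            break
        rejected.append(i)
    return rejected

def holm_alphas(s, alpha):
    # Holm constants alpha_i = alpha / (s - i + 1), i = 1, ..., s
    return [alpha / (s - i + 1) for i in range(1, s + 1)]

# Hypothetical p-values for s = 4 tests at level alpha = 0.05
pvals = [0.010, 0.040, 0.030, 0.005]
print(sorted(stepdown(pvals, holm_alphas(4, 0.05))))  # -> [0, 3]
```

The same function can be reused with any nondecreasing sequence of constants satisfying (10).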
+
+**2. Control of the k-FWER.** The usual Bonferroni procedure compares each *p*-value $\hat{p}_i$ with $\alpha/s$. Control of the k-FWER allows one to increase $\alpha/s$ to $k\alpha/s$, and thereby greatly increase the ability to detect false hypotheses. That such a simple modification results in control of the k-FWER is seen in the following result.
+
+**THEOREM 2.1.** For testing $H_i: P \in \omega_i$, $i = 1, \dots, s$, suppose $\hat{p}_i$ satisfies (9). Consider the procedure that rejects any $H_i$ for which $\hat{p}_i \le k\alpha/s$.
+---PAGE_BREAK---
+
+(i) This procedure controls the k-FWER, so that (3) holds. Equivalently, if each of the hypotheses is tested at level $k\alpha/s$, then the k-FWER is controlled.
+
+(ii) For this procedure, the inequality (3) is sharp in the sense that there exists a joint distribution for $(\hat{p}_1, \dots, \hat{p}_s)$ for which equality is attained in (3).
+
+PROOF. (i) Fix any $P$ and suppose $H_i$ with $i \in I = I(P)$ are true and the remainder false, with $|I|$ denoting the cardinality of $I$. Let $N$ be the number of false rejections. Then, by Markov's inequality,
+
+$$
+\begin{aligned}
+P\{N \geq k\} & \leq \frac{E(N)}{k} = \frac{E[\sum_{i \in I(P)} I\{\hat{p}_i \leq k\alpha/s\}]}{k} = \sum_{i \in I(P)} \frac{P\{\hat{p}_i \leq k\alpha/s\}}{k} \\
+& \leq \sum_{i \in I(P)} \frac{k\alpha/s}{k} = |I(P)| \frac{\alpha}{s} \leq \alpha.
+\end{aligned}
+$$
+
+To prove (ii), consider the following construction. Pick $k$ indices at random without replacement from $\{1, \dots, s\}$. Call them $J$. Given $i \in J$, let $\hat{p}_i = U_1$, where $U_1$ is uniform on $(0, k/s)$, that is, $U_1 \sim U(0, k/s)$. Given $i \notin J$, let $\hat{p}_i = U_2$, where $U_2$ is independent of $U_1$ and $U_2 \sim U(k/s, 1)$. Then, unconditionally,
+
+$$ \hat{p}_i \sim \frac{k}{s} U\left(0, \frac{k}{s}\right) + \left(1 - \frac{k}{s}\right) U\left(\frac{k}{s}, 1\right) \sim U(0, 1). $$
+
+Indeed, if $u \leq k/s$,
+
+$$ P\{\hat{p}_i \le u\} = P\{i \in J\} \cdot P\{U_1 \le u\} = \frac{k}{s} \cdot \frac{u}{k/s} = u $$
+
+and if $u \geq k/s$,
+
+$$ P\{\hat{p}_i \le u\} = P\{i \in J\} \cdot 1 + P\{i \notin J\} \cdot P\{U_2 \le u\} = \frac{k}{s} + \left(1 - \frac{k}{s}\right) \cdot \frac{u-k/s}{1-k/s} = u. $$
+
+Now exactly $k$ of the $\hat{p}_i$ are less than or equal to $k/s$ by construction. The probability that these are all less than or equal to the rejection cutoff $k\alpha/s$ is
+
+$$ P\left\{U_1 \le \frac{\alpha k}{s}\right\} = \frac{\alpha k/s}{k/s} = \alpha. \quad \square $$
+
+As is the case for the Bonferroni method, the above single-step procedure can be strengthened by a Holm type of improvement. Consider the stepdown procedure described in (11), where now we specifically consider
+
+$$ (12) \qquad \alpha_i = \begin{cases} \frac{k\alpha}{s}, & i \le k, \\ \frac{k\alpha}{s+k-i}, & i > k. \end{cases} $$
+
+Of course, the $\alpha_i$ depend on $s$ and $k$, but we suppress this dependence in the notation.
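The constants (12) are simple to compute, and for $k = 1$ they reduce to the Holm constants $\alpha/(s - i + 1)$, as they should. A minimal sketch (the function name is ours):

```python
def k_fwer_alphas(s, k, alpha):
    """Critical constants (12): k*alpha/s for i <= k,
    k*alpha/(s + k - i) for i > k (i is 1-indexed)."""
    return [k * alpha / s if i <= k else k * alpha / (s + k - i)
            for i in range(1, s + 1)]

# k = 1 recovers the Holm constants alpha/(s - i + 1)
s, alpha = 5, 0.05
assert k_fwer_alphas(s, 1, alpha) == [alpha / (s - i + 1) for i in range(1, s + 1)]

# For k = 2 the early cutoffs are larger, which is the source of the power gain
print(k_fwer_alphas(5, 2, 0.05))
```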
+---PAGE_BREAK---
+
+**THEOREM 2.2.** For testing $H_i: P \in \omega_i$, $i = 1, \dots, s$, suppose $\hat{p}_i$ satisfies (9). The stepdown procedure described in (11) with $\alpha_i$ given by (12) controls the $k$-FWER, that is, (3) holds.
+
+**PROOF.** Fix any $P$ and let $I(P)$ be the indices of the true null hypotheses. Assume $|I(P)| \ge k$ or there is nothing to prove. Order the $p$-values corresponding to the $|I(P)|$ true null hypotheses; call them
+
+$$ \hat{q}_{(1)} \le \cdots \le \hat{q}_{(|I(P)|)}. $$
+
+Let $j$ be the smallest (random) index satisfying $\hat{p}_{(j)} = \hat{q}_{(k)}$, so
+
+$$ (13) \qquad k \le j \le s - |I(P)| + k $$
+
+because the largest possible index $j$ occurs when the smallest $p$-values all correspond to the $s - |I(P)|$ false null hypotheses and the next $|I(P)|$ $p$-values correspond to the true null hypotheses. Then our generalized Holm procedure commits at least $k$ false rejections if and only if
+
+$$ \hat{p}_{(1)} \le \alpha_1, \quad \hat{p}_{(2)} \le \alpha_2, \quad \dots, \quad \hat{p}_{(j)} \le \alpha_j, $$
+
+which certainly implies that
+
+$$ \hat{q}_{(k)} = \hat{p}_{(j)} \le \alpha_j = \frac{k\alpha}{s+k-j}. $$
+
+But by (13),
+
+$$ \frac{k\alpha}{s+k-j} \le \frac{k\alpha}{|I(P)|}. $$
+
+So the probability of at least $k$ false rejections is bounded above by
+
+$$ P\left\{\hat{q}_{(k)} \le \frac{k\alpha}{|I(P)|}\right\}. $$
+
+By Theorem 2.1(i), applied with $s$ replaced by $|I(P)|$, the chance that the $k$th smallest of the $|I(P)|$ true-null $p$-values is less than or equal to $k\alpha/|I(P)|$ is at most $\alpha$. $\square$
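Theorem 2.2 can also be checked empirically. The Monte Carlo experiment below is our own sanity check, not part of the paper: with every null hypothesis true and independent uniform $p$-values, the empirical $k$-FWER of the stepdown procedure with constants (12) must not exceed $\alpha$ (under independence it is typically far below the bound, which is attained only under worst-case dependence).

```python
import random

def alphas12(s, k, alpha):
    # Critical constants (12), 1-indexed i
    return [k * alpha / s if i <= k else k * alpha / (s + k - i)
            for i in range(1, s + 1)]

def num_rejections(pvals, alphas):
    # Stepdown rule (11): count how many ordered p-values pass their cutoff
    r = 0
    for p, a in zip(sorted(pvals), alphas):
        if p > a:
            break
        r += 1
    return r

random.seed(0)
s, k, alpha, trials = 10, 2, 0.1, 20000
cutoffs = alphas12(s, k, alpha)
hits = sum(num_rejections([random.random() for _ in range(s)], cutoffs) >= k
           for _ in range(trials))
print(hits / trials)  # empirical k-FWER; should be <= alpha up to noise
```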
+
+**REMARK 2.1.** Evidently, one can always reject the hypotheses corresponding to the smallest $k-1$ $p$-values without violating control of the $k$-FWER. However, it seems counterintuitive to consider a stepdown procedure whose corresponding $\alpha_i$ are not monotone nondecreasing. In addition, automatic rejection of $k-1$ hypotheses, regardless of the data, appears at the very least a little too optimistic. To ensure monotonicity, our stepdown procedure uses $\alpha_i = k\alpha/s$ for $i \le k$. Even if we were to adopt the more optimistic strategy of always rejecting the hypotheses corresponding to the smallest $k-1$ $p$-values, we could still only reject $k$ or more hypotheses if $\hat{p}_{(k)} \le k\alpha/s$, which is also true for the specific procedure of Theorem 2.2.
+---PAGE_BREAK---
+
+**REMARK 2.2.** If the *p*-values have discrete distributions, it is possible that there may be ties among them. However, the proof remains valid regardless of how tied *p*-values are ordered because monotonicity of the $\alpha_i$ ensures that all hypotheses with a common tied *p*-value will be rejected if any of them are rejected.
+
+The question naturally arises whether it is possible to improve the procedure further by increasing the critical values $\alpha_1, \alpha_2, \dots$ without violating control of the k-FWER (3). By the previous remark we can always increase $\alpha_i$ to 1 for $i < k$. A more interesting question is whether we can increase $\alpha_i$ for $i \ge k$. We will show that this is not possible by exhibiting for each $i \ge k$ a joint distribution of the *p*-values for which
+
+$$ (14) \qquad P\{\hat{p}_{(1)} \le \alpha_1, \hat{p}_{(2)} \le \alpha_2, \dots, \hat{p}_{(i-1)} \le \alpha_{i-1}, \hat{p}_{(i)} \le \alpha_i\} = \alpha. $$
+
+Moreover, changing $\alpha_i$ to $\beta_i > \alpha_i$ results in the right-hand side being greater than $\alpha$. Thus, with $i \ge k$, one cannot increase $\alpha_i$ without violating control of the k-FWER. Then, having picked $\alpha_1, \dots, \alpha_{i-1}$, the largest possible choice for $\alpha_i$ is the one stated in (12).
+
+**THEOREM 2.3.** (i) Let the $\alpha_i$ be given in (12). For any $i \ge k$ there exists a joint distribution for $\hat{p}_1, \dots, \hat{p}_s$ such that $s + k - i$ of the $\hat{p}_i$ are uniformly distributed on $(0, 1)$ and (14) holds.
+
+(ii) For testing $H_i: P \in \omega_i$, $i = 1, \dots, s$, suppose $\hat{p}_i$ satisfies (9). For the stepdown procedure (11) with $\alpha_i$ given in (12), one cannot increase even one of the constants $\alpha_i$ (for $i \ge k$) without violating the k-FWER.
+
+Before proving the theorem, we make use of the following lemma.
+
+**LEMMA 2.1.** Fix $k$, $u$ and constants $0 < \beta_1 \le \beta_2 \le \dots \le \beta_k \le u$. Assume for every $j=2, \dots, k$,
+
+$$ (15) \qquad \frac{j(\beta_j - \beta_{j-1})}{\beta_j} \le 1. $$
+
+Then there exists a joint distribution for $(\hat{q}_1, \dots, \hat{q}_k)$ satisfying the $\hat{q}_i$ are marginally uniform on $(0, u)$ such that the ordered values $\hat{q}_{(1)} \le \dots \le \hat{q}_{(k)}$ satisfy
+
+$$ (16) \qquad P\{\hat{q}_{(1)} \le \beta_1, \dots, \hat{q}_{(k)} \le \beta_k\} = \beta_k/u. $$
+
+**PROOF.** The proof is by induction on $k$. The result clearly holds for $k=1$. With probability $\beta_k/u$ we will construct $(\hat{q}_1, \dots, \hat{q}_k)$ equal to $(\tilde{q}_1, \dots, \tilde{q}_k)$, where $\tilde{q}_i \sim U(0, \beta_k)$ for $i=1, \dots, k$ and such that their ordered values $\tilde{q}_{(1)} \le \dots \le \tilde{q}_{(k)}$ satisfy
+
+$$ (17) \qquad P\{\tilde{q}_{(1)} \le \beta_1, \dots, \tilde{q}_{(k)} \le \beta_k\} = 1. $$
+---PAGE_BREAK---
+
+And, with probability $1 - \beta_k/u$, construct the $\tilde{q}_j$ to be conditionally distributed as $U(\beta_k, u)$. Then, unconditionally, the $\hat{q}_j$ satisfy (16) and are marginally distributed as $U(0, u)$. So it suffices to construct the $\tilde{q}_j$ satisfying $\tilde{q}_j \sim U(0, \beta_k)$ and (17).
+
+Let $\beta_0 = 0$ and for $i = 1, \dots, k$ let $E_i = (\beta_{i-1}, \beta_i]$ and $p_i = \beta_i - \beta_{i-1}$. First construct $Y_1, \dots, Y_{k-1}$, each taking values in $(0, \beta_{k-1}]$, such that their ordered values $Y_{(1)} \le \dots \le Y_{(k-1)}$ satisfy
+
+$$ (18) \qquad P\{Y_{(1)} \le \beta_1, \dots, Y_{(k-1)} \le \beta_{k-1}\} = 1 $$
+
+and $Y_i$ is uniform on $(0, \beta_{k-1}]$. This is possible by the inductive hypothesis, since we can assume the result holds for $k-1$ as long as $\beta_1, \dots, \beta_k$ and $u$ satisfy the stated conditions; in particular, we apply the result with $u = \beta_{k-1}$. Next, let $Y_k$ be uniform on $E_i$ with probability $\theta p_i$ for $i=1, \dots, k-1$ and let it be uniform on $E_k$ with probability $1-\theta\beta_{k-1}$, where $\theta$ satisfies
+
+$$ (19) \qquad \theta = \frac{1}{\beta_{k-1}} \left[ 1 - \frac{k(\beta_k - \beta_{k-1})}{\beta_k} \right]. $$
+
+Finally, let $\tilde{q}_1, \dots, \tilde{q}_k$ be a random permutation of $Y_1, \dots, Y_k$. Because of (18) and the fact that $Y_k \le \beta_k$, the ordered values of $Y_1, \dots, Y_k$, and hence the ordered values of $\tilde{q}_1, \dots, \tilde{q}_k$, satisfy (17). Furthermore, it is easy to check that $\tilde{q}_i$ falls in $E_j$ with probability $p_j/\beta_k$ and so $\tilde{q}_i$ is $U(0, \beta_k)$. Indeed, if $j < k$, the probability that $\tilde{q}_i$ falls in $E_j$, conditional on $\tilde{q}_i$ not being equal to $Y_k$, is $p_j/\beta_{k-1}$, and is $\theta p_j$ in the latter case, which unconditionally is
+
+$$ \frac{k-1}{k} \cdot \frac{p_j}{\beta_{k-1}} + \frac{1}{k} \theta p_j = \frac{p_j}{\beta_k}, $$
+
+and similarly for the probability that $\tilde{q}_i$ falls in $E_k$. The only detail that remains is to note that this construction with $\theta$ defined in (19) is possible only if $\theta p_i$ and $1 - \theta\beta_{k-1}$ are all values in $[0, 1]$. But
+
+$$ 1 - \theta\beta_{k-1} = \frac{k(\beta_k - \beta_{k-1})}{\beta_k}, $$
+
+which is certainly $\ge 0$ since $\beta_k \ge \beta_{k-1}$. It is also $\le 1$ by the assumption (15). Also,
+
+$$ \theta p_i = \frac{p_i}{\beta_{k-1}} \cdot \left[ 1 - \frac{k(\beta_k - \beta_{k-1})}{\beta_k} \right]. $$
+
+But the first factor $p_i/\beta_{k-1}$ is in $[0, 1]$, as is the second factor by the above, and so the product is in $[0, 1]$. $\square$
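As a sanity check (an illustration, not from the paper), the mixture construction of this proof can be simulated directly; the sketch below follows the inductive steps and estimates the left-hand side of (16) by Monte Carlo for $k = 2$, $\beta = (0.1, 0.2)$ and $u = 0.5$, for which assumption (15) holds:

```python
import random

def sample_tilde(betas):
    # Inductive construction from the proof: k values, each marginally
    # U(0, beta_k), whose order statistics satisfy (17) with probability 1.
    k = len(betas)
    if k == 1:
        return [random.uniform(0, betas[0])]
    ys = sample_tilde(betas[:-1])            # Y_1..Y_{k-1}: the case u = beta_{k-1}
    bk, bkm1 = betas[-1], betas[-2]
    theta = (1.0 / bkm1) * (1.0 - k * (bk - bkm1) / bk)  # equation (19); >= 0 under (15)
    p = [betas[0]] + [betas[i] - betas[i - 1] for i in range(1, k)]
    weights = [theta * p[i] for i in range(k - 1)] + [1.0 - theta * bkm1]
    i = random.choices(range(k), weights=weights)[0]
    lo = betas[i - 1] if i > 0 else 0.0
    ys.append(random.uniform(lo, betas[i]))  # Y_k uniform on the chosen interval E_{i+1}
    random.shuffle(ys)                       # random permutation of Y_1..Y_k
    return ys

def sample_q(betas, u):
    # Mixture of Lemma 2.1: tilde-block with probability beta_k/u,
    # otherwise all values i.i.d. U(beta_k, u).
    if random.random() < betas[-1] / u:
        return sample_tilde(betas)
    return [random.uniform(betas[-1], u) for _ in betas]

random.seed(0)
betas, u, n = [0.1, 0.2], 0.5, 40000  # (15) holds: 2*(0.2-0.1)/0.2 = 1
hit = sum(
    all(q <= b for q, b in zip(sorted(sample_q(betas, u)), betas))
    for _ in range(n)
)
print(hit / n)  # Monte Carlo estimate of (16); should be close to beta_k/u = 0.4
```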
+
+**PROOF OF THEOREM 2.3.** The case $i=k$ follows from the construction in the proof of Theorem 2.1. Let the first $i-k$ of the $\hat{p}_j$ be identically equal to 0. (Actually, rather than point mass at 0, any distribution supported on $[0, \alpha_1]$ will do.) For the remaining $s' = s + k - i$ p-values $\hat{p}_j$, $j = i - k + 1, \dots, s$, randomly choose $k$ indices from $i - k + 1, \dots, s$. The $k$ that are chosen will be
+---PAGE_BREAK---
+
+marginally $U(0, k/s')$ and have a joint distribution which will be specified below;
+the remaining $s-i$ can be taken to be distributed as $U(k/s', 1)$.
+
+Let $\hat{q}_1, \dots, \hat{q}_k$ denote the $k$ observations that are marginally $U(0, k/s')$. We need to specify the joint distribution of $\hat{q}_1, \dots, \hat{q}_k$ so that their ordered values $\hat{q}_{(1)} \le \dots \le \hat{q}_{(k)}$ satisfy
+
+$$ (20) \quad P\{\hat{q}_{(1)} \le \alpha_{i-k+1}, \hat{q}_{(2)} \le \alpha_{i-k+2}, \dots, \hat{q}_{(k)} \le \alpha_i\} = \alpha $$
+
+(because $\hat{q}_{(j)} = \hat{p}_{(j+i-k)}$ for $j=1, \dots, k$). So the problem reduces to constructing a joint distribution for $(\hat{q}_1, \dots, \hat{q}_k)$ satisfying (20) subject to the constraint that $\hat{q}_j$ is marginally distributed as $U(0, k/s')$. To do this, apply Lemma 2.1 with $u=k/s'$ and $\beta_j = \alpha_{i-k+j}$. We need to verify the conditions of the lemma, which reduces to showing
+
+$$ (21) \quad \frac{j(\alpha_{i-k+j} - \alpha_{i-k+j-1})}{\alpha_{i-k+j}} \le 1 $$
+
+for $i \ge k$ (and $s$ and $k$ fixed). But, if $i-k+j \le k$, then the left-hand side of (21) is 0; otherwise it is easily seen to simplify to
+
+$$ (22) \quad \frac{j}{s+2k-i-j+1} \le \frac{j}{2k-j+1} \le \frac{k}{k+1}, $$
+
+where the first inequality holds because $i \le s$ and the second because $j \le k$. But $k/(k+1) \le 1$ and so the conditions of the lemma are satisfied. Therefore, we can conclude that the left-hand side of (20) is given by
+
+$$ \frac{\beta_k}{u} = \frac{\alpha_i}{k/s'} = \alpha, $$
+
+and (i) is proved.
+
+To prove (ii), note that the construction used in (i) applies even if $\alpha_i$ is replaced by $\bar{\alpha}_i > \alpha_i$, as long as such a change still allows one to appeal to the lemma. Indeed, the same argument works as long as $\bar{\alpha}_i$ does not exceed $(s/k)\alpha_i$, so that the argument leading to (22) being less than or equal to 1 still applies. For such an $\bar{\alpha}_i$, the argument for (i) then shows that, if $\alpha_i$ in the left-hand side of (14) is replaced by $c\alpha_i$ for some $1 < c < s/k$, then the right-hand side of (14) becomes $c\alpha > \alpha$, which would violate control of the $k$-FWER. $\square$
+
+**3. Control of the false discovery proportion.** The number $k$ of false rejections that one is willing to tolerate will often increase with the number of hypotheses rejected. So it might be of interest to control not the number of false rejections (sometimes called false discoveries) but the proportion of false discoveries. Specifically, let the *false discovery proportion* (FDP) be defined by
+
+$$ (23) \quad \text{FDP} = \begin{cases} \frac{\text{Number of false rejections}}{\text{Total number of rejections}}, & \text{if the denominator is greater than 0,} \\ 0, & \text{if there are no rejections.} \end{cases} $$
+---PAGE_BREAK---
+
+Thus FDP is the proportion of rejected hypotheses that are rejected erroneously. When none of the hypotheses is rejected, both numerator and denominator of that proportion are 0; since in particular there are no false rejections, the FDP is then defined to be 0.
+
+Benjamini and Hochberg [1] proposed to replace control of the FWER by control of the *false discovery rate* (FDR), defined as
+
+$$ (24) \qquad \text{FDR} = E(\text{FDP}). $$
+
+The FDR has gained wide acceptance in both theory and practice, largely because Benjamini and Hochberg proposed a simple stepup procedure to control the FDR. Unlike control of the k-FWER, however, their procedure is not valid without assumptions on the dependence structure of the p-values. Their original paper made the strong assumption of independence of the p-values, but this has since been weakened to include certain types of dependence; see [2]. In any case, control of the FDR does not prohibit the FDP from varying, even if its average value is bounded. Instead, we consider an alternative measure of control that guarantees the FDP is bounded, at least with prescribed probability. That is, for a given $\gamma$ and $\alpha$ in (0, 1), we require
+
+$$ (25) \qquad P\{\text{FDP} > \gamma\} \le \alpha. $$
+
+To develop a stepdown procedure satisfying (25), let $F$ denote the number of false rejections. At step $i$, having rejected $i-1$ hypotheses, we want to guarantee $F/i \le \gamma$, that is, $F \le \lfloor \gamma i \rfloor$, where $\lfloor x \rfloor$ is the greatest integer less than or equal to $x$. So, if $k = \lfloor \gamma i \rfloor + 1$, then $F \ge k$ should have probability no greater than $\alpha$; that is, we must control the number of false rejections to be less than or equal to $k$. Therefore, we use the stepdown constant $\alpha_i$ with this choice of $k$ (which now depends on $i$); that is,
+
+$$ (26) \qquad \alpha_i = \frac{(\lfloor \gamma i \rfloor + 1)\alpha}{s + \lfloor \gamma i \rfloor + 1 - i}. $$
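For concreteness (an illustration, not from the paper), the constants (26) are straightforward to compute; note how the implicit value $k = \lfloor \gamma i \rfloor + 1$ jumps as $i$ grows:

```python
import math

def fdp_alphas(s, gamma, alpha):
    # Critical values (26): (floor(gamma*i)+1) * alpha / (s + floor(gamma*i) + 1 - i).
    out = []
    for i in range(1, s + 1):
        k = math.floor(gamma * i) + 1  # number of tolerated false rejections at step i
        out.append(k * alpha / (s + k - i))
    return out

a = fdp_alphas(100, 0.1, 0.05)
print(round(a[0], 6))   # i = 1:   1 * 0.05 / 100 = 0.0005
print(round(a[9], 6))   # i = 10:  k jumps to 2, giving 2 * 0.05 / 92
print(round(a[99], 6))  # i = 100: 11 * 0.05 / 11 = 0.05
```

These constants are monotone nondecreasing in $i$, so the resulting stepdown procedure is well defined.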
+
+We give two results that show the stepdown procedure with this choice of $\alpha_i$ satisfies (25). Unfortunately, like FDR control, some assumptions on the dependence of p-values are required, at least by our method of proof. Later, we will modify the method so we can dispense with the dependence assumptions. As before, $\hat{p}_1, \dots, \hat{p}_s$ denotes the p-values of the individual tests. Also, let $\hat{q}_1, \dots, \hat{q}_{|I|}$ denote the p-values corresponding to the $|I| = |I(P)|$ true null hypotheses. So $q_i = p_{j_i}$, where $j_1, \dots, j_{|I|}$ correspond to the indices of the true null hypotheses. Also, let $\hat{r}_1, \dots, \hat{r}_{s-|I|}$ denote the p-values of the false null hypotheses. Consider the following condition: for any $i=1, \dots, |I|$,
+
+$$ (27) \qquad P\{\hat{q}_i \le u|\hat{r}_1, \dots, \hat{r}_{s-|I|}\} \le u; $$
+
+that is, conditional on the observed p-values of the false null hypotheses, a p-value corresponding to a true null hypothesis is (conditionally) dominated by the
+---PAGE_BREAK---
+
+uniform distribution, as it is unconditionally in the sense of (7). No assumption
+is made regarding the unconditional (or conditional) dependence structure of
+the true p-values, nor is there made any explicit assumption regarding the joint
+structure of the p-values corresponding to false hypotheses, other than the basic
+assumption (27). So, for example, if the p-values corresponding to true null
+hypotheses are independent of the false ones, but have arbitrary joint dependence
+within the group of true null hypotheses, the above assumption holds.
+
+**THEOREM 3.1.** *Assume condition (27). Then the stepdown procedure with $\alpha_i$ given by (26) controls the FDP in the sense of (25).*
+
+PROOF. Assume the number of true null hypotheses is $|I(P)| > 0$ (or there is nothing to prove) and the number of false null hypotheses is $f = s - |I(P)|$. The argument is conditional on the $\{\hat{r}_i\}$. Let
+
+$$
+\hat{r}_{(1)} \le \hat{r}_{(2)} \le \dots \le \hat{r}_{(f)}
+$$
+
+denote the ordered values of the $\hat{r}_i$ and similarly for the $\hat{q}_i$. Let $\alpha_0 = 0$ and define $R_i$ to be the number of $\hat{r}_i$ in the interval $(\alpha_{i-1}, \alpha_i]$. (Actually, assume $R_1$ includes the value 0 as well.) Given the values of $\hat{r}_1, \dots, \hat{r}_f$, it may be impossible to have $FDP > \gamma$, that is,
+
+$$
+P\{FDP > \gamma | \hat{r}_1, \dots, \hat{r}_f\} = 0.
+$$
+
+Otherwise, let $j = j(\hat{r}_1, \dots, \hat{r}_f)$ be defined as
+
+$$
+(28) \qquad j = \min \left\{ m : m - \sum_{i=1}^{m} R_i > m\gamma \right\}.
+$$
+
+To interpret this, given the *p*-values of the false hypotheses, $j$ is the smallest critical index (depending only on the $\hat{r}_i$) at which it is possible to have $\text{FDP} > \gamma$.
+
+For example, if $s = 100$, $f = 5$ and $\gamma = 0.1$, then if all five of the $\hat{r}_i$ are less than $\alpha_1$, then we define $j = 6$ even though the smallest true $p$-value could be the smallest among the 100. So the FDP could be greater than 0.1 after the first step of the algorithm if $\hat{q}_{(1)} < \hat{r}_{(1)}$, but even if this is the case, we then know we will reject at least six total hypotheses. So the important point here is that, given such a configuration of $\{\hat{r}_i\}$, in order for FDP to be greater than 0.1, it must be the case that we reject a true null hypothesis at step 6.
+---PAGE_BREAK---
+
+Note that, with $j$ so defined, $R_j = 0$. For if $\sum_{i=1}^j R_i = j-k$ with $k/j > \gamma$ and $R_j > 0$, then
+
+$$ \sum_{i=1}^{j-1} R_i = j - k - R_j \le j - 1 - k $$
+
+and $k/(j-1) > \gamma$, so that $m = j-1$ satisfies the criterion, contradicting the minimality of $j$. Furthermore, we also have $\sum_{i=1}^j R_i = j-k$ (so not $< j-k$), where $k/j > \gamma$, because if $\sum_{i=1}^j R_i < j-k$, that is, $\sum_{i=1}^j R_i \le (j-1)-k$, then $k/(j-1) > \gamma$ whenever $k/j > \gamma$, and so $j$ can again be reduced to $j-1$.
+
+In addition, at the index $j$ it must be the case that
+
+$$ k = k(j) = j - \sum_{i=1}^{j} R_i = 1 + \lfloor \gamma j \rfloor. $$
+
+But $k > \gamma j$ implies $k \ge \lfloor \gamma j \rfloor + 1$. But if $k > \lfloor \gamma j \rfloor + 1$, then $k-1 \ge \lfloor \gamma j \rfloor + 1$ and so
+
+$$ \frac{k-1}{j-1} \ge \frac{\lfloor \gamma j \rfloor + 1}{j-1} > \gamma, $$
+
+the last inequality following trivially from $1 + \lfloor \gamma j \rfloor > \gamma j \ge \gamma(j-1)$.
+
+We can now complete the argument. At the index $j$ we must have $k = j - \sum_{i=1}^j R_i = 1 + \lfloor \gamma j \rfloor$ of the $\hat{q}_i$ being $\le \alpha_j$. But from Theorem 2.1 (applied conditional on the $\hat{r}_i$),
+
+$$ P\{\text{at least } k(j)\text{ of the } \hat{q}_i \le \alpha_j | \hat{r}_1, \dots, \hat{r}_f\} \\
+\le \frac{|I|\alpha_j}{k(j)} \\
+= \frac{|I|(\lfloor \gamma j \rfloor + 1)\alpha}{k(j)(s + \lfloor \gamma j \rfloor + 1 - j)} = \frac{|I|\alpha}{s + \lfloor \gamma j \rfloor + 1 - j}. $$
+
+But $|I| \le s - \sum_{i=1}^j R_i = s-j+k$, so the above probability is less than or equal to
+
+$$ \frac{s-j+k}{s+\lfloor\gamma j\rfloor+1-j} \cdot \alpha = \alpha. $$
+
+Therefore,
+
+$$ P\{\text{FDP} > \gamma | \hat{r}_1, \dots, \hat{r}_f\} \le \alpha, $$
+
+which of course implies $P\{\text{FDP} > \gamma\} \le \alpha$. $\square$
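As a quick sanity check (an illustration under the assumption of independent uniform p-values for the true nulls, which satisfies condition (27)), a small simulation of the stepdown procedure with the constants (26) gives an estimate of $P\{\text{FDP} > \gamma\}$ comfortably below $\alpha$:

```python
import math, random

def fdp_alphas(s, gamma, alpha):
    # Critical values (26).
    return [(math.floor(gamma * i) + 1) * alpha / (s + math.floor(gamma * i) + 1 - i)
            for i in range(1, s + 1)]

random.seed(1)
s, n_true, gamma, alpha, trials = 20, 10, 0.1, 0.05, 4000
a = fdp_alphas(s, gamma, alpha)
exceed = 0
for _ in range(trials):
    # True nulls: U(0,1) p-values; false nulls: near-zero p-values (rejected first).
    pv = [(random.random(), 1) for _ in range(n_true)]
    pv += [(1e-12 * random.random(), 0) for _ in range(s - n_true)]
    pv.sort()
    n_rej = n_false = 0
    for i, (p, is_true) in enumerate(pv):
        if p > a[i]:
            break
        n_rej += 1
        n_false += is_true
    fdp = n_false / n_rej if n_rej else 0.0
    exceed += fdp > gamma
print(exceed / trials)  # Monte Carlo estimate of P{FDP > gamma}; should be at most alpha
```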
+
+Next, we prove the same stepdown procedure controls the FDP in the sense of (25) under an alternative assumption. Here, the assumption only involves the dependence of the p-values corresponding to true null hypotheses.
+---PAGE_BREAK---
+
+THEOREM 3.2. Consider testing $s$ null hypotheses, with $|I|$ of them true. Let $\hat{q}_{(1)} \le \dots \le \hat{q}_{(|I|)}$ denote their corresponding ordered p-values. Set $M = \min(\lfloor \gamma s \rfloor + 1, |I|)$.
+
+(i) For the stepdown procedure with $\alpha_i$ given by (26),
+
+$$ (29) \qquad P\{\text{FDP} > \gamma\} \le P\left\{\bigcup_{i=1}^{M} \{\hat{q}_{(i)} \le \frac{i\alpha}{|I|}\}\right\}. $$
+
+(ii) Therefore, if the joint distribution of the p-values of the true null hypotheses satisfies the Simes inequality, that is,
+
+$$ P\left\{\{\hat{q}_{(1)} \le \frac{\alpha}{|I|}\} \cup \{\hat{q}_{(2)} \le \frac{2\alpha}{|I|}\} \cup \cdots \cup \{\hat{q}_{(|I|)} \le \alpha\}\right\} \le \alpha, $$
+
+then $P\{\text{FDP} > \gamma\} \le \alpha$.
+
+PROOF. Let $j$ be the smallest (random) index at which the FDP first exceeds $\gamma$; that is, the number of false rejections among the first $j$ rejections, divided by $j$, exceeds $\gamma$ for the first time at step $j$. If $j$ is such that $\gamma j < 1$, then $\text{FDP} > \gamma$ at step $j$ implies $\hat{p}_{(j)} \le \alpha_j$. But this implies
+
+$$ \hat{q}_{(1)} \le \alpha_j = \frac{\alpha}{s+1-j} \le \frac{\alpha}{|I|}, $$
+
+because the number of true null hypotheses $|I|$ necessarily satisfies $|I| \le s - (j-1)$ for such a $j$.
+
+Similarly, if $j$ is such that $1 \le \gamma j < 2$, then we must have $\hat{p}_{(i)} \le \alpha_i$ and $\hat{p}_{(j)} \le \alpha_j$ for some $i < j$, where $i, j$ correspond to true null hypotheses. But for such a $j$, $\alpha_j = 2\alpha/(s+2-j)$, and so we must have $\hat{q}_{(2)} \le 2\alpha/(s-j+2)$. But, by definition of $j$, we must have $|I| \le s - (j-2)$ and so $\hat{q}_{(2)} \le 2\alpha/|I|$.
+
+Continuing in this way, if $m-1 \le \gamma j < m$, the event $FDP > \gamma$ at step $j$ implies $\hat{q}_{(m)} \le m\alpha/|I|$. The largest value of $j$ is of course $s$ and so the largest possible $m$ is $\lfloor \gamma s \rfloor + 1$. Also, we cannot have $m > |I|$. So, with $M$ as in the statement of the theorem,
+
+$$ P\{\text{FDP} > \gamma\} \le \sum_{m=1}^{M} P\{\hat{q}_{(m)} \le \frac{m\alpha}{|I|}, m-1 \le \gamma j < m\} \\
+\le \sum_{m=1}^{M} P\left\{\bigcup_{i=1}^{M} \left\{\hat{q}_{(i)} \le \frac{i\alpha}{|I|}\right\}, m-1 \le \gamma j < m\right\} \\
+\le P\left\{\bigcup_{i=1}^{M} \left\{\hat{q}_{(i)} \le \frac{i\alpha}{|I|}\right\}\right\}. $$
+
+Part (ii) follows trivially. $\square$
+---PAGE_BREAK---
+
+In fact, there are many joint distributions of positively dependent variables for which the Simes inequality is known to hold. In particular, Sarkar and Chang [11] and Sarkar [10] have shown that the Simes inequality holds for the family of distributions characterized by the multivariate totally positive of order 2 ($MTP_2$) condition, as well as for some other important distributions.
+
+Theorem 3.2 points toward a method that controls the FDP without any dependence assumptions. One simply needs to bound the right-hand side of (29). In fact, Hommel [6] has shown that
+
+$$P\left\{\bigcup_{i=1}^{|I|} \{\hat{q}_{(i)} \le \frac{i\alpha}{|I|}\}\right\} \le \alpha \sum_{i=1}^{|I|} \frac{1}{i}.$$
+
+This suggests we replace $\alpha$ by $\alpha(\sum_{i=1}^{|I|} (1/i))^{-1}$. But of course $|I|$ is unknown. So one possibility is to bound $|I|$ by $s$, which then results in replacing $\alpha$ by $\alpha/C_s$, where
+
+$$ (30) \qquad C_j = \sum_{i=1}^{j} (1/i). $$
+
+As is well known, $C_s \approx \log(s+0.5) + \zeta_E$, with $\zeta_E \approx 0.5772156649$ known as Euler's constant. Clearly, changing $\alpha$ in this way is much too conservative and results in a much less powerful method. However, notice that in (29) we really only need to bound a union over $M \le \lfloor \gamma s \rfloor + 1$ events. Therefore, we need to slightly generalize the inequality of Hommel [6], which is done in the following lemma.
+
+**LEMMA 3.1.** Suppose $\hat{p}_1, \dots, \hat{p}_t$ are $p$-values in the sense that $P\{\hat{p}_i \le u\} \le u$ for all $i$ and $u$ in $(0, 1)$. Let their ordered values be $\hat{p}_{(1)} \le \dots \le \hat{p}_{(t)}$. Let $0 = \beta_0 \le \beta_1 \le \beta_2 \le \dots \le \beta_m \le 1$ for some $m \le t$.
+
+(i) Then
+
+$$ (31) \qquad P\{\{\hat{p}_{(1)} \le \beta_1\} \cup \{\hat{p}_{(2)} \le \beta_2\} \cup \dots \cup \{\hat{p}_{(m)} \le \beta_m\}\} \le t \sum_{i=1}^{m} (\beta_i - \beta_{i-1})/i. $$
+
+(ii) As long as the right-hand side of (31) is less than or equal to 1, the bound is sharp in the sense that there exists a joint distribution for the p-values for which the inequality is an equality.
+
+**PROOF.** Let $J$ be the smallest (random) index $j$ among $1 \le j \le m$ for which $\hat{p}_{(j)} \le \beta_j$; define $J$ to be $t+1$ if $\hat{p}_{(j)} > \beta_j$ for all $1 \le j \le m$. Let $\theta_k = P\{J=k\}$. Then the left-hand side of (31) is equal to
+
+$$ P\left\{\bigcup_{k=1}^{m} \{J=k\}\right\} = \sum_{k=1}^{m} \theta_k, $$
+---PAGE_BREAK---
+
+since the events {$J = k$} are disjoint. We wish to bound $\sum_k \theta_k$. For any $1 \le j \le m$,
+
+$$ \sum_{k=1}^{j} kI\{J=k\} = JI\{J \le j\} \le S_j, $$
+
+where $S_j$ is the number of $p$-values $\le \beta_j$. Taking expectations yields
+
+$$ (32) \qquad \sum_{k=1}^{j} k\theta_k \le t\beta_j, \quad j = 1, \dots, m. $$
+
+For $j = 1, \dots, m-1$, multiply both sides of (32) by $1/[j(j+1)]$, and for $j=m$, multiply both sides by $1/m$; then sum over $j$ to yield
+
+$$ (33) \qquad \sum_{j=1}^{m-1} \frac{1}{j(j+1)} \sum_{k=1}^{j} k\theta_k + \frac{1}{m} \sum_{k=1}^{m} k\theta_k \le \sum_{j=1}^{m-1} \frac{t\beta_j}{j(j+1)} + \frac{t\beta_m}{m}. $$
+
+By changing the order of summation, the left-hand side of (33) becomes
+
+$$ \sum_{k=1}^{m-1} k\theta_k \left( \frac{1}{k} - \frac{1}{m} \right) + \frac{1}{m} \sum_{k=1}^{m} k\theta_k = \sum_{k=1}^{m} \theta_k. $$
+
+The right-hand side of (33) is easily seen to be the right-hand side of (31) and (i)
+follows.
+
+To prove (ii), we construct $\hat{p}_1, \dots, \hat{p}_t$ as follows. For $i = 1, \dots, m$, let $U_i$ be uniform on $(\beta_{i-1}, \beta_i]$ and
+let $U_{m+1}$ be uniform on $(\beta_m, 1)$. Let $p$ be equal to the right-hand side of (31),
+assumed less than or equal to 1. Let $\pi_1, \dots, \pi_m$ be probabilities summing to 1,
+with $\pi_i \propto (\beta_i - \beta_{i-1})/i$. Then, with probability $\pi_i p$, randomly pick $i$ indices and
+let those $p$-values be equal to $U_i$, and the remaining $t-i$ $p$-values equal to $U_{m+1}$.
+With the remaining probability $1-p$, let all $p$-values be equal to $U_{m+1}$. With this
+construction it is easily checked that $\hat{p}_i$ is uniform on $(0, 1)$ and the left-hand side
+of (31) is equal to the right-hand side of (31). $\square$
+
+Theorem 3.2 and Lemma 3.1 now lead to the following result.
+
+**THEOREM 3.3.** For testing $H_i: P \in \omega_i$, $i = 1, \dots, s$, suppose $\hat{p}_i$ satisfies (9). Consider the stepdown procedure with constants $\alpha'_i = \alpha_i/C_{\lfloor\gamma s\rfloor+1}$, where $\alpha_i$ is given by (26) and $C_j$ is defined by (30). Then $P\{\text{FDP} > \gamma\} \le \alpha$.
+
+**PROOF.** By Theorem 3.2(i), $P\{\text{FDP} > \gamma\}$ is bounded by the right-hand side of (29) with $\alpha$ replaced by $\alpha/C_{\lfloor\gamma s\rfloor+1}$, which is further bounded by the same expression with $M$ replaced by $\lfloor\gamma s\rfloor + 1$. Then apply Lemma 3.1 with $t = |I|$ and $\beta_i = i\alpha/(C_{\lfloor\gamma s\rfloor+1}|I|)$. $\square$
+
+It is of interest to compare control of the FDP with control of the FDR. Some
+obvious connections between methods that control the FDP in the sense of (25)
+---PAGE_BREAK---
+
+and methods that control its expected value, the FDR, can be made. Indeed, for
+any random variable $X$ on $[0, 1]$, we have
+
+$$
+\begin{align*}
+E(X) &= E(X|X \le \gamma)P\{X \le \gamma\} + E(X|X > \gamma)P\{X > \gamma\} \\
+ &\le \gamma P\{X \le \gamma\} + P\{X > \gamma\},
+\end{align*}
+$$
+
+which leads to
+
+$$
+(34) \qquad \frac{E(X) - \gamma}{1 - \gamma} \le P\{X > \gamma\} \le \frac{E(X)}{\gamma},
+$$
+
+with the last inequality just Markov's inequality. Applying this to $X = \text{FDP}$, we
+see that, if a method controls the FDR at level $q$, then it controls the FDP in the
+sense $P\{FDP > \gamma\} \le q/\gamma$. Obviously, this is very crude because if $q$ and $\gamma$ are
+both small, the ratio can be quite large. The first inequality in (34) says that if
+the FDP is controlled in the sense of (25), then the FDR is controlled at level
+$\alpha(1-\gamma)+\gamma$, which is greater than or equal to $\alpha$ but typically only slightly. These
+crude arguments suggest that control of the FDP is perhaps more stringent than
+control of the FDR.
+
+The comparison of actual methods, however, is complicated by the fact that the FDR-controlling procedure of Benjamini and Hochberg [1] is a stepup procedure, whereas we have only considered stepdown procedures. It is interesting to note that, in order to make our procedure work without any dependence assumptions, we needed to change $\alpha$ to $\alpha/C_{\lfloor\gamma s\rfloor+1}$. Benjamini and Yekutieli [2] show that the Benjamini-Hochberg procedure controlling the FDR at level $q$ can also work without dependence assumptions, if $q$ is replaced by $q/C_s$. Clearly, this is a more drastic change, since $C_s$ is typically much larger than $C_{\lfloor\gamma s\rfloor+1}$. Such connections need to be explored more fully.
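The gap between the two adjustments is easy to quantify; this sketch (an illustration, not from the paper) evaluates $C_j$ of (30) for $s = 1000$ and $\gamma = 0.1$:

```python
import math

def C(j):
    # Partial harmonic sum (30).
    return sum(1.0 / i for i in range(1, j + 1))

s, gamma = 1000, 0.1
print(round(C(s), 3))                          # C_1000, roughly log(1000) + 0.577
print(round(C(math.floor(gamma * s) + 1), 3))  # C_101: a much milder inflation
```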
+
+**4. Conclusions.** We have seen that a very simple stepdown procedure is available to control the *k*-FWER under absolutely no assumptions on the dependence structure of the *p*-values. Furthermore, control of the *k*-FWER provides a measure of control for the *actual* number of false rejections, while the number of false rejections in the case of the FDR can vary widely. We have also considered two stepdown methods that control the FDP in the sense of (25). The first method provides control under very reasonable types of dependence assumptions, while the second holds in general.
+
+**Acknowledgments.** We thank Juliet Shaffer and Michael Wolf for some helpful discussion and references. We also thank the referees and an Associate Editor for many helpful suggestions that greatly improved the clarity of the paper. Thanks to Wenge Guo for pointing out an error in an earlier version.
+
+After the revision and acceptance of this paper, we became aware of the work
+by Hommel and Hoffman [7] which has much overlap with the results in Section 2,
+---PAGE_BREAK---
+
+and we'd like to thank Helmut Finner for pointing out this oversight. In particular, Hommel and Hoffman [7] provide Theorem 2.1(i) with proof, Theorem 2.2 (stated but no proof) and a weaker version of Theorem 2.3(ii) (stated but no proof). They attribute the idea of controlling the number of false hypotheses to Victor [14], who also suggested control of the FDP. However, Hommel and Hoffman did not further discuss control of the FDP as they "could not find suitable procedures satisfying this criterion." As far as we know, the three theorems in Section 3 which address control of the FDP are new.
+
+REFERENCES
+
+[1] BENJAMINI, Y. and HOCHBERG, Y. (1995). Controlling the false discovery rate: A practical and powerful approach to multiple testing. *J. Roy. Statist. Soc. Ser. B* **57** 289–300.
+
+[2] BENJAMINI, Y. and YEKUTIELI, D. (2001). The control of the false discovery rate in multiple testing under dependency. *Ann. Statist.* **29** 1165–1188.
+
+[3] GENOVESE, C. and WASSERMAN, L. (2004). A stochastic process approach to false discovery control. *Ann. Statist.* **32** 1035–1061.
+
+[4] HOCHBERG, Y. and TAMHANE, A. (1987). *Multiple Comparison Procedures*. Wiley, New York.
+
+[5] HOLM, S. (1979). A simple sequentially rejective multiple test procedure. *Scand. J. Statist.* **6** 65–70.
+
+[6] HOMMEL, G. (1983). Tests of the overall hypothesis for arbitrary dependence structures. *Biometrical J.* **25** 423–430.
+
+[7] HOMMEL, G. and HOFFMAN, T. (1988). Controlled uncertainty. In *Multiple Hypotheses Testing* (P. Bauer, G. Hommel and E. Sonnemann, eds.) 154–161. Springer, Heidelberg.
+
+[8] KORN, E., TROENDLE, J., MCSHANE, L. and SIMON, R. (2004). Controlling the number of false discoveries: Application to high-dimensional genomic data. *J. Statist. Plann. Inference* **124** 379–398.
+
+[9] PERONE PACIFICO, M., GENOVESE, C., VERDINELLI, I. and WASSERMAN, L. (2004). False discovery control for random fields. *J. Amer. Statist. Assoc.* **99** 1002–1014.
+
+[10] SARKAR, S. (1998). Some probability inequalities for ordered $MTP_2$ random variables: A proof of the Simes conjecture. *Ann. Statist.* **26** 494–504.
+
+[11] SARKAR, S. and CHANG, C. (1997). The Simes method for multiple hypothesis testing with positively dependent test statistics. *J. Amer. Statist. Assoc.* **92** 1601–1608.
+
+[12] SIMES, R. (1986). An improved Bonferroni procedure for multiple tests of significance. *Biometrika* **73** 751–754.
+
+[13] VAN DER LAAN, M., DUDOIT, S. and POLLARD, K. (2004). Multiple testing. Part III. Procedures for control of the generalized family-wise error rate and proportion of false positives. Working Paper Series, Paper 141, Div. Biostatistics, Univ. California, Berkeley.
+
+[14] VICTOR, N. (1982). Exploratory data analysis and clinical research. *Methods of Information in Medicine* **21** 53–54.
+
+DEPARTMENT OF STATISTICS
+UNIVERSITY OF CALIFORNIA
+BERKELEY, CALIFORNIA 94720
+USA
+
+DEPARTMENT OF STATISTICS
+STANFORD UNIVERSITY
+STANFORD, CALIFORNIA 94305-4065
+USA
+E-MAIL: romano@stat.stanford.edu
\ No newline at end of file
diff --git a/samples/texts_merged/746118.md b/samples/texts_merged/746118.md
new file mode 100644
index 0000000000000000000000000000000000000000..4576e830159240bb1916a15fd158d5d00d8a104b
--- /dev/null
+++ b/samples/texts_merged/746118.md
@@ -0,0 +1,495 @@
+
+---PAGE_BREAK---
+
+# Proving Equalities in a Commutative Ring
+## Done Right in Coq
+
+Benjamin Grégoire¹ and Assia Mahboubi²
+
+INRIA Sophia-Antipolis
+2204, routes des Lucioles - B.P. 93
+06902 Sophia Antipolis Cedex, France
+
+¹Benjamin.Gregoire@sophia.inria.fr
+
+²Assia.Mahboubi@sophia.inria.fr
+
+**Abstract.** We present a new implementation of a reflexive tactic which solves equalities in a ring structure inside the Coq system. The efficiency is improved to such a point that we can now prove equalities that were previously beyond reach. Special care has been taken to implement efficient algorithms while keeping the complexity of the correctness proofs low. This leads to a single tool, with a single implementation, which can be addressed for a ring or for a semi-ring, abstract or not, using the Leibniz equality or a setoid equality. This example shows that reflective methods can be effectively used in symbolic computation.
+
+# 1 Introduction
+
+In the context of a computer algebra system, one of the most extensively used functionalities is the simplification of symbolic expressions and, in particular, the use of algebraic identities. These identities are usually established by elementary combinations of canonical identities, stored in a very large database, in a quite efficient way. Programming similar tools in a proof assistant amounts to programming decision procedures, since the user is concerned with the reliability of the result.
+
+Algebraic identities that the user of a proof assistant has to handle are often equalities modulo the axioms of a ring. There are numerous examples of such identities: the product of two sums of two squares is itself a sum of two squares, remarkable identities like the famous $(a+b)^2 = a^2 + 2ab + b^2$, or even more complex properties like the fact that the product of two sums of eight squares is a sum of eight squares. These equalities are decidable and it seems natural to relieve the user of a proof assistant of such goals by providing an automatic tool. Otherwise, the proof of the identity:
+
+$$ (a+b)^3 = a^3 + 3a^2b + 3ab^2 + b^3 $$
+
+would require no fewer than thirty elementary rewriting steps of the ring axioms.
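Such identities can be checked cheaply, though of course not certified, by evaluating both sides at random points; the sketch below (an illustration, not the method of the paper) tests the $(a+b)^3$ expansion and the two-squares identity over the integers:

```python
import random

def agree(lhs, rhs, arity, trials=200):
    # Evaluate both expressions on random integer points; agreement on many
    # points is strong evidence of a polynomial identity, not a certified proof.
    for _ in range(trials):
        xs = [random.randint(-50, 50) for _ in range(arity)]
        if lhs(*xs) != rhs(*xs):
            return False
    return True

# (a + b)^3 = a^3 + 3a^2b + 3ab^2 + b^3
assert agree(lambda a, b: (a + b) ** 3,
             lambda a, b: a**3 + 3 * a**2 * b + 3 * a * b**2 + b**3, 2)

# Two-squares identity: (a^2 + b^2)(c^2 + d^2) = (ac - bd)^2 + (ad + bc)^2
assert agree(lambda a, b, c, d: (a * a + b * b) * (c * c + d * d),
             lambda a, b, c, d: (a * c - b * d) ** 2 + (a * d + b * c) ** 2, 4)
print("ok")
```

A reflexive tactic like ring instead normalizes both sides once and compares the normal forms, which yields an actual proof rather than evidence.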
+
+The Coq [12] proof assistant already provides such a tool, called *ring*. It is not based on an automatic rewriting strategy but is built using a reflexive technique [3]. The use of reflexivity has already reduced the size of the generated
+---PAGE_BREAK---
+
+proof terms and the time for building and checking them. Nevertheless, the efficiency of ring is not satisfactory. For example, proving $10 * 100 = 1000$ is immediate if the multiplication ranges over the integers, while it takes about a hundred seconds on a 3GHz machine if the multiplication ranges over the axiomatic implementation of real numbers. The efficiency of the method on such goals should not depend on the computational nature of the underlying ring structure. This bad behaviour on constants strongly affects the efficiency of the method on algebraic identities of higher degree. Moreover, the implementation choices made in the ring development severely limit the size of the inputs that ring is able to deal with.
+
+Currently, there exist eight different implementations of ring, depending on the kind of ring: semi-ring or ring, abstract or not, setoid equality or Leibniz equality. Here, we factorize these eight implementations into a single modular implementation, which is finally instantiated to fit the kind of ring required.
+
+The Coq system has recently been improved by the introduction of a compiler and an abstract machine, which now allow the evaluation of Coq programs with the same efficiency as Ocaml programs [8]. After the experiences of marrying computer algebra systems with theorem provers to get both efficiency and reliability [9], it now seems reasonable to use Coq as a single environment for programming, certifying and evaluating computer algebra algorithms. Our `newring` decision procedure is one of these efficient tools required for the manipulation of symbolic expressions, showing that reflexive methods are the way to separate computations from checking inside the proof assistant. Furthermore, it is the first step towards a bunch of other decision procedures, like the simplification of field equalities [6], or decision methods in geometry [11].
+
+In Section 2, we begin with some general remarks about the reflexive method and its use in our particular context. Section 3 is dedicated to our choice of an efficient representation of polynomials, which is a crucial point for efficiency. Section 4 shows the major importance of the choice of the coefficient set for these polynomials. In Section 5, we introduce a new axiomatic structure, called almost-ring, which allows us to unify the implementations of the procedure for rings and semi-rings. In Section 6 we show how the use of the new metalanguage Ltac [5,2] makes it possible to completely avoid the use of external Ocaml code. Section 7 is dedicated to examples and benchmarks, before we conclude in Section 8.
+
+# 2 Overall view of the method
+
+## 2.1 Reflexivity
+
+In the Coq system, the rewriting steps are explicit in a proof: each step builds a predicate having the size of the current goal at the time the rewriting was performed, hence the size of the proof term heavily depends on the number of these rewriting steps. The reflection technique introduced by [1] takes advantage of the reduction system of the proof assistant to reduce the size of the computed proof term and consequently to speed up its checking. It relies on the following remarks:
+---PAGE_BREAK---
+
+- Let $P: A \to Prop$ be a predicate over a set $A$.
+
+- Suppose that we are able to write in the system a semi-decision procedure $f$, such that $f$ is computable and, if $f$ returns *true* on the input $x$, then $P(x)$ is valid, that is to say:
+
+`f_correct : forall x, f x = true -> P x.`
+
+If we want to prove $P(y)$ for a particular $y$, and if we know that $f(y)$ reduces to *true*, then we can simply apply the lemma `f_correct` to $y$ and to a proof that $true = true$, thanks to the conversion rule, which implicitly allows the type of a term to be replaced by an equivalent one (modulo $\beta$-reduction):
+
+$$ \frac{\Gamma \vdash t : T \qquad \Gamma \vdash U : s \qquad T \equiv U}{\Gamma \vdash t : U} $$
+
+This latter proof, which is (`refl_equal true`), is also implicitly a proof that $f(y) = true$: because $f(y)$ reduces to *true*, $true = true$ is convertible with $f(y) = true$. Finally the proof of $P(y)$ we have built is:
+
+`f_correct y (refl_equal true)`
+
+The size of such a proof now only depends on the size of the particular argument $y$ and does not depend on the number of implicit $\beta$-reduction steps: explicit rewriting steps have been replaced by implicit $\beta$-reductions. The proof term of the correctness lemma for $f$ may be large, but it is built once and for all; it is shared by all the instantiations and is never type-checked again. The efficiency of this technique of course strongly depends on how efficiently the system reduces the application $f(y)$, hence on the efficiency of the decision procedure itself.
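The reflection principle above can be made concrete on a toy problem. The sketch below (illustrative Python, not the paper's Coq code) decides equality of sums modulo associativity and commutativity of `+` by flattening each expression into a sorted list of leaves; in a proof assistant, the claim "if `f` answers true the expressions are equal" would be proved once, as the analogue of `f_correct`.

```python
# Proof-by-reflection sketch: a computable checker f for a decidable
# property.  An expression is an int leaf or a pair (a, b) meaning a + b.

def leaves(e):
    """Flatten an expression into its list of integer leaves."""
    if isinstance(e, int):
        return [e]
    a, b = e
    return leaves(a) + leaves(b)

def f(e1, e2):
    """Semi-decision procedure: equal leaf multisets imply equality
    modulo associativity and commutativity of +."""
    return sorted(leaves(e1)) == sorted(leaves(e2))
```

For instance `f((1, (2, 3)), ((3, 1), 2))` holds, since both terms denote `1 + 2 + 3` up to reassociation and commutation.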
+
+## 2.2 General scheme of the newring tactic
+
+The `newring` tactic operates on a ring structure `A`, which includes a base type for its elements, two constants 0 and 1, three binary operations `+`, `*`, `-` over `A` and a unary opposite function `-`, together with the usual axioms defining a commutative ring structure. Its goal is to prove the equality of two terms $t_1$ and $t_2$ of type `A` modulo the ring axioms.
+
+Working by reflection means that we want to build a semi-decision procedure $f$, which takes $t_1$ and $t_2$ as arguments and returns *true* if $t_1$ and $t_2$ are equal modulo associative-commutative rewriting in the ring structure.
+
+A natural way to compare two terms is pattern matching. Yet the Coq system does not allow pattern matching over arbitrary terms, but only over inductive types. That is why terms of the type `A` are going to be *reflected* into an appropriate inductive type `PolExpr`, which describes the syntax of terms of type `A`. This step is also called *metaification*. A term of type `A` is mapped by the meta-function $\mathcal{T}$ to a polynomial expression in `PolExpr` by:
+
+- interpreting every ring constant as a constant polynomial expression (e.g. 0, 1)
+---PAGE_BREAK---
+
+- interpreting every ring operation as an operation over polynomial expressions
+
+- hiding every subterm which is neither a ring constant, nor the application of a ring operation to other subterms behind a labeled variable and building the corresponding association list.
+
+$\mathcal{T}$ is a kind of oracle; we will explain in Section 6 how to build such a function using the *meta*-language Ltac [5], which allows pattern matching over an arbitrary Coq expression.
+
+Once we have built the two `PolExpr` terms $e_1$ and $e_2$ corresponding to $t_1$ and $t_2$, the idea is to check the equality of the normal forms of $e_1$ and $e_2$ and to prove that this implies the equality of $t_1$ and $t_2$. For this purpose, evaluation must commute with normalization, which is expressed by the correctness lemma:
+
+$$ \forall e \in \text{PolExpr}, \varphi_{PE}(e) = \varphi_P(\text{norm}(e)) $$
+
+$\varphi_{PE}$ (resp. $\varphi_P$) is the evaluation function for polynomial expressions (resp. for normalized polynomials `Pol`). It evaluates into elements of $A$, by interpreting back each constant polynomial as a constant of $A$, each variable as the ring term it was hiding and each representation of an operator as the corresponding ring operator.
+
+These functions can be easily defined within the theory by pattern matching over the reflected inductive types.
+
+The inductive type `PolExpr` is adapted to the metaification. To ensure the completeness of our tactic, it should satisfy the following meta-property:
+
+$$ \forall a \in A. \varphi_{PE}(\mathcal{T}(a)) = a. $$
+
+Note that we do not have to prove this property, which cannot be expressed inside Coq. It does not affect the correctness of our decision procedure, but only its completeness.
+
+The type `Pol` stands for the set of normalized forms of polynomial expressions, which does not need to be the same as `PolExpr`. It is adapted to building normal forms efficiently. The `norm` function bridges the gap between these two kinds of constraints: `PolExpr` matches the syntax of the terms in $A$, while `Pol` allows efficient computations.
+
+To prove the equality of $t_1$ and $t_2$, our tactic first computes $e_1$ and $e_2$ using $\mathcal{T}$, and then checks the equality of their normal forms. If it holds, the correctness lemma and the transitivity of equality ensure the equality of $t_1$ and $t_2$:
+
+$$ t_1 = \varphi_{PE}(\mathcal{T}(t_1)) = \varphi_P(\text{norm}(\mathcal{T}(t_1))) = \varphi_P(\text{norm}(\mathcal{T}(t_2))) = \varphi_{PE}(\mathcal{T}(t_2)) = t_2 $$
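The whole scheme can be sketched end-to-end under strong simplifying assumptions: a single variable $X$, integer coefficients, and dense coefficient lists as the normal form `Pol` instead of the sparse Horner form that `newring` actually uses. `PolExpr` is modelled with tuples; all names here are illustrative.

```python
# Toy PolExpr constructors: ("c", n) constant, ("x",) the variable X,
# ("+", a, b) and ("*", a, b) the ring operations.
C, X, ADD, MUL = "c", "x", "+", "*"

def trim(p):
    """Drop trailing zero coefficients so normal forms are canonical."""
    while p and p[-1] == 0:
        p = p[:-1]
    return p

def padd(p, q):
    r = [0] * max(len(p), len(q))
    for i, a in enumerate(p):
        r[i] += a
    for i, b in enumerate(q):
        r[i] += b
    return trim(r)

def pmul(p, q):
    r = [0] * (len(p) + len(q))
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            r[i + j] += a * b
    return trim(r)

def norm(e):
    """Analogue of the paper's normalization: PolExpr -> Pol."""
    tag = e[0]
    if tag == C:
        return trim([e[1]])
    if tag == X:
        return [0, 1]
    if tag == ADD:
        return padd(norm(e[1]), norm(e[2]))
    return pmul(norm(e[1]), norm(e[2]))      # tag == MUL

def f(e1, e2):
    """Semi-decision procedure: syntactic equality of normal forms."""
    return norm(e1) == norm(e2)
```

With these definitions, `f` accepts the reflected forms of $(X+1)(X+1)$ and $X^2 + 2X + 1$, both normalizing to the coefficient list `[1, 2, 1]`.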
+---PAGE_BREAK---
+
+# 3 Sparse Horner normal forms
+
+Choosing the shape of the normal form is a crucial point for the complexity. The normal form for terms in the ring will be determined by the choice made for the normal form of polynomial expressions. We present here the choice we made for the normal form, the sparse Horner normal form, which provides the required efficiency.
+
+## 3.1 Representation
+
+The Horner form for polynomials in $C[X]$ can be represented by the following inductive type:
+
+```coq
+Inductive Pol1 (C:Set) : Set :=
+| Pc : C -> Pol1 C
+| PX : Pol1 C -> C -> Pol1 C.
+```
+
+where $(Pc\ c)$ represents the constant polynomial $c$ and $(PX\ P\ c)$ represents the polynomial $P * X + c$. The problem with such a representation is that a polynomial can contain many holes due to gaps in the degrees. For example, $X^4 + 1$ is represented in the Horner form as $(PX\ (PX\ (PX\ (PX\ (Pc\ 1)\ 0)\ 0)\ 0)\ 1)$: the number of nested PX constructors of such a polynomial is exactly its degree. To get a more compact representation of the Horner form, we can factorize these gaps by adding a power index in the constructor of non-constant polynomials:
+
+```coq
+Inductive Pol1 (C:Set) : Set :=
+| Pc : C -> Pol1 C
+| PX : Pol1 C -> positive -> C -> Pol1 C.
+```
+
+where `positive` is an inductive type representing $\mathbb{N}^*$.
+
+Now $(PX\ P\ i\ c)$ stands for the polynomial $P * X^i + c$, so $X^4 + 1$ is now represented as $(PX\ (Pc\ 1)\ 4\ 1)$.
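The sparse univariate representation and its evaluation can be sketched as follows (illustrative Python, with integer coefficients standing in for the parameter $C$):

```python
# A polynomial is ("Pc", c), the constant c, or ("PX", p, i, c),
# meaning p * x^i + c, where i is the power index.

def eval1(pol, x):
    """Evaluate a sparse Horner form at the point x."""
    if pol[0] == "Pc":
        return pol[1]
    _, p, i, c = pol
    return eval1(p, x) * x ** i + c

# X^4 + 1 needs a single PX node thanks to the power index.
x4_plus_1 = ("PX", ("Pc", 1), 4, 1)
```

Evaluating `x4_plus_1` at 2 gives $2^4 + 1 = 17$, while the naive representation would have needed four nested `PX` nodes.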
+
+Once the representation of univariate polynomials is fixed, there is a natural way to extend it to multivariate polynomials, using the canonical isomorphism $C[X_1, \dots, X_n] = C[X_1, \dots, X_{n-1}][X_n]$. In Coq this can be done by declaring the following fixpoint using dependent types:
+
+```coq
+Fixpoint Poln (C:Set) (n:nat) {struct n} : Set :=
+match n with
+| 0 => C
+| S m => Pol1 (Poln C m)
+end.
+```
+
+The type $(Poln \ C \ n)$ represents the set of polynomials with $n$ variables. Namely $(Poln \ C \ (S \ n))$ represents the set of univariate polynomials with coefficients in $(Poln \ C \ n)$ and $(Poln \ C \ 0)$ is the set of constant polynomials in $C$.
+---PAGE_BREAK---
+
+This representation creates another kind of hole, corresponding to holes in variables. For example, the polynomial 1 will be encoded either by (Pc 1) if it is seen as an element of Z[X], or by (Pc (Pc (Pc (Pc 1)))) if it is seen as an element of Z[W, X, Y, Z]. To solve this problem, we give up the idea of defining multivariate polynomials recursively from univariate ones. We now define the set of polynomials in an arbitrary number of variables in one shot.
+
+```coq
+Inductive Pol (C:Set) : Set :=
+| Pc : C -> Pol C
+| Pinj : positive -> Pol C -> Pol C
+| PX : Pol C -> positive -> Pol C -> Pol C.
+```
+
+- (Pc c) stands for the constant polynomial $c \in C[X_1, \dots, X_n]$ for any $n$.
+
+- If $q \in C[X_1, \dots, X_{n-j}]$ and $Q$ is its representation, then $(\text{Pinj j Q})$ represents $q$ seen as a polynomial in $n$ variables, namely $q \cdot X_{n-j+1}^0 * \dots * X_n^0$. We have “pushed” $q$ from $C[X_1, \dots, X_{n-j}]$ to $C[X_1, \dots, X_n]$; $j$ is called the injection index.
+
+- Finally, (PX P i Q) stands for $P * X_n^i + Q$ where $P \in C[X_1 \dots X_n]$ and $Q \in C[X_1 \dots X_{n-1}]$ is constant in $X_n$.
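The one-shot multivariate type and its evaluation can be sketched as follows (illustrative Python with integer coefficients). We assume the list of values is ordered with the outermost variable $X_n$ first, so that `Pinj j` drops the last $j$ variables and `PX` consumes the head:

```python
# Encoding: ("Pc", c), ("Pinj", j, q), ("PX", p, i, q), where
# ("PX", p, i, q) reads p * X_n^i + q, with q free of X_n.

def evaln(pol, xs):
    tag = pol[0]
    if tag == "Pc":
        return pol[1]
    if tag == "Pinj":
        _, j, q = pol
        return evaln(q, xs[j:])
    _, p, i, q = pol                       # tag == "PX"
    return evaln(p, xs) * xs[0] ** i + evaln(q, xs[1:])

# X1 * X2 + 1 in C[X1, X2]: the coefficient X1 lives one variable down,
# hence the injection Pinj 1 around it.
x1x2_plus_1 = ("PX", ("Pinj", 1, ("PX", ("Pc", 1), 1, ("Pc", 0))), 1, ("Pc", 1))
```

With $x_1 = 3$ and $x_2 = 5$, i.e. the value list `[5, 3]`, the polynomial evaluates to $3 \cdot 5 + 1 = 16$.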
+
+## 3.2 Normalization
+
+Our sparse Horner form does not provide a unique representation for arbitrary polynomials. In $C[X]$ the polynomial $X^4 + 1$ can be represented by $(PX\ (Pc\ 1)\ 4\ (Pc\ 1))$ or by $(PX\ (PX\ (Pc\ 1)\ 3\ (Pc\ 0))\ 1\ (Pc\ 1))$. To solve this, we could define a normalization function that builds a canonical representative of a polynomial, and then define equality on polynomials as the equality of the canonical representatives.
+
+Instead of normalizing before checking equality, our choice is to always manipulate canonical representatives, verifying the three following properties:
+
+- the coefficient of highest degree is never zero;
+
+- the injection index is the biggest possible;
+
+- the power index is the biggest possible.
+
+So the canonical representative of $X^4 + 1$ is $(\text{PX (Pc } 1\text{) } 4 \text{ (Pc } 1\text{)})$. Note that it is also the most compact sparse Horner form representation. Since the complexity of the operations depends on the size of the polynomials, linear for addition and quadratic for multiplication, it is worthwhile to work with canonical terms. This means that each operation on polynomials should only build canonical terms. If P and Q are in canonical form, building the canonical representative of $(\text{PX P i Q})$ is not expensive, since we only need to locally destruct P:
+
+- if P = (Pc 0) then build the canonical representative of (Pinj 1 Q);
+
+- if P = (PX P' i' (Pc 0)) then the canonical representative is (PX P' (i+i') Q);
+
+- else (PX P i Q) is the canonical representative.
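The three cases above can be sketched as smart constructors (illustrative Python on the same tuple encoding; names mirror the paper's `mkPinj`/`mkPX` but are not the Coq code):

```python
# Encoding: ("Pc", c), ("Pinj", j, q), ("PX", p, i, q).

def mk_pinj(j, q):
    """Merge nested injections; never inject a constant."""
    if q[0] == "Pc":
        return q
    if q[0] == "Pinj":
        return ("Pinj", j + q[1], q[2])
    return ("Pinj", j, q)

def mk_px(p, i, q):
    """Local normalization: zero head, mergeable power index, default."""
    if p == ("Pc", 0):
        return mk_pinj(1, q)
    if p[0] == "PX" and p[3] == ("Pc", 0):
        return ("PX", p[1], i + p[2], q)       # merge the power indices
    return ("PX", p, i, q)
```

For example, rebuilding the non-canonical $(PX\ (PX\ (Pc\ 1)\ 3\ (Pc\ 0))\ 1\ (Pc\ 1))$ with `mk_px` yields the canonical $(PX\ (Pc\ 1)\ 4\ (Pc\ 1))$.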
+---PAGE_BREAK---
+
+Our operations on polynomials, denoted by Padd, Pmul, Psub, and Popp, keep the following invariant: if their arguments are canonical, then their result is canonical. To ensure this, we use specialized constructors that perform local normalizations: mkPinj and mkPX. For example, the addition of (PX P i Q) and (PX P' i Q') leads to the term (mkPX (Padd P P') i (Padd Q Q')); since the addition of P and P' can be the zero polynomial, we need mkPX to ensure that the result is canonical. But we directly use the constructors Pinj and PX, which are costless, each time the invariant allows it. For instance, the addition of (PX P i Q) and (Pc c) reduces to (PX P i (Padd Q (Pc c))): here P cannot be zero or of the form (PX P' i' (Pc 0)), since (PX P i Q) is canonical, so (PX P i (Padd Q (Pc c))) is canonical.
+
+For each operator, we prove a correctness lemma showing that the operator is correct up to evaluation. For the addition the lemma is:
+
+Lemma Padd_correct : $\forall$ P Q l,
+$\varphi_P\ l\ (\mathrm{Padd}\ P\ Q) == (\varphi_P\ l\ P) + (\varphi_P\ l\ Q)$.
+
+where == is the setoid equality over the initial ring (or semi-ring) structure and + is its addition.
+
+Note that using mkPX instead of PX has no influence on correctness, because (phiP l (mkPX P i Q)) is equal to (phiP l (PX P i Q)). The only influence is on completeness, since using PX instead of mkPX can produce a non-canonical representative. But again, we do not need to prove completeness.
+
+The normalization function from polynomial expressions to their canonical sparse Horner forms consists in mapping variables to monomials, constants to constant polynomials and operation constructors to the operation functions on Horner forms. The canonical representative is given by the evaluation of the term obtained.
+
+After having defined the normalization function, we can prove its correctness:
+
+Lemma norm_correct : $\forall$ l e, phiPE l e == phiP l (norm e).
+
+And then the main lemma, which expresses the correctness of our decision procedure:
+
+Lemma f_correct : $\forall$ l e1 e2,
+$\mathrm{Peq}\ (\mathrm{norm}\ e1)\ (\mathrm{norm}\ e2) = \mathrm{true} \rightarrow \mathrm{phiPE}\ l\ e1 == \mathrm{phiPE}\ l\ e2.$
+
+where Peq stands for a defined function which checks syntactic equality over sparse Horner forms.
+
+The set of coefficients *C* is the carrier of the computations performed by the normalization function. The following section will show that the choice made for *C* is crucial, especially for the efficiency of the procedure, as *C* catches the “best computational part” of the ring.
+
+# 4 Computations over the parametric coefficient set
+
+The normalization function we have described above strongly relies on the computational behavior of the set of coefficients. For example the normalization of
+---PAGE_BREAK---
+
+$x + (-x)$ leads to $(1 + (-1)) \cdot x$, which will reduce to $0 \cdot x$. $C$ has to be chosen as a set over which we know how to compute, as efficiently as possible. In the Coq system, such sets are represented by inductive types, and the operations are defined as functional programs.
+
+In the Coq system, $Z$ is an implementation of $\mathbb{Z}$ as lists of binary digits. If $Z$ is the underlying ring of the equality to be proved, $Z$ itself is a good candidate. On the other hand, if the underlying ring is $\mathbb{R}$, the axiomatic implementation of real numbers in Coq, $\mathbb{R}$ itself will not be an appropriate set of coefficients. Indeed, in $\mathbb{R}$, $1 + (-1)$ is equal to 0 (using the ring axioms) but does not reduce to 0: the subtraction, like the other operations and constants of $\mathbb{R}$, is only a symbol and is not evaluable. Hence $x + (-x)$ would not be reduced to $0 \cdot x$ by the normalization function. Since there is a natural inclusion of $\mathbb{Z}$ in $\mathbb{R}$, we can use $Z$ as the set of coefficients. Moreover, whatever ring $A$ we are dealing with, the canonical morphism from $\mathbb{Z}$ to $A$ enables us to use $Z$ as a set of coefficients. The type $Z$ thus seems to be a universal candidate for coefficients.
+
+Nevertheless, $Z$ will not always be the best choice. If the computational content of the ring operations is stronger than the ring axioms, this method allows us to prove more than what is provable by rewriting of the ring axioms alone. If we are working in the ring `bool`, the equality $x+x=0$ holds, even though it is not provable using only the ring axioms. The good choice for $C$ is now `bool` itself: the left side of the equality is again reflected as $X+X$ (with coefficients in `bool`), whose normal form $(1+1) \cdot X$ is reduced to $0 \cdot X = 0$ by the normalization function, thanks to the computations over the coefficients in `bool`. Hence our choice is to parametrize our tactic by the set of coefficients and to let the user make the most appropriate choice.
+
+An inductive type has to fulfill some requirements to be admissible as a set of coefficients. These requirements ensure the correctness of the normalization function. Formally, $C$ is admissible if it is equipped with the constants and operators of a ring, and with a decidable equality relation $=_{C}$. The last requirement is needed to implement the `mkPX` and `mkPinj` constructors (we need to be able to test equality with 0). It also allows us to get a decidable equality on sparse Horner forms.
+
+We also require a suitable evaluation function from $C$ to $A$, mapping the constants of $C$ to elements of $A$; this function should be compatible with the respective operations of $C$ and $A$. These requirements can be expressed by the existence of a so-called *morphism* between $C$ and $A$ (even if $C$ does not need to be a ring). This morphism evaluates the constants and operators of $C$ into their analogues in $A$, and the decidable equality relation $=_{C}$ over $C$ must satisfy: if $(x =_{C} y)$ returns true, then the evaluations of $x$ and $y$ are equal in $A$.
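The data an admissible coefficient set must provide can be sketched as a record (illustrative Python; field names are ours, not the Coq ones), here instantiated for the Boolean ring, where + is exclusive-or, * is conjunction, and $C = A = $ `bool`:

```python
from dataclasses import dataclass
from typing import Any, Callable

@dataclass
class CoefMorphism:
    """Constants, operations, decidable equality, and the evaluation
    (morphism) from the coefficient set C into the target ring A."""
    c0: Any
    c1: Any
    cadd: Callable
    cmul: Callable
    ceqb: Callable      # decidable equality on C
    phi: Callable       # evaluation into A

bool_coefs = CoefMorphism(
    c0=False,
    c1=True,
    cadd=lambda a, b: a != b,    # xor: 1 + 1 actually computes to 0
    cmul=lambda a, b: a and b,
    ceqb=lambda a, b: a == b,
    phi=lambda c: c,             # identity morphism, since C = A
)
```

Here `cadd(c1, c1)` computes to `c0`, which is exactly why, with `bool` as coefficient set, the normal form $(1+1) \cdot X$ collapses to 0.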
+
+Once we have $C$ and a proof of all these specifications, we define in a generic way the operations over polynomials as explained in Section 3, and extend the morphism between $C$ and $A$ into two evaluation functions $\varphi_{PE}$ and $\varphi_{P}$, from the polynomial expressions and sparse Horner forms to $A$. We also obtain a proof of the general diagram of reflection presented in Section 2.2, *Pol*
+---PAGE_BREAK---
+
+and *PolExpr* being now replaced by their parametrized versions *Pol(C)* and *PolExpr(C)*.
+
+We have implemented the identity morphism, which corresponds to taking the ring itself as the set of coefficients. The user can always apply the resulting tactic, even if it may not prove many equalities (as in the case where $\mathbb{R}$ is involved). We have also implemented the morphism from $\mathbb{Z}$ to an arbitrary ring, which can always be used as an efficient default choice, but is not necessarily the best one (cf. the case of `bool`).
+
+In order to get the maximal efficiency from this method, the user has to make the most appropriate choice for $C$. If the ring structure is defined in an axiomatic way, like $\mathbb{R}$, $Z$ will always be a good choice for the set of coefficients. If the ring already has a computational content, like $Z$ or `bool`, it may be a good choice to take the ring itself as the coefficient set. Nevertheless, if the available operations are not efficient enough, as is the case for example in the semi-ring of Peano numbers, it may be more appropriate to obtain a more efficient computational content by changing the set of coefficients all the same, here for example by taking a binary representation of natural numbers.
+
+# 5 Unifying rings and semi-rings
+
+A semi-ring is a ring where the axioms stating the existence of an opposite (and of a subtraction) have been replaced by an extra axiom: $\forall x, 0 * x = 0$. These structures are quite alike, and we would like to get a tool also adapted to semi-rings without duplicating the code. For this purpose, we work with an intermediate structure, called almost-ring. The idea is to complete a semi-ring with a unary operator, called the almost-opposite, which is morally the opposite operator of a ring structure. This operator will be instantiated by a dummy function to equip a semi-ring with such a structure. The fundamental remark is the following: in the correctness proof of the normalization function, the axiom defining the opposite operator as an inverse, stating that $\forall x, x + (-x) = 0$, is never used itself, but only the properties which describe its combination with the other operators. Finally, an almost-ring is defined by the following axioms:
+
+- $\forall x, 0 + x = x$
+
+- $\forall x y, x + y = y + x$
+
+- $\forall x y z, x + (y + z) = (x + y) + z$
+
+- $\forall x, 1 * x = x$
+
+- $\forall x y, x * y = y * x$
+
+- $\forall x y z, x * (y * z) = (x * y) * z$
+
+- $\forall x y z, (x + y) * z = x * z + y * z$
+
+- $\forall x, 0 * x = 0$ (at that point we have a semi-ring)
+
+- $\forall x y, -(x * y) = -x * y$ (combination of pseudo-opposite with product)
+
+- $\forall x y, -(x + y) = -x - y$ (combination of pseudo-opposite with addition)
+
+- $\forall x y, x - y = x + -y$ (definition of an associated pseudo-subtraction)
+---PAGE_BREAK---
+
+It is straightforward to prove that every ring is an almost-ring. The axioms of an almost-ring do not allow one to prove the missing ring axiom defining the opposite, $\forall x, x + (-x) = 0$. Nevertheless, this identity will be proved by our tactic, provided that in the set of coefficients $1 + (-1)$ reduces to 0. This is ensured thanks to the existence of a morphism from the set of coefficients to the ring. Every semi-ring can also be equipped with an almost-ring structure if we take the identity as the almost-opposite operator and the addition operator of the semi-ring as subtraction.
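The claim for semi-rings can be checked exhaustively on the Boolean example (a quick sketch, not a Coq proof): with + as exclusive-or, * as conjunction, the identity as pseudo-opposite and + as pseudo-subtraction, the three almost-ring laws combining the pseudo-opposite with the other operators all hold.

```python
from itertools import product

add = lambda x, y: x != y      # xor
mul = lambda x, y: x and y
opp = lambda x: x              # dummy pseudo-opposite (the identity)
sub = add                      # pseudo-subtraction

# Check the three pseudo-opposite laws over all Boolean pairs.
almost_ring_ok = all(
    opp(mul(x, y)) == mul(opp(x), y)        # -(x*y) = -x * y
    and opp(add(x, y)) == sub(opp(x), y)    # -(x+y) = -x - y
    and sub(x, y) == add(x, opp(y))         # x - y = x + -y
    for x, y in product([False, True], repeat=2)
)
```

With these instantiations each law degenerates to a trivial equality, which is exactly why the dummy almost-opposite is harmless for semi-rings.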
+
+The tactic is finally designed for an almost-ring structure. We have moreover built the proofs required to transform any ring or semi-ring into the associated almost-ring.
+
+The last parameter given to the tactic is the equality relation used over the ring. It may not be the Leibniz equality, but an equivalence relation adapted to the ring structure. For example, this is the case for an implementation of $\mathbb{Q}$ as $\mathbb{Z} \times \mathbb{N}^*$. A set equipped with such an equality relation is called a setoid ([7], [10]). Proving equalities in such a setoid ring requires extra properties stating that all the ring operations are compatible with the given setoid equality. When the equality involved in the goal is the Leibniz one, these requirements are trivial to fulfill. That is why the tactic is finally also parametrized by a setoid equality and the related compatibility lemmas for the operations.
+
+# 6 Programming the metaification and the tactic
+
+The purpose of the `newring` tactic is to solve goals of the form $t_1 == t_2$ by applying the `f_correct` lemma. To do so we need to produce a list of values $l$ and two polynomial expressions $e_1$ and $e_2$ such that the evaluation of $e_1$ (resp. $e_2$) at $l$ is convertible to $t_1$ (resp. $t_2$). Consider the following equality:
+
+$$3 * \sin(x) * x = x * (\sin(x) + 2 * \sin(x)) + 0 * y$$
+
+In this case $l$ will be $[\sin(x); x; y]$, $e_1$ will be $3 * X_1 * X_2$ and $e_2$ will be $X_2 * (X_1 + 2 * X_1) + 0 * X_3$.
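The construction of $(l, e_1, e_2)$ can be sketched on a toy term type (illustrative Python; the names mirror the `FV` and `mkPolexpr` tactics of the next section but are not the actual Ltac code). Ring constants and operations are reflected structurally, and any other subterm, modelled here as a string such as `"sin x"`, is abstracted into an indexed variable:

```python
def fv(t, acc=None):
    """Collect the subterms to abstract, in order of first occurrence."""
    if acc is None:
        acc = []
    if isinstance(t, tuple):            # ("+", a, b) or ("*", a, b)
        fv(t[1], acc)
        fv(t[2], acc)
    elif isinstance(t, str) and t not in acc:
        acc.append(t)                   # not part of the ring syntax
    return acc

def reflect(t, l):
    """Map a term to a polynomial expression over variables X_1..X_n."""
    if isinstance(t, tuple):
        op = "PEadd" if t[0] == "+" else "PEmul"
        return (op, reflect(t[1], l), reflect(t[2], l))
    if isinstance(t, str):
        return ("PEX", l.index(t) + 1)  # variables numbered from X_1
    return ("PEc", t)                   # an integer constant
```

On the left-hand side above, `t = ("*", ("*", 3, "sin x"), "x")` yields `l = ["sin x", "x"]` and the expression `("PEmul", ("PEmul", ("PEc", 3), ("PEX", 1)), ("PEX", 2))`, i.e. $3 * X_1 * X_2$.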
+
+## 6.1 Programming the metaification
+
+We use the Coq proof-dedicated metalanguage Ltac [5] to design the oracle producing the expected values $(l, e_1, e_2)$. This metalanguage allows pattern matching on arbitrary Coq terms, and thereby lets us program this meta-function, which is a tactic, in a natural way, as done in [6].
+
+We first build a function FV which computes the list $l$ of the subterms to abstract, namely those which do not belong to the syntax of a ring. Then the `mkPolexpr` tactic computes the two expressions $e_1$ and $e_2$; the list $l$ is used to find which variable is associated with a given subexpression to abstract.
+
+```coq
+Ltac mkPolexpr Cst add mul sub opp t l :=
+let rec mkP t :=
+match t with
+ | (add ?t1 ?t2) =>
+ let e1 := mkP t1 in
+ let e2 := mkP t2 in constr:(PEadd e1 e2)
+ | (mul ?t1 ?t2) => ...
+ | (sub ?t1 ?t2) => ...
+ | (opp ?t1) => ...
+ | _ =>
+ match Cst t with
+ | false => let p := Find_at t l in constr:(PEX p)
+ | ?c => constr:(PEc c)
+ end
+end
+in mkP t.
+```
+
+The tactic `mkPolexpr` takes as arguments a term `t`, the list `l` of terms to abstract, the ring operators and a tactic `Cst`. It matches the head symbol of `t`:
+
+- If this symbol is one of the given operators then it builds recursively the corresponding polynomial expression;
+
+- If the head symbol is not an operator, then either `t` is a constant or it has to be abstracted into a variable. This discrimination is performed by the tactic `Cst` given as an argument:
+
+ * If `Cst` returns `false`, then the index of the proper variable is given by the position of `t` in the list `l` given as an argument.
+
+ * Otherwise `t` is mapped to the corresponding constant.
+
+The definition of the `Cst` tactic depends on the ring `A`. If `A` is an abstract ring, the set of coefficients will be `Z`, and we can already define a naive tactic which matches only the neutral elements of `A` (`r0` and `rI`).
+
+```coq
+Ltac genCstZ r0 rI t :=
+ match t with
+ | r0 => constr:(0%Z)
+ | rI => constr:(1%Z)
+ | _ => constr:false
+ end.
+```
+
+On the other hand, in the case `A` is `Z`, the set of coefficients will be `Z` itself, and we can match much more constants: in fact all the terms built only with the constructors of `Z`.
+
+```coq
+Ltac ZCst t :=
+ match (is_ZCst t) with
+ | true => constr:t
+ | false => constr:false
+ end.
+```
+---PAGE_BREAK---
+
+Here `is_ZCst` is a tactic matching the terms built only with the constructors of the inductive type Z.
+
+This method has also been generalized to the case of semi-rings, where N, the implementation of binary natural numbers, plays the role of Z. We have also built such a `Cst` tactic for `bool`, where the target constants are the two booleans.
+
+## 6.2 The generic tactic
+
+To define the `newring` tactic itself, we use the possibility given by Ltac to program a higher-order function which builds a tactic solving equalities in the structure given as argument. For the sake of clarity we present a simplified version that can be used only if the goal is a valid equality modulo the ring axioms, and fails otherwise. The real implementation also replaces both members of the equality by their normal forms if they are not equal.
+
+```coq
+Ltac Make_ring_tac add mul sub opp req Cst_tac :=
+ match goal with
+ | [ |- req ?r1 ?r2 ] =>
+   let fv := FV Cst_tac add mul sub opp (add r1 r2) (nil R) in
+   let e1 := mkPolexpr Cst_tac add mul sub opp r1 fv in
+   let e2 := mkPolexpr Cst_tac add mul sub opp r2 fv in
+   apply (f_correct fv e1 e2); compute; exact (refl_equal true)
+ | _ => fail "not an equality"
+ end.
+```
+
+The tactic first checks that the current goal is an equality. If so, it computes a single list `fv` of subterms to be abstracted in both terms, and the two polynomial expressions `e1` and `e2` representing the members of the equality. Then the tactic applies the correctness lemma `f_correct`. At that point the tactic has to prove the hypothesis of the lemma, namely to check that `(norm e1) ?=:= (norm e2)` is equal to `true`.
+
+If `r1` and `r2` are equal modulo the ring axioms, then this new goal is convertible to `true = true`, so it is possible to complete the proof with the term `(refl_equal true)`. The tactic `exact` checks that the provided term has a type convertible to the current goal `((norm e1) ?=:= (norm e2)) = true`. This check is performed using a lazy reduction strategy, whereas checking convertibility here amounts to computing the normal form of the equality's left-hand side, for which the efficient strategy is call-by-value reduction. So the tactic first uses the `compute` tactic to reduce the goal in this way, before concluding with `exact`.
+
+We can now apply `Make_ring_tac` to obtain a tactic which automatically proves ring equalities in Z:
+
+```coq
+Ltac zring := Make_ring_tac Zplus Zmult Zminus Zopp (@eq Z) ZCst.
+```
+
+We have also implemented such tactics for booleans (`bring`), reals (`rring`) and natural numbers (`nring`), for Peano numbers as well as for their binary implementation.
+---PAGE_BREAK---
+
+Finally, the `newring` tactic analyzes the type of the equality to prove and calls the corresponding specialized tactic:
+
+```coq
+Ltac newring :=
+match goal with
+| [|- @eq Z _ _ ] => zring
+| [|- @eq R _ _ ] => rring
+| [|- @eq bool _ _ ] => bring
+| [|- @eq nat _ _ ] => nring
+end.
+```
+
+To work with another user-defined structure, one can always use the predefined tactic `Make_ring_tac` to build the appropriate tactic for proving equalities in this structure.
+
+# 7 Examples and Benchmarks
+
+The `newring` tactic brings two orthogonal improvements over the choices made in the `ring` tactic developed by S. Boutin [3]. The first one is the choice of the sparse Horner form for the representation of normal forms, instead of an ordered sum of monomials, each monomial being itself an ordered product of variables. The second is to use Z as the set of coefficients for reflected expressions when working with abstract rings (R for example).
+
+## 7.1 Sparse Horner form
+
+Figure 1 describes the time needed to normalize the expression $(x_1 + \dots + x_n)^d$, seen as a polynomial with coefficients in $Z$. For `ring`, the normal form of this expression is its expansion into an ordered sum of monomials, each prefixed by a coefficient in $Z$. Both tactics use $Z$ as the set of coefficients, so these benchmarks show the benefit of the sparse Horner form when dealing with polynomials of higher degree. The gain in time is a factor of 6 for $n = 5$ and $d = 5$, and a factor of 500 for $n = 7$ and $d = 9$, thanks to the compactness of the sparse Horner form representation. Using a naive Horner form (without power and injection indices, or not maintaining canonical representatives) introduces an overhead of 30%. Moreover, the `ring` tactic is not able to normalize this expression when $n = 8$ and $d = 9$, and when $n = 12$ it already fails for $d = 6$. The `newring` tactic is able to normalize the expression for $n = 12$ and $d = 11$.
+
+Comparing the time to normalize expressions of the form $(x_1 + \dots + x_n)^d$ with the results given by the `expand` function of Maple is misleading. The algorithm used by the computer algebra system is mainly based on access to a database of stored identities, and on simple combinations of them. When the precomputed identities are useless, the system is of course less efficient, and can even fail because of the size of the normal form. This is the case
+---PAGE_BREAK---
+
+Fig. 1. Time to prove that $(x_1 + \dots + x_n)^d$ is equal to its normal form
+
+for expressions of the following form
+
+$$ (y + x_2 + \dots + x_{n-1} + x_n) * $$
+
+$$ (x_1 + y + \dots + x_{n-1} + x_n) * $$
+
+$$ \vdots $$
+
+$$ (x_1 + x_2 + \dots + y + x_n) * $$
+
+$$ (x_1 + x_2 + \dots + x_{n-1} + y) $$
+
+For $n = 8$ the newring tactic is four times slower than the expand strategy of Maple (0.4s for newring, 0.12s for Maple). But Maple fails to expand the formula when $n = 9$ (Error, (in expand/bigprod) object too large), while newring finishes in 1.7s.
+
+## 7.2 The set of coefficients
+
+Besides the successful use of the Horner form, the use of Z as the set of coefficients when working with an abstract ring has been a major improvement for efficiency. With the previous ring tactic, the representation of normal forms in an abstract ring leads to coefficients equivalent to unary numbers, hence computations are completely inefficient. Proving that $10*100 = 1000$ takes about one hundred seconds on a 3GHz machine using ring, while it is now immediate with newring (as one would expect). It is worth paying attention to the efficiency of such a tactic over (large) integers. One often deals with expressions with small coefficients, but successive computations may increase their size in a significant way. A well-known phenomenon of explosion in the size of the coefficients occurs while computing a remainder sequence of polynomials, as in the computation of a polynomial gcd in $\mathbb{Q}[X]$. For example, in the context of the checking of
+---PAGE_BREAK---
+
+computations made by an external oracle [9] (Maple or any dedicated program producing a trace of certificates...), checking the successive steps of such a computation forces one to deal with large coefficients, even if the initial polynomial entries had small ones.
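The explosion is easy to reproduce. The following sketch (ours, not part of the development) runs the Euclidean remainder sequence on Knuth's classic example: two polynomials with single-digit integer coefficients whose remainders in $\mathbb{Q}[X]$ quickly acquire very large numerators and denominators.

```python
from fractions import Fraction

def poly_rem(a, b):
    """Remainder of a mod b over Q; polynomials are lists of
    Fractions, highest-degree coefficient first."""
    a = a[:]
    while len(a) >= len(b):
        if a[0] == 0:
            a.pop(0)
            continue
        q = a[0] / b[0]
        for i in range(len(b)):
            a[i] -= q * b[i]
        a.pop(0)  # leading coefficient is now zero
    return a

# f = x^8 + x^6 - 3x^4 - 3x^3 + 8x^2 + 2x - 5
# g = 3x^6 + 5x^4 - 4x^2 - 9x + 21          (Knuth's example)
f = [Fraction(c) for c in (1, 0, 1, 0, -3, -3, 8, 2, -5)]
g = [Fraction(c) for c in (3, 0, 5, 0, -4, -9, 21)]
seq = [f, g]
while True:
    r = poly_rem(seq[-2], seq[-1])
    while r and r[0] == 0:  # strip leading zeros
        r.pop(0)
    if not r:
        break
    seq.append(r)
for poly in seq[2:]:
    print([str(c) for c in poly])
```

The final remainder is a constant fraction whose numerator and denominator are enormous compared with the one-digit inputs, even though the two polynomials are coprime.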
+
+## 8 Conclusion
+
+This development shows that it is worth paying attention to the algorithmic aspects when programming such a procedure, in the same way one would when programming it in a functional language. The choices we made in that direction turned out to be essential for efficiency. This gain in efficiency could have led to a complication of the associated correctness proofs. This is not the case: the difficulties in the proofs lie in the mathematical complexity of the problem rather than in the choices made for computation. This effort has even allowed us to reduce the size of the development, by factorizing the eight versions of the tactic into a single one.
+
+Another characteristic feature of the reflexive method is that it requires, for the reflection step, an operator defined at the meta level, and hence the use of the meta-language of the system. The Ltac metalanguage turns out to be exactly the tool needed in a reflexive tactic to program this reflection step in the meta-theory. Its mechanism of pattern-matching over Coq terms makes it easy to write this function without any knowledge of the internals of Coq, and to work entirely at the top level, without recompiling the whole sources of the system to integrate the new tactic.
+
+A possible improvement of our development would be to allow negative powers in the representation of polynomials, in order to deal with Laurent series. Alternatively, one can use the observation that proving an equality in a field can be transformed into a goal in a certain ring, plus nonzero conditions for the denominators. This implementation of a *newfield* tactic has been achieved by L. Théry.
+
+This work shows that the sparse Horner form is the right representation for computing efficiently with polynomials. We hope that existing developments, such as the decision procedure for geometry [11], which rely strongly on the *ring* tactic, will gain in efficiency and hence in power.
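For intuition, here is a minimal univariate sketch of such a Horner-style representation in Python (our own illustration; the Coq development handles multivariate polynomials): a polynomial is either a constant or of the form $P_1 \cdot x^i + P_2$, so only nonzero coefficients are stored and evaluation never expands the polynomial.

```python
from dataclasses import dataclass
from typing import Union

@dataclass
class Const:
    c: int                 # constant polynomial

@dataclass
class Horner:
    p: "Poly"              # coefficient polynomial P1
    i: int                 # exponent of x
    q: "Poly"              # tail P2 (degree < i)

Poly = Union[Const, Horner]

def evaluate(poly: Poly, x: int) -> int:
    # Horner-style evaluation: P1(x) * x^i + P2(x).
    if isinstance(poly, Const):
        return poly.c
    return evaluate(poly.p, x) * x**poly.i + evaluate(poly.q, x)

# x^5 + 3x^2 + 7 stored as ((1) * x^3 + 3) * x^2 + 7: a handful of
# nodes, regardless of how many zero coefficients the dense form has.
p = Horner(Horner(Const(1), 3, Const(3)), 2, Const(7))
print(evaluate(p, 2))  # 2^5 + 3*2^2 + 7 = 51
```

The sparsity is what keeps normal forms compact even when exponents are large, which is the property the tactic exploits.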
+
+We are also convinced that this will allow the development of other efficient procedures for symbolic expressions, providing a basic toolkit for larger developments in the domain of certified computer algebra. In particular, the second author uses the Horner representation of polynomials to develop a decision procedure for the theory of real numbers based on G. Collins' cylindrical algebraic decomposition [4], a rather complex algorithm relying on numerous computations over polynomials (gcds, subresultant coefficients, ...).
+
+The efficiency of *newring* overcomes what was previously a strongly limiting factor in such a development, showing that it is possible to compute efficiently within a proof assistant. This makes it possible to use the proof assistant as a single environment for computing and proving, as well as an efficient checker of computations possibly performed by an external tool, as described in [9].
+---PAGE_BREAK---
+
+The systematic use of Z as the set of coefficients has considerably increased the efficiency of the tactic. Yet Z, in which numbers are represented as lists of bits, is not the best possible implementation of integers. Another step toward the efficiency of a genuine computer algebra system will be to provide the user with a library of machine binary integers, with fast arithmetic operations, in order to deal even more efficiently with the huge integers occurring during symbolic computations (e.g. polynomial gcds, prime numbers).
+
+## References
+
+1. S. F. Allen, R. L. Constable, D. J. Howe, and W. Aitken. The semantics of reflected proof. In *Proceedings of the 5th Symposium on Logic in Computer Science*, pages 95–197, Philadelphia, Pennsylvania, June 1990. IEEE Computer Society Press.
+
+2. Y. Bertot and P. Castéran. *Interactive Theorem Proving and Program Development. Coq'Art: The Calculus of Inductive Constructions*. Texts in Theoretical Computer Science. Springer-Verlag, 2004.
+
+3. S. Boutin. Using reflection to build efficient and certified decision procedures. In *TACS*, pages 515–529, 1997.
+
+4. G. E. Collins. Quantifier elimination for the elementary theory of real closed fields by cylindrical algebraic decomposition. volume 33 of *Lecture Notes In Computer Science*, pages 134–183. Springer-Verlag, Berlin, 1975.
+
+5. D. Delahaye. A Tactic Language for the System Coq. In *LPAR, Reunion Island*, volume 1955, pages 85–95. Springer-Verlag LNCS/LNAI, November 2000.
+http://cedric.cnam.fr/delahaye/publications/LPAR2000-ltac.ps.gz.
+
+6. D. Delahaye and M. Mayero. Field: une procédure de décision pour les nombres réels en Coq. In *Journées Francophones des Langages Applicatifs, Pontarlier*. INRIA, January 2001.
+http://cedric.cnam.fr/delahaye/publications/JFLA2000-Field.ps.gz.
+
+7. G. Barthe, V. Capretta, and O. Pons. Setoids in type theory. *Journal of Functional Programming*, 13(2):261–293, March 2003.
+
+8. B. Grégoire and X. Leroy. A compiled implementation of strong reduction. In *International Conference on Functional Programming 2002*, pages 235–246. ACM Press, 2002.
+
+9. J. Harrison and L. Théry. A skeptic's approach to combining HOL and Maple. *Journal of Automated Reasoning*, 21:279–294, 1998.
+
+10. M. Hofmann. A simple model for quotient types. In *TLCA '95*, pages 216–234, April 1995.
+
+11. J. Narboux. A decision procedure for geometry in Coq. In *TPHOLs*, pages 225–240, 2004.
+
+12. The Coq development team. The Coq proof assistant reference manual v7.2. Technical Report 255, INRIA, France, March 2002. http://coq.inria.fr/doc8/main.html.
\ No newline at end of file
diff --git a/samples/texts_merged/7659893.md b/samples/texts_merged/7659893.md
new file mode 100644
index 0000000000000000000000000000000000000000..a1bed9683fd9349266b75caf563ce24a5470d33b
--- /dev/null
+++ b/samples/texts_merged/7659893.md
@@ -0,0 +1,76 @@
+
+---PAGE_BREAK---
+
+# Bitcoin, Currencies, and Fragility: Supplementary Discussions
+
+Nassim Nicholas Taleb*†‡
+
+†Universa Investments
+
+‡Tandon School of Engineering, New York University
+
+## RATIONAL EXPECTATIONS
+
+In discrete time, a price is the expected cash flow received at the end of the next period $t+1$ plus the expected price at period $t+1$. So let $P_t$, $C_t$, and $I_t$ be the price, cash flow (payout to the investor), and information, respectively, at period $t$, with $r_d$ the discount rate. Without loss of generality, we simplify by assuming $C_t$ and $r_d$ are not stochastic. We note that the "cash flow" to the investor includes any payout, not just dividends, so $C_t$ includes the liquidation value.
+
+$$ P_t = \frac{1}{1+r_d} \left\{ C_{t+1} + \underbrace{\mathbb{E}(P_{t+1}\mid I_t)}_{\frac{1}{1+r_d}\left(C_{t+2}+\mathbb{E}\left(\mathbb{E}(P_{t+2}\mid I_{t+1})\mid I_t\right)\right)} \right\}, \quad (1) $$
+
+By the law of iterated expectations,
+
+$$ \mathbb{E}(\mathbb{E}(P_{t+2}|I_{t+1})|I_t) = \mathbb{E}(P_{t+2}|I_t). $$
+
+Then, noting that, seen from the present period $t$, $\mathbb{E}(P_{t+1}|I_t)$ can simply be written $\mathbb{E}(P_{t+1})$, we obtain
+
+$$ P_t = \lim_{n \to \infty} \left( \underbrace{\sum_{i=1}^{n} \left( \frac{1}{1+r_d} \right)^i C_{t+i}}_{=0 \text{ for bitcoin}} + \left( \frac{1}{1+r_d} \right)^n \mathbb{E}(P_{t+n}) \right), \quad (2) $$
+
+We notice that the second term vanishes under any positive discount rate unless $\mathbb{E}(P_{t+n})$ itself grows at that rate. In the standard rational bubble model [1], *P* (more precisely its equivalent, the component that does not translate into future cash flow) needs to grow at rate $r_d$ forever. Cases of *P* growing faster than $r_d$ are never considered, as the price becomes explosive (intuitively, given that we are dealing with infinities, it would exceed the value of the economy) [2].
+
+As we increase $n$, additional cash flows enter through $C_{t+n}$; in principle, as $n \to \infty$, all value must come from cash flows, outside of bubbles.
+
+## EARNING-FREE ASSETS WITH ABSORBING BARRIER
+
+Now, bitcoin is all in the second term, with a hitch: there is an absorbing barrier — should there be an interruption of the ledger updating process, some loss of interest in it, or a technological replacement, its value is gone forever. As we insist, bitcoin requires distributed attention.
+
+We define the stopping time as $\tau \triangleq \inf\{n > 0; P_{t+n} = 0\}$, with $P_{>\tau} = 0$.
+
+**Comment 1: Failure rate**
+
+Critically, the probability of hitting the barrier does not need to come from price dynamics; it can come from any failure mechanism. The only assumption here is a failure rate $> 0$.
+
+So we impose a layer on top of the dynamics.
+
+$$ \begin{aligned} \mathbb{E}(P_{t+n}) ={}& \mathbb{E}(P_{t+n} \mid t+n<\tau)\, \mathbb{P}(t+n<\tau) \\ & + \underbrace{\mathbb{E}(P_{t+n} \mid t+n\ge\tau)}_{=0}\, \mathbb{P}(t+n\ge\tau) \end{aligned} \quad (3) $$
+
+Let $\pi$ be the probability of being absorbed over a single period. Rewriting Eq. 1 with no cash flow, i.e. $C_{t+i} = 0 \ \forall i$, and eliminating the cases for which the expectation is infinite:
+
+$$ P_t = \frac{1}{1+r_d} \left( (1-\pi) \underbrace{\mathbb{E}(P_{t+1}\mid I_t,\, t+1<\tau)}_{\frac{1}{1+r_d}\left((1-\pi)\,\mathbb{E}(P_{t+2}\mid I_t,\, t+2<\tau)\right)} + \underbrace{\mathbb{E}(P_{t+1}\mid t+1\ge\tau)}_{=0}\, \mathbb{P}(t+1\ge\tau) \right). \quad (4) $$
+
+We therefore have
+
+$$ P_t = \lim_{n \to \infty} \left( \frac{1 - \pi}{1 + r_d} \right)^n \mathbb{E}(P_{t+n} | (t+n) < \tau) = 0 \quad (5) $$
+
+For the price to be positive now, $P_t$ must grow forever, at a gigantic exponential rate $e^{n(r_d+\pi)}$, without remission, and with total certainty.
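A quick numeric sketch of Eq. 5 (with illustrative parameter values of ours, not from the text): for any single-period failure probability $\pi > 0$, the survival-weighted discount factor decays geometrically, so any bounded $\mathbb{E}(P_{t+n})$ forces $P_t = 0$.

```python
# r_d: discount rate; pi: per-period absorption probability (illustrative).
r_d, pi = 0.02, 0.01
factor = (1 - pi) / (1 + r_d)   # the base of the n-th power in Eq. 5

for n in (10, 100, 1000):
    print(n, factor ** n)

# To offset the decay, E(P_{t+n}) must grow like ((1 + r_d)/(1 - pi))^n,
# i.e. roughly e^{n (r_d + pi)} -- forever, and with certainty.
```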
+---PAGE_BREAK---
+
+**Comment 2: The problem of $P_{\infty}$**
+
+The argument that $P$ can grow faster than $e^{n(r_d+\pi)}$ for a while and accumulate valuation is insufficient: once it stops growing, by backward induction, future absorption makes $P_t$ valued at 0. Remember that we are dealing with infinities.
+
+Furthermore, a variable mortality rate makes the needed growth vastly in excess of both rates $r_d$ and $\pi$. Let $\pi$ be stochastic with realizations $\pi(1+a)$ and $\pi(1-a)$ — two Diracs at the mean deviation of $\pi$. Then the required growth rate must be $e^{n(r_d+\pi+\sigma)}$, where $\sigma = \frac{\log(\cosh(\pi a n))}{n}$, an additional convexity term with $\sigma \approx a\pi$ for large $n$.
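The convexity term can be checked numerically (with illustrative values of $a$ and $\pi$ chosen by us): averaging the two survival factors $e^{-\pi(1+a)n}$ and $e^{-\pi(1-a)n}$ gives $e^{-\pi n}\cosh(\pi a n)$, so $\sigma = \log(\cosh(\pi a n))/n$, which approaches $a\pi$ as $n$ grows.

```python
from math import log, cosh

pi_, a = 0.01, 0.5   # illustrative failure rate and mean deviation
for n in (10, 100, 10000):
    sigma = log(cosh(pi_ * a * n)) / n
    print(n, sigma, "vs a*pi =", a * pi_)
```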
+
+## REFERENCES
+
+[1] O. J. Blanchard and M. W. Watson, “Bubbles, rational expectations and financial markets,” *NBER working paper*, no. w0945, 1982.
+
+[2] M. K. Brunnermeier, “Bubbles,” in *Banking Crises*. Springer, 2016, pp. 28–36.
\ No newline at end of file
diff --git a/samples/texts_merged/7755458.md b/samples/texts_merged/7755458.md
new file mode 100644
index 0000000000000000000000000000000000000000..5be70d24420b80895e571d44de9ec3bd627a65ba
--- /dev/null
+++ b/samples/texts_merged/7755458.md
@@ -0,0 +1,445 @@
+
+---PAGE_BREAK---
+
# Isogenies of Degree $p$ of Elliptic Curves over Local Fields and Kummer Theory
+
+Mayumi KAWACHI
+
+Tokyo Metropolitan University
+
+(Communicated by K. Nakamula)
+
+**Abstract.** Let $p$ be a prime number. In order to calculate the Selmer group of a $p$-isogeny $\nu : E \to E'$ of elliptic curves, we determine the image of a local Kummer map $E'(K)/\nu E(K) \to H^1(K, \ker \nu)$ over a finite extension $K$ of $\mathbb{Q}_p$. We describe the image using a filtration on a unit group of a local field and the valuation of a coefficient of a leading term in a formal power series of an isogeny.
+
+# 1. Introduction.
+
+Let $\nu: E \to E'$ be an isogeny of elliptic curves over a number field $K$. We are interested in its Selmer group $\text{Sel}(\nu)$ which is a subgroup of $H^1(K, \ker \nu)$ generated by the elements whose local images in $H^1(K_v, \ker \nu)$ are in $\text{Im} \, \delta_v$ for all primes $v$. Here $\delta_v$ is a connecting homomorphism of an exact sequence over $K_v$
+
+$$1 \to \ker \nu \to E \xrightarrow{\ \nu\ } E' \to 1.$$
+
+So $\delta_v$ fits in an exact sequence
+
+$$1 \to E'(K_v)/\nu E(K_v) \xrightarrow{\delta_v} H^1(K_v, \ker \nu) \to H^1(K_v, E)$$
+
+for each $v$. Let $p$ be a prime number. We assume $\nu$ is a $p$-isogeny, namely that $\ker \nu$ is a group of order $p$. In order to study such a Selmer group $\text{Sel}(\nu)$, one of the difficult problems is to determine $\text{Im} \, \delta_v$ for primes $v$ over $p$. If $E$ has good reduction at $v$ and $v$ does not divide $p$, then $\text{Im} \, \delta_v = H_{ur}^1(K_v, \ker \nu)$, where $H_{ur}^1(K_v, \ker \nu) = \ker(H^1(K_v, \ker \nu) \to H^1(K_v^{ur}, \ker \nu))$. But if $v$ divides $p$, this equality does not hold. This paper is devoted to the study of $\text{Im} \, \delta_v$ for $v$ over $p$. In [1], Berkovič treated the case when $E$ has complex multiplication and $\nu \in \text{End}(E)$, and expressed $\text{Im} \, \delta_v$ as a subgroup of $K_v^\times / K_v^{\times p}$, under the assumptions $K_v \supset \mu_p$ and $E(K_v) \supset \ker \nu$. In this paper we treat the case when $\nu$ is a general $p$-isogeny.
+
+We also assume that $K_v \supset \mu_p$ and $E(K_v) \supset \ker \nu$. Let $\mathcal{O}_v$ be the ring of integers of $K_v$, $\mathfrak{M}_v$ the maximal ideal of $\mathcal{O}_v$ and $U$ the unit group of $\mathcal{O}_v$. Let $U^0 = U$ and $U^i = 1 + \mathfrak{M}_v^i$ for $i \ge 1$. This gives a filtration on the unit group of $K_v$, $K_v^\times \supset U^0 \supset U^1 \supset U^2 \supset \dots$. It also induces a filtration $K_v^\times / K_v^{\times p} \supset C^0 \supset C^1 \supset \dots \supset C^{pe_0+1} = \{1\}$, where $C^i = U^i / (K_v^{\times p} \cap U^i)$ for $i \ge 0$ and $e_0$ is the ramification index of $K_v$ over $\mathbb{Q}_p(\zeta_p)$. On the other hand let $E$
+
+Received March 22, 2000; revised March 5, 2002
+---PAGE_BREAK---
+
+be a minimal Weierstrass model over $\mathcal{O}_v$, $E_0$ the set of points with nonsingular reduction, and $E_i = \{(x, y) \in E(K_v) \mid v(x) \le -2i,\ v(y) \le -3i\}$ for $i \ge 1$. This gives filtrations $E(K_v) \supset E_0 \supset E_1 \supset \cdots$ and $E'(K_v) \supset E'_0 \supset E'_1 \supset \cdots$. The filtration on $E'(K_v)$ induces the filtration $E'(K_v)/\nu E(K_v) \supset D^0 \supset D^1 \supset \cdots$, where $D^i = E'_i(K_v)/(\nu E(K_v) \cap E'_i(K_v))$ for $i \ge 0$. Let $t$ be the index such that the generator of $\ker \nu$ is contained in $E_t(K_v) \setminus E_{t+1}(K_v)$. We regard $\delta_v$ as a homomorphism
+
+$$ \delta_v : E'(K_v)/\nu E(K_v) \to K_v^\times / K_v^{\times p} $$
+
+by identifying
+
+$$ H^1(K_v, \ker \nu) \simeq H^1(K_v, \mu_p) \simeq K_v^\times / K_v^{\times p}. $$
+
+Then $\delta_v$ maps the filtration on $E'(K_v)/\nu E(K_v)$ to that on $K_v^\times / K_v^{\times p}$. By investigating this map, we will show the following theorem.
+
+**THEOREM. 1)** If E has ordinary good reduction over $K_v$, then
+
+$$ \mathrm{Im} \, \delta_v = \begin{cases} C^1 & \text{if } \pi(\ker \nu) = \{0\} \\ C^{e_0 p} & \text{if } \pi(\ker \nu) \neq \{0\} \end{cases} $$
+
+where $\pi$ is the reduction map.
+
+2) If E has supersingular good reduction over $K_v$, then
+
+$$ \mathrm{Im} \, \delta_v = C^{1+(e_0-t)p}. $$
+
+3) If E has multiplicative reduction over $K_v$ and $p \neq 2$, then E has split multiplicative reduction and
+
+$$ \mathrm{Im} \, \delta_v = \begin{cases} K_v^\times / K_v^{\times p} & \text{if } \ker \nu = \langle \zeta_p \rangle \\ 1 & \text{if } \ker \nu = \langle \zeta_p^i \sqrt[p]{q} \rangle \quad \text{for some } i = 0, \dots, p-1. \end{cases} $$
+
+We remark here that if E has bad reduction, $\mathrm{Im} \, \delta_v$ is not necessarily contained in $C^1$, as in case 3). In case 2), $\mathrm{Im} \, \delta_v$ can be written using the parameter $t$. In §5, we give some examples and calculate the values of $t$ for them.
+
+**ACKNOWLEDGMENT.** The author wishes to thank Professor M. Kurihara for many valuable comments and suggestions.
+
+## 2. Preliminaries from formal groups.
+
+**2.1. The map $\delta$ of formal groups.** Let $K$ be a finite extension of $Q_p$, $v$ be a normalized valuation on $K$, $O_K$ the ring of integers of $K$, $\mathfrak{M}_K$ the maximal ideal in $O_K$ and $k = O_K/\mathfrak{M}_K$ the residue field. We put $e = v(p)$. Let $\zeta_p$ be a primitive $p$-th root of unity. Let $\mathfrak{F}_K, \mathfrak{F}'_K$ be formal groups over $O_K$. Assume that there is an isogeny $\nu : \mathfrak{F}_K \to \mathfrak{F}'_K$ over $K$. We regard $\nu$ as a power series $\nu(z) = a_1z + a_2z^2 + \cdots \in O_K[[z]]$.
+
+LEMMA 2.1.1 (cf. [1], Lemma 1.1.1). Let $\varphi(z)$ be an isogeny of formal groups defined over a commutative ring of characteristic $p$. Then there exists an integer $h \ge 0$ such that $\varphi(z)$ is a power series in $z^{p^h}$.
+---PAGE_BREAK---
+
+PROOF. See [3], Chap. 1, §3, Theorem 2.
+
+By the above lemma, we define the height of an isogeny over $\mathcal{O}_K$ as follows.
+
+1) If there is a positive integer $h$ such that $\nu(z) \equiv \psi(z^{p^h}) \mod \mathfrak{M}_K$, where $\psi(z) = b_1z + b_2z^2 + \cdots \in \mathcal{O}_K[[z]]$, $b_1 \notin \mathfrak{M}_K$ and $b_i \in \mathcal{O}_K$, then the height of $\nu$ is defined to be $h$. We denote $h$ by $\text{ht}(\nu)$.
+
+2) If $\nu(z) \equiv 0 \mod \mathfrak{M}_K$, then the height of $\nu$ is defined to be infinity.
+
+We also define a height of a formal group $\mathfrak{F}_K$ to be the height of $[p]$, the multiplication by $p$ on $\mathfrak{F}_K$. We assume that $\text{ht}(\nu) = 1$ and that the points of $\text{ker } \nu$ are defined over $K$. For an algebraic extension $L$, we define $\mathfrak{F}_K(L) = \mathfrak{F}_K(\mathfrak{M}_L)$. For a point $P$ of $\mathfrak{F}_K(L)$, we denote by $z(P)$ the corresponding element of $\mathfrak{M}_L$. We will denote $\mathfrak{F}_K(K)$ simply by $\mathfrak{F}_K$. We define a decreasing filtration on $\mathfrak{F}_K$ by $\mathfrak{F}_K^i = \mathfrak{F}(\mathfrak{M}_K^i)$. So we have $\mathfrak{F}_K = \mathfrak{F}_K^1 \supset \mathfrak{F}_K^2 \supset \cdots$. Put $\mathcal{D}_K = \mathfrak{F}'_K/\nu\mathfrak{F}_K$. The filtration on $\mathfrak{F}_K$ induces a filtration on $\mathcal{D}_K$. Namely put $\mathcal{D}_K^i = \mathfrak{F}_K^i/\nu\mathfrak{F}_K \cap \mathfrak{F}_K^{i+1}$, then we have a filtration $\mathcal{D}_K = \mathcal{D}_K^1 \supset \mathcal{D}_K^2 \supset \cdots$.
+
+LEMMA 2.1.2 (cf. [1], Lemma 2.1.1). *For $i$ such that $p \nmid i$, we have $a_1 | a_i$.*
+
+PROOF. If $a_1 \notin \mathfrak{M}_K$, it is obvious. In the case $a_1 \in \mathfrak{M}_K$, by Corollary 1 in p. 112 of [3], there exists a dual isogeny $\tilde{\nu}$ of $\nu$, that is $\tilde{\nu} \circ \nu = [p]$. So we have $a_1 | p$. Put $R = \mathcal{O}_K/(a_1)$ and consider an isogeny $\bar{\nu} = \nu \operatorname{mod}(a_1) : \mathfrak{F}_K/(a_1) \to \mathfrak{F}'_K/(a_1)$. Then $R$ is a ring of characteristic $p$, and we have an isogeny $\bar{\nu}(z) = \nu(z) \operatorname{mod}(a_1) = \bar{a}_1 z + \cdots + \bar{a}_p z^p + \cdots$ over $R$. Since $\bar{a}_1 = 0$, $\text{ht}(\bar{\nu}) \neq 0$. By Lemma 2.1.1, $\bar{\nu}(z)$ is a power series in $z^p$. Hence for $i$ such that $p \nmid i$, $\bar{a}_i = 0$, that is $a_1 | a_i$.
+
+We define
+
+$$t = \frac{v(a_1)}{p-1}.$$
+
+The following lemma shows that $t$ is an integer.
+
+LEMMA 2.1.3 (cf. [1], Lemma 1.1.2). *For any non-zero point $P \in \ker \nu$, the valuation of $z(P)$ does not depend on the choice of $P$. In fact $P$ is in $\mathfrak{F}^t_K \setminus \mathfrak{F}^{t+1}_K$, where $t$ is the number defined above.*
+
+PROOF. Let $z = z(P)$; then $\nu(z) = a_1z + a_2z^2 + \cdots + a_pz^p + \cdots = 0$. By Lemma 2.1.2, we have $a_1 \mid a_i$ for $p \nmid i$. So $v(a_1z) = v(a_pz^p)$. Hence we have $v(z) = \frac{v(a_1)}{p-1} = t$, since $v(a_p) = 0$.
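A concrete check, which we add for illustration, is the multiplicative formal group $\mathbf{G}_m$ with $\nu = [p]$, i.e. $\nu(z) = (1+z)^p - 1 = pz + \binom{p}{2}z^2 + \cdots + z^p$. Here $a_1 = p$, so $t = v(a_1)/(p-1) = e/(p-1) = e_0$, and modulo $p$ the series reduces to $z^p$, confirming that $[p]$ has height 1. The binomial coefficients can be verified directly:

```python
from math import comb

p = 7
# Coefficients of z^1 .. z^p in [p](z) = (1+z)^p - 1 over Z.
coeffs = [comb(p, k) for k in range(1, p + 1)]
print(coeffs)

assert coeffs[0] == p                        # a_1 = p
assert all(c % p == 0 for c in coeffs[:-1])  # every term but z^p dies mod p
assert coeffs[-1] == 1                       # [p](z) = z^p (mod p): height 1
```

Consistently with Lemma 2.1.3, the nonzero points of $\ker[p]$ are $z = \zeta_p^i - 1$, whose valuation is exactly $e/(p-1) = t$.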
+
+LEMMA 2.1.4 (cf. [1], Lemma 1.1.2).
+
+1) If $1 \le i < pt$, then
+
+$$\mathcal{D}_{K}^{i}/\mathcal{D}_{K}^{i+1} \cong \begin{cases} k & \text{if } p \nmid i \\ 1 & \text{if } p \mid i. \end{cases}$$
+
+2) If $i \ge pt+1$, then $\mathcal{D}_K^i = 1$.
+
+3)
+
+$$\mathcal{D}_{K}^{pt}/\mathcal{D}_{K}^{pt+1} \cong \mathbf{Z}/p\mathbf{Z}.$$
+---PAGE_BREAK---
+
+PROOF. 1) If $1 \le j < t$, then $\nu(\mathfrak{F}^j) \subset \mathfrak{F}'^{pj}$. So $\tilde{\nu} : \mathfrak{F}^j/\mathfrak{F}^{j+1} \to \mathfrak{F}'^{pj}/\mathfrak{F}'^{pj+1}$ is induced by $\nu$. This is identified with $\tilde{\nu} : k \to k, \tilde{\nu}(x) = a_p x^p$, for $x \in k$. Since $k$ is perfect, $\tilde{\nu}$ is an isomorphism. If $i = pj$, then $\mathcal{D}_K^{pj}/\mathcal{D}_K^{pj+1} = 1$. If $p \nmid i$ then $\mathcal{D}_K^i/\mathcal{D}_K^{i+1} \simeq \mathfrak{F}_K^i/\mathfrak{F}_K^{i+1} \simeq k$.
+
+2) If $j \ge t+1$, $\nu(\mathfrak{F}_K^j) \subset \mathfrak{F}_K'^{j+(p-1)t}$. So $\tilde{\nu} : \mathfrak{F}_K^j/\mathfrak{F}_K^{j+1} \to \mathfrak{F}_K'^{j+(p-1)t}/\mathfrak{F}_K'^{j+(p-1)t+1}$ is induced by $\nu$. Put $a_1 = \pi_K^{t(p-1)}u$, where $\pi_K$ is a prime element of $K$ and $u \in \mathcal{O}_K^\times$. Then $\tilde{\nu} : k \to k$ can be regarded as $\tilde{\nu}(x) = ux$ for $x \in k$. So $\tilde{\nu}$ is an isomorphism. Hence $\nu : \mathfrak{F}_K^j \to \mathfrak{F}_K'^{j+(p-1)t}$ is an isomorphism. So if $i \ge pt+1$, then $\mathcal{D}_K^i = 1$.
+
+3) For $i = pt$, $\nu(\mathfrak{F}_K^i) \subset \mathfrak{F}_K'^{pt}$. So $\nu$ induces $\tilde{\nu} : \mathfrak{F}_K^i/\mathfrak{F}_K^{i+1} \to \mathfrak{F}_K'^{pt}/\mathfrak{F}_K'^{pt+1}$ and $\tilde{\nu}(x) = ux + a_p x^p$ for $x \in k$. This extends to $\tilde{\nu} : \bar{k} \to \bar{k}$. Because $H^1(k, \bar{k}) = 1$, we have $k/\tilde{\nu}(k) \simeq H^1(k, \ker \tilde{\nu})$. Since $\ker \nu \subset \mathfrak{F}_K^i \setminus \mathfrak{F}_K^{i+1}$, $\ker \tilde{\nu} \simeq \mathbb{Z}/p\mathbb{Z}$ as $\text{Gal}(\bar{k}/k)$-modules. Using the fact that $k$ is finite, we have $k/\tilde{\nu}(k) \simeq H^1(k, \ker \tilde{\nu}) \simeq H^1(k, \mathbb{Z}/p\mathbb{Z}) \simeq \mathbb{Z}/p\mathbb{Z}$.
+
+For $[P] \in \mathcal{D}_K$, let $Q \in \mathfrak{F}_K(\bar{K})$ be a point such that $P = \nu(Q)$. Let $K' = K(Q)$ be a field of definition of $Q$ over $K$. We prepare the next lemma for Theorem 2.1.6, which generalizes Theorem 2.1.1 of Berkovič [1] to arbitrary $p$-isogenies. Since $\text{ht}(\nu) = 1$, we can write
+
+$$\nu(z) - z(P) = (b_0 + b_1 z + \cdots + z^p)U(z),$$
+
+where $b_i \in \mathcal{O}_K$ and $U(z) \in \mathcal{O}_K[[z]]^\times$, by the Weierstrass preparation theorem. So $z(Q)$ is a root of an equation of degree $p$. Since $\ker \nu \subset \mathfrak{F}_K(K)$, $K'/K$ is a Galois extension of degree $\le p$. Let $G = \text{Gal}(K'/K)$. For $\sigma \in G$, $\sigma(Q)$ can be written as $\sigma(Q) = Q \oplus T$, where $T \in \ker \nu$ and $\oplus$ is the formal group law of $\mathfrak{F}$. For a prime element $\pi$ of $K'$, define $i_G(\sigma) = v_{K'}(\sigma(\pi) - \pi)$. This does not depend on the choice of $\pi$. By calculating $i_G(\sigma)$, we give a simpler proof of Theorem 2.1.6 than that of [1]. The idea of this proof was suggested by Kurihara.
+
+LEMMA 2.1.5. Let $[P] \in \mathcal{D}_K^i \setminus \mathcal{D}_K^{i+1}$, then
+
+1) If $1 \le i < pt$ and $p \nmid i$, then $K'/K$ is a totally ramified extension of degree $p$ and
+$i_G(\sigma) = pt - i + 1$ for $\sigma \in G$.
+
+2) If $i=pt$, then $K'/K$ is an unramified extension of degree $p$.
+
+PROOF. 1) Let $v_{K'}(z(Q)) = j$; then $v_{K'}(z(\nu(Q))) = pj$. If $v_K = v_{K'}$ then $i = v_K(z(P)) = pj$, which contradicts $p \nmid i$. So $K'/K$ is a totally ramified extension of degree $p$. Let $y = z(Q)$ and let $\pi_K$ be a prime element of $K$. We can choose integers $a, b$ such that $ai + bp = 1$. Then $\pi = y^a\pi_K^b$ is a prime element of $K'$. We have $i_G(\sigma) = v_{K'}(\frac{\sigma(\pi)}{\pi}-1)+1 = v_{K'}(\frac{\sigma(y)^a\pi_K^b}{y^a\pi_K^b}-1)+1$. Let $0 \neq T \in \ker \nu$ and $\xi = z(T)$. Then $v_{K'}(y) = i$, $v_{K'}(\xi) = tp$ and $\sigma(y) = y \oplus \xi = y + \xi + \gamma$, where $v_{K'}(\gamma) > v_{K'}(y+\xi)$. Therefore $\sigma(\pi) = \sigma(y)^a\pi_K^b = (y+\xi+\gamma)^a\pi_K^b = (y^a+a y^{a-1}\xi + \gamma')\pi_K^b$, where $v_{K'}(\gamma') > v_{K'}(ay^{a-1}\xi) > v_{K'}(y^a)$. So $v_{K'}(\frac{\sigma(\pi)}{\pi}-1) = v_{K'}(\frac{y^a+a y^{a-1}\xi+\gamma'}{y^a}-1) = v_{K'}(\xi)-v_{K'}(y) = pt-i$. Hence $i_G(\sigma) = pt-i+1$.
+
+2) Since $a_1 = \pi_K^{(p-1)t}u$, where $u \in \mathcal{O}_K^\times$, $\nu(\pi_K^t x) = \pi_K^{pt}(ux + \cdots + a_p x^p + \cdots)$. Let $z(P) = \pi_K^{pt}\beta$, where $\beta \in \mathcal{O}_K^\times$. Because $P \notin \nu\mathfrak{F}_K$, by Hensel's lemma, the solution
+---PAGE_BREAK---
+
+of $\beta \equiv ux + a_p x^p \bmod \mathfrak{M}_K$ is not contained in $k$. So the solution is contained in a finite extension over $k$ of degree $p$. Since $u \not\equiv 0 \bmod \mathfrak{M}_K$, $ux + a_p x^p \bmod \mathfrak{M}_K$ is separable. So we have a solution in an unramified extension over $K$ of degree $p$.
+
+We now consider the special case where $\mathfrak{F}_K$ is isomorphic to $\mathbf{G}_m$, that is, $\mathfrak{F}_K = U^1 = 1 + \mathfrak{M}_K$. We take $\nu$ to be the $p$-th power map. We assume $\zeta_p \in K$ and let $e_0 = \frac{e}{p-1}$, $\mathfrak{F}_K^i = U^i = 1 + \mathfrak{M}_K^i$, $\mathfrak{D}_K^i = C_K^i = U^i/(K^{\times p} \cap U^i)$ and $t = e_0$. We now return to an arbitrary formal group, denoted $\mathfrak{F}_K$ again, and consider its correspondence with $\mathbf{G}_m$; we use the same notation $\nu$, $t$ and $\mathfrak{D}_K$ for $\mathfrak{F}_K$. Let $[P] \in \mathfrak{D}_K$ and $\nu(Q) = P$. If $\mathfrak{F}_K(K) \supset \ker \nu$ and $\zeta_p \in K$, the field of definition $K'$ of $Q$ is a Kummer extension of $K$, that is $K' = K(\sqrt[p]{\alpha})$, where $[\alpha] \in K^{\times}/K^{\times p}$. We can define the map $\delta : \mathfrak{D}_K \to K^{\times}/K^{\times p}$ by $\delta([P]) = [\alpha]$. Then we have the next theorem.
+
+**THEOREM 2.1.6.** *Assume that $\mathfrak{F}_K \supset \ker \nu$ and $\zeta_p \in K$. If $[P] \in \mathfrak{D}_K^i \setminus \mathfrak{D}_K^{i+1}$, where $1 \le i < pt$ with $p \nmid i$, or $i = pt$, then $\delta([P]) \in C_K^{i+(e_0-t)p} \setminus C_K^{i+(e_0-t)p+1}$.*
+
+PROOF. Let $K'$ be the field of definition of $Q$, where $\nu(Q) = P$. If $1 \le i < pt$ and $p \nmid i$, then by Lemma 2.1.5, 1), $K'/K$ is a totally ramified extension and $i_G(\sigma) = pt - i + 1$. On the other hand, $K'$ can be regarded as a Kummer extension $K(\sqrt[p]{\alpha})$, where $\alpha \in C_K^j \setminus C_K^{j+1}$ and $[\alpha] = \delta([P])$. Since $K'/K$ is totally ramified, applying Lemma 2.1.5, 1) to the case $\mathfrak{F}_K = \mathbf{G}_m$, that is, $\nu$ the $p$-th power map and $t = e_0$, we get $i_G(\sigma) = pe_0 - j + 1$. Comparing the two representations of $i_G(\sigma)$, we have $j = i + (e_0 - t)p$. If $i = pt$ then by Lemma 2.1.5, 2), $K'$ is an unramified extension. So $\delta([P]) = [\alpha]$, where $[\alpha] \in C_K^{e_0 p} \setminus C_K^{e_0 p+1}$.
+
+COROLLARY 2.1.7. $\delta(\mathfrak{D}_K^1) = C_K^{1+(e_0-t)p}$.
+
+PROOF. By Theorem 2.1.6, $\delta$ induces an injection and by Lemma 2.1.4 this is an isomorphism of finite groups,
+
+$$
+\begin{aligned}
+\mathfrak{D}_K^i / \mathfrak{D}_K^{i+1} &\simeq C_K^{i+(e_0-t)p} / C_K^{i+(e_0-t)p+1} \\
+&\cong \begin{cases} k & \text{if } p \nmid i, 1 \le i < pt \\ 1 & \text{if } p \mid i, 1 \le i < pt, \end{cases} \\
+\mathfrak{D}_K^i &= C_K^{i+(e_0-t)p} = 1 \quad \text{for } i \ge tp+1
+\end{aligned}
+$$
+
+and
+
+$$
+\mathfrak{D}_K^{pt} / \mathfrak{D}_K^{pt+1} \cong C_K^{e_0 p} / C_K^{e_0 p+1} \cong \mathbb{Z} / p\mathbb{Z}.
+$$
+
+Hence we have $\delta(\mathfrak{D}_K^1) = C_K^{1+(e_0-t)p}$.
+
+## 3. Elliptic curves over $K$.
+
+**3.1. The map $\delta$ of elliptic curves.** Let $E$ and $E'$ be elliptic curves defined over $K$, $\nu: E \rightarrow E'$ an isogeny of degree $p$ defined over $K$, and $\tilde{\nu}: E' \rightarrow E$ the dual isogeny of $\nu$. We assume $E(K) \supset \ker \nu$ and $E'(K) \supset \ker \tilde{\nu}$. Then we easily see $\zeta_p \in K$ by using the Weil pairing.
+---PAGE_BREAK---
+
+An exact sequence
+
+$$1 \to \ker \nu \to E \to E' \to 1$$
+
+induces an exact sequence
+
+$$1 \to E'(K)/\nu E(K) \xrightarrow{\delta_1} H^1(K, \ker \nu) \to H^1(K, E)$$
+
+where $\delta_1$ is a connecting homomorphism. We fix an isomorphism $\ker \nu \simeq \mu_p$. Then we have an isomorphism
+
+$$\kappa : H^1(K, \ker \nu) \xrightarrow{\sim} H^1(K, \mu_p).$$
+
+By Kummer theory, there is an isomorphism
+
+$$\delta_2 : K^{\times}/K^{\times p} \xrightarrow{\sim} H^1(K, \mu_p).$$
+
+Let $\delta = \delta_2^{-1} \circ \kappa \circ \delta_1$.
+
+Put $K' = K(\nu^{-1}(E(K)))$. Then $K'/K$ is an abelian extension of exponent $p$, hence a Kummer extension. So there is a subgroup $B$ of $K^{\times}/K^{\times p}$ such that $K' = K(\sqrt[p]{B})$. Put $D_K = E'(K)/\nu E(K)$.
+
+LEMMA 3.1.1. *The image $\delta(D_K)$ does not depend on the choice of the isomorphism $\kappa$. In fact we have $\delta(D_K) = B$.*
+
+PROOF. Let $[P] \in D_K$, $[P] \neq 0$ and $\nu(Q) = P$. Put $L = K(Q)$ and $\delta([P]) = [\alpha]$.
+By a commutative diagram
+
+$$
+\begin{tikzcd}[column sep=2.2em, row sep=2.8em]
+D_K \arrow[r, "\delta_1"] \arrow[d] & H^1(K, \ker \nu) \arrow[r, "\kappa"] \arrow[d, "Res"] & H^1(K, \mu_p) \arrow[d, "Res"] & K^\times/K^{\times p} \arrow[l, "\delta_2"'] \arrow[d] \\
+D_L \arrow[r] & H^1(L, \ker \nu) \arrow[r] & H^1(L, \mu_p) & L^\times/L^{\times p} \arrow[l]
+\end{tikzcd}
+$$
+
+we have $\alpha \in L^{\times p}$. So $L = K(\sqrt[p]{\alpha})$ since $[L:K] = p$, and this implies $\alpha \in B$. Conversely, let $\alpha \in B$ and $L = K(\sqrt[p]{\alpha})$. Then there exists $Q \in \nu^{-1}(E(K))$ such that $L = K(Q)$. By the above diagram, $\delta([\nu(Q)]) = [\alpha]$.
+
+**3.2. The case of good reduction.** Let $E$ be a minimal Weierstrass model over $\mathcal{O}_K$ and $\pi : E(K) \to \tilde{E}(k)$ the reduction map. Define $E_0(K) = \pi^{-1}(\tilde{E}_{ns}(k))$, $E_1(K) = \ker \pi$ and, for $i \ge 1$, $E_i(K) = \{(x, y) \in E(K) \mid v(x) \le -2i,\ v(y) \le -3i\}$. Let $\nu : E \to E'$ be an isogeny of degree $p$ over $K$ such that $E'$ is a minimal Weierstrass model over $\mathcal{O}_K$. Assume that $\ker \nu \subset E(K)$ and $\ker \tilde{\nu} \subset E'(K)$. We define $E'_i(K)$ in the same way as $E_i(K)$ for $i \ge 1$. Let $D_K = E'(K)/\nu E(K)$ and $D_K^i = E'_i(K)/(\nu E(K) \cap E'_i(K))$ for $i \ge 0$; then we have a filtration $D_K \supset D_K^0 \supset D_K^1 \supset \cdots$.
+
+We make the change of variable $z = -x/y$. By mapping $(x, y)$ to $z$, $E_1(K)$ (resp. $E'_1(K)$) is isomorphic to the formal group $\hat{E}(\mathfrak{M}_K)$ (resp. $\hat{E}'(\mathfrak{M}_K)$). Let $\Phi$ be a finite subgroup of $\hat{E}(\mathfrak{M}_K)$ such that $\ker \nu \cap E_1(K) \simeq \Phi$. Then there exist a formal group $\mathfrak{G}$ and an isogeny $\hat{\nu} : \hat{E} \to \mathfrak{G}$, both defined over $\mathcal{O}_K$, such that $\ker \hat{\nu} = \Phi$, by Theorem 4 on p. 112 of [3]. Since $E'$ is a minimal model over $\mathcal{O}_K$, $\mathfrak{G} = \hat{E}'$. Since $\Phi \simeq \ker \nu$ or $\Phi = \{0\}$, $\text{ht}(\hat{\nu}) = 1$ or $0$.
+---PAGE_BREAK---
+
+If $\mathrm{ht}(\hat{\nu}) = 1$, we denote $\hat{\nu}$ by $\nu$. Then we define $\mathfrak{D}_K = \hat{E}'(\mathfrak{M}_K)/\nu\hat{E}(\mathfrak{M}_K)$. By mapping $(x, y)$ to $z$, $E_i(K) \simeq \hat{E}(\mathfrak{M}_K^i)$ and $E'_i(K) \simeq \hat{E}'(\mathfrak{M}_K^i)$ for $i \ge 1$. This map induces $D_K^i \simeq \mathfrak{D}_K^i$, where $\mathfrak{D}_K^i = \hat{E}'(\mathfrak{M}_K^i)/(\nu\hat{E}(\mathfrak{M}_K) \cap \hat{E}'(\mathfrak{M}_K^i))$. By Lemma 3.1.1, $\delta(D_K^1) = \delta(\mathfrak{D}_K)$.
+
+LEMMA 3.2.1. If $E$ has ordinary (resp. supersingular) good reduction, then $E'$ has ordinary (resp. supersingular) good reduction.
+
+PROOF. By Cor. 7.2 of Chap. 7 in [9], isogenous elliptic curves either both have good reduction or neither has. Let $\tilde{\nu}$ be the dual isogeny of $\hat{\nu}$. Since $\tilde{\nu} \circ \hat{\nu} = [p] : \hat{E} \to \hat{E}$ and $\hat{\nu} \circ \tilde{\nu} = [p] : \hat{E}' \to \hat{E}'$, $\hat{E}$ and $\hat{E}'$ have the same height.
+
+In this case $E = E_0$ and $E' = E'_0$. We define $\tilde{\nu} : \tilde{E} \to \tilde{E}'$ to be the isogeny such that $\ker \tilde{\nu} = \pi(\ker \nu)$. By Remark 4.13.2 of Chap. 3 of [9], $\tilde{\nu}$ is defined over $k$. Then we have a commutative diagram with exact rows and columns,
+
+$$
+\begin{tikzcd}[column sep=1.8em, row sep=1.8em]
+ & 1 \arrow[d] & 1 \arrow[d] & 1 \arrow[d] & \\
+ & \ker \nu \cap E_1(K) \arrow[d] \arrow[r] & \ker \nu \arrow[d] \arrow[r, "\pi"] & \ker \tilde{\nu} \arrow[d] & \\
+1 \arrow[r] & E_1(K) \arrow[d, "\nu"] \arrow[r] & E_0(K) \arrow[d, "\nu"] \arrow[r, "\pi"] & \tilde{E}(k) \arrow[d, "\tilde{\nu}"] \arrow[r] & 1 \\
+1 \arrow[r] & E'_1(K) \arrow[d] \arrow[r] & E'_0(K) \arrow[d] \arrow[r, "\pi"] & \tilde{E}'(k) \arrow[d] \arrow[r] & 1 \\
+1 \arrow[r] & E'_1(K)/\nu E_1(K) \arrow[d] \arrow[r] & D_K^0 \arrow[d] \arrow[r, "\pi"] & \tilde{E}'(k)/\tilde{\nu}\tilde{E}(k) \arrow[d] \arrow[r] & 1 \\
+ & 1 & 1 & 1 &
+\end{tikzcd}
+$$
+
+LEMMA 3.2.3. If $\ker \tilde{\nu} = \{0\}$, then $D_K^0/D_K^1 = 1$.
+
+PROOF. Since $\tilde{\nu}$ is injective and $\# \tilde{E}(k) = \# \tilde{E}'(k)$ (see e.g. [2], Chap. 25), $\tilde{\nu}$ is an isomorphism. So $D_K^0/(E'_1(K)/\nu E_1(K)) = 1$ by (3.1). Hence $D_K^0/D_K^1 \simeq (\nu E_0(K) \cap E'_1(K))/\nu E_1(K)$. Let $x \in E_0(K)$ with $\nu(x) \in \nu E_0(K) \cap E'_1(K)$. Then $\tilde{\nu}(\pi(x)) = \pi(\nu(x)) = 1$, and since $\tilde{\nu}$ is injective, $\pi(x) = 1$. Hence $x \in E_1(K)$. So $(\nu E_0(K) \cap E'_1(K))/\nu E_1(K) = 1$.
+
+LEMMA 3.2.4. If $\ker \tilde{\nu} \neq \{0\}$, then $E'_1(K)/\nu E_1(K) = 1$ and $E'_1(K_{ur})/\nu E_1(K_{ur}) = 1$.
+---PAGE_BREAK---
+
+PROOF. If $\ker \tilde{\nu} \neq \{0\}$, then $\ker \tilde{\nu} \simeq \ker \nu$. So $\nu : E_1(K) \to E'_1(K)$ is injective. Then the isogeny $\hat{\nu} : \hat{E}(\mathfrak{M}_K) \to \hat{E}'(\mathfrak{M}_K)$ of formal groups has height 0. By an argument similar to Lemma 2.1.4, 2), $\hat{\nu}$ is an isomorphism. Hence $E'_1(K)/\nu E_1(K) = 1$. Let $\pi' : E_0(K_{ur}) \to \tilde{E}(\bar{k})$ be the reduction map. The minimal model of $E/\mathcal{O}_{K_{ur}}$ is equal to that of $E/\mathcal{O}_K$, so $\pi'(\ker \nu) = \pi(\ker \nu) \neq \{0\}$. Therefore the same argument applies to $E_1(K_{ur})$.
+
+LEMMA 3.2.5. If $\ker \tilde{\nu} \neq \{0\}$, then $\delta(D_K^0) = C_K^{e_0 p}$.
+
+PROOF. Let $\text{Res}_1$ be the restriction map from $H^1(K, \ker \nu)$ to $H^1(K_{ur}, \ker \nu)$. We first prove that $\delta_1(D_K^0) = \ker(\text{Res}_1)$, where $\delta_1$ was defined in §3.1. Since $E'_1(K_{ur})/\nu E_1(K_{ur}) = 1$ by Lemma 3.2.4 and $\tilde{E}'(\bar{k})/\tilde{\nu}\tilde{E}(\bar{k}) = 1$, we have $D_{K_{ur}}^0 = 1$ by the exact sequence
+
+$$E'_1(K_{ur})/\nu E_1(K_{ur}) \to D_{K_{ur}}^0 \xrightarrow{\pi} \tilde{E}'(\bar{k})/\tilde{\nu}\tilde{E}(\bar{k}) \to 1.$$
+
+Since the diagram below is commutative
+
+$$
+\begin{tikzcd}[column sep=3em, row sep=2.5em]
+D_K^0 \arrow[r, "\delta_1"] \arrow[d] & H^1(K, \ker \nu) \arrow[d, "\text{Res}_1"] \\
+D_{K_{ur}}^0 \arrow[r, "\delta_1"] & H^1(K_{ur}, \ker \nu)
+\end{tikzcd}
+$$
+
+$\delta_1(D_K^0) \subset \ker(\text{Res}_1)$. In order to prove equality, we consider the exact sequence
+
+$$
+1 \to \tilde{E}'(k)/\tilde{v}\tilde{E}(k) \xrightarrow{\tilde{\delta}_1} H^1(k, \operatorname{ker} \tilde{v}) \to H^1(k, \tilde{E}).
+$$
+
+Since $H^1(k, \tilde{E}) = 1$ (see e.g. [2], Chap. 25), $\tilde{\delta}_1$ is an isomorphism. By Lemma 3.2.4, $E'_1(K)/\nu E_1(K) = 1$. So $D_K^0 \simeq \tilde{E}'(k)/\tilde{\nu}\tilde{E}(k) \simeq H^1(k, \ker \tilde{\nu}) \simeq \ker(\text{Res}_1)$. Here, the last isomorphism is a consequence of the exact sequence
+
+$$
+1 \to H^1(k, \ker \tilde{\nu}) \to H^1(K, \ker \nu) \to H^1(K_{ur}, \ker \nu).
+$$
+
+Hence $\delta_1(D_K^0) = \ker(\text{Res}_1)$.
+
+Next, let $\delta_2$ be as defined in §3.1 and let $\text{Res}_2$ be the restriction map from $H^1(K, \mu_p)$ to $H^1(K_{ur}, \mu_p)$. By Lemma 2.1.5, 2), $\delta_2(C_K^{e_0 p}) \subset \ker(\text{Res}_2)$. Since $C_K^{e_0 p} \simeq \mathbb{Z}/p\mathbb{Z}$ by Lemma 2.1.4, 3) and $|\ker(\text{Res}_2)| = |H^1(K_{ur}/K, \mu_p)| = p$, we have $\delta_2(C_K^{e_0 p}) = \ker(\text{Res}_2)$.
+
+We fix an isomorphism $\ker \nu \simeq \mu_p$. Then we have an induced isomorphism $\kappa$ and a commutative diagram
+
+$$
+\begin{tikzcd}[column sep=3em, row sep=2.5em]
+H^1(K, \ker \nu) \arrow[r, "\kappa"] \arrow[d, "\text{Res}_1"] & H^1(K, \mu_p) \arrow[d, "\text{Res}_2"] \\
+H^1(K_{ur}, \ker \nu) \arrow[r, "\sim"] & H^1(K_{ur}, \mu_p)
+\end{tikzcd}
+$$
+
+Therefore $\kappa \circ \delta_1(D_K^0) = \kappa(\ker(\text{Res}_1)) = \ker(\text{Res}_2) = \delta_2(C_K^{e_0p})$. Since $\delta = \delta_2^{-1} \circ \kappa \circ \delta_1$ and $\text{Im}\,\delta$ does not depend on the choice of $\kappa$ by Lemma 3.1.1, we have $\delta(D_K^0) = C_K^{e_0p}$.
+---PAGE_BREAK---
+
+THEOREM 3.2.6. 1) If $E$ has ordinary good reduction over $K$, then
+
+$$ \delta(D_K) = \begin{cases} C_K^{e_0 p} & \text{if } \ker \tilde{\nu} \neq \{0\} \\ C_K^1 & \text{if } \ker \tilde{\nu} = \{0\}. \end{cases} $$
+
+2) If $E$ has supersingular good reduction over $K$ and the generator of $\ker \nu$ is contained in $E_t(K) \setminus E_{t+1}(K)$, then
+
+$$ \delta(D_K) = C_K^{1+(e_0-t)p}. $$
+
+PROOF. 1) If $E$ has ordinary good reduction, then there is an exact sequence
+
+$$ 1 \to X_p \to E[p] \xrightarrow{\pi} \tilde{E}[p] \to 1 $$
+
+where $\pi$ is reduction mod $\mathfrak{M}_{\bar{K}}$ and the kernel $X_p$ is a cyclic group of order $p$. If $\ker \tilde{\nu} = \{0\}$, then $D_K^0 = D_K^1$ by Lemma 3.2.3. Since $t = e_0$, $\delta(D_K) = C_K^1$ by Corollary 2.1.7. If $\ker \tilde{\nu} \neq \{0\}$, then we can apply Lemma 3.2.5 to this case.
+
+2) If $E$ has supersingular good reduction then $\tilde{E}[p] = \{0\}$, so $\ker \tilde{\nu} = \{0\}$. By Lemma 3.2.3, $D_K^0 = D_K^1$. So $\delta(D_K) = C_K^{1+(e_0-t)p}$ by Corollary 2.1.7.
+
+**4. Multiplicative reduction case.**
+
+**4.1. Multiplicative reduction case.**
+
+LEMMA 4.1.1. If $E$ has multiplicative (resp. additive) reduction over $K$, then $E'$ has multiplicative (resp. additive) reduction.
+
+PROOF. Let $l$ be a prime number distinct from $p$, and let $T_l(E)$ and $T_l(E')$ be the Tate modules. Then the map $T_l(E) \to T_l(E')$ induced by $\nu$ is an isomorphism, compatible with the action of $\text{Gal}(\bar{K}/K)$. So the representations $\rho : \text{Gal}(\bar{K}/K) \to \text{Aut}(T_l(E))$ and $\rho' : \text{Gal}(\bar{K}/K) \to \text{Aut}(T_l(E'))$ have the same images. By [4], $E$ has semistable reduction if and only if $\text{Im}\,\rho|_I$ is unipotent, where $I$ is the inertia subgroup of $\text{Gal}(\bar{K}/K)$. This is equivalent to the unipotence of $\text{Im}\,\rho'|_I$. Hence $E'$ has semistable reduction. By Lemma 3.2.1 the reduction type of $E$ is equal to that of $E'$.
+
+If $E$ has multiplicative reduction, then $v(j(E)) < 0$. So by [10], Chap. 5, Theorem 5.3, there exists a unique $q \in K^\times$ with $v(q) > 0$ such that $E$ is isomorphic over $\bar{K}$ to the Tate curve $E_q$. We denote this isomorphism by $\psi : E_q \to E$. By Lemma 4.1.1, $E'$ also has multiplicative reduction, so we can define an isomorphism $\psi' : E_{q'} \to E'$ over $\bar{K}$ for a unique $q' \in K^\times$ with $v(q') > 0$. Let $L$ be the unique unramified quadratic extension of $K$. Since $E_q$ (resp. $E_{q'}$) is defined over $K$ by [10], Chap. 5, Theorem 3.1 (a), $E_q$ (resp. $E_{q'}$) is a quadratic twist of $E$ (resp. $E'$); that is, $\psi$ (resp. $\psi'$) is defined over $L$. If $\psi$ (resp. $\psi'$) is defined over $K$, then $E$ (resp. $E'$) has split multiplicative reduction; otherwise it has non-split multiplicative reduction.
+
+Let $\phi : \bar{K}^\times / \langle q \rangle \to E_q$ (resp. $\phi' : \bar{K}^\times / \langle q' \rangle \to E_{q'}$) be the isomorphism given by a power series in $q$ (resp. $q'$) as in [10], Chap. 5, Theorem 3.1 (c). This isomorphism is compatible with the action of $\text{Gal}(\bar{K}/K)$.
+---PAGE_BREAK---
+
+For $\psi^{-1}(\ker \nu)$, there exists a Tate curve $E_q/\psi^{-1}(\ker \nu)$ and an isogeny $E_q \to E_q/\psi^{-1}(\ker \nu)$. Then $\psi$ induces an isomorphism $E_q/\psi^{-1}(\ker \nu) \to E/\ker \nu$. So $E_q/\psi^{-1}(\ker \nu)$ must be $E_{q'}$, since $E_{q'}$ is a unique Tate curve isomorphic to $E'$.
+
+Then there exists an isogeny $\bar{K}^{\times}/\langle q \rangle \to \bar{K}^{\times}/\langle q' \rangle$ whose kernel is $(\psi \circ \phi)^{-1}(\ker \nu)$. The kernel of the multiplication-by-$p$ map on $\bar{K}^{\times}/\langle q \rangle$ is $\langle \zeta_p, \sqrt[p]{q} \rangle$. So $(\psi \circ \phi)^{-1}(\ker \nu)$ is one of the one-dimensional subspaces of this $\mathbb{F}_p$-vector space; hence it is $\langle \zeta_p^i \sqrt[p]{q} \rangle$ for some $i = 0, \dots, p-1$, or $\langle \zeta_p \rangle$.
+
+LEMMA 4.1.2. Assume that $p \neq 2$, $\ker \nu \subset E(K)$ and $\zeta_p \in K$. Then both $E$ and $E'$ have split multiplicative reduction.
+
+PROOF. Assume that $E$ has non-split multiplicative reduction. Let $N_{L/K} : L^{\times}/q^{\mathbb{Z}} \to K^{\times}/q^{2\mathbb{Z}}$ be the norm map. Then by [10], Chap. 5, Corollary 5.4, for $u \in L^{\times}/q^{\mathbb{Z}}$, $\psi \circ \phi(u) \in E(K)$ is equivalent to $N_{L/K}(u) \in q^{\mathbb{Z}}/q^{2\mathbb{Z}}$. Since $(\psi \circ \phi)^{-1}(\ker \nu)$ is $\langle \zeta_p^i \sqrt[p]{q} \rangle$ for some $i = 0, \dots, p-1$, or $\langle \zeta_p \rangle$, and $\ker \nu \subset E(K)$, we must have $N_{L/K}(\zeta_p^i \sqrt[p]{q}) \in q^{\mathbb{Z}}/q^{2\mathbb{Z}}$ or $N_{L/K}(\zeta_p) = \zeta_p^2 \in q^{\mathbb{Z}}/q^{2\mathbb{Z}}$. Since $p \neq 2$, this is a contradiction. So $E$ has split multiplicative reduction.
+
+In this case, $\psi$ is an isomorphism over $K$. So the induced isomorphism $E_q/\psi^{-1}(\ker \nu) \to E/\ker \nu$ is defined over $K$; that is, $\psi'$ is an isomorphism over $K$. Hence $E'$ has split multiplicative reduction.
+
+By the above lemma, we can identify $E$ (resp. $E'$) with $E_q$ (resp. $E_{q'}$). Hence we have the next proposition.
+
+PROPOSITION 4.1.3. Assume that $p \neq 2$, $\ker \nu \subset E(K)$ and $\zeta_p \in K$. Then
+
+$$
+\begin{cases}
+\operatorname{Im} \delta = K^{\times}/K^{\times p} & \text{if } \ker \nu = \langle \zeta_p \rangle, \\
+\operatorname{Im} \delta = 1 & \text{if } \ker \nu = \langle \zeta_p^i \sqrt[p]{q} \rangle \quad \text{for some } i = 0, \dots, p-1.
+\end{cases}
+$$
+
+PROOF. If $\ker \nu = \langle \zeta_p \rangle$, then $q' = q^p$ and the isogeny is written as
+
+$$
+\begin{aligned}
+\nu : \bar{K}^{\times}/\langle q \rangle &\longrightarrow \bar{K}^{\times}/\langle q^p \rangle \\
+z \bmod \langle q \rangle &\longmapsto z^p \bmod \langle q^p \rangle.
+\end{aligned}
+$$
+
+For any $[z] \in K^{\times}/\langle q \rangle$, we have $K(\nu^{-1}([z])) = K(\sqrt[p]{z})$. Hence by Lemma 3.1.1, $\operatorname{Im} \delta = K^{\times}/K^{\times p}$.
+
+If $\ker \nu = \langle \zeta_p^i \sqrt[p]{q} \rangle$, then $q' = \zeta_p^i \sqrt[p]{q}$ and $\nu$ is written as
+
+$$
+\begin{aligned}
+\nu : \bar{K}^{\times}/\langle q \rangle &\longrightarrow \bar{K}^{\times}/\langle q' \rangle \\
+z \bmod \langle q \rangle &\longmapsto z \bmod \langle \zeta_p^i \sqrt[p]{q} \rangle.
+\end{aligned}
+$$
+
+For $[z] \in K^{\times}/\langle \zeta_p^i \sqrt[p]{q} \rangle$, $K(\nu^{-1}([z])) = K(\sqrt[p]{q})$. Our assumption $\ker \nu \subset E(K)$ implies $\langle \zeta_p^i \sqrt[p]{q} \rangle \subset K^{\times}/\langle q \rangle$; since $\zeta_p \in K$, this gives $\sqrt[p]{q} \in K$. Hence $K(\nu^{-1}([z])) = K$. So by Lemma 3.1.1, $\operatorname{Im} \delta = 1$.
+---PAGE_BREAK---
+
+**5. The calculation of Im $\delta$.**
+
+**5.1. $p$-isogenies over $\mathbb{Q}$.** In this section we consider elliptic curves $E$ and $E'$ over $\mathbb{Q}$ and a $p$-isogeny $\nu : E \to E'$ over $\mathbb{Q}$. Take $\mathcal{K} = \mathbb{Q}(\ker \nu, \mu_p)$. Let $K$ be the completion of $\mathcal{K}$ at a place of $\mathcal{K}$ above $p$.
+
+LEMMA 5.1.1. $K/\mathbb{Q}_p(\mu_p)$ is an unramified extension. So $e_0 = v_K(p)/(p-1) = 1$.
+
+PROOF. Following Mazur [6], §5, we consider the following character. Let $\chi : \text{Gal}(\bar{\mathbb{Q}}/\mathbb{Q}) \to \text{Aut}(\ker \nu) \simeq \mathbb{F}_p^\times$ be defined by $T^\sigma = \chi(\sigma)T$, where $\langle T \rangle = \ker \nu$. Since $\mathbb{Q}(\ker \nu)/\mathbb{Q}$ is an abelian extension, $\chi$ factors through $\text{Gal}(\mathbb{Q}^{ab}/\mathbb{Q})$. By local class field theory, there exists an isomorphism $\rho : U(\mathbb{Q}_p) \simeq \text{Gal}(\mathbb{Q}_p^{ab}/\mathbb{Q}_p^{ur})$ and we restrict $\chi$ to $\text{Gal}(\mathbb{Q}_p^{ab}/\mathbb{Q}_p)$. So we have a homomorphism
+
+$$ \varepsilon : U(\mathbb{Q}_p) \xrightarrow{\rho} \text{Gal}(\mathbb{Q}_p^{ab}/\mathbb{Q}_p) \xrightarrow{\chi} \mathbb{F}_p^\times. $$
+
+Since $U(\mathbb{Q}_p) \simeq \mathbb{Z}_p^\times \simeq \text{Gal}(\mathbb{Q}_p(\mu_{p^\infty})/\mathbb{Q}_p)$, $\varepsilon$ is the cyclotomic character. Then there exists $k \in \mathbb{Z}$ such that $\chi = \varepsilon^k \alpha$, where $\alpha$ is an unramified character at $p$. Then the character group which corresponds to $\mathbb{Q}(\mu_p)$ (resp. $\mathbb{Q}(\ker \nu)$) is $\langle \varepsilon \rangle$ (resp. $\langle \chi \rangle$). So the character group which corresponds to $\mathcal{K}$ is $\langle \varepsilon, \chi \rangle = \langle \varepsilon, \alpha \rangle$.
+
+**5.2. The case of $p=5$.** We study an elliptic curve $E$ over $\mathbb{Q}$ with a 5-isogeny $\nu$ over $\mathbb{Q}$. By Lecacheux [5], the $j$-invariant of such a curve is $j = -(n^2 - 10n + 5)^3/n$, where $n \in \mathbb{Q}$, and $E$ is isomorphic to the curve
+
+$$ Y^2 = X^3 - (n^2 - 10n + 5) \frac{d}{48} X + (n^2 - 4n - 1) \frac{d^2}{864} $$
+
+with discriminant $\Delta = -nd^3$, where $d = n^2 - 22n + 125$. Let $\mathcal{K} = \mathbb{Q}(\mu_5, \ker \nu)$ and $K = \mathbb{Q}_5(\mu_5, \ker \nu)$.
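+These formulas can be checked mechanically. The following sketch (exact rational arithmetic; the helper name `lecacheux_curve` is ours, not from [5]) recovers the coefficients, the discriminant $\Delta = -nd^3$ and the $j$-invariant for $n = 10$, matching Example 5.2.1 below:
+
+```python
+from fractions import Fraction
+
+def lecacheux_curve(n):
+    # Y^2 = X^3 + A X + B with A = -(n^2-10n+5) d/48, B = (n^2-4n-1) d^2/864
+    n = Fraction(n)
+    d = n**2 - 22*n + 125
+    A = -(n**2 - 10*n + 5) * d / 48
+    B = (n**2 - 4*n - 1) * d**2 / 864
+    disc = -16 * (4*A**3 + 27*B**2)   # discriminant of a short Weierstrass model
+    j = -1728 * (4*A)**3 / disc       # j-invariant
+    return A, B, disc, j
+
+A, B, disc, j = lecacheux_curve(10)
+assert (A, B) == (Fraction(-25, 48), Fraction(1475, 864))
+assert disc == -2 * 5**4              # equals -n d^3 with d = 5
+assert j == Fraction(-25, 2)
+```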
+
+EXAMPLE 5.2.1. We take $n = 10$; then $j = -25/2$, $\Delta = -2 \cdot 5^4$, and $E_{(10)}$ over $\mathbb{Q}$ is written as
+
+$$ Y^2 = X^3 - \frac{25}{48}X + \frac{1475}{864}. $$
+
+By [5], the coordinates of a generator $P$ of $\ker \nu$ are $(x_P, y_P) = (\frac{5+6\sqrt{5}}{12}, \frac{\sqrt{50+10\sqrt{5}}}{4})$. So $\mathcal{K} = \mathbb{Q}(\zeta_5, \sqrt{-1})$. Since $e_0 = 1$ by Lemma 5.1.1, the ramification index of $K/\mathbb{Q}_5$ is 4. By Tate's algorithm [11], we can verify that a minimal model of $E_{(10)}$ over $\mathcal{O}_K$ is written as
+
+$$ Y^2 = X^3 - \frac{5}{48}X + \frac{59\sqrt{5}}{864}. $$
+
+Then $E_{(10)}$ has additive reduction over $\mathcal{O}_K$ of type IV and $v_K(\Delta) = 4$. By this change of coordinates, we have $v_K(x_P) = v_K(y_P) = 0$. Put $z_P = -x_P/y_P$; then $v_K(z_P) = 0$.
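+As a sanity check (our own script, modeling elements of $\mathbb{Q}(\sqrt{5})$ as exact coefficient pairs $a + b\sqrt{5}$), the generator $P$ does lie on $E_{(10)}$:
+
+```python
+from fractions import Fraction as F
+
+def mul(p, q):
+    # (a + b*sqrt(5)) * (c + d*sqrt(5)) as coefficient pairs
+    a, b = p; c, d = q
+    return (a*c + 5*b*d, a*d + b*c)
+
+x = (F(5, 12), F(6, 12))          # x_P = (5 + 6*sqrt(5))/12
+y_sq = (F(50, 16), F(10, 16))     # y_P^2 = (50 + 10*sqrt(5))/16
+x3 = mul(mul(x, x), x)
+rhs = (x3[0] - F(25, 48) * x[0] + F(1475, 864), x3[1] - F(25, 48) * x[1])
+assert rhs == y_sq                # P lies on Y^2 = X^3 - (25/48)X + 1475/864
+```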
+
+Let $L$ be an extension of $K$ with ramification index 3. Since $j \equiv 0 \bmod 5$, $E$ has supersingular good reduction over $\mathcal{O}_L$. Let $\pi_L$ be a prime element of $\mathcal{O}_L$. Then a minimal
+---PAGE_BREAK---
+
+model over $\mathcal{O}_L$ is written by
+
+$$Y^2 = X^3 - \frac{5}{48\pi_L^4}X + \frac{59}{864u_1},$$
+
+where $u_1 = \pi_L^6/\sqrt{5}$. By this change of coordinates, we have $v_L(x_P) = -2$ and $v_L(y_P) = -3$. Then $t = v_L(z_P) = 1$. Hence $\delta(D_L) = C_L^{1+(3-1)\cdot 5} = C_L^{11}$, by Theorem 3.2.6.
+
+EXAMPLE 5.2.2. If $n = \frac{25}{2}$, then $\Delta = -\frac{5^8}{2^7}$, $j = -\frac{121945}{32}$, and $E_{(25/2)}$ over $\mathbb{Q}$ is written as
+
+$$Y^2 = X^3 - \frac{29 \cdot 5^3}{2^8 \cdot 3} X + \frac{421 \cdot 5^4}{2^{11} \cdot 3^3}.$$
+
+If $E'_{(10)}$ is 5-isogenous to $E_{(10)}$ over $\mathbb{Q}$, then $E_{(25/2)}$ is isomorphic to $E'_{(10)}$ over $\mathbb{Q}(\sqrt{-1})$. The generator of $\ker \nu$ is $(x_P, y_P) = (-\frac{35}{48}, \frac{5\sqrt{5}}{4})$. So $\mathcal{K} = \mathbb{Q}(\zeta_5)$. By a change of coordinates, $E_{(25/2)}$ over $\mathcal{O}_K$ is written as
+
+$$Y^2 = X^3 - \frac{29 \cdot 5}{768} X + \frac{421 \cdot 5}{55296}.$$
+
+We have $v_K(x_P) = v_K(y_P) = 0$.
+
+Let $L$ be an extension of $K$ with ramification index 3. By a change of coordinates, we have $v_L(\Delta) = 0$, so $E_{(25/2)}$ has good reduction over $\mathcal{O}_L$. By this change of coordinates, $v_L(x_P) = -4$ and $v_L(y_P) = -6$, so $t = v_L(z_P) = 2$. Hence $\delta(D_L) = C_L^{1+(3-2)\cdot 5} = C_L^6$.
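+The same style of check works here; the coefficients are computed directly from the family formula with $n = 25/2$ (our own verification script):
+
+```python
+from fractions import Fraction as F
+
+n = F(25, 2)
+d = n**2 - 22*n + 125                  # = 25/4
+A = -(n**2 - 10*n + 5) * d / 48        # = -(29 * 5^3)/(2^8 * 3)
+B = (n**2 - 4*n - 1) * d**2 / 864      # = (421 * 5^4)/(2^11 * 3^3)
+assert A == -F(29 * 5**3, 2**8 * 3)
+assert B == F(421 * 5**4, 2**11 * 3**3)
+
+# the kernel generator P = (-35/48, 5*sqrt(5)/4): y_P^2 = 125/16
+x, y_sq = F(-35, 48), F(125, 16)
+assert x**3 + A*x + B == y_sq          # P lies on the curve
+assert -n * d**3 == -F(5**8, 2**7)     # the stated discriminant
+```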
+
+EXAMPLE 5.2.3. If $n = 7$, then $j = 4096/7$, $\Delta = -2^6 \cdot 5^3 \cdot 7$, and $E_{(7)}$ over $\mathbb{Q}$ is written as
+
+$$Y^2 = X^3 + \frac{20}{3}X + \frac{250}{27}.$$
+
+The generator of $\ker \nu$ is $(x_P, y_P) = (\frac{5+3\sqrt{5}}{3}, \sqrt{50+20\sqrt{5}})$. So $\mathcal{K} = \mathbb{Q}(\zeta_5, \sqrt{-2})$. Since $e_0 = 1$, the ramification index of $K/\mathbb{Q}_5$ is 4. By a change of coordinates, $v_K(\Delta) = 0$. Therefore $E_{(7)}$ has good reduction over $K$ and a minimal model of $E_{(7)}$ is written as
+
+$$Y^2 = X^3 + \frac{4}{3}X + \frac{10\sqrt{5}}{27}.$$
+
+By this change of coordinates, $v_K(x_P) = v_K(y_P) = 0$, so $t = v_K(z_P) = 0$. Because $j \not\equiv 0 \pmod 5$, $E_{(7)}$ has ordinary good reduction. Since $\ker \nu \not\subset E_{(7),1}(K)$, $\operatorname{Im} \delta = C_K^{e_0 p} = C_K^5$ by Theorem 3.2.6.
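+The kernel point can be confirmed in the same way (exact arithmetic in $\mathbb{Q}(\sqrt{5})$ as coefficient pairs; helpers our own):
+
+```python
+from fractions import Fraction as F
+
+def mul(p, q):                     # multiply a + b*sqrt(5) pairs
+    a, b = p; c, d = q
+    return (a*c + 5*b*d, a*d + b*c)
+
+x = (F(5, 3), F(1))                # x_P = (5 + 3*sqrt(5))/3
+y_sq = (F(50), F(20))              # y_P^2 = 50 + 20*sqrt(5)
+x3 = mul(mul(x, x), x)
+rhs = (x3[0] + F(20, 3) * x[0] + F(250, 27), x3[1] + F(20, 3) * x[1])
+assert rhs == y_sq                 # P lies on Y^2 = X^3 + (20/3)X + 250/27
+```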
+
+## References
+
+[1] V. G. BERKOVIČ, On the division by an isogeny of the points of an elliptic curve, Math. USSR Sbornik **22** (1974), 473–492.
+
+[2] J. W. S. CASSELS, *Lectures on Elliptic Curves*, Cambridge Univ. Press. (1991).
+
+[3] A. FRÖHLICH, *Formal Groups*, Lecture Notes in Math. **74** (1968), Springer.
+
+[4] A. GROTHENDIECK, *Modèles de Néron et monodromie*, SGA 7 I, exposé IX, Lecture Notes in Math. **288** (1972), Springer, 313–523.
+---PAGE_BREAK---
+
+[5] O. LECACHEUX, Courbes elliptiques et groupes de classes d'idéaux de corps quartiques, C. R. Acad. Sci. Paris, t. **316**, Série I (1993), 217–220.
+
+[6] B. MAZUR, Rational isogenies of prime degree, Invent. Math. **44** (1978), 129–162.
+
+[7] J.-P. SERRE, Propriétés galoisiennes des points d'ordre fini des courbes elliptiques, Invent. Math. **15** (1972), 259–331.
+
+[8] J. P. SERRE, *Local Fields*, Grad. Texts in Math. **67** (1979), Springer.
+
+[9] J. H. SILVERMAN, *The Arithmetic of Elliptic Curves*, Grad. Texts in Math. **106** (1985), Springer.
+
+[10] J. H. SILVERMAN, *Advanced Topics in the Arithmetic of Elliptic Curves*, Grad. Texts in Math. **151** (1994), Springer.
+
+[11] J. TATE, Algorithm for determining the type of a singular fiber in an elliptic pencil, *Modular Functions of One Variable IV*, Lecture Notes in Math. **476** (1975), Springer, 33–52.
+
+Present Address:
+
+DEPARTMENT OF MATHEMATICS, TOKYO METROPOLITAN UNIVERSITY,
+MINAMI-OHSAWA, HACHIOJI-SHI, TOKYO, 192-0397 JAPAN.
+e-mail: kawachi@comp.metro-u.ac.jp
\ No newline at end of file
diff --git a/samples/texts_merged/7796331.md b/samples/texts_merged/7796331.md
new file mode 100644
index 0000000000000000000000000000000000000000..5e2cf08894a5e4b4e5876b4bd82a89d83928a2f6
--- /dev/null
+++ b/samples/texts_merged/7796331.md
@@ -0,0 +1,388 @@
+
+---PAGE_BREAK---
+
+Topology Proceedings
+
+**Web:** http://topology.auburn.edu/tp/
+
+**Mail:** Topology Proceedings
+Department of Mathematics & Statistics
+Auburn University, Alabama 36849, USA
+
+**E-mail:** topolog@auburn.edu
+
+**ISSN:** 0146-4124
+
+COPYRIGHT © by Topology Proceedings. All rights reserved.
+---PAGE_BREAK---
+
+# GENERALIZED BALANCED PAIR ALGORITHM¹
+
+BRIAN F. MARTENSEN
+
+**ABSTRACT.** We present here a more general version of the balanced pair algorithm. This version works in the reducible case and terminates more often than the standard algorithm. We present examples to illustrate this point. Lastly, we discuss the features which lead to balanced pair algorithms not terminating and state several conjectures.
+
+## 1. INTRODUCTION
+
+The balanced pair algorithm was first introduced by A. N. Livshits [8, 9] for checking the pure discrete spectrum of the $\mathbb{Z}$-action of a substitution. A version of this algorithm was presented by V. F. Sirvent and B. Solomyak [12] for irreducible substitutions. For substitutions of Pisot type, Sirvent and Solomyak also give an explicit relationship between this algorithm and an overlap algorithm used in [13] for checking the pure discrete spectrum of the $\mathbb{R}$-action on the substitution tiling space. Recent results of A. Clark and L. Sadun [3] give conditions for the conjugacy of the $\mathbb{R}$-actions
+
+2000 Mathematics Subject Classification. Primary 37A30; Secondary 37B10, 52C23.
+
+Key words and phrases. balanced pair, pure discrete spectrum, pure point spectrum, tiling spaces, weakly mixing.
+
+The author was supported in part by NSF Grant DMS-0072675.
+
+¹This article was written while the author was at The University of Texas at Austin.
+---PAGE_BREAK---
+
+on substitution tiling spaces which otherwise do not differ combinatorially or topologically. Their results give an immediate relation between the $\mathbb{Z}$-action on the sequence space and the $\mathbb{R}$-action on the tiling space. The conditions given in [3] also allow us a way to generalize the balanced pair algorithm to reducible substitutions. The procedure for doing this is described in section 3 below.
+
+The purpose of extending this algorithm is two-fold. First, it creates a way in which the algorithm will terminate more often. When the balanced pair algorithm terminates has been much studied. M. Hollander [5] (see also [6]) has shown that the balanced pair algorithm will terminate for all 2-symbol Pisot substitutions. This fact, along with the solution to the coincidence conjecture for 2 symbols [2], has shown that all 2-symbol Pisot substitutions have pure discrete spectrum. This has not yet been shown for an arbitrary number of symbols. In fact, it is not yet known whether the balanced pair algorithm terminates for all Pisot substitutions.
+
+The second reason for extending the algorithm to the reducible case is related to collaring or rewriting substitutions to obtain new (yet conjugate) substitutions. Collaring or rewriting procedures generally increase the number of symbols which turns irreducible substitutions into reducible ones. It would be beneficial to know that such procedures did not change the potential for a balanced pair algorithm to terminate. In particular, there are rewriting procedures which automatically produce coincidences. Thus, the question of pure point spectrum (and even the coincidence conjecture) relies entirely on whether these reducible systems terminate. We explore these and other questions in section 4.
+
+## 2. PRELIMINARIES
+
+Let $\mathcal{A} = \{1, 2, \dots, n\}$ be a finite alphabet and $\mathcal{A}^*$ denote the collection of finite nonempty words with letters in $\mathcal{A}$. A *substitution* is a map $\varphi : \mathcal{A} \to \mathcal{A}^*$. It extends naturally to $\varphi : \mathcal{A}^* \to \mathcal{A}^*$ and $\varphi : \mathcal{A}^{\mathbb{N}} \to \mathcal{A}^{\mathbb{N}}$ by concatenation.
+
+We associate to each word $w$ a population vector
+
+$$ p(w) = (p_1(w), p_2(w), \dots, p_n(w)) $$
+
+which assigns to each $p_i(w)$ the number of appearances of the letter $i$ in the word $w$. To a substitution $\varphi$ there is an associated transition matrix $A_\varphi = (a_{ij})_{i \in \mathcal{A}, j \in \mathcal{A}}$ in which $a_{ij} = p_i(\varphi(j))$. Note that
+---PAGE_BREAK---
+
+$A_{\varphi}(\mathbf{p}(w)) = \mathbf{p}(\varphi(w))$. We say a substitution $\varphi$ is *primitive* if $\varphi^m(i)$ contains $j$ for all $i, j \in \mathcal{A}$ and sufficiently large $m$. Equivalently, $\varphi$ is primitive if and only if the matrix $A_{\varphi}$ is eventually positive (there exists $m$ such that the entries of $A_{\varphi}^m$ are strictly positive). This condition implies that $A_{\varphi}$ has an eigenvalue $\lambda_{\varphi}$ larger in modulus than its remaining eigenvalues called the Perron-Frobenius eigenvalue of $A_{\varphi}$ (and $\varphi$).
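+These definitions are easy to make concrete. A minimal sketch (pure Python; the substitution $1 \to 112$, $2 \to 12$ is the one used in Example 3.1 below, and the helper names are ours) checks the identity $A_{\varphi}(\mathbf{p}(w)) = \mathbf{p}(\varphi(w))$ and the primitivity criterion:
+
+```python
+def population(word, alphabet):
+    # p(w): number of appearances of each letter of the alphabet in w
+    return tuple(word.count(a) for a in alphabet)
+
+def transition_matrix(sub, alphabet):
+    # a_ij = number of appearances of letter i in sub(j)
+    return [[sub[j].count(i) for j in alphabet] for i in alphabet]
+
+def mat_mul(A, B):
+    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
+             for j in range(len(B[0]))] for i in range(len(A))]
+
+def is_primitive(A, max_power=50):
+    # A_phi is eventually positive iff some power has all entries > 0
+    P = A
+    for _ in range(max_power):
+        if all(x > 0 for row in P for x in row):
+            return True
+        P = mat_mul(P, A)
+    return False
+
+alphabet = "12"
+sub = {"1": "112", "2": "12"}
+A = transition_matrix(sub, alphabet)           # [[2, 1], [1, 1]]
+w = "1121"
+image = "".join(sub[c] for c in w)             # phi(w)
+pw = population(w, alphabet)
+lhs = tuple(sum(A[i][j] * pw[j] for j in range(2)) for i in range(2))
+assert lhs == population(image, alphabet)      # A_phi p(w) = p(phi(w))
+assert is_primitive(A)
+```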
+
+We form a space $\Omega_{\varphi}$ as the set of allowed bi-infinite words for $\varphi$. That is, $\mathbf{u} \in \Omega_{\varphi}$ if and only if for each finite subword $w$ of $\mathbf{u}$, there are $i \in \mathcal{A}$ and $n \in \mathbb{N}$ such that $w$ is a subword of $\varphi^n(i)$. We give $\Omega_{\varphi}$ the subspace topology (of $\mathcal{A}^{\mathbb{Z}}$ with the product topology) and denote the natural shift homeomorphism by $\sigma$. Then $(\Omega_{\varphi}, \sigma)$ is a topological dynamical system which is minimal and uniquely ergodic provided the substitution is primitive. We will often use the fact that, due to unique ergodicity, any allowable word $w$ appears in any $\mathbf{u} \in \Omega_{\varphi}$ with a well-defined and bounded positive frequency.
+
+The spectral type of the measure-preserving transformation $(\Omega_{\varphi}, \sigma, \mu)$ is, by definition, the spectral type of the unitary operator $U_{\varphi}: f(\cdot) \mapsto f(\sigma \cdot)$ on $L^2(\Omega_{\varphi}, \mu)$. We say $(\Omega_{\varphi}, \sigma, \mu)$ has pure discrete spectrum if and only if there is a basis for $L^2(\Omega_{\varphi}, \mu)$ consisting of eigenfunctions for $U_{\varphi}$.
+
+Additionally, to each substitution $\varphi$ and any $\mathbf{u} \in \Omega_{\varphi}$ we can form a tiling of the real line by intervals. For each letter $i \in \mathcal{A}$ we assign a closed interval of length $l_i$. We refer to the vector $\mathbf{L} = (l_1, \dots, l_n)$ as the length vector. We then “lay” copies of these closed intervals down on the real line (so that they do not overlap on their interiors) according to the order prescribed by $\mathbf{u} = \dots u_{-1} . u_0 u_1 \dots$, with the placement of the origin given by the decimal point. There is a natural translation action $\Gamma^t$ on a tiling $T$ which forms new tilings by simply moving the origin by a distance $t \in \mathbb{R}$ to the right. For a fixed $\mathbf{u}$, we form a compact metric space by taking the completion of the space of the translates of $\mathbf{u}$. Here the completion is taken with respect to the metric which considers two tilings to be close if they agree on a large neighborhood about the origin after a small translation. If the substitution is primitive, then this space is independent of $\mathbf{u}$, is minimal and uniquely ergodic. We refer to the $\mathbb{R}$-action on $(\Omega_{\varphi}, \Gamma^t, \mathbf{L})$ as the *tiling dynamical system*. It is topologically conjugate to the suspension flow over the $\mathbb{Z}$-action $(\Omega_{\varphi}, \sigma)$, with the height function equal to $l_i$ over the cylinder $i$.
+---PAGE_BREAK---
+
+The system $(\Omega_{\phi}, \Gamma^t, \mathbf{L}, \mu)$ is then said to have *pure point spectrum* if there is a basis of $L^2(\Omega_{\phi}, \Gamma^t, \mathbf{L}, \mu)$ consisting of eigenfunctions for the $\mathbb{R}$-action.
+
+A primitive substitution is of *Pisot type* if all of its non-Perron-Frobenius eigenvalues are strictly between 0 and 1 in magnitude. Sirvent and Solomyak [12] have shown, for a substitution of Pisot type, that if the $\mathbb{R}$-action has pure discrete spectrum then so does the $\mathbb{Z}$-action. A recent result of Clark and Sadun [3] implies that this can be strengthened to an “if and only if” statement.
+
+In fact, the result of Clark and Sadun has more general applications than just the Pisot case and we briefly describe some aspects of their results. First, however, we should give a few observations in order that we may put their results in context.
+
+We denote as $\mathbf{L}_{\lambda}$ the left eigenvector associated to the Perron-Frobenius eigenvalue $\lambda$ of the transition matrix $A_{\phi}$. The choice of $\mathbf{L}_{\lambda}$ as a length vector causes any tiling which is (combinatorially) fixed under $\phi$ to be *self-similar*. The similarity map is expansion by $\lambda$ and the tile-substitution is the geometric version of $\phi$. On the other hand, the length vector $\mathbf{L}_1 = (1, 1, \dots, 1)$ is a natural choice since the time-1 return map for the tiling dynamical system is conjugate to the shift homeomorphism of the substitution dynamical system. Other length changes, though producing tiling dynamical systems conjugate to the suspension of the shift, will produce no such group action. It is easy to see that the $\mathbb{R}$-action of the tiling dynamical system with constant length vector $\mathbf{L}_1$ has pure discrete spectrum if and only if the $\mathbb{Z}$-action does. Therefore, questions about the relation of the $\mathbb{R}$-action with length vector $\mathbf{L}$ to the $\mathbb{Z}$-action can be rephrased into questions about the conjugacy of the $\mathbb{R}$-actions of the systems using $\mathbf{L}$ and $\mathbf{L}_1$, respectively.
+
+Clark and Sadun [3] give explicit conditions for the $\mathbb{R}$-action associated to one length vector to be conjugate (up to an overall rescaling) to that of another via a homeomorphism which preserves the combinatorics (i.e., is homotopic to the identity). Assume the length vectors $\mathbf{L}$ and $\mathbf{L}'$ have been rescaled to agree in the Perron-Frobenius direction. Then, the conjugacy occurs if and only if $\mathbf{L}A_{\phi}^n - \mathbf{L}'A_{\phi}^n \to 0$ as $n \to \infty$. Note that every length vector can be written as a linear combination of vectors living in the Perron-Frobenius eigenspace, the small eigenspaces (those associated to the
+---PAGE_BREAK---
+
+eigenvalues of magnitude less than one) and the large eigenspaces (those associated to eigenvalues of magnitude greater than or equal to one, though we don't include the P.F.-eigenspace here). In particular, after the rescaling, $\mathbf{L}-\mathbf{L}'$ must avoid the large eigenspaces for the conjugacy to occur. If the substitution is of Pisot type, then all length changes produce conjugate systems and we arrive at the result above.
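+A toy numeric illustration of this condition (our own example, not from [3]): for the Pisot matrix $A = \begin{pmatrix} 2 & 1 \\ 1 & 1 \end{pmatrix}$, a difference $\mathbf{L} - \mathbf{L}'$ lying in the small (contracting) left eigenspace satisfies $(\mathbf{L} - \mathbf{L'})A^n \to 0$:
+
+```python
+A = [[2, 1], [1, 1]]
+t = (-1 - 5 ** 0.5) / 2            # small left eigenvector is (1, t)
+v = [1.0, t]                       # v A = lam * v with lam = 2 + t < 1
+for _ in range(20):
+    v = [v[0] * A[0][j] + v[1] * A[1][j] for j in range(2)]
+assert max(abs(c) for c in v) < 1e-6   # decays like lam^20
+```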
+
+We point out a few special kinds of substitutions we will be considering. A substitution $\varphi$ is said to be *irreducible* if the characteristic polynomial of its transition matrix is irreducible. In particular, $L_{\lambda}$ of an irreducible substitution is such that none of its entries are rationally related to one another. Lastly, a substitution $\varphi$ is said to have *constant length* if the number of letters appearing in $\varphi(i)$ is the same for all $i \in \mathcal{A}$.
+
+## 3. BALANCED PAIR ALGORITHMS
+
+We now describe the balanced pair algorithm in a variety of circumstances. We begin with the case that the substitution is irreducible. This case was studied extensively in [12] as an adaptation of the algorithm of [9]. The extension of this algorithm to a specific class of reducible substitutions containing “letter equivalences” can then easily be described. We end this section with a description of how one extends this procedure for generic reducible cases.
+
+### 3.1. IRREDUCIBLE CASE
+
+A pair of allowable words $u$ and $v$ is called *balanced* if each member of the pair has the same population vector. We write $\left|\begin{smallmatrix} u \\ v \end{smallmatrix}\right|$ if $u$ and $v$ are balanced. Note that if $\left|\begin{smallmatrix} u \\ v \end{smallmatrix}\right|$ is balanced, then so is $\left|\begin{smallmatrix} \varphi(u) \\ \varphi(v) \end{smallmatrix}\right|$.
+
+Let the right infinite sequence $\mathbf{u} = u_0u_1\dots$ be fixed under the substitution and let $w$ be a non-empty prefix of $\mathbf{u}$. Since $w$ appears in $\mathbf{u}$ with positive frequency, $\mathbf{u}$ can be written as $\mathbf{u} = wX_1wX_2\dots$. We may then speak of splitting $\left|\begin{smallmatrix} \mathbf{u} \\ \sigma^{|w|}\mathbf{u} \end{smallmatrix}\right|$ into balanced pairs in the following way:
+
+$$ \left| \begin{array}{c} \mathbf{u} \\ \sigma^{|w|}\mathbf{u} \end{array} \right| = \left| \begin{array}{c} wX_1 \\ X_1w \end{array} \right| \left| \begin{array}{c} wX_2 \\ X_2w \end{array} \right| \dots $$
+
+Since the gaps between appearances of $w$ in $\mathbf{u}$ are bounded, there are only finitely many different balanced pairs encountered in the process above. We
+---PAGE_BREAK---
+
+may further reduce each of these to form a finite set of irreducible balanced pairs which we will refer to as $I_1(w)$.
+
+We may now inductively define, for $n > 1$:
+
+$$ I_n(w) = \left\{ \left|\begin{array}{c} u \\ v \end{array}\right| : \left|\begin{array}{c} u \\ v \end{array}\right| \text{ appears as an irreducible balanced pair in the reduction of } \left|\begin{array}{c} \varphi(x) \\ \varphi(y) \end{array}\right| \text{ for some } \left|\begin{array}{c} x \\ y \end{array}\right| \in I_{n-1}(w) \right\}. $$
+
+Let $I(w) = \bigcup_{n=1}^{\infty} I_n(w)$. If $I(w)$ is finite, then we say that the balanced pair algorithm associated to a prefix $w$, or bpa-$w$, terminates. Below, we will state how this algorithm is used to determine pure discrete spectrum, though our main interest here is in determining precisely when the algorithm terminates. We illustrate the computation of $I(w)$ for a simple example.
+
+**Example 3.1 (Fibonacci substitution).** Consider the substitution given by:
+
+$$
+\begin{array}{rcl}
+1 & \rightarrow & 112 \\
+2 & \rightarrow & 12
+\end{array}
+ $$
+
+Then $\mathbf{u} = 11211212112...$ and we take the prefix $w = 1$. Then it is easy to see that:
+
+$$ I_1 = \left\{ \left|\begin{array}{c} 1 \\ 1 \end{array}\right|, \left|\begin{array}{c} 12 \\ 21 \end{array}\right| \right\}. $$
+
+Now,
+
+$$
+\left|\begin{array}{c} 1 \\ 1 \end{array}\right| \rightarrow \left|\begin{array}{c} 1 \\ 1 \end{array}\right| \left|\begin{array}{c} 1 \\ 1 \end{array}\right| \left|\begin{array}{c} 2 \\ 2 \end{array}\right| \quad \text{and} \quad \left|\begin{array}{c} 12 \\ 21 \end{array}\right| \rightarrow \left|\begin{array}{c} 1 \\ 1 \end{array}\right| \left|\begin{array}{c} 12 \\ 21 \end{array}\right| \left|\begin{array}{c} 1 \\ 1 \end{array}\right| \left|\begin{array}{c} 2 \\ 2 \end{array}\right|
+$$
+
+so that $I_2 = \left\{ |{}^{1}_{1}|, |{}^{12}_{21}|, |{}^{2}_{2}| \right\}$ and further, $I_2 = I_3 = \dots = I(w)$. Thus, the algorithm terminates.
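The procedure above can be sketched algorithmically. The following is a minimal Python sketch (the function names are ours, not from the paper): it reduces a balanced pair into irreducible balanced pairs by scanning for the shortest balanced prefixes, then iterates substitution and reduction until no new pairs appear. Run on Example 3.1 with the seed pairs $I_1$, it recovers the set $I(w)$ computed above.

```python
from collections import Counter

PHI = {"1": "112", "2": "12"}           # the substitution of Example 3.1

def substitute(word):
    return "".join(PHI[c] for c in word)

def reduce_pair(u, v):
    """Split a balanced pair (u, v) into irreducible balanced pairs."""
    pairs, start = [], 0
    top, bot = Counter(), Counter()
    for i in range(len(u)):
        top[u[i]] += 1
        bot[v[i]] += 1
        if top == bot:                  # shortest balanced prefix ends here
            pairs.append((u[start:i + 1], v[start:i + 1]))
            start = i + 1
            top, bot = Counter(), Counter()
    return pairs

def balanced_pair_algorithm(seed_pairs, max_rounds=50):
    """Substitute and reduce until no new irreducible pairs appear."""
    found = set(seed_pairs)
    frontier = set(seed_pairs)
    for _ in range(max_rounds):
        new = set()
        for u, v in frontier:
            for pair in reduce_pair(substitute(u), substitute(v)):
                if pair not in found:
                    new.add(pair)
        if not new:
            return found                # the algorithm terminates
        found |= new
        frontier = new
    raise RuntimeError("no termination within max_rounds")

# I_1(w) for w = 1, read off from splitting |u / sigma(u)| as in the text:
I = balanced_pair_algorithm({("1", "1"), ("12", "21")})
```

For this example the loop stabilizes after one round, returning the three pairs listed in the text.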
+
+#### 3.2. SUBSTITUTIONS INVOLVING LETTER EQUIVALENCES
+
+The balanced pair algorithm as originally described by Livshits [8, 9] has an additional feature. Consider two letters $i, j \in A$ to be equivalent if $\varphi^n(i)$ and $\varphi^n(j)$ have the same number of symbols for all $n \in \mathbb{N}$. Then, a pair of words is balanced if both contain the same number of symbols from each equivalence class. The algorithm runs in the usual way, starting with an initial list of balanced pairs, substituting and reducing. It stops if no new balanced pairs are produced. In particular, in the constant length case all letters are equivalent, so all irreducible balanced pairs are just pairs
+---PAGE_BREAK---
+
+of symbols. Thus, the algorithm always terminates in this case. This way, the algorithm includes F. M. Dekking's criterion [4] in the constant length case. For any irreducible case, no two symbols are equivalent and thus the algorithm runs just as before.
+
+**Example 3.2 (Substitution of constant length).** Consider the substitution given by:
+
+$$1 \rightarrow 112$$
+
+$$2 \rightarrow 122$$
+
+which is a constant length substitution. If we ignore the equivalence relation of Livshits, then the troubling balanced pair is $|{^{12}_{21}}|$. Iterating this pair, we see:
+
+$$ \left|\begin{array}{c} 12 \\ 21 \end{array}\right| \rightarrow \left|\begin{array}{c} 1 \\ 1 \end{array}\right| \left|\begin{array}{c} 1212 \\ 2211 \end{array}\right| \left|\begin{array}{c} 2 \\ 2 \end{array}\right|, \qquad \left|\begin{array}{c} 1212 \\ 2211 \end{array}\right| \rightarrow \left|\begin{array}{c} 1 \\ 1 \end{array}\right| \left|\begin{array}{c} 1212211212 \\ 2212211211 \end{array}\right| \left|\begin{array}{c} 2 \\ 2 \end{array}\right| $$
+
+In particular, this process generates new balanced pairs of the form $|{}^{1z2}_{2z1}|$ for longer and longer words $z$.
+
+Once we take the equivalence relation into account, however, all letters are equivalent and therefore the above process terminates.
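Livshits' letter equivalence is itself computable. A small sketch (our own helper names; we use the observation that, by Cayley-Hamilton, checking $n = 0, \dots, d-1$ suffices, since every higher power of the transition matrix is a linear combination of these):

```python
PHI = {"1": "112", "2": "122"}   # Example 3.2 (constant length)
letters = sorted(PHI)
d = len(letters)

def length_after(letter, n):
    """Number of symbols in phi^n(letter)."""
    word = letter
    for _ in range(n):
        word = "".join(PHI[c] for c in word)
    return len(word)

def equivalent(i, j):
    # |phi^n(i)| = |phi^n(j)| for n = 0, ..., d-1 implies it for all n.
    return all(length_after(i, n) == length_after(j, n) for n in range(d))

# Group the alphabet into equivalence classes.
classes = []
for c in letters:
    for cl in classes:
        if equivalent(c, cl[0]):
            cl.append(c)
            break
    else:
        classes.append([c])
```

For the constant length substitution of Example 3.2 this produces a single class containing both letters, as claimed.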
+
+**Example 3.3 (A non-constant length substitution).** Consider the substitution given by:
+
+$$1 \rightarrow 31$$
+
+$$2 \rightarrow 412$$
+
+$$3 \rightarrow 312$$
+
+$$4 \rightarrow 412$$
+
+If we were to again ignore the equivalence relation of Livshits, then there is a potential problem with balanced pairs that “match up” the letter 3 with 4. To see this, note that the right eigenvectors of the transition matrix in some sense describe the frequencies with which letters appear. Here, there is one large eigenvalue other than the Perron-Frobenius eigenvalue. This eigenvalue is 1 and it has right eigenvector $(0, 0, 1, -1)^T$. We can thus view a balanced pair of the form $|{}^{3\dots}_{4\dots}|$ as initially having an abundance of a 3 and a lack of a 4 on top. Since this corresponds to the right eigenvector of 1, this difference persists under substitution. Note that we say this is a “potential” problem, as this association to a large eigenvalue
+---PAGE_BREAK---
+
+in itself will not force the bpa-$w$ to not terminate. In this case,
+however, we show that this does in fact occur. After shifting the
+fixed word $\mathbf{u} = 312...$ by the prefix $w = 31$, we see the balanced
+pair $|{}^{31412}_{41231}|$. Iterating this pair, we see:
+
+$$
+\left| \begin{array}{c} 31412 \\ 41231 \end{array} \right| \rightarrow \left| \begin{array}{c} 3123141231412 \\ 4123141231231 \end{array} \right|.
+$$
+
+Thus, we generate new balanced pairs of the form $|{}^{3z412}_{4z231}|$ for longer and longer words $z$. The equivalence relation tells us, however, that 2, 3, and 4 are actually equivalent letters and thus the above process terminates, as $|{}^{3}_{4}|$ and $|{}^{4}_{2}| |{}^{12}_{31}|$ are balanced. Note that the left Perron-Frobenius eigenvector for this system is $(1, \lambda - 1, \lambda - 1, \lambda - 1)$, so that 2, 3, and 4 all have the same length in the self-similar system.
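The eigen data cited in Example 3.3 can be checked numerically. A small sketch (the matrix-building helpers are ours): the transition matrix has columns equal to the population vectors of $\varphi(1), \dots, \varphi(4)$, and $(0, 0, 1, -1)^T$ is a right eigenvector for the eigenvalue 1.

```python
PHI = {"1": "31", "2": "412", "3": "312", "4": "412"}
letters = "1234"

def population(word):
    """Population vector: count of each letter in the word."""
    return [word.count(c) for c in letters]

# A[i][j] = number of occurrences of letter i+1 in phi(j+1):
# transpose the list of population vectors of the letter images.
A = [list(row) for row in zip(*(population(PHI[c]) for c in letters))]

def matvec(M, x):
    return [sum(m * xi for m, xi in zip(row, x)) for row in M]

v = [0, 0, 1, -1]
Av = matvec(A, v)    # should equal v, i.e. eigenvalue 1
```

Here `Av == v` confirms that the population difference between a 3 and a 4 persists under substitution, as the text argues.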
+
+#### 3.3. REDUCIBLE CASE
+
+The example above gives some hint as to how to describe the balanced pair algorithm in the generic reducible case. For a system whose tile lengths have rational relations which persist under substitution, it may be the case that entire words should be identified even when no individual letters are. We therefore introduce an equivalence relation which exploits these rational relations. Consider two words $v$ and $w$ to be equivalent if:
+
+$$
+\mathbf{L}(\mathbf{p}(\varphi^n(v)) - \mathbf{p}(\varphi^n(w))) = 0, \forall n \in \mathbb{N},
+$$
+
+or equivalently, if
+
+$$
+\mathbf{L}\bigl(A_{\varphi}^{n}\mathbf{p}(v) - A_{\varphi}^{n}\mathbf{p}(w)\bigr) = 0, \forall n \in \mathbb{N}.
+$$
+
+We write $v \sim_L w$ to emphasize the dependence of the equivalence relation on the choice of $\mathbf{L}$. In the case in which we use the left Perron-Frobenius eigenvector $\mathbf{L}_\lambda$ for our lengths, the equivalent pairs correspond simply to the *geometric balanced pairs* of [12]. In fact, we will show below that in this case the algorithm is equivalent to the overlap algorithm; for a general $\mathbf{L}$, it is meant to bridge the gap between the balanced pair and overlap algorithms in the reducible case.
+
+The algorithm once again runs in the usual way, with a pair of words considered balanced if they are equivalent under our relation above. We denote this algorithm by bpa$(w, \mathbf{L})$ and the set of balanced (or equivalent) pairs by $I(w, \mathbf{L})$ to emphasize that there is now an additional dependence on $\mathbf{L}$.
+---PAGE_BREAK---
+
+**Remark 3.4.** Our equivalence condition above, that $\mathbf{L}(\mathbf{p}(\varphi^n(v)) - \mathbf{p}(\varphi^n(w))) = 0$ for all $n \in \mathbb{N}$, could be weakened to allow $\mathbf{L}(\mathbf{p}(\varphi^n(v)) - \mathbf{p}(\varphi^n(w))) \to 0$ as $n \to \infty$, which essentially allows inconsequential length changes in the small eigenspaces mentioned above.
+
+**Example 3.5** (*A newly balanced substitution*). Consider the substitution given by:
+
+$$1 \rightarrow 112$$
+
+$$2 \rightarrow 2321$$
+
+$$3 \rightarrow 12.$$
+
+The eigenvalues of the transition matrix are $\frac{3\pm\sqrt{13}}{2}$ and 1 while $\mathbf{L}_{\lambda} = (1, \frac{-1+\sqrt{13}}{2}, \frac{5-\sqrt{13}}{2})$. The right eigenvector associated to the eigenvalue 1 is $(2, -1, -1)^T$. This indicates that there is a potential problem with balanced pairs which “match up” the word 11 with 23. Such a matching occurs in the balanced pair $|^{11223}_{23211}|$, which was found by performing the balanced pair algorithm after shifting the fixed word by two spaces. Iterating this balanced pair, we generate balanced pairs of the form $|^{11z23}_{23z11}|$ for longer and longer words $z$.
+
+Fortunately, $\mathbf{L}_{\lambda} \cdot (2, -1, -1) = 0$, so that 11 and 23 are equivalent words; hence $|^{11}_{23}|$ is balanced and the above algorithm will terminate. Note that $\mathbf{L}_1$ is also perpendicular to $(2, -1, -1)$, so that the algorithm will also terminate for what we consider to be the other interesting case. This is immediate from the fact that these two systems are conjugate by [3]. On the other hand, the system with lengths $\mathbf{L} = (1, 1, 2)$, for example, is not conjugate to these, and furthermore the algorithm does not terminate in this case.
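The word equivalence $v \sim_{\mathbf{L}} w$ of this subsection is finitely checkable: by Cayley-Hamilton it is enough to verify $\mathbf{L} \cdot A^n(\mathbf{p}(v) - \mathbf{p}(w)) = 0$ for $n = 0, \dots, d-1$. A sketch (our helper names) run on Example 3.5 with the integer length vector $\mathbf{L}_1 = (1,1,1)$, which is also perpendicular to $(2,-1,-1)^T$, versus the non-conjugate choice $(1,1,2)$:

```python
PHI = {"1": "112", "2": "2321", "3": "12"}   # Example 3.5
letters = "123"
d = len(letters)

def population(word):
    return [word.count(c) for c in letters]

# Transition matrix: columns are population vectors of the letter images.
A = [list(row) for row in zip(*(population(PHI[c]) for c in letters))]

def matvec(M, x):
    return [sum(m * xi for m, xi in zip(row, x)) for row in M]

def equivalent(v, w, L):
    """v ~_L w iff L . A^n (p(v) - p(w)) = 0 for n = 0, ..., d-1."""
    z = [a - b for a, b in zip(population(v), population(w))]
    for _ in range(d):
        if sum(l * zi for l, zi in zip(L, z)) != 0:
            return False
        z = matvec(A, z)
    return True
```

With `L = [1, 1, 1]` the words 11 and 23 come out equivalent; with `L = [1, 1, 2]` they do not, matching the discussion above.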
+
+**Proposition 3.6.** Let $\mathcal{E}(\mathbf{L}) = \{(u,v) : u \sim_{\mathbf{L}} v\}$. Then, for any length vector $\mathbf{L}$, $\mathcal{E}(\mathbf{L}) \subseteq \mathcal{E}(\mathbf{L}_{\lambda})$.
+
+*Proof:* Let $u \sim_{\mathbf{L}} v$. Let $\mathbf{z} = \mathbf{p}(u) - \mathbf{p}(v)$. Since $\mathbf{L}$ has only positive terms, it must have a component in the $\mathbf{L}_{\lambda}$ direction. Let $\mathbf{L} = a_{\lambda}\mathbf{L}_{\lambda} + \sum_{j=1}^{l} b_j B_j + \sum_{j=1}^{s} a_j A_j$, where $B_j$ is a left eigenvector for an eigenvalue $\beta_j$ with $|\beta_j| \ge 1$ and $A_j$ is a left eigenvector for an eigenvalue $\alpha_j$ with $|\alpha_j| < 1$ for each $j$. Then,
+
+$$0 = \mathbf{L} \cdot A^n \mathbf{z} = (\lambda^n a_\lambda \mathbf{L}_\lambda + \sum_{j=1}^l \beta_j^n b_j B_j + \sum_{j=1}^s \alpha_j^n a_j A_j) \cdot \mathbf{z}.$$
+---PAGE_BREAK---
+
+If $\mathbf{L}_{\lambda} \cdot \mathbf{z} \neq 0$, then the above implies $\lambda^n \approx \sum_{j=1}^{l} C_j \beta_j^n$ for large enough $n$ and some constants $C_j$. This is a contradiction since the Perron-Frobenius eigenvalue dominates all others. $\square$
+
+**Corollary 3.7.** *For any length vector $\mathbf{L}$, if the bpa$(w, \mathbf{L})$ terminates, then the bpa$(w, \mathbf{L}_\lambda)$ terminates.*
+
+Let $\mathcal{L}$ denote the span of the right eigenvectors with eigenvalues greater than or equal to 1 in magnitude but strictly less than the Perron-Frobenius eigenvalue. Similarly, let $\mathcal{S}$ denote the span of the right eigenvectors with eigenvalues strictly less than 1 in magnitude. We say that $(u, v)$ lies in a vector space $\mathcal{P}$ if $(\mathbf{p}(u) - \mathbf{p}(v)) \in \mathcal{P}$.
+
+**Remark 3.8.** The set of equivalent words lies entirely in these spaces, since they form precisely what is perpendicular to $\mathbf{L}_\lambda$. In our examples above, choosing an $\mathbf{L}$ which missed identifying equivalent pairs in $\mathcal{L}$ would have led to the bpa$(w, \mathbf{L})$ not terminating. Contrast this with Example 3.3, in which choosing an $\mathbf{L}$ which neglects to identify 2 and 4 will have no effect on whether the algorithm terminates. The vector associated to the pairing of 2 and 4, namely $(0, 1, 0, -1)^T$, lives in the zero-eigenspace, and thus differences in frequencies should be quickly dispelled. Our contention here is that generally the equivalence relations which live in $\mathcal{S}$ should not affect the algorithm, whereas those in $\mathcal{L}$ affect it greatly.
+
+**Corollary 3.9.** $u \sim_{\mathbf{L}} v$ implies $\mathbf{p}(u) - \mathbf{p}(v) \in \mathcal{L} \oplus \mathcal{S}$. The converse is true if $\mathbf{L} = \mathbf{L}_\lambda$.
+
+A balanced (equivalent) pair $|{}^{i}_{i}|$, for $i \in A$, is called a coincidence. We say that a balanced pair $|{}^{u}_{v}|$ leads to a coincidence if there exists $m$ such that the reduction of $|{}^{\varphi^m(u)}_{\varphi^m(v)}|$ contains a coincidence. Notice that coincidences lead to coincidences, since $|{}^{\varphi(i)}_{\varphi(i)}|$ has nothing but coincidences in its reduction.
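Whether a pair leads to a coincidence is a reachability question in the finite set of irreducible pairs, so it can be sketched as a graph search (our helper names; the search walks the edges of the graph $\mathcal{G}$ of the next page, from a pair to the letters of its substituted-and-reduced image). Run on Example 3.1:

```python
from collections import Counter

PHI = {"1": "112", "2": "12"}   # the substitution of Example 3.1

def substitute(word):
    return "".join(PHI[c] for c in word)

def reduce_pair(u, v):
    """Split a balanced pair into irreducible balanced pairs."""
    pairs, start = [], 0
    top, bot = Counter(), Counter()
    for i in range(len(u)):
        top[u[i]] += 1
        bot[v[i]] += 1
        if top == bot:
            pairs.append((u[start:i + 1], v[start:i + 1]))
            start = i + 1
            top, bot = Counter(), Counter()
    return pairs

def leads_to_coincidence(pair):
    """Depth-first search for a coincidence |i / i| reachable from `pair`."""
    seen, stack = set(), [pair]
    while stack:
        u, v = stack.pop()
        if (u, v) in seen:
            continue
        seen.add((u, v))
        if len(u) == 1 and u == v:      # a coincidence
            return True
        stack.extend(reduce_pair(substitute(u), substitute(v)))
    return False

I = {("1", "1"), ("12", "21"), ("2", "2")}   # I(w) from Example 3.1
all_lead = all(leads_to_coincidence(p) for p in I)
```

For Example 3.1 every pair in $I(w)$ leads to a coincidence, which is the hypothesis of Theorem 3.10(a) below.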
+
+**Theorem 3.10.** Let $\varphi$ be a primitive substitution such that $\mathbf{u} = u_0u_1 \dots$ is a right infinite fixed word and let $\mathbf{L}$ be a length vector.
+
+(a) If for some prefix $w$ the bpa$(w, \mathbf{L})$ terminates and every equivalent pair in $I(w, \mathbf{L})$ leads to a coincidence, then $(\Omega_\varphi, \Gamma^t, \mathbf{L}_\lambda)$ has pure discrete spectrum.
+---PAGE_BREAK---
+
+(b) If the bpa$(w, \mathbf{L})$ terminates for some prefix $w = u_0...u_m$
+such that $u_{m+1} = u_0$, and $(\Omega_\varphi, \Gamma^t, \mathbf{L}_\lambda)$ has pure discrete
+spectrum, then every balanced pair in $I(w, \mathbf{L})$ leads to a
+coincidence.
+
+Before beginning the proof of Theorem 3.10, we make the following observations regarding the densities of coincident pairs. Let $\mathbf{u} = u_0u_1\dots$ be a fixed right-sided sequence. Let $z$ be a prefix of $\mathbf{u}$ and write $\sigma^{|z|}\mathbf{u} = v_0v_1\dots$. Let $D(z) = \{u_i : u_i = v_j \text{ for some } j \text{ with } u_0 \dots u_{i-1} \sim_{\mathbf{L}} v_0 \dots v_{j-1}\}$. We define a density function,
+
+$$ \mathrm{dens}_{\mathbf{L}}(D(z)) = \lim_{k \to \infty} \frac{\mathbf{L} \cdot \mathbf{p}(D(z) \cap u_0 \dots u_k)}{\mathbf{L} \cdot \mathbf{p}(u_0 \dots u_k)}, $$
+
+if the limit exists. The existence of this limit follows from the unique ergodicity of $(\Omega_\varphi, \Gamma^t, \mathbf{L})$. Notice that for $\mathbf{L} = \mathbf{L}_1$, this definition of density agrees with that used in [12] for the (irreducible) balanced pair algorithm, and for $\mathbf{L} = \mathbf{L}_\lambda$ it agrees with that of [13] for the overlap algorithm. Now, by Proposition 3.6, coincident pairs for $\mathcal{E}(\mathbf{L})$ are also coincident pairs for $\mathcal{E}(\mathbf{L}_\lambda)$.
+
+**Remark 3.11.** The proof of this theorem differs from that of the irreducible case only in the way in which we define density and the set of irreducible balanced pairs. We therefore only include a sketch of the proof that follows closely a sketch provided in [12] for the irreducible case. The full details of that case have been worked out in [5]. A theorem of this sort was proved by [9], though coincidences and the balanced pair algorithm go back to [4] and [10], respectively. Part (a) is largely contained in [11].
+
+*Proof:* Let $w$ be a prefix of the fixed word $\mathbf{u}$, and recall the set $D(z)$ and the density $\operatorname{dens}_{\mathbf{L}}(D(z))$ defined above. We will be interested in $\text{dens}_{\mathbf{L}}(D(\varphi^l(w)))$ as $l \to \infty$. Let
+
+$$
+\mathbf{u}^{(l)} = \mu_1^{(l)} \mu_2^{(l)} \dots
+$$
+
+be the reduction of $|{}^{\mathbf{u}}_{\sigma^{|\varphi^l(w)|}\mathbf{u}}|$ into irreducible equivalent pairs. For an equivalent pair $\beta = |{}^{u}_{v}|$, let $|\beta| = \mathbf{L} \cdot \mathbf{p}(u)$ and $\delta(\beta) = \{u_i \in u : u_i = v_j \text{ for some } j \text{ with } u_0 \dots u_{i-1} \sim_{\mathbf{L}} v_0 \dots v_{j-1}\}$. Then
+
+$$
+\mathrm{dens}_{\mathbf{L}}(D(\varphi^l(w))) = \lim_{N \to \infty} \frac{\sum_{j=1}^{N} \mathbf{L} \cdot \mathbf{p}(\delta(\mu_j^{(l)}))}{\sum_{j=1}^{N} \mathbf{L} \cdot \mathbf{p}(\mu_j^{(l)})}.
+$$
+
+Now consider the substitution $\hat{\varphi}$ on the set of irreducible balanced pairs $I(w, \mathbf{L})$. By definition, clearly $\mathbf{u}^{(l)} \in I(w, \mathbf{L})^{\mathbb{N}}$ and $\mathbf{u}^{(l)} =$
+---PAGE_BREAK---
+
+$\hat{\varphi}^l(\mathbf{u}^{(0)})$, where $\mathbf{u}^{(0)}$ is the reduction of $|{}^{\mathbf{u}}_{\sigma^{|w|}\mathbf{u}}|$ into irreducible equivalent pairs.
+
+There is a directed graph $\mathcal{G}(\hat{\varphi})$ associated with the substitution $\hat{\varphi}$. Its vertices are labelled by the members of $I(w, \mathbf{L})$, and for every vertex $\beta$ there are directed edges from $\beta$ to each of the letters of $\hat{\varphi}(\beta)$, with multiplicities.
+
+(a) Let $\beta \in I(w, \mathbf{L})$ be an irreducible equivalent pair which is not a coincidence. By assumption there is a path in the graph $\mathcal{G}(\hat{\varphi})$ leading from $\beta$ to a coincidence. Since all the edges from coincidences lead to coincidences, a standard argument shows that the frequency of the symbol $\beta$ in $\mathbf{u}^{(l)} = \hat{\varphi}^l(\mathbf{u}^{(0)})$ goes to zero geometrically fast as $l \to \infty$. Since $I(w, \mathbf{L})$ is finite and $\mathbf{L} \cdot \mathbf{p}(\delta(\beta)) < |\beta|$ if and only if $\beta$ is a non-coincidence, it follows that
+
+$$1 - \operatorname{dens}_{\mathbf{L}}(D(\varphi^l(w))) \leq \text{const} \cdot \gamma^l$$
+
+for some $\gamma \in (0, 1)$. This implies that $\mathbf{u}$ is mean-almost periodic so we can conclude (similar to [11, VI.25]) that $\varphi$ has pure discrete spectrum. We note that the argument only relied on $I(w, L)$ being finite and that the equivalence relation does not create false coincidences.
+
+(b) Let $w = u_0 \dots u_m$ be such that $u_{m+1} = u_0$ and let $p_l = \mathbf{L} \cdot \mathbf{p}(\varphi^l(w))$. It follows from [7] (see also [13, Theorem 4.3]) that $\lim_{l \to \infty} e^{2\pi i \alpha p_l} = 1$ for any eigenvalue $e^{2\pi i \alpha}$ of the dynamical system $(\Omega_\varphi, \Gamma^t, \mathbf{L}, \mu)$. If the spectrum is pure discrete, then the eigenfunctions span a dense subset of $L^2(\Omega_\varphi)$, so that $\lim_{l \to \infty} \|U_\varphi^{p_l}f - f\|_2 = 0$ for every $f \in L^2(\Omega_\varphi)$. Taking $f$ to be the characteristic function of the cylinder set corresponding to $i \in A$, with heights prescribed by $\mathbf{L}$, we obtain (see [11, just after Lemma VI.26]) $\lim_{l \to \infty} \operatorname{dens}_{\mathbf{L}}(D(\varphi^l(w))) = 1$. On the other hand, suppose that there is an irreducible equivalent pair in $I(w, \mathbf{L})$ which does not lead to a coincidence. Then there exists an irreducible component $\mathcal{G}_0$ of the graph $\mathcal{G}(\hat{\varphi})$ which contains no coincidences. There exists $l_0$ such that for every $l \ge l_0$, elements of the component $\mathcal{G}_0$ occur in $\mathbf{u}^{(l)}$ with positive frequency. (Note that different elements of $\mathcal{G}_0$ may occur for different $l$.) Further, it can be shown that this frequency is bounded away from zero as $l \to \infty$. Since $\mathbf{L} \cdot \mathbf{p}(\delta(\beta)) < |\beta|$ for all $\beta \in \mathcal{G}_0$, it follows that $\operatorname{dens}_{\mathbf{L}}(D(\varphi^l(w))) \not\to 1$, which is a contradiction. $\square$
+---PAGE_BREAK---
+
+We also note the following relationship between the balanced pair algorithm and the overlap algorithm in the case that the tiling space is self-similar.
+
+**Proposition 3.12.** Let $\varphi$ be a primitive substitution such that $\varphi(1)$ begins with 1. Then the overlap-algorithm associated to $x = x(w)$ terminates with half-coincidences if and only if bpa($w$, $L_{\lambda}$) terminates.
+
+*Proof:* Assume the bpa($w$, $\mathbf{L}_{\lambda}$) terminates. Then the distance between half-coincidences (endpoints of our equivalent pairs) arising from looking at $(T, T - \lambda^n x)$ is bounded, where $x = x(w) = \mathbf{L}_{\lambda} \cdot \mathbf{p}(w)$. Theorem 5.6 of [12] implies that the overlap algorithm associated to $x(w)$ terminates with half-coincidences.
+
+Assume the overlap algorithm associated to $x = \mathbf{L}_{\lambda} \cdot \mathbf{p}(w)$ terminates with half-coincidences. Again by Theorem 5.6 of [12], the distance between half-coincidences arising from $(T, T - \lambda^n x)$ is bounded. Hence, bpa($w$, $\mathbf{L}_{\lambda}$) terminates. $\square$
+
+**Example 3.13 (A non-terminating example).** We now give an example which will not terminate even with the extended equivalence relations presented here. This example is a rewriting of the Morse-Thue system in which we have forced coincidences. We will use $\mathbf{L}_{\lambda}$ as our length vector, so that we are conjugate to the original Morse-Thue system and hence do not have pure discrete spectrum. This example therefore cannot terminate for any version of the balanced pair algorithm.
+
+The substitution is given by:
+
+$$
+\begin{array}{rcl}
+1 & \rightarrow & 1234 \\
+2 & \rightarrow & 124 \\
+3 & \rightarrow & 13234 \\
+4 & \rightarrow & 1324
+\end{array}
+$$
+
+Here, the P.F. eigenvalue is 4 with left eigenvector $(3, 2, 4, 3)$. (Note that this vector also corresponds to the $\mathbb{Z}$-action of the original Morse-Thue system.) The other important eigenvalue (the remaining two are zero) is 1, with right eigenvector $(1, 1, -2, 1)^T$. Thus, 124 is equivalent to 33; however, these words do not cluster close enough to each other to aid in terminating the balanced pair algorithm. Notice also that the system generated by the length vector $(1, 1, 1, 1)$ is not conjugate to the system generated by $\mathbf{L}_{\lambda}$. But by
+---PAGE_BREAK---
+
+Corollary 3.7, the balanced pair algorithm will also not terminate for this length vector. In fact, using techniques from [3] to directly compute its spectrum, one can see that it will not have pure discrete spectrum either.
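The eigen data of Example 3.13 can again be verified by direct computation. A sketch (our helper names): $(3, 2, 4, 3)$ is a left eigenvector for the P.F. eigenvalue 4, $(1, 1, -2, 1)^T$ is a right eigenvector for the eigenvalue 1, and the population difference between 124 and 33 is exactly that eigenvector, which $(3, 2, 4, 3)$ annihilates.

```python
PHI = {"1": "1234", "2": "124", "3": "13234", "4": "1324"}
letters = "1234"

def population(word):
    return [word.count(c) for c in letters]

# Transition matrix: columns are population vectors of the letter images.
A = [list(row) for row in zip(*(population(PHI[c]) for c in letters))]

L = [3, 2, 4, 3]
left = [sum(L[i] * A[i][j] for i in range(4)) for j in range(4)]    # L . A
v = [1, 1, -2, 1]
right = [sum(A[i][j] * v[j] for j in range(4)) for i in range(4)]   # A . v
z = [a - b for a, b in zip(population("124"), population("33"))]
```

Here `left` equals $4\mathbf{L}$, `right` equals `v`, and `z == v` with $\mathbf{L} \cdot \mathbf{z} = 0$, matching the claims in the example.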
+
+### 4. OPEN PROBLEMS AND CONJECTURES
+
+The motivation for studying the effects of the new equivalence relation on the balanced pair algorithm was not primarily to understand reducible substitutions, but mainly to aid in analyzing substitutions which have been collared or rewritten. We use the example below as a test case for a general Pisot substitution. Beginning with a Pisot substitution, we rewrite it into an equivalent substitution in which the image of every letter begins with the same letter. In this way we force every balanced pair to lead to a coincidence. It then remains to show that the balanced pair algorithm for the new substitution terminates in order to show that all Pisot substitutions have pure discrete spectrum. A difficulty arises in that rewriting increases the size of the alphabet and can therefore add additional eigenvalues of 0 and $\pm 1$. The zeros do not concern us, but the roots of unity might. Considering the $\mathbb{Z}$-action on our original substitution by changing tiles to unit lengths produces an integer vector in the rewritten substitution which will not have a component in the eigenspace of the roots of unity. Further, these eigenspaces generate equivalent words, so it is our hope that the equivalence classes will always force the algorithm to terminate.
+
+**Example 4.1 (A rewritten Pisot substitution).** Consider the substitution $\varphi$ given by $a \to abb$ and $b \to ba$. Using the rewriting procedure of [1], we square $\varphi$ and generate a substitution on $1 = abbb, 2 = ab, 3 = aabbb$ and $4 = aabb$ by:
+
+$$
+\begin{array}{rcl}
+1 & \rightarrow & 122334 \\
+2 & \rightarrow & 1224 \\
+3 & \rightarrow & 12322334 \\
+4 & \rightarrow & 1232234
+\end{array}
+ $$
+
+The eigenvalues of the transition matrix of this substitution are $0, 3 \pm 2\sqrt{2}$ and $1$. Since the original substitution is Pisot, it is insensitive to length changes in $a$ and $b$. The new substitution will also be insensitive to any length change which is consistent with length changes in $a$ and $b$. For example, setting $a = b = 1$ generates the length vector $\mathbf{L} = (4, 2, 5, 4)$ and will in fact miss the
+---PAGE_BREAK---
+
+eigenspace associated to 1. (Note, however, that $\mathbf{L}_1 = (1, 1, 1, 1)$ does not.) The right eigenvector for the eigenvalue $1$, $v_1 = (1, 1, -2, 1)^T$, is perpendicular to $\mathbf{L}$, so that $124 \sim_{\mathbf{L}} 33$. Because $\mathbf{L}_1$ is not perpendicular to $v_1$, no such identification is available for $\mathbf{L}_1$, and this makes the bpa($w, \mathbf{L}_1$) difficult to run. Some tedious calculations reveal that bpa($w, \mathbf{L}$), where one must keep an eye out for the equivalence relations, will terminate. This example produces 30 different irreducible balanced pairs, the longest of which contains a word of length 11. We suspect that bpa($w, \mathbf{L}_1$) will not terminate, but this has also not yet been shown.
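Example 4.1 can be checked the same way. A sketch (our helper names): the length vector is computed directly from the letter definitions $1 = abbb$, $2 = ab$, $3 = aabbb$, $4 = aabb$ with $a = b = 1$, and it is perpendicular to the right 1-eigenvector $v_1$, while the unit length vector $(1,1,1,1)$ is not.

```python
PHI = {"1": "122334", "2": "1224", "3": "12322334", "4": "1232234"}
letters = "1234"

def population(word):
    return [word.count(c) for c in letters]

# Transition matrix: columns are population vectors of the letter images.
A = [list(row) for row in zip(*(population(PHI[c]) for c in letters))]

v1 = [1, 1, -2, 1]
Av1 = [sum(A[i][j] * v1[j] for j in range(4)) for i in range(4)]

# Lengths of the rewritten letters when a = b = 1:
L = [len(w) for w in ("abbb", "ab", "aabbb", "aabb")]

def dot(x, y):
    return sum(a * b for a, b in zip(x, y))
```

Here `Av1 == v1` (so $v_1$ is indeed a 1-eigenvector), `L == [4, 2, 5, 4]`, `dot(L, v1) == 0`, and `dot([1, 1, 1, 1], v1) != 0`, consistent with the discussion above.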
+
+Generally, if the balanced pair algorithm terminates for a Pisot substitution, [12] gives us that the distance between half-coincidences is bounded. For the rewritten substitution, the new tiles are compositions of smaller old tiles and these half-coincidences may occur "internally." We suspect this cannot happen and that the algorithm must terminate for the new substitution as well. More precisely we have:
+
+**Conjecture 4.2.** Let $\varphi$ be a Pisot substitution on $n$ letters. Let $\tilde{\varphi}$ be a rewriting of $\varphi$ so that, for some letters $b, e$ in the rewritten alphabet, $\tilde{\varphi}(i) = b \dots e$ for all $i$. Assume that the balanced pair algorithm for the original Pisot substitution $\varphi$ terminates. Then the balanced pair algorithm for the rewritten substitution $\tilde{\varphi}$ also terminates for the length vector $\mathbf{L}_\lambda$.
+
+An immediate corollary of this would be that if the balanced pair
+algorithm of a Pisot type substitution terminates, then it must do
+so with coincidences.
+
+**Acknowledgments:** The author would like to thank Lorenzo Sadun for his numerous contributions to this work and helpful suggestions and Boris Solomyak for some especially illuminating correspondences.
+
+**REFERENCES**
+
+[1] M. Barge and B. Diamond, *A complete invariant for the topology of one-dimensional substitution tiling spaces*, Ergodic Theory Dynam. Systems **21** (2001), no. 5, 1333–1358.
+
+[2] M. Barge and B. Diamond, *Coincidence for substitutions of Pisot type*, Bull. Soc. Math. France **130** (2002), no. 4, 619–626.
+
+[3] A. Clark and L. Sadun, *When size matters: subshifts and their related tiling spaces*, Ergodic Theory Dynam. Systems **23** (2003), no. 4, 1043–1057.
+---PAGE_BREAK---
+
+[4] F. M. Dekking, *The spectrum of dynamical systems arising from substitutions of constant length*, Zeit. Wahr. **41** (1977/78), no. 3, 221–239.
+
+[5] M. Hollander, *Linear numeration systems, finite $\beta$-expansions, and discrete spectrum for substitution systems*. Ph.D. Thesis. University of Washington, 1996.
+
+[6] M. Hollander and B. Solomyak, *Two-symbol Pisot substitutions have pure discrete spectrum*, Ergodic Theory Dynam. Systems **23** (2003), no. 2, 533–540.
+
+[7] B. Host, *Valeurs propres des systèmes dynamiques définis par des substitutions de longueur variable*, Ergodic Theory Dynam. Systems **6** (1986), no. 4, 529–540.
+
+[8] A. N. Livshits, *On the spectra of adic transformations of Markov compact sets*, (Russian) Uspekhi Mat. Nauk **42** (1987), no. 3 (255), 189–190.
+
+[9] A. N. Livshits, *Some examples of adic transformations and automorphisms of substitutions*, selected translations, Selecta Math. Soviet. **11** (1992), no. 1, 83–104.
+
+[10] P. Michel, *Coincidence values and spectra of substitutions*, Zeit. Wahr. **42** (1978), no. 3, 205–227.
+
+[11] M. Queffélec, *Substitution Dynamical Systems – Spectral Analysis*, Lecture Notes in Mathematics, 1294. Berlin: Springer-Verlag, 1987.
+
+[12] V. F. Sirvent and B. Solomyak, *Pure discrete spectrum for one-dimensional substitution systems of Pisot type*, Canadian Math. Bull. **45** (2002), no. 4, 697–710.
+
+[13] B. Solomyak, *Dynamics of self-similar tilings*, Ergodic Theory Dynam. Systems **17** (1997), no. 3, 695–738.
+
+[14] B. Solomyak, *Corrections to: "Dynamics of self-similar tilings,"* Ergodic Theory Dynam. Systems **19** (1999), no. 6, 1695.
+
+CM 134; ROSE-HULMAN INSTITUTE OF TECHNOLOGY; 5500 WABASH AVENUE; TERRE HAUTE, IN 47803
+
+*E-mail address: martense@rose-hulman.edu*
\ No newline at end of file
diff --git a/samples/texts_merged/783284.md b/samples/texts_merged/783284.md
new file mode 100644
index 0000000000000000000000000000000000000000..d9961f0ce398d4963934913730f4942674b8480c
--- /dev/null
+++ b/samples/texts_merged/783284.md
@@ -0,0 +1,1340 @@
+
+---PAGE_BREAK---
+
+STEADY-STATE SIMULATION OF REFLECTED BROWNIAN
+MOTION AND RELATED STOCHASTIC NETWORKS¹
+
+BY JOSE BLANCHET AND XINYUN CHEN
+
+*Columbia University and Stony Brook University*
+
+This paper develops the first class of algorithms that enable unbiased estimation of steady-state expectations for multidimensional reflected Brownian motion. In order to explain our ideas, we first consider the case of compound Poisson (possibly Markov modulated) input. In this case, we analyze the complexity of our procedure as the dimension of the network increases and show that, under certain assumptions, the algorithm has polynomial-expected termination time. Our methodology includes procedures that are of interest beyond steady-state simulation and reflected processes. For instance, we use wavelets to construct a piecewise linear function that can be guaranteed to be within $\epsilon$ distance (deterministic) in the uniform norm to Brownian motion in any compact time interval.
+
+**1. Introduction.** This paper studies simulation methodology that allows estimation, without any bias, of steady-state expectations of multidimensional reflected processes. Our algorithms are presented with companion rates of convergence. Multidimensional reflected processes, as we shall explain, are very important for the analysis of stochastic queueing networks. However, in order to motivate the models that we study, let us quickly review a formulation introduced by Kella (1996).
+
+Consider a network of $d$ queueing stations indexed by $\{1, 2, \dots, d\}$. Suppose that jobs arrive to the network according to a Poisson process with rate $\lambda$, denoted by $(N(t): t \ge 0)$. Specifically, the $k$th arrival brings a vector of job requirements $\mathbf{W}(k) = (W_1(k), \dots, W_d(k))^T$ which are nonnegative random variables (r.v.'s), and they add to the workload at each station right at the moment of arrival. So if the $k$th arrival occurs at time $t$, the workload of the $i$th station (for $i \in \{1, \dots, d\}$) increases by $W_i(k)$ units right at time $t$. We assume that $\mathbf{W} = (\mathbf{W}(k): k \ge 1)$ is a sequence of i.i.d. (independent and identically distributed) nonnegative r.v.'s. For fixed $k$, the coordinates of $\mathbf{W}(k)$ are not necessarily independent; however, $\mathbf{W}$ is assumed to be independent of $N(\cdot)$.
+
+Throughout the paper we shall use boldface to write vector quantities, which are encoded as columns. For instance, we write $\mathbf{y} = (y_1, \dots, y_d)^T$.
+
+Received January 2012; revised September 2014.
+
+¹Supported in part by the Grants NSF-CMMI-0846816 and NSF-CMMI-1069064.
+MSC2010 subject classifications. 60J65, 65C05.
+
+*Key words and phrases.* Reflected Brownian motion, steady-state simulation, dominated coupling from the past, wavelet representation.
+---PAGE_BREAK---
+
+The total amount of external work that arrives to the *i*th station up to (and including) time *t* is denoted by
+
+$$J_i(t) = \sum_{k=1}^{N(t)} W_i(k).$$
+
+Now, assume that the workload at the *i*th station is processed as a fluid by the server at a rate $r_i$, continuously in time. This means that if the workload in the *i*th station remains strictly positive during the time interval $[t, t+dt]$, then the output from station *i* during this time interval equals $r_i dt$. In addition, suppose that a proportion $Q_{i,j} \ge 0$ of the fluid processed by the *i*th station is circulated to the *j*th server. We have that $\sum_{j=1}^d Q_{i,j} \le 1$, $Q_{i,i} = 0$, and we define $Q_{i,0} = 1 - \sum_{j=1}^d Q_{i,j}$. The proportion $Q_{i,0}$ corresponds to the fluid that goes out of the network from station *i*.
+
+The dynamics stated in the previous paragraph are expressed formally by a differential equation as follows. Let $Y_i(t)$ denote the workload content of the *i*th station at time *t*. Then for given $Y_i(0)$, we have
+
+$$
+\begin{align}
+dY_i(t) &= dJ_i(t) - r_i I(Y_i(t) > 0) dt + \sum_{j:j \neq i} Q_{j,i} r_j I(Y_j(t) > 0) dt \tag{1} \\
+&= dJ_i(t) - r_i dt + \sum_{j:j \neq i} Q_{j,i} r_j dt \nonumber \\
+&\quad + r_i I(Y_i(t) = 0) dt - \sum_{j:j \neq i} Q_{j,i} r_j I(Y_j(t) = 0) dt
+\end{align}
+$$
+
+for $i \in \{1, \dots, d\}$. It is well known that the resulting vector-valued workload process, $\mathbf{Y}(t) = (Y_1(t), \dots, Y_d(t))^T$, is Markovian. The differential equation (1) admits a unique piecewise linear solution that is right-continuous and has left limits (RCLL). This can be established by elementary methods, and we shall comment on far-reaching extensions shortly.
+
+The equations given in (1) take a neat form in matrix notation. This notation is convenient when examining stability issues and other topics which are related to the steady-state simulation problem we investigate. In particular, let $\mathbf{r} = (r_1, \dots, r_d)^T$ be the column vector corresponding to the service rates, write $R = (I - Q)^T$ and define
+
+$$\mathbf{X}(t) = \mathbf{J}(t) - R\mathbf{r}t,$$
+
+where $\mathbf{J}(t)$ is a column vector with its $i$th coordinate equal to $J_i(t)$. Then equation (1) can be expressed as
+
+$$ \mathbf{Y}(t) = \mathbf{Y}(0) + \mathbf{X}(t) + R\mathbf{L}(t), \tag{2} $$
+
+where $\mathbf{L}(t)$ is a column vector with its $i$th coordinate equal to
+
+$$ L_i(t) = \int_0^t r_i I(Y_i(s) = 0) ds. $$
+---PAGE_BREAK---
+
+As mentioned earlier, $\mathbf{Y} = (\mathbf{Y}(t): t \ge 0)$ is a Markov process. Let us assume that $Q^n \to 0$ as $n \to \infty$. This assumption is synonymous with the assumption that the network is open. In detail, for each $i$ such that $\lambda_i > 0$, there exists a path $(i_1, i_2, \dots, i_k)$ satisfying that $\lambda_i Q_{i,i_1} Q_{i_1,i_2} \cdots Q_{i_{k-1},i_k} > 0$ with $i_k = 0$ and $k \le d$. In addition, under this assumption the matrix $R^{-1}$ exists and has nonnegative coordinates. To ensure stability, we assume that $R^{-1}E\mathbf{X}(1) < 0$—inequalities involving vectors are understood coordinate-wise throughout the paper. It follows from Theorem 2.4 of Kella and Ramasubramanian (2012) that $\mathbf{Y}(t)$ converges in distribution to $\mathbf{Y}(\infty)$ as $t \to \infty$, where $\mathbf{Y}(\infty)$ is an r.v. with the (unique) stationary distribution of $\mathbf{Y}(\cdot)$.
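The open-network condition $Q^n \to 0$ can be checked concretely: it makes $R = (I - Q)^T$ invertible with nonnegative inverse, via the Neumann series $R^{-1} = \sum_{n \ge 0} (Q^T)^n$. A small sketch, where the two-station tandem routing matrix is our own illustrative test data (not from the paper); here $Q^2 = 0$, so the series is finite:

```python
Q = [[0.0, 0.5],
     [0.0, 0.0]]        # station 1 routes half its output to station 2

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def transpose(A):
    return [list(row) for row in zip(*A)]

I2 = [[1.0, 0.0], [0.0, 1.0]]
QT = transpose(Q)

R = [[I2[i][j] - QT[i][j] for j in range(2)] for i in range(2)]
# Neumann series for R^{-1} = (I - Q^T)^{-1}; it truncates since Q^2 = 0.
R_inv = [[I2[i][j] + QT[i][j] for j in range(2)] for i in range(2)]
check = mat_mul(R, R_inv)       # should be the identity
```

The inverse has nonnegative entries, as the text asserts for open networks.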
+
+The first contribution of this paper is that we develop an exact sampling algorithm (i.e., simulation without bias) for $\mathbf{Y}(\infty)$. This algorithm is developed in Section 2 of this paper under the assumption that $\mathbf{W}(k)$ has a finite moment-generating function. In addition, we analyze the order of computational complexity (measured in terms of expected random numbers generated) of our algorithm as $d$ increases, and we show that it is polynomially bounded.
+
+Moreover, we extend our exact sampling algorithm to the case in which there is an independent Markov chain driving the arrival rates, the service rates, and the distribution of job sizes at the time of arrivals. This extension is discussed in Section 3.
+
+The workload process $(\mathbf{Y}(t): t \ge 0)$ is a particular case of a reflected (or constrained) stochastic network. Although the models introduced in the previous paragraphs are interesting in their own right, our main interest is in steady-state simulation techniques for reflected Brownian motion. These techniques are obtained by abstracting the construction formulated in (2). This abstraction is presented in terms of a Skorokhod problem, which we describe as follows. Let $\mathbf{X}=(\mathbf{X}(t): t \ge 0)$ with $\mathbf{X}(0) \ge 0$, and let $R$ be an $M$-matrix, so that the inverse $R^{-1}$ exists and has nonnegative coordinates. Solving the Skorokhod problem requires finding a pair of processes $(\mathbf{Y}, \mathbf{L})$ satisfying equation (2), subject to:
+
+(i) $\mathbf{Y}(t) \ge 0$ for each $t$,
+
+(ii) $L_i(\cdot)$ nondecreasing for each $i \in \{1, \dots, d\}$ and $L_i(0) = 0$,
+
+(iii) $\int_0^t Y_i(s) dL_i(s) = 0$ for each $t$.
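For orientation, when $R = I$ the one-dimensional Skorokhod map is explicit, $Y(t) = X(t) - \min(0, \min_{u \le t} X(u))$, and conditions (i)-(iii) can be verified on a discretized path via a Lindley-type recursion. A one-dimensional Python sketch (drift and step size are illustrative):

```python
import numpy as np

# 1-D sketch with R = I: the Lindley-type recursion below produces (Y, L) and we
# check (i) Y >= 0, (ii) L nondecreasing, and the explicit Skorokhod-map formula
# Y(t) = X(t) - min(0, min_{u<=t} X(u)).  Drift and step size are illustrative.
rng = np.random.default_rng(0)
dt = 1e-3
dX = -0.5 * dt + np.sqrt(dt) * rng.standard_normal(10_000)
X = np.concatenate([[0.0], np.cumsum(dX)])

Y = np.zeros_like(X)
L = np.zeros_like(X)
for k in range(1, len(X)):
    push = max(0.0, -(Y[k - 1] + X[k] - X[k - 1]))  # minimal push keeping Y >= 0
    L[k] = L[k - 1] + push
    Y[k] = Y[k - 1] + (X[k] - X[k - 1]) + push      # L increases only when Y hits 0

assert (Y >= 0).all() and (np.diff(L) >= 0).all()
assert np.allclose(Y, X - np.minimum.accumulate(np.minimum(X, 0.0)))
```

The recursion pushes only the minimal amount needed to keep $Y$ nonnegative, which is exactly the complementarity condition (iii) in discrete time.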
+
+Eventually we shall take the input process $\mathbf{X}(\cdot)$ as a Brownian motion with constant drift $\mathbf{v} = E\mathbf{X}(1)$ and nondegenerate covariance matrix $\Sigma$. There then exists a strong solution (i.e., path-by-path and not only in law) to the stochastic differential equation (SDE) (2) subject to the Skorokhod problem constraints (i) to (iii), and the initial condition $\mathbf{Y}(0)$. This was proved by Harrison and Reiman (1981), who introduced the notion of reflected Brownian motion (RBM). When $R$ is an $M$-matrix, $R^{-1}\mathbf{v} < 0$ is a necessary and sufficient condition for the stability of an RBM; see Harrison and Williams (1987). Our algorithm for the RBM is motivated by the fact that in great generality (i.e., only requiring the existence of variances of service
+times and inter-arrival times), the so-called generalized Jackson networks (which are single-server queues connected by Markovian routing) converge weakly to a reflected Brownian motion in a heavy-traffic asymptotic environment, as in Reiman (1984). Moreover, recent papers by Gamarnik and Zeevi (2006) and Budhiraja and Lee (2009) have shown that convergence also occurs at the level of steady-state distributions. Therefore, reflected Brownian motion (RBM) plays a central role in queueing theory.
+
+The second contribution of this paper is the development of an algorithm that allows estimation with no bias of $E[g(\mathbf{Y}(\infty))]$ for positive and continuous functions $g(\cdot)$. Moreover, given $\varepsilon > 0$, we provide a simulation algorithm that outputs a random variable $\mathbf{Y}_\varepsilon(\infty)$ that is guaranteed to be within $\varepsilon$ distance (say in the Euclidean norm) of an unbiased sample $\mathbf{Y}(\infty)$ from the steady-state distribution of RBM. This contribution is developed in Section 4 of this paper. We show that the number of Gaussian random variables generated to produce $\mathbf{Y}_\varepsilon(\infty)$ is of order $O(\varepsilon^{-a_c-2}\log(1/\varepsilon))$ as $\varepsilon \searrow 0$, where $a_c$ is a constant depending only on the covariance matrix of the Brownian motion; see Section 4.4. In the special case when the $d$-dimensional Brownian motion has nonnegative correlations, the number of random variables generated is of order $O(\varepsilon^{-d-2}\log(1/\varepsilon))$.
+
+Our methods allow estimation without bias of $E[g(\mathbf{Y}(t_1), \mathbf{Y}(t_2), \dots, \mathbf{Y}(t_m))]$ for a positive function $g(\cdot)$ continuous almost everywhere and for any $0 < t_1 < t_2 < \dots < t_m$. Simulation of RBM has been studied in the literature. In the one-dimensional setting it is not difficult to sample RBM exactly; this follows, for instance, from the methods in Devroye (2009). The paper of Asmussen, Glynn and Pitman (1995) also studies the one-dimensional case and provides an enhanced Euler-type scheme with an improved convergence rate. The work of Burdzy and Chen (2008) provides approximations of reflected Brownian motion with orthogonal reflection (the case in which $R = I$).
+
+With regard to steady-state computations, the work of Dai and Harrison (1992) provides numerical methods for approximating the steady-state expectation by numerically evaluating the density of $\mathbf{Y}(\infty)$. In contrast to our methods, Dai and Harrison's procedure is based on projections in mean-squared norm with respect to a suitable reference measure. Since such an algorithm is nonrandomized, it is therefore, in some sense, preferable to simulation approaches, which are necessarily randomized. However, the theoretical justification of Dai and Harrison's algorithm relies on a conjecture that is believed to be true but has not been rigorously established; see Dai and Dieker (2011). In addition, no rate of convergence is known for this procedure, even assuming that the conjecture is true.
+
+Finally, we briefly discuss some features of our procedure and our strategy at a high level. There are two sources of bias that arise in the setting of steady-state simulation of RBM. First, discretization error in the simulation of the process $\mathbf{Y}$ is inevitable due to the continuous nature of Brownian motion, especially when the reflection matrix $R$ is not the identity. This issue is present even in finite time
+horizon. The second issue is, naturally, that we are concerned with steady-state expectations which inherently involve, in principle, an infinite time horizon.
+
+In order to concentrate on removing the bias arising from the infinite horizon, we first consider the reflected compound Poisson case, where we can simulate the solution of the Skorokhod problem on any finite interval exactly and without any bias. Our strategy is based on dominated coupling from the past (DCFTP). This technique was proposed by Kendall (2004), following the introduction of coupling from the past by Propp and Wilson (1996). The idea behind DCFTP is to construct suitable upper- and lower-bound processes that can be simulated in stationarity and backward in time. We take the lower bound to be the process identically equal to zero. We use results from Harrison and Williams (1987) (for the RBM) and Kella (1996) (for the reflected compound Poisson process) to construct an upper-bound process based on the solution of the Skorokhod problem with reflection matrix $R = I$. It turns out that simulating the stationary upper-bound process backward involves sampling the infinite-horizon maximum (coordinate-wise) from $t$ to infinity of a $d$-dimensional compound Poisson process with negative drift. We use sequential acceptance/rejection techniques (based on the exponentially tilted distributions used in rare-event simulation) to simulate from this infinite-horizon maximum process.
+
+Then we turn to RBM. A problem that arises, in addition to the discretization error due to the continuous nature of Brownian motion, is the fact that in dimensions higher than one (as in our setting) RBM never reaches the origin. Nevertheless, it comes arbitrarily close to the origin, and we leverage this property to obtain a sample that is guaranteed to be $\varepsilon$-close to a genuine steady-state sample. To deal with the discretization error, we use wavelet-based techniques, taking advantage of a well-known wavelet construction of Brownian motion; see Steele (2001).
+
+Instead of simply simulating Brownian motion using the wavelets, which is the standard practice, we simulate the wavelet coefficients jointly with suitably defined random times. Consequently, we are able to guarantee with probability one that our wavelet approximation is $\varepsilon$-close in the uniform metric to Brownian motion in any compact time interval (note that $\varepsilon$ is deterministic and defined by the user; see Section 4.2).
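For concreteness, the wavelet (Lévy-Ciesielski) construction referenced above refines a Brownian path by adding independent Schauder corrections at dyadic midpoints, leaving coarser dyadic values untouched; this is what makes level-by-level error control possible. A minimal Python sketch of the construction (grid depths are illustrative, and no $\varepsilon$-control is implemented here):

```python
import numpy as np

# Midpoint (Levy-Ciesielski / Schauder) refinement of Brownian motion on [0, 1]:
# each level adds an independent N(0, dt/4) correction at every dyadic midpoint,
# and never moves the already-sampled coarser dyadic values.  Depths illustrative.
rng = np.random.default_rng(1)

def brownian_levels(n_levels):
    t = np.array([0.0, 1.0])
    B = np.array([0.0, rng.standard_normal()])     # B(0) = 0, B(1) ~ N(0, 1)
    paths = [(t, B)]
    for _ in range(n_levels):
        dt = t[1:] - t[:-1]
        mid_t = (t[:-1] + t[1:]) / 2
        mid_B = (B[:-1] + B[1:]) / 2 + np.sqrt(dt / 4) * rng.standard_normal(len(mid_t))
        idx = list(range(1, len(t)))
        t = np.insert(t, idx, mid_t)               # interleave the midpoints
        B = np.insert(B, idx, mid_B)
        paths.append((t, B))
    return paths

paths = brownian_levels(6)
t3, B3 = paths[3]
t6, B6 = paths[6]
# refinement leaves the level-3 dyadic values untouched
assert np.allclose(t3, t6[::8]) and np.allclose(B3, B6[::8])
```

The conditional variance $dt/4$ is that of a Brownian bridge midpoint, so each level is consistent with the coarser ones.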
+
+Finally, we use the following fact: the solution $Y$ to the Skorokhod problem, viewed as a function of the input process $X$, is Lipschitz continuous under the uniform topology, with a computable Lipschitz constant. These observations, combined with an additional randomization in the spirit of Beskos, Peluchetti and Roberts (2012), allow estimation with no bias of the steady-state expectation.
+
+We strongly believe that the use of tolerance-enforced coupling based on wavelet constructions, as we illustrate here, can be extended more broadly in the numerical analysis of the Skorokhod and related problems.
+
+We perform some numerical experiments to validate our algorithms. Our results are reported in Section 5. Additional numerical experiments are pursued in a companion paper, in which we also discuss implementation issues and some adaptations that are especially important in the case of RBM.
+
+The rest of the paper is organized as follows: in Section 2, we consider the problem of exact simulation from the steady-state distribution of the reflected compound Poisson process discussed earlier; we then show how our procedure is adapted without major complications to Markov-modulated input in Section 3; in Section 4, we continue explaining the main strategy to be used for the reflected Brownian motion case; finally, the numerical experiments are given in Section 5.
+
+**2. Exact simulation of reflected compound Poisson processes.** The model that we consider has been explained at the beginning of the Introduction. We summarize the assumptions that we shall impose next.
+
+*Assumptions:*
+
+(A1) the matrix $R$ is an $M$-matrix;
+
+(A2) $R^{-1}E\mathbf{X}(1) < 0$ (recall that inequalities apply coordinate-wise for vectors);
+
+(A3) there exists $\theta > 0$, $\theta \in \mathbb{R}^d$ such that
+
+$$E[\exp(\theta^T \mathbf{W}(k))] < \infty.$$
+
+We have commented on (A1) and (A2) in the Introduction. Assumption (A3) is important in order to do exponential tilting when we simulate a stationary version of the upper-bound process.
+
+In addition to (A1) to (A3), we shall assume that one can simulate from the exponentially tilted distributions associated with the marginal distributions of $\mathbf{W}(k)$. That is, we can simulate from $P_{\theta_i}(\cdot)$ such that
+
+$$P_{\theta_i}(W_1(k) \in dy_1, \dots, W_d(k) \in dy_d) \\ = \frac{\exp(\theta_i y_i)}{E \exp(\theta_i W_i(k))} P(W_1(k) \in dy_1, \dots, W_d(k) \in dy_d),$$
+
+where $\theta_i \in \mathbb{R}$ and $E \exp(\theta_i W_i(k)) < \infty$. We will determine the value of $\theta_i$ through assumption (A3b), as given below.
+
+Let us briefly explain our program, which is based on DCFTP. First, we will construct a *stationary* dominating process ($\mathbf{Y}^+(s): -\infty < s \le 0$) that is *coupled* with our target process, that is, a stationary version of the process ($\mathbf{Y}(s): -\infty < s \le 0$) satisfying the Skorokhod problem (2). Under coupling, the dominating process satisfies
+
+$$ (3) \qquad R^{-1}\mathbf{Y}(s) \le R^{-1}\mathbf{Y}^{+}(s), $$
+
+for each $s \le 0$. We then simulate the process $\mathbf{Y}^{+}(\cdot)$ backward up to a time $-\tau \le 0$ such that $\mathbf{Y}^{+}(-\tau) = 0$. Following the tradition of the CFTP literature, we call a time $-\tau$ such that $\mathbf{Y}^{+}(-\tau) = 0$ a coalescence time. Since $\mathbf{Y}(s) \ge 0$, inequality (3)
+yields $\mathbf{Y}(-\tau) = 0$. The next and final step in our strategy is to evolve the solution $\mathbf{Y}(s)$ of the Skorokhod problem (2) forward from $s = -\tau$ to $s = 0$ with $\mathbf{Y}(-\tau) = 0$, using the same input that drives the construction of $(\mathbf{Y}^+(s): -\tau \le s \le 0)$ so that $\mathbf{Y}$ and $\mathbf{Y}^+$ are coupled. The output is therefore $\mathbf{Y}(0)$, which is stationary. The precise algorithm will be summarized in Section 2.2.
+
+So, a crucial part of the whole plan is the construction of $\mathbf{Y}^+(\cdot)$ together with a coupling that guarantees inequality (3). In addition, the coupling must be such that one can use the driving randomness that defines $\mathbf{Y}^+(\cdot)$ directly as an input to the Skorokhod problem (2), which is then used to evolve $\mathbf{Y}(\cdot)$. We start by constructing a time-reversed stationary version of a suitable dominating process $\mathbf{Y}^+$.
+
+**2.1. Construction of the dominating process.** In order to construct the dominating process $\mathbf{Y}^+(\cdot)$, we first need the following result, which is Lemma 3.1 of Kella (1996).
+
+LEMMA 1. There exists $\mathbf{z}$ such that $E\mathbf{X}(1) < \mathbf{z}$ and $R^{-1}\mathbf{z} < 0$. Moreover, if
+
+$$ \mathbf{Z}(t) = \mathbf{X}(t) - \mathbf{z}t, $$
+
+and $\mathbf{Y}^+(\cdot)$ is the solution to the Skorokhod problem
+
+$$ \begin{gathered} d\mathbf{Y}^+(t) = d\mathbf{Z}(t) + d\mathbf{L}^+(t), \quad \mathbf{Y}^+(0) = \mathbf{y}_0, \\ (4) \qquad \mathbf{Y}^+(t) \ge 0, \quad Y_j^+(t) dL_j^+(t) = 0, \quad L_j^+(0) = 0, \quad dL_j^+(t) \ge 0, \end{gathered} $$
+
+then $0 \le R^{-1}\mathbf{Y}(t) \le R^{-1}\mathbf{Y}^+(t)$ for all $t \ge 0$ where $\mathbf{Y}(\cdot)$ solves the Skorokhod problem
+
+$$ \begin{gathered} d\mathbf{Y}(t) = d\mathbf{X}(t) + R d\mathbf{L}(t), \quad \mathbf{Y}(0) = \mathbf{y}_0, \\ \mathbf{Y}(t) \ge 0, \quad Y_j(t) dL_j(t) = 0, \quad L_j(0) = 0, \quad dL_j(t) \ge 0. \end{gathered} $$
+
+We note that computing $\mathbf{z}$ from the previous lemma is not difficult. One can simply pick $\mathbf{z} = E\mathbf{X}(1) + \delta\mathbf{1}$, where $\mathbf{1} = (1, \dots, 1)^T$ and $\delta$ is chosen so that $0 < \delta R^{-1}\mathbf{1} < -R^{-1}E\mathbf{X}(1)$. In what follows we shall assume that $\mathbf{z}$ has been selected in this form; note that this choice yields $E[\mathbf{Z}(1)] = -\delta\mathbf{1} < 0$.
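This recipe for $\mathbf{z}$ is straightforward to implement; a small numpy sketch with made-up two-station data:

```python
import numpy as np

# Made-up two-station data; R^{-1} E X(1) < 0 is the stability assumption (A2).
Q = np.array([[0.0, 0.5],
              [0.3, 0.0]])
R = (np.eye(2) - Q).T
R_inv = np.linalg.inv(R)
EX1 = np.array([-0.4, -0.3])                       # illustrative drift E X(1)
assert (R_inv @ EX1 < 0).all()

# Pick delta with 0 < delta * R^{-1} 1 < -R^{-1} E X(1) coordinate-wise.
ones = np.ones(2)
delta = 0.5 * float(min(-(R_inv @ EX1) / (R_inv @ ones)))
z = EX1 + delta * ones

assert delta > 0 and (EX1 < z).all() and (R_inv @ z < 0).all()
```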
+
+The Skorokhod problem corresponding to the dominating process can be solved explicitly. It is not difficult to verify [see, e.g., Harrison and Reiman (1981)] that if $\mathbf{Y}^+(0) = 0$, the solution of the Skorokhod problem (4) is given by
+
+$$ (5) \qquad \mathbf{Y}^+(t) = \mathbf{Z}(t) - \min_{0 \le u \le t} \mathbf{Z}(u) = \max_{0 \le u \le t} (\mathbf{Z}(t) - \mathbf{Z}(u)) $$
+
+where the running maximum is obtained coordinate-by-coordinate.
+
+In order to construct a stationary version of $\mathbf{Y}^+(\cdot)$ backward in time, we first extend $\mathbf{Z}(\cdot)$ to a two-sided compound Poisson process with $\mathbf{Z}(0) = 0$. We define a
+time-reversal of $\mathbf{Z}(\cdot)$ as $\mathbf{Z}^\leftarrow(t) = -\mathbf{Z}(-t)$. It is easy to check that $\mathbf{Z}^\leftarrow(\cdot)$ has stationary and independent increments that are identically distributed as those of $\mathbf{Z}(\cdot)$.
+
+For any given $T \le 0$, we define a process $\mathbf{Z}_T^\leftarrow$ via $\mathbf{Z}_T^\leftarrow(t) = \mathbf{Z}^\leftarrow(T+t)$ for $0 \le t \le |T|$. For any given $\mathbf{y} \ge 0$, we define $\mathbf{Y}_T^+(t, \mathbf{y})$ for $0 \le t \le |T|$ to be the solution to the Skorokhod problem with input process $\mathbf{Z}_T^\leftarrow$, initial condition $\mathbf{Y}_T^+(0, \mathbf{y}) = \mathbf{y}$ and reflection matrix $R = I$. In detail, $\mathbf{Y}_T^+(\cdot, \mathbf{y})$ solves
+
+$$ (6) \qquad \begin{aligned} d\mathbf{Y}_T^+(t, \mathbf{y}) &= d\mathbf{Z}_T^\leftarrow(t) + d\mathbf{L}_T^+(t, \mathbf{y}), & \mathbf{Y}_T^+(0, \mathbf{y}) &= \mathbf{y}, \\ \mathbf{Y}_T^+(t, \mathbf{y}) &\ge 0, & Y_{T,j}^+(t, \mathbf{y}) dL_{T,j}^+(t, \mathbf{y}) &= 0, \\ L_{T,j}^+(0, \mathbf{y}) &= 0, & dL_{T,j}^+(t, \mathbf{y}) &\ge 0. \end{aligned} $$
+
+According to (5), if $\mathbf{y}=0$,
+
+$$ (7) \qquad \mathbf{Y}_T^+(t, 0) = \max_{0 \le u \le t} (\mathbf{Z}_T^\leftarrow(t) - \mathbf{Z}_T^\leftarrow(u)). $$
+
+Since $E[\mathbf{Z}(1)] < 0$, the process $\mathbf{Y}^+$ satisfying the Skorokhod problem (4) with orthogonal reflection ($R=I$) possesses a unique stationary distribution. So, we can construct a stationary version of $(\mathbf{Y}^+(s): -\infty < s \le 0)$ as
+
+$$ (8) \qquad \mathbf{Y}_*^+(s) = \lim_{T \to -\infty} \mathbf{Y}_T^+(-T+s, 0). $$
+
+The following representation of $\mathbf{Y}_*^+(\cdot)$ is known in the queueing literature; still we include a short proof to make the presentation self-contained.
+
+**PROPOSITION 1.** Given any $t \ge 0$,
+
+$$ (9) \qquad \mathbf{Y}_*^+(-t) = -\mathbf{Z}(t) + \max_{t \le u < \infty} \mathbf{Z}(u). $$
+
+**PROOF.** Expression (7) together with the definition of $\mathbf{Z}_T^\leftarrow(\cdot)$ yields
+
+$$ \begin{align*} \mathbf{Y}_T^+(-T+s, 0) &= \max_{0 \le u \le -T+s} (\mathbf{Z}^\leftarrow(s) - \mathbf{Z}^\leftarrow(T+u)) = \max_{T \le r \le s} (\mathbf{Z}^\leftarrow(s) - \mathbf{Z}^\leftarrow(r)) \\ &= \max_{T \le r \le s} (-\mathbf{Z}(-s) + \mathbf{Z}(-r)) = -\mathbf{Z}(-s) + \max_{T \le r \le s} \mathbf{Z}(-r). \end{align*} $$
+
+Let $-s = t \ge 0$ and $-r = u \ge 0$, and we obtain $\mathbf{Y}_T^+(-T-t, 0) = -\mathbf{Z}(t) + \max_{t \le u \le -T} \mathbf{Z}(u)$. Now send $-T \to \infty$ and arrive at (9), thereby obtaining the result. $\square$
+
+**2.2. The structure of the main simulation procedure.** We are now ready to explain our main algorithm to simulate unbiased samples from the steady-state distribution of $\mathbf{Y}$. For this purpose, let us first define
+
+$$ \mathbf{M}(t) = \max_{t \le u < \infty} \mathbf{Z}(u), $$
+
+for $t \ge 0$, so that $\mathbf{Y}_*^+(-t) = \mathbf{M}(t) - \mathbf{Z}(t)$. Since $E[\mathbf{Z}(1)] < 0$, it follows that $\mathbf{M}(0) < \infty$, and hence $(\mathbf{M}(t): t \ge 0)$ is a finite-valued stochastic process. We assume that we can simulate $\mathbf{M}(\cdot)$ jointly with $\mathbf{Z}(\cdot)$ until the coalescence time $\tau$; we explain how to perform this simulation in Section 2.3.
+
+ALGORITHM 1 [Exact sampling of $\mathbf{Y}(\infty)$]. Step 1: Simulate $(\mathbf{M}(t), \mathbf{Z}(t))$ jointly until time $\tau \ge 0$ such that $\mathbf{Z}(\tau) = \mathbf{M}(\tau)$.
+
+Step 2: Set $\mathbf{X}_{-\tau}^{-}(t) = \mathbf{Z}(\tau) - \mathbf{Z}(\tau - t) + \mathbf{z}t$, and compute $\mathbf{Y}_{-\tau}(t, 0)$ for $0 \le t \le \tau$ that solves the Skorokhod problem with input process $\mathbf{X}_{-\tau}^{-}(t)$ and initial value $\mathbf{Y}_{-\tau}(0, 0) = 0$. In detail, $\mathbf{Y}_{-\tau}(t, 0)$ solves
+
+$$d\mathbf{Y}_{-\tau}(t, 0) = d\mathbf{X}_{-\tau}^{-}(t) + R d\mathbf{L}_{-\tau}(t, 0),$$
+
+$$\mathbf{Y}_{-\tau}(t, 0) \ge 0, \quad Y_{-\tau,j}(t, 0) dL_{-\tau,j}(t, 0) = 0,$$
+
+$$L_{-\tau,j}(0, 0) = 0, \quad dL_{-\tau,j}(t, 0) \ge 0,$$
+
+for $\tau$ units of time.
+
+Step 3: Output $\mathbf{Y}_{-\tau}(\tau, 0)$ which has the distribution of $\mathbf{Y}(\infty)$.
+
+In step 2, the constant $\mathbf{z}$ is chosen according to Lemma 1 such that $\mathbf{Z}(t) = \mathbf{X}(t) - \mathbf{z}t$. The time $-\tau$ is precisely the coalescence time, as in a DCFTP algorithm. The following proposition summarizes the validity of this algorithm.
+
+PROPOSITION 2. *The previous algorithm terminates with probability one, and its output is an unbiased sample from the distribution of $\mathbf{Y}(\infty)$.*
+
+PROOF. The argument is similar to the classic Loynes construction. Let us start by noting that
+
+$$\mathbf{Y}_*^+(0) = \mathbf{M}(0) = 0 \lor (-U_1\mu + \mathbf{W}(1) + \mathbf{M}').$$
+
+Here the maximum is taken coordinate-wise, and $U_1$ is the arrival time of the first job, which follows an exponential distribution. $\mathbf{M}' = \max_{0\le t<\infty} (\mathbf{Z}(t+U_1) - \mathbf{Z}(U_1)) < \infty$ is equal in distribution to $\mathbf{M}(0)$. Then $P(\mathbf{Y}_*^+(0)=0) = P(U_1 \ge \max_i(W_i(1)+M_i')/\mu_i) > 0$, since $U_1$ has unbounded support and is independent of both $\mathbf{W}(1)$ and $\mathbf{M}'$. Therefore, $\mathbf{Y}^+(\infty)$ has an atom at zero, which implies that $\tau < \infty$ with probability one. In fact, we will show in Theorem 1 that $E[\exp(\delta\tau)] < \infty$ for some $\delta > 0$. Let $T < 0$, and note that, thanks to Lemma 1, for $t \in (0, |T|]$
+
+$$ (10) \qquad R^{-1}\mathbf{Y}_T(t, 0) \le R^{-1}\mathbf{Y}_T^+(t, 0). $$
+
+In addition, by monotonicity of the solution to the Skorokhod problem in terms of its initial condition [see Kella and Whitt (1996)], we also have [using the definition of $\mathbf{Y}_T^+(t, y)$ from (6) and $\mathbf{Y}_*^+(T)$ from (8)] that
+
+$$ (11) \qquad \mathbf{Y}_T^+(t, 0) \le \mathbf{Y}_T^+(t, \mathbf{Y}_*^+(T)) = \mathbf{Y}_*^+(T+t). $$
+
+So $\mathbf{Y}_*^+(T+t) = 0$ implies $\mathbf{Y}_T^+(t, 0) = 0$. Further, since $R^{-1}$ has nonnegative coordinates, inequalities (10) and (11) imply that $\mathbf{Y}_T(t, 0) = 0$. Consequently, if $-T > \tau \ge 0$,
+
+$$\mathbf{Y}_T(|T| - \tau, 0) = 0,$$
+
+which in particular yields that $\mathbf{Y}_T(-T, 0) = \mathbf{Y}_{-\tau}(\tau, 0)$. We then obtain that
+
+$$\lim_{T \to -\infty} \mathbf{Y}_T(-T, 0) = \mathbf{Y}_{-\tau}(\tau, 0),$$
+
+thereby concluding that $\mathbf{Y}_{-\tau}(\tau, 0)$ follows the distribution of $\mathbf{Y}(\infty)$, as claimed. $\square$
+
+Step 2 in Algorithm 1 is straightforward to implement because the input process $\mathbf{X}_{-\tau}^-(\cdot)$ is piecewise linear, and the solution to the Skorokhod problem, namely $\mathbf{Y}_{-\tau}(\cdot, 0)$, is also piecewise linear. The gradients are obtained by solving a sequence of linear systems of equations dictated by evolving the ordinary differential equations given in (1). Therefore, the most interesting part is the simulation of the stochastic object $(\mathbf{M}(t): 0 \le t \le \tau)$ in step 1, which we discuss in Section 2.3.
+
+**2.3. Simulation of the stationary dominating process.** As is customary, we use the notation $E_0(\cdot)$ or $P_0(\cdot)$ to indicate conditioning on $\mathbf{Z}(0)=0$. We define $\phi_i(\theta) = E_0[\exp(\theta Z_i(1))]$ to be the moment-generating function of $Z_i(1)$, and let $\psi_i(\theta) = \log(\phi_i(\theta))$. In order to simplify the explanation of the procedure to sample $(\mathbf{M}(t): t \ge 0)$, we introduce the following assumption:
+
+*Assumption: (A3b)* Suppose that in every dimension *i* there exists $\theta_i^* \in (0, \infty)$ such that
+
+$$\psi_i(\theta_i^*) = \log E_0 \exp(\theta_i^* Z_i(1)) = 0.$$
+
+This assumption is a strengthening of assumption (A3), and it is known as Cramér's condition in the large deviations literature. As we shall explain at the end of Section 2.3, it is possible to dispense with this assumption and work only under assumption (A3). For the moment, we continue under assumption (A3b).
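For a concrete one-dimensional illustration of (A3b): if the jumps are Exp($\eta$) and arrive at rate $\lambda$, with drift rate $\mu$, then $\psi(\theta) = \lambda(\eta/(\eta-\theta) - 1) - \mu\theta$, and the Cramér root is $\theta^* = \eta - \lambda/\mu$ in closed form. The following Python sketch recovers it by bisection (all parameter values are made up):

```python
# Illustrative 1-D check of (A3b): jumps Exp(eta) at rate lam, drift mu, all made up.
lam, eta, mu = 1.0, 2.0, 1.0                    # stable: lam/eta = 0.5 < mu

def psi(theta):
    """Log moment-generating function of Z(1) = compound Poisson minus drift."""
    return lam * (eta / (eta - theta) - 1.0) - mu * theta

# psi is strictly convex with psi(0) = 0 and negative slope at 0, so the Cramer
# root theta* is the unique zero in (0, eta); locate it by bisection.
lo, hi = 1e-9, eta - 1e-9                       # psi(lo) < 0 < psi(hi)
while hi - lo > 1e-12:
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if psi(mid) < 0 else (lo, mid)
theta_star = 0.5 * (lo + hi)

assert abs(theta_star - (eta - lam / mu)) < 1e-9   # closed form here: theta* = 1
```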
+
+We wish to simulate ($\mathbf{Z}(t): 0 \le t \le \tau$) where $\tau$ is a time such that
+
+$$\mathbf{Z}(\tau) = \mathbf{M}(\tau) = \max_{s \ge \tau} \mathbf{Z}(s) \quad \text{and hence} \quad \forall 0 \le t \le \tau, \quad \mathbf{M}(t) = \max_{t \le s \le \tau} \mathbf{Z}(s).$$
+
+Recall that $-\tau$ is precisely the coalescence time since $Y_*^+(-\tau) = 0$. We also keep in mind that our formulation at the beginning of the Introduction implies that
+
+$$\mathbf{Z}(t) = \mathbf{J}(t) - R\mathbf{r}t - \mathbf{z}t = \sum_{k=1}^{N(t)} \mathbf{W}(k) - R\mathbf{r}t - \mathbf{z}t,$$
+
+where $\mathbf{z}$ is selected according to Lemma 1. Define
+
+$$\mu = R\mathbf{r} + \mathbf{z},$$
+
+and let $\mu_i > 0$ be the $i$th coordinate of $\mu$. In addition, we assume that we can choose a constant $m > 0$ large enough such that
+
+$$ (12) \qquad \sum_{i=1}^{d} \exp(-\theta_i^* m) < 1. $$
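Such an $m$ always exists since each $\theta_i^* > 0$: any $m > \log(d)/\min_i \theta_i^*$ works, because each term in (12) is at most $\exp(-\min_i \theta_i^* \, m)$. A two-line Python check with hypothetical Cramér roots:

```python
import math

# Hypothetical Cramer roots theta_i*; any m > log(d)/min_i theta_i* satisfies (12),
# since sum_i exp(-theta_i* m) <= d * exp(-(min_i theta_i*) * m) < 1.
theta_star = [1.0, 0.5, 2.0]
m = math.log(len(theta_star)) / min(theta_star) + 1e-6
assert sum(math.exp(-ts * m) for ts in theta_star) < 1
```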
+
+Define
+
+$$ (13) \qquad T_m = \inf\{t \ge 0 : Z_i(t) \ge m, \text{ for some } i\}. $$
+
+Now we are ready to propose the following procedure to simulate $\tau$:
+
+ALGORITHM 1.1 (Simulating the coalescence time). The output of this algorithm is $(\mathbf{Z}(t): 0 \le t \le \tau)$ and the coalescence time $\tau$. Choose the constant $m$ according to (12):
+
+(1) Set $\tau = 0$, $Z(0) = 0$.
+
+(2) Generate an inter-arrival time $U$ distributed Exp$(\lambda)$, and sample $W = (W_1, \dots, W_d)$ independent of $U$.
+
+(3) Let $Z(\tau + t) = Z(\tau) - t\mu$ for $0 \le t < U$ and $Z(\tau + U) = Z(\tau) + W - U\mu$.
+
+(4) If there exists an index $i$ such that $W_i - U\mu_i \ge -m$, then return to step 2 and reset $\tau \leftarrow \tau + U$. Otherwise, sample a Bernoulli $I$ with parameter $p = P_0(T_m < \infty)$.
+
+(5) If $I=1$, simulate a new conditional path $(C(t): 0 \le t \le T_m)$ following the conditional distribution of $(Z(t): 0 \le t \le T_m)$ given that $T_m < \infty$ and $Z(0) = 0$. Let $Z(\tau + t) = Z(\tau) + C(t)$ for $0 \le t \le T_m$, and reset $\tau \leftarrow \tau + T_m$. Return to step 2.
+
+(6) Else, if $I=0$, stop and return $\tau$ along with the feed-in path $(Z(t): 0 \le t \le \tau)$.
+
+We shall now explain how to execute the key steps in the previous algorithm, namely, steps 4 and 5.
+
+**2.3.1. Simulating a path conditional on reaching a positive level in finite time.** The procedure that we shall explain now is an extension of the one-dimensional procedure given in Blanchet and Sigman (2011); see also the related one-dimensional procedure by Ensor and Glynn (2000). The strategy is to use acceptance/rejection. The proposed distribution is based on importance sampling by means of exponential tilting. In order to describe our strategy, we need to introduce some notation.
+
+We think of the probability measure $P_0(\cdot)$ as defined on the canonical space of right-continuous $\mathbb{R}^d$-valued functions with left limits, namely, the ambient space of $(\mathbf{Z}(t): t \ge 0)$, which we denote by $\Omega = D_{[0,\infty)}(\mathbb{R}^d)$. We endow the probability space with the Borel $\sigma$-field generated by the Skorokhod $J_1$ topology; see Billingsley (1999). Our goal is to simulate from the conditional law of
+($\mathbf{Z}(t): 0 \le t \le T_m$) given that $T_m < \infty$ and $\mathbf{Z}(0) = 0$, which we shall denote by $P_0^*$ in the rest of this part.
+
+Now let us introduce our proposed distribution, $P_0'(\cdot)$, defined on the space $\Omega' = D_{[0,\infty)}(\mathbb{R}^d) \times \{1, 2, \dots, d\}$. We endow the probability space with the product $\sigma$-field induced by the Borel $\sigma$-field generated by the Skorokhod $J_1$ topology and all the subsets of $\{1, 2, \dots, d\}$. So, a typical element $\omega'$ sampled under $P_0'(\cdot)$ is of the form $\omega' = ((\mathbf{Z}(t): t \ge 0), \text{Index})$, where $\text{Index} \in \{1, 2, \dots, d\}$. The distribution of $\omega'$ induced by $P_0'(\cdot)$ is described as follows. First, set
+
+$$ (14) \qquad P_0'(\text{Index} = i) = w_i := \frac{\exp(-\theta_i^* m)}{\sum_{j=1}^d \exp(-\theta_j^* m)}. $$
+
+Now, given Index = *i*, for every set $A \in \sigma(\mathbf{Z}(s): 0 \le s \le t)$,
+
+$$ P_0'(A | \text{Index} = i) = E_0[\exp(\theta_i^* Z_i(t)) I_A]. $$
+
+So, in particular, the Radon–Nikodym derivative (i.e., the likelihood ratio) between
+the distribution of $\omega = (\mathbf{Z}(s): 0 \le s \le t)$ under $P_0'(\cdot)$ and $P_0(\cdot)$ is given by
+
+$$ \frac{dP_0'}{dP_0}(\omega) = \sum_{i=1}^{d} w_i \exp(\theta_i^* Z_i(t)). $$
+
+The distribution of $(\mathbf{Z}(s): s \ge 0)$ under $P_0'(\cdot)$ is precisely the proposed distribution that we shall use to apply acceptance/rejection. It is straightforward to simulate under $P_0'(\cdot)$. First, sample Index according to the distribution (14). Then, conditional on Index = *i*, the process $\mathbf{Z}(\cdot)$ also follows a compound Poisson process. Given Index = *i*, under $P_0'(\cdot)$, it follows that $\mathbf{J}(t)$ can be represented as
+
+$$ (15) \qquad \mathbf{J}(t) = \sum_{k=1}^{\hat{N}(t)} \mathbf{W}'(k), $$
+
+where $\hat{N}(\cdot)$ is a Poisson process with rate $\lambda E[\exp(\theta_i^* W_i)]$. In addition, the distribution of $\mathbf{W}'$ is obtained by exponential tilting such that for all $A \in \sigma(\mathbf{W})$,
+
+$$ (16) \qquad P'(\mathbf{W}' \in A) = \frac{E[\exp(\theta_i^* W_i) I_A]}{E[\exp(\theta_i^* W_i)]}. $$
+
+In sum, conditional on Index = *i*, we simply let
+
+$$ (17) \qquad \mathbf{Z}(t) = \sum_{k=1}^{\hat{N}(t)} \mathbf{W}'(k) - \mu t. $$
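As a concrete instance of (16): if $W_i \sim$ Exp($\eta$), the tilted marginal with parameter $\theta_i^* < \eta$ is again exponential, with rate $\eta - \theta_i^*$. A quick pointwise density check in Python (parameter values are illustrative):

```python
import math

# If W ~ Exp(eta), then E exp(theta W) = eta/(eta - theta) for theta < eta, and
# the tilted density exp(theta y) f(y) / E exp(theta W) equals the Exp(eta - theta)
# density.  eta and theta below are made-up values.
eta, theta = 2.0, 0.8
mgf = eta / (eta - theta)

def tilted_pdf(y):
    return math.exp(theta * y) * eta * math.exp(-eta * y) / mgf

for y in (0.1, 0.5, 1.0, 3.0):
    assert abs(tilted_pdf(y) - (eta - theta) * math.exp(-(eta - theta) * y)) < 1e-12
```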
+
+Now, note that we can write
+
+$$
+\begin{align*}
+E_0'(Z_{\text{Index}}(t)) &= \sum_{i=1}^{d} E_0(Z_i(t) \exp(\theta_i^* Z_i(t))) P_0'(\text{Index} = i) \\
+&= \sum_{i=1}^{d} t\,\dot{\phi}_i(\theta_i^*) w_i > 0,
+\end{align*}
+$$
+
+where the last inequality follows from the strict convexity of $\psi_i(\cdot)$ and the definition of $\theta_i^*$ (note that $\dot{\phi}_i(\theta_i^*) = \dot{\psi}_i(\theta_i^*) > 0$ since $\phi_i(\theta_i^*) = 1$). So, $Z_{\text{Index}}(t) \to \infty$ as $t \to \infty$ with probability one under $P_0'(\cdot)$ by the law of large numbers. Consequently, $T_m < \infty$ a.s. under $P_0'(\cdot)$.
+
+Recall that $P_0^*(\cdot)$ is the conditional law of $(\mathbf{Z}(t): 0 \le t \le T_m)$ given that $T_m < \infty$ and $\mathbf{Z}(0)=0$. In order to ensure that we can indeed apply acceptance/rejection to simulate from $P_0^*(\cdot)$, we need to show that the likelihood ratio $dP_0^*/dP_0'$ is bounded:
+
+$$ (18) \qquad \begin{aligned} & \frac{dP_0^*}{dP_0'}(\mathbf{Z}(t):0 \le t \le T_m) \\ &= \frac{1}{P_0(T_m < \infty)} \times \frac{dP_0}{dP_0'}(\mathbf{Z}(t):0 \le t \le T_m) \\ &= \frac{1}{P_0(T_m < \infty)} \times \frac{1}{\sum_{i=1}^d w_i \exp(\theta_i^* Z_i(T_m))}. \end{aligned} $$
+
+At time $T_m$ there is an index $L$ (possibly different from Index) such that $Z_L(T_m) \ge m$, and hence $\exp(\theta_L^* Z_L(T_m)) \ge \exp(\theta_L^* m)$; therefore
+
+$$ (19) \qquad \frac{1}{\sum_{i=1}^d w_i \exp(\theta_i^* Z_i(T_m))} \le \frac{1}{w_L \exp(\theta_L^* m)} = \sum_{i=1}^d \exp(-\theta_i^* m) < 1, $$
+
+where the last inequality follows by (12). Consequently, plugging (19) into (18) we obtain that
+
+$$ (20) \qquad \frac{dP_0^*}{dP_0'}(\mathbf{Z}(t):0 \le t \le T_m) \le \frac{1}{P_0(T_m < \infty)}. $$
+
+We are now ready to summarize our acceptance/rejection procedure and the proof of its validity.
+
+**ALGORITHM 1.1.1 (Simulation of paths conditional on $T_m < \infty$).**
+
+*Step 1:* Sample $(\mathbf{Z}(t): 0 \le t \le T_m)$ according to $P_0'(\cdot)$ as indicated via equations (14), (15) and (17).
+
+*Step 2:* Given $(\mathbf{Z}(t): 0 \le t \le T_m)$, simulate a Bernoulli $I$ with probability
+
+$$ \frac{1}{\sum_{i=1}^{d} w_i \exp(\theta_i^* Z_i(T_m))}. $$
+
+[Note that the previous quantity is less than unity due to (19).]
+
+*Step 3:* If $I=1$, output $(\mathbf{Z}(t): 0 \le t \le T_m)$ and Stop, otherwise go to step 1.
+
+**PROPOSITION 3.** The probability that $I=1$ at any given call of step 3 in Algorithm 1.1.1 is $P_0(T_m < \infty)$. Moreover, the output of Algorithm 1.1.1 follows the distribution $P_0^*$.
+
+
+PROOF. The result follows directly from the theory of acceptance/rejection; see Asmussen and Glynn (2007), pages 39–42. Since the two probability measures $P_0^*$ and $P'_0$ satisfy
+
+$$ \frac{dP_0^*}{dP'_0} \le c = \frac{1}{P_0(T_m < \infty)}, $$
+
+as indicated by (18) and (20), one can sample exactly from $P_0^*$ by the so-called acceptance/rejection procedure:
+
+(1) Generate i.i.d. samples $\{\omega_i\}$ from $P_0'$ and i.i.d. random numbers $U_i \sim U[0, 1]$ independent of $\{\omega_i\}$.
+
+(2) Define $N = \inf\{n \ge 1 : U_n \le c^{-1} \frac{dP_0^*}{dP'_0}(\omega_n)\}$.
+
+(3) Output $\omega_N$.
+
+The output $\omega_N$ follows exactly the law $P_0^*$, and $N$ is a geometric random variable with mean $c$; in other words, the probability of accepting a given proposal is $c^{-1}$. In our specific case, we have $c = 1/P_0(T_m < \infty)$, and according to (18) the likelihood ratio divided by the constant $c$ is
+
+$$ c^{-1} \frac{dP_0^*}{dP'_0}(\omega) = \frac{1}{\sum_{i=1}^{d} w_i \exp(\theta_i^* Z_i(T_m))}. $$
+
+Therefore, Algorithm 1.1.1 has acceptance probability $P(I=1) = P_0(T_m < \infty)$, and it generates a path exactly from $P_0^*$ upon acceptance. $\square$
+
+As the previous result shows, the output of the previous procedure follows exactly the distribution of $(\mathbf{Z}(t): 0 \le t \le T_m)$ given that $T_m < \infty$ and $\mathbf{Z}(0) = 0$. Moreover, the Bernoulli random variable $I$ has probability $P_0(T_m < \infty)$ of success. So this procedure actually allows both steps 4 and 5 in Algorithm 1.1 to be executed simultaneously. In detail, one simulates a path following the law of $P_0'$ until $T_m$, and then, if the proposed path is accepted, it can be concluded that $T_m$ is finite and the proposed path is exactly a sample path following the law of $P_0^*$; otherwise one can conclude that $T_m = \infty$.
+
+**REMARK.** As mentioned earlier, assumption (A3b) is a strengthening of assumption (A3). We can carry out our ideas under assumption (A3) as follows. First, instead of $(M(t): t \ge 0)$, we consider the following process $Z_a(\cdot)$ and $M_a(\cdot)$ defined by
+
+$$ \mathbf{Z}_a(t) := \mathbf{Z}(t) + \mathbf{a}t, \quad \mathbf{M}_a(t) = \max_{s \ge t} \mathbf{Z}_a(s). $$
+
+We shall explain how to choose the nonnegative vector $\mathbf{a} = (a_1, a_2, \ldots, a_d)^T$ in a moment. Note that we can simulate $(M(t): t \ge 0)$ jointly with $(Z(t): t \ge 0)$ if we are able to simulate $(M_a(t): t \ge 0)$ jointly with $(Z_a(t): t \ge 0)$. Now note that $\psi_i(\cdot)$
+---PAGE_BREAK---
+
+is strictly convex and that $\dot{\psi}_i(0) < 0$, so there exists $a_i > 0$ large enough to force
+the existence of $\theta_i^* > 0$ such that $E \exp(\theta_i^* Z_i(1) + a_i \theta_i^*) = 1$, but at the same time
+small enough to keep $E(Z_i(1) + a_i) < 0$; again, this follows by strict convexity
+of $\psi_i(\cdot)$ at the origin. So, if assumption (A3b) does not hold, but assumption (A3)
+holds, one can then execute Algorithm 1.1 based on the process $\mathbf{Z}_a(\cdot)$.
+
+2.4. *Computational complexity.* In this section we provide a complexity analysis of our algorithm. We first make some direct observations assuming the dimension of the network remains fixed. In particular, we note that the expected number of random variables simulated has a finite moment-generating function in a neighborhood of the origin.
+
+**THEOREM 1.** *Suppose that (A1) to (A3) are in force. Let $\tau$ be the coalescence time, and $N$ be the number of random variables generated to terminate the overall procedure to sample $\mathbf{Y}(\infty)$. Then there exists $\delta > 0$ such that*
+
+$$E \exp(\delta\tau + \delta N) < \infty.$$
+
+PROOF. This follows directly from classical results about random walks; see Gut (2009). In particular it follows that $E_0'(\exp(\delta T_m)) < \infty$. The rest of the proof follows from elementary properties of compound geometric random variables arising from the acceptance/rejection procedure. $\square$
+
+We are more interested, however, in complexity properties as the network increases. We shall impose some regularity conditions that allow us to consider a sequence of systems indexed by the number of dimensions $d$. We shall grow the size of the network in a meaningful way; in particular, we need to make sure that the network remains stable as the dimension $d$ increases. Additional regularity will also be imposed.
+
+Assumptions:
+
+There exist two constants $0 < \delta < 1 < H < \infty$, independent of $d$, satisfying the following conditions:
+
+(C1) $R^{-1}E[\mathbf{X}(1)] < -2\delta R^{-1}\mathbf{1}$ in each network.
+
+(C2) Let $\theta_i^*$, $i = 1, \dots, d$, be the tilting parameters as defined in assumption (A3b). Then
+
+$$E \exp[(\delta + \theta_i^*) W_i] \leq H < \infty$$
+
+and
+
+$$H > \delta + \theta_i^* \quad \text{for all } 1 \leq i \leq d.$$
+
+(C3) The arrival rate $\lambda \in (\delta, H)$.
+---PAGE_BREAK---
+
+**REMARK.** Assumption (C1) implies that $\boldsymbol{\mu} = R\mathbf{r} + \mathbf{z} > \delta\mathbf{1}$, where $\mathbf{z}$ is defined according to Lemma 1. In detail, we choose $\mathbf{z} = E[\mathbf{X}(1)] + \delta\mathbf{1}$ and therefore, $R\mathbf{r} + \mathbf{z} = E[\mathbf{J}(1)] + \delta\mathbf{1} > \delta\mathbf{1}$.
+
+Note that $x \le \exp(ax)/(ae)$ for any $a > 0$ and $x \ge 0$. Plugging in $a = \theta_i^* + \delta$, we have $E[W_i] \le E[\exp((\theta_i^* + \delta)W_i)]/(e(\delta + \theta_i^*)) < H/(e\delta)$ and therefore
+
+$$ \boldsymbol{\mu} = \lambda E[\mathbf{W}] + \delta\mathbf{1} < (H^2/(e\delta) + \delta)\mathbf{1} = H'\mathbf{1}, $$
+
+where $H' = H^2/(e\delta) + \delta$. Similarly, we also have that $E[W_i^2] \le E[4\exp((\theta_i^* + \delta)W_i)]/(e^2(\theta_i^* + \delta)^2) \le 4H/(e^2\delta^2)$, and then we can compute
+
+$$
+\begin{aligned}
+E[Z_i(1)^2] &= E\left[\left(\sum_{k=1}^{N(1)} W_i(k) - \mu_i\right)^2\right] \le 2E\left[\mu_i^2 + \left(\sum_{k=1}^{N(1)} W_i(k)\right)^2\right] \\
+&\le 2\mu_i^2 + 2(\lambda + \lambda^2)\frac{4H}{e^2\delta^2} \le 2H'^2 + \frac{8(H^2 + H^3)}{e^2\delta^2} := H''.
+\end{aligned}
+$$
+
+In sum, we can conclude that
+
+$$ \max_{1 \le i \le d} E_0[Z_i(1)^2] \le H''. $$
+
+In the complexity analysis, we shall only use the fact that $H, H'$ and $H''$ are constants independent of $d$. As a result, for the simplicity of notation, we shall write $H$ for $H, H'$ and $H''$ in the rest of this section and assume, without loss of generality, that
+
+$$ \boldsymbol{\mu} \le H\mathbf{1} \quad \text{and} \quad \max_{1 \le i \le d} E_0[Z_i(1)^2] \le H. $$
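The two elementary calculus facts used above, $x \le e^{ax}/(ae)$ for $x \ge 0$ and $x^2 e^{-\delta x} \le 4e^{-2}/\delta^2$, admit a quick numerical sanity check on a grid (our own illustration, not part of the argument):

```python
import math

# Grid check of the two elementary bounds used in the moment estimates:
#   x <= exp(a*x)/(a*e)              for x >= 0, a > 0 (max of x*exp(-a*x) is 1/(a*e)),
#   x^2 * exp(-d*x) <= 4*e^(-2)/d^2  for x >= 0, d > 0 (maximum attained at x = 2/d).

def holds_linear_bound(a, xmax=20.0, step=0.01):
    x = 0.0
    while x <= xmax:
        if x > math.exp(a * x) / (a * math.e) + 1e-12:
            return False
        x += step
    return True

def holds_quadratic_bound(d, xmax=20.0, step=0.01):
    bound = 4.0 * math.exp(-2.0) / d ** 2
    x = 0.0
    while x <= xmax:
        if x ** 2 * math.exp(-d * x) > bound + 1e-12:
            return False
        x += step
    return True

print(all(holds_linear_bound(a) for a in (0.3, 1.0, 2.5)),
      all(holds_quadratic_bound(d) for d in (0.3, 1.0, 2.5)))
```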
+
+As discussed in Section 2.3.1, in Algorithm 1.1, we actually do steps 4 and 5 simultaneously. Therefore, we can rewrite Algorithm 1.1 as follows:
+
+ALGORITHM 1.1' (Simulate the coalescence time).
+
+(1) Set $\tau = 0$, $\mathbf{Z}(0) = 0$, $N = 0$.
+
+(2) Simulate a sample from $W - U\boldsymbol{\mu}$. Here $U$ is exponentially distributed with mean $1/\lambda$ and independent of $W$. Record the value of $\mathbf{Z}(t)$ for $\tau \le t \le \tau + U$. Reset $N \leftarrow N + 1$, $\mathbf{Z}(\tau + U) \leftarrow \mathbf{Z}(\tau) + W - U\boldsymbol{\mu}$, $\tau \leftarrow \tau + U$.
+
+(3) If there exists some index $i$ such that $W_i - U\mu_i \ge -m$, return to step 2.
+
+(4) Otherwise, simulate a random walk $\{C(n)\}$ such that $C(0) = 0$ and $C(n) = C(n-1) + W'(n) - U'(n)\boldsymbol{\mu}$, where the $W'(n) - U'(n)\boldsymbol{\mu}$ are independent and identically distributed as $W' - U'\boldsymbol{\mu}$ under the tilted measure $P'$ defined in Section 2.3.1 through (15) to (17). Perform the simulation until $N_m = \inf\{n \ge 0: C_i(n) > m \text{ for some } i\}$.
+
+(5) Reset $N \leftarrow N + N_m$. Compute $p = 1/\sum_{k=1}^d w_k \exp(\theta_k^* C_k(N_m))$, and sample a Bernoulli $I$ with probability $p$. If $I = 1$, $\mathbf{Z}(\tau + \sum_{k=1}^{N_m} U'(k)) = \mathbf{Z}(\tau) + C(N_m)$ and $\tau = \tau + \sum_{k=1}^{N_m} U'(k)$. Return to step 2.
+---PAGE_BREAK---
+
+(6) If $I=0$, stop and output $\tau$ with $(Z(t): 0 \le t \le \tau)$.
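To fix ideas, here is a minimal one-dimensional sketch of Algorithm 1.1' with Poisson($\lambda$) arrivals, Exp($\eta$) job sizes and drain rate $\mu$, all illustrative values of our own. For this input the Cramér root is $\theta^* = \eta - \lambda/\mu$, and under the tilted measure the arrival rate becomes $\eta\mu$ while the job sizes become Exp($\lambda/\mu$); these constants come from a standard exponential-tilting computation, not from formulas stated in the text:

```python
import math, random

# 1-d sketch of Algorithm 1.1' (d = 1), assuming Poisson(lam) arrivals, Exp(eta)
# job sizes drained at rate mu, and lam/eta < mu (stability).  The tilted walk in
# step 4 uses Exp(eta*mu) interarrivals and Exp(lam/mu) job sizes, our own
# specialization of (15)-(17) to this input.

def coalescence_time(lam, eta, mu, m, rng):
    theta_star = eta - lam / mu
    tau, z, n_vars = 0.0, 0.0, 0
    while True:
        # Step 2: one increment of the dominating free process Z.
        u, w = rng.expovariate(lam), rng.expovariate(eta)
        n_vars += 1
        z += w - u * mu
        tau += u
        # Step 3: the increment is not small enough; keep stepping.
        if w - u * mu >= -m:
            continue
        # Step 4: tilted walk until level m is crossed.
        c, t_used, n_m = 0.0, 0.0, 0
        while c <= m:
            up = rng.expovariate(eta * mu)   # tilted interarrival time
            wp = rng.expovariate(lam / mu)   # tilted job size
            c += wp - up * mu
            t_used += up
            n_m += 1
        n_vars += n_m
        # Step 5: accept the proposed excursion with probability exp(-theta_star*c).
        if rng.random() <= math.exp(-theta_star * c):
            z += c
            tau += t_used
            continue
        # Step 6: rejection certifies T_m = infinity: coalescence detected.
        return tau, n_vars

rng = random.Random(11)
tau, n_vars = coalescence_time(lam=1.0, eta=2.0, mu=1.0, m=2.0, rng=rng)
print(tau > 0.0, n_vars >= 2)
```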
+
+In this algorithm, the total number of random variables generated is $d \cdot N$. We write $N(d)$ instead of $N$ to emphasize the dependence on the number of dimensions $d$. The following result shows that our algorithm has polynomial complexity with respect to $d$:
+
+**THEOREM 2.** *Under assumptions (C1) to (C3),*
+
+$$E[N(d)] = O(d^{\gamma}) \quad \text{as } d \to \infty,$$
+
+*for some $\gamma$ depending on $\delta$ and $H$.*
+
+Denote by $N_b$ the number of Bernoulli random variables generated in step 5 and by $N_a$ the number of random variables generated before executing step 4 in a single iteration. By Wald's identity, we can conclude
+
+$$E[N(d)] = E[N_b](E[N_a] + E[N_m]).$$
+
+The following proposition gives an estimate for $E[N_m]$.
+
+**PROPOSITION 4.** *Under assumptions (C1) to (C3),*
+
+$$E[N_m] = O(\log d),$$
+
+*and the coefficient in the bound depends only on $\delta$ and $H$.*
+
+**PROOF.** First, let us consider the case in which the $W_i$'s are uniformly bounded from above by some constant $B$.
+
+Recall that $\phi_i(\theta) = E_0[\exp(\theta Z_i(1))]$. Given $i$, one can check that $E'_0[C_i(1)] = \dot{\phi}_i(\theta_i^*) / (\lambda E[\exp(\theta_i^* W_i)]) \ge \dot{\phi}_i(\theta_i^*) / (\lambda H)$. $N_m$ is a stopping time and $C_i(N_m) < m + B$. By the optional sampling theorem, we have
+
+$$E[N_m] = \sum_{i=1}^{d} \omega_i \frac{E_0'[C_i(N_m)]}{E_0'[C_i(1)]} \le \sum_{i=1}^{d} \omega_i \frac{\lambda H(m+B)}{\dot{\phi}_i(\theta_i^*)}.$$
+
+For each $1 \le i \le d$, we are going to estimate a lower bound for $\dot{\phi}_i(\theta_i^*)$. Using Taylor's expansion around 0, we have
+
+$$\phi_i(\theta_i^*) = \phi_i(0) + \theta_i^* \dot{\phi}_i(0) + \frac{(\theta_i^*)^2}{2} \ddot{\phi}_i(u_1 \theta_i^*),$$
+
+for some $u_1 \in [0, 1]$. As $\phi_i(\theta_i^*) = \phi_i(0) = 1$, we have
+
+$$\theta_i^* \dot{\phi}_i(0) + \frac{(\theta_i^*)^2}{2} \ddot{\phi}_i(u_1 \theta_i^*) = 0.$$
+---PAGE_BREAK---
+
+As $\theta_i^* > 0$,
+
+$$ (21) \qquad \dot{\phi}_i(0) + \frac{\theta_i^*}{2} \ddot{\phi}_i(u_1 \theta_i^*) = 0. $$
+
+Under assumption (C1), $\dot{\phi}_i(0) = E_0[Z_i(1)] < -\delta$. Under assumption (C2), we have that
+
+$$ \begin{aligned} E_0[\exp((\delta + \theta_i^*) Z_i(1))] &\leq \exp(\lambda \log(E[\exp((\delta + \theta_i^*) W_i)])) \\ &\leq H^\lambda \leq H^H \triangleq H_1 < \infty. \end{aligned} $$
+
+As a result,
+
+$$ \begin{aligned} \ddot{\phi}_i(u_1 \theta_i^*) &= E[Z_i(1)^2 \exp(u_1 \theta_i^* Z_i(1))] \\ &\leq E[Z_i(1)^2 I(Z_i(1) \leq 0)] + E[Z_i(1)^2 \exp(\theta_i^* Z_i(1)) I(Z_i(1) > 0)] \\ &\leq E[Z_i(1)^2] + E[Z_i(1)^2 \exp(\theta_i^* Z_i(1)) I(Z_i(1) > 0)] \\ &\leq E[Z_i(1)^2] + E[Z_i(1)^2 \exp(-\delta Z_i(1)) \exp((\delta + \theta_i^*) Z_i(1))]. \end{aligned} $$
+
+Besides, one can check that for any $x > 0$, $x^2 \exp(-\delta x) \leq 4e^{-2}/\delta^2$. Therefore,
+
+$$ \begin{aligned} \ddot{\phi}_i(u_1\theta_i^*) &\le E[Z_i(1)^2] + \frac{4}{\delta^2} e^{-2} E[\exp((\delta + \theta_i^*)Z_i(1))] \\ &\le H + \frac{4}{\delta^2} e^{-2} H_1. \end{aligned} $$
+
+Plug this result into equation (21) and use that $\dot{\phi}_i(0) < -\delta$ to complete the inequality
+
+$$ (22) \qquad \theta_i^* \ge \frac{2\delta}{H + 4e^{-2}H_1/\delta^2}. $$
+
+On the other hand, by a Taylor expansion of $\phi_i(\cdot)$ around $\theta_i^*$, we can conclude that
+
+$$ (23) \qquad \dot{\phi}_i(\theta_i^*) = \frac{\theta_i^*}{2} \ddot{\phi}_i(u_2 \theta_i^*), $$
+
+for some $u_2 \in [0, 1]$. Note that
+
+$$ \begin{aligned} \ddot{\phi}_i(u_2 \theta_i^*) &= E_0[Z_i(1)^2 \exp(u_2 \theta_i^* Z_i(1))] \\ &\ge E_0[Z_i(1)^2 \exp(u_2 \theta_i^* Z_i(1)) I(U > 1)] \\ &\ge E[\mu_i^2 \exp(-\theta_i^* \mu_i) I(U > 1)] \\ &\ge \mu_i^2 \exp(-H\mu_i) \exp(-\lambda) \\ &\ge \delta^2 \exp(-H^2 - H). \end{aligned} $$
+
+Thus (22) together with (23) imply
+
+$$ (24) \qquad \dot{\phi}_i(\theta_i^*) \ge \frac{1}{2}\theta_i^*\delta^2 e^{-H^2-H} \ge \frac{\delta^3 e^{-H^2-H}}{H+4e^{-2}H_1/\delta^2}. $$
+---PAGE_BREAK---
+
+Note that for lower bound (24) to hold, we do not require $W_i$ to be bounded.
+Therefore,
+
+$$E[N_m] \leq \sum_{i=1}^{d} \omega_i \frac{\lambda H(m+B)}{\dot{\phi}_i(\theta_i^*)} \leq \frac{\lambda H(m+B)(H + 4e^{-2}H_1/\delta^2)}{\delta^3 e^{-H^2-H}},$$
+
+as $\omega_i > 0$ and $\sum_i \omega_i = 1$.
+
+By (22), we have that $\theta_i^*$ are all uniformly bounded away from 0, so we can choose $m = O(\log d / \min_i \theta_i^*) = O(\log d)$ to satisfy equation (12). Now we can conclude that $E[N_m] = O(\log d)$ as $B, H$ and $\delta$ are all constants independent of $d$.
+
+Now, let us consider the more general cases when the $W_i$'s are not bounded from above. Recall that $\mathbf{W}'$ is derived from $\mathbf{W}$ by exponential tilting; see (16). For any $B > 0$, define $\tilde{\mathbf{W}}'$ by $\tilde{W}_i' = W_i' I(W_i' \le B)$ as the truncation of $\mathbf{W}'$, and define the random walk $\tilde{C}_i(n) = \tilde{C}_i(n-1) + \tilde{W}_i'(n) - U'(n)\mu_i$. Let $\tilde{N}_m = \inf\{n : \tilde{C}_i(n) > m \text{ for some } i\}$. Since $\tilde{C}_i(n) \le C_i(n)$, we have $N_m \le \tilde{N}_m$. Our goal is to show that one can choose a proper value for $B$ such that $E[\tilde{N}_m] = O(\log d)$ and hence so is $E[N_m]$.
+
+Since $\tilde{W}_i'$ is bounded from above by $B$, by the optional stopping theorem, we have
+
+$$E[\tilde{N}_m] \leq \sum_{i=1}^{d} \omega_i \frac{m+B}{E[\tilde{C}_i(1)]}.$$
+
+By definition,
+
+$$E[\tilde{C}_i(1)] = E[(W_i I(W_i \le B) - U\mu_i) \exp(\theta_i^*(W_i I(W_i \le B) - U\mu_i))].$$
+
+Since $U\mu_i \ge 0$, we have
+
+$$
+\begin{aligned}
+& E[(W_i I(W_i \le B) - U\mu_i) \exp(\theta_i^*(W_i I(W_i \le B) - U\mu_i))] \\
+& \quad \ge E[(W_i - U\mu_i) \exp(\theta_i^*(W_i - U\mu_i))] - E[W_i \exp(\theta_i^* W_i) I(W_i > B)].
+\end{aligned}
+$$
+
+By assumption (C2), $\delta$ and $H > 0$ are constants independent of $d$ such that
+
+$$E[\exp((\delta + \theta_i^*) W_i)] \leq H < \infty.$$
+
+As a consequence,
+
+$$
+\begin{align*}
+E[W_i \exp(\theta_i^* W_i) I(W_i > B)] &\le E[W_i \exp(-\delta W_i) I(W_i > B) \exp((\delta + \theta_i^*) W_i)] \\
+&\le \max_{w>B} \{w \exp(-\delta w)\} E[\exp((\delta + \theta_i^*) W_i)] \\
+&\le B \exp(-\delta B) H
+\end{align*}
+$$
+
+for all $B > 1/\delta$. Recall that by (24),
+
+$$
+\begin{align*}
+E[(W_i - U\mu_i) \exp(\theta_i^*(W_i - U\mu_i))] &= E[C_i(1)] \ge \frac{\dot{\phi}_i(\theta_i^*)}{(\lambda H)} \\
+&\ge \frac{\delta^3 e^{-H^2-H}}{\lambda H(H + 4e^{-2}H_1/\delta^2)},
+\end{align*}
+$$
+---PAGE_BREAK---
+
+where $H_1 = H^H$. Therefore, we can take $B = O(-\frac{1}{\delta}\log(\frac{\delta^3 e^{-H^2-H}}{2\lambda H^2(H+4e^{-2}H_1/\delta^2)}))$, independent of $d$, such that
+
+$$ B \exp(-\delta B) H < \frac{\delta^3 e^{-H^2-H}}{2\lambda H(H + 4e^{-2}H_1/\delta^2)}, $$
+
+and hence
+
+$$ E[\tilde{C}_i(1)] \geq \frac{\delta^3 e^{-H^2-H}}{2\lambda H(H + 4e^{-2}H_1/\delta^2)}. $$
+
+In the end, since $m = O(\log(d))$, we have
+
+$$ E[N_m] \le E[\tilde{N}_m] \le \frac{2\lambda H(m+B)(2H + 8e^{-2}H_1/\delta^2)}{\delta^3 e^{-H^2-H}} = O(\log d). \quad \square $$
+
+Now we give the proof of the main result in this subsection.
+
+PROOF OF THEOREM 2. Recall that
+
+$$ E[N] = E[N_b](E[N_a] + E[N_m]). $$
+
+Since $N_b$ is the number of trials required to obtain $I=0$, $E[N_b] = 1/P(I=0)$. As discussed in Section 2.3.1, $P(I=0) \ge 1 - \sum_{i=1}^d \exp(-\theta_i^{*} m)$ and hence
+
+$$ E[N_b] \le \frac{1}{1 - \sum_{i=1}^{d} \exp(-\theta_i^{*} m)} \le \frac{1}{1 - 1/d} $$
+
+if we take $m = 2 \log d / \min_i \theta_i^*$.
+
+Similarly, we have $E[N_a] = 1/P(U > (m+W_i)/\mu_i, \forall i)$. For any $K>0$,
+
+$$ P\left(U > \frac{m+W_i}{\mu_i}, \forall i\right) \ge P\left(U > \frac{m+K}{\min_i \mu_i}; W_i \le K \text{ for all } i\right). $$
+
+Under assumption (C2), we have
+
+$$ P(W_i \le K \text{ for all } i) \ge 1 - \sum_{i=1}^{d} P(W_i > K) \ge 1 - dH \exp(-K\delta). $$
+
+Under assumption (C3), we have
+
+$$ P\left(U > \frac{m+K}{\min_i \mu_i}\right) \geq \exp\left(-\frac{H(m+K)}{\min_i \mu_i}\right). $$
+
+As *U* and *W* are independent,
+
+$$ P\left(U > \frac{m+W_i}{\mu_i}, \forall i\right) \geq \exp\left(-\frac{H(m+K)}{\min_i \mu_i}\right) (1 - dH \exp(-K\delta)). $$
+
+Choosing $K = (2\log d + \log H)/\delta$ and plugging in $m = 2\log d/\min_i \theta_i^*$, we get
+
+$$ E[N_a] \le \frac{1}{1 - 1/d} d^{2H/(\min_i \mu_i \min_i \theta_i^*) + 2H/(\delta \min_i \mu_i)} H^{H/(\delta \min_i \mu_i)}. $$
+---PAGE_BREAK---
+
+By Proposition 4 we have $E[N_m] = O(\log d)$. In summary, we have
+
+$$
+\begin{align*}
+E[N] &= E[N_b](E[N_a] + E[N_m]) = O\left(\left(\frac{1}{1-1/d}\right)^2 \log d \cdot d^{2H/(\min_i \mu_i \min_i \theta_i^*)}\right) \\
+&= O(d^{1+2H/(\min_i \mu_i \min_i \theta_i^*)}).
+\end{align*}
+$$
+
+As discussed in the proof of Proposition 4, $\theta_i^* \ge \delta/(H + 4e^{-2}H_1/\delta^2)$ and $\mu_i \ge \delta$
+are uniformly bounded away from 0, therefore,
+
+$$
+E[N] = O\left(d^{1+2H(H+4e^{-2}H_1/\delta^2)/\delta^2}\right). \quad \square
+$$
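The level choice $m = 2\log d/\min_i \theta_i^*$ used in the proof can also be checked numerically; the sketch below (arbitrary tilting parameters of our own) verifies that it drives $\sum_{i \le d} \exp(-\theta_i^* m)$ below $1/d$:

```python
import math, random

# Numeric illustration (toy values of our own) of the choice
# m = 2*log(d)/min_i theta_i^*: it forces
# sum_{i<=d} exp(-theta_i^* m) <= d * exp(-min_i theta_i^* * m) = 1/d.

def check_m_choice(d, rng):
    thetas = [0.2 + rng.random() for _ in range(d)]  # arbitrary tilting parameters
    m = 2.0 * math.log(d) / min(thetas)
    return sum(math.exp(-t * m) for t in thetas) <= 1.0 / d

rng = random.Random(5)
print(all(check_m_choice(d, rng) for d in (10, 100, 1000)))
```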
+
+**3. Extension to Markov-modulated processes.** We shall briefly explain how our development in Section 2, specifically Algorithm 1, can be implemented beyond input with stationary and independent increments. As an example, we shall concentrate on Markov-modulated stochastic fluid networks. Our extension to Markov-modulated networks is first explained in the one-dimensional case, and later we will indicate how to treat the multidimensional setting.
+
+Let $(\hat{I}(t): t \ge 0)$ be an irreducible continuous-time Markov chain taking values on the set $\{1, \dots, n\}$. We assume that, conditional on $\hat{I}(\cdot)$, the number of arrivals, $\hat{N}(\cdot)$, follows a time-inhomogeneous Poisson process with rate $\lambda_{\hat{I}(\cdot)}$. We further assume that $\int_0^t \lambda_{\hat{I}(s)} ds > 0$ with positive probability. The process $\hat{N}(\cdot)$ is said to be a Markov-modulated Poisson process with intensity $\lambda_{\hat{I}(\cdot)}$. Define $\hat{A}_k$ to be the time of the $k$th arrival, for $k \ge 1$; that is, $\hat{A}_k = \inf\{t \ge 0: \hat{N}(t) = k\}$.
+
+We assume that the $k$th arrival brings a job requirement equal to $\hat{W}(k)$. We also assume that the $\hat{W}(k)$'s are conditionally independent given the process $\hat{I}(\cdot)$. Moreover, we assume that the moment-generating function $\phi_i(\cdot)$ defined via
+
+$$
+\phi_i(\theta) = E(\exp(\theta \hat{W}(k)) | \hat{I}(\hat{A}_k) = i),
+$$
+
+is finite in a neighborhood of the origin. In simple words, the job requirement of
+the *k*th arrival might depend upon the environment, $\hat{I}(\cdot)$, at the time of arrival. But,
+conditional on the environment, the job sizes are independent. Finally, we assume
+that the service rate at time *t* is equal to $\mu_{\hat{I}(t)} \ge 0$.
+
+Let $\hat{X}(t) = \sum_{k=1}^{\hat{N}(t)} \hat{W}(k) - \int_0^t \mu_{\hat{I}(s)} ds$. Then the workload process, $(Y(t): t \ge 0)$, can be expressed as
+
+$$
+Y(t) = \hat{X}(t) - \inf_{0 \le s \le t} \hat{X}(s),
+$$
+
+assuming that $Y(0) = 0$. In order for the process $Y(\cdot)$ to be stable, in the sense of
+having a stationary distribution, we assume that $\sum_i \pi_i (\lambda_i E[\hat{W}|\hat{I}=i] - \mu_i) < 0$,
+where $\pi_i$ is the stationary distribution of the Markov chain $\hat{I}$. Following the same
+argument as in Section 2, we can construct a stationary version of the process $Y(\cdot)$
+by a time reversal argument.
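The one-dimensional reflection map above is easy to discretize. The following sketch computes $Y(t) = \hat{X}(t) - \inf_{0 \le s \le t} \hat{X}(s)$ along a sampled netput path; the two-state rates and the crude Euler grid are our own toy choices, not the exact construction:

```python
import random

# Discretized sketch of the one-dimensional reflection map
# Y(t) = X(t) - inf_{0<=s<=t} X(s) with Y(0) = 0, applied to a toy
# Markov-modulated netput path (illustrative rates only).

def reflect(path):
    """Reflected workload from a netput path sampled on a grid (path[0] == 0)."""
    y, run_inf = [], 0.0
    for x in path:
        run_inf = min(run_inf, x)   # running infimum of the netput
        y.append(x - run_inf)       # reflected value stays nonnegative
    return y

rng = random.Random(3)
rates = {0: (1.0, 1.5), 1: (3.0, 1.5)}  # state -> (arrival intensity, drain rate)
x, state, path = 0.0, 0, [0.0]
for _ in range(200):
    lam, mu = rates[state]
    x += lam * rng.expovariate(1.0) * 0.1 - mu * 0.1  # crude netput increment
    path.append(x)
    if rng.random() < 0.1:                            # occasional regime switch
        state = 1 - state
y = reflect(path)
print(min(y) >= 0.0, y[0] == 0.0)
```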
+---PAGE_BREAK---
+
+Since $\hat{I}(\cdot)$ is irreducible, one can define its associated stationary time-reversed Markov chain $I(\cdot)$ with transition rate matrix $\mathcal{A}$; for the existence and detailed description of such reversed chain, see Chapter 2.5 of Asmussen (2003). Let us write $N(\cdot)$ to denote a Markov-modulated Poisson process with intensity $\lambda_{I(\cdot)}$, and let $A_k = \inf\{t \ge 0: N(t) = k\}$. We consider a sequence $(W(k): k \ge 1)$ of conditionally independent random variables representing the service requirements (backward in time) such that $\phi_i(\theta) = E(\exp(\theta W(k))|I(A_k) = i)$.
+
+We then can define $Z(t) = \sum_{k=1}^{N(t)} W(k) - \int_0^t \mu_{I(s)} ds$. Following the same arguments as in Section 2, we can run a stationary version $Y^*$ of $Y$ backward via the process
+
+$$Y^*(-t) = \sup_{s \ge t} (Z(s) - Z(t)).$$
+
+Therefore, $Y^*(-t)$ can be simulated exactly as long as a convenient change of measure can be constructed for the process $(I(\cdot), Z(\cdot))$, so that a suitable adaptation of Algorithm 1.1.1 can be applied. Once the adaptation of Algorithm 1.1.1 is in place, the adaptation of Algorithms 1.1 and 1 is straightforward.
+
+In order to define such a change of measure, let us define the matrix $M(\theta, t) \in \mathbb{R}^{n \times n}$, for $t \ge 0$, via
+
+$$M_{ij}(\theta, t) = E_i[\exp(\theta Z(t)); I(t) = j],$$
+
+where the notation $E_i(\cdot)$ means that $I(0) = i$. Note that $M(\cdot, t)$ is well defined in a neighborhood of the origin. In what follows we assume that $\theta$ is such that all coordinates of $M(\theta, t)$ are finite.
+
+It is known [see, e.g., Chapters 11.2 and 13.8 of Asmussen (2003) and the references therein] that $M(\theta, t) = \exp(tG(\theta))$, where the matrix $G(\theta)$ is defined by
+
+$$G_{ij}(\theta) = \begin{cases} \mathcal{A}_{ij}, & \text{if } i \neq j, \\ \mathcal{A}_{ii} - \mu_i\theta + \lambda_i(\phi_i(\theta) - 1), & \text{if } i = j. \end{cases}$$
+
+Besides, $G(\theta)$ has a unique eigenvalue $\beta(\theta)$ of maximal real part, corresponding to a strictly positive eigenvector $(u(i, \theta): 1 \le i \le n)$. The eigenvalue $\beta(\theta)$ has the following properties, which follow from Propositions 2.4 and 2.10 in Chapter 11.2 of Asmussen (2003):
+
+LEMMA 2.
+
+(1) $\beta(\theta)$ is convex in $\theta$ and $\dot{\beta}(\theta)$ is well defined.
+
+(2) $\lim_{t \to \infty} Z(t)/t = \dot{\beta}(0) = \lim_{t \to \infty} \hat{X}(t)/t < 0$.
+
+(3) $(M(t, \theta): t \ge 0)$ defined via
+
+$$M(t, \theta) = \frac{u(I(t), \theta)}{u(I(0), \theta)} \exp\left(\theta Z(t) - t\beta(\theta)\right)$$
+
+is a martingale.
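For a toy two-state example with Exp($\eta_i$) job sizes, so that $\phi_i(\theta) = \eta_i/(\eta_i - \theta)$, one can check numerically that $G(0)$ reduces to the generator, hence $\beta(0) = 0$, and that the Perron eigenvector is strictly positive as Lemma 2 requires. All rates below are made-up illustrative values:

```python
import numpy as np

# Toy 2-state check of G(theta), with kappa_i(theta) = lam_i*(phi_i(theta)-1)
# - mu_i*theta on the diagonal, and of Lemma 2: the eigenvalue beta(theta) of
# maximal real part has a strictly positive eigenvector, and beta(0) = 0
# because G(0) is the generator itself.

A = np.array([[-1.0, 1.0], [2.0, -2.0]])  # time-reversed chain generator
lam = np.array([1.0, 3.0])                # modulated arrival rates
mu = np.array([2.0, 2.0])                 # service rates
eta = np.array([2.0, 4.0])                # Exp(eta_i) jobs: phi_i(t) = eta/(eta - t)

def G(theta):
    phi = eta / (eta - theta)
    return A + np.diag(lam * (phi - 1.0) - mu * theta)

def beta_and_u(theta):
    vals, vecs = np.linalg.eig(G(theta))
    k = np.argmax(vals.real)              # Perron eigenvalue of this ML-matrix
    u = vecs[:, k].real
    return vals[k].real, u / u[0]         # normalize so the eigenvector is positive

b0, u0 = beta_and_u(0.0)
b1, u1 = beta_and_u(0.5)
print(abs(b0) < 1e-8, bool(np.all(u0 > 0)), bool(np.all(u1 > 0)))
```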
+---PAGE_BREAK---
+
+As explained in Chapter 13.8 of Asmussen (2003), the martingale $M(\cdot)$ induces a change of measure for the process $(I(\cdot), Z(\cdot))$ as we shall explain. Let $P$ be the probability law of $(I(\cdot), Z(\cdot))$, and define a new probability measure $\tilde{P}$ for $(I(s), Z(s): s \le t)$ as $d\tilde{P} = M(t, \theta) dP$.
+
+We now describe the law of $(I(\cdot), Z(\cdot))$ under $\tilde{P}$. The process $I(\cdot)$ is a continuous time Markov chain with rate matrix $\tilde{\mathcal{A}}_{ij} = \mathcal{A}_{ij} u(j, \theta)/u(i, \theta)$ for $i \ne j$ (and $\tilde{\mathcal{A}}_{ii} = -\sum_{j \ne i} \tilde{\mathcal{A}}_{ij}$). In addition,
+
+$$Z(t) \stackrel{d}{=} \sum_{k=1}^{\tilde{N}(t)} \tilde{W}(k) - \int_0^t \mu_{I(s)} ds,$$
+
+where $\tilde{N}$ is a Markov-modulated Poisson process with rate at time $t$ equal to $\lambda_{I(t)}\phi_{I(t)}(\theta)$, and the $\tilde{W}(k)$'s are conditionally independent given $I(\cdot)$ with moment generating function $\tilde{\phi}_i(\cdot)$ defined via
+
+$$\tilde{\phi}_i(\eta; \theta) = \tilde{E}(\exp(\eta \tilde{W}(k)) | I(A_k) = i) = \phi_i(\eta + \theta) / \phi_i(\theta),$$
+
+which is finite in a neighborhood of the origin. In addition, $Z(t)/t \to \dot{\beta}(\theta)$ under $\tilde{P}$.
+
+Because of the stability condition of the system, we have that $\dot{\beta}(0) < 0$. Then, following the same argument as in the remark given at the end of Section 2.3, we may assume the existence of the Cramér root $\theta^* > 0$ such that $\beta(\theta^*) = 0$ and $\dot{\beta}(\theta^*) > 0$. The change of measure that allows the adaptation of Algorithm 1.1.1 is given by selecting $\theta^* > 0$ as indicated. Now, select $m > 0$ such that
+
+$$ (25) \qquad K := \exp(-\theta^* m) \max_{i,j} \frac{u(i, \theta^*)}{u(j, \theta^*)} \le 1. $$
+
+We will use the notation $P_{0,i}(\cdot)$ to denote the law $P(\cdot)$ conditional on $Z(0)=0$ and $I(0)=i$. Let us write $P_{0,i}^*(\cdot)$ to denote the law of $(Z(t): 0 \le t \le T_m)$ [under $P_{0,i}(\cdot)$] conditional on $T_m < \infty$. Further, we write $\tilde{P}_{0,i}(\cdot)$ to denote the law of $\tilde{P}(\cdot)$, selecting $\theta = \theta^*$, conditional on $Z(0)=0$ and $I(0)=i$. Then we have that $\tilde{P}_{0,i}(T_m < \infty) = 1$ [by Lemma 2 since $\dot{\beta}(\theta^*) > 0$], and therefore [by (25)], we have
+
+$$
+\begin{align*}
+& \frac{d P_{0,i}^*}{d \tilde{P}_{0,i}}((I(t), Z(t)): 0 \le t \le T_m) \\
+&= \frac{u(i, \theta^*)}{u(I(T_m), \theta^*)} \times \frac{\exp(-\theta^* Z(T_m)) I(T_m < \infty)}{P_{0,i}(T_m < \infty)} \\
+&\le \frac{K}{P_{0,i}(T_m < \infty)} \le \frac{1}{P_{0,i}(T_m < \infty)}. \\
+\end{align*}
+$$
+
+This inequality, which is completely analogous to identities (18) and (20) underlying Algorithm 1.1.1, makes it clear that the corresponding adaptation to our current setting follows.
+---PAGE_BREAK---
+
+For the $d$-dimensional case ($d > 1$), we first assume the existence of the Cramér root $\theta_j^* > 0$ for each dimension $j \in \{1, \dots, d\}$. In this setting we also must compute the corresponding positive eigenvector $(u_j(i, \theta_j^*): 1 \le i \le n)$ for each $j \in \{1, \dots, d\}$. The desired change of measure that allows the adaptation of Algorithm 1.1.1 is just a mixture of changes of measures such as those described above induced by $M(\cdot, \theta_j^*)$ in each direction, just as discussed in Section 2.3.1, with weight $w_j = \exp(-\theta_j^* m) / \sum_{k=1}^d \exp(-\theta_k^* m)$. The corresponding likelihood ratio is then
+
+$$
+\begin{aligned}
+& \frac{dP_{0,i}^*}{d\tilde{P}_{0,i}}((I(t), Z(t)): 0 \le t \le T_m) \\
+& \quad = \frac{1}{\sum_{j=1}^d w_j \exp(\theta_j^* Z_j(T_m)) u_j(I(T_m), \theta_j^*) / u_j(i, \theta_j^*)},
+\end{aligned}
+$$
+
+and $m$ must be selected so that
+
+$$ \sum_{j=1}^{d} \exp(-\theta_j^* m) \sup_{j,i,k} \frac{u_j(i, \theta_j^*)}{u_j(k, \theta_j^*)} \le 1. $$
+
+**4. Algorithm for reflected Brownian motion.** In this section, we revise our algorithm and explain how we can apply it to the case of reflected Brownian motion. Consider a multidimensional Brownian motion
+
+$$ X(t) = vt + AB(t), $$
+
+where $v \in \mathbb{R}^d$ is the drift vector, and $A \cdot A^T \triangleq \Sigma \in \mathbb{R}^{d \times d}$ is the positive definite covariance matrix. Our target process $Y(t)$ is the solution to the following Skorokhod problem with input process $X(\cdot)$ and initial value $Y(0) = y_0$:
+
+$$
+\begin{align*}
+d\mathbf{Y}(t) &= d\mathbf{X}(t) + R d\mathbf{L}(t), & \mathbf{Y}(0) &= \mathbf{y}_0, \\
+\mathbf{Y}(t) &\ge 0, & Y_j(t)\,dL_j(t) &= 0, & L_j(0) &= 0, & dL_j(t) &\ge 0.
+\end{align*}
+$$
+
+We assume that the reflection matrix $R$ is an $M$-matrix of the form $R = I - Q^T$, where $Q$ has nonnegative coordinates and a spectral radius equal to $\alpha < 1$, so that $R^{-1}$ has only nonnegative elements; see page 304 of Harrison and Reiman (1981). We also assume the stability condition $R^{-1}v < 0$ for the existence of the steady-state distribution. As discussed in Harrison and Reiman (1981), there is a unique solution pair $(\mathbf{Y}, \mathbf{L})$ to the Skorokhod problem associated with $\mathbf{X}$, and the process $\mathbf{Y}$ is called a reflected Brownian motion (RBM). We wish to sample $\mathbf{Y}(\infty)$ (at least approximately, with a pre-defined controlled error).
+
+The stochastic dominance result for reflected Brownian motions that is analogous to Lemma 1 was first developed in the proof of Lemma 12 in Harrison and Williams (1987). In detail, we can construct a dominating process $\mathbf{Y}^+(\cdot)$ as follows. First, we can choose $\mathbf{z} \in \mathbb{R}^d$ such that $\mathbf{v} < \mathbf{z}$ and $R^{-1}\mathbf{z} < 0$. Define a process
+
+$$ (26) \qquad \mathbf{Z}(t) = \mathbf{X}(t) - \mathbf{z}t := A\mathbf{B}(t) - \mu t, $$
+---PAGE_BREAK---
+
+where $\boldsymbol{\mu} = \mathbf{z} - \mathbf{v}$, and let $\mathbf{Y}^+(\cdot)$ be the RBM corresponding to the Skorokhod problem (4), which has orthogonal reflection. Then $R^{-1}\mathbf{Y}(t) \le R^{-1}\mathbf{Y}^+(t)$. As a result, we can assume without loss of generality that the input Brownian motion has strictly negative drift coordinatewise. In sum, the following assumption is in force throughout this section:
+
+**ASSUMPTION (D).** The input process $\mathbf{Z}(\cdot)$ satisfies (26) with $\mu_i > \delta_0 > 0$ for all $1 \le i \le d$, and we assume that $A$ is nondegenerate so that $AA^T$ is positive definite.
+
+Since $\mathbf{Z}(\cdot)$ has strictly negative drift, following the same argument given for Proposition 1, we can construct a stationary version of the dominating process as
+
+$$ (27) \quad \mathbf{Y}^{+}(-t) = -\mathbf{Z}(t) + \max_{u \ge t} \mathbf{Z}(u) \triangleq \mathbf{M}(t) - \mathbf{Z}(t) \quad \text{for all } t \ge 0. $$
+
+In order to apply the same strategy as in Algorithm 1 to the RBM, we need to address two problems. First, the input process $\mathbf{Z}$ requires a continuous path description, while the computer can only encode and generate discrete objects. Second, the dominating process is a reflected Brownian motion with orthogonal reflection, so its hitting time $\tau$ to the origin is almost surely infinite [see Varadhan and Williams (1985)], which means that Algorithm 1 will not terminate in finite time in this case. To solve the first problem, we take advantage of a wavelet representation of Brownian motion and use it to simulate a piecewise linear approximation with uniformly small (deterministic) error. To solve the second problem, we define an approximate coalescence time $\tau_\epsilon$ as the first passage time to a small ball around the origin, so that $E[\tau_\epsilon] < \infty$ and the error caused by replacing $\tau$ with $\tau_\epsilon$ is bounded by $\epsilon$. In sum, we settle for an algorithm that is not exact, but one that achieves any user-defined precision $\epsilon$. Nevertheless, at the end of Section 4.1 we will show that we can actually use this $\epsilon$-biased algorithm to estimate without any bias the steady-state expectation of continuous functions of RBM by introducing an extra randomization step.
+
+Section 4 is organized as follows. In Section 4.1, we describe the main strategy of our algorithm. In Section 4.2, we use a wavelet representation to simulate a piecewise linear approximation of Brownian motion. In Section 4.3, we discuss the details of jointly simulating $\tau_\epsilon$ and the stationary dominating process, based on the techniques we have already used for the compound Poisson cases. Finally, in Section 4.4, we give an estimate of the computational complexity of our algorithm.
+
+4.1. *The structure of the main simulation procedure.* The main strategy of the algorithm is almost the same as Algorithm 1, except for two modifications due to the two issues discussed above: first, instead of simulating the input process $\mathbf{Z}$ exactly, we simulate a piecewise linear approximation $\mathbf{Z}^\varepsilon$ such that $|Z_i^\varepsilon(t) -$
+---PAGE_BREAK---
+
+$Z_i(t)| < \varepsilon$ for all indices $i$ and $t \ge 0$; second, instead of sampling the coalescence time $\tau$ such that $\mathbf{M}(\tau) = \mathbf{Z}(\tau)$, we simulate an approximation coalescence time, $\tau_\varepsilon$, such that $\mathbf{M}(\tau_\varepsilon) \le \mathbf{Z}(\tau_\varepsilon) + \boldsymbol{\varepsilon}$.
+
+With this notation, we now give the structure of our algorithm. The details will be given later in Sections 4.2 and 4.3:
+
+ALGORITHM 2 [Sampling with controlled error of $\mathbf{Y}(\infty)$].
+
+Step 1: Let $\tau_\varepsilon \ge 0$ be any time for which $\mathbf{M}(\tau_\varepsilon) \le \mathbf{Z}(\tau_\varepsilon) + \boldsymbol{\varepsilon}$, and simulate, jointly with $\tau_\varepsilon$, $\mathbf{Z}_{-\tau_\varepsilon}^{-}(t) = -\mathbf{Z}^\varepsilon(\tau_\varepsilon - t)$ for $0 \le t \le \tau_\varepsilon$.
+
+Step 2: Define $\mathbf{X}_{-\tau_\varepsilon}^{-}(t) = \mathbf{Z}^{\varepsilon}(\tau_\varepsilon) - \mathbf{Z}^{\varepsilon}(\tau_\varepsilon - t) + \mathbf{z}t$, and compute $\mathbf{Y}_{-\tau_\varepsilon}^{\varepsilon}(\tau_\varepsilon, 0)$, which is obtained by evolving the solution $\mathbf{Y}_{-\tau_\varepsilon}^{\varepsilon}(\cdot, 0)$ to the Skorokhod problem
+
+$$d\mathbf{Y}_{-\tau_{\varepsilon}}^{\varepsilon}(t, 0) = d\mathbf{X}_{-\tau_{\varepsilon}}^{-}(t) + R d\mathbf{L}_{-\tau}(t, 0),$$
+
+$$\mathbf{Y}_{-\tau_{\epsilon}}^{\epsilon}(t, 0) \geq 0, \quad Y_{-\tau_{\epsilon}, j}^{\epsilon}(t, 0)\, dL_{-\tau_{\epsilon}, j}(t, 0) = 0,$$
+
+$$L_{-\tau_{\epsilon}, j}(0, 0) = 0, \quad dL_{-\tau_{\epsilon}, j}(t, 0) \geq 0,$$
+
+for $\tau_\epsilon$ units of time.
+
+Step 3: Output $\mathbf{Y}_{-\tau_\epsilon}^\epsilon(\tau_\epsilon, 0)$.
+
+First, we show that there exists a stationary version $(\mathbf{Y}^*(t): t \le 0)$ that is coupled with the dominating stationary process $(\mathbf{Y}^+(t): t \le 0)$ as given by (27).
+
+LEMMA 3. There exists a stationary version $\{\mathbf{Y}^*(t): t \le 0\}$ of $\mathbf{Y}$ such that $R^{-1}\mathbf{Y}^*(t) \le R^{-1}\mathbf{Y}^+(t)$ for all $t \le 0$.
+
+PROOF. The proof follows the same argument as that of Proposition 2. □
+
+The following proposition shows that the error of the above algorithm has a small and deterministic bound.
+
+PROPOSITION 5. Suppose $\mathbf{X} \in \mathbb{R}^d$. Let $r = \max_{i,j} R_{ij}^{-1} / \min_{i,j} \{R_{ij}^{-1}: R_{ij}^{-1} > 0\}$. Then there exists a stationary version $\mathbf{Y}^*$ of $\mathbf{Y}$ such that in each index $i$,
+
+$$|Y_i^*(0) - Y_{-\tau_\epsilon, i}^\epsilon(\tau_\epsilon, 0)| \le \left( \frac{1}{1-\alpha} + dr \right) \epsilon.$$
+
+Here $0 \le \alpha < 1$ is the spectral radius of the matrix $Q$.
+
+PROOF. Consider three processes on $[-\tau_\epsilon, 0]$. The first is the coupled stationary process $\mathbf{Y}^*(\cdot)$ as constructed in Lemma 3, which is the solution to the Skorokhod problem with initial value $\mathbf{Y}^*(-\tau_\epsilon)$ at time $-\tau_\epsilon$ and input process $\tilde{\mathbf{X}}(\cdot) = \mathbf{X}(\tau_\epsilon) - \mathbf{X}(-\cdot)$ on $[-\tau_\epsilon, 0]$; the second is a process $\tilde{\mathbf{Y}}(\cdot)$, which is the solution to the Skorokhod problem with initial value 0 at time $-\tau_\epsilon$ and input process $\tilde{\mathbf{X}}(\cdot)$; the third is the process $\mathbf{Y}_{-\tau_\epsilon}^\epsilon(t, 0)$ as we described in the algorithm, which is the solution to the Skorokhod problem with initial value 0 at time $-\tau_\epsilon$ and input process $\mathbf{X}_{-\tau_\epsilon}^{-}(t)$ as defined in step 2 of Algorithm 2.
+
+By definition, we know that for each index $i$, $|Y_i^+(-\tau_\epsilon)| < \epsilon$. Since $R^{-1}\mathbf{Y}^*(-\tau_\epsilon) \le R^{-1}\mathbf{Y}^+(-\tau_\epsilon)$, the coupled process satisfies $Y_i^*(-\tau_\epsilon) < dr \epsilon$. Note that $\mathbf{Y}^*(\cdot)$ has the same input data as $\tilde{\mathbf{Y}}(\cdot)$ except for their initial values. According to the comparison theorem of Ramasubramanian (2000), the difference between these two processes is uniformly bounded, coordinate-wise, by the difference of their initial values. Therefore, we can conclude that $|Y_i^*(0) - \tilde{Y}_i(0)| < dr \epsilon$.
+
+On the other hand, $\tilde{\mathbf{Y}}(\cdot)$ and $\mathbf{Y}_{-\tau_\epsilon}^\epsilon(\cdot, 0)$ have common initial value 0 and input processes whose difference is uniformly bounded by $\epsilon$. It was proved in Harrison and Reiman (1981) that the Skorokhod mapping is Lipschitz continuous under the uniform metric $d_T(Y^1(\cdot), Y^2(\cdot)) \triangleq \max_{1 \le i \le d} \sup_{0 \le t \le T} |Y_i^1(t) - Y_i^2(t)|$ for all $0 < T < \infty$, and the Lipschitz constant is equal to $1/(1-\alpha)$, where $0 \le \alpha < 1$ is the spectral radius of $Q$. Therefore, we have that $|\tilde{Y}_i(0) - Y_{-\tau_\epsilon,i}^\epsilon(\tau_\epsilon, 0)| < \epsilon/(1-\alpha)$.
+
+Simply applying the triangle inequality, we obtain that
+
+$$|Y_i^*(0) - Y_{-\tau_\epsilon, i}^\epsilon(\tau_\epsilon, 0)| \le \left(\frac{1}{1-\alpha} + dr\right)\epsilon. \quad \square$$
+
+We conclude this subsection by explaining how to remove the $\epsilon$-bias induced by Algorithm 2. Let $T$ be any positive random variable with positive density $\{f(t): t \ge 0\}$, independent of $\mathbf{Y}^*(0)$. Let $g: \mathbb{R}^d \to \mathbb{R}$ be any positive Lipschitz continuous function, so that there exists a constant $K > 0$ such that $|g(\mathbf{x}) - g(\mathbf{y})| \le K \max_{1 \le i \le d} |x_i - y_i|$ for all $\mathbf{x}, \mathbf{y} \in \mathbb{R}^d$. As illustrated in Beskos, Peluchetti and Roberts (2012),
+
+$$
+\begin{align*}
+E[g(\mathbf{Y}^*(0))] &= E\left[\int_0^{g(\mathbf{Y}^*(0))} dt\right] = E\left[\int_0^{\infty} \frac{1(g(\mathbf{Y}^*(0)) > t)}{f(t)}\, f(t)\, dt\right] \\
+&= E\left[\frac{1(g(\mathbf{Y}^*(0)) > T)}{f(T)}\right].
+\end{align*}
+$$
+
+Since $|Y_i^*(0) - Y_{-\tau_\epsilon,i}^\epsilon(\tau_\epsilon, 0)| \le (\frac{1}{1-\alpha}+dr)\epsilon$, we can sample $T$ first, then select $\epsilon > 0$ small enough and output $1(g(\mathbf{Y}_{-\tau_\epsilon}^\epsilon(\tau_\epsilon, 0)) > T)/f(T)$ as an unbiased estimator of $E[g(\mathbf{Y}^*(0))]$, without the need to compute $\mathbf{Y}^*(0)$ exactly. It is important to have $(\mathbf{Y}_{-\tau_\epsilon}^\epsilon(\tau_\epsilon, 0) : \epsilon > 0)$ coupled as $\epsilon \to 0$, and this can be achieved thanks to the wavelet construction that we discuss next.
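
As a small numerical illustration of this randomization trick (a self-contained sketch; the functional `g`, the Exp(1) randomization density `f`, and the frozen sample value are our own illustrative choices, not part of the algorithm):

```python
import math
import random

def randomized_estimator(y, g, f, sample_T, rng, n):
    """Average of 1(g(y) > T)/f(T) over n draws of T; unbiased for g(y) by Fubini."""
    total = 0.0
    for _ in range(n):
        t = sample_T(rng)
        if g(y) > t:
            total += 1.0 / f(t)
    return total / n

rng = random.Random(2024)
g = lambda y: y                      # a positive Lipschitz functional
f = lambda t: math.exp(-t)           # Exp(1) density of the randomization variable T
sample_T = lambda r: r.expovariate(1.0)

# freeze Y*(0) at the value 2.0 for clarity; then E[1(g > T)/f(T)] = g = 2.0
est = randomized_estimator(2.0, g, f, sample_T, rng, 200_000)
```

More draws drive `est` toward $g = 2$; in the algorithm the indicator is evaluated at $\mathbf{Y}_{-\tau_\epsilon}^\epsilon(\tau_\epsilon, 0)$ instead, which can change the answer only when $T$ falls within $K(\frac{1}{1-\alpha}+dr)\epsilon$ of $g(\mathbf{Y}^*(0))$.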
+
+4.2. *Wavelet representation of Brownian motion.* In this part, we give an algorithm to generate piecewise linear approximations to a Brownian motion path-by-path, with uniform precision on any finite time interval. The main idea is to use a wavelet representation for Brownian motion.
+
+By the Cholesky decomposition, any multidimensional Brownian motion can be expressed as a linear combination of independent one-dimensional Brownian motions. Our goal is to give a piecewise linear approximation to a $d$-dimensional Brownian motion $\mathbf{Z}$ with uniform precision $\epsilon$ on $[0, 1]$. Suppose that we can write $\mathbf{Z} = A\mathbf{B}$, where $A$ is the Cholesky factor of the covariance matrix and the $B_i$'s are independent standard Brownian motions. If we are able to give a piecewise linear approximation $\tilde{B}_i$ to each $B_i$ on $[0, 1]$ with precision $\epsilon/(d \cdot a)$, where $a = \max_{i,j} |A_{ij}|$, then $A\tilde{\mathbf{B}}$ is a piecewise linear approximation to $\mathbf{Z}$ with uniform error $\epsilon$. Therefore, in the rest of this part, we only need to work with a standard one-dimensional Brownian motion.
+
+Now let us introduce the precise statement of a wavelet representation of Brownian motion; see Steele (2001), pages 34–39. First we need to define the step function $H(\cdot)$ by
+
+$$H(t) = \begin{cases} 1, & \text{for } 0 \le t < \frac{1}{2}, \\ -1, & \text{for } \frac{1}{2} \le t \le 1, \\ 0, & \text{otherwise.} \end{cases}$$
+
+Then define a family of functions
+
+$$H_k(t) = 2^{j/2} H(2^j t - l)$$
+
+for $k = 2^j + l$, where $j \ge 0$ and $0 \le l < 2^j$. Set $H_0(t) = 1$. The following wavelet representation theorem can be found in Steele (2001):
+
+**THEOREM 3.** If $\{W^k: 0 \le k < \infty\}$ is a sequence of independent standard normal random variables, then the series defined by
+
+$$B_t = \sum_{k=0}^{\infty} \left( W^k \int_0^t H_k(s) ds \right)$$
+
+converges uniformly on $[0, 1]$ with probability one. Moreover, the process $\{B_t\}$ defined by the limit is a standard Brownian motion on $[0, 1]$.
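
Since $\int_0^t H_k(s)\,ds$ is an explicit triangular (Schauder) function, partial sums of this series are cheap to evaluate. The sketch below (with our own naming, truncating at a fixed number of terms rather than at the random index used later) draws one approximate path:

```python
import math
import random

def schauder(t, k):
    """Integral of H_k from 0 to t: equal to t for k = 0, otherwise a tent of
    height 2^{-j/2}/2 supported on [l/2^j, (l+1)/2^j] for k = 2^j + l."""
    if k == 0:
        return t
    j = int(math.floor(math.log2(k)))
    x = 2 ** j * t - (k - 2 ** j)
    x = min(max(x, 0.0), 1.0)                # clip to the support
    return 2.0 ** (-j / 2) * min(x, 1.0 - x)

def truncated_bm(weights, t):
    """Partial sum sum_{k=0}^{K} W^k * schauder(t, k) on [0, 1]."""
    return sum(w * schauder(t, k) for k, w in enumerate(weights))

rng = random.Random(7)
W = [rng.gauss(0.0, 1.0) for _ in range(64)]           # W^0, ..., W^63
path = [truncated_bm(W, i / 256) for i in range(257)]  # piecewise linear between grid points
```

Every term with $k \ge 1$ vanishes at $t = 0$ and $t = 1$, so the truncated path always starts at 0 and ends exactly at $W^0$, whatever the truncation level.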
+
+Choose $\eta_k = 4 \cdot \sqrt{\log k}$, and note that $P(|W^k| > \eta_k) = O(k^{-4})$, so $\sum_{k} P(|W^k| > \eta_k) < \infty$. Therefore, $P(|W^k| > \eta_k \text{ i.o.}) = 0$. The simulation strategy will be to sample $\{W^k\}$ jointly with the finite set $\{k: |W^k| \ge \eta_k\}$.
+
+Note that if we take $j = \lfloor \log_2 k \rfloor$, as shown in Steele (2001),
+
+$$\left|\sum_{k=1}^{\infty} W^k \int_0^t H_k(s)\, ds\right| \le \sum_{j=0}^{\infty} \left( 2^{-j/2} \cdot \max_{2^j \le k \le 2^{j+1}-1} |W^k| \right).$$
+
+Since $\sum_{j=0}^{\infty} 2^{-j/2}\sqrt{j+1} < \infty$, for any $\epsilon > 0$ there exists $K_0 > 0$, such that
+
+$$ (28) \qquad \sum_{j=\lceil \log_2 K_0 \rceil}^{\infty} 2^{-j/2} \sqrt{j+1} < \epsilon. $$
+
+As a result, define
+
+$$ (29) \qquad K = \max\{k: |W^k| > \eta_k\} \vee K_0 < \infty,$$
+
+then $\sum_{k=K+1}^{\infty} |W^k| \left|\int_0^t H_k(s)\, ds\right| \le \varepsilon$. If we can simulate $\{(W^k)_{k=1}^K, K\}$ jointly,
+
+$$ (30) \qquad B^\varepsilon(t) = \sum_{k=0}^{K} W^k \int_0^t H_k(s) ds $$
+
+will be a piecewise linear approximation to a standard Brownian motion within
+precision $\varepsilon$ in $C[0, 1]$.
+
+Now we show how to simulate $K$ jointly with $\{W^k: 1 \le k \le K\}$. The algorithm
+is as below with $\rho = 4$ as we have chosen $\eta_k = 4 \cdot \sqrt{\log k}$:
+
+ALGORITHM 2w (Simulate $K$ jointly with $\{W^k\}$).
+
+* Step 0: Initialize $G = K_0$ and $S$ to be an empty array.
+* Step 1: Set $U = 1, D = 0$. Simulate $V \sim \text{Uniform}(0, 1)$.
+* Step 2: While $U > V > D$, set $G \leftarrow G + 1$ and $U \leftarrow P(|W^G| \le \rho\sqrt{\log G}) \times U$ and $D \leftarrow (1 - G^{1-\rho^2/2}) \times U$.
+* Step 3: If $V \ge U$, add $G$ to the end of $S$, that is, $S = [S, G]$, and return to step 1.
+* Step 4: If $V \le D$, $K = \max(S, K_0)$.
+* Step 5: For every $k \in S$, generate $W^k$ according to the conditional distribution of $W$ given $\{|W| > \rho\sqrt{\log k}\}$; for every other $1 \le k \le K$, generate $W^k$ according to the conditional distribution of $W$ given $\{|W| \le \rho\sqrt{\log k}\}$.
+
+In this algorithm, we keep an array $S$, which is used to record the indices such that $|W^k| > \rho\sqrt{\log k}$, and a number $G$ which is the next index to be added into $S$. Precisely speaking, given that the last element in array $S$ is $N$, say, $\max(S) = N$, $G = \inf\{k \ge N+1 : |W^k| > \rho\sqrt{\log k}\}$. The key part of the algorithm is to simulate a Bernoulli with success parameter $P(G < \infty)$ and to sample $G$ given $G < \infty$.
+
+For this purpose, we keep updating two constants $U$ and $D$ such that $U > P(G = \infty) > D$ and $(U - D) \to 0$ as the number of iterations grows. To illustrate this point, denote the value of $U$ and $D$ in the $m$th iteration by $U_m$ and $D_m$, respectively. Then for all $m > 0$,
+
+$$ P(G = \infty) = \prod_{k=N+1}^{\infty} P(|W^k| \le \rho\sqrt{\log k}) < \prod_{k=N+1}^{N+m} P(|W^k| \le \rho\sqrt{\log k}) = U_m. $$
+
+On the other hand, for all $\rho > \sqrt{2}$ and $N$ large enough,
+
+$$
+\begin{aligned}
+& \prod_{k=N+m+1}^{\infty} P(|W^k| \le \rho\sqrt{\log k}) > 1 - \sum_{k=N+m+1}^{\infty} P(|W^k| > \rho\sqrt{\log k}) \\
+& \qquad \ge 1 - (N+m+1)^{1-\rho^2/2},
+\end{aligned}
+$$
+
+and hence we conclude that $D_m = (1 - (N + m + 1)^{1-\rho^2/2})U_m < P(G = \infty)$.
+Because $(1 - (N + m + 1)^{1-\rho^2/2}) \to 1$ as $m \to \infty$, the algorithm proceeds to step 3 or step 4 after a finite number of iterations, and we can decide whether $G < \infty$ or not.
+
+Now we show that we can actually sample $G$ at the same time as the Bernoulli with success probability $P(G < \infty)$ is generated. If $V \le D$, we conclude that $V < P(G = \infty)$, and hence $G = \infty$ and $K = \max(S, K_0)$. Otherwise, we have $G < \infty$. In this case, suppose step 2 ends in the $(m+1)$th iteration with $V \ge U$. Since $U_m = P(|W^k| \le \rho\sqrt{\log k} \text{ for } k = N+1, \dots, N+m)$, the relation $U_{m+1} \le V < U_m$ implies precisely that $N+m+1 = \inf\{k \ge N+1 : |W^k| > \rho\sqrt{\log k}\}$. Therefore, by definition, $G = N+m+1$ and should be added to the array $S$. Once $S$ and $K$ are generated, $\{W^k : 1 \le k \le K\}$ can be generated jointly with $S$ and $K$ according to step 5.
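
A compact transcription of Algorithm 2w and the record-keeping just described (our naming; the conditional draws in step 5 use Marsaglia's tail method and plain rejection as stand-ins for "the conditional distribution of $W$", and in double precision the factor $P(|W^G| \le \rho\sqrt{\log G})$ rounds to 1 once the tail probability falls below machine epsilon, which effectively truncates extremely distant exceedances):

```python
import math
import random

RHO = 4.0

def p_abs_le(a):
    """P(|W| <= a) for a standard normal W."""
    return math.erf(a / math.sqrt(2.0)) if a > 0.0 else 0.0

def tail_normal(rng, a):
    """Standard normal conditioned on exceeding a > 0 (Marsaglia's tail method)."""
    while True:
        x = -math.log(rng.random()) / a
        y = -math.log(rng.random())
        if 2.0 * y > x * x:
            return a + x

def algorithm_2w(rng, K0):
    """Simulate K = max{k : |W^k| > rho sqrt(log k)} v K0 jointly with W^1..W^K."""
    S, G = [], K0
    while True:
        U, D, V = 1.0, 0.0, rng.random()
        while U > V > D:                      # step 2: keep U > P(G = inf) > D
            G += 1
            U *= p_abs_le(RHO * math.sqrt(math.log(G)))
            D = (1.0 - G ** (1.0 - RHO ** 2 / 2.0)) * U
        if V >= U:                            # step 3: G is the next exceedance index
            S.append(G)
        else:                                 # step 4: V <= D, no further exceedances
            K = max(max(S) if S else K0, K0)
            break
    W = {}
    for k in range(1, K + 1):                 # step 5: fill in the W^k
        eta = RHO * math.sqrt(math.log(k))
        if k in S:                            # tail piece: |W^k| > eta
            mag = tail_normal(rng, eta)
            W[k] = mag if rng.random() < 0.5 else -mag
        elif k <= K0:                         # indices up to K0 never determine K
            W[k] = rng.gauss(0.0, 1.0)
        else:                                 # body piece: |W^k| <= eta
            while True:
                w = rng.gauss(0.0, 1.0)
                if abs(w) <= eta:
                    W[k] = w
                    break
    return K, S, W

K, S, W = algorithm_2w(random.Random(11), 8)
```

For indices $k \le K_0$ we draw unconditioned normals, since exceedances below $K_0$ never affect $K = \max(S, K_0)$; this is a simplification on our part relative to the verbatim step 5.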
+
+We also note that $B^\epsilon(t)$ has the following nice property:
+
+PROPOSITION 6.
+
+$$B^{\epsilon}(1) = B(1).$$
+
+PROOF. The equality follows from the fact that $\int_0^1 H_k(s)\, ds = 0$ for any $k \ge 1$. $\square$
+
+As a consequence of this property, for any compact time interval $[0, T]$ (without loss of generality, assume $T$ is an integer), in order to give an approximation for $B(t)$ on $[0, T]$ with guaranteed $\epsilon$ precision uniformly in $[0, T]$, we only need to run the above algorithm $T$ times to get $T$ i.i.d. sample paths $\{B^{\epsilon,(i)}(t): t \in [0, 1]\}$ for $i = 1, 2, \dots, T$, and define recursively
+
+$$B^{\epsilon}(t) = \sum_{i=1}^{\lfloor t \rfloor} B^{\epsilon,(i)}(1) + B^{\epsilon,(\lfloor t \rfloor + 1)}(t - \lfloor t \rfloor).$$
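
A self-contained sketch of this concatenation (reusing a fixed-level truncated wavelet sum as the unit-interval approximation; all names, the seed, and the truncation level are ours):

```python
import math
import random

def schauder(t, k):
    """Integral of H_k from 0 to t (t itself for k = 0, a tent function otherwise)."""
    if k == 0:
        return t
    j = int(math.floor(math.log2(k)))
    x = min(max(2 ** j * t - (k - 2 ** j), 0.0), 1.0)
    return 2.0 ** (-j / 2) * min(x, 1.0 - x)

def unit_path(rng, K=64):
    """One truncated-wavelet Brownian approximation on [0, 1], as a closure."""
    W = [rng.gauss(0.0, 1.0) for _ in range(K + 1)]
    return lambda s: sum(w * schauder(s, k) for k, w in enumerate(W))

def concatenated_path(rng, T, K=64):
    """B^eps on [0, T]: running sum of unit-interval endpoints plus the current piece."""
    pieces = [unit_path(rng, K) for _ in range(T)]
    ends = [0.0]
    for p in pieces:
        ends.append(ends[-1] + p(1.0))        # p(1) is exact: only the k = 0 term survives
    def B(t):
        n = min(int(math.floor(t)), T - 1)    # map t = T onto the last piece
        return ends[n] + pieces[n](t - n)
    return B

B = concatenated_path(random.Random(3), T=4)
```

Continuity at integer times is automatic: each piece vanishes at $s = 0$ and contributes exactly its $W^0$ term at $s = 1$.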
+
+4.3. *A conceptual framework for the joint simulation of $\tau_\epsilon$ and $\mathbf{Z}^\epsilon$.* Our goal now is to develop an algorithm for simulating $\tau_\epsilon$ and $(\mathbf{Z}^\epsilon(t): 0 \le t \le \tau_\epsilon)$ jointly. In detail, we want to simulate $\mathbf{Z}^\epsilon(t)$ forward in time and stop at a random time $\tau_\epsilon$ such that for any time $s > \tau_\epsilon$, $Z_i(s) \le Z_i(\tau_\epsilon) + \epsilon$ for $1 \le i \le d$.
+
+Because of the special structure of the wavelet representation used in simulating the process $\mathbf{Z}^\epsilon(\cdot)$, the time $T_m \triangleq \inf\{t \ge 0: Z_i^\epsilon(t) > m \text{ for some } 1 \le i \le d\}$ is no longer a stopping time with respect to the filtration generated by $\mathbf{Z}(\cdot)$. As a consequence, we cannot directly carry out importance sampling as in Algorithm 1.1.1. To remedy this problem, we decompose the process $\mathbf{Z}^\epsilon(t)$ into two parts: a random walk $\{\mathbf{Z}^\epsilon(n): n \ge 0\}$ with Gaussian increments and a series of independent Brownian bridges $\{\bar{\mathbf{B}}_n(s): s \in [0, 1], n \ge 0\}$, where $\bar{\mathbf{B}}_n(s) \triangleq \mathbf{Z}^\epsilon(n+s) - \mathbf{Z}^\epsilon(n) - s(\mathbf{Z}^\epsilon(n+1) - \mathbf{Z}^\epsilon(n))$. Our strategy is first to carry out the importance sampling as in Algorithm 1.1.1 for the random walk $\{\mathbf{Z}^\epsilon(n): n \ge 0\}$ to find its upper bound, and then to develop a new scheme to control the upper bounds attained in the intervals $\{(n, n+1): n \ge 0\}$ by the i.i.d. Brownian bridges $\{\bar{\mathbf{B}}_n(s): s \in [0, 1], n \ge 0\}$.
+
+The whole procedure is based on the wavelet representation of Brownian motion. Let $\{W_n^k(i) : n, k \in \mathbb{N}, i = 1, 2, \dots, d\}$ be a sequence of i.i.d. standard normal random variables. According to the expression given in Theorem 3, for any $t = n + s$, $s \in [0, 1]$,
+
+$$ (31) \qquad Z_i(t) = Z_i(n) + s(Z_i(n+1) - Z_i(n)) \\ \qquad \qquad + \sum_{j=1}^{d} A_{ij} \left( \sum_{k=1}^{\infty} W_n^k(j) \int_0^s H_k(u) du \right). $$
+
+Let us put (31) in matrix form,
+
+$$ \mathbf{Z}(t) = \mathbf{Z}(n) + s(\mathbf{Z}(n+1) - \mathbf{Z}(n)) + A \sum_{k=1}^{\infty} \mathbf{W}_n^k \cdot \int_0^s H_k(u) \, du. $$
+
+For all $n \ge 0$ and $s \in [0, 1]$, define $\bar{\mathbf{B}}_n(s) = A \sum_{k=1}^{\infty} \mathbf{W}_n^k \cdot \int_0^s H_k(u) \, du$. Then the sequence $\{\bar{\mathbf{B}}_n(\cdot) : n \ge 0\}$ is i.i.d. Note that $(Z_i(n+1) - Z_i(n))$ is independent of $\{W_n^k(i) : k \ge 1\}$. We can therefore split the simulation into two independent parts:
+
+(1) Simulate the discrete-time random walk $\{\mathbf{Z}(n) : n \ge 0\}$ with i.i.d. Gaussian increments and $\mathbf{Z}(0) = 0$. That is, $Z_i(0) = 0$ and $Z_i(n+1) = Z_i(n) + \sum_{j=1}^d A_{ij} W_{n+1}^0(j) - \mu_i$, where $\{W_n^0(j) : n \ge 0\}$ are i.i.d. standard normals.
+
+(2) For each $n$, simulate $\bar{\mathbf{B}}_n(s)$ to do bridging between $\mathbf{Z}(n)$ and $\mathbf{Z}(n+1)$.
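
In code the split looks as follows (a sketch; the drift $\mu$, the covariance, the horizon, and the 31-term truncation are illustrative placeholders):

```python
import numpy as np

rng = np.random.default_rng(5)
d, N = 2, 10
mu = np.array([1.0, 1.5])                         # illustrative drift with mu_i > 0
A = np.linalg.cholesky(np.array([[1.0, 0.3],
                                 [0.3, 1.0]]))    # Cholesky factor of the covariance

# Part 1: random walk Z(n+1) = Z(n) + A W^0_{n+1} - mu at integer times
W0 = rng.standard_normal((N, d))
Z = np.vstack([np.zeros(d), np.cumsum(W0 @ A.T - mu, axis=0)])

def schauder(s, k):
    """Integral of H_k from 0 to s for k >= 1: a tent on a dyadic subinterval."""
    j = int(np.floor(np.log2(k)))
    x = np.clip(2 ** j * s - (k - 2 ** j), 0.0, 1.0)
    return 2.0 ** (-j / 2) * min(x, 1.0 - x)

# Part 2: bridges between Z(n) and Z(n+1), truncated at 31 wavelet terms
Wb = rng.standard_normal((N, 31, d))              # Wb[n, k-1] plays the role of W_n^k

def Z_path(t):
    """Reconstruction in the spirit of (31): linear interpolation plus the bridge term."""
    n = int(np.floor(t))
    s = t - n
    bridge = sum(schauder(s, k + 1) * Wb[n, k] for k in range(Wb.shape[1]))
    return Z[n] + s * (Z[n + 1] - Z[n]) + bridge @ A.T
```

The bridge term vanishes at integer times, so the reconstruction agrees with the random walk exactly at $t = n$, as required by the decomposition.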
+
+Now, any time $t_0 > 0$ is an approximate coalescence time $\tau_\epsilon$ if there exists some positive constant $\zeta > 0$ such that the following two conditions hold for all $n \ge t_0$: condition (1), $\mathbf{Z}(n) \le \mathbf{Z}(t_0) - \zeta(n - \lfloor t_0 \rfloor)\mathbf{1} + \boldsymbol{\epsilon}$; and condition (2), $\max\{\bar{\mathbf{B}}_n(s) : s \in [0, 1]\} \le \zeta(n - \lfloor t_0 \rfloor)\mathbf{1}$. Based on these observations, we develop an algorithm to simulate the approximate coalescence time $\tau_\epsilon$ jointly with $\{\mathbf{Z}^\epsilon(t) : 0 \le t \le \tau_\epsilon\}$.
+
+By Assumption (D), $\mu_i > \delta_0$ for some $\delta_0 > 0$. Let $\zeta = \delta_0/2$, and define $\mathbf{S}(n) = \mathbf{Z}(n) + n\zeta\mathbf{1}$, so that $\{\mathbf{S}(n) : n \ge 0\}$ is a random walk with strictly negative drift. Therefore, condition (1) can be checked by carrying out the importance sampling procedure as in Algorithm 1.1.1 for the random walk $\{\mathbf{S}(n) : n \ge 0\}$. More precisely, since $S_i(n)$ has Gaussian increments, we can compute explicitly that $\theta_i^* = 2(\mu_i - \zeta)/\sigma_i^2$ and choose $m > 0$ satisfying (12) in order to carry out the importance sampling procedure for the random walk $\{\mathbf{S}(n) : n \ge 0\}$. Suppose we use the importance sampling procedure and find $t_0$ such that $\mathbf{S}(n) \le \mathbf{S}(t_0)$ for all $n \ge t_0$; then condition (1) is satisfied for $t_0$.
+
+Regarding condition (2), recall that the $\bar{\mathbf{B}}_n(\cdot)$'s are i.i.d. linear combinations of Brownian bridges, and let $M$ be a random time, finite almost surely, such that
+
+$$ (32) \qquad M \ge \max\{n \ge t_0 : \max_{0 \le s \le 1} (\bar{\mathbf{B}}_{n,i}(s) - \zeta(n-t_0)) > 0 \text{ for some } i\}. $$
+
+Observe that for $t_0$ to be an approximate coalescence time, conditions (1) and (2) must hold simultaneously. If, for example, condition (1) is satisfied at time $t_0$ while condition (2) is not, we need to continue the testing procedure and the simulation of the process for $t > t_0$. Then, however, the random walk $\{S(n): n \ge \lceil t_0 \rceil\}$ must be conditioned on the event that $\mathbf{S}(n) \le \mathbf{S}(t_0)$, because the fact that condition (1) holds for $t_0$ reveals "additional information" about the random walk for $n \ge t_0$. Therefore, such "additional information," or "conditioning events," must be incorporated and tracked as conditions (1) and (2) are sequentially tested. All of these conditioning events are described and accounted for in Section 4.3.2, which also includes the overall procedure to sample $\tau_\varepsilon$ jointly with $\mathbf{Z}^\varepsilon$.
+
+Now, let us first provide a precise description of $M$ and explain the simulation algorithm for $M$ in Section 4.3.1.
+
+**4.3.1.** *Simulating $M$ and $\{\bar{\mathbf{B}}_n^\epsilon(\cdot): 1 \le n \le M\}$.* Recall that $\bar{\mathbf{B}}_n(t) = A \sum_{k=1}^\infty \mathbf{W}_n^k \cdot \int_0^t H_k(u) du$, where $\{W_n^k(i): n \ge 0, k \ge 1, 1 \le i \le d\}$ are i.i.d. standard normals. Note that
+
+$$ \sum_{n=0}^{\infty} \sum_{k=1}^{\infty} P(|W_n^k(i)| \ge 4\sqrt{\log(n+1)} + 4\sqrt{\log k}) \le \sum_{n=0}^{\infty} \sum_{k=1}^{\infty} \frac{1}{((n+1)k)^4} < \infty. $$
+
+By the Borel–Cantelli lemma, we can conclude that for each $i \in \{1, \dots, d\}$ there exists $M^i < \infty$ such that for all $(n+1)k > M^i$, $|W_n^k(i)| \le 4\sqrt{\log(n+1)} + 4\sqrt{\log k}$. Clearly, $\sqrt{\log t} = o(t)$ as $t \to \infty$, so we can select $m_0$ large enough such that for any $n > m_0$,
+
+$$ (n+1)\zeta - ad\left(4\sqrt{\log(n+1)} + \sum_{j=1}^{\infty} 2^{-j/2}\sqrt{j}\right) \ge 0. $$
+
+Note that $M^i$ can be simulated jointly with ($W_n^k(i): n \ge 0, k \ge 1, 1 \le i \le d, (n+1)k \le M^i$) by adapting Algorithm 2w in Section 4.2 and $M^i$'s are independent of each other. Then, for any $n > \max_{i=1}^d M^i \vee m_0$,
+
+$$ \begin{align*} \bar{\mathbf{B}}_n(t) &= A \sum_{k=1}^{\infty} \mathbf{W}_n^k \cdot \int_0^t H_k(u) du \\ &\le ad \left( 4\sqrt{\log(n+1)} + \sum_{j=1}^{\infty} 2^{-j/2}\sqrt{j} \right) \le (n+1)\zeta, \end{align*} $$
+
+where $j = \lfloor \log_2 k \rfloor$. Therefore, we can choose $M = \max_i M^i \vee m_0$.
+
+Now we introduce a variation of Algorithm 2w that will be used in the procedure to simulate $M$ and $\{\bar{\mathbf{B}}_n^\epsilon(\cdot): 1 \le n \le M\}$ jointly. In the following algorithm, a sequence of "conditioning events" of the form $|W^k| \le \beta^k$, for some given constants $\{\beta^k: \beta^k > 4\sqrt{\log k}\}$, is in force. Let $\Phi(a) = P(|W| < a)$ for all $a > 0$, where $W$ is a standard normal. The random number $K$ to be simulated is defined as in (29).
+
+ALGORITHM 2w' (Simulate $K$ jointly with $\{W^k: 1 \le k \le K\}$ conditional on $|W^k| \le \beta^k$ for all $k \ge 1$).
+
+Step 0: Initialize $G = K_0$ as defined in (28) and $S$ to be an empty array.
+
+Step 1: Set $U = 1, D = 0$. Simulate $V \sim \text{Uniform}(0, 1)$.
+
+Step 2: While $U > V > D$, set $G \leftarrow G + 1$ and $U \leftarrow \frac{\Phi(4\sqrt{\log G})}{\Phi(\beta^G)} \times U$ and $D \leftarrow (1 - G^{-7}) \times U$.
+
+Step 3: If $V \ge U$, add $G$ to the end of $S$, that is, $S = [S, G]$, and return to step 1.
+
+Step 4: If $V \le D$, $K = \max(S, K_0)$.
+
+Step 5: For every $k \in S$, generate $W^k$ according to the conditional distribution of $W$ given $\{4\sqrt{\log k} < |W| \le \beta^k\}$; for every other $1 \le k \le K$, generate $W^k$ according to the conditional distribution of $W$ given $\{|W| \le 4\sqrt{\log k}\}$.
+
+The main difference between Algorithm 2w' and the original Algorithm 2w is that $U$ and $D$ are now computed from the conditional probabilities; however, the relations $U > P(G = \infty) > D$ and $U - D \to 0$ still hold, and hence Algorithm 2w' is valid. Based on this, we can now give the main procedure to simulate $M$ and $\{\bar{B}_n^\epsilon(\cdot): 1 \le n \le M\}$ jointly:
+
+ALGORITHM 2m (Simulating $M$ and $\{\bar{B}_n^\epsilon(\cdot): 1 \le n \le M\}$ jointly).
+
+(1) For each index $i$, simulate $M^i$ and $(W_n^k(i): n \ge 0, k \ge 1, (n+1)k \le M^i)$. Compute $M = \max_i M^i \vee m_0$. (As discussed earlier, the $M^i$'s are simulated by adapting Algorithm 2w.)
+
+(2) For each $0 \le n \le M$ and each index $i$, the variables $\{W_n^k(i): (n+1)k \le M^i\}$ are already given in step 1. For the remaining indices, use Algorithm 2w' to simulate $K_n^i$ jointly with $\{W_n^k(i): (n+1)k > M^i, k \le K_n^i\}$ conditional on $|W_n^k(i)| \le \beta^k \triangleq 4(\sqrt{\log(n+1)} + \sqrt{\log k})$; note that $\beta^k > 4\sqrt{\log k}$.
+
+(3) For any $0 \le n \le M$, compute and output
+
+$$ (33) \qquad \bar{B}_{n,i}^{\epsilon}(t) = \sum_{j=1}^{d} A_{ij} \left( \sum_{k=1}^{K_n^j} W_n^k(j) \int_0^t H_k(u)\, du \right). $$
+
+In step 1 of Algorithm 2m, we can use a similar procedure as in Algorithm 2w' to impose conditioning events of the form $|W_n^k(i)| \le \beta_n^k(i)$ while simulating the $M^i$'s jointly with the $W_n^k(i)$'s. In this way, we obtain an algorithm that is able to simulate $M$ jointly with $\{\bar{B}_n^\epsilon(\cdot): 1 \le n \le M\}$ conditional on $|W_n^k(i)| \le \beta_n^k(i)$ for all $n \ge 0$, $k \ge 1$ and $1 \le i \le d$, for any given sequence $\{\beta_n^k(i)\}$ such that $\beta_n^k(i) > 4(\sqrt{\log(n+1)} + \sqrt{\log k})$.
+
+ALGORITHM 2m' (Simulating $M$ and $\{\bar{B}_n^\epsilon(\cdot): 1 \le n \le M\}$ jointly conditional on $|W_n^k(i)| \le \beta_n^k(i)$ for all $n \ge 0, k \ge 1$ and $1 \le i \le d$).
+
+(1) For each index $i$, simulate $M^i$ and $(W_n^k(i): n \ge 0, k \ge 1, (n+1)k \le M^i)$ conditional on $|W_n^k(i)| \le \beta_n^k(i)$, using a similar procedure as in Algorithm 2w'. Compute $M = \max_i M^i \vee m_0$.
+
+(2) For each $0 \le n \le M$ and each index $i$, the variables $\{W_n^k(i): (n+1)k \le M^i\}$ are already given in step 1. For the remaining indices, use Algorithm 2w' to simulate $K_n^i$ jointly with $\{W_n^k(i): (n+1)k > M^i, k \le K_n^i\}$ conditional on $|W_n^k(i)| \le 4(\sqrt{\log(n+1)} + \sqrt{\log k})$. [Note that $\beta_n^k(i) > 4(\sqrt{\log(n+1)} + \sqrt{\log k}) > 4\sqrt{\log k}$, and hence this step is well defined.]
+
+(3) For any $0 \le n \le M$, compute and output
+
+$$ \bar{B}_{n,i}^{\epsilon}(t) = \sum_{j=1}^{d} A_{ij} \left( \sum_{k=1}^{K_n^j} W_n^k(j) \int_0^t H_k(u)\, du \right). $$
+
+Algorithm 2m' will be used in the next section in order to keep track of “conditioning events” corresponding to condition (2).
+
+**4.3.2.** *Keeping track of the conditioning events.* As we have discussed just prior to the beginning of Section 4.3.1, we need to keep track of several conditioning events introduced by conditions (1) and (2). First, let us explain how to deal with the conditioning events corresponding to condition (1). These conditioning events involve only the random walk $S(\cdot)$. Now we split $S(\cdot)$ according to the sequences $\{\Gamma_l: l \ge 1\}$ and $\{\Delta_l: l \ge 1\}$ of random times defined as follows:
+
+(1) Set $\Delta_1 = \min\{n: S_i(n) \le -2m \text{ for every } i\}$.
+
+(2) Define $\Gamma_l = \min\{n \ge \Delta_l: S_i(n) > S_i(\Delta_l) + m \text{ for some } i\}$.
+
+(3) Put $\Delta_{l+1} = \min\{n \ge \Gamma_l I(\Gamma_l < \infty) \lor \Delta_l: S_i(n) < S_i(\Delta_l) - 2m \text{ for every } i\}$.
+
+Figure 1 illustrates a sample path of the random walk with the sequences of random times $\{\Gamma_l: l \ge 1\}$ and $\{\Delta_l: l \ge 1\}$ in one dimension. The message is that the joint simulation of $\{S(n): n \ge 0\}$ with $\{\Gamma_l: l \ge 1\}$ and $\{\Delta_l: l \ge 1\}$ allows us to keep track of the process $\{\max_{m \ge n} S(m): n \ge 0\}$, which includes the "additional information" introduced by condition (1). The main steps in the simulation of $\{S(n): n \ge 0\}$ jointly with $\{\Gamma_l: l \ge 1\}$ and $\{\Delta_l: l \ge 1\}$ are explained in Lemmas 2 through 4 of Blanchet and Sigman (2011). The approach of Blanchet and Sigman (2011), which works in one dimension, can be modified for the multidimensional case using the change of measure described in Section 2.3.1.
+
+Regarding the verification of condition (2) involving $M$ and the Brownian bridges, as per the discussion in Section 4.3.1, we just need to keep track of certain deterministic $\beta_n^k(i)$ for each $|W_n^k(i)|$, in order to condition on the events of the form $|W_n^k(i)| \le \beta_n^k(i)$. These events are related to the sequential construction of the random variable $M$ when testing condition (2) as described in Section 4.3.1. Now, we can write down the integrated version of our algorithm for sampling $\tau_\epsilon$ and {$Z^\epsilon(t): 0 \le t \le \tau_\epsilon$} jointly.
+
+FIG. 1. Illustration of the random times $\{\Delta_n\}$ and $\{\Gamma_n\}$.
+
+ALGORITHM 2.1 (Simulating $\tau_\epsilon$ and $\{\mathbf{Z}^\epsilon(t): 0 \le t \le \tau_\epsilon\}$).
+
+The output of this algorithm is $\{\mathbf{Z}^\epsilon(t): 0 \le t \le \tau_\epsilon\}$ and the approximate coalescence time $\tau_\epsilon$.
+
+(1) Set $\beta_n^k(i) = \infty$ for all $n \ge 1$, $k \ge 1$ and $1 \le i \le d$. Set $L=0$ and $\tau_\epsilon = 0$.
+
+(2) Simulate $S(n)$ until $\Delta_l$, where $l = \min\{j: \Gamma_j = \infty, \Delta_j > \tau_\epsilon\}$. Compute $\mathbf{Z}^\epsilon(n) = \mathbf{S}(n) - n\zeta\mathbf{1}$.
+
+(3) For each $n \in [\tau_\epsilon, \Delta_l] \cap \mathbb{Z}_+$ and each index $1 \le i \le d$, compute the i.i.d. bridges $\{\bar{B}_n^\epsilon(\cdot)\}$ using (33), in which $K_n^i$ is jointly simulated with $(W_n^k(i): 1 \le k \le K_n^i)$ conditional on $|W_n^k(i)| \le \beta_n^k(i)$ for all $k \ge 1$, using Algorithm 2w'. Given $\bar{B}_n^\epsilon(\cdot)$ and $S(n)$ for $n \in [\tau_\epsilon, \Delta_l] \cap \mathbb{Z}_+$, the process $\mathbf{Z}^\epsilon(t)$ for $t \in [\tau_\epsilon, \Delta_l]$ can be computed directly. If there exists some $t \ge \Gamma_{l-1}$ such that for all $t \le s \le \Delta_l$, $Z_i^\epsilon(t) \ge Z_i^\epsilon(s) - 2\epsilon$ and $Z_i^\epsilon(t) \ge Z_i^\epsilon(\Delta_l) + m - 2\epsilon$, set $\tau_\epsilon \leftarrow t$ and go to step 4. Otherwise, set $\tau_\epsilon \leftarrow \Delta_l$ and return to step 2.
+
+(4) Use Algorithm 2m' to simulate $M$ jointly with $(\bar{B}_{\tau_\epsilon+n}^\epsilon(\cdot): 0 \le n \le M)$ conditional on $|W_{\tau_\epsilon+n}^k(i)| \le \beta_{\tau_\epsilon+n}^k(i)$ for all $n \ge 0$, $k \ge 1$ and $1 \le i \le d$. Update $\beta_{\tau_\epsilon+n}^k(i) \leftarrow 4\sqrt{\log(n+1)} + 4\sqrt{\log k}$ for all $n \cdot k \ge M^i$. Keep simulating $S(n)$ until $n = \Delta_l + M$, and compute $\{\mathbf{Z}^\epsilon(t): t \in [\Delta_l, \Delta_l+M]\}$. If there exist some $t$ and $i$ such that $Z_i^\epsilon(t) > Z_i^\epsilon(\tau_\epsilon) + \epsilon$, set $\tau_\epsilon \leftarrow t$ and return to step 2.
+
+(5) Otherwise, stop and output $\tau_\epsilon$ as the approximate coalescence time along with $(\mathbf{Z}^\epsilon(t): 0 \le t \le \tau_\epsilon)$.
+
+**4.4. Computational complexity.** In this part, we discuss the complexity of our algorithm when $d$ and the other parameters $\mu$ and $A$ are fixed and the precision parameter $\varepsilon$ is sent to 0. Let $N(\varepsilon)$ denote the total number of random variables needed when the precision parameter of the algorithm is $\varepsilon$.
+
+According to Assumption (D), the input process $Z(t)$ equals $-\mu t + AB(t)$ with $\mu_i > \delta_0 > 0$. Let $\max_{i,j} |A_{ij}| = a$. The following result shows that our algorithm's running time is polynomial in $1/\epsilon$:
+
+**THEOREM 4.** *Under Assumption (D),*
+
+$$E[N(\varepsilon)] = O\left(\varepsilon^{-a_C-2}\log\left(\frac{1}{\varepsilon}\right)\right) \quad \text{as } \varepsilon \to 0,$$
+
+where $a_C$ is a computable constant depending only on $A$.
+
+The random variables we need to simulate in the algorithm can be divided into two parts: first, the random variables used to construct the discrete random walk $\mathbf{Z}(n)$ for $n \le T$, and second, the conditional normals used for bridging between $\mathbf{Z}(n-1)$ and $\mathbf{Z}(n)$.
+
+Since $1(|W| > \eta)$ and $1(|W| \le \beta)$ are negatively correlated, it follows that
+
+$$P(|W| > \eta \mid |W| \le \beta) \le P(|W| > \eta).$$
+
+Therefore, the expected number of conditional Gaussian random variables used for Brownian bridges between $\mathbf{Z}(n-1)$ and $\mathbf{Z}(n)$ is smaller than the expected number that we would obtain if we used standard Gaussian random variables instead in steps 3 and 4 of Algorithm 2.1. Let $K = \max\{k: |W^k| > \eta_k\} \vee K_0$ as defined in (29). As discussed above, the expected number of truncated Gaussian random variables needed for each bridge $\bar{B}_{n,i}^\epsilon(\cdot)$ is bounded by $E[K]$.
+
+Therefore,
+
+$$E[N(\varepsilon)] \le (dE[K] + 1)(E[T] + 1).$$
+
+To prove Theorem 4, we first need to study $E[K]$ and $E[T]$.
+
+**PROPOSITION 7.**
+
+$$E[K] = O\left(\varepsilon^{-2} \log \left(\frac{1}{\varepsilon}\right)\right).$$
+
+**PROOF.** Recall that $\eta_k = 4\sqrt{\log k}$, and let $p_k = P(|W^k| > \eta_k)$. Then $p_k = O(k^{-4})$. Therefore
+
+$$
+\begin{align*}
+E[K] &= \sum_{n=1}^{\infty} P(K > n) \le K_0 + \sum_{n=K_0+1}^{\infty} \sum_{k=n}^{\infty} p_k \\
+&\le K_0 + \sum_{k=K_0+1}^{\infty} k \cdot p_k \le K_0 + O\left(\sum_{k=1}^{\infty} k^{-3}\right).
+\end{align*}
+$$
+
+The second term on the right-hand side is finite and independent of $\epsilon$ and $K_0$.
+
+On the other hand,
+
+$$ \sum_{j=\lceil \log_2 K_0 \rceil}^{\infty} 2^{-j/2} \sqrt{j+1} \le \frac{2}{\log 2} (\sqrt{K_0})^{-1} \left( \sqrt{\log_2 K_0} + \frac{2}{\log 2} \right). $$
+
+Therefore, we can choose $K_0 = O(\varepsilon^{-2} \log(\frac{1}{\varepsilon}))$ such that $\sum_{j=\lceil \log_2 K_0 \rceil}^{\infty} 2^{-j/2} \sqrt{j+1} < \varepsilon$.
+
+In order to get the approximation within error at most $\varepsilon$ for the $d$-dimensional process, according to the Cholesky decomposition as discussed in Section 4.2, we should replace $\varepsilon$ by $\frac{\varepsilon}{da}$. Therefore,
+
+$$ E[K] = O\left(\left(\frac{\varepsilon}{da}\right)^{-2} \log\left(\frac{da}{\varepsilon}\right)\right) = O\left(\varepsilon^{-2} \log\left(\frac{1}{\varepsilon}\right)\right). \quad \square $$
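
The choice of $K_0$ in this proof is easy to make concrete (a sketch with our own names; the infinite tail sum is approximated by a long partial sum):

```python
import math

def tail(j0, terms=400):
    """Approximate sum_{j >= j0} 2^{-j/2} sqrt(j+1) by a long partial sum."""
    return sum(2.0 ** (-j / 2) * math.sqrt(j + 1) for j in range(j0, j0 + terms))

def choose_K0(eps):
    """Smallest power of two K0 with sum_{j >= log2 K0} 2^{-j/2} sqrt(j+1) < eps,
    matching the defining inequality (28); it grows like eps^{-2} log(1/eps)."""
    j0 = 0
    while tail(j0) >= eps:
        j0 += 1
    return 1 << j0
```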
+
+What remains is to estimate $E[T]$. Let $T_a$ be the time before the algorithm executes step 4 in a single iteration. Using the same notation as in Algorithm 2.1 and a similar argument as in Section 2.4, we have
+
+$$ E[T] = \frac{E[T_a] + E[T_m | T_m < \infty] + E[M]}{P(T_m < \infty)p}, $$
+
+where
+
+$$ p = P\left(\max_i Z_i^\epsilon(t) < m + \varepsilon \text{ for all } 0 \le t \le M \,\Big|\, \mathbf{Z}(0) = 0,\ S(n) < m\right). $$
+
+As $\mathbf{Z}^\epsilon(t) = \mathbf{S}(n) - n\zeta\mathbf{1} + A\bar{\mathbf{B}}_n(t-n)$ and the Brownian bridge $\bar{\mathbf{B}}_n(\cdot)$ is independent of $\mathbf{S}(\cdot)$, it follows that
+
+$$ p \ge P\left(\max_i \max_{t \ge 0} Z_i(t) < m \mid \mathbf{Z}(0) = 0\right). $$
+
+Since $\mathbf{S}(1)$ is a multidimensional Gaussian random vector with strictly negative drift, assumptions (C1) to (C3) are satisfied. Applying Proposition 4, we can get upper bounds for $E[T_m|T_m < \infty]$, $1/P(T_m < \infty)$ and $1/P(\max_i \max_t Z_i(t) < m|\mathbf{Z}(0)=0)$, which depend only on $d$, $a$ and $\delta_0$, and thus are independent of $\varepsilon$. Besides, the bound for $E[M]$ can be estimated by the same method as in Proposition 7 in terms of $\zeta = \delta_0/2$; hence such a bound is also independent of $\varepsilon$. Therefore, we only need to estimate $E[T_a]$.
+
+**PROPOSITION 8.** $E[T_a] = O(\varepsilon^{-a_C})$ as $\varepsilon \to 0$. Here $a_C$ depends only on the matrix $A$. Moreover, in the special case where $A_{ij} \ge 0$, $a_C = d$.
+
+**PROOF.** Recall that $\mathbf{Z}(t) = -\mu t + A\mathbf{B}(t)$ and $\mu_i > \delta_0 = 2\zeta > 0$ as given in Assumption (D). We divide the path of $\mathbf{Z}(t)$ into segments of length $2(m+\varepsilon)/\zeta$,
+
+$$ \left\{ \left( Z \left( k \cdot \frac{2(m+\varepsilon)}{\zeta} + s \right) : 0 \le s \le \frac{2(m+\varepsilon)}{\zeta} \right) : k \ge 0 \right\}. $$
+---PAGE_BREAK---
+
+Let
+
+$$
+N_b = \min \left\{ k : A\mathbf{B}\left(k \cdot \frac{2(m+\varepsilon)}{\zeta} + s\right) - A\mathbf{B}\left(k \cdot \frac{2(m+\varepsilon)}{\zeta}\right) \leq \boldsymbol{\varepsilon} \ \text{for all } 0 \leq s \leq \frac{2(m+\varepsilon)}{\zeta} \right\}.
+$$
+
+By independence and stationarity of the increments of Brownian motion, $N_b$ is a
+geometric random variable with parameter
+
+$$
+p = P \left( A \mathbf{B}(s) \le \boldsymbol{\epsilon} \text{ for all } 0 \le s \le \frac{2(m+\boldsymbol{\epsilon})}{\zeta} \right).
+$$
+
+On the other hand, since $-\mu_i < -2\zeta$, we have:
+
+$$
+(1) Z_i(N_b \cdot \frac{2(m+\epsilon)}{\zeta} + s) \le Z_i(N_b \cdot \frac{2(m+\epsilon)}{\zeta}) + \epsilon, \text{ for all } 0 \le s \le \frac{2(m+\epsilon)}{\zeta}.
+$$
+
+$$
+(2) Z_i((N_b + 1) \cdot \frac{2(m+\epsilon)}{\zeta}) \le Z_i(N_b \cdot \frac{2(m+\epsilon)}{\zeta}) - m.
+$$
+
+Therefore, Algorithm 2.1 will execute step 4 after at most $\frac{2(m+\epsilon)}{\zeta}(N_b + 1)$ units of time in a single iteration, so
+
+$$
+E[T_a] \leq \frac{2(m + \epsilon)}{\zeta} E[N_b + 1] \leq \frac{2(m + \epsilon)}{\zeta} \left(1 + \frac{1}{p}\right).
+$$
+
+From this inequality, it now suffices to show that $p$ is of order $\varepsilon^{a_C}$.
+
+Note that the set $C = \{\mathbf{y} \in \mathbb{R}^d : A\mathbf{y} \leq \boldsymbol{\varepsilon}\}$ forms a cone with vertex $A^{-1}\boldsymbol{\varepsilon}$ in $\mathbb{R}^d$
+since $A$ is of full rank under Assumption (D). Define $\tau_C = \inf\{t \geq 0 : B(t) \notin C\}$
+given $B(0) = 0$, then
+
+$$
+p = P\left(\tau_C > \frac{2(m+\varepsilon)}{\zeta}\right).
+$$
+
+If $d=2$, it is proved by Burkholder (1977) that $a_C = \frac{\pi}{\theta}$ where $\theta \in [0, \pi)$ is the angle formed by the column vectors of $A^{-1}$. Therefore, we can compute explicitly that
+
+$$
+\theta = \arccos \left( - \frac{A_{11}A_{21} + A_{12}A_{22}}{\sqrt{(A_{11}^2 + A_{12}^2)(A_{21}^2 + A_{22}^2)}} \right),
+$$
+
+which only depends on A.
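As a concrete illustration (ours, not from the paper): for $A = I_2$ the formula gives $\theta = \pi/2$, hence $a_C = \pi/\theta = 2$, consistent with the special case $A_{ij} \ge 0$, for which $a_C = d = 2$.

```python
import math

def cone_angle(A):
    # theta = arccos(-(A11*A21 + A12*A22) /
    #                sqrt((A11^2 + A12^2) * (A21^2 + A22^2))),
    # the displayed formula, for a full-rank 2x2 matrix A.
    (a11, a12), (a21, a22) = A
    num = -(a11 * a21 + a12 * a22)
    den = math.sqrt((a11 ** 2 + a12 ** 2) * (a21 ** 2 + a22 ** 2))
    return math.acos(num / den)

theta = cone_angle([[1.0, 0.0], [0.0, 1.0]])  # identity matrix: orthogonal case
a_C = math.pi / theta                          # Burkholder's exponent pi/theta
```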
+
+On the other hand, if $d \ge 3$, applying the results on exit times for Brownian motions given by Corollary 1.3 in DeBlassie (1987),
+
+$$
+P\left(\tau_C > \frac{2(m+\varepsilon)}{\zeta}\right) \sim u \cdot \|A^{-1}\boldsymbol{\varepsilon}\|^{a_C}
+$$
+
+as $\epsilon \to 0$. Here $\|\cdot\|$ represents the Euclidean norm, and $u$ is some constant independent of $\epsilon$. The rate $a_C$ is determined by the principal eigenvalue of the Laplace–Beltrami operator on $S^{d-1} \cap C$, where $S^{d-1}$ is a unit sphere centered at the vertex
+---PAGE_BREAK---
+
+of $C$, namely $A^{-1}\varepsilon$. The principal eigenvalue only depends on the geometric features of $C$, and it is independent of $\varepsilon$; hence so is $a_C$. Since $A$ is given, we have
+
+$$P\left(\tau_C > \frac{2(m+\varepsilon)}{\zeta}\right) = O(\varepsilon^{a_C}) \quad \text{as } \varepsilon \to 0.$$
+
+Computing $a_C$ for $d \ge 3$ is not straightforward in general. However, when $A_{ij} \ge 0$, we can estimate $a_C$ from first principles. Indeed, if $A_{ij} \ge 0$ and we let $a = \max_{i,j} A_{ij}$, then for any $\mathbf{y}$ with $y_i \le \frac{\varepsilon}{ad}$ for all $i$ we have $(A\mathbf{y})_i = \sum_j A_{ij} y_j \le \varepsilon$, so that
+
+$$\left\{ \mathbf{y} \in \mathbb{R}^d : y_i \le \frac{\varepsilon}{ad} \text{ for all } i \right\} \subset C = \{ \mathbf{y} \in \mathbb{R}^d : A \mathbf{y} \le \boldsymbol{\varepsilon} \}.$$
+
+As the coordinates of $B(t)$ are independent,
+
+$$p \ge P\left( \max_{0 \le t \le 2(m+\varepsilon)/\zeta} B(t) \le \frac{\varepsilon}{ad} \right)^d,$$
+
+where $B(\cdot)$ is a standard Brownian motion on the real line.
+
+Applying the reflection principle, we have
+
+$$
+\begin{aligned}
+& P\left(\max_{0 \le t \le 2(m+\varepsilon)/\zeta} B(t) \le \frac{\varepsilon}{ad}\right) \\
+&= \int_{-\varepsilon/(ad)}^{\varepsilon/(ad)} \frac{1}{\sqrt{2\pi(2(m+\varepsilon)/\zeta)}} \exp\left(-\frac{x^2}{2(2(m+\varepsilon)/\zeta)}\right) dx \\
+&= O(\varepsilon).
+\end{aligned}
+$$
+
+As a result, $p$ is of order $\varepsilon^d$ when the correlations are all nonnegative, which gives $a_C = d$. $\square$
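By the reflection principle, the one-dimensional probability above equals $P(|B(T)| \le \varepsilon/(ad))$ with $T = 2(m+\varepsilon)/\zeta$, i.e. $\operatorname{erf}\!\big(\varepsilon/(ad\sqrt{2T})\big)$, which vanishes linearly in $\varepsilon$. A quick numerical check of that linear rate (our own illustration, with an arbitrary value of $T$):

```python
import math

def prob_max_below(b, T):
    # P(max_{0<=t<=T} B(t) <= b) = P(|B(T)| <= b)   (reflection principle)
    #                            = erf(b / sqrt(2*T)),
    # the Gaussian integral displayed in the text.
    return math.erf(b / math.sqrt(2.0 * T))

T = 4.0  # plays the role of 2(m + eps)/zeta; the value is arbitrary here
for b in (1e-2, 1e-3, 1e-4):
    # erf(b/sqrt(2T)) ~ b * sqrt(2/(pi*T)) as b -> 0: the probability is O(eps)
    print(b, prob_max_below(b, T) / b)
```

The printed ratios approach $\sqrt{2/(\pi T)}$, confirming the $O(\varepsilon)$ rate used in the proof.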
+
+Given these propositions, we can now prove the main result in this part.
+
+PROOF OF THEOREM 4. As we have discussed,
+
+$$E[N(\varepsilon)] \le (dE[K] + 1)(E[\tau_\varepsilon] + 1).$$
+
+First, by Proposition 7, $E[K] = O(\varepsilon^{-2} \log(\frac{1}{\varepsilon}))$. Besides, as discussed above,
+
+$$E[T] \le \frac{E[T_a] + E[T_m|T_m < \infty] + E[M]}{P(T_m < \infty) P(\max_i \max_{t \ge 0} Z_i(t) < m | \mathbf{Z}(0) = 0)}.$$
+
+According to Proposition 8, $E[T_a] = O(\varepsilon^{-a_C})$, and $a_C$ is a constant when $A$ is fixed. In the end, as we have discussed, $E[T_m|T_m < \infty]$, $P(T_m < \infty)$, $P(\max_i \max_t Z_i(t) < m | \mathbf{Z}(0) = 0)$ and $E[M]$ are independent of $\varepsilon$. Therefore,
+
+$$E[T] = O(\varepsilon^{-a_C}).$$
+
+In sum, we have
+
+$$E[N(\varepsilon)] = O\left(\varepsilon^{-a_C-2} \log\left(\frac{1}{\varepsilon}\right)\right). \quad \square $$
+---PAGE_BREAK---
+
+TABLE 1
+Unbiased estimates of $E[Y_i(\infty)]$ and $E[Y_i^2(\infty)]$ for a network with ten stations in tandem
+
+| Station | $E[Y_i(\infty)]$: simulation result | $E[Y_i(\infty)]$: true value | $E[Y_i^2(\infty)]$: simulation result | $E[Y_i^2(\infty)]$: true value |
+|---|---|---|---|---|
+| 1 | 1.7919 ± 0.0521 | 1.8182 | 10.2755 ± 0.5289 | 10.2479 |
+| 2 | 0.1761 ± 0.0068 | 0.1818 | 0.1511 ± 0.0170 | 0.1642 |
+| 3 | 0.2171 ± 0.0083 | 0.2222 | 0.2242 ± 0.0224 | 0.2382 |
+| 4 | 0.2706 ± 0.0102 | 0.2778 | 0.3462 ± 0.0339 | 0.3610 |
+| 5 | 0.3516 ± 0.0131 | 0.3571 | 0.5717 ± 0.0590 | 0.5778 |
+| 6 | 0.4737 ± 0.0171 | 0.4762 | 0.9840 ± 0.0871 | 0.9921 |
+| 7 | 0.6632 ± 0.0233 | 0.6667 | 1.8472 ± 0.1513 | 1.8715 |
+| 8 | 1.0033 ± 0.0345 | 1.0000 | 4.1004 ± 0.3377 | 4.0300 |
+| 9 | 1.6497 ± 0.0542 | 1.6667 | 10.3734 ± 0.7823 | 10.6065 |
+| 10 | 3.3200 ± 0.1040 | 3.3333 | 39.2015 ± 2.9950 | 39.3631 |
+
+**5. Numerical results.** We first implemented Algorithm 1 in order to generate exact samples from the steady-state distribution of stochastic fluid networks, and then we implemented Algorithm 2. Our implementations were performed in Matlab. In all the experiments we simulated 10,000 independent replications, and we displayed our estimates with a margin of error obtained using a 95% confidence interval based on the central limit theorem.
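The quoted margins of error are the CLT-based half-widths of 95% confidence intervals, i.e. $1.96\,s/\sqrt{n}$ with $s$ the sample standard deviation over the $n$ replications; a minimal sketch (function name ours):

```python
import math

def margin_95(samples):
    # Half-width of a CLT-based 95% confidence interval for the mean:
    # 1.96 * (sample standard deviation) / sqrt(n).
    n = len(samples)
    mean = sum(samples) / n
    var = sum((x - mean) ** 2 for x in samples) / (n - 1)
    return 1.96 * math.sqrt(var / n)
```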
+
+For the case of stochastic fluid networks, we considered a 10-station system in tandem. So, $Q_{i,i+1} = 1$ for $i = 1, 2, \dots, 9$ and $Q_{10,j} = 0$ for all $j = 1, \dots, 10$. We assume the arrival rate $\lambda = 1$ and the job sizes are exponentially distributed with unit mean. The service rates $(\mu_1, \dots, \mu_{10})^T$ are given by (1.55, 1.5, 1.45, 1.4, 1.35, 1.3, 1.25, 1.2, 1.15, 1.1). We are interested in computing the steady-state mean and the second moment of the workload at each station (i.e., $E[Y_i(\infty)]$ and $E[Y_i(\infty)^2]$ for $i = 1, 2, \dots, 10$). For a network of this type, it turns out that the true values of the quantities we are interested in can be computed from the corresponding Laplace transforms as given in Debicki, Dieker and Rolski (2007).
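For the first station the entries of Table 1 can also be cross-checked by hand (this check is ours; the paper's true values come from the Laplace transforms of Debicki, Dieker and Rolski (2007)): rescaling time by $\mu_1$ turns station 1 into a standard unit-speed M/G/1 workload with load $\rho = \lambda/\mu_1$, whose first two stationary moments follow from the Takács formulas:

```python
lam, mu1 = 1.0, 1.55
rho = lam / mu1          # load of station 1 after rescaling time by mu1
m2, m3 = 2.0, 6.0        # E[S^2], E[S^3] for exp(1) job sizes

# Takacs moment formulas for the stationary M/G/1 workload V (with E[S] = 1):
#   E[V]   = rho * m2 / (2 * (1 - rho))
#   E[V^2] = 2 * E[V]^2 + rho * m3 / (3 * (1 - rho))
ev = rho * m2 / (2.0 * (1.0 - rho))
ev2 = 2.0 * ev ** 2 + rho * m3 / (3.0 * (1.0 - rho))
print(ev, ev2)  # 1.8182... and 10.2479..., the station-1 true values in Table 1
```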
+
+Both the simulation results and the true values are reported in Table 1. The procedure took a few minutes (less than 5) on a desktop, which is quite a reasonable time.
+
+We then implemented a two-dimensional RBM example. Let us denote the RBM by $Y(t)$. The parameters to specify $Y$ are as follows: drift vector $\mu = (-1, -1)$, covariance matrix $\Sigma = [1, 0; 0, 1]$ and reflection matrix $R = [1, -0.2; -0.2, 1]$. For this so-called symmetric RBM, one can compute in closed form that $E[Y_1(\infty)] = E[Y_2(\infty)] = 5/12 \approx 0.4167$; see, for instance, Dai and Harrison (1992). The output of our simulation algorithm is reported in Table 2.
+
+Our implementations here are given with the objective of verifying empirically the validity of the algorithms proposed. We stress that a direct implementation of
+---PAGE_BREAK---
+
+TABLE 2
+Estimates of $E[Y_i(\infty)]$ for a 2-dimensional RBM with precision $\epsilon = 0.01$
+
+| | Simulation result | True value |
+|---|---|---|
+| $E[Y_1(\infty)]$ | 0.4164 ± 0.0137 | 0.4167 |
+| $E[Y_2(\infty)]$ | 0.4201 ± 0.0131 | 0.4167 |
+
+Algorithm 2, although capable of ultimately producing unbiased estimates of the expectations of RBM, might not be practical. The simulations took substantially longer to produce than those reported for the stochastic fluid models. This can be explained by the dependence on $\epsilon$ in Theorem 4. The bottleneck in the algorithm is finding a time at which both stations are close to $\epsilon$. An efficient algorithm based on suitably trading a strongly controlled bias with variance can be used to produce faster running times; we expect to report this algorithm in the future.
+
+**Acknowledgments.** The authors thank Offer Kella for pointing out Lemma 1 and thank Amy Biemiller for her editorial assistance. The authors thank the Editor and referees for their useful comments and suggestions.
+
+## REFERENCES
+
+* ASMUSSEN, S. (2003). *Applied Probability and Queues: Stochastic Modelling and Applied Probability*, 2nd ed. Applications of Mathematics (New York) **51**. Springer, New York. MR1978607
+
+* ASMUSSEN, S. and GLYNN, P. W. (2007). *Stochastic Simulation: Algorithms and Analysis*. Stochastic Modelling and Applied Probability **57**. Springer, New York. MR2331321
+
+* ASMUSSEN, S., GLYNN, P. and PITMAN, J. (1995). Discretization error in simulation of one-dimensional reflecting Brownian motion. *Ann. Appl. Probab.* **5** 875–896. MR1384357
+
+* BESKOS, A., PELUCHETTI, S. and ROBERTS, G. (2012). $\epsilon$-strong simulation of the Brownian path. *Bernoulli* **18** 1223–1248. MR2995793
+
+* BILLINGSLEY, P. (1999). *Convergence of Probability Measures*, 2nd ed. Wiley, New York. MR1700749
+
+* BLANCHET, J. H. and SIGMAN, K. (2011). On exact sampling of stochastic perpetuities. *J. Appl. Probab.* **48A** 165–182. MR2865624
+
+* BUDHIRAJA, A. and LEE, C. (2009). Stationary distribution convergence for generalized Jackson networks in heavy traffic. *Math. Oper. Res.* **34** 45–56. MR2542988
+
+* BURDZY, K. and CHEN, Z.-Q. (2008). Discrete approximations to reflected Brownian motion. *Ann. Probab.* **36** 698–727. MR2393994
+
+* BURKHOLDER, D. L. (1977). Exit times of Brownian motion, harmonic majorization, and Hardy spaces. *Adv. Math.* **26** 182–205. MR0474525
+
+* DAI, J. G. and DIEKER, A. B. (2011). Nonnegativity of solutions to the basic adjoint relationship for some diffusion processes. *Queueing Syst.* **68** 295–303. MR2834200
+
+* DAI, J. G. and HARRISON, J. M. (1992). Reflected Brownian motion in an orthant: Numerical methods for steady-state analysis. *Ann. Appl. Probab.* **2** 65–86. MR1143393
+
+* DEBLASSIE, R. D. (1987). Exit times from cones in $R^n$ of Brownian motion. *Probab. Theory Related Fields* **74** 1–29. MR0863716
+---PAGE_BREAK---
+
+DEVROYE, L. (2009). On exact simulation algorithms for some distributions related to Jacobi theta functions. *Statist. Probab. Lett.* **79** 2251–2259. MR2591982
+
+DEBICKI, K., DIEKER, A. B. and ROLSKI, T. (2007). Quasi-product forms for Lévy-driven fluid networks. *Math. Oper. Res.* **32** 629–647. MR2348239
+
+ENSOR, K. B. and GLYNN, P. W. (2000). Simulating the maximum of a random walk. *J. Statist. Plann. Inference* **85** 127–135. MR1759244
+
+GAMARNIK, D. and ZEEVI, A. (2006). Validity of heavy traffic steady-state approximation in generalized Jackson networks. *Ann. Appl. Probab.* **16** 56–90. MR2209336
+
+GUT, A. (2009). *Stopped Random Walks: Limit Theorems and Applications*, 2nd ed. Springer, New York. MR2489436
+
+HARRISON, J. M. and REIMAN, M. I. (1981). Reflected Brownian motion on an orthant. *Ann. Probab.* **9** 302–308. MR0606992
+
+HARRISON, J. M. and WILLIAMS, R. J. (1987). Brownian models of open queueing networks with homogeneous customer populations. *Stochastics* **22** 77–115. MR0912049
+
+KELLA, O. (1996). Stability and nonproduct form of stochastic fluid networks with Lévy inputs. *Ann. Appl. Probab.* **6** 186–199. MR1389836
+
+KELLA, O. and RAMASUBRAMANIAN, S. (2012). Asymptotic irrelevance of initial conditions for Skorohod reflection mapping on the nonnegative orthant. *Math. Oper. Res.* **37** 301–312. MR2931282
+
+KELLA, O. and WHITT, W. (1996). Stability and structural properties of stochastic storage networks. *J. Appl. Probab.* **33** 1169–1180. MR1416235
+
+KENDALL, W. S. (2004). Geometric ergodicity and perfect simulation. *Electron. Commun. Probab.* **9** 140–151 (electronic). MR2108860
+
+PROPP, J. G. and WILSON, D. B. (1996). Exact sampling with coupled Markov chains and applications to statistical mechanics. *Random Structures Algorithms* **9** 223–252. MR1611693
+
+RAMASUBRAMANIAN, S. (2000). A subsidy-surplus model and the Skorohod problem in an orthant. *Math. Oper. Res.* **25** 509–538. MR1855180
+
+REIMAN, M. I. (1984). Open queueing networks in heavy traffic. *Math. Oper. Res.* **9** 441–458. MR0757317
+
+STEELE, J. M. (2001). *Stochastic Calculus and Financial Applications*. Applications of Mathematics (New York) **45** . Springer, New York. MR1783083
+
+VARADHAN, S. R. S. and WILLIAMS, R. J. (1985). Brownian motion in a wedge with oblique reflection. *Comm. Pure Appl. Math.* **38** 405–443. MR0792398
+
+INDUSTRIAL ENGINEERING
+AND OPERATIONS RESEARCH
+
+COLUMBIA UNIVERSITY
+
+340 S. W. MUDD BUILDING
+
+500 W. 120 STREET
+
+NEW YORK, NEW YORK 10027
+
+USA
+
+E-MAIL: jose.blanchet@columbia.edu
+
+DEPARTMENT OF APPLIED MATHEMATICS
+AND STATISTICS
+
+STONY BROOK UNIVERSITY
+
+MATH TOWER B148
+
+STONY BROOK, NEW YORK 11794-3600
+
+USA
+
+E-MAIL: xinyun.chen@stonybrook.edu
\ No newline at end of file
diff --git a/samples/texts_merged/7872347.md b/samples/texts_merged/7872347.md
new file mode 100644
index 0000000000000000000000000000000000000000..5c655cd5c71f38d8eae635dc76a8f821413af672
--- /dev/null
+++ b/samples/texts_merged/7872347.md
@@ -0,0 +1,722 @@
+
+---PAGE_BREAK---
+
+# Custom flow in overdamped Brownian dynamics
+
+Daniel de las Heras, \* Johannes Renner, and Matthias Schmidt\*
+
+Theoretische Physik II, Physikalisches Institut, Universität Bayreuth, D-95440 Bayreuth, Germany
+
+(Received 6 August 2018; revised manuscript received 13 October 2018; published 8 February 2019)
+
+When an external field drives a colloidal system out of equilibrium, the ensuing colloidal response can be very complex, and obtaining a detailed physical understanding often requires case-by-case considerations. To facilitate systematic analysis, here we present a general iterative scheme for the determination of the unique external force field that yields prescribed inhomogeneous stationary or time-dependent flow in an overdamped Brownian many-body system. The computer simulation method is based on the exact one-body force balance equation and allows one to specifically tailor both gradient and rotational velocity contributions, as well as to freely control the one-body density distribution. Hence, compressibility of the flow field can be fully adjusted. The practical convergence to a unique external force field demonstrates the existence of a functional map from both velocity and density to external force field, as predicted by the power functional variational framework. In equilibrium, the method allows one to find the conservative force field that generates a prescribed target density profile, and hence implements the Mermin-Evans classical density functional map from density distribution to external potential. The conceptual tools developed here enable one to gain detailed physical insight into complex flow behaviour, as we demonstrate in prototypical situations.
+
+DOI: 10.1103/PhysRevE.99.023306
+
+## I. INTRODUCTION
+
+The controlled application of an external field is a powerful means to drive colloidal systems of mesoscopic suspended particles out of equilibrium [1,2]. The resulting complex interplay of the equilibrium properties of the system with the external perturbation is already present in Perrin's pioneering work on colloidal sedimentation [3]. Gravitationally driven colloidal suspensions [4–6] remain to this day primary model systems for studying structure formation phenomena. There is a large spectrum of different types of further specific external influence on colloids, such as the response of charged colloids to external electric fields [7,8], and magnetic field-induced transport of both diamagnetic and paramagnetic colloidal particles [9,10] across a substrate, where the colloidal motion was recently analysed in terms of the powerful concept of topological protection against perturbation [11]. The external magnetic fields in these setups varied periodically in both space and time. Alternatively, exerting shearlike flow on colloidal dispersions provides in-depth insights into important effects in fundamental material science, such as shear thickening [12,13] and glass formation [14]. Furthermore, optical tweezers form powerful and flexible tools for the generation of complex colloidal flow patterns [15–17].
+
+To systematically study the response of a soft material to an external perturbation, one typically first fixes the external field and then studies the resulting colloidal motion under the effect of the field. Certainly, this concept is compatible with our understanding of a causal relationship between forces and the motion that they generate. If the external force field is static and conservative, then the system will in general reach a new
+
+equilibrium state. This response of a complex system to such an external influence might be highly nontrivial. As a result of the action of the external potential, the system will in general become spatially inhomogeneous. In seminal work, Mermin showed for quantum systems [18], as Evans subsequently did for the classical case [19], that a functional inversion of the relationship between external potential and one-body density distribution applies. Hence, reversing the above "causal" relationship, a unique mathematical map exists from the one-body density distribution to the corresponding external potential. This is an important and fundamental result of modern statistical physics, which generalizes Hohenberg and Kohn's earlier result for quantum ground states [20]. The functional relationship forms the basis of classical density functional theory, which is used in practically all modern microscopic theoretical treatments of spatially inhomogeneous systems [21]. Once the external potential is specified, the Hamiltonian is known (the internal interactions remain unchanged) and hence *all* equilibrium properties of the system are determined and become functionally dependent on the density distribution.
+
+In this work we address the functional map in nonequilibrium steady and time-dependent states in overdamped Brownian many-body systems. We first set the desired colloidal motion, as specified by both the velocity field (or, equivalently, the one-body current distribution) and the density profile, and then determine the specific external force field that creates the prescribed motion in steady state. We develop and validate a numerical iterative method that enables efficient and straightforward implementation of this task.
+
+That the map from motion to external force field exists and that it is unique follows formally from the power functional variational principle in general time-dependent nonequilibrium [22]. The functional relationship has not, however, been explicitly demonstrated in an actual many-body system. Here
+
+\*delasheras.daniel@gmail.com
+
+†Matthias.Schmidt@uni-bayreuth.de
+---PAGE_BREAK---
+
+we provide the first such demonstration, both for steady states and for time-dependent nonequilibrium.
+
+As a special case, we also apply the inversion method to equilibrium systems. Here it allows one to find the specific conservative external force field that stabilizes a predefined density distribution. As flow is absent and kinetic energy contributions are trivial in equilibrium, this method also applies to inertial (i.e., Hamiltonian) systems. An alternative iterative numerical method that also implements the functional map in equilibrium is presented in Ref. [23].
+
+The conceptual progress in demonstrating the nonequilibrium inversion map explicitly enables the practical solution to the problem of generating tailor-made flow in complex systems. In the method that we present, the sole requirement is, besides the ability to freely control the external force field, to be able to measure the internal one-body force density distribution. This is a readily available quantity in many-body simulations, and it is conceivably also accessible in experimental work. We envisage that freely tailoring flow on demand constitutes a powerful concept for the systematic study of nonequilibrium physics. Here we investigate as a concrete model problem the task of collecting particles in a certain region of space via a potential trap. The system is initially a homogeneous fluid, and we (i) speed up the natural dynamics by a given factor (chosen as 2 or 3 in our examples) and (ii) demonstrate that any unwanted effects due to superimposed external flow (e.g., due to convection or sedimentation in the system) can be fully canceled.
+
+The paper is organized as follows. In Sec. II we describe how we obtain the iterative method, based on the exact force balance relationship in overdamped Brownian dynamics, covering nonequilibrium steady states, equilibrium, and time-dependent nonequilibrium. In Sec. III we present results for several model situations in which we custom-tailor the flow in a many-body system of repulsive particles. Section IV contains a discussion and provides some conclusions. Details about particle current sampling and convergence properties are given in the Appendices.
+
+## II. THEORY
+
+### A. Dynamical one-body force balance
+
+We consider a system of $N$ interacting Brownian particles in the overdamped limit, where inertial effects are absent and we neglect hydrodynamic interactions. The one-body density distribution (“density profile”) at space point $\mathbf{r}$ and time $t$ is given by
+
+$$ \rho(\mathbf{r}, t) = \left\langle \sum_i \delta(\mathbf{r} - \mathbf{r}_i) \right\rangle, \quad (1) $$
+
+where the angles denote a statistical average, the expression inside the angles is the microscopic density operator, with $\delta(\cdot)$ indicating the Dirac distribution, and $\mathbf{r}_i$ denoting the position of particle $i = 1...N$. In the Fokker-Planck picture, the information required for carrying out the average is encoded in the many-body probability distribution $\Psi(\mathbf{r}^N, t)$ of finding microstate $\mathbf{r}^N = \mathbf{r}_1... \mathbf{r}_N$ at time $t$. The average is then defined as
+
+$$ \langle \cdot \rangle = \int d\mathbf{r}^N \cdot \Psi(\mathbf{r}^N, t), \quad (2) $$
+
+where the integral runs over configuration space, i.e., each $\mathbf{r}_i$ is integrated over the system volume. The time evolution of $\Psi$ is determined by the Smoluchowski equation $\partial\Psi/\partial t = -\sum_i \nabla_i \cdot (\mathbf{v}_i \Psi)$. Here the configuration space velocity $\mathbf{v}_i$ of particle $i$ is given on the many-body level by the instantaneous relation
+
+$$ \gamma \mathbf{v}_i = -k_B T \nabla_i \ln \Psi - \nabla_i u(\mathbf{r}^N) + \mathbf{f}_{\text{ext}}(\mathbf{r}_i, t), \quad (3) $$
+
+where $\gamma$ is the friction constant against the implicit solvent, $k_B$ is the Boltzmann constant, $T$ denotes absolute temperature, $\nabla_i$ is the partial derivative with respect to $\mathbf{r}_i$, the interparticle interaction potential is denoted by $u(\mathbf{r}^N)$, and $\mathbf{f}_{\text{ext}}(\mathbf{r}, t)$ is the external force field, which in general is position- and time-dependent. The three contributions on the right-hand side of Eq. (3) correspond to thermal diffusion (first term), deterministic motion due to interparticle interactions (second term), and the externally imposed force (third term). This formulation of the dynamics is analogous to the Langevin picture, where instead of Eq. (2), the average is taken over a set of stochastic particle trajectories for which a random (position) noise provides the effects of thermal motion. The corresponding discretized version is readily implementable in Brownian dynamics (BD) computer simulations (details of our implementation are given in Sec. III). Note that the configuration space velocity $\mathbf{v}_i$ defined in Eq. (3) is different from the average of the fluctuating velocity over realizations of the noise, as represented in BD simulations.
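The discretized Langevin update referred to above is the standard Euler-Maruyama step for overdamped dynamics; a minimal sketch (function name and parameter values are ours, and the total deterministic force $-\nabla_i u + \mathbf{f}_{\text{ext}}$ is supplied by the caller):

```python
import numpy as np

def bd_step(pos, force, gamma, kT, dt, rng):
    """One Euler-Maruyama step of overdamped Brownian dynamics:
    r -> r + (dt/gamma) * f + sqrt(2 kT dt / gamma) * xi,
    with xi standard Gaussian noise and `force` the total deterministic
    force (internal plus external) evaluated at `pos`."""
    drift = dt / gamma * force
    noise = np.sqrt(2.0 * kT * dt / gamma) * rng.standard_normal(pos.shape)
    return pos + drift + noise

rng = np.random.default_rng(0)
pos = rng.uniform(0.0, 10.0, size=(100, 2))   # 100 particles in 2D
pos = bd_step(pos, force=np.zeros_like(pos), gamma=1.0, kT=1.0, dt=1e-4, rng=rng)
```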
+
+We next supplement Eq. (1) by a corresponding one-body current distribution, defined as
+
+$$ \mathbf{J}(\mathbf{r}, t) = \left\langle \sum_i \delta(\mathbf{r} - \mathbf{r}_i) \mathbf{v}_i \right\rangle, \quad (4) $$
+
+where $\mathbf{v}_i$ at time $t$ is given via Eq. (3). We show in detail in Appendix A how a forward-backward symmetrical time derivative of the particle positions can be used in BD to represent $\mathbf{v}_i$ in Eq. (4).
+
+The density distribution Eq. (1) and the current profile Eq. (4) are linked via the continuity equation,
+
+$$ \frac{\partial \rho(\mathbf{r}, t)}{\partial t} = -\nabla \cdot \mathbf{J}(\mathbf{r}, t), \quad (5) $$
+
+where $\nabla$ indicates the derivative with respect to $\mathbf{r}$. The many-body coupling in Eq. (3) arises due to the presence of the internal interaction potential $u(\mathbf{r}^N)$. On the one-body level, it is hence natural to define a corresponding internal force density field via
+
+$$ \mathbf{F}_{\text{int}}(\mathbf{r}, t) = -\left\langle \sum_i \delta(\mathbf{r} - \mathbf{r}_i) \nabla_i u(\mathbf{r}^N) \right\rangle. \quad (6) $$
+
+Here contributions to the average occur due to two effects, namely (i) due to the bare value of the internal force field $-\nabla_i u(\mathbf{r}^N)$, but also (ii) due to the probability of finding particle $i$ at the considered space point $\mathbf{r}$, as measured by the delta function. Averages such as Eq. (6) hence constitute microscopically resolved *force densities*.
+
+By multiplying Eq. (3) with $\delta(\mathbf{r} - \mathbf{r}_i)$, summing over $i$, averaging according to Eq. (2), and identifying the one-body fields Eqs. (1), (4), and (6), it is straightforward to show that
+
+$$ \gamma \mathbf{J} = -k_B T \nabla \rho + \mathbf{F}_{\text{int}} + \rho \mathbf{f}_{\text{ext}}, \quad (7) $$
+---PAGE_BREAK---
+
+which we use as a basis for the nonequilibrium inversion procedure. Defining the microscopic velocity profile $\mathbf{v}(\mathbf{r}, t)$ simply as the ratio between current profile and density profile,
+
+$$ \mathbf{v} = \mathbf{J}/\rho, \quad (8) $$
+
+allows us to rewrite Eq. (7), after division by the density profile $\rho$, as
+
+$$ \gamma \mathbf{v} = -k_B T \nabla \ln \rho + \mathbf{f}_{\text{int}} + \mathbf{f}_{\text{ext}}. \quad (9) $$
+
+Here the internal force field $\mathbf{f}_{\text{int}}$ is defined as the internal force density Eq. (6) normalized with the density profile, i.e., $\mathbf{f}_{\text{int}}(\mathbf{r}, t) = \mathbf{F}_{\text{int}}(\mathbf{r}, t)/\rho(\mathbf{r}, t)$.
+
+BD computer simulations allow straightforward access to the individual contributions to the force balance relationship Eq. (9). Sampling the density profile is straightforward either using the traditional counting method or more advanced techniques [24,25]. The ideal (diffusive) force field $-k_B T \nabla \ln \rho$ is readily obtained by (numerical) differentiation of the density profile. The internal force density field $\mathbf{F}_{\text{int}}$ can be sampled as an average over BD realizations of the time evolution, or as an average over time when investigating steady states; note that in BD one has direct access to the internal force on the many-body level, $-\nabla_i u(\mathbf{r}^N)$ in Eq. (6).
+
+In typical applications, the external force field $\mathbf{f}_{\text{ext}}(\mathbf{r}, t)$ is prescribed and $\rho(\mathbf{r}, t)$ and $\mathbf{v}(\mathbf{r}, t)$ emerge as a result of the coupled many-body dynamics. In the following, we address the inverse problem of prescribing $\rho$ and $\mathbf{v}$ a priori and calculating the required form of $\mathbf{f}_{\text{ext}}$ that makes these fields stationary, such that the prescribed field values are identical to the true dynamical averages of density Eq. (1) and velocity Eqs. (4) and (8).
+
+### B. Inversion in nonequilibrium steady states
+
+Let $\rho(\mathbf{r})$ and $\mathbf{v}(\mathbf{r})$ be the predefined stationary (i.e., time-independent) “target” profiles for density and velocity. To represent a valid steady state, the resulting target current profile $\rho\mathbf{v}$, cf. Eq. (8), must be divergence-free, $\nabla \cdot \rho\mathbf{v} = 0$, which follows from Eq. (5) and represents a necessary condition on the allowed set of target functions $\rho$, $\mathbf{v}$. The external force field $\mathbf{f}_{\text{ext}}(\mathbf{r})$ that makes the target profiles stationary is obtained from first rearranging Eq. (9) as
+
+$$ \mathbf{f}_{\text{ext}} = k_B T \nabla \ln \rho - \mathbf{f}_{\text{int}} + \gamma \mathbf{v}, \quad (10) $$
+
+where the internal force field, $\mathbf{f}_{\text{int}}(\mathbf{r}) = \mathbf{F}_{\text{int}}(\mathbf{r})/\rho(\mathbf{r})$, is the only unknown quantity on the right hand side, as $\rho$ and $\mathbf{v}$ are known input quantities. Here $\mathbf{F}_{\text{int}}(\mathbf{r})$ is from Eq. (6) in steady state.
+
+To determine $\mathbf{f}_{\text{int}}$, and hence $\mathbf{f}_{\text{ext}}$ via Eq. (10), we proceed in two steps. First, we present a fixed-point iterative scheme to solve Eq. (10), in which the $k$th iteration step is defined via
+
+$$ \mathbf{f}_{\text{ext}}^{(k)} = k_B T \nabla \ln \rho - \mathbf{f}_{\text{int}}^{(k-1)} + \gamma \mathbf{v}, \quad (11) $$
+
+where the targets, $\rho(\mathbf{r})$ and $\mathbf{v}(\mathbf{r})$, are kept fixed for all steps $k$. Here $\mathbf{f}_{\text{int}}^{(k-1)} = \frac{\mathbf{F}_{\text{int}}^{(k-1)}}{\rho}$, where $\mathbf{F}_{\text{int}}^{(k-1)}$ is the internal force density sampled in the previous iteration step, $k-1$. The data for $\mathbf{F}_{\text{int}}^{(k-1)}$ is obtained from BD sampling under the prescribed external force field $\mathbf{f}_{\text{ext}}^{(k-1)}$. To initialize the iteration scheme,
+
+we set the external force field at step $k=0$ simply as
+
+$$ \mathbf{f}_{\text{ext}}^{(0)} = k_B T \nabla \ln \rho + \gamma \mathbf{v}, \quad (12) $$
+
+which is the exact external force field for the case of an ideal gas. Prescribing Eq. (12) allows to sample $\mathbf{F}_{\text{int}}^{(0)}$ in BD, and then use $\mathbf{f}_{\text{int}}^{(0)}$ as the required input for iteration step $k=1$ in Eq. (11). This completes the description of the functional inversion.
+
+At each iteration step we also sample both the one-body density and one-body current profiles, $\rho^{(k)}$ and $\mathbf{J}^{(k)}$; details on how to sample the current in BD are provided in Appendix A. As a criterion for judging the eventual convergence of $\mathbf{f}_{\text{ext}}^{(k)}$ to the real external force field that makes the target density and current profiles stationary, i.e., the solution of Eq. (10), we use the difference between the target and the sampled profiles at step $k$, i.e.,
+
+$$ \Delta\rho = \int d\mathbf{r} (\rho(\mathbf{r}) - \rho^{(k)}(\mathbf{r}))^2 / V < \epsilon_1, \quad (13) $$
+
+$$ \Delta J = \int d\mathbf{r} |\mathbf{J}(\mathbf{r}) - \mathbf{J}^{(k)}(\mathbf{r})|^2 / V < \epsilon_2, \quad (14) $$
+
+where $V = \int d\mathbf{r}$ is the system volume, and $\epsilon_1, \epsilon_2 > 0$ are small tolerance parameters. Numerical details of our implementation are given in Appendix B.
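Stripped of the sampling details, the scheme of Eqs. (11)-(14) is a plain fixed-point iteration; in the sketch below (all names ours) the BD sampler is replaced by a caller-supplied function that returns $\mathbf{f}_{\text{int}}$ measured under a given external force. The toy sampler in the usage example responds linearly, $\mathbf{f}_{\text{int}} = -c\,\mathbf{f}_{\text{ext}}$, for which the iteration converges geometrically when $|c| < 1$.

```python
import numpy as np

def invert_external_force(sample_fint, ideal_term, friction_v,
                          max_iter=100, tol=1e-8):
    """Fixed-point iteration of Eq. (11):
      f_ext^(k) = kT grad ln(rho) - f_int^(k-1) + gamma v,
    initialized with the ideal-gas force field of Eq. (12).
    `sample_fint(f_ext)` stands in for measuring F_int/rho in BD."""
    f_ext = ideal_term + friction_v               # Eq. (12): step k = 0
    for _ in range(max_iter):
        f_int = sample_fint(f_ext)                # BD sampling step (mocked here)
        f_new = ideal_term - f_int + friction_v   # Eq. (11)
        if np.max(np.abs(f_new - f_ext)) < tol:   # convergence criterion
            return f_new
        f_ext = f_new
    return f_ext

# Toy "sampler": f_int = -c * f_ext, so the fixed point is
# f_ext = g / (1 - c) with g = ideal_term + friction_v.
g_ideal, g_v, c = np.array([1.0]), np.array([0.5]), 0.4
f = invert_external_force(lambda fe: -c * fe, g_ideal, g_v)
```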
+
+We find that in practice the iteration method converges reliably in all cases considered; results are shown below in Sec. III. That a solution exists for $\mathbf{f}_{\text{ext}}$ and that it is unique are nontrivial properties of our scheme. We expect existence and uniqueness to hold, however, based on the power functional variational framework [22], which is a novel approach for the statistical description of the dynamics of many-body systems. The central object of power functional theory (PFT) is a "free power" functional $R_t[\rho, \mathbf{J}]$ of the one-body density and current or, equivalently via Eq. (8), of the density and the velocity field. $R_t$ has units of energy per time (power) and plays a role analogous to the free energy functional (as detailed below) in equilibrium. It consists of an ideal gas contribution ($W_t^{\text{id}}$), an excess (over ideal gas) part due to the internal interactions ($W_t^{\text{exc}}$) and an external power ($X_t$) contribution, according to $R_t = W_t^{\text{id}} + W_t^{\text{exc}} - X_t$.
+
+PFT implies that $\mathbf{f}_{\text{int}}$ is a unique functional of density and current distributions, or equivalently of density and velocity profile. In particular, $\mathbf{f}_{\text{int}}$ can be expressed as a functional derivative of the intrinsic excess (over ideal gas) free power functional $W_t^{\text{exc}}[\rho, \mathbf{J}]$, as
+
+$$ \mathbf{f}_{\text{int}}([\rho, \mathbf{v}], \mathbf{r}, t) = - \frac{\delta W_t^{\text{exc}}[\rho, \mathbf{J}]}{\delta \mathbf{J}(\mathbf{r}, t)}, \quad (15) $$
+
+where the density distribution $\rho$ is kept fixed upon the variation, at fixed time $t$.
+
+Typically, one would split further into adiabatic and superadiabatic contributions, $W_t^{\text{exc}} = \dot{F}_{\text{exc}}[\rho] + P_t^{\text{exc}}[\rho, \mathbf{J}]$, where $\dot{F}_{\text{exc}}[\rho]$ is the time derivative of the equilibrium excess (over ideal gas) Helmholtz free energy functional, and $P_t^{\text{exc}}[\rho, \mathbf{J}]$ is the superadiabatic contribution, which describes the genuine nonequilibrium effects. This splitting offers great advantages in terms of the classification of the different types of forces that occur, but it is not required for our present purposes.
+---PAGE_BREAK---
+
+We rather work directly with $\mathbf{f}_{\text{int}}$. Recall that this is directly accessible via Eq. (6) in BD simulations.
+
+The fact that $\mathbf{f}_{\text{int}}$ is generated from a current-density functional, via functional differentiation Eq. (15), implies that the force field itself is a functional of density and current (or velocity profile). Hence, Eq. (15) constitutes a unique map from density and velocity to the internal force field,
+
+$$ \mathbf{f}_{\text{int}}(\mathbf{r}) = \mathbf{f}_{\text{int}}([\rho, \mathbf{v}], \mathbf{r}, t), \quad (16) $$
+
+where the right hand side is the force field functional Eq. (15) evaluated at the target profiles $\rho$ and $\mathbf{v}$, and the left-hand side is the corresponding (hitherto unknown) specific internal force that appears in Eq. (10). Hence, by inserting Eq. (15) into Eq. (10) we obtain the explicit form
+
+$$ \mathbf{f}_{\text{ext}} = k_B T \nabla \ln \rho - \mathbf{f}_{\text{int}}[\rho, \mathbf{v}] + \gamma \mathbf{v}, \quad (17) $$
+
+with the iteration procedure Eqs. (11) and (12) being a practical scheme for evaluating the right hand side. Note that Eq. (17) is an explicit expression for $\mathbf{f}_{\text{ext}}$; no hidden dependence on $\mathbf{f}_{\text{ext}}$ occurs on the right hand side. Recall that from Eq. (15), the internal force field depends solely on the “kinematic” fields $\rho$ and $\mathbf{v}$, but not on the external force field. This completes the proof. Before presenting results, we revisit first the equilibrium case and then generalize to time-dependent nonequilibrium.
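The resulting iteration, Eqs. (11), (12), and (17), can be summarized as a fixed-point loop. The following Python sketch is ours; the BD sampling of the internal force is abstracted into a hypothetical callable `sample_f_int`, which in practice wraps a full simulation run.

```python
import numpy as np

def custom_flow_steady(rho, v, sample_f_int, kBT=1.0, gamma=1.0,
                       dx=0.05, n_iter=40):
    """Fixed-point iteration for the steady-state external force field,
    Eqs. (11), (12), and (17), on a 1D grid.

    rho, v       : target density and velocity profiles, shape (M,)
    sample_f_int : callable f_ext -> sampled internal force field
                   (a stand-in for a BD simulation run)
    """
    grad_ln_rho = np.gradient(np.log(rho), dx)
    # Eq. (12): ideal-gas initialization (no internal force term yet)
    f_ext = kBT * grad_ln_rho + gamma * v
    for _ in range(n_iter):
        f_int = sample_f_int(f_ext)          # BD sampling step
        # Eq. (11): update with the sampled internal force
        f_ext = kBT * grad_ln_rho - f_int + gamma * v
    return f_ext
```

In the noninteracting limit, where the sampled internal force vanishes, the loop reproduces the ideal-gas solution after every iteration, as it should.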
+
+### C. Inversion in equilibrium
+
+In equilibrium, the velocity profile is identically zero, and we can therefore simply set the target $\mathbf{v}(\mathbf{r}) \equiv 0$ and prescribe $\rho(\mathbf{r})$ to find the corresponding external force field $\mathbf{f}_{\text{ext}}(\mathbf{r})$. The external force field is necessarily of conservative, gradient form, $\mathbf{f}_{\text{ext}}(\mathbf{r}) = -\nabla v_{\text{ext}}(\mathbf{r})$, where $v_{\text{ext}}$ is the external potential energy. Clearly, there is no dependence on time, and, as we show, one only needs to carry out equilibrium averages. Hence, the method also applies to Hamiltonian systems, as the kinetic contributions are trivial.
+
+In equilibrium we can simplify Eq. (10) to obtain
+
+$$ \mathbf{f}_{\text{ext}} = k_B T \nabla \ln \rho - \mathbf{f}_{\text{int}}, \quad (18) $$
+
+which constitutes an explicit expression for the specific external force field that generates the given target profile $\rho$ in equilibrium.
+
+As in the nonequilibrium steady state, the internal force is unknown, but it can be found iteratively. The iteration step is
+
+$$ \mathbf{f}_{\text{ext}}^{(k)}(\mathbf{r}) = k_B T \nabla \ln \rho(\mathbf{r}) - \mathbf{f}_{\text{int}}^{(k-1)}(\mathbf{r}), \quad (19) $$
+
+and the external force is initialized with the exact solution of an ideal gas,
+
+$$ \mathbf{f}_{\text{ext}}^{(0)}(\mathbf{r}) = k_B T \nabla \ln \rho(\mathbf{r}). \quad (20) $$
+
+We then sample $\mathbf{f}_{\text{int}}^{(0)}$ in equilibrium, under the external force $\mathbf{f}_{\text{ext}}^{(0)}$, and iterate on the basis of Eq. (19) until the difference between the target and the sampled density profiles is small, cf. Eq. (13). As only the internal force and the density profiles are required, the sampling can be performed using BD or molecular dynamics simulations. If one wishes to use the Monte Carlo method, then one needs the actual value of the potential $v_{\text{ext}}^{(k)}$ instead of the force, $\mathbf{f}_{\text{ext}}^{(k)} = -\nabla v_{\text{ext}}^{(k)}$. In systems that effectively depend on only one coordinate, say $x$, the potential at each iteration can easily be obtained from the internal force profile by performing a one-dimensional spatial integral
+
+$$ v_{\text{ext}}^{(k)}(x) = -k_B T \ln \rho(x) + \int dx f_{\text{int}}^{(k-1)}(x). \quad (21) $$
+
+In two- and three-dimensional systems a line integral or, more generally, the use of an inverse operator $\nabla^{-1}$ is required to obtain the potential from the force field. Hence, the situation is similar to the one addressed in modern “force sampling” methods that yield the density profile [24,25].
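In an effectively one-dimensional geometry, Eq. (21) can be implemented with a cumulative trapezoidal integral of the sampled internal force. A small sketch (our own helper, not code from the paper):

```python
import numpy as np

def v_ext_from_forces(rho, f_int, x, kBT=1.0):
    """Eq. (21): external potential in an effectively 1D system from the
    target density and the internal force profile of the previous iteration."""
    # cumulative trapezoidal integral of f_int along x, anchored at x[0]
    integral = np.concatenate(([0.0], np.cumsum(
        0.5 * (f_int[1:] + f_int[:-1]) * np.diff(x))))
    return -kBT * np.log(rho) + integral
```

For the ideal gas ($f_{\text{int}} = 0$) this reduces to the barometric form $v_{\text{ext}} = -k_B T \ln \rho$, up to an irrelevant constant.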
+
+That the method converges to a unique external potential is guaranteed by the Mermin-Evans functional map from density profile to external potential [18,19]. In particular, the internal force field is obtained as a functional derivative of the excess free energy functional via
+
+$$ \mathbf{f}_{\text{int}}([\rho], \mathbf{r}) = -\nabla \frac{\delta F_{\text{exc}}[\rho]}{\delta \rho(\mathbf{r})}. \quad (22) $$
+
+Inserting this into Eq. (18) yields
+
+$$ \mathbf{f}_{\text{ext}} = k_B T \nabla \ln \rho + \nabla \frac{\delta F_{\text{exc}}[\rho]}{\delta \rho}, \quad (23) $$
+
+which is an explicit expression for the external force field, given $\rho$ as an input, in analogy to the nonequilibrium case, Eqs. (15) and (17). For completeness, and briefly, Eq. (23) is formally obtained from the more general Eqs. (15) and (17) by observing that in equilibrium $\delta W_t^{\text{exc}}/\delta \mathbf{J} = \delta \dot{F}_{\text{exc}}/\delta \mathbf{J}$, where $\dot{F}_{\text{exc}} = \int d\mathbf{r}\mathbf{J} \cdot \nabla \delta F_{\text{exc}}/\delta\rho$, cf. Ref. [22].
+
+We next clarify the relationship to the method of Ref. [23]. Note that at any step in the iteration, given the external force field $\mathbf{f}_{\text{ext}}^{(k-1)}$, the internal force field may be written as
+
+$$ \mathbf{f}_{\text{int}}^{(k-1)} = k_B T \nabla \ln(\rho^{(k-1)}) - \mathbf{f}_{\text{ext}}^{(k-1)}, \quad (24) $$
+
+in terms of the density distribution $\rho^{(k-1)}(\mathbf{r})$ at equilibrium with that external force. Then Eq. (19) may be written as the change in the force, along the iterative procedure,
+
+$$ \mathbf{f}_{\text{ext}}^{(k)} - \mathbf{f}_{\text{ext}}^{(k-1)} = k_B T \nabla \ln(\rho/\rho^{(k-1)}), \quad (25) $$
+
+that vanishes when the target density is achieved, $\rho = \rho^{(k-1)}$. The integration of Eq. (25), to get the change in external potential, and the linear expansion
+
+$$ \ln(\rho/\rho^{(k-1)}) = -(\rho^{(k-1)} - \rho)/\rho + \dots \quad (26) $$
+
+give precisely the method proposed in Ref. [23].
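The accuracy of the linear expansion Eq. (26) near the fixed point can be checked numerically; a one-line sanity check with arbitrarily chosen illustrative values:

```python
import numpy as np

# Eq. (26): near convergence, ln(rho/rho_k) is well approximated by
# its first-order expansion -(rho_k - rho)/rho.
rho, rho_k = 0.30, 0.31          # target and sampled density values
exact = np.log(rho / rho_k)
linear = -(rho_k - rho) / rho
assert abs(exact - linear) < 1e-3  # agreement to first order
```

The discrepancy is of second order in the density deviation, which is why the update of Ref. [23] and Eq. (25) coincide close to convergence.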
+
+### D. Inversion in time-dependent nonequilibrium
+
+To perform the inversion in time-dependent nonequilibrium, we carry out the procedure of Sec. II B at a discretized sequence of (coarse-graining) times $t_{cg}$ during the time evolution. The method propagates the system forward in time, in sync with the target time evolution. At each coarse-graining time step the required external force field is obtained (via iteration) such that the prescribed target density $\rho(\mathbf{r}, t_{cg})$ and velocity field $\mathbf{v}(\mathbf{r}, t_{cg})$ match their respective values in the target time evolution of the system. We interpolate linearly
+---PAGE_BREAK---
+
+the values for the external force field between two consecutive times, which we find sufficient for the test cases presented below.
+
+In detail, at each coarse-graining time step $t_{cg}$ we iterate the value of the external field according to
+
+$$ \mathbf{f}_{\text{ext}}^{(k)}(\mathbf{r}, t_{\text{cg}}) = k_B T \nabla \ln \rho(\mathbf{r}, t_{\text{cg}}) - \mathbf{f}_{\text{int}}^{(k-1)}(\mathbf{r}, t_{\text{cg}}) + \gamma \mathbf{v}(\mathbf{r}, t_{\text{cg}}), \quad (27) $$
+
+where $\rho(\mathbf{r}, t)$ and $\mathbf{v}(\mathbf{r}, t)$ are the target fields, which enter via their values at time $t_{cg}$. The time $t_{cg}$ is kept fixed under the iteration $k \to k+1$ described by Eq. (27). For the first time step we initialize the external force using the exact ideal gas solution, Eq. (12). For the subsequent time steps we initialize the iterative scheme using the solution of the previous time step:
+
+$$ \mathbf{f}_{\text{ext}}^{(0)}(\mathbf{r}, t_{\text{cg}}) = \mathbf{f}_{\text{ext}}(\mathbf{r}, t'_{\text{cg}}), \quad (28) $$
+
+where $t'_{cg}$ indicates the time step previous to $t_{cg}$.
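The time-dependent scheme, Eqs. (27) and (28), then amounts to an outer loop over coarse-graining times with a warm-started inner fixed-point loop. The sketch below is our own schematic rendering; the BD sampling is again abstracted into a hypothetical callable.

```python
import numpy as np

def custom_flow_dynamic(rho_t, v_t, f_int_sampler, t_cg_grid,
                        kBT=1.0, gamma=1.0, dx=0.05, n_iter=10):
    """Sketch of Eqs. (27) and (28): at each coarse-graining time, iterate
    Eq. (27), warm-started from the converged field of the previous step.

    rho_t, v_t    : callables t -> target profiles, shape (M,)
    f_int_sampler : callable (f_ext, t) -> sampled internal force field
    """
    f_ext_prev = None
    result = {}
    for t in t_cg_grid:
        rho, v = rho_t(t), v_t(t)
        grad_ln_rho = np.gradient(np.log(rho), dx)
        if f_ext_prev is None:
            # first time step: ideal-gas initialization, Eq. (12)
            f_ext = kBT * grad_ln_rho + gamma * v
        else:
            # Eq. (28): warm start from the previous coarse-graining time
            f_ext = f_ext_prev.copy()
        for _ in range(n_iter):
            f_int = f_int_sampler(f_ext, t)
            f_ext = kBT * grad_ln_rho - f_int + gamma * v   # Eq. (27)
        result[t] = f_ext
        f_ext_prev = f_ext
    return result
```

The warm start is what keeps the number of inner iterations small when the external field varies slowly between consecutive coarse-graining times.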
+
+The iteration method proceeds forward in time. To correctly account for memory effects, the many-body dynamics must evolve along continuous, valid trajectories over the entire time-dependent evolution. In the BD simulations, this requires starting each new coarse-graining time step from the many-body configuration(s) obtained at the end of the previous coarse-graining time step. At the end of the process, the entire field $\mathbf{f}_{\text{ext}}(\mathbf{r}, t)$ is known and, as a consistency check, can be input into a “bare” nonsteady BD run to validate that the targets $\rho(\mathbf{r}, t)$ and $\mathbf{v}(\mathbf{r}, t)$ are met during the entire course of time.
+
+### III. RESULTS
+
+In the following we demonstrate that straightforward application of the method allows us to cast new light on fundamental physical effects by studying a two-dimensional model fluid of Brownian particles interacting via the common Weeks-Chandler-Andersen potential [26], i.e., a purely repulsive, truncated-and-shifted Lennard-Jones (LJ) pair potential [26],
+
+$$ \phi(r) = \begin{cases} 4\epsilon \left[ \left(\frac{\sigma}{r}\right)^{12} - \left(\frac{\sigma}{r}\right)^{6} + \frac{1}{4} \right] & \text{if } r < r_c \\ 0 & \text{otherwise,} \end{cases} \quad (29) $$
+
+where the parameters $\epsilon$ and $\sigma$ set the energy and length scales, respectively, and $r$ indicates the center-center distance of the particle pair. The cutoff distance $r_c = 2^{1/6}\sigma$ is located at the minimum of the standard LJ potential, and hence the interaction is purely repulsive.
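The pair potential of Eq. (29) is straightforward to implement; a self-contained sketch (the function name and vectorized form are our own choices):

```python
import numpy as np

def wca_potential(r, eps=1.0, sigma=1.0):
    """Weeks-Chandler-Andersen pair potential, Eq. (29): truncated-and-
    shifted LJ, purely repulsive, cut at r_c = 2^(1/6) sigma."""
    r = np.asarray(r, dtype=float)
    rc = 2.0 ** (1.0 / 6.0) * sigma
    sr6 = (sigma / r) ** 6
    # the +1/4 shift makes phi vanish continuously at the cutoff
    phi = 4.0 * eps * (sr6 ** 2 - sr6 + 0.25)
    return np.where(r < rc, phi, 0.0)
```

Because the cutoff sits at the LJ minimum, both the potential and its derivative (the force) are continuous at $r_c$.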
+
+The particles are in a square box of length $L$ with periodic boundary conditions along both directions. Using the standard Euler algorithm, the Langevin equation of motion is integrated in time via
+
+$$ \mathbf{r}_i(t + \Delta t) = \mathbf{r}_i(t) + \frac{\Delta t}{\gamma} [-\nabla_i u(\mathbf{r}^N) + \mathbf{f}_{\text{ext}}(\mathbf{r}_i, t)] + \eta_i(t), \quad (30) $$
+
+where $\eta_i$ is a delta-correlated Gaussian random displacement with standard deviation $\sqrt{2\Delta t k_B T/\gamma}$ in accordance with the fluctuation-dissipation theorem. Here, $\Delta t$ is the integration time step that we set to $\Delta t/\tau = 10^{-4}$ with $\tau = \sigma^2\gamma/\epsilon$ the unit of time; the friction constant is set to $\gamma = 1$.
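Eq. (30) translates directly into an update rule; a self-contained sketch, with array shapes and the function name being our own assumptions:

```python
import numpy as np

def bd_euler_step(r, forces, f_ext, dt=1e-4, gamma=1.0, kBT=1.0, rng=None):
    """One Euler step of the overdamped Langevin dynamics, Eq. (30).

    r      : particle positions, shape (N, 2)
    forces : interparticle forces -grad_i u, shape (N, 2)
    f_ext  : external force evaluated at each particle, shape (N, 2)
    """
    rng = np.random.default_rng() if rng is None else rng
    # Gaussian displacement with std sqrt(2 dt kBT / gamma), in accordance
    # with the fluctuation-dissipation theorem
    eta = rng.normal(0.0, np.sqrt(2.0 * dt * kBT / gamma), size=r.shape)
    return r + (dt / gamma) * (forces + f_ext) + eta
```

Periodic boundary conditions would be applied after the step by folding positions back into the simulation box.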
+
+### A. Effective one-dimensional system
+
+A considerable class of nonequilibrium situations is effectively one-dimensional in nature: both the density profile and the current distribution depend only on a single coordinate, say $x$, and the flow direction is along the $x$ axis (i.e., no shear motion occurs). In the present two-dimensional case, this implies that the system is homogeneous in the $y$ direction. The steady state condition then reduces to the requirement that the current be constant, $\mathbf{J}(x) = J_0 \mathbf{e}_x$, where $J_0 = \text{const}$ and $\mathbf{e}_x$ is the unit vector in the $x$ direction. Hence, from Eq. (8) the velocity and the density profile possess a reciprocal relationship: $v(x) = J_0/\rho(x)$.
+
+We study such an effective one-dimensional problem with $N=30$ particles in a two-dimensional square simulation box of side length $L/\sigma = 10$. We choose the target density profile to contain a single nontrivial Fourier component that modulates the homogeneous fluid,
+
+$$ \rho(x) = c_1 \sin^2(\pi x/L) + c_2, \quad (31) $$
+
+with $c_1\sigma^2 = 0.12$ and $c_2\sigma^2 = 0.24$ such that $\int d\mathbf{r}\,\rho(\mathbf{r}) = N$. See the density profile in Fig. 1(a). The temperature is set to $k_B T/\epsilon = 1$.
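The stated normalization can be verified directly: integrating Eq. (31) over the $L \times L$ box (with the system homogeneous in $y$) recovers the particle number. A quick numerical check in reduced units ($\sigma = 1$):

```python
import numpy as np

# Target profile of Eq. (31) with c1*sigma^2 = 0.12 and c2*sigma^2 = 0.24;
# check that it integrates to N = 30 over the L x L box.
L, N = 10.0, 30.0
c1, c2 = 0.12, 0.24
x = np.linspace(0.0, L, 10001)
rho = c1 * np.sin(np.pi * x / L) ** 2 + c2
# trapezoidal integral in x, times the homogeneous extent L in y
integral = np.sum(0.5 * (rho[1:] + rho[:-1]) * np.diff(x)) * L
assert abs(integral - N) < 1e-6
```

Analytically, $\int_0^L \sin^2(\pi x/L)\,dx = L/2$, so $N = (c_1/2 + c_2) L^2 = (0.06 + 0.24) \cdot 100 = 30$.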
+
+Our inversion method facilitates the study of fundamental aspects of driven systems. As an illustrative example, we construct a family of steady states that share the same density profile, cf. Eq. (31), but possess different values $J_0$ of the (constant) target current. In Appendix B we describe numerical details of the concrete implementation of the iterative procedure.
+
+We show in Fig. 1 the external force field required to make the target density profile Eq. (31) stationary, for a range of different values of $J_0$. In Figs. 1(a) and 1(b) the final converged density and current profiles are shown; these are indeed (numerically) identical to their targets. We consider four steady states with values of the current $J_0\sigma\tau = 0$ (equilibrium), 0.1, 0.5 and 1. The specific external force field required to produce each such steady state is depicted in Fig. 1(c) for all four cases. The force fields can be represented as the sum of a spatially constant force offset plus a conservative potential contribution. The constant offset drives the particle flow and it can be calculated as the spatial average of the total external force field. The conservative term generates the density modulation. As expected, in the equilibrium case ($J_0 = 0$) only the conservative term is present, and we find that the spatial average of the total external force vanishes. In Fig. 1(d) we show the external potential $v_{\text{ext}}(x)$ that generates the conservative force contribution. As a convention, we have introduced an (irrelevant) shift of the energy scale, such that $v_{\text{ext}} = 0$ at $x=0$ for all four cases. As expected, in equilibrium $v_{\text{ext}}$ possesses a minimum at the location of the density peak. It turns out that to keep the density profile unchanged upon imposing the constant flux of particles in the $x$ direction, the external potential changes its shape very substantially. Both the minimum and the maximum move towards smaller values of $x$, i.e., against the direction of the flow, upon increasing $J_0$ (note the periodicity in $x$). Clearly, this behaviour is a direct consequence of keeping the density profile constant while increasing the flow through this density “landscape.” To rationalize this effect, consider first the case where an
+---PAGE_BREAK---
+
+FIG. 1. One-body density (a) and current profiles (b), external force field (c), and external potential (d) as a function of the x-coordinate in a system with target density profile $\rho(x) = c_1 \sin^2(\pi x/L) + c_2$ with $c_1\sigma^2 = 0.12$ and $c_2\sigma^2 = 0.24$ after $k=40$ iterations. The inset in (a) shows the difference between target and sampled density profiles. Results are shown for different values of the target current: $J_0\sigma\tau = 1$ (blue dotted line), $J_0\sigma\tau = 0.5$ (orange dashed line), $J_0\sigma\tau = 0.1$ (violet solid line), and $J_0\sigma\tau = 0$ (green dot-dashed line), which is in equilibrium. The inset in (b) is a close-up view for $J_0\sigma\tau = 0.5$. The system is two-dimensional and homogeneous in the y direction with $N=30$ particles in a square periodic box of side $L/\sigma = 10$ at $k_B T/\epsilon = 1$. The data are obtained by averaging over 25 BD realizations (MC realizations in equilibrium).
+
+external potential generates the density profile, Fig. 1(a), in equilibrium. If we now switch on an additional constant (positive) external force contribution, the result will be a particle flow, and the density profile will respond by shifting the density peak in the direction of the flow (results not shown). In our system the density profile is instead kept constant, and the shifting of the density peak needs to be canceled by the external conservative field, which hence necessarily develops the observed shift in the direction opposite to the flow. Besides quantifying the positional shift, cf. Fig. 1(d), we also observe a marked increase in the amplitude of the external potential contribution; hence stronger “ordering” forces, $-\nabla v_{\text{ext}}$, are required to overcome the homogenizing effect of the flow.
+
+### B. Two-dimensional system
+
+The iteration scheme is general and it is not restricted to effectively one-dimensional inhomogeneous systems. As a proof of concept, we construct the external force field that makes a two-dimensional density profile stationary. We choose the target velocity field to be
+
+$$ \mathbf{v}(x, y) = \begin{pmatrix} -d_1 \sin(2\pi y/L) \\ d_2 \end{pmatrix}, \quad (32) $$
+
+with $d_1, d_2 = \text{const}$. As above, a companion target density profile cannot be chosen arbitrarily, since the resulting current must satisfy the steady state condition, $\nabla \cdot (\rho \mathbf{v}) = 0$. Given that Eq. (32) is divergence-free, $\nabla \cdot \mathbf{v} = 0$, the steady-state condition reduces to $\mathbf{v} \cdot \nabla \rho = 0$. As an immediate first choice, we set
+
+$$ \rho(x, y) = N/L^2 = \text{const}, \quad (33) $$
+
+which trivially satisfies the steady state condition. Note that Eqs. (32) and (33) represent a conceptually highly interesting case of a homogeneous, bulk-fluid-like one-body density distribution, with “superimposed” flow Eq. (32) that is fully inhomogeneous on microscopic length scales.
+
+Furthermore, as a second choice together with Eq. (32), we consider the target density profile,
+
+$$ \rho(x, y) = N/L^2 + a_0 \cos(2\pi x/L + Y), \quad (34) $$
+
+$$ Y = d_0 \cos(2\pi y/L), \quad (35) $$
+
+such that $Y(y)$ is a spatially modulating function of the given form, $a_0$ is a negative constant such that $|a_0| < N/L^2$ (to ensure that $\rho > 0$), and $d_0 = -d_1/d_2$. Since $\nabla\rho$ is perpendicular to $\mathbf{v}$ for all $\mathbf{r}$, it is straightforward to show that Eqs. (32), (34), and (35) also define a valid steady state.
+
+For both target states (constant and nonconstant density profile) we use the inversion method to find the external force field that renders the situation stationary. We set $N = 30$, $L/\sigma = 10$, and $k_B T/\epsilon = 0.5$. For the target velocity profile, we set $d_1 = d_2 = \sigma/\tau$ in Eq. (32). For the inhomogeneous density profile we set $a_0 = -0.5N/L^2$ in Eq. (34).
+
+The two Cartesian components of the velocity profile, obtained after 40 BD iterations of the inversion method, are shown in Figs. 2(a1) and 2(a2). The sampled velocity and density profiles coincide with the target profiles within the imposed numerical accuracy. The sampled density profiles are shown in Fig. 2(b1) (constant density profile) and Fig. 2(c1) (inhomogeneous density profile). The corresponding external force fields are presented in Figs. 2(b2), 2(b3) and 2(c2), 2(c3).
+
+For the case of constant density, the $x$ component of the external force field, $f_{\text{ext}}^{(x)}$, see Fig. 2(b2), is very similar in shape and magnitude to the $x$ component of the velocity profile, Fig. 2(a1). Given that the friction coefficient is set to $\gamma = 1$, this means that $f_{\text{ext}}^{(x)}$ generates the flow in the $x$ direction (the small differences between $f_{\text{ext}}^{(x)}$ and $\gamma v_x$ are related to the $x$ component of the internal force field). The $y$ component of
+---PAGE_BREAK---
+
+FIG. 2. x component (a1) and y component (a2) of the velocity profile sampled after $k = 40$ iterations using BD simulations. (b1) Sampled density profile for the steady state with constant density profile. x (b2) and y (b3) components of the external force field that produces the steady state with constant density. (c1) Sampled density profile for the steady state with inhomogeneous density profile. x (c2) and y (c3) components of the external force field that generates the steady state with inhomogeneous density. In both steady states we set $N = 30$, $L/\sigma = 10$, and $k_B T/\epsilon = 0.5$. The bin size is set to $0.05\sigma$ in both directions and the origin of coordinates is located in the middle of the box. Results are averages over 100 BD realizations.
+
+the external force field, shown in Fig. 2(b3), exhibits a small deviation from an average value that is consistent with the flow in $y$. This deviation is expected, since we have imposed a constant density profile, and hence the external force has to balance the migration force [27,28] that results from the shear field imposed by $v_x$. The $y$ component of the external force is inhomogeneous, but the density profile is constant. Hence, the internal force must cancel the action of $f_{\text{ext}}^{(y)}$. This is a purely superadiabatic effect [29], which is completely neglected in the widely used dynamical density functional theory (DDFT) [30], which instead predicts internal forces to vanish in situations of constant density. Extended versions of DDFT have recently been proposed to try to overcome these limitations; see, e.g., Refs. [31,32].
+
+The target velocity profile is effectively one-dimensional, and the target density profile is constant. As a result the external force is also effectively one-dimensional. This is not the case when the target density profile is inhomogeneous. Then the x and y components of the external force field depend on both coordinates; see Figs. 2(c2) and 2(c3). Now the external force field generates the flow and also sustains the density gradient. Clearly, the required force field, which generates the fully inhomogeneous steady state, is very complex, and simple physical reasoning, such as we could rely on in the former
+---PAGE_BREAK---
+
+FIG. 3. Inversion in full nonequilibrium as demonstrated by dynamics of confinement. Density profile (left column), current profile (middle column), and external force field (right column) as a function of $x$ for three different situations. (a) Time evolution of the system, density (a1) and current (a2), after switching on a static external field (a3). At $t = 0$ the system is in equilibrium with vanishing external field. Results at four times are shown: $\tau_1/\tau = 0.08$, $\tau_2/\tau = 0.48$, $\tau_3/\tau = 1.0$, and $\tau_4/\tau = 2.2$. At $\tau_4$ the system is near equilibrium with the applied external field (a3). Profiles are obtained by averaging over ~$10^8$ different trajectories. In panels (b) we show a system that evolves following the same dynamics as in (a) but two times faster. The current (b2) is therefore twice as large as the current in the original system, and the required external field (b3) is time-dependent. A movie showing a system that evolves three times faster is presented in the Supplemental Material [33]. Panels (c) show the time evolution in a system that reproduces the same dynamics as (a) but with a global motion toward the right (note that the spatial average of the current (c2) at any time is $J_0\sigma\tau = 0.5$). The required external field (c3) is time-dependent. The horizontal and vertical dashed lines in the plots of the external field are drawn to help the comparison between systems. In all cases we set $N = 30$, $L/\sigma = 10$, and $k_B T/\epsilon = 1.0$.
+
+two cases, is insufficient to obtain even a qualitative, let alone (semi-)quantitative rationalization of the occurring physics.
+
+### C. Dynamic confinement
+
+While the above examples demonstrate custom flow for steady states, we next turn to its implementation for full (time-dependent) nonequilibrium situations, as laid out in Sec. II D. We hence aim to show that the concept is general and valid even for complex dynamics.
+
+As a prototypical situation, we address the time evolution of a two-dimensional system which in its initial state is a homogeneous equilibrium fluid (with no external field acting in this initial state). At time $t=0$ we switch on a conservative external field, which represents the potential trap shown in Figs. 1(c) and 1(d) for the equilibrium case (green dot-dashed line). Hence, the external force induces migration of particles towards the center of the system along the $x$ axis, as shown in Fig. 3(a) for the density [Fig. 3(a1)] and the current [Fig. 3(a2)]. The system remains homogeneous in the $y$ direction.
+
+The external field is static for $t > 0$, see Fig. 3(a3), and under its influence the system evolves from the homogeneous state to a confined state that features a well-defined, peaked density modulation, Fig. 3(a1). After only a few Brownian times, a new equilibrium state is reached. The particle current almost vanishes already at time $\tau_4/\tau \approx 2.2$; see Fig. 3(a2).
+
+Using the time-dependent version of the custom flow method described in Sec. II D, we determine the time- and position-dependent external force field, $f_{\text{ext}}(t, x)$, that speeds up the dynamics of the system by a factor $\alpha > 1$. That is, we find a system that evolves through the same temporal sequence of density profiles as those in Fig. 3(a), but does so at a rate that is $\alpha$ times faster. Hence, in the new “fast-forward” system the density profile $\rho_\alpha$ at time $t$ is the same as the density profile in the original system at time $\alpha t$. Due to the continuity equation, the current in the new system must be $\alpha$ times the current in the old system. That is, in the
+---PAGE_BREAK---
+
+new system:
+
+$$
+\rho_{\alpha}(t, x) = \rho(\alpha t, x), \quad J_{\alpha}(t, x) = \alpha J(\alpha t, x), \qquad (36)
+$$
+
+where quantities on the left (right) hand side refer to the new (original) system. To find the external field that induces the desired target dynamics, we discretize the time evolution at intervals $\Delta t_{cg}/\tau = 0.01$, i.e., on a scale that is $10^2$ times larger than the time step of the BD simulation, $\Delta t/\tau = 10^{-4}$. At each coarse-graining time $t_{cg}$ we run an iterative process to find the desired external field at that time. We use linear interpolation to approximate the external field at every time $t$ between two consecutive coarse-graining times. The imposed coarse-graining time is a good compromise between accuracy and computational cost. Since the external field does not vary strongly during one time interval, only a few iterations ($<10$) are required at each $t_{cg}$. At each iteration we average over $10^6$ trajectories. Finally, we average the results over 50 independent simulation runs.
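Constructing the sped-up targets of Eq. (36) from a known original evolution is mechanical; a minimal sketch (the function name is our own illustrative choice):

```python
def fast_forward_targets(rho_of_t, J_of_t, alpha):
    """Eq. (36): targets for a system that replays the original dynamics
    alpha times faster; the current picks up a factor alpha from the
    continuity equation."""
    rho_a = lambda t: rho_of_t(alpha * t)       # rho_a(t) = rho(alpha t)
    J_a = lambda t: alpha * J_of_t(alpha * t)   # J_a(t) = alpha J(alpha t)
    return rho_a, J_a
```

These callables then supply the target profiles at each coarse-graining time of the iterative scheme of Sec. II D.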
+
+The dynamics of the system with fast-forward factor $\alpha = 2$ is shown in Fig. 3(b). The external field that is required to speed up the dynamics by the chosen factor $\alpha = 2$ is presented in Fig. 3(b3). As expected, $f_{\text{ext}}$ is now a time-dependent field, the amplitude of which decreases monotonically during the time evolution. In the limit $t \to \infty$ the external field converges to that of the original system, since the final equilibrium state is required to be the same in both cases. In the Supplemental Material [33], we show a movie of the time evolution and the required external field in a system which moves three times faster ($\alpha = 3$) than the original system.
+
+As a further example, we conceive a system in which the current is the same as in the original system (no speed up, $\alpha = 1$), except for a prescribed additive constant $J_0$. As the divergence of the constant vanishes, it has no effect on the dynamics of the density distribution via the continuity equation. Hence, the density profiles of both systems are the same at any time, and the current profile in the new system is $J(t, x) + J_0$, where $J(t, x)$ is the original current. Again, the confining trap is switched on at time $t = 0$. The required external field that produces such a dynamical evolution is shown in Fig. 3(c3), in which we have set $J_0\sigma\tau = 0.5$. The external force field is again time-dependent. The extrema of the force field are shifted with respect to their original location in the case of the static force field. This was expected given our above results for the steady state, Fig. 1. The amplitude of the force and the magnitude of the shift vary in a nontrivial way in time, as a result of a delicate balance between memory effects and the amplitude of the density modulation. At $t \to \infty$ the system reaches the same steady state as that shown in Fig. 1 (orange dashed line).
+
+### IV. DISCUSSION AND CONCLUSIONS
+
+We have presented a numerical iterative method to systematically construct the specific form of the external force field which is required to drive a prescribed one-body time evolution in an overdamped Brownian many-body system. The same scheme can be used to find the conservative potential for which a given density profile is in thermodynamic equilibrium. In equilibrium the method is not restricted to BD systems.
+
+An alternative approach has been previously developed for the equilibrium case [23] (also in quantum systems [34]). Although we have not studied the relative performance of the two methods systematically against each other, preliminary tests suggest that the current approach applied to equilibrium is both faster and more reliable. Whether the present method can or cannot be extended to quantum systems is an open and interesting question.
+
+In all cases that we have analysed so far, the iteration process has reliably converged. Nevertheless, if the initial guess for the external force is very far from the actual force field, it might be necessary to improve the simple fixed-point iteration scheme presented here to avoid possible divergent trajectories (i.e., sequences of $f_{\text{ext}}^{(k)}$). Using, e.g., Anderson-acceleration-like methods should constitute a possible improvement of the method. Variations of the presented iterative scheme, such as, e.g., defining $f_{\text{int}}^{(k-1)} = F_{\text{int}}^{(k-1)}/\rho^{(k-1)}$ instead of $f_{\text{int}}^{(k-1)} = F_{\text{int}}^{(k-1)}/\rho$ in Eq. (11), also converge to the desired external force and might be useful in cases where convergence issues occur, as might happen, e.g., in the vicinity of dynamical phase transitions.
+
+As the method requires a discretization of the space coordinate, it hence yields a discretized external force field. The quality of the spatial discretization (e.g., the size of the bins) is an important parameter of the method. Although we have shown only one- and two-dimensional mono-component examples, the extension to three-dimensional systems and/or mixtures is straightforward.
+
+In all cases, whether in time-dependent nonequilibrium, in nonequilibrium steady state, or in equilibrium, the custom flow method requires sampling the internal force field. Therefore, the practical implementation for hard-body systems is not as straightforward as it is in the case of soft interparticle potentials. For steady-state hard-body systems it might be easier to extend the equilibrium approach of Ref. [23] to nonequilibrium conditions.
+
+The custom flow method allows complete control of the dynamics of a given system in both steady state and full nonequilibrium. Possible future applications include the investigation of time crystals [35,36] in BD systems, removal of flow instabilities via the application of external fields in a controlled manner, and obtaining a better understanding of memory effects by, e.g., a systematic analysis of the external fields required to speed up and/or slow down a given dynamical process.
+
+The success of the presented method validates the inherent concept of power functional theory [22] that the microscopically resolved internal force field is a one-body functional of the density distribution and the flow field, with no explicit dependence on the external force field. The conceptual implications of this situation are significant. The custom flow method adds a pragmatic dimension to previously gained fundamental insights into structural [29] and viscous [37] nonequilibrium response of complex systems.
+---PAGE_BREAK---
+
+ACKNOWLEDGMENTS
+
+Useful discussions with Jonas Landgraf are gratefully acknowledged. This work is supported by the German Research Foundation (DFG) via SCHM 2632/1-1.
+
+APPENDIX A: SAMPLING THE CURRENT IN BROWNIAN DYNAMICS SIMULATIONS
+
+We briefly comment on three different methods to sample
+the one-body current **J**(*r*) in Brownian dynamics simulations.
+
+### 1. Method 1: Force balance equation
+
+First, we propose a new, simple method to measure the current based on the exact one-body force density balance equation, Eq. (7). This equation provides an expression for $\mathbf{J}$ that can be used to directly sample the current. We need to sample (i) the internal force density field $\mathbf{F}_{\text{int}}$ as an average over time (in steady state) or over many realizations (in time-dependent situations), and (ii) the density profile. Then, using the density profile, one can calculate the thermal diffusive term $-k_BT\nabla\rho$. Finally, the external force density field can either be calculated from the external force and the density profile ($\mathbf{F}_{\text{ext}} = \rho\mathbf{f}_{\text{ext}}$) or sampled directly during the simulation.
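A minimal sketch of this route in one dimension follows. All profiles and parameter values are synthetic stand-ins for sampled data, and we assume the force density balance takes the standard overdamped form $\gamma\mathbf{J} = \mathbf{F}_{\text{int}} + \mathbf{F}_{\text{ext}} - k_BT\nabla\rho$:

```python
import numpy as np

# Illustrative 1D example of Method 1: recover J(x) from the force
# density balance  gamma*J = F_int + F_ext - k_B*T*grad(rho).
# All input profiles below are synthetic stand-ins for sampled data.
L, nbins = 10.0, 200
x = np.linspace(0.0, L, nbins, endpoint=False)
gamma, kBT = 1.0, 1.0

rho = 0.24 + 0.12 * np.sin(np.pi * x / L) ** 2    # density profile
F_int = -0.05 * np.sin(2 * np.pi * x / L)         # internal force density
f_ext = 0.3 + 0.1 * np.cos(2 * np.pi * x / L)     # external force field
F_ext = rho * f_ext                               # F_ext = rho * f_ext

# thermal diffusive term -k_B*T*grad(rho), central finite difference
# (one-sided at the edges)
grad_rho = np.gradient(rho, x[1] - x[0])
J = (F_int + F_ext - kBT * grad_rho) / gamma      # current profile

print(J.mean())
```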
+
+### 2. Method 2: Numerical derivative of the position vector
+
+The second method, proposed in Refs. [23,38], is based
+on calculating the velocity of the ith particle, vᵢ(t), via the
+numerical central derivative of the position vector,
+
+$$
+\mathbf{v}_i(t) = \frac{\mathbf{r}_i(t + \Delta t) - \mathbf{r}_i(t - \Delta t)}{2\Delta t}. \quad (\text{A1})
+$$
+
+Due to the stochastic nature of the motion it is crucial to use
+the central derivative to properly compute the velocity of the
+particles, Eq. (A1). Forward and backward derivatives give
+different results that are not consistent with the value of the
+current obtained by the alternative methods presented here.
+
+A spatially resolved average of $\mathbf{v}_i$, Eq. (A1), yields the one-body current profile Eq. (4), which we rewrite more explicitly as
+
+$$
+\mathbf{J}(\mathbf{r}, t) = \left\langle \sum_{i=1}^{N} \mathbf{v}_i(t) \delta(\mathbf{r}_i(t) - \mathbf{r}) \right\rangle, \quad (\text{A2})
+$$
+
+where $⟨·⟩$ indicates an average over either many different
+realizations or time in the case of a steady state.
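The binning of Eq. (A2) can be sketched as follows for a one-dimensional BD run; the harmonic external force and all parameter values are illustrative assumptions, not data from the paper:

```python
import numpy as np

# Minimal 1D BD sketch of Method 2: bin the central-difference
# velocities, Eq. (A1), to estimate the current J(x). The harmonic
# external force and all parameter values are illustrative.
rng = np.random.default_rng(0)
gamma, kBT, dt = 1.0, 1.0, 1e-3
nsteps, npart = 2000, 500

x = rng.normal(0.0, 1.0, npart)          # start in the equilibrium state
traj = np.empty((nsteps, npart))
for step in range(nsteps):
    traj[step] = x
    f_ext = -x                           # harmonic trap
    noise = rng.normal(0.0, np.sqrt(2 * kBT * dt / gamma), npart)
    x = x + dt * f_ext / gamma + noise   # Euler-Maruyama step

v = (traj[2:] - traj[:-2]) / (2 * dt)    # central derivative, Eq. (A1)
pos = traj[1:-1]                         # positions at the same times

edges = np.linspace(-4.0, 4.0, 41)
flux, _ = np.histogram(pos, bins=edges, weights=v)
J = flux / (nsteps - 2) / np.diff(edges) # binned average, cf. Eq. (A2)
print(np.abs(J).max())                   # fluctuates around zero here
```

Since this system is in equilibrium, the sampled current is pure statistical noise around zero; in a driven run the same binning yields the nonzero steady-state profile.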
+
+To sample $\mathbf{v}_i(t)$ it is convenient to rewrite Eq. (A1) as [38]
+
+$$
+\mathbf{v}_i(t) = \frac{\Delta \mathbf{r}_i(t - \Delta t) + \Delta \mathbf{r}_i(t)}{2\Delta t}, \quad (\text{A3})
+$$
+
+where $\Delta \mathbf{r}_i(t) = \mathbf{r}_i(t + \Delta t) - \mathbf{r}_i(t)$. Plugging Eq. (30) into Eq. (A3) results in
+
+$$
+\begin{equation}
+\begin{split}
+\mathbf{v}_i(t) ={}& \frac{1}{2\gamma} [-\nabla_i u(\mathbf{r}^N(t-\Delta t)) - \nabla_i u(\mathbf{r}^N(t)) \\
+& + \mathbf{f}_{\text{ext}}(\mathbf{r}_i, t-\Delta t) + \mathbf{f}_{\text{ext}}(\mathbf{r}_i, t)] \\
+& + \frac{1}{2\Delta t} [\eta_i(t-\Delta t) + \eta_i(t)].
+\end{split}
+\tag{A4}
+\end{equation}
+$$
+
+The spatially resolved average of $\eta_i(t)$ vanishes at any space point, since $\eta_i$ is a Gaussian random force and the position $\mathbf{r}_i(t)$ at time $t$ is uncorrelated with the random force at the same time $t$. In contrast, it is important to realize that the spatially resolved average of $\eta_i(t - \Delta t)$ does not vanish in general, since it correlates the random force at time $t - \Delta t$ with the position at time $t$.
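This correlation structure can be checked directly. The sketch below (a single free Brownian step in one dimension, with an illustrative noise strength) estimates the same-time and the lagged noise-position covariances:

```python
import numpy as np

# Free 1D Brownian step: the position x(t) contains the noise of the
# previous step, eta(t - dt), but not the noise of the current step,
# eta(t). The value of D_dt = k_B*T*dt/gamma is illustrative.
rng = np.random.default_rng(1)
nsamples = 200_000
D_dt = 0.5

x_prev = np.zeros(nsamples)                               # x(t - dt)
eta_prev = rng.normal(0.0, np.sqrt(2 * D_dt), nsamples)   # eta(t - dt)
x_now = x_prev + eta_prev                 # x(t) = x(t - dt) + eta(t - dt)
eta_now = rng.normal(0.0, np.sqrt(2 * D_dt), nsamples)    # eta(t)

cov_same = np.mean(eta_now * x_now)       # vanishes on average
cov_lag = np.mean(eta_prev * x_now)       # equals var(eta) = 2*D_dt = 1

print(cov_same, cov_lag)
```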
+
+In the following we demonstrate in detail the equivalence of method 2 with method 1 from above. To do so, we insert the decomposition Eq. (A3) of the configurational velocity into the definition Eq. (A2) of the current. This allows us to split
+
+$$
+\begin{equation}
+\begin{aligned}
+\mathbf{J}(\mathbf{r}, t) = {}& (2\Delta t)^{-1} \left\langle \sum_{i=1}^{N} \Delta \mathbf{r}_i(t - \Delta t) \delta(\mathbf{r}_i(t) - \mathbf{r}) \right\rangle \\
+& + (2\Delta t)^{-1} \left\langle \sum_{i=1}^{N} \Delta \mathbf{r}_i(t) \delta(\mathbf{r}_i(t) - \mathbf{r}) \right\rangle.
+\end{aligned}
+\tag{A5}
+\end{equation}
+$$
+
+Here the spatial displacement from position at time *t* to *t* + Δ*t* is given via Eq. (30) as
+
+$$
+\Delta \mathbf{r}_i(t) = \frac{\Delta t}{\gamma} [-\nabla_i u(\mathbf{r}^N) + \mathbf{f}_{\text{ext}}(\mathbf{r}_i, t)] + \eta_i(t), \quad (\text{A6})
+$$
+
+where all positions on the right hand side are evaluated at time
+$t$. We can hence evaluate the second sum in Eq. (A5) with the
+result:
+
+$$
+(2\gamma)^{-1}[\mathbf{F}_{\text{int}}(\mathbf{r}, t) + \rho(\mathbf{r}, t)\mathbf{f}_{\text{ext}}(\mathbf{r}, t)]. \quad (\text{A7})
+$$
+
+Here the random displacement has no effect, as ηᵢ(t) is uncorrelated with rᵢ(t) and ⟨ηᵢ(t)⟩ = 0.
+
+Addressing the first sum in Eq. (A5) requires taking into account that the random displacement at the previous time $t - \Delta t$ is correlated with the position $\mathbf{r}_i(t)$ that appears inside the delta function. We hence Taylor expand the delta function in $\eta_i(t - \Delta t)$, i.e., in the random displacement at the earlier time. To first order this gives
+
+$$
+\delta(\mathbf{r}_i(t) - \mathbf{r}) = \delta(\mathbf{r}'_i(t) - \mathbf{r}) - \nabla \delta(\mathbf{r}'_i(t) - \mathbf{r}) \cdot \eta_i(t - \Delta t), \quad (\text{A8})
+$$
+
+where we have expanded around the position $\mathbf{r}'_i(t)$, which is defined such that it lacks the random displacement from time $t - \Delta t$ to time $t$, and hence
+
+$$
+\boldsymbol{r}'_i(t) = \boldsymbol{r}_i(t - \Delta t) + \Delta \boldsymbol{r}_i^{\det}(t - \Delta t), \quad (\text{A9})
+$$
+
+with $\Delta\mathbf{r}_i^{\det}(t - \Delta t)$ the deterministic displacement at the earlier time, which is given by the first term in Eq. (A6) with all time arguments shifted backwards by $\Delta t$, i.e., $\Delta\mathbf{r}_i^{\det}(t - \Delta t) = [-\nabla_i u(\mathbf{r}^N) + \mathbf{f}_{\text{ext}}(\mathbf{r}_i, t - \Delta t)]\Delta t / \gamma$, where $\mathbf{r}^N = \mathbf{r}^N(t - \Delta t)$ and $\mathbf{r}_i = \mathbf{r}_i(t - \Delta t)$. Note that the minus sign in the Taylor expansion Eq. (A8) arises from the fact that $\nabla = -\nabla_i$ in the present case.
+
+One can now carry out the average over the noise, using the noise-noise autocorrelator $\langle \eta_i(t)\eta_j(t')\rangle = 2k_BT\,\delta_{ij}\delta_{tt'}\,\mathbb{1}\,\Delta t/\gamma$, where $\mathbb{1}$ is the $d \times d$ unit matrix in (here) $d = 2$ space dimensions, and the times $t$, $t'$ are discrete values on a temporal grid with grid spacing $\Delta t$. Up to higher orders in $\Delta t$, which are irrelevant in practice, this creates the same term Eq. (A7) again, but also a further contribution $-(k_BT/\gamma)\nabla\rho$. Taken together with Eq. (A7) this proves that Eq. (A5) is equivalent
+---PAGE_BREAK---
+
+FIG. 4. One-body density profile (a), one-body current (b), and external force field (c) as a function of the $x$-coordinate for different numbers of iterations, $k$, as indicated in the legend of (a). The target current is set to $J_0\sigma\tau = 0.1$. The target density profile is $\rho(x) = c_1 \sin^2(\pi x/L) + c_2$ with $c_1\sigma^2 = 0.12$ and $c_2\sigma^2 = 0.24$, which is practically identical to the sampled density profile after 40 iterations (violet solid line). Two-dimensional system with $N=30$ particles in a periodic box of side $L/\sigma = 10$ at temperature $k_B T/\epsilon = 1$. The data has been obtained by averaging over 25 BD realizations. The total simulation time of iteration $k$ of one BD realization is set to $\tau_k = \tau_0 2^{(k-1)/3}$, with $\tau_0/\tau = 100$. The bin size is $\Delta x/\sigma = 0.05$.
+
+to Eq. (7), and hence that the result for the current is the same as in Method 1 above. Alternatively, expanding around $\mathbf{r}_i(t - \Delta t)$ in the total displacement $\Delta \mathbf{r}_i(t - \Delta t)$ gives the same result.
+
+### 3. Method 3: Continuity equation
+
+The continuity equation, Eq. (5), provides an alternative route to compute the current in non-steady-state situations, as shown in Ref. [23]. Having sampled the density profile at two different times $t$ and $t'$, we can compute the numerical time derivative of the density profile, which must equal minus the divergence of the current. In effectively one-dimensional systems the result can be integrated in space and yields the one-body current profile (line integrals or other inversion methods are required in higher-dimensional spaces). This method yields the current profile up to an additive constant. If the actual value of the current at a given space point is known, then one can easily determine the missing additive constant. For instance, if the system is in contact with a hard wall, the current at the hard wall must vanish. To use this
+
+FIG. 5. (a) Scaled error of the density profile $\Delta\rho^* = \Delta\rho L^2\sigma^2$ (green circles) and of the current profile $\Delta J^* = \Delta J L^2\tau^2$ (yellow squares) as a function of the iteration number $k$ (bottom axis) and the scaled simulation time of the iteration $\tau_k/\tau$ (upper axis) in nonequilibrium steady state using BD simulations. (b) Scaled error of the density profile $\Delta\rho^* = \Delta\rho L^2\sigma^2$ as a function of the iteration number $k$ (bottom axis) and the number of Monte Carlo sweeps at iteration $k$ (upper axis) in equilibrium using MC simulations. In both panels, the data has been obtained by averaging over 25 realizations. Note the logarithmic scale of the vertical axis and of the upper horizontal axis.
+
+method we need to sample $\rho$ at two times $t$ and $t'$. In our experience a separation $t' - t \approx 10^2\,\Delta t$, with $\Delta t$ the time step of the BD simulation, provides good results. Method 3 is restricted to situations with no rotational (i.e., divergence-free) contribution to the current distribution.
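The one-dimensional version of this route can be sketched as follows; the two density snapshots and all parameter values are synthetic and purely illustrative, with the additive constant fixed by a hard wall at $x = 0$:

```python
import numpy as np

# Illustrative 1D version of Method 3: integrate the continuity
# equation drho/dt = -dJ/dx in space to recover J(x), fixing the
# additive constant by J = 0 at a hard wall at x = 0.
L, nbins = 10.0, 200
x = np.linspace(0.0, L, nbins)
dx = x[1] - x[0]
dt_sample = 0.1                          # separation t' - t (illustrative)

# two synthetic density snapshots, rho(x, t) and rho(x, t')
rho_t = 0.3 + 0.05 * np.sin(np.pi * x / L)
rho_tp = 0.3 + 0.05 * np.sin(np.pi * x / L) * np.exp(-dt_sample)

drho_dt = (rho_tp - rho_t) / dt_sample   # numerical time derivative
# J(x) = J(0) - int_0^x drho/dt dx' with J(0) = 0 (trapezoid rule)
J = -np.concatenate(
    ([0.0], np.cumsum(0.5 * (drho_dt[1:] + drho_dt[:-1]) * dx)))

print(J[-1])
```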
+
+We have checked that the three methods presented above give the same one-body current profile within the inherent numerical accuracy of each procedure.
+
+## APPENDIX B: NUMERICAL DETAILS
+
+We provide details of our precise implementation of the iterative scheme. Each iteration step Eq. (11) of the nonequilibrium steady-state inversion method requires carrying out one BD simulation run for the given parameters and given external force field. Before acquiring the data, we let the system
+---PAGE_BREAK---
+
+reach a steady state during $10^2 \tau$. Then, at each iteration $k$, we sample the internal force density during a given sampling time $\tau_k$. The sampling time has a direct impact both on the statistical quality of the sampled internal force field (which is required for the next iteration) and on the performance of the method. Instead of using the same sampling time at each iteration, we find it preferable to start with short simulation runs and increase the run length at every iteration. Hence, at iteration $k$, we fix the sampling simulation time $\tau_k$ to
+
+$$\tau_k = \tau_0 2^{(k-1)/3}, \quad (B1)$$
+
+i.e., we double the run length every three iterations. The total time of the first iteration is set to $\tau_0/\tau = 100$. Finally, we average over several (10–100) realizations of the iteration scheme, Eq. (11), to improve the statistics.
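The doubling schedule of Eq. (B1) is simple enough to verify in a few lines (values in units of $\tau$):

```python
# Quick check of the sampling-time schedule, Eq. (B1): tau_k doubles
# every three iterations, starting from tau_0 (in units of tau).
tau0 = 100.0

def tau_k(k: int) -> float:
    """Sampling time at iteration k, Eq. (B1)."""
    return tau0 * 2 ** ((k - 1) / 3)

schedule = [round(tau_k(k), 1) for k in range(1, 8)]
print(schedule)  # tau_4 = 2*tau_1 and tau_7 = 4*tau_1
```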
+
+Figure 4 illustrates the iterative process for the effectively one-dimensional system with target profile given by Eq. (31) and target current $J_0 \tau \sigma = 0.1$. Fewer than 10 iterations suffice to obtain a very good estimate of the external force, and after $k=40$ iterations the sampled density and current profiles
+
+are almost indistinguishable from the corresponding target profiles. The results have been obtained by averaging over 25 BD realizations of the iterative scheme. We show in Fig. 5(a) the evolution of the error of the density and the current profile during the iterative process, cf. Eqs. (13) and (14).
+
+For the equilibrium situation of the effectively one-dimensional system shown in Fig. 1 ($J_0 = 0$) we have used Monte Carlo simulations to implement the iterative scheme, cf. Eqs. (19) and (21). For completeness, we also show the efficiency of the method in equilibrium in Fig. 5(b), where we plot the error in the density profile as a function of the number of iterations and the number of Monte Carlo sweeps (MCS). Each MCS is an attempt to sequentially and individually move all the particles in the system. We find it convenient to increase the number of MCS during the iterative process. We begin with $10^4$ MCS at iteration $k=1$, and increase the number of MCS after every iteration such that it doubles every three iterations. Before acquiring data we equilibrate the system by running $10^4$ MCS. As in the nonequilibrium steady-state case, we improve the statistics by averaging over 25 realizations.
+
+[1] H. Löwen, Colloidal dispersions in external fields: Recent developments, J. Phys.: Condens. Matter **20**, 404201 (2008).
+
+[2] A. Erbe, M. Zientara, L. Baraban, C. Kreidler, and P. Leiderer, Various driving mechanisms for generating motion of colloidal particles, J. Phys.: Condens. Matter **20**, 404215 (2008).
+
+[3] J. Perrin, Atoms (D. Van Nostrand Company, New York, 1916).
+
+[4] M. Köppl, P. Henseler, A. Erbe, P. Nielaba, and P. Leiderer, Layer Reduction in Driven 2D Colloidal Systems Through Microchannels, Phys. Rev. Lett. **97**, 208302 (2006).
+
+[5] N. B. Simeonova and W. K. Kegel, Gravity-Induced Aging in Glasses of Colloidal Hard Spheres, Phys. Rev. Lett. **93**, 035701 (2004).
+
+[6] J. Palacci, C. Cottin-Bizonne, C. Ybert, and L. Bocquet, Sedimentation and Effective Temperature of Active Colloidal Suspensions, Phys. Rev. Lett. **105**, 088304 (2010).
+
+[7] O. D. Velev and K. H. Bhatt, On-chip micromanipulation and assembly of colloidal particles by electric fields, Soft Matter **2**, 738 (2006).
+
+[8] R. W. O'Brien and L. R. White, Electrophoretic mobility of a spherical colloidal particle, J. Chem. Soc., Faraday Trans. 2 **74**, 1607 (1978).
+
+[9] P. Tierno, R. Muruganathan, and T. M. Fischer, Viscoelasticity of Dynamically Self-Assembled Paramagnetic Colloidal Clusters, Phys. Rev. Lett. **98**, 028301 (2007).
+
+[10] A. Snezhko and I. S. Aranson, Magnetic manipulation of self-assembled colloidal asters, Nat. Mater. **10**, 698 (2011).
+
+[11] J. Loehr, M. Loenne, A. Ernst, D. de las Heras, and T. M. Fischer, Topological protection of multiparticle dissipative transport, Nat. Commun. **7**, 11745 (2016).
+
+[12] J. Bender and N. J. Wagner, Reversible shear thickening in monodisperse and bidisperse colloidal dispersions, J. Rheol. **40**, 899 (1996).
+
+[13] N. J. Wagner and J. F. Brady, Shear thickening in colloidal dispersions, Phys. Today **62**(10), 27 (2009).
+
+[14] R. Besseling, E. R. Weeks, A. B. Schofield, and W. C. K. Poon, Three-Dimensional Imaging of Colloidal Glasses Under Steady Shear, Phys. Rev. Lett. **99**, 028301 (2007).
+
+[15] D. B. Ruffner and D. G. Grier, Optical Forces and Torques in Nonuniform Beams of Light, Phys. Rev. Lett. **108**, 173602 (2012).
+
+[16] B. Sun, J. Lin, E. Darby, A. Y. Grosberg, and D. G. Grier, Brownian vortexes, Phys. Rev. E **80**, 010401 (2009).
+
+[17] C. Lutz, M. Kollmann, and C. Bechinger, Single-File Diffusion of Colloids in One-Dimensional Channels, Phys. Rev. Lett. **93**, 026001 (2004).
+
+[18] N. D. Mermin, Thermal properties of the inhomogeneous electron gas, Phys. Rev. **137**, A1441 (1965).
+
+[19] R. Evans, The nature of the liquid-vapour interface and other topics in the statistical mechanics of non-uniform, classical fluids, Adv. Phys. **28**, 143 (1979).
+
+[20] P. Hohenberg and W. Kohn, Inhomogeneous electron gas, Phys. Rev. **136**, B864 (1964).
+
+[21] R. Evans, M. Oettel, R. Roth, and G. Kahl, New developments in classical density functional theory, J. Phys.: Condens. Matter **28**, 240401 (2016).
+
+[22] M. Schmidt and J. M. Brader, Power functional theory for Brownian dynamics, J. Chem. Phys. **138**, 214101 (2013).
+
+[23] A. Fortini, D. de las Heras, J. M. Brader, and M. Schmidt, Superadiabatic Forces in Brownian Many-Body Dynamics, Phys. Rev. Lett. **113**, 167801 (2014).
+
+[24] D. Borgis, R. Assaraf, B. Rotenberg, and R. Vuilleumier, Computation of pair distribution functions and three-dimensional densities with a reduced variance principle, Mol. Phys. **111**, 3486 (2013).
+
+[25] D. de las Heras and M. Schmidt, Better Than Counting: Density Profiles from Force Sampling, Phys. Rev. Lett. **120**, 218001 (2018).
+---PAGE_BREAK---
+
+[26] J. D. Weeks, D. Chandler, and H. C. Andersen, Role of repulsive forces in determining the equilibrium structure of simple liquids, J. Chem. Phys. **54**, 5237 (1971).
+
+[27] D. Leighton and A. Acrivos, The shear-induced migration of particles in concentrated suspensions, J. Fluid Mech. **181**, 415 (1987).
+
+[28] M. Frank, D. Anderson, E. R. Weeks, and J. F. Morris, Particle migration in pressure-driven flow of a Brownian suspension, J. Fluid Mech. **493**, 363 (2003).
+
+[29] N. C. X. Stuhlmüller, T. Eckert, D. de las Heras, and M. Schmidt, Structural Nonequilibrium Forces in Driven Colloidal Systems, Phys. Rev. Lett. **121**, 098002 (2018).
+
+[30] U. M. B. Marconi and P. Tarazona, Dynamic density functional theory of fluids, J. Chem. Phys. **110**, 8032 (1999).
+
+[31] J. M. Brader and M. Krüger, Density profiles of a colloidal liquid at a wall under shear flow, Mol. Phys. **109**, 1029 (2011).
+
+[32] A. Scacchi, M. Krüger, and J. M. Brader, Driven colloidal fluids: Construction of dynamical density functional theories from exactly solvable limits, J. Phys.: Condens. Matter **28**, 244023 (2016).
+
+[33] See Supplemental Material at http://link.aps.org/supplemental/10.1103/PhysRevE.99.023306 for a movie with an example in time-dependent situations.
+
+[34] M. Thiele, E. K. U. Gross, and S. Kümmel, Adiabatic Approximation in Nonperturbative Time-Dependent Density-Functional Theory, Phys. Rev. Lett. **100**, 153004 (2008).
+
+[35] J. Zhang, P. W. Hess, A. Kyprianidis, P. Becker, A. Lee, J. Smith, G. Pagano, I.-D. Potirniche, A. C. Potter, A. Vishwanath, N. Y. Yao, and C. Monroe, Observation of a discrete time crystal, Nature **543**, 217 (2017).
+
+[36] R. Moessner and S. L. Sondhi, Equilibration and order in quantum Floquet matter, Nat. Phys. **13**, 424 (2017).
+
+[37] D. de las Heras and M. Schmidt, Velocity Gradient Power Functional for Brownian Dynamics, Phys. Rev. Lett. **120**, 028001 (2018).
+
+[38] P. Krinninger, M. Schmidt, and J. M. Brader, Nonequilibrium Phase Behavior from Minimization of Free Power Dissipation, Phys. Rev. Lett. **117**, 208003 (2016).
\ No newline at end of file
diff --git a/samples/texts_merged/7878336.md b/samples/texts_merged/7878336.md
new file mode 100644
index 0000000000000000000000000000000000000000..e9625c509885a09b59c8a0d525417995bab390bc
--- /dev/null
+++ b/samples/texts_merged/7878336.md
@@ -0,0 +1,374 @@
+
+---PAGE_BREAK---
+
+Minimizing the number of complete bipartite graphs in a
+$K_s$-saturated graph
+
+Beka Ergemlidze* Abhishek Methuku† Michael Tait‡ Craig Timmons§
+
+May 29, 2021
+
+Abstract
+
+A graph $G$ is $F$-saturated if it contains no copy of $F$ as a subgraph but the addition of any new edge to $G$ creates a copy of $F$. We prove that for $s \ge 3$ and $t \ge 2$, the minimum number of copies of $K_{1,t}$ in an $n$-vertex $K_s$-saturated graph is $\Theta(n^{t/2+1})$. More precise results are obtained when $t=2$, where the problem is related to Moore graphs with diameter 2 and girth 5. We prove that for $s \ge 4$ and $t \ge 3$, the minimum number of copies of $K_{2,t}$ in an $n$-vertex $K_s$-saturated graph is at least $\Omega(n^{t/5+8/5})$ and at most $O(n^{t/2+3/2})$. These results answer a question of Chakraborti and Loh. General estimates on the number of copies of $K_{a,b}$ in a $K_s$-saturated graph are also obtained, but finding an asymptotic formula remains open.
+
+# 1 Introduction
+
+Let $F$ be a graph with at least one edge. A graph $G$ is $F$-free if $G$ does not contain $F$ as a subgraph. The study of $F$-free graphs is central to extremal combinatorics. Turán's Theorem, widely considered to be a cornerstone result in graph theory, determines the maximum number of edges in an $n$-vertex $K_s$-free graph. An interesting class of $F$-free graphs consists of those that are maximal with respect to the addition of edges. We say that a graph $G$ is $F$-saturated if $G$ is $F$-free but the addition of an edge joining any pair of nonadjacent vertices of $G$ creates a copy of $F$. The function $\text{sat}(n, F)$ is the saturation number of $F$, and is defined to be the minimum number of edges in an $n$-vertex $F$-saturated graph. In some sense, it is dual to the Turán function $\text{ex}(n, F)$, which is the maximum number of edges in an $n$-vertex $F$-saturated graph.
+
+One of the first results on graph saturation is a theorem of Erdős, Hajnal, and Moon [10] which determines the saturation number of $K_s$. They proved that for $2 \le s \le n$, there is a unique $n$-vertex $K_s$-saturated graph with the minimum number of edges. This graph is the join of a complete graph on $s-2$ vertices and an independent set on $n-s+2$ vertices, denoted $K_{s-2} + \bar{K}_{n-s+2}$. The Erdős-Hajnal-Moon Theorem was proved in the 1960s and since then,
+
+*Department of Mathematics and Statistics, University of South Florida, Tampa, Florida, U.S.A. E-mail: beka.ergemlidze@gmail.com
+
+†School of Mathematics, University of Birmingham, United Kingdom. E-mail: abhishekmethuku@gmail.com. Research is supported by the EPSRC grant number EP/S00100X/1 (A. Methuku).
+
+‡Department of Mathematics & Statistics, Villanova University, U.S.A. E-mail: michael.tait@villanova.edu. Research is supported in part by National Science Foundation grant DMS-2011553.
+
+§Department of Mathematics and Statistics, California State University Sacramento, U.S.A. E-mail: craig.timmons@csus.edu. Research is supported in part by Simons Foundation Grant #359419.
+---PAGE_BREAK---
+
+graph saturation has developed into its own area of extremal combinatorics. We recommend the survey of Faudree, Faudree, and Schmitt [12] as a reference for history and significant results in graph saturation.
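The Erdős-Hajnal-Moon construction can be checked by brute force for small parameters. The following sketch (our own illustrative verification, not from the paper) confirms that $K_{s-2} + \overline{K_{n-s+2}}$ is $K_s$-free, that adding any missing edge creates a $K_s$, and that its edge count matches $(s-2)(n-s+2) + \binom{s-2}{2}$:

```python
from itertools import combinations

# Brute-force check, for small n and s, that the Erdos-Hajnal-Moon
# graph K_{s-2} + complement(K_{n-s+2}) is K_s-saturated.
def has_clique(adj, size, n):
    return any(all(v in adj[u] for u, v in combinations(S, 2))
               for S in combinations(range(n), size))

def ehm_graph(n, s):
    """Join of a clique on s-2 vertices with an independent set."""
    adj = {v: set() for v in range(n)}
    for u in range(s - 2):
        for v in range(n):
            if u != v:
                adj[u].add(v)
                adj[v].add(u)
    return adj

n, s = 8, 4
adj = ehm_graph(n, s)
assert not has_clique(adj, s, n)            # K_s-free
for u, v in combinations(range(n), 2):
    if v not in adj[u]:
        adj[u].add(v); adj[v].add(u)
        assert has_clique(adj, s, n)        # adding any edge creates K_s
        adj[u].remove(v); adj[v].remove(u)

edges = sum(len(adj[v]) for v in adj) // 2
print(edges)  # equals (s-2)(n-s+2) + C(s-2, 2)
```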
+
+The function sat($n, F$) concerns the minimum number of edges in an $F$-saturated graph. More generally, one can ask for the minimum number of copies of $H$ in an $n$-vertex $F$-saturated graph. Let us write sat($n, H, F$) for this minimum. This function was introduced in [18] and was motivated by the well-studied generalized Turán function whose systematic study was initiated by Alon and Shikhelman [2]. Recalling that the Erdős-Hajnal-Moon Theorem determines sat($n, K_s$) = sat($n, K_2, K_s$), it is quite natural to study the generalized function sat($n, K_r, K_s$), where $2 \le r < s$. Answering a question of Kritschgau, Methuku, Tait and Timmons [18], Chakraborti and Loh [5] proved that for every $2 \le r < s$, there is a constant $n_{r,s}$ such that for all $n \ge n_{r,s}$,
+
+$$ \mathrm{sat}(n, K_r, K_s) = (n-s+2) \binom{s-2}{r-1} + \binom{s-2}{r}. $$
+
+Furthermore, they showed that $K_{s-2} + \overline{K_{n-s+2}}$ is the unique graph that minimizes the number of copies of $K_r$ among all $n$-vertex $K_s$-saturated graphs for $n \ge n_{r,s}$. They proved a similar result for cycles where the critical point is that $K_{s-2} + \overline{K_{n-s+2}}$ is again the unique graph that minimizes the number of copies of $C_r$ among all $n$-vertex $K_s$-saturated graphs for $n \ge n_{r,s}$ under some assumptions on $r$ in relation to $s$ (see Theorem 1.4 below). Chakraborti and Loh then asked the following question (Problem 10.5 in [5]).
+
+**Question 1.1** Is there a graph $H$ for which $K_{s-2} + \overline{K_{n-s+2}}$ does not (uniquely) minimize the number of copies of $H$ among all $n$-vertex $K_s$-saturated graphs for all large enough $n$?
+
+Here we answer this question positively and show that there are graphs $H$ for which $K_{s-2} + \overline{K_{n-s+2}}$ is not the unique extremal graph.
+
+We begin by stating our first two results, Theorems 1.2 and 1.3, where $H = K_{1,t}$. Together, they demonstrate a change in behaviour between the cases $H = K_{1,2}$ and $H = K_{1,t}$ with $t > 2$.
+
+**Theorem 1.2** (i) For $n \ge 3$,
+
+$$ \binom{n}{2} - \frac{n^{3/2}}{2} \le \mathrm{sat}(n, K_{1,2}, K_3) \le \binom{n-1}{2}. $$
+
+(ii) For $n \ge s \ge 4$,
+
+$$ \mathrm{sat}(n, K_{1,2}, K_s) = (s-2)\binom{n-1}{2} + (n-s+2)\binom{s-2}{2}. $$
+
+Furthermore, $K_{s-2} + \overline{K_{n-s+2}}$ is the unique $n$-vertex $K_s$-saturated graph with the minimum number of copies of $K_{1,2}$.
+
+**Theorem 1.3** For integers $n \ge s \ge 3$ and $t \ge 3$,
+
+$$ \mathrm{sat}(n, K_{1,t}, K_s) = \Theta(n^{t/2+1}). $$
+---PAGE_BREAK---
+
+A consequence of Theorem 1.3 is that if $s, t \ge 3$ and $n$ is large enough in terms of $t$, $K_{s-2} + \overline{K_{n-s+2}}$ does not minimize the number of copies of $K_{1,t}$ among all $n$-vertex $K_s$-saturated graphs. Indeed, $K_{s-2} + \overline{K_{n-s+2}}$ has $\Theta(n^t)$ copies of $K_{1,t}$. Interestingly, the special case of determining sat($n, K_{1,2}, K_3$) is related to the existence of Moore graphs. This is discussed further in the Concluding Remarks section, but whenever a Moore graph of diameter 2 and girth 5 exists, this graph will have fewer copies of $K_{1,2}$ than $K_1 + \overline{K_{n-1}} = K_{1,n-1}$. Thus, any potential result that determines sat($n, K_{1,2}, K_3$) exactly would have to take this into account.
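The Petersen graph, the unique Moore graph of diameter 2 and girth 5 on $n = 10$ vertices, gives a concrete instance of this phenomenon (an illustrative check of ours, not a computation from the paper): being triangle-free of diameter 2 it is $K_3$-saturated, and being 3-regular it has $10\binom{3}{2} = 30$ copies of $K_{1,2}$, fewer than the $\binom{9}{2} = 36$ copies in the star $K_{1,9}$:

```python
from itertools import combinations
from math import comb

# Petersen graph: outer 5-cycle, spokes, inner pentagram.
edges = set()
for i in range(5):
    edges.add(frozenset((i, (i + 1) % 5)))          # outer cycle
    edges.add(frozenset((i, i + 5)))                # spokes
    edges.add(frozenset((5 + i, 5 + (i + 2) % 5)))  # inner pentagram
adj = {v: {u for e in edges if v in e for u in e if u != v}
       for v in range(10)}

# triangle-free: no edge has a common neighbor of its endpoints
triangle_free = all(not (adj[u] & adj[v])
                    for u, v in (tuple(e) for e in edges))
# K_3-saturated: every nonadjacent pair has a common neighbor
saturated = all(adj[u] & adj[v]
                for u, v in combinations(range(10), 2)
                if frozenset((u, v)) not in edges)
paths = sum(comb(len(adj[v]), 2) for v in range(10))  # copies of K_{1,2}

print(triangle_free, saturated, paths)
```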
+
+The graph used to prove the upper bound of Theorem 1.3 is a $K_s$-saturated graph with maximum degree at most $c_s n^{1/2}$. This graph was constructed by Alon, Erdős, Holzman, and Krivelevich [1] and it is structurally very different from $K_{s-2} + \overline{K_{n-s+2}}$. Using this graph one can prove a more general upper bound that applies to any connected bipartite graph. This will be stated in Theorem 1.5 below.
+
+Next we turn our attention to counting copies of $K_{2,t}$ (for $t \ge 2$) in $K_s$-saturated graphs. The graph $K_1 + \overline{K_{n-1}}$ is $K_3$-saturated and $K_{2,t}$-free. Thus, sat($n, K_{2,t}, K_3$) = 0 for all $t \ge 2$. For $t=2$ and $s \ge 4$ Chakraborti and Loh [5] proved that
+
+$$ \mathrm{sat}(n, K_{2,2}, K_s) = (1 + o(1)) \binom{s-2}{2} \binom{n}{2}. \quad (1) $$
+
+Observe that the graph $K_{s-2} + \overline{K_{n-s+2}}$ has
+
+$$ \binom{s-2}{2} \binom{n-s+2}{2} + 3\binom{s-2}{3} (n-s+2) + 3\binom{s-2}{4} $$
+
+copies of $K_{2,2}$ and this gives the upper bound in (1). Now the focus of [5] was on counting complete graphs and counting cycles, so here the above result is stated in terms of $K_{2,2}$ but of course $K_{2,2} = C_4$. However, it is important and relevant to this work to mention the following theorem of Chakraborti and Loh which shows that $K_{s-2} + \overline{K_{n-s+2}}$ minimizes the number of copies of $C_r$ in certain cases.
+
+**Theorem 1.4 (Chakraborti and Loh [5])** Let $s \ge 4$, and let $r \ge 7$ if $r$ is odd and $r \ge 4\sqrt{s-2}$ if $r$ is even. There is an $n_{r,s}$ such that for all $n \ge n_{r,s}$, the graph $K_{s-2} + \overline{K_{n-s+2}}$ minimizes the number of copies of $C_r$ over all $n$-vertex $K_s$-saturated graphs. Moreover, when $r \le 2s - 4$, this is the unique extremal graph.
+
+It is conjectured in [5] that $K_{s-2} + \overline{K_{n-s+2}}$ is the unique graph that minimizes the number of copies of $C_r$ among all $K_s$-saturated graphs. Currently it is only known that $K_{s-2} + \overline{K_{n-s+2}}$ minimizes the number of copies of $K_r$ (Erdős-Hajnal-Moon for $r=2$ and [5] for $r > 2$), and minimizes the number of copies of $C_r$ under certain assumptions (stated in Theorem 1.4). Theorem 1.3 shows $K_{s-2} + \overline{K_{n-s+2}}$ does not minimize the number of copies of $K_{1,t}$. We extend this to $K_{a,b}$ with $1 \le a+1 < b$ using the following theorem.
+
+**Theorem 1.5** Let $F$ be a connected bipartite graph with parts of size $a$ and $b$, where $1 \le a+1 < b$, and let $s \ge 3$ be an integer. Then
+
+$$ \mathrm{sat}(n, F, K_s) = \begin{cases} 0 & \text{if } a > s - 2, \\ O(n^{\frac{1}{2}(a+b+1)}) & \text{if } a \le s - 2 \end{cases} $$
+
+where the implicit constant may depend on $a$, $b$, and $s$.
+---PAGE_BREAK---
+
+Theorem 1.5 naturally suggests the following question: how many copies of $K_{2,t}$ must there be in a $K_s$-saturated graph? In this direction we prove the following.
+
+**Theorem 1.6** Let $s \ge 4$ and $t \ge 3$ be integers. There is a positive constant $C$ such that
+
+$$\mathrm{sat}(n, K_{2,t}, K_s) \ge C n^{t/5+8/5}.$$
+
+By Theorem 1.5, $\mathrm{sat}(n, K_{2,t}, K_s) \le O_{s,t}(n^{t/2+3/2})$ for $s \ge 4$ and $t \ge 3$, so that there is a gap in the exponent in the upper and lower bounds.
+
+Saturation problems with restrictions on the degrees have also been well-studied. Duffus and Hanson [7] investigated triangle-saturated graphs with minimum degree 2 and 3. Day [8] resolved a 20-year-old conjecture of Bollobás [15], which asked for a lower bound on the number of edges in $K_s$-saturated graphs with minimum degree $t$. Gould and Schmitt [14] studied $K_2^t$-saturated graphs (where $K_2^t$ is the complete $t$-partite graph with parts of size 2) with a given minimum degree. Furthermore, $K_s$-saturated graphs with restrictions on the maximum degree were studied in [1, 13, 19]. Turning to generalized saturation numbers, as a step towards generalizing Day's Theorem, Curry et al. [6] proved bounds on the number of triangles in a $K_s$-saturated graph with minimum degree $t$. Motivated by these results we prove a lower bound on the number of copies of $K_{a,b}$ in $K_s$-saturated graphs in terms of the minimum degree.
+
+**Theorem 1.7** Let $s \ge 4$ and $2 \le a < b$ be integers with $a \le s-2$. If $G$ is an $n$-vertex $K_s$-saturated graph with minimum degree $\delta(G)$, then $G$ contains at least
+
+$$c \left( \frac{n - \delta(G) - 1}{\delta(G)^{a-1}} \right)^{b/2}$$
+
+copies of $K_{a,b}$ for some constant $c = c(s, a, b) > 0$.
+
+Theorem 1.7 shows that if $0 \le \alpha < \frac{1}{a-1}$ and $\delta(G) \le \kappa n^\alpha$ for some $\kappa > 0$, then $G$ contains at least $cn^{\frac{b}{2}(1-\alpha(a-1))}$ copies of $K_{a,b}$. In particular, when $\delta(G)$ is a constant, we obtain $\Omega(n^{t/2})$ copies of $K_{2,t}$. This improves the lower bound of Theorem 1.6, but comes at the cost of a minimum degree assumption.
+
+In the next subsection we give the notation that will be used in our proofs. Section 2 contains the proofs of Theorems 1.2 and 1.3. Section 3 contains the proofs of Theorems 1.5, 1.6, and 1.7.
+
+## 1.1 Notation
+
+For graphs $F$ and $G$, we write $\mathcal{N}(F,G)$ for the number of copies of $F$ in $G$. For a graph $G$ and $x, y \in V(G)$, write $N(x)$ for the neighborhood of $x$, and $N(x,y)$ for $N(x) \cap N(y)$. More generally, if $X \subseteq V(G)$ and $v \in V(G)$, then $N(v,X)$ is the set of vertices adjacent to all of the vertices in $\{v\} \cup X$, and $N(X)$ is the set of vertices adjacent to all vertices in $X$. We write $d(v) = |N(v)|$, $d(X) = |N(X)|$, and $d(v,X) = |N(v,X)|$. The set $N[v] = N(v) \cup \{v\}$ is the closed neighborhood of $v$. For a graph $G$, let $e(G)$ denote the number of edges in $G$.
+
+For a hypergraph $\mathcal{H}$, $d_{\mathcal{H}}(v)$ is the number of edges in $\mathcal{H}$ containing $v$. Similarly, $d_{\mathcal{H}}(X)$ and $d_{\mathcal{H}}(v,X)$ is the number of edges in $\mathcal{H}$ containing $X$ and $\{v\} \cup X$, respectively.
+---PAGE_BREAK---
+
+## 2 Bounds on sat$(n, K_{1,t}, K_s)$
+
+### 2.1 Proof of Theorem 1.2
+
+Since the graph $K_{s-2} + \overline{K}_{n-s+2}$ is $K_s$-saturated, by counting the number of copies of $K_{1,2}$ in it, we have
+
+$$ \mathrm{sat}(n, K_{1,2}, K_s) \le (s-2)\binom{n-1}{2} + (n-s+2)\binom{s-2}{2}. \quad (2) $$
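As an aside (not part of the original argument), formula (2) is easy to verify by brute force on small cases. The following Python sketch, with helper names of our own choosing, builds $K_{s-2} + \overline{K}_{n-s+2}$ and counts copies of $K_{1,2}$ as $\sum_v \binom{d(v)}{2}$:

```python
from itertools import combinations
from math import comb

def join_graph(s, n):
    """Adjacency sets of K_{s-2} + complement(K_{n-s+2}) on vertices 0..n-1.
    Vertices 0..s-3 form the clique K_{s-2} and are joined to everything."""
    adj = {v: set() for v in range(n)}
    for u, v in combinations(range(n), 2):
        if u < s - 2 or v < s - 2:  # at least one endpoint in the clique part
            adj[u].add(v)
            adj[v].add(u)
    return adj

def count_stars(adj, t=2):
    """Number of copies of K_{1,t}: sum over centers of C(deg, t)."""
    return sum(comb(len(nbrs), t) for nbrs in adj.values())

s, n = 5, 10
formula = (s - 2) * comb(n - 1, 2) + (n - s + 2) * comb(s - 2, 2)
assert count_stars(join_graph(s, n)) == formula
```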
+
+In particular, if $s=3$ we have $\mathrm{sat}(n, K_{1,2}, K_3) \le \binom{n-1}{2}$. We now prove a matching lower bound up to an error term of order $O(n^{3/2})$. Let $G$ be an $n$-vertex $K_3$-saturated graph. If $e(G) \ge \frac{\sqrt{n-1}n}{2}$, then for $n \ge 3$,
+
+$$ \begin{aligned} \mathcal{N}(K_{1,2}, G) &= \sum_{v \in V(G)} \binom{d(v)}{2} \ge n\binom{2e(G)/n}{2} = \frac{2e(G)^2}{n} - e(G) \\ &\ge \binom{n}{2} - \frac{n^{3/2}}{2}. \end{aligned} $$
+
+Now assume that $e(G) < \frac{\sqrt{n-1}n}{2}$. If $x$ and $y$ are not adjacent, then since $G$ is $K_3$-saturated, $x$ and $y$ must be joined by a path of length 2. Hence,
+
+$$ \mathcal{N}(K_{1,2}, G) \ge e(\bar{G}) = \binom{n}{2} - e(G) \ge \binom{n}{2} - \frac{n^{3/2}}{2}. $$
+
+This completes the proof of (i) of Theorem 1.2. To prove (ii) of Theorem 1.2, it suffices to show that for $n \ge s \ge 4$,
+
+$$ \mathrm{sat}(n, K_{1,2}, K_s) \ge (s-2)\binom{n-1}{2} + (n-s+2)\binom{s-2}{2}, $$
+
+since (2) holds. Let $G$ be an $n$-vertex $K_s$-saturated graph with $n \ge s \ge 4$. Kim, Kim, Kostochka and O [17, Theorem 2.1] proved that
+
+$$ \sum_{v \in V(G)} (d(v)+1)(d(v)+2-s) \ge (s-2)n(n-s+1). \quad (3) $$
+
+It is easy to check that
+
+$$ \sum_{v \in V(G)} (d(v)+1)(d(v)+2-s) = \sum_{v \in V(G)} (d(v)-1)d(v) + (4-s) \sum_{v \in V(G)} d(v) + (2-s)n. \quad (4) $$
+
+Therefore, combining (3) and (4), we have
+
+$$ \sum_{v \in V(G)} (d(v)-1)d(v) \ge (s-2)n(n-s+1) + (s-4)2e(G) + (s-2)n. \quad (5) $$
+
+By the Erdős-Hajnal-Moon Theorem
+
+$$ \mathrm{sat}(n, K_s) = (s-2)(n-s+2) + \binom{s-2}{2}, $$
+---PAGE_BREAK---
+
+and $K_{s-2} + \overline{K_{n-s+2}}$ is the unique $n$-vertex $K_s$-saturated graph with $\text{sat}(n, K_s)$ edges. Thus,
+
+$$2e(G) \geq 2(s-2)(n-s+2) + 2\binom{s-2}{2} = (s-2)(2n-s+1).$$
+
+Plugging this into (5) we get that if $s \geq 4$,
+
+$$\sum_{v \in V(G)} (d(v)-1)d(v) \geq (s-2)n(n-s+1) + (s-4)(s-2)(2n-s+1) + (s-2)n.$$
+
+Dividing through by 2 and simplifying the right-hand side yields
+
+$$\sum_{v \in V(G)} \binom{d(v)}{2} \geq (s-2)\binom{n-1}{2} + (n-s+2)\binom{s-2}{2},$$
+
+where equality holds only if $G = K_{s-2} + \overline{K_{n-s+2}}$. This completes the proof of Theorem 1.2.
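The simplification in the last step can be double-checked numerically. The following sketch (our own helper names, not the authors') verifies that the two sides of the final bound agree as polynomial identities over a range of parameters:

```python
from math import comb

def before_halving(n, s):
    # right-hand side of the degree-sum bound obtained from (5)
    return (s - 2) * n * (n - s + 1) + (s - 4) * (s - 2) * (2 * n - s + 1) + (s - 2) * n

def target(n, s):
    # twice the claimed lower bound on the sum of C(d(v), 2)
    return 2 * ((s - 2) * comb(n - 1, 2) + (n - s + 2) * comb(s - 2, 2))

# the "dividing through by 2 and simplifying" step is an exact identity
assert all(before_halving(n, s) == target(n, s)
           for s in range(4, 20) for n in range(s, 60))
```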
+
+## 2.2 Proof of Theorem 1.3
+
+Now we prove a lower bound on the number of copies of $K_{1,t}$ in a $K_s$-saturated graph that gives the correct order of magnitude for all $t \geq 3$.
+
+**Proposition 2.1** Let $n \geq s \geq 3$ and $t \geq 3$ be integers. Then
+
+$$\operatorname{sat}(n, K_{1,t}, K_s) \geq \left(\frac{\sqrt{s-2}}{t}\right)^t n^{t/2+1} + O_{s,t}(n^{t/2}).$$
+
+**Proof.** Let $G$ be an $n$-vertex $K_s$-saturated graph. Kim, Kim, Kostochka and O [17, Theorem 1.1] proved that
+
+$$\sum_{v \in V(G)} d(v)^2 \geq (n-1)^2(s-2) + (s-2)^2(n-s+2) \quad (6)$$
+
+and that equality holds if and only if $G$ is $K_{s-2} + \overline{K_{n-s+2}}$, except for in the case that $s=3$ where equality holds if and only if $G$ is $K_1 + \overline{K_{n-1}}$ or a Moore graph. By the Power Means Inequality,
+
+$$\sum_{v \in V(G)} d(v)^2 \leq n^{1-2/t} \left( \sum_{v \in V(G)} d(v)^t \right)^{2/t} . \quad (7)$$
+
+Combining (6) and (7) with the inequality $\sum_{v \in V(G)} d(v)^t \leq t^t \sum_{v \in V(G)} \binom{d(v)}{t}$ and rearranging, we obtain that $\mathcal{N}(K_{1,t}, G)$ satisfies
+
+$$\sum_{v \in V(G)} \binom{d(v)}{t} \geq \frac{((n-1)^2(s-2) + (s-2)^2(n-s+2))^{t/2}}{t^t n^{t/2-1}} = \left(\frac{\sqrt{s-2}}{t}\right)^t n^{t/2+1} + O_{s,t}(n^{t/2}).$$
+
+This completes the proof of Proposition 2.1. ■
+
+**Proposition 2.2** Let $s \geq 3$ and $t \geq 3$ be integers. For sufficiently large $n$,
+
+$$\operatorname{sat}(n, K_{1,t}, K_s) \leq \frac{c_s^t n^{t/2+1}}{t!}$$
+
+where $c_s$ is a constant depending only on $s$.
+---PAGE_BREAK---
+
+**Proof.** By a result of Alon, Erdős, Holzman, and Krivelevich, for each $s \ge 3$ and sufficiently large $n$, there is a $K_s$-saturated graph $G$ with maximum degree at most $c_s\sqrt{n}$, where $c_s$ is a constant depending only on $s$ (asymptotically of order $2s$ for large $s$). The number of copies of $K_{1,t}$ in $G$ is then
+
+$$ \sum_{v \in V(G)} \binom{d(v)}{t} \le n \binom{\Delta(G)}{t} \le \frac{c_s^t n^{t/2+1}}{t!}. $$
+
+**Proof of Theorem 1.3.** Theorem 1.3 follows immediately from Propositions 2.1 and 2.2. ■
+
+# 3 Bounds on sat$(n, K_{2,t}, K_s)$ with $s \ge 4$ and $t \ge 3$
+
+## 3.1 Upper bound on sat$(n, K_{2,t}, K_s)$
+
+We begin this section with a basic lemma on counting copies of a graph $F$ in a graph $G$ with maximum degree $\Delta$. It is likely that this lemma, as well as Lemma 3.2, are known.
+
+**Lemma 3.1** Let $F$ be a connected bipartite graph with parts of size $a$ and $b$. If $G$ is an $n$-vertex graph with maximum degree $\Delta$, then
+
+$$ \mathcal{N}(F,G) \le n\Delta^{a+b-1}. $$
+
+**Proof.** We will prove the lemma by counting the number of possible embeddings of $F$ in $G$. Let $d$ be the diameter of $F$, and $x$ be a vertex in $F$. For $0 \le i \le d$, let $N_i(x)$ be the set of vertices at distance $i$ from $x$ in $F$. We count embeddings of $F$ in $G$ by starting with the vertex $x$, and then proceeding through $N_1(x)$, then $N_2(x)$ and so on. There are $n$ ways to choose a vertex in $G$ that corresponds to $x$. Suppose that $v_x$ is the chosen vertex in $G$. The vertices in $G$ corresponding to those in $N_1(x)$ must be neighbors of $v_x$ in $G$ and so there are at most $\Delta^{|N_1(x)|}$ possibilities. This process is then repeated on $N_2(x)$, $N_3(x)$, and so on. The crucial point is that each time a vertex of $F$ is embedded in $G$, it is a neighbor (in $G$) of a previously embedded vertex (from $F$). Therefore, the number of possible embeddings of $F$ in $G$ is at most
+
+$$ n\Delta^{|N_1(x)|}\Delta^{|N_2(x)|}\cdots\Delta^{|N_d(x)|} = n\Delta^{a+b-1}. $$
+
+Here we have used the fact that, since $F$ is a connected graph with diameter $d$, the sets $\{x\}, N_1(x), \ldots, N_d(x)$ partition the vertex set of $F$:
+
+$$ \{x\} \cup N_1(x) \cup N_2(x) \cup \cdots \cup N_d(x) = V(F). \; \blacksquare $$
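The embedding count in the proof can be illustrated on a toy instance. In this sketch (our own helper names) we count injective homomorphisms of a path on 3 vertices (a connected bipartite graph with parts of sizes 2 and 1) into the 5-cycle and compare with the bound $n\Delta^{a+b-1}$:

```python
from itertools import permutations

def count_embeddings(F_edges, F_n, adj, n):
    """Number of injective homomorphisms of F (on vertices 0..F_n-1) into G."""
    count = 0
    for img in permutations(range(n), F_n):
        if all(img[v] in adj[img[u]] for u, v in F_edges):
            count += 1
    return count

# G = 5-cycle, F = path on 3 vertices (so a + b = 3 and a + b - 1 = 2)
n = 5
adj = {v: {(v - 1) % n, (v + 1) % n} for v in range(n)}
path3 = [(0, 1), (1, 2)]
emb = count_embeddings(path3, 3, adj, n)
Delta = max(len(nbrs) for nbrs in adj.values())
assert emb <= n * Delta ** 2   # Lemma 3.1's bound n * Delta^{a+b-1}
```

Here the true count is 10 (each of the 5 middle vertices admits 2 ordered pairs of distinct neighbors), comfortably below the bound $5 \cdot 2^2 = 20$.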
+
+**Lemma 3.2** Let $F$ be a connected bipartite graph with parts of size $a$ and $b$. For any $n$-vertex graph $G$,
+
+$$ \mathcal{N}(K_{a,b}, G) \le \mathcal{N}(F, G). $$
+
+**Proof.** If $G$ has no $K_{a,b}$, then the lemma is trivial. Suppose $K$ is a copy of $K_{a,b}$ in $G$. Then, since $F$ is a subgraph of $K_{a,b}$, we have that $F$ is a subgraph of $K$ so $G$ has a copy of $F$. Moreover, since any two different copies of $K_{a,b}$ have different vertex sets, they give rise to different copies of $F$. Thus, for each copy of $K_{a,b}$ in $G$ we obtain a copy of $F$, and no copy of $F$ will be obtained twice in this way. This proves Lemma 3.2. ■
+
+We are now ready to prove Theorem 1.5.
+---PAGE_BREAK---
+
+**Proof of Theorem 1.5.** If $a > s - 2$, then $K_{s-2} + \overline{K}_{n-s+2}$ is $K_s$-saturated with no copies of $F$. Indeed, a copy of $F$ would need at least $a$ vertices from the $K_{s-2}$, but $a > s - 2$.
+
+Now assume $a \leq s - 2$. Let $G_q^s$ be the $K_s$-saturated graph constructed in [1], where $n$ (and thus $q$) is chosen large enough so that $b < \frac{q+1}{2}$. There is a constant $c_s > 0$ such that $\Delta(G_q^s) \leq c_s\sqrt{n}$. By Lemma 3.1, the number of copies of $F$ in $G_q^s$ is at most $n \cdot (c_s\sqrt{n})^{a+b-1} = c_s^{a+b-1}n^{(1/2)(a+b+1)}$. ■
+
+We conclude this subsection by showing that the graph $G_q^s$ used in the proof of Theorem 1.5 cannot be used to further improve upon the upper bound of $O(n^{\frac{1}{2}(a+b+1)})$ when $F = K_{a,b}$. Since we are showing that $G_q^s$ cannot be used to improve the upper bound, we will be brief in our argument. We will use the same terminology as in [1], but one point at which we differ is the notation we use for a vertex. A vertex in $G_q^s$ is determined by its level, place, type, and copy. A vertex at level $i$, place $j$, type $t$, and copy $c$ will be written as
+
+$$((i-1)q + j, t, c).$$
+
+First, take $n$ large enough so that $b < \frac{q+1}{2}$. Choose a sequence $i_1, i_2, \dots, i_a$ of levels with $1 \leq i_1 < i_2 < \dots < i_a \leq \frac{q+1}{2}$. Likewise, choose a sequence of $b$ levels $\frac{q+1}{2} \leq i_{a+1} < i_{a+2} < \dots < i_{a+b} \leq q+1$. This can be done in $\binom{(q+1)/2}{a}\binom{(q+1)/2}{b}$ ways. Next, choose a place $j_1 \in [q]$, which can be done in $q$ ways, and a type $t_1 \in [s-2]$, which can be done in $s-2$ ways. Finally, choose a sequence of copies $1 \leq c_1, c_2, \dots, c_{a+b} \leq s-1$ arbitrarily. This can be done in $(s-1)^{a+b}$ ways. Using the definition of $G_q^s$, one finds that the $a$ vertices in the set
+
+$$\{((i_z - 1)q + j_1, t_1, c_z) : 1 \leq z \leq a\}$$
+
+are all adjacent to the $b$ vertices in the set
+
+$$\{((i_z - 1)q + (j_1 + 1)_q, (t_1 + 1)_{s-2}, c_z) : a + 1 \leq z \leq a + b\}$$
+
+(here $(j_1+1)_q$ is the unique integer $\zeta$ in $\{1, 2, \dots, q\}$ for which $j_1+1 \equiv \zeta \pmod{q}$, and $(t_1+1)_{s-2}$ is the unique integer $\zeta'$ in $\{1, 2, \dots, s-2\}$ for which $t_1+1 \equiv \zeta' \pmod{s-2}$). This gives a $K_{a,b}$ in $G_q^s$ and so the number of $K_{a,b}$ in $G_q^s$ is at least
+
+$$\binom{\left(\frac{1}{2}\right)(q+1)}{a} \binom{\left(\frac{1}{2}\right)(q+1)}{b} q(s-2)(s-1)^{a+b} \ge C_{s,a,b} q^{a+b+1} \ge C n^{(1/2)(a+b+1)}.$$
+
+By Lemmas 3.2 and 3.1, $G_q^s$ is a $K_s$-saturated $n$-vertex graph with $\Theta_{s,a,b}(n^{(1/2)(a+b+1)})$ copies of $K_{a,b}$.
+
+## 3.2 Lower bound on sat$(n, K_{2,t}, K_s)$
+
+First we prove Theorem 1.6.
+
+**Proof of Theorem 1.6.** Let $G$ be a $K_s$-saturated graph on $n$ vertices. Note that we can assume
+
+$$e(G) \leq \frac{n^{\frac{7}{4}}}{10}. \quad (8)$$
+
+Otherwise, a theorem of Erdős and Simonovits [9] implies that there is a positive constant $\gamma$ such that
+
+$$\mathcal{N}(K_{2,t}, G) \geq \gamma \frac{e(G)^{2t}}{n^{3t-2}} = \Omega(n^{\frac{t}{2}+2}), \quad (9)$$
+---PAGE_BREAK---
+
+proving Theorem 1.6.
+
+Let $K_4^-$ be the graph consisting of 4 vertices and 5 edges obtained by removing an edge from $K_4$. For a copy of $K_4^-$ with vertices $x, y, u, v$, where $uv \notin E(G)$, let $xy$ be called the *base edge* of this $K_4^-$. We estimate the number of copies of $K_4^-$ in a $K_s$-saturated graph $G$.
+
+For every $u, v$ with $uv \notin E(G)$ there is a set $S$ such that $S \subseteq N(u, v)$ and $S$ induces a $K_{s-2}$ in $G$. Therefore, there are at least $\binom{s-2}{2}$ pairs $x, y \in S$ such that $u, v, x, y$ form a copy of $K_4^-$. On the other hand, every $xy \in E(G)$ is the base edge of at most $\binom{d(x,y)}{2}$ copies of $K_4^-$ in $G$.
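As a parenthetical illustration of this double count (not part of the proof), consider the $K_4$-saturated graph $K_2 + \overline{K}_5$. The sketch below, with helper names of our own, counts copies of $K_4^-$ by their base edges and checks both sides of the count:

```python
from itertools import combinations
from math import comb

def k4_minus_copies(adj, n):
    """Copies of K_4^-: a base edge xy plus a nonadjacent pair {u,v} in N(x,y)."""
    total = 0
    for x, y in combinations(range(n), 2):
        if y in adj[x]:
            common = adj[x] & adj[y]
            total += sum(1 for u, v in combinations(sorted(common), 2)
                         if v not in adj[u])
    return total

# K_4-saturated example: G = K_2 + complement(K_5) on 7 vertices (s = 4)
n, s = 7, 4
adj = {v: set() for v in range(n)}
for u, v in combinations(range(n), 2):
    if u < 2 or v < 2:
        adj[u].add(v)
        adj[v].add(u)

copies = k4_minus_copies(adj, n)
non_edges = sum(1 for u, v in combinations(range(n), 2) if v not in adj[u])
upper = sum(comb(len(adj[x] & adj[y]), 2)
            for x, y in combinations(range(n), 2) if y in adj[x])
assert upper >= copies >= comb(s - 2, 2) * non_edges
```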
+
+Therefore,
+
+$$ \sum_{xy \in E(G)} \binom{d(x,y)}{2} \geq \mathcal{N}(K_4^-, G) \geq \sum_{uv \in E(\overline{G})} \binom{s-2}{2} \geq e(\overline{G}) \stackrel{(8)}{\geq} \frac{n^2}{4}. $$
+
+Thus, there is a constant $c_t = c(t)$ such that the following holds:
+
+$$ \begin{align*} \mathcal{N}(K_{2,t}, G) &\geq \sum_{xy \in E(G)} \binom{d(x,y)}{t} \geq \frac{1}{t^t} \sum_{xy \in E(G)} \left( \left( \binom{d(x,y)}{2} \right)^{\frac{t}{2}} - t^t \right) \geq \\ &\frac{e(G)}{t^t} \left( \frac{\sum_{xy \in E(G)} \binom{d(x,y)}{2}}{e(G)} \right)^{\frac{t}{2}} - e(G) \geq \frac{(n^2/4)^{\frac{t}{2}}}{t^t e(G)^{\frac{t}{2}-1}} - e(G) = \frac{(n^2/4)^{\frac{t}{2}} - t^t e(G)^{t/2}}{t^t e(G)^{\frac{t}{2}-1}} \stackrel{(8)}{\geq} \frac{c_t n^t}{e(G)^{\frac{t}{2}-1}}. \end{align*} $$
+
+Combining this with (9) we get
+
+$$ \mathcal{N}(K_{2,t}, G) \geq \min\left\{\gamma \frac{e(G)^{2t}}{n^{3t-2}}, \frac{c_t n^t}{e(G)^{\frac{t}{2}-1}}\right\}. $$
+
+Let $e(G) = n^\alpha$, then
+
+$$ \mathcal{N}(K_{2,t}, G) \geq \min \left\{ \gamma n^{2\alpha t - 3t + 2}, c_t n^{t - \alpha t / 2 + \alpha} \right\}. $$
+
+Choosing $\alpha = \frac{8t-4}{5t-2}$ and $C = \min\{\gamma, c_t\}$, we get the desired lower bound $Cn^{\frac{t}{5} + \frac{42}{25} - \frac{16}{125t-50}} > Cn^{\frac{t}{5} + \frac{8}{5}}$. $\blacksquare$
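The arithmetic behind this choice of $\alpha$ can be verified with exact rational arithmetic. The following sketch (ours, not part of the proof) checks that $\alpha$ balances the two exponents and that the resulting exponent exceeds $\frac{t}{5} + \frac{8}{5}$ for every $t \geq 3$:

```python
from fractions import Fraction

def exponents(t):
    alpha = Fraction(8 * t - 4, 5 * t - 2)   # the chosen alpha
    e1 = 2 * alpha * t - 3 * t + 2           # exponent of the supersaturation bound
    e2 = t - alpha * t / 2 + alpha           # exponent of the K_4^- counting bound
    return alpha, e1, e2

for t in range(3, 50):
    alpha, e1, e2 = exponents(t)
    assert e1 == e2                          # alpha balances the two bounds
    closed = Fraction(t, 5) + Fraction(42, 25) - Fraction(16, 125 * t - 50)
    assert e1 == closed                      # closed form of the common exponent
    assert closed > Fraction(t, 5) + Fraction(8, 5)
```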
+
+Next we turn to the proof of Theorem 1.7. We need the following lemma.
+
+**Lemma 3.3** Let $s \ge 4$ and $2 \le a \le b$ be integers with $a \le s-2$. Suppose that $G$ is an $n$-vertex $K_s$-saturated graph with vertex set $V$. There is a constant $c = c(s, a, b)$ such that for any $v \in V$, there are at least
+
+$$ c \left( \frac{n - d(v) - 1}{d(v)^{a-1}} \right)^{b/2} $$
+
+copies of $K_{a,b}$ containing $v$.
+
+**Proof.** Let $v \in V$. For each $u \in V\backslash N[v]$, there is a set $S_u \subset N(u,v)$ such that $S_u$ induces a $K_{s-2}$ in $G$. Fix such an $S_u$ and define an $(s-1)$-uniform hypergraph $\mathcal{H}$ to have vertex set $V\backslash\{v\}$, and edge set $E(\mathcal{H}) = \{\{u\} \cup S_u : u \in V\backslash N[v]\}$. By construction, $\mathcal{H}$ has $n-d(v)-1$ edges, each of which contains exactly one vertex from $V\backslash N[v]$ and $s-2$ vertices from $N(v)$. Also, no two edges of $\mathcal{H}$ contain the same vertex from $V\backslash N[v]$. In what follows, we will add the subscript $\mathcal{H}$ if we are referring to degrees in $\mathcal{H}$, and no subscript will be included if we are referring to degrees or neighborhoods in $G$.
+---PAGE_BREAK---
+
+By averaging, there is a set $X \in \binom{N(v)}{a-1}$ such that
+
+$$d_{\mathcal{H}}(X) \geq \frac{\binom{s-2}{a-1}(n-d(v)-1)}{\binom{d(v)}{a-1}}.$$
+
+We then have
+
+$$\sum_{y \in N(v,X)} d_{\mathcal{H}}(y,X) \geq \frac{d_{\mathcal{H}}(X)}{(s-2) - |X|} \geq c_1 \frac{n-d(v)-1}{d(v)^{a-1}} \quad (10)$$
+
+for some constant $c_1 = c_1(s, a) > 0$. The number of $K_{a,b}$ with $X \cup \{y\}$ forming the part of size $a$ ($y$ is an arbitrary vertex from $N(v, X)$) and $v$ in the part of size $b$ is at least
+
+$$\sum_{y \in N(v,X)} \binom{d_{\mathcal{H}}(y,X)}{b-1} \geq d(v,X) \binom{\frac{c_1(n-d(v)-1)}{d(v,X)d(v)^{a-1}}}{b-1} \geq \frac{c_2(n-d(v)-1)^{b-1}}{d(v,X)^{b-2}d(v)^{(a-1)(b-1)}}.$$
+
+Here we have used convexity, (10), and $c_2 = c_2(s, a, b)$ is some positive constant.
+
+Recalling that $|X| = a-1$, there are $\binom{d(v,X)}{b}$ copies of $K_{a,b}$ where $\{v\} \cup X$ is the part of size $a$ and the part of size $b$ is contained in $N(v)\setminus X$. Thus, for some constant $c_3 = c_3(s, a, b) > 0$, the number of $K_{a,b}$ that contain $v$ is at least
+
+$$\frac{c_3(n-d(v)-1)^{b-1}}{d(v,X)^{b-2}d(v)^{(a-1)(b-1)}} + c_3 d(v,X)^b.$$
+
+By considering cases according to which term in this sum is larger, we find that in both cases, there are at least
+
+$$c_3 \left( \frac{n - d(v) - 1}{d(v)^{a-1}} \right)^{b/2}$$
+
+copies of $K_{a,b}$ containing $v$. $\blacksquare$
+
+Applying Lemma 3.3 to a vertex $v$ with $d(v) = \delta(G)$ proves Theorem 1.7.
+
+# 4 Concluding Remarks
+
+An interesting open problem is determining the minimum number of copies of $K_{1,2}$ in a $K_3$-saturated graph. There is a connection between this problem and Moore graphs with diameter 2 and girth 5. It is easy to check that an $n$-vertex Moore graph with diameter 2 and girth 5 is $K_3$-saturated, and it is regular with degree $d = \sqrt{n-1}$ [21], so it contains $n\binom{d}{2} = \frac{n(n-1-\sqrt{n-1})}{2}$ copies of $K_{1,2}$; for all $n \ge 3$, this value is less than $\binom{n-1}{2}$, which is the number of copies of $K_{1,2}$ in $K_1 + \overline{K_{n-1}} = K_{1,n-1}$. Furthermore, one can duplicate vertices of a Moore graph and preserve the $K_3$-saturated property (where each duplicated vertex has the same neighborhood as the original vertex). Duplicating a vertex of the Petersen graph leads to an 11-vertex $K_3$-saturated graph with 42 copies of $K_{1,2}$, while $K_{1,10}$ has 45 copies of $K_{1,2}$. Starting from the Hoffman-Singleton graph, one can duplicate a vertex up to 4 times and still have fewer copies of $K_{1,2}$ than the number of copies of $K_{1,2}$ in $K_{1,n-1}$. Duplicating a single vertex is not necessarily the optimal way to minimize the number of copies of $K_{1,2}$, but the point is that there are other graphs besides the Moore graphs that have fewer copies of $K_{1,2}$ than $K_{1,n-1}$ does.
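The counts for the Petersen graph mentioned above are small enough to verify directly. The sketch below (our own construction, via the standard realization of the Petersen graph as the Kneser graph $K(5,2)$) duplicates a vertex and counts copies of $K_{1,2}$:

```python
from itertools import combinations
from math import comb

def petersen():
    """Petersen graph as the Kneser graph K(5,2): vertices are the 2-subsets
    of {0,...,4}, adjacent iff disjoint."""
    verts = [frozenset(p) for p in combinations(range(5), 2)]
    return {v: {w for w in verts if not (v & w)} for v in verts}

def duplicate(adj, v):
    """Add a twin v' with the same neighborhood as v (v' not adjacent to v)."""
    adj = {u: set(nbrs) for u, nbrs in adj.items()}
    twin = ("copy", v)
    adj[twin] = set(adj[v])
    for u in adj[v]:
        adj[u].add(twin)
    return adj

def stars(adj):
    """Copies of K_{1,2}: sum over centers of C(deg, 2)."""
    return sum(comb(len(nbrs), 2) for nbrs in adj.values())

P = petersen()
assert stars(P) == 30                              # 10 vertices of degree 3
assert stars(duplicate(P, next(iter(P)))) == 42    # the 11-vertex example
assert comb(10, 2) == 45                           # K_{1,10} has 45 copies
```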
+---PAGE_BREAK---
+
+It would also be interesting to determine the order of magnitude of sat$(n, K_{2,t}, K_s)$. There is a gap in the exponents (which is discussed in the introduction) and it would be nice to close this gap. It is not clear if our lower or upper bound is closer to the correct answer.
+
+Another potential approach to studying sat$(n, H, F)$ is via the random $F$-free process. This random process orders the pairs of vertices uniformly and then adds them one by one subject to the condition that adding an edge does not create a copy of $F$. The resulting graph is then $F$-saturated. This process was first considered in [4, 11, 20, 22] and has since been studied extensively. If $X_{H,F}$ is the random variable that counts the number of copies of $H$ in the output of this process, then we have that sat$(n, H, F) \le \mathbb{E}(X_{H,F})$. It would be interesting to determine for which graphs $H$ and $F$ that this approach gives better bounds than the explicit constructions that are currently known.
+
+## References
+
+[1] N. Alon, P. Erdős, R. Holzman, M. Krivelevich, On $k$-saturated graphs with restrictions on the degrees, *J. Graph Theory* **23** (1996), no. 1, 1–20.
+
+[2] N. Alon, C. Shikhelman, Many $T$ copies in $H$-free graphs, *J. Combin. Theory Ser. B* **121** (2016), 146–172.
+
+[3] T. Bohman, P. Keevash, The early evolution of the $H$-free process, *Invent. Math.*, **181**(2), (2010), 291–336.
+
+[4] B. Bollobás, O. Riordan, Constrained graph processes, *Electron. J. Combin.* **7**(1), (2000), R18.
+
+[5] D. Chakraborti, P. Loh, Minimizing the numbers of cliques and cycles of fixed size in an $F$-saturated graph, *European J. Combin.*, Vol. 90 (2020).
+
+[6] B. Cole, A. Curry, D. Davini, C. Timmons, Triangles in $K_s$-saturated graphs with minimum degree $t$, *Theory Appl. Graphs*, Vol. 7: Iss. 1, Article 2, (2020).
+
+[7] D. Duffus, D. Hanson, Minimal $k$-saturated and color critical graphs of prescribed minimum degree, *J. Graph Theory*, **10** (1), (1986), 55–67.
+
+[8] A. N. Day, Saturated graphs of prescribed minimum degree, *Combin. Probab. Comput.* **26** (2), (2017), 201–207.
+
+[9] P. Erdős, M. Simonovits, Supersaturated graphs and hypergraphs, *Combinatorica*, **3**(2), (1983), 181–192.
+
+[10] P. Erdős, A. Hajnal, J. W. Moon, A problem in graph theory, *Amer. Math. Monthly* **71** (1964), 1107–1110.
+
+[11] P. Erdős, S. Suen, P. Winkler, On the size of a random maximal graph, *Random Struct. Alg.* **6**, (1995), 309–318.
+
+[12] J. R. Faudree, R. J. Faudree, J. R. Schmitt, A survey of minimum saturated graphs, *Electron. J. Combin.*, DS19, (2011).
+
+[13] Z. Füredi, A. Seress, Maximal triangle-free graphs with restrictions on the degrees, *J. Graph Theory*, **18** (1), (1994), 11–24.
+---PAGE_BREAK---
+
+[14] R. J. Gould, J. R. Schmitt, Minimum degree and the minimum size of $K_2^t$-saturated graphs, *Discrete Math.*, **307** (9–10), (2007), 1108–1114.
+
+[15] R. L. Graham, M. Grötschel, L. Lovász, editors, Handbook of Combinatorics, Vol. 2, Elsevier Science B. V., Amsterdam; MIT Press, Cambridge, MA, 1995.
+
+[16] D. Hanson, K. Seyffarth, *k*-saturated graphs of prescribed maximum degree, *Congres. Numer.* 42 (1984), 169-182.
+
+[17] J. Kim, S. Kim, A. Kostochka, S. O, $K_{r+1}$-saturated graphs with small spectral radius, arXiv:2006.04355v1, (2020).
+
+[18] J. Kritschgau, A. Methuku, M. Tait, C. Timmons, Few $H$ copies in $F$-saturated graphs, *J. Graph Theory*, **94** (3), (2020), 320–348.
+
+[19] O. Pikhurko, Results and Open Problems on Minimum Saturated Graphs, *Ars Combin.*, 72, (2004), 111–127.
+
+[20] A. Rucínski, N. Wormald, Random graph processes with degree restrictions, *Combin. Probab. Comput.* **1** (1992), 169–180.
+
+[21] R. Singleton, There is no irregular Moore graph, *Amer. Math. Monthly* 75 (1968), 42–43.
+
+[22] J. H. Spencer, Maximal triangle-free graphs and Ramsey $R(3,t)$, unpublished manuscript, 1995.
\ No newline at end of file
diff --git a/samples/texts_merged/841018.md b/samples/texts_merged/841018.md
new file mode 100644
index 0000000000000000000000000000000000000000..d9fa2815b83cbc76d83145d9dfa7f34977d2830d
--- /dev/null
+++ b/samples/texts_merged/841018.md
@@ -0,0 +1,511 @@
+
+---PAGE_BREAK---
+
+# Shilov boundary for holomorphic functions on some classical Banach spaces
+
+by
+
+MARÍA D. ACOSTA (Granada) and
+MARY LILIAN LOURENÇO (São Paulo)
+
+**Abstract.** Let $\mathcal{A}_\infty(B_X)$ be the Banach space of all bounded and continuous functions on the closed unit ball $B_X$ of a complex Banach space $X$ and holomorphic on the open unit ball, with sup norm, and let $\mathcal{A}_u(B_X)$ be the subspace of $\mathcal{A}_\infty(B_X)$ of those functions which are uniformly continuous on $B_X$. A subset $B \subset B_X$ is a boundary for $\mathcal{A}_\infty(B_X)$ if $\|f\| = \sup_{x \in B} |f(x)|$ for every $f \in \mathcal{A}_\infty(B_X)$. We prove that for $X = d(w, 1)$ (the Lorentz sequence space) and $X = C_1(H)$, the trace class operators, there is a minimal closed boundary for $\mathcal{A}_\infty(B_X)$. On the other hand, for $X = S$, the Schreier space, and $X = K(\ell_p, \ell_q)$ ($1 \le p \le q < \infty$), there is no minimal closed boundary for the corresponding spaces of holomorphic functions.
+
+**1. Introduction.** A result of Shilov asserts that if $\mathfrak{A}$ is a unital separating algebra of $C(K)$ ($K$ a compact Hausdorff topological space), then there is a smallest closed subset $S \subset K$ such that every function of $\mathfrak{A}$ attains its norm at some point of $S$ [6, Theorem I.4.2]. Bishop [4] proved that if $K$ is metrizable, then, in fact, there is a minimal subset of $K$ satisfying the above condition for every separating algebra of $C(K)$. That subset is the set of peak points for $\mathfrak{A}$ (see definition below).
+
+Globevnik introduced the corresponding concepts for a subalgebra of $C_b(\Omega)$, the space of bounded continuous functions on a topological space $\Omega$ not necessarily compact [9]. In fact, he considered the case $\Omega = B_X$, where $X$ is a Banach space. If $\mathfrak{A}$ is a subspace of $C_b(\Omega)$, we will say that a subset $B \subset \Omega$ is a *boundary* for $\mathfrak{A}$ if
+
+$$ \|f\| = \sup_{b \in B} |f(b)|, \quad \forall f \in \mathfrak{A}. $$
+
+2000 Mathematics Subject Classification: Primary 46G20, 46J20; Secondary 46B99.
+Key words and phrases: holomorphic function, boundary, Shilov boundary, peak point, strong peak point.
+
+The first named author was supported in part by MEC project MTM2006-04837 and Junta de Andalucía "Proyecto de Excelencia" FQM-01438. The second named author was partially supported by CNPq (project no. 472416/2004-9).
+---PAGE_BREAK---
+
+If there is a closed boundary $\mathcal{B}$ that is contained in all closed boundaries for $\mathfrak{A}$, we will say that $\mathcal{B}$ is the *Shilov boundary* of $\mathfrak{A}$.
+
+If $X$ is a complex Banach space, we will denote by $\mathcal{A}_u(B_X)$ the space of uniformly continuous functions on the closed unit ball of $X$ which are holomorphic on the open unit ball. Globevnik [9] described the boundaries of $\mathcal{A}_u(B_{c_0})$. As a consequence of the description, he showed that this algebra has no Shilov boundary. Aron, Choi, Lourenço and Paques [3] gave examples of boundaries for $\mathcal{A}_u(B_{\ell_\infty})$ and proved that there is no Shilov boundary for this algebra. They also showed that the unit sphere of $\ell_1$ is the Shilov boundary for $\mathcal{A}_u(B_{\ell_1})$.
+
+Moraes and Romero [14] gave a characterization of the boundaries of $\mathcal{A}_u(B_{d_*(w,1)})$, where $d_*(w, 1)$ is the canonical predual of the Lorentz sequence space $d(w, 1)$ when $w = (1/n)$. Later Acosta, Moraes and Romero [2] generalized that characterization proving it for any space $d_*(w, 1)$ and obtained another one in terms of the strong peak sets of the unit ball. In this case, there is no Shilov boundary. Choi, García, Kim and Maestre [5] proved that there is no Shilov boundary for $\mathcal{A}_u(B_{C(K)})$, when $K$ is infinite and scattered. Acosta showed the same result for every infinite $K$ and also proved that for this space the set of extreme points of the unit ball of $C(K)$ is a boundary for $\mathcal{A}_u(B_{C(K)})$ (see [1]).
+
+Before going on it is convenient to recall some definitions. Let $\mathcal{A}$ be a function space on a metric space $\Omega$. An element $y \in \Omega$ is called a *peak point* for $\mathcal{A}$ if there is some $f \in \mathcal{A}$ such that $f(y) = 1$ and $|f(x)| < 1$ for all $x \in \Omega \setminus \{y\}$. In this case we say that $f$ *peaks* at $y$. An element $y \in \Omega$ is called a *strong peak point* for $\mathcal{A}$ if there is some $f \in \mathcal{A}$ satisfying $f(y) = 1$ and such that given any $\varepsilon > 0$ there is some $\delta > 0$ such that $\text{dist}(x, y) > \varepsilon$ implies that $|f(x)| < 1 - \delta$. It is clear that every closed boundary for $\mathcal{A}$ contains all the strong peak points.
+
+In this paper we prove that there is no Shilov boundary for $\mathcal{A}_u(B_X)$ when $X$ is the Schreier space or the space $K(\ell_p, \ell_q)$ ($1 \le p \le q < \infty$). For the spaces $X = C_1(H)$, the trace class operators on a complex Hilbert space $H$, or $X = d(w, 1)$, the Shilov boundary for $\mathcal{A}_u(B_X)$ exists. In fact, all the points in the unit sphere of $d(w, 1)$ are strong peak points for $\mathcal{A}_u(B_{d(w,1)})$, and so in this case the Shilov boundary is the unit sphere. For $\ell_1$ the same result holds. That fact was proved in [3] for the finitely supported sequences in the unit sphere. If $K$ is infinite, we also prove that there are no strong peak points for $\mathcal{A}_u(B_{C(K)})$. The set of peak points for $\mathcal{A}_u(B_{C(K)})$ is the set of extreme points of $B_{C(K)}$ if $K$ is separable.
+
+Throughout this paper, all the Banach spaces considered are complex. For a Banach space $X$, $B_X$ and $S_X$ will be the closed unit ball and the unit sphere of $X$, respectively. We will denote by $\mathcal{A}_\infty(B_X)$ the Banach space of all bounded and continuous functions on $B_X$ which are holomorphic on the
+---PAGE_BREAK---
+
+open unit ball, and by $\mathcal{A}_u(B_X)$ the space of all functions in $\mathcal{A}_\infty(B_X)$ which are uniformly continuous.
+
+**2. Existence of the Shilov boundary on the Lorentz sequence space.** Given a decreasing sequence $w$ of positive real numbers satisfying $w_1 = 1$ and $w \in c_0 \setminus \ell_1$, the complex Lorentz sequence space $d(w, 1)$ is given by
+
+$$d(w, 1) = \Bigl\{ x : \mathbb{N} \to \mathbb{C} : \sup \Bigl\{ \sum_{n=1}^{\infty} |x(\sigma(n))| w_n : \sigma : \mathbb{N} \to \mathbb{N} \text{ injective} \Bigr\} < \infty \Bigr\}.$$
+
+The norm is given by
+
+$$\|x\| = \sup\left\{\sum_{n=1}^{\infty} w_n |x(\sigma(n))| : \sigma : \mathbb{N} \to \mathbb{N} \text{ injective}\right\} \quad (x \in d(w, 1)).$$
+
+It is well known and easy to verify that the above supremum is attained for
+the decreasing rearrangement of $x$. The usual vector basis $(e_n)$ is a monotone
+Schauder basis (see [12]).
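As an illustration (with the weights $w_n = 1/n$ and helper names of our own), the following sketch checks on a finitely supported example that the supremum over injections is attained at the decreasing rearrangement:

```python
from itertools import permutations

def lorentz_norm(x, w):
    """d(w,1) norm of a finitely supported sequence x (truncated to len(w)):
    pair the decreasing rearrangement of |x| with the decreasing weights w."""
    mags = sorted((abs(t) for t in x), reverse=True)
    return sum(wk * m for wk, m in zip(w, mags))

w = [1 / n for n in range(1, 6)]   # w_n = 1/n: decreasing, w_1 = 1, not summable
x = [0.1, 0.9, 0.3]

# for finite support, the supremum over injections reduces to a maximum over
# permutations; the rearrangement inequality says it is attained at the
# decreasing rearrangement
brute = max(sum(wk * abs(v) for wk, v in zip(w, p)) for p in permutations(x))
assert abs(brute - lorentz_norm(x, w)) < 1e-12
```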
+
+A canonical predual $d_*(w, 1)$ of $d(w, 1)$ is given by
+
+$$d_*(w, 1) = \left\{ x \in c_0 : \lim_{n} \frac{\sum_{k=1}^{n} x^*(k)}{W_n} = 0 \right\}$$
+
+where $W_n = \sum_{k=1}^n w_k$ and $x^*$ is the decreasing rearrangement of $x$. This
+space is a Banach space endowed with the norm
+
+$$\|x\| = \sup_n \left\{ \frac{\sum_{k=1}^{n} x^*(k)}{W_n} \right\}$$
+
+(see [16] and [7]). $d_*(w, 1)$ has a Schauder basis whose sequence of biorthogonal functionals is, in fact, the canonical basis of $d(w, 1)$.
+
+We begin by presenting some useful lemmas.
+
+LEMMA 2.1. If $(z_n)$ is a bounded sequence of complex numbers such that the sequence $(1 + |z_n| - |1 + z_n|)$ converges to zero, then so does $(|z_n| - z_n)$.
+
+*Proof*. We consider the following identity for a complex number $z$:
+
+$$
+\begin{align*}
+(1 + |z| - |1 + z|)^2 &= 1 + |z|^2 + 2|z| + |1 + z|^2 - 2(1 + |z|) |1 + z| \\
+&= 2(\operatorname{Re}z - |z|) + 2(1 + |z|)(1 + |z| - |1 + z|).
+\end{align*}
+$$
+
+If we apply the above identity to the sequence $(z_n)$ and use the assumption,
+we find that the sequence $(|z_n| - \operatorname{Re} z_n)$ converges to zero.
+
+Now if we consider the expression
+
+$$
+\begin{align*}
+(|z| - \operatorname{Re} z)^2 &= 2(\operatorname{Re} z)^2 + (\operatorname{Im} z)^2 - 2|z| \operatorname{Re} z \\
+&= (\operatorname{Im} z)^2 + 2(\operatorname{Re} z - |z|) \operatorname{Re} z,
+\end{align*}
+$$
+---PAGE_BREAK---
+
+and we apply the identity to the sequence $(z_n)$, we deduce that $\operatorname{Im} z_n \to 0$. Hence
+
+$$|z_n| - z_n = |z_n| - \operatorname{Re} z_n - i \operatorname{Im} z_n \to 0. \blacksquare$$
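Both algebraic identities used in this proof can be spot-checked numerically. The following sketch (ours, purely illustrative) tests them on random complex numbers:

```python
import random

def check_identities(z):
    """The two identities from the proof of Lemma 2.1."""
    # (1 + |z| - |1+z|)^2 = 2(Re z - |z|) + 2(1 + |z|)(1 + |z| - |1+z|)
    lhs1 = (1 + abs(z) - abs(1 + z)) ** 2
    rhs1 = 2 * (z.real - abs(z)) + 2 * (1 + abs(z)) * (1 + abs(z) - abs(1 + z))
    # (|z| - Re z)^2 = (Im z)^2 + 2(Re z - |z|) Re z
    lhs2 = (abs(z) - z.real) ** 2
    rhs2 = z.imag ** 2 + 2 * (z.real - abs(z)) * z.real
    return abs(lhs1 - rhs1) < 1e-9 and abs(lhs2 - rhs2) < 1e-9

random.seed(0)
zs = [complex(random.uniform(-5, 5), random.uniform(-5, 5)) for _ in range(1000)]
assert all(check_identities(z) for z in zs)
```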
+
+**LEMMA 2.2** ([3, Lemma 9]). Let $0 < a < 1$. The real-valued function given by
+
+$$g_a(x) = \left(1 + \frac{x}{1-a}\right) \left(1 + \frac{1-x}{a}\right) \quad (x \in \mathbb{R})$$
+
+attains its maximum at $x = a$ and
+
+$$g_a(x) < g_a(a) = \frac{1}{a(1-a)}, \quad \forall x \in \mathbb{R} \setminus \{a\}.$$
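Lemma 2.2 is elementary calculus ($g_a$ is a concave quadratic with strict maximum at $x = a$); as a quick numerical sanity check (our own sketch):

```python
def g(a, x):
    """The function g_a of Lemma 2.2, defined for 0 < a < 1 and real x."""
    return (1 + x / (1 - a)) * (1 + (1 - x) / a)

for a in (0.1, 0.25, 0.5, 0.8):
    peak = 1 / (a * (1 - a))
    assert abs(g(a, a) - peak) < 1e-12          # maximum value attained at x = a
    for x in (-1.0, a - 0.3, a - 1e-3, a + 1e-3, a + 0.3, 2.0):
        assert g(a, x) < peak                   # strictly smaller elsewhere
```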
+
+**LEMMA 2.3.** *The set of peak points in $S_X$ for $\mathcal{A}_\infty(B_X)$ is invariant under surjective linear isometries on $X$. The same holds for the set of strong peak points in $S_X$.*
+
+By the maximum modulus theorem, every peak point for a subspace of $\mathcal{A}_\infty(B_X)$ belongs to $S_X$. As a consequence, so does every strong peak point. The following result shows the converse for the subspace of all polynomials on $d(w, 1)$.
+
+**THEOREM 2.4.** *The set of strong peak points for the space of polynomials of degree less than or equal to 2 on $d(w, 1)$ contains the unit sphere of $d(w, 1)$.*
+
+*Proof.* Let $y_0 \in S_{d(w,1)}$. By Lemma 2.3 we can assume that $\supp y_0$ is an interval of positive integers containing $\{1\}$ and
+
+$$ (1) \quad y_0(j) \in \mathbb{R}^+, \quad \forall j \in \supp y_0, \quad y_0(n) \ge y_0(n+1), \quad \forall n \in \mathbb{N}. $$
+
+We will prove that $y_0$ is a strong peak point for $\mathcal{A}_u(B_{d(w, 1)})$.
+
+If the support of $y_0$ contains just one element, then $y_0 = e_1$ and it is sufficient to consider the first-degree polynomial given by
+
+$$f(x) = 1 + x(1) \quad (x \in d(w, 1)).$$
+
+Clearly $\|f\| = 2 = f(y_0)$. By using the fact that on $S_{d(w,1)}$ the weak and $\sigma(d(w,1), d_*(w,1))$ topologies coincide ([16, Proposition 2.2] and [10, Corollary III.2.15]) and that every point of the unit sphere is a point of weak-norm continuity of the unit ball [13, Proposition 4], it is easily checked that $f$ strongly peaks in the unit ball at $y_0$.
+
+Now assume that $J := \supp y_0$ has at least two elements. Since $\|y_0\| = 1$, by (1), we know that $\sum_{i \in J} w_i y_0(i) = 1$ and so $0 < w_i y_0(i) < 1$ for every $i \in J$.
+
+For every $k \in J$ we define
+
+$$f_k(x) = \frac{1}{M_k} \left( 1 + \frac{w_k x(k)}{1 - w_k y_0(k)} \right) \left( 1 + \frac{1}{w_k y_0(k)} \sum_{j \in J \setminus \{k\}} w_j x(j) \right) \quad (x \in d(w, 1)),$$
+---PAGE_BREAK---
+
+where
+
+$$M_k = \frac{1}{w_k y_0(k) (1 - w_k y_0(k))}.$$
+
+Then $f_k$ is clearly a non-homogeneous polynomial on $d(w, 1)$ of degree 2 and $f_k(y_0) = 1$. We will check that $\|f_k\| = 1$.
+
+If $x \in B_{d(w,1)}$, then
+
+$$
+\begin{align*}
+(2) \quad |f_k(x)| &= \frac{1}{M_k} \left| 1 + \frac{w_k x(k)}{1 - w_k y_0(k)} \right| \left| 1 + \frac{1}{w_k y_0(k)} \sum_{j \in J \setminus \{k\}} w_j x(j) \right| \\
+&\le \frac{1}{M_k} \left( 1 + \frac{w_k |x(k)|}{1 - w_k y_0(k)} \right) \left( 1 + \frac{1}{w_k y_0(k)} \sum_{j \in J \setminus \{k\}} |w_j x(j)| \right) \\
+&\le \frac{1}{M_k} \left( 1 + \frac{w_k |x(k)|}{1 - w_k y_0(k)} \right) \left( 1 + \frac{1 - w_k |x(k)|}{w_k y_0(k)} \right) \quad (\text{since } x \in B_X) \\
+&\le \frac{1}{M_k} \left( 1 + \frac{w_k y_0(k)}{1 - w_k y_0(k)} \right) \left( 1 + \frac{1 - w_k y_0(k)}{w_k y_0(k)} \right) \quad (\text{by Lemma 2.2}) \\
+&= 1.
+\end{align*}
+$$
+
+Hence $\|f_k\| = 1$.
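As an illustration of the construction (not part of the proof), take the weights $w = (1, 1/2)$ truncated to a two-point support and $y_0 = (0.8, 0.4)$, so that $\|y_0\| = 1$. The sketch below, with helper names of our own, checks that $f_k(y_0) = 1$ and that $|f_k| \le 1$ on sampled points of the unit sphere:

```python
import random

def d_norm(x, w):
    """d(w,1) norm for finitely supported x: decreasing rearrangement of |x|
    paired with the decreasing weights w."""
    mags = sorted(map(abs, x), reverse=True)
    return sum(wk * m for wk, m in zip(w, mags))

w = [1.0, 0.5]          # first two Lorentz weights w_n = 1/n (assumed example)
y0 = [0.8, 0.4]         # ||y0|| = 1*0.8 + 0.5*0.4 = 1, coordinates decreasing
k = 0                   # Python index for the paper's k = 1
a = w[k] * y0[k]        # a = w_k y0(k) = 0.8
M = 1 / (a * (1 - a))   # the constant M_k from the text

def f(x):
    """The degree-2 polynomial f_k from the proof, for this concrete w, y0, k."""
    other = sum(w[j] * x[j] for j in range(len(x)) if j != k)
    return (1 + w[k] * x[k] / (1 - a)) * (1 + other / a) / M

assert abs(f(y0) - 1) < 1e-9    # f_k(y0) = 1

random.seed(1)
for _ in range(2000):
    x = [complex(random.uniform(-1, 1), random.uniform(-1, 1)) for _ in w]
    x = [t / d_norm(x, w) for t in x]   # normalise to the unit sphere
    assert abs(f(x)) <= 1 + 1e-9        # ||f_k|| = 1
```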
+
+Our intention is to show that $y_0$ is a strong peak point for the space of second-degree polynomials. To this end, we will prove that
+
+$$ (3) \quad x_n \in B_{d(w,1)}, \forall n, \quad |f_k(x_n)| \xrightarrow{n} 1 \Rightarrow x_n(k) \xrightarrow{n} y_0(k). $$
+
+For every fixed $k$, we write
+
+$$u_n = \frac{w_k x_n(k)}{1 - w_k y_0(k)}, \quad v_n = \sum_{\substack{j \in J \\ j \neq k}} \frac{w_j x_n(j)}{w_k y_0(k)}.$$
+
+We rewrite the inequality (2) in terms of the above sequences:
+
+$$|f_k(x_n)| = \frac{1}{M_k} |1 + u_n||1 + v_n| \leq \frac{1}{M_k} (1 + |u_n|)(1 + |v_n|) \leq 1.$$
+
+If we assume that $|f_k(x_n)| \to 1$ as $n \to \infty$, then the sequence $(1+v_n)$ has no subsequence converging to zero. From the above inequality we deduce that
+
+$$|1 + u_n| - 1 - |u_n| \to 0.$$
+
+Since $k$ is fixed, Lemma 2.1 implies that $(|u_n| - u_n)$ converges to zero, that is, $|x_n(k)| - x_n(k) \to 0$ as $n \to \infty$. Also by Lemma 2.2, we know that
+
+$$w_k|x_n(k)| \to w_k y_0(k) \quad \text{as } n \to \infty.$$
+
+Hence we deduce that $x_n(k) \to y_0(k)$ as $n \to \infty$.
+
+Now we choose a sequence $\alpha = (\alpha_n)$ in $\ell_1$ such that $\supp \alpha = J$, $\alpha_n > 0$ for all $n \in J$ and $\sum_{n \in J} \alpha_n = 1$. Define
+
+$$f(x) = \sum_{k \in J} \alpha_k f_k(x) \quad (x \in B_{d(w,1)}).$$
+
+Then $f$ is a polynomial of degree at most 2 in $d(w, 1)$ and $\|f\| \le 1 = f(y_0)$.
+
+We now prove that this function strongly peaks in the unit ball of $d(w, 1)$ at $y_0$. So assume that $|f(x_n)| \to 1$ for some sequence $(x_n)$ in the unit ball. Then clearly $|f_k(x_n)| \to 1$ as $n \to \infty$ for every $k \in J$.
+
+Since $y_0 \in S_{d(w,1)}$, by condition (3), we know that $(x_n)$ converges pointwise to $y_0$. All the elements involved in the argument are in the unit ball of $d(w, 1)$ and so $(x_n)$ converges to $y_0$ in the $\sigma(d(w, 1), d_*(w, 1))$-topology. Since $d_*(w, 1)$ is an M-ideal in its dual (see [16, Proposition 2.2] or [10, Examples III.1.4c]), in the unit ball of $d(w, 1)$, the weak and weak* topologies coincide on the unit sphere, in view of [10, Corollary III.2.15]. By applying this to the element $y_0$, which is the $w^*$-limit of $(x_n)$, we see that in fact $(x_n)$ converges weakly to $y_0$. Since all the points of the unit sphere of $d(w, 1)$ are points of weak-norm continuity [13, Proposition 4], we conclude that $(x_n)$ converges in norm to $y_0$ and $y_0$ is a strong peak point, as we wanted to show. $\blacksquare$
+
+**COROLLARY 2.5.** The Shilov boundary for the space of second-degree polynomials on $d(w, 1)$ is $S_{d(w,1)}$. Hence $S_{d(w,1)}$ is also the Shilov boundary for $\mathcal{A}_u(B_{d(w,1)})$ and $\mathcal{A}_\infty(B_{d(w,1)})$.
+
+It is known that all the finitely supported elements in $S_{\ell_1}$ are strong peak points for the space of second-degree polynomials on $\ell_1$ [3, Theorem 10]. We now extend that result.
+
+**THEOREM 2.6.** $S_{\ell_1}$ is the set of strong peak points for the space of second-degree polynomials on $\ell_1$.
+
+*Proof.* If $y_0 \in S_{\ell_1}$, then, by Lemma 2.3, we can assume that $y_0(n) \ge 0$ for every $n$. If $|\supp y_0| = 1$ and $\{n\} = \supp y_0$, the function $x \mapsto 1+x(n)$ strongly peaks in the unit ball of $\ell_1$ at $y_0$. Otherwise, if $J := \supp y_0$ satisfies $|J| \ge 2$, then the second-degree polynomial given by
+
+$$f_k(x) := y_0(k)(1-y_0(k)) \left(1 + \frac{x(k)}{1-y_0(k)}\right) \left(1 + \frac{\sum_{i \ne k} x(i)}{y_0(k)}\right) \quad (x \in \ell_1)$$
+
+satisfies $f_k(y_0) = 1$. In view of Lemma 2.2, also $\|f_k\| = 1$ and now we can follow the argument in the proof of Theorem 2.4. $\blacksquare$
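As a check on the coefficient $y_0(k)(1-y_0(k))$, which is the analogue of $1/M_k$ from Theorem 2.4 with $w \equiv 1$, one can repeat the previous numerical experiment in $\ell_1$. The data below are a hypothetical finite-dimensional sample of our own.

```python
import random

y0 = [0.5, 0.3, 0.2]                          # y0 >= 0 with sum 1: a point of the l1 sphere
J = range(len(y0))

def f_k(x, k):
    a = y0[k]
    tail = sum(x[i] for i in J if i != k)     # the sum over i != k
    return a * (1 - a) * (1 + x[k] / (1 - a)) * (1 + tail / a)

for k in J:
    assert abs(f_k(y0, k) - 1) < 1e-12        # f_k(y0) = 1
    for _ in range(2000):                     # random points of the l1 ball
        x = [random.uniform(-1, 1) for _ in y0]
        s = random.random() / sum(abs(t) for t in x)
        x = [t * s for t in x]
        assert abs(f_k(x, k)) <= 1 + 1e-12    # ||f_k|| = 1 is not exceeded
```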
+
+**3. Boundaries for the Schreier space and C(K).** A subset $E = \{n_1 < \dots < n_k\}$ of the natural numbers $\mathbb{N}$ is said to be *admissible* if $k \le n_1$. The Schreier space $\mathcal{S}$ is the completion of the space $c_{00}$ of all scalar sequences
+
+of finite support with respect to the norm $\|x\| = \sup \sum_{j \in E} |x_j|$, where the supremum is taken over all admissible sets $E$ of natural numbers.
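For a finitely supported sequence the supremum runs over finitely many relevant admissible sets, so the Schreier norm can be computed directly. The sketch below is our own (the function name is hypothetical); it uses the observation that an optimal admissible set with all elements $\ge m$ consists of the $m$ largest coordinates among positions $\ge m$, since such a set has size at most $m \le \min E$ and is therefore admissible.

```python
def schreier_norm(x):
    """Schreier norm of a finitely supported sequence (positions are 1-indexed)."""
    best = 0.0
    for m in range(1, len(x) + 1):            # m = smallest index allowed in E
        tail = sorted((abs(t) for t in x[m - 1:]), reverse=True)
        best = max(best, sum(tail[:m]))       # |E| <= m and min E >= m: admissible
    return best

assert schreier_norm([1]) == 1                # basis vectors are normalised
assert schreier_norm([0, 1, 1]) == 2          # E = {2, 3} is admissible
# the vector y = sum_{j=n+1}^{2n} (1/n) e_j used in Theorem 3.1 has norm 1 (n = 4):
n = 4
y = [0.0] * n + [1.0 / n] * n
assert abs(schreier_norm(y) - 1.0) < 1e-12
```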
+
+The following theorem shows in particular that the intersection of all boundaries for $\mathcal{A}_\infty(B_S)$ is empty.
+
+**THEOREM 3.1.** Let $S$ be the Schreier space and $B$ be a boundary for $\mathcal{A}_\infty(B_S)$. If $x_0 \in B$ and $0 < r < 1$, then $B \setminus (x_0 + rB_S)$ is a boundary for $\mathcal{A}_\infty(B_S)$. As a consequence, there is no Shilov boundary for $\mathcal{A}_\infty(B_S)$.
+
+*Proof.* Assume that $h \in \mathcal{A}_\infty(B_S)$. For every $0 < \varepsilon < (1-r)/2$, there is $y_0 \in c_{00}$ such that $\|y_0\| < 1$ and
+
+$$|h(y_0)| > \|h\| - \varepsilon.$$
+
+We write $k = \max\text{supp } y_0$ and denote by $(P_m)$ the sequence of canonical projections associated to the usual basis of $S$. Choose a positive integer $n$ such that $n > k/(1 - \|y_0\|)$ and $\|(I - P_n)(x_0)\| < \varepsilon$. We will check that $y_0 + \lambda y \in B_S$ for every $\lambda \in \mathbb{C}$ with $|\lambda| = 1$ and $y = \sum_{j=n+1}^{2n} (1/n)e_j$.
+
+Let $A = E \cup F$ be an admissible set such that $E \subset \{1, \dots, k\}$ and $\min F > k$. If $E \neq \emptyset$, then $|E| + |F| \leq k$ and
+
+$$\sum_{i \in E \cup F} |(y_0 + \lambda y)(i)| \leq \sum_{i \in E} |y_0(i)| + \sum_{i \in F} |y(i)| \leq \|y_0\| + \frac{k}{n} \leq 1.$$
+
+If $E = \emptyset$, then $\sum_{i \in F} |(y_0 + \lambda y)(i)| = \sum_{i \in F} |y(i)| \leq 1$. So $\|y_0 + \lambda y\| \leq 1$.
+
+By the maximum modulus theorem, there is $\lambda_0 \in \mathbb{C}$ with $|\lambda_0| = 1$ such that
+
+$$|h(y_0 + \lambda_0 y)| \geq |h(y_0)| > \|h\| - \varepsilon.$$
+
+Fix $\lambda_1 \in \mathbb{C}$ satisfying $|\lambda_1| = 1$ and
+
+$$|h(y_0 + \lambda_0 y) + \lambda_1| = |h(y_0 + \lambda_0 y)| + 1.$$
+
+Since $\|y\| = 1$ and $P_n(y) = 0$, there is $y^* \in S_{S^*}$ such that $y^*(\lambda_0 y) = 1$, $y^*(e_j) = 0$ for all $j \leq n$ and so $y^*(y_0) = 0$. Now we define a holomorphic function $g$ by
+
+$$g(x) := h(x) + \lambda_1 y^*(x) \quad (x \in B_S).$$
+
+Clearly $g \in \mathcal{A}_\infty(B_S)$ and
+
+$$
+\begin{align*}
+\|h\| - \varepsilon + 1 &< |h(y_0)| + 1 \leq |h(y_0 + \lambda_0 y)| + y^*(\lambda_0 y) \\
+&= |g(y_0 + \lambda_0 y)| \leq \|g\| \leq \|h\| + 1.
+\end{align*}
+$$
+
+Since $B$ is a boundary there is $z_0 \in B$ such that
+
+$$|g(z_0)| > \|h\| - \varepsilon + 1.$$
+
+On the other hand,
+
+$$|g(z_0)| \leq |h(z_0)| + |y^*(z_0)| \leq \|h\| + |y^*(z_0)| \leq \|h\| + 1.$$
+
+This implies $|y^*(z_0)| > 1 - \varepsilon$. Hence
+
+$$
+\|(I - P_n)(z_0)\| \ge |y^*(z_0)| > 1 - \varepsilon.
+$$
+
+Consequently,
+
+$$
+\begin{align*}
+\|z_0 - x_0\| &\ge \| (I - P_n)(z_0 - x_0) \| \\
+&\ge \| (I - P_n)(z_0) \| - \| (I - P_n)x_0 \| \ge 1 - 2\varepsilon > r.
+\end{align*}
+$$
+
+Also $|h(z_0)| + 1 \ge |g(z_0)| > \|h\| + 1 - \varepsilon$, and hence $|h(z_0)| > \|h\| - \varepsilon$. Therefore
+$z_0 \in B \setminus (x_0 + rB_S)$ and this set is a boundary for $\mathcal{A}_\infty(B_S)$. As a consequence,
+the Shilov boundary of this space does not exist. $\blacksquare$
+
+We recall that a point $x \in B_X$ is a $\mathbb{C}$-extreme point of the unit ball if
+
+$$
+(y \in X, \|x + \lambda y\| \le 1, \forall \lambda \in \mathbb{C}, |\lambda| = 1) \Rightarrow y = 0.
+$$
+
+**THEOREM 3.2.** If $K$ is any infinite compact Hausdorff topological space, then there are no strong peak points for $\mathcal{A}_\infty(B_{C(K)})$. If $K$ is separable, then all the extreme points in $B_{C(K)}$ are peak points for the space of first-degree polynomials on $C(K)$.
+
+*Proof.* It is known that every peak point is a $\mathbb{C}$-extreme point [8, Theorem 4]. So we will prove that $\mathbb{C}$-extreme points of $B_{C(K)}$ are not strong peak points. Assume that $x_0 \in S_{C(K)}$ is a $\mathbb{C}$-extreme point of the unit ball. Since $K$ is infinite, there is a sequence $(x_n) \subset C(K)$ satisfying
+
+$$
+0 \le x_n \le 1, \|x_n\| = 1, \forall n, \quad \text{supp } x_n \cap \text{supp } x_m = \emptyset, \forall n \ne m.
+$$
+
+Assume that $h \in B_{\mathcal{A}_{\infty}(B_{C(K)})}$ with $h(x_0) = 1$. Since $(x_n)$ is equivalent to the $c_0$-basis, it converges weakly to zero. Then the sequence $(x_0(1-x_n))$ is in the unit ball of $C(K)$ and converges weakly to $x_0$. Since $C(K)$ has the Dunford-Pettis property, it also has the polynomial Dunford-Pettis property [15], and so the argument in the proof of [1, Proposition 4.1] shows that
+
+$$
+h(x_0(1 - x_n)) \to 1.
+$$
+
+Since the $x_n$ are non-negative elements of the unit sphere, for every $n$ there is
+$t_n \in K$ such that $x_n(t_n) = 1$, and so
+
+$$
+\|x_0(1 - x_n) - x_0\| \ge \|x_0x_n\| \ge |x_0(t_n)x_n(t_n)| = 1.
+$$
+
+Hence $x_0$ is not a strong peak point for $\mathcal{A}_\infty(B_{C(K)})$.
+
+If $K$ is separable and $\{t_n : n \in \mathbb{N}\}$ is a dense set in $K$, we will prove
+that the function $u$ such that $u(K) = \{1\}$ is a peak point for the space
+of first-degree polynomials. In view of Lemma 2.3, this proves the stated
+assertion.
+
+Define
+
+$$
+f(x) := \sum_{n=1}^{\infty} \alpha_n (1 + x(t_n)) \quad (x \in C(K)),
+$$
+
+where $(\alpha_n) \subset S_{\ell_1}$ with $\alpha_n > 0$ for every $n$. Then $f$ is clearly a first-degree polynomial on $C(K)$ and $f(u) = \|f\| = 2$. If $x \in B_{C(K)}$ and $|f(x)| = 2$, then $|1 + x(t_n)| = 2$ for every $n$ and so $x(t_n) = 1$ for all $n$, that is, $x = u$. ■
+
+Since $\ell_\infty$ has a countable subset of functionals that separate points and attain the norm at the same element of the unit ball, we can also obtain:
+
+**COROLLARY 3.3 ([3]).** *All the extreme points in $B_{\ell_\infty}$ are peak points for the space of first-degree polynomials on $\ell_\infty$.*
+
+**4. Shilov boundary on the trace class operators.** Let $H$ be a complex Hilbert space. An operator $T: H \to H$ is called a trace class operator if there are orthonormal sequences $(e_n)$ and $(f_n)$ in $H$ such that $T(x) = \sum_{n=1}^\infty \lambda_n \langle x, e_n \rangle f_n$ for every $x \in H$ and the sequence $(\lambda_n)$ is in $\ell_1$. In that case, the norm of $T$ is given by $\|T\| = \sum_{n=1}^\infty |\lambda_n|$. We denote by $C_1(H)$ the Banach space of all trace class operators on $H$.
+
+**THEOREM 4.1.** If $H$ is a complex Hilbert space, then the Shilov boundaries for $\mathcal{A}_u(B_{C_1(H)})$ and $\mathcal{A}_\infty(B_{C_1(H)})$ both exist and coincide.
+
+*Proof.* Assume that $\{e_i : i \in I\}$ is an orthonormal basis of $H$ and $F \subset I$ is any subset. Then the operator $\Pi_F$ given by
+
+$$ \Pi_F(T) := P_F T P_F \quad (T \in C_1(H)), $$
+
+where $P_F(x) = \sum_{i \in F} x(i)e_i \quad (x \in H)$, is a norm one projection on $C_1(H)$. Since $\text{Lin}\{e_i \otimes e_j : i, j \in I\}$ is dense in $C_1(H)$, for every $h \in \mathcal{A}_\infty(B_{C_1(H)})$ we have
+
+$$ \|h\| = \sup_{\substack{F \subset I \\ F \text{ finite}}} \|h \circ \Pi_F\|. $$
+
+For every complex finite-dimensional space $Y$, the subset of peak points of $B_Y$ is a boundary for $\mathcal{A}_u(B_Y)$ [4, Theorem 1]. We will prove that, for every finite subset $F \subset I$, every peak point of the unit ball of $\Pi_F(C_1(H))$ for the space of continuous functions on that unit ball which are holomorphic on its interior is a strong peak point for $\mathcal{A}_u(B_{C_1(H)})$.
+
+Let $T_0 \in S_{C_1(H)} \cap \Pi_F(C_1(H))$ be a peak point. Then there is a continuous function $g$ on the unit ball of $\Pi_F(C_1(H))$, which is holomorphic on the open unit ball and satisfies
+
+$$ g(T_0) = \|g\| = 1 \quad \text{and} \quad |g(T)| < 1, \forall T \in (B_{C_1(H)} \cap \Pi_F(C_1(H))) \setminus \{T_0\}. $$
+
+Now we extend $g$ to $B_{C_1(H)}$ by
+
+$$ \tilde{g}(T) = g(\Pi_F(T)) \quad (T \in B_{C_1(H)}). $$
+
+Clearly $\tilde{g} \in \mathcal{A}_u(B_{C_1(H)})$, $\|\tilde{g}\| \le \|g\| = 1$ and $\tilde{g}(T_0) = 1$. Assume that $(T_n) \subset B_{C_1(H)}$ with $\tilde{g}(T_n) \to 1$, that is, $\lvert g(\Pi_F(T_n)) \rvert \to 1$. Since $\Pi_F(C_1(H))$
+
+is a finite-dimensional space and $T_0$ is a peak point, we have $\Pi_F(T_n) \to T_0$. Since $\|T_0\| = 1$, it follows that $\|\Pi_F(T_n)\| \to 1$. By using [11, Proposition 2.2], we have
+
+$$
+\begin{align*}
+& \|P_F T_n P_F\|^2 + \|P_F T_n (I - P_F)\|^2 + \| (I - P_F) T_n P_F \|^2 \\
+& \qquad + \| (I - P_F) T_n (I - P_F) \|^2 \\
+& \le \|T_n\|^2 \le 1,
+\end{align*}
+$$
+
+and so $\|\Pi_F(T_n) - T_n\| = \|P_F T_n P_F - T_n\| \to 0$, because $\|P_F T_n P_F\| = \|\Pi_F(T_n)\| \to 1$ forces the three remaining terms on the left-hand side to tend to zero. Since $(\Pi_F(T_n))$ converges to $T_0$, so does $(T_n)$, and $T_0$ is a strong peak point, as we wanted to show. Since the strong peak points are contained in any closed boundary and in this case the set of strong peak points is a boundary for $\mathcal{A}_u(B_{C_1(H)})$, the Shilov boundary for this space is the closure of the set of strong peak points of $\mathcal{A}_u(B_{C_1(H)})$. The same argument works for $\mathcal{A}_\infty(B_{C_1(H)})$. ■
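The corner estimate quoted from [11, Proposition 2.2] can be probed numerically in finite dimensions. The sketch below is our own illustration (the dimensions and names are hypothetical): it computes trace norms via singular values and checks the inequality for random real matrices and a coordinate projection $P_F$.

```python
import numpy as np

def trace_norm(T):
    """C_1 norm of a matrix: the sum of its singular values."""
    return float(np.linalg.svd(T, compute_uv=False).sum())

rng = np.random.default_rng(0)
d, k = 8, 3                                   # ambient dimension, |F|
P = np.diag([1.0] * k + [0.0] * (d - k))      # coordinate projection P_F
Q = np.eye(d) - P                             # complementary projection I - P_F

for _ in range(200):
    T = rng.standard_normal((d, d))
    corners = (trace_norm(P @ T @ P) ** 2 + trace_norm(P @ T @ Q) ** 2
               + trace_norm(Q @ T @ P) ** 2 + trace_norm(Q @ T @ Q) ** 2)
    assert corners <= trace_norm(T) ** 2 + 1e-9   # [11, Proposition 2.2]
```

In the proof above, once $\|P_F T_n P_F\| \to 1$ while $\|T_n\| \le 1$, this inequality forces the three off-corner trace norms to vanish, which is what gives $\|\Pi_F(T_n) - T_n\| \to 0$.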
+
+**5. Boundaries for $K(\ell_p, \ell_q)$.** We now study the properties of the boundaries for $\mathcal{A}_\infty(B_X)$, where $X = K(\ell_p, \ell_q)$ is the space of all compact operators from $\ell_p$ to $\ell_q$ for $1 \le p \le q < \infty$.
+
+**THEOREM 5.1.** If $1 \le p \le q < \infty$, then there is no Shilov boundary for $\mathcal{A}_\infty(B_{K(\ell_p,\ell_q)})$. In fact, if $B$ is a boundary for $\mathcal{A}_\infty(B_{K(\ell_p,\ell_q)})$, $0 < r < 1$ and $S_0 \in B$, then $B\setminus(S_0+rB_{K(\ell_p,\ell_q)})$ is also a boundary for $\mathcal{A}_\infty(B_{K(\ell_p,\ell_q)})$. There are closed boundaries $A$, $B$ for $\mathcal{A}_\infty(B_{K(\ell_p,\ell_q)})$ such that $\text{dist}(A, B) \ge 1$. The same assertions hold for $\mathcal{A}_u(B_{K(\ell_p,\ell_q)})$.
+
+*Proof.* We denote by $(P_n)$ and $(Q_n)$ the sequences of canonical projections associated to the usual bases of $\ell_p$ and $\ell_q$, respectively.
+
+Assume that $B \subset B_{K(\ell_p, \ell_q)}$ is a boundary for $\mathcal{A}_\infty(B_{K(\ell_p, \ell_q)})$, $0 < r < 1$ and $S_0 \in B$. If $h \in \mathcal{A}_\infty(B_{K(\ell_p, \ell_q)})$ and $0 < \varepsilon < (1-r)/3$, then there are $N \in \mathbb{N}$ and $F \in B_{K(\ell_p, \ell_q)}$ which satisfy $Q_N F P_N = F$ and
+
+$$
+|h(F)| > \|h\| - \varepsilon.
+$$
+
+Since $S_0$ is a compact operator, there exists $n > N$ with
+
+$$
+\|(I - Q_n)S_0(I - P_n)\| < \varepsilon.
+$$
+
+Choose $R \in S_{K(\ell_p, \ell_q)}$ such that
+
+$$
+(I - Q_n) R (I - P_n) = R,
+$$
+
+and $x_0 \in S_{\ell_p}$ satisfying $P_n x_0 = 0$ and $\|R(x_0)\| = 1$. Then there exists $y^* \in S_{\ell_q^*}$ with $Q_n^*(y^*) = 0$ and $y^*(R(x_0)) = 1$. Notice that $\|F + \lambda R\| \le 1$ for every complex number $\lambda$ with $|\lambda| = 1$. By the maximum modulus theorem, there is $\lambda_0 \in \mathbb{C}$ such that $|\lambda_0| = 1$ and
+
+$$
+|h(F)| \le |h(F + \lambda_0 R)| \le \sup_{|\lambda|=1} |h(F + \lambda R)|.
+$$
+
+If $\lambda_1 \in \mathbb{C}$ is a modulus one scalar satisfying
+
+$$
+|h(F + \lambda_0 R) + \lambda_1 y^*(\lambda_0 R(x_0))| = |h(F + \lambda_0 R)| + 1,
+$$
+
+we define a holomorphic function $g$ by
+
+$$g(T) := h(T) + \lambda_1 y^*(Tx_0) \quad (T \in B_{K(\ell_p, \ell_q)}).$$
+
+Clearly $g \in \mathcal{A}_\infty(B_{K(\ell_p, \ell_q)})$ and
+
+$$
+\begin{align*}
+\|g\| &\ge |g(F + \lambda_0 R)| = |h(F + \lambda_0 R) + \lambda_1 y^*(\lambda_0 R x_0)| \\
+&= |h(F + \lambda_0 R)| + 1 \ge |h(F)| + |y^*(Rx_0)| > \|h\| - \varepsilon + 1.
+\end{align*}
+$$
+
+Since $B$ is a boundary for $\mathcal{A}_\infty(B_{K(\ell_p, \ell_q)})$, there is $S \in B$ such that $|g(S)| > \|g\| - \varepsilon$. Hence
+
+$$ (4) \qquad \|h\| - 2\varepsilon + 1 \le \|g\| - \varepsilon < |g(S)| \le |h(S)| + |y^*(Sx_0)|, $$
+
+and so
+
+$$ |y^*(Sx_0)| \ge 1 - 2\varepsilon. $$
+
+By the choice of $x_0$ and $y^*$,
+
+$$ \| (I - Q_n) S (I - P_n) \| \ge |y^*(I - Q_n) S (I - P_n) x_0| = |y^*(Sx_0)| \ge 1 - 2\varepsilon. $$
+
+Finally, we deduce that
+
+$$
+\begin{align*}
+\|S - S_0\| &\ge \| (I - Q_n)(S - S_0)(I - P_n) \| \\
+&\ge \| (I - Q_n)S(I - P_n)\| - \| (I - Q_n)S_0(I - P_n)\| \ge 1 - 3\varepsilon > r.
+\end{align*}
+$$
+
+From inequality (4), we also obtain
+
+$$ |h(S)| \ge \|h\| - 2\varepsilon. $$
+
+We have just proved that $B\setminus(S_0+rB_{K(\ell_p, \ell_q)})$ is a boundary for $\mathcal{A}_\infty(B_{K(\ell_p, \ell_q)})$.
+
+As a consequence, the Shilov boundary of this space does not exist.
+
+Now we give a procedure to construct boundaries for $\mathcal{A}_\infty(B_{K(\ell_p, \ell_q)})$. Since $\text{Lin}\{x \otimes y : x \in (\ell_p)^*, y \in \ell_q, \text{supp } x, \text{supp } y \text{ are finite}\}$ is dense in $K(\ell_p, \ell_q)$, for every $h \in \mathcal{A}_\infty(B_{K(\ell_p, \ell_q)})$ we have
+
+$$ \|h\| = \sup\{\|h_F\| : F \subset \mathbb{N} \text{ finite}\}, $$
+
+where $h_F(T) := h(Q_F T P_F)$ for $T \in K(\ell_p, \ell_q)$ and $P_F, Q_F$ are the projections given by
+
+$$ P_F(x) = \sum_{n \in F} x(n)e_n \quad (x \in \ell_p), \quad Q_F(x) = \sum_{n \in F} x(n)e_n \quad (x \in \ell_q). $$
+
+Note also that $\|h_F\| \le \|h_G\|$ for $F \subset G$.
+
+Assume that $(F_n)$ is an increasing sequence of finite subsets of $\mathbb{N}$ such that $G_n := F_{n+1} \setminus F_n$ is non-empty and $\bigcup_n F_n = \mathbb{N}$. We consider the subsets $A_n$ whose elements are those operators $T \in B_{K(\ell_p, \ell_q)}$ that admit a decomposition $T = R + S$ satisfying
+
+$$ \|R\| = \|S\| = 1, \quad R = Q_{F_n} R P_{F_n}, \quad Q_{F_n} S P_{F_n} = 0, \quad Q_{G_n} S P_{G_n} = S. $$
+
+Note that $A_n$ is closed for every $n$.
+
+We now check that $B = \bigcup_n A_n$ is a closed boundary for $\mathcal{A}_\infty(B_{K(\ell_p, \ell_q)})$. Given $h \in \mathcal{A}_\infty(B_{K(\ell_p, \ell_q)})$ and $\varepsilon > 0$, there is some finite subset $F \subset \mathbb{N}$ such that $\|h_F\| > \|h\| - \varepsilon$. Since $\bigcup_n F_n = \mathbb{N}$ and $F$ is finite, there is $m$ with $F \subset F_m$, and then also $\|h_{F_m}\| > \|h\| - \varepsilon$. Hence there is an operator $R \in S_{K(\ell_p, \ell_q)}$ with $Q_{F_m}RP_{F_m} = R$ at which $h_{F_m}$ attains its norm, and so
+
+$$|h(R)| \geq \|h\| - \varepsilon.$$
+
+If $S \in S_{K(\ell_p, \ell_q)}$ satisfies $Q_{F_m}SP_{F_m} = 0$ and $Q_{G_m}SP_{G_m} = S$, then the operator $R + \lambda S$ is in the unit ball of $K(\ell_p, \ell_q)$ for every complex number $\lambda$ in the unit disk. The maximum modulus theorem applied to the function $\lambda \mapsto h(R + \lambda S)$ defined on the complex unit disk shows that there is a complex number $\lambda_0$ with $|\lambda_0| = 1$ and such that
+
+$$|h(R + \lambda_0 S)| \geq |h(R)| \geq \|h\| - \varepsilon.$$
+
+Since $R + \lambda_0 S \in A_m$, $B$ is a boundary for $\mathcal{A}_\infty(B_{K(\ell_p, \ell_q)})$.
+
+Note that for two positive integers $n < m$, if $T_n \in A_n$ and $T_m \in A_m$, then
+
+$$ (5) \quad \|T_m - T_n\| \ge \|Q_{G_m}(T_m - T_n)P_{G_m}\| = \|Q_{G_m}T_m P_{G_m}\| = 1. $$
+
+Since every $A_n$ is closed, the above inequality shows that $B$ is also closed.
+
+By the same argument, $\bigcup_n A_{2n}$ and $\bigcup_n A_{2n-1}$ are also closed boundaries for $\mathcal{A}_\infty(B_{K(\ell_p, \ell_q)})$. In view of (5), the distance between them is at least 1. ■
+
+**Acknowledgements.** This note was prepared while the first named author visited the Universidade de São Paulo and the second one visited the Universidad de Granada. The authors thank both institutions for the support received and the hospitality. They are also grateful to the referee for his/her careful reading.
+
+References
+
+[1] M. D. Acosta, *Boundaries for spaces of holomorphic functions on C(K)*, Publ. Res. Inst. Math. Sci. 42 (2006), 27–44.
+
+[2] M. D. Acosta, L. A. Moraes and L. Romero Grados, *On boundaries on the predual of the Lorentz sequence space*, preprint.
+
+[3] R. M. Aron, Y. S. Choi, M. L. Lourenço and O. W. Paques, *Boundaries for algebras of analytic functions on infinite dimensional Banach spaces*, in: Banach Spaces, B. L. Lin and W. B. Johnson (eds.), Contemp. Math. 144, Amer. Math. Soc., 1993, 15–22.
+
+[4] E. Bishop, *A minimal boundary for function algebras*, Pacific J. Math. 9 (1959), 629–642.
+
+[5] Y. S. Choi, D. García, S. G. Kim and M. Maestre, *Norm or numerical radius attaining polynomials on C(K)*, J. Math. Anal. Appl. 295 (2004), 80–96.
+
+[6] T. W. Gamelin, *Uniform Algebras*, Chelsea, New York, 1984.
+
+[7] D. J. H. Garling, *On symmetric sequence spaces*, Proc. London Math. Soc. 16 (1966), 85–106.
+
+[8] J. Globevnik, *On interpolation by analytic maps in infinite dimensions*, Math. Proc. Cambridge Philos. Soc. 83 (1978), 243–252.
+
+[9] —, *Boundaries for polydisc algebras in infinite dimensions*, ibid. 85 (1979), 291–303.
+
+[10] P. Harmand, D. Werner and W. Werner, *M-Ideals in Banach Spaces and Banach Algebras*, Lecture Notes in Math. 1547, Springer, Berlin, 1993.
+
+[11] C. Lennard, *$C_1$ is uniformly Kadec-Klee*, Proc. Amer. Math. Soc. 109 (1990), 71–77.
+
+[12] J. Lindenstrauss and L. Tzafriri, *Classical Banach Spaces I, II*, Springer Monographs in Math., Springer, London, 1991.
+
+[13] A. Moltó, V. Montesinos and S. Troyanski, *On quasi-denting points, denting faces and the geometry of the unit ball of d(w, 1)*, Arch. Math. (Basel) 63 (1994), 45–55.
+
+[14] L. A. Moraes and L. Romero Grados, *Boundaries for algebras of holomorphic functions*, J. Math. Anal. Appl. 281 (2003), 575–586.
+
+[15] R. A. Ryan, *Dunford-Pettis properties*, Bull. Acad. Polon. Sci. Sér. Sci. Math. 27 (1979), 373–379.
+
+[16] D. Werner, *New classes of Banach spaces which are M-ideals in their biduals*, Math. Proc. Cambridge Philos. Soc. 111 (1992), 337–354.
+
+Departamento de Análisis Matemático
+Facultad de Ciencias
+Universidad de Granada
+18071 Granada, Spain
+E-mail: dacosta@ugr.es
+
+Departamento de Matemática e Estatística
+Universidade de São Paulo
+CP 66281, CEP 05311-970
+São Paulo, Brazil
+E-mail: mllouren@ime.usp.br
+
+Received December 15, 2005
+
+Revised version November 19, 2006
+
+(5829)